https://arxiv.org/abs/1205.5306
Polytopes of Minimum Positive Semidefinite Rank
The positive semidefinite (psd) rank of a polytope is the smallest $k$ for which the cone of $k \times k$ real symmetric psd matrices admits an affine slice that projects onto the polytope. In this paper we show that the psd rank of a polytope is at least the dimension of the polytope plus one, and we characterize those polytopes whose psd rank equals this lower bound. We give several classes of polytopes that achieve the minimum possible psd rank including a complete characterization in dimensions two and three.
\section{Introduction} Efficient representations of polytopes are of fundamental importance in contexts such as linear optimization where the complexity of many algorithms depends on the size of the representation. A standard idea to find a compact description of a complicated polytope $P \subset \mathbb{R}^n$ is to look for a simpler convex set of higher dimension that has $P$ as a linear image of it. Affine slices of closed convex cones offer a rich source of convex sets and the following definition was introduced in \cite{GPT2}. \begin{definition} \label{def:K-lift} Let $P \subset \mathbb{R}^n$ be a polytope. If $K \subset \mathbb{R}^m$ is a closed convex cone, $L$ an affine space in $\mathbb{R}^m$, and $\pi \,:\, \mathbb{R}^m \rightarrow \mathbb{R}^n$ a linear map such that $P = \pi(K \cap L)$, then we say that $K \cap L$ is a $K$-{\em lift} of $P$. \end{definition} If linear optimization over affine slices of $K$ admits efficient algorithms, then often, linear optimization over $P$ can be done rapidly as well. Well studied cones in this context are nonnegative orthants and the cones of real symmetric positive semidefinite (psd) matrices. We will denote the $m$-dimensional nonnegative orthant by $\mathbb{R}^m_+$ and the cone of $m \times m$ psd matrices by $\mathcal{S}_+^m$. Affine slices of $\mathbb{R}^m_+$ are polyhedra over which linear optimization can be done efficiently via {\em linear programming}. Affine slices of $\mathcal{S}_+^m$ are called {\em spectrahedra}, and linear optimization over them can be done efficiently via {\em semidefinite programming}. Recall that $\mathbb{R}^m_+$ embeds into $\mathcal{S}_+^m$ via diagonal matrices and hence, polyhedra are special cases of spectrahedra, and semidefinite programming generalizes linear programming. There are many families of polytopes in $\mathbb{R}^n$ with exponentially many facets (in $n$) that admit small (polynomial in $n$) polyhedral or spectrahedral lifts. Examples are the {\em parity} and {\em spanning tree polytopes} \cite{Yannakakis}, the {\em permutahedron} \cite{Goemans} and the {\em stable set polytope} of a {\em perfect graph} \cite{LovaszSchrijver91}. When the lifts come from families of cones such as $\{\mathbb{R}^m_+\}$ or $\{ \mathcal{S}_+^m \}$, it is useful to determine the smallest cone in the family that admits a lift of the polytope. This allows the notion of {\em cone rank} of a polytope with respect to a family of cones \cite{GPT2}. We recall the definitions needed in this paper. \begin{definition} \label{def:ranks} \cite{GPT2} \begin{enumerate} \item The {\em nonnegative rank} of a polytope $P \subset \mathbb{R}^n$, denoted as $\textup{rank}_+\, P$, is the smallest $k$ such that $P$ has an $\mathbb{R}^k_+$-lift. \item The {\em positive semidefinite rank} of a polytope $P \subset \mathbb{R}^n$, denoted as $\textup{rank}_{\textup{psd}}\, P$, is the smallest $k$ such that $P$ has an $\mathcal{S}_+^k$-lift. \end{enumerate} \end{definition} To describe our results, we need the following further definitions. \begin{definition} \label{def:slack matrix} \cite{Yannakakis} Let $P$ be a full-dimensional polytope in $\mathbb{R}^n$ with vertex set $\{p_1,\ldots,p_v\}$ and an irredundant (facet) inequality representation \[ P=\left\{ x \in \mathbb{R}^n : \beta_1 - \langle a_1, x \rangle \geq 0 , \ldots, \beta_f - \langle a_f, x \rangle \geq 0 \right\} \] where $\beta_j \in \mathbb{R}$ and $a_j \in \mathbb{R}^n$. 
Then the nonnegative matrix in $\mathbb{R}^{v \times f}$ whose $(i,j)$-entry is $\beta_j - \langle a_j, p_i \rangle$ is called a {\em slack matrix of $P$}. \end{definition} Recall that the {\em polar dual} of a cone $K \subset \mathbb{R}^m$ is the cone $$K^* := \{ y \in \mathbb{R}^m \,:\, \langle x, y \rangle \geq 0 \,\,\,\forall \,\,\, x \in K \}.$$ In the vector space of $m \times m$ symmetric matrices we use the trace inner product $\langle A, B \rangle = \textup{Tr}(AB)$. Both $\mathcal{S}_+^k$ and $\mathbb{R}^k_+$ are {\em self dual} cones, meaning that $K^*=K$, and we will identify them with their polar duals in what follows. The notion of {\em cone factorizations} of slack matrices plays a central role in the theory of cone lifts of polytopes. \begin{definition} \label{def:K-factorization} \cite{GPT2} Let $M = (M_{ij}) \in \mathbb{R}_+^{p \times q}$ be a nonnegative matrix and $K$ a closed convex cone whose polar dual is $K^*$. \begin{itemize} \item A $K$-{\em factorization} of $M$ is a pair of ordered sets $a^1, \ldots, a^p \in K$ and $b^1, \ldots, b^q \in K^*$ (called {\em factors}) such that $\langle a^i, b^j \rangle = M_{ij}$. \item When $K = \mathbb{R}^m_+$ (respectively, $\mathcal{S}_+^m$), a $K$-factorization of $M$ is called a {\em nonnegative} (respectively, {\em psd}) {\em factorization} of $M$. \item The smallest $k$ for which $M$ has an $\mathbb{R}^k_+$-factorization (respectively, $\mathcal{S}_+^k$-factorization) is called the {\em nonnegative rank} (respectively, {\em psd rank}) of $M$. We denote these invariants of $M$ as $\textup{rank}_+\, M$ and $\textup{rank}_{\textup{psd}}\, M$. \end{itemize} \end{definition} Any positive scaling of a facet inequality of a polytope $P$ can be used in Definition~\ref{def:slack matrix} and so the slack matrix of $P$ is only defined up to positive scalings of its columns. We denote any such slack matrix of $P$ by $S_P$. Since scaling rows or columns of a matrix $M$ by arbitrary positive real numbers does not affect the existence of a $K$-factorization of $M$, all slack matrices of $P$ will have the same behavior with respect to $K$-factorizations and, in particular, have the same nonnegative (respectively, psd) rank. In what follows, $P \subset \mathbb{R}^n$ is always an $n$-dimensional polytope. Yannakakis showed in \cite{Yannakakis} that $\textup{rank}_+\, P = \textup{rank}_+\, S_P$ by proving that $P$ has an $\mathbb{R}^k_+$-lift if and only if $S_P$ has an $\mathbb{R}^k_+$-factorization. The nonnegative rank of a polytope has been the subject of many recent papers \cite{FKPT, FMPTW, FioriniRothvossTiwary, GillisGlineur, KaibelPashkovich}. The psd rank of a {\em convex set} $C \subset \mathbb{R}^n$ was introduced in \cite{GPT2} where Yannakakis' theorem was generalized (Theorem 2.4 \cite{GPT2}). Specializing to polytopes, this theorem says that $P$ has a $K$-lift (in particular, $\mathcal{S}_+^k$-lift) if and only if $S_P$ has a $K$-factorization ($\mathcal{S}_+^k$-factorization), and so, $\textup{rank}_{\textup{psd}}\, P = \textup{rank}_{\textup{psd}}\, S_P$. (The extension of Yannakakis' theorem in the case of polytopes also appeared in \cite{FMPTW}.) Since $\mathbb{R}^k_+$ embeds into $\mathcal{S}_+^k$ for each $k$, we always have $\textup{rank}_{\textup{psd}}\, P \leq \textup{rank}_+\, P$. It is easy to see that $\textup{rank}_+\, P \geq \textup{rank}\, S_P = n+1$. In Proposition~\ref{prop:lower bound on psd rank} we show that $\textup{rank}_{\textup{psd}}\, P$ is also at least $n+1$. 
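As a concrete illustration of Definition~\ref{def:slack matrix} and of the equality $\textup{rank}\, S_P = n+1$, the following short sketch (Python/NumPy; the unit square is our own toy example and is not taken from the text) assembles a slack matrix from vertex and facet data and computes its rank.
\begin{verbatim}
import numpy as np

# Vertices p_i of the unit square and its facet data, written as
# beta_j - <a_j, x> >= 0 (our own toy example).
V = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)    # vertices
A = np.array([[-1, 0], [0, -1], [1, 0], [0, 1]], dtype=float)  # facet normals a_j
b = np.array([0, 0, 1, 1], dtype=float)                        # right-hand sides beta_j

# Slack matrix: S[i, j] = beta_j - <a_j, p_i>.
S = b[None, :] - V @ A.T
print(S)                         # a 0/1 matrix for this example
print(np.linalg.matrix_rank(S))  # 3 = n + 1
\end{verbatim}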
The lower bound for the psd rank is not immediate since for a general nonnegative matrix $M$, $\textup{rank}\, M$ is not a lower bound for $\textup{rank}_{\textup{psd}}\, M$, and the correct relationship is that $\frac{1}{2}(\sqrt{1 + 8 \textup{rank}\, M}-1) \leq \textup{rank}_{\textup{psd}}\, M$ \cite{GPT2}. Theorem~\ref{thm:psdrank n+1} characterizes those $n$-polytopes whose psd rank equals $n+1$, and we give several families of $n$-dimensional polytopes whose psd rank equals this lower bound. We now recall a few useful facts about nonnegative and psd ranks of polytopes that will be needed in this paper. It follows from \cite[Prop.~2]{GPT2} that $\textup{rank}_+\, P$ and $\textup{rank}_{\textup{psd}}\, P$ are invariant under projective (and hence also affine) transformations of $P$. Further, transposing a matrix $M$ does not affect the existence of a $K$-factorization of $M$ if $K$ is self-dual. Therefore, if $P$ contains the origin in its interior and $P^\circ := \{ y \in \mathbb{R}^n \,:\, \langle x, y \rangle \leq 1 \,\,\forall \,\, x \in P \}$ is its {\em polar} polytope, then $\textup{rank}_+\, P = \textup{rank}_+\, P^\circ$ and $\textup{rank}_{\textup{psd}}\, P = \textup{rank}_{\textup{psd}}\, P^\circ$, since we can obtain a slack matrix of $P^\circ$ by transposing a slack matrix of $P$ and rescaling rows. It is common to define the slack matrix of a polytope using any inequality description of the polytope, including redundant inequalities. This will not affect the nonnegative or psd rank of the polytope. However, since some of our results will become more cumbersome to state using this more general definition of a slack matrix, we restrict ourselves to Definition~\ref{def:slack matrix}. The psd rank of a polytope $P$ quantifies the power of semidefinite programming to provide efficient algorithms for linear optimization over $P$. For example, the stable set polytope of a perfect graph on $n$ vertices is known to have psd rank $n+1$, which provides the only known polynomial time algorithm (via semidefinite programming) for finding the highest weight stable set in a perfect graph. The connection between psd rank and semidefinite lifts makes psd rank a possible tool for settling questions concerning semidefinite programming in combinatorial optimization. A question that is currently active is whether the nonnegative rank of the {\em perfect matching polytope} of a complete graph $K_n$ is polynomial in $n$. This was raised in \cite{Yannakakis}, where it was shown that there are no small symmetric $\mathbb{R}^k_+$-lifts of these polytopes. Both the nonnegative and psd ranks of these polytopes are unknown at the moment. Another active question concerns the possible gap between $\textup{rank}_+\, P$ and $\textup{rank}_{\textup{psd}}\, P$, which is a measure of the relative strength of linear versus semidefinite programming for linear optimization over $P$. No example where this gap is large is known so far. While nonnegative rank has been studied in several papers, the notion of psd rank is new. The results and techniques presented here further our understanding of the psd rank of a polytope. This paper is organized as follows. In Section~\ref{sec:general matrices} we introduce tools to study the psd rank of a general nonnegative matrix $M$ using Hadamard square roots of $M$. In Section~\ref{sec:slack matrices}, we specialize to slack matrices of polytopes and derive the lower bound of $n+1$ for the psd rank of an $n$-dimensional polytope (Proposition~\ref{prop:lower bound on psd rank}).
Theorem~\ref{thm:psdrank n+1} characterizes $n$-dimensional polytopes with psd rank $n+1$ in terms of the lowest rank of a Hadamard square root of a slack matrix of the polytope. In Section~\ref{sec:examples} we give several families of polytopes whose psd rank equals this lower bound. In the plane, the full-dimensional polytopes with psd rank three are exactly triangles and quadrilaterals (Theorem~\ref{thm:polygons of psd rank 3}). Every polytope in $\mathbb{R}^n$ with at most $n+2$ vertices has psd rank $n+1$ (Theorem~\ref{thm:n+2 vertices}). In $\mathbb{R}^3$, the situation is more subtle and we exhibit polytopes of a fixed combinatorial type (octahedra) whose psd rank depends on the embedding of the polytope. Nonetheless, we show that the three-dimensional polytopes with psd rank four are exactly tetrahedra, quadrilateral pyramids, bisimplices, combinatorial triangular prisms, ``biplanar'' octahedra, and ``biplanar'' cuboids (Theorem~\ref{thm:3d polytopes of psd rank four}). It follows from \cite{GPT1} that if $S_P$ is a $0/1$ matrix then $\textup{rank}_{\textup{psd}}\, P = n+1$. Such polytopes are called $2$-level polytopes and include the stable set polytopes of perfect graphs. We exhibit polytopes whose psd rank achieves the lower bound but which are not combinatorially equivalent to $2$-level polytopes. We also exhibit polytopes that are combinatorially equivalent to $2$-level polytopes but whose psd rank is not the minimum possible. Finally, we prove in Theorem~\ref{thm:perfect graphs} that for stable set polytopes, the results of Lov{\'a}sz prevail even in our general setting in the sense that the stable set polytope of a graph on $n$ vertices has psd rank $n+1$ if and only if the graph is perfect. \section{Hadamard square roots and psd ranks of matrices} \label{sec:general matrices} \begin{definition} \label{def:Hadamard square root} A {\em Hadamard square root} of a nonnegative real matrix $M$, denoted as $\sqrt{M}$, is any matrix whose $(i,j)$-entry is a square root (positive or negative) of the $(i,j)$-entry of $M$. Additionally, we let $\sqrt[+]{M}$ denote the all-nonnegative Hadamard square root of $M$. \end{definition} Let $\textup{rank}_{\! \! {\sqrt{\ }}}\, M := \textup{min} \{ \textup{rank}\, \sqrt{M} \}$ be the minimum rank of a Hadamard square root of a nonnegative matrix $M$. We recall the basic connection between the psd rank of a nonnegative matrix $M$ and $\textup{rank}_{\! \! {\sqrt{\ }}}\, M$ shown in \cite[Proposition 4.8]{GPT2}, and also in \cite{FMPTW}. \begin{proposition} \label{prop:psd rank and Hadamard square roots} If $M$ is a nonnegative matrix, then $\textup{rank}_{\textup{psd}}\, M \leq \textup{rank}_{\! \! {\sqrt{\ }}}\, M$. In particular, the psd rank of a $0/1$ matrix is at most the rank of the matrix. \end{proposition} \begin{proof} Let $\sqrt{M}$ be a Hadamard square root of $M \in \mathbb{R}^{p \times q}_+$ of rank $r$. Then there exist vectors $a_1, \ldots, a_p, b_1, \ldots, b_q \in \mathbb{R}^{r}$ such that $(\sqrt{M})_{ij} = \langle a_i, b_j \rangle$. Therefore, $M_{ij} = \langle a_i, b_j \rangle^2 = \langle a_i a_i^T, b_j b_j^T \rangle$, where the second inner product is the trace inner product for symmetric matrices defined earlier. Hence, $\textup{rank}_{\textup{psd}}\,{M} \leq r$. \end{proof} The upper bound in Proposition~\ref{prop:psd rank and Hadamard square roots} can be strict even for simple examples.
\begin{example} \label{ex:rank 3 and psd rank 2} For the matrix \[ M := \left[ \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{array} \right], \] $\textup{rank}\, M = \textup{rank}_{\! \! {\sqrt{\ }}}\, M = 3$ while $\textup{rank}_{\textup{psd}}\, M = 2$. Assigning the first three psd matrices below to the rows of $M$, and the next three to the columns of $M$, we obtain a $\mathcal{S}_+^2$-factorization of $M$: $$ \left[ \begin{array}{cc} 0.5 & -0.5 \\ -0.5 & 1 \end{array} \right], \left[ \begin{array}{cc} 0.5 & 0 \\ 0 & 0 \end{array} \right], \left[ \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right] \textup{ and } \left[ \begin{array}{cc} 2 & 0 \\ 0 & 0 \end{array} \right], \left[ \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right], \left[ \begin{array}{cc} 2 & 1 \\ 1 & 1 \end{array} \right].$$ \end{example} \bigskip Even though $\textup{rank}_{\! \! {\sqrt{\ }}}\, M$ is only an upper bound on $\textup{rank}_{\textup{psd}}\,{M}$, we cannot find $\mathcal{S}_+^k$-factorizations of $M$ with only rank one factors if $k < \textup{rank}_{\! \! {\sqrt{\ }}}\, M$ as shown in Lemma~\ref{lem:rank one factors} below. Note that the psd factors corresponding to the first row and the third column of the matrix $M$ in Example~\ref{ex:rank 3 and psd rank 2} both have rank two. \begin{lemma} \label{lem:rank one factors} The smallest $k$ for which a nonnegative real matrix $M$ admits a $\mathcal{S}_+^k$-factorization in which all factors are matrices of rank one is $k = \textup{rank}_{\! \! {\sqrt{\ }}}\, M$. \end{lemma} \begin{proof} If $k = \textup{rank}_{\! \! {\sqrt{\ }}}\, M$, then there is a Hadamard square root of $M \in \mathbb{R}^{p \times q}_+$ of rank $k$ and the proof of Proposition~\ref{prop:psd rank and Hadamard square roots} gives a $\mathcal{S}_+^k$-factorization of $M$ in which all factors have rank one. On the other hand, if there exist $a_1a_1^T, \ldots, a_pa_p^T, b_1b_1^T, \ldots, b_qb_q^T \in \mathcal{S}_+^k$ such that $M_{ij}= \langle a_ia_i^T, b_jb_j^T \rangle = \langle a_i, b_j \rangle^2$, then the matrix with $(i,j)$-entry $\langle a_i, b_j \rangle$ is a Hadamard square root of $M$ of rank at most $k$. \end{proof} \begin{example} \label{ex:derangement matrix} For a $0/1$ matrix $M$, $\textup{rank}_{\textup{psd}}\, M \leq \textup{rank}_{\! \! {\sqrt{\ }}}\, M \leq \textup{rank}\, M$. In Example~\ref{ex:rank 3 and psd rank 2} we saw that the first inequality may be strict. We now show that the second inequality may also be strict. The following {\em derangement} matrix \[ \left[ \begin{array}{ccc} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{array} \right] \] has rank three and psd rank two. An $\mathcal{S}_+^2$-factorization in which all factors have rank one is gotten by assigning $$ \left[ \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right], \left[ \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right], \left[ \begin{array}{cc} 1 & 1 \\ 1 & 1 \end{array} \right], \left[ \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right],\left[ \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right], \left[ \begin{array}{cc} 1 & -1 \\ -1 & 1 \end{array} \right] $$ to the three rows and the three columns, respectively. A Hadamard square root of $M$ of rank two is $$ \left[ \begin{array}{rrr} 0 & -1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0 \end{array} \right]. $$ \end{example} We now show a method to increase the psd rank of any matrix by one. This technique will be used later to study the psd rank of a polytope. 
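Before doing so, we note that the data in Example~\ref{ex:derangement matrix} can be checked numerically. The sketch below (Python/NumPy; purely illustrative) takes the rank-two Hadamard square root displayed above, extracts vectors $a_i, b_j$ from a rank factorization as in the proof of Proposition~\ref{prop:psd rank and Hadamard square roots}, and confirms that the rank-one matrices $a_ia_i^T$ and $b_jb_j^T$ give an $\mathcal{S}_+^2$-factorization of the derangement matrix, in line with Lemma~\ref{lem:rank one factors}.
\begin{verbatim}
import numpy as np

# The derangement matrix M and its rank-two Hadamard square root H,
# both copied from the example above.
M = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
H = np.array([[0, -1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
assert np.allclose(H**2, M) and np.linalg.matrix_rank(H) == 2

# Rank factorization H = U @ W with inner dimension two (truncated SVD);
# the rows of U and the columns of W play the roles of a_i and b_j.
u, s, vt = np.linalg.svd(H)
U = u[:, :2] * s[:2]
W = vt[:2, :]
assert np.allclose(U @ W, H)

# The rank-one psd matrices a_i a_i^T and b_j b_j^T satisfy
# <a_i a_i^T, b_j b_j^T> = <a_i, b_j>^2 = M_ij.
for i in range(3):
    for j in range(3):
        Ai = np.outer(U[i], U[i])
        Bj = np.outer(W[:, j], W[:, j])
        assert np.isclose(np.trace(Ai @ Bj), M[i, j])
print("rank-one S_+^2-factorization verified")
\end{verbatim}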
\begin{proposition} \label{prop:extending rank} Suppose $M \in \mathbb{R}^{p \times q}_+$ and $\textup{rank}_{\textup{psd}}\, M = k$. If $M$ is extended to $M' = \left( \begin{array}{cc} M & {\bf 0} \\ w & \alpha \end{array} \right)$ where $w \in \mathbb{R}_+^q$, $\alpha > 0$ and ${\bf 0}$ is a column of zeros, then $\textup{rank}_{\textup{psd}}\, M' = k+1$. Further, the factor associated to the last column of $M'$ in any $\mathcal{S}_+^{k+1}$-factorization of $M'$ has rank one. \end{proposition} \begin{proof} Suppose $M'$ has a $\mathcal{S}_+^k$-factorization with factors $A_1, \ldots, A_p, A \in \mathcal{S}_+^k$ associated to its rows and $B_1, \ldots, B_q,B \in \mathcal{S}_+^k$ associated to its columns. Then $A, B \neq 0$ since $\langle A, B \rangle = \alpha \neq 0$. Let $r= \textup{rank}\,(B) >0$. Then there exists an orthogonal matrix $U$ such that $U^{-1} B U = \textup{diag}(\lambda_1, \ldots, \lambda_{r}, 0, \ldots, 0) =: D$ where $\lambda_1, \ldots, \lambda_r$ are the nonzero (positive) eigenvalues of $B$. Let $A_i' := U^{-1}A_iU$ for $i=1,\ldots,p$. Then $$\langle D,A_i' \rangle = \textup{Tr}(U^{-1}BA_iU) = \textup{Tr}(BA_i) = \langle B, A_i \rangle = 0 \,\,\,\forall \,\,\,i=1,\ldots,p.$$ Since the diagonal entries of $A_i'$ are nonnegative, $\langle D, A_i' \rangle = 0$ implies that the first $r$ diagonal entries of $A_i'$ are all zero. Therefore, the first $r$ rows and the first $r$ columns of $A_i'$ are all zero since $A_i'$ is psd. Now let $B_j' := U^{-1}B_jU$ for all $j=1,\ldots,q$. Then for all $i=1,\ldots,p$ and $j=1,\ldots,q$, $$ \langle A_i', B_j' \rangle = \textup{Tr}(U^{-1}A_iB_j U) = \langle A_i, B_j \rangle = M_{ij}. $$ However, since $A_i'$ has nonzero entries only in its bottom right $(k-r) \times (k-r)$ block, it also follows that $ M_{ij} = \langle \tilde{A_i}, \tilde{B_j} \rangle$ where $\tilde{A_i}$ is the bottom right $(k-r) \times (k-r)$-submatrix of $A_i'$ and $\tilde{B_j}$ is the bottom right $(k-r) \times (k-r)$ submatrix of $B_j'$. Thus, there exists a $\mathcal{S}_+^{k-r}$-factorization of $M$ which is a contradiction to the fact that the psd rank of $M$ is $k$. Therefore, $\textup{rank}_{\textup{psd}}\, M' \geq k+1$. An $\mathcal{S}_+^{k+1}$-factorization of $M'$ can be obtained from an $\mathcal{S}_+^k$-factorization $A_1, \ldots, A_p$ $B_1, \ldots, B_q \in \mathcal{S}_+^k$ of $M$ by setting $$ \tilde{A_i} := \left[ \begin{array}{cc} A_i & {\bf 0} \\ {\bf 0} & 0 \end{array} \right], \tilde{B_j} := \left[ \begin{array}{cc} B_j & {\bf 0} \\ {\bf 0} & w_j \end{array} \right], \tilde{A} := \left[ \begin{array}{cc} {\bf 0} & {\bf 0} \\ {\bf 0} & 1 \end{array} \right], \tilde{B} := \left[ \begin{array}{cc} {\bf 0} & {\bf 0} \\ {\bf 0} & \alpha \end{array} \right].$$ Now consider an $\mathcal{S}_+^{k+1}$-factorization of $M'$ and let $B$ be the matrix associated to the last column of $M'$ in this factorization. If $\textup{rank}\,(B)=r$, then by the same argument as above, there exists an $\mathcal{S}_+^{k+1-r}$-factorization of $M$. Since $\textup{rank}_{\textup{psd}}\, M = k$, $k+1-r \geq k$ or equivalently, $r \leq 1$. Since $B \neq 0$, it follows that $\textup{rank}\,(B)=1$. \end{proof} \begin{example} \label{ex:psd rank of diagonal matrix} The psd rank of a $n \times n$ diagonal matrix with positive diagonal entries is $n$. The statement holds for $n=1$ and the general case follows by induction on $n$ and the first part of Proposition~\ref{prop:extending rank}. 
Each factor in an $\mathcal{S}_+^n$-factorization of such a diagonal matrix must have rank one. This follows by applying the second part of Proposition~\ref{prop:extending rank} to both the diagonal matrix and its transpose. \end{example} \section{Hadamard square roots and psd ranks of polytopes} \label{sec:slack matrices} In this section we derive a lower bound to the psd rank of any polytope. We begin with the following easy fact. \begin{lemma} \label{lem:rank of slack matrix} Let $P \subset \mathbb{R}^n$ be an $n$-dimensional polytope. Then a slack matrix $S_P$ has rank $n+1$. \end{lemma} \begin{proof} Let the vertices of $P$ be $p_1, \ldots, p_v$ and the facet inequalities of $P$ be $\langle a_j, x \rangle \leq \beta_j$ for $j=1, \ldots, f$. Then the corresponding $v \times f$ slack matrix $S_P$ has $(i,j)$-entry equal to $\beta_j - \langle a_j, p_i \rangle$, and we may factorize $S_P$ as $$ \left( \begin{array}{ll} 1 & p_1 \\ \vdots & \vdots \\ 1 & p_v \end{array} \right) \left( \begin{array}{ccc} \beta_1 & \cdots & \beta_f\\ - a_1 & \cdots & - a_f \end{array} \right). $$ Since $P$ is full-dimensional and bounded, both of the factors have rank $n+1$. \end{proof} We now obtain a lower bound on the psd rank of a polytope. \begin{proposition} \label{prop:lower bound on psd rank} If $P \subset \mathbb{R}^n$ is a full-dimensional polytope, then the psd rank of $P$ is at least $n+1$. Furthermore, if $\textup{rank}_{\textup{psd}}\, P = n+1$, then {\em every} $\mathcal{S}_+^{n+1}$-factorization of the slack matrix of $P$ only uses rank one matrices as factors. \end{proposition} \begin{proof} The proof is by induction on $n$. If $n=1$, then $P$ is a line segment and we may assume that its vertices are $p_1, p_2$ and facets are $f_1, f_2$ with $p_1 = f_2$ and $p_2 = f_1$. Hence its slack matrix is a $2 \times 2$ diagonal matrix with positive diagonal entries. By the arguments in Example~\ref{ex:psd rank of diagonal matrix}, $\textup{rank}_{\textup{psd}}\, S_P = 2$ and any $\mathcal{S}_+^{2}$-factorization of it uses only matrices of rank one. Assume the first statement in the theorem holds up to dimension $n-1$ and consider a polytope $P \subset \mathbb{R}^n$ of dimension $n$. Let $F$ be a facet of $P$ with vertices $p_1, \ldots, p_s$, facets $f_1, \ldots, f_t$ and slack matrix $S_F$. Suppose $f_i$ corresponds to facet $F_i$ of $P$ for $i=1, \ldots, t$. By induction hypothesis, $\textup{rank}_{\textup{psd}}\, F = \textup{rank}_{\textup{psd}}\, S_F \geq n$. Let $p$ be a vertex of $P$ not in $F$ and assume that the top left $(s+1) \times (t+1)$ submatrix of $S_P$ is indexed by $p_1, \ldots, p_s, p$ in the rows and $F_1, \ldots, F_t, F$ in the columns. Then this submatrix of $S_P$, which we will call $S_F'$, has the form $$ S_F' = \left( \begin{array}{cc} S_F & {\bf 0} \\ * & \alpha \end{array} \right)$$ with $\alpha > 0$. By Proposition~\ref{prop:extending rank}, the psd rank of $S_F'$ is at least $n+1$ since the psd rank of $S_F$ is at least $n$. Hence, $\textup{rank}_{\textup{psd}}\, P = \textup{rank}_{\textup{psd}}\, S_P \geq n+1$. Suppose there is now a $\mathcal{S}_+^{n+1}$-factorization of $S_P$ and therefore of $S_F'$. By Proposition~\ref{prop:extending rank} the factor corresponding to the facet $F$ has rank one. Repeating the procedure for all facets $F$ and all submatrices $S_F'$ we get that all factors corresponding to the facets of $P$ in this $\mathcal{S}_+^{n+1}$-factorization of $S_P$ must have rank one. 
To prove that all factors indexed by the vertices of $P$ also have rank one, recall that the transpose of a slack matrix of $P$ is (up to row scaling) a slack matrix of the polar polytope $P^\circ$, concluding the proof. \end{proof} \begin{remark} \label{rmk:zero pattern} The zero pattern in $S_P$ has been used to provide lower bounds for $\textup{rank}_+\, P$ (see for instance, \cite{Yannakakis, FKPT}). We note that the zero pattern of a slack matrix by itself is not enough to improve the lower bound on psd rank given in Proposition~\ref{prop:lower bound on psd rank}. For example, consider the slack matrix $S_k$ of a k-gon in $\mathbb{R}^2$. Then $\textup{rank}_{\textup{psd}}\, S_k$ grows to infinity as $k$ goes to infinity as shown in \cite{GPT2}. The Hadamard square $S_k^2$, however, has the same zero pattern as $S_k$ and $\textup{rank}_{\textup{psd}}\, S_k^2 \leq \textup{rank}\, S_k = 3$ by Lemma~\ref{lem:rank of slack matrix}. \end{remark} \begin{example} The {\em Birkhoff polytope} $B(n)$ is the convex hull of all $n \times n$ permutation matrices. It was shown in \cite{FKPT} that $\textup{rank}_+\, B(n) = n^2$ when $n \geq 5$. By Proposition~\ref{prop:lower bound on psd rank}, $\textup{rank}_{\textup{psd}}\, B(n) \geq n^2-2n+2$. The {\em permutahedron} $\Pi(n)$ is the convex hull of the vectors $(\pi(1), \ldots, \pi(n))$ where $\pi$ is a permutation on $n$ letters. It was shown in \cite{Goemans} that $\textup{rank}_+\, \Pi(n) = O(n \textup{ log } n )$. By Proposition~\ref{prop:lower bound on psd rank}, $\textup{rank}_{\textup{psd}}\, \Pi(n) \geq n$. \end{example} \begin{theorem} \label{thm:psdrank n+1} If $P \subset \mathbb{R}^n$ is a full-dimensional polytope, then $\textup{rank}_{\textup{psd}}\, P = n+1$ if and only if $\textup{rank}_{\! \! {\sqrt{\ }}}\, S_P = n+1$. \end{theorem} \begin{proof} By Proposition~\ref{prop:psd rank and Hadamard square roots}, $\textup{rank}_{\textup{psd}}\, P \leq \textup{rank}_{\! \! {\sqrt{\ }}}\, S_P$. Therefore, if $\textup{rank}_{\! \! {\sqrt{\ }}}\, S_P = n+1$, then by Proposition~\ref{prop:lower bound on psd rank}, the psd rank of $P$ is exactly $n+1$. Conversely, suppose $\textup{rank}_{\textup{psd}}\, P = n+1$. Then there exists a $\mathcal{S}_+^{n+1}$-factorization of $S_P$ which, by Proposition~\ref{prop:lower bound on psd rank}, has all factors of rank one. Thus, by Lemma~\ref{lem:rank one factors}, we have $\textup{rank}_{\! \! {\sqrt{\ }}}\, S_P \leq n+1$. Since $\textup{rank}_{\! \! {\sqrt{\ }}}\,$ is bounded below by $\textup{rank}_{\textup{psd}}\,$, we must have $\textup{rank}_{\! \! {\sqrt{\ }}}\, S_P = n+1$. \end{proof} Theorem~\ref{thm:psdrank n+1} says that if a full-dimensional polytope $P \subset \mathbb{R}^n$ has the minimum possible psd rank $n+1$, then there must be a Hadamard square root of $S_P$ of rank $n+1$ that serves as a witness. In the next section we exhibit several classes of $n$-polytopes whose psd rank is $n+1$. We now give examples in the plane that show that many of the properties we have derived so far for $n$-polytopes of psd rank $n+1$ fail when psd rank is larger than $n+1$. \begin{example} \label{ex:ngons} Consider the pentagon $P$ in $\mathbb{R}^2$ with vertices $$(0,0), (1,0), (2,1), (1,2), (0,1),$$ and a regular hexagon $H$ in $\mathbb{R}^2$. 
Then we have slack matrices: \[ S_P = \left[ \begin{array}{ccccc} 0 & 4 & 12 & 4 & 0 \\ 0 & 0 & 8 & 8 & 2 \\ 2 & 0 & 0 & 8 & 4 \\ 4 & 8 & 0 & 0 & 2 \\ 2 & 8 & 8 & 0 & 0 \end{array} \right], \; S_H = \left[ \begin{array}{cccccc} 0 & 2 & 4 & 4 & 2 & 0 \\ 0 & 0 & 2 & 4 & 4 & 2 \\ 2 & 0 & 0 & 2 & 4 & 4 \\ 4 & 2 & 0 & 0 & 2 & 4 \\ 4 & 4 & 2 & 0 & 0 & 2 \\ 2 & 4 & 4 & 2 & 0 & 0 \end{array} \right] . \] Theorem~\ref{thm:polygons of psd rank 3} will show that these polytopes have psd rank at least four which is not the minimum possible in the plane. We make the following observations: \begin{description} \item[(i)] $\textup{rank}_{\! \! {\sqrt{\ }}}\, S_P > \textup{rank}_{\textup{psd}}\, P$ This pentagon has psd rank four due to the $\mathcal{S}_+^4$-factorization given by the following matrices (the first five matrices correspond to the rows and the second five to the columns): \[ \tiny \left[ \begin{array}{rrrr} 3 & 0 & 0 & 0 \\ 0 & 1 & 1 & -1 \\ 0 & 1 & 1 & -1 \\ 0 & -1 & -1 & 1 \end{array} \right], \left[ \begin{array}{rrrr} 1 & -1 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & -1 & 1 \end{array} \right], \left[ \begin{array}{rrrr} 1 & 0 & 0 & -1 \\ 0 & 1 & -1 & 0 \\ 0 & -1 & 1 & 0 \\ -1 & 0 & 0 & 1 \end{array} \right], \left[ \begin{array}{rrrr} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 \end{array} \right],\] \[ \tiny \left[ \begin{array}{rrrr} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{array} \right], \left[ \begin{array}{rrrr} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 \end{array} \right], \left[ \begin{array}{rrrr} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{array} \right], \left[ \begin{array}{rrrr} 1 & -1 & -1 & 1 \\ -1 & 1 & 1 & -1 \\ -1 & 1 & 1 & -1 \\ 1 & -1 & -1 & 1 \end{array} \right], \] \[ \tiny \left[ \begin{array}{rrrr} 1 & -1 & 1 & -1 \\ -1 & 1 & -1 & 1 \\ 1 & -1 & 1 & -1 \\ -1 & 1 & -1 & 1 \end{array} \right], \left[ \begin{array}{rrrr} 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right] . \] One can check that $\textup{rank}_{\! \! {\sqrt{\ }}}\, S_P = 5$ in this case via the following algebraic calculation. Create a symbolic matrix with the same zeros as a $S_P$, say \[ S := \left[ \begin{array}{ccccc} 0 & a & b & c & 0 \\ 0 & 0 & d & e & f \\ g & 0 & 0 & h & i \\ j & k & 0 & 0 & l \\ m & n & o & 0 & 0 \end{array} \right]. \] Then there is a Hadamard square root of $S_P$ of rank at most four if and only if there is a solution to the system of polynomial equations $$ \{ \textup{det}(S) = 0, \,\, a^2 = 4, \,\,b^2= 12,\,\,c^2=4, \ldots, o^2 = 8 \}.$$ Using a computer algebra package such as Macaulay2 \cite{M2}, we can see that this system of equations has no solutions. Therefore, when the psd rank of a $n$-polytope is greater than $n+1$, there need not be any Hadamard square root of the slack matrix whose rank equals the psd rank of the polytope. \item[(ii)] $\textup{rank}_{\! \! {\sqrt{\ }}}\, S_H < \textup{rank}\, \sqrt[+]{S_H}$ The all-nonnegative Hadamard square root $\sqrt[+]{S_H}$ has rank $5$. The following Hadamard square root has rank 4: \[ \left[ \begin{array}{rrrrrr} 0 & \sqrt{2} & 2 & 2 & \sqrt{2} & 0 \\ 0 & 0 & \sqrt{2} & 2 & 2 & \sqrt{2} \\ \sqrt{2} & 0 & 0 & \sqrt{2} & 2 & 2 \\ -2 & -\sqrt{2} & 0 & 0 & \sqrt{2} & 2 \\ 2 & -2 & -\sqrt{2} & 0 & 0 & \sqrt{2} \\ \sqrt{2} & 2 & -2 & -\sqrt{2} & 0 & 0 \end{array} \right]. \] Thus, it is not enough to check the positive Hadamard square root of $S_P$ to get $\textup{rank}_{\! \! 
{\sqrt{\ }}}\, S_P$. \item[(iii)] Recall that if $Q$ is an $n$-dimensional polytope and $\textup{rank}_{\textup{psd}}\, Q = n+1$, then $\textup{rank}_{\textup{psd}}\, Q = \textup{rank}_{\! \! {\sqrt{\ }}}\, Q$ and all $\mathcal{S}_+^{n+1}$-factorizations of $S_Q$ have factors of rank one. However, even if $\textup{rank}_{\textup{psd}}\, Q = \textup{rank}_{\! \! {\sqrt{\ }}}\, Q$, but $\textup{rank}_{\textup{psd}}\, Q > n+1$, then there can be factorizations of $S_Q$ by psd matrices of size $\textup{rank}_{\textup{psd}}\, Q$ in which the factors do not all have rank one as in the case of the hexagon $H$. From above, $\textup{rank}_{\! \! {\sqrt{\ }}}\, S_H = 4$. A $\mathcal{S}_+^4$-factorization of $S_H$ is gotten by assigning the following six psd matrices of rank two to the columns: \[ \tiny \left[ \begin{array}{rrrr} 1 & -1 & 0 & 1 \\ -1 & 1 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ 1 & -1 & 0 & 1 \end{array} \right], \left[ \begin{array}{rrrr} 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & -1 \\ 0 & 1 & 1 & -1 \\ 0 & -1 & -1 & 1 \end{array} \right], \left[ \begin{array}{rrrr} 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right], \] \[ \tiny \left[ \begin{array}{rrrr} 1 & 1 & 0 & 1 \\ 1 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 \end{array} \right], \left[ \begin{array}{rrrr} 1 & 0 & 0 & 0 \\ 0 & 1 & -1 & 1 \\ 0 & -1 & 1 & -1 \\ 0 & 1 & -1 & 1 \end{array} \right], \left[ \begin{array}{rrrrr} 1 & -1 & 1 & 0 \\ -1 & 1 & -1 & 0 \\ 1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right], \] and the following six psd matrices of rank one to the rows: \[ \tiny \left[ \begin{array}{rrrr} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right], \left[ \begin{array}{rrrr} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \end{array} \right], \left[ \begin{array}{rrrr} 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right], \] \[ \tiny \left[ \begin{array}{rrrr} 1 & -1 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right], \left[ \begin{array}{rrrr} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 1 \end{array} \right], \left[ \begin{array}{rrrr} 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right] . \] \end{description} \end{example} There is no systematic algorithm to find exact psd factorizations of the type shown above. The factorizations in the above example were obtained via trial and error with a pen and paper. We always tried to choose row factors of rank one and all factors as sparse as possible. We now give two applications of Propositions~\ref{prop:extending rank} and \ref{prop:lower bound on psd rank}. The first yields a method to produce polytopes of psd rank $k$ from polytopes of psd rank $k-1$. \begin{proposition} \label{prop:pyramid} If $P \subset \mathbb{R}^n$ is an $n$-dimensional pyramid over a $(n-1)$-polytope $Q$ and $\textup{rank}_{\textup{psd}}\, Q = k$, then $\textup{rank}_{\textup{psd}}\, P = k+1$. \end{proposition} \begin{proof} Let $S_Q$ be the slack matrix of $Q$. By assumption, $\textup{rank}_{\textup{psd}}\, S_Q = k$. We may assume without loss of generality that $Q$ lies in the hyperplane $x_n = 0$ and that the apex $v$ of $P$ has $v_n > 0$. The facets of $P$ that contain $v$ are in bijection with the facets of $Q$. The only other facet inequality of $P$ is $x_n \geq 0$. 
A slack matrix of $P$ is $$ \left[ \begin{array}{cc} S_Q & {\bf 0} \\ {\bf 0} & \alpha \end{array} \right] $$ where the last row is indexed by $v$ and the last column by $x_n \geq 0$. Therefore, $\alpha > 0$ and by Proposition~\ref{prop:extending rank}, the psd rank of $S_P$ is $k+1$. \end{proof} The following result will be used in Section~\ref{sec:examples}. \begin{proposition} \label{prop:face psd rank} If a polytope $P$ has a facet of psd rank $k$, then $P$ has psd rank at least $k+1$. In particular, if $\textup{rank}_{\textup{psd}}\, P = n+1$ where $P \subset \mathbb{R}^n$ is a $n$-polytope, then $\textup{rank}_{\textup{psd}}\, F = i+1$ for every $i$-dimensional face of $P$. \end{proposition} \begin{proof} The first fact is an immediate consequence of the proof of Proposition~\ref{prop:lower bound on psd rank} where we saw that if $F$ is a facet of psd rank $k$, then Proposition~\ref{prop:extending rank} can be used to construct a submatrix $S_F'$ of the slack matrix $S_P$ that has psd rank at least $k+1$. The second statement then follows from Proposition~\ref{prop:lower bound on psd rank}. \end{proof} \section{ Families of polytopes of minimum psd rank} \label{sec:min psd rank} \label{sec:examples} Recall that if $P$ is an $n$-dimensional polytope in $\mathbb{R}^n$ then $\textup{rank}_+\, P \geq n+1$. It is straightforward to see that the only $n$-dimensional polytopes of nonnegative rank $n+1$ are simplices. The psd situation is much richer with many more classes of polytopes achieving the minimum possible psd rank as we show in this section. \begin{definition} A $n$-dimensional polytope $P \subset \mathbb{R}^n$ is said to be $2$-{\em level} if it has a slack matrix all of whose entries are zero or one. Geometrically, $P$ is $2$-level if and only if for each facet of the polytope, all vertices of $P$ lie on the union of this facet and exactly one other parallel translate of the hyperplane spanning this facet. \end{definition} It follows from \cite{GPT1} that a $2$-level polytope in $\mathbb{R}^n$ admits an $\mathcal{S}_+^{n+1}$-lift which can be constructed explicitly using sums of squares polynomials. In the language of the current paper, it follows that $n$-dimensional $2$-level polytopes have psd rank $n+1$. We can also see this directly from Theorem~\ref{thm:psdrank n+1}. \begin{corollary} \label{cor:2level} Let $P$ be an $n$-dimensional $2$-level polytope in $\mathbb{R}^n$. Then the psd rank of $P$ is exactly $n+1$. Further, all the factors in any $\mathcal{S}_+^{n+1}$-factorization of $P$ have rank one. \end{corollary} \begin{proof} Since a $2$-level polytope has a $0/1$ slack matrix $S_P$, $\textup{rank}\, \sqrt[+]{S_P} = \textup{rank}\, S_P = n+1$. Therefore, $\textup{rank}_{\! \! {\sqrt{\ }}}\, S_P = n+1$, and by Theorem~\ref{thm:psdrank n+1}, the psd rank of a $2$-level polytope equals $n+1$. The second statement follows from Proposition~\ref{prop:lower bound on psd rank}. \end{proof} Since any $n$-polytope with $n+1$ vertices is a simplex which is $2$-level, its psd rank is $n+1$. In fact, Theorem~\ref{thm:psdrank n+1} implies the following stronger result. \begin{theorem} \label{thm:n+2 vertices} Any full-dimensional polytope in $\mathbb{R}^n$ with $n+2$ vertices has psd rank $n+1$. \end{theorem} \begin{proof} Suppose $P$ is a polytope with $n+2$ vertices. Then if $f$ is the number of facets of $P$, we have that $S_P$ is an $(n+2) \times f$ matrix of rank $n+1$. Let $S_i$ denote the $i$th row of $S_P$. 
Since $\textup{rank}\, S_P = n+1$, we have $\sum_{i=1}^{n+2}a_iS_i = \left( 0, \ldots,0 \right)$ for some $a_i \in \mathbb{R}$. Each column of $S_P$ must have at least $n$ zeros, so when we consider the above equation component-wise, all but at most two of the summands must be zero. Thus, for each $j = 1, \ldots, f$, $a_{i_0}\left(S_{i_0}\right)_j +a_{i_1}\left(S_{i_1}\right)_j = 0$ for some $1 \leq i_0, i_1 \leq n+2$. For each $a_i$ define $b_i := \text{sgn}\left(a_i\right)\sqrt{\left| a_i\right|}$. Then $b_{i_0}\sqrt{\left(S_{i_0}\right)_j} + b_{i_1}\sqrt{\left(S_{i_1}\right)_j} = 0$. Since this holds for each component, we have $\sum_{i=1}^{n+2}b_i\sqrt{S_i} = \left( 0, \ldots,0 \right)$. Thus, $\sqrt[+]{S_P}$ must have rank $n+1$ and the result follows from Theorem~\ref{thm:psdrank n+1}. \end{proof} There are $\lfloor n^2/4 \rfloor$ distinct combinatorial types of $n$-dimensional polytopes with $n+2$ vertices \cite{Grunbaum}. In the plane, we get that all quadrilaterals have psd rank three. In $\mathbb{R}^3$, the two combinatorial types of polytopes with five vertices are the pyramid over a quadrilateral and a double simplex (bipyramid over a triangle). A quadrilateral pyramid need not be $2$-level but it is combinatorially equivalent to a pyramid over a square which is $2$-level. By Theorem~\ref{thm:n+2 vertices}, a $n$-dimensional double simplex (bipyramid over a simplex of dimension $n-1$) has psd rank $n+1$. They are polytopes of minimum psd rank that are not combinatorially equivalent to $2$-level polytopes. \begin{proposition} \label{prop:double simplex} There is no $2$-level polytope that is combinatorially equivalent to a double simplex except in the plane. \end{proposition} \begin{proof} Let $P \subset \mathbb{R}^n$ be an $n$-dimensional double simplex. Then the support of any $(n+2) \times 2n$ slack matrix of $P$ where the first and last rows correspond to the vertices acquired when taking the bipyramid over a $(n-1)$-dimensional simplex is $$M := \left( \begin{array}{ccc|ccc} 0 & \cdots & 0 & 1 & \cdots & 1 \\ \hline & I_n & & & I_n & \\ \hline 1 & \cdots & 1& 0 & \cdots & 0 \end{array} \right).$$ The rank of $M$ is $n+1$ and hence the left kernel of $M$ has dimension one and is generated by the vector $z := (1,-1,-1,\ldots,-1,-1,1) \in \mathbb{R}^{n+2}$ with all entries equal to $-1$ except the first and last. Also, $P$ is combinatorially equivalent to a $2$-level polytope if and only if there is a ($2$-level) polytope with slack matrix $M$. Suppose $M$ is the slack matrix of a $n$-dimensional polytope. Then we should be able to factorize $M$ as in the proof of Lemma~\ref{lem:rank of slack matrix} into the form $$ M = \left( \begin{array}{ll} 1 & p_1 \\ \vdots & \vdots \\ 1 & p_{n+2} \end{array} \right) \left( \begin{array}{ccc} \beta_1 & \cdots & \beta_f\\ - a_1 & \cdots & - a_{2n} \end{array} \right).$$ Call the two factors $V$ and $F$. The left kernel of $V$ is non-trivial since $V$ is a $(n+2) \times (n+1)$ matrix. Let $z'$ be a non-zero element in the left kernel of $V$. Then since $z'VF = 0$, it must also be that $z'M = 0$. This implies that $z'$ is a scalar multiple of $z$ and hence $z$ is in the left kernel of $V$. But looking at the first column of $V$, which is all ones, we see that $z$ can be in the left kernel of $V$ only if $n=2$. \end{proof} On the other hand, being combinatorially equivalent to a $2$-level polytope does not imply minimal psd rank. 
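As a concrete instance of Theorem~\ref{thm:n+2 vertices}, the following sketch (Python/NumPy; the coordinates of the three-dimensional double simplex are our own choice of embedding) computes a slack matrix and checks that its nonnegative Hadamard square root already has rank $n+1=4$, certifying minimum psd rank via Theorem~\ref{thm:psdrank n+1}.
\begin{verbatim}
import numpy as np

# A double simplex (bipyramid over a triangle) in R^3; the embedding is
# our own choice.  Facets are written as beta_j - <a_j, x> >= 0.
V = np.array([[0, 0, 0], [3, 0, 0], [0, 3, 0],
              [1, 1, 1], [1, 1, -1]], dtype=float)
A = np.array([[0, -1, 1], [0, -1, -1], [-1, 0, 1],
              [-1, 0, -1], [1, 1, 1], [1, 1, -1]], dtype=float)
b = np.array([0, 0, 0, 0, 3, 3], dtype=float)

S = b[None, :] - V @ A.T                   # slack matrix, 5 vertices x 6 facets
assert (S >= 0).all()
print(np.linalg.matrix_rank(S))            # 4 = n + 1
print(np.linalg.matrix_rank(np.sqrt(S)))   # also 4, so the psd rank is n + 1
\end{verbatim}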
The regular octahedron in $\mathbb{R}^3$ is a $2$-level polytope but we now show an octahedron whose psd rank is five. \begin{example} \label{ex:octahedra} Consider the octahedron with vertices $$(0,0,0),(2,0,0),(0,2,0),(2,2,0),(1,1,-1),(1,2,1)$$ which has slack matrix: \[ \left[ \begin{array}{cccccccc} 0 & 0 & 0 & 0 & 2 & 2 & 2 & 2 \\ 0 & 2 & 0 & 2 & 0 & 0 & 2 & 2 \\ 2 & 0 & 2 & 0 & 2 & 2 & 0 & 0 \\ 2 & 2 & 2 & 2 & 0 & 0 & 0 & 0 \\ 0 & 2 & 3 & 0 & 0 & 2 & 1 & 0 \\ 3 & 0 & 0 & 2 & 2 & 0 & 0 & 1 \\ \end{array} \right] . \] It can be checked algebraically as in Example~\ref{ex:ngons} that no Hadamard square root of this slack matrix has rank four. However, the positive Hadamard square root has rank five and hence the psd rank of this octahedron is five. \end{example} \begin{remark} We have seen that having the combinatorial type of a $2$-level polytope is not enough for minimal psd rank, while being the image under a projective transformation of a $2$-level polytope is enough. Proposition~\ref{prop:double simplex} shows that not all polytopes of minimal psd rank are projectively equivalent to $2$-level polytopes. Strictly weaker than being projectively equivalent to a $2$-level polytope is the existence of a positive scaling of each row and column of $S_P$ that turns it into a $0/1$-matrix. This clearly implies minimal psd rank, and includes double simplices. So one could suppose this to be a necessary and sufficient condition for having minimal psd rank. This turns out to be false. Consider the prism with vertices $(0,0,0)$, $(1,0,0)$, $(0,1,0)$, $(1,2,0)$, $(0,0,1)$, $(1,0,1)$, $(0,1,1)$, $(1,2,1)$ which has slack matrix \[ \left[ \begin{array}{cccccc} 0 & 0 & 2 & 1 & 0 & 1 \\ 1 & 0 & 0 & 2 & 0 & 1 \\ 0 & 1 & 2 & 0 & 0 & 1 \\ 1 & 2 & 0 & 0 & 0 & 1 \\ 0 & 0 & 2 & 1 & 1 & 0 \\ 1 & 0 & 0 & 2 & 1 & 0 \\ 0 & 1 & 2 & 0 & 1 & 0 \\ 1 & 2 & 0 & 0 & 1 & 0 \end{array} \right] . \] The positive square root of this matrix has rank four, so the polytope has minimal psd rank, but it is easy to see that we can never turn the submatrix from the first two rows and the fourth and sixth columns into a $0/1$-matrix by any scaling. \end{remark} In the plane we can fully characterize the polytopes of psd rank three. \begin{theorem} \label{thm:polygons of psd rank 3} A convex polygon $P$ in the plane has psd rank three if and only if it has at most four vertices. \end{theorem} \begin{proof} The ``if'' direction was discussed after Theorem~\ref{thm:n+2 vertices}. Now suppose that $P$ is a convex polygon with $5$ or more vertices. By an affine transformation we can suppose $P$ has facets given by $x \geq 0$ and $y \geq 0$ with vertices on $(0,0)$, $(1,0)$ and $(0,1)$. Let $(a,b)$ be the vertex sharing an edge with $(0,1)$ and $(c,d)$ the one sharing an edge with $(1,0)$. These facets are then given by the two inequalities $(b-1)x - ay + a \geq 0$ and $(c-1)y-dx + d \geq 0$ respectively, so we can take the $5 \times 4$ submatrix of the slack matrix of $P$ indexed by these vertices and facets, which is then \[S'_P=\left(\begin{array}{cccc} 0 & 0 & a & d \\ 0 & 1 & 0 & d+c-1 \\ 1 & 0 & a+b-1 & 0 \\ a & b & 0 & cb-b-da+d \\ c & d & bc-c-ad+a & 0 \end{array}\right). \] It is then enough to show that every possible Hadamard square root of the $4 \times 4$ upper left portion of this matrix has rank four. 
This matrix is given by \[\left(\begin{array}{cccc} 0 & 0 & \pm\sqrt{a} & \pm\sqrt{d} \\ 0 & \pm 1 & 0 & \pm\sqrt{d+c-1} \\ \pm 1 & 0 & \pm\sqrt{a+b-1} & 0 \\ \pm\sqrt{a} & \pm\sqrt{b} & 0 & \pm\sqrt{cb-b-da+d} \end{array}\right).\] Assume this matrix has rank three. Since the first three rows are independent, we can write the fourth row as a combination of the first three. In such a combination, the coefficients for the first three rows must be $\pm\sqrt{a+b-1}$, $\pm\sqrt{b}$ and $\pm\sqrt{a}$, respectively. For ease of notation, let $\alpha=b(d+c-1)$ and $\beta=d(a+b-1)$. Then $\alpha,\beta > 0$ and $\alpha \geq \beta$. Looking at the last column, we see that \[ \pm\sqrt{\alpha-\beta}=\pm\sqrt{\alpha}\pm\sqrt{\beta} .\] Out of these eight possible equations, the only four that are feasible are $\pm \sqrt{\alpha-\beta}=\sqrt{\alpha}-\sqrt{\beta}$ and $\pm \sqrt{\alpha-\beta}=-\sqrt{\alpha}+\sqrt{\beta}$, all of which imply $\alpha=\beta$. Hence, $cb-b=ad-d$ and we have that $b/(a-1)=d/(c-1)$. Thus, the slope of the line between $(a,b)$ and $(1,0)$ equals the slope between $(c,d)$ and $(1,0)$, implying that the three are collinear and cannot all be vertices unless $(a,b)=(c,d)$. \end{proof} In $\mathbb{R}^3$, it is more difficult to classify the convex polytopes of minimum psd rank. We have seen that all polytopes with four or five vertices have psd rank four. Additionally, we can say precisely which octahedra in $\mathbb{R}^3$ have psd rank four. Let $O \subset \mathbb{R}^3$ be a (combinatorial) octahedron. We say that $O$ is planar with respect to a plane $E$ if $O \cap E$ contains four vertices of $O$. For example, the regular octahedron is planar to the $xy$, $xz$, and $yz$ planes. A combinatorial octahedron can be planar with respect to at most three planes. We say $O$ is \emph{biplanar} if it is planar with respect to at least two distinct planes. \begin{theorem} \label{thm:octahedra} An octahedron $O \subset \mathbb{R}^3$ has psd rank four if and only if $O$ is biplanar. \end{theorem} \begin{proof} First, assume $O$ is biplanar. Then, by applying an affine transformation, we can assume that $O$ is planar with respect to the $xy$ plane and has vertices $(0,0,0)$, $(1,0,0)$, $(0,1,0)$, $(a,b,0)$, $(z_1,z_2,z_3)$, and $(w_1,w_2,w_3)$ where $z_3 > 0$, $w_3 < 0$, and $a+b > 1$. For ease of notation, let $\alpha = z_3 - w_3$, $\beta = w_1z_3 - z_1w_3$, and $\gamma = w_2z_3 - z_2w_3$. Then $(0,0,0)$, $(a,b,0)$, $(z_1,z_2,z_3)$, $(w_1,w_2,w_3)$ are coplanar if and only if $b\beta = a\gamma$ and $(1,0,0)$, $(0,1,0)$, $(z_1,z_2,z_3)$, $(w_1,w_2,w_3)$ are coplanar if and only if $\alpha = \beta + \gamma$. The combinatorics of $O$ dictates that these are the only possible further planarities, and since $O$ is biplanar, at least one of these conditions must be satisfied. Now $O$ has slack matrix $S_O$: \begin{small} \[ \left[ \begin{array}{cccccccc} 0 & 0 & b & a & 0 & 0 & b & a \\ 1 & 0 & 0 & a+b-1 & 1 & 0 & 0 & a+b-1 \\ 0 & 1 & a+b-1 & 0 & 0 & 1 & a+b-1 & 0 \\ a & b & 0 & 0 & a & b & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{-\beta}{w_3} & \frac{-\gamma}{w_3} & \frac{b(\beta-\alpha) + (1-a)\gamma}{w_3} & \frac{a(\gamma-\alpha) + (1-b)\beta}{w_3} \\ \frac{\beta}{z_3} & \frac{\gamma}{z_3} & \frac{b(\alpha-\beta) + (a-1)\gamma}{z_3} & \frac{a(\alpha-\gamma) + (b-1)\beta}{z_3} & 0 & 0 & 0 & 0 \end{array} \right] . \] \end{small} In the case $b\beta = a\gamma$ or the case $\alpha = \beta + \gamma$, row reduction shows that $\sqrt[+]{S_O}$ has rank four. Hence, $O$ has psd rank four. 
For the converse, suppose $O$ is planar to either one or zero planes. If a planar condition is satisfied, assume it is by the vertices $v_1,v_2,v_3,v_4$. By applying an affine transformation, we can assume that $v_1 = (0,0,1)$, $v_2 = (0,0,0)$, $v_3 = (1,0,0)$, and $v_5 = (0,1,0)$. Let $v_4 = (z_1,z_2,z_3)$ and $v_6 = (w_1,w_2,w_3)$ where we must have \begin{equation} \label{eq:octahedron conditions} z_1 < 0, \,\, w_3 > 0, \,\,1- z_1 - z_2 - z_3 > 0, \textup{ and } 1 - w_1 - w_2 - w_3 > 0 \end{equation} to preserve the combinatorial structure. (These are not all of the required conditions, but we will use these particular ones below.) Since $O$ cannot satisfy planarity conditions on the set of vertices $\left\{v_1,v_2,v_5,v_6\right\}$ or $\left\{v_3,v_4,v_5,v_6\right\}$, we must have that \begin{equation} \label{eq:planarity violations} w_1 \neq 0 \textup{ and } w_1 z_3 + w_2 z_3 - z_1 w_3 - z_2 w_3 + w_3 - z_3 \neq 0. \end{equation} We calculate the slack matrix $S_O$ and consider its $5 \times 5$ submatrix $M$ indexed by the vertices $v_1,v_2,v_3,v_5,v_6$ in the rows and the facets $F_{1,3,5}$, $F_{2,3,6}$, $F_{2,4,5}$, $F_{1,3,6}$, $F_{1,4,5}$ in the columns where $F_{i,j,k}$ is the facet defined by the vertices $v_i,v_j,v_k$. After multiplying the rows and columns by nonnegative constants, $M$ has the form: \begin{tiny} \[ \left[ \begin{array}{ccccc} 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 & 1 \\ 0 & 0 & z_3 & 0 & 1-z_1-z_2-z_3 \\ 0 & w_3 & 0 & 1-w_1-w_2-w_3 & 0 \\ -z_1(1-w_1-w_2-w_3) & 0 & -z_1 w_3 + w_1 z_3 & 0 & -z_1(1-w_2-w_3) + w_1(1-z_2-z_3) \end{array} \right] . \] \end{tiny} Now consider an arbitrary Hadamard square root $\sqrt{M}$. For the purposes of calculating rank of $\sqrt{M}$, we can assume that the $(1,2)$, $(1,3)$, $(2,1)$, $(2,4)$, and $(2,5)$ entries of $\sqrt{M}$ are all $1$. Let \[ S= \left[ \begin{array}{ccccc} 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 & 1 \\ 0 & 0 & s_1 & 0 & s_2 \\ 0 & s_3 & 0 & s_4 & 0 \\ s_5 & 0 & s_6 & 0 & s_7 \end{array} \right] . \] be a symbolic matrix corresponding to a $\sqrt{M}$ and let $\tilde{z}_1,\ldots,\tilde{w}_3$ be variables corresponding to $z_1,\ldots,w_3$. Consider the ideal $I$ generated by the polynomials: \[ \left\{ \det S, \,s_1^2 - \tilde{z}_3, \ldots, s_7^2 + \tilde{z}_1(1-\tilde{w}_2-\tilde{w}_3) - \tilde{w}_1(1-\tilde{z}_2-\tilde{z}_3) \right\}. \] Now if $\textup{rank}_{\textup{psd}}\, O = 4$, then $\textup{rank}_{\! \! {\sqrt{\ }}}\, M \leq 4$ and, hence, there must exist real numbers $x_1,\ldots,x_7$ such that $(x_1,\ldots,x_7,z_1,\ldots,w_3)$ lies in $V(I)$, the variety of $I$. The three possible planarity conditions on $O$ are given by the equations: \[ \tilde{w}_1 = 0, \,\,\tilde{z}_2 =0, \textup{ and } \tilde{w}_1 \tilde{z}_3 + \tilde{w}_2 \tilde{z}_3 - \tilde{z}_1 \tilde{w}_3 - \tilde{z}_2 \tilde{w}_3 + \tilde{w}_3 - \tilde{z}_3 = 0. \] Let $J_1, J_2, J_3$ be the ideals generated by two each of the three polynomials defining the above planarity conditions. Then the product ideal $J := J_1*J_2*J_3$ has variety $V(J) = V(J_1) \cup V(J_2) \cup V(J_3)$. By our planarity assumption on $O$, $(x_1,\ldots,x_7,z_1,\ldots,w_3)$ is not contained in $V(J)$. Now $V(I) \backslash (V(J)$ is contained in the variety of the colon ideal $I:J$ \cite[Chapter 4.4, Theorem 7]{CLO} and, hence, $(x_1,\ldots,x_7,z_1,\ldots,w_3)$ vanishes on every polynomial in $I:J$. 
Using Macaulay2 \cite{M2}, we can compute a set of generators of $I:J$ and by elimination one sees that \[ f = \tilde{z}_1\tilde{w}_1\tilde{w}_3(\tilde{w}_1+\tilde{w}_2+\tilde{w}_3 - 1)(\tilde{z}_1+\tilde{z}_2+\tilde{z}_3 - 1)(\tilde{w}_1 \tilde{z}_3 + \tilde{w}_2 \tilde{z}_3 - \tilde{z}_1 \tilde{w}_3 - \tilde{z}_2 \tilde{w}_3 + \tilde{w}_3 - \tilde{z}_3) \] lies in $I:J$. However, no choice of $z_1,\ldots,w_3$ that is required to satisfy (\ref{eq:octahedron conditions}) and (\ref{eq:planarity violations}) can vanish on $f$. Hence, we must have $\textup{rank}_{\textup{psd}}\, O \geq 5$. \end{proof} A {\em cuboid}, or combinatorial cube, is a polytope in $\mathbb{R}^3$ that is combinatorially equivalent to a cube. Since the polars of cuboids are octahedra and psd rank is preserved under polarity, the cuboids of minimal psd rank are precisely those that are polars of biplanar octahedra. We call these biplanar cuboids. More explicitly, these are the cuboids for which there exists two sets of four facets whose supporting hyperplanes intersect in a point (possibly at infinity). We will now argue that there are no polytopes in $\mathbb{R}^3$ of psd rank four beyond the ones we have considered above (and their polars). Let $P$ be a polytope in $\mathbb{R}^3$ of psd rank four. By Proposition~\ref{prop:face psd rank}, all the facets of $P$ must be triangles or quadrilaterals. Further, since $\textup{rank}_{\textup{psd}}\, P^\circ = 4$, each vertex of $P$ must be of {\em degree} three or four. Recall that the degree of a vertex of $P$ is the number of edges of $P$ incident to that vertex. \begin{lemma}\label{lem:facet restriction} Let $P \subset \mathbb{R}^3$ be a three-dimensional polytope with $\textup{rank}_{\textup{psd}}\, P = 4$. If $p$ is a vertex of $P$ of degree four, then the four facets incident to $p$ must be triangles. \end{lemma} \begin{proof} Let $P$ and $p$ be as above and suppose that the four facets incident to $p$ are not all triangles. By Proposition~\ref{prop:face psd rank}, one of the facets surrounding $p$ must be a quadrilateral and $P$ contains the following structure (with $p_1,\ldots,p_5$ vertices of $P$): \begin{center} \begin{tikzpicture} [scale=.3,auto=center,every node/.style={circle,fill=blue!20}] \node (p) at (6,6) {$p$}; \node (p1) at (1,11) {$p_1$}; \node (p2) at (1,1) {$p_2$}; \node (p3) at (11,1) {$p_3$}; \node (p4) at (16,6) {$p_4$}; \node (p5) at (11,11) {$p_5$}; \foreach \from/\to in {p/p1,p/p2,p/p3,p/p5,p3/p4,p4/p5} \draw (\from) -- (\to); \end{tikzpicture} \end{center} Let $S_P$ be a slack matrix of $P$. Then $S_P$ is of rank four. Further, since $P$ has minimum psd rank, there exists a Hadamard square root $\sqrt{S_P}$ of rank four. Let $M$ be the $5 \times 4$ submatrix of $\sqrt{S_P}$ indexed by $p, p_1,p_2,p_3, p_4$ in the rows and by the four facets incident to $p$ in the columns. By scaling the columns of $\sqrt{S_P}$ by nonzero scalars, we may assume that $M$ is of the following form, with $a,b,c,d,e$ nonzero: \[ \left[\begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & a \\ 1 & 0 & b & 0 \\ c & d & e & 0 \end{array} \right]. \] The four rows of $\sqrt{S_P}$ and $S_P$ corresponding to the first four rows of $M$ are linearly independent by the structure of $M$. Hence, we can write the row of $\sqrt{S_P}$ and $S_P$ corresponding to the fifth row of $M$ as a linear combination of the other four. Thus, we can write the fifth row of $M$ and $M^2$ as a linear combination of the first four. 
This results in two necessary equations: $d + ae = abc$ and $d^2 + a^2e^2 = (abc)^2$, which implies that $ade=0$, a contradiction. \end{proof} \begin{proposition}\label{prop:3d classification} A polytope in $\mathbb{R}^3$ of psd rank four has the combinatorial type of a simplex, quadrilateral pyramid, bisimplex, triangular prism, octahedron, or cube. \end{proposition} \begin{proof} Let $P$ be a polytope in $\mathbb{R}^3$ of psd rank four with $v$ vertices, $e$ edges, and $f$ facets. Let $v_t$ and $v_q$ denote the number of vertices of degree three and four in $P$, and let $f_t$ and $f_q$ denote the number of triangular and quadrangular facets of $P$. By double counting edges, $2e = 3f_t+4f_q$, and by considering $P^\circ$, we also see that $2e=3v_t+4v_q$. Now using Euler's formula, $v-e+f=2$, it is easy to deduce that $v_t$ and $f_t$ are even and that $v_t+f_t=8$. Hence, we only need to consider polytopes where $(v_t,f_t)$ equals $(0,8)$, $(2,6)$, $(4,4)$, $(6,2)$, or $(8,0)$. Further, by taking polars we need only consider the cases where $(v_t,f_t)$ equals $(0,8)$, $(2,6)$, or $(4,4)$. When $(v_t,f_t) = (0,8)$, we have that every vertex is of degree four. Thus, by Lemma~\ref{lem:facet restriction}, every facet must be triangular. The only polytope in $\mathbb{R}^3$ that satisfies these conditions is the octahedron. Now suppose $(v_t,f_t) = (4,4)$. If there are no degree four vertices, then there are only four total vertices and the polytope must be the simplex. If there is a degree four vertex, then by Lemma~\ref{lem:facet restriction} the polytope must contain the following configuration: \begin{center} \begin{tikzpicture} [scale=.3,auto=center,every node/.style={circle,fill=blue!20}] \node (p) at (6,6) {$p$}; \node (p1) at (1,11) {$p_1$}; \node (p2) at (1,1) {$p_2$}; \node (p3) at (11,1) {$p_3$}; \node (p4) at (11,11) {$p_4$}; \foreach \from/\to in {p/p1,p/p2,p/p3,p/p4,p3/p4,p4/p1,p1/p2,p2/p3} \draw (\from) -- (\to); \end{tikzpicture} \end{center} If vertex $p_1$,$p_2$,$p_3$, or $p_4$ has degree four, then we will be forced to include too many triangular facets. Thus, they all have degree three and the polytope is a quadrilateral pyramid. Finally, suppose $(v_t,f_t) = (2,6)$. Then $P$ must have a degree four vertex (call it $p$) and the configuration above is again included in the boundary complex of $P$ with the four triangles shown being facets of $P$. Since $P$ has only two vertices of degree three, at least two of the vertices surrounding $p$ must have degree four. Suppose two adjacent vertices among $p_1,p_2,p_3,p_4$ have degree four. Then each of them must be contained in four triangular facets which means that each such vertex is incident to two triangular facets that are not shown in the figure. But since these degree four adjacent vertices already share a facet, they can share at most one of these four extra triangular facets. This creates a total of seven triangular facets in $P$ contradicting $f_t=6$. Therefore, the two vertices of degree four among $p_1,p_2,p_3,p_4$ must be nonadjacent. As before, each is adjacent to two triangular facets that are not shown and since $f_t=6$, it must be that the two vertices share these two triangular facets. Therefore, $P$ is a bisimplex. Now the facts that the polar of a bisimplex is combinatorially a triangular prism, and the polar of an octahedron is a cube completes the proof. \end{proof} We now immediately obtain the following theorem which gives a complete classification of polytopes in $\mathbb{R}^3$ of psd rank four. 
\begin{theorem} \label{thm:3d polytopes of psd rank four} The polytopes in $\mathbb{R}^3$ of psd rank four are precisely simplices, quadrilateral pyramids, bisimplices, combinatorial triangular prisms, biplanar octahedra, and biplanar cuboids. \end{theorem} A major catalyst for the use of semidefinite programming in combinatorial optimization was the {\em Lov{\'a}sz theta body of a graph} \cite{ShannonCapacity, GLS}, denoted as $\textup{TH}(G)$, which is a convex relaxation of the stable set polytope of a graph. Let $G = ([n],E)$ be a graph with vertex set $[n] := \{1, \ldots, n \}$ and edge set $E$. Recall that a {\em stable set} of $G$ is a subset $S \subseteq [n]$ such that for all $i,j \in S$, the pair $\{i,j\}$ is not in $E$. The {\em characteristic vector} of a stable set $S$ is $\mathcal{X}^S \in \{0,1\}^n$ defined as $(\mathcal{X}^S)_i = 1$ if $i \in S$ and $0$ otherwise. The {\em stable set polytope} of $G$ is the $n$-dimensional polytope $$ \textup{STAB}(G) := \textup{convex hull}( \mathcal{X}^S \,:\, S \textup{ stable set in } G ) \subset \mathbb{R}^n,$$ and $\textup{TH}(G)$ is the following projection of an affine slice of $\mathcal{S}_+^{n+1}$: $$\left\{ x \in \mathbb{R}^n \,:\, \exists \left[ \begin{array}{cc} 1 & x^T \\ x & U \end{array} \right] \succeq 0 \textup{ s.t. } U_{ii} = x_i \,\,\forall \,\, i=1,\ldots,n \textup{ and } U_{ij} = 0 \,\,\forall \,\,\{i,j\} \in E \right\}.$$ Further, $\textup{TH}(G) = \textup{STAB}(G)$ if and only if $G$ is a {\em perfect graph} \cite[Chapter 9]{GLS}. Hence if $G$ is perfect, $\textup{rank}_{\textup{psd}}\, \textup{STAB}(G) = n+1$ and the description of $\textup{TH}(G)$ gives an $\mathcal{S}_+^{n+1}$-lift of $\textup{STAB}(G)$. In the context of this paper, it is natural to ask if there are non-perfect graphs for which $\textup{rank}_{\textup{psd}}\, \textup{STAB}(G) = n+1$, via other $\mathcal{S}_+^{n+1}$-lifts. \begin{theorem} \label{thm:perfect graphs} Let $G$ be a graph with $n$ vertices. Then $\textup{STAB}(G)$ has psd rank $n+1$ if and only if $G$ is perfect. \end{theorem} \begin{proof} We saw that $\textup{rank}_{\textup{psd}}\, \textup{STAB}(G) = n+1$ when $G$ is a perfect graph with $n$ vertices. Suppose $G$ is not perfect. By Proposition~\ref{prop:face psd rank}, it is enough to show that $\textup{STAB}(G)$ has a face that is not of minimal psd rank. By the strong perfect graph theorem \cite{CRSTAnnals}, $G$ contains an {\em odd hole} or an {\em odd anti-hole} $H$. Since $\textup{STAB}(H)$ forms a face of $\textup{STAB}(G)$, we just need to show that $\textup{STAB}(H)$ is not of minimal psd rank. Let $H=([2m+1],E)$ and assume $H$ is an odd hole. The anti-hole case is exactly analogous and is omitted here. Now $\textup{STAB}(H)$ is a $(2m+1)$-dimensional polytope with facet inequalities: \begin{enumerate} \item $x_i \geq 0$ for each $i \in [2m+1]$ \item $\mathbf x_e \leq 1$ for each $e \in E$ \item $\mathbf x_{[2m+1]} \leq m$ \end{enumerate} where $\mathbf x_T := \sum_{i \in T} x_i$ for every subset $T$ of $[2m+1]$ and $\mathbf x_e := x_{i} + x_{j}$ for $e=\left\{ i,j \right\} \in E$. Let $S$ be the slack matrix of $\textup{STAB}(H)$ and let $S^\prime$ be the $(2m+3) \times (2m+3)$ submatrix of $S$ indexed by the stable sets \[ \left\{ \; \right\},\left\{ 1 \right\},\left\{ 2 \right\},\ldots,\left\{ 2m+1 \right\},\left\{ 1,3\right\} \] in the rows and the facets $\mathbf x_{\left\{ 1,2\right\}} \leq 1, x_{1} \geq 0, \ldots, x_{2m+1} \geq 0, \mathbf x_{[2m+1]} \leq m$ in the columns.
Then $S^\prime$ has the form: \[ \left[ \begin{array}{cccccccc} 1 & 0 & 0 & 0 & 0 & \cdots & 0 & m \\ 0 & 1 & 0 & 0 & 0 & \cdots & 0 & m-1 \\ 0 & 0 & 1 & 0 & 0 & \cdots & 0 & m-1 \\ 1 & 0 & 0 & 1 & 0 & \cdots & 0 & m-1 \\ 1 & 0 & 0 & 0 & 1 & \cdots & 0 & m-1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & 0 & 0 & 0 & 0 & \cdots & 1 & m-1 \\ 0 & 1 & 0 & 1 & 0 & \cdots & 0 & m-2 \end{array} \right] . \] Let $\sqrt{S^\prime}$ be an arbitrary Hadamard square root and suppose that $\textup{rank}\, \sqrt{S^\prime} \leq 2m+2$. Then since the first $2m+2$ columns are linearly independent, we must have that the final column is a linear combination of the first $2m+2$. Let $\alpha_1,\ldots,\alpha_{2m+2}$ be coefficients in such a combination. By looking at the first, second, and fourth rows, we see that $\alpha_1 = \pm \sqrt{m}$, $\alpha_2 = \pm \sqrt{m-1}$, and $\alpha_4 = \pm \sqrt{m} \pm \sqrt{m-1}$. Now by looking at the last row, we must have $\pm \alpha_2 \pm \alpha_4 = \pm \sqrt{m-2}$, which is impossible for any choice of signs: the left-hand side equals $\pm\sqrt{m}$ or $\pm\sqrt{m} \pm 2\sqrt{m-1}$, and a short computation shows that none of these values can equal $\pm\sqrt{m-2}$. Hence, $\textup{rank}_{\! \! {\sqrt{\ }}}\, S > 2m+2$ and we have that $\textup{STAB}(H)$ is not of minimal psd rank. \end{proof} \bibliographystyle{plain}
{ "timestamp": "2013-08-01T02:07:55", "yymm": "1205", "arxiv_id": "1205.5306", "language": "en", "url": "https://arxiv.org/abs/1205.5306", "abstract": "The positive semidefinite (psd) rank of a polytope is the smallest $k$ for which the cone of $k \\times k$ real symmetric psd matrices admits an affine slice that projects onto the polytope. In this paper we show that the psd rank of a polytope is at least the dimension of the polytope plus one, and we characterize those polytopes whose psd rank equals this lower bound. We give several classes of polytopes that achieve the minimum possible psd rank including a complete characterization in dimensions two and three.", "subjects": "Optimization and Control (math.OC)", "title": "Polytopes of Minimum Positive Semidefinite Rank", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575157745542, "lm_q2_score": 0.7217432122827968, "lm_q1q2_score": 0.7091542176877316 }
https://arxiv.org/abs/2203.10509
Stability Of Matrix Polynomials In One And Several Variables
The paper presents methods of eigenvalue localisation for regular matrix polynomials; in particular, the stability of matrix polynomials is investigated. To this end, a stronger notion of hyperstability is introduced and discussed in detail. Matrix versions of the Gauss-Lucas theorem and of the Szász inequality are shown. Further, tools for investigating (hyper)stability by methods of multivariate complex analysis are provided. Several second- and third-order matrix polynomials with particular semi-definiteness assumptions on the coefficients are shown to be stable.
\section{Introduction} Given a matrix polynomial $P(\lambda)=\lambda^d A_d+ \lambda^{d-1} A_{d-1} + \dots + A_0$ ($A_0, A_1, \dots , A_d \in \mathbb{C}^{n, n}$), it is natural to ask under which conditions all eigenvalues are located outside a given set $D$. By eigenvalues we mean zeros of the function $\lambda\mapsto\det P(\lambda)$, which is assumed to be nonzero, i.e., $ P(\lambda)$ is regular. Note that $ P(\lambda)$ has no eigenvalues in some set $D$ if and only if \begin{equation}\label{SC} \textrm{ for all } {\mu \in D}, \;{\textrm{ for all } } {x \in \mathbb{C}^n \setminus \{0\}},\; \textrm{ there exists } {y \in \mathbb{C}^n } \textrm{ such that } y^*P(\mu)x \neq 0. \end{equation} There are several techniques for proving such localisation of eigenvalues. One of them is the numerical range \cite{LiR94, Psa03, Psa00}, see Definition~\ref{numran} below (see also \cite{bini2013, hig2003} for other techniques less related to the current project). Indeed, it is elementary that the numerical range contains all eigenvalues, cf. \cite{mar1997, MehMW22} for applications. However, a localisation condition for eigenvalues that uses the numerical range is in general very restrictive. Consider for example a port-Hamiltonian pencil of the form (cf. \cite{MehMW18}, see also \cite{GerHH21,mehrmann2022control}) \begin{equation}\label{pH} \lambda E+(J+R)Q \end{equation} with the matrices $R$ and $E^*Q$ positive semi-definite, $J$ skew-symmetric, and $Q$ not necessarily invertible, but satisfying a weaker condition. It was shown in \cite{MehMW18} that the eigenvalues are in the closed left half-plane. The key issue of that proof, although not stated explicitly, was considering the numerical range of the pencil $\lambda Q^* E+Q^*(J+R)Q$, not of the original one. In order to cover such situations, we introduce the following condition \begin{equation}\label{HSC} \textrm{ for all } x \in \mathbb{C}^n \setminus \{0\},\;\textrm{ there exists } y \in \mathbb{C}^n, \textrm{ such that for all } \mu \in D : y^*P(\mu)x \neq 0. \end{equation} We will refer to condition \eqref{HSC} as \textit{hyperstability with respect to $D$}, see Definition \ref{defla}. Note that on the one hand it is clearly stronger than \eqref{SC}; on the other hand, it is clearly weaker than requiring that the numerical range lies outside $D$ (take $y=x$). The choice $y=Qx$ shows that the pencil in \eqref{pH} is hyperstable with respect to the open right half-plane; this idea will be extended in the current paper to quadratic polynomials in Corollary~\ref{c-hp}. The first outcome of the present manuscript is to present some classes of matrix polynomials for which the conditions \eqref{SC} and \eqref{HSC} are equivalent, see Theorem \ref{uppert}, and to provide examples showing that they are not equivalent in general, cf. Examples \ref{exa} and \ref{hyper-nsinf}. In particular, for linear pencils the two notions coincide. Further, we show that equivalence of matrix polynomials does not preserve hyperstability (Remark~\ref{eqstab}). For these two reasons, the notion does not seem to be closely related to invariant factors, and we concentrate the current research on the location of eigenvalues only. On the other hand, the notion is related to current work on (generalised) triangularisation of matrix polynomials \cite{ang2021, trian2013, tis2013}, see Proposition \ref{upperblock} and Theorem \ref{uppert}. Further, it appears that the hyperstability notion in \eqref{HSC} inherits many properties of stability for scalar polynomials.
For example, we show a hyperstable version of the Gauss-Lucas Theorem for matrix polynomials (Theorem~\ref{GLmat}), while an analogous result using the notion of stability \eqref{SC} simply fails, see Example \ref{nonGL}. Our next contribution is introducing several complex variables. To the best of our knowledge, the notions of regularity and numerical range for multivariate matrix polynomials are not yet established and need a separate investigation. We show that the condition of hyperstability may be used as an alternative, as the condition \eqref{HSC} generalises naturally to multivariate matrix polynomials, see Definition~\ref{defzi}. Further, we introduce the polarization operator for matrix polynomials, see Section~\ref{sPol}. E.g., for a quadratic matrix polynomial $\lambda^2 A_2 +\lambda A_1+ A_0$ its polarization is given by $z_1z_2A_2+\frac{z_1+z_2}2A_1+A_0$. It is well known that the polarization operator preserves stability of scalar polynomials of any degree. This concept was heavily used, e.g., in \cite{BorB09}, where all operators preserving stability of multivariate polynomials were classified. It appears that polarisation also preserves hyperstability of matrix polynomials, see Theorem~\ref{Tkappa2}. Again, as in the case of the Gauss-Lucas Theorem, an analogous result, with the stability condition in place of hyperstability, is not true (see Example \ref{nonstab}). Another outcome of the paper is a collection of general hyperstability criteria for quadratic and cubic matrix polynomials, see Theorems \ref{poly2} and \ref{poly3}. For example, we show that if the function $\det(z_1^2 A_2+z_2 A_1 +A_0)$ has no zeros in some set $D^2$ ($D\subseteq\mathbb{C}$) then the matrix polynomial $\lambda^2 A_2+\lambda A_1+A_0$ satisfies the hyperstability condition \eqref{HSC} with respect to $D$. Note that analytic properties of a certain multivariate polynomial imply here purely linear-algebraic properties of the matrix polynomial $P(\lambda)$, which we find remarkable. The last outcome of the paper is proving hyperstability and stability of some one-variable matrix polynomials of degrees two and three and of certain multivariate matrix polynomials. The methods are particularly tailored for matrix polynomials with certain positivity conditions on the coefficients. Such polynomials were recently considered in \cite{MehMW21,MehMW22} due to their numerous applications in mathematical modelling \cite{BetHMST08,MehMW21,mehrmann2022control}. In particular, Theorems \ref{half-plane}, \ref{ker} and \ref{deg3} contribute to this line of research. The paper is organised as follows. Section~\ref{sPrel} contains the usual linear algebra notions and notation. In Section~\ref{sHyper} we introduce hyperstability and show its basic properties. In Section~\ref{sGL} we prove a generalisation of the Gauss-Lucas theorem for hyperstable matrix polynomials and a Sz\'asz-type inequality for matrix polynomials. In Section~\ref{s5} we introduce methods of complex analysis in several variables and show some hyperstability criteria for quadratic and cubic matrix polynomials. In Section~\ref{sPol} we introduce the polarisation operator and show that it preserves hyperstability. The last part of the paper, Section~\ref{sPos}, establishes hyperstability for particular classes of quadratic and cubic matrix polynomials with coefficients satisfying certain positive-definiteness conditions. \section{Preliminaries}\label{sPrel} Let us begin by fixing the notation.
The matrix polynomials of one variable will be denoted as $P(\lambda) = A_d\lambda^d + A_{d-1}\lambda^{d-1} + \dots +A_0$, with matrix coefficients $A_0, A_1, \dots , A_d\in\mathbb{C}^{n,n}$ and $\lambda$ being the complex variable. By multivariate matrix polynomials we will understand finite sums of the form $P(z_1, z_2, \dots, z_\kappa)= \sum_{\alpha_1, \alpha_2, \dots , \alpha_\kappa} z_1^{\alpha_1}z_2^{\alpha_2} \cdots z_\kappa^{\alpha_\kappa} A_{\alpha_1, \alpha_2, \dots , \alpha_\kappa}$, where $A_{\alpha_1, \alpha_2, \dots , \alpha_\kappa} \in \mathbb{C}^{n,n}$, $\alpha_1, \alpha_2 \dots , \alpha_\kappa \geq 0$, and $z_1, z_2 \dots , z_\kappa$ are complex variables. Although we allow $\kappa=1$, this will usually lead to a trivial situation. For scalar polynomials ($n=1$) we will use lower-case letters for the coefficients. Let $D$ be a nonempty open or closed subset of $\mathbb{C}^\kappa$. We will call a scalar polynomial $p \in \mathbb{C}[z_1, z_2, \dots , z_{\kappa}]$ \emph{stable} with respect to $D$ if and only if it has no zeros in $D$. Two possible generalisations of this notion to matrix polynomials, see Definitions~\ref{defla} and \ref{defzi}, are the main object of this paper. \begin{definition} \rm We call a matrix polynomial $P(\lambda) = A_d\lambda^d + A_{d-1}\lambda^{d-1} + \dots +A_0 \in \mathbb{C}^{n, n}[\lambda]$, where $A_d \neq 0$, \emph{regular} if and only if the function $\lambda \mapsto \det P(\lambda)$ is a nonzero scalar polynomial. In such a case, we call a vector $x \in \mathbb{C}^n \setminus \{0\}$ an \emph{eigenvector} of the matrix polynomial $P(\lambda)$ if and only if there exists $\lambda_0 \in \mathbb{C}$ such that \begin{equation*} P(\lambda_0)x = 0\text{.} \end{equation*} Then such a complex number $\lambda_0$ is called an \emph{eigenvalue} corresponding to the eigenvector $x$. Note that for a regular polynomial being an eigenvalue is equivalent to being a root of the scalar polynomial $\lambda \mapsto \det P(\lambda)$. \end{definition} Note that we do not intend to define regular or singular multivariate polynomials; the example $z_1(A+z_2I)$ shows the difficulties in generalising the notion (fixing $z_1=0$ gives a singular polynomial in the second variable). Instead, we will heavily use the hyperstability notion, see Definition \ref{defzi}. Multivariate matrix polynomials will appear again in Section~\ref{s5}; for now we concentrate on matrix polynomials of one variable. In a few cases we need factorization tools such as the well known Kronecker canonical form of a matrix pencil or the Smith canonical form of an arbitrary square matrix polynomial, reviewed below, see \cite[Chapter VI]{Gan59}. \begin{definition} \rm Let $P(\lambda) \in \mathbb{C}^{n, n}[\lambda]$ be a regular matrix polynomial. Then there exist regular polynomials $U(\lambda), V(\lambda) \in \mathbb{C}^{n, n}[\lambda]$ with nonzero constant determinants such that \begin{equation}\label{Smith} U(\lambda)P(\lambda)V(\lambda) = \mathop{\mathrm{diag}}{\Bigl(s_1(\lambda), s_2(\lambda), \dots , s_n(\lambda)\Bigr)} =: S(\lambda)\text{,} \end{equation} where the symbols $s_1(\lambda), s_2(\lambda), \dots , s_n(\lambda) \in \mathbb{C}[\lambda]$ denote uniquely determined monic polynomials such that the polynomial $s_i(\lambda)$ is a divisor of the polynomial $s_{i+1}(\lambda)$ for $i \in \{1, 2, \dots , n-1\}$.
The non-singular diagonal matrix polynomial $ S(\lambda) \in \mathbb{C}^{n, n}[\lambda]$ is called \emph{the Smith canonical form} of $ P(\lambda)$ and the diagonal entries in the right hand side of \eqref{Smith} are called \emph{the invariant factors} of $ P(\lambda)$. \end{definition} Besides the notion of an eigenvalue of a matrix polynomial $P \in \mathbb{C}^{n, n}[\lambda]$ we need a notion of a wider set containing all eigenvalues, i.e. the notion of the numerical range. It was introduced in \cite{LiR94} and studied further in \cite{Psa03,Psa00}. \begin{definition}\label{numran} \rm Let $P(\lambda) \in \mathbb{C}^{n, n}[\lambda]$ be an arbitrary matrix polynomial. The subset \begin{equation} W(P) := \{\mu \in \mathbb{C} : x^*P(\mu)x = 0 \text{ for some } x \neq 0\} \end{equation} of the complex plane $\mathbb{C}$ is called \emph{the numerical range} of the matrix polynomial $P(\lambda)$. \end{definition} \section{Hyperstability of matrix polynomials in one variable}\label{sHyper} In this section we deal with matrix polynomials of one variable; we now introduce the two central notions. \begin{definition}\label{defla} \rm Let the symbol $D$ denote a nonempty open or closed subset of the complex plane $\mathbb{C}$ and let $P(\lambda) \in \mathbb{C}^{n, n}[\lambda]$ be a matrix polynomial. We say that the polynomial $ P(\lambda)$ is \emph{stable with respect to $D$} if and only if it does not have eigenvalues in $D$. Further, we say that the polynomial $P(\lambda)$ \emph{is hyperstable with respect to $D$} if and only if for all $x \in \mathbb{C}^n \setminus\{0\}$ there exists $y \in \mathbb{C}^n \setminus \{0\}$ such that \begin{equation}\label{nozeroinD} y^*P(\mu)x \neq 0 \;\;\text{for all} \;\;\mu \in D\text{.} \end{equation} \end{definition} Observe that stability of a matrix polynomial implies its regularity. Further, the notion of hyperstability is situated somewhere between stability and a (rather strong) numerical range condition. \begin{proposition}\label{abc} Let $P(\lambda) \in \mathbb{C}^{n, n}[\lambda]$ be a matrix polynomial and let $D$ be a nonempty open or closed subset of the complex plane $\mathbb{C}$. Consider the following conditions. \begin{enumerate}[\rm (a)] \item\label{num} the numerical range $W(P)$ does not intersect $D$; \item\label{hyper} the polynomial $P(\lambda)$ is hyperstable with respect to $D$; \item\label{stable} the polynomial $ P(\lambda)$ is stable with respect to $D$. \end{enumerate} Then \eqref{num}$\Rightarrow$\eqref{hyper}$\Rightarrow$\eqref{stable}. \end{proposition} \begin{proof} The first implication follows by setting $y=x$ in \eqref{nozeroinD}. The second implication becomes obvious once one observes that stability can be reformulated as: for all $\mu\in D$ and for all $x \in \mathbb{C}^n \setminus\{0\}$ there exists $y \in \mathbb{C}^n \setminus \{0\}$ such that $y^*P(\mu)x \neq 0$. \end{proof} It appears that in many cases the implication \eqref{num} $\Rightarrow$ \eqref{stable} above is a convenient criterion for stability, see \cite{MehMW22}. Note that the numerical range of a polynomial $P(\lambda)=\lambda I_n - A$ coincides with the numerical range of the matrix $A$. Thus condition \eqref{num} cannot be, in general, equivalent to stability \eqref{stable}. Since for matrix pencils hyperstability \eqref{hyper} is equivalent to stability \eqref{stable} (see Theorem~\ref{uppert}\eqref{pq} below), we obtain that in many cases \eqref{num} is not equivalent to \eqref{hyper}.
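The gap between condition \eqref{num} and conditions \eqref{hyper} and \eqref{stable} can also be observed numerically already for pencils $\lambda I_n - A$ with a nonnormal matrix $A$. The following small sketch (in Python with \texttt{numpy}; the matrix $A$ and the random sampling are illustrative choices made here, not taken from the cited references) exhibits a matrix whose eigenvalues lie in the open left half-plane, so that the pencil $\lambda I_2 - A$ is stable, and hence hyperstable, with respect to the closed right half-plane, while its numerical range $W(\lambda I_2 - A) = W(A)$ still intersects the right half-plane.
\begin{verbatim}
import numpy as np

# Nonnormal matrix with both eigenvalues equal to -1.
A = np.array([[-1.0, 10.0],
              [0.0, -1.0]])

# Eigenvalues of the pencil lambda*I - A are the eigenvalues of A.
print("eigenvalues:", np.linalg.eigvals(A))

# Sample the numerical range W(A) = { x^* A x : ||x|| = 1 }.
rng = np.random.default_rng(0)
max_real = -np.inf
for _ in range(20000):
    x = rng.normal(size=2) + 1j * rng.normal(size=2)
    x = x / np.linalg.norm(x)
    max_real = max(max_real, np.vdot(x, A @ x).real)

# The sampled maximum approaches 4, the largest eigenvalue of (A + A^*)/2,
# so W(A) reaches into the right half-plane although no eigenvalue does.
print("largest sampled real part in W(A):", max_real)
\end{verbatim}
Of course, such sampling only illustrates the phenomenon; the exact rightmost point of $W(A)$ is the largest eigenvalue of $\frac{A+A^*}2$, which equals $4$ for the matrix above.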
Further, hyperstability \eqref{hyper} is, in general, not equivalent to stability \eqref{stable}, cf. the following important example. \begin{example}\label{exa}\rm Let $$ P(\lambda) := \mat{cc} 1 & \lambda \\ \lambda & \lambda^2 + 1\rix. $$ Then $\det P(\lambda)\equiv 1$, hence $P(\lambda)$ is a stable matrix polynomial with respect to any open or closed subset $D$ of the complex plane. However, taking $x=[0 \; 1]^{\top}$ and arbitrary $y = [y_1 \; y_2]^{\top}$ yields $$ y^*P(\lambda)x = \overline{y}_1 \lambda + \overline{y}_2 (\lambda^2 + 1)\text{.} $$ Note that for any $y \neq 0$ the polynomial above has a root in the closed unit disc $\overline{\mathbb{D}}$: either $y_2 = 0$ and we have the single root $\lambda=0$, or $y_2\neq 0$ and we have two roots (counting multiplicities) whose product equals $1$, so at least one of them lies in $\overline{\mathbb{D}}$. Replacing $\lambda$ by $\alpha\lambda + \beta$ one gets an example of a polynomial stable, but not hyperstable, with respect to an arbitrary desired half-plane or disc. E.g., the polynomial $P(\lambda-\ii)$ is stable, but not hyperstable, with respect to the half-plane $H_0$. Observe that the polynomial $P(\alpha\lambda+\beta)$ ($\alpha\neq0$) always has infinity as an eigenvalue, i.e., its leading coefficient $\mathop{\mathrm{diag}}(0,\alpha^2)$ is not an invertible matrix. One may wonder if this property is necessary to construct such an example. This is not the case, see Example~\ref{hyper-nsinf} below. \end{example} Let us recall two notions of equivalence for matrix polynomials, cf. \cite{Gan59}. We say that matrix polynomials $P(\lambda), Q(\lambda) \in \mathbb{C}^{n, n}[\lambda]$ are \emph{equivalent} if and only if there exist matrix polynomials $U(\lambda), V(\lambda) \in \mathbb{C}^{n, n}[\lambda]$ with constant nonzero determinants such that $P(\lambda) = U(\lambda)Q(\lambda)V(\lambda)$. We say that the polynomials $ P(\lambda)$ and $Q(\lambda)$ are \emph{strongly equivalent} if and only if the polynomials $ U(\lambda)$ and $ V(\lambda)$ are constant invertible matrices. Clearly, the latter relation preserves hyperstability; in fact, a stronger statement holds: \begin{lemma}\label{lQ} Let $D$ be any open or closed set, let $Q\in\mathbb{C}^{n,n}$ be any matrix and let $S\in\mathbb{C}^{n,n}$ be invertible. If $Q^* P(\lambda)S$ is hyperstable with respect to $D$, then $P(\lambda)$ is hyperstable with respect to $D$. \end{lemma} The proof follows directly from the definition. In a moment we will see that equivalence does not preserve hyperstability, see Remark~\ref{eqstab}. First, however, we need to show some properties of hyperstability connected with (block) upper-triangular matrix polynomials. \begin{proposition}\label{upperblock} Let $P(\lambda) \in \mathbb{C}^{n,n}[\lambda]$ be a matrix polynomial and let $D$ be an open or closed subset of the complex plane. Assume that $ P(\lambda)$ is strongly equivalent to a block upper-triangular matrix polynomial $$ \mat{ccc} P_{11}(\lambda) & \cdots & P_{1m}(\lambda)\\ &\ddots&\vdots\\ && P_{mm}(\lambda) \rix,\quad P_{ij}(\lambda)\in\mathbb{C}^{k_i,k_j}[ \lambda], \ \sum_{j=1}^m k_j=n, $$ with the diagonal entries hyperstable with respect to $D$. Then the polynomial $P(\lambda)$ is hyperstable with respect to $D$. \end{proposition} \begin{proof} As strong equivalence preserves hyperstability, we may assume without loss of generality that $P(\lambda)$ itself is of the block upper-triangular form above. Take $x = \mat{c} x_1^\top \dots x_m^\top \rix^\top \neq 0$, with $x_j\in \mathbb{C}^{k_j}$, and let $r \in \set{1, 2, \dots , m}$ denote the index of the last nonzero $x_j$.
Let $y_r\in\mathbb{C}^{k_r}\setminus\set0$ be such that the polynomial $ y_r^* P_{rr}(\lambda)x_r$ is stable with respect to $D$. Taking $y=\mat{c} 0 \dots 0\ y_r^\top 0 \dots 0 \rix^\top $ we get $y^* P(\lambda)x=y_r^* P_{rr}(\lambda)x_r$, which is stable with respect to $D$. \end{proof} The following theorem presents a class of matrix polynomials for which the notions of stability and hyperstability coincide. For applications see Proposition~\ref{MGT} below. \begin{theorem}\label{uppert} Let $P(\lambda) \in \mathbb{C}^{n, n}[\lambda]$ be a matrix polynomial and let $D$ be an open or closed subset of the complex plane. Then the following holds. \begin{enumerate}[\rm (i)] \item\label{ut} Assume that the matrix polynomial $P(\lambda)$ is strongly equivalent to an upper triangular matrix polynomial. Then it is hyperstable with respect to $D$ if and only if it is stable with respect to $D$. \item\label{pq} If $P(\lambda) = p(\lambda)A + q(\lambda)B$ with some scalar polynomials $p(\lambda), q(\lambda) \in \mathbb{C}[\lambda]$ and $A, B \in \mathbb{C}^{n, n}$ (in particular: if $P(\lambda)$ is a matrix pencil, i.e. a matrix polynomial of degree one), then it is hyperstable with respect to $D$ if and only if it is stable with respect to $D$. \end{enumerate} \end{theorem} \begin{proof} \eqref{ut} Consider a matrix polynomial $P(\lambda)$ stable with respect to $D$. By assumption there exist constant invertible matrices $U, V \in \mathbb{C}^{n, n}$ such that $$ U^{-1}P(\lambda)V^{-1} = \mat{ccc} p_{11}(\lambda) & \cdots & p_{1n}(\lambda)\\ &\ddots&\vdots\\ && p_{nn}(\lambda) \rix $$ is an upper triangular matrix polynomial, which is stable with respect to $D$ as well. Hence, the scalar polynomials on the diagonal $p_{11}(\lambda), p_{22}(\lambda), \dots , p_{nn}(\lambda) \in \mathbb{C}[\lambda]$ are stable, equivalently hyperstable, with respect to $D$. By Proposition~\ref{upperblock}, $P(\lambda)$ is hyperstable with respect to $D$; the converse implication is contained in Proposition~\ref{abc}. \eqref{pq} Using the Kronecker form \cite{Gan59} (or the generalised Schur form) of the matrices $A$ and $B$ we obtain that the matrix polynomial $\lambda \mapsto p(\lambda)A + q(\lambda)B$ is strongly equivalent to an upper triangular one, and the claim follows from \eqref{ut}. \end{proof} \begin{remark}\label{eqstab}\rm Note that an arbitrary matrix polynomial $P(\lambda) \in \mathbb{C}^{n, n}[\lambda]$ is equivalent to an upper triangular matrix polynomial (with $V(\lambda) = I_n$), see \cite[Chapter VI]{Gan59}. E.g., the polynomial in Example~\ref{exa} satisfies the following equality: $$ \mat{cc} 1 & \lambda\\ \lambda & \lambda^2 + 1\rix = \mat{cc} 1 & 0\\ \lambda & 1\rix\mat{cc} 1 & \lambda\\ 0 & 1\rix\text{.} $$ Hence, equivalence of matrix polynomials does not preserve hyperstability. \end{remark} \section{Gauss-Lucas Theorem and Sz\'asz type inequality for matrix polynomials}\label{sGL} We present here generalisations of two major results for scalar polynomials to matrix polynomials. The celebrated Gauss-Lucas Theorem says that for a non-constant scalar polynomial $p(\lambda)$ the roots of $p'(\lambda)$ are contained in the convex hull of the roots of $p(\lambda)$. Using the terminology of the current paper one can formulate it as follows: if $D\subseteq\mathbb{C}$ is such that $\mathbb{C}\setminus D$ is convex and $p(\lambda)$ is stable with respect to $D$ then $p'(\lambda)$ is stable with respect to $D$. It seems to be generally known that a similar statement cannot be true for matrix polynomials. Nonetheless, we present a very general example which will serve in future constructions.
\begin{example}\label{nonGL}\rm Consider a polynomial $$ P(\lambda)=\mat{cc} \lambda p(\lambda)+q(\lambda) & p(\lambda)\\ \lambda & 1\rix, $$ where $q(\lambda)$ has its roots outside $D$ and $p'(\lambda)$ is non-constant and has some of its roots inside $D$. Then $$ \det P(\lambda)=q(\lambda),\quad \det P'(\lambda) = -p'(\lambda), $$ i.e. the polynomial $P(\lambda)$ is stable with respect to $D$ but its derivative $P'(\lambda)$ is not. \end{example} The example above says that, in general, there is no relation between the location of eigenvalues of a polynomial $P(\lambda)$ and of its derivative. However, a hyperstable version of the Gauss-Lucas Theorem holds. \begin{theorem}\label{GLmat} Let $D\subseteq\mathbb{C}$ be such that $\mathbb{C}\setminus D$ is convex. If a matrix polynomial $P(\lambda)$ is hyperstable with respect to $ D$ and the entries of its derivative $P'(\lambda)$ are linearly independent polynomials, then the matrix polynomial $P'(\lambda)$ is also hyperstable with respect to $D$. \end{theorem} \begin{proof} Fix $x\in\mathbb{C}^n\setminus\{0\}$. As $P(\lambda)$ is hyperstable with respect to $D$ there exists $y\in\mathbb{C}^n\setminus\{0\}$ such that the scalar polynomial $p(\lambda)=y^*P(\lambda)x$ has all its roots in $\mathbb{C}\setminus D$. Note that the polynomial $p'(\lambda)=y^*P'(\lambda)x$ is a nontrivial linear combination of entries of $P'(\lambda)$, hence $p'(\lambda)$ is a nonzero polynomial. As $\mathbb{C}\setminus D$ is convex, by the Gauss-Lucas theorem, $p'(\lambda)$ has all its roots in $\mathbb{C}\setminus D$. In particular, $y^*P'(\mu)x \neq 0$ for all $\mu \in D$, which proves hyperstability of $P'(\lambda)$ with respect to $D$. \end{proof} Firstly, note that the assumption on the derivative $P'(\lambda)$ having linearly independent entries cannot be dropped: \begin{example}\rm The polynomial $P(\lambda)=\mathop{\mathrm{diag}}(\lambda, 1)$ is hyperstable with respect to the outside of the unit disc $\mathbb{D}$, but its derivative $P'(\lambda)$ is singular, hence is not hyperstable with respect to any nonempty set. \end{example} Secondly, note that the assumption on $P(\lambda)$ being hyperstable cannot be relaxed to stability, even if we keep the assumption on the entries of $P'(\lambda)$ being linearly independent and relax the claim to $P'(\lambda)$ being stable with respect to $D$. \begin{example}\label{hyper-nsinf}\rm We specify and modify the polynomial $P(\lambda)$ from Example~\ref{nonGL}. Let $D=\{z\in\mathbb{C}:|z|>1\}$, $q(\lambda)=\lambda^2$, $p(\lambda)=\lambda^3-4\lambda$ and consider a perturbed polynomial $$ P_\varepsilon(\lambda):= \mat{cc} \lambda p(\lambda)+q(\lambda) & p(\lambda)\\ \lambda & 1+\varepsilon \lambda^4\rix= \mat{cc} \lambda^4-3\lambda^2 & \lambda^3-4\lambda\\ \lambda & 1+\varepsilon \lambda^4\rix. $$ Fix $\varepsilon>0$ sufficiently small, so that $P_\varepsilon(\lambda)$ still has its eigenvalues inside the unit disc $\mathbb{D}$ and $P'_\varepsilon(\lambda)$ still has its eigenvalues outside the closed unit disc. This is possible due to the continuity of the zeros of polynomials with respect to the coefficients. Further, note that $$ P'_\varepsilon(\lambda) = \mat{cc} 4\lambda^3-6\lambda & 3\lambda^2-4\\ 1 & 4\varepsilon \lambda^3\rix, $$ which clearly has linearly independent entries. By Theorem~\ref{GLmat} the polynomial $P_\varepsilon(\lambda)$ is not hyperstable with respect to $D$. Further, note that the leading coefficient of this polynomial is the invertible matrix $\mathop{\mathrm{diag}}(1, \varepsilon)$, which gives the desired example of a stable, but not hyperstable, polynomial with no eigenvalues at infinity.
\end{example} We take this opportunity to present a Sz\'asz-type inequality for stable matrix polynomials. \begin{proposition}\label{Szasz} Consider a matrix polynomial $P(\lambda) = \lambda^d A_d + \lambda^{d-1} A_{d-1} + \dots + \lambda A_1 + I_n$. If the numerical range $W(P)$ is contained in some half-plane $H_\varphi$, $\varphi \in [0;2\pi)$, then $$ \| P(\lambda) \| \leq 2\exp \left( \lambda_{H}\Bigl[\lambda A_1 -|\lambda|^2 A_2\Bigr] +\frac12 |\lambda|^2 \| A_1\|^2 \right), \quad \lambda \in \mathbb{C}, $$ where the symbol $\lambda_{H}(X)$ denotes the largest (possibly negative) eigenvalue of the Hermitian matrix $\frac{X+X^*}2$. \end{proposition} \begin{proof} Without loss of generality we may assume that the numerical range of $P(\lambda)$ is contained in the open upper half-plane. By the well known fact that the norm of a matrix is at most twice its numerical radius, see e.g. \cite{HorJ91}, we obtain for $\lambda\in\mathbb{C}$ \begin{eqnarray*} \|P(\lambda)\| &\leq & 2\sup_{\|x\| = 1} |x^*P(\lambda)x| \\ & \leq & 2\sup_{\|x\|=1} \exp \left(\mbox{\rm re\,}(x^*A_1x \lambda) - \mbox{\rm re\,}(x^* A_2 x) |\lambda|^2 + \frac12 |x^*A_1x|^2|\lambda|^2 \right), \end{eqnarray*} where the second inequality follows from the scalar inequality for stable polynomials (\cite[Lemma 5]{deB61}, see also \cite[Theorem 1.3]{Kne19}), applied to $x^*P(\lambda)x$. Taking the supremum inside the exponential, we see that the assertion follows. \end{proof} In view of Proposition~\ref{abc} one may ask whether hyperstability implies any Sz\'asz-type inequality. \section{Hyperstability of matrix polynomials in one variable via multivariate matrix polynomials}\label{s5} In this Section we show how certain analytic properties of multivariate matrix polynomials imply algebraic properties of matrix polynomials of one variable. To be more precise, we show that stability of multivariate matrix polynomials (with respect to some set $D^\kappa$) implies hyperstability of matrix polynomials of one variable. First, let us extend Definition~\ref{defla} to the multivariate case. \begin{definition}\label{defzi} \rm Let the symbol $D$ denote a nonempty open or closed subset of the complex plane. We say that a multivariate matrix polynomial $(z_1, z_2, \dots , z_\kappa) \mapsto P(z_1, z_2, \dots, z_\kappa)$ \emph{is stable with respect to $D^{\kappa}$} if and only if for all $(\mu_1,\dots,\mu_\kappa) \in D^{\kappa}$ and all $x \in \mathbb{C}^n\setminus\{0\}$ we have \begin{equation*} P(\mu_1, \mu_2, \dots, \mu_{\kappa})x \neq 0\text{.} \end{equation*} Further, we say that $(z_1, z_2, \dots , z_{\kappa}) \mapsto P(z_1, z_2, \dots , z_{\kappa})$ is \emph{hyperstable with respect to $D^{\kappa}$} if and only if for all $x \in \mathbb{C}^n\setminus\{0\}$ there exists $y \in \mathbb{C}^n\setminus\{0\}$ such that \begin{equation}\label{nozeroinDmatrix} y^*P(\mu_1, \mu_2, \dots , \mu_{\kappa})x \neq 0 \;\;\text{for all} \;\;(\mu_1, \mu_2, \dots , \mu_{\kappa}) \in D^{\kappa}\text{.} \end{equation} \end{definition} We present now one of the major outcomes of the current paper, relating hyperstability of quadratic matrix polynomials in one variable to stability of certain matrix polynomials in two variables. \begin{theorem}\label{poly2} Let $P(\lambda) = \lambda^2 A_2 + \lambda A_1 + A_0$ be a quadratic matrix polynomial and let $D$ be a nonempty open or closed subset of the complex plane $\mathbb{C}$.
If at least one of the following conditions holds: \begin{enumerate}[\rm (a)] \item\label{0?D} the multivariate matrix polynomial $(z_1, z_2) \mapsto z_1^2 A_2 + z_2 A_1 + A_0$ is stable with respect to $D^2$, \item\label{0notinD1} the multivariate matrix polynomial $(z_1, z_2) \mapsto z_1z_2 A_2 + z_2 A_1 + A_0$ is stable with respect to $D^2$ and $0 \notin D$, \item\label{0notinD2} the multivariate matrix polynomial $(z_1, z_2) \mapsto z_1^2z_2 A_2 + z_1^2 A_1 + z_2 A_0$ is stable with respect to $D^2$ and $0 \notin D$, \end{enumerate} then the matrix polynomial $\lambda \mapsto P(\lambda)$ is hyperstable with respect to $D$. \end{theorem} \begin{proof} First observe that setting $z_1 = z_2 = \lambda$ implies, in each case, that the polynomial $\lambda \mapsto P(\lambda)$ is stable with respect to $D$, in particular it is regular. In the case \eqref{0notinD2} this substitution yields the polynomial $\lambda \mapsto \lambda P(\lambda)$, whose stability with respect to $D$ is equivalent to stability of the polynomial $\lambda \mapsto P(\lambda)$, since $0 \notin D$. To show that the polynomial $\lambda \mapsto P(\lambda)$ is hyperstable with respect to $D$, fix $x \in \mathbb{C}^n\setminus\{0\}$. Below by $\perp$ we denote orthogonality with respect to the standard complex inner product. The proof in each case is similar; however, it requires certain adaptations and thus we present all details. Assume \eqref{0?D} holds. If there exists a vector $y \in \mathbb{C}^n\setminus\{0\}$ such that $y \perp A_2x, \; y \perp A_1x$ and $y \not\perp A_0x$, then \begin{equation*} y^* P(\lambda)x = y^*A_0x \end{equation*} and \eqref{nozeroinD} is simply satisfied. Hence, in what follows we assume that such a vector $y$ does not exist, i.e. $ \{A_2x, A_1x\}^{\perp} \subseteq \{A_0x\}^{\perp}$, equivalently $A_0x\in \mathop{\mathrm{span}}\set{A_2x, A_1x}$. Thus, we have \begin{equation}\label{A_0x} A_0x = \alpha_0 A_2x + \beta_0 A_1x \end{equation} for some $\alpha_0, \beta_0 \in \mathbb{C}$. Next, we can write \begin{align*} P(\lambda)x = \lambda^2 A_2x + \lambda A_1x + (\alpha_0 A_2x + \beta_0 A_1x) = (\lambda^2 + \alpha_0)A_2x + (\lambda + \beta_0)A_1x\text{.} \end{align*} We show now that at least one of the scalar polynomials $\lambda \mapsto \lambda^2 + \alpha_0$ or $\lambda \mapsto \lambda + \beta_0$ is stable with respect to $D$. By the assumption that the polynomial $(z_1, z_2) \mapsto z_1^2 A_2 + z_2 A_1 + A_0$ is stable with respect to $D^2$, we have that the equation $(z_1^2 A_2 + z_2 A_1 + A_0)x = 0$ has no solutions $(z_1, z_2) \in D^2$. Substituting \eqref{A_0x} we obtain that the equation $$ (z_1^2 + \alpha_0)A_2x + (z_2 + \beta_0)A_1x = 0 $$ has no solutions $(z_1, z_2) \in D^2$. This implies that at least one of the scalar polynomials $\lambda \mapsto \lambda^2 + \alpha_0$ or $\lambda \mapsto \lambda + \beta_0$ has no roots in $D$, and the claim follows. If the polynomial $\lambda \mapsto \lambda^2 + \alpha_0$ is stable with respect to $D$, then similarly as before we seek a vector $y$ such that $y \not\perp A_2x$ and $y \perp A_1x$. If such $y$ exists we have \begin{equation*} y^*P(\lambda)x = (y^*A_2x)(\lambda^2 + \alpha_0) \end{equation*} and condition \eqref{nozeroinD} is satisfied. If such a vector $y$ does not exist, we have $\{A_1x\}^{\perp} \subseteq \{A_2x\}^{\perp}$ and consequently $A_2x = c_1 A_1x$ for some constant $c_1 \in \mathbb{C}$.
Then: \begin{equation*} P(\lambda)x = (c_1\lambda^2 + \lambda + c_1\alpha_0 + \beta_0)A_1x \end{equation*} and since the polynomial $\lambda \mapsto P(\lambda)$ is regular and has no eigenvalues in $D$ we have that $A_1x \neq 0$ and $\lambda \mapsto c_1\lambda^2 + \lambda + c_1\alpha_0 + \beta_0$ has no roots in $D$. Setting $y = A_1x$ we have \begin{equation*} y^*P(\lambda)x = (A_1x)^*(c_1\lambda^2 + \lambda + c_1\alpha_0 + \beta_0)A_1x = \|A_1x\|^2(c_1\lambda^2 + \lambda + c_1\alpha_0 + \beta_0) \end{equation*} and \eqref{nozeroinD} is again satisfied. Similarly, if the polynomial $\lambda + \beta_0$ is stable with respect to $D$, then we seek a vector $y$ such that $y \perp A_2x$ and $y \not\perp A_1x$. If such $y$ exists, then \begin{equation*} y^*P(\lambda)x = (y^*A_1x)(\lambda + \beta_0) \end{equation*} and \eqref{nozeroinD} is satisfied. If such a vector $y$ does not exist we have $\{A_2x\}^{\perp} \subseteq \{A_1x\}^{\perp}$ and consequently $A_1x= c_2 A_2x$ for some constant $c_2 \in \mathbb{C}$ and \begin{equation*} P(\lambda)x =(\lambda^2 + c_2\lambda + \alpha_0 + c_2\beta_0)A_2x\text{.} \end{equation*} As before, the scalar polynomial $\lambda \mapsto \lambda^2 + c_2\lambda + \alpha_0 + c_2\beta_0$ is stable and $A_2x \neq 0$. Taking $y = A_2x$ we have \eqref{nozeroinD} satisfied. Assume that \eqref{0notinD1} holds. If there exists a vector $y \in \mathbb{C}^n\setminus\{0\}$ such that $y \perp A_2x, \; y \not\perp A_1x$ and $y \perp A_0x$, then \begin{equation*} y^* P(\lambda)x = (y^* A_1x)\lambda \end{equation*} and \eqref{nozeroinD} is satisfied, due to the assumption that $0 \notin D$. Hence, in the remainder of this case we assume that such a vector $y$ does not exist, i.e. $ \{A_2x, A_0x\}^{\perp} \subseteq \{A_1x\}^{\perp}$, equivalently $A_1x\in \mathop{\mathrm{span}}\set{A_2x, A_0x}$. Thus, we have \begin{equation}\label{A_1x} A_1x = \alpha_0 A_2x + \beta_0 A_0x \end{equation} for some $\alpha_0, \beta_0 \in \mathbb{C}$. Next, we can write \begin{align*} P(\lambda)x = \lambda^2 A_2x + \lambda(\alpha_0 A_2x + \beta_0 A_0x) + A_0x = \lambda(\lambda + \alpha_0)A_2x + (\beta_0\lambda + 1)A_0x\text{.} \end{align*} We show now that at least one of the scalar polynomials $\lambda \mapsto \lambda(\lambda + \alpha_0)$ or $\lambda \mapsto \beta_0\lambda + 1$ is stable with respect to $D$. By the assumption that the polynomial $(z_1, z_2) \mapsto z_1z_2 A_2 + z_2 A_1 + A_0$ is stable with respect to $D^2$ we have that the equation $(z_1z_2 A_2 + z_2 A_1 + A_0)x = 0$ has no solutions $(z_1, z_2) \in D^2$. Substituting \eqref{A_1x} we obtain that the equation $$ z_2\left(z_1+\alpha_0\right) A_2 x+ \left( \beta_0 z_2 +1 \right)A_0x = 0 $$ has no solutions $(z_1, z_2) \in D^2$. This implies that at least one of the scalar polynomials $\lambda \mapsto \lambda(\lambda + \alpha_0)$ or $\lambda \mapsto \beta_0\lambda + 1$ has no roots in $D$, and the claim follows. If the polynomial $\lambda \mapsto \lambda(\lambda + \alpha_0)$ is stable with respect to $D$, then similarly as before we seek a vector $y$ such that $y \not\perp A_2x$ and $y \perp A_0x$. If such $y$ exists we have \begin{equation*} y^*P(\lambda)x = (y^*A_2x)\lambda(\lambda + \alpha_0) \end{equation*} and condition \eqref{nozeroinD} is satisfied. If such a vector $y$ does not exist we have $A_2x = c_1 A_0x$ for some constant $c_1 \in \mathbb{C}$.
Then: \begin{equation*} P(\lambda)x = \Big[c_1 \lambda^2 + (c_1\alpha_0 + \beta_0)\lambda + 1\Big]A_0x \end{equation*} and since the polynomial $\lambda \mapsto P(\lambda)$ is regular and has no eigenvalues in $D$ we have that $A_0x \neq 0$ and $c_1\lambda^2 + (c_1\alpha_0 + \beta_0)\lambda + 1$ has no roots in $D$. Setting $y = A_0x$ we have \begin{equation*} y^*P(\lambda)x = (A_0x)^*\Big[c_1 \lambda^2 + (c_1\alpha_0 + \beta_0)\lambda + 1\Big]A_0x = \|A_0x\|^2\Big[c_1 \lambda^2 + (c_1\alpha_0 + \beta_0)\lambda + 1\Big] \end{equation*} and \eqref{nozeroinD} is again satisfied. Similarly, if the polynomial $\lambda \mapsto \beta_0\lambda + 1$ is stable with respect to $D$, then we seek a vector $y$ such that $y \perp A_2x$ and $y \not\perp A_0x$. If such $y$ exists, then \begin{equation*} y^*P(\lambda)x = (y^*A_0x)(\beta_0\lambda + 1) \end{equation*} and \eqref{nozeroinD} is satisfied. If such a vector $y$ does not exist we have $A_0x = c_2 A_2x$ for some constant $c_2 \in \mathbb{C}$ and \begin{equation*} P(\lambda)x = \Bigl[\lambda^2 + (\alpha_0 + c_2\beta_0 )\lambda + c_2\Bigr]A_2x\text{.} \end{equation*} As before, the scalar polynomial $\lambda \mapsto \lambda^2 + (\alpha_0 + c_2\beta_0)\lambda + c_2$ is stable and $A_2x \neq 0$. Taking $y = A_2x$ we have \eqref{nozeroinD} satisfied. Finally, assume that \eqref{0notinD2} holds. If there exists a vector $y \in \mathbb{C}^n\setminus\{0\}$ such that $y \not\perp A_2x, \; y \perp A_1x$ and $y \perp A_0x$, then \begin{equation*} y^* P(\lambda)x = (y^*A_2x)\lambda^2 \end{equation*} and \eqref{nozeroinD} is satisfied, due to the assumption that $0\notin D$. Hence, in what follows we assume that such a vector $y$ does not exist, i.e. $ \{A_1x, A_0x\}^{\perp} \subseteq \{A_2x\}^{\perp}$, equivalently $A_2x \in \mathop{\mathrm{span}}\set{A_1x, A_0x}$. Thus, we have \begin{equation}\label{A_2x} A_2x = \alpha_0 A_1x + \beta_0 A_0x \end{equation} for some $\alpha_0, \beta_0 \in \mathbb{C}$. Next, we can write \begin{align*} P(\lambda)x = (\alpha_0\lambda^2 + \lambda)A_1x + (\beta_0\lambda^2 + 1)A_0x = \lambda(\alpha_0\lambda + 1)A_1x + (\beta_0\lambda^2 + 1)A_0x\text{.} \end{align*} We show now that at least one of the scalar polynomials $\lambda \mapsto \lambda(\alpha_0\lambda + 1)$ or $\lambda \mapsto \beta_0\lambda^2 + 1$ is stable with respect to $D$. By the assumption that the polynomial $(z_1, z_2) \mapsto z_1^2z_2 A_2 + z_1^2 A_1 + z_2 A_0$ is stable with respect to $D^2$, we have that the equation $(z_1^2z_2 A_2 + z_1^2 A_1 + z_2 A_0)x = 0$ has no solutions $(z_1, z_2) \in D^2$. Substituting \eqref{A_2x} we obtain that the equation $$ z_1^2z_2(\alpha_0 A_1x + \beta_0 A_0x) + z_1^2 A_1x + z_2 A_0x = 0, $$ that is, the equation \begin{equation}\label{noso} z_1^2(\alpha_0z_2 + 1)A_1x + z_2(\beta_0z_1^2 + 1)A_0x = 0, \end{equation} has no solutions $(z_1, z_2) \in D^2$. This implies that at least one of the scalar polynomials $\lambda \mapsto \lambda(\alpha_0\lambda + 1)$ or $\lambda \mapsto \beta_0\lambda^2 + 1$ has no roots in $D$. Otherwise, there would exist $\lambda_1, \lambda_2 \in D$ such that $\lambda_1(\alpha_0\lambda_1 + 1) = \beta_0\lambda_2^2 + 1 = 0$. Since $\lambda_1 \neq 0$ we would have $\alpha_0\lambda_1 + 1 = \beta_0\lambda_2^2 + 1 = 0$, and then $(z_1, z_2) = (\lambda_2, \lambda_1) \in D^2$ would be a solution of equation \eqref{noso}, a contradiction. Therefore, the claim follows.
If the polynomial $\lambda \mapsto \lambda(\alpha_0\lambda + 1)$ is stable with respect to $D$, then similarly as before we seek a vector $y$ such that $y \not\perp A_1x$ and $y \perp A_0x$. If such $y$ exists we have \begin{equation*} y^*P(\lambda)x = (y^*A_1x)\lambda(\alpha_0\lambda + 1) \end{equation*} and condition \eqref{nozeroinD} is satisfied. If such a vector $y$ does not exist we have $\{A_0x\}^{\perp} \subseteq \{A_1x\}^{\perp}$ and consequently $A_1x = c_1 A_0x$ for some constant $c_1 \in \mathbb{C}$. Then: \begin{equation*} P(\lambda)x = \Bigl[\lambda(\alpha_0\lambda + 1)c_1 + \beta_0\lambda^2 + 1\Bigr]A_0x = \Bigl[(\alpha_0c_1 + \beta_0)\lambda^2 + c_1\lambda + 1\Bigr]A_0x \end{equation*} and since the polynomial $\lambda \mapsto P(\lambda)$ is regular and has no eigenvalues in $D$ we have that $A_0x \neq 0$ and the polynomial $\lambda \mapsto (\alpha_0c_1 + \beta_0)\lambda^2 + c_1\lambda + 1$ has no roots in $D$. Setting $y = A_0x$ we have \begin{equation*} y^*P(\lambda)x = (A_0x)^*\Bigl[(\alpha_0c_1 + \beta_0)\lambda^2 + c_1\lambda + 1\Bigr]A_0x = \|A_0x\|^2\Bigl[(\alpha_0c_1 + \beta_0)\lambda^2 + c_1\lambda + 1\Bigr] \end{equation*} and \eqref{nozeroinD} is satisfied again. Similarly, if the polynomial $\lambda \mapsto \beta_0\lambda^2 + 1$ is stable with respect to $D$, then we seek a vector $y$ such that $y \perp A_1x$ and $y \not\perp A_0x$. If such $y$ exists, then \begin{equation*} y^*P(\lambda)x = (y^*A_0x)(\beta_0\lambda^2 + 1) \end{equation*} and \eqref{nozeroinD} is satisfied. If such a vector $y$ does not exist we have $\{A_1x\}^{\perp} \subseteq \{A_0x\}^{\perp}$ and consequently $A_0x= c_2 A_1x$ for some constant $c_2 \in \mathbb{C}$ and \begin{equation*} P(\lambda)x = \Bigl[\lambda(\alpha_0\lambda + 1) + (\beta_0\lambda^2 + 1)c_2\Bigr]A_1x = \Bigl[(\alpha_0 + \beta_0c_2)\lambda^2 + \lambda + c_2\Bigr]A_1x\text{.} \end{equation*} As before, the scalar polynomial $\lambda \mapsto (\alpha_0 + \beta_0c_2)\lambda^2 + \lambda + c_2$ is stable with respect to $D$ and $A_1x \neq 0$. Taking $y = A_1x$ we have \eqref{nozeroinD} satisfied. \end{proof} From Theorem~\ref{uppert}\eqref{pq} we know that hyperstability of palindromic matrix polynomials of degree three is equivalent to their stability. In addition to this, we now consider polynomials of the form $P(\lambda) = \lambda^3 A_0 + \lambda^2 A_2 + \lambda A_1 + A_0$. Therefore, the following theorem provides many interesting examples when $A_1 \neq A_2$. \begin{theorem}\label{poly3} Let $P(\lambda) = \lambda^3 A_0 + \lambda^2 A_2 + \lambda A_1 + A_0$ be a cubic matrix polynomial and let $D$ be a nonempty open or closed subset of the complex plane $\mathbb{C}$. If at least one of the following conditions holds: \begin{enumerate}[\rm (a)] \item\label{forA_0} the multivariate matrix polynomial $(z_1, z_2) \mapsto (z_1^3 z_2^3 + z_1^3 + z_2^3)A_0 + (z_1^2 z_2^3 +z_1^2)A_2 + (z_1^3z_2 + z_2) A_1 + A_0$ is stable with respect to $D^2$ and $-1, \frac{1}{2} - \frac{\sqrt{3}}{2}i, \frac{1}{2} + \frac{\sqrt{3}}{2}i \not\in D$, \item\label{forA_1} the multivariate matrix polynomial $(z_1, z_2) \mapsto z_2^3 A_0 + z_1 z_2 A_2 + z_2 A_1 + A_0$ is stable with respect to $D^2$ and $0 \not\in D$, \item\label{forA_2} the multivariate matrix polynomial $(z_1, z_2) \mapsto z_1 z_2^3 A_0 + z_1 z_2^2 A_2 + z_2^2 A_1 + z_1 A_0$ is stable with respect to $D^2$ and $0 \not\in D$, \end{enumerate} then the matrix polynomial $ P(\lambda)$ is hyperstable with respect to $D$.
\end{theorem} \begin{proof} In this proof we proceed as in the proof of Theorem~\ref{poly2}. Setting $z_1 = z_2 = \lambda$ leads in each case to stability of the polynomial $\lambda \mapsto P(\lambda)$ with respect to $D$. In particular, $P(\lambda) $ is regular. In the case \eqref{forA_0} this substitution yields the polynomial $\lambda \mapsto (\lambda^3 + 1)P(\lambda)$, whose stability with respect to $D$ is equivalent to stability of the polynomial $\lambda \mapsto P(\lambda)$, since the roots of $\lambda^3+1$ do not lie in $D$. To show hyperstability of the polynomial $\lambda \mapsto P(\lambda)$ with respect to $D$, fix $x \in \mathbb{C}^n\setminus\{0\}$. Assume that the condition \eqref{forA_0} holds. If there exists a vector $y \in \mathbb{C}^n\setminus\{0\}$ such that $y \not\perp A_0x, y \perp A_2x, y \perp A_1x$, then \begin{equation} y^*P(\lambda)x = (y^*A_0x)(\lambda^3 + 1) \end{equation} and the condition \eqref{nozeroinD} is satisfied because $-1, \frac{1}{2} - \frac{\sqrt{3}}{2}i, \frac{1}{2} + \frac{\sqrt{3}}{2}i \not\in D$. Otherwise, we have $\{A_2x, A_1x\}^{\perp} \subseteq \{A_0x\}^{\perp}$, which equivalently means that $A_0x = \alpha_0 A_2x + \beta_0 A_1x$ for some $\alpha_0, \beta_0 \in \mathbb{C}$. Thus we can write \begin{equation*} P(\lambda)x = (\alpha_0\lambda^3 + \lambda^2 + \alpha_0)A_2x + (\beta_0\lambda^3 + \lambda + \beta_0)A_1x\text{.} \end{equation*} Due to stability of the multivariate polynomial in \eqref{forA_0} we conclude that the equation \begin{equation*} \Bigl[(z_1^3 z_2^3 + z_1^3 + z_2^3)A_0 + (z_1^2 z_2^3 +z_1^2)A_2 + (z_1^3z_2 + z_2) A_1 + A_0\Bigr]x = 0 \end{equation*} does not have solutions $(z_1, z_2) \in D^2$. Then the equation \begin{equation*} (\alpha_0 z_1^3 z_2^3 + z_1^2 z_2^3 + \alpha_0 z_1^3 + \alpha_0 z_2^3 + z_1^2 + \alpha_0)A_2x + (\beta_0 z_1^3 z_2^3 + z_1^3 z_2 + \beta_0 z_1^3 + \beta_0 z_2^3 + z_2 + \beta_0)A_1x = 0 \end{equation*} does not have such solutions either. Since the left-hand side factors as $(z_2^3+1)(\alpha_0 z_1^3 + z_1^2 + \alpha_0)A_2x + (z_1^3+1)(\beta_0 z_2^3 + z_2 + \beta_0)A_1x$, at least one of the polynomials $\lambda \mapsto \alpha_0\lambda^3 + \lambda^2 + \alpha_0$ or $\lambda \mapsto \beta_0\lambda^3 + \lambda + \beta_0$ is stable with respect to $D$; otherwise, evaluating at a root in $D$ of each of them would produce a solution in $D^2$. If the polynomial $\lambda \mapsto \alpha_0\lambda^3 + \lambda^2 + \alpha_0$ is stable with respect to $D$, then we seek a vector $y$ with $y \not\perp A_2x$ and $y \perp A_1x$. If such a vector $y$ exists, then \begin{equation*} y^*P(\lambda)x = (y^*A_2x)(\alpha_0\lambda^3 + \lambda^2 + \alpha_0) \end{equation*} and the condition \eqref{nozeroinD} is also satisfied. Otherwise, we have $\{A_1x\}^{\perp} \subseteq \{A_2x\}^{\perp}$, which equivalently means that $A_2x = c_1A_1x$ for some $c_1 \in \mathbb{C}$. Finally, we can write \begin{equation*} P(\lambda)x = \Bigl[(c_1\alpha_0 + \beta_0)\lambda^3 + c_1\lambda^2 + \lambda + (c_1\alpha_0 + \beta_0)\Bigr]A_1x, \end{equation*} where the polynomial in the square brackets is stable because of stability of the polynomial $\lambda \mapsto P(\lambda)$. Now, we just take $y = A_1x$ to satisfy the condition \eqref{nozeroinD}: \begin{equation*} y^*P(\mu)x = ||A_1x||^2\Bigl[(c_1\alpha_0 + \beta_0)\mu^3 + c_1\mu^2 + \mu + (c_1\alpha_0 + \beta_0)\Bigr] \neq 0 \end{equation*} for $\mu \in D$. If the polynomial $\lambda \mapsto \beta_0\lambda^3 + \lambda + \beta_0$ is stable with respect to $D$, then we seek in turn a vector $y$ with $y \perp A_2x$ and $y \not\perp A_1x$. If such a vector $y$ exists, then \begin{equation*} y^*P(\lambda)x = (y^*A_1x)(\beta_0\lambda^3 + \lambda + \beta_0) \end{equation*} and the condition \eqref{nozeroinD} is satisfied again.
Otherwise, we have $\{A_2x\}^{\perp} \subseteq \{A_1x\}^{\perp}$, which equivalently means that $A_1x = c_2A_2x$ for some $c_2 \in \mathbb{C}$. As before, we can write: \begin{equation*} P(\lambda)x = \Bigl[(\alpha_0 + c_2\beta_0)\lambda^3 + \lambda^2 + c_2\lambda + (\alpha_0 + c_2\beta_0)\Bigr]A_2x, \end{equation*} where the polynomial in the square brackets is stable because of stability of the polynomial $\lambda \mapsto P(\lambda)$. As previously, we take $y = A_2x$ to satisfy the condition \eqref{nozeroinD}: \begin{equation*} y^*P(\mu)x = ||A_2x||^2\Bigl[(\alpha_0 + c_2\beta_0)\mu^3 + \mu^2 + c_2\mu + (\alpha_0 + c_2\beta_0)\Bigr] \neq 0 \end{equation*} for $\mu \in D$, which ends the proof in this case. Assume that the condition \eqref{forA_1} holds. If there exists a vector $y \in \mathbb{C}^n\setminus\{0\}$ such that $y \perp A_0x, y \perp A_2x, y \not\perp A_1x$, then \begin{equation} y^*P(\lambda)x = (y^*A_1x)\lambda \end{equation} and the condition \eqref{nozeroinD} is satisfied because of the fact that $0 \not\in D$. Otherwise, we have $\{A_0x, A_2x\}^{\perp} \subseteq \{A_1x\}^{\perp}$, which equivalently means that $A_1x = \alpha_0 A_2x + \beta_0 A_0x$ for some $\alpha_0, \beta_0 \in \mathbb{C}$. Thus we can write \begin{equation*} P(\lambda)x = (\lambda^2 + \alpha_0\lambda)A_2x + (\lambda^3 +\beta_0\lambda +1)A_0x\text{.} \end{equation*} Due to stability of the multivariate polynomial in \eqref{forA_1} we conclude that the equation \begin{equation*} \Bigl[z_2^3 A_0 + z_1 z_2 A_2 + z_2 A_1 + A_0\Bigr]x = 0 \end{equation*} does not have solutions $(z_1, z_2) \in D^2$. Then the equation \begin{equation*} (z_1 z_2 + \alpha_0 z_2)A_2x + (z_2^3 + \beta_0z_2 + 1)A_0x = 0 \end{equation*} does not have such solutions either. Therefore, at least one of the polynomials $\lambda \mapsto \lambda^2 + \alpha_0\lambda$ or $\lambda \mapsto \lambda^3 + \beta_0\lambda + 1$ is stable with respect to $D$. If the polynomial $\lambda \mapsto \lambda^2 + \alpha_0\lambda$ is stable with respect to $D$, then we seek a vector $y$ with $y \not\perp A_2x$ and $y \perp A_0x$. If such a vector $y$ exists, then \begin{equation*} y^*P(\lambda)x = (y^*A_2x)(\lambda^2 + \alpha_0\lambda) \end{equation*} and the condition \eqref{nozeroinD} is also satisfied. Otherwise, we have $\{A_0x\}^{\perp} \subseteq \{A_2x\}^{\perp}$, which equivalently means that $A_2x = c_1A_0x$ for some $c_1 \in \mathbb{C}$. Finally, we can write \begin{equation*} P(\lambda)x = \Bigl[\lambda^3 + c_1\lambda^2 + (c_1\alpha_0 + \beta_0)\lambda + 1\Bigr]A_0x, \end{equation*} where the polynomial in the square brackets is stable because of stability of the polynomial $\lambda \mapsto P(\lambda)$. Now, we just take $y = A_0x$ to satisfy the condition \eqref{nozeroinD}: \begin{equation*} y^*P(\mu)x = ||A_0x||^2\Bigl[\mu^3 + c_1\mu^2 + (c_1\alpha_0 + \beta_0)\mu + 1\Bigr] \neq 0 \end{equation*} for $\mu \in D$. If the polynomial $\lambda \mapsto \lambda^3 + \beta_0\lambda + 1$ is stable with respect to $D$, then we seek in turn a vector $y$ with $y \perp A_2x$ and $y \not\perp A_0x$. If such a vector $y$ exists, then \begin{equation*} y^*P(\lambda)x = (y^*A_0x)(\lambda^3 + \beta_0\lambda + 1) \end{equation*} and the condition \eqref{nozeroinD} is satisfied again. Otherwise, we have $\{A_2x\}^{\perp} \subseteq \{A_0x\}^{\perp}$, which equivalently means that $A_0x = c_2A_2x$ for some $c_2 \in \mathbb{C}$.
As before, we can write: \begin{equation*} P(\lambda)x = \Bigl[c_2\lambda^3 + \lambda^2 + (\alpha_0 + c_2\beta_0)\lambda + c_2\Bigr]A_2x, \end{equation*} where the polynomial in the square brackets is stable because of stability of the polynomial $\lambda \mapsto P(\lambda)$. As previously, we take $y = A_2x$ to satisfy the condition \eqref{nozeroinD}: \begin{equation*} y^*P(\mu)x = ||A_2x||^2\Bigl[c_2\mu^3 + \mu^2 + (\alpha_0 + c_2\beta_0)\mu + c_2\Bigr] \neq 0 \end{equation*} for $\mu \in D$, which ends the proof in this case. Assume that the condition \eqref{forA_2} holds. If there exists a vector $y \in \mathbb{C}^n\setminus\{0\}$ such that $y \perp A_0x, y \not\perp A_2x, y \perp A_1x$, then \begin{equation} y^*P(\lambda)x = (y^*A_2x)\lambda^2 \end{equation} and the condition \eqref{nozeroinD} is satisfied because of the fact that $0 \not\in D$. Otherwise, we have $\{A_1x, A_0x\}^{\perp} \subseteq \{A_2x\}^{\perp}$, which equivalently means that $A_2x = \alpha_0 A_1x + \beta_0 A_0x$ for some $\alpha_0, \beta_0 \in \mathbb{C}$. Thus we can write \begin{equation*} P(\lambda)x = (\alpha_0\lambda^2 + \lambda)A_1x + (\lambda^3 + \beta_0\lambda^2 + 1)A_0x\text{.} \end{equation*} Due to stability of the multivariate polynomial in \eqref{forA_2} we conclude that the equation \begin{equation*} \Bigl[z_1 z_2^3 A_0 + z_1 z_2^2 A_2 + z_2^2 A_1 + z_1 A_0\Bigr]x = 0 \end{equation*} does not have solutions $(z_1, z_2) \in D^2$. Then the equation \begin{equation*} (\alpha_0 z_1 z_2^2 + z_2^2)A_1x + (z_1 z_2^3 + \beta_0 z_1 z_2^2 + z_1)A_0x = 0 \end{equation*} does not have such solutions either. Therefore, at least one of the polynomials $\lambda \mapsto \alpha_0\lambda^2 + \lambda$ or $\lambda \mapsto \lambda^3 + \beta_0\lambda^2 + 1$ is stable with respect to $D$. If the polynomial $\lambda \mapsto \alpha_0\lambda^2 + \lambda$ is stable with respect to $D$, then we seek a vector $y$ with $y \not\perp A_1x$ and $y \perp A_0x$. If such a vector $y$ exists, then \begin{equation*} y^*P(\lambda)x = (y^*A_1x)(\alpha_0\lambda^2 + \lambda) \end{equation*} and the condition \eqref{nozeroinD} is also satisfied. Otherwise, we have $\{A_0x\}^{\perp} \subseteq \{A_1x\}^{\perp}$, which equivalently means that $A_1x = c_1A_0x$ for some $c_1 \in \mathbb{C}$. Finally, we can write \begin{equation*} P(\lambda)x = \Bigl[\lambda^3 + (c_1\alpha_0 + \beta_0)\lambda^2 + c_1\lambda + 1\Bigr]A_0x, \end{equation*} where the polynomial in the square brackets is stable because of stability of the polynomial $\lambda \mapsto P(\lambda)$. Now, we just take $y = A_0x$ to satisfy the condition \eqref{nozeroinD}: \begin{equation*} y^*P(\mu)x = ||A_0x||^2\Bigl[\mu^3 + (c_1\alpha_0 + \beta_0)\mu^2 + c_1\mu + 1\Bigr] \neq 0 \end{equation*} for $\mu \in D$. If the polynomial $\lambda \mapsto \lambda^3 + \beta_0\lambda^2 + 1$ is stable with respect to $D$, then we seek in turn a vector $y$ with $y \perp A_1x$ and $y \not\perp A_0x$. If such a vector $y$ exists, then \begin{equation*} y^*P(\lambda)x = (y^*A_0x)(\lambda^3 + \beta_0\lambda^2 + 1) \end{equation*} and the condition \eqref{nozeroinD} is satisfied again. Otherwise, we have $\{A_1x\}^{\perp} \subseteq \{A_0x\}^{\perp}$, which equivalently means that $A_0x = c_2A_1x$ for some $c_2 \in \mathbb{C}$. As before, we can write: \begin{equation*} P(\lambda)x = \Bigl[c_2\lambda^3 + (\alpha_0 + c_2\beta_0)\lambda^2 + \lambda + c_2\Bigr]A_1x, \end{equation*} where the polynomial in the square brackets is stable because of stability of the polynomial $\lambda \mapsto P(\lambda)$.
As previously, we take $y = A_1x$ to satisfy the condition \eqref{nozeroinD}: \begin{equation*} y^*P(\mu)x = ||A_1x||^2\Bigl[c_2\mu^3 + (\alpha_0 + c_2\beta_0)\mu^2 + \mu + c_2\Bigr] \neq 0 \end{equation*} for $\mu \in D$, which ends the proof in this case and completes the whole proof. \end{proof} \section{The polarization operator for matrix polynomials}\label{sPol} In this Section we develop the theory of hyperstable matrix polynomials of several variables. This theory, besides being of independent interest, will later serve, once again, as a tool for showing hyperstability of matrix polynomials of one variable. First, let us define a scalar multi-variate polynomial $(z_1, z_2, \dots , z_{\kappa}) \mapsto p(z_1, z_2, \dots, z_\kappa)$ to be \emph{multi-affine} if and only if its degree with respect to each variable $z_j$, $j=1,\dots,\kappa$, is less than or equal to one. Further, we say that $(z_1, z_2, \dots , z_{\kappa}) \mapsto p(z_1, z_2, \dots, z_\kappa)$ is \emph{symmetric} if and only if any permutation of the variables $z_1, z_2, \dots, z_\kappa$ leaves the polynomial unchanged. Below we recall the famous Grace-Walsh-Szeg\"o Theorem \cite{Gr1902,Sze1922,Wa1922}; we present a shortened version sufficient for the current investigations. \begin{theorem}\label{GWS} Let $p \in \mathbb{C}[z_1, z_2, \dots , z_{\kappa}]$ be a symmetric multi-affine polynomial, let $D$ be an open or closed disc or open or closed half-plane and let $\zeta_1, \zeta_2, \dots , \zeta_{\kappa}\in D$. Then there exists a point $\zeta_0 \in D$ such that \begin{equation*} p(\zeta_1, \zeta_2, \dots , \zeta_{\kappa}) = p(\zeta_0, \zeta_0, \dots , \zeta_0)\text{.} \end{equation*} \end{theorem} We introduce now the main object of this Section. Let $\kappa \in \mathbb{Z}_+$ and let $P(\lambda)=\lambda^d A_d + \lambda^{d-1} A_{d-1} + \dots + A_0$ be a matrix polynomial of degree $d \leq \kappa$; we put $A_{d+1} = A_{d+2} = \dots = A_{\kappa} = 0$. We define the \emph{polarization} of $P(\lambda)$ by \begin{equation}\label{Tdef1} (T_{\kappa}P) (z_1, z_2, \dots , z_{\kappa}):= \sum_{j=0}^{\kappa} \binom{\kappa}{j}^{-1}s_j(z_1, z_2, \dots , z_{\kappa})A_j, \end{equation} where the symbol $s_j$ denotes the $j$-th elementary symmetric polynomial \begin{equation}\label{Tdef2} s_0(z_1, z_2, \dots, z_{\kappa}) := 1,\;\;\; s_j(z_1, z_2, \dots , z_{\kappa}) := \sum_{1 \leq i_1 < i_2 < \dots < i_j \leq \kappa} z_{i_1}z_{i_2} \dots z_{i_j}. \end{equation} The operator $T_\kappa$ defined above is a well-known tool for scalar multi-variate polynomials, see, e.g., \cite{BorB09}. Our contribution is to extend its action to matrix polynomials. Note that the operator $T_\kappa$ is injective, with its image being the set of all symmetric multi-affine polynomials (with matrix coefficients). We have the following result. \begin{theorem}\label{Tkappa2} Let $\kappa\geq 1$, let the operator $T_\kappa$ be defined by \eqref{Tdef1} above, and let $D$ be an open or closed disc or an open or closed half-plane. Then a matrix polynomial $P(\lambda)$ is hyperstable with respect to $D$ if and only if $(T_\kappa P)(z_1,\dots,z_\kappa)$ is hyperstable with respect to $D^\kappa$. \end{theorem} \begin{proof} Consider first the case $n=1$, i.e., take a scalar polynomial $p(\lambda)$ and suppose that there exists a point $(\zeta_1, \zeta_2, \dots , \zeta_{\kappa}) \in D^{\kappa}$ such that $(T_{\kappa}p)(\zeta_1, \zeta_2, \dots , \zeta_{\kappa}) = 0$.
Since the polynomial $(z_1, z_2, \dots , z_{\kappa}) \mapsto (T_\kappa p)(z_1, z_2, \dots , z_{\kappa})$ is a symmetric multi-affine polynomial, from Theorem~\ref{GWS} we conclude that there exists a point $\zeta_0 \in D$ such that $(T_\kappa p)(\zeta_1, \zeta_2, \dots , \zeta_{\kappa}) = (T_\kappa p)(\zeta_0, \zeta_0, \dots , \zeta_0)$. However, note that $ (T_\kappa p)(\zeta_0, \zeta_0, \dots , \zeta_0)=p(\zeta_0)$, which shows the forward implication. The converse implication is obvious, as $(T_{\kappa}p)(\lambda, \lambda, \dots , \lambda) = p(\lambda)$. Assume now that $P(\lambda)$ is a matrix polynomial, hyperstable with respect to $D$. Take an arbitrary vector $x \in \mathbb{C}^n \setminus\set0$. By definition of hyperstability, there exists $y\in \mathbb{C}^n \setminus\set0$ such that the scalar polynomial $p(\lambda)=y^*P(\lambda)x$ is stable with respect to $D$. By the first part of the proof, the polynomial $(T_\kappa p)(z_1,\dots,z_\kappa)$ is stable with respect to $D^\kappa$. Observe that $$ (T_\kappa p)(z_1,\dots,z_\kappa)= y^* \left( (T_\kappa P )(z_1,\dots,z_\kappa)\right)x, $$ which shows that $(T_\kappa P )(z_1,\dots,z_\kappa)$ is hyperstable with respect to $D^\kappa$. The converse implication is again obvious. \end{proof} The first part of the proof is well known (in a slightly different setting), see, e.g., \cite{BorB09}, and is presented here for completeness. One may wonder if the following version of Theorem~\ref{Tkappa2} is true: if $P(\lambda)$ is stable with respect to $D$ then $(T_\kappa P)(z_1,\dots,z_\kappa)$ is stable with respect to $D$. However, this is not the case. \begin{example}\label{nonstab}\rm We continue with $P(\lambda)$ as in Example \ref{exa}. Again, $P(\lambda)$ is stable with respect to any $D$ but $$ \det((T_2 P)(z_1,z_2))=\det\mat{cc} 1 & \frac{z_1+z_2}2 \\ \frac{z_1+z_2}2 & z_1z_2+1\rix=1-\left( \frac{z_1-z_2}2\right)^2. $$ Hence, every $(\mu_1,\mu_2)$ with $\mu_1-\mu_2=\pm 2$ is an eigenvalue; in particular, $(T_2P)(z_1,z_2)$ is not stable with respect to, e.g., $H_0^2$. \end{example} We present now a general tool for creating hyperstable matrix polynomials of one variable using the operator $T_\kappa$. Its applications will be given in the next Section. \begin{theorem}\label{increasedegree} Let $D$ be an open or closed disc or open or closed half-plane. If a matrix polynomial $P(\lambda)$ of degree $d$ is hyperstable with respect to $D$, then for any scalar polynomials $p_1(\lambda), p_2(\lambda), \dots, p_\kappa(\lambda)$, $\kappa\geq d$, the matrix polynomial $Q(\lambda) := (T_{\kappa}P)(p_1(\lambda), p_2(\lambda), \dots , p_\kappa(\lambda))$ is hyperstable with respect to $E := p_1^{-1}(D) \cap p_2^{-1}(D) \cap \dots \cap p_\kappa^{-1}(D)\subseteq\mathbb{C}$. \end{theorem} \begin{proof} Fix a nonzero vector $x\in\mathbb{C}^n\setminus\{0\}$. By Theorem~\ref{Tkappa2}, the multivariate polynomial $(T_{\kappa}P)(z_1, z_2, \dots , z_\kappa)$ is hyperstable with respect to $D^\kappa$, i.e., there exists $y\in\mathbb{C}^n\setminus\{0\}$ such that $y^*(T_{\kappa}P)(z_1, z_2, \dots , z_\kappa)x \neq 0$ for all $z_1, z_2, \dots, z_\kappa \in D$. In particular, for any $\lambda\in\mathbb{C}$ such that $p_j(\lambda)\in D$ ($j=1,\dots,\kappa$) one has $y^*(T_{\kappa}P)(p_1(\lambda), p_2(\lambda), \dots , p_\kappa(\lambda))x \neq 0$. Hence, $Q(\lambda)$ is hyperstable with respect to $E$. \end{proof} \section{Hyperstability and stability of some classes of matrix polynomials}\label{sPos} Now we will apply the theory developed in the previous Sections.
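Before turning to concrete classes of polynomials, it may help to record explicitly, as an illustration, how the polarization operator acts in the only case that will be needed below, namely $\kappa=2$: for a quadratic matrix polynomial $P(\lambda)=\lambda^2A_2+\lambda A_1+A_0$, formula \eqref{Tdef1} reads $$ (T_2P)(z_1,z_2)=z_1z_2\,A_2+\frac{z_1+z_2}{2}\,A_1+A_0, $$ and restricting to the diagonal gives back $(T_2P)(\lambda,\lambda)=P(\lambda)$.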
First let us deal with a relatively simple matrix polynomial, connected with the Moore-Gibson-Thompson eigenvalue equation \cite{Ben21,KalN19}. \begin{proposition}\label{MGT} Let $P(\lambda)=\lambda^3 I_n +a I_n \lambda^2 +\lambda b R + cR$ with $R\in\mathbb{C}^{n,n}$ positive definite. If $a>1$ and $b>c$ then $P(\lambda)$ is hyperstable with respect to the open right half-plane. \end{proposition} \begin{proof} It was shown in \cite{MehMW22} that $P(\lambda)$ is stable with respect to the open right half-plane. By Theorem~\ref{uppert}\eqref{pq} it is also hyperstable. \end{proof} Second, let us explore a simple triangle inequality argument, which shows an application of Theorem~\ref{poly2}. Note that stability of $P(\lambda)$ below is almost obvious, but hyperstability needs some justification. \begin{proposition}\label{subadd} Let $A_0, A_1, A_2 \in \mathbb{C}^{n \times n}$ and let $D = \set{z \in \mathbb{C}: |z|<r}$, $r>0$, be such that $r\norm{A_1}+r^2\norm{A_2}<\sigma_{\min}(A_0)$. Then the multivariate polynomial $z_1^2A_2+z_2A_1+A_0$ is stable with respect to $D^2$ and consequently $P(\lambda) = \lambda^2 A_2 + \lambda A_1 + A_0$ is a regular matrix polynomial, hyperstable with respect to $D$. \end{proposition} \begin{proof} Note that, for $(z_1,z_2)\in D^2$, $$ \norm{z_1^2 A_2+z_2A_1}\leq r\norm {A_1}+r^2\norm{A_2}<\sigma_{\min}(A_0), $$ which implies stability of the multivariate polynomial $z_1^2A_2+z_2A_1+A_0$ with respect to $D^2$. Application of Theorem~\ref{poly2}\eqref{0?D} finishes the proof. \end{proof} The quadratic polynomials with $A_2$, $A_0$ Hermitian positive semi-definite and the Hermitian part of $ A_1$ positive semi-definite were studied extensively in \cite{MehMW18,MehMW21}. It was shown therein that they are stable with respect to the right half-plane. We show more, namely hyperstability with respect to the right half-plane. \begin{theorem}\label{half-plane} Let $R_j \in \mathbb{C}^{n, n}$ $(j=0,1,2)$ be Hermitian positive semi-definite and let $J\in \mathbb{C}^{n, n}$ be skew-Hermitian. Assume that $\ker R_0\cap\ker R_1\cap \ker R_2\cap\ker J=\set0$. Then the polynomial $z_1z_2 R_2 +z_2 (J+R_1) +R_0$ is stable with respect to $H_{\pi/2}^2$. In consequence, $P(\lambda) = \lambda^2 R_2 + \lambda (J+R_1) + R_0$ is a regular polynomial, hyperstable with respect to the open right half-plane $H_{\pi/2}$. \end{theorem} \begin{proof} We will make use of Theorem \ref{poly2} (b); note that $0 \not\in D = H_{\pi/2}$. Consider the polynomial $\tilde P(z_1,z_2)=z_1z_2R_2+z_2 (J+R_1) +R_0$, and suppose it is not stable, i.e., $\tilde P(\mu_1,\mu_2)x=0$ for some $(\mu_1,\mu_2) \in H_{\pi/2}^2$ and $x\neq 0$. Multiplying from the left by $x^*$, dividing by $\mu_2\neq 0$ and taking the real part one obtains \begin{equation}\label{eigenvalue1} \mbox{\rm re\,}(\mu_1) x^*R_2 x + x^*R_1x + \mbox{\rm re\,}\left(\frac1{\mu_2}\right) x^*R_0 x=0. \end{equation} Since both $\mbox{\rm re\,}( \mu_1)$ and $\mbox{\rm re\,}(\frac1{\mu_2})$ are positive and $R_2,R_1,R_0$ are positive semi-definite, we obtain $x^*R_2 x = x^*R_1x = x^*R_0 x=0$. Hence, $R_2x=R_1x=R_0x=0$. But this implies that $0=\tilde P(\mu_1,\mu_2)x=\mu_2Jx$, hence also $Jx=0$, which contradicts the assumption on the kernels. \end{proof} \begin{remark}\rm Let us remark two things. First, the condition $\ker R_0\cap\ker R_1\cap \ker R_2\cap\ker J=\set0$ is equivalent to regularity of $P(\lambda)$, see \cite{MehMW21}. Second, although we multiplied in the proof from the left by $x^*$, we have \emph{not} shown that the numerical range of $P(\lambda)$ is outside the open right half-plane.
\end{remark} \begin{corollary}\label{c-hp} Let $R\in\mathbb C^{n,n}$ be Hermitian positive semi-definite, let $J\in \mathbb{C}^{n, n}$ be skew-Hermitian and let $Q, A_0, A_2 \in \mathbb{C}^{n, n}$ be such that $Q^*A_2$ and $Q^*A_0$ are Hermitian positive semi-definite. Assume also that $\ker Q^* A_0\cap\ker(Q^* R Q)\cap \ker (Q^*JQ)\cap\ker Q^*A_2=\set0$. Then the matrix polynomial $P(\lambda) = \lambda^2 A_2 + \lambda (J+R)Q + A_0$ is a regular polynomial, hyperstable with respect to the open right half-plane $H_{\pi/2}$. \end{corollary} \begin{proof} First, observe that the polynomial $ \lambda^2 Q^*A_2 + \lambda Q^*(J+R)Q + Q^*A_0$ satisfies the assumptions of Theorem~\ref{half-plane}. Second, apply Lemma~\ref{lQ}. \end{proof} In what follows, the argument of a complex number is taken such that $\mbox{\rm Arg } z \in (-\pi, \pi]$ and for the sake of simplicity we set $\mbox{\rm Arg } 0 := 0$. It was shown in~\cite{MehMW22} that a regular cubic matrix polynomial with all coefficients positive semi-definite is stable with respect to $D = \{z \in \mathbb{C} : -\pi/3 < \mbox{\rm Arg } z < \pi/3\}$. We present here a related result. \begin{theorem}\label{ker} Let $R_j \in \mathbb{C}^{n, n}$ $(j=1,2,3)$ be Hermitian positive semi-definite, let $A_0\in\mathbb C^{n,n}$ be Hermitian and let $G\in \mathbb{C}^{n, n}$ be skew-Hermitian with $-\ii G$ positive semi-definite. Assume that $\ker G \cap \ker A_0\cap \ker R_1\cap\ker R_2\cap \ker R_3 = \set0$. Then the following statements hold: \begin{enumerate}[\rm (i)] \item the multivariate matrix polynomial $P_1(z_1, z_2)= (z_1^3 z_2^3 + z_1^3 + z_2^3)R_3 + (z_1^2 z_2^3 +z_1^2)R_2 + (z_1^3z_2 + z_2) R_1 + A_0+G$ is stable with respect to $D_1^2$, where $D_1 = \{z \in \mathbb{C} : 0 < \mbox{\rm Arg } z < \pi/6\}$; \item\label{P2} the multivariate matrix polynomial $P_2(z_1, z_2) = z_2^3 R_3 + z_1 z_2 R_2 + z_2 R_1 + A_0+G$ is stable with respect to $D_2^2$, where $D_2 = \{z \in \mathbb{C} : 0 < \mbox{\rm Arg } z < \pi/3\}$; \item the multivariate matrix polynomial $ P_3(z_1, z_2) = z_1 z_2^3 R_3 + z_1 z_2^2 R_2 + z_2^2 R_1 + z_1 (A_0 + G)$ is stable with respect to $D_3^2$, where $D_3 = \{z \in \mathbb{C} : 0 < \mbox{\rm Arg } z < \pi/4\}$. \end{enumerate} In particular, $P(\lambda) = \lambda^3 R_3+\lambda^2 R_2+\lambda R_1 +A_0+G$ is stable with respect to $D = \{z \in \mathbb{C} : 0 < \mbox{\rm Arg } z < \pi/3\}$. \end{theorem} \begin{proof} We show only (i); the proofs of (ii) and (iii) are very similar. Suppose that the matrix polynomial $(z_1, z_2) \mapsto P_1(z_1, z_2)$ has an eigenvalue $(\mu_1, \mu_2) \in D_1^2$. Thus there exists a nonzero vector $x \in \mathbb{C}^n\setminus\{0\}$ such that \begin{equation}\label{mmm} (\mu_1^3\mu_2^3 + \mu_1^3 + \mu_2^3)R_3x + (\mu_1^2\mu_2^3 + \mu_1^2)R_2x + (\mu_1^3\mu_2 + \mu_2)R_1x + (A_0+G)x = 0\text{.} \end{equation} Multiplying by $x^*$ and taking the imaginary part of both sides of the equation above, we obtain \begin{equation*} (x^*R_3x)\;\mbox{\rm im\,}(\mu_1^3\mu_2^3 + \mu_1^3 + \mu_2^3) + (x^*R_2x)\;\mbox{\rm im\,}(\mu_1^2\mu_2^3 + \mu_1^2) + (x^*R_1x)\;\mbox{\rm im\,}(\mu_1^3\mu_2 + \mu_2) - x^*(\ii G)x = 0\text{.} \end{equation*} Note that by assumption $x^*R_3x, x^*R_1x, x^*R_2x, -x^*(\ii G)x \geq 0$ and since $0< \mbox{\rm Arg } \mu_1, \mbox{\rm Arg } \mu_2 < \pi/6$, we have $\mbox{\rm im\,}(\mu_1^3\mu_2^3 + \mu_1^3 + \mu_2^3), \mbox{\rm im\,}(\mu_1^2\mu_2^3 + \mu_1^2), \mbox{\rm im\,}(\mu_1^3\mu_2 + \mu_2) > 0$.
Therefore, we have $x^*R_3x = x^*R_2x = x^*R_1x = x^*(\ii G)x = 0$ and consequently $R_3x = R_2x = R_1x = G x = 0$. Due to the equation \eqref{mmm} we then also have $A_0x = 0$, which contradicts the assumption on the kernels. The `In particular' part follows by substituting $\lambda$ for $z_1$ and $z_2$ in \eqref{P2}. \end{proof} Directly from Theorems~\ref{poly3} and \ref{ker}\eqref{P2} we get the following Corollary. \begin{corollary} Let $R_j \in \mathbb{C}^{n, n}$ $(j=0,1,2)$ be Hermitian positive semi-definite. Then the polynomial $P(\lambda) = \lambda^3 R_0 + \lambda^2 R_2 + \lambda R_1 + R_0$ is hyperstable with respect to $ D = \{z \in \mathbb{C} : 0 < \mbox{\rm Arg } z < \pi/3\}$. \end{corollary} We show now how the polarization operator may be used to increase the degree of the polynomial. The price for this is a narrowing of the set $D$. \begin{corollary}\label{deg3} Let $R_j \in \mathbb{C}^{n, n}$ $(j=0,1,2)$ be Hermitian positive semi-definite, and let $J\in\mathbb{C}^{n,n}$ be skew-Hermitian. Then the matrix polynomial $Q(\lambda) = \lambda^3 R_2 + (\lambda^2+\lambda)(R_1+J) + R_0$ is a regular cubic matrix polynomial, hyperstable with respect to the angle $E := \{z\in\mathbb{C}:-\pi/4 < \mbox{\rm Arg } z <\pi/4\}\setminus\{0\}$. \end{corollary} \begin{proof} By Theorem~\ref{half-plane} the polynomial $P(\lambda)= \lambda^2 R_2 + 2 \lambda (R_1+J) + R_0$ is hyperstable with respect to the open right half-plane $H_{\pi/2}$. We apply now Theorem~\ref{increasedegree} with $p_1(\lambda)=\lambda^2$, $p_2(\lambda)=\lambda$, getting $$ (T_2 P)(z_1,z_2)= z_1z_2 R_2 + (z_1+z_2) (R_1+J) + R_0 $$ so that $ (T_2P)(\lambda^2,\lambda) = Q(\lambda)$. Finally, observe that the angle $E$ is precisely the set $p_1^{-1}(H_{\pi/2})\cap p_2^{-1}(H_{\pi/2})$ from Theorem~\ref{increasedegree}. \end{proof} In Corollary~\ref{deg3} the operator $T_2$ was used to increase the degree of the quadratic polynomial $P(\lambda)$, which is known to be hyperstable. Let us now show an opposite action, where we use $T_2$ to decrease the degree. \begin{corollary} Let $R_j \in \mathbb{C}^{n, n}$ $(j=0,1)$ be Hermitian positive semi-definite, and let $J\in\mathbb{C}^{n,n}$ be skew-Hermitian. Consider a linear pencil $P(\lambda)=\lambda( R_1+J) + (R_0+a J)$, where $a\geq0$. Then the eigenvalues of $P(\lambda)$ are contained in the closed left half-plane. \end{corollary} \begin{proof} The case $a=0$ was considered in \cite{MehMW22}. Now take $a>0$ and define the matrix polynomial $\tilde P(\lambda)=\lambda^2\frac{R_1}a+\lambda \cdot 2 J+ R_0$; note that it clearly satisfies the assumptions of Theorem~\ref{half-plane}. Hence it is hyperstable, and by Theorem~\ref{Tkappa2} the matrix polynomial $$ (T_2 \tilde P)(z_1,z_2)=z_1z_2 \frac{R_1}a+(z_1+z_2) J+ R_0 $$ is hyperstable with respect to $H_{\pi/2}^2$. In particular, if we set $z_2=a$ and replace $z_1$ by $\lambda$, then we obtain that the original pencil $\lambda a \frac{R_1}a+ (\lambda+a)J+ R_0=\lambda (R_1+J)+(R_0+aJ)$ is stable with respect to $H_{\pi/2}$, i.e., its eigenvalues are contained in the closed left half-plane. \end{proof} \section{Acknowledgment} Both authors acknowledge the financial support of the Priority Research Area SciMat under the program Excellence Initiative Research University at the Jagiellonian University in Krakow, Poland, decision no. U1U/P05/NO/03.55.
https://arxiv.org/abs/0711.3544
Alternative Method for Determining the Feynman Propagator of a Non-Relativistic Quantum Mechanical Problem
A direct procedure for determining the propagator associated with a quantum mechanical problem was given by the Path Integration Procedure of Feynman. The Green function, which is the Fourier Transform with respect to the time variable of the propagator, can be derived later. In our approach, with the help of a Laplace transform, a direct way to get the energy dependent Green function is presented, and the propagator can be obtained later with an inverse Laplace transform. The method is illustrated through simple one dimensional examples and for time independent potentials, though it can be generalized to the derivation of more complicated propagators.
\section{Introduction} It is well known that quantum mechanics acquired its f\/inal formulation in 1925--1926 through the fundamental papers of Schr\"odinger and Heisenberg. Originally these papers appeared as two independent views of the structure of quantum mechanics, but in 1927 Schr\"odinger established their equivalence, and since then one or the other of the papers mentioned has been used to analyze quantum mechanical systems, depending on which method gave the most convenient way of solving the problem. Thus the existence of alternative procedures for solving a given problem can be quite fruitful. In the 1940s Richard Feynman, and later many others, derived a propagator for quantum mechanical problems through a path integration procedure. In contrast with the Hamiltonian emphasis in the original formulation of quantum mechanics, Feynman's approach could be referred to as Lagrangian and it emphasized the propagator $K(x,t,x', t')$, which takes the wave function $\psi (x', t')$ at the point $x'$ and time $t'$ to the point $x$ at time $t$, i.e. \begin{gather} \psi (x,t) = \int K (x,t,x',t') \psi (x', t')dx'. \label{mf1} \end{gather} While this propagator could be derived by the standard methods of quantum mechanics, Feynman invented a procedure based on summing over all time dependent paths connecting the points $x'$ and $x$, and this became an alternative formulation of quantum mechanics whose results coincided with those of the older version whenever both were applicable, but which also became relevant for problems that the original methods could not solve. Feynman's procedure f\/irst led to the propagator $K(x, t, x', t')$ and then, by a Laplace transform, to the corresponding Green function $G(x,x',E)$, with $E$ being the energy. We found Feynman's method for deriving the propagator, though entirely correct, somewhat cumbersome to use, and thus tried to look for alternative procedures. As we mentioned before, in Feynman's approach the f\/irst step is deriving the propagator $K(x,t,x', t')$ and later the energy dependent Green function $G(x,x', E)$. In this paper we invert the procedure: we start by deriving $G(x,x', E)$, which is a simpler problem, at least in the one dimensional single particle case we will be discussing here. Once we have $G(x,x', E)$, the propagator $K(x,t,x', t')$ is given by the inverse Laplace transform and can be written as \begin{gather} K (x,x',t) = \frac{1}{2\pi\hbar i} \int^{i\hbar c+\infty}_{i\hbar c -\infty} \exp(-iE t/\hbar) G (x,x',E) dE, \label{mf2} \end{gather} where $c$ is a constant that allows the line $i \hbar c +E$ in the complex plane of $E$ to lie above all the poles of $G(x,x', E)$. For compactness of notation, from now on we will take $t'=0$ and write $K(x,t,x', t')$ as $K(x,x', t)$. The really hard part in our approach will be the determination of $K(x,x', t)$ by \eqref{mf2}, but this is a well def\/ined problem in mathematics and procedures have been developed to solve it. This is then the program we plan to follow. In Section~\ref{section2} we show that for a single particle in one dimension (the initial case of our analysis) all we need to know are two independent solutions $u^\pm_E$ of the equation \[ \left[\frac{-\hbar^2}{2m} \frac{d^2}{dx^2} + V(x) - E\right] u^\pm_E (x) = 0 \] to be able to derive $G(x,x', E)$ in Section~\ref{section3}. We then consider in Section~\ref{section4} three elementary cases: the free one dimensional particle, the corresponding one with a $\delta$ interaction at the origin $x=0$, and the harmonic oscillator.
In the f\/irst two cases the integral \eqref{mf2} is trivial to evaluate. In the case of the harmonic oscillator the evaluation of \eqref{mf2} requires a more careful analysis but it can be carried out. In all three cases our f\/inal result is identical to the one presented in the book of Grosche and Steiner~\cite{1} that use Feynmans method to derive the results. Thus we have an alternative method for deriving $K(x,x', t)$ but it remains to be shown that it can be applied to more particles in more dimensions and of arbitrary angular momenta and whether the analysis can be extended to relativistic as well as time dependent problems. What ever results may be obtained in the future, it seems that our alternative approach follows more closely the standard procedures of quantum mechanics and could be useful in a~simpler derivation of the propagators. \section{The Hamiltonian of the problem and the equation\\ for the propagator}\label{section2} We start with the simplest Hamiltonian of one particle in one dimension, i.e. \[ H=-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x) \] with thus far an arbitrary potential $V(x)$. From the equation (\ref{mf1}) that def\/ines the properties of the propagator it must satisfy the equation \begin{gather} \bigg[-\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + V(x) - i\hbar \frac{\partial}{\partial t}\bigg] K (x,x',t)= 0 \label{mf5} \end{gather} and besides if $t=0$ it becomes \begin{gather} K (x,x',0) = \delta (x-x'). \label{mf6} \end{gather} We proceed now to take the Laplace transform of (\ref{mf5}) \begin{gather*} \int^\infty_0 \exp (-st) \left[ - \frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + V(x) - i\hbar \frac{\partial}{\partial t} \right] K(x,x',t)dt \nonumber\\ \qquad {}= - \frac{\hbar^2}{2m} \frac{\partial^2 \bar G (x,x',s)}{\partial x^2} + V(x) \bar G(x,x',s) - i\hbar \int^\infty_0 \exp (-st) \frac{\partial K(x,x',t')}{\partial t} dt=0, \end{gather*} where \begin{gather} \bar G(x,x',s) \equiv \int^\infty_0 e^{-st} K (x, x', t)dt. \label{mf8} \end{gather} We note though that \begin{gather} \int^\infty_0 \exp(-st) \frac{\partial K(x,x',t)} {\partial t} dt = \int^\infty_0 \frac{\partial}{\partial t} \big[ e^{-st} K(x,x',t)\big] dt + s\int^\infty_0 e^{-st} K(x,x',t) dt\nonumber\\ \qquad{} = -\delta (x-x') + s \bar G(x,x',s), \label{mf9} \end{gather} where we made use of \eqref{mf6} and \eqref{mf8}. With the help of \eqref{mf9} we see that $\bar G(x,x',s)$ satisf\/ies \begin{gather} \left[ - \frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x) - i\hbar s\right] \bar G (x,x',s) = -i \hbar \delta (x-x'), \label{mf10} \end{gather} where we now have that the partial derivative with respect to $x$ becomes the ordinary one as there is no longer a time variable. We integrate~(\ref{mf10}) with respect to~$x$ in the interval $x'-\epsilon \leq x \leq x' + \epsilon$ and in the limit $\epsilon \to 0$ obtain two equations \begin{gather} \left[-\frac{\hbar^2}{2m} \left(\frac{d\bar G}{dx}\right)_{x=x'+0} + \frac{\hbar^2}{2m} \left(\frac{d\bar G}{dx}\right)_{x=x'-0}\right] = -i\hbar, \label{mf11a}\\ \left[-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x) - i\hbar s \right] \bar G(x,x',s) =0, \qquad x\ne x'. \label{mf11b} \end{gather} We proceed now to indicate how we can derive the explicit expression of $K(x,x',t)$ with the help of the Green function $\bar G(x,x',s)$ of the corresponding problem satisfying (\ref{mf11a}) and (\ref{mf11b}). 
\section{Determination of the Green function\\ and the inverse Laplace transform for the propagator}\label{section3} Our interest is not to stop at equations (\ref{mf11a}), (\ref{mf11b}) for $\bar G (x,x',s)$ but actually to get $K(x,x',t)$ for which we can use the inverse Laplace transform~\cite{2} to get \begin{gather} K(x,x',t) = \frac{1}{2\pi i} \int^{c+i\infty}_{c-i\infty} \bar G (x,x',s) e^{st} ds, \label{mf12} \end{gather} where the integration takes place along a line in the complex plane $s$ parallel to the imaginary axis and at a distance $c$ to it so that all singularities of $\bar G(x,x',s)$ in the $s$ plane are on the left of it. To have a more transparent notation rather than the $s$ plane we shall consider an energy variable $E$ proportional to it through the relation \[ E= i \hbar s \qquad {\mbox{\rm or}} \qquad s=-i (E/\hbar) \] and def\/ine $G(x,x',E)$ by \[ -i G (x,x',E) \equiv \bar G (x,x', -iE/\hbar). \] The energy Green function must be symmetric under interchange of $x$ and $x'$, i.e. \begin{gather} G (x,x',E )= G (x',x,E) \label{mf15} \end{gather} which combines with the two equations (\ref{mf11a}), (\ref{mf11b}) to give in this notation \begin{gather} \left[\frac{dG}{dx}\right]_{x=x'+0} - \left[\frac{dG}{dx}\right]_{x=x'-0} = -\frac{2m}{\hbar}, \label{mf16a}\\ \left[-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x) - E\right] G(x,x',E) =0 \qquad {\rm for}\quad x\ne x'. \label{mf16b} \end{gather} Let us f\/irst consider the case when $x<x'$ and proceed to show that the equations (\ref{mf15})--(\ref{mf16b}) determine in a unique way the Green function of the problem. For this purpose we introduce with the notation $u^\pm_E(x)$ two linearly independent solutions of the equation (\ref{mf16b}) \[ \left[-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x) - E\right] u^\pm_E (x) = 0. \] From this equation we see that \[ u^-_E (x) \frac{d^2u^+_E (x)}{dx} - u^+_E \frac{d^2u^-_E (x)} {dx} = \frac{d}{dx} \left(u^-_E \frac{du^+_E}{dx} - u^+_E \frac{d u^-_E}{dx}\right) =0. \] Thus the Wronskian of the problem def\/ined by \begin{gather} W (E) = u^-_E (x) \frac{du^+_E}{dx} - u^+_E (x) \frac{du^-_E}{dx} \label{mf19} \end{gather} is independent of $x$. As $G(x,x',E)$ satisf\/ies (\ref{mf16b}) we can write it for $x<x'$ as \begin{gather} G(x,x',E) = F(x', E) u^+_E (x), \label{mf20} \end{gather} choosing one of the two solutions of equation (\ref{mf16b}) and $F(x',E)$ is as yet an undetermined function of $x'$, $E$. We see from the symmetry of $G(x,x',E)$ that it must satisfy the same equation (\ref{mf16b}) in $x'$ so that from (\ref{mf20}) we get \[ \left[-\frac{\hbar^2}{2m}\frac{d^2}{dx'^2} + V(x') - E\right] F(x',E)=0 \] and thus $F(x',E)$ must a be linear combination of the two independent solutions $u^\pm_E(x)$, i.e. \[ F(x',E) = a_+ (E) u^+_E (x') + a_-(E) u^-_E (x') \] and our Green function becomes \begin{gather} G(x,x',E) = \big[ a_+ (E) u^+_E (x') + a_- (E) u^-_E (x') \big] u^+_E(x), \label{mf23} \end{gather} while for the other case, i.e.\ $x>x'$, the symmetry of the Green function demands \begin{gather} G(x,x',E) = \big[ a_+ (E) u^+_E (x) + a_- (E) u^-_E (x) \big] u^+_E(x'). \label{mf23bis} \end{gather} Replacing (\ref{mf23}) and (\ref{mf23bis}) in (\ref{mf16a}) we f\/ind that the coef\/f\/icient $a_+(E)$ vanishes and $a_-(E)$ satisf\/ies \begin{gather} a_-(E) W(E) = -\frac{2m}{\hbar}. 
\label{mf25} \end{gather} Thus from (\ref{mf23}), (\ref{mf23bis}) and (\ref{mf25}) we get that \begin{gather} G(x,x',E) = - \frac{2m}{\hbar} W^{-1} (E) \left\{ \begin{array}{c} u^-_E (x') u^+_E (x) \quad {\rm if } \ x < x', \vspace{1mm}\\ u^-_E (x) u^+_E (x') \quad {\rm if } \ x >x'. \end{array} \right. \label{mf26} \end{gather} We thus have the explicit Green function of our problem once we can obtain two independent solutions of the equation (\ref{mf16b}). Once $G(x,x',E)$ has been determined, the propagator $K(x,x',t)$ is given by the inverse Laplace transform (\ref{mf12}) which in terms of the $E$ variable becomes \begin{gather} K(x,x',t) = \frac{1}{2\pi \hbar i} \int^{i\hbar c+ \infty}_{i\hbar c-\infty} \exp (-iE t/\hbar) G(x,x',E) dE, \label{mf27} \end{gather} where now the integral takes place in the $E$ plane over a line parallel to the real axis with all the poles of $G(x,x',E)$ below it. We proceed to give some specif\/ic examples of application of our method. \section[Specific examples]{Specif\/ic examples}\label{section4} \subsection*{a) The free particle} The potential $V(x)$ is taken as zero and so the equation (\ref{mf16b}) becomes \[ \left[-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} - E\right] G(x,x',E) = 0. \] We introduce the variable $k$ through the def\/inition \[ E= \frac{\hbar^2k^2}{2m} , \qquad dE = \frac{\hbar^2k}{m} dk \] and thus the $u^\pm_E(x)$ for this problem satisfy the equation \[ \left[ \frac{d^2}{dx^2} + k^2\right] u^\pm_E (x) = 0, \qquad u^\pm_E (x) = \exp (\pm i k x) \] with the Wronskian (\ref{mf19}) given by \[ W(E) = 2 i k. \] Thus from the two cases of (\ref{mf26}) our function $G(x,x',E)$ is written compactly as \begin{gather} G(x,x',E) = \frac{im}{\hbar k} \exp [ik |x-x'|]. \label{mf32} \end{gather} The propagator $K(x,x',t)$ is given by (\ref{mf27}) in terms of $G(x,x',E)$; substituting (\ref{mf32}) in it and writing the integral in terms of $k$ we get \begin{gather} K(x,x',t) = \frac{1}{2\pi} \int^\infty_{-\infty} \exp [i k |x-x'| - i (\hbar k^2/2m) t] dk, \label{mf33} \end{gather} where, as $G(x,x',E)$ has no singularities, the energy can be integrated over the real line \mbox{$-\infty \leq E \leq \infty$} while $k$ has double the range of $E$. The integral (\ref{mf33}) can be determined by completing the square and we get \begin{gather} K(x,x',t) = \sqrt{\frac{m}{2\pi i \hbar t} } \exp \left[ \frac{im(x-x')^2}{2\hbar t}\right] \label{mf34} \end{gather} which has also been derived by many other methods. \subsection*{b) The case of the delta potential} We wish now to discuss the ef\/fect on the Feynman propagator of a potential \[ V(x) = Q(x) + b\delta (x), \] where $Q(x)$ is a continuous function of $x$ and we assume $b>0$ to avoid bound states of the $\delta$ potential. The equation for $u^\pm_E$ becomes now \[ [H + b\delta (x) - E] u^\pm_E (x) = 0, \] where \[ H=- \frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + Q (x). \] The Green function $G(x,x', E)$ satisf\/ies the equations (\ref{mf16a}), (\ref{mf16b}) which can be written as the single equation \begin{gather} [H + b \delta (x) -E] G (x,x',E) = -i \hbar \delta (x-x') \label{mf40} \end{gather} and (\ref{mf16a}) holds if we integrate (\ref{mf40}) with respect to the variable $x$ in the interval $x' - \epsilon \leq x \leq x'+ \epsilon$ in the limit $\epsilon\to 0$, and the one corresponding to (\ref{mf16b}) holds when $x\ne x'$.
For $x \not= 0$ we have (\ref{mf40}) with no delta potential and therefore $G$ can be written as \begin{gather} G(x,x',E) = G_Q(x,x',E) + F(x,x',E), \qquad x \not= 0, \label{mf41} \end{gather} where $G_Q(x,x',E)$ is the Green function satisfying \begin{gather} [H - E ] G_Q(x,x',E) = -i \hbar \delta(x-x') \label{mf42} \end{gather} while $F(x,x',E)$ is a solution of the corresponding homogeneous equation, i.e. \[ [H - E ] F(x,x',E) = 0, \qquad x \not= x'. \] and the form of $F(x,x',E)$ is to be determined. The continuity of $G(x,x',E)$ at $x=0$ allows to write (\ref{mf41}) for all values of $x, x'$ and with this in mind we can replace (\ref{mf41}) in (\ref{mf40}) to obtain \begin{gather} [H - E - b \delta(x) ] [ G_Q(x,x',E) + F(x,x',E) ] =- i \hbar \delta(x-x') \nonumber \\ \qquad{}= -i \hbar \delta(x-x') -b \delta(x)G_Q(0,x',E) + [H - E - b \delta(x) ]F(x,x',E), \label{mf44} \end{gather} where in the second line we have used (\ref{mf42}). The two lines in (\ref{mf44}) imply \[ [H - E - b \delta(x) ]F(x,x',E) = b \delta(x)G_Q(0,x',E) \] which is a version of (\ref{mf40}) but with a source term. Therefore the solution can be readily given as \[ F(x,x',E) = -\frac{i}{\hbar}\int^{\infty}_{-\infty} dx'' G(x,x'',E) ( b \delta(x'')G_Q(0,x',E)). \] Performing the integral in the last expression and using (\ref{mf41}) for $G$, we have \begin{gather} F(x,x',E) = \gamma \left[ G_Q(x,0,E) + F(x,0,E) \right] G_Q(0,x',E), \label{mf47} \end{gather} where $\gamma \equiv -ib/\hbar $. To determine $F(x,0,E)$ in the RHS of (\ref{mf47}) we set $x'=0$ and solve for $F$, obtaining \begin{gather} F(x,0,E) = \frac{\gamma G_Q(x,0,E) G_Q(0,0,E)}{1-\gamma G_Q(0,0,E)}. \label{mf48} \end{gather} Finally, (\ref{mf48}) can be replaced back in (\ref{mf47}) to get \begin{gather} F(x,x',E) = \frac{\gamma G_Q(x,0,E) G_Q(0,x',E)}{1-\gamma G_Q(0,0,E)}. \label{mf49} \end{gather} With this, $G(x,x',E)$ is given now in terms of $G_Q(x,x',E)$ and if we make $Q=0$ we can apply it to the case of the free particle. The Green function $G_0(x,x',E)$ is given by (\ref{mf32}) and the Green function of our problem becomes \begin{gather*} G(x,x',E) = \frac{im}{\hbar k} \exp [ik |x-x'|] - \frac{m^2 b}{2\hbar^4}\frac{\exp [ik (|x|+|x'|)]}{k \left(k +i\frac{mb}{\hbar^2} \right)}\nonumber\\ \phantom{G(x,x',E)}{} = \frac{im}{\hbar k} \exp [ik |x-x'|] - \frac{ i m }{2\hbar^2}\frac{\exp [ik (|x|+|x'|)]}{ k } + \frac{ i m }{2\hbar^2}\frac{\exp [ik (|x|+|x'|)]}{k+i\frac{mb}{\hbar^2}}. \end{gather*} This is the same result that appears in Grosche and Steiner~\cite[(6.12.4), p.~328]{1} and accounts explicitly for the four possible cases $\pm x, \pm x' > 0$. The inverse Laplace transform of this expression can be easily evaluated since it contains integrals of the form \[ \int^{\infty}_{-\infty} dk k^{-n} \exp \left( -C k^2 + D k \right), \] where $C$ and $D$ are independent of $k$ and the integrals are either gaussians or error functions when $n=0$ or $n=1$ respectively. The propagator is then \begin{gather*} K(x,x',t)= K_0(x,x',t)+ \frac{mb}{2\hbar^2} \exp \left( -\frac{mb}{ \hbar^2 }\left( |x|+|x'|+ \frac{imb^2 t}{2 \hbar^3}\right) \right) \nonumber \\ \phantom{K(x,x',t)=}{} \times { \rm erfc } \left\{ \sqrt{\frac{m}{2i \hbar t}} \left( |x|+|x'|-\frac{ibt}{ \hbar } \right) \right\} \end{gather*} with $K_0(x,x',t)$ as in (\ref{mf34}). This result coincides with the one that Grosche and Steiner~\cite[(6.12.2), p.~327]{1} obtained by the Path Integral Methods. 
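As a simple consistency check, note that in the limit $b \to 0$ the prefactor $\frac{mb}{2\hbar^2}$ of the second term above vanishes while the exponential and the error function remain bounded, so that the propagator reduces to the free particle result $K_0(x,x',t)$ of (\ref{mf34}); correspondingly, $\gamma = -ib/\hbar \to 0$ makes $F(x,x',E)$ in (\ref{mf49}) vanish and the Green function reduces to (\ref{mf32}).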
Note also that if in $G$ of (\ref{mf41}) we replace the $F(x,x',E)$ by (\ref{mf49}) we also get the result of Grosche and Steiner~\cite[(6.12.1), p.~327]{1}. \subsection*{c) The harmonic oscillator} The potential $V(x)$ is proportional to $x^2$ and thus $u^\pm_E(x)$ satisf\/ies the equation \begin{gather} \left[ - \frac{\hbar^2}{2m} \frac{d^2}{dx^2} + \frac12 m \omega^2 x^2 - E\right] u^\pm_E (x) = 0, \label{mf35a} \end{gather} where $\omega$ is the frequency of the oscillator. We introduce the variables \begin{gather} z = \sqrt{\frac{2m\omega}{\hbar} } x , \qquad p = \frac{E}{\hbar\omega} - \frac12 \label{mf36a} \end{gather} in terms of which the equation (\ref{mf35a}) takes the form \begin{gather} \left[ \frac{d^2}{dz^2} - \frac{z^2}{4} + p + \frac12 \right] u^\pm_E(x) =0. \label{mf37a} \end{gather} Two independent solutions of (\ref{mf37a}) are given by parabolic cylinder functions~\cite{3}, i.e. \[ u^\pm_E (x) = D_p(\pm z). \] The Wronskians of these functions, where the derivative is taken with respect to the $x$ rather than the $z$ variable, is from (\ref{mf19}) and (\ref{mf36a}) given by \begin{gather*} W(E) = \sqrt{\frac{2m\omega}{\hbar} } \left\{ D_p (-z) \left[ \frac{dD_p(z)}{dz}\right] - D_p (z) \frac{dD_p(-z)}{dz}\right\}\\ \phantom{W(E)}{} = \sqrt{\frac{2m\omega}{\hbar}} \bigg\{ D_p (-z) \left[ -D_{p+1}(z) + \frac12 zD_p(z)\right] + D_p(z) \left[ - D_{p+1} (-z) - \frac12 z D_p(-z)\right] \bigg\},\nonumber \end{gather*} where we made use \cite[(9.247-3), p.~1066]{3} together with the fact that \[ [dD_p (-z)/dz] = - [d D_p(-z)/d(-z)]. \] The Wronskian is independent on $z$ so we may take any value of the latter and we choose $z=0$ to get \[ W (E) = - \sqrt{\frac{2m\omega}{\hbar} } 2 D_p (0) D_{p+1} (0). \] We note from \cite[(9.240), p.~1064]{3} that we can write $D_p(z)$ in terms of the degenerate hypergeometric function $\Phi$ and, in particular \[ D_p (0) = 2^{p/2} \frac{\sqrt{\pi}}{\Gamma \big(\frac{1-p}{2}\big)} \Phi \left(- \frac{p}{2}, \frac12, 0\right) \] while from \cite[(9.210), p.~1058]{3} the $\Phi (-\frac{p}{2}, \frac12, 0) =1$ so that f\/inally \begin{gather} W(E) = - \sqrt{\frac{2m\omega}{\hbar}} \frac{2^{p+1} \pi}{\Gamma \big(\frac{1-p}{2}\big) \Gamma \big(- \frac{p}{2}\big)} = - \sqrt{ \frac{2m\omega}{\hbar}} \frac{\sqrt{\pi}}{\Gamma(-p)}, \label{mf43a} \end{gather} where for the last expression in (\ref{mf43a}) we made use of the doubling formula for the $\Gamma$ function given in \cite[(8.335), p.~938]{3}. From the general relation (\ref{mf26}) we then obtain that the Green function of the oscillator is given by \begin{gather} G(x,x',E) = \sqrt{\frac{2m}{\pi \hbar \omega}} \Gamma(-p) D_p (z) D_p (-z') \label{mf44a} \end{gather} with $p$, $z$ given by (\ref{mf36a}), $z<z'$ and $z'$ has the same def\/inition as $z$ but $x$ replaced by $x'$. When we consider the case $z>z'$ we have an expression which is similar to (\ref{mf44a}) but interchanging $z$ and $z'$. The complete formula can be written in a compact way introducing the variables \[ x_> = \max \{ x,x' \} , \qquad x_< = \min \{ x,x' \} \] and thus we have \begin{gather} G(x,x',E) = \sqrt{\frac{2m}{\pi \hbar \omega}} \Gamma \left(\frac12 - \frac{E}{\hbar\omega}\right) D_{\frac{E}{\hbar\omega} - \frac12} \left( \sqrt{\frac{2m\omega}{\hbar}} x_> \right) D_{\frac{E}{\hbar\omega} -\frac12} \left(-\sqrt{\frac{2m\omega}{\hbar}} x_<\right). \label{mf46a} \end{gather} The expression (\ref{mf46a}) is identical to \cite[(6.2.37), p.~179]{1} except for a factor of $\sqrt2$. 
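Since the parabolic cylinder functions $D_p(z)$ are entire in the index $p$, the only singularities of (\ref{mf46a}) as a function of $E$ are the poles of the factor $\Gamma\big(\frac12 - \frac{E}{\hbar\omega}\big)$, that is, the points where $\frac12 - \frac{E}{\hbar\omega}=-n$ with $n=0,1,2,\dots$, i.e. \[ E_n = \hbar\omega \left( n+\frac12\right), \qquad n=0,1,2,\ldots, \] which are precisely the energy levels of the harmonic oscillator.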
We want though to obtain $K (x,x',t)$ of (\ref{mf27}), but for this we can analyze the pole structure of (\ref{mf46a}) in order to evaluate the inverse Laplace transform of $G(x,x',E)$ by means of the residue theorem. The result of this procedure is the series known as the spectral decomposition of $K (x,x',t)$. This series can be evaluated by using an identity of Hermite polynomials in~\cite{4} and references cited therein. We carry out the analysis in the Appendix and get the f\/inal result \[ K(x,x',t) = \left( \frac{m\omega}{2\pi i \hbar \sin\omega t}\right)^{1/2} \exp \left\{ \frac{im\omega}{2\hbar \sin\omega t} \bigg[ (x'^2 +x^2) \cos \omega t - 2 xx'\bigg]\right\} \] which coincides with expression in \cite[p.~160]{1}. \section{Conclusion}\label{section5} We will state here the full steps to get the Feynman propagator of a non-relativistic single particle one dimensional problem. Our Hamiltonian is \[ H= \left[ - \frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + V(x)\right] \] with an arbitrary potential $V(x)$. We assume that two independent eigen-functions of energy $E$ can be found and denoted by~$u^\pm_E (x)$. We need then to determine the Wronskian \[ W(E)= u^-_E (x) \frac{du^+_E (x)}{dx} - u^+_E (x) \frac{d u^-(x)}{dx} \] whose value, as we know, is independent on $x$. Following the analysis of Section~\ref{section3} we can determine the energy Green function \[ G(x,x',E)= - \frac{2m}{\hbar} W(E)^{-1} \left\{ \begin{array}{l} u^-_E (x') u^+_E (x) \quad \hbox{if}\ x<x', \vspace{1mm}\\ u^-_E (x) u^+_E (x') \quad \hbox{if} \ x>x'. \end{array} \right. \] Using the Laplace transform the Feynman propagator becomes \begin{gather} K(x,x',t) = \frac{1}{2\pi \hbar i} \int^{i\hbar c+\infty}_{i\hbar c-\infty} \exp (-i E t/\hbar) G(x,x',E) dE. \label{mf78} \end{gather} Once we get $u^\pm_E (x)$ the only troublesome part of our calculation is the integral (\ref{mf78}) as we already saw in the discussion of the examples in Section~\ref{section4}. For time dependent Hamiltonians probably other techniques should be used but we have not developed them yet.
https://arxiv.org/abs/1610.07497
Analyzing the structure of multidimensional compressed sensing problems through coherence
Recently it has been established that asymptotic incoherence can be used to facilitate subsampling, in order to optimize reconstruction quality, in a variety of continuous compressed sensing problems, and the coherence structure of certain one-dimensional Fourier sampling problems was determined. This paper extends the analysis of asymptotic incoherence to cover multidimensional reconstruction problems. It is shown that Fourier sampling and separable wavelet sparsity in any dimension can yield the same optimal asymptotic incoherence as in one dimensional case. Moreover in two dimensions the coherence structure is compatible with many standard two dimensional sampling schemes that are currently in use. However, in higher dimensional problems with poor wavelet smoothness we demonstrate that there are considerable restrictions on how one can subsample from the Fourier basis with optimal incoherence. This can be remedied by using a sufficiently smooth generating wavelet. It is also shown that using tensor bases will always provide suboptimal decay marred by problems associated with dimensionality. The impact of asymptotic incoherence on the ability to subsample is demonstrated with some simple two dimensional numerical experiments.
\section{Introduction} Exploiting additional structure has always been central to the success of compressed sensing, ever since it was introduced by Cand\`es, Romberg \& Tao \cite{CandesRombergTao} and Donoho \cite{donohoCS}. Sparsity and incoherence has allowed us to recover signals and images from uniformly subsampled measurements. Recently \cite{AHPRBreaking} the notions of asymptotic sparsity in levels and asymptotic incoherence were introduced to provide enough flexibility to recover signals in a larger variety of inverse problems using subsampling in levels. The key is that optimal subsampling strategies will depend both on the signal structure (asymptotic sparsity) and the asymptotic incoherence structure. There is a wide variety of problems that lack incoherence, a fact that has been widely recognized \cite{AHPRBreaking, discrete, VanderEtAlSpreadSpectrum, VanderEtAlVariable, ChauffertGradientwaveform, ChauffertVDS, BoyerBlockStructured, PoonFrames, Siemens, Gitta_Fourier, PoonTV, WardFourierAndPolys}, however, they instead posses asymptotic incoherence. Examples include Magnetic Resonance Imaging (MRI) \cite{Unser,Lustig3}, X-ray Computed Tomography \cite{Stanford_CT, quinto2006xrayradon}, Electron Tomography \cite{lawrence2012et,leary2013etcs}, Fluorescence microscopy \cite{Candes_PNAS, Roman} and Surface scattering \cite{JonesTamtoglHAS}, to name a few. This phenomena often originates from the inverse problems being based upon integral transforms, for example, reconstructing a function $f$ from pointwise evaluations of its Fourier transform. In compressed sensing, such a transform is combined with an appropriate sparsifying transformation associated to a basis or frame, giving rise to an infinite measurement matrix $U$. The `coherence' of $U \in \bbC^{\bbN \times \bbN} $ or $U' \in C^{N \times N}$ is defined by \bes{ \mu(U) = \sup_{i,j \in \bbN} | U_{ij} |^2, \qquad \mu(U') = \max_{i,j=1,\ldots,N} | U'_{ij} |^2. } Small coherence is refered to as `incoherence'. Asymptotic incoherence is the phenomena of when \be{ \label{asympinco} \mu(P^\perp_N U), \mu(U P^\perp_N) \to 0, \qquad N \to \infty, } where $P^\perp_N$ denotes the projection onto the indices $N+1,N+2,...$. As a general rule, the faster asymptotic incoherence decays the more we are able to subsample (see (\ref{conditions31_levels})). The study of more precise notions of coherence has also been considered for the one and two dimensional discrete Fourier sampling, separable Haar sparsity problems in \cite{discrete}. This paper focuses on studying the structure of (\ref{asympinco}) in continuous multidimensional inverse problems and the impact this has on the ability to effectively subsample. In previous work \cite{onedimpaper}, the structure of incoherence was analyzed as a general problem and theoretical limits on how fast it can decay over all such inverse problems were established. Furthermore, the notion of optimal decay was introduced, which describes the fastest asymptotic incoherence decay possible for a given inverse problem. The notion of an optimal ordering was also introduced, which acted as a set of instructions on how to actually attain this optimal incoherence decay rate by ordering the sampling basis. Optimal decay rates and optimal orderings were determined for the one-dimensional Fourier-wavelet and Fourier-polynomial cases and the former was found to attain the theoretically optimal incoherence decay rate of $N^{-1}$. 
By `optimal' here we mean in the sense of over all inverse problems that has $U$ an isometry. Furthermore, it is the fastest decay as a power of $N$. This paper extends the basic findings in \cite{onedimpaper} to general $d$-dimensional problems. \begin{figure}[t] \begin{center} \begin{subfigure}[t]{0.43\textwidth} \begin{center} \includegraphics[width=\textwidth]{haarmat} \caption{\footnotesize Coherence Matrix for the 1D case} \end{center} \end{subfigure} \begin{subfigure}[t]{0.43\textwidth} \begin{center} \includegraphics[width=\textwidth]{haar1} \caption{\footnotesize 1D Column Coherences} \end{center} \end{subfigure}\\ \begin{subfigure}[t]{0.43\textwidth} \begin{center} \includegraphics[width=\textwidth]{haar2} \caption{\footnotesize 2D analogue of (b)} \end{center} \end{subfigure} \begin{subfigure}[t]{0.43\textwidth} \begin{center} \includegraphics[width=\textwidth]{haar3} \caption{\footnotesize Isosurface of 3D Case} \end{center} \end{subfigure} \end{center} \caption{Fourier - Separable Haar Cases: Incoherence Structures in Different Dimensions. In (b), the coherences are calculated by taking the maxima over the columns in (a), demonstrating decay that scales with frequency. In 2D this decay roughly matches that of the norm of the frequency as seen in (c). However in 3D there are hyperbolic spikes around the coordinate axes that lead to poor incoherence decay (see (d)) when using sampling patterns with rotational invariance or linear scaling. In the black and white plots, white indicates larger absolute value.} \label{haarimages} \end{figure} The optimal orderings in these one dimensional cases matched the leveled schemes that were already used for subsampling. For example, when sampling from the 1D Fourier basis, the sampling levels are usually ordered according to increasing frequency. In multiple dimensions there is no such consensus, instead many different sampling patterns are used, especially when it comes to 2D sampling patterns where radial lines \cite{radial}, spirals \cite{spiralmed} or other k-space trajectories are used. There are also a variety of other sampling techniques used in even higher dimensional (3-10D) problems, such as in the field of NMR spectroscopy \cite{highdimnmr}. If one desires to exploit asymptotic incoherence to its fullest it must be understood whether the coherence structure is consistent with the sampling pattern that one intends to use. This paper determines optimal orderings for the case of Fourier sampling and (separable) wavelet sparsity in any dimension. It is shown that the optimal decay is always that of the one-dimensional case, and moreover in two dimensions the optimal orderings are compatible with the structure of the 2D sampling patterns mentioned above. However, in higher dimensions problems with poor wavelet smoothness, such as the three dimensional separable Haar case, the class of optimal orderings\footnote{Technically we mean \emph{strongly} optimal here (see Definition \ref{strongoptimality}).} are no longer rotationally invariant (as in Figure \ref{haarimages}), hindering the ability to subsample with traditional sampling schemes. It is also shown that using a pair of tensor bases in general leads to a best possible incoherence decay that is always anisotropic and suboptimal. 
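To see why $N^{-1}$ is the natural benchmark here, it is instructive to look at the finite-dimensional picture first: if $U' \in \bbC^{N \times N}$ is an isometry, then each of its rows has unit $l^2$ norm, and therefore \bes{ \mu(U') = \max_{i,j=1,\ldots,N} | U'_{ij} |^2 \geq \frac{1}{N}, } with equality attained, for example, by the unitary discrete Fourier transform matrix. The optimal decay rate $N^{-1}$ discussed above is the asymptotic counterpart of this elementary bound.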
We should mention here that for many inverse problems in higher dimensions, using separable wavelets as a reconstruction basis fairs poorly against other bases such as shearlets \cite{shearlet} and curvelets \cite{candes2004new} for approximating images with curve-like features. However, it is not our goal to focus on a particular reconstruction basis in this paper, instead we wish to demonstrate how the incoherence structure can vary for different bases and the impact this has on its application in compressed sensing problems, for good or for worse. \subsection{Setup \& Key Concepts : Incoherence, Sparsity \& Orderings} Throughout this paper we shall work in an infinite dimensional separable Hilbert space $ \mathcal{H}$, typically $\mathcal{H}=L^2(\bbR^d)$, with two closed infinite dimensional subspaces $V_1, V_2$ spanned by orthonormal bases $B_1,B_2$ respectively, \[ V_1 = \overline{ \text{Span} \{ f \in B_1 \}}, \qquad V_2 = \overline{ \text{Span} \{ f \in B_2 \} }.\] We call $(B_1,B_2)$ a `basis pair'. If we are to form the change of basis matrix $U=(U_{i,j})_{i,j \in \bbN}$ we must list the two bases, which leads to following definitions: \begin{definition}[Ordering] Let $S$ be a set. Say that a function $\rho: \mathbb{N} \to S$ is an `ordering' of $S$ if it is bijective. \end{definition} \begin{definition}[Change of Basis Matrix] For a basis pair $(B_1,B_2)$, with corresponding orderings $\rho:\mathbb{N} \to B_1$ and $\tau:\mathbb{N} \to B_2$, form a matrix $U$ by the equation \begin{equation}\label{U} U_{m,n} := \langle \tau(n) , \rho(m) \rangle. \end{equation} Whenever a matrix $U$ is formed in this way we write `$U:=[(B_1,\rho),(B_2,\tau)]$'. \end{definition} Standard compressed sensing theory says that if $x \in \mathbb{C}^N$ is $s$-sparse, i.e.\ $x$ has at most $s$ nonzero components, then, with probability exceeding $1-\epsilon$, $x$ is the unique minimiser to the problem \bes{ \min_{\eta \in \bbC^N} \| \eta \|_{l^1} \quad \mbox{subject to} \quad P_{\Omega} U \eta = P_{\Omega} Ux, } where $P_{\Omega}$ is the projection onto $\mathrm{span}\{e_j:j\in \Omega\}$, $\{e_j\}$ is the canonical basis, $\Omega$ is chosen uniformly at random with $|\Omega| = m$ and \be{ \label{m_est_Candes_Plan} m \ge C \cdot \mu(U) \cdot N \cdot s \cdot \log (\epsilon^{-1}) \cdot \log (N), } for some universal constant $C>0$ (see \cite{Candes_Plan} and \cite{BAACHGSCS}). In \cite{AHPRBreaking} a new theory of compressed sensing was introduced based on the following three key concepts: \defn{[Sparsity in Levels] \label{d:Asy_Sparse} Let $x$ be an element of either $\bbC^N$ or $l^2(\bbN)$. For $r \in \bbN$ let $\mathbf{M} = (M_1,\ldots,M_r) \in \bbN^r$ with $1 \leq M_1 < \ldots < M_r$ and $\mathbf{s} = (s_1,\ldots,s_r) \in \bbN^r$, with $s_k \leq M_k - M_{k-1}$, $k=1,\ldots,r$, where $M_0 = 0$. We say that $x$ is $(\mathbf{s},\mathbf{M})$-sparse if, for each $k=1,\ldots,r$, \bes{ \Delta_k : = \mathrm{supp}(x) \cap \{ M_{k-1}+1,\ldots,M_{k} \}, } satisfies $| \Delta_k | \leq s_k$. We denote the set of $(\mathbf{s},\mathbf{M})$-sparse vectors by $\Sigma_{\mathbf{s},\mathbf{M}}$. } \defn{[Multi-level sampling scheme] \label{multi_level_dfn} Let $r \in \bbN$, $\mathbf{N} = (N_1,\ldots,N_r) \in \bbN^r$ with $1 \leq N_1 < \ldots < N_r$, $\mathbf{m} = (m_1,\ldots,m_r) \in \bbN^r$, with $m_k \leq N_k-N_{k-1}$, $k=1,\ldots,r$, and suppose that \bes{ \Omega_k \subseteq \{ N_{k-1}+1,\ldots,N_{k} \},\quad | \Omega_k | = m_k,\quad k=1,\ldots,r, } are chosen uniformly at random, where $N_0 = 0$. 
We refer to the set \bes{ \Omega = \Omega_{\mathbf{N},\mathbf{m}} := \Omega_1 \cup \ldots \cup \Omega_r } as an $(\mathbf{N},\mathbf{m})$-multilevel sampling scheme. } \defn{[Local coherence]\label{loc_coherence} Let $U$ be an isometry of either $\bbC^{N}$ or $l^2(\bbN)$. If $\mathbf{N} = (N_1,\ldots,N_r) \in \bbN^r$ and $\mathbf{M} = (M_1,\ldots,M_r) \in \bbN^r$ with $1 \leq N_1 < \ldots N_r $ and $1 \leq M_1 < \ldots < M_r $ the $(k,l)^{\rth}$ local coherence of $U$ with respect to $\mathbf{N}$ and $\mathbf{M}$ is given by \ea{ \label{localincoherence} \mu_{\mathbf{N},\mathbf{M}}(k,l) &= \sqrt{\mu(P^{N_{k-1}}_{N_{k}}UP^{M_{l-1}}_{M_{l}}) \cdot \mu(P^{N_{k-1}}_{N_{k}}U)},\qquad k,l=1,\ldots,r, } where $N_0 = M_0 = 0$ and $P^{a}_{b}$ denotes the projection matrix corresponding to indices $\{a+1,\hdots, b\}$. } The paper \cite{AHPRBreaking} provided the following estimate (with $C>0$ a universal constant) regarding the local number of measurements $m_k$ in the $k^{\rth}$ level in order to obtain a good reconstruction with probability $\ge 1-\epsilon$: \be{ \label{conditions31_levels} \frac{m_k}{N_k-N_{k-1}} \ge C \cdot \log(\epsilon^{-1}) \cdot \left( \sum_{l=1}^r \mu_{\mathbf{N},\mathbf{M}}(k,l) \cdot s_l\right) \cdot \log\left(N\right),\quad k=1,\ldots,r. } In particular, the sampling strategy (i.e.\ the parameters $\mathbf{N}$ and $\mathbf{m}$) is now determined through the local sparsities and coherences. Since the local coherence (\ref{localincoherence}) is rather difficult to analyze in its current form, we bound it above by the following: \ea{ \label{local2asymp} \mu_{\mathbf{N},\mathbf{M}}(k,l) &= \sqrt{\mu(P^{N_{k-1}}_{N_{k}}UP^{M_{l-1}}_{M_{l}}) \cdot \mu(P^{N_{k-1}}_{N_{k}}U)} \\ \label{local2asymp2} & \le \sqrt{\min(\mu(P^{N_{k-1}}_{N_{k}}U), \mu(U P^{M_{l-1}}_{M_{l}})) \cdot \mu(P^{N_{k-1}}_{N_{k}}U)} \\ \label{local2asymp3} & \le \sqrt{\min(\mu(P^\perp_{N_{k-1}}U), \mu(U P^\perp_{M_{l-1}})) \cdot \mu(P^\perp_{N_{k-1}}U)} } It is arguably (\ref{local2asymp3}) rather than (\ref{local2asymp2}) that is the roughest bound here, however we shall see that this becomes effectively an equality in what follows. The crucial improvement of (\ref{local2asymp3}) over (\ref{local2asymp}) is that it is completely in terms of the asymptotic incoherences $\mu(P_N^\perp U), \mu(U P_N^\perp)$, which depend only on the orderings of $B_1,B_2$ respectively, rather than both of them. Furthermore, we can treat the two problems of maximizing the decay of $\mu(P_N^\perp U), \mu(U P_N^\perp)$ separately and then combine the two resulting orderings together at the end. Next we describe how one determines the fastest decay of $\mu(P_N^\perp U)$. In \cite{onedimpaper} this was done via the notion of optimality up to constants: \begin{definition}[Optimal Orderings] \label{fasterdecay} Let $\rho_1, \rho_2 : \mathbb{N} \to B_1$ be any two orderings of a basis $B_1$ and $\tau$ any ordering of a basis $B_2$. Let $U_1:=[(B_1,\rho_1) , (B_2,\tau)], \ U_2:=[(B_1,\rho_2) , (B_2,\tau)]$ as in (\ref{U}). Also let $Q_N:=P_{N-1}^\perp$. If there is a constant $C>0$ such that \[ \mu(Q_NU_1) \le C \cdot \mu(Q_NU_2), \qquad \forall N \in \mathbb{N}, \] then we write $\rho_1 \prec \rho_2$ and say that `$\rho_1$ has a faster decay rate than $\rho_2$ for the basis pair $(B_1, B_2)$'. $\rho_1$ is said to be an `optimal ordering of $(B_1,B_2)$' if $\rho_1 \prec \rho_2$ for all other orderings $\rho_2$ of $B_1$. 
The relation $\prec$, defined on the set of orderings of $B_1$, is independent of the ordering $\tau$ since the values of $\mu(Q_N U_1), \mu(Q_N U_2)$ are invariant under permutation of the columns of $U_1, U_2$. \end{definition} It was shown in \cite{onedimpaper} that optimal orderings always exist. Optimal orderings are used to give us the optimal decay rate: \begin{definition}[Optimal Decay Rate] Let $f,g : \mathbb{N} \to \mathbb{R}_{>0}$ be decreasing functions. We write $f \lesssim g$ to mean there is a constant $C>0$ such that \[ f(N) \le C \cdot g(N), \qquad \forall N \in \mathbb{N}. \] If both $f \lesssim g$ and $g \lesssim f$ hold, we write `$f \approx g$'. Now suppose that $\rho: \mathbb{N} \to B_1$ is an optimal ordering for the basis pair $(B_1,B_2)$ and we let $U=[(B_1,\rho),(B_2,\tau)]$ be a corresponding incoherence matrix (with some ordering $\tau$ of $B_2$). Then any decreasing function $f: \mathbb{N} \to \mathbb{R}_{>0}$ which satisfies $f \approx g$, where $g$ is defined by $g(N) = \mu(Q_N U)$, $\forall N \in \bbN$, is said to `represent the optimal decay rate' of the basis pair $(B_1,B_2)$. \end{definition} Notice that the optimal decay rate is unique up to the equivalence relation $\approx$ defined on the set of decreasing functions $f: \mathbb{N} \to \mathbb{R}_{>0}$. We also have a stronger notion of optimality, which gives us finer details on the exact decay: \begin{definition} [Strong Optimality] \label{strongoptimality} Let $U=[(B_1,\rho),(B_2,\tau)]$ and $\pi_N$ denote the projection onto the single index $N$. If $f$ represents the optimal decay rate of the basis pair $(B_1,B_2)$ then $\rho$ is said to be `strongly optimal' if the function $g(N):= \mu(\pi_N U)$ satisfies $f \approx g$. \end{definition} Estimates in terms of the row incoherence $\mu(\pi_N U)$ have been used before in \cite{discrete}, where it was called the `local coherence'. If $\rho$ is a strongly optimal ordering, $U=[(B_1,\rho),(B_2,\tau)]$ and $f$ represents the optimal decay of $(B_1,B_2)$ then \[ \mu(Q_N U) \le C_1 \cdot f(N) \le C_2 \cdot \mu(\pi_N U) \le C_2 \cdot \mu(P_{N-1}^{N-1+M} U), \qquad N, M \in \bbN, \] for some constants $C_1(\rho), C_2(\rho)>0$, which can then be used to show the $\le$ in (\ref{local2asymp3}) can be replaced by $\approx$. We shall introduce the Fourier basis here as it is used in all of the examples discussed in this paper: \begin{definition}[Fourier Basis] \label{fourier} If we define \[ \chi_k(x) = \sqrt{\epsilon} \exp(2 \pi \mathrm{i} \epsilon k x)\cdot \ \mathds{1}_{[(- 2 \epsilon)^{-1},(2 \epsilon)^{-1}]} (x), \qquad k \in \mathbb{Z}, \] then the $(\chi_k)_{k \in \bbZ}$ form a basis\footnote{The little $\mathrm{f}$ here stands for `Fourier'.} $B_\mathrm{f}(\epsilon)$ of $L^2([-(2 \epsilon)^{-1},(2 \epsilon)^{-1}])$. We can form a $d$-dimensional basis of $L^2([-(2 \epsilon)^{-1},(2 \epsilon)^{-1}]^d)$ by taking tensor products (see Section \ref{tensors}) \[ \chi_{k} := \bigotimes_{j=1}^d \chi_{k_j} , \qquad k \in \mathbb{Z}^d, \] and setting $B^d_\mathrm{f}(\epsilon)= \{ \chi_{k} \ : \ k \in \mathbb{Z}^d \} $. It shall be convenient to identify $B^d_\mathrm{f}(\epsilon)$ with $\mathbb{Z}^d$ using the function \be{ \label{multidimlambda} \lambda_d:B_\mathrm{f}^d \to \mathbb{Z}^d, \quad \lambda_d(\chi_k):=(\lambda(\chi_{k_1}), ..., \lambda(\chi_{k_d}))= (k_1,...,k_d)=k.
} \end{definition} \section{Main Results} It turns out that the task of determining the asymptotic incoherence for general $d$-dimensional cases is substantially more difficult and subtle than the $1$-dimensional problems. However, we are able to present sharp results on the decay as well as optimal orderings of the bases. The main results can be broken down into two groups: one for tensor cases in general and one for the Fourier-Separable wavelet case. In what follows $ d \in \bbN$ denotes dimension. \subsubsection{Fourier to Tensor Wavelets} \begin{theorem} \label{tensormainwavelet} Let $B^d_\mathrm{w}$ be a tensor wavelet basis. The optimal decay rate of both $(B^d_\mathrm{f}, B^d_\mathrm{w})$ and $(B^d_\mathrm{w},B^d_\mathrm{f})$ is represented by $f(N)= \log^{d-1}(N) \cdot N^{-1}$. \end{theorem} This theorem is a user friendly and easy-to-read restatement of Theorem \ref{TensorResultsWavelet}. The latter theorem contains the more subtle and technical statements of the results. \subsubsection{Fourier to Legendre polynomials} \begin{theorem} \label{tensormainpoly} Let $B^d_\mathrm{p}$ be a (tensor) Legendre polynomial basis. The optimal decay rate of both $(B^d_\mathrm{f}, B^d_\mathrm{p})$ and $(B^d_\mathrm{p},B^d_\mathrm{f})$ is represented by $f(N)= \big( \log^{(d-1)}(N) \cdot N^{-1} \big)^{2/3}$. \end{theorem} This theorem is a restatement of Theorem \ref{TensorResultsPoly} for the purpose of an easy-to-read exposition. The additional logarithmic factors in the tensor cases here demonstrates the typical problems associated with dimensionality. In general the optimal orderings for all tensor problems are constructed using the hyperbolic cross on the original one-dimensional optimal orderings. \subsubsection{Fourier to Separable Wavelets} The definition of a separable wavelet basis $B_\text{sep}^d$ is provided in Section \ref{separable}. The main results on these cases are summarized below: \begin{theorem} \label{separablesummary} Consider the Fourier basis $B^d_\mathrm{f}$ and the wavelets basis $B^d_\text{sep}$. Then the following is true. \begin{itemize} \item[(i)] The optimal decay rate of $(B^d_\text{sep},B^d_\mathrm{f})$ is represented by $f(N)= N^{-1}$. The optimal decay rate of $(B^d_\text{sep},B^d_\mathrm{f})$ is obtained by using an ordering $\tau$ that is consistent with the wavelet levels (see definitions \ref{consistent_ordering} and \ref{leveled}) and this ordering is strongly optimal. \item[(ii)] In 2D ($d=2$) the optimal decay rate of $(B^d_\mathrm{f},B^d_\text{sep})$ is represented by $f(N)= N^{-1}$. This optimal decay rate is obtained by using an ordering $\rho$ of $B^d_\mathrm{f}$ that satisfies, for some constants $C_1, C_2>0$ and some norm $\| \cdot \|$ on $\bbR^d$, \be{ \label{linearrough} \max(\| \lambda_d(\rho(N)) \|,1) \approx N^{1/d}, \qquad N \in \mathbb{N}. } In fact $\rho$ is strongly optimal in 2D if and only if (\ref{linearrough}) holds. \item[(iii)] In higher dimensions ($d \ge 3$) the optimal decay rate of $(B^d_\mathrm{f},B^d_\text{sep})$ is still represented by $f(N)= N^{-1}$. However the optimal ordering used to obtain this decay rate is dependent on the wavelet used to generate the basis $B^d_\text{sep}$. \end{itemize} \end{theorem} Part (i) is the subject of Section \ref{separablewaveletordering} and is proven in Corollary \ref{leveledresults}. Part (ii), tackled in Section \ref{linearproof}, is the same as Corollary \ref{twodimresults}. Part (iii), covered in Section \ref{semihypsection}, is proven in Theorem \ref{semihyperbolicthm}. 
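To give a concrete picture of the index sets underlying these results, the following short sketch (purely illustrative and not part of the formal development; the function and variable names, and the truncation parameter \texttt{K}, are our own) enumerates a finite window of $\mathbb{Z}^2$ in two ways: consistently with the hyperbolic-cross weight $\prod_i \max(|m_i|,1)$, which is the type of ordering used for the tensor results above, and consistently with the weight $\max_i |m_i|$, which satisfies (\ref{linearrough}) for the max-norm and corresponds to the `linear' orderings discussed next.
\begin{verbatim}
# Illustrative sketch: two orderings of a finite window of Z^2 (a genuine
# ordering enumerates all of Z^2; we truncate only for display purposes).
# "hyperbolic" sorts by the hyperbolic-cross weight max(|m1|,1)*max(|m2|,1);
# "linear" sorts by max(|m1|,|m2|), so the first N indices fill a square,
# consistent with the N^{1/d} scaling in the linear ordering condition.
def orderings(K=8):
    window = [(m1, m2) for m1 in range(-K, K + 1) for m2 in range(-K, K + 1)]
    hyperbolic = sorted(window, key=lambda m: max(abs(m[0]), 1) * max(abs(m[1]), 1))
    linear = sorted(window, key=lambda m: max(abs(m[0]), abs(m[1])))
    return hyperbolic, linear

hyp, lin = orderings()
print(hyp[:9])  # low hyperbolic weights concentrate along the coordinate axes
print(lin[:9])  # low linear weights fill a small square centred at the origin
\end{verbatim}
The first few hyperbolic indices cluster along the coordinate axes (a cross-shaped region), whereas the first few linear indices fill a square centred at the origin; this is precisely the difference between the index sets driving the tensor results and the separable results respectively.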
An ordering satisfying (\ref{linearrough}) is called a `linear ordering'. The class of linear orderings are rotation invariant and compatible with sampling schemes based on linearly scaling a fixed shape from the origin (see Section \ref{linearsection}). Optimal orderings in the case of high dimensions and poor wavelet smoothness can be found by interpolating between the case of (\ref{linearrough}) and the hyperbolic cross, which generates semi-hyperbolic orderings (see Definition \ref{semihyperbolic}). If the wavelet is sufficiently smooth relative to the dimension then linear orderings are optimal. It is also shown that if a linear ordering is optimal then the wavelet used must have some degree of smoothness proportional to the dimension; in 3D it is $C^0$, 5D it is $C^1$, 7D it is $C^2$, etc. (see Section \ref{orderingsandsmoothness}). The differences between the two incoherence structures of the Fourier-Tensor wavelet and Fourier-Separable wavelet cases are tested in 2D in Section \ref{numericalsection}. \subsection{Outline for the Remainder} Some key tools that we use to find optimal orderings are given in Section \ref{orderings}. Those familiar with \cite{onedimpaper} can skip the majority of this section except for the concept of characterization. We then cover the general tensor case and introduce hyperbolic orderings in Section \ref{tensors} and prove Theorem \ref{tensormainwavelet} and Theorem \ref{tensormainpoly}. In Section \ref{separable} we discuss the separable cases, first covering how to optimally order the wavelet basis before quickly moving on to the central problem of finding optimal orderings of the Fourier basis. Linear orderings are introduced first, then we justify the need for semihyperbolic orderings. Finally we move onto some simple compressed sensing experiments, one demonstrating the benefits of multilevel subsampling and one showing the impact of differing incoherence structures between the 2D tensor and separable cases. \section{Tools for Finding Optimal Orderings \& Theoretical Limits on Optimal Decay} \label{orderings} The first tool is perhaps the most important, as it is a very easy way to identify a strongly optimal ordering: \begin{lemma} \label{StrongOptimalEquivalence} \textbf{1):} Let $(B_1,B_2)$ be a basis pair and $\tau$ any ordering of $B_2$. Furthermore, let $ B_1$ have an ordering $\rho_1 : \mathbb{N} \to B_1$, and define $U_1:=[(B_1,\rho_1),(B_2,\tau)]$. Suppose that that there is a decreasing function $f_1: \mathbb{N} \to \mathbb{R}_{>0}$ such that \[ f_1(N) \le \mu(\pi_N U_1), \qquad \forall N \in \mathbb{N}. \] Then if $\rho_2: \mathbb{N} \to B_1$ is an ordering, $U_2=[(B_1,\rho_2),(B_2,\tau)]$ and $f_2: \mathbb{N} \to \mathbb{R}_{>0}$ is a function with \[ \mu( Q_N U_2) \le f_2(N), \qquad \forall N \in \mathbb{N}, \] then $f_1(N) \le f_2(N)$ for every $N \in \mathbb{N}$. \textbf{2):} Let $\rho$ be an ordering of $B_1$ with $U:=[(B_1,\rho),(B_2,\tau)]$ and $f: \mathbb{N} \to \mathbb{R}_{\ge 0}$ be a decreasing function with $f(N) \to 0$ as $N \to \infty$. If, for some constants $C_1, C_2 > 0$, we have \begin{equation} \label{rowordercond} C_1 f(N) \le \mu( \pi_N U) \le C_2 f(N), \qquad \forall N \in \mathbb{N}, \end{equation} then $\rho$ is a strongly optimal ordering and $f$ is a representative of the optimal decay rate. \end{lemma} \begin{proof} See Lemma 2.11 in \cite{onedimpaper}. \end{proof} \begin{definition}[Best ordering] Let $(B_1,B_2)$ be a basis pair. 
Then any ordering $\rho: \mathbb{N} \to B_1$ is said to be a `best ordering' if for any ordering $\tau$ of $B_2$ and $U=[(B_1,\rho),(B_2,\tau)]$ we have that the function $g(N):= \mu(\pi_N U)$ is decreasing. \end{definition} Notice that any best ordering is also a strongly optimal ordering. We shall need the notion of a best ordering briefly to prove Lemma \ref{characterisationlemma}. \begin{lemma} \label{bestexistence} Suppose that we have a basis pair $(B_1, B_2)$ with two orderings $\rho: \bbN \to B_1$, $\tau: \bbN \to B_2$ of $B_1, B_2$ respectively. If $U=[(B_1,\rho),(B_2,\tau)]$ satisfies \[ \mu(\pi_N U) \to 0 \quad \text{as} \quad N \to \infty, \] then a best ordering exists. \end{lemma} \begin{proof} See Lemma 2.10 in \cite{onedimpaper}. \end{proof} Throughout this paper we would like to define an ordering according to a particular property of the basis but this property may not be enough to specify a unique ordering. To deal with this issue we introduce the notion of consistency: \begin{definition}[Consistent ordering]\label{consistent_ordering} Let $F: S \to \mathbb{R}$ where $S$ is a set. We say that an ordering $\rho: \mathbb{N} \to S$ is `consistent with F' if \[ F(f) < F(g) \quad \Rightarrow \quad \rho^{-1}(f) < \rho^{-1}(g), \qquad \forall f,g \in S. \] \end{definition} The notion of consistency becomes important if we want to convert bounds on the coherence into optimal orderings: \begin{definition} \label{Characterisation} \begin{itemize} \item[1.)] Suppose $F:S \to \bbR_{> 0} $ satisfies $| \{ x \in S : 1/F(x) \le K \}| < \infty$ for all $K>0$, $\sigma: \bbN \to S$ is consistent with $1/F$ and $F(\sigma(N)) \to 0$ as $N \to \infty$. Then any decreasing function $f: \bbN \to \bbR_{>0}$ such that $f \approx F \circ \sigma$ is said to `represent the fastest decay of $F$'. \item[2.)] Suppose $(B_1,B_2)$ is a basis pair and $\iota: S \to B_1$ a bijection. If there exists a function $F:S \to \bbR_{>0}$ and a constant $C_1>0$ such that \be{ \label{dominate} \sup_{g \in B_2} | \langle \iota(s) , g \rangle |^2 \le C_1 \cdot F(s), \quad \forall s \in S, } then $F$ is said to `dominate the optimal decay of $(B_1,B_2)$'. If the inequality is reversed we say $F$ is `dominated by the optimal decay of $(B_1,B_2)$'. Furthermore, if there is a constant $C_2>0$ such that \be{ \label{characterise} C_2 \cdot F(s) \le \sup_{g \in B_2} | \langle \iota(s) , g \rangle |^2 \le C_1 \cdot F(s), \quad \forall s \in S, } then $F$ is said to `characterize the optimal decay of $(B_1,B_2)$'. \end{itemize} \end{definition} \begin{lemma} \label{characterisationlemma} 1): \ \ Suppose $f$ is a representative of the optimal decay rate for the basis pair $(B_1,B_2)$, $\iota: S \to B_1$ is a bijection, $F: S \to \bbR$ dominates the optimal decay of $(B_1,B_2)$, $\sigma : \bbN \to S$ is consistent with $1/F$ and $U=[(B_1, \iota \circ \sigma), (B_2, \tau)]$ . Then if $g$ represents the fastest decay of $F$ then $f, \mu(\pi_{\cdot}U) \lesssim g$. 2): \ \ If $F$ is instead is dominated by the optimal decay of $(B_1,B_2)$ then $f, \mu(\pi_{\cdot}U) \gtrsim g$. 3): \ \ If $F$ now characterizes the optimal decay of $(B_1,B_2)$ then $f, \mu(\pi_{\cdot}U) \approx g$ and therefore $\rho$ is a strongly optimal ordering for the basis pair $(B_1,B_2)$ if and only if $F(\iota^{-1} \circ \rho(\cdot)) \approx g $. \end{lemma} \begin{proof} 1.) \ \ We may assume, without loss of generality, that $g(N) \to 0$ as $N \to \infty$ else there is nothing to prove as $f(N), \mu(\pi_N U)$ are bounded functions of $N$. 
Therefore, a best ordering exists by Lemma \ref{bestexistence}. Inequality (\ref{dominate}) becomes (for $C_1'>0$ a constant), \[ \mu(\pi_N U) \le C_1 \cdot F(\sigma(N)) \le C_1' \cdot g(N), \quad \forall N \in \bbN. \] Since $g$ is decreasing we have $\mu(Q_N U) \le C_1'\cdot g(N)$ and therefore we can apply part 1) of Lemma \ref{StrongOptimalEquivalence} to $f_1=f$ and $f_2=g$ (using a best ordering as $\rho_1$ and $\rho_2 = \iota \circ \sigma$) to deduce that $f \lesssim g$. 2.) \ \ Inequality (\ref{dominate}) reversed becomes (for some constants $C_1, C_1'>0$) \[ \mu(\pi_N U) \ge C_1 \cdot F(\sigma(N)) \ge C_1' \cdot g(N), \quad \forall N \in \bbN. \] Therefore we can apply part 1) of Lemma \ref{StrongOptimalEquivalence} to $f_1=g$ and $f_2=f$ (using $\rho_1 = \iota \circ \sigma$, $\rho_2$ an optimal ordering) to deduce that $g \lesssim f$. 3.) \ \ Notice that if $F$ characterizes the optimal decay of $(B_1,B_2)$ then (\ref{characterise}) becomes \[ C_2' \cdot g(N) \le C_2 \cdot F(\sigma(N)) \le \mu(\pi_N U) \le C_1 \cdot F(\sigma(N)) \le C'_1 \cdot g(N), \quad \forall N \in \bbN, \] and we can then apply part 2) of Lemma \ref{StrongOptimalEquivalence} to show $f, \mu(\pi_{\cdot}U) \approx g$. If we let $U':=[(B_1, \rho), (B_2, \tau)]$ then (\ref{characterise}) becomes \[ C_2 \cdot F(\iota^{-1} \circ \rho(N)) \le \mu(\pi_N U') \le C_1 \cdot F(\iota^{-1} \circ \rho(N)) , \quad \forall N \in \bbN, \] and the result follows from Definition \ref{strongoptimality}. \end{proof} Before moving on, we recall from \cite{onedimpaper} some results on the fastest optimal decay rate for a basis pair: \begin{theorem} \label{isometrydecaylowerbound} Let $U \in \cB(l^2(\bbN))$ be an isometry. Then $ \sum_{N} \mu(Q_N U) $ diverges. \end{theorem} \begin{proof} See Theorem 2.14 in \cite{onedimpaper}. \end{proof} \begin{corollary} Let $U \in \cB(l^2(\bbN))$ be any isometry. Then there does not exist an $\epsilon > 0$ such that $$ \mu(Q_N U) = \mathcal{O}(N^{-1-\epsilon}) , \qquad \ N \to \infty. $$ \end{corollary} It turns out that Theorem \ref{isometrydecaylowerbound} cannot be improved without imposing additional conditions on $U$: \begin{lemma} \label{incoherencecounter} Let $f,g: \bbN \to \bbR$ be any two strictly positive decreasing functions and suppose that $\sum_N f(N)$ diverges. Then there exists $U \in \cB(l^2(\bbN))$ an isometry with \be{ \label{strongerthanoptimal} \mu(Q_N U) \le f(N), \quad \mu(U Q_N) \le g(N), \qquad N \in \bbN . } \end{lemma} \begin{proof} See Lemma 2.16 in \cite{onedimpaper}. \end{proof} If we restrict our decay function to be a power law, i.e.\ $f(N):= CN^{- \alpha}$ for some constants $\alpha, C >0$, then the largest possible value of $\alpha>0$ such that (\ref{strongerthanoptimal}) holds for an isometry $U$ is $\alpha=1$. This gives us a notion of the fastest optimal decay rate as a power of $N$ over all pairs of bases where the span of $B_2$ lies in the span of $B_1$. \section{One-dimensional Bases and Incoherence Results} \label{onedim} We begin our review of the one-dimensional cases by quickly going over the one-dimensional bases and orderings that we shall be working with to construct multi-dimensional bases and orderings in Section \ref{tensors}. \subsection{Fourier Basis} We recall the one-dimensional Fourier basis $B_\mathrm{f}(\epsilon)=(\chi_k)_{k \in \bbZ}$ from Definition \ref{fourier}.
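Before listing the orderings, we record a small numerical sanity check (a minimal sketch under the assumptions $p=1$, i.e.\ the Haar case, with $\epsilon=1/2$ and $J=1$; the helper names and closed-form Haar Fourier transforms below are our own choices, not part of the formal development). Since the inner products against $\chi_n$ depend, in absolute value, only on the scale $j$ and not on the shift $k$, the row coherences $\mu(\pi_N U)$ can be evaluated directly under the ordering $0, 1, -1, 2, -2, \ldots$ of the frequencies (a standard ordering in the sense of Definition \ref{standard_ordering} below).
\begin{verbatim}
# Minimal numerical sketch (Haar case p=1, eps=1/2, J=1): the row coherences
# mu(pi_N U) of the Fourier-Haar change of basis matrix decay like 1/N under
# the standard ordering of the frequencies 0, 1, -1, 2, -2, ...
import numpy as np

def haar_ft_abs(w):
    # |F phi(w)| and |F psi(w)| for the Haar scaling function / wavelet on [0,1]:
    # |F phi(w)| = |sinc(w)|, |F psi(w)| = sin^2(pi w / 2) / (pi |w| / 2).
    w = np.asarray(w, dtype=float)
    return np.abs(np.sinc(w)), np.abs(np.sin(np.pi * w / 2) * np.sinc(w / 2))

def mu_row(n, eps=0.5, J=1, jmax=40):
    # |<psi_{j,k}, chi_n>|^2 = eps * 2^{-j} * |F psi(2^{-j} eps n)|^2 is
    # independent of k, so the sup over the Haar basis is a max over scales j
    # (plus the level-J scaling functions).
    js = np.arange(J, jmax)
    abs_phi, _ = haar_ft_abs(eps * n / 2.0 ** J)
    _, abs_psi = haar_ft_abs(eps * n / 2.0 ** js)
    vals = np.concatenate(([eps / 2.0 ** J * abs_phi ** 2],
                           eps / 2.0 ** js * abs_psi ** 2))
    return float(vals.max())

for N in (10, 100, 1000, 10000):
    n = N // 2 if N % 2 == 0 else -((N - 1) // 2)   # standard ordering of Z
    print(N, N * mu_row(n))  # remains bounded above and below: the 1/N decay
\end{verbatim}
This is only a sanity check in the simplest wavelet setting; the precise statements for general Daubechies wavelets are recalled in Section \ref{1DResults} below.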
\begin{definition}[Standard ordering]\label{standard_ordering} We define $F_\mathrm{f}:B_\mathrm{f} \to \mathbb{N} \cup \{0\}$ by $F_\mathrm{f}(\chi_k)=|k|$ and say that an ordering $\rho: \mathbb{N} \to B_\mathrm{f}$ is a `standard ordering' if it is consistent with $F_\mathrm{f}$ (recall Definition \ref{consistent_ordering}). \end{definition} \subsection{Standard Wavelet Basis} \label{waveletbasis} Take a Daubechies wavelet $\psi$ and corresponding scaling function $\phi$ in $L^2(\mathbb{R})$ with \[\text{Supp} (\phi) = \text{Supp} (\psi) = [-p+1,p]. \] We write \[ \begin{aligned} \phi_{j,k}(x) =2^{j/2} \phi(2^j x-k) , \qquad \psi_{j,k} (x) = 2^{j/2} \psi(2^j x - k), \\ V_j := \overline{\text{Span} \{ \phi_{j,k}: k \in \bbZ \}}, \quad W_j := \overline{\text{Span} \{ \psi_{j,k}: k \in \bbZ \}}. \end{aligned} \] With the above notation, $(V_j)_{j \in \mathbb{Z}}$ is the multiresolution analysis for $\phi$, and therefore \[V_j \subset V_{j+1} , \qquad V_{j+1} = V_j \oplus W_j, \qquad L^2(\bbR) = \overline{\bigcup_{j \in \bbZ} V_j}, \] where $W_j$ here is the orthogonal complement of $V_j$ in $V_{j+1}$. For a fixed $J \in \bbN$ we define the set\footnote{`$\mathrm{w}$' here stands for `wavelet'.} \begin{align} \label{waveletbasisdefine} B_\mathrm{w} := \left\{ \begin{array}{cc} & \mathrm{Supp}(\phi_{J,k}) \cap (-1,1) \neq \emptyset , \\ \phi_{J,k} , \ \psi_{j,k} : & \mathrm{Supp}(\psi_{j,k}) \cap (-1,1) \neq \emptyset, \\ & j \in \mathbb{N}, j \ge J , \ k \in \mathbb{Z} \end{array} \right \} , \end{align} Let $\rho$ be an ordering of $B_\mathrm{w}$. Notice that since $L^2(\mathbb{R})= \overline{ V_J \oplus \bigoplus^{\infty}_{j=J} W_j}$ for all $f \in L^2(\mathbb{R})$ with $\mathrm{supp}(f) \subseteq [-1,1]$ we have \[ f = \sum_{n=1}^\infty c_{n} \rho(n) \quad \text{for some} \quad (c_n)_{n \in \bbN} \in \ell^2(\bbN) .\] \begin{definition} [Leveled ordering (standard wavelets)]\label{leveled} Define $F_\mathrm{w}:B_\mathrm{w} \to \mathbb{R}$ by \[ F_\mathrm{w}( f) \ = \ \begin{cases} \ j, \ & \mbox{if } f \in W_j\\ \ -1, \ & \mbox{if }f \in V_J \end{cases} , \] and say that any ordering $\tau: \mathbb{N} \to B_\mathrm{w}$ is a `leveled ordering' if it is consistent with $F_\mathrm{w}$. \end{definition} Notice that $F_\mathrm{w}(\psi_{j,k})=j$. We use the name ``leveled'' here since requiring an ordering to be leveled means that you can order however you like within the individual wavelet levels themselves, as long as you correctly order the sequence of wavelet levels according to scale. Suppose that $U=[(B_\mathrm{f}(\epsilon), \rho), (B_\mathrm{w},\tau)]$ for orderings $\rho, \tau$. If we require $U$ to be an isometry we must impose the constraint $(2\epsilon)^{-1} \ge 1+2^{-J+1}(p-1)$ otherwise the elements in $B_\mathrm{w}$ do not lie in the span of $B_\mathrm{f}(\epsilon)$. For convenience we rewrite this as $\epsilon \in I_{J,p}$ where \[I_{J,p}:=(0,(2+2^{-J+2}(p-1))^{-1}]. \] \subsection{Boundary Wavelet Basis} \label{BoundaryWavelets} We now look at an alternative way of decomposing a function $f \in L^2([-1,1])$ in terms of a wavelet basis, which involves using boundary wavelets \cite[Section 7.5.3]{dDwav}. The basis functions all have support contained within $[-1,1]$, while still spanning $L^2[-1,1]$. Furthermore, the new multiresolution analysis retains the ability to reconstruct polynomials of order up to $p-1$ from the corresponding original multiresolution analysis. 
We shall not go into great detail here but we will outline the construction; we take, along with a Daubechies wavelet $\psi$ and corresponding scaling function $ \phi$ with $ \mathrm{Supp} (\psi) = \mathrm{Supp} (\phi)=[-p+1,p]$, boundary scaling functions and wavelets (using the same notation as in \cite{dDwav} except that we use $[-1,1]$ instead of $[0,1]$ as our reconstruction interval) \[ \phi^{\text{left}}_n, \ \phi^{\text{right}}_n, \ \psi^{\text{left}}_n , \ \psi^{\text{right}}_n , \qquad n =0,\cdots,p-1 .\] Like in the standard wavelet case we shift and scale these functions, \[ \phi^{\text{left}}_{j,n}(x) = 2^{j/2} \phi^{\text{left}}_{n}(2^j (x+1)), \qquad \phi^{\text{right}}_{j,n}(x)= 2^{j/2} \phi^{\text{right}}_{n}(2^j (x-1)). \] We are then able to construct nested spaces , $ (V^{\text{int}}_j)_{j \ge J}$, $ (W^{\text{int}}_j)_{j \ge J}$ for a fixed base level $J \ge \lceil \log_2 (p) \rceil $, such that \mbox{$L^2([-1,1])=\overline{ \bigoplus^{\infty}_{j=J} V^{\text{int}}_j}$}, $V^{\text{int}}_{j+1}=V^{\text{int}}_j \oplus W^{\text{int}}_j$ with $W^\text{int}_j$ the orthogonal complement of $V^\text{int}_j$ in $V^\text{int}_{j+1}$ by defining \begin{equation*} V^{\text{int}}_j = \overline{ \text{Span} \left \{ \begin{aligned} \phi^{\text{left}}_{j,n} & , \phi^{\text{right}}_{j,n} \\ & \phi_{j,k} \end{aligned} : \begin{aligned} & n =0 , \cdots , p-1 \ \\ & k \in \mathbb{Z} \ s.t. \ \mathrm{Supp}( \phi_{j,k} ) \subset (-1,1) \end{aligned} \right \} } , \end{equation*} \begin{equation*} W^{\text{int}}_j = \overline{ \text{Span} \left \{ \begin{aligned} \psi^{\text{left}}_{j,n} & , \psi^{\text{right}}_{j,n} \\ & \psi_{j,k} \end{aligned} : \begin{aligned} & n =0 , \cdots , p-1 \ \\ & k \in \mathbb{Z} \ s.t. \ \mathrm{Supp}( \psi_{j,k} ) \subset (-1,1) \end{aligned} \right \} } . \end{equation*} We then take the spanning elements of $V^{ \text{int}}_J$ and the spanning elements of $W^{\text{int}}_j$ for every $j \ge J$ to form the basis $B_{\mathrm{b} \mathrm{w}}$ ($\mathrm{b} \mathrm{w}$ for 'boundary wavelets'). \begin{definition}[Leveled ordering (boundary wavelets)] Define $F_w: B_{\mathrm{b} \mathrm{w}} \to \mathbb{R}$ by the formula \[ F_{\mathrm{b} \mathrm{w}}( f) \ = \ \begin{cases} \ j, \ & \mbox{if } f \in W^{\text{int}}_j\\ \ -1, \ & \mbox{if }f \in V^{\text{int}}_J \end{cases}. \] Then we say that an ordering $\tau: \mathbb{N} \to B_{\mathrm{b} \mathrm{w}}$ of this basis is a `leveled ordering' if it is consistent with $F_{\mathrm{b} \mathrm{w}}$. \end{definition} \subsection{Legendre Polynomial Basis} \label{polynomialbasis} If $(p_n)_{n \in \mathbb{N}}$ denotes the standard Legendre polynomials on $[-1,1]$ (so $p_n(1)=1$ and $p_1(x)=1$ for $x \in [-1,1]$) then the $L^2$-normalised Legendre polynomials are defined by $\tilde{p}_n=\sqrt{n-1/2} \cdot p_n$ and we write $B_\mathrm{p} := (\tilde{p}_n )_{n=1}^\infty$ (the $\mathrm{p}$ here stands for ``polynomial'' ). $B_\mathrm{p}$ is already ordered; call this the \emph{natural ordering} . \subsection{Incoherence Results for One-dimensional Bases} \label{1DResults} Next we recall the one-dimensional incoherence results proved in \cite{onedimpaper}, which shall be used to prove the corresponding multi-dimensional tensor results in Section \ref{tensors}: \begin{theorem} \label{FourierWaveletResults} Let $\rho$ be a standard ordering of $B_\mathrm{f}(\epsilon)$ with $\epsilon \in I_{J,p}$, $\tau$ a leveled ordering of $B_\mathrm{w}$ and $U=[(B_\mathrm{f}(\epsilon),\rho),(B_\mathrm{w},\tau)]$. 
Then we have, for some constants $C_1, C_2>0$, the decay \be{ \label{FourierWaveletOptimalBounds} \frac{C_1}{N} \le \mu(\pi_N U), \ \mu(U \pi_N) \le \frac{C_2}{N}, \qquad \forall N \in \mathbb{N}. } The same conclusions also hold if the basis $B_\mathrm{w}$ is replaced by $B_{\mathrm{b} \mathrm{w}}$ and the condition $\epsilon \in I_{J,p}$ by $\epsilon \in (0,1/2]$. \end{theorem} \begin{theorem} \label{FourierPolynomialResults} Let $\rho$ be a standard ordering of $B_\mathrm{f}(\epsilon)$ with $\epsilon \in (0,0.45]$, $\tau$ a natural ordering of $B_\mathrm{p}$ and $U=[(B_\mathrm{f}(\epsilon),\rho),(B_\mathrm{p},\tau)]$. Then we have, for some constants $C_1, C_2>0$, the decay \be{ \label{FourierPolynomialOptimalBounds} \frac{C_1}{N^{2/3}} \le \mu(\pi_N U), \ \mu(U \pi_N) \le \frac{C_2}{N^{2/3}}, \qquad \forall N \in \mathbb{N} . } \end{theorem} \section{Multidimensional Tensor Cases: Proof of Theorem \ref{tensormainwavelet} and Theorem \ref{tensormainpoly} } \label{tensors} In this section we prove Theorem \ref{tensormainwavelet} and Theorem \ref{tensormainpoly}. In fact, we state and prove their slightly more involved generalisations: Theorems \ref{TensorResultsWavelet} and \ref{TensorResultsPoly}. We also provide examples of hyperbolic orderings. \subsection{General Estimates} \begin{definition}[Tensor basis] Suppose that $B$ is an orthonormal basis of some space $T \le L^2 (\mathbb{R})$ (i.e.\ $T$ is a subspace of $L^2 (\mathbb{R})$) and we already have an ordering $\rho: \mathbb{N} \to B$. Define $\rho^d: \mathbb{N}^d \to \bigotimes_{j=1}^d T \le L^2 (\mathbb{R}^d)$ by the formula ($m \in \mathbb{N}^d$) \[ \rho^d(m)(x):= \Big( \bigotimes_{j=1}^d \rho(m_j) \Big) (x) = \prod_{j=1}^d \rho(m_j)(x_j). \] This gives a basis of $\bigotimes_{j=1}^d T \le L^2 (\mathbb{R}^d)$ because of the formula \begin{equation} \label{prodsplit} \langle \rho^d (m),\rho^d(n) \rangle_{L^2(\mathbb{R}^d)} = \prod_{j=1}^d \langle \rho(m_j), \rho(n_j) \rangle_{L^2(\mathbb{R})}. \end{equation} We call $B^d:=(\rho^d(m))_{m \in \mathbb{N}^d}$ a `tensor basis'. The function $\rho^d$ is said to be the `d-dimensional indexing induced by $\rho$'. Notice that $\rho^d$ is not an ordering unless $d=1$. \end{definition} Now suppose that we have two one-dimensional bases $B_1$, $B_2$ with corresponding optimal orderings $\rho_1, \rho_2$. Let $\rho^d_1, \rho^d_2$ be the d-dimensional indexings induced by $\rho_1,\rho_2$ of the bases $B^d_1,B^d_2$. What are optimal orderings of the basis pair $(B^d_1,B^d_2)$ and what is the resulting optimal decay rate? Some insight is given by the following Lemma: \begin{lemma} \label{generaltensor} Let $(B_1,B_2)$ be a pair of bases with corresponding tensor bases $B_1^d, B_2^d$. Let $\rho_1$ be a strongly optimal ordering of $B_1$ and $\rho_1^d$ be the $d$-dimensional indexing induced by $\rho_1$. Finally, for some ordering $\tau$ of $B_2$, let $U=[(B_1, \rho_1), (B_2, \tau)]$. Then if $f$ represents the optimal decay rate corresponding to the basis pair $(B_1,B_2)$ we have, for some constants $C_1, C_2>0$, \be{ \label{generaltensorequation} C_1^d \prod_{i=1}^d f(n_i) \le \sup_{g \in B^d_2} | \langle \rho_1^d (n) , g \rangle |^2 = \prod_{i=1}^d \mu(\pi_{n_i} U) \le C_2^d \prod_{i=1}^d f(n_i), \quad n \in \bbN^d. } Consequently, if we let $\iota := \rho_1^d$ then $F(n):=\prod_{i=1}^d f(n_i)$ characterizes the optimal decay of $(B^d_1,B^d_2)$. \end{lemma} \begin{proof} Let $\tau^d$ denote the $d$-dimensional indexing induced by $\tau$.
Then by breaking the down the tensor product into terms and using the bijectivity of $\tau^d$ we have \[ \begin{aligned} \sup_{g \in B^d_2} | \langle \rho_1^d (n) , g \rangle |^2 & = \sup_{m \in \bbN^d} | \langle \rho_1^d (n) , \tau^d(m) \rangle |^2 = \sup_{m \in \bbN^d} \prod_{i=1}^d | \langle \rho_1 (n_i) , \tau(m_i) \rangle |^2 \\ & = \prod_{i=1}^d \sup_{m \in \bbN} | \langle \rho_1 (n_i) , \tau(m) \rangle |^2 = \prod_{i=1}^d \mu(\pi_{n_i} U). \end{aligned} \] Therefore (\ref{generaltensorequation}) follows from applying the definition of a strongly optimal ordering to each term in the product. \end{proof} Lemma \ref{generaltensor} says that if we have a strongly optimal ordering for the basis pair $(B_1,B_2)$ then we can use Lemma \ref{characterisationlemma} to find all strongly optimal orderings for the corresponding tensor basis pair $(B_1^d,B_2^d)$. In particular, we have \begin{corollary} \label{generaltensorcorollary} We use the framework of the previous Lemma. Let $\sigma:\bbN \to \bbN^d$ be consistent with $1/F$. Then an ordering $\rho$ is strongly optimal for the basis pair $(B^d_1,B^d_2)$ if and only if there are constants $C_1,C_2>0$ such that \[ C_1 F(\sigma(N)) \le F((\rho_1^d)^{-1} \circ \rho(N)) \le C_2 F(\sigma(N)), \quad N \in \bbN. \] \end{corollary} Suppose that we have a strongly optimal ordering $\rho_1$ of $B_1$ such that the optimal decay rate is a power of $N$, namely that $f(n)=n^{-\alpha}$ for some $\alpha>0$, which is the case for the one dimensional examples we covered in Section \ref{onedim}. The above Lemma tells us that to find the optimal decay rate we should take an ordering $\sigma : \bbN \to \bbN^d$ that is consistent with $1/F(n):= \prod_{i=1}^d 1/f(n_i)= \prod_{i=1}^d n_i^{\alpha}$ which is equivalent to being consistent with $1/F^{1/\alpha}(n)=\prod_{i=1}^d n_i$. This motivates the following: \begin{definition}[Corresponding to the hyperbolic cross] Define $F_H: \mathbb{N}^d \to \mathbb{R}$ by $ F_{H}(n)= \prod_{i=1}^d n_i$. Then we say a bijective function $\sigma: \mathbb{N} \to \mathbb{N}^d$ `corresponds to the hyperbolic cross' if it is consistent with $F_H$. \end{definition} The name `hyperbolic cross' originates from its use in approximation theory \cite{crossorig,hypcross}. We now claim that if $\sigma$ corresponds to the hyperbolic cross and $d \ge 2$, then \be{ \label{hyperbolicdecayrate} \prod_{i=1}^d \sigma(N)_i \sim \frac{(d-1)! N}{\log^{d-1}(N+1)} \quad \text{as } \quad N \to \infty. } Next we proceed to prove this claim. \begin{definition} For $d \in \mathbb{N}$ let $f_d(x) = x \log^{d-1} x$ be defined on $[1,\infty)$. We define $g_d$ as the inverse function of $f_d$ on $[1,\infty)$, and so $g_d: [ 0, \infty) \to [1,\infty)$. Furthermore, we define \be{ \label{hyperbolicdecay} h_d(x):= \frac{x}{\log^{d-1}(x+1)} , \qquad x \in [1, \infty).} \end{definition} \begin{lemma} \label{ordersimplify} The following holds: \\ 1.) \ \ $g_d(x)/h_{d}(x) \to 1 \quad \text{as} \quad x \to \infty.$ \\ 2.) \ \ Let $\tilde{f}(x) = x \log^{d-1} x + x p( \log(x) ) + \beta$ with $p$ a polynomial of degree at most $d-2$, $ \beta \in \mathbb{R}$ and let $\tilde{g}$ be its inverse function defined for large $x \in \bbR_+$. Then we also have $\tilde{g}(x)/h_{d}(x) \to 1 \quad \text{as} \quad x \to \infty.$ \end{lemma} \begin{proof} 1.) \ \ For notational convenience we shall prove the equivalent result \[ \frac{g_d(x) \log^{d-1}(x)}{x} \to 1 \quad \text{as} \quad x \to \infty. 
\] By taking logarithms we change the problem from studying the asymptotics of a fraction to the asymptotics of the difference \be{ \label{equivalentasymp} \log(g_d(x))-\log(h_d(x)) = \log(g_d(x)) - \log x + (d-1) \log \log x \to 0 \quad \text{as} \quad x \to \infty. } With this in mind we notice that the function $\log(g_d)$ (defined on $[0,\infty)$) is the inverse function of $e_d(x):= f_d(\exp(x)) = x^{d-1} \exp x$ (defined on $[0,\infty)$). Notice that for $x$ large we have $e_d(x- (d-1) \log x)= \frac{(x- (d-1) \log x)^{d-1}}{x^{d-1}} \exp(x) \le \exp(x)$ which implies that $ x - (d-1) \log x \le \log(g_d( \exp(x))) $. Now if we let $\epsilon>0$ then we deduce that \[e_d(x- (d-1) \log x+ \epsilon)= \frac{(x- (d-1) \log x+\epsilon)^{d-1}}{x^{d-1}} \exp(x+\epsilon) \ge \exp(x) \quad \text{for} \quad x \quad \text{large.} \] This implies that $ x - (d-1) \log x + \epsilon \ge \log(g_d( \exp(x))) $ for $x$ large. We therefore conclude that for all $x$ sufficiently large we have \[ x - (d-1) \log x \le \log(g_d( \exp(x))) \le x - (d-1) \log x + \epsilon,\] from which (\ref{equivalentasymp}) follows since $\epsilon>0$ is arbitrary. 2.) \ \ Notice that by part 1. it suffices to show that $\tilde{g}(x)/g_{d}(x) \to 1 \ $ as $\ x \to \infty.$ Again, we shall show this by taking logarithms, reducing the proof to showing \[ \log( \tilde{g}(x)) - \log( g_d(x)) \to 0 \quad \text{as} \quad x \to \infty. \] Notice that $\log( \tilde{g}(x))$ is the inverse function, defined for large $x$, of \[\tilde{e}(x):=\tilde{f}( \exp(x)) = x^{d-1} \exp(x) + p(x) \cdot \exp(x) + \beta, \] Then since \[\tilde{e}'(x)= x^{d-1} \exp(x) + ((d-1) \cdot x^{d-2} +p'(x)+p(x)) \cdot \exp(x), \] we can use the hypothesis that $p$ is of a lower order than $x^{d-1}$ to show that for every $\epsilon>0$, there is an $L(\epsilon)>0$ such that for all $x \ge L(\epsilon)$ we have $\epsilon \cdot \tilde{e}'(x -\epsilon) \ge |\tilde{e}(x) - e_d(x)|=|p(x) \cdot \exp(x) + \beta|$. We therefore deduce from the mean value theorem that for $x \ge \exp(L(\epsilon))$ we have \begin{align*} \tilde{e}(\log(g_d(x))-\epsilon) \le e_d(\log(g_d(x)))= & x \le \tilde{e}( \log(g_d(x))+ \epsilon) \\ & \Rightarrow \log(g_d(x)) - \epsilon \le \log(\tilde{g}(x)) \le \log(g_d(x)) + \epsilon, \end{align*} where we applied $\log(\tilde{g})$ to the inequality in the last step (this preserves the inequality since $\log(\tilde{g})$ is an increasing function of $x$ for $x$ large). \end{proof} \begin{lemma} \label{orderset} 1). \ \ For every $d \in \mathbb{N}$ we have \be{ \label{hyplogestimate} R_N:=\sum_{i=1}^N \frac{1}{i} (\log(N) - \log(i))^d \ = \ \frac{1}{d+1} \log^{d+1} N + \mathcal{O}(\log^d N) \qquad N \to \infty. } 2). \ \ Let $S_d(N)$ for $d, N \in \mathbb{N}$ be defined by \begin{equation} \label{simplehyperbolic} S_d(N):= \# \Big\{ m \in \mathbb{N}^d : \prod_{i=1}^d m_i \le N \Big\}. \end{equation} Then for every $d \in \mathbb{N}$, there exists polynomials $ \underline{p}_d, \overline{p}_d$ both of degree $d-1$ with identical leading coefficient $1/(d-1)!$ such that \begin{equation} \label{hyperbolic_count} N \underline{p}_d( \log(N)) \le S_d(N) \le N \overline{p}_d( \log(N)) . \end{equation} 3). \ \ If we let $\sigma: \mathbb{N} \to \mathbb{N}^d$ correspond to the hyperbolic cross then (\ref{hyperbolicdecayrate}) holds. \end{lemma} \begin{proof} 1). \ \ Let $I_N:=\int_1^N \frac{1}{x} (\log(N) - \log(x))^d \, dx$. 
Since the integrand is a decreasing function of $x$ (with $N$ fixed) we find by the Maclaurin integral test that $0 \le R_N -I_N \le \log^d(N)$. This means that showing (\ref{hyplogestimate}) is equivalent to showing that \[ \int_1^N \frac{1}{x} (\log(N) - \log(x))^d \, dx = \frac{1}{d+1} \log^{d+1} N + \mathcal{O}(\log^d N). \] Now, by expanding out the factors of the integrand and integrating (recall that the integral of $x^{-1} \log^k x$ is $\frac{1}{k+1} \cdot \log^{k+1}x$) the integral becomes \[ \log^{d+1}(N) \cdot \sum_{i=0}^d \frac{1}{i+1} \binom{d}{i} (-1)^{i}. \] Since $\frac{1}{i+1} \binom{d}{i} = \frac{1}{d+1} \binom{d+1}{i+1}$ we see that the sum simplifies to $ \frac{1}{d+1}$ and we are done. 2). \ \ We use induction on the dimension $d$. The case $d=1$ is immediate since $\underline{p}_1(x)=\overline{p}_1(x)=1$ satisfies inequality (\ref{hyperbolic_count}). Therefore suppose that inequality (\ref{hyperbolic_count}) holds for dimension $d=k$. We shall extend the result to $d=k+1$ using the equality: \begin{equation} \label{hyperbolicdimreduce} S_{k+1}(N)= \sum_{i=1}^N S_{k}\Big( \left\lfloor \frac{N}{i} \right\rfloor \Big). \end{equation} This equality follows from rewriting the set defining $S_{k+1}$ as the following disjoint union: \[ \Big\{ m \in \bbN^{k+1} : \prod_{i=1}^{k+1} m_i \le N \Big\} = \coprod_{j=1}^N \Bigg\{ m \in \bbN^{k+1} : m_{k+1}=j, \prod_{i=1}^k m_i \le \left\lfloor \frac{N}{j} \right\rfloor \Bigg\} . \] \textbf{Upper Bound:} We may assume without loss of generality that $\overline{p}_k$ has all coefficients positive. Therefore, by replacing $ \lfloor \frac{N}{i} \rfloor $ with $\frac{N}{i}$ and using the upper bound in (\ref{hyperbolic_count}), we can upper bound equation (\ref{hyperbolicdimreduce}) by \[ \sum_{i=1}^N \frac{N}{i} \cdot \overline{p}_k \Big( \log \Big( \frac{N}{i} \Big) \Big) \ \le \ N \sum_{i=1}^N \frac{1}{i} \cdot \overline{p}_k ( \log(N) - \log(i)). \] We can then get the required upper bound by applying part 1) of the lemma to each term in the polynomial; for example the highest order term becomes \[ \begin{aligned} \sum_{i=1}^N \frac{N}{i} \cdot \frac{1}{(k-1)!} ( \log(N) - \log(i))^{k-1} \le \frac{N }{k!} \log^{k}N + C N \log^{k-1} N, \qquad \forall N \in \mathbb{N}, \end{aligned} \] for some constant $C>0$ sufficiently large. The other terms in $\overline{p}_k$ are handled similarly. \textbf{Lower Bound:} Notice that without loss of generality we can assume all the coefficients of $\underline{p}_k$ apart from the leading coefficient are negative. Using the lower bound in (\ref{hyperbolic_count}), we can lower bound equation (\ref{hyperbolicdimreduce}) by \[ \sum_{i=1}^N \left\lfloor \frac{N}{i} \right\rfloor \cdot \underline{p}_k \Big( \log \Big( \left\lfloor \frac{N}{i} \right\rfloor \Big) \Big). \] This means we can tackle the terms of order $<k-1$ in the same way as in the upper bound since we can replace $ \left\lfloor \frac{N}{i} \right\rfloor $ with $\frac{N}{i}$ (recall we have assumed these terms are negative). Now we are left with bounding the highest order term: \begin{equation} \begin{aligned} \sum_{i=1}^N \left\lfloor \frac{N}{i} \right\rfloor \frac{1}{(k-1)!} \Big(\log \Big( \left\lfloor \frac{N}{i} \right\rfloor \Big) \Big)^{k-1} = \sum_{i=1}^N \left\lfloor \frac{N}{i} \right\rfloor \frac{1}{(k-1)!} \cdot \Big[ \log \Big( \frac{N}{i} \Big) - \big( \log \Big( \frac{N}{i} \Big) - \log \Big( \left\lfloor \frac{N}{i} \right\rfloor \Big) \big) \Big]^{k-1}. \end{aligned} \end{equation} Therefore expanding out the binomial term, setting the sign of all terms except the first to be negative, and noticing $\log \Big( \frac{N}{i} \Big) - \log \Big( \left\lfloor \frac{N}{i} \right\rfloor \Big) \le 1$ for every $i,N$ we get the lower bound \[ \begin{aligned} \sum_{i=1}^N & \left\lfloor \frac{N}{i} \right\rfloor \frac{1}{(k-1)!} \log^{k-1} \Big( \frac{N}{i} \Big) - \sum_{i=1}^N \sum_{j=0}^{k-2} \left\lfloor \frac{N}{i} \right\rfloor \binom{k-1}{j} \frac{1}{(k-1)!} \log^{j} \Big( \frac{N}{i} \Big). \end{aligned} \] From here we can replace $\left\lfloor \frac{N}{i} \right\rfloor$ by $ \frac{N}{i} $ for the right term, $\left\lfloor \frac{N}{i} \right\rfloor$ by $\frac{N}{i} -1 $ on the left term and use part 1) of the lemma again to prove the lower bound. 3). \ \ From the second part of the lemma we know that for some degree $d-1$ polynomials $\underline{p}_d , \overline{p}_d$ with leading coefficient $1/(d-1)!$ we have $ N \underline{p}_d (\log(N)) \le S_d(N) \le N \overline{p}_d ( \log(N)). $ Now notice that if $m \in \mathbb{N}$ then because of consistency we must have $ S_d( F_H(\sigma(m))-1) \le m$ since $\sigma$ must first list all the terms $n$ in $\bbN^d$ with $F_H(n) \le F_H(\sigma(m))-1$ before listing $\sigma(m)$. Likewise we must have $m \le S_d( F_H(\sigma(m)))$ since the $S_d( F_H(\sigma(m)))$ terms with $F_H(n) \le F_H(\sigma(m)), n \in \bbN^d$ must be listed by $\sigma$ first, including $m$, before any others. Consequently we deduce \begin{equation} \label{productbound} \begin{aligned} (F_H(\sigma(m)) -1) \underline{p}_d (\log(F_H(\sigma(m)) & -1)) \le m \le F_H(\sigma(m)) \overline{p}_d ( \log( F_H( \sigma(m)) )). \end{aligned} \end{equation} We now treat both sides separately. Looking at the LHS we get the estimate $F_H(\sigma(m)) -1 \le \tilde{g}_d(m),$ where $\tilde{g}_d(m)$ is the inverse function (defined for large $m$) of \[ \begin{aligned} \tilde{f}_d(x):= & \frac{1}{(d-1)!} x \log^{d-1}(x) + x \cdot \mbox{(degree $d-2$ poly)}(\log(x)), \end{aligned} \] and so we may apply part 2.\ of Lemma \ref{ordersimplify} (after rescaling by $(d-1)!$) to deduce $ F_H(\sigma(m)) \le h_d( (d-1)! m ) \cdot (1 + \epsilon(m)),$ where $\epsilon(m) \to 0$ as $m \to \infty$. The right hand side is handled similarly to get the same asymptotic lower bound on $F_H(\sigma(m))$, namely $ F_H(\sigma(m)) \ge h_d( (d-1)! m ) \cdot (1 + \epsilon(m)),$ where $\epsilon(m) \to 0$ as $m \to \infty$. Since $\frac{h_d((d-1)! x)}{(d-1)! h_d(x)} \to 1$ as $x \to \infty$ the proof is complete. \end{proof} Equation (\ref{hyperbolicdecayrate}) allows us to determine the optimal decay rate in the case when the optimal one-dimensional decay rate is a power of $N$. \begin{theorem} \label{generaltensortheorem} Returning to the framework of Corollary \ref{generaltensorcorollary}, if $f(n)=n^{- \alpha}$ for $n \in \bbN$, $F(n)= \prod_{i=1}^d f(n_i)$ for $n \in \bbN^d$ and $\sigma : \bbN \to \bbN^d$ corresponds to the hyperbolic cross then \be{ \label{finaltensordecay} F(\sigma(N)) = \Bigg( \prod_{i=1}^d \sigma(N)_i \Bigg)^{- \alpha} \sim \big((d-1)! \cdot h_d(N) \big) ^{-\alpha}, \quad N \to \infty. } Consequently $h_d^{-\alpha}$ is representative of the optimal decay rate for the basis pair $(B_1^d,B_2^d)$. Furthermore, an ordering $\rho$ is strongly optimal for the basis pair $(B_1^d,B_2^d)$ if and only if there are constants $C_1, C_2>0$ such that \begin{equation} \label{hyperbolicdef} C_1 \cdot h_d(N) \le \prod_{i=1}^d \Big((\rho_1^d)^{-1} \circ \rho(N) \Big)_i \le C_2 \cdot h_d(N), \quad N \in \bbN.
\end{equation} \end{theorem} \begin{proof} Equation (\ref{finaltensordecay}) follows immediately from (\ref{hyperbolicdecayrate}). This implies that $F \circ \sigma \approx h_d^{-\alpha}$. The statement on the optimal decay rate then follows from the characterization established in Lemma \ref{generaltensor} combined with Lemma \ref{characterisationlemma}. The statement on strongly optimal orderings follows from Corollary \ref{generaltensorcorollary}. \end{proof} \begin{definition} \label{hyperbolicordering} Using the framework of Lemma \ref{generaltensor}, any ordering $\rho: \bbN \to B_1^d$ such that (\ref{hyperbolicdef}) holds is called a `hyperbolic ordering' with respect to $\rho_1$. Notice that by (\ref{finaltensordecay}), if $\sigma: \bbN \to \bbN^d$ corresponds to the hyperbolic cross then $\rho_1^d \circ \sigma$ is hyperbolic with respect to $\rho_1$. \end{definition} We now apply Theorem \ref{generaltensortheorem} to the one-dimensional cases we have already covered: \subsection{Fourier-Wavelet Case} \begin{theorem} \label{TensorResultsWavelet} We use the setup of Lemma \ref{generaltensor}. Suppose $B_1=B_\mathrm{f}(\epsilon)$, $B_2=B_\mathrm{w}$ for some fixed $\epsilon \in I_{J,p}$, $\rho_1$ is a standard ordering of $B_1$ and $\tau_1$ is a leveled ordering of $B_2$. Let $U_d=[(B_1^d, \rho), (B_2^d, \tau)]$ where $\rho, \tau$ are hyperbolic with respect to $\rho_1, \tau_1$ respectively. Then we have, for some constants $C_1,C_2>0$, \begin{equation}\label{FourWave1} \frac{C_1 \log^{d-1}(N+1)}{N} \le \mu( \pi_N U_d ), \ \mu(U_d \pi_N) \le \frac{C_2 \log^{d-1}(N+1)}{N}, \qquad N \in \bbN. \end{equation} The above also holds if the basis $B_\mathrm{w}$ is replaced by $B_{\mathrm{b} \mathrm{w}}$ and the condition $\epsilon \in I_{J,p}$ by $\epsilon \in (0,1/2]$. \end{theorem} \begin{proof} Inequality (\ref{FourWave1}) follows from applying Theorem \ref{FourierWaveletResults} to Theorem \ref{generaltensortheorem}. \end{proof} \subsection{Fourier-Polynomial Case} \begin{theorem} \label{TensorResultsPoly} We use the setup of Lemma \ref{generaltensor}. Suppose $B_1=B_\mathrm{f}(\epsilon)$, $B_2=B_\mathrm{p}$ for some fixed $\epsilon \in (0,0.45]$, $\rho_1$ is a standard ordering of the Fourier basis and $\tau_1$ is the natural ordering of the polynomial basis. Let $U_d=[(B_1^d, \rho), (B_2^d, \tau)]$ where $\rho, \tau$ are hyperbolic with respect to $\rho_1, \tau_1$ respectively. Then we have, for some constants $C_1,C_2>0$, that \begin{equation}\label{FourLeg1} \frac{C_1 (\log^{d-1}(N+1))^{2/3}}{N^{2/3}} \le \mu( \pi_N U_d ), \ \mu(U_d \pi_N) \le \frac{ C_2 (\log^{d-1}(N+1))^{2/3}}{N^{2/3}}, \qquad N \in \bbN. \end{equation} \end{theorem} \begin{proof} Inequality (\ref{FourLeg1}) follows from applying Theorem \ref{FourierPolynomialResults} to Theorem \ref{generaltensortheorem}. \end{proof} \subsection{Examples of Hyperbolic Orderings} \label{hyperbolicexamples} The generalisation introduced by Definition \ref{hyperbolicordering}, apart from allowing us to characterise all orderings that are strongly optimal, may seem to fulfil little other purpose. However, as we shall see in this section, this definition admits orderings which in specific cases are very natural and appear a little less abstract than an ordering derived from the hyperbolic cross. \begin{example} \label{hypcrossZd} (Hyperbolic Cross in $\mathbb{Z}^d$) Our first example is unremarkable but nonetheless important. In $d$ dimensions, take $B_1^d:=B_\mathrm{f}^d$ as a $d$-dimensional tensor Fourier basis.
Recall we can identify this basis with $\mathbb{Z}^d$ using the function $\lambda_d$. Suppose that we define a function $H_d: \mathbb{Z}^d \to \mathbb{R}$ by \be{ \label{hddef} H_d(m) = \prod_{i=1}^d | \max(|m_i|,1)|, } and say that a bijective function $\sigma: \mathbb{N} \to \mathbb{Z}^d$ `corresponds to the hyperbolic cross in $\mathbb{Z}^d$' if it is consistent with $H_d$. Figure \ref{hyperbolic} shows the first few contour lines of $H_d$ in two dimensions. With this definition we can then prove the analogous result of Lemma \ref{orderset}: \begin{figure} \centering \includegraphics[width=0.6\textwidth]{cross} \caption{Hyperbolic Fourier Ordering in Two Dimensions: A Contour Plot of $H_2$} \label{hyperbolic} \end{figure} \begin{lemma} \label{orderset2} Let $\sigma: \mathbb{N} \to \mathbb{Z}^d$ correspond to the hyperbolic cross and let $h_d$ be as in (\ref{hyperbolicdecayrate}). Then we have \be{ \label{hyperboliccrossZdecay} \prod_{i=1}^d | \max(| \sigma(m)_i|,1)| \sim \frac{(d-1)!}{2^d} \cdot h_d(m) \quad \mbox{as} \quad m \to \infty. } Moreover, if $\rho_1$ is a standard ordering of $B_\mathrm{f}$ and $\sigma: \mathbb{N} \to \mathbb{Z}^d$ corresponds to the hyperbolic cross. Then $\lambda_d^{-1} \circ \sigma$ is a hyperbolic ordering with respect to $\rho_1$. \end{lemma} \begin{proof} Let $R_d(n)$ denote the number of lattice points in the hyperbolic cross of size $n$ in $\bbZ^d$, namely \[ R_d(n) := \# \{ m \in \mathbb{Z}^d : \prod_{i=1}^d \max(| m_i|,1) \le n \}. \] Call the set in the above definition $ \mathcal{H}_d(n)$. If we remove the hyperplanes $\{m_i=0 \}$ for every $i$ from $\mathcal{H}_d(n)$, we are left with $2^d$ quadrants in $\mathbb{Z}^d$ which are congruent to set in equation (\ref{simplehyperbolic}). From the second part of Lemma \ref{orderset} we therefore have \[ R_d(n) \ge 2^d n \underline{p}_d( \log(n)). \] Next notice that the intersection of $\mathcal{H}_d(n)$ with each hyperplane $\{ m_i=0 \}$ can be identified with $\mathcal{H}_{d-1}(n)$ and so we also have the upper bound \[ \begin{aligned} R_d(n) & \le 2^d n \overline{p}_d( \log(n)) + d \cdot R_{d-1}(n) \quad \Rightarrow \quad R_{d}(n) \le n \overline{r}_d( \log (n)), \end{aligned} \] for some degree $d-1$ polynomial $\overline{r}_d$ with leading coefficient $\frac{2^{d}}{(d-1)!}$. Combining the upper and lower bounds we see that for some polynomials $\underline{r}_d, \overline{r}_d$ of degree $d-1$ with leading coefficient $ \frac{2^{d}}{(d-1)!}$ we have \[ n \underline{r}_d (\log(n)) \le R_d(n) \le n \overline{r}_d( \log(n) ). \] Therefore for $m \in \mathbb{N}$ since \[R_d(H_d(\sigma(m))-1) \le m \le R_d(H_d(\sigma(m))),\] we have \[ \begin{aligned} (H_d(\sigma(m))-1) & \underline{r}_d (\log(H_d(\sigma(m))-1)) \le m \le H_d(\sigma(m)) \overline{r}_d (\log(H_d(\sigma(m)))). \end{aligned} \] Consequently we can apply Lemma \ref{ordersimplify} to both sides to derive (\ref{hyperboliccrossZdecay}) like in the proof of Lemma \ref{orderset}. For the last part of the Lemma notice that since $\rho_1$ is a standard ordering then $\max(|\lambda_1 \circ \rho_1(N)|,1) \approx N$. 
This means that the bounds on $\mu(\pi_N U)$ in Theorem \ref{FourierWaveletResults} can be rephrased as (for some constants $C_1,C_2>0$) \[ C_1 \cdot (\max(|n|,1))^{-1} \le \sup_{g \in B_\mathrm{w}} | \langle \lambda_1^{-1}(n), g \rangle |^2 \le C_2 \cdot (\max(|n|,1))^{-1}, \quad n \in \bbZ, \] and by Lemma \ref{generaltensor} this extends to the $d$-dimensional tensor case: \be{ \label{zhypcrosscharacterise} C^d_1 \cdot \prod_{i=1}^d (\max(|n_i|,1))^{-1} \le \sup_{g \in B^d_\mathrm{w}} | \langle \lambda_d^{-1}(n), g \rangle |^2 \le C^d_2 \cdot \prod_{i=1}^d (\max(|n_i|,1))^{-1}, \quad n \in \bbZ^d. } This describes a characterization of the optimal decay of $(B_\mathrm{f}^d(\epsilon),B^d_\mathrm{w})$. Lemma \ref{characterisationlemma} tells us that $\lambda_d^{-1} \circ \sigma$ is strongly optimal for $(B_\mathrm{f}^d(\epsilon),B^d_\mathrm{w})$, which by Theorem \ref{generaltensortheorem} is hyperbolic with respect to $\rho_1$. \end{proof} \end{example} \begin{example} (Tensor Wavelet Ordering) Now we look at an example of a less obvious hyperbolic ordering. We first introduce some notation to describe a tensor wavelet basis: For $j \in \mathbb{N}, \ k \in \mathbb{Z}$ let $\phi^0_{j,k}:= \phi_{j,k}, \ \phi^1_{j,k}:= \psi_{j,k}$. Now for $s \in \{ 0,1 \}^d , \ j \in \mathbb{N}^d, \ k \in \mathbb{Z}^d$ define \[ \Psi^s_{j,k} := \bigotimes_{i=1}^d \phi^{s_i}_{j_i,k_i}.\] Then it follows that for $J \in \bbN$ fixed, we have \begin{align} \label{tensorwaveletbasisdefine} B^d_\mathrm{w} := \left\{ \begin{array}{cc} & \mathrm{Supp}(\phi^{s_i}_{j_i,k_i}) \cap (-1,1) \neq \emptyset \quad \forall i , \\ \Psi^s_{j,k} : & s_i=0 \Rightarrow j_i=J, \quad s_i=1 \Rightarrow j_i\ge J \\ & j \in \mathbb{N}^d, s \in \{ 0,1 \}^d , \ k \in \mathbb{Z}^d \end{array} \right \} . \end{align} The same approach can be applied to the boundary wavelet basis $B_{\mathrm{b} \mathrm{w}}$ to generate a boundary tensor wavelet basis $B^d_{\mathrm{b} \mathrm{w}}$, although we must include the extra boundary terms, which can be done by letting $s \in \{ 0,1,2,3 \}^d$ where $\phi^2_{J,n}$ would be a boundary scaling function term and $\phi^3_{j,n}$ a boundary wavelet term. \begin{lemma} \label{tensorwavelethyp} Let $\rho_1$ be any leveled ordering of a one-dimensional Haar wavelet basis $B_{\mathrm{w}}$. Setting $\overline{j}=\sum_{i=1}^d j_i$, define $F_\text{hyp}:B^d_{\mathrm{w}} \to \mathbb{R}$ by the formula \[ F_\text{hyp}( f) = \overline{j} \quad \mbox{if } \quad f= \Psi^s_{j,k}. \] Then any ordering $\rho: \mathbb{N} \to B^d_{\mathrm{w}}$ that is consistent with $F_\text{hyp}$ is a hyperbolic ordering with respect to $\rho_1$. \end{lemma} \begin{remark} Such an ordering $\rho$ is used to implement a tensor wavelet basis in Section \ref{numericalsection}. \end{remark} \begin{remark} For the sake of simplicity we only work with the Haar wavelet case, although we could cover the boundary wavelet case with the same argument. \end{remark} \begin{proof} By recalling inequality (3.10) in \cite{onedimpaper} or by using Lemma \ref{levelgrowth} in the case $d=1$ we know that there are constants $C_1, C_2>0$ such that for $\rho_1(N) = \phi^{s}_{j,k}, s \in \{0,1\}, j \in \bbN, k \in \bbZ$, \[ C_1 2^{ j} \le N \le C_2 2^{j}, \qquad N \in \bbN. \] Therefore, writing $\rho_1^d(m) = \Psi^{s(m)}_{j(m),k(m)}$, \[ C^d_1 2^{ \overline{j(m)}} \le \prod_{i=1}^d m_i \le C^d_2 2^{ \overline{j(m)}}, \qquad m \in \bbN^d.
\] Consequently if we rewrite this with an actual ordering $\rho(N)= \Psi^{s(N)}_{j(N),k(N)}$ for $N \in \mathbb{N}$ we deduce \begin{equation} \label{down2j} C^d_1 2^{ \overline{j(N)}} \le \prod_{i=1}^d \Big( (\rho_1^d)^{-1} \circ \rho(N) \Big)_i \le C^d_2 2^{ \overline{j(N)}} , \end{equation} and so we have reduced the problem to determining how $\overline{j(N)}$ scales with $N$. Notice that from our ordering of the wavelet basis that $\overline{j(N)}$ is a monotonically increasing function in $N$ and moreover, for every value of $\overline{j(N)}$ there are $r_d(\overline{j(N)})2^{ \overline{j(N)}}$ terms in $B_w^d$ with this value of $\overline{j(N)}$ in the wavelet basis, where \[ \begin{aligned} r_d(N):= \# \Bigg\{ (j,s) \in \mathbb{N}^d \times \{0,1 \}^d : \quad \overline{j} = N , \quad j_i \ge J, \quad (s_i-1)(j_i-J)=0 \quad \forall i=1,...,d \Bigg\}, \end{aligned} \] This is where we are using that the support of the Haar wavelet is $[0,1]$ and so there are $2^j$ shifts of $\phi_{j,0}, \psi_{j,0}$ in $B_\mathrm{w}$. Notice that $r_d(N)$ is a polynomial of degree $d-1$. With this in mind notice we can define, consistent for $n \in \mathbb{N}, \ n \ge J$, \begin{align*} T_d(x) & ``=" \sum_{i=J}^x r_d(i)2^{i} := \ p_d(x) 2^{x} + \alpha_d , \end{align*} for some degree $d-1$ polynomial $p_d$ and constant $\alpha_d$. This is possible by taking the formula for the geometric series expansion and differentiating repeatedly. By the consistency property of $\rho$ we deduce the inequality \begin{align*} T_d( \overline{j(N)}-1) \le N \le T_d(\overline{j(N)}) \quad &\Rightarrow \quad \overline{j(N)}-1 \le T_d^{-1}(N) \le \overline{j(N)} \\ &\Rightarrow \quad 2^{\overline{j(N)}-1} \le 2^{T_d^{-1}(N)} \le 2^{\overline{j(N)}}. \end{align*} Notice that $2^{T_d^{-1}(x)}$ is the inverse function of $T_d( \log_2 x)$ which is of the form $ x \cdot p_d( \log_2x) + \alpha,$ Therefore, applying parts 2. \& 3. of Lemma \ref{ordersimplify} gives, for some constants $D_1,D_2 >0$ and $N$ large, \begin{equation} \label{gasymp} \quad (1+ \epsilon_1(N)) \cdot D_1 \cdot 2^{\overline{j(N)}} \le g_d(N) \le (1+\epsilon_2(N)) \cdot D_2 \cdot 2^{\overline{j(N)}} , \end{equation} where $\epsilon_1(N), \epsilon_2(N) \to 0$ as $N \to \infty$. Combining this with (\ref{down2j}) shows that we have a hyperbolic ordering. \end{proof} \end{example} \subsection{Plotting Tensor Coherences} Let us consider a simple illustration of this theory applied to a 2D tensor Fourier-Wavelet case $(B^2_\mathrm{f},B^2_\mathrm{w})$. We can identify the 2D Fourier Basis $B^2_\mathrm{f}$ with $\bbZ^2$ using the function $\lambda_2$, so the row incoherences can also be identified with $\bbZ^2$ and therefore they can be imaged directly in 2D, as in Figure \ref{tensorincoherences}. \begin{figure}[!t] \begin{center} \begin{subfigure}[t]{0.32\textwidth} \begin{center} \includegraphics[width=\textwidth]{tensororig} \caption{\footnotesize Original Coherences} \end{center} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \begin{center} \includegraphics[width=\textwidth]{tensorscaling} \caption{\footnotesize Hyperbolic Scaling } \end{center} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \begin{center} \includegraphics[width=\textwidth]{tensorscaled} \caption{\footnotesize Scaled Coherences \\ (Product of (a) \& (b)) } \end{center} \end{subfigure} \end{center} \caption{2D Tensor Fourier - Tensor Haar Incoherences. We show the subset $\{-250,-249,...,249,250\}^2 \subset \bbZ^2$. 
Notice that the scaled coherences have no vanishing values (no pure black) and no values that blow up (no pure white, barring the centre value, which equals $1$), indicating that we have characterised the coherence in terms of the hyperbolic scaling used. Formally this is shown by equation (\ref{zhypcrosscharacterise}). The coherences shown in the figure are square-rooted to reduce contrast (i.e.\ we image $ \sqrt{\mu(\pi_N U)}$ instead of $\mu(\pi_N U)$).} \label{tensorincoherences} \end{figure} \section{Multidimensional Fourier - Separable Wavelet Case: Proof of Theorem \ref{separablesummary}} \label{separable} We repeat the notation of the one-dimensional case, with scaling function $\phi$ (in one dimension) and Daubechies wavelet $\psi$: \[ \phi_{j,k}(x) =2^{j/2}\phi(2^j x - k) , \quad \psi_{j,k} (x) = 2^{j/2} \psi(2^j x - k). \] We can construct a $d$-dimensional scaling function $\Phi$ by taking the tensor product of $\phi$ with itself, namely \[ \Phi(x) \ := \ \Big( \bigotimes_{j=1}^d \phi \Big)(x) = \prod_{j=1}^d \phi(x_j), \qquad x \in \mathbb{R}^d, \] which has corresponding multiresolution analysis $(\tilde{V}_j)_{j \in \mathbb{Z}}$ with diagonal scaling matrix $A \in \mathbb{R}^{d \times d}$ with $A_{i,j}=2 \delta_{i,j}$. Let $\phi^0:=\phi, \ \phi^1:=\psi$ and for $s \in \{ 0,1 \}^d, \ j \ge J, \ k \in \mathbb{Z}^{d}$ where $J \in \bbN$ is fixed define the functions \be{ \label{separabledefine} \Psi^s_{j,k}:= \bigotimes_{i=1}^d \phi^{s_i}_{j,k_i}. } We write (for $s \in \{ 0,1 \}^d \setminus \{0\}, \ j \ge J$) \[W^s_j := \overline{ \text{Span} \{ \Psi^s_{j,k} : k \in \mathbb{Z}^d \} }.\] It then follows that \[ \tilde{V}_{j+1} = \tilde{V}_j \oplus \bigoplus_{s \in \{ 0,1 \}^d \setminus \{0\}} W^s_j, \quad L^2(\mathbb{R}^d) = \overline{\tilde{V}_J \oplus \bigoplus_{\substack{s \in \{ 0,1 \}^d \setminus \{0\} \\ j \ge J}} W^s_j}.\] This corresponds to taking $2^d-1$ wavelets for our basis in $d$ dimensions (see \cite{dDwav}). As before we take the spanning functions from the above whose support has non-zero intersection with $[-1,1]^d$ as our basis $B_2$ (called a `separable wavelet basis'): \begin{align} \label{separablewaveletbasisdefine} B^d_{\text{sep}} := \left\{ \begin{array}{cc} & \mathrm{Supp}(\phi^{s_i}_{j,k_i}) \cap (-1,1) \neq \emptyset \quad \forall i , \\ \Psi^s_{j,k} : & s=0 \Rightarrow j=J, \\ & j \in \mathbb{N}, s \in \{ 0,1 \}^d , \ k \in \mathbb{Z}^d \end{array} \right \} . \end{align} \begin{remark} We can also construct a separable boundary wavelet basis in the same manner as in the one-dimensional case; however, for the sake of simplicity, we stick to the above relatively simple construction throughout (although all the coherence results we cover here also hold for the separable boundary wavelet case as well). \end{remark} \subsection{Ordering the Separable Wavelet Basis: Proving Theorem \ref{separablesummary} Part (i)} \label{separablewaveletordering} We note a few key equalities from the one-dimensional case that will come in handy: \be{ \label{1Dequalities} \mathcal{F}\phi_{j,k}( \omega ) = e^{-2\pi \mathrm{i} 2^{-j} k \omega} 2^{-j/2} \mathcal{F} \phi(2^{-j} \omega), \qquad \mathcal{F}\psi_{j,k}( \omega ) = e^{-2\pi \mathrm{i} 2^{-j} k \omega} 2^{-j/2} \mathcal{F} \psi(2^{-j} \omega), } where $\mathcal{F}$ here denotes the Fourier transform, i.e.\ for $f \in L^2(\bbR^d)$ we define $$ \mathcal{F}f(\omega) = \int_{\mathbb{R}^d} f(x) e^{-2\pi i \omega \cdot x } \, dx, \qquad \omega \in \bbR^d. $$ Recall $\chi_k$ from Definition \ref{fourier}.
We observe that by (\ref{separabledefine})
\begin{equation} \label{fourierprod} \begin{aligned} \langle \Psi^s_{j,k} , \chi_n \rangle & = \epsilon^{d/2} \cdot \mathcal{F} \Psi^s_{j,k}(\epsilon n) = \epsilon^{d/2} \prod_{i=1}^d \mathcal{F} \phi^{s_i}_{j,k_i} (\epsilon n_i), \qquad n \in \bbZ^d, \\ \Rightarrow & \sup_{n \in \bbN^d} |\langle \Psi^s_{j,k} , \chi_n \rangle|^2 = \epsilon^{d} 2^{-dj} \cdot \prod_{i=1}^d \sup_{n \in \bbN} |\mathcal{F} \phi^{s_i} (\epsilon 2^{-j} n)|^2. \end{aligned} \end{equation}
By careful treatment of the product term we can determine the optimal decay of $(B^d_\text{sep}, B_\mathrm{f}^d (\epsilon))$, using the following result:
\begin{proposition} \label{dDSeparableWaveletFourierCharacterisation} There are constants $C_1,C_2>0$ such that for all $\epsilon \in I_{J,p}, \Psi^s_{j,k} \in B^d_\text{sep}$ we have \[ C_1 \cdot \epsilon^{d} 2^{-dj} \le \sup_{n \in \bbN^d} |\langle \Psi^s_{j,k} , \chi_n \rangle|^2 \le C_2 \cdot \epsilon^{d} 2^{-dj}. \] Consequently, fixing $\epsilon$, the function $F_\text{power}: B^d_\text{sep} \to \bbR$ defined by $F_\text{power}(\Psi^s_{j,k})=2^{-dj}$ characterizes the optimal decay of $(B_\text{sep}^d, B_\mathrm{f}^d(\epsilon))$. \end{proposition}
\begin{proof} Let $A=\max(\sup_{\omega \in \bbR} |\mathcal{F} \phi (\omega)|^2, \sup_{\omega \in \bbR} |\mathcal{F} \psi (\omega)|^2)$. Then (\ref{fourierprod}) gives us the upper bound \[ \sup_{n \in \bbN^d} |\langle \Psi^s_{j,k} , \chi_n \rangle|^2 \le \epsilon^{d} 2^{-dj} \cdot A^d. \] This leaves the lower bound, which follows if we can show that there exist constants $D_1, D_2>0$ such that for all $\epsilon \in I_{J,p}$\footnote{Notice that replacing $J$ with $j \ge J$ below would have been redundant.} \be{ \label{fourierinfimum} \mathcal{F}_1(\epsilon):=\sup_{n \in \bbN} |\mathcal{F} \phi (\epsilon 2^{-J} n)| \ge D_1, \quad \mathcal{F}_2(\epsilon):= \sup_{n \in \bbN} |\mathcal{F} \psi (\epsilon 2^{-J} n)| \ge D_2. } By the Riemann-Lebesgue Lemma the functions $\mathcal{F}_1, \mathcal{F}_2$ are continuous on $I_{J,p}$ and \[ \mathcal{F}_1(\epsilon) \to \sup_{\omega \in \bbR} |\mathcal{F} \phi (\omega)|>0 \quad \text{as} \quad \epsilon \to 0. \] Likewise for $\mathcal{F}_2$. Therefore $\mathcal{F}_1, \mathcal{F}_2$ can be extended to continuous functions over the closed interval $I_{J,p} \cup \{0\}$. Finally we notice that $\mathcal{F}_1(\epsilon)>0, \mathcal{F}_2(\epsilon)>0$ for every $\epsilon \in I_{J,p}$, since otherwise we would deduce that $\phi$ or $\psi$ has no support in $[-1,1]$ since the span of $B_\mathrm{f}^d(\epsilon)$ covers $L^2[-1,1]$. This means that the infima over $I_{J,p} \cup \{0\}$ are attained and are strictly positive, proving (\ref{fourierinfimum}) and the lower bound. \end{proof}
Let $F_\text{level}:B^d_\text{sep} \to \bbR$ be defined by $F_\text{level}(\Psi^s_{j,k})=j$. Lemma \ref{characterisationlemma} tells us that an ordering that is consistent with $1/F_\text{power}$, i.e. consistent with $F_\text{level}$, will be strongly optimal.
\begin{definition} \label{sepleveled} We say that an ordering $\rho: \mathbb{N} \to B^d_{\text{sep}}$ is `leveled' if it is consistent with $F_\text{level}$. \end{definition}
\begin{lemma} \label{levelgrowth} Let $\rho: \mathbb{N} \to B^d_{\text{sep}}$ be leveled. Then there are constants $D_1, D_2>0$ such that \be{ \label{levelgrowthequation} D_1 \cdot N \le 2^{d F_\text{level}(\rho(N))} \le D_2 \cdot N. } \end{lemma}
\begin{proof} Let $a \in \mathbb{N}$ denote the length of the support of $\phi, \psi$.
Notice that for each $j \in \mathbb{N}$ and $s \in \{ 0,1 \}^d$, there are $(2^{j+1} +a-1)^d $ shifts of $\Psi^{s}_{j,0}$ whose support has non-empty intersection with $[-1,1]^d$. For convenience we use the notation $j(N):= F_\text{level}(\rho(N))$ and shall also be using the simple bounds $ 2^{j(N)+1} \le 2^{j(N)+1}+a-1 \le 2^{j(N)+a}$. Now for every $N \in \bbN$ with $j(N)>J$, we must have had all the terms of the form $f \in B^d_{\text{sep}}, F_\text{level}(f)=j(N)-1$ come before $N$ in the leveled ordering and there are at least $(2^d-1) \cdot 2^{dj(N)}$ of these terms, implying that \[ (2^d-1) \cdot 2^{dj(N)} \le N.\] This completes the upper bound for $j(N)>J$. Likewise for every $N \in \bbN$ with $j(N)\ge J$ there can be no more than \[ 2^d \cdot \sum_{i=J}^{j(N)} 2^{d(i+a)} \le 2^d \cdot 2^{d(j(N)+a+1)}= 2^{d(a+2)} \cdot 2^{dj(N)}, \] terms such that $F_\text{level}(f) \le j(N)$. This shows that $N \le 2^{d(a+2)} \cdot 2^{dj(N)}$, completing the lower bound for $j(N)\ge J$. Extending (\ref{levelgrowthequation}) to all $N \in \bbN$ (i.e. $j(N) \ge J$) is trivial since we have only omitted finitely many terms so a change of constants will suffice. \end{proof}
\begin{corollary} \label{leveledresults} Any ordering $\rho$ of $B^d_\text{sep}$ that is leveled is strongly optimal for the basis pair $(B^d_\text{sep},B^d_\mathrm{f}(\epsilon))$. Furthermore, the optimal decay rate of $(B^d_\text{sep},B_\mathrm{f}^d(\epsilon))$ is represented by the function $f(N)=N^{-1}$. \end{corollary}
\begin{proof} Lemma \ref{characterisationlemma} applied to Proposition \ref{dDSeparableWaveletFourierCharacterisation} tells us that $\rho$ is strongly optimal and moreover the optimal decay rate is represented by $F_\text{power}(\rho(N))$ which by Lemma \ref{levelgrowth} is of order $N^{-1}$. \end{proof}
\subsection{Ordering the Fourier Basis: Proving Theorem \ref{separablesummary} Part (ii)} \label{linearproof}
We now want to find the optimal decay rate of $(B^d_\mathrm{f}(\epsilon), B^d_{\text{sep}})$ which means looking at orderings of the Fourier basis. It might be tempting to try to extend the standard ordering definition from the one dimensional Fourier basis. Recall as well that, using the function $\lambda_d$ defined in (\ref{multidimlambda}), ordering $B^d_\mathrm{f}(\epsilon)$ is equivalent to ordering $\bbZ^d$. If we let $s \in \{ 0,1 \}^d , \ j \in \bbN, \ k \in \bbZ^d$, then in order to bound the coherence $\mu(\pi_NU)$ we need to bound terms of the form
\be{ \label{sepprodexample2} | \langle \Psi^{s}_{j,k} , \lambda_d^{-1} (n) \rangle |^2 = \epsilon^d 2^{-dj} \prod_{i=1}^d | \mathcal{F} \phi^{s_i}(2^{-j} \epsilon n_i) |^2. }
In the one-dimensional case in \cite{onedimpaper} the following decay property of the Fourier transform of the scaling function $\phi$ was used:
\begin{lemma} \label{FTdecayLemma} If $\phi$ is any Daubechies scaling function with corresponding mother wavelet $\psi$ then there exists a constant $K>0$ such that for all $\omega \in \bbR \setminus \{0 \}$, \be{ \label{FTdecay} | \mathcal{F} \phi(\omega)|, |\mathcal{F} \psi (\omega)| \le \frac{K}{|\omega|}. } Furthermore, suppose that for some $\alpha>0$ we have, for some constant $K>0$, the decay $| \mathcal{F} \phi(\omega)| \le K | \omega|^{-\alpha}$ for all $\omega \in \bbR \setminus \{0\}$. Then, for a larger constant $K>0$, $| \mathcal{F} \psi(\omega)| \le K | \omega|^{-\alpha}$ for all $\omega \in \bbR \setminus \{0\}$. \end{lemma}
\begin{proof} The first result is a direct consequence of Lemma 3.5 in \cite{onedimpaper}.
The last statement follows immediately from the equality (taken from equation (3.14) in \cite{onedimpaper}):
\begin{equation} \label{fourierscalingwavelet} |\mathcal{F}\psi(2 \omega)|= |m_0(\omega + 1/2) \cdot \mathcal{F}\phi(\omega)|, \end{equation}
where $m_0$ is the low pass filter corresponding to $\phi$ which satisfies $|m_0(\omega)| \le 1$ for all $\omega \in \bbR$. \end{proof}
Let us first consider the case where we use (\ref{FTdecay}) to bound every term in the product, giving us ($n \in \bbZ^d, n_i \neq 0, i=1,...,d$)
\be{ \label{hyperbolicbound} | \langle \Psi^{s}_{j,k} , \lambda_d^{-1} (n) \rangle |^2 \le \epsilon^d 2^{-dj} \prod_{i=1}^d \frac{K^{2}}{|\epsilon 2^{-j} n_i|} =\frac{K^{2d}}{ \prod_{i=1}^d | n_i|}. }
Making adjustments to prevent dividing by zero by using $\sup_{\omega \in \bbR} \max(| \mathcal{F} \phi(\omega)|, | \mathcal{F} \psi(\omega)|)\le 1$ (for $\phi$ this follows from Proposition 1.11 in \cite{wav}; we extend this to $\psi$ using equation (\ref{fourierscalingwavelet})), this can then be rephrased as
\be{ \label{hyperbolicbound2} \sup_{g \in B_\text{sep}^d} | \langle g , \lambda_d^{-1} (n) \rangle |^2 \le \frac{\max (K^{2d},1)}{ \prod_{i=1}^d \max(| n_i|,1)}, \qquad n \in \bbZ^d. }
This tells us that the function $F_\text{hyp}: \bbZ^d \to \bbR, F_\text{hyp}(n)= (\prod_{i=1}^d \max(|n_i|,1))^{-1}$ dominates the optimal decay of $(B_\mathrm{f}^d, B_\text{sep}^d)$ (see Definition \ref{Characterisation} for the definition of domination). Therefore if we want to maximise the utility of this bound then we should use an ordering $\sigma$ of $\bbZ^d$ so that $\prod_{i=1}^d \max(|\sigma(N)_i|,1)$ is increasing, namely an ordering corresponding to the hyperbolic cross in $\bbZ^d$ (see Example \ref{hypcrossZd}). However, using such an ordering will not give us the $N^{-1}$ decay rate that we got from the one dimensional case:
\begin{proposition} \label{Hyperbolic4Separable} Let $\sigma: \bbN \to \bbZ^d$ correspond to the hyperbolic cross in $\bbZ^d$ and define an ordering $\rho$ of $B^d_\mathrm{f}(\epsilon)$ by $\rho:=\lambda_d^{-1} \circ \sigma$, where $\epsilon \in I_{J,p}$. Next let $U=[(B^d_\mathrm{f}(\epsilon),\rho),(B^d_{\text{sep}},\tau)]$ for any ordering $\tau$ and fix $\epsilon$. Then there are constants $C_1, C_2 > 0$ such that \[ \frac{C_1 \log^{d-1}(N+1)}{N} \le \mu( Q_N U ) \le \frac{C_2 \log^{d-1}(N+1)}{N}, \qquad N \in \bbN. \] \end{proposition}
As this result is primarily for motivation, its proof is left to the appendix. Since this approach gives us suboptimal results, we return to our bound of (\ref{sepprodexample2}). Instead of using (\ref{FTdecay}) on every term in the product, why not use it just once, on the term that gives us the best decay? To bound the remaining terms we can simply use $\sup_{\omega \in \bbR} \max(| \mathcal{F} \phi(\omega)|, | \mathcal{F} \psi(\omega)|)\le 1$. This approach gives us the following bound
\be{ \label{linearbound1} | \langle \Psi^{s}_{j,k} , \lambda_d^{-1} (n) \rangle |^2 \le \epsilon^d 2^{-dj} \cdot \min_{i=1,...,d} \frac{K^{2}}{|\epsilon 2^{-j} n_i|} =\epsilon^{d-1} 2^{-(d-1)j} \cdot \frac{K^{2}}{ \max_{i=1,...,d} | n_i|}, \qquad n \in \bbZ^d \setminus \{ 0 \}. }
As we shall see in Lemma \ref{normest}, choosing $\rho$ so that we maximise the growth of $\max_{i=1,...,d} | n_i|$ leads to $ \max_{i=1,...,d} | n_i| \ge E \cdot N^{1/d}$ for some constant $E>0$ and so (\ref{linearbound1}) is bounded above by $\text{constant} \cdot N^{-1/d}$, which is very poor decay.
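Whether a decay faster than (\ref{FTdecay}) is available depends on the smoothness of the generating wavelet. The following short numerical sketch is purely illustrative and is not part of the analysis: it assumes the \texttt{wavefun} helper of the PyWavelets package to sample the scaling function and estimates the decay exponent of $|\mathcal{F}\phi|$ by fitting the peaks of an FFT over dyadic frequency bands.
\begin{verbatim}
import numpy as np
import pywt

def decay_exponent(name, level=14):
    """Estimate alpha with sup_{|w| in [W,2W)} |F phi(w)| ~ W^(-alpha)."""
    wav = pywt.Wavelet(name)
    phi, psi, x = wav.wavefun(level=level)   # sampled scaling function
    dx = x[1] - x[0]
    freqs = np.fft.rfftfreq(len(phi), d=dx)
    F = np.abs(np.fft.rfft(phi)) * dx        # approximate |F phi| on a grid
    bands = 2.0 ** np.arange(2, 9)           # dyadic bands W = 4, 8, ..., 256
    peaks = [F[(freqs >= W) & (freqs < 2 * W)].max() for W in bands]
    slope = np.polyfit(np.log(bands), np.log(peaks), 1)[0]
    return -slope                            # larger value = faster Fourier decay

for name in ["haar", "db2", "db4", "db8"]:
    print(name, round(decay_exponent(name), 2))
\end{verbatim}
Such estimates suggest that smoother Daubechies wavelets enjoy a strictly faster decay than the generic bound (\ref{FTdecay}), which motivates the stronger assumption introduced next.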
However, if we instead replace (\ref{FTdecay}) by the stronger condition
\begin{equation} \label{dDFTdecay} |\mathcal{F} \phi( \omega )| \le \frac{K}{| \omega |^{d/2}} , \qquad \omega \in \bbR \setminus \{0 \} , \end{equation}
then we can obtain the following upper bound\footnote{noting that (\ref{dDFTdecay}) also holds for $\psi$ by (\ref{fourierscalingwavelet}).}
\be{ \label{linearbound2} | \langle \Psi^{s}_{j,k} , \lambda_d^{-1}(n) \rangle |^2 \le \epsilon^d 2^{-dj} \cdot \min_{i=1,...,d} \frac{K^{2d}}{|\epsilon 2^{-j} n_i|^d} = \frac{K^{2d}}{ \max_{i=1,...,d} | n_i|^d}, \qquad n \in \bbZ^d \setminus \{ 0 \}. }
Let us write $\|n\|_\infty:=\max_{i=1,...,d} | n_i |$. The above can be rephrased as
\be{ \label{linearbound3} \sup_{g \in B_\text{sep}^d} | \langle g , \lambda_d^{-1}(n) \rangle |^2 \le \frac{\max(K^{2d},1)}{ \max(\|n\|_\infty^d,1)}, \qquad n \in \bbZ^d . }
Therefore we deduce that $F_\text{lin}: \bbZ^d \to \bbR, F_\text{lin}(n)=(\max(\|n\|_\infty^d,1))^{-1}$ dominates the optimal decay of $(B_\mathrm{f}^d (\epsilon), B_\text{sep}^d)$. In fact it can be shown that $F_\text{lin}$ also \textit{characterizes} the optimal decay (i.e. a lower bound of the same form is possible) by using the following preliminary lemma:
\begin{lemma} \label{wavelower} For any compactly supported wavelet $\psi$ there exists an $R \in \mathbb{N}$ such that for all $q \in \mathbb{N}$ with $q \ge R$ we have \be{ \label{wavelowerequation} L_q := \inf_{\omega \in [2^{-(q+1)},2^{-q}] } | \mathcal{F}\psi(\omega)| \ > \ 0. } \end{lemma}
\begin{proof} See Lemma 3.6 in \cite{onedimpaper}. \end{proof}
\begin{proposition} \label{dDSeparableFourierWavelet} We fix the choice of wavelet basis $B^d_\text{sep}$ and recall the function $\lambda_d : B^d_\mathrm{f} (\epsilon) \to \bbZ^d$ from (\ref{multidimlambda}).
1.) \ \ Then there are constants $C_1(\phi)>0, D(J)>0$ such that for all $\epsilon \in I_{J,p}$ and $n \in \bbZ^d$ with $\| n \|_\infty \ge D \epsilon^{-1}$ we have \be{ \label{linearbound4} \sup_{g \in B_\text{sep}^d} | \langle g , \lambda_d^{-1}(n) \rangle |^2 \ge \frac{C_1}{ \|n\|_\infty^d}. } Therefore (by fixing $\epsilon$) the function $F_\text{lin}$ is dominated by the optimal decay of $(B^d_\mathrm{f}(\epsilon), B^d_\text{sep})$.
2.) \ \ Suppose that $\phi$ satisfies (\ref{dDFTdecay}). Then there is a constant $C_2(\phi)>0$ such that for all $\epsilon \in I_{J,p}$ and $n \in \bbZ^d$, \be{ \label{linearbound5} \sup_{g \in B_\text{sep}^d} | \langle g , \lambda_d^{-1}(n) \rangle |^2 \le \frac{C_2}{ \max(\|n\|_\infty^d,1)}. } Therefore (by fixing $\epsilon$) the function $F_\text{lin}$ characterizes the optimal decay of $(B^d_\mathrm{f}(\epsilon), B^d_\text{sep})$. \end{proposition}
\begin{proof} 2.) \ \ Follows from (\ref{linearbound3}).
1.) \ \ If we set $j= \lceil \log_2 ( \epsilon \| n \|_\infty ) \rceil +q$ for some $q \in \bbN$ fixed we observe that $| \epsilon 2^{-j} n_i | \in [0,2^{-q}]$ for every $i=1,...,d$ and, since we are using the max norm, $| \epsilon 2^{-j} n_i | \in [2^{-q-1},2^{-q}]$ for at least one $i$, say $i'$. Set $s_i=0$ for $i \neq i'$ and $s_{i'}=1$. Then, assuming $j \ge J$, by (\ref{sepprodexample2}) we have the lower bound
\begin{equation} \label{lastcall} \begin{aligned} &| \langle \Psi^s_{j,0} , \lambda_d^{-1}(n) \rangle |^2 \ge \frac{2^{-d(q+1)}}{ \|n\|_\infty^d} \prod_{i=1}^d | \mathcal{F} \phi^{s_i} (\epsilon 2^{-j} n_i)|^2 \\ & \qquad \ge \frac{ 2^{-d(q+1)}}{\|n\|_\infty^d} \cdot \inf_{ \omega \in (2^{-q-1},2^{-q}]}| \mathcal{F} \psi(\omega)|^2 \cdot \inf_{ \omega \in [0,2^{-q}]}| \mathcal{F} \phi (\omega)|^{2(d-1)}.
\end{aligned} \end{equation}
Recall that by Lemma \ref{wavelower} there exists a $q \in \mathbb{N}$ such that $L_q>0$ and $\inf_{ \omega \in [0,2^{-q}]}| \mathcal{F} \phi (\omega)| >0 $\footnote{We are using the fact that $|\mathcal{F} \phi(0)|=1$ and continuity of $\mathcal{F} \phi$ here which follows from $\phi \in L^1(\bbR)$.} and therefore (\ref{linearbound4}) follows as long as $j \ge J$. To ensure that $j= \lceil \log_2 ( \epsilon \|n\|_\infty) \rceil + q$ satisfies $j \ge J$ we must therefore impose the constraint that $n$ is sufficiently large. $j \ge J$ is satisfied if \[ J \le \log_2 ( \epsilon \|n\|_\infty) \quad \Rightarrow \quad \|n\|_\infty \ge 2^{J} \epsilon^{-1}. \] \end{proof}
\begin{remark} \label{2doptimal} If $d=2$ then (\ref{dDFTdecay}) always holds by Lemma \ref{FTdecayLemma}. This means we have characterized every 2D separable wavelet case (for Daubechies wavelets). \end{remark}
\begin{remark} A similar upper bound in two dimensions based on the norm of $n \in \mathbb{Z}^2$ has already been considered in a discrete framework for separable Haar wavelets \cite{discrete}. \end{remark}
Let $F_\text{norm}(n):=\max(\|n\|_\infty,1)$. By Lemma \ref{characterisationlemma} we know that if (\ref{dDFTdecay}) holds then the optimal decay of $(B_\mathrm{f}^d,B^d_\text{sep})$ is determined by the fastest growth of $F_\text{norm}$. This motivates the following:
\begin{lemma} \label{normest} Let $\sigma : \mathbb{N} \rightarrow \mathbb{Z}^d$ be consistent with $F_\text{norm}$. Then there are constants $E_1 , E_2 >0$ such that \begin{equation} \label{normestresult} E_1 \cdot N^{1/d} \le \max(\| \sigma(N) \|_\infty,1) \le E_2 \cdot N^{1/d}, \qquad \forall N \in \mathbb{N}. \end{equation} \end{lemma}
\begin{proof} If $ \| \sigma(N) \|_\infty = L \ge 2$, then $\sigma$ must have enumerated beforehand all points $m$ in $ \mathbb{Z}^d$ with $\|m\|_\infty \le L-1$ and there are $(2L -1)^d$ such points. This means that \[ N \ge (2 L -1)^d \quad \Rightarrow \quad \| \sigma(N) \|_\infty \le \frac{N^{1/d}+1}{2}, \qquad N \in \bbN, \] which proves the upper bound when $ \| \sigma(N) \|_\infty = L \ge 2$. The lower bound is tackled similarly by noting $\sigma$ must first list all $m \in \bbZ^d$ with $\| m \|_\infty \le L$, including $\sigma(N)$, which shows \[ N \le (2 L +1)^d \quad \Rightarrow \quad \| \sigma(N) \|_\infty \ge \frac{N^{1/d}-1}{2}, \qquad N \in \bbN. \] This proves (\ref{normestresult}) for $ \| \sigma(N) \|_\infty = L \ge 2$. Extending this to all $ N \in \bbN$ is trivial since we have only omitted finitely many terms, so changing the constants will suffice since all terms are strictly positive. \end{proof}
\begin{definition}[Linear Ordering] Any ordering $\rho: \bbN \to B_\mathrm{f}^d(\epsilon)$ such that $\sigma=\lambda_d \circ \rho$ satisfies (\ref{normestresult}) is called a `linear ordering'. \end{definition}
\begin{corollary} \label{linearresults} Assuming (\ref{dDFTdecay}) holds for the scaling function corresponding to $B^d_\text{sep}$, an ordering $\rho$ of $B_\mathrm{f}^d(\epsilon)$ is strongly optimal for the basis pair $(B_\mathrm{f}^d(\epsilon),B^d_\text{sep})$ if and only if it is linear. Furthermore, the optimal decay rate of $(B_\mathrm{f}^d(\epsilon),B^d_\text{sep})$ is represented by the function $f(N)=N^{-1}$. \end{corollary}
\begin{proof} If we apply part 2.) of Proposition \ref{dDSeparableFourierWavelet} to Lemma \ref{characterisationlemma} we know that if $\sigma: \bbN \to \bbZ^d$ is consistent with $1/F_\text{lin}=F_\text{norm}^d$, i.e.
consistent with $F_\text{norm}$, then $F_\text{lin}(\sigma(\cdot))=1/F_\text{norm}^d(\sigma(\cdot))$ represents the optimal decay rate. Lemma \ref{normest} tells us that this optimal decay is $1/(N^{1/d})^d=1/N$. Furthermore, Lemma \ref{characterisationlemma} says that an ordering $\rho$ is strongly optimal for $(B_\mathrm{f}^d(\epsilon),B^d_\text{sep})$ if and only if $F_\text{lin}(\lambda_d \circ \rho(\cdot)) \approx F_\text{lin}(\sigma(\cdot))$ which holds if and only if $F_\text{norm}(\lambda_d \circ \rho(\cdot)) \approx F_\text{norm}(\sigma(\cdot))$, namely $\rho$ is linear. \end{proof}
Corollary \ref{linearresults} gives us the same optimal decay as in one dimension, which is in contrast to the multidimensional tensor case, where the best we can do is have $d-1$ extra log factors. We can use this result to cover the two dimensional case in full:
\begin{corollary} \label{twodimresults} In 2D the optimal decay rate of $(B^2_\mathrm{f},B^2_\text{sep})$ is represented by $f(N)= N^{-1}$. This optimal decay rate is obtained by using a linear ordering. In fact an ordering $\rho$ of $B^2_\mathrm{f}$ is strongly optimal in 2D if and only if it is linear. \end{corollary}
\begin{proof} Using Lemma \ref{FTdecayLemma} we observe that the decay condition (\ref{dDFTdecay}) holds automatically if $d=2$. Therefore we may apply Corollary \ref{linearresults} directly. \end{proof}
This result \emph{does not extend to higher dimensions:}
\begin{example} \label{3DHaar} If we do not have condition (\ref{dDFTdecay}) then our argument can break down very badly: for Haar wavelets we have an explicit formula for the Fourier transform of the one-dimensional scaling function,
\[ \mathcal{F} \phi( \omega) = \frac{ \exp(2 \pi i \omega)-1}{2 \pi i \omega }. \]
Therefore we have that (\ref{dDFTdecay}) is not satisfied for $d \ge 3$ and furthermore we have (for $\epsilon<1$ and $J \in \bbN$ fixed)
\begin{equation} \label{FTdecayfail} | \mathcal{F} \phi( \epsilon 2^{-J} k ) | \ge \frac{1}{2\pi \epsilon k} , \end{equation}
for infinitely many $k \in \mathbb{N}$. Now consider the case of $d$-dimensional separable Haar wavelets with a linear ordering $\rho$ of the Fourier basis. By (\ref{FTdecayfail}), there are infinitely many $m \in \mathbb{N}$ with $\lambda_d \circ \rho(m)=(\lambda_d \circ \rho(m)_1,0,\cdots,0)$ such that
\be{ \label{poorlowerbound} \begin{aligned} | \langle \Phi , \rho(m) \rangle |^2 & = \epsilon^d |\mathcal{F} \phi(\epsilon 2^{-J} \lambda_d \circ \rho(m)_1)|^2 \cdot |\mathcal{F} \phi(0)|^{2(d-1)} \\ & \ge \epsilon^d \cdot \frac{1}{(2\pi \epsilon |\lambda_d \circ \rho(m)_1|)^2} \ge \frac{\epsilon^{d-2} E}{4 \pi^2 m^{2/d}}, \end{aligned} }
for some constant $E$ using Lemma \ref{normest}. Therefore an upper bound of the form $\text{Constant} \cdot N^{-1}$ is not possible for a linear scaling scheme if $d \ge 3$. This can be rectified by applying a semi-hyperbolic scaling scheme, as we will see in Section \ref{semihypsection}. \end{example}
\subsection{Examples of Linear Orderings - Linear Scaling Schemes} \label{linearsection}
A wide variety of sampling schemes that are commonly used happen to be linear.
In particular we demonstrate that sampling according to how a shape scales linearly from the origin always corresponds to a linear ordering (see Figure \ref{linearscalingimages}):
\begin{figure}[!t] \begin{center} \begin{subfigure}[t]{0.4\textwidth} \begin{center} \includegraphics[width=\textwidth]{shape} \caption{\footnotesize Scaling shape} \end{center} \end{subfigure} \begin{subfigure}[t]{0.4\textwidth} \begin{center} \includegraphics[width=\textwidth]{shapelevels} \caption{\footnotesize Sampling according to a linear scaling scheme with scaling shape (a)} \end{center} \end{subfigure} \end{center} \caption{A Simple Linear Scaling Scheme} \label{linearscalingimages} \end{figure}
\begin{definition} \label{lineardef} Let $D \subset \mathbb{R}^d$ be bounded with $0$ in its interior and define $S_{D}: \mathbb{Z}^d \to \mathbb{R}$ by \[ S_{D}(x) := \inf \big\{ \kappa >0 : x \in \kappa D \ \big\}. \] An ordering $\sigma: \bbN \to \bbZ^d$ is said to `correspond to a linear scaling scheme with scaling shape $D$' if it is consistent with $S_D$. Furthermore, an ordering $\rho : \mathbb{N} \rightarrow B^d_\mathrm{f}(\epsilon)$ is said to `correspond to a linear scaling scheme with scaling shape $D$' if it is consistent with $S_D \circ \lambda_d$. \end{definition}
\begin{remark} \label{linearnorm} If we put a norm $\| \cdot \|$ on $\mathbb{Z}^d$ and take an ordering consistent with this norm then this ordering corresponds to a linear scaling scheme with scaling shape $\{ x \in \mathbb{R}^d \ : \ \|x\| \le 1 \}.$ \end{remark}
\begin{lemma} \label{normestold} Let $\rho: \bbN \to B^d_\mathrm{f}(\epsilon)$ correspond to a linear scaling scheme with scaling shape $D$. Then $\rho$ is linear. \end{lemma}
\begin{proof} Let $\sigma=\lambda_d \circ \rho$. Because the scaling shape $D$ is bounded and contains $0$ in its interior there exist constants $C_1,C_2>0$ such that $C_1 \mathcal{S} \subset D \subset C_2 \mathcal{S}$ where $\mathcal{S}$ is defined to be the unit hypercube, i.e. $ \mathcal{S} := \{ x \in \mathbb{R}^d : \|x\|_\infty \le 1 \}.$ Therefore if $ \| \sigma(N) \|_\infty = L$, then since $D \subset C_2 \mathcal{S}$ we have that $S_D(\sigma(N))\ge L C_2^{-1}$. Applying this to $C_1 \mathcal{S} \subset D$ we deduce that $\sigma$ must have enumerated beforehand all points $m$ in $ \mathbb{Z}^d$ with $\|m\|_\infty<LC_1 C_2^{-1}$ and there are at least $(2 ( L C_1 C_2^{-1} -1 )+1)^d$ such points. This means that \[ N \ge (2 ( \| \sigma(N) \|_\infty C_1 C_2^{-1} -1 ) +1)^d \quad \Rightarrow \quad \| \sigma(N) \|_\infty \le \frac{N^{1/d}+1}{2 C_1 C_2^{-1}} \le \frac{N^{1/d}}{C_1 C_2^{-1}}, \qquad N \in \bbN, \] which proves the upper bound. The lower bound is tackled similarly to prove (\ref{normestresult}). \end{proof}
\subsection{2D Separable Incoherences}
By Remark \ref{2doptimal} we have shown that linear orderings are strongly optimal for all 2D Fourier - separable wavelet cases, so this is a good point to have a quick look at a few of these in Figure \ref{separableincoherences}.
\begin{figure}[!t] \begin{center} \begin{subfigure}[t]{0.32\textwidth} \begin{center} \includegraphics[width=\textwidth]{separablehaarorig} \caption{\footnotesize Haar Coherences} \end{center} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \begin{center} \includegraphics[width=\textwidth]{separablehaarscaled} \caption{\footnotesize Scaled Haar Coherences} \end{center} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \begin{center} \includegraphics[width=\textwidth]{separablescaling} \caption{\footnotesize Linear Scaling used for Both Bases} \end{center} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \begin{center} \includegraphics[width=\textwidth]{separabledauborig} \caption{\footnotesize Daubechies16 Coherences } \end{center} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \begin{center} \includegraphics[width=\textwidth]{separabledaubscaled} \caption{\footnotesize Scaled \\ Daubechies16 Coherences } \end{center} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \begin{center} \end{center} \end{subfigure} \end{center} \caption{2D Fourier - Separable Wavelet Coherences. We show the subset $\{-250,-249,...,249,250\}^2 \subset \bbZ^2$. Notice again that the scaled coherences are bounded away from zero and bounded above, indicating that we have characterised the incoherence in terms of the linear scaling used, as shown in Proposition \ref{dDSeparableFourierWavelet}. The incoherences shown in the Figure are square rooted to reduce contrast.} \label{separableincoherences} \end{figure}
\subsection{Semi-Hyperbolic Orderings: Proof of Theorem \ref{separablesummary} Part (iii)} \label{semihypsection}
By Example \ref{3DHaar} we know that if (\ref{dDFTdecay}) does not hold then our approach of using a linear ordering can fail. We therefore return once more to (\ref{sepprodexample2}). Let us now try to use an approach that is halfway between our two previous linear/hyperbolic approaches. Let $r \in \{1,...,d-1\}$ be fixed. We shall first impose a decay condition that is stronger than (\ref{FTdecay}) but weaker than (\ref{dDFTdecay}):
\be{ \label{semiFTdecay} |\mathcal{F}\phi (\omega)| \le \frac{K}{| \omega|^{d/2r} }, \qquad \omega \in \mathbb{R} \setminus \{ 0 \} . }
Instead of just taking out the dominant term of the product in (\ref{sepprodexample2}), let us take out the $r$ smallest terms:
\be{ \label{semihypbound} \begin{aligned} | \langle \Psi^{s}_{j,k} , \lambda_d^{-1}(n) \rangle |^2 & \le \epsilon^d 2^{-dj} \cdot \min_{\substack{i_1,...,i_r \in \{1,...,d\} \\ i_1<...<i_r}} \prod_{l=1}^r \frac{K^{2}}{|\epsilon 2^{-j} n_{i_l}|^{d/r}} \\ & = K^{2r} \cdot \Big( \max_{\substack{i_1,...,i_r \in \{1,...,d\} \\ i_1<...<i_r}} \prod_{l=1}^r | n_{i_l}| \Big)^{-d/r}, \qquad n \in \bbZ^d, n_i \neq 0, i=1,...,d. \end{aligned} }
Again we can extend this bound to all $n \in \bbZ^d$:
\be{ \label{semihypbound2} \sup_{g \in B_\text{sep}^d} | \langle g , \lambda_d^{-1}(n) \rangle |^2 \le \max(K^{2r},1) \cdot \Big( \max_{\substack{i_1,...,i_r \in \{1,...,d\} \\ i_1<...<i_r}} \prod_{l=1}^r \max(| n_{i_l}|,1) \Big)^{-d/r}, \qquad n \in \bbZ^d. }
We deduce that the function $F_{\text{hyp},r}: \bbZ^d \to \bbR, F_{\text{hyp},r}(n)=\Big( \max_{\substack{i_1,...,i_r \in \{1,...,d\} \\ i_1<...<i_r}} \prod_{l=1}^r \max(| n_{i_l}|,1) \Big)^{-d/r}$ dominates the optimal decay of $(B_\mathrm{f}^d, B_\text{sep}^d)$.
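As a practical illustration of what an ordering consistent with this weight looks like, the following sketch (illustrative only; the function \texttt{H} below implements the quantity appearing in $F_{\text{hyp},r}$, which is formalised as $H_{d,r}$ in the definition that follows) enumerates a finite box in $\bbZ^d$ in increasing order of the weight and reports how the weight grows with the index, which can be compared with the $N^{r/d}$ growth rate established in Lemma \ref{semihypbehaviour} below.
\begin{verbatim}
import itertools
from math import prod

def H(n, r):
    """Semi-hyperbolic weight: product of the r largest values of max(|n_i|, 1)."""
    vals = sorted((max(abs(v), 1) for v in n), reverse=True)
    return prod(vals[:r])

def semi_hyperbolic_ordering(d, r, box):
    """Enumerate {-box,...,box}^d in increasing order of H(., r).

    This agrees with an ordering of all of Z^d consistent with H(., r)
    for as long as the reported weight stays below `box`.
    """
    points = itertools.product(range(-box, box + 1), repeat=d)
    return sorted(points, key=lambda n: H(n, r))

d, r = 3, 2
order = semi_hyperbolic_ordering(d, r, box=30)
for N in (10, 100, 1000):
    print(N, H(order[N - 1], r), round(N ** (r / d), 1))  # weight vs. the N^{r/d} rate
\end{verbatim}
Taking $r=1$ recovers a linear (max-norm) ordering and $r=d$ recovers the hyperbolic cross, so the sketch covers all three regimes discussed in this section.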
\begin{definition} \label{semihyperbolic} Let us define, for $r,d \in \bbN, r \le d$ the function \[ H_{d,r}(n):= \max_{\substack{i_1,...,i_r \in \{1,...,d\} \\ i_1<...<i_r}} \prod_{j=1}^r \max(|n_{i_j}|,1) , \qquad n \in \bbZ^d. \] Then we say an ordering $\sigma: \bbN \to \bbZ^d$ is `semi-hyperbolic of order $r$ in $d$ dimensions' if it is consistent with $H_{d,r}$. \end{definition}
Figure \ref{Consistent3D} presents some isosurface plots of $H_{3,r}$ for the various values of $r$. Notice that a semi-hyperbolic ordering of order $d$ in $d$ dimensions corresponds to the hyperbolic cross in $\bbZ^d$ (see Example \ref{hypcrossZd}). Furthermore, if $\sigma: \bbN \to \bbZ^d$ is a semi-hyperbolic ordering of order $1$ in $d$ dimensions then, by Remark \ref{linearnorm}, $\sigma$ corresponds to a linear scaling scheme because $H_{d,1}(n)= \|n\|_\infty$ for the componentwise max norm $\| \cdot \|_\infty$ on $\bbR^d$. As in the linear and hyperbolic cases discussed in the previous sections, we want to determine how $H_{d,r}(\sigma(n))$ scales with $n \in \bbN$.
\begin{figure}[!t] \begin{center} \begin{subfigure}[t]{0.32\textwidth} \begin{center} \includegraphics[width=\textwidth]{Linear3DConsistent} \caption{\footnotesize Case $r=1$ (Linear) ; \\ Isosurface value=10. } \end{center} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \begin{center} \includegraphics[width=\textwidth]{SemiHyp3DConsistent} \caption{\footnotesize Case $r=2$ (Semi-Hyperbolic) ; \\ Isosurface value=20. } \end{center} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \begin{center} \includegraphics[width=\textwidth]{Hyp3DConsistent} \caption{\footnotesize Case $r=3$ (Hyperbolic) ; \\ Isosurface value=20. } \end{center} \end{subfigure} \end{center} \caption{Isosurfaces of $H_{3,r}$, $r=1,2,3$ describing the three types of ordering available in 3D} \label{Consistent3D} \end{figure}
\begin{lemma} \label{semihypbehaviour} 1). \ \ Let $r,d\in \bbN, r \le d-1$ be fixed. Let us define \[ S_{d,r}(n):= \# \{ m \in \bbZ^d : H_{d,r}(m) \le n \}, \qquad n \in \bbN.\] Then there is a constant $C>0$ such that \[ n^{d/r} \le S_{d,r}(n) \le C \cdot n^{d/r}, \qquad n \in \mathbb{N}. \]
2). \ \ If $\sigma: \bbN \to \bbZ^d$ is semi-hyperbolic of order $r$ with $r \le d-1$ then there are constants $C_1,C_2>0$ such that \[ C_1 \cdot n^{r/d} \le H_{d,r}(\sigma(n)) \le C_2 \cdot n^{r/d}, \qquad n \in \bbN. \] \end{lemma}
\begin{proof} 1). \ \ For notational simplicity we prove the same bounds but with $S_{d,r}$ replaced by the smaller quantity \[ \tilde{S}_{d,r}(n):= \# \{ m \in \bbN^d : H_{d,r}(m) \le n \}, \qquad n \in \bbN. \] The same bounds for $S_{d,r}$ then follow immediately, albeit with a larger constant $C>0$. The lower bound is straightforward since the set defining $\tilde{S}_{d,r}(n)$ contains the set $\{ m \in \bbN^d : m_i \le n^{1/r}, \ i=1,...,d \}$. We prove the upper bound by induction on $r$. The case $r=1$ is clear because $\tilde{S}_{d,1}(n)$ is simply the number of points inside a $d$-dimensional hypercube with side length $n$. Suppose the result holds for $r=r'-1$. We use the following set inclusion:
\be{ \begin{aligned} \{ m \in \bbN^d : H_{d,r'}(m) \le n \} \subset & \{ m \in \bbN^d : m_i \le n^{1/r'}, \ i=1,...,d \} \\ & \cup \bigcup_{i=1}^d \{m \in \bbN^d : n^{1/r'} \le m_i \le n, \ H_{d-1,r'-1}(\tilde{m}_i) \le n/m_i \}, \end{aligned} }
where $\tilde{m}_i$ here refers to $m$ with the $i$th entry removed.
The cardinality of the first set on the right is just $n^{d/r'}$ and so we are done if we can show that for some constant $C>0$, \[ \# \{m \in \bbN^d : n^{1/r'} \le m_1 \le n, \ H_{d-1,r'-1}((m_2,...,m_d)) \le n/m_1 \} \le C n^{d/r'}, \qquad n \in \bbN. \] We achieve this by applying our inductive hypothesis:
\be{ \begin{aligned} \# \{m & \in \bbN^d : n^{1/r'} \le m_1 \le n, \ H_{d-1,r'-1}((m_2,...,m_d)) \le n/m_1 \} \\ & \le \sum_{ i= \lfloor n^{1/r'} \rfloor} ^n S_{d-1,r'-1}(\lfloor n/i \rfloor) \le C' \cdot \sum_{ i= \lfloor n^{1/r'} \rfloor}^n \big( n/i \big)^{(d-1)/(r'-1)} \\ & \le C'n^{(d-1)/(r'-1)} \cdot \int_{n^{1/r'}-2}^n x^{-(d-1)/(r'-1)} \, dx \\ & \le C'n^{(d-1)/(r'-1)} \cdot (n^{1/r'}-2)^{(1-(d-1)/(r'-1))}, \qquad ( \text{noting } r' \le d-1) \end{aligned} }
where $C'>0$ is some constant. We can replace $(n^{1/r'}-2)$ by $n^{1/r'}$ in the above by changing the constant $C'$ and assuming $n>2^{r'}$. Finally, we notice that the exponents add to the desired expression: \[ \frac{d-1}{r'-1} + \frac{1}{r'}\Big( 1- \frac{d-1}{r'-1} \Big)= \frac{d-1}{r'-1} - \frac{ d-r'}{r'(r'-1)} = \frac{d}{r'}.\] This gives the required upper bound for $n>2^{r'}$. Since the terms involved are all positive, we can just increase the constant $C'$ to include the cases $n \le 2^{r'}$. This shows that the result holds for $r=r'$ and the induction argument is complete.
2.) \ \ By consistency we know that \[ S_{d,r}(H_{d,r}(\sigma(n))-1) \le n \le S_{d,r}(H_{d,r}(\sigma(n))), \qquad n \in \bbN \] and therefore we can directly apply part 1 to deduce \[ ( H_{d,r}(\sigma(n))-1)^{d/r} \le n \le C \cdot ( H_{d,r}(\sigma(n)))^{d/r}, \qquad n \in \bbN, \] from which the result follows. \end{proof}
Armed with this result, we can now completely tackle the separable wavelet case.
\begin{theorem} \label{semihyperbolicthm} Suppose that the scaling function $\phi$ corresponding to the separable wavelet basis $B^d_{\text{sep}}$ satisfies (\ref{semiFTdecay}) for some constant $K>0$ and $r \in \{1,...,d-1\}$. Next let $\sigma: \bbN \to \bbZ^d$ be semi-hyperbolic of order $r$ in $d$ dimensions and $\rho:=\lambda_d^{-1} \circ \sigma$. Finally, we let $U=[(B^d_\mathrm{f}(\epsilon),\rho),(B^d_{\text{sep}}, \tau)]$, where $\tau$ is an ordering of $B^d_{\text{sep}}$. Let us also fix $\epsilon \in I_{J,p}$. Then there are constants $C_1, C_2>0$ such that \[ \frac{C_1}{N} \le \mu(Q_N U) \le \frac{C_2}{N}, \quad N \in \bbN. \] Furthermore it follows that the ordering $\rho$ is optimal for the basis pair $(B^d_\mathrm{f}(\epsilon),B^d_\text{sep})$. \end{theorem}
\begin{proof} Applying part 1.) from Proposition \ref{dDSeparableFourierWavelet} (with $\epsilon$ fixed) to part 2.) of Lemma \ref{characterisationlemma} immediately gives us the lower bound for the semi-hyperbolic ordering since this bound also holds for the optimal decay rate. Furthermore this lower bound holds for any other ordering and therefore if we have the upper bound then the ordering $\rho$ is automatically optimal. We now focus on the upper bound. By (\ref{semihypbound2}) we know that the optimal decay of $(B_\mathrm{f}^d(\epsilon),B^d_\text{sep})$ is dominated by $F_\text{hyp,r}$. Therefore by part 1.) of Lemma \ref{characterisationlemma}, if $\sigma: \bbN \to \bbZ^d$ is consistent with $1/F_\text{hyp,r}$, i.e. $\sigma$ is semi-hyperbolic of order $r$, then we can bound the row incoherence $\mu(\pi_N U)$, up to a constant, by $F_\text{hyp,r}(\sigma(N))= H_{d,r}^{-d/r}(\sigma(N)) \approx (N^{r/d})^{-d/r}=N^{-1}$ by Lemma \ref{semihypbehaviour}.
Since $N^{-1}$ is decreasing this bound extends to $\mu(Q_N U)$. \end{proof}
Finally we can summarise our results on the $(B_\mathrm{f}^d (\epsilon),B^d_{\text{sep}})$ case as follows:
\begin{theorem} \label{SeparableResults} Let $\rho$ be a linear ordering of the $d$-dimensional Fourier basis $B_\mathrm{f}^d(\epsilon)$ with $\epsilon \in I_{J,p}$, $\tau$ a leveled ordering of the $d$-dimensional separable wavelet basis $B^d_{\text{sep}}$ and $U=[(B_\mathrm{f}^d(\epsilon),\rho),(B^d_{\text{sep}},\tau)]$. Furthermore, suppose that the decay condition (\ref{dDFTdecay}) holds for the scaling function corresponding to the wavelet basis. Then, keeping $\epsilon>0$ fixed, we have, for some constants $C_1, C_2>0$, the decay
\be{ \label{dDSeparableFourierPolynomialLinearBounds} \frac{C_1}{N} \le \mu(\pi_N U), \ \mu(U \pi_N) \le \frac{C_2}{N} \qquad \forall N \in \mathbb{N}. }
Let us now instead replace $\rho$ by a semi-hyperbolic ordering of order $r$ in $d$ dimensions with $r \in \{1,...,d-1\}$ and assume the weaker decay condition (\ref{semiFTdecay}). Then, keeping $\epsilon>0$ fixed, we have, for some constants $C_1, C_2>0$, the decay
\be{ \label{dDSeparableFourierPolynomialSemiHypBounds} \frac{C_1}{N} \le \mu(Q_N U), \ \mu(U \pi_N) \le \frac{C_2}{N} \qquad \forall N \in \mathbb{N}, }
and furthermore $\rho$ is optimal for the basis pair $(B_\mathrm{f}^d(\epsilon),B^d_{\text{sep}})$. Since, for any separable Daubechies wavelet basis, (\ref{semiFTdecay}) always holds for $r=d-1$, any semi-hyperbolic ordering of order $d-1$ in $d$ dimensions will produce (\ref{dDSeparableFourierPolynomialSemiHypBounds}). \end{theorem}
\begin{proof} (\ref{dDSeparableFourierPolynomialLinearBounds}) follows from Corollaries \ref{leveledresults} and \ref{linearresults}. (\ref{dDSeparableFourierPolynomialSemiHypBounds}) follows from Corollary \ref{leveledresults} and Theorem \ref{semihyperbolicthm}. To show that (\ref{dDSeparableFourierPolynomialSemiHypBounds}) always holds for a semi-hyperbolic ordering of order $d-1$ in $d$ dimensions, we note that the weakest decay on the scaling function $\phi$ is $| \mathcal{F} \phi (\omega)| \le K \cdot | \omega |^{-1}$ (see Lemma \ref{FTdecayLemma}) and therefore (\ref{semiFTdecay}) is automatically satisfied for $r=d-1$. \end{proof}
\subsection{Optimal Orderings \& Wavelet Smoothness} \label{orderingsandsmoothness}
Theorem \ref{SeparableResults} demonstrates how certain degrees of smoothness, in terms of decay of the Fourier transform, allow us to show that certain orderings are optimal, and this smoothness requirement becomes increasingly demanding as the dimension increases. But if a certain ordering is optimal for the basis pair $(B_\mathrm{f}^d, B^d_\text{sep})$, does this mean that the wavelet must have some degree of smoothness as well? The answer to this question turns out to be yes, and it is the goal of this section to prove this result. We shall rely heavily on the following simple result from \cite[Thm. 9.4]{korner}:
\begin{theorem} \label{kornertheorem} Let\footnote{$\bbR/\bbZ$ denotes the unit circle which we write as $[0,1)$ with the quotient topology induced by $M: \bbR \to [0,1), M(x)=x (\text{mod} \ 1)$.} $f: \bbR/\bbZ \to \bbC$ be continuous and for $k \in \bbZ$ define $\hat{f}(k)= \int_0^1 f(x) \exp(2 \pi \mathrm{i} k x) \, dx$. If $\sum_{k=-\infty}^\infty |k||\hat{f}(k)| < \infty$ then $f \in C^1$. Consequently, using $|\widehat{f'}(k)|=2 \pi |k| \, |\hat{f}(k)|$, if $\sum_{k=-\infty}^\infty |k|^n|\hat{f}(k)| < \infty$ then $f \in C^n$.
\end{theorem}
Now the main result itself:
\begin{theorem} \label{ordering2smooth} Let $\sigma : \bbN \to \bbZ^d$ be semi-hyperbolic of order $r<d$ in $d$ dimensions and let $\rho:= \lambda_d^{-1} \circ \sigma : \bbN \to B_\mathrm{f}^d(\epsilon)$ where $\epsilon \in I_{J,p}$. If $\rho$ is optimal for the basis pair $(B_\mathrm{f}^d(\epsilon),B_\text{sep}^d)$, then $\phi \in C^l$ for any $l \in \bbN \cup \{0\}$ with $l+1 < d/2r$. \end{theorem}
\begin{proof} By Theorem \ref{SeparableResults} we know that the optimal decay rate for the basis pair is $N^{-1}$, therefore if $\rho$ is optimal we must have, for some constant $C_1>0$, the bound \[ \sup_{g \in B^d_\text{sep}} | \langle g, \lambda_d^{-1} \circ \sigma(N) \rangle |^2 \le C_1 \cdot N^{-1}, \qquad N \in \bbN. \] Next since $\sigma$ is semi-hyperbolic we also know that, by Lemma \ref{semihypbehaviour}, there is a constant $C_2>0$ such that \[ H_{d,r}(\sigma(N)) \le C_2 \cdot N^{r/d}, \qquad N \in \bbN. \] Consequently we deduce
\be{ \label{conversion} \begin{aligned} \sup_{g \in B^d_\text{sep}} | \langle g, \lambda_d^{-1} \circ \sigma(N) \rangle |^2 \le C_1 \cdot N^{-1} \le C_1 C_2^{d/r} \cdot H^{-d/r}_{d,r}(\sigma(N)), \qquad N \in \bbN. \\ \Rightarrow \sup_{g \in B^d_\text{sep}} | \langle g, \lambda_d^{-1}(n) \rangle |^2 \le \frac{C_1 C_2^{d/r}}{ \big( \max_{\substack{i_1,...,i_r \in \{1,...,d\} \\ i_1<...<i_r}} \prod_{j=1}^r \max(|n_{i_j}|,1) \big)^{d/r}}, \qquad n \in \bbZ^d. \end{aligned} }
Letting $g=\Psi^s_{J,0}$ where $s=(0,...,0)$ and $n=(k,0,...,0)$ for $k \in \bbZ$ we see that (\ref{conversion}) becomes
\be{ \label{phiprior} \epsilon^d 2^{-dJ} | \mathcal{F}\phi(2^{-J} \epsilon k) |^2 \le \frac{C_1 C_2^{d/r}}{\max(|k|,1)^{d/r}}, \qquad k \in \bbZ. }
Since the scaling function $\phi$ has compact support in $[-p+1,p]$ and $\epsilon \in I_{J,p}$, $\phi_{J,0}$ can be viewed as a function on $\bbR/\bbZ$ and (\ref{phiprior}) describes a bound on the Fourier coefficients of $\phi$. Formally, if we write $\varphi(x):=\phi(2^J \epsilon^{-1}(x-1/2))$, then since $\epsilon \in I_{J,p}$ we have that $\varphi$ is supported in $[0,1]$ and (\ref{phiprior}) becomes, for some constant $D(\epsilon,J,p)>0$: \[ | \mathcal{F}\varphi(k) |^2 = | \widehat{\varphi}(k)|^2 \le \frac{D}{\max(|k|,1)^{d/r}}, \qquad k \in \bbZ. \] If $\phi \in C^{0}$ then the result follows from Theorem \ref{kornertheorem}: indeed, $|\widehat{\varphi}(k)| \le \sqrt{D} \max(|k|,1)^{-d/2r}$, so $\sum_{k \in \bbZ} |k|^l |\widehat{\varphi}(k)| < \infty$ whenever $l+1 < d/2r$, giving $\varphi \in C^l$ and hence $\phi \in C^l$. If $\phi \notin C^0$, i.e. $\phi$ corresponds to a Haar wavelet basis, then (\ref{phiprior}) cannot hold with $d/2r > 1$ as this would contradict (\ref{FTdecayfail}). \end{proof}
\begin{corollary} \label{hyperbolictendency} Let the scaling function $\phi$ corresponding to the Daubechies wavelet basis $B^d_\text{sep}$ be fixed. Then for every order $r \in \bbN$, there exists a dimension $d' \in \bbN, d'>r$ such that for all $d \ge d'$, any semi-hyperbolic ordering $\sigma$ of order $r$ in $d$ dimensions yields an ordering $\rho= \lambda_d^{-1} \circ \sigma$ that is not optimal for the basis pair $(B_\mathrm{f}^d(\epsilon),B^d_\text{sep})$. \end{corollary}
\begin{proof} If this result were not true then we would deduce by Theorem \ref{ordering2smooth} that the scaling function $\phi$ satisfies $\phi \in C^{\infty}$, which is a contradiction because no compactly supported wavelet can be infinitely smooth \cite[Thm. 3.8]{wav}.
\end{proof} \subsection{Hierarchy of Semihyperbolic Orderings} One other notable point from Theorem \ref{SeparableResults} is that we can have multiple values of $r$ such that if $\sigma$ is semi-hyperbolic of order $r$ in $d$ dimensions then $\rho= \lambda_d^{-1} \circ \sigma$ is optimal for the basis pair $(B_\mathrm{f}^d, B^d_\text{sep})$, so which one should we choose? We know that in the case of sufficient smoothness linear orderings are strongly optimal and therefore this suggests that the lower the order $r$ the stronger the optimality result. We now seek to prove this conjecture. \begin{lemma} \label{semihypinequality} Let $r,r',d \in \bbN, r \le r' \le d$. Then for all $n \in \bbZ^d$ we have that $H_{d,r'}^r(n) \le H_{d,r}^{r'}(n)$. \end{lemma} \begin{proof} Let $n \in \bbZ^d$ be fixed. For each $j=1,..d$ let $i_j$ denote the $j$th largest terms of the form $\max(|n_{i_j}|,1)$. Observe that \[ H_{d,r}(n)= \prod_{j=1}^r \max(|n_{i_j}|,1), \quad H_{d,r'}(n)= \prod_{j=1}^{r'} \max(|n_{i_j}|,1), \] \[ \Rightarrow \frac{ H_{d,r}^{r'}(n)}{H_{d,r'}^r(n)} = \frac{\prod_{j=1}^r \max(|n_{i_j}|,1)^{r'-r}}{\prod_{j=r+1}^{r'} \max(|n_{i_j}|,1)^r}. \] Finally we observe that the numerator and denominator have the same number ($r(r'-r)$) of terms in the product and that each term in the numerator is greater than each term in the denominator, proving the inequality. \end{proof} \begin{corollary} \label{hierarchycorollary} Let $r,r',d \in \bbN, r \le r' < d$ and $\sigma, \sigma'$ be semihyperbolic of orders $r,r'$ in $d$ dimensions respectively. If $\rho=\lambda_d^{-1} \circ \sigma$ is optimal for the basis pair $(B_\mathrm{f}^d(\epsilon), B_\text{sep}^d)$ then so is $\rho'=\lambda_d^{-1} \circ \sigma'$. \end{corollary} \begin{proof} Recalling (\ref{conversion}) we know that there is a constant $C>0$ such that \[ \sup_{g \in B^d_\text{sep}} | \langle g, \lambda_d^{-1} \circ \sigma(N) \rangle |^2 \le C \cdot H^{-d/r}_{d,r}(\sigma(N)), \qquad N \in \bbN, \] \[ \Rightarrow \sup_{g \in B^d_\text{sep}} | \langle g, \lambda_d^{-1} \circ \sigma'(N) \rangle |^2 \le C' \cdot H^{-d/r'}_{d,r'}(\sigma'(N)), \qquad N \in \bbN, \] where we have used Lemma \ref{semihypinequality} on the second line. We then apply Lemma \ref{semihypbehaviour} to deduce the result. \end{proof} \begin{figure}[!t] \begin{center} \begin{subfigure}[t]{0.49\textwidth} \begin{center} \includegraphics[width=\textwidth]{Daub43d} \caption{\footnotesize Daubechies4 - Isosurface Value $ = 5 \cdot 10^{-3}$} \end{center} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \begin{center} \includegraphics[width=\textwidth]{Daub83d} \caption{\footnotesize Daubechies8 - Isosurface Value $ = 5 \cdot 10^{-3}$} \end{center} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \begin{center} \includegraphics[width=\textwidth]{Haar3d} \caption{\footnotesize Haar - Isosurface Value $ = 5 \cdot 10^{-3}$} \end{center} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \begin{center} \includegraphics[width=\textwidth]{Daub45d} \caption{\footnotesize 3D slice of 5D Daubechies4 - Isosurface Value $ = 5 \cdot 10^{-4}$ } \end{center} \end{subfigure} \end{center} \caption{3D Fourier - Separable Wavelet Incoherence Isosurface Plots. We draw the isosurface plots over the subset $\{-50,-49,...,49,50\}^3 \subset \bbZ^3$. These pictures should be compared with the ordering plots in Figure \ref{Consistent3D}. 
Notice that for the smoother wavelets in (a) \& (b), the growth matches that of a linear ordering; however, the 3D Haar case lacks this smoothness, resulting in semi-hyperbolic scaling in (c). If we keep the wavelet basis fixed and let the dimension increase, the scaling becomes increasingly hyperbolic, as seen in (d) and proved in Corollary \ref{hyperbolictendency}. } \label{3dseparableincoherences} \end{figure}
\begin{remark} Corollary \ref{hierarchycorollary} tells us that if there are several orders $r$ that give us optimality then the smallest such $r$, say $r^*$, gives the strongest result. \end{remark}
\subsection{3D Separable Incoherences}
We have found optimal orderings for every multidimensional Fourier - separable wavelet case; however, we have not shown (apart from in the linear case with sufficient Fourier decay) that the ordering is strongly optimal, nor have we characterized the decay. Therefore it is of interest to see how the incoherence scales in further detail by directly imaging it in 3D. We do this by drawing level sets in $\bbZ^3$, as seen in Figure \ref{3dseparableincoherences}.
\section{Asymptotic Incoherence and Compressed Sensing in Levels} \label{numericalsection}
We now return to the original compressed sensing problem which was described in the introduction of this paper and aim to study how asymptotic incoherence can influence the ability to subsample effectively. We shall be working exclusively in 2D for this section. Consider the problem of reconstructing a function $f \in L^2([-1,1]^2)$ from its samples $\{ \langle f, g \rangle : g \in B^2_\mathrm{f} (2^{-1}) \} $. The function $f$ is reconstructed as follows: Let $U:=[(B^2_\mathrm{f}(2^{-1}),\rho), (B_2, \tau)]$ for some orderings $\rho, \tau$ and a reconstruction basis $B_2$. The number $2^{-1}$ is present here to ensure the span of $B^2_\mathrm{f}(2^{-1})$ contains $L^2([-1,1]^2)$. Next let $\Omega \subset \bbN$ denote the set of indices of the samples taken from $B^2_\mathrm{f}(2^{-1})$ (indexed by $\rho$), $P_\Omega$ the projection operator onto $\Omega$ and $\hat{f}:=( \langle f , \rho(m) \rangle )_{m \in \bbN}$. We then attempt to approximate $f$ by $\sum_{n=1}^\infty \tilde{x}_n \tau(n)$ where $\tilde{x} \in \ell^1(\bbN)$ solves the optimisation problem
\be{ \label{basicl1full} \min_{x \in \ell^1(\bbN)} \| x \|_1 \quad \text{subject to} \quad P_\Omega Ux= P_\Omega \hat{f} .
}
\begin{figure}[!t] \begin{center} \begin{subfigure}[t]{0.3\textwidth} \begin{center} \includegraphics[width=\textwidth]{resphant} \caption{\footnotesize Rasterized Phantom \\ Resolution = $2^{12} \times 2^{12}$} \end{center} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \begin{center} \includegraphics[width=\textwidth]{resphantbasicrecon} \caption{\footnotesize Reconstruction from pattern A \\ $L^1$ error = 0.0735} \end{center} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \begin{center} \includegraphics[width=\textwidth]{resphantleveledrecon} \caption{\footnotesize Reconstruction from pattern B \\ $L^1$ error = 0.0620} \end{center} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \begin{center} \includegraphics[width=\textwidth]{resphantzoom} \caption{\footnotesize Rasterized Phantom - Closeup} \end{center} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \begin{center} \includegraphics[width=\textwidth]{resphantbasicreconzoom} \caption{\footnotesize Closeup of (b)} \end{center} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \begin{center} \includegraphics[width=\textwidth]{resphantleveledreconzoom} \caption{\footnotesize Closeup of (c)} \end{center} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \begin{center} \end{center} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \begin{center} \includegraphics[width=\textwidth]{resphantbasiclist} \caption{\footnotesize Sampling Pattern A \\ Number of Samples: 40401} \end{center} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \begin{center} \includegraphics[width=\textwidth]{resphantleveledlist} \caption{\footnotesize Sampling Pattern B \\ Number of Samples: 39341} \end{center} \end{subfigure} \end{center} \caption{Simple Resolution Phantom Experiment. Samples are from the subset $\{-200,-199,...,199,200\}^2 \subset \bbZ^2$. Notice that the checkerboard features are captured by the leveled sampling pattern but not by pattern (a), even though it uses fewer samples. Reconstructions are at a resolution of $2^{10} \times 2^{10}$.} \label{resphantimages} \end{figure}
Since the optimisation problem is infinite dimensional we cannot solve it numerically, so instead we proceed as in \cite{BAACHGSCS} and truncate the problem, approximating $f$ by $\sum_{n=1}^R \tilde{x}_n \tau(n)$ (for $R \in \bbN$ large) where $\tilde{x} = (\tilde{x}_n)_{n=1}^R$ now solves the optimisation problem
\be{ \label{basicl1} \min_{x \in \bbC^R} \| x \|_1 \quad \text{subject to} \quad P_\Omega U P_R x= P_\Omega \hat{f} . }
We shall be using the SPGL1 package \cite{SPGL} to solve (\ref{basicl1}) numerically.
\subsection{Demonstrating the Benefits of Multilevel Subsampling}
We shall first demonstrate directly how subsampling in levels is beneficial in situations with asymptotic incoherence ($\mu(Q_N U) \to 0$) but poor global incoherence ($\mu(U)$ is relatively large). The image $f$ that we will attempt to reconstruct is made up of regions defined by Bezier curves with one degree of smoothness, as in \cite{GLPU}. This image is intended to model a resolution phantom\footnote{`resolution' here refers to `resolving' a signal from an MRI device.} which is often used to calibrate MRI devices \cite{resphantom}. A rasterization of this phantom is provided in image (a) of Figure \ref{resphantimages}. We reconstruct with 2D separable Haar wavelets, ordered according to their resolution levels, from a base level of 0 up to a highest resolution level of 8.
The Fourier basis is ordered by the linear consistency function $H_{2,1}$, which gives us a square leveling structure when viewed in $\bbZ^2$. We choose these orderings because we know that they are both strongly optimal for the corresponding bases, and therefore should allow reasonable degrees of subsampling when given an (asymptotically) sparse problem. By looking at Figure \ref{resphantimages}, we observe that subsampling in levels (pattern (b)) allows us to pick up features that would otherwise be impossible to recover from a direct linear reconstruction from the first samples (pattern (a)), and moreover the $L^1$ error is smaller.
\subsection{Tensor vs Separable - Finding a Fair Comparison}
We would like to study how different asymptotic incoherence behaviours can impact how well one can subsample. In 2D it would be unwise to compare two different separable wavelet bases, since we know that they have the same optimal orderings and decay rates in 2D (see Corollary \ref{linearresults}). Therefore we are left with comparing a separable wavelet basis to a tensor basis. The incoherence decay rates for the 2D Haar cases are shown in the table below for linear and hyperbolic orderings of the Fourier basis $B^2_\mathrm{f}$:
\begin{table}[H] \label{incoherencetable2D} \begin{center} \large \begin{tabular}{c|cc} \hline \multicolumn{3}{c}{2D Haar Basis Incoherence Decay Rates} \\ \hline Ordering & Tensor & Separable \\ \hline Linear & $ N^{-1/2} $ & $ N^{-1} $ \\ Hyperbolic & $\log(N+1) \cdot N^{-1}$ & $\log(N+1) \cdot N^{-1} $ \\ \hline \end{tabular} \end{center} \caption{ \footnotesize The decay rates for the hyperbolic case come from Theorem \ref{TensorResultsWavelet} and Proposition \ref{Hyperbolic4Separable}. For the linear case, the separable result comes from Theorem \ref{SeparableResults} and the tensor result can be deduced from Lemma \ref{normest} applied to (\ref{zhypcrosscharacterise}), although we do not provide the details here. } \end{table}
Observe that for linear orderings, there is a large discrepancy between the decay rates; however, they are the same for hyperbolic orderings. Therefore, comparing separable and tensor reconstructions appears to be a good method for testing the behaviour of differing speeds of asymptotic incoherence. However, there is one serious problem, namely the choice of image $f$ that we would like to reconstruct. Recall from (\ref{conditions31_levels}) that the ability to subsample depends on both the coherence structure of the pair of bases and the sparsity structure of the function $f$ we are trying to reconstruct. Ideally, to isolate the effect of asymptotic incoherence we would like to choose an $f$ that has the same sparsity structure in both a tensor and separable wavelet basis. If $f$ were chosen to be the resolution phantom like before then the tensor wavelet approximation would be a poor comparison to that of the separable wavelet reconstruction (due to a poor resolution structure). Therefore we need to choose a function that we expect to reconstruct well in tensor wavelets, for example a tensor product of one dimensional functions.
\begin{figure}[!t] \begin{center} \begin{subfigure}[t]{0.32\textwidth} \begin{center} \includegraphics[width=\textwidth]{spectrumorig} \caption{\footnotesize Rasterized Spectrum} \end{center} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \begin{center} \includegraphics[width=\textwidth]{spectrumfullsep} \caption{\footnotesize Separable Reconstruction \\ $L^1$ error = 0.0157} \end{center} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \begin{center} \includegraphics[width=\textwidth]{spectrumfulltensor} \caption{\footnotesize Tensor Reconstruction \\ $L^1$ error = 0.0159} \end{center} \end{subfigure} \end{center} \caption{Spectrum Model and `Full Sampling' Reconstructions. Reconstructions uses all samples from the subset $\{-200,-199,...,199,200\}^2 \subset \bbZ^2$. Images are at a resolution of $2^{10} \times 2^{10}$. Haar wavelets are used for tensor and separable cases. Observe that both reconstructions match the original very closely and have similar $L^1$ approximation errors.} \label{spectrumsetup} \end{figure} Such an example is provided by NMR spectroscopy \cite[Eqn. (5.24)]{spindynamics}. A 2D spectrum is sometimes modelled as a product of 1D Lorentzian functions: \be{ \label{spectrumdefine} \begin{aligned} f(x) & = \sum_{i=1}^r L_{2,p(i),s(i)}(x), \quad x,p(i),s(i) \in \bbR^2, \\ L_{2,p,s}(x) & = L_{p_1,s_1}(x_1) \cdot L_{p_2,s_2}(x_2), \quad x,p,s \in \bbR^2 \\ L_{p,s} & = \frac{s}{s^2 + (x -p)^2}, \quad x,p,s \in \bbR. \end{aligned} } We consider a specific spectrum $f$ of the above form. By looking at Figure \ref{spectrumsetup} we observe that, without any subsampling from the subset $\{-200,-199,...,199,200\}^2 \subset \bbZ^2$, the tensor and separable Haar wavelet reconstructions have almost identical $L^1$ errors, suggesting that this problem does not bias either reconstruction basis. We order the tensor and separable reconstruction bases using their corresponding level based orderings, which are defined in Lemma \ref{tensorwavelethyp} and Definition \ref{sepleveled} respectively. For separable wavelets we start at a base level of $J=0$ and stop at level 8 (so we truncate at the first $2^{10} \times 2^{10}$ wavelet coefficients) and for tensor wavelets we start at level $J=0$ and stop at level 10 (when the problem was truncated at higher wavelet resolutions the improvement in reconstruction quality was negligible). \begin{figure}[t] \begin{center} \begin{subfigure}[t]{0.3\textwidth} \begin{center} \includegraphics[width=\textwidth]{linearsampling2d} \caption{\footnotesize Linear Sampling Pattern} \end{center} \end{subfigure} \begin{subfigure}[t]{0.3\textwidth} \begin{center} \includegraphics[width=\textwidth]{hypsampling2d} \caption{Hyperbolic Sampling Pattern \\ (Boxed in)} \end{center} \end{subfigure} \end{center} \caption{Sampling Patterns. Samples are from the subset $\{-200,-199,...,199,200\}^2 \subset \bbZ^2$. 
White indicates sample is taken.} \label{spectrumsamples} \end{figure}
\begin{figure}[!h] \begin{center} \begin{subfigure}[t]{0.4\textwidth} \begin{center} \includegraphics[width=\textwidth]{seplinear2d} \caption{\footnotesize Separable Reconstruction \\ $L^1$ error = 0.0367} \end{center} \end{subfigure} \begin{subfigure}[t]{0.4\textwidth} \begin{center} \includegraphics[width=\textwidth]{seplinear2dcloseup} \caption{\footnotesize Separable Closeup} \end{center} \end{subfigure} \begin{subfigure}[t]{0.4\textwidth} \begin{center} \includegraphics[width=\textwidth]{tensorlinear2d} \caption{\footnotesize Tensor Reconstruction \\ $L^1$ error = 0.0592} \end{center} \end{subfigure} \begin{subfigure}[t]{0.4\textwidth} \begin{center} \includegraphics[width=\textwidth]{tensorlinear2dcloseup} \caption{\footnotesize Tensor Closeup} \end{center} \end{subfigure} \end{center} \caption{Reconstructions from Linear Sampling Pattern} \label{linearrecons2d} \end{figure}
We are now going to test how well these two bases perform under subsampling with different orderings of $B^2_\mathrm{f}$. Two subsampling patterns, one based on a linear ordering and another on a hyperbolic ordering, are presented in Figure \ref{spectrumsamples}. Ideally the hyperbolic subsampling pattern would not be restricted to the set $\{-200,-199,...,199,200\}^2$, but this is numerically infeasible. Let us first consider what happens when using pattern (a) (see Figure \ref{linearrecons2d}). Notice that the separable reconstruction performs far better than the tensor reconstruction and is therefore more tolerant of subsampling with a linear ordering. This is unsurprising as the tensor problem suffers from a noticeably larger $N^{-1/2}$ incoherence when using a linear ordering, compared to the $N^{-1}$ decay rate in the separable case. Of course, we should also consider the sparsity of these two problems, which also factors into the ability to subsample; however, $f$ was specifically chosen because it is sparse in the tensor basis and, moreover, we have seen that it provides a reconstruction comparable to the separable case when taking the full set of $\{-200,-199,...,199,200\}^2$ samples. Next we observe what happens when using pattern (b) (Figure \ref{hyprecons2d}). There is now a stark contrast to the linear case, in that both separable and tensor cases provide very similar reconstructions and furthermore the $L^1$ errors are very close. This suggests that both problems have similar susceptibility to subsampling when using hyperbolic sampling, which is reflected by their identical rates of incoherence decay with hyperbolic orderings.
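For reference, we sketch how the finite-dimensional problem (\ref{basicl1}) underlying the above reconstructions can be posed in code. The sketch is illustrative only: it uses a generic convex solver rather than the SPGL1 package employed in our experiments, it is real-valued for simplicity (the actual Fourier samples are complex), and the matrix \texttt{A} is a small random stand-in for the subsampled, truncated matrix $P_\Omega U P_R$ rather than the actual Fourier--wavelet matrix.
\begin{verbatim}
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Stand-in for P_Omega U P_R: m subsampled rows of an R-column measurement matrix.
R, m, sparsity = 256, 80, 10
A = rng.standard_normal((m, R)) / np.sqrt(m)

# Synthetic sparse coefficient vector and its (noiseless) measurements P_Omega f_hat.
x_true = np.zeros(R)
support = rng.choice(R, sparsity, replace=False)
x_true[support] = rng.standard_normal(sparsity)
b = A @ x_true

# Basis pursuit: min ||x||_1 subject to A x = b, cf. the truncated problem.
x = cp.Variable(R)
problem = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == b])
problem.solve()

print("relative recovery error:",
      np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
\end{verbatim}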
\begin{figure}[h] \begin{center} \begin{subfigure}[t]{0.4\textwidth} \begin{center} \includegraphics[width=\textwidth]{sephyp2d} \caption{\footnotesize Separable Reconstruction \\ $L^1$ error = 0.0263} \end{center} \end{subfigure} \begin{subfigure}[t]{0.4\textwidth} \begin{center} \includegraphics[width=\textwidth]{sephyp2dcloseup} \caption{\footnotesize Separable Closeup} \end{center} \end{subfigure} \begin{subfigure}[t]{0.4\textwidth} \begin{center} \includegraphics[width=\textwidth]{tensorhyp2d} \caption{\footnotesize Tensor Reconstruction \\ $L^1$ error = 0.0277} \end{center} \end{subfigure} \begin{subfigure}[t]{0.4\textwidth} \begin{center} \includegraphics[width=\textwidth]{tensorhyp2dcloseup} \caption{\footnotesize Tensor Closeup} \end{center} \end{subfigure} \end{center} \caption{Reconstructions from Hyperbolic Sampling Pattern} \label{hyprecons2d} \end{figure} \section{Appendix} \textit{Proof of Proposition \ref{Hyperbolic4Separable}:} (\ref{hyperbolicbound2}) applied to part (1) of Lemma \ref{characterisationlemma} shows that the decay of $\mu(\pi_N U)$ is bounded above by\footnote{for the definitions of $H_d, h_d$ see (\ref{hyperbolicdecay}) and (\ref{hddef}).} $F_\text{hyp}(\sigma(N))=1/H_d(\sigma(N)) \approx 1/h_d(N)$, which gives us the upper bound for $\mu(Q_N U)$ since $1/h_d(N)$ is decreasing. For the lower bound, we focus on terms of the form $\lambda_d \circ \rho(m)=(t,...,t)$ for some $t \in \bbN$ and we set, for a fixed $q \in \bbN$, \[s=(1,...,1), \quad j:= \lceil \epsilon \log_2 t \rceil + q , \] where we assume for now that $j \ge J$ is satisfied. This gives us \be{ \label{specificlower} \begin{aligned} | \langle \Psi^s_{j,0}, \rho(m) \rangle |^2 & = \epsilon^d 2^{-dj} \prod_{i=1}^d | \mathcal{F} \psi(\epsilon 2^{-j} t) |^2 \\ & \ge \frac{1}{2^{d(1+q)} t^d} \cdot | \mathcal{F} \psi(\epsilon 2^{-(\lceil \epsilon \log_2 t \rceil + q)} t) |^{2d} \\ & \ge \frac{1}{2^{d(1+q)} t^d} \cdot L_q^{2d} \quad ( \text{using (\ref{wavelower})}) . \end{aligned} } Let $m$ now be arbitrary with $\prod_{i=1}^d \max( | \lambda_d \circ \rho(m)_i|,1)=M \ge 1$ and let $t = \lceil M^{1/d} \rceil +1$. Because $\rho$ corresponds to the hyperbolic cross, there exists an $m'>m$ such that $\prod_{i=1}^d \max( | \lambda_d \circ \rho(m')_i|,1)=t^d$ where $\lambda_d \circ \rho(m')=(t,...,t)$. Notice that $t^d \le E(d)M$ for some constant $E(d)$ depending only on the dimension $d$. Furthermore, (\ref{specificlower}) holds for $m=m'$ if we have that $j \ge J$, which is satisfied if $m$ is sufficiently large. Therefore we deduce by (\ref{specificlower}) that \[ \begin{aligned} \mu( Q_m U) \ge | \langle \Psi^s_{j,0}, \rho(m') \rangle |^2 & \ge \frac{1}{2^{d(1+q)} t^d} \cdot L_q^{2d} \\ & \ge \frac{1}{E 2^{d(1+q)+1} M} \cdot L_q^{2d} \\ & = \frac{1}{E 2^{d(1+q)+1} \prod_{i=1}^d \max( | \lambda_d \circ \rho(m)_i|,1) } \cdot L_q^{2d} \\ & \ge \frac{C}{E 2^{d(1+q)} h_d(m)} \cdot L^{2d}_q \quad (\text{using (\ref{hyperboliccrossZdecay}), $C>0$ some constant}). \end{aligned} \] This proves the lower bound. \bibliographystyle{abbrv}
{ "timestamp": "2016-10-25T02:12:04", "yymm": "1610", "arxiv_id": "1610.07497", "language": "en", "url": "https://arxiv.org/abs/1610.07497", "abstract": "Recently it has been established that asymptotic incoherence can be used to facilitate subsampling, in order to optimize reconstruction quality, in a variety of continuous compressed sensing problems, and the coherence structure of certain one-dimensional Fourier sampling problems was determined. This paper extends the analysis of asymptotic incoherence to cover multidimensional reconstruction problems. It is shown that Fourier sampling and separable wavelet sparsity in any dimension can yield the same optimal asymptotic incoherence as in one dimensional case. Moreover in two dimensions the coherence structure is compatible with many standard two dimensional sampling schemes that are currently in use. However, in higher dimensional problems with poor wavelet smoothness we demonstrate that there are considerable restrictions on how one can subsample from the Fourier basis with optimal incoherence. This can be remedied by using a sufficiently smooth generating wavelet. It is also shown that using tensor bases will always provide suboptimal decay marred by problems associated with dimensionality. The impact of asymptotic incoherence on the ability to subsample is demonstrated with some simple two dimensional numerical experiments.", "subjects": "Information Theory (cs.IT)", "title": "Analyzing the structure of multidimensional compressed sensing problems through coherence", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575121992375, "lm_q2_score": 0.7217432122827967, "lm_q1q2_score": 0.7091542151072708 }
https://arxiv.org/abs/1012.5904
Sutured Floer homology distinguishes between Seifert surfaces
We exhibit the first example of a knot in the three-sphere with a pair of minimal genus Seifert surfaces that can be distinguished using the sutured Floer homology of their complementary manifolds together with the Spin^c-grading. This answers a question of Juhász. More precisely, we show that the Euler characteristic of the sutured Floer homology of the complementary manifolds distinguishes between the two surfaces, as does the sutured Floer polytope introduced by Juhász. Actually, we exhibit an infinite family of knots with pairs of Seifert surfaces that can be distinguished by the Euler characteristic.
\section{Introduction} Let $K$ be an oriented knot in the three-sphere $S^3$. Then $K$ is the oriented boundary of at least one connected compact oriented surface in $S^3$ called a Seifert surface for $K$. Two Seifert surfaces $R_1$ and $R_2$ of a knot are considered to be {\it equivalent} if they are ambient isotopic in the knot complement. There are a number of invariants that provide obstructions to two Seifert surfaces being equivalent; possibly the first two that come to mind are the genus of the surface and the fundamental group of the surface complement. In general, any invariant of the surface complement offers an obstruction to the equivalence of $R_1$ and $R_2$. Given a Seifert surface $R$, the complement $S^3(R):=\nolinebreak S^3 \setminus \Int (R \times I)$ together with the curve $\partial R \times \{1/2\}$ on the boundary is a type of 3-manifold called a {\it balanced sutured manifold}. Therefore, it is reasonable to investigate the possibility of using {\it sutured Floer homology}, an invariant of balanced sutured manifolds introduced by Juh\'asz \cite{Ju06}, to distinguish between equivalence classes of Seifert surfaces. Sutured Floer homology associates to a given balanced sutured manifold $(M,\gamma)$ a finitely generated bigraded abelian group denoted by $SFH(M,\gamma)$. The group $SFH(M,\gamma)$ is graded by the relative $\Spin^c$ structures $\mathfrak{s} \in \Spin^c(M,\gamma)$, and has a relative ${\mathbb Z}_2$ grading. The support of sutured Floer homology gives rise to the {\it sutured Floer polytope} $P(M,\gamma)$, defined in \cite{Ju10a}, which is a polytope in $H^2(M, \partial M;{\mathbb R})$. Suppose $R$ is any minimal genus Seifert surface for a knot in $S^3$. Then Juh\'asz showed that $SFH(S^3(R))$ is a knot invariant \cite{Ju08}; that is, the top term of {\it knot Floer homology} \cite{OS04b, Ra03} is isomorphic to the sutured Floer homology of the complement: \[ SFH(S^3(R)) \cong \widehat{HFK}(K,\mbox{genus}(R)). \] As the isomorphism is in terms of ungraded abelian groups, it is interesting to ask whether the extra structure, given by the $\Spin^c$ grading of $SFH(S^3(R))$, enables sutured Floer homology to distinguish between two minimal genus Seifert surfaces. \begin{prob} \cite[Problem 2]{Juprob} \label{prob} Is there a knot $K$ in $S^3$ that has two minimal genus Seifert surfaces $R_1$ and $R_2$ that can be distinguished using $SFH(S^3(R_i))$ together with the $\Spin^c$-grading? Is there an example where the sutured Floer homology polytopes of $S^3(R_1)$ and $S^3(R_2)$ are different? \end{prob} Until now, research has provided evidence to suggest that the answer to both questions is no. For example, the first obvious place to investigate these ideas is small knots. Indeed, in \cite[Ex.\,8.6]{FJR10} the authors compute $SFH(S^3(R))$ for $R$ ranging through the minimal genus Seifert surfaces for knots with fewer than 10 crossings. These small knots have either a unique minimal genus Seifert surface, or all of their minimal genus Seifert surfaces can be identified with Murasugi sums of bands. However, Juh\'asz showed that the sutured Floer homology of the complement of a Murasugi sum is the tensor product of the sutured Floer homology of the complement of each summand \cite[Cor.\,8.8]{Ju08}. It is immediate from \cite[Prop.\,5.4]{Ju10a} that the relative $\Spin^c$ grading of the tensor product is independent of how the surfaces were summed.
Thus, all surfaces arising from Murasugi sums (and even dual Murasugi sums) of the same summands cannot be distinguished even by the $\Spin^c$-graded sutured Floer homology group. The aim of this article is to give an affirmative answer to both questions posed in Problem \ref{prob} by exhibiting examples of the phenomena. Our examples come from a family of knots that were studied by Lyon \cite{Lyon}; see Figure \ref{Lyons figure}. Indeed, we show that even the Euler characteristic $\chi SFH$ of sutured Floer homology distinguishes between two Seifert surfaces for each of these knots. \begin{thm} \label{thm} There are infinitely many knots with the property that each knot has two minimal genus Seifert surfaces $R_1$ and $R_2$ such that \[ \chi SFH(S^3(R_1)) \not \sim \chi SFH(S^3(R_2)). \] Moreover, for at least one of these knots the sutured Floer polytopes $P(S^3(R_1))$ and $P(S^3(R_2))$ are such that there exists no affine isomorphism of $H^2(M,\partial M;{\mathbb R})$ taking one polytope to the other. \end{thm} Here the symbol `$\not \sim$' is used to mean the negation of an appropriate equivalence relation (see end of Section \ref{prelims}). \begin{figure}[h] \centering \includegraphics [scale=0.5]{lyonsfigurefinal} \caption{One of the knots studied by Lyon \cite[Fig.\,1]{Lyon}.} \label{Lyons figure} \end{figure} We prove the first statement of Theorem \ref{thm} by applying the work of Friedl, Juh\'asz and Rasmussen in \cite{FJR10}, where they give a way of finding the Euler characteristic using Fox calculus. Let $(M,\gamma)$ be a balanced sutured manifold. Then after an identification of $\Spin^c(M,\gamma)$ with $H_1(M;{\mathbb Z})$ (see subsection 2.2 for more details), the Euler characteristic $\chi SFH(M,\gamma)$ can be identified with a type of Turaev torsion polynomial denoted by $\tau(M,\gamma)$ \cite[Sec.\,3]{FJR10}. Here the {\it sutured torsion} $\tau(M,\gamma)$ is a well-defined element of the group ring ${\mathbb Z}[H_1(M)]$ up to multiplication by units of the group ring. The sutured torsion has similar properties to those of the classical Alexander polynomial, and so $\tau(M,\gamma)$ can be thought of as the generalisation of the Alexander polynomial to sutured manifolds. For the second statement of Theorem \ref{thm}, we compute the sutured Floer homology and polytopes for one particular knot and two of its Seifert surfaces. Firstly, for each knot that we study, the considered Seifert surfaces $R_1$ and $R_2$ are disjoint. Moreover, the two sutured manifolds, $X$ and $Y$, obtained by cutting the knot exterior along $R_1$ and $R_2$ are handlebodies of genus two. Secondly, we are able to find a particular knot for which there is a disk decomposition of $X$ and $Y$ along a product disk. The latter gives us a way of explicitly computing the sutured Floer homology groups $SFH(X)$ and $SFH(Y)$ (see Propositions \ref{product decomp} and \ref{torus prop}). Then the groups $SFH(S^3(R_1))$ and $SFH(S^3(R_2))$ are the tensor product $SFH(X) \otimes SFH(Y)$ \cite[Prop.\,8.6]{Ju08}, where the $\Spin^c$ grading can be derived from the appropriate Mayer-Vietoris maps on the level of the first homology groups \cite[Prop.\,5.4]{Ju10a}. Observe that this method allows us to compute the top term of knot Floer homology of a rather complicated knot --- something which would be significantly harder to do directly. Lastly, it is important to note that in order to solve Problem \ref{prob}, in Theorem \ref{thm} we use only the $\Spin^c$ grading of the sutured Floer homology.
Therefore, our result is independent of any auxiliary data, in comparison to the work of Hedden, Juh\'asz, and Sarkar \cite{HJS08}, who show the nonequivalence of two Seifert surfaces $R_1$ and $R_2$ of the knot $8_3$ using sutured Floer homology methods together with properties of the Seifert form. They prove that there is no map $\sigma \colon \Spin^c(S^3(R_1)) \to \Spin^c(S^3(R_2))$ that induces an isomorphism of sutured Floer homology groups for every relative $\Spin^c$ structure, and that is compatible with an isomorphism $H_1(S^3(R_1)) \to H_1(S^3(R_2))$ which preserves the Seifert form. Section \ref{prelims} covers some preliminary definitions and explains the method for computing the Euler characteristic via Fox calculus. Section \ref{example} contains the computations and proof of Theorem \ref{thm}. \\ \\ \noindent {\it Acknowledgements.} I would like to thank my Ph.D. adviser Stefan Friedl for many helpful discussions and suggestions. I am very grateful to Andr\'as Juh\'asz for highlighting interesting questions related to sutured Floer homology, and for pointing out errors in earlier drafts of this work. I thank Saul Schleimer for stimulating conversations and his interest in my work. I owe much gratitude to the University of Warwick and the Warwick Mathematics Institute for generously supporting me through a Warwick Postgraduate Research Scholarship. \section{Preliminaries} \label{prelims} To begin with, let us set up some conventions. Given a space $X$, we denote by $H_*(X)$ the homology group with integer coefficients $H_*(X;{\mathbb Z})$. Further, we write $\chi(X)$ to mean the Euler characteristic $\chi(H_*(X))$. Lastly, for $K$ a submanifold of $M$ denote by $N(K)$ a regular neighbourhood of $K$ in $M$. \subsection{Sutured manifolds} The notion of a sutured manifold $(M,\gamma)$ was first defined by Gabai \cite{Gabai}. Here we give a less general definition that is suited to thinking about a particular class of so-called {\it balanced} sutured manifolds defined by Juh\'asz \cite{Ju06}. \begin{deff} A {\it sutured manifold} $(M,\gamma)$ is a compact oriented 3-manifold $M$ with boundary, together with a set $s(\gamma)$ of oriented and pairwise disjoint simple closed curves in $\partial M$ called {\it sutures}, which satisfy two conditions. The first condition is that each component of $\partial M$ must contain at least one suture. Fix a neighbourhood $\gamma$ of the sutures in $\partial M$ that consists of a pairwise disjoint collection of annuli. The second condition is that every component $R$ of the surface $\partial M \setminus \Int(\gamma)$ must be orientable in such a way that the induced orientation on each component of $\partial R$ represents the same homology class as the corresponding suture in $H_1(\gamma)$. \end{deff} Let $R(\gamma)$ be the exterior of the sutures in the boundary of $M$, that is, $R(\gamma):=\partial M \setminus \Int (\gamma)$. Now each component of $R(\gamma)$ has two orientations: one induced by the orientation of $M$, and one compatible with the orientation of the sutures. Denote by $R_+(\gamma)$ the set of components of $R(\gamma)$ on which the two orientations match, and denote by $R_-(\gamma)$ the set of remaining components. \begin{deff} A sutured manifold $(M,\gamma)$ is said to be {\it balanced} if it has no closed components and if there is an equality of Euler characteristics $\chi (R_+(\gamma))=\chi (R_-(\gamma))$. 
\end{deff} \begin{rmk} Our definition of a balanced sutured manifold is equivalent to that of Juh\'asz \cite[Def.\,2.1]{Ju06}. \end{rmk} In particular, given a Seifert surface $R$, the complement $S^3(R)$ is a balanced sutured manifold with a single suture $s(\gamma):=\partial R \times \{\frac{1}{2}\}$ and a single annular neighbourhood $\gamma:=\partial R \times I$. We refer to $(S^3(R),\gamma)$ as the sutured manifold {\it complementary} to $R$. Actually, since $R_+(\gamma)$ consists of one component only, $S^3(R)$ is {\it strongly balanced} \cite[Def.\,3.5]{Ju08}. A balanced sutured manifold $(M,\gamma)$ is strongly balanced if for each component $F$ of $\partial M$, we have the equality $\chi (F \cap R_+(\gamma)) = \chi (F \cap R_-(\gamma))$ \cite[Def.\,3.5]{Ju08}. The fact that $S^3(R)$ is strongly balanced becomes relevant later, as the sutured Floer polytope is only defined for strongly balanced sutured manifolds. Next, we describe an operation on sutured manifolds that leaves the sutured Floer homology unchanged. Suppose $(M,\gamma)$ is a balanced sutured manifold, and $D$ is a properly embedded disc in $M$, such that $\abs{D \cap s(\gamma)}=2$ and $\partial D \cap \gamma$ consists of essential arcs. Choose a regular neighbourhood $N(D):=D \times [0,1]$ such that $\partial D \times [0,1] \subset \partial M$. Denote by $D_+:=D \times \{0\}$ and $D_-:= D \times \{1\}$. Then the {\it product decomposition} of $(M,\gamma)$ along $D$ is an operation on $M$ which results in another balanced sutured manifold $(M',\gamma')$ defined by \begin{gather*} M':=M \setminus \left(D \times (0,1)\right), \\ \gamma':=(\gamma \cap M') \cup \left(N(D_+)\cap R_-(\gamma)\right) \cup \left(N(D_-) \cap R_+(\gamma)\right). \end{gather*} We use product decomposition in the proof of Theorem \ref{thm}, and we denote it by \[ (M,\gamma) \leadsto^D (M',\gamma'). \] \begin{prop} \cite[Lemma\,9.13]{Ju06} \label{product decomp} Suppose $(M,\gamma)$ is a balanced sutured manifold, and there is a product decomposition $(M,\gamma) \leadsto^D (M',\gamma')$. Then $SFH(M,\gamma)=SFH(M',\gamma')$. \end{prop} Product decomposition is a useful operation when computing the sutured Floer homology of a specific sutured manifold. In particular, in the proof of Theorem \ref{thm}, we have handlebodies of genus two with a single suture, and each of the handlebodies can be product decomposed into a solid torus with two sutures on the boundary. The sutured Floer homology of $S^1 \times D^2$ with any collection of sutures is already known; see Proposition \ref{torus prop}. \subsection{Relative $\Spin^c$ structures and the sutured Floer polytope} \label{subsec polytope} Every balanced sutured manifold $(M,\gamma)$ has an associated space of relative $\Spin^c$ structures $\Spin^c(M,\gamma)$; we define relative $\Spin^c$ structures in the following paragraph. For each $\mathfrak{s} \in \Spin^c(M,\gamma)$ there is a well-defined abelian group $SFH(M,\gamma,\mathfrak{s})$ \cite{Ju06}, and the direct sum of these groups forms the {\it sutured Floer homology} of $(M,\gamma)$. That is, \[ SFH(M,\gamma)=\bigoplus_{\mathfrak{s} \in \Spin^c(M,\gamma)} SFH(M,\gamma,\mathfrak{s}). \] Juh\'asz computed the sutured Floer homology of $(M,\gamma)$ when $M$ is a solid torus. We use this in the proof of Theorem \ref{thm} to compute the polytopes. Let $T(p,q;n)$ be the balanced sutured manifold $(M,\gamma)$, where $M$ is a solid torus, and the sutures are $n$ parallel $(p,q)$ torus knots.
Here $p$ denotes the number of times the curve on $\partial M$ goes around in the longitudinal direction. Note that $n$ has to be even. \begin{prop} \cite[Prop.\,9.1]{Ju10a} \label{torus prop} Suppose that $T(p,q;n)$ is as described above, and suppose that $n=2k+2$, for some nonnegative integer $k$. Then there is an identification \[ \Spin^c(T(p,q;n)) \cong {\mathbb Z} \] such that the following holds \[ SFH(T(p,q;n),i) \cong \begin{cases} {\mathbb Z}^{\binom{k}{\lfloor i/p \rfloor}}, & \textrm{if } 0 \leq i < p(k+1); \\ 0 , & \textrm{otherwise.} \end{cases} \] \end{prop} The following definition of relative $\Spin^c$ structures originates from Turaev's work \cite{Tu90}, but in the current phrasing comes from \cite{Ju06}. Fix a Riemannian metric on $(M,\gamma)$. Let $v_0$ denote a nonsingular vector field on $\partial M$ that points into $M$ on $R_-(\gamma)$ and out of $M$ on $R_+(\gamma)$, and that is equal to the gradient of the height function $s(\gamma) \times I \to I$ on $\gamma$. The space of such vector fields is contractible. A relative $\Spin^c$ structure is defined to be a {\it homology class} of vector fields $v$ on $M$ such that $v|_{\partial M}$ is equal to $v_0$. Here two vector fields $v$ and $w$ are said to be {\it homologous} if there exists an open ball $B \subset \Int(M)$ such that $v$ and $w$ are homotopic on $M \setminus B$ relative to the boundary. There is a free and transitive action of $H_1(M)$ on $\Spin^c(M,\gamma)$ given by {\it Reeb turbulization} \cite[p.\,639]{Tu90}. This action makes the set $\Spin^c(M,\gamma)$ into an $H_1(M)$-torsor. From now on, we call a map $\iota \colon \Spin^c(M,\gamma) \to H_1(M)$ an {\it affine isomorphism} if $\iota$ is an $H_1(M)$-equivariant bijection. Note that $\iota$ is completely defined by which element $\mathfrak{s} \in \Spin^c(M,\gamma)$ it sends to $0 \in H_1(M)$ (or any other fixed element of $H_1(M)$). The perpendicular two-plane field $v_0^\perp$ is trivial on $\partial M$ if and only if $(M,\gamma)$ is strongly balanced \cite[Prop.\,3.4]{Ju08}. Suppose that $(M,\gamma)$ is strongly balanced. Let $t$ be a trivialisation of $v_0^\perp$. Then there is a map dependent on the choice of trivialisation, \[ c_1(\cdot, t) \colon \Spin^c(M,\gamma) \to H^2(M,\partial M), \] where $c_1(\mathfrak{s},t)$ is defined to be the relative Euler class of the vector bundle $v^\perp \to M$ with respect to a partial section coming from a trivialisation $t$. So $c_1(\mathfrak{s},t)$ is the first obstruction to extending the trivialisation $t$ of $v_0^\perp$ to a trivialisation of $v^\perp$. Here $v$ is a vector field on $M$ representing the homology class $\mathfrak{s}$. We now have all the ingredients required to define the sutured Floer polytope. Let $S(M,\gamma)$ be the {\it support} of the sutured Floer homology of $(M,\gamma)$. That is, \[ S(M,\gamma):=\{ \mathfrak{s} \in \Spin^c(M,\gamma) \colon SFH(M,\gamma,\mathfrak{s})\neq 0\}. \] Consider the map $i \colon H^2(M,\partial M;{\mathbb Z}) \to H^2(M,\partial M;{\mathbb R})$ induced by the inclusion ${\mathbb Z} \hookrightarrow {\mathbb R}$. For $t$ a trivialisation of $v_0^\perp$, define \[ C(M,\gamma,t):=\{ i \circ c_1(\mathfrak{s},t) : \mathfrak{s} \in S(M,\gamma)\} \subset H^2(M,\partial M;{\mathbb R}). \] Then the {\it sutured Floer polytope} $P(M,\gamma,t)$ with respect to $t$ is defined to be the convex hull of $C(M,\gamma,t)$. 
Finally, we have that $c_1(\mathfrak{s},t_1)-c_1(\mathfrak{s},t_2)$ is an element of $H^2(M,\partial M)$ dependent only on the trivialisations $t_1$ and $t_2$ \cite[Lem.\,3.11]{Ju10a}, and therefore we may write $P(M,\gamma)$ to mean the polytope in $H^2(M,\partial M;{\mathbb R})$ up to translation. \begin{rmk} \label{polytope rmk} It is important to note that $c_1$ ``doubles the distances.'' Namely, the map $PD \circ c_1 \colon \Spin^c(M,\gamma) \to H_1(M)$ is equal to $2 \iota \colon \Spin^c(M,\gamma) \to H_1(M)$, where $\iota$ is an affine isomorphism \cite[5.3.1\,Thm]{Tu90}. Thus, we can compare two polytopes by comparing the ratios of their side lengths, since the ratios remain the same under affine isomorphisms and doubling. \end{rmk} \subsection{Sutured torsion} Each of the groups $SFH(M,\gamma,\mathfrak{s})$ has a relative ${\mathbb Z}_2$ grading, which is made into an absolute ${\mathbb Z}_2$ grading by choosing an orientation $\omega$ of the vector space $H_*(M,R_-(\gamma);{\mathbb R})$. Then, for every relative $\Spin^c$ structure $\mathfrak{s}$, the Euler characteristic \linebreak $\chi SFH(M,\gamma,\mathfrak{s})$ is well-defined with no sign ambiguity. Theorem 1 of \cite{FJR10} tells us that the Euler characteristic with respect to the orientation $\omega$, denoted by $\chi SFH(M,\gamma,\mathfrak{s},\omega)$, is a function $T_{(M,\gamma,\omega)} \colon \Spin^c(M,\gamma) \to {\mathbb Z}$ that can be thought of as the maximal abelian torsion of the pair $(M,R_-(\gamma))$, in the sense of Turaev \cite{Tu01}. Fixing an affine isomorphism $\iota \colon \Spin^c(M,\gamma) \to H_1(M)$ lets us collect all of these functions into a single generating function \[ \tau(M,\gamma):=\sum_{\mathfrak{s} \in \Spin^c(M,\gamma)} T_{(M,\gamma,\omega)}(\mathfrak{s}) \cdot \iota(\mathfrak{s}). \] We refer to $\tau(M,\gamma)$ as the {\it sutured torsion} invariant. In the case when $(M,\gamma)$ is a manifold complementary to a Seifert surface we drop the reference to $\gamma$ and write just $\tau(M)$ to mean $\tau(M,\gamma)$. Note that $\tau(M,\gamma)$ is an element of the group ring ${\mathbb Z}[H_1(M)]$, and that it is well-defined up to multiplication by an element of the form $\pm h$, where $h \in H_1(M)$. We can extend the affine isomorphism $\iota$ linearly to a map on the group rings denoted by the same letter $\iota \colon {\mathbb Z}[\Spin^c(M,\gamma)] \to {\mathbb Z}[H_1(M)]$. Then \[ \tau(M,\gamma)= \iota(\chi SFH(M,\gamma)). \] \begin{rmk} Notice that the abelian group $H_1(M)$ is thought of as a multiplicative group; hence the notion of being well-defined up to multiplication by an element. Specifically, if $f= \pm h\cdot g$, for elements $f,g$ of the group ring ${\mathbb Z}[H_1(M)]$, then we use the notation $f \doteq g$. \end{rmk} Finally, let us describe how to compute the torsion $\tau(M,\gamma)$ of a given irreducible balanced sutured manifold $(M,\gamma)$ with connected subsurfaces $R_\pm(\gamma)$. Fix a basepoint $p \in R_-(\gamma)$. Then Proposition 5.1 of \cite{FJR10} tells us how to compute the torsion from the map $\kappa_* \colon \pi_1(R_-(\gamma),p) \to \pi_1(M,p)$ induced by the natural inclusion $\kappa \colon R_-(\gamma) \hookrightarrow M$. First, take a {\it geometrically balanced} presentation of $\pi_1(M,p)$; that is, a presentation \[ \pi_1(M,p)=\langle a_1, \ldots, a_m| r_1, \ldots, r_n \rangle, \] where the deficiency of the presentation $m-n$ is equal to the genus $g(\partial M)$ of the boundary of $M$. Obtaining a geometrically balanced presentation is not hard. 
Any balanced sutured manifold $(M,\gamma)$ can be reconstructed in a standard way from a {\it balanced sutured diagram} $(\Sigma,\mbox{\boldmath{$\alpha$}},\mbox{\boldmath{$\beta$}})$ \cite[Prop.\,2.14]{Ju06}, where $\Sigma$ is a surface with boundary, and each of $\mbox{\boldmath{$\alpha$}}$ and $\mbox{\boldmath{$\beta$}}$ is a set containing the same number of pairwise disjoint simple closed curves. To recover $(M,\gamma)$, thicken $\Sigma$ to $\Sigma \times [0,1]$, regard $\mbox{\boldmath{$\alpha$}}$ as curves on $\Sigma \times \{0\}$, and $\mbox{\boldmath{$\beta$}}$ as curves on $\Sigma \times \{1\}$. Then attach 2-handles along $\mbox{\boldmath{$\alpha$}}$ and $\mbox{\boldmath{$\beta$}}$ to obtain $M$ with sutures $\partial \Sigma \times \{1/2\}$. Suppose that we picked the orientations so that $R_-(\gamma)$ is the component of the boundary on ``the bottom'' that includes the boundaries of the 2-handles attached to $\alpha$. Note that the 2-handles attached to $\alpha$ are precisely the 1-handles attached to $R_-(\gamma)$. Then the generators of the free group $\pi_1(R_-(\gamma),p)$ and the cores of the 1-handles attached to $R_-(\gamma)$ are a generating set for $\pi_1(M,p)$; the cores of the 2-handles attached to $\mbox{\boldmath{$\beta$}}$ give the relations of $\pi_1(M,p)$ in these generators. Therefore, the deficiency of this presentation is equal to the number of generators of $\pi_1(R_-(\gamma),p)$: say this number is $l$. Finally, as $(M,\gamma)$ is balanced, $l$ is precisely equal to the genus of $\partial M$. Let $\pi_1(R_-(\gamma),p):=\langle \sigma_1, \ldots, \sigma_l \rangle$. Then the images of $\sigma_j$ under the map $\kappa_*$ are words in the generators $a_i$ of $\pi_1(M,p)$. In later sections, we abuse notation and refer to $\kappa_*(\sigma_j)$ as $\sigma_j$. Now we can form the square matrix of Fox derivatives \[ \Theta_M:= \begin{pmatrix} \varphi \Bigl( \frac{\partial \kappa_*(\sigma_j)}{\partial a_i}\Bigr) & \varphi \bigl( \frac{\partial r_k} {\partial a_i} \bigr) \end{pmatrix}, \] where $\varphi \colon {\mathbb Z}[\pi_1(M,p)] \to {\mathbb Z}[H_1(M)]$ is the map induced by the abelianization of the fundamental group. \begin{rmk} We use the convention that the Fox derivative is computed left-to-right. For example, take elements $u,w \in {\mathbb Z}[\pi_1(M,p)]$ and apply the Fox derivative $\frac{\partial }{\partial a_i} \colon {\mathbb Z} [\pi_1(M,p)] \to {\mathbb Z}[\pi_1(M,p)]$ to $u w$. Then \[ \frac{\partial(u w) }{\partial a_i}=\frac{\partial u} {\partial a_i} \mbox{aug}(w) + u \frac{\partial w}{\partial a_i}, \] where $\mbox{aug} \colon {\mathbb Z}[\pi_1(M,p)] \to {\mathbb Z}$ is the augmentation map. \end{rmk} \begin{prop} \label{prop} \cite[Prop.\,5.1]{FJR10} Let $(M,\gamma)$ be a balanced sutured manifold such that $M$ is irreducible and the subsurfaces $R_\pm(\gamma)$ are connected. Then \[ \tau(M,\gamma) \doteq \det \Theta_{M}. \] \end{prop} In particular, Proposition \ref{prop} can be applied in the case of a sutured manifold complementary to a minimal genus Seifert surface of a knot in $S^3$. Lastly, let us say what it means for two sutured torsion polynomials $\tau_1:=\tau(M_1,\gamma_1) \in {\mathbb Z}[H_1(M_1)]$ and $\tau_2:=\tau(M_2,\gamma_2) \in {\mathbb Z}[H_1(M_2)]$ to be equivalent. Note that the only relevant choices that we have made are those of the affine isomorphisms $\iota_i \colon \Spin^c(M_i,\gamma_i) \to H_1(M_i)$, for $i=1,2$.
Therefore, the two sutured torsion polynomials are {\it equivalent} $\tau_1 \sim \tau_2$ if there is an affine isomorphism $\psi \colon H_1(M_1) \to H_1(M_2)$, which extends linearly to a map on the group rings, such that $\psi(\tau_1)\doteq \tau_2$. Also, we say that $\chi SFH(M_1,\gamma_1)$ is {\it equivalent} to $\chi SFH(M_2,\gamma_2)$ if $\tau_1 \sim \tau_2$. \section{The example} \label{example} Lyon's paper \cite{Lyon} is part of a series of papers in the 70's that aimed to produce examples of knots with nonisotopic Seifert surfaces. The first few papers by Alford, Schaufele, and Daigle \cite{Alford, AS,Daigle} all give various infinite families of such examples. Some of these families have readily computable sutured torsion invariants, and it turns out that the sutured torsion does not distinguish between Seifert surfaces in these cases. However, as we will see in this section, the examples in Lyon's paper can be distinguished by their sutured torsion. \subsection{The knots} The following construction is taken from \cite[pp.\,1--2]{Lyon}. Let $k$ be the $(3,4)$ torus knot on the torus $T$. Let $A$ be a tubular neighbourhood of $k$ on $T$, depicted on Figure \ref{Lyons figure}. Denote by $A'$ the closure of the complement $T \setminus A$. The boundary of $A$ has two components; connect these components via the boundary of the twisted strip $B$ as shown in Figure \ref{Lyons figure}. Define the knot $K$ to be the boundary of $A \cup B$. Note that we can introduce full twists in the strip $B$ to produce an infinite family of knots $K_n$, labelled by the integers, where the strip $B$ of the knot $K_n$ has $2n+1$ half twists. Then Figure \ref{Lyons figure} depicts $K:=K_0$ with one positive half-twist. The Alexander polynomial of $K_n$ is easily computed to be \[ \Delta_{K_n}(t)=(6+12n)t -(11+24n)+ (6+12n)t^{-1}. \] Therefore, each knot $K_n$ is nontrivial. For computational convenience we work with $n \geq -1$, but of course similar computations can be performed for $n<-1$. The knot $K_{-1}$ is the one for which we are able to show the polytopes statement from Theorem \ref{thm}. \subsection{The Seifert surfaces} Fix a basepoint $p \in K_n$, as in Figure \ref{Lyons figure}. Observe that $K_n$ bounds two Seifert surfaces $S_n:=A \cup B$ and $S_n':=A' \cup B$; Figure \ref{Seifert surfaces} depicts $S_0$ and $S'_0$. Let $(Y_n,\gamma_n)$ and $(Y_n',\gamma'_n)$ be the sutured manifolds complementary to $S_n$ and $S_n'$, respectively. Note that in both cases $p$ is contained in $K_n$, or more precisely, $p$ is contained in the sutures $s(\gamma_n)$ and $s(\gamma'_n)$. From now on we fix an integer $n \geq -1$. For the remainder of this subsection we drop `$n$' from the subscript in order to avoid cluttered notation. \begin{figure}[h] \centering \includegraphics [scale=0.5]{Sfinal} \hspace{0.5cm} \includegraphics [scale=0.5]{Sprimefinal} \caption{The two Seifert surfaces $S_0$ (left) and $S'_0$ (right) for $K_0$.} \label{Seifert surfaces} \end{figure} The torus $T$ gives a genus one Heegaard splitting of $S^3$ into solid tori $U$ and $V$, with $B \subset V$. This splitting is convenient for computing the fundamental groups $\pi_1(Y,p)$ and $\pi_1(Y',p)$. From now on, let $V \setminus B$ and $U \setminus A$ stand for the manifolds obtained by removing the appropriate, small (collar) neighbourhoods of $B$ and $A$, respectively. Observe that $V\setminus B$ is a genus two handlebody; let $a$ and $b$ be a generating set of $\pi_1(V\setminus B,p)$ as shown in Figure \ref{Lyoncomplement} (left). 
Let $x$ be the generator of $\pi_1(U,p)$, as shown in the same figure. Figure \ref{Lyoncomplement} (right) shows the discs $D_a$ and $D_b$ that are dual to $a$ and $b$, respectively. In the remainder of the paper, we compute the homotopy class of a curve in $V \setminus B$ by counting the signed intersections of that curve with the dual discs. \begin{figure}[h] \centering \includegraphics [scale=0.45]{Lyoncomplement} \hspace{0.5cm} \includegraphics [scale=0.45]{dualdiscs} \caption{Left: The curves $a,b$, and $x$ in the manifolds $Y$ and $Y'$. Right: The dual discs $D_a$ and $D_b$.} \label{Lyoncomplement} \end{figure} In order to compute Fox derivatives, we need to know the fundamental groups of $Y$ and $Y'$. Note that the following lemma shows that these groups are independent of $n$. \begin{lemma} The fundamental groups of the two surface complements have the following presentations: \begin{align*} \pi_1(Y,p) \hspace{0.17cm} &=\langle a,b,x| x^3=a^2b^2 \rangle, \\ \pi_1(Y',p)&= \langle x, b \rangle. \end{align*} \end{lemma} \begin{proof} View $Y$ as the union of $V\setminus B$ and $U\setminus A$, and then apply Van Kampen's theorem. In applying Van Kampen's theorem the only interesting point is what relations come from the intersection $(V\setminus B)\cap(U\setminus A) \cong A'$. Figure \ref{complement} (left) tells us that the sole relation is $x^3=a^2b^2$, which can be seen by following around the spine of the annulus $A'$ and counting its signed intersections with the dual discs $D_a$ and $D_b$. So indeed $\pi_1(Y,p)=\langle a,b,x| x^3=a^2b^2 \rangle$. Similarly, when computing $\pi_1(Y',p)$, we are interested in what relations come from the intersection $(V\setminus B) \cap (U\setminus A') \cong A$. Figure \ref{complement} (right) tells us that there is again a single relation: $x^3=bab^2$. Since $a=b^{-1}x^3b^{-2}$, it follows that $\pi_1(Y',p)\cong {\mathbb Z} \langle x \rangle * {\mathbb Z}\langle b\rangle$ . \end{proof} \begin{figure}[h] \centering \includegraphics [scale=0.45]{Aprimefinal} \hspace{0.5cm} \includegraphics [scale=0.45]{Afinal} \caption{Left: spine of $A'$ that gives the relation $x^3=a^2b^2$ in $\pi_1(Y,p)$. Right: spine of $A$ that gives the relation $x^3=bab^2$ in $\pi_1(Y',p)$.} \label{complement} \end{figure} \begin{rmk} \label{ab} In order to apply Proposition \ref{prop}, we must know explicitly how to abelianize the fundamental groups. For $\pi_1(Y',p)$, this is clear. For $\pi_1(Y,p)$, it is convenient to introduce $u:=x^{-1}ab \in \pi_1(Y,p)$. Then, we have $x=u^2$ and $b=u^3a^{-1}$ in homology, so $ H_1(Y;{\mathbb Z})\cong {\mathbb Z}\langle a \rangle \oplus {\mathbb Z}\langle u \rangle.$ \end{rmk} \begin{rmk} \label{rmk disjoint} Actually, it can be seen from Figure \ref{Lyons figure} that the surfaces $S$ and $S'$ can be made disjoint in the complement of the knot. Take two copies of the strip, call them $B$ and $B'$, such that $S=B \cup A$ and $S'=B' \cup A'$. Then $S \cup S'$ form the boundary of a genus-two handlebody, and $S \cap S'=K$. See Figure \ref{disjoint} for an illustration in the case when $n=0$. In particular, let $W$ and $X$ be the two handlebodies of the genus two splitting of $S^3$ given by $S \cup S'$, where $W$ is the handlebody on Figure \ref{disjoint} containing the point at infinity. In other words, $W$ can be thought of as $V\setminus B$. For a particular $n$, note that $W$ and $X$ are sutured manifolds with $K_n$ as their single suture. 
The fact that $S$ and $S'$ are disjoint could be used as a shortcut to compute the sutured torsion. To do so, first compute $\tau(W)$ and $\tau(X)$. Then use \cite[Prop.\,5.4]{Ju10a} to ``glue'' the two torsion polynomials by Mayer-Vietoris induced maps on the level of homology and so obtain $\tau(Y)$ and $\tau(Y')$. However, we choose not to make use of this shortcut in order to illustrate how Proposition \ref{prop} can be used in a general situation where the two Seifert surfaces are not necessarily disjoint. Therefore, we compute $\tau(Y)$ and $\tau(Y')$ directly from Proposition \ref{prop}, and just point out how $\tau(W)$ and $\tau(X)$ appear in this computation. See the beginning of subsection \ref{conclusion} for more comments. \end{rmk} \begin{figure}[h] \centering \includegraphics [scale=0.5]{disjointfinal} \caption{The surfaces $S_0$ and $S'_0$ bounding a handlebody of genus two.} \label{disjoint} \end{figure} In order to specify the $R_\pm$ regions on $(Y,\gamma)$ and $(Y',\gamma')$, we fix an orientation of the knot and an orientation of $S^3$. Suppose that these orientations are chosen so that the union of $R_-(\gamma)$ and $R_+(\gamma')$ forms the visible side of the genus two surface which is depicted in Figure \ref{disjoint} for the case $n=0$. Recall that the sutured torsion of a manifold $(M,\gamma)$ is defined using the pair of spaces $(M,R_-(\gamma))$. Let $\tau^+(M,\gamma)$ denote the sutured torsion computed using the same algorithm only with the pair of spaces $(M,R_+(\gamma))$. Fix an affine isomorphism $\iota \colon \Spin^c(M,\gamma) \to H_1(M)$. Then, Proposition 2.14 of \cite{FJR10} gives a useful duality result, which says that, as elements of the group ring ${\mathbb Z}[H_1(M)]$, the two torsion polynomials $\tau(M)$ and $\tau^+(M)$ are equivalent up to a reflection in the origin. That is, $\tau(M) \doteq \sigma \circ \tau^+(M)$, where $\sigma$ is the linear extension of the inversion map $H_1(M) \to H_1(M)$ given by $h \mapsto h^{-1}$. \begin{rmk} \label{R+} In subsection \ref{computing prime}, we compute $\tau^+(Y')$ even though we write $\tau(Y')$. Once computed, the polynomial $\tau^+(Y')$ is easily seen to be centrally symmetric, so $\tau^+(Y') \doteq \tau(Y')$ and we are justified in writing $\tau(Y')$ instead. \end{rmk} \subsection{Computing $\tau(Y_n)$} Take $\alpha$ and $\beta$ to be the generators of $\pi_1(S_n,p)$ as depicted in Figure \ref{generators of S}. Push these curves into the complement. In particular, push them into $V\setminus B$; this operation amounts to considering the inclusion map $\kappa_* \colon \pi_1(R_-(\gamma_n),p) \to \pi_1(Y_n,p)$ that occurs in the definition of the matrix $\Theta_{Y_n}$. \begin{figure}[h] \centering \includegraphics [scale=0.45]{alphainYprime.eps} \hspace{0.5cm} \includegraphics [scale=0.45]{betainYprime.eps} \caption{The generators $\alpha$ and $\beta$ of $\pi_1(S_2,p)$.} \label{generators of S} \end{figure} Next, read off the relations $\alpha=a(b^{-1}a)^{n}b$ and $\beta=ba(ba^{-1})^{n}ba^{-1}$. So we have \begin{gather*} \alpha=(a b^{-1})^{n+1} b^2, \label{al'} \\ \beta=ba(ba^{-1})^{n+1}. \label{be'} \end{gather*} It turns out that $\alpha$ and $\beta$ are curves entirely given in the two generators $a,b$. Therefore, their Fox derivatives with respect to $x$ are zero. Denote by $r:=x^3b^{-2}a^{-2}$ the group relation of $\pi_1(Y_n,p)$. 
So by Proposition \ref{prop}, \[ \tau(Y_n)\doteq \det \Theta_{Y_n}= \varphi \Bigl( \frac{\partial r}{\partial x}\Bigr) \cdot \det \begin{pmatrix} \varphi \bigl( \frac{\partial \alpha}{\partial a} \bigr)& \varphi \bigl( \frac{\partial \beta}{\partial a}\bigr)\\ \varphi \bigl( \frac{\partial \alpha}{\partial b} \bigr) &\varphi \bigl( \frac{\partial \beta}{\partial b}\bigr) \end{pmatrix}. \] We have $\frac{\partial r}{\partial x}=1+x+x^2$ and \begin{align*} \frac{\partial \alpha}{\partial a} &= \frac{(ab^{-1})^{n+1}-1}{ab^{-1}-1}, & \frac{\partial \beta}{\partial a} &=b-baba^{-1}\frac{(ba^{-1})^{n+1}-1}{ba^{-1}-1},\\ \frac{\partial \alpha}{\partial b} &=-ab^{-1}\frac{(ab^{-1})^{n+1}-1}{ab^{-1}-1}+(ab^{-1})^{n+1}(1+b), & \frac{\partial \beta}{\partial b}&= 1+ba\frac{(ba^{-1})^{n+1}-1}{ba^{-1}-1}. \end{align*} Now compute the polynomial $ q_n(a,b):=\det \begin{pmatrix} \frac{\partial \alpha}{\partial a}& \frac{\partial \beta}{\partial a}\\ \frac{\partial \alpha}{\partial b} & \frac{\partial \beta}{\partial b} \end{pmatrix} $ as a polynomial in ${\mathbb Z}[H]$, where $H:={\mathbb Z}\langle a \rangle \oplus {\mathbb Z}\langle b \rangle$. Then \begin{align*} q_n(a,b) &=-\frac{b}{a-b} \left( 1 + a+a b + ab^2 - \left( \frac{a}{b}\right)^{n+1}- b\left( \frac{a}{b}\right)^{n+1} - b^2 \left( \frac{a}{b}\right)^{n+1} - a b^2 \left( \frac{a}{b}\right)^{n+1}\right) \\ &\doteq \frac{b}{a-b} \left(a^{n+1} (1+b+b^2 +ab^2) - b^{n+1}(1+a+ab+ab^2)\right). \end{align*} This polynomial appears again when we compute $\tau(Y'_n)$; see the beginning of subsection \ref{conclusion} for an explanation. Note that \begin{equation} \label{rec} q_{n+1}(a,b)\doteq a \cdot q_{n}(a,b)+ b^{n+2} (1+a+ab+ab^2). \end{equation} Recall from Remark \ref{ab} how to abelianize $\pi_1(Y_n,p)$. To obtain the sutured torsion we need to calculate \begin{equation} \label{torprime} \tau(Y_n) \doteq \varphi \bigl(q_n(a,b) \cdot (1+x+x^2)\bigr), \end{equation} which yields a polynomial in ${\mathbb Z}[a^{\pm 1},u^{\pm 1}]$. For a general $n \geq 0$, we have \begin{equation} \label{tor} \tau(Y_n)\doteq \frac{(1+u^2+u^4)}{a^2-u^3} \left[ (a^2+u^3 a+u^6 a +u^6)a^{2n+2} - u^{3n+3}(a^3+a^2+u^3 a^2+u^6 a) \right]. \end{equation} As $q_0(a,b)=1+ab^2$ has all positive coefficients, it follows from \eqref{rec} that all the coefficients of $q_n(a,b)$ are of the same sign. The recursive equation \eqref{rec} together with \eqref{torprime} implies that the coefficients of $\tau(Y_n)$ add up to $6+12n$, which is exactly the top term of $\Delta_{K_n}(t)$, as it should be by Lemma\,6.4 of \cite{FJR10}. \subsection{Computing $\tau(Y'_n)$} \label{computing prime} We follow a similar procedure to compute the sutured torsion of $Y'_n$. Take $\alpha$ and $\beta$ to be the generators of $\pi_1(S'_n,p)$ as depicted in Figure \ref{generators of S'}. As before, push the curves into $V \setminus B$; this operation amounts to considering the inclusion map $\kappa_* \colon \pi_1(R_+(\gamma_n'),p) \to \pi_1(Y_n',p)$. Therefore, what we refer to as $\tau(Y_n')$ below is actually $\tau^+(Y_n')$; see Remark \ref{R+}. \begin{figure}[h] \centering \includegraphics [scale=0.45]{alphainY.eps} \hspace{0.5cm} \includegraphics [scale=0.45]{betainY} \caption{The generators $\alpha$ and $\beta$ of $\pi_1(S'_2,p)$.} \label{generators of S'} \end{figure} Read off the relations $\alpha=ab(a^{-1}b)^{n}a^{-1}$ and \linebreak $\beta=ab^{-1}(ab^{-1})^{n}ab^2$. So we have \begin{gather*} \alpha=a(ba^{-1})^{n+1} \label{al}, \\ \beta=(ab^{-1})^{n+1}ab^2 \label{be}. 
\end{gather*} Denote by $r:=x^3b^{-2}a^{-1}b^{-1}$ the group relation. Even though $\pi_1(Y'_n,p)$ is a free group, we choose to compute $\tau(Y'_n)$ in the presentation with three generators and one relation, in order to exhibit similarities with $\tau(Y_n)$. As before, the Fox derivatives of $\alpha$ and $\beta$ with respect to $x$ are both zero, so again the only relevant Fox derivative of $r$ is $\frac{\partial r}{\partial x}=1+x+x^2$. The other Fox derivatives are: \begin{align*} \frac{\partial \alpha}{\partial a} &= 1-aba^{-1} \frac{(ba^{-1})^{n+1}-1}{ba^{-1}-1}, & \frac{\partial \beta}{\partial a} &=\frac{(ab^{-1})^{n+1}-1}{ab^{-1}-1} + (ab^{-1})^{n+1}, \\ \frac{\partial \alpha}{\partial b} &=a\frac{(ba^{-1})^{n+1}-1}{ba^{-1}-1}, & \frac{\partial \beta}{\partial b}&=-ab^{-1}\frac{(ab^{-1})^{n+1}-1}{ab^{-1}-1}+(ab^{-1})^{n+1} a(1+b). \end{align*} Computing the polynomial $ q_n'(a,b) := \det \begin{pmatrix} \frac{\partial \alpha}{\partial a}& \frac{\partial \beta}{\partial a}\\ \frac{\partial \alpha}{\partial b} & \frac{\partial \beta}{\partial b} \end{pmatrix} \in {\mathbb Z}[H] $, we find that $q'_n(a,b)\doteq q_n(a,b)$. Therefore, the difference between the two sutured torsion invariants comes from the abelianization maps. Recall that $a= x^3b^{-3} \in H_1(Y_n';{\mathbb Z})$ and make this substitution for $a$ in the expression \[ \tau(Y'_n)\doteq \varphi \bigl( q'_n(a,b) \cdot (1+x+x^2) \bigr), \] to find $\tau(Y'_n)$ as an element of ${\mathbb Z}[b^{\pm 1}, x^{\pm 1}]$. For a general $n \geq 0$, we have \begin{equation} \label{tor'} \tau(Y'_n) \doteq \frac{(1+x+x^2)}{x^3-b^4} \left[ x^{3n+3} (b^5+ b^4+b^3+ x^3 b^2)-b^{4n+4}(b^3+x^3 b^2+x^3 b+x^3) \right]. \end{equation} The same argument as before shows that the coefficients of $\tau(Y'_n)$ add up to $6+12n$, as expected. \subsection{Conclusion} \label{conclusion} The polynomials $q(a,b):\doteq q_n(a,b) \doteq q_n'(a,b)$ and $(1+x+x^2)$ appear in the computations of $\tau(Y_n)$ and $\tau(Y_n')$. Indeed, in both cases the sutured torsion is computed by abelianizing an expression of the form $q(a,b) \cdot (1+x+x^2)$. With regard to Remark \ref{rmk disjoint} this phenomenon is not surprising. In particular, from the work we have already done, it is not hard to see that \begin{align*} \tau(W) &\doteq q(a,b) \hspace{0.68cm} \in {\mathbb Z}[H_1(W)]\cong{\mathbb Z}[a^{\pm1},b^{\pm1}], \\ \tau(X) &\doteq 1+x+x^2 \in {\mathbb Z}[H_1(X)]. \end{align*} For us, these observations are useful inasmuch as they verify our computations. In general, if the Seifert surfaces are not disjoint, then such a verification is not available to us. \begin{rmk} Note that we have just shown that two vertices of the {\it Kakimizu complex} \cite{Ka92} of $K_n$ have associated to them different sutured torsions, and hence different sutured Floer homology groups. \end{rmk} We claim that, for every $n \geq 0$, the sutured torsion invariants $\tau(Y_n)$ and $\tau(Y_n')$ given in \eqref{tor} and \eqref{tor'} are not equivalent. For $n=0$, we have \begin{align*} \tau(Y_0) &\doteq(a+u^6)(1+u^2+u^4) \in {\mathbb Z}[a^{\pm 1},u^{\pm 1}], \\ \tau(Y_0') &\doteq(b+x^3)(1+x+x^2) \hspace {0.23cm} \in {\mathbb Z}[b^{\pm 1},x^{\pm 1}]. \end{align*} Inspection reveals that there is no affine isomorphism $H_1(Y_0)\to H_1(Y_0')$ taking one sutured torsion polynomial onto the other. See Figure \ref{support 1} for the supports.
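As an optional sanity check on these computations, the abelianized Fox derivatives displayed above can be evaluated symbolically (after applying $\varphi$ the variables commute), and the resulting determinant compared with the closed form for $q_n(a,b)$ up to a unit $\pm a^i b^j$. The following Python/sympy sketch does this for small values of $n$; it is only a verification aid and plays no role in the proofs.
\begin{verbatim}
import sympy as sp

a, b = sp.symbols('a b', positive=True)

def q_via_fox(n):
    # Abelianized Fox derivatives of alpha = (a b^{-1})^{n+1} b^2 and
    # beta = b a (b a^{-1})^{n+1}, with the geometric sums written out
    # explicitly (the variables commute after abelianization).
    geo_ab = sum((a/b)**k for k in range(n + 1))
    geo_ba = sum((b/a)**k for k in range(n + 1))
    d_alpha_a = geo_ab
    d_beta_a  = b - (b*a*b/a) * geo_ba
    d_alpha_b = -a/b * geo_ab + (a/b)**(n + 1) * (1 + b)
    d_beta_b  = 1 + b*a * geo_ba
    return sp.simplify(d_alpha_a*d_beta_b - d_beta_a*d_alpha_b)

def q_closed_form(n):
    P = 1 + b + b**2 + a*b**2
    Q = 1 + a + a*b + a*b**2
    return sp.simplify(b/(a - b) * (a**(n + 1)*P - b**(n + 1)*Q))

for n in range(0, 4):
    ratio = sp.simplify(q_via_fox(n) / q_closed_form(n))
    # the ratio should be a monomial in a and b, e.g. 1/b for n = 0
    print(n, sp.factor(q_via_fox(n)), ratio)
\end{verbatim}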
\begin{figure}[h] \centering \includegraphics [scale=0.45]{Sprimetorsionsupport1.eps} \hspace{2cm} \includegraphics [scale=0.45]{Storsionsupport1.eps} \caption{Left: the support of $\tau(Y_0)$. Right: the support of $\tau(Y_0')$.} \label{support 1} \end{figure} \begin{rmk} The three different shades of grey in the support of the polynomials indicate the ``shift'' of $q_n(a,b)$ by $ 1+u^2+u^4$ and of $q_n'(a,b)$ by $1+x+x^2$. \end{rmk} For $n=1$, the torsion polynomials are \begin{align*} \tau(Y_1)&\doteq(a^3 + a u^3 + a^2 u^3 + a u^6 + a^2 u^6 + u^9)(1+u^2+u^4) \in {\mathbb Z}[a^{\pm 1},u^{\pm 1}], \\ \tau(Y_1')&\doteq(b^5 + b x^3 + b^2 x^3 + b^3 x^3 + b^4 x^3 + x^6)(1+x+x^2) \hspace{0.2cm} \in {\mathbb Z}[b^{\pm 1},x^{\pm 1}]. \end{align*} Figure \ref{support 2} indicates that the support of $\tau(Y'_1)$ contains a $3 \times 4$ parallelogram, which cannot be found in the support of $\tau(Y_1)$. Therefore, here too there is no affine isomorphism taking one to the other. \begin{figure}[h] \centering \includegraphics [scale=0.45]{Sprimetorsionsupport2.eps} \hspace{2cm} \includegraphics [scale=0.45]{Storsionsupport2.eps} \caption{Left: the support of $\tau(Y_1)$. Right: the support of $\tau(Y_1')$.} \label{support 2} \end{figure} Lastly, for a general $n > 0$, the supports of the sutured torsion invariants follow the pattern from $n=1$, with another parallelogram containing twelve points being added for each increase of $n$ by one; see Figure \ref{support n}. The same argument as for $n=1$ shows that there is no affine isomorphism taking one torsion polynomial onto another, and thus $\tau(Y_n) \not \sim \tau(Y_n')$. \begin{rmk} For $n>0$, observe that the convex hulls of the supports in both cases are hexagons, only with sides of different lengths. For $\tau(Y_n)$ the sides of the convex hull are of slope $-2/3,-1/6,0$ and length $n,1,4$, respectively. On the other hand, for $\tau(Y'_n)$ the sides of the convex hull are of slope $-4/3,-1/3,0$ and length $n,1,2$, respectively. So, alternatively, we can argue that no affine isomorphism takes one convex hull onto the other. For $n=-1$, see the latter part of the proof of Theorem \ref{thm}. For $n<-1$, the sutured torsion invariants can be computed similarly, and an analogous argument can be made to show that they are nonequivalent. \end{rmk} \begin{figure}[h] \centering \includegraphics [scale=0.45]{Sprimetorsionsupportn} \hspace{1.5cm} \includegraphics [scale=0.45]{Storsionsupportn} \caption{Left: the support of $\tau(Y_n)$. Right: the support of $\tau(Y_n')$.} \label{support n} \end{figure} \begin{proof} [Proof of Theorem \ref{thm}] Let $K:=K_0$, and set $R_1:=S_0$ and $R_2:=S_0'$. Then $\tau(S^3(R_1)) \not \sim \tau(S^3(R_2))$. Therefore, $SFH(S^3(R_1)) \not \cong SFH(S^3(R_2))$ as $\Spin^c$-graded groups. For any $n>0$, the knots $K_n$ together with pairs of minimal genus Seifert surfaces $(S_n,S_n')$ have the same property. For notational convenience, in the remainder of the proof we suppress any references to the sutures of manifolds, as they are clearly understood. To prove the statement about polytopes, consider the knot $K_{-1}$. \begin{figure}[h] \centering \includegraphics [scale=0.5]{ddiscszero} \caption{The surfaces $S_{-1}$ and $S'_{-1}$, together with a decomposing disc $D$ for $X$.} \label{disjoint -1} \end{figure} See Figure \ref{disjoint -1} for an analogue of Figure \ref{disjoint} in the case of $K_{-1}$.
Since our torsion computations hold for $n=-1$, we easily compute that $q_{-1}(a,b)\doteq 1+b$, and that the two sutured torsion polynomials are given by \begin{align*} \tau(Y_{-1}) & \doteq (a+u^3)(1+u^2+u^4), \\ \tau(Y'_{-1}) &\doteq (1+b)(1+x+x^2). \end{align*} Next, observe that the disc $D_a$ from Figure \ref{Lyoncomplement} (right) gives a product decomposition of $W_{-1}$. Similarly, the disc $D$ in Figure \ref{disjoint -1} gives a product decomposition of $X_{-1}$. In particular, the two handlebodies are product decomposed into solid tori with two sutures each; in the notation of Proposition \ref{torus prop}, we have \begin{gather*} W_{-1} \leadsto^{D_a} T(2,1;2), \\ X_{-1} \leadsto^{D} T(3,4;2). \end{gather*} Proposition \ref{product decomp} and Proposition \ref{torus prop} imply that \begin{align*} SFH(W_{-1}) &=SFH(T(2,1;2)) ={\mathbb Z}^2, \\ SFH(X_{-1})&=SFH(T(3,4;2)) ={\mathbb Z}^3. \end{align*} Juh\'asz's {\it decomposition formula} \cite[Prop.\,8.6]{Ju08} implies that $SFH(Y_{-1})$ and \linebreak $SFH(Y'_{-1})$ are isomorphic to ${\mathbb Z}^6$. Hence, the support of the sutured torsion $\tau(Y_{-1})$ is the image of the support $S(Y_{-1})$ under some affine isomorphism $\iota \colon \Spin^c(Y_{-1}) \to H_1(Y_{-1};{\mathbb Z})$. Similarly for $\tau(Y_{-1}')$. By Remark \ref{polytope rmk}, it follows that we can now easily compare the polytopes: Figure \ref{support -1} shows that $P(Y_{-1})$ is a parallelogram with ratio of side lengths 1:4, whereas $P(Y'_{-1})$ is a parallelogram with ratio of side lengths 1:2. Therefore, the polytopes are different. \begin{figure}[h] \centering \includegraphics [scale=0.45]{Sprimetorsionsupport-1.eps} \hspace{2cm} \includegraphics [scale=0.45]{Storsionsupport-1.eps} \caption{Left: the support of $\tau(Y_{-1})$. Right: the support of $\tau(Y_{-1}')$.} \label{support -1} \end{figure} \end{proof} \begin{rmk} We see from the sutured torsion polynomials $\tau(Y_{-1})$ and $\tau(Y_{-1}')$ that \linebreak $SFH(Y_{-1},\gamma_{-1})$ and $SFH(Y_{-1}',\gamma_{-1}')$ are supported in a single ${\mathbb Z}/2$ homological grading, and we know that both groups are torsion-free. Therefore, the proof of Theorem \ref{thm} shows that the sutured torsion and the sutured Floer polytope can distinguish between Seifert surfaces whose complementary manifolds are {\it sutured L-spaces} \cite[Def.\,1.1]{FJR10}. \end{rmk}
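The polytope comparison at the end of the proof of Theorem \ref{thm} can also be checked directly from the two torsion polynomials: expanding them and reading off the exponent vectors reproduces the supports shown in Figure \ref{support -1}, and hence the side-length ratios 1:4 and 1:2. The following Python/sympy sketch is purely illustrative; the argument itself is the one given above.
\begin{verbatim}
import sympy as sp

a, u, b, x = sp.symbols('a u b x')

# The two torsion polynomials from the proof above.
tau_Y  = sp.expand((a + u**3) * (1 + u**2 + u**4))   # tau(Y_{-1})
tau_Yp = sp.expand((1 + b) * (1 + x + x**2))         # tau(Y'_{-1})

def support(poly, v1, v2):
    # exponent vectors of the monomials of poly in the variables (v1, v2)
    return sorted(sp.Poly(poly, v1, v2).monoms())

print(support(tau_Y, a, u))    # [(0,3), (0,5), (0,7), (1,0), (1,2), (1,4)]
print(support(tau_Yp, b, x))   # [(0,0), (0,1), (0,2), (1,0), (1,1), (1,2)]
# The first support has convex hull a parallelogram with vertices
# (1,0), (1,4), (0,7), (0,3): one pair of sides spans 4 primitive lattice
# steps, the other 1 step.  The second is a 1-by-2 rectangle.  Since c_1
# doubles both polytopes, the ratios 1:4 and 1:2 quoted in the proof are
# unchanged.
\end{verbatim}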
{ "timestamp": "2011-04-07T02:01:22", "yymm": "1012", "arxiv_id": "1012.5904", "language": "en", "url": "https://arxiv.org/abs/1012.5904", "abstract": "We exhibit the first example of a knot in the three-sphere with a pair of minimal genus Seifert surfaces that can be distinguished using the sutured Floer homology of their complementary manifolds together with the Spin^c-grading. This answers a question of Juhász. More precisely, we show that the Euler characteristic of the sutured Floer homology of the complementary manifolds distinguishes between the two surfaces, as does the sutured Floer polytope introduced by Juhász. Actually, we exhibit an infinite family of knots with pairs of Seifert surfaces that can be distinguished by the Euler characteristic.", "subjects": "Geometric Topology (math.GT)", "title": "Sutured Floer homology distinguishes between Seifert surfaces", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575162853136, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.7091542121755665 }
https://arxiv.org/abs/2207.11042
Strong c-concavity and stability in optimal transport
The stability of solutions to optimal transport problems under variation of the measures is fundamental from a mathematical viewpoint: it is closely related to the convergence of numerical approaches to solve optimal transport problems and justifies many of the applications of optimal transport. In this article, we introduce the notion of strong c-concavity, and we show that it plays an important role for proving stability results in optimal transport for general cost functions c. We then introduce a differential criterion for proving that a function is strongly c-concave, under an hypothesis on the cost introduced originally by Ma-Trudinger-Wang for establishing regularity of optimal transport maps. Finally, we provide two examples where this stability result can be applied, for cost functions taking value +$\infty$ on the sphere: the reflector problem and the Gaussian curvature measure prescription problem.
\section{Introduction} The theory of optimal transport has had an important impact in applied mathematics, with applications in inverse problems, in variational modeling of evolution PDEs \cite{villani2003topics,santambrogio2015optimal}, and in machine learning \cite{peyre2019computational} to name but a few. Numerical applications of this theory have been made possible thanks to the tremendous progress of optimal transport solvers in the last decade \cite{peyre2019computational, merigot2021optimal, benamou2021optimal}. The stability of solutions to optimal transport problems under variation of the data is fundamental from a mathematical viewpoint, making optimal transport a ``well-posed'' problem in the terminology of Hadamard. The question of \emph{quantitative stability} is also of prime importance. The first and most obvious reason is that it is strongly related to the convergence of many numerical approaches to solve optimal transport problems --- both in statistical and in numerical analysis contexts --- and explicitly or implicitly it justifies most of the applications of optimal transport. Quantitative stability is at the heart of several other applications, including the understanding of geometric embeddings of spaces of probability measures to Hilbert spaces used in statistics~\cite{delalande2021quantitative}, the convergence analysis of numerical methods for evolution equations using optimal transport as a building block~\cite{bourne2022semi}, the estimation of transport maps in high dimension~\cite{hutter2021minimax} or the construction of precise asymptotics for random matching problems~\cite{ambrosio2019optimal}. The stability of optimal transport plans can be established in a very general setting \cite{villani2008optimal}, under variations of the source and target measures, and even under variations of the cost. However, the question of \emph{quantitative} stability has only been addressed rather recently, and most of the existing results deal with the cost function $c(x,y) = \nr{x-y}^2$ \cite{gigli2011holder,berman2021convergence, delalande2021quantitative,li2021quantitative}, or with the squared geodesic distance on a Riemannian manifold \cite{ambrosio2019optimal}. The aim of this article is to establish stability results for more general cost functions, namely those that satisfy the \emph{strong Twist} and \emph{Ma-Trudinger-Wang} conditions on manifolds. We also identify \emph{strong $c$-concavity} of the Kantorovitch potential as a central notion to get stability results. \subsection*{Optimal transport} Let $M,N$ be two Polish spaces, let $\mu \in \mathcal{P}(M)$, $\nu \in \mathcal{P}(N)$ be two probability measures on $M$ and $N$ and let $c:M\times N\to \mathbb{R} \cup \{+\infty\}$ be a lower semi-continuous cost function which is bounded below. A \emph{transport map} between $\mu$ and $\nu$ is a map $T:M\to N$ such that the image measure $T_\#\mu$ equals $\nu$. Monge's optimal transport problem between $\mu$ and $\nu$ for the cost $c$ amounts to finding a map $T:M \to N$ that minimizes \begin{equation} \label{eq:MP} \tag{MP} \inf_{T_\# \mu = \nu} \int_M c(x, T(x))d\mu(x). \end{equation} Such a map, if it exists, is called an \emph{optimal transport map} between $\mu$ and $\nu$. Existence and uniqueness of such an optimal transport map is obtained for instance when the transport cost is quadratic, i.e. 
$c(x,y) =\nr{x-y}^2$, when $M,N$ are compact subsets of $\mathbb{R}^d$, and when $\mu$ is absolutely continuous with respect to the Lebesgue measure \cite{brenier1991polar}. Existence and uniqueness also hold for more general cost functions satisfying a so-called ``twist'' hypothesis \cite{gangbo1995optimal}. Kantorovich's relaxation consists in minimizing the same quantity, but among \emph{transport plans} $\Gamma(\mu,\nu)$: \begin{equation} \label{eq:KP} \tag{KP} \min_{\gamma \in \Gamma(\mu,\nu)} \int_{M\times N} c(x,y) \mathrm{d}\gamma(x,y). \end{equation} We recall that a transport plan between $\mu$ and $\nu$ is a probability measure $\gamma\in\mathcal{P}(M\times N)$ with marginals $\mu$ and $\nu$. Under mild assumptions (e.g. $c$ is lower-semicontinuous and bounded below), a minimizer to \eqref{eq:KP} always exists -- but uniqueness may fail. A minimizer to \eqref{eq:KP} is called an optimal transport plan. \subsection*{Existing stability results} The problem of stability of optimal transport maps can be expressed as a continuity property of the map $(\mu,\nu) \mapsto T_{\mu\to\nu}$, where $T_{\mu\to\nu}$ is the optimal transport map between a source probability measure $\mu$ and a target measure $\nu$. In order to have a common space in which to consider the optimal transport map $T_{\mu\to\nu}$, we will mainly consider the problem of the stability of the map $T_{\nu} := T_{\mu\to\nu}$ for a fixed $\mu$. As first noted by Li and Nochetto \cite{li2021quantitative}, the arguments implying quantitative stability of $\nu \mapsto T_{\mu\to\nu}$ sometimes also imply general stability results, where both the source and target measures can change. To the best of our knowledge, the first quantitative stability result in optimal transport is of ``local'' nature, in the sense that it only holds near a configuration $(\mu,\nu)$, and is established under strong assumptions on the data. It is due to Ambrosio and reported in an article of Gigli \cite{gigli2011holder}. It can be phrased as follows. \begin{theorem}[Ambrosio-Gigli] Assume that $M$ and $N$ are compact subsets of $\mathbb{R}^d$, that $\mu \in\mathcal{P}(M)$ is absolutely continuous, and that for some $\nu_0\in \mathcal{P}(N)$ the optimal transport map $T_{\mu \to \nu_0}$ for the quadratic cost $c(x,y) = \nr{x-y}^2$ is Lipschitz. Then \begin{equation} \label{eq:AmbrosioGigli} \forall \nu_1\in\mathcal{P}(N), ~~\nr{T_{\mu\to \nu_0} - T_{\mu\to\nu_1}}^2_{L^2(\mu)} \leq \diam(M) \mathrm{Lip}(T_{\mu\to \nu_0}) W_1(\nu_0, \nu_1). \end{equation} \end{theorem} In the above statement, $\mathrm{Lip}(T)$ is the Lipschitz constant of the map $T$ and $\operatorname{W}_1(\nu_0,\nu_1)$ is the Wasserstein distance between $\nu_0$ and $\nu_1$ with respect to the Euclidean distance on $N$. By Brenier theorem \cite{brenier1991polar}, we know that $T_\nu = \nabla \phi_\nu$, where $\phi_\nu$ is convex. A convex analysis result shows that the Lipschitz regularity of $T_\nu$ is equivalent to the strong convexity of the convex conjugate $\psi_\nu = \phi_\nu^*$. Using these remarks, the proof of the stability estimate \eqref{eq:AmbrosioGigli} can then be obtained in a few lines, see e.g. \cite[Theorem 2.2]{delalande2021quantitative}. Li and Nochetto~\cite{li2021quantitative} prove under the same hypothesis that if $\gamma \in\mathcal{P}(M\times N)$ is the transport plan between $\mu$ and $\nu$ induced by the optimal map $T_{\mu\to\nu}$, and $\tilde{\gamma}$ is \emph{any} optimal transport plan between $\tilde{\mu}$ and $\tilde{\nu}$, i.e. 
any solution to \eqref{eq:KP} then \[ \operatorname{W}_2(\gamma, \tilde{\gamma})^2 \leq C (\operatorname{W}_2(\mu, \tilde{\mu}) + \operatorname{W}_2(\nu, \tilde{\nu})) ,\] where $C$ is a constant that depends on $\mathrm{Lip}(T_{\mu\to\nu})$, the diameters of $M$ and $N$. The Wasserstein distance $\operatorname{W}_2$ in the left-hand side is with respect to a product metric on $M\times N$. We mention that the ``Euclidean'' stability result \eqref{eq:AmbrosioGigli} can be extended to optimal transport problems on a compact Riemannian manifold with the squared geodesic distance~\cite{ambrosio2019optimal}. We also mention the more ``global'' stability results of \cite{berman2021convergence, delalande2021quantitative}, which do not make regularity assumptions on $T_{\mu\to\nu}$, but come with worse continuity estimates. For instance, the main theorem of \cite{delalande2021quantitative} shows that if $\mu \in\mathcal{P}(\mathbb{R}^d)$ is a probability density on a compact convex subset of $\mathbb{R}^d$, which is bounded from above and below by a positive constant, then for any compact subset $Y\subseteq\mathbb{R}^d$, the map $\nu \mapsto T_{\mu\to\nu}$ is $\frac{1}{6}$-Hölder from $(\mathcal{P}(Y),\operatorname{W}_1)$ to $\mathrm{L}^2(\mu,\mathbb{R}^d)$, to be compared to the $\frac12$ exponent in \eqref{eq:AmbrosioGigli}. \subsection*{Strong $c$-concavity of the potential} A key ingredient in the stability results for the quadratic cost~\cite{gigli2011holder, ambrosio2019optimal} is the strong convexity of the Kantorovich potentials $\psi$ associated to the optimal transport maps. In order to get stability results for general cost functions $c$, we introduce below the notion of \emph{strong $c$-concavity}. We denote by $\d_N : N \times N \to \mathbb{R}_+$ the distance on $N$. The $p$-Wasserstein distance on $\mathcal{P}(N)$ between two probability measures is defined with respect to the distance by \[ W_p^p(\nu_0,\nu_1) = \inf_{\gamma \in \Gamma(\nu_0, \nu_1) } \int_{N \times N} d_N(y,z)^p d\gamma(y,z),\] \begin{definition}[Transport map induced by a potential] Let $T : M \to N$ be a measurable map, and $\psi: N \to \mathbb{R}$. We say that $T$ is induced by $\psi$, or that $\psi$ is a potential associated to $T$ if \[ \forall x \in M, \quad T(x) \in \argmin_{y \in N} c(x,y) - \psi(y) \] \end{definition} Thanks to Kantorovich duality~\cite{villani2003topics}, we know that if a transport map $T$ from $\mu$ to $\nu$ is induced by a potential $\psi$ then T is a solution to the Monge problem~\eqref{eq:MP}. Such a potential $\psi$ can be constructed by solving the dual problem \begin{equation}\tag{DP} \label{eq:KD} \sup_{\psi : N \to \mathbb{R}} \int_M \psi^c \mathrm{d} \mu + \int_N \psi \mathrm{d} \nu \end{equation} where $\psi^c : M \to \mathbb{R}$ is the c-transform of $\psi$, defined by \[ \psi^c (x) = \inf_{y \in N} c(x,y) - \psi(y) \] so that $\psi^c(x) + \psi(y) \leq c(x,y)$. The dual problem~\eqref{eq:KD} has a maximizer, for instance, if the cost $c$ is continuous on the compact $M \times N$, but existence also holds with weaker hypothesis on $c$, see~\cite{villani2008optimal} for instance. When such a maximizer exists, and still by Kantorovich theory, we can assume that a map $T$ solution of~\eqref{eq:MP} is induced by a $c$-concave potential $\psi$. We recall the notion of $c$-concavity, and we refer to~\cite{villani2008optimal}. 
\begin{definition}[$c$-concavity and $c$-conjugate] We say that $\psi : N \to \mathbb{R} \cup \{ - \infty \}$ is c-concave if for any $y \in N$ there exists $x \in M$ such that \[\forall z \in N, \quad c(x,z) - \psi(z) \geq c(x,y) - \psi(y) \] An equivalent definition is that there exists a function $\phi : M \to \mathbb{R} \cup \{ \pm \infty \}$ such that for any $y \in N$ \[ \psi(y) = \inf_{x \in M} c(x,y) - \phi(x). \] We denote the right-hand side of the above equation by $\phi^c(x)$, and we call it the $c$-conjugate of $\phi$. One can define similarly the notion of $c$-concave function on $M$. \end{definition} The $c$-superdifferential of $\psi$ at a point $y \in N$ is defined by \begin{equation} \label{c-superdiff} \partial^c \psi(y) = \{x \in M \mid \forall z \in N, \psi(z) - c(x,z) \leq \psi(y) - c(x,y) \} \end{equation} Note that $\psi$ is $c$-concave iff for any $y \in N$ its c-superdifferential $ \partial^c \psi(y)$ is non-empty. We can now introduce the notion of strong c-concavity. \begin{definition}[strong $c$-concavity on $D$] \label{def:stcconc} We say that a c-concave function $\psi$ is strongly $c$-concave on a set $D \subseteq M \times N$ and with modulus $\omega$ if for all $x,y,z$ such that $(x,y) \in D, (x,z) \in D$ and $x \in \partial^c \psi(y)$: \begin{equation} \label{strong_concavity} \psi(z) - c(x,z) \leq \psi(y) - c(x, y) - \omega(\d_N(y,z)) \end{equation} \end{definition} In the above definition, the modulus $\omega : \mathbb{R}_+ \to \mathbb{R}_+$ is an increasing function that satisfies $\omega(0)=0$. One can check that when $c(x,y) = - \sca{x}{y}$ and $\omega(r) = C r^2$ the notion of strong concavity and strong c-concavity are equivalent. Moreover if a function $\psi : N \to \mathbb{R}$ is strongly c-concave, then for $y \neq z$ in $N$, $\partial^c \psi(y) \cap \partial^c \psi(z) = \emptyset$, or equivalently for $x \in M$ there exists a unique minimizer of $y \mapsto c(x,y) - \psi(y)$. This implies that the transport map associated to $\psi$ is uniquely defined by minimizing $c(x,\cdot) -\psi$: \[ \forall x \in M \quad T(x) = \argmin_{y \in N} c(x,y) - \psi(y) \] \subsection*{Contribution} This paper is concerned with stability problems in optimal transport. We introduce the notion of \emph{strong c-concavity}, which is central to get stability results. \begin{itemize} \item We provide two stability results in Section~\ref{sec:stability} that depend on an assumption of strong $c$-concavity. First, we extend the $1/2$-H\"older stability result of Ambrosio stated in~\cite{gigli2011holder} to general cost function $c$ (Theorem~\ref{th:stability-cconc}). Our result is local around transport maps associated to \emph{strongly $c$-concave} potential. Second, we generalize a result of Li and Nochetto~\cite{li2021quantitative} that estimates the distance of a transport plan to an optimal transport map (the source and target measures being fixed) in terms of the suboptimality gap (Proposition~\ref{prop:stabplan}). We then use this result to obtain quantitative stability of the transport plan with respect to both measures (Proposition~\ref{prop:stabbothmeasureW1}), following the strategy of Li-Nochetto ~\cite{li2021quantitative} for the quadratic cost. \item We provide in Section~\ref{sec:criterioncconc} the central result of this paper (Theorem~\ref{th:criterionstrongcconc}), which is a differential criterion for a potential function $\psi$ to be strongly $c$-concave. This result generalizes a sufficient condition for c-convexity proposed by Villani~\cite[Th. 
12.46]{villani2008optimal}. It requires that $M, N$ are two smooth $d$-dimensional complete Riemannian manifolds. Similarly to Villani, we require a local condition on the derivatives of the potential $\psi$ and a weak Ma-Trudinger-Wang condition~\cite{Ma2005regularity} . In Section~\ref{sec:otstability}, we combine Theorem~\ref{th:criterionstrongcconc} to the stability results of Section~\ref{sec:stability} to get local stability results for optimal transport maps. \item The last two sections are dedicated to the applications of our stability results to two optimal transport problems on the sphere, with cost functions taking the value $+\infty$. In Section~\ref{sec:reflector} we consider the reflector antenna problem, which is a non-imaging optics problem that can be written as optimal transport~\cite{wang2004design}. Section~\ref{sec:gaussmeasure} is dedicated to the prescription of the Gaussian curvature measure of a convex body, originally introduced by Alexandrov~\cite{alexandrov1950convex} and rephrased as an optimal transport problem by Oliker~\cite{oliker2007embedding}. \end{itemize} \section{Stability under strong c-concavity}\label{sec:stability} In this section we assume that $M$ and $N$ are Polish spaces. We provide stability results in the neighborhood of transport maps that are associated to strongly c-concave Kantorovitch potential. The stability result of Section~\ref{sec:stabtarget} is with respect to variations of the target measure, whereas the result in Section~\ref{sec:stabboth} is with respect to variations of both the source and the target measures. This last result is a consequence of an error bound for a fixed optimal transport problem given in Section~\ref{subsec:errorbound}. As a side note, we also remark in the last section that strong $c$-concavity implies H\"{o}lder regularity of transport maps. \subsection{Stability with respect to the target measure}\label{sec:stabtarget} The following theorem extends to general cost functions a theorem of Ambrosio~ \cite{gigli2011holder}, using a reformulation proposed in \cite{delalande2021quantitative}. The hypothesis that the transport map $T$ is Lipschitz (in the formulation of \cite{delalande2021quantitative}) is replaced by the assumption that the transport map is induced by a strongly c-concave potential $\psi$, i.e. $$ \forall x \in M\quad T(x) \in \argmin_{y\in N} c(x,y) - \psi(y). $$ \begin{theorem} \label{th:stability-cconc} Let $D \subseteq M \times N$ be a compact set and $c : M \times N \to \mathbb{R} \cup \{ + \infty\}$ be a cost function of class $\mathcal{C}^1$ on $D$. Let $\mu \in \mathcal{P}(M)$ and $\nu_0,\nu_1 \in \mathcal{P}(N)$. We assume that there exists optimal transport maps $T_i$ from $\mu$ to $\nu_i$ with associated potential $\psi_i : N \to \mathbb{R}$ ($i=0,1$) such that: \begin{itemize} \item $\psi_0$ is Lipschitz on $N$ and c-concave on $D$. \item $\psi_1$ is Lipschitz on $N$ and strongly c-concave with modulus $\omega$ on $D$. \item The maps $T_i$ satisfies for any $x \in M$, $(x,T_i(x)) \in D$. \end{itemize} Then \begin{equation} \label{eq:stabtarget} \int_M \omega(\d_N(T_0 (x), T_1(x))) d\mu(x) \leq (\mathrm{Lip}(\psi_0) + \mathrm{Lip}(\psi_1)) W_1(\nu_0, \nu_1) \end{equation} \end{theorem} \begin{remark} \label{rem:dist} The left hand side of inequality~\eqref{eq:stabtarget} measures the distance between transport maps $T_0$ and $T_1$. 
To see this let us consider a simpler case where $M$ and $N$ are domains of $\mathbb{R}^d$ and $\omega(r) = r^2$ then we get \[\int_M \omega(\d_N(T_0 (x), T_1(x))) d\mu(x) = \nr{T_1 - T_0}_{L^2(\mu)}^2 \] and in that case, Theorem~\ref{th:stability-cconc} amounts to bounding the $L^2$ norm of the distance between transport maps. \end{remark} \begin{remark}[Discretization of the target measure] Assume that we have two absolutely continuous measures $\mu \in \mathcal{P}(M)$ and $\nu \in \mathcal{P}(N)$ and an optimal transport map $T$ from $\mu$ to $\nu$ satisfying all the hypothesis of Theorem~\ref{th:stability-cconc}. One can pick a family of points $(y_i)_{1 \leq i \leq n}$ in the target space $N$ and approximate the measure $\nu$ by a discrete measure $\nu_h$ of the form \[ \nu_h = \sum_i \nu(V_i) \delta_{y_i} \] where $(V_i)_{1 \leq i \leq n}$ is a Voronoi tesselation of $N$ around the points $(y_i)_{1 \leq i \leq n}$ chosen in an appropriate way in the support of $\nu$. The parameter $h$ is given by $h = \max_{1 \leq i \leq n} \diam(V_i)$ so that $W_1(\nu, \nu_h) \leq h$. We can compute the optimal transport map $T_h$ between $\mu$ and $\nu_h$ using semi-discrete methods such as~\cite{kitagawa2019convergence}. Then, Theorem~\ref{th:stability-cconc} implies \[ \int_M \omega(\d_N(T(x), T_h(x))) d\mu(x) \leq C h\] where the constant $C$ depends on the Lipschitz constants of the potentials, which can be controlled explicitely in many cases. If the modulus $\omega(r)$ is quadratic, then the $\mathrm{L}^2(\mu)$ distance between $T$ and $T_h$ is controlled by $h^{1/2}$. \end{remark} \begin{proof}[Proof of Theorem~\ref{th:stability-cconc}] We have \[\sca{\nu_1 - \nu_0}{\psi_1 - \psi_0} = \int_N \psi_1 d_N(\nu_1 - \nu_0) + \int_N \psi_0 d_N(\nu_0 - \nu_1) \] Let $A = \int_N \psi_1 d_N(\nu_1 - \nu_0)$ and $B = \int_N \psi_0 d_N(\nu_0 - \nu_1)$. Since $T_{i\#} \mu = \nu_i$ we have \begin{align*} A &= \int_N \psi_1 d\nu_1 - \int_N \psi_1 d\nu_0 \\ &= \int_M \psi_1(T_1(x)) d\mu(x) - \int_M \psi_1(T_0(x)) d\mu(x) \end{align*} For $x \in M$ we have $x \in \partial^c \psi_i(T_i(x))$. Then the strong $c$-concavity of $\psi_1$ gives \begin{align*} A &= \int_M \psi_1(T_1(x)) - \psi_1(T_0(x)) d\mu(x) \\ &\geq \int_M c(x, T_1(x)) - c(x, T_0(x)) + \omega(\d_N(T_0(x), T_1(x))) d\mu \end{align*} Now since $\psi_0$ is also $c$-concave, we have \[ B \geq \int_M - c(x, T_1(x)) + c(x, T_0(x)) d\mu\] Summing these two inequalities gives \[ \int_M \omega(\d_N(T_0 (x), T_1(x))) d\mu(x) \leq \int_N \psi_1 - \psi_0 d_N(\nu_1 - \nu_0) \] Since $\psi_0$ and $\psi_1$ are Lipschitz, we have \begin{align*} \int_N \psi_1 - \psi_0 d_N(\nu_1 - \nu_0) &\leq (\mathrm{Lip}(\psi_0) + \mathrm{Lip}(\psi_1)) W_1(\nu_0, \nu_1) \end{align*} where the last inequality is given by Kantorovich-Rubinstein theorem. \end{proof} \subsection{Error bounds for optimal transport problems}\label{subsec:errorbound} In this section, we generalize in Proposition~\ref{prop:stabplan} a stability result of Li and Nochetto~\cite{li2021quantitative} to general cost functions, using the notion of strong $c$-concavity. This result allows to bound in Corollary~\ref{cor:stabplanW1} the Wasserstein distance between the optimal transport map and any transport plan with the same marginals by the suboptimality gap of the transport plan. \begin{proposition} \label{prop:stabplan} Let $\mu \in \mathcal{P}(M)$, $\nu \in \mathcal{P}(N)$ and $T: M \to N$ be an optimal transport map from $\mu$ to $\nu$. 
We assume that $T$ is induced by a strongly c-concave potential $\psi : N \to \mathbb{R}$ with modulus $\omega$ on a compact subset $D$ of $M\times N$ wich contains the graph of $T$. Then any transport plan $\gamma \in \Gamma(\mu, \nu)$ supported on $D$ satisfies \begin{equation*} \int_{M \times N} \omega(\d_N(T(x),y)) d\gamma(x,y) \leq \int_{M \times N} c(x,y) d\gamma(x,y) - \int_M c(x,T(x)) d\mu(x) \end{equation*} \end{proposition} The left hand side of this equation is called the suboptimality gap of $\gamma$, and measures how worse the transport plan $\gamma$ behaves compared to the optimal transport map $T$. \begin{proof} The strong c-concavity of $\psi$ implies that for any $x,y \in D$, \[ \psi(y) \leq \psi(T(x)) - c(x,T(x)) + c(x,y) - \omega(\d_N(T(x),y)). \] Moreover since $T_\#\mu = \nu$, we have \[ \int_N \psi(y)d\nu(y) = \int_M \psi(T(x)) d\mu(x) \] which combined with the strong c-concavity of $\psi$ gives \begin{align*} 0 &= \int_N \psi(y)d\nu(y) - \int_M \psi(T(x)) d\mu(x)\\ &= \int_{D} \psi(y) - \psi(T(x)) d\gamma(x,y) \\ &\leq \int_{D} c(x,y) - c(x,T(x)) - \omega(\d_N(T(x),y)) d\gamma(x,y) \\ &= \int_{D} c(x,y)d\gamma(x,y) - \int_M c(x,T(x))d\mu(x) - \int_{D} \omega(\d_N(T(x), y))d\gamma(x,y) \end{align*} Rearranging this inequality gives the desired conclusion. \end{proof} We can rephrase this proposition using the the 1-Wasserstein distance $\operatorname{W}_1$ in $\mathcal{P}(M \times N)$ induced by the distance $$\d_{M \times N}((x,y),(x',y')) = \d_M(x,x') + \d_N(y,y').$$ \begin{corollary} \label{cor:stabplanW1} Under the assumptions of Proposition~\ref{prop:stabplan}, if the modulus of the Kantorovitch potential $\psi$ is $\omega(r) = Cr^2$, one has \[ \operatorname{W}_1(\gamma, \gamma_T) \leq \frac{1}{\sqrt{C}} \left( \int_{M \times N} c(x,y) d\gamma(x,y) - \int_M c(x,T(x)) d\mu(x) \right)^{1/2} \] where $\gamma_T = (Id, T)_\# \mu$. \end{corollary} \begin{proof} Let $S : M \times N \to (M \times N)^2$ defined by \[ S(x,y) = (S_1(x,y), S_2(x,y))\] where $S_1(x,y) = (x,T(x))$ and $S_2(x,y) = (x,y)$. Let $\pi = S_\# \gamma \in \mathcal{P}((M \times N)^2)$. One can check that $\pi \in \Gamma(\gamma_T, \gamma)$, which implies \begin{align*} W_1(\gamma_T, \gamma) &\leq \int_{(M \times N)^2} \d_{M \times N}((x,y),(x',y')) \mathrm{d} \pi(x,y,x',y') \\ &= \int_{M \times N} \d_{M \times N}(S_1(x,y), S_2(x,y)) \mathrm{d} \gamma(x,y) \\ &= \int_{M \times N} \d_N(T(x),y) \mathrm{d} \gamma(x,y). \end{align*} We use the Cauchy-Schwarz inequality in $L^2(M \times N, \gamma)$ and Proposition~\ref{prop:stabplan} to get the desired result. \end{proof} \subsection{Stability with respect to both measures}\label{sec:stabboth} Here we apply Corollary~\ref{cor:stabplanW1} to show stability results of transport plans with respect to both the source and the target measures. Our result holds for general cost functions and is inspired by a result of Li and Nochetto~\cite{li2021quantitative} that holds in the quadratic case. We denote by $\d_M$ the distance on $M$ and $\d_N$ the distance on $N$. We also choose for distance on the product space $\d_{M \times N}((x,y),(x',y')) = \d_M(x,x') + \d_N(y,y')$. Throughout this section, we require the cost function $c$ to be Lipschitz on the whole product space $M \times N$. \begin{proposition}[Stability with respect to both measures] \label{prop:stabbothmeasureW1} Let $\mu , \tilde{\mu} \in \mathcal{P}(M)$ and $\nu , \tilde{\nu} \in \mathcal{P}(N)$. Let $c : M \times N \to \mathbb{R}$ be a cost function which is Lipschitz on $M \times N$. 
Let $T : M \to N$ be an optimal transport map between $\mu$ and $\nu$, and $\tilde{\gamma}$ be an optimal transport plan between $\tilde{\mu}$ and $\tilde{\nu}$ for the cost $c$. We assume that $T$ is induced by a strongly c-concave potential $\psi : N \to \mathbb{R}$ with associated modulus $\omega(r) = C r^2$ on $D=M \times N$. Then we have \[ \operatorname{W}_1(\gamma_T, \tilde{\gamma}) \leq \varepsilon + \sqrt{\frac{2 \mathrm{Lip}(c)}{C} \varepsilon}, \quad \hbox{where } \varepsilon := \operatorname{W}_1(\tilde{\mu},\mu) + \operatorname{W}_1(\nu,\tilde{\nu}). \] \end{proposition} The end of this section is devoted to the proof of this proposition. As in \cite{li2021quantitative}, we will use the gluing lemma ~\cite{santambrogio2015optimal,villani2008optimal}. \begin{lemma}[gluing of measures] \label{lemma:gluing} Let $(X_i, \mu_i)$ be probability spaces for $i \in \{1,2,3\}$, and $\gamma_{12} \in \Gamma(\mu_1, \mu_2)$, $\gamma_{23} \in \Gamma(\mu_2, \mu_3)$. Then there exists $\pi \in \mathcal{P}(X_1 \times X_2 \times X_3)$ such that $\pi(\cdot, \cdot, X_3) = \gamma_{12}$ and $\pi(X_1, \cdot, \cdot) = \gamma_{23}$. Or equivalently \[ p_{12\#}\pi = \gamma_{12} \quad p_{23\#}\pi = \gamma_{23} \] where $p_{ij}$ is the projection defined by $p_{ij}(x_1, x_2, x_3) = (x_i, x_j)$. \end{lemma} We also need the following (easy) lemma, showing that the transport cost $$\mathcal{T}^c(\mu,\nu) := \min_{\gamma\in\Gamma(\mu,\nu)} \int c\mathrm{d}\gamma$$ is Lipschitz with respect to perturbations of the measures when $c$ is Lipschitz. \begin{lemma} \label{lemma:globalcostgap} Let $c : M \times N \to \mathbb{R}$ be a Lipschitz cost function. Let $\mu, \tilde{\mu} \in \mathcal{P}(M)$ and $\nu, \tilde{\nu} \in \mathcal{P}(N)$. Then we have \[ \left| \mathcal{T}^c(\mu, \nu) - \mathcal{T}^c(\tilde{\mu},\tilde{\nu}) \right| \leq \mathrm{Lip}(c) (\operatorname{W}_1(\mu, \tilde{\mu}) + \operatorname{W}_1(\nu, \tilde{\nu})).\] \end{lemma} \begin{proof} Kantorovich duality gives \[ \mathcal{T}^c(\mu, \nu) = \max_{\phi \oplus \psi \leq c} \int_{M} \phi \mathrm{d} \mu + \int_N \psi \d \nu. \] Moreover, since the cost is Lipschitz, the maximum is attained in the dual problem; one can assume that the maximum is attained for two potentials $\phi,\psi$ satisfying $\phi =\psi^c$ and $\phi = \psi^c$. In particular both $\phi$ and $\psi$ are Lipschitz continuous with Lipschitz constant lower than $\mathrm{Lip}(c)$. Kantorovitch (weak) duality applied to the two measures $\tilde{\mu}$ and $\tilde{\nu}$ gives \[ \mathcal{T}^c(\tilde{\mu},\tilde{\nu}) \geq \int_M \phi \d \tilde{\mu}+ \int_N \psi \d \tilde{\nu}. \] We thus get \[\mathcal{T}^c(\mu, \nu) - \mathcal{T}^c(\tilde{\mu},\tilde{\nu}) \leq \int_M \phi \d (\mu -\tilde{\mu})+ \int_N \psi \d (\nu - \tilde{\nu}) \leq \mathrm{Lip}(c) ( \operatorname{W}_1(\mu, \nu) + \operatorname{W}_1(\tilde{\mu},\tilde{\nu}) )\] where the last inequality is given by Kantorovich-Rubinstein Theorem. By symmetry the same result holds when we exchange $\mu, \nu$ and $\tilde{\mu},\tilde{\nu}$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:stabbothmeasureW1}] Let $\alpha \in \Gamma(\mu, \tilde{\mu})$ and $\beta \in \Gamma(\tilde{\nu},\nu)$ be optimal transport plans for the cost $d_M$ and $d_N$. Let $\pi \in \mathcal{P}(M^2 \times N^2)$ be a gluing of $\alpha, \tilde{\gamma}$ and $\beta$, i.e. 
\[ p_{12\#}\pi = \alpha, \quad p_{23\#}\pi = \tilde{\gamma}, \quad p_{34\#}\pi = \beta \] Defining $\gamma = p_{14\#} \pi \in \Gamma(\mu, \nu)$, we get \begin{align} \label{ineq:W1plansmarginals} \operatorname{W}_1(\gamma, \tilde{\gamma}) &\leq \int_{M^2 \times N^2} \d_M(x,x') + \d_N(y,y') \d \pi(x,x',y,y') \nonumber \\ &= \int_{M^2} \d_M(x,x') \d \alpha(x,x') + \int_{N^2} \d_N(y,y') \d \beta(y,y') \nonumber \\ &= \operatorname{W}_1(\tilde{\mu},\mu) + \operatorname{W}_1(\nu,\tilde{\nu}) \end{align} We also have \begin{align} \label{ineq:gammatilde} & \int_{M \times N} c(x,y) \mathrm{d} \gamma \nonumber \\ = &\int_{M^2 \times N^2} c(x,y) \mathrm{d} \pi(x,x',y',y) \nonumber \\ = &\int_{M^2 \times N^2} c(x',y') + c(x, y) - c(x',y') \mathrm{d} \pi(x,x',y',y) \nonumber \\ \leq &\int_{M^2 \times N^2} c(x',y') + \mathrm{Lip}(c)(\d_M(x,x') + \d_N(y,y')) \mathrm{d} \pi(x,x',y',y) \nonumber \\ = &\int_{M \times N} c(x',y') \mathrm{d} \tilde{\gamma} + \mathrm{Lip}(c)\left( \int_{M^2} \d_M(x,x')\mathrm{d}\alpha + \int_{N^2} \d_N(y,y') \mathrm{d}\beta \right) \nonumber \\ \leq &\int_{M \times N} c(x,y)\mathrm{d} \tilde{\gamma} + \mathrm{Lip}(c) (W_1(\mu, \tilde{\mu}) + W_1(\nu, \tilde{\nu})) \end{align} The transport plans $\gamma_T = (Id, T)_\# \mu \in \Gamma(\mu, \nu)$ and $\tilde{\gamma} \in \Gamma(\mu, \nu)$ are optimal, so that by Lemma~\ref{lemma:globalcostgap}, \[\int_{M \times N} c(x,y)\mathrm{d} \tilde{\gamma} \leq \int_{M \times N} c(x,y)\mathrm{d} \gamma_T + \mathrm{Lip}(c) (W_1(\mu, \tilde{\mu}) + W_1(\nu, \tilde{\nu})) \] which combined with~\eqref{ineq:gammatilde} gives \[\int_{M \times N} c(x,y) \mathrm{d} \gamma - \int_{M \times N} c(x,y)\mathrm{d} \gamma_T \leq 2 \mathrm{Lip}(c) (W_1(\mu, \tilde{\mu}) + W_1(\nu, \tilde{\nu})) \] Corollary~\ref{cor:stabplanW1} then implies that \[ \operatorname{W}_1(\gamma, \gamma_T) \leq \left[ \frac{2 \mathrm{Lip}(c)}{C} (\operatorname{W}_1(\tilde{\mu},\mu) + \operatorname{W}_1(\nu,\tilde{\nu}) ) \right]^{1/2} \] Finally, using the triangle inequality along with \eqref{ineq:W1plansmarginals} we get \begin{align*} \operatorname{W}_1(\tilde{\gamma}, \gamma_T) &\leq \operatorname{W}_1(\tilde{\gamma}, \gamma) + \operatorname{W}_1(\gamma, \gamma_T) \\ &\leq \operatorname{W}_1(\tilde{\mu},\mu) + \operatorname{W}_1(\nu,\tilde{\nu}) + \left( \frac{2 \mathrm{Lip}(c)}{C} (\operatorname{W}_1(\tilde{\mu},\mu) + \operatorname{W}_1(\nu,\tilde{\nu}) ) \right)^{1/2}\\ &= \varepsilon + \sqrt{\frac{2 \mathrm{Lip}(c)}{C} \varepsilon} \qedhere \end{align*} \end{proof} \subsection{A remark on regularity} The above results show that the notion of strong $c$-concavity is sufficient to get stability results. In fact, this notion can also lead to regularity of the associated transport maps, as expressed in the following lemma. \begin{lemma}[Regularity under strong $c$-concavity] Let us assume that the cost function $c : M \times N \to \mathbb{R}$ is Lipschitz on $M \times N$ and let $T : M \to N$ be a transport map induced by a strongly c-concave potential $\psi : N \to \mathbb{R}$, with continuity modulus $\omega(r) = C r^2$ on $M \times N$. Then $T$ is $1/2$-H\"older: \[ \d_N(T(x),T(x')) \leq \left( \frac{\mathrm{Lip}(c)}{C} \d_M(x,x') \right)^{1/2} \] \end{lemma} \begin{proof} Let $x \in M$. Since $T$ is induced by a strongly c-concave potential $\psi$ we have $T(x)= \argmin_{y \in N} c(x,y) - \psi(y)$. The strong c-concavity of $\psi$ implies that for every $y\in N$ \[ c(x,y) - \psi(y) \geq c(x,T(x)) - \psi(T(x)) + \omega(\d_N(y,T(x)))\] Now let $x' \in M$. 
By choosing $y = T(x')$ the above inequality becomes \[ c(x,T(x')) - \psi(T(x')) \geq c(x,T(x)) - \psi(T(x)) + \omega(\d_N(T(x'),T(x)))\] This inequality still holds when we exchange $x$ and $x'$, summing the two gives \[ 2 \omega(\d_N(T(x),T(x'))) \leq c(x',T(x)) + c(x,T(x')) - c(x',T(x')) - c(x, T(x)) \] and since $c$ Lipschitz we have \[ C \d_N(T(x),T(x'))^2 \leq \mathrm{Lip}(c) \d_M(x,x'). \qedhere \] \end{proof} Thus, strong $c$-concavity of the potential entails some regularity of the transport map, generalizing what is well-known in the convex setting (i.e. if $\psi$ is strongly convex, then $\psi^*$ is $\mathcal{C}^{1,1}$). The next section will show a partial converse statement, under strong assumptions on the cost function. \section{Sufficient condition for strong c-concavity}\label{sec:criterioncconc} This section is all about the notion of strong c-concavity that we used through the previous section to deduce stability results of optimal transport maps. From now on, we assume that $M$ and $N$ are smooth complete Riemannian manifolds. It is known that the notions of convexity and strong convexity can be easily characterized by conditions on the Hessian for smooth functions. The c-convexity is not that easy to study but for cost functions $c$ that are regular enough in a certain sense, there is a differential criterion for c-convexity, given by Villani~\cite{villani2008optimal}. In this section we extend Villani's result for strong c-concavity, in other words we show that the strong c-concavity of a function can also be guaranteed by conditions on its derivatives. This result is presented in Corollary~\ref{cor:strongcconc}. To do so we need the cost function $c : M \times N \to \mathbb{R}\cup\{+\infty\}$ to satisfy the Ma-Trudinger-Wang (MTW) condition, which is a well known condition in regularity theory of optimal transport. \subsection{The Ma-Trudinger-Wang tensor} We recall in this section the notion of MTW tensor~\cite{villani2008optimal}. Recall that we are working with two smooth complete Riemannian manifolds $M$ and $N$, and a cost function $c:M\times N\to \mathbb{R} \cup \{+ \infty \}$. We denote by $\Dom(\nabla_x c) \subseteq M \times N$ the set of differentiability of the cost $c$ and $\Dom'(\nabla_x c(x, \cdot)) = \inter(\Dom(\nabla_x c(x, \cdot)))$ the interior of the domain of definition of $\nabla_x c(x, \cdot)$, then \begin{equation} \label{Domp} \Dom'(\nabla_x c) = \{ (x,y) \mid x \in \inter(M), y \in \Dom'(\nabla_x c(x, \cdot))) \} \end{equation} \begin{definition}[Twisted cost] The cost $c$ satifies the (Twist) condition if $\nabla_x c (x, \cdot)$ is injective on its domain of definition, i.e. for any $x, y, y'$ such that $(x,y) \in \Dom'(\nabla_x c)$ and $(x, y') \in \Dom'(\nabla_x c)$: \[\nabla_xc(x,y)= \nabla_xc(x,y') \implies y = y' \] \end{definition} \begin{definition}[STwist] \label{STwist} The cost satisfies the strong Twist condition (STwist) if $c$ is $\mathcal{C}^2$, $\nabla_x c$ is one-to-one and $D_{xy}^2 c$ is non singular on $\Dom'(\nabla_x c)$. \end{definition} If the cost function satisfies (Twist), then for $x \in \inter(M)$ the function $- \nabla_x c(x, \cdot)$ is invertible on its image $\Ix \subseteq T_x M$, i.e. \[ - \nabla_x c(x, \cdot) : \Dom'(\nabla_x c(x, \cdot)) \subseteq N \to \Ix \subseteq T_xM \] is one-to-one. 
\begin{definition}[c-exponential] When the cost $c$ satisfies the (Twist) condition, we can define the c-exponential for $x \in M$ by $\cexp_x = \left(- \nabla_x c (x, \cdot)\right)^{-1}$, giving for $p \in \Ix$: \begin{align*} \cexp_x(p) : \Ix \subseteq T_xM &\to \Dom'(\nabla_x c(x, \cdot))\subseteq N\\ p &\to \big(\nabla_x c(x, \cdot)\big)^{-1}(-p) \end{align*} \end{definition} \begin{definition}[c-segment] A c-segment is the image of a usual segment by the map $\cexp_x$. We denote $(y_t)_{0 \leq t \leq 1} = [y_0, y_1]_{x}$ the $c$-segment between $y_0$ and $y_1$ with base $x$ defined for $p_0 = (- \nabla_x c)^{-1}(x, y_0)$ and $p_1 = (- \nabla_x c)^{-1}(x, y_1)$ by \[ y_t = \cexp_{x}((1 - t) p_0 + t p_1)\] \end{definition} \begin{definition}[c-convex set] Let $A \subseteq N$. \begin{itemize} \item We say that $A$ is c-convex with respect to $x \in M$ if for any $y_0, y_1 \in A$, there is a c-segment $[y_0, y_1]_x$ entirely contained in $A$. \item The set $A$ is said to be c-convex with respect to a set $B \subseteq M$ if $A$ is c-convex with respect to any $x \in B$. \item A set $D \subseteq M \times N$ is said to be totally c-convex if for any two points $(x, y_0) \in D$ and $(x, y_1) \in D$, the c-segment $(y_t)_{0\leq t \leq 1}=[y_0, y_1]_x$ satisfies for any $t$ $(x,y_t)\in D$. \item We say that $D \subseteq M \times N$ is symmetrically c-convex if both $[x_0, x_1]_y \subseteq D$ and $[y_0, y_1]_x \subseteq D$. \end{itemize} \end{definition} \begin{definition}[MTW tensor] Assuming that $c$ is of class $\mathcal{C}^4$ on $ \Dom'(\nabla_x c)$ and satisfies the (STwist) condition, the Ma-Trudinger-Wang tensor is defined for $(x_0,y_0) \in \Dom'(\nabla_x c)$ and $(\zeta,\eta) \in T_x M \times T_y N$ by \[ \mathfrak{S}_c (x_0,y_0)(\eta,\zeta) = -\frac{3}{2} \frac{\partial^2}{\partial q_{\tilde{\eta}}^2} \frac{\partial^2}{\partial y_{\zeta}^2}\big(c(\cexp_{y_0}(q),y)\big)\Big|_{y=y_0,q=-\nabla_yc(x_0,y_0)}\] with $\tilde{\eta} = - \nabla_{xy}^2 c(x_0,y_0) \eta \in T_xM$. \end{definition} In the above definition $- \nabla_{xy}^2 c(x_0,y_0) : T_xM \times T_yN \to \mathbb{R}$ is a bilinear form which is non singular since (STwist) is satisfied. We then identify for $\eta \in T_yN$ the linear form $- \nabla_{xy}^2 c(x_0,y_0) \eta = \tilde{\eta} : T_xM \to \mathbb{R}$ with a vector of $T_xM$ using the Riemannian structure. \begin{definition}[weak MTW] We say that the weak MTW condition (MTWw) is satisfied on a compact set $D \subseteq M \times N$if there exists a constant $C > 0$ such that for any $(x,y) \in D$ and $(\zeta,\eta) \in T_x M \times T_y N$ we have \begin{equation}\label{eq:MTWw} \mathfrak{S}_c (x,y)(\eta,\zeta) \geq - C |\sca{\zeta}{\tilde{\eta}}|\nr{\zeta}\nr{\tilde{\eta}} \tag{MTWw} \end{equation} This condition was introduced by Ma, Trudinger and Wang~\cite{Ma2005regularity} and is often referred to as \emph{(A3w)}. \end{definition} \subsection{Differential criterion for strong c-concavity} The goal here is to generalize Villani's differential criterion~\cite{villani2008optimal} (detailed in the following theorem) for c-convexity to our definition of strong c-concavity. Our proof is highly inspired from Villani's one, in particular we study the same real valued function $h : [0,1] \to \mathbb{R}$ and show inequalities that are similar and also require positivity of the MTW tensor. \begin{theorem}[Differential criterion for c-convexity, {\cite[Th. 
12.46]{villani2008optimal}}] \label{th:criterioncconc} Let $D \subseteq M \times N$ be a closed symmetrically c-convex set and $c \in \mathcal{C}^4(D,\mathbb{R})$ such that $c$ and $\check{c}$ satisfy (STwist) on $D$. Assume that the weak MTW condition \eqref{eq:MTWw} is satisfied on $D$. Let $\mathcal{X} = \mathrm{proj}_M(D)$ and $\psi \in \mathcal{C}^2(\mathcal{X},\mathbb{R})$. If for any $x \in \mathcal{X}$ there exists $y \in N$ such that $(x,y) \in D$ and \[ \begin{cases} \nabla\psi(x) + \nabla_xc(x,y) = 0 \\ D^2 \psi(x) + D_{xx}^2 c(x,y) \geq 0 \end{cases} \] Then $\psi$ is $c$-convex on $D$. \end{theorem} This theorem is given for a potential function $\psi$ on $\mathcal{X} \subseteq M$ and gives a c-convexity result while we consider $\psi : N \to \mathbb{R}$ and work on c-concavity, but this is really just a matter of convention. Also Villani needs the Hessian $D^2 \psi(x) + D_{xx}^2 c(x,y)$ to be positive semi-definite to obtain c-convexity, while we are naturally going to need the Hessian $D^2_{yy}c(x,y) - D^2\psi(y)$ to have eigenvalues bounded from below by a positive constant to obtain strong c-concavity. A noticeable difference of c-convexity with respect to convexity is that it cannot be expressed locally, as we require the MTW tensor to be positive on the whole set $D$ which is a global condition. \begin{theorem}[Differential criterion for strong c-concavity] \label{th:criterionstrongcconc} We consider $D \subseteq \Dom'(\nabla_x c) \cap \Dom'(\nabla_y c)$ a symmetrically c-convex compact set and denote $\mathcal{X} = \mathrm{proj}_M(D)$, $\mathcal{Y} = \mathrm{proj}_N(D)$. We assume that $c \in \mathcal{C}^4(D, \mathbb{R})$, that $c$ and $\check{c}$ satisfy (STwist) on $D$ where $\check{c}(x,y) = \check{c}(y,x)$. We also assume that the weak MTW condition is satisfied on $D$. Let $\psi \in \mathcal{C}^2(\mathcal{Y}, \mathbb{R})$ be a c-concave function on $D$ and such that there exists $\lambda > 0$ satisfying for any $x \in \partial^c\psi(y)$ \[ D^2 \psi(y) - D_{yy}^2 c(x,y) \geq \lambda Id \] Then $\psi$ is strongly $c$-concave on $D$ with modulus $\omega(\d_N(y,z)) = C \d_N(y,z)^2$, where $C > 0$ is a constant depending on $c$, $\mathcal{X}$ and $\mathcal{Y}$. This means that we have \[ \psi(z) - c(x,z) \geq \psi(y) - c(x,y) + C\d_N(y,z)^2 \] for the points $x \in \mathcal{X}$, $y,z \in \mathcal{Y}$ such that $x \in \partial^c\psi(y)$, $(x,y) \in D$ and $(x,z) \in D$. \end{theorem} \begin{corollary}[Strong c-concavity] \label{cor:strongcconc} We make the same hypothesis on $c$ and $D$, and just assume $\psi \in \mathcal{C}^2(\mathcal{Y}, \mathbb{R})$. Let $T : \mathcal{X} \to \mathcal{Y}$ the map defined by $T(x) = \argmin_y c(x, y) - \psi(y)$ be of class $\mathcal{C}^1$ and satisfying for any $x \in \mathcal{X}$, $(x, T(x)) \in D$. Then the function $\psi$ is strongly c-concave on the set $D$ with modulus $\omega(\d_N(y,z)) = C \d_N(y,z)^2$. \end{corollary} \begin{remark}[Restriction of c-concavity to $D$] Theorem~\ref{th:criterionstrongcconc} actually gives the strong $c$-concavity of the potential $\psi$ on a set $D$ where the cost function is smooth enough. This can be an issue if we want to find a transport map $T : M \to N$ such that $T(x) = \argmin_{z \in N} c(x,y) - \psi(y)$, because we cannot be sure that the argmin will be obtained at a point $y$ such that $(x,y) \in D$. This issue has to be treated independently for each application. 
\end{remark} \subsection{Proof of Theorem~\ref{th:criterionstrongcconc}.} In this subsection we consider that all the hypothesis of Theorem~\ref{th:criterionstrongcconc} are satisfied. For any $x\in \mathcal{X}$, we denote $\mathcal{Y}^x = \{ y \in N \mid (x,y) \in D \}$. Let $\bar{y} \in \mathcal{Y}$ and $\bar{x} \in \partial^c \psi(\bar{y})$ such that $(\bar{x},\bar{y}) \in D$. Note that $\bar{x}$ always exists by hypothesis. Let us fix $y \in \mathcal{Y}^{\bar{x}}$. We want to show that there exists a constant $C > 0$ independant of $\bar{x}, \bar{y}$ and $y$ such that \begin{equation} \label{eq:strong_cconc} c(\bar{x},y) - \psi(y) \geq c(\bar{x}, \bar{y}) - \psi(\bar{y}) + C \d_N(y,\bar{y})^2 \end{equation} We put $(y_t)_{0 \leq t \leq 1} = [\bar{y}, y]_{\bar{x}}$ the $c$-segment between $\bar{y}$ and $y$ with base $\bar{x}$. Remark that the $c$-convexity of $D$ implies that for any $t$ in $[0,1]$, $(\bar{x}, y_t) \in D$. We define the function $h$ by \[ h(t) := c(\bar{x}, y_t) - \psi(y_t)\] such that Equation~\eqref{eq:strong_cconc} writes \begin{equation} \label{eq:hcconc} h(1) \geq h(0) + C \d_N(y,\bar{y})^2 \end{equation} The end of this section is devoted to the proof of Equation~\eqref{eq:hcconc}. \\ \noindent \textbf{Notation.} We first introduce some notations. Note that $A_{\bar{x}}:=\nabla^2_{xy}c(\bar{x},y_t):T_{\bar{x}}M\times T_{y_t}N \to \mathbb{R}$ is a bilinear form which is assumed to be nonsingular. For any $X\in T_{\bar{x}}M$ and $Y\in T_{y_t}N$, we can write $\nabla^2_{xy}c(\bar{x},y_t)(X,Y) = \sca{A_{\bar{x}} X}{Y}=\sca{^tA_{\bar{x}} Y}{X}$ where in some local coordinates $A_{\bar{x}}$ is an invertible matrix and $X$ and $Y$ are column matrices. Then $\nabla^2_{xy}c(\bar{x},y_t)(X,\cdot)$ is a linear form on $T_{y_t}N$ which is identified to the vector $A_{\bar{x}}X\in T_{y_t}N$. Similarly $^tA_{\bar{x}}Y\in T_{\bar{x}}M$. We take the same notation for $A_{x_s}=\nabla^2_{xy}c(x_s,y_t)$. \begin{lemma} \label{lem:hderivative} \[ h'(t) = \sca{\zeta}{\hat{\eta}}\ and \[ h''(t) = \Big(\mathrm{D}^2_{yy}c(x^t,y_t) - D^2\psi(y_t)\Big) (\hat{\eta},\hat{\eta}) + \frac{2}{3} \int_0^1 \mathfrak{S}_c\Big( \cexp_{y_t}(\bar{q_t}+s\zeta), y_t \Big) (\bar{\zeta}, \hat{\eta}) (1-s) \d s, \] where $x^t \in \partial^c \psi(y_t)$, \[ \begin{array}{ll} \eta = \nabla_xc(\bar{x},\bar{y}) -\nabla_xc(\bar{x},y) \in T_{\bar{x}}M & \hat{\eta} = -^tA_{\bar{x}}^{-1}\eta \in T_{y_t}N\\ \zeta= \nabla_yc(\bar{x},y_t) - \nabla \psi(y_t)\in T_{y_t}N & \hat{\zeta} = -A_{\bar{x}}^{-1}\zeta \in T_{\bar{x}}M\\ \bar{q_t}:=-\nabla_yc(\bar{x},y_t) \in T_{y_t}M & \bar{\zeta} = - A_{x_s}^{-1}\zeta \in T_{x_s}N \end{array} \] \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:hderivative}] Since $D$ is symmetrically c-convex and $(\bar{x}, \bar{y}) \in D$, $(\bar{x}, y) \in D$, we can differentiate $h$ as follows $$ h'(t) = \sca{\nabla_yc(\bar{x},y_t) - \nabla \psi(y_t)}{\dot{y_t}} $$ We also have by differentiating $-\nabla_xc(\bar{x},y_t) = \bar{p} + t \eta$: $$ \eta = -\nabla_{xy}^2c(\bar{x},y_t)\dot{y_t}= -^tA_{\bar{x}}\dot{y_t} $$ So that $\hat{\eta}=-^tA_{\bar{x}}^{-1}\eta=\dot{y_t}$ and thus $$ h'(t) = \sca{\zeta}{\hat{\eta}} $$ Differentiating $h'$ gives $$ h''(t) = \Big(\nabla^2_{yy}c(\bar{x},y_t) - \nabla^2 \psi(y_t) \Big)(\dot{y_t},\dot{y_t}) + \sca{\zeta}{\ddot{y_t}}. 
$$ By differentiating $-\eta = \nabla_{xy}^2c(\bar{x},y_t)\dot{y_t}$, one gets $$ \nabla_{xyy}c(\bar{x},y_t)(\dot{y_t},\dot{y_t})+ \sca{\nabla_{xy}c(\bar{x},y_t)}{\ddot{y_t}}=0 $$ so that $$ \ddot{y_t} = -^tA_{\bar{x}}^{-1} \nabla_{xyy}c(\bar{x},y_t)(\hat{\eta},\hat{\eta}) $$ and $$ \sca{\zeta}{\ddot{y_t}} = \sca{\zeta}{-^tA_{\bar{x}}^{-1} \nabla_{xyy}c(\bar{x},y_t)(\hat{\eta},\hat{\eta})} = \sca{-A_{\bar{x}}^{-1}\zeta}{ \nabla_{xyy}c(\bar{x},y_t)(\hat{\eta},\hat{\eta})}. $$ We therefore have $$ h''(t)= \Big(\nabla^2_{yy}c(\bar{x},y_t) - \nabla^2 \psi(y_t) \Big) (\hat{\eta},\hat{\eta}) + \sca{\hat{\zeta}}{\nabla_{xyy}c(\bar{x},y_t)(\hat{\eta},\hat{\eta})} $$ We define $\Phi(x):= \Big(\nabla^2_{yy}c(x,y_t) - \nabla^2 \psi(y_t) \Big) (\hat{\eta},\hat{\eta})$. Then we have for $X\in T_xM$ $$ D\Phi(x).X = \sca{X}{\nabla^3_{xyy}c(x,y_t)(\hat{\eta},\hat{\eta})}, $$ so that $$ h''(t) = \Phi(\bar{x}) + D\Phi(\bar{x})\hat{\zeta} $$ We put $\tilde{\Phi}(q)=\tilde{\Phi}(-\nabla_yc(x,y_t))=\Phi(x)$, so that for $X\in T_xM$ $$ D\Phi(\bar{x})X= D\tilde{\Phi}(\bar{q_t})(-A_{\bar{x}} X $$ For $x=\bar{x}$ and $\eta\in T_{y_t}N$ $$ D\Phi(\bar{x})\hat{\eta} = D\tilde{\Phi}(\bar{q_t})(-A_{\bar{x}} \hat{\eta}) = D\tilde{\Phi}(\bar{q_t})\eta $$ We put $q_t:=-\nabla_yc(x^t,y_t)$ with $x^t \in \partial^c \psi(y_t)$ and recall $\bar{q_t}=-\nabla_yc(\bar{x},y_t)$.We get $\nabla \psi(y_t) = \nabla_yc(x^t,y_t) = - q_t$ and therefore get $\zeta = q_t - \bar{q_t}$. Using the c-convexity of $D$ to differentiate $c$ at $(x_t, \cexp_{x_t}(\bar{p_t}+s\zeta))$, we get $$ h''(t) = \tilde{\Phi}(\bar{q_t})+ D\tilde{\Phi}(\bar{q_t})({q_t}-{\bar{q_t}}) = \tilde{\Phi}(q_t) - \int_0^1 D^2_q \tilde{\Phi}(\bar{q_t}+s\zeta)(\zeta,\zeta)(1-s)\d s $$ We have $$ \tilde{\Phi}(q_t) = \Phi(x^t) = \Big( \nabla^2_{yy}c(x^t,y_t) - \nabla^2 \psi(y_t)\Big)(\hat{\eta},\hat{\eta}) $$ Using the change of variable $q= -\nabla_yc(x,y_t)\in T_{y_t}M$ (or equivalently $x =\cexp_{y_t}(q)$), we get $$ \tilde{\Phi}(q) = \Big(\nabla^2_{yy}c(\cexp_{y_t}(q),y_t) - \nabla^2 \psi(y_t))\Big) (\hat{\eta},\hat{\eta}) $$ Since $\nabla^2 \psi(y_t)$ does not depend on $q$, one gets by the definition of the MTW tensor, for any $\zeta \in T_q(T_{y_t}N)= T_{y_t}N$: $ D^2_{q} \tilde{\Phi}(q) (\zeta,\zeta) = \frac{\partial^2}{\partial q_{\zeta}^2} \frac{\partial^2}{\partial y_{\hat{\eta}}^2}\big(c(\cexp_{y_t}(q),y_t))\big) =-\frac{2}{3}\mathfrak{S}_c(\cexp_{y_t}(q),y_t))\ (\bar{\zeta},\hat{\eta}) $ where we put $\bar{\zeta} := -A_{x_s}^{-1} \zeta$ so as to have $\tilde{\bar{\zeta}} = \zeta$. \end{proof} \begin{lemma} \label{lem:ODE} Let $y \in \mathcal{C}^1([0,1],\mathbb{R})$ satisfying for $C > 0$, \[ \begin{cases} y'(t) \geq -C |y(t)| \\ y(0) = 0 \end{cases} \] then $y(t) \geq 0$ for any $t \in [0,1]$. \end{lemma} \begin{proof} First remark that there exists $g \in \mathcal{C}^0([0,1], \mathbb{R}_+)$ such that $y$ is solution of \[ \begin{cases} y'(t) = -C |y(t)| + g(t) \\ y(0) = 0 \end{cases} \] Assume that $y \leq 0$ on $[0,1]$ then $|y| = -y$ and $y$ satisfies \[ \begin{cases} y'(t) = C y(t) + g(t) \\ y(0) = 0 \end{cases} \] yet the unique solution to this system is $t \mapsto \int_0^t g(s) e^{C(t-s)}ds \geq 0$ which gives $y = 0$. Now assume that there exists $t_0$ such that $y(t_0) := y_0 > 0$, then the system may be rewritten \[ \begin{cases} y'(t) = -C |y(t)| + g(t) \\ y(t_0) = y_0 \end{cases} \] and has for unique solution $t \mapsto y_0 e^{C (t_0-t)} + \int_{t_0}^t g(s)e^{C(s - t)}ds \geq 0$ on $[t_0, 1]$. 
To conclude let us consider $t_* = \inf\{t, y(t) > 0\}$, then we have $y(t) \leq 0$ on $[0, t_*]$ which implies $y = 0$ on $[0, t_*]$ as we have seen previously. \end{proof} \begin{proposition}Under hypothesis of Theorem~\ref{th:criterionstrongcconc}, \label{prop:ineqhsec} \[ h''(t) \geq -C h'(t) + \lambda \|\hat{\eta}\|^2 \] \end{proposition} \begin{proof} We have $$ |h'(t)| = |\sca{\zeta}{\hat{\eta}}|. $$ We also have $$ h''(t) = \Big(\mathrm{D}^2_{yy}c(x^t,y_t) - D^2\psi(y_t)\Big) (\hat{\eta},\hat{\eta}) + \frac{2}{3} \int_0^1 \mathfrak{S}_c( x_s, y_t) (\bar{\zeta}, \hat{\eta}) (1-s) \d s, $$ where $x_s=\cexp_{y_t}(\bar{q_t}+s\zeta)$. By hypothesis we have \[ \Big(\mathrm{D}^2_{yy}c(x^t,y_t) - D^2\psi(y_t)\Big) (\hat{\eta},\hat{\eta}) \geq \lambda \nr{\hat{\eta}}^2 \] and \eqref{eq:MTWw} gives \[ \mathfrak{S}_c( x_s,y_t) (\bar{\zeta}, \hat{\eta}) \geq - C |\sca{\nabla_{xy}^2c( x_s,y_t)\hat{\eta}}{\bar{\zeta}} | \|\hat{\eta}\|\|\bar{\zeta}\| \] The norms $ \|\hat{\eta}\|$ and $\|\bar{\zeta}\|$ can be integrated in the constant by compactness, so we get \[ \mathfrak{S}_c( x_s,y_t) (\bar{\zeta}, \hat{\eta}) \geq - C |\sca{\nabla_{xy}^2c( x_s,y_t))\hat{\eta}}{\bar{\zeta}}| = -C |\sca{^tA_{x_s}\hat{\eta}}{\bar{\zeta}}| \] Recall that $\bar{\zeta}=-\ A_{x_s}^{-1} \zeta$. Therefore we get $$ |\sca{^tA_{x_s}\hat{\eta}}{\bar{\zeta}}| = |\sca{{}^tA_{x_s} \hat{\eta}}{A_{x_s}^{-1} \zeta}| = |\sca{\zeta}{\hat{\eta}}| = |h'(t)|, $$ We thus have $h''(t) \geq -C |h'(t)| + \lambda \|\hat{\eta}\|^2 $. Note that $\zeta |_{t = 0} = 0$ so $h'(0) = 0$. Then we can apply Lemma~\ref{lem:ODE} to $h'$, which gives $h'(t) \geq 0$, so we can drop the absolute value and we obtain $h''(t) \geq - C h'(t) + \lambda \|\hat{\eta}\|^2$. \end{proof} \begin{proof}[Proof of Theorem~\ref{th:criterionstrongcconc}] By compactness we have \[ C_1 := \inf_{(x,y) \in D, u \in T_xM, \nr{u} = 1} \nr{\nabla_{xy}^2 c(x,y)^{-1} u}^2 > 0 \] and \[ C_2 := \inf_{x \in \mathcal{X}, y,z \in \mathcal{Y}^x} \frac{\nr{\nabla_xc(x,y) - \nabla_xc(x,z)}^2}{\d_N(y,z)^2} > 0 \] such that $\nr{\hat{\eta}}^2 \geq C_1 C_2 \d_N(y,\bar{y})^2$. By Proposition~\ref{prop:ineqhsec}, we get \[ h''(t) \geq - C h'(t) + \lambda C_1 C_2 \d_N(y,\bar{y})^2 \] Using Gr\"onwall's Lemma we then have that $h'(t) \geq g(t)$ with $g$ solution of \[ \begin{cases} g'(t) = - C g(t) + \lambda C_1 C_2 \d_N(y,\bar{y})^2 \\ g(0) = 0 \end{cases} \] which immediatly gives $g(t) = \left(\frac{\lambda C_1 C_2}{C} \d_N(y,\bar{y})^2 \right)(1 - e^{-Ct})$, so finally we have for $t \in [0,1]$, $h'(t) \geq \left(\frac{\lambda C_1 C_2}{C} \d_N(y,\bar{y})^2 \right)(1 - e^{-Ct})$, and the by integrating for $t \in [0,1]$ we conclude that \[ \int_0^1 h'(t) dt \geq {\lambda C_1 C_2}e^{-C}\d_N(y,\bar{y})^2 \] which is exactly what we wanted in Equation~\eqref{eq:hcconc}. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:strongcconc}] We want to show that under the hypothesis of Corollary~\ref{cor:strongcconc}, we have $$ \forall y \in \mathcal{Y}\ \forall x \in \partial^c \psi(y),\ D^2_{yy}c(x,y) - D^2\psi(y) \geq \lambda Id, $$ We recall that $T:\mathcal{X} \to \mathcal{Y}$ is of class $\mathcal{C}^1$. Let $x \in \mathcal{X}$, we first assume that $T(x)\in \inter(\mathcal{Y})$. 
Since $T(x)$ minimizes $c(x,\cdot) - \psi(\cdot)$ we have \begin{equation} \label{eq:grad_zero} \nabla_y c(x,T(x)) - \nabla \psi(T(x)) = 0 \end{equation} and \begin{equation} \label{eq:diff_pos} D^2_{yy} c(x,T(x)) - D^2 \psi(T(x)) \geq 0 \end{equation} By differentiating \eqref{eq:grad_zero} with respect to $x$, we get \begin{equation} \label{eq:diff_non_singular} \left( D^2_{yy} c(x,T(x)) - D^2 \psi(T(x)) \right) \circ DT(x) = - D^2_{xy} c(x,T(x)). \end{equation} By (STwist) assumption, $ D^2_{xy} c(x,T(x))$ is nonsingular, which implies that $D^2_{yy} c(x,T(x)) - D^2 \psi(T(x))$ is also nonsingular. Since we also know that it is positive semi-definite from \eqref{eq:diff_pos} we get that $$ D^2_{yy} c(x,T(x)) - D^2 \psi(T(x)) > 0. $$ We now need to extend this inequality for any $T(x) \in \partial \mathcal{Y}$, including the boundary. By continuity, since $\psi$ is $\mathcal{C}^2$ on $\mathcal{Y}$, $c$ is $\mathcal{C}^2$ on $D$ and $T$ is $\mathcal{C}^1$ on $\mathcal{X}$, Equations~\eqref{eq:diff_pos} and \eqref{eq:diff_non_singular} still hold when $T(x) \in \partial \mathcal{Y}$. Moreover (STwist) being satisfied on $D$, we have $D^2_{yy} c(x,T(x)) - D^2 \psi(T(x)) > 0$ for any $x \in \mathcal{X}$. By compactness of $\mathcal{X}$, there exists $\lambda > 0$ such that $$ \forall x\in \mathcal{X}\quad D^2_{yy} c(x,T(x)) - D^2 \psi(T(x)) \geq \lambda Id. $$ We conclude using that $T(x) = y$ is equivalent to $x \in \partial^c \psi(y)$. \end{proof} \section{Stability of optimal transport map for MTW cost}\label{sec:otstability} In this section, we show that the stability results of Section~\ref{sec:stability} can be applied to optimal transport maps. We consider two compact Riemannian manifolds $M$ and $N$ in $\mathbb{R}^d$ and still denote by $d_N$ the distance on $N$. \begin{theorem}[Stability in optimal transport] \label{th:OTstability} Let $\mu \in \mathcal{P}(M)$ and $\nu \in \mathcal{P}(N)$ be two probability measures. Let $c : M \times N \to \mathbb{R}$ be a cost function of class $\mathcal{C}^4$ that satisfies \emph{(STwist)} and \eqref{eq:MTWw} hypothesis. Let $T : M \to N$ be an optimal transport map between $\mu$ and $\nu$ of class $\mathcal{C}^1$ for the cost $c$ and assume that its associated Kantorovich potential $\psi : M \to \mathbb{R}$ is of class $\mathcal{C}^2$. \begin{itemize} \item Let $\tilde{\nu} \in \mathcal{P}(N)$ be any probability measure, and $S : M \to N$ be an optimal transport map between $\mu$ and $\tilde{\nu}$. Then we have \[ \nr{\d_N(T, S)}^2_{L^2(\mu)} \leq C W_1(\nu, \tilde{\nu})\] where $W_1$ denotes the $1$-Wasserstein distance and $C$ is a constant depending on the cost $c$, $M$ and $N$. \item Let $\tilde{\mu} \in \mathcal{P}(M)$, $\tilde{\nu} \in \mathcal{P}(N)$ and $\tilde{\gamma}$ be an optimal transport plan between $\tilde{\mu}$ and $\tilde{\nu}$. Then we have \[ \operatorname{W}_1(\tilde{\gamma}, \gamma_T) \leq C \left( \operatorname{W}_1(\tilde{\mu},\mu) + \operatorname{W}_1(\nu,\tilde{\nu}) \right)^{1/2} \] where $\gamma_T = (Id,T)_\# \mu$ and $C$ is a constant depending on the cost $c$, $M$ and $N$. \end{itemize} \end{theorem} \begin{proof} Since $M$ and $N$ are compact we have strong duality with a cost $c$ that is Lipschitz on $M \times N$ so $S$ is induced by a Lipschitz potential. 
Since $T \in \mathcal{C}^1$, $\psi \in \mathcal{C}^2$ and $c \in \mathcal{C}^4$ satisfies (STwist) and \eqref{eq:MTWw}, we can then apply Corollary~\ref{cor:strongcconc} to $\psi$, which gives that it is strongly c-concave on $N$, with modulus $\omega(\d_N(y,z)) = C \d_N(y,z)^2$. Then the first result is given by Theorem~\ref{th:stability-cconc} and the second is given by Proposition~\ref{prop:stabbothmeasureW1}. \end{proof} For simplicity, the above theorem is stated in a restrictive way as it requires $c$ to be smooth on the whole product space $M \times N$. It may happen that the regularity conditions such as (STwist) and \eqref{eq:MTWw} are not satisfied on the whole product space $M\times N$, but only on a subset $D \subseteq M\times N$. In this case we can still obtain stability with respect to the target measure if we can show that optimal transports plans are supported on this subset $D$. This is treated independently on examples of Sections~\ref{sec:reflector} and~\ref{sec:gaussmeasure}. \section{Stability for the reflector cost on the sphere}\label{sec:reflector} In this section, we apply a stability result of Section~\ref{sec:stability} to the reflector antenna problem. It is known that this problem amounts to solving an optimal transport problem on the unit sphere $M = N = \mathcal{S}^{d-1}$ for the cost function $c(x,y) = - \ln(1 - \sca{x}{y})$~\cite{wang2004design}, extended by $+\infty$ on the diagonal $\{x=y\}$. One of the key element in the proof is to show that optimal transport maps are supported on compact sets that avoid the diagonal \begin{equation} \label{eq:Deps} D_\varepsilon = \{ (x,y) \in M^2 \mid \d_M(x,y) \geq \varepsilon\} \end{equation} where $d_M$ is the geodesic distance on $M$. We first need the following definition. \begin{definition} \label{def:massconcentration} Given a probability measures $\mu\in\mathcal{P}(M)$, we put $$M_{\mu}(r) = \sup_{x\in M} \mu(\mathrm{B}(x,r)).$$ \end{definition} \begin{theorem} \label{th:stabilitymapsreflector} Let $c(x,y) = -\ln(1 - \sca{x}{y})$ be the reflector cost on the sphere $M=\mathcal{S}^{d-1}$. Let $\mu, \nu_0, \nu_1 \in \mathcal{P}(M)$ be such that $\mu$ and $\nu_0$ are absolutely continuous with respect to the Lebesgue measure with strictly positive $\mathcal{C}^{1,1}$ densities. Let $T_i$ be optimal transport maps between $\mu$ and $\nu_i$. Then for all $\beta>0$, there exists a constant $C>0$ depending on $\mu,\nu_0$ and $\beta$ such that \[ \forall \nu_1\in \mathcal{P}(N) \hbox{ s.t. } M_{\nu_1}(\beta) < 1/8, \quad \nr{ \d_M(T_0,T_1)}^2_{L^2(\mu)} \leq C\ W_1(\nu_0, \nu_1) \] where $d_M$ is the geodesic distance on $M$. \end{theorem} The main difficulty to prove the previous theorem is to show that the optimal transport plan is supported on the compact set $D_\varepsilon$ for some $\epsilon$. This is done in the following subsection in a more general setting. \subsection{Support of the optimal transport plan} In this subsection, we show that optimal transport plans are supported on compact sets of the form $D_\epsilon$. Since our result holds in a slightly more general context than the sphere, we consider that $M$ can be any smooth complete Riemannian manifold. Let $c: M \times M \to \mathbb{R}$ be any cost bounded from below that satisfies $c(x,y) = h(\d_M(x,y))$ where $h : \mathbb{R}_+ \to \mathbb{R}$ is a continuous decreasing function such that $h(0) = +\infty$ and $h(t) < + \infty$ for $t > 0$. 
\begin{theorem}\label{th:kantorovichreflector} Let $\mu,\nu \in \mathcal{P}(M)$ and $\beta>0$ such that both $M_\mu(\beta) < 1/8$ and $M_\nu(\beta) < 1/8$, then there exists a constant $\varepsilon>0$ such that any optimal transport plan $\gamma\in\Gamma(\mu,\nu)$ is concentrated on $D_\varepsilon$. \end{theorem} Similar results have already been obtained in different settings~\cite{gangbo2007existence,buttazzo2018continuity,loeper2011regularity}, but none of them can be applied to discrete measures and therefore does not imply our result. W. Gangbo and V.Oliker~\cite{gangbo2007existence} work with Borel measures that vanish on $(d-1)$-rectifiable sets. G. Buttazzo et al.~\cite{buttazzo2018continuity} consider multimarginal optimal transport problems for constant measures. G.Loeper~\cite{loeper2011regularity} considers two measures $\mu$ and $\nu$ such that $\mu \geq m \emph{dVol}$ with $m > 0$ and $\nu$ satisfies for any $\varepsilon \geq 0$ and $x \in M$, $\nu(\mathrm{B}(x,\varepsilon)) \leq f(\varepsilon) \varepsilon^{n(1 - 1/n)}$ for some function $f : \mathbb{R}_+ \to \mathbb{R}_+$ satisfying $\lim_{t \to 0} f(t) = 0$. These hypothesis imply that neither $\mu$ nor $\nu$ can be discrete. Our proof is an adaptation of their proofs in a different context. Lemma~\ref{lem:ccycl} is inspired by~\cite{gangbo2007existence} while Lemma~\ref{lem:pairs} and the overall strategy of the proof come from~\cite{buttazzo2018continuity}. The main difference is that here we work on any measure satisfying $M(\beta) < 1/8$, including discrete measures, which is useful for semi-discrete optimal transport. \begin{remark} Our proof requires $M(\beta) < 1/8$ but we believe that the theoretical bound is $M(\beta) \leq 1/2$, which is enough to guarantee that there exists a transport plan with finite global cost, as showed in the following lemma. It is easy to show that we cannot expect a greater bound. Take for example $x \neq y$ in $\mathcal{S}^{d-1}$, $\varepsilon \in ]0, 1/2[$, $\mu = 1/2(\delta_x + \delta_y)$ and $\nu = (1/2 + \varepsilon) \delta_x + (1/2 - \varepsilon)\delta_y $. Any transport plan between $\mu$ and $\nu$ will send a set of measure at least $\varepsilon$ from $x$ to itself for which the cost is infinite. \end{remark} The end of this section is mainly dedicated to the proof of Theorem~\ref{th:kantorovichreflector}, which is necessary to guarantee that the optimal transport plan is supported where the cost is regular enough and the MTW tensor is non-negative, and allow us to apply our strong c-concavity result in order to obtain the stability results of Theorem~\ref{th:stabilitymapsreflector}. We first show in the following lemma that there exists a transport plan with bounded total cost. \begin{lemma} \label{lem:finite-cost} If $M_\mu(\beta) \leq 1/2$ and $M_\nu(\beta) \leq 1/2$ for some $\beta>0$, then there exists $\gamma\in\Gamma(\mu,\nu)$ s.t. $$ \int c\mathrm{d}\gamma \leq h(\beta/2).$$ \end{lemma} The proof of Lemma~\ref{lem:finite-cost} relies on the following result, which can be seen as a continuous formulation of Hall's mariage Lemma. A proof is given in~\cite[Theorem 1.27]{villani2003topics}. \begin{lemma}[Continuous Hall's marriage lemma]\label{lem:hall} Let $M,N$ be Polish spaces, and let $P$ be a closed subset of $M\times N$. 
Given $\mu \in \mathcal{P}(M)$ and $\nu\in\mathcal{P}(N)$, the following propositions are equivalent: \begin{itemize} \item[(i)] $\exists \gamma \in\Gamma(\mu,\nu)$ such that $\mathrm{spt}(\gamma)\subseteq P$ ; \item[(ii)] for every Borel subset $B\subseteq M$, $$\nu(\{ y\in N \mid \exists x\in B \hbox{ s.t. } (x,y)\in P) \})\geq \mu(B). $$ \end{itemize} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:finite-cost}] We are going to apply the Continuous Hall's marriage lemma to the set $P=\{(x,y)\in M\times N,\ \d_M(x,y) \geq \beta/2\}$. Let $B$ be any Borel set of $X$. We first assume that the diameter of $B$ is a most $\beta$ so that $B\subseteq \mathrm{B}(x_0,\beta)$ for some $x_0 \in B$. Then, $\mu(B) \leq \mu(\mathrm{B}(x_0,\beta)) \leq 1/2 $ using $M_\mu(\beta) \leq 1/2$. having also $M_\nu(\beta) \leq 1/2$ we get \begin{align*} \nu(\{ y \in N \mid \exists x \in B,~ \d_M(x,y) \geq \beta/2 \}) &\quad \geq \nu(\{ y \in N \mid \d_M(x_0,y) \geq \beta/2 \}) \\ &\quad = 1 -\nu(\mathrm{B}(x_0,\beta/2)\\ &\quad \geq 1/2\\ &\quad \geq \mu(B). \end{align*} Assume now that the diameter of $B$ is greater than $\beta$. Then there exist $ x,x' \in B$ such that $\d_M(x,x')\geq \beta$ and the left hand side of the previous inequation is equal to 1 and the condition is obviously satisfied. We can therefore apply Lemma~\ref{lem:hall}, which implies the existence of a transport plan $\gamma$ between $\mu$ and $\nu$ such that for any pair $(x,y)\in\mathrm{spt}(\gamma)$ one has $\d_M(x,y)\geq \beta/2$. Since $h$ is decreasing, we have $c(x,y)\leq h(\beta/2)$ for every pair $(x,y)\in\mathrm{spt}(\gamma)$, which implies the desired result. \end{proof} \begin{lemma} \label{lem:pairs} Let $\gamma$ be an optimal transport map between $\mu$ and $\nu$ for the cost $c$ and let $\beta>0$ such that $M_\mu(\beta) < 1/8$ and $M_\nu(\beta) < 1/8$. Then, for any optimal transport plan $\gamma\in\Gamma(\mu,\nu)$, there exists pairs $(x_0,y_0), (x_0',y_0')\in\mathrm{spt}(\gamma)$ such that the four points $x_0,y_0,x_0',y_0'$ are at distance at least $\min(\varepsilon, \beta)$ with $\varepsilon := h^{-1}(4h(\beta/2))$ from each other. \end{lemma} \begin{proof} Since $\gamma$ is an optimal transport plan, its cost is less than the cost of the transport plan constructed in Lemma~\ref{lem:finite-cost}. Since $h$ is deacreasing and by definition of $\Delta_\varepsilon$, we have for any $\varepsilon > 0$, \[ h(\varepsilon) \gamma(\Delta_\varepsilon) \leq \int_{\Delta_\varepsilon} c \mathrm{d} \gamma \leq h(\beta/2) \] Note that we can consider $h(\beta/2) > 0$, choosing a smaller $\beta$ if necessary. Then if $\varepsilon = h^{-1}(4h(\beta/2))$ we get $\gamma(\Delta_\varepsilon) \leq \frac{1}{4}, $ thus proving the existence of a pair $(x_0,y_0)\in\mathrm{spt}(\gamma) \setminus \Delta_\varepsilon$. 
Since $M_\mu(\beta) < 1/8$, one has $$\gamma((\mathrm{B}(x_0,\beta) \cup \mathrm{B}(y_0,\beta)) \times \mathcal{S}^{d-1}) \leq \mu(\mathrm{B}(x_0,\beta)) + \mu(\mathrm{B}(y_0,\beta))< \frac{1}{4}, $$ Similarly, $M_\nu(\beta) < 1/8$, gives $$\gamma(\mathcal{S}^{d-1} \times (\mathrm{B}(x_0,\beta) \cup \mathrm{B}(y_0,\beta))) \leq \nu(\mathrm{B}(x_0,\beta)) + \nu(\mathrm{B}(y_0,\beta) < \frac{1}{4}, $$ so that \begin{align*} &\gamma(\{ (x,y) \in M^2 \mid \d_M(x, x_0) > \beta,~~ \d_M(y, y_0) > \beta,~ \d_M(y, - x_0) > \beta, \\ &\phantom{\gamma(\{ (x,y) \in M^2 \mid} \d_M(x, y_0) > \beta \hbox{ and } \d_M(x, y) > \varepsilon \} \\ &\quad = \gamma\bigg(M^2 \setminus \bigg[ (\mathrm{B}(x_0, \beta) \cup \mathrm{B}(y_0, \beta)) \times \mathcal{S}^{d-1} \\ &\phantom{\quad = \gamma\bigg(M^2 \setminus} \cup \mathcal{S}^{d-1} \times (\mathrm{B}(x_0, \beta) \cup \mathrm{B}(y_0, \beta)) \cup \Delta_\varepsilon\bigg] \bigg) \\ &\quad \geq 1 - \gamma((\mathrm{B}(x_0,\beta) \cup \mathrm{B}(y_0,\beta)) \times \mathcal{S}^{d-1})\\ &\qquad\quad- \gamma(\mathcal{S}^{d-1} \times (\mathrm{B}(x_0,\beta) \cup \mathrm{B}(y_0,\beta))) - \gamma(\Delta_\varepsilon) \quad> 1/4. \end{align*} This proves the existence of $(x'_0,y_0')\in\mathrm{spt}(\gamma)$ such that $\d_M(x_0, x_0') > \beta$ and $\d_M(y_0, y_0') > \beta$ and $\d_M(x'_0, y'_0)\geq \varepsilon$ and allows us to conclude. \end{proof} \begin{lemma} \label{lem:ccycl} Assume that $c$ is bounded from below by a constant $c_{min}$. Let $S\subseteq M \times M$ be a $c$-cyclically monotone set , which contains two pairs $(x_0,y_0), (x'_0, y'_0)$ such that the pairwise distance between the points $x_0,y_0,x'_0,y'_0$ is at least $\varepsilon>0$. Then, $$ \forall (x,y) \in S,\quad c(x,y) \leq C_\varepsilon := h(\varepsilon) + 2h(\varepsilon/2) + 2 |c_{min}|. $$ \end{lemma} \begin{proof} Using the $c$-cyclical monotonicity of $S$ and $c\geq c_{min}$ one has \[ c(x,y) \leq c(x,y) + c(x_0,y_0) + c(x'_0,y'_0) - 2 c_{min} \leq F(x,y) + 2|c_{min}| \] where $$\left\{\begin{aligned} &F(x,y) = \min(c(x,y_0) + R_1(y), c(x,y'_0) + R_2(y)) \\ &R_1(y) = \min(c(x_0,y) + c(x'_0,y'_0), c(x_0,y'_0) + c(x'_0,y))) \\ &R_2(y) = \min(c(x_0,y) + c(x'_0,y_0), c(x_0,y_0) + c(x'_0,y))). \end{aligned}\right.$$ By assumption, we have $\d_M(x_0, x_0') \geq \varepsilon$, thus $\max(\d_M(x_0,y), \d_M(x_0,y) \geq \varepsilon /2$. Then, since $h$ is decreasing, one has $\min(c(x_0,y),c(x'_0,y)) \leq h(\varepsilon/2)$. We also have $c(x'_0,y'_0) \leq h(\varepsilon)$ and $c(x_0,y'_0) \leq h(\varepsilon)$, which leaves us with $$R_1(y) \leq h(\varepsilon) + \min(c(x_0,y), c(x'_0,y)) \leq h(\varepsilon) + h(\varepsilon/2), $$ and the same bound holds for $R_2(y)$. Using the same argument we get $\min(c(x,y_0),c(x,y_0')) \leq h(\varepsilon/2)$ and thus, \begin{equation*} F(x,y) \leq h(\varepsilon) + h(\varepsilon/2) + \min(c(x,y_0), c(x,y'_0)) \leq h(\varepsilon) + 2h(\varepsilon/2). \qedhere \end{equation*} \end{proof} \noindent \textbf{Proof of Theorem~\ref{th:kantorovichreflector}.} Let $\beta>0$ such that $M(\beta)> 1/8$. Let $\gamma$ be an optimal transport plan, and denote by $S$ its support. By Lemma~\ref{lem:finite-cost}, the cost of this transport plan is finite. This implies that $S$ is $c$-cyclically monotone. Recall that by assumption, the cost $c$ is bounded from below. Therefore by Lemmas~\ref{lem:pairs} and \ref{lem:ccycl} one has $$ \forall (x,y) \in S,\quad c(x,y) \leq C_\varepsilon := h(\varepsilon) + 2h(\varepsilon/2) + 2 |c_{min}|. 
$$ where $\varepsilon = \min(\beta,h^{-1}(4h(\beta/2)))$. This directly implies that $S \subseteq D_\delta$ with $\delta = h^{-1}(C_\varepsilon)$. \subsection{Proof of Theorems~\ref{th:stabilitymapsreflector}} Here, we come back to the sphere case, i.e. $M = \mathcal{S}^{d-1}$. We recall that the reflector cost is given on $M^2$ by $c(x,y) = - \ln(1 - \sca{x}{y})$. Note that on the unit sphere, $\d_M(x,y) = \arccos(\sca{x}{y})$, hence the reflector cost is of the form $c(x,y) = h(\d_M(x,y))$ with $h(t) = -\ln(1 - \cos(t))$ and satisfies the assumptions of Theorem~\ref{th:kantorovichreflector}. \begin{lemma} \label{lem:Depscconvex} For $\varepsilon < 2$, $D_\varepsilon$ is symmetrically $c$-convex. \end{lemma} \begin{proof} A simple computation gives for $x \in M$, that $\nabla_x c(x,\cdot) : M \setminus \{x\} \to T_x M$ is one to one and given by \[ \nabla_x c (x,y) = \frac{y - \sca{x}{y}x}{1 - \sca{x}{y}} \] and the inverse of $-\nabla_xc(x, \cdot)$ is \[ \cexp_x(p) = \left( 1 - \frac{2}{1 + \nr{p}^2} \right) x - \frac{2}{1 + \nr{p}^2} p \] Let $(x,y_0)$ and $(x, y_1)$ in $D_\varepsilon$, and define the $c$-segment $(y_t) = [y_0, y_1]_x$. For $p_0 = \nabla_x c(x,y_0)$ and $p_1 = \nabla_x c(x,y_1)$, we put $p_t = (1-t) p_0 + t p_1$, so that $y_t = \cexp_x(p_t)$. We want to show that $(x,y_t) \in D_\varepsilon$, hence we only have to show that $\d_M(x,y_t) \geq \varepsilon$ . We have \[ x - y_t = \frac{2}{1 + \nr{p_t}^2} x + \frac{2}{1 + \nr{p_t}^2} p_t.\] Since $x$ is orthogonal to $p_t$ and $\nr{x} = 1$, we get \[ \d_M(x,y_t) = \arccos(\sca{x}{y_t}) = \arccos\left(1 - \frac{2}{1 + \nr{p_t}^2}\right).\] So $\d_M(x,y_t) \geq \varepsilon$ is satisfied if $1 - \frac{2}{1 + \nr{p_t}^2} \leq \cos(\varepsilon)$. Since $cos(\varepsilon) \geq 1 - \varepsilon^2/2$ it is sufficient to show that \[ \frac{2}{1 + \nr{p_t}^2} \geq \varepsilon^2 /2. \] Since $\nr{p_t} \leq \max(\nr{p_0}, \nr{p_1})$, and by symmetry of $p_0$ and $p_1$ it is sufficient to show that $\nr{p_0}^2 \leq \frac{4}{\varepsilon^2} - 1$. Again using that $\nr{x} = \nr{y_0} = 1$, we have \[ \nr{p_0}^2 = \nr{\frac{y_0 - \sca{x}{y_0}x}{1 - \sca{x}{y_0}^2}}^2 = \frac{1 - \sca{x}{y_0}^2}{(1 - \sca{x}{y_0})^2} = \frac{1 + \sca{x}{y_0}}{1 - \sca{x}{y_0}}\] Finally using the relation $\sca{x}{y_0} = 1 - \nr{x-y_0}^2 / 2$, we get \[ \nr{p_0}^2 = \frac{4}{\nr{x-y_0}^2} - 1 \leq \frac{4}{\varepsilon^2} - 1 \] and in conclusion, $D_\varepsilon$ is $c$-convex. Note that by symmetry it is obviously symmetrically c-convex. \end{proof} \noindent \textbf{End of proof of Theorem~\ref{th:stabilitymapsreflector}} Since $\mu$ and $\nu_0$ are absolutely continuous there exists $\beta > 0$ such that $M_\mu(\beta) < 1/8$, $M_{\nu_0}(\beta) < 1/8$ and $M_{\nu_1}(\beta) < 1/8$. Therefore, by Theorem~\ref{th:kantorovichreflector}, there exists $\varepsilon >0$ such that for every $x \in M$, $(x, T_i(x)) \in D_\varepsilon$. The set $D_\varepsilon$ is a compact set and symmetrically c-convex by Lemma~\ref{lem:Depscconvex}. Recall that the optimal transport map $T_0$ between $\mu$ and $\nu_0$ is of the form $T_0(x)=\argmin_{y\in N} c(x,y) - \psi_0(y)$, where $\psi_0:N\to \mathbb{R}$ is a $c$-concave function. Since $\mu$ and $\nu_0$ have $\mathcal{C}^{1,1}$ strictly positive densities, a result of Gregoire Loeper~\cite[Theorem 2.5]{loeper2011regularity} implies that $\psi_0$ is of class $\mathcal{C}^3$ and that $T:x\mapsto \cexp_x(\nabla \psi^c(x))$ is of class $\mathcal{C}^2$. 
As seen in the proof of Theorem~\ref{th:kantorovichreflector}, $\psi_1$ is $c$-concave for the truncated cost, which is Lipschitz, and is therefore also Lipschitz. Furthermore, it is known that the reflector cost satisfies MTW and (STwist)~\cite{loeper2011regularity}. We can thus apply Corollary~\ref{cor:strongcconc} which gives that $\psi_0$ is strongly c-concave on $D_\varepsilon$. We then conclude by applying Theorem~\ref{th:stability-cconc}. \section{Prescription of Gauss curvature measure}\label{sec:gaussmeasure} The problem of Gauss curvature measure prescription for a convex body has been introduced by A.D. Aleksandrov in 1950~\cite{alexandrov1950convex} and has been shown to be equivalent to an optimal transport problem on the sphere~\cite{oliker2007embedding,bertrand2016prescription}. In this section we apply our stability result to this optimal transport problem. To this purpose we define the Gauss curvature measure introduced in~\cite{alexandrov1950convex}. Let $K \subseteq \mathbb{R}^d$ be a closed bounded convex body such that $0 \in \inter(K)$. We denote by $\rho_K: \mathcal{S}^{d-1} \to \mathbb{R}$ the radial parametrization of $\partial K$ defined for any direction $x$ in the sphere $\mathcal{S}^{d-1}$ by $\rho_K(x) = \sup \{ r \in \mathbb{R} \mid rx \in K\}$. This induces a homeomorphism $\overrightarrow{\rho_K}$ from $\mathcal{S}^{d-1}$ to $\partial K$ defined by \begin{align*} \overrightarrow{\rho_K} : \mathcal{S}^{d-1} &\to \partial K \\ x &\mapsto \rho_K(x)x \end{align*} We call (multivalued) Gauss map, the map $\mathcal{G}_K$ which maps a point $x\in\partial K$ to the set of unit exterior normals to $K$ at $x$, namely $$G_K(x) = \{ n \in \mathcal{S}^{d-1} \mid x\in\arg\max_K\sca{n}{\cdot} \}. $$ Note that $\mathcal{G}_K(x)$ is a set when $K$ is not smooth at $x$. Through this section, we denote by $\sigma$ the uniform probability measure on the sphere $\mathcal{S}^{d-1}$, i.e. the normalized $(d-1)$-dimensional Hausdorff measure. \begin{definition}[Gauss curvature measure] Let $K$ be a bounded convex body containing $0$ in its interior. The \emph{Gauss curvature measure} of $K$, denoted $\mu_K$, is a probability measure over $\mathcal{S}^{d-1}$ defined for any Borel subset $A \subseteq \mathcal{S}^{d-1}$ by $\mu_K(A) = \sigma(\mathcal{G}_K \circ \overrightarrow{\rho_K}(A))$. \end{definition} The \emph{Gauss curvature measure prescription problem} is the following inverse problem: given a measure $\mu \in \mathcal{P}(\mathcal{S}^{d-1})$, is it possible to find a convex body $K$ such that $\mu=\mu_K$ ? It is well-known that convexity of $K$ implies that for every non-empty spherical convex subset $\Theta \subsetneq \mathcal{S}^{d-1}$ -- i.e. subsets $\Theta$ that contains any mimimizing geodesic between any pair of its points --- we have \begin{equation} \label{eq:convexmeasure} \mu_K(\Theta) < \sigma(\Theta_{\pi/2}) \end{equation} with $\Theta_{\pi/2} = \{ x \in \mathcal{S}^{d-1} \mid d_M(x,\Theta) < \pi/2 \}$, and where where $d_M$ is the geodesic distance on the sphere. Aleksandrov's theorem states that Equation~\eqref{eq:convexmeasure} is in fact a sufficient condition for $\mu$ to be the Gauss curvature measure of a convex body. \begin{theorem}[Aleksandrov] Let $\mu \in \mathcal{P}(\mathcal{S}^{d-1})$ be a probability measure satisfying condition~\eqref{eq:convexmeasure}, then there exists a unique (up to homotheties) convex body $K \subseteq \mathbb{R}^d$ with $0 \in \inter(K)$ such that $\mu$ is the Gaussian curvature measure of $K$. 
\end{theorem} \subsection{An optimal transport problem} Following~\cite{oliker2007embedding, bertrand2016prescription} we briefly recall that this inverse problem can be recast as an optimal transport problem on the sphere for the cost $c(x,n) = -\ln(\max(0,\sca{x}{n}))$, which takes value $+\infty$ when $\sca{x}{n}\leq 0$. Let $\mu$ be any measure in $\mathcal{P}(\mathcal{S}^{d-1})$ satisfying condition~\eqref{eq:convexmeasure}. Note that the very same cost plays an important role in the theory of unbalanced optimal transport \cite{chizat2018interpolating,liero2018optimal,gallouet2021regularity}. In the following proposition, we use the notion of \emph{support function} of a convex set $K$, defined by \[ h_K(n) = \sup_{x \in \mathcal{S}^{d-1}} \rho_K(x) \sca{x}{n}. \] \begin{proposition}[\cite{oliker2007embedding, bertrand2016prescription}] Let $\sigma \in \mathcal{P}(\mathcal{S}^{d-1})$ be the uniform measure over the sphere, let $K$ be a compact convex body containing zero in its interior, and let $\mu = \mu_K$. Then, \begin{itemize} \item The map $T_K:\mathcal{S}^{d-1} \to \mathcal{S}^{d-1}$ defined $\sigma$-a.e by \[ T_K(n) = (\mathcal{G}_K \circ \overrightarrow{\rho_K})^{-1}(n) \] is the optimal transport map between $\sigma$ and $\mu$ for the cost $c$. \\ \item The functions $\phi_K = - \ln(h_K)$ and $\psi_K = \ln(\rho_K)$ are maximizers of the Kantorovich dual problem. In particular we have \begin{equation} \label{eq:gauss:kd} \int_{\mathcal{S}^{d-1}} c(T_K(n),n) \d \sigma(n) = \int \phi_K(n) \mathrm{d} \sigma(n) + \int \psi_K(x) \mathrm{d} \mu_K(x). \end{equation} \end{itemize} \end{proposition} For the sake of completeness, we recall the proof of this proposition. \begin{proof} Let $(x,n) \in \mathcal{S}^{d-1}\times \mathcal{S}^{d-1}$ be such that $c(x,n)< +\infty$, i.e. $\sca{x}{n} >0$. Then, \begin{equation}\label{eq:kd:admiss} h_K(n) = \max_{y\in K}\sca{n}{y} \geq \sca{n}{\rho_K(x)x} = \rho_K(x) \sca{n}{x}, \end{equation} with equality if and only if $n\in \mathcal{G}_K(x)$. Since all quantities are positive, taking the logarithm, we see that $\phi_K(n) + \psi_K(x) \leq c(x,n)$, ensuring that $(\phi_K, \psi_K)$ are admissible for the dual Kantorovich problem. Note that e.g. by \cite{bertrand2016prescription} $\sigma$-a.e. direction $n\in\mathcal{S}^{-d-1}$ is normal to a unique point in $\partial K$. This implies that the map $T_K = (\mathcal{G}_K \circ \overrightarrow{\rho_K})^{-1}$ is well defined $\sigma$-a.e. The equality case of \eqref{eq:kd:admiss} gives $$ \phi_K(n) + \psi_K(T_K(n)) \leq c(x,T_K(n)).$$ Integrating this equality with respect to $\sigma$ directly gives \eqref{eq:gauss:kd}. In turn, Kantorovich duality implies that $T_K$ is an optimal transport between $\sigma$ and $\mu$, and that $(\phi_K,\psi_K)$ is a maximizer in the dual Kantorovich problem. \end{proof} \subsection{Stability of transport maps} In this subsection we apply our stability result to the Gauss curvature measure prescription problem. We introduce the following notation: $$ \mathcal{K}(r,R) = \{ K \subseteq \mathbb{R}^d\hbox{ convex, compact } \mid B(0,r) \subseteq K \subseteq B(0,R)\}.$$ \begin{proposition} \label{prop:stabcurvature} Let $K$ be a strictly convex and $\mathcal{C}^2$ compact convex body containing $0$ in its interior. Then, for any $R>r>0$, there exists a constant $C$ depending on $K$, $r$ and $R$ such that \[ \forall L \in \mathcal{K}(r,R),\quad \nr{\d_M(T_K,T_L)}^2_{L^2(\sigma)} \leq C \operatorname{W}_1(\mu_K, \mu_L). 
\] \end{proposition} Note that in addition to the strict convexity and smoothness of $K$, the constant $C$ also depends on the anisotropy of $K$ --- i.e. the radii $R_K\geq r_K > 0$ such that $K\in \mathcal{K}(r_K,R_K)$. The end of the section is devoted to the proof of Proposition~\ref{prop:stabcurvature}. We need to check that the hypothesis of Corollary~\ref{cor:strongcconc} are satisfied for the cost $c(x,n) = -\ln(\max(0,\sca{x}{n}))$. \begin{lemma} \label{lemma:Depscurvature} Given any $R>r>0$, there exists $\varepsilon>0$ such that for any set $K \in\mathcal{K}(r,R)$ and any $c$-optimal transport plan $\gamma\in\Gamma(\sigma,\mu_K)$, one has $$ \mathrm{spt}(\gamma) \subseteq D_\varepsilon, $$ where $D_\varepsilon = \{ (x,n) \in (\mathcal{S}^{d-1})^2 \mid d_M(x,n) \leq \pi/2 - \varepsilon \}.$ \end{lemma} \begin{proof} By hypothesis, $r \leq \rho_K(x) \leq R$ for all $x\in\mathcal{S}^{d-1}$, where $\rho_K$ is the radial function of the convex $K$. Since $$h_K(n) = \sup_{x \in \mathcal{S}^{d-1}} \rho_K(x) \sca{x}{n}, $$ we also have $r < h_K(n) < R$. Hence the two Kantorovich potential $\phi_K(n) = -\ln(h_K(n))$ and $\psi_K(x) = \ln(\rho_K(x))$ therefore satisfy $$ \phi_K(n) + \psi_K(x) \leq -\ln(r) + \ln(R) = \ln(R/r),$$ By strong Kantorovich duality $\phi_K(n) + \psi_K(x) = c(x,n)$ on $\mathrm{spt}(\gamma)$, which implies that $c$ is bounded by $\ln(R/r)$ on $\mathrm{spt}(\gamma)$, i.e. for any $(x,n) \in \mathrm{spt}(\gamma)$, one has $$ c(x,n) = - \ln(\max(0,\sca{x}{n})) \leq \ln(R/r), $$ implying that $\sca{x}{n}\geq r/R$ and $d_M(x,n) = \arccos(\sca{x}{n}) \leq \arccos(r/R)$. Finally $(x,n) \in D_\varepsilon$ with $\varepsilon = \pi /2 - \arccos(r/R)$. \end{proof} \begin{lemma}\label{lemma:convexity-gauss} The set $D_\varepsilon = \{ (x,n) \in (\mathcal{S}^{d-1})^2 \mid d_M(x,n) \leq \pi/2 - \varepsilon \}$ is symmetrically c-convex for the cost $c(x,n) = -\ln(\max(0,\sca{x}{n}))$. \end{lemma} \begin{proof} We have \[\nabla_x c(x,n) = - \frac{n}{\sca{x}{n}} + x \] and by inverting $- \nabla_x c(x, \cdot)$ we get \[\cexp_x(p) = \frac{p + x}{\sqrt{1 + \nr{p}^2}}\] Let $(x,y_0) \in D_\varepsilon$ and $(x,y_1) \in \mathrm{D}_\varepsilon$ then we have $y_t = \cexp_x(p_t)$ where $p_0 = - \nabla_x c(x,y_0)$ and $p_1 = - \nabla_x c(x,y_1)$ and $p_t = (1-t) p_0 + t p_1$. By symmetry we can consider that $\nr{p_t} \leq \nr{p_0}$, which implies $\frac{1}{\sqrt{1 + \nr{p_t}^2}} \geq \frac{1}{\sqrt{1 + \nr{p_0}^2}}$ and thus \begin{align*} d_M(x,y_t) &= \arccos(\sca{x}{y_t}) = \arccos\left(\frac{1}{\sqrt{1 + \nr{p_t}^2}}\right) \\ &\leq \arccos\left(\frac{1}{\sqrt{1 + \nr{p_0}^2}}\right) = d_M(x,y_0) \leq \frac{\pi}{2} - \varepsilon \qedhere \end{align*} \end{proof} \noindent \textit{End of proof of Proposition~\ref{prop:stabcurvature}.} The map $T_K$ (resp. $T_L$) is the optimal transport map between the uniform measure $\sigma$ on $\mathcal{S}^{d-1}$ and $\mu_K$ (resp. $\mu_L$) for the cost $c(x,n) = -\ln(\max(0,\sca{x}{n}))$. From Lemma~\ref{lemma:Depscurvature}, for any $n \in \mathcal{S}^{d-1}$ we have $(T_K(n),n) \in D_\varepsilon$ and $(T_L(n),n) \in D_\varepsilon$. Note that for $(x,n) \in D_\varepsilon$, one has $\sca{x}{n}>0$ and therefore $c(x,n) = -\ln(\sca{x}{n}) = -\ln(\cos(\d_M(x,n)))$. It has been shown in~\cite{gallouet2021regularity} that this cost satisfies (STwist) and \eqref{eq:MTWw} on $D_\varepsilon$. By Lemma~\ref{lemma:convexity-gauss} the set $D_\varepsilon$ is a symmetrically c-convex compact set. 
Finally it remains to show that $\psi_K$ is of class $\mathcal{C}^2$ and $T_K$ is of class $\mathcal{C}^1$. Since $\partial K$ is $\mathcal{C}^2$, its radial parametrization $\rho_K$ is also $\mathcal{C}^2$, so $\psi_K = \ln(\rho_K)$ of class $\mathcal{C}^2$. Furthermore $\overrightarrow{\rho_K}(x) = \rho_K(x) x$ is a $\mathcal{C}^1$ diffeomorphism. Since $K$ is stricly convex and $\partial K$ is of class $\mathcal{C}^2$, its associated Gauss map $\mathcal{G}_K$ is a $\mathcal{C}^1$ diffeomorphism. We thus have that $T_K = (\mathcal{G}_K \circ \overrightarrow{\rho_K})^{-1}$ is of class $\mathcal{C}^1$. By Corollary~\ref{cor:strongcconc}, we know that $\psi_K$ is strongly c-concave. We conclude by applying Theorem~\ref{th:stability-cconc}. \qed
{ "timestamp": "2022-07-25T02:13:02", "yymm": "2207", "arxiv_id": "2207.11042", "language": "en", "url": "https://arxiv.org/abs/2207.11042", "abstract": "The stability of solutions to optimal transport problems under variation of the measures is fundamental from a mathematical viewpoint: it is closely related to the convergence of numerical approaches to solve optimal transport problems and justifies many of the applications of optimal transport. In this article, we introduce the notion of strong c-concavity, and we show that it plays an important role for proving stability results in optimal transport for general cost functions c. We then introduce a differential criterion for proving that a function is strongly c-concave, under an hypothesis on the cost introduced originally by Ma-Trudinger-Wang for establishing regularity of optimal transport maps. Finally, we provide two examples where this stability result can be applied, for cost functions taking value +$\\infty$ on the sphere: the reflector problem and the Gaussian curvature measure prescription problem.", "subjects": "Numerical Analysis (math.NA); Optimization and Control (math.OC)", "title": "Strong c-concavity and stability in optimal transport", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575132207566, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.7091542099637433 }
https://arxiv.org/abs/2005.01045
Locally testable codes via high-dimensional expanders
Locally testable codes (LTC) are error-correcting codes that have a local tester which can distinguish valid codewords from words that are "far" from all codewords by probing a given word only at a very few (sublinear, typically constant) number of locations. Such codes form the combinatorial backbone of PCPs. A major open problem is whether there exist LTCs with positive rate, constant relative distance and testable with a constant number of queries.In this paper, we present a new approach towards constructing such LTCs using the machinery of high-dimensional expanders. To this end, we consider the Tanner representation of a code, which is specified by a graph and a base code. Informally, our result states that if this graph is part of a high-dimensional expander then the local testability of the code follows from the local testability of the base code.This work unifies and generalizes the known results on testability of the Hadamard, Reed-Muller and lifted codes on the Subspace Complex, all of which are proved via local self correction. However, unlike previous results, constant rounds of self correction do not suffice as the diameter of the underlying test graph can be logarithmically large in a high-dimensional expander and not constant as in all known earlier results. We overcome this technical hurdle by performing iterative self correction with logarithmically many rounds and tightly controlling the error in each iteration using properties of the high-dimensional expander.Given this result, the missing ingredient towards constructing a constant-query LTC with positive rate and constant relative distance is an instantiation of a base code that interacts well with a constant-degree high-dimensional expander.
\section{Local Testability in Vector Spaces} \label{sec:applications} In this section we demonstrate how the main theorem fits in with, and generalizes, the known results on testability of Reed-Muller codes. In this case the MAS is the Grassmannian complex MAS described in \pref{lem:Grassmann-is-MAS} for \(V=\mathbb{F}_p^n\) and \(T,K,S\) being the collections of all affine subspaces of dimension \(q_0,q_1,q_2\) respectively. We define the code on \(V\) by lifting base codes \(\sett{C_t}{t \in T}\). Namely \[ C = \sett{w:\mathbb{F}_p^n\to\mathbb{F}_p }{\rest{w}{t} \in C_t, \; \forall t\in T }. \] One example of such a code, is the \((n,r)\)-Reed-Muller code on \(\mathbb{F}_p^n\). This code consists of all polynomials of degree \(\leq r\). When \(n=1\) we call this the Reed-Solomon code. Take \(T\) to be the set of all affine lines (i.e. \(q_0=1\)), and let \(C_t\) be the \(r\)-Reed-Solomon code on every line. Lifting this code to \(V\) results in all functions \(w:\mathbb{F}_p^n \to \mathbb{F}_p\) so that for every line \(t \in T\), \(\rest{f}{t}\) is a function of degree at most \(r\). For some parameters \(n,r,p\) this results in the \((n,r)\)-Reed-Muller code. Surprisingly, \cite{GuoKS2013} showed that for some other parameters \(r,n,p\) the code lifted from the \(r\)-Reed-Solomon code, contains more than the \((n,r)\)-Reed-Muller code. Nevertheless, these codes are locally testable as well \cite{GuoHS2015,HaramatyRS2015}. Our main theorem states that to prove local testability of \(C\) it is enough to prove that \(C_s = \sett{w:s \to \mathbb{F}_p }{\rest{w}{t} \in C_t, \; \forall t\in T, t\subset s } \) is locally testable, for to each subspace \(s \in S\). This gives rise to testability results for Reed-Muller codes (which are well studied, see \cite{RubinfeldS1996, RazS1997, AroraS2003}) as well as to lifted codes as were studied in \cite{GuoKS2013} (given of course, that we check their local testability in a some small fixed space). Moreover, this statement continues to hold for more general sets of base codes \(\sett{C_t}{t \in T}\): If the lifts of \(\sett{C_t}{t\in T}\) to dimension \(q_2\) subspaces are locally testable (with good enough parameters), then the lifted code to dimension \(n\) is also locally testable. This is particularly useful in the regime where \(q_0,q_2\) are fixed, and \(n\) tends to infinity. This includes the examples above, but is a more general statement. \begin{theorem} \label{thm:tanner-grassmann} There is a universal constant \(\alpha>0\) so that the following holds. Let \(q_0<q_1<q_2<n\) be as above, and assume \(q_2 \geq 3q_1+2\). Let \(p\) be any prime power. Let \(X=(V,T,K,S)\) be as above. Let \(\sett{C_t}{t\in T}\) be a set of base codes, and suppose that there exists some \(\delta > 0\) and \(\rho \geq \frac{64 p^{q_1-q_0}}{\alpha \delta^2}\) so that: \begin{enumerate} \item For any \(q_1\) dimensional space \(k \in K\), \(C_k\) has distance \(\geq \delta\).\footnote{\cite{GuoKS2013} showed this holds, for example, whenever the base codes \(C_t\) themselves have distance \(\geq \delta + \frac{1}{p^{q_0}}\).} \item For every \(q_2\) dimensional space \(s \in S\), \(C_s\) is \(\rho\)-locally testable. \end{enumerate} Then for any \(n > q_2\), the lift of \(\sett{C_t}{t\in T}\) to \(\mathbb{F}_p^n\) is \(\frac{\rho \delta^2 \alpha}{16}\)-locally testable. \end{theorem} The constant \(\alpha\) doesn't depend on any of the other parameters, nor on the field size. We encourage the readers to think of \(\delta = \Omega(1)\). 
Then for every fixed dimensions \(q_0,q_1\) and field size \(p\) there is some \(\rho\), so that for every lifted code that is \(\rho\)-locally testable on spaces of dimension \(q_2\), the code is also \(\Omega(\rho)\)-locally testable on all spaces of dimension \(n>d\) (for a large enough \(\rho\)). Note that this theorem applies both to the regime where the field size is small (e.g. \(p=2,3\)), and where the field size goes to infinity. When \(p\) grows, the conditions of the theorem become easier to satisfy, that is, that the lower bound on \(\rho\) becomes smaller as well. \begin{proof}[Proof of \pref{thm:tanner-grassmann}] Let \(\alpha\) be the constant stated in \pref{lem:Grassmann-is-MAS}. The system \(X=(V,T,K,S)\) defined above is a \((p^{q_0-q_1},\delta, \delta \alpha)\)-MAS, by \pref{lem:Grassmann-is-MAS}, for that \(\alpha\). Denote by \(C\) the lift of \(\sett{C_t}{t \in T}\) to \(\mathbb{F}_p^n\). This code satisfies the distance and local local testability properties: \begin{enumerate} \item The lift of \(\sett{C_t}{t\in T}\) to an \(q_1\) dimensional space \(k \in K\) has distance \(\geq \delta\). \item The lift of \(\sett{C_t}{t\in T}\) to a \(q_2\)-dimensional space \(s \in S\) is \(\rho\)-locally testable. \end{enumerate} Hence by \pref{thm:LLTC-imples-GLTC}, this code is \(\frac{\rho \delta^2 \alpha}{16}\)-locally testable. \end{proof} \section{Introduction} \label{sec:intro} In this work, we study an approach to constructing locally testable codes (LTCs) based on high-dimensional expansion. LTCs are error-correcting codes that have a local tester which can test if a given word is a valid codeword or far (in Hamming distance) from all codewords, by probing the given word only at a very small (sublinear, typically constant) number of locations. Reed-Muller codes were the first codes shown to be locally-testable \cite{FriedlS1995,RubinfeldS1996}. These codes are based on low degree polynomial functions, and have inverse polynomial rate. Later on, LTCs with inverse poly-logarithmic rate were constructed by \cite{BenSassonS2008,Dinur2007}. Obtaining an LTC family with rate that is not vanishing is a major open question in this area. Such codes are known as ``good'' LTCs or $c^3$-LTCs since they have \textbf{c} onstant rate, \textbf{c} onstant relative distance, and testable with a \textbf{c} onstant number of queries~\cite{Goldreich2010}. This question is interesting in its own right, and also could potentially lead towards constructing linear-length PCPs (as LTCs are the combinatorial backbone of all PCP constructions). The problem of constructing $c^3$-LTCs is particularly difficult as we do not know if such good codes exist, even non-explicitly (say using a probabilistic argument). The difficulty stems from the fact that local testability requires redundancy in the constraints. In known LTCs, the constraints are highly overlapping, a property that in the past went hand in hand with relatively {dense} families of constraints. Alas this density seems to significantly limit the rate. In contrast, high-dimensional expanders give {sparse} families of subsets that are heavily overlapping. Perhaps if we manage to find appropriate constraints on these subsets we may find higher rate LTCs. In this work, the vague notion of ``overlapping constraints'' is captured through so-called agreement-expansion (which will be formally defined below). 
Informally speaking, we show that if an error-correcting code is defined through a collection of local constraints that {\em sit on an agreement expander}, then to prove local testability of the entire code it suffices to prove local testability of the local components (which are of merely constant size in the case of constant-degree agreement expanders). This is similar in spirit to recent applications of high-dimensional expanders towards proving other local-to-global results. This passing from local to global is particularly important because known constructions of high-dimensional expanders are very difficult to analyze on a global level. So far, successful analyses focused on the local structure (in neighborhoods, or so-called links) of these objects. Through this work, the task of constructing global LTCs is reduced to the task of constructing LTCs on the local structure, which appears to be a much more reasonable task. This work can be viewed as providing a generic scheme for constructing an LTC on a high-dimensional expander (or an agreement expander), and the (big) missing ingredient is an appropriate instantiation. We comment that the flagship example of an LTC, namely Reed-Muller codes, can be viewed as an instantiation of this scheme, with the underlying agreement expander being the Grassmannian complex and the base code being the Reed-Solomon code (see \pref{sec:applications}). The hope is that replacing the ``dense'' Grassmannian complex by a bounded-degree complex, together with finding an appropriate base code, could potentially lead to a $c^3$ LTC. \paragraph{Tanner Codes} To elucidate the main result, we begin by recalling a well-studied family of codes, the \emph{Tanner codes}~\cite{Gallager1960,Tanner1981}. A Tanner code $C \subseteq \{0,1\}^n$ is given by a family of (small, often constant-sized) subsets $t_1,\ldots,t_m \subset [n]$ and for each subset a base code $C_{t_i}\subset \{0,1\}^{t_i}$. A string $w\in \{0,1\}^n$ is in the code $C$ if for each $i$, $\rest{w}{t_i}\in C_{t_i}$.\footnote{A Tanner code is equivalently described on a bipartite graph (called the Tanner graph) with $n$ right vertices corresponding to the coordinates of the code and $m$ left vertices corresponding to the sets $t_i$, with an edge between $v$ and $t_i$ if $v\in t_i$. } Many known codes, including Reed-Muller codes, lifted codes, tensor codes, and expander codes, are in fact Tanner codes. In all of these cases, there is a single base code $C_0$ such that $C_{t_i} = C_0$ for all $i$, but this need not be the case. The Tanner representation of a code also gives a natural candidate for a local test for checking whether a given word $w\in \{0,1\}^n$ is in the code. \textbf{Natural Tanner Test}: Choose a random $i\in [m]$ and accept iff $\rest{w}{t_i}\in C_{t_i}$. We say that $C$ is $\rho$-locally-testable with the natural tester if \[ \rho\cdot \dist(w,C) \le \prob{\text{Test fails}}.\] A family of codes is a locally testable code (LTC) if it satisfies the above inequality for some test (not necessarily the natural Tanner test) with a constant $\rho$ (that does not decrease with the block length of the code). Many Tanner codes, including expander codes and random LDPC codes, that are very good in terms of rate and distance, {\em and} can be characterized by ``low density'' constraints (that look at only a constant number of bits in the codeword) fail quite miserably at being LTCs~\cite{BenSassonHR2005}. 
\\ Imagine that in addition to $T=\set{t_1,\ldots,t_m}$ we also have a family $S$ of subsets of $[n]$, such that each $s\in S$ has constant size, but slightly larger than the size of the $t_i$'s. For each such $s\in S$ we consider the `local' Tanner code \[C_s = \sett{w\in \{0,1\}^s}{w|_t \in C_t,\,\forall t\in T,\;t\subset s}.\] (Of course, $C_s$ is non-trivial only if there are some $t\in T$ contained in $s$.) In this work, we show that if for each $s\in S$, the code $C_{s}$ itself is locally testable with the natural Tanner test, then the code $C$ too must be locally testable with respect to the natural Tanner test. This holds as long as we assume some nice structure on the families $S$ and $T$, namely that they are part of a ``multi-layered agreement sampler'', MAS for short, which is described below. Let us change point of view and look at the codes $\set{C_s}$ as a collection of base codes, giving rise to the Tanner code $C$. Our main result is that local testability of the base codes $C_s$ \emph{lifts} to local testability of the entire code $C$, assuming an expander-like MAS condition on the underlying Tanner graph. This is analogous to the celebrated expander codes \cite{SipserS1996} in which distance of the base codes gets lifted to distance of the entire code, assuming expansion of the underlying Tanner graph. Whereas expansion alone does not suffice for local testability, the MAS structure does. \paragraph{High-dimensional expanders and Agreement Expanders} There are several interesting and non-equivalent definitions for high-dimensional expanders, the two main ones being topological definitions of coboundary or cosystolic expansion \cite{LinialM2006,Gromov2010,DotterrerKW2018}, and, more relevant to this work, random walk definitions either locally at the link level \cite{KaufmanM2017, DinurK2017} or globally \cite{DiksteinDFH2018, KaufmanO2017}. Without going into details, high-dimensional expansion has already been shown to imply some surprising local to global theorems. For example the trickling down theorem of \cite{Oppenheim2018} proves global spectral expansion using local spectral expansion in the links (which are the neighborhoods of individual vertices). Another example is the list decoding of \cite{DinurHKNT2019} which deduces global list decoding from list-decoding on the local pieces. Yet another example, which is crucial for this work, is that high-dimensional expanders give rise to agreement expanders~\cite{DinurK2017,DiksteinD2019}. An agreement expander allows one to stitch together many mostly-consistent local functions into a single global function. We elaborate a little more on this notion. Let $V$ be a ground set of $n$ elements, and let $ S$ be a collection of subsets of $V$ of some fixed size. Let ${\mathcal{A}}$ be a graph whose vertices are the subsets in $S$, and each edge $\set{s,s'}$ is labeled by a subset $k\subset s\cap s'$. Let $K$ be a collection of subsets labelling the edges. $(V,K,S,{\mathcal{A}})$ is an $\alpha$-agreement expander if whenever an ensemble has agreement value $1-\epsilon$ there exists a global function $F\colon V \to \{0,1\}$ such that $f_s = F|_s$ for all but at most $\epsilon/\alpha$ of $s\in S$. (See \pref{sec:ae} for the full definition). An agreement expander is given by $V,K,S$ and the edge-labelled graph ${\mathcal{A}}$. Suppose that for each $s\in S$ we are given a local function $f_s\in \{0,1\}^s$. 
The {\em agreement value} of the ensemble $\set{f_s}$ is the probability of $f_s|_k = f_{s'}|_k$ for a randomly chosen edge $\set{s,s'}_k$ (this is notation for an edge between $s,s'$ labeled by $k$) in the graph ${\mathcal{A}}$. Whenever there is a global function $F\colon V\to\{0,1\}$ such that $f_s = F|_s$ for all $s\in S$, the agreement value of $\set{f_s}$ is clearly $1$. We say that Agreement expanders have been studied and used in the LTC and PCP literature for years (under different names such as direct product tests or sometimes low degree tests). However, prior to the recent connection with high-dimensional expanders, the only known agreement expanders were relatively dense. The existence of sparse such objects seems promising and could potentially lead to LTCs with positive rate. This work shows how agreement expansion can be useful for constructing LTCs. \paragraph{Multilayered Agreement Samplers (MAS)} We now describe the MAS combinatorial structure needed for our LTC scheme. Let $V$ be a ground set of $n$ elements, and let $T,K,S$ be three families of subsets of $V$ of sizes $q_0<q_1<q_2$. The system $(V,T,K,S)$ is said to be a \((\lambda,\alpha)\) -\emph{MAS} if the following two conditions are met. \begin{itemize} \item $V,K,S$ are part of an $\alpha$-agreement-expander. \item The bipartite containment graph of \(T\) vs. \(K\) is a \(\lambda\)-sampler. \end{itemize} The above definition is stricter than what we actually need, see the formal more refined definition in \pref{def:mas}. We are now ready to state our main result. \paragraph{Main Result} Let $V,T,K,S$ be a \((\lambda,\alpha)\) -\emph{MAS}. Suppose that for each $t\in T$ we have a local code \(C_t\subset \{0,1\}^t\). Let $C \subset \{0,1\}^n$ be the Tanner code defined by $\set{C_{t}}$ for all $t\in T$. Namely, \[ C:= \sett{w\in \{0,1\}^V}{\rest{w}{t}\in C_t\hbox{ for every }t}.\] Similarly, for each $s\in S$, let $C_s$ be the Tanner code defined by $\sett{C_t}{t\subset s}$, namely, \[ C_s = \sett{w\in \{0,1\}^s}{\rest{w}{t}\in C_t\hbox{ for every }t\subset s},\] and similarly define for each $k\in K$, \(C_k = \sett{w\in \{0,1\}^k}{\rest{w}{t}\in C_t\hbox{ for every }t\subset k}\). \begin{theorem}\torestate{ \label{thm:main-hdx} Let $V,T,S$ be layers in a $(\lambda,\alpha)$-MAS satisfying $\lambda \leq \rho \delta \alpha/64$. Suppose $C_k\subset \{0,1\}^k$ has relative distance $\delta$ for all $k\in K$ and suppose that $C_s$ is $\rho$-locally testable with the natural Tanner tester. Then \(C\) is $\rho \delta \alpha/16$ locally testable (with the natural Tanner tester).} \end{theorem} We state our full main theorem in \pref{thm:LLTC-imples-GLTC}. \paragraph{Overview of proof} Our proof of local testability, like previous proofs of testability, goes via self correction. The main difficulty in our setting is that a single round of self-correction is insufficient to correct the word. Let \(w\) be a word that satisfies a \((1-\varepsilon)\)-fraction of the constraints in the Tanner graph. We would like to show that there exists a \(w^* \in C\) such that \(\dist(w,w^*) = O(\varepsilon)\). For specific codes, one could use the properties of the code to perform this self-correction (cf. Reed-Muller testing, one could use the properties of polynomials). However, we cannot resort to such properties since we are working in an abstract setting. Instead, we rely on simple majority decoding: each vertex takes a value that satisfies the majority of the constraints it participates in. 
The main engine driving our proof is agreement expansion. Our proof strategy is as follows: Construct a word \(w'\) from the received word \(w\) via self correction (or otherwise) and show \begin{enumerate} \item[(a)] \(w\) is close to \(w'\), and \item[(b)] \(w'\) is a valid codeword. \end{enumerate} Property (a) is easy to show if \(w'\) is constructed via self correction using majority decoding. Property (b) is not very hard in the context of Hadamard testing and Reed-Muller testing: every vertex participates in a constraint with every other vertex (indeed the diameter of the Tanner graph is a constant), hence one round of self-correction results in a valid codeword $w'$. However, since our proof is general enough to work even for constant-degree Tanner graphs wherein the diameter can be as large as logarithmic, one does not expect a single step of self correction via majority decoding to yield a codeword in a single step. \ Our proof instead relies on a novel iterative self correction procedure that slowly corrects a given word in logarithmically many iterations. A standard problem that arises when using iterative procedures is that the error grows linearly in the number of iterations, which is prohibitively expensive in our setting. We use the properties of MAS to show that the number of unsatisfied constraints by the self-corrected word $w'$ reduces by a constant factor in each iteration. This allows us to perform an arbitrary number of rounds in the iterative self-correction procedure till we reach a perfect codeword $w^* \in C$ (actually a logarithmic number of rounds will suffice). This type of argument is new in the context of locally testable codes. Given this we can proceed with the proof overview as follows. Since \(w\) satisfies \((1- \varepsilon)\)-fraction of the constraints, an averaging argument shows that a \((1-O(\varepsilon))\)-fraction of the \(s\)'s satisfy most of the constraints within them. Hence, by the local testability of the code $C_s$ we get that for most \(s\)'s, \(\rest{w}{s}\) is close to a local codeword, say \(w_s\in C_s\). Furthermore, it is not hard to show that these local codewords satisfy that for a typical $k \in K$ and \(s,s' \in S\) such that \(k \subset s\cap s'\), we have \(\rest{w_s}{k} \equiv \rest{w_{s'}}{k}\). In other words, the \(w_s\)'s satisfy the hypothesis of the agreement test. From the agreement expansion of the MAS, there exists a ``global'' word \(w'\) that explains most of the \(w_s\)'s. Furthermore, it is not hard to show that \(w'\) is close to the original word \(w\). We then use the sampler property of the MAS to show that \(w'\) violates {\em significantly fewer} constraints than \(w\) (in particular, \(w'\) violates at most \(\varepsilon/2\)-fraction of constraints). We iteratively apply the above self-correction procedure to get a sequence of words such that \(w^{(0)} := w,w^{(1)},w^{(2)},\ldots\) such that \(w^{(i)}\) violates at most \(\varepsilon/2^i\)-fraction of constraints and \(\dist(w^{(i)},w^{(i+1)}) = \bigO{\varepsilon}/2^i\). Since the fraction of violated constraints cannot infinitely decrease, we have that eventually for a large enough \(i\), \(w^*:=w^{(i)} \in C\) and \( \dist(w,w^*) \leq \sum_{j=0}^{i-1}dist(w^{(j)},w^{(j+1)}) = O(\varepsilon)\). \paragraph{Relation to previous work} We begin by recalling the history of LTCs and the close connection between PCP and LTC constructions. 
LTCs were first studied in the context of program checking by Blum, Luby and Rubinfeld~\cite{BlumLR1993} and Gemmell~{\em et al.}~\cite{GemmellLRSW1991}. The notion of LTCs is implicit in the work on locally checkable proofs by Babai~et al.\xspace~\cite{BabaiFLS1991} and subsequent works on PCPs. The explicit definition appeared independently in the works of Rubinfeld and Sudan~\cite{RubinfeldS1996}, Friedl and Sudan~\cite{FriedlS1995}, Arora's PhD thesis~\cite{Arora1994} and Spielman's PhD thesis~\cite{Spielman1995}. A formal study of LTCs was initiated by Goldreich and Sudan~\cite{GoldreichS2006}. Most known constructions of PCPs yield LTCs with similar parameters. In fact, there is a generic transformation to convert a PCP of proximity (which is a PCP with more requirements) into an LTC with comparable parameters~\cite{BenSassonGHSV2006,Trevisan2004}. See a survey by Goldreich~\cite{Goldreich2010} for the interplay between PCP and LTC constructions. In fact, the current best construction of LTCs (constant-query, constant fractional distance and inverse polylogarithmic rate) is obtained from the PCP constructions of Ben-Sasson and Sudan~\cite{BenSassonS2008} and Dinur~\cite{Dinur2007}. PCP-based constructions are unlikely to yield LTCs with constant rate since PCP constructions typically involve at least a logarithmic overhead. Nevertheless LTC constructions that aren't derived from PCPs perhaps have a better chance at achieving the coding-theory gold-standard of positive rate and distance. Agreement expansion and the multilayered set system structure play a central role in our proof of local testability. Another application of agreement expansion towards local testability was studied in \cite{DinurHKR2019}, where it was used to enhance the local testability of a code in the context of the subspaces (Grassmannian) complex. We remark that use of such multilayered agreement samplers in the context of locally-testable codes is actually implicit in many previous constructions of locally testable codes. The Raz-Safra \cite{RazS1997} proof of the local testability of the Reed-Muller codes works with points-lines-planes structure, a subgraph of the Grassmannian complex which is an excellent agreement expander as explained in detail in \pref{sec:applications}. The original proof due to Blum, Luby and Rubinfeld \cite{BlumLR1993} (as well as subsequent improvements due to Coppersmith) of the local testability of the Hadamard codes as well as Kaufman and Sudan's proof of testability of affine-invariant codes~\cite{KaufmanS2008}, relies on the three-layered structure comprising of the points, the three-point tests and certain nine-point sets, sometimes referred to as "magic squares"~\cite{KaufmanS2008}. Our proof makes explicit this use of MAS to construct LTCs and shows that four-layered MAS are sufficient to transform ``local'' local testability to ``global'' local testability. In this sense, our proof can be viewed as bringing together these seemingly different proofs of local-testability under a common umbrella. We already remarked that our construction has a similar paradigm as the Sipser-Spielman construction of expander codes~\cite{SipserS1996} which demonstrates that if the base code has good distance then the Tanner code also has good distance provided the graph is an expander. 
Another construction of the same flavor is the result of Dinur et al.~\cite{DinurHKNT2019} that demonstrates that if the local code is efficiently list-decodable then so is the global code defined by ABNNR distance amplification property via an expander~\cite{AlonBNNR1992}, provided the expander is part of a large high-dimensional expander. \paragraph{Further Discussion and Future Work} This work gives a general scheme for constructing an LTC. It needs to be instanciated with an appropriate MAS and base codes. As mentioned earlier, and explained in detail in \pref{sec:applications}, one such instanciation is to choose the Grassmannian complex as the MAS, and the Reed Solomon code as the base codes. This gives the well-studied locally testable codes called Reed-Muller codes, as well as the more recent so-called lifted codes. The most interesting direction is to instantiate this scheme with an MAS that comes from some bounded-degree high-dimensional expander, and to combine it with appropriate choice of locally testable base code. The main hurdle in choosing the base codes is to be able to certify that the resulting Tanner code maintains positive rate. In some similar situations this is done by a simple counting of the number of constraints. However, such an argument cannot work in the setting of LTCs, and we leave it as an open question. \subsection{Proof of the Main Theorem} \begin{proof}[Proof of \pref{thm:LLTC-imples-GLTC}] Let \(w_0: V \to \Sigma\) be some word so that \[Fail(w_0) \stackrel{\mathrm{def}}= \Prob[t \in T]{\rest{w_0}{t} \notin C_t} = \varepsilon.\] We need to find a word \(w^*\) so that \(\dist(w_0,w^*) \leq \frac{16\varepsilon}{\rho \delta \alpha}\). We will find a word \(w_1:V \to \Sigma\) so that \(\dist(w_0,w_1) = \frac{8 \varepsilon}{\rho \delta \alpha}\), and \[ Fail(w_1) = \Prob[t \in T]{\rest{w_1}{t} \notin C_t} \leq \frac{1}{2}\varepsilon.\] As a first step we define a function ensemble \(\sett{f_s}{s \in S}\) so that \(f_s \in C_s\) is the closest code word to \(\rest{w_0}{s}\) (ties broken arbitrarily). For each $k\subset s,s'$ both $f_s|_k\in C_k$ and $f_{s'}|_k\in C_k$, and since $C_k$ is a code with relative distance $\delta$, we get that $\set{f_s}$ is a \(\delta\)-ensemble. We claim that the ensemble passes the agreement test with high probability. \begin{claim} \label{claim:ensemble-has-high-agreement} \[\Prob[\set{s_1,s_2}_k \sim {\mathcal{A}}]{\rest{f_{s_1}}{k} = \rest{f_{s_2}}{k}} = 1- \frac{4 \varepsilon}{\rho \delta}.\] \end{claim} As there is an agreement expander \({\mathcal{A}}\) that is \((K,\alpha)\)-sound with respect to \(\delta\)-ensembles, there exists some function \(w_1:V \to \Sigma\) so that \begin{equation} \label{eq:agreement-guarantee} \Prob[k \subset s]{\rest{w_1}{k} = \rest{f_s}{k}} = 1-\frac{4 \varepsilon}{\rho \delta \alpha}. \end{equation} We claim that \(w_0\) is close to \(w_1\), and that \(w_1\) fails the test with probability \(\leq \frac{\varepsilon}{2}\). \begin{claim} \label{claim:new-is-close-to-old} \(\dist (w_0,w_1) \leq \frac{8 \varepsilon}{\rho \delta \alpha}.\) \end{claim} \begin{claim} \label{claim:new-function-rarely-fails} \(Fail(w_1) \leq \frac{1}{2}\varepsilon.\) \end{claim} Modulo \pref{claim:new-is-close-to-old} and \pref{claim:new-function-rarely-fails}, we repeat the correction process \(poly(\log (\min_{t \in T} \prob{t}))\) times. In the beginning of the \(i\)-th iteration we start with \(w_i\) that fails the test with probability \( \leq \varepsilon/2^i\). 
In the end of the iteration, we find \(w_{i+1}\) that fails the test with probability \( \leq \varepsilon/2^{i+1}\), and so that \(\dist(w_i,w_{i+1})\leq \frac{8 \varepsilon}{ \rho \delta \alpha 2^{i}}\). Thus we obtain a sequence of functions \(w_0,w_1,w_2,..., w_r\) that ends with \(w_r = w^*\) that always passes the test. The distance we accumulate from \(w_0\) is \[ \dist (w_0,w_r) \leq \sum_{i=0}^{r-1}\dist(w_i, w_{i+1}) \leq \frac{8}{\rho \delta \alpha } \sum_{i=0}^\infty \frac{1}{2^i} = \frac{16}{\rho \delta \alpha}. \] \end{proof} \begin{proof}[Proof of \pref{claim:ensemble-has-high-agreement}] By the local testability of the base code $C_s$, \begin{equation} \label{eq:small-S-local-distance} \Ex[s]{\dist (\rest{w_0}{s},C_s)} \leq \rho^{-1} \Ex[s]{\Prob[t \subset s]{\rest{w_0}{t} \notin C_t}} = \rho^{-1} \Prob[t]{\rest{w_0}{t} \notin C_t} \leq \frac{\varepsilon}{\rho} . \end{equation} As \(f_s\) is closest code word to \(\rest{w_0}{s}\), \[ \frac{\varepsilon}{\rho} \geq \Ex[s]{\dist (\rest{w_0}{s},f_s)} = \Ex[s]{\Ex[k \subset s]{\dist (\rest{w_0}{k},\rest{f_s}{k})}}.\] By Markov's inequality, with probability \(1-\frac{4\varepsilon}{\rho \delta}\) of sampling \(\set{s_1,s_2}_k \sim {\mathcal{A}}\), it holds that \(\dist(\rest{w_0}{k},\rest{f_{s_i}}{k}) < \frac{\delta}{2}\) where \(f_{s_i}\) is the closest codeword in \(C_{s_i}\) to \(\rest{w_0}{s_i}\). By the local distance assumption, \(C_k\) has distance \(\delta\), and if \(\dist(\rest{f_{s_1}}{k},\rest{f_{s_2}}{k}) < \delta\), then \[\rest{f_{s_1}}{k} = \rest{f_{s_2}}{k}.\] \end{proof} \begin{proof}[Proof of \pref{claim:new-is-close-to-old}] We note that \[\dist(w_0,w_1) = \Ex[s]{\dist(\rest{w_0}{s},\rest{w_1}{s})}.\] We show closeness by the triangle inequality. Fix \(s \in S\), then \[\dist(\rest{w_0}{s},\rest{w_1}{s}) \leq \dist(\rest{w_0}{s},f_s) + \dist(f_s,\rest{w_1}{s}).\] By \eqref{eq:small-S-local-distance}, \[\Ex[s]{\dist (\rest{w_0}{s},f_s)} \leq \frac{\varepsilon}{\rho}.\] By the \((K,\alpha)\)-soundness of the agreement expander \({\mathcal{A}}\), \[ \dist(\rest{w_1}{s},f_s) = \Ex[k \subset s]{\dist(\rest{w_1}{k},\rest{f_s}{k})} \leq \Prob[k \subset s]{\rest{w_1}{k} \ne \rest{f_s}{k}} = \frac{4 \varepsilon}{\rho \delta \alpha}.\] By the triangle inequality, and using the fact that both \(\delta,\alpha < 1\) \[\dist (w_0,w_1) \leq \frac{8 \varepsilon }{\rho \delta \alpha}.\] \end{proof} The proof of \pref{claim:new-function-rarely-fails} relies on the \(\lambda\)-sampling property of the MAS. \begin{proof}[Proof of \pref{claim:new-function-rarely-fails}] By assumption the containment graph between \(T\) and \(K\) is has the \(\lambda\)-sampling property. Let \(B = \sett{k \in K}{\forall s \supset k, \; \rest{f_s}{k} \ne \rest{w_1}{k} }\). We observe the following: \begin{enumerate} \item By the agreement property, \(\prob{B} \leq \frac{8\varepsilon }{\rho \delta \alpha}\), and without loss of generality \(\prob{B} \leq \frac{1}{2}\) (if we want to show that the code is \(\frac{\rho \delta \alpha }{16}\)-locally testable, it is enough to consider \(\varepsilon\) so that \(\frac{16 \varepsilon }{\rho \delta \alpha} \leq 1\)). \item If \(t \in T\) contributes to the failure (i.e \(\rest{w_1}{t} \notin C_t\)), then \(\rest{w_1}{k} \ne \rest{f_s}{k}\) for all \(k\supset t\) and \(s \supset k\). Thus \emph{all} its neighbours are in \(B\). \end{enumerate} Denote by \(N\) the set of \(t \in T\) so that all of \(t\)'s neighbours are in \(B\). 
By item \(2\) above we have that \(\Prob[t \in T]{\rest{w_1}{t} \notin C_t} \leq \Prob[t \in T]{N}\). We note that if we sample a neighbour of \(t\), we get some \(k \in B\) with probability \(1\geq \prob{B} + \frac{1}{2}\). Thus by the \(\lambda\)-sampling property, we have that \[ \Prob[t \in T]{N} \leq 4\lambda \frac{8 \varepsilon}{\rho \delta \alpha}.\] We chose \(\lambda \leq \frac{\rho \delta \alpha}{64}\), hence \(Fail(w_1) \leq \frac{1}{2}\varepsilon\). \end{proof} \begin{remark} The MAS has four layers. The vertex layer \(V\) and the layer \(T\) are required to define the lifted code itself. It is also natural to introduce a higher layer \(S\), since without any other requirements we can't expect any lifted code to be locally testable. However, the intermediate layer \(K\) is possibly unneeded. While it is a crucial part of the \emph{proof}, it is not needed for lifting the code, nor for the local tests. We believe it is interesting to understand whether it is enough to study a three-layered set system, namely \((V,T,S)\). Are there similar properties, in terms of agreement, sampling and expansion, that also give us a similar result? \end{remark} \section{Multilayer Agreement Samplers} \label{sec:MAS} \begin{figure} \centering \includegraphics[scale=0.4]{MAS.png} \caption{Multilayer Agreement Sampler} \label{fig:mas} \end{figure} The structure we use to construct locally testable codes has a sampler component and an agreement expander component, that sit together in four layers. We call these structures Multilayer Agreement Samplers. \begin{definition}[Multilayer Agreement Samplers (MAS)]\label{def:mas} Let \(\delta,\lambda,\alpha \geq 0\). Let \(V\) be a set of elements, and \(T,K,S \subset 2^{V}\) be families of subsets so that there is a non-degenerate Markov chain that samples \((v,t,k,s)\) from \(V,T,K,S\) respectively, so that \(v \in t \subset k \subset s\). (Spelling out the Markov chain requirement we have a distribution over $(v,t,k,s)$ such that the choice of $t$ is conditioned only on $v$, the choice of $k$ is conditioned only on $t$, and finally the choice of $s$ is conditioned only on $k$.) We say that \((V,T,K,S)\) are a \((\lambda,\delta,\alpha)\) -\emph{MAS} if \begin{enumerate} \item There is an agreement expander \({\mathcal{A}}\) with vertex set \(S\) and edge labels \(K\) so that: \begin{itemize} \item The marginal distribution of sampling a labeled edge \(\set{s,s'}_k\) in \({\mathcal{A}}\), and returning \(s,k\), is the same as the marginal distribution \(s,k\) of the Markov chain. \item \({\mathcal{A}}\) is \((K,\alpha)\)-sound for \(\delta\)-ensembles. \end{itemize} \item The bipartite containment graph of \(K\) vs. \(T\), is a \(\lambda\)-sampler. Here the probability of sampling an edge \((k,t)\) is the probability of sampling \((k,t)\) together in the Markov chain. \end{enumerate} \end{definition} A natural example for an MAS is the Grassmannian complex, that is, a four layer structure where \(V = \mathbb{F}_p^n\) and \(T,K,S\) are affine spaces of \(\mathbb{F}_p^n\) of fixed dimensions. We elaborate on this example in the subsection below. The Grassmannian complex is dense, that is, the number of subspaces grows exponentially with the dimension. No known codes on the Grassmannian complex have good rate. Currently known constant degree MASs arise from high-dimensional expanders which are simplicial complexes. However, we cannot use MASs that are directly simplicial complexes to construct any code with non-trivial rate and distance. 
It is conceivable that high-dimensional expanders that are not simplicial complexes\footnote{These can still arise from high dimensional expanders. For example an MAS whose subsets are {\em links} of a high dimensional expander.} may yield good LTCs. \subsection{MASs coming from the Grassmannian Complex} The set system for the Grassmannian MAS corresponds to points and affine subspaces of a vector space. Formally, let \(p\) be some prime power, and \(q_0<q_1<q_2<n\) be some integers greater than \(0\). Our ground set is \(V = \mathbb{F}_p^n\), and over it we define the following set system \(X=(V,T,K,S)\) where \(T,K,S\) consist of all affine subspaces of dimensions \(q_0,q_1\) and \(q_2\) respectively. The Markov process of this set system is to sample \((v \in t \subseteq k \subseteq s)\) uniformly. The edge distribution of the test graph \(\set{s,s'}_k \sim {\mathcal{A}}\) is to sample a subspace \(k \in K\), and then two subspaces \(s,s'\in S\) independently, given that \(s,s'\supset k\). We call this the \(q_2,q_1\)-agreement test. We claim that this set system is an MAS: \begin{lemma} \label{lem:Grassmann-is-MAS} There is a universal constant \(\alpha > 0\) so that the following holds. Let \(q_0<q_1<q_2<n\) be as above, and assume \(q_2 \geq 3q_1+2\). Let \(p\) be any prime power. Let \(X=(V,T,K,S)\) be as above. Then \(X\) is a \((p^{q_0-q_1},\delta, \delta \alpha)\)-MAS for every \(\delta > 0\). \end{lemma} The constant above does not depend on \(p\), nor on \(q_0,q_1,q_2,n\). \begin{proof} The sampling properties of the layers of a Grassmannian complex are folklore: \begin{fact}\torestate{ \label{fact:grassmann-samplers} Let \(G=(K,T,E)\) be the graph where \(K\) are subspaces of \(\mathbb{F}_p^n\) of dimension \(q_1\) and \(T\) are subspaces of dimension \(q_0\), and \((t,k) \in E\) if \(t \subset k\), with uniform weights. This graph is a \(p^{-|q_0-q_1|}\)-sampler. } \end{fact} Agreement of the \(q_2,q_1\)-agreement test graph was proven by \cite{DiksteinD2019} (Theorem 6.2). \begin{theorem}[Agreement for Grassmannian] \torestate{\label{thm:agreement-on-Grassmann-affine} There exists a constant $\alpha > 0$ such that for every prime power $p$, $\delta > 0$, and integers $q_1,q_2,n$ such that $3q_1+2 < q_2 \leq n$ the following holds. The $q_2,q_1$-Grassmannian agreement test is $(K,\delta \alpha)$-sound for $\delta$-ensembles.} \end{theorem} Combining these two statements, we get that there exists some \(\alpha>0\) so that for every \(\delta>0\), the layers \((V,T,K,S)\) defined above are a \((p^{q_0-q_1},\delta,\delta\alpha)\)-MAS. \end{proof} In \pref{sec:applications} we use our main theorem, \pref{thm:LLTC-imples-GLTC}, to show that local testability of lifted codes on the Grassmannian complex is implied by the local testability of the base code. \section{Main Theorem - Locally Testable Codes on MASs} Given an MAS \((V,T,K,S)\) and a set of base codes \(\sett{C_t}{t\in T}\), the \emph{lifted code} to \(V\) is \[C = \sett{w:V\to \Sigma}{\rest{w}{t}\in C_t, \forall t\in T}.\] Similarly, for every \(s \in S\) or \(k \in K\), the local lifts to \(s\) or \(k\) are \[C_s = \sett{w:s\to \Sigma}{\rest{w}{t}\in C_t, \forall t\subseteq s}, \; C_k = \sett{w:k\to \Sigma}{\rest{w}{t}\in C_t, \forall t\subseteq k}.\] The next theorem is a reformulation of \pref{thm:main-hdx}. \begin{theorem}[Main] \label{thm:LLTC-imples-GLTC} Let \(V\) be a finite set and \(\rho,\delta,\lambda,\alpha \geq 0\) so that \(\lambda \leq \frac{\rho \delta \alpha}{64} \).
Let \(X = (V,T,K,S)\) be a \((\lambda,\delta,\alpha)\)-MAS. Let \(\sett{C_t}{t\in T}\) be a set of base codes, and let \(C\) be the lifted code. Suppose that \begin{enumerate} \item \emph{Local Distance:} \(C_k\) has distance \(\delta\) for every \(k \in K\). \item \emph{Local local testability:} For every \(s \in S\), the code \(C_s\) is \(\rho\)-locally testable with respect to sampling \(t \in T\) given that \(t \subset s\). \end{enumerate} Then \(C\) is \(\frac{\rho \delta \alpha}{16}\)-locally testable with respect to the distribution of choosing \(t \in T\). \end{theorem} We encourage the reader to think of \(\lambda,\alpha\) as some fixed constants of the set system. Then the theorem states that if \(\set{C_k}\) have large relative distance \(\delta = \Omega(1)\), and \(\set{C_s}\) are \(\rho\)-locally testable for a large enough \(\rho\), then the lifted code is \(\Omega (\rho)\)-locally testable. \section{Preliminaries} \label{sec:preliminaries} \subsection{Error Correcting Codes} Let \(\Sigma\) be some finite set. A code is some \(C \subseteq \Sigma^n\). Let \(p\) be a prime power and let \(\Sigma = \mathbb{F}_p\), so that \(\Sigma^n = \mathbb{F}_p^n\) is an \(n\)-dimensional vector space over the field with \(p\) elements. We say that \(C\) is a \emph{linear code} when \(C\) is a subspace of \(\mathbb{F}_p^n\). The rate of the code is \(rate(C) = \frac{\log_p |C|}{n}\). It is convenient to think about \(\mathbb{F}_p^n\) as functions \(f:[n] \to \mathbb{F}_p\). The distance between two functions \(f,g:[n] \to \Sigma\), denoted by \(\dist(f,g)\), is the fraction of \(x\in [n]\) so that \(f(x)\ne g(x)\). The distance of a code is defined to be \(\dist(C) = \min_{f,g\in C, f\ne g} \dist(f,g)\). When \(C\) is linear, this is the same as \(\min_{0\ne f\in C}\dist(f,0)\). \subsection{Tanner Codes} A Tanner code \cite{Gallager1960,Tanner1981} over an alphabet $\Sigma$ (also called a lifted code) is defined through two objects: a family \(T\) of \(q\)-element subsets of \([n]\), and with each subset $t\in T$ a base code \(C_t \subset \Sigma^t\). The code \(C \subseteq \Sigma^n\) is given by \[ C = \sett{w \in \Sigma^n}{\rest{w}{t} \in C_t, t \in T}.\] The family \(T\) is often described through a bipartite graph on vertex sets \([n]\) and \(T\) connecting \(t \in T\) to \(i \in [n]\) whenever \(i \in t\). Several well-known families of codes can be constructed as Tanner codes, including tensor codes, Reed-Muller codes, and the codes considered by Sipser and Spielman~\cite{SipserS1996}. A family of Tanner codes that is especially related to our context is the family of so-called lifted codes. Lifted codes were first introduced by Ben-Sasson, Maatouk, Shpilka and Sudan~\cite{BenSassonMSS2011} and their local testability was studied by Guo, Kopparty and Sudan~\cite{GuoKS2013}. These codes can be described as Tanner codes where \([n]\) is identified with points of a vector space and the family \(T\) contains all possible affine subspaces of a prescribed dimension \(m\). The base code \(C_0\) is taken to be affine invariant. A prime example for such codes is the Reed-Muller code. \subsection{Locally Testable Codes} A \((Q,\rho)\)-local tester for the code \(C\) is a probabilistic oracle algorithm that determines whether a word is in the code. It does the following: given oracle access to a function \(f:[n] \to \Sigma\), it queries \(f\) at \(Q\) input locations. Then if \(f \in C\) it accepts with probability \(1\). If \(f \notin C\) it rejects with probability at least \(\rho \cdot \dist(f,C)\).
Here \(\rho \in (0,1)\) is some constant parameter, and \(\dist(f,C)\) is the distance between \(f\) and the closest codeword to it in \(C\). For linear codes \(C\), \cite{BenSassonHR2005} showed that without loss of generality, we can assume that the local tester picks a random subset \(t \subset [n]\) according to some distribution, and accepts if and only if \(\rest{f}{t} \in \rest{C}{t}\) (that is, there exists some codeword \(f' \in C\) so that \(\rest{f}{t} = \rest{f'}{t}\)). Thus we formally define locally testable codes as follows: \begin{definition}[Locally Testable Codes] Let \(V\) be some finite set, and let \(C\) be some linear code on \(V\). Let \(D\) be some distribution on subsets of \(V\), and suppose every set \(t \sim D\) is of size at most \(Q\). Let \(\rho > 0\). We say \(C\) is \((Q,\rho)\)-testable with respect to \(D\) if \[ \rho \cdot \dist(f,C) \leq \Prob[t \sim D]{\rest{f}{t} \notin \rest{C}{t}}.\] \end{definition} An alternate way of describing a locally testable code is using the Tanner graph \(G= ([n],T,E)\) representation of a code. In this representation, \([n]\) corresponds to the \(n\) input locations in the codeword. \(T\) corresponds to the subsets of indices that are queried by the local tester. We connect \(i \in [n]\) and \(t\in T\) if \(i \in t\). A local tester that corresponds to this representation picks a random constraint \(t \in T\) and checks if the corresponding constraint is satisfied. \subsection{Sampler Graphs} Let \(G = (U,V,E)\) be a bipartite graph, and assume that each edge carries a non-negative weight \(p_{uv}\) such that \(\sum_{uv \in E} p_{uv} = 1\). This probability distribution induces a marginal probability distribution on \(U\) (and similarly on \(V\)) given by \(p_u =\sum_{uv\in E}p_{uv}\). For every set \(B \subseteq U\) (and \(V\) respectively) we denote by \(\prob{B} = \Prob[u \in U]{u \in B}.\) As a slight abuse of notation, for a set \(B \subseteq U\) and a vertex \(v_0 \in V\) we denote by \[\cProb{}{B}{v_0} = \cProb{uv \in E}{u \in B}{v=v_0}.\] A sampler graph is a graph where for all \(B \subseteq U\), most of the vertices \(v_0 \in V\) have that \(\prob{B} \approx \cProb{}{B}{v_0}\). \begin{definition}[\(\lambda\)-sampler] Let \(G = (U,V,E)\) be a bipartite graph. For any \(B \subset U\), we define \(N = N(B,\delta) = \sett{v \in V}{\cProb{u \in U}{u \in B}{u \sim v} > \prob{B} + \delta}\). For \(\lambda \in (0,1)\) we say \(G\) is a \(\lambda\)-sampler if for every \(B \subseteq U\) and every \(\delta > 0\), \[\prob{N} \leq \frac{\lambda}{\delta^2}\prob{B}.\] \end{definition} There is a tight connection between expander bipartite graphs and sampler graphs. For more on this, see \cite{Goldreich2011-samp}. \subsection{Agreement Expanders}\label{sec:ae} \def{\mathcal{A}}{{\mathcal{A}}} Let \(V\) be a finite universe, $S$ a collection of subsets of $V$, and for each subset $s\in S$, a local function $f_s\in \Sigma^s$. An ensemble $\set{f_s}$ is \emph{perfectly global} if it comes from a single global function $w:V\to \Sigma$, namely, \(f_s = \rest{w}{s}\) for all $s$. We denote by $\mathcal{G}$ the collection of all perfectly global ensembles. An agreement tester is given by a non-negatively weighted graph ${\mathcal{A}}$ with vertex set $S$, and such that each edge $\set{s,s'}$ is labelled by some $k\subseteq s\cap s'$. Without loss of generality we require that the weights sum to $1$, so that the edges form a distribution over pairs $s,s'$.
Given a collection $\set{f_s}$ of local functions, the tester selects an edge $\set{s,s'}_k$ at random, and accepts if $f_s(v) = f_{s'}(v)$ for each $v\in k$. We call this {\em the value of $\set{f_s}$ under ${\mathcal{A}}$} and denote it by ${\mathcal{A}}(\set{f_s})$, \[ {\mathcal{A}}(\set{f_s}) := \Pr_{s,s'\sim {\mathcal{A}}} [ f_s(v) = f_{s'}(v),\;\forall v\in k]. \] It is clear that a perfectly global ensemble has value $1$. Indeed, for any pair $s,s'$ and any $v\in s\cap s'$, $f_s(v) = g(v) = f_{s'}(v)$, assuming that $g:V\to\Sigma$ is the global function that agrees with $\set{f_s}$. The graph ${\mathcal{A}}$ is an agreement expander if a robust converse holds, namely any ensemble with ${\mathcal{A}}(\set{f_s})\approx 1$ has to be close to a perfectly global ensemble. Formally, \begin{definition} Let $V,K,S,{\mathcal{A}}$ be as above. We call ${\mathcal{A}}$ an $\alpha$-agreement expander if for every ensemble of local functions $\set{f_s}$ \begin{equation}\label{eq:c-soundness} \alpha \cdot \dist(\set{f_s},\mathcal{G}) \leq 1-{\mathcal{A}}(\set{f_s}), \end{equation} where the distance \(\dist(\set{f_s},\mathcal{G})\) is the distance between $\set{f_s}$ and the closest perfectly global ensemble, and where the distance between two ensembles $\set{f_s},\set{g_s}$ is defined as the probability that $f_s\neq g_s$ when $s$ is chosen from the marginal distribution of ${\mathcal{A}}$. \end{definition} \paragraph{More refined notions of agreement-expansion} We also say that {\em ${\mathcal{A}}$ is an agreement expander with respect to $\delta$-ensembles} if \eqref{eq:c-soundness} holds for every ensemble $\set{f_s}$ that is a \(\delta\)-ensemble, namely, such that for every edge $\set{s,s'}_k$ in the graph ${\mathcal{A}}$, we have either \(\rest{f_s}{k} = \rest{f_{s'}}{k}\) or else the Hamming distance between $\rest{f_s}{k}$ and $\rest{f_{s'}}{k}$ is at least \( \delta|k|\). Furthermore, we allow a slightly weaker notion of distance from being perfectly global. We say that ${\mathcal{A}}$ has $(K,\alpha)$-soundness wrt $\delta$-ensembles if the following holds. Suppose that for every \(s \in S\) there is a distribution \(k \sim D_s\) that samples \(k \in K\) that are subsets of \(s\). We say that ${\mathcal{A}}$ is \((K,\alpha)\)-sound wrt $\set{f_s}$ when \begin{equation}\label{eq:k-c-soundness} \alpha \cdot \min_{G \in \mathcal{G}}\Prob[s \in S, k\sim D_s]{\rest{f_s}{k} \ne \rest{G}{k}} \leq 1-{\mathcal{A}}(\set{f_s}). \end{equation} We say that ${\mathcal{A}}$ is $(K,\alpha)$-sound with respect to $\delta$-ensembles if \eqref{eq:k-c-soundness} holds for all $\delta$-ensembles.
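To make the definition of the test value concrete, here is a minimal Python sketch (an editorial illustration; the toy universe, subsets, edge labels and function names below are our own assumptions and are not taken from the paper). It computes ${\mathcal{A}}(\set{f_s})$ for a small labelled test graph and checks that a perfectly global ensemble attains value $1$, while perturbing a single local function makes the value drop below $1$.
\begin{verbatim}
# Toy universe, subsets and labelled edges (illustrative assumptions only).
V = list(range(6))
S = [frozenset({0, 1, 2, 3}), frozenset({2, 3, 4, 5}), frozenset({0, 1, 4, 5})]
edges = [  # each entry (s, s', k) with k a subset of the intersection
    (S[0], S[1], frozenset({2, 3})),
    (S[1], S[2], frozenset({4, 5})),
    (S[0], S[2], frozenset({0, 1})),
]

def agreement_value(ensemble, edges):
    # Probability, over a uniformly random labelled edge {s,s'}_k,
    # that f_s and f_{s'} agree on every v in k.
    ok = sum(all(ensemble[s][v] == ensemble[sp][v] for v in k)
             for (s, sp, k) in edges)
    return ok / len(edges)

# A perfectly global ensemble: restrictions of one function w : V -> {0,1}.
w = {v: v % 2 for v in V}
global_ensemble = {s: {v: w[v] for v in s} for s in S}
assert agreement_value(global_ensemble, edges) == 1.0

# Flip one value of one local function; the test value drops below 1.
noisy = {s: dict(f) for s, f in global_ensemble.items()}
noisy[S[0]][2] ^= 1
print(agreement_value(noisy, edges))   # 2/3 for this toy example
\end{verbatim}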
{ "timestamp": "2020-05-05T02:18:51", "yymm": "2005", "arxiv_id": "2005.01045", "language": "en", "url": "https://arxiv.org/abs/2005.01045", "abstract": "Locally testable codes (LTC) are error-correcting codes that have a local tester which can distinguish valid codewords from words that are \"far\" from all codewords by probing a given word only at a very few (sublinear, typically constant) number of locations. Such codes form the combinatorial backbone of PCPs. A major open problem is whether there exist LTCs with positive rate, constant relative distance and testable with a constant number of queries.In this paper, we present a new approach towards constructing such LTCs using the machinery of high-dimensional expanders. To this end, we consider the Tanner representation of a code, which is specified by a graph and a base code. Informally, our result states that if this graph is part of a high-dimensional expander then the local testability of the code follows from the local testability of the base code.This work unifies and generalizes the known results on testability of the Hadamard, Reed-Muller and lifted codes on the Subspace Complex, all of which are proved via local self correction. However, unlike previous results, constant rounds of self correction do not suffice as the diameter of the underlying test graph can be logarithmically large in a high-dimensional expander and not constant as in all known earlier results. We overcome this technical hurdle by performing iterative self correction with logarithmically many rounds and tightly controlling the error in each iteration using properties of the high-dimensional expander.Given this result, the missing ingredient towards constructing a constant-query LTC with positive rate and constant relative distance is an instantiation of a base code that interacts well with a constant-degree high-dimensional expander.", "subjects": "Computational Complexity (cs.CC)", "title": "Locally testable codes via high-dimensional expanders", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575121992375, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.7091542092264689 }
https://arxiv.org/abs/1809.03158
Minimum Eccentric Connectivity Index for Graphs with Fixed Order and Fixed Number of Pending Vertices
The eccentric connectivity index of a connected graph $G$ is the sum over all vertices $v$ of the product $d_{G}(v) e_{G}(v)$, where $d_{G}(v)$ is the degree of $v$ in $G$ and $e_{G}(v)$ is the maximum distance between $v$ and any other vertex of $G$. This index is helpful for the prediction of biological activities of diverse nature, a molecule being modeled as a graph where atoms are represented by vertices and chemical bonds by edges. We characterize those graphs which have the smallest eccentric connectivity index among all connected graphs of a given order $n$. Also, given two integers $n$ and $p$ with $p\leq n-1$, we characterize those graphs which have the smallest eccentric connectivity index among all connected graphs of order $n$ with $p$ pending vertices.
\section{Introduction} A chemical graph is a representation of the structural formula of a chemical compound in terms of graph theory where atoms are represented by vertices and chemical bonds by edges. Arthur Cayley \cite{Cayley1874} was probably the first to publish results that consider chemical graphs. In an attempt to analyze the chemical properties of alkanes, Wiener \cite{Wie47} introduced the \emph{path number index}, nowadays called \emph{Wiener index}, which is defined as the sum of the lengths of the shortest paths between all pairs of vertices. Mathematical properties and chemical applications of this distance-based index have been widely researched. Numerous other topological indices are used for quantitative structure-property relationship (QSPR) and quantitative structure-activity relationship (QSAR) studies that help to describe and understand the structure of molecules \cite{Tod00, Kar00}, among which is the \emph{eccentric connectivity index}, which can be defined as follows. Let $G=(V,E)$ be a simple connected undirected graph. The \emph{distance} $\dist_G(u,v)$ between two vertices $u$ and $v$ in $G$ is the number of edges of a shortest path in $G$ connecting $u$ and $v$. The \emph{eccentricity} $\ecc{v}$ of a vertex $v$ is the maximum distance between $v$ and any other vertex, that is $\max \{ \dist_G(v, w) ~|~ w \in V \}$. The \emph{eccentric connectivity index} $\ensuremath{\xi^c}\xspace(G)$ of $G$ is defined by \[ \ensuremath{\xi^c}\xspace(G) = \sum_{v \in V} \degr{v} \ecc{v}. \] This index was introduced by Sharma {\emph{et al.}}~in~\cite{Sharma97} and successfully used for mathematical models of biological activities of diverse nature \cite{Duj08, Gup02, Kumar2004, Sar01, Ilic11}. Recently, Hauweele {\emph{et al.}} \cite{Hau18} have characterized those graphs which have the largest eccentric connectivity index among all connected graphs of given order $n$. These results are summarized in Table \ref{table1}, where \begin{itemize} \vspace{-0.2cm}\item $\K{n}$ is the complete graph of order $n$; \vspace{-0.2cm}\item $\Path{n}$ is the path of order $n$; \vspace{-0.2cm}\item $\ensuremath{{\sf W}_{n}}$ is the wheel of order $n$, i.e., the graph obtained by joining a vertex to all vertices of a cycle of order $n-1$; \vspace{-0.2cm}\item $\M{n}$ is the graph obtained from $\K{n}$ by removing a maximum matching and, if $n$ is odd, an additional edge adjacent to the unique vertex that still has degree $n-1$; \vspace{-0.2cm}\item $\extG{n}{D}$ is the graph constructed from a path $u_0-u_1-\ldots-u_D$ by joining each vertex of a clique $\K{n-D-1}$ to $u_0$, $u_1$ and $u_2$. \end{itemize} \begin{table}[h!] \begin{center} \caption{Largest eccentric connectivity index for a fixed order $n$} \label{table1} \begin{tabular}{cc} $n$ & optimal graphs \\ \hline 1&$\K{1}$\\ 2&$\K{2}$\\ 3&$\K{3}$ and $\Path{3}$\\ 4&$\M{4}$\\ 5&$\M{5}$ and $\ensuremath{{\sf W}_{5}}$\\ 6&$\M{6}$\\ 7&$\M{7}$\\ 8&$\M{8}$ and $\extG{8}{4}$\\ $\geq 9$&$\extG{n}{\left\lceil\frac{n+1}{3}\right\rceil+1}$ \end{tabular} \end{center} \end{table} \noindent In addition to the above-mentioned graphs, we will also consider the following ones: \begin{itemize} \vspace{-0.2cm}\item $\C{n}$ is the chordless cycle of order $n$; \vspace{-0.2cm}\item $\St{n}{x}$ is the graph of order $n$ obtained by linking all vertices of a stable set of $n-x$ vertices with all vertices of a clique $\K{x}$. The graph $\St{n}{1}$ is called a \emph{star}.
\end{itemize} Also, for $n\geq 4$ and $p\leq n-3$, let $\extH{n}{p}$ be the graph of order $n$ obtained by adding a dominating vertex (i.e., a vertex linked to all other vertices) to the graph of order $n-1$ having $p$ vertices of degree 0, and \begin{itemize} \vspace{-0.2cm}\item $n-1-p$ vertices of degree 1 if $n-p$ is odd; \vspace{-0.2cm}\item $n-2-p$ vertices of degree 1 and one vertex of degree 2 if $n-p$ is even. \end{itemize} For illustration, $\extH{8}{3}$ and $\extH{9}{3}$ are drawn in Figure \ref{fig1}. Note that $\extH{4}{0}\simeq \St{4}{2}$. Moreover, $\extH{4}{0}$ has two dominating vertices while $\extH{4}{1}$ and $\extH{n}{p}$ have exactly one dominating vertex for all $n\geq 5$ and $p\leq n-3$. \begin{figure}[h!] \centering\includegraphics[scale=0.75]{Fig1.eps} \captionof{figure}{Two graphs with $p=3$ pending vertices.} \label{fig1} \end{figure} In this paper, we first give an alternative proof of a result of Zhou and Du \cite{ZD10} showing that the stars are the only graphs with smallest eccentric connectivity index among all connected graphs of given order $n\geq 4$. These graphs have $n-1$ pending vertices (i.e., vertices of degree 1). We then consider all pairs $(n,p)$ of integers with $p\leq n-1$ and characterize the graphs with smallest eccentric connectivity index among all connected graphs of order $n$ with $p$ pending vertices. \section{Minimizing $\ensuremath{\xi^c}\xspace$ for graphs with fixed order} $\K{1}$ and $\K{2}$ are the only connected graphs with 1 and 2 vertices, respectively, while $\K{3}$ and $\Path{3}$ are the only connected graphs with 3 vertices. Since $\ensuremath{\xi^c}\xspace(\K{3})=\ensuremath{\xi^c}\xspace(\Path{3})=6$, all connected graphs of given order $n\leq 3$ have the same eccentric connectivity index. From now on, we therefore only consider connected graphs with fixed order $n\geq 4$. A proof of the following theorem was already given by Zhou and Du in \cite{ZD10}. Ours is slightly different. \begin{thm} Let $G$ be a connected graph of order $n\geq 4$. Then $\ensuremath{\xi^c}\xspace(G) \ge 3(n-1)$, with equality if and only if $G\simeq\St{n}{1}$. \end{thm} \begin{proof} Let $x$ be the number of dominating vertices (i.e., vertices of degree $n-1$) in $G$. We distinguish three cases. \begin{itemize} \vspace{-0.1cm}\item If $x=1$, then let $u$ be the dominating vertex in $G$. Clearly, $\ecc{u}=1$ and $\degr{u}=n-1$. All vertices $v\neq u$ have eccentricity $\ecc{v}=2$, while their degree is at least 1 (since $G$ is connected). Hence, $\ensuremath{\xi^c}\xspace(G)\geq (n-1)+2(n-1)=3(n-1)$, with equality if and only if all $v\neq u$ have degree 1, i.e., $G\simeq\St{n}{1}$. \vspace{-0.2cm}\item If $x>1$, then all dominating vertices $u$ have $\degr{u}\ecc{u}=n-1$, while all non-dominating vertices $v$ have $\degr{v}\geq x\geq 2$ and $\ecc{v}\geq 2$, which implies $\degr{v}\ecc{v}\geq 4$. If $n=4$, we therefore have $\ensuremath{\xi^c}\xspace(G)\geq 3n>3(n-1)$, while if $n>4$, we have $\ensuremath{\xi^c}\xspace(G)\geq 2(n-1)+4(n-2)=6n-10>3(n-1)$. \vspace{-0.2cm}\item If $x=0$, then every pending vertex $v$ has $\ecc{v}\geq 3$ since its only neighbor is a non-dominating vertex. Since the eccentricity of the non-pending vertices is at least two, we have $\degr{v}\ecc{v}\geq 3$ for all vertices $v$ in $G$, which implies $\ensuremath{\xi^c}\xspace(G)\geq 3n>3(n-1)$. \end{itemize} \end{proof} Stars have $n-1$ pending vertices.
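As a quick sanity check (an editorial addition, not part of the original paper), the theorem above can be verified exhaustively for a small order. The following Python sketch, which assumes the \texttt{networkx} library, computes $\ensuremath{\xi^c}\xspace$ directly from the definition and confirms that the minimum over all connected graphs of order $n=6$ equals $3(n-1)=15$, the value attained by the star $\St{6}{1}$.
\begin{verbatim}
import itertools
import networkx as nx

def ecc_conn_index(G):
    # Eccentric connectivity index: sum of deg(v) * ecc(v) over all vertices.
    ecc = nx.eccentricity(G)
    return sum(G.degree(v) * ecc[v] for v in G)

def connected_graphs(n):
    # All connected labelled graphs on n vertices (exhaustive; small n only).
    possible = list(itertools.combinations(range(n), 2))
    for m in range(n - 1, len(possible) + 1):
        for edges in itertools.combinations(possible, m):
            G = nx.Graph()
            G.add_nodes_from(range(n))
            G.add_edges_from(edges)
            if nx.is_connected(G):
                yield G

n = 6
minimum = min(ecc_conn_index(G) for G in connected_graphs(n))
print(minimum, 3 * (n - 1))                  # 15 15
print(ecc_conn_index(nx.star_graph(n - 1)))  # 15, attained by the star
\end{verbatim}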
As will be shown in the next section, a similar result is more challenging when the total number of pending vertices is fixed to a value strictly smaller than $n-2$. \section{Minimizing $\ensuremath{\xi^c}\xspace$ for graphs with fixed order and fixed number of pending vertices} Let $G$ be a connected graph of order $n\geq 4$ with $p$ pending vertices. Clearly, $p\leq n-1$, and $G\simeq\St{n}{1}$ if $p=n-1$. For $p=n-2$, let $u$ and $v$ be the two non-pending vertices. Note that $u$ is adjacent to $v$ since $G$ is connected. Clearly, $G$ is obtained by linking $x\leq n-3$ vertices of a stable set $S$ of $n-2$ vertices to $u$, and the $n-2-x$ other vertices of $S$ to $v$. The $n-2$ pending vertices $w$ have $\degr{w}=1$ and $\ecc{w}=3$, while $\ecc{u}=\ecc{v}=2$ and $\degr{u}+\degr{v}=n$. Hence $\ensuremath{\xi^c}\xspace(G)=3(n-2)+2n=5n-6$ for all graphs of order $n$ with $n-2$ pending vertices. The above observations show that all graphs of order $n$ with a fixed number $p\geq n-2$ of pending vertices have the same eccentric connectivity index. As will be shown, this is not the case when $n\geq 4$ and $p\leq n-3$. We will prove that $\extH{n}{p}$ is almost always the unique graph minimizing the eccentric connectivity index. Note that \[ \ensuremath{\xi^c}\xspace(\extH{n}{p})=\left\{\begin{array}{ll} n-1+2p+4(n-p-1)=5n-2p-5 &\mbox{if }n-p\mbox{ is odd} \\ n-1+2p+4(n-p-2)+6=5n-2p-3 &\mbox{if }n-p\mbox{ is even}. \end{array}\right. \] \vspace{0.3cm}\begin{thm}\label{lem2} Let $G$ be a connected graph of order $n\geq 4$ with $p\leq n-3$ pending vertices and one dominating vertex. Then $\ensuremath{\xi^c}\xspace(G)\geq \ensuremath{\xi^c}\xspace(\extH{n}{p})$, with equality if and only if $G\simeq\extH{n}{p}$. \end{thm} \begin{proof} The dominating vertex $u$ in $G$ has $\degr{u}\ecc{u}=n-1$, the pending vertices $v$ have $\degr{v}\ecc{v}=2$, and the other vertices $w$ have $\ecc{w}=2$ and $\degr{w}\geq 2$. Hence, $\ensuremath{\xi^c}\xspace(G)$ is minimized if all non-pending and non-dominating vertices have degree 2, except one that has degree 3 if $n-p-1$ is odd. In other words, $\ensuremath{\xi^c}\xspace(G)$ is minimized if and only if $G\simeq\extH{n}{p}$. \end{proof} \vspace{0.3cm}\begin{thm}\label{lem3} Let $G$ be a connected graph of order $n\geq 4$, with at least two dominating vertices. \begin{itemize} \vspace{-0.2cm}\item If $n=4$ then $\ensuremath{\xi^c}\xspace(G)\geq 12$, with equality if and only if $G\simeq\K{4}$. \vspace{-0.2cm}\item If $n=5$ then $\ensuremath{\xi^c}\xspace(G)\geq 20$, with equality if and only if $G\simeq\St{5}{2}$ or $G\simeq\K{5}$. \vspace{-0.2cm}\item If $n\geq 6$ then $\ensuremath{\xi^c}\xspace(G)\geq 6n-10$, with equality if and only if $G\simeq\St{n}{2}$. \end{itemize} \end{thm} \begin{proof} Let $x$ be the number of dominating vertices in $G$. Then $\degr{u}\ecc{u}=n-1$ for all dominating vertices $u$, while $\ecc{v}=2$ and $\degr{v}\geq x$ for all other vertices $v$. Hence, $\ensuremath{\xi^c}\xspace(G)\geq -2 x^2 + x(3 n-1)$. \begin{itemize} \vspace{-0.1cm}\item If $n=4$ then $\ensuremath{\xi^c}\xspace(G)\geq f(x)=-2 x^2 + 11x$. Since $2\leq x\leq 4$, $f(2)=14, f(3)=15$, and $f(4)=12$, we conclude that $\ensuremath{\xi^c}\xspace(G)\geq 12$, with equality if and only if $x=4$, which is the case when $G\simeq\K{4}$. \vspace{-0.3cm}\item If $n=5$ then $\ensuremath{\xi^c}\xspace(G)\geq f(x)=-2 x^2 + 14x$. 
Since $2\leq x\leq 5$, $f(2)=f(5)=20$ and $f(3)=f(4)=24$, we conclude that $\ensuremath{\xi^c}\xspace(G)\geq 20$, with equality if and only if $x=2$ or $5$, which is the case when $G\simeq\St{5}{2}$ or $G\simeq\K{5}$. \vspace{-0.3cm}\item If $n\geq 6$ then $-2 x^2 + x(3 n-1)$ is minimized for $x=2$, which is the case when $G\simeq\St{n}{2}$. \end{itemize} \end{proof} \begin{thm}\label{lem4} Let $G$ be a connected graph of order $n\geq 4$, with $p\leq n-3$ pending vertices and no dominating vertex. Then $\ensuremath{\xi^c}\xspace(G)>\ensuremath{\xi^c}\xspace(\extH{n}{p})$ unless $n=5$, $p=0$ and $G\simeq\C{5}$, in which case $\ensuremath{\xi^c}\xspace(G)=\ensuremath{\xi^c}\xspace(\extH{n}{0})=20$. \end{thm} \begin{proof} Let $U$ be the subset of vertices $u$ in $G$ such that $\degr{u}=\ecc{u}=2$. If $U$ is empty, then all non-pending vertices $v$ in $G$ have $\degr{v}\geq 2$ and $\ecc{v}\geq 2$ (since $G$ has no dominating vertex), and at least one of these two inequalities is strict, which implies $\degr{v}\ecc{v}\geq 6$. Also, every pending vertex $w$ has $\ecc{w}\geq 3$ since its only neighbor is not dominating. Hence, $\ensuremath{\xi^c}\xspace(G)\geq 6(n-p)+3p=6n-3p$. Since $p\leq n-3$, we have $\ensuremath{\xi^c}\xspace(G)\geq 5n-2p+3>\ensuremath{\xi^c}\xspace(\extH{n}{p})$. So, assume $U\neq \emptyset$. Let $u$ be a vertex in $U$, and let $v,w$ be its two neighbors. Also, let $A=N(v)\setminus (N(w)\cup\{w\})$, $B=(N(v)\cap N(w))\setminus \{u\}$, and $C=N(w)\setminus (N(v)\cup\{v\})$. Since $\ecc{u}=2$, all vertices of $G$ belong to $A\cup B\cup C\cup \{u,v,w\}$. We finally define $B'$ as the subset of $B$ that contains all vertices $b$ of $B$ with $\degr{b}=2$ (i.e., their only neighbors are $v$ and $w$). \vspace{0.5cm}\noindent\emph{Case 1}: $v$ is adjacent to $w$. \noindent $A\neq \emptyset$ else $w$ is a dominating vertex, and $C\neq \emptyset$ else $v$ is dominating. Let $G'$ be the graph obtained from $G$ by replacing every edge linking $v$ to a vertex $a\in A$ with an edge linking $w$ to $a$, and by removing all edges linking $v$ to a vertex of $B\setminus B'$. Clearly, $G'$ is also a connected graph of order $n$ with $p$ pending vertices, and $w$ is the only dominating vertex in $G'$. It follows from Theorem \ref{lem2} that $\ensuremath{\xi^c}\xspace(G')\geq \ensuremath{\xi^c}\xspace(\extH{n}{p})$. Also, \begin{itemize} \vspace{-0.1cm}\item $\degr{u}=\degrp{u}$ and $\ecc{u}=\eccp{u}$; \vspace{-0.2cm}\item $\degr{x}=\degrp{x}$ and $\ecc{x}\geq \eccp{x}$ for all $x\in A\cup C$; \vspace{-0.2cm}\item $\degr{x}=\degrp{x}$ and $\ecc{x}=\eccp{x}$ for all $x\in B'$; \vspace{-0.2cm}\item $\degr{x}>\degrp{x}$ and $\ecc{x}=\eccp{x}$ for all $x\in B\setminus B'$. \end{itemize} Hence, \[ \sum\limits_{x\in A\cup B\cup C\cup\{u\}}\degr{x}\ecc{x} \geq \sum\limits_{x\in A\cup B\cup C\cup\{u\}} \degrp{x}\eccp{x}.
\] Moreover, \begin{itemize} \vspace{-0.1cm}\item $\degr{v}\ecc{v} + \degr{w}\ecc{w} = 2(\sizs{A}+\sizs{B}+2) + 2(\sizs{C}+\sizs{B}+2) = 2\sizs{A}+4\sizs{B}+2\sizs{C}+8$; \vspace{-0.2cm}\item $\degrp{v}\eccp{v}+\degrp{w}\eccp{w}=2(\sizs{B'}+2)+\sizs{A}+\sizs{B}+\sizs{C}+2.$ \end{itemize} We therefore have \[ \begin{array}{rll} \ensuremath{\xi^c}\xspace(G)-\ensuremath{\xi^c}\xspace(G')&=&\quad\sum\limits_{x\in A\cup B\cup C\cup\{u\}}\degr{x}\ecc{x}+(\degr{v}\ecc{v} + \degr{w}\ecc{w})\\ & & - \sum\limits_{x\in A\cup B\cup C\cup\{u\}}\degrp{x}\eccp{x} - (\degrp{v}\eccp{v}+\degrp{w}\eccp{w})\\ &\geq& (2\sizs{A}+4\sizs{B}+2\sizs{C}+8)-(2(\sizs{B'}+2)+\sizs{A}+\sizs{B}+\sizs{C}+2)\\ &=&\sizs{A}+\sizs{C}+3(\sizs{B'}+\sizs{B\setminus B'})-2\sizs{B'}+2\\ &=&\sizs{A}+\sizs{C}+\sizs{B'}+3\sizs{B\setminus B'}+2 > 0\\ \end{array} \] This implies $\ensuremath{\xi^c}\xspace(G)>\ensuremath{\xi^c}\xspace(G')\geq \ensuremath{\xi^c}\xspace(\extH{n}{p})$. \vspace{0.5cm}\noindent\emph{Case 2}: $v$ is not adjacent to $w$, and both $A\cup (B\setminus B')$ and $C\cup (B\setminus B')$ are nonempty. \noindent Let $G'$ be the graph obtained from $G$ by adding an edge linking $v$ to $w$, by replacing every edge linking $v$ to a vertex $a\in A$ with an edge linking $w$ to $a$, and by removing all edges linking $v$ to a vertex of $B\setminus B'$. Clearly, $G'$ is also a connected graph of order $n$ with $p$ pending vertices. As in the previous case, we have \[ \sum\limits_{x\in A\cup B\cup C\cup\{u\}}\degr{x}\ecc{x} \geq \sum\limits_{x\in A\cup B\cup C\cup\{u\}} \degrp{x}\eccp{x}. \]\\ \noindent Moreover, $\ecc{v}\geq 2$ and $\ecc{w}\geq 2$, while $\eccp{v}\leq 2$ and $\eccp{w}=1$, which implies \begin{itemize} \vspace{-0.1cm}\item $\degr{v}\ecc{v} + \degr{w}\ecc{w} \geq 2(\sizs{A}+\sizs{B}+1) + 2(\sizs{C}+\sizs{B}+1)=2\sizs{A}+4\sizs{B}+2\sizs{C}+4$; \vspace{-0.1cm}\item $\degrp{v}\eccp{v}+\degrp{w}\eccp{w}\leq 2(\sizs{B'}+2)+\sizs{A}+\sizs{B}+\sizs{C}+2.$ \end{itemize} We therefore have \[ \begin{array}{rll} \ensuremath{\xi^c}\xspace(G)-\ensuremath{\xi^c}\xspace(G')&\geq& (2\sizs{A}+4\sizs{B}+2\sizs{C}+4)-(2(\sizs{B'}+2)+\sizs{A}+\sizs{B}+\sizs{C}+2)\\ &=&\sizs{A}+\sizs{C}+\sizs{B'}+3\sizs{B\setminus B'}-2.\\ \end{array} \] If $B\setminus B'\neq \emptyset$, $w$ is the only dominating vertex in $G'$, and $\ensuremath{\xi^c}\xspace(G)-\ensuremath{\xi^c}\xspace(G')>0$. It then follows from Theorem \ref{lem2} that $\ensuremath{\xi^c}\xspace(G)>\ensuremath{\xi^c}\xspace(G')\geq \ensuremath{\xi^c}\xspace(\extH{n}{p})$. So assume $B\setminus B'= \emptyset$. Since $A\cup (B\setminus B')\neq \emptyset$, and $C\cup (B\setminus B')\neq \emptyset$, we have $A\neq \emptyset$ and $C\neq \emptyset$. Hence, once again, $w$ is the only dominating vertex in $G'$, and we know from Theorem \ref{lem2} that $\ensuremath{\xi^c}\xspace(G')\geq \ensuremath{\xi^c}\xspace(\extH{n}{p})$. \begin{itemize} \item If $\sizs{B'}\geq 1$, $\sizs{A}\geq 2$ or $\sizs{C}\geq 2$, then $\ensuremath{\xi^c}\xspace(G)>\ensuremath{\xi^c}\xspace(G')\geq \ensuremath{\xi^c}\xspace(\extH{n}{p})$. \item If $\sizs{B'}=0$ and $\sizs{A}=\sizs{C}=1$, there are two possible cases: \begin{itemize} \item if the vertex in $A$ is not adjacent to the vertex in $C$, then $n=5$, $p=2$, $G\simeq \Path{5}$ and $G'\simeq\extH{5}{2}$. Hence, $\ensuremath{\xi^c}\xspace(G)=24>16=\ensuremath{\xi^c}\xspace(\extH{n}{p})$; \item if the vertex in $A$ is adjacent to the vertex in $C$, then $n=5$, $p=0$, $G\simeq \C{5}$ and $G'\simeq\extH{5}{2}$. 
Hence, $\ensuremath{\xi^c}\xspace(G)=\ensuremath{\xi^c}\xspace(\extH{n}{p})=20$; \end{itemize} \end{itemize} \vspace{0.4cm}\noindent\emph{Case 3}: $v$ is not adjacent to $w$, and at least one of $A\cup (B\setminus B')$ and $C\cup (B\setminus B')$ is empty. \noindent Without loss of generality, suppose $A\cup (B\setminus B')=\emptyset$. We distinguish two subcases. \vspace{0.4cm}\noindent\emph{Case 3.1}: $B'=\emptyset$. \noindent Since $n\geq 4$, $C\neq \emptyset$. Also, since $p\leq n-3$, there is a non-pending vertex $r\in C$. Let $G'$ be the graph obtained from $G$ by removing the edge linking $u$ and $v$ and by linking $v$ to $w$ and to $r$. Note that $G'$ is a connected graph of order $n$ with $p$ pending vertices: while $v$ was pending in $G$ but $u$ was not, the situation is reversed in $G'$. Note also that Theorem \ref{lem2} implies $\ensuremath{\xi^c}\xspace(G')\geq \ensuremath{\xi^c}\xspace(\extH{n}{p})$ since $w$ is the only dominating vertex in $G'$. We then have: \begin{itemize} \vspace{-0.1cm}\item $\degr{u}\!=\!2$, $\degrp{u}\!=\!1$ and $\ecc{u}\!=\!\eccp{u}\!=\!2$, which gives $\degr{u}\ecc{u}-\degrp{u}\eccp{u}=2$; \vspace{-0.1cm}\item $\degr{v}\!=\!1$, $\degrp{v}\!=\!2$, $\ecc{v}\!=\!3$ and $\eccp{v}\!=\!2$, which gives $\degr{v}\ecc{v}-\degrp{v}\eccp{v}=\!-\!1$; \vspace{-0.1cm}\item $\degr{w}\!=\!n-2$, $\degrp{w}\!=\!n-1$, $\ecc{w}\!=\!2$ and $\eccp{w}\!=\!1$, which gives $\degr{w}\ecc{w}-\degrp{w}\eccp{w}=n-3$; \vspace{-0.1cm}\item $\degrp{r}\!=\!\degr{r}\!+\!1$, $\ecc{r}\!=\!3$ and $\eccp{r}\!=\!2$, which gives $\degr{r}\ecc{r}-\degrp{r}\eccp{r}=\degr{r}-2$; \vspace{-0.1cm}\item $\degrp{c}\!=\!\degr{c}$ and $\ecc{c}>\eccp{c}$ for all $c\in (C\setminus\{r\})$. Since $r$ has a neighbor in $C$ of degree at least 2, we have $\sum_{c\in C\setminus\{r\}}(\degr{c}\ecc{c}-\degrp{c}\eccp{c})\geq 2$. \end{itemize} Hence, $\ensuremath{\xi^c}\xspace(G)-\ensuremath{\xi^c}\xspace(G') \geq 2 - 1 + \underbrace{n-3}_{> 0} + \underbrace{\degr{r}-2}_{\geq 0}+2 > 0$, which implies $\ensuremath{\xi^c}\xspace(G)>\ensuremath{\xi^c}\xspace(G')\geq \ensuremath{\xi^c}\xspace(\extH{n}{p})$.\\ \vspace{0.4cm}\noindent\emph{Case 3.2}: $B'\neq \emptyset$. \noindent Let $b_1,\ldots,b_{\sizs{B'}}$ be the vertices in $B'$. Remember that the unique neighbors of these vertices are $v$ and $w$. Let $G'$ be the graph obtained from $G$ as follows. We first add an edge linking $v$ to $w$. Then, for every odd $i<\sizs{B'}$, we add an edge linking $b_i$ to $b_{i+1}$ and remove the edges linking $v$ to $b_i$ and to $b_{i+1}$. We then have \begin{itemize} \vspace{-0.2cm}\item $\degr{x}=\degrp{x}$ and $\ecc{x}=\eccp{x}$ for all $x\in B'\cup C\cup\{u\}$; \vspace{-0.2cm}\item $\degr{v}=\sizs{B'}+1$, $\degrp{v}\leq 3$, $\ecc{v}\geq 2$, and $\eccp{v}\leq 2$; \vspace{-0.2cm}\item $\degr{w}=\sizs{B'}+\sizs{C}+1$, $\degrp{w}=\sizs{B'}+\sizs{C}+2$, $\ecc{w}=2$, and $\eccp{w}=1$. \end{itemize} Hence, \[ \begin{array}{rll} \ensuremath{\xi^c}\xspace(G)-\ensuremath{\xi^c}\xspace(G')&=&\degr{v}\ecc{v}+\degr{w}\ecc{w}-\degrp{v}\eccp{v}-\degrp{w}\eccp{w}\\ &\geq& 2(\sizs{B'}+1) + 2(\sizs{B'}+\sizs{C}+1) - 6-(\sizs{B'}+\sizs{C}+2)\\ &=& 3\sizs{B'}+\sizs{C}-4. \end{array} \] If $\sizs{B'}\geq 2$ or $\sizs{C}\geq 2$, then $\ensuremath{\xi^c}\xspace(G)-\ensuremath{\xi^c}\xspace(G')>0$, and since $w$ is then the only dominating vertex in $G'$, we know from Theorem \ref{lem2} that $\ensuremath{\xi^c}\xspace(G)>\ensuremath{\xi^c}\xspace(G')\geq \ensuremath{\xi^c}\xspace(\extH{n}{p})$.
So, assume $\sizs{B'}= 1$ and $\sizs{C}\leq 1$: \begin{itemize} \vspace{-0.2cm}\item if $\sizs{C}= 0$ then $n=4$, $p=0$, $G\simeq\C{4}$ and $G'\simeq\extH{4}{0}$, which implies $\ensuremath{\xi^c}\xspace(G)=16>14=\ensuremath{\xi^c}\xspace(\extH{n}{p})$; \vspace{-0.2cm}\item if $\sizs{C}= 1$ then $n=5$, $p=1$, $\ensuremath{\xi^c}\xspace(G)=23$ and $G'\simeq\extH{5}{1}$, which implies $\ensuremath{\xi^c}\xspace(G)>20=\ensuremath{\xi^c}\xspace(\extH{n}{p})$. \end{itemize} \end{proof} We can now combine these results as follows. Assume $G$ is a connected graph of order $n$ with $p$ pending vertices. If $p\geq 1$, then $G$ has at most one dominating vertex, and it follows from Theorems \ref{lem2} and \ref{lem4} that $\extH{n}{p}$ is the only graph with minimum eccentric connectivity index. If $p=0$ and $n=4$, then $G$ cannot contain exactly one dominating vertex, and Theorems \ref{lem3} and \ref{lem4} show that $\K{4}$ is the only graph with minimum eccentric connectivity index. If $p=0$ and $n=5$, Theorems \ref{lem2}, \ref{lem3} and \ref{lem4} show that $\extH{5}{0}$, $\St{5}{2}$, $\K{5}$ and $\C{5}$ are the only candidates to minimize the eccentric connectivity index, and since $\ensuremath{\xi^c}\xspace(\extH{5}{0})=\ensuremath{\xi^c}\xspace(\St{5}{2})=\ensuremath{\xi^c}\xspace(\K{5})=\ensuremath{\xi^c}\xspace(\C{5})=20$, the four graphs are the optimal ones. If $p=0$ and $n\geq 6$ then we know from Theorems \ref{lem2}, \ref{lem3} and \ref{lem4} that $\St{n}{2}$ and $\extH{n}{0}$ are the only candidates to minimize the eccentric connectivity index. Since $\ensuremath{\xi^c}\xspace(\St{6}{2})=26<27=\ensuremath{\xi^c}\xspace(\extH{6}{0})$, $\ensuremath{\xi^c}\xspace(\St{7}{2})=32>30=\ensuremath{\xi^c}\xspace(\extH{7}{0})$ and $\ensuremath{\xi^c}\xspace(\St{n}{2})=6n-10>5n-3\geq\ensuremath{\xi^c}\xspace(\extH{n}{0})$ for $n\geq 8$, we deduce that $\St{6}{2}$ is the only graph with minimum eccentric connectivity index when $n=6$ and $p=0$, while $\extH{n}{0}$ is the only optimal graph when $n\geq 7$ and $p=0$. This is summarized in the following corollary. \begin{cor} Let $G$ be a connected graph of order $n\geq 4$ with $p\leq n-3$ pending vertices. \begin{itemize} \vspace{-0.2cm}\item If $p\geq 1$ then $\ensuremath{\xi^c}\xspace(G)\geq \ensuremath{\xi^c}\xspace(\extH{n}{p})$, with equality if and only if $G\simeq \extH{n}{p}$; \vspace{-0.0cm}\item If $p=0$ then \vspace{-0.0cm}\begin{itemize} \item if $n=4$ then $\ensuremath{\xi^c}\xspace(G)\geq 12$, with equality if and only if $G\simeq \K{4}$; \vspace{-0.0cm} \item if $n=5$ then $\ensuremath{\xi^c}\xspace(G)\geq 20$, with equality if and only if $G\simeq \extH{5}{0}$, $\St{5}{2}$, $\K{5}$ or $\C{5}$; \vspace{-0.0cm} \item if $n=6$ then $\ensuremath{\xi^c}\xspace(G)\geq 26$, with equality if and only if $G\simeq \St{6}{2}$; \vspace{-0.0cm} \item if $n\geq 7$ then $\ensuremath{\xi^c}\xspace(G)\geq \ensuremath{\xi^c}\xspace(\extH{n}{0})$, with equality if and only if $G\simeq \extH{n}{0}$. \end{itemize} \end{itemize} \end{cor} \section{Conclusion} We have characterized the graphs with smallest eccentric connectivity index among those of fixed order $n$ and fixed or non-fixed number of pending vertices. Such a characterization for graphs with a fixed order $n$ and a fixed size $m$ was given in \cite{ZD10}. It reads as follows. \begin{thm}Let $G$ be a connected graph of order $n$ with $m$ edges, where $n-1\leq m <{ {n}\choose{2}}.$ Also, let \[k=\left\lfloor\frac{2n-1-\sqrt{(2n-1)^2-8m}}{2}\right\rfloor.
\] Then $\ensuremath{\xi^c}\xspace(G)\geq 4m-k(n-1)$, with equality if and only if $G$ has $k$ dominating vertices and $n-k$ vertices of eccentricity 2. \end{thm} It is, however, an open question to characterize the graphs with largest eccentric connectivity index among those of fixed order $n$ and fixed size $m$. The following conjecture appears in \cite{Hau18}, where $\extg{n}{D}{k}$ is the graph of order $n$ constructed from a path $u_0-u_1-\ldots-u_D$ by joining each vertex of a clique $\K{n-D-1}$ to $u_0$ and $u_1$, and $k$ vertices of the clique to $u_2$. \begin{conj} Let $G$ be a connected graph of order $n$ with $m$ edges, where $n-1\leq m \leq{ {n-1}\choose{2}}.$ Also, let \[D=\left\lfloor\frac{2n+1-\sqrt{17+8(m-n)}}{2}\right\rfloor \text{ and } k=m-{{n-D+1}\choose{2}}-D+1. \] Then $\ensuremath{\xi^c}\xspace(G)\leq \ensuremath{\xi^c}\xspace(\extg{n}{D}{k})$, with equality if and only if $G\simeq \extg{n}{D}{k}$ or $D=3$, $k=n-4$ and $G$ is the graph constructed from a path $u_0-u_1-u_2-u_3$, by joining $1\leq i\leq n-3$ vertices of a clique $\K{n-4}$ to $u_0,u_1,u_2$ and the $n-4-i$ other vertices of $\K{n-4}$ to $u_1,u_2,u_3$. \end{conj} \bibliographystyle{acm}
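The construction of $\extH{n}{p}$ and the values of $\ensuremath{\xi^c}\xspace(\extH{n}{p})$ used throughout the paper can also be checked mechanically. The sketch below is an editorial addition (it assumes the \texttt{networkx} library; the helper names are ours): it builds $\extH{n}{p}$ as described in the introduction and confirms the displayed formula for $\ensuremath{\xi^c}\xspace(\extH{n}{p})$ for $4\leq n\leq 9$. The only exception is $(n,p)=(4,0)$, where $\extH{4}{0}\simeq\St{4}{2}$ has two dominating vertices and $\ensuremath{\xi^c}\xspace(\extH{4}{0})=14$, the value used in the proofs above.
\begin{verbatim}
import networkx as nx

def H(n, p):
    # Build H_{n,p}: vertex 0 dominates an order-(n-1) graph with p isolated
    # vertices and the remaining n-1-p vertices of degree 1 (plus one vertex
    # of degree 2 when n-p is even).
    assert n >= 4 and 0 <= p <= n - 3
    G = nx.Graph()
    G.add_nodes_from(range(n))
    G.add_edges_from((0, v) for v in range(1, n))   # dominating vertex 0
    rest = list(range(p + 1, n))                    # the non-pendant vertices
    if (n - p) % 2 == 0:                            # one vertex of degree 2
        G.add_edges_from([(rest[0], rest[1]), (rest[1], rest[2])])
        rest = rest[3:]
    G.add_edges_from(zip(rest[0::2], rest[1::2]))   # perfect matching
    return G

def xi_c(G):
    ecc = nx.eccentricity(G)
    return sum(G.degree(v) * ecc[v] for v in G)

for n in range(4, 10):
    for p in range(0, n - 2):
        value = xi_c(H(n, p))
        if (n, p) == (4, 0):   # H_{4,0} ~ St_{4,2}: two dominating vertices
            assert value == 14
            continue
        expected = 5*n - 2*p - 5 if (n - p) % 2 == 1 else 5*n - 2*p - 3
        assert value == expected
print("xi_c(H_{n,p}) matches the displayed formula for 4 <= n <= 9")
\end{verbatim}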
{ "timestamp": "2018-09-11T02:15:21", "yymm": "1809", "arxiv_id": "1809.03158", "language": "en", "url": "https://arxiv.org/abs/1809.03158", "abstract": "The eccentric connectivity index of a connected graph $G$ is the sum over all vertices $v$ of the product $d_{G}(v) e_{G}(v)$, where $d_{G}(v)$ is the degree of $v$ in $G$ and $e_{G}(v)$ is the maximum distance between $v$ and any other vertex of $G$. This index is helpful for the prediction of biological activities of diverse nature, a molecule being modeled as a graph where atoms are represented by vertices and chemical bonds by edges. We characterize those graphs which have the smallest eccentric connectivity index among all connected graphs of a given order $n$. Also, given two integers $n$ and $p$ with $p\\leq n-1$, we characterize those graphs which have the smallest eccentric connectivity index among all connected graphs of order $n$ with $p$ pending vertices.", "subjects": "Discrete Mathematics (cs.DM); Combinatorics (math.CO)", "title": "Minimum Eccentric Connectivity Index for Graphs with Fixed Order and Fixed Number of Pending Vertices", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575121992375, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.7091542092264689 }
https://arxiv.org/abs/1206.3210
A refined and unified version of the inverse scattering method for the Ablowitz-Ladik lattice and derivative NLS lattices
We refine and develop the inverse scattering theory on a lattice in such a way that the Ablowitz-Ladik lattice and derivative NLS lattices as well as their matrix analogs can be solved in a unified way. The inverse scattering method for the (matrix analog of the) Ablowitz-Ladik lattice is simplified to the same level as that for the continuous NLS system. Using the linear eigenfunctions of the Lax pair for the Ablowitz-Ladik lattice, we can construct solutions of the derivative NLS lattices such as the discrete Gerdjikov-Ivanov (also known as Ablowitz-Ramani-Segur) system and the discrete Kaup-Newell system. Thus, explicit solutions such as the multisoliton solutions for these systems can be obtained by solving linear summation equations of the Gel'fand-Levitan-Marchenko type. The derivation of the discrete Kaup-Newell system from the Ablowitz-Ladik lattice is based on a new method that allows us to generate new integrable systems from known systems in a systematic manner. In an appendix, we describe the reduction of the matrix Ablowitz-Ladik lattice to a vector analog of the modified Volterra lattice from the point of view of the inverse scattering method.
\section{Introduction} The cubic nonlinear Schr\"odinger (NLS) equation~\cite{ZS1,ZS2} is probably the most prominent example of an integrable partial differential equation in \mbox{$1+1$} space-time dimensions. The inverse scattering method for the NLS equation devised by Zakharov and Shabat~\cite{ZS1} was reformulated by Ablowitz, Kaup, Newell and Segur~\cite{AKNS73,AKNS74} in a user-friendly and broadly-applicable manner. Since then various extensions of the NLS equation have been obtained within the framework of the inverse scattering method. Among them, we mention two kinds of extensions:\ (i) space-discrete NLS systems\footnote{The problem of how to discretize the continuous time variable in integrable space-discrete systems has a long history, see~\cite{Tsu2010JPA} and references therein.} wherein the spatial variable is discretized~\cite{AL76,GI82} and (ii) derivative NLS systems wherein the nonlinear terms involve differentiation with respect to the spatial variable~\cite{KN,CLL,ARS,GI,Kun}. For these extensions, the inverse scattering method can be applied on a case-by-case basis, but its application requires more steps and is apparently more complicated than that for the original NLS system~\cite{AKNS73,AKNS74}. The main objective of this paper is twofold. First, we develop the inverse scattering method on a lattice and correct the widespread impression that it is essentially more complicated than the inverse scattering method on the line. Second, we show that the derivative NLS systems can be solved using the inverse scattering method for the NLS system; however, this paper focuses on the space-discrete case.\footnote{Relevant results on continuous derivative NLS systems can be found in~\cite{talk07,talk08}.} To be specific, we consider (a matrix generalization of) the Ablowitz--Ladik lattice~\cite{AL76} that is an integrable space discretization of the NLS system. The inverse scattering method for the Ablowitz--Ladik lattice reported in the existing literature involves some onerous processes peculiar to the discrete case; in fact, they are redundant and can be avoided. For this purpose, we only need to start with an eigenvalue problem that is trivially equivalent to the Ablowitz--Ladik eigenvalue problem up to a similarity transformation and inversion of the spatial coordinate. Then, all the key quantities such as the scattering data become even functions of the conventional spectral parameter; thus, we can use its square as a new (and more essential) parameter. This considerably simplifies the subsequent computations. Moreover, by the inversion of the spatial coordinate, we no longer need to normalize the ``integral kernels" of the linear eigenfunctions (Jost solutions) to express the potentials in the Ablowitz--Ladik eigenvalue problem explicitly. This is in contrast to the conventional approach wherein one has to first introduce the ``integral kernels" and then normalize them; in the literature (see, {\em e.g.}, \cite{Ab78,AS81}), they are usually denoted as $K(n,m)$, $\accentset{{\cc@style\underline{\mskip10mu}}}{K}(n,m)$ and $\kappa (n,m)$, $\accentset{{\cc@style\underline{\mskip10mu}}}{\kappa}(n,m)$, respectively. Thus, we can successfully refine the inverse scattering method associated with the (matrix) Ablowitz--Ladik eigenvalue problem; both the potentials and the linear eigenfunctions are determined from the scattering data through a set of linear summation equations in the most transparent manner. 
In our previous paper~\cite{TsuJMP11}, we proposed a systematic method of generating new integrable systems from known systems through inverse Miura maps.\footnote{ The original Miura map transforms the modified KdV equation (or, more generally, its one-parameter generalization called the Gardner equation) to the KdV equation~\cite{Miura68}.} As a result, two derivative NLS systems, namely, the Gerdjikov--Ivanov (also known as Ablowitz--Ramani--Segur) system~\cite{ARS,GI} and the Chen--Lee--Liu system~\cite{CLL}, were constructed from the Lax representation\footnote{The term was coined after Lax's work on the KdV hierarchy~\cite{Lax}.} for the NLS system; the same prescription applies to the space-discrete case. Thus, the inverse scattering method for the Ablowitz--Ladik lattice can also provide the solutions of the space-discrete Gerdjikov--Ivanov system and the space-discrete Chen--Lee--Liu system in a unified way. In this paper, we propose yet another method of generating new integrable systems from known systems. In particular, by applying this new method to the Ablowitz--Ladik lattice, we obtain a lattice system that is essentially equivalent to the space-discrete Kaup--Newell system studied in~\cite{Tsuchi02,TsuJMP10}; its solutions can be expressed in terms of the linear eigenfunctions associated with the Ablowitz--Ladik lattice. Thus, the space-discrete Kaup--Newell system can also be solved using the inverse scattering method for the Ablowitz--Ladik lattice; note that among the derivative NLS systems, the Kaup--Newell system~\cite{KN} is the most important for physical applications. The main body of this paper is organized as follows. In section 2, we start with the Lax representation for the Ablowitz--Ladik lattice; using its linear eigenfunctions, we derive two derivative NLS lattices. First, we apply the method proposed in~\cite{TsuJMP11} and derive the space-discrete Gerdjikov--Ivanov system. Second, we propose a new systematic method and obtain the space-discrete Kaup--Newell system. We can also derive and solve the space-discrete Chen--Lee--Liu system, but we do not present it in this paper; the interested reader is referred to appendix B of~\cite{TsuJMP11} and section 3 of~\cite{Tsuchi02}. In section 3, we present a streamlined version of the inverse scattering method for the Ablowitz--Ladik lattice. In section 4, we combine the results of sections 2 and 3 and show that the derivative NLS lattices can be solved by the inverse scattering method associated with the Ablowitz--Ladik eigenvalue problem. Their multisoliton solutions can be derived from the linear summation equations in a straightforward manner. The last section, section 5, is devoted to concluding remarks. In this paper, we consider the general case where the dependent variables take their values in matrices~\cite{GI82,ZS3} in such a way that the operations such as addition and multiplication in the equations of motion make sense. In appendix~\ref{app2}, we consider the reduction of the matrix Ablowitz--Ladik lattice to a vector analog of the modified Volterra lattice and discuss the effect of the reduction on the scattering data; this considerably refines our previous results given in~\cite{TUW98,TUW99}. 
\section{Ablowitz--Ladik lattice and derivative NLS lattices} \label{sect2} \subsection{Lax representation for the Ablowitz--Ladik lattice} We start with the matrix Ablowitz--Ladik eigenvalue problem written in the nonstandard form:\footnote{Actually, in the simplest \mbox{$2 \times 2$} matrix case, (\ref{mAL1}) can be identified with a special case of the eigenvalue problem studied in~\cite{AL75}.} \begin{align} & \left[ \begin{array}{c} \Psi_{1, n} \\ \Psi_{2, n} \\ \end{array} \right] = \left[ \begin{array}{cc} z I & z Q_n \\ z^{-1} R_n & z^{-1} I \\ \end{array} \right] \left[ \begin{array}{c} \Psi_{1, n+1} \\ \Psi_{2, n+1} \\ \end{array} \right], \label{mAL1} \end{align} where $z$ is a constant spectral parameter and $I$ is the identity matrix of arbitrary size. For simplicity, we assume that all the entries in (\ref{mAL1}), such as the potentials $Q_n$ and $R_n$, are \mbox{$\l \times l$} square matrices. It is also possible to consider the more general case where $Q_n$ is an \mbox{$\l_1 \times l_2$} matrix and $R_n$ is an \mbox{$\l_2 \times l_1$} matrix; however, the results for that case can be easily obtained by setting some rows and columns in $Q_n$ and $R_n$ as identically zero. The time evolution of the linear eigenfunction can be introduced in such a way that it is compatible with the eigenvalue problem (\ref{mAL1}). The most illustrative example is given by \begin{align} & \left[ \begin{array}{c} \Psi_{1, n} \\ \Psi_{2, n} \\ \end{array} \right]_t = \left[ \begin{array}{cc} (-z^2+1)bI + b Q_{n-1}R_n & -z^2 b Q_{n-1} -a Q_n \\ -b R_n - z^{-2} a R_{n-1} & (1-z^{-2})a I + a R_{n-1} Q_n \\ \end{array} \right] \left[ \begin{array}{c} \Psi_{1, n} \\ \Psi_{2, n} \\ \end{array} \right]. \label{mAL-time} \end{align} Here, $a$ and $b$ are arbitrary scalar constants; in fact, they may depend on the time variable $t$ in an arbitrary manner (see, {\it e.g.}, \cite{Calo76,Vakh02}), but we do not discuss it in this paper. The compatibility condition for the overdetermined linear system, (\ref{mAL1}) and (\ref{mAL-time}), with the isospectral condition \mbox{$z_t=0$} implies the time evolution equations for $Q_n$ and $R_n$: \begin{subnumcases}{\label{mALsys}} {} \label{AL-Qt} Q_{n,t} - a Q_{n+1} + b Q_{n-1} + (a-b) Q_n +a Q_n R_n Q_{n+1} -b Q_{n-1} R_n Q_n = O, \hspace{13mm} \\[1mm] \label{AL-Rt} R_{n,t} -b R_{n+1} +a R_{n-1} + (b-a) R_n +b R_n Q_n R_{n+1} -a R_{n-1} Q_n R_n =O. \hspace{13mm} \end{subnumcases} We call (\ref{mALsys}) the (matrix) Ablowitz--Ladik lattice/system~\cite{AL76,GI82}; (\ref{mAL1}) and (\ref{mAL-time}) comprise its Lax representation. The symbol $O$ on the right-hand side of the equations implies that the dependent variables can take their values in matrices. In fact, there exist infinitely many ways to define such an isospectral time evolution (cf.~\cite{Chiu77,Kako}), depending on the choice of the temporal Lax matrix in (\ref{mAL-time}) as a Laurent polynomial in $z^2$; they provide the positive flows of the Ablowitz--Ladik hierarchy (cf.~appendix A of~\cite{Tsuchi02} for the negative flows) and each of them is uniquely determined by its linear part or, equivalently, the dispersion relation. \subsection{Space-discrete Gerdjikov--Ivanov system} \label{subs2.2} In this subsection, using the method proposed in~\cite{TsuJMP11}, we derive the space-discrete Gerdjikov--Ivanov system from the Lax representation for the Ablowitz--Ladik lattice. 
The result is essentially the same as that given in~\cite{TsuJMP11}, but we restate it here for the self-containedness and readability of the paper. We consider a \mbox{$2l \times l$} matrix-valued solution to the pair of linear equations (\ref{mAL1}) and (\ref{mAL-time}) such that $\Psi_{1,n}$ is an \mbox{$l \times l$} invertible matrix. Then, in terms of the \mbox{$l \times l$} matrix \mbox{$P_n := \Psi_{2,n} \Psi_{1,n}^{-1}$}, (\ref{mAL1}) and (\ref{mAL-time}) can be rewritten as a pair of discrete and continuous matrix Riccati equations for $P_n$, \begin{subequations} \label{AL-R} \begin{align} & R_n = \mu P_{n} - P_{n+1} + \mu P_{n}Q_n P_{n+1}, \label{AL-R1} \\[2mm] & P_{n,t} = -b R_{n} -\mu^{-1} a R_{n-1} +(1-\mu^{-1})a P_n +( \mu -1 ) b P_n \nonumber \\ & \hphantom{P_{n,t} =}\hspace{1pt} +a R_{n-1} Q_{n} P_n - b P_n Q_{n-1} R_{n} + \mu b P_n Q_{n-1} P_n + a P_n Q_n P_n, \label{AL-R2} \end{align} \end{subequations} where \mbox{$\mu := z^2$}. The first relation (\ref{AL-R1}) defines the Miura map \mbox{$(Q_n,P_n) \mapsto (Q_n,R_n)$}. Using (\ref{AL-R1}), we can eliminate $R_n$ and $R_{n-1}$ in (\ref{AL-Qt}) and (\ref{AL-R2}) to obtain a closed system for \mbox{$(Q_n,P_n)$}, i.e., the space-discrete Gerdjikov--Ivanov system~\cite{Tsuchi02}: \begin{subnumcases}{\label{sdGI}} {} Q_{n,t}- a Q_{n+1} + b Q_{n-1} + (a-b)Q_n + a Q_{n} \left( \mu P_{n} - P_{n+1} \right) Q_{n+1} \nonumber \\ \mbox{}- b Q_{n-1} \left( \mu P_{n} - P_{n+1} \right) Q_{n} + a \mu Q_{n} P_{n}Q_n P_{n+1} Q_{n+1} - b \mu Q_{n-1} P_{n}Q_n P_{n+1} Q_{n} =O, \hspace{12mm} \label{} \\[2mm] P_{n,t} -b P_{n+1} + a P_{n-1} +(b-a)P_n -b P_{n} \left( Q_{n-1} - \mu Q_n \right) P_{n+1} \nonumber \\ \mbox{}+a P_{n-1}\left( Q_{n-1} -\mu Q_n \right) P_{n} + b \mu P_{n} Q_{n-1} P_{n}Q_{n} P_{n+1} - a \mu P_{n-1} Q_{n-1} P_{n}Q_{n} P_{n} =O. \label{} \end{subnumcases} \subsection{Space-discrete Kaup--Newell system} \label{subs2.3} In this subsection, we propose yet another method of generating new integrable systems from known systems using a fundamental set of linear eigenfunctions. The method is applicable to a matrix Lax representation of arbitrary size, but for brevity we describe it in the simplest case of a \mbox{$2 \times 2$} Lax representation as well as its block-matrix generalization involving two potentials. In this paper, we consider the space-discrete case (see~\cite{talk08} for the continuous case). Suppose that a lattice system admits the Lax representation \begin{subequations} \label{gen-Lax} \begin{align} \label{gen-Lax1} & \left[ \begin{array}{c} \Psi_{1, n}^{(j)} \\ \Psi_{2, n}^{(j)} \\ \end{array} \right] = \left[ \begin{array}{cc} L_{11,n} & L_{12,n} \\ L_{21,n} & L_{22,n} \\ \end{array} \right] \left[ \begin{array}{c} \Psi_{1, n+1}^{(j)} \\ \Psi_{2, n+1}^{(j)} \\ \end{array} \right], \\[2.5mm] \label{gen-Lax2} & \left[ \begin{array}{c} \Psi_{1, n}^{(j)} \\ \Psi_{2, n}^{(j)} \\ \end{array} \right]_t = \left[ \begin{array}{cc} M_{11,n} & M_{12,n} \\ M_{21,n} & M_{22,n} \\ \end{array} \right] \left[ \begin{array}{c} \Psi_{1, n}^{(j)} \\ \Psi_{2, n}^{(j)} \\ \end{array} \right], \end{align} \end{subequations} where all the entries are assumed to be square matrices of the same size. Because the method requires a full set of linearly independent eigenfunctions, the superscript ${}^{(j)}$ with $j=1$ or $2$ is used to designate the two eigenfunctions. 
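For a Lax pair of the form (\ref{gen-Lax}), the compatibility condition together with the isospectral condition \mbox{$z_t=0$} reads \mbox{$L_{n,t} = M_n L_n - L_n M_{n+1}$}. The following minimal \texttt{sympy} sketch is an editorial illustration (not part of the original derivation; all symbol names are ours): it verifies, in the scalar case \mbox{$l=1$}, that this condition applied to the Ablowitz--Ladik Lax pair (\ref{mAL1}) and (\ref{mAL-time}) reproduces the equations of motion (\ref{mALsys}).
\begin{verbatim}
import sympy as sp

z, a, b = sp.symbols('z a b')
Qm, Q, Qp = sp.symbols('Qm Q Qp')   # Q_{n-1}, Q_n, Q_{n+1}
Rm, R, Rp = sp.symbols('Rm R Rp')   # R_{n-1}, R_n, R_{n+1}

def Lmat(Qn, Rn):                   # spatial Lax matrix at site n
    return sp.Matrix([[z, z*Qn], [Rn/z, 1/z]])

def Mmat(Qprev, Qn, Rprev, Rn):     # temporal Lax matrix at site n
    return sp.Matrix([[(-z**2 + 1)*b + b*Qprev*Rn, -z**2*b*Qprev - a*Qn],
                      [-b*Rn - a*Rprev/z**2, (1 - 1/z**2)*a + a*Rprev*Qn]])

# Time derivatives Q_{n,t}, R_{n,t} as given by the Ablowitz-Ladik equations:
Qt = a*Qp - b*Qm - (a - b)*Q - a*Q*R*Qp + b*Qm*R*Q
Rt = b*Rp - a*Rm - (b - a)*R - b*R*Q*Rp + a*Rm*Q*R
Lt = sp.Matrix([[0, z*Qt], [Rt/z, 0]])

# M_n L_n - L_n M_{n+1} - L_{n,t} should vanish identically:
residual = (Mmat(Qm, Q, Rm, R)*Lmat(Q, R)
            - Lmat(Q, R)*Mmat(Q, Qp, R, Rp) - Lt).expand()
print(residual)   # prints the 2x2 zero matrix
\end{verbatim}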
We apply a gauge transformation defined using one eigenfunction to the other eigenfunction as \[ \left[ \begin{array}{c} \Psi_{1, n}^{(2)} \\ \Psi_{2, n}^{(2)} \\ \end{array} \right] \mapsto \left[ \begin{array}{cc} \Psi_{1, n}^{(1)\, -1} & O \\ O & \Psi_{2, n}^{(1)\, -1} \\ \end{array} \right] \left[ \begin{array}{c} \Psi_{1, n}^{(2)} \\ \Psi_{2, n}^{(2)} \\ \end{array} \right] = \left[ \begin{array}{c} \Psi_{1, n}^{(1)\, -1} \Psi_{1, n}^{(2)} \\ \Psi_{2, n}^{(1)\, -1} \Psi_{2, n}^{(2)} \\ \end{array} \right]. \] Then, the Lax representation (\ref{gen-Lax}) is transformed to the degenerate form: \begin{align} & \left[ \begin{array}{c} \Psi_{1, n}^{(1)\, -1} \Psi_{1, n}^{(2)} \\ \Psi_{2, n}^{(1)\, -1} \Psi_{2, n}^{(2)} \\ \end{array} \right] = \left[ \begin{array}{cc} I- \Psi_{1, n}^{(1)\, -1} L_{12,n} \Psi_{2, n+1}^{(1)} & \Psi_{1, n}^{(1)\, -1} L_{12,n} \Psi_{2, n+1}^{(1)} \\ \Psi_{2, n}^{(1)\, -1} L_{21,n} \Psi_{1, n+1}^{(1)} & I- \Psi_{2, n}^{(1)\, -1} L_{21,n} \Psi_{1, n+1}^{(1)} \\ \end{array} \right] \left[ \begin{array}{c} \Psi_{1, n+1}^{(1)\, -1} \Psi_{1, n+1}^{(2)} \\ \Psi_{2, n+1}^{(1)\, -1} \Psi_{2, n+1}^{(2)} \\ \end{array} \right], \nonumber \\[2.5mm] & \left[ \begin{array}{c} \Psi_{1, n}^{(1)\, -1} \Psi_{1, n}^{(2)} \\ \Psi_{2, n}^{(1)\, -1} \Psi_{2, n}^{(2)} \\ \end{array} \right]_t = \left[ \begin{array}{cc} -\Psi_{1, n}^{(1)\, -1} M_{12,n} \Psi_{2, n}^{(1)} & \Psi_{1, n}^{(1)\, -1} M_{12,n} \Psi_{2, n}^{(1)} \\ \Psi_{2, n}^{(1)\, -1} M_{21,n} \Psi_{1, n}^{(1)} & -\Psi_{2, n}^{(1)\, -1} M_{21,n} \Psi_{1, n}^{(1)} \\ \end{array} \right] \left[ \begin{array}{c} \Psi_{1, n}^{(1)\, -1} \Psi_{1, n}^{(2)} \\ \Psi_{2, n}^{(1)\, -1} \Psi_{2, n}^{(2)} \\ \end{array} \right]. \nonumber \end{align} Indeed, there are only \mbox{$2+2$} independent quantities in these Lax matrices. Thus, we can express them in terms of the components of the linear eigenfunction as \begin{align} & \Psi_{1,n}^{(1)\,-1} L_{12,n} \Psi_{2,n+1}^{(1)} = \left( \Psi_{1,n}^{(1)\,-1} \Psi_{1,n}^{(2)} - \Psi_{1,n+1}^{(1)\,-1} \Psi_{1,n+1}^{(2)} \right) \left( \Psi_{2,n+1}^{(1)\,-1} \Psi_{2,n+1}^{(2)} - \Psi_{1,n+1}^{(1)\,-1} \Psi_{1,n+1}^{(2)} \right)^{-1}, \label{psi1-uv} \\ & \Psi_{2,n}^{(1)\,-1} L_{21,n} \Psi_{1,n+1}^{(1)} = \left( \Psi_{2,n}^{(1)\,-1} \Psi_{2,n}^{(2)} - \Psi_{2,n+1}^{(1)\,-1} \Psi_{2,n+1}^{(2)} \right) \left( \Psi_{1,n+1}^{(1)\,-1} \Psi_{1,n+1}^{(2)} - \Psi_{2,n+1}^{(1)\,-1} \Psi_{2,n+1}^{(2)} \right)^{-1}, \label{psi2-uv} \end{align} and \begin{align} & \Psi_{1,n}^{(1)\,-1} M_{12,n} \Psi_{2,n}^{(1)} = \left( \Psi_{1,n}^{(1)\,-1} \Psi_{1,n}^{(2)} \right)_t \left( \Psi_{2,n}^{(1)\,-1} \Psi_{2,n}^{(2)} -\Psi_{1,n}^{(1)\,-1} \Psi_{1,n}^{(2)} \right)^{-1}, \label{M12-uv} \\ & \Psi_{2,n}^{(1)\,-1} M_{21,n} \Psi_{1,n}^{(1)} = \left( \Psi_{2,n}^{(1)\,-1} \Psi_{2,n}^{(2)} \right)_t \left( \Psi_{1,n}^{(1)\,-1} \Psi_{1,n}^{(2)} -\Psi_{2,n}^{(1)\,-1} \Psi_{2,n}^{(2)} \right)^{-1}. \label{M21-uv} \end{align} Note that (\ref{gen-Lax1}) also implies the two important relations: \begin{align} \label{L11-12} & \Psi_{1,n}^{(1)\,-1} L_{11,n} \Psi_{1,n+1}^{(1)} = I - \Psi_{1,n}^{(1)\,-1} L_{12,n} \Psi_{2,n+1}^{(1)}, \\[1mm] \label{L22-21} & \Psi_{2,n}^{(1)\,-1} L_{22,n} \Psi_{2,n+1}^{(1)} = I - \Psi_{2,n}^{(1)\,-1} L_{21,n} \Psi_{1,n+1}^{(1)}. 
\end{align} Typically, $L_{11,n}$ and $L_{22,n}$ are ultralocal functions of $L_{12,n}$ and $L_{21,n}$, such as \[ L_{11,n} = \alpha I + \beta L_{12,n} L_{21,n}, \hspace{5mm} L_{22,n} = \gamma I + \delta L_{21,n} L_{12,n}, \] where $\alpha$, $\beta$, $\gamma$ and $\delta$ are scalar functions of the spectral parameter. Thus, we can try to solve (\ref{L11-12}) and (\ref{L22-21}) to express $\Psi_{1,n}^{(1)\,-1} \Psi_{1,n+1}^{(1)}$ and $\Psi_{2,n}^{(1)\,-1} \Psi_{2,n+1}^{(1)}$ in terms of $\Psi_{1,n}^{(1)\,-1} L_{12,n} \Psi_{2,n+1}^{(1)}$ and $\Psi_{2,n}^{(1)\,-1} L_{21,n} \Psi_{1,n+1}^{(1)}$. This is relatively easy if either $L_{11,n}$ or $L_{22,n}$ is a constant scalar matrix, {\it e.g.}, $\beta$ or $\delta$ vanishes in the above example. Then, using (\ref{psi1-uv}) and (\ref{psi2-uv}), we can also express \begin{align} & \Psi_{1,n}^{(1)\,-1} L_{12,n} \Psi_{2,n}^{(1)} = \Psi_{1,n}^{(1)\,-1} L_{12,n} \Psi_{2,n+1}^{(1)} \left( \Psi_{2,n}^{(1)\,-1} \Psi_{2,n+1}^{(1)} \right)^{-1}, \nonumber \\[1mm] & \Psi_{1,n+1}^{(1)\,-1} L_{12,n} \Psi_{2,n+1}^{(1)} = \left( \Psi_{1,n}^{(1)\,-1} \Psi_{1,n+1}^{(1)} \right)^{-1} \Psi_{1,n}^{(1)\,-1} L_{12,n} \Psi_{2,n+1}^{(1)}, \nonumber \\[1mm] & \Psi_{2,n}^{(1)\,-1} L_{21,n} \Psi_{1,n}^{(1)} = \Psi_{2,n}^{(1)\,-1} L_{21,n} \Psi_{1,n+1}^{(1)} \left( \Psi_{1,n}^{(1)\,-1} \Psi_{1,n+1}^{(1)} \right)^{-1}, \nonumber \\[1mm] & \Psi_{2,n+1}^{(1)\,-1} L_{21,n} \Psi_{1,n+1}^{(1)} = \left( \Psi_{2,n}^{(1)\,-1} \Psi_{2,n+1}^{(1)} \right)^{-1} \Psi_{2,n}^{(1)\,-1} L_{21,n} \Psi_{1,n+1}^{(1)}, \hspace{5mm} \mathrm{etc.} \nonumber \end{align} recursively in terms of \begin{equation} \label{uv-def} u_n := \Psi_{1,n}^{(1)\,-1} \Psi_{1,n}^{(2)}, \hspace{5mm} v_n := \Psi_{2,n}^{(1)\,-1} \Psi_{2,n}^{(2)}. \end{equation} In general, $M_{12,n}$ and $M_{21,n}$ are local (but not ultralocal) functions of $L_{12,n}$ and $L_{21,n}$. Therefore, with the aid of the above relations, (\ref{M12-uv}) and (\ref{M21-uv}) can be rewritten as a closed lattice system for $u_n$ and $v_n$. Let us illustrate the method using the matrix Ablowitz--Ladik lattice (\ref{mALsys}) as an example. By setting \[ L_{11,n} = z I, \hspace{5mm} L_{22,n} = z^{-1} I, \] (\ref{L11-12}) and (\ref{L22-21}) provide the useful relations \begin{align} \label{aux1} & \Psi_{1,n+1}^{(1)\,-1} \Psi_{1,n}^{(1)} = z \left( I - \Psi_{1,n}^{(1)\,-1} L_{12,n} \Psi_{2,n+1}^{(1)} \right)^{-1}, \\[1mm] \label{aux2} & \Psi_{2,n+1}^{(1)\, -1} \Psi_{2,n}^{(1)} = z^{-1} \left( I - \Psi_{2,n}^{(1)\,-1} L_{21,n} \Psi_{1,n+1}^{(1)} \right)^{-1}. \end{align} The Lax representation, (\ref{mAL1}) and (\ref{mAL-time}), implies simple relations between off-diagonal elements of the temporal and spatial Lax matrices, i.e. \begin{align} \nonumber & M_{12,n} + z^{-1}a L_{12,n} +z b L_{12,n-1} =O, \\[1mm] & M_{21,n} + z b L_{21,n} +z^{-1} a L_{21,n-1} =O, \nonumber \end{align} which can be rewritten as \begin{align} \nonumber \Psi_{1,n}^{(1)\,-1} M_{12,n} \Psi_{2,n}^{(1)} & + z^{-1} a \Psi_{1,n}^{(1)\,-1} L_{12,n} \Psi_{2,n+1}^{(1)} \Psi_{2,n+1}^{(1)\, -1} \Psi_{2,n}^{(1)} \\ & + z b \Psi_{1,n}^{(1)\, -1} \Psi_{1,n-1}^{(1)} \Psi_{1,n-1}^{(1)\,-1} L_{12,n-1} \Psi_{2,n}^{(1)} =O, \nonumber \\[2mm] \Psi_{2,n}^{(1)\,-1} M_{21,n} \Psi_{1,n}^{(1)} & + z b \Psi_{2,n}^{(1)\,-1} L_{21,n} \Psi_{1,n+1}^{(1)} \Psi_{1,n+1}^{(1)\, -1} \Psi_{1,n}^{(1)} \nonumber \\ & + z^{-1} a \Psi_{2,n}^{(1)\, -1} \Psi_{2,n-1}^{(1)} \Psi_{2,n-1}^{(1)\,-1} L_{21,n-1} \Psi_{1,n}^{(1)} = O. 
\nonumber \end{align} Substituting (\ref{aux1}) and (\ref{aux2}) and subsequently (\ref{psi1-uv})--(\ref{M21-uv}) into the above relations, we arrive at a closed system for $\Psi_{1,n}^{(1)\,-1} \Psi_{1,n}^{(2)}$ and $\Psi_{2,n}^{(1)\,-1} \Psi_{2,n}^{(2)}$. \\ \begin{proposition} Consider two linearly independent solutions of (\ref{mAL1}) and (\ref{mAL-time}) and write them as in (\ref{gen-Lax}) with $j=1$ or $2$. Then, $u_n$ and $v_n$ defined in (\ref{uv-def}) satisfy the following system: \begin{subnumcases}{\label{lHF1}} {} u_{n,t} + \frac{a}{\mu } \left( u_n - u_{n+1} \right) \left[ I + (v_n-u_n)^{-1}(u_n-u_{n+1}) \right]^{-1} \nonumber \\ \hphantom{u_{n,t}} +\mu b \left[ I - (u_{n-1}-u_n) (v_n-u_n)^{-1}\right]^{-1} (u_{n-1}-u_n) = O, \hspace{10mm} \\[2mm] v_{n,t} + \mu b \left( v_n -v_{n+1} \right) \left[ I + (u_n-v_n)^{-1} (v_n -v_{n+1})\right]^{-1} \nonumber \\ \hphantom{v_{n,t}} +\frac{a}{\mu } \left[ I - (v_{n-1} -v_n ) (u_n -v_n)^{-1}\right]^{-1} (v_{n-1} - v_n ) = O, \end{subnumcases} where \mbox{$\mu = z^2$}. \label{prop1} \end{proposition} \noindent {\it Remarks:} \\ (i) The system (\ref{lHF1}) with \mbox{$\mu b = (a/\mu )^\ast$} allows both the complex conjugation reduction \mbox{$v_n = \sigma u_n^\ast$} and the Hermitian conjugation reduction \mbox{$v_n = \sigma u_n^\dagger$}, where \mbox{$\sigma = \pm 1$}. In addition, by setting \mbox{$v_n = -u_n$}, (\ref{lHF1}) with \mbox{$\mu b = a/\mu $} reduces to a single matrix equation, \[ u_{n,t} = u_n \left[ \left( u_{n+1} + u_n \right)^{-1} - \left( u_n + u_{n-1} \right)^{-1} \right] u_n, \] up to a rescaling of $t$. In the scalar case, this belongs to Yamilov's list of Volterra-type lattices in~\cite{Yamilov83}; in the matrix case, it allows further reductions to various multicomponent systems (cf.~\cite{AdSviYam99}). \\ \\ (ii) The system (\ref{lHF1}) provides a space-discrete analog of the system studied by Svinolupov and Sokolov~\cite{SviSok94}; their system gives a matrix generalization of the Heisenberg ferromagnet model written in a two-component form~\cite{MS2,MikShYam87}. Indeed, (\ref{lHF1}) in the scalar case is closely related to the lattice Heisenberg ferromagnet model~\cite{Ishi82} and its simplest higher symmetry~\cite{GIV86,Papanico}. \\ \\ (iii) The system (\ref{lHF1}) in the scalar case appeared in the recent paper~\cite{Veks11} (also see~\cite{AdSha06}). In this context, it is natural to rewrite (\ref{lHF1}) as a two-component system for the pair of variables $u_n$ and $v_n^{-1}$ (cf.~(4.8) in~\cite{MS2}). \\ In (\ref{lHF1}), the two variables $u_n$ and $v_n$ interact with each other through the quantity \mbox{$\left( v_n - u_n \right)^{-1}$}, which can be used as a new dependent variable. Indeed, a direct calculation shows the following two propositions. \\ \begin{proposition} Let $u_n$ and $v_n$ satisfy the system (\ref{lHF1}). 
Then, the new pair of variables $q_n$ and $r_n$,
\[
q_n :=\boldsymbol{\Delta}_n^+ u_n \left( = u_{n+1} - u_{n} \right) , \hspace{5mm} r_n := \left( v_n - u_n \right)^{-1},
\]
satisfies the space-discrete Kaup--Newell system~\cite{Tsuchi02}:
\begin{subnumcases}{\label{sdKN}}
{} q_{n,t} - \boldsymbol{\Delta}_n^+ \left[ \frac{a}{\mu } \left( I - q_{n}r_{n} \right)^{-1} q_{n} + \mu b \left( I + q_{n-1} r_{n} \right)^{-1} q_{n-1} \right] = O, \\[1mm]
r_{n,t} - \boldsymbol{\Delta}_n^+ \left[ \mu b \left( I + r_{n} q_{n-1} \right)^{-1} r_{n} + \frac{a}{\mu } \left( I - r_{n-1} q_{n-1} \right)^{-1} r_{n-1} \right] = O, \hspace{15mm}
\end{subnumcases}
where $\boldsymbol{\Delta}_n^+$ denotes the forward difference operator.
\\
\label{prop2}
\end{proposition}
\begin{proposition}
Let $u_n$ and $v_n$ satisfy the system (\ref{lHF1}). Then, the new pair of variables $\widetilde{q}_n$ and $\widetilde{r}_n$,
\[
\widetilde{q}_n := \left( v_n - u_n \right)^{-1}, \hspace{5mm} \widetilde{r}_n := \boldsymbol{\Delta}_n^+ v_{n-1} \left( = v_{n} - v_{n-1} \right),
\]
also satisfies the space-discrete Kaup--Newell system (\ref{sdKN}) for $\widetilde{q}_n$ and $\widetilde{r}_n$.
\\
\label{prop3}
\end{proposition}
\section{Inverse scattering method for the Ablowitz--Ladik lattice}
\label{sec3}
\subsection{Revisiting the Ablowitz--Ladik eigenvalue problem}
In this section, we describe the inverse scattering method associated with the matrix Ablowitz--Ladik eigenvalue problem (\ref{mAL1}),
\begin{equation}
\left[ \begin{array}{c} \Psi_{1, n} \\ \Psi_{2, n} \\ \end{array} \right] = \left[ \begin{array}{cc} z I & z Q_n \\ z^{-1} R_n & z^{-1} I \\ \end{array} \right] \left[ \begin{array}{c} \Psi_{1, n+1} \\ \Psi_{2, n+1} \\ \end{array} \right]. \label{mAL0}
\end{equation}
Here, the potentials $Q_n$ and $R_n$ are assumed to decay sufficiently rapidly at spatial infinity:
\begin{equation}
\label{zero-bc}
\lim_{n \to \pm \infty} Q_n = \lim_{n \to \pm \infty} R_n =O.
\end{equation}
The matrix generalization of the Ablowitz--Ladik lattice~\cite{AL76}, first considered in the early 1980s~\cite{GI82}, is still a topic of interest in discrete integrable systems (see~\cite{Tsuchi02,APT, DM2010} and references therein). In our previous papers~\cite{TUW98,TUW99}, we presented the inverse scattering method for the matrix Ablowitz--Ladik lattice while assuming some symmetry conditions on the potentials $Q_n$ and $R_n$. Here, we remove such assumptions and consider the general case of \mbox{$l \times l$} square matrices $Q_n$ and $R_n$; recall that the results on rectangular matrix potentials can be obtained by setting some rows/columns of $Q_n$ and $R_n$ to zero. The inverse scattering method reported here bypasses some redundant computations and considerations found in the existing literature on the subject, so we believe that it is the most streamlined version available. The results in the previous work~\cite{TUW98,TUW99} can be reproduced by imposing some reduction conditions on the scattering data; this is briefly sketched in appendix~\ref{app2}\@. In addition, we fix some minor inconsistencies in~\cite{TUW98,TUW99}, though they do not affect the main results of these papers. All the flows of the matrix Ablowitz--Ladik hierarchy are associated with the same eigenvalue problem (\ref{mAL0}), so they can be solved together by the inverse scattering method. However, in the following, we concentrate on the matrix Ablowitz--Ladik lattice (\ref{mALsys}) to illustrate the method in an easy-to-read manner.
We stress that in contrast to other methods of obtaining special solutions, the inverse scattering method can provide general solution formulas. Moreover, the method can determine not only the potentials $Q_n$ and $R_n$ but also a fundamental set of linear eigenfunctions, which will be used in section~4.
\subsection{Jost solutions and relevant quantities}
\label{subs3.2}
To analyze the general case of the matrix potentials $Q_n$ and $R_n$, we consider the adjoint equation,\footnote{Note that if we consider a square matrix solution $\Psi_n$ to the eigenvalue problem \mbox{$\Psi_{n} = L_n(z) \Psi_{n+1}$}, then its inverse \mbox{$\Phi_n := \Psi_n^{-1}$} satisfies the eigenvalue problem \mbox{$\Phi_{n+1} = \Phi_{n} L_n (z)$}.}
\begin{equation}
\left[ \begin{array}{cc} \! \Phi_{1,n+1} \! & \! \Phi_{2,n+1} \! \end{array} \right] = \left[ \begin{array}{cc} \! \Phi_{1,n} \! & \! \Phi_{2,n} \! \end{array} \right] \left[ \begin{array}{cc} z I & z Q_n \\ z^{-1} R_n & z^{-1} I \\ \end{array} \right]. \label{mAL2}
\end{equation}
Indeed, a discrete analog of Lagrange's identity,
\begin{align}
& \left[ \begin{array}{cc} \! \Phi_{1,n} \! & \! \Phi_{2,n} \! \end{array} \right] \left\{ \left[ \begin{array}{c} \Psi_{1, n} \\ \Psi_{2, n} \\ \end{array} \right] - \left[ \begin{array}{cc} z I & z Q_n \\ z^{-1} R_n & z^{-1} I \\ \end{array} \right] \left[ \begin{array}{c} \Psi_{1, n+1} \\ \Psi_{2, n+1} \\ \end{array} \right] \right\} \nonumber \\
& \mbox{} - \left\{ \left[ \begin{array}{cc} \! \Phi_{1,n+1} \! & \! \Phi_{2,n+1} \! \end{array} \right] - \left[ \begin{array}{cc} \! \Phi_{1,n} \! & \! \Phi_{2,n} \! \end{array} \right] \left[ \begin{array}{cc} z I & z Q_n \\ z^{-1} R_n & z^{-1} I \\ \end{array} \right] \right\} \left[ \begin{array}{c} \Psi_{1, n+1} \\ \Psi_{2, n+1} \\ \end{array} \right] \nonumber \\[1mm]
& = - \boldsymbol{\Delta}_n^+ \left\{ \left[ \begin{array}{cc} \! \Phi_{1,n} \! & \! \Phi_{2,n} \! \end{array} \right] \left[ \begin{array}{c} \Psi_{1, n} \\ \Psi_{2, n} \\ \end{array} \right] \right\}, \nonumber
\end{align}
shows that (\ref{mAL0}) and (\ref{mAL2}) are adjoint to each other. Thus, we can introduce an $l \times l$ matrix function \mbox{$W[ \, \cdot\, , \, \cdot \,]$} for a pair of solutions to (\ref{mAL0}) and (\ref{mAL2}) as
\[
W [ \Phi_n, \Psi_n ] := \left[ \begin{array}{cc} \! \Phi_{1,n} \! & \! \Phi_{2,n} \! \end{array} \right] \left[ \begin{array}{c} \Psi_{1,n} \\ \Psi_{2,n} \\ \end{array} \right] = \Phi_{1,n} \Psi_{1,n} + \Phi_{2,n} \Psi_{2,n},
\]
which is $n$-independent:
\[
W [\Phi_{n}, \Psi_{n}] = W [\Phi_{n+1}, \Psi_{n+1}].
\]
In addition to the rapidly decaying boundary conditions (\ref{zero-bc}), we assume that the spatial Lax matrix defining the eigenvalue problem is invertible:
\begin{equation}
\det \left( I-Q_n R_n \right) \neq 0, \hspace{3mm} \forall\hspace{1pt} n\in {\mathbb Z}.
\label{nonzero-det}
\end{equation}
Because \mbox{$\log \left[ \det \left( I-Q_n R_n \right) \right]$} is a conserved density for the matrix Ablowitz--Ladik hierarchy~\cite{GI82}, this assumption is preserved under the time evolution. Thus, column-vector (or row-vector) solutions to the eigenvalue problem (\ref{mAL0}) (or (\ref{mAL2})) that are linearly independent at some lattice site, say \mbox{$n=n_0$}, remain linearly independent for all \mbox{$n \in {\mathbb Z}$}.
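The $n$-independence of $W[\Phi_n, \Psi_n]$ is also easy to observe numerically. The following sketch (our own illustration, scalar case \mbox{$l=1$}, with randomly chosen compactly supported potentials and \texttt{numpy}) propagates an arbitrary solution of (\ref{mAL0}) from the right end and an arbitrary solution of (\ref{mAL2}) from the left end and checks that $W$ takes the same value at every site up to rounding errors.
\begin{verbatim}
# Numerical sketch (scalar case l = 1, our own illustration): W[Phi_n, Psi_n]
# is the same at every lattice site for any pair of solutions of the
# eigenvalue problem and its adjoint.
import numpy as np

rng = np.random.default_rng(1)
N = 40
Q = 0.2*(rng.standard_normal(N) + 1j*rng.standard_normal(N))
R = 0.2*(rng.standard_normal(N) + 1j*rng.standard_normal(N))
z = np.exp(0.37j)                          # any nonzero spectral parameter

Lmat = [np.array([[z, z*Q[n]], [R[n]/z, 1/z]]) for n in range(N)]

Psi = [None]*(N + 1)
Psi[N] = np.array([1.0 + 0.5j, -0.3j])     # arbitrary data at the right end
for n in range(N - 1, -1, -1):             # Psi_n = L_n Psi_{n+1}
    Psi[n] = Lmat[n] @ Psi[n + 1]

Phi = [None]*(N + 1)
Phi[0] = np.array([0.7 + 0.0j, 1.2 - 0.4j])  # arbitrary data at the left end
for n in range(N):                         # Phi_{n+1} = Phi_n L_n
    Phi[n + 1] = Phi[n] @ Lmat[n]

W = np.array([Phi[n] @ Psi[n] for n in range(N + 1)])
print(np.max(np.abs(W - W[0]))/abs(W[0]))  # of the order of machine precision
\end{verbatim}
Exactly the same bookkeeping applies in the matrix case, with $W$ becoming an \mbox{$l \times l$} matrix.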
We introduce Jost solutions $\phi_n (z)$, $\accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n (z)$ and $\psi_n (z)$, $\accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n (z)$ at a fixed time that satisfy (\ref{mAL0}) and the boundary conditions,
\begin{subequations}
\label{leftJost}
\begin{equation}
\left. \begin{array}{l} z^{n} \phi_n \to \left[ \begin{array}{c} I \\ O \\ \end{array} \right] \vspace{2mm} \\ z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n \to \left[ \begin{array}{c} O \\ -I \\ \end{array} \right] \end{array} \right\} \hspace{4mm} {\rm as}~~~ n \rightarrow -\infty \label{phi_bar}
\end{equation}
%
and
%
\begin{equation}
\left. \begin{array}{l} z^{-n} \psi_n \to \left[ \begin{array}{c} O \\ I \\ \end{array} \right] \vspace{2mm} \\ z^{n} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n \to \left[ \begin{array}{c} I \\ O \\ \end{array} \right] \end{array} \right\} \hspace{4mm} {\rm as}~~~ n \rightarrow +\infty. \label{psi_bar}
\end{equation}
\end{subequations}
The time evolution of the Jost solutions will be considered in subsection~\ref{sTDsd}. Note that the overbar does {\it not} mean complex conjugation in this paper. Similarly, we introduce adjoint Jost solutions $\phi_n^{\mathrm{ad}} (z)$, $\accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n^{\mathrm{ad}} (z)$ and $\psi_n^{\mathrm{ad}}(z)$, $\accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\mathrm{ad}} (z)$ that satisfy (\ref{mAL2}) and the boundary conditions,
\begin{subequations}
\label{rightJost}
\begin{equation}
\left. \begin{array}{l} z^{n} \phi_n^{\mathrm{ad}} \to \left[ \begin{array}{cc} \! O \! & \! -I \! \end{array} \right] \vspace{2mm} \\ z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n^{\mathrm{ad}} \to \left[ \begin{array}{cc} \! I \! & \! O \! \end{array} \right] \end{array} \right\} \hspace{4mm} {\rm as}~~~ n \rightarrow -\infty
\end{equation}
%
and
%
\begin{equation}
\left. \begin{array}{l} z^{-n} \psi_n^{\mathrm{ad}} \to \left[ \begin{array}{cc} \! I \! & \! O \! \end{array} \right] \vspace{2mm} \\ z^{n} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\mathrm{ad}} \to \left[ \begin{array}{cc} \! O \! & \! I \! \end{array} \right] \end{array} \right\} \hspace{4mm} {\rm as}~~~ n \rightarrow +\infty. \label{psi_ad}
\end{equation}
\end{subequations}
Because the \mbox{$l+l\hspace{2pt}(=2l)$} columns of the Jost solutions $\psi_n$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n$ form a fundamental set of solutions to the eigenvalue problem (\ref{mAL0}), we can expand $\phi_n$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n$ on the unit circle \mbox{$|z|=1$} as
%
%
\begin{subequations}
\label{ref102}
\begin{align}
\phi_n (z) & = \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n (z) A + \psi_n (z)B, \label{phi_relation} \\[0.5mm]
\accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n (z) &= \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n (z) \accentset{{\cc@style\underline{\mskip10mu}}}{B} - \psi_n (z) \accentset{{\cc@style\underline{\mskip10mu}}}{A}. \label{phi_bar_relation}
\end{align}
\end{subequations}
Here, $A$, $B$, $\accentset{{\cc@style\underline{\mskip10mu}}}{B}$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{A}$ are $n$-independent \mbox{$l \times l$} matrices, which depend on the spectral parameter $z$ and are called scattering data.
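For compactly supported potentials, the scattering data can be computed directly from these definitions. The following numerical sketch (our own illustration, scalar case \mbox{$l=1$}) constructs $\phi_n$, $\psi_n$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n$ from their boundary conditions through the recursion \mbox{$\Psi_{n} = L_n \Psi_{n+1}$} and then solves (\ref{phi_relation}) for $A$ and $B$; the $n$-independence of the result serves as a consistency check.
\begin{verbatim}
# Numerical sketch (scalar case l = 1, our own illustration): for a potential
# supported on 0 <= n < Ns, build the Jost solutions from their boundary
# conditions and read off the scattering data A, B from
#   phi_n = psibar_n A + psi_n B,
# which must give the same (A, B) at every lattice site n.
import numpy as np

rng = np.random.default_rng(2)
Ns = 8
Q = 0.3*(rng.standard_normal(Ns) + 1j*rng.standard_normal(Ns))
R = 0.3*(rng.standard_normal(Ns) + 1j*rng.standard_normal(Ns))
z = np.exp(0.21j)                            # spectral parameter on |z| = 1

L = [np.array([[z, z*Q[n]], [R[n]/z, 1/z]]) for n in range(Ns)]

# left Jost solution: z^n phi_n -> (I, O)^T as n -> -infinity, so phi_0 = (1, 0)^T
phi = [None]*(Ns + 1)
phi[0] = np.array([1.0, 0.0], dtype=complex)
for n in range(Ns):                          # phi_n = L_n phi_{n+1}
    phi[n + 1] = np.linalg.solve(L[n], phi[n])

# right Jost solutions: psi_n = z^n (O, I)^T, psibar_n = z^{-n} (I, O)^T for n >= Ns
psi = [None]*(Ns + 1); psibar = [None]*(Ns + 1)
psi[Ns] = np.array([0.0, z**Ns]); psibar[Ns] = np.array([z**(-Ns), 0.0])
for n in range(Ns - 1, -1, -1):
    psi[n] = L[n] @ psi[n + 1]
    psibar[n] = L[n] @ psibar[n + 1]

for n in (0, Ns//2, Ns):                     # solve phi = psibar*A + psi*B
    A, B = np.linalg.solve(np.column_stack([psibar[n], psi[n]]), phi[n])
    print(n, A, B)                           # the same (A, B) at every n
\end{verbatim}
In the presence of bound states, the discrete part of the scattering data, introduced in subsection~\ref{GLM_eq}, has to be determined as well.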
According to the asymptotic behaviors of the Jost solutions (\ref{leftJost})--(\ref{rightJost}), we can express them on \mbox{$|z|=1$} as \begin{subequations} \label{W_rel} \begin{align} A &= W[ \psi_n^{\mathrm{ad}}, \phi_n ], \label{rep1} \\[1mm] B &= W[ \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\mathrm{ad}}, \phi_n ], \label{rep3} \\[1mm] \accentset{{\cc@style\underline{\mskip10mu}}}{B} &= W [ \psi_n^{\mathrm{ad}}, \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n ], \label{rep4} \\[1mm] \accentset{{\cc@style\underline{\mskip10mu}}}{A} &= - W [ \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\mathrm{ad}}, \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n ]. \label{rep2} \end{align} \end{subequations} We can rewrite the eigenvalue problem (\ref{mAL0}) in the following equivalent forms:\footnote{In the area of orthogonal polynomials, the Ablowitz--Ladik eigenvalue problem in a similar rewritten form was studied by G.~Baxter in the early 1960s after the seminal work of G.~Szeg\"o, see~\cite{Bax1,Bax2}. } \begin{subequations} \label{Jost-1} \begin{align} \left[ \begin{array}{c} z^{-n} \Psi_{1, n} \\ z^{-n} \Psi_{2, n} \\ \end{array} \right] &= \left[ \begin{array}{cc} z^2 I & z^2 Q_n \\ R_n & I \\ \end{array} \right] \left[ \begin{array}{c} z^{-(n+1)} \Psi_{1, n+1} \\ z^{-(n+1)} \Psi_{2, n+1} \\ \end{array} \right]_{\vphantom \int}, \label{mAL3} \\[1.5mm] \left[ \begin{array}{c} z^{n} \Psi_{1, n} \\ z^{n} \Psi_{2, n} \\ \end{array} \right] &= \left[ \begin{array}{cc} I & Q_n \\ z^{-2} R_n & z^{-2} I \\ \end{array} \right] \left[ \begin{array}{c} z^{n+1} \Psi_{1, n+1} \\ z^{n+1} \Psi_{2, n+1} \\ \end{array} \right]. \label{mAL4} \end{align} \end{subequations} Thus, in view of the boundary conditions (\ref{leftJost}), $z^{n} \phi_n$, $z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n$, $z^{-n} \psi_n$ and $z^{n} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n$ depend on $z$ only through \mbox{$z^2$}. Similarly, (\ref{mAL2}) can be rewritten as \begin{subequations} \label{Jost-2} \begin{align} \left[ \begin{array}{cc} \! z^{n+1} \Phi_{1, n+1} \! & \! z^{n+1} \Phi_{2, n+1} \! \end{array} \right] &= \left[ \begin{array}{cc} \! z^{n} \Phi_{1, n} \! & \! z^{n} \Phi_{2, n} \! \end{array} \right] \left[ \begin{array}{cc} z^2 I & z^2 Q_n \\ R_n & I \\ \end{array} \right], \label{mAL5} \\[1.5mm] \left[ \begin{array}{cc} \! z^{-(n+1)} \Phi_{1, n+1} \! & \! z^{-(n+1)} \Phi_{2, n+1} \! \end{array} \right] &= \left[ \begin{array}{cc} \! z^{-n} \Phi_{1, n} \! & \! z^{-n} \Phi_{2, n} \! \end{array} \right] \left[ \begin{array}{cc} I & Q_n \\ z^{-2} R_n & z^{-2} I \\ \end{array} \right], \label{mAL6} \end{align} \end{subequations} so $z^{-n} \psi_n^{\mathrm{ad}}$ and $z^{n} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\mathrm{ad}}$ depend on $z$ only through \mbox{$z^2$}. Therefore, relations (\ref{W_rel}) imply that the scattering data $A$, $B$, $\accentset{{\cc@style\underline{\mskip10mu}}}{B}$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{A}$ are even functions of $z$ (cf.~\cite{Papanico}); they can be denoted as $A(\mu)$, $B(\mu)$, $\accentset{{\cc@style\underline{\mskip10mu}}}{B}(\mu)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)$, where \[ \mu =z^2. 
\] We introduce the following representations of the Jost solutions $\psi_n$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n$: \begin{subequations} \label{psi_form} \begin{align} & z^{-n} \psi_n = \dprod{\displaystyle \curvearrowright}{\infty}_{i=n} \left[ \begin{array}{cc} \mu I & \mu Q_i \\ R_i & I \\ \end{array} \right] \left[ \begin{array}{c} O \\ I \\ \end{array} \right] =: \left[ \begin{array}{c} O \\ I \\ \end{array} \right] + \sum_{k=0}^{\infty} \mu^{k+1} K (n, n+k), \label{Kdef} \\[0.5mm] & z^{n} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n = \dprod{\displaystyle \curvearrowright}{\infty}_{i=n} \left[ \begin{array}{cc} I & Q_i \\ \mu^{-1} R_i & \mu^{-1} I \\ \end{array} \right] \left[ \begin{array}{c} I \\ O \\ \end{array} \right] =: \left[ \begin{array}{c} I \\ O \\ \end{array} \right] + \sum_{k=0}^{\infty} \mu^{-k-1} \accentset{{\cc@style\underline{\mskip10mu}}}{K} (n,n+k), \label{5Kdef} \end{align} \end{subequations} which are assumed to be uniformly convergent in \mbox{$|\mu|\le 1$} and \mbox{$|\mu|\ge 1$}, respectively (cf.~(\ref{zero-bc})). Here and hereafter, the order of the matrix product is defined as \[ \dprod{\displaystyle \curvearrowright}{m }_{i=n} X_i := X_n X_{n+1} \cdots X_m, \hspace{5mm} \dprod{\displaystyle \curvearrowleft}{m }_{i=n} X_i := X_m X_{m-1} \cdots X_n, \hspace{5mm} m \ge n , \] and the ``integral kernels" $K(n,m)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{K}(n,m)$ are $\mu$-independent \mbox{$2l \times l$} matrices denoted in terms of \mbox{$l \times l$} matrices as \[ K(n,m) = \left[ \begin{array}{c} K_1(n,m) \\ K_2(n,m) \\ \end{array} \right], \hspace{5mm} \accentset{{\cc@style\underline{\mskip10mu}}}{K}(n,m) = \left[ \begin{array}{c} \accentset{{\cc@style\underline{\mskip10mu}}}{K}_1(n,m) \\ \accentset{{\cc@style\underline{\mskip10mu}}}{K}_2(n,m) \\ \end{array} \right], \hspace{5mm} m \ge n. \] We substitute (\ref{Kdef}) and (\ref{5Kdef}) into (\ref{mAL3}) and (\ref{mAL4}), respectively. Noting that they are identities in $\mu$, we can express the ``integral kernels" recursively in terms of the potentials $Q_n$ and $R_n$; the most important relations are \begin{align} & K_1(n,n) =Q_n, \label{K2} \\[1mm] & K_2 (n,n) = \sum_{j=n}^\infty R_j Q_{j+1}, \nonumber \end{align} and \begin{align} & \accentset{{\cc@style\underline{\mskip10mu}}}{K}_2 (n,n)=R_n, \label{K_bar2} \\[1mm] & \accentset{{\cc@style\underline{\mskip10mu}}}{K}_1 (n,n) = \sum_{j=n}^\infty Q_j R_{j+1}. \nonumber \end{align} In a way similar to (\ref{psi_form}), we can also express the other (adjoint) Jost solutions as power series in either $\mu$ or $\mu^{-1}$. 
Indeed, noting the identity, \[ \left[ \begin{array}{cc} \mu I & \mu Q_n \\ R_n & I \\ \end{array} \right] \left[ \begin{array}{cc} \mu^{-1} ( I-Q_n R_n )^{-1} & -Q_n ( I-R_n Q_n )^{-1} \\ -\mu^{-1} R_n ( I-Q_n R_n )^{-1} & ( I-R_n Q_n )^{-1} \\ \end{array} \right] = \left[ \begin{array}{cc} I & O \\ O & I \\ \end{array} \right], \] we obtain \begin{subequations} \begin{align} & z^{n} \phi_n = \dprod{\displaystyle \curvearrowleft}{n-1}_{i=-\infty} \left[ \begin{array}{cc} \left( I-Q_i R_i \right)^{-1} & -\mu Q_i \left( I-R_i Q_i \right)^{-1} \\ - R_i \left( I-Q_i R_i \right)^{-1} & \mu \left( I-R_i Q_i \right)^{-1} \\ \end{array} \right] \left[ \begin{array}{c} I \\ O \\ \end{array} \right], \label{phi-ex} \\[0.5mm] & z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n = \dprod{\displaystyle \curvearrowleft}{n-1}_{i=-\infty} \left[ \begin{array}{cc} \mu^{-1} \left( I-Q_i R_i \right)^{-1} & -Q_i \left( I-R_i Q_i \right)^{-1} \\ -\mu^{-1} R_i \left( I-Q_i R_i \right)^{-1} & \left( I-R_i Q_i \right)^{-1} \\ \end{array} \right] \left[ \begin{array}{c} O \\ -I \\ \end{array} \right], \label{phi_bar-ex} \\ & z^{-n} \psi_n^{\mathrm{ad}} = \left[ \begin{array}{cc} \! I \! & \! O \! \end{array} \right] \dprod{\displaystyle \curvearrowleft}{\infty}_{i=n} \left[ \begin{array}{cc} \left( I-Q_i R_i \right)^{-1} & -\mu Q_i \left( I-R_i Q_i \right)^{-1} \\ - R_i \left( I-Q_i R_i \right)^{-1} & \mu \left( I-R_i Q_i \right)^{-1} \\ \end{array} \right], \nonumber \\[0.5mm] & z^{n} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\mathrm{ad}} = \left[ \begin{array}{cc} \! O \! & \! I \! \end{array} \right] \dprod{\displaystyle \curvearrowleft}{\infty}_{i=n} \left[ \begin{array}{cc} \mu^{-1} \left( I-Q_i R_i \right)^{-1} & -Q_i \left( I-R_i Q_i \right)^{-1} \\ -\mu^{-1} R_i \left( I-Q_i R_i \right)^{-1} & \left( I-R_i Q_i \right)^{-1} \\ \end{array} \right], \hspace{3mm} \mathrm{etc.} \nonumber \end{align} \end{subequations} Thus, (\ref{rep1}) and (\ref{rep2}) imply that $A(\mu)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)$ can be written explicitly as \begin{subequations} \label{AAbar-ex} \begin{align} & A(\mu) = \left[ \begin{array}{cc} \! I \! & \! O \! \end{array} \right] \dprod{\displaystyle \curvearrowleft}{\infty}_{i=-\infty} \left[ \begin{array}{cc} \left( I-Q_i R_i \right)^{-1} & -\mu Q_i \left( I-R_i Q_i \right)^{-1} \\ - R_i \left( I-Q_i R_i \right)^{-1} & \mu \left( I-R_i Q_i \right)^{-1} \\ \end{array} \right] \left[ \begin{array}{c} I \\ O \\ \end{array} \right], \label{A-ex} \\ & \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu) = \left[ \begin{array}{cc} \! O \! & \! I \! \end{array} \right] \dprod{\displaystyle \curvearrowleft}{\infty}_{i=-\infty} \left[ \begin{array}{cc} \mu^{-1} \left( I-Q_i R_i \right)^{-1} & -Q_i \left( I-R_i Q_i \right)^{-1} \\ -\mu^{-1} R_i \left( I-Q_i R_i \right)^{-1} & \left( I-R_i Q_i \right)^{-1} \\ \end{array} \right] \left[ \begin{array}{c} O \\ I \\ \end{array} \right]. \label{Abar-ex} \end{align} \end{subequations} Therefore, as long as $Q_n$ and $R_n$ decay sufficiently rapidly as \mbox{$n \to \pm \infty$}, $z^{n}\phi_n$ and $z^{-n} \psi_n^{\mathrm{ad}}$ are analytic on and inside the unit circle (\mbox{$|\mu|\le 1$}), and $z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n$ and $z^{n} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\mathrm{ad}}$ are analytic on and outside the unit circle (\mbox{$|\mu|\ge 1$}). 
Consequently, $A(\mu)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)$ can be analytically continued for \mbox{$|\mu|\le1$} and \mbox{$|\mu| \ge 1$}, respectively.\footnote{Here, we use the term ``analytic continuation" loosely. See appendix B of~\cite{Vakh10} for a rigorous treatment of ``analytic continuation" in the delicate case, {\it e.g.}, when $Q_n$ and $R_n$ do not decay exponentially fast as \mbox{$n \to \pm \infty$}.} A more precise discussion on the analytical properties of the Jost solutions can be made using a discrete analog of the approach in~\cite{AKNS74}; that is, we can rewrite the eigenvalue problem in the form of linear summation equations and discuss the convergence of their Liouville--Neumann-type series solutions. However, we omit such a discussion in this paper. \subsection{Gel'fand--Levitan--Marchenko equations} \label{GLM_eq} We multiply (\ref{phi_relation}) and (\ref{phi_bar_relation}) from the right by $z^{n} A(\mu)^{-1}$ and $z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1}$, respectively, to obtain \begin{subequations} \begin{align} [ z^{n} \phi_n ] (\mu) A(\mu)^{-1} &= [z^{n} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n] (\mu) + [z^{-n} \psi_n] (\mu) B(\mu)A(\mu)^{-1} \mu^{n}, \label{prep1} \\[1mm] % [z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n] (\mu) \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1} &= - [z^{-n} \psi_n] (\mu) + [z^{n} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n ] (\mu) \accentset{{\cc@style\underline{\mskip10mu}}}{B}(\mu) \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1} \mu^{-n}. \label{prep2} \end{align} \end{subequations} Here, ``$(\mu)$'' emphasizes that the argument of the functions is \mbox{$\mu \hspace{1pt}(=z^2)$} rather than $z$. Then, we substitute the summation representations (\ref{psi_form}) into the right-hand side of (\ref{prep1}) and operate with \begin{align} \frac{1}{2\pi \mathrm{i}} \oint_{C} \mathrm{d} \mu \, \mu^{m-n} \hspace{3mm} (m \ge n) \label{oper1} \end{align} on both sides. Here, $C$ denotes the counterclockwise contour along the unit circle \mbox{$|\mu|=1$}. Thus, we obtain \begin{equation} J(n,m) = \accentset{{\cc@style\underline{\mskip10mu}}}{K} (n,m) + \left[ \begin{array}{c} O \\ F_{\mathrm C}(m) \\ \end{array} \right] + \sum_{k=0}^\infty K(n,n+k) F_{\mathrm C} (m+k+1), \label{GLM-pre1} \end{equation} where \begin{align} J(n,m) &:= \frac{1}{2\pi \mathrm{i}} \oint_{C_{\vphantom \int}} [z^{n} \phi_n ] (\mu) A(\mu)^{-1} \mu^{m-n}\mathrm{d} \mu, \label{eq_ref6} \\[1mm] F_{\mathrm C} (m) &:= \frac{1}{2 \pi \mathrm{i}} \oint_C B(\mu)A(\mu)^{-1} \mu^{m} \mathrm{d} \mu. \nonumber \end{align} Because of the analyticity of $[z^{n} \phi_n ](\mu)$ and $A(\mu)$ in \mbox{$|\mu|\le 1$}, we can evaluate $J(n,m)$ using the residue theorem. Recall that the inverse of the matrix $A(\mu)$ is given by \[ A(\mu)^{-1} = \frac{1}{\det A(\mu)} \widetilde{A}(\mu), \] where the tilde denotes the adjugate (i.e., transposed cofactor) matrix. Thus, the singularities of the integrand in (\ref{eq_ref6}) are determined by the zeros of $\det A(\mu)$. For simplicity, we assume that the matrix function $A(\mu)^{-1}$ only has isolated simple poles in \mbox{$|\mu| < 1$}, denoted as \mbox{$\{\mu_1, \mu_2, \ldots, \mu_{N} \}$}, and is regular on \mbox{$|\mu|=1$}.\footnote{We should not assume such a strong condition as $\det A(\mu)$ has only simple zeros, which was assumed in our previous papers~\cite{TUW98,TUW99}. 
Indeed, a zero of multiplicity $k$ of $\det A(\mu)$ may be cancelled by a zero of multiplicity \mbox{$k-1$} of $\widetilde{A}(\mu)$ to give a simple pole of $A(\mu)^{-1}$. However, this correction does not affect the validity of the formulas in~\cite{TUW98,TUW99}. }
In fact, the more general case where $A(\mu)^{-1}$ also has higher-order poles can be recovered by taking a suitable coalescence limit of two or more simple poles afterward. In the neighborhood of \mbox{$\mu = \mu_j$}, we can expand $A(\mu)$ and $A(\mu)^{-1}$ as (cf.~\cite{Kamijo,Olme85})
\begin{eqnarray}
&& A(\mu) = A(\mu_j) + (\mu-\mu_j) A'(\mu_j) + \mathrm{O} ((\mu-\mu_j)^2 ), \hspace{5mm} \det A(\mu_j) =0_{\vphantom \sum}, \nonumber \\[1mm]
&& A(\mu)^{-1} = \frac{1}{\mu-\mu_j} A_j^{(-1)} + A_j^{(0)} + \mathrm{O}( \mu-\mu_j ), \hspace{5mm} A_j^{(-1)} \neq O, \label{A-inv}
\end{eqnarray}
where
\[
A(\mu_j) A_j^{(-1)} =O, \hspace{5mm} A(\mu_j) A_j^{(0)} + A'(\mu_j) A_j^{(-1)} = I.
\]
Thus, using (\ref{psi_bar}), (\ref{psi_ad}) and (\ref{rep1}), we obtain
\begin{align}
z^{-n} \psi_n^{\mathrm{ad}} \left[ \begin{array}{cc} \! z^{n} \phi_n A_j^{(-1)} \! & \! z^{n} \psi_n \! \end{array} \right] &= \left[ \begin{array}{cc} \! A(\mu) A_j^{(-1)} \! & \! O \! \end{array} \right] \nonumber \\[1mm]
&= \left[ \begin{array}{cc} \! O \! & \! O \! \end{array} \right] \;\; {\rm at} \;\; \mu = \mu_j. \nonumber
\end{align}
Because $z^{-n}\psi_n^{\mathrm{ad}}$, which consists of $l$ rows, satisfies the boundary condition in (\ref{psi_ad}) and the eigenvalue problem (\ref{mAL6}), the rank of $[z^{-n} \psi_n^{\mathrm{ad}}](\mu_j)$ is equal to $l$ for all \mbox{$n \in {\mathbb Z}$}. Similarly, the rank of $[z^{n} \psi_n ](\mu_j)$ is $l$ for all \mbox{$n \in {\mathbb Z}$}. Therefore, there exists an \mbox{$l \times l$} matrix $C_j$ such that
\begin{eqnarray}
[z^{n} \phi_n ] (\mu_j) A_j^{(-1)} = [z^{-n} \psi_n ](\mu_j) C_j \mu_j^{n}.
\label{connect3}
\end{eqnarray}
Here, $C_j$ must be $n$-independent, because both $z^{n} \phi_n$ and $z^{n} \psi_n$ satisfy the same eigenvalue problem (\ref{mAL4}). The matrix $C_j$ can be intuitively considered as \mbox{$B(\mu_j) \lim_{\mu \to \mu_j} (\mu -\mu_j) A(\mu)^{-1}$}, but it is, in general, different from the naive residue $\lim_{\mu \to \mu_j} (\mu -\mu_j) B(\mu) A(\mu)^{-1}$. Indeed, $B(\mu)$ can have a {\it discontinuity} at \mbox{$\mu = \mu_j$}. Because $\phi_n A_j^{(-1)}$ and $\psi_n C_j$ vanish exponentially as \mbox{$n \to -\infty$} and \mbox{$n \to +\infty$}, respectively, each nonzero column vector of the \mbox{$2l \times l$} matrix \mbox{$\phi_n A_j^{(-1)}=\psi_n C_j$} at \mbox{$z^2 = \mu_j$} gives a bound state in the potentials $Q_n$ and $R_n$. Therefore, using the residue theorem with the aid of (\ref{A-inv}) and (\ref{connect3}), we can compute the right-hand side of (\ref{eq_ref6}) as
\begin{align}
J(n,m) &= \sum_{j=1}^{N} [z^{-n} \psi_n ](\mu_j) C_j \mu_j^{m} \nonumber \\[1mm]
&= \sum_{j=1}^{N} \left\{ \left[ \begin{array}{c} O \\ I \\ \end{array} \right] + \sum_{k=0}^\infty \mu_j^{k+1} K(n,n+k) \right\} C_j \mu_j^{m} \nonumber \\[1mm]
&= - \left[ \begin{array}{c} O \\ F_{\mathrm D} (m) \\ \end{array} \right] - \sum_{k=0}^\infty K(n,n+k)F_{\mathrm D} (m+k+1), \nonumber
\end{align}
where
\[
F_{\mathrm D} (m) := -\sum_{j=1}^{N} C_j \mu_j^{m}.
\] Substituting this expression for $J(n,m)$ into (\ref{GLM-pre1}), we obtain a linear summation equation of the Gel'fand--Levitan--Marchenko type, \begin{equation} \accentset{{\cc@style\underline{\mskip10mu}}}{K}(n,m) + \left[ \begin{array}{c} O \\ F(m) \\ \end{array} \right]+ \sum_{k=0}^{\infty} K(n,n+k) F(m+k+1) = \left[ \begin{array}{c} O \\ O \\ \end{array} \right], \hspace{5mm} m \geq n. \label{GLM_1} \end{equation} Here, $F(m)$ is defined as \begin{align} F(m) &:= F_{\mathrm C} (m) + F_{\mathrm D} (m) \nonumber \\ & \hphantom{:} = \frac{1}{2\pi \mathrm{i}} \oint_{C} B(\mu)A(\mu)^{-1} \mu^{m} \mathrm{d} \mu - \sum_{j=1}^{N} C_j \mu_j^{m}. \label{F_form} \end{align} Note that $F_{\mathrm C}$ and $F_{\mathrm D}$ correspond to the contributions of the continuous and discrete spectra, respectively.\footnote{In this section, we often follow the notation of Ablowitz {\it et al.}~\cite{AL76,Ab78, AS81}.} \\ \\ {\it Remark.} Using the expressions (\ref{phi-ex}) and (\ref{A-ex}), we can evaluate $[z^{n} \phi_n](\mu)$ and $A(\mu)$ in the limit \mbox{$\mu \to 0$} as (cf.~\cite{AL75,Ab78,APT}) \begin{align} & \lim_{\mu \to 0} [z^{n} \phi_n](\mu) = \left[ \begin{array}{c} I \\ -R_{n-1} \\ \end{array} \right] \dprod{\displaystyle \curvearrowleft}{n-1}_{i=-\infty} \left( I-Q_i R_i \right)^{-1}, \nonumber \\[1mm] & \lim_{\mu \to 0} A(\mu) = \dprod{\displaystyle \curvearrowleft}{\infty}_{i=-\infty} \left( I-Q_i R_i \right)^{-1}. \nonumber \end{align} Because \mbox{$\lim_{\mu \to 0} A(\mu)$} is invertible (cf.~(\ref{nonzero-det})), we have \mbox{$\mu_j \neq 0$}, \mbox{$j=1, 2, \ldots, N$}. Note that instead of (\ref{oper1}), we can operate with $ \frac{1}{2\pi \mathrm{i}} \oint_{C} \mathrm{d} \mu \, \mu^{-1}$ on (\ref{prep1}) with (\ref{psi_form}). Thus, we can express \mbox{$\lim_{\mu \to 0} [z^{n} \phi_n](\mu) A(\mu)^{-1}$} as \[ \lim_{\mu \to 0} [z^{n} \phi_n](\mu) A(\mu)^{-1} = \left[ \begin{array}{c} I \\ F(n-1) \\ \end{array} \right] +\sum_{k=0}^\infty \left[ \begin{array}{c} K_1(n,n+k) \\ K_2(n,n+k) \\ \end{array} \right] F(n+k). \] From the above three relations, we obtain \[ \left[ \begin{array}{c} I \\ -R_{n-1} \\ \end{array} \right] \dprod{\displaystyle \curvearrowright}{\infty}_{i=n} \left( I-Q_i R_i \right) = \left[ \begin{array}{c} I \\ F(n-1) \\ \end{array} \right] +\sum_{k=0}^\infty \left[ \begin{array}{c} K_1(n,n+k) \\ K_2(n,n+k) \\ \end{array} \right] F(n+k). \] The nonlocal quantities on the left-hand side can be used to transform the matrix Ablowitz--Ladik lattice (\ref{mALsys}) to other systems~\cite{GI82}. \\ Next, we substitute the summation representations (\ref{psi_form}) into the right-hand side of (\ref{prep2}) and operate with \begin{equation} \frac{1}{2\pi \mathrm{i}}\oint_{C} \mathrm{d} \mu \, \mu^{n-m-2} \hspace{5mm} (m \ge n) \label{oper2} \end{equation} on both sides. 
Thus, we obtain \begin{equation} \accentset{{\cc@style\underline{\mskip10mu}}}{J} (n,m) = - K(n,m) +\left[ \begin{array}{c} \accentset{{\cc@style\underline{\mskip10mu}}}{F}_{\mathrm C} (m)\\ O \\ \end{array} \right]+ \sum_{k=0}^\infty \accentset{{\cc@style\underline{\mskip10mu}}}{K}(n,n+k) \accentset{{\cc@style\underline{\mskip10mu}}}{F}_{\mathrm C} (m+k+1), \label{GLM-pre2} \end{equation} where \begin{align} \accentset{{\cc@style\underline{\mskip10mu}}}{J}(n,m) &:= \frac{1}{2\pi \mathrm{i}} \oint_C [z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n ](\mu) \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1} \mu^{n-m-2} \mathrm{d} \mu, \label{J_bar} \\[1mm] \accentset{{\cc@style\underline{\mskip10mu}}}{F}_{\mathrm C} (m) &:= \frac{1}{2 \pi \mathrm{i}} \oint_C \accentset{{\cc@style\underline{\mskip10mu}}}{B}(\mu)\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1} \mu^{-m-2} \mathrm{d} \mu. \nonumber \end{align} Because of the analyticity of $[z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n](\mu)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)$ in \mbox{$|\mu| \ge 1$}, we can evaluate \mbox{$\accentset{{\cc@style\underline{\mskip10mu}}}{J}(n,m)$} using the residue theorem. We assume that $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1}$ only has isolated simple poles in \mbox{$|\mu| > 1$}, denoted as $\{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1, \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_2, \ldots, \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \}$, and is regular on \mbox{$|\mu| = 1$}. In the neighborhood of $\mu = \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j$, we expand $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1}$ as \begin{eqnarray} && \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu) = \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j) + (\mu-\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j) \accentset{{\cc@style\underline{\mskip10mu}}}{A}\hspace{1pt}'(\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j) + \mathrm{O}( (\mu-\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j)^2 ), \hspace{5mm} \det \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j) =0_{\vphantom \sum}, \nonumber \\[1mm] && \accentset{{\cc@style\underline{\mskip10mu}}}{A} (\mu)^{-1} = \frac{1}{\mu-\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j} \accentset{{\cc@style\underline{\mskip10mu}}}{A}_j^{\hspace{1pt}(-1)} + \accentset{{\cc@style\underline{\mskip10mu}}}{A}_j^{\hspace{1pt}(0)} + \mathrm{O}( \mu-\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j ), \hspace{5mm} \accentset{{\cc@style\underline{\mskip10mu}}}{A}_j^{\hspace{1pt}(-1)} \neq O, \label{A_bar-inv} \end{eqnarray} where \[ \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j) \accentset{{\cc@style\underline{\mskip10mu}}}{A}_j^{\hspace{1pt}(-1)} =O, \hspace{5mm} \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j) \accentset{{\cc@style\underline{\mskip10mu}}}{A}_j^{\hspace{1pt}(0)} + \accentset{{\cc@style\underline{\mskip10mu}}}{A}\hspace{1pt}'(\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j) \accentset{{\cc@style\underline{\mskip10mu}}}{A}_j^{\hspace{1pt}(-1)} = I. 
\] Thus, using (\ref{psi_bar}), (\ref{psi_ad}) and (\ref{rep2}), we obtain \begin{align} z^{n} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\mathrm{ad}} \left[ \begin{array}{cc} \! z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n \accentset{{\cc@style\underline{\mskip10mu}}}{A}_j^{\hspace{1pt}(-1)} \! & \! z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n \! \end{array} \right] &= \left[ \begin{array}{cc} \! -\accentset{{\cc@style\underline{\mskip10mu}}}{A} (\mu) \accentset{{\cc@style\underline{\mskip10mu}}}{A}_j^{\hspace{1pt}(-1)} \! & \! O \! \end{array} \right] \nonumber \\[1mm] &= \left[ \begin{array}{cc} \! O \! & \! O \! \end{array} \right] \;\; {\rm at} \;\; \mu = \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j. \nonumber \end{align} In the same way as the derivation of (\ref{connect3}), there exists an $n$-independent \mbox{$l \times l$} matrix $\accentset{{\cc@style\underline{\mskip10mu}}}{C}_j$ such that \begin{eqnarray} [z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n ] (\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j) \accentset{{\cc@style\underline{\mskip10mu}}}{A}_j^{\hspace{1pt}(-1)} = [z^{n} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n ](\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j) \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-n}. \label{connect4} \end{eqnarray} Therefore, using the residue theorem with the aid of (\ref{A_bar-inv}) and (\ref{connect4}), we can compute the right-hand side of (\ref{J_bar}) as \begin{align} \accentset{{\cc@style\underline{\mskip10mu}}}{J} (n,m) &= - \sum_{j=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} [z^{n} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n ](\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j) \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-m-2} \nonumber \\[1mm] &= - \sum_{j=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \left\{ \left[ \begin{array}{c} I \\ O \\ \end{array} \right] + \sum_{k=0}^\infty \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-k-1} \accentset{{\cc@style\underline{\mskip10mu}}}{K}(n,n+k) \right\} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-m-2} \nonumber \\[1mm] &= - \left[ \begin{array}{c} \accentset{{\cc@style\underline{\mskip10mu}}}{F}_{\mathrm D} (m) \\ O \\ \end{array} \right] - \sum_{k=0}^\infty \accentset{{\cc@style\underline{\mskip10mu}}}{K}(n,n+k) \accentset{{\cc@style\underline{\mskip10mu}}}{F}_{\mathrm D} (m+k+1), \nonumber \end{align} where \[ \accentset{{\cc@style\underline{\mskip10mu}}}{F}_{\mathrm D} (m) := \sum_{j=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-m-2}. \] Substituting this expression for \mbox{$\accentset{{\cc@style\underline{\mskip10mu}}}{J} (n,m)$} into (\ref{GLM-pre2}), we obtain another linear summation equation of the Gel'fand--Levitan--Marchenko type, \begin{equation} -K(n,m) + \left[ \begin{array}{c} \accentset{{\cc@style\underline{\mskip10mu}}}{F}(m) \\ O \\ \end{array} \right] + \sum_{k=0}^{\infty} \accentset{{\cc@style\underline{\mskip10mu}}}{K}(n,n+k) \accentset{{\cc@style\underline{\mskip10mu}}}{F}(m+k+1) = \left[ \begin{array}{c} O \\ O \\ \end{array} \right], \hspace{5mm} m \geq n. 
\label{GLM_2} \end{equation} Here, $\accentset{{\cc@style\underline{\mskip10mu}}}{F}(m)$ is defined as \begin{align} \accentset{{\cc@style\underline{\mskip10mu}}}{F}(m) &:= \accentset{{\cc@style\underline{\mskip10mu}}}{F}_{\mathrm C} (m) + \accentset{{\cc@style\underline{\mskip10mu}}}{F}_{\mathrm D} (m) \nonumber \\ & \hphantom{:} = \frac{1}{2\pi \mathrm{i}} \oint_{C} \accentset{{\cc@style\underline{\mskip10mu}}}{B}(\mu)\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1} \mu^{-m-2}\mathrm{d} \mu + \sum_{j=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-m-2}. \label{F_bar_form} \end{align} \\ {\it Remark.} Using the expressions (\ref{phi_bar-ex}) and (\ref{Abar-ex}), we can evaluate $[z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n ](\mu)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)$ in the limit \mbox{$|\mu| \to \infty$} as (cf.~\cite{APT}) \begin{align} & \lim_{|\mu| \to \infty} [z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n ](\mu) = \left[ \begin{array}{c} Q_{n-1} \\ -I \\ \end{array} \right] \dprod{\displaystyle \curvearrowleft}{n-1}_{i=-\infty} \left( I-R_i Q_i \right)^{-1}, \nonumber \\[1mm] & \lim_{|\mu| \to \infty} \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu) = \dprod{\displaystyle \curvearrowleft}{\infty}_{i=-\infty} \left( I-R_i Q_i \right)^{-1}. \nonumber \end{align} Because \mbox{$\lim_{|\mu| \to \infty} \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)$} is invertible (cf.~(\ref{nonzero-det})), $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1}$ does not have poles at infinity, which is indeed consistent with the computation for $\accentset{{\cc@style\underline{\mskip10mu}}}{J} (n,m)$. Note that instead of (\ref{oper2}), we can operate with \mbox{$ \frac{1}{2\pi \mathrm{i}} \oint_{C} \mathrm{d} \mu \, \mu^{-1}$} on (\ref{prep2}) with (\ref{psi_form}). Thus, we can express \mbox{$\lim_{|\mu| \to \infty} [z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n ](\mu) \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1}$} as \begin{align} \lim_{|\mu| \to \infty} [z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n ](\mu) \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1} = -\left[ \begin{array}{c} -\accentset{{\cc@style\underline{\mskip10mu}}}{F}(n-1) \\ I \\ \end{array} \right] +\sum_{k=0}^\infty \left[ \begin{array}{c} \accentset{{\cc@style\underline{\mskip10mu}}}{K}_1(n,n+k) \\ \accentset{{\cc@style\underline{\mskip10mu}}}{K}_2(n,n+k) \\ \end{array} \right] \accentset{{\cc@style\underline{\mskip10mu}}}{F}(n+k). \nonumber \end{align} From the above three relations, we obtain \begin{align} \left[ \begin{array}{c} -Q_{n-1} \\ I \\ \end{array} \right] \dprod{\displaystyle \curvearrowright}{\infty}_{i=n} \left( I-R_i Q_i \right) = \left[ \begin{array}{c} -\accentset{{\cc@style\underline{\mskip10mu}}}{F} (n-1) \\ I \\ \end{array} \right] -\sum_{k=0}^\infty \left[ \begin{array}{c} \accentset{{\cc@style\underline{\mskip10mu}}}{K}_1(n,n+k) \\ \accentset{{\cc@style\underline{\mskip10mu}}}{K}_2(n,n+k) \\ \end{array} \right] \accentset{{\cc@style\underline{\mskip10mu}}}{F}(n+k). \nonumber \end{align} The nonlocal quantities on the left-hand side can be used to transform the matrix Ablowitz--Ladik lattice (\ref{mALsys}) to other systems~\cite{GI82}. 
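We also note that the continuous-spectrum parts $F_{\mathrm C}(m)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}_{\mathrm C}(m)$ entering (\ref{F_form}) and (\ref{F_bar_form}) are simply Fourier coefficients of the ``reflection coefficients'' $B(\mu)A(\mu)^{-1}$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{B}(\mu)\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1}$ on \mbox{$|\mu|=1$}, so they can be evaluated by elementary quadrature. A minimal sketch (our own illustration, with an arbitrarily chosen rational test function in place of $B(\mu)A(\mu)^{-1}$) reads as follows.
\begin{verbatim}
# Quadrature sketch (our own illustration): the continuous-spectrum kernel
#   F_C(m) = (1/(2 pi i)) oint_{|mu|=1} B(mu) A(mu)^{-1} mu^m  d mu
# is a Fourier coefficient of the reflection coefficient.  As a test function
# we take rho(mu) = c/(mu - p) with |p| < 1, for which the residue theorem
# gives F_C(m) = c p^m for m >= 0.
import numpy as np

c, p = 0.8 - 0.3j, 0.5*np.exp(0.9j)
rho = lambda mu: c/(mu - p)              # stand-in for B(mu) A(mu)^{-1}

Ntheta = 512
theta = 2*np.pi*np.arange(Ntheta)/Ntheta
mu = np.exp(1j*theta)

def F_C(m):
    # (1/(2 pi i)) oint rho(mu) mu^m d mu   with   d mu = i mu d theta
    return np.mean(rho(mu)*mu**(m + 1))

for m in range(4):
    print(m, F_C(m), c*p**m)             # quadrature vs. exact residue value
\end{verbatim}
In actual applications, $B(\mu)A(\mu)^{-1}$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{B}(\mu)\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1}$ are of course obtained from the direct scattering analysis of subsection~\ref{subs3.2}.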
\\ The Gel'fand--Levitan--Marchenko equations (\ref{GLM_1}) and (\ref{GLM_2}) relate the scattering data to the potentials $Q_n$ and $R_n$ through (\ref{K2}) and (\ref{K_bar2}). The required set of the scattering data is given by $B(\mu)A(\mu)^{-1}$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{B}(\mu)\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1}$ for \mbox{$|\mu|=1$}, \mbox{$\{ \mu_j, C_j \}_{j=1, 2, \ldots, N}$} and \mbox{$\{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j, \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j \}_{j=1, 2, \ldots, \accentset{{\cc@style\underline{\mskip10mu}}}{N}}$}, which define $F(m)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}(m)$ as in (\ref{F_form}) and (\ref{F_bar_form}). Because $F(m)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}(m)$ are ``linear" in the scattering data, we can consider a linear superposition of different sets of scattering data at this level. In fact, we will show in the next subsection that $F(m)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}(m)$ satisfy linear evolution equations. \subsection{Time evolution} \label{sTDsd} Under the rapidly decaying boundary conditions (\ref{zero-bc}), the temporal part of the Lax representation (\ref{mAL-time}) for the Ablowitz--Ladik lattice (\ref{mALsys}) has the asymptotic behavior: \begin{equation} \left[ \begin{array}{c} \Psi_{1, n} \\ \Psi_{2, n} \\ \end{array} \right]_t \sim \left[ \begin{array}{cc} (-\mu+1) b I & O \\ O & (1-\mu^{-1}) a I \\ \end{array} \right] \left[ \begin{array}{c} \Psi_{1, n} \\ \Psi_{2, n} \\ \end{array} \right] \hspace{4mm}{\rm as}~~~ n \rightarrow \pm \infty. \nonumber \end{equation} This can be used to fix the time dependence of the leading order terms in $\Psi_{1, n}$ and $\Psi_{2, n}$ (cf.~(\ref{leftJost})). Thus, we can introduce the explicitly time-dependent Jost solutions $\phi_n^{(t)}$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n^{\hspace{1pt}(t)}$ as \begin{subequations} \label{Jost-time} \begin{eqnarray} \left. \begin{array}{l} z^{n} \phi_n^{(t)} := \mathrm{e}^{(-\mu+1) b t} z^{n} \phi_n \to \mathrm{e}^{(-\mu+1) b t} \left[ \begin{array}{c} I \\ O \\ \end{array} \right] \vspace{2mm} \\ z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n^{\hspace{1pt}(t)} := \mathrm{e}^{(1-\mu^{-1}) a t} z^{-n} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n \to \mathrm{e}^{(1-\mu^{-1})a t} \left[ \begin{array}{c} O \\ -I \\ \end{array} \right] \end{array} \right\} \hspace{4mm} {\rm as}~~n \rightarrow -\infty \hspace{8mm} \label{Jost-time-phi} \end{eqnarray} and $\psi_n^{(t)}$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\hspace{1pt}(t)}$ as \begin{eqnarray} \left. \begin{array}{l} z^{-n} \psi_n^{(t)} := \mathrm{e}^{(1- \mu^{-1}) a t} z^{-n} \psi_n \to \mathrm{e}^{(1- \mu^{-1})a t} \left[ \begin{array}{c} O \\ I \\ \end{array} \right] \vspace{2mm} \\ z^{n} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\hspace{1pt}(t)} := \mathrm{e}^{(-\mu+1) b t} z^{n} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n \to \mathrm{e}^{(-\mu+1) b t} \left[ \begin{array}{c} I \\ O \\ \end{array} \right] \label{Jost-time-psi} \end{array} \right\} \hspace{4mm} {\rm as}~~n \rightarrow +\infty, \hspace{8mm} \end{eqnarray} \end{subequations} respectively; they satisfy both the eigenvalue problem (\ref{mAL1}) and the time-evolution equation (\ref{mAL-time}). 
To determine the time dependence of the scattering data, we rewrite the defining relations (\ref{ref102}) as \begin{align} & \phi_n^{\mathrm (t)} = \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\hspace{1pt}\mathrm (t)}A + \psi_n^{\mathrm (t)}B \mathrm{e}^{-[(\mu-1) b + ( 1-\mu^{-1}) a ] t}, \nonumber \\[1mm] & \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n^{\hspace{1pt}\mathrm (t)} = \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\hspace{1pt}\mathrm (t)} \accentset{{\cc@style\underline{\mskip10mu}}}{B} \mathrm{e}^{[(\mu-1) b + ( 1-\mu^{-1}) a ] t} - \psi_n^{\mathrm (t)}\accentset{{\cc@style\underline{\mskip10mu}}}{A}. \nonumber \end{align} Note that $\phi_n^{(t)}$, $\accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n^{\hspace{1pt}(t)}$, $\accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\hspace{1pt}(t)}$ and ${\psi}_n^{(t)}$ satisfy the same equation (\ref{mAL-time}) and the columns of $\accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\hspace{1pt}(t)}$ and ${\psi}_n^{(t)}$ are linearly independent. Thus, the time dependences of $A$, $B$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{A}$, $\accentset{{\cc@style\underline{\mskip10mu}}}{B}$ for \mbox{$|\mu|=1$} are given by \begin{align} A(\mu,t) &= A(\mu,0), \hspace{5mm} B(\mu,t) =B(\mu, 0) \mathrm{e}^{[(\mu-1) b + ( 1-\mu^{-1}) a ] t} \label{BA_time} \end{align} % and % \begin{align} \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu,t) &= \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu,0),\hspace{5mm} \accentset{{\cc@style\underline{\mskip10mu}}}{B}(\mu,t) = \accentset{{\cc@style\underline{\mskip10mu}}}{B}(\mu, 0) \mathrm{e}^{-[(\mu-1) b + (1 - \mu^{-1})a] t}, \label{BA_bar_time} \end{align} respectively. This implies that $A(\mu,t)$, $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu,t)$, $B(\mu, t)\mathrm{e}^{-[(\mu-1) b + ( 1-\mu^{-1}) a ] t}$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{B}(\mu, t) \mathrm{e}^{[(\mu-1) b + (1 - \mu^{-1})a] t}$ are generating functions of the integrals of motion. Because the ``analytic continuation" of $A(\mu)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)$ into the regions \mbox{$|\mu| \le 1$} and \mbox{$|\mu| \ge 1$}, respectively, is unique and remains time-independent (cf.~(\ref{AAbar-ex})), the positions of the simple poles of $A(\mu)^{-1}$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1}$ and the corresponding residues, \mbox{$ \bigl\{ \mu_j, A_j^{(-1)} \bigr\}_{j=1, 2, \ldots, N }$} and \mbox{$\bigl\{ \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j, \accentset{{\cc@style\underline{\mskip10mu}}}{A}_j^{\hspace{1pt}(-1)} \bigr\}_{j=1, 2, \ldots, \accentset{{\cc@style\underline{\mskip10mu}}}{N}}$}, are also time-independent. For (\ref{connect3}) and (\ref{connect4}), we can apply a similar discussion as used to obtain (\ref{BA_time}) and (\ref{BA_bar_time}), so the time dependences of $C_j$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{C}_j$ are given by \begin{align} C_j(t) &= C_j(0)\mathrm{e}^{[(\mu_j-1) b + (1-\mu_j^{-1})a] t} \label{C_time} \end{align} % and % \begin{align} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j(t) &= \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j(0) \mathrm{e}^{-[(\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j-1) b + (1-\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-1}) a] t}, \label{C_bar_time} \end{align} respectively. 
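The time independence of $A(\mu)$ in (\ref{BA_time}) can also be observed numerically. The following sketch (our own illustration, scalar case \mbox{$l=1$}) integrates (\ref{mALsys}) with a standard Runge--Kutta scheme for compactly supported initial data and monitors $A(\mu)$ at a test point on \mbox{$|\mu|=1$} through the product formula (\ref{A-ex}); the array is padded with zeros so that the potentials stay negligible near its ends over the time interval considered.
\begin{verbatim}
# Numerical sketch (scalar case l = 1, our own illustration): integrate the
# Ablowitz-Ladik lattice with RK4 and monitor A(mu) through the explicit
# product formula.  Sites with zero potential contribute diag(1, mu), which
# leaves the (1,1) entry of the product unchanged, so the finite product over
# the array reproduces A(mu) for (numerically) compactly supported potentials.
import numpy as np

rng = np.random.default_rng(3)
Ntot, a, b = 60, 0.7, -0.4
Q = np.zeros(Ntot, dtype=complex); R = np.zeros(Ntot, dtype=complex)
Q[24:36] = 0.3*(rng.standard_normal(12) + 1j*rng.standard_normal(12))
R[24:36] = 0.3*(rng.standard_normal(12) + 1j*rng.standard_normal(12))

def shift(f, s):                      # shift with zero boundary values
    g = np.zeros_like(f)
    if s > 0: g[:-1] = f[1:]
    else:     g[1:] = f[:-1]
    return g

def rhs(Q, R):                        # right-hand sides of the lattice equations
    Qp, Qm, Rp, Rm = shift(Q, +1), shift(Q, -1), shift(R, +1), shift(R, -1)
    dQ = a*Qp - b*Qm - (a - b)*Q - a*Q*R*Qp + b*Qm*R*Q
    dR = b*Rp - a*Rm - (b - a)*R - b*R*Q*Rp + a*Rm*Q*R
    return dQ, dR

def rk4(Q, R, dt):
    k1 = rhs(Q, R)
    k2 = rhs(Q + 0.5*dt*k1[0], R + 0.5*dt*k1[1])
    k3 = rhs(Q + 0.5*dt*k2[0], R + 0.5*dt*k2[1])
    k4 = rhs(Q + dt*k3[0], R + dt*k3[1])
    return (Q + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
            R + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)

def A_of_mu(Q, R, mu):                # (1,1) entry of the leftward matrix product
    P = np.eye(2, dtype=complex)
    for i in range(len(Q)):
        d = 1.0 - Q[i]*R[i]
        P = np.array([[1/d, -mu*Q[i]/d], [-R[i]/d, mu/d]]) @ P
    return P[0, 0]

mu = np.exp(0.6j)                     # test point on |mu| = 1
A0 = A_of_mu(Q, R, mu)
for _ in range(200):
    Q, R = rk4(Q, R, 0.005)
print(abs(A_of_mu(Q, R, mu) - A0))    # small; limited only by the integration error
\end{verbatim}
Repeating the computation with different values of $a$ and $b$, or for other flows of the hierarchy, should leave $A(\mu)$ equally unchanged, in accordance with (\ref{BA_time}).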
Substituting (\ref{BA_time})--(\ref{C_bar_time}) into (\ref{F_form}) and (\ref{F_bar_form}), we obtain the explicitly time-dependent forms of $F(n)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}(n)$ as % \begin{align} F(n, t) &= \frac{1}{2\pi \mathrm{i}} \oint_{C} B(\mu,0)A(\mu,0)^{-1} \mu^{n} \mathrm{e}^{[(\mu-1) b + (1-\mu^{-1})a]t}\mathrm{d} \mu \nonumber \\ & \hphantom{=} \; \mbox{} - \sum_{j=1}^N C_j(0) \mu_j^{n} \mathrm{e}^{[(\mu_j-1) b + (1-\mu_j^{-1})a] t}, \label{F_time1} \\[2mm] \accentset{{\cc@style\underline{\mskip10mu}}}{F}(n,t) &= \frac{1}{2\pi \mathrm{i}} \oint_{C} \accentset{{\cc@style\underline{\mskip10mu}}}{B}(\mu,0)\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu,0)^{-1} \mu^{-n-2} \mathrm{e}^{-[(\mu-1) b + (1-\mu^{-1})a]t} \mathrm{d} \mu \nonumber \\ & \hphantom{=} \; \mbox{} + \sum_{j=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j(0) \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-n-2} \mathrm{e}^{-[(\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j-1) b + (1-\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-1})a] t}. \label{Fbar_time1} \end{align} Thus, it is easy to see that $F(n,t)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}(n,t)$ satisfy the pair of uncoupled linear evolution equations: \begin{subnumcases}{\label{AL-linear}} {} \frac{\partial F(n,t)}{\partial t} - b F (n+1,t) + a F(n-1,t) + (b-a) F(n,t)=O, \hspace{10mm} \\[1mm] \frac{\partial \accentset{{\cc@style\underline{\mskip10mu}}}{F}(n,t)}{\partial t} -a \accentset{{\cc@style\underline{\mskip10mu}}}{F} (n+1,t) + b \accentset{{\cc@style\underline{\mskip10mu}}}{F}(n-1,t) + (a-b) \accentset{{\cc@style\underline{\mskip10mu}}}{F}(n,t)=O. \hspace{10mm} \end{subnumcases} Note that these equations coincide with the linear part of the equations for $R_n$ and $Q_n$ (see (\ref{mALsys})). In addition, $F(n,t)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}(n,t)$ are required to decay rapidly as \mbox{$n \to +\infty$} so that the Gel'fand--Levitan--Marchenko equations (\ref{GLM_1}) and (\ref{GLM_2}) are well-posed. Because of the linear nature of the sum terms in (\ref{F_time1}) and (\ref{Fbar_time1}), we can take a coalescence limit of two or more simple poles of $A(\mu)^{-1}$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1}$ directly. 
Thus, we obtain the following generalized expressions for $F(n,t)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}(n, t)$:
\begin{align}
F(n, t) &= \frac{1}{2\pi \mathrm{i}} \oint_{C} B(\mu,0)A(\mu,0)^{-1} \mu^{n} \mathrm{e}^{[(\mu-1) b + (1-\mu^{-1})a]t}\mathrm{d} \mu
\nonumber \\
& \hphantom{=} \; \mbox{} - \sum_{j=1}^N \sum_{k=0}^{M_j} C_j^{(k)}(0) \left( \frac{\partial}{\partial \mu_j} \right)^k \mu_j^{n} \mathrm{e}^{[(\mu_j-1) b + (1-\mu_j^{-1})a] t},
\nonumber \\[3mm]
\accentset{{\cc@style\underline{\mskip10mu}}}{F}(n,t) &= \frac{1}{2\pi \mathrm{i}} \oint_{C} \accentset{{\cc@style\underline{\mskip10mu}}}{B}(\mu,0)\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu,0)^{-1} \mu^{-n-2} \mathrm{e}^{-[(\mu-1) b + (1-\mu^{-1})a]t} \mathrm{d} \mu
\nonumber \\
& \hphantom{=} \; \mbox{} + \sum_{j=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \sum_{k=0}^{L_j} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j^{(k)}(0) \left( \frac{\partial}{\partial \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j} \right)^k \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-n-2} \mathrm{e}^{-[(\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j-1) b + (1-\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-1})a] t}.
\nonumber
\end{align}
These expressions encompass the most general case where $A(\mu)^{-1}$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1}$ have poles of arbitrarily high order. Moreover, they satisfy the same linear equations (\ref{AL-linear}) as the original expressions (\ref{F_time1}) and (\ref{Fbar_time1}). Instead of using partial differentiation with respect to $\mu_j$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j$, one can consider the matrix functions \mbox{$X^{n} \mathrm{e}^{bt(X-I) + at(I-X^{-1})} $} and \mbox{$Y^{-n-2}\mathrm{e}^{-bt(Y-I) - at(I-Y^{-1})} $} with constant invertible matrices $X$ and $Y$ in Jordan normal form. Indeed, they satisfy linear equations of the form (\ref{AL-linear}), so linear combinations of the independent elements of each matrix function can be used to replace the sum terms in $F(n,t)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}(n, t)$. Readers interested in such an approach are referred, {\it e.g.}, to~\cite{Scieb00,Scieb04,Akto07,Akto10,DM2010,DM-SIGMA,DKM}.
\subsection{Exact linearization}
To reconstruct the potentials $Q_n$ and $R_n$ from the scattering data through (\ref{K2}) and (\ref{K_bar2}), we rewrite the Gel'fand--Levitan--Marchenko equations (\ref{GLM_1}) and (\ref{GLM_2}) as ``closed" linear summation equations for $K_1(n,m)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{K}_2 (n,m)$. Thus, the general solution formulas for the matrix Ablowitz--Ladik lattice (\ref{mALsys}) can be presented in the form:
\begin{subequations}
\label{ALlinearization}
\begin{align}
& Q_n = K_1(n,n),
\label{ALlin-1}
\\[1mm]
& R_n = \accentset{{\cc@style\underline{\mskip10mu}}}{K}_2 (n,n),
\label{ALlin-2}
\\
& K_1(n,m) = \accentset{{\cc@style\underline{\mskip10mu}}}{F}(m) -\sum_{i=0}^{\infty} \sum_{k=0}^{\infty} K_1(n,n+i) F(n+i+k+1) \accentset{{\cc@style\underline{\mskip10mu}}}{F}(m+k+1), \hspace{5mm} m \geq n,
\label{K1-closed}
\\
& \accentset{{\cc@style\underline{\mskip10mu}}}{K}_2 (n,m) = -F(m) -\sum_{i=0}^{\infty} \sum_{k=0}^{\infty} \accentset{{\cc@style\underline{\mskip10mu}}}{K}_2 (n,n+i) \accentset{{\cc@style\underline{\mskip10mu}}}{F}(n+i+k+1) F(m+k+1), \hspace{5mm} m \geq n.
\label{K2-closed} \end{align} \end{subequations} Here, the time dependence of the functions is suppressed; $F(n)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}(n)$ are solutions of the linear uncoupled system (\ref{AL-linear}) and decay rapidly as \mbox{$n \to +\infty$}. More generally, the set of formulas (\ref{ALlinearization}) can provide the solutions for any flow of the matrix Ablowitz--Ladik hierarchy if $F(n)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}(n)$ satisfy the linear part of the equations for $R_n$ and $Q_n$, instead of (\ref{AL-linear}). Hence, (\ref{ALlinearization}) realizes an {\it exact linearization} of the matrix Ablowitz--Ladik hierarchy in the sense of~\cite{ARS} (also see~\cite{GGKM74,Wadati76} and Proposition~\ref{Prop.A.2}). As long as $F(n)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}(n)$ decay rapidly as \mbox{$n \to +\infty$}, so do $Q_n$ and $R_n$ determined by (\ref{ALlinearization}). However, the requirement that $Q_n$ and $R_n$ should also decay as \mbox{$n \to -\infty$} imposes nontrivial conditions on $F(n)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}(n)$, which will be touched upon in the next subsection. \subsection{Multisoliton solutions} \label{subsec3.6} To construct exact solutions in explicit form, we consider the special case of \mbox{$B(\mu)=\accentset{{\cc@style\underline{\mskip10mu}}}{B}(\mu)=O$} on \mbox{$|\mu|=1$}; this is preserved under the time evolution (cf.~(\ref{BA_time}) and (\ref{BA_bar_time})) and corresponds to the reflectionless potentials (cf.~(\ref{ref102})). Moreover, we assume that $A(\mu)^{-1}$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1}$ only have simple poles (see~\cite{DM2010} for the more general case). Thus, we can set \begin{align} F(n, t) = - \sum_{j=1}^N C_j(t) \mu_j^{n}, \hspace{7mm} \accentset{{\cc@style\underline{\mskip10mu}}}{F}(n,t) = \sum_{j=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j(t) \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-n-2}, \label{F-reflec} \end{align} where the time dependences of $C_j$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{C}_j$ are given by (\ref{C_time}) and (\ref{C_bar_time}). We also set \begin{equation} K_1(n, m; t) = \sum_{j=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} G_j(n,t) \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-m-2}, \hspace{7mm} \accentset{{\cc@style\underline{\mskip10mu}}}{K}_2 (n, m; t) = \sum_{j=1}^{N} H_j(n,t) \mu_j^{m}, \label{G-H} \end{equation} and substitute all these expressions into (\ref{K1-closed}) and (\ref{K2-closed}); recalling that \mbox{$|\mu_j| < 1$} $(j=1, 2, \ldots, N)$ and \mbox{$|\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j| >1$} $(j=1, 2, \ldots, \accentset{{\cc@style\underline{\mskip10mu}}}{N})$, we can evaluate the infinite sum. Thus, we obtain a linear algebraic system for determining $G_j$ and that for $H_j$ as \begin{subequations} \label{GH-sys} \begin{align} & \left[ \begin{array}{cccc} \! G_1 \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1^{\hspace{1pt}-n-2} \! & \! G_2 \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_2^{\hspace{1pt}-n-2} \! & \! \cdots \! & \! G_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{\hspace{1pt}-n-2}\! 
\end{array} \right] \left[ \begin{array}{ccc} U_{11} & \cdots & U_{1\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \\ \vdots & \ddots & \vdots \\ U_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}1} & \cdots & U_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \\ \end{array} \right] \nonumber \\[1mm] &= \left[ \begin{array}{cccc} \! \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1^{\hspace{1pt}-n-2} \! & \! \accentset{{\cc@style\underline{\mskip10mu}}}{C}_2 \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_2^{\hspace{1pt}-n-2}\! & \! \cdots \! & \! \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}\hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{\hspace{1pt}-n-2} \! \end{array} \right] \label{GH-sys1} \end{align} and \begin{align} & \left[ \begin{array}{cccc} \! H_1 \hspace{1pt} \mu_1^{n} \! & \! H_2 \hspace{1pt} \mu_2^{n} \! & \! \cdots \! & \! H_{N} \hspace{1pt} \mu_N^{n} \! \end{array} \right] \left[ \begin{array}{ccc} V_{11} & \cdots & V_{1N} \\ \vdots & \ddots & \vdots \\ V_{N1} & \cdots & V_{NN} \\ \end{array} \right] = \left[ \begin{array}{cccc} \! C_1 \hspace{1pt} \mu_1^{n} \! & \! C_2 \hspace{1pt} \mu_2^{n} \! & \! \cdots \! & \! C_{N} \hspace{1pt} \mu_N^{n} \! \end{array} \right]. \label{GH-sys2} \end{align} \end{subequations} Here, all the entries in (\ref{GH-sys}) are \mbox{$l \times l$} matrices; the block matrices \mbox{$U=(U_{jk})_{1 \le j,k \le \accentset{{\cc@style\underline{\mskip10mu}}}{N}}$} and \mbox{$V=(V_{jk})_{1 \le j,k \le N}$} are defined as \begin{align} U_{jk} &:= \delta_{jk} I - \sum_{i=1}^{N} \frac{\mu_i^{n+1}\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_k^{\hspace{1pt}-n-3}}{ \displaystyle \left( 1-\frac{\mu_i}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j}\right) \left( 1-\frac{\mu_i}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_k} \right)} C_i (t)\accentset{{\cc@style\underline{\mskip10mu}}}{C}_k(t) \nonumber \end{align} and \begin{align} V_{jk} &:= \delta_{jk} I - \sum_{i=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \frac{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_i^{\hspace{1pt} -n-3} \mu_k^{n+1}}{ \displaystyle \left( 1-\frac{\mu_j}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_i}\right) \left( 1- \frac{\mu_k}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_i} \right)} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_i(t) C_k(t), \nonumber \end{align} respectively. Here, $\delta_{jk}$ denotes the Kronecker delta. Thus, using (\ref{ALlin-1}), (\ref{ALlin-2}) and (\ref{G-H}), we obtain \begin{subequations} \label{QR-soliton} \begin{align} Q_n (t) &= K_1(n,n;t) \nonumber \\ &= \left[ \begin{array}{ccc} \! G_1 \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1^{\hspace{1pt}-n-2} \! & \! \cdots \! & \! G_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{\hspace{1pt}-n-2}\! \end{array} \right] \left[ \begin{array}{c} I \\ \vdots \\ I \\ \end{array} \right] \nonumber \\ &= \left[ \begin{array}{ccc} \! \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1(t) \hspace{1pt}\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1^{\hspace{1pt}-n-2} \! & \! \cdots \! & \! 
\accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}(t) \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{\hspace{1pt}-n-2} \! \end{array} \right] \left[ \begin{array}{ccc} U_{11} & \cdots & U_{1\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \\ \vdots & \ddots & \vdots \\ U_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}1} & \cdots & U_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \\ \end{array} \right]^{-1} \left[ \begin{array}{c} I \\ \vdots \\ I \\ \end{array} \right], \label{Q-Nsol} \\[1mm] R_n (t) &= \accentset{{\cc@style\underline{\mskip10mu}}}{K}_2 (n,n;t) \nonumber \\ &= \left[ \begin{array}{ccc} \! H_1 \hspace{1pt} \mu_1^{n} \! & \! \cdots \! & \! H_{N} \hspace{1pt} \mu_N^{n} \! \end{array} \right] \left[ \begin{array}{c} I \\ \vdots \\ I \\ \end{array} \right] \nonumber \\ &= \left[ \begin{array}{ccc} \! C_1 (t) \hspace{1pt} \mu_1^{n} \! & \! \cdots \! & \! C_{N} (t) \hspace{1pt} \mu_N^{n} \! \end{array} \right] \left[ \begin{array}{ccc} V_{11} & \cdots & V_{1N} \\ \vdots & \ddots & \vdots \\ V_{N1} & \cdots & V_{NN} \\ \end{array} \right]^{-1} \left[ \begin{array}{c} I \\ \vdots \\ I \\ \end{array} \right]. \end{align} \end{subequations} This provides the multisoliton solutions of the nonreduced matrix Ablowitz--Ladik lattice (\ref{mALsys}); some additional conditions on \mbox{$\{ \mu_j, C_j \}_{j=1, 2, \ldots, N}$} and \mbox{$\{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j, \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j \}_{j=1, 2, \ldots, \accentset{{\cc@style\underline{\mskip10mu}}}{N}}$} need to be satisfied for (\ref{QR-soliton}) to exhibit solitonic behavior. In the simplest nontrivial case of \mbox{$N=\accentset{{\cc@style\underline{\mskip10mu}}}{N}=1$}, we obtain the one-soliton solution of (\ref{mALsys}) in the form: \begin{subequations} \label{one-soli} \begin{align} Q_n (t) &= \accentset{{\cc@style\underline{\mskip10mu}}}{D}(n,t) \left[ I - \frac{\frac{\mu_1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1}} {\left( 1-\frac{\mu_1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1} \right)^2} D(n,t) \accentset{{\cc@style\underline{\mskip10mu}}}{D} (n,t) \right]^{-1}, \label{one-soli1} \\[2mm] R_n (t) &= D(n,t) \left[ I - \frac{\frac{\mu_1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1}} {\left( 1-\frac{\mu_1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1} \right)^2} \accentset{{\cc@style\underline{\mskip10mu}}}{D}(n,t) D(n,t) \right]^{-1}, \label{one-soli2} \end{align} \end{subequations} where \mbox{$\accentset{{\cc@style\underline{\mskip10mu}}}{D}(n,t) := \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1(0) \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1^{\hspace{1pt} -n-2} \mathrm{e}^{-[(\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1-1) b + (1-\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1^{\hspace{1pt}-1}) a] t}$} and \mbox{$D(n,t) := C_1 (0) \mu_1^n \mathrm{e}^{[(\mu_1-1) b + (1-\mu_1^{-1})a] t}$}. For this solution to decay also as \mbox{$n \to -\infty$}, we require that \mbox{$\lim_{n \to -\infty} Q_n (t) \vt{l} =\vt{0}$} for any $n$-independent column vector $\vt{l}$ of dimension $l$ and similar for $R_n (t)$. 
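Formula (\ref{one-soli}) is easy to evaluate numerically. The following sketch (our own illustration; the function name and argument conventions are ad hoc) computes $Q_n(t)$ and $R_n(t)$ for given \mbox{$l \times l$} matrices $C_1(0)$, $\accentset{{\cc@style\underline{\mskip10mu}}}{C}_1(0)$ and poles \mbox{$|\mu_1| < 1$}, \mbox{$|\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1| > 1$}.
\begin{verbatim}
# Minimal numerical sketch (illustrative only) of the one-soliton formula.
import numpy as np

def omega(mu, a, b):
    return (mu - 1.0)*b + (1.0 - 1.0/mu)*a

def one_soliton(n, t, mu1, mu1bar, C1_0, C1bar_0, a, b):
    D = C1_0 * mu1**n * np.exp(omega(mu1, a, b)*t)                  # D(n, t)
    Dbar = C1bar_0 * mu1bar**(-n - 2) * np.exp(-omega(mu1bar, a, b)*t)
    l = C1_0.shape[0]
    I = np.eye(l)
    c = (mu1/mu1bar) / (1.0 - mu1/mu1bar)**2       # scalar prefactor
    Q = Dbar @ np.linalg.inv(I - c*(D @ Dbar))     # Q_n(t), cf. (one-soli1)
    R = D @ np.linalg.inv(I - c*(Dbar @ D))        # R_n(t), cf. (one-soli2)
    return Q, R
\end{verbatim}
Whether such a solution also decays as \mbox{$n \to -\infty$} depends on the choice of $C_1(0)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{C}_1(0)$, which we now examine.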
Thus, considering the Maclaurin series for \mbox{$(I-X)^{-1}$} where $X$ is the corresponding matrix in (\ref{one-soli1}) or (\ref{one-soli2}), we obtain the following conditions on the kernels of the \mbox{$l \times l$} matrices $\accentset{{\cc@style\underline{\mskip10mu}}}{C}_1$ and $C_1$: \[ \mathrm{Ker} \left( \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 C_1 \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 \right) =\mathrm{Ker} \left( \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 \right) \] and \[ \mathrm{Ker} \left( C_1 \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 C_1 \right) = \mathrm{Ker} \left( C_1 \right). \] Note that these conditions remain invariant under the time evolution (cf.~(\ref{C_time}) and (\ref{C_bar_time})). They can also be written in a more easy-to-understand form: \[ \mathrm{Ker} \left( C_1 \right) \cap \mathrm{Im} \left( \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 \right) = \mathrm{Ker} \left( \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 \right) \cap \mathrm{Im} \left( C_1 \right) = \{ \vt{0} \}. \] Consequently, we have \mbox{$\mathrm{rank} \left( \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1\right) = \mathrm{rank} \left( C_1 \right)$}. For general values of $N$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{N}$, it is rather difficult to grasp the condition that $Q_n$ and $R_n$ given by (\ref{QR-soliton}) should also decay as \mbox{$n \to -\infty$}. Thus, we take a different route. In view of the first component of (\ref{GLM_2}) and the second component of (\ref{GLM_1}), relations (\ref{F-reflec}) and (\ref{G-H}) together with (\ref{psi_form}) imply that \begin{align} & [z^{n} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n ](\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j) \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-n-2} = \left[ \begin{array}{c} G_j (n,t) \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-n-2} \\ \ast \\ \end{array} \right], \hspace{5mm} j=1, 2, \ldots, \accentset{{\cc@style\underline{\mskip10mu}}}{N}, \nonumber \\[2mm] & [z^{-n} \psi_n ](\mu_j) C_j \mu_j^{n} =\left[ \begin{array}{c} \ast \\ H_j (n,t) \mu_j^{n} \\ \end{array} \right], \hspace{5mm} j=1, 2, \ldots, N. \nonumber \end{align} Thus, $G_j$ and $H_j$ are closely related to the bound-state eigenfunctions. Owing to the connection formulas (\ref{connect4}) and (\ref{connect3}) as well as the boundary conditions (\ref{phi_bar}), $G_j \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-n-2}$ and $H_j \mu_j^{n}$ must decay as \mbox{$n \to -\infty$}. If this is satisfied, then $Q_n$ and $R_n$ in (\ref{QR-soliton}) naturally vanish as \mbox{$n \to -\infty$}. The relations (\ref{GH-sys}) for determining $G_j \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-n-2}$ and $H_j \mu_j^{n}$ can be rewritten as \begin{subequations} \begin{align} & \left[ \begin{array}{cccc} \! G_1 \! & \! G_2 \! & \! \cdots \! & \! G_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \! \end{array} \right] - \left[ \begin{array}{cccc} \! G_1 \! & \! G_2 \! & \! \cdots \! & \! G_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \! 
\end{array} \right] \left[ \begin{array}{ccc} \frac{\mu_1^{n+1} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1^{\hspace{1pt}-n-2}} {1-\frac{\mu_1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1}} I & \cdots & \frac{\mu_N^{n+1} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1^{\hspace{1pt}-n-2}} {1-\frac{\mu_N}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1}} I \\ \vdots & \ddots & \vdots \\ \frac{\mu_1^{n+1} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{\hspace{1pt}-n-2}} {1-\frac{\mu_1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}}} I & \cdots & \frac{\mu_N^{n+1} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{\hspace{1pt}-n-2}} {1-\frac{\mu_N}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}}} I \\ \end{array} \right] \nonumber \\[2mm] & \times \left[ \begin{array}{ccc} \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1-\mu_1} C_1 \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 & \cdots & \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} - \mu_1 } C_1 \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \\ \vdots & \ddots & \vdots \\ \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1 -\mu_N} C_N \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 & \cdots & \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}-\mu_N} C_N \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \\ \end{array} \right] = \left[ \begin{array}{cccc} \! \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 \! & \! \accentset{{\cc@style\underline{\mskip10mu}}}{C}_2 \! & \! \cdots \! & \! \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \! \end{array} \right] \label{GH-sys3} \end{align} and \begin{align} & \left[ \begin{array}{cccc} \! H_1 \! & \! H_2 \! & \! \cdots \! & \! H_{N} \! \end{array} \right] - \left[ \begin{array}{cccc} \! H_1 \! & \! H_2 \! & \! \cdots \! & \! H_{N} \! 
\end{array} \right] \left[ \begin{array}{ccc} \frac{\mu_1^n \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1^{\hspace{1pt}-n-3}} {1-\frac{\mu_1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1}} I & \cdots & \frac{\mu_1^n \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{\hspace{1pt}-n-3}} {1-\frac{\mu_1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}}} I \\ \vdots & \ddots & \vdots \\ \frac{\mu_N^n \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1^{\hspace{1pt}-n-3}} {1-\frac{\mu_N}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1}} I & \cdots & \frac{\mu_N^n \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{\hspace{1pt}-n-3}} {1-\frac{\mu_N}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}}} I \\ \end{array} \right] \nonumber \\[2mm] & \times \left[ \begin{array}{ccc} \frac{1}{\frac{1}{\mu_1} -\frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 C_1 & \cdots & \frac{1}{\frac{1}{\mu_N} - \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 C_N \\ \vdots & \ddots & \vdots \\ \frac{1}{\frac{1}{\mu_1} -\frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} C_1 & \cdots & \frac{1}{\frac{1}{\mu_N} -\frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} C_N \\ \end{array} \right] = \left[ \begin{array}{cccc} \! C_1 \! & \! C_2 \! & \! \cdots \! & \! C_{N} \! \end{array} \right]. \label{GH-sys4} \end{align} \end{subequations} Then, we multiply both sides of (\ref{GH-sys3}) from the right by an $n$-independent column vector of dimension $l \times \accentset{{\cc@style\underline{\mskip10mu}}}{N}$ and consider the limit \mbox{$n \to -\infty$}. Thus, we obtain the condition \begin{subequations} \label{necess} \begin{equation} \mathrm{Ker} \left[ \begin{array}{ccc} \frac{1}{ \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1 - \mu_1} C_1 \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 & \cdots & \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} - \mu_1} C_1 \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \\ \vdots & \ddots & \vdots \\ \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1 - \mu_N} C_N \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 & \cdots & \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}- \mu_N} C_N \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \\ \end{array} \right] \subseteq \mathrm{Ker} \left[ \begin{array}{cccc} \! \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 \! & \! \accentset{{\cc@style\underline{\mskip10mu}}}{C}_2 \! & \! \cdots \! & \! \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \! \end{array} \right]. 
\label{nece1} \end{equation} Similarly, from (\ref{GH-sys4}), we obtain \begin{equation} \mathrm{Ker} \left[ \begin{array}{ccc} \frac{1}{\frac{1}{\mu_1} -\frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 C_1 & \cdots & \frac{1}{\frac{1}{\mu_N} -\frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1 C_N \\ \vdots & \ddots & \vdots \\ \frac{1}{\frac{1}{\mu_1} - \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} C_1 & \cdots & \frac{1}{\frac{1}{\mu_N} - \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} C_N \\ \end{array} \right] \subseteq \mathrm{Ker} \left[ \begin{array}{cccc} \! C_1 \! & \! C_2 \! & \! \cdots \! & \! C_{N} \! \end{array} \right]. \label{nece2} \end{equation} \end{subequations} The two conditions (\ref{nece1}) and (\ref{nece2}) can be combined and simplified to provide more easy-to-understand conditions on the scattering data in the reflectionless case. For this purpose, we need the following lemma. \begin{lemma} \label{L3.1} Define the \mbox{$\left( M-1 \right) \times M$} matrix elements \mbox{$d_{ij} \in {\mathbb C}$} as \[ d_{ij} := \frac{ \langle \vt{a}_i, \vt{b}_j \rangle} {\lambda_i - \nu_j}, \hspace{5mm} i \in \{ i_1, i_2, \ldots, i_{M-1} \}, \hspace{5mm} j \in \{ j_1, j_2, \ldots, j_M \}. \] Here, $\lambda_i$ and $\nu_j$ are parameters, $\vt{a}_i$ and $\vt{b}_j$ are nonzero vectors of dimension $l$ and \mbox{$\langle \, \cdot \, , \, \cdot \, \rangle$} stands for the scalar product. For $d_{ij}$ to be well-defined, we assume \mbox{$\lambda_i \neq \nu_j$} for all $i$ and $j$, but we do not require \mbox{$\lambda_{i_\alpha} \neq \lambda_{i_\beta}$} or \mbox{$\nu_{j_\alpha} \neq \nu_{j_\beta}$} for \mbox{$\alpha \neq \beta$}. Instead, we assume the following condition: for any subset \mbox{$\{ k_1, k_2, \ldots, k_\gamma \} \subseteq \{ j_1, j_2, \ldots, j_M \}$} such that \mbox{$\nu_{k_1} = \nu_{k_2} =\cdots=\nu_{k_\gamma}$}, the vectors \[ \vt{b}_{k_1}, \vt{b}_{k_2}, \ldots, \vt{b}_{k_\gamma} \] are linearly independent. Then, if the equality \[ \sum_{\alpha=1}^M (-1)^{\alpha -1} \left| \begin{array}{cccccc} d_{i_1 j_1} & \cdots & d_{i_1 j_{\alpha-1}} & d_{i_1 j_{\alpha+1}} & \cdots & d_{i_1 j_M} \\ \vdots & & \vdots & \vdots & & \vdots \\ d_{i_{M-1} j_1} & \cdots & d_{i_{M-1} j_{\alpha-1}} & d_{i_{M-1} j_{\alpha+1}} & \cdots & d_{i_{M-1} j_M} \\ \end{array} \right| \vt{b}_{j_\alpha} = \vt{0} \] is valid, all the scalar coefficients must be zero, where \mbox{$| \cdot |$} stands for the determinant. In other words, the above vector equation holds true only in the trivial case; note that this equation can be written compactly as \[ \left| \begin{array}{ccc} \vt{b}_{j_1} & \cdots & \vt{b}_{j_M} \\ d_{i_1 j_1} & \cdots & d_{i_1 j_M} \\ \vdots & & \vdots \\ d_{i_{M-1} j_1} & \cdots & d_{i_{M-1} j_M} \\ \end{array} \right| = \vt{0}, \] using the Laplace expansion formally. \\ \end{lemma} We omit the proof of this lemma. To obtain useful information from the conditions (\ref{necess}), we first remove the trivial subspace of the kernels commonly contained on both sides. 
From a given \mbox{$l \times l$} matrix $W$, we extract the maximum number of linearly independent column vectors to form an \mbox{$l \times \mathrm{rank}(W)$} matrix $W^{(\mathrm{c})}$. Similarly, we extract the maximum number of linearly independent row vectors from $W$ to form a \mbox{$\mathrm{rank}(W) \times l$} matrix $W^{(\mathrm{r})}$. With this notation, (\ref{nece1}) and (\ref{nece2}) can be rewritten in more compact forms as \begin{subequations} \label{necess2} \begin{equation} \mathrm{Ker} \left[ \begin{array}{ccc} \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1 - \mu_1} C_1^{(\mathrm{r})} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1^{(\mathrm{c})} & \cdots & \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}- \mu_1} C_1^{(\mathrm{r})} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{(\mathrm{c})} \\ \vdots & \ddots & \vdots \\ \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1 - \mu_N} C_N^{(\mathrm{r})} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1^{(\mathrm{c})} & \cdots & \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} - \mu_N} C_N^{(\mathrm{r})} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{(\mathrm{c})} \\ \end{array} \right] \subseteq \mathrm{Ker} \left[ \begin{array}{cccc} \! \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1^{(\mathrm{c})} \! & \! \accentset{{\cc@style\underline{\mskip10mu}}}{C}_2^{(\mathrm{c})} \! & \! \cdots \! & \! \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{(\mathrm{c})} \! \end{array} \right] \label{nece3} \end{equation} and \begin{equation} \mathrm{Ker} \left[ \begin{array}{ccc} \frac{1}{\frac{1}{\mu_1} -\frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1^{(\mathrm{r})} C_1^{(\mathrm{c})} & \cdots & \frac{1}{\frac{1}{\mu_N} - \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1^{(\mathrm{r})} C_N^{(\mathrm{c})} \\ \vdots & \ddots & \vdots \\ \frac{1}{\frac{1}{\mu_1} - \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{(\mathrm{r})} C_1^{(\mathrm{c})} & \cdots & \frac{1}{\frac{1}{\mu_N} -\frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{(\mathrm{r})} C_N^{(\mathrm{c})} \\ \end{array} \right] \subseteq \mathrm{Ker} \left[ \begin{array}{cccc} \! C_1^{(\mathrm{c})} \! & \! C_2^{(\mathrm{c})} \! & \! \cdots \! & \! C_{N}^{(\mathrm{c})} \! \end{array} \right]. \label{nece4} \end{equation} \end{subequations} With the aid of Lemma~\ref{L3.1}, we can prove the following two propositions. \begin{proposition} \label{prop3.2} Assume that \[ \sum_{j=1}^{N} \mathrm{rank}\left( C_j \right) \le \sum_{j=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \mathrm{rank}\left( \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j \right), \] and the condition (\ref{nece3}) is satisfied. 
Then, the above inequality becomes an equality, \begin{equation} \sum_{j=1}^{N} \mathrm{rank}\left( C_j \right) = \sum_{j=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \mathrm{rank}\left( \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j \right), \label{sca-con1} \end{equation} and the matrix on the left-hand side of (\ref{nece3}) must be invertible, i.e. \begin{equation} \left| \begin{array}{ccc} \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1 - \mu_1} C_1^{(\mathrm{r})} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1^{(\mathrm{c})} & \cdots & \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}-\mu_1} C_1^{(\mathrm{r})} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{(\mathrm{c})} \\ \vdots & \ddots & \vdots \\ \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1 - \mu_N} C_N^{(\mathrm{r})} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1^{(\mathrm{c})} & \cdots & \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} - \mu_N} C_N^{(\mathrm{r})} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{(\mathrm{c})} \\ \end{array} \right| \neq 0. \label{sca-con2} \end{equation} \hphantom{aa} \\ \end{proposition} \begin{proposition} \label{prop3.3} Assume that \[ \sum_{j=1}^{N} \mathrm{rank}\left( C_j \right) \ge \sum_{j=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \mathrm{rank}\left( \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j \right), \] and the condition (\ref{nece4}) is satisfied. Then, the above inequality becomes an equality, \[ \sum_{j=1}^{N} \mathrm{rank}\left( C_j \right) = \sum_{j=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \mathrm{rank}\left( \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j \right), \] and the matrix on the left-hand side of (\ref{nece4}) must be invertible, i.e. \begin{equation} \left| \begin{array}{ccc} \frac{1}{\frac{1}{\mu_1} -\frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1^{(\mathrm{r})} C_1^{(\mathrm{c})} & \cdots & \frac{1}{\frac{1}{\mu_N} - \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1^{(\mathrm{r})} C_N^{(\mathrm{c})} \\ \vdots & \ddots & \vdots \\ \frac{1}{\frac{1}{\mu_1} - \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{(\mathrm{r})} C_1^{(\mathrm{c})} & \cdots & \frac{1}{\frac{1}{\mu_N} - \frac{1}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}}} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{(\mathrm{r})} C_N^{(\mathrm{c})} \\ \end{array} \right| \neq 0. \label{sca-con3} \end{equation} \hphantom{aa} \\ \end{proposition} By combining Propositions \ref{prop3.2} and \ref{prop3.3}, we arrive at the following theorem. \begin{theorem} \label{theorem3.4} Assume that (\ref{QR-soliton}) provides the multisoliton solutions of the matrix Ablowitz--Ladik lattice (\ref{mALsys}), which decay as \mbox{$n \to \pm \infty$} and produce the bound states of the associated eigenvalue problem (\ref{mAL0}). 
Then, the scattering data must satisfy the three conditions (\ref{sca-con1})--(\ref{sca-con3}). \\ \end{theorem} \noindent Note that the time evolution does not change these conditions (cf.~(\ref{C_time}) and (\ref{C_bar_time})). \\ \\ {\it Remark.} Here, we only considered the case where the time variable $t$ is fixed at some finite value. In the limits \mbox{$t \to \pm \infty$}, the solitons generally separate from one another and restore their original shapes. Thus, for (\ref{QR-soliton}) to describe proper multisoliton collisions through the passage of time, we have to impose additional conditions, i.e., the above conditions for subsets of \mbox{$\{ \mu_j, C_j \}_{j=1, 2, \ldots, N}$} and \mbox{$\{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j, \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j \}_{j=1, 2, \ldots, \accentset{{\cc@style\underline{\mskip10mu}}}{N}}$}. Some relevant results were obtained independently in~\cite{DM2010}. \subsection{Complex conjugation reduction} When \mbox{$b=a^\ast$}, the matrix Ablowitz--Ladik lattice (\ref{mALsys}) allows the complex conjugation reduction \mbox{$R_n = \sigma Q_n^\ast$} with a real constant $\sigma$ (cf.~\cite{GI82}). In particular, the simplest reduction \mbox{$R_n = -Q_n^\ast$} can be realized in formulas (\ref{ALlinearization}) by identifying $\accentset{{\cc@style\underline{\mskip10mu}}}{F} (n)$ with the complex conjugate of $F(n)$, i.e. \begin{equation} \accentset{{\cc@style\underline{\mskip10mu}}}{F} (n) = \left\{ F(n) \right\}^\ast, \label{F-Fbar-rel} \end{equation} which is naturally preserved under the time evolution (\ref{AL-linear}) with \mbox{$b=a^\ast$}. Indeed, this relation can be derived by exploiting the symmetry of the eigenvalue problem (\ref{mAL0}) with \mbox{$R_n = -Q_n^\ast$}; that is, if \[ \left[ \begin{array}{c} \Psi_{1, n} (z)\\ \Psi_{2, n} (z)\\ \end{array} \right] \] is an eigenfunction, then \[ \pm \left[ \begin{array}{c} - \Psi_{2, n}(1/z^\ast) \\ \Psi_{1, n} (1/z^\ast) \\ \end{array} \right]^\ast \] gives another eigenfunction of the same problem. Thus, we can reflect this symmetry in the Jost solutions and the scattering data to confirm (\ref{F-Fbar-rel}). In particular, the $N$-soliton solution of the matrix Ablowitz--Ladik equation, \[ Q_{n,t} - a Q_{n+1} + a^\ast Q_{n-1} + (a-a^\ast) Q_n -a Q_n Q_n^\ast Q_{n+1} +a^\ast Q_{n-1} Q_n^\ast Q_n = O, \] is obtained by setting \mbox{$b=a^\ast$}, \mbox{$\accentset{{\cc@style\underline{\mskip10mu}}}{N}=N$} and \[ \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j= \frac{1}{\mu_j^\ast}, \hspace{5mm} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j(t) \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-2} = -\left\{ C_j (t) \right\}^\ast, \hspace{5mm} j=1, 2, \ldots, N \] in formula (\ref{Q-Nsol}) with (\ref{C_time}). The reduced set of scattering data is required to satisfy the conditions (\ref{sca-con2}) and (\ref{sca-con3}); in fact, they are equivalent under this reduction. In addition, (\ref{sca-con2}) (or (\ref{sca-con3})) for subsets of \mbox{$\{ \mu_j, C_j \}_{j=1, 2, \ldots, N}$} should also be satisfied. Throughout this paper, we do not discuss the issue of regularity of solutions and the term ``soliton solution" is used in a broad sense. That is, it may have singularities at some values of the independent variables $n$ and $t$. 
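Before turning to the derivative NLS lattices, we note that the reduced $N$-soliton solution can be evaluated numerically once the block matrix in (\ref{Q-Nsol}) is assembled. The following sketch (our own illustration; it makes no claim of numerical optimality, and all names are ad hoc) implements (\ref{Q-Nsol}) under the above reduction \mbox{$b=a^\ast$}, \mbox{$\accentset{{\cc@style\underline{\mskip10mu}}}{N}=N$}, \mbox{$\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j=1/\mu_j^\ast$} and \mbox{$\accentset{{\cc@style\underline{\mskip10mu}}}{C}_j(t) \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-2} = -\left\{ C_j (t) \right\}^\ast$}.
\begin{verbatim}
# Minimal numerical sketch (illustrative only) of the reduced N-soliton
# solution of the matrix Ablowitz-Ladik equation.
# mu: list of poles with |mu_j| < 1; C0: list of l x l arrays C_j(0).
import numpy as np

def omega(mu, a, b):
    return (mu - 1.0)*b + (1.0 - 1.0/mu)*a

def reduced_N_soliton(n, t, mu, C0, a):
    b = np.conj(a)
    N = len(mu)
    l = C0[0].shape[0]
    I = np.eye(l)
    C = [C0[j]*np.exp(omega(mu[j], a, b)*t) for j in range(N)]   # C_j(t)
    mub = [1.0/np.conj(mu[j]) for j in range(N)]                 # mubar_j
    Cb = [-np.conj(C[j])*mub[j]**2 for j in range(N)]            # Cbar_j(t)
    # block matrix U = (U_jk); each block is l x l
    U = [[(I if j == k else np.zeros((l, l), dtype=complex))
          - sum(mu[i]**(n + 1)*mub[k]**(-n - 3)
                / ((1 - mu[i]/mub[j])*(1 - mu[i]/mub[k]))
                * (C[i] @ Cb[k]) for i in range(N))
          for k in range(N)] for j in range(N)]
    row = np.hstack([Cb[j]*mub[j]**(-n - 2) for j in range(N)])
    col = np.vstack([I]*N)
    return row @ np.linalg.inv(np.block(U)) @ col                # Q_n(t)
\end{verbatim}
As noted above, the input data should satisfy (\ref{sca-con2}), together with its versions for subsets of the solitons, for the output to represent a proper multisoliton solution.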
\section{Solution formulas for the derivative NLS lattices} \subsection{Solutions of the space-discrete Gerdjikov--Ivanov system} \label{subs4.1} In this subsection, we solve the space-discrete Gerdjikov--Ivanov system derived in subsection~\ref{subs2.2} by using the results in section~\ref{sec3}. Note that the nonzero parameter \mbox{$\mu $} in the space-discrete Gerdjikov--Ivanov system (\ref{sdGI}) is nonessential; it can be fixed at any nonzero value, say $1$, using a simple point transformation and rescalings of the parameters $a$ and $b$ (cf.~\cite{Tsuchi02,TsuJMP11}). In addition, as is clear from the defining relation of the Miura map (\ref{AL-R1}), the limit \mbox{$\mu \to 0$} is trivial and need not be considered separately. Thus, in the following, we consider the space-discrete Gerdjikov--Ivanov system (\ref{sdGI}) with \mbox{$\mu =1$}: \begin{subnumcases}{\label{sdGI2}} {} Q_{n,t}- a Q_{n+1} + b Q_{n-1} + (a-b)Q_n + a Q_{n} \left( P_{n} - P_{n+1} \right) Q_{n+1} \nonumber \\ \mbox{}- b Q_{n-1} \left( P_{n} - P_{n+1} \right) Q_{n} + a Q_{n} P_{n}Q_n P_{n+1} Q_{n+1} - b Q_{n-1} P_{n}Q_n P_{n+1} Q_{n} =O, \hspace{12mm} \label{sdGIeq} \\[2mm] P_{n,t} -b P_{n+1} + a P_{n-1} +(b-a)P_n -b P_{n} \left( Q_{n-1} - Q_n \right) P_{n+1} \nonumber \\ \mbox{}+a P_{n-1}\left( Q_{n-1} - Q_n \right) P_{n} + b P_{n} Q_{n-1} P_{n}Q_{n} P_{n+1} - a P_{n-1} Q_{n-1} P_{n}Q_{n} P_{n} =O. \label{} \end{subnumcases} In section~\ref{sec3}, we developed the inverse scattering method associated with the matrix Ablowitz--Ladik eigenvalue problem (\ref{mAL0}) under the vanishing boundary conditions on the potentials $Q_n$ and $R_n$ (cf.\ (\ref{zero-bc})). The remaining unknown $P_n$ in (\ref{sdGI2}) can be determined from a linear eigenfunction through the simple formula \begin{equation} P_n = \left. \hspace{-1pt} \Psi_{2,n} \Psi_{1,n}^{-1} \right|_{\mu \hspace{1pt}(=z^2) =1}. \label{P-formu1} \end{equation} Because there is some arbitrariness in choosing the linear eigenfunction, we need to specify boundary conditions to determine $P_n$ uniquely. Thus, we assume that not only $Q_n$ but also $P_n$ decays rapidly as \mbox{$n \to \pm \infty$}: \begin{equation} \label{zero-bc2} \lim_{n \to \pm \infty} Q_n = \lim_{n \to \pm \infty} P_n =O. \end{equation} This is consistent if we choose the linear eigenfunction appearing in the right-hand side of (\ref{P-formu1}) as \[ \left[ \begin{array}{c} \Psi_{1, n} \\ \Psi_{2, n} \\ \end{array} \right] := \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n, \] and assume that the scattering data $B(\mu)$ vanishes at \mbox{$\mu=1$}, i.e., \mbox{$B(1)=O$} (see (\ref{leftJost}) and (\ref{phi_relation})). In fact, the Jost solution $\accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n$ as well as $\phi_n$ does not satisfy the time part of the Lax representation (\ref{mAL-time}), so it is more appropriate to use the explicitly time-dependent Jost solutions introduced in subsection~\ref{sTDsd}. However, the overall multiplicative factor \mbox{$\mathrm{e}^{(-\mu+1) b t}$} as introduced in (\ref{Jost-time}) plays no role in formula (\ref{P-formu1}), so in view of (\ref{5Kdef}), we can express $P_n$ as \[ P_n = \left\{ \sum_{k=0}^{\infty} \accentset{{\cc@style\underline{\mskip10mu}}}{K}_2 (n,n+k) \right\} \left\{ I + \sum_{k=0}^{\infty} \accentset{{\cc@style\underline{\mskip10mu}}}{K}_1 (n,n+k) \right\}^{-1}. 
\] This expression enables us to determine $P_n$ from the set of scattering data with the aid of the Gel'fand--Levitan--Marchenko equations (\ref{GLM_1}) and (\ref{GLM_2}); however, for later convenience, we take an alternative approach. Because the space-discrete Gerdjikov--Ivanov system (\ref{sdGI2}) is ``symmetric" with respect to $Q_n$ and $P_n$, there must be a formula for expressing $Q_n$ in a manner similar to (\ref{P-formu1}). Such a formula can be established by identifying an appropriate Ablowitz--Ladik eigenvalue problem that is gauge equivalent to the original problem (\ref{mAL0}); the corresponding gauge transformation is often referred to as a B\"acklund--Darboux transformation. Then, in the new gauge, the roles of $Q_n$ and $P_n$ are swapped and $P_n$ appears directly as a potential in the Ablowitz--Ladik eigenvalue problem. In other words, there exists another Miura map from the space-discrete Gerdjikov--Ivanov system (\ref{sdGI2}) to the Ablowitz--Ladik lattice in the form \mbox{$(Q_n,P_n) \mapsto (\widetilde{Q}_n,P_n)$}. Let us consider the original Ablowitz--Ladik eigenvalue problem (\ref{mAL0}) with \mbox{$R_n = P_{n} - P_{n+1} + P_{n}Q_n P_{n+1}$} (cf.~(\ref{AL-R1})): \begin{flalign} \mathrm{(AL1):} \hspace{5mm} \left[ \begin{array}{c} \Psi_{1, n} \\ \Psi_{2, n} \\ \end{array} \right] &= \left[ \begin{array}{cc} z I & z Q_n \\ z^{-1} \left( P_{n} - P_{n+1} + P_{n}Q_n P_{n+1} \right) & z^{-1} I \\ \end{array} \right] \left[ \begin{array}{c} \Psi_{1, n+1} \\ \Psi_{2, n+1} \\ \end{array} \right].& \label{ALsca1} \end{flalign} Indeed, this can be rewritten as another Ablowitz--Ladik eigenvalue problem: \begin{flalign} \mathrm{(AL2):} \hspace{5mm} \left[ \begin{array}{c} \Phi_{1, n} \\ \Phi_{2, n} \\ \end{array} \right] &= \left[ \begin{array}{cc} z I & z \left( -Q_n +Q_{n+1} + Q_n P_{n+1} Q_{n+1} \right) \\ z^{-1} P_{n+1} & z^{-1} I \\ \end{array} \right] \left[ \begin{array}{c} \Phi_{1, n+1} \\ \Phi_{2, n+1} \\ \end{array} \right],& \label{ALsca2} \end{flalign} using the gauge transformation defined as \begin{align} \hspace{-3mm} \left[ \begin{array}{c} \Phi_{1, n} \\ \Phi_{2, n} \\ \end{array} \right] & := \left[ \begin{array}{c} \left( z^{-2}-1 \right) \Psi_{1, n} -Q_n \left( I-P_n Q_n \right)^{-1} \left( \Psi_{2,n} -z^{-2} P_n \Psi_{1,n} \right) \\ \left( I-P_n Q_n \right)^{-1} \left( \Psi_{2,n} -z^{-2} P_n \Psi_{1,n} \right) \\ \end{array} \right] \label{gauge-def1} \\ & \hphantom{:} =: g_n \left[ \begin{array}{c} \Psi_{1, n} \\ \Psi_{2, n} \\ \end{array} \right]. \nonumber \end{align} The explicit form of the transformation matrix $g_n$ is unimportant; only its asymptotic behavior as \mbox{$n \to \pm \infty$} is needed: \[ \lim_{n \to \pm \infty} g_n = \left[ \begin{array}{cc} \left( z^{-2}-1 \right) I & O \\ O & I \\ \end{array} \right]. 
\] For the original Jost solutions for (AL1) defined as (\ref{leftJost}), the Jost solutions for (AL2) are given as \begin{subequations} \label{Jost-AL1-2} \begin{align} & \phi_n^{\mathrm{(AL2)}} (z) = \frac{1}{z^{-2}-1} g_n \phi_n^{\mathrm{(AL1)}} (z), \hspace{5mm} \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n^{\mathrm{\hspace{1pt}(AL2)}} (z) = g_n \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n^{\mathrm{\hspace{1pt}(AL1)}} (z), \label{Jost-AL-phi} \\[2mm] & \psi_n^{\mathrm{(AL2)}} (z) = g_n \psi_n^{\mathrm{(AL1)}} (z), \hspace{5mm} \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\mathrm{\hspace{1pt}(AL2)}} (z) = \frac{1}{z^{-2}-1} g_n \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\mathrm{\hspace{1pt}(AL1)}} (z). \label{Jost-AL-psi} \end{align} \end{subequations} Indeed, they satisfy both the eigenvalue problem (\ref{ALsca2}) and the boundary conditions (\ref{leftJost}). The case of \mbox{$z^2=1$} can be understood in the corresponding limit. In view of the aforementioned condition \mbox{$B(1)=O$}, it is natural to modify the defining relations (\ref{ref102}) of the scattering data for (AL1) on \mbox{$|\mu|=1$} as % % \begin{subequations}\label{AL1-scat} \begin{align} \phi_n^{\mathrm{(AL1)}} &= \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\hspace{1pt}\mathrm{(AL1)}} A(\mu) + \psi_n^{\mathrm{(AL1)}} (\mu^{-1}-1) B(\mu), \\[0.5mm] \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n^{\hspace{1pt}\mathrm{(AL1)}} &= \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\hspace{1pt}\mathrm{(AL1)}} \accentset{{\cc@style\underline{\mskip10mu}}}{B}(\mu) - \psi_n^{\mathrm{(AL1)}} \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu). \end{align} \end{subequations} Then, relations (\ref{Jost-AL1-2}) between the Jost solutions for (AL1) and those for (AL2) imply that the defining relations of the scattering data for (AL2) on \mbox{$|\mu|=1$} become \begin{subequations}\label{AL2-scat} \begin{align} \phi_n^{\mathrm{(AL2)}} &= \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\hspace{1pt}\mathrm{(AL2)}} A(\mu) + \psi_n^{\mathrm{(AL2)}} B(\mu), \\[0.5mm] \accentset{{\cc@style\underline{\mskip10mu}}}{\phi}_n^{\hspace{1pt}\mathrm{(AL2)}} &= \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\hspace{1pt}\mathrm{(AL2)}} (\mu^{-1}-1)\accentset{{\cc@style\underline{\mskip10mu}}}{B}(\mu) - \psi_n^{\mathrm{(AL2)}} \accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu). \end{align} \end{subequations} The bound-state eigenvalues are determined by the positions of the simple poles of $A(\mu)^{-1}$ in \mbox{$|\mu| <1$} and $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1}$ in \mbox{$|\mu| >1$}; the more general case of higher order poles can be recovered by taking a suitable coalescence limit. Because of the uniqueness of the ``analytic continuation", the bound-state eigenvalues are common to (AL1) and (AL2): \begin{align} & \mu_j^{\mathrm{(AL1)}} =\mu_j^{\mathrm{(AL2)}}= \mu_j, \hspace{5mm} j=1, 2, \ldots, N, \nonumber \\[2mm] & \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}\mathrm{(AL1)}} =\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}\mathrm{(AL2)}} = \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j, \hspace{5mm} j=1, 2, \ldots, \accentset{{\cc@style\underline{\mskip10mu}}}{N}. 
\nonumber \end{align} Owing to (\ref{Jost-AL1-2}), the corresponding matrices $C_j$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{C}_j$ (cf.~(\ref{connect3}) and (\ref{connect4})) for (AL1) and (AL2) can be expressed as \begin{subequations} \label{C-AL-GI} \begin{align} & C_j^{\mathrm{(AL1)}}= \left( \mu_j^{-1}-1 \right) C_j, \hspace{5mm} C_j^{\mathrm{(AL2)}}= C_j,\hspace{5mm} j=1, 2, \ldots, N, \\[2mm] & \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j^{\hspace{1pt}\mathrm{(AL1)}}= \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j, \hspace{5mm} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j^{\hspace{1pt}\mathrm{(AL2)}} = \left( \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-1}-1 \right) \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j, \hspace{5mm} j=1, 2, \ldots, \accentset{{\cc@style\underline{\mskip10mu}}}{N}. \end{align} \end{subequations} By combining the above relations, the functions $F(m)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}(m)$ for (AL1) and (AL2) (cf.\ (\ref{F_form}) and (\ref{F_bar_form})) can be written as \begin{subequations} \label{F-GI} \begin{align} & F^{\mathrm{(AL1)}}(m)= F(m-1)-F(m), \hspace{5mm} F^{\mathrm{(AL2)}}(m)= F(m), \label{F-GI1} \\[2mm] & \accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL1)}}(m)= \accentset{{\cc@style\underline{\mskip10mu}}}{F}(m), \hspace{5mm} \accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL2)}}(m) = \accentset{{\cc@style\underline{\mskip10mu}}}{F}(m+1) - \accentset{{\cc@style\underline{\mskip10mu}}}{F}(m), \label{F-GI2} \end{align} \end{subequations} in terms of the original $F(m)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}(m)$ defined as (\ref{F_form}) and (\ref{F_bar_form}). Clearly, the modification of the scattering data for (AL1) and (AL2) as described above does not change their time dependences given by (\ref{BA_time})--(\ref{C_bar_time}). Thus, each of the pairs \mbox{$( F, \accentset{{\cc@style\underline{\mskip10mu}}}{F} \hspace{1pt} )$}, \mbox{$( F^{\mathrm{(AL1)}}, \accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL1)}} )$} and \mbox{$( F^{\mathrm{(AL2)}}, \accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL2)}} )$} satisfies the same linear evolutionary system (\ref{AL-linear}). We can construct the Gel'fand--Levitan--Marchenko equations for the space-discrete Gerdjikov--Ivanov system (\ref{sdGI2}) by combining (\ref{ALlin-1}) and (\ref{K1-closed}) for (AL1) and (\ref{ALlin-2}) and (\ref{K2-closed}) for (AL2), i.e. \begin{align} & Q_n = K_1^{\mathrm{(AL1)}} (n,n), \nonumber \\[1mm] & P_{n+1} = \accentset{{\cc@style\underline{\mskip10mu}}}{K}^{\hspace{1pt}\mathrm{(AL2)}}_2 (n,n), \nonumber \\ & K_1^{\mathrm{(AL1)}}(n,m) = \accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL1)}}(m) -\sum_{i=0}^{\infty} \sum_{k=0}^{\infty} K_1^{\mathrm{(AL1)}}(n,n+i) F^{\mathrm{(AL1)}}(n+i+k+1) \nonumber \\ & \hspace{28mm} \times \accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL1)}}(m+k+1), \hspace{5mm} m \geq n, \nonumber \\[2mm] & \accentset{{\cc@style\underline{\mskip10mu}}}{K}^{\hspace{1pt}\mathrm{(AL2)}}_2 (n,m) = -F^{\mathrm{(AL2)}}(m) -\sum_{i=0}^{\infty} \sum_{k=0}^{\infty} \accentset{{\cc@style\underline{\mskip10mu}}}{K}^{\hspace{1pt}\mathrm{(AL2)}}_2 (n,n+i) \accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL2)}}(n+i+k+1) \nonumber \\ & \hspace{28mm} \times F^{\mathrm{(AL2)}}(m+k+1), \hspace{5mm} m \geq n. 
\nonumber \end{align} Substituting (\ref{F-GI}) and changing the notation slightly, we obtain \begin{subequations} \label{GIlinearization2} \begin{align} & Q_n = {\mathscr K} (n,n), \label{GIlin-1} \\[1mm] & P_n = \accentset{{\cc@style\underline{\mskip10mu}}}{\mathscr K} (n,n), \label{GIlin-2} \\ & {\mathscr K} (n,m) = \accentset{{\cc@style\underline{\mskip10mu}}}{F}(m) +\sum_{i=0}^{\infty} \sum_{k=0}^{\infty} {\mathscr K} (n,n+i) \left\{ F(n+i+k+1) -F(n+i+k) \right\} \nonumber \\ & \hspace{21.5mm} \times \accentset{{\cc@style\underline{\mskip10mu}}}{F}(m+k+1), \hspace{5mm} m \geq n, \label{K11-closed} \\[2mm] & \accentset{{\cc@style\underline{\mskip10mu}}}{\mathscr K} (n,m) = -F(m-1) -\sum_{i=0}^{\infty} \sum_{k=0}^{\infty} \accentset{{\cc@style\underline{\mskip10mu}}}{\mathscr K} (n,n+i) \left\{ \accentset{{\cc@style\underline{\mskip10mu}}}{F}(n+i+k+1) - \accentset{{\cc@style\underline{\mskip10mu}}}{F}(n+i+k) \right\} \nonumber \\ & \hspace{21.5mm} \times F(m+k), \hspace{5mm} m \geq n. \label{K22-closed} \end{align} \end{subequations} Note that $F$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}$ satisfy the linear part of the equations for $P_n$ and $Q_n$, which is (\ref{AL-linear}) for the space-discrete Gerdjikov--Ivanov system (\ref{sdGI2}). In the case of \mbox{$B(\mu)=\accentset{{\cc@style\underline{\mskip10mu}}}{B}(\mu)=O$} on \mbox{$|\mu|=1$}, which corresponds to the reflectionless potentials for both (AL1) and (AL2), we can solve the Gel'fand--Levitan--Marchenko equations (\ref{GIlinearization2}) to obtain the soliton solutions in closed form. The derivation is essentially the same as in the Ablowitz--Ladik case described in subsection~\ref{subsec3.6}; naturally, the multisoliton solutions of the space-discrete Gerdjikov--Ivanov system (\ref{sdGI2}) can be obtained directly by applying the correspondence relations (\ref{C-AL-GI}) to (\ref{QR-soliton}), i.e. \begin{subequations} \label{QP-soliton} \begin{align} Q_n (t) &= \left[ \begin{array}{ccc} \! \accentset{{\cc@style\underline{\mskip10mu}}}{C}_1(t) \hspace{1pt}\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1^{\hspace{1pt}-n-2} \! & \! \cdots \! & \! \accentset{{\cc@style\underline{\mskip10mu}}}{C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}(t) \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{\hspace{1pt}-n-2} \! \end{array} \right] \left[ \begin{array}{ccc} \mathscr{U}_{11} & \cdots & \mathscr{U}_{1\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \\ \vdots & \ddots & \vdots \\ \mathscr{U}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}1} & \cdots & \mathscr{U}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \\ \end{array} \right]^{-1} \left[ \begin{array}{c} I \\ \vdots \\ I \\ \end{array} \right], \label{Q-Nsol2} \\[1mm] P_n (t) &= \left[ \begin{array}{ccc} \! C_1 (t) \hspace{1pt} \mu_1^{n-1} \! & \! \cdots \! & \! C_{N} (t) \hspace{1pt} \mu_N^{n-1} \! \end{array} \right] \left[ \begin{array}{ccc} \mathscr{V}_{11} & \cdots & \mathscr{V}_{1N} \\ \vdots & \ddots & \vdots \\ \mathscr{V}_{N1} & \cdots & \mathscr{V}_{NN} \\ \end{array} \right]^{-1} \left[ \begin{array}{c} I \\ \vdots \\ I \\ \end{array} \right]. 
\end{align} \end{subequations} Here, all the entries in (\ref{QP-soliton}) are \mbox{$l \times l$} matrices; the block matrices \mbox{$\mathscr{U}=(\mathscr{U}_{jk})_{1 \le j,k \le \accentset{{\cc@style\underline{\mskip10mu}}}{N}}$} and \mbox{$\mathscr{V}=(\mathscr{V}_{jk})_{1 \le j,k \le N}$} are defined as \begin{align} & \mathscr{U}_{jk} := \delta_{jk} I - \sum_{i=1}^{N} \frac{\left( 1- \mu_i \right) \mu_{i}^n \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_k^{\hspace{1pt}-n-3}} {\displaystyle \left(1-\frac{\mu_i}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j}\right) \left(1-\frac{\mu_i}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_k}\right)} C_i(t)\accentset{{\cc@style\underline{\mskip10mu}}}{C}_k(t), \nonumber \\[2mm] & \mathscr{V}_{jk} := \delta_{jk} I + \sum_{i=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \frac{ \left( 1- \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_i^{\hspace{1pt}-1} \right) \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_i^{\hspace{1pt} -n-2} \mu_k^{n} } {\displaystyle \left( 1-\frac{\mu_j}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_i} \right) \left( 1-\frac{\mu_k}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_i} \right) } \accentset{{\cc@style\underline{\mskip10mu}}}{C}_i(t) C_k(t), \nonumber \end{align} and the time dependences of $C_j$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{C}_j$ are given by (\ref{C_time}) and (\ref{C_bar_time}). Note that the three conditions (\ref{sca-con1})--(\ref{sca-con3}) must be satisfied for (\ref{QP-soliton}) to describe proper multisoliton solutions decaying as \mbox{$n \to \pm \infty$} (cf.~Theorem \ref{theorem3.4}). In addition, we need to impose similar conditions for subsets of the soliton parameters so that the solitons interact with each other properly throughout the time evolution. When \mbox{$b=a^\ast$}, the space-discrete Gerdjikov--Ivanov system (\ref{sdGI2}) allows the complex conjugation reduction \mbox{$P_n = \mathrm{i} \sigma Q_{n- 1/2 }^{\,\ast}$} with a real constant $\sigma$~\cite{Tsuchi02}. That is, two originally uncoupled systems, (\ref{sdGI2}) with \mbox{$n \in \mathbb{Z}$} and (\ref{sdGI2}) with \mbox{$n \in \mathbb{Z}+1/2$}, can be related by this reduction to give a single equation with \mbox{$n \in \mathbb{Z}/2$}. Clearly, the value of $\sigma$ is nonessential, so we set \mbox{$\sigma=1$} and consider the reduction \mbox{$P_n = \mathrm{i} Q_{n- 1/2 }^{\,\ast}$}. This reduction can be realized at the level of formulas (\ref{GIlinearization2}) by setting \begin{equation} \accentset{{\cc@style\underline{\mskip10mu}}}{F} (n) = -\mathrm{i} \left\{ F \left( n-\mbox{$\tiny{\frac{1}{2}}$} \right) \right\}^\ast, \nonumber \end{equation} which is consistent with the time evolution (\ref{AL-linear}) with \mbox{$b=a^\ast$}. 
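As in the Ablowitz--Ladik case, formula (\ref{QP-soliton}) can be evaluated numerically; the following sketch (our own illustration, with ad hoc names) treats the simplest case \mbox{$N=\accentset{{\cc@style\underline{\mskip10mu}}}{N}=1$} of the nonreduced system.
\begin{verbatim}
# Minimal numerical sketch (illustrative only) of the case N = Nbar = 1
# of the multisoliton formula for the space-discrete Gerdjikov-Ivanov
# system; C1_0 and C1bar_0 are l x l arrays, |mu1| < 1 and |mu1bar| > 1.
import numpy as np

def omega(mu, a, b):
    return (mu - 1.0)*b + (1.0 - 1.0/mu)*a

def sdGI_one_soliton(n, t, mu1, mu1bar, C1_0, C1bar_0, a, b):
    C1 = C1_0*np.exp(omega(mu1, a, b)*t)              # C_1(t)
    C1b = C1bar_0*np.exp(-omega(mu1bar, a, b)*t)      # Cbar_1(t)
    l = C1_0.shape[0]
    I = np.eye(l)
    d = (1.0 - mu1/mu1bar)**2
    U11 = I - (1.0 - mu1)*mu1**n*mu1bar**(-n - 3)/d * (C1 @ C1b)
    V11 = I + (1.0 - 1.0/mu1bar)*mu1bar**(-n - 2)*mu1**n/d * (C1b @ C1)
    Q = (C1b*mu1bar**(-n - 2)) @ np.linalg.inv(U11)   # Q_n(t)
    P = (C1*mu1**(n - 1)) @ np.linalg.inv(V11)        # P_n(t)
    return Q, P
\end{verbatim}
Under the complex conjugation reduction just described, the data entering such a computation are constrained further.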
In particular, the $N$-soliton solution of the space-discrete Gerdjikov--Ivanov equation, \begin{align} & Q_{n,t} - a Q_{n+1} + a^\ast Q_{n-1} + (a-a^\ast )Q_n + \mathrm{i}a Q_{n} \left( Q_{n-\frac{1}{2}}^{\,\ast} - Q_{n+\frac{1}{2}}^{\,\ast} \right) Q_{n+1} \nonumber \\[1mm] & \mbox{} - \mathrm{i} a^\ast Q_{n-1} \left( Q_{n-\frac{1}{2}}^{\,\ast} - Q_{n+\frac{1}{2}}^{\,\ast} \right) Q_{n} - a Q_{n} Q_{n-\frac{1}{2}}^{\,\ast} Q_n Q_{n+\frac{1}{2}}^{\,\ast} Q_{n+1} + a^\ast Q_{n-1} Q_{n-\frac{1}{2}}^{\,\ast} Q_n Q_{n+\frac{1}{2}}^{\,\ast} Q_{n} =O, \nonumber \end{align} is obtained by setting \mbox{$b=a^\ast$}, \mbox{$\accentset{{\cc@style\underline{\mskip10mu}}}{N}=N$} and \[ \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j= \frac{1}{\mu_j^\ast}, \hspace{5mm} \accentset{{\cc@style\underline{\mskip10mu}}}{C}_j(t) \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-2} = \mathrm{i} \left\{ C_j (t) \mu_j^{-\frac{1}{2}} \right\}^\ast, \hspace{5mm} j=1, 2, \ldots, N \] in formula (\ref{Q-Nsol2}) with (\ref{C_time}). The imaginary unit (roman $\mathrm{i}$) should not be confused with the index of summation (italic $i$) in the definition of $\mathscr{U}_{jk}$. The reduced set of scattering data is required to satisfy the condition (\ref{sca-con2}) (or (\ref{sca-con3})) and its smaller versions corresponding to subsets of the solitons. \subsection{Solutions of the space-discrete Kaup--Newell system} \label{subs4.2} In this subsection, we solve the space-discrete Kaup--Newell system (\ref{sdKN}) derived in subsection~\ref{subs2.3} by applying the results in section~\ref{sec3}. Because the parameter $\mu $ is nonessential in (\ref{sdKN}), we set \mbox{$\mu\hspace{1pt}(=z^2) =1$} and consider the space-discrete Kaup--Newell system in the form: \begin{subnumcases}{\label{sdKN2}} {} q_{n,t} - \boldsymbol{\Delta}_n^+ \left[ a \left( I - q_{n}r_{n} \right)^{-1} q_{n} + b \left( I + q_{n-1} r_{n} \right)^{-1} q_{n-1} \right] = O, \\[1mm] r_{n,t} - \boldsymbol{\Delta}_n^+ \left[ b \left( I + r_{n} q_{n-1} \right)^{-1} r_{n} + a \left( I - r_{n-1} q_{n-1} \right)^{-1} r_{n-1} \right] = O. \hspace{15mm} \end{subnumcases} Recall that $\boldsymbol{\Delta}_n^+$ denotes the forward difference operator: \mbox{$\boldsymbol{\Delta}_n^+ f_n := f_{n+1} - f_{n} $}. We assume vanishing boundary conditions at spatial infinity: \begin{equation} \label{zero-bc3} \lim_{n \to \pm \infty} q_n = \lim_{n \to \pm \infty} r_n =O. \end{equation} Propositions~\ref{prop1} and \ref{prop2} imply that the solution of (\ref{sdKN2}) can be obtained as \begin{equation} q_n = \boldsymbol{\Delta}_n^+ \left. \hspace{-1pt} \left( \Psi_{1,n}^{(1)\,-1} \Psi_{1,n}^{(2)} \right)\right|_{\mu =1}, \hspace{5mm} r_n = \left. \hspace{-1pt} \left( \Psi_{2,n}^{(1)\,-1} \Psi_{2,n}^{(2)} - \Psi_{1,n}^{(1)\,-1} \Psi_{1,n}^{(2)} \right)^{-1} \right|_{\mu =1}, \label{qr-formu} \end{equation} using two linearly independent eigenfunctions of the linear problem, (\ref{mAL1}) and (\ref{mAL-time}), associated with the Ablowitz--Ladik lattice (cf.~(\ref{gen-Lax})); this is in contrast to the space-discrete Gerdjikov--Ivanov system, which can be derived using only one linear eigenfunction as described in subsection~\ref{subs2.2}. Proposition~\ref{prop3} implies that $r_n$ in (\ref{qr-formu}) can be rewritten in the difference form as $q_n$; to see this explicitly, we need to identify an appropriate linear problem for the Ablowitz--Ladik lattice, which is gauge equivalent to the original problem. 
It turns out that the gauge transformation connecting (AL1) and (AL2) considered in subsection~\ref{subs4.1} plays the desired role. Suppose that the two eigenfunctions appearing in (\ref{qr-formu}) satisfy (AL1). Moreover, we choose the linear eigenfunction appearing in (\ref{P-formu1}) as \[ P_n = \left. \hspace{-1pt} \Psi_{2,n}^{(1)} \Psi_{1,n}^{(1)\, -1} \right|_{\mu \hspace{1pt}(=z^2) =1}. \] Recall that (AL1) and (AL2) involve this $P_n$ and are connected through the gauge transformation (\ref{gauge-def1}). Thus, the two linear eigenfunctions for (AL2) can be introduced as \begin{align} \left[ \begin{array}{c} \Phi_{1, n}^{(1)} \\ \Phi_{2, n}^{(1)} \\ \end{array} \right] := \frac{1}{z^{-2}-1} g_n \left[ \begin{array}{c} \Psi_{1, n}^{(1)} \\ \Psi_{2, n}^{(1)} \\ \end{array} \right], \hspace{5mm} \left[ \begin{array}{c} \Phi_{1, n}^{(2)} \\ \Phi_{2, n}^{(2)} \\ \end{array} \right] := g_n \left[ \begin{array}{c} \Psi_{1, n}^{(2)} \\ \Psi_{2, n}^{(2)} \\ \end{array} \right], \label{redef-Jost} \end{align} so that both of them become nontrivial in the limit \mbox{$\mu\hspace{1pt}(=z^2) \to 1$}. Note that (\ref{ALsca2}) and (\ref{gauge-def1}) imply that \begin{align} \Phi_{2,n}^{(1)} &= z^{-1} P_{n+1} \Psi_{1,n+1}^{(1)} + z^{-1} \left( I-P_{n+1} Q_{n+1} \right) \Phi_{2,n+1}^{(1)}, \nonumber \\[0.5mm] \Phi_{2,n}^{(2)} &= z^{-1} \left( z^{-2}-1 \right) P_{n+1} \Psi_{1,n+1}^{(2)} + z^{-1} \left( I-P_{n+1} Q_{n+1} \right) \Phi_{2,n+1}^{(2)}. \nonumber \end{align} Thus, the ratio between these quantities in the limit \mbox{$\mu\hspace{1pt}(=z^2) \to 1$} satisfies \begin{align} & \left. \hspace{-1pt} \Phi_{2,n-1}^{(2)\,-1} \Phi_{2,n-1}^{(1)} \right|_{\mu \to 1} \nonumber \\ =\; & \left. \hspace{-1pt} \left[ \left( I-P_{n} Q_{n} \right) \Phi_{2,n}^{(2)} \right]^{-1} \left[ P_{n} \Psi_{1,n}^{(1)} + \left( I-P_{n} Q_{n} \right) \Phi_{2,n}^{(1)} \right] \right|_{\mu \to 1} \nonumber \\ =\; & \left. \hspace{-1pt} \left( \Psi_{2,n}^{(2)} -z^{-2} P_n \Psi_{1,n}^{(2)} \right)^{-1} P_{n} \Psi_{1,n}^{(1)} + \Phi_{2,n}^{(2)\,-1} \Phi_{2,n}^{(1)} \right|_{\mu \to 1} \nonumber \\ =\; & \left. \hspace{-1pt} \left( \Psi_{2,n}^{(1)\, -1} \Psi_{2,n}^{(2)} - \Psi_{1,n}^{(1)\, -1} \Psi_{1,n}^{(2)} \right)^{-1} + \Phi_{2,n}^{(2)\,-1} \Phi_{2,n}^{(1)} \right|_{\mu \to 1}. \nonumber \end{align} Therefore, the formula for determining $r_n$ in (\ref{qr-formu}) can be replaced with \begin{equation} r_n = - \boldsymbol{\Delta}_n^+ \left. \hspace{-1pt} \left( \Phi_{2,n-1}^{(2)\,-1} \Phi_{2,n-1}^{(1)} \right) \right|_{\mu \to 1}, \label{qr-formu2} \end{equation} which uses the two linear eigenfunctions for (AL2). We set \begin{align} \left[ \begin{array}{c} \Psi_{1, n}^{(1)} \\ \Psi_{2, n}^{(1)} \\ \end{array} \right] &:= \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\hspace{1pt}\mathrm{(AL1)}}, \hspace{5mm} \left[ \begin{array}{c} \Psi_{1, n}^{(2)} \\ \Psi_{2, n}^{(2)} \\ \end{array} \right] := \psi_n^{\mathrm{(AL1)}}, \end{align} so that (cf.~(\ref{redef-Jost}) and (\ref{Jost-AL-psi})) \begin{align} \left[ \begin{array}{c} \Phi_{1, n}^{(1)} \\ \Phi_{2, n}^{(1)} \\ \end{array} \right] = \accentset{{\cc@style\underline{\mskip10mu}}}{\psi}_n^{\hspace{1pt}\mathrm{(AL2)}}, \hspace{5mm} \left[ \begin{array}{c} \Phi_{1, n}^{(2)} \\ \Phi_{2, n}^{(2)} \\ \end{array} \right] &= \psi_n^{\mathrm{(AL2)}}. \end{align} In view of (\ref{leftJost}), (\ref{AL1-scat}) and (\ref{AL2-scat}), this choice is indeed consistent with the boundary conditions (\ref{zero-bc3}). 
To be precise, we should use the explicitly time-dependent Jost solutions introduced in subsection~\ref{sTDsd}, which satisfy not only the Ablowitz--Ladik eigenvalue problem (\ref{mAL0}) but also the time-evolution equation (\ref{mAL-time}); however, this makes no difference in the limit \mbox{$\mu \to 1$} (cf.~(\ref{Jost-time-psi})), so the above choice is valid in formulas (\ref{qr-formu}) and (\ref{qr-formu2}). Thus, with the aid of (\ref{psi_form}), the solution of the space-discrete Kaup--Newell system (\ref{sdKN2}) can be expressed as \begin{subequations} \label{qr-formu3} \begin{align} q_n &= \boldsymbol{\Delta}_n^+ \left\{ \left[ I+ \sum_{k=0}^{\infty} \accentset{{\cc@style\underline{\mskip10mu}}}{K}^{\hspace{1pt}\mathrm{(AL1)}}_1 (n,n+k) \right]^{-1} \sum_{k=0}^{\infty} K^{\mathrm{(AL1)}}_1 (n, n+k) \right\}, \\[2mm] r_n &= - \boldsymbol{\Delta}_n^+ \left\{ \left[ I+ \sum_{k=0}^{\infty} K^{\mathrm{(AL2)}}_2 (n-1, n-1+k) \right]^{-1} \sum_{k=0}^{\infty} \accentset{{\cc@style\underline{\mskip10mu}}}{K}^{\hspace{1pt}\mathrm{(AL2)}}_2 (n-1,n-1+k) \right\}. \end{align} \end{subequations} Recall that the infinite sums in (\ref{qr-formu3}) are assumed to be convergent; this is satisfied if the potentials in (AL1) and (AL2) decay sufficiently rapidly as \mbox{$n \to \pm \infty$} (cf.~(\ref{psi_form})). To realize an exact linearization of the space-discrete Kaup--Newell system (\ref{sdKN2}), we introduce new quantities ${\cal K}(n,m)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{\cal K}(n,m)$ for \mbox{$m \ge n$} as \begin{align} {\cal K}(n,m) &:= \left[ I+ \sum_{k=0}^{\infty} \accentset{{\cc@style\underline{\mskip10mu}}}{K}^{\hspace{1pt}\mathrm{(AL1)}}_1 (n,n+k) \right]^{-1} \sum_{s=m}^{\infty} K^{\mathrm{(AL1)}}_1 (n, s), \nonumber \\[2mm] \accentset{{\cc@style\underline{\mskip10mu}}}{\cal K}(n,m) &:= - \left[ I+ \sum_{k=0}^{\infty} K^{\mathrm{(AL2)}}_2 (n-1, n-1+k) \right]^{-1} \sum_{s=m-1}^{\infty} \accentset{{\cc@style\underline{\mskip10mu}}}{K}^{\hspace{1pt}\mathrm{(AL2)}}_2 (n-1,s), \nonumber \end{align} so that \mbox{$q_n = \boldsymbol{\Delta}_n^+ {\cal K}(n,n)$} and \mbox{$r_n=\boldsymbol{\Delta}_n^+ \accentset{{\cc@style\underline{\mskip10mu}}}{\cal K}(n,n)$}. Let us derive the linear summation equation for ${\cal K}(n,m)$ from the Gel'fand--Levitan--Marchenko equations (\ref{GLM_1}) and (\ref{GLM_2}) for (AL1). From (\ref{GLM_1}), we have \[ \accentset{{\cc@style\underline{\mskip10mu}}}{K}^{\hspace{1pt}\mathrm{(AL1)}}_1 (n,p) + \sum_{j=0}^{\infty} K^{\mathrm{(AL1)}}_1 (n,n+j) F^{\mathrm{(AL1)}}(p+j+1) = O, \hspace{5mm} p \geq n. \] Thus, taking the sum with respect to $p$, we obtain the relation \begin{align} \sum_{p=n+k}^\infty \accentset{{\cc@style\underline{\mskip10mu}}}{K}^{\hspace{1pt}\mathrm{(AL1)}}_1 (n,p) &=- \sum_{j=0}^{\infty} \left[ \sum_{s=n+j}^\infty K^{\mathrm{(AL1)}}_1 (n,s) - \sum_{s=n+j+1}^\infty K^{\mathrm{(AL1)}}_1 (n,s) \right] \nonumber \\ & \hphantom{=} \; \mbox{}\times \sum_{p=n+k}^\infty F^{\mathrm{(AL1)}}(p+j+1). 
\label{4.22} \end{align} From (\ref{GLM_2}), we have \begin{align} K^{\mathrm{(AL1)}}_1 (n,s) &= \accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL1)}}(s) + \sum_{k=0}^{\infty} \accentset{{\cc@style\underline{\mskip10mu}}}{K}^{\hspace{1pt}\mathrm{(AL1)}}_1 (n,n+k) \accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL1)}}(s) \nonumber \\ & \hphantom{=}\; - \sum_{k=0}^{\infty} \left[ \sum_{p=n+k}^\infty \accentset{{\cc@style\underline{\mskip10mu}}}{K}^{\hspace{1pt}\mathrm{(AL1)}}_1 (n,p) \right] \left[ \accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL1)}}(s+k) - \accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL1)}}(s+k+1) \right], \hspace{5mm} s \geq n, \nonumber \end{align} where (a variant of) the summation by parts formula is used. Thus, taking the sum with respect to $s$ and using the fact that \mbox{$\accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL1)}}(n)$} decays rapidly as \mbox{$n \to +\infty$}, we obtain \begin{align} \sum_{s=m}^\infty K^{\mathrm{(AL1)}}_1 (n,s) & = \left[ I + \sum_{k=0}^{\infty} \accentset{{\cc@style\underline{\mskip10mu}}}{K}^{\hspace{1pt}\mathrm{(AL1)}}_1 (n,n+k) \right] \sum_{s=m}^\infty \accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL1)}}(s) \nonumber \\ & \hphantom{=}\; - \sum_{k=0}^{\infty} \left[ \sum_{p=n+k}^\infty \accentset{{\cc@style\underline{\mskip10mu}}}{K}^{\hspace{1pt}\mathrm{(AL1)}}_1 (n,p) \right] \accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL1)}}(m+k), \hspace{5mm} m \geq n. \nonumber \end{align} Substituting (\ref{4.22}) into the last term and multiplying both sides from the left by \mbox{$\left[ I + \sum_{k=0}^{\infty} \accentset{{\cc@style\underline{\mskip10mu}}}{K}^{\hspace{1pt}\mathrm{(AL1)}}_1 (n,n+k) \right]^{-1}$}, we arrive at the linear summation equation for ${\cal K}(n,m)$: \begin{align} {\cal K} (n,m) & = \sum_{s=m}^{\infty } \accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL1)}}(s) + \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \left[ {\cal K}(n,n+j) - {\cal K}(n,n+j+1)\right] \nonumber \\ & \hphantom{=}\; \mbox{}\times \sum_{p=n+k}^\infty F^{\mathrm{(AL1)}}(p+j+1) \accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL1)}}(m+k), \hspace{5mm} m \geq n, \nonumber \end{align} which can be rewritten using (\ref{F-GI}) as \begin{align} {\cal K} (n,m) = \sum_{s=m}^{\infty} \accentset{{\cc@style\underline{\mskip10mu}}}{F}(s) + \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \left[ {\cal K}(n,n+j) - {\cal K}(n,n+j+1)\right] F(n+j+k) \accentset{{\cc@style\underline{\mskip10mu}}}{F}(m+k), \hspace{5mm} m \geq n. 
\nonumber \end{align} In a similar way, we can derive the linear summation equation for $\accentset{{\cc@style\underline{\mskip10mu}}}{\cal K}(n,m)$ from the Gel'fand--Levitan--Marchenko equations (\ref{GLM_1}) and (\ref{GLM_2}) for (AL2) as \begin{align} \accentset{{\cc@style\underline{\mskip10mu}}}{\cal K} (n,m) &= \sum_{s=m-1}^{\infty } F^{\mathrm{(AL2)}}(s) + \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \left[ \accentset{{\cc@style\underline{\mskip10mu}}}{\cal K}(n,n+j) - \accentset{{\cc@style\underline{\mskip10mu}}}{\cal K}(n,n+j+1) \right] \nonumber \\ & \hphantom{=}\; \mbox{}\times \sum_{p=n+k-1}^\infty \accentset{{\cc@style\underline{\mskip10mu}}}{F}^{\hspace{1pt}\mathrm{(AL2)}}(p+j+1) F^{\mathrm{(AL2)}}(m+k-1) \nonumber \\[2mm] &= \sum_{s=m-1}^{\infty } F(s) - \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \left[ \accentset{{\cc@style\underline{\mskip10mu}}}{\cal K}(n,n+j) - \accentset{{\cc@style\underline{\mskip10mu}}}{\cal K}(n,n+j+1) \right] \accentset{{\cc@style\underline{\mskip10mu}}}{F}(n+j+k) F(m+k-1), \nonumber \\ & \hspace{110mm} m \geq n, \nonumber \end{align} where (\ref{F-GI}) is used. Combining the above results, we obtain a set of formulas for the solutions of the space-discrete Kaup--Newell system (\ref{sdKN2}), which tend to zero as \mbox{$n \to + \infty$}, in the form~\cite{TsuJMP10}: \begin{subequations} \label{sdKNsol} \begin{align} q_n &= \boldsymbol{\Delta}_n^+ {\cal K}(n,n ), \label{KN-lin1} \\[4pt] r_n &= \boldsymbol{\Delta}_n^+ {\cal \accentset{{\cc@style\underline{\mskip10mu}}}{K}}(n,n ), \label{KN-lin2} \\[4pt] {\cal K} (n,m) &= {\cal \accentset{{\cc@style\underline{\mskip10mu}}}{F}} (m) +\sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \left[ {\cal K}(n,n+j) - {\cal K}(n,n+j+1)\right] \nonumber \\ & \hphantom{=} \; \mbox{}\times \left[ {\cal F} (n+j+k+1) - {\cal F}(n+j+k+2) \right] \left[ {\cal \accentset{{\cc@style\underline{\mskip10mu}}}{F}}(m+k) - {\cal \accentset{{\cc@style\underline{\mskip10mu}}}{F}}(m+k+1) \right], \hspace{5mm} m \geq n, \label{calK-KN} \\[4pt] {\cal \accentset{{\cc@style\underline{\mskip10mu}}}{K}} (n,m) &= {\cal F}(m) - \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \left[ {\cal \accentset{{\cc@style\underline{\mskip10mu}}}{K}}(n,n+j) - {\cal \accentset{{\cc@style\underline{\mskip10mu}}}{K}}(n,n+j+1) \right] \nonumber \\ & \hphantom{=} \; \mbox{} \times \left[ {\cal \accentset{{\cc@style\underline{\mskip10mu}}}{F}}(n+j+k) -{\cal \accentset{{\cc@style\underline{\mskip10mu}}}{F}}(n+j+k+1) \right] \left[ {\cal F}(m+k) - {\cal F}(m+k+1) \right], \hspace{5mm} m \geq n. \label{calKbar-KN} \end{align} \end{subequations} Here, the functions ${\cal F}(n)$ and ${\cal \accentset{{\cc@style\underline{\mskip10mu}}}{F}}(n)$ are defined as \mbox{${\cal F}(n) :=\sum_{s=n-1}^\infty F(s)$} and \mbox{${\cal \accentset{{\cc@style\underline{\mskip10mu}}}{F}} (n) :=\sum_{s=n}^\infty \accentset{{\cc@style\underline{\mskip10mu}}}{F}(s)$}, respectively; they satisfy the same linear evolutionary system as $F(n)$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{F}(n)$ (cf.~(\ref{AL-linear})) and decay rapidly as \mbox{$n \to + \infty$}. Note that the set of formulas (\ref{sdKNsol}) can provide the solutions for any flow of the space-discrete Kaup--Newell hierarchy if ${\cal F}(n)$ and ${\cal \accentset{{\cc@style\underline{\mskip10mu}}}{F}}(n)$ satisfy the corresponding linear system. 
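As an illustrative aside (not part of the construction above), the linear summation equations (\ref{calK-KN}) and (\ref{calKbar-KN}) lend themselves to a direct numerical treatment by truncation. The following Python sketch considers the scalar case \mbox{$l=1$} with a single bound state at \mbox{$t=0$}; the parameter values, the truncation orders and all helper names are illustrative choices. It solves a truncated version of (\ref{calK-KN}) for ${\cal K}(n,m)$ and evaluates \mbox{$q_n = \boldsymbol{\Delta}_n^+ {\cal K}(n,n)$}; the component $r_n$ can be treated in exactly the same way using (\ref{calKbar-KN}).
\begin{verbatim}
import numpy as np

# Scalar (l = 1) reflectionless data with one bound state at t = 0:
#   calF(n)  = C  * mu**n      with |mu|  < 1,
#   calFb(n) = Cb * mub**(-n)  with |mub| > 1.
# Illustrative values; Cb carries a phase so that the resulting profile stays bounded.
mu  = 0.6
mub = 1.0 / mu
C   = 0.3
Cb  = 1j * C * np.sqrt(mu)

calF  = lambda n: C * mu ** n
calFb = lambda n: Cb * mub ** (-n)

J, KMAX = 40, 200      # truncation of the m-range and of the k-sums

def K_diag(n):
    """Solve the truncated summation equation for x_i = K(n, n+i), i = 0..J,
    treating K(n, n+J+1) as negligible, and return K(n, n)."""
    def T(i, j):
        k = np.arange(KMAX)
        return np.sum((calF(n + j + k + 1) - calF(n + j + k + 2))
                      * (calFb(n + i + k) - calFb(n + i + k + 1)))
    A = np.eye(J + 1, dtype=complex)
    b = np.array([calFb(n + i) for i in range(J + 1)], dtype=complex)
    for i in range(J + 1):
        for j in range(J + 1):
            # coefficient of x_j coming from the [K(n,n+j) - K(n,n+j+1)] factor
            A[i, j] -= T(i, j) - (T(i, j - 1) if j >= 1 else 0.0)
    return np.linalg.solve(A, b)[0]

# q_n = K(n+1, n+1) - K(n, n): a localized profile decaying as n -> +- infinity
for n in range(-8, 9, 2):
    qn = K_diag(n + 1) - K_diag(n)
    print(f"n = {n:3d}   |q_n| = {abs(qn):.6f}")
\end{verbatim}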
In the same way as for the other lattice systems, we can derive the multisoliton solutions of the space-discrete Kaup--Newell system (\ref{sdKN2}) from the set of formulas (\ref{sdKNsol}); this corresponds to the special case of reflectionless potentials for both (AL1) and (AL2). For simplicity, we assume that $A(\mu)^{-1}$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{A}(\mu)^{-1}$ only have simple poles and set \begin{align} {\cal F}(n, t) = \sum_{j=1}^N {\cal C}_j(t) \mu_j^{n}, \hspace{7mm} {\cal \accentset{{\cc@style\underline{\mskip10mu}}}{F}}(n,t) = \sum_{j=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \accentset{{\cc@style\underline{\mskip10mu}}}{\cal C}_j(t) \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-n}, \label{F-Fbar-KN} \end{align} where the time dependences of ${\cal C}_j$ and $\accentset{{\cc@style\underline{\mskip10mu}}}{\cal C}_j$ are given as (\ref{C_time}) and (\ref{C_bar_time}), i.e. \[ {\cal C}_j(t) = {\cal C}_j(0) \mathrm{e}^{[(\mu_j-1) b + (1-\mu_j^{-1})a] t}, \hspace{5mm} \accentset{{\cc@style\underline{\mskip10mu}}}{\cal C}_j(t) = \accentset{{\cc@style\underline{\mskip10mu}}}{\cal C}_j(0) \mathrm{e}^{-[(\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j-1) b + (1-\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-1}) a] t}. \] We also set \begin{equation} {\cal K}(n, m; t) = \sum_{j=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} {\cal G}_j(n,t) \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-m}, \hspace{7mm} \accentset{{\cc@style\underline{\mskip10mu}}}{\cal K} (n, m; t) = \sum_{j=1}^{N} {\cal H}_j(n,t) \mu_j^{m}, \label{G-H-KN} \end{equation} and substitute the expressions (\ref{F-Fbar-KN}) and (\ref{G-H-KN}) into (\ref{calK-KN}) and (\ref{calKbar-KN}). Because \mbox{$|\mu_j| < 1$} $(j=1, 2, \ldots, N)$ and \mbox{$|\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j| >1$} $(j=1, 2, \ldots, \accentset{{\cc@style\underline{\mskip10mu}}}{N})$, we can evaluate the infinite sum to obtain linear algebraic systems for determining ${\cal G}_j$ and ${\cal H}_j$, respectively. They can be written as \begin{subequations} \label{GH-sys-KN} \begin{align} & \left[ \begin{array}{cccc} \! {\cal G}_1 \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1^{\hspace{1pt}-n} \! & \! {\cal G}_2 \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_2^{\hspace{1pt}-n} \! & \! \cdots \! & \! {\cal G}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{\hspace{1pt}-n}\! \end{array} \right] \left[ \begin{array}{ccc} {\cal U}_{11} & \cdots & {\cal U}_{1\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \\ \vdots & \ddots & \vdots \\ {\cal U}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}1} & \cdots & {\cal U}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \\ \end{array} \right] = \left[ \begin{array}{cccc} \! \accentset{{\cc@style\underline{\mskip10mu}}}{\cal C}_1 \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1^{\hspace{1pt}-n} \! & \! \accentset{{\cc@style\underline{\mskip10mu}}}{\cal C}_2 \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_2^{\hspace{1pt}-n}\! & \! \cdots \! & \! 
\accentset{{\cc@style\underline{\mskip10mu}}}{\cal C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}\hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{\hspace{1pt}-n} \! \end{array} \right] \label{GH-sys1-KN} \end{align} and \begin{align} & \left[ \begin{array}{cccc} \! {\cal H}_1 \hspace{1pt} \mu_1^{n} \! & \! {\cal H}_2 \hspace{1pt} \mu_2^{n} \! & \! \cdots \! & \! {\cal H}_{N} \hspace{1pt} \mu_N^{n} \! \end{array} \right] \left[ \begin{array}{ccc} {\cal V}_{11} & \cdots & {\cal V}_{1N} \\ \vdots & \ddots & \vdots \\ {\cal V}_{N1} & \cdots & {\cal V}_{NN} \\ \end{array} \right] = \left[ \begin{array}{cccc} \! {\cal C}_1 \hspace{1pt} \mu_1^{n} \! & \! {\cal C}_2 \hspace{1pt} \mu_2^{n} \! & \! \cdots \! & \! {\cal C}_{N} \hspace{1pt} \mu_N^{n} \! \end{array} \right]. \label{GH-sys2-KN} \end{align} \end{subequations} Here, all the entries in (\ref{GH-sys-KN}) are \mbox{$l \times l$} matrices; the block matrices \mbox{${\cal U}=({\cal U}_{jk})_{1 \le j,k \le \accentset{{\cc@style\underline{\mskip10mu}}}{N}}$} and \mbox{${\cal V}=({\cal V}_{jk})_{1 \le j,k \le N}$} are defined as \begin{align} {\cal U}_{jk} &:= \delta_{jk} I - \sum_{i=1}^{N} \frac{\displaystyle \left( 1-\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j^{\hspace{1pt}-1}\right) \left(1-\mu_i\right) \left(1-\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_k^{\hspace{1pt}-1} \right) } {\displaystyle \left( 1-\frac{\mu_i}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j}\right) \left( 1-\frac{\mu_i}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_k} \right)} \mu_i^{n+1} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_k^{\hspace{1pt}-n}\hspace{1pt} {\cal C}_i (t) \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\cal C}_k(t) \nonumber \end{align} and \begin{align} {\cal V}_{jk} &:= \delta_{jk} I + \sum_{i=1}^{\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \frac{\left( 1- \mu_j\right) \left(1-\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_i^{\hspace{1pt}-1} \right) \left(1-\mu_k\right)} {\displaystyle \left( 1-\frac{\mu_j}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_i}\right) \left( 1- \frac{\mu_k}{\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_i} \right)} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_i^{\hspace{1pt} -n} \mu_k^{n} \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\cal C}_i(t) \hspace{1pt} {\cal C}_k(t), \nonumber \end{align} respectively. Thus, with the aid of (\ref{KN-lin1}), (\ref{KN-lin2}) and (\ref{G-H-KN}), we obtain the multisoliton solutions of the space-discrete Kaup--Newell system (\ref{sdKN2}) in the difference form: \begin{subequations} \label{KN-soliton} \begin{align} q_n (t) &= \boldsymbol{\Delta}_n^+ \left\{ \left[ \begin{array}{ccc} \! \accentset{{\cc@style\underline{\mskip10mu}}}{\cal C}_1(t) \hspace{1pt}\accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_1^{\hspace{1pt}-n} \! & \! \cdots \! & \! \accentset{{\cc@style\underline{\mskip10mu}}}{\cal C}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}(t) \hspace{1pt} \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}}^{\hspace{1pt}-n} \! 
\end{array} \right] \left[ \begin{array}{ccc} {\cal U}_{11} & \cdots & {\cal U}_{1\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \\ \vdots & \ddots & \vdots \\ {\cal U}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}1} & \cdots & {\cal U}_{\accentset{{\cc@style\underline{\mskip10mu}}}{N}\accentset{{\cc@style\underline{\mskip10mu}}}{N}} \\ \end{array} \right]^{-1} \left[ \begin{array}{c} I \\ \vdots \\ I \\ \end{array} \right] \right\}, \label{Q-Nsol-KN} \\[1mm] r_n (t) &= \boldsymbol{\Delta}_n^+ \left\{ \left[ \begin{array}{ccc} \! {\cal C}_1 (t) \hspace{1pt} \mu_1^{n} \! & \! \cdots \! & \! {\cal C}_{N} (t) \hspace{1pt} \mu_N^{n} \! \end{array} \right] \left[ \begin{array}{ccc} {\cal V}_{11} & \cdots & {\cal V}_{1N} \\ \vdots & \ddots & \vdots \\ {\cal V}_{N1} & \cdots & {\cal V}_{NN} \\ \end{array} \right]^{-1} \left[ \begin{array}{c} I \\ \vdots \\ I \\ \end{array} \right] \right\}. \end{align} \end{subequations} When \mbox{$b=a^\ast$}, the space-discrete Kaup--Newell system (\ref{sdKN2}) allows not only the complex conjugation reduction \mbox{$r_n = \mathrm{i} \sigma q_{n- 1/2 }^{\,\ast}$} but also the Hermitian conjugation reduction \mbox{$r_n = \mathrm{i} \sigma q_{n- 1/2 }^{\,\dagger}$}, where $\sigma$ is a real constant~\cite{Tsuchi02}. Each reduction relates two originally uncoupled systems, (\ref{sdKN2}) with \mbox{$n \in \mathbb{Z}$} and (\ref{sdKN2}) with \mbox{$n \in \mathbb{Z}+1/2$}, to provide a single equation with \mbox{$n \in \mathbb{Z}/2$}. Clearly, the value of $\sigma$ is nonessential, so we set \mbox{$\sigma=1$} and consider the Hermitian conjugation reduction \mbox{$r_n = \mathrm{i} q_{n- 1/2 }^{\,\dagger}$}. This reduction can be realized at the level of formulas (\ref{sdKNsol}) by setting \begin{equation} {\cal \accentset{{\cc@style\underline{\mskip10mu}}}{F}} (n) = \mathrm{i} \left\{ {\cal F} \left( n+\mbox{$\tiny{\frac{1}{2}}$} \right) \right\}^\dagger, \nonumber \end{equation} which is consistent with the time evolution (cf.~(\ref{AL-linear}) with \mbox{$b=a^\ast$}). In particular, the $N$-soliton solution of the space-discrete Kaup--Newell equation, \begin{equation} q_{n,t} - \boldsymbol{\Delta}_n^+ \left[ a \left( I - \mathrm{i}q_{n} q_{n-\frac{1}{2}}^{\,\dagger} \right)^{-1} q_{n} + a^\ast \left( I + \mathrm{i}q_{n-1} q_{n-\frac{1}{2}}^{\,\dagger} \right)^{-1} q_{n-1} \right] = O, \label{matrixKNeq} \end{equation} is obtained by setting \mbox{$b=a^\ast$}, \mbox{$\accentset{{\cc@style\underline{\mskip10mu}}}{N}=N$} and \[ \accentset{{\cc@style\underline{\mskip10mu}}}{\mu}_j= \frac{1}{\mu_j^\ast}, \hspace{5mm} \accentset{{\cc@style\underline{\mskip10mu}}}{\cal C}_j(t) = \mathrm{i} \left\{ {\cal C}_j (t) \mu_j^{\frac{1}{2}} \right\}^\dagger, \hspace{5mm} j=1, 2, \ldots, N \] in formula (\ref{Q-Nsol-KN}), where \mbox{${\cal C}_j(t) = {\cal C}_j(0) \mathrm{e}^{[(\mu_j-1) a^\ast + (1-\mu_j^{-1})a] t}$}. The imaginary unit (roman $\mathrm{i}$) should not be confused with the index of summation (italic $i$) in the definitions of ${\cal U}_{jk}$ and ${\cal V}_{jk}$. The $N$-soliton solution thus obtained is a space-discrete analog of the $N$-soliton solution of the continuous matrix Kaup--Newell equation reported in~\cite{talk08}. Note that the square matrix equation (\ref{matrixKNeq}) can be further reduced to a vector equation by setting all but one of the columns (or rows) of $q_n$ to zero; its $N$-soliton solution can be obtained in the same way as described in~\cite{Tsuchi04}. 
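As a further illustration, the one-soliton case (\mbox{$N=\accentset{{\cc@style\underline{\mskip10mu}}}{N}=1$}, scalar \mbox{$l=1$}) of formula (\ref{Q-Nsol-KN}) under the above reduction with \mbox{$b=a^\ast$} is easily evaluated numerically. The following Python sketch does this for illustrative parameter values (they are not taken from any particular physical setting) and simply prints the localized envelope \mbox{$|q_n(t)|$} at two times.
\begin{verbatim}
import numpy as np

# One-soliton (N = 1, scalar l = 1) parameters; illustrative values only
mu  = 0.6                      # |mu| < 1
mub = 1.0 / np.conj(mu)        # reduction:  mub = 1/mu*
a   = 1.0 + 0.5j               # b = a*
C0  = 0.3                      # C_1(0)

def C(t):
    # time dependence of the norming constant with b = a*
    return C0 * np.exp(((mu - 1.0) * np.conj(a) + (1.0 - 1.0 / mu) * a) * t)

def Cb(t):
    # reduced barred constant:  Cb(t) = i * (C(t) * mu^(1/2))^*
    return 1j * np.conj(C(t) * np.sqrt(mu))

def U11(n, t):
    # U_{11}: the single block of the matrix U following (GH-sys-KN), with N = 1
    coef = (1 - 1 / mub) ** 2 * (1 - mu) / (1 - mu / mub) ** 2
    return 1.0 - coef * mu ** (n + 1) * mub ** (-n) * C(t) * Cb(t)

def q(n, t):
    # q_n(t) from (Q-Nsol-KN): forward difference of  Cb(t) * mub^(-n) / U11(n, t)
    f = lambda m: Cb(t) * mub ** (-m) / U11(m, t)
    return f(n + 1) - f(n)

for t in (0.0, 2.0):
    print(f"t = {t}")
    for n in range(-10, 11, 2):
        print(f"  n = {n:3d}   |q_n| = {abs(q(n, t)):.6f}")
\end{verbatim}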
\section{Concluding remarks} In this paper, we have developed the inverse scattering method associated with the matrix Ablowitz--Ladik eigenvalue problem and its applications to space-discrete analogs of derivative NLS systems. In particular, the most streamlined version of the inverse scattering method on a lattice is formulated, which can avoid redundant processes present in the existing literature. Thus, we are now able to understand the inverse scattering method for the Ablowitz--Ladik lattice as a direct discrete analog of the inverse scattering method for the continuous NLS system; in essence, the discrete case is no longer more complicated than the continuous case. Moreover, we can characterize the space-discrete derivative NLS systems using the potentials and linear eigenfunctions appearing in the Lax representation for the Ablowitz--Ladik lattice. On the basis of this characterization, we can solve the space-discrete derivative NLS systems by preparing two relevant copies of the inverse scattering formulas for the Ablowitz--Ladik lattice and considering a B\"acklund--Darboux transformation between them. This provides a unification of the inverse scattering method for the NLS system and that for the derivative NLS systems in the discrete setting; such a unification is also possible in the continuous case (see~\cite{talk07,talk08}). The multisoliton solutions of the space-discrete derivative NLS systems can be obtained in a straightforward manner within this unified framework; they reduce to the multisoliton solutions of the derivative NLS systems in the continuous space limit. Note that the space-discrete Kaup--Newell system allows the introduction of the potential variables with respect to the discrete spatial coordinate and can be rewritten locally in terms of these variables; our solution formulas reflect this property accurately, that is, by construction any solution is written in the difference form using the forward difference operator. We assumed vanishing boundary conditions at spatial infinity, namely, as \mbox{$n \to + \infty$} and \mbox{$n \to - \infty$}. In the case of matrix-valued dependent variables, this assumption imposes highly nontrivial conditions on the scattering data, which become almost trivial in the scalar case (cf.~(\ref{sca-con1})). For simplicity, we derived such conditions in the reflectionless case of the potentials; however, they are expected to be valid in the general case, because in the limits \mbox{$t \to \pm \infty$} the contribution of the continuous spectrum would become negligible (cf.~(\ref{F_time1}) and (\ref{Fbar_time1}) with \mbox{$b=a^\ast$}). Note also that our approach of solving the derivative NLS systems using the NLS eigenvalue problem is applicable under other boundary conditions that are amenable to the inverse scattering method or its generalizations. In addition, although we mainly considered the first nontrivial flows of the integrable hierarchies, our approach can be applied, with minor amendments, to the higher flows as well as the negative flows of the hierarchies.
{ "timestamp": "2012-07-13T02:06:39", "yymm": "1206", "arxiv_id": "1206.3210", "language": "en", "url": "https://arxiv.org/abs/1206.3210", "abstract": "We refine and develop the inverse scattering theory on a lattice in such a way that the Ablowitz-Ladik lattice and derivative NLS lattices as well as their matrix analogs can be solved in a unified way. The inverse scattering method for the (matrix analog of the) Ablowitz-Ladik lattice is simplified to the same level as that for the continuous NLS system. Using the linear eigenfunctions of the Lax pair for the Ablowitz-Ladik lattice, we can construct solutions of the derivative NLS lattices such as the discrete Gerdjikov-Ivanov (also known as Ablowitz-Ramani-Segur) system and the discrete Kaup-Newell system. Thus, explicit solutions such as the multisoliton solutions for these systems can be obtained by solving linear summation equations of the Gel'fand-Levitan-Marchenko type. The derivation of the discrete Kaup-Newell system from the Ablowitz-Ladik lattice is based on a new method that allows us to generate new integrable systems from known systems in a systematic manner. In an appendix, we describe the reduction of the matrix Ablowitz-Ladik lattice to a vector analog of the modified Volterra lattice from the point of view of the inverse scattering method.", "subjects": "Exactly Solvable and Integrable Systems (nlin.SI); Mathematical Physics (math-ph); Analysis of PDEs (math.AP); Classical Analysis and ODEs (math.CA); Spectral Theory (math.SP)", "title": "A refined and unified version of the inverse scattering method for the Ablowitz-Ladik lattice and derivative NLS lattices", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575116884779, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.7091542088578316 }
https://arxiv.org/abs/2108.00592
Smith Normal Form and the Generalized Spectral Characterization of Graphs
Spectral characterization of graphs is an important topic in spectral graph theory, which has received a lot of attention from researchers in recent years. It is generally very hard to show a given graph to be determined by its spectrum. Recently, Wang [10] gave a simple arithmetic condition for graphs being determined by their generalized spectra. Let $G$ be a graph with adjacency matrix $A$ on $n$ vertices, and $W=[e,Ae,\ldots,A^{n-1}e]$ ($e$ is the all-one vector) be the walk-matrix of $G$. A theorem of Wang [10] states that if $2^{-\lfloor n/2\rfloor}\det W$ (which is always an integer) is odd and square-free, then $G$ is determined by the generalized spectrum. In this paper, we find a new and short route which leads to a stronger version of the above theorem. The result is achieved by using the Smith Normal Form of the walk-matrix of $G$. The proposed method gives a new insight in dealing with the problem of generalized spectral characterization of graphs.
\section{Introduction} The spectrum of a graph encodes a lot of combinatorial information about the graph and thus has long been a powerful tool in dealing with various problems in graph theory. A long-standing unsolved question in spectral graph theory is ``Which graphs are determined by their spectra (DS for short)?'' The problem originates from chemistry and is closely related to several other problems of central interest such as the graph isomorphism problem and a famous problem of Kac, ``Can one hear the shape of a drum?'' We say two graphs are \emph{cospectral} if they share the same spectrum. A graph $G$ is said to be \emph{determined by its spectrum} (DS for short) if any graph having the same spectrum as $G$ is isomorphic to $G$. It is generally very hard to show that a given graph is DS. Despite many efforts, up to now only very few families of graphs with special structural properties are known to be DS, and the techniques involved in proving them to be DS depend heavily on the special properties of the spectra of these graphs and cannot be applied to general graphs. For the background and more known results about this problem, we refer the readers to \cite{DH,DH1}. In recent years, Wang and Xu~\cite{W1} and Wang~\cite{wang2017JCTB} considered the above problem from the perspective of the generalized spectrum. Two graphs are \emph{generalized cospectral} if they are cospectral with cospectral complements. A graph $G$ is said to be \emph{determined by its generalized spectrum} (DGS for short) if any graph generalized cospectral with $G$ is isomorphic to $G$. For a given graph $G$ with adjacency matrix $A=A(G)$ on $n$ vertices, let $W=W(G):=[e,Ae,\ldots,A^{n-1}e]$ ($e$ is the all-one vector) be the walk-matrix of $G$. In Wang~\cite{wang2017JCTB}, the author proved the following theorem. \begin{them}[Wang~\cite{wang2013EJC,wang2017JCTB}]\label{thm1} If $\frac{\det W(G)}{2^{\lfloor n/2\rfloor}}$ (which is always an integer) is odd and square-free, then $G$ is DGS. \end{them} It is not difficult to show (see Lemma 3.5 in~\cite{wang2017JCTB}) that $\frac{\det W(G)}{2^{\lfloor n/2\rfloor}}$ is odd and square-free if and only if the Smith Normal Form of $W(G)$ is as follows: $${\rm diag}(\underbrace{1,1,\cdots,1}_{\lceil\frac{n}{2}\rceil},\underbrace{2,2,\cdots,2,2m}_{\lfloor\frac{n}{2}\rfloor}),$$ where $m$ is an odd and square-free integer. Motivated by the above observation, in this paper we shall give a stronger version of the above theorem by using the Smith Normal Form of the walk-matrix of $G$, which significantly improves upon Theorem~\ref{thm1}; see Theorem~\ref{Main} in Section 2.2. The proof of our result is, however, new and much shorter than the original one, and hence gives a new insight into the problem of generalized spectral characterizations of graphs. The key new ingredient is the observation that whenever $G$ and $H$ are generalized cospectral, then, under certain mild conditions, the null spaces of $W(G)$ and $W(H)$ coincide over the finite field $\mathbb{F}_p$, where $p$ is a prime. The rest of the paper is organized as follows. In Section 2, we give some preliminary results that will be needed later in the paper and then present the main result. In Section 3, we present the proof of Theorem~\ref{Main} in two cases: $p$ odd and $p=2$. An example and some concluding remarks are given in Sections 4 and 5, respectively. 
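As an illustrative aside, the arithmetic condition in Theorem~\ref{thm1} is easy to test by computer for any given graph. The following Python sketch (using SymPy for exact integer arithmetic; the random graph and the helper names are merely illustrative) computes the walk-matrix and checks whether $\det W(G)/2^{\lfloor n/2\rfloor}$ is odd and square-free.
\begin{verbatim}
import numpy as np
import sympy as sp

def walk_matrix(A):
    """W(G) = [e, Ae, ..., A^(n-1)e] as an exact integer matrix."""
    n = A.shape[0]
    cols, v = [], sp.ones(n, 1)
    for _ in range(n):
        cols.append(v)
        v = A * v
    return sp.Matrix.hstack(*cols)

def satisfies_wang_condition(A):
    """True if det W / 2^floor(n/2) is odd and square-free (then G is DGS
    by the theorem above); False is inconclusive."""
    n = A.shape[0]
    m = abs(walk_matrix(A).det()) / sp.Integer(2) ** (n // 2)
    if m == 0 or not m.is_integer or m % 2 == 0:
        return False
    return all(k == 1 for k in sp.factorint(m).values())

# a random labelled graph (any symmetric 0/1 matrix with zero diagonal works)
rng = np.random.default_rng(7)
n = 8
U = np.triu(rng.integers(0, 2, size=(n, n)), k=1)
A = sp.Matrix((U + U.T).tolist())

print("det W / 2^floor(n/2) =", abs(walk_matrix(A).det()) / sp.Integer(2) ** (n // 2))
print("condition holds      :", satisfies_wang_condition(A))
\end{verbatim}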
\section{Preliminaries and main results} \subsection{Preliminaries} In this section, we shall give some preliminaries that will be used later in the paper. Throughout this paper, let $G$ be a simple graph with adjacency matrix $A=A(G)$ on $n$ vertices. The \emph{spectrum} of $G$, denoted by ${\rm Spec}(G)$, is the multiset of the eigenvalues of $A(G)$. The \emph{generalized spectrum} of $G$ is the ordered pair $({\rm Spec}(G),{\rm Spec}(\bar{G}))$, where $\bar{G}$ is the complement of $G$. We say that $G$ and $H$ are \emph{generalized cospectral}, if they have the same generalized spectrum. A graph $G$ is \emph{determined by the generalized spectrum} (DGS for short) if any graph having the same generalized spectrum as $G$ is isomorphic to $G$. The \emph{walk-matrix} of $G$ is defined as $$W=W(G):=[e,Ae,\ldots,A^{n-1}e],$$ where $e$ is the all-one vector. Note that the $(i,j)$-th entry of $W$ counts the number of walks starting from vertex $i$ with length $j-1$. An integral matrix $U$ is \emph{unimodular}, if $\det(U) =\pm 1$. It is well-known that, for every integral matrix $M$ with full rank, there exist unimodular matrices $U$ and $V$ such that $M=USV$, where $S=\diag(d_{1},d_{2},\ldots,d_{n-1},d_{n})$ is a diagonal matrix with $d_{i}\mid d_{i+1}$ for $i=1,2,\ldots,n-1$, which is known as Smith Normal Form (abbreviated SNF) of the matrix $M$, and $d_{i}=d_i(M)$ is the $i$-th \emph{invariant factor} of $M$. The SNF of an integral matrix can be computed efficiently; see e.g.~\cite{ASchrijver}. Recall that a \emph{rational orthogonal matrix} $Q$ is an orthogonal matrix with rational entries; it is called \emph{regular} if the sum of every row (column) of $Q$ is one, i.e., $Qe=e$. Wang and Xu~\cite{W1} initiated the study of the generalized spectral characterizations of graphs. The starting point of their study is the following lemma, which gives a simple characterization for a pair of graphs to be generalized cospectral. \begin{lemma}[Wang and Xu~\cite{W1}; Johnson and Newman~\cite{ JohnsonNew}]\label{lortho} Let $G$ and $H$ be two graphs with $\det W(G)\neq0$. Then $H$ is generalized cospectral with $G$ if and only if there exists a unique regular rational orthogonal matrix $Q$ such that $Q^{\T}A(G)Q=A(H)$. \end{lemma} Define $$\mathcal{Q}(G)=\{Q\in{RO_n(\mathbb{Q})}~|~Q^{\T}A(G)Q~\mbox{is a (0,1)-matrix }\},$$ where $RO_n(\mathbb{Q})$ denotes the set of all $n$ by $n$ regular rational orthogonal matrices. \begin{lm}[Wang and Xu~\cite{W1}]\label{good} Let $G$ be a graph with $\det W(G)\neq 0$. Then $G$ is DGS if and only if $\mathcal{Q}(G)$ contains only permutation matrices. \end{lm} According to the above lemma, in order to show that $G$ is DGS, it suffices to show that every $Q\in {\mathcal{Q}(G)}$ is a permutation matrix. In order to do so, the following notion is proved to be useful. \begin{Defi}[Wang and Xu~\cite{W1}] The level of a rational matrix $Q$, denoted by $\ell (Q)$ or $\ell$, is the smallest positive integer $k$ such that $kQ$ is an integral matrix. \end{Defi} It is clear that a regular rational orthogonal matrix is a permutation matrix if and only if $\ell(Q)=1$. The following lemma will be frequently used in the sequel. \begin{lemma}\label{rem1} Let $X$ and $Y$ be two non-singular integral matrices such that $QX=Y$. Then $\ell\mid \gcd(d_n(X),d_n(Y))$, where $d_n(X)$ (resp. $d_n(Y)$) is $n$-th invariant factor of $X$ (resp. $Y$). 
\end{lemma} \begin{proof} Suppose that $X=USV$, where $S={\rm diag}(d_1(X),d_2(X),\ldots,d_n(X))$ is the SNF of $X$, and $U$ and $V$ are unimodular matrices. Then we have $$Q=YV^{-1}{\rm diag}(d_1^{-1}(X),d_2^{-1}(X),\ldots,d_n^{-1}(X))U^{-1},$$ and hence $d_n(X)Q$ is an integral matrix. By the minimality of $\ell$, we get that $\ell\mid d_n(X)$. Similarly, noting that $Q^{\T}Y=X$ (recall that $Q$ is orthogonal, so $Q^{\T}=Q^{-1}$) and $\ell(Q)=\ell(Q^{\T})$, we have $\ell\mid d_n(Y)$. So the lemma follows. \end{proof} Suppose that $Q\in {\mathcal{Q}(G)}$ has level $\ell$. Then $Q^{\T}A(G)Q=A(H)$ for some graph $H$. It follows that $Q^{\T}A^k(G)e=A^k(H)e$ for $k=0,1,\ldots,n-1$, which gives that $Q^{\T}W(G)=W(H)$. By Lemma~\ref{rem1}, we get that $\ell\mid \gcd(d_n(W(G)),d_n(W(H)))$. Thus, the $n$-th invariant factor of $W(G)$ (resp. $W(H)$) provides useful information about the level of $Q$. How to sharpen this observation to give more information about $\ell$ will be the main focus of this paper. In \cite{wang2013EJC} and \cite{wang2017JCTB}, the author was able to establish the following results. \begin{lemma}[Wang~\cite{wang2013EJC}]\label{oddwang} Let $Q\in {\mathcal{Q}(G)}$ with level $\ell$, and let $p$ be an odd prime. If $p\mid \det W(G)$ and $p^2\nmid \det W(G)$, then $p\nmid \ell$. \end{lemma} \begin{lemma}[Wang~\cite{wang2017JCTB}]\label{evenwang} Let $Q\in {\mathcal{Q}(G)}$ with level $\ell$. Suppose that $2^{\lfloor n/2\rfloor}\mid\det W(G)$ and $2^{\lfloor n/2\rfloor+1}\nmid\det W(G)$. Then $\ell$ is odd. \end{lemma} \begin{rem} In~\cite{wang2013EJC}, the author proved Lemma~\ref{oddwang} using the following strategy: Let $Q\in {\mathcal{Q}(G)}$ with level $\ell$. Suppose that $p$ is an odd prime and $p\mid \ell$. Then the author was able to show that the congruence equation $W^{\T}x\equiv 0~({\rm mod}~p^2)$ has a non-trivial solution $x\not\equiv 0~({\rm mod}~p)$, which implies $p^2\mid \det W(G)$. This contradicts the assumption of the lemma. The strategy used in~\cite{wang2017JCTB} to prove Lemma~\ref{evenwang} is too involved to describe here; we refer the reader to the original paper. \end{rem} Now suppose that $G$ is a graph such that $\frac{\det W(G)}{2^{\lfloor n/2\rfloor}}$ is odd and square-free. Let $Q\in {\mathcal{Q}(G)}$ with level $\ell$. Then, combining the above two lemmas, we have $\ell=1$; hence $Q$ is a permutation matrix and $G$ is DGS. Thus Theorem~\ref{thm1} follows. Unlike the methods in~\cite{wang2013EJC} and \cite{wang2017JCTB}, in this paper we shall give a stronger version of Theorem~\ref{thm1} using a totally different approach. We use $\rank_{p} X$ throughout to denote the rank of an integral matrix $X$ over the finite field $\mathbb{F}_p$. We need the following lemma. \begin{lemma}[Qiu et al.~\cite{QiuJiWangQ}]\label{rank} Let $r=\rank_{p}W(G)$. Then the first $r$ columns of $W(G)$ are linearly independent over $\mathbb{F}_{p}$. \end{lemma} By Lemma~\ref{rank}, we know that $A^{r}e$ can be uniquely expressed as a linear combination of $e,Ae,\ldots,A^{r-1}e$ over $\mathbb{F}_{p}$; in other words, there exist $r$ integers $a_{0},a_{1},\ldots,a_{r-1}$ such that $a_{0}e+a_{1}Ae+\cdots+a_{r-1}A^{r-1}e+A^{r}e=0$ over $\mathbb{F}_{p}$. Now we introduce a new matrix $M=M(G)$, which plays a critical role in proving our main theorem (Theorem~\ref{Main}). \begin{Defi}\label{defM} For a graph $G$ with adjacency matrix $A$, define $M=M(G):=a_{0}I+a_{1}A+\cdots+a_{r-1}A^{r-1}+A^{r}$. 
\end{Defi} \begin{rem} \textup{We would like to mention that a similar matrix $M$ was considered in Wang~\cite{Wangan} over $\mathbb{F}_{2}$.} \end{rem} \begin{Defi} With the above $M$, we introduce two matrices $\bar{W}(G)$ and $\hat{W}(G)$ as follows: $$\bar{W}=\bar{W}(G):=[e,Ae,\ldots,A^{r-1}e,Me,AMe,\ldots,A^{n-r-1}Me],$$ and $$\hat{W}=\hat{W}(G):=[e,Ae,\ldots,A^{r-1}e,\frac{Me}p,\frac{AMe}p,\ldots,\frac{A^{n-r-1}Me}p].$$ \end{Defi} Clearly $\hat{W}$ is an integral matrix since $Me\equiv~0~({\rm mod}~p)$. The following lemma gives a relationship between the SNF of $W$ and that of $\hat{W}$, which generalizes a result on Eulerian graphs in~\cite{QJWEulerian}. \begin{lemma}\label{lSNF} Let $p$ be a prime. Suppose that the ${\rm SNF}$ of $W$ is \begin{equation}\label{SNF1} \textup{diag}(d_1,d_2,\ldots,d_r,d_{r+1},\ldots,d_n), \end{equation} where $p\nmid d_r$ and $p\mid d_{r+1}$. Then the ${\rm SNF}$ of $\hat{W}$ is $\diag(d_1,d_2,\ldots,d_r,\frac{d_{r+1}}p,\ldots,\frac{d_n}p).$ \end{lemma} \begin{proof} Note that $\bar{W}$ can be obtained from $W$ by applying a series of elementary column operations. Hence $\bar{W}$ and $W$ have the same SNF. Thus, there exist two unimodular matrices $\bar{U}$ and $\bar{V}$ such that $\bar{W}=\bar{U}N\bar{V}$, where $N$ is as shown in Eq.~\eqref{SNF1}. It follows that \begin{equation}\label{eqeven2} \begin{split} \bar{U}^{-1}\bar{W} &=\diag(d_1,d_2,\ldots,d_r,d_{r+1},\ldots,d_n)\bar{V}\\ &= \left( \begin{array}{cc} \Delta & \\ & p\Lambda \\ \end{array} \right)\left( \begin{array}{cc} V_{1} & V_{2} \\ V_{3} & V_{4} \\ \end{array} \right) \\ &=\left(\begin{array}{cc} \Delta V_{1} & \Delta V_{2} \\ p\Lambda V_{3} & p\Lambda V_{4} \\ \end{array} \right), \end{split} \end{equation} where $\Delta=\diag(d_1,d_2,\ldots,d_r)$ is a diagonal matrix of order $r$, $\Lambda=\diag(\frac{d_{r+1}}p,\ldots,\frac{d_n}p)$ is a diagonal matrix of order $n-r$, and $\bar{V}=\left( \begin{array}{cc} V_{1} & V_{2} \\ V_{3} & V_{4} \\ \end{array} \right)$ is the corresponding matrix partition of $\bar{V}$. Then Eq.~\eqref{eqeven2} can be rewritten as $$[\bar{U}^{-1}e,\ldots,\bar{U}^{-1}A^{r-1}e,\bar{U}^{-1}Me,\ldots,\bar{U}^{-1}A^{n-r-1}Me] =\left(\begin{array}{cc} \Delta V_{1} & \Delta V_{2} \\ p\Lambda V_{3} & p\Lambda V_{4} \\ \end{array} \right),$$ and hence $$[\bar{U}^{-1}e,\ldots,\bar{U}^{-1}A^{r-1}e,\frac{\bar{U}^{-1}Me}p,\ldots,\frac{\bar{U}^{-1}A^{n-r-1}Me}p] =\left(\begin{array}{cc} \Delta V_{1} & \frac{\Delta V_{2}}{p} \\ p\Lambda V_{3} & \Lambda V_{4} \\ \end{array} \right),$$ i.e., \begin{equation}\label{eqeven3} \bar{U}^{-1}\hat{W}(G)=\left(\begin{array}{cc} \Delta V_{1} & \frac{\Delta V_{2}}{p} \\ p\Lambda V_{3} & \Lambda V_{4} \\ \end{array} \right)=\left(\begin{array}{cc} \Delta & O \\ O & \Lambda \end{array}\right) \left(\begin{array}{cc} V_{1} & \frac{V_{2}}{p} \\ p V_{3} & V_{4} \end{array}\right). \end{equation} Let $\bar{V}^{'}=\left(\begin{array}{cc} V_{1} & \frac{V_{2}}{p} \\ p V_{3} & V_{4} \\ \end{array}\right)$. By the first equality of Eq.~\eqref{eqeven3}, we get that $\frac{\Delta V_{2}}{p}$ is an integral matrix, and so is $\frac{V_{2}}{p}$, since $p\nmid d_i$ for $i=1,2,\ldots,r$. Thus, $\bar{V}^{'}$ is an integral matrix. Moreover, taking determinant on both sides of Eq.~\eqref{eqeven3} gives $\det \bar{V}^{'}=\pm 1$. Therefore, $\bar{V}^{'}$ is a unimodular matrix. Then the lemma follows immediately from the second equality of Eq.~\eqref{eqeven3}. \end{proof} \subsection{Main Results} The main result of this paper is the following theorem. 
\begin{them}\label{Main} Let $G$ be a graph with $\det W(G)\neq0$. Let $d_n=d_n(W(G))$ be the $n$-th invariant factor of $W(G)$. Suppose that $Q\in{\mathcal{Q}_G}$ with level $\ell$. Then we have:\\ \noindent (i) For an odd prime $p$, if $\rank_pW(G) =n-1$, then $\ell\mid \frac{d_n}{p}$;\\ (ii) For $p=2$, if $\rank_2 W(G)=\lceil\frac{n}2\rceil$, then $\ell\mid \frac{d_n}{2}$. \end{them} As an immediate consequence, we have \begin{Cor}Set $r=\lceil\frac{n}2\rceil$. Suppose that the Smith Normal Form of $W(G)$ is as follows: $${\rm diag}(\underbrace{1,1,\ldots,1}_{r},\underbrace{2^{l_1},2^{l_2},\ldots,2^{l_{n-r-1}},2^{l_{n-r}}p_1^{m_1}p_2^{m_2}\cdots p_s^{m_s}}_{n-r}),$$ where $p_i$'s are distinct odd primes for $1\leq i\leq s$. Suppose that $Q\in{\mathcal{Q}_G}$ with level $\ell$. Then $\ell\mid 2^{l_{n-r}-1}p_1^{m_1-1}p_2^{m_2-1}\cdots p_s^{m_s-1}$. \end{Cor} \begin{Cor}[Wang~\cite{W1,wang2017JCTB}] If $\frac{\det W(G)}{2^{\lfloor n/2 \rfloor}}$ is odd and square-free, then $G$ is DGS. \end{Cor} \begin{proof} It follows from \cite{wang2017JCTB} that $\frac{\det W(G)}{2^{\lfloor n/2 \rfloor}}$ is odd and square-free if and only if the SNF of $W(G)$ is as follows: $${\rm diag}(\underbrace{1,1,\ldots,1}_{\lceil\frac{n}{2}\rceil},\underbrace{2,2,\ldots,2,2p_1p_2\cdots p_s}_{\lfloor\frac{n}{2}\rfloor}),$$ where $p_i$'s are distinct odd primes. Let $Q\in{\mathcal{Q}_G}$ with level $\ell$. By Corollary 1, we have $\ell \mid 2^{0}p_1^{0}p_{2}^0\cdots p_s^{0}$, i.e., $\ell\mid 1$. Thus, $\ell=1$ and $Q$ is a permutation matrix. By Lemma~\ref{good}, $G$ is DGS. \end{proof} \section{Proof of Theorem~\ref{Main}} In this section, we shall present the proof of Theorem~\ref{Main}. Before doing so, we shall describe our main strategy, as follows. \subsection{Main ideas} Let $G$ and $H$ be two generalized cospectral graphs. For simplicity, we write $A=A(G)$ and $B=A(H)$. Let $p$ be a prime and $r=\rank_p W(G)$. Let $Q\in{\mathcal{Q}_G}$ be a regular rational orthogonal matrix such that $Q^{T}AQ=B$ and $Qe=e$, which imply $Q^{\T}A^ke=B^ke$ for $k=0,1,\ldots,n-1$, i.e., \begin{gather} \nonumber Q^{\T}e=e,\\\nonumber Q^{\T}Ae=Be,\\\nonumber \vdots\\\label{EM} Q^{\T}A^{r-1}e=B^{r-1}e,\\ \nonumber Q^{\T}A^{r}e=B^{r}e,\\\nonumber \vdots\\\nonumber Q^{\T}A^{n-1}e=B^{n-1}e.\nonumber \end{gather} Suppose that we can find some integers $a_0,a_1,\ldots,a_{r-1}$ such that \begin{eqnarray}\label{key1} \left\{ \begin{split} M(G)e=a_0e+a_1Ae+\cdots+a_{r-1}A^{r-1}e+A^{r}e\equiv 0~({\rm mod}~p),\\ M(H)e=a_0e+a_1Be+\cdots+a_{r-1}B^{r-1}e+B^{r}e\equiv 0~({\rm mod}~p). \end{split} \right. \end{eqnarray} Then multiplying both sides of the first to the $r$-th equations by $a_0, a_1,\ldots, a_{r-1}$ in Eq.~\eqref{EM}, respectively, and then adding them to the $(r+1)$-th equation generates that $Q^{\T}M(G)e=M(H)e$. Furthermore, it is easy to see that $$Q^{\T}A^{i}M(G)e=B^{i}M(H)e,$$ for $i=1,2,\ldots,n-r-1$. Let $$\hat{W}(G):=[e,Ae,\ldots,A^{r-1}e,\frac{M(G)e}p,\frac{AM(G)e}p,\ldots,\frac{A^{n-r-1}M(G)e}p]$$ and $$\hat{W}(H):=[e,Be,\ldots,B^{r-1}e,\frac{M(H)e}p,\frac{BM(H)e}p,\ldots,\frac{B^{n-r-1}M(H)e}p].$$ Then both $\hat{W}(G)$ and $\hat{W}(H)$ are integral matrices, and $Q^{\T}\hat{W}(G)=\hat{W}(H)$ still holds. Note that $d_n(\hat{W}(G))=\frac{d_n(W(G))}p$ according to Lemma~\ref{lSNF}. It follows from Lemma~\ref{rem1} that $\ell \mid \frac{d_n(W(G))}p$, and we are done! 
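To make the divisibility information provided by Theorem~\ref{Main} concrete, the following Python sketch reads off from the invariant factors of $W(G)$ the bound on the level $\ell$ given by Theorem~\ref{Main} and Corollary 1 (the helper name is ours; the last two lines apply it to the invariant factors of the $12$-vertex example of Section 4).
\begin{verbatim}
import sympy as sp

def level_bound(invariant_factors):
    """Divisor bound on the level l of any Q in Q(G), read off from the SNF of W(G):
    for p = 2 the theorem applies when rank_2 W = ceil(n/2), giving l | d_n/2;
    for an odd prime p it applies when rank_p W = n - 1, giving l | d_n/p.
    Primes for which the rank condition fails contribute their full power in d_n."""
    d = list(invariant_factors)
    n, dn = len(d), d[-1]
    bound = 1
    for p, mult in sp.factorint(dn).items():
        rank_p = sum(1 for di in d if di % p != 0)        # rank of W over F_p
        applicable = (rank_p == -(-n // 2)) if p == 2 else (rank_p == n - 1)
        bound *= p ** (mult - 1 if applicable else mult)
    return bound

# invariant factors reported for the 12-vertex example of Section 4
snf = [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2 * 5**2 * 1145387]
print(level_bound(snf))    # prints 5, i.e. l | 5, in accordance with Section 4
\end{verbatim}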
\begin{rem} We would like to mention that if Eq.~\eqref{key1} holds, then the null spaces of $W(G)$ and $W(H)$ coincide, or in notations, ${\rm Null}(W(G))={\rm Null} (W(H))$, over $\mathbb{F}_p$. Actually, let $\alpha_0=(a_0,a_1,\ldots,a_{r-1},1,0,\ldots,0)^{\T}$ and $$\alpha_i=(\underbrace{0,\ldots,0}_i,a_0,a_1,\ldots,a_{r-1},1,0,\ldots,0)^{\T},$$ for $i=0,1,\ldots,n-r-1$. Then it follows from the first equation in Eq.~\eqref{key1} that $W(G)\alpha_0=0$, and hence $W(G)\alpha_i=0$ for $i=1,\ldots,n-r-1$. Note that $\alpha_i$'s are linearly independent and thus form a basis of ${\rm Null}(W(G))$. The same is true for ${\rm Null}(W(H))$. \end{rem} So the primary focus of our proof is to show that there indeed exist some integers $a_0,a_1,\ldots,a_{r-1}$ such that Eq.~\eqref{key1} holds. This will be done in two cases: $p$ is an odd prime and $p=2$. \subsection{The case that $p$ is odd } In this subsection, we shall present the proof of Theorem~\ref{Main}~(i). For the case $p$ is an odd prime, we have $r={\rm rank}_pW(G)=n-1$. We need the following lemmas. \begin{lemma}\label{oddroot} Let $G$ and $H$ be two generalized cospectral graphs with walk matrices $W(G)$ and $W(H)$, respectively. Let $\phi(x)=x^n+c_{2}x^{n-2}+\cdots+c_{n-1}x+c_n$ be their common characteristic polynomial (note that $c_{1}=0$). Suppose that ${\rm rank}_p(W(G))={\rm rank}_p(W(H))=n-1$ and $\phi(x)\equiv 0~({\rm mod}~p)$ has no two distinct roots. Then under the condition of Theorem~\ref{Main}~(i), $W(G)x\equiv 0~({\rm mod}~p)$ and $W(H)x\equiv 0~({\rm mod}~p)$ have the same set of solutions. \end{lemma} \begin{proof} By Cayley-Hamilton's Theorem, we have \begin{equation}\label{ch} A^n+c_{2}A^{n-2}+\cdots+c_{n-1}A+c_nI=0. \end{equation} Right-multiplying by $e$ on both right sides of Eq.~\eqref{ch} and then reducing modulo $p$ gives that \begin{equation}\label{CH1} A^ne\equiv-c_{2}A^{n-2}e-\cdots-c_{n-1}Ae-c_ne~({\rm mod}~p). \end{equation} Suppose that $W(G)x\equiv 0~({\rm mod}~p)$ has a solution $(a_0,a_1,\ldots,a_{n-2},1)^T$, i.e., \begin{equation}\label{Eq1} A^{n-1}e\equiv-a_{n-2}A^{n-2}e-\cdots-a_1Ae-a_0e~({\rm mod}~p). \end{equation} Left-multiplying both sides of Eq.~\eqref{Eq1} by $A$ generates that \begin{equation}\label{Eq2} A^{n}e\equiv-a_{n-2}A^{n-1}e-\cdots-a_1A^2e-a_0Ae~({\rm mod}~p). \end{equation} Plugging Eq.~\eqref{Eq1} into Eq.~\eqref{Eq2} gives that \begin{eqnarray}\label{Eq3} \begin{split} A^{n}e&\equiv&(-a_{n-3}+a_{n-2}^2)A^{n-2}e+(-a_{n-4}+a_{n-3}a_{n-2})A^{n-3}e+\cdots+\\ &&(-a_1+a_2a_{n-2})A^{2}e+(-a_0+a_1a_{n-2})Ae+a_0a_{n-2}e~({\rm mod}~p). \end{split} \end{eqnarray} It follows from Lemma~\ref{rank} that $e,Ae,\ldots,A^{n-2}e$ are linearly independent over $\mathbb{F}_p$. Comparing Eq.~\eqref{CH1} and Eq.~\eqref{Eq3}, we get that \begin{eqnarray}\label{EQQ} a_0a_{n-2}&=&-c_n,\nonumber\\ -a_0+a_1a_{n-2}&=&-c_{n-1},\nonumber\\ -a_1+a_2a_{n-2}&=&-c_{n-2},\nonumber \\ \vdots\\ -a_{n-4}+a_{n-3}a_{n-2}&=&-c_{3},\nonumber\\ -a_{n-3}+a_{n-2}^2&=&-c_{2}.\nonumber \end{eqnarray} Iterating the above equations from the bottom up, it is easy to verify that $\phi(a_{n-2})\equiv0~({\rm mod}~p)$. Similarly, suppose $W(H)x\equiv~0~({\rm mod}~p)$ has a solution $(b_0,b_1,\ldots,b_{n-2},1)^T$, we must have $\phi(b_{n-2})\equiv0~({\rm mod}~p)$. Note that every $a_i$ (resp. $b_i$) is uniquely determined by $a_{n-2}$ (resp. $b_{n-2}$) for $i=0,1,\ldots,n-3$. By the assumption that $\phi(x)\equiv 0~({\rm mod}~p)$ has no two distinct roots, we get $a_{n-2}=b_{n-2}$, and hence $a_i=b_i$ for $i=0,1,\ldots,n-3$. 
The proof is complete. \end{proof} \begin{rem} We give a straightforward explanation why $\phi(a_{n-2})=0$ holds, over $\mathbb{F}_p$. Let $C$ be the companion matrix of $\phi(x)$. Let $\eta=(a_0,a_1,a_2,\ldots,a_{n-2},1)^T$. Then it is easy to verify that Eq.~\eqref{EQQ} is equivalent to the following equation: \begin{eqnarray*} \left(\begin{array}{cccccc} a_{n-2}&0&0&\cdots&0&c_n\\ -1&a_{n-2}&0&\cdots&0&c_{n-1}\\ 0&-1&a_{n-2}&\cdots&0&c_{n-2}\\ \vdots&\vdots&\vdots&\ddots&0&\vdots\\ 0&0&0&\cdots&a_{n-2}&c_{2}\\ 0&0&0&\cdots&-1&a_{n-2} \end{array} \right) \left(\begin{array}{c} a_0\\ a_1\\ a_2\\ \vdots\\ a_{n-2}\\ 1 \end{array} \right)=O, \end{eqnarray*} i.e., $(a_{n-2}I-C)\eta=O$. Therefore, $a_{n-2}$ is an eigenvalue of $C$, i.e., $\phi(a_{n-2})=0$. \end{rem} Unfortunately, the assumption in Lemma~\ref{oddroot} that $\phi(x)\equiv 0~({\rm mod}~p)$ has no two distinct roots does not always hold. We shall get rid of this difficulty by considering the family of matrices $A+tJ$ and $B+tJ$ ($J$ is the all-one matrix) for all integers $t$ instead of $A$ and $B$. Now, let $$W_t(G):=[e,A_te,\ldots,A_t^{n-2}e,A_t^{n-1}e]~{\rm and}~ W_t(H):=[e,B_te,\ldots,B_t^{n-2}e,B_t^{n-1}e],$$ where $A_t=A+t J$, $B_t=B+t J$, and $t\in \mathbb{Z}$. Define $\phi(x,t)=\det(x I-A_t(G))$. By Lemma~\ref{oddroot}, we can easily get the following corollary. \begin{Cor} Suppose that $\phi(x,t_0)\equiv 0~({\rm mod}~p)$ has no two distinct roots for some integer $t_0$. Suppose that ${\rm rank}_p(W(G))={\rm rank}_p(W(H))=n-1$. Then $W_{t_0}(G)x\equiv 0~({\rm mod}~p)$ and $W_{t_0}(H)x\equiv 0~({\rm mod}~p)$ have the same set of solutions. \end{Cor} \begin{proof}It follows from $Q^{\T}AQ=B$ and $Qe=e$ that $Q^{\T}A_tQ=B_t$. Then replace respectively $A$ and $B$ with $A_t$ and $B_t$ in the proof of Lemma~\ref{oddroot}, it is easy to see the corollary holds. \end{proof} \begin{lemma}\label{lemodd} Under the conditions of Theorem~\ref{Main}, suppose that $p\mid\ell$, then there exists an integer $t_0$ such that $\phi(x,t_0)\equiv 0~({\rm mod}~p)$ has no two distinct roots. \end{lemma} \begin{proof} It is easy to see that $\phi(x,t)=(1+t)\phi(G,x)-(-1)^nt\phi(\bar{G},-1-x)$. Thus we may write $\phi(x,t)=\varphi(x)(\phi_1(x)+t\phi_2(x))$, where $\gcd(\phi_1(x),\phi_2(x))=1.$ Now we show that $\varphi(x)$ always has a factor $x-\lambda_0$ for some $\lambda_0\in{\mathbb{F}_p}$. Actually, it follows from $Q^{\T}AQ=B$ and $Qe=e$ that $Q^{\T}W(G)=W(H)$, and hence $W^{\T}(G)(\ell Q)=\ell W^{\T}(H)\equiv~0~({\rm mod}~p).$ Note that ${\rm rank}_pW(G)=n-1$. It follows that ${\rm rank}_p \ell Q=1$. Let $z\not\equiv~0~({\rm mod}~p) $ be any column of $\ell Q$. Then it follows from $AQ=QB$ that $Az\equiv~\lambda_0z~({\rm mod}~p) $ for some integer $\lambda_0$. Further note that $e^{\T}z\equiv~0~({\rm mod}~p)$, we have $A_tz=(A+tJ)z\equiv~\lambda_0z~({\rm mod}~p)$ for any $t\in\mathbb{Z}$. Therefore, we have $(x-\lambda_0)\mid \phi(x,t)$, over $\mathbb{F}_p$. Thus, $(x-\lambda_0)\mid \varphi(x)$, since $\gcd(\phi_1(x),\phi_2(x))=1$. Now we claim, suppose that $\phi_1(x)+t\phi_2(x)\equiv~0~({\rm mod}~p)$ has a root $x_t\in {\mathbb{F}_p}$ for every $t\in{\{0,1,\ldots,p-1\}}$, we must have $x_i\neq x_j$ for $i\neq j$. For otherwise, suppose that $\phi_1(x)+t_1\phi_2(x)\equiv~0~({\rm mod}~p)$ and $\phi_1(x)+t_2\phi_2(x)\equiv~0~({\rm mod}~p)$ have a common root $\hat{x}$, for $t_1\neq t_2$. It is easy to see $(x-\hat{x})\mid \phi_1(x)$ and $(x-\hat{x})\mid \phi_2(x)$; a contradiction. 
Therefore, there must exist a $t_0\in{\mathbb{F}_p}$ such that $\phi_1(x)+t_0\phi_2(x)\equiv~0~({\rm mod}~p)$ has a root $\lambda_0$ by the Pigeonhole Principle. Note that $\phi_1(x)+t_0\phi_2(x)\equiv~0~({\rm mod}~p)$ has no other root than $\lambda_0$ (otherwise it contradicts the above claim). It remains to show that $\varphi(x)$ has no root other than $\lambda_0$. This will be proved in the next lemma. Combining the above facts, whenever there is a $t\in{\mathbb{F}_p}$, for which $\phi_1(x)+t\phi_2(x)\equiv~0~({\rm mod}~p)$ has no root, we are done! When $\phi_1(x)+t\phi_2(x)\equiv~0~({\rm mod}~p)$ has a root for every $t\in{\mathbb{F}_p}$, still the lemma is true. This completes the proof. \end{proof} \begin{lemma}\label{lemodd1} Under the condition of Lemma~\ref{lemodd}, $\varphi(x)$ has no roots other than $\lambda_0$. \end{lemma} \begin{proof}Suppose $\varphi(\lambda_1)\equiv~0~({\rm mod}~p)$ with $\lambda_1\neq \lambda_0$. Then for every $t=0,1,\ldots,p-1$, there exists a $\beta_t$ such that $(A+tJ)\beta_t=\lambda_1\beta_t$. It follows that $(\lambda_1I-A)\beta_t=t(e^{\T}\beta_t)e$. Clearly $e^{\T}\beta_t\neq 0$, for otherwise, we have $W_t^{\T}(G)\beta_t=0$. As $e^{\T}z\equiv~0~({\rm mod}~p)$, we get that $\beta_t$ and $z$ are linearly independent. Thus we get a contradiction since ${\rm rank}_p(W_t)=n-1$. For $t=0$, we have $(\lambda_1I-A)\beta_0=0$. For $t=1$, we have \begin{equation}\label{EE} (\lambda_1I-A)\beta_1=(e^{\T}\beta_1)e. \end{equation} Left-multiplying both sides of Eq.~\eqref{EE} by $\beta_0^{\T}$ we get $$0=\beta_0^{\T}(\lambda_1I-A)\beta_1=(e^{\T}\beta_1)(e^{\T}\beta_0),$$ which contradicts the fact that $e^{\T}\beta_0\neq 0$ and $e^{\T}\beta_1\neq0.$ This completes the proof. \end{proof} Now, we are ready to present the proof of Theorem~\ref{Main}~(i). \begin{proof}[Proof of Theorem~\ref{Main}~(i)] If $p\nmid \ell$, then it is obviously that $\ell\mid d_n(W(G))/p$ according to Lemma~\ref{rem1}. So we assume $p\mid \ell$ henceforth. First suppose that ${\rm rank}_pW(H)<n-1$. Let $d_n(W(G))=p^{t}b$ and $p\nmid b$, $d_n(W(H))=p^{t'}b'$ and $p\nmid b'$. Then $t'<t$. By Lemma~\ref{rem1}, we have $\ell\mid\gcd(p^{t}b,p^{t'}b')$. Note that $\gcd(p^{t}b,p^{t'}b')\mid p^{t'}b$. Thus, we have $\ell \mid p^{t-1}b=d_n(W(G))/p$. Next, it needs only to consider the case that ${\rm rank}_pW(H)=n-1$, and hence ${\rm rank}_pW(G)={\rm rank}_pW(H)=n-1$. It follows from $Q^{\T}AQ=B$ and $Q^{\T}e=e$ that $Q^{\T}A_tQ=B_t$. Thus, we have $Q^{\T}A_t^ke=B_t^ke$ for $k=0,1,\ldots,n-1$. By Lemma~\ref{lemodd}, there exists a $t_0\in{\mathbb{F}_p}$ such that $\phi(x,t_0)$ has no two distinct roots over $\mathbb{F}_p$. Thus, there exists an integral vector $\eta:=(a_0,a_1,\ldots,a_{n-2},1)^T$ such that $$W_{t_0}(G)\eta\equiv 0~({\rm mod}~p)~{\rm and}~ W_{t_0}(H)\eta \equiv 0~({\rm mod}~p).$$ Let $$M_{t_0}(G)e:=a_0e+a_1A_{t_0}e+\cdots+a_{n-2}A_{t_0}^{n-2}e+A_{t_0}^{n-1}e$$ and $$M_{t_0}(H)e:=a_0e+a_1B_{t_0}e+\cdots+a_{n-2}B_{t_0}^{n-2}e+B_{t_0}^{n-1}e.$$ Then we have $Q^{\T} M_{t_0}(G)e=M_{t_0}(H)e$. Define $$\hat{W}_{t_0}(G):=[e,A_{t_0}e,\ldots,A_{t_0}^{n-2}e,M_{t_0}(G)e/p]$$ and $$\hat{W}_{t_0}(H):=[e,B_{t_0}e,\ldots,B_{t_0}^{n-2}e,M_{t_0}(H)e/p].$$ Then $Q^{\T}\hat{W}_{t_0}(G)=\hat{W}_{t_0}(H)$. Note that $d_{n}(\hat{W}_{t_0}(G))=\frac{d_n(W(G))}p$ according to Lemma~\ref{lSNF}. We have $\ell\mid \frac{d_n(W(G))}p$ according to Lemma~\ref{rem1}. This completes the proof of Theorem~\ref{Main}~(i). 
\end{proof} \subsection{The case $p=2$} In this subsection, we present the proof of Theorem~\ref{Main}~(ii). For the case $p=2$, we have $r={\rm rank}_2W(G)=\lceil\frac{n}2\rceil$. We shall show that Eq.~\eqref{key1} holds, with a solution that can be written down explicitly in terms of the coefficients of the characteristic polynomial of $G$. We need the following lemmas. \begin{lemma}[Wang~\cite{wang2017JCTB}]\label{leM} Let $S$ be an integral symmetric matrix. If $S^2\equiv0\pmod2$, then $Se\equiv0\pmod2$. \end{lemma} \begin{lemma}[Wang~\cite{wang2017JCTB}]\label{sachs} Let $\phi(x)=x^{n}+c_{1}x^{n-1}+\cdots+c_{n-1}x+c_{n}$ be the characteristic polynomial of graph $G$. Then $c_{i}$ is even when $i$ is odd. \end{lemma} \begin{lemma}[Wang~\cite{wang2017JCTB}]\label{evenM} Let $\phi(x)=x^{n}+c_{1}x^{n-1}+\cdots+c_{n-1}x+c_{n}$ be the characteristic polynomial of the graph $G$. Let \begin{equation*} M=M(G):=\begin{cases} &A^{\lceil\frac{n}{2}\rceil}+c_{2}A^{\lceil\frac{n-2}{2}\rceil}+\cdots+c_{n-2}A+c_{n}I, ~\mbox{if $n$ is even};\\ &A^{\lceil\frac{n}{2}\rceil}+c_{2}A^{\lceil\frac{n-2}{2}\rceil}+\cdots+c_{n-3}A^2+c_{n-1}A, ~\mbox{if $n$ is odd}. \end{cases} \end{equation*} Then $Me\equiv0\pmod2$. \end{lemma} \begin{proof} In view of the importance of the lemma, we shall give a short proof here for completeness. We only prove the lemma for the case that $n$ is even; the case that $n$ is odd can be proved in a similar way. By Cayley-Hamilton's Theorem, we get \begin{equation}\label{eqM1} A^{n}+c_{1}A^{n-1}+\cdots+c_{n-1}A+c_{n}I=0. \end{equation} By Lemma~\ref{sachs}, $c_{i}$ is even when $i$ is odd. Then we can rewrite Eq.~\eqref{eqM1} as \begin{equation}\label{eqM2} A^{n}+c_{2}A^{n-2}+\cdots+c_{n-2}A^{2}+c_{n}I\equiv0\pmod2. \end{equation} Note that $M=A^{\frac{n}{2}}+c_{2}A^{\frac{n-2}{2}}+\cdots+c_{n-2}A+c_{n}I$ when $n$ is even; hence \begin{eqnarray*} M^{2} &=& (A^{\frac{n}{2}}+c_{2}A^{\frac{n-2}{2}}+\cdots+c_{n-2}A+c_{n}I)^{2} \\ &\equiv& A^{n}+c_{2}^{2}A^{n-2}+\cdots+c_{n-2}^{2}A^{2}+c_{n}^{2}I\\ &\equiv&A^{n}+c_{2}A^{n-2}+\cdots+c_{n-2}A^{2}+c_{n}I \\ &\equiv&0\pmod2. \end{eqnarray*} Note that $M$ is an integral symmetric matrix; hence, according to Lemma~\ref{leM}, $Me\equiv0\pmod2$. The proof is complete. \end{proof} As an immediate consequence of Lemma~\ref{evenM}, we have \begin{Cor}\label{evencase} Let $G$ and $H$ be two generalized cospectral graphs. Then there exist integers $a_0, a_1,\ldots,a_{r-1}$ such that Eq.~\eqref{key1} holds for $p=2$. \end{Cor} Indeed, since generalized cospectral graphs share the same characteristic polynomial, the coefficients of $M$ in Lemma~\ref{evenM} provide a common choice of $a_0,a_1,\ldots,a_{r-1}$ that works simultaneously for $G$ and $H$. Now, we are ready to present the proof of Theorem~\ref{Main}~(ii). \begin{proof}[Proof of Theorem~\ref{Main} (ii)] Since $G$ and $H$ have the same generalized spectrum, by Lemma~\ref{lortho}, we have $Q^{\T}A(G)Q=A(H)$ and $Qe=e$. It follows that \begin{gather*} Q^{\T}e=e, \\ Q^{\T}Ae=Be,\\ \vdots \\ Q^{\T}A^{r-1}e=B^{r-1}e,\\ Q^{\T}A^{r}e=B^{r}e. \end{gather*} It follows from Corollary~\ref{evencase} that Eq.~\eqref{key1} holds. Multiplying the first through the $r$-th equations by $a_0, a_1,\ldots,a_{r-1}$, respectively, and adding them to both sides of the $(r+1)$-th equation gives $Q^{\T}M(G)e=M(H)e$. Furthermore, it is easy to verify that \begin{equation*} Q^{\T}A^{i}M(G)e=B^{i}M(H)e,~ \mbox{for} ~i=1,2,\ldots,\left\lfloor n/2\right\rfloor-1. 
\end{equation*} Thus, $Q^{\T}\bar{W}(G)=\bar{W}(H)$, where $$\bar{W}(G)=[e,Ae,\ldots,A^{\lceil\frac{n}{2}\rceil-1}e,M(G)e,AM(G)e,\ldots,A^{\lfloor\frac{n}{2}\rfloor-1}M(G)e]$$ and $$\bar{W}(H)=[e,Be,\ldots,B^{\lceil\frac{n}{2}\rceil-1}e,M(H)e,BM(H)e,\ldots,B^{\lfloor\frac{n}{2}\rfloor-1}M(H)e].$$ By Lemma~\ref{evenM}, we have $M(G)e\equiv M(H)e\equiv0\pmod2$. Dividing the last $\lfloor\frac{n}{2}\rfloor$ columns of $\bar{W}(G)$ and $\bar{W}(H)$ by $2$ simultaneously, we obtain \begin{equation}\label{eqeven1} Q^{\T}\hat{W}(G)=\hat{W}(H), \end{equation} where $$\hat{W}(G)=[e,Ae,\ldots,A^{\lceil\frac{n}{2}\rceil-1}e,\frac{M(G)e}{2},\frac{AM(G)e}{2},\ldots,\frac{A^{\lfloor\frac{n}{2}\rfloor-1}M(G)e}{2}]$$ and $$\hat{W}(H)=[e,Be,\ldots,B^{\lceil\frac{n}{2}\rceil-1}e,\frac{M(H)e}{2},\frac{BM(H)e}{2},\ldots,\frac{B^{\lfloor\frac{n}{2}\rfloor-1}M(H)e}{2}].$$ It follows from Lemma~\ref{lSNF} that $\hat{d}_{n}=\frac{d_n}2$. By Lemma~\ref{rem1}, we get that $\ell\mid \frac{d_{n}}{2}$. This completes the proof. \end{proof} Finally, combining the proofs of Theorem~\ref{Main} (i) and Theorem~\ref{Main} (ii), Theorem~\ref{Main} follows. \section{An example} In this section, we give an example to illustrate the power of Theorem~\ref{Main}. Let $G$ be a graph with adjacency matrix $A(G)$ given as follows:
$$A(G)={\tiny{ \left( \begin{array}{cccccccccccc}
0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 \\
0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 \\
0 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 \\
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\
0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\
\end{array} \right) _{12\times 12}}}.$$
It is easy to compute using Mathematica 12.0 that the SNF of $W(G)$ is $${\rm diag}(\underbrace{1,1,1,1,1,1}_6,\underbrace{2,2,2,2,2,2\times 5^2\times 1145387}_6).$$ Then for any $Q\in{\mathcal{Q}(G)}$ with level $\ell$, we have $\ell\mid 5$ according to Theorem~\ref{Main}. Therefore $\ell=1$ or $\ell=5$; in particular, the case $\ell=25$ is impossible by Theorem~\ref{Main}.
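As an independent check of this computation (a sketch we include only for illustration; it assumes SymPy is installed and that its \texttt{smith\_normal\_form} routine is available in the installed version), one can rebuild the walk matrix $W(G)=[e,A(G)e,\ldots,A(G)^{11}e]$ from the adjacency matrix displayed above and verify the reported determinant and Smith normal form:
\begin{verbatim}
# Sketch: rebuild the walk matrix W(G) of the 12-vertex example and check its SNF.
from sympy import Matrix, ZZ, ones, factorint
from sympy.matrices.normalforms import smith_normal_form

rows = ["010101000000", "101111111011", "010010010100", "110000101100",
        "011000101111", "110000000100", "010110001000", "011000000101",
        "010110100010", "001111010000", "010010001001", "010010010010"]
A = Matrix([[int(c) for c in r] for r in rows])    # A(G) as displayed above
e = ones(12, 1)
W = Matrix.hstack(*[A**k * e for k in range(12)])  # walk matrix [e, Ae, ..., A^11 e]

# |det W| equals the product of the invariant factors, i.e. 2^6 * 5^2 * 1145387,
# if the SNF reported above is correct.
print(abs(W.det()) == 2**6 * 5**2 * 1145387)
print(smith_normal_form(W, domain=ZZ))
\end{verbatim}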
Actually, we can find a regular rational orthogonal matrix $Q\in{\mathcal{Q}(G)}$ with level 5, which is given as follows (we refer the interested reader to \cite{WangWangYu} for the details):
$$Q={\tiny{\frac{1}{5} \left( \begin{array}{cccccccccccc}
2 & 2 & -1 & -1 & 1 & 1 & 3 & -2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 5 & 0 & 0 & 0 \\
2 & 2 & -1 & -1 & 1 & 1 & -2 & 3 & 0 & 0 & 0 & 0 \\
3 & -2 & 1 & 1 & -1 & -1 & 2 & 2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 5 & 0 & 0 \\
1 & 1 & 2 & 2 & 3 & -2 & -1 & -1 & 0 & 0 & 0 & 0 \\
-1 & -1 & 3 & -2 & 2 & 2 & 1 & 1 & 0 & 0 & 0 & 0 \\
-1 & -1 & -2 & 3 & 2 & 2 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 5 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 5 \\
-2 & 3 & 1 & 1 & -1 & -1 & 2 & 2 & 0 & 0 & 0 & 0 \\
1 & 1 & 2 & 2 & -2 & 3 & -1 & -1 & 0 & 0 & 0 & 0 \\
\end{array} \right)}}.$$
Then, it is easy to verify that $A'=Q^{\T}AQ$ is a (0,1)-matrix given as follows:
$$A'=Q^{\T}AQ={\tiny{\left( \begin{array}{cccccccccccc}
0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 0 \\
0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 \\
0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 \\
1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\
\end{array} \right)_{12\times 12}}}.$$
Thus $A'$ is the adjacency matrix of a graph $G'$ which is generalized cospectral with $G$ but non-isomorphic to $G$. It can be further proved that, up to isomorphism, the graph $G'$ is the \emph{only} graph that is generalized cospectral with $G$ but non-isomorphic to $G$; see~\cite{WangWangYu}. \section{Conclusions} In this paper, we have presented a new method which results in a stronger version of a theorem of Wang~\cite{wang2017JCTB}. The new proof is quite straightforward and much easier to follow than the original ones in~\cite{wang2013EJC,wang2017JCTB}, and it gives a new framework for dealing with the problem of generalized spectral characterizations of graphs. As future work, we would like to explore the method in more general situations. For example, for $p=2$ we require in Theorem~\ref{Main}~(ii) that $\rank_2 W=\lceil\frac{n}2\rceil$ holds; however, we suspect that this condition can be removed. Also, Theorem~\ref{Main}~(i) requires that $\rank_pW =n-1$ for odd primes $p$. It would be interesting to consider the general case $\rank_pW =r$ for $r\geq 1$.
{ "timestamp": "2021-08-03T02:27:11", "yymm": "2108", "arxiv_id": "2108.00592", "language": "en", "url": "https://arxiv.org/abs/2108.00592", "abstract": "Spectral characterization of graphs is an important topic in spectral graph theory, which has received a lot of attention from researchers in recent years. It is generally very hard to show a given graph to be determined by its spectrum. Recently, Wang [10] gave a simple arithmetic condition for graphs being determined by their generalized spectra. Let $G$ be a graph with adjacency matrix $A$ on $n$ vertices, and $W=[e,Ae,\\ldots,A^{n-1}e]$ ($e$ is the all-one vector) be the walk-matrix of $G$. A theorem of Wang [10] states that if $2^{-\\lfloor n/2\\rfloor}\\det W$ (which is always an integer) is odd and square-free, then $G$ is determined by the generalized spectrum. In this paper, we find a new and short route which leads to a stronger version of the above theorem. The result is achieved by using the Smith Normal Form of the walk-matrix of $G$. The proposed method gives a new insight in dealing with the problem of generalized spectral characterization of graphs.", "subjects": "Combinatorics (math.CO)", "title": "Smith Normal Form and the Generalized Spectral Characterization of Graphs", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575116884778, "lm_q2_score": 0.7217432062975979, "lm_q1q2_score": 0.7091542088578315 }
https://arxiv.org/abs/1612.02722
Area bounds for minimal surfaces in geodesic ball of hyperbolic space
In hyperbolic space $H^n$ we fix a geodesic ball of radius $\rho$. Consider a $k$ dimensional minimal submanifold passing through the origin of the geodesic ball whose boundary lies on the boundary of that geodesic ball. We prove that its area is no less than that of the totally geodesic $k$ dimensional submanifold passing through the origin in that geodesic ball. This is a partial generalization of the corresponding problem in $R^n$.
\section{Introduction} This article proves a sharp lower bound on the area of $k$ dimensional minimal surfaces passing through the origin of the geodesic ball $B_p(\rho_0)$ in standard hyperbolic space $H^n$ with boundary on $\partial B_p(\rho_0)$. Fix a point $p \in H^n$ and a positive number $\rho_0$. Denote the geodesic ball in $ H^n $ with center $p$ and radius $\rho_0$ by $B_p(\rho_0)$. Consider the $k$ dimensional plane $P$ passing through the origin in $T_p H^n$. Let $N_0=\textnormal{exp}_p(P) \cap B_p(\rho_0)$. Then $N_0$ is a totally geodesic surface and thus a minimal surface in $H^n$ since its second fundamental form vanishes. Now we can state the theorem: \newtheorem{thm}{Theorem} \newtheorem{lem}{Lemma} \begin{thm} Suppose $M$ is a $k$ dimensional minimal surface in $H^n$ passing through $p$ whose boundary $\partial M$ lies on $\partial B_p(\rho_0)$. Then the area of $M$ is no less than the area of $N_0$, i.e. $|M|\geq|N_0|$. \end{thm} This is a generalization of a result on the area bound of a minimal surface passing through a fixed point $P$ of a ball in $R^n$ with boundary on the boundary of the ball. If $P=O$ is the origin of the ball, then by the monotonicity formula the area is no less than that of a plane through the origin. It has also been proved recently in \cite{simonbrendlepeikenhung.{2016}} that if $P\neq O$, the area is no less than the area of the plane passing through $P$, perpendicular to $OP$. The technique used there was to find a suitable vector field and apply the first variation formula. Inspired by this, we can proceed to Theorem 1. In fact, the vector field we eventually derive resembles the gradient of the Green function of $B_p(\rho_0)$ (with the power changed). \section{Proof of the theorem} Let $|N_0|=\omega$. We shall find a vector field $W$ defined on $H^n \backslash \{p\}$ that satisfies the following three conditions: \begin{enumerate} \item $W$ vanishes on $\partial B_p(\rho_0)$; \item $|W(x)| \sim \frac{\omega r^{-(k-1)}}{\omega_{k-1}}$ as $x \rightarrow p$, where $r=r(x,p)$ is the distance between $p$ and $x$ and $\omega_{k-1}$ is the area of the $(k-1)$ dimensional unit sphere; \item $ \sum_{i=1}^{k} \langle D_{\tau _i}W, \tau_i \rangle \leq 1 $, where $D$ is the covariant derivative for $H^n$ and $\{\tau_i , i=1,2,...,k\} $ are $k$ orthonormal vectors in $T_qH^n$, for any point $q \in H^n \backslash \{p\} $. \end{enumerate} We now turn our attention to rotationally symmetric vector fields. Choose geodesic polar coordinates centered at the point $p$. More explicitly, let $\{\Theta \in S^{n-1} , r \}$ be canonical polar coordinates for $T_pH^n$, and use the exponential map to transfer them to coordinates on $H^n\backslash \{p\}$. By the Gauss lemma, $\frac{\partial}{\partial r} = \nabla r $ is the unit vector perpendicular to $\partial B_p(r)$. For simplicity, let $\partial_r = \frac{\partial}{\partial r}$. Let $f=f(r)$ be a smooth function defined on $H^n$, depending only on the distance to $p$, and let $W=f(r)\partial_r$. We have the following lemma. \begin{lem} Let $q\in H^n$, $q \neq p$. For orthonormal vectors $\{\tau_i, i=1,2,...,k\} \subset T_qH^n$ and $r=r(q,p)$, we have $\sum_{i=1}^{k} \langle D_{\tau_i}W, \tau_i \rangle = kf \coth r + (f'-f \coth r) \cdot |(\partial_r)^T|^2$, where $(\partial_r)^T$ is the orthogonal projection of $\partial_r$ onto the subspace spanned by $\{\tau_i\}$, i.e. $(\partial_r)^T = \sum_{i=1}^{k}\langle \partial_r , \tau_i \rangle \tau_i $. \end{lem} Proof: We use the metric form $ds^2=dr^2+(\sinh r)^2 \ d\Omega^2$ for $H^n$, where $d \Omega ^2$ is the standard metric on the unit sphere $S^{n-1}$. Fix $r=r_0$.
For any point $q \neq p$ in $H^n$, let $\{e_i,i=1,2,...,n-1\}$ be an orthonormal basis for $T_q\partial B_p(r)$ and let $e_n=\partial_r$. Since the induced metric on $\partial B_p(r_0)$, i.e. the level set of $r$, is a spherically symmetric metric depending only on $r$, by symmetry we have $D_{e_i}\partial _r = \psi e_i$, $i=1,...,n-1$, where $\psi =\psi(r)$ is a function depending only on $r$; i.e., $D \partial_r$, as a linear transformation of the tangent space of the level set, is a multiple of the identity. Further, $ D_{\partial_r}\partial_r =0$. Hence the Laplacian $\Delta r = (n-1)\psi(r)$. However, it is well known that in a space of constant negative sectional curvature $-\lambda ^2$, with $r$ the distance function, $\Delta r = (n-1)\lambda \coth \lambda r $. Letting $\lambda=1$, we get $\psi(r)=\coth r$. Notice that $e_i(f)=0$ for $i=1,...,n-1$ since $f$ is constant on $\partial B_p(r)$. Then \[D_{e_i}W= f(r)D_{e_i}e_n = f (\coth r) e_i \ \ \ \ i=1,2,...,n-1\] \[D_{e_n}W=D_{e_n}f(r)e_n + fD_{e_n}e_n=f'(r)e_n \] Let $\{\tau_j, j=1,2,...,k\}$ be $k$ orthonormal vectors in $T_qH^n$ and write $\tau_j=\sum_{i=1}^{n}a_{ji}e_i$. Thus $A=(a_{ji})_{k \times n}$ is a matrix with $k$ orthonormal row vectors, and we have \[\sum_{j=1}^{k}\langle D_{\tau_j}W,\tau_j \rangle = kf \coth r + (f'-f \coth r)(\sum_{j=1}^{k}a_{jn}^2) \] On the other hand, \[(\sum_{j=1}^{k}a_{jn}^2) = \sum_{j=1}^{k}\langle e_n, \tau_j \rangle ^2 = |\sum_{j=1}^{k}\langle e_n, \tau_j \rangle \tau_j|^2=|(\partial_r)^T|^2\] So we have proven Lemma 1. \begin{lem} Let \[f(r)=(\sinh r)^{-(k-1)} \int_{\rho_0}^{r}(\sinh \rho)^{k-1} \mathrm d \rho\] then the vector field $W=f(r)\partial_r$ defined on $B_p(\rho_0) \backslash \{p\}$ satisfies all three conditions enumerated above. \end{lem} Proof: Since $f(\rho_0)=0$, $W$ vanishes on $\partial B_p(\rho_0)$. We can write \[f(r)=-C(\sinh r)^{-(k-1)}+(\sinh r)^{-(k-1)} \int_{0}^{r}(\sinh \rho)^{k-1} \mathrm d \rho \] where \[C= \int_{0}^{\rho_0}(\sinh \rho)^{k-1} \mathrm d \rho\] is a constant depending only on $\rho_0$. Then by direct computation we can check that the $f$ defined above solves the equation \[f'+(k-1)f \coth r =1\] with boundary condition $f(\rho_0)=0$. With the explicit form of the solution given above, we can see that $f\leq 0 $ on $(0,\rho_0]$. Thus $f'=1-(k-1)f \coth r >0$, and hence $f'-f\coth r >0 $ on $(0,\rho_0]$. Since $|\partial_r|=1$, we have $|(\partial_r)^T| \leq 1$, with equality iff $\partial_r $ lies in the subspace spanned by $\tau_1,...,\tau_k$. Together with Lemma 1, we have \begin{eqnarray*} \sum_{i=1}^{k} \langle D_{\tau_i}W, \tau_i \rangle &=& kf \coth r + (f'-f \coth r) \cdot |(\partial_r)^T|^2 \\ &\leq& kf \coth r + (f'-f \coth r) \cdot 1= f'+(k-1)f \coth r =1 \end{eqnarray*} Moreover, equality holds iff $\partial_r$ lies in the subspace spanned by $\tau_1,...,\tau_k$. Hence condition 3 is satisfied. Using the metric of the form $ds^2=dr^2+(\sinh r)^2 \ d\Omega^2 $, we can see that the area of the totally geodesic surface $N_0$ in $H^n$ is \[\omega=|N_0|=\int_{0}^{\rho_0}\omega_{k-1}(\sinh \rho)^{k-1} \mathrm d \rho\] Since $\sinh r \sim r $ as $r \rightarrow 0$, we have \[(\sinh r)^{-(k-1)} \int_{0}^{r}(\sinh \rho)^{k-1} \mathrm d \rho = O(r) \rightarrow 0 \ \ \ \mathrm{as} \ \ \ r \rightarrow 0\] So we have \[f \sim -\frac{\omega r^{-(k-1)}}{\omega_{k-1}}\ \ \mathrm{as} \ \ r \rightarrow 0.\] Thus \[|W(x)| \sim \frac{\omega r^{-(k-1)}}{\omega_{k-1}} \ \ \mathrm{as} \ \ x \rightarrow p,\] and condition 2 is satisfied.
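The computations in Lemma 2 are elementary but easy to get wrong by a sign or an exponent, so we record a small numerical sanity check (a sketch only; it assumes NumPy and SciPy are available, and the choices $k=3$, $\rho_0=1$ are arbitrary). It confirms that $f$ solves $f'+(k-1)f\coth r=1$ with $f(\rho_0)=0$, that $f\leq 0$ on $(0,\rho_0]$, and that $r^{k-1}f(r)\to -\omega/\omega_{k-1}$ as $r\to 0$.
\begin{verbatim}
# Numerical sanity check for Lemma 2 (sketch; k and rho0 are arbitrary choices).
import numpy as np
from scipy.integrate import quad

k, rho0 = 3, 1.0

def f(r):
    # f(r) = sinh(r)^{-(k-1)} * integral from rho0 to r of sinh(rho)^{k-1} d rho
    val, _ = quad(lambda s: np.sinh(s)**(k - 1), rho0, r)
    return np.sinh(r)**(-(k - 1)) * val

# ODE check on a grid: f' + (k-1) f coth(r) = 1  (f' by central differences).
rs, h = np.linspace(0.05, rho0 - 1e-3, 200), 1e-6
fv = np.array([f(r) for r in rs])
fp = np.array([(f(r + h) - f(r - h)) / (2 * h) for r in rs])
print(np.max(np.abs(fp + (k - 1) * fv / np.tanh(rs) - 1.0)))  # should be tiny
print(bool(np.all(fv <= 0.0)), abs(f(rho0)) < 1e-12)          # f <= 0 and f(rho0) = 0

# Asymptotics: r^{k-1} f(r) -> -omega/omega_{k-1} = -int_0^{rho0} sinh(rho)^{k-1} d rho.
C, _ = quad(lambda s: np.sinh(s)**(k - 1), 0.0, rho0)
print(1e-3**(k - 1) * f(1e-3), -C)                            # the two values should agree
\end{verbatim}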
Now we prove the theorem with the help of the vector field $W$. Let $M$ be any minimal surface in $H^n$ passing through the point $p$ and with boundary on $\partial B_p(\rho_0)$. At each point $q \in M\backslash \{p\}$, let $\tau_1, ...,\tau_k$ be an orthonormal basis for $T_qM$ and let $W^T$ be the orthogonal projection of $W$ onto $T_qM$. Let $H$ be the mean curvature vector of the immersion $M \subset H^n$ and let $div_M$ denote the divergence operator on the submanifold $M$; it is known that \[div_M W^T = \sum_{i=1}^{k} \langle D_{\tau_i}W, \tau_i \rangle + \langle H, W\rangle \] Since $M$ is minimal, $H$ is the zero vector, and hence \[div_M W^T = \sum_{i=1}^{k} \langle D_{\tau_i}W, \tau_i \rangle \leq 1\] on $M\backslash \{p\}$ by property 3 of $W$. Using the divergence theorem, for each sufficiently small $r>0$ we have \[|M| \geq \int_{M\backslash B_p(r)} div_M W^T = \int_{M \cap \partial B_p(\rho_0)}\langle W^T, \nu \rangle + \int_{M \cap \partial B_p(r)}\langle W^T, \nu \rangle \] where $\nu$ is the outward unit normal to the boundary of $M \backslash B_p(r)$ (within the submanifold $M$). Since $W$ vanishes on $\partial B_p(\rho_0)$, the first term on the right-hand side of the above formula vanishes. Moreover, on $M \cap \partial B_p(r)$, $\nu = -\partial_r + o(r)$. Together with the asymptotic behaviour of $W$, we have \[|M|\geq \lim_{r \rightarrow 0}\int_{M \cap \partial B_p(r)}\langle W^T, \nu \rangle \] \[= \lim_{r \rightarrow 0}\int_{M \cap \partial B_p(r)} -\frac{\omega}{\omega_{k-1}}r^{-(k-1)}(-1+o(r)) \\ = \omega = |N_0|\] which proves the theorem. Obviously, choosing $M=N_0$ attains the minimum. \section{A generalization} In fact, we can derive the corresponding formula in more general warped product spaces. What really matters is the area of the unit sphere and the coefficient $\phi(r)$ in the warped product metric $ds^2=dr^2+\phi(r)^2 d \Omega ^2$. We first assume the ambient space is $N=I \times G $, where $I=(0,c)$, $0<c\leq \infty $, and $G$ is equipped with the metric $d \Omega ^2$; $N$ carries the warped product metric $ds^2=dr^2+\phi(r)^2 d \Omega ^2$. Let $p$ correspond to the point obtained as $r \rightarrow 0$. (If $G=S^{n-1}$ and $d \Omega ^2$ is the standard sphere metric, then the conditions for smoothness at $p$ are that $\phi$ is smooth with $\lim_{r \rightarrow 0}\phi = 0$, the Taylor expansion of $\phi$ about $0$ has only odd terms, and the coefficient of $r^1$ is 1.) We consider the case where $\phi>0$ is smooth with $\lim_{r \rightarrow 0}\phi = 0$ and $d \Omega ^2$ is a spherical metric. Locally we use the sphere coordinates $\Theta=(\theta_1,...,\theta_{n-1})$ as well as $r$; the coordinate tangent vector fields $\partial_i=\frac{\partial}{\partial \theta_i}$, $i=1,...,n-1$, and $\partial_n=\partial_r$ are mutually orthogonal. The connection coefficient $\Gamma_{ij}^n$ is proportional to $\phi '/\phi $. Moreover, for the flat metric $ds^2=dr^2+r^2 d \Omega ^2$, whose $r$ level sets are obviously parts of standard spheres, the corresponding connection coefficient is $\tilde{\Gamma}_{ij}^n = \delta_{ij}/r$; comparing the two warping factors, we can compute $\Gamma_{ij}^n=\tilde{\Gamma}_{ij}^n \cdot \frac{\phi '}{\phi} / \frac{r'}{r} =\frac{\phi '}{\phi}\delta_{ij}$. Thus, the second fundamental form of the level set $r=\rho_0$ in the ambient space $N$ is $\frac{\phi '}{\phi}I$. Using the same argument as in Lemma 1, where now $D$ denotes the covariant derivative in $N$, we have \[\sum_{i=1}^{k} \langle D_{\tau_i}W, \tau_i \rangle = kf \frac{\phi '}{\phi} + (f'-f \frac{\phi '}{\phi}) \cdot |(\partial_r)^T|^2\].
If we use \[f(r)=(\phi(r))^{-(k-1)} \int_{\rho_0}^{r}(\phi(\rho))^{k-1} \mathrm d \rho\] and $W=f(r)\partial_r$, then $W$ also satisfies the three conditions, with small changes in the constants. A similar argument can then be used to derive the corresponding lower estimate for the area $|M|$ of the minimal surface $M$. However, it should be noticed that three notable changes appear in the above argument. \begin{enumerate} \item the definition of $N_0$ is not clear when the metric is not smooth at $p$, so we simply choose the constant without giving it a geometric interpretation; \item in the hyperbolic case we used the fact that $\phi'/\phi \geq 0$, so here we need $\phi'(0) \geq 0 $ and must work on an interval $(0,R)$ on which $\phi' \geq 0$; \item the value of the following limit of the integral changes: \[ \lim_{r \rightarrow 0}\int_{M \cap \partial B_p(r)}\langle W^T, \nu \rangle.\] \end{enumerate} We regard the area as the integral of the volume form. When the fundamental group of $G$ is no longer trivial, changes occur because we would be integrating over the equator of $S^{n-1}$ rather than over the equator of $G$. We do not consider these cases, and assume further that $G$ is a sphere, i.e. simply connected. Moreover, assume $\phi'(0)>0$ and restrict the radius to $r\leq R$, where $\phi' \geq 0$ on $[0,R]$. Then \[|W| \sim (\phi'(0) r)^{-(k-1)}\int_{0}^{\rho_0}\phi(\rho)^{k-1} \mathrm d \rho \ \ \mathrm{as} \ \ r \rightarrow 0.\] Let \[\omega =\int_{0}^{\rho_0}\omega_{k-1}\phi(\rho)^{k-1} \mathrm{d} \rho\] Then \[\lim_{r \rightarrow 0}\int_{M \cap \partial B_p(r)}\langle W^T, \nu \rangle = \lim_{r \rightarrow 0}\int_{M \cap \partial B_p(r)} -\frac{\omega}{\omega_{k-1}}(\phi'(0) r) ^{-(k-1)}(-1+o(r))\] Now notice that the volume form on $M \cap \partial B_p(r)$ is $\phi ^{k-1}$ times the corresponding volume form coming from $S^{n-1}$, and that $\phi / (\phi'(0)r) \rightarrow 1 $ as $r\rightarrow 0$. The limit of the integral is again $\omega$. When the metric is smooth, $\omega$ is just $|N_0|$, where $N_0$ is, as before, the image of a plane through the origin under the exponential map, intersected with the geodesic ball. So we have derived the lower bound for minimal surfaces with boundary on the boundary of a geodesic ball in the case when $G$ is a simply connected sphere.
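The same elementary check applies verbatim in the warped product setting; only $\sinh$ is replaced by $\phi$. For instance (again a sketch, with $\phi(r)=\sin r$, $k=3$, $\rho_0=1$ chosen purely for illustration, and assuming NumPy and SciPy):
\begin{verbatim}
# Sketch: the ODE f' + (k-1)(phi'/phi) f = 1 for a warped product, here phi(r) = sin(r).
import numpy as np
from scipy.integrate import quad

k, rho0, phi = 3, 1.0, np.sin

def f(r):
    val, _ = quad(lambda s: phi(s)**(k - 1), rho0, r)
    return phi(r)**(-(k - 1)) * val

rs, h = np.linspace(0.05, rho0, 200), 1e-6
res = [(f(r + h) - f(r - h)) / (2 * h) + (k - 1) * f(r) * np.cos(r) / np.sin(r) - 1.0
       for r in rs]
print(max(abs(x) for x in res))   # should be tiny
\end{verbatim}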
{ "timestamp": "2016-12-09T02:06:36", "yymm": "1612", "arxiv_id": "1612.02722", "language": "en", "url": "https://arxiv.org/abs/1612.02722", "abstract": "In hyperbolic space $H^n$ we set a geodesic ball of radius $\\rho$. Consider a $k$ dimensional minimal submanifold passing through the origin of the geodesic ball with boundary lies on the boundary of that geodesic ball. We prove that its area is no less than the totally geodesic $k$ dimensional submanifold passing through the origin in that geodesic ball. This is a partial generalization of the corresponding problem in $R^n$.", "subjects": "Differential Geometry (math.DG)", "title": "Area bounds for minimal surfaces in geodesic ball of hyperbolic space", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575188391107, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.7091542081379499 }
https://arxiv.org/abs/2103.01471
On the Connectivity and Giant Component Size of Random K-out Graphs Under Randomly Deleted Nodes
Random K-out graphs, denoted $\mathbb{H}(n;K)$, are generated by each of the $n$ nodes drawing $K$ out-edges towards $K$ distinct nodes selected uniformly at random, and then ignoring the orientation of the arcs. Recently, random K-out graphs have been used in applications as diverse as random (pairwise) key predistribution in ad-hoc networks, anonymous message routing in crypto-currency networks, and differentially-private federated averaging. In many applications, connectivity of the random K-out graph when some of its nodes are dishonest, have failed, or have been captured is of practical interest. We provide a comprehensive set of results on the connectivity and giant component size of $\mathbb{H}(n;K_n,\gamma_n)$, i.e., random K-out graph when $\gamma_n$ of its nodes, selected uniformly at random, are deleted. First, we derive conditions for $K_n$ and $n$ that ensure, with high probability (whp), the connectivity of the remaining graph when the number of deleted nodes is $\gamma_n=\Omega(n)$ and $\gamma_n=o(n)$, respectively. Next, we derive conditions for $\mathbb{H}(n;K_n,\gamma_n)$ to have a giant component, i.e., a connected subgraph with $\Omega(n)$ nodes, whp. This is also done for different scalings of $\gamma_n$ and upper bounds are provided for the number of nodes outside the giant component. Simulation results are presented to validate the usefulness of the results in the finite node regime.
\section{Introduction} \label{sec:introduction} Random graphs are widely used in modeling and analysis of diverse real-world networks including social networks~\cite{newman2002random}, economic networks~\cite{kakade2005economic}, and communication networks~\cite{goldenberg2010survey}. In recent years, a random graph model known as the {\em random K-out graph} has received interest in designing secure sensor networks \cite{Yagan2013Pairwise}, decentralized learning \cite{2020dprivacy}, and anonymity preserving crypto-currency networks \cite{FantiDandelion2018}. Random K-out graphs, denoted $\hh$, are generated over a set of $n$ nodes as follows. Each of the $n$ nodes draws $K$ out-edges towards $K$ distinct nodes selected uniformly at random. The resulting {\em undirected} graph obtained by ignoring the orientation of the edges is referred to as a random K-out graph. In the context of sensor networks, random K-out graphs have been used \cite{Yagan2013Pairwise, yagan2012modeling, yavuz2015designing} to analyze the performance of the random {\em pairwise} key predistribution scheme \cite{Haowen_2003} and its heterogeneous variants \cite{eletreby2020connectivity,sood2020size}. The random {\em pairwise} scheme works as follows. Before deployment, each sensor chooses $K$ others uniformly at random. A unique {\em pairwise} key is given to each node pair where at least one of them selected the other. After deployment, two sensors can securely communicate if they share a pairwise key. The topology of the sensor network can thus be represented by a random K-out graph; each edge of the random K-out graph represents a secure communication link between two sensors. Consequently, random K-out graphs have been analyzed to answer key questions on the values of the parameters $n, K$ needed to achieve certain desired properties, including connectivity at the time of deployment \cite{FennerFrieze1982,Yagan2013Pairwise}, connectivity under {\em link} removals \cite{yagan2012modeling,yavuz2015designing}, and unassailability \cite{yagan2016wireless}. Despite many prior works on random K-out graphs, very little is known about their connectivity properties when some of their {\em nodes} are removed. This is an increasingly relevant problem since many deployments of sensor networks are expected to take place in {\em hostile} environments where nodes may be captured by an adversary, or fail due to harsh conditions. In addition, random K-out graphs have recently been used to construct the communication graph in a differentially-private federated averaging scheme called the GOPA (GOssip Noise for Private Averaging) protocol \cite[Algorithm~1]{2020dprivacy}. According to the GOPA protocol, a random K-out graph is constructed on a set of nodes of which an unknown subset is {\em dishonest}. It was shown in \cite[Theorem~3]{2020dprivacy} that the privacy-utility trade-offs achieved by the GOPA protocol are tightly dependent on the subgraph on {\em honest} nodes being {\em connected}. When the subgraph on honest nodes is not connected, it was shown that the performance of GOPA is tied to the {\em size} of the connected components of the honest nodes. With these motivations in mind, this paper aims to fill a gap in the literature and provide a comprehensive set of results on the connectivity and size of the giant component of the random K-out graph when some of its nodes are {\em dishonest}, have {\em failed}, or have been {\em captured}.
Let \(\mathbb{H}(n;K_n,\gamma_n)\) denote the random K-out graph when \(\gamma_n\) of its nodes, selected uniformly at random, are deleted. First, we provide a set of conditions for $K_n$ and $n$ that ensure, {\em with high probability} (whp), the connectivity of the remaining graph when the number $\gamma_n$ of deleted nodes is $\Omega(n)$ and $o(n)$, respectively. Our result for $\gamma_n = \Omega(n)$ (see Theorem \ref{theorem:thmc_1}) significantly improves a prior result \cite{YAGAN2013493} on the same problem and leads to a {\em sharp} zero-one law for the connectivity of \(\mathbb{H}(n;K_n,\gamma_n)\). Our result for the case $\gamma_n = o({n})$ (see Theorem \ref{theorem:thmc_2}) extends the existing result that $K_n \geq 2$ suffices for connectivity by showing that the graph is still connected whp for $K_n \geq 2$ when $o(\sqrt{n})$ nodes are deleted. We then derive conditions on $K_n$ that lead $\hhdn$ to have a {\em giant component}, together with an upper bound on the number of nodes allowed outside the giant component. This is also done for both cases $\gamma_n=\Omega(n)$ and $\gamma_n=o(n)$. Finally, we present simulation results when the number of nodes is finite and compare the results with an Erd\H{o}s-R\'enyi graph with the same average node degree. \section{Notations and the Model} \label{sec:model} All random variables are defined on the same probability space $(\Omega, {\mathcal{F}}, \mathbb{P})$ and probabilistic statements are given with respect to the probability measure $\mathbb{P}$. The complement of an event $A$ is denoted by $A^{\rm c}$. The cardinality of a discrete set $A$ is denoted by $|A|$. All limits are understood with $n$ going to infinity. If the probability of an event tends to one as $n\rightarrow \infty$, we say that it occurs with high probability (whp). The statements $a_n = {o}(b_n)$, $a_n=\omega(b_n)$, $a_n = {O}(b_n)$, $a_n=\Theta(b_n)$, and $a_n = \Omega(b_n)$, used when comparing the asymptotic behavior of sequences $\{a_n\},\{b_n\}$, have their meaning in standard Landau notation. The asymptotic equivalence $a_n \sim b_n$ is used to denote the fact that $\lim_{n \to \infty} \frac{a_n}{b_n}=1$. The random K-out graph is defined on the vertex set $V:=\{v_1, \ldots, v_n\}$ as follows. Let $\mathcal{N}:=\{1,2,\dots,n\}$ denote the set of vertex labels. For each $i \in \mathcal{N}$, let $\Gamma_{n,i} \subseteq \mathcal{N} \setminus i$ denote the set of $K_n$ labels corresponding to the nodes selected by $v_i$. It is assumed that $\Gamma_{n,1}, \ldots , \Gamma_{n,n}$ are mutually independent. Distinct nodes $v_i$ and $v_j$ are adjacent, denoted by $v_i \sim v_j$, if at least one of them picks the other. Namely, \vspace{-1mm} \begin{align} v_i \sim v_j ~~\quad \mbox{if} ~~~\quad [j \in \Gamma_{n,i}] ~\vee~ [i \in \Gamma_{n,j}]. \label{eq:Adjacency} \end{align} The random graph defined on the vertex set $V$ through the adjacency condition (\ref{eq:Adjacency}) is called a random K-out graph \cite{frieze2016introduction,Bollobas,Yagan2013Pairwise} and denoted by $\hhn$. It was previously established in \cite{Yagan2013Pairwise, FennerFrieze1982} that random K-out graphs are connected whp when $K \geq 2$ and not connected when $K=1$; i.e., \vspace{-1mm} \begin{equation} \lim_{n \to \infty} \mathbb{P}\left[ \mathbb{H}(n;K) \text{ is connected}\right] = \begin{cases} 1 & \mathrm{if} \quad K\geq 2, \\ 0 & \mathrm{if} \quad K=1. \end{cases} \label{eq:homogeneous_zero_one_law} \end{equation}
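To make the construction concrete, the following short sketch (ours, included only for illustration; it assumes NumPy and is not tied to any of the referenced implementations) samples $\hhn$ via the adjacency rule (\ref{eq:Adjacency}) and then removes $\gamma_n$ nodes chosen uniformly at random:
\begin{verbatim}
# Sketch: sample H(n;K) and delete gamma nodes chosen uniformly at random.
import numpy as np

def sample_kout(n, K, rng):
    """Adjacency sets of the random K-out graph on {0, ..., n-1}."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        # node i draws K out-edges towards K distinct other nodes
        for j in rng.choice([v for v in range(n) if v != i], size=K, replace=False):
            adj[i].add(int(j)); adj[int(j)].add(i)   # orientation is ignored
    return adj

def delete_random_nodes(adj, gamma, rng):
    """Restrict the graph to the surviving node set R (deleted nodes removed)."""
    n = len(adj)
    deleted = set(rng.choice(n, size=gamma, replace=False).tolist())
    return {v: adj[v] - deleted for v in range(n) if v not in deleted}

rng = np.random.default_rng(1)
graph_after_deletions = delete_random_nodes(sample_kout(200, 2, rng), 20, rng)
\end{verbatim}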
Next, we model random K-out graphs under random removal of nodes. As already mentioned, our motivation is to understand the properties of the underlying network when some nodes are {\em dishonest}, or have {\em failed}, or have been {\em captured}. We let $\gamma_n$ denote the number of such nodes and assume, for simplicity, that they are selected uniformly at random among all nodes in $V$. The case where the set of dishonest/captured/failed nodes is selected carefully by an adversary might also be of interest, but is beyond the scope of the current paper; see \cite{yagan2016wireless} for partial results in that case. A related model of interest is the random K-out graph under randomly deleted {\em edges}. The connectivity and $k$-connectivity in that case have been studied in \cite{yagan2012modeling,yavuz2015toward,yavuz2017k}. Formally, let $D\subset V$ denote the set of deleted nodes with $|D| = \gamma_n$. We are interested in the random graph $\hhdn$, defined on the vertex set $R = V \setminus D$ such that distinct vertices $v_i$ and $v_j$ (both in $R$) are adjacent if they were adjacent in $\hhn$; i.e., if $[j \in \Gamma_{n,i}] ~\vee~ [i \in \Gamma_{n,j}]$. \begin{definition}[Connected Components] {\sl A pair of nodes in a graph $\mathbb{G}$ are said to be {\em connected} if there exists a path of edges connecting them. A {\em component} $C_i$ of $\mathbb{G}$ is a subgraph in which any two vertices are connected to each other, and no vertex is connected to a node outside of $C_i$. } \label{def:concomp} \end{definition}{} A graph with $n$ nodes is said to have a {\em giant} component if its largest connected component is of size $\Omega(n)$. \section{Main Results and Discussion} \label{sec:Main Results} Our main results are presented in Theorems $3.1-3.4$ below. Each theorem addresses a design question as to how we should choose the parameter $K_n$ such that, when the given number $\gamma_n$ of nodes is deleted, the remaining graph satisfies the desired property (e.g., connectivity or a giant component of a specific size) whp. \subsection{Results on Connectivity} Let $P(n,K_n,\gamma_n) = \mathbb{P} \left[ \hhdn \text{ is connected}\right]$. \begin{theorem} {\sl Let $\gamma_n = \alpha n$ with $\alpha$ in $(0,1)$, and consider a scaling $K:\mathbb{N}_0 \to \mathbb{N}_0$ such that with $c>0$ we have \begin{align} K_n \sim c \cdot r_1(\alpha, n), \ \ \textrm{where} \quad r_1(\alpha, n) = \frac{\log n}{1 - \alpha - \log \alpha} \label{eq:threshold_1} \end{align} is the threshold function. Then, we have \begin{align} & \lim_{n \to \infty} P(n,K_n,\gamma_n) = \begin{cases} 1, & \mathrm{if}\quad c > 1\\ 0, & \mathrm{if}\quad 0< c < 1. \end{cases} \label{eq:c_1} \end{align} } \label{theorem:thmc_1} \end{theorem} The proof of the {\em one-law} in (\ref{eq:c_1}), i.e., that $\lim_{n \to \infty} P(n,K_n,\gamma_n)=1$ if $c>1$, is given in Section \ref{sec:proof}. The {\em zero-law} of (\ref{eq:c_1}), i.e., that $\lim_{n \to \infty} P(n,K_n,\gamma_n)=0$ if $c<1$, was established previously in \cite[Corollary 3.3]{YAGAN2013493}. There, a one-law was also provided: under (\ref{eq:threshold_1}), it was shown that $\lim_{n \to \infty} P(n,K_n,\gamma_n)=1$ if $c>\frac{1}{1-\alpha}$, leaving a gap between the thresholds of the zero-law and the one-law. Theorem \ref{theorem:thmc_1} presented here fills this gap by establishing a tighter one-law, and constitutes a {\em sharp} zero-one law; e.g., when $\alpha=0.5$, the one-law in \cite{YAGAN2013493} is given with $c>2$, while we show that it suffices to have $c>1$.
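As a quick empirical illustration of Theorem~\ref{theorem:thmc_1} (again a sketch of ours, assuming NumPy; $n$, $\alpha$ and the number of trials are kept small only for speed, so finite-size effects are expected), one can estimate $P(n,K_n,\gamma_n)$ by Monte Carlo simulation and compare $K_n$ against the threshold $r_1(\alpha,n)$:
\begin{verbatim}
# Sketch: Monte Carlo estimate of P(n,K_n,gamma_n) around the threshold r_1(alpha,n).
import math
import numpy as np

def connected_after_deletion(n, K, gamma, rng):
    # Sample the K-out graph.
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in rng.choice(n - 1, size=K, replace=False):
            j = int(j) + (int(j) >= i)               # map to {0,...,n-1} \ {i}
            adj[i].add(j); adj[j].add(i)
    # Delete gamma random nodes and test connectivity of the rest by graph search.
    alive = np.ones(n, dtype=bool)
    alive[rng.choice(n, size=gamma, replace=False)] = False
    start = int(np.flatnonzero(alive)[0])
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if alive[w] and w not in seen:
                seen.add(w); stack.append(w)
    return len(seen) == int(alive.sum())

n, alpha, trials, rng = 2000, 0.5, 200, np.random.default_rng(0)
gamma = int(alpha * n)
r1 = math.log(n) / (1 - alpha - math.log(alpha))     # threshold of Theorem 1
for K in (max(1, int(0.5 * r1)), int(r1) + 1, int(1.5 * r1)):
    p_hat = np.mean([connected_after_deletion(n, K, gamma, rng) for _ in range(trials)])
    print("K =", K, " r_1 ~", round(r1, 1), " empirical P ~", round(float(p_hat), 2))
\end{verbatim}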
\vspace{1mm} \begin{theorem} {\sl Consider a scaling $K:\mathbb{N}_0 \to \mathbb{N}_0$. \\ a) If $\gamma_n = o(\sqrt{n})$, then we have \begin{align} \lim_{n \to \infty} P(n,K_n,\gamma_n) = 1, \quad \mathrm{if}\quad K_n \geq 2 \ \ \forall n \label{eq:c_2} \end{align} b) If $\gamma_n = \Omega(\sqrt{n})$ and $\gamma_n = o(n)$, and if for some sequence $\omega_n$, it holds that $$ K_n = r_2(\gamma_n) + \omega_n, \ \ \textrm{where} \quad r_2(\gamma_n) = \frac{\log (\gamma_n)}{\log 2 + 1/2}$$ is the threshold function, then we have \begin{align} \lim_{n \to \infty} P(n,K_n,\gamma_n) = 1, \quad \mathrm{if}\quad \lim_{n \to \infty}\omega_n = \infty \label{eq:c_2} \end{align} } \label{theorem:thmc_2} \end{theorem} \vspace{-2mm} The random K-out graph was known \cite{FennerFrieze1982,Yagan2013Pairwise} to be connected whp when $K_n \geq 2$ (viz.~(\ref{eq:homogeneous_zero_one_law})). Theorem \ref{theorem:thmc_2} extends this result by showing that $K_n \geq 2$ is sufficient to have the random K-out graph remain connected whp when $o(\sqrt{n})$ of its nodes (selected randomly) are deleted. \subsection{Results on the Size of the Giant Component} Let $C_{max}(n,K_n,\gamma_n)$ denote the set of nodes in the {\em largest} connected component of $\hhdn$ and let $P_G(n,K_n,\gamma_n,\lambda_n) :=\mathbb{P}[|C_{max}(n,K_n,\gamma_n)| > n - \gamma_n- \lambda_n]$. Namely, $P_G(n,K_n,\gamma_n,\lambda_n)$ is the probability that fewer than $\lambda_n$ nodes are {\em outside} the largest component of $\hhdn$. \begin{theorem} {\sl Let $\gamma_n = o(n)$ and $\lambda_n = \Omega(\sqrt{n})$. Consider a scaling $K:\mathbb{N}_0 \to \mathbb{N}_0$ and let $$r_3(\gamma_n,\lambda_n) = 1 + \frac{\log(1+\gamma_n / \lambda_n)}{\log 2 + 1/2}$$ be the threshold function. Then, we have \begin{align*} & \lim_{n \to \infty} P_G(n,K_n,\gamma_n,\lambda_n) = 1, \quad \mathrm{if}\quad K_n > r_3(\gamma_n, \lambda_n), \ \ \forall n. \end{align*} } \label{theorem:thmg_3} \end{theorem} \vspace{-3mm} We remark that if $\lambda_n=\beta n$ with $\beta>0$, then $r_3(\gamma_n,\lambda_n) = 1+o(1)$. This shows that when $\gamma_n=o(n)$, it suffices to have $K_n \geq 2$ for $\hhdn$ to have a giant component containing $\Omega((1-\beta)n)$ nodes for arbitrary $\beta>0$. \vspace{2mm} \begin{theorem} {\sl Let $\gamma_n = \alpha n$ with $\alpha$ in $(0,1)$, $\lambda_n=o(n)$, and $\lambda_n=\omega(1)$. Consider a scaling $K:\mathbb{N}_0 \to \mathbb{N}_0$ and let $$r_4(\alpha, \lambda_n) = 1 + \frac{\log(1 + \frac{n \alpha}{\lambda_n}) +\alpha + \log(1-\alpha)}{\frac{1-\alpha}{2} - \log\left(\frac{1+\alpha}{2}\right)}$$ be the threshold function. Then, we have \begin{align*} & \lim_{n \to \infty} P_G(n,K_n,\gamma_n,\lambda_n) = 1, \quad \mathrm{if}\quad K_n > r_4(\alpha, \lambda_n), \ \ \forall n. \end{align*} } \label{theorem:thmg_5} \end{theorem} \vspace{-3mm} Due to space limitations, we only provide a proof of Theorem \ref{theorem:thmc_1} here. Proofs of all results are available in \cite{icc2021proof}. \subsection{Simulation Results} To check the usefulness of our results when the number $n$ of nodes is finite, we examine the probability of connectivity and the number of nodes outside the giant component (i.e., $ n - \gamma_n-|C_{max}(n,K_n,\gamma_n)|$) in two different experimental setups. The first setup is to obtain the results for the case where $\gamma_n = \alpha n$, with $\alpha$ in $(0,1)$. We generate instantiations of the random graph $\hhdn$ with $n=5000$, varying $K_n$ in the interval $[1, 25]$ and using several $\alpha$ values in the interval $[0.1,0.8]$.
Then, we record the empirical probability of connectivity and $\lambda_n$ from 1000 independent experiments for each $(K_n,\alpha)$ pair. The results of this experiment are shown in Fig.~\ref{fig:fig1} (Left) and Fig.~\ref{fig:fig3}. Fig.~\ref{fig:fig1} (Left) depicts the empirical probability of connectivity of $\hhdn$. The vertical lines stand for the critical threshold of connectivity asserted by Theorem \ref{theorem:thmc_1}. In each curve, $P(n,K_n,\gamma_n)$ exhibits a threshold behaviour as $K_n$ increases, and the transition from $P(n,K_n,\gamma_n)=0$ to $P(n,K_n,\gamma_n)=1$ takes place around $K_n = \frac{\log n}{1 - \alpha - \log \alpha}$, validating the claims of Theorem \ref{theorem:thmc_1}. In Fig.~\ref{fig:fig3}, we plot the {\em maximum} number of nodes outside the giant component observed in 1000 experiments for each parameter pair, and compare these with our result, namely the upper bound on $n - |C_{max}|$ obtained from Theorem \ref{theorem:thmg_5} by taking the maximum $\gamma_n$ value that gives a threshold less than or equal to the $K_n$ value tested in the simulation. As can be seen, for any $K_n$ and $\gamma_n$ value, the experimental maximum number of nodes outside the giant component is smaller than the upper bound obtained from Theorem \ref{theorem:thmg_5}, reinforcing the usefulness of our results in practical settings. \begin{figure}[!t] \centering \includegraphics[scale=0.275]{Fig1.png}\label{fig:conn1} \hspace{0.4mm} \includegraphics[scale=0.275]{Fig2.png}\label{fig:conn2} \vspace{-4mm} \caption{\sl (Left) Empirical probability that $\hhdn$ is connected for $n = 5000$ calculated from 1000 experiments. The vertical lines are the theoretical thresholds given by Theorem \ref{theorem:thmc_1}. (Right) Maximum number of nodes outside the giant component of $\hhdn$ for $n = 50,000$ in 1000 experiments. \vspace{-4mm}} \label{fig:fig1} \end{figure} \begin{figure}[!t] \vspace{-1mm} \centering \includegraphics[scale=0.26]{Fig5.png}\label{fig:gc2} \hspace{0.4mm} \includegraphics[scale=0.27]{Fig6.png}\label{fig:gc2a} \vspace{-7mm} \caption{\sl Maximum number of nodes outside the giant component of $\hhdn$ for $n = 5000$ and $\gamma_n = 0.1n$, $\gamma_n = 0.2n$ cases (Left); and for $n = 5000$ and $\gamma_n = 0.4n$, $\gamma_n = 0.6n$ cases (Right), obtained through 1000 experiments along with the respective plot of theoretical $n - |C_{max}|$.} \label{fig:fig3} \end{figure} \begin{figure}[!t] \vspace{-3mm} \centering \includegraphics[scale=0.275]{Fig3.png}\label{fig:gc1} \hspace{0.4mm} \includegraphics[scale=0.275]{Fig4.png}\label{fig:gc3} \vspace{-4mm} \caption{\sl Maximum number of nodes outside the giant component of $\hhdn$ for $n = 50,000$ and $\gamma_n = 10$ cases (Left); and for for $n = 50,000$ and $\gamma_n = 250$ cases (Right), obtained through 1000 experiments along with the plot of theoretical $n - |C_{max}|$. \vspace{-2mm}} \label{fig:fig2} \end{figure} \begin{figure}[!t] \vspace{-1mm} \centering \includegraphics[scale=0.26]{compare1.png}\label{fig:cmp1} \hspace{0.4mm} \includegraphics[scale=0.26]{compare3.png}\label{fig:cmp2} \vspace{-4mm} \caption{\sl Comparison of maximum number of nodes outside the giant component of a random K-out graph $\hhdn$ and an Erd\H{o}s-R\'enyi graph with same mean node degree when $n = 5000$, $\gamma_n = 0.4n$ (Left); and when $n = 50,000$ and $\gamma_n= 500$ (Right). Each data-point is obtained through 1000 experiments. 
\vspace{-7mm}} \label{fig:cmp} \end{figure} We ran a second set of experiments for the case where $\gamma_n = o(n)$. As before, we generate instantiations of the random graph $\hhdn$, with $n=50,000$, varying $K_n$ in $[2, 5]$ and varying $\gamma_n$ in $[10,2000]$. For each $(K_n,\gamma_n)$ pair, we generate 1000 experiments and record the maximum number of nodes seen outside the giant component; in some cases no nodes are seen outside the giant component, indicating that the graph is connected. The results of this experiment are shown in Fig.~\ref{fig:fig1} (Right) and Fig.~\ref{fig:fig2}. In Fig.~\ref{fig:fig1} (Right), the maximum number of nodes seen outside the giant component in 1000 experiments is depicted as a function of $K_n$. The plots for $\gamma_n=10$ and $\gamma_n=100$ correspond to the $\gamma_n = o(\sqrt{n})$ case in Theorem \ref{theorem:thmc_2}a. As can be seen from these plots, there is only one node outside the giant component in the worst case for $\gamma_n=10$ and $\gamma_n=100$ when $K_n = 2$, roughly in line with Theorem \ref{theorem:thmc_2}a which expects the graph to be connected when $K_n = 2$. The plots for $\gamma_n=1000$ and $\gamma_n=2000$ correspond to the $\gamma_n = \omega(\sqrt{n})$ and $\gamma_n = o(n)$ case in Theorem \ref{theorem:thmc_2}b. The thresholds on $K_n$ for these $\gamma_n$ values, obtained using Theorem \ref{theorem:thmc_2}b, are $r_2(1000)=6.79$ and $r_2(2000)=7.37$, rounded to two decimal digits, when the $\omega(1)$ term in Theorem \ref{theorem:thmc_2}b is ignored because $n$ is finite in the simulations. As can be seen from the plots, the graph becomes connected for $\gamma_n=1000$ when $K_n \geq 4$, and for $\gamma_n=2000$ when $K_n \geq 5$. Hence, we can see that graphs for $\gamma_n=1000$ and $\gamma_n=2000$ are connected when $K_n$ is selected above the theoretical threshold obtained from Theorem~\ref{theorem:thmc_2}b, supporting Theorem \ref{theorem:thmc_2}b. In Fig.~\ref{fig:fig2}, the maximum number of nodes seen outside the giant component in 1000 experiments is plotted as a function of $K_n$ for $\gamma_n = 10$ (Left) and for $\gamma_n = 250$ (Right). The corresponding theoretical plots are obtained by the upper bound on $n - |C_{max}|$ asserted by Theorem \ref{theorem:thmg_3} for the given value of $K_n$. For any $K_n$ and $\gamma_n$ pair, the experimental values are smaller than the theoretical values, supporting the usefulness of Theorem \ref{theorem:thmg_3} in the finite node regime. \subsection{Discussion} In Theorem \ref{theorem:thmc_1}, we improve the results given in \cite{YAGAN2013493} by closing the gap between the zero law and the one law, and hence we establish a sharp zero-one law for connectivity when $\gamma_n= \Omega(n)$ nodes are deleted from $\hhdn$. In Theorem \ref{theorem:thmc_2}, we establish that the graph $\hhdn$ with $\gamma_n = o(n)$ is connected whp when $K_n \sim \log(\gamma_n)$; and when $\gamma_n = o(\sqrt{n})$, $K_n \geq 2$ is sufficient for connectivity. The latter result is especially important, since $K_n \geq 2$ is the previously established threshold for connectivity \cite{FennerFrieze1982}, and here we improve this result by showing that the graph is still connected with $K_n \geq 2$ even after $o(\sqrt{n})$ nodes (selected randomly) are deleted. To put these results in perspective, we compare them with an Erd\H{o}s-R\'enyi graph $G(n,p)$, which is connected whp if $p > \log n / n$. This translates to having an average node degree of $<k> \sim \log n$ \cite{erdHos1960evolution}.
The $<k>$ required for the random K-out graph to be connected whp is much lower, with $<k> = O(1)$ when $o(\sqrt{n})$ nodes are removed, and $<k> \sim \log(\gamma_n)$ when $\gamma_n=\Omega(\sqrt{n})$ nodes are removed. For a better comparison, we examine the experimental maximum number of nodes outside the giant component out of 1000 experiments of a random K-out graph $\hhdn$ and an Erd\H{o}s-R\'enyi graph $G(n,p)$ with the same mean node degree when $\gamma_n$ random nodes are removed from the graph. To achieve the same node degree, $p$ is selected as $p = 2K_n/n$. The results are given in Fig.~\ref{fig:cmp} for $n=5000$, $\gamma_n =0.4n$ on (Left), and $n=50,000$, $\gamma_n= 500$ on (Right). As can be seen, the random K-out graph has a smaller maximum number of nodes outside the giant component than the Erd\H{o}s-R\'enyi graph, and this difference is more pronounced when $\gamma_n$ is smaller. Hence, we can conclude that random K-out graphs are more robust to random node removals than Erd\H{o}s-R\'enyi graphs in the sense that both the probability of connectivity and the size of the giant component are larger. This reinforces the efficiency of the K-out construction in various distributed network applications including federated averaging \cite{2018dprivacy,2020dprivacy} where it is desirable to maintain connectivity in the event of node failures or adversarial capture of nodes. \section{A Proof of Theorem~\ref{theorem:thmc_1} } \label{sec:proof} \vspace{-1mm} We start by defining a {\em cut}. \begin{definition}[Cut] \cite[Definition 6.3]{MeiPanconesiRadhakrishnan2008} For a graph $\mathcal{G}$ defined on the node set $V$, a \emph{cut} is a non-empty subset $S \subset V$ of nodes {\em isolated} from the rest of the graph. Namely, $S \subset V$ is a cut if there is no edge between $S$ and $S^{\rm c}=V \setminus S$. \label{def:cut} \end{definition}{} Definition~\ref{def:cut} implies that if $S$ is a cut, then so is $S^{\rm c}$. Recall from Section \ref{sec:model} that we defined $\hhdn$ as the graph obtained when the set $D$ of nodes is removed from the graph $\hhn$. Namely, the vertex set of $\hhdn$ is given by $R=V \setminus D$. Let $\mathcal{E}_n (K_n, \gamma_n; S)$ denote the event that $S \subset R$ is a cut in $\hhdn$ as per Definition~\ref{def:cut}. With $S^{\rm c} = R \setminus S$, the event $\mathcal{E}_n (K_n, \gamma_n; S)$ occurs if no nodes in $S$ pick neighbors in $S^{\rm c}$, and no nodes in $S^{\rm c}$ pick neighbors in $S$. Note that nodes in $S$ or $S^{\rm c}$ can still pick neighbors in the set $D$. Thus, we have \begin{align} \mathcal{E}_n (K_n, \gamma_n; S) = \bigcap_{i \in \mathcal{N}_S} \bigcap_{j \in \mathcal{N}_{S^{\rm c}}} \left( \left \{ i \not \in \Gamma_{n-\gamma_n,j} \right \} \cap \left \{ j \notin \Gamma_{n-\gamma_n,i} \right \} \right), \nonumber \vspace{-2mm} \end{align} where $\mathcal{N}_S$ and $\mathcal{N}_{S^{\rm c}}$ denote the sets of labels of the vertices in $S$ and $S^{\rm c}$, respectively. Let $\mathcal{Z}(x_n;K_n, \gamma_n)$ denote the event that $\hhdn$ has no cut $S \subset R $ with size $x_n \leq |S| \leq n-\gamma_n - x_n$ where $x:\mathbb{N}_0 \rightarrow \mathbb{N}_0$ is a sequence such that $x_n \leq (n-\gamma_n)/{2} \ \forall n$. Namely, $\mathcal{Z}(x_n;K_n, \gamma_n)$ is the event that there are no cuts in $\hhdn$ whose size falls in the range $[x_n, n-\gamma_n-x_n]$.
\begin{lemma} \cite[Lemma 4.3]{sood2020size} For any sequence $x: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ such that $x_n \leq \lfloor (n-\gamma_n)/3 \rfloor$ for all $n$, we have \begin{align} \mathcal{Z}(x_n;K_n, \gamma_n) \Rightarrow |C_{max}(n, K_n, \gamma_n)| > n - \gamma_n - x_n. \end{align} \label{lemma:gc} \end{lemma} \vspace{-5mm} Lemma \ref{lemma:gc} states that if the event $\mathcal{Z}(x_n;K_n, \gamma_n)$ holds, then the size of the largest connected component of $\hhdn$ is greater than $n - \gamma_n - x_n$; i.e., there are fewer than $x_n$ nodes outside of the giant component of $\hhdn$. Hence, we can see that $\hhdn$ is connected if $\mathcal{Z}(x_n;K_n, \gamma_n)$ takes place with $x_n=1$. Thus, the one-law will be established if we show that $\lim_{n \to \infty} \mathbb{P} [\mathcal{Z}(x_n;K_n, \gamma_n)^{\rm c}] = 0$ with $x_n=1$. From the definition of $\mathcal{Z}(x_n;K_n, \gamma_n)$, we have \vspace{-1mm} \begin{align} \mathcal{Z}(x_n;K_n, \gamma_n) & = \bigcap_{S \in \mathcal{P}_n: ~x_n\leq |S| \leq \lfloor \frac{n-\gamma}{2} \rfloor} \left(\mathcal{E}_n({K}_n,{\gamma_n}; S)\right)^{\rm c}, \nonumber \end{align} where $\mathcal{P}_n$ is the collection of all non-empty subsets of $R$. Complementing both sides and using the union bound, we get \begin{align} \mathbb{P}\left[\left(\mathcal{Z}(x_n;K_n, \gamma_n)\right)^{\rm c}\right] &\leq \hspace{-3mm} \sum_{ S \in \mathcal{P}_n: x_n \leq |S| \leq \lfloor \frac{n-\gamma}{2} \rfloor } \mathbb{P}[ \mathcal{E}_n ({K}_n,{\gamma_n}; S) ] \nonumber \\ &=\hspace{-1mm} \sum_{r=x_n}^{ \left\lfloor \frac{n-\gamma}{2} \right\rfloor } \hspace{-1mm} \sum_{S \in \mathcal{P}_{n,r} } \mathbb{P}[\mathcal{E}_n ({K}_n,{\gamma_n}; S)] \label{eq:BasicIdea+UnionBound}, \end{align} where $\mathcal{P}_{n,r} $ denotes the collection of all subsets of $R$ with exactly $r$ elements. For each $r=1, \ldots , \left\lfloor (n-\gamma)/2\right\rfloor$, we can simplify the notation by denoting $\mathcal{E}_{n,r} ({K}_n,{\gamma_n})=\mathcal{E}_n ({K}_n,{\gamma_n} ; \{v_1, \ldots , v_r \} )$. From the exchangeability of the node labels and associated random variables, we have \[ \mathbb{P}[ \mathcal{E}_n({K}_n,{\gamma_n} ; S) ] = \mathbb{P}[ \mathcal{E}_{n,r}({K}_n,{\gamma_n}) ], \quad S \in \mathcal{P}_{n,r}. \] $|\mathcal{P}_{n,r} | = {n-\gamma_n \choose r}$, since there are ${n-\gamma_n \choose r}$ subsets of $R$ with $r$ elements. Thus, we have \begin{equation*} \sum_{S \in \mathcal{P}_{n,r} } \mathbb{P}[\mathcal{E}_n ({K}_n,{\gamma_n} ; S) ] = {n-\gamma_n\choose r} ~ \mathbb{P}[\mathcal{E}_{n,r} ({K}_n,{\gamma_n})]. \label{eq:ForEach=r} \end{equation*} Substituting this into (\ref{eq:BasicIdea+UnionBound}), we obtain \begin{align} \hspace{-.4mm} \mathbb{P}\left[\left(\mathcal{Z}(x_n;K_n,\gamma_n)\right)^{\rm c}\right] \leq \hspace{-1mm} \sum_{r=x_n}^{ \left\lfloor \frac{\hspace{-.5mm} n-\gamma}{2} \right\rfloor } \hspace{-1mm} {n-\gamma_n \hspace{-.4mm} \choose r } \hspace{-.4mm} \mathbb{P}[ \mathcal{E}_{n,r}({K}_n,{\gamma_n})] \hspace{-1mm} \label{eq:Z_bound} \end{align} \vspace{-0.1mm} Remember that $\mathcal{E}_{n,r}({K}_n,{\gamma_n})$ is the event that the $r$ nodes in $S$ and the $n-\gamma_n-r$ nodes in $S^{\rm c}$ do not pick each other; they can, however, still pick from the $\gamma_n$ nodes in $D$.
Thus, we have \begin{align} \nonumber \mathbb{P} [\mathcal{E}_{n,r}({K}_n,{\gamma_n})] & = \left( \dfrac{{\gamma_n+r-1 \choose K_n}}{{n-1 \choose K_n}} \right)^{r} \left( \dfrac{{n-r-1 \choose K_n}}{{n-1 \choose K_n}} \right)^{n-\gamma_n-r} \\ & \leq \left(\dfrac{\gamma_n+r}{n}\right)^{rK_n} \left(\dfrac{n-r}{n}\right)^{K_n(n-\gamma_n-r)} \nonumber \end{align} Letting $P_Z=\mathbb{P}\left[\mathcal{Z}(1;K_n, \gamma_n)^{\rm c}\right]$, and plugging in $x_n=1$ in (\ref{eq:Z_bound}), we get \begin{align} \hspace{-1mm} P_Z \leq \hspace{-1mm} \sum_{r=1}^{ \left\lfloor \frac{n-\gamma}{2} \right\rfloor } \hspace{-1mm} {\hspace{-.5mm}n-\gamma_n\hspace{-.5mm}\choose r} \hspace{-1mm} \left(\hspace{-.5mm}\dfrac{\hspace{-.5mm}\gamma_n+r\hspace{-.5mm}}{n}\right)^{\hspace{-1mm} r K_n} \hspace{-1mm} \left(\hspace{-.5mm}\dfrac{n-r}{n}\hspace{-.5mm} \right)^{\hspace{-1mm} K_n(n-\gamma_n-r)\hspace{-.5mm}} \label{eq:gc_pz} \end{align} Let $\gamma_n = \alpha n$ with $0<\alpha<1$. Using this and standard bounds ${n \choose k} \leq (\frac{n e }{k})^k$ and $1-x \leq e^{-x}$ in (\ref{eq:gc_pz}), we get \begin{align} P_Z \leq \sum_{r=1}^{ \left\lfloor \frac{n-\alpha n}{2} \right\rfloor } \left(\dfrac{n -\alpha n}{r}\right)^r e^r \left(\alpha + \dfrac{r}{n}\right)^{rK_n} e^{\frac{-r K_n (n-\alpha n -r)}{n}} \nonumber \end{align} We will show that the right side of the above expression goes to zero as $n$ goes to infinity. Let $$A_{n,r,\alpha}: = \left(\dfrac{n -\alpha n}{r}\right)^r e^r \left(\alpha + \dfrac{r}{n}\right)^{rK_n}e^{\frac{-rK_n(n-\alpha n -r)}{n}}.$$ \vspace{-2mm} We write \vspace{-1mm} \begin{align*} P_Z \leq \sum_{r=1}^{ \left\lfloor n/\log n \right\rfloor } A_{n,r,\alpha} + \sum_{r=\left\lfloor n/\log n \right\rfloor}^{ \left\lfloor \frac{n-\alpha n}{2} \right\rfloor } A_{n,r,\alpha} := Q_1 + Q_2, \end{align*} and show that both $Q_1$ and $Q_2$ go to zero as $n \to \infty$. We start with the first summation $Q_1$. \vspace{-1mm} \begin{align} Q_1 & \! \begin{multlined}[t] \leq \sum_{r=1}^{ \left\lfloor \frac{n}{\log n} \right\rfloor } \left((1 -\alpha )en\cdot e^{ K_n \log(\alpha+ \frac{1}{\log n}) -K_n (1-\alpha -\frac{1}{\log n}) } \right)^r \nonumber \end{multlined} \vspace{-1mm} \end{align} \vspace{-1mm} Next, assume as in the statement of Theorem \ref{theorem:thmc_1} that \begin{align} K_n = \frac{c_n \log n}{1 - \alpha - \log \alpha}, \quad n=1,2,\ldots \label{eq:proof1_k} \end{align} for some sequence $c: \mathbb{N}_0 \to \mathbb{R}_+$ such that $\lim_{n \to \infty}c_n = c$ with $c>1$. Also define \begin{align*} a_n &:= (1 -\alpha )en\cdot e^{ K_n\log(\alpha+ \frac{1}{\log n}) -K_n(1-\alpha -\frac{1}{\log n}) } \\ & = (1 -\alpha )e n \cdot e^{- c_n \log n \frac{1- \alpha -\log (\alpha+\frac{1}{\log n}) - \frac{1}{\log n}}{1- \alpha - \log \alpha}} \\ & = (1 -\alpha )e n^{1-c_n} e^{c_n \frac{\log n \log(1+ \frac{1}{\alpha \log n}) +1}{1- \alpha - \log \alpha}} \\ & = O(1) n^{1-c_n} \end{align*} where we substituted $K_n$ via (\ref{eq:proof1_k}) and used the fact that $ \log n \cdot \log(1+ \frac{1}{\alpha \log n}) \leq \frac{1}{\alpha}$. Taking the limit as $n \to \infty$ and recalling that $\lim _{n \to \infty} c_n = c >1$, we see that $\lim _{n \to \infty} a_n = 0$. Hence, for large $n$, we have \begin{align} Q_1 \leq \sum_{r=1}^{ \left\lfloor n/\log n \right\rfloor } \left( a_n \right)^r \leq \sum_{r=1}^{ \infty} \left( a_n \right)^r = \frac{a_n}{1-a_n} \end{align} where the geometric sum converges by virtue of $\lim _{n \to \infty} a_n = 0$. 
Using this once again, it is clear from the last expression that $\lim _{n \to \infty}Q_1 = 0$. Now, similarly consider the second summation $Q_2$. \begin{align} Q_2 \leq \sum_{r= \lfloor n/\log n\rfloor }^{ \left \lfloor (n-\alpha n)/2 \right\rfloor } \left((1 - \alpha )e\log n \cdot e^{ K_n \log(\frac{1+\alpha}{2}) -K_n\frac{1- \alpha}{2} } \right)^r \nonumber \end{align} \vspace{-2mm} Next, we define \begin{align} b_n &:= (1 - \alpha )e\log n \cdot e^{ -K_n \left( \frac{1-\alpha}{2} - \log(\frac{1+\alpha}{2}) \right)} \end{align} Substituting for $K_n$ via (\ref{eq:proof1_k}) and taking the limit as ${n \to \infty}$, it can be seen that $ \lim _{n \to \infty} b_n = 0 $ upon noting that $ \frac{1-\alpha}{2} - \log(\frac{1+\alpha}{2}) >0$ and $\lim _{n \to \infty} c_n =c >1$. With arguments similar to those used in the case of $Q_1$, we can show that when $n$ is large $Q_2 \leq b_n/(1-b_n)$, leading to $Q_2$ converging to zero as $n$ gets large. With $P_Z \leq Q_1 + Q_2$, and both $Q_1$ and $Q_2$ converging to zero when $n$ is large, we establish that $P_Z$ converges to zero as $n$ goes to infinity. This result also yields the desired conclusion $\lim _{n \to \infty} P(n,K_n,\gamma_n) =1$ in Theorem \ref{theorem:thmc_1} since $P_Z = 1-P(n,K_n,\gamma_n)$. We direct readers to \cite{icc2021proof} for proofs of the other theorems presented in Section \ref{sec:Main Results}. \vspace{-4mm} \section{Conclusions} \vspace{-1mm} In this paper, we provide a comprehensive set of results on the connectivity and giant component size of $\hhdn$, i.e., the random K-out graph with $\gamma_n$ randomly selected nodes deleted. Computer simulations are used to validate the results in the finite node regime. Using our results, we compare random K-out graphs with Erd\H{o}s-R\'enyi graphs with the same mean node degree and same number of deleted nodes, and show that random K-out graphs are either connected with higher probability or have a larger giant component. This reinforces the usefulness of random K-out graphs in various distributed network applications including federated averaging \cite{2018dprivacy,2020dprivacy}. Our results can help design networks with desired levels of robustness or tolerance to nodes failing, being captured, or being dishonest in many applications including wireless sensor networks and distributed averaging. \vspace{-1mm} \section*{Acknowledgements} This work was supported by the National Science Foundation through Grant \# CCF-1617934, and by CyLab through the Secure and Private IoT Initiative. \bibliographystyle{IEEEtran}
{ "timestamp": "2021-03-03T02:14:57", "yymm": "2103", "arxiv_id": "2103.01471", "language": "en", "url": "https://arxiv.org/abs/2103.01471", "abstract": "Random K-out graphs, denoted $\\mathbb{H}(n;K)$, are generated by each of the $n$ nodes drawing $K$ out-edges towards $K$ distinct nodes selected uniformly at random, and then ignoring the orientation of the arcs. Recently, random K-out graphs have been used in applications as diverse as random (pairwise) key predistribution in ad-hoc networks, anonymous message routing in crypto-currency networks, and differentially-private federated averaging. In many applications, connectivity of the random K-out graph when some of its nodes are dishonest, have failed, or have been captured is of practical interest. We provide a comprehensive set of results on the connectivity and giant component size of $\\mathbb{H}(n;K_n,\\gamma_n)$, i.e., random K-out graph when $\\gamma_n$ of its nodes, selected uniformly at random, are deleted. First, we derive conditions for $K_n$ and $n$ that ensure, with high probability (whp), the connectivity of the remaining graph when the number of deleted nodes is $\\gamma_n=\\Omega(n)$ and $\\gamma_n=o(n)$, respectively. Next, we derive conditions for $\\mathbb{H}(n;K_n,\\gamma_n)$ to have a giant component, i.e., a connected subgraph with $\\Omega(n)$ nodes, whp. This is also done for different scalings of $\\gamma_n$ and upper bounds are provided for the number of nodes outside the giant component. Simulation results are presented to validate the usefulness of the results in the finite node regime.", "subjects": "Information Theory (cs.IT)", "title": "On the Connectivity and Giant Component Size of Random K-out Graphs Under Randomly Deleted Nodes", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575183283513, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.7091542077693128 }
https://arxiv.org/abs/1502.00366
Partitions into a small number of part sizes
We study $\nu_k(n)$, the number of partitions of $n$ into $k$ part sizes, and find numerous arithmetic progressions where $\nu_2$ and $\nu_3$ take on values divisible by 2 and 4. Expanding earlier work, we show $\nu_2(An+B) \equiv 0 \pmod{4}$ for (A,B) = (36,30), (72,42), (252,114), (196,70), and likely many other progressions for which our method should easily generalize. Of some independent interest, we prove that the overpartition function $\bar{p}(n) \equiv 0 \pmod{16}$ in the first three progressions (the fourth is known), and thereby show that $\nu_3(An+B) \equiv 0 \pmod{2}$ in each of these progressions as well, and discuss the relationship between these congruences in more generality. We end with open questions in this area.
\section{Introduction} Denote the number of partitions of $n$ in which exactly $k$ sizes of part appear by $\nu_k(n)$. For instance, $\nu_2(5) = 5$, counting $$4+1, 3+2, 3+1+1, 2+2+1, \text{ and } 2+1+1+1.$$ This easily stated function has been studied by Major P. A. MacMahon \cite{MacMahon}, George Andrews \cite{GEA1}, and more recently Tani and Bouroubi \cite{TandB}, the latter specifically interested in $\nu_2$. The author in a recent paper \cite{Keith1} stated several theorems concerning $\nu_2$ and ventured further conjectures regarding $\nu_2$ and $\nu_3$, which it is the purpose of this paper to prove and expand. Despite attention from these authors, results of the kind found in other areas of partition theory, such as congruences in arithmetic progressions, have not been forthcoming; here we provide several, with a proof strategy easily adaptable to future possible candidates. Data on $\nu_k(n)$ relates to the study of overpartitions. An overpartition of $n$ is a partition of $n$ in which the last appearance of a given size of summand is either overlined or not. The overpartitions of 3 are \[ 3, \overline{3}, 2+1, \overline{2}+1, 2+\overline{1}, \overline{2}+ \overline{1}, 1+1+1, 1+1+\overline{1} .\] Often attributed originally to Major MacMahon, overpartitions have seen a surge of interest in recent years since the 2004 publication of a paper by Corteel and Lovejoy \cite{CoLo}, placing them in the context of more recent work in partition theory. Denote the number of overpartitions of $n$ by $\overline{p}(n)$. Then it is clear that $$\overline{p}(n) = 2 \nu_1 (n) + 4 \nu_2 (n) + 8 \nu_3(n) + \dots .$$ Thus data about $\nu_i(n)$ can inform or be informed by results on overpartitions. An example by Byungchan Kim \cite{Kim} is the theorem that $\overline{p}(n) \equiv 0 \pmod{8}$ if $n$ is neither a square nor twice a square; this is equivalent to the claim that for such numbers, $\frac{1}{2} \nu_1(n)$ and $\nu_2(n)$ are simultaneously both even or both odd. Our main theorems include several on $\nu_2(An+B)$ mod 4 and $\nu_3$ mod 2: \begin{theorem}\label{Nu2} $\nu_2(An+B) \equiv 0 \pmod{4}$ if $(A,B) \in \{(36,30), (72,42), (196,70), (252,114) \}$. \end{theorem} \begin{theorem}\label{Nu3} For each of the $(A,B)$ above, $\nu_3(An+B) \equiv 0 \pmod{2}$.\end{theorem} In order to prove Theorem \ref{Nu3}, we need several overpartition congruences modulo 16. The congruence for $(A,B) = (196,70)$ is already known; the others, and the generating function dissections we provide which prove them, are apparently new, although this is a field of active research. We record these below. \begin{theorem}\label{OverPtns} $\overline{p}(An+B) \equiv 0 \pmod{16}$ for all $(A,B)$ above.\end{theorem} There are many other candidate progressions. The proof techniques we give should be easily adaptable. \phantom{.} \noindent \textbf{Remark:} Between versions of this article, a paper appeared by Xinhua Xiong \cite{Xiong} in which $\overline{p}(n)$ is completely determined modulo 16 by the factorization of $n$. The overpartition identities in this paper would then also follow from Xiong's work and facts such as our candidate progressions never containing squares or sums of squares, along with observations regarding various primes appearing in their factorization. This work, plus the method elaborated below for $\nu_2$, together give a method for obtaining many progressions for $\nu_3$. 
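The relation $\overline{p}(n) = 2 \nu_1 (n) + 4 \nu_2 (n) + 8 \nu_3(n) + \dots$ and small values such as $\nu_2(5)=5$ are easy to confirm by brute force. The following short Python sketch is purely illustrative (the helper functions are ad hoc and not taken from any library): it enumerates partitions, counts part sizes, and recovers the first few overpartition numbers.

\begin{verbatim}
def partitions(n, max_part=None):
    """Yield every partition of n as a non-increasing tuple of parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def nu(k, n):
    """nu_k(n): number of partitions of n with exactly k distinct part sizes."""
    return sum(1 for lam in partitions(n) if len(set(lam)) == k)

def overpartitions(n):
    """p-bar(n) for n >= 1, via p-bar(n) = sum_k 2^k nu_k(n)."""
    return sum(2 ** k * nu(k, n) for k in range(1, n + 1))

assert nu(2, 5) == 5                    # 4+1, 3+2, 3+1+1, 2+2+1, 2+1+1+1
assert overpartitions(3) == 8           # the eight overpartitions of 3 listed above
print([overpartitions(n) for n in range(1, 8)])   # 2, 4, 8, 14, 24, 40, 64
\end{verbatim}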
\phantom{.} In the next section we give much of the background information necessary to verify the results in this paper, including useful formulas from MacMahon and Andrews for $\nu_i$, and several facts concerning modular forms which are central to the methodology. The author sincerely thanks Jeremy Rouse for assistance provided on MathOverflow (\cite{MOJRouse}, \cite{MOJRouse2}) answering related questions, and a careful referee for correcting an oversight in an earlier draft. In Section 3 we prove Theorem \ref{Nu2} by expanding Rouse's original method: we isolate as much as possible of what is common to all such progressions using MacMahon and Andrews' results, reducing the proof in each individual case to a short catalog of necessary modular forms, and a Sturm calculation verifying a summatory congruence. In Section 4 we prove Theorem \ref{Nu3}, proving Theorem \ref{OverPtns} in the process, and discuss challenges and possible avenues of attack in proceeding further. The last section gives a number of open questions which we think are of general interest. \section{Background Theorems} Since partitions into exactly one size of part have Ferrers diagrams which are just rectangles of area $n$, $\nu_1(n)$ is just $d(n)$, the divisor function, which is perfectly understood. If the factorization of $n$ into primes is $n = p_1^{\alpha_1} p_2^{\alpha_2} \dots$, then $$d(n) = (\alpha_1 + 1)(\alpha_2 + 1) \dots.$$ We are thus more interested in $\nu_2$ and $\nu_3$. MacMahon and Andrews gave generating functions for $\nu_k$ and, along with Karl Dilcher independently \cite{Dilcher}, all derived the identities \begin{equation}\label{Nu2Eq} \nu_2(n) = \frac{1}{2}\left(\sum_{k=1}^{n-1} d(k)d(n-k) - \sigma_1(n) + d(n) \right)\end{equation} and \begin{multline}\label{Nu3Eq} \nu_3(n) = \frac{1}{3} d(n) - \frac{1}{2} \sigma_1(n) + \frac{1}{6} \sigma_2(n) - \frac{1}{2} \sum_{k=1}^{n-1} d(k) \sigma_1(n-k) \\ + \frac{1}{2} \sum_{k=1}^{n-1} d(k) d(n-k) + \frac{1}{6} \sum_{k=1}^{n-2} \sum_{j=1}^{n-k-1} d(k) d(j) d(n-k-j)\end{multline} \noindent where $\sigma_k(n) = \sum_{d \vert n} d^k$. (Dilcher's identity is different in form but closely related.) Using these ideas, the author showed in \cite{Keith1} that \begin{theorem}\label{V2Mod2} If $n \equiv 2 \pmod{4}$, or $n$ has two or more primes appearing to odd order in its prime factorization, then $\nu_2(n) \equiv 0 \pmod{2}$.\end{theorem} Together with Rouse, the author further showed that \begin{theorem}\label{16Mod14} $\nu_2(16j+14) \equiv 0 \pmod{4}$. \end{theorem} Since the proof strategy for Theorem \ref{Nu2} is an expansion of this method, we sketch the proof for Theorem \ref{16Mod14} below. One observes that for $n = 16j+14$, $\sigma_1(n) \equiv 0 \pmod{8}$ and that $d(n) \equiv d\left(\frac{n}{2}\right)^2 \pmod{8}$, so these can be removed from equation (\ref{Nu2Eq}) and it remains to show that $$\sum_{k=1}^{\frac{n-2}{2}} d(k) d(n-k) \equiv 0 \pmod{4}.$$ There are no odd terms, since $n$ is not the sum of two squares (observe quadratic residues mod 16), and therefore we wish to show that there are an even number of terms that are not multiples of 4. The only terms that are not multiples of 4 are those in which $k$ or $n-k$ is a square and $d$ of the other term is congruent to 2 mod 4, i.e. $n-k$ or $k$ respectively is $p y^2$ for $p$ a prime, with $s_p(y) \equiv 0 \pmod{2}$ where $s_p(y)$ is the power of $p$ in the prime factorization of $y$.
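Both formula (\ref{Nu2Eq}) and the statement of Theorem \ref{16Mod14} are easy to check numerically for small arguments, and such a check is a useful sanity test when implementing the reductions described here. The following Python sketch is included only for illustration; the divisor helpers use naive trial division and are our own, not part of any library.

\begin{verbatim}
def d(n):
    return sum(1 for k in range(1, n + 1) if n % k == 0)      # number of divisors

def sigma1(n):
    return sum(k for k in range(1, n + 1) if n % k == 0)      # sum of divisors

def nu2(n):
    """nu_2(n) computed from the identity for nu_2 above."""
    conv = sum(d(k) * d(n - k) for k in range(1, n))
    return (conv - sigma1(n) + d(n)) // 2

assert nu2(5) == 5
for j in range(30):
    assert nu2(16 * j + 14) % 4 == 0                          # divisible by 4, as claimed
print([nu2(16 * j + 14) for j in range(5)])
\end{verbatim}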
Thus the theorem reduces to showing that there are an even number of representations of $n$ in the form $n=x^2+py^2$ with the appropriate conditions on the prime $p$, since for each such pair $k$ will be the smaller of the two terms $x^2$ or $py^2$. In order to analyze the parity of the number of such representations, we avail ourselves of the congruences $$F(q) := \sum_{n=0}^\infty \sigma_1(2n+1) q^{2n+1} \equiv \sum_{n =0}^\infty q^{(2n+1)^2} \pmod{2}$$ and $$G(q) := \frac{1}{2} \sum_{n=0}^\infty \sigma_1(8n+5) q^{8n+5} \equiv \sum_{{p \equiv 5 \pmod{8}} \atop {y \geq 1, 2 \vert s_p(y)}} q^{p y^2} \pmod{2}.$$ (When we write, for functions $F(q) = \sum_{n=0}^\infty f(n)q^n$ and $G(q)=\sum_{n=0}^\infty g(n) q^n$, that $F(q) \equiv G(q) \pmod{c}$, we mean that $f(n) \equiv g(n) \pmod{c}$ for all $n$.) With these functions, $$T(q) = F(q)G(q) + F(q^4)F(q^2)$$ \noindent has coefficients whose parity equals that of the number of representations we desire. This construction is advantageous since $F(q)$ and $G(q)$ are modular forms, and thus, by the properties listed below, so is $T(q)$. Indeed, we can calculate that $T(q)$ is a modular form of weight 4 for $\Gamma_0(64)$ and hence the Sturm bound is 32; a short calculation of the type described below shows that all coefficients are even, and thus the theorem is shown. The facts in the preceding paragraph are due to the properties of modular forms. We refer the interested reader to any textbook on modular forms for a more detailed study; we here summarize the properties we need. \begin{itemize} \item A modular form is said to be of weight $k$ for $\Gamma_0(N)$ or $\Gamma_1(N)$, certain subgroups of the modular group on the upper half-plane. Its \emph{level} is the minimum possible $N$. Such a form is also of weight $k$ for any $\Gamma_0(cN)$ or $\Gamma_1(cN)$, $c \in \mathbb{N}$. \item Modular forms of a given weight for $\Gamma_i(N)$ form vector spaces over $\mathbb{C}$. \item The substitutions $q \rightarrow q^c$ for $c \in \mathbb{N}$ send forms of weight $k$ for $\Gamma_i(N)$ to forms of weight $k$ for $\Gamma_i(cN)$. \item The product of two modular forms for $\Gamma_i(N)$ of weights $k$ and $\ell$ is a modular form for $\Gamma_i(N)$ of weight $k+\ell$. \item For a form $f(q)$, if all coefficients of $q^i$ in $f$ for $i$ below the \emph{Sturm bound} are divisible by a given prime, then all coefficients of $f$ are so divisible. This bound is $\frac{k}{12} N \prod_{p \vert N} \left(\frac{p+1}{p}\right)$ (the product is over all primes dividing $N$) for a form in $\Gamma_0(N)$ of weight $k$ and level dividing $N$, and for a form in $\Gamma_1(N)$ is increased by a factor equal to the index of the subgroup of $\Gamma_0(N)$ for which $f(q)$ is a form. \item If $f(q) = \sum_{n=0}^\infty a(n) q^n$ is a modular form of weight $k$ and level $N$ for $\Gamma_0(N)$, then for $m \vert N$, $g(q) = \sum_{n=0}^\infty a(mn) q^n$ is also a modular form of weight $k$ and level $N$ for $\Gamma_0(N)$, and if $\chi$ is a primitive Dirichlet character mod $M$, then $g(q) = \sum_{n=0}^\infty a(n) \chi (n) q^n$ is a modular form of weight $k$ for $\Gamma_1(N M^2)$. \end{itemize} The last property allows us to dissect modular forms as needed for our proofs, for by selecting characters that cancel properly when the forms are added, we may construct from the form $f(q) = \sum_{n=0}^\infty a(n) q^n$ a form $g(q) = \sum_{n=0}^\infty a(An+B) q^{An+B}$ of the same weight and higher level.
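The Sturm bound in the $\Gamma_0(N)$ case is completely explicit, and the values quoted in this paper are quick to reproduce. The short Python sketch below simply evaluates the displayed formula in exact integer arithmetic (it is ours and assumes the result is an integer, as it is in all cases used here).

\begin{verbatim}
def prime_divisors(n):
    ps, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            ps.add(p)
            n //= p
        p += 1
    if n > 1:
        ps.add(n)
    return ps

def sturm_bound_gamma0(weight, N):
    """(k/12) * N * prod_{p | N} (p+1)/p for Gamma_0(N)."""
    b = weight * N
    for p in prime_divisors(N):
        b = b * (p + 1) // p
    return b // 12

print(sturm_bound_gamma0(4, 64))      # 32, the bound used for T(q)
print(sturm_bound_gamma0(4, 6 ** 6))  # 31104, the Gamma_0 bound for level 6^6 used in Section 3
\end{verbatim}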
The form constructed from such twists will likely lie in $\Gamma_1(N)$ for the required level, in which case the Sturm bound will be increased by a factor equal to the order of the subgroup of squares of Dirichlet characters of modulo $A$ in the group of all Dirichlet characters modulo $A$. Our proofs will require the facts that $F(q)$ (defined above) is of weight 2 and level 4, and $H(q) := \sum_{n=0}^\infty \sigma_1(3n+1) q^{3n+1}$ is of weight 2 and level 9. \section{Partitions into 2 sizes of part} In \cite{Keith1} it was conjectured that $\nu_2(36n+30) \equiv 0 \pmod{4}$. This and more is true. We prove Theorem \ref{Nu2} by an expansion of the methodology above, executing the proof strategy in detail for $(A,B) = (36,30)$. \begin{proof} Set $n=36j+30$. We again observe that $\sigma_1(n) \equiv 0 \pmod{8}$ (since $3 \vert\vert n$ and at least one prime $6\ell+5$ appears to odd order in its factorization) and that $d(n) \equiv d(\frac{n}{2})^2 \pmod{8}$, and so again it suffices to show \begin{equation}\label{ShortD2} \sum_{k=1}^{\frac{n-2}{2}} d(k) d(n-k) \equiv 0 \pmod{4}. \end{equation} By the same argument as before, we wish to show that there exist an even number of representations $n = x^2+py^2$, $s_p(y) \equiv 0 \pmod{2}$. There are now six possible residue classes for $x$, with several possible values mod 36 of $p$ and $y$ for each. We summarize these in the following table. \begin{center}\begin{tabular}{|c|c|c|} \hline $x^2 \pmod{36}$ & $py^2 \pmod{36}$ & Possible $(p,y^2) \pmod{36}$ \\ \hline 1 & 29 & $\{(29,1), (17,25) , (5,13)\}$ \\ \hline 13 & 17 & $\{(17,1), (5,25), (29,13)\}$ \\ \hline 25 & 5 & $\{(5,1) , (29,25), (17,13)\}$ \\ \hline 4 & 26 & $\{(2,13)\}$ \\ \hline 16 & 14 & $\{(2,25)\}$ \\ \hline 28 & 2 & $\{(2,1)\}$ \\ \hline \end{tabular}\end{center} We construct the following modular forms. All $q$-series congruences are mod 2. For $i = 1,25, \text{ or } 13$, with $\epsilon(i) = 1, 5, \text{ or } 7$ respectively, $$F_{x,i}(q) := \sum_{j=0}^\infty \sigma_1(36j+i) q^{36j+i} \equiv \sum_{{j=0} \atop {j \equiv \pm \epsilon(i) \, (\text{mod }18})}^\infty q^{j^2}.$$ To illustrate for clarity, $F_{x,13} = \sum_{j=0}^\infty \sigma_1(36j+13) q^{36j+13}$, which is congruent modulo 2 to $\sum q^{j^2}$ with the sum taken over positive $j \equiv \pm 7 \pmod{18}$. For $\ell = 4,16, \text{ or } 28$, with $\epsilon(\ell) = 2, 4, \text{ or } 8$ respectively, $$F_{x,\ell}(q) := \sum_{j=0}^\infty \sigma_1(9j+\ell/4) \left(q^4\right)^{9j+\ell/4} \equiv \sum_{j=0}^\infty \sigma_1(36j+\ell) q^{36j+\ell} \equiv \sum_{{j=0} \atop {j \equiv \pm \epsilon(\ell) \, (\text{mod } 18)}}^\infty q^{j^2}.$$ For $m = 13, 7, \text{ or } 1$, we observe that $\sigma_1(18j+m) \equiv \sigma_1(36j+2m) \pmod{2}$. Let $\epsilon(m) = 7, 5, \text{ or } 1$, respectively. 
We construct for these $m$ $$G_{y,2m}(q) := \sum_{j=0}^\infty \sigma_1(18j+m) \left(q^2\right)^{18j+m} \equiv \sum_{j=0}^\infty \sigma_1(36j+2m) q^{36j+2m} \equiv \sum_{{j=0} \atop {j \equiv \pm \epsilon(m) \, (\text{mod } 18)}}^\infty \left(q^2\right)^{j^2}.$$ Finally, for $k = 29, 17, \text{ or } 5$, define \begin{align*} G_{y,29}(q) :=& \frac{1}{2} \sum_{j=0}^\infty \sigma_1(36j+29) q^{36j+29} \equiv \sum_{{p \equiv 29 \pmod{36}} \atop {{y \equiv \pm 1 \pmod{18}} \atop {s_p(y) \, \text{even}}} } q^{p y^2} + \sum_{{p \equiv 17 \pmod{36}} \atop {{y \equiv \pm 5 \pmod{18}} \atop {s_p(y) \, \text{even}}} } q^{p y^2} + \sum_{{p \equiv 5 \pmod{36}} \atop {{y \equiv \pm 7 \pmod{18}} \atop {s_p(y) \, \text{even}}} } q^{p y^2} \\ G_{y,17}(q) :=& \frac{1}{2} \sum_{j=0}^\infty \sigma_1(36j+17) q^{36j+17} \equiv \sum_{{p \equiv 17 \pmod{36}} \atop {{y \equiv \pm 1 \pmod{18}} \atop {s_p(y) \, \text{even}}} } q^{p y^2} + \sum_{{p \equiv 5 \pmod{36}} \atop {{y \equiv \pm 5 \pmod{18}} \atop {s_p(y) \, \text{even}}} } q^{p y^2} + \sum_{{p \equiv 29 \pmod{36}} \atop {{y \equiv \pm 7 \pmod{18}} \atop {s_p(y) \, \text{even}}} } q^{p y^2} \\ G_{y,5}(q) :=& \frac{1}{2} \sum_{j=0}^\infty \sigma_1(36j+5) q^{36j+5} \equiv \sum_{{p \equiv 5 \pmod{36}} \atop {{y \equiv \pm 1 \pmod{18}} \atop {s_p(y) \, \text{even}}} } q^{p y^2} + \sum_{{p \equiv 29 \pmod{36}} \atop {{y \equiv \pm 5 \pmod{18}} \atop {s_p(y) \, \text{even}}} } q^{p y^2} + \sum_{{p \equiv 17 \pmod{36}} \atop {{y \equiv \pm 7 \pmod{18}} \atop {s_p(y) \, \text{even}}} } q^{p y^2}. \end{align*} These modular forms have the following weights and levels: $F_{x,1}$, $F_{x,25}$, $F_{x,13}$, $G_{y,29}$, $G_{y,17}$, and $G_{y,5}$ are all dissections of $F(q)$ by characters mod 36, and so they are all of weight 2 for $\Gamma_1(36^2 \cdot 4) = \Gamma_1(5184)$. $F_{x,4}$, $F_{x,16}$, and $F_{x,28}$ are dissections of $H(q)$ by characters mod 9, thereafter magnified by the substitution $q\rightarrow q^4$, so they are of weight 2 for $\Gamma_1(4\cdot 9^3) = \Gamma_1(2916)$. $G_{y,26}$, $G_{y,14}$ and $G_{y,2}$ are all dissections of $H(q)$ by characters mod 18, thereafter magnified by the substitution $q \rightarrow q^2$, so they are modular forms of weight 2 for $\Gamma_1(9 \cdot 18^2 \cdot 2) = \Gamma_1(5832)$. The product of any two of these modular forms of weight 2 for $\Gamma_1(N_1)$ and $\Gamma_1(N_2)$ is a modular form of weight 4 for $\Gamma_1(lcm(N_1,N_2))$. For the odd $F$ and $G$, we have $F_{x,i} G_{y,j}$ of weight 4 for $\Gamma_1(5184)$. The even cases $F_{x,2i} G_{y,2j}$ are of weight 4 for $\Gamma_1(5832)$. Finally, the sum of all these is a modular form of weight 4 for $\Gamma_1(lcm(5184,5832))=\Gamma_1(46656)=\Gamma_1(6^6)$. Define $S=\{1,25,13,4,16,28\}$ and set $$R(q) = \sum_{n=0}^\infty r(n) q^n = \sum_{i \in S} F_{x,i} (q) G_{y,30-i} (q).$$ Then $r(n)$ is 0 for terms other than $n=36j+30$, and for these terms is of the same parity as the number of representations of $n$ of the form sought. If $R(q)$ were in $\Gamma_0(N)$, the Sturm bound for $R(q)$ would be $\frac{4}{12} 46656 \left(\frac{3}{2}\frac{4}{3}\right) = 31104$. However, $R(q)$ instead lies in $M_4(\Gamma_0(6^6)) \oplus M_4(\Gamma_0(6^6), \chi) \oplus M_4(\Gamma_0(6^6),\chi^2)$ where $\chi$ is some Dirichlet character of order 3, and thus $R(q)$ is a modular form of weight 4 for a subgroup of $\Gamma_0(6^6)$ of index 3, so the bound required is 93312.
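For small members of the progression, the congruence (\ref{ShortD2}) that this machinery is designed to establish can also be confirmed by direct computation. The following naive Python check is ours and independent of the modular forms argument; the trial-division divisor function makes it feasible only for small $n$.

\begin{verbatim}
def d(n):
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def half_sum(n):
    """sum_{k=1}^{(n-2)/2} d(k) d(n-k), the left-hand side of (ShortD2)."""
    return sum(d(k) * d(n - k) for k in range(1, (n - 2) // 2 + 1))

for j in range(15):
    n = 36 * j + 30
    assert half_sum(n) % 4 == 0       # divisible by 4, as claimed
print("checked n = 30, 66, ..., 534")
\end{verbatim}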
It is a straightforward calculation to construct all these forms in Mathematica or another symbolic computation package, expand the series to the Sturm bound, and check that all coefficients up to $q^{93312}$ are even. Hence, all coefficients are even, and so the number of representations of $36j+30$ of the form required is also even. Thus, $\sum_{k=1}^{\frac{n-2}{2}} d(k) d(n-k) \equiv 0 \pmod{4}$, and the theorem holds. For progressions $n=Aj+B$ with $A$ even in which $\sigma_1(n) \equiv 0 \pmod{8}$ and $d(n) \equiv d\left(\frac{n}{2}\right)^2 \pmod{8}$, the sum reduces the same way to showing $$\sum_{k=1}^{\frac{n-2}{2}} d(k) d(n-k) \equiv 0 \pmod{4}.$$ If the candidate progression is among those which can never contain sums of two squares, then we again reduce the question to analyzing the parity of the number of representations of $n$ of the form $x^2+py^2$, $s_p(y) \equiv 0 \pmod{2}$, which if $Aj+B$ is a suitable progression we can analyze exactly as before. The other progressions of the theorem can be analyzed in such a fashion. We omit the repetitive details, noting only that for $A=72, 196, 252$ our multiples of the $\Gamma_0$ bound are 6, 21, and 9 respectively; the necessary parity checks can be easily verified by a symbolic computation package on a laptop computer. \end{proof} \noindent \textbf{Remarks:} These are not exhaustive even of candidates of small moduli. We note that computation has not yet suggested a candidate progression in which the conditions described do \emph{not} hold. It would be reasonable to conjecture that the conditions are necessary: \begin{conjecture}\label{Nu2Conj} If $\nu_2(An+B) \equiv 0 \pmod{4}$ for all $n$, then for all $n$ it holds that $\sigma_1(An+B) \equiv 0 \pmod{8}$, $d(An+B) \equiv d\left(\frac{An+B}{2}\right)^2 \pmod{8}$, and the progression $An+B$ never contains sums of two squares. \end{conjecture} \section{Partitions into 3 sizes of part} We observed in the introduction that the result of Kim, that $\overline{p}(n) \equiv 0 \pmod{8}$ for $n \neq k^2, 2k^2$, is equivalent to the claim that for such $n$, $\frac{1}{2}\nu_1(n)$ and $\nu_2(n)$ are simultaneously both even or both odd. Once we have information on $\nu_i$ for $i < k$, information about overpartitions gives us information about $\nu_k$. We will prove Theorem \ref{Nu3} by first giving several facts about $\nu_1$ and $\nu_2$, then proving or employing congruences for $\overline{p}(An+B)$ modulo 16. \phantom{.} \noindent \emph{Proof of Theorem \ref{Nu3}.} Begin by observing that for $(A,B)$ one of the ordered pairs $\{(36,30), (72,42), (196,70), (252,114) \}$, $\nu_1(An+B) \equiv 0 \pmod{8}$ because at least three primes divide $An+B$ with odd exponent. To wit, $36n+30 = 6(6n+5)$, and 5 is a quadratic nonresidue modulo 6, so 2, 3, and some additional prime divide $36n+30$ to odd order. For $72n+42 = 6(12n+7)$, 7 is a quadratic nonresidue modulo 12; for $196n+70 = 14(14n+5)$, 5 is a quadratic nonresidue modulo 14; for $252n+114 = 6(42n+19)$, 19 is a quadratic nonresidue modulo 42. We previously showed that $\nu_2(An+B) \equiv 0 \pmod{4}$ in each of these progressions. Therefore, we have $$\overline{p}(An+B) \equiv 2\cdot 8 + 4 \cdot 4 + 8 \cdot \nu_3(An+B) + \dots \pmod{16}.$$ Thus, if $\overline{p}(An+B) \equiv 0 \pmod{16}$ in these progressions, it must follow that $\nu_3(An+B) \equiv 0 \pmod{2}$. Hence we show Theorem \ref{OverPtns}. \begin{proof} The case $(A,B) = (196,70)$ is separate and follows immediately from Theorem 1.2 of Chen et. 
al in \cite{CHSZ}, which holds that $\overline{p}(7n) \equiv 0 \pmod{16}$ unless $7 \vert n$. For the remaining cases, in which $36 \vert A$, we employ several identities from \cite{XiaYao}, beginning with congruence (4.29) of that paper. For compactness, we employ the notation $$f_i = \prod_{k=1}^\infty (1-q^{ik}).$$ It holds for $\ell > 0$ that ${f_i}^{2^\ell} \equiv {f_{2i}}^{2^{\ell-1}} \pmod{2^\ell}$, and more generally for $\ell > k \geq 0$ that $$2^k {f_i}^{2^{\ell-k}} \equiv 2^k {f_{2i}}^{2^{\ell-k-1}} \pmod{2^{\ell-k}}.$$ We will require two lemmas from \cite{CHSZ}. First is the 3-dissection of the overpartition function: \begin{lemma}\label{OPmod3} $\frac{f_2}{{f_1}^2} = \frac{f_6^4 f_9^6}{f_3^8f_{18}^3} + 2q\frac{f_6^3f_9^3}{f_3^7}+4q^2\frac{f_6^2f_{18}^3}{f_3^6}$ \end{lemma} Next is the 2-dissection of another quotient: \begin{lemma}\label{ThreeEven} $\frac{f_3^3}{f_1} = \frac{f_4^3f_6^2}{f_2^2f_{12}} + q \frac{f_{12}^3}{f_4}$ \end{lemma} Now, from the congruence (4.29) cited above, we extract the even terms and reduce the congruence to one modulo 16 to obtain $$\sum_{n=0}^\infty \overline{p}(6n) q^n \equiv \frac{f_2^4 f_{12}^{15}}{f_1^8 f_{6}^6 f_{24}^6} + 12q^3 \frac{f_{12}^3 f_{24}^2}{f_6^2} \pmod{16}.$$ We wish to extract from this identity those terms in which $n \equiv 5 \pmod{6}$. In the second summand, all $q^n$ have $n \equiv 0 \pmod{3}$, so we discard these. Take $\frac{f_2^4}{f_1^8}$ and employ Lemma \ref{OPmod3} to obtain \begin{equation}\label{Fourth}\sum_{n=0}^\infty \overline{p}(6n)q^n \equiv \frac{f_{12}^{15}}{f_6^6 f_{24}^6} \left( \frac{f_6^4 f_9^6}{f_3^8f_{18}^3} + 2q\frac{f_6^3f_9^3}{f_3^7}+4q^2\frac{f_6^2f_{18}^3}{f_3^6} \right)^4 + \cdots \pmod{16}\end{equation} \noindent where the elided terms do not have power $n \equiv 5 \pmod{6}$. Now expand the fourth power, disregarding all terms with an integer coefficient divisible by 16, and extract all those terms in which $n \equiv 2 \pmod{3}$: $$\sum_{n=0}^\infty \overline{p}(18n+12)q^{3n+2} \equiv \frac{f_{12}^{15}}{f_6^6 f_{24}^6} \left( 24q^2 \frac{f_6^7 f_9^{18}}{f_3^{30} f_{18}^6} \right) \pmod{16}.$$ But since $24\frac{1}{f_3^{30}} \equiv 24 \frac{1}{f_6^{15}} \pmod{16}$, all powers in this congruence with odd coefficient are in fact congruent to 2 mod 6, and so no terms congruent to 5 mod 6 appear with nonzero coefficient modulo 16. The theorem is proved for $(A,B) = (36,30)$. To prove the case $(A,B) = (72,42)$, we start from equation (\ref{Fourth}) and extract terms congruent to $7 \pmod{12}$. Begin with terms congruent to $1 \pmod{3}$: $$\sum_{n=0}^\infty \overline{p}(18n+6)q^{3n+1} \equiv \frac{f_{12}^{15}}{f_6^6 f_{24}^6} \left( 8 q \frac{f_6^{15} f_9^{21}}{f_3^{31} f_{18}^9} \right) \equiv 8q \frac{f_9^3}{f_3} \pmod{16}.$$ We now employ Lemma \ref{ThreeEven} to obtain those terms that are 1 mod 6: $$ \sum_{n=0}^\infty \overline{p}(36n+6) q^{6n+1} \equiv 8q \frac{f_{12}^3 f_{18}^2}{f_6^2 f_{36}} \equiv 8q f_{24} \pmod{16}.$$ Hence there are no powers $q^j$ with $j \equiv 7 \pmod{12}$ that have coefficients nonzero modulo 16. Finally, to show the case $(A,B) = (252,114)$, we take the congruence above: $$ \sum_{n=0}^\infty \overline{p}(6(6n+1)) q^{n} \equiv 8 f_{4} \pmod{16}.$$ \noindent and extract terms where $n \equiv 3 \pmod{7}$. But by the Pentagonal Number Theorem $f_4 = \sum_{n=-\infty}^\infty (-1)^n q^{2n(3n+1)},$ and for integer $n$ the exponent $2n(3n+1)$ only takes on residues 0, 1, 4, or 6 modulo 7. Thus, in the progression $(A,B) = (252,114)$ no coefficients are nonzero mod 16, and Theorem \ref{OverPtns} is proven.
\end{proof} Since Theorem \ref{OverPtns} holds, and the necessary conditions on $\nu_1$ and $\nu_2$ are fulfilled, it follows that $\nu_3 \equiv 0 \pmod{2}$ in the progressions studied. Thus Theorem \ref{Nu3} is proved. \hfill $\Box$ \section{Open Questions} One could possibly use information about $\nu_k$ to prove statements about overpartitions, at least modulo powers of 2. In order to do so one would need to analyze $\nu_k$ without invoking overpartition congruences, analyzing the parity of the terms in the generating function as we did for Theorem \ref{Nu2}. At present we require information about $\nu_1$ and $\nu_2$ to obtain information on $\nu_3$. While it is conceivable that $\nu_3$ might possess arithmetic progressions in which all values are even without the corresponding congruences modulo higher powers of 2 holding for $\nu_2$ and $\nu_1$, we believe there is good reason to think that this is not the case, and formally conjecture: \begin{conjecture}\label{Nu3Conj} If $\nu_3(An+B) \equiv 0 \pmod{2}$ for some arithmetic progression, it also holds that $\nu_2(An+B) \equiv 0 \pmod{4}$. \end{conjecture} Why might this conjecture hold? Suppose one wishes to show $\nu_3(36j+30) \equiv 0 \pmod{2}$ by analyzing the parity of the terms in equation (\ref{Nu3Eq}). Suppose we have already shown for $n \equiv 30 \pmod{36}$ that $\nu_2(n) \equiv 0 \pmod{4}$, and we know $d(n) \equiv 0 \pmod{8}$, $\sigma_1(n) \equiv 0 \pmod{8}$, and $\sum_{k=1}^{n-1} d(k) d(n-k) \equiv 0 \pmod{8}$. (In any other arithmetic progression, if any three of these are true, all four are, because we may subtract the other terms in equation (\ref{Nu2Eq}) from $\nu_2(n)$.) We may then subtract these terms from equation (\ref{Nu3Eq}) for $\nu_3(n)$ to obtain \begin{multline*} \nu_3(n) = \frac{1}{3} d(n) - \frac{1}{2} \sigma_1(n) + \frac{1}{6} \sigma_2(n) - \frac{1}{2} \sum_{k=1}^{n-1} d(k) \sigma_1(n-k) \\ + \frac{1}{2} \sum_{k=1}^{n-1} d(k) d(n-k) + \frac{1}{6} \sum_{k=1}^{n-2} \sum_{\ell=1}^{n-k-1} d(k) d(\ell) d(n-k-\ell) \\ \equiv -\frac{1}{6}d(n) +\frac{1}{6} \sigma_2(n) - \frac{1}{2} \sum_{k=1}^{n-1} d(k) \sigma_1(n-k) \\ + \frac{1}{6} \sum_{k=1}^{n-2} \sum_{\ell=1}^{n-k-1} d(k) d(\ell) d(n-k-\ell) \pmod{2}. \end{multline*} It is not difficult to show that $d(36j+30) \equiv -\sigma_2(36j+30) \pmod{12}$ (both functions being multiplicative, one simply observes the values mod 6 of each factor) and hence we can write \begin{multline*} \nu_3(n) \equiv -\frac{1}{3}d(n) - \frac{1}{2} \sum_{k=1}^{n-1} d(k) \sigma_1(n-k) \\ + \frac{1}{6} \sum_{k=1}^{n-2} \sum_{\ell=1}^{n-k-1} d(k) d(\ell) d(n-k-\ell) \pmod{2}. \end{multline*} We now note that many terms in the final sum can be cast out modulo 2. If exactly one of $k$, $\ell$, or $n-k-\ell$ is a nonsquare and the other two terms are not equal, we can group the six permutations of the three entries, each of which has an even product, and discard them. If exactly one entry is a square, we can again do so -- we may have only three permutations, but the product is divisible by 4. If all three are non-squares, the only term we cannot permute and cast out is when $k = \ell = 12j+10$, for which $d(12j+10)$ need not be divisible by 3. But $d(12j+10) = \frac{1}{2} d(36j+30)$. If we add one-sixth of the cube of this to $-\frac{1}{3}d(n)$, we obtain $\frac{1}{3} \cdot \frac{1}{16} d(n)(d(n)-4)(d(n)+4)$. In the latter product one term is divisible by 3 and since $d(n) \equiv 0 \pmod{8}$, the whole is an even integer.
If all three are squares, then they cannot be the same square (10 is not a quadratic residue mod 12) and thus they have 3 or 6 permutations; however, we may not be able to cast out such terms. For instance, $30=25+4+1$ and the six permutations thereof, and this is the only such representation of 30. We are also left with terms in which exactly one entry is a non-square and the other two are equal squares. We may multiply by 3 and take the representative of these in which $k = \ell$ are the squares. Thus, we end up interested in representations of $n$ by three squares, or twice a square and a non-square. (It is interesting that overpartition identities so often relate to identities concerning sums of squares.) Now observe that in $\frac{1}{2} \sum_{k=1}^{n-1} d(k) \sigma_1(n-k)$, $\sigma_1(n-k) \equiv d(n-k) \pmod{2}$ unless $n-k$ is twice a square, and we already know that $\frac{1}{2} \sum_{k=1}^{n-1} d(k) d(n-k) \equiv 0 \pmod{4}$. Thus we can reduce the sought identity to \begin{multline} \nu_3(n) \equiv -\frac{1}{2} \sum_{k=1}^{n-1} d(k)\sigma_1(n-k) \\ + \sum_{{j+k+\ell=n} \atop {0 < j < k < \ell \text{ distinct squares}}} d(j)d(k)d(\ell) + \frac{1}{2} \sum_{k=1}^{\lfloor \sqrt{(n-1)/2} \rfloor} d(k^2)^2 d(n-2k^2) \pmod{2}.\end{multline} We know this is congruent to 0, by the previous work; the search for a direct proof seems like a natural question of interest. Finally, in addition to the conjectures stated previously, a number of open questions present themselves. \begin{enumerate} \item Treat candidate progressions in a more unified fashion, probably via the theory of eigenforms. Can we show the existence of an infinite class of $(A,B)$ for which $\nu_2(An+B) \equiv 0 \pmod{4}$, and/or $\nu_3(An+B) \equiv 0 \pmod{2}$? \item Numerical experimentation to date has found no progressions $An+B$ in which $\nu_2(An+B) \equiv 0 \pmod{N}$ for any $N$ other than 2 or 4; and none for $\nu_3$ other than $N=2$. If different moduli occur, they may have large progression modulus $A$. Do these occur, and if so, how can they be efficiently found, or are they forbidden? \item Experimentation has yielded no progressions with nontrivial modulus for $\nu_k$ with $k > 3$. It is plausible that these never occur, since from the formulas in Andrews \cite{GEA1} these values involve sums concerning $d(k) \sigma_2(n-k)$, and $\sigma_2(j)$ is not part of the same framework of modular forms and their symmetries as $\sigma_1$. (When it appeared in $\nu_3(36j+30)$ it was a single term which cancelled with $d(n)$.) Again, can these occur, and if so where, or are they forbidden? \item Elaborate on the relationships between $\nu_1$ and $\nu_2$, and between $\nu_2$ and $\nu_3$. State conditions necessary and/or sufficient for simultaneous congruences. \item Complete the combinatorial proof for $\nu_3(36j+30)$ and generalize to other progressions. \end{enumerate} \section{Acknowledgements} The author sincerely thanks Jeremy Rouse for assistance provided on MathOverflow \cite{MOJRouse, MOJRouse2}, and a careful anonymous referee for correcting an oversight in an earlier draft.
{ "timestamp": "2016-05-05T02:12:38", "yymm": "1502", "arxiv_id": "1502.00366", "language": "en", "url": "https://arxiv.org/abs/1502.00366", "abstract": "We study $\\nu_k(n)$, the number of partitions of $n$ into $k$ part sizes, and find numerous arithmetic progressions where $\\nu_2$ and $\\nu_3$ take on values divisible by 2 and 4. Expanding earlier work, we show $\\nu_2(An+B) \\equiv 0 \\pmod{4}$ for (A,B) = (36,30), (72,42), (252,114), (196,70), and likely many other progressions for which our method should easily generalize. Of some independent interest, we prove that the overpartition function $\\bar{p}(n) \\equiv 0 \\pmod{16}$ in the first three progressions (the fourth is known), and thereby show that $\\nu_3(An+B) \\equiv 0 \\pmod{2}$ in each of these progressions as well, and discuss the relationship between these congruences in more generality. We end with open questions in this area.", "subjects": "Combinatorics (math.CO)", "title": "Partitions into a small number of part sizes", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575178175919, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.7091542074006757 }
https://arxiv.org/abs/2207.08985
Representing systems of reproducing kernels in spaces of analytic functions
We give an elementary construction of representing systems of the Cauchy kernels in the Hardy spaces $H^p$, $1 \le p <\infty$, as well as of representing systems of reproducing kernels in weighted Hardy spaces.
\section{Introduction and main results} \label{section1} A system $\{x_n\}_{n\ge 1}$ in a separable infinite-dimensional Banach space $X$ is said to be a {\it representing system for $X$} if, for every element $x\in X$, there exists a sequence of complex numbers $\{c_n\}_{n\ge 1}$ such that $$ x=\sum_{n\ge 1} c_n x_n, $$ where the series converges in the norm of $X$. In contrast to a (probably better known) notion of a Schauder basis we do not require that the coefficients in this representation be unique. Representing systems have been studied extensively, both in the general functional analysis context and for specific systems in function spaces. E.g., there exists a vast literature dealing with representing systems of exponentials in various Fr\'echet spaces of analytic functions (see surveys \cite{is, kor}). However, it seems that representing systems of reproducing kernels in classical spaces of analytic functions in the disk did not attract much attention until recently. \medskip \subsection{Classical Hardy spaces} In \cite{fkl} E. Fricain, L.H. Khoi and P. Lef\`evre addressed the existence problem for the representing and absolutely representing systems of reproducing kernels in reproducing kernel Hilbert spaces and showed that many classical spaces do not possess absolutely representing systems of reproducing kernels. The question about existence of representing systems remained open. In particular, in \cite{fkl}, the authors asked the following \medskip \\ {\bf Question.} {\it Do there exist sequences $\Lambda = \{\lambda_n\}_{n\ge 1} \subset \D$ such that the system $\mathcal{K}(\Lambda) = \{k_{\lambda_n}\}_{n\ge 1}$, where $$ k_\lambda(z) = \frac{1}{1-\bar \lambda z} $$ is the Cauchy \textup(or Szeg\"o\textup) kernel at $\lambda$, is representing for the Hardy space $H^2$ in the unit disk $\D$? } The positive answer to this question was given by K.\,S.~Speranskii and P.\,A.~Terekhin \cite{st1}. Namely, it was shown in \cite{st1} that for the sequence $$ \Lambda = \Big\{ \lambda_{k,j} = \Big(1-\frac{1}{k} \Big) e^{\frac{2\pi i j}{k}}: k\ge 1, j=0, 1, \dots, k-1\Big\} $$ the system $\mathcal{K}(\Lambda)$ is representing for $H^2$. The sequence $\Lambda$ is assumed to have the standard alphabetical order: $\lambda_{1, 0}, \lambda_{2,0}, \lambda_{2,1}, \lambda_{3,0}, \dots$. In what follows we always assume that sequences with a double (or triple) index will be ordered in this way. In \cite{st2} Speranskii and Terekhin extended their result to a more general class of sequences. Let $n_k \in \mathbb{N}$ and $r_k\to 1-$, $k\to\infty$. Define the sequence $\Lambda$ by \begin{equation} \label{loop} \Lambda = \Big\{ \lambda_{k,j} = r_k e^{\frac{2\pi i j}{n_k}}: k\ge 1, j=0, 1, \dots, n_k-1\Big\}. \end{equation} As shown in \cite{st2}, if there exist positive constants $A$ and $B$ such that $A \le n_k(1-r_k) \le B$ for all $k$, then $\mathcal{K}(\Lambda)$ is a representing system for $H^2$. The proofs in \cite{st1, st2} are based on interesting abstract functional analysis methods from the papers \cite{ter1, ter2} which relate representing systems with coefficients from a given function space to a certain generalized notion of a frame. The goal of the present work is to give {\it a very simple elementary construction} of a representing system of the Cauchy kernels, which does not make use of functional analysis. Our method does not cover all systems of the form \eqref{loop} with $A \le n_k(1-r_k) \le B$.
Its advantage is that it does not use the Hilbert structure of the space and gives the result for all Hardy spaces $H^p$, $1<p<\infty$. \begin{theorem} \label{hardy} There exists a numeric constant $M$ such that $\mathcal{K}(\Lambda)$ is a representing system for $H^p$ for any $p \in (1, \infty)$ if $\Lambda$ is given by \eqref{loop} with $n_k (1-r_k) \ge M$ for any $k$. \end{theorem} The proof of Theorem \ref{hardy} does not extend to the case $p=1$. The main obstacle is in the fact that the Cauchy transform is not bounded in $L^1$. However, one can construct representing systems of the Cauchy kernels in $H^1$ if we take the points uniformly distributed on the circle $\{|z| = 1- 1/n_k\}$ with certain logarithmic multiplicities. For a precise formulation see Theorem \ref{hardy1}. It is obvious that there are no representing systems of the Cauchy kernels in $H^\infty$, since the uniform limit of their finite linear combinations belongs to the disk-algebra $A(\D)$ (the space of all functions continuous in $\overline{\D}$ and analytic in $\D$ equipped with the usual sup-norm). However, the systems of the Cauchy kernels from Theorem \ref{hardy1} are representing also for the disk algebra $A(\D)$. \medskip \subsection{Weighted Hardy spaces} Our second result concerns the class of weighted Hardy spaces $\hb$ in the disk. Let the sequence $\beta = \{\beta_n\}_{n=0}^{\infty}$, $\beta_n > 0$, satisfy \begin{equation} \label{ls} \limsup_{n\to \infty}\, (1/\beta_n)^{1/n} \le 1, \qquad \limsup_{n\to \infty} \beta_n^{1/n} \le 1. \end{equation} Consider the set of analytic functions $$ \mathscr{H}_\beta= \bigg\{f(z)=\sum\limits_{n=0}^{\infty}a_n z^n: \ \sum\limits_{n=0}^{\infty} |a_n|^2\beta_n < +\infty \bigg\}. $$ It follows from \eqref{ls} that $\hb$ consists of functions analytic in the unit disk $\D$ and contains functions which are not analytic in any larger disk. It is clear that $\hb$ is a reproducing kernel Hilbert space with respect to the norm $\|f\|_\beta^2= \sum\limits_{n=0}^{\infty} |a_n|^2\beta_n$ and its kernel at the point $\lambda\in\D$ is given by $$ K^\beta_\lambda(z)= \sum\limits_{n=0}^{\infty}\frac{\overline{\lambda}^n}{\beta_n}z^n. $$ Weighted Hardy spaces $\hb$ include most of the classical spaces of analytic functions in the unit disk: the Hardy space $H^2$ ($\beta_n \equiv 1$), Bergman spaces $A^2_\alpha$ with the weight $(\alpha+1) (1-|z|^2)^\alpha$, $\alpha>-1$ ($\beta_n = \frac{n!\Gamma(\alpha+2)}{\Gamma(n+\alpha+2)}$), the Dirichlet space ($\beta_n = n+1$). Recall that a sequence $\{x_n\}$ is said to be a {\it frame} in a Hilbert space $H$ if there exist constants $A,B>0$ such that $A\|x\|^2 \le \sum_n |(x, x_n)|^2 \le B\|x\|^2$ for any $x\in H$; if one has only the above estimate $\sum_n |(x, x_n)|^2 \le B\|x\|^2$, then $\{x_n\}$ is said to be a {\it Bessel sequence}. Any frame is, in particular, a representing system. It is well known that in Bergman spaces $A^2_\alpha$ there exist frames of normalized reproducing kernels; their complete description was given by K.~Seip \cite{seip} (for general weighted Bergman spaces see \cite{bdk, seip1}). Therefore, the existence of representing sequences of reproducing kernels (but not their description) in the Bergman space setting is trivial. On the other hand, $H^2$ has no frames of normalized Cauchy kernels (and even complete Bessel sequences). 
Indeed, for any Bessel sequence $\{ k_{z_n}/\|k_{z_n}\|_{H^2} \}$ one has $\sum_n (1-|z_n|^2) <\infty$ (simply applying the inequality with $f\equiv 1$), whence $\{z_n\}$ is a Blaschke ($=$nonuniqueness) sequence. More generally, if $\inf_n \beta_n =\delta >0$, then $\hb$ has no frames of normalized reproducing kernels. Indeed, if $\{K^\beta_{z_n}/\|K^\beta_{z_n}\|_\beta\}$ is a frame, then $\sum_n \|K^\beta_{z_n}\|^{-2}_\beta <\infty$. Let $B_N$ be the Blaschke product with the zeros $z_1, \dots, z_N$. Then $$ \sum_{n>N} |B_N(z_n)|^2 \|K^\beta_{z_n}\|^{-2}_\beta \le \sum_{n>N} \|K^\beta_{z_n}\|^{-2}_\beta \to 0 $$ as $N\to \infty$. On the other hand, since $\beta_n\ge \delta$, we have $\|B_N\|^2_\beta \ge \delta \|B_N\|^2_{H^2} =\delta$, and we come to a contradiction with the frame inequality. Thus, weighted Hardy spaces which are smaller than $H^2$ (e.g., the Dirichlet space) possess no frames of normalized reproducing kernels and the problem about existence of representing systems of reproducing kernels becomes nontrivial. We give a positive answer to this question. \begin{theorem} \label{wei} For any sequence $\beta$ satisfying \eqref{ls} in the space $\hb$ there exist representing systems of reproducing kernels. \end{theorem} \bigskip \section{Hardy spaces $H^p$, $p>1$} \label{hard} Recall that the Hardy space $H^p$, $1 \le p<\infty$, consists of all functions $f$ analytic in $\D$ and such that $$ \|f\|^p_{H^p}=\sup\limits_{0<r<1} \int_\T |f(r \zeta)|^p\,dm(\zeta) <\infty. $$ Here $m$ denotes the normalized Lebesgue measure on the unit circle $\T$. Since $H^p$ is a closed subspace of $L^p(\T)$ in what follows we sometimes denote the norm in $H^p$ and $L^p$ by $\|\cdot\|_p$. For any $f\in H^p$ one has \begin{equation} \label{repr} f(z) = \int_\T f(\zeta)\overline{k_z(\zeta)} dm(\zeta)= \int_\T \frac{f(\zeta)}{1-\bar \zeta z}dm(\zeta), \qquad z\in \D. \end{equation} In particular, $k_z$ is the reproducing kernel of $H^2$ at the point $z\in\D$ and the Cauchy transform $$ (\mathcal{C} g) (z) = \int_\T \frac{g(\zeta)}{1-\bar \zeta z}dm(\zeta), \qquad z\in \D, $$ is the orthogonal projection of a function $g\in L^2(\T)$ to $H^2$. The same is true for any $p\in (1, \infty)$: there exists $C_p>0$ such that for any $g\in L^p(\T)$ one has $\mathcal{C} g \in H^p$ and $\|\mathcal{C} g\|_p \le C_p \|g\|_p$. The idea of the proof of Theorem \ref{hardy} is to replace the integral \eqref{repr} by a certain ``discretization''. \medskip \\ {\bf Proof of Theorem \ref{hardy}.} When we approximate a given function $f\in H^p$, the points (or rather layers) of $\Lambda$ given by \eqref{loop} will be defined inductively. We first explain one step of induction. Let $f\in H^p$ be given. Put $f_r(z) = f(rz)$. It is well known that $\|f -f_r\|_p \to 0$, $r\to 1-$. Therefore, for any positive $\delta$ (to be specified later) we can choose $r_k$ such that $\|f- f_{r_k}\|_p \le \delta \|f\|_p$. Let $I_j = I_{k,j}$, $0\le j\le n_k-1$, be the arcs of $\T$ defined as \begin{equation} \label{arc} I_j = I_{k,j} = \Big[\exp \Big(\frac{(2j-1) \pi i }{n_k}\Big), \exp \Big(\frac{ (2j+1)\pi i}{n_k}\Big)\Big]. \end{equation} and let $\zeta_j =\zeta_{k,j}= \exp \big( \frac{2\pi i j}{n_k} \big)$. At this step the index $k$ is fixed, thus, we omit it and write simply $I_j$, $\zeta_j$. Then $$ f(r_k z) =\int_\T \frac{f(\zeta)}{1-r_k \bar \zeta z}dm(\zeta) = \sum_{j=0}^{n_k-1} \int_{I_j} \frac{f(\zeta)}{1-r_k \bar \zeta z}dm(\zeta). 
$$ Now it is natural to approximate $f_{r_k}(z) = f(r_k z)$ by $$ S(z) = \sum_{j=0}^{n_k-1} \frac{1}{1- r_k \bar \zeta_j z} \int_{I_j} f(\zeta) dm(\zeta). $$ Let us show that if $k$ and $M = n_k(1-r_k)$ are sufficiently large, then there exists a numeric constant $\gamma \in (0,1)$ such that one has \begin{equation} \label{dod} \| f_{r_k} - S \|_{H^p} \le \gamma \|f\|_p, \end{equation} whence $\| f- S\|_{H^p} \le (\gamma+\delta) \|f\|_p$ and we need to choose $\delta>0$ so that $\gamma+\delta <1$. We have $$ f(r_k z) - S(z) = \sum_{j=0}^{n_k-1} \int_{I_j} \frac{r_k z(\bar \zeta - \bar \zeta_j)}{(1-r_k \bar \zeta_j z )(1 - r_k \bar \zeta z)} f(\zeta) dm(\zeta). $$ Note that for any $\zeta\in I_j$ we have $|\zeta-\zeta_j| \le \pi n_k^{-1}$ and $|1-r_k \bar \zeta z| \le M_1 |1 - r_k \bar \zeta_j z|$ for some numeric constant $M_1$. Thus, \begin{equation} \label{dodr} |f(r_k z) - S(z)| \le \frac{M_2}{n_k} \sum_{j=0}^{n_k-1} \int_{I_j} \frac{|f(\zeta)|}{|1 - r_k \bar \zeta z|^2} dm(\zeta) = \frac{M_2}{n_k}\int_{\T} \frac{|f(\zeta)|}{|1 - r_k \bar \zeta z|^2} dm(\zeta), \end{equation} where $M_2 = \pi M_1$. By the H\"older inequality ($1/p +1/q = 1$), we have $$ \|f_{r_k} - S\|_{H^p}^p \le \frac{M_2^p}{n_k^p} \int_\T \bigg( \int_{\T} \frac{|f(\zeta)|^p}{|1 - r_k \bar \zeta z|^2} dm(\zeta)\bigg) \bigg( \int_{\T} \frac{dm(\zeta)}{|1 - r_k \bar \zeta z|^2} \bigg)^{p/q} dm(z). $$ Since $\int_{\T} |1 - r_k u|^{-2} dm(u) =(1-r_k^2)^{-1}$, we conclude that $$ \|f_{r_k} - S\|_{H^p} \le M_2 \frac{\|f\|_{H^p}}{n_k(1-r_k^2)} \le \frac{M_2}{M(1+r_k)} \|f\|_{H^p}. $$ Note that $r_k$ can be chosen as close to $1$ as we wish. Hence, if $M>M_2/2$, then $\|f_{r_k} - S\|_{H^p} \le \gamma \|f\|_{H^p}$ for some absolute numeric constant $\gamma\in (0,1)$. Since $\delta$ also can be chosen as small as we wish, we get $\|f - S\|_{H^p} \le \gamma \|f\|_{H^p}$ with another numeric constant $\gamma\in (0,1)$ and for all sufficiently large $k$. Also, note that there exists a constant $B_p>0$ (depending only on $p$) such that for any $0\le n\le n_k-1$ one has \begin{equation} \label{dod1} \bigg\|\sum_{j=0}^{n} \frac{1}{1- r_k \bar \zeta_j z} \int_{I_j} f(\zeta) dm(\zeta)\bigg\|_{H^p} \le B_p \|f\|_{H^p}. \end{equation} Indeed, above we already showed that, for any $n\le n_k-1$, $$ \bigg\|\sum_{j=0}^{n}\frac{1}{1- r_k \bar \zeta_j z} \int_{I_j} f(\zeta) dm(\zeta) - \int_{\cup_{j=0}^n I_j} \frac{f(\zeta)}{1- r_k \bar \zeta z}dm(\zeta) \bigg\|_{H^p} \le \gamma \|f\|_{H^p}, $$ while for the second term we use the boundedness of the Cauchy transform in $L^p$, $1<p<\infty$: $$ \bigg\| \int_{\cup_{j=0}^n I_j} \frac{f(\zeta)}{1- r_k \bar \zeta z}dm(\zeta) \bigg\|_{H^p} \le C_p \|f\|_{H^p}. $$ Now, everything is ready to complete the proof. We start with an arbitrary function $f\in H^p$ and choose $r_{k_1}$ as described above to obtain a function $$ f_1(z) = f(z) - \sum_{j=0}^{n_{k_1}-1} \frac{1}{1- r_{k_1} \bar \zeta_{k_1, j} z} \int_{I_{k_1, j}} f(\zeta) dm(\zeta) $$ with $\|f_1\|_{H^p} \le \gamma \|f\|_{H^p}$, where $\gamma\in (0,1)$. Next, we apply the same procedure to $f_1$ and find $r_{k_2}$ such that $\|f_2\|_{H^p} \le \gamma \|f_1\|_{H^p}$, where $$ f_2(z) = f_1(z) - \sum_{j=0}^{n_{k_2}-1} \frac{1}{1- r_{k_2} \bar \zeta_{k_2, j} z} \int_{I_{k_2, j}} f_1(\zeta) dm(\zeta).
$$ Proceeding in this way, we obtain a sequence $k_l$ and a sequence of coefficients $c_{l,j} = \int_{I_{k_l, j}} f_{l-1}(\zeta) dm(\zeta)$ such that $$ \|f_N\|_{H^p} = \bigg\|f- \sum_{l=1}^N \sum_{j=0}^{n_{k_l}-1} \frac{c_{l,j}}{1- r_{k_l} \bar \zeta_{k_l,j} z}\bigg\|_{H^p} \le \gamma^N \|f\|_{H^p}. $$ We see that a subsequence of partial sums of the series $\sum_{l, j} \frac{c_{l,j}}{1- r_{k_l} \bar \zeta_{k_l,j} z}$ (ordered alphabetically) converges to $f$. However, by \eqref{dod1}, for any $n\le n_{k_{N+1}}$, $$ \bigg\| \sum_{j=0}^{n} \frac{c_{N+1,j}}{1- r_{k_{N+1}} \bar \zeta_{k_{N+1},j} z}\bigg\|_{H^p} \le B_p \|f_N\|_{H^p} \to 0, \quad N\to\infty, $$ and, thus, the series converges in the norm of $H^p$. \qed \begin{remark} {\rm 1. It is not difficult to find explicit numeric bounds for $M$ in Theorem \ref{hardy}. For sufficiently large $k$ one has $|1-r_k \bar \zeta z| \le M_1 |1 - r_k \bar \zeta_j z|$ for any $M_1 \ge \sqrt{\pi^2+1}$, whence one can take any $M>\frac{\pi \sqrt{\pi^2+1}}{2} \approx 5.2$. \smallskip 2. The same proof shows that we need not take $\zeta_j$ as the centers of the arcs $I_j$ and can choose them randomly in $I_j$. Repeating the arguments one immediately obtains that there exists $M>0$ such that if $n_k(1-r_k) \ge M$, then the sequence $\Lambda = \{r_k \zeta_{k,j}: \zeta_{k,j} \in I_{k,j}, \ k\in\mathbb{N}, \ 0\le j\le n_k-1\}$ generates a system of Cauchy kernels which is representing in any $H^p$, $1<p<\infty$. \smallskip 3. It looks plausible that any sequence \eqref{loop} satisfying $n_k(1-r_k) \ge \delta>0$ (as in \cite{st2}) gives rise to a representing system for any $H^p$, $1<p<\infty$; however, it appears that heavier machinery is needed to prove this.} \end{remark} \bigskip \section{Representing systems in $H^1$ and in $A(\D)$} It is clear from the proof of Theorem \ref{hardy} (see estimate \eqref{dodr}) that the function $S$ well approximates $f$ even in the cases when $p = 1$ or $f\in A(\D)$. Note that, in contrast to the $H^\infty$ case, $\|f_r - f \|_{A(\D)} \to 0$, $r\to 1-$, if $f\in A(\D)$. The only problem arises when we need to estimate the norm of a discretization of the integral over an arc of the circle. Since the Cauchy transform is unbounded in $L^1$ and in $L^\infty$, these norms can be large. We can construct representing systems of Cauchy kernels in $H^1$ or in $A(\D)$ by considering denser sets distributed over a circle with a certain ``multiplicity''. Let $R_k \in(0,1)$, $N_k, M_k \in \N$. For any $j$, $1\le j \le N_k$, consider the open arc $I_{k,j} = (\exp(\frac{(2 j -1)\pi i}{N_k}), \exp(\frac{(2 j +1)\pi i}{N_k})) \subset \T$ and choose $M_k$ distinct points $\zeta_{k, l, j} \in I_{k,j}$, $l=1, \dots, M_k$. Define the set \begin{equation} \label{bor} \Lambda = \{w_{k, l, j} = R_k \zeta_{k, l, j} :\ k\in\N, \ 1\le l \le M_k, \ 1\le j\le N_k\}. \end{equation} The set $\Lambda$ is assumed to be ordered alphabetically. We prefer to make the points in $\Lambda$ distinct even though the definition of a representing system does not exclude repeated vectors. \begin{theorem} \label{hardy1} Let $\Lambda$ be given by \eqref{bor} with $R_k \to 1-$, $k\to\infty$. Then there exists a numeric constant $M>0$ such that if $N_k(1-R_k) \ge M$ and $$ \log \frac{1}{1-R_k} = O(M_k), $$ then $\mathcal{K}(\Lambda) =\{k_\lambda\}_{\lambda\in \Lambda}$ is a representing system in $H^1$ and in $A(\D)$. \end{theorem} In what follows we write $X\lesssim Y$ if there is a constant $C>0$ such that $X\le C Y$ for all admissible values of parameters.
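Before turning to the proof, we illustrate numerically the contraction step from the proof of Theorem \ref{hardy}; this small experiment is ours and plays no role in the arguments. We take for $f$ a random polynomial, place $n$ equispaced nodes on the circle of radius $r$ with $n(1-r)=6$, and compare $\|f_r - S\|_{H^2}$ with $\|f\|_{H^2}$. The $H^2$ norm is the $\ell^2$ norm of the Taylor coefficients, and the arc integrals $\int_{I_j}\zeta^m\,dm(\zeta)$ are evaluated in closed form; all particular parameter values below are an arbitrary choice made for the illustration.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
deg = 40
a = rng.standard_normal(deg + 1)              # Taylor coefficients of f
norm_f = np.linalg.norm(a)                    # ||f||_{H^2}

n, M = 400, 6.0                               # nodes on the circle, n*(1-r) = M
r = 1 - M / n
theta = 2 * np.pi * np.arange(n) / n          # arguments of the nodes zeta_j
m = np.arange(deg + 1)

# int_{I_j} zeta^m dm(zeta) = e^{i m theta_j} sin(m pi/n)/(m pi) for m >= 1, and 1/n for m = 0
w = np.where(m == 0, 1.0 / n, np.sin(m * np.pi / n) / np.where(m == 0, 1, m * np.pi))
c = np.exp(1j * np.outer(theta, m)) @ (a * w)  # c_j = int_{I_j} f dm

# Taylor coefficients of S(z) = sum_j c_j / (1 - r conj(zeta_j) z), truncated
K = np.arange(2001)
S_coef = (r ** K) * (np.exp(-1j * np.outer(K, theta)) @ c)

fr_coef = np.zeros_like(S_coef)
fr_coef[: deg + 1] = a * r ** m               # coefficients of f_r(z) = f(rz)
print(np.linalg.norm(fr_coef - S_coef) / norm_f)   # the ratio is small, in particular < 1
\end{verbatim}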
\begin{proof} We start with a trivial formula $$ f(R_k z) = \frac{1}{M_k} \sum_{l=1}^{M_k} \int_\T \frac{f(\zeta)}{1-R_k \bar \zeta z} dm(\zeta) $$ and its discretization $$ S_k(z) = \frac{1}{M_k} \sum_{l=1}^{M_k} \sum_{j=1}^{N_k} \bigg(\int_{I_{k,j}} f(\zeta) dm(\zeta) \bigg) \frac{1}{1-R_k \bar \zeta_{k,l,j} z}. $$ First we consider the case of the space $H^1$. Repeating the arguments from the proof of Theorem \ref{hardy} one easily shows that there exists $M>0$ such that, for any $l$, $$ \bigg\|f(R_k z) - \sum_{j=1}^{N_k} \bigg(\int_{I_{k,j}} f(\zeta) dm(\zeta) \bigg) \frac{1}{1-R_k \bar \zeta_{k,l,j} z}\bigg\|_{H^1} \le \frac{\|f\|_{H^1}}{4} $$ as soon as $N_k(1-R_k) >M$. Thus, we have $\|f(R_k z) - S_k(z)\|_{H^1} \le \|f\|_{H^1}/4$. We need to estimate the norms of intermediate sums in $S_k$. Note that the outer summation goes over $l$. It follows from \eqref{dodr} that for $1\le \tilde M \le M_k$ $$ \bigg\|\frac{1}{M_k} \sum_{l=1}^{\tilde M} \sum_{j=1}^{N_k} \bigg(\int_{I_{k,j}} f(\zeta) dm(\zeta) \bigg) \frac{1}{1-R_k \bar \zeta_{k,l,j} z} - \frac{1}{M_k} \sum_{l=1}^{\tilde M} \int_\T \frac{f(\zeta)}{1-R_k \bar \zeta z} dm(\zeta) \bigg\|_{H^1} \le \frac{\|f\|_{H^1}}{4}, $$ while the second sum inside the norm is simply $\frac{\tilde M}{M_k} f(R_k z)$. Finally, we need to estimate, for some fixed $\tilde M$ and $1\le \tilde N\le N_k$, the norm $$ \bigg\|\frac{1}{M_k} \sum_{j=1}^{\tilde N} \bigg(\int_{I_{k,j}} f(\zeta) dm(\zeta) \bigg) \frac{1}{1-R_k \bar \zeta_{k, \tilde M, j} z} \bigg\|_{H^1} $$ or, equivalently, the norm $$ \bigg\|\frac{1}{M_k} \int_I \frac{f(\zeta)}{1-R_k \bar \zeta z} dm(\zeta) \bigg\|_{H^1}, $$ where $I = \cup_{j=1}^{\tilde N} I_{k,j}$. Since $\int_\T |1-\rho \zeta|^{-1} dm(\zeta) \lesssim \log \frac{1}{1-\rho}$, $1/2 \le\rho <1$, we have $$ \frac{1}{M_k} \int_\T \int_I \frac{|f(\zeta)|}{|1-R_k \bar \zeta z|} dm(\zeta) \, dm(z) \lesssim \frac{1}{M_k} \log \frac{1}{1-R_k} \|f\|_{H^1} \lesssim \|f\|_{H^1} $$ by the hypothesis on $M_k$. In the case $f\in A(\D)$ the estimate $$ \bigg\|\frac{1}{M_k} \int_I \frac{f(\zeta)}{1-R_k \bar \zeta z} dm(\zeta) \bigg\|_{A(\D)} \lesssim \frac{1}{M_k} \log \frac{1}{1-R_k} \|f\|_{A(\D)} \lesssim \|f\|_{A(\D)} $$ is immediate. The rest of the proof is identical to the proof of Theorem \ref{hardy}. Let $X$ be one of the spaces $H^1$ or $A(\D)$. For a fixed $f$ we choose $R_{k_1}$ so that $\|f(z) - f(R_{k_1} z)\|_X \le \|f\|_{X}/4$. Then $\|f-S_{k_1}\|_{X} \le \|f\|_{X}/2$. Applying the procedure to $f_1 = f - S_{k_1}$ we choose $R_{k_2}$, etc. \end{proof} \bigskip \section{Proof of Theorem \ref{wei}} We start with the following integral representation of functions in $\mathscr{H}_\beta$. In what follows for $f (z)= \sum\limits_{n=0}^{\infty} a_n z^n \in \mathscr{H}_\beta$ and $\rho \in (0,1)$ we put $$ F_\rho(z)= \sum\limits_{n=0}^{\infty} a_n\beta_n \rho^n z^n. $$ Note that $F_\rho$ is analytic in $\{|z| <\rho^{-1}\}$ and \begin{equation} \label{zog} \int_\T |F_\rho(\zeta)|^2 dm(\zeta) = \sum_{n=0}^\infty |a_n|^2 \beta_n^2 \rho^{2n} \le \omega_1(\rho)\|f\|_\beta^2, \end{equation} where $$ \omega_1(\rho)=\sup_{n\ge 0} \rho^{2n} \beta_n. $$ \begin{lemma} \label{inte} Let $f(z)= \sum\limits_{n=0}^{\infty} a_n z^n \in \mathscr{H}_\beta$. Then, for any $0<r<R <1$ and $z\in \D$, $$ f(rz)= \int\limits_{\T} F_{r/R} (\zeta) \overline{K_z(R\zeta)} dm(\zeta).
$$ \end{lemma} \begin{proof} By direct computations $$ \int\limits_{\T} F_{r/R} (\zeta) \overline{K_z(R\zeta)} dm(\zeta) = \int\limits_{\T} \bigg(\sum\limits_{n=0}^{\infty} a_n\beta_n\frac{r^n}{R^n} \zeta^n\bigg) \bigg(\sum\limits_{n=0}^{\infty}\frac{R^n z^n }{\beta_n} \bar\zeta^n\bigg) dm(\zeta) = \sum\limits_{n=0}^{\infty} a_n r^n z^n. $$ \end{proof} We will introduce two more characteristics of the sequence $\beta$. For $\rho\in (0,1)$ put $$ \om_2(\rho) = \sum_{n\ge 1} \frac{n^2 \rho^{2n}}{\beta_n}, \qquad \om_3(\rho) = \sum_{n\ge 0} \frac{\rho^{2n}}{\beta_n}. $$ Note that $\om_3(\rho)$ is the square of the norm of the reproducing kernel $K_\rho^\beta$ in $\hb$, while $\om_2(\rho)$ is essentially the squared norm of its derivative. The key idea of a construction of a representing system is similar to the case of $H^1$. Assume that for any $k\in \N$ there are fixed $R_k \in(0,1)$ and $N_k, M_k \in \N$ and a collection of radii $R_{k,l} \in (0,1)$, $l=1, \dots, M_k$, such that $R_{k,1} <R_{k,2} \dots < R_{k, M_k} = R_k$. Consider the set of points $$ \Lambda = \{w_{k, l, j} :\ 1\le l \le M_k, 1\le j\le N_k, k\in\N\}, \qquad w_{k, l, j} = R_{k,l} \exp \Big(\frac{2\pi i j}{N_k}\Big). $$ \begin{theorem} \label{gen} Assume that $R_{k, 1} \to 1$ and \begin{equation} \label{tout} \om_1(R_k) \om_2(R_k) = o(N_k^2), \qquad \om_1(R_k) \om_3(R_k) = O(M_k^2) \end{equation} as $k\to\infty$. Then $\{K_\lambda^\beta\}_{\lambda\in\Lambda}$ is a representing system in $\hb$. \end{theorem} \begin{proof} By Lemma \ref{inte} we have for any $r<R_{k,1}$ $$ f(rz) = \frac{1}{M_k} \sum_{l=1}^{M_k} \int_ \T F_{r/R_{k,l}} (\zeta) \overline{K_z(R_{k,l}\zeta)} dm(\zeta). $$ The idea of the proof is to discretize this integral replacing it by $$ S_k(z) = \frac{1}{M_k} \sum_{l=1}^{M_k} \sum_{j=1}^{N_k} \bigg(\int_{I_{k,j}} F_{r/R_{k,l}} (\zeta) dm(\zeta)\bigg) K_{w_{k,l,j}}(z), $$ where $I_{k,j} = [\exp(\frac{(2 j -1)\pi i}{N_k}), \exp(\frac{(2 j +1)\pi i}{N_k})]$. We consider in detail one step of approximation. For the moment we assume that $k$ is fixed and omit it, i.e., we write $R, R_l, M, N$ in place of $R_k, R_{k,l}, M_k, N_k$. Recall that $R_1<\dots < R_M = R$. Assume that $r < R_1^2$. Note that $\overline{K_z(R_l \zeta)} = K_{R_l \zeta} (z)$. Then we have $$ f(rz) - S(z) = \frac{1}{M} \sum_{l=1}^{M} \sum_{j=1}^{N} \int_{I_j} F_{r/R_{l}} (\zeta) \big(K_{R_l \zeta} (z) - K_{w_{l,j}}(z)\big) dm(\zeta) = \sum_{n=1}^\infty c_n \frac{z^n}{\beta_n}, $$ where $$ c_n = \frac{1}{M} \sum_{l=1}^{M} \sum_{j=1}^{N} \int_{I_j} \big( (R_l\bar \zeta)^n - \overline{w}_{l,j}^n \big) F_{r/R_{l}}(\zeta) dm(\zeta). $$ Since $w_{l,j} = R_l \zeta_j$, $\zeta_j = e^\frac{2\pi i j}{N} \in I_j$, we have for $ \zeta \in I_j$ $$ |(R_l \zeta)^n - w_{l,j}^n| =R_l^n \cdot |\zeta - \zeta_j| \cdot \Big| \sum_{s=0}^{n-1} \zeta^s \zeta_j^{n-1-s}\Big| \lesssim \frac{n R_l^n}{N} \le \frac{n R^n}{N}, $$ whence $$ |c_n| \lesssim \frac{nR^n}{M N} \sum_{l=1}^{M} \int_\T |F_{r/R_{l}}(\zeta)| dm(\zeta). $$ Recall that $r\le R_1^2$ and so $r/R_l \le R$. It follows from \eqref{zog} that \begin{equation} \label{bat} \int_\T |F_{r/R_{l}}(\zeta)| dm(\zeta) \le (\om_1(R))^{1/2} \|f\|_\beta. \end{equation} We conclude that $$ |c_n| \lesssim \frac{n R^n (\om_1(R))^{1/2}}{N} \|f\|_\beta. $$ Hence, $$ \|f_r - S\|_\beta^2 = \sum_{n=1}^\infty \frac{|c_n|^2}{\beta_n} \lesssim \frac{\om_1(R) \|f\|^2_\beta}{N^2} \sum_{n=1}^\infty \frac{n^2 R^{2n}}{\beta_n} = \frac{\om_1(R) \om_2(R)}{N^2} \|f\|^2_\beta. 
$$ By the first condition in \eqref{tout}, taking $R=R_k$ and $N=N_k$ with a large $k$, one can make this norm as small as we wish. Let us show that any intermediate partial sum is uniformly bounded by the norm of $f$. We need to estimate the norms of the sums $$ T_{\tilde M}(z) = \frac{1}{M} \sum_{l=1}^{\tilde M} \sum_{j=1}^{N} \bigg( \int_{I_j} F_{r/R_{l}} (\zeta) dm(\zeta) \bigg) K_{w_{l,j}}(z) $$ and $$ T_{\tilde M, \tilde N}(z) = \frac{1}{M} \sum_{j=1}^{\tilde N} \bigg( \int_{I_j} F_{r/R_{\tilde M}} (\zeta) dm(\zeta) \bigg) K_{w_{\tilde M, j}}(z), $$ where $1\le \tilde M\le M$ and $1\le \tilde N \le N$. First let us consider the sums over several complete circles. It follows from the above estimates of the difference between the integral and its discretization and from Lemma \ref{inte} that for any $1\le \tilde M \le M$ one has $$ \bigg\| T_{\tilde M}(z) - \frac{1}{M} \sum_{l=1}^{\tilde M} \int_ \T F_{r/R_{l}} (\zeta) \overline{K_z(R_{l}\zeta)} dm(\zeta) \bigg\|_\beta^2 = \bigg\| T_{\tilde M}(z) - \frac{\tilde M}{M} f(rz) \bigg\|_\beta^2 \lesssim \frac{\om_1(R) \om_2(R)}{N^2} \|f\|^2_\beta. $$ Hence, $\| T_{\tilde M} \|_\beta \lesssim \|f\|_\beta$. It remains to estimate the norm of the sum $T_{\tilde M, \tilde N}(z)$ over some incomplete circle. Again, by the above estimates, we have $$ \bigg\| T_{\tilde M, \tilde N}(z) - \frac{1}{M} \int_ I F_{r/R_{\tilde M}} (\zeta) \overline{K_z(R_{\tilde M}\zeta)} dm(\zeta) \bigg\|_\beta^2 \lesssim \frac{\om_1(R) \om_2(R)}{N^2} \|f\|^2_\beta, $$ where $I = \cup_{j=1}^{\tilde N} I_j$. Using the expansion of the kernel function we get $$ \int_ I F_{r/R_{\tilde M}} (\zeta) \overline{K_z(R_{\tilde M}\zeta)} dm(\zeta) = \sum_{n=0}^\infty d_n \frac{R_{\tilde M}^n}{\beta_n} z^n, $$ where $$ d_n = \int_ I F_{r/R_{\tilde M}} (\zeta) \bar \zeta^n dm(\zeta). $$ Making use of \eqref{bat} and the fact that $r/R_{\tilde M} \le R$ we get $$ \bigg\| \frac{1}{M}\int_ I F_{r/R_{\tilde M}} (\zeta) \overline{K_z(R_{\tilde M}\zeta)} dm(\zeta) \bigg\|_\beta^2 \le \frac{\om_1(R) \|f\|^2_\beta}{M^2} \sum_{n=0}^\infty \frac{R^{2n}}{\beta_n} = \frac{ \om_1(R)\om_3(R)}{M^2} \|f\|^2_\beta \lesssim \|f\|^2_\beta $$ by the second condition in \eqref{tout}. The rest of the proof is analogous to the proof of Theorem \ref{hardy}. \end{proof} \begin{remark} {\rm Representing systems satisfying the conditions \eqref{tout} of Theorem \ref{gen} are, apparently, denser than necessary, and in special cases these conditions can be substantially relaxed. Our goal was to give a qualitative answer to the question about existence of representing systems. It is an interesting problem for further research to find optimal density conditions. } \end{remark} \section{Open questions} \label{open} Representing systems of reproducing kernels in spaces of analytic functions in the disk are far from being well understood. While it does not seem reasonable to expect a complete description of representing systems of reproducing kernels even in the $H^2$ setting, a natural question is how small (in some sense) a representing system can be. Of course, the smaller the system is, the sharper the result. One way to measure the size of the system is to introduce a density. E.g., for $\Lambda\subset \D$, put $$ D_+(\Lambda) = \limsup_{r\to 1-}\, (1-r)\cdot \#(\Lambda \cap \{|z| <r\}), $$ where $\#E$ denotes the cardinality of $E$. In all known examples of representing systems of the Cauchy kernels in $H^p$ one has $D_+(\Lambda) >0$.
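For a concrete admissible choice of parameters this density is easy to evaluate numerically. The following Python sketch is ours; the particular radii $r_k = 1-2^{-k}$ and sizes $n_k = \lceil M 2^k \rceil$ are just one example satisfying $n_k(1-r_k)\ge M$, and the computation shows that $(1-r)\cdot\#(\Lambda\cap\{|z|<r\})$ stays bounded away from zero for this choice, so that $D_+(\Lambda)>0$.

\begin{verbatim}
import math

M = 6
radii = [1 - 2.0 ** -k for k in range(1, 40)]          # r_k = 1 - 2^{-k}
sizes = [math.ceil(M * 2 ** k) for k in range(1, 40)]  # n_k points on the circle |z| = r_k

def points_inside(r):
    return sum(n for rk, n in zip(radii, sizes) if rk < r)

for j in (5, 10, 20, 30):
    r = 1 - 2.0 ** -j
    print(round((1 - r) * points_inside(r), 3))        # approaches M = 6, hence D_+ > 0
\end{verbatim}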
\medskip \\ {\bf Question 1.} {\it Do there exist representing systems $\mathcal{K}(\Lambda)$ of the Cauchy kernels in $H^2$ \textup(or $H^p$, $1<p<\infty$\textup), such that $D_+(\Lambda) =0$? } \medskip We find it plausible that the answer is ``no'', and so the case when $n_k(1-r_k)$ behaves like a constant is optimal for systems of the form \eqref{loop}. Of course, one can ask similar questions about representing systems of reproducing kernels in general spaces $\hb$, considering appropriate densities. Also, in all known examples the points of $\Lambda$ accumulate at every point of the unit circle. On the other hand, there exist complete systems of the Cauchy kernels which accumulate at a single point on the boundary. \medskip \\ {\bf Question 2.} {\it Do there exist representing systems $\mathcal{K}(\Lambda)$ of the Cauchy kernels in $H^2$ such that the closure ${\rm Clos}\, \Lambda$ does not contain $\T$, i.e., omits some open arc? } \medskip As we have seen in Theorem \ref{hardy1}, one can construct representing systems of the Cauchy kernels in $H^1$ or in $A(\D)$ if one takes somewhat (logarithmically) denser sets than for $H^p$, $p>1$. The question about the sharp density remains open. \medskip \\ {\bf Question 3.} {\it Do the sequences \eqref{loop} with $n_k(1-r_k) \ge M$ generate representing systems in $H^1$ or in $A(\D)$ when $M$ is sufficiently large? If not, then what is the correct optimal density?} \medskip
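As a small numerical illustration (ours, and not needed for any of the statements above), the integral representation of Lemma~\ref{inte} is easy to verify in the classical case $\beta_n\equiv 1$, i.e.\ for $H^2$, where $K_w(z)=(1-\bar w z)^{-1}$ and $F_{r/R}(\zeta)=f(r\zeta/R)$. A short Python sketch, assuming only that \texttt{numpy} is available (the test function and all parameter values are chosen purely for illustration):
\begin{verbatim}
# Numerical check of the integral representation (Lemma "inte") for
# beta_n = 1, i.e. the Hardy space H^2 with the Szego kernel.
import numpy as np

a = 0.4 + 0.3j                      # test function f(z) = 1/(1 - a z)
f = lambda z: 1.0 / (1.0 - a * z)

r, R, z = 0.5, 0.9, 0.3 + 0.4j      # need r < R, |a| < 1 and R*|z| < 1
theta = 2.0 * np.pi * np.arange(4096) / 4096
zeta = np.exp(1j * theta)

# conj(K_z(R zeta)) = 1 / (1 - R z conj(zeta)) for the Szego kernel
integrand = f(r * zeta / R) / (1.0 - R * z * np.conj(zeta))
print(integrand.mean())             # integral over T w.r.t. normalized dm
print(f(r * z))                     # the two values should agree
\end{verbatim}
The two printed values agree to high accuracy, since the trapezoidal rule converges spectrally for smooth periodic integrands.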
{ "timestamp": "2022-07-20T02:05:21", "yymm": "2207", "arxiv_id": "2207.08985", "language": "en", "url": "https://arxiv.org/abs/2207.08985", "abstract": "We give an elementary construction of representing systems of the Cauchy kernels in the Hardy spaces $H^p$, $1 \\le p <\\infty$, as well as of representing systems of reproducing kernels in weighted Hardy spaces.", "subjects": "Complex Variables (math.CV); Functional Analysis (math.FA)", "title": "Representing systems of reproducing kernels in spaces of analytic functions", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575178175919, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.7091542074006757 }
https://arxiv.org/abs/1310.6019
Graph Clustering with Surprise: Complexity and Exact Solutions
Clustering graphs based on a comparison of the number of links within clusters and the expected value of this quantity in a random graph has gained a lot of attention and popularity in the last decade. Recently, Aldecoa and Marin proposed a related, but slightly different approach leading to the quality measure surprise, and reported good behavior in the context of synthetic and real world benchmarks. We show that the problem of finding a clustering with optimum surprise is NP-hard. Moreover, a bicriterial view on the problem permits computing optimum solutions for small instances by solving a small number of integer linear programs, and leads to a polynomial time algorithm on trees.
\section{Introduction} \label{sec:introduction} \emph{Graph clustering}, i.e., the partitioning of the entities of a network into densely connected groups, has received growing attention in the literature of the last decade, with applications ranging from the analysis of social networks to recommendation systems and bioinformatics~\cite{f-c-09}. Mathematical formulations thereof abound; for an extensive overview of different approaches see for example the reviews of Fortunato~\cite{f-c-09} and Schaeffer~\cite{s-gc-07}. One line of research that recently gained a lot of popularity is based on \emph{null models}, the most prominent objective function in this context being the \emph{modularity} of a clustering~\cite{ng-fecsn-04}. Roughly speaking, the idea behind this approach is to compare the number of edges within the same cluster to its expected value in a random graph that inherits some properties of the graph given as input. In a wider sense, the measure called~\emph{surprise} that has recently been suggested as an alternative to modularity is also based on a null model, although, compared to modularity and its modifications~\cite{f-c-09}, it uses a different tradeoff between the observed and expected number of edges within clusters. Surprise is used as a quality function in the tools UVCLUSTER and Jerarca to analyze protein interaction data~\cite{amm-icapi-05,am-jeacn-10}. The authors' main argument for using surprise instead of modularity is that it exhibits better behavior with respect to synthetic benchmarks and, empirically, it does not suffer to the same extent from the \emph{resolution limit} of modularity~\cite{bf-rlcd-07}, i.e.~the tendency to merge small natural communities into larger ones~\cite{am-dncss-11,am-e-13,am-s-13}. However, these results are hard to assess, since a metaheuristic is used instead of directly optimizing the measure: it chooses, among a set of clusterings produced by general clustering algorithms, the one that is best with respect to surprise. In this work, we take first steps towards a theoretical analysis of surprise. We show that the problem of finding a clustering with optimal surprise is $\mathcal{NP}$-hard in general and polynomially solvable on trees. Moreover, we formulate surprise as a bicriterial problem, which allows us to find provably optimal solutions for small instances by solving a small number of integer linear programs. \vspace{2ex} \noindent \textbf{Notation.} All graphs considered are unweighted, undirected and simple, i.e.~they do not contain loops or parallel edges. A clustering $\ensuremath{\zeta}$ of a graph $G=(V,E)$ is a partitioning of $V$. Let $n:=|V|$ and $m:=|E|$ denote the number of vertices and edges of $G$, respectively. If $C$ is a cluster in $\ensuremath{\zeta}$, $i_e(C)$ denotes the number of \emph{intracluster edges} in $C$, i.e., the number of edges having both endpoints in $C$. Similarly, $i_p(C) := \binom{|C|}{2}$ is the number of vertex pairs in $C$. Furthermore, let $p := \binom{n}{2}$ be the number of vertex pairs in $G$, $i_p(\ensuremath{\zeta}) := \sum_{C \in \ensuremath{\zeta}} i_p(C)$ be the total number of intracluster vertex pairs and $i_e(\ensuremath{\zeta}) := \sum_{C \in \ensuremath{\zeta}} i_e(C)$ the total number of intracluster edges. If the clustering is clear from the context, we will sometimes omit $\ensuremath{\zeta}$ and just write $i_p$ and $i_e$. To ease notation, we will allow binomial coefficients $\binom{n}{k}$ for all $n$ and $k \in \mathbb{N}$. If $k > n$, $\binom{n}{k} = 0$ by definition.
\section{Definition and Basic Properties} Let $\ensuremath{\zeta}$ be a clustering of a graph $G=(V,E)$ with $i_e$ intracluster edges. Among all graphs labeled with vertex set $V$ and exactly $m$ edges, we draw a graph $\mathcal{G}$ uniformly at random. The surprise $S(\ensuremath{\zeta})$ of this clustering is then the probability that $\mathcal{G}$ has at least $i_e$ intracluster edges with respect to $\ensuremath{\zeta}$. The lower this probability, the more \emph{surprising} it is to observe that many intracluster edges within $G$, and hence, the better the clustering. The above process corresponds to an urn model with $i_p(\ensuremath{\zeta})$ white and $p-i_p(\ensuremath{\zeta})$ black balls from which we draw $m$ balls without replacement. The probability to draw at least $i_e$ white balls then follows a hypergeometric distribution, which leads to the following definition\footnote{This is the definition used in the original version~\cite{amm-icapi-05}; later on, it was replaced by maximizing $-\log_{10} S(\ensuremath{\zeta})$, which is equivalent with respect to optimum solutions.}; the lower $S(\ensuremath{\zeta})$, the better the clustering $$S(\ensuremath{\zeta}) := \sum_{i=i_e}^{m} \frac{\binom{i_p}{i} \cdot \binom{p - i_p}{m-i}}{\binom{p}{m}}$$ \noindent \textbf{Basic Properties.} For a fixed graph, the value of $S$ only depends on two variables, $i_p$ and $i_e$. To ease notation, we will use the term $S(i_p, i_e)$ for the value of a clustering with $i_p$ intracluster pairs and $i_e$ intracluster edges. The urn model view yields some simple properties that lead to a better understanding of how surprise behaves, and that are heavily used in the $\mathcal{NP}$-hardness proof. \begin{lemma} Let $i_e$, $i_p$, $p$ and $m$ be given by a clustering, i.e.~$0\leq i_e \leq i_p \leq p$, $i_e \leq m$ and $m-i_e \leq p-i_p$. Then, the following statements hold: \begin{enumerate} \item[(i)] $S(i_p, i_e+1) < S(i_p, i_e).$ \item[(ii)] If $i_e > 0$, then $S(i_p-1, i_e) < S(i_p, i_e)$ \item[(iii)] If $p - i_p > m - i_e$, then $S(i_p+1, i_e+1) < S(i_p, i_e).$ \end{enumerate} \label{lem:basic_props} \end{lemma} \begin{proof} Statement (i) is obvious. Similarly, statement (ii) is not hard to see if we recall that $S(i_p-1, i_e)$ corresponds to the probability to draw at least $i_e$ white balls after replacing one white ball with a black one. For statement (iii), we show that the number $k_1$ of $m$-element subsets of the set of all balls containing at least $i_e$ white balls is larger than the number $k_2$ of $m$-element subsets containing at least $i_e+1$ white balls after painting one black ball $b$ white. Any subset $A$ that contributes to $k_2$ also contributes to $k_1$, as at most one ball in $A$ got painted white. On the other hand, every $m$-element subset not containing $b$ that contains exactly $i_e$ white balls contributes to $k_1$, but not to $k_2$. As there are at least $i_e$ white balls, and $p-i_p > m-i_e$ implies that there are at least $m-i_e+1$ black balls, there is at least one subset with these properties. Hence $k_1 > k_2$, which is equivalent to $S(i_p+1, i_e+1) < S(i_p, i_e)$.\qed \end{proof} In other words, the value of surprise improves the more edges and the less vertex pairs within clusters exist. Moreover, part (iii) shows that if we increase the number of intracluster edges such that the number of \emph{intracluster non-edges}, i.e., vertex pairs within clusters that are not linked by an edge, does not increase, this leads to a clustering with strictly smaller surprise. 
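\vspace{2ex} \noindent \textbf{Numerical illustration.} Since $S(\ensuremath{\zeta})$ is exactly the survival function of a hypergeometric distribution, it can be evaluated with standard statistical software. The following minimal sketch is ours and not part of the original exposition; it assumes Python with \texttt{scipy}, and the example graph and all identifiers are chosen purely for illustration. It computes $S$ for a given clustering and finds a surprise-optimal clustering of a small graph by exhaustively enumerating all partitions of its vertex set.
\begin{verbatim}
# Surprise of a clustering via the hypergeometric survival function,
# plus a brute-force search over all partitions of a tiny graph.
from scipy.stats import hypergeom

edges = {(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)}   # small example graph
nodes = list(range(5))
p, m = len(nodes) * (len(nodes) - 1) // 2, len(edges)

def surprise(clustering):
    i_p = sum(len(C) * (len(C) - 1) // 2 for C in clustering)
    i_e = sum(1 for u, v in edges if any({u, v} <= C for C in clustering))
    # urn model: i_p white and p - i_p black balls, m draws, >= i_e white
    return hypergeom.sf(i_e - 1, p, i_p, m)

def partitions(items):              # all partitions of a list of vertices
    if not items:
        yield []
        return
    for part in partitions(items[1:]):
        for i in range(len(part)):
            yield part[:i] + [part[i] | {items[0]}] + part[i + 1:]
        yield part + [{items[0]}]

best = min(partitions(nodes), key=surprise)
print(best, surprise(best))
\end{verbatim}
Such a brute-force check is of course only feasible for very small instances, but it is convenient for validating exact approaches on toy examples.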
This immediately yields some basic properties of optimal clusterings with respect to surprise. Part (i) of the following proposition is interesting as it shows that optimal clusterings always fulfill the assumptions of Lemma~\ref{lem:basic_props}(ii)-(iii). \begin{proposition} Let $G=(V,E)$ be a graph that has at least one edge and that is not a clique and $\ensuremath{\zeta}$ be an optimal clustering of $G$ with respect to surprise. Then, \begin{enumerate} \item[(i)] $i_e(\ensuremath{\zeta}) > 0$ and $p-i_p(\ensuremath{\zeta}) > m-i_e(\ensuremath{\zeta})$ \item[(ii)] $1 < |\ensuremath{\zeta}| < |V|$ \item[(iii)] $\ensuremath{\zeta}$ contains at least as many intracluster edges as any clustering $\ensuremath{\zeta}'$ of $G$ into cliques. \item[(iv)] Any cluster in $\ensuremath{\zeta}$ induces a connected subgraph. \end{enumerate} \label{lem:props_opt_sol} \end{proposition} \begin{proof} (i): If $i_e(\ensuremath{\zeta}) = 0$ or $p - i_p(\ensuremath{\zeta}) = m - i_e(\ensuremath{\zeta})$, it can be easily seen that $S(\ensuremath{\zeta})=1$. On the other hand, let us consider a clustering $\ensuremath{\zeta}'$ where each cluster contains one vertex, except for one cluster that contains two vertices linked by an edge $e$. As $m<p$, there is at least one labeled graph on $V$ with $m$ edges that does not contain $e$. (ii): If $|\ensuremath{\zeta}|=1$, $p-i_p(\ensuremath{\zeta}) = 0 = m-i_e(\ensuremath{\zeta})$ and if $|\ensuremath{\zeta}|=|V|$, $i_e(\ensuremath{\zeta})=0$. The statement now follows from (i). (iii): Let us assume that $i_e(\ensuremath{\zeta}) < i_e(\ensuremath{\zeta}')$. Lemma~\ref{lem:basic_props}(ii) can be used to show that $S(\ensuremath{\zeta}) = S\bigl(i_p(\ensuremath{\zeta}), i_e(\ensuremath{\zeta})\bigr) \geq S\bigl(i_e(\ensuremath{\zeta}), i_e(\ensuremath{\zeta})\bigr)$ and from Lemma~\ref{lem:basic_props}(iii), it follows that $S\bigl(i_e(\ensuremath{\zeta}), i_e(\ensuremath{\zeta})\bigr) > S\bigl(i_e(\ensuremath{\zeta}'), i_e(\ensuremath{\zeta}')\bigr) = S(\ensuremath{\zeta}')$. (iv): Follows from Lemma~\ref{lem:basic_props}(ii) and the fact that splitting a disconnected cluster into its connected components decreases the number of intracluster pairs and does not affect the number of intracluster edges. \qed \end{proof} \vspace{2ex} \noindent \textbf{Bicriterial View.} \label{sec:connection} From Lemma~\ref{lem:basic_props}, it follows that an optimal solution with respect to surprise is \emph{pareto optimal} with respect to (maximizing) $i_e$ and (minimizing) $i_p$. Interestingly, this also holds for a simplification of modularity whose null model does not take vertex degrees into account and that was briefly considered by Reichardt and Bornholdt~\cite{rb-smcd-06,rb-dfcsc-04}, although the tradeoff between the two objectives is different. Hence, an optimal clustering can be found by solving the following optimization problem for all $0 \leq k \leq m$ and choosing the solution that optimizes surprise. \begin{problem}[minIP] Given a graph $G$ and an integer $k>0$, find a clustering $\ensuremath{\zeta}$ with $i_e(\ensuremath{\zeta}) = k$, if there exists one, such that $i_p(\ensuremath{\zeta})$ is minimal. \end{problem} Unfortunately, the decision variant of minIP~is $\mathcal{NP}$-complete even on bipartite graphs, as it is equivalent to the unweighted Minimum Average Contamination problem~\cite{lt-tcamc-11}. However, the formulation of minIP~does not involve binomial coefficients and is thus in some aspects easier to handle. 
For example, in contrast to surprise, it can be easily cast into an integer linear program. We will use this in Sect.~\ref{sec:algorithms} to compute optimal solutions for small instances. One might guess from the $\mathcal{NP}$-completeness of minIP~that surprise minimization is also $\mathcal{NP}$-complete. However, there is no immediate reduction from minIP~to the decision variant of surprise optimization, as the number of intracluster edges in an optimal clustering with respect to surprise is not fixed. In the following section, we will therefore give a proof for the hardness of finding a clustering with optimal surprise. \section{Complexity}\label{sec:complexity} We show $\mathcal{NP}$-completeness of the corresponding decision problem: \begin{problem}[\textsc{Surprise Decision (SD)}] Given a graph $G$ and a parameter $k>0$, decide whether there exists a clustering $\ensuremath{\zeta}$ of $G$ with $S(\ensuremath{\zeta}) \leq k$. \end{problem} As $S$ can be clearly evaluated in polynomial time, \textsc{SD}~is in $\mathcal{NP}$. To show $\mathcal{NP}$-completeness, we use a reduction from \textsc{Exact Cover by 3-Sets}~\cite{gj-ci-79}: \begin{problem}[\textsc{Exact Cover by 3Sets (X3S)}] Given a set $\ensuremath{\mathcal{X}}$ of elements and a collection $\ensuremath{\mathcal{M}}$ of 3-element subsets of $\ensuremath{\mathcal{X}}$, decide whether there is a subcollection $\ensuremath{\mathcal{R}}$ of $\ensuremath{\mathcal{M}}$ such that each element in $\ensuremath{\mathcal{X}}$ is contained in exactly one member of $\ensuremath{\mathcal{R}}$. \end{problem} \begin{wrapfigure}[10]{r}{.4\textwidth} \vspace{-8ex} \begin{center} \includegraphics[scale = 0.7]{reduction_small} \end{center} \vspace{-2ex} \caption{Illustration for reduction.} \label{fig:reduction} \end{wrapfigure} Let $I = (\ensuremath{\mathcal{X}}, \ensuremath{\mathcal{M}})$ be an instance of \textsc{X3S}. The reduction is based on the idea of implanting large disjoint cliques in the transformed instance that correspond to the subsets in $\ensuremath{\mathcal{M}}$. The size of these cliques is polynomial in $|\ensuremath{\mathcal{M}}|$, but large enough to ensure that they can neither be split nor merged in a clustering with low surprise. Hence, each of these cliques induces a cluster. The transformed instance further contains a vertex for each element in $\ensuremath{\mathcal{X}}$ that is linked with the cliques corresponding to the subsets it is contained in. The idea is to show that in a clustering $\ensuremath{\zeta}$ with low surprise, each of these vertices is contained in a cluster induced by exactly one subset, and each cluster contains either three ``element vertices'' or none, which induces an exact cover of $\ensuremath{\mathcal{X}}$. In the following, we will assume without loss of generality\footnote{Otherwise, the instance is trivially non-solvable.} that each element of $\ensuremath{\mathcal{X}}$ belongs to at least one set in $\ensuremath{\mathcal{M}}$, hence $|\ensuremath{\mathcal{X}}| \leq 3|\ensuremath{\mathcal{M}}|$. We construct an instance $I' = (G,k)$ of \textsc{SD} in the following way. Let $r := 3|\ensuremath{\mathcal{M}}|$. First, we map each set $M$ in $\ensuremath{\mathcal{M}}$ to an $r^2$-clique $C(M)$ in $G$. Furthermore, we introduce an $|\ensuremath{\mathcal{X}}|$-clique to $G$, where each of the vertices $v(x)$ in it is associated with an element $x$ in $\ensuremath{\mathcal{X}}$. We link $v(x)$ with each vertex in $C(M)$, if and only if $x$ is contained in $M$. 
Let $V_\ensuremath{\mathcal{X}}$ be the set containing all vertices corresponding to elements in $\ensuremath{\mathcal{X}}$, and $V_\ensuremath{\mathcal{M}}$ the set of vertices corresponding to subsets. Fig.~\ref{fig:reduction}~illustrates the reduction, clearly, it is polynomial. In the proof, we will frequently use the notion \emph{for large $r$, statement $A(r)$ holds}. Formally, this is an abbreviation for the statement that there exists a constant $c>0$ such that for all $r \geq c$, $A(r)$ is true. Consequently, the reduction only works for instances that are larger than the maximum of all these constants, which suffices to show that \textsc{SD} is $\mathcal{NP}$-complete\footnote{Smaller instances have constant size and can therefore be trivially solved by a brute-force algorithm.}. \begin{lemma} \label{lem:os_many_ie} Let $\ensuremath{\zeta}$ be an optimal clustering of $G$ with respect to $S$. Then, $i_e(\ensuremath{\zeta}) \geq |\ensuremath{\mathcal{M}}| \cdot \binom{r^2}{2}$. \end{lemma} \begin{proof} Follows from Proposition~\ref{lem:props_opt_sol}(iii) and the fact that the clustering whose clusters are the cliques in $V_\ensuremath{\mathcal{M}}$ and the singletons in $V_\ensuremath{\mathcal{X}}$ is a clustering into cliques with $|\ensuremath{\mathcal{M}}| \cdot \binom{r^2}{2}$ intracluster edges. \qed \end{proof} Next, we give an upper bound on the number of \emph{intracluster non edges}, i.e., vertex pairs within clusters that are not linked by an edge, in an optimal clustering of $G$. Its (rather technical) proof makes use of the asymptotic behavior of binomial coefficients and can be found in App.~\ref{app:asymptotics}. \begin{lemma} Let $\ensuremath{\zeta}$ be an optimal clustering of $G$ with respect to surprise. Then, for large $r$, $i_p(\ensuremath{\zeta}) - i_e(\ensuremath{\zeta}) \leq \frac{r^4}{2}.$ \label{lem:os_few_intra_non} \end{lemma} This can now be used to show that an optimal clustering of $G$ is a clustering into cliques. We start by showing that the cliques in $V_\ensuremath{\mathcal{M}}$ cannot be split by an optimal clustering. \begin{figure}[tbp] \begin{center} \includegraphics[scale = 1]{./Illu_Proof} \end{center} \caption{Illustration for proof of Lemma~\ref{lem:clique_not_split}} \label{fig:Illu_Proof} \end{figure} \begin{lemma} \label{lem:clique_not_split} Let $r$ be large and $\ensuremath{\zeta}$ be an optimal clustering of $G$ with respect to $S$. Then, the cliques $C(M)$ in $V_\ensuremath{\mathcal{M}}$ are not split by $\ensuremath{\zeta}$. \end{lemma} \begin{proof} Assume that there is at least one clique that is split by $\ensuremath{\zeta}$. $\ensuremath{\zeta}$ induces a partition of each clique that it splits. We call the subsets of this partition the \emph{parts} of the clique. \emph{Claim 1: Every clique $C(M)$ contains a part with at least $r^2-6$ vertices.} \emph{Proof of Claim 1:} Assume that there is a clique $K$ where each part has at most $r^2-7$ vertices. We can now greedily group the parts in two roughly equal sized regions, such that the smaller region contains at least $7$ vertices and the larger region at least $r^2/2$ vertices. Let us look at the clustering we get by removing the vertices in $K$ from their clusters and cluster them together. The vertices in $K$ have in total $3r^2$ edges to vertices outside $K$ and we gain at least $7/2 \cdot r^2$ new intracluster edges between the regions. Hence, the number of intracluster edges increases and the number of intracluster non-edges can only decrease. 
By Lemma~\ref{lem:basic_props}(iii) and Lemma~\ref{lem:basic_props}(i), it can be seen that this operation leads to a clustering with better surprise, which contradicts the optimality of $\ensuremath{\zeta}$. Let us now call the parts with size at least $r^2-6$ \emph{large parts} and the other parts \emph{small parts}. \emph{Claim 2: No two large parts are clustered together.} \emph{Proof of Claim 2:} Assume that there is a cluster that contains more than one large part. This cluster induces at least $(r^2-6)^2$ intracluster non-edges. For large $r$, this is larger than $r^4/2$ and Lemma~\ref{lem:os_few_intra_non} tells us that $\ensuremath{\zeta}$ was not optimal. A simple counting argument now yields the following corollary. \emph{Corollary: There must exist a large part $B$ contained in a split clique whose cluster contains at most $|B|+6$ vertices in $V_\ensuremath{\mathcal{M}}$}. Let $B$ as in the corollary and $A$ be the set of the vertices that are in the same clique as $B$ but not in $B$ and $C$ be the set of vertices that are in the same cluster as $B$ but not in $B$. Fig.~\ref{fig:Illu_Proof} illustrates this case. We consider the clustering that we get by removing the vertices in $A$ and $B$ from their cluster and cluster them together. The number of vertices in $A$ and $C$, respectively, is at most $6$, and each of these vertices has at most $3$ neighbors in $V_\ensuremath{\mathcal{X}}$. Hence, we lose at most $36$ intracluster edges by this operation. On the other hand, we gain at least $r^2-6$ intracluster edges between $A$ and $B$, thus, for large $r$, the number of intracluster edges increases. Again, the number of intracluster non-edges can only decrease and by Lemma~\ref{lem:basic_props}(iii) and Lemma~\ref{lem:basic_props}(i), we get that this operation leads to a clustering with better surprise, which contradicts the optimality of $\ensuremath{\zeta}$. \qed \end{proof} \begin{lemma} \label{lem:cliques_apart} Let $r$ be large and $\ensuremath{\zeta}$ be an optimal clustering of $G$ with respect to $S$. Then, no two of the cliques in $V_\ensuremath{\mathcal{M}}$ are contained in the same cluster. \end{lemma} \begin{proof} A cluster that contains two cliques in $V_\ensuremath{\mathcal{M}}$ induces at least $r^4$ intracluster non-edges. The statement now follows from Lemma~\ref{lem:os_few_intra_non}. \qed \end{proof} \begin{lemma} \label{lem:vertices_assigned} Let $r$ be large and $\ensuremath{\zeta}$ an optimal clustering of $G$ with respect to $S$. Then, each $v(x)$ in $V_\ensuremath{\mathcal{X}}$ shares a cluster with a clique $C(M)$ such that $x \in M$. \end{lemma} \begin{proof}From Lemma~\ref{lem:clique_not_split} and Lemma~\ref{lem:cliques_apart} we know that $\ensuremath{\zeta}$ clusters the vertices in $V_\ensuremath{\mathcal{M}}$ according to the cliques we constructed. Assume that there is a vertex $v(x)$ in $V_\ensuremath{\mathcal{X}}$ that is not contained in any of the clusters induced by the sets containing $x$. Since each element in $\ensuremath{\mathcal{X}}$ is contained in at least one set in $\ensuremath{\mathcal{M}}$, there exists a clique $K$ in $V_\ensuremath{\mathcal{M}}$ that contains $r^2$ neighbors of $v(x)$. As $v(x)$ has at most $|\ensuremath{\mathcal{X}}| -1$ neighbors in its own cluster, removing it from its cluster and moving it to the cluster of $K$ increases the number of intracluster edges. On the other hand, $x$ is linked with all vertices in its new cluster and thus, the number of intracluster non-edges cannot increase. 
Hence, this operation leads to a clustering with better surprise, which contradicts the optimality of $\ensuremath{\zeta}$. \qed \end{proof} \begin{theorem} For large $r$, $I=(\ensuremath{\mathcal{X}},\ensuremath{\mathcal{M}})$ has a solution if and only if there exists a clustering $\ensuremath{\zeta}$ of $G$ with $S(\ensuremath{\zeta}) \leq k := {\binom{p}{m}}^{-1} \cdot \left({\binom{|\ensuremath{\mathcal{M}}| \cdot r^2 + |\ensuremath{\mathcal{X}}|}{2} - |\ensuremath{\mathcal{M}}| \cdot \binom{r^2}{2} - |\ensuremath{\mathcal{X}}| \cdot r^2 - |\ensuremath{\mathcal{X}}| \atop (3|\ensuremath{\mathcal{M}}| - |\ensuremath{\mathcal{X}}|) \cdot r^2 + \binom{|\ensuremath{\mathcal{X}}|}{2} - |\ensuremath{\mathcal{X}}|} \right)$. \end{theorem} \begin{proof} $\Rightarrow$: Let $R$ be a solution of $I$. $R$ induces a clustering of $G$ in the following way: For each $M \in \ensuremath{\mathcal{M}} \setminus R$ we introduce a cluster $C_M=C(M)$ and for each $M' \in R$ a cluster $C_{M'} = C(M') \cup \{v(x) \mid x \in M'\}$. As $R$ is an exact cover, this is a partition $\ensuremath{\zeta}$ of the vertex set. We have $p=\binom{|\ensuremath{\mathcal{M}}| \cdot r^2 + |\ensuremath{\mathcal{X}}|}{2}$, $m=|\ensuremath{\mathcal{M}}| \cdot \binom{r^2}{2} + 3 \cdot |\ensuremath{\mathcal{M}}| \cdot r^2 + \binom{|\ensuremath{\mathcal{X}}|}{2}$ and $i_p(\ensuremath{\zeta})=i_e(\ensuremath{\zeta})=|\ensuremath{\mathcal{M}}| \cdot \binom{r^2}{2} + |\ensuremath{\mathcal{X}}| \cdot r^2 + |\ensuremath{\mathcal{X}}|$. It can be easily verified that $S(\ensuremath{\zeta}) = k$. $\Leftarrow$: Let $\ensuremath{\zeta}$ be an optimal clustering of $G$ with respect to surprise and assume that $S(\ensuremath{\zeta}) \leq k$. From Lemma~\ref{lem:clique_not_split}, Lemma~\ref{lem:cliques_apart} and Lemma~\ref{lem:vertices_assigned}, we know that, for large $r$, we have one cluster for each set $M$ in $\ensuremath{\mathcal{M}}$ that contains $C(M)$, and each vertex $v(x)$ in $V_\ensuremath{\mathcal{X}}$ shares a cluster with a clique $C(M)$ such that $x \in M$. In particular, all clusters in $\ensuremath{\zeta}$ are cliques and hence $\binom{i_p(\ensuremath{\zeta})}{i_e(\ensuremath{\zeta})} = 1$. It follows that $\binom{p}{m} \cdot k \geq \binom{p}{m} \cdot S(\ensuremath{\zeta}) = \binom{p - i_e(\ensuremath{\zeta})}{m - i_e(\ensuremath{\zeta})}$. This term is strictly decreasing with $i_e(\ensuremath{\zeta})$ and the above bound is tight for $i_e(\ensuremath{\zeta}) = |\ensuremath{\mathcal{M}}| \cdot \binom{r^2}{2} + |\ensuremath{\mathcal{X}}| \cdot r^2 + |\ensuremath{\mathcal{X}}| =: t$. Hence, $\ensuremath{\zeta}$ contains at least $t$ intracluster edges. The number of intracluster edges within $V_\ensuremath{\mathcal{M}}$ is exactly $|\ensuremath{\mathcal{M}}| \cdot \binom{r^2}{2}$ and the number of intracluster edges linking $V_\ensuremath{\mathcal{M}}$ with $V_\ensuremath{\mathcal{X}}$ is exactly $|\ensuremath{\mathcal{X}}| \cdot r^2$. The only quantity we do not know is the number of intracluster edges within $V_\ensuremath{\mathcal{X}}$, which we denote by $i_e(V_\ensuremath{\mathcal{X}})$. As $i_e(\ensuremath{\zeta}) \geq t$, it follows that $i_e(V_\ensuremath{\mathcal{X}}) \geq |\ensuremath{\mathcal{X}}|$. Thus, every vertex in $V_\ensuremath{\mathcal{X}}$ has on average two neighbors in $V_\ensuremath{\mathcal{X}}$ that are in the same cluster. On the other hand, vertices in $V_\ensuremath{\mathcal{X}}$ can only share a cluster if they are ``assigned'' to the same clique $C(M)$.
As the sets in $\ensuremath{\mathcal{M}}$ only contain three elements, vertices in $V_\ensuremath{\mathcal{X}}$ can only have at most two neighbors in $V_\ensuremath{\mathcal{X}}$ in their cluster. It follows that $\ensuremath{\zeta}$ partitions $V_\ensuremath{\mathcal{X}}$ into triangles. Hence, the set of subsets $R$ corresponding to cliques $C(M)$ whose clusters contain vertices in $V_\ensuremath{\mathcal{X}}$ form an exact cover of $\ensuremath{\mathcal{X}}$. \qed \end{proof} We now have a reduction from \textsc{X3S} to \textsc{SD} that works for all instances that are larger than a constant $c > 0$. Hence, we get the following corollary. \begin{corollary} \textsc{Surprise Decision} is $\mathcal{NP}$-complete. \end{corollary} To show that an optimal clustering with respect to surprise can be found in polynomial time if $G$ is a tree, we consider the following problem MACP~\cite{lt-tcamc-11}: \begin{problem}[MACP] Given a graph $G=(V,E)$ together with a weight function $w:V\rightarrow \mathbb{Q}_{\geq 0}$ on $V$ and a parameter $k$. Find a clustering $\ensuremath{\zeta}$ of $G$ such that $m - i_e(\ensuremath{\zeta})=k$ and $\sum_{C \in \ensuremath{\zeta}} {\bigl(\sum_{v \in C} w(v)\bigr)}^2$ is minimal. \end{problem} For the special case that $w(v)$ equals the degree of $v$ and $G$ is a tree, Dinh and Thai give a dynamic program that solves MACP for all $0 \leq k\leq m$ simultaneously~\cite{dt-tocdf-}. This yields an $O(n^5)$ algorithm for modularity maximization in (unweighted) trees. In the context of surprise, we are interested in the special case that $w(v)=1$ for all $v \in V$. The following conversion shows that this is equivalent to minIP~with respect to optimal solutions: \begin{equation} \label{minIP_MACP} i_p(\mathcal{C}) = \sum_{C \in \mathcal{C}} {\frac{\left|C\right|(\left|C\right| - 1)}{2}} = \frac{1}{2} \sum_{C \in \mathcal{C}} {\left|C\right|^2} - \underbrace{\frac{1}{2} \left|V\right|}_{=\mathrm{const.}} \end{equation} The dynamic program of Dinh and Thai has a straightforward generalization to general vertex weights, which is polynomial in the case that each vertex has weight $1$. For completeness, App.~\ref{app:trees} contains a description of the dynamic program in this special case, together with a runtime analysis. \begin{theorem}\label{theoremTrees} Let $T = (V,E)$ with $n:=\left| V \right|$ be an unweighted tree. Then, a surprise optimal clustering of $T$ can be calculated in $O(n^5)$ time. \end{theorem} \section{Exact Solutions} \label{sec:algorithms} In this section, we give an integer linear program for minIP~and discuss some variants of how to use this to get optimal clusterings with respect to surprise. \vspace{2ex} \noindent \textbf{Linear Program for minIP.} The following ILP is very similar to a number of linear programs used for other objectives in the context of graph clustering and partitioning, in particular, to one used for modularity maximization~\cite{dt-tocdf-}. It uses a set of $\binom{n}{2}$ binary variables $\Xer{u}{v}$ corresponding to vertex pairs, with the interpretation that $\Xer{u}{v}=1$ iff $u$ and $v$ are in the same cluster. Let $\mathrm{Sep}(u,v)$ be a minimum $u$-$v$ vertex separator in $G$ if $\{u,v\} \notin E$ or in $G'=(V, E\setminus{\{u,v\}})$, otherwise. 
The objective is to \begin{align} \text{minimize }\sum_{\{u,v\} \in \binom{V}{2}} \Xer{u}{v} & \label{eq:basic_objective} \end{align} such that \begin{align} \renewcommand{\baselinestretch}{1.7}\normalsize \Xer{u}{v} \in \{0,1\}, &\quad \{u,v\} \in \binom{V}{2} \\ \Xer{u}{w} + \Xer{w}{v} - \Xer{u}{v} \leq 1, &\quad \{u,v\} \in \binom{V}{2}, w \in \mathrm{Sep}(u,v) \label{con:transitivity}\\ \sum_{\{u,v\} \in E} \Xer{u}{v} = k &\quad \label{con:exactly_k} \end{align} Dinh and Thai consider the symmetric and reflexive relation induced by $\mathcal{X}$ and show that Constraint~(\ref{con:transitivity}) suffices to enforce transitivity in the context of modularity maximization~\cite{dt-tocdf-}. Their proof solely relies on the following argument. For an assignment of the variables $\Xer{u}{v}$ that does not violate any constraints, let us consider the graph $G'$ induced by the vertex pairs $\{u,v\}$ with $\Xer{u}{v}=1$. Now assume that there exists a connected component in $G'$ that can be partitioned into two subsets $A$ and $B$ such that there are no edges in the original graph $G$ between them. Setting $\Xer{a}{b} := 0$ for all $a \in A$, $b \in B$ never violates any constraints and strictly improves the objective function. It can be verified that this argument also works in our scenario. Hence, a solution of the above ILP induces an equivalence relation and therefore a partition of the vertex set. As $\mathrm{Sep}(u,v)$ is not larger than the minimum of the degrees of $u$ and $v$, we have $O(nm)$ constraints over $O(n^2)$ variables. \vspace{2ex} \noindent \textbf{Variants.} We tested several variants of the approach described in Sect.~\ref{sec:introduction} to decrease the number of ILPs we have to solve. \begin{itemize} \item \emph{Exact(E)}: Solve $m$ times the above ILP and choose among the resulting clusterings the one optimizing surprise. \item \emph{Relaxed(R)}: We relax Constraint~(\ref{con:exactly_k}), more specifically we replace it by \begin{equation} \sum_{\{u,v\} \in E} \Xer{u}{v} \geq k \label{con:at_least_k} \end{equation} Lemma~\ref{lem:basic_props}(i) tells us that the surprise of the resulting clustering is at least as good as the surprise of any clustering with exactly $k$ intracluster edges. Moreover, by Lemma~\ref{lem:basic_props}(ii), if $i_p$ is the value of a solution to the modified ILP, $S(i_p, k')$ is a valid lower bound for the surprise of any clustering with $k'\geq k$ intracluster edges. In order to profit from this, we consider all possible values for the number of intracluster edges in increasing order and only solve an ILP if the lower bound is better than the best solution found so far. \item \emph{Gap(G)}: Similarly to the relaxed variant, we replace Constraint~(\ref{con:exactly_k}) by~(\ref{con:at_least_k}) and modify~(\ref{eq:basic_objective}) to \begin{equation} \text{minimize }\sum_{\{u,v\} \in \binom{V}{2}} \Xer{u}{v} - \sum_{\{u,v\} \in E} \Xer{u}{v} \label{eq:gap_objective} \end{equation} By Lemma~\ref{lem:basic_props}(ii), if $g$ is the objective value and $i_e$ the number of intracluster edges in a solution to the modified ILP, $S(k'+g, k')$ is a valid lower bound for the surprise of any clustering with $k'\geq k$ intracluster edges. Moreover, by Lemma~\ref{lem:basic_props}(iii), we know that $S(i_e+g, i_e)$ is not larger than the surprise of any clustering with exactly $k$ intracluster edges. Again, we consider all $k$ in increasing order and try to prune ILP computations with the lower bound. 
\end{itemize} \vspace{2ex} \noindent \textbf{Case Study.} \begin{table}[btp] \caption{Number of linear programs solved and running times in seconds of successive ILP approach, different strategies.} \label{tab:running_times} \begin{center} \begin{footnotesize} \begin{tabular}{|l|cc|cc|cc|cc|} \hline \multicolumn{1}{|c|}{} & \multicolumn{2}{c|}{\texttt{karate}} & \multicolumn{2}{c|}{\texttt{lesmis}} & \multicolumn{2}{c|}{\texttt{grid6}} & \multicolumn{2}{c|}{\texttt{dolphins}} \\ variant & ILP & t(s) & ILP & t(s) & ILP & t(s) & ILP & t(s) \\ \hline Exact & 79 & 51 & 255 & 1192 & 61 & 470 & 160 & 494 \\ Relaxed & 49 & 21 & 176 & 282 & 42 & 449 & 107 & 163 \\ Gap & 39 & \textbf{15} & 112 & \textbf{205} & 37 & \textbf{401} & 91 & \textbf{147} \\ \hline \end{tabular} \end{footnotesize} \end{center} \end{table} Table~\ref{tab:running_times}~shows an overview of running times and the number of solved ILPs of the different strategies on some small instances. \texttt{karate}($n=34, m=78$), \texttt{dolphins}($n=62,m=159$) and \texttt{lesmis}($n=77,m=254$) are real world networks from the website of the 10th DIMACS implementation Challenge\footnote{\url{http://www.cc.gatech.edu/dimacs10/}} that have been previously used to evaluate and compare clusterings, whereas \texttt{grid6}($n=36, m=60$) is a 2 dimensional grid graph. We used the C++-interface of \texttt{gurobi5.1}~\cite{gurobi} and computed the surprise of the resulting clusterings with the help of the GNU Multiple Precision Arithmetic Library, in order to guarantee optimality. The tests were executed on one core of an AMD Opteron Processor 2218. The machine is clocked at 2.1 GHz and has 16 GB of RAM. Running times are averaged over $5$ runs. It can be seen that the gap variant, and, to a smaller extent, the relaxed variant, are able to prune a large percentage of ILP computations and thus lead to less overall running time. These running times can be slightly improved by using some heuristic modifications described and evaluated in App.~\ref{app:heuristics}. \vspace{2ex} \noindent \textbf{Properties of optimal clusterings.} \begin{figure}[tbp] \begin{center} \subfigure[\texttt{karate}]{\includegraphics[scale=0.2]{karate}} \subfigure[\texttt{dolphins}]{\includegraphics[scale=0.2]{dolphins}} \subfigure[\texttt{grid6}]{\includegraphics[scale=0.2]{grid6}} \subfigure[\texttt{lesmis}]{\includegraphics[scale=0.35]{lesmis}} \subfigure[\texttt{football}]{\includegraphics[scale=0.75]{football_nicer}} \caption{Optimal clusterings with respect to surprise(colors) and, for (a) to (d), modularity(grouping). The grouping in (e) represents the \emph{ground-truth clustering}, i.e.~the mapping of teams to conferences.} \label{fig:opt_sol} \end{center} \end{figure} \begin{table}[p] \caption{Properties of optimal clusterings with respect to surprise. $S'$ denotes the surprise as defined by Aldecoa and Mar{\'\i}n~\cite{am-dncss-11}, i.e. $S'(\ensuremath{\zeta}) = -\log_{10} S(\ensuremath{\zeta})$. 
$S_o$ denotes the clustering with optimum surprise, $S_h$ the heuristically found clusterings from~\cite{am-dncss-11}, if this information was available, and $M_o$ the modularity optimal clustering.} \label{tab:prop_opt} \nprounddigits{2} \begin{center} \begin{tabular}{|l|r|r|l|r|r|r|r|r|} \hline instance & $i_e$ & $i_p$ & $S(S_o)$ & $S'(S_o)$ &$S'(S_h)$ & $|S_o|$ & $|S_h|$ & $|M_o|$\\ \hline \texttt{karate} & $29$ & $30$ & $\numprint{2.02474e-26}$ & $25.69$ & $25.69$ & $19$ & $19$ & $4$\\ \texttt{grid6} & $36$ & $54$ & $\numprint{2.89981e-29}$ & $28.54$ & - & $9$ & - & $4$ \\ \texttt{dolphins} & $87$ & $121$ & $\numprint{9.93152e-77}$ & $76.00$ & - & $22$ & - & $5$\\ \texttt{lesmis} & $165$ & $179$ & $\numprint{1.5385e-184}$ & $183.81$ & - &$33$ & - & $6$\\ \texttt{football} & $399$ & $458$ & $\numprint{5.64724e-407}$ & $\numprint{406.248}$& -& $15$ & $15$ & $10$ \\ \hline \end{tabular} \end{center} \end{table} Fig.~\ref{fig:opt_sol} illustrates optimal clusterings with respect to surprise and modularity on the test instances, Table~\ref{tab:prop_opt} summarizes some of their properties. We also included one slightly larger graph, \texttt{football}($n=115,m=613$), as it has a known, well-motivated \emph{ground truth clustering} and has been evaluated in~\cite{am-dncss-11}. The surprise based clusterings contain significantly more and smaller clusters than the modularity based ones, being \emph{refinements} of the latter in the case of \texttt{karate} and \texttt{lesmis}. Another striking observation is that the surprise based clusterings contain far more \emph{singletons}, i.e. clusters containing only one vertex with usually low degree; this can be explained by the fact that surprise does not take vertex degrees into account and hence, merging low degree vertices into larger clusters causes larger penalties. It reconstructs the ground-truth clustering of the \texttt{football} graph quite well. This confirms the observations of Aldecoa and Mar{\'\i}n based on heuristically found clusterings~\cite{am-dncss-11}; in fact, we can show that for \texttt{karate}, this clustering was already optimal. \section{Conclusion} \label{sec:conclusion} We showed that the problem of finding a clustering of a graph that is optimal with respect to the measure surprise is $\mathcal{NP}$-hard. The observation that surprise is pareto optimal with respect to (maximizing) the number of edges and (minimizing) the number of vertex pairs within clusters yields a (polynomial time) dynamic program on trees. Furthermore, it helps to find exact solutions in small, general graphs via a sequence of ILP computations. The latter can be used to gain insights into the behavior of surprise, independent of any artifacts stemming from a particular heuristic. Moreover, optimal solutions are helpful to assess and validate the outcome of heuristics. \bibliographystyle{abbrv}
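\vspace{2ex} \noindent \textbf{A rough prototype of the successive-ILP approach.} The sketch below is ours and not the code used for the experiments above: it prototypes only the \emph{Exact} variant of Sect.~\ref{sec:algorithms} on a toy instance, uses the open-source PuLP/CBC stack instead of \texttt{gurobi5.1}, and enforces transitivity with plain triangle inequalities over all triples instead of the separator-based Constraint~(\ref{con:transitivity}). It is meant only to make the pipeline concrete, not to reproduce the reported running times.
\begin{verbatim}
# Solve minIP for every feasible k, then keep the k with the best surprise.
import itertools
import pulp
from scipy.stats import hypergeom

nodes = list(range(5))
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)]
p, m = len(nodes) * (len(nodes) - 1) // 2, len(edges)

def min_ip(k):
    """Smallest i_p over clusterings with exactly k intracluster edges."""
    pairs = list(itertools.combinations(nodes, 2))
    prob = pulp.LpProblem("minIP", pulp.LpMinimize)
    x = {pr: pulp.LpVariable("x_%d_%d" % pr, cat="Binary") for pr in pairs}
    prob += pulp.lpSum(x.values())                        # objective: i_p
    prob += pulp.lpSum(x[tuple(sorted(e))] for e in edges) == k
    for u, v in pairs:                                    # transitivity
        for w in nodes:
            if w != u and w != v:
                prob += (x[tuple(sorted((u, w)))]
                         + x[tuple(sorted((w, v)))] - x[(u, v)] <= 1)
    if prob.solve(pulp.PULP_CBC_CMD(msg=False)) != pulp.LpStatusOptimal:
        return None                                       # k is infeasible
    return int(round(pulp.value(prob.objective)))

candidates = []
for k in range(m + 1):
    ip = min_ip(k)
    if ip is not None:
        candidates.append((hypergeom.sf(k - 1, p, ip, m), k, ip))
print(min(candidates))              # (optimal surprise, i_e, i_p)
\end{verbatim}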
{ "timestamp": "2013-10-23T02:09:49", "yymm": "1310", "arxiv_id": "1310.6019", "language": "en", "url": "https://arxiv.org/abs/1310.6019", "abstract": "Clustering graphs based on a comparison of the number of links within clusters and the expected value of this quantity in a random graph has gained a lot of attention and popularity in the last decade. Recently, Aldecoa and Marin proposed a related, but slightly different approach leading to the quality measure surprise, and reported good behavior in the context of synthetic and real world benchmarks. We show that the problem of finding a clustering with optimum surprise is NP-hard. Moreover, a bicriterial view on the problem permits to compute optimum solutions for small instances by solving a small number of integer linear programs, and leads to a polynomial time algorithm on trees.", "subjects": "Data Structures and Algorithms (cs.DS)", "title": "Graph Clustering with Surprise: Complexity and Exact Solutions", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575173068325, "lm_q2_score": 0.721743200312399, "lm_q1q2_score": 0.7091542070320387 }
https://arxiv.org/abs/2006.13677
Accurately approximating extreme value statistics
We consider the extreme value statistics of $N$ independent and identically distributed random variables, which is a classic problem in probability theory. When $N\to\infty$, fluctuations around the maximum of the variables are described by the Fisher-Tippett-Gnedenko theorem, which states that the distribution of maxima converges to one out of three limiting forms. Among these is the Gumbel distribution, for which the convergence rate with $N$ is of a logarithmic nature. Here, we present a theory that allows one to use the Gumbel limit to accurately approximate the exact extreme value distribution. We do so by representing the scale and width parameters as power series, and by a transformation of the underlying distribution. We consider functional corrections to the Gumbel limit as well, showing they are obtainable via Taylor expansion. Our method also improves the description of large deviations from the mean extreme value. Additionally, it helps to characterize the extreme value statistics when the underlying distribution is unknown, for example when fitting experimental data.
\section{Introduction} Extreme value (EV) statistics \cite{Gumbel,Leadbetter,Majumdar,Hansen} is an important subfield of probability theory. Given a random variable $\chi$ which describes the magnitude of a recurring event, the focus is on the statistical properties of the maximal value of a set of $N$ such events. Ever since the foundational work on this problem by Fisher and Tippett \cite{Fisher}, it has continued to attract interest. Problems involving EVs of a large number of random variables are important in many fields of physics \cite{Fortin}, such as brittle fracture \cite{Weibull,HuntMcCartney,Alava}, disordered systems \cite{BouchardMezard,Burioni,Barkai1}, $1/f$ noise \cite{Fyodorov}, renewal processes \cite{Schehr}, long-ranged Ising systems \cite{Mukamel}, condensation \cite{Godreche,Barkai2}, and galaxy clusters \cite{Silk}, as well as a broad range of other applications including meteorology \cite{Abarbanel}, finance \cite{Embrechts,Novak,TracyWidom}, and the immune system \cite{George}. To formulate the problem under discussion, let $\{\chi_1,...,\chi_N\}$ be a set of $N$ independent and identically distributed (IID) unbounded random variables $\chi_i \in (-\infty,\infty)$, with a common cumulative distribution function (CDF) $F(\chi)$, and a probability density function (PDF) $f(\chi)\equiv\text{d}F/\text{d}\chi$ that decreases faster than a power-law for large $\chi$. The maximal value of this set, denoted as $x \equiv \max(\{\chi_1,...,\chi_N\})$, has an exact CDF of $F_N(x)=F^N(x)$. Note that a fixed $x=x_0$ together with an everywhere differentiable $F(\chi)$ leads to a single possible outcome, $\lim_{N\to\infty}F_N(x_0)=0$. However, increasing $x$ brings $F_N(x)$ closer to unity, and thus a nontrivial limit emerges upon taking $N,x\to\infty$ simultaneously. This is attainable by suitably choosing two scaling sequences $b_N$ and $a_N$, while rescaling $x$ as $z \equiv (x-b_N)/a_N$, $z \sim O(1)$, leading to a convergence in distribution of $F_N(b_N+a_N z)$. Namely, \begin{equation} \label{equation: primary definition extreme value} G_N(z) \equiv F_N \left( b_N + a_N z \right) , \quad \lim_{N\to\infty} G_N(z) = G_{\infty}(z) \equiv \exp\left(-e^{-z}\right) , \end{equation} where $G_{\infty}(z)$ is the Gumbel CDF \cite{Fisher}. Therefore, $b_N$ and $a_N$ represent the location and width, respectively, of the EV distribution. Note that we designate the CDF (PDF) of the maximal value $x$ by $F_N$ ($f_N$), whereas distributions of the scaled variable $z$ are designated by $G_N$ ($g_N$), respectively. Importantly, the choice of $b_N$ and $a_N$ is not unique. A second set of sequences $b_N'$ and $a_N'$ can serve as an appropriate candidate if the following conditions hold \cite{Gnedenko}, \begin{equation} \label{equation: conditions on sequences} \lim_{N\to\infty}\frac{b_N-b_N'}{a_N} = 0 , \quad \lim_{N\to\infty}\frac{a_N'}{a_N} = 1 , \end{equation} so that the locations on the scale of the width, and the widths themselves, are asymptotically identical. Even though the limit of $N\to\infty$ is long understood, the convergence rate to the Gumbel form is logarithmic in nature for any $f(\chi)$ which is not purely exponential, including the familiar Gaussian \cite{Hall}. It turns out that this convergence rate is extremely sensitive to the choice of $b_N$ and $a_N$, as we show below. 
Even worse, trying to approximate these sequences for large $N$ results in corrections that involve iterated-logarithm terms, preventing the usage of convergence acceleration techniques such as Pad\'e approximants. Hence, a power-series representation of these sequences can greatly assist in generating an accurate Gumbel approximation to $G_N(z)$ for large, but not exponentially large, $N$s. As one decreases $N$, one finds that no simple Gumbel approximation is satisfactory for even the best choice of $b_N$ and $a_N$, as the distribution $G_N(z)$ increasingly diverges from the asymptotic Gumbel form. One possible workaround is calculating functional corrections to $G_{\infty}(z)$ that allow for accurate capture of the true distribution $G_N(z)$, as we shall demonstrate. However, we first introduce a different method, which we find more efficient: generating the Gumbel approximation for a transformed variable, and using this to construct an approximate distribution for the original variable. In any case, it is clear that to make practical uses of the limit law in the Gumbel case, one always needs to have good estimates of the location and width, namely $b_N$ and $a_N$. In their body of research, Gy\"{o}rgyi, et al. \cite{Gyorgyi1,Gyorgyi2,Gyorgyi3} explored this problem of finite $N$ using a renormalization-group approach. They found the first-order correction of $G_N$ to the Gumbel distribution, and showed that it has a universal structure. By universality, it is meant that this correction has a functional shape that is independent of the underlying distribution $F$, and the $F$-dependence enters only via a numerical prefactor to the functional correction. They also obtained explicit expressions for given general asymptotic shapes of $F$, and showed that the first-order correction contributes to convergence in certain correlated systems (i.e. percolation and $1/f$ noise). However, the importance of an accurate estimation of $b_N$ and $a_N$ was not discussed in these works. In addition, they restricted themselves to the first correction, and indeed obtaining higher order terms using the renormalization-group is not an easy task. Our exposition, then, is comprised of two main parts. The first part centers around an optimal use of the Gumbel distribution, $G_{\infty}(z)$, without a need for functional corrections. It is based primarily on an approximation of the sequences $b_N$ and $a_N$ via power-series expansions, given a general model of stretched or compressed exponential distribution $F$, which also includes the Gaussian. These power series rely only on the behavior of $F(\chi)$ at $\chi\to\infty$, and are expressed in terms of a single large parameter $\beta_N$ that encapsulates all the complicated iterated logarithmic $N$-dependencies by means of the Lambert W-function. In addition, we make a simple change of variables that brings the underlying distribution more to an exponential-like shape, drastically accelerating the convergence rate. This yields closed-form expressions for $b_N$ and $a_N$, working excellently down to $N=50$ (or $500$ for extreme examples) for the scenarios we examined. Our theory speeds up convergence dramatically when compared to the simple $\ln(N)$ scaling of the sequences typically used \cite{Majumdar,Gyorgyi2}. The second part is a procedure for deriving functional corrections of any order to the Gumbel distribution, done by Taylor expanding the double logarithm of the underlying distribution $F$. 
This process yields $G_N$ as depending on numerical coefficients expressed via $F$, providing an arbitrary-order expansion around $G_{\infty}$, and here we explicitly state the second correction. In agreement with Gy\"{o}rgyi, et al., we find that the first correction to the Gumbel distribution has a universal functional shape, with the methods of part one providing a much faster convergence. This part provides us with the ability to approximate the moments of the EV distribution to arbitrary precision. Note that the limit in Eq.~(\ref{equation: primary definition extreme value}) implicitly assumes that $z \sim O(1)$, and thus for finite $N$ the Gumbel form approximates only the bulk of the exact EV distribution $F_N(x)$. To accurately describe the right tail of $F_N(x)$, one needs to exploit large deviation theory \cite{Touchette}. Using a different pair of scaling sequences $s_N$ and $u_N$, one defines \begin{equation} \label{equation: primary definition large deviations 1} H_N(\xi) \equiv 1-F_N \left( s_N \xi \right) , \quad \psi(\xi) \equiv -\lim_{N\to\infty} \frac{1}{u_N}\ln\left[H_N(\xi)\right] , \end{equation} so that \begin{equation} \label{equation: primary definition large deviations 2} H_N(\xi) \approx e^{-u_N\psi(\xi)} \text{ for } N \gg 1 \text{ and } \xi \ge 1 \end{equation} at the distribution's right tail \cite{Rita}. Traditionally, $u_N$ is called the speed and $\psi(\xi)$ is termed the rate function \cite{Vivo}. Usually for large deviations, a $1/N$ scaling is used for the rescaled variable $\xi$, but here a different scaling needs to be applied, $\xi=x/s_N$, with $s_N$ to be determined \cite{Rita,Vivo}. However, the resulting theory suffers from the same convergence problem mentioned for the typical fluctuations of the maximum. Hence, we consider the large deviations regime as well, where we find that reexpressing the $N$-dependence in terms of $\beta_N$ (using the Lambert W-function) resolves this domain's convergence problem. The left tail is more challenging and does not possess a simple large deviation form to the best of our knowledge. Nevertheless, we derive a uniform approximation describing it. It can be regarded as an extreme large deviations principle, in which the PDF's {\em double} logarithm has a large deviation form. We also consider the EV problem from a practical data analysis direction, where we demonstrate that our approach does not require any knowledge of the underlying distribution. Given a data set of maxima which, in principle, is attracted to the Gumbel law in the limit of $N\to\infty$, we describe an algorithm that can be used to extract the EV distribution parameters ($b_N$, $a_N$, and the Taylor coefficients responsible for the functional corrections), while accounting for the change of variables method, and show it works for $N$s as small as $25$ for various examples of the underlying CDF. Finally, we discuss other cases of EVs. Firstly, we deal with the problem of the fastest first-return time \cite{Schuss,Lawley}, which is a case of minimal EV statistics with a lower bound, that nevertheless has a Gumbel limit which is approached extremely slowly. Secondly, we briefly consider the other two EV limits, the Fr\'echet and Weibull distributions, showing how their underlying CDFs can be usefully understood as an exponential CDF of a transformed variable, thereby shedding light on the reason for which the convergence of underlying distributions to these limits is much faster. The rest of this paper is organized as follows. 
In Sec.~\ref{section: uncorrected} we develop our theory that allows for utilization of the Gumbel limit, namely $G_{\infty}(z)$, to accurately predict the EV PDF. We obtain expansions for the sequences $b_N$ and $a_N$ given a general asymptotic behavior of $F$, working down to $N=50$. In Sec.~\ref{section: corrections} we outline our method for deriving arbitrary-order corrections to the Gumbel distribution, obtaining expressions for the first two corrections, and observing their shape. In Sec.~\ref{section: tails} we provide a treatment of the far tails. In Sec.~\ref{section: practical} we discuss the EV statistics from a practical data analysis point of view, presenting a fitting-based method that works when the underlying distribution $F$ is not known. Sec.~\ref{section: others} is dedicated to other cases of EV problems: the minimum case alluded to above and the other two EV limits, the Fr\'echet and Weibull distributions. Lastly, we summarize our results in Sec.~\ref{section: summary}. \section{Fast convergence to the Gumbel limit} \label{section: uncorrected} As stated in the introduction, our primary aim is to obtain an accurate approximation to the EV distribution $f_N(x)$. We lay the foundations for our theory by assuming that the leading large-$\chi$ asymptotic behavior of the common CDF $F(\chi)$ is known. We employ a combination of two techniques for accurately approximating the aforementioned EV PDF. The first one allows for an accurate evaluation of the scaling sequences $b_N$ and $a_N$. As shown below, these can in principle be determined via an inversion of the exact underlying CDF $F(\chi)$, which is assumed here to not be explicitly known. Moreover, even given $F(\chi)$, the $N$ dependence of these parameters is extremely complicated, precluding analytical progress. We present a method that accurately approximates the exact values of these sequences in terms of the Lambert W-function. Nevertheless, for certain types of common distributions, this is not enough, as the convergence rate is inherently even slower than usual. These cases are characterized by being ``far'' from an exponential distribution, a characterization on which we elaborate below in more detail (e.g. a very stretched exponential falls into this category). Here enters our second technique: by performing a transformation of variables aimed at making the underlying CDF more similar to the rapidly converging exponential case, we make the Gumbel limit usable when combined with the first method discussed above. \subsection{Approximating $b_N$ and $a_N$} We start our calculations following Gy\"{o}rgyi \cite{Gyorgyi2}, by rewriting $F(\chi)$ as \begin{equation} \label{equation: l definition} F(\chi) \equiv \exp\left\{-\exp\left[-L(\chi)\right]\right\} , \end{equation} so that $F_N(x)=\exp\left\{-N\exp\left[-L(x)\right]\right\}$. The advantage of this representation is that in the center part of the EV distribution, $L$ can be replaced by a low-order polynomial, and the larger $N$ is, the smaller the higher-order terms' impact. Plugging in $x=b_N+a_N z$ and assuming that $b_N\gg a_N$ with $z\sim O(1)$, such that $x\approx b_N$, we can expand \begin{equation} \label{equation: l expansion} L(x) = \sum_{n=0}^{\infty} \frac{1}{n!}L^{(n)}(b_N)(a_N z)^n = \sum_{n=0}^{\infty} \frac{c_n}{n!}z^n , \end{equation} where $L^{(n)}(x)\equiv\text{d}^nL(x)/\text{d}x^n$.
As we are interested in the Gumbel limit, it is natural to define the scaling sequences as \begin{equation} \label{equation: normalization sequences definition} \exp\left[L(b_N)\right] = N , \quad a_N = \frac{1}{L^{(1)}(b_N)} , \quad c_n(b_N) \equiv \frac{L^{(n)}(b_N)}{[L^{(1)}(b_N)]^n} , \end{equation} since then $G_N(z) = \exp\{-\exp[-z+O(c_2)]\}$. The key point is that for the broad class of generalized (stretched or compressed) exponential distributions, $L(\chi) \propto \chi^\nu$ with $\nu>0$, so that $b_N \propto [\ln(N)]^{1/\nu} \gg 1$ and $c_2 \propto 1/b_N^{\nu} \ll 1$ for large $N$. While this particular choice of $b_N$ and $a_N$ has a degree of arbitrariness, as explained above, it is crucial that any approximation of $b_N$, which we can denote by $b_N'$, satisfies that $|b_N-b_N'|/a_N$ be reasonably small, say less than $0.1$, for all $N$s of interest. We shall now see that this is not true for the naive large-$N$ approximation defined below, henceforth referred to as the ``standard'' approximation, even though it is true asymptotically for extremely large $N$. Our first task will be to address this challenge. \begin{figure} \includegraphics[width=1.0\textwidth]{GumErr.pdf} \caption{The scaled error in $b_N$, relative to the width $a_N$, $\delta \equiv |b_N-b_N'|/a_N$, where $b_N'$ is given for the standard approximation by Eq.~(\ref{equation: sequences solved simple}) (green circles), for the Lambert approximation by Eq.~(\ref{equation: normalization sequences zero order}) (blue disks), and for the series expansions by Eq.~(\ref{equation: normalization sequences expansion}) (red triangles, red squares when $[1/2]$ Pad\'e is used). We used the subfamily of PDFs given by Eqs.~(\ref{equation: example distributions 1}), (\ref{equation: example distributions 2}), and (\ref{equation: example distributions 3}), for (a) $\nu=2$, (b) $\nu=1/2$, (c) $\nu=5$, and (d) $\nu=1/5$.} \label{figure: delta an} \end{figure} One might think that one needs to know the exact underlying distribution to generate satisfactory approximations to $b_N$ and $a_N$, but this is not the major stumbling block. Let us consider for the present the family of distributions with the large-$\chi$ asymptotic behavior \begin{equation} \label{equation: extended exponential asymptotics simple} 1 - F(\chi) \simeq \frac{e^{-C\chi^{\nu}}}{\chi^{\theta}} D_0 , \end{equation} with $\nu,C,D_0>0$ and $\theta \in \mathbb{R}$, which is a fairly general form of $F(\chi)$ that nevertheless keeps the expressions manageable. This includes the stretched (for $0<\nu<1$) and compressed (for $\nu>1$) exponential distributions, and in particular, the Gaussian and Gamma distributions. Note that knowing the values of $\nu$, $C$, $D_0$, and $\theta$ in Eq.~(\ref{equation: extended exponential asymptotics simple}) is a minimal requirement needed to make our theory presented in Secs.~\ref{section: uncorrected}-\ref{section: tails} usable. In Sec.~\ref{section: practical} we present a method for which no knowledge of the asymptotic form of the underlying CDF is required.
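As a practical aside (ours, not part of the derivation), the ``exact'' $b_N$ and $a_N$ of Eq.~(\ref{equation: normalization sequences definition}) are straightforward to obtain numerically for any explicitly known $F$: the condition $\exp[L(b_N)]=N$ is equivalent to $F(b_N)=e^{-1/N}$, and $L^{(1)}(x)=f(x)/[-F(x)\ln F(x)]$, so that $a_N=-F(b_N)\ln F(b_N)/f(b_N)$. A minimal sketch for the standard Gaussian, assuming Python with \texttt{scipy} (all names are ours):
\begin{verbatim}
# "Exact" b_N and a_N, defined by exp[L(b_N)] = N and a_N = 1/L'(b_N),
# evaluated numerically for the standard Gaussian.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def exact_b_a(N, cdf=norm.cdf, pdf=norm.pdf):
    # exp[L(b_N)] = N  is equivalent to  F(b_N) = exp(-1/N)
    b = brentq(lambda x: cdf(x) - np.exp(-1.0 / N), 0.0, 50.0)
    # a_N = 1/L'(b_N) = -F(b_N) ln F(b_N) / f(b_N)
    a = (np.exp(-1.0 / N) / N) / pdf(b)
    return b, a

for N in (50, 10**3, 10**6):
    print(N, exact_b_a(N))
\end{verbatim}
These numerically obtained values serve as the benchmark against which the approximations below can be compared.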
Working to sub-sub-leading order, evaluating Eq.~(\ref{equation: normalization sequences definition}) using Eqs.~(\ref{equation: l definition}) and (\ref{equation: extended exponential asymptotics simple}) yields \begin{align} \label{equation: sequences solved simple} &b_N \simeq b_N^{\rm s} \equiv \left[\frac{\ln(N)}{C}\right]^{1/\nu} \left\{ 1 - \frac{\theta\ln[\ln(N)]}{\nu^2\ln(N)} + \frac{\ln(D_0C^{\theta/\nu})}{\nu\ln(N)}\right\} , \nonumber \\ &a_N \simeq a_N^{\rm s} \equiv \frac{1}{\nu C^{1/\nu} [\ln(N)]^{1-1/\nu}} \left\{ 1 - \frac{(1-\nu)\theta\ln[\ln(N)]}{\nu^2\ln(N)} + \frac{(1-\nu)\ln(D_0C^{\theta/\nu})-\theta}{\nu\ln(N)} \right\} , \end{align} where the superscript ``s" stands for the standard approach. These coincide with the known formulas for the Gaussian case found in Ref.~\cite{Hall}, and with the leading order result given in Ref.~\cite{Majumdar}. Note that the $O[1/\ln(N)]$ correction to $b_N$ is necessary to satisfy the criterion on $b_N$ Eq.~(\ref{equation: conditions on sequences}), whereas the correction to $a_N$ is not needed to satisfy the corresponding demand on $a_N$. To see how well these equations work in practice, we test them for a particular subfamily of distributions satisfying Eq.~(\ref{equation: extended exponential asymptotics simple}), with a PDF given by \begin{equation} \label{equation: example distributions 1} f(\chi) = \frac{\nu}{2\Gamma(1/\nu)} \sqrt{\frac{\Gamma(3/\nu)}{\Gamma(1/\nu)}} \exp\left\{-\left[\frac{\Gamma(3/\nu)}{\Gamma(1/\nu)}\right]^{\nu/2}|\chi|^{\nu}\right\} \end{equation} over the domain $\chi\in(-\infty,\infty)$, where the parameters governing the large $\chi$ asymptotics are \begin{equation} \label{equation: example distributions 2} \theta=\nu-1 , \quad C = \left[\frac{\Gamma(3/\nu)}{\Gamma(1/\nu)}\right]^{\nu/2} , \quad D_0=\frac{C^{(1-\nu)/\nu}}{2\Gamma(1/\nu)} , \end{equation} and $\Gamma(\cdot)$ is the gamma function. Note that this subfamily has zero mean and unit standard deviation. In particular, $\nu=2$ is the standard Gaussian. In Fig.~\ref{figure: delta an}, we present the scaled error in $b_N$, $\delta^{\rm s} \equiv |b_N - b_N^{\rm s}|/a_N$, for the cases (a) $\nu=2$ (standard Gaussian, compressed exponential), (b) $\nu=1/2$ (stretched exponential), (c) $\nu=5$ (super-compressed exponential), and (d) $\nu=1/5$ (super-stretched exponential). Note that $b_N$ and $a_N$ denote the ``exact" values satisfying Eq.~(\ref{equation: normalization sequences definition}). We see that for (b), (c), and (d), the error $\delta^{\rm s}$ (green circles) remains above $10\%$ even for $N$s as large as $10^6$. In fact, the error does not fall below $10\%$ until $N \approx 10^{60}$ for (b), $N \approx 10^{43}$ for (c), and $N\approx 10^{12500}$ for (d). This unfortunate situation is true for other $\nu$s as well, and keeps deteriorating the further one is from $\nu=1$. The way out of this dilemma is actually quite simple. One can directly solve the approximate equation \begin{equation} \label{equation: normalization sequences zero order def} \frac{e^{-C\beta_N^{\nu}}}{\beta_N^{\theta}} D_0 = \frac{1}{N} , \end{equation} which replaces the exact $L(\chi)$ in Eq.~(\ref{equation: normalization sequences definition}) by its leading-order large-$\chi$ approximation. 
The solution, which we denoted above by $\beta_N$, can be expressed in terms of the Lambert W-function which obeys $\text{W}(\eta)\exp[\text{W}(\eta)]=\eta$, giving \begin{equation} \label{equation: normalization sequences zero order} \beta_N \equiv \left\{ \begin{aligned} &\left\{ \frac{\theta}{\nu C} \text{W}_0 \left[ \frac{\nu C}{\theta} \left( D_0N \right)^{\nu/\theta} \right] \right\}^{1/\nu} & \theta>0 \\ &\left[ \frac{1}{C} \ln\left(D_0N\right) \right]^{1/\nu} & \theta=0 \\ &\left\{ \frac{\theta}{\nu C} \text{W}_{-1} \left[ \frac{\nu C}{\theta} \left( D_0N \right)^{\nu/\theta} \right] \right\}^{1/\nu} & \theta < 0 \end{aligned} \right. . \end{equation} Here, $\text{W}_0(\cdot)$ is the Lambert W-function's primary real branch, which has an asymptotic expansion for $\eta\to\infty$ given by $\text{W}_0(\eta)\sim\ln(\eta)-\ln[\ln(\eta)]$, whereas $\text{W}_{-1}(\cdot)$ is the Lambert W-function's secondary real branch, which is defined on the interval $[-1/e,0)$ and has an asymptotic expansion for $\eta\to 0^-$ given by $\text{W}_{-1}(\eta)\sim\ln(-\eta)-\ln[-\ln(-\eta)]$~\cite{DLMF}. By virtue of this asymptotic behavior, $b_N^{\rm s}$ as given in Eq.~(\ref{equation: sequences solved simple}) can be retrieved from Eq.~(\ref{equation: normalization sequences zero order}). The advantage of this formula is clear, as the entire $N$-dependence is encapsulated in the single parameter $\beta_N$. This ``Lambert" approximation for $b_N$ performs much better than $b_N^{\rm s}$, as can be seen in Fig.~\ref{figure: delta an} (blue disks), where the Lambert error, $\delta^{\rm L} \equiv |b_N - \beta_N|/a_N$, is plotted together with $\delta^{\rm s}$. We see that for (b), $\delta^{\rm L}$ falls below $10\%$ already at $N\approx2800$, an improvement of roughly $57$ orders of magnitude in the range of $N$ where the approximation is useful. Similarly for $\nu=5$, $\delta^{\rm L}$ falls below $10\%$ for $N \approx 36000$. \begin{figure} \includegraphics[width=1.0\textwidth]{GumZer.pdf} \caption{The PDF of the maximal value $x$ of (a,b) $N=50$ and (c,d) $N=500$ IID random variables with common PDFs given by Eq.~(\ref{equation: example distributions 1}) for four values of $\nu$: (a) $2$, (b) $1/2$, (c) $5$, and (d) $1/5$. The exact values (dotted black) are compared to the two different types of approximations: the standard $\ln(N)$ expansion given by $f_N(x) \simeq (1/a_N^{\rm s}) g_{\infty}[(x-b_N^{\rm s})/a_N^{\rm s}]$ (dashed blue), and the transformed Lambert method given by Eqs.~(\ref{equation: normalization sequences expansion change variables}), (\ref{equation: normalization sequences zero order change variables}), and (\ref{equation: fn via gumbel change variables}) (solid red). For the latter and in the case of $\nu>1$, a $[1/2]$ Pad\'e approximants in the variable $(\beta_N^w)^{-1}$ are implied for Eq.~(\ref{equation: normalization sequences expansion change variables}). The transformed Lambert method clearly holds very well already for intermediate $N$s. As stated, an underlying distribution which is far from a pure exponential of $\nu=1$ has a slower convergence rate, see Fig.~\ref{figure: gumbel zero larger} in appendix \ref{section: larger} for a replot using larger $N$s.} \label{figure: gumbel zero} \end{figure} To improve the quality of our approximation for $b_N$ yet further, and widen the range of $N$s we can treat, we must utilize more knowledge of the asymptotic behavior of $F(\chi)$. 
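Before doing so, we remark that Eq.~(\ref{equation: normalization sequences zero order}) is simple to implement with standard scientific libraries. A minimal Python transcription of its three branches (not part of the derivation; it assumes SciPy's \texttt{lambertw}, and the function name is an illustrative choice) reads:
\begin{verbatim}
import numpy as np
from scipy.special import lambertw

def beta_N(N, nu, C, theta, D0):
    # Eq. (normalization sequences zero order): leading-order solution of
    # D0 * exp(-C b^nu) / b^theta = 1/N in terms of the Lambert W-function
    if theta == 0.0:
        return (np.log(D0 * N) / C) ** (1.0 / nu)
    branch = 0 if theta > 0 else -1   # W_0 for theta > 0, W_{-1} for theta < 0
    arg = (nu * C / theta) * (D0 * N) ** (nu / theta)
    # for theta < 0 the argument lies in [-1/e, 0) only for sufficiently large N
    return (theta * lambertw(arg, k=branch).real / (nu * C)) ** (1.0 / nu)
\end{verbatim}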
For example, if we assume the asymptotic expansion has the form \begin{equation} \label{equation: extended exponential asymptotics} 1 - F(\chi) \simeq \frac{e^{-C\chi^{\nu}}}{\chi^{\theta}}D_0 \left(1+\frac{D_1}{\chi^{\nu}}+\frac{D_2}{\chi^{2\nu}}\right) , \end{equation} where for our example family of distributions \begin{equation} \label{equation: example distributions 3} D_1 = -\left(1-\frac{1}{\nu}\right)\frac{1}{C} , \quad D_2 = \left(1-\frac{1}{\nu}\right)\left(2-\frac{1}{\nu}\right)\frac{1}{C^2} , \end{equation} then we can make additional progress. The key here is to express the expansion not in terms of $N$, but in terms of $\beta_N$, our zeroth order Lambert approximation for $b_N$. We can similarly express $a_N$ as well in terms of $\beta_N$. We find to order $1/\beta_N^{3\nu}$, \begin{align} \label{equation: normalization sequences expansion} &b_N \simeq b_N^{\rm L,3} \equiv \beta_N \left[1+\frac{D_1}{\nu C\beta_N^{2\nu}}-\frac{2\theta D_1 + \nu C (D_1^2-2D_2)}{2\nu^2C^2\beta_N^{3\nu}}\right] , \\ &a_N \simeq a_N^{\rm L,3} \equiv \frac{1}{\nu C\beta_N^{\nu-1}} \left[1-\frac{\theta}{\nu C\beta_N^{\nu}} + \frac{\theta^2-\nu(2\nu-1) CD_1}{\nu^2C^2\beta_N^{2\nu}} - \frac{2\theta^3-2\nu\theta(5\nu-2)CD_1-\nu^2(3\nu-1)C^2(D_1^2-2D_2)}{2\nu^3C^3\beta_N^{3\nu}}\right] , \nonumber \end{align} These expansions have the added advantage over the standard approximation, in addition to the higher accuracy of the zeroth-order term, that they are standard power series in $\beta_N^{-\nu}$, with no iterated logarithm terms. This means that if needed, one can use techniques such as Pad\'e approximants to help accelerate the convergence rate. We find that for the compressed cases of $\nu>1$, the $[1/2]$ Pad\'e approximants of the sequences in Eq.~(\ref{equation: normalization sequences expansion}) perform better than the regular power series, whereas for the stretched cases of $0<\nu<1$, it is better to use the series expansions as expressed above. The scaled error $\delta^{\rm L,3}=|b_N-b_N^{\rm L,3}|/a_N$ is also indicated in Fig.~\ref{figure: delta an} (red triangles, red squares for Pad\'e form), where we see a drastic improvement in the scaled error for all cases. \subsection{Changing variables} As we saw, at leading order the EV distribution can be approximated by the Gumbel distribution characterized by the two parameters, $b_N$ and $a_N$. However, the further $\nu$ departs from unity, the more the shape of the distribution deviates from Gumbel. This is related to the fact that as $\nu \to 0$, the distribution acquires a fat tail and the Gumbel description breaks down, with the scaling limit being a Fr\'echet distribution. Similarly, as $\nu\to\infty$, the distribution becomes compact, with a Weibull scaling limit. In other words, this situation occurs for common distributions that have an $L$ which is far from a linear function, causing in turn the Taylor approximation Eq.~(\ref{equation: l expansion}) to fail. This problem can be seen in Fig.~\ref{figure: gumbel zero}, where not only is the peak location poorly given by the standard approximation for all but the Gaussian case, but the shape is distinctly different from that of the Gumbel distribution in the non-Gaussian cases. A simple remedy for this problem is given by the expedient of changing variables as $\omega\sim\chi^{\nu}$ for $\chi\to\infty$, in terms of which the underlying distribution has a simple exponential falloff as its dominant behavior. 
Consequently, the EV distribution of $w \sim x^{\nu}$ for $x\to\infty$ is well described by a Gumbel distribution, with parameters $b_N^w = b_N^\nu$ and $a_N^w = \nu b_N^{\nu-1}a_N$, namely \begin{align} \label{equation: normalization sequences expansion change variables} &b_N^w \simeq \beta_N^w \left[1+\frac{D_1}{C(\beta_N^w)^2}-\frac{2\theta_wD_1 + C(D_1^2-2D_2)}{2C^2(\beta_N^w)^3}\right] , \nonumber \\ &a_N^w \simeq \frac{1}{C} \left[1-\frac{\theta_w}{C\beta_N^w} + \frac{\theta_w^2-CD_1}{C^2(\beta_N^w)^2} - \frac{2\theta_w^3-6\theta_wCD_1-2C^2(D_1^2-2D_2)}{2C^3(\beta_N^w)^3}\right] , \end{align} where $\theta_w \equiv \theta/\nu$, and \begin{equation} \label{equation: normalization sequences zero order change variables} \beta_N^w \equiv \left\{ \begin{aligned} & \frac{\theta_w}{C} \text{W}_0 \left[ \frac{C}{\theta_w} \left( D_0N \right)^{1/\theta_w} \right] & \theta_w>0 \\ & \frac{1}{C} \ln\left(D_0N\right) & \theta_w=0 \\ & \frac{\theta_w}{C} \text{W}_{-1} \left[ \frac{C}{\theta_w} \left( D_0N \right)^{1/\theta_w} \right] & \theta_w < 0 \end{aligned} \right. . \end{equation} Note that the scaled error of $b_N^w$ is equal to that of $b_N$ to leading order, hence our previous work in approximating $b_N$ directly carries over. The Gumbel distribution in $w$ translates directly to our new PDF for $x$, \begin{equation} \label{equation: fn via gumbel change variables} f_N(x) = \nu x^{\nu-1} f_N^w\left(x^{\nu}\right) = \frac{\nu x^{\nu-1}}{a_N^w} g_N^w\left(\frac{x^{\nu}-b_N^w}{a_N^w}\right) \simeq \frac{\nu x^{\nu-1}}{a_N^w} g_{\infty}\left(\frac{x^{\nu}-b_N^w}{a_N^w}\right) . \end{equation} Figure~\ref{figure: gumbel zero} shows the EV PDFs $f_N(x)$ for the four examples stated above. The exact values are compared to the standard Gumbel approximation given by $f_N(x) \simeq (1/a_N^{\rm s}) g_{\infty}[(x-b_N^{\rm s})/a_N^{\rm s}]$, and to our transformed Lambert approximation given by Eqs.~(\ref{equation: normalization sequences expansion change variables}), (\ref{equation: normalization sequences zero order change variables}), and (\ref{equation: fn via gumbel change variables}). As with Eq.~(\ref{equation: normalization sequences expansion}), a $[1/2]$ Pad\'e approximant in the variable $(\beta_N^w)^{-1}$ was applied to Eq.~(\ref{equation: normalization sequences expansion change variables}) for the compressed cases. We changed variables according to $w = \text{sign}(x)|x|^{\nu}$, which is consistent with the asymptotics $w \sim x^{\nu}$ described above. The combination of the Lambert scaling and the variable transformation matches the exact results excellently, without applying any corrections to the Gumbel distribution. In appendix \ref{section: larger}, we replot all panels with an $N$ that is larger by a factor of $10^3$, see Fig.~\ref{figure: gumbel zero larger}, demonstrating the slow rate of convergence for the standard approximation. Note that neither of the two methods discussed above performs as well on its own; hence, they are complementary. \begin{figure} \includegraphics[width=1.0\textwidth]{GumCor.pdf} \caption{The first and second corrections to the Gumbel approximation of the EV PDFs, relative to the maximum value $1/e$ of the Gumbel distribution, for the four test distributions of Eq.~(\ref{equation: example distributions 1}) with $\nu=$ (a) $2$, (b) $1/2$, (c) $5$, and (d) $1/5$, where $N=$ (a,b) $100$ and (c,d) $500$.
The first correction $\Delta_1$ (dashed blue) defined in Eq.~(\ref{equation: delta 1 definition}) has a magnitude of $\approx 10\%$ with respect to the maximal value of $g_{\infty}(z)$, $1/e$. It follows well its predicted shape (blue circles) seen on the right hand side of Eq.~(\ref{equation: delta 1 definition}). The second correction $\Delta_2$ (solid red), defined in Eq.~(\ref{equation: delta 2 definition}), is multiplied by $3$ for visibility. It follows its predicted shape (red disks) seen on the right hand side of Eq.~(\ref{equation: delta 2 definition}), and has a smaller magnitude than the first-order correction. Here we used the exact values of $b_N$, $a_N$, $c_2$, and $c_3$.} \label{figure: corrections delta} \end{figure} \begin{figure} \includegraphics[width=1.0\textwidth]{GumCorTr.pdf} \caption{The first and second corrections to the transformed Gumbel approximation of the EV PDFs, relative to the maximum value $1/e$ of the Gumbel distribution, for the four test distributions of Eq.~(\ref{equation: example distributions 1}) with $\nu=$ (a) $2$, (b) $1/2$, (c) $5$, and (d) $1/5$, where $N=$ (a,b) $100$ and (c,d) $500$. The first correction $\Delta_1^w$ (dashed blue) defined in Eq.~(\ref{equation: delta 1 definition}) has a magnitude of $\approx 1\%$ with respect to the maximal value of $g_{\infty}(z)$, $1/e$. It follows well its predicted shape (blue circles) seen on the right hand side of Eq.~(\ref{equation: delta 1 definition}). The second correction $\Delta_2^w$ (solid red), defined in Eq.~(\ref{equation: delta 2 definition}), is multiplied by $3$ for visibility. It follows its predicted shape (red disks) seen on the right hand side of Eq.~(\ref{equation: delta 2 definition}), and has a smaller magnitude than the first-order correction. We used the exact values of $b_N^w$, $a_N^w$, $c_2^w$, and $c_3^w$. The magnitude of the corrections after performing the change of variables is noticeably smaller for all cases.} \label{figure: corrections change} \end{figure} \section{Corrections to the Gumbel distribution} \label{section: corrections} We now consider corrections to the Gumbel distribution itself. Let us continue from Eqs.~(\ref{equation: l expansion}) and (\ref{equation: normalization sequences definition}) by taking one additional term from the expansion of $L$. In what follows, we suppress the argument $b_N$ of $c_n(b_N)$. We obtain the Gumbel distribution to linear order along with the first correction in $b_N$, \begin{equation} \label{equation: general first correction} G_N(z) \simeq G_{\infty}(z) \left[ 1 + c_2\frac{z^2}{2}e^{-z} \right] , \end{equation} which leads to the approximate PDF \begin{equation} \label{equation: general first correction pdf} g_N(z) \simeq g_{\infty}(z) \left[ 1 + c_2\frac{z}{2}\left(2-z+e^{-z}z\right) \right] . \end{equation} This first order correction is already known from the renormalization-group works by Gy\"{o}rgyi, et al. \cite{Gyorgyi1,Gyorgyi2,Gyorgyi3}. Indeed, we see that it has a universal functional shape, while the numerical prefactor $c_2$ depends on the specifics of the underlying distribution $F$. The second order correction relies on the additional numerical parameter $c_3$. In the renormalization-group language, each additional term comes from a subdominant eigenvalue of the renormalization operator, but here the procedure is simply a Taylor expansion of the appropriate function, namely $L$. 
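For reference, the zeroth-order transformed approximation of Eq.~(\ref{equation: fn via gumbel change variables}) and the first-corrected PDF of Eq.~(\ref{equation: general first correction pdf}) are equally easy to evaluate. A minimal Python sketch (function names are illustrative, and $x>0$ is assumed, as is appropriate in the bulk region) reads:
\begin{verbatim}
import numpy as np

def gumbel_pdf(z):
    # g_infty(z) = exp(-z - e^{-z}), the limiting Gumbel PDF
    return np.exp(-z - np.exp(-z))

def fN_transformed(x, nu, bNw, aNw):
    # Eq. (fn via gumbel change variables): Gumbel in w = x^nu mapped back to x
    z = (x ** nu - bNw) / aNw
    return nu * x ** (nu - 1.0) * gumbel_pdf(z) / aNw

def gN_first_corrected(z, c2):
    # Eq. (general first correction pdf): Gumbel PDF plus the O(c_2) term
    return gumbel_pdf(z) * (1.0 + 0.5 * c2 * z * (2.0 - z + np.exp(-z) * z))
\end{verbatim}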
Using Eqs.~(\ref{equation: normalization sequences definition}) and (\ref{equation: extended exponential asymptotics simple}), one can show that $c_2 \sim (\nu-1)/[\nu\ln(N)]$ and that $c_2^w = c_2 + (1-\nu)a_N/b_N \sim -\theta_w/[\ln(N)]^2$, which occurs as the transformation of variables gives an effective $\nu=1$. Hence, the transformed coefficient $c_2^w$ is down by an additional factor of $1/\ln(N)$. In order to illustrate this first correction, we define \begin{align} \label{equation: delta 1 definition} \Delta_1 &\equiv \frac{1}{1/e}\left[a_N f_N\left( b_N + a_N z \right) - g_{\infty}(z) \vphantom{\frac{1}{1}}\right] \simeq \frac{c_2}{1/e} g_{\infty}(z) \frac{z}{2}\left(2-z+e^{-z}z \vphantom{\frac{1}{1}}\right) , \nonumber \\ \Delta_1^w &\equiv \frac{1}{1/e}\left[a_N^w f_N^w\left( b_N^w + a_N^w z \right) - g_{\infty}(z) \vphantom{\frac{1}{1}}\right] \simeq \frac{c_2^w}{1/e} g_{\infty}(z) \frac{z}{2}\left(2-z+e^{-z}z \vphantom{\frac{1}{1}}\right) . \end{align} These are the differences between the exact EV PDF for the scaled variable $z$ and the Gumbel approximation, normalized to the maximal value of $g_{\infty}(z)$, $1/e$, where the superscript $w$ denotes the variable change $w = \text{sign}(x)|x|^{\nu}$. Figures~\ref{figure: corrections delta} and \ref{figure: corrections change} show $\Delta_1$ and $\Delta_1^w$ (dashed blue), respectively, for our four examples, together with the predicted shapes of the first correction as given in Eq.~(\ref{equation: delta 1 definition})'s right hand side (blue circles). The differences follow the predicted curves well, and one can see that the relative magnitude of the first correction significantly reduces when applying the variable change, from $\sim 10\%$ to $\sim 1\%$. \begin{figure} \includegraphics[width=1.0\textwidth]{GumTail.pdf} \caption{The left and right tails of the PDF of the maximal value $x$ of $N=500$ IID random variables with common PDFs given by Eq.~(\ref{equation: example distributions 1}) for four values of $\nu$: (a) $2$, (b) $1/2$, (c) $5$, and (d) $1/5$. The exact values (dotted black) excellently match our Lambert scaled approximation for the right tail (thick dashed blue), given by Eqs.~(\ref{equation: normalization sequences zero order change variables}) and (\ref{equation: large deviations principle extended}) in the relevant regime. Also seen are the large deviation results of Ref.~\cite{Rita} for the right tail, given by Eq.~(\ref{equation: large deviations principle simple}) (short-dashed green). The uniform approximation of the EV PDF (solid red), given by Eqs.~(\ref{equation: normalization sequences zero order change variables}) and (\ref{equation: extended uniform approximation solution}), nicely match with the exact values for all relevant $x$s.} \label{figure: tails approx} \end{figure} Next, we demonstrate the second-order correction, going beyond the renormalization-group calculations of Refs.~\cite{Gyorgyi1,Gyorgyi2,Gyorgyi3}. One can show that $c_3 \sim (\nu-1)(\nu-2)/[\nu\ln(N)]^2$ and that $c_3^w = c_3 - 3(\nu-1)c_2a_N/b_N + (\nu-1)(2\nu-1)(a_N/b_N)^2 \sim 2\theta_w/[\ln(N)]^3$, which occurs for the same reason as before. Thus, $c_3^w$ is down by an additional factor of $1/\ln(N)$, and the second-order correction to the EV PDF will be different if one applies this change of variables. 
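The relations just quoted translate directly into numerical estimates of the transformed coefficients. A small Python sketch (taking the exact $b_N$, $a_N$, $c_2$, and $c_3$ as inputs, as in the figures; the function name is illustrative) reads:
\begin{verbatim}
def transformed_coefficients(nu, bN, aN, c2, c3):
    # c_2^w and c_3^w in terms of the original-variable quantities quoted above
    r = aN / bN
    c2w = c2 + (1.0 - nu) * r
    c3w = c3 - 3.0 * (nu - 1.0) * c2 * r + (nu - 1.0) * (2.0 * nu - 1.0) * r ** 2
    return c2w, c3w
\end{verbatim}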
For the original variable, extracting yet another term from Eq.~(\ref{equation: l expansion}), we arrive at the approximate CDF \begin{equation} \label{equation: general second correction} G_N(z) \simeq G_{\infty}(z) \left[1 + c_2\frac{z^2}{2}e^{-z} + c_3\frac{z^3}{6}e^{-z} - c_2^2\frac{z^4}{8}e^{-z}\left(1-e^{-z}\right) \right] , \end{equation} with an approximate PDF of \begin{equation} \label{equation: general second correction pdf} g_N(z) \simeq g_{\infty}(z) \left\{1 + c_2\frac{z}{2}\left(2-z+e^{-z}z\right) + c_3\frac{z^2}{6}\left(3-z+e^{-z}z\right) - c_2^2\frac{z^3}{8} \left[4-z-e^{-z}\left(4-3z+e^{-z}z\right)\right] \right\} . \end{equation} Note that in the language of the transformed variable $w$, the term proportional to $c_2^2$ is of a higher order. Hence, in this representation, the second-order correction is also of universal behavior. In order to illustrate this correction, we define \begin{align} \label{equation: delta 2 definition} \Delta_2 &\equiv \frac{1}{1/e}\left\{a_N f_N\left( b_N + a_N z \right) - g_{\infty}(z)\left[ 1 + c_2\frac{z}{2}\left(2-z+e^{-z}z\right) \right] \right\} \nonumber \\ &\simeq \frac{g_{\infty}(z)}{1/e} \left\{ c_3\frac{z^2}{6}\left(3-z+e^{-z}z\right) - c_2^2\frac{z^3}{8} \left[4-z-e^{-z}\left(4-3z+e^{-z}z\right)\right] \right\} , \nonumber \\ \Delta_2^w &\equiv \frac{1}{1/e}\left\{a_N^w f_N^w\left( b_N^w + a_N^w z \right) - g_{\infty}(z) \left[ 1 + c_2^w\frac{z}{2}\left(2-z+e^{-z}z\right) \right] \right\} \simeq \frac{c_3^w}{1/e} g_{\infty}(z) \frac{z^2}{6}\left(3-z+e^{-z}z \vphantom{\frac{1}{1}}\right) . \end{align} These are the differences between the exact EV PDF and the first order Gumbel approximation in the $z$ coordinate, normalized to the maximal value of $g_{\infty}(z)$, $1/e$, where the superscript $w$ denotes the variable change $w = \text{sign}(x)|x|^{\nu}$. Figures~\ref{figure: corrections delta} and \ref{figure: corrections change} show $\Delta_2$ and $\Delta_2^w$ (solid red), respectively, for our four examples, together with the predicted shapes of the second correction as given in Eq.~(\ref{equation: delta 2 definition})'s right hand side (red disks). The differences follow the predicted curves, and one can see that the relative magnitude of the second correction significantly reduces when changing variables. We conclude this section with a calculation of the EV distribution's moments, which are given by \begin{equation} \label{equation: moments definition} \left< x^m \right> \equiv \int_{-\infty}^{\infty}\text{d}x \, f_N(x) x^m . \end{equation} As done above, we change variables to $w=|x|^{\nu}\text{sign}(x)$, with an inverse of $x=|w|^{1/\nu}\text{sign}(w)$. Then, Eq.~(\ref{equation: moments definition}) becomes \begin{align} \label{equation: moments expansion variable change} \left< x^m \right> \simeq (b_N^w)^{m_w} \sum_{n=0}^{\infty} \frac{\Gamma(n-m_w)}{n!\Gamma(-m_w)} \left(-\frac{a_N^w}{b_N^w}\right)^n \int_{-\infty}^{\infty}\text{d}z \, g_N^w(z) z^n , \end{align} up to exponentially small corrections, where $m_w \equiv m/\nu$. Integrating the PDF Eq.~(\ref{equation: general first correction pdf}) and plugging in the expansions for $b_N^w$, $a_N^w$, and $c_2^w$ (not shown) gives for the $m$th moment \begin{equation} \label{equation: moments expansion} \left< x^m \right> \simeq (\beta_N^w)^{m_w} \left[ 1 + \frac{m_w\gamma}{C\beta_N^w} + \frac{m_w(m_w-1)(6\gamma^2+\pi^2)-12m_w\theta_w\gamma +12m_wCD_1}{12C^2(\beta_N^w)^2} \right] , \end{equation} where $\gamma\approx 0.5772$ is the Euler–Mascheroni constant. 
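As an illustration, Eq.~(\ref{equation: moments expansion}) is immediate to evaluate once $\beta_N^w$ is known; for the asymptotics of Eq.~(\ref{equation: extended exponential asymptotics simple}), $\beta_N^w$ simply equals $\beta_N^{\nu}$. A minimal Python sketch (function name illustrative) reads:
\begin{verbatim}
import numpy as np

EULER_GAMMA = 0.57721566490153286

def moment_estimate(m, nu, C, theta, D1, betaNw):
    # Eq. (moments expansion): <x^m> to second order in 1/beta_N^w,
    # with m_w = m/nu and theta_w = theta/nu
    mw, tw, g = m / nu, theta / nu, EULER_GAMMA
    return betaNw ** mw * (1.0 + mw * g / (C * betaNw)
        + (mw * (mw - 1.0) * (6.0 * g**2 + np.pi**2)
           - 12.0 * mw * tw * g + 12.0 * mw * C * D1)
          / (12.0 * C**2 * betaNw**2))
\end{verbatim}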
An important advantage of our series expansion is that it allows one to obtain higher-order corrections to Eq.~(\ref{equation: general first correction pdf}) rather easily, see e.g. Eq.~(\ref{equation: general second correction pdf}), hence Eq.~(\ref{equation: moments expansion}) can be extended to arbitrary orders. \begin{figure} \includegraphics[width=1.0\textwidth]{GumChart.pdf} \caption{A flowchart describing a suggested algorithm for fitting a numerical data set of $M$ maxima to our theory. In step (ii), $\lfloor\cdot\rfloor$ denotes the $\text{floor}(\cdot)$ function. In step (iv), the value of $n_*$ determines the highest order correction term of the Gumbel distribution to be obtained. In step (vi), the value of $\nu_*$ stands for the optimal variable change exponent, and generally does not have to be an integer. In step (viii), the EV PDF referred to is given by Eq.~(\ref{equation: fn via gumbel change variables}). Note that in all of the considered examples, the $\{c_n^w\}$ parameters were not needed to yield a good match, see Figs.~\ref{figure: gumbel fit} and \ref{figure: minimum case}(d).} \label{figure: gumbel chart} \end{figure} \section{The far tails} \label{section: tails} We now turn to discuss the far tails. In the far right tail, $L(\chi)$ is no longer well-approximated by its expansion around $b_N$, and so universality breaks down. In this regime, $F(\chi)$ is exponentially close to $1$, and as such one can always write $F_N(x) \simeq 1-N[1-F(x)]$, with exponentially small corrections. Exploiting the asymptotics Eq.~(\ref{equation: extended exponential asymptotics simple}) and reexpressing $N$ using $\beta_N$ via Eq.~(\ref{equation: normalization sequences zero order def}), we have $F_N(x) \simeq 1-(\beta_N/x)^{\theta}\exp[-C(x^{\nu}-\beta_N^{\nu})]$, which yields \begin{equation} \label{equation: large deviations principle extended} f_N(x) \simeq \nu Cx^{\nu-1} \left(\frac{\beta_N}{x}\right)^{\theta} e^{-C\beta_N^{\nu}\left[\left(x/\beta_N\right)^{\nu}-1\right]} . \end{equation} Hence, the speed and scaling are $u_N = C\beta_N^{\nu}$ and $s_N = \beta_N$, respectively, where the rate function is $\psi(\xi) = \xi^{\nu}-1$. This formula extends the large deviations approach of Giuliano and Macci \cite{Rita}, for which $1-F_N(x) \approx \exp\{-\ln(N)[(x/s_N^{\rm s})^{\nu}-1]\}$, and \begin{equation} \label{equation: large deviations principle simple} f_N(x) \approx \nu x^{\nu-1} \frac{\ln(N)}{(s_N^{\rm s})^{\nu}} e^{-\ln(N)\left[\left(x/s_N^{\rm s}\right)^{\nu}-1\right]} , \quad 1-F\left(s_N^{\rm s}\right) \equiv \frac{1}{N} , \end{equation} so that the speed was $u_N^{\rm s}=\ln(N)$. Since to leading order in $N$ one has $u_N \simeq u_N^{\rm s}$ and $s_N \simeq s_N^{\rm s}$, the leading-order $N$ dependence of the two formulas is identical. However, as with the Gumbel bulk approximation, this leading order is far too simplistic to provide accurate predictions. Figure~\ref{figure: tails approx} shows our results and the exact numerical values of the PDFs for our four cases. Even at its base level without corrections depending on $D_1$ and $D_2$, Eq.~(\ref{equation: large deviations principle extended}) is in excellent agreement with the exact values. Also presented are the large deviation results of Ref.~\cite{Rita} given by Eq.~(\ref{equation: large deviations principle simple}). Constructing an approximation to the left tail is a matter of interest too, since the Gumbel approximation fails at both ends.
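Before turning to the left tail, we note that Eq.~(\ref{equation: large deviations principle extended}) is immediate to evaluate once $\beta_N$ is known, e.g.\ from the Lambert expression given earlier. A minimal Python transcription (function name illustrative) reads:
\begin{verbatim}
import numpy as np

def fN_right_tail(x, nu, C, theta, betaN):
    # Eq. (large deviations principle extended): the far right tail of f_N(x)
    return (nu * C * x ** (nu - 1.0) * (betaN / x) ** theta
            * np.exp(-C * betaN ** nu * ((x / betaN) ** nu - 1.0)))
\end{verbatim}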
It turns out that two sub-regimes exist for the left tail, corresponding to an extreme left tail where $x\to-\infty$, and to a moderate left tail for which $1\ll x\ll\langle x\rangle$. The former regime is less interesting though, as the probability of encountering such an event is extraordinarily small, and thus we focus on the latter case. In this regime, $1-F(\chi)$ is still small, though much larger than $1/N$. In fact, we can still write $F_N(x) \simeq \exp\{-N[1-F(x)]\}$; however, we cannot expand further. Repeating the above procedure leads to the uniform approximation; however, this time we use the extended asymptotic version Eq.~(\ref{equation: extended exponential asymptotics}). Tackling the small $x$ divergence of the extra terms is done by replacing it with a $[1/1]$ Pad\'e approximant in the variable $1/x^{\nu}$. Differentiating yields the uniform approximation as \begin{align} \label{equation: extended uniform approximation solution} &F_N(x) \approx \exp\left[ -\left(\frac{\beta_N}{x}\right)^{\theta} e^{-C\left(x^{\nu}-\beta_N^{\nu}\right)} \frac{D_1x^{\nu}+D_1^2-D_2}{D_1x^{\nu}-D_2} \right] , \nonumber \\ &f_N(x) \approx \nu C x^{\nu-1} \left(\frac{\beta_N}{x}\right)^{\theta} \exp\left[ -C\left(x^{\nu} -\beta_N^{\nu} \right) - \left(\frac{\beta_N}{x}\right)^{\theta} e^{-C\left(x^{\nu}-\beta_N^{\nu}\right)} \frac{D_1x^{\nu}+D_1^2-D_2}{D_1x^{\nu}-D_2} \right] . \end{align} This expression is valid for every $x$ which satisfies $1-F(x) \ll 1/\sqrt{N}$. In particular, it describes well the moderate left tail, see Fig.~\ref{figure: tails approx}. The large deviations and Gumbel forms are obtainable from Eq.~(\ref{equation: extended uniform approximation solution}) in the appropriate limits. \begin{figure} \includegraphics[width=1.0\textwidth]{GumFit.pdf} \caption{The PDF of the maximal value $x$ of $N=25$ IID random variables with common PDFs given by Eq.~(\ref{equation: example distributions 1}) for four values of $\nu$: (a) $2$, (b) $1/2$, (c) $5$, and (d) $1/5$. For each case we sampled $10^5$ maxima, and used these to extract an estimate for the EV PDF. Using $\nu$, $b_N^w$, and $a_N^w$ as fit parameters for the rightmost side of Eq.~(\ref{equation: fn via gumbel change variables}) according to the algorithm described in Fig.~\ref{figure: gumbel chart} yields an excellent match to the simulated data, without assumed knowledge of the underlying PDFs. Note that neither $N$ nor $\nu$ needs to be known for this procedure to work. As this algorithm renders $c_2^w \approx 0$, the plotted curves do not visibly change when adding the first correction.} \label{figure: gumbel fit} \end{figure} \section{A data analysis approach} \label{section: practical} While above we assumed quite a general shape for the asymptotic form of the underlying distribution, in many cases one lacks knowledge of one or more of its parameters, i.e. $\nu$, $C$, $D_0$, and $\theta$. Still, this turns out not to pose a problem, as from a practical point of view, one has an excellent parameterization of the EV PDF in terms of a very small number of parameters, namely $b_N$, $a_N$, and if needed $c_2$ and $c_3$. To find these parameters given a data set of $M$ maxima $\{x_i\}$ that are assumed to follow the Gumbel limit, one can use the algorithm presented in Fig.~\ref{figure: gumbel chart}. One starts by sorting the data set in ascending order in step (i), as plotting $i/M$ as a function of $w_i = x_i^{\nu}$ for $1 \le i \le M$ essentially gives the empirical EV CDF $F_N^w(w)$.
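The remaining steps are detailed next; for orientation, a minimal Python sketch of steps (i)--(iii) of Fig.~\ref{figure: gumbel chart}, together with zeroth-order estimates of $b_N^w$ and $a_N^w$ (a rough illustration only: here the variable-change exponent is supplied by the user rather than optimized as in step (vi), $M$ is assumed large, and the linear fit window is an arbitrary choice), could read:
\begin{verbatim}
import numpy as np

def zeroth_order_fit(samples, nu):
    # steps (i)-(iii) of the flowchart, plus crude estimates of b_N^w, a_N^w
    x = np.sort(np.asarray(samples))           # step (i): ascending sort
    M = x.size
    w = np.sign(x) * np.abs(x) ** nu           # transformed variable w
    b_est = w[int(np.floor(M / np.e)) - 1]     # step (ii): F_N^w(b_N^w) = 1/e
    i = np.arange(1, M)                        # drop i = M (log of zero)
    Lw = -np.log(-np.log(i / M))               # step (iii): L_w(w) - ln N
    window = np.abs(Lw) < 1.0                  # central fit window (ad hoc)
    slope, _ = np.polyfit(w[:-1][window], Lw[window], 1)
    return b_est, 1.0 / slope                  # estimates of b_N^w and a_N^w
\end{verbatim}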
Next, by using Eq.~(\ref{equation: normalization sequences definition}) for the changed variable $w$, i.e. $\exp[L_w(b_N^w)]=N$, one has $F_N^w(b_N^w)=1/e$. This means that an estimate for $b_N^w$ can be obtained from an index which obeys $i/M \approx 1/e$, namely $i = \lfloor M/e \rfloor$, where $\lfloor\cdot\rfloor$ is the $\text{floor}(\cdot)$ function, see step (ii). Then, the quantity $-\ln[-\ln(i/M)]$ versus the argument $x_i^{\nu}$ gives the empirical value of $L_w(w)-\ln(N) = L_w(w)-L_w(b_N^w)$, which corresponds to the $w$-version of the expansion Eq.~(\ref{equation: l expansion}). Step (iv) allows for two fitting schemes. The first is to truncate the data set created in step (iii) and perform a low-order polynomial fit around $0$ in the variable $x-l_0$. The second is to fit more of this data set to a high-order polynomial and read off the low-order coefficients, in which case the higher terms take care of the global behavior. Note that $n_*$ determines the highest-order correction term of the Gumbel distribution to be obtained. Determining the appropriate power $\nu_*$ for the variable change method is done in step (vi), by demanding that the post-transformation $L_w^{(2)}(b_N^w)$ vanish, which ensures that $L_w$ is locally quite linear near $b_N^w$. Note also that one does not need to know the values of $N$ and $\nu$ to implement this algorithm. The results of this procedure for our four examples with $N=25$ are presented in Fig.~\ref{figure: gumbel fit}, and excellently reproduce the central region of $f_N(x)$ without any assumed knowledge of the underlying distributions. We employed high-order polynomial fits with $n_*=5$ for all cases, but only used the zeroth-order fit parameters, i.e. $b_N^w$ and $a_N^w$, when plotting. Note that this procedure is not intended to provide an estimation of the true underlying values of $\nu$, $b_N$, $a_N$, etc., but rather the values which best estimate the EV PDF. \begin{figure} \includegraphics[width=1.0\textwidth]{GumMin.pdf} \caption{The case of the minimum value $y$ of $N$ bounded IID random variables $\chi\in[0,\infty)$, illustrated via an example of the one-dimensional first-return time problem, for which $\tilde{F}(\chi) = \text{erf}(1/\sqrt{\chi})$. Each panel corresponds to one of the previously discussed figures, with analogous legends. (a) The zeroth-order $g_{\infty}(z)$, combined with the Lambert scaling and the variable-change method, agrees well with the exact values for $N=50$. Also shown is the approximation taken from \cite{Lawley}. See Fig.~\ref{figure: minimum case larger} in appendix \ref{section: larger} for a replot of (a) using a larger $N$. (b) The first correction to the transformed Gumbel approximation $\Delta_1^w$ has a magnitude of $\approx 0.5\%$ with respect to the maximal value of $g_{\infty}(z)$, $1/e$, and follows well its predicted shape. The second correction $\Delta_2^w$ is multiplied by $3$ for visibility, and follows its predicted shape. We used the exact values of $b_N^w$, $a_N^w$, $c_2^w$, and $c_3^w$. See Fig.~\ref{figure: minimum case larger} in appendix \ref{section: larger} for the non-transformed Gumbel corrections, namely $\Delta_1$ and $\Delta_2$, which are of a much larger magnitude. (c) The left tail and the uniform approximation for the PDF of $y$ with $N=500$ both excellently match the exact values. The uniform approximation works for all $y$. Note that the left tail of a minimum EV problem is analogous to the right tail of the maximum case.
These problems have a trivial left large deviation function, and so it is omitted from the plot. (d) The practical method with $10^5$ minima of $N=25$ IID random variables each. Fit parameters of the zero-order were obtained from the data set and used to extract estimates for $g_{\infty}(z)$ which excellently match the samplings without assumed knowledge of the underlying distribution.} \label{figure: minimum case} \end{figure} \section{Connection to other cases} \label{section: others} \subsection{Exceptional bounded distributions} Usually, compact distributions lie in the Weibull universality class. However, when the PDF vanishes faster than a power-law at the endpoint, the asymptotic distribution is still Gumbel. An example of this is found in a problem discussed by Lawley~\cite{Lawley}, namely the minimum first-passage time to the origin of $N$ particles diffusing on the interval $(0,1)$ which start at the right reflective boundary, where the diffusion coefficient is $1/4$. Indeed, Lawley showed that in this case the Lambert W-function can be used to approximate $b_N$ and $a_N$, however, this case is included among those discussed above where $L$ is far from linear around $b_N$ for reasonably large $N$, and thus just using the Lambert representation of $b_N$ and $a_N$ is insufficient, and the change of variables must be employed as well. Since here we are dealing with a minimum rather than a maximum, the role of the CDF $F$ is replaced by the complementary CDF, $\tilde{F}(\chi) \equiv 1-F(\chi)$, given by $\tilde{F}(\chi) = \text{erf}(1/\sqrt{\chi})$ for this case, where $\chi\in[0,\infty)$ and $\text{erf}(\cdot)$ is the error function. Note that the exact complementary CDF for the minimum $y \equiv \min(\{\chi_1,...,\chi_N\})$ of any $N$ IID random variables is $\tilde{F}_N(y) = \tilde{F}^N(y)$. In what follows, we denote quantities of the minimum EV case by tildes. As elaborated above, one needs to consider a variable change that renders the underlying CDF exponential-like. Observing the asymptotic behavior $\tilde{F}(\chi) \simeq 1-\sqrt{\chi/\pi}\exp(-1/\chi)$, it is clear that the relation must be $\omega = 1/\chi$, then for large $\omega$ one has $L_{\omega}(\omega) \propto \omega + O[\ln(\omega)]$, which is very close to being linear and therefore can be Taylor approximated very well. Since the minimum $y$ of $\{\chi_i\}$ is the maximum $w$ of $\{\omega_i\}$, we can apply our procedures of the above sections to the maximum value problem whose CDF is $F_{\omega}(\omega) = \tilde{F}(1/\omega) = \text{erf}(\sqrt{\omega})$. The PDF for $y$, $\tilde{f}_N(y)$, is then obtained from the PDF for $w$, $f_N^w(w)$, by \begin{equation} \label{equation: general first two corrections change variables} \tilde{f}_N(y) = \frac{1}{y^2}f_N^w\left(\frac{1}{y}\right) . \end{equation} The results of our above discussions for this example are presented in Fig.~\ref{figure: minimum case}, with each panel demonstrating a previous figure: (a) Fig.~\ref{figure: gumbel zero}, (b) Fig.~\ref{figure: corrections change}, (c) Fig.~\ref{figure: tails approx}, and (d) Fig.~\ref{figure: gumbel fit}. The agreement is indeed excellent, and the key is that $f_N^w(w)$ is much closer to a Gumbel distribution than $\tilde{f}_N(y)$, similarly to the super-compressed and super-stretched cases. In appendix \ref{section: larger}, we replot panel (a) with an $N$ that is larger by a factor of $10^2$, see Fig.~\ref{figure: minimum case larger}, demonstrating the slow rate of convergence. 
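To make the above concrete, the transformed zeroth-order approximation for this example is simple to evaluate. A minimal Python sketch (with $b_N^w$ and $a_N^w$ supplied externally, e.g.\ by a Lambert-type approximation as above; function names are illustrative) reads:
\begin{verbatim}
import numpy as np
from scipy.special import erf

def F_omega(w):
    # CDF of the transformed variable omega = 1/chi for the first-return example
    return erf(np.sqrt(w))

def fN_minimum(y, bNw, aNw):
    # Eq. (general first two corrections change variables) with the zeroth-order
    # Gumbel form f_N^w(w) ~ g_infty((w - b_N^w)/a_N^w)/a_N^w and w = 1/y
    z = (1.0 / y - bNw) / aNw
    return np.exp(-z - np.exp(-z)) / (aNw * y ** 2)
\end{verbatim}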
We also show the corrections to the non-transformed Gumbel case, in analogy to Fig.~\ref{figure: corrections delta}, where a magnitude of $\approx 20\%$ can be seen (in contrast to $\approx 0.5\%$ for the transformed case). As far as data analysis is concerned, one simply needs to employ the algorithm seen in Fig.~\ref{figure: gumbel chart} for $\nu<0$, while sorting in descending instead of ascending order in step (i). As for the moments, we have \begin{equation} \label{equation: moments definition minimum} \left<y^m\right> \equiv \int_0^{\infty}\text{d}y\,\tilde{f}_N(y)y^m = \int_0^{\infty}\text{d}w\,f_N^w(w)w^{-m} , \end{equation} so the $m$th moment of $y$ is just the $(-m)$th moment of $w$, which we have already calculated above. Our right-tail and uniform approximations for the distribution of $w$ immediately yield the left-tail and uniform approximations for the distribution of $y$. \subsection{Other extreme value limits} As a final remark, we point out a nice observation explaining why random variables whose EV distribution is different from Gumbel do not suffer from the poor logarithmic convergence of their Gumbel counterparts. Take, for example, a distribution with a power-law tail, $1 - F(\chi) \propto \chi^{-\mu}$ with $\mu>0$. A direct application of the method used here would have us expand $L(\chi) \simeq \mu\ln(\chi)$ around $b_N \propto N^{1/\mu}$, obtaining $a_N \simeq b_N/\mu$. As a consequence, $c_n \sim O(1)$ for any $n>1$, and so all terms in the expansion of $L$ are of the same order, resulting in the Gumbel universality being lost. The same is true for a compact distribution with $f(\chi) \propto (1-\chi)^{\mu-1}$. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Type & Support & CDF & Extreme value & Scaling sequences & Limit & Limiting CDF \\ \hline Power-law & $\chi_{\rm p}\in[1,\infty)$ & $1 - \chi_{\rm p}^{-\mu}$ & $x_{\rm p}=b_N^{\rm p}+a_N^{\rm p}z_{\rm p}$ & $b_N^{\rm p}=N^{1/\mu}$ , $a_N^{\rm p}=\mu^{-1}N^{1/\mu}$ & Fr\'echet & $\exp[-(1+z_{\rm p}/\mu)^{-\mu}]$ \\ \hline Compact & $\chi_{\rm c}\in[0,1]$ & $1-(1-\chi_{\rm c})^{\mu}$ & $x_{\rm c}=b_N^{\rm c}+a_N^{\rm c}z_{\rm c}$ & $b_N^{\rm c}=1-N^{-1/\mu}$ , $a_N^{\rm c}=\mu^{-1}N^{-1/\mu}$ & Weibull & $\exp[-(1-z_{\rm c}/\mu)^{\mu}]$ \\ \hline Exponential & $\chi_{\rm e}\in[0,\infty)$ & $1 - \exp\left(-\mu\chi_{\rm e}\right)$ & $x_{\rm e}=b_N^{\rm e}+a_N^{\rm e}z_{\rm e}$ & $b_N^{\rm e}=\mu^{-1}\ln(N)$ , $a_N^{\rm e}=\mu^{-1}$ & Gumbel & $\exp(-e^{-z_{\rm e}})$ \\ \hline Gaussian & $\chi_{\rm g}\in(-\infty,\infty)$ & $[1+\text{erf}(\chi_{\rm g}/\sqrt{2})]/2$ & $x_{\rm g}=b_N^{\rm g}+a_N^{\rm g} z_{\rm g}$ & $b_N^{\rm g},a_N^{\rm g}$ & Gumbel & $\exp(-e^{-z_{\rm g}})$ \\ \hline \end{tabular} \caption{The considered random variables, where $\mu>0$ is a constant parameter.} \label{table: random variables classes} \end{table} It is instructive to look at this from the perspective of a change of variables. Let us consider the four random variables that appear in Table~\ref{table: random variables classes}. The transformations \begin{equation} \label{equation: variables transformations to exp 1} \chi_{\rm e} = \ln\left(\chi_{\rm p}\right) , \quad \chi_{\rm e} = \ln\left( \frac{1}{1-\chi_{\rm c}} \right) , \quad \chi_{\rm e} = - \frac{1}{\mu} \ln\left[ \frac{1}{2}-\frac{1}{2} \text{erf}\left(\frac{\chi_{\rm g}}{\sqrt{2}}\right) \right] , \end{equation} generate the exponentially distributed random variable from the power-law, compact, and Gaussian variables, respectively.
Since these are strictly increasing functions, Eq.~(\ref{equation: variables transformations to exp 1}) holds for the EVs as well. When plugging these in, Eq.~(\ref{equation: variables transformations to exp 1}) yields \begin{equation} \label{equation: variables transformations to exp 2} z_{\rm e} = \mu\ln\left(1+\frac{z_{\rm p}}{\mu}\right) , \quad z_{\rm e} = -\mu\ln\left(1-\frac{z_{\rm c}}{\mu}\right) , \quad z_{\rm e} = z_{\rm g} - \ln(N) + \frac{1}{2} \left(b_N^{\rm g}\right)^2 + \ln\left( \sqrt{2\pi} b_N^{\rm g} \right) + O\left[\frac{1}{\left(b_N^{\rm g}\right)^2}\right] , \end{equation} with $a_N^{\rm g}=1/b_N^{\rm g}$ and $b_N^{\rm g} \gg 1$ for the Gaussian case. Note that for the first two cases the $N$ dependency vanishes from the relation between the rescaled variables. Moreover, plugging $z_{\rm e}$ in terms of $z_{\rm p},z_{\rm c}$ into the Gumbel CDF results in the Fr\'echet and Weibull CDFs, respectively. Thus, the power-law and compact random variables are actually an exponentially distributed variable in another guise. Hence, it is not surprising that the convergence rate to these limits is much faster, as for the exponential case all finite-$N$ corrections to the Gumbel limit vanish. The latter statement can be concluded by plugging $L(\chi) \simeq \mu\chi$ into Eq.~(\ref{equation: normalization sequences definition}), which yields $c_n=0$ for $n>1$. However, for the Gaussian case in Eq.~(\ref{equation: variables transformations to exp 2}) things are different, as the $N$ dependency remains. Actually, if we identify $z_{\rm e}=z_{\rm g}$, Eq.~(\ref{equation: variables transformations to exp 2})'s rightmost section exactly reproduces Eq.~(\ref{equation: normalization sequences zero order}), with appropriate Gaussian parameters. This further emphasizes the naturalness of the Lambert scaling approach for distributions yielding the Gumbel limit when $N\to\infty$. \section{Summary} \label{section: summary} In this paper, we have discussed the EV problem of $N$ IID random variables and constructed a theory that makes the Gumbel limit of the EV distribution usable for values of $N$s below $500$, and in most cases less than a hundred, whereas in some cases the standard approach would completely fail for $N$s which are not astronomically large. Exploiting the Lambert W-function, we obtained the scaling sequences $b_N$ and $a_N$ as simple asymptotic series in terms of a single parameter $\beta_N$, see Eq.~(\ref{equation: normalization sequences zero order}). The expansions obtained generate useful approximations (sometimes with the aid of Pad\'e transformation) down to $N=50$. Applying a simple variable transformation makes the Gumbel limit relevant in its uncorrected form, namely $g_{\infty}(z)$. We also provided a simple way to derive arbitrary-order corrections to the Gumbel distribution for the EV of IID random variables, and demonstrated the first two corrections. We have tested this for a whole family of stretched or compressed exponential distributions, including the slowly-converging super-stretched case. We improved the accuracy of the large-deviation representation of the right tail of the EV distribution while allowing for a uniform approximation that captures the close left tail as well. If the underlying distribution is not given, we described a fitting scheme that yields an excellent match between a given data set and the Gumbel limit. 
We have also shown how the same techniques work for compact distributions with essential singularities at the endpoint of the distribution. \begin{acknowledgments} The support of the Israel Science Foundation, Grant No. 1898/17, is acknowledged. \end{acknowledgments}
{ "timestamp": "2021-07-08T02:04:01", "yymm": "2006", "arxiv_id": "2006.13677", "language": "en", "url": "https://arxiv.org/abs/2006.13677", "abstract": "We consider the extreme value statistics of $N$ independent and identically distributed random variables, which is a classic problem in probability theory. When $N\\to\\infty$, fluctuations around the maximum of the variables are described by the Fisher-Tippett-Gnedenko theorem, which states that the distribution of maxima converges to one out of three limiting forms. Among these is the Gumbel distribution, for which the convergence rate with $N$ is of a logarithmic nature. Here, we present a theory that allows one to use the Gumbel limit to accurately approximate the exact extreme value distribution. We do so by representing the scale and width parameters as power series, and by a transformation of the underlying distribution. We consider functional corrections to the Gumbel limit as well, showing they are obtainable via Taylor expansion. Our method also improves the description of large deviations from the mean extreme value. Additionally, it helps to characterize the extreme value statistics when the underlying distribution is unknown, for example when fitting experimental data.", "subjects": "Statistical Mechanics (cond-mat.stat-mech); Mathematical Physics (math-ph); Data Analysis, Statistics and Probability (physics.data-an)", "title": "Accurately approximating extreme value statistics", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575173068325, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.7091542070320386 }
https://arxiv.org/abs/1207.6344
A new symmetry criterion based on the distance function and applications to PDE's
We prove that, if $\Omega\subset \mathbb{R}^n$ is an open bounded starshaped domain of class $C^2$, the constancy over $\partial \Omega$ of the function $$\varphi(y) = \int_0^{\lambda(y)} \prod_{j=1}^{n-1}[1-t \kappa_j(y)]\, dt$$ implies that $\Omega$ is a ball. Here $\kappa_j(y)$ and $\lambda(y)$ denote respectively the principal curvatures and the cut value of a boundary point $y \in \partial \Omega$. We apply this geometric result to different symmetry questions for PDE's: an overdetermined system of Monge-Kantorovich type equations (which can be viewed as the limit as $p \to + \infty$ of Serrin's symmetry problem for the $p$-Laplacian), and equations in divergence form whose solutions depend only on the distance from the boundary in some subset of their domain.
\section{Introduction} Characterizing special classes of hypersurfaces in a metric space, in particular spheres, in terms of some properties of their principal curvatures, is a classical and challenging problem in Differential Geometry. A fundamental result by Alexandrov states that a bounded smooth domain in the Euclidean space is a ball provided the mean curvature of its boundary is constant \cite{Al}. For further characterizations of spheres involving the symmetric functions of the principal curvatures, see {\it e.g.} \cite{Ko, MoRo} and the references therein. Alexandrov's result has many powerful applications in Analysis, especially in the fields of PDE's and shape optimization; in fact, it allows one to obtain, for instance, symmetry in overdetermined boundary value problems (as in the seminal paper \cite{S} and the subsequent literature), or in extremum problems for variational functionals under geometric constraints (see the monograph \cite{He}). Often, symmetry questions arise in problems in which a crucial role is played by the distance function from the boundary of an open bounded domain $\Omega\subset \mathbb{R}^n$, $d_\Omega (x) := {\rm dist} (x, \partial \Omega)$. This happens for instance when studying PDE's related to mass transportation theory (see \cite{BoBu, CaCa, CCCG}), or minimization problems in the class of so-called {\it web functions}, namely functions which only depend on $d_\Omega$ (see \cite{C, CFG1, CFG2, CFG3, CG1, CG2, G}). Symmetry questions in these frameworks, which will be described more precisely below, pushed us to set up a new roundness criterion, which brings into play the distance function in a more intrinsic way than merely through the boundary curvatures. More precisely, it involves the following function $\varphi$ associated with a bounded smooth domain $\Omega$ of $\mathbb{R}^n$: \begin{equation}\label{f:phi} \varphi(y) := \int_0^{\lambda(y)} \prod_{j=1}^{n-1}[1-t \kappa_j(y)]\, dt\,, \qquad y\in\partial\Omega\,. \end{equation} Here $\kappa _j(\cdot) $ denote the principal curvatures of $\partial \Omega$, whereas $\lambda (y)$ is the {\it cut value} of $y$: letting $\nu(y)$ denote the unit outer normal to $\partial \Omega$ at $y$, and $\pi (x)$ be the point of $\partial \Omega$ such that $|x - \pi (x)| = {\rm dist} (x, \partial \Omega)$ (which is uniquely determined for ${\mathcal L} ^n$-a.e.\ $x \in \Omega$), the function $\lambda ( \cdot)$ is defined on $\partial \Omega$ by \begin{equation}\label{cut} \lambda (y) := \sup \big \{ t \geq 0 \ :\ \pi (y - t \nu (y) ) = y \big \}\ , \qquad y \in \partial \Omega\ . \end{equation} Intuitively, by following the inner normal to $\partial \Omega$ at $y$, one can continue without crossing another straight line normal to $\partial \Omega$, exactly until one arrives at a distance $\lambda (y)$ from the boundary. Thus, $\Omega$ is filled up by the line segments $\{ y - t \nu (y) \, :\, y \in \partial \Omega\, ,\, t \in (0, \lambda (y))\}$. Let us mention that the functions $\varphi$ and $\lambda$ already appeared in the literature in different contexts: concerning the function $\varphi$, it is a crucial tool in the proof of the isoperimetric inequality \textsl{\`a la Gromov} (see {\it e.g.} \cite[\S 1.6.8]{Be}), and appears also in mathematical models for granular materials \cite{CCCG, CCS,CM}; concerning the function $\lambda$, its regularity has been studied in \cite{CCG,IT,LiNi}, while some of its applications to variational problems can be found in \cite{Ce1,CFG1,CFG2,CFG3}.
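For instance, as a simple check of the above definitions, if $\Omega = B_R(0)$ is a ball of radius $R$, then $\kappa_j(y) = 1/R$ for every $j$ and $\lambda(y) = R$ at each boundary point, so that $$ \varphi(y) = \int_0^{R} \Big(1-\frac{t}{R}\Big)^{n-1}\, dt = \frac{R}{n} = \frac{|\Omega|}{|\partial\Omega|} \qquad \forall\, y \in \partial\Omega\,; $$ in particular, $\varphi$ is constant on the boundary of a ball, and its value coincides with the ratio $|\Omega|/|\partial\Omega|$ appearing in the criterion below.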
Our new symmetry criterion is formulated in terms of the function $\varphi$ and of the mean curvature $H$ of $\partial \Omega$, \begin{equation}\label{mc} H(y) := \frac{\kappa_1(y)+\cdots+\kappa_{n-1}(y)}{n-1}\,, \qquad y\in\partial\Omega\,, \end{equation} and reads as follows. By saying that $\Omega$ is {\it starshaped}, we mean that \begin{equation}\label{star} \langle y, \nu (y) \rangle >0 \qquad \forall \, y \in \partial \Omega \ . \end{equation} \begin{theorem}\label{t:geom2} Let $\Omega\subset\mathbb{R}^n$ be a bounded connected open set of class $C^2$, starshaped with respect to the origin. Assume that there exists a point $y_0\in\partial\Omega$ such that \begin{equation}\label{f:hypoa} H(y_0) = \max_{y\in\partial\Omega} H(y)\,,\qquad \varphi(y_0) \geq \frac{|\Omega|}{|\partial\Omega|} \,. \end{equation} Then $\Omega$ is a ball. In particular, if $\varphi$ is constant on $\partial \Omega$, assumption \eqref{f:hypoa} is satisfied, and hence $\Omega$ is a ball. \end{theorem} The proof of Theorem \ref{t:geom2} is given in Section \ref{proof} below. It is obtained by showing that the existence of a point $y _0\in \partial \Omega$ fitting the two conditions in (\ref{f:hypoa}) ensures the constancy of the mean curvature. To that aim, we exploit as crucial tools the arithmetic-geometric inequality and the Minkowski integral formula for the mean curvature (recalled in Section \ref{secnot} below). In particular, in order to apply such a formula successfully, we need the starshapedness condition (\ref{star}): we believe it may be unnecessary for the validity of the result, but at present we are unable to circumvent it. Concerning the $C^2$-regularity assumption, in dimension $n =2$ we are able to relax it, by allowing domains whose boundary is piecewise $C^2$ and may contain ``convex'' corners (see Section \ref{proof} for more details). \medskip Let us now turn our attention to describing two different applications of Theorem \ref{t:geom2} to symmetry questions for PDE's. \medskip The first application concerns an overdetermined system of PDE's of Monge-Kantorovich type, which can be viewed as a limit version of Serrin's symmetry result for the $p$-Laplacian operator, as $p$ tends to $+ \infty$. To be more precise, let us recall Serrin's result: the existence of a solution to the overdetermined boundary value problem \begin{equation}\label{deqse} \left\{\begin{array}{rll} - \Delta u =1\qquad&\hbox{ in }\Omega\, ,\\ u=0\qquad&\hbox{ on }\partial\Omega\, ,\\ |\nabla u| =c\qquad&\hbox{ on }\partial\Omega\, , \end{array}\right. \end{equation} where $c$ is a positive constant, implies that $\Omega$ is a ball. Later on, the same symmetry statement was generalized to the case when the Laplacian in (\ref{deqse}) is replaced by more general operators; since the literature on this topic is very broad, we limit ourselves to quoting the paper \cite{FGK}, where the interested reader can also find many related references. In particular, Serrin's result extends to the system \begin{equation}\label{deqp} \left\{\begin{array}{rll} - \Delta_p u =1\qquad&\hbox{ in }\Omega\,,\\ u=0\qquad&\hbox{ on }\partial\Omega\,,\\ |\nabla u| =c\qquad&\hbox{ on }\partial\Omega\,, \end{array}\right. \end{equation} where $\Delta _p$ denotes the $p$-Laplacian operator (see \cite{[BH],[DP],[GL]}). It is then natural to ask whether the same result continues to be true when considering, in some sense, the limit as $p \to + \infty$.
Let us mention that, in this spirit, overdetermined boundary value problems for the $\infty$-Laplacian have been recently studied in \cite{BuKa}. Here we look rather at the system of PDE's of Monge-Kantorovich type which arises as the limiting problem of (\ref{deqp}) when $p \to + \infty$. Actually, let $u_p$ be the unique solution to the Dirichlet boundary value problem given by the first two equations in (\ref{deqp}). As explained in more detail in Section \ref{secsabbia}, passing to the limit as $p \to + \infty$ in such a Dirichlet problem leads to the following system in the unknowns $u \in {\rm Lip } (\Omega) $ and $v \in L^1(\Omega; \mathbb{R}^+)$: \begin{equation}\label{f:MK} \begin{cases} -\dive (v \nabla u) = 1 & \text{in}\ \Omega\,,\\ |\nabla u| \leq 1 & \text{a.e.\ in}\ \Omega\,, \\ u=0 & \text{on}\ \partial\Omega\,,\\ (1-|\nabla u|) v = 0 & \text{a.e.\ in}\ \Omega\,. \\ \end{cases} \end{equation} Heuristically, the variables $u$ and $v$ represent respectively the limits of $u_p$ and $|\nabla u_p|^p$ as $p \to + \infty$ ({\it cf.}\ Section \ref{secsabbia}). System (\ref{f:MK}) always admits solutions; moreover, if $(u,v)$ is a solution to (\ref{f:MK}), then $v\in C(\overline{\Omega})$ (see \cite{CaCa,CCCG}). Therefore, the following question makes sense: does symmetry hold for (\ref{f:MK}) if, as a counterpart to the last equation in (\ref{deqp}), we ask that $v$ be constant on $\partial\Omega$? In other words: \begin{equation}\label{Q2} \textit{If \eqref{f:MK} admits a solution $(u, v)$ with $v$ constant on $\partial\Omega$, is $\Omega$ a ball?} \end{equation} In Section \ref{secsabbia}, we show that the answer to question (\ref{Q2}) is affirmative, as a consequence of Theorem \ref{t:geom2}. We provide two significant physical interpretations of this symmetry result in different frameworks: a shape optimization problem for heat conductors considered in \cite{BoBu}, and a two-layer model in granular matter theory studied in \cite{CCCG}. \medskip The second application concerns ``partially web solutions'' to equations in divergence form. Let $\Omega$ be as above, and let $\omega$ be a bounded connected open subset of $\Omega$; we define the space of web functions on $\Omega$ and the space of their restrictions to $\omega$ respectively as $$\begin{array}{ll} & \mathcal W (\Omega) := \Big \{ u \in W ^ {1, 1} (\Omega) \ : \ u(x) \hbox{ depends only on } d_\Omega (x) \Big \}\,, \\ \noalign{\medskip} & \mathcal W (\Omega; \omega) := \Big \{ u \in W ^ {1, 1} (\Omega) \ : \ u (x) = \tilde u(x) \, \hbox{ a.e.\ on $\omega$, for some } \tilde u \in \mathcal W (\Omega) \Big \}\ . \end{array} $$ Given an equation in divergence form on $\Omega$, with a constant source term, \begin{equation}\label{f:elleq} - {\rm div}\big ( A(|\nabla u|) \nabla u\big ) = 1 \qquad \mbox{ in } \Omega\ , \end{equation} where $A\in C([0,+\infty))$, we say that $u$ is a solution if $A(|\nabla u|)\, |\nabla u| \in L^1(\Omega)$ and $$ \int_{\Omega} A(|\nabla u|) \pscal{\nabla u}{\nabla\psi} \, dx = \int_{\Omega} \psi\, dx \qquad \forall \psi\in C^{\infty}_0(\Omega)\,.
$$ We then ask the following question: \begin{equation}\label{Q1} \textit{If \eqref{f:elleq} admits a solution $u$ belonging to $\mathcal W (\Omega; \omega)$, is $\Omega$ a ball?} \end{equation} In order to provide some conditions on $\omega$ which are sufficient for a positive answer to question (\ref{Q1}), we shall restrict attention to the case when $\omega$ is of the form $$\Omega _\Gamma:= \Big \{ y - t \nu (y) \ :\ y \in \Gamma \, ,\ t \in (0, \lambda (y)) \Big \}\ ,$$ for some relatively open connected set $\Gamma \subseteq \partial \Omega$. Let us remark that, if $u$ is a solution to $(\ref{f:elleq})$ belonging to $\mathcal W (\Omega; \Omega_\Gamma)$, since the restriction of $u$ to $\Omega _\Gamma$ can be written as $h (d _\Omega)$ for some function $h$, it holds $$u = h (0) \qquad \hbox{ and } \qquad |\nabla u| = |h' (0)| \qquad \hbox{ on } \Gamma\ .$$ Hence, up to an additive constant, a solution $u$ to $(\ref{f:elleq})$ lying in $\mathcal W (\Omega; \Omega_\Gamma)$ satisfies the following system (which is clearly weaker than the requirement $u \in \mathcal W (\Omega; \Omega_\Gamma)$): \begin{equation}\label{deqbis} \left\{\begin{array}{rll} -{\rm div}(A(|\nabla u|)\nabla u)=1\qquad&\hbox{ in }\Omega\,,\\ u=0\qquad&\hbox{ on }\Gamma\,, \\ |\nabla u| =c\qquad&\hbox{ on }\Gamma\, . \end{array}\right. \end{equation} In particular, if $\Gamma \equiv \partial \Omega$, the answer to question (\ref{Q1}) is affirmative as soon as the operator $A$ is such that Serrin's symmetry result is valid for the elliptic equation (\ref{f:elleq}); in this direction, let us mention that symmetry for minimizers of variational functionals in $\mathcal W(\Omega)$ was studied in \cite{C}. Here we are rather interested in the case when $\Gamma$ is strictly contained in $\partial \Omega$. We point out that system (\ref{deqbis}) is quite close to the so-called partially overdetermined boundary value problems studied for the Laplace operator in \cite{FG}. However, in such partially overdetermined problems, one of the Dirichlet and Neumann conditions is required to hold on the whole of $\partial \Omega$ (see also \cite{FV, FGLP}), which is not necessarily the case when $u \in \mathcal W (\Omega; \Omega_\Gamma)$. In Section \ref{secparweb}, using Theorem \ref{t:geom2}, we obtain sufficient conditions on $\Gamma$ for a positive answer to question (\ref{Q1}) in dimension $n=2$. We also compare in more detail our results with those obtained for partially overdetermined boundary value problems in \cite{FG}. \medskip The paper is organized as follows. After providing some preliminaries in Section \ref{secnot}, in Section \ref{proof} we give the proof of Theorem \ref{t:geom2}, and we generalize it to piecewise $C^2$ domains in the two-dimensional case. Sections \ref{secsabbia} and \ref{secparweb} are devoted to the applications of the geometric result to PDE's, respectively to Monge-Kantorovich type equations, and to equations in divergence form with partially web solutions. \bigskip {\bf Acknowledgments.} The authors wish to acknowledge the helpful comments of Giuseppe Buttazzo and Filippo Gazzola during the preparation of the manuscript. \section{Notation and Preliminaries}\label{secnot} Let us fix some notation and geometric background. The standard scalar product of two vectors $x,y\in\mathbb{R}^n$ is denoted by $\pscal{x}{y}$, and $|x|$ denotes the Euclidean norm of $x\in\mathbb{R}^n$.
As is customary, $B_r(x_0)$ and $\overline{B}_r(x_0)$ are respectively the open and the closed ball centered at $x_0$ and with radius $r>0$. If $\Omega$ is an open subset of $\mathbb{R}^n$, we shall denote by $|\Omega|$ and $|\partial\Omega|$ respectively the $n$-dimensional Lebesgue measure of $\Omega$ and the $(n-1)$-dimensional Hausdorff measure of its boundary. A bounded open set $\Omega\subset\mathbb{R}^n$ (or, equivalently, its closure $\overline{\Omega}$ or its boundary $\partial \Omega$) is of class $C^k$, $k\in\mathbb{N}$, if for every point $x_0\in\partial \Omega$ there exists a ball $B=B_r(x_0)$ and a one-to-one mapping $\psi\colon B\to D$ such that $\psi\in C^k(B)$, $\psi^{-1}\in C^k(D)$, $\psi(B\cap \Omega)\subseteq\{x\in\mathbb{R}^n;\ x_n > 0\}$, $\psi(B\cap\partial \Omega)\subseteq\{x\in\mathbb{R}^n;\ x_n = 0\}$. Given a nonempty open set $\Omega\subset\mathbb{R}^n$, we denote by $d_{\Omega}\colon\overline{\Omega}\to\mathbb{R}$ the distance function from the boundary of $\Omega$, defined by \[ d_{\Omega}(x) := \min_{y\in\partial\Omega} |x-y|,\quad x\in\overline{\Omega}. \] It is well-known that $d_{\Omega}$ is a $1$-Lipschitz function; by Rademacher theorem, its singular set $\Sigma$ ({\it i.e.}, the set of points where $d_{\Omega}$ is not differentiable) has zero Lebesgue measure (more precisely, it is a $\mathcal{H}^{n-1}$-rectifiable set, see \cite[Prop.~4.1.3]{CaSi}). The closure of $\Sigma$ is called the \textsl{cut locus} of $\Omega$, and in general it may have a non-vanishing Lebesgue measure. Nevertheless, it can be shown that if $\Omega$ is of class $C^2$, then $|\overline{\Sigma}| = 0$; moreover, in this case $\overline{\Sigma}\subset\Omega$ and $d_{\Omega}$ is of class $C^2$ in $\overline{\Omega}\setminus\overline{\Sigma}$ (see e.g.\ \cite{CM}). Under the assumption that $\Omega\subset\mathbb{R}^n$ is a bounded open set of class $C^2$, for every $y\in\partial \Omega$, we denote respectively by $\nu(y)$ and $T_y \Omega$ the unique outward unit normal vector and the tangent space of $\partial \Omega$ at $y$. The map $\nu\colon\partial\Omega\to S^{n-1}$ is called the \textsl{spherical image map} (or \textsl{Gauss map}). Then, for every $y \in \partial \Omega$, we denote by $\lambda(y) $ the \textsl{cut value} of $y$ intended according to definition (\ref{cut}). It is well known that the singular set $\Sigma$ is a subset of the collection $C$ of all \textsl{cut points} $y - \lambda (y) \nu (y )$, $y\in\partial\Omega$; this set $C$ has always vanishing Lebesgue measure (see \cite{Bi}) and is contained in $\overline{\Sigma}$. For every $y\in\partial \Omega$, the differential $d\nu_y$ of the Gauss map at $y$ maps the tangent space $T_y \Omega$ into itself. The linear map $L_y:= d\nu_y\colon T_y \Omega\to T_y \Omega$ is called the \textsl{Weingarten map}. The bilinear form defined on $T_y \Omega$ by $S_y(v,w) = \pscal{L_y\, v}{w}$, $v,w\in T_y \Omega$, is the \textsl{second fundamental form} of $\partial \Omega$ at $y$. The geometric meaning of the Weingarten map is the following: for every $v\in T_y \Omega$ with unit norm, $S_y(v,v)$ is equal to the normal curvature of $\partial \Omega$ at $y$ in the direction $v$, that is, $S_y(v,v) = -\langle \ddot{\xi}(0), \nu(y) \rangle$, where $\xi(t)$ is any parameterized curve in $\partial \Omega$ such that $\xi(0) = y$ and $\dot{\xi}(0) = v$. The eigenvalues $\kappa_1(y),\ldots,\kappa_{n-1}(y)$ of the Weingarten map $L_y$ are, by definition, the \textsl{principal curvatures} of $\partial \Omega$ at $y$. 
The corresponding eigenvectors are called the \textsl{principal directions} of $\partial \Omega$ at $y$. It is readily shown that every $\kappa_i(y)$ is the normal curvature of $\partial \Omega$ at $y$ in the direction of the corresponding eigenvector. {}From the $C^2$ regularity assumption on the manifold $\partial \Omega$, it follows that the principal curvatures of $\partial \Omega$ are continuous functions on $\partial \Omega$. Their arithmetic mean is the mean curvature $H$ of $\partial \Omega$, {\it cf.}\ definition (\ref{mc}). The following classical identity is usually referred as \textsl{Minkowski integral formula} for the mean curvature, see for instance \cite[Section 2A]{MoRo}: \begin{equation}\label{f:mink} \int_{\partial\Omega} H(y) \pscal{y}{\nu(y)} \, d\mathcal{H}^{n-1} = |\partial\Omega|. \end{equation} Finally, we shall need some elementary results about the relationship between cut value and boundary curvatures. In any space dimension, we have the following upper bound: \begin{lemma}\label{l:kd} Let $\Omega\subset\mathbb{R}^{n}$ be a bounded connected open set of class $C^2$. Then, for every $y\in\partial\Omega$, it holds $$\kappa_i(y) \lambda(y) \leq 1 \qquad \forall \, i=1,\ldots,n-1\ .$$ \end{lemma} \begin{proof} Given $y\in\partial\Omega$ let $x := y - \lambda(y) \nu(y)$ be its cut point. Since $d_{\Omega}(x) = \lambda(y)$, the open ball $B_{\lambda(y)}(x)$ is contained in $\Omega$ and is tangent to $\partial\Omega$ at $y$. Therefore, for every $i\in\{1,\ldots,n-1\}$ we have either $\kappa_i(y)\leq 0$ or $1 / \kappa_i(y) \geq \lambda(y)$. \end{proof} \bigskip In dimension $n=2$, the cut value of boundary points with maximal curvature can be easily characterized as follows: \begin{lemma}\label{l:focal} Let $\Omega\subset\mathbb{R}^2$ be a bounded connected open set of class $C^2$, and let $y_0\in\partial\Omega$ be a maximum point of the curvature $\kappa$ of $\partial \Omega$. Then $$\lambda(y_0) = 1/\kappa(y_0)\ .$$ \end{lemma} \begin{proof} We observe that, since $\partial\Omega$ is compact, we have that $\kappa(y_0)>0$. From Lemma \ref{l:kd}, we know that $\kappa(y_0)\lambda(y_0)\leq 1$. Let us show the converse inequality. Setting $r := 1/\kappa(y_0)$, by assumption we have that $\kappa(y)\leq 1/r$ for every $y\in\partial\Omega$. By Schur's theorem for plane curves (see \cite[Sect.~5-7, Ex.7]{doCarmo}) we have that the disk $B = y_0 - r \nu(y_0) + B_r(0)$ is contained in $\Omega$, so that $\lambda(y_0)\geq r$. \end{proof} \bigskip \section{The geometric result}\label{proof} This section is devoted to prove Theorem \ref{t:geom2} and to extend it to less regular domains in the two-dimensional case. \bigskip {\it Proof of Theorem \ref{t:geom2}}. We divide the proof into four steps. \smallskip \textsl{Step 1: upper bound for $\varphi (y) H (y)$.} As a first step, we claim that the following inequality holds true: \begin{equation}\label{f:basic} \varphi(y) H (y) \leq \frac{1}{n} \qquad \forall \, y \in \partial \Omega\ . \end{equation} Indeed, let us fix $y\in\partial\Omega$ and set $x_j := \lambda(y)\kappa_j(y)$, $j=1,\ldots,n-1$. 
Since $\lambda(y) > 0$, using the change of variables $s = t/\lambda(y)$ in the integral in (\ref{f:phi}) and multiplying by $H(y)$ we obtain the equality \begin{equation}\label{f:hphi} \varphi(y) H(y) = \lambda(y) H(y) \int_0^1 \prod_{j=1}^{n-1}(1-s x_j)\, ds = f(x_1,\ldots, x_{n-1}) \,, \end{equation} where $f$ is the function defined by \begin{equation}\label{f:f} f(x_1,\ldots,x_{n-1}) := \frac{x_1+\cdots+x_{n-1}}{n-1} \int_0^1 \prod_{j=1}^{n-1} (1-s\, x_j)\, ds\,. \end{equation} In view of (\ref{f:hphi}) and (\ref{f:f}), and since $x _j \leq 1$ by Lemma~\ref{l:kd}, in order to find an upper bound for the product $\varphi(y)H(y)$, we have to maximize $f$ on the set $$D := \{(x_1,\ldots,x_{n-1}) \in \mathbb{R}^{n-1}:\ x_j\leq 1\ \forall j\}\ .$$ Since $f(x_1,\ldots,x_{n-1}) < 0$ if $x_1+\ldots +x_{n-1} < 0$, it is clear that $\max_D f$ is achieved on the compact set $K := D \cap \{x_1+\cdots +x_{n-1}\geq 0\}$. Let $(x_1, \ldots, x_{n-1}) \in K$ and denote by $\overline{x}$ their arithmetic mean. From the arithmetic--geometric mean inequality, taking into account that $\overline x \geq 0$, we have \[ \begin{array}{ll} \displaystyle{f(x_1,\ldots,x_{n-1}) }& \displaystyle{= \overline{x}\int_0^1 \prod_{j=1}^{n-1} (1-s\, x_j)\, ds}\\ &\displaystyle{ \leq \overline{x}\int_0^1 \left(1-s\overline{x}\right)^{n-1} ds }\\ \noalign{\medskip} & \displaystyle{= \frac{1}{n} \big [ 1 - ( 1 - \overline x) ^ n \big ] \leq \frac{1}{n}}\ . \end{array} \] Then the inequality (\ref{f:basic}) is proved. \smallskip \textsl{Step 2: upper bound for $H (y)$. } Exploiting the two assumptions in (\ref{f:hypoa}) made on the point $y _0$, and applying (\ref{f:basic}) at $y = y _0$, we obtain \begin{equation}\label{f:dis1} \frac{|\Omega|}{|\partial\Omega|}\, H(y) \leq \frac{|\Omega|}{|\partial\Omega|}\, H(y_0) \leq \varphi(y_0) H(y_0) \leq \frac{1}{n} \qquad\forall y\in\partial\Omega\,. \end{equation} \smallskip \textsl{Step 3: conclusion.} We multiply (\ref{f:dis1}) by the positive quantity $\pscal{y}{\nu(y)}$: integrating on $\partial\Omega$, thanks to the Minkowski integral formula (\ref{f:mink}) and the divergence theorem, we get \[ |\Omega| = \frac{|\Omega|}{|\partial\Omega|} \int_{\partial\Omega} H(y) \pscal{y}{\nu(y)} \, d\mathcal{H}^{n-1} \leq \frac{1}{n}\int_{\partial\Omega} \pscal{y}{\nu(y)} \, d\mathcal{H}^{n-1} = |\Omega|\,. \] We deduce that $$\frac{|\Omega|}{|\partial\Omega|}\, H(y) \pscal{y}{\nu(y)} = \frac{1}{n} \pscal{y}{\nu(y)}\qquad \forall \, y \in \partial \Omega \ .$$ Recalling again that $\pscal{y}{\nu(y)}$ is a (strictly) positive quantity, we infer that the mean curvature of $\partial\Omega$ is constant, hence $\Omega$ is a ball by Alexandrov's Theorem \cite{Al} (see also \cite{Ros}). \smallskip \textsl{Step 4: the case when $\varphi$ is constant.} In order to obtain the second part of the statement, we recall the following change of variables formula, which has been proved in \cite[Theorem 7.1]{CM} for every $h \in L ^ 1 (\Omega)$: \begin{equation}\label{chv} \int_{\Omega} h(x)\, dx = \int_{\partial\Omega}\left[ \int_0^{\lambda(y)} h(y - t \nu(y))\, \prod_{j=1}^{n-1}[1-t \kappa_j(y)]\, dt \right]\, d\mathcal{H}^{n-1}(y) \, . \end{equation} By applying it with $h \equiv 1$, we obtain that the mean value of $\varphi$ over $\partial \Omega$ is precisely $|\Omega| / |\partial \Omega|$. Hence, if $\varphi$ is constant on $\partial \Omega$, both conditions in (\ref{f:hypoa}) are fulfilled by choosing as $y _0$ any point of maximal mean curvature.
\qed \bigskip In the remaining of this section, we assume without any further mention that $n=2$, and we study the validity of Theorem \ref{t:geom2} for domains with piecewise smooth boundary. We start by noticing that, if ``concave" corners are present in $\partial \Omega$, the change of variables formula \eqref{chv} is no longer valid. In fact, in presence of such kind of corners, assuming that the function $\varphi$ is constant in their complement, the conclusion of Theorem~\ref{t:geom2} does not remain true, as shown by the following \begin{example} Consider the set $\Omega = B_2(-1,0) \cup B_2(1,0) \subset\mathbb{R}^2$. In this case $\Omega$ has two ``concave" corners at $\mathcal{C} = \{(0,\sqrt{3}), (0, -\sqrt{3})\}$, $\varphi(y) = 1$ for every $y\in\partial\Omega\setminus\mathcal{C}$, but $\Omega$ is not a ball. \end{example} In view of the above Example, we are going to restrict our attention to the following class of domains. \begin{definition} \label{d:ps} We say that a bounded connected open set $\Omega\subset\mathbb{R}^2$ is \textsl{piecewise $C^2$ without concave corners} if it satisfies a uniform exterior sphere condition, and \[ \partial\Omega = \bigcup_{i=1}^m \Gamma_i\ ,\] where each $\Gamma_i$ is a simple $C^2$ arc up to its endpoints $y_{i-1}$ and $y_{i}$ (with the convention $y_0 = y_m$), and \[\Gamma_i\cap\Gamma_j = \begin{cases} \{y_i\}, & \text{if}\ 1\leq i\leq m-1,\ j = i+1,\\ \{y_m\}, & \text{if}\ i=m,\ j=1,\\ \emptyset, & \text{if}\ i-j\neq \pm 1\ . \end{cases} \] \end{definition} Clearly, the boundary of a domain $\Omega$ as in Definition~\ref{d:ps} may contain ``convex'' corners at points in the set $${\mathcal C} := \{ y _1, \dots, y _m \}\ .$$ We point out that, for such domains, the change of variables formula \eqref{chv} still holds (see \cite[Thm.~3.3]{CFV}), whereas the Minkowski integral formula \eqref{f:mink} becomes \begin{equation} \label{f:minkm} |\partial\Omega| = \int_{\partial\Omega} \kappa(y) \pscal{y}{\nu(y)}\, d\mathcal{H}^1(y) - \sum_{i=1}^m \pscal{y_i}{R \Delta\nu(y_i)}\,, \end{equation} where $R:= \begin{pmatrix} 0 & -1\\ 1 & 0\end{pmatrix}$, $\Delta\nu(y_i) = \nu_{i+1}(y_i) - \nu_i(y_i)$ (being $\nu_i$ the outer normal to the boundary component $\Gamma_i$), and \[ \nu_i(y_j) := \mathop{\lim_{y\to y_j}}_{y\in\Gamma_i} \nu_i(y) \] (see \cite{ABF}). Consequently, the following generalized versions of Theorem \ref{t:geom2} hold. \begin{theorem}\label{t:geom2b} Let $\Omega\subset\mathbb{R}^2$ be a bounded connected open set piecewise $C^2$ without concave corners, and starshaped with respect to the origin. Assume that one of the following conditions is satisfied: \begin{itemize} \item[(i)] $\Omega$ is of class $C^1$ and there exists a point $y_0\in\partial\Omega\setminus\mathcal{C}$ such that \[ H(y_0) = \max_{y\in\partial\Omega\setminus\mathcal{C}} H(y)\,,\qquad \varphi(y_0) \geq \frac{|\Omega|}{|\partial\Omega|} \,. \] \item[(ii)] $\varphi$ is constant in $\partial\Omega\setminus\mathcal{C}$. \end{itemize} Then $\Omega$ is a ball. \end{theorem} \begin{proof} Assume first that $\Omega$ satisfies condition (i). In this case, all the steps of the proof of Theorem~\ref{t:geom2} can be repeated. In particular, Step 3 continues to hold thanks to the hypothesis that that $\Omega$ is of class $C^1$: indeed, such assumption ensures that each addendum of the finite sum in the r.h.s.\ of (\ref{f:minkm}) vanishes, so that the Minkowski formula is still valid for $\Omega$ in the very same form as if it was a $C^2$ domain. 
Assume now that $\Omega$ satisfies condition (ii). Similarly as in Step 4 of the proof of Theorem~\ref{t:geom2}, since $\varphi$ is constant, from the change of variable's formula \eqref{chv} we get that $\varphi(y) = |\Omega| / |\partial\Omega|$ for every $y\in\partial\Omega\setminus\mathcal{C}$. This fact has two consequences. As a first consequence, the upper bound inequality \eqref{f:dis1} in Step 2 holds in the form \begin{equation}\label{f:dis1b} \frac{|\Omega|}{|\partial\Omega|}\, \kappa(y)\leq \frac{1}{2}\,, \qquad\forall y\in\partial\Omega\setminus\mathcal{C}. \end{equation} Namely, if $\kappa(y) \leq 0$ the inequality is trivial, whereas, if $\kappa(y) > 0$, then $\lambda(y) \leq 1/\kappa(y)$ and \[ \frac{|\Omega|}{|\partial\Omega|} = \varphi(y) = \lambda(y) - \frac{\lambda(y)^2}{2}\, \kappa(y) \leq \frac{1}{2\kappa(y)} \] and \eqref{f:dis1b} follows. As a second consequence, $\partial\Omega$ cannot have (convex) corners. Namely, if $y_i$ is a point with $\pscal{\nu_i(y_i)}{\nu_{i+1}(y_i)} < 1$, a simple geometric argument shows that $\lim_{y\to y_i} \lambda(y) = 0$. Since $\kappa$ is uniformly bounded on $\partial\Omega\setminus\mathcal{C}$, we deduce that $\lim_{y\to y_i} \varphi(y) = 0$ and, by assumption, we conclude that $\varphi = 0$ on $\partial\Omega\setminus\mathcal{C}$, a contradiction. Now, since \eqref{f:dis1b} holds and the absence of convex corners ensures that the Minkowski formula is satisfied for $\Omega$ in the very same form as if it was a $C^2$ domain, the conclusion of the proof is identical to Step 3 in the proof of Theorem~\ref{t:geom2}. \end{proof} \section{Application to PDE's of Monge-Kantorovich type}\label{secsabbia} In this section we deal with question (\ref{Q2}) stated in the Introduction. We begin by explaining more in detail its relationship with the overdetermined problem (\ref{deqp}) for the $p$-Laplacian. Let $\Omega\subset \mathbb{R} ^n$ be an open bounded connected set of class $C^2$, and let $f$ be a signed measure with finite total variation supported in $\overline \Omega$. For every $p >1$, set $$\alpha _p := \min \Big \{ \int _{\Omega} \frac{1}{p} |\nabla u | ^ p \, dx - \langle f , u \rangle \ :\ u \in W ^ {1,p} _0 (\Omega) \Big \}\ ,$$ and denote by $u _p$ the unique minimizer, namely the unique solution to the boundary value problem \[\left\{\begin{array}{rll} - \Delta_p u =f\qquad&\hbox{ in }\Omega\,,\\ u=0\qquad&\hbox{ on }\partial\Omega\, . \end{array}\right. \] It was proved in \cite{BoBuDe} that, as $p \to + \infty$, \begin{eqnarray} &\alpha _p \to \alpha & \label{conv0} \\ \noalign{\smallskip} & u _p \to u \ \ \hbox{ uniformly } & \label{conv1} \\ \noalign{\smallskip} & |\nabla u _p| ^ { p-2} \rightharpoonup \mu \ \ \hbox{weakly as measures\ ,} & \label{conv2} \end{eqnarray} being \begin{equation}\label{alpha}\alpha:= - \sup \Big \{ \langle f, u \rangle \ : \ u \in C^ \infty _ 0 (\mathbb{R}^n) \,, \ |\nabla u| \leq 1 \hbox{ on } \Omega\, , \ u = 0 \hbox{ on } \partial \Omega \Big \} \end{equation} and $(u, \mu)$ a solution to \begin{equation}\label{deqp23} \begin{cases} - {\rm div} ( (D _\mu u) \mu) =f &\text{in}\ \Omega\,,\\ u \in {\rm Lip }_1 (\Omega, \partial \Omega)\,, \\ | D_\mu u| = 1 & \mu\hbox{-a.e.} \end{cases} \end{equation} Here \[ {\rm Lip } _1 (\Omega, \partial \Omega) = \Big \{ u \in W ^ {1, \infty} (\Omega)\ :\ u = 0 \hbox{ on } \partial \Omega \ \hbox{ and } \ |\nabla u| \leq 1 \hbox{ a.e. 
in } \Omega \Big \}\ , \] where, thanks to the regularity of $\Omega$, we understand that a function $u\in W^{1,\infty}(\Omega)$ is continuously extended to $\overline{\Omega}$, while $\mu$ is a positive measure supported on $\overline \Omega$, and $D_\mu u$ denotes the $\mu$-tangential gradient of $u$, which can be suitably defined starting from the notion of tangent space to $\mu$ introduced in \cite{BoBuSe} (see also \cite{FrMa}). Here we do not introduce the tangential calculus with respect to a measure (for which we refer {\it e.g.} to the survey paper \cite{BoFr}), since we are going to restrict our attention to the case when $\mu$ is an absolutely continuous measure: $$ \mu = v (x) \, dx \, , \hbox{ with } v \in L ^ 1 (\Omega), \ \ v \geq 0\ . $$ In this case, the $\mu$-tangential gradient of $u$ agrees with the gradient of $u$ on the set $\{ v>0 \}$, and vanishes on the complementary set $\{ v = 0 \}$. Thus system (\ref{deqp23}) can be rewritten as \begin{equation}\label{deqp3} \begin{cases} - {\rm div} ( v \nabla u) =f &\text{in}\ \Omega\,,\\ u \in {\rm Lip }_1 (\Omega, \partial \Omega)\,, \\ ( 1 - |\nabla u| ) v = 0 & \text{a.e.\ in}\ \Omega\, . \end{cases} \end{equation} In view of the convergence property (\ref{conv2}), we paraphrase the classical overdetermined Neumann condition for system (\ref{deqp}), namely $$|\nabla u _p | = c \qquad \hbox{ on } \partial \Omega\ ,$$ with the following overdetermined condition for system (\ref{deqp23}): \begin{equation}\label{muac2} \mu = v (x) \, dx \, , \hbox{ with } v \in L ^ 1 (\Omega), \ \ v \geq 0,\ \ v= c \hbox{ on } \partial \Omega\ . \end{equation} In this way, we arrive at question (\ref{Q2}): under the assumption that the source $f$ is a positive constant on $\overline \Omega$, does the existence of a solution to (\ref{deqp23})-(\ref{muac2}) imply the roundness of $\Omega$? \medskip \begin{remark}{\rm We point out that the constancy condition required of $v$ on the boundary in (\ref{muac2}) is meaningful as soon as the source $f$ is assumed to be a continuous function on $\overline \Omega$, because in this case the $v$-component of a solution $(u, v)$ to (\ref{deqp3}) is itself a continuous function (see \cite[Prop.~3.2]{CCCG}). Moreover, we recall that the $v$-component is unique (see \cite[Thm.~4.1]{CCCG}, \cite[Section 6]{CM10} and the proof of Theorem~\ref{teosabbia} below), whereas the $u$-component is unique (and coincides with $d_{\Omega}$) if and only if the singular set $\Sigma$ of $d _\Omega$ is contained in the support of the source $f$ \cite[Section 7]{CM10}. In particular, if $f$ is a positive constant, then $u=d_{\Omega}$ and $u_p$ converges uniformly to $d_{\Omega}$ as $p\to\infty$ (see also \cite{BDM} and \cite[Remark~2.1]{BuKa}).} \end{remark} \medskip The next result gives an affirmative answer to question (\ref{Q2}) (under the appropriate assumptions on $\Omega$ which allow us to apply Theorem \ref{t:geom2}; a similar statement clearly holds in the nonsmooth two-dimensional case covered by Theorem~\ref{t:geom2b}). \begin{theorem}\label{teosabbia} Let $\Omega\subset\mathbb{R}^n$ be a bounded connected open set of class $C^2$, starshaped with respect to the origin. If the source $f$ is positive and constant on $\overline \Omega$ and problem $(\ref{deqp23})$-$(\ref{muac2})$ admits a solution, then $\Omega$ is a ball.
\end{theorem} \begin{proof} Since $f\in C(\overline{\Omega})$, one solution to problem (\ref{deqp3}) is given by the pair $(d_{\Omega}, v_f)$, where $v_f\colon\overline{\Omega}\to \mathbb{R}$ is the (continuous) function defined by \begin{equation}\label{f:vf} v_f(x) = \begin{cases} \displaystyle{ \int_0^{\tau(x)} f(x+t\nu(x)) \prod_{i=1}^{n-1}\frac{1-(d_{\Omega}(x)+t)\, \kappa_i(x)}% {1-d_{\Omega}(x)\, \kappa_i(x)}\, dt} &\textrm{if $x\in{\overline{\Omega}}\setminus\overline{\Sigma}$},\\ 0, &\textrm{if $x\in\overline{\Sigma}$} \end{cases} \end{equation} (see \cite[Thm.~3.1]{CCCG}). Here, for every $x\in\overline{\Omega}\setminus\overline{\Sigma}$, we have used the notation $\kappa_i(x) := \kappa_i(\pi(x))$, $\nu(x) := \nu(\pi(x))$, and $\tau(x) := \lambda(\pi(x)) - d_{\Omega}(x)$. Moreover, the following uniqueness result holds for the $v$-component of the system: if $(u,v)$ is any solution to (\ref{deqp3}), then $v = v_f$ (see \cite[Thm.~4.1]{CCCG}). In particular, in the case $x\in\partial\Omega$ (i.e.\ $d_{\Omega}(x) = 0$), the representation formula (\ref{f:vf}) can be rewritten as: \[v_f(x) = \int_0^{\lambda(x)} f(x+t\nu(x)) \prod_{i=1}^{n-1}(1-t \kappa_i(x))\, dt, \qquad x\in\partial\Omega. \] From such representation formula at the boundary and the uniqueness of the $v$-component, when $f$ is a positive constant $\gamma$ for every $x\in\overline{\Omega}$, we have \[ v (y) = v_f(y) = \gamma \varphi(y)\qquad \forall y\in\partial\Omega, \] where $\varphi$ is precisely the function defined at (\ref{f:phi}). The conclusion now follows from Theorem~\ref{t:geom2}. \end{proof} \bigskip Theorem \ref{teosabbia} admits the following physical interpretations. \bigskip -- {\it A thermic model}. Consider the problem of finding a positive measure $\mu$ which solves the variational problem \begin{equation}\label{pbBB} \min \Big \{ {\mathcal C} (\mu) \ :\ \int d \mu = m \,,\ {\rm spt} (\mu) \subseteq \overline \Omega \Big \}\ , \end{equation} where $m$ is a prescribed positive parameter, and the cost ${\mathcal C}$ is given by $${\mathcal C} (\mu) := -\inf \Big \{ \int j (\nabla u) \, d \mu - \langle f, u \rangle \ :\ u \in {\mathcal D} (\mathbb{R} ^n) \ ,\ u = 0 \hbox{ on } \partial \Omega \Big \} \ ,$$ being $j$ a stored energy density (typically, $j (z) = \frac{1}{2}|z| ^ 2$), and $f$ a signed measure with finite total variation supported on $\overline \Omega$. If $\mu$ is interpreted as the distribution of a conducting material, $u$ as the temperature (which is kept $0$ at the boundary), and $f$ as a given heat sources density, (\ref{pbBB}) can be interpreted as the optimal design problem of finding the most performant conductor of prescribed mass in the design region $\overline \Omega$. A key result established in \cite[Theorem 2.3]{BoBu} states that the minimum in (\ref{pbBB}) is equal to $$\frac{\alpha ^ 2}{2m}\ ,$$ with $\alpha$ as in (\ref{conv0}). Moreover, up to constant multiples, system (\ref{deqp23}) provides necessary and sufficient optimality conditions on $u$ and $\mu$ for being a solution respectively to (\ref{alpha}) and (\ref{pbBB}) \cite[Theorem 3.9]{BoBu}. In the light of these results, Theorem \ref{teosabbia} can be interpreted as follows: \medskip {\it Assume that a constant source heats a region $\overline \Omega$, and that the most performant conductor in such region is represented by a function constant on the boundary $\partial \Omega$. Then $\Omega$ is a ball.} \bigskip -- {\it A model for sandpiles}. 
In the dynamical theory of granular matter the so-called table problem consists in studying the evolution of a sandpile created by pouring dry matter (the sand) onto a table, which is represented by a bounded open set $\Omega\subset\mathbb{R}^2$, while the (time-independent) matter source is represented by a non-negative function $f\in L^1(\Omega)$. Among differential models, in the so-called BCRE model (after Bouchaud, Cates, Ravi Prakash, Edwards \cite{BCRE}) and its successive modifications (see \cite{HK}), the description of the growing sandpile is based on the introduction of two layers, the \textsl{standing layer}, which collects the matter that remains at rest, and the \textsl{rolling layer}, which is the thin layer of matter moving down along the surface of the standing layer. The equilibrium solutions of this model are precisely the solutions to (\ref{deqp3}), where $u$ and $v$ denote respectively the thickness of the standing and rolling layer. Theorem \ref{teosabbia} has then the following straightforward meaning: \medskip {\it Assume that a uniformly distributed (and constant in time) amount of dry sand is poured onto a table $\Omega\subset\mathbb{R}^2$, and that, once the equilibrium is reached, the thickness of the rolling layer is constant on the boundary $\partial \Omega$. Then $\Omega$ is a disk.} \bigskip \section{Application to PDE's with partially web solutions}\label{secparweb} In this section we deal with question (\ref{Q1}) stated in the Introduction. We adopt the same notation for the set $\Omega_\Gamma$ and the space $\mathcal W (\Omega; \Omega_\Gamma)$. The next result gives sufficient conditions on $\Gamma$ for an affirmative answer in space dimensions $n=2$; $\kappa$ denotes the curvature of $\partial \Omega$. \begin{theorem}\label{teopartialweb} Let $\Omega\subset\mathbb{R}^2$ be a bounded connected open set of class $C^2$, starshaped with respect to the origin. Assume that $A\in C([0,+\infty))$ and that there exists a solution $u$ to equation \eqref{f:elleq} in $\Omega$ belonging to the space \begin{equation}\label{f:w1} \mathcal W^1 (\Omega; \Omega_\Gamma) := \left\{ u\in \mathcal W (\Omega; \Omega_\Gamma)\ : \ u|_{\overline{\Omega_\Gamma}} \in C^1 \right\}\,, \end{equation} where $\Gamma$ is a relatively open connected subset of $\partial \Omega$ such that \begin{itemize} \item[(i)] the maximum of $\kappa$ on $\partial \Omega$ is attained on $\Gamma$; \item[(ii)] for every $\epsilon > 0$ small enough, a.e.\ in $\Omega_{\epsilon} := \{x\in\Omega:\ d_{\Omega}(x) < \epsilon\}$ there holds: \[ A(|\nabla u|)\pscal{\nabla u}{\nabla d_{\Omega}} \leq [-A(|\nabla u|) u_{\nu}] {\big | _ \Gamma} + o(1) \] (where $o(1)$ denotes a quantity that vanishes as $\epsilon\to 0$). \end{itemize} Then $\Omega$ is a ball. \end{theorem} \begin{remark}{\rm In assumption (ii), $[-A(|\nabla u|) u_{\nu}] {\big | _ \Gamma}$ denotes the restriction of $[-A(|\nabla u|) u_{\nu}]$ to $\Gamma$, which is constant since $u \in \mathcal W (\Omega; \Omega_\Gamma)$. We point out that, if one assumes further that $u$ is of class $C^1(\overline{\Omega_\epsilon})$, condition (ii) can be rewritten in a more comfortable way. Namely, in this case it becomes \begin{itemize} \item[(ii')] the maximum of $( - A (|\nabla u|) u _\nu)$ on $\partial \Omega$ is attained on $\Gamma$. \end{itemize}} \end{remark} \medskip Theorem \ref{teopartialweb} should be compared with the results obtained for partially overdetermined boundary value problems obtained in \cite{FG}. 
Therein, it was proved in particular that existence of a solution to the boundary value problem \begin{equation}\label{deqtris} \left\{\begin{array}{rll} - \Delta u=1\qquad&\hbox{ in }\Omega\,,\\ u=0\qquad&\hbox{ on }\partial \Omega\,, \\ |\nabla u| =c\qquad&\hbox{ on }\Gamma\,, \end{array}\right. \end{equation} implies that $\Omega$ is a ball under one of the following assumptions: \medskip {--} $\partial \Omega$ is connected, and $\Gamma \subseteq \partial \widetilde \Omega$ for some open set $\widetilde \Omega$ with connected analytic boundary ({\it cf.} \cite[Theorem 1]{FG}); \medskip {--} $\partial \Omega \in C ^ {2, \alpha}$, $\sup _{x \in \Gamma} H (x) \geq 1/(nc)$ and the maximum of $|\nabla u|$ over $\partial \Omega$ is attained on $\Gamma$ ({\it cf.} \cite[Theorem 3]{FG}). \medskip In comparison with \cite[Theorem 1]{FG}, let us remark that the proof of Theorem \ref{teopartialweb} is straightforward in case $A$ is the Laplacian or any other operator in divergence form for which one knows the following two facts: equation $(\ref{f:elleq})$ has an analytic solution inside $\Omega$ and Serrin's symmetry result holds true. Indeed, in this case, by uniqueness of the analytic continuation, the solution which is assumed to exist in ${\mathcal W} (\Omega; \Omega _\Gamma)$ agrees with the analytic solution to $(\ref{f:elleq})$ on the whole of $\Omega$, so that $u$ satisfies at the same time a constant Dirichlet and a constant Neumann condition on the entire $\partial \Omega$, and the conclusion follows from Serrin's result. Clearly, this kind of argument no longer applies when dealing with degenerate elliptic operators such as the $p$-Laplacian, for which analytic regularity of solutions on the whole of $\Omega$ is not fulfilled (see {\it e.g.} \cite{DiBe, Tolk}). On the other hand, it is interesting to observe that the assumptions of Theorem \ref{teopartialweb} are quite similar to those of \cite[Theorem 3]{FG}, though (as already mentioned in the Introduction) the existence of a solution $u \in \mathcal W (\Omega; \Omega _\Gamma)$ neither implies nor is implied by the existence of a solution to (\ref{deqtris}), and though the proof techniques are completely different. The proof of Theorem \ref{teopartialweb} relies on Theorem \ref{t:geom2} combined with Proposition \ref{teo10} below, where we establish a link, valid at a point of maximal curvature, between the normal derivative of partially web solutions to $(\ref{f:elleq})$ and the function $\varphi$ introduced in (\ref{f:phi}). \begin{proposition}\label{teo10} Let $\Omega\subset\mathbb{R}^2$ be a bounded connected open set of class $C^2$, starshaped with respect to the origin. Assume there exists a solution $u$ to equation $(\ref{f:elleq})$ in $\Omega$ belonging to the space $\mathcal W^1 (\Omega; \Omega_\Gamma)$ defined in \eqref{f:w1}, where $\Gamma$ is a relatively open connected subset of $\partial \Omega$ such that the maximum of $\kappa$ on $\partial \Omega$ is attained at some point $y _0 \in\Gamma$. Then the following identity holds at $y _0$: \[ A(|\nabla u | (y _0)) u _\nu (y _0) \displaystyle{= -\varphi(y_0) } \ . \] \end{proposition} \proof Let $Y\colon (-r,r)\to\partial\Omega$ be a local parametrization of $\Gamma$ by arc-length, such that $Y(0) = y_0$.
For $\rho\in (-r,r)$, let $$\Gamma _\rho:= Y ( -\rho, \rho)\ .$$ For $\rho$ as above and $t \in [0, \Lambda (\rho)]$, we set $$\Lambda(\rho) := \lambda(Y(\rho))\,, \qquad K(\rho) := \kappa(Y(\rho))\ ,$$ and $$T(\rho) := \tau(Y(\rho)) = \dot Y (\rho)\,, \qquad N(\rho) := \nu(Y(\rho))\, , \qquad X (\rho, t) = Y (\rho) - t N (\rho)\ .$$ By assumption, \eqref{f:elleq} admits a solution $u \in \mathcal W^1 (\Omega; \Omega_\Gamma)$, so we can write $u(x) = h(d_{\Omega}(x))$ for every $x\in\overline{\Omega_{\Gamma}}$, with $h \in C^ 1$, and it holds \begin{equation}\label{gradu}\nabla u ( X (\rho , t)) = - h' (t) N (\rho)\ .\end{equation} We now construct a suitable family of test function to be used in equation $(\ref{f:elleq})$. Let $\Lambda_{\rho} := \min_{|\sigma|\leq\rho} \Lambda(\sigma)$. For $\epsilon>0$ small enough let $\phi_{\epsilon}, \eta_{\epsilon}\colon\mathbb{R}\to\mathbb{R}$ be the functions defined by \[ \phi_{\epsilon}(t) := \begin{cases} 0 & \textrm{if $t\leq 0$ or $t\geq \Lambda_{\rho} - \epsilon$,}\\ 1 & \textrm{if $t\in [\epsilon, \Lambda_{\rho}-2\epsilon]$,}\\ \frac{t}{\epsilon} & \textrm{if $t\in (0, \epsilon)$,}\\ \frac{\Lambda_{\rho} - \epsilon-t}{\epsilon} & \textrm{if $t\in (\Lambda_{\rho}-2\epsilon, \Lambda_{\rho} - \epsilon)$,} \end{cases}\quad \eta_{\epsilon}(\sigma) := \begin{cases} 0 & \textrm{if $|\sigma|\geq \rho$,}\\ 1 & \textrm{if $|\sigma|\leq \rho-\epsilon$,}\\ \frac{\rho-|\sigma|}{\epsilon} & \textrm{if $\rho-\epsilon < |\sigma|< \rho$.} \end{cases} \] Then, for $\rho \in (- r , r)$ and $\epsilon $ small enough, we consider the family of functions $\psi_{\rho,\epsilon}\colon\Omega\to\mathbb{R}$ given by \[ \psi_{\rho,\epsilon}(x):= \begin{cases} \phi_{\epsilon}(t) \eta_{\epsilon}(\sigma), &\text{if $x = X(\sigma, t)$ for some $(\sigma, t)\in D_{\rho} := (-\rho,\rho)\times (0, \Lambda_{\rho}-\epsilon)$},\\ 0, &\text{otherwise}. \end{cases} \] Since $X$ is a $C^1$ diffeomorphism from $\overline{D_{\rho}}$ to $X(\overline{D_{\rho}})$, each function $\psi_{\rho,\epsilon}$ is Lipschitz continuous. Therefore, it can be taken as a test function in equation $(\ref{f:elleq})$. Passing to the limit as $\epsilon \to 0$, then dividing by $|\Gamma _\rho |$ and passing to the limit also as $\rho \to 0$, we obtain \begin{equation}\label{f:eq1} \lim _{\rho \to 0} \Big \{ \frac{1}{|\Gamma _\rho |} \lim _{\epsilon \to 0} \int _\Omega \langle A (|\nabla u|) \nabla u, \nabla \psi _{\rho, \epsilon } \rangle \, dx \Big \} = \lim _{\rho \to 0} \Big \{ \frac{1}{|\Gamma _\rho |} \lim _{\epsilon \to 0} \int _\Omega \psi _{\rho, \epsilon } \, dx\Big \}\ . \end{equation} The right hand side of (\ref{f:eq1}) is immediately computed as \begin{equation}\label{f:eq2} \lim _{\rho \to 0} \Big \{ \frac{1}{|\Gamma _\rho |} \lim _{\epsilon \to 0} \int _\Omega \psi _{\rho, \epsilon } \, dx\Big \} = \lim _{\rho \to 0} \frac{|\Omega _\rho|}{|\Gamma _\rho|} = \lim _{\rho \to 0} \frac{\int_{\Gamma_\rho} \varphi \, d {\mathcal H} ^1}{|\Gamma _\rho|} = \varphi (y _0)\ . \end{equation} Let us compute the left hand side of (\ref{f:eq1}). Differentiating the relation $\psi_{\rho,\epsilon}(X(\sigma,t)) = \phi_{\epsilon}(t) \eta_{\epsilon}(\sigma)$ with respect to $t$ we get \begin{equation}\label{f:diffpsi} \pscal{\nabla\psi_{\rho,\epsilon}(X(\sigma,t))}{N(\sigma)} = -\phi_{\epsilon}'(t) \eta_{\epsilon}(\sigma) \qquad\text{a.e.\ on } D_{\rho}. 
\end{equation} By combining (\ref{gradu}) and (\ref{f:diffpsi}), we infer \[ \begin{array}{ll} \pscal{A (|\nabla u|) \nabla u}{\nabla\psi_{\rho,\epsilon}}(X(\sigma,t)) & = -A(|h'(t)|) h'(t) \pscal{N(\sigma)}{\nabla\psi_{\rho,\epsilon}(X(\sigma,t))} \\ \noalign{\medskip} & = A(|h'(t)|) h'(t) \phi_{\epsilon}'(t) \eta_{\epsilon}(\sigma) \qquad \text{a.e.\ on } D_{\rho}.\\ \end{array} \] Then, by using the change of variables formula (\ref{chv}), we get \[ \begin{split} \int _\Omega \langle A (|\nabla u|) & \nabla u, \nabla \psi _{\rho, \epsilon } \rangle \, dx \\ = {} & \int _{D_\rho} \langle A (|\nabla u|) \nabla u, \nabla \psi _{\rho, \epsilon } \rangle \, dx \\ = {} & \int_{-\rho}^{\rho} \int_0 ^ {\Lambda (\sigma) } A(|h'(t)|) h'(t) \phi_{\epsilon}'(t) \eta_{\epsilon}(\sigma) \big [ 1 - t K (\sigma) \big ] \, dt \, d\sigma \\ = {} & \frac{1}{\epsilon} \int_{-\rho}^{\rho} \int_0 ^ {\epsilon } A(|h'(t)|) h'(t) \eta_{\epsilon}(\sigma) \big [ 1 - t K (\sigma) \big ] \, dt \, d\sigma \\ & - \frac{1}{\epsilon} \int_{-\rho}^{\rho} \int_{\Lambda _\rho - 2 \epsilon} ^ {\Lambda _ \rho - \epsilon} A(|h'(t)|) h'(t) \eta_{\epsilon}(\sigma) \big [ 1 - t K (\sigma) \big ] \, dt \, d\sigma \ . \end{split} \] In the limit as $\epsilon \to 0^+$, by the regularity assumption made on the solution $u$ and on the operator $A$ (recall that $h\in C^1$ and $A \in C ([ 0 , + \infty))$), we obtain \[ \begin{split} \lim _{\epsilon \to 0 ^+} & \int _\Omega \langle A (|\nabla u|) \nabla u, \nabla \psi _{\rho, \epsilon } \rangle \, dx \\ & = \int_{-\rho}^{\rho} \Big \{ A(|h'(0)|) h'(0) - A(|h'(\Lambda _\rho)|) h'(\Lambda _\rho) \big [ 1 - \Lambda _\rho K (\Lambda _\rho) \big ] \Big \} d\sigma \\ & = |\Gamma _\rho| \Big \{ A(|h'(0)|) h'(0) - A(|h'(\Lambda _\rho)|) h'(\Lambda _\rho) \big [ 1 - \Lambda _\rho K (\Lambda _\rho) \big ] \Big \} \, . \end{split} \] In the limit as $\rho \to 0 ^+$, by exploiting again the regularity assumptions recalled above, we obtain $$\lim _{\rho \to 0 ^+} A(|h'(\Lambda _\rho)|) h'(\Lambda _\rho) \big [ 1 - \Lambda _\rho K (\Lambda _\rho) \big ] = A(|h'(\lambda (y _0))|) h'(\lambda (y _0)) \big [ 1 - \lambda (y _0) \kappa (y _0) \big ] = 0\ ,$$ where the last equality follows from Lemma \ref{l:focal}. We have thus proved that the left hand side of (\ref{f:eq1}) is given by \begin{equation}\label{f:eq3} \lim _{\rho \to 0} \Big \{ \frac{1}{|\Gamma _\rho |} \lim _{\epsilon \to 0} \int _\Omega \langle A (|\nabla u|) \nabla u, \nabla \psi _{\rho, \epsilon } \rangle \, dx \Big \} = A(|h'(0)|) h'(0) = - A (|\nabla u| ( y _0) ) u _\nu (y _0)\ . \end{equation} The proof is achieved by combining (\ref{f:eq1}), (\ref{f:eq2}), and (\ref{f:eq3}). \qed \bigskip \bigskip \medskip {\it Proof of Theorem \ref{teopartialweb}}. By assumption (i), there exists a point $y _0 \in \Gamma$ such that \begin{equation}\label{f:h1b} \kappa (y _0) = \max _{y \in \partial \Omega} \kappa (y)\ . \end{equation} By Proposition \ref{teo10}, it holds \[\varphi ( y _0) = - A \big ( |\nabla u| (y _0) \big ) u _\nu ( y _0) \ . \] We now exploit again the assumption that $u = h (d_\Omega)$ in $\overline{\Omega _\Gamma}$, for some function $h:\mathbb{R} ^+ \to \mathbb{R}$ of class $C^1$. Thus $A \big ( |\nabla u| \big ) u _\nu $ is constant on $\Gamma$, so that \[ \varphi(y_0) = - A \big ( |h' (0) |) h' (0) = [-A(|\nabla u|) u_{\nu}] {\big | _ \Gamma} \] and assumption (ii) becomes \[ A(|\nabla u|)\pscal{\nabla u}{\nabla d_{\Omega}} \leq \varphi(y_0) + o(1) \qquad \text{a.e.\ in}\ \Omega_{\epsilon}\,. 
\] Using $\psi_{\epsilon}(x) := \min\{ \epsilon^{-1} d_{\Omega}(x), 1\}$ as a test function in (\ref{f:elleq}) we obtain, for $\epsilon > 0$ small enough, \[ \int_{\Omega}\psi_{\epsilon}\, dx = \frac{1}{\epsilon}\int_{\Omega_{\epsilon}} A(|\nabla u|)\pscal{\nabla u}{\nabla d_{\Omega}}\, dx \leq \varphi(y_0) |\partial\Omega| + o(1) \] and, taking the limit as $\epsilon\to 0$, \[ |\Omega| \leq \varphi(y_0) |\partial\Omega|. \] This inequality, together with (\ref{f:h1b}), ensures that the hypotheses of Theorem \ref{t:geom2} are satisfied. Hence $\Omega$ must be a ball. \qed \bigskip\bigskip
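\bigskip {\bf A numerical illustration.} The following short Python sketch is our own illustration (it assumes only NumPy and is not part of the arguments above): it samples the function $f$ defined in (\ref{f:f}) on the set $D$ and confirms empirically the bound $f \le 1/n$ established in Step 1 of the proof of Theorem \ref{t:geom2}, and it checks that for a ball of radius $R$ in $\mathbb{R}^n$ one has $\varphi \equiv R/n = |\Omega|/|\partial\Omega|$, so that both conditions in (\ref{f:hypoa}) are trivially satisfied.

\begin{verbatim}
# Illustrative sketch (ours; assumes NumPy). It checks numerically:
#  (1) f(x_1,...,x_{n-1}) <= 1/n on D = {x_j <= 1}, cf. Step 1 of the proof;
#  (2) for a ball of radius R, phi = int_0^R (1 - t/R)^(n-1) dt = R/n,
#      which equals |Omega| / |partial Omega| for the ball.
import numpy as np

rng = np.random.default_rng(0)
n = 4                                    # space dimension, so n-1 curvatures

def f(x, grid=4000):
    # f(x) = mean(x) * int_0^1 prod_j (1 - s x_j) ds, midpoint rule on [0,1]
    s = (np.arange(grid) + 0.5) / grid
    integrand = np.prod(1.0 - s[:, None] * x[None, :], axis=1)
    return x.mean() * integrand.mean()

samples = [f(rng.uniform(-2.0, 1.0, n - 1)) for _ in range(5000)]
print(max(samples), "<=", 1.0 / n)       # the sampled maximum stays below 1/n

R = 1.7
t = (np.arange(4000) + 0.5) / 4000 * R   # midpoints on [0, R]
phi_ball = ((1.0 - t / R) ** (n - 1)).mean() * R
print(phi_ball, "approx", R / n)         # phi on a ball equals R/n
\end{verbatim}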
{ "timestamp": "2012-07-27T02:06:02", "yymm": "1207", "arxiv_id": "1207.6344", "language": "en", "url": "https://arxiv.org/abs/1207.6344", "abstract": "We prove that, if $\\Omega\\subset \\mathbb{R}^n$ is an open bounded starshaped domain of class $C^2$, the constancy over $\\partial \\Omega$ of the function $$\\varphi(y) = \\int_0^{\\lambda(y)} \\prod_{j=1}^{n-1}[1-t \\kappa_j(y)]\\, dt$$ implies that $\\Omega$ is a ball. Here $k_j(y)$ and $\\lambda(y)$ denote respectively the principal curvatures and the cut value of a boundary point $y \\in \\partial \\Omega$. We apply this geometric result to different symmetry questions for PDE's: an overdetermined system of Monge-Kantorovich type equations (which can be viewed as the limit as $p \\to + \\infty$ of Serrin's symmetry problem for the $p$-Laplacian), and equations in divergence form whose solutions depend only on the distance from the boundary in some subset of their domain.", "subjects": "Analysis of PDEs (math.AP); Differential Geometry (math.DG)", "title": "A new symmetry criterion based on the distance function and applications to PDE's", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.982557516796073, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.7091542066634013 }
https://arxiv.org/abs/1804.09820
A Nonlinear Spectral Method for Core--Periphery Detection in Networks
We derive and analyse a new iterative algorithm for detecting network core--periphery structure. Using techniques in nonlinear Perron-Frobenius theory, we prove global convergence to the unique solution of a relaxed version of a natural discrete optimization problem. On sparse networks, the cost of each iteration scales linearly with the number of nodes, making the algorithm feasible for large-scale problems. We give an alternative interpretation of the algorithm from the perspective of maximum likelihood reordering of a new logistic core--periphery random graph model. This viewpoint also gives a new basis for quantitatively judging a core--periphery detection algorithm. We illustrate the algorithm on a range of synthetic and real networks, and show that it offers advantages over the current state-of-the-art.
\section{Motivation} \label{sec:mot} Large, complex networks record pairwise interactions between components in a system. In many circumstances, we wish to summarize this wealth of information by extracting high-level information or visualizing key features. Two of the most important and well-studied tasks are \begin{itemize} \item \emph{clustering}, also known as \emph{community detection}, where we attempt to subdivide a network into smaller modules such that nodes within each module share many connections and nodes in distinct modules share few connections, and \item determination of \emph{centrality} or \emph{rank}, where we assign a nonnegative value to each node such that a larger value indicates a higher level of importance. \end{itemize} A distinct but closely related problem is to assign each node to either the \emph{core} or \emph{periphery} in such a way that core nodes are strongly connected across the whole network whereas peripheral nodes are strongly connected only to core nodes; hence there are relatively weak periphery--periphery connections. More generally, we may wish to assign a non-negative value to each node, with a larger value indicating greater ``coreness.'' The images in the centre and right of Figure~\ref{fig:sigmoid} indicate the two-by-two block pattern associated with a core--periphery structure. The core--periphery concept emerged implicitly in the study of economic, social and scientific citation networks, and was formalized in a seminal paper of Borgatti and Everett \cite{borgatti2000models}. A review of recent work on modeling and analyzing core--periphery structure, and related ideas in degree assortativity, rich-clubs and nested/bow-tie/onion networks, can be found in \cite{csermely2013structure}. We focus here on the issue of \emph{detection}: given a large complex network with nodes appearing in arbitrary order, can we discover, quantify and visualize any inherent core--periphery organization? In the next section, we set up our notation and discuss background material. Many detection algorithms can be motivated from an optimization perspective. In section~\ref{sec:log} we use such an approach to define and justify the logistic core--periphery detection problem. We also show how it relates to a new random graph model that generates core--periphery networks. In section~\ref{sec:nonlinear_PF} we prove that a suitably relaxed version of this discrete optimization problem may be solved efficiently using a nonlinear spectral method. The resulting algorithm is described in subsection~\ref{subsec:alg}. Experiments on real and synthetic networks are performed in section~\ref{sec:experiments}, and some conclusions are given in section~\ref{sec:disc}. \section{Background} \label{sec:bg} \subsection{Notation} We use bold letters to denote vectors and capital letters to denote matrices. The respective entries are denoted with lower case, non-bold symbols; for example $\boldsymbol x$ denotes the vector with $i$th entry $x_i$ and $A$ denotes the matrix with $i,j$th entries $a_{ij}$, $i,j=1, \dots, n$.
We use standard entry-wise notation and operations, so for instance $\boldsymbol x\geq 0$ denotes a vector with nonnegative entries, $|\boldsymbol x|$ the vector with entries $(|\boldsymbol x|)_i=|x_i|$, $e^{\boldsymbol x}$ the vector with entries $(e^{\boldsymbol x})_i=e^{x_i}$, and $\boldsymbol x \boldsymbol y$ the vector with entries $(\boldsymbol x\boldsymbol y)_i=x_iy_i$. For $p \ge 1$ we denote by $\|\boldsymbol x\|_p = (|x_1|^p+\cdots +|x_n|^p)^{1/p}$ the $p$-norm, with $\mathcal S_p = \{\boldsymbol x:\|\boldsymbol x\|_p=1\}$ the $p$-unit sphere, and by $\mathbb R_+^n = \{\boldsymbol x: x_i \geq 0, \forall i\}$ the cone of vectors with nonnegative entries. We use $A\in \mathbb R^{n\times n}$ to represent the adjacency matrix of a network $G=(V,E)$, with vertex set $V$ and edge set $E$. We consider undirected networks, so $A$ is symmetric. Nonnegative weights are allowed, with a larger value of $a_{ij}$ indicating a stronger connection between nodes $i$ and $j$. We assume that the network is connected; that is, every pair of nodes may be joined by a path of edges having nonzero weight. For a disconnected network we could simply consider each connected component separately. \subsection{Core--periphery Quality Functions}\label{sec:methods}\label{subsec:quality} Several models for core--periphery detection are based on the definition of various \textit{core--periphery quality functions} $f$ and their optimization over certain discrete or continuous sets of vectors. In this setting, node $i$ is assigned a value $x^\star_i$, where $\boldsymbol x^\star$ solves an optimization problem of the form \begin{equation}\label{eq:quality-function} \max_{\boldsymbol x \, \in\, \Omega}\, f(\boldsymbol x),\quad f(\boldsymbol x) = \sum_{i,j=1}^n a_{ij}\, \kappa(x_i,x_j), \end{equation} for some choice of kernel function $\kappa$ and constraint set $\Omega$. A larger value of $x^\star_i$ indicates greater ``coreness'', and the overall core--periphery structure may be examined by visualizing the adjacency matrix with nodes ordered according to the magnitude of the entries of $\boldsymbol x^\star$. We mention below some concrete examples. The influential work of Borgatti and Everett \cite{borgatti2000models} proposed a discrete notion of core--periphery structure based on comparing the given network with a block model that consists of a fully connected core and a periphery that has no internal edges but is fully connected to the core. Their method aims to find an indicator vector $\boldsymbol x$ with binary entries, where $x_i = 1$ assigns node $i$ to the core and $x_i=0$ assigns it to the periphery. By defining the matrix $C=(c_{ij})$ as $c_{ij} = 1$ if $x_i = 1$ or $x_j = 1$ and $c_{ij} = 0$ otherwise, they look at the quantity $ \rho_C = \sum_{ij} a_{ij}c_{ij} $ and aim to compute the binary vector $\boldsymbol x$ that maximizes $\rho_C$ among all possible reshufflings of $C$ such that the number of 1 and 0 entries is preserved.
Clearly this method corresponds to \eqref{eq:quality-function} with $\kappa(x,y) = \mathrm{sign}(x+y)$ and $\Omega = \{\boldsymbol x\in \{0,1\}^n : \sum_i x_i = m\}$, for a fixed positive integer $m\leq n$. Another popular technique, used for instance in UCINET \cite{ucinet}, is based on the best rank-one approximation of the off-diagonal entries of $A$. In other words, this method seeks $\boldsymbol x\in \mathbb R^n$ that minimizes $\sum_i\sum_{j\neq i}(a_{ij}-x_ix_j)^2$. This is done via the MINRES algorithm, as discussed, for instance, in \cite{minres}. Writing $$ A = \lambda_1 \boldsymbol v_1 \boldsymbol v_1^T + \lambda_2 \boldsymbol v_2 \boldsymbol v_2^T +\cdots + \lambda_n \boldsymbol v_n \boldsymbol v_n^T\, , $$ where $\lambda_1>0$ is the largest eigenvalue of $A$ and $\boldsymbol v_1$ the corresponding eigenvector, it follows that the optimal rank-one matrix $\boldsymbol x\boldsymbol x^T$ we are looking for is closely related to $\lambda_1 \boldsymbol v_1 \boldsymbol v_1^T$. Therefore the least-squares problem is equivalent to maximizing the Rayleigh quotient of $A$; that is, the following optimization problem \begin{equation} \max_{\boldsymbol x \neq 0}\frac{\boldsymbol x^T A \boldsymbol x}{\boldsymbol x^T \boldsymbol x}. \label{eq:RQ} \end{equation} This, in turn, coincides with \eqref{eq:quality-function} for $\kappa(x,y) = xy$ and $\Omega = \{\boldsymbol x : \boldsymbol x^T \boldsymbol x=1\}=\mathcal S_2$. Moreover, as the matrix $A$ is symmetric, nonnegative and irreducible, by the Perron-Frobenius theorem, the maximizer $\boldsymbol v_1$ is unique and entrywise positive and the corresponding eigenvalue $\lambda_1$ coincides with the spectral radius of $A$. Following a different construction, the use of the spectral radius and the associated Perron eigenvector of $A$ for detecting core--periphery structure is also considered in \cite{mondragon2016network}. Note that, thanks to the Perron-Frobenius theorem, it follows that the constraint set in \eqref{eq:quality-function} can be chosen as $\Omega = \mathcal S_2^+=\mathcal S_2\cap \mathbb R^n_+$. This observation has practical importance because it constrains the solution space. As we discuss in Section~\ref{sec:nonlinear_PF}, this feature is shared by our nonlinear core--periphery model, where existence and uniqueness are proved using a customized nonlinear Perron-Frobenius-type theorem. Moreover, note that having a nonnegative solution $\boldsymbol x$ to \eqref{eq:quality-function} not only allows for a core--periphery assignment or ranking, but also implicitly produces a continuous core--periphery score for the nodes. We note that the Perron--Frobenius eigenvector of $A$ is also a well-known nodal centrality measure \cite{EH10}. The concept of core--periphery quality measure with general kernel function, as formulated in \eqref{eq:quality-function}, was introduced by Rombach et al.\ in \cite{rombach2014core}.
Those authors focus on the choice $\kappa(x,y)=xy$ and introduce a novel continuous constraint set defined in terms of two parameters $0\leq \alpha, \beta \leq 1$ as follows: \begin{equation}\label{eq:C_alpha_beta} \Omega = C_{\alpha, \beta} = \left\{\boldsymbol x \in \mathbb R^n : \begin{array}{ll}x_i = \frac{i(1-\alpha)}{2\lfloor\beta n\rfloor} & \text{for } i=1,\dots,\lfloor\beta n\rfloor,\\ x_i = \frac{(i-\lfloor\beta n\rfloor)(1-\alpha)}{2(n-\lfloor\beta n\rfloor)}+\frac{1+\alpha}{2} &\text{for } i = \lfloor\beta n\rfloor +1, \dots, n \end{array}\right\}. \end{equation} Here $\alpha$ is used to tune the score jump between the peripheral node with highest score and the core node with lowest score, whereas $\beta$ is used to set the size of the core set. Note that, as $0\leq \alpha, \beta \leq 1$, we have $C_{\alpha,\beta}\subseteq \mathbb R^n_+$ and thus, as for the Perron--Frobenius eigenvector of $A$, the maximizer of \eqref{eq:quality-function} with $\kappa(x,y)=xy$ and $\Omega = C_{\alpha,\beta}$ is a nonnegative vector whose entries define a core--periphery score value, called the \textit{aggregate core score} in \cite{rombach2017core}. \subsection{The Optimization Problem}\label{ssubsec:opt} The models proposed in \cite{borgatti2000models} and \cite{rombach2014core} lead to discrete optimization problems whose global solution cannot be computed for large graphs. Both papers propose computational methods that deliver approximate solutions but do not come with guarantees of accuracy. The combinatorial optimization problem of \cite{borgatti2000models} is solved via random reshuffling. For the model proposed in \cite{rombach2014core} a simulated-annealing algorithm is used. The presence of the two parameters, $\alpha$ and $\beta$, adds a complication, which is addressed there by considering all $(\alpha,\beta)$ values on a discrete uniform lattice in $[0,1]^2$. Clearly, refining the discretization level improves the approximation to the solution but raises the computational cost. For the model used in UCINET based on MINRES \cite{ucinet,minres}, recalling (\ref{eq:RQ}) we note that an efficient approach is to recast the optimization problem into the computation of a matrix eigenvector, for which well-established algorithms are available. Since our approach fits into the core--periphery quality function optimization approach of \cite{rombach2014core,rombach2017core}, we will use the method developed there, with $\kappa(x,y)=xy$ and $\Omega=C_{\alpha,\beta}$, as a baseline for comparison in our experiments in Section~\ref{sec:experiments}. Although algorithms based on other choices of the kernel function $\kappa$ have not been considered in the literature so far, both in Section 2.2.1 of \cite{rombach2014core} and Section 4.2.1 of \cite{rombach2017core} it is pointed out that an ideal core--periphery kernel function is \begin{equation}\label{eq:mu_alpha} \kappa(x,y)=\mu_\alpha(x,y)=(|x|^\alpha+|y|^\alpha)^{1/\alpha} \end{equation} for $\alpha>0$ large. In fact this function is related to core--periphery structure in a very natural way, as we discuss in the next section. \section{Logistic Core--Periphery Detection Problem}\label{sec:log} We propose a new model based on the kernel $\kappa(x,y) = \max\{|x|,|y|\}$. Note that this kernel function arises as the $\alpha \to \infty$ limit of \eqref{eq:mu_alpha}.
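As a small, purely illustrative sketch (our own Python code, assuming only NumPy; nothing in it is prescribed by \cite{rombach2014core,rombach2017core} or by the method developed below), the quality function in \eqref{eq:quality-function} can be evaluated directly for the product kernel $xy$, for $\mu_\alpha$, and for $\max\{|x|,|y|\}$ on a small random test graph, which makes the convergence of $\mu_\alpha$ to the max kernel easy to observe.

\begin{verbatim}
# Illustration only (assumes NumPy): evaluate f(x) = sum_ij a_ij kappa(x_i, x_j)
# for three kernels and watch mu_alpha approach max{|x|,|y|} as alpha grows.
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric 0/1 adjacency, no loops

x = rng.random(n)
x = x / np.linalg.norm(x)                    # a trial point on the sphere S_2^+

def quality(A, x, kappa):
    X, Y = np.meshgrid(x, x, indexing="ij")  # X[i,j] = x_i, Y[i,j] = x_j
    return float(np.sum(A * kappa(X, Y)))

prod_kernel = lambda u, v: u * v
max_kernel = lambda u, v: np.maximum(np.abs(u), np.abs(v))
mu = lambda a: (lambda u, v: (np.abs(u) ** a + np.abs(v) ** a) ** (1.0 / a))

print("xy kernel:", quality(A, x, prod_kernel))
print("max kernel:", quality(A, x, max_kernel))
for alpha in (2, 10, 50):
    print("mu_alpha, alpha =", alpha, ":", quality(A, x, mu(alpha)))
\end{verbatim}

For growing $\alpha$ the last values decrease towards the max-kernel value, in line with the fact that $\mu_\alpha$ in \eqref{eq:mu_alpha} converges to $\max\{|x|,|y|\}$ as $\alpha \to \infty$.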
Focusing for now on the ranking problem, our goal is to determine a core--periphery ranking vector that assigns to each node a distinct integer between $1$ and $n$, with a lower rank denoting a more peripheral node. Clearly any such ranking vector is nothing but a permutation vector $\boldsymbol \pi$, where $i\mapsto \pi_i$ is a permutation of the set $\{1,\dots,n\}$. Therefore, if $\mathcal P_n$ is the set of permutation vectors with $n$ entries, we formulate our core--periphery detection problem as follows
\begin{equation}\label{eq:our_model}
\max_{\boldsymbol \pi \, \in \, \mathcal P_n}f_{\infty}(\boldsymbol \pi), \quad \text{where}\quad f_\infty(\boldsymbol \pi) = \sum_{i,j=1}^n a_{ij}\max\{\pi_i,\pi_j\}.
\end{equation}
We will see in Section \ref{sec:nonlinear_PF} that, in practice, finite but large enough values of $\alpha$ in \eqref{eq:mu_alpha} provide an accurate approximation of $\max\{x,y\}$. Moreover, relaxing from $\mathcal P_n$ to $\mathcal S_p^+ = \mathcal S_p \cap \mathbb R^n_+$ allows for a globally convergent, easily implementable and computationally feasible algorithm. We will refer to \eqref{eq:our_model} as the \textit{logistic core--periphery detection problem}. In order to motivate this name and the model itself, we discuss in the next section a natural and flexible random graph model for core--periphery structure.
\subsection{Logistic Core--periphery Random Graph Model}\label{sec:logistic_random_model}
We now consider random graph models that generate core--periphery structure. For this subsection only, we restrict to the case of unweighted, or binary, networks. We focus on models where the nodes can be placed in a natural ordering, represented by a permutation vector, say $i\mapsto \pi_i$. In this natural ordering, for every pair of nodes $i$ and $j$ the probability of an edge will be a function of $\pi_i$ and $\pi_j$. Moreover, these edge events will be independent. We note that such models have been studied in other contexts; for example, in an early reference Grindrod \cite{Grin02} used this framework to define a class of range-dependent graphs that captures features of the classic Watts-Strogatz model. A simple core--periphery model of this type arises when edges are present with probability one within the core and between core and periphery, and with probability zero among peripheral nodes. This model is considered for instance in \cite{borgatti2000models,rombach2014core}. In this model there exists a permutation of the indices $i\mapsto \pi_i$ such that an edge connecting two different nodes $i$ and $j$ exists with independent probability
$
\mathbb P(i\sim j) = H_t\left(\frac 1 n\max\{\pi_i,\pi_j\}\right),
$
where, for $t\in (0,1)$, $H_t$ is the Heaviside function $H_t(x)=1$ if $x\geq t$ and $H_t(x)=0$ otherwise. The parameter $t$ allows us to tune the size of the core and of the periphery. Figure~\ref{fig:sigmoid} (center) shows an example matrix whose $ij$-th entry is the probability $\mathbb P(i\sim j)$ from this model, for $t=1/2$ and $\pi_i=n-i$ for any $i$.
\begin{figure}[t]
\includegraphics[width=.26 \textwidth]{sigmoid_function_1}\hfill
\includegraphics[width=.25\textwidth]{sigmoid_function_3}\hfill
\includegraphics[width=.305\textwidth]{sigmoid_function_2}
\caption{Left: $\sigma_{s,t}(x)$ for $t=1/2$ and $s \in \{5, 10, 20, 40\}$.
The piecewise constant plot is the Heaviside function $H_{t}(x)$, which corresponds to $\lim_{s\to\infty}\sigma_{s,t}(x)$. Center: heatmap of $A$ with entries $a_{ij} = H_{1/2}(\max\{1-i/n, 1-j/n\})$. Right: heatmap of $A$ with entries $a_{ij}=\sigma_{10,1/2}(\max\{1-i/n, 1-j/n\})$. In these heatmaps, in order to emphasize the overall structure, the diagonal entries have been colored with the probabilities $\mathbb P(i\sim i)=H_{1/2}(1-i/n)$ and $\mathbb P(i\sim i)=\sigma_{10,1/2}(1-i/n)$, respectively. However, the associated random graphs have no self-loops and so the actual diagonal probabilities are $\mathbb P(i\sim i) = 0$.}\label{fig:sigmoid}
\end{figure}
The Heaviside function $H_t$ is a discontinuous step function, and it leads to an idealized all-or-nothing structure. Instead, we may consider a family of continuous approximations to $H_t$ based on the logistic sigmoid function. For $s,t\in \mathbb R$, $s\geq 0$ we define
$$
\sigma_{s,t}(x) = \frac{1}{1+e^{-s(x-t)}} \, .
$$
Note that, for any fixed $t$ and any $x\neq t$, we have $\lim_{s\to\infty}\sigma_{s,t}(x)=H_t(x)$. Examples are plotted in Figure~\ref{fig:sigmoid} (left). We now introduce the random graph model where an edge connecting two different nodes $i$ and $j$ exists with independent probability
$$
\textstyle{\mathbb P(i\sim j) = \sigma_{s,t}\left(\frac 1 n \max\{n-i,n-j\}\right) \, .}
$$
We refer to this as the \emph{logistic core--periphery random graph model}. The right-most plot in Figure~\ref{fig:sigmoid} shows a $20\times 20$ example matrix whose $ij$-th entry is the corresponding probability $\mathbb P(i\sim j)$, for $s=10$ and $t=1/2$. We see that, relative to the Heaviside version, this model gives a smoother transition from core to periphery, and has a built-in notion of ranking within each group. The relevance of this model for capturing core and peripheral nodes has also recently been pointed out in \cite{jia2018detecting}. \begin{rev}We are interested in the circumstance where a core--periphery structure is present in the graph, but must be discovered. In practice, our task is to find a suitable reordering of the nodes that highlights the presence of core and periphery. A natural approach is then to find the permutation of indices $\boldsymbol \pi \in \mathcal P_n$ that maximizes the likelihood, under the assumption of a logistic core--periphery structure.\end{rev}
This likelihood is given by
\begin{equation}\label{eq:LCP_probability}
\nu(\boldsymbol \pi)=\prod_{i\sim j}\varphi(\pi_i,\pi_j)\, \prod_{i\not\sim j}\left(1-\varphi(\pi_i,\pi_j)\right),
\end{equation}
where, for the sake of brevity, we let $\varphi(x,y)=\sigma_{s,t}(\frac 1 n \max\{x,y\})$. We now show that solving the proposed logistic core--periphery detection problem \eqref{eq:our_model} is equivalent to solving this maximum likelihood reordering problem.
\begin{theorem} \label{thm:mlp}
$\boldsymbol \pi^\star\in \mathcal P_n$ is a permutation that maximizes $\nu(\boldsymbol \pi)$ if and only if $\boldsymbol \pi^\star$ is a solution of \eqref{eq:our_model}.
\end{theorem}
\begin{proof}
Our proof exploits a very useful trick that Grindrod \cite{Grin02} used in the case of a range-dependent random graph: the likelihood $\nu(\boldsymbol \pi)$ can be equivalently written as
\begin{equation*}\label{eq:nu_2}
\nu(\boldsymbol \pi) = \prod_{i\sim j} \frac{\varphi(\pi_i,\pi_j)}{1-\varphi(\pi_i,\pi_j)}\, \prod_{i,j=1}^n \left(1-\varphi(\pi_i,\pi_j)\right)\, .
\end{equation*}
The right-hand factor runs over all pairs of nodes and therefore depends neither on the graph nor on the permutation $\boldsymbol \pi$; hence maximizing $\nu(\boldsymbol \pi)$ is equivalent to maximizing the left-hand factor. Thus, taking logarithms, we observe that $\boldsymbol \pi^\star$ maximizes $\nu$ if and only if it maximizes
$$\sum_{i,j=1}^n a_{ij}\log\left(\frac{\varphi(\pi_i,\pi_j)}{1-\varphi(\pi_i,\pi_j)}\right)\,.$$
Now, using the definition of $\varphi(x,y)$ in terms of the logistic sigmoid function, a short computation shows that $\log\left(\varphi(x,y)/(1-\varphi(x,y))\right)=s(\frac 1 n \max\{x,y\}-t)$, for any $x,y,t\in \mathbb R$, $s\geq 0$. Since $s/n>0$ and the term $-st\sum_{i,j}a_{ij}$ does not depend on $\boldsymbol \pi$, it follows that $\boldsymbol \pi^\star$ maximizes $\nu$ if and only if it maximizes the core--periphery quality function $\sum_{i,j}a_{ij}\max\{\pi_i,\pi_j\}$, which concludes the proof.
\end{proof}
In words, Theorem~\ref{thm:mlp} shows that in the case of unweighted networks, solving the logistic core--periphery detection problem \eqref{eq:our_model} is equivalent to solving the maximum likelihood reordering problem (\ref{eq:LCP_probability}) under the assumption that the network was generated from the logistic core--periphery random graph model. \begin{rev} This is somewhat analogous to a known phenomenon in the community detection case \cite{newman2016equivalence}. \end{rev} We mention that core--periphery detection via likelihood maximization on a random graph model was also proposed in \cite{Zhang15}. There, the authors used a stochastic block model where nodes are independently assigned to the core with probability $\gamma_1$ and to the periphery with probability $1- \gamma_1$. Core--core, core--periphery and periphery--periphery connections then appear with independent probabilities $p_{11}$, $p_{12}$ and $p_{22}$, with $p_{11} > p_{12} > p_{22}$. Inferring model parameters by maximizing the likelihood over all possible node bipartitions leads to a core--periphery assignment. Because solving this discrete optimization problem is not practicable for large networks, the authors develop an approximation technique based on expectation maximization and belief propagation. We emphasize that this random graph reordering/partitioning framework applies to unweighted (binary) networks.
\section{Nonlinear Spectral Method for Core--periphery Detection}\label{sec:nonlinear_PF}
In this section we introduce an iterative method for the logistic core--periphery detection problem \eqref{eq:our_model} and prove that it converges globally to the solution of a relaxed problem. We refer to this as a \textit{nonlinear spectral method} for two reasons. First, its derivation and analysis are inspired by recent work in nonlinear Perron-Frobenius theory \cite{gautier2016globally,mhpfpaper,tudisco2017node,gautier2018contractivity}. Second, as shown in Lemma~\ref{lem:eigenvalue_problem}, there is an equivalence between a relaxation of \eqref{eq:our_model} and a nonlinear eigenvalue problem. Recall that the network is assumed to be (nonnegatively) weighted, connected and undirected.
The logistic core--periphery model \eqref{eq:our_model} is a combinatorial optimization problem whose exact solution is not computationally feasible for large-scale networks. We therefore introduce two relaxations that lead to a new ``smooth'' logistic core--periphery problem whose solution may be computed efficiently with a new nonlinear spectral method. First, given $\alpha>1$, we replace the nonsmooth kernel function $\max\{|x|,|y|\}$ with
\begin{equation}
\mu_\alpha(x,y) = (|x|^\alpha + |y|^\alpha)^{1/\alpha}.
\label{eq:mualpha}
\end{equation}
As mentioned at the end of subsection~\ref{subsec:quality}, $\max\{|x|,|y|\}$ is the limit of $\mu_\alpha(x,y)$ for $\alpha\to\infty$. More precisely, letting
\begin{equation}
f_\alpha(\boldsymbol x) = \sum_{ij}a_{ij}\mu_\alpha(x_i,x_j),
\label{eq:falpha}
\end{equation}
a simple computation using the H\"older inequality reveals that
\begin{equation}\label{eq:alpha_approx}
f_\infty(\boldsymbol x)\leq f_\alpha(\boldsymbol x)\leq 2^{1/\alpha} f_\infty(\boldsymbol x)\, ,
\end{equation}
for any $\alpha>1$. Therefore when $\alpha$ is large enough, using $f_\alpha$ in place of $f_\infty$ in \eqref{eq:our_model} provides a very accurate approximation. Second, we relax the discrete constraint set $\mathcal P_n$ into a continuous one. In doing this we note that every vector in $\mathcal P_n$ is entry-wise nonnegative and has fixed norm. For instance, $\|\boldsymbol x\|_1 = \frac 1 2 n (n+1)$, for any $\boldsymbol x \in \mathcal P_n$. Note that the normalization constant $\frac 1 2 n (n+1)$ can be chosen arbitrarily. In fact, the function $f_\alpha$ we are considering is positively $1$-homogeneous; that is, for any $\lambda>0$ we have $f_\alpha(\lambda \boldsymbol x) = \lambda f_\alpha(\boldsymbol x)$. This implies that if $\boldsymbol x$ maximizes $f_\alpha$ among all the vectors of norm exactly $1$ then, for any $a>0$, $a\boldsymbol x$ maximizes $f_\alpha$ among all the vectors of norm exactly $a$. We therefore relax $\mathcal P_n$ into a sphere of nonnegative vectors. For convenience, we choose the $p$-sphere $\mathcal S_p=\{\boldsymbol x : \|\boldsymbol x\|_p=1\}$ and let $\mathcal S_p^+ = \mathcal S_p \cap \mathbb R^n_+$. Overall, for $\mu_\alpha(x,y)$ in (\ref{eq:mualpha}) and $f_\alpha(\boldsymbol x)$ in (\ref{eq:falpha}), we modify the original logistic core--periphery problem \eqref{eq:our_model} into
\begin{equation}\label{eq:maximization_problem}
\max_{\boldsymbol x\in \mathcal S_p^+} f_\alpha(\boldsymbol x), \quad\text{where}\quad f_\alpha(\boldsymbol x)=\sum_{i,j=1}^n a_{ij}\, \mu_\alpha(x_i,x_j) \, .
\end{equation}
We devote the remainder of this section to proving that, for any $\alpha>1$ and any $p>\alpha$, the relaxed logistic core--periphery model (\ref{eq:maximization_problem}) has a unique, entry-wise positive solution that can be efficiently computed via a globally convergent iterative method.
\subsection{Existence and Uniqueness of a Solution to the Relaxed Problem}
We begin by observing that the function $f_\alpha$ attains its maximum on a positive vector.
\begin{lemma}\label{lem:the_solution_is_positive}
The problem \eqref{eq:maximization_problem} is solved by a vector $\boldsymbol x^\star$ such that $\boldsymbol x^\star>0$.
\end{lemma}
\begin{proof}
As $f_\alpha(\boldsymbol x) = f_\alpha(|\boldsymbol x|)$ for any $\boldsymbol x\in \mathbb R^n$, we easily deduce that the maximum is attained on a vector $\boldsymbol x^\star\geq 0$. Now suppose that there exists $1\leq k\leq n$ such that $x^\star_k=0$. As the graph is connected, there exists $\ell\neq k$ such that $a_{k\ell}>0$. Consider the vector $\boldsymbol y$ defined by $y_i = x^\star_i$ for $i\neq k$ and $y_k=\varepsilon >0$. Since $\mu_\alpha(x_j^\star,\varepsilon)\geq \mu_\alpha(x_j^\star,0)=x_j^\star$ for every $j$, with strict inequality for $j=\ell$, we obtain
\begin{align*}
f_\alpha(\boldsymbol y) &= \sum_{i,j\neq k}a_{ij}\mu_\alpha(x^\star_i,x^\star_j)+2\sum_{j\neq k}a_{kj}\mu_\alpha(x_j^\star,\varepsilon)+a_{kk}\mu_\alpha(\varepsilon,\varepsilon)\\
&\geq f_\alpha(\boldsymbol x^\star)+2a_{k\ell}\left(\mu_\alpha(x_\ell^\star,\varepsilon)-x_\ell^\star\right) >f_\alpha(\boldsymbol x^\star).
\end{align*}
The vector $\boldsymbol y$ need not belong to $\mathcal S_p^+$, but $\|\boldsymbol y\|_p=(1+\varepsilon^p)^{1/p}$ and, since $p>\alpha$, the gain above (which is at least of order $\varepsilon^\alpha$) dominates the effect of renormalizing, which is of order $\varepsilon^p$. Hence $f_\alpha(\boldsymbol y/\|\boldsymbol y\|_p)>f_\alpha(\boldsymbol x^\star)$ for all sufficiently small $\varepsilon>0$, which contradicts the maximality of $\boldsymbol x^\star$. We conclude that the solution of \eqref{eq:maximization_problem} is attained on an entry-wise positive vector.
\end{proof}
Now, by using the positive $1$-homogeneity of $f_\alpha$, we show that the constrained optimization problem \eqref{eq:maximization_problem} is equivalent to an unconstrained problem for the normalized function $f_\alpha(\boldsymbol x)/\|\boldsymbol x\|_p$.
\begin{lemma}\label{lem:balls_equivalence}
For any $p>1$ and any $\alpha>1$ we have
$$
\max_{\boldsymbol x\in \mathcal S_p^+}f_\alpha(\boldsymbol x) = \max_{\boldsymbol x\in\mathbb R^n}\frac{f_\alpha(\boldsymbol x)}{\|\boldsymbol x\|_p}.
$$
\end{lemma}
\begin{proof}
By the $1$-homogeneity of $f_\alpha$ we have the following chain of inequalities
\begin{align*}
\max_{\|\boldsymbol x\|_p\leq 1 }f_\alpha(\boldsymbol x) &\geq \max_{\|\boldsymbol x\|_p=1}f_\alpha(\boldsymbol x) = \max_{\boldsymbol x\in\mathbb R^n}\,f_\alpha\!\left(\frac{\boldsymbol x}{\|\boldsymbol x\|_p}\right) =\max_{\boldsymbol x\in\mathbb R^n}\frac{f_\alpha(\boldsymbol x)}{\|\boldsymbol x\|_p}\\
& \geq \max_{\|\boldsymbol x\|_p\leq 1 } \frac{f_\alpha(\boldsymbol x)}{\|\boldsymbol x\|_p} \geq \max_{\|\boldsymbol x\|_p\leq 1}f_\alpha(\boldsymbol x)\, .
\end{align*}
This implies that the inequalities above are all identities. Together with $f_\alpha(\boldsymbol x)=f_\alpha(|\boldsymbol x|)$ this shows the claim.
\end{proof}
We have the following consequence.
\begin{lemma}\label{lem:eigenvalue_problem}
Let $F_\alpha=\nabla f_\alpha:\mathbb R^n\to\mathbb R^n$ be the gradient of $f_\alpha$, that is,
$$
F_\alpha(\boldsymbol x)_i = 2\, \sum_{j=1}^na_{ij}\, |x_i|^{\alpha-2}x_i(|x_i|^\alpha+|x_j|^\alpha)^{1/\alpha -1}\, , \quad i=1,\dots,n\, .
$$
Then, for any $p>1$, the following statements are equivalent:
\begin{enumerate}
\item $\boldsymbol x$ is a solution of \eqref{eq:maximization_problem},
\item $\boldsymbol x$ satisfies the eigenvalue equation $F_\alpha(\boldsymbol x) = \lambda\, |\boldsymbol x|^{p-2}\boldsymbol x$ with $\lambda >0$,
\item $\boldsymbol x$ is a fixed point of the map $G_\alpha(\boldsymbol x)= |F_\alpha(\boldsymbol x)|^{q-2}F_\alpha(\boldsymbol x)/\|F_\alpha(\boldsymbol x)\|_q^{q-1}$, where $q$ is the H\"older conjugate of $p$, i.e.\ $1/p+1/q=1$.
\end{enumerate}
\end{lemma}
\begin{proof}
For convenience, let us write $r_\alpha(\boldsymbol x)=f_\alpha(\boldsymbol x)/\|\boldsymbol x\|_p$. By differentiating $r_\alpha(\boldsymbol x)$ we see that
$$
\nabla r_\alpha(\boldsymbol x) =0 \iff \frac{1}{\|\boldsymbol x\|_p}\Big\{ F_\alpha(\boldsymbol x) - \lambda \, |\boldsymbol x|^{p-2}\boldsymbol x\Big\}=0,
$$
with $\lambda = f_\alpha(\boldsymbol x)/\|\boldsymbol x\|_p^p>0$. Together with Lemma \ref{lem:balls_equivalence} this proves $(1)\iff (2)$. Now note that the map $\boldsymbol x\mapsto \psi(\boldsymbol x)=|\boldsymbol x|^{q-2}\boldsymbol x$ is such that $\psi(|\boldsymbol x|^{p-2}\boldsymbol x)=\boldsymbol x$. In fact
$$
\psi(|\boldsymbol x|^{p-2}\boldsymbol x)=|\, |\boldsymbol x|^{p-2}\boldsymbol x|^{q-2}\, |\boldsymbol x|^{p-2}\boldsymbol x = |\boldsymbol x|^{(p-1)(q-2)+p-2}\boldsymbol x=\boldsymbol x\, .
$$
As $\psi$ is bijective we have $F_\alpha(\boldsymbol x)=\lambda\,|\boldsymbol x|^{p-2}\boldsymbol x\iff |F_\alpha(\boldsymbol x)|^{q-2}F_{\alpha}(\boldsymbol x) =\psi(\lambda)\boldsymbol x$.
Therefore, recalling that $\|\boldsymbol x\|_p=1$ and $\lambda>0$ we have
\begin{align*}
F_\alpha(\boldsymbol x)=\lambda\,|\boldsymbol x|^{p-2}\boldsymbol x \iff \frac{|F_\alpha(\boldsymbol x)|^{q-2}F_{\alpha}(\boldsymbol x)}{\|F_\alpha(\boldsymbol x)\|_q^{q-1}}=\frac{|F_\alpha(\boldsymbol x)|^{q-2}F_{\alpha}(\boldsymbol x)}{\||F_\alpha(\boldsymbol x)|^{q-2}F_{\alpha}(\boldsymbol x)\|_p}=\frac{\psi(\lambda)\boldsymbol x}{\|\psi(\lambda)\boldsymbol x\|_p}=\boldsymbol x,
\end{align*}
where the first identity follows by $\||\boldsymbol y|^{q-2}\boldsymbol y\|_p = \|\boldsymbol y^{q-1}\|_p =\|\boldsymbol y\|_q^{q-1}$. This shows $(2)\iff(3)$ and concludes the proof.
\end{proof}
We need one final rather technical lemma that, for the sake of completeness, we state for the case where $\alpha$ may attain both positive and negative values.
\begin{lemma}\label{lem:lip_constant}
For $\alpha\in\mathbb R$ let $G_\alpha$ be defined as in Lemma \ref{lem:eigenvalue_problem} above, and let $g_i:\mathbb R^n\to\mathbb R$ be the scalar functions such that $G_\alpha(\boldsymbol x)=(g_1(\boldsymbol x), \dots, g_n(\boldsymbol x))$. Then
$$\left\| \frac{\nabla g_k(\boldsymbol x)\boldsymbol x}{g_k(\boldsymbol x)}\right\|_1 \leq \frac{|1-\alpha|}{p-1}$$
for any vector $\boldsymbol x\geq 0$ and any $k=1,\dots,n$.
\end{lemma}
\begin{proof}
First note that $\boldsymbol y\geq 0$ implies that both $F_\alpha(\boldsymbol y)$ and $G_\alpha(\boldsymbol y)$ are nonnegative. Now, let $i,k\in \{1,\dots,n\}$. By the definition of $g_k$ and using the chain rule, we obtain
\begin{align}
\begin{aligned}\label{eq:derivative}
\left|\frac{\partial_i \{g_k(\boldsymbol y)\}}{g_k(\boldsymbol y)}\right| &= \left|\frac{\partial_i \Big\{\big(F_\alpha(\boldsymbol y)_k\big)^{q-1}\Big\}}{\,\,\,\big(F_\alpha(\boldsymbol y)_k\big)^{q-1}} - \frac{\partial_i\Big\{ \|F_\alpha(\boldsymbol y)\|_q^{q-1}\Big\}}{\,\,\,\, \|F_\alpha(\boldsymbol y)\|_q^{q-1}}\right|\\
&= (q-1) \left| \frac{\partial_i F_\alpha(\boldsymbol y)_k}{F_\alpha(\boldsymbol y)_k}- \frac{\sum_m \big(F_\alpha(\boldsymbol y)_m\big)^{q-1}\partial_i F_\alpha(\boldsymbol y)_m}{\|F_\alpha(\boldsymbol y)\|_q^q}\right|\\
&= (q-1) \left| \frac{\partial_i F_\alpha(\boldsymbol y)_k}{F_\alpha(\boldsymbol y)_k}- \sum_m\left(\frac{F_\alpha(\boldsymbol y)_m}{\|F_\alpha(\boldsymbol y)\|_q}\right)^q \frac{\partial_i F_\alpha(\boldsymbol y)_m}{F_\alpha(\boldsymbol y)_m}\right|\, .
\end{aligned}
\end{align}
Now note that for any nonnegative $\boldsymbol y\in\mathbb R^n$ we have
$$
\frac{\partial_i F_\alpha(\boldsymbol y)_m y_i}{F_\alpha(\boldsymbol y)_m} = (1-\alpha)\,\frac{y_i^\alpha}{y_i^\alpha+y_m^\alpha}\,\frac{a_{mi}(y_m^\alpha+y_i^\alpha)^{1/\alpha-1}} {\sum_j a_{mj}(y_m^\alpha+y_j^\alpha)^{1/\alpha-1}} \, ,
$$
a quantity whose sign is that of $1-\alpha$. Let $\tilde m\in\{1,\dots, n\}$ be an index for which the absolute value of the quantity above is maximal. From \eqref{eq:derivative} we deduce that $|\partial_i \{g_k(\boldsymbol y)\}y_i/g_k(\boldsymbol y)|$ is of the form $(q-1)\,|\beta_k - \sum_m \gamma_m \beta_m|$, where $\beta_m = \partial_i F_\alpha(\boldsymbol y)_m\, y_i/F_\alpha(\boldsymbol y)_m$, $\gamma_m\geq 0$ for all $m=1,\dots,n$ and $\sum_m \gamma_m=1$. As all the $\beta_m$ have the same sign, this implies that $|\beta_k - \sum_m \gamma_m \beta_m|\leq |\beta_{\tilde m}|$ and, as $y_i^\alpha/(y_i^\alpha+y_{\tilde m}^\alpha)\leq 1$, we find
$$
\left|\frac{\partial_i \{g_k(\boldsymbol y)\}y_i}{g_k(\boldsymbol y)}\right| \leq (q-1)|1-\alpha|\left( \frac{a_{\tilde m,i}(y_{\tilde m}^\alpha+y_i^\alpha)^{1/\alpha-1}} {\sum_j a_{\tilde m,j}(y_{\tilde m}^\alpha+y_j^\alpha)^{1/\alpha-1}} \right)\, .
$$
Summing this formula over $i$ and recalling that $q-1=1/(p-1)$ concludes the proof.
\end{proof}
This leads to our main result.
\begin{theorem}\label{thm:convergence}
Assume $\alpha >1$ and $p>\alpha$. Then \eqref{eq:maximization_problem} has a unique solution $\boldsymbol x^\star$ that is entry-wise positive. Moreover, for any $\boldsymbol x_0>0$, if $G_\alpha$ is defined as in Lemma \ref{lem:eigenvalue_problem}, then the sequence defined by $\boldsymbol x_{k+1}=G_\alpha(\boldsymbol x_k)$ belongs to $\mathcal S_p^+$ and converges to $\boldsymbol x^\star$.
\end{theorem}
\begin{proof}
The fact that $\boldsymbol x_k\in \mathcal S_p^+$ for any $k$ is an obvious consequence of the identities $\||\boldsymbol z|^{q-2}\boldsymbol z\|_p = \|\boldsymbol z^{q-1}\|_p =\|\boldsymbol z\|_q^{q-1}$. Now we show that the map $G_\alpha$ defined in Lemma \ref{lem:eigenvalue_problem} is a contraction, which, due to the Banach fixed point theorem, gives convergence of the sequence and uniqueness of the solution. To this end we use the Thompson metric $d_T$ defined for $\boldsymbol x,\boldsymbol y\in \mathcal S_p^+$ as $d_T(\boldsymbol x,\boldsymbol y) = \|\log(\boldsymbol x)-\log(\boldsymbol y)\|_\infty$. As before, for $i=1,\dots,n$, let $g_i:\mathbb R^n\to\mathbb R$ be the scalar functions such that $G_\alpha(\boldsymbol x)=(g_1(\boldsymbol x), \dots, g_n(\boldsymbol x))$.
By the Mean Value Theorem we have
$$
\phi(\boldsymbol x)-\phi(\boldsymbol y)=\nabla \phi(\boldsymbol \xi)^T(\boldsymbol x-\boldsymbol y)
$$
for any differentiable function $\phi:\mathbb R^n \to\mathbb R$ and with $\boldsymbol \xi$ being a point in the segment joining $\boldsymbol x$ and $\boldsymbol y$. Consider the function $\phi(\boldsymbol x) = \log(g_i(e^{\boldsymbol x}))$. Then $\nabla \phi(\boldsymbol \xi) = \nabla g_i(e^{\boldsymbol \xi})e^{\boldsymbol \xi}/g_i(e^{\boldsymbol \xi})$ and we obtain
\begin{equation*}
|\phi(\boldsymbol x)-\phi(\boldsymbol y)|=|\log(g_i(e^{\boldsymbol x}))-\log(g_i(e^{\boldsymbol y}))| = \left| \frac{(\nabla g_i(e^{\boldsymbol \xi})e^{\boldsymbol \xi})^T(\boldsymbol x-\boldsymbol y)}{g_i(e^{\boldsymbol \xi})} \right|.
\end{equation*}
As the exponential function maps $\mathbb R^n$ bijectively onto the set of entry-wise positive vectors, the previous equation implies that for any two positive vectors $\overline{\boldsymbol x} = e^{\boldsymbol x}$ and $\overline{\boldsymbol y} = e^{\boldsymbol y}$ we have
$$
|\log(g_i(\overline{\boldsymbol x}))-\log(g_i(\overline{\boldsymbol y}))| \leq \left\| \frac{\nabla g_i(e^{\boldsymbol \xi})e^{\boldsymbol \xi}}{g_i(e^{\boldsymbol \xi})}\right\|_1 \!\|\log(\overline{\boldsymbol x})-\log(\overline{\boldsymbol y})\|_\infty \!= \left\| \frac{\nabla g_i(e^{\boldsymbol \xi})e^{\boldsymbol \xi}}{g_i(e^{\boldsymbol \xi})}\right\|_1 \!d_T(\overline{\boldsymbol x}, \overline{\boldsymbol y}).
$$
Together with Lemma \ref{lem:lip_constant} we have $d_T(G_\alpha(\boldsymbol x),G_\alpha(\boldsymbol y))\leq C d_T(\boldsymbol x,\boldsymbol y)$ with $C=|1-\alpha|/(p-1)<1$. Thus $G_\alpha$ is a contraction and $\boldsymbol x_k\to \boldsymbol x^\star\in \mathcal S_p^+$ as $k\to \infty$. Finally, Lemmas \ref{lem:the_solution_is_positive} and \ref{lem:eigenvalue_problem} imply that $\boldsymbol x^\star$ is entry-wise positive and solves \eqref{eq:maximization_problem}, concluding the proof.
\end{proof}
\subsection{Algorithm}\label{subsec:alg}
Theorem~\ref{thm:convergence} leads naturally to the following algorithm.
\SetKwBlock{Repeat}{For $k=0,1,2,3,\dots$ repeat}{{until $\|\boldsymbol x_{k}-\boldsymbol x_{k+1}\|/\|\boldsymbol x_{k+1}\|< \mathrm{tolerance}$}}
\begin{algorithm}[H]
\caption{Nonlinear spectral method for core--periphery detection}\label{alg:1}
\DontPrintSemicolon
\KwIn{Adjacency matrix $A$, initial guess $\boldsymbol x_0 >0$\\
\hspace{3.6em}Fix $\alpha \gg 1$, $p>\alpha$, $q=p/(p-1)$\\
\hspace{3.6em}Let $F_\alpha(\boldsymbol x)_i = 2\, \sum_{j=1}^na_{ij}\, |x_i|^{\alpha-2}x_i(|x_i|^\alpha+|x_j|^\alpha)^{1/\alpha -1}$, for $i=1, \dots, n$}
\Repeat{
$\boldsymbol y_{k+1} = F_\alpha(\boldsymbol x_k)$ \;
$\boldsymbol x_{k+1} = \|\boldsymbol y_{k+1}\|_q^{1-q}|\boldsymbol y_{k+1}|^{q-2}\boldsymbol y_{k+1}$\;
}
$\boldsymbol c = \boldsymbol x_{k+1}/\max(\boldsymbol x_{k+1})$\;
Reorder the network nodes according to the magnitude of the entries of $\boldsymbol c$\;
\KwOut{Core--periphery score $\boldsymbol c$ and approximate maximizer $\boldsymbol x_{k+1}$ of \eqref{eq:maximization_problem}.}
\end{algorithm}
Recall that, since $\boldsymbol x_0>0$, each element of the sequence $\boldsymbol x_k$ generated by the algorithm is a positive vector, due to Theorem \ref{thm:convergence}. Each iteration requires the computation of a vector norm, at step 3, and the computation of the action of the nonlinear map $F_\alpha$ on a nonnegative vector $\boldsymbol x$, at step 2. Thus, if $m\geq n$ is the number of edges in the network (or, equivalently, half the number of nonzero entries of $A$), the computational cost per iteration of Algorithm~\ref{alg:1} is $O(m)+O(n)$. For large-scale, sparse, real-world networks $m$ is typically proportional to $n$ or to $n\log n$. \begin{rev}In this setting the method is scalable to high dimensions, as confirmed by Figure \ref{fig:NSM_timing}. \end{rev}
\begin{figure}[t]
\centering
\includegraphics[width=.9\textwidth,clip,trim=0 0 0 0]{NSM_timing}
\caption{\blue{All plots show mean values over 10 runs. Left and center: Time required by Alg.\ 1 to converge to a tolerance of $10^{-8}$, with $\alpha=p/2=10$, for random Erd\H{o}s--R\'enyi graphs. Left: $n$ nodes and $m=O(n\log n)$ edges, with $n\in[10^2,10^6]$; Center: $n=1000$ nodes and $m \in [n\log n, n^2]$ edges. Right: Number of iterations required by Alg.\ 1 when the ratio $(\alpha-1)/(p-1)$ varies.} }\label{fig:NSM_timing}
\end{figure}
Further comments on the algorithm above are in order. First, recall that the convergence is independent of the starting point, $\boldsymbol x_0$, provided that $\boldsymbol x_0$ is entry-wise positive. In practice, we use a uniform vector. Concerning the choice of $\alpha$, recall that we want $\alpha$ large enough to give a good approximation to the original kernel $\max\{|x|,|y|\}$. As quantified in \eqref{eq:alpha_approx}, the approximation error is bounded by a factor $2^{1/\alpha}$. Thus, in practice, moderate values of the parameter are sufficient. In order to avoid numerical issues, in the experiments presented in Section \ref{sec:experiments} we use $\alpha = 10$.
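For concreteness, the following MATLAB function is a minimal sketch of Algorithm~\ref{alg:1} (function and variable names are ours; it is not the reference implementation accompanying the paper). It assumes a sparse, symmetric, nonnegative adjacency matrix of a connected graph and applies $F_\alpha$ edge by edge, so that each iteration costs $O(m)+O(n)$ operations, in line with the discussion above.
\begin{verbatim}
function [c, x] = nsm_core_periphery(A, alpha, p, tol, maxit)
% Sketch of Algorithm 1 (nonlinear spectral method).
%   A     - sparse symmetric nonnegative adjacency matrix (connected graph)
%   alpha - kernel parameter (e.g. 10);   p - norm parameter, p > alpha
if nargin < 5, maxit = 1000; end
if nargin < 4, tol   = 1e-8; end
q = p/(p-1);                            % Hoelder conjugate of p
n = size(A,1);
[ii, jj, aa] = find(A);                 % edge list, reused at every iteration
x = ones(n,1) / n^(1/p);                % uniform starting point, ||x||_p = 1
for k = 1:maxit
    % y = F_alpha(x), assembled over the nonzeros of A only
    w = aa .* (x(ii).^alpha + x(jj).^alpha).^(1/alpha - 1);
    y = 2 * x.^(alpha-1) .* accumarray(ii, w, [n 1]);
    xnew = y.^(q-1) / norm(y, q)^(q-1); % x_{k+1} = |y|^{q-2} y / ||y||_q^{q-1}
    if norm(x - xnew, p) < tol * norm(xnew, p), x = xnew; break, end
    x = xnew;
end
c = x / max(x);                         % core--periphery score in (0,1]
end
\end{verbatim}
Sorting the entries of $\boldsymbol c$ in descending order then gives the node ordering used in the experiments below.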
As for the choice of $p$, from the proof of Theorem \ref{thm:convergence} it follows that the larger $p>\alpha$, the smaller the contraction ratio $C= (\alpha-1)/(p-1)$ and thus the faster the convergence of Algorithm~\ref{alg:1}. This is made more precise in Corollary~\ref{cor:conv_bound} below, where we explicitly bound $\|\boldsymbol x_k -\boldsymbol x_{k+1}\|$ and $\|\boldsymbol x_k - \boldsymbol x^\star\|$ in terms of $C$. Finally, the choice of the norm in the stopping criterion is not critical. We typically use the $p$-norm because the sequence $\boldsymbol x_k$ is designed so that $\|\boldsymbol x_k\|_p=1$ for any $k$. Hence in the stopping criterion we require one norm computation fewer at each step. However, this reduction in cost is likely to be negligible and thus we expect that other distance functions would work equally well. Moreover, we point out that, due to Corollary~\ref{cor:conv_bound}, a computationally cheaper stopping criterion is available from the contraction ratio $(\alpha-1)/(p-1)$ and its integer powers. However, in our experience, this upper bound on the iteration error can be far from sharp.
\begin{corollary}\label{cor:conv_bound}
For $\boldsymbol x_0>0$, let $\boldsymbol x_k$ be the sequence defined by Algorithm~\ref{alg:1} and let $\gamma = \|\log(\boldsymbol x_1)-\log(\boldsymbol x_0)\|_\infty$. For any $k=0,1,2,\dots$ we have
$$
\|\boldsymbol x_{k+1} -\boldsymbol x_k\|_\infty\leq \gamma\, \left(\frac{\alpha-1}{p-1}\right)^k \quad \text{and}\quad \|\boldsymbol x_{k} -\boldsymbol x^\star\|_\infty\leq \gamma\, \left(\frac{p-1}{p-\alpha}\right)\left(\frac{\alpha-1}{p-1}\right)^k,
$$
where $\boldsymbol x^\star =\lim_k\boldsymbol x_k$ is the unique positive solution of \eqref{eq:maximization_problem}.
\end{corollary}
\begin{proof}
From the Mean Value Theorem we have $|e^a -e^b|\leq |a-b|\max\{e^a,e^b\}$. Thus, for any $\boldsymbol x,\boldsymbol y >0$ with $\|\boldsymbol x\|_p=\|\boldsymbol y\|_p=1$, we have
$$\|\log(\boldsymbol x) -\log(\boldsymbol y)\|_\infty \geq \|\boldsymbol x - \boldsymbol y\|_\infty \Big(\max_i (\max\{x_i,y_i\})\Big)^{-1} \geq \|\boldsymbol x - \boldsymbol y\|_\infty,
$$
as both $x_i$ and $y_i$ are not larger than one, for any $i=1,\dots, n$. With the notation of the proof of Theorem \ref{thm:convergence}, this implies that $d_T(\boldsymbol x, \boldsymbol y)\geq \|\boldsymbol x - \boldsymbol y\|_\infty$ for any $\boldsymbol x, \boldsymbol y\in \mathcal S_p^+$. Moreover, from the proof of that theorem we have that $d_T(G_\alpha(\boldsymbol x), G_\alpha(\boldsymbol y))\leq C d_T(\boldsymbol x, \boldsymbol y)$, with $C=(\alpha-1)/(p-1)$.
Therefore, as $\boldsymbol x_k\in \mathcal S_p^+$ for any $k$, we have
\begin{align*}
\|\boldsymbol x_{k+1} -\boldsymbol x_k\|_\infty&\leq d_T(\boldsymbol x_{k+1},\boldsymbol x_k) = d_T(G_\alpha(\boldsymbol x_{k}),G_\alpha(\boldsymbol x_{k-1}))\\
&\leq C d_T(\boldsymbol x_k, \boldsymbol x_{k-1}) \leq C^k d_T(\boldsymbol x_1, \boldsymbol x_0)\, .
\end{align*}
This proves the first inequality. As for the second one, first note that it is enough to show that $d_T(\boldsymbol x_k, \boldsymbol x^\star)\leq (1-C)^{-1}d_T(\boldsymbol x_{k+1},\boldsymbol x_k)$, as we can then argue as before to obtain $\|\boldsymbol x^\star-\boldsymbol x_k\|_\infty\leq (1-C)^{-1}C^kd_T(\boldsymbol x_1,\boldsymbol x_0)$, which is the right-most inequality in the statement. Now, observe that adding $d_T(\boldsymbol x_{i+1}, \boldsymbol x_{i})$ to both sides of the inequality $d_T(\boldsymbol x_{i+2}, \boldsymbol x_{i+1})\leq C d_T(\boldsymbol x_{i+1}, \boldsymbol x_{i})$ and rearranging terms leads to
$$
d_T(\boldsymbol x_{i+1}, \boldsymbol x_{i})\leq \frac 1 {1-C}\Big(d_T(\boldsymbol x_{i+1}, \boldsymbol x_{i})- d_T(\boldsymbol x_{i+2}, \boldsymbol x_{i+1})\Big)\, ,
$$
for any $i=0,1,2,\dots$. Therefore, using the triangle inequality for $d_T$, for any $k, h$ with $h>k$, we obtain
$$
d_T(\boldsymbol x_{h+1}, \boldsymbol x_{k})\leq \sum_{i=k}^h d_T(\boldsymbol x_{i+1}, \boldsymbol x_{i})\leq \frac{1}{1-C}\Big(d_T(\boldsymbol x_{k+1}, \boldsymbol x_{k})- d_T(\boldsymbol x_{h+2}, \boldsymbol x_{h+1})\Big)\, .
$$
Finally, letting $h$ grow to infinity in the previous inequality gives the desired bound and concludes the proof.
\end{proof}
\section{Experiments}\label{sec:experiments}
In this section we describe results obtained when the logistic core--periphery score computed via Algorithm \ref{alg:1} is used to rank nodes in some example networks. All experiments were performed using MATLAB Version 9.1.0.441655 (R2016b) on a laptop running Ubuntu 16.04 LTS with a 3.2 GHz Intel Core i5 processor and 8 GB of RAM. The experiments can be reproduced using the code available at \verb+https://github.com/ftudisco/nonlinear-core-periphery+. \begin{rev} We compare results with those obtained from other core-quality function optimization approaches: the degree vector, the Perron eigenvector of the adjacency matrix (eigenvector centrality) and the simulated--annealing algorithm proposed in \cite{rombach2014core}. Furthermore, we compare with the $k$-core decomposition coreness score \cite{kitsak2010identification}, computed as the limit of the $H$-index operator sequence discussed in \cite{lu2016h}.
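A compact way to compute the latter baseline is the fixed-point iteration below (a MATLAB sketch of ours, assuming an unweighted, undirected graph; the operator being iterated is the $H$-index operator of \cite{lu2016h}, whose limit is the $k$-core coreness).
\begin{verbatim}
function h = coreness_hindex(A)
% Coreness via repeated application of the H-index operator.
% Assumes an unweighted, undirected adjacency matrix A (0/1 entries).
n = size(A,1);
h = full(sum(A > 0, 2));                      % h^(0) = degree vector
while true
    hnew = zeros(n,1);
    for i = 1:n
        nbrs = find(A(:,i));                  % neighbours of node i
        vals = sort(h(nbrs), 'descend');      % their current values
        hnew(i) = sum(vals >= (1:numel(vals))');  % H-index of that list
    end
    if isequal(hnew, h), break, end           % fixed point reached (= coreness)
    h = hnew;
end
end
\end{verbatim}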
The use of the degree vector $\boldsymbol d$ and the eigenvector centrality $\boldsymbol v$ may be regarded as linear counterparts of our method. If $\alpha = 1$, then for any $\boldsymbol x \geq 0$ the functional $f_\alpha(\boldsymbol x) = \sum_{ij}a_{ij}\mu_\alpha(x_i,x_j)$ is linear and has the form $f_1(\boldsymbol x) = \sum_{ij}a_{ij}(x_i+x_j) = 2\,\boldsymbol x^T \boldsymbol d$. Thus the maximizer over $\mathcal S_p^+$ is an entrywise power of the degree vector and, in particular, induces the same node ranking as $\boldsymbol d$. The eigenvector centrality $A\boldsymbol v = \rho(A)\boldsymbol v$, $\|\boldsymbol v\|_2=1$, instead, corresponds loosely to the case where $\alpha$ goes to $0$. To obtain $\boldsymbol v$, however, we need to slightly modify the approximate kernel $\mu_\alpha$ from \eqref{eq:mualpha} to
$$
\tilde \mu_\alpha(x,y) = \left(\frac{|x|^\alpha + |y|^\alpha}{2}\right)^{1/\alpha} \, .
$$
This is because $\mu_\alpha$ diverges when $\alpha\to 0$, whereas $\lim_{\alpha\to 0}\tilde \mu_\alpha(x,y) = \sqrt{|xy|}$. On the other hand, notice that both $\mu_\alpha$ and $\tilde \mu_\alpha$ coincide with the maximum operator when $\alpha\to \infty$ and that, for any fixed $\alpha>0$, a vector that maximizes $\sum_{ij}a_{ij}\tilde \mu_\alpha(x_i,x_j)$ maximizes $f_\alpha$ as well. Replacing $\mu_\alpha$ with $\tilde \mu_\alpha$ we have $f_0(\boldsymbol x) = \sqrt{\boldsymbol x}^T A \sqrt{\boldsymbol x}$. Thus, if we choose $p=1$, the maximum is attained when $\boldsymbol x = \boldsymbol v^2$, the square of the entry-wise positive eigenvector centrality. Note that with this choice of $p$, the solution $\boldsymbol v$ is constrained on the Euclidean sphere $\|\boldsymbol v\|_2=1$. Notice moreover that this is consistent with Lemma \ref{lem:eigenvalue_problem}: in the limit $\alpha\to 0$ with $p=1$, the nonlinear map $F_\alpha$ essentially reduces to the action of the matrix $A$ (in the variable $\sqrt{\boldsymbol x}$) and Algorithm 1 reduces to the standard linear power method. \end{rev}
As for the simulated--annealing method, recall that it aims to maximize the core-quality function $\sum_{ij}a_{ij}x_ix_j$ over $C_{\alpha,\beta}$, defined as in \eqref{eq:C_alpha_beta}. To this end the method requires a uniform discretization of the square $[0,1]^2$. In all our experiments below we choose the discretization $\{1/h,2/h,\dots,1\}^2$ with $h=50$. Algorithm \ref{alg:1} requires the selection of two positive scalars, $\alpha$ and $p>\alpha$, and the norm in the stopping criterion. In all our experiments we set $\alpha = 10$, $p = 2 \alpha$, and terminate when
$$
\frac{\|\boldsymbol x_k-\boldsymbol x_{k+1}\|_p }{\|\boldsymbol x_{k+1}\|_p}=\|\boldsymbol x_k-\boldsymbol x_{k+1}\|_p < 10^{-8}.
$$
\begin{rev}
For the sake of brevity, we refer to the nonlinear spectral method, simulated--annealing method, degree--based method, eigenvector centrality method, and the $H$-index $k$-core decomposition method as NSM, Sim-Ann, Degree, Eig, and Coreness, respectively.
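To make the two linear baselines concrete, here is a minimal MATLAB sketch (ours; not necessarily how they are computed in the reference code). The shift by the identity is only a device to guarantee convergence of the power iteration and does not change the Perron eigenvector.
\begin{verbatim}
% Sketch of the two linear baselines; A is the (sparse) adjacency matrix.
n = size(A,1);
deg_score = full(sum(A,2));                 % Degree: ranking induced by f_1
deg_score = deg_score / max(deg_score);

B = A + speye(n);                           % same eigenvectors as A; the shift
v = ones(n,1)/sqrt(n);                      % avoids oscillation on bipartite graphs
for k = 1:5000
    vn = B*v;  vn = vn/norm(vn);
    if norm(vn - v) < 1e-12, v = vn; break, end
    v = vn;
end
eig_score = v / max(v);                     % Eig: Perron eigenvector of A
\end{verbatim}
Since ranking nodes by $\boldsymbol v$ or by $\boldsymbol v^2$ gives the same ordering, this is also the score associated with the maximizer discussed above.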
We point out that, in order to reduce the computing time, we implement Sim-Ann in parallel on four cores, whereas all other methods are run on a single computing core. \end{rev}
\subsection{Synthetic Networks}
In practice, of course, it is typically not known ahead of time whether a given network contains any inherent core--periphery structure. However, in order to conduct a set of controlled tests, we begin with two classes of random networks that have a built-in core--periphery structure. The first takes the form of a stochastic block model, a widespread benchmark where community structure is imposed in block form. We then consider the new logistic core--periphery random model discussed in Section~\ref{sec:logistic_random_model}. \begin{rev} For the sake of brevity, we only compare NSM, Sim-Ann and Degree in these synthetic tests, noting that Eig and Coreness were comparable to or less effective than Sim-Ann. \end{rev}
\begin{figure}[t]
\centering
\includegraphics[width=.3\textwidth]{sbm_setting_porter}
\includegraphics[width=.3\textwidth]{sbm_setting_2}
\includegraphics[width=.315\textwidth]{sbm_timing}
\caption{(Color online.) Experiments on stochastic block model graphs. Left and center: Fraction of nodes correctly assigned to the core--periphery ground truth versus model parameter $k$, for three methods: our Nonlinear Spectral Method (red circles), the Degree vector (black plus symbols) and the Simulated Annealing technique of Rombach et al. \cite{rombach2017core} (blue crosses). Each value is the mean over 20 random instances. Each network has 100 nodes and $k$ ranges in $\{1,1.05,1.1,\dots,2\}$. Left: Nodes within the periphery and between core and periphery are connected with probability $k/4$, nodes within the core are connected with probability $k^2/4$. Center: Nodes within the core and between core and periphery are connected with probability $k^2/4$, nodes within the periphery are connected with probability $k/4$. Right: Median execution time of the three methods over 20 instances, when the number of nodes varies within $\{10,20,\dots,100\}$.}\label{fig:SBM}
\end{figure}
\subsubsection{Stochastic Block Model}\label{sec:SBM}
We consider synthetic networks that have a planted core--periphery structure, arising from a stochastic block model. For the sake of consistency with previous works, we denote this ensemble of unweighted networks by $\mathrm{CP}(n,\delta,p,k)$. Each network drawn has $\delta n$ core nodes and $(1-\delta)n$ periphery nodes, with $\delta \in [0,1]$. We consider two parameter settings. The first reproduces the case analyzed in \cite[Sec.\ 5.1]{rombach2017core}: for $p\in[0,1]$ and $k\in [1,1/\sqrt{p}]$, each edge between nodes $i$ and $j$ is assigned independently at random with probability $kp$ if either $i$ or $j$ (or both) is in the periphery, and with probability $k^2p$ if both $i$ and $j$ are in the core. In the second setting, edges between nodes $i$ and $j$ have probability $kp$ only if both $i$ and $j$ are in the periphery and have probability $k^2p$ otherwise. In our experiment we fix $n=100$, $\delta=1/2$, $p=1/4$ and, for each $k\in\{1,1.05,1.1,\dots,2\}$, we compute the core--periphery assignment for a network drawn from $\mathrm{CP}(n,\delta,p,k)$. Figure~\ref{fig:SBM} shows the percentage of nodes correctly assigned to the ground-truth core--periphery structure in the two settings described above (left and central figures) by NSM (red circles), Sim-Ann (blue crosses) and Degree (black plus symbols).
Each plot shows the mean over 20 random instances of $\mathrm{CP}(n,\delta,p,k)$, for each fixed value of $k$. The right plot in the figure shows instead the median execution time of the three methods over 20 runs. We see that all three approaches give similar results in the first parameter regime, whereas Degree and NSM outperform Sim-Ann in the second. Using the degree gives the cheapest method, and Sim-Ann is around two orders of magnitude more expensive than NSM.
\begin{figure}[t]
\centering
\includegraphics[width=.9\textwidth]{boxplot_groups_methods_deg}
\caption{(Color online.) Boxplots of ratios of the likelihoods $\nu(\boldsymbol \pi)$ over 20 trials for different sizes $n$ of the random network, ranging within $\{30,60,90,120\}$. Three permutation vectors $\boldsymbol \pi_1$, $\boldsymbol \pi_2$, $\boldsymbol \pi_3$ are obtained by sorting the entries of the score vectors obtained with: (1) the proposed Algorithm \ref{alg:1} (``NSM'' in the legend), (2) the degree vector of the graph (``Deg'' in the legend) and (3) the simulated--annealing method of \cite{rombach2014core} (``Sim-Ann'' in the legend), respectively. The three boxplot groups, with different colors, show (from left to right) the ratios $\nu(\boldsymbol \pi_1)/\nu(\boldsymbol \pi_2)$ (in red), $\nu(\boldsymbol \pi_1)/\nu(\boldsymbol \pi_3)$ (in black) and $\nu(\boldsymbol \pi_3)/\nu(\boldsymbol \pi_2)$ (in blue).}\label{fig:boxplot}
\end{figure}
\subsubsection{Logistic Core--periphery Random Model}
We now consider the unweighted logistic core--periphery random model described in subsection~\ref{sec:logistic_random_model}. More precisely, given $n$, $s$ and $t$, we sample from the family $\mathrm{LCP}(n,s,t)$ of random graphs with $n$ nodes such that an edge between any pair of nodes $i$ and $j$ is assigned independently at random with probability
$$
\textstyle{\mathbb P(i\sim j) = \sigma_{s,t}\left(\frac 1 n \max\{n-i,n-j\}\right), \quad \text{where}\quad \sigma_{s,t}(x)=\frac{1}{1+e^{-s(x-t)}}\, .}
$$
Unlike the stochastic block model discussed in Section \ref{sec:SBM}, if $s$ is not too large the logistic core--periphery model does not give rise to a binary core--periphery structure. Instead, it uses a sliding scale for the nodes where, with the labeling above, node $1$ is at the center of the core and node $n$ is the most peripheral. We therefore look at the ability of the algorithms to recover a suitable ordering. In our experiment we fix $s=7$, $t=2/3$ and let the dimension $n$ vary within $\{30,60,90,120\}$. For each $n$ we draw an instance from the ensemble $\mathrm{LCP}(n,s,t)$ and compute the core--periphery score from each of the three methods. We sort each score vector into descending order and consider the associated permutations $\boldsymbol \pi_1$, $\boldsymbol \pi_2$ and $\boldsymbol \pi_3$ for NSM, Degree and Sim-Ann, respectively. We then evaluate the likelihoods $\nu(\boldsymbol \pi_i)$, as defined in \eqref{eq:LCP_probability}.
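To illustrate how such likelihood evaluations can be carried out, the following MATLAB sketch (ours; function and variable names are hypothetical) draws one instance of $\mathrm{LCP}(n,s,t)$ and returns $\log\nu$ for a candidate ranking vector, where larger entries denote more central nodes.
\begin{verbatim}
function [A, loglik] = lcp_sample_and_loglik(n, s, t, perm)
% Sketch: sample from LCP(n,s,t) and evaluate log nu(perm), where perm is a
% permutation of 1:n with larger values marking more central (core) nodes.
sigma = @(z) 1 ./ (1 + exp(-s*(z - t)));       % logistic sigmoid sigma_{s,t}

P = sigma(max(n - (1:n)', n - (1:n)) / n);     % P(i~j); node 1 is the most core
upper = triu(rand(n) < P, 1);                  % independent Bernoulli draws, i<j
A = sparse(upper + upper');                    % symmetric, no self-loops

Phi  = sigma(max(perm(:), perm(:)') / n);      % phi(pi_i, pi_j) for all pairs
mask = triu(true(n), 1);                       % count each unordered pair once
E    = logical(full(A));
loglik = sum(log(Phi(mask & E))) + sum(log(1 - Phi(mask & ~E)));
end
\end{verbatim}
For example, the planted ordering of the model corresponds to \texttt{perm = n:-1:1}; the ratios shown in Figure~\ref{fig:boxplot} are then exponentials of differences of these log-likelihoods.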
In Figure~\ref{fig:boxplot} we show medians and quartiles of the three likelihood ratios $\nu(\boldsymbol \pi_1)/\nu(\boldsymbol \pi_2)$ (in red), $\nu(\boldsymbol \pi_1)/\nu(\boldsymbol \pi_3)$ (in black) and $\nu(\boldsymbol \pi_3)/\nu(\boldsymbol \pi_2)$ (in blue). We see that in this test NSM outperforms Degree, which itself outperforms Sim-Ann.
\subsection{Real-world Datasets}
\begin{rev}
In this subsection we show results for several real-world networks taken from different fields: social interaction, academic collaboration, transportation, internet structure, neural connections, and protein-protein interaction. These networks are freely available online; below we describe their key features and give references for further details.
\end{rev}
\textbf{Cardiff tweets.} An unweighted network of reciprocated Twitter mentions among users whose biographical information indicates that they are associated with the city of Cardiff (UK). The data refers to the period October 1--28, 2014. There is a single connected component of $2685$ nodes and $4444$ edges. The mean degree of the network is $3.31$, with a variance of $21.24$ and diameter $29$. This dataset is part of a larger collection of geolocated reciprocated Twitter mentions within UK cities in \cite{grindrod2016comparison}.
\textbf{Network scientists.} A weighted co--authorship network among scholars who study network science. This network, compiled in 2006, involves $1589$ authors. We use the largest connected component, which has $379$ nodes and coincides with the network considered in \cite{newman2006finding}. This component contains $914$ edges. Its mean degree is $4.82$, with variance $15.46$ and diameter $17$.
\textbf{Erd\H{o}s.} An instance of the unweighted Erd\H{o}s collaboration network with $472$ nodes representing authors. We use the largest connected component, which contains $429$ nodes and $1312$ edges. Its mean degree is $6.12$ with variance $45.98$ and diameter $11$. This dataset is one of the seven Erd\H{o}s collaboration networks made available by Batagelj and Mrvar in the Pajek datasets collection \cite{pajek} and therein referred to as ``Erdos971''.
\textbf{Yeast.} An unweighted protein-protein interaction network described and analyzed in \cite{bu2003topological}. As for the Erd\H{o}s dataset, this network is available through the datasets collection \cite{pajek}. The whole network consists of $2361$ nodes. We use the largest connected component, consisting of $2224$ nodes and $6829$ edges. Its mean degree is $6.14$ with variance $65.76$ and diameter $11$.
\begin{rev}
\textbf{Internet 2006.} A symmetrized snapshot of the structure of the Internet at the level of autonomous systems, reconstructed from BGP tables posted by the University of Oregon Route Views Project. This snapshot was created by Mark Newman from data for July 22, 2006 and is available via \cite{Internet} and \cite{davis2011university}. The network is connected, with $22963$ nodes and $48436$ edges. Its mean degree is $4.2186$ with variance $108.5$ and diameter $11$.
\textbf{Jazz.} A network of jazz bands that performed between 1912 and 1940, obtained from ``The Red Hot Jazz Archive'' digital database \cite{gleiser2003community}. It consists of $198$ nodes, representing jazz bands, and $2742$ edges, which connect bands that shared at least one musician. Mean degree is $27.7$, with variance $304.6$ and diameter $12$.
\textbf{Drugs.} Social network of injecting drug users (IDUs) that have shared a needle in a six months time-window. This is a connected network made of $616$ nodes and $2012$ edges. The average degree is $6.5$ with variance $59.17$ and diameter $13$. See e.g.\ \cite{neaigus1998network,tudisco2017community}. \textbf{C. elegans.} This is a neural network of neurons and synapses in Caenorhabditis elegans, a type of worm. It contains $277$ nodes and $2105$ edges. Mean degree is $7.6$ with variance $48$ and diameter $6$. The network was created in \cite{Choe-2004-connectivity}. The data we used is collected from \cite{AustinData}. \end{rev} \textbf{London trains.} A transportation network representing connections between train stations of the city of London. The undirected weighted network that we consider here is the aggregated version of the original multi-layer network. It consists of a single connected component with $369$ nodes, each corresponding to a train station. Direct connections between stations form a set of $430$ edges with nonzero weights. Each such weight takes an integer value of $1$, $2$, or $3$ according to the number of different types of connection, from the three possibilities of underground, overground and Docklands Light Railway (DLR). The average degree is $2.33$ with variance $1.04$ and diameter $39$. This network is studied in \cite{DDSBA14} and the data we used was collected from \cite{DataDD}. \begin{figure} \centering \begin{tikzpicture} \node at (0, 2) {\sf Degree}; \node at (4, 2) {\sf Sim-Ann}; \node at (8, 2) {\sf NSM}; \node[rotate=90] at (-2.5,0) {\sf Cardiff Tweets}; \node at (0,0) {\includegraphics[width=.25\textwidth,trim=0 0 0 .5cm,clip]{SPY_Cardiff_Deg}}; \node at (4,0) {\includegraphics[width=.25\textwidth,trim=1.3cm 1.2cm 0 .5cm,clip]{SPY_Cardiff_SimAnn}}; \node at (8,0) {\includegraphics[width=.25\textwidth,trim=0 0 0 .5cm,clip]{SPY_Cardiff_NSM}}; \node[rotate=90] at (-2.5,-3.5) {\sf Net.\ Scientists}; \node at (0,-3.5) {\includegraphics[width=.25\textwidth,trim=0 0 0 .5cm,clip]{SPY_NETSCIENCE_degree}}; \node at (4,-3.5) {\includegraphics[width=.25\textwidth,trim=0 0 0 .5cm,clip]{SPY_NETSCIENCE_sim-ann-h50}}; \node at (8,-3.5) {\includegraphics[width=.25\textwidth,trim=0 0 0 .5cm,clip]{SPY_NETSCIENCE_nonlinear}}; \node[rotate=90] at (-2.5,-7) {\sf Erd\H{o}s}; \node at (0,-7) {\includegraphics[width=.25\textwidth,trim=0 0 0 .5cm,clip]{SPY_Erdos971_degree}}; \node at (4,-7) {\includegraphics[width=.25\textwidth,trim=0 0 0 .5cm,clip]{SPY_Erdos971_sim-ann-h50}}; \node at (8,-7) {\includegraphics[width=.25\textwidth,trim=0 0 0 .5cm,clip]{SPY_Erdos971_nonlinear}}; \node[rotate=90] at (-2.5,-10.5) {\sf Yeast PPI}; \node at (0,-10.5) {\includegraphics[width=.25\textwidth,trim=0 0 0 .5cm,clip]{SPY_Yeast_degree}}; \node at (4,-10.5) {\includegraphics[width=.25\textwidth,trim=0 0 0 .5cm,clip]{SPY_Yeast_sim-ann-h50}}; \node at (8,-10.5) {\includegraphics[width=.25\textwidth,trim=0 0 0 .5cm,clip]{SPY_Yeast_nonlinear}}; \node[rotate=90] at (-2.5,-14.3) {\sf Internet 2006}; \node at (0,-14.3) {\includegraphics[width=.25\textwidth,trim=3.05cm 0 2.6cm .6cm,clip]{SPY_Internet2006_deg}}; \node at (4,-14.3) {\includegraphics[width=.25\textwidth,trim=3.5cm 0 3cm .6cm,clip]{SPY_Internet2006_simann}}; \node at (8,-14.3) {\includegraphics[width=.25\textwidth,trim=3.5cm 0 3cm .6cm,clip]{SPY_Internet2006_nsm}}; \end{tikzpicture} \caption{Sparsity plots of adjacency matrices for various real-world networks. Each row of three plots corresponds to a different dataset. 
Each column corresponds to a different ordering of the network nodes. Left column: nodes ordered by decreasing degree. Middle column: nodes ordered by the aggregate core score of \cite{rombach2017core}. Right column: nodes ordered by the nonlinear spectral method in Algorithm~\ref{alg:1}.}\label{fig:spy_plots}
\end{figure}

\subsubsection*{Analysis}
\begin{rev}
In Figure~\ref{fig:spy_plots} we use adjacency matrix sparsity plots to show how the three algorithms Degree, Sim-Ann and NSM compare on five networks of different sizes.
\end{rev}
In each case, the nodes are reordered in descending magnitude of core--periphery score. We see that the three methods give very different visual representations of the data, with NSM generally finding a more convincing core--periphery structure.
\begin{rev}
On the Cardiff, Erd\H{o}s and Yeast networks, NSM gives a well-defined ``anti-diagonal contour'' that essentially separates the reordered matrix into two regions. This type of behavior has been observed for other spectral reordering methods \cite{TH10}, but does not seem to be fully understood.
\end{rev}
We note that the reciprocated Twitter mentions for the city of Cardiff show a strong core--periphery structure in all three orderings. Very similar results were observed for all ten city-based networks of reciprocated Twitter mentions collected in \cite{grindrod2016comparison}, which, however, we refrain from showing here for the sake of brevity.

\begin{figure}[!t]
\centering
\includegraphics[width=.45\textwidth,clip,trim=0 0 1.5cm 0]{NetworkScientists}
\includegraphics[width=.45\textwidth,clip,trim=0 0 1.5cm 0]{CardiffTweets}\\
\includegraphics[width=.45\textwidth,clip,trim=0 0 1.5cm 0]{YeastPPI}
\includegraphics[width=.45\textwidth,clip,trim=0 0 1.5cm 0]{Internet2006}\\
\includegraphics[width=.45\textwidth,clip,trim=0 0 1.5cm 0]{LondonTrains}
\includegraphics[width=.45\textwidth,clip,trim=0 0 1.5cm 0]{Erdos}
\caption{(Color online.) Core--periphery profile $\gamma(\boldsymbol{x})$ of six real-world networks, where $\boldsymbol{x}$ is the core--score vector obtained with five different methods. }\label{fig:cp_profile}
\end{figure}

\begin{rev}
To quantify the quality of the core--periphery assignments and to compare different methods on all the datasets, we perform two further tests. In Figure \ref{fig:cp_profile} we show the \emph{core--periphery profile} of six networks obtained with different methods. This analysis is inspired by the core--periphery profiling approach proposed in \cite{della2013profiling} and consists of evaluating the core--periphery profile function $\gamma(\boldsymbol{x})$ associated with a given core--periphery quality vector $\boldsymbol{x}>0$, defined as
\begin{equation}
\gamma(\boldsymbol{x})_k = \frac{\sum_{i,j=1}^k A_{\pi_i, \pi_j}}{\sum_{i=1}^k\sum_{j=1}^n A_{\pi_i,j}},
\label{eq:cpqv}
\end{equation}
where $\boldsymbol{\pi}$ is a permutation such that $x_{\pi_1}\leq \cdots \leq x_{\pi_n}$. In words, for each $k$, if we regard $\pi_1, \pi_2, \ldots, \pi_k$ as peripheral nodes and $\pi_{k+1}, \pi_{k+2}, \ldots, \pi_n$ as core nodes, then $\gamma(\boldsymbol{x})_k$ in (\ref{eq:cpqv}) measures the ratio of periphery-periphery links to periphery-all links. Hence, $\boldsymbol{x}$ reveals a strong core--periphery structure if $\gamma(\boldsymbol{x})_k$ remains small for large $k$.
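The profile (\ref{eq:cpqv}) is straightforward to evaluate in practice. The following minimal sketch (Python/NumPy; function and variable names are our own choice, and it is not taken from the authors' released code) computes $\gamma(\boldsymbol{x})_k$ for all $k$ from an adjacency matrix and a score vector.
\begin{verbatim}
import numpy as np

def cp_profile(A, x):
    # Core-periphery profile gamma(x)_k of eq. (cpqv), for a symmetric
    # adjacency matrix A and a positive score vector x.
    pi = np.argsort(x)                # x[pi[0]] <= ... <= x[pi[-1]]
    Ap = A[np.ix_(pi, pi)]            # adjacency reordered by increasing score
    n = len(x)
    gamma = np.zeros(n)
    for k in range(1, n + 1):
        peri_peri = Ap[:k, :k].sum()  # links within {pi_1, ..., pi_k}
        peri_all = Ap[:k, :].sum()    # links from {pi_1, ..., pi_k} to all nodes
        gamma[k - 1] = peri_peri / peri_all if peri_all > 0 else 0.0
    return gamma
\end{verbatim}
A useful sanity check is that the last entry always equals one, since for $k=n$ the numerator and denominator of (\ref{eq:cpqv}) coincide.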
The quantity $\gamma(\boldsymbol{x})_k$ also has an interesting random walk interpretation. Given $\boldsymbol{x}>0$ let $S_k = \{\pi_1,\dots,\pi_k\}$ and consider the standard random walk on $G$ with transition matrix $T=(t_{ij})$ defined by $t_{ij} = a_{ij}/\sum_k a_{ik}$. As the graph is undirected, the stationary distribution of the chain $\boldsymbol{y}>0$ is the (normalized) degree vector $\boldsymbol{y} = \boldsymbol{d}/\sum_i d_i$. Therefore,
$$\gamma(\boldsymbol{x})_k = \frac{\sum_{i,j\in S_k}y_i t_{ij}}{\sum_{i\in S_k}y_i}\, ,$$
which corresponds to the persistence probability of $S_k$, i.e., the probability that a random walker currently occupying any of the nodes of $S_k$ remains in $S_k$ at the next time step. Note that $\gamma(\boldsymbol{x})_n = 1$ for any $\boldsymbol{x}$, and that $\gamma(\boldsymbol{x})_k$ typically grows with $k$. This further justifies why having small values of $\gamma(\boldsymbol{x})_k$ for large values of $k$ is a good indication of the presence of a core and periphery \cite{della2013profiling}.

Figure \ref{fig:cp_profile} shows that the smallest core--periphery profile $\gamma(\boldsymbol{x})$ is obtained when $\boldsymbol{x}$ is the output of Algorithm~\ref{alg:1}. This confirms the behavior shown in Figure~\ref{fig:spy_plots}: Algorithm~\ref{alg:1} is the most effective at transforming each network into core--periphery form.
\end{rev}

\begin{figure}[t!]
\centering
\includegraphics[width=.9\textwidth,clip,trim=0cm 0cm 0cm 0]{cp_quality}
\caption{(Color online.) Normalized core--periphery quality measure $\tilde f_\infty$ for different methods. Left: value of $\tilde f_\infty(\boldsymbol{x})$ for the core--score vectors $\boldsymbol{x}$ produced by the different methods, normalized so that $\max(\boldsymbol{x}) = 1$ and $\min(\boldsymbol{x}) = 0$. Right: value of $\tilde f_\infty(\boldsymbol{\pi})$, where $\boldsymbol{\pi}$ is the permutation that sorts the entries of $\boldsymbol{x}$ in increasing order.}\label{fig:cp_quality}
\end{figure}

\begin{rev}
Finally, in Figure~\ref{fig:cp_quality} we compare the value of the core--periphery quality function $f_\infty$ on all the datasets and all the methods. To cover networks of different sizes we plot the normalized value
$$ \tilde f_\infty(\boldsymbol{x}) = \frac{f_\infty(\boldsymbol{x})}{(\max_i x_i)\sum_{ij}a_{ij} }\, . $$
Precisely, the figure shows two plots: on the left we evaluate $\tilde f_\infty(\boldsymbol{x})$ on the core--score vectors $\boldsymbol{x}$ obtained by the methods, rescaled so that $\max(\boldsymbol{x}) = 1$ and $\min(\boldsymbol{x}) = 0$, whereas on the right we evaluate $\tilde f_\infty$ on the corresponding permutation vector $\boldsymbol{\pi}$ such that $x_{\pi_1}\leq \dots\leq x_{\pi_n}$. The NSM is designed to optimize $f_\alpha$ (recall that $\alpha = 10$ in our experiments), so the value of $\tilde f_\infty$ is significantly larger than the value obtained with the other methods. We see that NSM continues to give the best results when $\tilde f_\infty$ is evaluated on the associated permutation.
\end{rev}
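For completeness, the sketch below (our own code, not part of the paper) shows how $\tilde f_\infty$ can be evaluated. It assumes that $f_\infty(\boldsymbol{x}) = \sum_{ij} a_{ij}\max(x_i,x_j)$, i.e.\ the $\alpha\to\infty$ limit of the kernel $f_\alpha$ used earlier in the paper, and it interprets ``evaluating $\tilde f_\infty$ on the permutation'' as evaluating it on the rank vector induced by $\boldsymbol{x}$; both identifications should be checked against the definitions given in the earlier sections.
\begin{verbatim}
import numpy as np

def f_inf(A, x):
    # Assumed form of the quality function: sum_ij a_ij * max(x_i, x_j).
    return (A * np.maximum.outer(x, x)).sum()

def f_inf_normalized(A, x):
    # Normalized value  f_inf(x) / ( (max_i x_i) * sum_ij a_ij ).
    return f_inf(A, x) / (x.max() * A.sum())

def f_inf_on_ranks(A, x):
    # Evaluate the normalized measure on the rank vector induced by x
    # (the node with the k-th smallest score receives the value k).
    ranks = np.empty(len(x))
    ranks[np.argsort(x)] = np.arange(1, len(x) + 1)
    return f_inf_normalized(A, ranks)
\end{verbatim}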
\begin{figure}[!t]
\centering
\includegraphics[width=.63\textwidth,clip,trim=1.5cm 3.5cm 1cm 3.5cm]{london_eig.png}\\
\includegraphics[width=.63\textwidth,clip,trim=1.5cm 3.5cm 1cm 3.5cm]{london_simann.png}\\
\includegraphics[width=.63\textwidth,clip,trim=1.5cm 3.5cm 1cm 3.5cm]{london_nsm.png}
\caption{(Color online.) Physical layout of the aggregate London transportation network. Red circles indicate the ten nodes with highest core--periphery score for the three algorithms, with node size proportional to score. From top to bottom: Eigenvector centrality, Sim-Ann and NSM.}\label{fig:drawing_london}
\end{figure}

\subsubsection*{London Transportation Network}
\begin{rev}
In a final experiment we look in more detail at the London transportation network, where further nodal information is available, using the Perron--Frobenius eigenvector of the adjacency matrix as a baseline for comparison. As discussed in Section \ref{sec:methods}, this vector can be viewed as both a centrality measure and a core--periphery score, and it corresponds to a linear counterpart of our approach, retrieved when $\alpha\to 0$. We compare the central nodes of the London train network obtained from Eig, NSM and Sim-Ann. Note that the important nodes for both Eig and Sim-Ann are somewhat related to the concept of centrality, as both methods aim to maximize the same core--quality function $\sum_{ij}A_{ij}x_ix_j$ but enforce different constraint sets, $\mathit \Omega = \{\boldsymbol{x}: \|\boldsymbol{x}\|_2=1\}$ for Eig and $\mathit \Omega = C_{\alpha, \beta}$ for Sim-Ann. On the London train network we find that the core assignments of these two techniques are highly correlated. The importance of nodes captured by the NSM is, instead, more directly related to core and periphery features and differs significantly from Eig and Sim-Ann. For the sake of brevity we do not compare with other methods here.
\end{rev}
In Figure~\ref{fig:drawing_london} we display the edges (underground, overground and DLR connections) in physical space, with a darker linetype indicating a larger weight. The top ten stations are highlighted for the three measures, with node size proportional to the value. Although four stations are highlighted in all three plots, there are clear differences in the results. Eigenvector centrality and Sim-Ann produce similar results, focusing on a set of stations that are geographically close, whereas NSM assigns higher core scores to some stations at key intersections that are further from the city centre. To underscore the differences that are apparent in Figure~\ref{fig:drawing_london}, in Table~\ref{tab:data_london} we list the names of the top ten stations drawn in Figure~\ref{fig:drawing_london}, for each of the three rankings. Whereas four major stations, namely Baker Street, King's Cross, Liverpool Street and Moorgate, are shared by all three methods, four stations appearing in the NSM top ten do not appear in the top ten of the other two methods. Table~\ref{tab:data_london} also gives the overall number of passengers entering or exiting each station. A station may play more than one role (underground, overground or DLR) and we list the most recently reported total annual usage.
More precisely, we sum the records for
\begin{itemize}
\item London Underground annual entry and exit, 2016,
\item National Rail annual entry and exit, 2016--2017,
\item DLR annual boardings and alightings, 2016,
\end{itemize}
as reported in Wikipedia in April 2018. Numbers indicate millions of passengers. The last row shows the overall number of passengers using the top ten stations identified by each method. We note that none of the rankings orders the stations strictly by passenger usage. However, while the top ten stations selected by both Eigenvector and Sim-Ann involve around 600 million passengers per year, the top ten NSM stations involve almost 800 million passengers.

\begin{table}[!t]
\centering
\sf
\footnotesize
\begin{tabular}{|llllll|}
\hline
\textbf{Eigenvector} & & \textbf{Sim-Ann} & & \textbf{NSM} & \\
\hline
King's Cross &128.85 & Embankment &26.84 & King's Cross &128.85 \\
Farringdon &29.75 & King's Cross &128.85 & Baker Str. &29.75 \\
Euston Square &14.40 & Liverpool Str. &138.95 & West Ham &77.10 \\
Barbican &11.97 & Baker Str. &29.75 & Liverpool Str. &138.95 \\
Gt Port.\ Str. &86.60 & Bank &96.52 & Paddington &85.32 \\
Moorgate &38.40 & Moorgate &38.40 & Stratford &129.01 \\
Euston &87.16 & Euston Square &14.40 & Embankment &26.84 \\
Baker Str. &29.75 & Gloucester Road &13.98 & Willesden Junct. &109.27 \\
Liverpool Str. &138.95 & Farringdon &27.92 & Moorgate &38.40 \\
Angel &22.10 & West Ham &77.10 & Earls Court &20.00 \\
\hline
\hline
\textit{\textbf{Total}} & 586.09 & & 592.70 & & 783.48\\
\hline
\end{tabular}
\caption{Ten London train stations with highest core value, according to eigenvector centrality (left column), Sim-Ann (middle column), and NSM (right column), applied to the weighted London trains network. The numbers beside each station show overall (underground, overground, DLR) annual usage in millions of passengers. Numbers in the bottom row show the sum of annual usage across the top 10 stations selected by each method. King's Cross refers to a combination of King's Cross and St Pancras main-line stations and the King's Cross St Pancras underground station. }\label{tab:data_london}
\end{table}

\begin{figure}[!t]
\begin{tabular}{lrrr}
& \includegraphics[width=.27\textwidth]{scatter_london_EigNSM} & \includegraphics[width=.27\textwidth]{scatter_london_NSMSimAnn} & \includegraphics[width=.27\textwidth]{scatter_london_EigSimAnn} \\
\hline
\textit{Kendall $\tau$} \!\!\!\!\!\!\!\!\!\!\!\!\!& 0.1442 & 0.2930 & 0.5455
\end{tabular}
\caption{Top: Scatter plots comparing the rankings associated with the three core score functions: Eigenvector centrality, Sim-Ann and NSM. Bottom: Kendall $\tau$ correlation coefficient between the three pairs of rankings shown in the corresponding scatter plot. }\label{fig:scatter_london}
\end{figure}

For a comparison across all $369$ stations, Figure~\ref{fig:scatter_london} shows scatter plots of the rankings for the three methods in a pairwise manner. We see that the left and middle plots, NSM versus Eig and NSM versus Sim-Ann, show much less agreement than the third, Eig versus Sim-Ann. This is confirmed by the Kendall $\tau$ correlation coefficients between the different rankings, shown at the bottom of Figure~\ref{fig:scatter_london}.

\section{Discussion}\label{sec:disc}
The approach in \cite{borgatti2000models,rombach2014core,Zhang15} was to set up a discrete optimization problem and then apply heuristic algorithms that are not guaranteed to find a global minimum.
Our work differs by relaxing the problem before addressing the computational task. We showed that a relaxed analogue of a natural discrete optimization problem allows for a globally convergent iteration that is feasible for large, sparse, networks. This philosophy is in line with classical and widely used reordering and clustering methods that make use of the Fiedler or the Perron--Frobenius eigenvectors \cite{EH10}. However, in the core--periphery setting considered here, the resulting relaxed problem is equivalent to an eigenvalue problem that is inherently nonlinear and is reminiscent of more recent clustering and reordering techniques that exploit nonlinear eigenvectors \cite{buhler2009spectral,tudisco2017node,tudisco2018nodal,tudisco2017community}. Hence, we developed new results in nonlinear Perron--Frobenius theory in order to derive and analyze the algorithm. As with all clustering, partitioning and reordering methods in network science, there is no absolute gold standard against which to judge results---the underlying problems may be defined in many different ways. In this work we introduced a new random graph model that (a) gives further justification for our algorithm, and (b) provides one basis for systematic comparison of methods. Maximum likelihood results on synthetic networks with planted structure showed the effectiveness of the new method, as did qualitative visualizations and quantitative tests across a range of application areas. \bigskip \textbf{Acknowledgments} We thank Mason A. Porter for supplying code that implements the algorithm in \cite{rombach2014core,rombach2017core}. EPSRC Data statement: all data and code related to this work are publicly available, and may be obtained by following the links in the text or by consulting the associated references.
https://arxiv.org/abs/1810.12421
On the forces that cable webs under tension can support and how to design cable webs to channel stresses
In many applications of Structural Engineering the following question arises: given a set of forces $\mathbf{f}_1,\mathbf{f}_2,\dots,\mathbf{f}_N$ applied at prescribed points $\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_N$, under what constraints on the forces does there exist a truss structure (or wire web) with all elements under tension that supports these forces? Here we provide an answer to this question for any configuration of the terminal points $\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_N$ in both the two- and three-dimensional cases. Specifically, the existence of a web is guaranteed by a necessary and sufficient condition on the loading which corresponds to a finite dimensional linear programming problem. In two dimensions we show that any such web can be replaced by one in which there are at most $P$ elementary loops, where elementary means the loop cannot be subdivided into subloops, and where $P$ is the number of forces $\mathbf{f}_1,\mathbf{f}_2,\dots,\mathbf{f}_N$ applied at points strictly within the convex hull of $\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_N$. In three dimensions we show that, by slightly perturbing $\mathbf{f}_1,\mathbf{f}_2,\dots,\mathbf{f}_N$, there exists a uniloadable web supporting this loading. Uniloadable means it supports this loading and all positive multiples of it, but not any other loading. Uniloadable webs provide a mechanism for distributing stress in desired ways.
\section{Introduction}\label{Introduction}
One of the main goals of Structural Engineering is to find efficient structures when one incorporates into the design a specific type of material or substructure. Many materials behave quite differently under tension and compression: concrete and masonry are two examples of materials that perform much better under compression. Some structural elements, moreover, support only specific loadings: a wire or cable, for example, can support tension but not compression. Here we are interested in the case where one incorporates a material that works particularly well under tension, so that a cable web is expected to be the best-performing type of structure. Thus, we address the following problem: given a set of forces ${\bf f}_1,{\bf f}_2,\ldots,{\bf f}_N$ applied at prescribed points ${\bf x}_1,{\bf x}_2, \ldots, {\bf x}_N$, under what constraints on the forces does there exist a wire web (or truss structure) with all elements under tension that supports these forces? Note that the problem is identical if one is interested in the case where all elements are under compression. We are only interested in cables that can be modeled as a set of straight truss elements: we do not consider the case of catenary elements, see, e.g., \cite{Ahmad:2013:NLA,Ahmadizadeh:2013:TDG,Andreu:2006:AND,Such:2009:AAB}.

In the two-dimensional case, a complete answer to this problem is given by Theorem 1 in \cite{Milton:2017:SFI} in the special case where the prescribed points are vertices of a convex polygon. This theorem states that, in this case, a web exists if and only if the net torque going clockwise around any connected portion of the boundary is non-negative: for any sequence $({\bf x}_i, {\bf x}_{i+1},\dots, {\bf x}_j)$ of consecutive vertices ordered clockwise (where ${\bf x}_k$ is identified with ${\bf x}_{k-N}$) we have:
\begin{equation} \sum_{k=i}^{j} \det({\bf x}_k-{\bf x}_i, {\bf f}_k)\geq 0.\eeq{I1}
Furthermore, by using Airy stress function theory, a representative web is explicitly given that contains no closed loops (that is, there is no set of wires forming the boundary of a polygon). The reader is referred to \cite{Milton:2017:SFI} for details.

So the question now is: what happens when the points are not the vertices of a convex polygon, or in the three-dimensional case? Theorem 1.1 (see below), which is one of the main results of this paper, answers this question completely. To make the statement clear, let us first introduce some terminology that will be used throughout the paper:\\
\\
\noindent $\bullet$ {\it Finite web}: a collection $W:=\big([{\bf x}_i,{\bf x}_j],[{\bf x}_k,{\bf x}_l]\dots \big)$ of segments (or bars), where ${\bf x}_1,\dots, {\bf x}_M$ are a finite set of points called {\it nodes}.\\
\noindent $\bullet$ {\it Terminal nodes}: the nodes ${\bf X}=({\bf x}_1,\dots, {\bf x}_N)$, $N\leq M$, where the forces are applied.\\
\noindent $\bullet$ {\it Internal nodes}: the remaining nodes, if any. \\
\noindent $\bullet$ {\it Admissible web stress state}: when each bar $[{\bf x}_i,{\bf x}_j]$ of the web is endowed with a non-negative tension $\sigma_{ij}$, we say that $\sigma=(\sigma_{ij},\sigma_{kl},\dots ) $ is an admissible web stress state on $W$ for the loading ${\bf F}$ at ${\bf X}$ if it is in equilibrium under the action of the forces ${\bf F}=({\bf f}_1,{\bf f}_2,\ldots,{\bf f}_N)$ applied at ${\bf X}=({\bf x}_1,{\bf x}_2,\ldots,{\bf x}_N)$.
If there exists such an admissible stress state then the web $W$ is said to {\it support}\footnote{By ``admissible stress state'' we mean an equilibrium state in which all bars are either in tension, or carrying no load. Thus, by ``supporting'', we mean supporting with all bars either in tension, or carrying no load.} ${\bf F}$ at ${\bf X}$.\\ \noindent $\bullet$ {\it Uniloadable webs}: webs which support only one loading (up to a positive multiplicative constant).\\ Theorem 1.1 then reads as follows: \begin{theorem}\label{mainthm1}{\bf Existence of a web under tension} \newline Let ${\cal A}_{{\bf X}}$ be the cone of displacements ${\bf U}=({\bf u}_1,{\bf u}_2,\ldots,{\bf u}_N)$ at points ${\bf X}=({\bf x}_1,{\bf x}_2,\ldots,{\bf x}_N)$ defined by \begin{equation} {{\cal A}_{{\bf X}}}:=\{{\bf U}\in ({\mathbb R}^d)^N\ :\ \forall\, 1\le i<j\le N,\ ({\bf u}_i-{\bf u}_j)\cdot({\bf x}_i-{\bf x}_j)\geq 0 \}. \eeq{W10} Then, the following condition \begin{equation} \inf_{{\bf U}\in{{\cal A}_{{\bf X}}}}{\bf F}\cdot{\bf U}\geq 0 \eeq{W9} is necessary and sufficient to ensure the existence of a finite web under tension that supports the loading ${\bf F}$ at points ${\bf X}$. In such a case, the web connecting the terminal points ${\bf X}$ pairwise supports the loading ${\bf F}$. \end{theorem} Section \ref{Admissible_loadings} is dedicated to the proof of Theorem \ref{mainthm1} and related consequences, whereas Section \ref{Mechanical_inter} provides insights on the mechanical meaning of the theorem with special reference to the two-dimensional case. In general, from a mechanical point of view, the statement says that the work performed by ${\bf F}$ is non negative for any (infinitesimal)\footnote{Notice that we solve this problem within the context of infinitesimal elasticity: examples of applications of the finite deformation theory to describe the geometric nonlinearity are given, for instance, by \cite{Crusells:2017:AMF,Yang:2007:GNA,Thai:2011:NSD}.} displacements ${\bf U}$ corresponding to a global expansion of the system of points ${\bf X}$. Notice that condition \eqref{W9} provides a characterization of the set ${\cal A}_{{\bf X}}^*$ of all the loadings ${\bf F}$ at points ${\bf X}$ which can be supported by some finite web as the solution to a finite dimensional linear programming problem. Moreover, Theorem \ref{mainthm1} states that a web supporting the given loading is the one that connects the terminal points pairwise (see Example \ref{Ex1}). \begin{Aexa}\label{Ex1} Consider a balanced set of forces ${\bf f}_1$,\ldots,${\bf f}_N$ at points ${\bf x}_1$,\ldots,${\bf x}_N$ that are directed radially outwards from a central point ${\bf x}_0$ (so that ${\bf f}_i=c_i ({\bf x}_i-{\bf x}_0)$ for some set of positive coefficients $c_i$). Note that the balance of forces implies that ${\bf x}_0$ must belong to the convex hull of the points ${\bf x}_1$,\ldots,${\bf x}_N$. 
\begin{figure}[!ht] \centering { \begin{tikzpicture}[scale=0.50] \draw[black, thick] (0,0) -- (-4,0); \draw[black, thick] (0,0) -- (-3,4); \draw[black, thick] (0,0) -- (2,4); \draw[black, thick] (0,0) -- (4,3); \draw[black, thick] (0,0) -- (4,-3); \filldraw[black] (0,0) circle (2pt); \node (X0) at (-0.2,-0.6) {${\bf x}_0$}; \filldraw[black] (-4,0) circle (2pt) ; \node (X1) at (-4.2,-0.6) {${\bf x}_1$}; \filldraw[black] (-3,4) circle (2pt); \node (X2) at (-2.7,4.5) {${\bf x}_2$}; \filldraw[black] (2,4) circle (2pt); \node (X3) at (1.5,4.4) {${\bf x}_3$}; \filldraw[black] (4,3) circle (2pt); \node (X4) at (4.6,2.6) {${\bf x}_4$}; \filldraw[black] (4,-3) circle (2pt); \node (X5) at (4.7,-2.8) {${\bf x}_5$}; \draw[->,red,thick](-4,0)--(-5,0); \draw[->,red,thick](-3,4)--(-3.75,5); \draw[->,red,thick](2,4)--(2.2,4.4); \draw[->,red,thick](4,3)--(4.4,3.3); \draw[->,red,thick](4,-3)--(6,-4.5); \node (F1) at (-5.3,0){\red ${\bf f}_1$}; \node (F2) at (-3.9,5.4){\red ${\bf f}_2$}; \node (F3) at (2.3,4.7){\red ${\bf f}_3$}; \node (F4) at (4.6,3.6){\red ${\bf f}_4$}; \node (F5) at (6.4,-4.1){\red ${\bf f}_5$}; \node (a) at (0,-4.5){(a)}; \end{tikzpicture}} { \begin{tikzpicture}[scale=0.50] \draw[black, thick] (-4,0)--(-3,4); \draw[black, thick] (-4,0)--(2,4); \draw[black, thick] (-4,0)--(4,3); \draw[black, thick] (-4,0)--(4,-3); \draw[black, thick] (-3,4)--(2,4); \draw[black, thick] (-3,4)--(4,3); \draw[black, thick] (-3,4)--(4,-3); \draw[black, thick] (2,4)--(4,-3); \draw[black, thick] (4,3)--(4,-3); \draw[black, thick] (4,3)--(2,4); \filldraw[black] (0,0) circle (2pt); \node (X0) at (-0.2,-0.6) {${\bf x}_0$}; \filldraw[black] (-4,0) circle (2pt) ; \node (X1) at (-4.2,-0.6) {${\bf x}_1$}; \filldraw[black] (-3,4) circle (2pt); \node (X2) at (-2.7,4.5) {${\bf x}_2$}; \filldraw[black] (2,4) circle (2pt); \node (X3) at (1.5,4.4) {${\bf x}_3$}; \filldraw[black] (4,3) circle (2pt); \node (X4) at (4.6,2.6) {${\bf x}_4$}; \filldraw[black] (4,-3) circle (2pt); \node (X5) at (4.7,-2.8) {${\bf x}_5$}; \draw[->,red,thick](-4,0)--(-5,0); \draw[->,red,thick](-3,4)--(-3.75,5); \draw[->,red,thick](2,4)--(2.2,4.4); \draw[->,red,thick](4,3)--(4.4,3.3); \draw[->,red,thick](4,-3)--(6,-4.5); \node (F1) at (-5.3,0){\red ${\bf f}_1$}; \node (F2) at (-3.9,5.4){\red ${\bf f}_2$}; \node (F3) at (2.3,4.7){\red ${\bf f}_3$}; \node (F4) at (4.6,3.6){\red ${\bf f}_4$}; \node (F5) at (6.4,-4.1){\red ${\bf f}_5$}; \node (b) at (0,-4.5){(b)}; \end{tikzpicture}} \caption{In this example the forces ${\bf F}$ are directed radially outwards from a central point ${\bf x}_0$ and so the web that connects the terminal points ${\bf X}$ to ${\bf x}_0$ supports such a loading. Among all the webs that can support ${\bf F}$, Theorem \ref{mainthm1} provides the web connecting the terminal points pairwise and in Example \ref{Ex1} we determine the stress state in each wire of such a web and we prove it is an equilibrium stress state. } \labfig{1} \end{figure} Clearly, the web formed by the wires connecting the points ${\bf x}_1$,\ldots,${\bf x}_N$ to ${\bf x}_0$, Figure \ref{fig:1}(a), supports this loading: $\sigma_{i0}:=c_i \|{\bf x}_i-{\bf x}_0\|^{-1}$ is an admissible stress state on this web. 
But we can find another web which supports the same loading: indeed, the web that connects the points ${\bf x}_1$,\ldots,${\bf x}_N$ pairwise, Figure \ref{fig:1}(b), is suitable when endowed with the stress state $\sigma_{ij}= \|{\bf x}_i-{\bf x}_j\| c_i c_j (\sum_k c_k)^{-1}$, with the equilibrium condition
\begin{align*}
{\bf f}_i + \sum_{j\not=i} \sigma_{ij} \frac {{\bf x}_j-{\bf x}_i}{\|{\bf x}_j-{\bf x}_i\|} &= c_i \left({\bf x}_i-{\bf x}_0\right)+c_i \left(\sum_k c_k\right)^{-1} \sum_j c_j ({\bf x}_j-{\bf x}_i) \\
&= c_i ({\bf x}_i-{\bf x}_0) +c_i \left(\sum_k c_k\right)^{-1} \left(\sum_j {\bf f}_j-\sum_j c_j ({\bf x}_i-{\bf x}_0) \right) =0
\end{align*}
being satisfied at each node ${\bf x}_i$.
\end{Aexa}
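The equilibrium computation in Example \ref{Ex1} is easy to reproduce numerically. The following minimal sketch (Python/NumPy; it is our own illustration and not part of the original text) generates a random balanced radial loading and verifies that the web connecting the terminal points pairwise, with tensions $\sigma_{ij}= \|{\bf x}_i-{\bf x}_j\| c_i c_j (\sum_k c_k)^{-1}$, is in equilibrium at every terminal node.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, d = 6, 2
X = rng.normal(size=(N, d))                  # terminal points x_1, ..., x_N
c = rng.uniform(1.0, 2.0, size=N)            # positive coefficients c_i
x0 = (c[:, None] * X).sum(axis=0) / c.sum()  # ensures sum_i c_i (x_i - x0) = 0
F = c[:, None] * (X - x0)                    # balanced loading f_i = c_i (x_i - x0)

residual = F.copy()                          # accumulates f_i + sum_j sigma_ij e_ij
for i in range(N):
    for j in range(N):
        if i == j:
            continue
        sigma_ij = np.linalg.norm(X[i] - X[j]) * c[i] * c[j] / c.sum()
        e_ij = (X[j] - X[i]) / np.linalg.norm(X[j] - X[i])
        residual[i] += sigma_ij * e_ij
print(np.abs(residual).max())                # of the order of machine precision
\end{verbatim}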
Section \ref{Channelling} focuses on another major topic of the paper, that is, on channeling or redistributing stresses: we are interested in webs able to channel forces in a controlled way. For example, if one considers, say, a bicycle wheel or a suspension bridge, then a desired distribution of forces is usually achieved by appropriately tightening the spokes or cables (clearly, the layout of these substructures is also essential). By contrast, here we seek to distribute stress through judicious choices of the geometry of the web. Notice that distributing stresses in wires is quite different from distributing electrical currents in conducting wires. At a junction of more than two conducting wires one cannot tell in advance (without looking at the rest of the circuit) how much flow will be channeled into the different wires (this is an advantage if one wants the current to flow where most needed, a disadvantage if one wants to control the allocation of current, as in an irrigation system). This is due to the fact that, in a conduction network, one only has to satisfy Kirchhoff's law, which states that the outgoing current has to balance the incoming current. By contrast, when distributing stresses, at a node where four non-coplanar wires join (in the three-dimensional case; three non-collinear wires in the two-dimensional case), balance of forces implies that the tension in one wire, together with the geometry of the junction, uniquely determines the tensions in the other wires. Thus, having at each internal node a coordination number of four in the three-dimensional case, or three in the two-dimensional case, is important for uniquely determining the loading that a web can support. This principle underlies the construction of ``pentamode materials'' \cite{Milton:1995:WET}, which have a diamond-like structure with a coordination number of 4 at each node of the structure. As a consequence, the stress field is essentially uniquely determined: like fluids, which only support a hydrostatic loading, pentamodes only support one loading but, unlike fluids, that loading can be a combination of hydrostatic and shear forces. Pentamode materials have been studied for their use in cloaking, in particular for cloaking against sonar. They can guide the acoustic wave around an object while having little impedance mismatch at the membrane boundary with the surrounding fluid \cite{Norris:2008:ACT}. Webs of springs that support only one loading (up to a multiplicative constant) were instrumental in \cite{Camar:2003:DCS,Vasquez:2011:CCS}. The corresponding elastic energies have the form $({\bf F}\cdot{\bf U})^2$, when their terminal nodes have displacements ${\bf U}=({\bf u}_1,{\bf u}_2,\ldots,{\bf u}_N)$, so that the webs are able to support only forces proportional to ${\bf F}$. In that context, as the elastic energy can be expressed using a rank-one matrix in the form ${\bf U}\cdot ({\bf F}\otimes{\bf F})\cdot {\bf U}$, such webs were called ``rank-one webs''. In this paper, as we are interested in energies that are not necessarily quadratic, we prefer the term {\it uniloadable webs}. In Section \ref{Channelling} we establish that, apart from some exceptional cases, if forces ${\bf f}_1,{\bf f}_2,\ldots,{\bf f}_N$ at prescribed points ${\bf x}_1,{\bf x}_2, \ldots, {\bf x}_N$ are supported by some web, they are also supported by a uniloadable one. The uniloadable networks we introduce here, if prestressed, could replace the pentamode materials in the aforementioned cloaking applications, allowing much greater flexibility in the design and economy in the number of junctions and wire elements.

One may not be interested only in uniloadable webs but, more generally, in webs $W$ for which the set ${\cal C}^W_{\bf X}$ of supported loadings is prescribed. It is clear that, for a given web $W$ with terminal nodes at ${\bf X}$, the set ${\cal C}^W_{\bf X}$ is a convex cone contained in ${\cal A}^*_{\bf X}$. In Section \ref{Channelling}, we show that the converse is also true in an approximate sense: given a convex cone ${\cal C}\subset{\cal A}_{\bf X}^*$, one can find a sequence of webs $W_n$ such that ${\cal C}^{W_n}_{{\bf X}}$ approaches ${\cal C}$ as $n\to\infty$. For two-dimensional webs where the points ${\bf X}$ are the vertices of a convex polygon, a similar question was addressed in Theorem 2 of \cite{Milton:2017:SFI}.

\section{On the existence of a web under tension}\label{Admissible_loadings}
This section is dedicated to the proof of Theorem \ref{mainthm1} and contains some mathematical technicalities; the reader who is more interested in the mechanical interpretation of the theorem may prefer to skip ahead to Section \ref{Mechanical_inter}.

Given a set of forces ${\bf F}$ applied at the points ${\bf X}$, we ask whether there exists a web, with all wires under tension, supporting such a loading. Recall that the equilibrium of a wire web is achieved if the tension is constant in each wire and if each node is in equilibrium. This situation admits a concise, if somewhat abstract, formulation in terms of measures which is convenient for proving Theorem \ref{mainthm1}. Given a web $W$, the associated measure is
\begin{equation}{\mathcal W}:={\cal H}^1\restrict {[{\bf x}_i,{\bf x}_j]}+{\cal H}^1\restrict {[{\bf x}_k,{\bf x}_l]}+\dots \label{WW}\end{equation}
where ${\cal H}^1\restrict {[{\bf x}_i,{\bf x}_j]}$ stands for the line measure (the one dimensional Hausdorff measure) concentrated on the segment $[{\bf x}_i,{\bf x}_j]$. Accordingly, the stress state $\sigma$ can be represented by the measure
\begin{equation}
{\mathfrak S}:=\sigma_{ij} \frac {{\bf x}_i-{\bf x}_j}{\|{\bf x}_i-{\bf x}_j\|}\otimes \frac {{\bf x}_i-{\bf x}_j}{\|{\bf x}_i-{\bf x}_j\|} {\cal H}^1\restrict {[{\bf x}_i,{\bf x}_j]}+\sigma_{kl} \frac {{\bf x}_k-{\bf x}_l}{\|{\bf x}_k-{\bf x}_l\|}\otimes \frac {{\bf x}_k-{\bf x}_l}{\|{\bf x}_k-{\bf x}_l\|} {\cal H}^1\restrict {[{\bf x}_k,{\bf x}_l]}+\dots
\label{ssigma}
\end{equation}
Similarly, associated with the applied system of forces is the discrete measure
\begin{equation} {\cal F}:= \sum_{i=1}^N {\bf f}_i\, \delta_{{\bf x}_i}
\label{FF}
\end{equation}
where $\delta_{{\bf x}_i}$ stands for the Dirac measure at point ${\bf x}_i$.
The equilibrium condition then simply reads \begin{equation}\nabla \cdot{\mathfrak S}+{\cal F}=0 \label{eequi}\end{equation} and the requirement that all wires be under tension is equivalent to the requirement that the measure ${\mathfrak S}$ take values in the set of positive semi-definite symmetric matrices. We will denote with ${\cal M}^+$ the set of such measures. Using this formulation, searching for a finite web boils down to finding ${\mathfrak S}\in {\cal M}^+$ of the form \eqref{ssigma} such that \eqref{eequi} is satisfied. If we drop the constraint that ${\mathfrak S}$ must be of the form \eqref{ssigma}, we are led to an interesting generalization: a {\it generalized web} ${\mathcal W}$ is a positive measure, and it supports the loading ${\mathcal F}$ if there exists some ${\mathfrak S}\in {\cal M}^+$ absolutely continuous with respect to ${\mathcal W}$ satisfying the equilibrium condition \eqref{eequi}. The relation between this generalized formulation and the so-called Michell problem is provided in Section \ref{Michell}. \subsection{Proof of Theorem \ref{mainthm1}} \begin{proof} To prove that condition \eq{W9} is necessary, we consider an admissible web stress ${\mathfrak S}$ for the loading ${\bf F}$. As ${\mathfrak S}$ is a positive semidefinite tensor-measure with compact support, by Green's generalized formula we have (see Section 6 of \cite{Milton:2017:SFI}): \begin{equation} 0\leq \Big<{\mathfrak S}, {\bf e}({\bf u})\Big> = - \Big<\nabla \cdot {\mathfrak S},{\bf u}\Big> = \Big<{\cal F}, {\bf u}\Big> = {\bf F}\cdot {\bf U}. \eeq{NS-1} for all $C^1$ fields ${\bf u}$ such that ${\bf e}({\bf u}):= (\nabla{\bf u}({\bf x})+(\nabla{\bf u}({\bf x}))^T)/2$ is positive semidefinite. Now, let ${\bf U}=({\bf u}_1,{\bf u}_2,\ldots,{\bf u}_N)$ be an element of ${\cal A}_{\bf X}$. For any $\kappa>0$, Lemma \ref{lemmaext} (see below) provides a Lipschitz extension $\widetilde{\bf u}: {\mathbb R}^d \to {\mathbb R}^d$ satisfying $\widetilde{\bf u}({\bf x}_\ell)={\bf u}_\ell$ and \begin{equation} \forall ({\bf x},{\bf y})\in ({\mathbb R}^d)^2,\ (\widetilde {\bf u}({\bf x})-\widetilde {\bf u}({\bf y}))\cdot({\bf x}-{\bf y})\geq -\kappa \|{\bf x}-{\bf y}\|^2. \eeq{LE1dup} This extension field is differentiable a.e. and, at every point ${\bf x}$ of differentiability, inequality \eqref{LE1dup} implies that ${\bf e}(\widetilde {\bf u})\geq -\kappa{\bf I}$. In order to apply Green's formula, we introduce a regularized field ${\bf u}^\eta:= \widetilde{\bf u} * \rho^\eta$, $\rho^\eta$ being a smooth convolution mollifier (i.e., non negative, supported in the ball of radius $\eta$, and such that $\int \rho^\eta =1 $). It can be readily checked that the strain ${\bf e}({\bf u}^\eta)$ associated with ${\bf u}^\eta$ is smooth and satisfies everywhere the inequality ${\bf e}({\bf u}^\eta)\geq -\kappa{\bf I}$. By applying \eqref{NS-1} to the field ${\bf u}= {\bf u}^\eta + \kappa {\bf x}$, we deduce that $$ \sum_i{\bf f}_i\cdot{\bf u}^\eta({\bf x}_i)+ \kappa\sum_i{\bf f}_i\cdot{\bf x}_i \ge 0. $$ As ${\bf u}^\eta\to \widetilde {\bf u} $ uniformly on every compact set, we can pass to the limits $\eta \to 0$ and $\kappa\to 0$ in the last inequality to obtain the desired inequality $ {\bf F} \cdot {\bf U} \ge 0$ (see equation \eqref{W9}). \medskip To prove that condition \eqref{W9} is sufficient, we consider ${\bf F}$ such that ${\bf F} \cdot {\bf U} \ge 0$ for any ${\bf u}\in {\cal A}_{\bf X}$. 
The linear conditions $({\bf u}_i-{\bf u}_j)\cdot({\bf x}_i-{\bf x}_j)\geq 0$ can be rewritten in the form ${\bf F}^{(i,j)}\cdot{\bf U}\geq 0$, where:
\begin{equation} {\bf F}^{(i,j)}=({\bf f}_1^{(i,j)},{\bf f}_2^{(i,j)},\ldots,{\bf f}_N^{(i,j)})\quad \text{with}\quad {\bf f}_\ell^{(i,j)} = \begin{cases} {\bf x}_i-{\bf x}_j &\text{ if }\ell=i, \\ {\bf x}_j-{\bf x}_i &\text{ if }\ell=j, \\ 0 &\text{ otherwise.} \end{cases} \eeq{RT4}
Hence, thanks to the Farkas lemma \cite{Farkas:1902:TDE}, we know that ${\bf F}$ is a linear combination of the linear forms ${\bf F}^{(i,j)}$ with non-negative coefficients:
\begin{equation} {\bf F}=\sum_{1\le i<j\le N} \lambda_{i,j}\, {\bf F}^{(i,j)} \quad ,\quad \lambda_{i,j}\ge 0. \eeq{RT4a}
It is then easy to check that the positive semidefinite symmetric tensor measure
\begin{equation} {\mathfrak S}= \sum_{1\le i<j\le N}\lambda_{i,j}\, \frac {({\bf x}_i-{\bf x}_j)\otimes ({\bf x}_i-{\bf x}_j)}{\|{\bf x}_i-{\bf x}_j\|} {\cal H}^1\restrict {[{\bf x}_i,{\bf x}_j]}, \eeq{RT2}
is a possible web stress for the loading $\cal F$. Indeed, for any ${\bf a}$ and ${\bf b}$ in ${\mathbb R}^d$, we have\linebreak $\nabla \cdot(({\bf b}-{\bf a})\otimes ({\bf b}-{\bf a}) {\cal H}^1\restrict{[{\bf a},{\bf b}]})=\|{\bf b}-{\bf a}\|\, ({\bf b}-{\bf a}) (\delta_{\bf a}-\delta_{\bf b})$, thus $ \nabla \cdot{\mathfrak S}=-\sum_i {\bf f}_i \delta_{{\bf x}_i}=-{\cal F}.$ Let us emphasize that this web stress measure involves only the original nodes $({\bf x}_1, {\bf x}_2, \dots, {\bf x}_N)$, as claimed in the theorem.
\end{proof}

We now state and prove the interpolation lemma which was needed in the previous proof.

\begin{lemma}\label{lemmaext}{\bf Interpolation Lemma} \newline
Let $A$ be a finite subset of ${\mathbb R}^d$, and let ${\bf u}:\ A\to {\mathbb R}^d$ be a field satisfying $({\bf u}({\bf x})-{\bf u}({\bf y}))\cdot({\bf x}-{\bf y})\geq 0, \forall ({\bf x},{\bf y})\in A^2. $ Then, for any $\kappa>0$ there exists a Lipschitz extension $\widetilde {\bf u}$ of ${\bf u}$ on ${\mathbb R}^d$ satisfying
\begin{equation} \forall ({\bf x},{\bf y})\in ({\mathbb R}^d)^2,\ (\widetilde {\bf u}({\bf x})-\widetilde {\bf u}({\bf y}))\cdot({\bf x}-{\bf y})\geq -\kappa \|{\bf x}-{\bf y}\|^2. \eeq{LE1}
\end{lemma}
\begin{proof}
As ${\bf u}$ is bounded on the bounded set $A$, there exists $M$ such that, for any ${\bf x}\in A$, $\|{\bf u}({\bf x})\|\leq M$ and $\|{\bf x}\|\leq M$. Moreover, as $A$ is finite, there exists $\delta>0$ such that, for any distinct points ${\bf x}$ and ${\bf y}$ in $A$, $\|{\bf x}-{\bf y}\|\geq \delta$. We set $\lambda=\frac {8\, M^2}{\delta^2}$ and choose $s$ such that $0<\lambda\, s<\min(\kappa,1) $. Let us consider $\bfm\phi$ defined on $A$ by\footnote{Physically, when $s$ is small, we can think of $\bfm\phi({\bf x})$ as a deformation ${\bf x}\to\bfm\phi({\bf x})$ associated with the displacement field ${\bf u}({\bf x})$. The additional uniform contractive factor of $(1-\lambda\, s^2)$ is needed to account for the finite deformation corrections when ${\bf u}({\bf x})-{\bf u}({\bf y})$ is perpendicular to ${\bf x}-{\bf y}$ (with say ${\bf u}({\bf y})=0$ and ${\bf y}=0$, and ${\bf u}({\bf x})$ orthogonal to ${\bf x}$, the distance $|{\bf x}-s{\bf u}({\bf x})|$ lengthens as $s$ is increased).}
\begin{equation} \bfm\phi({\bf x})=(1-\lambda\, s^2) {\bf x} - s {\bf u}({\bf x}).
\eeq{LE2}
For any distinct points ${\bf x}$ and ${\bf y}$ in $A$, we have, by straightforward computation,
\begin{eqnarray} \|\bfm\phi({\bf x})- \bfm\phi({\bf y})\|^2&=&\|(1-\lambda\, s^2)({\bf x}-{\bf y})-s({\bf u}({\bf x})-{\bf u}({\bf y}))\|^2\nonumber \\
&=&\left\|({\bf x}-{\bf y})-s\left({\bf u}({\bf x})-{\bf u}({\bf y})+\lambda s ({\bf x}-{\bf y})\right)\right\|^2\nonumber \\
&=& \|{\bf x}-{\bf y}\|^2 + s^2\|{\bf u}({\bf x})-{\bf u}({\bf y})+\lambda s({\bf x}-{\bf y})\|^2\nonumber \\
&~&\hskip 5 cm -2 s \big(({\bf x}-{\bf y})\cdot ({\bf u}({\bf x})-{\bf u}({\bf y}))+\lambda s\|{\bf x}-{\bf y}\|^2\big)\nonumber \\
&\leq& \|{\bf x}-{\bf y}\|^2 + s^2\|{\bf u}({\bf x})-{\bf u}({\bf y})+\lambda s({\bf x}-{\bf y})\|^2- 2 \lambda s^2 \|{\bf x}-{\bf y}\|^2\nonumber \\
&\leq& \|{\bf x}-{\bf y}\|^2 + 4 s^2\left(\|{\bf u}({\bf x})\|^2+\|{\bf u}({\bf y})\|^2+(\lambda s)^2\|{\bf x}\|^2+(\lambda s)^2\|{\bf y}\|^2\right)- 2 \lambda s^2 \|{\bf x}-{\bf y}\|^2\nonumber \\
&\leq& \|{\bf x}-{\bf y}\|^2 + 16 M^2\, s^2-2 \lambda \, \delta^2\,s^2 \nonumber \\
&\leq& \|{\bf x}-{\bf y}\|^2. \eeqa{LE3}
Kirszbraun's Theorem \cite{Kirszbraun:1934:UDZ} proves the existence of an extension $\widetilde \bfm\phi$ of $\bfm\phi$ on ${\mathbb R}^d$ satisfying the same condition: for any $({\bf x},{\bf y})\in({\mathbb R}^d)^2$,
\begin{equation} \|\widetilde\bfm\phi({\bf x})- \widetilde\bfm\phi({\bf y})\|\leq \|{\bf x}-{\bf y}\|. \eeq{LE4}
Let us now define $\widetilde {\bf u}$ on ${\mathbb R}^d$ by setting
\begin{equation} \widetilde {\bf u}({\bf x})=\frac{(1-\lambda\, s^2){\bf x}-\widetilde\bfm\phi({\bf x})}{s}. \eeq{LE5}
For any ${\bf x}\in A$ we have $\widetilde {\bf u}({\bf x})={\bf u}({\bf x})$. Moreover $\widetilde {\bf u}$ is a Lipschitz function which satisfies
\begin{eqnarray} (\widetilde {\bf u}({\bf x})-\widetilde {\bf u}({\bf y}))\cdot ({\bf x}-{\bf y})&= &s^{-1}\Big((1-\lambda\, s^2) \|{\bf x}-{\bf y}\|^2-(\widetilde \bfm\phi({\bf x})-\widetilde \bfm\phi({\bf y})) \cdot ({\bf x}-{\bf y})\Big)\nonumber \\
&\geq & -\lambda\, s\, \|{\bf x}-{\bf y}\|^2\geq -\kappa \|{\bf x}-{\bf y}\|^2. \eeqa{LE6}
\end{proof}

\subsection{Support of the stress field}
From a physical viewpoint it seems obvious that a finite web supporting the loading ${\bf F}$ at the points ${\bf X}=({\bf x}_1,{\bf x}_2,\ldots,{\bf x}_N)$ should have an associated stress measure vanishing outside the convex hull of the points ${\bf x}_1,{\bf x}_2,\ldots,{\bf x}_N$. This section is devoted to proving this fact, which turns out to be valid also for generalized webs.
\begin{theorem}\label{Thsupport}
Let $\cal F$ be a vector measure with compact support $K$ and ${\mathfrak S}$ in ${\cal M}^+$ such that $\nabla \cdot{\mathfrak S}+{\cal F}=0$. Then the support of ${\mathfrak S}$ is contained in the convex envelope $co(K)$ of $K$.
\end{theorem}
Notice that, from this result, we can deduce that the support of ${\mathfrak S}$ is contained in the subspace spanned by the vectors\footnote{Here we choose ${\bf x}_1$ as the origin for identifying points and vectors.} ${\bf x}_1$, ${\bf x}_2$, $\ldots$, ${\bf x}_N$ and thus that the loading forces ${\bf f}_i$ belong to this subspace. Hence, we will be able to reduce our problem to this subspace and assume without loss of generality that ${\bf x}_1$, ${\bf x}_2$, $\ldots$, ${\bf x}_N$ span ${\mathbb R}^d$.
\medskip \begin{proof} By Hahn-Banach separation theorem \cite{Ekeland:1999:CAV}, proving that ${\mathfrak S}$ vanishes outside the convex envelope $co(K)$ of $K$ reduces to checking that ${\mathfrak S}$ vanishes on all half spaces $P^+_{{\bf m},a}:=\{{\bf x}:\, {\bf x}\cdot {\bf m}>a\}$ with $a\in{\mathbb R}$ and ${\bf m}\in S^{d-1}$ which do not intersect $K$. This verification is achieved in several steps. We first remark that, as $P^+_{{\bf m},a}$ and $K$ do not intersect, we have $\nabla \cdot{\mathfrak S}=0$ on $P^+_{{\bf m},a}$. {\bf Step 1:} Let us consider first a field $\widetilde {\mathfrak S}\in C^1({\mathbb R}^d, {\mathbb R}^{d^2}_{sym})$ such that $\widetilde {\mathfrak S}\geq 0$ and $\nabla \cdot\widetilde {\mathfrak S}=0$ on $P^+_{{\bf m},a}$. Without loss of generality assume that ${\bf m}={\bf e}_1$, where ${\bf e}_1$ is the unit vector along the $x_1$-axis (where $x_1$ is not to be confused with the point ${\bf x}_1$). Let us apply Green's formula in the half ball $\Omega_R:=\{{\bf x}: x_1>a,\ |{\bf x}|<R\}$. We have \begin{equation} \int_{\partial \Omega_R} (\widetilde {\mathfrak S}\cdot {\bf e}_1)\cdot {\bf n}\, {\rm d}{\cal H}^{d-1}=\int_{\Omega_R} (\nabla \cdot \widetilde {\mathfrak S})\cdot {\bf e}_1 \, {\rm d}{\cal H}^{d}=0, \eeq{S0} where ${\bf n}$ stands for the outward normal. Dividing the boundary $\partial \Omega_R$ into $\Sigma_R:=\{{\bf x}:\ x_1=a,\ |{\bf x}|<R\}$ and $S_R^+:=\{{\bf x}:\ x_1>a,\ |{\bf x}|=R\}$ we get \begin{equation} -\int_{\Sigma_R} (\widetilde {\mathfrak S}\cdot {\bf e}_1)\cdot {\bf e}_1\, {\rm d}{\cal H}^{d-1} + \int_{S_R^+} (\widetilde {\mathfrak S}\cdot {\bf e}_1)\cdot {\bf n}\, {\rm d}{\cal H}^{d-1}=0. \eeq{S1} Thus we have \begin{equation} \int_{\Sigma_R} (\widetilde {\mathfrak S}\cdot {\bf e}_1)\cdot {\bf e}_1\, {\rm d}{\cal H}^{d-1} \leq \int_{S_R^+} |\widetilde {\mathfrak S}|\, {\rm d}{\cal H}^{d-1}. \eeq{S2} As $\widetilde {\mathfrak S}\in L^1$, $\int_0^{+\infty}( \int_{S_R^+} |\widetilde {\mathfrak S}|\, {\rm d}{\cal H}^{d-1})\, {\rm d}R <+\infty$ and so there exists a sequence $R_n\to +\infty$ such that $$\int_{S_{R_n}^+} |\widetilde {\mathfrak S}|\, {\rm d}{\cal H}^{d-1}\to 0. $$ This implies that \begin{equation} \lim_{n\to \infty} \int_{\Sigma_{R_n}} (\widetilde {\mathfrak S}\cdot {\bf e}_1)\cdot {\bf e}_1\, {\rm d}{\cal H}^{d-1} =0. \eeq{S3} As $(\widetilde {\mathfrak S}\cdot {\bf e}_1)\cdot {\bf e}_1\geq 0$ we get $(\widetilde {\mathfrak S}\cdot {\bf e}_1)\cdot {\bf e}_1= 0$ on the whole hyperplane $\{{\bf x}: \ x_1=a\}$. As $\widetilde {\mathfrak S}\geq 0$, we deduce that ${\mathfrak S}\cdot {\bf e}_1= 0$ on this hyperplane. \medskip \noindent {\bf Step 2:} We remark that, if $\widetilde {\mathfrak S}\geq 0 \in C^1({\mathbb R}^d, {\mathbb R}^{d^2}_{sym})$ satisfies $\nabla \cdot\widetilde {\mathfrak S}={0}$ on the whole space ${\mathbb R}^d$, then by applying the result of step 1 to every pair $({\bf m},a)\in S^{d-1}\times {\mathbb R}^d$ we get $\widetilde {\mathfrak S}={0}$ everywhere. We also remark that this result holds true for any space dimension $d$. \medskip \noindent {\bf Step 3:} Going back to the case where $\widetilde {\mathfrak S}\geq 0 \in C^1({\mathbb R}^d, {\mathbb R}^{d^2}_{sym})$ satisfies $\nabla \cdot\widetilde {\mathfrak S}={0}$ only on the half space $P^+_{{\bf m},a}$, we apply Step 1 to every pair $({\bf m},t)$ with $t>a$ and we deduce that $\widetilde {\mathfrak S}\cdot {\bf m}=0$ on the hyperplane $\Sigma_t:=\{{\bf x}:\ {\bf x}\cdot {\bf m}=t\}$. 
The restriction of $\widetilde {\mathfrak S}$ to this hyperplane is tangential and divergence free. For almost every $t>a$ we have $\int_{\Sigma_t}|\widetilde {\mathfrak S}|<+\infty$, so, by applying the result of Step 2 in ${\mathbb R}^{d-1}$, we deduce that $\widetilde {\mathfrak S}$ vanishes on almost all hyperplanes $\Sigma_t$. Hence, $\widetilde {\mathfrak S}$ vanishes on $P^+_{{\bf m},a}$.
\medskip

\noindent {\bf Step 4:} In order to extend this result to measures, we introduce a smooth mollifier $\rho^\eta$ and ${\mathfrak S}^\eta := {\mathfrak S} \star \rho^\eta$. Clearly ${\mathfrak S}^\eta$ belongs to $L^1\cap C^{\infty}({\mathbb R}^{d^2}_{sym})$, ${\mathfrak S}^\eta\geq 0$ and, for any $t>a$ and for $\eta$ small enough, we have $\nabla \cdot{\mathfrak S}^\eta={0}$ on $P^+_{{\bf m},t}$. By applying the result of Step 3 to ${\mathfrak S}^\eta$ we get ${\mathfrak S}^\eta={0}$ on $P^+_{{\bf m},t}$, and by passing to the limit $\eta\to 0$ we get ${\mathfrak S}={0}$ on $P^+_{{\bf m},t}$. The theorem is proven by finally passing to the limit $t\to a$.
\end{proof}

\subsection{Link with the Michell problem}\label{Michell}
A very old problem in Mechanical Engineering consists in minimizing the total volume of a network of elastic bars ({\em trusses}) while the resistance to a given load remains constant. It reduces to a linear programming problem which, in our notation, reads:
\begin{equation} \inf_\sigma \left\{ \sum_{i} \sum_j |\sigma_{ij}| \|{\bf x}_j-{\bf x}_i\| \ :\ {\bf f}_i + \sum_{j\not=i} \sigma_{ij} \frac {{\bf x}_j-{\bf x}_i}{\|{\bf x}_j-{\bf x}_i\|} =0 \ ,\ \forall i \right\}
\end{equation}
The classical dual formulation is the following maximization problem on deformations:
\begin{equation} \sup_{{\bf U}} \left\{ \sum_i{\bf f}_i\cdot{\bf u}_i \ :\ |({\bf u}_i-{\bf u}_j)\cdot({\bf x}_i-{\bf x}_j)|\le \|{\bf x}_i-{\bf x}_j\|^2 \ ,\ \forall i\not=j \right\}
\end{equation}
As no assumption is made on the number of bars, this belongs to the class of topological optimization problems and it is well known that, in general, no optimal solution exists. Indeed, during the optimization process, the number of bars may increase to infinity, thus leading to diffuse structures. The crucial contribution of Michell \cite{Michell:1904:LEM} in the early 1900s was to formulate a generalized version (called the {\em Michell problem}) in order to take into account all possible structures which may appear in the limit. In the generalized version, attention is focused on the stress carried by the structure rather than on its geometry. Michell stated a duality principle and obtained the optimality conditions on the stress and strain tensors: they share the same eigenvectors (principal directions) and the eigenvalues of the strain tensor have a fixed absolute value. Moreover, Michell noticed that, in the two-dimensional case, when the eigenvalues of the strain tensor have opposite sign and when the eigenvector fields are smooth enough to define stream lines (called ``{\em lines of principal action}''), then these lines constitute a so-called {\em Hencky-net}. This is a family of orthogonal curves which represents the limit of the families of bars through the optimization process. We refer to \cite{Bouchitte:2008:MTL} for a detailed mathematical study where optimality conditions for a generalized truss are established in a rigorous way.
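Before passing to the generalized formulation, we note that the discrete problem above is easy to experiment with numerically. The sketch below (our own code, not part of the original text; it uses the open-source \texttt{scipy.optimize.linprog} solver and takes the given points as the only candidate nodes of a complete ``ground structure'') solves the volume-minimization linear program by splitting each bar force into non-negative tension and compression parts.
\begin{verbatim}
import itertools
import numpy as np
from scipy.optimize import linprog

def min_volume_truss(X, F):
    # Minimize sum_{i<j} |s_ij| * ||x_j - x_i|| subject to the equilibrium
    # conditions f_i + sum_j s_ij (x_j - x_i)/||x_j - x_i|| = 0 at each node,
    # writing s_ij = t_ij - c_ij with t_ij, c_ij >= 0.
    N, d = X.shape
    pairs = list(itertools.combinations(range(N), 2))
    lengths = np.array([np.linalg.norm(X[j] - X[i]) for i, j in pairs])
    B = np.zeros((N * d, len(pairs)))       # force on all nodes per unit tension
    for p, (i, j) in enumerate(pairs):
        e = (X[j] - X[i]) / lengths[p]
        B[i * d:(i + 1) * d, p] = e         # unit tension pulls node i towards j
        B[j * d:(j + 1) * d, p] = -e        # and node j towards i
    cost = np.concatenate([lengths, lengths])
    A_eq = np.hstack([B, -B])               # equilibrium: B (t - c) = -F
    b_eq = -F.reshape(-1)
    return linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
\end{verbatim}
Constraining the compression variables to zero turns the same linear program into a feasibility test for a web with all elements under tension, which is the situation addressed by Theorem~\ref{mainthm1}.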
The generalized stress formulation derived by Michell in dimension $d=2$ reads as follows: \begin{equation} \inf_{\mathfrak S} \left\{ \int \rho_0({\mathfrak S}) \ :\ \nabla \cdot {\mathfrak S} + {\cal F} =0 \right\} \end{equation} where $\rho_0({\mathfrak S}):=|\lambda_1({\mathfrak S})|+|\lambda_2({\mathfrak S})|$ denotes the sum of the moduli of the two principal values of ${\mathfrak S}$. The corresponding dual problem reads: \begin{equation} \sup_{\widetilde{{\bf u}}}\{<{\cal F},\widetilde{{\bf u}}>: \ |({\widetilde{{\bf u}}}({\bf x})-{\widetilde{{\bf u}}}({\bf y}))\cdot ({\bf x}-{\bf y})|\leq |{\bf x}-{\bf y}|^2\} \end{equation} An admissible pair $({\bf u},{\mathfrak S})$ is then optimal if and only if the following extremality relation holds (see \cite{Bouchitte:2008:MTL}) \begin{equation} <{\cal F},{\bf u}>\ =\ \int \rho_0({\mathfrak S})\,{\rm d}{\bf x}. \eeq{Sopti} We can now emphasize the link between our problem and the Michell problem described above. It turns out indeed that admissible stress states associated with a loading ${\bf F}$ in the cone ${\cal A}_{{\bf X}}^*$ are solutions of the Michell problem. \begin{theorem} Let ${\cal F}$ be a bounded vector-measure with compact support $K$ and ${\mathfrak S}\geq 0$ in $M^{d^2}_{sym}$ such that $\nabla \cdot{\mathfrak S}+{\cal F}={0}$. Then \noindent i) ${\mathfrak S}$ is a solution to the Michell problem for ${\cal F}$, \noindent ii) any solution $\widetilde {\mathfrak S}$ to the Michell problem for ${\cal F}$ satisfies $\widetilde {\mathfrak S} \geq 0$. \end{theorem} \begin{proof} Clearly ${\bf u}({\bf x}):={\bf x}$ is admissible for the dual problem. As $-\nabla \cdot{\mathfrak S}={\cal F}$, the following relation holds: \begin{equation} <{\cal F},{\bf x}>\ =\ <-\nabla \cdot{\mathfrak S},{\bf x}>\ =\ <{\mathfrak S},\nabla {\bf x}>\ =\ \int \mathrm{Tr}({\mathfrak S}). \eeq{S5} If we assume that ${\mathfrak S}\geq 0$, then Michell's dual energy $\rho_0({\mathfrak S})$ coincides with $\mathrm{Tr}({\mathfrak S})$ as both $\lambda_1({\mathfrak S})$ and $\lambda_2({\mathfrak S})$ are non-negative. In particular, we have $\int \rho_0({\mathfrak S}) =\int \mathrm{Tr}({\mathfrak S})$, hence by \eqref{S5} the pair $({\bf x},{\mathfrak S})$ satisfies relation \eqref{Sopti}. The optimality of $({\bf x},{\mathfrak S})$ follows. This proves point i). To prove point (ii), it is enough to notice that, for any other solution $\widetilde {\mathfrak S}$ of the primal problem, the couple $({\bf x},\widetilde{\mathfrak S})$ is optimal and thus must satisfy the optimality condition \eqref{Sopti} so that, by \eqref{S5}, one has $$ <{\cal F},{\bf x}>=\int \mathrm{Tr}(\widetilde {\mathfrak S})\,{\rm d}{\bf x}=\int \rho_0(\widetilde {\mathfrak S})\,{\rm d}{\bf x}. $$ As $\rho_0(\widetilde {\mathfrak S})\geq \mathrm{Tr}(\widetilde {\mathfrak S})$, we deduce that $\rho_0(\widetilde {\mathfrak S})= \mathrm{Tr}(\widetilde {\mathfrak S})$ and so $\widetilde {\mathfrak S}\geq 0$. \end{proof} \section{Theorem \ref{mainthm1} in two dimensions}\label{Mechanical_inter} To get a better insight on the mechanical interpretation of the statement of Theorem \ref{mainthm1}, we focus on the two-dimensional case which was also analyzed in \cite{Milton:2017:SFI}. In particular, we will show that when the points ${\bf x}_1,{\bf x}_2, \ldots, {\bf x}_N$ are vertices of a convex polygon, our condition \eq{W9} is a generalization of the condition \eqref{I1} proved in \cite{Milton:2017:SFI}. 
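When the terminal points are the vertices of a convex polygon, condition \eqref{I1} can be checked directly. The following sketch (our own code; it assumes the vertices and forces are supplied in clockwise order) verifies the inequality for every sequence of consecutive vertices.
\begin{verbatim}
import numpy as np

def satisfies_I1(X, F, tol=1e-12):
    # Condition (I1): for every sequence of consecutive vertices
    # (x_i, ..., x_j), taken clockwise around the polygon,
    # sum_{k=i}^{j} det(x_k - x_i, f_k) >= 0.
    N = len(X)
    for i in range(N):
        for length in range(1, N + 1):
            total = 0.0
            for m in range(length):
                k = (i + m) % N
                v = X[k] - X[i]
                total += v[0] * F[k][1] - v[1] * F[k][0]   # det(x_k - x_i, f_k)
            if total < -tol:
                return False
    return True
\end{verbatim}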
\subsection{Mechanical interpretation of the extreme rays of the cone ${{\cal A}_{{\bf X}}}$ } Given a set of points ${\bf x}_1,{\bf x}_2, \ldots, {\bf x}_N$ that are vertices numbered clockwise of a convex polygon, let us consider the following displacement field ${\bf u}_k^{(j,i)}$ defined by: \begin{equation} {\bf u}_k^{(j,i)} = \begin{cases} -{\bf R}_\perp({\bf x}_k-{\bf x}_j),\quad \text{ for }k=j,j+1,\ldots, i-1, \\ 0\quad\text{ otherwise }, \end{cases} \eeq{E2.1} where \begin{equation} {\bf R}_\perp=\left[\begin{array}{cc} 0& 1\\-1 &0 \end{array} \right] \eeq{Rper} is the matrix for a 90$^\circ$ clockwise rotation and, if necessary, we identify $k$ with $k-N$. We call ${\bf u}_k^{(j,i)}$ a ``clam-shell'' displacement (see \fig{3}) as it corresponds to the infinitesimal rotation between two non-overlapping subpolygons the polygon of terminal points can be divided into: by keeping fixed one of the two subpolygons, the rotation of the other opens the clam. Given any points ${\bf x}_k$ and ${\bf x}_\ell$ on opposite sides of the clam (where ${\bf u}_k^{(j,i)}\ne 0$, while ${\bf u}_\ell^{(j,i)}=0$) we have \begin{equation} ({\bf u}_k^{(j,i)}-{\bf u}_\ell^{(j,i)})\cdot({\bf x}_k-{\bf x}_\ell)=-[{\bf R}_\perp({\bf x}_k-{\bf x}_j)]\cdot({\bf x}_k-{\bf x}_\ell)\geq 0, \eeq{E2.3} where the last inequality follows from the convexity of the polygon and the clockwise numbering of the points. Thus, this clam{-}shell movement is an admissible displacement as it satisfies \eqref{W10}. This implies that ${\bf F}$ satisfies the constraints \eqref{W9}, that is, \begin{equation} 0\leq \sum_{k=1}^N {\bf f}_k\cdot{\bf u}_k^{(j,i)}=\sum_{k=j}^{i-1}({\bf x}_k-{\bf x}_j)\cdot[{\bf R}_\perp{\bf f}_k], \eeq{E2.4} which are precisely the same as the constraints \eq{I1} that characterize ${{\cal A}_{{\bf X}}}^*$, that is the set of all the loadings ${\bf F}$ at ${\bf X}$ which can be supported by a finite web. Thus, in this case of the ${\bf x}_i$ forming the vertices of a convex polygon, the displacements ${\bf U}$ correspond precisely (up to an infinitesimal rigid body motion) to these ``clam-shell'' movements, and do not include any other movements. \begin{figure}[!ht] \centering \includegraphics[width=0.6\textwidth]{clam2d.pdf} \caption{Given a set of points ${\bf x}_i$ that form the vertices of a convex polygon, as in (a), an extremal infinitesimal movement is {obtained} by breaking the polygon into two non-overlapping subpolygons connected at one vertex ${\bf x}_j$, as in (b). The ``clam-shell'' movement then consists of fixing one subpolygon, in this example the lower triangle, and infinitesimally rotating the other subpolygon anticlockwise about the point ${\bf x}_j$, so the ``clam'' opens slightly, thus moving any vertex ${\bf x}_k$ on the upper subpolygon away from any vertex ${\bf x}_\ell$ on the lower subpolygon. } \labfig{3} \end{figure} More generally, to check the criterion \eq{W9} it suffices to check it for those ${\bf U}$ corresponding to the extreme rays ${\bf U}^m$ of the cone ${{\cal A}_{{\bf X}}}$. These rays are perpendicular to the ``faces'' of the polar cone $-{{\cal A}_{{\bf X}}}^*$ (see Figure \fig{2}). \begin{figure}[!ht] \centering \includegraphics[width=0.6\textwidth]{admisswebcones.pdf} \caption{Schematic illustration of the cone ${\cal A}_{{\bf X}}$ and the polar cone $-{\cal A}_{{\bf X}}^*$, the negative of the dual cone ${\cal A}_{{\bf X}}^*$. 
Here the ${\bf F}^{(i,j)}$ are the extreme rays of the dual cone ${\cal A}_{{\bf X}}^*$, while the ${\bf U}^m$ are the extreme rays of the cone ${\cal A}_{{\bf X}}$.} \labfig{2} \end{figure} We use an integer $m=1,2,\ldots,M$ to index these rays. For any given $m$ there exist associated loadings ${\bf F}^m_h$, $h=1,2,\ldots,D-1$, all perpendicular to the extreme ray indexed by $m$. Here, for a given $m$, each value of $h$ signifies a pair $(i,j)=(i(h,m),j(h,m))$ such that ${\bf F}^{m}_h={\bf F}^{(i,j)}$, and linear combinations of the ${\bf F}^m_h$, $h=1,2,\ldots,D-1$, with positive weights generate the ``face'' perpendicular to the extreme ray of the cone ${{\cal A}_{{\bf X}}}$. Let ${\bf U}^m=({\bf u}_1^m,{\bf u}_2^m,\ldots,{\bf u}_N^m)$ be on this extreme ray. Then, we have \begin{equation} {\bf F}^{m}_h\cdot{\bf U}^m=0,\text{ for }h=1,2,\ldots,D-1. \eeq{RT5} In particular, if ${\bf F}^{m}_h={\bf F}^{(i(h,m),j(h,m))}$, then the orthogonality implies that \begin{equation} ({\bf u}_{i(h,m)}^m-{\bf u}_{j(h,m)}^m)\cdot({\bf x}_{i(h,m)}-{\bf x}_{j(h,m)})=0. \eeq{RT6} If we think of ${\bf U}^m$ as corresponding to a displacement, then this restriction says that (within the infinitesimal displacements framework) there is no change in distance between ${\bf x}_{i(h,m)}$ and ${\bf x}_{j(h,m)}$: the constraint is equivalent to only allowing those deformations compatible with rigid rods joining the pairs of points $({\bf x}_{i(h,m)},{\bf x}_{j(h,m)})$ for $h=1,2,\ldots,D-1$. After eliminating the trivial infinitesimal rigid body motions (translations and rotations) from ${\bf U}^m$, by requiring it to satisfy, for instance, \begin{equation}\sum_{i=1}^N{\bf u}_i={\bf 0}\,,\quad \sum_{i=1}^N{\bf x}_i\cdot{\bf A}{\bf u}_i={\bf 0}\eeq{AL3} with ${\bf A}$ any $d\times d$ antisymmetric matrix, there still must be one degree of freedom associated with the infinitesimal motion corresponding to ${\bf U}^m$. The equality $({\bf u}_i-{\bf u}_j)\cdot({\bf x}_i-{\bf x}_j)=0$ cannot hold for all $i\ne j$ as this then would correspond to a trivial overall (infinitesimal) rigid body motion, which must be zero by \eqref{AL3}. To fix this one degree of freedom, we can impose the normalization condition that ${\bf U}^m\cdot{\bf X}=1$. To see that this normalization involves no loss of generality, note that for any ${\bf U}\in{\cal A}_{{\bf X}}$, \begin{equation} 0\leq \sum_{i,j=1}^N({\bf u}_i-{\bf u}_j)\cdot({\bf x}_i-{\bf x}_j)=2N\sum_{i=1}^N{\bf u}_i\cdot{\bf x}_i-2\Big(\sum_{i=1}^N{\bf u}_i\Big)\cdot\Big(\sum_{j=1}^N{\bf x}_j\Big)=2N\,{\bf U}\cdot{\bf X}, \eeq{RT7} where, to get the last equality, we use the first condition in \eq{AL3}. Furthermore, ${\bf U}\cdot{\bf X}=0$ implies that ${\bf U}$ is a rigid body motion, which we excluded. So ${\bf U}\cdot{\bf X}>0$, and by replacing ${\bf U}$ with ${\bf U}/({\bf U}\cdot{\bf X})$ we see that we can assume ${\bf U}\cdot{\bf X}=1$. To conclude, we showed that if the terminal points are the vertices of a convex polygon, then all the extremal displacements correspond to clam-shell movements. Notice, however, that if there is at least one terminal point inside the convex hull of the terminal points, then besides clam-shell movements there are also other types of displacements, as shown in Figure \fig{Clam_shell_Arrowhead}. 
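As a small illustration of the preceding discussion, the following sketch (hypothetical helper code with our own data, not part of the original argument) constructs the clam-shell displacements of \eq{E2.1} for a convex polygon whose vertices are listed clockwise, verifies the admissibility inequalities \eqref{W10}, and evaluates the constraints \eq{W9} for a sample balanced loading:
\begin{verbatim}
import numpy as np

# Build the clam-shell displacements of eq. (E2.1) for terminal points listed
# clockwise around a convex polygon, check (u_k - u_l).(x_k - x_l) >= 0 for
# every pair, and evaluate sum_k f_k . u_k for a sample balanced loading.
R_perp = np.array([[0.0, 1.0],
                   [-1.0, 0.0]])   # 90-degree clockwise rotation

X = np.array([[0.0, 0.0],          # vertices of a convex polygon (a square),
              [0.0, 1.0],          # numbered clockwise
              [1.0, 1.0],
              [1.0, 0.0]])
N = len(X)

def clam_shell(j, i):
    """Rotate vertices j, j+1, ..., i-1 (cyclically) about x_j; fix the rest."""
    U = np.zeros_like(X)
    k = j
    while k % N != i % N:
        U[k % N] = -R_perp @ (X[k % N] - X[j])
        k += 1
    return U

def admissible(U, tol=1e-12):
    return all((U[k] - U[l]) @ (X[k] - X[l]) >= -tol
               for k in range(N) for l in range(N))

F = np.array([[-1.0, -1.0],        # balanced loading pulling each vertex
              [-1.0,  1.0],        # of the square outwards along a diagonal
              [ 1.0,  1.0],
              [ 1.0, -1.0]])

for j in range(N):
    for i in range(N):
        if i != j:
            U = clam_shell(j, i)
            assert admissible(U), (j, i)            # inequality (W10)
            assert np.sum(F * U) >= -1e-12, (j, i)  # inequality (W9)
print("all clam-shell displacements are admissible; the loading passes (W9)")
\end{verbatim}
For this square the loading is indeed supportable: the two diagonal wires, each under tension $\sqrt{2}$, suffice.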
\begin{figure}[h] \centering \begin{subfigure}{.45\textwidth} \centering \includegraphics[width=\textwidth]{Clam_shell1.pdf} \end{subfigure}% \begin{subfigure}{.45\textwidth} \centering \includegraphics[width=\textwidth]{Clam_shell2.pdf} \end{subfigure} \caption{Consider the following 4 points which form an arrowhead in the plane: ${\bf x_1}=[0;0]$, ${\bf x_2}=[1;1]$, ${\bf x_3}=[0.5;0]$, and ${\bf x_4}=[1;-1]$. The displacements corresponding to the extreme rays of the cone ${{\cal A}_{{\bf X}}}$ for this special geometry can be divided into two groups: clam-shell movements and non-clam-shell movements. In this figure, the black lines represent the rigid wires and the green lines represent the deformable bonds. Clearly, (a) is an example of a non-clam-shell displacement, whereas (b) is an example of a clam-shell displacement (the rigid wire connecting the points ${\bf x_1}$ and ${\bf x_2}$ can rotate infinitesimally about the point ${\bf x_2}$ while the rigid triangle formed by ${\bf x_2}$, ${\bf x_3}$, and ${\bf x_4}$ is held fixed). Notice that the existence of non-clam-shell movements is due to the fact that the point ${\bf x_3}$ does not belong to the convex hull of the terminal points.} \labfig{Clam_shell_Arrowhead} \end{figure} \subsection{Simplifying the two-dimensional web} In two dimensions, in any web, we define a loop to be any polygon whose edges are wires of the web. A minimal loop is one with no other wires inside the polygon. Any web with all pairs of terminal points interconnected, as in Figure \fig{1} (b), can then be replaced by an equivalent one with at most $P$ minimal loops, where $P$ is the number of points ${\bf x}_1,\ldots,{\bf x}_N$ that lie inside the convex hull of these $N$ points. To show this we first place internal nodes where any pair of wires $({\bf x}_i,{\bf x}_j)$ and $({\bf x}_r,{\bf x}_s)$ cross. Then take any minimal loop in the network. The vertices of this loop may include a terminal point ${\bf x}_j$ so long as the net force acting on the loop at ${\bf x}_j$ (including ${\bf f}_j$ and the forces acting on ${\bf x}_j$ due to the tension in the other wires outside the loop) points outside the loop. As the wires are all under tension, the loop is then necessarily convex and exerts forces $-{\bf f}'_m$ at the nodes numbered clockwise around the loop. These forces necessarily satisfy \eq{I1}, and the loop can be replaced by an open web. The number of minimal loops in the web is thus reduced by one. This procedure can be continued until there are at most $P$ minimal loops, each of which is non-convex and thus has among its vertices at least one terminal node ${\bf x}_i$ where the associated force ${\bf f}_i$ points inside the loop. Video1 (see Supplementary Material) shows an example of a web whose wires connect all the terminal points pairwise (initial frame), and how to replace each closed loop with an open web so that the final frame represents an equivalent web in which there is only one minimal loop, due to the presence of one point, ${\bf x_4}$, inside the convex hull of the terminal points. \section{Channeling the stresses in a web}\label{Channelling} In this Section we address the problem of designing wire webs that can support one and only one loading, up to a positive multiplicative factor. Note that, in general, the stress is not distributed in a unique way, as there are many networks, i.e.\ stress patterns, that work. 
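To make the existence question concrete, the following sketch (hypothetical helper code; the function name \texttt{supportable} and the data are our own) relies only on the fact, recalled in the Conclusions, that a supportable loading can always be carried by wires joining the terminal points pairwise, and checks supportability by solving the corresponding linear feasibility problem for the tensions:
\begin{verbatim}
import itertools
import numpy as np
from scipy.optimize import linprog

# A balanced loading F = (f_1,...,f_N) at points X = (x_1,...,x_N) is
# supportable by a web under tension iff there exist tensions t_{ij} >= 0 on
# the wires [x_i, x_j] (all pairs) balancing the forces at every terminal:
#     sum_j t_{ij} (x_j - x_i)/|x_j - x_i| + f_i = 0   for all i.
def supportable(X, F):
    X, F = np.asarray(X, float), np.asarray(F, float)
    N, d = X.shape
    pairs = list(itertools.combinations(range(N), 2))
    A_eq = np.zeros((N * d, len(pairs)))
    for col, (i, j) in enumerate(pairs):
        e = (X[j] - X[i]) / np.linalg.norm(X[j] - X[i])
        A_eq[i * d:(i + 1) * d, col] = e     # wire pulls x_i towards x_j
        A_eq[j * d:(j + 1) * d, col] = -e    # and x_j towards x_i
    res = linprog(np.zeros(len(pairs)), A_eq=A_eq, b_eq=-F.ravel(),
                  bounds=[(0, None)] * len(pairs), method="highs")
    return res.success

# square with each vertex pulled outwards along a diagonal, and the same
# square with the forces reversed (pushing the vertices inwards)
X = [[0, 0], [0, 1], [1, 1], [1, 0]]
F_out = [[-1, -1], [-1, 1], [1, 1], [1, -1]]
F_in = [[1, 1], [1, -1], [-1, -1], [-1, 1]]

print("outward loading supportable:", supportable(X, F_out))  # True
print("inward  loading supportable:", supportable(X, F_in))   # False
\end{verbatim}
The outward loading is carried by the two diagonal wires, while the reversed loading cannot be supported, since wires under tension can only pull the terminal points towards one another.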
In a given wire web, this is possible if one can determine the stress in each wire in a unique way up to an overall proportionality constant: clearly this happens if at each internal node only four non-coplanar wires for the three-dimensional case (or three non-collinear wires for the two-dimensional case) meet and at most three non-coplanar wires meet at any terminal node for the three-dimensional case (or two non-collinear wires for the two-dimensional case). Here we provide a procedure to achieve such a goal so that at each internal node the coordination number is four for the three-dimensional case, or three for the two-dimensional case, while only one wire is connected to each terminal node. Then, it is important to uniquely determine the loading that a web can support. Let us call ${\cal C}^W_{{\bf X}}$ the set of all the loadings ${\bf F}$ that the web $W$ can support at ${\bf X}$. Clearly ${\cal C}^W_{{\bf X}}$ is a convex cone. Indeed, if the web supports the loadings ${\bf F}^1$ and ${\bf F}^2$ with admissible stresses ${\mathfrak S}^1$ and ${\mathfrak S}^2$, respectively, then for any $\lambda_1\geq 0$ and $\lambda_2\geq 0$ it also supports the loading ${\bf F}=\lambda_1{\bf F}^1+\lambda_2{\bf F}^2$ with the associated stress $\lambda_1{\mathfrak S}^1+\lambda_2{\mathfrak S}^2$. Also, by definition, ${\cal C}^W_{{\bf X}}$ must be a subset of the admissible loading cone ${\cal A}_{{\bf X}}^*$. Here we address the converse question: given a convex cone ${\cal C}\subset{\cal A}_{{\bf X}}^*$ can one find a web $W$ such that ${\cal C}^W_{{\bf X}}={\cal C}$? We first focus on the case where ${\cal C}$ is reduced to a single ray and then look for what we call uniloadable webs (that is webs which support only one loading ${\bf F}$, up to a multiplicative constant). If ${\cal C}$ is a ray in the interior of ${\cal A}_{{\bf X}}^*$, then we can prove the existence of a uniloadable web for any ray ${\cal C}$. If ${\cal C}$ does not belong to the interior of ${\cal A}_{{\bf X}}^*$, then the existence of a uniloadable web is not guaranteed. Finally, we will answer the question in case ${\cal C}$ is not simply a ray but a convex cone. Specifically, we will give a positive answer in the following asymptotic sense: one can find a sequence of finite webs $W_n$ such that ${\cal C}^{W_n}_{{\bf X}}$ approaches ${\cal C}$ as $n\to\infty$. For two-dimensional webs where the points ${\bf X}$ are the vertices of a convex polygon, a similar question was addressed by Theorem 2 in \cite{Milton:2017:SFI}, and the proof given here is similar. \subsection{Reducing the number of wires meeting at a point} In two dimensions the procedure for replacing a junction with $M>3$ wires by a localized web in which at most three wires meet is straightforward and described in Section 3 of {\cite{Milton:2017:SFI}.} Briefly, and as illustrated in Figure \fig{5}, one finds the associated Airy stress function in the neighborhood of the junction. This is a convex cone with flat faces with the discontinuity in slope at the edges corresponding to the tension in the wires ( Figure \fig{5}(b)). By cleaving the top of this cone, creating a polygonal face ({ Figure \fig{5}(c)}), one obtains an associated web ({Figure \fig{5}(d)}) supporting the same loading as the original junction ({Figure \fig{5}(a)}), but with at most three wires meeting at every junction. 
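Returning to the uniqueness question raised at the beginning of this section, a web channels the stress in a unique way, up to an overall positive factor, exactly when the tensions compatible with equilibrium at its internal nodes form a one-dimensional family admitting a positive representative. The following sketch (a hypothetical example, not taken from the paper) checks this for the simplest such web, a ``Y'' with a single internal node:
\begin{verbatim}
import numpy as np

# A "Y"-shaped web: three terminal nodes joined to one internal node.
nodes = np.array([[1.0, 0.0],
                  [-0.5, np.sqrt(3) / 2],
                  [-0.5, -np.sqrt(3) / 2],
                  [0.0, 0.0]])             # node 3 is the only internal node
edges = [(0, 3), (1, 3), (2, 3)]
internal = [3]

# Equilibrium matrix at internal nodes: rows = (node, coordinate), cols = edges.
A = np.zeros((2 * len(internal), len(edges)))
for col, (p, q) in enumerate(edges):
    for row, n in enumerate(internal):
        if n in (p, q):
            other = q if n == p else p
            e = (nodes[other] - nodes[n]) / np.linalg.norm(nodes[other] - nodes[n])
            A[2 * row:2 * row + 2, col] = e

# Self-equilibrated tension states = null space of A.
_, s, Vt = np.linalg.svd(A)
null_dim = int(np.sum(s < 1e-10)) + len(edges) - len(s)
t = Vt[-1]
t = t if t.sum() > 0 else -t               # choose the sign giving tensions > 0
print("dimension of tension kernel:", null_dim)   # 1  -> uniloadable
print("tensions (up to scale):", np.round(t, 3))  # all equal and positive
for i in range(3):                         # the unique supportable loading
    e = (nodes[i] - nodes[3]) / np.linalg.norm(nodes[i] - nodes[3])
    print("direction of f at terminal", i + 1, ":", np.round(e, 3))
\end{verbatim}
Since the kernel is one-dimensional and spanned by positive tensions, this little web is uniloadable: it supports only the loading pulling each terminal outwards along its wire, and positive multiples of it.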
\begin{figure}[!ht] \centering \includegraphics[width=0.6\textwidth]{multcross2d.pdf} \caption{In two dimensions a junction of many wires at an internal node as in (a) can be replaced by a web in which at most three wires {meet} at every junction. First one determines an associated convex Airy function in the vicinity of the crossing, as in (b). Then one cleaves it by a plane as in (c), and the associated web, as in (d), then has at most three wires meeting at every junction. } \labfig{5} \end{figure} In three dimensions the procedure we use for replacing a junction with $M>4$ wires is more complicated. The steps are illustrated in Figure \fig{6}, where we begin in (a) with a junction where $M=6$ wires meet. First we pick 4 of the wires (those marked in blue in Figure \fig{6}(a)), and points ${\bf x}_1$, ${\bf x}_2$, ${\bf x}_3$ and ${\bf x}_4$ on the four wires such that the tetrahedron $T$ formed by ${\bf x}_1$, ${\bf x}_2$, ${\bf x}_3$ and ${\bf x}_4$ encloses the junction, which without loss of generality can be taken to be at the origin ${\bf x}_0=0$ ({this} requires that the four wires be chosen so that they do not all lie on one side of any plane through the origin{: balance} of forces at the origin ensures that at least one choice of such four wires exists). The tensions in these four wires generally do not balance. However, consider the ``tensegrity network'' consisting of rods from the origin to the points ${\bf x}_1$, ${\bf x}_2$, ${\bf x}_3$ and ${\bf x}_4$ under compression, balanced by wires along the edges of the tetrahedron $T$ that are under tension. Example \ref{Ex1} { (see Section\ref{Introduction})} gives the explicit solution for the compressive forces in the rods and the tensions in the wires in this ``tensegrity network''. We next superimpose this ``tensegrity network'' on our junction with the tensions in the ``tensegrity network'' scaled so after superposition the tension near the junction in one wire cancels, while the tension in the other wires remain nonnegative, as sketched in Figure \fig{6}(b). We thus obtain a web under tension where the number of wires joined to the origin is now $M-1$ or less. However, we typically have also created junctions, at some of the points ${\bf x}_1$, ${\bf x}_2$, ${\bf x}_3$ and ${\bf x}_4$ where $5$ wires meet, like those in Figure \fig{6}(c). These junctions are rather special in that one wire goes straight through the junction (but typically has different tensions on opposite sides of the junction). These junctions are then locally replaced by networks in which at most four wires meet as illustrated in Figure \fig{6}(d). Lemma \ref{5wires} below guarantees this can be done. The last step is to successively repeat the argument until the junction at the origin has at most 4 wires. \begin{figure}[!ht] \centering \includegraphics[width=0.6\textwidth]{multcross.pdf} \caption{Steps in the replacement of a junction of many wires under given balanced tensions, with a network localized around the junction such that at most 4 wires meet at any junction in the new network, and the network still supports the same tensions in the wires meeting the network.} \labfig{6} \end{figure} \begin{lemma}\label{5wires}{\bf The five wires problem} \newline Consider five wires with tensions $T_i>0$, directions ${\bf v}_i$ and joining at the origin ${\bf x}={\bf 0}$. Assume that the three first directions are independent while ${\bf v}_5=- {\bf v}_4$. 
Then we can replace these wires by a web in tension such that at each of its nodes, no more than 4 wires are joining. \end{lemma} \begin{proof} We can assume without loss of generality that $T_5=\alpha T_4$ so that balance of forces implies that $T_1 {\bf v}_1 +T_2 {\bf v}_2+T_3 {\bf v}_3+(1-\alpha )T_4 {\bf v}_4=0$. Set $t>0$ and $s> \frac {\alpha}{3}$. Set also ${\bf x}_1=t T_1 {\bf v}_1$, ${\bf x}_2=t T_2 {\bf v}_2$, ${\bf x}_3=t T_3 {\bf v}_3$, ${\bf x}_4=t\, s T_4 {\bf v}_4$ and ${\bf x}_5=-t\, r T_4 {\bf v}_4$ where $r:=\frac{s}{3s-\alpha}$. As the real parameter $t$ can be arbitrarily chosen provided it is small enough, then we can avoid creating new nodes where a node already exists or where a wire lies. The parts of the wires which lie between the points ${\bf x}_i$ and the origin are replaced by six wires $[{\bf x}_i,{\bf x}_4]$, $[{\bf x}_i,{\bf x}_5]$ ($i\in\{1,2,3\}$). When these wires have respective (positive) tensions \begin{equation} T_{i4}=\frac{r}{t(r+s)}{\|{\bf x}_i-{\bf x}_4 \|},\qquad T_{i5}=\frac{s}{t(r+s)}{\|{\bf x}_i-{\bf x}_5 \|}, \eeq{5W.1} the web is in equilibrium. Indeed at each node ${\bf x}_i$ for $i\in\{1,2,3\}$ we have \begin{equation} T_{i4} \frac{{\bf x}_4-{\bf x}_i}{\|{\bf x}_i-{\bf x}_4 \|} + T_{i5} \frac{{\bf x}_5-{\bf x}_i}{\|{\bf x}_i-{\bf x}_5 \|} + T_i {\bf v}_i = \frac{rs-sr}{t(r+s)}t T_4 {\bf v}_4 + \frac{r+s}{t(r+s)}(-t T_i {\bf v}_i) + T_i {\bf v}_i =0. \eeq{5W.2} Moreover, at nodes ${\bf x}_4$ and ${\bf x}_5$ we have \begin{equation} \sum_{i=1}^3 \Big( T_{i4} \frac{{\bf x}_i-{\bf x}_4}{\|{\bf x}_i-{\bf x}_4 \|}\Big) +T_4 {\bf v}_4 = \frac{1}{r+s} \Big( r\alpha +s- 3rs\Big) T_4 {\bf v}_4 =0. \eeq{5W.3} \begin{eqnarray} \sum_{i=1}^3 \Big( T_{i5} \frac{{\bf x}_i-{\bf x}_5}{\|{\bf x}_i-{\bf x}_5 \|}\Big) +T_5 {\bf v}_5 = \frac{1}{r+s} \Big( - s-\alpha r + 3rs\Big) T_4 {\bf v}_4 =0. \eeqa{5W.4} \end{proof} \subsection{Uniloadable webs}\label{Uniloadable_webs} Given a ray ${\cal C}\subset{\cal A}_{{\bf X}}^*$, we want to determine whether there exists a uniloadable web which can support the corresponding loading $\lambda{\bf F}$, with $\lambda\geq 0$. In case ${\cal C}$ does not belong to the interior of the cone ${\cal A}_{{\bf X}}^*$, then the existence of a uniloadable web is not guaranteed. When ${\bf F}$ belongs to the interior of ${\cal A}_{{\bf X}}^*$, then we prove the existence of a uniloadable web supporting such a loading (up to a multiplicative constant). \subsubsection{Stuck loadings}\label{secnonstuck} Let us start with the following remark : let ${\bf F}=({\bf f}_1,\dots,{\bf f}_N)$ belong to ${\cal A}_{{\bf X}}^*$ with ${\bf X}=({\bf x}_1,\dots,{\bf x}_N)$. Then, it also belongs, for any $t \geq 0$, to the admissible loading cone ${\cal A}_{{\bf X}+t{\bf F}}^*$ of the shifted points $({\bf x}_1+t{\bf f}_1,\dots, {\bf x}_N+t{\bf f}_N)$. Indeed, taking a web $W$ which supports ${\bf F}$ at ${\bf X}$ and adding to $W$ all the wires $[{\bf x}_i,\widetilde {\bf x}_i]$ leads clearly to a web supporting ${\bf F}$ at ${\bf X}+t{\bf F}$. When $t<0$ things are less clear. We say that ${\bf F}$ at ${\bf X}$ is an {\it unstuck loading} if there exists $\varepsilon>0$ such that $$\exists \varepsilon >0,\ \forall t<\varepsilon,\ {\bf F}\in {\cal A}_{{\bf X}-t{\bf F}}^*,$$ otherwise we say that ${\bf F}$ is a {\it stuck loading}. 
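The following sketch (hypothetical helper code, with supportability again tested through the pairwise-wire linear program) illustrates these notions numerically: the terminals are shifted forward along the loading, which preserves supportability as remarked above, and backwards along the loading, which fails for the loading below (essentially that of Example \ref{Example3}, up to a relabelling of the points), consistent with that loading being stuck:
\begin{verbatim}
import itertools
import numpy as np
from scipy.optimize import linprog

def supportable(X, F):
    """Feasibility of nonnegative tensions on all pairwise wires balancing F."""
    X, F = np.asarray(X, float), np.asarray(F, float)
    N, d = X.shape
    pairs = list(itertools.combinations(range(N), 2))
    A = np.zeros((N * d, len(pairs)))
    for col, (i, j) in enumerate(pairs):
        e = (X[j] - X[i]) / np.linalg.norm(X[j] - X[i])
        A[i*d:(i+1)*d, col], A[j*d:(j+1)*d, col] = e, -e
    res = linprog(np.zeros(len(pairs)), A_eq=A, b_eq=-F.ravel(),
                  bounds=[(0, None)] * len(pairs), method="highs")
    return res.success

# arrowhead terminals of Figure (Clam_shell_Arrowhead) with a balanced loading
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.0], [1.0, -1.0]])
F = np.array([[-1.0, 0.0], [0.75, 1.0], [-0.5, 0.0], [0.75, -1.0]])

print("supportable at X        :", supportable(X, F))            # True
print("supportable at X + 0.1 F:", supportable(X + 0.1 * F, F))  # True
print("supportable at X - 0.1 F:", supportable(X - 0.1 * F, F))  # False
\end{verbatim}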
A particular case of stuck loadings, referred to as {\it completely stuck loadings}, is when there exist some $k\in\{1,\dots,N\}$ and $\varepsilon >0$ such that, $$\forall\, 0<t<\varepsilon, \ {\bf F}\not \in {\cal A}_{\widetilde {\bf X}}^* \ \text{ with }\ \widetilde {\bf X}=({\bf x}_1,\dots,{\bf x}_{k-1},{\bf x}_k-t{\bf f}_k,{\bf x}_{k+1},\dots,{\bf x}_N).$$ The stuck or completely stuck conditions can only occur when the loading ${\bf F}$ is a ray on the boundary of the cone of admissible loadings ${\cal A}_{{\bf X}}^*$, as we will prove in the next subsection that any ${\bf F}$ in the interior of ${\cal A}_{{\bf X}}^*$ is unstuck. For a better insight, let us consider two examples of completely stuck loadings. \begin{Aexa}\label{Example3} Consider forces ${\bf f}_1=[-1;0]$, ${\bf f}_2=[3/4;1]$, ${\bf f}_3=[3/4;-1]$, and ${\bf f}_4=[-1/2;0]$ at the four points considered in Figure \fig{Clam_shell_Arrowhead}, here relabeled as ${\bf x}_1=[0;0]$, ${\bf x}_2=[1;1]$, ${\bf x}_3=[1;-1]$, and ${\bf x}_4=[1/2;0]$ (the labels of the last two points are swapped with respect to that figure). They are supported by the web in Figure \fig{4}(a). \begin{figure}[!ht] \centering \includegraphics[width=0.7\textwidth]{stuck.pdf} \caption{Examples of webs with ``completely stuck loadings'' in two dimensions. We argue in Example \ref{Example3} that (a) is the unique web that supports the given forces at the terminal nodes. A wire can be attached to the node ${\bf x}_4$, as in (b), in order to uniquely define the associated Airy stress function, up to the addition of an affine function. This Airy stress function then lies above the triangular pyramidal Airy stress function associated with the net forces applied at the three points ${\bf x}_1$, ${\bf x}_2$, and ${\bf x}_3$, as in (c). At the same time it must have a discontinuity in slope across the line between ${\bf x}_1$ and ${\bf x}_4$ corresponding to the tension in the wire joining those two points. This then uniquely determines the Airy stress function (modulo the addition of an affine function) and thus uniquely determines the web. Figure (d), as discussed in Example \ref{Example4}, provides another example of a web with a ``completely stuck loading'' where the web is uniquely determined once the loading is specified.} \labfig{4} \end{figure} Our objective is to show that this is the only web supporting these forces, thus implying that the loading is ``completely stuck''. First move the force ${\bf f}_4=[-1/2;0]$ back to the origin, by attaching a wire joining the origin to ${\bf x}_4$, as in Figure \fig{4}(b). Then the net force acting on the origin is $[-3/2;0]$ and is balanced by the forces ${\bf f}_2$ and ${\bf f}_3$. The open web in Figure \fig{4}(c) supports these three forces, and the associated Airy function (up to addition of an affine function, see also \cite{Fraternali:2014:OTC}) is \begin{equation} \phi_L({\bf x})=\max\{-3|x_2|/4,1/4-x_1 \}, \eeq{SW1} where $x_1$ and $x_2$ are the coordinates of ${\bf x}$. Now suppose we have any web supporting the four forces ${\bf f}_1$, ${\bf f}_2$, ${\bf f}_3$, and ${\bf f}_4$. Necessarily this web will be confined within the convex hull of the three points ${\bf x}_1,{\bf x}_2,{\bf x}_3$. To define the associated Airy stress potential, in say all of ${\mathbb R}^2$, we need to move the four forces to infinity by attaching semi-infinite wires to the four points, each directed along the corresponding applied force (so that the wire attached to ${\bf x}_1$ overlaps the wire attached to ${\bf x}_4$ to the left of the origin). 
The existence of these wires implies a discontinuity of slope of the Airy stress function across them, matching the tension in the wire. Now consider the Airy stress function in the vicinity of the point ${\bf x}_4$. In particular, take the tangent plane at a point ${\bf x}_0$ that approaches ${\bf x}_4$ from the left, remaining infinitesimally above the wire that joins ${\bf x}_4$ to the origin. Similarly, take the tangent plane at a point ${\bf x}_0'$ that approaches ${\bf x}_4$ from the left, remaining infinitesimally below the wire that joins ${\bf x}_4$ to the origin. The maximum of these two tangent planes is the valley function $\phi_V$ that takes the form \begin{equation} \phi_V({\bf x})=-|x_2|/4+ax_1+bx_2+c. \eeq{SW2} As the web is confined to the convex hull of ${\bf x}_1,{\bf x}_2,{\bf x}_3$, the Airy stress function outside this convex hull can be taken to be $\phi_L$ (modulo an affine function that can be set to zero without loss of generality). Convexity of the Airy stress function then implies the inequalities $ \phi_L({\bf x}_i)\geq\phi_V({\bf x}_i)$ for $i=1,2,3$ and $\phi_L({\bf x}_4)\leq\phi_V({\bf x}_4)$. Elementary calculations then show that these inequalities leave no freedom in the choice of $a$, $b$, and $c$: one necessarily has $a=-1/2$ and $b=c=0$. By convexity, the Airy stress function of any web supporting the four forces must be above \begin{equation} \phi({\bf x})=\max\{\phi_L({\bf x}),\phi_V({\bf x})\}=\max\{-3|x_2|/4,(1/4)-x_1,-|x_2|/4-x_1/2\}, \eeq{SW3} and must coincide with it at the four points ${\bf x}_1,\ldots,{\bf x}_4$. But the polyhedral nature of $\phi({\bf x})$ means that any other candidate convex Airy stress function must cleave it in the vicinity of ${\bf x}_4$, which is forbidden. Thus (modulo the addition of an affine function) $\phi({\bf x})$ given by \eq{SW3} is the unique possibility for the Airy stress function, and the web in Figure \fig{4}(a) is the only web that can support the four forces at the four points. \end{Aexa} \begin{Aexa}\label{Example4} A second example of a web with a completely stuck loading is shown in Figure \fig{4}(d). Forces ${\bf f}_1=[2;0]$, ${\bf f}_2=[-2;2]$, ${\bf f}_3=[-4;-6]$, and ${\bf f}_4=[4;4]$ are applied at the points ${\bf x}_1=[0;0]$, ${\bf x}_2=[-1/2;1]$, ${\bf x}_3=[-1/2;-1]$, and ${\bf x}_4=[1;1]$. The unique web that supports them is that drawn in Figure \fig{4}(d), and the associated Airy stress potential (modulo addition of an affine function) is \begin{equation} \phi({\bf x})=\max\{2x_1,-2x_2+1,-|x_2|-(4x_1-1-3x_2)^+\}, \eeq{SW4} where $q^+=\max\{0,q\}$. The proof proceeds similarly to that of the first example. The completely stuck nature of the loadings in our two examples has been verified numerically. \end{Aexa} In two dimensions, when the terminal points ${\bf X}$ are at the vertices of a convex polygon, the existence of an open web supporting the (assumed admissible) loading ${\bf F}$ implies that such a loading ${\bf F}$ is never ``completely stuck''. One may wonder: is a similar result true in three dimensions? The numerical example of Figure \fig{cube} shows, to the contrary, that there exist terminal points ${\bf X}$ at the vertices of a convex polyhedron, and an admissible loading ${\bf F}$, such that this loading is ``completely stuck''. 
This example was found by starting with $N=8$ terminal nodes at the vertices of a cube, taking an admissible loading ${\bf F}$ supported at these points, then moving the terminal points backwards so that for each $j=1,2,\ldots, 8$, ${\bf x}_j$ is replaced by ${\bf x}_j'={\bf x}_j-{\varepsilon}_j{\bf f}_j$, where the ${\varepsilon}_j\geq 0$ are increased until the loading ${\bf F}$ at the terminals ${\bf X}'=({\bf x}'_1,{\bf x}_2',\ldots,{\bf x}'_8)$ is completely stuck, while keeping the terminals ${\bf X}'$ as vertices of a convex polyhedron. \begin{figure}[h!] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\textwidth]{cube_a.pdf} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\textwidth]{cube_b.pdf} \end{subfigure} \caption{We start by applying the forces ${{\bf f}}_i$, given in Table \ref{table:1}, at the 8 points shown in (a), whose coordinates are given by ${{\bf x}}_i$ in Table \ref{table:1}. The points are initially chosen to be the vertices of a cube. Now, move the terminal points backwards so that for each $i=1,2,\ldots, 8$, ${\bf x}_i$ is replaced by ${\bf x}_i'={\bf x}_i-{\varepsilon}_i{\bf f}_i$, where the ${\varepsilon}_i\geq 0$ are increased until the loadings ${{\bf f}}_i$ at the terminals ${\bf x}_i'$, provided in Table \ref{table:1}, is completely stuck, while keeping ${\bf x}_i'$ as vertices of a convex polyhedron. The diagonal lines along the faces in figure (a) are numerical artifacts.} \labfig{cube} \end{figure} \begin{table}[h!] \centering \begin{tabular}{c |c |c |c} \hline $i$ & ${{\bf x}}_i$ & ${{\bf x}}_i'$ & ${{\bf f}}_i$ \\ \hline 1 & [-1;-1;-1] & [-0.27473;-0.31827;-0.46693] & [-0.98151;-0.92259;-0.72140] \\ 2 & [-1;1;-1] & [-0.94086;0.91949;-0.93330] & [-0.46129;0.62796;-0.52022]\\ 3 & [-1;-1;1] & [-0.12837;-0.23757;0.15689] & [-0.74581;-0.65237;0.72140] \\ 4 & [-1;1;1] & [-0.76226;0.74617;0.78972] & [-0.72140;0.77022;0.63807] \\ 5 & [1;-1;-1] & [0.28777;-0.16187;-0.35258] & [0.80474;-0.94700;-0.73151] \\ 6 & [1;1;-1] & [0.27806;-0.12953;-0.35765] & [0.75592;1.1827;-0.67259] \\ 7 & [1;-1;1] & [-0.01231;-0.28017;0.08708] & [0.74581;-0.53033;0.67259] \\ 8 & [1;1;1] & [0.51981;0.62495;0.51177] & [0.60355;0.47140;0.61366] \\ \hline \end{tabular} \caption{Components of the forces ${{\bf f}}_i$ applied at the 8 points shown in Figure \fig{cube} whose coordinates in the original configuration are given by ${{\bf x}}_i$ (Figure \fig{cube}(a)), and in the final configuration by ${{\bf x}}_i'$ (Figure \fig{cube}(b)).} \label{table:1} \end{table} \subsubsection{Unstuck loadings} The aim is to prove that, for a given loading ${\bf F}$ belonging to the interior of the one of admissible loadings ${\cal A}^*_{{\bf X}}$, there exists a uniloadable web supporting only ${\bf F}$ (up to a multiplicative constant). First, we need to prove that the cone ${\cal A}^*_{{\bf X}}$ does not have an empty interior (Lemma \ref{nonempty}). Then, we need to prove that if we perturb slightly the positions of the terminal points, the loading is not stuck (Lemma \ref{lemnonstuck}) and that there exists a connected web with all the wires under tension which supports ${\bf F}$. Finally, we can prove that such a web is a uniloadable web (Theorem \ref{mainthm2}). 
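Before doing so, the following small sketch (hypothetical helper code) illustrates the elementary two-point loadings that will be used in the proof of Lemma \ref{nonempty} below: the loading applying ${\bf x}_i-{\bf x}_j$ at terminal $i$ and ${\bf x}_j-{\bf x}_i$ at terminal $j$ (and zero elsewhere) is balanced, and it is supported by the single wire joining ${\bf x}_i$ to ${\bf x}_j$ under tension $|{\bf x}_i-{\bf x}_j|$:
\begin{verbatim}
import numpy as np

# Pair loadings: x_i - x_j applied at terminal i, x_j - x_i at terminal j.
# Each such loading has zero total force and zero total moment.
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3))            # hypothetical terminals in R^3

def pair_loading(X, i, j):
    F = np.zeros_like(X)
    F[i], F[j] = X[i] - X[j], X[j] - X[i]
    return F

for i in range(len(X)):
    for j in range(i + 1, len(X)):
        F = pair_loading(X, i, j)
        assert np.allclose(F.sum(axis=0), 0)               # zero total force
        M = sum(np.outer(f, x) - np.outer(x, f) for f, x in zip(F, X))
        assert np.allclose(M, 0)                           # zero total moment
print("all pair loadings are balanced")
\end{verbatim}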
To prove that the cone ${\cal A}^*_{{\bf X}}$ does not have an empty interior we need to introduce the space ${\cal R}_{{\bf X}}$ of infinitesimal rigid motions on ${\bf X}=({\bf x}_1,\dots,{\bf x}_N)$ and its orthogonal ${\cal B}_{{\bf X}}$, called the space of {\it balanced loadings} on ${\bf X}$, defined by \begin{eqnarray} {\cal R}_{{\bf X}} &:= & \{ {\bf U}=({\bf u}_1,\dots, {\bf u}_N)\in ({\mathbb R}^d)^N:\exists a \in {\mathbb R}^d, \ \exists {\bf A} \text{ antisymmetric such that}, \forall i\,\,\, {\bf u}_i={\bf a}+ {\bf A}{\bf x}_i\}, \nonumber \\ {\cal B}_{{\bf X}}& := & \{ {\bf F}=({\bf f}_1,\dots, {\bf f}_N)\in ({\mathbb R}^d)^N:\ \sum_{i=1}^N {\bf f}_i=0,\ \sum_{i=1}^N ({\bf f}_i\otimes {\bf x}_i - {\bf x}_i\otimes {\bf f}_i)=0\}. \eeqa{NE.1} There is no loss of generality to assume that $\sum_{i=1}^N {\bf x}_i=0$ so that ${\bf X}$ also belongs to ${\cal B}_{{\bf X}}$. Recall that ${\cal A}^*_{{\bf X}}$, the cone of admissible forces ${\bf F}=({\bf f}_1,\dots,{\bf f}_N)$ at the points ${\bf X}=({\bf x}_1,\dots,{\bf x}_N)$, is the dual cone of ${\cal A}_{{\bf X}}:=\{U\in {\cal B}_{{\bf X}}:\ \forall (i,j),\ ({\bf u}_i-{\bf u}_j)\cdot ({\bf x}_i-{\bf x}_j)\geq 0\}$ (see Theorem \ref{mainthm1}). The set ${\cal B}_{{\bf X}}$ is a subspace of $({\mathbb R}^d)^N$ (with codimension $d(d+1)/2$). The notion of interior we use is relative to this subspace. \begin{lemma} \label{nonempty} The cone ${\cal A}^*_{{\bf X}}$ is a subset of ${\cal B}_{{\bf X}}$ with non empty interior. \end{lemma} \begin{proof} We have already noticed, as a consequence of Theorem \ref{Thsupport}, that there is no loss of generality in assuming that the points $({\bf x}_1,\dots {\bf x}_{N})$ span the space ${\mathbb R}^d$. There is no loss of generality either in assuming that it is the first points $({\bf x}_1,\dots {\bf x}_{d+1})$ which span it. Assume now, by contradiction, that the set ${\cal A}^*_{{\bf X}}$ has empty interior. As it is convex, this means that it is contained in a lower dimension subspace: there exists ${\bf U}=({\bf u}_1,\dots,{\bf u}_N)\not=0 \in {\cal B}_{{\bf X}}$ such that ${\bf F}\cdot{\bf U} =0$ for all ${\bf F}\in {\cal A}^*_{{\bf X}}$. As $({\bf x}_1,\dots {\bf x}_{d+1})$ span ${\mathbb R}^d$, and there exists a unique affine function ${\bf u}$ such that ${\bf u}({\bf x}_i)={\bf u}_i$ for $1\leq i \leq d+1$. Clearly, for any $(i,j)\in\{1,\dots,N\}$, the particular loading ${\bf F}^{(i,j)}=({\bf f}_1,\dots,{\bf f}_N)${\red ,} defined by ${\bf f}_i={\bf x}_i-{\bf x}_j$, ${\bf f}_j={\bf x}_j-{\bf x}_i$ and ${\bf f}_k=0$ whenever $k\not=i$ and $k\not=j$, belongs to ${\cal A}^*_{{\bf X}}$ (indeed a simple wire linking ${\bf x}_i$ to ${\bf x}_j$ is an admissible web for this particular loading). Hence ${\bf U}$ satisfies $({\bf u}_i-{\bf u}_j)\cdot({\bf x}_i-{\bf x}_j)=0$ for any pair $(i,j)$. This condition applied to all pairs with $i\leq d+1$ and $j\leq d+1$ implies that ${\bf u}$ is a rigid motion. The same condition applied to all pairs $(i,j)$ with $i\leq d+1$ and $j> d+1$ implies that ${\bf u}_j={\bf u}({\bf x}_j)$ too. Then ${\bf U}$ is a non vanishing rigid motion and this contradicts the definition of ${\cal B}_{{\bf X}}$.\ \end{proof} Let us now prove that all loadings in the interior of ${\cal A}^*_{{\bf X}}$ are unstuck. More precisely, \begin{lemma}\label{lemnonstuck} Let ${\bf F}$ be in the interior of ${\cal A}^*_{{\bf X}}$. Then, for ${\varepsilon}>0$ small enough, ${\bf F}$ also belongs to the interior of ${\cal A}^*_{{\bf X}-{\varepsilon} {\bf F}}$. 
\end{lemma} \begin{proof} We first remark that, for any ${\varepsilon}\in{\mathbb R}$, ${\bf F}$ belongs to ${\cal B}_{{\bf X}-{\varepsilon} {\bf F}}$. Indeed, for any antisymmetric matrix ${\bf A}$ we have $\sum_{i=1}^N {\bf f}_i\cdot ({\bf A} ({\bf x}_i-{\varepsilon} {\bf f}_i))=-{\varepsilon} \sum_{i=1}^N {\bf f}_i\cdot ({\bf A} {\bf f}_i)= 0$. \smallskip Let now ${\bf U}$ be any vector in ${\cal A}_{{\bf X}-{\varepsilon} {\bf F}}$ with $\|{\bf U}\|=1$. For any pair $(i,j)$, it fulfills \begin{equation} ({\bf u}_i-{\bf u}_j)\cdot (({\bf x}_i-{\varepsilon} {\bf f}_i)-({\bf x}_j-{\varepsilon} {\bf f}_j))\geq 0, \eeq{NS.1} which implies \begin{equation} ({\bf u}_i-{\bf u}_j)\cdot ({\bf x}_i-{\bf x}_j)\geq {\varepsilon} ({\bf u}_i-{\bf u}_j)\cdot ({\bf f}_i-{\bf f}_j)\geq -4 {\varepsilon} \|{\bf F}\|. \eeq{NS.2} Let $\delta:=\min_{i\not =j} |{\bf x}_i-{\bf x}_j|$ be the smallest distance between the points ${\bf x}_i$ and $\gamma:=\frac {4 \|{\bf F}\|}{\delta^2}$. The vector ${\bf W}:={\bf U}+{\varepsilon}\gamma {\bf X}$ satisfies \begin{equation} ({\bf w}^i-{\bf w}^j)\cdot ({\bf x}_i-{\bf x}_j)= ({\bf u}_i-{\bf u}_j)\cdot ({\bf x}_i-{\bf x}_j)+{\varepsilon} \gamma ({\bf x}_i-{\bf x}_j)\cdot ({\bf x}_i-{\bf x}_j) \geq 0. \eeq{NS.3} Therefore its projection $\overline {\bf W}$ on ${\cal B}_{{\bf X}}$, which satisfies the same inequalities, belongs to ${\cal A}_{{\bf X}}$. The cone ${\cal A}^*_{{\bf X}}$ is the set of all ${\bf F}$ satisfying $\forall {\bf U}\in {\cal A}_{{\bf X}}$, ${\bf F}\cdot{\bf U} \geq 0$. As ${\bf F}$ belongs to the interior of ${\cal A}^*_{{\bf X}}$, the function ${\bf V}\to {\bf F}\cdot{\bf V}$ must be strictly positive on the compact intersection of ${\cal A}_{{\bf X}}$ with the unit sphere. Let us call $\alpha>0$ its minimum. By homogeneity, we have for any ${\bf V}$ in ${\cal A}_{{\bf X}}$, ${\bf F}\cdot{\bf V} \geq \alpha \|{\bf V}\|$. When applied to $\overline {\bf W}$, this inequality gives \begin{equation} {\bf F}\cdot{\bf W}= {\bf F}\cdot{\bf W} = {\bf F}\cdot{\bf U} +{\varepsilon} \gamma {\bf F}\cdot{\bf X} \geq \alpha \|\overline {\bf W}\|=\alpha\|\overline{{\bf U}+{\varepsilon}\gamma {\bf X}}\|\geq \alpha( 1-{\varepsilon}\gamma \|{\bf X}\|). \eeq{NS.4} Hence we have \begin{equation} {\bf F}\cdot{\bf U}\geq \alpha( 1-{\varepsilon}\gamma \|{\bf X}\|)-{\varepsilon} \gamma {\bf F}\cdot{\bf X}\geq \alpha -{\varepsilon}\gamma \|{\bf X}\| (\alpha+\|{\bf F}\|). \eeq{NS.5} For ${\varepsilon}$ smaller than $\frac {\alpha}{2 \gamma\|X\| (\alpha+\|{\bf F}\|)}$, we get ${\bf F}\cdot{\bf U}\geq \frac \alpha 2 \|{\bf U}\|$. This inequality, valid for any ${\bf U}\in{\cal A}_{{\bf X}-{\varepsilon} {\bf F}}$ with $\|{\bf U}\|=1$, remains true on the whole cone ${\cal A}_{{\bf X}-{\varepsilon} {\bf F}}$ by homogeneity. So the lemma is proven. \end{proof} \begin{lemma}\label{lemconnected} If ${\bf F}$ is in the interior of ${\cal A}^*_{{\bf X}}$ then there exists a connected web supporting ${\bf F}$ at ${\bf X}$ with a strictly positive stress state. \end{lemma} \begin{proof} We first check that there exists a web $W$ supporting ${\bf F}$ at ${\bf X}=({\bf x}_1,\dots,{\bf x}_N)$ such that all wires with non vanishing tension make a connected set. The set $\mathcal D$ of loadings ${\bf F}$ for which there exists a set $I$ strictly included in $\{1,\dots,N\}$ such that $\sum_{i\in I} {\bf f}_i={\bf 0}$ is a union of subspaces (of codimension $d$) of the space $\mathcal B_{\bf X}$ of balanced loading. 
We want to avoid the case where ${\bf F}$ is supported by two or more disconnected webs, and clearly this may only happen if ${\bf F}\in {\mathcal D}$. In that case we use the following trick: as the interior of $\mathcal D$ is empty and ${\bf F}$ lies in the interior of ${\cal A}_{{\bf X}}^*$, we can find an arbitrarily small ${\bf G}\in {\mathcal B}_{\bf X}$ such that ${\bf F}+{\bf G}$ and ${\bf F}-{\bf G}$ belong to ${\cal A}_{{\bf X}}^*\setminus {\mathcal D}$. Let ${W}^+$ and ${W}^-$ be two webs supporting respectively ${\bf F}+{\bf G}$ and ${\bf F}-{\bf G}$ and let ${\mathfrak S}^+$ and ${\mathfrak S}^-$ be the associated stress measures. The measure ${\mathfrak S}:=({\mathfrak S}^++{\mathfrak S}^-)/2$ satisfies $$\nabla \cdot{\mathfrak S}+{\cal F} =\nabla \cdot{\mathfrak S}^+/2+\nabla \cdot{\mathfrak S}^-/2+{\cal F}= -({\cal F} + {\cal G})/2 -({\cal F} - {\cal G})/2 +{\cal F}= {\bf 0}.$$ Its support is the union of the supports of ${\mathfrak S}^+$ and ${\mathfrak S}^-$, which are both connected sets containing all the terminal nodes where the applied forces are non-vanishing. Thus we get a finite connected web $W$ with a strictly positive stress state $\sigma$ that supports the loading ${\bf F}=({\bf f}_1,\dots,{\bf f}_N)$ at points ${\bf X}=({\bf x}_1,\dots,{\bf x}_N)$. \end{proof} \begin{corollary}\label{cornonstuck} If ${\bf F}$ is in the interior of ${\cal A}^*_{{\bf X}}$ then there exists a connected web supporting ${\bf F}$ at ${\bf X}$ with a strictly positive stress state and only one wire joining each terminal node. \end{corollary} \begin{proof} Lemma \ref{lemnonstuck} provides an ${\varepsilon}>0$ small enough for ${\bf F}$ to belong to the interior of ${\cal A}^*_{{\bf X}-{\varepsilon} {\bf F}}$. Lemma \ref{lemconnected} then provides a connected web $W$ supporting ${\bf F}$ at ${\bf X}-{\varepsilon} {\bf F}$ with a strictly positive stress state $\sigma$. Adding to $W$, for those $i\in\{1,\dots,N\}$ such that ${\bf f}_i\not=0$, the wires $[{\bf x}_i-{\varepsilon} {\bf f}_i,{\bf x}_i]$ and fixing the tension to $\|{\bf f}_i\|$ in each of these wires, makes a new web supporting ${\bf F}$ at ${\bf X}$ with a strictly positive stress state. Clearly, as $W$ is connected, the new one is connected too. \end{proof} We can finally state the theorem regarding the existence of a uniloadable web for an unstuck loading. \begin{theorem}\label{mainthm2}{\bf Existence of uniloadable webs} \newline For any ${\bf F}$ in the interior of the admissible loading cone ${\cal A}_{{\bf X}}^*$, there exists a finite web $W$ such that ${\cal C}^W_{{\bf X}}=\{\lambda{\bf F}: \ \lambda\geq 0\}$. \end{theorem} \begin{proof} Corollary \ref{cornonstuck} states that there exists a finite connected web $W$ with a strictly positive stress state $\sigma$ that supports the loading ${\bf F}=({\bf f}_1,\dots,{\bf f}_N)$ at points ${\bf X}=({\bf x}_1,\dots,{\bf x}_N)$ and such that only one wire is attached to each terminal node. This web is then modified (using the wire-reduction procedures described above; see in particular Lemma \ref{5wires} for the three-dimensional case) into a new web $\widetilde W$ having the extra property that each internal node has, in two dimensions, at most three wires meeting it and, in three dimensions, at most four non-coplanar wires, or three coplanar wires, meeting it. This ensures that $\widetilde W$ supports only the loadings $\lambda{\bf F}$ for $\lambda\geq 0$. Indeed, as ${\bf F}$ is an admissible loading for $\widetilde W$ at points ${\bf X}$, the unique wire attached to ${\bf x}_1$ has direction ${\bf f}_1$. 
Let $\widetilde {\bf F}=(\widetilde{\bf f}_1,\dots,\widetilde{\bf f}_N)$ be another admissible loading for $\widetilde W$ at ${\bf X}$. Balance of forces at ${\bf x}_1$ imposes $\widetilde{\bf f}_1=\lambda {\bf f}_1$. As the wire attached to ${\bf x}_1$ is under tension for both loadings, we have $\lambda\ge 0$. Now at each node of the web, once the (positive) tension in one of the joining wires is fixed, the balance of forces fixes the tension in all the other joining wires. As the web is connected, from node to node, the tensions of all wires are fixed: $\widetilde {\bf F}$ is uniquely determined and clearly $\widetilde {\bf F}=\lambda {\bf F}$. \end{proof} \subsection{Possible loading cones} Here we seek an answer to the question: given a convex cone ${\cal C}\subset{\cal A}_{{\bf X}}^*$, can one find a web $W$ such that ${\cal C}^W_{{\bf X}}={\cal C}$? To proceed, as sketched in Figure 4 of \cite{Milton:2017:SFI}, one approximates the convex cone ${\cal C}$ by a cone, which we will denote by ${\cal C}^{W_j}_{{\bf X}}$, having a finite number $j$ of extreme rays ${\bf F}^{(j)m}$, $m=1,2,\ldots,j$, each strictly in the interior of ${\cal C}$ and hence strictly in the interior of the admissible loading cone ${\cal A}_{{\bf X}}^*$. As we are free to make arbitrarily small perturbations of the extreme rays ${\bf F}^{(j)m}$, we can assume that these are chosen so that at each terminal $i$, ${\bf f}^{(j)m}_i$ is not collinear with ${\bf f}^{(j)\ell}_i$ for any $m,\ell$ with $\ell\ne m$. Associated with ${\bf F}^{(j)m}$ is then a uniloadable web $W^{(j)m}$ supporting, and only supporting, at the terminals ${\bf X}$, the loadings $\lambda{\bf F}^{(j)m}$, $\lambda\geq 0$. By adjusting the positions of the interior nodes of the different uniloadable webs $W^{(j)m}$, now indexed by $m=1,2,\ldots,j$, we can ensure that the webs $W^{(j)m}$ do not have overlapping interior nodes, nor overlapping collinear wires. By superimposing these uniloadable webs, for $m=1,2,\ldots,j$, one obtains the web $W_j$ having the desired loading cone. It may happen that some wire pairs cross when we superimpose the webs, generating an additional interior node at the crossing point. This causes no difficulty, as balance of forces at the crossing point ensures that the tension in each wire remains the same on opposite sides of the crossing point. If more than two wires cross at the same point, then we can perturb the uniloadable webs to avoid this. Taking the limit $j\to\infty$ allows us to approximate ${\cal C}$ arbitrarily closely, so that ${\cal C}^{W_j}_{{\bf X}}\to{\cal C}$. \section{Conclusions} In this paper we provide a full answer to the question as to whether, given a set of forces applied to specific points, called terminal points, there exists a web that supports such forces with all the wires under tension. Specifically, we provide a necessary and sufficient condition on the loading forces which guarantees the existence of such a web, see Theorem \ref{mainthm1}. Such a condition corresponds to a finite dimensional linear programming problem: if this has a solution, then a web exists which is formed by the wires connecting pairwise the terminal points. Conversely, any web under tension supporting the given loading can be replaced by the web provided by the linear programming problem. 
The conditions related to the linear programming problem are inequalities expressed in terms of the displacements of the terminal points: they form a cone in the displacement space, and the extreme rays of the cone correspond to those displacements that satisfy some of the conditions as equalities. When the terminal points form the vertices of a convex polygon, these extreme rays correspond precisely (up to an infinitesimal rigid body motion) to clam-shell movements and do not include any other movements. By a clam-shell movement we mean the displacement field that arises when one breaks the convex polygon connecting the terminal points into two non-overlapping subpolygons joined at one vertex, fixes one subpolygon, and infinitesimally rotates the other, so that the clam opens slightly. If there is at least one terminal point that lies inside the convex hull of the terminal points, then the extreme rays of the cone of admissible displacements also include displacements that are not clam-shell movements. In practical situations one would like to have uniloadable webs, that is, webs that support only one loading and all positive multiples of it: such webs allow one to channel stresses in desired ways, and superpositions of them allow one to obtain a desired convex loading cone. To construct a uniloadable web in two dimensions, one has to replace each minimal closed loop, that is, any polygon whose edges are wires of the web and which contains no other wires inside it, by an open web (one not containing any closed loop). This is always possible if the terminal points are positioned at the vertices of a convex polygon. If that is not the case, then there may still remain minimal closed loops, in a number at most equal to that of the terminal points which lie inside the convex hull of the terminal points. In general, to construct uniloadable webs, one has to reduce the number of wires meeting at either a terminal point or at an internal node. The first step is to modify the web so that only one wire is attached to each terminal point. We proved that this is possible whenever the given loading lies in the interior of the cone of admissible loadings, the dual of the cone of admissible displacements. If, instead, the loading lies on the boundary of this cone, then this modification may be impossible, and we say that such loadings are stuck. We provided two examples of this type in two dimensions, see Examples \ref{Example3} and \ref{Example4}. Since in two dimensions we know that a web with terminal points that are vertices of a convex polygon can always be replaced by an open web, such loadings are never completely stuck: indeed, in both Examples \ref{Example3} and \ref{Example4} the terminal points are not the vertices of a convex polygon. This does not hold in the three-dimensional case, for which one can find a completely stuck loading even when the terminal points form the vertices of a convex polyhedron (see Figure \fig{cube}). On the other hand, if the loading belongs to the interior of the cone of admissible loadings, then we provide a general procedure to reduce the number of wires meeting at each internal node where initially five or more wires meet. \vskip6pt \enlargethispage{20pt} \section*{Acknowledgments} Ornella Mattei and Graeme Milton are grateful to the National Science Foundation for support through grant DMS-1814854. 
Graeme Milton is grateful to the Universit{\'e} de Toulon for hosting his two week visit there in September 2017 where this work was initiated.
{ "timestamp": "2019-03-04T02:03:26", "yymm": "1810", "arxiv_id": "1810.12421", "language": "en", "url": "https://arxiv.org/abs/1810.12421", "abstract": "In many applications of Structural Engineering the following question arises: given a set of forces $\\mathbf{f}_1,\\mathbf{f}_2,\\dots,\\mathbf{f}_N$ applied at prescribed points $\\mathbf{x}_1,\\mathbf{x}_2,\\dots,\\mathbf{x}_N$, under what constraints on the forces does there exist a truss structure (or wire web) with all elements under tension that supports these forces? Here we provide answer to such a question for any configuration of the terminal points $\\mathbf{x}_1,\\mathbf{x}_2,\\dots,\\mathbf{x}_N$ in the two- and three-dimensional case. Specifically, the existence of a web is guaranteed by a necessary and sufficient condition on the loading which corresponds to a finite dimensional linear programming problem. In two-dimensions we show that any such web can be replaced by one in which there are at most $P$ elementary loops, where elementary means the loop cannot be subdivided into subloops, and where $P$ is the number of forces $\\mathbf{f}_1,\\mathbf{f}_2,\\dots,\\mathbf{f}_N$ applied at points strictly within the convex hull of $\\mathbf{x}_1,\\mathbf{x}_2,\\dots,\\mathbf{x}_N$. In three-dimensions we show that, by slightly perturbing $\\mathbf{f}_1,\\mathbf{f}_2,\\dots,\\mathbf{f}_N$, there exists a uniloadable web supporting this loading. Uniloadable means it supports this loading and all positive multiples of it, but not any other loading. Uniloadable webs provide a mechanism for distributing stress in desired ways.", "subjects": "Mathematical Physics (math-ph)", "title": "On the forces that cable webs under tension can support and how to design cable webs to channel stresses", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575147530352, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.7091542051888527 }
https://arxiv.org/abs/0912.1411
Three notions of tropical rank for symmetric matrices
We introduce and study three different notions of tropical rank for symmetric and dissimilarity matrices in terms of minimal decompositions into rank 1 symmetric matrices, star tree matrices, and tree matrices. Our results provide a close study of the tropical secant sets of certain nice tropical varieties, including the tropical Grassmannian. In particular, we determine the dimension of each secant set, the convex hull of the variety, and in most cases, the smallest secant set which is equal to the convex hull.
\section{Introduction} In this paper, we study tropical secant sets and rank for symmetric matrices. Our setting is the \emph{tropical semiring} $(\mathbb{R}\cup\{ \infty\}, \oplus , \odot )$, where tropical addition is given by $ x \oplus y =\min(x,y)$ and tropical multiplication is given by $x \odot y = x+y$. The $k$th \emph{tropical secant set} of a subset $V $ of $\mathbb{R} ^ N$ is defined to be the set of points \begin{equation*} \{x\in\mathbb{R} ^ N: x=v_1 \oplus \cdots \oplus v_k,\;v_i\in V\}, \end{equation*} where $ \oplus $ denotes coordinate-wise minimum. This set is typically not a tropical variety and thus we prefer the term ``secant set'' to ``secant variety,'' which has appeared previously in the literature. The \emph{rank} of a point $ x\in\mathbb{R} ^ N$ with respect to~$ V $ is the smallest integer $ k$ such that $ x$ lies in the $ k $th tropical secant set of $ V $, or $\infty $ if there is no such $k$. In~\cite{dss}, Develin, Santos, and Sturmfels define the Barvinok rank of a matrix, not necessarily symmetric, to be the rank with respect to the subset of $n \times n$ rank~$1$ matrices, and their definition serves as a model for ours. In addition, they define two other notions of rank, Kapranov rank and tropical rank, for which there are no analogues in this paper. Further examination of ranks of not necessarily symmetric matrices can be found in the review article~\cite{akian}. We give a careful examination of secant sets and rank with respect to three families of tropical varieties in the space of symmetric matrices and the space of dissimilarity matrices. By a \emph{$n\times n$ dissimilarity matrix} we simply mean a function from ${[n]\choose 2}$ to $\mathbb{R}$, which we will write as a symmetric matrix without any entries on the diagonal. There is a natural projection from $n\times n$ symmetric matrices to $n \times n$ dissimilarity matrices which we denote by~$\pi$. For example, \begin{equation} \label{eqn:intro-exs} M= \begin{bmatrix} 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 \end{bmatrix} \quad\mbox{and}\quad\pi (M) = \begin{bmatrix} * & 1 & 0 & 0\\ 1 & * & 0 & 0\\ 0 & 0 & * & 1\\ 0 & 0 & 1 & * \end{bmatrix} \end{equation} are a symmetric matrix and dissimilarity matrix respectively. Our first family is the \emph{tropical Veronese} of degree $2$, which is the tropicalization of the classical space of symmetric matrices of rank~1. It is a classical linear subspace of the space of symmetric matrices consisting of those matrices which can be written as $v ^ T\odot v$ for some row vector $v$. The rank of a matrix with respect to the tropical Veronese is called \emph{symmetric Barvinok rank}, because it is the symmetric analogue of Barvinok rank. Second, we consider the \emph{space of star trees}, which is the image of the tropical Veronese under the projection $\pi$. Equivalently, it can be obtained by first projecting the classical Veronese onto its off-diagonal entries and then tropicalizing. The classical variety and its secant varieties were studied in~\cite{drton}. The tropical variety is a classical linear subspace of the space of dissimilarity matrices, and we call the corresponding notion of rank \emph{star tree rank}. The name reflects the fact that the matrices with star tree rank~$1$ are precisely those points of the tropical Grassmannian which correspond to trees with no internal edges, i.e.\ star trees, in in the identification below. 
Third, we consider the \emph{tropical Grassmannian} $G_{2,n}$, which is the tropicalization of the Grassmannian of 2-dimensional subspaces in an $n$-dimensional vector space, and was first studied in~\cite{ss}. It consists of exactly those dissimilarity matrices arising as the distance matrix of a weighted tree with $n$ leaves in which internal edges have negative weights. Therefore, we call the points in the tropical Grassmannian \emph{tree matrices}, and call rank with respect to the tropical Grassmannian the \emph{tree rank}. Note that our definition of tree rank differs from that in~\cite[Ch.~3]{ps}, which uses a different notion of mixtures. Our first two families are examples of classical linear spaces, whose secant sets were studied by Mike Develin~\cite{de}. He defines a natural polyhedral fan such that the tropical secant set is the support of this polyhedral fan. Moreover, Theorem~2.1 in~\cite{de} gives an algorithm for computing the rank of a point with respect to a fixed linear space. In contrast, we do not know of a good algorithm for computing the rank with respect to the tropical Grassmannian (see Section~\ref{sec:open-questions}), and we do not know of a natural fan structure. \begin{figure} \begin{center} \input{tree_rank.pstex_t} \label{tree_rank} \end{center} \caption{Weighted tree whose distance matrix is $\pi(M)$ from~(\ref{eqn:intro-exs}).} \label{fig:example-tree} \end{figure} We use our examples of $M$ and $\pi(M)$ from~(\ref{eqn:intro-exs}) to illustrate our three notions of rank. Proposition~\ref{prop:zero-inf-rank} tells us that the symmetric Barvinok rank of~$M$ is~4. Theorem~\ref{thm:max-star-tree-rank} tells us that the star tree rank of $ \pi(M)$ is~2. Explicitly, we have \begin{equation*} \pi (M)=\begin{bmatrix} * & 1 & 0 & 0\\ 1 & * & 2 & 2\\ 0 & 2 & * & 1\\ 0 & 2 & 1 & * \end{bmatrix}\oplus\begin{bmatrix} * & 1 & 2 & 2\\ 1 & * & 0 & 0\\ 2 & 0 & * & 1\\ 2 & 0 & 1 & * \end{bmatrix}. \end{equation*} Finally, the tree rank of $ \pi (M)$ is~1 by Proposition~\ref{prop:tree-rank-01}, which can also be seen explicitly from the weighted tree in Figure~\ref{fig:example-tree}. This example shows that all three of our notions of rank can be different. However, for any $n\times n$ symmetric matrix~$M$, we have \begin{equation} \label{eqn:rank-inequals} \operatorname{symmetric\ Barvinok\ rank}(M) \geq \operatorname{star\ tree\ rank}(\pi (M)) \geq \operatorname{tree\ rank}(\pi (M)). \end{equation} The first inequality follows from the fact that the set of dissimilarity matrices of star tree rank~$1$ is the projection of the set of matrices of symmetric Barvinok rank~$1$. The second inequality follows from the fact that the space of star trees is contained in the tropical Grassmannian. The rest of the paper is organized as follows. In Section~\ref{sec:def-graph}, we present a technique for proving lower bounds on rank. We introduce a graph associated to a matrix for each of our notions of rank; the chromatic number of this graph is a lower bound on the rank of the matrix. The same technique applies to provide a lower bound to the rank of any point with respect to any tropical prevariety, although in general it may produce a hypergraph instead of a graph. We examine symmetric Barvinok rank, star tree rank, and tree rank in Sections \ref{sec:sym-barv-rank}, \ref{sec:star-tree-rank}, and~\ref{sec:tree-rank} respectively. We prove upper bounds on the rank in each case, and with the exception of tree rank, our upper bounds are sharp. 
We show that the symmetric Barvinok rank of an $n\times n$ symmetric matrix can be infinite, but even when the rank is finite it can exceed $n$, and in fact can grow quadratically in~$n$ (Theorem~\ref{thm:max-rank}). For each notion of rank, the set of matrices with rank at most~$k$ is a union of polyhedral cones, and we compute the dimension of these sets, defined as the dimension of the largest cone. In each case, the dimension of the tropical secant set equals the dimension of the clasical secant variety, confirming Draisma's observation that tropical geometry provides useful lower bounds for the dimensions of classical secant varieties~\cite{d}. Finally, we give a combinatorial characterization of each notion of rank for a $0/1$ matrix in terms of graph covers. In Section~\ref{sec:sym-barv-3}, we examine $3 \times 3$ symmetric matrices and explicitly characterize the stratification by symmetric Barvinok rank. In Sections \ref{sec:star-tree-5} and~\ref{sec:tree-5}, we do the same for the $5 \times 5$ dissimilarity matrices and the stratifications by star tree rank and tree rank respectively. In particular, we show that the lower bounds from the chromatic number in Section~\ref{sec:def-graph} are exact in these cases. We close with some open problems in Section~\ref{sec:open-questions}. \section{Lower bounds on rank via hypergraph coloring}\label{sec:def-graph} Before we examine our three notions of rank, we give a general combinatorial construction: a hypergraph whose chromatic number yields a lower bound on rank. Recall that a \emph{hypergraph} consists of a ground set, called vertices, and a set of subsets of the ground set, called hyperedges. The \emph{chromatic number} of a hypergraph~$H$, denoted $\chi(H) $, is the smallest number $ r$ such that the vertices of $ H$ can be partitioned into $ r $ color classes with no hyperedge of $ H$ monochromatic. In particular, if $H$ contains a hyperedge of size~$1$, then $\chi(H)$ is~$\infty$. Now, suppose we have a tropical prevariety $ V\subseteq\mathbb{R} ^ N$. Recall that a tropical polynomial \begin{equation} \label{eqn:tropical-polynomial} p(x_1, \ldots, x_N) = \bigoplus_{i=1}^t a_i \odot x_1^{c_{i1}} \odot\cdots\odot x_N^{c_{iN}} \end{equation} defines a tropical hypersurface consisting of those vectors $x \in \mathbb{R}^N$ such that the minimum in evaluating $p(x)$ is achieved at least twice. A \emph{tropical prevariety} is the intersection of finitely many tropical hypersurfaces, and any finite set~$S$ of tropical polynomials defining the prevariety $V$ is called a \emph{tropical basis}. Now, given a point $w \in \mathbb{R}^N$ and a tropical basis $S$ for~$V$, we construct a hypergraph on ground set $ [N]$ as follows. Let $p$ from~(\ref{eqn:tropical-polynomial}) be a tropical polynomial in~$S$, with all exponents $c_{ij} \geq 0$. If the minimum is achieved uniquely when $p$ is evaluated at $w$, then we add a hyperedge $E$ whose elements correspond to the coordinates that appear with non-zero exponent in the unique minimal term. The \emph{deficiency hypergraph} of~$w$ with respect to~$V$ and~$S$ consists of hyperedges coming from all polynomials in $S$ with a unique minimum at~$w$. In particular, the deficiency hypergraph has no hyperedges (and thus has chromatic number $1$) if and only if $w$ is in~$V$. \begin{prop} \label{prop:deficiency-hypergraph} If $H$ is the deficiency hypergraph constructed above, then the rank of $w\in\mathbb{R} ^ N$ with respect to $V\subseteq\mathbb{R} ^ N$ is at least $\chi(H)$. 
\end{prop} \begin{proof} Suppose that $w$ has rank $r$ with respect to~$V$, and let $w = v_1 \oplus \cdots \oplus v_r$ be an expression of $w$ as the tropical sum of $r$ points $v_i \in V$. We will construct an $r$-coloring of the deficiency hypegraph~$H$, which will show that $\chi(H) \leq r$. For each $i \in [N]$, there is at least one $v_j$ which agrees with $w$ in the $i$th coordinate, so arbitrarily pick one such $j$ as the color for vertex~$i$ in~$H$. Let $E$ be a hyperedge of~$H$ and $p$ the associated tropical polynomial. We claim that $E$ cannot be monochromatic with color~$j$. Each coordinate of $v_j$ is greater than or equal to the corresponding coordinate of~$w$, so each term of $p(v_j)$ is greater than or equal to the corresponding term of $p(w)$. On the other hand, the minimum in the evaluation of $p(v_j)$ is achieved at least twice, so the minimum must be strictly greater than $p(w)$. Thus, $v_j$ cannot agree with $w$ for all coordinates in $E$, so $E$ is not monochromatic of color~$j$. This holds for any color~$j$, so we have constructed an $r$-coloring, and thus, $\chi(H) \leq r$. \end{proof} \begin{corollary}\label{cor:singleton} If the deficiency hypergraph $H$ has a hyperedge of size~$1$, then the rank of $w$ with respect to $V$ is infinite. \end{corollary} \begin{proof} If $H$ has a hyperedge of size~$1$, then $\chi(H)$ is $\infty$, and thus the rank of $w$ is infinite. \end{proof} We do not know of any examples in which this lower bound is actually strict; see Section~\ref{sec:open-questions}. For the varieties considered in this paper, we will take quadratic tropical bases and thus the deficiency hypergraph will always be a graph (possibly with loops). Accordingly, we will call it the \emph{deficiency graph}. \section{Symmetric Barvinok rank}\label{sec:sym-barv-rank} Recall from the introduction that the symmetric Barvinok rank of a symmetric matrix $M$ is the smallest number~$r$ such that $M$ can be written as the sum of $r$ rank~$1$ symmetric matrices. The $2\times 2$ minors $x_{ij} x_{kl} \oplus x_{il}x_{kj}$ of $M$ for $i \neq k$ and $l \neq j$ form a tropical basis for the variety of rank~$1$ symmetric matrices. We will always construct our deficiency graph with respect to this tropical basis. Our first observation is that the symmetric Barvinok rank of a matrix can be infinite. More precisely, \begin{prop} \label{prop:inf-rank} If $M$ is a symmetric matrix and $2M_{ij} < M_{ii} + M_{jj}$ for some $i$ and~$j$, then the symmetric Barvinok rank of $M$ is infinite. \end{prop} \begin{proof} The tropical polynomial $x_{ij}^2 \oplus x_{ii} x_{jj}$ is in the tropical basis, so if $2M_{ij} <M_{ii} + M_{jj}$ for some $ i$ and $ j$, then the deficiency graph for~$M$ has a loop at the node $ij$. Therefore, $M$ has infinite rank by Corollary~\ref{cor:singleton}. \end{proof} In fact, the converse to Proposition~\ref{prop:inf-rank} is also true; see Theorem~\ref{thm:max-rank}. In order to construct decompositions into rank~$1$ matrices, we need the following lemma. \begin{lemma} \label{lem:rank-1-extn} Let $M$ be an $m \times m$ symmetric Barvinok rank~$1$ matrix, $n > m$ an integer and $C$ any real number. Then there exists an $n \times n$ symmetric rank~$1$ matrix~$N$ such that the upper left $m \times m$ submatrix is $M$ and every other entry is at least~$C$. \end{lemma} \begin{proof} Since $M$ has rank~$1$, then $M = v^T \odot v$ for some row vector $v$. 
Let $C' = \max\{\frac{1}{2}C, C- v_i\}$, and let $w$ be the vector consisting of $v$ followed by $C'$ repeated $n-m$ times. Then $N = w^T \odot w$ has the desired properties. \end{proof} \begin{remark} \label{rmk:rank-1-extn} We will use the symbol $\infty$ in an entire row and column of a matrix to denote sufficiently large values that maintain the property of being rank~$1$. So, if $M$ is an $m \times m$ rank~$1$ matrix, then \begin{equation*} \begin{bmatrix} M & \infty \\ \infty & \infty \end{bmatrix} \end{equation*} denotes the $(m+1)\times(m+1)$ matrix obtained by applying Lemma~\ref{lem:rank-1-extn} with $n = m+1$. The value of~$C$ will be clear from the context. \end{remark} Next, we give a graph-theoretic characterization of the symmetric Barvinok rank of $0/1$-matrices. We define a \emph{clique cover} of a simple graph~$G$ to be a collection of $r$ complete subgraphs such that every edge and every vertex of $ G$ is in some element of the collection. Given an $n \times n$ symmetric $0/1$ matrix $M$ with zeroes on the diagonal, define $G_M$ to be the graph whose vertices are the integers $[n]$, and which has an edge between $i$ and~$j$ if and only if $M_{ij} = 0$. \begin{prop} \label{prop:zero-inf-rank} Suppose $M$ is a symmetric $0/1$ matrix with zeroes on the diagonal. Then the symmetric Barvinok rank of $M$ is the size of a smallest clique cover of~$G_{M}$. On the other hand, suppose that $M$ is a symmetric $0/1$ matrix with at least one entry of $1$ on the diagonal. If there exist $i$ and $j$ such that $M_{ii} = 1$ and $M_{ij} = 0$, then the symmetric Barvinok rank of~$M$ is infinite. Otherwise, let $M'$ be the maximal principal submatrix with zeroes on the diagonal. The symmetric Barvinok rank of $M$ is one greater than the symmetric Barvionk rank of $M'$. \end{prop} \begin{proof} First suppose that all diagonal entries of $ M $ are $0$. Let $G_1, \ldots, G_r$ be a clique cover of $G_M$. Let $v_i$ be the $ 0/1$ row vector whose $j$th entry is $0$ if $j$ is a vertex of~$G_i$, and let $ M_i =v_i^T \odot v_i $. Then we claim that $M = \bigoplus M_i$. Each~$0$ entry in $M$ corresponds to an edge or vertex of $G_M$ which is in some $G_i$, so there is a $0$ in $M_i$. For $j \neq k$ such that $M_{jk} = 1$, some $G_i$ contains the vertex~$j$, and hence misses the vertex $k$, so $(M_i)_{jk} = 1$. On the other hand, the entry is at least $1$ in the other rank~$1$ matrices because none of the corresponding graphs contains $ jk$ as an edge. Thus the rank of $M$ is at most~$r$. Conversely, suppose $N$ is a rank~$1$ symmetric matrix in a decomposition of $M$. Since all entries of $M$ are non-negative, so are the entries of $N$, so the rank~$1$ condition says $N_{ii} = N_{jj} = 0$ if and only if $N_{ij} = 0$. By the ``if'' direction, we can define a graph $G_N$ whose vertices and edges correspond to the diagonal and off-diagonal zeroes of $N$ respectively. By the ``only if'' direction, this is a complete graph. Every position with a $0$ in $M$ must be~$0$ for some rank~$1$ matrix in the decomposition, so the graphs $G_N$ form a clique cover of $G_M$ as $N$ ranges over all rank~$1$ symmetric matrices in the decomposition. Thus, the rank of $M$ is exactly~$r$. Next, we suppose $ M$ has at least one $1$ on the diagonal and let $M'\subsetneq M$ be as in the statement. If there exist $ i$ and $ j$ such that $ M_{ii} =1$ but $ M_{ij}=0 $, then the rank of $ M$ is infinite by Proposition~\ref{prop:inf-rank}. Otherwise, we claim that the rank of $M$ is $ r+1 $. 
Extending each rank 1 summand of a minimal decomposition of $ M'$ by Lemma~\ref{lem:rank-1-extn} and adding in the all ones matrix shows that the rank of $M$ is at most $ r+ 1 $. On the other hand, it is straightforward to check that a rank 1 summand containing a 1 entry on the diagonal can contain no zeroes, so does not contribute to a clique cover for $ G_{ M'} $. So the rank of $M$ is exactly $ r+1 $ in this case. \end{proof} \begin{remark} \label{rmk:bipartite_graph} This characterization gives us two families of matrices which have rank~$n$ and $\lfloor n^2/4 \rfloor$ respectively, namely those corresponding to the trivial graph with $n$ isolated vertices and the complete bipartite graph $K_{\lfloor n/2\rfloor, \lceil n/2\rceil}$. In the latter case, $K_{\lfloor n/2\rfloor, \lceil n/2\rceil}$ is triangle-free, so no clique can consist of more than one edge. On the other hand, there are $\lfloor n^2/4 \rfloor$ edges in the graph, so $\lfloor n^2/4 \rfloor$ cliques are needed in a cover. In fact, these two examples have the maximum possible rank for $n\times n$ matrices, as shown below. \end{remark} \begin{thm} \label{thm:max-rank} Suppose that $M$ is a symmetric $n\times n$ matrix with $M_{ii} + M_{jj} \leq 2M_{ij}$ for all $i$ and~$j$. Then the symmetric Barvinok rank of $M$ is at most $\max\{n, \lfloor n^2/4 \rfloor\}$, and this bound is tight. Thus, every matrix with finite rank has rank at most $\max\{n, \lfloor n^2/4\rfloor\}$. \end{thm} \begin{proof} Subtracting $M_{ii}/2$ from the $i$th row and column for all $i$ does not change the rank, so we can assume that the diagonal entries of $M$ are $0$ and hence, by hypothesis, the off-diagonal entries are non-negative. The statement is trivial for $n=1$. For $n=2$, we have \begin{equation} \label{eqn:decomp-2} M = \begin{bmatrix} 0 & M_{12} \\ M_{12} & 2 M_{12} \end{bmatrix} \oplus \begin{bmatrix} 2M_{12} & M_{12} \\ M_{12} & 0 \end{bmatrix}. \end{equation} For $n=3$, we assume, without loss of generality, that $M_{12} \geq M_{23}$. Then \begin{equation} M = \begin{bmatrix} 0 & M_{12} & M_{13} \\ M_{12} & 2M_{12} & M_{12} \!+\! M_{13} \\ M_{13} & M_{12}\! + \!M_{13} & 2 M_{13} \end{bmatrix} \oplus \begin{bmatrix} \infty & \infty & \infty \\ \infty & 0 & M_{23} \\ \infty & M_{23} & 2M_{23} \end{bmatrix} \oplus \begin{bmatrix} \infty & \infty & \infty \\ \infty & \infty & \infty \\ \infty & \infty & 0 \end{bmatrix}, \label{eqn:decomp-3} \end{equation} where ``$\infty$'' is as in Remark~\ref{rmk:rank-1-extn}. For $n$ at least $4$, the proof is by induction, using the following lemma: \begin{lemma} \label{lem:max-rank-induction} Let $n \geq 4$. Suppose that for any $(n-2) \times (n-2)$ matrix $N$ of finite rank, there exists a matrix $N'$ of rank at most $\lfloor (n-2)^2/4\rfloor$, such that $N'$ is identical to~$N$ except possibly in one diagonal entry, where the entry of $N'$ is greater than or equal to the entry of~$N$. Then any $n\times n$ matrix has rank at most~$\lfloor n^2/4\rfloor$. \end{lemma} \begin{remark} The exceptional diagonal entry in the hypothesis makes the statement of the lemma slightly stronger than what is required for an inductive proof of an $\lfloor n^2/4 \rfloor$ upper bound. However, we will use the lemma to establish the base cases of the induction, $n=4$ and $n=5$, which require a weaker hypothesis because for $n=2$ and $n=3$ the $\lfloor n^2/4 \rfloor$ upper bound on rank does not hold. \end{remark} \begin{proof}[Proof of Lemma~\ref{lem:max-rank-induction}] Let $M$ be an $n\times n$ matrix with finite rank. 
Without loss of generality, assume that the entry $M_{12}$ is minimal among all off-diagonal elements. We apply the hypothesis to the principal submatrix indexed by $\{3, \ldots, n\}$. Applying Lemma~\ref{lem:rank-1-extn}, we have a collection of at most $\lfloor (n-2)^2/4\rfloor$ rank $1$ matrices whose tropical sum agrees with $M$ except for the first two rows and columns and possibly one diagonal entry, which, without loss of generality, we assume to be $M_{44}$. For each $4 \leq i \leq n$, take the rank~$1$ matrix which has arbitrary large values except for the $\{1,2,i\}$ principal submatrix, which is: \begin{equation*} \begin{bmatrix} 2M_{1i} & M_{1i} + M_{2i} & M_{1i} \\ M_{1i} + M_{2i} & 2M_{2i} & M_{2i} \\ M_{1i} & M_{2i} & 0 \end{bmatrix}. \end{equation*} Note that since $M_{12}$ was chosen to be minimal, we have that $M_{12} \leq M_{1i} + M_{2i}$. Finally, switching indices $1$ and $2$ if necessary, we can assume that $M_{13} \geq M_{23}$, and hence $M_{12}+M_{13} \geq M_{23}$. Then, we take two matrices which are ``$\infty$'' outside of the $\{1,2,3\}$ principal matrices, which are, respectively, \begin{equation*} \begin{bmatrix} 0 & M_{12} & M_{13}\\ M_{12} & 2M_{12} & M_{12} + M_{13} \\ M_{13} & M_{12} + M_{13} & 2M_{13} \end{bmatrix} \quad \mbox{and} \quad \begin{bmatrix} \infty & \infty & \infty \\ \infty & 0 & M_{23} \\ \infty & M_{23} & 2M_{23} \end{bmatrix}, \end{equation*} recalling the meaning of ``$\infty$'' from Remark~\ref{rmk:rank-1-extn}. This yields a decomposition of $M$ into at most $\lfloor (n-2)^2/4 \rfloor + (n-3) + 2 = \lfloor n^2/4 \rfloor$ symmetric rank~$1$ matrices. \end{proof} To complete the proof of the Theorem~\ref{thm:max-rank}, we note that taking all but the last term of~(\ref{eqn:decomp-2}) and~(\ref{eqn:decomp-3}) gives the hypothesis to Lemma~\ref{lem:max-rank-induction} for $n=4$ and~$5$. The desired upper bound follows by induction, which consists of applying Lemma~\ref{lem:max-rank-induction} with $N' = N$. Finally, Remark~\ref{rmk:bipartite_graph} shows that this bound is tight. \end{proof} \begin{theorem} The dimension of the space of symmetric $n \times n$ matrices of symmetric Barvinok rank at most~$r$ is ${n+1 \choose 2} - {n-r+1 \choose 2}$, which is the dimension of the classical secant variety, i.e.\ the space of classical symmetric matrices of classical rank at most~$r$. \end{theorem} \begin{proof} Let $D = {n+1 \choose 2} - {n -r+1 \choose 2}$. The tropical secant variety is contained in the tropicalization of the classical secant variety, so the dimension is at most $D$, by the Bieri-Groves Theorem~\cite[Thm.\ A]{bg}. Thus, it is sufficient to find an open neighborhood in which the tropical variety has dimension~$D$. For $i$ from~$1$ to~$r$, let $v_i = (C, \ldots, C, v_{i,i}, \ldots, v_{i,n})$ be a vector with $C$ for the first $i-1$ entries. Choose the coordinates $v_{i+1,j}$ to be smaller than all the $v_{i,j}$ and let $C$ be very large. Then, \begin{equation*} v_1^T \odot v_1 \oplus \cdots \oplus v_r^T \odot v_r = \begin{bmatrix} 2v_{11} & v_{11} + v_{12} & \cdots & v_{11} + v_{1n} \\ v_{11} + v_{12} & 2v_{22} & \cdots & v_{22} + v_{2n} \\ \vdots & \vdots & & \vdots \\ v_{11} + v_{1n} & v_{22} + v_{2n} & \ldots & 2v_{rn} \end{bmatrix} \end{equation*} This matrix is an injective function of the vector entries $v_{ij}$ for $i \leq r$ and $j \geq i$. Thus, it defines a neighborhood of the $r$th secant set of the desired dimension \begin{equation*} n + (n-1) + \cdots + (n-r+1) = {n+1 \choose 2} - {n-r+1 \choose 2} = D. 
\qedhere \end{equation*} \end{proof} \section{Star tree rank}\label{sec:star-tree-rank} Recall from the introduction that a star tree matrix is one which can be written as $\pi(v^T \odot v)$ for $v \in \mathbb{R}^n$ a row vector. The star tree matrices from a classical linear space in the space of $n \times n$ dissimilarity matrices defined by the tropical polynomials \begin{equation} \label{eqn:basis-star-tree} x_{ij} x_{kl} \oplus x_{ik} x_{jl} \quad\quad\mbox{for $i$, $j$, $k$, and $l$ distinct integers}. \end{equation} In this section, the deficiency graph will always be taken with respect to this tropical basis. The following lemma is an immediate consequence of Lemma~\ref{lem:rank-1-extn}. \begin{lemma} \label{lem:star-tree-extn} Let $M$ be an $m\times m$ star tree matrix, and let $n > m$ be any integer and $C$ any real number. Then there exists an $n \times n$ star tree matrix with $M$ as the upper left $m \times m$ submatrix and all other entries greater than~$C$. \end{lemma} Unlike the case of symmetric Barvinok rank, the star tree rank is always finite. \begin{thm} \label{thm:max-star-tree-rank} For $n$ at least $3$, the star tree rank of a $n \times n$ dissimilarity matrix~$M$ is at most $n-2$, and this bound is sharp. In particular, the dissimilarity matrix defined by $M_{ij} = \min\{i,j\}$ has star tree rank $n-2$. \end{thm} \begin{proof} The proof of the upper bound is by induction on $n$. For $n=3$, the equations in~(\ref{eqn:basis-star-tree}) are trivial, and thus every $3\times 3$ dissimilarity matrix is a star tree matrix. For $n>3$, let $M$ be an $n \times n$ dissimilarity matrix, and denote by $M'$ the upper left $(n-1) \times (n-1)$ submatrix. By the inductive hypothesis, we can write $M'$ as the tropical sum of $n-3$ star tree matrices. We can extend each of these to $n\times n$ star tree matrices by Lemma~\ref{lem:star-tree-extn}, and their tropical sum will agree with $M$ except in the last column and row. Let $w$ be the vector defined by $w_{i} = M_{in} + C$ for $i < n$ and $w_n = -C$, for $C$ a sufficiently large number. Then the tropical sum of the previous $n-3$ matrices together with $\pi(w^ T\odot w)$ equals~$M$. Now let $M$ be the dissimilarity matrix defined by $M_{ij} = \min\{i,j\}$ as in the statement. We claim that the deficiency graph of $M$ has chromatic number~$n-2$. For every $i < j < k < l$, we have \begin{equation} \label{eqn:m-4-point} M_{ik} + M_{jl} = M_{il} + M_{jk} = i + j < M_{ij} + M_{kl} = i + k, \end{equation} so the deficiency graph has an edge between $ik$ and $jl$, and an edge between $il$ and $jk$. We refer to these types of edges as ``overlapping'' and ``nesting'' respectively. We prove that the deficiency graph of $M$ has chromatic number at least $n-2$ by induction on $n$. The case of $n=3$ is clear. Let $n$ be greater than $3$ and fix a coloring of the deficiency graph of $M$. Let $S$ be the set of nodes of the same color~$c$ as the node~$1n$. There is a ``nesting'' edge between $1n$ and every node other than those of the form $1i$ or $in$ for some~$i$. Thus, every node in $S$ is either of the form $1i$ or $in$. Furthermore, because of the ``overlapping'' edges, there must be an integer $m$ such that if $1i$ is in $S$, then $i\leq m$ and if $jn$ is in $S$, then $m \leq j$. Now consider the set of $n-3$ nodes consisting of $1i$ for $m < i < n$ and $jn$ for $1 < j < m$. By our construction of~$m$, none of them is in~$S$. 
Therefore, if they have distinct colors, then the coloring has at least $n-2$ colors, which is what we wanted to show. Otherwise, two of these nodes have the same color, and by the ``overlapping'' edges and symmetry we can assume that they are $1i$ and $1i'$, with $m < i < i'$, which have color~$c' \neq c$. Any node $jk$ with $j < k$ and $1 < j < i$ will share an edge with one of these two nodes, so cannot have color $c'$. On the other hand, if the node $1l$ with $l \leq m$ is adjacent to $jk$ with $j <k$, then we must have $ 1 < j < l$. But $l \leq m < i$, so $jk$ cannot have color~$c'$. Thus, we can assume that all of the nodes $1l$ with $l \leq m$ also have color~$c'$ without changing the fact that the coloring is proper. With this change, the only nodes with color~$c$ are of the form~$jn$. By restricting to the nodes with coordinates less than~$n$, we have a coloring of the deficiency graph of the $(n-1)\times (n-1)$ matrix without the color~$c$, so by the inductive hypothesis we're done. \end{proof} \begin{remark} The matrix with maximal star tree rank in the previous theorem is in fact in the Grassmannian, i.e.\ it has tree rank~$1$. Indeed, from~(\ref{eqn:m-4-point}), we see that the four-point condition holds. Alternatively, $M$ arises as the distance matrix of the following weighted tree. Let $T$ be the caterpillar tree with $n$ internal vertices, connected in order by edges of weight $-1/2$. The $i$th leaf vertex is connected to the $i$th internal vertex by an edge of weight $i/2$. For $i < j$, the distance from leaf~$i$ to leaf~$j$ is $i/2 + (j-i)(-1/2) + j/2 = i$, which is equal to the corresponding entry in the matrix $M$ from Theorem~\ref{thm:max-star-tree-rank}. In order to make a proper phylogenetic tree, we should remove the first and last internal vertices and combine the adjacent edge weights. \end{remark} Next, we give a graph theoretic characterization of the star tree rank of $0/1$-matrices. For $M$ a $0/1$ dissimilarity matrix, we define $G_M$ to be the graph whose edges correspond to the zeroes of $M$. As in the case of symmetric Barvinok rank, we can characterize the star tree rank of~$M$ in terms of covers of~$G_M$, this time by both cliques and star trees. We will also say that a cover of~$G_M$ by cliques and star trees is a \emph{solid cover} if for every pair of distinct vertices $i$ and $j$ either: \begin{enumerate} \item there is an edge between $i$ and $j$, \item either $i$ or $j$ belongs to a clique in the cover, \item either $i$ or $j$ is the center of a star tree in the cover, or \item both $i$ and $j$ are leaves of the same star tree. \end{enumerate} \begin{prop} \label{prop:star-tree-rank-01} Let $M$ be a $0/1$ dissimilarity matrix. Let $r$ be the minimal number of graphs in a cover of $G_M$ by cliques and star trees, such that every edge (but not necessarily every vertex) is in some element of the cover. Then $M$ has star tree rank either $r$ or~$r+1$. Moreover, if $G_M$ has a solid cover by $r$ graphs, then $M$ has star tree rank~$r$. \end{prop} \begin{proof} Let $G_1, \ldots, G_r$ be a cover of $G_M$ by cliques and star trees. For $G_i$ a clique, define $v_i$ to be the $0/1$ vector whose $0$ entries correspond to the vertices of the clique. If $G_i$ is a star tree consisting of a central vertex $c_i$ and edges to vertices in the set $I_i$, then define $v_i$ to be the row vector which is $-1/2$ in the $c_i$ entry, $1/2$ for the entries corresponding to $I_i$ and $3/2$ otherwise. In either case, define $M_i = \pi(v_i^ T \odot v_i)$. 
Then $M_i$ has $0$ entries corresponding to the edges of $G_i$. Thus, the tropical sum of the $M_i$ has the same $0$ entries as $M$. Moreover, if the cover is a solid cover then the tropical sum is equal to $M$, so $M$ has rank at most~$r$. Otherwise, some of the positive entries of the tropical sum are greater than $1$. By additionally taking the tropical sum with the all ones matrix, we see that $M$ has rank at most $r+1$. Conversely, suppose that $M_i = \pi(v_i^ T \odot v_i)$ is a term in a representation of~$M$ as the tropical sum of star tree matrices. Then we claim that the zeroes of $M_i$ correspond to either a star tree or a complete graph. If all entries of $v_i$ are non-negative, then the zeroes of $M_i$ correspond to the complete graph on the vertices where $v_i$ is $0$. Otherwise, since $M_i$ must be non-negative, there can be at most one negative entry of~$v_i$, say with value $-a$; then all other entries must be at least $a$. Then the $0$ entries of $M_i$ correspond to the star tree with edges between the entry with $-a$ and the entries with value~$a$. Thus, any decomposition of $M$ as the tropical sum of star tree metrics yields a cover of $G_M$ by cliques and star trees, so $M$ has tropical rank at least $r$. \end{proof} We do not know if the definition of a solid cover can be weakened in any way. In other words, we do not know of any $0/1$ matrices $M$ such that $r$ is the minimal size of a cover of $G_M$ by cliques and star trees, and $G_M$ does not have a solid cover of size $r$, but $M$ has star tree rank~$r$. In contrast to symmetric Barvinok rank, the upper bound of $n-2$ on the star tree rank of an $ n\times n$ dissimilarity matrix cannot be achieved by a $0/1$ matrix for large~$n$. Recall that the \emph{Ramsey number} $R(k,k)$ is the smallest integer such that any graph on at least $R(k,k)$ vertices has either a clique or a independent set of size~$k$. Then we have the following stronger bound on the star tree rank of a $0/1$ matrix. \begin{proposition} For $n \geq R(k,k)$, any $n\times n$ $0/1$ dissimilarity matrix has star tree rank at most $n - k +1$. \end{proposition} \begin{proof} By the assumption on $n$, the graph $G_M$ has either a clique of size $k$ or an independent set of size $k$. In the former case, we can cover $G_M$ by a star tree centered at each vertex not part of the clique, together with the clique itself. This gives a solid cover by $n-k+1$ subgraphs. In the latter case, we can just take the star trees centered at the vertices not in the independent set, giving a cover of $G_M$ by $n-k$ subgraphs. In either case, Proposition~\ref{prop:star-tree-rank-01} shows that $M$ has rank at most $n-k+1$. \end{proof} \begin{corollary} For $n \geq 18$, every $n \times n$ $0/1$ dissimilarity matrix has star tree rank at most $n-3$. \end{corollary} \begin{proof} The Ramsey number $R(4,4)$ is $18$~\cite{r}. \end{proof} \begin{theorem} Let $ r$ and $ n$ be positive integers. Then the dimension of the space of dissimilarity $ n\times n$ matrices of star tree rank at most $ r$ is \begin{equation*} \min\left\{{ n+1\choose 2} -{ n-r+1\choose 2},~{ n\choose 2}\right\}. \end{equation*} \end{theorem} \begin{proof} Let $D = \min\left\{{n+1 \choose 2} - {n-r+1 \choose 2}, {n \choose 2}\right\}$ be the dimension from the theorem statement. The dimension cannot be any larger than $D$ by the Bieri-Groves Theorem~\cite[Thm.\ A]{bg}, because $D$ is the dimension of the classical secant variety, according to Theorem~2 in~\cite{drton}. 
Therefore, it is sufficient to construct a matrix with star tree rank~$r$ which has a $D$-dimensional neighborhood of star tree rank~$r$ matrices. If $r \geq n$, then $D$ is ${n \choose 2}$, the dimension of the set of $n\times n $ dissimilarity matrices. Since higher secant sets cannot have smaller dimension, it is sufficient to assume $r \leq n$. We will construct $r$ vectors $v_1, \ldots, v_r$, with $v_{k,i}$ denoting the $i$th entry of $v_k$, and then define $M$ to be the tropical sum of the star tree matrices $\pi(v_k^T \odot v_k)$. First, we fix any order on the set of pairs of distinct integers $S = \{(i,j) : r < i < j \leq n\}$. Then, for $1 \leq k \leq r$ and $r < i \leq n$, we choose $v_{k,i}$ as follows: if the $k$th pair of integers includes $i$, then we choose $v_{k,i}$ in the range $0<v_{k,i} < 1$ and otherwise we choose $v_{k,i} > 2$. For $i$ in the range $k \leq i \leq r$, we choose $v_{k,i}$ inductively beginning with $k=r$. We choose $v_{r, r}$ arbitrarily. Then for $k < r$, and $k \leq i \leq r$, we choose $v_{k, i}$ to be much greater than any of the $v_{k+1,j}$ already chosen. Finally, let $C$ be a large real number and set $v_{k, i}$ equal to $C$, for $i < k$. Let $M$ be the tropical sum of the $M_k := \pi(v_k^T \odot v_k)$. We claim that the set of matrices which can be gotten in this way forms a $D$-dimensional affine linear neighborhood. For $i \leq r$ and $i < j$, the $(i,j)$ entry of $M$ comes from $M_i$ and in particular, is equal to $v_{i,i} + v_{i,j}$. For a fixed $i$, and taking $j > i$, these entries give us $n-i$ linearly independent functions on the matrix~$M$. Moreover, if $(i,j)$ is the $k$th pair in the ordering on $S$, and $k \leq r$, then the $(i,j)$ entry of $M$ comes from $M_k$ and is equal to $v_{k,i} + v_{k,j}$. These are linearly independent from each other and from all of the previous functions. Since the size of~$S$ is ${n-r \choose 2}$, the number of linearly independent functions on the matrix $M$ is \begin{align*} (n-1) + (n-2) + \cdots + (n-r) + &\min\left\{r, {n -r \choose 2}\right\} \\ &= {n \choose 2} - {n - r \choose 2} + \min\left\{r, {n-r \choose 2}\right\} \\ &= \min\left\{{n+1 \choose 2} - {n-r+1\choose 2}, {n\choose 2}\right\}, \end{align*} which is the desired dimension, $D$. \end{proof} \begin{remark} In fact, the difficult part of Theorem~2 in~\cite{drton} is proving the lower bound on the dimension of the classical secant variety. Our computation of the dimension of the tropical secant variety provides an alternative proof of this lower bound. \end{remark} \section{Tree rank}\label{sec:tree-rank} The tropical Grasmmannian $G_{2,n}$ is the tropical variety defined by the $3$-term Pl\"ucker relations: \begin{equation} \label{eqn:plucker} x_{ij} x_{k\ell} \oplus x_{ik} x_{j\ell} \oplus x_{i\ell} x_{jk} \quad\quad\mbox{for all $i < j < k < \ell$}. \end{equation} This condition is equivalent to coming from the distances along a weighted tree which has negative weights along the internal edges~\cite[Sec.\ 4]{ss}. In this section, we will always take the deficiency graph to be with respect to the Pl\"ucker relations in~(\ref{eqn:plucker}). As with the previous notions of rank, the tree rank of a $0/1$ matrix can be characterized in terms of covers of graphs. For any disjoint subsets $I_1, \ldots, I_k \subset [n]$ (not necessarily a partition), the \emph{complete $k$-partite graph} is the graph which has an edge between the elements of $I_i$ and $I_j$ for all $i \neq j$. 
Complete $ k $-partite graphs are characterized by the property that among vertices which are incident to some edge, the relation of having a non-edge is a transitive relation. \begin{remark} The complete $k$-partite graphs defined above are exactly those graphs whose edge set forms the set of bases of a rank~$2$ matroid on $n$ elements. The transitivity of being a non-edge is equivalent to the basis exchange axiom. Alternatively, each of the sets $I_1, \ldots, I_k$ partition the set of non-loops in the matroid into parallel classes. See~\cite{o} for definitions of these terms. In the following proposition, we will see that the Pl\"ucker relations imply the basis exchange axiom for the $0$ entries of a non-negative tree matrix. \end{remark} \begin{prop} \label{prop:tree-rank-01} Let $M$ be an $n \times n$ $0/1$ dissimilarity matrix and let $r$ be smallest size of a cover of $G_M$ by complete $k$-partite subgraphs. As in Proposition~\ref{prop:star-tree-rank-01}, we only require every edge to be in the cover, not necessarily every vertex. If $G_M$ has at most one isolated vertex then $M$ has tree rank $r$. Otherwise, $M$ has tree rank $r+1$. \end{prop} \begin{proof} Let $I_1, \ldots, I_k$ be disjoint sets defining a $k$-partite graph. We construct a tree which has $k+1$ internal vertices: one vertex $v_i$ for each $i \in [k]$ and a vertex $w$. Every element of $I_i$ has a branch of length $1/2$ to $v_i$ and each $v_i$ has a branch of length $-1/2$ to $w$. The elements of $J = [n] \setminus (I_1 \cup \cdots \cup I_k)$ are connected to $w$ by a branch of length $1$. This gives a distance matrix whose entries are $0$ for the edges of the $k$-partite graph, $2$ for the entries between elements of $J$, and $1$ elsewhere. If $G_M$ has at most one isolated vertex, then the sum of these matrices is $M$. Otherwise, adding the matrix with all entries entries equal to $1$ yields $M$. Conversely, suppose we have a decomposition of $M$ as the sum of tree matrices. For each tree matrix, we can define a graph on the vertices $[n]$ with edges corresponding to the $0$ entries. These form a cover of the graph of $M$, so we just need to show that the graph $G_T$ coming from a tree matrix will be a complete $k$-partite graph. For this, we need to show that the relation of having a non-edge is a transitive relation among vertices which are incident to some edge. Suppose that $(i,j)$ and $(j,k)$ are non-edges, but $(i,k)$ is an edge. Also suppose that $j$ has an edge to some other vertex $\ell$. Then $M_{ik} + M_{j\ell} = 0$, but $M_{ij}$ and $M_{jk}$ are positive and $M_{k\ell}$ and $M_{i\ell}$ are non-negative, which contradicts the Pl\"ucker relation. Thus, $G_T$ is a complete $k$-partite graph, and we have proved the theorem when $G_M$ has at most one isolated vertex. Now suppose that $G_M$ has at least $2$ isolated vertices, $i$ and~$j$. There must be some tree matrix $T$ in the decomposition of $G_M$ such that $T_{ij} = 1$. Suppose that the corresponding graph $G_T$ has an edge between two vertices $k$ and~$\ell$, which must be distinct from $i$ and~$j$ by assumption. Since $i$ and~$j$ are isolated, $T_{ik}$, $T_{i\ell}$, $T_{jk}$, and~$T_{j\ell}$ must each be at least~$1$. But $T_{ij} + T_{kl} = 1$, which contradicts the Pl\"ucker relation. Thus, $G_T$ must be the trivial graph, so the decomposition of $M$ must have one more term than a minimal cover of $G_M$. Therefore, $M$ has rank $r + 1$. 
\end{proof} Note that by taking the $I_i$ in the definition of $k$-partite graph to be singletons, we get complete graphs, and by taking $k=2$ with $I_1$ a singleton and $I_2$ any set disjoint from $I_1 $, we get star trees. Together with Propositions~\ref{prop:star-tree-rank-01} and~\ref{prop:tree-rank-01}, this confirms, for $0/1$-matrices, the second inequality in~(\ref{eqn:rank-inequals}). \begin{lemma} \label{lem:extn-tree-rank-1} Let $M$ be an $m\times m$ tree matrix, $n > m$ an integer, and $C$ any real number. Then there exists an $n \times n$ tree matrix $N$, whose upper left $m\times m$ submatrix is $M$ and such that the other entries are each at least~$C$. \end{lemma} \begin{proof} The matrix $M$ encodes the distances on some weighted tree $T$ on $m$ leaves. Pick any internal vertex $v$ of~$T$ and let $C'$ be the smallest distance between $v$ and a leaf of $m$. Let $T'$ be the tree on $n$ leaves formed from $T$ by attaching each leaf $i$ with $m < i \leq n$ to $v$ by an edge with weight $\max\{\frac{1}{2} C, C-C'\}$. Let $N$ be the distance matrix of $T'$ and $N$ is a tree matrix with the desired properties. \end{proof} \begin{prop} The dimension of the set of dissimilarity $n \times n$ matrices of tree rank at most $r$ is the dimension of the classical secant variety, \begin{align*} {n \choose 2} - {n-2r\choose 2} &\quad\mbox{if } r \leq \frac{n}{2}, \\ {n \choose 2} &\quad\mbox{if } r \geq \frac{n-1}{2}. \end{align*} \end{prop} \begin{proof} The tropical secant variety is contained in the tropicalization of the classical variety, which has the given dimension by~\cite[Thm.\ 2.1i]{cgg}. Therefore, it is sufficient to prove that the tropical secant variety has at least the given dimension by the Bieri-Groves Theorem~\cite[Thm.\ A]{bg}. To prove the lower bound on the dimension, first note that for $r = \lfloor n/2\rfloor$, $n-2r$ is either $0$ or~$1$, so the first part of the statement implies that the dimension of the $r$th secant set is ${n \choose 2}$, the dimension of the space of dissimilarity $n \times n$ matrices. Since higher secant varieties are at least as large as the preceeding ones, this implies the second part of the statement. The proof of the first part, when $r \leq n/2$, is by induction on $n$. For $n \leq 3$ all dissimilarity matrices have tree rank~$1$, because the four-point condition is trivial. Thus, the dimension of the $1$st secant set is ${n \choose 2} = {n \choose 2} - {n-2 \choose 2}$, as desired. Now suppose $n > 3$ and by the inductive hypothesis, let $N$ be an $(n-2) \times (n-2)$ matrix of tree rank at most $r-1$ such that the locus of matrices with tree rank at most $r-1$ has dimension ${n-2 \choose 2} - {n - 2r \choose 2}$ in a neighborhood of $N$. Consider the caterpillar tree with leaves in the order $1$, $3$, $4$, \ldots, $n$, $2$, pendant edge length $p_i$ for leaf $i$, and negative internal edges $q_3, \ldots, q_{n-1}$. Thus, the first two rows and columns of the distance matrix $M$ are defined by: \begin{align*} M_{1,2} = M_{2,1} &= p_1 + q_3 + \cdots + q_{n-1} + p_2, \\ M_{1,i} = M_{i,1} &= p_1 + \sum_{j=3}^{i-1} q_j + p_i \quad \mbox{for } i > 2, \\ M_{2,i} = M_{i,2} & = p_i + \sum_{j=i}^{n-1} q_j + p_2 \quad \mbox{for } i > 2. 
\end{align*} Just from the values in these two rows, we can solve for the values of the edge lengths: \begin{align*} p_1 &= \frac{1}{2}(M_{1,2} + M_{1,3} - M_{2,3}) \\ p_2 &= \frac{1}{2}(M_{1,2} + M_{2,n} - M_{1,n}) \\ p_i &= \frac{1}{2}(M_{1,i} + M_{2,i} - M_{1,2}) \quad\mbox{for } i > 2 \\ q_i &= \frac{1}{2}(M_{1,i+1} + M_{2,i} - M_{1,i} - M_{2,i}) \end{align*} Therefore, the projection onto the first two columns has dimension $n + (n-3) = 2n-3$ in a neighborhood of this point. Assume that the $p_i$ are sufficiently negative that the lower right $(n-2)\times (n-2)$ submatrix of $M$ is less than $N$ in every entry. Then \begin{equation*} M \oplus \begin{bmatrix} * & \infty & \infty \\ \infty & * & \infty \\ \infty & \infty & N \end{bmatrix} = \begin{bmatrix} * & M_{12} & M_{13} & \cdots & M_{1n} \\ M_{12} & * & M_{23} & \cdots & M_{2n} \\ M_{13} & M_{23} & * & \cdots & N_{1,n-2} \\ \vdots & \vdots & \vdots & & \vdots \\ M_{1n} & M_{2n} & N_{n-2,1} & \cdots & * \end{bmatrix} \end{equation*} has tree rank at most $r$ and has a neighborhood of such matrices of dimension \begin{equation*} 2n - 3 + {n-2 \choose 2} - {n - 2r \choose 2} = {n \choose 2} - {n-2r \choose 2}, \end{equation*} which is the desired expression. \end{proof} Unlike the cases of symmetric Barvinok rank and star tree rank, we do not know the maximum tree rank of a $n\times n$ dissimilarity matrix for large $n$. We have an upper bound of $n-2$ by Theorem~\ref{thm:max-star-tree-rank}, and we can improve on this slightly: \begin{thm}\label{thm:tree-rank-n-3} For $n\geq 6$, a $n \times n$ dissimilarity matrix~$M$ has tree rank at most $n-3$. \end{thm} \begin{proof} Let $M$ be a $6 \times 6$ dissimilarity matrix. Consider the tropical polynomial whose terms correspond to the perfect matchings on $6$ vertices: \begin{gather*} x_{12}x_{34} x_{56} \oplus x_{12}x_{35}x_{46} \oplus x_{12}x_{36}x_{45} \oplus x_{13}x_{24}x_{56} \oplus x_{13}x_{25}x_{46} \\ {} \oplus x_{13}x_{26}x_{45} \oplus x_{14}x_{23}x_{56} \oplus x_{14}x_{25}x_{36} \oplus x_{14}x_{26}x_{35} \oplus x_{15}x_{23}x_{46} \\ {} \oplus x_{15}x_{24}x_{36} \oplus x_{15}x_{26}x_{34} \oplus x_{16}x_{23}x_{45} \oplus x_{16}x_{24}x_{35} \oplus x_{16}x_{25}x_{34}. \end{gather*} Note that this is the tropicalization of the Pfaffian of a $6\times 6$ dissimilarity matrix (see, for example~\cite[Ch.~7]{godsil}). After relabeling the vertices, we can assume that the minimum is achieved by the term $x_{12} x_{34} x_{56}$. In particular, this means that for the $4$-point condition applied to the upper left $4\times 4$ matrix, the minimum is achieved by $x_{12} x_{34}$. We can set $X_1$ equal to the smaller of $M_{13} + M_{24} - M_{12}$ and $M_{14} + M_{23} - M_{12}$, so that the matrix \begin{equation*} N = \begin{bmatrix} * & M_{12} & M_{13} & M_{14} & \infty & \infty \\ M_{12} & * & M_{23} & M_{24} & \infty & \infty \\ M_{13} & M_{23} & * & X_1 & \infty & \infty \\ M_{14} & M_{24} & X_1 & * & \infty & \infty \\ \infty & \infty & \infty & \infty & * & \infty \\ \infty & \infty & \infty & \infty & \infty & * \end{bmatrix} \end{equation*} is a tree matrix. Moreover, by our assumption, we have that $X_1 \geq M_{34}$. 
By similar logic, there exist $X_2$ and $X_3$ such that the following is an expression of~$M$ as the tropical sum of $3$ tree matrices: \begin{equation*} N \oplus \begin{bmatrix} * & X_2 & \infty & \infty & M_{15} & M_{16} \\ X_2 & * & \infty & \infty & M_{25} & M_{26} \\ \infty & \infty & * & \infty & \infty & \infty \\ \infty & \infty & \infty & * & \infty & \infty \\ M_{15} & M_{25} & \infty & \infty & * & M_{56} \\ M_{16} & M_{26} & \infty & \infty & M_{56} & * \end{bmatrix} \\ \oplus \begin{bmatrix} * & \infty & \infty & \infty & \infty & \infty \\ \infty & * & \infty & \infty & \infty & \infty \\ \infty & \infty & * & M_{34} & M_{35} & M_{36} \\ \infty & \infty & M_{34} & * & M_{45} & M_{46} \\ \infty & \infty & M_{35} & M_{45} & * & X_3 \\ \infty & \infty & M_{36} & M_{46} & X_3 & * \\ \end{bmatrix} \end{equation*} Therefore, $M$ has tree rank at most~$3$. For $n > 6$, the theorem follows from Lemma~\ref{lem:tree-rank-submatrix}. \end{proof} \begin{table} \begin{tabular}{|l|l|l|} \hline n & maximum tree rank & example \\ \hline $3$ & $1$ & \\ $4$ & $2$ & \\ $5$ & $3$ & $0/1$ matrix corresponding to 5-cycle \\ $6$ & $3$ & \\ $7$ & $4$ & \\ $8$ & $5$ & \\ $9$ & $6$ & $M$ in~(\ref{eqn:tree-rank-6}) \\ $10$ & $6$ or $7$ & Any extension of $M$ in~(\ref{eqn:tree-rank-6})\\ $9k$ & between $6k$ and $9k-3$ & $M_k$ from discussion following~(\ref{eqn:tree-rank-6}) \\ \hline \end{tabular} \caption{Maximum possible tree rank of an $n\times n$ dissimilarity matrix, to the best of our knowledge. The upper bounds come from Theorems~\ref{thm:max-star-tree-rank} and~\ref{thm:tree-rank-n-3}. The examples have the largest tree ranks that are known to us. The omitted examples can be provided by taking a principal submatrix of a larger example, by Lemma~\ref{lem:tree-rank-submatrix}.} \label{tbl:tree-rank} \end{table} Beginning with $n =10$, we don't know whether or not the bound in Theorem~\ref{thm:tree-rank-n-3} is sharp. For the following $9 \times 9$ matrix, found by random search, the deficiency graph was computed to have chromatic number $6$: \begin{equation} \label{eqn:tree-rank-6} M = \begin{bmatrix} * & 1 & 6 & 7 & 2 & 3 & 8 & 9 & 6 \\ 1 & * & 2 & 7 & 9 & 7 & 5 & 7 & 1 \\ 6 & 2 & * & 6 & 0 & 6 & 1 & 7 & 1 \\ 7 & 7 & 6 & * & 3 & 3 & 8 & 5 & 3 \\ 2 & 9 & 0 & 3 & * & 5 & 7 & 5 & 7 \\ 3 & 7 & 6 & 3 & 5 & * & 9 & 3 & 9 \\ 8 & 5 & 1 & 8 & 7 & 9 & * & 2 & 3 \\ 9 & 7 & 7 & 5 & 5 & 3 & 2 & * & 8 \\ 6 & 1 & 1 & 3 & 7 & 9 & 3 & 8 & * \\ \end{bmatrix} \end{equation} Together with Theorem~\ref{thm:tree-rank-n-3}, this computation shows that $M$ has tree rank~$6$. For any $k \geq 1$, we can form an $9k\times 9k$ matrix $M_k$ by putting $M$ in blocks along the diagonal and setting all other entries to $10$. The deficiency graph of $M_k$ includes $k$ copies of the deficiency graph of $M$, and all edges between distinct copies. Therefore, the chromatic number, and thus the tree rank, are at least $6k$. On the other hand, in order to provide examples of an $n \times n$ matrix with tree rank $n-3$ for all $n \leq 9$, we have the following lemma. \begin{lemma} \label{lem:tree-rank-submatrix} Let $M$ be an $n\times n$ matrix. If any $(n-m) \times (n-m)$ principal submatrix has tree rank $r$, then $M$ has tree rank at most $r +m$. \end{lemma} \begin{proof} Fix a decomposition of the $(n-m) \times (n-m)$ principal submatrix into $r$ tree matrices. We can extend each tree matrix to an $n\times n$ tree matrix by Lemma~\ref{lem:extn-tree-rank-1}. 
For each index~$i$ not in the principal submatrix, define $v_i$ to be the vector which is $C+M_{ij}$ in the $j$th entry and $-C$ in the $i$th entry, where $C$ is a large real number. Then, the extended tree matrices, together with $\pi(v_i^T \odot v_i)$ for all $i$ not in the principal submatrix, give a decomposition of $M$ into $r+m$ tree matrices, as desired. \end{proof} These results on the maximum tree rank are summarized in Table~\ref{tbl:tree-rank}. \section{Symmetric Barvinok rank for \texorpdfstring{$n = 3$}{n=3}} \label{sec:sym-barv-3} In this section, we explicitly describe the stratification of $3\times 3$ symmetric matrices by symmetric Barvinok rank. By Theorem~\ref{thm:max-rank}, the rank is at most~$3$, and the locus of rank~$1$ is the tropical variety defined by the $2\times 2$ minors, so it suffices to characterize the matrices of rank at most~$2$. Following~\cite{dss}, we call a square matrix \emph{tropically singular} if it lies in the tropical variety of the determinant. \begin{prop} \label{prop:sym-3} Let $M$ be a symmetric $3\times 3$ matrix. Then the following are equivalent: \begin{enumerate} \item $M$ has symmetric Barvinok rank at most $2$; \item The deficiency graph of $M$ is $2$-colorable; \item $M$ is tropically singular and $M_{ii} + M_{jj} \leq 2 M_{ij}$ for all $1 \leq i, j \leq 3$. \end{enumerate} \end{prop} \begin{proof} First we note that $M_{ii} + M_{jj} \leq 2M_{ij}$ implies that every term of the tropical determinant is greater than or equal to $M_{11} + M_{22} + M_{33}$. Thus, a matrix~$M$ satisfying these inequalities is tropically singular if and only if some other term of the tropical determinant equals $M_{11} + M_{22} + M_{33}$. Proposition~\ref{prop:deficiency-hypergraph} shows that (1) implies~(2). If the deficiency graph is $2$-colorable, then it can't have any loops, so $M_{ii} + M_{jj} \leq 2 M_{ij}$. Furthermore, the three diagonal entries can't form a clique, so without loss of generality, we assume that there is no edge between $11$ and $22$. Together with the inequality, this implies that $M_{11} + M_{22} = 2M_{12}$, so $M_{11}+M_{22} + M_{33} = 2M_{12} + M_{33}$, so by our initial remark, $M$ is tropically singular. Therefore (2) implies~(3). Finally, suppose that $M$ satisfies~(3). We can subtract $M_{ii}/2$ from the $i$th row and $i$th column without changing the rank, and so we assume that every diagonal entry is~$0$. The inequalities then say that all of the off-diagonal entries are non-negative. For the minimum in the tropical determinant to be achieved at least twice, we must have at least one off-diagonal entry equal to~$0$. Without loss of generality, we assume that $M_{12} = 0$. Then, \begin{equation*} M= \begin{bmatrix} 0 & 0 & M_{13} \\ 0 & 0 & M_{23} \\ M_{13} & M_{23} & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & \infty \\ 0 & 0 & \infty \\ \infty & \infty & \infty \end{bmatrix} \oplus \begin{bmatrix} 2M_{13} & M_{13} + M_{23} & M_{13} \\ M_{13} + M_{23} & 2 M_{23} & M_{23} \\ M_{13} & M_{23} & 0 \end{bmatrix}, \end{equation*} where $\infty$ is as in Remark~\ref{rmk:rank-1-extn}. Therefore, $M$ has symmetric Barvinok rank at most~$2$. \end{proof} \begin{remark} For larger matrices the symmetric Barvinok rank does not have as simple a characterization as the third condition in Proposition~\ref{prop:sym-3}. 
A necessary condition for a symmetric $n\times n$ matrix to have rank at most~$r$ is that $M_{ii} + M_{jj} \leq 2M_{ij}$ and all the $(r+1) \times (r+1)$ submatrices are tropically singular, but this condition is not sufficient. For $n \geq 5$ and $r = n$, there are $n\times n$ symmetric matrices with finite rank greater than $n$ by Remark~\ref{rmk:bipartite_graph}. Even for $n=4$ and for $r=2$ and $r=3$, the matrix \begin{equation*} M = \begin{bmatrix} 0 & 0 & 1 & 2 \\ 0 & 0 & 2 & 1 \\ 1 & 2 & 0 & 0 \\ 2 & 1 & 0 & 0 \end{bmatrix}, \end{equation*} has symmetric Barvinok rank~$4$, as the nodes $12$, $13$, $24$, and~$34$ form a $4$-clique in its deficiency graph. However, $M$ and all of its $3\times 3$ submatrices are tropically singular. \end{remark} \section{Star tree rank for \texorpdfstring{$n=5$}{n=5}}\label{sec:star-tree-5} In this section, we give an explicit characterization of the secant set of the space of star trees in the case $n=5$. We do the same for the Grassmannian in the next section. From Theorem~\ref{thm:max-star-tree-rank}, we know that the maximum star tree rank of a $5 \times 5$ matrix is $3$. On the other hand, the set of dissimilarity matrices of star tree rank~$1$ are defined by the $2\times 2$ minors. Thus, our task is to describe the second secant set of the space of star trees, i.e.\ the set of dissimilarity matrices of star tree rank~$2$. First, we recall the defining ideal of the classical secant variety. The space of star trees is the tropicalization of the projection of the rank~$1$ symmetric matrices onto their off-diagonal entries. Its second secant variety is a hypersurface in $\mathbb{C}^{10}$ defined by the following $12$-term quintic polynomial, known as the pentad~\cite{drton}: \begin{gather*} x_{12}x_{13}x_{24}x_{35}x_{45} -x_{12}x_{13}x_{25}x_{34}x_{45} -x_{12}x_{14}x_{23}x_{35}x_{45} {}+x_{12}x_{14}x_{25}x_{34}x_{35} \\ +x_{12}x_{15}x_{23}x_{34}x_{45} -x_{12}x_{15}x_{24}x_{34}x_{35} +x_{13}x_{14}x_{23}x_{25}x_{45} -x_{13}x_{14}x_{24}x_{25}x_{35} \\ {}-x_{13}x_{15}x_{23}x_{24}x_{45} +x_{13}x_{15}x_{24}x_{25}x_{34} -x_{14}x_{15}x_{23}x_{25}x_{34} +x_{14}x_{15}x_{23}x_{24}x_{35} \end{gather*} Note that the $12$ terms of the pentad correspond to the $12$ different cycles on 5 vertices. The second secant set of the space of star trees is contained in the tropicalization of the pentad, but the containment is proper. Nonetheless, the terms of the pentad play a fundamental role in characterizing matrices of rank at most~$2$. \begin{thm} Let $M$ be a $5\times 5$ dissimilarity matrix. The following are equivalent: \begin{enumerate} \item $M$ has star tree rank at most~$2$; \item The deficiency graph of $M$ is $2$-colorable; \item The minimum of the terms of the pentad is achieved at two terms which satisfy the following conditions: \begin{enumerate} \item The terms differ by a transposition; \item Assuming, without loss of generality, that the minimized terms are $x_{12}x_{23}x_{34}x_{45}x_{15}$ and $ x_{13}x_{23}x_{24}x_{45}x_{15}$, then we have that $M_{14}+M_{23} \leq M_{12}+M_{34} = M_{13}+M_{24}$. \end{enumerate} \end{enumerate} \end{thm} \begin{proof} (1) implies (2) by Proposition~\ref{prop:deficiency-hypergraph}. We show that (2) implies (3) by proving the contrapositive. Suppose that $M$ doesn't satisfy the two conditions for any pair of terms. Without loss of generality, we assume that $x_{12}x_{23}x_{34}x_{45}x_{15}$ is a minimal term from the pentad. By minimality, $M_{12} + M_{34}$ is less than or equal to $M_{13} + M_{24}$. 
If this inequality is strict, then we have an edge between $12$ and~$34$ in the deficiency graph. On the other hand, if it is an equality, then by our assumption that the conditions in~(3) don't hold, $M_{12} + M_{34}$ must be less than $M_{14} + M_{23}$, in which case we also have an edge between $12$ and~$34$. Similarly, we have edges between $34$ and~$15$, between $15$ and~$23$, between $23$ and~$45$, and between $45$ and~$12$. Thus, the graph has a $5$-cycle, and so is not $2$-colorable. Therefore, $M$ has star tree rank~$3$. Finally, suppose that the two conditions in~(3) hold. Let $A = M_{12}+M_{34} = M_{13}+M_{24}$. Then we claim that: \begin{multline*} M = \begin{bmatrix} * & M_{12} & M_{13} & A - M_{23} & \infty \\ M_{12} & * & M_{23} & M_{24} & \infty \\ M_{13} & M_{23} & * & M_{34} & \infty \\ A - M_{23} & M_{24} & M_{34} & * & \infty \\ \infty & \infty & \infty & \infty & * \end{bmatrix} \oplus \\ \def\+{\!\!+\!\!}\def\-{\!\!-\!\!} \begin{bmatrix} * & M_{14} \+ M_{25} \- M_{45} & M_{14} \+ M_{35} \- M_{45} & M_{14} & M_{15} \\ M_{14} \+ M_{25} \- M_{45} & * & B & M_{14} \+ M_{25} \- M_{15} & M_{25} \\ M_{14} \+ M_{35} \- M_{45} & B & * & M_{14} \+ M_{35} \- M_{15} & M_{35} \\ M_{14} & M_{14} \+ M_{25} \- M_{15} & M_{14} \+ M_{35} \- M_{15} & * & M_{45} \\ M_{15} & M_{25} & M_{35} & M_{45} & * \end{bmatrix}, \end{multline*} where $\infty$ is as in Lemma~\ref{lem:star-tree-extn} and $B = M_{14} + M_{25} + M_{35} - M_{15} - M_{45}$. Note that each matrix has some entries taken from~$M$, while the rest are forced by the rank~$1$ condition. We just need to check that the minimum of these two matrices is in fact~$M$. To see this, we have the following inequalities from condition~(3): \begin{align*} M_{14}+M_{23} \leq M_{12}+M_{34} &\quad\Rightarrow\quad M_{14} \leq A - M_{23} \\ M_{12}+M_{23}+M_{34}+M_{45}+M_{15} &\,\leq\, M_{12}+M_{23}+M_{35}+M_{45}+M_{14} \\ &\quad\Rightarrow\quad M_{34} \leq M_{14} + M_{35} - M_{15} \\ M_{13}+M_{23}+M_{24}+M_{45}+M_{15} &\,\leq\, M_{13}+M_{23}+M_{25}+M_{45}+M_{14} \\ &\quad\Rightarrow\quad M_{24} \leq M_{14} + M_{25} - M_{15} \\ M_{12} + M_{23} + M_{34} + M_{45} + M_{15} &\,\leq\, M_{14} + M_{34} + M_{23} + M_{25} + M_{15} \\ &\quad\Rightarrow\quad M_{12} \leq M_{14} + M_{25} - M_{45} \\ M_{13} + M_{23} + M_{24} + M_{45} + M_{15} &\,\leq\, M_{14} + M_{24} + M_{23} + M_{35} + M_{15} \\ &\quad\Rightarrow\quad M_{13} \leq M_{14} + M_{35} - M_{45} \\ M_{12}+M_{23}+M_{34}+M_{45}+M_{15} &\,\leq\, M_{12}+M_{25}+M_{35}+M_{34}+M_{14} \\ &\quad\Rightarrow\quad M_{23} \leq B \end{align*} Therefore, $M$ has star tree rank at most~$2$. \end{proof} \section{Tree rank for \texorpdfstring{$n=5$}{n=5}}\label{sec:tree-5} We now turn our attention to tree rank of $5\times 5$ dissimilarity matrices. As in the previous section, the maximum tree rank is~$3$ and so it suffices to characterize $5 \times 5$ dissimilarity matrices of tree rank at most~$2$. Unlike the previous section, the second classical secant variety is already all of $\mathbb C^{10}$, so there is no classical polynomial whose tropicalization gives us a clue to the tropical secant set. However, the tropical pentad again shows up in our characterization. First, here is a simple example of a $5\times 5$ $0/1$ dissimilarity matrix with tree rank $3$. Consider the $0/1$ matrix corresponding to the $5$-cycle $C_5$. Now, $C_5$ cannot be covered by fewer than $3$ $k$-partite graphs, and so the matrix has tree rank at least $3$ by Proposition~\ref{prop:tree-rank-01}. 
On the other hand, it has tree rank at most $3$ by Theorem~\ref{thm:max-star-tree-rank} and the inequality in~(\ref{eqn:rank-inequals}). We will see in Remark~\ref{remark_5_cycle} that this matrix is, in a certain sense, the only such example. Let $P $ be the tropical polynomial in variables $\{x_{ij}:1\le i< j\le 5\} $ which is the tropical sum of the 22 tropical monomials of degree 5 in which each $i \in\{1, \ldots ,5\}$ appears in a subscript exactly twice. Thus $P $ has $12$ monomials of the form $x_{12}x_{23}x_{34}x_{45}x_{15}$, forming the terms of the pentad, and $10$ new monomials of the form $x_{12}x_{23}x_{31}x_{45} ^ 2$. Let us call terms of the former kind \emph{pentagons}, and terms of the latter kind \emph{triangles}. \begin{theorem} \label{thm:tree-rank-5} Let $M $ be a $5\times 5 $ dissimilarity matrix. Then the following are equivalent: \begin{enumerate} \item $M $ has tree rank at most $2$; \item The deficiency graph is $2$-colorable; \item The tropical polynomial~$P$ achieves its minimum at a triangle. \end{enumerate} \end{theorem} \begin{proof} First, (1) implies (2) by Proposition~\ref{prop:deficiency-hypergraph}. For (2) implies (3), we prove the contrapositive. Suppose the minimal terms of $P$ are all pentagons; without loss of generality, we assume that $x_{12}x_{23}x_{34}x_{45}x_{15}$ is a minimal term. Since $x_{14}x_{45}x_{15}x_{23}^2$ is not minimal, we have $M_{12} + M_{34} < M_{14} + M_{23}$. Similarly, we have, \begin{align*} M_{12} + M_{23} + M_{34} + M_{45} + M_{15} &< 2M_{15} + M_{23} + M_{34} + M_{24}, \mbox{ and}\\ M_{12} + M_{23} + M_{34} + M_{45} + M_{15} &< 2M_{45} + M_{12} + M_{23} + M_{13}. \end{align*} Adding these together and cancelling, we get $M_{12} + M_{34} < M_{13} + M_{24}$. Thus, $12$ and~$34$ are adjacent in the deficiency graph. By similar reasoning, we have adjacencies $12-34-15-23-45-12 $ in the deficiency graph, so it has a five cycle and is not 2-colorable. Finally, we prove that (3) implies (1). Assume without loss of generality that $x_{34}x_{35}x_{45}x_{12}^ 2 $ is among the terms minimizing $P $. This implies that $x_{12}x_{34}$, $x_{12}x_{35}$, and $x_{12}x_{45}$ are each minimal terms in their respective Pl\"ucker equations. Then we can use Lemmas~\ref{triangle_complement} and~\ref{triangle} below to obtain a decomposition of $M $ into two tree matrices. \begin{lemma}\label{triangle_complement} For any $5 \times 5$ dissimilarity matrix $M$ such that $x_{12}x_{34}$, $x_{12}x_{35}$, and $x_{12}x_{45}$ are each minimal terms in their respective Pl\"ucker equations, there exists some $5 \times 5$ tree matrix $T$, such that for every $ij \in{[5] \choose 2} $, we have $T_{ij}\ge M_{ij} $, with equality if $ij\in\{12, 13, 14, 15, 23, 24, 25\} $. \end{lemma} \begin{proof} If $ M$ satisfies \begin{align*} M_{14}+ M_{23} &\le M_{13}+ M_{24}, \\ M_{15}+ M_{24} &\le M_{14}+ M_{25}, \\ M_{13}+ M_{25} &\le M_{15}+ M_{23}, \end{align*} then adding shows that each inequality must be an equality. Thus, without loss of generality, we may assume that \begin{align} \label{eqn:inequalities_1}M_{14}+ M_{23} &\ge M_{13}+ M_{24}, \\ \label{eqn:inequalities_2}M_{15}+ M_{24} &\le M_{14}+ M_{25}, \\\ \label{eqn:inequalities_3}M_{13}+ M_{25} &\le M_{15}+ M_{23}. \end{align} Every other case is equivalent to this one via permutations of $\{1,2\}$ and $\{3,4,5\}$. Now define $T $ as follows. 
Let $ T_{ij}= M_{ij} $ for $ij\in\{12, 13, 14, 15, 23, 24, 25\} $, and \begin{align*} T_{34} & = T_{24} + T_{ 13} - T_{12},\\ T_{35} & = T_{25} + T_{ 13} - T_{12},\\ T_{45} & = T_{15} + T_{ 24} - T_{12}. \end{align*} That $T $ dominates $ M$ in every coordinate follows from the inequalities~(\ref{eqn:inequalities_1}), (\ref{eqn:inequalities_2}), and~(\ref{eqn:inequalities_3}). To check that $T $ has tree rank 1, it suffices to check the four-point condition on each 4-tuple. This condition is satisfied for the 4-tuples $\{1,2,3,4\} $, $ \{1,2,3,5\}$, and $ \{1,2,4,5\}$, by choice of $T_{34},T_{35},T_{45} $. Furthermore, we claim that \begin{align*} T_{15} +T_{34} &=T_{13} +T_{45} \le T_{14} +T_{35},\textrm{ and} \\ T_{24} +T_{35} & =T_{25} +T_{34} \le T_{23} +T_{45}. \end{align*} These inequalities follow immediately from substituting for $T_{34},T_{35},T_{45} $ and using inequalities (\ref{eqn:inequalities_2}) and~(\ref{eqn:inequalities_3}). \end{proof} \begin{lemma}\label{triangle} For any $5 \times 5$ dissimilarity matrix $M$, there exists some $5 \times 5$ tree matrix $T'$ such that for every pair of indices $i$ and $j$, we have $T'_{ij}\ge M_{ij} $, with equality if $ij\in\{34, 35, 45\} $. \end{lemma} \begin{proof} The $\{3,4,5\}$-principal submatrix of $M$ is a tree matrix and therefore $T'$ exists with the desired properties by Lemma~\ref{lem:extn-tree-rank-1}. \end{proof} We can now finish the proof of Theorem~\ref{thm:tree-rank-5}. Let $T $ be as given in Lemma~\ref{triangle_complement} and let $T'$ be as given in Lemma~\ref{triangle}. Then $ M = T \oplus T'$ and so $ M$ has tree rank at most~2. \end{proof} In the rest of this section, we present a detailed analysis of the deficiency graphs~$\Delta_M$ for $n=5$. There are $5$ tropical Pl\"ucker relations on a $5 \times 5$ matrix, each containing $3$ terms. Each term is the tropical product of terms with disjoint entries. Thus, $\Delta_M$ is a subgraph with at most $5$ edges of the Petersen graph~$P_{10}$, which is the graph on vertices ${[5] \choose 2}$ with an edge between $ij$ and $kl$ if and only if $\{i, j\}$ and $\{k, l\}$ are disjoint sets. The following theorem describes the possible subgraphs that $\Delta_M$ can be. \begin{figure} \begin{center} \input{Petersen_subgraphs.pstex_t} \caption{The two 2-colorable possibilities for $\Delta_M$.} \label{fig:Petersen-subgraphs} \end{center} \end{figure} \begin{theorem}\label{thm:tree-rank-5-deficiency} Let $M$ be a $5\times 5$ dissimilarity matrix. Then the deficiency graph $\Delta_M$ is precisely one of the following: \begin{enumerate} \item The trivial graph, in which case $M$ has tree rank~$1$. \item A non-trivial graph with fewer than $5$ edges, in which case $M$ has tree rank~$2$. \item Up to relabeling, either of the two graphs in Figure~\ref{fig:Petersen-subgraphs}, in which case $M$ has tree rank~$2$. \item A $5$-cycle, in which case $M$ has tree rank~$3$. \end{enumerate} \end{theorem} \begin{proof} The matrix $M$ is a tree matrix if and only if the four-point condition holds for all $4$-tuples, i.e.\ if and only if $\Delta_M$ is trivial. This is the first case. Now suppose that $\Delta_M$ is a non-trivial graph with at most $4$ edges. Then, at least one four-point condition holds, so Lemma~\ref{lem:tree-rank-submatrix} implies that $M$ has tree rank at most~$2$. However, at least one four-point condition is violated, so $M$ must have tree rank exactly~$2$. 
In the rest of the proof, we assume that $\Delta_M$ has $5$ edges, i.e.\ that each of the five 4-tuples yields one deficient pair. Thus, $\Delta_M$ contains exactly one of the three edges $12-34$, $13-24$, and $14-23$, and similarly for the remaining 4-tuples. \begin{figure} \begin{center} \input{alternating_cycle_1.pstex_t} \caption{An alternating 6-cycle, where the solid edges lie in $H $ and the dotted edges do not.} \label{fig:alternating-cycle-1} \end{center} \end{figure} Let us say that a subgraph $H $ of $P_{10} $ \emph{admits an alternating even cycle} if there exists a cycle of even length in $ P_{10}$ such that, traversing the cycle, the edges are alternately members and nonmembers of $ H$ (Figure~\ref{fig:alternating-cycle-1}). \begin{lemma}\label{lem:alternating-cycle} The graph $ \Delta_M$ admits no alternating even cycle. \end{lemma} \begin{proof} Note that the only even cycles in the Petersen graph are of lengths 6 and~8 (see, for example, \cite[Ch.~4.2]{bm}). Suppose $ C$ is an alternating cycle of length~6. All 6-cycles of $P_{10}$ are isomorphic, so after relabeling, we may assume that $ C$ has vertices $ 45, 13,25,34, 15,23$, in that order. Now suppose that $45-13$, $25-34$, and $15-23$ are the three edges of the cycle in $H$. Then \begin{align*} M_{45}+ M_{13} &< M_{34}+ M_{15}, \\ M_{25}+ M_{34} &< M_{23}+ M_{45}, \\ M_{15}+ M_{23} &< M_{13}+ M_{25}. \end{align*} But adding these inequalities yields a contradiction. If instead $13- 25$, $34-15$, and $23-45$ are the three edges of the cycle lying in $ H$, then we obtain a similar contradiction. The case of a cycle of length 8 is analogous, so we omit it. \end{proof} We now use Lemma~\ref{lem:alternating-cycle} to show that, up to relabeling, the only two possibilities for $\Delta_M$, assuming that it is $2$-colorable, are those in Figure~\ref{fig:Petersen-subgraphs}. We organize our case analysis according to the maximum degree in the graph $\Delta_M$. If $\Delta_M$ has maximum degree~1, so that it is a perfect matching in $P_{10}$, then one may check that it has an alternating 8-cycle. This is impossible by Lemma~\ref{lem:alternating-cycle}. If $\Delta_M$ has maximum degree~2, then it is either a 4-path and a 1-path, a 3-path and two 1-paths, or two 2-paths and a 1-path. (By a $ k $-path we mean a path with $k $~edges). It is easy to check that in each of these cases, $\Delta_M$ admits an alternating $6 $-cycle, which is again impossible by Lemma~\ref{lem:alternating-cycle}. Thus, $\Delta_M$ has a vertex of degree 3, which we assume to be the vertex~$12$. In this case, one may check that, up to relabeling, $\Delta_M$ is one of the two graphs shown in Figure~\ref{fig:Petersen-subgraphs}. Note that in either case, the graphs are $2$-colorable, and thus $M$ has tree rank~$2$ by Theorem~\ref{thm:tree-rank-5}. Finally, if $\Delta_M$ is not $2$-colorable, then it must have an odd cycle. The Petersen graph has no $3$-cycles, so $\Delta_M$ must be a $5$-cycle. \end{proof} \begin{remark}\label{remark_5_cycle} If $M$ is the $0/1$ matrix corresponding to the $5$-cycle $C_5$, then $\Delta_M$ is also a $5$-cycle by Theorem~\ref{thm:tree-rank-5-deficiency}. Explicitly, $\Delta_M$ has an edge for each non-adjacent pair of edges in the graph $C_5$. Moreover, Theorem~\ref{thm:tree-rank-5-deficiency} tells us that any other matrix $N$ with tree rank~$3$ must have the same deficiency graph (up to relabeling). In this sense, $M$ is essentially the only example of a $5\times 5$ dissimilarity matrix with tree rank~$3$. 
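The conditions appearing in Theorems~\ref{thm:tree-rank-5} and~\ref{thm:tree-rank-5-deficiency} are easy to test by machine. The following Python sketch (an illustration only, with our own helper names and $0$-based labels $0,\dots,4$) builds the deficiency graph by recording an edge of $\Delta_M$ whenever one pairing of a $4$-subset is strictly smaller than the other two, as in the proof of Theorem~\ref{thm:tree-rank-5}, tests $2$-colorability, and compares the pentagon and triangle terms of the tropical polynomial~$P$. It is run on one natural reading of the $0/1$ matrix of $C_5$, namely $M_{ij}=0$ exactly on the edges of the $5$-cycle; adding a common constant to all off-diagonal entries shifts every pentagon and triangle term by the same amount and every pairing within a $4$-subset by the same amount, so the outcome does not depend on this normalization.
\begin{verbatim}
from itertools import combinations, permutations

def pairings(quad):
    # the three ways to split a 4-set into two pairs
    a, b, c, d = quad
    return [((a, b), (c, d)), ((a, c), (b, d)), ((a, d), (b, c))]

def deficiency_graph(M):
    # edge {ij, kl} whenever M_ij + M_kl is strictly smaller than both
    # other pairings of the 4-subset (the rule used in the proof above)
    edges = set()
    for quad in combinations(range(5), 4):
        sums = sorted((M[p][q] + M[r][s], ((p, q), (r, s)))
                      for (p, q), (r, s) in pairings(quad))
        if sums[0][0] < sums[1][0]:
            edges.add(frozenset(sums[0][1]))
    return edges

def two_colorable(edges):
    # depth-first 2-coloring; fails exactly when there is an odd cycle
    color, adj = {}, {}
    for e in edges:
        u, v = tuple(e)
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    for start in adj:
        if start in color:
            continue
        color[start], stack = 0, [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    stack.append(v)
                elif color[v] == color[u]:
                    return False
    return True

def min_at_triangle(M):
    # does the tropical polynomial P attain its minimum at a triangle term?
    pent = min(M[s[0]][s[1]] + M[s[1]][s[2]] + M[s[2]][s[3]]
               + M[s[3]][s[4]] + M[s[4]][s[0]] for s in permutations(range(5)))
    tri = min(2 * M[i][j] + M[a][b] + M[a][c] + M[b][c]
              for i, j in combinations(range(5), 2)
              for a, b, c in [tuple(x for x in range(5) if x not in (i, j))])
    return tri <= pent

# M_ij = 0 on the edges of the 5-cycle 0-1-2-3-4-0 and 1 otherwise
cycle = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]}
M = [[0 if i == j or frozenset((i, j)) in cycle else 1 for j in range(5)]
     for i in range(5)]
E = deficiency_graph(M)
print(len(E), two_colorable(E), min_at_triangle(M))
# prints: 5 False False
\end{verbatim}
With this input the sketch finds the five-cycle deficiency graph described above, and neither criterion of Theorem~\ref{thm:tree-rank-5} is satisfied, in agreement with the matrix having tree rank~$3$.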
\end{remark} \section{Open questions}\label{sec:open-questions} \begin{itemize} \item What is the maximum tree rank of a $n\times n$ dissimilarity matrix? Theorem~\ref{thm:tree-rank-n-3} gives an upper bound, but beginning with $n=10$, we do not know if it is sharp. Specifically, does there exist a $10 \times 10$ dissimilarity matrix with tree rank $7$? \item Give a (reasonable) algorithm for computing tree rank. Note that both the tropical Veronese and the space of star trees are classical linear spaces, and so the results in~\cite{de} can be applied to compute star tree rank and symmetric Barvinok rank. However, it would be nice to have a good algorithm for computing tree rank. \item Is it true that the rank of a matrix, according to any of our notions, is always equal to the chromatic number of the corresponding deficiency graph? In Sections~\ref{sec:sym-barv-3}, \ref{sec:star-tree-5}, and~\ref{sec:tree-5}, we observed that this was true for symmetric Barvinok rank with $n \leq 3$, star tree rank with $n \leq 5$, and tree rank with $n \leq 5$ respectively. In general, we believe the answer is no, but we do not know of a counterexample. \item In phylogenetics, trees have all edges, including the pendant edges, labeled by positive weights, or, equivalently, after negating, by negative weights. In this way, phylogenetic trees form a subset of the tropical Grassmannian, and we define the \emph{phylogenetic tree rank} to be the rank with respect to this subset. One can ask the same questions about phylogenetic tree rank as in this paper: Which matrices have finite phylogenetic tree rank? What is the maximum possible finite phylogenetic tree rank? What is an explicit characterization of the secant sets for small matrices? Work in this direction was begun in~\cite{c}. \item What is an algorithm for computing phylogenetic tree rank? \item Does a matrix have tree rank~$2$ if and only if all its principal $6 \times 6$ submatrices have tree rank at most~$2$? Pachter and Sturmfels have made the same conjecture for their definition of tree rank~\cite[p.~ 124]{ps}. \item What is an explicit description of $6\times 6$ matrices with tree rank at most~$2$, along the lines of Theorem~\ref{thm:tree-rank-5}? Does this help with the previous conjecture? There is a necessary condition coming from applying Theorem~\ref{thm:tree-rank-5} to every $5 \times 5$ minor, and another from the fact that the matrix must be in the tropicalization of the classical secant variety. However, these two conditions together may not be sufficient for a $6\times 6$ matrix to have tree rank at most~$2$. \end{itemize} \section*{Acknowledgments} We thank Bernd Sturmfels for his guidance and a close reading of the text. Maria Ang\'elica Cueto helped us understand the connection to phylogenetics and William Slofstra pointed us to~\cite{godsil}. Melody Chan was supported by the Department of Defense through the National Defense Science and Engineering Graduate Fellowship Program.
{ "timestamp": "2009-12-08T07:12:04", "yymm": "0912", "arxiv_id": "0912.1411", "language": "en", "url": "https://arxiv.org/abs/0912.1411", "abstract": "We introduce and study three different notions of tropical rank for symmetric and dissimilarity matrices in terms of minimal decompositions into rank 1 symmetric matrices, star tree matrices, and tree matrices. Our results provide a close study of the tropical secant sets of certain nice tropical varieties, including the tropical Grassmannian. In particular, we determine the dimension of each secant set, the convex hull of the variety, and in most cases, the smallest secant set which is equal to the convex hull.", "subjects": "Combinatorics (math.CO)", "title": "Three notions of tropical rank for symmetric matrices", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575147530351, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.7091542051888526 }
https://arxiv.org/abs/1609.06171
Coincidences among skew dual stable Grothendieck polynomials
The question of when two skew Young diagrams produce the same skew Schur function has been well-studied. We investigate the same question in the case of stable Grothendieck polynomials, which are the K-theoretic analogues of the Schur functions. We prove a necessary condition for two skew shapes to give rise to the same dual stable Grothendieck polynomial. We also provide a necessary and sufficient condition in the case where the two skew shapes are ribbons.
\section{Introduction} It is well known that the Schur functions indexed by the set of partitions $\{s_{\lambda}\}$ form a linear basis for the ring of symmetric functions over $\mathbb{Z}$. However, for general skew shapes $\lambda / \mu$, the corresponding Schur functions are no longer linearly independent. In fact, two different skew shapes can give rise to the same Schur function. Such skew shapes are called \textit{Schur equivalent}. There are trivial examples of such equivalences---for instance $\langle 2 \rangle$ is clearly Schur-equivalent to $\langle 4 \rangle / \langle 2 \rangle$ as they yield the same shape positioned differently in space---and there are also many non-trivial examples. (Note that we use angled brackets here to denote a partition instead of parentheses to avoid ambiguity with later notation.) For example, the shapes shown below are Schur equivalent \cite{rsw2009coincidences}. \begin{center} \ytableausetup{boxsize=.2cm} \ydiagram{4+2, 2+3,1+4,1+2,2,2} \qquad\ydiagram{4+2,3+2,3+2,1+3,4,2} \end{center} It is natural to ask when these coincidences occur. One application of this type of result involves the representation theory of $GL_N(\mathbb{C})$. In this setting, equality among skew Schur functions corresponds to equivalence of certain $GL_N(\mathbb{C})$ modules \cite{rsw2009coincidences}. Coincidences among skew Schur functions have been studied by Billera--Thomas--van Willigenburg \cite{billera2006decomposable}, Reiner--Shaw--van Willigenburg \cite{rsw2009coincidences}, and McNamara--van Willigenburg \cite{mcnamara2009towards}, among others. The stable and dual stable Grothendieck polynomials are natural ($K$-theoretic) analogues of Schur functions obtained as weighted generating functions over \textit{set-valued tableaux} and \textit{reverse plane partitions}, respectively \cite{buch2002lrrule,lam2007combinatorial}. Roughly speaking, while the Schur functions give information about the cohomology of the Grassmannian, these analogues give information about the $K$-theory of the Grassmannian, where $K$-theory is a generalized cohomology theory. Our work concerns the combinatorics of these objects, so knowledge of cohomology theories is not necessary. The question of coincidences among stable and dual stable Grothendieck polynomials of skew shapes was previously unstudied. After a brief background on symmetric functions, we focus on dual stable Grothendieck polynomials $g_\alpha$ of ribbon shape, where a ribbon is a connected Young diagram containing no $2\times 2$ square. For a ribbon shape $\alpha$, let $\alpha^*$ denote the shape obtained by 180-degree rotation. We prove the following theorem. \begin{thm*}[Theorem \ref{thm:littlegribbon}] For ribbons $\alpha$ and $\beta$, we have $g_\alpha = g_\beta$ if and only if $\alpha = \beta$ or $\alpha = \beta^*$. \end{thm*} We next prove two necessary conditions for dual stable Grothendieck equivalence involving the \textit{bottleneck numbers} $b_i^{\lambda/\mu}$ of a shape $\lambda/\mu$. \begin{thm*}[Theorem \ref{bottleneck_cond}] Suppose $g_{\lambda / \mu} = g_{\gamma / \nu}$. 
Then \[ b_i^{\lambda / \mu}+b_{n-i+1}^{\lambda / \mu} = b_i^{\gamma / \nu}+b_{n-i+1}^{\gamma / \nu} \] for $i=1,2,\ldots,n$ where $n$ is the number of columns in $\lambda / \mu$. \end{thm*} \begin{thm*}[Corollary \ref{cor:trianglesums}] Suppose $g_{\lambda/\mu} = g_{\gamma/\nu}$. Then \[ \sum_{i=1}^n (b_i^{\lambda/\mu})^2 = \sum_{i=1}^n (b_i^{\gamma/\nu})^2. \] \end{thm*} We end by giving examples that show that stable Grothendieck equivalence does not imply dual stable Grothendieck equivalence and vice versa and by highlighting areas for future research. \section{Preliminaries} \subsection{Partitions and tableaux} A \textit{partition} $\lambda=\langle \lambda_1,\lambda_2,\ldots,\lambda_k \rangle$ of a positive integer $n$ is a weakly decreasing sequence of positive integers $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_k > 0$ whose sum is $n$. The integer $\lambda_i$ is called the $i$th \textit{part} of $\lambda$. We call $n$ the \textit{size} of $\lambda$, denoted by $\vert \lambda \vert = n$. Throughout this document $\lambda$ will refer to a partition. We may visualize a partition $\lambda$ using a \textit{Young diagram}: a collection of left-justified boxes where the $i$th row from the top has $\lambda_i$ boxes. For example, the Young diagram of $\lambda = \langle 5,2,1,1 \rangle$ is shown below. \ytableausetup{boxsize=.35cm} $$\ydiagram{5,2,1,1}$$ A \textit{skew shape} $\lambda/\mu$ is a pair of partitions $\lambda = \langle\lambda_1,\ldots,\lambda_m \rangle$ and $\mu = \langle\mu_1,\ldots,\mu_k\rangle$ such that $k \leq m$ and $\mu_i \leq \lambda_i$ for all $i$. We form the Young diagram of a skew shape $\lambda / \mu$ by superimposing the Young diagrams of $\lambda$ and $\mu$ and removing the boxes that are contained in both. If $\mu$ is empty, $\lambda/\mu=\lambda$ is called a \textit{straight shape}. Given a skew shape $\lambda/\mu$, we define its \textit{antipodal rotation} $(\lambda/\mu)^*$ as the skew shape obtained by rotating the Young diagram of $\lambda/\mu$ by $180$ degrees. For example, the Young diagrams of the skew shapes $\langle 6,3,1 \rangle / \langle 3,1\rangle $ and $(\langle 6,3,1\rangle / \langle 3,1\rangle)^*$ are shown below. $$\langle 6,3,1 \rangle /\langle 3,1\rangle =\ydiagram{3+3, 1+2, 1} \qquad \qquad (\langle 6,3,1 \rangle /\langle 3,1 \rangle)^*=\ydiagram{5+1, 3+2, 3}$$ A \textit{semistandard Young tableau} of shape $\lambda / \mu$ is a filling of the boxes of the Young diagram of $\lambda / \mu$ with positive integers such that the entries weakly increase from left to right across rows and strictly increase from top to bottom down columns. Two semistandard Young tableaux are shown below. \ytableausetup{boxsize=.45cm} $$\begin{ytableau}1 & 1 & 4 & 7 \\ 2 & 6 \\ 9\end{ytableau}\qquad\qquad \begin{ytableau}\none & \none & 1 & 3 & 3 \\ \none & 1 & 4 & 6 \\ 1 & 4\end{ytableau}$$ A \textit{set-valued tableau} of shape $\lambda / \mu$ is a filling of the boxes of the Young diagram of $\lambda / \mu$ with finite, nonempty sets of positive integers such that the entries weakly increase from left to right across rows and strictly increase from top to bottom down columns. For two sets of positive integers $A$ and $B$, we say that $A \leq B$ if $\max{A} \leq \min{B}$ and $A < B$ if $\max{A} < \min{B}$. For a set-valued tableau $T$, we define $\vert T \vert$, the \textit{size} of $T$, to be the sum of the sizes of the sets appearing as entries in $T$. 
For example, \ytableausetup{boxsize=.75cm} $$\begin{ytableau} 1,2 & 2,3 & 6 & 9 \\ 3 & 5 \\ 6 & 6,7 \end{ytableau}$$ is a set-valued tableau of shape $\lambda = \langle 4,2,2 \rangle $ and size $11$. A \textit{reverse plane partition} (RPP) of shape $\lambda / \mu$ is a filling of the boxes of the Young diagram of $\lambda / \mu$ with positive integers such that the entries weakly increase both from left to right across rows and from top to bottom down columns. For example, \ytableausetup{boxsize=.45cm} $$\begin{ytableau}\none & 1 & 1 & 2 & 7 \\ \none & 1 & 2 & 2 & 8 \\ 1 & 2 & 2 & 2\end{ytableau}$$ is a reverse plane partition of shape $\langle 5,5,4\rangle /\langle 1,1 \rangle $. \subsection{Symmetric functions} To each of the above fillings of a Young diagram we may associate a monomial as follows. First, let $T$ be a semistandard or set-valued tableau. We associate a monomial $x^T$ given by $$x^T = \prod_{i \in \mathbb{N}} x_i^{m_i},$$ where $m_i$ is the number of times the integer $i$ appears as an entry in $T$. For example, the semistandard Young tableaux shown above correspond to monomials $x_1^2x_2x_4x_6x_7x_9$ and $x_1^3x_3^2x_4^2x_6$, respectively, while the set-valued tableau corresponds to monomial $x_1x_2^2x_3^2x_5x_6^3x_7x_9$. Given a reverse plane partition $P$, the associated monomial $x^P$ is given by $$x^P = \prod_{i \in \mathbb{N}} x_i^{m_i},$$ where $m_i$ is the number of columns of $P$ that contain the integer $i$ as an entry. The reverse plane partition shown above has monomial $x_1^3x_2^3x_7x_8$. We can now define the Schur functions, the stable Grothendieck polynomials, and the dual stable Grothendieck polynomials, which are all indexed by skew shapes. We define the \textit{Schur function} $s_{\lambda / \mu}$ by $$s_{\lambda / \mu} = \sum_{T}^{}x^T,$$ where we sum over all semistandard Young tableaux of shape $\lambda / \mu$. Note that entries may be any positive integer, so $s_{\lambda/\mu}$ will be an infinite sum where each term has degree $|\lambda/\mu|=|\lambda|-|\mu|$. For example, \[s_{\langle 1 \rangle}=x_1+x_2+x_3+x_4+\ldots,\] and \[s_{\langle 2,1\rangle}=x_1^2x_2+x_1^2x_3+x_2^2x_3+\ldots + 2x_1x_2x_3+2x_1x_2x_4+\ldots+2x_4x_8x_{101}+\ldots.\] Though it is not obvious from this combinatorial definition, the Schur functions are symmetric functions. In other words, each $s_{\lambda/\mu}$ is unchanged after permuting any finite subset of the infinite variable set $\{x_1,x_2,\ldots\}$. Moreover, the Schur functions indexed by straight shapes $\{s_\lambda\}$ form a basis for the ring of symmetric functions over $\mathbb{Z}$. These functions arise naturally in areas like algebraic combinatorics, representation theory, and Schubert calculus. We refer the interested reader to \cite{EC2} for further reading on Schur functions and symmetric functions. We next define the \textit{stable Grothendieck polynomial}, the first of two \textit{$K$-theoretic analogues} of the Schur functions. We direct the interested reader to \cite{buch2002lrrule} for more on this topic and for an explanation of the connection to $K$-theory. The stable Grothendieck polynomial $G_{\lambda / \mu}$ is defined by $$G_{\lambda / \mu} = \sum_{T}^{}(-1)^{\vert T \vert -\vert \lambda/\mu \vert}x^T,$$ where we sum over all set-valued tableaux of shape $\lambda / \mu$. Note that semistandard tableaux are set-valued tableaux where each subset has size one. It follows that each $G_{\lambda/\mu}$ is the sum of $s_{\lambda/\mu}$ and terms of degree greater than $|\lambda/\mu|$. 
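For small shapes these generating functions can be computed directly. The short Python sketch below (an illustrative brute force with our own helper names, not taken from any of the references) enumerates set-valued tableaux whose entries are at most $N$; summing their signed monomials yields exactly the specialization $G_{\lambda/\mu}(x_1,\ldots,x_N,0,0,\ldots)$.
\begin{verbatim}
from itertools import combinations, product
from collections import Counter

def subsets(N):
    # all nonempty subsets of {1, ..., N}, as increasing tuples
    return [c for r in range(1, N + 1)
            for c in combinations(range(1, N + 1), r)]

def cells(lam, mu):
    # cells (row, col) of the skew shape lambda/mu, 0-indexed
    mu = mu + [0] * (len(lam) - len(mu))
    return [(r, c) for r, length in enumerate(lam) for c in range(mu[r], length)]

def is_set_valued(filling):
    # rows weakly increase, columns strictly increase (max/min comparisons)
    for (r, c), S in filling.items():
        right, below = filling.get((r, c + 1)), filling.get((r + 1, c))
        if right is not None and max(S) > min(right):
            return False
        if below is not None and max(S) >= min(below):
            return False
    return True

def G_truncated(lam, mu, N):
    # exponent vector (in x_1, ..., x_N) -> coefficient
    boxes = cells(lam, mu)
    coeffs = Counter()
    for choice in product(subsets(N), repeat=len(boxes)):
        filling = dict(zip(boxes, choice))
        if is_set_valued(filling):
            size = sum(len(S) for S in choice)
            expo = tuple(sum(x == i for S in choice for x in S)
                         for i in range(1, N + 1))
            coeffs[expo] += (-1) ** (size - len(boxes))
    return coeffs

# shape <1> with entries at most 2: recovers x_1 + x_2 - x_1 x_2
print(G_truncated([1], [], 2))
\end{verbatim}
The same loop restricted to singleton entries computes the corresponding truncation of $s_{\lambda/\mu}$.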
While each term in a Schur function has the same degree, each stable Grothendieck polynomial is an infinite sum where terms have arbitrarily large degree. For example, \[G_{\langle 1 \rangle}=x_1+x_2+\ldots -x_1x_2-x_2x_3+\ldots+x_1x_2x_4x_5x_9+\ldots,\] and \[G_{\langle 2,2 \rangle/\langle 1\rangle}=x_1^2x_2+2x_1x_2x_3+\ldots- 3x_1^2x_2x_3-8x_2x_5x_9x_{114}-\ldots+2x_1^2x_2^2x_3+\ldots.\] The other natural $K$-theoretic analogue of the Schur function is the \textit{dual stable Grothendieck polynomial}. It is dual to the stable Grothendieck polynomial under the Hall inner product. We refer the reader to \cite{lam2007combinatorial} for more background. We define the dual stable Grothendieck polynomial $g_{\lambda / \mu}$ by $$g_{\lambda / \mu} = \sum_{P}^{}x^P,$$ where the sum is over all reverse plane partitions of shape $\lambda / \mu$. Again, note that semistandard Young tableaux are examples of reverse plane partitions where the columns are strictly increasing. As a result, each dual stable Grothendieck polynomial $g_{\lambda/\mu}$ is a sum of the Schur function indexed by the same shape $s_{\lambda/\mu}$ and terms of degree strictly less than $|\lambda/\mu|$. They are again infinite sums, but now each term has degree at most $|\lambda/\mu|$ and at least the number of columns in shape $\lambda/\mu$. For example, \[g_{\langle 2,1\rangle}=x_1^2x_2+2x_1x_2x_3+\ldots+x_1x_2+x_1x_3+\ldots+x_1^2+x_2^2+\ldots.\] Though it is again not obvious from the definitions, both the stable and dual stable Grothendieck polynomials are symmetric functions. We use this fact throughout this paper. We say that two skew shapes $D_1$ and $D_2$ are $G$-\textit{equivalent} or $g$-\textit{equivalent} if $G_{D_1} = G_{D_2}$ or $g_{D_1} = g_{D_2}$, respectively. Since any $G_D$ contains $s_D$ as its lowest degree terms, $G_{D_1}=G_{D_2}$ implies $s_{D_1}=s_{D_2}$. Similarly, $g_{D_1}=g_{D_2}$ implies $s_{D_1}=s_{D_2}$. Furthermore, it is straightforward to check that two skew shapes that are equivalent in any of the three aforementioned senses must have the same number of rows and columns. We will implicitly use this fact throughout. It is an easy consequence of symmetry that all three notions of skew equivalence are preserved under antipodal rotation, $^*$. We provide a proof for stable Grothendieck polynomials below. \begin{prop} For any skew shape $\lambda/\mu$, $G_{\lambda/\mu} = G_{(\lambda/\mu)^*}$ and $g_{\lambda/\mu} = g_{(\lambda/\mu)^*}$. \end{prop} \begin{proof} We prove the result for stable Grothendieck polynomials; the argument for dual stable Grothendieck polynomials is similar. Let $x_I = x_{i_1}^{p_1}x_{i_2}^{p_2}\ldots x_{i_k}^{p_k}$ be a monomial with $i_1 < i_2 < \ldots < i_k$. It suffices to show that the $x_I$-coefficients of the two polynomials are equal. To do so, we construct a bijection between set-valued tableaux of shape $\lambda/\mu$ with weight monomial $x_I$ and set-valued tableaux of shape $(\lambda/\mu)^*$ with weight monomial $x_{I'} = x_{i_k+1-i_1}^{p_1}x_{i_k+1-i_2}^{p_2} \ldots x_{1}^{p_k}$. This bijection, which is in fact an involution, maps a tableau $T$ to the tableau $T'$ given by rotating $T$ and then replacing every entry $j$ with $i_k + 1 - j$. An example is given below where $i_k = 4$. 
\ytableausetup{boxsize=.75cm} $$T=\begin{ytableau} \none & 1 & 2, 3 \\ \none & 2 & 4 \\ 1 & 3 \end{ytableau} \longrightarrow \begin{ytableau} \none & 3 & 1 \\ 4 & 2 \\ 3, 2 & 1 \end{ytableau} \longrightarrow \begin{ytableau} \none & 2 & 4 \\ 1 & 3 \\ 2, 3 & 4 \end{ytableau}=T'$$ Thus, the $x_{I'}$-coefficient of $G_{(\lambda/\mu)^*}$ is equal to the $x_I$-coefficient of $G_{\lambda/\mu}$. By symmetry, the $x_{I'}$-coefficient of $G_{(\lambda/\mu)^*}$ is equal to the $x_I$-coefficient of $G_{(\lambda/\mu)^*}$, so the $x_I$-coefficients of $G_{\lambda/\mu}$ and $G_{(\lambda/\mu)^*}$ are equal as desired. \end{proof} \subsection{Ribbon shapes} We will be interested in a special class of skew shapes known as \textit{ribbons}. A skew shape $\alpha$ is called a ribbon if it is connected and contains no $2 \times 2$ rectangle. Being connected means that if there is more than one box, then each box must share an edge with another box. The shape shown below on the left is a ribbon, while the shapes in the middle and on the right are not: the shape in the middle contains a $2\times 2$ rectangle, and the shape on the right is not connected. \ytableausetup{boxsize=.35cm} $$\ydiagram{9+3,5+5,6}\ydiagram{9+3,4+6,6}\ydiagram{9+3,6+4,6}$$ A \textit{composition} of a positive integer $n$ is an ordered list of positive integers that sum to $n$. We will write compositions inside of parentheses. For example, $(2,7,4,9)$ is a composition of $22$. It is easy to see that ribbons of size $n$ are in bijection with compositions of $n$: to obtain a composition from a ribbon shape, simply read the row sizes from bottom to top. For this reason, we will denote a ribbon shape by the associated composition $\alpha$. For example, we denote the ribbon shown above by $(6,5,3)$. Note that one can also construct a bijection between compositions and ribbons using the sizes of the columns of $\alpha$ read from left to right. We also describe ribbon shapes this way, and we use square brackets in place of parentheses to denote this column reading. For example, the ribbon shown above may be written as $[1,1,1,1,1,2,1,1,1,2,1,1]$. Notice that the antipodal rotation $\alpha^{*}$ of $\alpha=(\alpha_1, \alpha_2,\ldots,\alpha_k)$ is the ribbon $(\alpha_k,\alpha_{k-1},\ldots,\alpha_1)$. We refer to $\alpha^*$ as the \textit{reverse ribbon} of $\alpha$. For a ribbon shape $\alpha$, we denote the corresponding Schur function by $s_\alpha$ and refer to it as the \textit{ribbon Schur function}. We now define several binary operations on the set of ribbons as in \cite{rsw2009coincidences}. Here we let $\alpha = (\alpha_1,\ldots,\alpha_k)$ and $\beta = (\beta_1,\ldots,\beta_m)$ be ribbons. We define the concatenation operation $$\alpha \cdot \beta = (\alpha_1,\ldots,\alpha_k,\beta_1,\ldots,\beta_m)$$ and the near concatenation operation $$\alpha \odot \beta = (\alpha_1,\ldots,\alpha_{k-1},\alpha_k + \beta_1,\beta_2,\ldots,\beta_m).$$ We let $$\alpha^{\odot n} = \underbrace{\alpha \odot \cdots \odot \alpha}_{\text{$n$}}.$$ We can combine the two concatenation operations to yield a third operation $\circ$, defined by $$\alpha \circ \beta = \beta^{\odot\alpha_1}\cdot \beta^{\odot\alpha_2} \cdots \beta^{\odot\alpha_k}.$$ \begin{ex} Consider ribbons $\alpha=(3,2)$ and $\beta=(1,2)$ shown below. $$\alpha=\ydiagram{2+2,3} \qquad \beta=\ydiagram{2,1}$$ Then $\alpha\cdot\beta$ and $\alpha\odot\beta$ are as follows. 
$$\alpha\cdot\beta=\begin{ytableau}\none & \none & \none & *(gray) & *(gray) \\ \none & \none & \none & *(gray) \\ \none & \none & & \\ $ $ & & \end{ytableau}\qquad \alpha\odot \beta = \begin{ytableau}\none & \none & \none & \none & *(gray) & *(gray)\\ \none & \none & & & *(gray)\\ $ $ & & \end{ytableau}$$ The operation $\alpha\circ\beta$ replaces each square of $\alpha$ with a copy of $\beta$. The copies of $\beta$ are near-concatenated if the corresponding blocks of $\alpha$ are horizontally adjacent and concatenated if the corresponding blocks of $\alpha$ are vertically adjacent. $$\alpha\circ\beta=\begin{ytableau}\none & \none & \none & \none & \none & \none & \none & *(gray)& *(gray)\\ \none & \none & \none & \none & \none & & & *(gray) \\ \none & \none & \none & \none & \none & \\ \none & \none & \none & \none &*(gray) & *(gray)\\ \none & \none & & & *(gray) \\ *(gray) $ $ & *(gray)& \\ *(gray) $ $ \end{ytableau}$$ \end{ex} If a ribbon $\alpha$ can be written in the form $\alpha = \beta_1 \circ \cdots \circ \beta_{\ell}$, we call this a \textit{factorization} of $\alpha$. A factorization $\alpha = \beta \circ \gamma$ is called \textit{trivial} if any of the following conditions hold: \begin{enumerate} \item one of $\beta$ or $\gamma$ consists of a single square, \item both $\beta$ and $\gamma$ consist of a single row, or \item both $\beta$ and $\gamma$ consist of a single column. \end{enumerate} A factorization $\alpha = \beta_1 \circ \cdots \circ \beta_{\ell}$ is called \textit{irreducible} if none of the factorizations $\beta_i \circ \beta_{i+1}$ are trivial and each $\beta_i$ has no nontrivial factorization. In \cite{billera2006decomposable}, the authors prove that every ribbon $\alpha$ has a unique irreducible factorization. They then prove the following theorem: \begin{thm}\cite{billera2006decomposable}\label{irred_fact} Two ribbons $\alpha$ and $\beta$ satisfy $s_{\alpha} = s_{\beta}$ if and only if $\alpha$ and $\beta$ have irreducible factorizations $$\alpha = \alpha_1 \circ \cdots \circ \alpha_k \quad \textrm{and} \quad \beta = \beta_1 \circ \cdots \circ \beta_k,$$ where each $\beta_i$ is equal to either $\alpha_i$ or $\alpha_i^{*}$. \end{thm} In the next section, we use the above theorem to prove a necessary and sufficient condition for two ribbons to be $g$-equivalent. We also provide a necessary condition for two skew shapes to be $g$-equivalent. \section{Coincidences of Dual Stable Grothendieck Polynomials} \subsection{Ribbons} The main result of this section is that for two ribbons $\alpha$ and $\beta$, $g_\alpha = g_\beta$ if and only if $\alpha = \beta$ or $\alpha = \beta^*$. We will obtain restrictions on $\alpha$ and $\beta$ by writing the dual stable Grothendieck polynomials in terms of ribbon Schur functions and comparing the coefficients in the resulting expansions. The next proposition requires the following ordering on ribbons. For ribbons $\alpha = [\alpha_1,\ldots,\alpha_n]$ and $\gamma=[\gamma_1,\ldots,\gamma_n]$ with the same number of columns, we write $\gamma\leq{\alpha}$ if $\gamma_i\leq{\alpha_i}$ for each $i = 1, \ldots, n$. \begin{prop} \label{ribbon_expand} Let $\alpha = [\alpha_1,\ldots,\alpha_n]$ be a ribbon. The dual stable Grothendieck polynomial $g_\alpha$ can be decomposed into a sum of ribbon Schur functions as \[ g_\alpha = \sum_{\gamma \leq \alpha} \left( \prod_{i=1}^{n} \binom{\alpha_i-1}{\alpha_i-\gamma_i} \right) s_\gamma . 
\] \end{prop} \begin{proof} We define a map from reverse plane partitions of ribbon shape $\alpha$ to the set of semistandard Young tableaux of shape $\gamma$ where $\gamma \leq \alpha$. Given a reverse plane partition $T$ of shape $\alpha$, map $T$ to a semistandard Young tableau of shape $\gamma = [\gamma_1,\ldots.\gamma_n]$ where $\gamma_i$ is the number of distinct entries in column $i$ in $T$. Fill column $i$ of $\gamma$ with the distinct entries of column $i$ in $T$ in increasing order. This gives a semistandard Young tableau because columns are clearly strictly increasing and rows will remain weakly increasing. This map preserves the monomial corresponding to the reverse plane partition. The map is also surjective, since any semistandard Young tableau of shape $\gamma$ where $\gamma \leq \alpha$ is mapped to by any reverse plane partition with the same entries in each column but with some entries copied. It remains to show each semistandard Young tableau is mapped to by exactly $\prod \binom{\alpha_i-1}{\alpha_i-\gamma_i}$ reverse plane partitions. Fix some semistandard Young tableau of shape $\gamma \leq \alpha$. We construct all possible reverse plane partitions of $\alpha$ mapping to this semistandard Young tableau column by column. Given column $i$ of $\alpha$, consider the $\alpha_i-1$ pairs of adjacent squares in the column. Since there are $\gamma_i$ distinct entries in the column and the entries are written in weakly increasing order, $\alpha_i - \gamma_i$ of these pairs must match. A size $(\alpha_i-\gamma_i)$ subset of the $\alpha_i-1$ pairs of adjacent squares gives a unique filling, where the given subset is the set of adjacent squares that match. Thus the number of possible fillings for each column is $\binom{\alpha_i-1}{\alpha_i-\gamma_i}$, giving the desired formula. \end{proof} \begin{lem} \label{ribbon-col-sum} Let $\alpha = [\alpha_1, \ldots, \alpha_n]$ and $\beta = [\beta_1, \ldots, \beta_n]$ be ribbons such that $g_\alpha = g_\beta$. Then for all $i=1,\ldots,n$ we have $\alpha_i + \alpha_{n-i+1} = \beta_i + \beta_{n-i+1}$. \end{lem} \begin{proof} Use Proposition \ref{ribbon_expand} to write $g_\alpha$ and $g_\beta$ as a sum of ribbon Schur functions. Note that all terms of degree $n+1$ in both sums are of the form $s_\gamma$ where $\gamma$ is a ribbon $(i,n-i+1)$. Let $A$ denote the set of all compositions of $n+1$ with weakly decreasing parts (i.e. the set of partitions of $n+1$). It is shown in Proposition 2.2 of \cite{billera2006decomposable} that the set $\lbrace{s_\alpha}\rbrace_{\alpha\in A}$ forms a basis for $\Lambda_{n+1}$, the degree $n+1$ elements of the ring of symmetric functions. Then since each ribbon $(i,n-i+1)$ is Schur equivalent to $(n-i+1,i)$, it follows that the set of Schur functions of such ribbons is linearly independent. Comparing coefficients in the respective sums gives the desired equality. \end{proof} \begin{lem}\label{factor_sym} Suppose $\alpha$ and $\beta$ are ribbons such that $g_{\alpha} = g_{\beta}$, $\alpha \neq \beta$, and there exist ribbons $\sigma$, $\tau$, and $\mu$ such that $\alpha = \sigma \circ \mu$ and $\beta = \tau \circ \mu$. Then $\mu = \mu^*$. \end{lem} \begin{proof} Let $\mu = [\mu_1,\ldots,\mu_t]$, $\alpha = [\alpha_1,\ldots,\alpha_n]$, and $\beta = [\beta_1,\ldots,\beta_n]$. By hypothesis, we have that $\alpha = \mu \square_1 \cdots \square_r \mu$ and $\beta = \mu \diamond_1 \cdots \diamond_s \mu$, where each $\square_i$ and $\diamond_i$ is one of the operations $\cdot$ or $\odot$. 
Thus each $\alpha_i$ and $\beta_i$ is equal to one of $\mu_1,\ldots,\mu_t$ or $\mu_1 + \mu_t$. Since $\alpha \neq \beta$, let $r$ be the minimal index such that $\alpha_r \neq \beta_r$. We see that $\{\alpha_r,\beta_r\} = \{\mu_t,\mu_1 + \mu_t \}$ because the first index where $\alpha$ and $\beta$ disagree corresponds to the first index $i$ where $\square_i \neq \diamond_i$. By Lemma \ref{ribbon-col-sum} it follows that if $\alpha_i = \beta_i$ then $\alpha_{n-i+1} = \beta_{n-i+1}$. Hence $n-r+1$ is the maximal index where $\alpha$ and $\beta$ disagree. Note that by the same argument we similarly have $\{\alpha_{n-r+1},\beta_{n-r+1}\} = \{\mu_1,\mu_1 + \mu_t \}$. We have $\alpha_r + \alpha_{n-r+1} = \beta_r + \beta_{n-r+1}$ by Lemma \ref{ribbon-col-sum}. Substituting the possible values of $\alpha_r \neq \beta_r$ and $\alpha_{n-r+1} \neq \beta_{n-r+1}$, we find that this equation is either $$\mu_1 + \mu_t = 2(\mu_1 + \mu_t)$$ or $$2\mu_1 + \mu_t = \mu_1 + 2\mu_t.$$ The first equation is a contradiction. Thus the second equation holds, implying that $\mu_1 = \mu_t$. We will show by induction that $\mu_i = \mu_{t - i + 1}$, completing the proof. We have just shown the base case. For the general case, we have by Lemma \ref{ribbon-col-sum} $$\alpha_{r + i} + \alpha_{n - r - i + 1} = \beta_{r + i} + \beta_{n - r - i + 1}.$$ We may assume without loss of generality that $\alpha_r = \mu_t$. Then we have $\alpha_{n - r + 1} = \mu_1 + \mu_t$, $\beta_r = \mu_1 + \mu_t$ and $\beta_{n - r + 1} = \mu_1$. Therefore \begin{equation*} \begin{aligned} \alpha_{r + i} &= \mu_i \\ \beta_{r + i} &= \mu_{i + 1} \\ \alpha_{n - r - i + 1} &= \mu_{t - i} \\ \beta_{n - r - i + 1} &= \mu_{t - i + 1}. \end{aligned} \end{equation*} We thus have $$\mu_i + \mu_{t - i} = \mu_{i + 1} + \mu_{t - i +1}.$$ By the inductive hypothesis $\mu_i = \mu_{t - i + 1}$, so $\mu_{i + 1} = \mu_{t - i}$, finishing the proof. \end{proof} We are now ready for the main result of this section. \begin{thm} \label{thm:littlegribbon} For ribbons $\alpha$ and $\beta$, $g_\alpha = g_\beta$ if and only if $\alpha = \beta$ or $\alpha = \beta^*$. \end{thm} \begin{proof} Suppose $g_\alpha = g_\beta$. Then $s_\alpha = s_\beta$. By Theorem \ref{irred_fact}, we have irreducible factorizations \[ \alpha = \alpha_k \circ \cdots \circ \alpha_1 \] \[ \beta = \beta_k \circ \cdots \circ \beta_1. \] (We reverse the indices for ease of induction.) We prove by induction on $r$ that for $r = 1, \ldots, k$ we have \[ \alpha_r \circ \cdots \circ \alpha_1 \in \{ \beta_r \circ \cdots \circ \beta_1, (\beta_r \circ \cdots \circ \beta_1)^* \}. \] By Theorem \ref{irred_fact} we have $\alpha_1 \in \{ \beta_1, \beta_1^* \}$ so the base case is satisfied. Now suppose $r \geq 2$. By the inductive hypothesis we have \[ \alpha_{r-1} \circ \cdots \circ \alpha_1 \in \{ \beta_{r-1} \circ \cdots \circ \beta_1, (\beta_{r-1} \circ \cdots \circ \beta_1)^* \}. \] If $\alpha = \beta$ we are done, so we may assume otherwise. Then by letting $\mu = \alpha_{r-1} \circ \cdots \circ \alpha_1$ and applying Lemma \ref{factor_sym} to $\alpha$ and either $\beta$ or $\beta^*$, we have \[ \beta_{r-1} \circ \cdots \circ \beta_1 = (\beta_{r-1} \circ \cdots \circ \beta_1)^*. \] Since we also have that $\alpha_r \in \{ \beta_r, \beta_r^* \}$ we are done. \end{proof} \subsection{Necessary Condition: Bottlenecks} We now move to the case of determining equality of dual stable Grothendieck polynomials of general skew shape. 
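In small cases such equality can be tested mechanically: $g_{\lambda/\mu}$ is a symmetric function of degree at most $|\lambda/\mu|$, so it is determined by its specialization $g_{\lambda/\mu}(x_1,\ldots,x_N,0,0,\ldots)$ once $N\ge|\lambda/\mu|$, and that specialization is the generating function of reverse plane partitions with entries at most $N$. The following Python sketch (a brute-force experimental aid with our own helper names, used here only for illustration) computes such a truncation.
\begin{verbatim}
from itertools import product
from collections import Counter

def cells(lam, mu):
    # cells (row, col) of the skew shape lambda/mu, 0-indexed
    mu = mu + [0] * (len(lam) - len(mu))
    return [(r, c) for r, length in enumerate(lam) for c in range(mu[r], length)]

def rpp_weight(filling, ncols):
    # x_i appears once for every column containing the entry i
    w = []
    for c in range(ncols):
        w.extend({v for (r, cc), v in filling.items() if cc == c})
    return tuple(sorted(w))

def g_truncated(lam, mu, N):
    # multiset of weights of reverse plane partitions with entries <= N
    boxes = cells(lam, mu)
    out = Counter()
    for values in product(range(1, N + 1), repeat=len(boxes)):
        f = dict(zip(boxes, values))
        rows_ok = all((r, c + 1) not in f or v <= f[(r, c + 1)]
                      for (r, c), v in f.items())
        cols_ok = all((r + 1, c) not in f or v <= f[(r + 1, c)]
                      for (r, c), v in f.items())
        if rows_ok and cols_ok:
            out[rpp_weight(f, max(lam))] += 1
    return out

# a ribbon and its 180-degree rotation give equal truncations, as they must
print(g_truncated([2, 1], [], 3) == g_truncated([2, 2], [1], 3))   # True
\end{verbatim}
For two shapes with the same number of cells, agreement of these truncations at $N=|\lambda/\mu|$ already implies $g$-equivalence, so the sketch gives a finite, if expensive, test.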
We introduce the ``bottleneck numbers'' of a skew diagram and use these to construct closed-form expressions for certain coefficients of its dual Grothendieck polynomial. We then obtain a necessary condition for $g$-equivalence that generalizes Lemma \ref{ribbon-col-sum}. For the following definition, we define an \textit{interior horizontal edge} to be a horizontal edge of a box in a Young diagram that lies neither at the top boundary nor the bottom boundary of the Young diagram. \begin{defn} A \textit{bottleneck edge} in a skew shape $\lambda/\mu$ is an interior horizontal edge touching both the left and right boundaries of the shape. For example, the red edges in Figure \ref{bottleneck_ex} are bottleneck edges. We let $b_i^{\lambda/\mu}$ denote the number of bottleneck edges in column $i$. \end{defn} If the shape $\lambda/\mu$ has $n$ columns and $m$ rows, then number of bottleneck edges in column $i$ for $i = 1,2, \ldots , n$ is equivalently \[ b_i^{\lambda / \mu} = | \lbrace 1\leq{j}\leq{m-1} \mid \mu_j = i-1, \lambda_{j+1}=i \rbrace |. \] When the skew shape in question is clear, we will often suppress the superscript. Bottleneck edges are related to the \textit{row overlap compositions} defined in \cite{rsw2009coincidences}, which we now review. \begin{defn}[\cite{rsw2009coincidences}] The $k$-\textit{row overlap composition} $r^{(k)}$ of a skew diagram $\lambda/\mu$ with $m$ rows is $(r^{(k)}_1,\ldots,r^{(k)}_{m-k+1})$, where $r^{(k)}_i$ is the number of columns containing squares in all the rows $i,i+1,\ldots,i+k-1$. \end{defn} In particular, $r^{(2)} = (\lambda_2-\mu_1,\lambda_{3}-\mu_2,\ldots,\lambda_m - \mu_{m-1})$. Thus bottleneck edges correspond to 1's in the 2-row overlap composition. When the $2, 3, \ldots, m$ row overlap compositions are written, they form a triangular array of non-negative integers as shown in Example \ref{ex:rowoverlap}. A column having $i$ bottleneck edges corresponds in the array to an equilateral triangle of 1's with side length $i$. In \cite{rsw2009coincidences}, it is proven that the $k$-overlap compositions of two Schur equivalent shapes are permutations of each other for each $k$. \begin{ex}\label{ex:rowoverlap} Let $\lambda / \mu = \langle 5,5,4,2,2,2\rangle /\langle 4,2,1,1,1\rangle$. \begin{figure}[h] \ytableausetup {boxsize=1.25em} \ytableausetup {aligntableaux=top} \begin{ytableau} \none & \none & \none & \none & \emptytikzmark{6}{1} \\ \none & \none & \emptytikzmark{4}{1} & \emptytikzmark{5}{1} & \\ \none & \emptytikzmark{3}{1} & & \\ \none & \emptytikzmark{2}{1} \\ \none & \emptytikzmark{1}{1} \\ & \\ \end{ytableau} \DrawHLine[red, ultra thick]{1}{1} \DrawHLine[red, ultra thick]{2}{2} \DrawHLine[red, ultra thick]{3}{3} \DrawHLine[red, ultra thick]{6}{6} \caption{$\langle 5,5,4,2,2,2 \rangle / \langle 4,2,1,1,1 \rangle $ has 3 bottleneck edges in column 2 and 1 bottleneck edge in column 5.}\label{bottleneck_ex} \end{figure} Then the number of bottleneck edges in each column is shown below. $(b_1,b_2,b_3,b_4,b_5)=(0,3,0,0,1)$. The row overlap compositions $r^{(2)},\ldots,r^{(6)}$ are \[\begin{tabular}{>{$}l<{$\hspace{12pt}}*{13}{c}} r^{(6)} &&&&&&&0&&&&&&\\ r^{(5)} &&&&&&0&&0&&&&&\\ r^{(4)} &&&&&0&&0&&1&&&&\\ r^{(3)} &&&&0&&0&&1&&1&&&\\ r^{(2)} &&&1&&2&&1&&1&&1&&\\ \end{tabular}\] \end{ex} \begin{defn} We define a \textit{1,2-RPP} to be a reverse plane partition involving only 1's and 2's. A \textit{mixed} column of a 1,2-RPP contains both 1's and 2's while an \textit{i-pure} column contains only $i$'s. 
\end{defn} Note the 1,2-RPP's of a given shape are in bijection with lattice paths from the upper right vertex of the shape to the lower left vertex of the shape. The corresponding 1,2-RPP can be generated from such a lattice path by filling the squares below the path with 2's and the squares above the path with 1's. Conversely, the corresponding lattice path can be recovered from a 1,2-RPP by drawing horizontal segments below the last 1 (if there are any) in a column and above the first 2 (if there are any) in a column. Vertical segments can then be drawn to connect these horizontal segments into a lattice path. Observe that mixed columns in the 1,2-RPP correspond to interior horizontal edges in the lattice path. \begin{figure}[h] \ytableausetup {boxsize=1.25em} \ytableausetup {aligntableaux=top} \begin{ytableau} \none & \none & \tikzmark{r4c3}{2} & \tikzmark{r4c4}{2}\\ 1 & \tikzmark{r3c2}{1} & \tikzmark{r3c3}{2}\\ 1 & \tikzmark{r2c2}{1} & \tikzmark{r2c3}{2}\\ \tikzmark{r1c1}{1} & 2 \end{ytableau} \DrawHLine[black, ultra thick]{r1c1}{r1c1} \DrawVLine[black, ultra thick]{r1c1}{r1c1} \DrawHLine[red, ultra thick]{r2c2}{r2c2} \DrawVLineLeft[black, ultra thick]{r2c3}{r4c3} \DrawHLineAbove[black, ultra thick]{r4c3}{r4c4} \caption{1,2-RPPs correspond to lattice paths inside the skew shape. Note the red interior horizontal edge corresponds to the boundary between the 1's and 2's in the mixed column.} \end{figure} \begin{thm} \label{bottleneck_cond} Let $\lambda/\mu$ be a skew shape with $n$ columns, and suppose $g_{\lambda / \mu} = g_{\gamma / \nu}$. Then \[ b_i^{\lambda / \mu}+b_{n-i+1}^{\lambda / \mu} = b_i^{\gamma / \nu}+b_{n-i+1}^{\gamma / \nu} \] for $i=1,2,\ldots,n$. \end{thm} Note that $\gamma/\nu$ must also have $n$ columns. \begin{proof} Fix a shape $\lambda / \mu$ with $m$ rows and $n$ columns. We will compute the coefficients for terms of the form $x_1^r x_2^{n-r+1}$ in $g_{\lambda/\mu}$. Since $g_{\lambda/\mu}$ is symmetric, we may assume without loss of generality that $r \leq n-r+1$. By the bijection between 1,2-RPP's and lattice paths given above, we may compute the coefficient of $x_1^r x_2^{n-r+1}$ by counting the number of lattice paths corresponding to this monomial. Note that any such lattice path must have exactly one interior horizontal edge. For each interior horizontal edge, we will count the number of lattice paths corresponding to the monomial $x_1^r x_2^{n-r+1}$ using the given edge. There are four cases: the interior horizontal edge either touches neither boundary, only the left boundary, only the right boundary, or both the left and right boundary (i.e. the edge is a bottleneck edge). Fix an interior horizontal edge and suppose it lies in column $i$. Consider first the case where the interior horizontal edge touches neither boundary. Suppose a lattice path uses the given edge as its only interior horizontal edge. Then, as depicted in Figure \ref{no_boundary}, the lattice path must travel the top boundary until column $i$ and then drop to the horizontal edge. Then from the left endpoint of the given edge the path must drop to the bottom boundary and travel along the bottom boundary until reaching the bottom left. Hence there is a unique lattice path that uses the given edge as its only interior horizontal edge. Note that the corresponding 1,2-RPP has $i$ columns with 1's and $n-i+1$ columns with 2's. Thus the lattice path gives the monomial $x_1^r x_2^{n-r+1}$ exactly when the edge lies in column $r$. 
\begin{figure}[h] \[ \begin{ytableau} \none & \none & \none & \none & \none & & & \\ \none & \none & \none & \none & & & & \\ \none & \none & & & & & \\ \none & \none & & & & & \\ \none & & & & & \\ \none & & & & \\ & & \emptytikzmark{e}{1} & & \\ & & & \\ & & \\ & & \\ & \end{ytableau} \DrawHLine[red, ultra thick]{e}{e} \begin{ytableau} \none & \none & \none & \none & \none & \emptytikzmark{13}{1} & \emptytikzmark{14}{1} & \emptytikzmark{15}{1} \\ \none & \none & \none & \none & \emptytikzmark{12}{1} & & & \\ \none & \none & *(mygray) \emptytikzmark{10}{1} & \emptytikzmark{11}{1} & & & \\ \none & \none & *(mygray)\emptytikzmark{9}{1} & & & & \\ \none & *(mygray) & *(mygray)\emptytikzmark{8}{1} & & & \\ \none & *(mygray) & *(mygray)\emptytikzmark{7}{1} & & \\ *(mygray) & *(mygray) & *(mygray)\emptytikzmark{6}{1} & & \\ *(mygray) & *(mygray)\emptytikzmark{5}{1} & & \\ *(mygray) & *(mygray)\emptytikzmark{4}{1} & \\ *(mygray) & *(mygray)\emptytikzmark{3}{1} & \\ *(mygray) \emptytikzmark{1}{1} & *(mygray)\emptytikzmark{2}{1} \end{ytableau} \] \caption{Given an interior horizontal edge touching neither boundary, there is a unique lattice path with a single interior edge using the edge. If the edge lies in column $i$, the path contains $i$ columns with 1's and hence corresponds to the monomial $x_1^i x_2^{n-i+1}$.}\label{no_boundary} \DrawHLine[black, ultra thick]{1}{2} \DrawVLine[black, ultra thick]{2}{5} \DrawHLine[red, ultra thick]{6}{6} \DrawVLine[black, ultra thick]{6}{10} \DrawHLineAbove[black, ultra thick]{11}{11} \DrawVLineLeft[black, ultra thick]{12}{12} \DrawHLineAbove[black, ultra thick]{12}{12} \DrawVLineLeft[black, ultra thick]{13}{13} \DrawHLineAbove[black, ultra thick]{13}{15} \end{figure} Next suppose the edge touches only the right boundary. Then as depicted in Figure \ref{right_boundary}, there may be multiple lattice paths using the edge: from the top right, the path may travel along the top boundary and drop down at any column before reaching column $i$. Note that the lattice path can correspond to a 1,2-RPP with between $i$ and $n$ columns containing 1's, and that the number of columns containing 1's determines the path. Similarly, if the edge touches only the left boundary then after reaching the edge the path can drop down to the bottom boundary at any column before $i$. Hence the lattice path can correspond to a 1,2-RPP with between $1$ and $i$ columns containing 1's. 
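These counts are easy to confirm by brute force. The following Python sketch (an illustrative aside with our own variable names, not part of the argument) enumerates the 1,2-RPPs of the shape $\langle 5,5,4,2,2,2 \rangle / \langle 4,2,1,1,1 \rangle$ of Figure~\ref{bottleneck_ex} having exactly one mixed column and tabulates them by the number $r$ of columns containing a $1$; these tallies are precisely the coefficients of $x_1^rx_2^{n-r+1}$ being computed in this proof.
\begin{verbatim}
from itertools import product

# the shape <5,5,4,2,2,2>/<4,2,1,1,1> from the example above (0-indexed cells)
lam, mu = [5, 5, 4, 2, 2, 2], [4, 2, 1, 1, 1, 0]
boxes = [(r, c) for r in range(len(lam)) for c in range(mu[r], lam[r])]
n = max(lam)

counts = {}
for values in product((1, 2), repeat=len(boxes)):
    f = dict(zip(boxes, values))
    # weakly increasing along rows and down columns
    if any(f.get((r, c + 1), 2) < v or f.get((r + 1, c), 2) < v
           for (r, c), v in f.items()):
        continue
    cols = [{v for (r, c), v in f.items() if c == j} for j in range(n)]
    if sum(len(s) for s in cols) == n + 1:        # exactly one mixed column
        r1 = sum(1 in s for s in cols)            # columns containing a 1
        counts[r1] = counts.get(r1, 0) + 1

print(dict(sorted(counts.items())))
# prints: {1: 5, 2: 8, 3: 8, 4: 8, 5: 5}
\end{verbatim}
The counts for $r$ and $n-r+1$ agree, as forced by the symmetry of $g_{\lambda/\mu}$.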
\begin{figure}[h] \[\begin{ytableau} \none & \none & \none & \none & \none & \emptytikzmark{13}{1} & \emptytikzmark{14}{1} & \emptytikzmark{15}{1} \\ \none & \none & \none & \none & \emptytikzmark{12}{1} & & & \emptytikzmark{h}{1}\\ \none & \none & \emptytikzmark{10}{1} & \emptytikzmark{11}{1} & & & \\ \none & \none & \emptytikzmark{9}{1} & & & & \emptytikzmark{g}{1} \\ \none & & \emptytikzmark{8}{1} & & & \emptytikzmark{f}{1} \\ \none & & \emptytikzmark{7}{1} & & \\ & & \emptytikzmark{6}{1} & & \emptytikzmark{e}{1} \\ & \emptytikzmark{5}{1} & \emptytikzmark{c}{1} & \emptytikzmark{d}{1} \\ & \emptytikzmark{4}{1} & \emptytikzmark{b}{1}\\ & \emptytikzmark{3}{1} & \emptytikzmark{a}{1} \\ \emptytikzmark{1}{1} & \end{ytableau} \DrawHLine[black, ultra thick]{1}{1} \DrawVLine[black, ultra thick]{1}{1} \DrawHLine[red, ultra thick]{3}{3} \DrawHLine[black, dashed, ultra thick]{a}{a} \DrawVLine[black, dashed, ultra thick]{a}{b} \DrawHLine[black, dashed, ultra thick]{d}{d} \DrawVLine[black, dashed, ultra thick]{d}{11} \DrawVLineLeft[black, dashed, ultra thick]{12}{12} \DrawHLineAbove[black, dashed, ultra thick]{12}{12} \DrawVLineLeft[black, dashed, ultra thick]{13}{13} \DrawHLineAbove[black, dashed, ultra thick]{13}{15} \DrawVLine[black, dashed, ultra thick]{c}{10} \DrawVLineLeft[black, dashed, ultra thick]{a}{10} \DrawHLineAbove[black, dashed, ultra thick]{10}{11} \DrawHLine[black, dashed, ultra thick]{e}{e} \DrawVLine[black, dashed, ultra thick]{e}{12} \DrawHLine[black, dashed, ultra thick]{f}{f} \DrawVLine[black, dashed, ultra thick]{f}{13} \DrawHLine[black, dashed, ultra thick]{g}{g} \DrawVLine[black, dashed, ultra thick]{g}{14} \DrawHLine[black, dashed, ultra thick]{h}{h} \DrawVLine[black, dashed, ultra thick]{h}{15} \begin{ytableau} \none & \none & \none & \none & \none & \emptytikzmark{13}{1} & \emptytikzmark{14}{1} & \emptytikzmark{15}{1} \\ \none & \none & \none & \none & \emptytikzmark{12}{1} & & & \emptytikzmark{h}{1}\\ \none & \none & *(mygray) \emptytikzmark{10}{1} & *(mygray)\emptytikzmark{11}{1} & & & \\ \none & \none & *(mygray)\emptytikzmark{9}{1} & *(mygray) & & & \emptytikzmark{g}{1} \\ \none & *(mygray) & *(mygray)\emptytikzmark{8}{1} & *(mygray) & & \emptytikzmark{f}{1} \\ \none & *(mygray) & *(mygray)\emptytikzmark{7}{1} & *(mygray) & \\ *(mygray) & *(mygray) & *(mygray)\emptytikzmark{6}{1} & *(mygray) & \emptytikzmark{e}{1} \\ *(mygray) & *(mygray)\emptytikzmark{5}{1} & *(mygray)\emptytikzmark{c}{1} & *(mygray)\emptytikzmark{d}{1} \\ *(mygray) & *(mygray)\emptytikzmark{4}{1} & *(mygray)\emptytikzmark{b}{1}\\ *(mygray) & *(mygray)\emptytikzmark{3}{1} & *(mygray)\emptytikzmark{a}{1} \\ *(mygray) \emptytikzmark{1}{1} & \end{ytableau} \] \DrawHLine[black, ultra thick]{1}{1} \DrawVLine[black, ultra thick]{1}{1} \DrawHLine[red, ultra thick]{3}{3} \DrawHLine[black, ultra thick]{a}{a} \DrawVLine[black, ultra thick]{a}{b} \DrawHLine[black, ultra thick]{d}{d} \DrawVLine[black, ultra thick]{d}{11} \DrawVLineLeft[black, ultra thick]{12}{12} \DrawHLineAbove[black, ultra thick]{12}{12} \DrawVLineLeft[black, ultra thick]{13}{13} \DrawHLineAbove[black, ultra thick]{13}{15} \DrawVLine[black, dashed, ultra thick]{c}{10} \DrawVLineLeft[black, dashed, ultra thick]{a}{10} \DrawHLineAbove[black, dashed, ultra thick]{10}{11} \DrawHLine[black, dashed, ultra thick]{e}{e} \DrawVLine[black, dashed, ultra thick]{e}{12} \DrawHLine[black, dashed, ultra thick]{f}{f} \DrawVLine[black, dashed, ultra thick]{f}{13} \DrawHLine[black, dashed, ultra thick]{g}{g} \DrawVLine[black, dashed, ultra thick]{g}{14} 
\DrawHLine[black, dashed, ultra thick]{h}{h} \DrawVLine[black, dashed, ultra thick]{h}{15} \caption{When the edge touches only the right boundary, a lattice path using this edge can now drop down from the top boundary at any column after column $i$. However, there is a unique path corresponding to the monomial $x_1^r x_2^{n-r+1}$ if $i \leq r$ and no possible paths if $i > r$.}\label{right_boundary} \end{figure} Thus we have identified three cases where there is at least one lattice path corresponding to $x_1^r x_2^{n-r+1}$: the interior horizontal edge lies in one of column $1,\ldots,r$ and touches the right boundary, the edge lies in column $r$ and touches neither boundary, or the edges lies in one of column $r,\ldots,n$ and touches the left boundary. We will consider the fourth case, where the interior edge is a bottleneck edge, in the next paragraph. Fix two adjacent rows, and consider the set of horizontal edges between these two rows. The columns that these edges lie in are either all to the left of column $r$, contain column $r$, or all to the right of column $r$. In any of these three cases there is exactly one valid edge, as depicted in Figure \ref{possible_edges}. That is, between any two adjacent rows there is exactly one edge that correponds to at least one lattice path. Since there are $m$ rows, this gives $m-1$ possible valid interior horizontal edges. Unless the edges are bottleneck edges, each possible edge corresponds to a single lattice path. It remains to count the additional lattice paths given by bottleneck edges. \begin{figure}[h] \begin{ytableau} \none & \none & \none & \none & \none & \emptytikzmark{10}{1} & & \\ \none & \none & \none & \none & \emptytikzmark{9}{1} & \\ \none & \none & \none & \none & \emptytikzmark{8}{1} & \\ \none & \none & \none & \emptytikzmark{7}{1} & \\ \none & \none & \emptytikzmark{6}{1} & & \\ \none & & \emptytikzmark{5}{1} & \\ \none & & \emptytikzmark{4}{1} & \\ \none & \emptytikzmark{3}{1} & \\ \none & \emptytikzmark{2}{1}&\none \\ \emptytikzmark{1}{1} & \\ & \none \end{ytableau} \DrawHLine[red, ultra thick]{1}{1} \DrawHLine[red, ultra thick]{2}{2} \DrawHLine[red, ultra thick]{3}{3} \DrawHLine[red, ultra thick]{4}{4} \DrawHLine[red, ultra thick]{5}{5} \DrawHLine[red, ultra thick]{6}{6} \DrawHLine[red, ultra thick]{7}{7} \DrawHLine[red, ultra thick]{8}{8} \DrawHLine[red, ultra thick]{9}{9} \DrawHLine[red, ultra thick]{10}{10} \caption{There are $m-1$ possible edges that can be chosen as the interior horizontal edge for a lattice path. Unless the edge is a bottleneck edge, each such edge corresponds to a unique lattice path.} \label{possible_edges} \label{12RPPex} \end{figure} Now suppose the interior horizontal edge is a bottleneck edge lying in column $i$. Then there is flexibility on both sides: there can be between 0 and $(i-1)$ 1-pure columns to the left of column $i$ and between $0$ and $(n-i)$ 2-pure columns to the right of column $i$. If there are $x$ 1-pure columns to the left of column $i$, $x$ may be between 0 and $\max(i-1,r-1)$. If $x$ 1-pure columns lie to the left, the remaining $(r-x-1)$ 1-pure columns can be chosen to be to the right of column $i$ (because we assumed that $r \leq n-r+1$). Hence there are $\max(i,r)$ possible lattice paths using a given bottleneck edge in column $i$. We can now give a formula for the coefficient of $x_1^r x_2^{n-r+1}$. Let $k = \lceil\frac{n}{2}\rceil$ and $f_i= b_i + b_{n-i+1}$ for $i=1, 2, \ldots, k-1$. If $n$ is even, let $f_k = b_k + b_{n-k+1}$ and if $n$ is odd, let $f_k = b_k$. 
There are always at least $m-1$ valid lattice paths. Each bottleneck edge in column $i$ also contributes an additional $\max(i,r)-1$ lattice paths. Hence the coefficient is \[ (m - 1) + f_2 + 2f_3+3f_4+ \ldots + (r-1)f_r + (r-1)f_{r+1}+ \ldots + (r-1)f_k. \] Let $t_r$ denote the coefficient of $x_1^{r}x_2^{n-r+1}$. Note that for $2\leq{r}\leq{k-1}$, we have $2t_{r}-t_{r-1}-t_{r+1}=f_r$. Since any two $g$-equivalent shapes $\lambda/\mu$ and $\gamma/\nu$ must have the same coefficients $t_r$, it follows that for $2\leq{r}\leq{k-1}$ the sums $f_r = b_r + b_{n-r+1}$ are the same for the two shapes. Also, since \[ t_k = (m - 1) + f_2 + 2f_3+3f_4+ \ldots + (k-1)f_k \] is invariant for the two shapes, it then follows that $f_k$ is invariant for the two shapes. By Corollary 8.11 in \cite{rsw2009coincidences}, we also have that $b_1 + \cdots + b_n = f_1 + \cdots + f_k$ is invariant, since the total number of bottleneck edges is the number of 1s in the 2-row overlap composition. Hence $f_1$ is invariant as well. \end{proof} \begin{remark} For a ribbon $[\alpha_1, \alpha_2, \ldots, \alpha_n]$, we have $b_i = \alpha_i - 1$ for $i=1, 2, \ldots , n$. Hence Theorem \ref{bottleneck_cond} generalizes Lemma \ref{ribbon-col-sum} as noted at the beginning of the section. \end{remark} \begin{ex} \ytableausetup{boxsize=.35cm} It is noted in \cite{rsw2009coincidences} that the shapes \[\ydiagram{4+2,2+3,1+4,1+2,2,2}\qquad \ydiagram{4+2,3+2,3+2,1+3,4,2}\] are Schur equivalent. But since $b_2 + b_5 = 2$ for the first shape and $b_2+b_5 = 1$ for the second shape, it follows that the two shapes are not $g$-equivalent. \end{ex} \begin{ex} Having the same bottleneck edge sequence is not sufficient for two skew shapes to be $g$-equivalent. By Theorem $7.6$ in \cite{rsw2009coincidences}, the shapes $D_1$ and $D_2$ below are equivalent and have the same bottleneck edge sequence. However, upon computation it is found that they are not $g$-equivalent. \[ D_1 = \ydiagram{5+3,5+2,3+3,5,2}\qquad D_2 = \ydiagram{6+2,5+3,4+2,1+5,3} \] \end{ex} Since the bottleneck condition followed as a result of comparing terms of $g$ with degree $n+1$ in two variables, it is natural to compute coefficients for terms of higher degree or more variables. The following result shows that terms of degree $n+1$ and more than two variables do not impose additional constraints for two skew shapes to be $g$-equivalent. \begin{prop} Suppose two skew shapes $\lambda/\mu$ and $\gamma/\nu$ have the same number of rows and the polynomial $g_{\lambda/\mu}$ and $g_{\gamma/\nu}$ have same coefficient for every term of degree $n+1$ with two variables. Then in fact these polynomials have the same coefficient for any term of degree $n+1$. \end{prop} \begin{proof} Fix positive integers $i_1,i_2,\ldots,i_k$ where $k\geq2$ is some positive integer, and let $n = (\sum_{j=1}^k i_j)-1$. Given a skew diagram $\lambda/\mu$ with $n$ columns, we claim that the coefficient of $x_1^{i_1}x_2^{i_2}\ldots x_k^{i_k}$ can be expressed as a $\mathbb{Z}$-linear combination $c_0 + c_2b_2 + \ldots c_{n-1}b_{n-1}$ of the bottleneck numbers $b_i$. Furthermore, the constant $c_0$ is known to be $(k-1)(m-1)$, where $m$ is the number of rows in $\lambda/\mu$. We proceed by induction on $k$. The base case $k=2$ is given in the proof of Theorem \ref{bottleneck_cond}, so we may assume $k \geq 3$. We count the number of reverse plane partitions giving the monomial $x_1^{i_1}\cdots x_k^{i_k}$. Suppose first that every column containing a $1$ is in fact $1$-pure. 
Then the first $i_1$ columns must be filled with $1$'s. Note the remaining squares form a skew shape with $n-i_1$ columns, as depicted in Figure \ref{induction}. We henceforth use $(\lambda/\mu)_{i_1}$ to denote the skew shape given by removing the first $i_1$ columns of $\lambda/\mu$. Note $(\lambda/\mu)_{i_1}$ must be filled with a reverse plane partition giving the monomial $x_2^{i_2}\cdots x_k^{i_k}$. \ytableausetup{boxsize=.5cm} \begin{figure}[h] \begin{ytableau} \none & \none & \none & \none & \none &*(mygray) &*(mygray) &*(mygray) \\ \none & \none & \none & \none & *(mygray) & *(mygray)\\ \none & \none & \none & \none & *(mygray) & *(mygray)\\ \none & \none & \none & *(mygray) & *(mygray) \\ \none & \none & 1 & *(mygray) & *(mygray) \\ \none & 1 & 1 & *(mygray) \\ \none & 1 & 1 & *(mygray) \\ \none & 1 & 1 \\ \none & 1 & 1 \\ 1 & 1 \\ 1 \end{ytableau} \caption{The remaining shape shaded in gray is a skew shape with $n-i_1$ columns, denoted $(\lambda/\mu)_{i_1}$.}\label{induction} \end{figure} Let $m'$ be the number of rows in the shape obtained by removing the first $i_1$ columns from $\lambda/\mu$. Then by induction the number of ways to fill in this shape is $$(k-2)(m'-1) + c'_{i_1+2} b_{i_1+2} + \cdots c'_{n-1} b_{n-1}$$ for some integers $c'_{i_1+2},\ldots,c'_{n-1}.$ The remaining case is when the reverse plane partition has a mixed column containing a 1. Given such a reverse plane partition, consider the 1,2-RPP obtained by replacing every entry greater than or equal to 2 with 2. From this 1,2-RPP we obtain a lattice path via our previously described bijection between 1,2-RPPs and lattice paths. Since the reverse plane partition has a mixed column containing a 1 and the total degree of the monomial $x_1^{i_1}x_2^{i_2}\ldots x_k^{i_k}$ is $n+1$, this lattice path must have a single interior horizontal edge. As noted in Figure \ref{12RPPex}, there are $m-1$ possibilities for the unique interior horizontal edge. Consider first the interior horizontal edges in columns $1,\ldots,i_1$ touching the bottom boundary of $\lambda/\mu$. This case is depicted in Figure \ref{case_1}. Note that there are $m-m'$ such edges, since in total there are $m-1$ edges touching the bottom boundary and exactly $m'-1$ of them lie in columns $i_1+1,\ldots,n$. For each of these $m-m'$ edges, one possible lattice path. The path starts at the top right, travels along the top boundary until it reaches the boundary between column $i_1$ and column $i_1+1$, drops down to the bottom boundary, and travels along the bottom boundary until the horizontal edge, traverses the edge, and then immediately drops back down to the bottom boundary and traverses it until reaching the bottom left. This lattice path determines which squares are filled with 1's. The remaining shape is a disconnected skew shape where one component is a single column and the other component is $(\lambda/\mu)_{i_1}$. There are $(k-1)$ fillings using this lattice path, since the column below the edge may be filled with any of $2,\ldots,k$ and the remaining columns must fill $(\lambda/\mu)_{i_1}$ in increasing order. Note that unless the edge is a bottleneck edge, this is the unique lattice path using this edge. 
\begin{figure}[h] \begin{ytableau} \none & \none & \none & \none & \none &*(mygray)\emptytikzmark{10}{1} &*(mygray)\emptytikzmark{11}{1} &*(mygray)\emptytikzmark{12}{1} \\ \none & \none & \none & \none & *(mygray)\emptytikzmark{9}{1} & *(mygray)\\ \none & \none & \none & \none & *(mygray)\emptytikzmark{8}{1} & *(mygray)\\ \none & \none & \none & *(mygray)\emptytikzmark{7}{1} & *(mygray) \\ \none & \none & 1 & *(mygray) & *(mygray) \\ \none & 1 & 1 & *(mygray) \\ \none & 1 & \tikzmark{5}{1} & *(mygray)\emptytikzmark{6}{1} \\ \none & \tikzmark{4}{1} & *(mygray) \\ \none & \tikzmark{3}{1} & *(mygray) \\ 1 & \tikzmark{2}{1} \\ \tikzmark{1}{1} \end{ytableau} \caption{Case 1: the lattice path uses an edge in columns $1,\ldots,i_1$ touching the bottom boundary. Then the remaining shape is the union of $(\lambda/\mu)_{i_1}$ and a single column.}\label{case_1} \DrawHLine[red, ultra thick]{5}{5} \DrawHLine[black, ultra thick]{1}{1} \DrawHLine[black, ultra thick]{2}{2} \DrawVLine[black, ultra thick]{1}{1} \DrawVLine[black, ultra thick]{2}{4} \DrawHLineAbove[black, ultra thick]{7}{7} \DrawVLineLeft[black, ultra thick]{6}{7} \DrawVLineLeft[black, ultra thick]{8}{9} \DrawHLineAbove[black, ultra thick]{9}{9} \DrawVLineLeft[black, ultra thick]{10}{10} \DrawHLineAbove[black, ultra thick]{10}{12} \end{figure} The remaining $m'-1$ edges are those in column $i_1$ not touching the bottom boundary and the edges in columns $i_1+1,\ldots,n$ touching the top boundary. We similarly describe a possible lattice path for each of these edges. Suppose the edge lies in column $i$. The path starts at the top right, travels along the top boundary until the boundary between column $i$ and $i+1$, drops down to the edge and traverses it, traverses the top boundary until the boundary between column $i_1-1$ and column $i_1$, and drops down to the bottom boundary and traverses it until reaching the bottom left. This path determines which squares are filled with 1's. The remaining squares form a (possibly disconnected) skew shape, which must be filled with no mixed columns. Note that the remaining skew shape is connected if and only if the horizontal edge was not a bottleneck edge. If the shape is connected, then filling the columns in increasing order is the only possible filling. Otherwise, this is one possible filling but there may be more. Thus far, this gives us \begin{eqnarray*} & &(k-2)(m'-1) + c'_{i_1+2} b_{i_1+2} + \cdots + c'_{n-1} b_{n-1} + (k-1)(m-m') + (m'-1) \\ &=& (k-1)(m-1) + c'_{i_1+2} b_{i_1+2} + \cdots + c'_{n-1} b_{n-1}\end{eqnarray*} fillings. It remains to show that each bottleneck edge in column $i$ contributes a fixed number of additional fillings depending only on $i$. \begin{figure}[h!]
\begin{ytableau} \none & \none & \none & \none & \none &*(mygray) &*(mygray) &*(mygray) \\ \none & \none & \none & \none & *(mygray) & *(mygray) \\ \none & \none & \none & \none & *(mygray) & *(mygray) \\ \none & \none & \none & 1 & *(mygray)\\ \none & \none & 1 & 1 & *(mygray) \\ \none & 1 & 1 & 1 \\ \none & 1 & 1 & 1 \\ \none & 1 & 1 \\ \none & 1 & 1 \\ *(mygray) & *(mygray) \\ *(mygray) & \none \end{ytableau} \begin{ytableau} \none & \none & \none & \none & \none & 1 & 1 & 1 \\ \none & \none & \none & \none & *(mygray) & *(mygray) \\ \none & \none & \none & \none & *(mygray) & *(mygray) \\ \none & \none & \none & *(mygray) & *(mygray)\\ \none & \none & *(mygray) & *(mygray) & *(mygray) \\ \none & *(mygray) & *(mygray) & *(mygray) \\ \none & *(mygray) & *(mygray) & *(mygray) \\ \none & *(mygray) & *(mygray) \\ \none & *(mygray) & *(mygray) \\ *(mygray) & *(mygray) \\ *(mygray) & \none \end{ytableau} \caption{The remaining shape will have 1 or 2 components. The number of fillings is determined by $i_2,\ldots,i_k$ and the number of columns in the components.}\label{cases} \end{figure} As noted in the proof of Theorem \ref{bottleneck_cond}, each bottleneck edge in column $i$ has $\min(i,n-i+1,i_1)$ possible lattice paths using that edge. Each lattice path determines which squares will be filled with 1's. Note that the remaining squares will form a possibly disconnected skew shape with $n-i_1+1$ columns (depicted in Figure \ref{cases}), which must then be filled with no mixed columns. There are a fixed number of ways to fill this shape, which depends only on $i_2,\ldots,i_k$ and the number of columns in the two components. The possible number of columns in each component is in turn determined by which column the bottleneck edge is in. This finishes the proof of the claim. \begin{figure}[h!] \begin{ytableau} \none & \none & \none & \none & \none &*(mygray) &*(mygray) &*(mygray) \\ \none & \none & \none & \none & *(mygray) & *(mygray) \\ \none & \none & \none & \none & *(mygray) & *(mygray) \\ \none & \none & \none & 1 & *(mygray)\\ \none & \none & 1 & 1 & *(mygray) \\ \none & 1 & 1 & 1 \\ \none & 1 & 1 & 1 \\ \none & 1 & 1 \\ \none & 1 & 1 \\ *(mygray) & *(mygray) \\ *(mygray) & \none \end{ytableau} \begin{ytableau} \none & \none & \none & \none & \none &*(mygray) &*(mygray) &*(mygray) \\ \none & \none & \none & \none & *(mygray) & *(mygray) \\ \none & \none & \none & \none & *(mygray) & *(mygray) \\ \none & \none & \none & *(mygray) & *(mygray)\\ \none & \none & 1 & *(mygray) & *(mygray) \\ \none & 1 & 1 & *(mygray) \\ \none & 1 & 1 & *(mygray) \\ \none & 1 & 1 \\ \none & 1 & 1 \\ 1 & *(mygray) \\ 1 & \none \end{ytableau} \caption{The possible numbers of columns in the components of the remaining shape are determined by which column the bottleneck edge is in. In this example, since the bottleneck edge is in column 2, the components have 1 and $n-i_1+1$ columns or 2 and $n-i_1+2$ columns.} \end{figure} Thus we have that the coefficient of $x_1^{i_1}\cdots x_k^{i_k}$ for any shape with $n = (\sum_{j=1}^k i_j) - 1$ columns is $(k-1)(m-1) + c_2 b_2 + \cdots + c_{n-1} b_{n-1}$ for some integers $c_2,\ldots,c_{n-1}$. Recall that every shape is equivalent to its 180-degree rotation, and note that 180-degree rotation reverses the bottleneck sequence $b_1,\ldots,b_n$. Since there are shapes with arbitrary sequences of $b_1,\ldots,b_n$ (for example, the ribbon $[b_1+1,\ldots,b_n+1]$), it follows that $c_i = c_{n-i+1}$ for $i=2,\ldots,n-1$.
Recall also that the proof of Theorem \ref{bottleneck_cond} shows that each sum $b_i + b_{n-i+1}$ for $i=2,\ldots,n-1$ must be the same for any two shapes such that the terms in $g$ of degree $n+1$ in two variables are the same. Since the number of rows $m$ must be the same as well, it follows that the sum $(k-1)(m-1) + c_2 b_2 + \cdots + c_{n-1} b_{n-1}$ must also be the same. \end{proof} \begin{prop} \label{x1_2 x2_n} The coefficient of $x_1^2 x_2^n$ in $g_{\lambda/\mu}$ is \[ \binom{m}{2} - \sum_{i=1}^n \binom{b_i+1}{2} . \] \end{prop} \begin{proof} A 1,2-RPP giving the monomial $x_1^2 x_2^n$ must have no 1-pure columns, $(n-2)$ 2-pure columns, and two mixed columns. Hence the corresponding lattice paths have two interior horizontal edges. Consider the \textit{heights} of the interior horizontal edges. By an interior horizontal edge at height $i$ we mean that the edge lies between row $i$ and row $i+1$. Observe that given the heights of the two interior horizontal edges, there is at most one lattice path using those heights; since there are no 1-pure columns, the lattice path is completely determined by the heights chosen. \ytableausetup{boxsize=.5cm} \begin{figure}[h] \begin{ytableau} \none & \none & \none & \none & \none &*(mygray) &*(mygray) &*(mygray) \\ \none & \none & \none & \none & *(mygray) & *(mygray)\\ \none & \none & \none & \none & *(mygray) & *(mygray) \\ \none & \none & \none & *(mygray) & *(mygray) \\ \none & \none & *(mygray) 1 & *(mygray) & *(mygray) \\ \none & 1 & *(mygray) 1 & *(mygray) \\ \none & 1 & *(mygray) & *(mygray) \\ \none & 1 & *(mygray) \\ \none & *(mygray) & *(mygray) \\ *(mygray) & *(mygray) \\ *(mygray) \end{ytableau} \caption{The heights determine the filling, since the left-most column containing 1's must touch the left boundary of the shape, and the other column must touch the left boundary of the remaining shape.} \end{figure} There are $\binom{m}{2}$ ways to choose a pair of heights from $1,\ldots,m-1$ (with possible repetition). Since each pair of heights contributes either 1 or 0 lattice paths, the desired coefficient is thus $\binom{m}{2}$ minus the number of pairs not giving a lattice path. These are exactly the pairs of heights where the only interior horizontal edges at those heights lie in a single column. These are precisely the pairs of bottleneck edges from the same column. For each column $i$ there are $\binom{b_i+1}{2}$ ways to choose two of the bottlenecks in column $i$ (with possible repetition), giving the desired formula. \end{proof} \iffalse \begin{prop} The coefficient of $x_1x_2x_3^n$ in $g_{\lambda/\mu}$ is \[ (m-1)^2 - \sum_{i=1}^n \binom{b_i+1}{2} \] \end{prop} \begin{proof} Since $g$ is symmetric, we may equivalently compute the coefficient of $x_1 x_2^n x_3$. As before, the lattice path must have two interior horizontal edges. Note the column containing 1's must touch the left boundary and the column containing 3's must touch the right boundary. Hence the squares containing 1s are determined by the height above which the 1s appear. Similarly, the squares containing 3s are determined by the height below which the 3s appear.
\begin{figure}[h] \begin{ytableau} \none & \none & \none & \none & \none &\tikzmark{11}{2} & \tikzmark{12}{2}& \tikzmark{13}{2}\\ \none & \none & \none & \none & \tikzmark{10}{2} & \tikzmark{21}{2} \\ \none & \none & \none & \none & \tikzmark{9}{2} & \tikzmark{20}{2}\\ \none & \none & \none & \tikzmark{8}{2} & \tikzmark{19}{2} \\ \none & \none & \tikzmark{7}{2} & \tikzmark{17}{2} & \tikzmark{18}{2} \\ \none & \tikzmark{r6c2}{1} & \tikzmark{6}{2} & \tikzmark{r6c4}{3} \\ \none & \tikzmark{r5c2}{1} & \tikzmark{5}{2} & \tikzmark{r5c4}{3} \\ \none & \tikzmark{r4c2}{1} & \tikzmark{4}{2} \\ \none & \tikzmark{3}{2} & \tikzmark{15}{2} \\ \tikzmark{2}{2} & \tikzmark{14}{2} \\ \tikzmark{1}{2} \end{ytableau} \DrawVLineLeft[black, ultra thick]{1}{2} \DrawHLineAbove[black, ultra thick]{2}{2} \DrawVLineLeft[black, ultra thick]{3}{3} \DrawHLineAbove[red, ultra thick]{3}{3} \DrawVLineLeft[black, ultra thick]{4}{7} \DrawHLineAbove[black, ultra thick]{7}{7} \DrawVLineLeft[black, ultra thick]{8}{8} \DrawHLineAbove[black, ultra thick]{8}{8} \DrawVLineLeft[black, ultra thick]{9}{10} \DrawHLineAbove[black, ultra thick]{10}{10} \DrawVLineLeft[black, ultra thick]{11}{11} \DrawHLineAbove[black, ultra thick]{11}{13} \DrawHLine[black, ultra thick]{1}{1} \DrawVLine[black, ultra thick]{1}{1} \DrawHLine[black, ultra thick]{14}{14} \DrawVLine[black, ultra thick]{14}{14} \DrawHLine[black, ultra thick]{15}{15} \DrawVLine[black, ultra thick]{15}{6} \DrawHLine[blue, ultra thick]{17}{17} \DrawHLine[black, ultra thick]{18}{18} \DrawVLine[black, ultra thick]{18}{19} \DrawHLine[black, ultra thick]{20}{20} \DrawVLine[black, ultra thick]{20}{21} \DrawHLine[black, ultra thick]{12}{13} \DrawVLine[black, ultra thick]{13}{13} \caption{A pair of heights determines the filling.} \end{figure} There are $(m-1)^2$ ways to choose these two heights. The only pairs of heights not giving valid fillings come from bottleneck edges, which are the heights that touch both the left and right boundary. More precisely, the heights fail to give a valid filling exactly when they are both bottlenecks in the same column and the height for 1s is below or equal to the height for the 3s. For each column there are $\binom{b_i+1}{2}$ such pairs, giving the desired formula. \end{proof} \fi By Corollary 8.11 in \cite{rsw2009coincidences}, the number of rows $m$ and the sum $b_1+\cdots+b_n$ are invariant under $g$-equivalence. Hence we attain the following as a direct consequence of Proposition~\ref{x1_2 x2_n}. \begin{cor}\label{cor:trianglesums} Suppose $g_{\lambda/\mu} = g_{\gamma/\nu}$. Then \[ \sum_{i=1}^n (b_i^{\lambda/\mu})^2 = \sum_{i=1}^n (b_i^{\gamma/\nu})^2. \] Equivalently, the sums of the areas of the equilateral triangles of 1's in the row overlap compositions $r^{(2)},\ldots,r^{(m)}$ are the same. \end{cor} \begin{remark} One can also count various other coefficients in the dual stable Grothendieck polynomial. For terms of degree greater than $n+1$, it is useful to define a generalization of bottleneck edges. To that end, for $i = 1,\ldots,\lambda_1-w+1$ the \textit{number of width $w$ bottlenecks in position i} is \[ b^{(w)}_i = |\{ 1 \leq j \leq m-1 \mid \mu_j = i-1, \lambda_{j+1} = i+w-1 \}|. \] For example, let $\lambda / \mu = (5,5,4,2,2,2)/(4,2,1,1,1,0)$. Then the number of bottleneck edges of each width is given below. Note $b^{(1)}$ is just the previously defined bottleneck edges. 
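As a purely illustrative aside, these numbers can be read off directly from the part lists of $\lambda$ and $\mu$. The following short Python sketch (the function name and the convention that $\mu$ is padded with zeros to the length of $\lambda$ are our own, and the code is meant only to restate the definition) computes them; its output for the shape above agrees with the table below.
\begin{verbatim}
def bottlenecks(lam, mu, w):
    # Width-w bottleneck numbers b^(w)_i of the skew shape lam/mu.
    # lam, mu: weakly decreasing lists of row lengths, with mu padded by
    # zeros so that len(mu) == len(lam).  Implements
    #   b^(w)_i = #{ 1 <= j <= m-1 : mu_j = i-1 and lam_{j+1} = i+w-1 }.
    m = len(lam)
    return [sum(1 for j in range(m - 1)
                if mu[j] == i - 1 and lam[j + 1] == i + w - 1)
            for i in range(1, lam[0] - w + 2)]

lam, mu = [5, 5, 4, 2, 2, 2], [4, 2, 1, 1, 1, 0]
print(bottlenecks(lam, mu, 1))   # [0, 3, 0, 0, 1]
print(bottlenecks(lam, mu, 2))   # [0, 0, 1, 0]
\end{verbatim}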
\[\begin{tabular}{>{$}l<{$\hspace{12pt}}*{13}{c}} b^{(5)} &&&&&&&0&&&&&&\\ b^{(4)} &&&&&&0&&0&&&&&\\ b^{(3)} &&&&&0&&0&&0&&&&\\ b^{(2)} &&&&0&&0&&1&&0&&&\\ b^{(1)} &&&0&&3&&0&&0&&1&&\\ \end{tabular}\] We state the following propositions with proofs omitted for brevity. \begin{prop}\label{x1_3 x2_n-1} The coefficient of $x_1^3 x_2^{n-1}$ in $g_{\lambda/\mu}$ is \[ \left( \binom{m}{2} - \sum_{i=1}^{n} \binom{b^{(1)}_i+1}{2} \right) + \sum_{i=2}^{n-2} \binom{b_i^{(2)}+1}{2} + (m-2)\sum_{i=2}^{n-1} b_i^{(1)}\]\[ - \left(b_2^{(1)}(m-\mu_1'-1) + b_{n-1}^{(1)}(\lambda_n'-1) + \sum_{i=2}^{n-2} b_i^{(1)}b_{i+1}^{(1)} \right). \] \end{prop} \begin{prop} The coefficient of $x_1^3 x_2^n$ in $g_{\lambda/\mu}$ is \[ \binom{m+1}{3} - \sum_{i=1}^{n} \left((m-1)\binom{b_i^{(1)}+1}{2} - 2 \binom{b_i^{(1)}}{3} - b_i^{(1)}(b_i^{(1)}-1) \right) \]\[ - \sum_{i=1}^{n-1} \left(\binom{b_i^{(2)}+2}{3} + (b_i^{(1)}+b_{i+1}^{(1)})\binom{b_i^{(2)}+1}{2} + b_i^{(1)}b_i^{(2)}b_{i+1}^{(1)}\right). \] \end{prop} \end{remark} \iffalse For terms in $g$ of higher degree $n+r$ for $r > 1$, the coefficient is affected not only by areas of width one touching both boundaries but also areas of the shape with width at most $r$. This leads us to define the following generalization of bottleneck edges. \begin{defn} For $i = 1,\ldots,\lambda_1-w+1$ the \textit{number of width $w$ bottlenecks in position i} is \[ b^{(w)}_i = |\{ 1 \leq j \leq m-1 \mid \mu_j = i-1, \lambda_{j+1} = i+w-1 \}| \] \end{defn} \begin{ex} Let $\lambda / \mu = (5,5,4,2,2,2)/(4,2,1,1,1,0)$. Then the number of bottleneck edges of each width is \[\begin{tabular}{>{$}l<{$\hspace{12pt}}*{13}{c}} b^{(5)} &&&&&&&0&&&&&&\\ b^{(4)} &&&&&&0&&0&&&&&\\ b^{(3)} &&&&&0&&0&&0&&&&\\ b^{(2)} &&&&0&&0&&1&&0&&&\\ b^{(1)} &&&0&&3&&0&&0&&1&&\\ \end{tabular}\] Note $b^{(1)}$ is just the previously defined bottleneck edges. \begin{figure}[h] \begin{ytableau} \none & \none & \none & \none & \emptytikzmark{6}{1} \\ \none & \none & \emptytikzmark{4}{1} & \emptytikzmark{5}{1} & \\ \none & \emptytikzmark{3}{1} & & \\ \none & \emptytikzmark{2}{1} \\ \none & \emptytikzmark{1}{1} \\ & \\ \end{ytableau} \DrawHLine[red, ultra thick]{1}{1} \DrawHLine[red, ultra thick]{2}{2} \DrawHLine[red, ultra thick]{3}{3} \DrawHLine[blue, ultra thick]{4}{5} \DrawHLine[red, ultra thick]{6}{6} \caption{The shape $(5,5,4,2,2,2)/(4,2,1,1,1,0)$ has three bottlenecks of width 1 in the second column, one bottleneck of width 1 in the fifth column, and bottleneck of width 2 in the third column.} \end{figure} \end{ex} \begin{prop}\label{x1_3 x2_n-1} The coefficient of $x_1^3 x_2^{n-1}$ in $g_{\lambda/\mu}$ is \[ \left( \binom{m}{2} - \sum_{i=1}^{n} \binom{b^{(1)}_i+1}{2} \right) + \sum_{i=2}^{n-2} \binom{b_i^{(2)}+1}{2} + (m-2)\sum_{i=2}^{n-1} b_i^{(1)}\]\[ - \left(b_2^{(1)}(m-\mu_1'-1) + b_{n-1}^{(1)}(\lambda_n'-1) + \sum_{i=2}^{n-2} b_i^{(1)}b_{i+1}^{(1)} \right) \] \end{prop} \begin{proof} As is the case in Proposition \ref{x1_2 x2_n}, a lattice path identified with this monomial must have two interior horizontal edges. It can be checked that the pairs of heights not giving any lattice paths are the same as before, namely pairs of bottleneck edges in the same column. This contributes the term $\binom{m}{2} - \sum \binom{b^{(1)}_i+1}{2}$. It remains to account for pairs of heights giving more than one possible lattice path. It can be checked that the only pairs of heights with more than one possible lattice path are those involving bottleneck edges and bottlenecks of width two. 
Consider the case where the two heights are both width two bottlenecks in the same column $i$ where $i \in \{2,\ldots,n-2\}$. Then there are two possible fillings, since the last column containing 1s may be either to the left or to the right of the bottleneck of width 2. Since there are $\binom{b^{(2)}_i+1}{2}$ ways to choose a pair of width two bottlenecks from column $i$, this accounts for the second term in our formula. Next, consider when one of the heights is a bottleneck in columns $2,\ldots,n-1$ (similar to the case in Theorem \ref{bottleneck_cond}, bottlenecks in column 1 and $n$ do not contribute additional paths). Then there are $(m-2)$ possibilities for the other height. Suppose the other interior horizontal edge is not a bottleneck edge. We show that there are 2 possible lattice paths. There are several cases to check. Let the heights of the bottleneck and the other interior horizontal edge by $i$ and $j$ respectively. Also, let the column in which the bottleneck belongs be $c_i$ and the column containing the leftmost edge of height $j$ be $c_j$. Suppose first $i < j$. Then $c_i \leq c_j$ since $\lambda/\mu$ is a skew shape. If $c_j-c_i \geq 2$, then there are two lattice paths. Note the interior horizontal edge must be chosen so that it touches the left boundary. Then one lattice path is given by using column $c_i+1$ as the last column containing 1, and the other is given by using column 1 as the last column containing 1. Now suppose $c_j = c_i$ or $c_j = c_i + 1$. Then, as depicted in Figure \ref{i<j_ci=cj}, there will be two lattice paths (except in a certain border case, which will be accounted for later). \begin{figure}[h!] \[ \begin{ytableau} \none & \none & \none & \none & \none &*(mygray)\emptytikzmark{9}{1} &\emptytikzmark{10}{1} &\emptytikzmark{11}{1} \\ \none & \none & \none & \none & *(mygray)\bullet &*(mygray)\emptytikzmark{8}{1}\\ \none & \none & \none & \none & *(mygray)\emptytikzmark{6}{1} &\emptytikzmark{7}{1}\\ \none & \none & \none & \emptytikzmark{5}{1} & \\ \none & \none & \emptytikzmark{4}{1} & & \\ \none & \emptytikzmark{3}{1} & & \\ \none & & & \\ \none & & \\ \none & & \\ *(mygray) & \emptytikzmark{2}{1} \\ *(mygray)\emptytikzmark{1}{1} \end{ytableau} \] \DrawHLine[black, ultra thick]{1}{1} \DrawVLine[black, ultra thick]{1}{1} \DrawVLineLeft[black, ultra thick]{2}{3} \DrawHLineAbove[black, ultra thick]{3}{3} \DrawVLineLeft[black, ultra thick]{4}{4} \DrawHLineAbove[black, ultra thick]{4}{4} \DrawVLineLeft[black, ultra thick]{5}{5} \DrawHLineAbove[black, ultra thick]{5}{5} \DrawHLine[red, ultra thick]{6}{6} \DrawHLine[red, ultra thick]{8}{8} \DrawVLine[black, ultra thick]{6}{6} \DrawVLine[black, ultra thick]{8}{9} \DrawHLineAbove[black, ultra thick]{10}{11} \[\begin{ytableau} \none & \none & \none & \none & \none &*(mygray)\emptytikzmark{9}{1} &*(mygray)\emptytikzmark{10}{1} &\emptytikzmark{11}{1} \\ \none & \none & \none & \none &*(mygray) \bullet &*(mygray)\emptytikzmark{8}{1}\\ \none & \none & \none & \none & *(mygray)\emptytikzmark{6}{1} &\emptytikzmark{7}{1}\\ \none & \none & \none & \emptytikzmark{5}{1} & \\ \none & \none & \emptytikzmark{4}{1} & & \\ \none & \emptytikzmark{3}{1} & & \\ \none & & & \\ \none & & \\ \none &\emptytikzmark{2}{1} & \\ \emptytikzmark{a}{1} & \\ \emptytikzmark{1}{1} \end{ytableau} \DrawHLineAbove[black, ultra thick]{a}{a} \DrawVLineLeft[black, ultra thick]{1}{a} \DrawVLineLeft[black, ultra thick]{2}{3} \DrawHLineAbove[black, ultra thick]{3}{3} \DrawVLineLeft[black, ultra thick]{4}{4} \DrawHLineAbove[black, ultra 
thick]{4}{4} \DrawVLineLeft[black, ultra thick]{5}{5} \DrawHLineAbove[black, ultra thick]{5}{5} \DrawHLine[red, ultra thick]{6}{6} \DrawHLine[red, ultra thick]{8}{8} \DrawVLine[black, ultra thick]{6}{6} \DrawVLine[black, ultra thick]{8}{8} \DrawVLine[black, ultra thick]{10}{10} \DrawHLine[black, ultra thick]{10}{10} \DrawHLineAbove[black, ultra thick]{11}{11} \begin{ytableau} \none & \none & \none & \none & \none &*(mygray)\emptytikzmark{9}{1} &*(mygray)\emptytikzmark{10}{1} &\emptytikzmark{11}{1} \\ \none & \none & \none & \none & *(mygray)\bullet &*(mygray)\emptytikzmark{8}{1}&*(mygray)\emptytikzmark{8b}{1}&\\ \none & \none & \none & \none &*(mygray) \emptytikzmark{6}{1} &*(mygray)\emptytikzmark{7}{1}&&\\ \none & \none & \none & \emptytikzmark{5}{1} & \\ \none & \none & \emptytikzmark{4}{1} & & \\ \none & \emptytikzmark{3}{1} & & \\ \none & & & \\ \none & & \\ \none &\emptytikzmark{2}{1} & \\ \emptytikzmark{a}{1} & \\ \emptytikzmark{1}{1} \end{ytableau}\] \DrawHLineAbove[black, ultra thick]{a}{a} \DrawVLineLeft[black, ultra thick]{1}{a} \DrawVLineLeft[black, ultra thick]{2}{3} \DrawHLineAbove[black, ultra thick]{3}{3} \DrawVLineLeft[black, ultra thick]{4}{4} \DrawHLineAbove[black, ultra thick]{4}{4} \DrawVLineLeft[black, ultra thick]{5}{5} \DrawHLineAbove[black, ultra thick]{5}{5} \DrawHLine[red, ultra thick]{6}{6} \DrawVLine[black, ultra thick]{7}{7} \DrawHLine[black, ultra thick]{7}{7} \DrawHLine[red, ultra thick]{8b}{8b} \DrawVLine[black, ultra thick]{8b}{8b} \DrawVLine[black, ultra thick]{10}{10} \DrawHLineAbove[black, ultra thick]{11}{11} \caption{If $i < j$ and $c_i = c_j$ there are two lattice paths. One is given by dropping down between column 1 and 2. The other lattice path is given by dropping down between column $c_i+2$ and $c_i+3$, though whether the non-bottleneck interior horizontal edge is in column $c_i+1$ or $c_i+2$ depends on whether height $j$ is a bottleneck of width 2. Note that if the square marked with a bullet and $i$ and $j$ are still the same, this example will now have $c_j = c_i+1$. However, the permissible lattice paths will still be the exact same, giving 2 lattice paths in either case. An exception is when $c_i = n-1$, in which there will only be the first lattice path.} \label{i<j_ci=cj} \end{figure} Next is the case $i>j$. If $c_j > 1$ note there are two lattice paths using heights $i$ and $j$. This is because the non-bottleneck interior horizontal edge must touch the left boundary, and then the last column containing 1s can be either column 1 or column $c_i+1$. It remains to consider $c_i=1$. In this case there are in fact still two lattice paths. One is given as before by using column $c_i+1$ as the extra column. Unless $c_i=2$, another lattice path is given by choosing the interior horizontal edge at height $j$ in column 2, and dropping down to use column 1 as the last column. Next we suppose both heights are bottlenecks (in different columns). Then unless they are in adjacent columns, the last column containing a 1 may be to the left, right or in between the two bottlenecks, giving 3 possible lattice paths. If the bottlenecks are in adjacent columns, then having a column in between is impossible, giving 2 lattice paths. This gives a total of $(m-2)\sum_{i=2}^{n-1} b_i^{(1)} - \sum_{i=2}^{n-2} b_i^{(1)}b_{i+1}^{(1)}$ extra paths. As noted earlier, an exception is if the bottleneck is in column 2 and the other edge is in column 1. In this case, it is impossible for the last column to be to the left of the two edges, so there is only 1 lattice path. 
Similarly, a bottleneck in column $n-1$ and the edge in the last column is an exception. Thus we must subtract $b_2^{(1)}$ times the number of heights in the first column and $b_{n-1}^{(1)}$ times the number of heights in the last column. This gives the last two terms in the formula, finishing the proof. \end{proof} \begin{prop} The coefficient of $x_1^3 x_2^n$ in $g_{\lambda/\mu}$ is \[ \binom{m+1}{3} - \sum_{i=1}^{n} \left((m-1)\binom{b_i^{(1)}+1}{2} - 2 \binom{b_i^{(1)}}{3} - b_i^{(1)}(b_i^{(1)}-1) \right) \]\[ - \sum_{i=1}^{n-1} \left(\binom{b_i^{(2)}+2}{3} + (b_i^{(1)}+b_{i+1}^{(1)})\binom{b_i^{(2)}+1}{2} + b_i^{(1)}b_i^{(2)}b_{i+1}^{(1)}\right) \] \end{prop} \begin{proof} Note that a 1,2-RPP corresponding to the monomial $x_1^3 x_2^n$ must have no 1-pure columns, $n-3$ 2-pure columns, and $3$ mixed columns. Thus the corresponding lattice path has exactly 3 interior horizontal edges. Next observe that given the 3 heights of the interior horizontal edges, there is at most one lattice path using these heights. Since there are no 1-pure columns, the left-most column containing a 1 must touch the left boundary, the next left-most column containing a 1 touches the left boundary of the remaining shape to be filled, and similarly the last column containing a 1 touches the left boundary of the remaining shape. Hence we can count the coefficient by starting with the $\binom{m+1}{3}$ ways to choose 3 heights (with possible repetition) from the $m-1$ heights and subtracting the combinations of heights that are not used by any lattice path. One way a triple of heights fails to give a valid filling is if two of the interior horizontal edges are bottlenecks in the same column, and the last horizontal edge is at any of the $(m-1)$ heights. There are $\sum_{i=1}^n (m-1)\binom{b_i^{(1)}+1}{2}$ ways to choose such edges. However, this overcounts cases where all 3 edges are bottlenecks in the same column. Suppose all 3 are distinct heights. Then the given triple of heights has been counted 3 times, so we must subtract it twice, contributing a $-\sum_{i=1}^n 2 \binom{b_i^{(1)}}{3}$ term. If 2 of the heights are the same and the last is distinct, then the given triple of heights has been counted twice. Subtracting gives a term $-\sum_{i=1}^n b_i^{(1)}(b_i^{(1)}-1)$. Finally, if all 3 heights are the same then the triple has beeen counted only once, so we need not subtract anything for this case. This gvies the first sum in the desired formula. Suppose 3 heights $i,j,k$ are chosen such that no two heights are bottlenecks in the same column and there are at least 3 columns containing edges at any of the 3 given heights. Then there will be a unique corresponding lattice path, given by choosing the leftmost edge at height $i$, the leftmost edge at height $j$ after removing the squares above the first edge chosen, and then the leftmost edge at height $k$ after removing the squares above the first two edges chosen. Hence the only other way for a triple of heights to fail to give a valid filling is if there are only two columns containing edges at the 3 given heights. These are the terms involving the width two bottlenecks $b^{(2)}$. There are 3 possible cases. The first case is if all 3 edges are at heights that are width two bottlenecks in a given column. With repetition, this gives $\binom{b_i^{(2)}+2}{3}$ for each column. The second case is if 2 of the heights are at width two bottlenecks of a given column, and the last height is at one of the adjacent bottleneck edges. 
There are $\binom{b_i^{(2)}+1}{2}$ ways to choose the two width two bottlenecks and $(b_i^{(1)}+b_{i+1}^{(1)})$ ways to choose the remaining bottleneck edge, giving the second term in the sum. Finally, since we have already counted all the cases where two of the heights are bottlenecks in the same column, the remaining case is when one height is a bottleneck in column $i$, another height is a width two bottleneck in column $i$, and the last height is a bottleneck in column $i+1$. This gives the last term in the sum, giving the desired formula. \end{proof} The proof of Proposition \ref{x1_3 x2_n-1} may possibly be adapted to terms of degree $n+2$ with more $x_1$'s, but the argument seems to become increasingly complicated. Similarly to the formulas for the terms of degree $n+1$ described in the proof of Theorem \ref{bottleneck_cond}, the coefficients in the sum of terms $\binom{b^{(1)}_i+1}{2}$ will not all be 1. However, other terms in the formula become more complicated as well. For example, take $x_1^4 x_2^{n-2}$, the next simplest monomial. Rather than terms involving just the first and last column height, the formula would also require terms involving the second and second to last column heights. Also, rather than just a sum over adjacent pairs $b_i b_{i+1}$, a sum over all products $b_i b_j$ is necessary, as well as another sum over pairs $b_i b_{i+2}$. Given this drastic increase in complexity involving the interplay of several different terms, it is unclear that computing the coefficients of more complicated monomials will lead to nice conditions on the bottlenecks $b^{(w)}$. \fi \section{Relation Between $g$-equivalence and $G$-equivalence} It is natural to ask whether $g_A = g_B$ for two skew shapes $A$ and $B$ implies $G_A = G_B$, and vice versa. The following examples show that in general, neither equality implies the other. \begin{ex} \ytableausetup{boxsize=.35cm} Based on computer computation, the shapes \[\ydiagram{4+4,1+5,4,2}\qquad \ydiagram{3+5,2+4,4,2}\] are $g$-equivalent but not $G$-equivalent. For example, the coefficients of $x_1^6 x_2^6 x_3^3 x_4$ in $G$ are $-353$ and $-354$, respectively. \end{ex} \begin{ex}\label{ribbon_staircase} The shapes \[\ydiagram{3+5,3+3,1+3,2} \qquad \ydiagram{5+3,1+5,1+3,2}\] are $G$-equivalent but not $g$-equivalent. One can show $G$-equivalence through computer computation using the reverse lattice word expansion of $G_{\lambda/\mu}$ into stable Grothendieck polynomials indexed by straight shapes found in \cite{buch2002lrrule}. To see the shapes are not $g$-equivalent, we notice that $b_4+b_5 = 1$ for the shape on the left and $b_4+b_5=0$ for the shape on the right. \end{ex} \iffalse Here we document a few of the approaches taken. \subsection{Barely Set-Valued Tableaux} For a ribbon $\alpha$ of size $n$, the $x_1x_2 \ldots x_{n+1}$-coefficient in $G_\alpha$ counts the number of \textit{barely set-valued tableaux} on $\alpha$, which are defined to be set-valued tableaux containing the numbers $1, \ldots, n+1$. An enumerative formula is given in \cite{chan2015bngraph} for the number of these tableaux on any skew shape. Given a skew shape $\sigma$, let $f^{\sigma}$ be the number of standard Young tableaux on $\sigma$. Furthermore, let ${}^i\sigma$ and $i^\sigma$ be the shapes obtained by adding a box to the right and left of row $i$ in $\sigma$ respectively. 
We define $f^{\sigma^i}$ and $f^{{}^i\sigma}$ analogously as the number of standard Young tableaux on these shapes, although we set these quantities to $0$ if the superscript is not a skew shape. \begin{lem} \cite{chan2015bngraph} The number of barely set-valued tableaux for a skew shape $\sigma$ with $n$ boxes and $k$ rows is $$k(n+1)f^{\sigma} + \sum_{i=1}^k (k-i)f^{{}^i\sigma} - \sum_{i=1}^k (k-i+1)f^{\sigma^i}.$$ \end{lem} The authors attempted to evaluate this formula for ribbons by expanding the $f^{\sigma}$ terms with Aitken's determinantal formula for counting skew standard Young tableaux but it proved to be impossible to obtain a closed form. Furthermore, one should note that the number of barely set-valued tableaux on the ribbons $(1,2) \circ (2,1)$ and $(2,1) \circ (1,2)$ are the same, so this is not a strong enough invariant to prove our conjecture. \subsection{Unstable Grothendieck Polynomials} Another attempt at unraveling the structure of Grothendieck polynomials is to examine their unstable versions as defined in \cite{buch2002lrrule}. These unstable polynomials are defined recursively for general permutations. Define the length $l(w)$ of a permutation $w$ to be the smallest number $k$ such that $w$ can be expressed as a product of simple reflections $s_{i_1}s_{i_2}... s_{i_k}$ where $s_i$ is the transposition $(i\text{ }i+1)$. Let $w_0 \in S_n$ be the longest permutation sending $1$ to $n$, $2$ to $n-1$, and so on. \begin{defn} The \textit{(single-variable) Grothendieck polynomial} $\mathcal{G}_{w_0}(x)$ is the monomial $$\prod_{1 \leq i \leq n-1} x_i^{n-i}.$$ \end{defn} Now let $w$ be any other permutation in $S_n$. Since $w$ is not the longest permutation, there exists $s_i$ such that $l(ws_i) = l(w) + 1$. Now we can define the Grothendieck polynomial for general $w$. \begin{defn} The \textit{Grothendieck polynomial} $\mathcal{G}_w(x)$ is given by the recursive identity $$\pi_i(\mathcal{G}_{ws_i}(x))$$ where $\pi_i$ is the isobaric divided difference operator $$\pi_i(f) = \frac{(1-x_{i+1})f - (1-x_i)f(x_1,x_2,\ldots, x_{i+1}, x_i, \ldots)}{x_i - x_{i+1}}.$$ \end{defn} It is known that the choice of $s_i$ does not matter. We can obtain the stable Grothendieck polynomial by taking the limit $$G_w = \lim_{m \to \infty} \mathcal{G}_{1^m \times w}$$ where $1^m \times w$ is the permutation in $S_{m+n}$ fixing $1, 2, ..., m$ and sending $m + i$ to $m + w(i)$ for $i$ between $1$ and $n$, inclusive. We now defined $\mathcal{G}_{\lambda/\mu)}$. The unstable Grothendieck polynomial $\mathcal{G}_{\lambda/\mu}$ is given by $\mathcal{G}_{w_{\lambda/\mu}}$ where $w_{\lambda/\mu}$ is a permutation determined by $\lambda$ and $\mu$. \begin{defn} Given a skew diagram $\lambda/\mu$, define the permutation $w_{\lambda/\mu} \in S_{|\lambda/\mu|}$ as follows. Starting from the lower-left box, fill in each square in $\lambda/\mu$ with the number of its corresponding diagonal, increasing as we travel up or right in the diagram. An example for $\lambda = (4,3,3,3), \mu = (1)$ is given below: $$\begin{ytableau} \none & 5 & 6 & 7 \\ 3 & 4 & 5 \\ 2 & 3 & 4 \\ 1 & 2 & 3 \end{ytableau}$$ Then, set $w_{\lambda/\mu}$ to be the permutation with reflection word given by reading the diagram from right to left, bottom to top. For example, we have $$w_{(4,3,2)/(1)} = s_2s_1s_4s_3s_2s_6s_5s_4.$$ \end{defn} In a ribbon $\alpha=(\alpha_1,\cdots,\alpha_k)$, no two squares share the same diagonal. 
Therefore, we have $$w_\alpha = s_{\alpha_1}s_{\alpha_1-1}\ldots s_1 s_{\alpha_2+\alpha_1} s_{\alpha_2+\alpha_1-1}\ldots s_{\alpha_1+1} \ldots s_{|\alpha|}s_{|\alpha|-1}\ldots s_{(\sum_{i=1}^{k-1} \alpha_i) + 1}$$ It is straightforward to calculate a sequence of reflections that send $1^m \times w_\alpha$ to the longest permutation. \begin{prop} Define $B_i$ from $i = 1$ to $k$ as the composition of the reflection word $$s_{|\alpha| - \alpha_i + m}s_{|\alpha| - \alpha_i + m - 1} \ldots s_{1 + \sum_{j=i+1}^k \alpha_j}$$ with the words $s_{|\alpha|} ... s_p$ from $p = 2 + \sum_{j=i+1}^k \alpha_j$ to $\sum_{j=i}^k \alpha_j$. Define $M$ to be the composition of reflections $$s_{|a|+m}s_{|a|+m-1}\ldots s_{|a|+1} s_{|a|+m} \ldots s_{|a|+2} \ldots s_{|a|+m}s_{|a|+m-1}s_{|a|+m}.$$ Then $w_\alpha B_1B_2 \ldots B_k M = w_0$. \end{prop} \begin{proof} It suffices to plug in the definitions of $w_\alpha$, $B_i$, and $M$ to verify that their composition is the longest word $w_0$. \end{proof} From the above, we have a purely algebraic expression for the Grothendieck polynomial of a ribbon. Although the unstable polynomials do not have nice expressions and are not symmetric in general, they may be useful for comparing coefficients. \subsection{Littlewood-Richardson Rule} The stable Grothendieck polynomials on straight shapes form a basis for the ring of symmetric power series. Therefore, we can compare skew stable Grothendieck polynomials by comparing their expansions as a sum of straight stable Grothendieck polynomials. Given a set-valued tableaux, define its \textit{column word} to be its entries read in a bottom-to-top, left-to-right order, where each square is read in increasing order. This expansion is given by the following Littlewood-Richardson rule: \begin{thm} \cite{buch2002lrrule} $G_{\lambda/\mu} = \sum_{\nu \text{ a partition}} a_{\lambda/\mu, \nu} G_\nu$ where the coefficient $a_{\lambda/\mu, \nu}$ is the number of set-valued tableaux whose column word is a reverse lattice word of weight $\nu$. \end{thm} Since the column word coincides with the row word used in the original Littlewood-Richardson rule, the coefficients $a_{\alpha, \nu}$ for $|\nu| = |\alpha|$ are exactly the Littlewood-Richardson coefficients for $\alpha$. While the counting set-valued tableaux giving reverse lattice words is more restricted than simply counting set-valued tableaux, based on initial attempts it seems still difficult to enumerate these coefficients. \fi \section{Future Explorations} \subsection{Coincidences of ribbon stable Grothendieck polynomials} The combinatorics of ribbon stable Grothendieck polynomials seem to be more difficult than their dual stable Grothendieck and Schur counterparts. However, we still conjecture that coincidences among ribbon Grothendieck polynomials arise in precisely the same way as the dual case. \begin{conj} Let $\alpha$ and $\beta$ be ribbons. Then $G_\alpha = G_\beta$ if and only if $\beta=\alpha$ or $\beta=\alpha^*$. \end{conj} While one direction is immediate, the other direction has proven to be much more difficult. \subsection{Conjugation invariance} Given a Young diagram $\lambda=\langle\lambda_1,\lambda_2,\ldots,\lambda_k\rangle$, we define its \textit{transpose} Young diagram to be $\lambda^T=\langle \lambda_1',\dots,\lambda_s'\rangle$, where $\lambda_i'$ is the number of boxes in column $i$ of $\lambda$. This operation extends to skew diagrams by setting $(\lambda/\mu ) ^T = \lambda^T/\mu^T$. 
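In terms of part lists, the transpose is simply the vector of column lengths. A minimal Python sketch (the function name is our own and the code is meant only to illustrate the definition) is given below; it reproduces the examples that follow.
\begin{verbatim}
def conjugate(parts):
    # Transpose (conjugate) of a partition given as a weakly decreasing
    # list of positive parts; entry c counts the parts of size >= c.
    if not parts:
        return []
    return [sum(1 for p in parts if p >= c) for c in range(1, parts[0] + 1)]

# conjugate([5, 5, 2]) == [3, 3, 2, 2, 2]
# A skew shape lam/mu is transposed componentwise:
# (conjugate(lam), conjugate(mu)).
\end{verbatim}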
For example, $\langle 5,5,2\rangle^T=\langle 3,3,2,2,2\rangle$ and $(\langle 4,3,1\rangle/\langle 2\rangle)^T=\langle 4,3,1\rangle^T/\langle 2\rangle^T=\langle 3,2,2,1\rangle/\langle 1,1\rangle$. For skew shapes $A$ and $B$ it follows immediately from the Jacobi-Trudi identity that $s_A = s_B$ implies $s_{A^T} = s_{B^T}$. Since it remains open to find an analogue of Jacobi-Trudi for skew Grothendieck polynomials, the answer to the analogous question for $g$ and $G$ is less obvious. \begin{question} Suppose $g_A = g_B$. Does it follow that $g_{A^T} = g_{B^T}$? Similarly, suppose $G_A = G_B$. Does it follow that $G_{A^T} = G_{B^T}$? \end{question} If conjugation does preserve $g$-equivalence, then we immediately get another necessary condition on $g$-equivalence by taking a transposed version of Theorem \ref{bottleneck_cond}. \subsection{Ribbon staircases} Theorem 7.30 of \cite{rsw2009coincidences} describes a class of nontrivial skew equivalences. A \textit{nesting} is a word consisting of the symbols left parenthesis ``$($," right parenthesis ``$)$," dot ``$.$," and vertical slash ``$|$" where the parentheses must be properly matched. Given a skew shape that may be decomposed into a ribbon $\alpha$ in a certain manner as described in \cite{rsw2009coincidences}, one may obtain a corresponding nesting. Theorem 7.30 states that shapes that may be decomposed with the same ribbon $\alpha$ such that the nestings are reverses of each other are Schur equivalent. It is interesting to consider whether these equivalences hold for $g$ and $G$ as well. For example, Corollary 7.32 of \cite{rsw2009coincidences} states that $s_{\delta_n/\mu} = s_{(\delta_n/\mu)^T}$ for any diagram $\mu$ contained in the staircase partition $\delta_n = \langle n-1,n-2,\ldots,1\rangle $. Computation strongly suggests the same holds true for the Grothendieck polynomials as well. \begin{conjecture}\label{stair_pair} Let $\mu$ be a diagram contained in the staircase partition $\delta_n = \langle n-1,n-2,\ldots,1 \rangle$. Then $g_{\delta_n/\mu} = g_{(\delta_n/\mu)^T}$ and $G_{\delta_n/\mu} = G_{(\delta_n/\mu)^T}$. \end{conjecture} However, not all equivalences described by Theorem 7.30 hold for Grothendieck polynomials. \iffalse For example, let $\alpha = (2,3)$ and take the nesting \[\begin{tabular}{ c c } $\mid$ & . \\ 1 & 2 \\ \end{tabular}\] and its reverse \[\begin{tabular}{ c c } . & $\mid$ \\ 1 & 2 \\ \end{tabular}\] Then the corresponding skew shapes are exactly those given in Example \ref{ribbon_staircase}; they match for $G$ but not for $g$. Based on computation, for $\alpha = (2,3)$ it appears that the skew shapes are $G$ equivalent if and only if the nesting contains only vertical slashes and dots. However, this does not hold for all ribbons $\alpha$. For example, take $\alpha = (1,3)$. Then the shapes given by the nesting \[\begin{tabular}{ c c } $\mid$ & . \\ 1 & 2 \\ \end{tabular}\] and its reverse do not give $G$ equivalent skew shapes. It also remains to find any examples of equivalences for $g$ besides those given in Conjecture \ref{stair_pair}. This leads to the following question. \fi \begin{question} For which ribbons $\alpha$ and nestings $\mathcal{N}$ are the corresponding shapes $g$-equivalent or $G$-equivalent? \end{question} \iffalse \subsection{Littlewood-Richardson Rule} The stable Grothendieck polynomials on straight shapes form a basis for the ring of symmetric power series. 
Therefore, we can compare skew stable Grothendieck polynomials by comparing their expansions as a sum of straight stable Grothendieck polynomials. Given a set-valued tableaux, define its \textit{column word} to be its entries read in a bottom-to-top, left-to-right order, where each square is read in increasing order. This expansion is given by the following Littlewood-Richardson rule: \begin{thm} \cite{buch2002lrrule} $G_{\lambda/\mu} = \sum_{\nu \text{ a partition}} a_{\lambda/\mu, \nu} G_\nu$ where the coefficient $a_{\lambda/\mu, \nu}$ is the number of set-valued tableaux whose column word is a reverse lattice word of weight $\nu$. \end{thm} While the counting set-valued tableaux giving reverse lattice words is more restricted than simply counting set-valued tableaux, it is still difficult to enumerate these coefficients. Since the column word coincides with the row word used in the original Littlewood-Richardson rule, the coefficients $a_{\alpha, \nu}$ for $|\nu| = |\alpha|$ are exactly the Littlewood-Richardson coefficients for $\alpha$. \subsection{$G$ positivity} We observed in many examples of ribbons $\alpha, \beta$ such that $s_\alpha = s_\beta$ and $G_\alpha \neq G_\beta$ that not only are the Littlewood-Richardson coefficients different, the coefficients of one shape are dominated by the coefficients of the other. This leads us to define the following analog of Schur positivity. \begin{defn} A symmetric function is \textit{$G$ positive} if the coefficients for its expansion in the basis $\{G_\lambda\}$ are all nonnegative. For example, the Littlewood-Richardson rule given previously shows $G_A$ for a skew shape $A$ is $G$ positive. We define a partial order on $G$ equivalence classes of skew shapes by $A\leq B$ if and only if $a_{A,\nu} \leq a_{B,\nu}$ for all $\nu$. Equivalently, $A \leq B$ if and only if $G_B - G_A$ is $G$ positive. \end{defn} This partial order is a coarsening of the order given by $A \leq B$ if and only if $s_B - s_A$ is Schur positive. Our poset inherits many of the properties of the more well-studied poset given by Schur positivty. For example, when restricted to ribbons, the connected components of the poset are the sets of ribbons with a fixed number of rows and columns. Each component has a least element, the unique hook shape with the given number of rows and columns. Since the poset is a coarsening of the Schur positivity order, the same argument in Proposition 3.10 of \cite{mw2008positivity} shows ribbons that are permutations of some fixed partition $\lambda$ form a convex subposet of the poset of all ribbons. In some cases, the poset is highly structured. \begin{figure}[h] \[ \includegraphics{1234_pretty} \] \caption{The Hasse diagram for all ribbons given by permutations of $(4,3,2,1)$. It has 12 vertices, since each ribbon is equivalent to its 180 degree rotation. They correspond to distinct $G_\alpha$. } \end{figure} \\\\ We pose the following conjectures and questions about our $G$-positivity order. \begin{conjecture} In addition to a least element, the set of ribbons with a fixed number of rows and columns has a greatest element. \end{conjecture} \begin{conjecture} For fixed $\lambda$, the set of ribbons which are permutations of $\lambda$ has both a least and a greatest element. \end{conjecture} \begin{question} Computationally, the set of ribbons which are permutations of some fixed $\lambda$ seems the follow the pattern that ribbons with the larger rows in the middle of the shape are larger in the poset. 
Is there a general rule that governs this phenomenon? \end{question} \begin{conjecture} Like in the Schur positivity order, conjugation gives acts as an automorphism. \end{conjecture} \begin{question} Are there ribbons $\alpha$ and $\beta$ such that $s_\alpha = s_\beta$ and $G_\alpha \neq G_\beta$ but $\alpha$ and $\beta$ are incomparable? \end{question} \fi \section{Acknowledgments} This research was carried out as part of the 2016 summer REU program at the University of Minnesota, Twin Cities and was supported by NSF RTG grant DMS-1148634 and by NSF grant DMS-1351590. We would like to thank Vic Reiner, Gregg Musiker, Sunita Chepuri, and Pasha Pylyavskyy for their mentorship and support. \bibliographystyle{alpha}
{ "timestamp": "2016-09-21T02:05:54", "yymm": "1609", "arxiv_id": "1609.06171", "language": "en", "url": "https://arxiv.org/abs/1609.06171", "abstract": "The question of when two skew Young diagrams produce the same skew Schur function has been well-studied. We investigate the same question in the case of stable Grothendieck polynomials, which are the K-theoretic analogues of the Schur functions. We prove a necessary condition for two skew shapes to give rise to the same dual stable Grothendieck polynomial. We also provide a necessary and sufficient condition in the case where the two skew shapes are ribbons.", "subjects": "Combinatorics (math.CO)", "title": "Coincidences among skew dual stable Grothendieck polynomials" }
https://arxiv.org/abs/1809.01131
On Banach space projective tensor product of $C^*$-algebras
We analyze certain algebraic structures of the Banach space projective tensor product of $C^*$-algebras which are comparable with their known counterparts or the Haagerup tensor product and the operator space projective tensor product of $C^*$-algebras. Highlights of this analysis include (a) injectivity of the Banach space projective tensor product when restricted to the tensor products of $C^*$-algebras, (b) detailed structure of closed ideals of $A \otimes_{\gamma} B$ in terms of those of $A$ and $B$, (c) identification of certain spaces of ideals of $A \otimes_{\gamma} B$ in terms of those of $A$ and $B$ from the perspective of hull-kernel topology, and (d) identification of the center of $A \otimes_{\gamma} B$ with $Z(A) \otimes_{\gamma} Z(B)$, where $A$ and $B$ are $C^*$-algebras.
\section{Introduction} Around the 1960s, Gelbaum initiated the study of certain spaces of ideals of the Banach space projective tensor product $A \otimes^\gamma B$ of Banach algebras $A$ and $B$; he was then followed by Laursen and Tomiyama (see \cite{gelbaum1, gelbaum2, gelbaum3, laur, tom} and the references therein). They focussed mainly on analyzing the spaces of maximal modular ideals and primitive ideals of $A \otimes^\gamma B$ in terms of those of $A$ and $B$, and on determining which properties of Banach algebras are passed on to their tensor products, when $A$ or $B$ or both are commutative. However, not much was discovered about the structure of general ideals of $A \otimes^\gamma B$ in terms of those of $A$ and $B$. A similar difficulty has been observed in understanding the ideal structure of $A \otimes^{\min} B$ as well, for $C^*$-algebras $A$ and $B$. On the other hand, the analysis of various algebraic structures of the Haagerup tensor product $A \otimes^h B$ and the operator space projective tensor product $A \widehat\otimes B$, of $C^*$-algebras $A$ and $B$, has been carried out extensively during the last three decades (see \cite{ass, arc3, JK, jk11, JK-edin, jk-13, kr-1, kr-2, KS}). An important and useful development of this project has been the discovery of the connection between the structures of centers and ideals of $A \otimes^h B$ and $A \widehat\otimes B$ and those of $A$ and $B$, all of which was pioneered by the remarkable work \cite{ass} of Allen, Sinclair and Smith, wherein they study closed ideals of $A \otimes^h B$. This article aims at obtaining similar results for $A \otimes^\gamma B$, for $C^*$-algebras $A$ and $B$. A closer look reveals that a crucial ingredient that helped to establish the relationships mentioned above was the injectivity of $\otimes^h$ and a partial injectivity of $\widehat\otimes$ (see \Cref{ideals} below), along with an exactness type property (as in \Cref{kernel-phi-ot-psi}) exhibited by both tensor norms. It is known that $\otimes^\gamma$ is not injective; and, perhaps due to the lack of any partial injectivity result, not much is known about the algebraic structure of $A \otimes^\gamma B$. However, exploiting a work of Diestel et al. \cite{puglisi}, the second named author demonstrated in her Ph.D. thesis \cite{jain} that, when restricted to the tensor products of $C^*$-algebras, $\otimes^\gamma$ satisfies a better partial injectivity than $\widehat\otimes$, as is reproduced in Section 2 below. This turns out to have beautiful consequences in the study of algebraic structures of $A \otimes^\gamma B$, achieved primarily by employing a line of treatment similar to that of \cite{ass}. We present here a few of those consequences and are quite hopeful that we can deduce more. For any algebra $A$, let $\mathcal{M}(A)$ (resp., $\mathcal{M}_m(A)$) denote the set of maximal ideals (resp., maximal modular ideals) of $A$. In his first article to analyze spaces of ideals of $A \otimes^\gamma B$, Gelbaum (\cite{gelbaum1}) showed that if $A$ and $B$ are commutative Banach algebras, then there is a homeomorphism between $\mathcal{M}_m(A) \times \mathcal{M}_m(B)$ and $\mathcal{M}_m(A \otimes^\gamma B)$, when the spaces are equipped with $w^*$-topologies.
Then, in \cite{gelbaum3}, he replaced the condition of commutativity by the existence of a unit and obtained an injective map from $\mathcal{M}(A) \times \mathcal{M}(B)$ into $\mathcal{M}(A \otimes^\gamma B)$ which turned out to be closed and continuous with respect to the hull-kernel topology. Surjectivity is still an open question in this case. Almost simultaneously, Laursen (\cite{laur}) dropped the commutativity of one of the Banach algebras and established that the spaces $\mathcal{M}_m(A) \times \mathcal{M}_m(B)$ and $\mathcal{M}_m(A \otimes^\gamma B)$ are homeomorphic with respect to the hull-kernel topology. For arbitrary $C^*$-algebras $A$ and $B$ (not necessarily unital or commutative), based on the ideal structure of $A \otimes^\gamma B$ discussed above, we establish that there is a homeomorphism from $Id'(A) \times Id'(B)$ onto its image (which is also dense) in $Id'(A \otimes^\gamma B)$, with respect to the $\tau_w$-topology, where $Id'(A)$ denotes the set of proper closed ideals of $A$. Moreover, this map restricts to a homeomorphism from $\mathcal{M}_m(A) \times \mathcal{M}_m(B)$ (resp., $\mathcal{M}(A) \times \mathcal{M}(B)$) onto $\mathcal{M}_m(A \otimes^\gamma B)$ (resp., $\mathcal{M}(A \otimes^\gamma B)$) with respect to the hull-kernel topology. Here is a brief overview of the topics discussed in this article. As mentioned above, Section 2 is devoted to establishing a partial injectivity of $\otimes^\gamma$ when restricted to tensor products of $C^*$-algebras, which is used throughout the article. Section 3 is the soul of this article; it provides a thorough discussion of the ideal structure of $A \otimes^\gamma B$ in terms of ideals of the $C^*$-algebras $A$ and $B$. Among other things, we show that every closed ideal of $A \otimes^\gamma B$ contains a product ideal; that if $A$ or $B$ has finitely many closed ideals, then every closed ideal of $A \otimes^\gamma B$ is a sum of product ideals; and we identify minimal (resp., maximal and maximal modular) ideals of $A \otimes^\gamma B$ in terms of those of $A$ and $B$. In Section 4, we identify spaces of proper (resp., maximal and maximal modular) ideals of $A \otimes^\gamma B$ in terms of those of $A$ and $B$ from the perspective of the hull-kernel topology. Finally, Section 5 gives another application of the partial injectivity of $\otimes^\gamma$, namely an identification of the center of $A \otimes^\gamma B$ with $\mathcal{Z}(A) \otimes^\gamma \mathcal{Z}(B)$. \section{Injectivity of the Banach space projective norm on tensor products of $C^*$-algebras} Recall that a norm $\|\cdot \|_{\alpha}$ on the algebraic tensor product $A \otimes B$ of a pair of $C^*$-algebras $A$ and $B$ is said to be \begin{enumerate} \item a {\em cross norm} if $\|a \otimes b\|_\alpha = \|a\|\, \|b\|$ for all $a \in A$, $b \in B$, \item an {\em algebra norm} if $\|w\, z \|_{\alpha} \leq \| w \|_{\alpha} \, \|z\|_{\alpha} $ for all $w, z \in A \otimes B$, and \item a {\em tensor norm} if $\| \cdot \|_{\lambda} \leq \| \cdot \|_{\alpha} \leq \| \cdot \|_{\gamma}$, where $\lambda$ and $\gamma$ are the Banach space injective and projective norms, respectively. \end{enumerate} Clearly, $A \otimes^{\alpha} B$, the completion of $A \otimes B$ with respect to any algebra norm $\|\cdot\|_\alpha$, is a Banach algebra.
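For the reader's convenience, we recall (see \cite{ryan}) the explicit formulas for the two extreme norms: for $u \in A \otimes B$, \[ \|u\|_{\gamma} = \inf\Big\{ \sum_{i=1}^{n} \|a_i\|\, \|b_i\| \,:\, u = \sum_{i=1}^{n} a_i \otimes b_i \Big\} \quad \mbox{and} \quad \|u\|_{\lambda} = \sup\Big\{ \Big| \sum_{i=1}^{n} f(a_i)\, g(b_i) \Big| \,:\, f \in A^{*},\ g \in B^{*},\ \|f\| \leq 1,\ \|g\| \leq 1 \Big\}, \] where the value of the supremum does not depend on the representation $u = \sum_{i=1}^{n} a_i \otimes b_i$ chosen.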
From an algebraic point of view, among the well-analyzed tensor products in the literature are the $C^*$-minimal tensor product ($\otimes^{\min}$), the Haagerup tensor product ($\otimes^h$), the operator space projective tensor product ($\widehat\otimes$) and the Banach space projective tensor product ($\otimes^\gamma$). We refer the reader to \cite{ERbook, Tak} and \cite{ryan} for their definitions and essential properties. All these norms are cross algebra tensor norms and yield Banach algebras. In fact, for $C^*$-algebras $A$ and $B$, the canonical involution on $A \otimes B$ is not isometric \footnote{We follow the convention that the involution on a Banach $*$-algebra is an isometry.} with respect to $\otimes^h$ (see \cite{ass}); however, by the very definition of $\|\cdot\|_\gamma$, $A \otimes^\gamma B$ is a Banach $*$-algebra; and, by \cite{Kum01}, so is $A \widehat\otimes B$. The $C^*$-minimal tensor product is known to be injective and so is the Haagerup tensor product (see \cite{ass, ERbook}). On the other hand, neither the Banach space projective tensor product nor the operator space projective tensor product is injective. But in a few cases they behave well. For instance, Kumar (\cite{Kum01}) proved that the tensor product of ideals of $C^*$-algebras can still be embedded nicely. \begin{prop}(\cite[Theorem 5]{Kum01})\label{ideals} Let $A$ and $B$ be $C^*$-algebras, and $I$ and $J$ be closed ideals in $A$ and $B$, respectively. Then the identity map on $I \otimes J$ extends to an isometric algebra map from the Banach algebra $I \widehat\otimes J$ onto the product ideal $\overline{I \otimes J} \subseteq A \widehat\otimes B$. \end{prop} The situation with the Banach space projective tensor product of $C^*$-algebras is even better (\Cref{obp-injective} below), as was proved in the doctoral thesis \cite{jain} of the second named author. Given its usefulness, we include the details for the convenience of present and future reference. We first recall some concepts whose details can be found in \cite{puglisi} and the references therein. The {\it strong*-topology} on a Banach space $X$ is defined to be the locally convex topology generated by the seminorms $x \mapsto \|Tx\|$ for bounded linear maps $T$ from $X$ into Hilbert spaces. Also, an operator $T: A\rightarrow X$, where $A$ is a $C^*$-algebra and $X$ is a Banach space, is said to be {\it $p\text{-}C^*$-summing} for $p>0$ if there exists a constant $k$ such that for every finite sequence $(a_1,a_2,\dots, a_n)$ of self-adjoint elements in $A$, $$\left( \sum_{i=1}^n \|T(a_i)\|^p \right)^{1/p} \leq k \Bigg\| \left( \sum_{i=1}^n |a_i|^p \right)^{1/p} \Bigg\|, $$ where for $a\in A$, $|a| := (\frac{aa^*+ a^*a}{2})^{1/2}$. We first collect a few useful results. \begin{prop} \cite[$\S 3$]{puglisi}\label{2-summ-strong} For a $C^*$-algebra $A$ and a Banach space $X$, an operator $T:A \rightarrow X$ is 2-$C^*$-summing if and only if it is strong*-norm continuous. \end{prop} \begin{theorem}\cite[Proposition 2.1]{haag}\label{b3} Every bounded linear map $T:A\rightarrow B^*$, $A$ and $B$ being $C^*$-algebras, can be factored through a Hilbert space, that is, there exist a Hilbert space $H$ and bounded linear maps $u:A \rightarrow H $ and $v: H \rightarrow B^*$ such that $T = v\circ u$. \end{theorem} \begin{theorem}\cite[Theorem 3.2]{puglisi}\label{b2} Let $T:X\rightarrow Y$ be a bounded linear operator between Banach spaces $X$ and $Y$. Then $T$ is strong*-norm continuous if and only if $T$ factors through a Hilbert space.
\end{theorem} \begin{theorem}\cite[Theorem 3.6]{puglisi}\label{b1} Let $A$ be a $C^*$-algebra, $B$ be a $C^*$-subalgebra of $A$ and $Y$ be any Banach space. Then every $2\text{-}C^*$-summing operator $T:B\rightarrow Y$ extends to a norm preserving $2\text{-}C^*$-summing operator $\widetilde{T}: A \rightarrow Y$. \end{theorem} Now, with these results in hand, we are ready to prove our result regarding the injectivity of the Banach space projective norm in the $C^*$-setup. \begin{theorem} \label{obp-injective} Let $A_1$ and $B_1$ be $C^*$-subalgebras of $C^*$-algebras $A$ and $B$, respectively. Then the identity map on $A_1\otimes B_1$ extends to an isometric $*$-algebra map from $A_1\otimes^\gamma B_1$ onto the closed $*$-subalgebra $ \overline{A_1 \otimes B_1} \subseteq A\otimes^\gamma B$. \end{theorem} \begin{proof} By \cite[Corollary 2.12]{ryan}, $A_1 \otimes^\gamma B_1$ is a subspace of $A \otimes^\gamma B_1$ if and only if every bounded operator from $A_1$ into $B_1^*$ extends to an operator of the same norm from $A$ into $B_1^*$. Let $T$ be one such operator from $A_1$ into $B_1^*$. Then, by Theorem \ref{b3}, $T$ can be factored through a Hilbert space, so, by Theorem \ref{b2}, $T$ is strong*-norm continuous. By \Cref{2-summ-strong}, $T$ is $2\text{-}C^*$-summing; therefore, by Theorem \ref{b1}, $T$ extends to a norm preserving $2\text{-}C^*$-summing operator $\tilde{T}: A \rightarrow B_1^*$. Thus $A_1 \otimes^\gamma B_1$ is a closed subspace of $A \otimes^\gamma B_1$. Since $\otimes^\gamma$ is symmetric, it also follows that $A \otimes^\gamma B_1$ is a closed subspace of $A \otimes^\gamma B$. Composing the two inclusions, $A_1 \otimes^\gamma B_1$ embeds isometrically into $A \otimes^\gamma B$, as required. \end{proof} \section{Ideal structure of $A \otimes^\gamma B$} If $\alpha$ is either the Haagerup tensor product or the operator space projective tensor product, and $A$ and $B$ are $C^*$-algebras with $A$ topologically simple, then by \cite[Proposition 5.2]{ass}, and by \cite[Theorem 3.8]{JK-edin}, it is known that every closed ideal of the Banach algebra $A \otimes^\alpha B$ is a product ideal of the form $ A \otimes^\alpha J$ for some closed ideal $J$ in $B$. For $\otimes^{\min}$, we obtained, in \cite{GJ}, the following analogue of the above results: \begin{theorem}\cite{GJ}\label{simple-ideal} Let $A$ and $B$ be $C^*$-algebras where $A$ is topologically simple. If either $A$ is exact or $B$ is nuclear, then every closed ideal of the $C^*$-algebra $A \otimes^{\min} B$ is a product ideal of the form $ A \otimes^{\min} J$ for some closed ideal $J$ in $B$. \end{theorem} It turns out that the ideal structure of the Banach space projective tensor product $A \otimes^\gamma B$ also follows on similar lines as in \Cref{simple-ideal}, provided its ingredients can be established (see \Cref{obp-ideal} below). The steps involved are very much on the lines of \cite{ass} and \cite{JK-edin}, and we avoid mentioning them at every instance. Since $\| \cdot \|_{\min}$ and $\|\cdot\|_{{h}}$ are cross-norms and $\| \cdot \|_{\lambda} \leq \|\cdot \|_{\min} \leq \| \cdot \|_{{h}} \leq \| \cdot \|_{\gamma}$, where $\| \cdot \|_{\lambda}$ is the Banach space injective tensor norm, by the Remark on Page $97$ of \cite{haag}, we have the following crucial embeddings. \begin{prop}\label{obp-min}\cite{haag} Let $A$ and $B$ be $C^*$-algebras.
Then the identity map on $A \otimes B$ extends to a contractive injective $*$-homomorphism $i_{\min} : A \otimes^\gamma B \rightarrow A \otimes^{\min} B$ and injective homomorphisms $i_{h}: A \otimes^\gamma B \rightarrow A \otimes^h B$ and $j_{\min}: A \otimes^h B \rightarrow A \otimes^{\min} B$. Also, the diagram \begin{equation}\label{min-h-min} \xymatrix{ A \otimes^\gamma B \ar[dr]^{i_{\min}} \ar[rr]^{i_h}& & A \otimes^h B\ar[dl]^{j_{\min}} \\ & A \otimes^{\min} B & } \end{equation} commutes. \end{prop} For a closed ideal $I$ in $A \otimes^\gamma B$, let $I_{\min} := \overline{i_{\min}(I)} \subseteq A \otimes^{\min} B$. On the lines of \cite[Lemma 4.2]{ass}, Kumar and Rajpal \cite{kr-1} proved the following useful result. \begin{prop}\label{1-tensor-1}\cite{kr-1} Let $M$ and $N$ be von Neumann algebras and let $I$ be a closed ideal in $M \otimes^\gamma N$. If $1 \otimes 1 \in I_{\min} \subseteq M \otimes^{\min} N$, then $1 \otimes 1 \in I$, and, in particular, $I$ equals $M \otimes^\gamma N$. \end{prop} Analogous to \cite[Theorem 4.4]{ass}, we now prove a theorem that, along with \Cref{obp-injective}, turns out to be the main ingredient in the study of ideal structure of Banach space projective tensor product of $C^*$-algebras. \begin{theorem}\label{a-ot-b} Let $A$ and $B$ be $C^*$-algebras and let $I$ be a closed ideal in $A \otimes^\gamma B$. If an elementary tensor $a \otimes b \in I_{\min}$, then $a \otimes b \in I$. \end{theorem} \begin{proof} We first prove for $a, b \geq 0$. Suppose $a \otimes b \in I_{\min}$ and is not in $I$. By Hahn-Banach Theorem, there exists a $\varphi \in (A \otimes^\gamma B)^*$ such that $\varphi (I) = (0)$ and $\varphi (a \otimes b ) \neq 0$. It is well known that $(A \otimes^\gamma B)^*$ can be identified canonically with $B(A, B^*)$, the space of bounded linear maps from $A$ into $B^*$ - see \cite[$\S 2$, page 24]{ryan}. In particular, there exists a $\Phi \in B(A, B^*)$ such that $\varphi (x \otimes y) = \Phi(x)(y)$ for all $x \in A$ and $ y \in B$. Note that $\Phi^{**}: A^{**} \rightarrow B^{***}$ is $w^*$-$w^*$continuous and satisfies $\| \Phi^{**} \| = \|\Phi\| = \| \varphi \|$. Further, the association $A^{**} \otimes B^{**} \ni u \otimes v \mapsto \Phi^{**}(u)(v) \in \mathbb{C}$ extends linearly to a continuous functional on $A^{**} \otimes^\gamma B^{**}$, say, $\tilde{\varphi}$. Since $A \otimes^\gamma B \subseteq A^{**} \otimes^\gamma B^{**}$ (\cite[Corollary 2.14]{ryan} or \Cref{obp-injective}) , $\tilde{\varphi}$ extends $\varphi$ and also $\| \tilde{\varphi}\| = \| \varphi\|$. Now, consider the enveloping von Neumann algebras $M := A^{**}, N := B^{**}$, and let $ \tilde{I}$ be the closed ideal in $M \otimes^\gamma N$ generated by $I$. We claim that $\tilde{\varphi}(\tilde{I}) = (0)$ as well. For this, it is enough to show that \begin{equation}\label{verify} \tilde{\varphi} ( (u \otimes s) z (v \otimes t)) = 0 \end{equation} $ \mathrm{for\ all}\ z \in I, u, v \in M\ \mathrm{ and} \ s, t \in N$. To begin with, let $z \in I \subseteq A \otimes^\gamma B$, $u \in M, v \in A$ and $s, t \in B$. By \cite[Proposition 2.8]{ryan}, there exist bounded sequences $\{a_i\} \subset A$ and $\{ b_i\} \subset B$ such that $z = \sum_{i=1}^\infty a_i \otimes b_i$. For each $n \geq 1$, consider $\omega_n \in M^*$ given by $$\omega_n (x) = \sum_{i = 1}^n \Phi^{**} (x a_i v) ( s b_i t),\ x \in M.$$ Since $\Phi^{**}$ is $w^*$-$w^*$ continuous, $\omega_n$ is $w^*$-continuous, i.e., $\omega_n \in M_*$, the predual of $M$, for all $n \geq 1$. 
Also, for $m < n$, we observe that \begin{eqnarray*} \|\omega_n (x) - \omega_m(x)\| & = & \left\| \tilde{\varphi} \left((x \otimes s) \left(\sum_{i= m+1}^n a_i \otimes b_i\right)(v \otimes t)\right)\right\| \\ & \leq & \|\varphi\| \|x\| \|s\| \left\| \sum_{i= m+1}^n a_i \otimes b_i\right\|_\gamma \|v\| \|t\| \end{eqnarray*} for all $x \in M$. In particular, $\{\omega_n\}$ is a Cauchy sequence in $M^*$ with a limit, say, $\omega \in M^*$. Given the actions of $\omega_n$'s on $M$, the obvious candidate for $\omega$ is given by $\omega (x) = \sum_{i = 1}^\infty \Phi^{**} (x a_i v) ( s b_i t)$ for all $ x \in M$. In particular, $\omega \in M_*$. Since, $\varphi(I) = 0$ and $z \in I$, we easily see that $ \sum_{i= 1}^\infty{\varphi} (x a_iv \otimes sb_i t) = 0$ for all $ x \in A$. This implies that $A \subseteq \ker(\omega)$ and, since $\omega$ is $w^*$- (equivalently, $\sigma$-weakly) continuous, by von Neumann's Bicommutant Theorem, we obtain $M = A^{**} = \overline{A}^{w^*} = \ker (\omega)$, i.e., \Cref{verify} holds for all $u \in M, v \in A$ and $s, t \in B$. Repeating the argument by letting $v, s$ and $t$ vary successively, we conclude that \Cref{verify} holds $ \mathrm{for\ all}\ z \in I, u, v \in M\ \mathrm{ and} \ s, t \in N$. With this observation at our disposal, we now show that $a \otimes b$ can be approximated appropriately by elements of $\tilde{I}$ and deduce that it is annihilated by $\tilde{\varphi}$ to obtain a contradiction. For each $\epsilon, \nu > 0$, let $p_\epsilon, q_\nu$ be the spectral projections in $M$ and $N$ associated to $a$ and $b$ for the closed intervals $[\epsilon, \infty)$ and $[\nu, \infty)$, respectively. Then $p_\epsilon M p_\epsilon $ and $q_\nu N q_\nu$ are von Neumann subalgebras of $M$ and $N$ with units $p_\epsilon$ and $q_\nu$, respectively. In view of the embedding given in \Cref{obp-injective}, consider the closed ideal $\tilde{I}_{\epsilon, \nu}:= \tilde{I} \cap (p_\epsilon M p_\epsilon \otimes^\gamma q_\nu N q_\nu)$. We claim that ${(\tilde{I}_{\epsilon, \nu})}_{\min} \subseteq p_\epsilon M p_\epsilon \otimes^{\min} q_\nu N q_\nu$ contains the unit $p_\epsilon \otimes q_\nu$. This will then, by \Cref{1-tensor-1}, yield $\tilde{I}_{\epsilon, \nu} = p_\epsilon M p_\epsilon \otimes^\gamma q_\nu N q_\nu$ implying that $p_\epsilon a \otimes q_\nu b \in \tilde{I}_{\epsilon, \nu}$. In particular, $p_\epsilon a \otimes q_\nu b \in \tilde{I}$ for all $\epsilon, \nu > 0$. And since $ p_\epsilon a \stackrel{w^*}{\rightarrow} a$ in $M$ and $q_\nu b \stackrel{w^*}{\rightarrow} b$ in $N$, we will obtain $\tilde{\varphi} (a \otimes b) = \lim_{\epsilon \rightarrow 0} \lim_{\nu \rightarrow 0} \tilde{\varphi} (p_\epsilon a \otimes q_\nu b) = 0$, giving the desired contradiction. Towards the claim, note that, by bounded functional calculus, $p_\epsilon a $ and $q_\nu b$ are invertible in $p_\epsilon M p_\epsilon $ and $q_\nu N q_\nu$, respectively. 
If $\{ z_n\} $ is a sequence in $I$ such that $\{i(z_n)\}$ converges to $a \otimes b$ in $I_{\min} \subseteq A \otimes^{\min} B \subset M \otimes^{\min} N$, then, again by \Cref{obp-injective}, the sequence $\{(p_\epsilon \otimes q_\nu) z_n (p_\epsilon \otimes q_\nu)\}$ is contained in ${\tilde{I}_{\epsilon, \nu}}$ and $ j((p_\epsilon \otimes q_\nu) z_n (p_\epsilon \otimes q_\nu)) \rightarrow p_\epsilon a \otimes q_\nu b$ in $ p_\epsilon M p_\epsilon \otimes^{\min} q_\nu N q_\nu \subseteq M \otimes^{\min} N$, where $j$ is the injective homomorphism from $ p_\epsilon M p_\epsilon \otimes^\gamma q_\nu N q_\nu$ into $ p_\epsilon M p_\epsilon \otimes^{\min} q_\nu N q_\nu$ guaranteed by \Cref{obp-min}. This shows that the invertible element $ p_\epsilon a \otimes q_\nu b$ belongs to ${(\tilde{I}_{\epsilon, \nu})}_{\min}$ and hence the unit $ p_\epsilon \otimes q_\nu$ also belongs to $ {(\tilde{I}_{\epsilon, \nu})}_{\min}$. Finally, for arbitrary $a$ and $b$, if $ a \otimes b \in I_{\min}$ then, using above positive case, on the lines of last part of proof of \cite[Theorem 4.4]{ass}, it can be shown that $ a\otimes b \in I$. \end{proof} \begin{cor}\label{a-ot-b-h} Let $A$ and $B$ be $C^*$-algebras and let $I$ be a closed ideal in $A \otimes^\gamma B$. If an elementary tensor $a \otimes b \in I_{h}:=\overline{i_h(I)}^h \subset A \otimes^h B$, then $a \otimes b \in I$. \end{cor} \begin{proof} By \Cref{min-h-min}, we have $\overline{j_{\min}(I_h)}^{\min} = I_{\min}$ in $A \otimes^{\min} B$. Since $a \otimes b \in I_h$, $a \otimes b = j_{\min} (a \otimes b) \in I_{\min}$ as well and, therefore, by \Cref{a-ot-b}, $a \otimes b \in I$. \end{proof} \Cref{a-ot-b} also allows us to deduce the following, which will be crucial in the proof of \Cref{obp-ideal}. \begin{cor}\label{elementary-tensor} Let $A$ and $B$ be $C^*$-algebras and $I$ be a non-zero closed ideal in $A \otimes^\gamma B$. Then $I$ contains a non-zero elementary tensor and a product ideal. \end{cor} \begin{proof} From the Diagram \ref{obp-min}, we see that $I_{\min}$ is a non-zero closed ideal in $A \otimes^{\min} B$. So, by \cite[Proposition 4.5]{ass}, $I_{\min}$ contains a non-zero elementary tensor, say, $a \otimes b$, and then, by \Cref{a-ot-b}, $a \otimes b \in I$. Also, if $J$ and $K$ are the closed ideals of $A$ and $B$, generated by $a$ and $b$, respectively, then the product ideal $J \otimes^\gamma K$ is contained in $I$. \end{proof} This immediately yields the following analogue of \cite[Corollary 4.7]{ass}. \begin{cor}\label{obp-faithful} Let $A$ and $B$ be $C^*$-algebras and $D$ be a Banach algebra. If $\pi : A \otimes^\gamma B \rightarrow D$ is a bounded homomorphism whose restriction to $A \otimes B$ is faithful, then so is $\pi$. \end{cor} \begin{prop}\label{sum-obp} Let $A$ and $B$ be $C^*$-algebras. Then a finite sum of closed product ideals in $A \otimes^\gamma B$ is closed. \end{prop} \begin{proof} It is enough to consider the sum of two product ideals. Let $J_i, K_i$ be closed ideals in $A$ and $B$, respectively, for $i =1, 2$. By \cite[Proposition 2.4]{dixon}, it is enough to show that $J_1 \otimes^\gamma K_1$ has a bounded approximate identity. Since every closed ideal in a $C^*$-algebra possesses a bounded approximate identity, by \cite[Lemma 3.1]{JK-edin}, $J_1 \otimes^\gamma K_1$ possesses a bounded approximate identity. \end{proof} From \Cref{obp-injective}, \Cref{sum-obp} and the fact that a finite sum of closed ideals in a $C^*$-algebra is closed, we easily deduce the following. 
\begin{cor}\label{finite-sum} Let $\{J_i\}_{i = 1}^n$ and $\{ K_j\}_{j=1}^m$ be closed ideals in $C^*$-algebras $A$ and $B$, respectively. Then, \begin{enumerate} \item $(\sum_i J_i) \otimes^\gamma B = \sum_i J_i \otimes^\gamma B$, and \item $A \otimes^\gamma (\sum_j K_j) = \sum_j A \otimes^\gamma K_j$. \end{enumerate} \end{cor} Recall that a map $\pi : X \rightarrow Y$ between two Banach spaces is said to be a {\em quotient map} if it maps the open unit ball of $X$ onto that of $Y$. In particular, a quotient map is surjective. For two quotient maps $\varphi_i: X_i \rightarrow Y_i$, $\varphi_1 \otimes \varphi_2$ extends to a quotient map $\varphi_1 \otimes^\gamma \varphi_2 : X_1 \otimes^\gamma X_2 \rightarrow Y_1 \otimes^\gamma Y_2$ - see \cite[Proposition 2.5]{ryan}. Anologous to \cite[Theorem 2.4]{ass}, \cite[Proposition 7.1.7]{ERbook} and \cite[Proposition 3.3]{kr-2}, we obtain the following essential result: \begin{prop}\label{kernel-phi-ot-psi} Let $X_i$ and $Y_i$ be Banach spaces and $\varphi_i : X_i \rightarrow Y_i$, $i = 1, 2$ be quotient maps. If $E_1$ and $E_2$ are closed subspaces of $Y_1$ and $Y_2$, respectively, then \[ (\varphi_1 \otimes^\gamma \varphi_2)^{-1} \Big(\overline{E_1 \otimes E_2 }\Big) =\overline{ \ker(\varphi_1)\otimes X_2 + \varphi_1^{-1}(E_1) \otimes \varphi_2^{-1}(E_2) + X_1 \otimes \ker(\varphi_2)}. \] In particular, we have \[ \ker (\varphi_1 \otimes^\gamma \varphi_2) = \overline{\ker(\varphi_1)\otimes X_2 + X_1 \otimes \ker(\varphi_2)}. \] \end{prop} \begin{proof} Set $Z = \overline{ \ker(\varphi_1)\otimes X_2 + \varphi_1^{-1}(E_1) \otimes \varphi_2^{-1}(E_2) + X_1 \otimes \ker(\varphi_2)}$. Recall that, for any subspace $W$ of a Banach space $X$, $W^{\perp \perp} = \overline{W}$, where $W^\perp := \{ \Phi \in X^*: \Phi(W) = (0)\} $ (Bipolar Theorem). So, it suffices to show that $\Big( (\varphi_1 \otimes^\gamma \varphi_2)^{-1} \big(\, \overline{E_1 \otimes E_2 }\, \big) \Big)^{\perp}= Z^{\perp}$. Clearly, $ \Big( (\varphi_1 \otimes^\gamma \varphi_2)^{-1} \big(\, \overline{E_1 \otimes E_2 }\, \big) \Big)^{\perp} \subseteq Z^{\perp} $. For the reverse inclusion, let $f \in Z^\perp$. Since $(X_1 \otimes^\gamma X_2)^*$ can be identified with the space of bounded bilinear forms on $X_1 \times X_2$ (\cite[$\S 2.2$]{ryan}), there exists a bounded bilinear map $\tilde{f}: X_1 \times X_2 \to \mathbb{C}$ such that $f(a_1 \otimes a_2) = \tilde{f}(a_1,a_2)$ for $a_i \in X_i$. Define $g: Y_1 \times Y_2 \to \mathbb{C}$ as $g(b_1,b_2) = \tilde{f}(a_1,a_2)$, where $\varphi_i(a_i) = b_i, i =1,2$. Since $f|_Z =0$, $g$ is well defined, and it is also a bounded bilinear map. Thus $g$ can be identified with a unique element in $ (Y_1 \otimes^\gamma Y_2)^*$, say, $\tilde{g}$. It can be seen that $ f= \tilde{g} \circ (\varphi_1 \otimes^\gamma \varphi_2).$ Now, take any $x$ in $(\varphi_1 \otimes^\gamma \varphi_2)^{-1}(\, \overline{E_1 \otimes E_2}\, )$. Then, by \cite[Proposition 2.8]{ryan}, there exist bounded sequences $\{r_n\}$ and $\{s_n\}$ in $E_1$ and $E_2$, respectively, such that $$ (\varphi_1 \otimes^\gamma \varphi_2)(x) = \sum_{n=1}^{\infty} r_n \otimes s_n.$$ By surjectivity of $\varphi_i$'s, for each $n \in \mathbb{N}$, fix $x_n \in \varphi_1^{-1}(r_n)$ and $y_n \in \varphi_2^{-1}(s_n)$. 
Then, $$f(x) = \tilde{g}\Big(\sum_n r_n \otimes s_n\Big) = \sum_n \tilde{g}\big(r_n \otimes s_n\big) = \sum_n f( x_n \otimes y_n) = 0, $$ which implies that $ Z^{\perp} \subseteq \Big( (\varphi_1 \otimes^\gamma \varphi_2)^{-1} \big(\, \overline{E_1 \otimes E_2 }\, \big) \Big)^{\perp} $. \end{proof} We'll have instances ahead to appeal to the following useful consequence, wherein $i_h$ is as in \Cref{obp-min}. \begin{cor}\label{h-gamma} Let $I$ and $J$ be closed ideals in $C^*$-algebras $A$ and $B$, respectively. Then, we have \[ i_h(A \otimes^\gamma J + I \otimes^\gamma B) = (A \otimes^h J + I \otimes^h B) \cap i_h(A \otimes^\gamma B). \] \end{cor} \begin{proof} By \Cref{kernel-phi-ot-psi}, we have $\ker(\pi_I \otimes^\gamma \pi_J) = A \otimes^\gamma J + I \otimes^\gamma B$ and also, by \cite[Corollary 2.6]{ass}, $\ker(\pi_I \otimes^h \pi_J) = A \otimes^h J + I \otimes^h B$. Now, the diagram \[ \xymatrix{ A \otimes^\gamma B \ar@{^{(}->}[r]^{i_h} \ar[d]^{\pi_I \otimes^\gamma \pi_J} & A \otimes^h B\ar[d]^{\pi_I \otimes^h \pi_J} \\ A/I \otimes^\gamma B/J \ar@{^{(}->}[r]^{i_h} & A/I \otimes^h B/J } \] is easily seen to be commutative and we are done. \end{proof} The following folklore result will be required in the proof of \Cref{obp-ideal}. (Note that, a part of it also follows from \Cref{kernel-phi-ot-psi}.) \begin{lemma}\label{obp-kernel} Let $A$ and $B$ be $C^*$-algebras, $J$ be a closed ideal in $B$ and $\pi : B \rightarrow B/J$ be the natural quotient map. Then, $\ker (\mathrm{Id}\otimes^\gamma \pi) = A \otimes^\gamma J$ and $ (\mathrm{Id}\otimes^\gamma \pi) (Z)$ is closed for any closed subspace $Z$ in $A \otimes^\gamma B$. Additionally, if $Z$ contains $A \otimes^\gamma J$, then $$ Z = (\mathrm{Id}\otimes^\gamma \pi)^{-1}((\mathrm{Id}\otimes^\gamma \pi)(Z)). $$ \end{lemma} \begin{proof} Clearly, $A \otimes^\gamma J \subseteq \ker (\mathrm{Id}\otimes^\gamma \pi)$, where the inclusion is meaningful because of \Cref{obp-injective}. Let $z \in \ker (\mathrm{Id}\otimes^\gamma \pi)$ and $\epsilon > 0$. Then, by \cite[Proposition 2.8]{ryan}, there exist bounded sequences $\{ a_n\} \subset A$ and $\{b_n +J \} \subset B/J$ such that $ \sum_n a_n \otimes (b_n + J) = 0 + J$ and $\sum_n \|a_n\| \|b_n + J\| < \epsilon$. Choose a sequence $\{ x_n\} \subset J$ such that $\sum_n \|a_n\| \|b_n - x_n \| < \epsilon$. Then, it is easily seen that $\sum_n a_n \otimes x_n$ is absosutely convergent in $A \otimes^\gamma I$ and that $\|\sum_n a_n \otimes b_n - \sum_n a_n \otimes x_n \|_{\gamma} < \epsilon$, which implies that $A \otimes^\gamma I$ is dense in $ \ker (\mathrm{Id}\otimes^\gamma \pi)$. \vspace*{1mm} Finally, by \cite[Proposition 2.3]{ryan}, $\|\mathrm{Id}\otimes^\gamma \pi \| = \| \mathrm{Id}\|\, \| \pi\| = 1$ so that $\mathrm{Id}\otimes^\gamma \pi $ is a contraction and hence $(\mathrm{Id}\otimes^\gamma \pi) (Z)$ is closed. \end{proof} Note that \Cref{obp-kernel} holds for Banach spaces and closed subspace as well. The same proof works in this generality. With this, all ingredients are available to adapt the steps of \cite[Theorem 3.1]{GJ} to obtain a proof of the following ideal structure: \begin{theorem}\label{obp-ideal} Let $A$ and $B$ be $C^*$-algebras where $A$ is topologically simple. Then every closed ideal of the Banach $*$-algebra $A \otimes^\gamma B$ is a product ideal of the form $ A \otimes^\gamma J$ for some closed ideal $J$ in $B$. In particular, every closed ideal in $A \otimes^\gamma B$ is a $*$-ideal. \end{theorem} \begin{proof} We will appeal to the usual Zorn's Lemma approach. 
Let $I$ be a non-zero closed ideal in $A \otimes^\gamma B$. Consider the collection $$ \mathcal{F} := \{ J \subseteq B: J \ \mathrm{is\ a\ closed\ ideal\ in}\ B\ \mathrm{and}\ A \otimes^\gamma J \subseteq I\},$$ where the inclusion $A \otimes^\gamma J \subseteq I$ makes sense by \Cref{obp-injective}. By \Cref{elementary-tensor}, $I$ contains a non-zero elementary tensor, say, $a \otimes b$. If $K$ and $J$ are the non-zero closed ideals in $A$ and $B$ generated by $a$ and $b$, respectively, then by simplicity of $A$, we have $K = A$ and $A \otimes^\gamma J \subseteq I$. In particular, $\mathcal{F} \neq \emptyset$. We saw in \Cref{finite-sum} that $A \otimes^\gamma (\sum_i J_i) = \sum_i (A \otimes^\gamma J_i)$ for any finite collection of closed ideals $\{J_i\}$ in $B$. So, with respect to the partial order given by set inclusion, every chain $\{ J_i : i \in \Lambda\}$ in $\mathcal{F}$ has an upper bound in $\mathcal{F}$, namely, the closure of the ideal $\big\{\sum_{\mathrm{finite}} x_i : x_i \in J_i\big\}$, implying thereby that there exists a maximal element, say $J$, in $\mathcal{F}$. The obvious thing to do now is to try to show that $A \otimes^\gamma J = I$. Consider the canonical map $\mathrm{Id} \otimes^\gamma \pi : A \otimes^\gamma B \rightarrow A \otimes^\gamma (B / J)$. By \Cref{obp-kernel}, we have $\ker(\mathrm{Id}\otimes^\gamma \pi) = A \otimes^\gamma J$ and that $\widetilde{I}:=(\mathrm{Id} \otimes^\gamma \pi)(I)$ is a closed ideal in $A \otimes^\gamma (B / J)$. It now suffices to show that $\widetilde{I} = (0)$. If $\widetilde{I} \neq (0)$, then, again by \Cref{elementary-tensor}, $\widetilde{I}$ contains a non-zero elementary tensor, say, $a \otimes (b +J)$. Observe that $b \notin J$ and $$a \otimes b \in (\mathrm{Id}\otimes^\gamma \pi)^{-1}(a \otimes (b + J)) \in (\mathrm{Id}\otimes^\gamma \pi)^{-1} \big ((\mathrm{Id}\otimes^\gamma \pi) (I)\big) = I $$ by \Cref{obp-kernel}. Let $K$ denote the closed ideal in $B$ generated by $b$ and $J$. Note that $J \subsetneq K$. Since $A$ is simple, it equals the closed ideal generated by $a$ and we obtain $A \otimes^\gamma K \subseteq I$, i.e., $K \in \mathcal{F}$ contradicting the maximality of $J$ in $\mathcal{F}$. \end{proof} \noindent In view of Theorems \ref{obp-injective}, \ref{obp-ideal} and \Cref{obp-kernel}, we obtain the following analogue of \cite[Corollary 4.21]{Tak}, \cite[Theorem 5.1]{ass} and of \cite[Theorem 3.7]{JK-edin}. \begin{cor}\label{obp-simple} Let $A$ and $B$ be $C^*$-algebras. Then the Banach $*$-algebra $A \otimes^\gamma B$ is topologically simple if and only if $A$ and $B$ are both topologically simple. \end{cor} If $A$ contains only finitely many closed ideals and $\alpha$ is either the Haagerup or the operator space projective tensor product, then every closed ideal in the Banach algebra $A \otimes^{\alpha} B$ is a finite sum of product ideals, - see \cite{ass, kr-2}. We now make another use of \Cref{obp-ideal} to prove its analogue for $A \otimes^\gamma B$. We'll use the following useful observation made in the proof of \cite[Theorem 5.3]{ass}. \begin{lemma}\label{ann-result} Let $A$ and $B$ be $C^*$-algebras and $I$ be a simple closed ideal in $A$. If $K$ is a closed ideal in the Banach algebra $A \otimes^h B$, then $K \cap (I \otimes^h B) = I \otimes^h J$ for some closed ideal $J$ in $ B$, and, \[ K \subseteq A \otimes^h J + M \otimes^h B, \] where $M$ is the closed ideal $ann(I) :=\{ x \in A: x I = I x = (0)\}$. 
\end{lemma} \begin{theorem}\label{ideals-finite} Let $A$ and $B$ be $C^*$-algebras and suppose $A$ contains only finitely many closed ideals. Then every closed ideal in the Banach $*$-algebra $A \otimes^\gamma B$ is a finite sum of product ideals. \end{theorem} \begin{proof} The proof is given by induction on the number of closed ideals in $A$, call it $\nu(A)$. If $\nu(A) = 2$, then we are done by \Cref{obp-ideal}. Let $n > 2$ and suppose that the assertion holds for all $C^*$-algebras with less than $ n$ closed ideals, and let $A$ be a $C^*$-algebra with $\nu(A) = n$. Pick a minimal non-zero closed ideal, say $I$, in $A$, which is clearly simple. Let $K$ be a closed ideal in $A \otimes^\gamma B$, then $I \otimes^\gamma B \subseteq A \otimes^\gamma B$ by \Cref{obp-injective}, so that $K\cap (I \otimes^\gamma B)$ is a closed ideal in $I \otimes^\gamma B$. By \Cref{obp-ideal}, it is then equal to $I \otimes^\gamma J$ for some closed ideal $J$ in $B$. Consider the closed ideal $K_h:=\overline{i_h(K)}^h$ in the Banach algebra $ A \otimes^h B$ (where $i_h$ is as in \Cref{obp-min}). By \Cref{ann-result}, $K_h \cap (I \otimes^h B) = I \otimes^h J_1$ and $K_h \subseteq A \otimes^h J_1 + M\otimes^h B$ for some closed ideal $J_1$ in $B$, where $M = ann(I)$. We wish to show, in fact, that $ K \subseteq A \otimes^\gamma J + M \otimes^\gamma B$ as well. Note that $K \subseteq i_h^{-1}(K_h) \subseteq i_h^{-1}(A \otimes^h J_1 + M \otimes^h B)$ and, by \Cref{h-gamma}, $i_h^{-1}(A \otimes^h J_1 + M \otimes^h B) = A \otimes^\gamma J_1 + M \otimes^\gamma B$. So, it suffices to show that $J_1 \subset J$. Note that, if $y \in J_1$, then $x \otimes y \in I \otimes^h J_1 \subset K_h $ for any fixed $0 \neq x \in I$, so that, by \Cref{a-ot-b-h}, $x \otimes y \in K$ implying further that $ x \otimes y \in K \cap (I \otimes^\gamma B) = I \otimes^\gamma J$. Choose a $\varphi \in A^*$ such that $\varphi(x) \neq 0$, then $R_\varphi(x\otimes y) = \varphi(x) y \in J$, where $R_{\varphi}:A \otimes B \rightarrow B$ is the right slice map given by $ R_{\varphi}\left(\sum_1^n a_i \otimes b_i\right)=\sum_{1}^{n} \varphi (a_i) b_i$. Hence $y \in J$. We now claim that $K \cap (A \otimes^\gamma J +M \otimes^\gamma B)=K\cap (A \otimes^\gamma J)+ K \cap (M \otimes^\gamma B)$ and show that each of the two closed ideals appearing in the sum on the right hand side, by induction hypothesis, are finite sums of closed ideals, which will then complete the proof. We first prove that $L := K \cap (A \otimes^\gamma J)$ is a finite sum of closed ideals. Clearly $L$ contains $I\otimes^\gamma J$. Corresponding to the complete quotient map $\pi_I: A \to A/I$, we have a quotient map $\pi_I \otimes^\gamma Id: A \otimes^\gamma J \to A/I \otimes^\gamma J$. By \Cref{obp-kernel}, $\ker (\pi_I \otimes^\gamma Id) = I \otimes^\gamma J$ and $(\pi_I \otimes^\gamma Id)(L)$ is a closed ideal in $A/I \otimes^\gamma J$. Since $\nu (A/I )\leq n-1$, by induction hypothesis, $(\pi_I \otimes^\gamma Id)(L)= \sum_{r=1}^k I_r \otimes^\gamma J_r$, where $I_r$ and $J_r$ are closed ideals in $A/I $ and $J$, respectively. Thus, by \Cref{obp-kernel} and \Cref{kernel-phi-ot-psi}, $ L = \sum_{r=1}^k \pi_I^{-1}(I_r)\otimes^\gamma J_r + I \otimes^\gamma J$. Further, since $M$ cannot contain $I$, we have $\nu(M)\leq n - 1$. So, by induction hypothesis, the closed ideal $K \cap (M \otimes^\gamma B)$ is a finite sum of product ideals. 
Finally, it is easy to see that $ K\cap (A \otimes^\gamma J)+ K \cap (M \otimes^\gamma B) \subseteq K \cap (A \otimes^\gamma J + M \otimes^\gamma B) $. Let $z\in K \cap (A \otimes^\gamma J + M \otimes^\gamma B)$. By \cite[Proposition 4.11]{GJ}, the closed ideal $A \otimes^\gamma J + M \otimes^\gamma B$ possesses a quasi-central approximate identity, say $\{ e_{\lambda}\}$, which as in \cite[Lemma 3.3]{ass}, can be taken to be of the form $e_{\lambda} = f_{\lambda} + g_{\lambda} - f_{\lambda} g_{\lambda}$ for some quasi-central approximate identities $\{f_{\lambda}\}$ and $\{g_{\lambda}\}$ in $A \otimes^\gamma J$ and $M \otimes^\gamma B$, respectively. Then, $e_{\lambda} z \rightarrow z$ and $ze_{\lambda} \in K\cap (A \otimes^\gamma J)+ K \cap (M \otimes^\gamma B)$ for every $\lambda$. This implies that $K\cap (A \otimes^\gamma J)+ K \cap (M \otimes^\gamma B)$ is dense in $K \cap (A \otimes^\gamma J + M \otimes^\gamma B) $ and, by \Cref{sum-obp}, we know that $K\cap (A \otimes^\gamma J)+ K \cap (M \otimes^\gamma B)$, being a finite sum of product ideals, is closed. \end{proof} Based on above discussion, we easily deduce the following: \begin{cor}\label{consequences} Let $A$ and $B$ be $C^*$-algebras and suppose $A$ contains only finitely many closed ideals. Then, the following hold: \begin{enumerate} \item A finite sum of closed ideals in $A \otimes^\gamma B$ is also a closed ideal. \item Every closed ideal of $A \otimes^\gamma B$ contains a bounded approximate unit. \item Every closed ideal of $A \otimes^\gamma B$ is $*$-closed. \end{enumerate} \end{cor} \subsection*{Some Examples} We can now reap some immediate fruits of \Cref{ideals-finite} and \Cref{consequences}. \begin{enumerate} \item For any separable Hilbert space $H$, analogous to the structure of closed ideals of $B(H) \otimes^{\alpha} B(H)$ for $\otimes^{\alpha} = \otimes^h$ and $\widehat\otimes$ (see \cite{ass,jk11}), $B(H)\otimes^\gamma B(H)$ contains only four non-trivial closed ideals, namely, $ B(H) \otimes^\gamma K(H) + K(H) \otimes^\gamma B(H), B(H) \otimes^\gamma K(H), K(H) \otimes^\gamma B(H)$ and $ K(H) \otimes^\gamma K(H)$. In particular, $B(H) \otimes^\gamma B(H)$ has a unique maximal ideal, namely, $B(H) \otimes^\gamma K(H) + K(H) \otimes^\gamma B(H) $. \item For a locally compact Hausdorff space $X$ and a separable Hilbert space $H$, the closed ideals of $B(H) \otimes^\gamma C_0(X)$ are given by $\sum_{i=1}^n I_i\otimes^\gamma I(E_i)$, for closed subsets $E_i$ of $X$, where $I_i=K(H) $ or $B(H)$, and $$I(E_i) := \{ f\in C_0(X): f(x)=0 \ \text{for all} \,x\in E_i\}.$$ Note that, by \Cref{maximal-ideals}, $B(H) \otimes^\gamma I(\{x\}) + K(H) \otimes^\gamma C_0(X)$ is a maximal ideal in $B(H) \otimes^\gamma C_0(X)$ for each $x \in C_0(X)$. \end{enumerate} \subsection{Minimal and maximal ideals of $A \otimes^\gamma B$} The structure of closed minimal ideals of $A \otimes^\gamma B$ turns out to be an immediate consequence of \Cref{elementary-tensor}. \begin{prop} Let $A$ and $B$ be $C^*$-algebras. Then, a closed ideal $J$ in $A \otimes^\gamma B$ is minimal if and only if it is a product ideal of the form $J = K \otimes^\gamma L$ for some minimal closed ideals $K$ and $L$ in $A$ and $B$, respectively. \end{prop} The proof of \cite[Proposition 3.9]{JK-edin} works verbatim for above identification. \vspace*{2mm} In order to analyze the structure of maximal ideals, we will use the concept of {\em Wiener property} for Banach $*$-algebras. 
Recall that a Banach $*$-algebra $A$ said to have the Wiener property if every proper closed ideal of $A$ is annihilated by some irreducible $*$-representation of $A$ on some Hilbert space. It is well known that every $C^*$-algebra has Wiener property. \begin{lemma}\label{wiener} Let $A$ and $B$ be $C^*$-algebras. Then, the Banach $\ast$-algebra $A\otimes^\gamma B$ has the Wiener property. \end{lemma} \begin{proof} Consider a proper closed two-sided ideal $J$ of $A\otimes^\gamma B$. By Theorem \ref{a-ot-b}, $J_{\min}$ is also a proper closed two-sided ideal of the $C^*$-algebra $A\otimes^{\min} B$. Since every $C^*$-algebra has the Wiener property, $J_{\min}$ is annihilated by an irreducible $*$-representation, say, $\pi:A\otimes^{\min} B \rightarrow B(H)$. So, we have a $*$-representation $\hat{\pi}:= \pi \circ i$ of $A\otimes^\gamma B$ on $H$ with $\hat{\pi}(J)=\{0\}$, where $i : A \otimes^\gamma B \rightarrow A \otimes^{\min} B$ is the canonical injective $*$-homomorphism as in \Cref{obp-min}. Also, the relation $\hat{\pi}(A\otimes B) = \pi (A\otimes B)$ yields \[ \hat{\pi}(A\otimes^\gamma B)^{\prime} \subseteq \hat{\pi}(A\otimes B)' = \pi(A\otimes B)^{\prime} = \pi(A\otimes^{\min} B)^{\prime}= \mathbb{C}I. \] Thus, $\hat{\pi}$ is irreducible and $A\otimes^\gamma B$ has Wiener property. \end{proof} \begin{lemma}\label{irr} Let $A$ and $B$ be $C^*$-algebras and $\pi$ be an irreducible $*$-representation of $A\otimes^\gamma B$ on a Hilbert space $H$. Then there exist $*$-representations $\pi_1$ and $\pi_2$ of $A$ and $B$, respectively, on $H$ with commuting ranges such that $$ \pi(a\otimes b) = \pi_1(a) \pi_2(b) \,\,\, \text{for all} \,\, a\in A, b\in B.$$ Moreover, $\pi_1$ and $\pi_2$ are both factor representations. \end{lemma} \begin{proof} Since $\pi$ is a $*$-representation of $A\otimes B$, by \cite[Lemma IV.4.1]{Tak}, there exist $*$-representations $\pi_1$ and $\pi_2$ of $A$ and $B$ on $H$ with commuting ranges such that $$ \pi(a\otimes b) = \pi_1 (a) \pi_2 (b)\,\, \text{for all} \,a\in A,\,b\in B. $$ Now, $\pi(A\otimes B)= \pi_1(A) \pi_2(B)$, so that $ \pi(A\otimes^\gamma B) \subseteq \overline{\pi_1(A) \pi_2(B)}$. Irreducibility of $\pi$ gives $$ \big(\pi_1(A) \pi_2(B)\big)^{\prime}=\Big( \overline{\pi_1(A) \pi_2(B)}\Big)^{\prime} \subseteq \pi(A \otimes^\gamma B)^{\prime} = \mathbb{C} I. $$ If $ M := \pi_1(A)^{\prime\prime}$, then we have \begin{eqnarray*} M \cap M^{\prime} & = & \pi_1(A)^{\prime\prime} \cap \pi_1(A)^{\prime}\\ &=& \big(\pi_1(A)^{\prime} \cup \pi_1(A)\big)^{\prime} \\ &\subseteq & (\pi_2(B) \cup \pi_1(A))^{\prime} \, \qquad \qquad ( \text{as} \, \pi_2(B) \subseteq \pi_1(A)')\\ & = &\pi_1(A)^{\prime} \cap \pi_2(B)^{\prime}\\ & \subseteq & \{\pi_1(A) \pi_2(B) \}^{\prime}\\ &=& \mathbb{C} I. \end{eqnarray*} Thus, $\pi_1$ (and similarly $\pi_2$) is a factor representation. \end{proof} Analogous to \cite[Theorem 5.6]{ass}, \cite[Theorem 3.10]{JK-edin} and \cite[Theorem 9]{jk11}, we now obtain the following characterizations of maximal and maximal modular ideals. \begin{theorem}\label{maximal-ideals} Let $A$ and $B$ be $C^*$-algebras. Then, a closed ideal $J$ in $A \otimes^\gamma B$ is maximal if and only if it is of the form $J = A \otimes^\gamma N + M \otimes^\gamma B$ for some maximal ideals $M$ and $N$ in $A$ and $B$, respectively. \end{theorem} \begin{proof} Let $J = A \otimes^\gamma N + M \otimes^\gamma B$, where $M$ and $N$ are maximal ideals in $A$ and $B$, respectively. 
Note that, by \Cref{sum-obp}, $A\otimes^\gamma N + M\otimes^\gamma B$ is a closed ideal in $A\otimes^\gamma B$. For the canonical quotient maps $\pi_1 : A \rightarrow A/M$ and $\pi_2 : B \rightarrow B/N$, we have $ J =\text{ker}(\pi_1 \otimes \pi_2)$ and there is an isomorphism between $(A\otimes^\gamma B)/J $ and $(A/M)\otimes^\gamma (B/N)$ by \Cref{kernel-phi-ot-psi}. Since $A/M$ and $B/N$ are both simple, so is $(A\otimes^\gamma B)/J$, by \Cref{obp-simple}. Thus, $J$ is maximal in $A\otimes^\gamma B$. Conversely, let $J$ be a maximal ideal in $A\otimes^\gamma B$. As seen in \Cref{wiener}, $A\otimes^\gamma B$ has the Wiener property; so, there exists a non-zero irreducible $*$-representation $\pi$ of $A\otimes^\gamma B$ on a Hilbert space $H$, such that $\pi(J)= (0)$. Then, by \Cref{irr}, there exist factor $*$-representations $\pi_1$ and $\pi_2$ of $A$ and $B$ on $H$ with commuting ranges such that $ \pi(a\otimes b) = \pi_1 (a) \pi_2 (b)$ for all $a\in A$ and $b\in B $. Set $M= \ker \pi_1,\,N= \ker \pi_2$ and $L= A\otimes^\gamma N + M\otimes^\gamma B$. We first show that $ L = J$. Clearly, $\pi(M\otimes^\gamma B)= (0) = \pi(A\otimes^\gamma N)$, which gives $\pi(J+L)= (0)$. Since $\pi$ is non-zero, this shows that $J+L$ is a proper ideal of $A\otimes^\gamma B$. Since $J + L$ contains $J$, by maximality of $J$, we have $L \subseteq J$, i.e., $ A\otimes^\gamma N + M \otimes^\gamma B \subseteq J.$ By \Cref{kernel-phi-ot-psi} and \Cref{sum-obp} we have $\ker(\pi_M \otimes^\gamma \pi_N) = A \otimes^\gamma N + M \otimes^\gamma B$, so, for the reverse inclusion, it suffices to show that $J \subseteq \ker(\pi_M \otimes^\gamma \pi_N)$, where $\pi_M$ and $\pi_N$ are the natural quotient maps. Note that the representations $\pi_1$ and $\pi_2$ induce faithful commuting representations $\tilde{\pi}_1$ of $A/M$ and $\tilde{\pi}_2$ of $B/N$ on $H$. Then, by a universal property of $\otimes^{\max}$ (see \cite[Proposition IV.4.7]{Tak}), there exists a bounded $*$-representation $\pi_0:(A/M) \otimes^{\max} (B/N) \rightarrow B(H)$ such that $\pi_0(x\otimes y) = \tilde{\pi}_1 (x)\tilde{\pi}_2(y)$ for all $x\in A/M$ and $ y\in B/N $. Since $\|\cdot \|_{\max} \leq \| \cdot \|_{\gamma}$, the identity map on $(A/M) \otimes (B/N)$ extends to a contractive $*$-homomorphism, say, $i: (A/M) \otimes^\gamma (B/N) \rightarrow (A/M) \otimes^{\max} (B/N)$. In particular, $\theta:= \pi_0 \circ i$ is a $*$-representation of $ (A/M) \otimes^\gamma (B/N)$ on $H$. It is easy to verify that $ \pi = \theta \circ \, (\pi_M \otimes^\gamma \pi_N) $ on $A\otimes B$, so by continuity we have $ \pi = \theta \circ \, (\pi_M \otimes^\gamma \pi_N) $, which further gives $ \theta ( (\pi_M \otimes^\gamma \pi_N)(J))=0 $. We now claim that $\theta$ is faithful on $(A/M) \otimes^\gamma (B/N)$, which will yield $(\pi_M \otimes^\gamma \pi_N)(J)= (0)$, as was asserted above. Note that, by \Cref{obp-faithful}, it suffices to show that $\theta$ is faithful on $(A/M) \otimes (B/N)$. Since $\pi_1$ and $\pi_2$ are both factor representations, so are the representations $\tilde{\pi}_1$ and $\tilde{\pi}_2$ because \[ \tilde{\pi}_1(A/M)^{\prime\prime} = \pi_1(A)^{\prime\prime} \quad \text{and}\quad \tilde{\pi}_2(B/N)^{\prime\prime} = \pi_2(B)^{\prime\prime}. 
\] Now, for the factor $\mathcal{R}= \tilde{\pi}_1(A/M)^{\prime \prime}$, the map $$\mathcal{R} \otimes \mathcal{R}^{\prime} \ni \Sigma_{i=1}^n x_i \otimes x_i^{\prime} \stackrel{\rho}{\mapsto} \Sigma_{i=1}^n x_i x_i^{\prime} \in B(H)$$ is an injective homomorphism, by \cite[Proposition IV.4.20]{Tak}. Suppose $ \theta\big(\sum_{i=1}^n x_i \otimes y_i\big) = 0$ for some $\sum_{i=1}^n x_i \otimes y_i \in (A/M) \otimes (B/N)$, which gives $$ 0 = \theta\Big(\sum_i x_i \otimes y_i\Big) = \sum_i \tilde{\pi}_1(x_i) \tilde{\pi}_2 (y_i) = \rho\Big( \sum_i \tilde{\pi}_1(x_i) \otimes \tilde{\pi}_2 (y_i)\Big). $$ Note that $\tilde{\pi}_2(y_i) \in \tilde{\pi}_1(A/M)' = \mathcal{R}'''= \mathcal{R}' $ for all $i$. Since $\rho$ is injective, we obtain $$ (\tilde{\pi}_1 \otimes \tilde{\pi}_2) \Big(\sum_i x_i \otimes y_i\Big) = \sum_i \tilde{\pi}_1(x_i) \otimes \tilde{\pi}_2 (y_i) = 0. $$ Further, since $ \tilde{\pi}_1$ and $ \tilde{\pi}_2$ are both injective, so is $ \tilde{\pi}_1 \otimes \tilde{\pi}_2$ and hence $\sum_i x_i \otimes y_i =0$. This proves our claim. Finally, since $J = \ker(\pi_M \otimes^\gamma \pi_N)$, $(A\otimes^\gamma B)/J$ is isomorphic to $(A/M) \otimes^\gamma (B/N)$. So, by \Cref{obp-simple}, it follows that $M$ and $N$ are both maximal in $A$ and $B$, respectively. \end{proof} It is known that an ideal in a Banach algebra is maximal modular if and only if it is maximal and modular. The above structure of maximal ideals immediately yields the structure of maximal modular ideals as well. \begin{theorem}\label{maximal-modular} Let $A$ and $B$ be $C^*$-algebras. Then, a closed ideal $J$ of $A\otimes^\gamma B$ is maximal modular if and only if it is of the form $J = A \otimes^\gamma N + M \otimes^\gamma B$ for some maximal modular ideals $M$ and $N$ in $A$ and $B$, respectively. \end{theorem} \begin{proof} Let $J = A\otimes^\gamma N + M \otimes^\gamma B$ for some maximal modular ideals $M$ and $N$ in $A$ and $B$, respectively. Since $M$ and $N$ are both maximal ideals, so is $J$, by \Cref{maximal-ideals}. Also, by \Cref{kernel-phi-ot-psi}, $(A\otimes^\gamma B)/J$ and $\big(A/M\big) \otimes^\gamma \big(B/N\big)$ are isomorphic Banach $*$-algebras. Since $A/M$ and $B/N$ are both unital, so is $(A\otimes^\gamma B)/J$. In particular, $J$ is modular. Conversely, suppose $J$ is a maximal modular ideal in $A \otimes^\gamma B$. Again, by \Cref{maximal-ideals}, $J$, being maximal, is of the form $J = A\otimes^\gamma M + I\otimes^\gamma N$ for some maximal ideals $M$ and $N$ in $A$ and $B$, respectively. As seen in previous paragraph, $(A\otimes^\gamma B)/J$ is isomorphic to $\big(A/ M \big) \otimes^\gamma\big( B/N\big)$; in particular, the latter space is unital. Therefore, by \cite[Theorem 1]{loy}, $A/M$ and $B/N$ are both unital; so that $M$ and $N$ are both modular as well. \end{proof} \section{Hull-kernel topology} As in Introduction, for any Banach algebra $A$, $Id(A)$ (resp., $Id'(A)$) denotes the set of closed ideals (resp., proper closed ideals) of $A$. And, for any algebra $A$, $\mathcal{M}(A)$ (resp., $\mathcal{M}_m(A)$) denotes the set of maximal (resp., maximal modular) ideals of $A$. Before discussing hull-kernel topology, we briefly outline another topology on $Id(A)$ for a Banach algebra $A$, which agrees with hull-kernel topology on the set of maximal ideals and is called the {\it $\tau_w$-topology} (\cite[$\S 2$]{arc2}). 
A subbasis for $\tau_w$-topology is given by the collection $$\Big\{ U(J) :=\{ I \in Id(A): I\nsupseteq J\},\, J \in Id(A) \cup \{\emptyset\} \Big\},$$ where $U(\emptyset) := A $. Note that $U(A) = Id'(A)$, $U((0)) = \emptyset$ and $U(\emptyset)$ is the only subbasic set that contains $A$. $Id(A)$ is a $T_0$ space with respect to $\tau_w$-topology (\cite{arc2}). For $C^*$-algebras $A$ and $B$, consider the map $\Phi: Id(A) \times Id(B) \to Id(A\otimes^\gamma B)$ defined by \begin{equation}\label{Phi} \Phi(I, J) = \ker(\pi_I \otimes^\gamma \pi_J) = A\otimes^\gamma J + I \otimes^\gamma B, \end{equation} where $\pi_I: A \to A/I$ and $\pi_J: B\to B/J$ are the canonical quotient maps. Note that the last equality in (\ref{Phi}) follows from \Cref{kernel-phi-ot-psi} and \Cref{sum-obp}. Also, If $(I, J) \in Id'(A) \times Id'(B)$, then $\pi_I \otimes^\gamma \pi_J \neq 0$ so that $ \ker(\pi_I \otimes^\gamma \pi_J) $ is proper. Hence $\Phi$ maps $Id'(A) \times Id'(B)$ into $ Id'(A\otimes^\gamma B)$. Analogous to \cite[Lemma 1.4]{arc3} and \cite[Lemma 2.5]{laz}, we obtain the following: \begin{lemma} Let $A$ and $B$ be $C^*$-algebras. Then, $\Phi: Id(A) \times Id(B) \rightarrow Id (A \otimes^\gamma B)$ is $\tau_w$-continuous. \end{lemma} \begin{proof} Consider the diagram \[ \xymatrix { & Id(A) \times Id(B) \ar[ld]_{\Phi_1} \ar[rd]^{\Phi} & \\ Id(A\otimes^{\min} B) \ar[rr]^{\Phi_2} & & Id(A \otimes^\gamma B),} \] where $\Phi_1(I,J) := \ker(\pi_I \otimes^{\min} \pi_J)$ and $\Phi_2(K) := i^{-1}(K) $, $i$ being the injective contractive $*$-homomorphism from $A\otimes^\gamma B \to A\otimes^{\min} B$ (as in \Cref{obp-min}). It is known that $\Phi_1$ is $\tau_w$-continuous - see \cite[Lemma 2.5]{laz}. So, it suffices to show that this diagram commutes and that $\Phi_2$ is $\tau_w$-continuous. In order to establish commutativity of the diagram, we just need to verify that $$ \ker(\pi_I \otimes^\gamma \pi_J) = i^{-1}(\ker(\pi_I \otimes^{\min} \pi_J)). $$ For $z\in A\otimes^\gamma B$, let $\{z_n\}$ be a sequence in $A\otimes B$ such that $ \|z_n -z\|_{\gamma} \rightarrow 0$. Let $\hat{i}: (A/I) \otimes^\gamma (B/J) \to (A/I) \otimes^{\min} (B/J)$ be the injective continuous homomorphism. Then the sequence $ \{\hat{i}\big((\pi_I \otimes^\gamma \pi_J)(z_n)\big) = (\pi_I \otimes^\gamma \pi_J)(z_n)\} $ converges to $\hat{i}\big((\pi_I \otimes^\gamma \pi_J)(z)\big)$ in $(A/I) \otimes^{\min} (B/J)$. Since $\|\cdot \|_{\min} \leq \| \cdot \|_{\gamma}$ and $i$ is identity on $A \otimes B$, $ \|z_n -i(z)\|_{\min} \rightarrow 0$ as well. So, $ (\pi_I \otimes^{\min} \pi_J)(z_n) {\longrightarrow} (\pi_I \otimes^{\min} \pi_J)(i(z))$ in $(A/I) \otimes^{\min} (B/J)$. Since both the mappings $\pi_I \otimes^\gamma \pi_J$ and $\pi_I \otimes^{\min} \pi_J$ agree on $A\otimes B$, by continuity, we have $$\hat{i}\big((\pi_I \otimes^\gamma \pi_J)(z)\big) = (\pi_I \otimes^{\min} \pi_J)(i(z)).$$ The required relationship now follows from injectivity of $\hat{i}$. Next, we show $ \Phi_2$ is $\tau_w$-continuous. For a subbasic open set $U(K)$ of $Id(A\otimes^\gamma B)$ for some $K\in Id(A\otimes^\gamma B)$, we have $\Phi_2^{-1} (U(K)) = U(K_{\min})$. Indeed, for $P \in Id(A\otimes^{\min} B)$, \begin{eqnarray*} P \in \Phi_2^{-1} (U(K)) & \iff & i^{-1}(P) \in U(K) \\ & \iff & i^{-1}(P) \nsupseteq K \\ & \iff & P \nsupseteq K_{\min} \quad (:=\overline{i(K)}) \\ & \iff & P \in U(K_{\min}). \end{eqnarray*} Thus, $ \Phi_2$ is $\tau_w$-continuous. 
\end{proof} \begin{lemma}{\label{ker-cont}} Let $A$ and $B$ be $C^*$-algebras and $(I_i, J_i) \in Id'(A)\times Id'(B),\, i=1,2$ be such that $ A\otimes^\gamma J_1 + I_1 \otimes^\gamma B \subseteq A\otimes^\gamma J_2 + I_2 \otimes^\gamma B $. Then $I_1 \subseteq I_2$ and $J_1 \subseteq J_2$. \end{lemma} \begin{proof} For a fixed $a\in I_1$ and any $b \in B$ we have $a \otimes b \in \ker (\pi_{I_1}\otimes^\gamma \pi_{J_1}) \subseteq \ker (\pi_{I_2}\otimes^\gamma \pi_{J_2})$, by the given condition. This yields $\pi_{I_2} (a) \otimes \pi_{J_2}(b) = 0$ for every $b\in B$. Since $J_2$ is proper, $\pi_{J_2}(b) \neq 0$ for some $b \in B$. So, we must have $\pi_{I_2}(a)=0$, that is, $a\in I_2$. Similarly, we obtain $J_1 \subseteq J_2$. \end{proof} We now obtain the following analogue of \cite[Theorem 1.5]{arc3}, \cite[Theorem 2.6]{laz} and \cite[Proposition 1.1(v)]{jk-13}. \begin{theorem} Let $A$ and $B$ be $C^*$-algebras. Then, $\Phi$ maps $Id'(A) \times Id'(B)$ homeomorphically onto its image which is dense in $Id'(A \otimes^\gamma B)$. \end{theorem} \begin{proof} For a closed ideal $K$ in $A\otimes^\gamma B$, define $$K_A=\{ a \in A: a\otimes B \subseteq K\}\quad \text{and} \quad K_B = \{ b\in B: A \otimes b \subseteq K\}.$$ By an easy application of continuous functional calculus, it is immediately seen that $K_A$ and $K_B$ are closed ideals in $A$ and $B$, respectively. Define $\Psi: Id(A\otimes^\gamma B) \to Id(A) \times Id(B)$ by $ \Psi (K) = (K_A, K_B)$. Then $\Psi \circ \Phi$ equals {identity} on $Id'(A) \times Id'(B)$. To see this, consider $(I,J) \in Id'(A) \times Id'(B)$ and set $K= \Phi(I,J)= A\otimes^\gamma J + I\otimes^\gamma B$. Clearly $K_A \supseteq I$ and $K_B \supseteq J$. Also, for $a\in K_A$, if $I^\prime$ denotes the closed ideal generated by $a$ in $A$, then $I^\prime \otimes^\gamma B \subseteq K$. So, $I^\prime \in Id'(A)$ and by \Cref{ker-cont}, $I^{\prime} \subseteq I$, giving that $K_A \subseteq I$ and hence $K_A = I$. Similarly, we can see that $K_B = J$. As a consequence, $\Phi$ is injective on $Id'(A) \times Id'(B)$. It now suffices to show that $\Psi$ is $\tau_w$-continuous. Consider a subbasic open set $U(I) \times U(J)$ of $Id(A) \times Id(B)$, where $I \in Id(A), J \in Id(B)$. For any $K \in Id(A\otimes^\gamma B)$, \begin{eqnarray*} K \in \Psi^{-1}(U(I) \times U(J)) & \iff & K_A \in U(I) \quad \& \quad K_B \in U(J) \\ & \iff & K_A\nsupseteq I \quad \& \quad K_B \nsupseteq J \\ & \iff & K \nsupseteq I\otimes^\gamma B \quad \& \quad K \nsupseteq A\otimes^\gamma J \\ & \iff & K \in U(I\otimes^\gamma B) \cap U(A\otimes^\gamma J) \end{eqnarray*} Thus $\Psi^{-1} (U(I) \times U(J)) = U(I \otimes^\gamma B) \cap U(A\otimes^\gamma J)$ and since the latter set is open in $Id(A\otimes^\gamma B)$, this proves our claim. We now show that $\Phi \big(Id'(A) \times Id'(B)\big)$ is dense in $Id'(A \otimes^\gamma B)$. For this, consider a $K$ in $Id'(A \otimes^\gamma B)$ and let $U$ be a basic open set in $Id'(A\otimes^\gamma B)$ containing $K$. Then $U = \cap_{i=1}^n U(P_i)$ for some $P_i \in Id'(A\otimes^\gamma B)$. Here, $K \in U(P_i)$, that is, $P_i \nsubseteq K$ for all $1 \leq i \leq n$. Now for each $i$, note that $A\otimes^\gamma K_B + K_A \otimes^\gamma B \subseteq K$; so $P_i\nsubseteq A \otimes^\gamma K_B$ and $ P_i \nsubseteq K_A \otimes^\gamma B$, since $P_i \nsubseteq K$. This further implies that $P_i \nsubseteq \Phi(0,K_B)$ and $P_i\nsubseteq \Phi(K_A,0)$ so that $\Phi(0,K_B) \in U(P_i)$ and $\Phi(K_A,0) \in U(P_i)$ for all $1 \leq i \leq n$. 
Thus $U \cap Im(\Phi) \neq \phi$ and hence image of $\Phi$ is dense in $Id'(A\otimes^\gamma B)$. \end{proof} We now briefly recall hull-kernel topology, without details. Let $A$ be a Banach algebra. For each $E \subseteq Prime(A)$, the set of all proper closed prime ideals of $A$, one associates a closed ideal, called {\it kernel} of $E$, given by $ k(E) = \bigcap_{P \in E} P.$ Also, for each $M \subseteq A$, {\it hull} of $M$ is defined as $$ h_A(M) = \{ P \in Prime(A) : P \supseteq M \}.$$ Equip $Prime(A)$ with the {\it hull-kernel topology} (hk-topology, in short), where for $E\subseteq Prime(A)$, its closure turns out to satisfy $\overline{E} = h(k(E))$, which can be taken as the definition of closure for our purpose - for details, see \cite{arc2} and references therein. As mentioned above, it is a fact that for any Banach algebra $A$, the $\tau_w$-topology coincides with the hull-kernel topolgy on $\mathcal{M}(A)$- see \cite{arc2}. The above homeomorphism restricts well to maximal and maximal modular ideals. \begin{theorem} Let $A$ and $B$ be $C^*$-algebras. Then, the restriction of $\Phi$ to $\mathcal{M}(A) \times \mathcal{M}(B)$ is a homeomorphism onto $\mathcal{M}(A \otimes^\gamma B)$ with respect to the hull-kernel topology. Furthermore, $\Phi$ maps $\mathcal{M}_m(A) \times \mathcal{M}_m(B)$ homeomorphically onto $\mathcal{M}_m(A \otimes^\gamma B)$, as well. \end{theorem} \begin{proof} By \Cref{Phi}, \Cref{maximal-ideals} and \Cref{maximal-modular}, $\Phi$ maps $\mathcal{M}(A) \times \mathcal{M}(B)$ (resp., $\mathcal{M}_m(A) \times \mathcal{M}_m(B)$) bijectively onto $\mathcal{M}(A \otimes^\gamma B)$ (resp., $\mathcal{M}_m(A \otimes^\gamma B)$). Since $\Phi$ is continuous (being $\tau_w$-continuous), it just remains to show that $\Phi$ is a closed map with respect to the product $hk$-topology on $\mathcal{M}(A) \times \mathcal{M}(B)$ and the $hk$-topology on $\mathcal{M}(A \otimes^\gamma B)$. First of all note that any closed set in $\mathcal{M}(A) \times \mathcal{M}(B)$ is of the form $$ \cap_\alpha \Big(\big(F_A^\alpha \times \mathcal{M}(B)\big) \cup \big(\mathcal{M}(A) \times F_B^\alpha\big)\Big),$$ where $F_C^\alpha$ is closed in $\mathcal{M}(C)$ for $C = A, B$. Also, since $\Phi$ is injective, we have $$ \Phi \Big(\cap_\alpha \Big( \big(F_A^\alpha \times \mathcal{M}(B)\big) \cup \big(\mathcal{M}(A) \times F_B^\alpha) \big) \Big) = \cap_\alpha \Big(\Phi\big(F_A^\alpha \times \mathcal{M}(B)\big) \cup \Phi\big(\mathcal{M}(A) \times F_B^\alpha\big)\Big).$$ Thus, it is sufficient to prove that for any closed set $F_A$ of $\mathcal{M}(A)$, $X:=\Phi\big(F_A \times \mathcal{M}(B)\big)$ is closed in $\mathcal{M}(A \otimes^\gamma B)$. We have to show that $h(k(X)) \subseteq X$. Let $P\in \mathcal{M}(A \otimes^\gamma B)$ be such that $k(X) \subseteq P$. Let, if possible, $P\notin X$. Since $P = A\otimes^\gamma J + I \otimes^\gamma B= \ker(\pi_I \otimes^\gamma \pi_J)$, for some $I \in \mathcal{M}(A), J \in \mathcal{M}(B)$, it follows that $I \notin F_1$. But $F_A$ is closed in $hk$-topology, thus $k(F_A) \nsubseteq I$. Let $a \in k(F_A) \setminus I$ and fix $b \notin J$. Note that $(a+ I) \otimes (b+J) \neq 0$ which gives that $a \otimes b \notin \ker(\pi_I \otimes^\gamma \pi_J) = P$. On the other hand, consider any $K:= \Phi(L \times M) \in X$, where $ L \in F_A$ and $ M \in \mathcal{M}(B)$. Since $ a\in k(F_A) \subseteq L $ we have $ (a+L) \otimes (b+M) =0$. Thus $a\otimes b \in \ker(\pi_L \otimes^\gamma \pi_M) = K$ and this is true for all $K \in X$. 
So, $a\otimes b \in k(X) \subseteq P$, which gives a contradiction. \end{proof} \section{Center of $A\otimes^\gamma B$} For algebras $A$ and $B$ with centers $\mathcal{Z}(A)$ and $\mathcal{Z}(B)$, respectively, one can easily check that there is a canonical algebra isomorphism between $\mathcal{Z}(A \otimes B)$ and $ \mathcal{Z}(A) \otimes\mathcal{Z}( B)$. For any two $C^*$-algebras $A$ and $B$ and any $C^*$-norm $\| \cdot \|_\alpha$, it is known that the above isomorphism extends to an isometric $*$-isomorphism from $\mathcal{Z}(A \otimes^{\alpha} B)$ onto $\mathcal{Z}(A) \otimes^{\alpha} \mathcal{Z}( B)$ - see \cite{arc}. Making explicit use of injectivity of $\otimes^h$, the above natural map also extends to an isometric algebra isomorphism from $\mathcal{Z}(A \otimes^h B)$ onto $\mathcal{Z}(A) \otimes^h \mathcal{Z}( B)$ - see \cite{ass}; and for $\widehat\otimes$, it extends to an algebraic $*$-isomorphism (not necessarily isometric) from $\mathcal{Z}(A \widehat\otimes B)$ onto $\mathcal{Z}(A) \widehat\otimes \mathcal{Z}( B)$ - see \cite{JK}. With the kind of partial injectivity for $\otimes^\gamma$ established in Section 2, their analogue for $\otimes^\gamma$ is quite satisfying. \begin{theorem} For $C^*$-algebras $A$ and $B$, $\mathcal{Z}(A \otimes^\gamma B) = \mathcal{Z}(A) \otimes^\gamma \mathcal{Z}( B).$ \end{theorem} \begin{proof} Since $\mathcal{Z}(A \otimes B) \subseteq \mathcal{Z}(A \otimes^\gamma B)$, consider the identity function from $ \mathcal{Z}(A) \otimes \mathcal{Z}(B)$ into $ \mathcal{Z}(A \otimes^\gamma B)$. By Theorem \ref{obp-injective}, $\mathcal{Z}(A) \otimes^\gamma \mathcal{Z}(B)$ can be considered as a $*$-subalgebra of $A\otimes^\gamma B$, so that for any $u\in \mathcal{Z}(A) \otimes \mathcal{Z}(B)$, $ \|u\|_{\mathcal{Z}(A) \otimes^\gamma \mathcal{Z}(B)} = \|u\|_{A\otimes^\gamma B} $. Thus, the identity function extends uniquely to an isometric $*$-homomorphism, say, $\theta$ from $\mathcal{Z}(A) \otimes^\gamma \mathcal{Z}(B)$ into $\mathcal{Z}(A\otimes^\gamma B)$. It only remains to show that $\theta $ is surjective. Let $z \in \mathcal{Z}(A \otimes^\gamma B)$. Consider $i: A\otimes^\gamma B \rightarrow A\otimes^h B$, the canonical injective homomorphism as in \Cref{obp-min}. It is easily seen that $i(z)x = x i(z)$ for all $x\in \mathcal{Z}(A) \otimes \mathcal{Z}(B)$; so that $ i(z) \in \mathcal{Z}(A) \otimes^h \mathcal{Z}(B)$, and by \cite{ass}, we have $ \mathcal{Z}(A\otimes^h B) = \mathcal{Z}(A) \otimes^h \mathcal{Z}(B)$. Now, let $i^\prime: \mathcal{Z}(A) \otimes^\gamma \mathcal{Z}(B) \rightarrow \mathcal{Z}(A) \otimes^h \mathcal{Z}(B)$ be the canonical injective homomorphism (like the map $i$). Then, the following diagram $$ \xymatrix{ \mathcal{Z}(A)\otimes^\gamma \mathcal{Z}(B) \ar[rr]^\theta \ar[rd]_{i^\prime} && \mathcal{Z}(A\otimes^\gamma B) \ar[ld]^{i}\\ & \mathcal{Z}(A)\otimes^h \mathcal{Z}(B)} $$ commutates because $i \circ \theta = i^\prime $ on $\mathcal{Z}(A) \otimes \mathcal{Z}(B)$ and all the three maps are continuous. Note that, the map $i^\prime$ is surjective as well. To see this, consider an element $z^\prime \in \mathcal{Z}(A) \otimes^h \mathcal{Z}(B)$ and fix a sequence $\{z_n\} \subseteq \mathcal{Z}(A)\otimes \mathcal{Z}(B)$ such that $\|z_n -z^\prime\|_h \rightarrow 0 $. By Grothendieck inequality for commutative $C^*$-algebras (see \cite{pis}), we have $$ \|x\|_\gamma \leq K_G \|x\|_h \ \text{for all}\ x\in \mathcal{Z}(A)\otimes \mathcal{Z}(B), $$ $K_G$ being the Grothendieck constant. 
Thus, the sequence $\{z_n\}$ is Cauchy with respect to $\|\cdot\|_{\gamma}$ and converges to some $z^{\prime\prime}$ in $\mathcal{Z}(A) \otimes^\gamma \mathcal{Z}(B)$. This shows that $\{z_n = i'(z_n)\}$ converges to $z'$ as well as to $z''$ in $\mathcal{Z}(A) \otimes^h \mathcal{Z}(B)$. So, $i^\prime(z^{\prime\prime}) = z^\prime$ and $i^\prime$ is surjective. Thus, for above $i(z)$ in $\mathcal{Z}(A) \otimes^h \mathcal{Z}(B)$, there exists some $w \in \mathcal{Z}(A) \otimes^\gamma \mathcal{Z}(B)$ such that $i(z) = i^\prime(w) = i( \theta(w))$. Since $i$ is injective, $z= \theta(w)$, so that $\theta $ is surjective and we are done. \end{proof} It would be interesting to provide an answer to the following: \vspace*{1mm} \noindent{\sc Question.} Is there any relationship, as above, between the center of $A \otimes^\gamma B$ and $\mathcal{Z}(A) \otimes^\gamma \mathcal{Z}(B)$ for Banach algebras $A$ and $B$?
{ "timestamp": "2018-09-06T02:00:10", "yymm": "1809", "arxiv_id": "1809.01131", "language": "en", "url": "https://arxiv.org/abs/1809.01131", "abstract": "We analyze certain algebraic structures of the Banach space projective tensor product of $C^*$-algebras which are comparable with their known counterparts or the Haagerup tensor product and the operator space projective tensor product of $C^*$-algebras. Highlights of this analysis include (a) injectivity of the Banach space projective tensor product when restricted to the tensor products of $C^*$-algebras, (b) detailed structure of closed ideals of $A \\otimes_{\\gamma} B$ in terms of those of $A$ and $B$, (c) identification of certain spaces of ideals of $A \\otimes_{\\gamma} B$ in terms of those of $A$ and $B$ from the perspective of hull-kernel topology, and (d) identification of the center of $A \\otimes_{\\gamma} B$ with $Z(A) \\otimes_{\\gamma} Z(B)$, where $A$ and $B$ are $C^*$-algebras.", "subjects": "Operator Algebras (math.OA); Functional Analysis (math.FA)", "title": "On Banach space projective tensor product of $C^*$-algebras", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575116884779, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.7091542029770294 }
https://arxiv.org/abs/1006.0491
Multiple recurrence and the structure of probability-preserving systems
In 1975 Szemerédi proved the long-standing conjecture of Erdős and Turán that any subset of $\bbZ$ having positive upper Banach density contains arbitrarily long arithmetic progressions. Szemerédi's proof was entirely combinatorial, but two years later Furstenberg gave a quite different proof of Szemerédi's Theorem by first showing its equivalence to an ergodic-theoretic assertion of multiple recurrence, and then bringing new machinery in ergodic theory to bear on proving that. His ergodic-theoretic approach subsequently yielded several other results in extremal combinatorics, as well as revealing a range of new phenomena according to which the structures of probability-preserving systems can be described and classified.In this work I survey some recent advances in understanding these ergodic-theoretic structures. It contains proofs of the norm convergence of the `nonconventional' ergodic averages that underly Furstenberg's approach to variants of Szemerédi's Theorem, and of two of the recurrence theorems of Furstenberg and Katznelson: the Multidimensional Multiple Recurrence Theorem, which implies a multidimensional generalization of Szemerédi's Theorem; and a density version of the Hales-Jewett Theorem of Ramsey Theory.
\chapter*{Preface} In 1975 Szemer\'edi proved the long-standing conjecture of Erd\H{o}s and Tur\'an that any subset of $\mathbb{Z}$ having positive upper Banach density contains arbitrarily long arithmetic progressions. Szemer\'edi's proof was entirely combinatorial, but two years later Furstenberg gave a quite different proof of Szemer\'edi's Theorem by first showing its equivalence to an ergodic-theoretic assertion of multiple recurrence, and then bringing new machinery in ergodic theory to bear on proving that. His ergodic-theoretic approach subsequently yielded several other results in extremal combinatorics, as well as revealing a range of new phenomena according to which the structures of probability-preserving systems can be described and classified. In this work I survey some recent advances in understanding these ergodic-theoretic structures. It contains proofs of the norm convergence of the `nonconventional' ergodic averages that underly Furstenberg's approach to variants of Szemer\'edi's Theorem, and of two of the recurrence theorems of Furstenberg and Katznelson: the Multidimensional Multiple Recurrence Theorem, which implies a multidimensional generalization of Szemer\'edi's Theorem; and a density version of the Hales-Jewett Theorem of Ramsey Theory. \[\ast\quad\ast\quad\ast\] The text below was originally submitted as my Ph.D. dissertation at UCLA, after being assembled from a number of earlier papers. It seems worth repeating the acknowledgements from that dissertation as well. Many people deserve my thanks for their part in my mathematical education. Listing them in roughly the order we met, I must at least mention David Fremlin, Imre Leader, Tim Gowers, B\'ela Bollob\'as, Ben Garling, James Norris, Assaf Naor, Yuval Peres, Vitaly Bergelson, Christoph Thiele, Sorin Popa, David Aldous, Tamar Ziegler, Bryna Kra, Bernard Host, Mariusz Lema\'nczyk and Dan Rudolph. I could have written a much longer list, but still the selection would have been slightly arbitrary: to make it complete would have required far more space than I have available. During the same period, I have benefited from the financial support of Trinity College, Cambridge, the Shapiro and Huang Families through their UCLA graduate student fellowships, and Microsoft Corporation. No less significant, I have been able to rely unquestioningly on the support of family and friends, for whom I can only hope to be so generous in turn should the need arise. Terence Tao, who advised this dissertation, has certainly taught me more during the last four years than either of us fully appreciates, and his energy and enthusiasm for mathematics are a constant motivation for those around him. \begin{flushright} Venice Beach, California May 2010 \end{flushright} \chapter{Introduction} The concerns of this work stem from the following remarkable result of Szemer\'edi~(\cite{Sze75}), which confirmed an old conjecture of Erd\H{o}s and Tur\'an~(\cite{ErdTur36}). 
\vspace{7pt} \noindent\textbf{Szemer\'edi's Theorem.}\quad \emph{For any $\delta > 0$ and $k\geq 1$ there is some $N_0 \geq 1$ such that if $N \geq N_0$ then any $A \subseteq \{1,2,3,\ldots,N\}$ with $|A| \geq \delta N$ includes a nontrivial $k$-term arithmetic progression: $A \supseteq \{a,a+n,\ldots,a+(k-1)n\}$ for some $a \in \{1,2,\ldots,N\}$ and $n \geq 1$.} \vspace{7pt} This provides a considerable strengthening of a much older result of van der Waerden~\cite{vdW27}, according to which any colouring of $\mathbb{N}$ using a bounded number of colours witnesses arbitrarily long finite arithmetic progressions that are monochromatic. Since any colouring with at most $c$ colours must have at least one colour class of upper Banach density at least $1/c$, van der Waerden's Theorem can be deduced by applying Szemer\'edi's Theorem to the intersection of that class with sufficiently long discrete intervals in $\mathbb{N}$. Shortly after the appearance of Szemer\'edi's ingenious combinatorial proof, Furstenberg gave a new proof of the above theorem in~\cite{Fur77} using a superficially quite different approach, relying on a conversion to a problem about probability-preserving dynamical systems. Such a system consists of a probability space $(X,\S,\mu)$ together with an invertible, measurable, $\mu$-preserving transformation $T:X\to X$. Furstenberg proved that all such systems enjoy a property of `multiple recurrence': \vspace{7pt} \noindent\textbf{Multiple Recurrence Theorem.}\quad \emph{Whenever $(X,\S,\mu)$ and $T$ are as above, if $k\geq 1$ and $A \in \S$ has $\mu(A) > 0$ then \[\liminf_{N\to\infty}\frac{1}{N}\sum_{n=1}^N\mu(A\cap T^{-n}(A)\cap\cdots\cap T^{-(k-1)n}(A)) > 0.\] In particular, there is some $n \geq 1$ such that \[\mu(A\cap T^{-n}(A)\cap\cdots\cap T^{-(k-1)n}(A)) > 0.\]} \vspace{7pt} It is worth noting that analogously to this ergodic-theoretic proof of Szemer\'edi's Theorem, it is possible to deduce the colouring theorem of van der Waerden from a multiple recurrence result in topological dynamics. We will not be concerned with this story here, but it is reported in detail in Furstenberg's book~\cite{Fur81}. Shortly after the above result appeared, Furstenberg and Katznelson realized that the same basic method could be modified to apply to collections of commuting measure-preserving transformations, and proved the following in~\cite{FurKat78}. \vspace{7pt} \noindent\textbf{Theorem A} (Multidimensional Multiple Recurrence Theorem).\quad \emph{If $(X,\S,\mu)$ is a probability space, $T_1$, $T_2$, \ldots, $T_d$ are commuting measurable invertible $\mu$-preserving self-maps of $X$ and $A \in \S$ has $\mu(A) > 0$, then \[\liminf_{N\to\infty}\frac{1}{N}\sum_{n=1}^N\mu(T_1^{-n}(A)\cap\cdots\cap T_d^{-n}(A)) > 0.\]} \vspace{7pt} Of course this result implies one-dimensional multiple recurrence by setting $d := k$ and $T_i := T^{i-1}$ for $i=1,2,\ldots,k$, so that $T_1$ is the identity. In addition, Furstenberg and Katznelson were able to convert Theorem A back into a multidimensional combinatorial result generalizing Szemer\'edi's Theorem. 
\vspace{7pt} \noindent\textbf{Multidimensional Szemer\'edi Theorem.}\quad \emph{For any $\delta > 0$ and $d\geq 1$ there is some $N_0 \geq 1$ such that if $N \geq N_0$ then any $A \subseteq \{1,2,\ldots,N\}^d$ with $|A| \geq \delta N^d$ includes the vertex set of the outer face of a nontrivial upright simplex: \[A \supseteq \{\bf{a} + n\bf{e}_1,\bf{a}+n\bf{e}_2,\ldots,\bf{a}+n\bf{e}_d\}\] for some $\bf{a} \in \{1,2,\ldots,N\}^d$ and $n \geq 1$, where $\bf{e}_1$, $\bf{e}_2$, \ldots, $\bf{e}_d$ are the usual basis vectors of $\mathbb{Z}^d$.} \vspace{7pt} This ergodic-theoretic approach to results in additive combinatorics has since developed into a whole subdiscipline, sometimes termed `Ergodic Ramsey Theory'; see, for instance, Bergelson's survey~\cite{Ber96}. In particular, Furstenberg and Katznelson used this approach to prove a number of further results concerning some form of `recurrence', culminating in the following density version of the classical Hales-Jewett Theorem~\cite{HalJew63} proved in~\cite{FurKat91}: \vspace{7pt} \noindent\textbf{Theorem B} (Density version of the Hales-Jewett Theorem).\quad \emph{For any $\delta > 0$ and $k\geq 1$ there is some $N_0 \geq 1$ such that if $N \geq N_0$ then any $A \subseteq [k]^N$ with $|A| \geq \delta k^N$ includes a \textbf{combinatorial line}: a subset $L\subseteq [k]^N$ of the form \[L = \{w\in [k]^N:\ w|_{[N]\setminus J} = w_0,\,\&\,w_j\ \hbox{is the same element of $[k]$ for all $j \in J$}\},\] for some fixed nonempty $J \subseteq [N]$ and $w_0 \in [k]^{[N]\setminus J}$.} \vspace{7pt} In fact, this result implies most of the other main results in density Ramsey Theory, including Szemer\'edi's Theorem and its multidimensional generalization. This implication holds exactly as in the older setting of colouring Ramsey Theorems, which is well-treated in the book~\cite{GraRotSpe90} of Graham, Rothschild and Spencer. In addition to achieving some striking new combinatorial results, Ergodic Ramsey Theory has also motivated new ergodic-theoretic questions, and has witnessed an ongoing interplay between insights into these two aspects of the subject. One basic question that was resolved only recently is whether the `multiple ergodic averages' studied in Theorems A and B above actually converge (that is, whether `$\liminf$' can be replaced with `$\lim$'). In the case of the original Multiple Recurrence Theorem, this was finally shown to be so by Host and Kra in~\cite{HosKra05}, following the establishment of several special cases and related results over two decades in~\cite{ConLes84,ConLes88.1,ConLes88.2,Zha96,FurWei96,HosKra01} (see also Ziegler's paper~\cite{Zie07} for another proof of the Host-Kra result). The more general setting of Theorem A was then settled by Tao in~\cite{Tao08(nonconv)}. 
\vspace{7pt} \noindent\textbf{Theorem C} (Norm convergence of nonconventional averages).\quad \emph{For any commuting tuple of invertible measurable $\mu$-preserving transformations $T_1$, $T_2$, \ldots, $T_d\curvearrowright (X,\S,\mu)$ and any functions $f_1,f_2,\ldots,f_d \in L^\infty(\mu)$, the multiple ergodic averages \[\frac{1}{N}\sum_{n=1}^N\prod_{i=1}^df_i\circ T_i^n\] converge in $L^2(\mu)$ as $N\to\infty$.} \vspace{7pt} While the sequence of works preceding the proof of convergence in the one-dimensional setting of the Multiple Recurrence Theorem develops a large body of ergodic-theoretic machinery for the analysis of these averages, Tao departs quite markedly from those approaches and effectively converts the problem of convergence into a quantitative assertion concerning averages of $[-1,1]$-valued functions on large finite grids $\{1,2,\ldots,N\}^d$. A new proof of Tao's Theorem was given using classical ergodic-theoretic machinery in~\cite{Aus--nonconv}. It turns out that this convergence can be proved relatively quickly using a version of the older approaches, with the one new twist that starting from a system of commuting transformations of interest $T_1,T_2,\ldots,T_d \curvearrowright (X,\S,\mu)$ one must first pass to a carefully-chosen \emph{extended} system $\tilde{T}_1,\tilde{T}_2,\ldots,\tilde{T}_d \curvearrowright (\tilde{X},\tilde{\S},\tilde{\mu})$ (that is, a new system for which the original one is isomorphic to the action of the $\tilde{T}_i$'s on some globally invariant $\sigma$-subalgebra of $\tilde{\S}$: in ergodic-theoretic terms, the original system is a `factor' of the new one). If the extension is constructed correctly then the asymptotic behaviour of the multiple ergodic averages associated to it admits a simplification allowing them to be compared with a similar system of averages involving only $d-1$ transformations; from this point convergence in $L^2$ follows quickly by induction on $d$. The need for this extension also offers some explanation for the advantage that Tao gains in his approach to Theorem C by converting to the finitary, combinatorial world: during the course of his proof he constructs new functions from the initial data of the problem in ways that cannot be used to construct \emph{measurable} functions in the ergodic-theoretic setting, but suitable measurable functions are available using the larger $\sigma$-algebra of the extended system. Theorem C proves the convergence of the scalar averages appearing in Theorem A because \[\frac{1}{N}\sum_{n=1}^N\mu(T_1^{-n}(A)\cap T_2^{-n}(A)\cap \cdots\cap T_d^{-n}(A)) = \int_X \frac{1}{N}\sum_{n = 1}^N\prod_{i=1}^d(f_i\circ T_i^n)\,\d\mu\] when $f_1 = f_2 = \ldots = f_d = 1_A$. Note that another re-proof of Tao's theorem involving non-standard analysis has been given by Towsner in~\cite{Tow09}, and that a different construction of some extensions of probability-preserving systems that can be used as in the proof of~\cite{Aus--nonconv} has since been given by Host in~\cite{Hos09}. Having found the extended systems appearing in the new proof of Theorem C, it turns out that they also afford a somewhat simplified description of the limiting value of the scalar averages appearing in Theorem A. 
These limiting values can always be expressed in terms of a certain $(d+1)$-fold self-joining of the system $(\tilde{X},\tilde{\S},\tilde{\mu},\tilde{T}_1,\tilde{T}_2,\ldots,\tilde{T}_d)$ (which appears already in the works of Furstenberg and Katznelson), and one finds that for the extended system this self-joining takes a special form. Crucially, that special form is precisely the hypothesis required to apply another result of Tao: the infinitary analog of the hypergraph removal lemma from~\cite{Tao07}. This leads fairly quickly to a new proof of Theorem A (and hence also one-dimensional multiple recurrence and their combinatorial consequences), which appeared in~\cite{Aus--newmultiSzem}. A similar story is now known in the setting of Theorem B. For their proof of that theorem, Furstenberg and Katznelson first provided a correspondence with a class of stochastic processes enjoying stationarity with respect to some semigroup of transformations. This is broadly similar to Furstenberg's original correspondence between Szemer\'edi's Theorem and the Multiple Recurrence Theorem, but differs considerably in its details. Having built this bridge to a class of stochastic processes, Furstenberg and Katznelson then used analogs of their earlier structural results from the setting of probability-preserving $\mathbb{Z}^d$-actions to prove the `recurrence' result that is the translation of Theorem B. Here, too, it turns out that the strategy of seeking extended systems in which the behaviour of interest is simplified leads to a new proof of that recurrence result, and so overall to a considerably shortened proof of Theorem B, where again the punchline is an implementation of Tao's infinitary hypergraph removal. This new proof of Theorem B appears in~\cite{Aus--DHJ}. It was discovered simultaneously with the work of the Polymath project~\cite{DHJ09}, which provided the first finitary, effective proof of that theorem, and the proof of~\cite{Aus--DHJ} used a key construction discovered by the members of that project (again, suitably translated to apply to the stochastic processes). More recently still, in pursuit of some convergence results for `polynomial' analogs of the functional averages of Theorem C, it was found that a very abstract, unified approach could be given to the construction of the different extensions underlying the above-mentioned proofs of Theorems C, A and B. This rests on the notion of a system that is `sated' relative to another class of systems. In this dissertation, the new proofs of the above results are re-told using this unifying language, and some speculations offered concerning some further extensions of this machinery. \vspace{7pt} \subsubsection*{Outline of the following chapters} In the next chapter we recall some basic definitions and conventions from the study of measurable dynamical systems, and then introduce the chief technical innovation on which most of the remaining chapters will rest: a special property of certain dynamical systems called `satedness'. The main result of that chapter, Theorem~\ref{thm:sateds-exist}, asserts that any probability-preserving dynamical system admits extensions that enjoy this `satedness' (where precisely what this means is relative to a choice of another class of systems). In Chapter~\ref{chap:conv} we use the existence of sated extensions to prove Theorem C. 
After the introduction of another important technical device, the `Furstenberg self-joining', this follows by a quick induction once the strategy of passing to a sated extension has been decided. Chapter~\ref{chap:multiMRT} is dedicated to Theorem A. In this case the use of sated extensions gives a relatively easy reduction of the proof to a case in which the Furstenberg self-joining (which describes the limiting averages of interest) admits a rather detailed structural description; but the use of that description to deduce the desired positivity of these averages is still rather involved. This requires an implementation of (a very slight modification of) Tao's `infinitary hypergraph removal lemma', which we will recall for completeness. In Chapter~\ref{chap:DHJ} we prove Theorem B. This proof follows very closely that of Theorem A, notwithstanding that the category of dynamical systems in which the proof takes place is very different. However, the unusual features of this new category will require that we quickly re-examine the existence of sated extensions proved in Chapter 2 to check that a slightly modified version of that result holds here. After recalling Furstenberg and Katznelson's original reformulation of Theorem B in terms of a `recurrence' property of certain `strongly stationary' stochastic processes, we establish this new notion of `coordinatewise-satedness' and show that in this world it implies a similar structure for certain joint distributions to that obtained for the Furstenberg self-joining in Chapter~\ref{chap:multiMRT}. The proof of Theorem B is then completed by another appeal to infinitary hypergraph removal, essentially identical to that in Chapter~\ref{chap:multiMRT}. Finally, Chapter~\ref{chap:spec} contains some speculations around an important question left open by our work. In the case of $\mathbb{Z}^d$-actions treated by Chapters~\ref{chap:conv} and~\ref{chap:multiMRT}, one can discern in the background a very general ergodic-theoretic meta-question concerning the possible joinings among systems enjoying various additional invariances. This is formulated precisely in Section~\ref{sec:meta}, but in that section it is answered only in a special case that suffices for the proof of Theorem A. A more general answer would be very interesting in its own right, as well as potentially offering new insights on other generalizations of nonconventional average convergence and multiple recurrence. In Chapter~\ref{chap:spec} we will formulate a conjecture that would answer this question much more completely. \chapter{Setting the stage}\label{chap:basics} A handful of key technical ideas in ergodic theory will drive all of the proofs in the later chapters of this work. After recalling some standard definitions and notation in the first section below, we introduce two such key ideas: that of a subclass of a class of dynamical systems that has the property of being `idempotent', and the constructions that this assumption of idempotence enables; and then the possibility of a system being `sated' relative to such an idempotent class, together with the result that all systems have extensions that are sated in this way. These preliminary sections provide the necessary background for Chapters 3 and 4 (and also Chapter 6). 
Unfortunately, the slightly unusual class of stochastic processes that appears in Chapter 5 is a little less willing to be analysed using this standard framework: the key ideas of idempotence and satedness will be central there too, but only after being modified to suit that class. The modifications will be explained early in that chapter, together with those small changes that must accordingly be made to the proofs in Sections~\ref{sec:idem} and~\ref{sec:sateds}. In principle one could give a unified treatment of all of these settings, but only at the expense of working with quite abstractly-defined categories of dynamical system and operations on them, in which our basic intuitions for the notions recalled in Section~\ref{sec:basics} may become obscured. Although more unified, that route seems to pose too great a risk to the clarity of the other chapters, and so we shall only indicate it in passing during Chapter 5. \section{Probability-preserving systems}\label{sec:basics} Throughout this paper $(X,\S)$ will denote a measurable space. Since our main results pertain only to the joint distribution of countably many bounded real-valued functions on this space and their shifts under some measurable transformations, by passing to the image measure on a suitable product space we may always assume that $(X,\S)$ is standard Borel, and this will prove convenient for some of our later constructions. In addition, $\mu$ will always denote a probability measure on $\S$. We shall write $(X^S,\S^{\otimes S})$ for the usual product measurable structure indexed by a set $S$, and $\mu^{\otimes S}$ for the product measure and $\mu^{\Delta S}$ for the diagonal measure on this structure respectively. Given a measurable map $\phi:(X,\S)\to (Y,\Phi)$ to another measurable space, we shall write $\phi_\#\mu$ for the resulting pushforward probability measure on $(Y,\Phi)$. Suppose now that $\Gamma$ is a discrete semigroup, and consider the class of all probability-preserving actions $T:\Gamma\curvearrowright (X,\S,\mu)$ on standard Borel probability spaces; these will be referred to as \textbf{$\Gamma$-systems}, and will often be denoted by either the quadruple $(X,\S,\mu,T)$ or simply by a boldface letter such as $\mathbf{X}$. If $\L\leq \Gamma$ is a subgroup we denote by $T^{\ \upharpoonright\L}$ the $\L$-action on $(X,\S,\mu)$ defined by $(T^{\ \upharpoonright\L})^\gamma := T^\gamma$ for $\gamma \in \L$, and refer to this as the \textbf{$\L$-subaction}, and if $\mathbf{X} = (X,\S,\mu,T)$ is a $\Gamma$-system then we write similarly $\mathbf{X}^{\ \upharpoonright\L}$ for the system $(X,\S,\mu,T^{\ \upharpoonright\L})$ and refer to it as a \textbf{subaction system}. A $\Gamma$-system $(X,\S,\mu,T)$ is \textbf{trivial} if $\mu$ is supported on a single point. Since any two such systems are measure-theoretically isomorphic simply by identifying these single points, we will usually refer to `the' trivial system. We will make repeated use of a handful of standard constructions and properties of $\Gamma$-systems. \subsection*{Factors and joinings} A \textbf{factor} of the $\Gamma$-system $(X,\S,\mu,T)$ is a globally $T$-invariant $\sigma$-subalgebra $\Phi \leq \S$. Relatedly, a \textbf{factor map} from one $\Gamma$-system $T:\Gamma\curvearrowright (X,\S,\mu)$ to another $S:\Gamma\curvearrowright (Y,\Phi,\nu)$ is a measurable map $\pi:X \to Y$ such that $\nu = \pi_\#\mu$ and $S^\gamma\circ\pi = \pi\circ T^\gamma$ for all $\gamma\in\Gamma$. 
This situation is often signified by writing $\pi:(X,\S,\mu,T) \to (Y,\Phi,\nu,S)$. Factor maps comprise the natural morphisms between systems for a fixed acting semigroup. To any factor map $\pi$ is associated the factor $\{\pi^{-1}(A):\ A\in\Phi\}\leq \S$. Two factor maps $\pi$ and $\psi$ are \textbf{equivalent} if the $\sigma$-subalgebras of $\S$ that they generate are equal up to $\mu$-negligible sets, in which case we shall write $\pi \simeq \psi$; this clearly defines an equivalence relation among factor maps. It is a standard fact that in the category of standard Borel spaces equivalence classes of factor maps are in bijective correspondence with equivalence classes of globally invariant $\sigma$-subalgebras under the relation of equality modulo negligible sets. A treatment of these classical issues may be found, for example, in Chapter 2 of Glasner~\cite{Gla03}. Given a globally invariant $\sigma$-subalgebra in $\mathbf{X}$, a choice of factor map $\pi:\mathbf{X}\to \mathbf{Y}$ generating that $\sigma$-subalgebra will be referred to as \textbf{coordinatizing} the $\sigma$-subalgebra. More generally, the factor map $\pi:(X,\S,\mu,T) \to (Y,\Phi,\nu,S)$ \textbf{contains} $\psi:(X,\S,\mu,T)\to (Z,\Psi,\theta,R)$ if $\pi^{-1}(\Phi) \supseteq \psi^{-1}(\Psi)$ up to $\mu$-negligible sets. Another standard feature of standard Borel spaces is that this inclusion is equivalent to the existence of a \textbf{factorizing} factor map $\phi:(Y,\Phi,\nu,S) \to (Z,\Psi,\theta,R)$ with $\psi = \phi\circ\pi$ $\mu$-a.s., and that a measurable analog of the Schroeder-Bernstein Theorem holds: $\pi \simeq \psi$ if and only if a single such $\phi$ may be chosen that is invertible away from some negligible subsets of the domain and target. If $\pi$ contains $\psi$ we shall write $\pi \succsim \psi$ or $\psi \precsim \pi$. If $\pi:\mathbf{X}\to\mathbf{Y}$ and $\psi:\mathbf{X}\to \mathbf{Z}$ are any two factor maps as above (not necessarily ordered), then the $\sigma$-subalgebra $\pi^{-1}(\Phi)\vee \psi^{-1}(\Psi)$ is another factor of $\mathbf{X}$. In general we will write $\pi\vee \psi$ for an arbitrary choice of factor map coordinatizing this factor, and similarly for larger collections of factor maps. Dual to the idea of a factor is that of an extension: if $\mathbf{X}$ is a $\Gamma$-system, then an \textbf{extension} of $\mathbf{X}$ is another $\Gamma$-system $\t{\mathbf{X}}$ together with a factor map $\pi:\t{\mathbf{X}}\to \mathbf{X}$. More general than the notion of a factor is that of a joining: if $\mathbf{X}_1$, $\mathbf{X}_2$, \ldots, $\mathbf{X}_k$ are $\Gamma$-systems then a \textbf{joining} of them is another $\Gamma$-system $\mathbf{X}$ together with factor maps $\pi_i:\mathbf{X}\to \mathbf{X}_i$ such that these $\pi_i$ together generate the whole $\sigma$-algebra of $\mathbf{X}$. Since their introduction by Furstenberg in~\cite{Fur67}, joinings have become one of the most important concepts in the ergodic theorist's vocabulary, as is well-demonstrated in Glasner's book~\cite{Gla03}. \subsection*{Partially invariant factors} Given a $\Gamma$-system $\mathbf{X} = (X,\S,\mu,T)$, the $\sigma$-algebra $\S^T$ of sets $A\in\S$ for which $\mu(A\triangle T^\gamma(A))=0$ for all $\gamma \in \Gamma$ is $T$-invariant, so defines a factor of $\mathbf{X}$. More generally, if $\Gamma$ is a group and $\L \unlhd\Gamma$ then we can consider the $\sigma$-algebra $\S^{T\upharpoonright\L}$ generated by all $T^{\ \upharpoonright\L}$-invariant sets: we refer to this as the \textbf{$\L$-partially invariant factor}. 
Note that in this case the condition that $\L$ be normal is needed for this to be a globally $T$-invariant factor. Similarly, if $S\subseteq \Gamma$ and $\L$ is the normal subgroup generated by $S$, we will sometimes write $\S^{T\upharpoonright S}$ for $\S^{T\upharpoonright\L}$. If moreover $\Gamma$ is Abelian and $T_1$ and $T_2$ are two commuting actions of $\Gamma$ on $(X,\S,\mu)$, then we can define a third action $T_1T_2^{-1}$ by setting $(T_1T_2^{-1})^\gamma := T_1^\gamma T_2^{\gamma^{-1}}$. Given this we often write $\S^{T_1 = T_2}$ in place of $\S^{T_1T_2^{-1}}$, and similarly for a larger number of actions of the same group. \subsection*{Relative independence} If $\S_i \geq \Xi_i$ are factors of $(X,\S,\mu,T)$ for each $i\leq d$, then the tuple of factors $(\S_1,\S_2,\ldots,\S_d)$ is \textbf{relatively independent} over the tuple $(\Xi_1,\Xi_2,\ldots,\Xi_d)$ if whenever $f_i\in L^\infty(\mu)$ is $\S_i$-measurable for each $i\leq d$ we have \[\int_X\prod_{i\leq d}f_i\,\d\mu = \int_X\prod_{i\leq d}\mathsf{E}_\mu(f_i\,|\,\Xi_i)\,\d\mu.\] The information that various joint distributions are relatively independent will repeatedly prove pivotal in the following. Sometimes for brevity we will write that `$\S_1$ is relatively independent from $\S_2$, $\S_3$, \ldots, $\S_d$ over $\Xi_1$' if $(\S_1,\S_2,\ldots,\S_d)$ is relatively independent over $(\Xi_1,\S_2,\ldots,\S_d)$. In case $\Gamma$ is a group (not just a semigroup, so each $T^\gamma$ is invertible) we can construct examples of this situation as follows. Suppose that $\mathbf{Y} = (Y,\Phi,\nu,S)$ is a $\Gamma$-system and \[\pi_i:\mathbf{X}_i = (X_i,\S_i,\mu_i,T_i)\to \mathbf{Y}\] are extensions of it for $i=1,2,\ldots,k$. Then the \textbf{relatively independent product} of the systems $\mathbf{X}_i$ over their factor maps $\pi_i$ is the system \[\prod_{\{\pi_1 = \ldots = \pi_k\}}\mathbf{X}_i = \Big(\prod_{\{\pi_1 = \ldots = \pi_k\}}X_i,\bigotimes_{\{\pi_1 = \ldots = \pi_k\}}\S_i,\bigotimes_{\{\pi_1 = \ldots = \pi_k\}}\mu_i,T_1\times \cdots\times T_k\Big)\] where \[\prod_{\{\pi_1 = \ldots = \pi_k\}}X_i := \{(x_1,\ldots,x_k)\in X_1\times \cdots\times X_k:\ \pi_1(x_1) = \ldots = \pi_k(x_k)\},\] $\bigotimes_{\{\pi_1 = \ldots = \pi_k\}}\S_i$ is the restriction of $\S_1\otimes\cdots\otimes\S_k$ to this subset of $X_1\times\cdots\times X_k$, and \[\bigotimes_{\{\pi_1 = \ldots = \pi_k\}}\mu_i := \int_Y \bigotimes_{i=1}^k\mu_{i,y}\,\nu(\d y)\] with $y\mapsto \mu_{i,y}$ an arbitrary choice of disintegration of $\mu_i$ over $\pi_i$. A quick check shows that the factors generated by the coordinate projections $\phi_j:\prod_{\{\pi_1 = \ldots = \pi_k\}}\mathbf{X}_i\to \mathbf{X}_j$ are relatively independent over the common further factor map \[\pi_1\circ\phi_1\simeq \ldots \simeq \pi_k\circ\phi_k:\prod_{\{\pi_1 = \ldots = \pi_k\}}\mathbf{X}_i\to \mathbf{Y}.\] In case $k=2$ we write the relatively independent product more simply as $\mathbf{X}_1\times_{\{\pi_1= \pi_2\}}\mathbf{X}_2$, and in addition if $\mathbf{X}_1 = \mathbf{X}_2 = \mathbf{X}$ and $\pi_1 = \pi_2 = \pi$ then we will abbreviate this further to $\mathbf{X}\times_\pi\mathbf{X}$, and similarly for the individual spaces and measures. The need for the invertibility of $T$ in this construction arises in checking that $\bigotimes_{\{\pi_1 = \ldots = \pi_k\}}\mu_i$ is invariant under the product action. 
For example, if $k=2$ then the invariance of $\mu_i$ under $T_i$ implies that for each $\gamma \in \Gamma$ the disintegrations $\mu_{i,y}$ satisfy \[\int_Y (T_i^\gamma)_\#\mu_{i,y}\,\nu(\d y) = \int_Y \mu_{i,y}\,\nu(\d y).\] However, to argue from here to the invariance of $\mu_1\otimes_{\{\pi_1 = \pi_2\}}\mu_2$ we must know in addition that for $\nu$-almost every $y \in Y$ there is a unique point $S^{\gamma^{-1}}(y) \in Y$ such that $(T_i^\gamma)_\#\mu_{i,S^{\gamma^{-1}}(y)}$ is supported on the fibre over $y$. Given this and the essential uniqueness of disintegrations, the above equation implies that $(T_i^\gamma)_\#\mu_{i,y} = \mu_{i,S^\gamma(y)}$ for $\nu$-almost every $y$, from which it also follows that \[(T_1\times T_2)^\gamma_\#(\mu_{1,y}\otimes \mu_{2,y}) = (\mu_{1,S^\gamma(y)}\otimes \mu_{2,S^\gamma(y)})\] $\nu$-almost surely, so that integrating again with respect to $y$ gives the desired invariance of $\mu_1\otimes_{\{\pi_1 = \pi_2\}}\mu_2$. However, this latter argument is valid only if we can obtain the above equality pointwise in $y$, and this can fail if $T_i^\gamma$ is not invertible. \subsection*{Inverse limits} An \textbf{inverse sequence} of $\Gamma$-systems is a family of $\Gamma$-systems $(X_m,\S_m,\mu_m,T_m)$ together with factor maps \[\psi^m_k:(X_m,\S_m,\mu_m,T_m)\to (X_k,\S_k,\mu_k,T_k)\quad\quad\hbox{for all}\ m\geq k\] satisfying the compatibility property that $\psi^k_\ell\circ \psi^m_k = \psi^m_\ell$ whenever $m\geq k\geq \ell$. From such a family one can construct an \textbf{inverse limit} \[\lim_{m\leftarrow}\,\big((X_m,\S_m,\mu_m,T_m)_m,(\psi^m_k)_{m\geq k}\big) =: (X,\S,\mu,T)\] together with a sequence of factor maps \[\psi_m:(X,\S,\mu,T)\to (X_m,\S_m,\mu_m,T_m)\] such that $\psi^m_k\circ \psi_m = \psi_k$ whenever $m\geq k$, and such that the lifted factors $\psi_m^{-1}(\S_m)$ together generate the whole of $\S$. Moreover, subject to these stipulations this inverse limit is unique up to isomorphisms that intertwine all the factor maps $\psi_m$. This construction is described, for example, in Section 6.3 of Glasner~\cite{Gla03}. \section{Idempotent classes}\label{sec:idem} In much of the following we will be concerned with properties of one system that are defined relative to some other class of systems. \begin{dfn}[Idempotent class] A subclass $\mathsf{C}$ of $\Gamma$-systems is \textbf{idempotent} if it contains the trivial system and is closed under measure-theoretic isomorphism, inverse limits and joinings. \end{dfn} Note that our `classes' need not be sets in the sense of ZFC. In all subsequent constructions involving these classes it will be clear that we need only some set-indexed family of members, and so we will not generally pass comment on this set-theoretic distinction. Alternatively, we could circumvent this issue altogether by working only with probability-preserving systems modelled by some Borel transformations and invariant probability measure on, say, the Cantor space, since any standard Borel system admits such a model up to measure-theoretic isomorphism (see, for instance, Theorem 2.15 in~\cite{Gla03}). \noindent\textbf{Examples}\quad Suppose that $\Gamma$ is a group and that $\L \unlhd \Gamma$. Then the class of all $\Gamma$-systems for which the subaction of $\L$ is trivial is easily seen to be idempotent. This important example will usually be denoted by $\mathsf{Z}_0^{\L}$ in the following. 
More generally, for $\L$ as above and any $n\in \mathbb{N}$ we let $\mathsf{Z}_n^\L$ denote the class of systems on which the $\L$-subaction is a distal tower of height at most $n$, in the sense of direct integrals of compact homogeneous space data introduced in~\cite{Aus--ergdirint} to allow for the case of non-ergodic systems. Standard results on the possible joinings and inverse limits of isometric extensions show that this class is idempotent (see~\cite{Aus--ergdirint,Aus--lindeppleasant1}). Those arguments also allow us to identify certain natural idempotent subclasses of $\mathsf{Z}_n^\L$, such as the class $\mathsf{Z}_{\rm{Ab},n}^\L$ of those systems with $\L$-subaction a distal tower of height at most $n$ and in which each isometric extension is Abelian. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline \begin{lem} If $\mathsf{C}$ is an idempotent class of $\Gamma$-systems then any $\Gamma$-system $\mathbf{X}$ has an essentially unique maximal factor in the class $\mathsf{C}$. \end{lem} \noindent\textbf{Proof}\quad It is clear that under the above assumption the family of factors \[\big\{\Xi \leq \S:\ \Xi\ \hbox{is generated by a factor map to a system in $\mathsf{C}$}\big\}\] is nonempty (it contains $\{\emptyset,X\}$, which corresponds to the trivial system), upwards directed (because $\mathsf{C}$ is closed under joinings) and closed under taking $\sigma$-algebra completions of increasing unions (because $\mathsf{C}$ is closed under inverse limits). There is therefore a maximal $\sigma$-subalgebra in this family. \qed \begin{dfn} If $\mathsf{C}$ is an idempotent class then $\mathbf{X}$ is a \textbf{$\mathsf{C}$-system} if $\mathbf{X} \in \mathsf{C}$, and for any $\mathbf{X}$ we write $\zeta_\mathsf{C}^\mathbf{X}:\mathbf{X}\to\mathsf{C}\mathbf{X}$ for an arbitrarily-chosen coordinatization of its \textbf{maximal $\mathsf{C}$-factor} given by the above lemma. It is clear that if $\pi:\mathbf{X}\to\mathbf{Y}$ then $\zeta_\mathsf{C}^\mathbf{X} \succsim \zeta_\mathsf{C}^\mathbf{Y}\circ\pi$, and so there is an essentially unique factorizing map, which we denote by $\mathsf{C}\pi$, that makes the following diagram commute: \begin{center} $\phantom{i}$\xymatrix{ &\mathbf{X}\ar[dl]_{\zeta^\mathbf{X}_\mathsf{C}}\ar[dr]^{\pi}\\ \mathsf{C}\mathbf{X}\ar[dr]_{\mathsf{C}\pi} & & \mathbf{Y}\ar[dl]^{\zeta^\mathbf{Y}_\mathsf{C}}\\ &\mathsf{C}\mathbf{Y}.} \end{center} In addition, we shall abbreviate $\mathbf{X}\times_{\zeta_\mathsf{C}^\mathbf{X}} \mathbf{X}$ to $\mathbf{X}\times_\mathsf{C} \mathbf{X}$, and similarly for the individual spaces and measures defining this relatively independent product. \end{dfn} The above lemma and definition explain the choice of the term `idempotent', which is motivated by a more categorial viewpoint of such subclasses: if we identify such a class $\mathsf{C}$ with a full subcategory of the category of $\Gamma$-systems with factor maps as morphisms, then the assignments $\mathbf{X}\mapsto \mathsf{C}\mathbf{X}$, $\pi \mapsto \mathsf{C}\pi$ define an autofunctor of this category which is idempotent. The name we give for our next definition is also motivated by this relationship with functors. 
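Before giving that definition, it may help to record the most basic instance of the maximal factors constructed above; the following observation is standard, and is included here only for orientation.

\noindent\textbf{Example}\quad Suppose that $\Gamma = \mathbb{Z}$ and that $\mathsf{C} := \mathsf{Z}_0^{\mathbb{Z}}$ is the class of systems on which the whole action is trivial. A factor map to a member of $\mathsf{C}$ generates a $\sigma$-subalgebra of $T$-invariant sets, and conversely any coordinatization of the invariant factor $\S^T$ from Section~\ref{sec:basics} is a factor map to a member of $\mathsf{C}$, so $\zeta^{\mathbf{X}}_{\mathsf{C}}$ is simply a coordinatization of $\S^T$. In this simplest case the associated conditional expectation is described explicitly by the classical mean ergodic theorem: for any $f \in L^2(\mu)$ one has
\[\mathsf{E}_\mu(f\,|\,\zeta^{\mathbf{X}}_{\mathsf{C}}) = \mathsf{E}_\mu(f\,|\,\S^T) = \lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^N f\circ T^n\quad\hbox{in}\ L^2(\mu).\]
For the larger idempotent classes that appear in the later chapters no comparably explicit description of $\mathsf{E}_\mu(\,\cdot\,|\,\zeta^{\mathbf{X}}_{\mathsf{C}})$ will be available. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline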
\begin{dfn}[Order continuity] A class of $\Gamma$-systems $\mathsf{C}$ is \textbf{order continuous} if whenever $(\mathbf{X}_m)_{m\geq 0}$, $(\psi^m_k)_{m\geq k\geq 0}$ is an inverse sequence of $\Gamma$-systems with inverse limit $\mathbf{X}$, $(\psi_m)_{m\geq 0}$, we have \[\zeta_\mathsf{C}^\mathbf{X} = \bigvee_{m\geq 0}\zeta_\mathsf{C}^{\mathbf{X}_m}\circ \psi_m:\] that is, the maximal $\mathsf{C}$-factor of the inverse limit is simply given by the (increasing) join of the maximal $\mathsf{C}$-factors of the contributing systems. \end{dfn} \noindent\textbf{Example}\quad Although all the idempotent classes that will matter to us later can be shown to be order continuous, it may be instructive to exhibit one that is not. In case $\Gamma$ is an Abelian group, let us say that a system $\mathbf{X}$ has a \textbf{finite-dimensional Kronecker factor} if its Kronecker factor $\zeta_1^\mathbf{X}:X\to Z_1^\mathbf{X}$ can be coordinatized as a direct integral (see Section 3 of~\cite{Aus--ergdirint}) of rotations on some measurably-varying compact Abelian groups all of which can be isomorphically embedded into a fibre repository $\mathbb{T}^D$ for some fixed $D \in \mathbb{N}$ (this includes the possibility that the Kronecker factor is finite or trivial). It is now easy to check that the class of $\mathbb{Z}$-systems comprising all those that are either themselves finite-dimensional Kronecker systems, or have a Kronecker factor that is \emph{not} finite-dimensional (so we exclude just those systems that have a finite-dimensional Kronecker factor but properly contain it), is idempotent but not order continuous, since any infinite-dimensional separable group rotation can be identified with an inverse limit of finite-dimensional group rotations. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline \begin{dfn}[Hereditariness] An idempotent class $\mathsf{C}$ is \textbf{hereditary} if it is also closed under taking factors. \end{dfn} \begin{dfn}[Join] If $\mathsf{C}_1$, $\mathsf{C}_2$ are idempotent classes, then the class $\mathsf{C}_1\vee \mathsf{C}_2$ of all joinings of members of $\mathsf{C}_1$ and $\mathsf{C}_2$ is clearly also idempotent. We call $\mathsf{C}_1\vee \mathsf{C}_2$ the \textbf{join} of $\mathsf{C}_1$ and $\mathsf{C}_2$. \end{dfn} \begin{lem}[Join preserves order continuity] If $\mathsf{C}_1$ and $\mathsf{C}_2$ are both order continuous then so is $\mathsf{C}_1\vee \mathsf{C}_2$. \end{lem} \noindent\textbf{Proof}\quad Let $(\mathbf{X}_m)_{m\geq 0}$, $(\psi^m_k)_{m\geq k \geq 0}$ be an inverse sequence with inverse limit $\mathbf{X}$, $(\psi_m)_{m\geq 0}$. Then $\zeta^\mathbf{X}_{\mathsf{C}_1\vee \mathsf{C}_2}$ is the maximal factor of $\mathbf{X}$ that is a joining of a $\mathsf{C}_1$-factor and a $\mathsf{C}_2$-factor (so, in particular, it must be generated by its own $\mathsf{C}_1$- and $\mathsf{C}_2$-factors), and hence it is equivalent to $\zeta^\mathbf{X}_{\mathsf{C}_1} \vee \zeta^\mathbf{X}_{\mathsf{C}_2}$. Therefore any $f \in L^\infty(\mu)$ that is $\zeta^\mathbf{X}_{\mathsf{C}_1} \vee \zeta^\mathbf{X}_{\mathsf{C}_2}$-measurable can be approximated in $L^2(\mu)$ by some function of the finite-sum form $\sum_p g_{p,1}\cdot g_{p,2}$ with each $g_{p,i} \in L^\infty(\mu)$ being $\mathsf{C}_i$-measurable, and now since each $\mathsf{C}_i$ is order continuous we may further approximate each $g_{p,i}$ by some $h_{p,i} \circ \psi_m$ for a large integer $m$ and some $\mathsf{C}_i$-measurable $h_{p,i} \in L^\infty(\mu_m)$. Combining these approximations completes the proof. 
\qed \noindent\textbf{Examples}\quad Of course, we can form the joins of any of our earlier examples of idempotent classes: for example, given a group $\Gamma$ and subgroups $\L_1,\L_2,\ldots,\L_n \unlhd \Gamma$ we can form $\mathsf{Z}_0^{\L_1}\vee \mathsf{Z}_0^{\L_2}\vee\cdots\vee \mathsf{Z}_0^{\L_n}$. This particular example and several others like it will appear frequently throughout the rest of this work. Clearly each class $\mathsf{Z}_0^\L$ is hereditary, but in general joins of several such classes are not; we will see this explicitly in the first example of the next section. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline The following terminology will also prove useful. \begin{dfn}[Joining to an idempotent class; adjoining] If $\mathbf{X}$ is a system and $\mathsf{C}$ is an idempotent class then a \textbf{joining of $\mathbf{X}$ to $\mathsf{C}$} or a \textbf{$\mathsf{C}$-adjoining of $\mathbf{X}$} is a joining of $\mathbf{X}$ and $\mathbf{Y}$ for some $\mathbf{Y} \in \mathsf{C}$. \end{dfn} \section{Sated systems}\label{sec:sateds} The remainder of this dissertation concerns the consequences of one basic idea: that by extending a probability-preserving system, it is sometimes possible to impose on it some additional structure that makes its behaviour more transparent. For our later applications, a notion of `additional structure' that is both useful and obtainable is best summarized by demanding that the system does not admit a nontrivial joining to systems drawn from various other special classes. We will soon show that all systems admit extensions for which some version of this is true. This idea, although very abstract and very simple, will repeatedly prove surprisingly powerful. \begin{dfn}[Sated system]\label{dfn:sated} Given an idempotent class $\mathsf{C}$, a system $\mathbf{X}$ is \textbf{$\mathsf{C}$-sated} if whenever $\pi:\t{\mathbf{X}} = (\t{X},\t{\S},\t{\mu},\t{T}) \to \mathbf{X}$ is an extension, the factor maps $\pi$ and $\zeta^{\t{\mathbf{X}}}_\mathsf{C}$ on $\t{X}$ are relatively independent over $\zeta^\mathbf{X}_\mathsf{C}\circ\pi = \mathsf{C}\pi\circ\zeta^{\t{\mathbf{X}}}_\mathsf{C}$ under $\t{\mu}$. Phrased more pictorially, the two systems in the middle row of the commutative diagram \begin{center} $\phantom{i}$\xymatrix{ &\t{\mathbf{X}}\ar[dl]_{\zeta^{\t{\mathbf{X}}}_\mathsf{C}}\ar[dr]^{\pi}\\ \mathsf{C}\t{\mathbf{X}}\ar[dr]_{\mathsf{C}\pi} & & \mathbf{X}\ar[dl]^{\zeta^\mathbf{X}_\mathsf{C}}\\ &\mathsf{C}\mathbf{X}} \end{center} are relatively independent over their common factor copy of the system $\mathsf{C}\mathbf{X}$. An inverse sequence is \textbf{$\mathsf{C}$-sated} if it has a cofinal subsequence all of whose systems are $\mathsf{C}$-sated. \end{dfn} \noindent\textbf{Remark}\quad This definition has an important precedent in Furstenberg and Weiss' notion of a `pair homomorphism' between extensions elaborated in Section 8 of~\cite{FurWei96}. 
\nolinebreak\hspace{\stretch{1}}$\lhd$\newline \noindent\textbf{Example}\quad If $\mathbf{X} = (U,\rm{Borel},\rm{Haar},R_\phi)$ with $U$ a compact metrizable Abelian group, $\phi:\mathbb{Z}^2\to U$ a dense homomorphism and $R_\phi$ the corresponding action of $\mathbb{Z}^2$ by rotations (so $R^\bf{n}(z) := z + \phi(\bf{n})$), then $\mathsf{Z}_0^{\bf{e}_i}\mathbf{X}$ is coordinatized by the quotient homomorphism \[U\to U/\ol{\phi(\mathbb{Z}\bf{e}_i)}, \] and so $\mathbf{X}$ is a member of $\mathsf{Z}_0^{\bf{e}_1}\vee \mathsf{Z}_0^{\bf{e}_2}$ if and only if these quotients together generate the whole of $U$, hence if and only if $\ol{\phi(\mathbb{Z}\bf{e}_1)}\cap\ol{\phi(\mathbb{Z}\bf{e}_2)}=\{0\}$. On the other hand, any ergodic action $\mathbf{X}$ of $\mathbb{Z}^2$ by compact group rotations can be extended to a member of $\mathsf{Z}_0^{\bf{e}_1}\vee \mathsf{Z}_0^{\bf{e}_2}$. To see this we first note that ergodicity is equivalent to the denseness of $\phi(\mathbb{Z}^2)$ in $U$, and so in particular that $\ol{\phi(\mathbb{Z}\bf{e}_1)} + \ol{\phi(\mathbb{Z}\bf{e}_2)} = U$. It follows that the `larger' group rotation system \[\t{\mathbf{X}} = (\t{U},\rm{Borel},\rm{Haar},R_{\t{\phi}}),\] where $\t{U} := \ol{\phi(\mathbb{Z}\bf{e}_1)} \oplus \ol{\phi(\mathbb{Z}\bf{e}_2)}$ and the homomorphism $\t{\phi}:\mathbb{Z}^2\to U^2$ is defined by \[\t{\phi}(\bf{e}_1) := (\phi(\bf{e}_1),0)\quad\hbox{and}\quad \t{\phi}(\bf{e}_2) := (0,\phi(\bf{e}_2)),\] is an extension of $\mathbf{X}$ through the factor map \[\t{U}\to U:(x,y)\mapsto x+y.\] Now $\t{\mathbf{X}}$ clearly satisfies the above condition for membership of $\mathsf{Z}_0^{\bf{e}_1}\vee \mathsf{Z}_0^{\bf{e}_2}$, since the quotients by $\ol{\t{\phi}(\mathbb{Z}\bf{e}_i)}$ for $i=1,2$ are respectively the second and first coordinate projections. It follows that every such $\mathbf{X}$ admits a $(\mathsf{Z}_0^{\bf{e}_1}\vee \mathsf{Z}_0^{\bf{e}_2})$-adjoining that generates the whole of $\mathbf{X}$, and which is therefore not relatively independent over any proper factor of $\mathbf{X}$, and hence that $\mathbf{X}$ itself is $(\mathsf{Z}_0^{\bf{e}_1}\vee \mathsf{Z}_0^{\bf{e}_2})$-sated if and only if it is already in the class $\mathsf{Z}_0^{\bf{e}_1}\vee \mathsf{Z}_0^{\bf{e}_2}$. This reasoning also shows that the class $\mathsf{Z}_0^{\bf{e}_1}\vee \mathsf{Z}_0^{\bf{e}_2}$ is not hereditary. A little more generally, if $\mathbf{X}$ is a totally weakly mixing extension of an ergodic action $\mathbf{Y}$ of $\mathbb{Z}^2$ by compact group rotations, then routine arguments show that $\mathbf{X}$ is $(\mathsf{Z}_0^{\bf{e}_1}\vee \mathsf{Z}_0^{\bf{e}_2})$-sated if and only if this is true of $\mathbf{Y}$ (since a totally weakly mixing extension is relatively disjoint from any $\mathsf{Z}_0^{\bf{e}_1}$-system, and given this the Furstenberg-Zimmer Inverse Theorem implies that the $\bf{e}_2$-invariant factor of any $\mathsf{Z}_0^{\bf{e}_1}$-adjoining of $\mathbf{X}$ is also relatively independent from $\mathbf{X}$ over its factor map to $\mathbf{Y}$; see, for instance, Chapters 9 and 10 of~\cite{Gla03}). Therefore such an $\mathbf{X}$ is $(\mathsf{Z}_0^{\bf{e}_1}\vee \mathsf{Z}_0^{\bf{e}_2})$-sated if and only if $\mathbf{Y} \in \mathsf{Z}_0^{\bf{e}_1}\vee \mathsf{Z}_0^{\bf{e}_2}$. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline The crucial technical fact that turns satedness into a useful tool is the ability to construct sated extensions of arbitrary systems. 
This can be seen as a natural abstraction from Propositions 4.6 of~\cite{Aus--nonconv} and 4.3 of~\cite{Aus--newmultiSzem}, and appears in its full strength as Theorem 3.11 in~\cite{Aus--lindeppleasant1}. \begin{thm}[Idempotent classes admit multiply sated extensions]\label{thm:sateds-exist} If $(\mathsf{C}_i)_{i\in I}$ is a countable family of idempotent classes then any system $\mathbf{X}_0$ admits an extension $\pi:\mathbf{X} \to \mathbf{X}_0$ such that \begin{itemize} \item $\mathbf{X}$ is $\mathsf{C}_i$-sated for every $i\in I$; \item the factors $\pi$ and $\bigvee_{i\in I}\zeta^{\mathbf{X}}_{\mathsf{C}_i}$ generate the whole of $\mathbf{X}$. \end{itemize} \end{thm} We shall prove this result after a preliminary lemma. \begin{lem}\label{lem:inverse-lim-sated} If $\mathsf{C}$ is an idempotent class then the inverse limit of any $\mathsf{C}$-sated inverse sequence is $\mathsf{C}$-sated. \end{lem} \noindent\textbf{Proof}\quad By passing to a subsequence if necessary, it suffices to suppose that $(\mathbf{X}_m)_{m\geq 0}$, $(\psi^m_k)_{m\geq k\geq 0}$ is an inverse sequence of $\mathsf{C}$-sated systems with inverse limit $\mathbf{X}_\infty$, $(\psi_m)_{m\geq 1}$, and let $\pi:\t{\mathbf{X}} \to \mathbf{X}_\infty$ be any further extension and $f \in L^\infty(\mu_\infty)$. We will commit the abuse of identifying such a function with its lift to any given extension when the extension in question is obvious. With this in mind, we need to show that \[\mathsf{E}(f\,|\,\zeta^{\t{\mathbf{X}}}_\mathsf{C}) = \mathsf{E}(f\,|\,\zeta^{\mathbf{X}_\infty}_\mathsf{C}).\] However, by the $\mathsf{C}$-satedness of each $\mathbf{X}_m$, we certainly have \[\mathsf{E}(\mathsf{E}(f\,|\,\psi_m)\,|\,\zeta^{\t{\mathbf{X}}}_\mathsf{C}) = \mathsf{E}(f\,|\,\zeta^{\mathbf{X}_m}_\mathsf{C}),\] and now as $m\to\infty$ this equation converges in $L^2(\t{\mu})$ to \[\mathsf{E}(f\,|\,\zeta^{\t{\mathbf{X}}}_\mathsf{C}) = \mathsf{E}\Big(f\,\Big|\,\bigvee_{m\geq 1}\,(\zeta^{\mathbf{X}_m}_\mathsf{C}\circ\psi_m)\Big).\] By monotonicity we have \[\zeta^{\t{\mathbf{X}}}_\mathsf{C}\succsim\zeta^{\mathbf{X}_\infty}_\mathsf{C} \succsim \bigvee_{m\geq 1}\,(\zeta^{\mathbf{X}_m}_\mathsf{C}\circ \psi_m),\] and so by sandwiching the desired equality of conditional expectations must also hold. \qed \noindent\textbf{Proof of Theorem~\ref{thm:sateds-exist}}\quad We first prove this for $I$ a singleton, and then in the general case. \textbf{Step 1}\quad Suppose that $I = \{i\}$ and $\mathsf{C}_i = \mathsf{C}$. This case will follow from a simple `energy increment' argument. Let $(f_r)_{r\geq 1}$ be a countable subset of the $L^\infty$-unit ball $\{f\in L^\infty(\mu_0):\ \|f\|_\infty\leq 1\}$ that is dense in this ball for the $L^2$-norm, and let $(r_m)_{m\geq 0}$ be a sequence of positive integers in which every positive integer appears infinitely often. We will construct an inverse sequence $(\mathbf{X}_m)_{m\geq 0}$, $(\psi^m_k)_{m\geq k \geq 0}$ starting from $\mathbf{X}_0$ such that each $\mathbf{X}_{m+1}$ is a $\mathsf{C}$-adjoining of $\mathbf{X}_m$. Suppose that for some $m_1 \geq 0$ we have already obtained $(\mathbf{X}_m)_{m=0}^{m_1}$, $(\psi^m_k)_{m_1 \geq m\geq k\geq 0}$ such that $\mathrm{id}_{X_{m_1}} \simeq \zeta^{\mathbf{X}_{m_1}}_\mathsf{C}\vee \psi^{m_1}_{0}$. 
We consider two separate cases: \begin{itemize} \item If there is some further extension $\pi:\t{\mathbf{X}}\to \mathbf{X}_{m_1}$ such that \[\|\mathsf{E}_{\t{\mu}}(f_{r_{m_1}}\circ \psi^{m_1}_{0}\circ\pi\,|\,\zeta_\mathsf{C}^{\t{\mathbf{X}}})\|_2^2 > \|\mathsf{E}_{\mu_{m_1}}(f_{r_{m_1}}\circ \psi^{m_1}_0\,|\,\zeta_\mathsf{C}^{\mathbf{X}_{m_1}})\|_2^2 + 2^{-m_1},\] then choose a particular $\pi:\t{\mathbf{X}}\to \mathbf{X}_{m_1}$ such that the increase \[\|\mathsf{E}_{\t{\mu}}(f_{r_{m_1}}\circ \psi^{m_1}_0\circ\pi\,|\,\zeta_\mathsf{C}^{\t{\mathbf{X}}})\|_2^2 - \|\mathsf{E}_{\mu_{m_1}}(f_{r_{m_1}}\circ \psi^{m_1}_0\,|\,\zeta_\mathsf{C}^{\mathbf{X}_{m_1}})\|_2^2\] is at least half its supremal possible value over all extensions. By restricting to the possibly smaller subextension of $\t{\mathbf{X}}\to\mathbf{X}_{m_1}$ generated by $\pi$ and $\zeta_\mathsf{C}^{\t{\mathbf{X}}}$ we may assume that $\t{\mathbf{X}}$ is itself a $\mathsf{C}$-adjoining of $\mathbf{X}_{m_1}$ and hence of $\mathbf{X}_0$, and now we let $\mathbf{X}_{m_1+1} := \t{\mathbf{X}}$ and $\psi^{m_1 + 1}_{m_1} := \pi$ (the other connecting factor maps being determined by this one). \item If, on the other hand, for every further extension $\pi:\t{\mathbf{X}}\to \mathbf{X}_{m_1}$ we have \[\|\mathsf{E}_{\t{\mu}}(f_{r_{m_1}}\circ \psi^{m_1}_0\circ\pi\,|\,\zeta_\mathsf{C}^{\t{\mathbf{X}}})\|_2^2 \leq \|\mathsf{E}_{\mu_{m_1}}(f_{r_{m_1}}\circ \psi^{m_1}_0\,|\,\zeta_\mathsf{C}^{\mathbf{X}_{m_1}})\|_2^2 + 2^{-m_1}\] then we simply set $\mathbf{X}_{m_1+1} := \mathbf{X}_{m_1}$ and $\psi^{m_1+1}_{m_1} := \mathrm{id}_{X_{m_1}}$. \end{itemize} Finally, let $\mathbf{X}_\infty$, $(\psi_m)_{m \geq 0}$ be the inverse limit of this sequence. We have \begin{multline*} \mathrm{id}_{X_\infty} \simeq \bigvee_{m\geq 0}\psi_m \simeq \bigvee_{m\geq 0}(\zeta_\mathsf{C}^{\mathbf{X}_m}\vee \psi^m_0)\circ\psi_m\\ \simeq \bigvee_{m\geq 0}(\zeta_\mathsf{C}^{\mathbf{X}_m}\circ\psi_m)\vee\bigvee_{m\geq 0}( \psi^m_0\circ\psi_m) \precsim \zeta_\mathsf{C}^{\mathbf{X}_\infty}\vee \psi_0, \end{multline*} so $\mathbf{X}_\infty$ is still a $\mathsf{C}$-adjoining of $\mathbf{X}_0$. To show that it is $\mathsf{C}$-sated, let $\pi:\t{\mathbf{X}}\to\mathbf{X}_\infty$ be any further extension, and suppose that $f \in L^\infty(\mu_\infty)$. We will complete the proof for Step 1 by showing that \[\mathsf{E}_{\t{\mu}}(f\circ\pi\,|\,\zeta_\mathsf{C}^{\t{\mathbf{X}}}) = \mathsf{E}_{\mu_\infty}(f\,|\,\zeta_\mathsf{C}^{\mathbf{X}_\infty})\circ\pi.\] Since $\mathbf{X}_\infty$ is a $\mathsf{C}$-adjoining of $\mathbf{X}_0$, this $f$ may be approximated arbitrarily well in $L^2(\mu_\infty)$ by finite sums of the form $\sum_p g_p\cdot h_p$ with $g_p$ being bounded and $\zeta_\mathsf{C}^{\mathbf{X}_\infty}$-measurable and $h_p$ being bounded and $\psi_0$-measurable, and now by density we may also restrict to using $h_p$ that are each a scalar multiple of some $f_r\circ\psi_0$, so by continuity and multilinearity it suffices to prove the above equality for just one such product $g\cdot (f_r\circ\psi_0)$. 
Since $g$ is $\zeta_\mathsf{C}^{\mathbf{X}_\infty}$-measurable, this requirement now reduces to \[\mathsf{E}_{\t{\mu}}(f_r\circ\psi_0\circ\pi\,|\,\zeta_\mathsf{C}^{\t{\mathbf{X}}}) = \mathsf{E}_{\mu_\infty}(f_r\circ\psi_0\,|\,\zeta_\mathsf{C}^{\mathbf{X}_\infty})\circ\pi.\] Since $\zeta_\mathsf{C}^{\t{\mathbf{X}}} \succsim \zeta_\mathsf{C}^{\mathbf{X}_\infty}\circ\pi$, this will follow if we only show that \[\|\mathsf{E}_{\t{\mu}}(f_r\circ\psi_0\circ\pi\,|\,\zeta_\mathsf{C}^{\t{\mathbf{X}}})\|_2^2 = \|\mathsf{E}_{\mu_\infty}(f_r\circ\psi_0\,|\,\zeta_\mathsf{C}^{\mathbf{X}_\infty})\|_2^2.\] Now, by the martingale convergence theorem we have \[\|\mathsf{E}_{\mu_m}(f_r\circ\psi^m_0\,|\,\zeta_\mathsf{C}^{\mathbf{X}_m})\|_2^2 \to \|\mathsf{E}_{\mu_\infty}(f_r\circ\psi_0\,|\,\zeta_\mathsf{C}^{\mathbf{X}_\infty})\|_2^2\] as $m\to\infty$. It follows that if \[\|\mathsf{E}_{\t{\mu}}(f_r\circ\psi_0\circ\pi\,|\,\zeta_\mathsf{C}^{\t{\mathbf{X}}})\|_2^2 > \|\mathsf{E}_{\mu_\infty}(f_r\circ\psi_0\,|\,\zeta_\mathsf{C}^{\mathbf{X}_\infty})\|_2^2\] then for some sufficiently large $m$ we would have $r_m = r$ (since each integer appears infinitely often as some $r_m$) but also \begin{eqnarray*} &&\|\mathsf{E}_{\mu_{m+1}}(f_r\circ\psi^{m+1}_0\,|\,\zeta_\mathsf{C}^{\mathbf{X}_{m+1}})\|_2^2 - \|\mathsf{E}_{\mu_m}(f_r\circ\psi^m_0\,|\,\zeta_\mathsf{C}^{\mathbf{X}_m})\|_2^2\\ && \leq \|\mathsf{E}_{\mu_\infty}(f_r\circ\psi_0\,|\,\zeta_\mathsf{C}^{\mathbf{X}_\infty})\|_2^2 - \|\mathsf{E}_{\mu_m}(f_r\circ\psi^m_0\,|\,\zeta_\mathsf{C}^{\mathbf{X}_m})\|_2^2\\ && < \frac{1}{2}\Big(\|\mathsf{E}_{\t{\mu}}(f_r\circ\psi_0\circ\pi\,|\,\zeta_\mathsf{C}^{\t{\mathbf{X}}})\|_2^2 - \|\mathsf{E}_{\mu_m}(f_r\circ\psi^m_0\,|\,\zeta_\mathsf{C}^{\mathbf{X}_m})\|_2^2\Big) \end{eqnarray*} and \[\|\mathsf{E}_{\t{\mu}}(f_r\circ\psi_0\circ\pi\,|\,\zeta_\mathsf{C}^{\t{\mathbf{X}}})\|_2^2\geq \|\mathsf{E}_{\mu_m}(f_r\circ\psi^m_0\,|\,\zeta_\mathsf{C}^{\mathbf{X}_m})\|_2^2 + 2^{-m},\] so contradicting our choice of $\mathbf{X}_{m+1}\to \mathbf{X}_m$ in the first alternative in our construction above. This contradiction shows that we must actually have the equality of $L^2$-norms required. \textbf{Step 2}\quad The general case follows easily from Step 1 and a second inverse limit construction: choose a sequence $(i_m)_{m\geq 1} \in I^\mathbb{N}$ in which each member of $I$ appears infinitely often, and form an inverse sequence $(\mathbf{X}_m)_{m\geq 0}$, $(\psi^m_k)_{m\geq k\geq 0}$ starting from $\mathbf{X}_0$ such that each $\mathbf{X}_m$ is $\mathsf{C}_{i_m}$-sated for $m\geq 1$. The inverse limit $\mathbf{X}$ is now sated for every $\mathsf{C}_i$, by Lemma~\ref{lem:inverse-lim-sated}. \qed \noindent\textbf{Remark}\quad Thierry de la Rue has shown me another proof of Theorem~\ref{thm:sateds-exist} in case $\Gamma$ is a group that follows very quickly from ideas contained in his paper~\cite{LesRitdelaRue03} with Lesigne and Rittaud, and which has now received a nice separate writeup in~\cite{delaRue09}. The key observation is that \begin{center} \emph{An idempotent class $\mathsf{C}$ is hereditary if and only if every system is $\mathsf{C}$-sated.} \end{center} This in turn follows from a striking result of Lema\'nczyk, Parreau and Thouvenot~\cite{LemParTho00} that if two systems $\mathbf{X}$ and $\mathbf{Y}$ are not disjoint then $\mathbf{X}$ shares a nontrivial factor with the infinite Cartesian power $\mathbf{Y}^{\times \infty}$. 
Given now an idempotent class $\mathsf{C}$ and a system $\mathbf{X}$, let $\mathsf{C}^\ast$ be the hereditary idempotent class of all factors of members of $\mathsf{C}$, and let $\mathbf{Y}$ be any $\mathsf{C}$-system admitting a factor map $\pi:\mathbf{Y}\to \mathsf{C}^\ast\mathbf{X}$ (such exists because by definition $\mathsf{C}^\ast\mathbf{X}$ is a factor of some $\mathsf{C}$-system). Now forming $\t{\mathbf{X}}:= \mathbf{X}\times_{\{\zeta_{\mathsf{C}^\ast}^\mathbf{X} = \pi\}}\mathbf{Y}$ (so here is where we need $\Gamma$ to be a group), a quick check using the above fact shows that $\mathsf{C}\t{\mathbf{X}} = \mathsf{C}^\ast\t{\mathbf{X}}$, and that this is equivalent to the $\mathsf{C}$-satedness of $\t{\mathbf{X}}$. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline \chapter{The convergence of nonconventional averages}\label{chap:conv} In this chapter Theorem C will be deduced from Theorem~\ref{thm:sateds-exist}. This amounts to a rather simpler outing for many of the same ideas that will go into proving recurrence in the next chapter. We first recall the Hilbert space version of a classical estimate due to van der Corput, which has long been a workhorse of Ergodic Ramsey Theory. After giving this its own section, the Furstenberg self-joining for a tuple of transformations is introduced, and then in the last section we show how the right instance of satedness implies that these enjoy some additional structure from which a proof of Theorem C follows quite quickly. \subsection*{Notation} Before commencing with any of these proofs, we make a slight modification to the notation of the Introduction to be more in keeping with that of Chapter~\ref{chap:basics}: rather than letting $T_1$, $T_2$, \ldots, $T_d$ denote a tuple of commuting individual transformations on $(X,\S,\mu)$, we henceforth regard these as the subactions of the basis vectors $\bf{e}_1$, $\bf{e}_2$, \ldots, $\bf{e}_d$ for a single $\mathbb{Z}^d$-action $T$. Theorem C is accordingly re-phrased as asserting that the averages \[\frac{1}{N}\sum_{n=1}^N(f_1\circ T^{n\bf{e}_1})\cdot(f_2\circ T^{n\bf{e}_2})\cdot\cdots\cdot (f_d\circ T^{n\bf{e}_d})\] converge in $L^2(\mu)$ for any $\mathbb{Z}^d$-system $(X,\S,\mu,T)$. This slight increase in abstraction will prove worth tolerating when we come to various constructions of new actions from old during our later arguments, in which we will need to keep efficient track of how the action of one vector in $\mathbb{Z}^d$ may have been re-assigned to that of another. It follows that in the remainder of this work, a list such as `$T_1$, $T_2$, \ldots, $T_d$' will denote a tuple of \emph{whole actions} of some previously-decided group, rather than individual transformations. \section{The van der Corput estimate} This result and a related discussion can be found, for example, as Theorem 2.2 of Bergelson~\cite{Ber96}. \begin{prop}[Van der Corput estimate]\label{prop:vdC} Suppose that $(u_n)_{n\geq 1}$ is a bounded sequence in a Hilbert space $\mathfrak{H}$. If the vector-valued averages \[\frac{1}{N}\sum_{n=1}^Nu_n\] do not converge to $0$ in norm as $N\to\infty$, then also the scalar-valued averages \[\frac{1}{M}\sum_{m=1}^M\frac{1}{N}\sum_{n=1}^N\langle u_n,u_{n+m}\rangle\] do not converge to $0$ as $N\to\infty$ and then $M\to\infty$. \end{prop} \noindent\textbf{Proof}\quad For any fixed $H\geq 1$ we have \[\frac{1}{N}\sum_{n=1}^Nu_n \sim \frac{1}{N}\sum_{n=1}^N \frac{1}{H}\sum_{h=1}^H u_{n+h}\] as $N\to\infty$, where the notation $w_N\sim v_N$ denotes that $w_N - v_N \to 0$ in $\mathfrak{H}$. 
However, the squared norm of the right-hand double average may be estimated by \[\Big\|\frac{1}{N}\sum_{n=1}^N \frac{1}{H}\sum_{h=1}^H u_{n+h}\Big\|^2 \leq \frac{1}{N}\sum_{n=1}^N\Big\|\frac{1}{H}\sum_{h=1}^H u_{n+h}\Big\|^2\] (using the triangle and Cauchy-Schwarz inequalities), and this right-hand side is equal to \[\frac{1}{H^2}\sum_{h_1,h_2=1}^H\frac{1}{N}\sum_{n=1}^N \langle u_{n+h_1},u_{n+h_2}\rangle.\] It follows that these averages must also not converge to $0$ as $N\to\infty$ and then $H\to\infty$; but for large $H$ these can be expressed as averages of the averages \[\frac{1}{M}\sum_{m=1}^M\frac{1}{N}\sum_{n=1}^N\langle u_n,u_{n+m}\rangle\] for correspondingly large values of $M$, and so these also cannot converge to $0$ as $N\to\infty$ and then $M\to\infty$, as required. \qed \section{The Furstenberg self-joining}\label{sec:Fberg} Theorem C is proved by induction on $d$. In the first instance, this induction is enabled by a construction that is made possible once convergence is known for a smaller number of transformations, and which will also be central to the proof of Theorem A in the next chapter. Thus, suppose now that for some $d\geq 1$ the convergence of Theorem C is known for all tuples of at most $d-1$ commuting transformations (so this assumption is vacuous if $d=1$). Let $\mathbf{X} = (X,\S,\mu,T)$ be a $\mathbb{Z}^d$-system, and let $A_1$, $A_2$, \ldots, $A_d \in \S$. By integrating and using the invariance of $\mu$ under $T^{\bf{e}_1}$, our assumption applied to the transformations $T^{\bf{e}_2 - \bf{e}_1}$, \ldots, $T^{\bf{e}_d - \bf{e}_1}$ implies that the scalar averages \begin{multline*} \frac{1}{N}\sum_{n=1}^N\mu(T^{-n\bf{e}_1}(A_1)\cap T^{-n\bf{e}_2}(A_2)\cap\cdots\cap T^{-n\bf{e}_d}(A_d))\\ = \int_X 1_{A_1}\cdot\Big(\frac{1}{N}\sum_{n=1}^N (1_{A_2}\circ T^{n(\bf{e}_2 - \bf{e}_1)})\cdot\cdots\cdot (1_{A_d}\circ T^{n(\bf{e}_d - \bf{e}_1)}) \Big)\,\d\mu \end{multline*} converge as $N\to\infty$. Moreover, the limit takes the form $\mu^{\rm{F}}(A_1\times A_2\times \cdots\times A_d)$ for some probability $\mu^{\rm{F}}$ on $X^d$ that is invariant under the diagonal $\mathbb{Z}^d$-action defined by $(T^{\times d})^{\bf{n}} := T^\bf{n}\times T^\bf{n}\times \cdots\times T^\bf{n}$, simply because it is a limit of averages of the off-diagonal joinings \[\int_X \delta_{(T^{n\bf{e}_1}(x),T^{n\bf{e}_2}(x),\ldots,T^{n\bf{e}_d}(x))}\,\mu(\d x)\quad\quad \hbox{for}\ n \in \mathbb{N}.\] The $\mathbb{Z}^d$-system $\mathbf{X}^{\rm{F}} := (X^d,\S^{\otimes d},\mu^{\rm{F}},T^{\times d})$ is therefore a $d$-fold self-joining of $\mathbf{X}$ through the $d$ coordinate projections $\pi_i:\mathbf{X}^{\rm{F}} \to \mathbf{X}$. We refer to either $\mu^\rm{F}$ or $\mathbf{X}^\rm{F}$ as the \textbf{Furstenberg self-joining} of $\mathbf{X}$. Given functions $f_1$, $f_2$, \ldots, $f_d \in L^\infty(\mu)$, by approximating each of them in $L^\infty$ using step functions we may extend the above definition of $\mu^\rm{F}$ to the convergence \[\frac{1}{N}\sum_{n=1}^N\int_X (f_1\circ T^{n\bf{e}_1})\cdot (f_2\circ T^{n\bf{e}_2})\cdot\cdots \cdot(f_d\circ T^{n\bf{e}_d})\,\d\mu\to \int_{X^d} f_1\otimes f_2\otimes \cdots\otimes f_d\,\d\mu^\rm{F}\] as $N\to\infty$. In addition to its invariance under $T^{\times d}$, the definition of $\mu^\rm{F}$ gives an additional invariance that will shortly prove crucial. 
\begin{lem}\label{lem:Fberg-offdiag-invar} Provided the limiting self-joining $\mu^{\rm{F}}$ exists, it is also invariant under the transformation $T^{\bf{e}_1}\times T^{\bf{e}_2}\times \cdots\times T^{\bf{e}_d}$. \end{lem} \noindent\textbf{Proof}\quad For any $A_1$, $A_2$, \ldots, $A_d \in \S$ we have \begin{eqnarray*} &&\mu^{\rm{F}}((T^{\bf{e}_1}\times T^{\bf{e}_2}\times \cdots\times T^{\bf{e}_d})^{-1}(A_1\times A_2\times \cdots\times A_d))\\ &&\quad = \lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^N\mu(T^{-n\bf{e}_1}(T^{-\bf{e}_1}(A_1))\cap \cdots\cap T^{-n\bf{e}_d}(T^{-\bf{e}_d}(A_d)))\\ &&\quad= \lim_{N\to\infty}\frac{1}{N}\sum_{n=2}^{N+1}\mu(T^{-n\bf{e}_1}(A_1)\cap \cdots\cap T^{-n\bf{e}_d}(A_d))\\ &&\quad= \mu^{\rm{F}}(A_1\times A_2\times \cdots\times A_d), \end{eqnarray*} where the last equality follows because the discrete intervals $\{1,2,\ldots,N\}$ and $\{2,3,\ldots,N+1\}$ asymptotically overlap in a proportion $1 - \rm{o}(1)$ of their lengths. \qed
It will be important to know that Furstenberg self-joinings behave well under inverse limits. The following is another immediate consequence of the definition, and we omit the proof. \begin{lem}\label{lem:Fberg-inv-lim} If $(\mathbf{X}_m)_{m\geq 0}$, $(\psi^m_k)_{m\geq k\geq 0}$ is an inverse sequence with inverse limit $\mathbf{X}$, $(\psi_m)_{m\geq 0}$, then the Furstenberg self-joinings $\mathbf{X}_m^\rm{F}$ form an inverse sequence under the factor maps $(\psi^m_k)^{\times d}$ with inverse limit $\mathbf{X}^\rm{F}$, $(\psi^{\times d}_m)_{m\geq 0}$. \qed \end{lem}
\section{The proof of convergence}
The final observation needed before we prove Theorem C is that satedness implies a certain inverse result for the situation in which the functional averages \[S_N(f_1,f_2,\ldots,f_d) := \frac{1}{N}\sum_{n=1}^N (f_1\circ T^{n\bf{e}_1})\cdot (f_2\circ T^{n\bf{e}_2})\cdot\cdots \cdot (f_d\circ T^{n\bf{e}_d})\] do not converge to $0$. \begin{prop}\label{prop:sated-implies-pleasant} Suppose that $\mathbf{X}$ is $\mathsf{C}$-sated for the idempotent class \[\mathsf{C} := \mathsf{Z}_0^{\bf{e}_1}\vee \bigvee_{j=2}^d\mathsf{Z}_0^{\bf{e}_1 - \bf{e}_j}\] and that $f_i \in L^\infty(\mu)$ for $i=1,2,\ldots,d$. In addition, let $\Phi:= \S^{T^{\bf{e}_1}}\vee \bigvee_{j=2}^d\S^{T^{\bf{e}_1} = T^{\bf{e}_j}}$, so this is a factor of $\mathbf{X}$. If \[S_N(f_1,f_2,\ldots,f_d)\not\to 0\] as $N\to\infty$, then also $\mathsf{E}(f_1\,|\,\Phi) \neq 0$. \end{prop} \noindent\textbf{Remark}\quad In the terminology of~\cite{FurWei96}, which has since become standard in this area (and is roughly followed in~\cite{Aus--nonconv}), this asserts that for a $\mathsf{C}$-sated system $\mathbf{X}$ the factor $\Phi$ is \textbf{partially characteristic}. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline \noindent\textbf{Proof}\quad This rests on an appeal to the van der Corput estimate followed by a re-interpretation of what it tells us.
Letting $u_n := (f_1\circ T^{n\bf{e}_1})\cdot (f_2\circ T^{n\bf{e}_2})\cdot\cdots \cdot (f_d\circ T^{n\bf{e}_d})$, Proposition~\ref{prop:vdC} and our assumption imply that the double averages \begin{multline*} \frac{1}{M}\sum_{m=1}^M\frac{1}{N}\sum_{n=1}^N\langle u_n,u_{n+m}\rangle\\ = \frac{1}{M}\sum_{m=1}^M\frac{1}{N}\sum_{n=1}^N\int_X \big((f_1\circ T^{n\bf{e}_1})\cdot \cdots \cdot (f_d\circ T^{n\bf{e}_d})\big)\quad\quad\quad\quad\quad\quad\quad\quad\\ \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\cdot \big((\ol{f_1}\circ T^{(n+m)\bf{e}_1})\cdot \cdots \cdot (\ol{f_d}\circ T^{(n+m)\bf{e}_d})\big)\,\d\mu \end{multline*} do not tend to $0$ as $N\to\infty$ and then $M\to\infty$. However, simply by re-arranging the individual functions and recalling the definition of $\mu^\rm{F}$, the limit in $N$ behaves as \begin{multline*} \frac{1}{M}\sum_{m=1}^M\frac{1}{N}\sum_{n=1}^N\int_X ((f_1\cdot(\ol{f_1}\circ T^{m\bf{e}_1}))\circ T^{n\bf{e}_1})\cdot\cdots \cdot ((f_d\cdot (\ol{f_d}\circ T^{m\bf{e}_d}))\circ T^{n\bf{e}_d})\,\d\mu\\ \to \frac{1}{M}\sum_{m=1}^M\int_{X^d}(f_1\cdot(\ol{f_1}\circ T^{m\bf{e}_1}))\otimes\cdots\otimes (f_d\cdot(\ol{f_d}\circ T^{m\bf{e}_d}))\,\d\mu^\rm{F}\\ = \frac{1}{M}\sum_{m=1}^M\int_{X^d}(f_1\otimes \cdots\otimes f_d)\cdot (\ol{f_1\otimes\cdots\otimes f_d}\circ (T^{\bf{e}_1}\times \cdots\times T^{\bf{e}_d})^m)\,\d\mu^\rm{F}. \end{multline*} Now, since Lemma~\ref{lem:Fberg-offdiag-invar} gives that $\mu^\rm{F}$ is invariant under $T^{\bf{e}_1}\times T^{\bf{e}_2}\times \cdots\times T^{\bf{e}_d}$, the classical mean ergodic theorem allows us to take the limit in $M$ to obtain \[\int_{X^d}(f_1\otimes \cdots\otimes f_d)\cdot \mathsf{E}_{\mu^\rm{F}}\big(\ol{f_1\otimes\cdots\otimes f_d}\,\big|\,(\S^{\otimes d})^{T^{\bf{e}_1}\times \cdots\times T^{\bf{e}_d}}\big)\,\d\mu^\rm{F}.\] Thus the van der Corput estimate tells us that this integral is non-zero. The proof is completed simply by re-phrasing this conclusion slightly. We have previously used $\mu^\rm{F}$ to define a $\mathbb{Z}^d$-system $\mathbf{X}^\rm{F}$, but in light of Lemma~\ref{lem:Fberg-offdiag-invar} we may alternatively use it to define a $\mathbb{Z}^d$-system $\t{\mathbf{X}}$ by setting \[(\t{X},\t{\S},\t{\mu}) := (X^d,\S^{\otimes d},\mu^\rm{F}),\] \[\t{T}^{\bf{e}_1} := T^{\bf{e}_1}\times T^{\bf{e}_2}\times \cdots\times T^{\bf{e}_d}\] and \[\t{T}^{\bf{e}_i} := (T^{\times d})^{\bf{e}_i}\quad\quad\hbox{for}\ i=2,3,\ldots,d\] (thus, the basis direction $\bf{e}_1$ is treated differently from the others). With this definition the first coordinate projection $\pi_1:X^d\to X$ still defines a factor map of $\mathbb{Z}^d$-systems $\t{\mathbf{X}}\to \mathbf{X}$, because $\t{T}^\bf{n}$ does agree with $T^\bf{n}$ on the first coordinate in $X^d$ for every $\bf{n}$. On the other hand, for $i=2,3,\ldots,d$ the function $f_i\circ\pi_i \in L^\infty(\mu^\rm{F})$ depends only on the $i^{\rm{th}}$ coordinate in $X^d$, and on this coordinate the transformations $\t{T}^{\bf{e}_1}$ and $\t{T}^{\bf{e}_i}$ agree, so that $f_i\circ \pi_i$ is $\t{T}^{\bf{e}_i - \bf{e}_1}$-invariant.
Thus the nonvanishing \[\int_{X^d}(f_1\otimes \cdots\otimes f_d)\cdot \mathsf{E}_{\mu^\rm{F}}\big(\ol{f_1\otimes\cdots\otimes f_d}\,\big|\,\t{\S}^{\t{T}^{\bf{e}_1}}\big)\,\d\mu^\rm{F}\neq 0\] asserts that the lifted function $f_1\circ \pi_1$ has a nontrivial inner product with a function that is a pointwise product of $\t{\S}^{\t{T}^{\bf{e}_1} = \t{T}^{\bf{e}_i}}$-measurable functions for $i=2,3,\ldots,d$ and the function $\mathsf{E}_{\mu^\rm{F}}\big(\ol{f_1\otimes\cdots\otimes f_d}\,\big|\,\t{\S}^{\t{T}^{\bf{e}_1}}\big)$, which is manifestly $\t{\S}^{\t{T}^{\bf{e}_1}}$-measurable. Therefore $f_1\circ \pi_1$ has a nontrivial conditional expectation onto $\t{\S}^{\t{T}^{\bf{e}_1}}\vee\bigvee_{j=2}^d\t{\S}^{\t{T}^{\bf{e}_1} = \t{T}^{\bf{e}_j}}$, which is the $\sigma$-algebra generated by the factor map $\t{\mathbf{X}}\to \mathsf{C}\t{\mathbf{X}}$. On the other hand, by $\mathsf{C}$-satedness $f_1\circ \pi_1$ must be relatively independent from this $\sigma$-algebra over $\Phi$, and so we also have $\mathsf{E}_\mu(f_1\,|\,\Phi) \neq 0$, as required. \qed
\noindent\textbf{Proof of Theorem C}\quad This proceeds by induction on $d$. The case $d=1$ is the classical mean ergodic theorem, so suppose now that $d \geq 2$, that we know the result for all tuples of at most $d-1$ transformations and that we are given $T:\mathbb{Z}^d\curvearrowright (X,\S,\mu)$. Let $\mathsf{C}$ be the class in Proposition~\ref{prop:sated-implies-pleasant}. By Theorem~\ref{thm:sateds-exist} we may choose a $\mathsf{C}$-sated extension $\pi:\t{\mathbf{X}}\to \mathbf{X}$, and now since the corresponding inclusion $L^\infty(\mu)\subseteq L^\infty(\t{\mu})$ is an embedding of algebras that preserves the norms $\|\cdot\|_2$ it will suffice to prove convergence for the analogs of the averages $S_N$ associated to $\t{\mathbf{X}}$. To lighten notation we henceforth assume that $\mathbf{X}$ itself is $\mathsf{C}$-sated. Suppose that $f_1,f_2,\ldots,f_d \in L^\infty(\mu)$. Letting $\Phi:= \S^{T^{\bf{e}_1}}\vee \bigvee_{j = 2}^d\S^{T^{\bf{e}_1} = T^{\bf{e}_j}}$, we see that the function $f_1 - \mathsf{E}(f_1\,|\,\Phi)$ has zero conditional expectation onto $\Phi$, and so by the multilinearity of $S_N$ and Proposition~\ref{prop:sated-implies-pleasant} we have that \begin{multline*} S_N(f_1,f_2,\ldots,f_d) - S_N(\mathsf{E}(f_1\,|\,\Phi),f_2,\ldots,f_d) \\ = S_N(f_1 - \mathsf{E}(f_1\,|\,\Phi),f_2,\ldots,f_d)\to 0 \end{multline*} in $L^2(\mu)$ as $N\to\infty$. It therefore suffices to prove convergence with $f_1$ replaced by $\mathsf{E}(f_1\,|\,\Phi)$, or equivalently under the assumption that $f_1$ is $\Phi$-measurable. However, this implies that $f_1$ may be approximated in $\|\cdot\|_2$ by finite sums of the form $\sum_p g_p\cdot h_{2,p} \cdot h_{3,p} \cdot\cdots \cdot h_{d,p}$ in which each $g_p$ is $T^{\bf{e}_1}$-invariant and each $h_{j,p}$ is $T^{\bf{e}_j - \bf{e}_1}$-invariant. Since the operator \[f_1\mapsto S_N(f_1,f_2,\ldots,f_d)\] is linear and uniformly continuous in $L^2(\mu)$ for fixed bounded $f_2$, $f_3$, \ldots, $f_d$, it therefore suffices to prove convergence in case $f_1$ is simply one such product, say $gh_2h_3\cdots h_d$.
For this function, however, we can re-arrange our averages as \begin{multline*} S_N(f_1,f_2,\ldots,f_d) = \frac{1}{N}\sum_{n=1}^N((gh_2h_3\cdots h_d)\circ T^{n\bf{e}_1})\cdot (f_2\circ T^{n\bf{e}_2})\cdot\cdots\cdot (f_d\circ T^{n\bf{e}_d})\\ = g\cdot \frac{1}{N}\sum_{n=1}^N ((f_2h_2)\circ T^{n\bf{e}_2})\cdot\cdots\cdot ((f_dh_d)\circ T^{n\bf{e}_d}) = gS_N(1_X,f_2h_2,\ldots,f_dh_d), \end{multline*} since $g\circ T^{n\bf{e}_1} = g$ and $h_j\circ T^{n\bf{e}_1} = h_j\circ T^{n\bf{e}_j}$ for each $j=2,3,\ldots,d$. Now the averages appearing on the right are uniformly bounded in $\|\cdot\|_\infty$ and involve only the $d-1$ transformations $T^{\bf{e}_2}$, $T^{\bf{e}_3}$, \ldots, $T^{\bf{e}_d}$, and so the inductive hypothesis gives their convergence in $\|\cdot\|_2$. Since $\|g\|_\infty < \infty$ this gives also the convergence of the left-hand averages in $\|\cdot\|_2$, as required. \qed
\noindent\textbf{Remark}\quad In fact the above proof gives a slight strengthening of Theorem C, in that the convergence is uniform in the location of the interval of averaging: that is, the averages \[\frac{1}{|I_N|}\sum_{n\in I_N}\prod_{i=1}^d f_i\circ T^{n\bf{e}_i}\] converge in $L^2(\mu)$ for any sequence of increasingly long finite intervals $I_N \subset \mathbb{Z}$, and the limit does not depend on the choice of these intervals. This result is treated in full in~\cite{Aus--nonconv}. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline
\chapter{Multiple recurrence for commuting transformations}\label{chap:multiMRT}
In this chapter we deduce Theorem A from Theorem~\ref{thm:sateds-exist}. Coupled with Furstenberg and Katznelson's correspondence principle from~\cite{FurKat78}, this gives a new proof of the Multidimensional Szemer\'edi Theorem, but we will not recount that correspondence here since it is already well-known from that paper and several subsequent accounts, such as those in the books~\cite{Fur81} of Furstenberg and~\cite{TaoVu06} of Tao and Vu. After introducing a more convenient reformulation of Theorem A below, we first pose a very general meta-question that covers most of the ergodic theory we need. We then show how it specializes to give quite detailed information on the Furstenberg self-joining corresponding to a tuple of commuting transformations. From this the proof of Theorem A follows by appealing to a version of Tao's infinitary hypergraph removal lemma. We will continue the practice begun in the previous chapter of writing a tuple of commuting transformations as $T^{\bf{e}_1}$, $T^{\bf{e}_2}$, \ldots, $T^{\bf{e}_d}$ for some $\mathbb{Z}^d$-action $T$. The convergence result of the previous chapter implies that for any such $T^{\bf{e}_1}$, $T^{\bf{e}_2}$, \ldots, $T^{\bf{e}_d}$ the Furstenberg self-joining $\mu^\rm{F}$ of Section~\ref{sec:Fberg} exists. Knowing this, Theorem A about the limit infima of scalar averages is a consequence of the following more general result: \begin{thm}\label{thm:multirec2} If $T:\mathbb{Z}^d\curvearrowright(X,\S,\mu)$, $\mu^\rm{F}$ denotes the Furstenberg self-joining of the transformations $T^{\bf{e}_1}$, $T^{\bf{e}_2}$, \ldots, $T^{\bf{e}_d}$ and $A_1$, $A_2$, \ldots, $A_d\in \S$ then \[\mu^\rm{F}(A_1\times A_2\times \cdots\times A_d) = 0 \quad\quad\Rightarrow\quad\quad \mu(A_1\cap A_2\cap\cdots\cap A_d) =0.\] \end{thm} Indeed, in case $A_i = A$ for each $i$ this assertion is precisely the contrapositive of Theorem A.
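To spell this out: since the previous chapter guarantees that the limit defining $\mu^\rm{F}$ exists, in the case $A_1 = A_2 = \cdots = A_d = A$ we have
\[\mu^\rm{F}(A\times A\times\cdots\times A) = \lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^N\mu(T^{-n\bf{e}_1}(A)\cap T^{-n\bf{e}_2}(A)\cap\cdots\cap T^{-n\bf{e}_d}(A)),\]
so the assertion that this quantity is nonzero whenever $\mu(A) > 0$ amounts to the positivity of the limit infimum in Theorem A.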
However, the formulation of Theorem~\ref{thm:multirec2} has the great advantage of allowing us to manipulate the sets $A_i$ separately in setting up a proof by induction. \section{The question in the background}\label{sec:meta} Having reformulated our goal in this chapter as Theorem~\ref{thm:multirec2}, it becomes clear that it is really an assertion about the joint distribution of the coordinate projections $\pi_i:X^d\to X$, $i=1,2,\ldots,d$ under $\mu^\rm{F}$. By Lemma~\ref{lem:Fberg-offdiag-invar} $\mu^\rm{F}$ is an invariant measure for the action $\vec{T}$ of the larger group $\mathbb{Z}^{d+1}$ defined by setting \[\vec{T}^{\,\upharpoonright\, \mathbb{Z}^d\oplus \{\bs{0}\}} := T^{\times d}\quad\quad\hbox{and}\quad\quad \vec{T}^{\bf{e}_{d+1}} := T^{\bf{e}_1}\times T^{\bf{e}_2}\times \cdots\times T^{\bf{e}_d}.\] Thus this defines a $\mathbb{Z}^{d+1}$-system $\vec{\mathbf{X}}$ in which the Furstenberg self-joining $\mathbf{X}^\rm{F}$ corresponds to the subaction of $\mathbb{Z}^d\oplus\{\bs{0}\}$. The key to our proof is the observation that the coordinate projections $\pi_i$ now define factor maps of $\vec{\mathbf{X}}$ onto a collection of $\mathbb{Z}^{d+1}$-systems $\mathbf{X}_1$, $\mathbf{X}_2$, \ldots, $\mathbf{X}_d$ for each of which some one-dimensional subgroup of $\mathbb{Z}^{d+1}$ acts trivially: specifically, this is so with $\mathbf{X}_i = (X_i,\S_i,\mu_i,T_i)$ defined simply by `doubling up' the $\mathbb{Z}\bf{e}_i$-subaction of $T$: \[(X_i,\S_i,\mu_i) := (X,\S,\mu),\quad T_i^{\,\upharpoonright\,\mathbb{Z}^d\oplus\{\bs{0}\}} := T\quad\hbox{and}\quad T_i^{\bf{e}_{d+1}} := T^{\bf{e}_i}.\] It follows immediately from these specifications that $\pi_i\circ \vec{T} = T_i\circ\pi_i$ and that $\mathbf{X}_i \in \mathsf{Z}_0^{\bf{e}_{d+1} - \bf{e}_i}$. Having made these observations, our principal results on $\mu^\rm{F}$ will fall within the pattern of the following: \begin{quote} \textbf{Meta-question:} Given subgroups $\Gamma_1$, $\Gamma_2$, \ldots, $\Gamma_r \leq \mathbb{Z}^D$ and $\mathbb{Z}^D$-systems $(X_i,\S_i,\mu_i,T_i)$ for $i=1,2,\ldots,r$ such that $T_i^{\upharpoonright\,\Gamma_i}=\mathrm{id}$, what do these partial invariances imply about the possible joinings of these $\mathbb{Z}^D$-systems? \end{quote} The first stage in proving Theorem~\ref{thm:multirec2} will boil down to a handful of special cases of this question. In this section we show that a partial answer covering all of the cases we need can be given quite easily, subject to an algebraic constraint on the subgroups $\Gamma_i$ and an allowance to pass to extended systems. First, it is instructive to understand the simple case $r=2$: \begin{lem}\label{lem:two-fold-joinings} If the systems $\mathbf{X}_i$ are $\Gamma_i$-partially invariant for $i=1,2$, then any joining of them is relatively independent over their factors $\S_i^{T_i\upharpoonright(\Gamma_1 + \Gamma_2)}$. \end{lem} \noindent\textbf{Proof}\quad Suppose $\pi_i:(Y,\Phi,\nu,S)\to (X_i,\S_i,\mu_i,T_i)$ is a joining of the two systems and consider subsets $A_i \in \S_i$. In addition let $(F_N)_{N\geq 1}$ be a F\o lner sequence of subsets of $\Gamma_1$. 
Then the invariance of $\nu$ and the Mean Ergodic Theorem give \begin{eqnarray*} &&\nu(\pi_1^{-1}(A_1)\cap \pi_2^{-1}(A_2))\\ &&\quad = \lim_{N\to\infty}\frac{1}{|F_N|}\sum_{\gamma\in F_N}\int_Y (1_{A_1}\circ \pi_1)(1_{A_2}\circ T_2^{\gamma}\circ \pi_2)\,\d\nu\\ &&\quad = \lim_{N\to\infty}\int_Y (1_{A_1}\circ \pi_1)\Big(\Big(\frac{1}{|F_N|}\sum_{\gamma\in F_N}1_{A_2}\circ T_2^{\gamma}\Big)\circ \pi_2\Big)\,\d\nu\\ &&\quad = \int_Y (1_{A_1}\circ \pi_1)(\mathsf{E}_{\mu_2}(1_{A_2}\,|\,\S_2^{T_2\upharpoonright\Gamma_1})\circ\pi_2)\,\d\nu. \end{eqnarray*} Since $T_2^{\upharpoonright\,\Gamma_2} = \mathrm{id}$ the factor $\S_2^{T_2\upharpoonright\Gamma_1}$ consists of sets that are invariant under the whole group $\Gamma_1 + \Gamma_2$, and hence agrees with $\S_2^{T_2\upharpoonright(\Gamma_1 + \Gamma_2)}$. Arguing similarly with the roles of $\mathbf{X}_1$ and $\mathbf{X}_2$ reversed, this shows that the above is equal to \[\int_Y (\mathsf{E}_{\mu_1}(1_{A_1}\,|\,\S_1^{T_1\upharpoonright(\Gamma_1 + \Gamma_2)})\circ \pi_1)(\mathsf{E}_{\mu_2}(1_{A_2}\,|\,\S_2^{T_2\upharpoonright(\Gamma_1 + \Gamma_2)})\circ\pi_2)\,\d\nu,\] as required. \qed
For $r\geq 3$ we will not obtain an answer as complete as the above. However, a natural generalization is available for certain special tuples of subgroups, subject to the further provision that we may replace the originally-given systems $\mathbf{X}_i$ with some extensions of them. The extensions, of course, will be sated extensions, and for them the picture is given by the following. \begin{thm}\label{thm:sateds-joint-dist} Suppose that \[\mathbb{Z}^D \cong \Gamma_1 \oplus \Gamma_2 \oplus \cdots \oplus \Gamma_r \oplus \L\] is a direct sum decomposition of $\mathbb{Z}^D$ into the subgroups $\Gamma_i$ and some auxiliary subgroup $\L$, and that $\mathbf{X}_i \in \mathsf{Z}_0^{\Gamma_i}$ for $i=1,2,\ldots,r$ are systems such that each $\mathbf{X}_i$ is $\mathsf{C}_i$-sated for \[\mathsf{C}_i := \bigvee_{j\leq r,\,j\neq i}\mathsf{Z}_0^{\Gamma_i + \Gamma_j}.\] Then for any joining $\pi_i:\mathbf{Y} \to \mathbf{X}_i$, $i=1,2,\ldots,r$, the factors $\pi_i^{-1}(\S_i)$ are relatively independent over their further factors \[\pi_i^{-1}\Big(\bigvee_{j\leq r,\,j\neq i}\S_i^{T_i\upharpoonright(\Gamma_i + \Gamma_j)}\Big).\] \end{thm} \noindent\textbf{Proof}\quad This is a simple appeal to the definition of satedness. We will show that $\pi_1^{-1}(\S_1)$ is relatively independent from $\bigvee_{j=2}^r\pi_j^{-1}(\S_j)$ over\\ $\pi_1^{-1}\big(\bigvee_{j=2}^r\S_1^{T_1\upharpoonright(\Gamma_1 + \Gamma_j)}\big)$, the cases of the other factors being similar. Let $\Gamma := \Gamma_2 \oplus \cdots \oplus \Gamma_r \oplus \L\leq \mathbb{Z}^D$, so this complements $\Gamma_1$ in $\mathbb{Z}^D$, and let $\mathbf{Y} = (Y,\Phi,\nu,S)$. From $S$ we may construct a new $\nu$-preserving $\mathbb{Z}^D$-action $S'$ by defining \[(S')^{\bf{m} + \bf{n}} := S^{\bf{n}}\quad\quad\hbox{for all}\ \bf{m}\in \Gamma_1,\ \bf{n} \in \Gamma.\] Let $\mathbf{Y}' := (Y,\Phi,\nu,S')$, so manifestly $\mathbf{Y}' \in \mathsf{Z}_0^{\Gamma_1}$. Similarly define the systems $\mathbf{X}'_i = (X_i,\S_i,\mu_i,T_i')$ for $i=2,3,\ldots,r$, so these also have trivial $\Gamma_1$-subactions and hence in fact lie in the classes $\mathsf{Z}_0^{\Gamma_1 + \Gamma_i}$. Since $T_1^{\bf{m}} = \mathrm{id}_{X_1}$ for all $\bf{m} \in \Gamma_1$ by assumption, we see that $\pi_1\circ S' = T_1\circ \pi_1$, so $\pi_1$ still defines a factor map $\mathbf{Y}' \to \mathbf{X}_1$.
On the other hand, we also have \[\pi_i\circ (S')^{\bf{m} + \bf{n} + \bf{p}} = \pi_i\circ S^{\bf{n} + \bf{p}} = T_i^{\bf{n} + \bf{p}}\circ \pi_i = T_i^{\bf{p}}\circ \pi_i = (T'_i)^{\bf{m} + \bf{n} + \bf{p}}\circ \pi_i\] whenever $i=2,3,\ldots,r$ and $\bf{m} \in \Gamma_1$, $\bf{n} \in \Gamma_i$ and $\bf{p} \in \bigoplus_{j\neq 1,i}\Gamma_j \oplus \L$. Therefore $\pi_i$ is a factor map $\mathbf{Y}' \to \mathbf{X}_i'$ for $i=2,3,\ldots,r$, and so $\mathbf{Y}'$ is a joining of $\mathbf{X}_1$ with members of the classes $\mathsf{Z}_0^{\Gamma_1 + \Gamma_i}$ for $i=2,3,\ldots,r$: that is, $\mathbf{Y}'$ is a $\mathsf{C}_1$-adjoining of $\mathbf{X}_1$. By the assumption of $\mathsf{C}_1$-satedness, it follows that this adjoining is relatively independent over the maximal $\mathsf{C}_1$-factor of $\mathbf{X}_1$, which equals $\bigvee_{j=2}^r\S_1^{T_1\upharpoonright(\Gamma_1 + \Gamma_j)}$, as required. \qed
\noindent\textbf{Example}\quad Without the assumption of satedness, more complicated phenomena can appear in the joint distribution of three partially-invariant systems. For example, let $(X,\S,\mu,T)$ be the $\mathbb{Z}^3$-system on the two-torus $\mathbb{T}^2$ with its Borel $\sigma$-algebra and Haar measure defined by $T^{\bf{e}_1} := R_{(\a,0)}$, $T^{\bf{e}_2} := R_{(0,\a)}$ and $T^{\bf{e}_3} := R_{(\a,\a)}$, where $R_q$ denotes the rotation of $\mathbb{T}^2$ by an element $q \in \mathbb{T}^2$ and we choose $\a \in \mathbb{T}$ irrational. In this case we have natural coordinatizations of the partially invariant factors $\zeta_0^{T^{\bf{e}_i}}:X \to \mathbb{T}$ given by \[\zeta_0^{T^{\bf{e}_1}}(t_1,t_2) = t_2,\quad \zeta_0^{T^{\bf{e}_2}}(t_1,t_2) = t_1\quad\hbox{and}\quad \zeta_0^{T^{\bf{e}_3}}(t_1,t_2) = t_1 - t_2.\] It follows that in this example any two of $\S^{T^{\bf{e}_1}}$, $\S^{T^{\bf{e}_2}}$ and $\S^{T^{\bf{e}_3}}$ are independent, but also that any two of them generate the whole system (and so overall independence fails). In fact, it is possible to give a fairly complete answer to our meta-question in the case of any three $\mathbb{Z}$-subactions of some $\mathbb{Z}^D$-action, without the simplifying power of extending our systems. However, that answer in general requires the handling of extensions of non-ergodic systems by measurably-varying compact homogeneous space data: it is contained in Theorem 1.1 of~\cite{Aus--ergdirint}, in which such extensions are studied in suitable generality. The full formulation of that Theorem 1.1 is rather lengthy, and will not be repeated here; and it seems clear that matters will only become more convoluted for larger $r$. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline
Theorem~\ref{thm:sateds-joint-dist} already suffices for the coming applications, but it is natural to ask about more general collections of subgroups $\Gamma_i \leq \mathbb{Z}^D$. In fact it is possible to do slightly better than Theorem~\ref{thm:sateds-joint-dist} with just a little extra effort: the same conclusion holds given only that these subgroups are \textbf{linearly independent}, in the sense that for any $\bf{n}_i \in \Gamma_i$ we have \[\bf{n}_1 + \bf{n}_2 + \cdots + \bf{n}_r = \bs{0}\quad \Rightarrow\quad \bf{n}_i = \bs{0}\ \forall i\leq r.\] Indeed, given this linear independence, one can let $\Delta := \Gamma_1 + \Gamma_2 + \cdots + \Gamma_r$ and now argue as in the above proof to deduce that the conclusion holds provided that $\mathbf{X}_1$ is $\mathsf{C}_1$-sated \emph{among all $\Delta$-systems}.
However, it is not quite obvious that this is the same as being $\mathsf{C}_1$-sated among $\mathbb{Z}^D$-systems. This turns out to be true, but it requires the key additional result that whenever $\Delta \leq \L$ are discrete Abelian groups, $\mathbf{X}$ is a $\L$-system and $\a:\mathbf{Y} \to \mathbf{X}^{\upharpoonright\Delta}$ is an extension of the $\Delta$-subaction, there is an extension of $\L$-systems $\b:\mathbf{Z}\to \mathbf{X}$ that fits into a commutative diagram \begin{center} $\phantom{i}$\xymatrix{ \mathbf{Z}^{\upharpoonright \Delta}\ar[dr]\ar[rr]^\b & & \mathbf{X}^{\upharpoonright \Delta}\\ & \mathbf{Y}\ar[ur]_{\a}. } \end{center} The elementary but slightly messy proof of this can be found in Subsection 3.2 of~\cite{Aus--lindeppleasant1}. What happens when there are linear dependences among the subgroups $\Gamma_1$, $\Gamma_2$, \ldots, $\Gamma_r$? An answer to this question could have several applications to understanding multiple recurrence, but it is also clearly of broader interest in ergodic theory. At present the picture remains unclear, but a number of recent works have provided answers in several further special cases, and in moments of optimism it now seems possible that a quite general extension of Theorem~\ref{thm:sateds-joint-dist} (using satedness relative to a much larger list of classes of system) may be available. A more precise conjecture in this vein will be formulated in Chapter~\ref{chap:spec}.
\noindent\textbf{Remark}\quad Before leaving this section, it is worth contrasting the feature seen above, that linear independence is helpful, with previous works in this area. In the early study of special cases of Theorems B or C it was generally found that the analysis of powers of a single transformation (or correspondingly of arithmetic progressions in $\mathbb{Z}$) revealed more usable structure and was thus more tractable than the general case. Of course, Furstenberg's original Multiple Recurrence Theorem preceded Theorem B; and the conclusion of Theorem C was known in many such `one-dimensional' cases long before the general case was treated (see~\cite{ConLes84,ConLes88.1,ConLes88.2,FurWei96,HosKra07,Zie07}, although we note that Conze and Lesigne did also treat a two-dimensional case of Theorem C, and that in~\cite{Zha96} Zhang extended this result to three dimensions subject to some additional assumptions). The same phenomenon is apparent in the search for finitary, quantitative approaches to Szemer\'edi's Theorem and its relatives. Indeed, a purely finitary proof of the Multidimensional Szemer\'edi Theorem appeared only recently in works of R\"odl and Skokan~\cite{RodSko04}, Nagle, R\"odl and Schacht~\cite{NagRodSch06} and Gowers~\cite{Gow07}, building on the development by those authors of sufficiently powerful hypergraph variants of Szemer\'edi's Regularity Lemma in graph theory. Furthermore, the known bounds for how large $N_0$ must be taken in terms of $\delta$ and $k$ are far better for Szemer\'edi's Theorem than for its multidimensional generalization, owing to the powerful methods developed by Gowers in~\cite{Gow98,Gow01}, which extend Roth's proof for $k=3$ from~\cite{Rot53} and are much more efficient than the hypergraph regularity proofs. As yet these methods have resisted extension to the multidimensional setting, except in one two-dimensional case recently treated by Shkredov~\cite{Shk05}. This story is discussed in much greater depth in Chapters 10 and 11 of~\cite{TaoVu06}.
Running counter to this trend, the value of linear independence for the present work is a consequence of our strategy of passing to extensions of probability-preserving systems. Although such extensions can lose any a priori algebraic structure (such as being a $\mathbb{Z}^D$-action in which the transformations $T^{\bf{e}_i}$ are actually all powers of one fixed transformation), the various instances of satedness that it allows us to assume will furnish enough power to drive all of our subsequent proofs. These instances of satedness will all be relative to joins of different classes of partially invariant systems, and, as illustrated by the above proof of Theorem~\ref{thm:sateds-joint-dist}, the usefulness of this kind of satedness will rely on the ability to construct new systems for which the corresponding subgroups behave in specified ways. With this in mind it is natural that having those subgroups linearly independent removes a potential obstacle from these arguments, and that answering our meta-question for sated systems will be more difficult when the subgroups exhibit some linear dependences. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline \section{More on the Furstenberg self-joining}\label{sec:moreFberg} We now return to the study of the Furstenberg self-joining $\mu^\rm{F}$ introduced in the previous chapter, with the goal of deriving a structure theorem for it as a consequence of Theorem~\ref{thm:sateds-joint-dist} in case $\mathbf{X}$ is sated with respect to enough difference classes. In order to formulate this structure theorem, we first settle on some more bespoke notation. In the following we shall make repeated reference to certain factors assembled from the partially invariant factors of our $\mathbb{Z}^d$-action $T$, so we now give these factors their own names. They will be indexed by subsets of $[d] := \{1,2,\ldots,d\}$, or more generally by subfamilies of the collection $\binom{[d]}{\geq 2}$ of all subsets of $[d]$ of size at least $2$. On the whole, these indexing subfamilies will be up-sets in $\binom{[d]}{\geq 2}$: $\mathcal{I}\subseteq \binom{[d]}{\geq 2}$ is an \textbf{up-set} if $u \in \mathcal{I}$ and $[d]\supseteq v\supseteq u$ imply $v \in \mathcal{I}$. For example, given $e \subseteq [d]$ we write $\langle e\rangle := \{u \in\binom{[d]}{\geq 2}:\ u\supseteq e\}$ (note the non-standard feature of our notation that $e \in \langle e\rangle$ if and only if $|e| \geq 2$): up-sets of this form are \textbf{principal}. We will abbreviate $\langle\{i\}\rangle$ to $\langle i\rangle$. It will also be helpful to define the \textbf{depth} of a non-empty up-set $\mathcal{I}$ to be $\min\{|e|:\ e\in \mathcal{I}\}$. The corresponding factor for $e = \{i_1,i_2,\ldots,i_k\}\subseteq [d]$ with $k\geq 2$ is $\Phi_e := \S^{T^{\bf{e}_{i_1}} = T^{\bf{e}_{i_2}} = \ldots = T^{\bf{e}_{i_k}}}$, so this is the partially invariant factor for the $(k-1)$-dimensional subgroup \[\mathbb{Z}(\bf{e}_{i_1} - \bf{e}_{i_2}) + \mathbb{Z}(\bf{e}_{i_1} - \bf{e}_{i_3}) + \cdots + \mathbb{Z}(\bf{e}_{i_1} - \bf{e}_{i_k}).\] More generally, given a family $\cal{A} \subseteq \binom{[d]}{\geq 2}$ we define $\Phi_{\cal{A}} := \bigvee_{e\in\cal{A}}\Phi_e$. From the ordering among the factors $\Phi_e$ it is clear that $\Phi_{\cal{I}} = \Phi_{\cal{A}}$ whenever $\cal{A} \subseteq \binom{[d]}{\geq 2}$ is a family that generates $\cal{I}$ as an up-set, and in particular that $\Phi_e = \Phi_{\langle e\rangle}$ when $|e|\geq 2$. We now return to the Furstenberg self-joining $\mu^\rm{F}$. 
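\noindent\textbf{Example}\quad Before doing so, it may help to see this notation in a small case, say $d = 3$: the principal up-set $\langle\{1,2\}\rangle$ is $\{\{1,2\},\{1,2,3\}\}$ and has depth $2$, while $\langle 1\rangle = \{\{1,2\},\{1,3\},\{1,2,3\}\}$; correspondingly $\Phi_{\langle\{1,2\}\rangle} = \Phi_{\{1,2\}} = \S^{T^{\bf{e}_1} = T^{\bf{e}_2}}$, and $\Phi_{\langle 1\rangle} = \Phi_{\{1,2\}}\vee \Phi_{\{1,3\}}$ since $\Phi_{\{1,2,3\}}$ is contained in each of these. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline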
For $e = \{i_1 < i_2 < \ldots < i_k\} \subseteq [d]$ we write $\mu^\rm{F}_e$ for the Furstenberg self-joining of the transformations $T^{\bf{e}_{i_1}}$, $T^{\bf{e}_{i_2}}$, \ldots, $T^{\bf{e}_{i_k}}$: \[\mu^{\rm{F}}_e(A_1\times \cdots\times A_k) := \lim_{N\to \infty}\frac{1}{N}\sum_{n=1}^N\mu(T^{-n\bf{e}_{i_1}}(A_1)\cap \cdots\cap T^{-n\bf{e}_{i_k}}(A_k)),\] so this clearly extends the definition of Section~\ref{sec:Fberg} in the sense that $\mu_{[d]}^\rm{F} = \mu^\rm{F}$. Of course, we know the existence of each $\mu^\rm{F}_e$ by the results of the previous chapter. We next record some simple properties of the family of self-joinings $\mu^\rm{F}_e$ for $e \subseteq [d]$. Given subsets $e \subseteq e' \subseteq [d]$, in the following we write $\pi_e$ for the coordinate projection $X^{e'}\to X^e$, since the choice of $e'$ will always be clear from the context. \begin{lem}\label{lem:Fberg-project} If $e \subseteq e' \subseteq [d]$ then $(\pi_e)_{\#}\mu^\rm{F}_{e'} = \mu^\rm{F}_e$. \end{lem} \noindent\textbf{Proof}\quad This is immediate from the definition: if $e = \{i_1 < i_2 < \ldots < i_k\} \subseteq e' = \{j_1 < j_2 < \ldots < j_l\}$ and $A_{i_j} \in \S$ for each $j \leq k$ then \[(\pi_e)_{\#}\mu^{\rm{F}}_{e'}(A_{i_1}\times \cdots\times A_{i_k})\\ := \lim_{N\to \infty}\frac{1}{N}\sum_{n=1}^N\mu(T^{-n\bf{e}_{j_1}}(B_{j_1})\cap \cdots\cap T^{-n\bf{e}_{j_l}}(B_{j_l}))\] where $B_j := A_j$ if $j \in e$ and $B_j := X$ otherwise; but then this last average simplifies summand-by-summand directly to \[\lim_{N\to \infty}\frac{1}{N}\sum_{n=1}^N\mu(T^{-n\bf{e}_{i_1}}(A_1)\cap\cdots\cap T^{-n\bf{e}_{i_k}}(A_k)) =: \mu^{\rm{F}}_e(A_1\times \cdots\times A_k),\] as required. \qed \begin{lem}\label{lem:diag} For any $e \subseteq [d]$ and $A \in \Phi_e$ we have \[\mu^\rm{F}_e(\pi_i^{-1}(A)\triangle \pi_j^{-1}(A)) = 0\quad\quad\forall i,j \in e:\] thus, the restriction $\mu^{\rm{F}}_e\upharpoonright_{\Phi_e^{\otimes e}}$ is just the diagonal measure $(\mu\upharpoonright_{\Phi_e})^{\Delta e}$. \end{lem} \noindent\textbf{Proof}\quad If $e = \{i_1 < i_2 < \ldots < i_k\}$ and $A_j \in \Phi_e$ for each $j \leq k$ then by definition we have \begin{eqnarray*}&&\mu^{\rm{F}}_e(A_1 \times A_2\times \cdots\times A_k)\\ &&\quad = \lim_{N\to \infty}\frac{1}{N}\sum_{n=1}^N\mu(T^{-n\bf{e}_{i_1}}(A_1)\cap T^{-n\bf{e}_{i_2}}(A_2)\cap\cdots\cap T^{-n\bf{e}_{i_k}}(A_k))\\ &&\quad = \lim_{N\to \infty}\frac{1}{N}\sum_{n=1}^N\mu(T^{-n\bf{e}_{i_1}}(A_1\cap A_2\cap\cdots\cap A_k))\\ && \quad = \mu(A_1\cap A_2\cap\cdots\cap A_k), \end{eqnarray*} as required. \qed It follows from the last lemma that whenever $e \subseteq e'$ the factors $\pi_i^{-1}(\Phi_e) \leq \S^{\otimes d}$ for $i\in e$ are all equal up to $\mu^{\rm{F}}_{e'}$-negligible sets. It will prove helpful later to have a dedicated notation for these factors. \begin{dfn}[Oblique copies] For each $e \subseteq [d]$ we refer to the common $\mu^{\rm{F}}_{[d]}$-completion of the $\sigma$-subalgebras $\pi_i^{-1}(\Phi_e)$, $i \in e$, as the \textbf{oblique copy} of $\Phi_e$, and denote it by $\Phi^{\rm{F}}_e$. More generally we shall refer to factors formed by repeatedly applying $\cap$ and $\vee$ to such oblique copies as \textbf{oblique factors}. \end{dfn} We are now ready to derive the more nontrivial consequences we need from Theorem~\ref{thm:sateds-joint-dist}. These will appear in two separate propositions.
\begin{prop}\label{prop:Fberg1} For each $i\leq d$ let \[\mathsf{C}_i := \bigvee_{j\leq d,\,j\neq i}\mathsf{Z}_0^{\bf{e}_i - \bf{e}_j}.\] If $\mathbf{X}$ is $\mathsf{C}_i$-sated for each $i$ then the coordinate projections $\pi_i:X^d\to X$ are relatively independent under $\mu^\rm{F}$ over the further factors \[\pi_i^{-1}\Big(\bigvee_{j\leq d,\,j\neq i}\S^{T^{\bf{e}_i} = T^{\bf{e}_j}}\Big) = \pi_i^{-1}(\Phi_{\langle i\rangle}).\] \end{prop} \noindent\textbf{Proof}\quad This follows by applying Theorem~\ref{thm:sateds-joint-dist} to the $\mathbb{Z}^{d+1}$-system $\vec{\mathbf{X}}$ introduced at the beginning of the previous section. Indeed, as explained there the coordinate projections $\pi_i:\vec{\mathbf{X}}\to \mathbf{X}_i$ witness that $\vec{\mathbf{X}}$ is a joining of the systems $\mathbf{X}_i \in \mathsf{Z}_0^{\mathbb{Z}(\bf{e}_{d+1} - \bf{e}_i)}$. Let \[\mathsf{D}_i := \bigvee_{j\leq d,\,j\neq i}\mathsf{Z}_0^{\mathbb{Z}(\bf{e}_i - \bf{e}_{d+1}) + \mathbb{Z}(\bf{e}_j - \bf{e}_{d+1})},\] an idempotent class of $\mathbb{Z}^{d+1}$-systems. Now the assumption that $\mathbf{X}$ is $\mathsf{C}_i$-sated as a $\mathbb{Z}^d$-system implies that $\mathbf{X}_i$ is $\mathsf{D}_i$-sated as a $\mathbb{Z}^{d+1}$-system. Indeed, given any extension of $\mathbb{Z}^{d+1}$-systems $\pi:\mathbf{Y}\to \mathbf{X}_i$ the subaction $(\mathsf{D}_i\mathbf{Y})^{\upharpoonright(\mathbb{Z}^d\oplus\{\bs{0}\})}$ is clearly a member of the class $\mathsf{C}_i$, so the $\mathsf{C}_i$-satedness of $\mathbf{X}$ implies that $\pi$ is relatively independent from $\zeta_{\mathsf{D}_i}^\mathbf{Y}:\mathbf{Y}\to \mathsf{D}_i\mathbf{Y}$ over its further factor map $\zeta_{\mathsf{C}_i}^{\mathbf{X}}$, which agrees with $\zeta_{\mathsf{D}_i}^{\mathbf{X}_i}$ because the whole of $\mathbf{X}_i$ is already $\mathbb{Z}(\bf{e}_{d+1} - \bf{e}_i)$-partially invariant. Setting $\Gamma_i := \mathbb{Z}(\bf{e}_i - \bf{e}_{d+1})$ for $i=1,2,\ldots,d$ and $\L := \mathbb{Z}\bf{e}_{d+1}$, these subgroups define a direct-sum decomposition of $\mathbb{Z}^{d+1}$. Therefore Theorem~\ref{thm:sateds-joint-dist} applies to tell us that the factors $\pi_i^{-1}(\S)$ are relatively independent under $\mu^\rm{F}$ over their further factors \[\pi_i^{-1}\Big(\bigvee_{j\leq d,\,j\neq i}\S^{T_i\upharpoonright(\mathbb{Z}(\bf{e}_i - \bf{e}_{d+1}) + \mathbb{Z}(\bf{e}_j - \bf{e}_{d+1}))}\Big) = \pi_i^{-1}(\Phi_{\langle i\rangle}),\] as required. \qed
For our second application of Theorem~\ref{thm:sateds-joint-dist} we need a preparatory lemma. \begin{lem}\label{lem:inherit-satedness} If $\mathsf{C} \subseteq \mathsf{D}$ are idempotent classes of $\Gamma$-systems for any discrete group $\Gamma$ and $\mathbf{X}$ is $\mathsf{C}$-sated, then $\mathsf{D}\mathbf{X}$ is also $\mathsf{C}$-sated. \end{lem} \noindent\textbf{Proof}\quad If $\mathbf{X}$ is $\mathsf{C}$-sated and $\pi:\mathbf{Y} \to \mathsf{D}\mathbf{X}$ is any extension, then the relatively independent product $\t{\mathbf{X}} := \mathbf{X}\times_{\{\zeta_\mathsf{D}^\mathbf{X} = \pi\}} \mathbf{Y}$ is an extension of $\mathbf{X}$ through the first coordinate projection (it is for the sake of using this relatively independent product that we need $\Gamma$ to be a group). Therefore by $\mathsf{C}$-satedness the factor map $\zeta_\mathsf{C}^{\t{\mathbf{X}}}$ is relatively independent from this coordinate projection over the further factor map $\zeta_\mathsf{C}^\mathbf{X}:\mathbf{X}\to\mathsf{C}\mathbf{X}$ of the latter, and so the same must be true of $\zeta_\mathsf{C}^\mathbf{Y}$.
However, the factor map $\zeta_\mathsf{C}^\mathbf{X}$ is clearly contained in the factor map $\zeta_\mathsf{D}^\mathbf{X}$ since $\mathsf{C} \subseteq \mathsf{D}$, and so it must actually equal $\zeta_\mathsf{C}^{\mathsf{D}\mathbf{X}}\circ \zeta_\mathsf{D}^{\mathbf{X}}:\mathbf{X}\to \mathsf{C}(\mathsf{D}\mathbf{X})$. Hence $\pi$ is relatively independent from $\zeta_\mathsf{C}^\mathbf{Y}$ over its further factor map $\zeta_\mathsf{C}^{\mathsf{D}\mathbf{X}}$, as required. \qed
\begin{prop}\label{prop:Fberg2} For each subset $e = \{i_1,i_2,\ldots,i_k\}\subseteq [d]$ let \[\mathsf{C}_e := \bigvee_{j\in [d]\setminus e}\mathsf{Z}_0^{\mathbb{Z}(\bf{e}_{i_1} - \bf{e}_{i_2}) + \cdots + \mathbb{Z}(\bf{e}_{i_1} - \bf{e}_{i_k}) + \mathbb{Z}(\bf{e}_{i_1} - \bf{e}_j)},\] and suppose now that $\mathbf{X}$ is $\mathsf{C}_e$-sated for every $e$ (so this includes the assumption of the previous proposition when $e$ is a singleton). Then under $\mu^\rm{F}$ the oblique factors have the property that $\Phi^\rm{F}_\mathcal{I}$ and $\Phi^\rm{F}_{\mathcal{I}'}$ are relatively independent over $\Phi^\rm{F}_{\mathcal{I}\cap \mathcal{I}'}$ for any up-sets $\mathcal{I},\mathcal{I}' \subseteq \binom{[d]}{\geq 2}$. \end{prop} \noindent\textbf{Proof}\quad\textbf{Step 1}\quad First observe that the result is trivial if $\mathcal{I} \supseteq \mathcal{I}'$, so now suppose that $\mathcal{I}' = \langle e\rangle$ where $e$ is a maximal member of $\binom{[d]}{\geq 2}\setminus \mathcal{I}$. Let $\{a_1,a_2,\ldots,a_m\}$ be the antichain of minimal elements of $\mathcal{I}$, so that $\Phi^\rm{F}_\mathcal{I} = \bigvee_{k\leq m}\Phi^\rm{F}_{a_k}$. The maximality assumption on $e$ implies that $e\cup \{j\}$ contains some $a_k$ for every $j \in [d]\setminus e$, and so $\mathcal{I}\cap \mathcal{I}'$ is precisely the up-set generated by these sets $e\cup \{j\}$ for $j\in[d]\setminus e$. We must therefore show that $\Phi^\rm{F}_e$ is relatively independent from $\bigvee_{k\leq m}\Phi^\rm{F}_{a_k}$ under $\mu^\rm{F}$ over the common factor $\bigvee_{j\in [d]\setminus e}\Phi^\rm{F}_{e\cup\{j\}}$. Observe also that since $e \not\in \mathcal{I}$ we can find some $j_k \in a_k\setminus e$ for each $k\leq m$. Moreover, each $j\in [d]\setminus e$ must appear as some $j_k$ in this list, since it appears at least for any $k$ for which $a_k \subseteq e \cup \{j\}$. Now Lemma~\ref{lem:diag} implies that $\Phi^\rm{F}_{a_k}$ agrees with $\pi_{j_k}^{-1}(\Phi_{a_k})$ up to $\mu^\rm{F}$-negligible sets. On the other hand, we clearly have $\pi_{j_k}^{-1}(\Phi_{a_k})\leq \pi_{j_k}^{-1}(\S)$, and so in fact it will suffice to show that $\Phi^\rm{F}_e$ is relatively independent from $\bigvee_{j\in [d]\setminus e}\pi_j^{-1}(\S)$ over $\bigvee_{j\in [d]\setminus e}\Phi^\rm{F}_{e\cup\{j\}}$. This alteration of the problem is important because it provides the linear independence needed to apply Theorem~\ref{thm:sateds-joint-dist}. Indeed, considering again the $\mathbb{Z}^{d+1}$-system $\vec{\mathbf{X}}$, in the present setting we see that the $\sigma$-subalgebras \[\Phi^\rm{F}_e\quad\hbox{and}\quad \pi_j^{-1}(\S)\ \hbox{for}\ j\in[d]\setminus e\] constitute a collection of factors of $\vec{\mathbf{X}}$ that are partially invariant under the subgroups \[\Gamma_e := \mathbb{Z}(\bf{e}_i - \bf{e}_{d+1}) + \sum_{\ell\in e\setminus \{i\}}\mathbb{Z}(\bf{e}_i - \bf{e}_\ell)\quad\hbox{and}\quad \Gamma_j := \mathbb{Z}(\bf{e}_j - \bf{e}_{d+1})\ \hbox{for}\ j\in [d]\setminus e\] respectively, where $i \in e$ is arbitrary.
On the one hand these subgroups can be inserted into a direct sum decomposition of $\mathbb{Z}^{d+1}$, and on the other we may argue just as in the proof of Proposition~\ref{prop:Fberg1} that the $\mathbb{Z}^{d+1}$-system defined by the factor $\Phi^\rm{F}_e$ is sated relative to the class $\bigvee_{j \in [d]\setminus e}\mathsf{Z}_0^{\Gamma_e + \Gamma_j}$, using our satedness assumption on $\mathbf{X}$ and Lemma~\ref{lem:inherit-satedness}. The conclusion therefore follows from Theorem~\ref{thm:sateds-joint-dist}. \textbf{Step 2}\quad The general case can now be treated for fixed $\mathcal{I}$ by induction on $\mathcal{I}'$. If $\cal{I}' \subseteq \cal{I}$ then the result is clear, so now let $e$ be a minimal member of $\cal{I}'\setminus\cal{I}$ of maximal size, and let $\cal{I}'' := \cal{I}'\setminus\{e\}$. It will suffice to prove that if $F \in L^\infty(\mu^{\rm{F}})$ is $\Phi^{\rm{F}}_{\cal{I}'}$-measurable then \[\mathsf{E}_{\mu^{\rm{F}}}(F\,|\,\Phi^{\rm{F}}_{\cal{I}}) = \mathsf{E}_{\mu^{\rm{F}}}(F\,|\,\Phi^{\rm{F}}_{\cal{I}\cap\cal{I}'}),\] and furthermore, by an approximation in $\|\cdot\|_2$ by finite sums of products, to do so only for $F$ that are of the form $F_1\cdot F_2$ with $F_1$ and $F_2$ being bounded and respectively $\Phi^{\rm{F}}_{\langle e \rangle}$- and $\Phi^{\rm{F}}_{\cal{I}''}$-measurable. However, for such a product we can write \[\mathsf{E}_{\mu^{\rm{F}}}(F\,|\,\Phi^{\rm{F}}_{\cal{I}}) = \mathsf{E}_{\mu^{\rm{F}}}\big(\mathsf{E}_{\mu^{\rm{F}}}(F\,|\,\Phi^{\rm{F}}_{\cal{I}\cup\cal{I}''})\,\big|\,\Phi^{\rm{F}}_{\cal{I}}\big) = \mathsf{E}_{\mu^{\rm{F}}}\big(\mathsf{E}_{\mu^{\rm{F}}}(F_1\,|\,\Phi^{\rm{F}}_{\cal{I}\cup\cal{I}''})\cdot F_2\,\big|\,\Phi^{\rm{F}}_{\cal{I}}\big).\] By Step 1 we have \[\mathsf{E}_{\mu^{\rm{F}}}(F_1\,|\,\Phi^{\rm{F}}_{\cal{I}\cup\cal{I}''}) = \mathsf{E}_{\mu^{\rm{F}}}(F_1\,|\,\Phi^{\rm{F}}_{(\cal{I}\cup\cal{I}'')\cap\langle e\rangle}),\] and on the other hand $(\cal{I}\cup\cal{I}'')\cap\langle e\rangle \subseteq \cal{I}''$ (because $\cal{I}''$ contains every subset of $[d]$ that strictly includes $e$, since $\mathcal{I}'$ is an up-set), so $(\cal{I}\cup\cal{I}'')\cap\langle e\rangle = \mathcal{I}''\cap \langle e\rangle$ and therefore another appeal to Step 1 gives \[\mathsf{E}_{\mu^{\rm{F}}}(F_1\,|\,\Phi^{\rm{F}}_{(\cal{I}\cup\cal{I}'')\cap\langle e\rangle}) = \mathsf{E}_{\mu^{\rm{F}}}(F_1\,|\,\Phi^{\rm{F}}_{\cal{I}''}).\] Therefore the above expression for $\mathsf{E}_{\mu^{\rm{F}}}(F_1F_2\,|\,\Phi^{\rm{F}}_{\cal{I}})$ simplifies to \begin{multline*} \mathsf{E}_{\mu^{\rm{F}}}\big(\mathsf{E}_{\mu^{\rm{F}}}(F_1\,|\,\Phi^{\rm{F}}_{\cal{I}''})\cdot F_2\,\big|\,\Phi^{\rm{F}}_{\cal{I}}\big) = \mathsf{E}_{\mu^{\rm{F}}}\big(\mathsf{E}_{\mu^{\rm{F}}}(F_1\cdot F_2\,|\,\Phi^{\rm{F}}_{\cal{I}''})\,\big|\,\Phi^{\rm{F}}_{\cal{I}}\big)\\ = \mathsf{E}_{\mu^{\rm{F}}}\big(\mathsf{E}_{\mu^{\rm{F}}}(F\,|\,\Phi^{\rm{F}}_{\cal{I}''})\,\big|\,\Phi^{\rm{F}}_{\cal{I}}\big) = \mathsf{E}_{\mu^{\rm{F}}}(F\,|\,\Phi^{\rm{F}}_{\cal{I}\cap \cal{I}''}) = \mathsf{E}_{\mu^{\rm{F}}}(F\,|\,\Phi^{\rm{F}}_{\cal{I}\cap\cal{I}'}), \end{multline*} where the third equality follows by the inductive hypothesis applied to $\cal{I}''$ and $\cal{I}$. 
\qed \section{Infinitary hypergraph removal and completion of the proof} Propositions~\ref{prop:Fberg1} and~\ref{prop:Fberg2} tell us a great deal about the structure of the probability measure $\mu^\rm{F}$ for a system $\mathbf{X}$ that is sated relative to all the necessary classes in terms of the partially-ordered family of factors \begin{center} $\phantom{i}$\xymatrix{ & & \S^{\otimes d}\ar@{-}[dll]\ar@{-}[d]\ar@{.}[dr]\ar@{.}[drr]\ar@{-}[drrr]\\ \pi_1^{-1}(\S)\ar@{-}[d]\ar@{-}[dr]\ar@{.}[drrr] & & \pi_2^{-1}(\S)\ar@{-}[dll]\ar@{-}[d]\ar@{.}[dr]\ar@{.}[drr] & \cdots & \cdots & \pi_d^{-1}(\S)\ar@{.}[dl]\ar@{-}[d]\\ \Phi^\rm{F}_{\{1,2\}}\ar@{.}[d]\ar@{.}[dr]\ar@{.}[drr] & \Phi^\rm{F}_{\{1,3\}}\ar@{.}[dr]\ar@{.}[d]\ar@{.}[dl] & \Phi^\rm{F}_{\{2,3\}}\ar@{.}[dr]\ar@{.}[d]\ar@{.}[dl] &\cdots &\cdots & \Phi^\rm{F}_{\{d-1,d\}}\ar@{.}[dll]\ar@{.}[dl]\ar@{.}[d]\\ \vdots\ar@{.}[d] & \vdots\ar@{.}[dl]\ar@{.}[d]\ar@{.}[dr]& \vdots\ar@{.}[dll]\ar@{.}[dl]\ar@{.}[d] & \vdots\ar@{.}[dl]\ar@{.}[drr] & \vdots\ar@{.}[dr]& \vdots\ar@{.}[d] \\ \Phi^\rm{F}_{\{2,3,\ldots,d\}}\ar@{-}[drr] & \Phi^\rm{F}_{\{1,3,\ldots,d\}}\ar@{-}[dr] & \Phi^\rm{F}_{\{1,2,4,\ldots,d\}}\ar@{-}[d] & \cdots\ar@{.}[dl] & \cdots\ar@{.}[dll] & \Phi^\rm{F}_{\{1,2,3,\ldots,d-1\}}\ar@{-}[dlll]\\ & & \Phi^\rm{F}_{[d]} } \end{center} by showing that large collections of the $\sigma$-subalgebras appearing here are relatively independent over the collections of further $\sigma$-subalgebras that they have in common. It is worth stressing at this point that we have \emph{not} proved any such assertion for the joint distribution of all the original factors $\Phi_e\leq \S$, but only for their oblique copies inside $\S^{\otimes d}$. The problem of describing the joint distribution of the factors $\Phi_e$ themselves seems to be much harder, because it runs into precisely the difficulties with linear dependence discussed in Section~\ref{sec:meta}: for example, if $e_1,e_2,e_3 \subseteq [d]$ are three subsets that are pairwise non-disjoint, then we have $\Phi_{e_i} = \S^{T\upharpoonright \Gamma_{e_i}}$ for $\Gamma_{e_i} = \sum_{j,j' \in e_i}\mathbb{Z}(\bf{e}_j - \bf{e}_{j'})$, and these three subgroups are now clearly not linearly independent. In our analysis of the oblique factors $\Phi^\rm{F}_e$ we carefully avoided a similar problem during Step 1 of the proof of Proposition~\ref{prop:Fberg2}, where we exploited the fact that $\Phi^\rm{F}_e$ is contained modulo negligible sets in $\pi_j^{-1}(\S)$ for any choice of $j \in e$, so that by making careful choices of the coordinates with which to express these oblique copies we were able to reduce the joint distribution of interest to the case covered by Theorem~\ref{thm:sateds-joint-dist}, involving only linearly independent subgroups. However, it seems clear that no similar trick will be available in the study of the factors $\Phi_e$. Happily, however, we do not need any such more precise information to complete our proof of Theorem~\ref{thm:multirec2}: in the remainder of this chapter we show how the structure proved above for $\mu^\rm{F}$ suffices. This will proceed through a slight modification of Tao's infinitary hypergraph removal lemma from~\cite{Tao07}, which first appeared in the form given below in~\cite{Aus--newmultiSzem}. 
\begin{prop}\label{prop:infremoval} Suppose that $(X,\S,\mu)$ is a standard Borel space and $\l$ is a $d$-fold coupling of $\mu$ on $(X^d,\S^{\otimes d})$ with coordinate projection maps $\pi_i:X^d\to X$, and that $(\Psi_e)_e$ is a collection of $\sigma$-subalgebras of $\S$ indexed by subsets $e \in \binom{[d]}{\geq 2}$ with the following properties: \begin{itemize} \item[{[i]}] if $e \subseteq e'$ then $\Psi_e \geq \Psi_{e'}$; \item[{[ii]}] if $i,j \in e$ and $A \in \Psi_e$ then $\l(\pi_i^{-1}(A)\triangle \pi_j^{-1}(A)) = 0$, so that we may let $\Psi^\dag_e$ be the common $\l$-completion of the lifted $\sigma$-algebras $\pi_i^{-1}(\Psi_e)$ for $i\in e$; \item[{[iii]}] if we define $\Psi^\dag_\mathcal{I} := \bigvee_{e\in \mathcal{I}}\Psi^\dag_e$ for each up-set $\mathcal{I} \subseteq \binom{[d]}{\geq 2}$, then the $\sigma$-subalgebras $\Psi^\dag_\mathcal{I}$ and $\Psi^\dag_{\mathcal{I}'}$ are relatively independent under $\l$ over $\Psi^\dag_{\mathcal{I}\cap \mathcal{I}'}$. \end{itemize} In addition, suppose that $\mathcal{I}_{i,j}$ for $i=1,2,\ldots,d$ and $j = 1,2,\ldots,k_i$ are collections of up-sets in $\binom{[d]}{\geq 2}$ such that $[d] \in \mathcal{I}_{i,j} \subseteq \langle i\rangle$ for each $i,j$, and that the sets $A_{i,j}\in \Psi_{\mathcal{I}_{i,j}}$, where $\Psi_{\mathcal{I}} := \bigvee_{e\in \mathcal{I}}\Psi_e$ for an up-set $\mathcal{I}$, are such that \[\l\Big(\prod_{i=1}^d\Big(\bigcap_{j=1}^{k_i}A_{i,j}\Big)\Big) = 0.\] Then we must also have \[\mu\Big(\bigcap_{i=1}^d\bigcap_{j=1}^{k_i}A_{i,j}\Big) = 0.\] \end{prop} \noindent\textbf{Proof of Theorem~\ref{thm:multirec2} from Proposition~\ref{prop:infremoval}}\quad Clearly the conclusion holds for a system $\mathbf{X}$ if it holds for any extension of $\mathbf{X}$, so by Theorem~\ref{thm:sateds-exist} we may assume that $\mathbf{X}$ is $\mathsf{C}_e$-sated for every $e\subseteq [d]$. Now suppose that $A_1,A_2,\ldots,A_d \in \S$ are such that $\mu^{\rm{F}}(A_1\times A_2\times\cdots\times A_d) = 0$. Then by Proposition~\ref{prop:Fberg1} we have \[\mu^{\rm{F}}(A_1\times A_2\times\cdots\times A_d) = \int_{X^d}\bigotimes_{i=1}^d \mathsf{E}_{\mu}(1_{A_i}\,|\,\Phi_{\langle i\rangle})\,\d\mu^{\rm{F}} = 0.\] The level set $B_i := \{\mathsf{E}_{\mu}(1_{A_i}\,|\,\Phi_{\langle i\rangle}) > 0\}$ (of course, this is unique only up to $\mu$-negligible sets) lies in $\Phi_{\langle i\rangle}$, and the above vanishing requires that also $\mu^{\rm{F}}(B_1\times B_2\times \cdots\times B_d) = 0$. Now setting $k_i=1$, $\mathcal{I}_{i,1}:= \langle i\rangle$ and $A_{i,1} := B_i$ for each $i \leq d$, Lemma~\ref{lem:diag} and Proposition~\ref{prop:Fberg2} imply that Proposition~\ref{prop:infremoval} applies to the partially invariant factors $\Phi_e$ and their oblique copies to give $\mu(B_1\cap B_2\cap\cdots\cap B_d) = 0$. On the other hand we must have $\mu(A_i\setminus B_i) = 0$ for each $i$, and so overall $\mu(A_1\cap A_2\cap\cdots\cap A_d) \leq \mu(B_1\cap B_2\cap\cdots\cap B_d) + \sum_{i=1}^d\mu(A_i\setminus B_i) = 0$, as required. \qed
The remainder of this chapter is given to the proof of Proposition~\ref{prop:infremoval}. This proceeds by induction on a suitable ordering of the possible collections of up-sets $(\mathcal{I}_{i,j})_{i,j}$, appealing to a handful of different possible cases at different steps of the induction. At the outermost level, this induction will be organized according to the depth of our up-sets. The proof given below is taken essentially unchanged from~\cite{Aus--newmultiSzem}, where in turn the statement and proof were adapted with only slight modifications from~\cite{Tao07}.
The reader may consult~\cite{Aus--newmultiSzem} for an explanation of these modifications. \begin{dfn} A family $(\mathcal{I}_{i,j})_{i,j}$ has the property \textbf{P} if it satisfies the conclusion of Proposition~\ref{prop:infremoval}. \end{dfn} We separate the various components of the induction into separate lemmas. \begin{lem}[Lifting using relative independence]\label{lem:lift-rel-ind} Suppose that all up-sets in the collection $(\mathcal{I}_{i,j})_{i,j}$ have depth at least $k$, that all those with depth exactly $k$ are principal, and that there are $\ell \geq 1$ of these. If property P holds for all similar collections having $\ell - 1$ up-sets of depth $k$, then it holds also for this collection. \end{lem} \noindent\textbf{Proof}\quad Let $\mathcal{I}_{i_1,j_1} = \langle e_1\rangle$, $\mathcal{I}_{i_2,j_2} = \langle e_2\rangle$, \ldots, $\mathcal{I}_{i_\ell,j_\ell} = \langle e_\ell\rangle$ be an enumeration of all the (principal) up-sets of depth $k$ in our collection. We will treat two separate cases. First suppose that two of the generating sets agree; by re-ordering if necessary we may assume that $e_1 = e_2$. Clearly we can assume that there are no duplicates among the coordinate-collections $(\mathcal{I}_{i,j})_{j=1}^{k_i}$ for each $i$ separately, so we must have $i_1 \neq i_2$. However, if we now suppose that $A_{i,j}\in \Psi_{\mathcal{I}_{i,j}}$ for each $i$, $j$ are such that \[\l\Big(\prod_{i=1}^d\Big(\bigcap_{j=1}^{k_i}A_{i,j}\Big)\Big) = 0,\] then by assumption [ii] the same equality holds if we simply replace $A_{i_1,j_1} \in \Psi_{\langle e_1\rangle}$ with $A'_{i_1,j_1}:= A_{i_1,j_1}\cap A_{i_2,j_2}$ and $A_{i_2,j_2}$ with $A'_{i_2,j_2} := X$. Now this last set can simply be ignored to leave an instance of a $\l$-negligible product for the same collection of up-sets omitting $\mathcal{I}_{i_2,j_2}$, and so property P of this reduced collection completes the proof. On the other hand, if all the $e_i$ are distinct, we shall simplify the last of the principal up-sets $\mathcal{I}_{i_\ell,j_\ell}$ by exploiting the relative independence among the lifted $\sigma$-algebras $\Psi_e^\dag$. Assume for notational simplicity that $(i_\ell,j_\ell) = (1,1)$; clearly this will not affect the proof. We will reduce to an instance of property P associated to the collection $(\mathcal{I}'_{i,j})$ defined by \[\mathcal{I}'_{i,j} := \left\{\begin{array}{ll}\langle e_\ell\rangle\setminus\{e_\ell\}&\quad\hbox{if}\ (i,j) = (1,1)\\ \mathcal{I}_{i,j}&\quad\hbox{else,}\end{array}\right.\] which has one fewer up-set of depth $k$ and so falls under the inductive assumption. Indeed, by property [iii] under $\l$ the set $\pi_1^{-1}(A_{1,1})$ is relatively independent from all the sets $\pi_i^{-1}(A_{i,j})$, $(i,j) \neq (1,1)$, over the $\sigma$-algebra $\pi_1^{-1}(\Psi_{\langle e_\ell\rangle\setminus\{e_\ell\}})$, which is dense inside $\Psi^\dag_{\langle e_\ell\rangle\setminus\{e_\ell\}}$. Therefore \begin{multline*} 0 = \l\Big(\prod_{i=1}^d\Big(\bigcap_{j=1}^{k_i}A_{i,j}\Big)\Big)\\ = \int_{X^d}\mathsf{E}_\mu(1_{A_{1,1}}\,|\,\Psi_{\langle e_\ell\rangle\setminus\{e_\ell\}})\circ\pi_1\cdot\prod_{j=2}^{k_1}1_{\pi_1^{-1}(A_{1,j})}\cdot\prod_{i=2}^d\prod_{j=1}^{k_i}1_{\pi_i^{-1}(A_{i,j})}\,\d\l.
\end{multline*} Setting $A'_{1,1}:= \{\mathsf{E}_\mu(1_{A_{1,1}}\,|\,\Psi_{\langle e_\ell\rangle\setminus\{e_\ell\}}) > 0\} \in \Psi_{\langle e_\ell\rangle\setminus\{e_\ell\}}$ and $A'_{i,j} := A_{i,j}$ for $(i,j) \neq (1,1)$, we have that $\mu(A_{1,1}\setminus A'_{1,1}) = 0$ and it follows from the above equality that also $\l\big(\prod_{i=1}^d\big(\bigcap_{j=1}^{k_i}A'_{i,j}\big)\big) = 0$, so an appeal to property P for the reduced collection of up-sets completes the proof. \qed
\begin{lem}[Lifting under finitary generation]\label{lem:lift-gen} Suppose that all up-sets in the collection $(\mathcal{I}_{i,j})_{i,j}$ have depth at least $k$ and that among those of depth $k$ there are $\ell \geq 1$ that are non-principal. If property P holds for all similar collections having at most $\ell - 1$ non-principal up-sets of depth $k$, then it also holds for this collection. \end{lem} \noindent\textbf{Proof}\quad Let $\mathcal{I}_{i_1,j_1}$, $\mathcal{I}_{i_2,j_2}$, \ldots, $\mathcal{I}_{i_\ell,j_\ell}$ be the non-principal up-sets of depth $k$, and now in addition let $e_1$, $e_2$, \ldots, $e_r$ be all the members of $\mathcal{I}_{i_\ell,j_\ell}$ of size $k$ (so, of course, $r \leq \binom{d}{k}$). Once again we will assume for simplicity that $(i_\ell,j_\ell) = (1,1)$. We break our work into two further steps. \textbf{Step 1}\quad First consider the case of a collection $(A_{i,j})_{i,j}$ such that for the set $A_{1,1}$, we can actually find \emph{finite} subalgebras $\mathcal{B}_s\subseteq \Psi_{e_s}$ for $s = 1,2,\ldots,r$ such that $A_{i_\ell,j_\ell} \in \mathcal{B}_1\vee \mathcal{B}_2\vee \cdots \vee \mathcal{B}_r\vee\Psi_{\mathcal{I}_{1,1}\cap\binom{[d]}{\geq k+1}}$ (so $A_{1,1}$ lies in one of our non-principal up-sets of depth $k$, but it fails to lie in an up-set of depth $k+1$ only `up to' finitely many additional generating sets). Choose $M\geq \max_{s\leq r}|\mathcal{B}_s|$, so that we can certainly express \[A_{1,1} = \bigcup_{m=1}^{M^r} (B_{m,1}\cap B_{m,2}\cap\cdots\cap B_{m,r}\cap C_m)\] with $B_{m,s}\in \mathcal{B}_s$ for each $s\leq r$ and $C_m \in \Psi_{\mathcal{I}_{1,1}\cap\binom{[d]}{\geq k+1}}$. Inserting this expression into the equation \[\l\Big(\prod_{i=1}^d\Big(\bigcap_{j=1}^{k_i}A_{i,j}\Big)\Big) = 0\] now gives that each of the $M^r$ individual product sets \[\Big((B_{m,1}\cap B_{m,2}\cap\cdots\cap B_{m,r}\cap C_m)\cap \bigcap_{j=2}^{k_1}A_{1,j}\Big)\times\prod_{i=2}^d \Big(\bigcap_{j=1}^{k_i}A_{i,j}\Big)\] is $\l$-negligible. Now consider the family of up-sets comprising the original $\mathcal{I}_{i,j}$ if $i=2,3,\ldots,d$ and the collection $\langle e_1\rangle$, $\langle e_2\rangle$, \ldots, $\langle e_r\rangle$, $\mathcal{I}_{1,1}\cap\binom{[d]}{\geq k+1}$, $\mathcal{I}_{1,2}$, $\mathcal{I}_{1,3}$, \ldots, $\mathcal{I}_{1,k_1}$ corresponding to $i = 1$. We have broken the depth-$k$ non-principal up-set $\mathcal{I}_{1,1}$ into the higher-depth up-set $\mathcal{I}_{1,1}\cap\binom{[d]}{\geq k+1}$ and the principal up-sets $\langle e_s\rangle$, and so there are only $\ell-1$ minimal-depth non-principal up-sets in this new family. It is clear that for each $m\leq M^r$ the above product set is associated to this family of up-sets, and so an inductive appeal to property P for this family tells us that also \[\mu\Big((B_{m,1}\cap B_{m,2}\cap\cdots\cap B_{m,r}\cap C_m)\cap \bigcap_{j=2}^{k_1}A_{1,j}\cap \bigcap_{i=2}^d \bigcap_{j=1}^{k_i}A_{i,j}\Big) = 0\] for every $m\leq M^r$.
Since the union of these sets is just $\bigcap_{i=1}^d\bigcap_{j=1}^{k_i} A_{i,j}$, this gives the desired negligibility in this case. \textbf{Step 2}\quad Now we return to the general case, which will follow by a suitable limiting argument applied to the conclusion of Step 1. Since any $\Psi_e$ is countably generated modulo $\mu$, for each $e$ with $|e| = k$ we can find an increasing sequence of finite subalgebras $\mathcal{B}_{e,1} \subseteq \mathcal{B}_{e,2} \subseteq\ldots$ that generates $\Psi_e$ up to $\mu$-negligible sets. In terms of these define approximating sub-$\sigma$-algebras \[\Xi^{(n)}_{i,j} := \Psi_{\mathcal{I}_{i,j}\cap \binom{[d]}{\geq k+1}}\vee \bigvee_{e \in \mathcal{I}_{i,j}\cap \binom{[d]}{k}} \mathcal{B}_{e,n},\] so for each $\mathcal{I}_{i,j}$ these form an increasing family of $\sigma$-algebras that generates $\Psi_{\mathcal{I}_{i,j}}$ up to $\mu$-negligible sets (indeed, if $\mathcal{I}_{i,j}$ does not contain any sets of the minimal depth $k$ then we simply have $\Xi^{(n)}_{i,j} = \Psi_{\mathcal{I}_{i,j}}$ for all $n$). Now property [iii] implies for each $n$ that $\Psi^\dag_{\mathcal{I}_{1,1}}$ and $\bigvee_{(i,j)\neq (1,1)}\pi_i^{-1}(\Xi^{(n)}_{i,j})$ are relatively independent over $\pi_1^{-1}(\Xi^{(n)}_{1,1})$, and similarly with any other pair $(i,j)$ in place of $(1,1)$. Given now a family of sets $(A_{i,j})_{i,j}$ associated to $(\mathcal{I}_{i,j})_{i,j}$, for each $(i,j)$ the conditional expectations $\mathsf{E}_\mu(1_{A_{i,j}}\,|\,\Xi^{(n)}_{i,j})$ form an almost surely uniformly bounded martingale converging to $1_{A_{i,j}}$ in $L^2(\mu)$. Letting \[B^{(n)}_{i,j}:= \{\mathsf{E}_\mu(1_{A_{i,j}}\,|\,\Xi^{(n)}_{i,j}) > 1-\delta\}\] for some small $\delta > 0$ (to be specified momentarily), it is clear that we also have $\mu(A_{i,j}\triangle B_{i,j}^{(n)}) \to 0$ as $n\to\infty$. Let \[F := \prod_{i=1}^d\Big(\bigcap_{j=1}^{k_i}B^{(n)}_{i,j}\Big).\] We now compute using the above-mentioned relative independence that \begin{eqnarray*} &&\l(F\setminus \pi_i^{-1}(A_{i,j}))\\ && \quad = \int_{X^d} \Big(\prod_{(i',j')}1_{B^{(n)}_{i',j'}}\circ\pi_{i'}\Big) - 1_{A_{i,j}}\circ\pi_i\cdot\Big(\prod_{(i',j')}1_{B^{(n)}_{i',j'}}\circ\pi_{i'}\Big)\,\d\l\\ &&\quad = \int_{X^d} (1_{B^{(n)}_{i,j}\setminus A_{i,j}}\circ\pi_i)\cdot \Big(\prod_{(i',j')\neq (i,j)}1_{B^{(n)}_{i',j'}}\circ\pi_{i'}\Big)\,\d\l\\ &&\quad = \int_{X^d} (\mathsf{E}_\mu(1_{B^{(n)}_{i,j}\setminus A_{i,j}}\,|\,\Xi^{(n)}_{i,j})\circ\pi_i)\cdot \Big(\prod_{(i',j')\neq (i,j)}1_{B^{(n)}_{i',j'}}\circ\pi_{i'}\Big)\,\d\l \end{eqnarray*} for each pair $(i,j)$. However, from the definition of $B^{(n)}_{i,j}$ we must have \[\mathsf{E}_\mu(1_{B^{(n)}_{i,j}\setminus A_{i,j}}\,|\,\Xi^{(n)}_{i,j}) \leq \delta 1_{B^{(n)}_{i,j}}\] almost surely, and therefore the above integral inequality implies that \[\l(F\setminus \pi_i^{-1}(A_{i,j}))\leq \delta\int_{X^d} (1_{B^{(n)}_{i,j}}\circ\pi_i)\cdot \Big(\prod_{(i',j')\neq (i,j)}1_{B^{(n)}_{i',j'}}\circ\pi_{i'}\Big)\,\d\l = \delta \l(F).\] From this we can estimate as follows: \[\l(F) \leq \l\Big(\prod_{i=1}^d\Big(\bigcap_{j=1}^{k_i}A_{i,j}\Big)\Big) + \sum_{(i,j)}\l(F\setminus \pi_i^{-1}(A_{i,j})) \leq 0 + \Big(\sum_{i=1}^dk_i\Big)\delta \l(F),\] and so provided we chose $\delta < \big(\sum_{i=1}^dk_i\big)^{-1}$ we must in fact have $\l(F) = 0$.
We have now obtained sets $(B^{(n)}_{i,j})_{i,j}$ that are associated to the family $(\mathcal{I}_{i,j})_{i,j}$ and satisfy the property of lying in finitely-generated extensions of the relevant factors corresponding to the members of the $\mathcal{I}_{i,j}$ of minimal size, and so we can apply the result of Step 1 to deduce that $\mu\big(\bigcap_{i=1}^d\bigcap_{j=1}^{k_i}B^{(n)}_{i,j}\big) = 0$. It follows that \[\mu\Big(\bigcap_{i=1}^d\bigcap_{j=1}^{k_i}A_{i,j}\Big) \leq \sum_{i,j}\mu(A_{i,j}\setminus B^{(n)}_{i,j}) \to 0\quad\quad\hbox{as }n\to\infty,\] as required. \qed \noindent\textbf{Proof of Proposition~\ref{prop:infremoval}}\quad We first take as our base case $k_i = 1$ and $\mathcal{I}_{i,1} = \{[d]\}$ for each $i=1,2,\ldots,d$. In this case we know from property [ii] that for any $A \in \Psi_{[d]}$ the pre-images $\pi_i^{-1}(A)$ are all equal up to negligible sets, and so given $A_1$, $A_2$, \ldots, $A_d \in \Psi_{[d]}$ we have $0 = \l(A_1\times A_2\times\cdots\times A_d) = \mu(A_1\cap A_2\cap\cdots\cap A_d)$. The remainder of the proof now just requires putting the preceding lemmas into order to form an induction with three layers: if our collection has any non-principal up-sets of minimal depth, then Lemma~\ref{lem:lift-gen} allows us to reduce their number at the expense only of introducing new principal up-sets of the same depth; and having removed all the non-principal minimal-depth up-sets, Lemma~\ref{lem:lift-rel-ind} enables us to remove also the principal ones until we are left only with up-sets of increased minimal depth. This completes the proof. \qed \chapter{The Density Hales-Jewett Theorem}\label{chap:DHJ} Much as for Szemer\'edi's Theorem and its multidimensional generalization, the Ergodic Ramsey Theory approach to Theorem B begins by establishing its equivalence to a result about stochastic processes. We have deferred the introduction of the stochastic processes analog of Theorem B until now because it involves a less well-known family of processes than the tuples of commuting transformations that appear in Theorem A, and these new stochastic processes require a separate discussion. The proof from~\cite{FurKat91} of the correspondence between Theorem B and an assertion about these processes is also less well-known, and so we recall this in the first section below for completeness. After formulating the stochastic processes result to which Theorem B is equivalent, we introduce an additional semigroup $\Gamma$ of transformations on these processes and argue that we may reduce further to the case of processes whose distributions are invariant. This leaves us with a class of $\Gamma$-systems, on which we will bring a notion of satedness to bear. However, as promised at the beginning of Chapter 2, this first requires some modifications to that notion, effectively by imposing additional restrictions on the factor maps we allow in our theory of a kind not involved heretofore. With these modifications in place we will proceed to analogs of Propositions~\ref{prop:Fberg1} and~\ref{prop:Fberg2} and thence to the proof of Theorem B. \section{The correspondence with a class of stationary processes}\label{sec:startDHJ} \subsection*{Combinatorial notation} In addition to the finite spaces $[k]^N$ appearing in the statement of Theorem B, we will work with their union \[[k]^\ast := \bigcup_{N\geq 1}[k]^N.\] The spaces $[k]^N$ and $[k]^\ast$ are referred to as the \textbf{$N$-dimensional} and \textbf{infinite-dimensional combinatorial spaces} over the alphabet $[k]$ respectively. 
Most of this chapter will consider probabilities on product spaces indexed by $[k]^\ast$. If $A\subseteq [k]^N$ then we denote its \textbf{density} by \[\d(A) := \frac{|A|}{k^N};\] thus the assumption of Theorem B is that $N$ is sufficiently large in terms of $k$ and $\d(A)$. Given two finite words $u,v \in [k]^\ast$ we denote their concatenation by either $uv$ or $u\oplus v$. For any finite $n$ we define an \textbf{$n$-dimensional subspace} of $[k]^\ast$ to be an injection $\phi:[k]^n \hookrightarrow [k]^\ast$ specified as follows: for some integers $0 = N_0 < N_1 < N_2 < \ldots < N_n$, nonempty subsets $I_1 \subseteq [N_1]$, $I_2 \subseteq [N_2]\setminus [N_1]$, \ldots, $I_n \subseteq [N_n]\setminus [N_{n-1}]$ and a word $w \in [k]^{N_n}$ we let $\phi(v_1v_2\cdots v_n)$ be the word in $[k]^\ast$ of length $N_n$ given by \[\phi(v_1v_2\cdots v_n)_m := \left\{\begin{array}{ll}w_m&\quad\quad\hbox{if }m\in [N_n]\setminus (I_1 \cup I_2\cup\cdots\cup I_n)\\ v_i&\quad\quad\hbox{if }m\in I_i.\end{array}\right.\] In these terms a combinatorial line is simply a $1$-dimensional combinatorial subspace. (For instance, taking $k = 3$, $N_1 = 4$, $I_1 = \{2,4\}$ and $w = 1231$ gives the combinatorial line consisting of the three words $1131$, $1232$ and $1333$ in $[3]^4$.) Similarly, an \textbf{infinite-dimensional subspace} (or often just \textbf{subspace}) of $[k]^\ast$ is an injection $\phi:[k]^\ast \hookrightarrow [k]^\ast$ specified using some infinite sequence $0 = N_0 < N_1 < N_2 < \ldots$, nonempty subsets $I_{i+1} \subseteq [N_{i+1}]\setminus [N_i]$ and words $w_i \in [k]^{N_i}$, where for any $v \in [k]^n$ its image $\phi(v)$ has length $N_n$ and is given by the above formula with $w := w_n$. It is clear that the collection of all subspaces of $[k]^\ast$ forms a semigroup $\Gamma$ under composition. Finally, let us define \textbf{letter-replacement maps}: given $i\in [k]$ and $e \subseteq [k]$, for each $N\geq 1$ we define $r^N_{e,i}:[k]^N \to [k]^N$ by \[r^N_{e,i}(w)_m := \left\{\begin{array}{ll}i&\quad\quad\hbox{if }w_m \in e\\ w_m&\quad\quad\hbox{if }w_m \in [k]\setminus e\end{array}\right.\] for $m\leq N$, and let \[r_{e,i} := \bigcup_{N\geq 1}r^N_{e,i}:[k]^\ast\to [k]^\ast\] (so clearly $r_{e,i}$ actually takes values in the subset $(([k]\setminus e)\cup \{i\})^\ast\subseteq [k]^\ast$). \subsection*{Reformulation in terms of stochastic processes} The correspondence that Furstenberg and Katznelson establish for Theorem B is between dense subsets of the finite-dimensional combinatorial spaces $[k]^N$ and stochastic processes \emph{indexed} by the infinite-dimensional combinatorial space $[k]^\ast$. \begin{thm}[Infinitary Density Hales-Jewett Theorem]\label{thm:inf-DHJ} For any $\delta > 0$, if $\mu$ is a Borel probability measure on $\{0,1\}^{[k]^\ast}$ with the property that \[\mu\{\bf{x} \in \{0,1\}^{[k]^\ast}:\ x_w = 1\} \geq \delta\quad\quad\forall w \in [k]^\ast,\] then there is a combinatorial line $\phi:[k]\hookrightarrow [k]^\ast$ such that \[\mu\{\bf{x} \in \{0,1\}^{[k]^\ast}:\ x_{\phi(i)} = 1\ \forall i \in [k]\} > 0.\] \end{thm} \noindent\textbf{Proof of Theorem B from Theorem~\ref{thm:inf-DHJ}}\quad Clearly we may restrict our attention to $k\geq 2$. We will suppose that Theorem B fails, and show that this would give rise to a counterexample to Theorem~\ref{thm:inf-DHJ}. We break this into two steps. \textbf{Step 1}\quad First observe that if $N\geq L\geq 1$ and $A\subseteq [k]^N$ has $\d(A) > 1 - \frac{1}{k^{2L}}$ then $A$ necessarily contains a whole $L$-dimensional combinatorial subspace.
Indeed, having density as high as this implies that each of the $k^L$ subsets \[A_u := \{w \in [k]^{N-L}:\ u\oplus w \in A\}\quad\quad\hbox{for}\ u \in [k]^L\] has density greater than $1 - \frac{1}{k^L}$, and so there must be some $w \in \bigcap_{u\in [k]^L}A_u$, implying that the subspace $[k]^L\hookrightarrow [k]^N:u\mapsto u\oplus w$ has image lying entirely in $A$. In particular, letting $L = 1$, if we assume that Theorem B fails then we may let \[\delta_0 := \sup\{\delta > 0:\ \hbox{Theorem B fails for subsets of density $\delta$}\}\] and deduce that $0 < \delta_0 < 1$. \textbf{Step 2}\quad Now fix some integer $L \geq 1$ and let $A \subseteq [k]^{N}$ be a subset of density $\d(A) = \delta > (1 + \frac{1}{2k^{L+1}})^{-1}\delta_0$ for some $N \geq L$ such that $A$ contains no combinatorial lines. Let $N = L + M$ and decompose $[k]^{N}$ as $[k]^L \oplus [k]^{M}$. For each $w \in [k]^L$ let \[A_w := \{v \in [k]^{M}:\ w\oplus v \in A\}.\] Clearly \[\frac{1}{k^L}\sum_{w\in [k]^L}\d(A_w) = \d(A) = \delta,\] and on the other hand $\d(A_w) < (1 + \frac{1}{2k^{L+1}})\delta$ for each $w$ once $N$ is sufficiently large, for otherwise $A_w$ would contain a combinatorial line by the definition of $\delta_0$. Therefore the above equation between densities and Chebyshev's inequality require that in fact \emph{every} $w \in [k]^L$ have $\d(A_w) > \delta/2$. Now defining the probability measure $\mu_L$ on $\{0,1\}^{[k]^L}$ by \[\mu_L\{(x_w)_{w\in [k]^L}\} := \d(\{v \in [k]^M:\ x_w = 1_{A_w}(v)\ \forall w\in [k]^L\})\] for each $(x_w)_{w\in [k]^L} \in \{0,1\}^{[k]^L}$, we see that for each $L$ we have produced a probability $\mu_L$ on $\{0,1\}^{[k]^L}$ such that \[\mu_L\{\bf{x} \in \{0,1\}^{[k]^L}:\ x_w = 1\} = \d(A_w) \geq \delta/2 \geq \delta_0/4\quad\quad\forall w \in [k]^L\] but \[\mu_L\{\bf{x} \in \{0,1\}^{[k]^L}:\ x_{\phi(i)} = 1\ \forall i \in [k]\} = 0\] for any combinatorial line $\phi:[k]\hookrightarrow [k]^L$. Finally defining $\mu := \bigotimes_{L\geq 1}\mu_L$, we obtain a measure that contradicts Theorem~\ref{thm:inf-DHJ} with density $\delta_0/4$. \qed \noindent\textbf{Remark}\quad The above proof is essentially taken from Proposition 2.1 of~\cite{FurKat91}, where the reverse implication is also proved. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline \section{Strongly stationary processes} After introducing Theorem~\ref{thm:inf-DHJ}, Furstenberg and Katznelson make a further reduction to a special subclass of measures. \begin{dfn}[Semigroup action of combinatorial subspaces] If $\phi:[k]^N\hookrightarrow [k]^\ast$ is a combinatorial subspace then for any product space $K^{[k]^\ast}$ we define the corresponding map $T_\phi:K^{[k]^\ast}\to K^{[k]^N}$ by \[(T_\phi(\bf{x}))_w := x_{\phi(w)}\quad\quad\hbox{for}\ w \in [k]^N\ \hbox{and}\ \bf{x} = (x_u)_{u\in [k]^\ast} \in K^{[k]^\ast},\] and similarly define $T_\phi:K^{[k]^\ast}\to K^{[k]^\ast}$ in case $\phi:[k]^\ast\hookrightarrow [k]^\ast$. In the latter case this specifies an action $\Gamma\curvearrowright K^{[k]^\ast}$. \end{dfn} \begin{dfn}[Strongly stationary laws] A probability measure $\mu$ on the product $(K^{[k]^\ast},\Psi^{\otimes [k]^\ast})$ for some standard Borel space $(K,\Psi)$ is \textbf{strongly stationary} if $T_{\phi\#}\mu = \mu$ for all subspaces $\phi \in \Gamma$. In this case the transformations $T_\phi$ give to $(K^{[k]^\ast},\Psi^{\otimes [k]^\ast},\mu)$ the structure of a probability-preserving $\Gamma$-system. 
\end{dfn} \begin{lem}\label{lem:sssuffice} If Theorem~\ref{thm:inf-DHJ} holds for all strongly stationary measures for any $\delta > 0$ then it holds for all measures satisfying the conditions of that theorem for any $\delta > 0$. \end{lem} \noindent\textbf{Proof}\quad This argument is again lifted directly from~\cite{FurKat91}, and we only sketch the details. Given a measure $\mu$ satisfying the conditions of Theorem~\ref{thm:inf-DHJ} for some $\delta > 0$, by applying the Carlson-Simpson Theorem~\cite{Car88} to arbitrarily fine finite open coverings of the finite-dimensional spaces of probability distributions on $\{0,1\}^{[k]^n}$ for increasingly large $n$, we obtain a subspace $\psi:[k]^\ast \hookrightarrow [k]^\ast$ and an infinite word $w = w_1w_2\cdots \in [k]^\mathbb{N}$ such that the restricted laws \[T_{\psi(w_1w_2\cdots w_m\oplus \ \cdot\ )\#}\mu\] converge to a strongly stationary law as $m\to\infty$, and since \emph{all} one-dimensional marginals of the input law gave probability at least $\delta$ to $\{1\}$, the same is true of the limit measure. Finally, the subset of probability measures \[\big\{\nu \in \Pr\{0,1\}^{[k]^\ast}:\ \nu\{\bf{x} \in \{0,1\}^{[k]^\ast}:\ x_{\phi(i)} = 1\ \forall i\leq k\} > 0\big\}\] is finite-dimensional and open for any given line $\phi:[k]\hookrightarrow [k]^\ast$, so if the limit measure is in this set then so is some image of the original measure. \qed An immediate consequence of the strong stationarity of a measure $\mu$ is that for any two $N$-dimensional subspaces $\phi,\psi:[k]^N\hookrightarrow [k]^\ast$ we have $T_{\phi\#}\mu = T_{\psi\#}\mu$. In case $N = 0$ we refer to this common image measure as the \textbf{point marginal} of $\mu$ and denote it by $\mu^{\rm{pt}}$, and similarly in case $N=1$ it is the \textbf{line marginal} of $\mu$ and is denoted by $\mu^{\rm{line}}$. In these terms it is possible to give another, more convenient reformulation of Theorem~\ref{thm:inf-DHJ}. \begin{thm}\label{thm:inf-DHJ2} If $(K,\Psi)$ is a standard Borel space and $\mu$ is a strongly stationary law on $(K^{[k]^\ast},\Psi^{\otimes [k]^\ast})$ then for any $A_1,A_2,\ldots,A_k \in \Psi$ we have \[\mu^{\rm{line}}(A_1\times A_2\times \cdots \times A_k) = 0\quad\quad\Rightarrow\quad\quad \mu^{\rm{pt}}(A_1\cap A_2\cap\cdots \cap A_k) = 0.\] \end{thm} The resemblance to Theorem~\ref{thm:multirec2} is far from accidental! The proof of Theorem~\ref{thm:inf-DHJ2} will involve a version of satedness for our systems of interest; however, here a slight subtlety creeps in. In the following we will need to work with only those $\Gamma$-systems that are of the form $(K^{[k]^\ast},\Psi^{\otimes [k]^\ast},\mu,T)$ for some strongly stationary measure $\mu$ (of course, the huge semigroup $\Gamma$ could also have invariant measures for all sorts of other Borel actions, not of this form). On the other hand, the conclusion of Theorem~\ref{thm:inf-DHJ2} is not about the joint distribution of several copies of whole $\Gamma$-systems under some self-joining. Rather, it is about the joint distribution of some copies of just the `one-dimensional' point marginal $(K,\Psi,\mu^{\rm{pt}})$ under the line marginal: this is only a tiny fragment of the whole system $(K^{[k]^\ast},\Psi^{\otimes [k]^\ast},\mu,T)$. The way we can keep track of the structure of point and line marginals between different such systems is by restricting the kinds of factor map we allow.
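\noindent\textbf{Example}\quad Before introducing these restricted factor maps, it may help to record the simplest example of the above notions (given here purely for illustration): if $\nu$ is a probability measure on a standard Borel space $(K,\Psi)$ and $\mu := \nu^{\otimes [k]^\ast}$ is the associated i.i.d.\ law, then for any subspace $\phi \in \Gamma$ the coordinates $(x_{\phi(w)})_{w\in [k]^\ast}$ are again i.i.d.\ with law $\nu$ (because $\phi$ is injective), so $T_{\phi\#}\mu = \mu$ and $\mu$ is strongly stationary, with $\mu^{\rm{pt}} = \nu$ and $\mu^{\rm{line}} = \nu^{\otimes k}$. For such a $\mu$ the implication of Theorem~\ref{thm:inf-DHJ2} is immediate: if $\nu(A_1)\nu(A_2)\cdots\nu(A_k) = 0$ then some $\nu(A_i) = 0$, and so certainly $\nu(A_1\cap A_2\cap\cdots\cap A_k) = 0$. The substance of the theorem lies, of course, in those strongly stationary laws that are far from independent. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline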
\begin{dfn} Let $\mathsf{A}$ be the class of $\Gamma$-systems given by strongly stationary measures on product spaces indexed by $[k]^\ast$, as above. A \textbf{coordinatewise factor} (or \textbf{cw-factor}) of $\mathbf{X} = (K^{[k]^\ast},\Psi^{\otimes [k]^\ast},\mu,T) \in \mathsf{A}$ is a $\sigma$-subalgebra of the form $\Phi^{\otimes [k]^\ast} \leq \Psi^{\otimes [k]^\ast}$ for some $\Phi \leq \Psi$. Slightly abusively, we will sometimes refer instead to the single-coordinate $\sigma$-subalgebra $\Phi$ as a cw-factor. Likewise, a \textbf{cw-factor map} is a map of the form \[f^\ast:(K^{[k]^\ast},\Psi^{\otimes [k]^\ast},\mu,T)\to (L^{[k]^\ast},\Xi^{\otimes [k]^\ast},\nu,T):(x_w)_w \mapsto (f(x_w))_w\] for some Borel map $f:(K,\Psi)\to (L,\Xi)$, and $f^\ast$ is a \textbf{cw-isomorphism} if $f$ is measurably invertible away from some $\mu^{\rm{pt}}$- and $\nu^{\rm{pt}}$-negligible sets (this is clearly equivalent to its being an isomorphism in the usual sense). With $f^\ast$ as above we shall sometimes refer to $f$ as its corresponding \textbf{single-coordinate map}. \end{dfn} It is now easy to see that the class $\mathsf{A}$ is closed under joinings and inverse limits, provided that we interpret a joining of two systems $(K^{[k]^\ast},\Psi^{\otimes [k]^\ast},\mu,T)$ and $(L^{[k]^\ast},\Xi^{\otimes [k]^\ast},\nu,T)$ as a strongly stationary measure on $(K\times L)^{[k]^\ast}$ and that we restrict our attention to inverse sequences whose connecting maps are all cw-factor maps. We will henceforth refer to a subclass $\mathsf{C}\subseteq \mathsf{A}$ as \textbf{cw-idempotent} if it is closed under cw-isomorphisms, joinings and inverse limits involving cw-factor maps, and now observe that all of the definitions and lemmas of Section~\ref{sec:idem} have direct analogs for cw-idempotent classes obtained simply by insisting that all morphisms be given by cw-factor maps. In particular, if $\mathsf{C}$ is a cw-idempotent class and $\mathbf{X} = (K^{[k]^\ast},\Psi^{\otimes [k]^\ast},\mu,T)\in \mathsf{A}$ then the maximal cw-$\mathsf{C}$-factor of $\mathbf{X}$ is given by $\Phi^{\otimes [k]^\ast}$ where $\Phi$ is the maximal $\sigma$-algebra in the family \begin{multline*} \big\{\Xi\leq \Psi:\ \Xi\ \hbox{is generated by some Borel map}\ f:(K,\Psi)\to (K_1,\Psi_1)\\ \hbox{such that}\ (K_1^{[k]^\ast},\Psi_1^{\otimes [k]^\ast},f^\ast_\#\mu,T) \in \mathsf{C}\big\}. \end{multline*} We will write a cw-factor map coordinatizing this maximal $\mathsf{C}$-factor as $\zeta^\ast_\mathsf{C}$ for some map $\zeta_\mathsf{C}:K\to \mathsf{C} K$ of single-coordinate spaces. Given these observations we can make our analog of Definition~\ref{dfn:sated}. \begin{dfn}[CW-sated systems]\label{dfn:Csated} For a cw-idempotent class $\mathsf{C}\subseteq \mathsf{A}$, a system $\mathbf{X} \in \mathsf{A}$ is \textbf{cw-$\mathsf{C}$-sated} if for any cw-extension \[\pi^\ast:\t{\mathbf{X}} = (\t{K}^{[k]^\ast},\t{\Psi}^{\otimes[k]^\ast},\t{\mu},T)\to \mathbf{X}\] the single-coordinate maps $\pi:\t{K}\to K$ and $\t{\zeta}_\mathsf{C}:\t{K}\to \mathsf{C}\t{K}$ are relatively independent under $\t{\mu}^{\rm{pt}}$ over $\zeta_\mathsf{C}\circ\pi:\t{K}\to K\to\mathsf{C} K$, where $\t{\zeta}^\ast_\mathsf{C}$ and $\zeta_\mathsf{C}^\ast$ coordinatize the maximal $\mathsf{C}$-factors of $\t{\mathbf{X}}$ and $\mathbf{X}$ respectively.
\end{dfn} \begin{thm}\label{thm:Csateds-exist} If $(\mathsf{C}_i)_{i\in I}$ is a countable family of cw-idempotent classes then any system $\mathbf{X}_0 \in \mathsf{A}$ admits a cw-extension $\pi:\mathbf{X} \to \mathbf{X}_0$ that is cw-$\mathsf{C}_i$-sated for every $i\in I$. \end{thm} \noindent\textbf{Proof outline}\quad This proceeds in exact analogy with the proof of Theorem~\ref{thm:sateds-exist}. First, applying the argument for Lemma~\ref{lem:inverse-lim-sated} to a bounded measurable function $f$ on the single-coordinate space of an inverse limit shows that an inverse limit of cw-$\mathsf{C}$-sated systems through cw-factor maps is cw-$\mathsf{C}$-sated. Next, given a system \[\mathbf{X} = (K^{[k]^\ast},\Psi^{\otimes [k]^\ast},\mu,T)\in \mathsf{A},\] we show how to produce a cw-sated extension for a single cw-idempotent class $\mathsf{C}$: first enumerate an $L^2$-dense sequence $(f_r)_{r\geq 1}$ in the unit ball of $L^\infty(\mu^{\rm{pt}})$; then apply the same exhaustion argument as in Step 1 of Theorem~\ref{thm:sateds-exist} to produce an inverse sequence of cw-extensions \begin{multline*} \ldots\stackrel{(\zeta^{n+2}_{n+1})^\ast}{\to} \mathbf{X}_{n+1} = (K_{n+1}^{[k]^\ast},\Psi_{n+1}^{\otimes [k]^\ast},\mu_{n+1},T)\\ \stackrel{(\zeta^{n+1}_n)^\ast}{\to}\mathbf{X}_n = (K_n^{[k]^\ast},\Psi_n^{\otimes [k]^\ast},\mu_n,T)\stackrel{(\zeta^n_{n-1})^\ast}{\to}\ldots\to \mathbf{X} \end{multline*} such that for each $r$ it happens cofinally often that this extension is within a factor of $2$ of achieving the optimal increase in the $L^2$-norm of the conditional expectation $\mathsf{E}_{\mu^{\rm{pt}}_n}(f_r\circ\psi^n_0\,|\,\zeta_\mathsf{C}^{(n)})$ (where $\zeta_\mathsf{C}^{(n)}$ is the single-coordinate map coordinatizing $\mathsf{C}\mathbf{X}_n$); and finally take the inverse limit of this sequence. Just as in the proof of Theorem~\ref{thm:sateds-exist}, if this inverse limit were not cw-$\mathsf{C}$-sated then this would lead to a contradiction with our assumption on the increase of $\|\mathsf{E}_{\mu^{\rm{pt}}_n}(f_r\circ\psi^n_0\,|\,\zeta_\mathsf{C}^{(n)})\|_2$ for some finite $n$. Finally the proof is completed by arguing that given a countable collection of cw-idempotent classes $\mathsf{C}_i$, we can produce one long inverse sequence of extensions in which for each $i$ there is a cofinal subsequence of cw-$\mathsf{C}_i$-sated systems, so that the inverse limit is cw-$\mathsf{C}_i$-sated for every $i$. \qed This completes the modifications we need for our approach to Theorem B. Note that detailed proofs of the above results written in the setting of strongly stationary laws are given in~\cite{Aus--DHJ}. \vspace{7pt} \noindent\textbf{Remark}\quad In principle one could give a complete unification of Chapter~\ref{chap:basics} with the above modifications to it by phrasing all of these results in terms of a general (not necessarily full) subcategory $\bf{Cat}$ of the category $\Gamma\hbox{-\textbf{Sys}}$ of all $\Gamma$-systems, and adopting a flexible meaning for the term `relatively independent'. In this work we have preferred to draw a more informal parallel between our two settings of interest, but it may be instructive to deduce from the proofs of Chapter 2 what basic properties we really need for the existence of sated extensions and the various lemmas that support it.
Although we leave the proof to the reader, it turns out that $\bf{Cat}$ must admit two basic constructions: \begin{itemize} \item it must have inverse limits; \item it must have generated factors: that is, if \begin{center} $\phantom{i}$\xymatrix{& \mathbf{X}\ar[dl]\ar[dr]\\ \mathbf{Y} & & \mathbf{Z} } \end{center} is a diagram in $\bf{Cat}$, then there is an essentially unique minimal system $\mathbf{W}$ that may be inserted into this diagram as \begin{center} $\phantom{i}$\xymatrix{& \mathbf{X}\ar[dl]\ar[d]\ar[dr]\\ \mathbf{Y} & \mathbf{W}\ar[l]\ar[r] & \mathbf{Z} } \end{center} \end{itemize} Note, interestingly, that it does \emph{not} seem to be essential that any diagram such as \begin{center} $\phantom{i}$\xymatrix{\mathbf{X}\ar[dr] & & \mathbf{Y}\ar[dl]\\ & \mathbf{Z} } \end{center} have a common extension of $\mathbf{X}$ and $\mathbf{Y}$ that can be inserted above it (of course, working in the whole of $\Gamma\hbox{-\textbf{Sys}}$ when $\Gamma$ is a group such a common extension is provided by the relatively independent product). While these assumptions on $\bf{Cat}$ are relatively innocuous, more drastic steps are needed if we are to accommodate the instances of relative independence appearing in both Theorem~\ref{thm:sateds-exist} and Theorem~\ref{thm:Csateds-exist}. The former of these asserts the relative independence of two whole factors of some extended system, whereas the latter concerns only the relative independence of functions of a single fixed coordinate within each of those factors (that is, relative independence under $\mu^{\rm{pt}}$ rather than $\mu$). In order to treat these together, one could for example augment the category $\bf{Cat}$ by attaching to each system some distinguished subalgebra of bounded measurable functions (the whole of $L^\infty$ in the first case, and the subalgebra of functions of $x_w$ for some distinguished $w \in [k]^\ast$ in the second), and then re-defining conditional expectation as an operator acting only between these subalgebras for different systems and satisfying the usual conditions of idempotence and agreement of integrals against functions in the target subalgebra. Altogether these very abstract considerations seem more demanding than worthwhile, and I know of few other situations in which a non-standard example of an abstract category of systems having these properties has been useful in ergodic theory. One related area which could fit into this mould is the study of partial exchangeability in probability theory, for which we refer the reader to Kallenberg's book~\cite{Kal02}, the survey papers~\cite{Aus--ERH,Ald10} and the references given there. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline \section{Another appeal to the infinitary hypergraph removal lemma} The cw-idempotent classes for which we will apply Theorem~\ref{thm:Csateds-exist} are as follows. \begin{dfn}[Partially insensitive processes] Given a subset $e \subseteq [k]$, a process $(K^{[k]^\ast},\Psi^{\otimes [k]^\ast},\mu,T) \in \mathsf{A}$ is \textbf{$e$-insensitive} if its line marginal satisfies \[x_i = x_j\quad\quad\hbox{for}\ \mu^{\rm{line}}\hbox{-a.e.}\ (x_1,x_2,\ldots,x_k)\in K^k\ \hbox{for all}\ i,j \in e.\] We write $\mathsf{A}_e \subseteq \mathsf{A}$ for the subclass of all $e$-insensitive processes. \end{dfn} The persistence of $e$-insensitivity under inverse limits and joinings is immediate, and so we have: \begin{lem} The class $\mathsf{A}_e$ is cw-idempotent for each $e\subseteq [k]$.
\qed \end{lem} In parallel with the developments of Section~\ref{sec:moreFberg}, given an arbitrary process \[\mathbf{X} = (K^{[k]^\ast},\Psi^{\otimes [k]^\ast},\mu,T)\in \mathsf{A},\] for each $e \subseteq [k]$ we let $\Phi_e$ denote the \textbf{$e$-insensitive $\sigma$-subalgebra} of $\Psi$, consisting of those $A \in \Psi$ such that $\mu^{\rm{line}}(\pi_i^{-1}(A)\triangle \pi_j^{-1}(A)) = 0$ for all $i,j \in e$, where $\pi_i:K^k \to K$ is the coordinate projection. Letting $\zeta_e:(K,\Psi) \to (K_e,\Psi_e)$ be some map of standard Borel spaces such that $\Phi_e$ agrees with $\{\zeta_e^{-1}(E):\ E \in \Psi_e\}$ modulo $\mu^{\rm{pt}}$-negligible sets, it follows that \[\zeta^\ast_e:K^{[k]^\ast}\to K_e^{[k]^\ast}:(x_w)_w\mapsto (\zeta_e(x_w))_w\] is a cw-factor map that coordinatizes $\mathbf{X} \to \mathsf{A}_e\mathbf{X}$. Directly from the definition of $\Phi_e$ we observe that if $i,j \in e$ then $\pi_i^{-1}(\Phi_e)$ and $\pi_j^{-1}(\Phi_e)$ differ only by $\mu^{\rm{line}}$-negligible sets, and we denote their common $\mu^{\rm{line}}$-completion by $\Phi_e^\dag$. If now $\mathcal{I} \subseteq \binom{[k]}{\geq 2}$ is an up-set, then similarly to the setup of Section~\ref{sec:moreFberg} we define $\Phi_\mathcal{I} := \bigvee_{e \in \mathcal{I}}\Phi_e$ and $\Phi^\dag_\mathcal{I} := \bigvee_{e \in \mathcal{I}}\Phi^\dag_e$. In terms of these definitions, the consequences of cw-satedness that we need are now essentially parallel to Propositions~\ref{prop:Fberg1} and~\ref{prop:Fberg2}. \begin{prop}\label{prop:line1} For each $i\leq k$ let \[\mathsf{C}_i := \bigvee_{j\leq k,\,j\neq i}\mathsf{A}_{\{i,j\}}.\] If a system $\mathbf{X}$ with strongly stationary measure $\mu$ is cw-$\mathsf{C}_i$-sated for each $i$ then the $\sigma$-algebras $\pi_i^{-1}(\Psi) \leq \Psi^{\otimes k}$ are relatively independent under $\mu^{\rm{line}}$ over the further factors \[\pi_i^{-1}\Big(\bigvee_{j\leq k,\,j\neq i}\Phi_{\{i,j\}}\Big).\] \end{prop} \noindent\textbf{Proof}\quad Clearly it will suffice to prove that $\pi_1^{-1}(\Psi)$ is relatively independent from $\pi_2^{-1}(\Psi)\vee\cdots\vee\pi_k^{-1}(\Psi)$ under $\mu^{\rm{line}}$ over \[\Xi:= \bigvee_{j=2}^k\Phi_{\{1,j\}},\] since the cases of the other coordinates under $\mu^{\rm{line}}$ then follow by symmetry. We prove this by contradiction, so suppose that $f_1$, $f_2$, \ldots, $f_k \in L^\infty(\mu^{\rm{pt}})$ are such that \[\int_{K^k}f_1\otimes f_2\otimes \cdots\otimes f_k\,\d\mu^{\rm{line}}\neq \int_{K^k}\mathsf{E}(f_1\,|\,\Xi)\otimes f_2\otimes \cdots\otimes f_k\,\d\mu^{\rm{line}}.\] We will deduce from this a contradiction with the cw-satedness of $\mu$. By replacing $f_1$ with $f_1 - \mathsf{E}(f_1\,|\,\Xi)$ it suffices to assume that $\mathsf{E}(f_1\,|\,\Xi) = 0$ but that the left-hand integral above does not vanish. For each $j=2,3,\ldots,k$ recall the letter-replacement map $r_{\{1,j\},j}:[k]^\ast\to[k]^\ast$ defined in Section~\ref{sec:startDHJ}.
In view of the strong stationarity of $\mu$, we may transport the above non-vanishing integral to any combinatorial line in $[k]^\ast$: in particular, picking some $w \in [k]^\ast$ for which $w^{-1}\{j\} \neq \emptyset$ for every $j$, the points $\{w,r_{\{1,2\},2}(w),r_{\{1,3\},3}(w),\ldots,r_{\{1,k\},k}(w)\}$ form such a line, and so we have \[\int_{K^{[k]^\ast}} f_1(x_w)\cdot f_2(x_{r_{\{1,2\},2}(w)})\cdot\cdots\cdot f_k(x_{r_{\{1,k\},k}(w)})\,\mu(\d\bf{x}) = \kappa\neq 0.\] Now define the probability measure $\l$ on $(K\times K^{\{2,3,\ldots,k\}})^{[k]^\ast}$ to be the joint law under $\mu$ of \[(x_w)_w \mapsto \big(x_w,x_{r_{\{1,2\},2}(w)},x_{r_{\{1,3\},3}(w)},\ldots,x_{r_{\{1,k\},k}(w)}\big)_w.\] We see that all of its coordinate projections onto individual copies of $K$ are still just $\mu^{\rm{pt}}$, the cw-factor map \[\phi^\ast_1:(x_w,y_{2,w},y_{3,w},\ldots,y_{k,w})_w\mapsto (x_w)_w\] has $\phi^\ast_{1\#}\l = \mu$, and the cw-factor map \[\phi^\ast_j:(x_w,y_{2,w},y_{3,w},\ldots,y_{k,w})_w\mapsto (y_{j,w})_w\] for $j=2,3,\ldots,k$ is $\l$-almost surely $\{1,j\}$-insensitive. Therefore through the cw-factor map $\phi^\ast_1$ the law $\l$ defines an extension of $\mu$ as a measure space. This new measure $\l$ may not be strongly stationary, so may not define an extension of members of $\mathsf{A}$. However, we can now repeat the trick of Lemma~\ref{lem:sssuffice}. By the Carlson-Simpson Theorem there are a subspace $\psi:[k]^\ast\hookrightarrow [k]^\ast$ and an infinite word $w \in [k]^\mathbb{N}$ such that the pulled-back measures \[T_{\psi(w_1w_2\cdots w_n\oplus\ \cdot\ )\#}\l\] converge in the coupling topology on $(K\times K^{\{2,3,\ldots,k\}})^{[k]^\ast}$ (recall that for couplings of fixed marginal measures this is compact; see Theorem 6.2 in~\cite{Gla03}) to a strongly stationary measure $\tilde{\mu}$. Since $\mu$ was already strongly stationary, we must still have $\phi^\ast_{1\#}\tilde{\mu}= \mu$, and by the definition of the coupling topology as the weakest for which integration of fixed product functions is continuous it follows that we must still have, firstly, that \[\int_{(K\times K^{\{2,3,\ldots,k\}})^{[k]^\ast}}(f_1\circ\pi_u\circ\phi^\ast_1)\cdot\prod_{j=2}^{k}(f_j\circ\pi_u\circ\phi^\ast_j)\,\d\tilde{\mu} =\kappa \neq 0\] for each $u\in [k]^\ast$ (where now we may omit the assumption that $u$ contains every letter at least once, by strong stationarity), and secondly that the cw-factors generated by the maps $\phi^\ast_j$ are $\{1,j\}$-insensitive under $\tilde{\mu}$, since this is equivalent to the assertion that for any $A \in \Psi$ and line $\ell:[k]\hookrightarrow [k]^\ast$ we have \[\int_{(K\times K^{\{2,3,\ldots,k\}})^{[k]^\ast}} 1_A(\phi_j(z_{\ell(1)}))\cdot 1_{K\setminus A}(\phi_j(z_{\ell(j)}))\,\tilde{\mu}(\d\bf{z}) = 0\] and this is clearly a closed condition in the coupling topology. It follows that this strongly stationary measure $\t{\mu}$ gives a genuine cw-extension $\phi^\ast_1:\t{\mathbf{X}}\to \mathbf{X}$ such that the lift of $f_1\circ\pi_1$ as a function of any one coordinate must have a nontrivial inner product with some pointwise product of $\{1,j\}$-insensitive functions under $\t{\mu}$ over $j=2,3,\ldots,k$. Hence this lift has nonzero conditional expectation onto a $\sigma$-subalgebra of $\Psi\otimes \Psi^{\otimes \{2,3,\ldots,k\}}$ coordinatizing a cw-factor in the class $\mathsf{C}_1$, but recalling our assumption that $\mathsf{E}(f_1\,|\,\Xi) = 0$, this provides the desired contradiction with cw-$\mathsf{C}_1$-satedness.
\qed \begin{prop}\label{prop:line2} For each $e \subseteq [k]$ let \[\mathsf{C}_e:= \bigvee_{j\in [k]\setminus e}\mathsf{A}_{e\cup \{j\}}.\] If $\mathbf{X}$ is cw-$\mathsf{C}_e$-sated for every $e$ then for any up-sets $\mathcal{I},\mathcal{I}'\subseteq \binom{[k]}{\geq 2}$ the $\sigma$-subalgebras $\Phi^\dag_\mathcal{I}$ and $\Phi^\dag_{\mathcal{I}'}$ are relatively independent under $\mu^{\rm{line}}$ over $\Phi^\dag_{\mathcal{I}\cap \mathcal{I}'}$. \end{prop} \noindent\textbf{Proof}\quad As for Proposition~\ref{prop:Fberg2} we start with the case in which $\mathcal{I}' = \langle e\rangle$ for $e$ a member of $\binom{[k]}{\geq 2}\setminus \mathcal{I}$ of maximal size, and again just as for that proposition it suffices to show that $\Phi^\dag_e$ is relatively independent from $\bigvee_{j \in [k]\setminus e}\pi_j^{-1}(\Psi)$ over $\bigvee_{j \in [k]\setminus e}\Phi_{e\cup \{j\}}^\dag$ under $\mu^{\rm{line}}$. Again this is best proved by deriving a contradiction with cw-satedness. Pick some $i \in e$, so $\Phi^\dag_e$ agrees with $\pi_i^{-1}(\Phi_e)$ up to negligible sets, let \[\Xi := \bigvee_{j \in [k]\setminus e}\Phi_{e\cup\{j\}},\] and suppose we have some $f \in L^\infty(\mu^{\rm{pt}})$ that is $\Phi_e$-measurable and such that $\mathsf{E}(f\,|\,\Xi) = 0$, and also $h_j \in L^\infty(\mu^{\rm{pt}})$ for each $j\in [k]\setminus e$ such that \[\int_{K^k} (f\circ\pi_i)\cdot \prod_{j\in [k]\setminus e}(h_j\circ\pi_j)\,\d\mu^{\rm{line}} = \kappa \neq 0.\] Arguing as for the preceding proposition, this nonvanishing can be transported to any combinatorial line in $[k]^\ast$, including to a line such as $\{r_{e,1}(w)$, $r_{e,2}(w)$, $r_{e,3}(w)$, \ldots, $r_{e,k}(w)\}$ for any $w$ that contains every letter at least once. This gives \[\int_{K^{[k]^\ast}} f(x_{r_{e,i}(w)})\cdot \prod_{j\in [k]\setminus e}h_j(x_{r_{e,j}(w)})\,\mu(\d\bf{x}) = \kappa\] for any such $w$, but since $f$ is $e$-insensitive we may replace the first factor in this integrand simply by $f(x_w)$. It follows that if we define the probability measure $\l$ on $(K\times K^{[k]\setminus e})^{[k]^\ast}$ to be the joint law under $\mu$ of \[(x_w)_w \mapsto \big(x_w,(x_{r_{e,j}(w)})_{j\in [k]\setminus e}\big)_w\] then all of its coordinate projections onto individual copies of $K$ are still just $\mu^{\rm{pt}}$, the cw-factor map \[\phi^\ast:\big(x_w,(y_{j,w})_{j\in [k]\setminus e}\big)_w\mapsto (x_w)_w\] has $\phi^\ast_\#\l = \mu$ and the cw-factor maps \[\phi^\ast_j:\big(x_w,(y_{j,w})_{j\in [k]\setminus e}\big)_w\mapsto (y_{j,w})_w\] are $\l$-almost surely $(e\cup\{j\})$-insensitive. Therefore through $\phi^\ast$ the measure $\l$ is an extension of the measure $\mu$, and the above identity gives a non-zero inner product for the lift of $f\circ\pi_u$ through $\phi^\ast$ with some product over $j\in [k]\setminus e$ of $(e\cup\{j\})$-insensitive functions under $\l$, which we can express as \[\int_{(K\times K^{[k]\setminus e})^{[k]^\ast}} (f\circ\pi_u\circ\phi^\ast)\cdot\prod_{j \in [k]\setminus e}(h_j\circ\pi_u\circ\phi^\ast_j)\,\d\l = \kappa\] for any $u\in [k]^\ast$ that contains each letter at least once.
To complete the proof, we may argue exactly as for Proposition~\ref{prop:line1} that within the not-necessarily-strongly-stationary law $\l$ we can find infinite-dimensional subspaces $\psi$ for which the corresponding image measures under $T_\psi$ converge in the coupling topology to a strongly stationary extension $\t{\mu}$ of $\mu$, and such that this extension preserves the feature that the lift of $f\circ\pi_u$ has a nontrivial inner product with a pointwise product of $(e\cup\{j\})$-insensitive functions under $\t{\mu}^{\rm{pt}}$ for any word $u$. By our assumption that $\mathsf{E}(f\,|\,\Xi) = 0$ this gives a contradiction with cw-$\mathsf{C}_e$-satedness, as required. The general case now follows by induction on $\mathcal{I}'$ for each fixed $\mathcal{I}$ exactly as for Proposition~\ref{prop:Fberg2}. \qed \noindent\textbf{Proof of Theorem~\ref{thm:inf-DHJ2}}\quad An initial application of Theorem~\ref{thm:Csateds-exist} allows us to assume that $\mathbf{X}$ is cw-sated for all the classes involved in Propositions~\ref{prop:line1} and~\ref{prop:line2}. Next, exactly as for the proof of Theorem~\ref{thm:multirec2}, applying Proposition~\ref{prop:line1} shows that it suffices to prove Theorem~\ref{thm:inf-DHJ2} in case the sets $A_i$ lie in the $\sigma$-subalgebras $\Phi_{\langle i\rangle} = \bigvee_{j \in [k]\setminus \{i\}}\Phi_{\{i,j\}} \leq \Psi$. Finally, it follows from the definitions and Proposition~\ref{prop:line2} that the probability space $(K,\Psi,\mu^{\rm{pt}})$, its self-coupling $\mu^{\rm{line}}$ and the $\sigma$-subalgebras $\Phi_e$ and their lifts $\Phi^\dag_e$ for $e \subseteq [k]$ satisfy all the conditions of the `infinitary removal result' Proposition~\ref{prop:infremoval}, so another appeal to that proposition completes the proof. \qed \subsection*{Postscript to the above proof} After the appearance of Furstenberg and Katznelson's original, technically rather demanding proof of Theorem B in~\cite{FurKat91}, considerable efforts were made to provide firstly a simpler proof, and more importantly one that could be made effective to deduce some quantitative bound on the necessary dependence of $N_0$ on $\delta$ and $k$. Both of these goals were recently achieved by a large online collaboration, instigated by Tim Gowers and involving several other mathematicians, called Polymath1. Importantly, their new proof does give a dependence of $N_0$ on $\delta$ and $k$ similar to the dependence obtained for the Multidimensional Szemer\'edi Theorem by using the hypergraph regularity and removal lemmas. All these developments can be found online~(\cite{Gow(online)}) and in the preprint~\cite{DHJ09}. Importantly, the infinitary proof of Theorem B that we have reported above relies on an observation that was originally taken from their work. I will not attempt an exact translation here since the lexicons of these two approaches are very different, but the outcome for stochastic processes is essentially the observation that an initially-given system $\mathbf{X} \in \mathsf{A}$ can be combined in a strongly stationary joining with some $\{1,j\}$-insensitive systems as in our proof of Proposition~\ref{prop:line1}, which then gives some information on the structure of the original process $\mathbf{X}$ (in our case by an appeal to cw-satedness). \chapter{Coda: a general structural conjecture}\label{chap:spec} It seems inadequate to finish this dissertation without discussing at least some of the issues obviously left open by the preceding chapters.
Perhaps most interesting for ergodic theory is the meta-question introduced in Section~\ref{sec:meta}, and in this last chapter I offer a few further speculations on what additional answers to it we might hope for. Our first clue in this direction is offered by the works~\cite{HosKra05} of Host and Kra and~\cite{Zie07} of Ziegler, establishing the special case of Theorem C corresponding to different powers of a single ergodic transformation: that is, the result that if $T:\mathbb{Z}\curvearrowright (X,\S,\mu)$ is ergodic and $f_1$, $f_2$, \ldots, $f_d \in L^\infty(\mu)$ then the averages \[S_N(f_1,f_2,\ldots,f_d) := \frac{1}{N}\sum_{n=1}^N(f_1\circ T^n)\cdot (f_2\circ T^{2n})\cdot\cdots\cdot (f_d\circ T^{dn})\] converge in $L^2(\mu)$ as $N\to\infty$. Importantly, those two works both rest on a quite detailed result about `characteristic factors' for these averages: \begin{thm}[Host-Kra Theorem]\label{thm:HK} If $\mathbf{X} = (X,\S,\mu,T)$ is as above then there is a factor $\Phi \leq \S$ that is \textbf{characteristic} for the averages $S_N$ in the sense that \[S_N(f_1,f_2,\ldots,f_d)\sim S_N(\mathsf{E}(f_1\,|\,\Phi),\mathsf{E}(f_2\,|\,\Phi),\ldots,\mathsf{E}(f_d\,|\,\Phi))\] in $L^2(\mu)$ for any $f_1$, $f_2$, \ldots $f_d \in L^\infty(\mu)$ as $N\to\infty$, and which can be generated by a factor map to a $(d-1)$-step pro-nilsystem: that is, it can be generated by some increasing sequence of factor maps \[\pi_n:(X,\S,\mu,T)\to (G_n/\Gamma_n,\rm{Borel},m_{G_n/\Gamma_n},R_{g_n})\] to systems that are given by rotations by elements $g_n$ on compact $(d-1)$-step nilmanifolds $G_n/\Gamma_n$. \end{thm} \noindent\textbf{Remark}\quad This notion of a characteristic factor is just a slight modification to that of a partially characteristic factor that we met in Proposition~\ref{prop:sated-implies-pleasant}. In fact, Ziegler proves in~\cite{Zie07} that there is a unique minimal factor with the properties given by the above theorem, and in Leibman's later treatment of these two proofs in ~\cite{Lei05(HKvsZ)} it is shown that the pro-nilsystem characteristic factors constructed by Host and Kra are precisely these minimal factors. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline This very surprising theorem asserts that for a completely arbitrary ergodic $\mathbb{Z}$-system $\mathbf{X}$, its nonconventional averages $S_N$ are entirely controlled by some highly-structured factor of $\mathbf{X}$, which can be expressed in terms of the very concrete data of rotations on nilmanifolds. In this informal discussion we will assume familiarity with the definition and basic properties of such `nilsystems' here; they are treated thoroughly in~\cite{HosKra05} and~\cite{Zie07} and the references given there. Host and Kra and Ziegler's proofs of the one-dimensional case of Theorem C proceed via two different approaches to Theorem~\ref{thm:HK}. They are both rather longer than the proof in our Chapter~\ref{chap:conv}, but using Theorem~\ref{thm:HK} they give a much more precise picture of the limit. On the other hand, the strategy used in our Chapter~\ref{chap:conv} simply cannot be specialized to the one-dimensional setting: it is essential for our approach that the result be formulated for the linearly independent directions $\bf{e}_1$, $\bf{e}_2$, \ldots, $\bf{e}_d \in \mathbb{Z}^d$. 
This is because even if we are initially given a $\mathbb{Z}$-system $(X,\S,\mu,T)$, we must re-interpret it as a $\mathbb{Z}^d$-system in order to pass to an extension that is sated in the way required by Proposition~\ref{prop:sated-implies-pleasant}. To do this we define a new $\mathbb{Z}^d$-action $T'$ on $X$ by $(T')^{\bf{e}_i} := T^i$, but once we ascend to our sated extension this special structure of a collection of powers of a single transformation will be lost, and so we can no longer focus on the special, one-dimensional case of convergence. In a sense, this quiet assumption of linear independence was a precursor to the discussion of Section~\ref{sec:meta}: we need the linear independence of the subgroups $\mathbb{Z}\bf{e}_i\leq \mathbb{Z}^d$ in order that a corresponding notion of satedness has useful consequences. However, these two very different approaches to different cases of Theorem C do suggest a reconciliation of the issue raised at the end of Section~\ref{sec:meta}: what becomes of our meta-question on the possible joinings of $\mathbb{Z}^D$-systems $\mathbf{X}_i \in \mathsf{Z}_0^{\Gamma_i}$ if the subgroups $\Gamma_i$ are not linearly independent? The centrepiece of this final chapter is a conjectural answer to this question. If true, it would offer the first step in a complete `interpolation' between the structural result~\ref{thm:HK} of Host and Kra and our much softer result~\ref{thm:sateds-joint-dist}. In order to formulate our conjecture, we first need some more notation. The notion of an isometric extension of ergodic probability-preserving systems and the fact that any such can be coordinatized as a skew-product extension over the base system by some compact homogeneous space are very classical; see, for instance, Glasner's book~\cite{Gla03}. Here we will also assume familiarity with a natural but less common generalization of this theory to the case in which the base system is not necessarily ergodic, in which the fibres of our extension must be allowed to vary in a suitable `measurable' way over the ergodic components of the base system. This theory is set up generally in~\cite{Aus--ergdirint}, where the lengthy but routine work of re-establishing all the well-known theorems from the ergodic case is carried out in full, and we will also adopt the basic notations of that paper.
\begin{dfn}[Direct integral of pro-nilsystems] If $\Gamma$ is a discrete Abelian group then a $\Gamma$-system $\mathbf{X} = (X,\S,\mu,T)$ is a \textbf{direct integral of $k$-step pro-nilsystems} if it admits a tower of factors \[\mathbf{X} = \mathbf{X}_k \to \mathbf{X}_{k-1}\to \ldots\to \mathbf{X}_1 \to \mathbf{X}_0\] in which the action of $\Gamma$ on $\mathbf{X}_0$ is trivial, each extension $\mathbf{X}_i\to \mathbf{X}_{i-1}$ for $i\geq 1$ can be coordinatized as a relatively ergodic extension by measurably-varying compact metrizable Abelian group data \begin{center} $\phantom{i}$\xymatrix{\mathbf{X}_i\ar[dr]\ar@{<->}[rr]^-\cong & & \mathbf{X}_{i-1}\ltimes (A_{i,\bullet},m_{A_{i,\bullet}},\sigma_i)\ar[dl]^{\rm{canonical}}\\ & \mathbf{X}_{i-1},} \end{center} (so the measurable group data $A_{i,\bullet}$ really varies only over the base system $\mathbf{X}_0$) and for each ergodic component $\mu_s$ of $\mu$ the resulting $k$-step Abelian distal ergodic $\Gamma$-system \[(X,\S,\mu_s,T) \cong (A_{1,s}\times A_{2,s}\times\cdots\times A_{k,s},\rm{Borel},\rm{Haar},\sigma_1\ltimes \sigma_2\ltimes \cdots\ltimes \sigma_k)\] is measure-theoretically isomorphic to an inverse limit of actions of $\Gamma$ by commuting rotations on $k$-step nilmanifolds. \end{dfn} \noindent\textbf{Remark}\quad In fact it seems likely that the above class of systems can be set up in several different ways, which will presumably turn out to be equivalent. I have chosen the above definition here because I suspect it will ultimately prove relatively convenient for establishing the necessary properties of these systems, but an alternative has already appeared in the literature in the paper~\cite{ChuFraHos09} of Chu, Frantzikinakis and Host. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline The following lemma is now routine, given the ergodic case which is classical (it follows from the nilmanifold case of Ratner's Theorem: see, for instance,~\cite{Lei07,Lei10}). \begin{dfn} If $\L\leq \Gamma$ is an inclusion of discrete Abelian groups, then the class $\mathsf{Z}_{\mathrm{nil},k}^\L$ of those $\Gamma$-systems whose $\L$-subactions are direct integrals of $k$-step pro-nilsystems is an idempotent class of $\Gamma$-systems. We refer to it as the class of \textbf{$\L$-partially $k$-step pro-nilsystems}. \qed \end{dfn} We are now ready to offer our conjectural strengthening of Theorem~\ref{thm:sateds-joint-dist} to the case of linearly dependent subgroups $\Gamma_i$. \begin{conj}[General Structural Conjecture]\label{conj} Suppose that $\Gamma_i \leq \mathbb{Z}^D$ for $i=1,2,\ldots,r$ are subgroups among which there are no pairwise inclusions and $n_1$, $n_2$, \ldots, $n_r \geq 0$ are integers. Then depending on these data there are finite families of pairs \[(\L_{i,1},m_{i,1}),(\L_{i,2},m_{i,2}),\ldots,(\L_{i,k_i},m_{i,k_i})\quad\quad\hbox{for}\ i=1,2,\ldots,r\] such that each $m_{i,j}\geq 0$ is an integer and $\L_{i,j}\leq \mathbb{Z}^D$ is a subgroup properly containing $\Gamma_i$ for each $i,j$, and for which the following holds.
If $\mathbf{X}_i \in \mathsf{Z}_{\mathrm{nil},n_i}^{\Gamma_i}$ for each $i=1,2,\ldots,r$ and each $\mathbf{X}_i$ is sated with respect to all possible joins of classes of the form $\mathsf{Z}_{\mathrm{nil},n}^\Gamma$ for $\Gamma \leq \mathbb{Z}^D$ and $n\geq 0$, then for any joining $\pi_i:\mathbf{Y} \to \mathbf{X}_i$, $i=1,2,\ldots,r$, the factors $\pi_i^{-1}(\S_i)$ are relatively independent over their further factors \[\pi_i^{-1}\Big(\bigvee_{j=1}^{k_i}\Phi_{i,j}\Big)\] where $\Phi_{i,j}$ is the factor of $\mathbf{X}_i$ generated by the factor map to $(\mathsf{Z}_0^{\Gamma_i}\cap\mathsf{Z}_{\mathrm{nil},m_{i,j}}^{\L_{i,j}})\mathbf{X}_i$. \end{conj} \noindent\textbf{Remark}\quad We invoke the `no-inclusions' condition on the subgroups $\Gamma_i$ in order to avoid degenerate cases. Without it, we might for example be asking for the collection of all possible joinings between two systems $\mathbf{X}_i \in \mathsf{Z}_0^{\Gamma_i}$ for $i=1,2$ with $\Gamma_1 \geq \Gamma_2$, and in this case Lemma~\ref{lem:two-fold-joinings} tells us something about the less constrained system $\mathbf{X}_2$, but on the side of the more constrained system $\mathbf{X}_1$ the joining may clearly be completely arbitrary. \nolinebreak\hspace{\stretch{1}}$\lhd$\newline In particular, the case in which $\mathbf{X}_i$ has trivial $\Gamma_i$-subaction corresponds to $n_i = 0$, and in this case the above conjecture asserts that given enough satedness, the factors $\pi_i^{-1}(\S_i)$ of the joining system are relatively independent over some further factors, each of which is assembled as a join of systems from the classes $\mathsf{Z}_0^{\Gamma_i}\cap\mathsf{Z}_{\mathrm{nil},m_{i,j}}^{\L_{i,j}}$. In particular, while each of these ingredients may not be partially invariant under any subgroup of $\mathbb{Z}^D$ strictly larger than $\Gamma_i$, for each of them we do know \emph{something} quite concrete (in terms of pro-nilsystems) about the subaction of some properly larger subgroup $\L_{i,j}\gneqq\Gamma_i$. Of course, the above conjecture does not strictly cover Theorem~\ref{thm:sateds-joint-dist}, since that gives much more precise information on the pairs $(\L_{i,j},m_{i,j})$ in case the $\Gamma_i$ are linearly independent: to wit, the $\L_{i,j}$ are the sums $\Gamma_i + \Gamma_\ell$ for $\ell\neq i$, and $m_{i,j} = 0$. While a final understanding of Conjecture~\ref{conj} would presumably also give a recipe for producing these pairs in the general case (and so would recover the exact details of our known special cases), the slightly incomplete formulation of Conjecture~\ref{conj} seems ample for our present discussion, and as I write this any sensible guess as to its completion appears beyond reach. Indeed, by itself Conjecture~\ref{conj} seems very optimistic, so it is worth mentioning some special cases of it beyond Theorem~\ref{thm:sateds-joint-dist} for which we have some supplementary evidence. Firstly, if $D = 2$, each $n_i = 0$ and the $\Gamma_i$ are pairwise linearly-independent one-dimensional subgroups $\mathbb{Z}\bf{v}_i\leq \mathbb{Z}^2$, then we can take a sensible guess at a more precise version of the above conjecture: that any joining of systems $\mathbf{X}_i \in \mathsf{Z}_0^{\Gamma_i}$ should be relatively independent over the maximal $(r-1)$-step pro-nilsystem factors $\mathbf{X}_i \to \mathsf{Z}_{\mathrm{nil},r}\mathbf{X}_i$.
Indeed, this would simply correspond to the Host-Kra Theorem in the case of the $\mathbb{Z}^2$-system \[\vec{\mathbf{X}} := (X^k,\S^{\otimes k},\mu^\rm{F},\vec{T})\] with $\vec{T}^{\bf{e}_1} := T\times T\times\cdots\times T$ and $\vec{T}^{\bf{e}_2} := T\times T^2\times \cdots\times T^k$, where now the subgroups are $\Gamma_i = \mathbb{Z}(\bf{e}_2 - i\bf{e}_1)$ and the coordinate projections $\pi_i:X^k\to X$ define factor maps to suitable $\Gamma_i$-partially-invariant $\mathbb{Z}^2$-systems $\mathbf{X}_i$, constructed from $\mathbf{X}$ as in the proof of Proposition~\ref{prop:Fberg1}. In fact, I strongly suspect that the methods of either~\cite{HosKra05} or~\cite{Zie07} could be adapted directly to proving this more general result on the possible joinings of such partially-invariant systems. Other, similar results on possible joinings of partially-invariant systems that do not require any extensions but would correspond to further special cases of Conjecture~\ref{conj} have appeared in Frantzikinakis and Kra~\cite{FraKra05} (where nonconventional averages such as in our Theorem C are studied, but subject to some additional hypotheses on the individual ergodicity of several one-dimensional subactions), in Chu~\cite{Chu09} and in Chu, Frantzikinakis and Host~\cite{ChuFraHos09}. In each of these cases, the joining in question has been either the Furstenberg self-joining of some tuple of commuting transformations, or the related Host-Kra self-joining (originally defined in~\cite{HosKra05} for the case of powers of a single transformation, and since adapted to the multi-directional case in~\cite{Hos09,Chu09,ChuFraHos09}). However, in each of these cases it seems likely that the methods employed could be adapted to proving a corresponding instance of Conjecture~\ref{conj}. Another special case of Conjecture~\ref{conj}, the first beyond Theorem~\ref{thm:sateds-joint-dist} that \emph{does} require an ascent to sated extensions, appears in~\cite{Aus--lindeppleasant1,Aus--lindeppleasant2}. Indeed, the principal structural result of~\cite{Aus--lindeppleasant2} can be phrased as asserting that if $\bf{p}_1$, $\bf{p}_2$ and $\bf{p}_3 \in \mathbb{Z}^2$ are three directions which together with the origin $\bs{0}\in \mathbb{Z}^2$ lie in general position, then for a sufficiently sated system $\mathbf{X}$ the Furstenberg self-joining $\mu^\rm{F}$ of the quadruple of transformations $\mathrm{id},T^{\bf{p}_1}$, $T^{\bf{p}_2}$, $T^{\bf{p}_3}$ is such that the coordinate projections $\pi_i:X^{\{0,1,2,3\}} \to X$ are relatively independent over their further factors \[\pi_0^{-1}(\S^{T^{\bf{p}_1} = T^{\bf{p}_2}}\vee \S^{T^{\bf{p}_1} = T^{\bf{p}_3}}\vee \S^{T^{\bf{p}_2} = T^{\bf{p}_3}}\vee \S_{\mathrm{nil},2}^T)\] and \[\pi_i^{-1}(\S^{T^{\bf{p}_i}}\vee \S^{T^{\bf{p}_i} = T^{\bf{p}_j}}\vee \S^{T^{\bf{p}_i} = T^{\bf{p}_k}}\vee \S_{\mathrm{nil},2}^T)\quad\quad\hbox{for}\ \{i,j,k\} = \{1,2,3\}.\] Arguing again as for Proposition~\ref{prop:Fberg1}, this would follow from a special case of Conjecture~\ref{conj} (again with some more precise information on the pairs $(\L_{i,j},m_{i,j})$) when $D=3$, $r=4$ and $\Gamma_0,\Gamma_1,\Gamma_2,\Gamma_3$ are four one-dimensional subgroups of $\mathbb{Z}^3$ any three of which are linearly independent. At present no proof (or disproof) of Conjecture~\ref{conj} seems to be at hand. 
Nevertheless, the various cases mentioned above do give me hope for it, and I strongly suspect that any result as powerful as this would constitute a major addition to our toolkit for approaching questions of multiple recurrence. For example, I would expect it to shed considerable new light on the Bergelson-Leibman conjecture on the convergence of `polynomial' nonconventional averages~\cite{BerLei02}. For a recent discussion of this latter question see~\cite{Aus--lindeppleasant1,Aus--lindeppleasant2}, where the proof of an instance of this latter conjecture was the original motivation for the result on joint distributions mentioned above. \bibliographystyle{uclathes}
{ "timestamp": "2010-06-09T02:02:30", "yymm": "1006", "arxiv_id": "1006.0491", "language": "en", "url": "https://arxiv.org/abs/1006.0491", "abstract": "In 1975 Szemerédi proved the long-standing conjecture of Erdős and Turán that any subset of $\\bbZ$ having positive upper Banach density contains arbitrarily long arithmetic progressions. Szemerédi's proof was entirely combinatorial, but two years later Furstenberg gave a quite different proof of Szemerédi's Theorem by first showing its equivalence to an ergodic-theoretic assertion of multiple recurrence, and then bringing new machinery in ergodic theory to bear on proving that. His ergodic-theoretic approach subsequently yielded several other results in extremal combinatorics, as well as revealing a range of new phenomena according to which the structures of probability-preserving systems can be described and classified.In this work I survey some recent advances in understanding these ergodic-theoretic structures. It contains proofs of the norm convergence of the `nonconventional' ergodic averages that underly Furstenberg's approach to variants of Szemerédi's Theorem, and of two of the recurrence theorems of Furstenberg and Katznelson: the Multidimensional Multiple Recurrence Theorem, which implies a multidimensional generalization of Szemerédi's Theorem; and a density version of the Hales-Jewett Theorem of Ramsey Theory.", "subjects": "Dynamical Systems (math.DS); Combinatorics (math.CO); Probability (math.PR)", "title": "Multiple recurrence and the structure of probability-preserving systems", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575178175918, "lm_q2_score": 0.7217431943271999, "lm_q1q2_score": 0.7091542015198734 }
https://arxiv.org/abs/1406.4895
Computing The Extension Complexities of All 4-Dimensional 0/1-Polytopes
We present slight refinements of known general lower and upper bounds on sizes of extended formulations for polytopes. With these observations we are able to compute the extension complexities of all 0/1-polytopes up to dimension 4. We provide a complete list of our results including geometric constructions of minimum size extensions for all considered polytopes. Furthermore, we show that all of these extensions have strong properties. In particular, one of our computational results is that every 0/1-polytope up to dimension 4 has a minimum size extension that is also a 0/1-polytope.
\section{Introduction} \noindent The theory of extended formulations is a fast-developing research field that addresses the problem of writing a polytope as the projection of a preferably simpler polyhedron. More precisely, given a polytope $ P \subseteq \mathbb{R}^p $, a polyhedron $ Q \subseteq \mathbb{R}^q $ together with a linear map $ \pi \colon \mathbb{R}^q \to \mathbb{R}^p $ is an \emph{extension} of $ P $ if $ \pi(Q) = P $. An explicit outer description of $ Q $ by linear inequalities and equations is called an \emph{extended formulation} for $ P $. The \emph{size} of an extension ($ Q $, $ \pi $) is defined as the number of facets of $ Q $. The quantity of major interest in the field of extended formulations is the so-called \emph{extension complexity} of a polytope $ P $, which is defined as the smallest size of any extension of $ P $ and is denoted by $ \xc(P) $. Equivalently, $ \xc(P) $ is the smallest number of inequalities in any extended formulation for $ P $. The concept of extended formulations is motivated by the following fact: Suppose that $ (Q,\pi) $ is an extension of $ P $. Then maximizing a linear function $ x \mapsto \langle c,x \rangle $ over $ P $ is equivalent to maximizing $ y \mapsto \langle \pi^*(c),y \rangle $ over $ Q $, where $ \pi^* $ is the adjoint map of $ \pi $. Of course, if $ Q $ admits a simpler outer description than $ P $, then this might have a substantial impact on the performance of algorithms solving such optimization problems. Since $ 0/1 $-polytopes (i.e., polytopes with vertices in $ \{0,1\}^p $) play a central role in the application of linear programming, they are also of particular interest in the field of extended formulations. Current research is mainly driven by the seminal results of Fiorini et al.~\cite{FioriniMPTW12} and Rothvo\ss{} \cite{Rothvoss13} that give exponential lower bounds on the extension complexities of the TSP polytope and the matching polytope, respectively. While they answered two of the most important questions in the area of extended formulations, there are still many (sometimes even elementary) open questions. By giving a complete list of the extension complexities of all $ 0/1 $-polytopes up to dimension $4$, one aim of this paper is to provide a first reference that hopefully allows progress on those questions. For instance, it is not known whether for any rational polytope $ P $ there exists a rational extension of size $ \xc(P) $, not even if $ P $ is a $ 0/1 $-polytope. As a consequence of observations mainly made in Section~\ref{sec:upperbounds}, our computational results show that all $ 0/1 $-polytopes up to dimension $ 4 $ admit minimum size extensions that are even $ 0/1 $-polytopes. Surprisingly, in all those extensions $ Q $ there is a 1-to-1 correspondence between the vertices of $ Q $ and their images. Another motivation for this work was the fact that, in general, computing $ \xc(P) $ for a given polytope $ P $ seems to be a non-trivial task. Note that, from its original definition, it is not obvious how to compute the extension complexity of a polytope. However, due to Yannakakis' theorem \cite{Yannakakis91}, we know that computing $ \xc(P) $ is equivalent to computing the nonnegative rank of one of its slack matrices (see Section \ref{sec:lowerbounds} for the precise definitions and the statement).
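To illustrate the objects involved, the following short Python sketch (a toy illustration only, not part of the computational pipeline of this paper; the slack matrix and the nonnegative rank are treated precisely in Section~\ref{sec:lowerbounds}, and our actual computations use \texttt{polymake} as described below) assembles the slack matrix of the $ 0/1 $-square $ \conv(\{0,1\}^2) $ from its vertex and facet descriptions. Its entry in row $ i $ and column $ j $ is the slack of vertex $ v^i $ in the $ j $-th facet inequality, and by Yannakakis' theorem the extension complexity of the square equals the nonnegative rank of this matrix.
\begin{verbatim}
import numpy as np

# Vertices of the 0/1-square conv{(0,0), (1,0), (1,1), (0,1)}, one per row.
V = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])

# Facet inequalities <a_j, x> <= beta_j:
#   -x1 <= 0,  -x2 <= 0,  x1 <= 1,  x2 <= 1.
A = np.array([[-1, 0], [0, -1], [1, 0], [0, 1]])
beta = np.array([0, 0, 1, 1])

# Slack matrix: S[i, j] = beta_j - <a_j, v_i>, which is nonnegative.
S = beta - V @ A.T
print(S)
\end{verbatim}
Each row of the resulting $ 4 \times 4 $ matrix contains exactly two zeros, reflecting the fact that every vertex of the square lies on exactly two facets; it is precisely this zero pattern of slack matrices that rectangle covering arguments, recalled in Section~\ref{sec:lowerbounds}, exploit.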
For reasonable numbers of facets and vertices, it is possible to compute slack matrices using the double description method, which is, for instance, efficiently implemented in \texttt{cdd}~\cite{Fukuda14} and used by \texttt{polymake}~\cite{GawrillowJ00}. Further, computing the exact nonnegative rank of a matrix can be reduced to deciding whether certain semi-algebraic sets are nonempty. Arora et al.~\cite{AroraGKM12} give a subtle construction of such sets whose descriptions are much smaller than in naive approaches. Thus, in principle, it is possible to compute the extension complexity of a polytope by subsequently applying quantifier elimination algorithms as in~\cite{BasuPR96}. However, such general approaches still do not allow computations for polytopes of reasonable complexity. In contrast, our calculations are based on very specific observations yielding matching lower and upper bounds on $ \xc(P) $. Hence, they can be performed within a total time of a few minutes using simple scripts as well as \texttt{polymake}{} for computing slack matrices. Regarding our specific task, note that the extension complexity is obviously invariant under affine transformations. Thus, when talking about $ 0/1 $-polytopes of dimension $ k $, we implicitly refer to full-dimensional polytopes with vertices in $ \{0,1\}^k $. (It is a basic fact that any $ 0/1 $-polytope of dimension $ k $ is affinely isomorphic to a full-dimensional $ 0/1 $-polytope in ambient dimension $ k $.) Formally, there are at most $ 2^{2^4} = 65536 $ polytopes with vertices in $ \{0,1\}^4 $ -- most of them being $ 4 $-dimensional. Of course, it suffices to consider only representatives of each affine equivalence class. It turns out that there are still $ 202 $ distinct affine equivalence classes of $ 0/1 $-polytopes of dimension $4$. While we also give results for (the few) $ 0/1 $-polytopes up to dimension three, our work mainly focuses on the more challenging class of $ 4 $-dimensional $ 0/1 $-polytopes. Our paper is organized as follows. In Section~\ref{sec:upperbounds}, we describe known, simple geometric constructions for extended formulations and show that they preserve interesting properties. In order to obtain tight bounds on $ \xc(P) $, we carefully analyze the sizes of the resulting extensions. In Section~\ref{sec:lowerbounds}, we recall known general lower bounds on the extension complexity of a polytope and present a refinement of the rectangle covering bound, which yields improved bounds but is still computable by rather simple combinatorial algorithms. Finally, in Section~\ref{sec:computations}, we describe our approach to computing all extension complexities and present computational results. \subsection{Notation} The standard Euclidean scalar product and norm are denoted by $ \langle \cdot,\cdot \rangle $ and $ \| \cdot \| $, respectively. For a nonnegative integer $ k $, we set $ [k] := \{1,\dotsc,k\} $. The $ k $-dimensional nonnegative orthant is denoted by $ \mathbb{R}^k_+ $. We use $ \Delta_k := \{ x \in \mathbb{R}^k_+ : \sum_{i=1}^k x_i = 1 \} $ to denote the standard $ (k-1) $-simplex. The zero vector will be denoted by $ \mathbb{O} $; its dimension will always be clear from the context. For a polyhedron $ P $, we use $ \rec(P) $ to denote its recession cone. Further, $ \# \mathrm{facets}(P) $ and $ \# \mathrm{vertices}(P) $ denote the number of facets and the number of vertices of $ P $, respectively. 
\section{Geometric Upper Bounds} \label{sec:upperbounds} \noindent In this section, we review known bounds on the extension complexity that are based on simple geometric constructions. We show that some bounds can be slightly strengthened in relevant cases, which will be essential for the computations in Section~\ref{sec:computations}. Moreover, we show that, in our cases, the corresponding constructions preserve the following strong properties: \begin{definition} \label{def:nice01} Let $ P $ be a $ 0/1 $-polytope and $ (Q,\pi) $ be an extension for $ P $. Then $ (Q,\pi) $ is called a \emph{nice $ 0/1 $-extension} if \begin{compactenum}[(i)] \item \label{enum:zeroone} $ Q $ is a $ 0/1 $-polytope, \item \label{enum:vertexpreserving} each vertex of $ Q $ is projected onto a vertex of $ P $ and \item \label{enum:vertexbijective} for each vertex $ v $ of $ P $ there is exactly one vertex of $ Q $ that projects onto $ v $. \end{compactenum} The smallest size of any nice $ 0/1 $-extension of $ P $ is denoted by $ \xcs(P) $. \end{definition} \noindent Clearly, we have that $ \xc(P) \leq \xcs(P) $ holds for any $ 0/1 $-polytope $ P $. In general, the requirements of the above definition seem to be very restrictive. In fact, in~\cite{PashkovichW14} it was shown that there exist polytopes for which no minimum size extension satisfies property~(\ref{enum:vertexpreserving}). However, the polytopes constructed in that paper were not $ 0/1 $-polytopes. Nevertheless, we will see that all minimum size constructions that are implicitly generated by our computations are indeed nice $ 0/1 $-extensions. \begin{remark} Our convention of restricting projections to be linear instead of affine maps is just a technical requirement: Indeed, if $ P = \alpha(Q) $ for an affine map $ \alpha $, define $ Q' := \{ (x,y) : x = \alpha(y), \, y \in Q \} $, which is affinely isomorphic to $ Q $, let $ \pi $ be the linear projection onto the $ x $-coordinates and obtain that $ (Q',\pi) $ is an extension for $ P $. Moreover, if $ P $ is a $ 0/1 $-polytope and $ (Q,\alpha) $ satisfies the above-named properties (\ref{enum:zeroone})--(\ref{enum:vertexbijective}), then so does $ (Q',\pi) $. In particular, we still have that $ \xc(P) = \xc(P') $ and $ \xcs(P) = \xcs(P') $ hold for affinely isomorphic polytopes $ P, P' $. \end{remark} \noindent Let us start with two trivial known upper bounds on the extension complexity of a polytope $ P \subseteq \mathbb{R}^p $. First, choosing $ Q = P $ and $ \pi \colon \mathbb{R}^p \to \mathbb{R}^p $ as the identity, any polytope is an extension of itself. Second, if $ P = \conv (\{ v^1, \dotsc, v^k \}) $ for some points $ v^1, \dotsc, v^k \in \mathbb{R}^p $, then we obviously have that \[ P = \Big\{ \sum\nolimits_{i=1}^k \lambda_i v^i : \lambda_i \geq 0 \ \forall \, i \in [k], \, \sum\nolimits_{i=1}^k \lambda_i = 1 \Big\}. \] Thus, setting $ Q = \Delta_k $ and $ \pi(\lambda) = \sum_{i=1}^k \lambda_i v^i $, we obtain that $ P $ is a linear projection of a $ (k-1) $-simplex. Since both constructions clearly yield nice $ 0/1 $-extensions if $ P $ is a $ 0/1 $-polytope (taking the $ v^i $ to be precisely the vertices of $ P $ in the second case), we conclude: \begin{proposition} \label{prop:trivialupperbounds} For a $ 0/1 $-polytope $ P $ it holds that \begin{itemize} \item $ \xcs(P) \leq \# \mathrm{facets}(P) $ and \item $ \xcs(P) \leq \# \mathrm{vertices}(P) $. \qed \end{itemize} \end{proposition} \noindent In fact, for some $ 0/1 $-polytopes such trivial extensions are indeed best possible. Let us now describe two more subtle ways to construct extended formulations. 
\subsection{Unions of Polytopes} For a polyhedron $ Q = \{ y : Ay \leq b \} \subseteq \mathbb{R}^q $ let us denote its homogenization cone by \[ \homog(Q) = \{ (y,\lambda) : Ay \leq \lambda b, \, \lambda \geq 0 \} = \{ (y,\lambda) : y \in \lambda \cdotp Q, \, \lambda \geq 0 \} \subseteq \mathbb{R}^{q+1}, \] where the second description is valid whenever $ Q $ is a polytope (for $ \lambda = 0 $, the first description yields $ \rec(Q) \times \{0\} $). Since Balas' famous work~\cite{Balas79} on Disjunctive Programming, we know that for polytopes $ P_1,\dotsc,P_k \subseteq \mathbb{R}^p $, the convex hull of their union $ P = \conv( \cup_{i=1}^k P_i ) $ can be described via \[ P = \Big\{ \sum\nolimits_{i=1}^k x^i : (x^i,\lambda_i) \in \homog(P_i) \ \forall \, i \in [k], \, \sum\nolimits_{i=1}^k \lambda_i = 1 \Big\}. \] Given extensions $ (Q_i,\pi_i) $ for each $ P_i $, as a direct consequence, we obtain \begin{equation} \label{eq:disjunction} P = \Big\{ \sum\nolimits_{i=1}^k \pi_i(y^i) : (y^i,\lambda_i) \in \homog(Q_i) \ \forall \, i \in [k], \, \sum\nolimits_{i=1}^k \lambda_i = 1 \Big\}. \end{equation} Thus, choosing each extension $ (Q_i,\pi_i) $ of minimum size, this immediately implies the well-known upper bound $ \xc(P) \leq \sum_{i=1}^k (\xc(P_i) + 1) $, see, e.g., \cite{Kaibel11a}. However, we show that this bound can be slightly improved in most of the cases: \begin{theorem} \label{thm:xcunion} For polytopes $ P_1, \dotsc, P_k \subseteq \mathbb{R}^p $ it holds that \[ \xc\big( \conv(\cup_{i=1}^k P_i)\big) \leq \sum_{i=1}^k \xc(P_i) + |\{i \in [k] : \dim(P_i) = 0 \}|. \] \end{theorem} \noindent When proving Theorem~\ref{thm:xcunion}, we will make use of the following (known) useful fact. \begin{lemma} \label{lem:polytope} For any polytope $ P $ there exists an extension $ (Q,\pi) $ of minimum size such that $ Q $ is a polytope. \end{lemma} \begin{proof} \renewcommand{\qedsymbol}{} See Appendix~\ref{proof:polytope}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:xcunion}] For $ i=1,\dotsc,k $ let $ (Q_i,\pi_i) $ be a minimum size extension for $ P_i $. By Lemma~\ref{lem:polytope}, we further may assume that all $ Q_i $'s are polytopes. By equation~\eqref{eq:disjunction}, it suffices to show that $ \# \mathrm{facets}(\homog(Q_i)) \leq \# \mathrm{facets}(Q_i) $ for all $ i \in [k] $ with $ \dim(P_i) > 0 $. Towards this end, suppose that $ \dim(P_j) > 0 $ and let $ Q_j = \{ y : Ay \leq b \} $. We will show that $ \homog(Q_j) = \{ (y,\lambda) : Ay \leq \lambda b, \, \lambda \in \mathbb{R} \} $ (and hence $ \# \mathrm{facets}(\homog(Q_j)) \leq \# \mathrm{facets}(Q_j) $) holds. In order to show that the inequality $ \lambda \geq 0 $ is indeed not facet-defining for $ \homog(Q_j) $ (and hence redundant), let us assume the contrary and obtain \begin{align*} \dim(\homog(Q_j)) & = \dim(\{(y,0) \in \homog(Q_j) \}) + 1 \\ & = \dim(\{(y,0) : Ay \leq \mathbb{O} \}) + 1 \\ & = \dim(\rec(Q_j) \times \{0\}) + 1 \\ & = \dim(\{\mathbb{O}\} \times \{0\}) + 1 = 1. \end{align*} On the other hand, we have that $ \dim(\homog(Q_j)) = \dim(Q_j) + 1 \geq \dim(P_j) + 1 \geq 2 $, a contradiction. \end{proof} \noindent It turns out that Theorem~\ref{thm:xcunion} can even be rephrased in terms of $ \xcs(\cdot) $ under a further assumption: \begin{theorem} \label{thm:xcsunion} For disjoint $ 0/1 $-polytopes $ P_1, \dotsc, P_k \subseteq \mathbb{R}^p $ it holds that \[ \xcs\big( \conv(\cup_{i=1}^k P_i)\big) \leq \sum_{i=1}^k \xcs(P_i) + |\{i : \dim(P_i) = 0 \}|. \] \end{theorem} \begin{proof} For $ i=1,\dotsc,k $ let $ (Q_i,\pi_i) $ be a nice $0/1$-extension of $ P_i $. 
Since the $ Q_i $'s are polytopes and due to the proof of Theorem~\ref{thm:xcunion}, it suffices to show that the polytope \[ Q = \Big\{ (y^1,\dotsc,y^k,w) : (y^i,w_i) \in \homog(Q_i) \ \forall \, i \in [k], \, \sum\nolimits_{i=1}^k w_i = 1 \Big\} \] together with $ \pi(y^1,\dotsc,y^k,w) = \sum_{i=1}^k \pi_i(y^i) $ is nicely $ 0/1 $. Towards this end, let $ v = (y^1,\dotsc,y^k,w) $ be a vertex of $ Q $ and let us define \[ x^j := \begin{cases} \frac{1}{w_j} y^j & \text{ if } w_j > 0, \\ \mathbb{O} & \text{ if } w_j = 0. \end{cases} \] Note that $ x^j \in Q_j $ if $ w_j > 0 $. Let us recall that $ (y^j,0) \in \homog(Q_j) $ if and only if $ y^j \in \rec(Q_j) $. Since $ Q_j $ is a polytope, we obtain that $ (y^j,0) \in \homog(Q_j) $ if and only if $ y^j = \mathbb{O} $. Thus, we get \begin{equation} \label{eq:zeroonebalas1} w_j x^j = y^j \end{equation} as well as \begin{equation} \label{eq:zeroonebalas2} (\tilde{w}_j x^j, \tilde{w}_j) \in \homog(Q_j) \quad \forall \ j \in [k] \text{ with } w_j > 0, \, \tilde{w}_j \geq 0. \end{equation} We claim that $ w $ is a vertex of $ \Delta_k $. If not, then there exist $ \overline{w}, \underline{w} \in \Delta_k $ with $ \overline{w} \neq \underline{w} $ such that $ w = \mu \cdotp \overline{w} + (1-\mu) \cdotp \underline{w} $ for some $ \mu \in (0,1) $. By equation~\eqref{eq:zeroonebalas2} (note that $ w_j = 0 $ forces $ \overline{w}_j = \underline{w}_j = 0 $ and $ x^j = \mathbb{O} $, in which case the following memberships hold trivially), we have that $ (\overline{w}_j \cdotp x^j,\overline{w}_j), (\underline{w}_j \cdotp x^j,\underline{w}_j) \in \homog(Q_j) $ holds for all $ j \in [k] $ and hence \begin{align*} \overline{v} := (\overline{w}_1 \cdotp x^1, \dotsc, \overline{w}_k \cdotp x^k, \overline{w}) \in Q \quad \text{ and } \quad \underline{v} := (\underline{w}_1 \cdotp x^1, \dotsc, \underline{w}_k \cdotp x^k, \underline{w}) \in Q. \end{align*} On the other hand, we also have \[ \mu \cdotp \overline{w}_j \cdotp x^j + (1-\mu) \cdotp \underline{w}_j \cdotp x^j = w_j x^j \stackrel{\eqref{eq:zeroonebalas1}}{=} y^j, \] so that $ v = \mu \cdotp \overline{v} + (1-\mu) \cdotp \underline{v} $ with $ \overline{v} \neq \underline{v} $, a contradiction to $ v $ being a vertex of $ Q $. Since $ w $ is a vertex of $ \Delta_k $, by relabeling the indices, we may assume that $ w_1 = 1 $ and $ w_j = 0 $ for all $ j \in \{2,\dotsc,k\} $. As mentioned above, this implies $ y^j = \mathbb{O} $ for all $ j \in \{2,\dotsc,k\} $. We claim that $ y^1 $ is a vertex of $ Q_1 $. For the sake of a contradiction let us assume that there exist $ \overline{y}, \underline{y} \in Q_1 $ with $ \overline{y} \neq \underline{y} $ such that $ y^1 = \mu \cdotp \overline{y} + (1-\mu) \cdotp \underline{y} $ for some $ \mu \in (0,1) $. Since $ w_1 = 1 $, we have that $ (\overline{y},w_1), (\underline{y},w_1) \in \homog(Q_1) $ and hence \begin{align*} \overline{v} := (\overline{y},\mathbb{O},\dotsc,\mathbb{O},w) \in Q \quad \text{ and } \quad \underline{v} := (\underline{y},\mathbb{O},\dotsc,\mathbb{O},w) \in Q. \end{align*} On the other hand, we also have $ v = \mu \cdotp \overline{v} + (1-\mu) \cdotp \underline{v} $, which again is a contradiction to $ v $ being a vertex of $ Q $. Since $ Q_1 $ is a $ 0/1 $-polytope, $ y^1 $ is also a $ 0/1 $-vector and so is $ v $. Moreover, since $ (Q_1,\pi_1) $ is a nice $ 0/1 $-extension, we obtain that $ \pi(v) = \pi_1(y^1) $ is a vertex of $ P_1 $ and hence a vertex of $ P := \conv\big(\cup_{i=1}^k P_i \big) $. Thus, $ Q $ is indeed a $ 0/1 $-polytope and each vertex of $ Q $ is projected onto a vertex of $ P $. In order to verify property~(\ref{enum:vertexbijective}), let $ v $ be a vertex of $ P $. Since the $ P_i $'s are $ 0/1 $-polytopes, there exists an index $ \ell \in [k] $ such that $ v $ is a vertex of $ P_{\ell} $. 
Further, let $ v' = (y^1,\dotsc,y^k,w) $ be a vertex of $ Q $ such that $ \pi(v') = v $. We have seen that there exists an index $ i \in [k] $ such that $ y^i $ is a vertex of $ Q_i $, $ w_i = 1 $ and $ y^j = \mathbb{O} $ for all $ j \in [k] \setminus \{i\} $. Since $ v = \pi(v') = \pi_i(y^i) \in P_i $ and the $ P_j $'s are pairwise disjoint, we obtain that $ i = \ell $. Finally, since $ (Q_i,\pi_i) $ is nicely $0/1$, $ y^i $ is uniquely determined by $ v $ and so is $ v' $. \end{proof} \noindent Surprisingly, our computations considerably benefit from this simple consequence: \begin{corollary} \label{cor:simplebalas} For any $ 0/1 $-polytope $ P \subseteq \mathbb{R}^p $ with $ \dim(P) \geq 1 $ and any point $ v \in \{0,1\}^p \setminus P $, it holds that \[ \xcs\big(\conv(P \cup \{v\})\big) \leq \xcs(P) + 1. \qed \] \end{corollary} \noindent Let us remark that the assumption of disjointness in the above statements is only required to satisfy property~(\ref{enum:vertexbijective}) of Definition~\ref{def:nice01}. \subsection{Reflections} \label{sec:reflections} Another general method to construct extended formulations for polytopes $ P $ has been introduced by Kaibel and Pashkovich~\cite{KaibelP13}. It is again based on the assumption that $ P $ can be written as the convex hull of the union of two polytopes $ P' $ and $ P'' $. Further, it requires $ P'' $ to be very similar to $ P' $, namely the reflection of $ P' $ through some hyperplane. For $ a \in \mathbb{R}^p \setminus \{ \mathbb{O} \} $ and $ \beta \in \mathbb{R} $ let $ H^{\leq}(a,\beta) := \{ x : \langle a,x \rangle \leq \beta \} $ denote the associated halfspace. We further define $ H^=(a,\beta) := \{ x : \langle a,x \rangle = \beta \} $ as the corresponding hyperplane. The reflection $ \varphi_H \colon \mathbb{R}^p \to \mathbb{R}^p $ at $ H = H^=(a,\beta) $ can be written as \[ \varphi_H(x) = x + 2 \cdotp \frac{\beta - \langle a,x \rangle}{\|a\|^2} \cdotp a. \] Suppose that $ P' \subseteq H^{\leq}(a,\beta) $ and $ P'' = \varphi_H(P') $. Although \cite{KaibelP13} addresses a slightly more general type of reflection, their results imply the simple extended formulation \[ \conv(P' \cup P'') = \bigg\{ x + \lambda \cdotp \frac{2}{\|a\|^2} \cdotp a : x \in P', \, 0 \leq \lambda \leq \beta - \langle a,x \rangle \bigg\}, \] where replacing $ P' $ again by an extension $ (Q',\pi') $ yields \begin{equation} \label{eq:reflection} \conv(P' \cup P'') = \bigg\{ \pi'(y) + \lambda \cdotp \frac{2}{\|a\|^2} \cdotp a : y \in Q', \, 0 \leq \lambda \leq \beta - \langle a,\pi'(y) \rangle \bigg\}. \end{equation} In terms of a bound on the general extension complexity, we conclude: \begin{theorem}[\cite{KaibelP13}] \label{thm:reflection} Let $ P' \subseteq \mathbb{R}^p $ be a polytope and let $ a \in \mathbb{R}^p \setminus \{ \mathbb{O} \} $, $ \beta \in \mathbb{R} $ be such that $ P' \subseteq H^{\leq}(a,\beta) $, and set $ H = H^=(a,\beta) $. Then \[ \xc\Big(\conv\big(P' \cup \varphi_H(P')\big)\Big) \leq \xc(P') + 2. 
\] \end{theorem} \noindent In Section~\ref{sec:computations}, we will make use of two very specific types of reflections: First, for any $ i \in [p] $ let $ \varphi_{\mathrm{flip},i} \colon \mathbb{R}^p \to \mathbb{R}^p $ be the map that flips the $ i $th entry of vectors $ x \in \mathbb{R}^p $, i.e., \[ \varphi_{\mathrm{flip},i}(x)_{\ell} := \begin{cases} 1-x_i & \text{ if } \ell = i \\ x_{\ell} & \text{ else.} \end{cases} \] This map coincides with the reflection map $ \varphi_H $ that is induced by setting $ \beta = \pm 1 $, $ a_i = \pm2 $ and $ a_{\ell} = 0 $ for all $ \ell \in [p] \setminus \{i\} $. Second, for any $ i,j \in [p] $ with $ i \neq j $ let $ \varphi_{\mathrm{swap},i,j} \colon \mathbb{R}^p \to \mathbb{R}^p $ be the map that swaps the $ i $th and $ j $th coordinate of vectors $ x \in \mathbb{R}^p $, i.e., \[ \varphi_{\mathrm{swap},i,j}(x)_{\ell} := \begin{cases} x_j & \text{ if } \ell = i \\ x_i & \text{ if } \ell = j \\ x_{\ell} & \text{ else.} \end{cases} \] This is equivalent to the reflection map that is induced by setting $ \beta = 0 $, $ a_i = \pm 1 $, $ a_j = \mp 1 $ and $ a_{\ell} = 0 $ for all $ \ell \in [p] \setminus \{i,j\} $. We say that $ (a,\beta) $ \emph{induces a symmetry of the cube} if it is of one of the above types. Note that in all these cases, the expression $ \beta - \langle a,v \rangle $ attains only values in $ \{-1,0,1\} $ for $ v \in \{0,1\}^p $; in particular, $ \beta - \langle a,v \rangle \in \{0,1\} $ holds for all $ v \in \{0,1\}^p \cap H^{\leq}(a,\beta) $. This property allows us to prove an analogue of Theorem~\ref{thm:reflection} for sizes of nice $ 0/1 $-extensions: \begin{theorem} \label{thm:reflectionsxcs} Let $ P' \subseteq \mathbb{R}^p $ be a $ 0/1 $-polytope and let $ a \in \mathbb{R}^p, \beta \in \mathbb{R} $ induce a symmetry of the cube with $ P' \subseteq H^{\leq}(a,\beta) $, where $ H = H^=(a,\beta) $. Then \[ \xcs\Big(\conv\big(P' \cup \varphi_H(P')\big)\Big) \leq \xcs(P') + 2. \] \end{theorem} \begin{proof} Since $ P' \subseteq H^{\leq}(a,\beta) $, the above paragraph yields that $ \beta - \langle a,v \rangle \in \{0,1\} $ holds for all vertices $ v $ of $ P' $. Let $ (Q',\pi') $ be a nice $ 0/1 $-extension of $ P' $. As mentioned above, the polytope \[ Q = \big\{ (y,\lambda) : y \in Q', \, 0 \leq \lambda \leq \beta - \langle a,\pi'(y) \rangle \big\} \] together with the map $ \pi(y,\lambda) = \pi'(y) + \lambda \cdotp \frac{2}{\|a\|^2} \cdotp a $ is an extension for $ P := \conv\big(P' \cup \varphi_H(P')\big) $. We have to show that $ (Q,\pi) $ is nicely $ 0/1 $. Let $ (y,\lambda) $ be a vertex of $ Q $. First, suppose that there exists an $ \varepsilon > 0 $ such that $ 0 \leq \lambda - \varepsilon \leq \lambda \leq \lambda + \varepsilon \leq \beta - \langle a,\pi'(y) \rangle $. In this case, we can write \[ (y,\lambda) = \frac{1}{2} (y,\lambda - \varepsilon) + \frac{1}{2} (y,\lambda + \varepsilon), \] a contradiction since $ (y,\lambda - \varepsilon), (y,\lambda + \varepsilon) \in Q $ and $ (y,\lambda) $ is assumed to be a vertex of $ Q $. Thus, at least one of the inequalities $ 0 \leq \lambda $ and $ \lambda \leq \beta - \langle a,\pi'(y) \rangle $ holds with equality. Second, we show that $ y $ is a vertex of $ Q' $. Let us assume that there are vectors $ y^1,y^2 \in Q' $ with $ y^1 \neq y^2 $ such that $ y = \mu \cdotp y^1 + (1-\mu) \cdotp y^2 $ for some $ \mu \in (0,1) $. 
If $ \lambda = 0 $, we have that $ (y,\lambda) = \mu \cdotp (y^1,0) + (1-\mu) \cdotp (y^2,0) $ as well as $ (y^1,0), (y^2,0) \in Q $ since $ \pi'(Q') \subseteq H^{\leq}(a,\beta) $, a contradiction to $ (y,\lambda) $ being a vertex of $ Q $. Otherwise, if $ \lambda = \beta - \langle a,\pi'(y) \rangle $, let us set $ \lambda_i := \beta - \langle a,\pi'(y^i) \rangle \geq 0 $ for $ i=1,2 $. By construction, we again have that $ (y^1,\lambda_1), (y^2,\lambda_2) \in Q $. Since \begin{align*} \mu \lambda_1 + (1-\mu) \lambda_2 & = \mu (\beta - \langle a,\pi'(y^1) \rangle) + (1-\mu) (\beta - \langle a,\pi'(y^2) \rangle) \\ & = \beta - \langle a,\pi'(y) \rangle = \lambda, \end{align*} we obtain $ (y,\lambda) = \mu \cdotp (y^1,\lambda_1) + (1-\mu) \cdotp (y^2,\lambda_2) $ and hence again a contradiction to the assumption that $ (y,\lambda) $ is a vertex of $ Q $. Since $ (Q',\pi') $ is nicely $ 0/1 $, $ y $ is a $ 0/1 $-vector and $ v := \pi'(y) $ is a vertex of $ P' $. We have seen that $ \lambda \in \{0, \beta - \langle a,v \rangle \} $ holds. Since $ \beta - \langle a,v \rangle \in \{0,1\} $, this implies that $ \lambda \in \{0,1\} $ and hence $ (y,\lambda) $ is a $ 0/1 $-vector. Further, we have that \[ \pi(y,\lambda) \in \{ \pi(y,0), \pi(y,\beta - \langle a,v \rangle) \} = \{ v, \varphi_H(v) \} \] and since $ \varphi_H $ is one of the cube's symmetries, $ \varphi_H(v) $ is a $ 0/1 $-point and hence a vertex of $ P $, and so is $ \pi(y,\lambda) $. It remains to be shown that every vertex $ v^* $ of $ P $ has a unique preimage. Since again $ \varphi_H $ is one of the cube's symmetries and $ P' \subseteq H^{\leq}(a,\beta) $, there exists a unique vertex $ v' $ of $ P' $ such that $ v^* = v' $ or $ v^* = \varphi_H(v') $. Thus, if $ (y^*,\lambda^*) $ projects onto $ v^* $, we must have $ \pi'(y^*) = v' $. Since $ (Q',\pi') $ is nicely $ 0/1 $, $ y^* $ is uniquely determined by $ v' $ (and hence by $ v^* $). Further, $ \lambda^* $ is also uniquely determined by $ v^* $ depending on whether $ v^* = v' $ or $ v^* = \varphi_H(v') $ holds. Note that if $ \varphi_H(v^*) = v^* $, then $ \lambda^* = 0 $ in both cases. \end{proof} \noindent Another simple consequence which our computations exploit is the following fact: \begin{corollary} \label{cor:reflectionfacet} Let $ P' \subseteq \mathbb{R}^p $ be a $ 0/1 $-polytope and let $ a \in \mathbb{R}^p, \beta \in \mathbb{R} $ induce a symmetry of the cube with $ P' \subseteq H^{\leq}(a,\beta) $, where $ H = H^=(a,\beta) $. If $ P' \cap H $ is a facet of $ P' $, then \[ \xcs\Big(\conv\big(P' \cup \varphi_H(P')\big)\Big) \leq \# \mathrm{facets}(P') + 1. \] \end{corollary} \begin{proof} Set $ k = \# \mathrm{facets}(P') $ and let $ A \in \mathbb{R}^{(k-1) \times p} $, $ b \in \mathbb{R}^{k-1} $ such that $ P' = \{ x \in \mathbb{R}^p : Ax \leq b, \, \langle a,x \rangle \leq \beta \} $. By (the proof of) Theorem~\ref{thm:reflectionsxcs}, we have that \[ Q := \{ (x,\lambda) : x \in P', \, 0 \leq \lambda \leq \beta - \langle a,x \rangle \} \] is a nice $ 0/1 $-extension (together with $ \pi(x,\lambda) = x + \lambda \cdotp \frac{2}{\|a\|^2} \cdotp a $) for $ \conv\big(P' \cup \varphi_H(P')\big) $. Since $ 0 \leq \lambda \leq \beta - \langle a,x \rangle $ implies $ \langle a,x \rangle \leq \beta $, we obtain \[ Q = \{ (x,\lambda) : Ax \leq b, \, 0 \leq \lambda \leq \beta - \langle a,x \rangle \} \] and hence $ Q $ has at most $ (k - 1) + 2 = k + 1 $ facets. \end{proof} \subsection{Down-Monotonicity} The third construction that we will use in Section~\ref{sec:computations} is related to the concept of down-monotone polyhedra. 
Here, we are only interested in down-monotonicity with respect to a single coordinate. Let us consider the map $ \down_j \colon \mathbb{R}^p \to \mathbb{R}^p $ defined via \[ \down_j(x)_i := \begin{cases} 0 & \text{if } i = j \\ x_i & \text{else}, \end{cases} \] where $ j \in [p] $. For a polyhedron $ P' \subseteq \mathbb{R}^p_+ $, it is straightforward to see that \[ \conv \big(P' \cup \down_j(P')\big) = \big\{ z \in \mathbb{R}^p : \exists \, x \in P' \text{ with } 0 \leq z_j \leq x_j, \, z_i = x_i \ \forall \, i \in [p] \setminus \{ j \} \big\} \] holds, which immediately yields the bound \[ \xc \Big( \conv \big(P' \cup \down_j(P') \big) \Big) \leq \xc(P') + 2. \] In terms of nice $ 0/1 $-extensions we need an additional requirement on $ P' $: \begin{theorem} \label{thm:downward} Let $ P' $ be a $ 0/1 $-polytope such that $ \down_j $ is injective on the vertices of $ P' $. Then \[ \xcs \Big( \conv \big(P' \cup \down_j(P') \big) \Big) \leq \xcs(P') + 2. \] \end{theorem} \begin{proof} Let $ (Q',\pi') $ be a nice $ 0/1 $-extension of $ P' $ and consider the polytope \[ Q := \{ (y,\lambda) : y \in Q', \, 0 \leq \lambda \leq \pi'(y)_j \} \] together with the linear map \[ \pi(y,\lambda) := \down_j(\pi'(y)) + \mathbbm{e}_j \cdotp \lambda, \] where $ \mathbbm{e}_j $ is the $ j $th unit vector. As mentioned above, $ (Q,\pi) $ is an extension for $ P := \conv (P' \cup \down_j(P') ) $. Let $ (y,\lambda) $ be a vertex of $ Q $. Analogously to the proof of Theorem~\ref{thm:reflectionsxcs}, it is easy to see that $ y $ is a vertex of $ Q' $ and $ \lambda \in \{0,\pi'(y)_j\} \subseteq \{0,1\} $. This directly implies that $ Q $ is a $ 0/1 $-polytope and that every vertex of $ Q $ is projected onto a vertex of $ P $. Finally, the uniqueness of preimages of vertices of $ P $ follows from the fact that $ \down_j $ is injective on the vertices of $ P' $ and that $ (Q',\pi') $ is a nice $ 0/1 $-extension of $ P' $. \end{proof} \section{Combinatorial Lower Bounds} \label{sec:lowerbounds} \noindent In this section, we review known combinatorial lower bounds on the extension complexities of polytopes. We present a slight refinement of the rectangle covering bound which, in the case of $ 4 $-dimensional $ 0/1 $-polytopes, (i) is still easy to compute and (ii) provides tight results. Before that, let us consider the following very simple bounds on the extension complexity: \begin{proposition} \label{prop:triviallowerbounds} For any polytope $ P $ with $ \dim(P) = d $ it holds that \begin{itemize} \item[a)] $ \xc(P) \geq d + 1 $, \item[b)] $ \xc(P) = d + 1 $ if and only if $ P $ is a simplex, \item[c)] $ \xc(P) = d + 2 $ if and only if $ \min\{\# \mathrm{facets}(P),\# \mathrm{vertices}(P)\} = d + 2 $. \end{itemize} \end{proposition} \begin{proof} Let $ (Q,\pi) $ be a minimum size extension for $ P $. By Lemma~\ref{lem:polytope}, we may assume that $ Q $ is a polytope. Parts a) and b) are left to the reader. By Proposition~\ref{prop:trivialupperbounds} it remains to show the only-if part of c). Suppose that $ \# \mathrm{facets}(P),\# \mathrm{vertices}(P) \geq d + 3 $ and let us assume that $ Q $ has at most $ d + 2 $ facets. Note that $ \dim(Q) \geq d + 1 $ since otherwise $ Q $ is isomorphic to $ P $ and hence has at least $ d + 3 $ facets. This implies that $ Q $ is a $ (d+1) $-simplex and hence has only $ d+2 $ vertices. Since $ Q $ must have at least as many vertices as $ P $, we obtain a contradiction. 
\end{proof} \noindent It turns out that, together with constructions from Section~\ref{sec:upperbounds}, the above bounds suffice to determine the extension complexities of all $ 0/1 $-polytopes up to dimension $ 3 $ (see Section~\ref{sec:computations}). Not surprisingly, one needs more profound bounds for tight results in dimension $ 4 $. \subsection{Yannakakis' Theorem} In his seminal paper~\cite{Yannakakis91}, Yannakakis gave an algebraic interpretation of the extension complexity and laid the foundation for the development of bounds used in important theoretical results such as~\cite{FioriniMPTW12} or~\cite{Rothvoss13}. Let $ P = \{ x \in \mathbb{R}^p : Ax \leq b \} $ be a polytope with $ A \in \mathbb{R}^{m \times p} $, $ b \in \mathbb{R}^m $ and $ \{v^1,\dotsc,v^k\} $ the set of its vertices. The nonnegative matrix $ S \in \mathbb{R}_+^{m \times k} $ defined via \[ S_{i,j} := b_i - \langle A_{i,*} , v^j \rangle, \] where $ A_{i,*} $ denotes the $ i $th row of $ A $, is called a \emph{slack matrix} of $ P $. Further, the smallest number $ r $ such that $ S = U \cdotp V $ for two nonnegative matrices $ U \in \mathbb{R}_+^{m \times r} $, $ V \in \mathbb{R}_+^{r \times k} $ is called the \emph{nonnegative rank} of $ S $ and denoted by $ r_+(S) $. \begin{theorem}[Yannakakis '91] Let $ P $ be a polytope and $ S $ be a slack matrix of $ P $. Then $ \xc(P) = r_+(S) $. \end{theorem} \noindent The nonnegative rank of a matrix $ S $ can also be seen as the smallest $ r $ such that $ S $ can be written as the sum of $ r $ nonnegative rank-1 matrices, which is a helpful interpretation for obtaining combinatorial bounds. \subsection{Rectangle Coverings and Fooling Sets} A well-known bound on the nonnegative rank is the \emph{rectangle covering bound}, which we will present here. For more details, in particular related to its application in the field of extended formulations, we refer to the paper of Fiorini et al.~\cite{FioriniKPT11}. Let $ S $ be a slack matrix whose rows and columns are indexed by some sets $ \mathcal{I} $ and $ \mathcal{J} $, respectively. Let us define the \emph{support} of $ S $ as the set $ \supp(S) := \{ (i,j) \in \mathcal{I} \times \mathcal{J} : S_{i,j} > 0 \} $. A set $ I \times J $ with $ I \subseteq \mathcal{I} $, $ J \subseteq \mathcal{J} $ is now called a \emph{rectangle} if $ I \times J \subseteq \supp(S) $. Further, a set of rectangles $ R_1,\dotsc,R_k $ is called a \emph{rectangle covering} if $ \supp(S) = \cup_{\ell=1}^k R_{\ell} $. The following observation motivates the so-called \emph{rectangle covering number} of $ S $, which is denoted by $ \rc(S) $ and defined as the smallest number of rectangles in any rectangle covering of $ S $. Suppose that the nonnegative rank of $ S $ is $ r $, i.e., there exist nonnegative rank-1 matrices $ \mathcal{R}^1,\dotsc,\mathcal{R}^r $ such that $ S = \sum_{\ell=1}^r \mathcal{R}^{\ell} $. Then the sets $ R_{\ell} := \supp(\mathcal{R}^{\ell}) $ are clearly rectangles. Moreover, since all $ \mathcal{R}^{\ell} $ are nonnegative, one has that \[ \supp(S) = \supp\Big(\sum\nolimits_{\ell=1}^r \mathcal{R}^{\ell}\Big) = \cup_{\ell=1}^r \supp(\mathcal{R}^{\ell}) = \cup_{\ell=1}^r R_{\ell}, \] and hence the $ R_{\ell} $'s form a rectangle covering of $ S $. Thus, if $ P $ is a polytope and $ S $ a slack matrix of $ P $, we obtain the rectangle covering bound \[ \rc(S) \leq r_+(S) = \xc(P). \] Since it still seems to be a difficult task to determine (or compute) $ \rc(S) $, one is of course interested in further lower bounds that are easier to compute. 
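To make the preceding notions concrete, the following short Python sketch (purely illustrative and not part of the scripts used for our computations; all helper names and the toy data -- the unit square -- are ours) builds a slack matrix from an outer description and a vertex list, determines its support, and verifies that a candidate family of rectangles is indeed a rectangle covering:
\begin{verbatim}
import numpy as np

# Outer description A x <= b of the unit square and its vertices (toy data).
A = np.array([[-1, 0], [0, -1], [1, 0], [0, 1]], dtype=float)
b = np.array([0, 0, 1, 1], dtype=float)
V = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)

# Slack matrix: S[i, j] = b_i - <A_i, v^j>.
S = b[:, None] - A @ V.T

# Support of S: positions of the strictly positive entries.
supp = {(i, j) for i in range(S.shape[0])
               for j in range(S.shape[1]) if S[i, j] > 1e-9}

def is_rectangle_covering(rectangles, support):
    # Each rectangle I x J must lie in the support; their union must be the support.
    covered = set()
    for I, J in rectangles:
        cells = {(i, j) for i in I for j in J}
        if not cells <= support:
            return False
        covered |= cells
    return covered == support

# One rectangle per facet row (each row of S has exactly two positive entries),
# so this covering certifies rc(S) <= 4 for the square.
rects = [((i,), tuple(j for j in range(4) if (i, j) in supp)) for i in range(4)]
print(is_rectangle_covering(rects, supp))  # True
\end{verbatim}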
One simple bound on the rectangle covering number is the \emph{fooling set bound}, where a \emph{fooling set} is a set $ F \subseteq \supp(S) $ such that \[ S_{i_1,j_2} = 0 \text{ or } S_{i_2,j_1} = 0 \] holds for all distinct pairs $ (i_1,j_1), (i_2,j_2) \in F $. In what follows, the largest cardinality of a fooling set of $ S $ is called the \emph{fooling set number} and will be denoted by $ \omega(S) $. It is easy to see that any rectangle of $ S $ can contain at most one element of a fooling set $ F $. Thus, any rectangle covering of $ S $ consists of at least $ |F| $ rectangles and hence we obtain \[ \omega(S) \leq \rc(S). \] In summary: \begin{proposition} Let $ P $ be a polytope and $ S $ a slack matrix of $ P $. Then \[ \omega(S) \leq \rc(S) \leq r_+(S) = \xc(P). \] \end{proposition} \noindent While not much is known about the general performance of the rectangle covering bound, there are some definite limitations. For instance, it is a classical fact that $ \omega(S) \leq (\dim(P) + 1)^2 $, see, e.g., \cite{FioriniKPT11}. Of course, in the case of $ 0/1 $-polytopes of dimension $ 4 $, this limitation is trivial since such polytopes have at most $ 16 $ vertices and hence an extension complexity of at most $ 16 $. In fact, it turns out that the fooling set bound already yields tight bounds in many of our computations. \subsection{Refined Rectangle Coverings} Although the classical lower bounds perform surprisingly well on slack matrices of $ 0/1 $-polytopes of dimension $ 4 $, there are still some polytopes for which there is a gap between the rectangle covering number and the extension complexity. One drawback of the rectangle covering number is that it only depends on the sparsity pattern of the considered matrix. In general, a rectangle covering $ R_1,\dotsc,R_k $ of a nonnegative matrix $ S $ might be far away from being induced by nonnegative rank-1 matrices $ \mathcal{R}_1,\dotsc,\mathcal{R}_k $ such that $ S = \sum_{\ell=1}^k \mathcal{R}_{\ell} $. For instance, the matrix \[ \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \] has rectangle covering number $ 1 $ while its nonnegative rank is $ 2 $. Thus, in order to ensure that a rectangle covering is at least locally (by only considering $ 2 \times 2 $-submatrices) extendable to a rank-1 decomposition, we propose the following additional requirement: \begin{definition} \label{def:refinedreccover} A rectangle covering $ R_1,\dotsc,R_k $ of a nonnegative matrix $ S $ is called a \emph{refined covering} if \[ \Big| \Big\{ \ell : R_\ell \cap \{ (i_1,j_1), (i_2,j_2) \} \neq \emptyset \Big\} \Big| \geq 2 \] holds for all pairs $ (i_1,j_1), (i_2,j_2) $ with $ S_{i_1,j_1} \cdot S_{i_2,j_2} > S_{i_1,j_2} \cdot S_{i_2,j_1} $. The smallest size of any refined covering of $ S $ is called the \emph{refined rectangle covering number} and denoted by $ \rrc(S) $. \end{definition} \noindent It turns out that this quantity allows us to close all remaining gaps in our computations in Section~\ref{sec:computations}. \begin{theorem} Let $ S $ be a nonnegative matrix. Then \[ \rc(S) \leq \rrc(S) \leq r_+(S). \] \end{theorem} \begin{proof} Since every refined covering is in particular a rectangle covering, the first inequality holds. For the second inequality, suppose that there exist nonnegative rank-1 matrices $ \mathcal{R}_1,\dotsc,\mathcal{R}_k $ such that \begin{equation} \label{eq:proofrefinedsum} S = \sum_{i=1}^k \mathcal{R}_i. \end{equation} We have already seen that $ \supp(\mathcal{R}_1), \dotsc, \supp(\mathcal{R}_k) $ is a rectangle covering of $ S $. It suffices to show that this covering also satisfies the requirements of Definition~\ref{def:refinedreccover}. 
Let us assume the contrary. By reordering the rows and columns of $ S $, we may assume that \begin{equation} \label{eq:proofrefineddeterminant} S_{1,1} \cdotp S_{2,2} > S_{1,2} \cdotp S_{2,1} \end{equation} holds and that there is exactly one index $ j \in [k] $ such that the support of $ \mathcal{R} := \mathcal{R}_j $ has a non-empty intersection with $ \{ (1,1), (2,2) \} $. By equation~\eqref{eq:proofrefinedsum}, this implies that \begin{equation} \label{eq:proofrefinedequal} \mathcal{R}_{1,1} = S_{1,1} \quad \text{and} \quad \mathcal{R}_{2,2} = S_{2,2}. \end{equation} Since $ \mathcal{R} $ has rank 1, we further have that \begin{equation} \label{eq:proofrefinedrank1} \mathcal{R}_{1,1} \cdotp \mathcal{R}_{2,2} = \mathcal{R}_{1,2} \cdotp \mathcal{R}_{2,1}. \end{equation} By the nonnegativity of all $ \mathcal{R}_i $'s, we finally obtain \[ S_{1,2} \cdotp S_{2,1} \geq \mathcal{R}_{1,2} \cdotp \mathcal{R}_{2,1} \stackrel{\eqref{eq:proofrefinedrank1}}{=} \mathcal{R}_{1,1} \cdotp \mathcal{R}_{2,2} \stackrel{\eqref{eq:proofrefinedequal}}{=} S_{1,1} \cdotp S_{2,2} \stackrel{\eqref{eq:proofrefineddeterminant}}{>} S_{1,2} \cdotp S_{2,1}, \] a contradiction. \end{proof} \section{Computations \& Results} \label{sec:computations} \noindent In this section, we briefly describe our approach to computing the extension complexities of all $ 0/1 $-polytopes up to dimension $ 4 $ and present the results. In order to provide comprehensible results, we assign an ID to each polytope. For this purpose, let us define the function $ b \colon \{0,1\}^n \to \{0,\dotsc,2^n-1\} $ via $ b(v) := \sum_{i=1}^n v_i 2^{i-1} $. For a polytope $ P $ with vertex set $ V \subseteq \{0,1\}^n $, the ID of $ P $ is now defined as \[ \mathrm{ID}(P) := \sum_{v \in V} 2^{b(v)} \in \{0,\dotsc,2^{2^n}-1\}. \] Since the extension complexity of a polytope (as well as our notion of $ \xcs(\cdot) $) is invariant under affine transformations, we shall divide all $ 0/1 $-polytopes into affine equivalence classes. The representative of each equivalence class is chosen to be the polytope with smallest ID inside the class. In order to compute the affine equivalence classes, we first enumerated all $ 0/1 $-equivalence classes as proposed in~\cite{Aichholzer00}. Recall that two $ 0/1 $-polytopes $ P, P' \subseteq \mathbb{R}^n $ are $ 0/1 $-equivalent if there exists an affine isomorphism $ f \colon \mathbb{R}^n \to \mathbb{R}^n $ with $ f(\{0,1\}^n) = \{0,1\}^n $ such that $ f(P) = P' $. After computing the f-vectors of the representatives of each $ 0/1 $-equivalence class via \texttt{polymake}, it remained to run a small number of tests for affine equivalence. See Table~\ref{tab:eqclasses} for the intermediate results. \begin{table} \small \begin{tabular}{crrr} \toprule vertices & polytopes & $0/1$-equivalence classes & affine equiv. classes \\ \midrule 5 & 3008 & 17 & 1 \\ 6 & 7408 & 40 & 8 \\ 7 & 11280 & 54 & 17 \\ 8 & 12850 & 72 & 36 \\ 9 & 11440 & 56 & 40 \\ 10 & 8008 & 50 & 43 \\ 11 & 4368 & 27 & 26 \\ 12 & 1820 & 19 & 19 \\ 13 & 560 & 6 & 6 \\ 14 & 120 & 4 & 4 \\ 15 & 16 & 1 & 1 \\ 16 & 1 & 1 & 1 \\ \midrule $\sum$ & 60879 & 347 & 202 \\ \bottomrule \end{tabular} \caption{Number of $ 4 $-dimensional $ 0/1 $-polytopes} \label{tab:eqclasses} \end{table} Before we restrict ourselves to $ 4 $-dimensional $ 0/1 $-polytopes, let us show that no further (computer aided) computations are needed to determine the extension complexities of $ 0/1 $-polytopes of dimension up to $ 3 $. 
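For concreteness, the ID convention above can be sketched in a few lines of Python (an illustrative helper of our own, not part of our actual scripts; it assumes that the given points are precisely the vertices of $ P $):
\begin{verbatim}
from itertools import product

def b(v):
    # Binary value of a 0/1-vector, first coordinate = least significant bit.
    return sum(bit << i for i, bit in enumerate(v))

def polytope_id(vertices):
    # One bit per point of {0,1}^n that occurs as a vertex of P.
    return sum(1 << b(v) for v in set(map(tuple, vertices)))

# Example: the 3-dimensional cube receives the ID 2^0 + ... + 2^7 = 255.
print(polytope_id(product((0, 1), repeat=3)))  # 255
\end{verbatim}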
\subsection{Computations Up to Dimension 3} With respect to affine equivalence, there are $ 12 $ different $ 0/1 $-polytopes of dimension $ 3 $ or less, see Figure~\ref{fig:3dim}. Applying Proposition~\ref{prop:triviallowerbounds} already yields that in each of the cases, except for the \emph{sliced cube} (ID $ 127 $), the trivial extensions of Proposition~\ref{prop:trivialupperbounds} are smallest possible. Considering the sliced cube, observe that it can be written as the convex hull of the union of the prism and a single point. Since the prism has $ 5 $ facets, we obtain an extension with only $ 6 $ facets by Corollary~\ref{cor:simplebalas}. Again by Proposition~\ref{prop:triviallowerbounds}, this has smallest possible size. Note that all minimum size extensions used here are nice $ 0/1 $-extensions. A summary of the results can be found in Table~\ref{tab:dim3}. \begin{figure} \begin{center} \begin{small} \begin{tikzpicture}[scale=1] \tikzstyle{thickline}=[line width=1.5pt, line join=round] \tikzstyle{thinline}=[] \tikzstyle{dottedline}=[dotted]
\def\drawname#1#2#3{ \node at (#1+0.7,#2-0.25) {#3}; }
\def\drawdottedcube#1#2{ \draw[dottedline] (#1,#2) -- ++(1,0) -- ++ (0,1) -- ++(-1,0) -- cycle; \draw[dottedline] (#1+0.4,#2+0.3) -- ++(1,0) -- ++ (0,1) -- ++(-1,0) -- cycle; \draw[dottedline] (#1,#2) -- ++(0.4,0.3); \draw[dottedline] (#1+1,#2) -- ++(0.4,0.3); \draw[dottedline] (#1,#2+1) -- ++(0.4,0.3); \draw[dottedline] (#1+1,#2+1) -- ++(0.4,0.3); }
\def\drawpoint#1#2{ \drawdottedcube{#1}{#2} \node[fill,circle,inner sep=0pt,minimum size=3pt] at (#1,#2) {}; \draw[thickline] (#1,#2) -- (#1,#2); \drawname{#1}{#2}{point ($1$)} }
\def\drawinterval#1#2{ \drawdottedcube{#1}{#2} \draw[thickline] (#1,#2) -- +(1,0); \drawname{#1}{#2}{interval ($3$)} }
\def\drawtriangle#1#2{ \drawdottedcube{#1}{#2} \draw[thickline] (#1,#2) -- +(1,0) -- +(0.4,0.3) -- cycle; \drawname{#1}{#2}{triangle ($7$)} }
\def\drawsquare#1#2{ \drawdottedcube{#1}{#2} \draw[thickline] (#1,#2) -- ++(1,0) -- ++(0.4,0.3) -- ++(-1,0) -- cycle; \drawname{#1}{#2}{square ($15$)} }
\def\drawsimplex#1#2{ \drawdottedcube{#1}{#2} \draw[thinline] (#1,#2) -- +(0.4,0.3) -- +(1,0); \draw[thinline] (#1,#2+1) -- +(0.4,0.3-1); \draw[thickline] (#1,#2) -- +(1,0) -- +(0,1) -- cycle; \drawname{#1}{#2}{tetrahedron ($23$)} }
\def\drawpyramid#1#2{ \drawdottedcube{#1}{#2} \draw[thickline] (#1,#2) -- +(1,0) -- +(0.4+1,0.3) -- +(0,1) -- cycle; \draw[thickline] (#1+1,#2) -- +(-1,1); \draw[thinline] (#1,#2) -- ++(0.4,0.3) -- ++(1,0); \draw[thinline] (#1,#2+1) -- ++(0.4,0.3-1); \drawname{#1}{#2}{pyramid ($31$)} }
\def\drawbipyramid#1#2{ \drawdottedcube{#1}{#2} \draw[thinline] (#1,#2) -- ++(0.4+1,0.3) -- ++(-1,1); \draw[thickline] (#1,#2) -- +(1,0) -- +(0.4+1,0.3) -- +(1,1) -- +(0.4,0.3+1) -- cycle; \draw[thickline] (#1,#2) -- ++(1,1) -- ++(0,-1); \drawname{#1}{#2}{bipyramid ($107$)} }
\def\drawprism#1#2{ \drawdottedcube{#1}{#2} \draw[thinline] (#1,#2) -- ++(0.4,0.3) -- ++(1,0); \draw[thinline] (#1,#2+1) -- ++(0.4,0.3-1); \draw[thickline] (#1,#2) -- ++(1,0) -- ++(0.4,0.3) -- ++(-1*0.4,-1*0.3+1) -- ++(-1,0) -- cycle; \draw[thickline] (#1+1,#2) -- ++(0,1); \drawname{#1}{#2}{prism ($63$)} }
\def\drawnameless#1#2{ \drawdottedcube{#1}{#2} \draw[thinline] (#1,#2) -- ++(0.4,0.3) -- ++(1,0) -- ++(-1,1) -- ++(0,-1); \draw[thickline] (#1,#2) -- ++(1,0) -- ++(0,1) -- ++(-1,-1) -- ++(0.4,0.3+1) -- ++(-1*0.4+1,-1*0.3) -- ++(0.4,0.3-1) -- ++(-1*0.4,-1*0.3); \drawname{#1}{#2}{nameless ($111$)} }
\def\drawoctahedron#1#2{ 
\drawdottedcube{#1}{#2} \draw[thinline] (#1+0.4,#2+0.3) -- ++(1,0) -- ++(-1,1) -- cycle; \draw[thickline] (#1+1,#2) -- ++(0,1) -- ++(-1,0) -- ++(1,-1) -- ++(0.4,0.3) -- ++(-1*0.4,-1*0.3+1) -- ++(0.4-1,0.3) -- ++(-1*0.4,-1*0.3) -- ++(0.4,0.3-1) -- cycle; \drawname{#1}{#2}{octahedron ($126$)} } \def\drawslicedcube#1#2{ \drawdottedcube{#1}{#2} \draw[thinline] (#1,#2) -- ++(0.4,0.3) -- ++(1,0) -- ++(-1,1) -- ++(0,-1); \draw[thickline] (#1,#2) -- ++(1,0) -- ++ (0,1) -- ++(-1,0) -- cycle; \draw[thickline] (#1+1,#2) -- +(0.4,0.3) -- +(0,1) -- +(0.4-1,0.3+1) -- +(-1,1); \drawname{#1}{#2}{sliced cube ($127$)} } \def\drawcube#1#2{ \draw[thinline] (#1,#2) -- ++(0.4,0.3) -- ++(1,0); \draw[thinline] (#1+0.4,#2+0.3) -- ++(0,1); \draw[thickline] (#1,#2+1) -- ++(0.4,0.3) -- ++(1,0) -- ++(0,-1) -- ++(-1*0.4,-1*0.3) -- ++(-1,0) -- ++(0,1) -- ++(1,0) -- ++(0,-1); \draw[thickline] (#1+1,#2+1) -- ++(0.4,0.3); \drawname{#1}{#2}{cube ($255$)} } \drawpoint{-2.5}{7.5} \drawinterval{0}{7.5} \drawtriangle{2.5}{7.5} \drawsquare{5}{7.5} \drawsimplex{-2.5}{5} \drawpyramid{0}{5} \drawbipyramid{2.5}{5} \drawprism{5}{5} \drawnameless{-2.5}{2.5} \drawoctahedron{0}{2.5} \drawslicedcube{2.5}{2.5} \drawcube{5}{2.5} \end{tikzpicture} \end{small} \end{center} \caption{All $ 0/1 $-polytopes up to dimension $ 3 $ (IDs in parentheses)} \label{fig:3dim} \end{figure} \begin{table} \small \begin{tabular}{lrcccc} \toprule polytope & \multicolumn{1}{c}{ID} & dimension & vertices & facets & $ \xcs $ \\ \midrule point & 1 & 0 & 1 & 0 & 0 \\ interval & 3 & 1 & 2 & 2 & 2 \\ triangle & 7 & 2 & 3 & 3 & 3 \\ square & 15 & 2 & 4 & 4 & 4 \\ tetrahedron & 23 & 3 & 4 & 4 & 4 \\ pyramid & 31 & 3 & 5 & 5 & 5 \\ bipyramid & 107 & 3 & 5 & 6 & 5 \\ prism & 63 & 3 & 6 & 5 & 5 \\ nameless & 111 & 3 & 6 & 7 & 6 \\ octahedron & 126 & 3 & 6 & 8 & 6 \\ sliced cube & 127 & 3 & 7 & 7 & 6 \\ cube & 255 & 3 & 8 & 6 & 6 \\ \bottomrule \end{tabular} \caption{Extension complexities of representatives of all affine equivalence classes of $ 0/1 $-polytopes up to dimension $ 3 $} \label{tab:dim3} \end{table} \subsection{Computations in Dimension 4} In Section~\ref{sec:lowerbounds}, we presented several lower bounds on the extension complexity. Let us recall, that we have the inequality chain \[ \omega(S) \leq \rc(S) \leq \rrc(S) \leq r_+(S) = \xc(P) \leq \xcs(P), \] where $ S $ is the slack matrix of the polytope $ P $. For each representative of all affine equivalence classes, we computed their slack matrices with \texttt{polymake}{} and calculated all required lower bounds by using simple, exact backtracking algorithms. In order to obtain tight upper bounds on $ \xc(P) $, we first fixed all representatives for which the trivial extension of Proposition~\ref{prop:trivialupperbounds} is already of minimum size. Observe now that all upper bounds in Section~\ref{sec:upperbounds} are induced by extensions constructed in the following way: Start with a polytope $ P' $ and perform a simple geometric operation to obtain $ P $. If the size of the resulting extension matches the largest lower bound on $ \xc(P) $, then the extension complexity of $ P $ is determined. In that case, we call the polytope $ P' $ the \emph{predecessor polytope} of $ P $. By iterating this process of checking for appropriate predecessors and geometric operations that yield smallest possible extensions, we were able to determine the extension complexities of all representatives. Note that these computations can also be performed by very simple algorithms. 
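As an illustration of the kind of backtracking mentioned above (a sketch of our own rather than the scripts actually used; the function name is ours), the fooling set number $ \omega(S) $ of a small nonnegative matrix can be determined exactly as follows:
\begin{verbatim}
def fooling_set_number(S, tol=1e-9):
    # Exact maximum size of a fooling set of the nonnegative matrix S
    # (given as a list of rows), found by plain backtracking.
    support = [(i, j) for i, row in enumerate(S)
                      for j, s in enumerate(row) if s > tol]

    def compatible(c, d):
        (i1, j1), (i2, j2) = c, d
        return S[i1][j2] <= tol or S[i2][j1] <= tol

    best = 0

    def extend(chosen, rest):
        nonlocal best
        best = max(best, len(chosen))
        for k, c in enumerate(rest):
            if len(chosen) + len(rest) - k <= best:
                break  # too few cells left to improve the best set found so far
            if all(compatible(c, d) for d in chosen):
                extend(chosen + [c], rest[k + 1:])

    extend([], support)
    return best

# Slack matrix of a triangle: each facet contains two of the three vertices,
# so only the opposite vertex has positive slack; omega(S) = 3 here.
print(fooling_set_number([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # 3
\end{verbatim}
\noindent Analogous brute-force routines can be written for $ \rc(S) $ and $ \rrc(S) $; for slack matrices of the sizes occurring here, such searches remain feasible.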
The final results are presented in Table~\ref{tab:finalresults} and can be read as follows: For each representative $ P $, we list its ID, its number of vertices ($ n $) and facets ($ m $), all computed lower bounds on $ \xc(P) $, and $ \xc(P) $ itself. Since all implicitly computed extensions are even nice $ 0/1 $-extensions, we have that $ \xc(P) $ and $ \xcs(P) $ coincide in all these cases. Further, we indicate the geometric operation used to construct the smallest extension as well as the corresponding predecessor polytope. For the geometric operations, we use the following symbols: \[ \begin{array}{cl} \text{-} & \text{no operation, original outer description is smallest possible} \\ \Delta & \text{trivial vertex-extension (see Proposition~\ref{prop:trivialupperbounds})} \\ \cup & \text{union with a single point (see Corollary~\ref{cor:simplebalas})} \\ \div & \text{reflection at a hyperplane corresponding to one of the cube's symmetries} \\ & \text{(see Theorem~\ref{thm:reflectionsxcs})} \\ \phantom{{}^*} \div^* & \text{reflection at a hyperplane corresponding to one of the cube's symmetries that is} \\ & \text{also a facet of the predecessor polytope (see Corollary~\ref{cor:reflectionfacet})} \\ \downarrow & \text{making the predecessor down-monotone with respect to one coordinate} \\ & \text{(see Theorem~\ref{thm:downward})} \end{array} \] Finally, since it may be too time-consuming for the reader to compute the affine equivalence class of a given polytope $ P $, we additionally provide the representatives of all $ 0/1 $-equivalence classes that fall into the same affine equivalence class as $ P $. Thus, the reader only has to compute (the representative of) the $ 0/1 $-equivalence class of $ P $. Note that this can be done very efficiently since any affine symmetry of the cube is a composition of entry flips and coordinate swaps as defined in Section~\ref{sec:reflections}, see~\cite{Aichholzer00}. 
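To illustrate the last remark, the ID of the representative of the $ 0/1 $-equivalence class of a given polytope can be found by enumerating all compositions of entry flips and coordinate swaps (again a sketch of our own rather than the code actually used; it assumes that the input points are exactly the vertices):
\begin{verbatim}
from itertools import permutations, product

def b(v):
    return sum(bit << i for i, bit in enumerate(v))

def representative_id(vertices, n):
    # Smallest ID among all images of conv(vertices) under coordinate
    # permutations and entry flips, i.e. the ID of the representative
    # of its 0/1-equivalence class.
    verts = [tuple(v) for v in vertices]
    best = None
    for perm in permutations(range(n)):          # coordinate swaps
        for flips in product((0, 1), repeat=n):  # entry flips
            image = {tuple(v[perm[i]] ^ flips[i] for i in range(n)) for v in verts}
            pid = sum(1 << b(w) for w in image)
            best = pid if best is None else min(best, pid)
    return best

# The corner simplex conv{0, e_1, e_2, e_3} already carries the smallest
# ID (23) within its 0/1-equivalence class.
print(representative_id([(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)], 3))  # 23
\end{verbatim}
\noindent For $ n = 4 $ this amounts to only $ 2^4 \cdot 4! = 384 $ maps per polytope.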
\vspace{1em} { \small \tablefirsthead{ \toprule \multicolumn{1}{c}{ID} & $ n $ & $ m $ & $ \omega $ & $ \rc $ & $ \rrc $ & $ \xcs $ & extension & predecessor & further representatives \\ \midrule } \tablehead{ \multicolumn{10}{l}{\textit{continued from previous page}} \\ \midrule \multicolumn{1}{c}{ID} & $ n $ & $ m $ & $ \omega $ & $ \rc $ & $ \rrc $ & $ \xcs $ & extension & predecessor & further representatives \\ \midrule } \tabletail{ \midrule \multicolumn{10}{r}{\textit{continued on next page}} \\ } \tablelasttail{ \bottomrule } \bottomcaption{Extension complexities of representatives of all affine equivalence classes of $ 4 $-dimensional $ 0/1 $-polytopes} \label{tab:finalresults} \begin{supertabular}{rrrrrrrcp{2.5cm}p{3cm}} 279 & 5 & 5 & 5 & 5 & 5 & 5 & - & - & \fontsize{0.2cm}{1em}{}\selectfont{}283, 286, 301, 361, 362, 391, 395, 406, 410, 425, 428, 488, 856, 872, 1681, 5761 \\ 287 & 6 & 6 & 6 & 6 & 6 & 6 & - & - & \fontsize{0.2cm}{1em}{}\selectfont{}303, 317, 318, 366, 399, 411, 427, 429, 430, 444, 490, 858, 876, 965, 966, 980, 984, 1635, 1641, 1650, 1656, 1686, 5766 \\ 363 & 6 & 7 & 6 & 6 & 6 & 6 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}407, 414, 489, 1713, 1714, 1716 \\ 5769 & 6 & 8 & 6 & 6 & 6 & 6 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5784 & 6 & 8 & 6 & 6 & 6 & 6 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6017 & 6 & 8 & 6 & 6 & 6 & 6 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 854 & 6 & 9 & 6 & 6 & 6 & 6 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}857, 874 \\ 873 & 6 & 9 & 6 & 6 & 6 & 6 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}1683 \\ 5763 & 6 & 9 & 6 & 6 & 6 & 6 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 319 & 7 & 6 & 6 & 6 & 6 & 6 & - & - & \fontsize{0.2cm}{1em}{}\selectfont{}431, 494, 829, 892, 967, 988, 1639, 1654, 1912, 5782 \\ 1643 & 7 & 7 & 7 & 7 & 7 & 7 & - & - & \fontsize{0.2cm}{1em}{}\selectfont{}1718 \\ 367 & 7 & 8 & 7 & 7 & 7 & 7 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}415, 446, 491, 1777, 1778, 1969, 1972 \\ 855 & 7 & 8 & 7 & 7 & 7 & 7 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}859, 862, 878, 981, 985, 1651, 1658 \\ 382 & 7 & 9 & 7 & 7 & 7 & 7 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}445, 2017, 2018, 5737, 5738 \\ 875 & 7 & 9 & 7 & 7 & 7 & 7 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}877, 982, 1715, 1717 \\ 1657 & 7 & 9 & 7 & 7 & 7 & 7 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}1687, 1721 \\ 5774 & 7 & 10 & 7 & 7 & 7 & 7 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5785 & 7 & 10 & 7 & 7 & 7 & 7 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5786 & 7 & 10 & 7 & 7 & 7 & 7 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6019 & 7 & 10 & 7 & 7 & 7 & 7 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6025 & 7 & 10 & 7 & 7 & 7 & 7 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5767 & 7 & 11 & 7 & 7 & 7 & 7 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5771 & 7 & 11 & 7 & 7 & 7 & 7 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5801 & 7 & 12 & 7 & 7 & 7 & 7 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5804 & 7 & 12 & 7 & 7 & 7 & 7 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}6040 \\ 6625 & 7 & 13 & 7 & 7 & 7 & 7 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 831 & 8 & 6 & 6 & 6 & 6 & 6 & - & - & \fontsize{0.2cm}{1em}{}\selectfont{}975, 1020, 15555 \\ 863 & 8 & 7 & 7 & 7 & 7 
& 7 & - & - & \fontsize{0.2cm}{1em}{}\selectfont{}989, 1655, 1910, 1914 \\ 383 & 8 & 8 & 7 & 7 & 7 & 7 & $ \cup $ & $ 375 \ (\cong 319) $ & \fontsize{0.2cm}{1em}{}\selectfont{}447, 495, 510, 2033, 2034, 2040 \\ 893 & 8 & 8 & 7 & 7 & 7 & 7 & $ \cup $ & $ 892 \ (\cong 319) $ & \fontsize{0.2cm}{1em}{}\selectfont{}983, 1973 \\ 1647 & 8 & 8 & 7 & 7 & 7 & 7 & $ \cup $ & $ 1646 \ (\cong 319) $ & \fontsize{0.2cm}{1em}{}\selectfont{}1782 \\ 1723 & 8 & 8 & 7 & 7 & 7 & 7 & $ \cup $ & $ 1211 \ (\cong 319) $ & \fontsize{0.2cm}{1em}{}\selectfont{}1913 \\ 879 & 8 & 9 & 7 & 7 & 7 & 7 & $ \cup $ & $ 847 \ (\cong 319) $ & \fontsize{0.2cm}{1em}{}\selectfont{}990, 1779, 1971, 1980 \\ 894 & 8 & 9 & 7 & 7 & 7 & 7 & $ \cup $ & $ 892 \ (\cong 319) $ & \fontsize{0.2cm}{1em}{}\selectfont{}987, 1662, 2019, 2022, 5742 \\ 1659 & 8 & 9 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}1719, 1974 \\ 5739 & 8 & 9 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5783 & 8 & 9 & 7 & 7 & 7 & 7 & $ \cup $ & $ 5782 \ (\cong 319) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5790 & 8 & 9 & 7 & 7 & 7 & 7 & $ \cup $ & $ 5782 \ (\cong 319) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6038 & 8 & 10 & 7 & 7 & 7 & 7 & $ \cup $ & $ 5910 \ (\cong 319) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6041 & 8 & 10 & 7 & 7 & 7 & 7 & $ \cup $ & $ 5529 \ (\cong 319) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 1695 & 8 & 11 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}1785 \\ 1725 & 8 & 11 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}2025 \\ 5787 & 8 & 11 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5803 & 8 & 11 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6023 & 8 & 11 & 7 & 7 & 7 & 7 & $ \cup $ & $ 5895 \ (\cong 319) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6630 & 8 & 11 & 7 & 7 & 7 & 7 & $ \cup $ & $ 6502 \ (\cong 319) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5806 & 8 & 12 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}6042 \\ 5820 & 8 & 12 & 7 & 7 & 7 & 7 & $ \cup $ & $ 5692 \ (\cong 319) $ & \fontsize{0.2cm}{1em}{}\selectfont{}6634 \\ 6027 & 8 & 12 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6641 & 8 & 12 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7140 & 8 & 12 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7905 & 8 & 12 & 7 & 7 & 7 & 7 & $ \cup $ & $ 7904 \ (\cong 319) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5775 & 8 & 13 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5805 & 8 & 13 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6030 & 8 & 13 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6057 & 8 & 13 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6627 & 8 & 13 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6633 & 8 & 13 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5866 & 8 & 14 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}6060, 6648 \\ 5865 & 8 & 15 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6375 & 8 & 15 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6120 & 8 & 16 & 8 & 8 & 8 & 8 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}7128, 27030 \\ 1911 & 9 & 6 & 6 & 6 & 6 & 6 & - & - & 
\fontsize{0.2cm}{1em}{}\selectfont{}- \\ 511 & 9 & 7 & 7 & 7 & 7 & 7 & - & - & \fontsize{0.2cm}{1em}{}\selectfont{}4081 \\ 895 & 9 & 8 & 7 & 7 & 7 & 7 & $ \cup $ & $ 831 \ (\cong 831) $ & \fontsize{0.2cm}{1em}{}\selectfont{}991, 1021, 2035, 2042 \\ 1915 & 9 & 8 & 7 & 8 & 8 & 8 & - & - & \fontsize{0.2cm}{1em}{}\selectfont{}1975 \\ 1663 & 9 & 9 & 8 & 8 & 8 & 8 & $ \cup $ & $ 1662 \ (\cong 894) $ & \fontsize{0.2cm}{1em}{}\selectfont{}1783, 2038 \\ 1918 & 9 & 9 & 7 & 8 & 8 & 8 & $ \cup $ & $ 1916 \ (\cong 863) $ & \fontsize{0.2cm}{1em}{}\selectfont{}2023, 5758 \\ 5743 & 9 & 9 & 8 & 8 & 8 & 8 & $ \cup $ & $ 5742 \ (\cong 894) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6039 & 9 & 9 & 7 & 7 & 7 & 7 & $ \cup $ & $ 5911 \ (\cong 831) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 15559 & 9 & 9 & 7 & 7 & 7 & 7 & $ \cup $ & $ 15555 \ (\cong 831) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 1727 & 9 & 10 & 8 & 8 & 8 & 8 & $ \cup $ & $ 1723 \ (\cong 1723) $ & \fontsize{0.2cm}{1em}{}\selectfont{}1787, 2041 \\ 1981 & 9 & 10 & 8 & 8 & 8 & 8 & $ \cup $ & $ 1980 \ (\cong 879) $ & \fontsize{0.2cm}{1em}{}\selectfont{}2027 \\ 6638 & 9 & 10 & 7 & 7 & 7 & 7 & $ \cup $ & $ 4590 \ (\cong 831) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5791 & 9 & 11 & 8 & 8 & 8 & 8 & $ \cup $ & $ 5790 \ (\cong 5790) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5822 & 9 & 11 & 8 & 8 & 8 & 8 & $ \cup $ & $ 5820 \ (\cong 5820) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6043 & 9 & 11 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6041 \ (\cong 6041) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7921 & 9 & 11 & 8 & 8 & 8 & 8 & $ \cup $ & $ 7920 \ (\cong 383) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5807 & 9 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 5295 \ (\cong 879) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6046 & 9 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6038 \ (\cong 6038) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6059 & 9 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6058 \ (\cong 383) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6643 & 9 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6579 \ (\cong 879) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6649 & 9 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6617 \ (\cong 383) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7141 & 9 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 7077 \ (\cong 5790) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7148 & 9 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 7116 \ (\cong 383) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7907 & 9 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 7906 \ (\cong 5820) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5821 & 9 & 13 & 8 & 8 & 8 & 8 & $ \cup $ & $ 5820 \ (\cong 5820) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5870 & 9 & 13 & 8 & 8 & 8 & 8 & $ \cup $ & $ 5862 \ (\cong 5820) $ & \fontsize{0.2cm}{1em}{}\selectfont{}6076, 6650 \\ 6031 & 9 & 13 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6023 \ (\cong 6023) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6061 & 9 & 13 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6053 \ (\cong 6041) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6062 & 9 & 13 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6058 \ (\cong 383) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6631 & 9 & 13 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6630 \ (\cong 6630) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6635 & 9 & 13 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6634 \ (\cong 5820) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6646 & 9 & 13 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6630 \ (\cong 6630) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7910 & 9 & 13 & 8 & 8 & 8 & 8 & $ \cup $ & $ 7908 \ (\cong 5820) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5867 & 9 & 
14 & 8 & 9 & 9 & 9 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6122 & 9 & 14 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6058 \ (\cong 383) $ & \fontsize{0.2cm}{1em}{}\selectfont{}7129 \\ 6383 & 9 & 14 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6382 \ (\cong 6023) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7126 & 9 & 14 & 9 & 9 & 9 & 9 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7913 & 9 & 14 & 8 & 8 & 8 & 8 & $ \cup $ & $ 7912 \ (\cong 894) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6121 & 9 & 15 & 8 & 9 & 9 & 9 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 27031 & 9 & 15 & 8 & 9 & 9 & 9 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 1023 & 10 & 7 & 7 & 7 & 7 & 7 & - & - & \fontsize{0.2cm}{1em}{}\selectfont{}4083 \\ 1919 & 10 & 8 & 7 & 7 & 7 & 7 & $ \cup $ & $ 1911 \ (\cong 1911) $ & \fontsize{0.2cm}{1em}{}\selectfont{}2039 \\ 15567 & 10 & 8 & 7 & 7 & 7 & 7 & $ \div $ & $ 5189 \ (\cong 107) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 1791 & 10 & 9 & 8 & 8 & 8 & 8 & $ \cup $ & $ 1279 \ (\cong 511) $ & \fontsize{0.2cm}{1em}{}\selectfont{}4086 \\ 1983 & 10 & 9 & 8 & 8 & 8 & 8 & $ \cup $ & $ 1967 \ (\cong 895) $ & \fontsize{0.2cm}{1em}{}\selectfont{}2043 \\ 2031 & 10 & 9 & 8 & 8 & 8 & 8 & $ \cup $ & $ 1999 \ (\cong 895) $ & \fontsize{0.2cm}{1em}{}\selectfont{}2046 \\ 5759 & 10 & 9 & 8 & 9 & 9 & 9 & - & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6014 & 10 & 10 & 7 & 9 & 9 & 9 & $ \cup $ & $ 6012 \ (\cong 1918) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 8177 & 10 & 10 & 8 & 8 & 8 & 8 & $ \cup $ & $ 8176 \ (\cong 511) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6047 & 10 & 11 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6039 \ (\cong 6039) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7150 & 10 & 11 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6638 \ (\cong 6638) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7923 & 10 & 11 & 8 & 8 & 8 & 8 & $ \cup $ & $ 7411 \ (\cong 6638) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 8178 & 10 & 11 & 8 & 8 & 8 & 8 & $ \cup $ & $ 8176 \ (\cong 511) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 15575 & 10 & 11 & 8 & 8 & 8 & 8 & $ \cup $ & $ 15571 \ (\cong 15559) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 15579 & 10 & 11 & 8 & 8 & 8 & 8 & $ \cup $ & $ 15571 \ (\cong 15559) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5823 & 10 & 12 & 8 & 9 & 9 & 9 & $ \cup $ & $ 5822 \ (\cong 5822) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5886 & 10 & 12 & 8 & 9 & 9 & 9 & $ \cup $ & $ 5884 \ (\cong 5870) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6063 & 10 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 5551 \ (\cong 895) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6639 & 10 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6638 \ (\cong 6638) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6647 & 10 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6519 \ (\cong 6039) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6651 & 10 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6587 \ (\cong 895) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6654 & 10 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6638 \ (\cong 6638) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7164 & 10 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 5116 \ (\cong 895) $ & \fontsize{0.2cm}{1em}{}\selectfont{}7930 \\ 7918 & 10 & 12 & 7 & 7 & 7 & 7 & $ \cup $ & $ 3822 \ (\cong 1911) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7926 & 10 & 12 & 8 & 9 & 9 & 9 & $ \cup $ & $ 7924 \ (\cong 7148) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 8184 & 10 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 8176 \ (\cong 511) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 15834 & 10 & 12 & 8 & 9 & 9 & 9 & $ 
\cup $ & $ 15832 \ (\cong 5870) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5871 & 10 & 13 & 8 & 9 & 9 & 9 & $ \cup $ & $ 5870 \ (\cong 5870) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6077 & 10 & 13 & 8 & 9 & 9 & 9 & $ \cup $ & $ 6076 \ (\cong 5870) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6078 & 10 & 13 & 8 & 9 & 9 & 9 & $ \cup $ & $ 6076 \ (\cong 5870) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6126 & 10 & 13 & 8 & 8 & 8 & 8 & $ \cup $ & $ 5614 \ (\cong 895) $ & \fontsize{0.2cm}{1em}{}\selectfont{}7131 \\ 6399 & 10 & 13 & 8 & 8 & 8 & 8 & $ \cup $ & $ 4351 \ (\cong 511) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7127 & 10 & 13 & 8 & 9 & 9 & 9 & $ \cup $ & $ 7125 \ (\cong 5870) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7134 & 10 & 13 & 9 & 9 & 9 & 9 & $ \cup $ & $ 7132 \ (\cong 6122) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7143 & 10 & 13 & 8 & 9 & 9 & 9 & $ \cup $ & $ 7142 \ (\cong 7141) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7149 & 10 & 13 & 8 & 9 & 9 & 9 & $ \cup $ & $ 7148 \ (\cong 7148) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7911 & 10 & 13 & 8 & 9 & 9 & 9 & $ \cup $ & $ 7910 \ (\cong 7910) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7915 & 10 & 13 & 8 & 9 & 9 & 9 & $ \cup $ & $ 7914 \ (\cong 5870) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7929 & 10 & 13 & 8 & 9 & 9 & 9 & $ \cup $ & $ 7928 \ (\cong 6122) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 15830 & 10 & 13 & 8 & 9 & 9 & 9 & $ \cup $ & $ 15828 \ (\cong 1918) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6123 & 10 & 14 & 8 & 9 & 9 & 9 & $ \cup $ & $ 6122 \ (\cong 6122) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 27039 & 10 & 14 & 8 & 9 & 9 & 9 & $ \cup $ & $ 27037 \ (\cong 7913) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 27606 & 10 & 14 & 8 & 9 & 10 & 10 & $ \Delta $ & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 2047 & 11 & 8 & 8 & 8 & 8 & 8 & - & - & \fontsize{0.2cm}{1em}{}\selectfont{}4087 \\ 6015 & 11 & 9 & 8 & 8 & 8 & 8 & $ \cup $ & $ 6007 \ (\cong 1919) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 8179 & 11 & 10 & 8 & 8 & 8 & 8 & $ \cup $ & $ 4083 \ (\cong 1023) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 15583 & 11 & 10 & 8 & 8 & 8 & 8 & $ \cup $ & $ 15567 \ (\cong 15567) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6655 & 11 & 11 & 8 & 8 & 8 & 8 & $ \cup $ & $ 4607 \ (\cong 1023) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7934 & 11 & 11 & 8 & 8 & 8 & 8 & $ \cup $ & $ 7918 \ (\cong 7918) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 8186 & 11 & 11 & 8 & 8 & 8 & 8 & $ \cup $ & $ 4090 \ (\cong 1023) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 15853 & 11 & 11 & 8 & 9 & 9 & 9 & $ \cup $ & $ 15852 \ (\cong 7164) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 5887 & 11 & 12 & 8 & 9 & 9 & 9 & $ \cup $ & $ 5375 \ (\cong 1791) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6079 & 11 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 5951 \ (\cong 1919) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7135 & 11 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 7007 \ (\cong 1919) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7151 & 11 & 12 & 8 & 9 & 9 & 9 & $ \cup $ & $ 7150 \ (\cong 7150) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7165 & 11 & 12 & 8 & 9 & 9 & 9 & $ \cup $ & $ 7164 \ (\cong 7164) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7919 & 11 & 12 & 8 & 8 & 8 & 8 & $ \cup $ & $ 7918 \ (\cong 7918) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7927 & 11 & 12 & 8 & 9 & 9 & 9 & $ \cup $ & $ 7925 \ (\cong 7923) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7931 & 11 & 12 & 8 & 9 & 9 & 9 & $ \cup $ & $ 7930 \ (\cong 7164) $ & 
\fontsize{0.2cm}{1em}{}\selectfont{}- \\ 8182 & 11 & 12 & 8 & 9 & 9 & 9 & $ \cup $ & $ 8180 \ (\cong 8178) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 8185 & 11 & 12 & 8 & 9 & 9 & 9 & $ \cup $ & $ 8184 \ (\cong 8184) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 15831 & 11 & 12 & 8 & 9 & 9 & 9 & $ \cup $ & $ 15827 \ (\cong 15575) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 15835 & 11 & 12 & 8 & 9 & 9 & 9 & $ \cup $ & $ 15827 \ (\cong 15575) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 15838 & 11 & 12 & 8 & 9 & 9 & 9 & $ \cup $ & $ 15836 \ (\cong 6126) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6127 & 11 & 13 & 8 & 9 & 9 & 9 & $ \cup $ & $ 6126 \ (\cong 6126) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6142 & 11 & 13 & 8 & 9 & 9 & 9 & $ \cup $ & $ 6140 \ (\cong 6126) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 27071 & 11 & 13 & 8 & 9 & 10 & 10 & $ \cup $ & $ 27070 \ (\cong 27039) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 27581 & 11 & 13 & 8 & 10 & 10 & 10 & $ \cup $ & $ 27580 \ (\cong 15830) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 27607 & 11 & 13 & 8 & 9 & 10 & 10 & $ \cup $ & $ 27605 \ (\cong 7929) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 4095 & 12 & 7 & 7 & 7 & 7 & 7 & - & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 15615 & 12 & 9 & 8 & 8 & 8 & 8 & $ \div $ & $ 5205 \ (\cong 111) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 15869 & 12 & 10 & 8 & 8 & 8 & 8 & $ \phantom{{}^*}\div^* $ & $ 12785 \ (\cong 863) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 16380 & 12 & 10 & 8 & 8 & 8 & 8 & $ \div $ & $ 5460 \ (\cong 126) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7167 & 12 & 11 & 8 & 8 & 8 & 8 & $ \downarrow $ & $ 6604 \ (\cong 319) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 7935 & 12 & 11 & 8 & 9 & 9 & 9 & $ \cup $ & $ 7934 \ (\cong 7934) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 8183 & 12 & 11 & 8 & 9 & 9 & 9 & $ \cup $ & $ 8181 \ (\cong 8179) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 8187 & 12 & 11 & 8 & 9 & 9 & 9 & $ \cup $ & $ 8186 \ (\cong 8186) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 8190 & 12 & 11 & 8 & 9 & 9 & 9 & $ \cup $ & $ 8188 \ (\cong 8186) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 15839 & 12 & 11 & 8 & 9 & 9 & 9 & $ \cup $ & $ 15837 \ (\cong 7934) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 15855 & 12 & 11 & 8 & 9 & 9 & 9 & $ \cup $ & $ 15823 \ (\cong 15583) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 15870 & 12 & 11 & 8 & 9 & 9 & 9 & $ \cup $ & $ 15868 \ (\cong 8186) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 6143 & 12 & 12 & 8 & 9 & 9 & 9 & $ \cup $ & $ 6135 \ (\cong 6079) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 27135 & 12 & 12 & 8 & 9 & 9 & 9 & $ \phantom{{}^*}\div^* $ & $ 27135 \ (\cong 27135) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 27583 & 12 & 12 & 8 & 9 & 9 & 9 & $ \cup $ & $ 27579 \ (\cong 7919) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 27615 & 12 & 12 & 8 & 9 & 10 & 10 & $ \cup $ & $ 27613 \ (\cong 15838) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 27645 & 12 & 12 & 8 & 10 & 10 & 10 & $ \cup $ & $ 27644 \ (\cong 15838) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 28662 & 12 & 12 & 8 & 9 & 9 & 9 & $ \div $ & $ 19924 \ (\cong 1647) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 28665 & 12 & 12 & 8 & 10 & 10 & 10 & $ \cup $ & $ 28664 \ (\cong 8182) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 8191 & 13 & 10 & 8 & 8 & 8 & 8 & $ \cup $ & $ 4095 \ (\cong 4095) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 15871 & 13 & 10 & 8 & 9 & 9 & 9 & $ \div $ & $ 12787 \ (\cong 895) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 16381 & 13 
& 10 & 8 & 9 & 9 & 9 & $ \div $ & $ 13297 \ (\cong 895) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 27647 & 13 & 11 & 8 & 9 & 10 & 10 & $ \cup $ & $ 27643 \ (\cong 15839) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 28663 & 13 & 11 & 8 & 9 & 10 & 10 & $ \cup $ & $ 28661 \ (\cong 15870) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 28667 & 13 & 11 & 8 & 10 & 10 & 10 & $ \cup $ & $ 28666 \ (\cong 15870) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 16383 & 14 & 9 & 8 & 8 & 8 & 8 & $ \div $ & $ 5461 \ (\cong 127) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 28671 & 14 & 10 & 8 & 9 & 9 & 9 & $ \cup $ & $ 20479 \ (\cong 8191) $ & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 32511 & 14 & 10 & 8 & 10 & 10 & 10 & - & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 32766 & 14 & 10 & 8 & 9 & 10 & 10 & - & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 32767 & 15 & 9 & 8 & 9 & 9 & 9 & - & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ 65535 & 16 & 8 & 8 & 8 & 8 & 8 & - & - & \fontsize{0.2cm}{1em}{}\selectfont{}- \\ \end{supertabular} } \newpage \section{Outlook} \noindent We computed the extension complexities of all $ 0/1 $-polytopes up to dimension $ 4 $ by providing minimum size extensions and matching lower bounds. In particular, all implicitly computed minimum size extensions were induced by simple geometric operations and satisfy strong properties: \begin{observation*} For every $ 0/1 $-polytope of dimension at most $ 4 $ there exists a minimum size extension that is even a nice $ 0/1 $-extension. \end{observation*} \noindent As mentioned throughout this paper, the authors are not aware of any arguments that rule out the existence of such extensions for general $ 0/1 $-polytopes. On the other hand, there is no theoretical evidence that minimum size extensions with such strong properties should always exist. Thus, any new insights regarding such properties of minimum size extensions are of certain interest. On the computational side, as a next natural step one might consider computations in dimension $ 5 $. However, it seems to be a much more difficult task to achieve a complete list of results in this case. There are $ 1{,}226{,}525 $ different $ 0/1 $-equivalence classes~\cite{Aichholzer00}. Thus, even the determination of all affine equivalence classes is presumably very time-consuming. Independently, due to the sizes of the corresponding slack matrices, computing all lower bounds presented in this paper requires sophisticated algorithms. In addition, it is questionable whether rather simple (lower and upper) bounds as presented in this paper suffice to determine all extension complexities. In this paper we considered the \emph{linear} extension complexity of a polytope $ P $, which can also be seen as the smallest $ r $ such that $ P $ can be written as a linear projection of an affine slice of the nonnegative orthant $ \mathbb{R}^r_+ $. A more general way of representing polytopes that recently has become of particular interest is the concept of semidefinite extensions. Here, the analogous quantity is the so-called \emph{semidefinite} extension complexity, which is defined as the smallest $ r $ such that $ P $ can be written as a linear projection of an affine slice of the cone $ \mathbb{S}^r_+ $ of positive semidefinite matrices of size $ r \times r $. Although there are already several papers concerning this quantity, the field lacks of good upper and lower bounds on sizes of semidefinite extensions. 
For instance, Gouveia~et~al.~\cite{GouveiaRT13} examine properties of polytopes $ P $ whose semidefinite extension complexity equals $ \dim(P) + 1 $ (which is a general lower bound~\cite{LeeT12}). Using their observations together with our results in Table~\ref{tab:dim3} and the basic construction of semidefinite extensions by taking the positive Hadamard square root of a slack matrix (see~\cite{GouveiaRT13}), it is easy to determine the semidefinite extension complexities of all $ 0/1 $-polytopes up to dimension $ 3 $. However, in order to continue the computations for dimension $ 4 $, it seems that new -- yet unknown -- bounds have to be taken into account. We hope that our computations provide a helpful basis towards this task. \bibliographystyle{plain}
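To make the Hadamard square root construction concrete, the following minimal sketch (Python with numpy assumed; not part of the computations reported above) bounds the semidefinite extension complexity of the $ 0/1 $-square from above by the rank of the entrywise square root of its slack matrix, which in this case already matches the general lower bound $ \dim(P) + 1 $.
\begin{verbatim}
import numpy as np

# Slack matrix of the 0/1-square [0,1]^2.
# Rows: facets x1 >= 0, x2 >= 0, 1 - x1 >= 0, 1 - x2 >= 0;
# columns: vertices (0,0), (1,0), (1,1), (0,1).
S = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [1, 1, 0, 0]], dtype=float)

# Entrywise (positive Hadamard) square root: if sqrt(S) = U W with inner
# dimension r, then S_ij = <u_i u_i^T, w_j w_j^T>, a semidefinite
# factorization of size r, so rank(sqrt(S)) upper-bounds the semidefinite
# extension complexity.
upper = np.linalg.matrix_rank(np.sqrt(S))   # = 3
lower = 2 + 1                               # general lower bound dim(P) + 1
print(lower, upper)                         # 3 3: the complexity equals 3
\end{verbatim}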
{ "timestamp": "2014-06-20T02:01:53", "yymm": "1406", "arxiv_id": "1406.4895", "language": "en", "url": "https://arxiv.org/abs/1406.4895", "abstract": "We present slight refinements of known general lower and upper bounds on sizes of extended formulations for polytopes. With these observations we are able to compute the extension complexities of all 0/1-polytopes up to dimension 4. We provide a complete list of our results including geometric constructions of minimum size extensions for all considered polytopes. Furthermore, we show that all of these extensions have strong properties. In particular, one of our computational results is that every 0/1-polytope up to dimension 4 has a minimum size extension that is also a 0/1-polytope.", "subjects": "Combinatorics (math.CO)", "title": "Computing The Extension Complexities of All 4-Dimensional 0/1-Polytopes", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575173068325, "lm_q2_score": 0.7217431943271998, "lm_q1q2_score": 0.7091542011512362 }
https://arxiv.org/abs/2001.02753
Locating conical degeneracies in the spectra of parametric self-adjoint matrices
A simple iterative scheme is proposed for locating the parameter values for which a 2-parameter family of real symmetric matrices has a double eigenvalue. The convergence is proved to be quadratic. An extension of the scheme to complex Hermitian matrices (with 3 parameters) and to the location of triple eigenvalues (5 parameters for real symmetric matrices) is also described. Algorithm convergence is illustrated in several examples: a real symmetric family, a complex Hermitian family, a family of matrices with an "avoided crossing" (no convergence) and a 5-parameter family of real symmetric matrices with a triple eigenvalue.
\section{Introduction}
A theorem of von Neumann and Wigner states that, generically, a two-parameter family of real symmetric matrices has multiple eigenvalues at isolated points \cite{WigVon}. In other words, the matrices with multiple eigenvalues have co-dimension 2 in the manifold of real symmetric matrices \cite[Appendix 10]{Arnold_mechanics}. In this paper, we would like to address the problem of locating these isolated points of eigenvalue multiplicity in the 2-dimensional parameter space. To be more precise, we consider the following problem.
\begin{problem} Given a smooth real symmetric matrix valued function $A: \mathbb{R}^2 \mapsto \mathbb{R}^{n \times n}$, locate the values of the parameters $(x,y)$ which yield a matrix $A(x,y)$ with degenerate eigenvalues. \end{problem}
To give a simple example, the function \begin{equation*} A(x,y) = \begin{pmatrix} x & y \\ y & -x \end{pmatrix} \end{equation*} has a double eigenvalue at the unique point $(x,y)=(0,0)$. Its eigenvalues $\lambda$ satisfy the equation $\lambda^2 = x^2+y^2$ and the eigenvalue surface is a circular double cone in the space $(x,y,\lambda)$. In contrast, the nonlinear function \begin{equation} \label{eq:example_func} A(x,y) = \begin{pmatrix} \cos(y)\sin(x) & 2-3\sin(y-x) \\ 2-3\sin(y-x) & 2\cos(y)-\sin(x) \end{pmatrix} \end{equation} has multiple points of eigenvalue multiplicity, see Figure~\ref{fig:simple_cone}. Each point is isolated and locally around each point the eigenvalue surface also looks like a cone. For a family of complex Hermitian matrices, the co-dimension of the matrices with multiple eigenvalues is 3. Therefore, the analogous question can be posed about locating multiple eigenvalues of a Hermitian $A(x,y,z)$. We will formulate an extension of our results to complex Hermitian matrices but will concentrate on the real symmetric case in our proofs.
\begin{figure} \centering \includegraphics[scale=0.5]{images/example_conical_point-crop.pdf} \caption{Eigenvalue surfaces corresponding to $A(x,y)$ from \eqref{eq:example_func}. There are three conical points; the surfaces appear not to touch at the middle point due to insufficient grid precision.} \label{fig:simple_cone} \end{figure}
The problem of locating the points of eigenvalue multiplicity is of practical importance. In condensed matter physics \cite{AshcroftMermin_solid} the wave propagation through a periodic medium is studied via the Floquet--Bloch transform \cite{Kuc_floquet,Kuc_bams16} which results in a parametric family of self-adjoint operators (or matrices) with discrete spectrum. The eigenvalue surfaces (sheets of the ``dispersion relation'') may touch, see Fig.~\ref{fig:simple_cone}, which has a profound effect on wave propagation and its sensitivity to a small perturbation of the medium. This touching corresponds precisely to a multiplicity in the eigenvalue spectrum. To give a well-studied example, the unusual electron properties of graphene occur due to the presence of eigenvalue multiplicity \cite{CasGraphen_rmp09,Nov_nob10}. It is also of practical relevance to be able to distinguish touching from ``almost touching'' (also known as ``avoided crossing'' in one-parameter problems). The question of locating eigenvalue multiplicity in a family of $2\times 2$ real symmetric matrices $A$ has a straightforward solution (which also illustrates why the co-dimension is 2).
The discriminant of a real symmetric $A \in \mathbb{R}^{2\times 2}$ can be written as a sum of two squares, \begin{equation} \label{eq:discriminant_def} \disc(A):=(\lambda_1-\lambda_2)^2=(A_{11}-A_{22})^2+4A_{12}^2. \end{equation} By definition, the discriminant is $0$ if and only if two eigenvalues coincide, therefore we have two conditions that must simultaneously be met for the multiplicity to occur: \begin{equation} \label{eq:F_def} F(x,y) = \textbf{0}, \qquad \mbox{where}\quad F : \mathbb{R}^2 \to \mathbb{R}^2, \quad \ F(x,y) := \begin{pmatrix} A_{11}(x,y)-A_{22}(x,y)\\ A_{12}(x,y) \end{pmatrix}. \end{equation} Unfortunately, for larger matrices the discriminant quickly becomes unwieldy and cannot be used in practical computations. The discriminant can still be written as a sum of squares \cite{Ily,Lax,Par_laa02,DanIkr}, but the number of terms grows fast with the size of the matrix. Thus, for an $n\times n$ real symmetric matrix $A(x,y)$ depending on two parameters $x$ and $y$ there is only one easily computable function $\lambda_2(x,y) - \lambda_1(x,y)$ whose root, in variables $x$ and $y$, we are seeking.\footnote{Here, without loss of generality, we have assumed that one is interested in the degeneracy $\lambda_1 = \lambda_2 < \lambda_3 < \ldots$} However, to apply a standard method with quadratic convergence, such as the Newton--Raphson algorithm, one needs 2 functions for 2 variables. One can search for the minimum of the square eigenvalue difference, $\big(\lambda_2(x,y) - \lambda_1(x,y)\big)^2$, which is smooth. But such a search would converge equally well to a point of ``avoided crossing'', a pitfall our proposed method manages to avoid, see Sections~\ref{sec:avoided} and \ref{sec:mergingDirac}. One can change the basis to make $A(x,y)$ block-diagonal, with a $2\times2$ block corresponding to eigenvalues $\lambda_1$ and $\lambda_2$. The existence of this change in a neighborhood of the multiplicity point is assured (using the Riesz projector) if $\lambda_{1,2}$ remain bounded away from the rest of the spectrum. However, the new basis will depend on the parameters $(x,y)$ and is not directly accessible for numerical computations. Despite this obstacle, we will show that a ``naive'' approach produces equivalently good convergence: one can use a \emph{constant} eigenvector basis which is recomputed\footnote{We are motivated mostly by the applications to tight-binding models of condensed matter physics \cite{AshcroftMermin_solid} where the matrix dimension $n$ is often of order $10$ and computation of eigenvectors is relatively fast and precise. Another area of application is pointed out at the end of Section~\ref{sec:mergingDirac}.} at each point of the Newton--Raphson iteration. More precisely, we establish the following theorem. \begin{theorem} \label{thm:main} Let $A({\mathbf{r}}):\mathbb{R}^2\mapsto \mathbb{R}^{n\times n}$ be a real symmetric matrix valued function which is continuously twice differentiable in each entry, with a non-degenerate conical point (defined below) between $\lambda_1$ and $\lambda_2$ at parameter point $\alpha$.
For any ${\mathbf{r}}_i$, define ${\mathbf{r}}_{i+1}$ by \begin{equation} \label{iter_equation_simple} {\mathbf{r}}_{i+1} = {\mathbf{r}}_i - \begin{pmatrix} \big\langle v_1, \spdev{x}{A}v_1\big\rangle -\big\langle v_2, \spdev{x}{A}v_2\big\rangle & \big\langle v_1, \spdev{y}{A}v_1\big\rangle -\big\langle v_2, \spdev{y}{A}v_2\big\rangle \\ 2\big\langle v_1, \spdev{x}{A}v_2\big\rangle & 2\big\langle v_1, \spdev{y}{A}v_2\big\rangle \end{pmatrix}^{-1} \begin{pmatrix} \lambda_1 - \lambda_2 \\ 0 \end{pmatrix} \end{equation} where $\lambda_{1,2}=\lambda_{1,2}({\mathbf{r}}_i)$ denote the eigenvalues of $A$ at the point ${\mathbf{r}}_i$ and $v_{1,2}=v_{1,2}({\mathbf{r}}_i)$ denote the corresponding eigenvectors. Then there exists an open neighborhood $\Omega\subset \mathbb{R}^2$ of $\alpha$ and a constant $C>0$ such that for all ${\mathbf{r}}_i\in \Omega$, the corresponding ${\mathbf{r}}_{i+1}$ satisfies the estimate \begin{equation} \label{eq:quadratic_conv} |{\mathbf{r}}_{i+1}-\alpha|<C|{\mathbf{r}}_i-\alpha|^2. \end{equation} \end{theorem} Before we prove this theorem in Section~\ref{sec:proofs}, we explain in Section~\ref{sec:discussion} the geometrical picture behind the iterative procedure~(\ref{iter_equation_simple}) and also point out the main differences between~(\ref{iter_equation_simple}) and the Newton--Raphson method in a conventional setting. We also review related literature in Section~\ref{sec:berry_phase} once we introduce relevant notions. The precise definition and properties of ``nondegenerate conical point'' is given in Section~\ref{sec:conical_points}. Section~\ref{sec:examples} contains some computational examples. \subsection{Notation} We let $C^2(\mathbb{R}^2, \mathbb{R}^{n\times n})$ denote the set of matrix valued functions mapping $\mathbb{R}^2$ to $\mathbb{R}^{n\times n}$ with each element being continuously twice differentiable. The eigenvalues of the matrix function $A\in C^2(\mathbb{R}^2, \mathbb{R}^{n\times n})$ are numbered in the increasing order $\lambda_1 \leq \lambda_2 \leq \lambda_3 \leq \cdots \leq \lambda_n$ and without loss of generality we will look for ${\mathbf{r}}=(x,y)\in\mathbb{R}^2$ such that $\lambda_1({\mathbf{r}}) = \lambda_2({\mathbf{r}})$. Naturally, all results apply equally well to any pair of consecutive eigenvalues. We remark that functions $\lambda_k({\mathbf{r}})$ are continuous but not necessarily smooth: the points of eigenvalue multiplicity are typically the points where the eigenvalues involved are not differentiable, see Fig.~\ref{fig:simple_cone}. For any real symmetric matrix valued function $A$ and any point ${\mathbf{p}}\in \mathbb{R}^2$, we let $A^{\mathbf{p}} = V^* A({\mathbf{r}}) V$ denote the representation of $A$ in the eigenvector basis computed at point ${\mathbf{p}}$. That is, $V$ is a fixed orthogonal matrix whose columns are the eigenvectors of $A({\mathbf{p}})$. The eigenvectors are assumed to be numbered according to the eigenvalue ordering. This means that $A^{\mathbf{p}}\in C^2(\mathbb{R}^2, \mathbb{R}^{n\times n})$ is a diagonal matrix at the point ${\mathbf{p}}$ but not necessarily anywhere else. 
We let \begin{equation} \label{eq:tildeA_def} \widetilde{A}^{\mathbf{p}}({\mathbf{r}}) = \begin{pmatrix} A^{\mathbf{p}}_{11} & A^{\mathbf{p}}_{12}\\ A^{\mathbf{p}}_{21} & A^{\mathbf{p}}_{22} \end{pmatrix} := \begin{pmatrix} \langle v_1, A({\mathbf{r}}) v_1\rangle & \langle v_1, A({\mathbf{r}}) v_2\rangle \\ \langle v_2, A({\mathbf{r}}) v_1\rangle & \langle v_2, A({\mathbf{r}}) v_2\rangle \end{pmatrix} \end{equation} denote the submatrix of $A^{\mathbf{p}}$ corresponding to the eigenvectors of the coalescing eigenvalues. We stress again that the eigenvectors $v_1=v_1({\mathbf{p}})$ and $v_2=v_2({\mathbf{p}})$ are computed at the point ${\mathbf{p}}$ and do not vary with ${\mathbf{r}}$. By the definition of $A^{\mathbf{p}}$, we have \begin{equation} \label{eq:tildeA_diag} \widetilde{A}^{\mathbf{p}}({\mathbf{p}}) = \begin{pmatrix} \lambda_1({\mathbf{p}}) & 0\\ 0 & \lambda_2({\mathbf{p}}) \end{pmatrix}. \end{equation} We let \begin{equation} \label{eq:target_function_def} F\big(A^{\mathbf{p}}({\mathbf{r}})\big) := \begin{pmatrix} A^{{\mathbf{p}}}_{11}({\mathbf{r}})-A^{{\mathbf{p}}}_{22}({\mathbf{r}})\\ 2A^{{\mathbf{p}}}_{12}({\mathbf{r}}) \end{pmatrix} \end{equation} denote the target function similar to \eqref{eq:F_def}. We stress that $F$ is a function of ${\mathbf{r}}$. Throughout the paper $\mathrm{D}$ will denote the row vector of derivatives taken with respect to parameters ${\mathbf{r}} = (x,y)$, \begin{equation*} \mathrm{D} f = \left( \frac{\partial f}{\partial x}, \ \frac{\partial f}{\partial y}\right). \end{equation*} If $f$ is a vector-function, $\mathrm{D} f$ is a matrix with 2 columns. We use the notation $\mathrm{D}_{{\mathbf{r}}_0} f$ to denote the derivative evaluated at the point ${\mathbf{r}}={\mathbf{r}}_0$, i.e.\ \begin{equation*} \mathrm{D}_{{\mathbf{r}}_0} f = \left( \frac{\partial f}{\partial x}({\mathbf{r}}_0), \ \frac{\partial f}{\partial y}({\mathbf{r}}_0)\right). \end{equation*} We use notation $J_{{\mathbf{r}}}(A^{{\mathbf{p}}})$ to denote the Jacobian of $F(A^{\mathbf{p}})$, \begin{equation} \label{eq:Jacobian_def} J_{{\mathbf{r}}}(A^{{\mathbf{p}}}) := \mathrm{D}_{{\mathbf{r}}} F(A^{\mathbf{p}}) = \begin{pmatrix} \big\langle v_1, \spdev{x}{A}v_1\big\rangle -\big\langle v_2, \spdev{x}{A}v_2\big\rangle & \big\langle v_1, \spdev{y}{A}v_1\big\rangle -\big\langle v_2, \spdev{y}{A}v_2\big\rangle \\ 2\big\langle v_1, \spdev{x}{A}v_2\big\rangle & 2\big\langle v_1, \spdev{y}{A}v_2\big\rangle \end{pmatrix}, \end{equation} where $v_1,v_2$ are the eigenvectors of $A({\mathbf{p}})$ and the derivatives $\spdev{x}{A}$ and $\spdev{y}{A}$ have been evaluated at point ${\mathbf{r}}$. This is the matrix appearing in Theorem~\ref{thm:main}. The factor $2$ in the definition of $J_{{\mathbf{r}}}(A^{{\mathbf{p}}})$ arises naturally in calculations; it can also be used to put the second row terms in the more symmetric form, \begin{equation*} 2\Big\langle v_1, \spdev{x}{A}v_2\Big\rangle = \Big\langle v_1, \spdev{x}{A}v_2\Big\rangle + \Big\langle v_2, \spdev{x}{A}v_1\Big\rangle. \end{equation*} Finally, we remark that by our definitions $F(A) = F\big(\widetilde{A}\big)$ and $J_{\mathbf{r}}(A) = J_{\mathbf{r}}\big(\widetilde{A}\big)$. Therefore, the tilde (defined in equation~\eqref{eq:tildeA_def}) will usually be omitted once we invoke functions $F$ and $J$. 
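To fix ideas, a minimal numerical sketch of a single step of iteration~\eqref{iter_equation_simple} could look as follows (Python with numpy is assumed; the matrix family and its partial derivatives are supplied by the caller, here for the linear $2\times2$ example from the Introduction, for which one step from a generic starting point already returns the conical point):
\begin{verbatim}
import numpy as np

def newton_step(A, dAx, dAy, r):
    # One step of the iteration: A, dAx, dAy map (x, y) to real symmetric
    # n x n arrays; r = (x, y) is the current guess.
    lam, V = np.linalg.eigh(A(*r))     # eigenvalues in increasing order
    v1, v2 = V[:, 0], V[:, 1]          # eigenvectors of lambda_1, lambda_2
    Ax, Ay = dAx(*r), dAy(*r)
    J = np.array([[v1 @ Ax @ v1 - v2 @ Ax @ v2, v1 @ Ay @ v1 - v2 @ Ay @ v2],
                  [2.0 * (v1 @ Ax @ v2),        2.0 * (v1 @ Ay @ v2)]])
    rhs = np.array([lam[0] - lam[1], 0.0])
    return np.asarray(r, dtype=float) - np.linalg.solve(J, rhs)

A   = lambda x, y: np.array([[x, y], [y, -x]], dtype=float)
dAx = lambda x, y: np.array([[1.0, 0.0], [0.0, -1.0]])
dAy = lambda x, y: np.array([[0.0, 1.0], [1.0, 0.0]])
print(newton_step(A, dAx, dAy, (0.3, -0.2)))   # approximately [0. 0.]
\end{verbatim}
For a nonlinear family the step is simply repeated, with the eigenvector pair recomputed at each new point.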
\section{Discussion} \label{sec:discussion} \subsection{Geometric interpretation} \label{sec:berry_phase} What is described in this paper is a variation of the Newton-Raphson method searching for a zero of the objective function $\lambda_1({\mathbf{r}})-\lambda_2({\mathbf{r}})$. This is only one condition on two parameters (in the real symmetric case), and leads to an underdetermined Newton-Raphson iteration. In particular, given an initial guess ${\mathbf{r}}_0$, we would like to update our guess to ${\mathbf{r}}_1$ such that \begin{equation} \label{eq:tangent_cond} \mathrm{D}_{{\mathbf{r}}_0}\big(\lambda_1({\mathbf{r}})-\lambda_2({\mathbf{r}})\big) \, ({\mathbf{r}}_1-{\mathbf{r}}_0) = -\big(\lambda_1({\mathbf{r}}_0)-\lambda_2({\mathbf{r}}_0)\big). \end{equation} However, there is a whole line of points ${\mathbf{r}}_1$ that satisfy this condition, as illustrated in Figure~\ref{fig:tangent_planes}. \begin{figure} \centering \subfloat[]{\includegraphics[width=0.24\textwidth]{./images/both_cones-crop.pdf}} \subfloat[]{\includegraphics[width=0.24\textwidth]{./images/top_cone-crop.pdf}} \subfloat[]{\includegraphics[width=0.24\textwidth]{./images/bottom_plane-crop.pdf}} \subfloat[]{\includegraphics[width=0.24\textwidth]{./images/both_planes-crop.pdf}} \caption{(a) Conical degeneracy of eigenvalues. (b) Linear approximation of top eigenvalue about the initial guess. (c) Linear approximation of bottom eigenvalue about the initial guess. (d) The intersection of the two linear approximations is a line, not a point. We need to use the conical nature of the intersection to determine a unique point to chose as our next guess.} \label{fig:tangent_planes} \end{figure} To incorporate our knowledge that the degeneracy occurs at an isolated point, we use a heuristic derived from Berry phase \cite{HerLon_dfs63,Ber,Sim_prl83}, a phenomenon which underlies the inability to find a smooth diagonalization around a degeneracy: on a loop in the parameter space around a nondegenerate conical point, a continuous choice of eigenvectors must rotate by $\pi$ (as opposed to 0 mod $2\pi$). But if smoothly going in a loop around the degeneracy rotates the eigenvectors, the direction of minimal rotation is a direction \emph{towards the point of degeneracy}. Let $\{v_1({\mathbf{r}}),v_2({\mathbf{r}})\}$ be a smooth choice of normalized eigenvectors around the point ${\mathbf{r}}_0$ (this is possible because ${\mathbf{r}}_0$ is not a point of eigenvalue multiplicity). Then we are looking for the direction in the parameter space in which the eigenvector $v_1$ as a function of ${\mathbf{r}}$ does not rotate in the plane spanned by $\{v_1({\mathbf{r}}_0),v_2({\mathbf{r}}_0)\}$ (it may still rotate ``out of the plane''). This condition can be written as \begin{equation}\label{eq:nonrotation_cond} \mathrm{D}_{{\mathbf{r}}_0} \big\langle v_1({\mathbf{r}}), v_2({\mathbf{r}}_0)\big\rangle \, ({\mathbf{r}}_1-{\mathbf{r}}_0) = 0. \end{equation} Conditions~\eqref{eq:tangent_cond} and \eqref{eq:nonrotation_cond} together generically\footnote{See Sections~\ref{sec:conical_points} and \ref{sec:proofs} for a precise formulation.} define a unique point ${\mathbf{r}}$ which can be taken as the next step in the iteration. 
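As a quick numerical illustration of this heuristic (a sketch only; Python with numpy assumed), one can track a continuous choice of the eigenvector of $\lambda_1$ along a loop around the conical point of the introductory example and observe that after one full loop it returns to minus itself:
\begin{verbatim}
import numpy as np

def A(x, y):
    # the simplest family with a conical point at the origin
    return np.array([[x, y], [y, -x]], dtype=float)

v_prev, v_first = None, None
for t in np.linspace(0.0, 2.0 * np.pi, 2001):   # loop of radius 1 around (0, 0)
    _, V = np.linalg.eigh(A(np.cos(t), np.sin(t)))
    v = V[:, 0]                                 # eigenvector of the lower eigenvalue
    if v_prev is not None and v @ v_prev < 0.0:
        v = -v                                  # keep the choice continuous along the loop
    if v_first is None:
        v_first = v
    v_prev = v

print("overlap after one loop:", v_first @ v_prev)   # approximately -1
\end{verbatim}
We now return to the unique point defined by conditions~\eqref{eq:tangent_cond} and \eqref{eq:nonrotation_cond}.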
We can solve for it explicitly using the well-known perturbation formulas \cite{BorFoc,Kato_perturbation}, \begin{gather} \label{eq:pert_value} \mathrm{D}_{{\mathbf{r}}_0} \lambda_1 = \mathrm{D}_{{\mathbf{r}}_0} A^{{\mathbf{r}}_0}_{11}, \qquad \mathrm{D}_{{\mathbf{r}}_0} \lambda_2 = \mathrm{D}_{{\mathbf{r}}_0} A^{{\mathbf{r}}_0}_{22},\\ \label{eq:pert_vec} \mathrm{D}_{{\mathbf{r}}_0} \big\langle v_1({\mathbf{r}}), v_2({\mathbf{r}}_0)\big\rangle = \frac{\mathrm{D}_{{\mathbf{r}}_0} A^{{\mathbf{r}}_0}_{12}}{\lambda_1 - \lambda_2}, \end{gather} where \begin{equation} \label{eq:Arij} A^{{\mathbf{r}}_0}_{ij} = A^{{\mathbf{r}}_0}_{ij}({\mathbf{r}}) = \big\langle v_i({\mathbf{r}}_0), A^{{\mathbf{r}}_0}({\mathbf{r}})v_j({\mathbf{r}}_0) \big\rangle. \end{equation} We stress that in equation~\eqref{eq:Arij} the eigenvectors $v_1,v_2$ are evaluated at the point ${\mathbf{r}}_0$ and do not depend on ${\mathbf{r}}$. The tangent planes condition~\eqref{eq:tangent_cond} and the non-rotation condition~\eqref{eq:nonrotation_cond} can now be written succinctly as \begin{equation} \label{eq:two_cond_pert} \left[\mathrm{D}_{{\mathbf{r}}_0} \begin{pmatrix} A^{{\mathbf{r}}_0}_{11}-A^{{\mathbf{r}}_0}_{22} \\ 2 A^{{\mathbf{r}}_0}_{12} \end{pmatrix} \right] ({\mathbf{r}}_1-{\mathbf{r}}_0) = \left[\mathrm{D}_{{\mathbf{r}}_0} F\big(A^{{\mathbf{r}}_0}({\mathbf{r}})\big) \right] ({\mathbf{r}}_1-{\mathbf{r}}_0) = \begin{pmatrix} \lambda_2-\lambda_1 \\ 0 \end{pmatrix}, \end{equation} or, less succinctly, as \begin{equation*} \begin{pmatrix} \big\langle v_1, \spdev{x}{A}v_1\big\rangle -\big\langle v_2, \spdev{x}{A}v_2\big\rangle & \big\langle v_1, \spdev{y}{A}v_1\big\rangle -\big\langle v_2, \spdev{y}{A}v_2\big\rangle \\ 2\big\langle v_1, \spdev{x}{A}v_2\big\rangle & 2\big\langle v_1, \spdev{y}{A}v_2\big\rangle \end{pmatrix} ({\mathbf{r}}_1-{\mathbf{r}}_0) = \begin{pmatrix} \lambda_2-\lambda_1 \\ 0 \end{pmatrix}, \end{equation*} which immediately leads to \eqref{iter_equation_simple}. Berry phase also lies at the heart of another set of works devoted to locating points of eigenvalue multiplicity. Pugliese, Dieci and co-authors \cite{Pug_phd08,DiePug_mcs08,DiePug_siamjmaa09,DiePug_laa12,DiePapPug_siamjmaa13} developed a procedure which uses Berry phase to grid-search available space and identify regions with conical points. For the final convergence they used the standard Newton--Raphson method to locate the critical point of $(\lambda_2-\lambda_1)^2$. The convergence rate of this final step is quadratic, as in Theorem~\ref{thm:main}; we refer to Section~\ref{sec:mergingDirac} for a comparison of actual convergence in an example. In terms of ease of application, coding equation~\eqref{iter_equation_simple} is straightforward and lack of convergence of the method also carries information (see Section~\ref{sec:avoided} and \ref{sec:mergingDirac}). To perform a thorough search of all available space and to locate all conical points, it is preferable to use the methods of \cite{Pug_phd08,DiePug_laa12,DiePapPug_siamjmaa13}. \subsection{Relation to Newton-Raphson method} Recalling the definition of $\widetilde{A}^{{\mathbf{r}}_0}$ and in particular equation~\eqref{eq:tildeA_diag}, we have \begin{equation*} \begin{pmatrix} \lambda_2-\lambda_1 \\ 0 \end{pmatrix} = -F\big(A^{{\mathbf{r}}_0}({\mathbf{r}}_0)\big). 
\end{equation*} This allows us to rewrite equation ~\eqref{eq:two_cond_pert} as \begin{equation*} \left[ \mathrm{D}_{{\mathbf{r}}_0} F\big(A^{{\mathbf{r}}_0}({\mathbf{r}})\big) \right] \,({\mathbf{r}}_1-{\mathbf{r}}_0) = - F\big(A^{{\mathbf{r}}_0}({\mathbf{r}}_0)\big), \end{equation*} which is the same as a single step of Newton--Raphson iteration applied to $F(\widetilde{A}^{{\mathbf{r}}_0})$. In other words, ${\mathbf{r}}_1 = (x_1, y_1)$ is chosen to be a solution to \begin{equation} \label{first_order_expansion} \widetilde{A}^{{\mathbf{r}}_0}({\mathbf{r}}_0) + (x_1-x_0)\spdev{x}{\widetilde{A}^{{\mathbf{r}}_0}}({\mathbf{r}}_0) + (y_1-y_0)\spdev{y}{\widetilde{A}^{{\mathbf{r}}_0}}({\mathbf{r}}_0) = \lambda I_2 \end{equation} for some $\lambda\in \mathbb{R}$. Equivalently, ${\mathbf{r}}_1$ is the point where the linear approximation to $\widetilde{A}^{{\mathbf{r}}_0}({\mathbf{r}})$ has a double eigenvalue. To understand the difference of our algorithm from a seemingly conventional Newton--Raphson method, we need to revisit the computation of $\widetilde{A}$. It can be viewed as first expressing $A({\mathbf{r}})$ in the eigenvector basis computed \emph{at the point} ${\mathbf{r}}_0$ and then extracting the $\{1,2\}$-sub-block of the resulting matrix. In this notation, the problem of finding the degeneracy is equivalent to finding a point ${\mathbf{r}}'$ such that \begin{equation} \label{eq:wewant} \widetilde{A}^{{\mathbf{r}}'}({\mathbf{r}}') = \lambda I_2, \qquad \mbox{for some }\lambda\in\mathbb{R}. \end{equation} In contrast, solving equation~\eqref{first_order_expansion} is a first step in finding a point ${\mathbf{r}}'$ such that \begin{equation} \label{eq:wesolve} \widetilde{A}^{{\mathbf{r}}_0}({\mathbf{r}}') = \lambda I_2, \qquad \mbox{for some }\lambda\in\mathbb{R}. \end{equation} Going all the way to find the solution ${\mathbf{r}}'$ to equation \eqref{eq:wesolve} is pointless; this is not the equation we need to solve. Instead, we go one step, computing the first Newton--Raphson approximation ${\mathbf{r}}_1$, and then update our target equation to \begin{equation*} \widetilde{A}^{{\mathbf{r}}_1}({\mathbf{r}}') = \lambda I_2, \qquad \mbox{for some }\lambda\in\mathbb{R}, \end{equation*} compute the first Newton--Raphson approximation ${\mathbf{r}}_2$ to \emph{that} equation and so on. \subsection{Complex Hermitian matrices} \label{sec:complex} Let us now consider a complex Hermitian matrix-valued function $A\in C^2(\mathbb{R}^3, \mathbb{C}^{n\times n})$. To find a point of eigenvalue multiplicity, we typically need three real parameters (the off diagonal terms can be complex, and that introduces an additional degree of freedom), which we still denote by ${\mathbf{r}} = (x,y,z)$. The conditions can now be written as \begin{equation} \label{eq:three_cond_pert} \left[\mathrm{D}_{{\mathbf{r}}_0} G\big(A^{{\mathbf{r}}_0}({\mathbf{r}})\big) \right] ({\mathbf{r}}_1-{\mathbf{r}}_0) = \begin{pmatrix} \lambda_2-\lambda_1 \\ 0 \\ 0 \end{pmatrix}, \end{equation} where \begin{equation} \label{eq:objective_complex} G\big(A^{{\mathbf{r}}_0}\big) = \begin{pmatrix} A^{{\mathbf{r}}_0}_{11}-A^{{\mathbf{r}}_0}_{22} \\ 2 A^{{\mathbf{r}}_0}_{12} \\ 2 A^{{\mathbf{r}}_0}_{21} \end{pmatrix}. \end{equation} One can equivalently use the objective function \begin{equation} \label{eq:objective_complex_ri} G\big(A^{{\mathbf{r}}_0}\big) = \begin{pmatrix} A^{{\mathbf{r}}_0}_{11}-A^{{\mathbf{r}}_0}_{22} \\ 2 \Real(A^{{\mathbf{r}}_0}_{12}) \\ 2 \Imag(A^{{\mathbf{r}}_0}_{21}) \end{pmatrix}. 
\end{equation}
\section{Conical Intersection} \label{sec:conical_points}
Let $\alpha$ be a point in the parameter space such that $A(\alpha)$ has a double eigenvalue $\lambda_1=\lambda_2$. The existence of eigenvalue multiplicity precludes a smooth diagonalization in a region containing the degeneracy. However, a smooth block diagonalization exists. The standard construction (see, for example, \cite[II.4.2 and Remark 4.4 therein]{Kato_perturbation}) uses the Riesz projector. We can choose a closed contour $\gamma: [0,1]\mapsto \mathbb{C}$ with $\gamma(0) = \gamma(1)$, traversed counterclockwise, enclosing $\lambda_1$, $\lambda_2$ and no other point in the spectrum of $A(\alpha)$. This property of $\gamma$ must persist for $A({\mathbf{r}})$ when ${\mathbf{r}}$ is in a small neighborhood of $\alpha$. The Riesz projector \begin{equation} \label{eq:Riesz_def} P({\mathbf{r}}) = -\frac{1}{2\pi i}\int_{\gamma} (A({\mathbf{r}})-\lambda I_n)^{-1}\, d\lambda \end{equation} projects onto the space spanned by the eigenvectors of $\lambda_1({\mathbf{r}})$ and $\lambda_2({\mathbf{r}})$ \cite{HisSig}. The projector itself is smooth, as the points on the contour are all in the resolvent set of $A$ (and so $A-\lambda I_n$ has a bounded inverse for all $\lambda\in \gamma$). Starting with an arbitrary eigenvector basis $\{v_1,v_2\}$ at $\alpha$, we can obtain a basis at a nearby ${\mathbf{r}}$ by applying the Gram--Schmidt procedure to the set $\left\{P({\mathbf{r}}) v_1, P({\mathbf{r}}) v_2\right\}$, which preserves smoothness. We can do the same with the orthogonal complement $I-P({\mathbf{r}})$ and a complementary basis to $\{v_1,v_2\}$. To summarize, for some region $\Omega\subset \mathbb{R}^2$ with $\alpha\in \Omega$, we find a change of basis $M(\cdot)\in C^2(\Omega, \mathbb{R}^{n\times n})$ such that \begin{equation} \label{block_diagonalization} M({\mathbf{r}})^*A({\mathbf{r}})M({\mathbf{r}}) = B({\mathbf{r}})\oplus\Lambda({\mathbf{r}}), \end{equation} where $B\in C^2(\Omega, \mathbb{R}^{2\times 2})$ and $\Lambda\in C^2(\Omega, \mathbb{R}^{(n-2)\times (n-2)})$. We can further diagonalize both $B$ and $A$ at any point ${\mathbf{r}}_0$ to obtain \begin{equation} \label{diag_block_diagonalization} \Gamma({\mathbf{r}})^*A^{{\mathbf{r}}_0}({\mathbf{r}})\Gamma({\mathbf{r}}) = B^{{\mathbf{r}}_0}({\mathbf{r}})\oplus\Lambda({\mathbf{r}}), \end{equation} where $\Gamma({\mathbf{r}}) = V^TM({\mathbf{r}})(W\oplus I_{n-2}) \in C^2(\Omega, \mathbb{R}^{n\times n})$, and both \begin{equation*} A^{{\mathbf{r}}_0}(\cdot) := V^TA(\cdot)V \qquad\mbox{and}\qquad B^{{\mathbf{r}}_0}(\cdot) := W^TB(\cdot)W \end{equation*} are diagonal at ${\mathbf{r}}_0$. A stronger result of Hsieh and Sibuya \cite{HsiSib} and of Gingold \cite{Gin} states that such block-diagonalization exists even for matrices that are not necessarily Hermitian, and for any closed rectangular region that contains an isolated degeneracy. Note that since $B$ is a $2\times 2$ matrix which has an eigenvalue multiplicity at the point $\alpha$, $B(\alpha)$ is a multiple of the identity. The eigenvalue multiplicity is detected by the \emph{discriminant} of $B$ which in the $2\times2$ case is defined as \begin{equation} \label{eq:discr_B} \disc(B) := (\lambda_1-\lambda_2)^2 = (B_{11}-B_{22})^2+4B_{12}^2. \end{equation} The discriminant achieves its minimum value 0 at the point $\alpha$. It is also a $C^2$ function of ${\mathbf{r}}$ and its Hessian is well-defined.
\begin{definition}\label{def_conical_degeneracy} A point of eigenvalue multiplicity $\alpha$ is \emph{a non-degenerate conical point} if $\disc(B({\mathbf{r}}))$ has a non-degenerate critical point at ${\mathbf{r}}=\alpha$. \end{definition} In other words, there is a positive definite matrix $H$ (the ``Hessian'') such that \begin{equation*} \disc(B({\mathbf{r}})) = \big\langle ({\mathbf{r}}-\alpha), H({\mathbf{r}}-\alpha)\big\rangle + o\!\left(|{\mathbf{r}}-\alpha|^2\right), \end{equation*} and therefore, along any ray originating at $\alpha$, the eigenvalues are separating at a non-zero linear rate. This picture justifies the use of the term ``conical''. Unfortunately, while existence of $B({\mathbf{r}})$ is assured, it is not easily accessible. The following theorem provides a more practical method of checking if $\alpha$ is non-degenerate. \begin{theorem} \label{thm:Hess_disc} The Hessian of $\disc(B)$ at $\alpha$ is given by \begin{equation} \label{eq:Hess_disc} \Hess_\alpha (\disc(B)) = 2 J_{\alpha}(B)^T J_{\alpha}(B) = 2 J_\alpha(A^\alpha)^T J_\alpha(A^\alpha). \end{equation} Consequently, $\alpha$ is a non-degenerate conical point if and only if $\det J_\alpha(A^\alpha) \neq 0$. \end{theorem} The condition $\det J_\alpha(A^\alpha) \neq 0$ has a nice geometric meaning: it is precisely the condition that the manifold $\widetilde{A}^\alpha$ of $2\times2$ real symmetric matrices is transversal to the line of $2\times2$ symmetric matrices with repeated eigenvalues (cf.\ \cite[Def.\ 1]{ONe_siamjmaa05}). The choice of basis in the definition of $\widetilde{A}^\alpha$ is assumed to align with the choice of basis used to compute $B({\mathbf{r}})$, i.e.\ the first two columns of $M(\alpha)$ are the eigenvectors used to compute $\widetilde{A}^\alpha$. This choice does not affect the definition of the non-degenerate point because of the following lemma. \begin{lemma} \label{lem:basis_change_FJ} Let $A\in C^2(\mathbb{R}^2, \mathbb{R}^{2\times 2})$ be a $2\times2$ matrix-valued function of ${\mathbf{r}}\in \mathbb{R}^2$. Then for any orthogonal matrix $U\in\mathbb{R}^{2\times 2}$ there is an orthogonal matrix $W\in\mathbb{R}^{2\times 2}$ such that for all ${\mathbf{r}}$ we have \begin{equation} \label{eq:conj_FJ} F(U^TAU) = WF(A), \qquad J_{\mathbf{r}}(U^TAU) = W J_{\mathbf{r}}(A), \end{equation} and therefore \begin{equation} \left|\det(J_{\mathbf{r}}(A))\right| = \left|\det(J_{\mathbf{r}}(U^TAU))\right|. \end{equation} \end{lemma} \begin{proof} This identity for $2\times2$ matrix-functions can be checked by direct computation but the details are excessively tedious. Instead we use a more generalizable approach. We fix an orthogonal $U$ and let $\mathcal{S}^2$ denote the linear space of $2\times2$ real symmetric matrices. The map $F$, see equation~\eqref{eq:target_function_def}, acts as a linear transformation from $\mathcal{S}^2$ to $\mathbb{R}^2$. It is obviously onto and has the kernel $\Ker(F)$ consisting of multiples of the identity. On the other hand, conjugation by $U$ (namely the map $A \mapsto U^TAU$) is a linear transformation of $\mathcal{S}^2$ to itself. It maps multiples of the identity to themselves and therefore induces a linear transformation from the quotient space $\mathcal{S}^2 / \Ker(F)$ to itself. This linear transformation, via the isomorphism $F$ between $\mathcal{S}^2 / \Ker(F)$ and $\mathbb{R}^2$, induces a linear transformation on $\mathbb{R}^2$ mapping $F(A)$ to $F(U^TAU)$. 
We summarize the above in the commutative diagram \begin{equation*} \begin{CD} \mathcal{S}^2 @>{A \mapsto U^TAU}>> \mathcal{S}^2 \\ @V{F}VV @V{F}VV\\ \mathbb{R}^2 @>W>> \mathbb{R}^2 \end{CD} \end{equation*} In other words, for a given orthogonal $U$, there exists a constant $2\times2$ matrix $W$ such that \begin{equation*} F(U^TAU) = WF(A). \end{equation*} From the identity (see \eqref{eq:discr_B} for the definition of discriminant) \begin{equation*} \left|F(A)\right|^2 = \disc(A) = \disc(U^TAU) = \left|F(U^TAU)\right|^2 \end{equation*} we conclude that $W$ is orthogonal. Finally, taking derivatives we get \begin{equation*} J(U^TAU) = W J(A), \quad\implies\quad \det(J(U^TAU)) = \det(WJ(A)) = \pm\det(J(A)), \end{equation*} since determinant of an orthogonal matrix is either $1$ or $-1$. \end{proof} The following identity will be helpful in the proof of Theorem~\ref{thm:Hess_disc} and also in Section~\ref{sec:proofs}. \begin{lemma} \label{lem:jacobian_difference} For any $A^{{\mathbf{r}}_0}$ and $B^{{\mathbf{r}}_0}$ as in equation~\eqref{diag_block_diagonalization}, \begin{equation} \label{jacobian_difference} J_{{\mathbf{r}}_0}(B^{{\mathbf{r}}_0}) = J_{{\mathbf{r}}_0}(A^{{\mathbf{r}}_0}) + 2(\lambda_2-\lambda_1) \begin{pmatrix} 0& 0\\ \big\langle \spdev{x}{\gamma_1}, \gamma_2\big\rangle & \big\langle \spdev{y}{\gamma_1}, \gamma_2\big\rangle \end{pmatrix}, \end{equation} where $\gamma_{1,2}=\gamma_{1,2}({\mathbf{r}}_0)$ are the first two columns of the matrix $\Gamma({\mathbf{r}}_0)$. \end{lemma} \begin{proof} We remark that identity~\eqref{jacobian_difference} is only claimed for the Jacobian evaluated at the point where both $A^{{\mathbf{r}}_0}$ and $B^{{\mathbf{r}}_0}$ are diagonal, therefore $A^{{\mathbf{r}}_0}\gamma_j({\mathbf{r}}_0) = \lambda_j({\mathbf{r}}_0)\gamma_j({\mathbf{r}}_0)$. For all ${\mathbf{r}}$, $\gamma_j({\mathbf{r}})$ are orthonormal and differentiating $\langle \gamma_i, \gamma_j \rangle = \mathrm{const}$ we get \begin{equation} \label{eq:commutation_gamma} \left\langle\spdev{x}{\gamma_i}, \gamma_j\right\rangle = -\left\langle\gamma_i, \spdev{x}{\gamma_j}\right\rangle. \end{equation} We can now relate the derivatives of $A^{{\mathbf{r}}_0}$ to the derivatives of $B^{{\mathbf{r}}_0}$, \begin{align*} \pdev{x}{B^{{\mathbf{r}}_0}_{ij}} &=\frac{\partial}{\partial x} \big\langle \gamma_j, A^{{\mathbf{r}}_0}\gamma_i\big\rangle\\ &=\left\langle \gamma_i, \spdev{x}{A^{{\mathbf{r}}_0}} \gamma_j\right\rangle +\left\langle\spdev{x}{\gamma_i}, A^{{\mathbf{r}}_0}\gamma_j\right\rangle +\left\langle \gamma_i, A^{{\mathbf{r}}_0}\spdev{x}{\gamma_j}\right\rangle\\ &=\spdev{x}{A^{{\mathbf{r}}_0}_{ij}} + \lambda_j \left\langle\spdev{x}{\gamma_i}, \gamma_j\right\rangle + \lambda_i \left\langle\gamma_i, \spdev{x}{\gamma_j}\right\rangle\\ &=\spdev{x}{A^{{\mathbf{r}}_0}_{ij}} + (\lambda_j- \lambda_i) \left\langle\spdev{x}{\gamma_i}, \gamma_j\right\rangle, \qquad i,j \in \{1,2\}. \end{align*} The calculation is identical for $y$ derivatives. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:Hess_disc}] We write \begin{equation*} \disc(B) = (B_{11}-B_{22})^2+4B_{12}^2 = \left\langle F(B), F(B) \right\rangle, \end{equation*} and note that $F(B(\alpha)) = \mathbf{0}$. 
The latter observation implies that the product rule for the second derivatives at the point $\alpha$ collapses to \begin{equation*} \frac{\partial^2}{\partial x_i \partial x_j} \left\langle F(B), F(B) \right\rangle = 2 \left\langle \frac{\partial F(B)}{\partial x_i}, \frac{\partial F(B)}{\partial x_j} \right\rangle, \qquad x_i,x_j \in \{x,y\}. \end{equation*} Therefore the Hessian can be written as \begin{equation*} \Hess_\alpha \langle F(B), F(B) \rangle = 2 \begin{pmatrix} \frac{\partial F(B)^T}{\partial x}\\[5pt] \frac{\partial F(B)^T}{\partial y} \end{pmatrix} \begin{bmatrix} \frac{\partial F(B)}{\partial x} &\frac{\partial F(B)}{\partial y} \end{bmatrix} = 2J_{\alpha}(B)^TJ_{\alpha}(B). \end{equation*} Finally, setting ${\mathbf{r}}_0=\alpha$ in Lemma~\ref{lem:jacobian_difference} yields \begin{equation} \label{eq:deriv_equal} J_{\alpha}(B) = J_{\alpha}(A^\alpha), \end{equation} and concludes the proof of \eqref{eq:Hess_disc}. \end{proof} \section{Proof of the main result} \label{sec:proofs} Here we restate the procedure used to locate the degeneracy in the notation that has been introduced. \begin{theorem} \label{main_thm_proofs} Let $\sigma\colon C^2(\mathbb{R}^2, \mathbb{R}^{2\times 2}) \times \mathbb{R}^2\to \mathbb{R}^2$ be defined by \begin{equation} \label{eq:sigma_def} \sigma(S, {\mathbf{r}}) = {\mathbf{r}} - J_{{\mathbf{r}}}(S)^{-1}F_{\mathbf{r}}(S). \end{equation} Let $A\in C^2(\mathbb{R}^2, \mathbb{R}^{n\times n})$ have a non-degenerate conical point at $\alpha$ between eigenvalues $\lambda_1$ and $\lambda_2$. Then there exists an open $\Omega\subset \mathbb{R}^2$ with $\alpha\in \Omega$ and $\exists C\in \mathbb{R}$, such that for all ${\mathbf{r}}\in \Omega$, \begin{equation} \label{eq:sigma_est} |\sigma(\widetilde{A}^{{\mathbf{r}}}, {\mathbf{r}})-\alpha| < C|{\mathbf{r}}-\alpha|^2, \end{equation} where the $2\times2$ matrix-function $\widetilde{A}^{{\mathbf{r}}}(\cdot)\in C^2(\mathbb{R}^2, \mathbb{R}^{2\times 2})$ is defined by \begin{equation} \label{eq:Atilde_def} \widetilde{A}^{{\mathbf{r}}}(\cdot) = V^T A(\cdot) V, \end{equation} with the constant $n\times 2$ matrix $V = (v_1\ v_2)$ whose columns are the eigenvectors of $A({\mathbf{r}})$. \end{theorem} We remark that non-degeneracy of the conical point is a generic property: any degenerate conical point can be made non-degenerate by a small perturbation of the function $A$. We recall that the superscript in $\widetilde{A}^{{\mathbf{r}}}(\cdot)$ refers to the basis which is computed at the point ${\mathbf{r}}$ and in which the matrix $A(x,y)$ is represented. The derivatives of $\widetilde{A}^{{\mathbf{r}}}(\cdot)$ that are taken to compute $J_{\mathbf{r}}$ in \eqref{eq:sigma_def}, are also evaluated at the point ${\mathbf{r}}$. The result of evaluating $\sigma(\widetilde{A}^{{\mathbf{r}}},{\mathbf{r}})$ is explicitly written out in equation~\eqref{iter_equation_simple}. \begin{proof} Let $B$ be the matrix defined in equation (\ref{block_diagonalization}). We will see, in Lemmas \ref{sigma_B} and \ref{sigma_diff} below, that there is a neighborhood $\Omega\subset \mathbb{R}^2$ of the conical point $\alpha$, and constants $C_1, C_2 > 0$ such that for all ${\mathbf{r}}\in \Omega$ we have \begin{equation*} |\sigma(B, {\mathbf{r}})-\alpha|<C_1|{\mathbf{r}}-\alpha|^2 \end{equation*} and \begin{equation*} |\sigma(B,{\mathbf{r}})-\sigma(\widetilde{A}^{{\mathbf{r}}},{\mathbf{r}})|<C_2|{\mathbf{r}}-\alpha|^2. 
\end{equation*}
Together, these give us \begin{equation*} |\sigma(\widetilde{A}^{{\mathbf{r}}},{\mathbf{r}})-\alpha|<(C_1+C_2)|{\mathbf{r}}-\alpha|^2, \end{equation*} as desired. \end{proof}
Now we establish the lemmas used in the proof of Theorem~\ref{main_thm_proofs}.
\begin{lemma} \label{sigma_B} There exists $\Omega_1\subset \mathbb{R}^2$ with $\alpha\in \Omega_1$ and $C_1\in \mathbb{R}$ such that \begin{equation} \label{eq:newton_B} |\sigma(B,{\mathbf{r}})-\alpha|<C_1|{\mathbf{r}}-\alpha|^2, \end{equation} when ${\mathbf{r}}\in \Omega_1$. \end{lemma}
\begin{proof} This is the usual Newton--Raphson method applied to the conical point search for the $2\times2$ matrix $B$. For completeness we provide the proof. For the function $F({\mathbf{r}}) := F(B({\mathbf{r}}))$, we write the Taylor expansion around the point ${\mathbf{r}}_0$, evaluated at the point $\alpha$, \begin{equation*} \textbf{0} = F(\alpha) = F({\mathbf{r}}_0) + \mathrm{D}_{{\mathbf{r}}_0} F \cdot (\alpha-{\mathbf{r}}_0) + O(|\alpha-{\mathbf{r}}_0|^2), \end{equation*} where the constant in $O(|\alpha-{\mathbf{r}}_0|^2)$ is \emph{independent} of ${\mathbf{r}}_0$ as long as it is in a neighborhood $\widetilde{\Omega}_1$ of $\alpha$. The dot denotes the matrix-by-vector multiplication (to distinguish it from the argument of the function $F$). By assumption $\det(J_{\alpha}) \ne 0$, and, by smoothness, we know that $\mathrm{D}_{{\mathbf{r}}_0} F = J_{{\mathbf{r}}_0}$ is boundedly invertible in some region $\Omega_1 \subset \widetilde{\Omega}_1$ containing $\alpha$. Therefore, for the point ${\mathbf{r}}_1 = \sigma(B,{\mathbf{r}}_0)$, or equivalently, \begin{equation*} J_{{\mathbf{r}}_0} \cdot ({\mathbf{r}}_1-{\mathbf{r}}_0) = -F({\mathbf{r}}_0), \end{equation*} we have \begin{equation*} \textbf{0} = J_{{\mathbf{r}}_0}\cdot (\alpha-{\mathbf{r}}_1) + O(|\alpha-{\mathbf{r}}_0|^2), \end{equation*} with the estimate \eqref{eq:newton_B} following by inverting $J_{{\mathbf{r}}_0}$. \end{proof}
\begin{lemma} \label{invariance_of_iteration} For any $B\in C^2(\mathbb{R}^2, \mathbb{R}^{2\times 2})$ and any constant orthogonal $U$, we have \begin{equation} \label{eq:invariance_of_iteration} \sigma(B, {\mathbf{r}})=\sigma(U^TBU, {\mathbf{r}}). \end{equation} \end{lemma}
\begin{proof} Equation~\eqref{eq:invariance_of_iteration} follows directly from the definition of the one-step iteration function $\sigma$ and Lemma~\ref{lem:basis_change_FJ}. \end{proof}
\begin{lemma} \label{sigma_diff} There exists $\Omega_2\subset \mathbb{R}^2$ with $\alpha\in \Omega_2$ and $C_2\in \mathbb{R}$ such that \begin{equation} \label{eq:sigma_diff} |\sigma(B,{\mathbf{r}}) - \sigma(\widetilde{A}^{{\mathbf{r}}},{\mathbf{r}})| < C_2|{\mathbf{r}}-\alpha|^2, \end{equation} when ${\mathbf{r}}\in \Omega_2$. \end{lemma}
\begin{proof} By the assumption that $\alpha$ is a non-degenerate conical point and equation~\eqref{eq:Hess_disc}, we have that $J_{\mathbf{r}}(B)$ and therefore $J_{\mathbf{r}}(B^{{\mathbf{r}}})$ has a bounded inverse in a region around $\alpha$. By equation \eqref{jacobian_difference} we conclude that $J_{\mathbf{r}}(\widetilde{A}^{{\mathbf{r}}})$ also has a bounded inverse in some region $\Omega_2$ around $\alpha$ where $\lambda_1-\lambda_2$ is small.
We can express the difference of the inverses as \begin{align*} J_{\mathbf{r}}(B^{{\mathbf{r}}})^{-1}-J_{\mathbf{r}}(\widetilde{A}^{{\mathbf{r}}})^{-1} &= J_{\mathbf{r}}(B^{{\mathbf{r}}})^{-1} \left(J_{\mathbf{r}}(\widetilde{A}^{{\mathbf{r}}}) - J_{\mathbf{r}}(B^{{\mathbf{r}}})\right) J_{\mathbf{r}}(\widetilde{A}^{{\mathbf{r}}})^{-1} \\ &= (\lambda_1-\lambda_2) J_{\mathbf{r}}(B^{{\mathbf{r}}})^{-1} \begin{pmatrix} 0 & 0\\ \big\langle \spdev{x}{\gamma_1}, \gamma_2\big\rangle & \big\langle \spdev{y}{\gamma_1}, \gamma_2\big\rangle \end{pmatrix} J_{\mathbf{r}}(\widetilde{A}^{{\mathbf{r}}})^{-1}, \end{align*} and so, using boundedness of $\Gamma$ and its derivatives, we get \begin{equation*} \left\|J_{\mathbf{r}}(B^{{\mathbf{r}}})^{-1} - J_{\mathbf{r}}(\widetilde{A}^{{\mathbf{r}}})^{-1}\right\| = O(\lambda_1-\lambda_2). \end{equation*} We also recall that by definition of $A^{\mathbf{r}}$ and $B^{\mathbf{r}}$, \begin{equation*} F(B^{\mathbf{r}}) = F(\widetilde{A}^{{\mathbf{r}}}) = \begin{pmatrix} \lambda_1({\mathbf{r}}) - \lambda_2({\mathbf{r}}) \\ 0 \end{pmatrix}. \end{equation*} Finally, abbreviating $J=J_{{\mathbf{r}}}$, we estimate \begin{align*} \left|\sigma(B^{{\mathbf{r}}},{\mathbf{r}}) -\sigma(\widetilde{A}^{{\mathbf{r}}},{\mathbf{r}})\right| &= \left|J(B^{{\mathbf{r}}})^{-1}F(B^{{\mathbf{r}}}) -J(\widetilde{A}^{{\mathbf{r}}})^{-1}F(\widetilde{A}^{{\mathbf{r}}})\right|\\ &= \left|\left(J(B^{{\mathbf{r}}})^{-1} -J(\widetilde{A}^{{\mathbf{r}}})^{-1}\right)F(\widetilde{A}^{{\mathbf{r}}})\right|\\ &\le \left\|J(B^{{\mathbf{r}}})^{-1} -J(\widetilde{A}^{{\mathbf{r}}})^{-1}\right\|\, \left|F(\widetilde{A}^{{\mathbf{r}}})\right|\\ &= O\left((\lambda_2-\lambda_1)^2\right) = O\left(|{\mathbf{r}}-\alpha|^2\right). \end{align*} Equation~\eqref{eq:sigma_diff} now follows by applying Lemma~\ref{invariance_of_iteration} to get $\sigma(B^{{\mathbf{r}}},{\mathbf{r}}) = \sigma(B,{\mathbf{r}})$. \end{proof}
\section{Examples} \label{sec:examples}
\subsection{Elements of A are linear in parameters}
If a $2\times2$ family $A$ is linear in each parameter, we have $A = \Lambda I + xA_x + yA_y = \Lambda I + \alpha I+\beta \sigma_1 + \gamma \sigma_3$, where \begin{equation*} \sigma_1 = \begin{pmatrix}0&1 \\1&0\end{pmatrix} \qquad\mbox{and}\qquad \sigma_3 = \begin{pmatrix}1&0 \\0&-1\end{pmatrix}, \end{equation*} for some $\alpha$, $\beta$, $\gamma$ that depend on $x$, $y$, and $A$. The eigenvalues of this matrix are the values of $\lambda$ for which \[\det(A-\lambda I) = \det(\Lambda I + \alpha I+\beta \sigma_1 + \gamma \sigma_3-\lambda I) = 0,\] that is, $(\Lambda+\alpha-\lambda)^2 = \beta^2+\gamma^2$, giving \[\lambda = \Lambda + \alpha \pm \sqrt{\beta^2+\gamma^2},\] which is a cone in the new parameter space. In fact, a simple calculation shows that the degeneracy of the function $\hat{A}(\beta, \gamma) = \begin{pmatrix} \beta & \gamma \\ \gamma & -\beta\end{pmatrix}$, which has the same eigenvectors and shifted eigenvalues, can be located using a single step of iteration~\eqref{iter_equation_simple}.
\subsection{Non-linear examples}
Consider the following matrix-function example, \begin{equation} \label{eq:real_convergence} A(x,y) = \begin{pmatrix} 2\cos(x)&0&0&1\\ 0&0.5+\cos(y)&0&1\\ 0&0&1&1\\ 1&1&1&1 \end{pmatrix}. \end{equation} Since $A(x,y)$ differs from a diagonal matrix by a rank-two perturbation (the coupling through the last row and column), it can be shown that there is a double eigenvalue $1$ at the point given by \begin{equation*} 2\cos(x) = 0.5 + \cos(y) = 1, \end{equation*} or $x=y=\pi/3$: at this point the diagonal part equals $I_4$, and a rank-two perturbation of $I_4$ leaves $1$ as an eigenvalue of multiplicity at least $2$.
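For illustration, a self-contained sketch of this computation (Python with numpy assumed) is given below. At the degenerate point the spectrum is $\{1-\sqrt{3},\,1,\,1,\,1+\sqrt{3}\}$, so the touching pair consists of the second and third eigenvalues; as noted in the Notation subsection, the iteration applies verbatim to any consecutive pair.
\begin{verbatim}
import numpy as np

def A(x, y):
    M = np.diag([2.0 * np.cos(x), 0.5 + np.cos(y), 1.0, 1.0])
    M[3, :] = 1.0
    M[:, 3] = 1.0
    return M

def dAx(x, y):
    M = np.zeros((4, 4))
    M[0, 0] = -2.0 * np.sin(x)
    return M

def dAy(x, y):
    M = np.zeros((4, 4))
    M[1, 1] = -np.sin(y)
    return M

def newton_step(r, k=1):
    # One step of the main iteration for the consecutive pair with (0-based)
    # indices k and k+1; here the touching pair is the 2nd and 3rd eigenvalues.
    lam, V = np.linalg.eigh(A(*r))
    v1, v2 = V[:, k], V[:, k + 1]
    Ax, Ay = dAx(*r), dAy(*r)
    J = np.array([[v1 @ Ax @ v1 - v2 @ Ax @ v2, v1 @ Ay @ v1 - v2 @ Ay @ v2],
                  [2.0 * (v1 @ Ax @ v2),        2.0 * (v1 @ Ay @ v2)]])
    return r - np.linalg.solve(J, np.array([lam[k] - lam[k + 1], 0.0]))

r = np.array([np.pi / 3 + 0.3, np.pi / 3 - 0.2])  # starting point near the conical point
for i in range(6):
    r = newton_step(r)
    # distance to (pi/3, pi/3); typically saturates at machine precision in a few steps
    print(i, np.linalg.norm(r - np.pi / 3))
\end{verbatim}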
The results of running the algorithm of Theorem~\ref{thm:main} with random starting points in the rectangle $(\frac\pi3, \frac\pi3) \pm \frac12$ are shown in Figure~\ref{fig:real_convergence}. \begin{figure} \centering \subfloat[]{\label{fig:real_convergence} \includegraphics[width=0.4\textwidth]{./images/2_params-crop.pdf}} \subfloat[]{\label{fig:complex_convergence} \includegraphics[width=0.4\textwidth]{./images/3_params-crop.pdf}}\\ \caption{(A) Logarithm of the distance from the $i$-th iteration ${\mathbf{r}}_i$ to the conical point $(\frac\pi3,\frac\pi3)$ of $A(x,y)$ from equation (\ref{eq:real_convergence}), plotted as a function of $i$; the algorithm saturates at the limit of numerical precision in 3-5 steps. (B) Logarithm of $|{\mathbf{r}}_{i+1}-{\mathbf{r}}_{i}|$ where ${\mathbf{r}}_i$ is the $i$-th iteration of the algorithm applied to $A(x, y, z)$ given by equation (\ref{eq:complex_convergence}). Several independent runs are plotted, each beginning at a random point in $[-\pi, \pi]^3$.} \end{figure} The complex Hermitian case described in Section \ref{sec:complex} is demonstrated in Figure \ref{fig:complex_convergence}. The matrix \begin{equation} \label{eq:complex_convergence} A = \begin{pmatrix} 1&1&0&0&0&0&0&0&0&z\\ 1&3&e^{ix}&1&0&0&0&0&0&0\\ 0&e^{-ix}&2&1&0&0&0&0&0&0\\ 0&1&1&3&1&0&0&0&0&0\\ 0&0&0&1&3&1&1&0&0&0\\ 0&0&0&0&1&3&0&0&0&0\\ 0&0&0&0&1&0&3&1&1&0\\ 0&0&0&0&0&0&1&2&e^{iy}&0\\ 0&0&0&0&0&0&1&e^{-iy}&3&1\\ z&0&0&0&0&0&0&0&1&1 \end{pmatrix} \end{equation} corresponds to the discrete Laplacian of the graph shown in Figure~\ref{fig:graph_Laplacian} with dashed edges carrying a magnetic potential ($x$ and $y$, respectively). The parameter $z$ is introduced artificially, and the conical point found numerically has the value $z=0$. Since the location of the conical point is not known analytically, the error is estimated using the norms of the updates $\left\|{\mathbf{r}}_{i}-{\mathbf{r}}_{i+1}\right\|$ instead of $\left\|{\mathbf{r}}_{i}-\alpha\right\|$. The result of several runs of the algorithm is shown in Figure~\ref{fig:complex_convergence}. \begin{figure} \centering \begin{tikzpicture} \draw[fill=black] (0,0) circle (3pt); \draw[fill=black] (-2,0) circle (3pt); \draw[fill=black] (2,0) circle (3pt); \draw[fill=black] (0,2) circle (3pt); \draw[fill=black] (4,0) circle (3pt); \draw[fill=black] (4,2) circle (3pt); \draw[fill=black] (6,0) circle (3pt); \draw[fill=black] (8,2) circle (3pt); \draw[fill=black] (8,0) circle (3pt); \draw[fill=black] (10,0) circle (3pt); \node at (0,-0.5) {2}; \node at (-2,-0.5) {1}; \node at (2,-0.5) {4}; \node at (0,2.5) {3}; \node at (4,-0.5) {5}; \node at (4,2.5) {6}; \node at (6,-0.5) {7}; \node at (8,2.5) {8}; \node at (8,-0.5) {9}; \node at (10,-0.5) {10}; \draw[thick] (-2,0) -- (0,0) -- (2,0) -- (0,2); \draw[thick] (2,0) -- (4,0) -- (4,2); \draw[thick] (4,0) -- (6,0) -- (8,2); \draw[thick] (6,0) -- (8,0) -- (10,0); \draw[dashed,thick] (0,0) -- (0,2); \draw[dashed,thick] (8,2) -- (8,0); \end{tikzpicture} \caption{Graph corresponding to equation \eqref{eq:complex_convergence}.} \label{fig:graph_Laplacian} \end{figure} \subsection{Avoided crossing} \label{sec:avoided} While a non-degenerate conical point is stable under small perturbations of the real symmetric matrix-function $A(x,y)$, the eigenvalue multiplicity may be lifted by the addition of a small complex perturbation. It is instructive to investigate the results of running our algorithm in this case.
\begin{figure} \centering \includegraphics[width=0.4\textwidth]{images/f_avoided0.pdf} \quad \includegraphics[width=0.4\textwidth]{images/f_avoided4.pdf} \caption{Logarithm of distance to $(0,0)$ as a function of the iteration step for several runs of the algorithm for $A(x,y)$ given by equation~\eqref{eq:avoided} with $\epsilon=0$ (left) and with $\epsilon=10^{-4}$ (right), i.e.\ an avoided crossing. Note the difference in vertical scales. Runs are initialized with random points on the circle of radius $1/2$ around $(0,0)$.} \label{fig:avoided} \end{figure} Consider the matrix-function \begin{equation} \label{eq:avoided} A = \begin{pmatrix} x+3\sin(y) & y+\epsilon i\\ y-\epsilon i & -x-x^2 \end{pmatrix}. \end{equation} It has a conical point at $(0,0)$ when $\epsilon=0$ and no eigenvalue multiplicities when $\epsilon\neq 0$. We plot in Figure~\ref{fig:avoided} the results of several runs with $\epsilon=0$ (left) and with $\epsilon = 10^{-4}$ (right). For $\epsilon=0$ the algorithm converges quadratically, as in the previous examples. For $\epsilon\neq0$, the algorithm initially approaches the position of the former conical point, but gets repelled, resulting in oscillations. Conversely, such oscillations (within the limits of numerical precision) should be considered a tell-tale sign of eigenvalue surfaces nearly but not exactly touching. We remark that for $\epsilon \neq 0$, the squared eigenvalue difference $(\lambda_1 - \lambda_2)^2$ has a minimal value of order $\epsilon^2$. If one uses minimization of $(\lambda_1 - \lambda_2)^2$ to find the multiplicity location, it would be difficult to tell apart genuine points of multiplicity from avoided crossings. This observation is investigated further in the next example. \subsection{Merging Dirac points} \label{sec:mergingDirac} In the condensed matter physics literature, the conical points in the dispersion relation of a periodic structure are known as the ``Dirac points'', because the effective equation of the wave propagation at the corresponding energy is of Dirac type (see \cite{FefWei_cmp14} for a mathematical formulation of this physics result). When the material parameters change, the Dirac point may undergo a fold bifurcation, where two points collide and annihilate. The physical consequences of this collision were studied, for example, in \cite{Mon+_prb09}; an experimental observation in a tunable honeycomb lattice was reported in \cite{Tar+_n12}. In this section we use the basic model from \cite{Mon+_prb09}, \begin{equation} \label{eq:Agraphene} A(x,y) := \begin{pmatrix} 0 & -1 - \frac12 e^{ix} - p e^{iy} \\ -1 - \frac12 e^{-ix} - p e^{-iy} & 0 \end{pmatrix}, \end{equation} where the bifurcation occurs at $p=\frac12$: for $p>\frac12$ there are two Dirac points and for $p<\frac12$ there are none, see Fig.~\ref{fig:mergingDirac_surf}. Despite $A(x,y)$ being a complex matrix, the problem of locating Dirac points in this setting is analogous to the real symmetric case due to the presence of the inversion symmetry $\overline{A(-x, -y)} = A(x,y)$. The correct target function $F$ (cf.\ \eqref{eq:target_function_def} and \eqref{eq:objective_complex_ri}) is \begin{equation*} F(A^{\mathbf{p}}) := \begin{pmatrix} A^{{\mathbf{p}}}_{11}-A^{{\mathbf{p}}}_{22}\\ 2\Imag(A^{{\mathbf{p}}}_{12}) \end{pmatrix}.
\end{equation*} \begin{figure} \centering \includegraphics[scale=0.5]{images/f_mergingDirac_surf.pdf} \caption{Two Dirac points (left), colliding (center) and disappearing (right), in the dispersion relation of \eqref{eq:Agraphene} with the parameter $p$ equal to $0.6$, $0.5$ and $0.45$, respectively.} \label{fig:mergingDirac_surf} \end{figure} \begin{figure} \centering \includegraphics[scale=0.3]{images/f_mergingDirac_conv06-crop.pdf} \includegraphics[scale=0.3]{images/f_mergingDirac_conv05-crop.pdf} \includegraphics[scale=0.3]{images/f_mergingDirac_conv045-crop.pdf} \caption{Convergence of iterations for matrix family \eqref{eq:Agraphene}: applying Theorem~\ref{thm:main} (blue, solid) versus quasi-Newton minimization of $(\lambda_1-\lambda_2)^2$ (red, dashed). The parameter $p$ is $0.6$ (left), $0.5$ (center) and $0.45$ (right). Two starting points are used in each figure, $(0.8\pi,0.8\pi)$ (empty circles) and $(0.8\pi, 1.2\pi)$ (stars).} \label{fig:f_mergingDirac} \end{figure} In Figure~\ref{fig:f_mergingDirac} we present a comparison between the convergence of iterations of Theorem~\ref{thm:main} and a standard quasi-Newton search for the minimum of $g(x,y) = (\lambda_1 - \lambda_2)^2$. Figure~\ref{fig:f_mergingDirac}(left) is for $p=0.6$, where the convergence of both methods is quadratic, although Theorem~\ref{thm:main} is faster. Figure~\ref{fig:f_mergingDirac}(center) is for $p=0.5$, where the multiplicity point is \emph{degenerate}. While Theorem~\ref{thm:main} is no longer applicable, the iteration still converges when the matrix pseudoinverse is used in \eqref{iter_equation_simple}. The speed of iteration is highly dependent on the direction, presumably because the cross-section of the eigenvalue surface is parabolic in one direction and conical in the other. Again, the algorithm of Theorem~\ref{thm:main} converges faster, while quasi-Newton iteration fails altogether for the second initial point. Finally, in Figure~\ref{fig:f_mergingDirac}(right), the $Y$-axis shows the logarithm of the last step taken, since the distance to the conical point is undefined: there is no conical point. While the quasi-Newton iteration converges, correctly, to the minimum of $(\lambda_1-\lambda_2)^2$ located at $(\pi,\pi)$, the algorithm of Theorem~\ref{thm:main} does not converge, indicating the absence of a conical point in that area. To interpret the results, recall that a quasi-Newton minimization searches for the zero of the gradient of $g$ using a numerical approximation of the Hessian of $g$. But according to Theorem~\ref{thm:Hess_disc}, the matrix appearing in equation~\eqref{iter_equation_simple} is \emph{equal} to the leading term of the Hessian (or its square root) around the conical point. It is therefore natural to expect faster convergence. To give an analogy, consider finding the root of $f(x)=x^2-a$ via the Newton--Raphson scheme (thus computing $f'$ as done in Theorem~\ref{thm:main}) or via minimization of $g(x)=f^2(x)$ (thus computing $g''$ in the course of finding the root of $g'$). Of course, close to the root, $g'' \approx 2(f')^2$, so the two schemes give equivalent rates of convergence, but having an analytical expression for $f'$ naturally produces better results than performing a numerical approximation of $g''$.
The algorithm of Theorem~\ref{thm:main} would thus be beneficial in any situation where computing two eigenvectors is not significantly more expensive than sampling the eigenvalues several times.\footnote{In the quasi-Newton experiment above, the eigenvalues were computed 5 times per iteration step in order to estimate the Hessian.} One example of such circumstances is given by differential operators on metric graphs \cite{Ber_gcst17}, where the eigenvalues are found by solving the ``secular equation'' of the form $\det\left(I-S(\sqrt{\lambda})\right)=0$, and, once an eigenvalue is identified, the corresponding eigenvector of $S(\sqrt{\lambda})$ gives the (Fourier coefficients of the) eigenvector on the graph. The latter operation is inexpensive relative to repeated evaluation of the determinant necessary for locating the root $\lambda$. \subsection{Locating points of higher multiplicity} We can apply a modification of the method to search for points of higher multiplicity in a family of matrices with a sufficient number of parameters. For example, for locating a triple eigenvalue of a 5-parameter family $A$ we use \begin{equation} \label{eq:F_triple} F(A^{\mathbf{p}}) = \begin{pmatrix} A^{\mathbf{p}}_{11} - A^{\mathbf{p}}_{22} \\ A^{\mathbf{p}}_{22} - A^{\mathbf{p}}_{33} \\ 2 A^{\mathbf{p}}_{12} \\ 2 A^{\mathbf{p}}_{13} \\ 2 A^{\mathbf{p}}_{23} \end{pmatrix}, \end{equation} where $A^{\mathbf{p}}$ is the function $A(\cdot)$ expressed in the eigenbasis calculated at point ${\mathbf{p}}$; the first three eigenvectors are assumed to correspond to the consecutive eigenvalues whose point of coalescence we are seeking. As before, $J_{{\mathbf{r}}}(A^{{\mathbf{p}}}) = \mathrm{D}_{\mathbf{r}} F(A^{{\mathbf{p}}})$, and a point $\alpha$ of triple multiplicity is \emph{non-degenerate} if $\det J_\alpha(A^\alpha) \neq 0$. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{images/5_params-crop.pdf} \caption{Logarithm of distance to $\mathbf{0}$ as a function of the iteration step for several runs of the algorithm for $A(x,y,z,v,w)$ given by equation~\eqref{eq:5_params}. Several independent runs are plotted, each beginning at a random point in $[-0.2, 0.2]^5$.} \label{fig:triple} \end{figure} To demonstrate the performance of our method in locating a triple multiplicity, we consider the function \begin{equation} \label{eq:5_params} A = \begin{pmatrix} 1+v+w+x-3y-z & 2x+y+2z & x+xz+y \\ 2x+y+2z & 1+x+yz & 2v-w+z \\ x+xz+y & 2v-w+z & 1+vw \end{pmatrix} \end{equation} with a triple eigenvalue at $(0,0,0,0,0)$. The results of several runs are shown in Figure~\ref{fig:triple}; the convergence is clearly quadratic until the limit of numerical precision is reached in about 4 steps. \section*{Acknowledgment} Work on this project was supported by the National Science Foundation through grant DMS-1815075 and the Binational US--Israel Science Foundation grant 2016281 while one of the authors (AP) was an undergraduate student at Texas A\&M University. Numerous illuminating discussions with Igor Zelenko are gratefully acknowledged. The authors are particularly grateful to the referees for their deep reading of the manuscript and several suggestions which resulted in significant improvement of the presentation. \bibliographystyle{abbrv}
{ "timestamp": "2020-12-14T02:27:34", "yymm": "2001", "arxiv_id": "2001.02753", "language": "en", "url": "https://arxiv.org/abs/2001.02753", "abstract": "A simple iterative scheme is proposed for locating the parameter values for which a 2-parameter family of real symmetric matrices has a double eigenvalue. The convergence is proved to be quadratic. An extension of the scheme to complex Hermitian matrices (with 3 parameters) and to location of triple eigenvalues (5 parameters for real symmetric matrices) is also described. Algorithm convergence is illustrated in several examples: a real symmetric family, a complex Hermitian family, a family of matrices with an \"avoided crossing\" (no covergence) and a 5-parameter family of real symmetric matrices with a triple eigenvalue.", "subjects": "Spectral Theory (math.SP); Numerical Analysis (math.NA)", "title": "Locating conical degeneracies in the spectra of parametric self-adjoint matrices", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575167960731, "lm_q2_score": 0.7217431943271999, "lm_q1q2_score": 0.7091542007825992 }
https://arxiv.org/abs/1712.09891
Fractional Sturm-Liouville eigenvalue problems, I
We introduce and present the general solution of three two-term fractional differential equations of mixed Caputo/Riemann Liouville type. We then solve a Dirichlet type Sturm-Liouville eigenvalue problem for a fractional differential equation derived from a special composition of a Caputo and a Riemann-Liouville operator on a finite interval where the boundary conditions are induced by evaluating Riemann-Liouville integrals at those end-points. For each $1/2<\alpha<1$ it is shown that there is a finite number of real eigenvalues, an infinite number of non-real eigenvalues, that the number of such real eigenvalues grows without bound as $\alpha \to 1^-$, and that the fractional operator converges to an ordinary two term Sturm-Liouville operator as $\alpha \to 1^-$ with Dirichlet boundary conditions. Finally, two-sided estimates as to their location are provided as is their asymptotic behavior as a function of $\alpha$.
\section{Introduction} Fractional Sturm-Liouville Problems (FSLP) generalize classical SLP in that the ordinary derivatives are replaced by {\it fractional derivatives}, or derivatives of fractional order. As an introduction, the interested reader may wish to consult the great variety of works on the subject, starting with books such as \cite{kil, pod, sam}. It turns out that such fractional equations have seen many applications in science and engineering in problems such as capturing the dynamical behaviors of amorphous materials, e.g., polymers and porous media \cite{metzler,rossik}, or the superior modeling of the anomalous diffusion in materials with memory, e.g., viscoelastic materials for which the mean square variance grows faster (superdiffusion) or slower (subdiffusion) than in a Gaussian process \cite{bouch,met,shl}. One of the major differences between fractional and ordinary derivatives lies in the global nature of the former and the local nature of the latter. Thus, in order to get information on the fractional derivative of a function at a given point, one needs knowledge of the original function on a whole semi-interval! The ordinary circular trigonometric functions that occur in the classical theory are now to be replaced by Mittag-Leffler functions. So it is both conceivable and practical that physical problems involving memory of the past, and also equations with delay, be modeled using such fractional derivatives \cite{poos,sing,maina}. The question at the core of this paper involves the determination of analytical solutions, and of the corresponding spectrum, of a fractional Sturm-Liouville equation with zero potential function and fixed-end homogeneous boundary conditions. As can be seen below, even such an innocuous question can require a tremendous effort. To the best of our knowledge, the vast majority of scientific papers have focused on the numerical solution of such equations (see \cite{tomas, zayer, mad1, mad2}). Of note here is Klimek and Agrawal's work \cite{kliag1} in which the fractional operators are different from those usually defined in the literature in the sense that they contain left and right Riemann-Liouville and left and right Caputo fractional derivatives. They were also able to prove that, under suitable conditions, if the eigenvalues exist then they are real and that the eigenfunctions enjoy the same orthogonality property as in the classical case \cite{kliag2}. \section{Preliminaries} \subsection{Fractional Calculus} We recall some basic definitions from the theory of ODEs and some results of importance in the fractional calculus. The Wronskian of two differentiable functions $f(t)$ and $g(t)$ is defined as usual by $\mathcal{W}(f,g)(t)=f(t)g'(t)-f'(t)g(t).$ If $f(t)$ and $g(t)$ are two solutions of a second order linear differential equation and $\mathcal{W}(f,g)(t)\neq 0$ for some $t$, then we say that $f(t)$ and $g(t)$ are linearly independent and these functions are referred to as a fundamental set of solutions (FSS). \begin{definition} The left and the right Riemann-Liouville fractional integrals $\mathcal{I}^{\alpha}_{a^+}$ and $\mathcal{I}^{\alpha}_{b^-}$ of order $\alpha\in \mathbb{R}^+$ are defined by \begin{equation}\label{lfi} \mathcal{I}^{\alpha}_{a^+}f(t):=\frac{1}{\Gamma(\alpha)}\int_a^t\frac{f(s)}{(t-s)^{1-\alpha}}ds,\quad t \in (a,b], \end{equation} and \begin{equation}\label{rfi} \mathcal{I}^{\alpha}_{b^-}f(t):=\frac{1}{\Gamma(\alpha)}\int_t^b\frac{f(s)}{(s-t)^{1-\alpha}}ds,\quad t \in [a,b), \end{equation} respectively.
Here $\Gamma (\alpha)$ denotes Euler's Gamma function. The following property is easily verified. \end{definition} \begin{pro}\label{c} \normalfont For a constant $C$, we have $\mathcal{I}_{a^+}^{\alpha}C=\frac{(t-a)^{\alpha}}{\Gamma(\alpha+1)}\cdot C$. \end{pro} \begin{definition} The left and the right Caputo fractional derivatives $^{c}\mathcal{D}^{\alpha}_{a^+}$ and $^{c}\mathcal{D}^{\alpha}_{b^-}$ are defined by \begin{equation}\label{lcfd} ^{c}\mathcal{D}^{\alpha}_{a^+}f(t):=\mathcal{I}^{n-\alpha}_{a^+}\circ\mathcal{D}^nf(t)=\frac{1}{\Gamma(n-\alpha)}\int_a^{t}\frac{f^{(n)}(s)}{(t-s)^{\alpha-n+1}}ds, \quad t>a, \end{equation} and \begin{equation}\label{rcfd} ^{c}\mathcal{D}^{\alpha}_{b^-}f(t):=(-1)^n\mathcal{I}^{n-\alpha}_{b^-}\circ\mathcal{D}^nf(t)=\frac{(-1)^n}{\Gamma(n-\alpha)}\int_t^{b}\frac{f^{(n)}(s)}{(s-t)^{\alpha-n+1}}ds, \quad t<b, \end{equation} respectively, where $f$ is sufficiently differentiable and $n-1\leq \alpha<n$. \end{definition} \begin{definition} Similarly, the left and the right Riemann-Liouville fractional derivatives $\mathcal{D}^{\alpha}_{a^+}$ and $\mathcal{D}^{\alpha}_{b^-}$ are defined by \begin{equation}\label{lrlfd} \mathcal{D}^{\alpha}_{a^+}f(t):=\mathcal{D}^n\circ\mathcal{I}^{n-\alpha}_{a^+}f(t)=\frac{1}{\Gamma(n-\alpha)}\frac{d^n}{dt^n}\int_a^{t}\frac{f(s)}{(t-s)^{\alpha-n+1}}ds, \quad t>a, \end{equation} and \begin{equation}\label{rrlfd} \mathcal{D}^{\alpha}_{b^-}f(t):=(-1)^n\mathcal{D}^n\circ\mathcal{I}^{n-\alpha}_{b^-}f(t)=\frac{(-1)^n}{\Gamma(n-\alpha)}\frac{d^n}{dt^n}\int_t^{b}\frac{f(s)}{(s-t)^{\alpha-n+1}}ds, \quad t<b, \end{equation} respectively, where $f$ is sufficiently differentiable and $n-1\leq \alpha<n$. \end{definition} \begin{pro} \normalfont (Lemma 2.19 \cite{klim}.) Assume that $0<\alpha<1$, $f\in AC[a,b]$ and $g\in L^p(a,b)(1\leq p \leq \infty)$. Then the following fractional integration by parts formula holds \begin{equation}\label{part} \int_a^bf(t)\mathcal{D}^{\alpha}_{a^+}g(t)dt=\int_a^b g(t) ^{c}\mathcal{D}^{\alpha}_{b^-}f(t)dt+f(t)\mathcal{I}^{1-\alpha}_{a^+}g(t)|_{t=a}^{t=b}. \end{equation} \end{pro} \begin{pro} \normalfont (Property 2.1 \cite{kil}.) Let $0<\alpha<\beta$, then the following identities hold: \begin{equation}\nonumber \begin{split} \mathcal{I}^{\alpha}_{a^+}(t-a)^{\beta-1}&=\frac{\Gamma(\beta)}{\Gamma(\beta+\alpha)}(t-a)^{\beta+\alpha-1},\\ \mathcal{D}^{\alpha}_{a^+}(t-a)^{\beta-1}&=\frac{\Gamma(\beta)}{\Gamma(\beta-\alpha)}(t-a)^{\beta-\alpha-1},\\ \mathcal{I}^{\alpha}_{b^-}(b-t)^{\beta-1}&=\frac{\Gamma(\beta)}{\Gamma(\beta+\alpha)}(b-t)^{\beta+\alpha-1},\\ \mathcal{D}^{\alpha}_{b^-}(b-t)^{\beta-1}&=\frac{\Gamma(\beta)}{\Gamma(\beta-\alpha)}(b-t)^{\beta-\alpha-1}. \end{split} \end{equation} \end{pro} \begin{pro}\label{betaf} \normalfont For $a \leq t < b$, $0 < \alpha < 1$ we have, \begin{eqnarray*} \mathcal{I}_{a^+}^{\alpha}\left ((b-t)^{\alpha-1}\right )&=& - \frac{(b-t)^{2\alpha -1}}{\Gamma(\alpha)\,}\, \int_{\frac{b-a}{b-t}}^1 (w-1)^{\alpha-1}\, w^{\alpha -1}\, dw,\\ &=& \frac{(b-t)^{2\alpha -1}}{\Gamma(\alpha)\,}\, \left (B \left ( \frac{b-a}{b-t};\alpha,\alpha \right) - B(1;\alpha,\alpha)\right ). \end{eqnarray*} where $B(z;\gamma,\theta)$ is the ``Incomplete Beta function" defined by \[ B(z;\gamma,\theta)=\int_0^z u^{\gamma-1}(1-u)^{\theta-1}du. \] \end{pro} \begin{pro} \normalfont (Lemma 2.4 \cite{kil}.) 
If $\alpha>0$ and $f\in L^p(a,b)(1\leq p\leq\infty)$, then the following equalities \begin{equation}\nonumber \begin{split} \mathcal{D}^{\alpha}_{a^+}\circ\mathcal{I}^{\alpha}_{a^+}f(t)=f(t),\\ \mathcal{D}^{\alpha}_{b^-}\circ\mathcal{I}^{\alpha}_{b^-}f(t)=f(t), \end{split} \end{equation} hold almost everywhere on $[a,b]$. \end{pro} \begin{pro}\label{p4} \normalfont (Lemma 2.5 and Lemma 2.6 \cite{kil}.) If $0<\alpha<1$, $f\in L^1(a,b)$ and $\mathcal{I}^{1-\alpha}_{a^+}f,\mathcal{I}^{1-\alpha}_{b^-}f\in AC[a,b]$, then the following equalities \begin{equation}\nonumber \begin{split} \mathcal{I}^{\alpha}_{a^+}\circ\mathcal{D}^{\alpha}_{a^+}f(t)=f(t)-\frac{(t-a)^{\alpha-1}}{\Gamma(\alpha)}\mathcal{I}^{1-\alpha}_{a^+}f(t)|_{t=a},\\ \mathcal{I}^{\alpha}_{b^-}\circ\mathcal{D}^{\alpha}_{b^-}f(t)=f(t)-\frac{(b-t)^{\alpha-1}}{\Gamma(\alpha)}\mathcal{I}^{1-\alpha}_{b^-}f(t)|_{t=b}, \end{split} \end{equation} hold a.e. on $[a,b]$. \end{pro} \begin{pro} \normalfont (Lemma 2.21 \cite{kil}.) Let $\Re(\alpha)>0$ and $f(t)\in L^{\infty}(a,b)$ or $f(t)\in C[a,b]$. If $\Re(\alpha)\notin\mathbb{N}$ or $\alpha \in \mathbb{N}$, then \begin{equation}\nonumber \begin{split} ^{c}\mathcal{D}^{\alpha}_{a^+}\circ\mathcal{I}^{\alpha}_{a^+}f(t)=f(t),\\ ^{c}\mathcal{D}^{\alpha}_{b^-}\circ\mathcal{I}^{\alpha}_{b^-}f(t)=f(t). \end{split} \end{equation} \end{pro} \begin{pro}\label{p6} \normalfont (Lemma 2.22 \cite{kil}.) Let $0<\alpha\leq 1$. If $f\in AC[a,b]$, then \begin{equation}\nonumber \begin{split} \mathcal{I}^{\alpha}_{a^+}\circ ^{c}\mathcal{D}^{\alpha}_{a^+} f(t)=f(t)-f(a),\\ \mathcal{I}^{\alpha}_{b^-}\circ ^{c}\mathcal{D}^{\alpha}_{b^-}f(t)=f(t)-f(b). \end{split} \end{equation} \end{pro} \subsection{The Mittag-Leffler function} The function $E_{\delta}(z)$ defined by \begin{equation}\label{mittag1} E_{\delta}(z):=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\delta k+1)},\quad (z\in \mathbb{C}, \Re(\delta)>0), \end{equation} was introduced by Mittag-Leffler \cite{mit}. In particular, when $\delta=1$ and $\delta=2$, we have \begin{equation}\label{e1} E_1(z)=e^z,\qquad E_2(z)=\cosh(\sqrt{z}). \end{equation} The Mittag-Leffler function $E_{\delta,\theta}(z)$, generalizing the one in (\ref{mittag1}) is normally defined by \begin{equation}\label{mittag2} E_{\delta,\theta}(z):=\sum_{k=0}^{\infty}\frac{z^k}{\Gamma(\delta k+\theta)},\quad (z,\theta\in \mathbb{C}, \Re(\delta)>0). \end{equation} Of course, when $\theta=1$, $E_{\delta,\theta}(z)$ coincides with the Mittag-Leffler function (\ref{mittag1}): \begin{equation}\label{e2} E_{\delta,1}(z)=E_{\delta}(z). \end{equation} Two other particular cases of (\ref{mittag2}) are as follows: \begin{equation}\label{pmi} E_{1,2}(z)=\frac{e^z-1}{z},\quad E_{2,2}(z)=\frac{\sinh(\sqrt{z})}{\sqrt{z}}. \end{equation} \begin{pro}\label{asym} \normalfont If $0<\delta<2$ and $\mu\in(\frac{\delta \pi}{2},\min(\pi,\delta \pi))$, then function $E_{\delta,\theta}(z)$ has the following asymptotic expansion as $|z|\rightarrow\infty$ (see \cite{kil}) \begin{eqnarray} E_{\delta,\theta}(z)=\left\{ \begin{array}{ll} \frac{1}{\delta}z^{\frac{1-\theta}{\delta}}\exp(z^{\frac{1}{\delta}})-\sum_{k=1}^N\frac{1}{\Gamma(\theta-\delta k)}\frac{1}{z^k}+O(\frac{1}{z^{N+1}}), \qquad |\arg(z)|\leq \mu, \\ -\sum_{k=1}^N\frac{1}{\Gamma(\theta-\delta k)}\frac{1}{z^k}+O(\frac{1}{z^{N+1}}), \qquad \mu\leq|\arg(z)|\leq \pi. \end{array}\right. 
\label{asy} \end{eqnarray} \end{pro} \subsection{The Laplace transform} \begin{definition} The Laplace transform of a function $f(t)$, defined for all $t\geq 0$, is the function $F(s)$ defined by \[ F(s)=\mathfrak{L}\{f(t)\}:=\int_0^{\infty}e^{-st}f(t)dt, \] whenever the integral exists, where $s$ is the frequency parameter. \end{definition} \begin{definition} The inverse Laplace transform of a function $F(s)$ is then given by the line integral \[ f(t)=\mathfrak{L}^{-1}\{F(s)\}:=\frac{1}{2\pi i}\lim_{T\rightarrow\infty}\int_{\gamma-iT}^{\gamma+iT}e^{st}F(s)ds, \] where the integration is done along the vertical line $\Re(s)=\gamma$ in the complex plane such that $\gamma$ is greater than the real part of all the singularities of $F(s)$. \end{definition} \begin{definition} The convolution of $f(t)$ and $g(t)$ supported only on $[0,\infty)$ is defined by \[ (f\ast g)(t)=\int_0^tf(s)g(t-s)ds,\qquad f,g:[0,\infty)\rightarrow \mathbb{R}, \] whenever the integral exists. \end{definition} \begin{pro} \normalfont For $\Re(q)>-1$ we have \begin{gather}\label{tq} \mathfrak{L}\{t^q\}=\frac{\Gamma(q+1)}{s^{q+1}},\\ \mathfrak{L}^{-1}\{s^q\}=\frac{1}{t^{q+1}\Gamma(-q)}.\label{ils} \end{gather} \end{pro} \begin{pro}\label{lapder} \normalfont If $f(t)$ is assumed to be a differentiable function and its derivative is of exponential type, then $\mathfrak{L}\{f'(t)\}=s\mathfrak{L}\{f(t)\}-f(0)$. \end{pro} \begin{pro}\label{con} \normalfont $\mathfrak{L}\{(f\ast g )(t)\}=\mathfrak{L}\{f(t)\}\cdot \mathfrak{L}\{g(t)\}$. \end{pro} \begin{pro}\label{sal} \normalfont By definition of the left fractional integral (\ref{lfi}), we can rewrite \[ \mathcal{I}_{0^+}^{\alpha}f(t)=\frac{1}{\Gamma(\alpha)}(f(t)\ast\frac{1}{t^{1-\alpha}}). \] So, by virtue of \eqref{tq} and Property~\ref{con}, we have \begin{equation}\nonumber \begin{split} \mathfrak{L}\{\mathcal{I}_{0^+}^{\alpha}f(t)\}&=\frac{1}{\Gamma(\alpha)}\mathfrak{L}\{f(t)\}\cdot \mathfrak{L}\{\frac{1}{t^{1-\alpha}}\}\\ &=\frac{1}{s^{\alpha}}\mathfrak{L}\{f(t)\}. \end{split} \end{equation} \end{pro} \begin{pro}\label{lc} The definition of the left Caputo fractional derivative (\ref{lcfd}) and Properties~\ref{sal} and \ref{lapder} give us, for $0<\alpha<1$, \begin{equation}\nonumber \begin{split} \mathfrak{L}\{^c\mathcal{D}_{0^+}^{\alpha}f(t)\}&=\mathfrak{L}\{\mathcal{I}_{0^+}^{1-\alpha}\mathcal{D}f(t)\}\\ &=\frac{1}{s^{1-\alpha}}\mathfrak{L}\{\mathcal{D}f(t)\}\\ &=\frac{1}{s^{1-\alpha}}\left(s\mathfrak{L}\{f(t)\}-f(0)\right)\\ &=s^{\alpha}\mathfrak{L}\{f(t)\}-s^{\alpha-1}f(0). \end{split} \end{equation} \end{pro} \begin{pro}\label{lrl} Similarly, the definition of the left Riemann-Liouville fractional derivative (\ref{lrlfd}) and Properties~\ref{sal} and \ref{lapder} give us, for $0<\alpha<1$, \begin{equation}\nonumber \begin{split} \mathfrak{L}\{\mathcal{D}_{0^+}^{\alpha}f(t)\}&=\mathfrak{L}\{\mathcal{D}\mathcal{I}_{0^+}^{1-\alpha}f(t)\}\\ &=s\mathfrak{L}\{\mathcal{I}_{0^+}^{1-\alpha}f(t)\}-\mathcal{I}_{0^+}^{1-\alpha}f(t)|_{t=0}\\ &=s\left(\frac{1}{s^{1-\alpha}}\mathfrak{L}\{f(t)\}\right)-\mathcal{I}_{0^+}^{1-\alpha}f(t)|_{t=0}\\ &=s^{\alpha}\mathfrak{L}\{f(t)\}-\mathcal{I}_{0^+}^{1-\alpha}f(t)|_{t=0}. \end{split} \end{equation} \end{pro} \section{Fundamental set of Solutions (FSS)} In this section we consider three two-term fractional Sturm-Liouville equations, each built from a different composition of the fractional derivatives defined above.
The first equation involves the composition of a right Caputo fractional derivative with a left Riemann-Liouville fractional derivative, as follows: \begin{equation}\label{fe1} ^{c}\mathcal{D}_{b^-}^{\alpha}\circ\mathcal{D}^{\alpha}_{a^+}y(t)=0,\qquad 0<\alpha<1. \end{equation} By a solution of \eqref{fe1} is meant a function $y \in AC[a,b]$ such that $\mathcal{D}^{\alpha}_{a^+}y \in AC[a,b]$. Applying the right fractional integral to (\ref{fe1}) and using the second relation of Property~\ref{p6}, we obtain \[ \mathcal{D}^{\alpha}_{a^+}y(t)-\mathcal{D}^{\alpha}_{a^+}y(t)|_{t=b}=0. \] Now, by taking the left fractional integral of the above equation followed by the first relation in Property~\ref{p4}, and then Property~\ref{c}, we get \begin{equation}\label{sf1} y(t)=\frac{(t-a)^{\alpha-1}}{\Gamma(\alpha)}\cdot \mathcal{I}_{a^+}^{1-\alpha}y(t)|_{t=a}+\frac{(t-a)^{\alpha}}{\Gamma(\alpha+1)}\cdot \mathcal{D}_{a^+}^{\alpha}y(t)|_{t=b}, \end{equation} in which $\mathcal{I}_{a^+}^{1-\alpha}y(t)|_{t=a}$ and $\mathcal{D}_{a^+}^{\alpha}y(t)|_{t=b}$ are constants to be determined by imposing one or more initial/boundary conditions. The two solutions $y_1(t)=\frac{(t-a)^{\alpha-1}}{\Gamma(\alpha)}$ and $y_2(t)=\frac{(t-a)^{\alpha}}{\Gamma(\alpha+1)}$ are then fundamental solutions of (\ref{fe1}): each of them satisfies equation (\ref{fe1}), as is readily verified; they are linearly independent; and finally their Wronskian \[ \mathcal{W}(y_1,y_2)(t)=\frac{1}{\alpha}\frac{(t-a)^{2\alpha-2}}{\Gamma^2(\alpha)}, \] is not identically zero on the interval $[a,b]$ and has a discontinuity only at the point $a$. Observe that $y'_2(t)=y_1(t)$. \begin{remark} It is worth noting that as $\alpha$ approaches $1$, (\ref{fe1}) reduces to $-y^{\prime\prime}=0$, our two solutions $y_1(t)$ and $y_2(t)$ converge to $1$ and $(t-a)$ respectively, and (\ref{sf1}) then becomes $y(t)=y(a)+y'(b)(t-a)$, which recovers the classical results. We also have $\lim_{\alpha\rightarrow 1}\mathcal{W}(y_1,y_2)(t)=1$. \end{remark} Associated to \eqref{fe1} is another similar but quite different composition, namely \begin{equation}\label{fe2} \mathcal{D}_{b^-}^{\alpha}\circ ^{c}\mathcal{D}^{\alpha}_{a^+}y(t)=0,\qquad 0<\alpha<1. \end{equation} Applying the right fractional integral to (\ref{fe2}) and using the second relation in Property~\ref{p4}, we have \[ ^{c}\mathcal{D}^{\alpha}_{a^+}y(t)-\frac{(b-t)^{\alpha-1}}{\Gamma(\alpha)}\cdot \mathcal{I}_{b^-}^{1-\alpha}\circ ^{c}\mathcal{D}^{\alpha}_{a^+}y(t)|_{t=b}=0. \] Now, by taking the left fractional integral of the above equation, using Properties~\ref{p6} and \ref{betaf}, and introducing the function $\psi(t;a,b,\alpha)$ by \[ \psi(t;a,b,\alpha)=\frac{(b-t)^{2\alpha-1}}{\Gamma^2(\alpha)}\cdot \left \{B \left (\frac{b-a}{b-t};\alpha,\alpha\right ) - B(1;\alpha,\alpha)\right \}, \] we obtain \begin{equation}\label{sf2} y(t)=y(a)+\mathcal{I}_{b^-}^{1-\alpha}\circ ^{c}\mathcal{D}^{\alpha}_{a^+}y(t)|_{t=b}\cdot\psi(t;a,b,\alpha). \end{equation} So we can say that a FSS is $\{1,\psi(t;a,b,\alpha)\}$, since both of these functions satisfy equation (\ref{fe2}) separately and their Wronskian is not identically zero on $[a,b]$, having discontinuities at $a$ and $b$. \begin{pro}\label{xik} It is obvious that $\psi(a;a,b,\alpha)=0$ and if $\frac{1}{2}<\alpha<1$ we have, by a simple application of both Leibniz's and L'Hospital's Rule, \[ \lim_{t\rightarrow b}\psi(t;a,b,\alpha)=\frac{(b-a)^{2\alpha-1}}{(2\alpha-1)\Gamma^2(\alpha)}. \] The function $\psi(t;a,b,\alpha)$ has a discontinuity at $t=b$ when $0<\alpha\leq \frac{1}{2}$.
\end{pro} In order to determine the constant value $\mathcal{I}_{b^-}^{1-\alpha}\circ ^{c}\mathcal{D}^{\alpha}_{a^+}y(t)|_{t=b}$, we substitute the value $t=b$ into equation (\ref{sf2}) and, by virtue of Property~\ref{xik} for $\frac{1}{2}<\alpha<1$, we get \begin{equation}\label{ax} \mathcal{I}_{b^-}^{1-\alpha}\circ ^{c}\mathcal{D}^{\alpha}_{a^+}y(t)|_{t=b}=\frac{\left(y(b)-y(a)\right)(2\alpha-1)\Gamma^2(\alpha)}{(b-a)^{2\alpha-1}}. \end{equation} Next, we substitute the right hand side of (\ref{ax}) into (\ref{sf2}) and obtain the general solution of equation (\ref{fe2}) as follows \[ y(t)=y(a)+(y(b)-y(a))(2\alpha-1)\cdot (\frac{b-t}{b-a})^{2\alpha-1}\int_{1}^{\frac{b-a}{b-t}} (w-1)^{\alpha-1}\, w^{\alpha -1}\, dw, \] in which $y(a)$ and $y(b)$ are constants to be determined by imposing the boundary conditions. \begin{fe} For $0<\alpha<1$, consider the following two-term fractional Sturm-Liouville equation \begin{equation}\label{fe3} -^{c}\mathcal{D}_{0^+}^{\alpha}\circ \mathcal{D}^{\alpha}_{0^+}y(t)=\lambda y(t),\qquad t > 0. \end{equation} \end{fe} By taking the Laplace transform on both sides of equation (\ref{fe3}) and using Property~\ref{lc}, we have \[ -s^{\alpha}\mathfrak{L}\{\mathcal{D}^{\alpha}_{0^+}y(t)\}+s^{\alpha-1}\mathcal{D}^{\alpha}_{0^+}y(t)|_{t=0}=\lambda\mathfrak{L}\{y(t)\}, \] and then using Property~\ref{lrl}, we get \[ -s^{\alpha}\left(s^{\alpha}\mathfrak{L}\{y(t)\}-\mathcal{I}_{0^+}^{1-\alpha}y(t)|_{t=0}\right)+s^{\alpha-1}\mathcal{D}_{0^+}^{\alpha}y(t)|_{t=0}=\lambda\mathfrak{L}\{y(t)\}. \] From this we readily obtain \[ \mathfrak{L}\{y(t)\}=\frac{s^{\alpha}}{\lambda+s^{2\alpha}}\mathcal{I}^{1-\alpha}_{0^+}y(t)|_{t=0}+\frac{s^{\alpha-1}}{\lambda+s^{2\alpha}}\mathcal{D}^{\alpha}_{0^+}y(t)|_{t=0}. \] Taking the inverse Laplace transform in order to recover $y(t)$, we get \begin{equation}\label{lg} y(t)=\mathfrak{L}^{-1}\{\frac{s^{\alpha}}{\lambda+s^{2\alpha}}\}\cdot\mathcal{I}^{1-\alpha}_{0^+}y(t)|_{t=0}+\mathfrak{L}^{-1}\{\frac{s^{\alpha-1}}{\lambda+s^{2\alpha}}\}\cdot\mathcal{D}^{\alpha}_{0^+}y(t)|_{t=0}. \end{equation} On the other hand, \begin{equation}{\nonumber} \begin{split} \mathfrak{L}^{-1}\{\frac{s^{\alpha}}{\lambda+s^{2\alpha}}\}&=\mathfrak{L}^{-1}\{\frac{s^{\alpha}}{\lambda}\frac{1}{1+\left(\frac{s}{\lambda^{\frac{1}{2\alpha}}}\right)^{2\alpha}}\}\\ &=\mathfrak{L}^{-1}\{\frac{s^{\alpha}}{\lambda}-\frac{s^{3\alpha}}{\lambda^2}+\frac{s^{5\alpha}}{\lambda^3}-\cdots\}\qquad \text{as}\quad|\lambda|\rightarrow\infty\\ &=\mathfrak{L}^{-1}\{\sum_{k=1}^{\infty}(-1)^{k-1}\frac{s^{(2k-1)\alpha}}{\lambda^k}\}\\ &=\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{\lambda^kt^{(2k-1)\alpha+1}\Gamma(-(2k-1)\alpha)}\qquad \text{(by virtue of \eqref{ils})}\\ &=-\frac{1}{t^{1-\alpha}}\sum_{k=1}^{\infty}\left(\frac{-1}{\lambda t^{2\alpha}}\right)^k\frac{1}{\Gamma(\alpha-2\alpha k)}. \end{split} \end{equation} Consequently, assuming $z=-\lambda t^{2\alpha}$, $\delta=2\alpha$ and $\theta=\alpha$ in (\ref{asy}), we can write, from the above equation, \[ \mathfrak{L}^{-1}\{\frac{s^{\alpha}}{\lambda+s^{2\alpha}}\}=\frac{1}{t^{1-\alpha}}\left(E_{2\alpha,\alpha}(-\lambda t^{2\alpha})-\frac{1}{2\alpha}(-\lambda t^{2\alpha})^{\frac{1-\alpha}{2\alpha}}\exp((-\lambda t^{2\alpha})^{\frac{1}{2\alpha}})\right), \] as $|\lambda|\rightarrow\infty$. For real positive eigenvalues $\lambda$ and in accordance with $t\in [0,1]$, we have $\arg(-\lambda t^{2\alpha})=\pi$.
Hence, the second term on the right hand side of the above equation vanishes due to Property~\ref{asym} and we get \begin{equation}\label{mi1} \mathfrak{L}^{-1}\{\frac{s^{\alpha}}{\lambda+s^{2\alpha}}\}=\frac{1}{t^{1-\alpha}}E_{2\alpha,\alpha}(-\lambda t^{2\alpha}). \end{equation} In a similar manner, we can obtain \begin{equation}\label{mi2} \mathfrak{L}^{-1}\{\frac{s^{\alpha-1}}{\lambda+s^{2\alpha}}\}=t^{\alpha}E_{2\alpha,\alpha+1}(-\lambda t^{2\alpha}). \end{equation} So, using (\ref{lg}), (\ref{mi1}) and (\ref{mi2}), we can write the general solution of (\ref{fe3}) as \begin{equation}\label{sf3} y(t)=\mathcal{I}_{0^+}^{1-\alpha}y(t)|_{t=0}\cdot \frac{1}{t^{1-\alpha}}E_{2\alpha,\alpha}(-\lambda t^{2\alpha})+ \mathcal{D}_{0^+}^{\alpha}y(t)|_{t=0}\cdot t^{\alpha}E_{2\alpha,\alpha+1}(-\lambda t^{2\alpha}), \end{equation} in which $\mathcal{I}_{0^+}^{1-\alpha}y(t)|_{t=0}$ and $\mathcal{D}_{0^+}^{\alpha}y(t)|_{t=0}$ are constants to be determined using the initial conditions. The two linearly independent functions $y_1(t;\alpha,\lambda)=\frac{1}{t^{1-\alpha}}E_{2\alpha,\alpha}(-\lambda t^{2\alpha})$ and $y_2(t;\alpha,\lambda)=t^{\alpha}E_{2\alpha,\alpha+1}(-\lambda t^{2\alpha})$ satisfy equation (\ref{fe3}) separately, so they form a fundamental set of solutions. It is readily verified that $y_2^\prime (t;\alpha,\lambda)=y_1(t;\alpha,\lambda)$. Note that \eqref{sf3} may alternatively be obtained using the method of successive approximations. \begin{remark} When $\alpha$ approaches $1$, equation (\ref{fe3}) turns into $-y^{\prime\prime}=\lambda y$ and its fundamental set of solutions, i.e., $\{t^{\alpha-1}E_{2\alpha,\alpha}(-\lambda t^{2\alpha}),t^{\alpha}E_{2\alpha,\alpha+1}(-\lambda t^{2\alpha})\}$, reduces to $\{\cos(\sqrt{\lambda}t),\frac{\sin(\sqrt{\lambda}t)}{\sqrt{\lambda}}\}$ due to (\ref{e1}), (\ref{e2}) and (\ref{pmi}). This shows that our results are a generalization of the classical ones. \end{remark} \section{Eigenvalues of our Fractional Sturm-Liouville problem} Consider (\ref{fe3}) on $[0,1]$ with boundary conditions \begin{equation}\label{fcon} \mathcal{I}_{0^+}^{1-\alpha}y(t)|_{t=0}=0,\quad \text{and}\quad \mathcal{I}_{0^+}^{1-\alpha}y(t)|_{t=1}=0. \end{equation} Imposing these boundary conditions on the general solution (\ref{sf3}) with $\mathcal{D}_{0^+}^{\alpha}y(t)|_{t=0} \neq 0$ and using that \[ \mathcal{I}_{0^+}^{1-\alpha}\{t^{\alpha}E_{2\alpha,\alpha+1}(-\lambda t^{2\alpha})\}=tE_{2\alpha,2}(-\lambda t^{2\alpha}), \] we get \begin{equation}\label{eigeq} E_{2\alpha,2}(-\lambda)=0, \end{equation} which defines our characteristic equation for the eigenvalues. We note that $E_{2\alpha,2}(-\lambda)$ is an entire function of $\lambda$ of order $1/(2\alpha)$ and type 1 [\cite{kil}, p.42], and since this order is fractional for all $1/2 < \alpha < 1$, classical complex analysis implies that it has infinitely many zeros for all such $\alpha$.
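Before analyzing \eqref{eigeq} further we note that it is straightforward to explore numerically. The following short Python sketch (illustrative only; it is not used in any of the arguments below) sums the series \eqref{mittag2} for $E_{2\alpha,2}(-\lambda)$ and counts its sign changes for $\lambda>0$, each of which signals a real eigenvalue. In double precision this is reliable only for moderate $\lambda$, so the scan is restricted accordingly; for $\alpha$ closer to $1$ a finer grid and higher-precision arithmetic (for example via \texttt{mpmath}) would be needed. The resulting counts can be compared with Table~\ref{tab1} below.
\begin{verbatim}
from math import exp, lgamma, log

def ml(beta, mu, z, terms=150):
    # Series for the two-parameter Mittag-Leffler function E_{beta,mu}(z),
    # summed via lgamma to avoid overflow of the Gamma function.
    if z == 0.0:
        return 1.0 / exp(lgamma(mu))
    s = 0.0
    for k in range(terms):
        sign = 1.0 if (z > 0.0 or k % 2 == 0) else -1.0
        s += sign * exp(k * log(abs(z)) - lgamma(beta * k + mu))
    return s

def count_real_eigenvalues(alpha, lam_max=200.0, step=0.02):
    # Coarse scan of E_{2*alpha,2}(-lam) on (0, lam_max]; every sign change
    # corresponds to (at least) one real eigenvalue.
    n = 0
    lam = step
    prev = ml(2.0 * alpha, 2.0, -lam)
    while lam < lam_max:
        lam += step
        cur = ml(2.0 * alpha, 2.0, -lam)
        if prev * cur < 0.0:
            n += 1
        prev = cur
    return n

for a in (0.80, 0.84, 0.86, 0.88):
    print(a, count_real_eigenvalues(a))
\end{verbatim}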
Now the Mittag-Leffler function $E_{2\alpha,2}(-\lambda)$ for $1/2<\alpha<1$ can be decomposed into two parts \cite{gor} given by \begin{equation}\label{plus} \lambda^{\frac{1}{2\alpha}}E_{2\alpha,2}(-\lambda)=f_{2\alpha,2}(-\lambda)+g_{2\alpha,2}(-\lambda), \end{equation} where \begin{equation}\label{f} f_{2\alpha,2}(-\lambda)=\int_0^{\infty}e^{-r\lambda^{\frac{1}{2\alpha}}}k_{2\alpha,2}(r)dr \end{equation} with \[ k_{2\alpha,2}(r)=\frac{1}{\pi}\frac{r^{2\alpha-2}(-\sin(2\alpha \pi))}{r^{4\alpha}+2r^{2\alpha}\cos(2\alpha\pi)+1} \] and \begin{equation}\label{g} g_{2\alpha,2}(-\lambda)=\frac{1}{\alpha}e^{\lambda^{\frac{1}{2\alpha}}\cdot\cos(\frac{\pi}{2\alpha})}\cdot \cos\left(\lambda^{\frac{1}{2\alpha}}\sin(\frac{\pi}{2\alpha})-\frac{\pi}{2\alpha}\right). \end{equation} It is obvious that \[ g_k=\left(\frac{(k+\frac{1}{2}+\frac{1}{2\alpha})\pi}{\sin(\frac{\pi}{2\alpha})}\right)^{2\alpha},\qquad k=-1,0,1,\cdots, \] are positive zeros of $g_{2\alpha,2}(-\lambda)$ and $g_{2\alpha,2}(0)=\frac{1}{\alpha}\cos\frac{\pi}{2\alpha}$. We note that this function exhibits oscillations with an amplitude which decays exponentially, since $\cos(\frac{\pi}{2\alpha})$ is negative as long as $1/2<\alpha<1$. The function $k_{2\alpha,2}(r)$ is always positive due to the fact that $\sin(2\alpha\pi)$ is negative for $1/2<\alpha<1$ and the denominator is greater than $(r^{2\alpha}-1)^2$ which is nonnegative for such $\alpha$. Also, we have \[ f_{2\alpha,2}(0)=\int_0^{\infty}k_{2\alpha,2}(r)dr=-\frac{1}{\alpha}\cos(\frac{\pi}{2\alpha})>0 \] for $1/2<\alpha<1$ and according to the Riemann-Lebesgue Lemma \cite{leb}, the function $f_{2\alpha,2}(-\lambda)$ approaches zero asymptotically as $\lambda\rightarrow\infty$. Recall that the series form of $E_{2\alpha, 2}$ (cf.\ \eqref{mittag2}) shows that $E_{2\alpha,2}(z)>0$ for $z\ge 0$; hence any real zeros of $E_{2\alpha,2}$, if they exist, must lie along the negative real axis, that is, only positive values of $\lambda$ can satisfy \eqref{eigeq}. Its asymptotic form as a function of a complex variable can be found in [\cite{erdelyi}, p. 210, eq. (22)] with the obvious substitutions. Specifically, we know that $$E_{2\alpha, 2}(z) =\frac{1}{2\alpha} \sum_{m} t_m^{-1}\, e^{t_m} - O\left (|z|^{-(N-1)}\right ),\quad {\rm as}\quad |z| \to \infty.$$ Here $t_m = z^{1/2\alpha}\, e^{\pi i m/\alpha}$, $m$ is an integer, and $-2\alpha \pi < \arg z +2\pi m < 2\alpha \pi$. One can show directly from this that, for each $\lambda > 0$, $$E_{2\alpha, 2}(-\lambda) \to \frac{\sin\sqrt{\lambda}}{\sqrt{\lambda}}$$ as $\alpha \to 1$, as expected in the classical case. Setting the right-hand side of the latter display equal to zero gives the dispersion relation for the eigenvalues (all positive) of the Dirichlet problem for $-y^{\prime\prime}=\lambda\, y$ on $[0,1]$. It also follows that the eigenvalues of our problem approach those of the Dirichlet problem for $-y^{\prime\prime}=\lambda\, y$, as the zeros of $E_{2\alpha, 2}(-\lambda)$ approach those of $\frac{\sin\sqrt{\lambda}}{\sqrt{\lambda}}$. Since $E_{2\alpha, 2}(-\lambda)$ is an entire function of (fractional) finite order $1/2\alpha$ [\cite{erdelyi}, p. 208], we can ascertain that, for each $\alpha$ with $1/2 <\alpha < 1$, there must be a {\it finite} number of zeros along the negative real axis. In fact, for each such $\alpha$, $E_{2\alpha, 2}(-\lambda) > 0$ for all sufficiently large $\lambda$, where the threshold generally depends on $\alpha$. This now implies that \begin{equation}\label{xine} f_{2\alpha,2}(-\lambda) > |g_{2\alpha,2}(-\lambda)| \end{equation} for all sufficiently large (positive) $\lambda$.
Next, in the remaining {\it finite} $\lambda$-interval, we may have subintervals wherein \begin{equation}\label{ine} |g_{2\alpha,2}(-\lambda)|\geq f_{2\alpha,2}(-\lambda). \end{equation} In such an exceptional interval in which $g_{2\alpha,2}(-\lambda)<0$, call it $I$, the explicit form \eqref{g} shows that $g_{2\alpha,2}$ vanishes at the endpoints of $I$, while \eqref{ine} holds somewhere inside; so, by continuity, $f_{2\alpha,2}(-\lambda) = - g_{2\alpha,2}(-\lambda)$ at least twice. Hence $E_{2\alpha, 2}(-\lambda)$ has two zeros in $I$ (each of which is an eigenvalue). Denote the totality of such intervals in which $g_{2\alpha,2}(-\lambda)<0$ by $I_n$, $n=0, 1, 2, \ldots, N^*-1$, where $N^*$ depends on $\alpha$. Its value will be specified below. A glance at (\ref{g}) shows that the $n^{th}$ such interval $I_n$ is given by the set of all $\lambda$ such that \[ (4n+1)\frac{\pi}{2}<\lambda^{\frac{1}{2\alpha}}\sin(\frac{\pi}{2\alpha})-\frac{\pi}{2\alpha}<(4n+3)\frac{\pi}{2},\quad n=0,1,\cdots. \] In other words, \begin{equation}\label{neg} I_n:=\left(\left(\frac{(2n+\frac{1}{2}+\frac{1}{2\alpha})\pi}{\sin(\frac{\pi}{2\alpha})}\right)^{2\alpha},\left(\frac{(2n+\frac{3}{2}+\frac{1}{2\alpha})\pi}{\sin(\frac{\pi}{2\alpha})}\right)^{2\alpha}\right),\quad n=0,1,\cdots. \end{equation} Recall, however, that \eqref{xine} must persist for all large $\lambda$ so that, insofar as eigenvalues are concerned, this list must necessarily be {\it finite} for each $\alpha$ from previous considerations and consists of the intervals $I_n$, $n=0, 1, 2, \ldots, N^*-1$. We conclude that each interval $I_n$ under consideration contains two eigenvalues and their total number is then $2N^*$. \subsection{Estimating the number of real eigenvalues} It is worth noting again that the number of real eigenvalues is finite while the number of non-real eigenvalues is infinite (as the entire function is of fractional order for our range of $\alpha$). We do not, however, address the non-real spectrum of our problem and, in the following, consider the real spectrum exclusively. Determining the extreme points of $g_{2\alpha,2}(-\lambda)$, we find \[ g'_{2\alpha,2}(-\lambda)=\frac{e^{\lambda^{\frac{1}{2\alpha}}\cos(\frac{\pi}{2\alpha})}\cdot \lambda^{\frac{1}{2\alpha}-1}}{2\alpha^2}\left(\cos(\frac{\pi}{2\alpha})\cos(\lambda^{\frac{1}{2\alpha}}\sin(\frac{\pi}{2\alpha})-\frac{\pi}{2\alpha})-\sin(\frac{\pi}{2\alpha})\sin(\lambda^{\frac{1}{2\alpha}}\sin(\frac{\pi}{2\alpha})-\frac{\pi}{2\alpha})\right)=0, \] which yields \[ g'_{2\alpha,2}(-\lambda)=\frac{e^{\lambda^{\frac{1}{2\alpha}}\cos(\frac{\pi}{2\alpha})}\cdot \lambda^{\frac{1}{2\alpha}-1}}{2\alpha^2}\cos\left(\lambda^{\frac{1}{2\alpha}}\sin(\frac{\pi}{2\alpha})\right)=0. \] Therefore, \[ z_k=\left(\frac{(k+\frac{1}{2})\pi}{\sin(\frac{\pi}{2\alpha})}\right)^{2\alpha},\qquad k=0,1,\cdots, \] are the extreme points of $g_{2\alpha,2}(-\lambda)$ and at these points we have \begin{equation}\nonumber g_{2\alpha,2}(-z_k)=\frac{1}{\alpha}e^{(k+\frac{1}{2})\pi\cdot \cot(\frac{\pi}{2\alpha})}\cdot(-1)^k\sin(\frac{\pi}{2\alpha})\quad \left\{ \begin{array}{ll} >0& k \quad\text{even}, \\ \\ <0 & k \quad\text{odd}, \end{array}\right. \end{equation} for $1/2<\alpha<1$. Since we are interested in that part of $g_{2\alpha,2}(-\lambda)$ where it is negative, we need to look at those extreme points arising from an odd value of $k$, that is, \[ z^{odd}_k=\left(\frac{(2k+\frac{3}{2})\pi}{\sin(\frac{\pi}{2\alpha})}\right)^{2\alpha},\qquad k=0,1,\cdots.
\] The above considerations imply that for each fixed $1/2<\alpha<1$, there exists an $N^{\ast}(\alpha)\in \mathbb{N}$ such that for all $N\geq N^{\ast}$ we have \begin{equation}\label{ine2} \left|g_{2\alpha,2}(-z_N^{odd})\right|<f_{2\alpha,2}(-z_N^{odd}), \end{equation} and such that there is no zero of $E_{2\alpha,2}(-\lambda)$ in these negative intervals of $g_{2\alpha,2}(-\lambda)$, namely $I_n$ for $n=N^{\ast},N^{\ast}+1,\ldots$. The inequality (\ref{ine2}) can be rewritten as follows: \begin{equation}\label{ine3} \frac{1}{\alpha}e^{(2N^{\ast}+\frac{3}{2})\pi\cdot \cot(\frac{\pi}{2\alpha})}\cdot\sin(\frac{\pi}{2\alpha})<\frac{1}{\pi}\int_0^{\infty}e^{-r\frac{(2N^{\ast}+\frac{3}{2})\pi}{\sin(\frac{\pi}{2\alpha})}}\frac{r^{2\alpha-2}(-\sin(2\alpha \pi))}{r^{4\alpha}+2r^{2\alpha}\cos(2\alpha\pi)+1}dr. \end{equation} From this, for given $\alpha$, we can solve for $N^{\ast}$ implicitly, and hence numerically. Since $f_{2\alpha,2}$ is decreasing and $g_{2\alpha,2}$ is an oscillating cosine function with an exponentially decaying amplitude (as $1/2 < \alpha < 1$), there are two zeros of $E_{2\alpha,2}(-\lambda)$ in each interval $I_n$ for $n=0,1,\cdots,N^{\ast}-1$ in which $g_{2\alpha,2} < 0$. {\bf Remark:}\quad It must be mentioned that the numerical evaluation of the integral appearing on the right hand side of (\ref{ine3}) for fixed $\alpha$ can be carried out using various library packages such as \emph{NIntegrate} in \emph{Mathematica} or even \emph{Maple}. Since the intervals where $g_{2\alpha,2} < 0$ are given by \eqref{neg}, we can readily deduce the following two results. \begin{prop} The $2\alpha$-th roots of the first two eigenvalues of (\ref{fe3})--(\ref{fcon}), namely $\rho_{1}$ and $\rho_{2}$, if they exist, lie in the interval \begin{equation}\label{sint} \tilde{I}_{0}=\left(\left(\frac{(\frac{1}{2}+\frac{1}{2\alpha})\pi}{\sin(\frac{\pi}{2\alpha})}\right),\left(\frac{(\frac{3}{2}+\frac{1}{2\alpha})\pi}{\sin(\frac{\pi}{2\alpha})}\right)\right), \end{equation} and $\lim_{\alpha\rightarrow 1} |\tilde{I}_{0}|=\pi$, which corresponds to the classical case. \end{prop} \begin{prop} The $2\alpha$-th roots of the two largest eigenvalues, denoted by $\rho_{2N^{\ast}-1}$ and $\rho_{2N^{\ast}}$, lie in the interval \begin{equation}\label{bint} \tilde{I}_{N^{\ast}-1}=\left(\left(\frac{(2N^{\ast}-\frac{3}{2}+\frac{1}{2\alpha})\pi}{\sin(\frac{\pi}{2\alpha})}\right),\left(\frac{(2N^{\ast}-\frac{1}{2}+\frac{1}{2\alpha})\pi}{\sin(\frac{\pi}{2\alpha})}\right)\right), \end{equation} and $\lim_{\alpha\rightarrow 1} |\tilde{I}_{N^{\ast}-1}|=\pi$. \end{prop} Table~\ref{tab1} gives numerical results for different values of $1/2<\alpha<1$. Using (\ref{ine3}), (\ref{sint}), and (\ref{bint}) we can then find the number of eigenvalues of (\ref{fe3})--(\ref{fcon}). The intervals containing the $2\alpha$-th root of the smallest and largest eigenvalue are shown in the two columns on the right. \begin{table}[H] \caption{Numerical results for many different values of $\alpha$.
The intervals containing the $2\alpha$-th root of the smallest and largest eigenvalue are shown in the two columns on the right.} \centering \begin{tabular}{c c c c } \hline\hline \\[-2ex] $\alpha$ (Fractional order) & $2N^{\ast}$ (Number of eigenvalues) & $\tilde{I}_0$ &$\tilde{I}_{N^{\ast}-1}$ \\ [0.5ex] \hline 0.78 & 0 & -----& ----- \\ 0.80 & 2 &(3.82549,7.22593)&(3.82549,7.22593)\\ 0.82& 2 &(3.70445,7.04252)&(3.70445,7.04252)\\ 0.84&2 & (3.60076,6.88842) &(3.60076,6.88842) \\ 0.86&4 & (3.51148,6.75866) & (10.0058,13.253) \\ 0.88& 4 &(3.43428,6.64934) & (9.86441,13.0795) \\ 0.90& 8 & (3.36728,6.55734) & (22.5076,25.6977) \\ 0.92& 10 & (3.309,6.48013) & (28.678,31.8492) \\ 0.94& 18 & (3.25822,6.41567) & (53.7774,56.9349) \\ 0.96&32 & (3.21392,6.36226) &(97.6639,100.812) \\ 0.98&84 & (3.17528,6.31849) & (260.918,264.062) \\ 0.981& 90 &(3.17348,6.31653) & (279.762,282.905) \\ 0.982& 98 & (3.1717,6.3146) & (304.89,308.033) \\ 0.983& 104 & (3.16993,6.31268) & (323.731,326.873) \\ 0.984& 114 & (3.16817,6.31079) & (355.141,358.284) \\ 0.985& 124 & (3.16642,6.30891) & (386.55,389.693) \\ 0.989& 182 & (3.15955,6.30162) & (568.733,571.875) \\ 0.9898& 200 & (3.15819,6.3002) & (625.275,628.417) \\ [1ex] \hline \end{tabular} \label{tab1} \end{table} {\bf Remark:}\quad Observe that the lengths of these intervals, $\tilde{I}_0$ and $\tilde{I}_{N^{\ast}-1}$, approach $\pi$ as $\alpha \to 1^-$ (as expected in the classical case). In addition, a glance at \eqref{neg} shows that the asymptotic behavior of an eigenvalue, let's call it $\lambda_n (\alpha)$, contained in \eqref{neg}, for fixed $n$, is given by $$\lambda_n (\alpha) \sim \left (\frac{n\pi}{\sin (\frac{\pi}{2\alpha})}\right)^{2\alpha},$$ as $\alpha \to 1^-$. This corresponds exactly to the well-known classical asymptotic estimate $\lambda_n \sim n^2\,\pi^2$ as $n \to \infty$. \section{Conclusion} In this article, we gave an introduction to three distinct fractional Sturm-Liouville equations consisting of various compositions of Riemann-Liouville and Caputo fractional derivatives acting on an unknown function. It must be mentioned that the potential function is absent in these equations, for which the fundamental sets of solutions have been derived. Of these operators we distinguished one and solved a Dirichlet type problem completely, all the while showing that, in the limit as $\alpha\to 1^-$, we recover the two-term Sturm-Liouville problem in the case of Dirichlet boundary conditions. Our results might set the groundwork for many semi-analytical and numerical methods such as the Homotopy Perturbation Method (HPM) and the Variational Iteration Method (VIM) once a potential function is introduced, through which the eigenvalues and the eigenfunctions of a Sturm-Liouville problem are found via the fundamental solutions of the linear part of the equation. It is also possible to implement the classical {\it Variation of Parameters} method to obtain the general solution of a fractional Sturm-Liouville equation analytically in the presence of a potential function, though we have not done this here.
{ "timestamp": "2017-12-29T02:11:23", "yymm": "1712", "arxiv_id": "1712.09891", "language": "en", "url": "https://arxiv.org/abs/1712.09891", "abstract": "We introduce and present the general solution of three two-term fractional differential equations of mixed Caputo/Riemann Liouville type. We then solve a Dirichlet type Sturm-Liouville eigenvalue problem for a fractional differential equation derived from a special composition of a Caputo and a Riemann-Liouville operator on a finite interval where the boundary conditions are induced by evaluating Riemann-Liouville integrals at those end-points. For each $1/2<\\alpha<1$ it is shown that there is a finite number of real eigenvalues, an infinite number of non-real eigenvalues, that the number of such real eigenvalues grows without bound as $\\alpha \\to 1^-$, and that the fractional operator converges to an ordinary two term Sturm-Liouville operator as $\\alpha \\to 1^-$ with Dirichlet boundary conditions. Finally, two-sided estimates as to their location are provided as is their asymptotic behavior as a function of $\\alpha$.", "subjects": "Classical Analysis and ODEs (math.CA)", "title": "Fractional Sturm-Liouville eigenvalue problems, I", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9825575152637946, "lm_q2_score": 0.7217431943271999, "lm_q1q2_score": 0.7091541996766876 }
https://arxiv.org/abs/2209.01171
Aperiodicity of positive operators that increase the support of functions
Consider a positive operator $T$ on an $L^p$-space (or, more generally, a Banach lattice) which increases the support of functions in the sense that $supp(Tf) \supseteq supp{f}$ for every function $f \ge 0$. We show that this implies, under mild assumptions, that $T$ has no unimodular eigenvalues except for possibly the number $1$. This rules out periodic behaviour of any orbits of the powers of $T$, and thus enables us to prove convergence of those powers in many situations.For the proof we first perform a careful analysis of the action of lattice homomorphisms on the support of functions; then we split $T$ into an invertible and a weakly stable part, and apply the aforementioned analysis to the invertible part. An appropriate adaptation of this argument allows us to prove another version of our main result which is useful for the study of so-called irreducible operators.
\section{Introduction} \subsection*{Motivation} If a matrix $T \in \mathbb{R}^{d \times d}$ with spectral radius $\spr(T) = 1$ has only entries $\ge 0$ and all diagonal entries of $T$ are non-zero, then it follows from classical Perron--Frobenius theory that $1$ is the only eigenvalue of $T$ in the complex unit circle $\mathbb{T}$. The latter spectral property is often referred to as \emph{aperiodicity}. For linear operators $T$ that leave the positive cone of an infinite dimensional function space (or Banach lattice) $E$ invariant, the most obvious generalization of the assumption that all diagonal entries be non-zero is the assumption that $T \ge \varepsilon \id_E$ for some $\varepsilon > 0$. While this does indeed imply a similar result for the spectrum of $T$ (see Section~\ref{section:domination-of-id} for details), this observation is of limited use since the assumption $T \ge \varepsilon \id_E$ is extremely strong in infinite dimensions. \subsection*{Operators that increase the support of functions} It is therefore worthwhile to look for more prevalent properties which generalize the assumption that all diagonal entries of a matrix be non-zero. One such property on, say, a function space $L^p$ over a $\sigma$-finite measure space is as follows: one assumes that for every $0 \le f \in L^p$ the support $\supp(Tf)$ contains the support $\supp f$ up to a set of measure $0$. Here, the support of a function $f \in L^p(\Omega,\mu)$ means the set $\{\omega \in \Omega: \; f(\omega) \not= 0\}$; this set is defined only up to a null set (and thus, strictly speaking, it is not a set but an element of the \emph{measure algebra} of $(\Omega,\mu)$). Under mild assumptions this property implies that the \emph{point spectrum} $\specPnt(T)$ (rather than the spectrum) intersects the unit circle $\mathbb{T}$ at most in the point $1$: \begin{theorem} \label{thm:main-everywhere} Let $(\Omega,\mu)$ be a $\sigma$-finite measure space and $p \in [1,\infty)$. Let $T: L^p \to L^p$ be a positive linear operator on the complex-valued space $L^p := L^p(\Omega,\mu)$. Assume that $T$ is power bounded and that, for each $0 \le f \in L^p$, there exists an integer $n \ge 1$ such that $\supp(T^nf) \supseteq \supp(T^{n-1}f)$. Then $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$. \end{theorem} As indicated above, the condition $\supp(T^nf) \supseteq \supp(T^{n-1}f)$ is meant as an inclusion up to a null set (i.e., as an inclusion within the measure algebra). \emph{Power boundedness} of $T$ means that $\sup_{n \in \mathbb{N}_0} \norm{T^n} < \infty$, \emph{positivity} of $T$ means that $Tf \ge 0$ whenever $f \ge 0$, and the inequalities $\le$ and $\ge$ for functions are to be understood almost everywhere. The simplest case of the inclusion $\supp(T^nf) \supseteq \supp(T^{n-1}f)$ is of course the case $\supp(Tf) \supseteq \supp(f)$ for each $0 \le f \in L^p$; this latter property is, as explained before, an infinite-dimensional generalization of the property of a matrix $T \ge 0$ that all diagonal entries be non-zero, and it is much weaker than the assumption that $T \ge \varepsilon \id$ for some $\varepsilon > 0$. As in the matrix case, one may think of the conclusion $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$ in Theorem~\ref{thm:main-everywhere} as an \emph{aperiodicity} property, since it is equivalent to the assertion that the powers of $T$ do not have a periodic orbit with minimal period $\ge 2$.
This aperiodicity property explains the title of the paper, and we will employ it to prove various convergence theorems for the powers $T^n$ as $n \to \infty$. We remark that, at least when the spectral radius of $T$ satisfies $\spr(T) = 1$, the property $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$ is, in some parts of the literature, also called \emph{primitivity}; see for instance \cite[Section~6]{Grobler1995}. We will actually prove a more general version of Theorem~\ref{thm:main-everywhere} which also holds on Banach lattices (see Subsection~\ref{subsection:primitivity-1:point-spectrum}; there we will also show how to derive Theorem~\ref{thm:main-everywhere} from its Banach lattice version). The Banach lattice setting not only makes the result available for other function spaces; it is also a more natural setting for the proof, which makes heavy use of lattice theory and, in particular, includes a reduction to a lattice subspace which will, in general, not be an $L^p$-space even if we start on an $L^p$-space.

\subsection*{Irreducible operators and partially increasing support}
In some cases it will happen that the property $\supp(Tf) \supseteq \supp(f)$ is not satisfied for all $0 \le f \in L^p$, but only for those $f$ which are supported within a given set $S$ that is smaller than the underlying measure space. Still, we can draw essentially the same conclusion if we add the assumption that the operator $T$ be \emph{irreducible}, which means that, for each non-zero $0 \le f \in L^p$ and each non-zero linear functional $g \ge 0$ on $L^p$, there exists an integer $n \ge 0$ such that $\langle g, T^n f \rangle > 0$.

\begin{theorem} \label{thm:main-irred} Let $(\Omega,\mu)$ be a $\sigma$-finite measure space and $p \in [1,\infty)$. Let $T: L^p \to L^p$ be a positive linear operator on the complex-valued space $L^p = L^p(\Omega,\mu)$. Assume that $T$ is power bounded and irreducible and that there exists a measurable set $S \subseteq \Omega$ of non-zero measure such that $\supp(Tf) \supseteq \supp(f)$ for each $0 \le f \in L^p$ that satisfies $\supp(f) \subseteq S$. Then $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$. \end{theorem}

In contrast to Theorem~\ref{thm:main-everywhere} we do not allow for powers of $T$ in the inclusion $\supp(Tf) \supseteq \supp(f)$ here. We do not know what happens if we only assume $\supp(T^nf) \supseteq \supp(T^{n-1}f)$ in the situation of Theorem~\ref{thm:main-irred}. Again, we will show Theorem~\ref{thm:main-irred} in the more general setting of Banach lattices; see Subsection~\ref{subsection:conditions-for-primitivity-2:point-spectrum}, where we also prove Theorem~\ref{thm:main-irred} as a consequence of its Banach lattice version. As is the case for Theorem~\ref{thm:main-everywhere}, Theorem~\ref{thm:main-irred} is also a generalization of a well-known result about matrices: if $0 \le T \in \mathbb{R}^{d \times d}$ has spectral radius $\spr(T) = 1$ and is irreducible, and if at least one diagonal entry of $T$ is non-zero, then $1$ is the only eigenvalue of $T$ in the complex unit circle; see for instance \cite[Corollary~1 to Theorem~I.6.5 on pp.\,22--23]{Schaefer1974}. Another infinite-dimensional generalization of this finite-dimensional result can be found in \cite[Theorem~6.2]{Grobler1995} and is applicable to operators that belong to an operator ideal which admits a continuous trace.
However, such operators always have a compact power, which severely restricts the applicability of this result; in contrast, Theorem~\ref{thm:main-irred} does not require any such compactness condition.

\subsection*{Organization of the article}
In Sections~\ref{section:conditions-for-primitivity-1} and~\ref{section:conditions-for-primitivity-2} we prove various versions of Theorems~\ref{thm:main-everywhere} and~\ref{thm:main-irred}, respectively. In Section~\ref{section:domination-of-id} we study how the assumption $T \ge \varepsilon \id$, and versions thereof, affects the spectrum of $T$. All three sections contain a number of applications of our spectral theoretic results to the analysis of the long-term behaviour of the powers of $T$.

\subsection*{Related literature}
Results that use different kinds of support expansion properties for positive operators or operator semigroups in order to obtain spectral or asymptotic results occur in the literature on various occasions. Apart from finite-dimensional theorems such as \cite[Corollary~1 to Theorem~I.6.5 on pp.\,22--23]{Schaefer1974} we mention results that rely on different kinds of \emph{positivity improving properties} (e.g.\ \cite[Theorem~4.3]{Gerlach2013}, \cite{GlueckWeber2020}, \cite[Theorem~6.1]{Grobler1995}, \cite[Theorem~2.2]{KellerLenzVogtWojciechowski2015}, \cite{MakarowWeber2000}, \cite[Corollary~2 on p.\,249]{Rudnicki1995}), support overlapping operators (see e.g.\ \cite{BartoszekBrown1998}, \cite{Lin2000}, \cite[Corollary~1 on p.\,248]{Rudnicki1995}), operators that satisfy lower bound properties (such as e.g.\ in \cite{Ding2003, Lasota1982, Emelyanov2004a, Zalewska-Mitura1994} and \cite[Section~4]{GlueckWolff2019}) and, more loosely related, $C_0$-semigroups on sequence spaces \cite{Davies2005, Keicher2006, Wolff2008} (see in particular the arguments used in the proofs of \cite[Proposition~3.5]{Keicher2006} and \cite[Proposition~2.3]{Wolff2008}).

\subsection*{Principal ideals and quasi-interior points in Banach lattices}
Throughout the paper we assume the reader to be familiar with the basic theory of real and complex Banach lattices; standard references for this theory are e.g.\ the monographs \cite{AliprantisBurkinshaw2006, Meyer-Nieberg1991, Schaefer1974, Zaanen1983}. Let $E$ be a real or complex Banach lattice; we denote its cone by $E_+$. For every vector $u \in E_+$ we denote the smallest ideal in $E$ that contains $u$ by $E_u$; it is called the \emph{principal ideal generated by $u$}, and one can easily check that it is given by the formula
\begin{align*}
	E_u = \{f \in E: \, \exists c \ge 0: \; \modulus{f} \le cu\}.
\end{align*}
The vector $u \in E_+$ is called a \emph{quasi-interior point} (of $E_+$) if the principal ideal $E_u$ is dense in $E$. If $E$ is an $L^p$-space over a $\sigma$-finite measure space for some $p \in [1,\infty)$, then a function $0 \le u \in L^p$ is a quasi-interior point if and only if it is strictly positive almost everywhere. For elements $v,w \ge 0$ in a Banach lattice $E$ we will often consider the property
\begin{align*}
	\overline{E_w} \supseteq \overline{E_v},
\end{align*}
where the bar denotes the closure. Note that the set $\overline{E_w}$ is the smallest closed ideal in $E$ that contains $w$. In the $L^p$-setting (over a $\sigma$-finite measure space and for $p \in [1,\infty)$), it is not difficult to check that the inclusion $\overline{E_w} \supseteq \overline{E_v}$ is equivalent to the inclusion $\supp(w) \supseteq \supp(v)$, which is again understood to hold up to a set of measure $0$.
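For the reader's convenience we briefly sketch one way to see this equivalence. For $0 \le w \in L^p$ one readily checks that
\begin{align*}
	\overline{E_w} = \{f \in L^p: \; f = 0 \text{ almost everywhere outside } \supp(w)\},
\end{align*}
so the closed principal ideal $\overline{E_w}$ is completely determined by the support of $w$; the asserted equivalence follows at once from this description.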
Hence, the inclusion between the closed ideals generated by $w$ and $v$ will serve as an abstract Banach lattice theoretic version of the assumption that the support of $w$ contains the support of $v$. However, it is important to point out that things become more complicated if we consider spaces of continuous functions rather than $L^p$-spaces; see the discussion before Example~\ref{ex:weakly-expanding}, as well as Theorem~\ref{thm:everywhere-c-k} and Lemma~\ref{lem:ideal-inclusion-c-k}.

\subsection*{Notational conventions}
We use the conventions $\mathbb{N} := \{1,2,3, \dots\}$ and $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$, as well as $\mathbb{T} := \{\lambda \in \mathbb{C}: \, \modulus{\lambda} = 1\}$.

\section{Operators that increase the support of functions}
\label{section:conditions-for-primitivity-1}

\subsection{The point spectrum}
\label{subsection:primitivity-1:point-spectrum}

The main result of this subsection, and also of the entire Section~\ref{section:conditions-for-primitivity-1}, is the following theorem, which is an abstract version of Theorem~\ref{thm:main-everywhere}.

\begin{theorem} \label{thm:main-everywhere-bl} Let $T: E \to E$ be a positive and weakly almost periodic linear operator on a complex Banach lattice $E$. Assume that for each $f \in E_+$ there exists an integer $n \ge 1$ such that $\overline{E_{T^nf}} \supseteq \overline{E_{T^{n-1}f}}$. Then $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$. \end{theorem}

The property \emph{weak almost periodicity} means that for each $f \in E$ the orbit $\{T^nf: \, n \ge 0\}$ is relatively compact with respect to the weak topology on $E$. By the uniform boundedness principle this property is stronger than being power bounded. Before we proceed to the proof of Theorem~\ref{thm:main-everywhere-bl} we show in the following corollary that the weak almost periodicity assumption can be replaced with the weaker property of power boundedness if the Banach lattice is a so-called \emph{KB-space}. A KB-space is a Banach lattice $E$ in which every increasing norm bounded sequence in the positive cone converges in norm. For instance, every $L^p$-space for $p \in [1,\infty)$ is a KB-space.

\begin{corollary} \label{cor:main-everywhere-bl-kb} Let $T: E \to E$ be a positive and power-bounded linear operator on a KB-space $E$. Assume that for each $f \in E_+$ there exists an integer $n \ge 1$ such that $\overline{E_{T^nf}} \supseteq \overline{E_{T^{n-1}f}}$. Then $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$. \end{corollary}

\begin{proof} Let $\lambda \in \mathbb{T}$ be an eigenvalue of $T$ and let $f \in E$ be an associated eigenvector; we need to show that $\lambda=1$. Observe that $\modulus{f} = \modulus{Tf} \le T \modulus{f}$. By iterating this inequality we see that the sequence $(T^n \modulus{f})$ is increasing. Due to the power boundedness of $T$, the sequence is also norm bounded, so it converges to a point $h \in E_+$ as $E$ is a KB-space. Obviously, $h$ is a fixed point of $T$. We now consider the subspace $F := \overline{E_h}$ of $E$, which is a Banach lattice in its own right. The space $F$ is invariant under $T$ and contains the eigenvector $f$. The restriction $T\restricted{F}: F \to F$ still satisfies the assumptions of the corollary. Moreover, the restriction is a weakly almost periodic operator on $F$.
Indeed, the \emph{order interval}
\begin{align*}
	[0,h] := \{f \in E: \, 0 \le f \le h\}
\end{align*}
is weakly compact%
\footnote{ Indeed, as is for instance explained after Definition~2.4.11 on p.\,92 of \cite{Meyer-Nieberg1991}, every KB-space has \emph{order continuous norm} (see our discussion before Corollary~\ref{cor:irred-bl-oc} for a definition of this notion), and having order continuous norm is equivalent to all order intervals being weakly compact (see \cite[Theorem~2.4.2 on p.\,86]{Meyer-Nieberg1991} or \cite[Theorem~II.5.10 on p.\,89]{Schaefer1974}). }
and invariant under $T$, so the orbit of every $f \in [0,h]$ under $T$ is relatively weakly compact. Since the span of $[0,h]$ is dense in $F$ and $T$ is power bounded, it thus follows that $T\restricted{F}$ is indeed weakly almost periodic \cite[Corollary~A.5(c) on p.\,514]{Engel2000}. So $T\restricted{F}$ satisfies all assumptions of Theorem~\ref{thm:main-everywhere-bl}. Hence, the theorem implies that $\lambda = 1$. \end{proof}

We do not know whether the assertion of Theorem~\ref{thm:main-everywhere-bl} remains true on general Banach lattices if $T$ is only power bounded rather than weakly almost periodic. Next, observe that the corollary immediately gives us the first theorem from the introduction as a special case:

\begin{proof}[Proof of Theorem~\ref{thm:main-everywhere}] Since $p \in [1,\infty)$, the Banach lattice $E := L^p$ is a KB-space. Moreover, as explained at the end of the introduction, the assumption on the supports in Theorem~\ref{thm:main-everywhere} is the same as the inclusion $\overline{E_{T^nf}} \supseteq \overline{E_{T^{n-1}f}}$. So all assumptions of Corollary~\ref{cor:main-everywhere-bl-kb} are satisfied. \end{proof}

Let us now turn to the proof of Theorem~\ref{thm:main-everywhere-bl}. We first show a few properties of closed ideals generated by single vectors in the following three lemmas.

\begin{lemma} \label{lem:ideal-inclusion} Let $E$ be a real or complex Banach lattice and let $f,g \in E_+$. The following assertions are equivalent: \begin{enumerate}[label=\upshape(\alph*)] \item\label{lem:ideal-inclusion:item:inclusion} $\overline{E_f} \subseteq \overline{E_g}$. \item\label{lem:ideal-inclusion:item:element} $f \in \overline{E_g}$. \item\label{lem:ideal-inclusion:item:approx} $\lim_{t \to \infty} f \land (tg) = f$ (where the limit is to be understood in norm). \item\label{lem:ideal-inclusion:item:small} $\norm{(g-sf)^-} = o(s)$ as $s \downarrow 0$. \end{enumerate} \end{lemma}

\begin{proof} \impliesProof{lem:ideal-inclusion:item:inclusion}{lem:ideal-inclusion:item:element} This implication is obvious.

\impliesProof{lem:ideal-inclusion:item:element}{lem:ideal-inclusion:item:inclusion} This holds since $\overline{E_g}$ is a closed ideal and $\overline{E_f}$ is the smallest closed ideal that contains $f$.

\impliesProof{lem:ideal-inclusion:item:element}{lem:ideal-inclusion:item:approx} Due to~\ref{lem:ideal-inclusion:item:element} there exists a sequence of numbers $(c_n)$ in $[0,\infty)$ and a sequence $(h_n)$ in $E$ which converges to $f$ and which satisfies $\modulus{h_n} \le c_n g$ for each index $n$. By replacing each vector $h_n$ with the positive part of its real part we may, and shall, assume that $0 \le h_n \le c_n g$ for each $n$. Let $\varepsilon > 0$. As $(f \land h_n)$ converges to $f$, there exists an index $n_0$ such that $\norm{f - f \land h_{n_0}} \le \varepsilon$.
For $t \ge c_{n_0}$ we have \begin{align*} 0 \le f \land h_{n_0} \le f \land (c_{n_0} g) \ \le f \land (t g) \le f, \end{align*} so $\norm{f - f \land (t g)} \le \varepsilon$. \impliesProof{lem:ideal-inclusion:item:approx}{lem:ideal-inclusion:item:element} If~\ref{lem:ideal-inclusion:item:approx} holds, then $\big( f \land (ng) \big)$ is a sequence in $E_g$ that converges to $f$. \equivalentProof{lem:ideal-inclusion:item:element}{lem:ideal-inclusion:item:small} A proof of this equivalence can, for instance, be found in \cite[Proposition~4.6]{Glueck2018}. \end{proof} \begin{lemma} \label{lem:closed-ideals} Let $E$ be a real or complex Banach lattice and let $f,g,h \in E_+$. The following assertions hold: \begin{enumerate}[label=\upshape(\alph*)] \item\label{lem:closed-ideals:item:inf} $\overline{E_{f \land g}} = \overline{E_f} \cap \overline{E_g}$. \item\label{lem:closed-ideals:item:sup} $\overline{E_{f \lor g}} = \overline{E_{f+g}} = \overline{E_f} + \overline{E_g}$. \item\label{lem:closed-ideals:item:disjoint} If $f$ is disjoint to $h$ and $\overline{E_f} \subseteq \overline{E_{g+h}}$, then actually $\overline{E_f} \subseteq \overline{E_g}$. \end{enumerate} \end{lemma} \begin{proof} \ref{lem:closed-ideals:item:inf} We first observe that, for two arbitrary ideals $I,J \subseteq E$, we have $\overline{I \cap J} = \overline{I} \cap \overline{J}$. Indeed, the inclusion ``$\subseteq$'' is obvious. If conversely $0 \le x \in \overline{I} \cap \overline{J}$, then there exist sequences $(f_n)_{n \in \mathbb{N}} \subseteq I$ and $(g_n)_{n\in\mathbb{N}} \subseteq J$ which both converge to $x$. After replacing all vectors $f_n$ and $g_n$ with the positive parts of their real parts, we may assume that $f_n,g_n \in E_+$ for all $n$. Now we have $x = \lim_{n \to \infty} f_n \land g_n \in \overline{I \cap J}$. Since $\overline{I} \cap \overline{J}$ is itself an ideal, every vector in $\overline{I} \cap \overline{J}$ is a linear combination of positive vectors in $\overline{I} \cap \overline{J}$, so we have also proved the inclusion ``$\supseteq$''. It is easy to see that $E_{f\land g} = E_f \cap E_g$; thus it follows from what was said above that $\overline{E_{f \land g}} = \overline{E_f \cap E_g} = \overline{E_f} \cap \overline{E_g}$. \ref{lem:closed-ideals:item:sup} We have $E_{f\lor g} = E_{f+g} = E_f + E_g$, where the inclusion $E_{f+g} \subseteq E_f + E_g$ follows from the Riesz decomposition property of vector lattices \cite[Propositions~II.1.6 and~II.11.2]{Schaefer1974}. Hence, we conclude that \begin{align*} \overline{E_{f\lor g}} = \overline{E_{f+g}} = \overline{E_f + E_g} = \overline{E_f} + \overline{E_g}, \end{align*} where the last equality follows from the fact that the sum of two closed ideals in a Banach lattice is again a closed ideal (this result can be found in \cite[Proposition~1.2.2]{Meyer-Nieberg1991} for real Banach lattices; for complex Banach lattices it is a simple consequence of the real case). \ref{lem:closed-ideals:item:disjoint} For $t \ge 0$ we have \begin{align*} 0 \le f \land \big(t(g + h )\big) \le f \land (tg) + f \land (th) = f \land (tg) \le f; \end{align*} for the second inequality we used that the infimum operation $\land$ is, when restricted to the positive cone, subadditive in each component \cite[Corollary to Proposition~II.1.6 on p.\,53]{Schaefer1974}. As $\overline{E_f} \subseteq \overline{E_{g+h}}$, Lemma~\ref{lem:ideal-inclusion} gives us that $f \land \big(t(g + h )\big) \to f$ as $t \to \infty$. Thus, also $f \land (tg) \to f$ as $t \to \infty$. 
By applying Lemma~\ref{lem:ideal-inclusion} once again, we see that $\overline{E_f} \subseteq \overline{E_g}$, as claimed. \end{proof}

We also need the following simple observation about how positive operators preserve inclusions between closed ideals generated by single elements.

\begin{lemma} \label{lem:ideal-unter-op} Let $E,F$ be Banach lattices over the same field. \begin{enumerate}[label=\upshape(\alph*)] \item\label{lem:ideal-unter-op:item:general} Let $T: E \to F$ be a positive linear operator and let $f,g \in E_+$. If $\overline{E_f} \subseteq \overline{E_g}$, then $\overline{F_{Tf}} \subseteq \overline{F_{Tg}}$. \item\label{lem:ideal-unter-op:item:higher-powers} Let $T: E \to E$ be a positive linear operator, let $f \in E_+$, and assume that $\overline{E_{T^nf}} \supseteq \overline{E_{T^{n-1}f}}$ for some integer $n \ge 1$. Then even $\overline{E_{T^mf}} \supseteq \overline{E_{T^{m-1}f}}$ for all integers $m \ge n$. \end{enumerate} \end{lemma}

\begin{proof} \ref{lem:ideal-unter-op:item:general} One has $Tf \in T \overline{E_g} \subseteq \overline{TE_g} \subseteq \overline{F_{Tg}}$. This implies the claim due to Lemma~\ref{lem:ideal-inclusion}.

\ref{lem:ideal-unter-op:item:higher-powers} For $m = n+1$ the claim follows from~\ref{lem:ideal-unter-op:item:general}, and for general $m \ge n$ the claim thus follows by induction. \end{proof}

Now we first prove a version of Theorem~\ref{thm:main-everywhere-bl} for lattice homomorphisms before we actually show the theorem itself.

\begin{theorem} \label{thm:lattice-homomorphism} Let $T: E \to E$ be a linear lattice homomorphism on a complex Banach lattice $E$. Assume that for each $f \in E_+$ there exists an integer $n \ge 1$ such that $\overline{E_{T^nf}} \supseteq \overline{E_{T^{n-1}f}}$. Then $\specPnt(T) \subseteq [0,\infty)$. \end{theorem}

\begin{proof} Suppose for a contradiction that $T$ has an eigenvalue which is not contained in $[0,\infty)$. The point spectrum of every lattice homomorphism is \emph{cyclic}, i.e., whenever $re^{i\theta}$ is an eigenvalue (for some $r \ge 0$ and $\theta \in [-\pi,\pi)$), then so is $re^{in\theta}$ for each $n \in \mathbb{Z}$ \cite[Corollary~2 to Proposition~V.4.2 on p.\,324]{Schaefer1974}. Thus, we can find a non-zero eigenvalue $\lambda$ of $T$ whose real part satisfies $\re \lambda \le 0$; and since the point spectrum of $T$ is invariant under complex conjugation, we may moreover assume that $\im \lambda \ge 0$. Furthermore, there is no loss of generality in assuming that $\modulus{\lambda} = 1$ (otherwise, we multiply $T$ by an appropriate strictly positive scalar). So let $\lambda = e^{i\theta} = \cos \theta + i \sin \theta$, where $\cos \theta \le 0$ and $\sin \theta \ge 0$. Now, let $0 \not= z = x+iy$ be an eigenvector of $T$ for the eigenvalue $e^{i\theta}$, where $x$ and $y$ are contained in the real part of $E$. It follows from the assumptions of the theorem and from Lemma~\ref{lem:ideal-unter-op}\ref{lem:ideal-unter-op:item:higher-powers} that there exists an integer $n \ge 1$ such that $\overline{E_{T^nf}} \supseteq \overline{E_{T^{n-1}f}}$ for each $f \in \{x^+,x^-,y^+,y^-\}$. Let us define $v := T^{n-1}x$ and $w := T^{n-1}y$. From $T(v + i w) = T T^{n-1}z = e^{i\theta}T^{n-1}z = e^{i\theta}(v + iw)$ we conclude that
\begin{align*}
	Tv = \cos \theta \; v - \sin \theta \; w \qquad \text{and} \qquad Tw = \cos \theta \; w + \sin \theta \; v.
\end{align*}
Those two equalities, along with $\cos \theta \le 0$ and $\sin \theta \ge 0$ and the fact that $T$ is a lattice homomorphism, yield the estimates%
\footnote{We use the fact that $(a+b)^+ \le a^+ + b^+$ and $(a+b)^- \le a^- + b^-$ for all elements $a,b$ of a vector lattice.}
\begin{align}
	\label{eq:lattice-homomorphism:support-estimate}
	\begin{split}
		& T(v^+) = (Tv)^+ \le -\cos \theta \; v^- + \sin \theta \; w^- \le v^- + w^-, \\
		& T(v^-) = (Tv)^- \le -\cos \theta \; v^+ + \sin \theta \; w^+ \le v^+ + w^+, \\
		& T(w^+) = (Tw)^+ \le - \cos \theta \; w^- + \sin \theta \; v^+ \le w^- + v^+, \\
		& T(w^-) = (Tw)^- \le - \cos \theta \; w^+ + \sin \theta \; v^- \le w^+ + v^-.
	\end{split}
\end{align}
By using the first of these four estimates, and again that $T$ is a lattice homomorphism, one concludes
\begin{align*}
	\overline{E_{v^+}} = \overline{E_{T^{n-1}(x^+)}} \subseteq \overline{E_{T^n(x^+)}} = \overline{E_{T(v^+)}} \subseteq \overline{E_{v^- + w^-}}.
\end{align*}
As $v^+$ is disjoint to $v^-$, this implies
\begin{align*}
	\overline{E_{v^+}} \subseteq \overline{E_{w^-}},
\end{align*}
see Lemma~\ref{lem:closed-ideals}\ref{lem:closed-ideals:item:disjoint}. By the same reasoning, the second, third, and fourth estimate in~\eqref{eq:lattice-homomorphism:support-estimate} give us
\begin{align*}
	& \overline{E_{v^-}} \subseteq \overline{E_{T(v^-)}} \subseteq \overline{E_{v^+ + w^+}}, \qquad \text{hence} \qquad \overline{E_{v^-}} \subseteq \overline{E_{w^+}}, \\
	%
	& \overline{E_{w^+}} \subseteq \overline{E_{T(w^+)}} \subseteq \overline{E_{w^- + v^+}}, \qquad \text{hence} \qquad \overline{E_{w^+}} \subseteq \overline{E_{v^+}}, \\
	%
	\text{and} \qquad & \overline{E_{w^-}} \subseteq \overline{E_{T(w^-)}} \subseteq \overline{E_{w^+ + v^-}}, \qquad \text{hence} \qquad \overline{E_{w^-}} \subseteq \overline{E_{v^-}}.
\end{align*}
Now we can see that $\overline{E_{v^+}} \subseteq \overline{E_{w^-}} \subseteq \overline{E_{v^-}}$, so it follows from Lemma~\ref{lem:closed-ideals}\ref{lem:closed-ideals:item:disjoint} that $\overline{E_{v^+}} \subseteq \overline{E_0} = \{0\}$, so $v^+ = 0$. Similarly, we obtain $v^- = 0$, $w^+ = 0$ and $w^- = 0$. Thus, $0 = T^{n-1}z = e^{i(n-1)\theta}z$, which contradicts $z \not= 0$. \end{proof}

By employing the celebrated Jacobs--de Leeuw--Glicksberg decomposition theory, it is now easy to derive Theorem~\ref{thm:main-everywhere-bl} from Theorem~\ref{thm:lattice-homomorphism}.

\begin{proof}[Proof of Theorem~\ref{thm:main-everywhere-bl}] Since $T$ is weakly almost periodic we can employ the Jacobs--de Leeuw--Glicksberg decomposition presented, for instance, in \cite[Section~2.4]{Krengel1985}. It gives us a bounded linear projection $P$ on $E$ with the following properties: \begin{enumerate}[label=(\alph*)] \item The projection $P$ is positive. Hence, as follows from \cite[Proposition~III.11.5]{Schaefer1974}, the range $PE$ is a Banach lattice in its own right with respect to an equivalent norm with positive cone $(PE)_+ := E_+ \cap PE$. \item The projection $P$ commutes with $T$ and thus, $T$ leaves both the kernel $\ker P$ and the range $PE$ invariant. The restriction $T\restricted{PE}$ is a bijection from $PE$ to $PE$. Moreover, both $T\restricted{PE}$ and its inverse $(T\restricted{PE})^{-1}$ are positive. Thus, $T\restricted{PE}$ is a lattice isomorphism on $PE$. \item The space $PE$ is the closed span of all eigenvectors of $T$ that belong to unimodular eigenvalues.
\end{enumerate}
Due to property~(c), it suffices to prove that $T\restricted{PE}$ does not have any eigenvalues in $\mathbb{T} \setminus \{1\}$. To see this, we only have to show that $T\restricted{PE}$ satisfies the assumptions of Theorem~\ref{thm:lattice-homomorphism}. According to~(a), $PE$ is a Banach lattice, and according to~(b), $T\restricted{PE}$ is a lattice homomorphism. So finally let $f \in (PE)_+$. By assumption there is an integer $n \ge 1$ such that $\overline{E_{T^nf}} \supseteq \overline{E_{T^{n-1}f}} \ni T^{n-1}f$. By applying Lemma~\ref{lem:ideal-unter-op}\ref{lem:ideal-unter-op:item:general} to the positive operator $P: E \to PE$ we obtain $\overline{(PE)_{T^nf}} \supseteq \overline{(PE)_{T^{n-1}f}}$; here we used that $P$ is a projection, and thus $PT^m f = T^m f$ for all $m \ge 0$. So Theorem~\ref{thm:lattice-homomorphism} is indeed applicable to the operator $T\restricted{PE}$, and we conclude that $\specPnt(T\restricted{PE}) \cap \mathbb{T} \subseteq \{1\}$. \end{proof}

In order to derive Theorem~\ref{thm:main-everywhere} from Theorem~\ref{thm:main-everywhere-bl} we used that the inclusion $\overline{E_g} \supseteq \overline{E_f}$ is equivalent to the almost everywhere inclusion $\supp(g) \supseteq \supp(f)$ if $E$ is an $L^p$-space over a $\sigma$-finite measure space and $p \in [1,\infty)$. It is important to note that the situation is very different on spaces of continuous functions. Recall that, for a compact Hausdorff space $K$ and a continuous mapping $f: K \to \mathbb{C}$, the \emph{support} of $f$ is defined as
\begin{align*}
	\supp(f) := \overline{\{x \in K: \; f(x) \not= 0\}}.
\end{align*}
Now consider for instance the space $K = [-1,1]$ and the functions $u,\one \in E := \Cont([-1,1])$, where $\one$ denotes the constant function with value $1$ and $u(x) = 1 - \modulus{x}$ for all $x \in [-1,1]$. Then both functions $u$ and $\one$ have support $[-1,1]$. However, we have $E_{\one} = E$ and thus $\overline{E_{\one}} = E$, while
\begin{align*}
	\overline{E_u} = C_0((-1,1)) := \{f \in E: \; f(-1) = f(1) = 0\} \subsetneq E.
\end{align*}
For this reason, we cannot expect Theorem~\ref{thm:main-everywhere} to hold on spaces of continuous functions%
\footnote{We note that, on spaces of continuous functions, inclusions between supports of functions need to be treated as set inclusions in the usual sense; it is not possible to neglect any sets of measure $0$, since there is no canonical choice of a measure.}
(but see, however, Theorem~\ref{thm:everywhere-c-k} and Example~\ref{exa:nagler} below). Here is a concrete counterexample:

\begin{example} \label{ex:weakly-expanding} Let $E = \Cont([-1,1])$. There exists a positive finite rank operator $T \in \mathcal{L}(E)$ which satisfies $\spr(T) = \norm{T} = 1$ and which has the following properties: \begin{enumerate}[label=(\alph*)] \item One has $\supp(Tf) = [-1,1]$ for every non-zero $f \in E_+$; in particular, $\supp(Tf) \supseteq \supp(f)$ for all $f \in E_+$. \item The point spectrum of $T$ contains the number $-1$. \end{enumerate} Indeed, let $u,v,w \in E_+$ be given by
\begin{align*}
	u(x) = 1 - \modulus{x}, \qquad v(x) = |x| \one_{[-1,0]}(x), \qquad w(x) = |x| \one_{[0,1]}(x)
\end{align*}
for all $x \in [-1,1]$. We define $T \in \mathcal{L}(E)$ by
\begin{align*}
	Tf = \frac{1}{2}\int_{[-1,1]} f \dx x \cdot u + f(1) \, v + f(-1) \, w
\end{align*}
for all $f \in E$. Clearly, $T$ has finite rank, is positive, and satisfies $T \one = \one$; the latter two properties in turn imply $\spr(T) = \norm{T} = 1$.
Since $\supp(u) = [-1,1]$, it follows that $\supp(Tf) = [-1,1]$ for every non-zero $f \ge 0$; this proves~(a). On the other hand, $T(v-w) = -v+w$ (indeed, $\int_{[-1,1]} (v-w) \dx x = 0$, $(v-w)(1) = -1$ and $(v-w)(-1) = 1$), so $-1$ is an eigenvalue of $T$; this proves~(b). \end{example}

\begin{remark} In \cite[Exercise~8(c) on p.\,353]{Schaefer1974} the following assertion is claimed: Let $E$ be a Banach lattice, let $T \in \mathcal{L}(E)$ be positive, say, for the sake of convenience, with spectral radius $1$, and assume that
\begin{align*}
	\sup_{\lambda \in (1,\infty)} (\lambda - 1) \norm{(\lambda-T)^{-1}} < \infty.
\end{align*}
If for every non-zero $f \in E_+$ there exists an integer $n \ge 1$ such that $T^nf$ is a \emph{weak order unit} of $E$ (see \cite[p.\,55]{Schaefer1974} for a definition of this notion), then $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$. Example~\ref{ex:weakly-expanding} shows that this claim is not correct. \end{remark}

The problem in Example~\ref{ex:weakly-expanding} is that the support of a continuous function $f$, defined as the closure of the set where $f$ does not vanish, is not well-suited to describe the closed ideal in $\Cont(K)$ generated by $f$. However, the situation becomes much better if we consider the \emph{open support} of functions. For a topological space $X$ and a continuous scalar-valued function $f$ on $X$, the open support is the (open) set
\begin{align*}
	\opensupp(f) := \{x \in X: \; f(x) \not= 0\}.
\end{align*}
On compact spaces, inclusions of the type $\overline{E_f} \supseteq \overline{E_g}$ can be characterized by means of the open supports of $f$ and $g$ (see Lemma~\ref{lem:ideal-inclusion-c-k} below); thus we get the following version of Theorem~\ref{thm:main-everywhere-bl} on $\Cont(K)$.

\begin{theorem} \label{thm:everywhere-c-k} Let $K$ be a compact Hausdorff space and let $T: \Cont(K) \to \Cont(K)$ be a positive linear operator which is weakly almost periodic. Assume that, for each $0 \le f \in \Cont(K)$, there exists an integer $n \ge 1$ such that $\opensupp(T^nf) \supseteq \opensupp(T^{n-1}f)$. Then $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$. \end{theorem}

The theorem is an immediate consequence of Theorem~\ref{thm:main-everywhere-bl} and the following lemma.

\begin{lemma} \label{lem:ideal-inclusion-c-k} Let $K$ be a compact Hausdorff space and let $0 \le f,g \in E := \Cont(K)$. Then $\overline{E_f} \subseteq \overline{E_g}$ if and only if $\opensupp(f) \subseteq \opensupp(g)$. \end{lemma}

\begin{proof} ``$\Rightarrow$'' According to Lemma~\ref{lem:ideal-inclusion} we have $f \land (tg) \to f$ in norm as $t \to \infty$. Let $x \in \opensupp(f)$. Then $f(x) > 0$, so we conclude from $f(x) \land (tg(x)) \to f(x)$ as $t \to \infty$ that $g(x) > 0$, i.e., $x \in \opensupp(g)$.

``$\Leftarrow$'' Consider the increasing net $\big(f \land (tg)\big)_{t \in (0,\infty)}$ in $\Cont(K)$. It follows from the inclusion $\opensupp(f) \subseteq \opensupp(g)$ that this net converges pointwise to the continuous function $f$. Due to Dini's theorem the convergence is automatically uniform, so Lemma~\ref{lem:ideal-inclusion} implies that $\overline{E_f} \subseteq \overline{E_g}$. \end{proof}

Very easy examples (for instance on the Euclidean space $\mathbb{R}$) show that Lemma~\ref{lem:ideal-inclusion-c-k} does not remain true if we replace the compact space $K$ with a merely locally compact space. We close this subsection by showing that Theorem~\ref{thm:main-everywhere-bl} can be interpreted as a strong generalization of the following result of Nagler \cite[Theorem~1]{Nagler2015} for finite rank operators.
For a vector $x$ in a Banach space $E$ and a functional $\alpha \in E'$ we use the notation $x \otimes \alpha$ for the linear operator $E \to E$ given by
\begin{align*}
	(x \otimes \alpha) f := \langle \alpha, f \rangle x
\end{align*}
for all $f \in E$.

\begin{example}[Nagler] \label{exa:nagler} Let $E$ be a complex Banach lattice, let $e_1, \dots, e_n \in E_+$ and $\alpha_1, \dots, \alpha_n \in E'_+$, and set $e := e_1 + \dots + e_n$. Assume that $\langle \alpha_j, e \rangle = 1$ and $\langle \alpha_j, e_j \rangle > 0$ for each $j \in \{1, \dots, n\}$. Then the finite rank operator
\begin{align*}
	T := e_1 \otimes \alpha_1 + \dots + e_n \otimes \alpha_n : E \to E
\end{align*}
is power bounded, has $e$ as a fixed point, and satisfies $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$. \end{example}

\begin{proof} One readily checks that $Te = e$. Thus, the orbits of $e_1, \dots, e_n$ under the powers of $T$ are contained in $[0,e]$; and since $e_1, \dots, e_n$ span the range of $T$ it follows that $T$ is power bounded. As $T$ has finite rank, we conclude in particular that $T$ is weakly almost periodic. In order to conclude $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$ we show that $\overline{E_{T^2f}} \supseteq \overline{E_{Tf}}$ for every $f \in E_+$, i.e., that the assumption of Theorem~\ref{thm:main-everywhere-bl} is satisfied with the integer occurring there chosen as $2$. Fix $f \in E_+$ and let
\begin{align*}
	A := \big\{j \in \{1, \dots, n\} : \; \langle \alpha_j, f \rangle > 0 \big\}.
\end{align*}
With the notation $e_A := \sum_{j \in A} e_j$ it follows that $c e_A \le Tf \le d e_A$ for some numbers $c,d > 0$, so $E_{Tf} = E_{e_A}$. By letting $a > 0$ be the smallest of the numbers $\langle \alpha_1, e_1 \rangle, \dots, \langle \alpha_n, e_n \rangle$, we finally get
\begin{align*}
	T^2 f \ge c T e_A \ge c \sum_{j \in A} Te_j \ge ca \sum_{j \in A} e_j = ca e_A,
\end{align*}
so $E_{T^2f} \supseteq E_{e_A} = E_{Tf}$, and hence $\overline{E_{T^2f}} \supseteq \overline{E_{Tf}}$. \end{proof}

Nagler considered Banach function spaces $E$ in \cite[Theorem~1]{Nagler2015} and assumed $e$ to be the constant function with value $1$, but this assumption is not essential for the result. (In \cite[Section~1.2]{Nagler2015} the assumption $\norm{\alpha_j} = 1$ for each $j \in \{1, \dots, n\}$ is also listed; but this assumption is, in conjunction with the assumption that $\langle \alpha_j, e \rangle = 1$ for the constant $1$-function $e$, rather restrictive, and is not needed for the result to hold.)

\begin{remark} It is illuminating to compare Example~\ref{exa:nagler} to the counterexample in Example~\ref{ex:weakly-expanding}. The operator $T$ in Example~\ref{ex:weakly-expanding} can also be written in the form
\begin{align*}
	T = e_1 \otimes \alpha_1 + e_2 \otimes \alpha_2 + e_3 \otimes \alpha_3,
\end{align*}
where $e_1 = u$, $e_2 = v$, $e_3 = w$ (and thus $e = e_1+e_2+e_3 = \one$), and where $\alpha_1$ is the integration against $1/2$ times the Lebesgue measure on $[-1,1]$, $\alpha_2$ is the point evaluation at $1$, and $\alpha_3$ is the point evaluation at $-1$. One also has $\langle \alpha_j, e \rangle = 1$ in this example; the reason why one is not in the situation of Example~\ref{exa:nagler} is that $\langle \alpha_2, e_2 \rangle = v(1) = 0$ and $\langle \alpha_3, e_3 \rangle = w(-1) = 0$. \end{remark}

\subsection{Long-term behaviour}
\label{subsection:primitivity-1:asymptotics}

We now use the results of the previous subsection to analyze the behaviour of the powers $T^n$ as $n \to \infty$.
A bounded linear operator $T$ on a Banach lattice $E$ is called \emph{AM-compact} if it maps every order interval to a relatively compact set. Obviously, every compact operator is AM-compact. Other examples of AM-compact operators are discussed after the following theorem.

\begin{theorem} \label{thm:everywhere-convergence} Let $T: E \to E$ be a positive and power bounded linear operator on a complex Banach lattice $E$, and assume that the fixed space $\ker(1-T)$ contains a quasi-interior point $h$ of $E_+$. Assume moreover that the operator $T^{n_0}$ is AM-compact for some integer $n_0 \ge 1$ and that, for every $f \in E_+$, there exists an integer $n \ge 1$ such that $\overline{E_{T^nf}} \supseteq \overline{E_{T^{n-1}f}}$. Then $T^k$ converges strongly as $k \to \infty$. \end{theorem}

\begin{proof} Since the order interval $[0,h]$ is invariant under $T$, it follows from the AM-compactness of $T^{n_0}$ that the orbit of each point $f \in [0,h]$ under $\{T^n: \; n \ge 0\}$ is relatively compact in $E$; thus, the same is true for every $f$ in the span $E_h$ of $[0,h]$. Since $E_h$ is dense in $E$ and $T$ is power bounded, it even follows that the orbit of every $f \in E$ is relatively compact \cite[Corollary~A.5(a)]{Engel2000}. In particular, each orbit is relatively weakly compact, i.e., the operator $T$ is weakly almost periodic. So Theorem~\ref{thm:main-everywhere-bl} implies that $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$. Also due to the AM-compactness of $T^{n_0}$ and the existence of the quasi-interior fixed point, \cite[Theorem~6.1]{GlueckHaase2019} is applicable. Part~(c) of this result tells us that $T^n$ converges strongly as $n \to \infty$ since $T$ does not have eigenvalues in $\mathbb{T} \setminus \{1\}$. \end{proof}

Next we state a more explicit version of the previous theorem on $L^p$-spaces. Let $(\Omega,\mu)$ be a $\sigma$-finite measure space and $p \in [1,\infty)$; we consider the (complex-valued) space $L^p := L^p(\Omega,\mu)$. A positive linear operator $T: L^p \to L^p$ is called an \emph{integral operator} or \emph{kernel operator} if there exists a measurable function $k: \Omega \times \Omega \to [0,\infty)$ -- the so-called \emph{integral kernel} of $T$ -- such that the following property holds for every $f \in L^p$: for almost every $\omega \in \Omega$, the function $k(\omega,\argument)f$ belongs to $L^1(\Omega,\mu)$ and one has
\begin{align*}
	(Tf)(\omega) = \int_\Omega k(\omega, \tilde \omega) f(\tilde \omega) \dx \mu(\tilde \omega).
\end{align*}
Every positive integral operator is AM-compact; see for instance \cite[Appendix~A]{GerlachGlueck2019} or \cite[Appendix~A]{GlueckHaase2019} for more details.

\begin{corollary} \label{cor:everywhere-convergence-l-p} Let $p \in [1,\infty)$, let $(\Omega,\mu)$ be a $\sigma$-finite measure space and set $L^p := L^p(\Omega,\mu)$. Let the positive linear operator $T: L^p \to L^p$ be an integral operator whose integral kernel $k: \Omega \times \Omega \to [0,\infty)$ has the following property: for every measurable set $M \subseteq \Omega$ with non-zero measure we have $\int_{M \times M} k \dx \mu \otimes \mu > 0$. If $T$ is power bounded and has a fixed point which is strictly positive almost everywhere on $\Omega$, then $T^k$ converges strongly as $k \to \infty$. \end{corollary}

\begin{proof} As pointed out before the corollary, $T$ is AM-compact. So, in order to apply Theorem~\ref{thm:everywhere-convergence}, it suffices to check that $\supp Tf \supseteq \supp f$ (in the almost everywhere sense) for each $0 \le f \in L^p$.
Fix $0 \le f \in L^p$ (more precisely, let $f$ be a representative of a positive vector in $L^p$; we may choose $f$ such that $f(\omega) \ge 0$ for all $\omega \in \Omega$) and let $g = Tf$. We have
\begin{align*}
	g(\omega) = \int_\Omega k(\omega, \tilde \omega) f(\tilde \omega) \dx \mu(\tilde \omega)
\end{align*}
for almost all $\omega \in \Omega$; we may even modify $k$ and $g$ on a null set of $\Omega \times \Omega$ and $\Omega$, respectively, such that this formula holds for all $\omega \in \Omega$ (where the modification of $k$ might depend on $f$). Set
\begin{align*}
	S_f := \{\omega \in \Omega: \; f(\omega) > 0\} \qquad \text{and} \qquad S_g := \{\omega \in \Omega: \; g(\omega) > 0\}.
\end{align*}
Since $\int_{\Omega \setminus S_g} g(\omega)\dx \mu(\omega) = 0$, we conclude that $k(\omega, \tilde \omega) f(\tilde \omega) = 0$ for almost all $(\omega, \tilde \omega) \in (\Omega \setminus S_g) \times \Omega$. Hence, $k(\omega, \tilde \omega) = 0$ for almost all $(\omega,\tilde \omega) \in (\Omega \setminus S_g) \times S_f$. In particular, we have $k(\omega,\tilde \omega) = 0$ for almost all $(\omega, \tilde \omega) \in (S_f \setminus S_g) \times (S_f \setminus S_g)$. By our assumption on $k$ this implies that $S_f \setminus S_g$ has measure $0$, so we indeed have $\supp Tf = \supp g \supseteq \supp f$. \end{proof}

The condition on the integral kernel $k$ in Corollary~\ref{cor:everywhere-convergence-l-p} means, in a sense, that $k$ is non-zero close to the diagonal of $\Omega \times \Omega$. We make this more explicit in the following version of the corollary which is stated in terms of topological spaces. Recall that a topological space $\Omega$ is called \emph{Lindelöf} if every open cover of $\Omega$ admits an at most countable subcover. Of course, every compact topological space is Lindelöf; moreover, a metric space is Lindelöf if and only if it is separable. Hence, the following corollary is applicable to a large variety of spaces.

\begin{corollary} \label{cor:lindeloef} Let $p \in [1,\infty)$ and let $\Omega$ be a topological space which is Lindelöf. Let $\mu$ be a $\sigma$-finite measure (with values in $[0,\infty)$) defined on the Borel $\sigma$-algebra on $\Omega$ and set $L^p := L^p(\Omega,\mu)$. Let $T: L^p \to L^p$ be a positive linear operator which is an integral operator and whose integral kernel $k: \Omega \times \Omega \to [0,\infty)$ has the following property: there exists an open set $U \subseteq \Omega \times \Omega$ which contains the diagonal $\{(\omega,\omega) \in \Omega \times \Omega: \; \omega \in \Omega\}$ such that $k(\omega, \tilde \omega) > 0$ for almost all $(\omega, \tilde \omega) \in U$. If $T$ is power bounded and has a fixed point which is strictly positive almost everywhere on $\Omega$, then $T^k$ converges strongly as $k \to \infty$. \end{corollary}

Note that we do not assume the measure $\mu$ in the corollary to be strictly positive in the sense that $\mu(V) > 0$ for every non-empty open set $V$. For the proof we need a little lemma about Lindelöf spaces:

\begin{lemma} \label{lem:lindeloef-diagonal} Let $\Omega$ be a topological space which is Lindelöf and let $U \subseteq \Omega \times \Omega$ be an open set which contains the diagonal $\Delta := \{(\omega,\omega) \in \Omega \times \Omega: \; \omega \in \Omega\}$. Then there exists an at most countable system $\mathcal{B}$ of open sets $B \subseteq \Omega$ such that $\mathcal{B}$ covers $\Omega$ and such that $B \times B \subseteq U$ for all $B \in \mathcal{B}$.
\end{lemma} \begin{proof} Let $\omega \in \Omega$. Since $U$ is an open neighbourhood of $(\omega,\omega)$ in $\Omega \times \Omega$, we can find an open neighbourhood $B_\omega$ of $\omega$ in $\Omega$ such that $B_\omega \times B_\omega \subseteq U$. We have $\bigcup_{\omega \in \Omega} B_\omega = \Omega$; using that $\Omega$ is Lindelöf, we can thus find an at most countable subsystem $\mathcal{B}$ of $\{B_\omega: \; \omega \in \Omega\}$ which also covers $\Omega$. This proves the assertion. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:lindeloef}] We only have to check that $k$ satisfies the assumption stated in Corollary~\ref{cor:everywhere-convergence-l-p}, so let $M \subseteq \Omega$ be a measurable set of non-zero measure. Since $\Omega$ is Lindelöf, Lemma~\ref{lem:lindeloef-diagonal} shows that we can find an at most countable open cover $\mathcal{B}$ of $\Omega$ such that $B \times B \subseteq U$ for each $B \in \mathcal{B}$. There exists a set $B \in \mathcal{B}$ such that $B \cap M$ has non-zero measure. For this set $B$, the square $S := (B \cap M) \times (B \cap M)$ has non-zero measure in $\Omega \times \Omega$ and it is contained in $U$, so $k$ is strictly positive almost everywhere on $S$. Thus, \begin{align*} \int_{M \times M} k \dx \mu \otimes \mu \ge \int_S k \dx \mu \otimes \mu > 0, \end{align*} so the assumptions of Corollary~\ref{cor:everywhere-convergence-l-p} are indeed satisfied. \end{proof} Let us illustrate Corollary~\ref{cor:lindeloef} with the following simple example: \begin{example} \label{ex:integral-operator-on-unit-interval} Endow the unit interval $[0,1]$ with the Borel-$\sigma$-algebra and the Lebesgue measure and let $k: [0,1] \times [0,1] \to [0,\infty)$ be a measurable function such that \begin{align*} & \int_{[0,1]} k(x,y) \dx x = 1 \quad \text{for almost all } y \in [0,1] \\ \text{and} \qquad & \int_{[0,1]} k(x,y) \dx y = 1 \quad \text{for almost all } x \in [0,1]. \end{align*} Moreover, let $\delta > 0$ and assume that $k$ is strictly positive almost everywhere on the diagonal strip $\{(x,y) \in [0,1]^2: \; \modulus{x-y} < \delta\}$. For each $0 \le f \in L^1 := L^1([0,1])$ we have \begin{align*} \int_{[0,1]} \int_{[0,1]} k(x,y) f(x) \dx x \dx y = \norm{f}_{L^1}. \end{align*} Thus, $k(x,\argument)f(\argument)$ is in $L^1$ for almost all $x \in [0,1]$, the function $\int_{[0,1]} k(\argument,y)f(y)\dx y$ is again in $L^1$, and its norm equals the norm of $f$. Hence, \begin{align*} L^1 \ni f \mapsto Tf := \int_{[0,1]} k(\argument,y) f(y) \dx y \in L^1 \end{align*} defines a positive linear operator $T: L^1 \to L^1$ of norm $1$. Moreover, one readily checks that $T\one_{[0,1]} = \one_{[0,1]}$. Corollary~\ref{cor:lindeloef} thus implies that $T^k$ converges strongly as $k \to \infty$. \end{example} We also have the following consequence of Theorem~\ref{thm:everywhere-convergence} on sequence spaces: \begin{corollary} \label{cor:convergence-small-l-p} Let $p \in [1,\infty)$ and set $\ell^p := \ell^p(\mathbb{N})$. Let $T: \ell^p \to \ell^p$ be a positive and power bounded operator and assume that $T$ has a fixed point $f_0$ such that $f_0(\omega) > 0$ for all $\omega \in \mathbb{N}$. Assume moreover that $\langle e_j, T e_j \rangle > 0$ for each canonical unit vector $e_j$. Then $T^k$ converges strongly as $k \to \infty$. \end{corollary} \begin{proof} We clearly have $\supp (Tf) \supseteq \supp f$ for each $0 \le f \in \ell^p$. 
Moreover, it is not difficult to check that every order interval in $\ell^p$ is compact and that, consequently, every bounded linear operator on $\ell^p$ is AM-compact. Hence, the assertion follows from Theorem~\ref{thm:everywhere-convergence}. \end{proof}

Note that we can also replace $\ell^p$ with the space $c_0$ in the above corollary and the same conclusion remains true (with the same proof). By now we have explored several convergence results for the powers of kernel operators or, more generally, AM-compact operators. We conclude this section by considering the case where (a power of) $T$ only dominates a non-zero AM-compact operator. Operators with this property -- and especially the special case of so-called \emph{partial integral operators} that dominate a non-zero integral operator -- frequently occur in the literature. See for instance \cite[Sections~3 and~4]{Gerlach2013} and \cite[Theorem~1 on p.\,247]{Rudnicki1995} for an analysis of spectral and asymptotic properties of a (sub)class of such operators, and \cite[Section~5]{GerlachGlueckKernel} for a characterization of this operator class. A convergence result for $C_0$-semigroups on $L^1$ which contain a partial integral operator can, for instance, be found in \cite[Theorems~1 and~2]{Pichor2000}; this result turned out to have numerous applications to evolution equations in mathematical biology.

For a positive operator $T$ on a Banach lattice $E$ we call a vector $x \in E$ a \emph{fixed point} of $T$ if $Tx = x$. We call a vector $x \in E_+$ a \emph{super-fixed point} of $T$ if $Tx \ge x$.

\begin{theorem} \label{thm:convergence-partial-kernel} Let $E$ be a Banach lattice with order continuous norm and let $T: E \to E$ be a power bounded positive linear operator on $E$ whose fixed space $\ker(1-T)$ contains a quasi-interior point of $E_+$. Assume that the following assertions are fulfilled: \begin{enumerate}[label=\upshape(\arabic*)] \item Every super-fixed point of $T$ in $E_+$ is actually a fixed point of $T$. \item For every non-zero $0 \le g \in \fix T$ there exists an integer $n \ge 0$ and an AM-compact operator $K \in \mathcal{L}(E)$ such that $0 \le K \le T^n$ and $Kg \not= 0$. \item For every $0 \le f \in E$ there exists an integer $n \ge 1$ such that $\overline{E_{T^nf}} \supseteq \overline{E_{T^{n-1}f}}$. \end{enumerate} Then $T^k$ converges strongly as $k \to \infty$. \end{theorem}

Sufficient conditions for every super-fixed point in $E_+$ to actually be a fixed point can, for instance, be found in \cite[Proposition~3.11]{GerlachGlueck2019}.

\begin{proof}[Proof of Theorem~\ref{thm:convergence-partial-kernel}] Due to the assumptions~(1) and~(2) and the existence of a quasi-interior fixed point it follows that the operator semigroup $(T^n)_{n \in \mathbb{N}_0}$ satisfies the \emph{standard assumptions} in \cite[Section~6]{GlueckHaase2019}; this is proved in \cite[Lemma~3.10]{GerlachGlueck2019} (the divisibility assumption in Theorem~3.9 in this reference is not used in the proof of Lemma~3.10). Therefore, it follows from \cite[Theorem~3.1(a) and Remark~2.6(2)]{GlueckHaase2019} that every orbit of the powers of $T$ is relatively compact in $E$; in particular, every orbit is relatively weakly compact, so $T$ is weakly almost periodic. So we can apply Theorem~\ref{thm:main-everywhere-bl} to derive that $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$, and thus the claimed convergence follows from \cite[Theorem~6.1(c)]{GlueckHaase2019}.
\end{proof}

Theorem~\ref{thm:convergence-partial-kernel} is, in a sense, a result analogous to Theorem~\ref{thm:everywhere-convergence}. Thus, one can derive similar consequences from Theorem~\ref{thm:convergence-partial-kernel} as we did from Theorem~\ref{thm:everywhere-convergence} above. However, it would probably not be particularly illuminating to repeat the same arguments again, so we omit the details here. When combining our triviality results on the point spectrum with the spectral analysis of quasi-compact operators (which is, for instance, presented in \cite[Theorem~2.8 on pp.\,91--92]{Krengel1985} or \cite{Nagler2018}) one can also obtain sufficient conditions for operator norm convergence of powers of positive operators. As these arguments do not seem to add much insight to the main theme of this article, though, we do not discuss this in detail.

\section{Irreducible operators that partially increase the support of functions}
\label{section:conditions-for-primitivity-2}

\subsection{Point spectrum}
\label{subsection:conditions-for-primitivity-2:point-spectrum}

In some cases it can happen that the property of an operator to increase the support of functions is not satisfied everywhere on the underlying space, but only on a part of the space. If the operator is irreducible, we can still derive similar results as before. This is the purpose of this section.

A positive linear operator $T$ on a Banach lattice $E$ is called \emph{irreducible} if, for each non-zero $f \in E_+$ and each non-zero functional $\varphi \ge 0$ in the norm dual $E'$ of $E$, there exists an integer $n \ge 0$ such that $\langle \varphi, T^n f \rangle > 0$. Equivalently, $\{0\}$ and $E$ are the only closed $T$-invariant ideals in $E$. A vector subspace $B$ of a Banach lattice $E$ is called a \emph{projection band} if there exists a bounded linear projection $P$ on $E$ which has range $B$ and satisfies $0 \le Pf \le f$ for all $f \in E_+$; such a projection is called a \emph{band projection}. For instance, on an $L^p$-space the multiplication with the indicator function of a measurable set is a band projection, and its range is a projection band.

\begin{theorem} \label{thm:irred-bl} Let $E$ be a complex Banach lattice and let $T: E \to E$ be a positive linear operator which is weakly almost periodic. Assume that $T$ is irreducible and that there exists a projection band $\{0\} \not= B \subseteq E$ such that $\overline{E_{Tf}} \supseteq \overline{E_f}$ for each $0 \le f \in B$. Then $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$. \end{theorem}

\begin{proof} We use similar arguments as in the proofs of Theorems~\ref{thm:main-everywhere-bl} and~\ref{thm:lattice-homomorphism}. Since $T$ is weakly almost periodic, we can employ the Jacobs--de Leeuw--Glicksberg decomposition \cite[Section~2.4]{Krengel1985} to obtain a bounded linear projection $P$ on $E$ which has the same properties as listed in the proof of Theorem~\ref{thm:main-everywhere-bl}: \begin{enumerate}[label=(\alph*)] \item The projection $P$ is positive, and thus the range $PE$ is a complex Banach lattice with respect to an equivalent norm, and with positive cone $(PE)_+ := E_+ \cap PE$. \item The projection $P$ commutes with $T$, so $T$ leaves $\ker P$ and $PE$ invariant; moreover, the restriction $T\restricted{PE}$ is a lattice isomorphism on $PE$. \item The range $PE$ is the closed span of the eigenvectors of $T$ belonging to unimodular eigenvalues. \end{enumerate} Assume now that the assertion of the theorem is false.
Then there exists a unimodular eigenvalue $e^{i\theta} \not= 1$ of $T$ (with some $\theta \in [-\pi,\pi)$), and hence, $e^{i\theta} \in \specPnt(T\restricted{PE})$. Since the latter point spectrum is cyclic according to \cite[Corollary~2 to Proposition~V.4.2]{Schaefer1974}, we can even assume that $\re e^{i\theta} \le 0$. Moreover, since the point spectrum of $T\restricted{PE}$ is invariant under complex conjugation, we can also assume that $\im e^{i\theta} \ge 0$. Thus, $e^{i\theta} = \cos \theta + i\sin \theta$, where $\cos \theta \le 0$ and $\sin \theta \ge 0$.

Let $z = x+iy \in PE \setminus \{0\}$ be an eigenvector of $T\restricted{PE}$ for the eigenvalue $e^{i\theta}$ (where $x,y$ are contained in the real parts of $E$ and $PE$). Consider the vector $\modulus{z}$ (where the modulus is computed in the Banach lattice $E$, not in $PE$)%
\footnote{ We will actually see later in the proof that the modulus operation is the same in both lattices $E$ and $PE$, but at this point we do not know this yet. }%
. We have $T \modulus{z} \ge \modulus{Tz} = \modulus{z}$. Since $T$ is irreducible and power bounded, it follows from the inequality $T \modulus{z} \ge \modulus{Tz} = \modulus{z}$ for the non-zero vector $\modulus{z}$ that there exists a strictly positive functional $\varphi \in E'_+$ which is a fixed point of $T'$; this is explained in the proof of \cite[Proposition~3.11(c)]{GerlachGlueck2019}. Moreover, the same reference shows that $\modulus{z}$ is actually a fixed point of $T$. Thus, the principal ideal $E_{\modulus{z}}$ is $T$-invariant and hence, so is its closure. As $T$ is irreducible, it follows that $\overline{E_{\modulus{z}}} = E$, so $\modulus{z}$ is a quasi-interior point of $E_+$. Since $P$ is, by the Jacobs--de Leeuw--Glicksberg theory, contained in the weak operator closure of $\{T^n: \; n \ge 0\}$, it readily follows that the strictly positive functional $\varphi$ is also a fixed point of $P'$. This implies that $P$ is \emph{strictly positive} in the sense that $Pf$ is non-zero for every non-zero $f \in E_+$. Therefore, $PE$ is even a sublattice of $E$ (this follows from \cite[Proposition~III.11.5]{Schaefer1974}) and thus, the lattice operations in the Banach lattice $PE$ coincide with the lattice operations in $E$.

As in the proof of Theorem~\ref{thm:lattice-homomorphism} we now observe that
\begin{align*}
	Tx = \cos \theta \; x - \sin \theta \; y \qquad \text{and} \qquad Ty = \cos \theta \; y + \sin \theta \; x.
\end{align*}
By using that $T\restricted{PE}$ is a lattice homomorphism, we conclude in the same way as in the proof of Theorem~\ref{thm:lattice-homomorphism} that
\begin{align*}
	& T(x^+) \le x^- + y^- \qquad \text{and} \qquad T(x^-) \le x^+ + y^+, \\
	\text{as well as} \qquad & T(y^+) \le y^- + x^+ \qquad \text{and} \qquad T(y^-) \le y^+ + x^-.
\end{align*}
Now, let $Q \in \mathcal{L}(E)$ denote the band projection onto $B$. Then we have
\begin{align*}
	\overline{E_{Qx^+}} \subseteq \overline{E_{TQx^+}} \subseteq \overline{E_{Tx^+}} \subseteq \overline{E_{x^- + y^-}},
\end{align*}
where the first inclusion follows from our assumption on $T$. Since $Qx^+$ is dominated by $x^+$ and thus disjoint to $x^-$, we conclude from Lemma~\ref{lem:closed-ideals}\ref{lem:closed-ideals:item:disjoint} that $\overline{E_{Qx^+}} \subseteq \overline{E_{y^-}}$.
Moreover, as $Qx^+$ is contained in the band $B$, so is $\overline{E_{Qx^+}}$, and hence we obtain
\begin{align*}
	\overline{E_{Qx^+}} = Q \overline{E_{Qx^+}} \subseteq Q \overline{E_{y^-}} \subseteq \overline{QE_{y^-}} \subseteq \overline{E_{Qy^-}}.
\end{align*}
By the same reasoning we can see that
\begin{align*}
	\overline{E_{Qx^-}} \subseteq \overline{E_{Qy^+}} \qquad \text{as well as} \qquad \overline{E_{Qy^+}} \subseteq \overline{E_{Qx^+}} \qquad \text{and} \qquad \overline{E_{Qy^-}} \subseteq \overline{E_{Qx^-}}.
\end{align*}
Hence, we have $\overline{E_{Qx^+}} \subseteq \overline{E_{Qy^-}} \subseteq \overline{E_{Qx^-}}$. However, $Qx^+$ and $Qx^-$ are disjoint, so it follows from Lemma~\ref{lem:closed-ideals}\ref{lem:closed-ideals:item:disjoint} that $\overline{E_{Qx^+}} = \{0\}$ and thus, $Qx^+ = 0$. Using the same argument again, we can also see that $Qx^- = 0$, as well as $Qy^+ = 0$ and $Qy^- = 0$. Yet, we have $\modulus{z} \le \modulus{x} + \modulus{y} = x^+ + x^- + y^+ + y^-$, so $0 \le Q\modulus{z} \le 0$, i.e., $Q\modulus{z} = 0$. As noted above, $\modulus{z}$ is a quasi-interior point of $E_+$, so the equality $Q\modulus{z} = 0$ implies that $Q = 0$. This contradicts the assumption $B \not= \{0\}$. \end{proof}

As in Subsection~\ref{subsection:primitivity-1:point-spectrum}, we will now derive a corollary of the previous theorem which only requires $T$ to be power bounded rather than weakly almost periodic. We recall that a Banach lattice $E$ is said to have \emph{order continuous norm} if every decreasing net in $E_+$ with infimum $0$ is norm convergent to $0$. Every KB-space (and thus, in particular, every $L^p$-space for $1 \le p < \infty$) has order continuous norm; the space $c_0$ of sequences that converge to $0$ (endowed with the $\infty$-norm) is an example of a Banach lattice that has order continuous norm, but is not a KB-space. In Corollary~\ref{cor:main-everywhere-bl-kb} we required the underlying Banach lattice to be a KB-space. Due to the irreducibility of $T$, we only need the space to have order continuous norm in the following corollary.

In a Banach lattice with order continuous norm, every closed ideal is a projection band.%
\footnote{ Indeed, every closed ideal in such a space is a band according to \cite[Theorem~II.5.14 on p.\,94]{Schaefer1974}; moreover, a Banach lattice with order continuous norm is order complete \cite[Theorem~II.5.10 on p.\,89]{Schaefer1974}, and in an order complete vector lattice every band is a projection band \cite[Theorem~II.2.10 on p.\,62]{Schaefer1974}. }
Thus, instead of requiring the subspace $B$ in Theorem~\ref{thm:irred-bl} to be a projection band, one can require it to be a closed ideal in the following corollary.

\begin{corollary} \label{cor:irred-bl-oc} Let $E$ be a complex Banach lattice with order continuous norm and let $T: E \to E$ be a positive linear operator which is power bounded. Assume that $T$ is irreducible and that there exists a closed ideal $\{0\} \not= B \subseteq E$ such that $\overline{E_{Tf}} \supseteq \overline{E_f}$ for each $0 \le f \in B$. Then $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$. \end{corollary}

\begin{proof} First note that, as observed before the corollary, $B$ is actually a projection band. Now assume for a contradiction that $T$ has an eigenvalue $\lambda \in \mathbb{T} \setminus \{1\}$, and let $z \in E$ be a corresponding eigenvector.
Then $T\modulus{z} \ge \modulus{Tz} = \modulus{z}$, and thus it follows from the irreducibility and the power boundedness of $T$ that $\modulus{z}$ is actually a fixed point of $T$, i.e., $T \modulus{z} = \modulus{z}$; compare the proof of Theorem~\ref{thm:irred-bl}. As also seen in the proof of Theorem~\ref{thm:irred-bl}, the irreducibility of $T$ thus implies that $\modulus{z}$ is a quasi-interior point of $E_+$. Since order intervals in Banach lattices with order continuous norm are weakly compact \cite[Theorem~II.5.10 on p.\,89]{Schaefer1974}, it thus follows by the same argument as in the proof of Corollary~\ref{cor:main-everywhere-bl-kb} that $T$ is weakly almost periodic. Thus, Theorem~\ref{thm:irred-bl} is applicable, and yields that $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$. This is a contradiction. \end{proof} The second theorem in the introduction is an immediate consequence of the previous corollary: \begin{proof}[Proof of Theorem~\ref{thm:main-irred}] Since $p \in [1,\infty)$, the Banach lattice $E := L^p$ has order continuous norm. Let $B \subseteq L^p$ denote the set of all functions which vanish outside $S$. This is a closed ideal in $L^p$, and for each $0 \le f \in B$ the assumption $\supp(Tf) \supseteq \supp(f)$ in Theorem~\ref{thm:main-irred} is equivalent to the inclusion $\overline{E_{Tf}} \supseteq \overline{E_{f}}$. Thus, the assumptions of Corollary~\ref{cor:irred-bl-oc} are satisfied, so the corollary shows that $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$. \end{proof} \subsection{Long-term behaviour} Again, we use the spectral results from the previous subsections to derive consequences for the long-term behaviour of the powers of positive operators. \begin{corollary} \label{cor:irred-bl-oc-convergence} Let $E$ be a complex Banach lattice with order continuous norm and let $T: E \to E$ be a positive linear operator which is power bounded. Assume that $T$ is irreducible and that there exists a closed ideal $\{0\} \not= B \subseteq E$ such that $\overline{E_{Tf}} \supseteq \overline{E_f}$ for each $0 \le f \in B$. If $T$ has a non-zero fixed vector and there exists an integer $n_0 \ge 0$ and a non-zero linear AM-compact operator $K: E \to E$ such that $0 \le K \le T^{n_0}$, then $T^k$ converges strongly as $k \to \infty$. \end{corollary} \begin{proof} According to Corollary~\ref{cor:irred-bl-oc} one has $\specPnt(T) \cap \mathbb{T} \subseteq \{1\}$. Due to the irreducibility of $T$ the assumptions~(a) and~(b) of \cite[Theorem~3.9]{GerlachGlueck2019} are satisfied for the semigroup representation $(T^k)_{k \in \mathbb{N}_0}$; indeed, assumption~(a) follows from \cite[Proposition~3.11(c)]{GerlachGlueck2019}, and assumption~(b) from the fact that every non-zero fixed point of $T$ in $E_+$ is a quasi-interior point of $E_+$ (see \cite[Theorem~V.5.2(i) on p.\,329]{Schaefer1974} or our proof of Theorem~\ref{thm:irred-bl}). If $x \in E$ denotes a non-zero fixed point of $T$, then $T\modulus{x} \ge \modulus{x}$, so $\modulus{x}$ is even a fixed point of $T$ by property~(a) mentioned in the previous paragraph; therefore, $\modulus{x}$ is also a quasi-interior point according to property~(b). Thus, the assumptions of \cite[Lemma~3.10]{GerlachGlueck2019} are satisfied (note that the divisibility assumption in \cite[Theorem~3.9]{GerlachGlueck2019} is not needed for the lemma) and this shows in turn that the standard assumptions of \cite[Section~6]{GlueckHaase2019} are satisfied. 
As $T$ has no eigenvalues in $\mathbb{T} \setminus \{1\}$, it thus follows from \cite[Theorem~6.1(c)]{GlueckHaase2019} that $T^k$ converges strongly as $k \to \infty$. \end{proof} We do not discuss the case of integral operators on general measure spaces in detail here (which can be treated in the same spirit as in Corollaries~\ref{cor:everywhere-convergence-l-p} and~\ref{cor:lindeloef} in the preceding section), but we find it worthwhile to again mention the case of $\ell^p$-spaces explicitly: \begin{corollary} \label{cor:irred-l-p-convergence} Let $p \in [1,\infty)$ and set $\ell^p := \ell^p(\mathbb{N})$. Let $T: \ell^p \to \ell^p$ be a positive linear operator which is power bounded. Assume that $T$ is irreducible and has a fixed point $f_0$ such that $f_0(\omega) > 0$ for all $\omega \in \mathbb{N}$. Assume moreover that $\langle e_j, Te_j \rangle > 0$ for at least one canonical unit vector $e_j$. Then $T^k$ converges strongly as $k \to \infty$. \end{corollary} \begin{proof} If $T$ is zero there is nothing to prove, so assume that $T \not= 0$. For $p \in [1,\infty)$ every order interval in $\ell^p$ is compact, so $T$ is AM-compact. Hence, we can apply Corollary~\ref{cor:irred-bl-oc-convergence}, where $B := \big\{g \in \ell^p: \; g(\omega) = 0 \text{ for all } \omega \in \mathbb{N} \setminus \{j\}\big\}$. \end{proof} \section{Domination of the identity} \label{section:domination-of-id} \subsection{The spectrum} In this subsection we give two sufficient conditions for the peripheral spectrum of a positive operator to consist of the spectral radius only. The first of those results, Theorem~\ref{thm:peripheral-spectrum-first-power}, is likely to be known to experts in Perron--Frobenius theory. Yet, we could not find an explicit reference for it in the literature, so we include the proof. By $\spr(T)$ we denote the spectral radius of a bounded linear operator $T$, and the \emph{peripheral spectrum of $T$} is denoted by \begin{align*} \spec_{\operatorname{per}}(T) := \{\lambda \in \spec(T): \; \modulus{\lambda} = \spr(T)\}. \end{align*} \begin{theorem} \label{thm:peripheral-spectrum-first-power} Let $E$ be a complex Banach lattice and let $T: E \to E$ be a positive linear operator. If there is a number $\varepsilon > 0$ such that $T \ge \varepsilon \id$, then $\spec_{\operatorname{per}}(T) = \{\spr(T)\}$. \end{theorem} \begin{proof} For a point $z$ in the complex plane and a number $r \ge 0$ we will use the notation $\overline{B}_r(z) \subseteq \mathbb{C}$ for the closed disk with center $z$ and radius $r$. Since $T$ is positive, we have $\spr(T) \in \spec(T)$ \cite[Proposition~V.4.1 on p.\,323]{Schaefer1974}. In particular, $\spr(T)$ is the largest real spectral value of $T$. Since subtracting $\varepsilon \id$ from the operator shifts the spectrum by $-\varepsilon$, it follows that $\spr(T)-\varepsilon$ is the largest real spectral value of $T-\varepsilon \id$. But $T-\varepsilon \id$ is also positive by assumption, so $\spr(T-\varepsilon \id) \in \spec(T-\varepsilon \id)$ (again by \cite[Proposition~V.4.1 on p.\,323]{Schaefer1974}), and hence it follows that $\spr(T-\varepsilon \id)$ is the largest real spectral value of $T-\varepsilon\id$. So we conclude that \begin{align*} \spr(T)-\varepsilon = \spr(T-\varepsilon \id). 
\end{align*} Hence, $\spr(T) - \varepsilon \ge 0$ and $\spec(T-\varepsilon\id) \subseteq \overline{B}_{\spr(T)-\varepsilon}(0)$, and therefore, \begin{align*} \spec(T) \subseteq \overline{B}_{\spr(T)-\varepsilon}(0) + \varepsilon = \overline{B}_{\spr(T)-\varepsilon}(\varepsilon). \end{align*} Clearly, the latter disk intersects the circle $\spr(T) \mathbb{T}$ only in $\spr(T)$. \end{proof} \begin{remark} Theorem~\ref{thm:peripheral-spectrum-first-power} remains true if $E$ is not a Banach lattice, but only (a complexification of) an ordered Banach space with generating and normal cone. Indeed, on such a space one still knows that the spectral radius of every positive operator is contained in its spectrum \cite[paragraph~2.2 on p.\,311]{SchaeferWolff1999}, so the proof above works without modification. \end{remark} In our second theorem in this subsection we replace the assumption $T \ge \varepsilon \id$ with the more general assumption $T^n \ge \varepsilon T^{n-1}$ for an integer $n \ge 1$. In order to deduce from this condition that the peripheral spectrum is trivial, we need an additional technical assumption, namely that the peripheral spectrum is a priori known to be cyclic. We recall from the previous sections that a subset $S$ of the complex numbers is called \emph{cyclic} if $re^{i\theta} \in S$ (for some $r \ge 0$ and $\theta \in \mathbb{R}$) implies that $re^{in\theta} \in S$ for all $n \in \mathbb{Z}$ (see the discussion after the theorem for comments on this assumption). \begin{theorem} \label{thm:peripheral-spectrum-powers} Let $E$ be a complex Banach lattice, let $T: E \to E$ be a positive linear operator and assume that the peripheral spectrum of $T$ is cyclic. If there is an integer $n \ge 1$ and a number $\varepsilon > 0$ such that $T^n \ge \varepsilon T^{n-1}$, then $\spec_{\operatorname{per}}(T) = \{\spr(T)\}$. \end{theorem} The a priori assumption in the theorem that $\spec_{\operatorname{per}}(T)$ be cyclic is rather weak, since it is automatically satisfied for many classes of positive operators. In particular, if $\spr(T) = 1$ and $T$ is power bounded, which is a natural assumption if one wants to prove convergence of the iterates $T^n$, then positivity of $T$ always implies that $\spec_{\operatorname{per}}(T)$ is cyclic (this follows from \cite[Theorem~V.4.9 on p.\,327]{Schaefer1974}). It is, however, an open problem whether the peripheral spectrum of every positive operator on a Banach lattice is cyclic; see \cite{Glueck2018} and the references therein for a thorough discussion of this question. \begin{proof}[Proof of Theorem~\ref{thm:peripheral-spectrum-powers}] There is no loss of generality in assuming that $\spr(T) = 1$. We have $0 \le T^n - \varepsilon T^{n-1} \le T^n$, so $\spr(T^n - \varepsilon T^{n-1}) \le \spr(T^n) = \spr(T)^n = 1$. Since $T$ is positive, one has $1 = \spr(T) \in \spec(T)$ \cite[Proposition~V.4.1 on p.\,323]{Schaefer1974}. Now assume for a contradiction that $\spec_{\operatorname{per}}(T)$ does not only consist of $\spr(T)$. Since $\spec_{\operatorname{per}}(T)$ is cyclic, this implies that there exists a number $\lambda \in \spec_{\operatorname{per}}(T)$ with real part $\re \lambda \le 0$. 
For the number $\lambda^n - \varepsilon \lambda^{n-1}$, which is contained in the spectrum of $T^n - \varepsilon T^{n-1}$, we then obtain \begin{align*} 1 \ge \spr(T^n - \varepsilon T^{n-1}) \ge \modulus{\lambda^n - \varepsilon\lambda^{n-1}} = \modulus{\lambda - \varepsilon} > \modulus{\lambda} = 1, \end{align*} where the inequality at the end follows from $\re \lambda \le 0$. This is a contradiction. \end{proof} \subsection{Long-term behaviour} Let us give an application of Theorem~\ref{thm:peripheral-spectrum-powers}, again to the asymptotics of the powers of $T$. Recall that a bounded linear operator $T$ on a Banach space is called \emph{mean ergodic} if the sequence of its \emph{Cesàro means} \begin{align*} \frac{1}{n} \sum_{k=0}^{n-1} T^k \end{align*} converges strongly as $n \to \infty$. In this case, the limit of this sequence is called the \emph{mean ergodic projection} of $T$. \begin{corollary} \label{cor:peripheral-spectrum-powers-plus-ablp} Let $E$ be a complex Banach lattice and let $T: E \to E$ be a positive linear operator which is power bounded and mean ergodic. If there is an integer $n \ge 1$ and a number $\varepsilon > 0$ such that $T^n \ge \varepsilon T^{n-1}$, then $T^k$ converges strongly as $k \to \infty$. \end{corollary} \begin{proof} The sequence of Cesàro means of $T$ is bounded in operator norm; this follows from the mean ergodicity and the uniform boundedness principle. Hence, the spectral radius of $T$ satisfies $\spr(T) \le 1$. We may assume that $\spr(T) = 1$ since otherwise the assertion is trivial. Due to the boundedness of the Cesàro means, the resolvent of $T$ satisfies the estimate \begin{align*} \sup_{\lambda \in (1,\infty)} (\lambda - 1) \norm{(\lambda - T)^{-1} } < \infty, \end{align*} see \cite[Theorem~1.7]{Emilion1985}. This together with the positivity of $T$ implies that the peripheral spectrum of $T$ is cyclic \cite[Theorem~V.4.9 on p.\,327]{Schaefer1974}. So Theorem~\ref{thm:peripheral-spectrum-powers} is applicable and yields that $\spec(T) \cap \mathbb{T} \subseteq \{1\}$. Due to the mean ergodicity of $T$ we can apply the single operator version of the so-called \emph{ABLV theorem} \cite[Theorem~5.1]{ArendtBatty1988} to obtain the assertion.% \footnote{More precisely speaking, we need to split $E$ into the range and the kernel of the mean ergodic projection $P$ and apply the cited theorem to the restriction of $T$ to $\ker P$.} \end{proof} Let us close this section with an example which can loosely be interpreted as an infinite version of Example~\ref{exa:nagler}. The interesting point about the example is that Corollary~\ref{cor:peripheral-spectrum-powers-plus-ablp} can be applied for the case $n = 2$, but not for $n=1$. \begin{example} Endow the real line $\mathbb{R}$ with the Lebesgue measure and let $E := L^2(\mathbb{R})$. Let $(I_n)_{n \in \mathbb{N}}$ and $(J_n)_{n \in \mathbb{N}}$ be two partitions of $\mathbb{R}$ into measurable sets of measure $1$. For each $f \in E$, the series \begin{align*} Tf := \sum_{n=1}^\infty \int_{I_n} f \dx x \cdot \one_{J_n} \end{align*} converges unconditionally in $E$, and $T$ is a bounded linear operator on $E$ which has norm $1$ and is positive. We now show the following two properties of $T$: \begin{enumerate}[label=(\alph*)] \item The powers $T^k$ are not strongly convergent as $k \to \infty$, in general. \item Assume now that there exists a number $\varepsilon > 0$ such that $I_n \cap J_n$ has measure $\ge \varepsilon$ for each index $n \ge 1$. Then $T^k$ converges strongly as $k \to \infty$. 
\end{enumerate} \begin{proof} (a) If we choose $J_n = I_{n+1}$ for all $n \in \mathbb{N}$, then we obtain $T^k \one_{I_1} = \one_{I_{k+1}}$ for all $k \ge 1$. Since the sets $I_n$ are pairwise disjoint and have measure $1$, it follows that the latter sequence is not convergent in $E$ as $k \to \infty$. (b) Let $0 \le f \in E$. Then we obtain \begin{align*} T^2f = \sum_{n=1}^\infty \int_{I_n} f \dx x \cdot T \one_{J_n} \ge \sum_{n=1}^\infty \int_{I_n} f \dx x \cdot \varepsilon \one_{J_n} = \varepsilon Tf, \end{align*} so $T^2 \ge \varepsilon T$. Since $E$ is reflexive and $T$ is power bounded, we know that $T$ is mean ergodic. Therefore, Corollary~\ref{cor:peripheral-spectrum-powers-plus-ablp} implies that $T^k$ is strongly convergent as $k \to \infty$. \end{proof} \end{example} \bibliographystyle{plain}
{ "timestamp": "2022-09-05T02:20:55", "yymm": "2209", "arxiv_id": "2209.01171", "language": "en", "url": "https://arxiv.org/abs/2209.01171", "abstract": "Consider a positive operator $T$ on an $L^p$-space (or, more generally, a Banach lattice) which increases the support of functions in the sense that $supp(Tf) \\supseteq supp{f}$ for every function $f \\ge 0$. We show that this implies, under mild assumptions, that $T$ has no unimodular eigenvalues except for possibly the number $1$. This rules out periodic behaviour of any orbits of the powers of $T$, and thus enables us to prove convergence of those powers in many situations.For the proof we first perform a careful analysis of the action of lattice homomorphisms on the support of functions; then we split $T$ into an invertible and a weakly stable part, and apply the aforementioned analysis to the invertible part. An appropriate adaptation of this argument allows us to prove another version of our main result which is useful for the study of so-called irreducible operators.", "subjects": "Functional Analysis (math.FA); Spectral Theory (math.SP)", "title": "Aperiodicity of positive operators that increase the support of functions", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575137315162, "lm_q2_score": 0.7217431943271999, "lm_q1q2_score": 0.7091541985707761 }
https://arxiv.org/abs/1604.07005
Continuous homomorphisms between algebras of iterated Laurent series over a ring
We study continuous homomorphisms between algebras of iterated Laurent series over a commutative ring. We give a full description of such homomorphisms in terms of discrete data determined by the images of the parameters. In similar terms, we give a criterion of invertibility of an endomorphism and provide an explicit formula for the inverse endomorphism. We study the behavior of the higher-dimensional residue under continuous homomorphisms.
\section{Introduction} In this paper, we study continuous homomorphisms between algebras of iterated Laurent series in many variables over a commutative ring. Let us start with a previously known case of one variable. Let $A$ be a commutative ring and let $A((t))$ be the ring of Laurent series over $A$. For simplicity, assume in the introduction that $A$ does not decompose into a product of two rings. Then for any invertible Laurent series $$ \varphi =\mbox{$ \sum\limits_{l \in \z} a_l t^l $} \in A((t))^*\,, $$ there is an integer $\nu(\varphi) \in \z$ such that the coefficient $a_{\nu(\varphi)}$ is invertible and for all~${l<\nu(\varphi)}$, the coefficients $a_l$ are nilpotent (see more explanation and references in Section~\ref{prelim}). If $\nu(\varphi) > 0$, then we have a well-defined continuous endomorphism of the \mbox{$A$-algebra} \begin{equation}\label{map} A((t))\longrightarrow A((t))\,,\qquad \mbox{$f=\sum\limits_{l \in \Z} b_lt^l \longmapsto f(\varphi)=\sum\limits_{l \in \Z} b_l \varphi^l $}\,, \end{equation} where we consider the natural topology on~$A((t))$ with the base of open neighborhoods of zero given by $A$-submodules $t^iA[[t]]$,~$i\in\Z$. The series $\sum\limits_{l \in \Z} b_l \varphi^l$ converges in this topology. It is not hard to check that all continuous endomorphisms of the \mbox{$A$-algebra}~$A((t))$ have this form. Besides, Morava (after Kapranov) in~\cite[\S\,1.2]{Mo} and Mu\~{n}oz Porras and Plaza Mart\'in in~\cite[Theor.\,3.3]{MPM} have independently obtained a non-trivial statement that the endomorphism in formula~\eqref{map} is invertible if and only if~$\nu(\varphi)=1$. Note that the nilpotent coefficients $a_l$ of $\varphi$ with $l < \nu(\varphi)$ play an important role. Even in the case of continuous automorphisms of the $A$-algebra $A[[t]]$, where we have $\nu(\varphi)=1$ and $a_l=0 $ for $l <0$, one needs to consider the nilpotent coefficient $a_0$ in order to obtain the algebra of all derivations (or vector fields) on the formal disk~$\Spf\big(\C[[t]]\big)$ as the Lie algebra of the group of automorphisms, see~\cite[Ch.\,6.2]{FB}. \medskip We generalize the above facts to the case of continuous homomorphisms~$A((t_1))\ldots((t_n))\to A((t_1))\ldots((t_m))$ between \mbox{$A$-algebras} of iterated Laurent series over~$A$ with the natural topology. As a geometric motivation, let us mention that when $A=k$ is a field, the \mbox{$n$-dimensional} local field $k((t_1))\ldots ((t_n))$ appears naturally from iterated localization and completion procedures along a full flag of irreducible subvarieties on an $n$-dimensional algebraic variety~$X$, see, e.g., a survey~\cite{O}. Thus, in particular, for a finite-dimensional $k$-algebra $A$, the ring $$ A((t_1)) \ldots ((t_n))\simeq A\otimes_k k((t_1)) \ldots ((t_n)) $$ can be considered as a deformation of the $n$-dimensional local field along $A$, or as a ring that appears from the scheme $X \times_k A$ over $A$. \medskip Let us describe our main results. For short, we denote $A((t_1))\ldots((t_n))$ by $\LL^n(A)$. Elements of $\LL^n(A)$ have the form ${f=\sum\limits_{l\in\z^n}a_lt_1^{l_1}\ldots t_n^{l_n}}$, where ${l=(l_1,\ldots, l_n)\in\z^n}$ and~${a_l\in A}$, with a certain restriction on the set of indices of non-zero coefficients, see Section~\ref{prelim}. Firstly, we give the following description of all continuous homomorphisms between \mbox{$A$-algebras} of iterated Laurent series, see Theorem~\ref{prop:contchange}. 
Recall that the classical valuation (when $A$ is a field) is generalized to a homomorphism of groups $$ \nu\,:\,\LL^n(A)^*\lrto\z^n\,. $$ For instance, we have $\nu(t_1)=(1,0,\ldots,0)$ and $\nu(t_n)=(0,\ldots,0,1)$. Given a collection of~$n$ invertible iterated Laurent series $\varphi_1,\ldots,\varphi_n\in \LL^m(A)^*$ in~$m$ variables, suppose that the \mbox{$(m\times n)$-matrix} $\big(\nu(\varphi_1),\ldots,\nu(\varphi_n)\big)$ with integral entries is in column echelon form with positive leading entries (see Lemma~\ref{lem:upper}$(ii)$). For example, when $m=n$, this means that the matrix is upper-triangular with positive diagonal entries. Then we have a well-defined continuous homomorphism of $A$-algebras $$ \phi\,:\,\LL^n(A) \lrto \LL^m(A)\,, $$ $$ \mbox{$f=\sum\limits_{l\in\z^n}a_lt_1^{l_1}\ldots t_n^{l_n}\longmapsto f(\varphi_1,\ldots,\varphi_n)=\sum\limits_{l\in\z^n}a_l\varphi_1^{l_1}\ldots\varphi_n^{l_n}$}\,. $$ Moreover, all continuous homomorphisms of $A$-algebras $\LL^n(A)\to\LL^m(A)$ have this form. In particular, this implies that there are no such homomorphisms when $n>m$, see Corollary~\ref{cor:inequality}. As another application, we obtain that the functor that sends a commutative ring~$A$ to the set of all continuous homomorphisms of $A$-algebras $\LL^n(A)\to\LL^m(A)$ is represented by an ind-affine scheme over $\z$ that has many nice geometric properties, see Proposition~\ref{prop:repr} and Corollary~\ref{cor:repr}. More precisely, this ind-scheme is a product of an ind-flat scheme over~$\z$ with a so-called thick ind-cone introduced and studied in~\cite{GOMS}. Such ind-schemes provide an adequate replacement of ind-flat schemes in the context of iterated Laurent series in many variables. \medskip Secondly, we investigate how the residue of top differential forms is changed under continuous endomorphisms. More precisely, let the free rank one $\LL^n(A)$-module ${\widetilde{\Omega}^n_{\LL^n(A)}\simeq \LL^n(A)dt_1\wedge\ldots\wedge dt_n}$ be the natural quotient of the $\LL^n(A)$-module of absolute K\"ahler $n$-differentials of the ring $\LL^n(A)$. Further, let $$ \res\,:\,\widetilde{\Omega}^n_{\LL^n(A)}\longrightarrow A\,,\qquad \mbox{$\sum\limits_{l\in\z^n}a_lt_1^{l_1}\ldots t_n^{l_n}dt_1\wedge\ldots\wedge dt_n\longmapsto a_{-1\ldots -1}$}\,, $$ be the $n$-dimensional residue map. In Proposition~\ref{prop:invres} and Corollary~\ref{cor:res}, we describe explicitly how the residue map is changed under a continuous homomorphism~$\phi$ as above. In particular, in the case $m=n$, we show that the residue map is invariant under all continuous automorphisms of the $A$-algebra $\LL^n(A)$, see Remark~\ref{rmk:invres}. As an application, we obtain injectivity of the homomorphisms under a rather mild assumption, see Corollary~\ref{cor:inj} and Remark~\ref{rmk:inj}. Note that when $A$ is a field, the invariance of the residue under continuous automorphisms of the field $\LL^n(A)$ over $A$ is classical in the case $n=1$ (see, e.g.,~\cite{S}) and is known in the case $n\geqslant 2$ due to the works of Parshin, Lomadze, and Yekutieli, see~\cite[\S\,1, Prop.\,1]{P0},~\cite[Lem.\,6(VIII)]{Lom}, and~\cite[Theor.\,2.4.3]{Y}. Also, let us mention that unlike the case $n=1$, when $n\geqslant 2$, the residue is not necessarily invariant under a non-continuous automorphism, see~\cite[Ex.\,2.4.24]{Y} (when $n=1$, any automorphism of the field $A((t))$ is continuous, see, e.g.,~\cite[Ch.\,II, \S\,2, Exer.\,6\,a)]{FV}). 
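In the simplest case $n=m=1$ and $A=\Q$, the invariance of the residue under a continuous automorphism can also be illustrated by a direct computation with Laurent polynomials. The following short \texttt{SymPy} sketch (the substitution $t\mapsto\varphi=t+t^2$, which satisfies $\nu(\varphi)=1$, and the rational test forms below are chosen only for illustration) checks that $\res\big(f(\varphi)\,d\varphi\big)=\res(f\,dt)$ for a few Laurent polynomials $f$:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')

# phi determines a continuous automorphism of Q((t)) since nu(phi) = 1
phi = t + t**2

def res(form):
    # residue at t = 0, i.e. the coefficient of t^(-1)
    return sp.residue(form, t, 0)

def res_of_pullback(f):
    # residue of the pulled-back form f(phi) * (d phi / d t) dt
    return res(f.subs(t, phi) * sp.diff(phi, t))

for f in [1/t, 1/t**2, 1/t + 3 + 5*t, (1 + t)/t**3]:
    assert res(f) == res_of_pullback(f)
\end{verbatim}
Such a computation is, of course, only a sanity check over $\Q$; the general statements are contained in Proposition~\ref{prop:invres} and Corollary~\ref{cor:res} below. 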
\medskip Thirdly, when $m=n$, we give a criterion of invertibility of a continuous endomorphism $\phi$, see Theorem~\ref{theor:inv}. Namely,~$\phi$ is invertible if and only if the \mbox{$(n\times n)$-matrix} ${\big(\nu(\varphi_1),\ldots,\nu(\varphi_n)\big)}$ is invertible, or, equivalently, this upper-triangular matrix has units on the diagonal. Moreover, it turns out that if $\phi$ is invertible, then the inverse $\phi^{-1}$ coincides with the adjoint map $\phi^{\vee}$ of the induced map $\phi\colon\widetilde{\Omega}^n_{\LL^n(A)}\to \widetilde{\Omega}^n_{\LL^n(A)}$ (denoted also by~$\phi$) with respect to the perfect continuous pairing $$ \LL^n(A) \times \widetilde{\Omega}^n_{\LL^n(A)} \lrto A\,,\qquad (f, \omega) \longmapsto \res(f \omega)\,. $$ The generalization of the invertibility criterion from $n=1$ to $n\geqslant 1$ is non-trivial. It is not clear how to apply methods from~\cite{Mo} and~\cite{MPM} in the case $n>1$ for an arbitrary commutative ring $A$ (cf.~\cite[Rem.\,5.2]{OZ1} for $n=2$). Our proof of the criterion is based on a different method: we use the invariance of the residue map and the above representability result together with the theory of thick ind-cones from~\cite{GOMS}. \medskip Finally, the description of the inverse of a continuous endomorphism $\phi$ as the adjoint map immediately gives an explicit formula for~$\phi^{-1}=\phi^{\vee}$; see Remark~\ref{rmk:expladj} for this explicit formula for $\phi^{\vee}$. For simplicity, let us give this formula when $n=1$. Even in this case, the formula does not seem to have been known before. If $\phi$ is an endomorphism as in formula~\eqref{map} given by an element $\varphi\in A((t))^*$ with $\nu(\varphi)=1$, then for any element $f \in A((t))$, we have $$ \mbox{$\phi^{-1}(f)=\phi^{\vee}(f)= \sum\limits_{l\in\z}\res\big(f\varphi^{-l-1} (\partial \varphi/ \partial t)dt \big)t^l$}\,. $$ This implies the following peculiar identity (for the general case $n\geqslant 1$, see formula~\eqref{eq:identity}): $$ f=\mbox{$\sum\limits_{l\in\z}\res\big(f\varphi^{-l-1} (\partial \varphi/ \partial t) dt \big)\varphi^l$}\,. $$ We do not know any elementary (direct computational) proof of this formula even when~$A$ is a field. \medskip The results of this paper will be applied in a forthcoming paper~\cite{GOnext} to further investigations of the $n$-dimensional Contou-Carr\`ere symbol, which was studied by Contou-Carr\`{e}re himself in~\cite{CC1, CC2} for the case~$n=1$, by the second named author and Zhu in~\cite{OZ1} for the case $n=2$, and by the authors in~\cite{GO1, GOMilnor, GOMS} for arbitrary~$n$. \medskip We are grateful to the referee for his comments. \section{Preliminaries and notation} \label{prelim} In this section, we introduce notation and recall, mainly from~\cite{GOMS}, facts on the ring of iterated Laurent series, the topology on this ring, the group of its invertible elements, its differential forms, and on the higher-dimensional residue map. \medskip For short, by a ring, we mean a commutative associative unital ring and similarly for algebras. Throughout the paper, $A$ denotes a ring. Let $n\geqslant 1$ be a positive integer. Let ${\LL(A):=A((t))=A[[t]][t^{-1}]}$ be the ring of Laurent series over $A$ and let $$ \LL^n(A):=\LL\big(\LL^{n-1}(A)\big)=A((t_1))\ldots((t_n)) $$ be the ring of iterated Laurent series over $A$. We call the elements $t_1,\ldots,t_n$ variables or parameters. Put $t^{l}:=t_1^{l_1}\ldots t_n^{l_n}$ for an element ${l=(l_1,\ldots,l_n)\in\z^n}$. 
More generally, for any collection $\varphi=(\varphi_1,\ldots,\varphi_n)$, where $\varphi_i\in\LL^n(A)$, and for an element ${l=(l_1,\ldots,l_n)\in\z^n}$, we put $\varphi^l:=\varphi_1^{l_1}\ldots\varphi_n^{l_n}$. Given a formal series $f=\sum\limits_{l\in \z^n} a_l t^{l}$, let the support $\supp(f)\subset\z^n$ of~$f$ be the set of all $l\in\z^n$ such that $a_l\ne 0$. Elements of $\LL^n(A)$ are described explicitly in terms of their supports as follows. \medskip Define iteratively the sets $$ \Lambda_1:=\z\,,\qquad \Lambda_n:=\Map(\z,\Lambda_{n-1})\times \z\,, $$ where $\Map(\z,\Lambda_{n-1})$ denotes the set of all maps from $\z$ to $\Lambda_{n-1}$. The set $\Lambda_n$ has a natural group structure defined inductively by sums of integers and point-wise sums of group valued functions. For an element $\lambda\in\z$, define the set $$ \z_{\lambda}=\z^1_{\lambda}:=\{l\in \z\,\mid\,l\geqslant \lambda\}\subset \z\,. $$ For an element $\lambda=(\lambda',\lambda_n)\in\Lambda_n$ with $n\geqslant 2$, define inductively the set $$ {\z^n_{\lambda}:=\bigcup\limits_{l_n\geqslant\lambda_n}\z^{n-1}_{\lambda'(l_n)}\times\{l_n\}}\subset\z^n\,. $$ Explicitly, we have $$ \Lambda_n=\big\{(\lambda_1,\ldots,\lambda_n)\;\mid\;\lambda_p \, \colon \, \z^{n-p}\lrto \z \quad \mbox{for} \quad 1\leqslant p\leqslant n-1,\quad \lambda_n\in\Z \big\} $$ and for an element $\lambda=(\lambda_1,\ldots,\lambda_n)\in\Lambda_n$, we have $$ \z^n_{\lambda}=\big\{(l_1,\ldots,l_n)\in\Z^n\;\mid\;l_n\geqslant \lambda_n,\,l_{n-1}\geqslant \lambda_{n-1}(l_n),\,\ldots,\,l_{1}\geqslant \lambda_{1}(l_2,\ldots,l_n)\big\}\,. $$ Given two subsets $X,Y\subset \z^n$, let $X+Y$ be the subset of $\z^n$ that consists of all pair-wise sums $x+y\in\z^n$, where $x\in X$ and $y\in Y$. Similarly, let the subset $-X\subset\z^n$ consist of all elements $-x$, where $x\in X$. For short, we put $0:=(0,\ldots,0)$. The ring of iterated Laurent series~$\LL^n(A)$ consists of all series $f=\sum\limits_{l\in \z^n} a_l t^{l}$ such that $\supp(f)\subset \z^n_{\lambda}$ for some $\lambda=(\lambda',\lambda_n)\in\Lambda_n$. Explicitly, we have that $f=\sum\limits_{i\geqslant{\lambda_n}}g_i t_n^i$, where $g_i\in \LL^{n-1}(A)$, $i\geqslant \lambda_n$, are such that $\supp(g_i)\subset\z^{n-1}_{\lambda'(i)}$. Given two iterated Laurent series $f=\sum\limits_{l\in\z^n_{\lambda}}a_lt^l$ and $g=\sum\limits_{l\in\z^n_{\mu}}b_lt^l$, we have that ${fg=\sum\limits_{l\in\z^n}c_lt^l}$, where $c_l=\sum\limits_{p+q=l} a_p b_q$. Since $fg$ is a well-defined iterated Laurent series, we obtain that for all $\lambda,\mu\in\Lambda_n$, the summation map ${\z^n_{\lambda}\times\z^n_\mu\to \z^n}$ has finite fibers and there is $\rho\in\Lambda_n$ such that ${\z^n_{\lambda}+\z^n_{\mu}\subset \z^n_{\rho}}$. \medskip One has a natural topology on $\LL^n(A)$ such that $\LL^n(A)$ is a topological group with the group structure given by sums of iterated Laurent series. The topology was first introduced by Parshin in~\cite{P1} in the case when $A$ is a finite field. For properties of this topology in the general case, see~\cite[\S\,3.2]{GOMS}. The base of open neighborhoods of zero is given by $A$-submodules $U_{\lambda}\subset\LL^n(A)$, where $\lambda\in\Lambda_n$ and $U_{\lambda}$ consists of all elements $f\in\LL^n(A)$ such that $\supp(f)\cap (-\z^n_{\lambda})=\varnothing$. Note that this definition of the base of the topology looks different from the one in~\cite{GOMS}, but the two definitions are evidently equivalent. 
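To illustrate the role of the sets $\z^n_{\lambda}$, consider the case $n=2$: the series $$ \mbox{$\sum\limits_{j\geqslant 0}t_1^{-j}t_2^{j}$} $$ is an element of $\LL^2(A)=A((t_1))((t_2))$, since its support $\{(-j,j)\,\mid\,j\geqslant 0\}$ is contained in $\z^2_{\lambda}$ for $\lambda=(\lambda_1,\lambda_2)\in\Lambda_2$ with $\lambda_2=0$ and $\lambda_1(l_2)=-l_2$. By contrast, the formal series $\sum\limits_{j\geqslant 0}t_1^{j}t_2^{-j}$ is not an element of $\LL^2(A)$: the second coordinates of its support are unbounded from below, so the support is not contained in $\z^2_{\mu}$ for any $\mu\in\Lambda_2$. 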
A countable set $\{f_i\}$, $i\in\N$, of iterated Laurent series $f_i\in\LL^n(A)$ tends to zero if for any $\lambda\in\Lambda_n$, all but finitely many $i\in\N$ satisfy $\supp(f_i)\cap(-\z^n_{\lambda})=\varnothing$, or, equivalently, $0\notin\supp(f_i)+\z^n_{\lambda}$. The topological group $\LL^n(A)$ is complete, see~\cite[Rem.\,3.4]{GOMS}, and a series $\sum\limits_{i\in\N}f_i$ of elements of $\LL^n(A)$ converges if and only if the countable set $\{f_i\}$, $i\in\N$, tends to zero, see~\cite[\S\,3.3]{GOMS} (if this holds, the result of the summation does not depend on the order of summation). Note that when $n\geqslant 2$, the product of Laurent series is not continuous with respect to this topology, whence~$\LL^n(A)$ is not a topological ring. Nevertheless, the product with a fixed element is a continuous homomorphism from $\LL^n(A)$ to itself. \medskip Define inductively the lexicographical order on $\z^n$ as follows: we have $(l_1,\ldots,l_n)\leqslant (l'_1,\ldots,l'_n)$ if and only if either $l_n < l'_n$, or $l_n=l'_n$ and $(l_1, \ldots, l_{n-1}) \leqslant (l'_1, \ldots, l'_{n-1})$. Clearly, the order is invariant under translations on the group $\z^n$. Let~$\uz$ denote the constant sheaf in the Zariski topology associated with the constant presheaf~$\z$ on~$\Spec(A)$. Thus $\uz(A)$ consists of all locally constant functions on $\Spec(A)$ with values in $\z$. Let $\LL^n(A)^*$ be the group of invertible elements in the ring $\LL^n(A)$. By~\cite[\S\,4.2]{GOMS}, for any element $f\in\LL^n(A)^*$, there is a finite decomposition into a product of rings ${A\simeq\prod\limits_{i=1}^N A_i}$ with the following property. Let $f=\prod\limits_{i=1}^N f_i$ with $f_i=\sum\limits_{l\in\z^n}a_{i,l}t^l$, $a_{i,l}\in A_i$, be the decomposition of $f$ with respect to the arising decomposition $\LL^n(A)\simeq\prod\limits_{i=1}^N\LL^n(A_i)$. Then for each $i$, $1\leqslant i\leqslant N$, there is an element $l_i\in\z^n$ such that for any $l<l_i$, the element $a_{i,l}\in A_i$ is nilpotent and the element $a_{i,l_i}\in A_i$ is invertible. Moreover, the iterated Laurent series ${\sum\limits_{l<l_i}a_{i,l}t^l\in\LL^n(A_i)}$ is nilpotent. Let $\underline{l}\in\uz^n(A)$ be the locally constant function on $\Spec(A)$ whose value on $\Spec(A_i)$ is $l_i$ for any $i$, $1\leqslant i\leqslant N$. Then the map $$ \nu\,:\,\LL^n(A)^*\longrightarrow \uz^n(A)\,,\qquad f\longmapsto \underline{l}\,, $$ is a homomorphism of groups. These facts were first proved by Contou-Carr\`ere in the case $n=1$, see~\cite{CC1} and~\cite{CC2}, and by the second named author and Zhu in the case $n=2$, see~\cite{OZ1}. \medskip Given a functor $F$ on the category of rings, by $F_A$ denote the restriction of $F$ to the category of $A$-algebras. If a functor $F$ is represented by an (ind-)scheme, then we denote this (ind-)scheme also by $F$. By a subgroup $G\subset F$, we mean a morphism of functors $G\to F$ such that for any ring $A$, the map $G(A)\to F(A)$ is injective. By~$L^n\ga$ and $L^n\gm$ denote the group functors on the category of rings $A\mapsto \LL^n(A)$ and $A\mapsto \LL^n(A)^*$, respectively. These functors are represented by ind-affine schemes that have many nice geometric properties, see~\cite[\S\S\,6.2, 6.3]{GOMS}. Explicitly, $L^n\ga$ is represented by the ind-closed subscheme $$ \mbox{``$\varinjlim\limits_{\lambda\in \Lambda_{n}}$''}\,\Ab^{\z^n_{\lambda}}\subset\Ab^{\z^n}\,, $$ which is an ind-affine space, where for a set $E$, by $\Ab^E$ we denote the affine space whose set of coordinates is bijective with $E$. 
An iterated Laurent series $\sum\limits_{l\in \z^n}a_lt^l\in\LL^n(A)$ corresponds to the point~$(a_l)\in\Ab^{\z^n}(A)$, where $l\in \z^n$. Let $(L^n\gm)^0$ be the kernel of the morphism of group functors $\nu\colon L^n\gm\to\uz^n$. The group functor $(L^n\gm)^0$ is represented by an absolutely connected ind-affine scheme over~$\z$, see~\cite[Def.\,5.18]{GOMS} and~\cite[Prop.\,6.13(iv)]{GOMS}. \medskip By $\widetilde{\Omega}^1_{\LL^n(A)}$ denote the quotient of the $\LL^n(A)$-module of absolute K\"ahler differentials of the ring $\LL^n(A)$ by the $\LL^n(A)$-submodule generated by all elements of the form ${df-\sum\limits_{i=1}^n\frac{\partial f}{\partial t_i}}dt_i$. Then $\widetilde{\Omega}^1_{\LL^n(A)}$ is a free $\LL^n(A)$-module of rank $n$. For each $i\geqslant 0$, put $\widetilde{\Omega}^i_{\LL^n(A)}:=\bigwedge^i_{\LL^n(A)}\widetilde{\Omega}^1_{\LL^n(A)}$. Note that one has a well-defined de Rham differential $d\colon \widetilde{\Omega}^i_{\LL^n(A)}\to \widetilde{\Omega}^{i+1}_{\LL^n(A)}$, $i\geqslant 0$. We have an $A$-linear {\it residue map} $$ \res\,:\,\widetilde{\Omega}^n_{\LL^n(A)}\longrightarrow A\,,\qquad \mbox{$\sum\limits_{l\in\z^n}a_lt^l\cdot dt_1\wedge\ldots\wedge dt_n\longmapsto a_{-1\ldots-1}$}\,. $$ If $A$ is a $\Q$-algebra, then the residue map induces an isomorphism (see, e.g.,~\cite[Lem.\,3.12]{GOMS}) $$ \widetilde{\Omega}^n_{\LL^n(A)}/d\widetilde{\Omega}^{n-1}_{\LL^n(A)}\stackrel{\sim}\longrightarrow A\,. $$ \section{Continuous and functorial additive homomorphisms} We compare in this section continuous additive homomorphisms $\LL^n(A)\to\LL^m(A)$ with morphisms of group functors $L^n\ga\to L^m\ga$, see Proposition~\ref{prop:funccont} and Corollary~\ref{cor:funccontrg}. As an application, we describe in Proposition~\ref{prop:valuation} how the homomorphism ${\nu\colon\LL^n(A)^*\to\uz(A)^n}$ is changed under continuous homomorphisms of $A$-algebras of iterated Laurent series. \medskip We start with the following simple facts. \begin{lemma}\label{cor:conv} \hspace{0cm} \begin{itemize} \item[(i)] For any element $f=\sum\limits_{l\in\z^n}a_lt^l$ in $\LL^n(A)$, the series $\sum\limits_{l\in\z^n}a_lt^l$ converges to~$f$ in the topological group $\LL^n(A)$. \item[(ii)] The set of all Laurent polynomials is dense in $\LL^n(A)$. \end{itemize} \end{lemma} \begin{proof} $(i)$ Consider an element $\lambda\in\Lambda_n$ and the corresponding open neighborhood of zero $U_{\lambda}\subset\LL^n(A)$. Since the summation map ${\supp(f)\times\z^n_{\lambda}\to \z^n}$ has finite fibers, there is a finite subset $S\subset\supp(f)$ such that $0\notin \big(\supp(f)\smallsetminus S\big)+\z^n_{\lambda}$. Hence for any finite subset $T\subset \supp(f)$ such that $S\subset T$, the partial sum $f_T:=\sum\limits_{l\in T}a_lt^l$ satisfies the condition $f-f_T\in U_{\lambda}$. Therefore partial sums of the series $\sum\limits_{l\in\z^n}a_lt^l$ tend to~$f$ (with respect to any linear order on the summands of the series). $(ii)$ This follows directly from item~$(i)$. \end{proof} \begin{lemma}\label{lemma:intersect} For any subset $X\subset \z^n$, the following conditions are equivalent: \begin{itemize} \item[(i)] for some $\lambda\in\Lambda_n$, we have $X\subset \z^n_{\lambda}$; \item[(ii)] for any $\mu\in\Lambda_n$, the intersection $X\cap (-\z^n_{\mu})$ is finite. \end{itemize} \end{lemma} \begin{proof} Since the summation map $\z^n_{\lambda}\times \z^n_{\mu}\to\z^n$ has finite fibers, $(i)$ implies $(ii)$. Let us show by induction on $n$ that $(ii)$ implies $(i)$. 
The base case $n=1$ is obvious; let us make the induction step from $n-1$ to $n$, where $n\geqslant 2$. Note that there are equalities $$ \z^n=\bigcup\limits_{\lambda\in\Lambda_n}\z^n_{\lambda}=\bigcup\limits_{\lambda\in\Lambda_n}(-\z^n_{\lambda})\,. $$ Using the analogous fact with $n-1$, we see that there is $\lambda_n\in\z$ such that for any $l_n<\lambda_n$, we have $X\cap \big(\z^{n-1}\times\{l_n\}\big)=\varnothing$. Moreover, for all $l_n\geqslant \lambda_n$ and $\rho\in\Lambda_{n-1}$, the intersection $X\cap \big( (-\z^{n-1}_{\rho})\times\{l_n\}\big)$ is finite. Thus we conclude by the induction hypothesis applied to the intersections $X\cap\big(\z^{n-1}\times\{l_n\}\big)$ with $l_n\geqslant \lambda_n$. \end{proof} \medskip Let $\ga$ be the additive group scheme $\Spec\big(\z[T]\big)$ over $\z$. Define the ind-affine group scheme $\ga^{\oplus\N}:=\mbox{``$\varinjlim\limits_{i\in \N}$''}(\ga)^i$. Note that the group functor~$\ga^{\oplus \N}$ has a natural $\ga$-module structure in the following sense: for any ring $B$, the group $(\ga^{\oplus\N})(B)\simeq B^{\oplus\N}$ has a canonical structure of a $B$-module, which is functorial with respect to~$B$. Similarly, the group functor $L^n\ga$ has also a natural $\ga$-module structure. We say that a morphism of group functors $L^n\ga\to\ga^{\oplus\N}$ is {\it $\ga$-linear} if it respects the corresponding $B$-module structures for any ring $B$. One defines similarly $(\ga)_A$-linear morphisms of group functors $(L^n\ga)_A\to(\ga^{\oplus\N})_A$. Given a $(\ga)_A$-linear morphism of group functors $$ \pi\,:\,(L^n\ga)_A\longrightarrow (\ga^{\oplus\N})_A\,, $$ we have the kernel $\Ker\big(\pi(A)\big)\subset \LL^n(A)$ of the evaluation $\pi(A)$ of $\pi$ at $A$ $$ \pi(A)\,:\, (L^n\ga)_A(A)=\LL^n(A)\longrightarrow (\ga^{\oplus\N})_A(A)=A^{\oplus \N}\,. $$ Clearly, $\Ker\big(\pi(A)\big)$ is an $A$-submodule of $\LL^n(A)$. The following statement provides an important relation between the ind-affine scheme~$(L^n\ga)_A$ and the topology on $\LL^n(A)$. \begin{lemma}\label{lemma:basechar} The collection of the subgroups $\Ker\big(\pi(A)\big)\subset \LL^n(A)$, where ${\pi\colon (L^n\ga)_A\to (\ga^{\oplus\N})_A}$ runs over all $(\ga)_A$-linear morphisms of group functors, forms a base of open neighborhoods of zero in the topological group $\LL^n(A)$. \end{lemma} \begin{proof} Let us show that for any $\pi$ as in the lemma, the kernel $\Ker\big(\pi(A)\big)$ is an open subgroup of $\LL^n(A)$. For any set $E$, a $(\ga)_A$-linear morphism of group schemes ${(\Ab^E)_A\to (\ga)_A}$ over~$A$ is given by a linear functional $(x_e)\mapsto\sum\limits_{e\in E} c_{e} x_e$, where $c_e\in A$ are scalars, $x_e$ are coordinates on $\Ab^E$, and all but finitely many $c_e$ equal zero. Since the functor~$L^n\ga$ is represented by a formal direct limit of group schemes~$\Ab^{\z^n_{\mu}}$, where $\mu\in\Lambda_n$, we see that a $(\ga)_A$-linear morphism of group functors ${\chi\colon (L^n\ga)_A\to (\ga)_A}$ is given by a linear functional $\sum\limits_{l\in\z^n}a_lt^l\mapsto\sum\limits_{l\in \z^n} c_{l} a_l$, where $c_l\in A$ and for any $\mu\in\Lambda_n$, the intersection $\supp(\chi)\cap \z^n_{\mu}$ is finite. Here, $\supp(\chi)\subset \z^n$ is the set of all $l\in \z^n$ such that $c_l\ne 0$. It follows that a $(\ga)_A$-linear morphism of group functors $\pi\colon (L^n\ga)_A\to (\ga^{\oplus\N})_A$ is given by a collection of $(\ga)_A$-linear morphisms of group functors $(\chi_1,\ldots,\chi_i,\ldots)$, where $\chi_i\colon (L^n\ga)_A\to (\ga)_A$, $i\geqslant 1$. 
Note that not every collection $(\chi_1,\ldots,\chi_i,\ldots)$ corresponds to a morphism of group functors $\pi$ as above. In particular, for any $\mu\in\Lambda_n$, the intersection of the set $X:=\bigcup\limits_{i\in \N}\supp(\chi_i)$ with $\z^n_{\mu}$ should be finite. This is because the restrictions of all but finitely many~$\chi_i$ to~$\Ab^{\z^n_{\mu}}$ are equal to zero. By Lemma~\ref{lemma:intersect}, we have that $X\subset(-\z^n_{\lambda})$ for some $\lambda\in\Lambda_n$. Therefore the kernel $\Ker\big(\pi(A)\big)$ contains the open subgroup $U_{\lambda}\subset\LL^n(A)$, whence $\Ker\big(\pi(A)\big)$ is open. Finally, take an open subset $U_{\lambda}$, where $\lambda\in\Lambda_n$, and choose any bijection between~$-\z^n_{\lambda}$ and~$\N$. Then the collection of linear functionals $\sum\limits_{l\in\z^n}a_lt^l\mapsto a_l$, where $l\in-\z^n_{\lambda}$, defines a morphism of group functors $\pi\colon (L^n\ga)_A\to (\ga^{\oplus\N})_A$ such that $\Ker\big(\pi(A)\big)=U_{\lambda}$. This proves the lemma. \end{proof} \medskip Here is our main object of study. \begin{defin}\label{def:hom} Denote by $$ {\Hom^{\rm c}_A\big(\LL^n(A),\LL^m(A)\big)} $$ the set of all continuous $A$-linear homomorphisms of additive topological groups~${\LL^n(A)\to\LL^m(A)}$. Denote by $$ {\Hom^{\rm c,alg}_{A}\big(\LL^n(A),\LL^m(A)\big)} $$ the set of all continuous homomorphisms of $A$-algebras $\LL^n(A)\to\LL^m(A)$. \end{defin} Also, let ${\Hom\big((L^n\ga)_A,(L^m\ga)_A\big)}$ be the set of all $(\ga)_A$-linear morphisms of group functors~${(L^n\ga)_A\to(L^m\ga)_A}$ and let $\Hom^{\rm rg}\big((L^n\ga)_A,(L^m\ga)_A\big)$ be the set of all \mbox{$(\ga)_A$-linear} morphisms of ring functors~$(L^n\ga)_A\to(L^m\ga)_A$. \begin{prop}\label{prop:funccont} Evaluation at $A$ defines the bijection $$ {\Hom\big((L^n\ga)_A,(L^m\ga)_A)}\stackrel{\sim}\longrightarrow \Hom^{\rm c}_A\big(\LL^n(A),\LL^m(A)\big)\,,\qquad \alpha\longmapsto\alpha(A)\,. $$ \end{prop} \begin{proof} Lemma~\ref{lemma:basechar} implies directly that for any $(\ga)_A$-linear morphism of group functors $\alpha\colon(L^n\ga)_A\to (L^m\ga)_A$, the evaluation $\alpha(A)\colon \LL^n(A)\to\LL^m(A)$ at $A$ is continuous. Clearly, the map $\alpha(A)$ is $A$-linear. Thus the map in the proposition is well-defined. For the same reason, for any $A$-algebra $B$, the evaluation $\alpha(B)$ belongs to $\Hom_B^{\rm c}\big(\LL^n(B),\LL^m(B)\big)$. By Lemma~\ref{cor:conv}$(ii)$, the homomorphism~$\alpha(B)$ is defined uniquely by the values $\alpha(B)(bt^l)=b\,\alpha(B)(t^l)$, where $b\in B$, $l\in\z^n$. Therefore $\alpha(B)$ is defined uniquely by $\alpha(A)$. This proves that the map in the proposition is injective. Finally, we prove the surjectivity. Take a homomorphism $\phi\in\Hom_A^{\rm c}\big(\LL^n(A),\LL^m(A)\big)$. For any $\lambda\in\Lambda_n$, the countable set $\{t^l\}$, ${l\in\z^n_{\lambda}}$, tends to zero in~$\LL^n(A)$. Since $\phi$ is continuous, the countable set $\{\phi(t^l)\}$, $l\in\z^n_{\lambda}$, tends to zero in $\LL^m(A)$. Let~$B$ be an $A$-algebra. For an element $f=\sum\limits_{l\in\z^n}b_lt^l\in\LL^n(B)$, put ${\alpha(B)(f):=\sum\limits_{l\in\z^n}b_l\phi(t^l)}$, which is a convergent series in the topological group $\LL^m(B)$, because for any $\lambda\in\Lambda_n$, the countable set $\{\phi(t^l)\}$, $l\in\z^n_{\lambda}$, tends to zero in $\LL^m(B)$ as well. This defines a morphism ${\alpha\in\Hom\big((L^n\ga)_A,(L^m\ga)_A)}$. By Lemma~\ref{cor:conv}$(i)$, we have $\alpha(A)=\phi$. This finishes the proof. 
\end{proof} \medskip Now let us consider homomorphisms that respect products of iterated Laurent series. We will use the following simple observation. \begin{lemma}\label{lem:mult} Suppose that a homomorphism $\phi\in\Hom^{\rm c}_A\big(\LL^n(A),\LL^m(A)\big)$ satisfies $$ \phi(t^l)=\phi(t_1)^{l_1}\ldots\phi(t_n)^{l_n} $$ for any element $l=(l_1,\ldots,l_n)\in\z^n$. Then $\phi$ is a homomorphism of rings. \end{lemma} \begin{proof} The condition of the lemma implies that $\phi$ respects products of Laurent polynomials. Recall that the product with a fixed iterated Laurent series is continuous,~see~\cite[Lem.\,3.5(i)]{GOMS}, and that the set of Laurent polynomials is dense in $\LL^n(A)$ by Lemma~\ref{cor:conv}$(ii)$. Applying this twice, we obtain first that $\phi$ respects products of Laurent polynomials with iterated Laurent series and then that $\phi$ respects products of arbitrary elements of~$\LL^n(A)$. \end{proof} Proposition~\ref{prop:funccont} implies the following fact. \begin{corol}\label{cor:funccontrg} Evaluation at $A$ defines the bijection $$ {\Hom^{\rm rg}\big((L^n\ga)_A,(L^m\ga)_A\big)}\stackrel{\sim}\longrightarrow \Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^m(A)\big)\,,\qquad \alpha\longmapsto\alpha(A)\,. $$ \end{corol} \begin{proof} By Proposition~\ref{prop:funccont}, the map is well-defined and injective. Also by Proposition~\ref{prop:funccont}, for any ${\phi\in \Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^m(A)\big)}$, there is $\alpha\in{\Hom\big((L^n\ga)_A,(L^m\ga)_A\big)}$ such that $\alpha(A)=\phi$. Since $\phi$ is a homomorphism of rings, it follows from Lemma~\ref{lem:mult} that for any $A$-algebra $B$, the continuous $B$-linear map ${\alpha(B)\colon \LL^n(B)\to\LL^m(B)}$ is a homomorphism of rings as well, whence we have $\alpha\in{\Hom^{\rm rg}\big((L^n\ga)_A,(L^m\ga)_A\big)}$. \end{proof} \begin{rmk} If a ring $A$ is of zero characteristic, that is, the natural homomorphism $\z\to A$ is injective, then any endomorphism of the group functor~$(\ga)_A$ is $(\ga)_A$-linear (see, e.g.,~\cite[Lem.\,6.20(i)]{GOMS}). This implies that in this case, one can omit in Lemma~\ref{lemma:basechar}, Proposition~\ref{prop:funccont}, and Corollary~\ref{cor:funccontrg} the condition that morphisms of group functors (or ring functors) are $(\ga)_A$-linear (see the beginning of the proof of Lemma~\ref{lemma:basechar}). \end{rmk} \medskip Here is an application of Proposition~\ref{prop:funccont} and Corollary~\ref{cor:funccontrg}. For an arbitrary ring~$R$, by~$\M_{m\times n}(R)$ denote the set of all $(m\times n)$-matrices with entries in $R$. Consider elements of $R^n$ as columns. \begin{defin}\label{def:ups} For any $\phi\in\Hom_A^{\rm c,alg}\big(\LL^n(A),\LL^m(A)\big)$, put $$ \Upsilon(\phi):=\big(\nu(\phi(t_1)),\ldots,\nu(\phi(t_n))\big)\in\M_{m\times n}\big(\uz(A)\big)\,. $$ \end{defin} \begin{prop}\label{prop:valuation} For any $\phi\in\Hom_A^{\rm c,alg}\big(\LL^n(A),\LL^m(A)\big)$ and any $f\in\LL^n(A)^*$, there is an equality in $\uz(A)^n$ $$ \nu\big(\phi(f)\big)=\Upsilon(\phi)\cdot \nu(f)\,. $$ \end{prop} \begin{proof} By Corollary~\ref{cor:funccontrg}, there is a unique $\alpha\in\Hom^{\rm rg}\big((L^n\ga)_A,(L^m\ga)_A\big)$ such that~${\alpha(A)=\phi}$. Since $\alpha$ is a morphism of ring functors, $\alpha$ induces a morphism of group functors ${(L^n\gm)_A\to(L^m\gm)_A}$, which we denote also by $\alpha$ for simplicity. 
Let us show that the morphism $\alpha\colon (L^n\gm)_A\to (L^m\gm)_A$ sends the group subfunctor $(L^n\gm)^0_A\subset (L^n\gm)_A$ to the group subfunctor $(L^m\gm)^0_A\subset (L^m\gm)_A$. Since the functor~$(L^n\gm)^0$ is represented by an absolutely connected ind-affine scheme over~$\z$, the group functor $(L^n\gm)^0_A$ is represented by an absolutely connected ind-affine scheme over $A$, see~\cite[Def.\,5.18]{GOMS} and~\cite[Prop.\,6.13(iv)]{GOMS}. It follows that any morphism of group functors $(L^n\gm)^0_A\to \uz_A$ over $A$ is equal to zero, see~\cite[Prop.\,5.23]{GOMS}. Therefore the composition $$ (L^n\gm)^0_A\stackrel{\alpha}\longrightarrow (L^m\gm)_A\stackrel{\nu}\longrightarrow \uz^m_A $$ is also equal to zero. Hence~$\alpha$ sends~$(L^n\gm)^0_A$ to $(L^m\gm)^0_A$. We obtain that $\alpha$ defines a homomorphism of group functors from the quotient $$ \nu\,:\,(L^n\gm)_A/(L^n\gm)_A^0\stackrel{\sim}\longrightarrow \uz^n_A $$ to the quotient $$ \nu\,:\,(L^m\gm)_A/(L^m\gm)_A^0\stackrel{\sim}\longrightarrow \uz^m_A\,, $$ which is given by a matrix in $\M_{m\times n}\big(\uz(A)\big)$. It is easy to see that this matrix is nothing but $\Upsilon(\phi)$, which finishes the proof. \end{proof} \begin{corol}\label{cor:hommon} When $m=n$, the map $$ \Upsilon\,:\, \Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^n(A)\big)\longrightarrow \M_{n\times n}\big(\uz(A)\big) $$ is a homomorphism of monoids. In particular, if an endomorphism ${\phi\in\Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^n(A)\big)}$ is invertible, then the matrix ${\Upsilon(\phi)\in\M_{n\times n}\big(\uz(A)\big)}$ is invertible as well. \end{corol} \section{Continuous homomorphisms and changes of parameters} In this section, we describe all possible changes of parameters $t_1,\ldots,t_n$ under continuous homomorphisms of $A$-algebras $\phi\colon\LL^n(A)\to\LL^m(A)$, that is, we describe all possible collections $\big(\phi(t_1),\ldots,\phi(t_n)\big)$ of elements in $\LL^m(A)^*$, see Theorem~\ref{prop:contchange}. Such a collection determines uniquely the initial homomorphism~$\phi$. \medskip We will use the following elementary fact on matrices with integral entries. \begin{lemma}\label{lem:upper} For any matrix $M\in\M_{m\times n}(\z)$, the following conditions are equivalent: \begin{itemize} \item[(i)] the map $$ M\,:\,\z^n\longrightarrow \z^m\,,\qquad l\longmapsto M\cdot l\,, $$ strictly preserves the lexicographical order, that is, if $l>0$, then $M\cdot l>0$; \item[(ii)] the matrix $M$ is in column echelon form with positive leading entries, that is, the matrix~$M$ has a form $$ \begin{pmatrix} x_{11}& x_{12}&\ldots &x_{1n} \\ \vdots& \vdots& &\vdots \\ x_{p_1, 1}&\vdots&&\vdots\\ 0&x_{p_2,2}&&\vdots&\\ \vdots& 0&&x_{p_n,n}\\ \vdots&\vdots&&0 \\ \vdots&\vdots&&\vdots \\ 0&0&\ldots&0 \end{pmatrix} $$ where $1\leqslant p_1<\ldots <p_n\leqslant m$ and $x_{p_i,i}>0$ for any $i$, $1\leqslant i\leqslant n$; \item[(iii)] the map $M\colon \z^n\to\z^m$ is injective and for any $\lambda\in\Lambda_n$, there is $\mu\in\Lambda_m$ such that $M\cdot \z^n_{\lambda}\subset \z^m_{\mu}$. \end{itemize} \end{lemma} \begin{proof} $(i)\Rightarrow (ii)$ The proof is by induction on $n$. The base $n=1$ is obvious. Let us make the induction step from $n-1$ to $n$, where $n\geqslant 2$. The inequality ${(0,\ldots,0,1,0)<(0,\ldots,0,0,1)}$ implies that either $p_n>p_{n-1}$ and $x_{p_n,n}>0$, or $p_n=p_{n-1}=p$ and ${x_{p,n}>x_{p,n-1}}$. Suppose the second case holds. Since $x_{p,n-1}\ne 0$ (actually, $x_{p,n-1}> 0$), there is $k\in\z$ such that $kx_{p,n-1}+x_{p,n}<0$. 
Then we obtain a contradiction, because ${(0,\ldots,0,k,1)>0}$ and $M\cdot (0,\ldots,0,k,1)^{\top}<0$. $(ii)\Rightarrow (iii)$ It follows from the explicit form of the matrix $M$ in item~$(ii)$ that the rank of $M$ (over $\Q$) equals $n$. Thus the map ${M\colon\z^n\to\z^m}$ is injective. Let us prove by induction on $m$ that the second condition in item~$(iii)$ also holds true. We consider the base case $m=0$, in which case the statement is satisfied, being void. Let us make the induction step from $m-1$ to $m$, where $m\geqslant 1$. Suppose that $p_n<m$. Then the image of the map $M\colon \z^n\to\z^m$ is contained in the subgroup ${\z^{m-1}=\{(*,\ldots,*,0)\}}$. By the induction hypothesis, there is $\kappa\in\Lambda_{m-1}$ such that $M\cdot \z^n_{\lambda}\subset\z^{m-1}_{\kappa}$. We let $\mu=(\mu',0)\in\Lambda_m$, where $\mu'\colon\z\to\Lambda_{m-1}$ is any map such that $\mu'(0)=\kappa$. Now suppose that $p_n=m$. By definition, we have $\lambda=(\lambda',\lambda_n)$, where ${\lambda'\colon\z\to\Lambda_{n-1}}$ and $\lambda_n\in\z$. Put $\mu_m:=\lambda_nx_{mn}\in\z$. Let $M'$ be an \mbox{$(m-1)\times(n-1)$-matrix} which is equal to $M$ without the last row and the last column. By the induction hypothesis, for any $l\geqslant\lambda_n$, there is $\rho(l)\in\Lambda_{m-1}$ such that $M'\cdot \z^{n-1}_{\lambda'(l)}\subset\z^{m-1}_{\rho(l)}$. Now let $\mu'\colon\z\to\Lambda_{m-1}$ be a map such that for any $l\geqslant \lambda_n$, the element $\mu'(lx_{mn})\in\Lambda_{m-1}$ satisfies the condition $$ \z^{m-1}_{\rho(l)}+l\cdot (x_{1n},\ldots,x_{m-1,n})\subset \z^{m-1}_{\mu'(lx_{mn})}\,. $$ Then we obtain $M\cdot\z^n_{\lambda}\subset\z^m_{\mu}$, where $\mu=(\mu',\mu_m)$. $(iii)\Rightarrow (i)$ Note that an element $l\in\z^n$ satisfies $l>0$ if and only if $l\ne 0$ and there is $\lambda\in\Lambda_n$ such that the set ${\{kl\mid k\in\N\}}$ is contained in $\z^n_{\lambda}$ (this can be shown easily by induction on $n$). This gives the needed implication. \end{proof} \begin{defin}\label{def:plus} Denote by $$ \M_{m\times n}^+(\z)\subset \M_{m\times n}(\z) $$ the set of all matrices that satisfy the equivalent conditions of Lemma~\ref{lem:upper}. Similarly, let~$\M^+_{m\times n}\big(\uz(A)\big)$ be the set of all \mbox{$(m\times n)$-matrices} with entries in $\uz(A)$ such that point-wise on $\Spec(A)$ these matrices belong to~$\M^+_{m\times n}(\z)$. \end{defin} Note that $n\leqslant m$ whenever $\M_{m\times n}^+(\z)$ is non-empty. \medskip Let us say that a set $\{f_i\}$, $i\in \N$, of iterated Laurent series in $\LL^n(A)$ is {\it bounded} if there is $\lambda\in\Lambda_n$ such that for all $i\in\N$, we have ${\supp(f_i)\subset \z^n_{\lambda}}$. Clearly, the pair-wise product of two bounded sets is also bounded. \begin{lemma}\label{lem:zerobound} \hspace{0cm} \begin{itemize} \item[(i)] If a countable set $\{f_i\}$, $i\in\N$, of iterated Laurent series tends to zero, then this set is bounded. \item[(ii)] If a countable set $\{f_i\}$, $i\in\N$, of iterated Laurent series tends to zero and a countable set $\{g_i\}$, $i\in\N$, of iterated Laurent series is bounded, then the diagonal product~$\{f_ig_i\}$, $i\in\N$, tends to zero. \end{itemize} \end{lemma} \begin{proof} $(i)$ Take an element $\mu\in\Lambda_n$. Note that for any $i\in \N$, the intersection ${\supp(f_i)\cap (-\z^n_{\mu})}$ is finite and this intersection is empty for all but finitely many $i\in\N$. This implies finiteness of the intersection $X\cap(-\z^n_{\mu})$, where $X:=\bigcup\limits_{i\in \N}\supp(f_i)$. 
Now, the statement follows from Lemma~\ref{lemma:intersect}. $(ii)$ Let $\lambda\in\Lambda_n$ be an element such that $\supp(g_i)\subset \z^n_{\lambda}$ for all $i\in\N$. Given an element $\mu\in\Lambda_n$, take an element $\rho\in\Lambda_n$ such that ${\z^n_{\lambda}+\z^n_{\mu}\subset\z^n_{\rho}}$. Since the countable set $\{f_i\}$, $i\in\N$, tends to zero, all but finitely many $i\in\N$ satisfy $0\notin\supp(f_i)+\z^n_{\rho}$. The embeddings $$ \supp(f_i)+\z^n_{\rho}\supset \supp(f_i)+\z^n_{\lambda}+\z^n_{\mu}\supset \supp(f_ig_i)+\z^n_{\mu} $$ imply that all but finitely many $i\in\N$ satisfy $0\notin \supp(f_ig_i)+\z^n_{\mu}$. \end{proof} \begin{lemma}\label{lemma:coundpower} For any element $f\in\LL^n(A)^*$ with $\nu(f)=0$, the set $\{f^k\}$, $k\in\Z$, is bounded. \end{lemma} \begin{proof} We have a decomposition $f=c\cdot (1+g)$, where $c\in A^*$ and $g=\sum\limits_{l\in\z^n}a_lt^l$ is an element such that $a_0=0$ and the iterated Laurent series $\sum\limits_{l<0}a_lt^l$ is nilpotent. Clearly, for any integer $k\in\z$, we have that $\supp(f^k)$ is contained in the union $\bigcup\limits_{i\in\N}\supp(g^i)$. By~\cite[Prop.\,3.8]{GOMS}, the countable set $\{g^i\}$, $i\in\N$, tends to zero. Thus by Lemma~\ref{lem:zerobound}$(i)$, the set~$\{g^i\}$, $i\in\N$, is bounded, which proves the lemma. \end{proof} \begin{defin} Denote by $$ \Hb_{m,n}(A)\subset \big(\LL^m(A)^*\big)^{\times n} $$ the set of all collections $(\varphi_1,\ldots,\varphi_n)$ of invertible iterated Laurent series in $t_1,\ldots,t_m$ such that the matrix ${\big(\nu(\varphi_1),\ldots,\nu(\varphi_n)\big)}\in\M_{m\times n}\big(\uz(A)\big)$ belongs to~$\M^+_{m\times n}\big(\uz(A)\big)$. \end{defin} \begin{prop}\label{prop:converg} A collection ${\varphi=(\varphi_1,\ldots,\varphi_n)\in \big(\LL^m(A)^*\big)^{\times n}}$ belongs to $\Hb_{m, n}(A)$ if and only if for any $\lambda\in\Lambda_n$, the countable set $\{\varphi^l\}$, $l\in\z^n_{\lambda}$, tends to zero in~$\LL^m(A)$. \end{prop} \begin{proof} Taking a decomposition of $A$ into a finite product of rings, we may assume that all $\nu(\varphi_i)\in\uz^m(A)$, $1\leqslant i\leqslant n$, are constant, that is, are elements of~$\z^m$. Suppose that $\varphi\in\Hb_{m,n}(A)$. Put $M:={\big(\nu(\varphi_1),\ldots,\nu(\varphi_n)\big)}\in\M^+_{m\times n}(\z)$. For each $i$, $1\leqslant i\leqslant n$, let $f_i\in\LL^m(A)^*$ be the element that satisfies $$ \varphi_i=t^{\nu(\varphi_i)}\cdot f_i\,. $$ In particular, we have $\nu(f_i)=0$. Then for any $l\in\z^n$, there is an equality $\varphi^l=t^{M\cdot l}\cdot f^l$, where $f = (f_1, \ldots, f_n)$. Since the matrix $M$ satisfies condition~$(iii)$ of Lemma~\ref{lem:upper}, we have an embedding $M\colon \z^n_{\lambda}\hookrightarrow\z^m_{\mu}$ for some $\mu\in\Lambda_m$. For any $\rho\in\Lambda_m$, the intersection $\z^m_{\mu}\cap (-\z^m_{\rho})$ is finite. This implies that the countable set $\{t^{M\cdot l}\}$, $l\in \z^n_{\lambda}$, tends to zero. It follows from Lemma~\ref{lemma:coundpower}, together with the fact that the pair-wise product of bounded sets is bounded, that the countable set $\{f^l\}$, where $l\in \z^n$, is bounded. Thus by Lemma~\ref{lem:zerobound}$(ii)$, the countable set $\{t^{M\cdot l}\cdot f^l\}$, $l\in\z^n_{\lambda}$, tends to zero. Now suppose that the countable set $\{\varphi^l\}$, $l\in\z^n_{\lambda}$, tends to zero. Then by Lemma~\ref{lem:zerobound}$(i)$, this set is bounded. On the other hand, we have that $\nu(\varphi^l)\in\supp(\varphi^l)$ and ${\nu(\varphi^l)=M\cdot l\in\z^m}$. Thus there is $\mu\in\Lambda_m$ such that $M\cdot\z^n_{\lambda}\subset \z^m_{\mu}$. 
Suppose that the map $M\colon\z^n\to\z^m$ has a non-zero kernel. Then there is $\lambda\in\Lambda_n$ such that for infinitely many $l\in \z^n_{\lambda}$, we have $M\cdot l=0$. Consequently the countable set~$\{\varphi^l\}$, $l\in\z^n_{\lambda}$, contains infinitely many elements $g$ with $\nu(g)=0$. This contradicts the condition that $\{\varphi^l\}$, $l\in\z^n_{\lambda}$, tends to zero. Thus the map $M\colon\z^n\to\z^m$ is injective and~$M$ satisfies condition~$(iii)$ of Lemma~\ref{lem:upper}. \end{proof} Now we are ready to prove the main result of this section. \begin{theor}\label{prop:contchange} We have a bijection (see Definition~\ref{def:hom}) $$ \Hom^{\rm c,alg}_{A}\big(\LL^n(A),\LL^m(A)\big)\stackrel{\sim}\longrightarrow \Hb_{m,n}(A)\,,\qquad \phi\longmapsto\big(\phi(t_1),\ldots\phi(t_n)\big)\,. $$ \end{theor} \begin{proof} Consider an element $\phi\in\Hom^{\rm c,alg}_{A}\big(\LL^n(A),\LL^m(A)\big)$. For any $\lambda\in\Lambda_n$, the countable set $\{t^l\}$, $l\in\z^n_{\lambda}$, tends to zero in $\LL^n(A)$. Therefore the countable set $\{\phi(t^l)\}$, $l\in\z^n_{\lambda}$, tends to zero in $\LL^m(A)$. For any $l\in\z^n$, we have an equality $\phi(t^l)=\varphi^l$, where $\varphi_i:=\phi(t_i)$, $1\leqslant i\leqslant n$, and $\varphi=(\varphi_1,\ldots,\varphi_n)$. Thus by Proposition~\ref{prop:converg}, the map in the theorem is well-defined. The injectivity of this map follows directly from Lemma~\ref{cor:conv}$(ii)$. Let us prove the surjectivity. Take a collection $\varphi=(\varphi_1,\ldots,\varphi_n)\in \Hb_{m,n}(A)$. By Proposition~\ref{prop:converg}, for any $\lambda\in\Lambda_n$, the countable set~$\{\varphi^l\}$, $l\in\z^n_{\lambda}$, tends to zero. Hence the series $\sum\limits_{l\in\z^n_{\lambda}}a_l\varphi^l$ converges in~$\LL^m(A)$ and we have a well-defined map $$ \phi\,:\,\LL^n(A)\longrightarrow \LL^m(A)\,,\qquad \mbox{$f=\sum\limits_{l\in\z^n_{\lambda}}a_lt^l\longmapsto f(\varphi_1,\ldots,\varphi_n)=\sum\limits_{l\in\z^n_{\lambda}}a_l\varphi^l$}\,. $$ Clearly, the map $\phi$ is $A$-linear, one has $\phi(t_i)=\varphi_i$, $1\leqslant i\leqslant n$, and one easily shows that $\phi$ is a homomorphism of additive groups. Similarly, for any $A$-algebra $B$, the collection $(\varphi_1,\ldots,\varphi_n)$ defines a $B$-linear homomorphism of additive groups ${\LL^n(B)\to\LL^m(B)}$. This gives a $(\ga)_A$-linear morphism of group functors~$\alpha\colon (L^n\ga)_A\to(L^m\ga)_A$ such that $\alpha(A)=\phi$. Therefore by Proposition~\ref{prop:funccont}, the map~$\phi$ is continuous. Finally, Lemma~\ref{lem:mult} implies that $\phi$ is a homomorphism of rings. Thus $\phi$ is a well-defined element in $\Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^m(A)\big)$. \end{proof} \begin{corol} \label{cor:inequality} If $n>m$, then there are no continuous homomorphisms of $A$-algebras from $\LL^n(A)$ to~$\LL^m(A)$. \end{corol} \begin{corol}\label{cor:section} The homomorphism of monoids (see Corollary~\ref{cor:hommon}) $$ \Upsilon\,:\,\Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^n(A)\big)\longrightarrow \M_{n\times n}^+\big(\uz(A)\big) $$ has a natural monoid section that sends a matrix ${M\in\M_{n\times n}^+\big(\uz(A)\big)}$ to the endomorphism~$\phi$ such that $\phi(t_i)=t^{l_i}$, $1\leqslant i\leqslant n$, where $M=(l_1,\ldots,l_n)$ and $l_i\in\z^n$. \end{corol} \medskip Theorem~\ref{prop:contchange} also implies that one can define an associative composition $\circ$ of collections from~$\Hb_{p,m}$ and $\Hb_{m,n}$ as follows. 
\begin{corol}\label{cor:monoidphi} For all collections $(\varphi_1,\ldots,\varphi_n)\in\Hb_{m,n}(A)$ and $(\vartheta_1,\ldots,\vartheta_m)\in\Hb_{p,m}(A)$, the composition $$ (\vartheta_1,\ldots,\vartheta_m)\circ(\varphi_1,\ldots,\varphi_n):=\big(\varphi_1(\vartheta_1,\ldots,\vartheta_m),\ldots,\varphi_n(\vartheta_1,\ldots,\vartheta_m)\big)\in \big(\LL^p(A)\big)^{\times n} $$ belongs to $\Hb_{p,n}(A)\subset \big(\LL^p(A)^*\big)^{\times n}\subset \big(\LL^p(A)\big)^{\times n}$. Moreover, there is an equality of matrices in $\M_{p\times n}\big(\uz(A)\big)$ $$ \big(\nu(\varphi_1(\vartheta_1,\ldots,\vartheta_m)),\ldots,\nu(\varphi_n(\vartheta_1,\ldots,\vartheta_m))\big)= \big(\nu(\vartheta_1),\ldots,\nu(\vartheta_m)\big)\cdot \big(\nu(\varphi_1),\ldots,\nu(\varphi_n)\big)\,. $$ \end{corol} \begin{proof} By Theorem~\ref{prop:contchange}, the collections $(\varphi_1,\ldots,\varphi_n)$ and $(\vartheta_1,\ldots,\vartheta_m)$ correspond to homomorphisms~$\phi\in\Hom_A^{\rm c,alg}\big(\LL^n(A),\LL^m(A)\big)$ and~$\theta\in\Hom_A^{\rm c,alg}\big(\LL^m(A),\LL^p(A)\big)$, respectively. It follows that for any $i$ such that $1\leqslant i\leqslant n$, there are equalities $$ (\theta\circ\phi)(t_i)=\theta(\varphi_i)=\varphi_i(\vartheta_1,\ldots,\vartheta_m)\,. $$ Hence the composition $\theta\circ\phi\in\Hom_A^{\rm c,alg}(\LL^n(A),\LL^p(A))$ corresponds to the collection $(\vartheta_1,\ldots,\vartheta_m)\circ(\varphi_1,\ldots,\varphi_n)$. Therefore again by Theorem~\ref{prop:contchange}, we see that the collection ${(\vartheta_1,\ldots,\vartheta_m)\circ(\varphi_1,\ldots,\varphi_n)}$ belongs to $\Hb_{p,n}(A)$. The equality between matrices follows from Proposition~\ref{prop:valuation}. \end{proof} \section{Representability of functors and invariance of the residue} In this section, we show representability for functors defined by the sets of continuous homomorphisms, see Proposition~\ref{prop:repr} and Corollary~\ref{cor:repr}. Then we investigate how the residue map is changed under continuous homomorphisms, see Proposition~\ref{prop:invres} and Corollary~\ref{cor:res}. \medskip Define the functor on the category of rings (see Definition~\ref{def:hom}) $$ \Hc om^{\rm c,alg}(\LL^n,\LL^m)\,:\,A\longmapsto \Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^m(A)\big)\,. $$ This is indeed a functor by Corollary~\ref{cor:funccontrg} (cf. Remark~\ref{rmk:funvtadj} below). The functor (see Definition~\ref{def:plus}) $$ \M^+_{m\times n}(\uz)\,:\,A\longmapsto \M^+_{m\times n}\big(\uz(A)\big) $$ is clearly represented by an ind-scheme, which is a formal direct limit of finite disjoint unions of $\Spec(\z)$. Theorem~\ref{prop:contchange} implies directly the following representability result. \begin{prop}\label{prop:repr} The functor $\Hc om^{\rm c,alg}(\LL^n,\LL^m)$ is represented by the ind-affine scheme over $\z$ $$ \M_{m\times n}^+\big(\uz\big)\times\big((L^m\gm)^0\big)^{\times n}\,, $$ where for any ring $A$, the corresponding bijection sends $\phi\in\Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^m(A)\big)$ to $$ \big(\Upsilon(\phi),(f_1,\ldots,f_n)\big)\in \M_{m\times n}^+\big(\uz(A)\big)\times\big((L^m\gm)^0\big)^{\times n}(A)\,, $$ where $f_i:=\phi(t_i)\cdot t^{-\nu(\phi(t_i))}$, $1\leqslant i\leqslant n$ (see Definition~\ref{def:ups} for $\Upsilon(\phi)$). \end{prop} Recall that for $m=n$, by Corollary~\ref{cor:hommon}, we have a morphism of monoid functors $$ \Upsilon\,:\,\Hc om^{\rm c,alg}(\LL^n,\LL^n)\longrightarrow \M_{n\times n}^+\big(\uz\big)\,. $$ Denote the kernel of this morphism by $\Ker(\Upsilon)_n$.
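\smallskip Note that, in the notation of Proposition~\ref{prop:repr}, an element $\phi\in\Ker(\Upsilon)_n(A)$ is an endomorphism with $\Upsilon(\phi)$ equal to the identity matrix, so that it is determined by the collection of invertible series $f_i=\phi(t_i)\cdot t_i^{-1}$, $1\leqslant i\leqslant n$, with $\nu(f_i)=0$. For example, for $n=1$, the elements of $\Ker(\Upsilon)_1(A)$ are exactly the continuous endomorphisms of the $A$-algebra $\LL(A)$ of the form $t\mapsto t\cdot f$ with $f\in\LL(A)^*$ and $\nu(f)=0$ (this follows from Theorem~\ref{prop:contchange}).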
Define the subfunctor $$ \Ker(\Upsilon)^{\rm nil}_n\subset\Ker(\Upsilon)_n $$ that sends a ring $A$ to the set of all $\phi\in\Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^n(A)\big)$ such that for any~$i$ with ${1\leqslant i\leqslant n}$, we have ${\phi(t_i)=t_i+h_i}$, where $h_i\in \LL^n(A)$ is a Laurent polynomial (not merely a series) with nilpotent coefficients. \medskip By definition, a regular function on a functor $F$ on the category of $A$-algebras is a morphism of functors from $F$ to the affine line $(\Ab^1)_A$ over $A$. By $\OO(F)$ denote the $A$-algebra of all regular functions on $F$. Proposition~\ref{prop:repr} implies many useful properties of regular functions on the functor~$\Hc om^{\rm c,alg}(\LL^n,\LL^m)$. Corollary~\ref{cor:repr} below contains two of them. The first one is that in order to check equalities between regular functions on the functor ${\Hc om^{\rm c,alg}(\LL^n,\LL^m)\times L^n\ga}$, it is enough to consider their evaluations at $\Q$-algebras only. The second property is that in order to check equalities between regular functions on the functor $\Ker(\Upsilon)_n\times L^n\ga$, it is enough to consider for an arbitrary ring $A$ only elements from ${\Ker(\Upsilon)_n^{\rm nil}(A)\times\LL^n(A)\subset \Ker(\Upsilon)_n(A)\times\LL^n(A)}$, which are much easier to treat. These properties of regular functions are of utmost importance for what follows. We warn the reader that the proof of Corollary~\ref{cor:repr} relies heavily on the theory of thick ind-cones developed in~\cite{GOMS}. However, in this paper, the theory of thick ind-cones is used explicitly only here. \begin{corol}\label{cor:repr} \hspace{0cm} \begin{itemize} \item[(i)] The natural ring homomorphism $$ \OO\big(\Hc om^{\rm c,alg}(\LL^n,\LL^m)\times L^n\ga\big)\longrightarrow\OO\big(\Hc om^{\rm c,alg}(\LL^n,\LL^m)_{\Q}\times (L^n\ga)_{\Q}\big) $$ is injective. \item[(ii)] The ring homomorphism which is the restriction map $$ \OO\big(\Ker(\Upsilon)_n\times L^n\ga\big)\longrightarrow \OO\big(\Ker(\Upsilon)_n^{\rm nil}\times L^n\ga\big) $$ is injective. \end{itemize} \end{corol} \begin{proof} $(i)$ By Proposition~\ref{prop:repr}, we have an isomorphism $$ \Hc om^{\rm c,alg}(\LL^n,\LL^m)\times L^n\ga\stackrel{\sim}\longrightarrow\M_{m\times n}^+\big(\uz\big)\times\big((L^m\gm)^0\big)^{\times n}\times L^n\ga\,. $$ Clearly, the ind-affine scheme $\M_{m\times n}^+\big(\uz\big)$ is ind-flat over $\z$, that is, this scheme is a formal direct limit of flat affine schemes over $\z$. Further, the ind-affine scheme~$\big((L^m\gm)^0\big)^{\times n}$ is isomorphic to a product of a thick ind-cone and an ind-flat scheme over~$\z$, see~\cite[Def.\,5.7, Def.\,5.10]{GOMS} and~\cite[Prop.\,6.8(iii), Lem.\,5.13]{GOMS}. The ind-affine scheme $L^n\ga$ is an ind-affine space by~\cite[Ex.\,6.7]{GOMS}, whence $L^n\ga$ is ind-flat over $\z$. Finally, by~\cite[Prop.\,5.17]{GOMS}, for any $X$ which is a product of a thick ind-cone and an ind-flat scheme over~$\z$, the homomorphism $\OO(X)\to\OO(X_{\Q})$ is injective. $(ii)$ By Proposition~\ref{prop:repr}, we have an isomorphism $$ \Ker(\Upsilon)_n\times L^n\ga\stackrel{\sim}\longrightarrow \big((L^n\gm)^0\big)^{\times n}\times L^n\ga\,. $$ By~\cite[Prop.\,6.13(iii)]{GOMS}, the ind-scheme $\big((L^n\gm)^0\big)^{\times n}$ contains an absolutely dense ind-closed subscheme $\big((L^n\gm)^{\sharp}\big)^{\times n}$, see~\cite[Def.\,5.24]{GOMS}.
Here, for any ring $A$, the set (actually, the group)~${(L^n\gm)^{\sharp}(A)}$ consists of all elements $1+\sum\limits_{l\in\z^n}a_lt^l\in(L^n\gm)(A)$ such that $\sum\limits_{l\leqslant 0}a_l t^l$ is a nilpotent element of $\LL^n(A)$, see~\cite[Lem.\,4.7(iii)]{GOMS}. Hence by~\cite[Lem.\,5.25]{GOMS}, the restriction map $$ \OO\big(\big((L^n\gm)^0\big)^{\times n}\times L^n\ga\big)\longrightarrow \OO\big(\big((L^n\gm)^{\sharp}\big)^{\times n}\times L^n\ga\big) $$ is injective. Clearly, $\Ker(\Upsilon)^{\rm nil}_{n}$ is a subfunctor of $\big((L^n\gm)^{\sharp}\big)^{\times n}$, see Proposition~\ref{prop:repr}. Further, the ind-scheme $(L^n\gm)^{\sharp}$ is a thick ind-cone by~\cite[Prop.\,6.8(ii)]{GOMS}. Hence by~\cite[Lem.\,5.13]{GOMS} and~\cite[Ex.\,6.7]{GOMS}, the ind-scheme ${\big((L^n\gm)^{\sharp}\big)^{\times n}\times L^n\ga}$ is a thick ind-cone as well. It follows from~\cite[Prop.\,5.15]{GOMS} that the restriction map $$ \OO\big(\big((L^n\gm)^{\sharp}\big)^{\times n}\times L^n\ga\big)\longrightarrow \OO\big(\Ker(\Upsilon)^{\rm nil}_n\times L^n\ga\big) $$ is injective. This finishes the proof. \end{proof} \medskip Now we describe how the residue map is changed under continuous homomorphisms. Until the end of this section, we fix a continuous homomorphism of $A$-algebras ${\phi\colon\LL^n(A)\to\LL^m(A)}$. By Theorem~\ref{prop:contchange}, the matrix $\Upsilon(\phi)$ belongs to $\M^+_{m\times n}\big(\uz(A)\big)$. In particular, we have that $n\leqslant m$. Given two locally constant functions $p,q\in\uz(A)$ on $\Spec(A)$, we say that $p<q$ or $p\leqslant q$ if this holds point-wise on $\Spec(A)$. Let $$ 1\leqslant p_1<\ldots<p_n\leqslant m \quad \mbox{and} \quad x_{p_i,i}>0 \quad \mbox{for}\quad 1\leqslant i\leqslant n\,, $$ be the elements of $\uz(A)$ that correspond to the matrix $\Upsilon(\phi)\in\M^+_{m\times n}\big(\uz(A)\big)$ as in condition~$(ii)$ of Lemma~\ref{lem:upper}. Let $$ 1\leqslant q_1<\ldots<q_{m-n}\leqslant m $$ be a collection of elements of $\uz(A)$ which is complementary to $(p_1,\ldots,p_n)$, that is, for all $1\leqslant i\leqslant n$ and $1\leqslant j\leqslant m-n$, we have~$p_i\ne q_j$ point-wise on $\Spec(A)$ (in other words, for any point of $\Spec (A)$, the values of $p_i$ and $q_j$ at this point are not equal). Let $\sgn(\phi)\in\uz(A)$ be the sign (a locally constant function) of the locally constant permutation that sends $(1,2,\ldots,m)$ to~$(p_1,\ldots,p_n,q_{1},\ldots,q_{m-n})$. Also, put $$ 0<d(\phi):=\prod\limits_{i=1}^n x_{p_i,i}\in\uz(A)\,. $$ For example, if $m=n$, then $\sgn(\phi)=1$ and $d(\phi)=\det\big(\Upsilon(\phi)\big)$. The continuous homomorphism $\phi$ induces additive homomorphisms $\widetilde{\Omega}^i_{\LL^n(A)}\to \widetilde{\Omega}^i_{\LL^m(A)}$, $i\geqslant 0$, which we denote also by $\phi$ for simplicity. \begin{prop}\label{prop:invres} For any differential form $\omega\in\widetilde{\Omega}^n_{\LL^n(A)}$, there is an equality \begin{equation}\label{eq:res} \res\left(\phi(\omega)\wedge\frac{dt_{q_1}}{t_{q_1}}\wedge\ldots\wedge\frac{dt_{q_{m-n}}}{t_{q_{m-n}}}\right)=\sgn(\phi)d(\phi)\res(\omega)\,. \end{equation} \end{prop} \begin{proof} Both sides of formula~\eqref{eq:res} are functions with values in $A$ that depend on a homomorphism $\phi\in\Hc om^{\rm c,alg}(\LL^n,\LL^m)(A)$ and a differential form $\omega\in\widetilde{\Omega}^n_{\LL^n(A)}$. Clearly, the functor $A\mapsto \widetilde{\Omega}^n_{\LL^n(A)}$ is isomorphic to $L^n\ga$. Thus both sides of formula~\eqref{eq:res} are regular functions from ${\OO\big(\Hc om^{\rm c,alg}(\LL^n,\LL^m)\times L^n\ga\big)}$.
Hence by Corollary~\ref{cor:repr}$(i)$, it is enough to prove the equality between the images of these functions in the ring ${\OO\big(\Hc om^{\rm c,alg}(\LL^n,\LL^m)_{\Q}\times (L^n\ga)_{\Q}\big)}$. In other words, it is enough to prove the proposition when $A$ is a $\Q$-algebra, which we assume from now on. The left hand side of formula~\eqref{eq:res} is equal to zero if $\omega$ is exact. Therefore the left hand side of formula~\eqref{eq:res} with fixed $\phi$ defines an $A$-linear map to $A$ from the quotient $$ \widetilde{\Omega}^n_{\LL^n(A)}/d\widetilde{\Omega}^{n-1}_{\LL^n(A)}\,, $$ which is identified with $A$ by the residue map. Thus this map sends any differential form $\omega\in\widetilde{\Omega}^n_{\LL^n(A)}$ to $c\cdot\res(\omega)$, where $c\in A$ can be calculated by evaluating at the particular differential form $\frac{dt_1}{t_1}\wedge\ldots\wedge\frac{dt_n}{t_n}$. Explicitly, we have $$ c=\res\left(\phi\left(\frac{dt_1}{t_1}\wedge\ldots\wedge\frac{dt_n}{t_n}\right)\wedge\frac{dt_{q_1}}{t_{q_1}}\wedge\ldots\wedge\frac{dt_{q_{m-n}}}{t_{q_{m-n}}}\right)= $$ $$ =\res\left(\frac{d\phi(t_1)}{\phi(t_1)}\wedge\ldots\wedge\frac{d\phi(t_n)}{\phi(t_n)}\wedge\frac{dt_{q_1}}{t_{q_1}}\wedge\ldots\wedge\frac{dt_{q_{m-n}}}{t_{q_{m-n}}}\right)\,. $$ By~\cite[Prop.\,8.4]{GOMS}, we have that $c$ is the image in $A$ of the determinant of the \mbox{$(m\times m)$-matrix} $$ \big(\Upsilon(\phi),\nu(t_{q_1}),\ldots,\nu(t_{q_{m-n}})\big)\in\M_{m\times m}\big(\uz(A)\big)\,. $$ Obviously, this determinant equals $\sgn(\phi)d(\phi)$. \end{proof} \begin{corol}\label{cor:res} Suppose that $m=n$. Then for any differential form $\omega\in\widetilde{\Omega}^n_{\LL^n(A)}$, we have $$ \res\big(\phi(\omega)\big)=d(\phi)\res(\omega)\,. $$ \end{corol} \begin{rmk}\label{rmk:invres} If $\phi$ is invertible, then by Corollary~\ref{cor:hommon} and Theorem~\ref{prop:contchange}, we have the equality $d(\phi)=1$. Thus by Corollary~\ref{cor:res}, for any differential form $\omega\in\widetilde{\Omega}^n_{\LL^n(A)}$, we have $$ \res\big(\phi(\omega)\big)=\res(\omega)\,. $$ \end{rmk} \medskip Proposition~\ref{prop:invres} has also the following application for injectivity of homomorphisms. \begin{corol}\label{cor:inj} Suppose that the image of $d(\phi)$ under the natural homomorphism $\uz(A)\to A$ is not a zero divisor in the ring $A$. Then the continuous homomorphism ${\phi\colon\LL^n(A)\to\LL^m(A)}$ is injective. \end{corol} \begin{proof} Let us show that for any non-zero element $f\in\LL^n(A)$, its image $\phi(f)$ is also non-zero. Consider an element $l\in\z^n$ such that the $l$-th coefficient of $f$ is non-zero. Define a differential form $$ \omega:=f t^{-l}\,\frac{dt_1}{t_1}\wedge\ldots\wedge\frac{dt_n}{t_n}\in\widetilde{\Omega}^n_{\LL^n(A)}\,. $$ By construction, we have that $\res(\omega)\ne 0$. Therefore by the condition of the corollary and by Proposition~\ref{prop:invres}, we have (in the notation of the proposition) that ${\res\left(\phi(\omega)\wedge\frac{dt_{q_1}}{t_{q_1}}\wedge\ldots\wedge\frac{dt_{q_{m-n}}}{t_{q_{m-n}}}\right)\ne 0}$. Hence $\phi(\omega)\ne 0$, and therefore $\phi(f)\ne 0$. \end{proof} \begin{rmk}\label{rmk:inj} Suppose that $\phi(f)=0$ for an element $f\in\LL^n(A)$. Then all coefficients of~$f$ are nilpotent elements of $A$, which can be proven as follows. First one shows that for any closed (with respect to the topology on $\LL^n(A)$) prime ideal $I\subset\LL^n(A)$, there is a prime ideal $\p\subset A$ such that $I$ consists of all iterated Laurent series with coefficients in~$\p$.
For this, one uses that if $A$ is a domain, then the localization ${(A\smallsetminus\{0\})^{-1}\cdot\LL^n(A)}$ is a field. Now one sees that any continuous homomorphism of \mbox{$A$-algebras} $\phi\colon\LL^n(A)\to\LL^m(A)$ induces a bijection between the sets of closed prime ideals, because these ideals are defined by prime ideals of $A$ as above. Hence $f$ belongs to the intersection of all closed prime ideals in~$\LL^n(A)$, as the analogous statement holds for the element $\phi(f)=0$ in $\LL^m(A)$. Finally, one uses that the intersection of all prime ideals in $A$ coincides with the nilradical of $A$. \end{rmk} \smallskip It is an interesting question whether there exists a non-injective continuous homomorphism of $A$-algebras $\phi\colon\LL^n(A)\to\LL^m(A)$. By Corollary~\ref{cor:inj} and Remark~\ref{rmk:inj}, we see that for such $\phi$, the image of $d(\phi) \in \uz(A)$ in $A$ should be a zero divisor in $A$, and if $\phi(f)=0$ for $f \in \LL^n(A)$, then all coefficients of $f$ are nilpotent elements of~$A$. \section{Invertibility criterion} We start this section with a self-duality of the topological $A$-module $\LL^n(A)$, see Proposition~\ref{prop:seldual}. Then we study the map $\phi^{\vee}$ adjoint to the map of $\LL^n(A)$-modules of top differential forms induced by a continuous homomorphism $\phi$, see Proposition~\ref{lem:leftinverse} and Remark~\ref{rmk:expladj}. Finally, we prove in Theorem~\ref{theor:inv} a criterion of invertibility of continuous endomorphisms of the $A$-algebra~$\LL^n(A)$. \medskip By $\LL^n(A)^{\vee}$ denote the \mbox{$A$-module} of all continuous $A$-linear maps $\LL^n(A)\to A$, where we consider the discrete topology on $A$. The $A$-module $\LL^n(A)^{\vee}$ has a natural topology such that $\LL^n(A)^{\vee}$ is a topological group. The base of open neighborhoods of zero in $\LL^n(A)^{\vee}$ is given by annihilators of compact subsets of $\LL^n(A)$. Given an iterated Laurent series~$f \in \LL^n(A)$ and an element $k\in\z^n$, by $[f]_k \in A$ we denote the $k$-th coefficient of~$f$. \begin{prop}\label{prop:seldual} For any element $k\in\z^n$, the pairing $$ \LL^n(A)\times\LL^n(A)\longrightarrow A\,,\qquad (f,g)\longmapsto [fg]_k\,, $$ defines an isomorphism of topological $A$-modules $$ \tau\,:\,\LL^n(A)\stackrel{\sim}\longrightarrow \LL^n(A)^{\vee}\,,\qquad f\longmapsto\big(g\mapsto [fg]_k\big)\,. $$ \end{prop} \begin{proof} Recall that the product with a fixed iterated Laurent series is a continuous map. Applying to one of the arguments in the pairing the continuous automorphism given by the product with $t^k$, we see that it is enough to prove the proposition when $k=0$, which we assume from now on. Since the map $\LL^n(A)\to A$, $f\mapsto [f]_0$, is continuous, $\tau$ is a well-defined $A$-linear map. If $m\in\z^n$ is an element such that the $m$-th coefficient of an iterated Laurent series $f\in\LL^n(A)$ is non-zero, then for $g:=t^{-m}$, we have that $[fg]_0\ne 0$. Hence the map~$\tau$ is injective. Let $\chi\colon \LL^n(A)\to A$ be a continuous $A$-linear map. Then there is $\lambda\in\Lambda_n$ such that we have~$\chi(U_{\lambda})=0$. Now we put $a_l:=\chi(t^{-l})$ for $l \in \z^n$. It follows from the definition of $U_{\lambda}$ that $a_l=0$ if ${-l\notin (-\z^n_{\lambda})}$, that is, if ${l\notin \z^n_{\lambda}}$. Hence $f:=\sum\limits_{l\in\z^n}a_lt^l$ is a well-defined element of $\LL^n(A)$ with $\supp(f)\subset\z^n_{\lambda}$. We claim that for any $g\in\LL^n(A)$, we have $\chi(g)=[fg]_0$.
Indeed, by continuity and $A$-linearity, it is enough to check this when $g$ is a monomial, which holds by construction of $f$. Thus we have shown that the map~$\tau$ is surjective. Finally, let us prove that $\tau$ is a homeomorphism. Given an element $\lambda\in\Lambda_n$, by $K_{\lambda}$ denote the $A$-submodule of~$\LL^n(A)$ that consists of all elements $f\in\LL^n(A)$ with condition $\supp(f)\subset\z^n_{\lambda}$. Note that $K_{\lambda}$ is the closure in $\LL^n(A)$ of the submodule generated by the compact subset of $\LL^n(A)$ that consists of all series in $K_{\lambda}$ whose coefficients take only the two values~$0$ and~$1$. We claim that any compact subset~$C$ of~$\LL^n(A)$ is contained in~$K_{\lambda}$ for some $\lambda\in\Lambda_n$. Indeed, for any $\mu\in\Lambda_n$, the image of $C$ in the discrete \mbox{$A$-module~${\LL^n(A)/U_{\mu}}$} is a finite set whose elements are images of finite sums of monomials. Therefore condition~$(ii)$ of Lemma~\ref{lemma:intersect} holds for the subset of $\z^n$ which is the union of supports of all elements in~$C$. Thus Lemma~\ref{lemma:intersect} implies that $C$ is contained in $K_{\lambda}$ for some $\lambda\in\Lambda_n$. Further, the map $\tau$ gives an isomorphism between the set $U_{\lambda}$ and the annihilator of the set $K_{\lambda}$. This proves that $\tau$ is a homeomorphism. \end{proof} Since we have an isomorphism ${\widetilde{\Omega}^n_{\LL^n(A)}\simeq \LL^n(A) dt_1\wedge\ldots\wedge dt_n}$, Proposition~\ref{prop:seldual} with ${k=(-1,\ldots,-1)}$ implies that the pairing \begin{equation}\label{eq:pair} \LL^n(A)\times\widetilde{\Omega}^n_{\LL^n(A)}\longrightarrow A\,,\qquad (f,\omega)\longmapsto \res(f\omega)\,, \end{equation} defines isomorphisms of topological $A$-modules \begin{equation}\label{eq:dual} {\LL^n(A)\stackrel{\sim}\longrightarrow\big(\widetilde{\Omega}^n_{\LL^n(A)}\big)^{\vee}}\,, \qquad{\widetilde{\Omega}^n_{\LL^n(A)}\stackrel{\sim}\longrightarrow\LL^n(A)^{\vee}}\,. \end{equation} These isomorphisms are more useful than the isomorphisms from Proposition~\ref{prop:seldual}, because the isomorphisms~\eqref{eq:dual} behave nicely under continuous endomorphisms of the \mbox{$A$-algebra~$\LL^n(A)$} as Proposition~\ref{lem:leftinverse} below claims. \medskip For any ${\phi\in\Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^n(A)\big)}$, let ${\phi^{\vee}\in\Hom^{\rm c}_A\big(\LL^n(A),\LL^n(A)\big)}$ (see Definition~\ref{def:hom}) be the adjoint map to the continuous $A$-linear map ${\phi\colon\widetilde{\Omega}^n_{\LL^n(A)}\to \widetilde{\Omega}^n_{\LL^n(A)}}$ with respect to the pairing~\eqref{eq:pair} (see also the first isomorphism from formula~\eqref{eq:dual}). Equivalently, for all $f\in\LL^n(A)$ and $\omega\in\widetilde{\Omega}^n_{\LL^n(A)}$, there is an equality $$ \res\big(\phi^{\vee}(f)\,\omega\big)=\res\big(f\phi(\omega)\big)\,.
$$ \begin{rmk}\label{rmk:funvtadj} One easily checks that the assignment $\phi\mapsto\phi^{\vee}$ is functorial with respect to~$A$, that is, given a homomorphism of rings $A\to B$, the following diagram is commutative: $$ \begin{CD} \Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^n(A)\big) @>{\vee}>> \Hom^{\rm c}_A\big(\LL^n(A),\LL^n(A)\big) \\ @VV V @VVV \\ \Hom^{\rm c,alg}_B\big(\LL^n(B),\LL^n(B)\big) @>{\vee}>> \Hom^{\rm c}_B\big(\LL^n(B),\LL^n(B)\big)\,, \end{CD} $$ where the horizontal maps are given by $\phi\mapsto \phi^{\vee}$ and the vertical maps are obtained by taking the extension of a continuous $A$-linear endomorphism of the \mbox{$A$-module}~$\LL^n(A)$ to a continuous $B$-linear endomorphism of the \mbox{$B$-module}~$\LL^n(B)$. This extension exists and it is unique by Proposition~\ref{prop:funccont} and Lemma~\ref{cor:conv}(ii) (see also Corollary~\ref{cor:funccontrg} and Lemma~\ref{lem:mult}). \end{rmk} \begin{prop}\label{lem:leftinverse} For any $\phi\in\Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^n(A)\big)$, there is an equality (see Definition~\ref{def:ups} for $\Upsilon(\phi)$) $$ \phi^{\vee}\circ\phi=\det\big(\Upsilon(\phi)\big)\,{\rm id} $$ in $\Hom_A^{\rm c}\big(\LL^n(A),\LL^n(A)\big)$. In particular, if $\det\big(\Upsilon(\phi)\big)=1$, then $\phi^{\vee}\circ\phi={\rm id}$. \end{prop} \begin{proof} By definition of $\phi^{\vee}$ and Corollary~\ref{cor:res}, for all $f\in\LL^n(A)$ and $\omega\in\widetilde{\Omega}^n_{\LL^n(A)}$, we have the equalities $$ \res\big((\phi^{\vee}\circ\phi)(f)\,\omega\big)=\res\big(\phi(f)\phi(\omega)\big)=\det\big(\Upsilon(\phi)\big)\res(f\omega)\,. $$ Thus, the statement follows from the first of the isomorphisms~\eqref{eq:dual}. \end{proof} \begin{rmk}\label{rmk:expladj} Here is an explicit formula for the adjoint map. Given an element ${\phi\in\Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^n(A)\big)}$, put $\varphi_i:=\phi(t_i)$, where $1\leqslant i\leqslant n$, and define the Jacobian $J(\varphi)$ of the collection $\varphi=(\varphi_1,\ldots,\varphi_n)$ as the determinant of the matrix $$ \left(\frac{\partial\varphi_i}{\partial t_j}\right)\in\M_{n\times n}\big(\LL^n(A)\big)\,. $$ For short, denote $(1,\ldots,1)\in\z^n$ just by $1$. In particular, for $l=(l_1,\ldots,l_n)$, we put $l-1=(l_1-1,\ldots,l_n-1)$. Then for any $f\in\LL^n(A)$, there is an equality $$ \phi^{\vee}(f)=\mbox{$\sum\limits_{l\in\z^n}\res\big(f\varphi^{-l-1}J(\varphi)dt_1\wedge\ldots \wedge dt_n\big)t^l$}\,. $$ Indeed, for any $l\in\z^n$, the $l$-th coefficient of $\phi^{\vee}(f)$ is equal to $$ \res\big(\phi^{\vee}(f)t^{-l-1}dt_1\wedge\ldots\wedge dt_n\big)= $$ $$ =\res\big(f\varphi^{-l-1}\phi(dt_1\wedge\ldots\wedge dt_n)\big)=\res\big(f\varphi^{-l-1}J(\varphi)dt_1\wedge\ldots \wedge dt_n\big)\,. $$ \end{rmk} \begin{rmk} In general, the adjoint map $\phi^{\vee}$ is not necessarily a ring homomorphism. For example, when $n=1$ and $\phi(t)=\varphi=t^2$, Remark~\ref{rmk:expladj} implies that $\phi^{\vee}(t)=0$ and $\phi^{\vee}(t^2)=2t$. However, if $\det\big(\Upsilon(\phi)\big)=1$, then we obtain a posteriori from Theorem~\ref{theor:inv} below that $\phi^{\vee}$ is a ring homomorphism. Note that the proof of Theorem~\ref{theor:inv} is based on the theory of thick ind-cones from~\cite{GOMS}. We do not know how to deduce directly from the definition of the adjoint map that $\phi^{\vee}$ is a ring homomorphism provided that $\det\big(\Upsilon(\phi)\big)=1$. \end{rmk} \medskip Now we pass to invertibility of continuous endomorphisms of the $A$-algebra $\LL^n(A)$. 
First we prove the following auxiliary statement. \begin{lemma}\label{lemma:inv} An endomorphism $\phi\in\Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^n(A)\big)$ with $\det\big(\Upsilon(\phi)\big)=1$ is invertible if and only if there is an equality $\phi\circ\phi^{\vee}={\rm id}$ in $\Hom^{\rm c}_A\big(\LL^n(A),\LL^n(A)\big)$. \end{lemma} \begin{proof} Suppose that $\phi$ is invertible. Then by Proposition~\ref{lem:leftinverse}, we have that $\phi^{\vee}=\phi^{-1}$, whence $\phi\circ\phi^{\vee}={\rm id}$. Now suppose that $\phi\circ\phi^{\vee}={\rm id}$. Then Proposition~\ref{lem:leftinverse} implies that $\phi$ is a bijection from~$\LL^n(A)$ to itself, whence $\phi$ is invertible in~$\Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^n(A)\big)$. \end{proof} Notice that Theorem~\ref{theor:inv} below claims that, in fact, the equivalent conditions of Lemma~\ref{lemma:inv} hold for any $\phi\in\Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^n(A)\big)$ with $\det\big(\Upsilon(\phi)\big)=1$. \begin{lemma}\label{lemma:auxil} Any endomorphism $\phi\in\Ker(\Upsilon)_n^{\rm nil}(A)$ is invertible. \end{lemma} \begin{proof} By the definition of $\Ker(\Upsilon)^{\rm nil}_n$, for any $i$, $1\leqslant i\leqslant n$, we have $\phi(t_i)=t_i+h_i$, where~$h_i$ is a Laurent polynomial with coefficients being nilpotent elements of~$A$. Let $\a\subset A$ be the ideal generated by all coefficients of $h_i$, $1\leqslant i\leqslant n$. Define also the ideal $I:=\a\,\LL^n(A)$ in~$\LL^n(A)$. Clearly, the image of $\phi$ under the natural map $$ \Hom^{\rm c,alg}_A\big(\LL^n(A),\LL^n(A)\big)\longrightarrow \Hom^{\rm c,alg}_{A/\a}\big(\LL^n(A/\a),\LL^n(A/\a)\big) $$ coincides with the identity. Since $\a$ is finitely generated, we have an isomorphism $\LL^n(A)/I\simeq \LL^n(A/\a)$. Therefore we have that $\phi={\rm id}+h$, where $h\colon\LL^n(A)\to\LL^n(A)$ is a continuous $A$-linear map such that $h\big(\LL^n(A)\big)\subset I$. Moreover, we claim that $h(I^k)\subset I^{k+1}$ for any $k\geqslant 0$. Indeed, we have $I^k=\a^k\,\LL^n(A)$ and for all $a\in \a^k$ and $f\in\LL^n(A)$ we have $h(af)=ah(f)\in I^{k+1}$, because $h(f)\in I$. We have that $I$ is a nilpotent ideal, because $I$ is finitely generated by nilpotent elements. Therefore we see that the map $h$ is nilpotent. Hence the map $\phi={\rm id}+h$ is invertible. \end{proof} \medskip Here is the main result of this section. \begin{theor}\label{theor:inv} A continuous endomorphism $\phi\colon\LL^n(A)\to\LL^n(A)$ of the $A$-algebra $\LL^n(A)$ is invertible if and only if the matrix ${\Upsilon(\phi)\in\M^+_{n\times n}\big(\uz(A)\big)}$ is invertible, that is, the upper-triangular matrix with (locally constant on $\Spec(A)$) integral entries $$ \Upsilon(\phi)=\big(\nu(\phi(t_1)),\ldots,\nu(\phi(t_n))\big) $$ has all diagonal entries equal to $1$. Moreover, if $\phi$ is invertible, then we have the equality $$ \phi^{-1}=\phi^{\vee}\,. $$ \end{theor} \begin{proof} By Corollary~\ref{cor:hommon} and Theorem~\ref{prop:contchange}, one implication is clear. Now suppose that the matrix $\Upsilon(\phi)$ is invertible and let us show the invertibility of~$\phi$. By Corollary~\ref{cor:section}, it is enough to consider the case when $\phi\in\Ker(\Upsilon)_n(A)$, that is, the case when $\Upsilon(\phi)$ is the identity matrix, which we assume from now on. By Lemma~\ref{lemma:inv}, we need to show that for any $\phi\in\Ker(\Upsilon)_n(A)$, there is an equality $\phi\circ\phi^{\vee}={\rm id}$ in~$\Hom^{\rm c}_A\big(\LL^n(A),\LL^n(A)\big)$.
In other words, we need to show that for all $\phi\in\Ker(\Upsilon)_n(A)$ and $f\in\LL^n(A)$, there is an equality \begin{equation}\label{eq:main} (\phi\circ\phi^{\vee})(f)-f=0 \end{equation} in $\LL^n(A)$. It follows from Remark~\ref{rmk:funvtadj} that for each $l\in\z^n$, the $l$-th coefficient of the iterated Laurent series obtained in the left hand side of~\eqref{eq:main} is given by a regular function $\xi_l\in\OO\big(\Ker(\Upsilon)_n\times L^n\ga\big)$. Therefore equality~\eqref{eq:main} is equivalent to countably many equalities $\xi_l=0$, where $l\in\z^n$. Hence by Corollary~\ref{cor:repr}$(ii)$, we may assume that $\phi\in\Ker(\Upsilon)^{\rm nil}_n(A)$. Finally, by Lemma~\ref{lemma:auxil}, any $\phi\in\Ker(\Upsilon)_n^{\rm nil}(A)$ is invertible, whence again by Lemma~\ref{lemma:inv}, we have the equality $\phi\circ\phi^{\vee}={\rm id}$. This finishes the proof. \end{proof} \medskip Note that since $\phi^{-1}=\phi^{\vee}$, we obtain the explicit formula for the inverse automorphism by Remark~\ref{rmk:expladj}. In particular, for all $\varphi=(\varphi_1,\ldots,\varphi_n)\in\Hb_{n,n}(A)$ and $f\in\LL^n(A)$, there is an equality \begin{equation} \label{eq:identity} f=\mbox{$\sum\limits_{l\in\z^n}\res\big(f\varphi^{-l-1}J(\varphi)dt_1\wedge\ldots \wedge dt_n\big)\varphi^l$}\,, \end{equation} which corresponds to the equality ${f=(\phi\circ\phi^{-1})(f)=(\phi\circ\phi^{\vee})(f)}$. It is unclear how to prove equality~\eqref{eq:identity} directly, without using the theory of thick ind-cones developed in~\cite{GOMS}. Note that this would give a different explicit proof of Theorem~\ref{theor:inv}. \medskip \begin{rmk} \hspace{0cm} \begin{itemize} \item[(i)] Let $n=1$ and let ${\varphi=\sum\limits_{l\in\z}a_lt^l\in\LL(A)}$ be a Laurent series such that ${\nu(\varphi)=1}$. One checks easily that the derivative ${\partial\varphi/\partial t\in\LL(A)}$ is invertible. Applying formula~\eqref{eq:identity} with ${f=\varphi(\partial\varphi/\partial t)^{-1}t^{-1}}$, we obtain the equality \begin{equation}\label{eq:rmkViktor} \mbox{$\sum\limits_{l\in\z}[\varphi^{-l}]_0\,\varphi^l=\varphi\,(\partial\varphi/\partial t)^{-1}\,t^{-1}$}\,. \end{equation} \item[(ii)] Suppose that $A=k$ is a field of zero characteristic. In this case, it is well-known and is easy to show that a Laurent series $\varphi\in k((t))$ with $\nu(\varphi)=1$ defines an automorphism of~$k((t))$ (cf. Theorem~\ref{theor:inv}) and that the residue map is invariant under such automorphism, see, e.g.~\cite{S} (cf. Proposition~\ref{prop:invres} and Remark~\ref{rmk:invres}). By Lemma~\ref{lemma:inv}, this proves formulas~\eqref{eq:identity} and~\eqref{eq:rmkViktor} in this case without using the theory of thick-ind cones involved in the proofs of Theorem~\ref{theor:inv} and Proposition~\ref{prop:invres}. Moreover, positive powers of~$\varphi$ have a zero constant term, whence formula~\eqref{eq:rmkViktor} becomes $$ \mbox{$1+\sum\limits_{l\geqslant 1}[\varphi^{-l}]_0\,\varphi^l=\varphi\,(\partial\varphi/\partial t)^{-1}\,t^{-1}$}\,. $$ Also, $\varphi=(\partial\varphi/\partial t)t$ only for $\varphi=ct$, where $c\in k^*$. Thus after replacing $\varphi$ by its inverse, we obtain the following statement. Let $F\in\LL(k)$ be a Laurent series such that $\nu(F)=-1$ and $F\ne ct^{-1}$, where $c\in k^*$. Then the generating series of constant terms of powers of $F$, namely, the series ${\sum\limits_{l\geqslant 1}[F^l]_0\,t^l}$, is not equal to zero. Note that when $F$ is also a Laurent polynomial, this is a particular case of~\cite[Theor.\,2]{DK}. 
\end{itemize} \end{rmk}
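\medskip As an illustration of formula~\eqref{eq:rmkViktor}, take $A=\z$ and $\varphi=t/(1-t)=t+t^2+t^3+\cdots$, so that $\nu(\varphi)=1$. For $l\geqslant 1$, we have $\varphi^{-l}=(1-t)^{l}t^{-l}$ and hence $[\varphi^{-l}]_0=(-1)^{l}$, while $[\varphi^{-l}]_0=0$ for $l<0$ and $[\varphi^{0}]_0=1$. Therefore the left hand side of formula~\eqref{eq:rmkViktor} equals $\sum\limits_{l\geqslant 0}(-1)^{l}\varphi^{l}=(1+\varphi)^{-1}=1-t$, and the right hand side equals $\varphi\cdot(1-t)^{2}\cdot t^{-1}=1-t$ as well, since $\partial\varphi/\partial t=(1-t)^{-2}$.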
{ "timestamp": "2016-09-05T02:03:51", "yymm": "1604", "arxiv_id": "1604.07005", "language": "en", "url": "https://arxiv.org/abs/1604.07005", "abstract": "We study continuous homomorphisms between algebras of iterated Laurent series over a commutative ring. We give a full description of such homomorphisms in terms of a discrete data determined by the images of parameters. In similar terms, we give a criterion of invertibility of an endomorphism and provide an explicit formula for the inverse endomorphism. We study the behavior of the higher-dimensional residue under continuous homomorphisms.", "subjects": "Rings and Algebras (math.RA); Algebraic Geometry (math.AG)", "title": "Continuous homomorphisms between algebras of iterated Laurent series over a ring", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9825575111777183, "lm_q2_score": 0.7217431943271999, "lm_q1q2_score": 0.7091541967275898 }
https://arxiv.org/abs/2201.09861
The sharp form of the Kolmogorov--Rogozin inequality and a conjecture of Leader--Radcliffe
Let $X$ be a random variable and define its concentration function by $$\mathcal{Q}_{h}(X)=\sup_{x\in \mathbb{R}}\mathbb{P}(X\in (x,x+h]).$$ For a sum $S_n=X_1+\cdots+X_n$ of independent real-valued random variables the Kolmogorov-Rogozin inequality states that $$\mathcal{Q}_{h}(S_n)\leq C\left(\sum_{i=1}^{n}(1-\mathcal{Q}_{h}(X_i))\right)^{-\frac{1}{2}}.$$In this paper we give an optimal bound for $\mathcal{Q}_{h}(S_n)$ in terms of $\mathcal{Q}_{h}(X_i)$, which settles a question posed by Leader and Radcliffe in 1994. Moreover, we show that the extremal distributions are mixtures of two uniform distributions each lying on an arithmetic progression.
\section{Introduction} The concentration function ${\mathcal{Q}}_{h}(X)$ of a random variable $X$ is defined to be the quantity \begin{equation*} {\mathcal{Q}}_{h}(X)=\sup_{x\in {\mathbb{R}}}{\mathbb{P}}(X\in (x,x+h]). \end{equation*} Let $S_{n}=X_{1}+\cdots+ X_{n}$ be a sum of independent random variables. Doeblin and Levy \cite{DoeblinLevy} were the first to study the spread of the distribution of $S_n$ in terms of its concentration function by establishing quantitative bounds on ${\mathcal{Q}}_{h}(S_n)$ in terms of the individual concentration functions ${\mathcal{Q}}_{h}(X_i)$ of the summands. Kolmogorov \cite{Kolmogorov} improved these results and obtained a bound that was asymptotically sharp up to a logarithmic factor in $n$. The latter result was improved by Rogozin \cite{KR} who removed the logarithmic factor. \begin{theorem} [Kolmogorov--Rogozin]\label{KOL} There exists an absolute constant $C>0$ such that for $h>0$ we have $${\mathcal{Q}}_{h}(S_n)\leq C\left(\sum_{i=1}^{n}(1-{\mathcal{Q}}_{h}(X_i))\right)^{-\frac{1}{2}}.$$ \end{theorem} The latter inequality is asymptotically sharp in the case ${\mathcal{Q}}_{h}(X_i)=\alpha$ for fixed $\alpha$ as then the right hand side is $O(n^{-1/2})$. Yet for small values of $\alpha$ it degenerates. This deficiency was removed by Kesten \cite{Kesten} who inserted a certain multiplicative factor that provides the correct asymptotics of the bound as $\alpha \rightarrow 0$.\\ The goal of this paper is to establish the optimal upper bound in Theorem \ref{KOL}. Before stating our main result let us first adopt some conventions. For $k\in {\mathbb{N}}$ let us denote by $\nu^{k}$ the uniform distribution on the $k$-term arithmetic progression $\left\{-k+1,-k+3,\ldots,k-1\right\}$. For $\alpha\in[\frac{1}{k+1},\frac{1}{k}]$ we denote by $T(\alpha)$ a random variable having distribution $$(1-\tau)\nu^{k+1}+\tau \nu^{k}$$ with $\tau=k(k+1)\alpha-k$. In particular, the random variable $T(\frac{1}{k})$ has distribution $\nu^{k}$. Note that ${\mathcal{Q}}_{2}(T(\alpha))$ is attained by any interval containing two consecutive atoms of $T(\alpha)$ and it easily follows from the definition that ${\mathcal{Q}}_{2}(T(\alpha))=\alpha$.\\ For any random variable $X$ and $a>0$ we have ${\mathcal{Q}}_{h}(X)={\mathcal{Q}}_{ah}(aX)$. We shall henceforth only treat the case $h=2$ for concentration functions of individual random variables under consideration and write ${\mathcal{Q}}$ instead of ${\mathcal{Q}}_{2}$. The reason for picking this particular value will soon become apparent. We now state the main result of the paper. \begin{theorem}\label{main} Let $X_1,\ldots, X_n$ be independent random variables such that ${\mathcal{Q}}_{2}(X_i)\leq\alpha_i\in[0,1]$. For every integer $\ell \geq 1$ we have \begin{eqnarray*}\label{nel} {\mathcal{Q}}_{2\ell}\left(X_1+\cdots+X_n\right) &\leq& {\mathcal{Q}}_{2\ell}\left(T_1(\alpha_1)+\cdots+T_n(\alpha_n)\right), \end{eqnarray*} where $T_{i}(\alpha_i)$ are independent. \end{theorem} The latter inequality is therefore optimal with the choice $X_i \buildrel d \over = T_{i}(\alpha_i)$ saturating the bound. It turns out that $${\mathcal{Q}}_{2\ell}\left(T_1(\alpha_1)+\cdots+T_n(\alpha_n)\right)=\mathbb{P}\left(T_1(\alpha_1)+\cdots+T_n(\alpha_n) \in (-\ell,\ell]\right),$$ which is not obvious since $T_1(\alpha_1)+\cdots+T_n(\alpha_n)$ is not in general unimodal.
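To illustrate the definition of the extremal distributions, take for instance $\alpha=\frac{2}{5}\in[\frac{1}{3},\frac{1}{2}]$, so that $k=2$ and $\tau=k(k+1)\alpha-k=\frac{2}{5}$. The random variable $T(\frac{2}{5})$ has distribution $\frac{3}{5}\nu^{3}+\frac{2}{5}\nu^{2}$, which assigns mass $\frac{1}{5}$ to each of the points $-2,-1,0,1,2$; any interval of the form $(x,x+2]$ contains at most two of these points, so indeed ${\mathcal{Q}}_{2}(T(\frac{2}{5}))=\frac{2}{5}$.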
Leader and Radcliffe \cite{LR} established Theorem \ref{main} in the special case $\alpha_i=\frac{2}{k}$ for integer $k\geq 2$ and $\ell=1$ and posed the question of extending their inequality to other values of $\alpha$. Theorem \ref{main} thus resolves their question and also extends the desired result to all $\ell \in {\mathbb{N}}$ and possibly different $\alpha_i$'s. The choice $\alpha_i=\frac{1}{2}$ recovers the famous Littlewood--Offord inequality by Erd\H{o}s \cite{LO}. The case $\alpha_i=\frac{2}{k}$ is special as only in this case $T_i(\alpha)$ has a uniform distribution. The reason for picking $h=2$ is now clear --- it is the smallest natural number for which the maximizing distributions take integer values.\\ In most applications, Kolmogorov--Rogozin type bounds are applied in the case $\alpha_i=\alpha$. For this case the Local Limit Theorem can be used to obtain a simple bound with the asymptotically sharp constant in Theorem \ref{main}.\\ \begin{corollary}\label{cor1} Let $X_1,\ldots,X_n$ be independent random variables with ${\mathcal{Q}}(X_i)\leq \alpha\in(0,1)$. For $\ell \geq 1$ and $n\rightarrow \infty$ we have \begin{equation*} {\mathcal{Q}}_{2\ell}(X_1+\cdots+X_n) \leq \frac{2\ell+o(1)}{\sqrt{2\pi\mathrm{Var}(T_{1}(\alpha)+\cdots+T_{n}(\alpha))}}. \end{equation*} \end{corollary} We now turn to the situation where $\alpha_i\geq1/2$. Note that for $\alpha_i\geq 1/2$ the corresponding extremal random variables $T(\alpha)$ in Theorem \ref{main} have symmetric distribution on the set $\{-1,0,1\}$ so that ${\mathbb{P}}(T_i(\alpha_i)=\pm 1)=1-\alpha_i$. We shall allow the values $\alpha_i$ to depend on $n$ in the forthcoming result, which can be viewed as a more general version of Poisson type anticoncentration recently investigated by Fox, Kwan and Sauermann \cite{FKS} (see their Section 6) for linear combinations of Bernoulli random variables. \begin{corollary}\label{kant} For independent random variables $X_1,\ldots,X_n$ satisfying ${\mathcal{Q}}(X_i)\leq\alpha_i\in [1/2,1]$ we have $${\mathcal{Q}}(X_1+\ldots+X_n)\leq {\mathbb{P}}(\eta_1-\eta_2 \in \{0,1\})\leq \sqrt{\frac{2}{\pi \lambda}},$$ where $\eta_i$ are independent copies of a Poisson random variable with mean $\lambda=\sum_{i=1}^{n}(1-\alpha_{i})$. \end{corollary} Note that the first inequality is sharp in the following sense. If we do not restrict $n$ and fix the value of $\lambda=\sum_{i=1}^{n}(1-\alpha_i)$, then this bound can be achieved in the limit as $n\rightarrow \infty$ by sums of distributions $T_i(1-\frac{\lambda}{n})$. The second inequality can be seen to be sharp as $\lambda\rightarrow \infty$ by the Local Limit Theorem.\\ \textbf{Remark.} We used the half-closed interval in the definition of ${\mathcal{Q}}$, whereas some authors use closed or open intervals of fixed length. It is straightforward to deduce the corresponding versions of Theorem \ref{main} with these different definitions as well. The reason for our choice is convenience --- with this definition the optimal interval of concentration is always symmetric with respect to the origin.\\ The paper is organized as follows. We start by giving an informal outline of the proof strategy of Theorem \ref{main} in Section $2$. We then proceed with establishing the necessary steps in the order described in the outline. Proofs of the corollaries are given and an open problem is formulated in Section $6$.
\section{Outline of the proof of the main inequality} The proof proceeds in the following steps that are split into multiple small sections:\\ \begin{enumerate} \item The problem is reduced to \textbf{simple} random variables, i.e., random variables taking only finitely many values all with rational probabilities;\\ \item It is shown that the distributions of simple random variables $X_i$ under the condition ${\mathcal{Q}}(X_i)\leq \alpha_i$ can be expressed as convex combinations of uniform distributions with uniformities tied to the values $\alpha_i$ in the appropriate way (see Lemma \ref{repr});\\ \item Using a LYM-type inequality from extremal combinatorics we establish the desired inequality in the special case when $\alpha_i = 1/k_{i}$, $k_i\in {\mathbb{N}}$. \\ \item Finally, using the representation for measures provided by Lemma \ref{repr} and multiple applications of the inequality established in the latter step we shall be done. \end{enumerate} Since the reduction of the first step is rather standard and dull, we postpone it to the Appendix. In the other parts we shall only deal with simple random variables. The remaining steps of the proof will appear in the order of the list above in the upcoming sections. \section{The representation lemma} For $k\geq 1$ define by $\mu^{k}$ a uniform distribution on some $k$ points in ${\mathbb{R}}$ that are pairwise at distance at least 2. Note that the definition of $\mu^{k}$ depends on the choice of those points, which is not reflected in the notation. Usually we will supply $\mu^{k}$ with a subscript, which will mean that the distributions with distinct subscripts might be concentrated on different sets. When the set of $k$ points is $\left\{-k+1,-k+3,\ldots,k-3,k-1\right\}$, we are going to use the notation $\nu^{k}$ instead of $\mu^{k}$ in consistency with the definitions of the Introduction. Furthermore, for a random variable $X$ we shall denote its probability distribution by $\mathcal{L}(X)$.\\ \begin{lemma}\label{repr} Let $X$ be a simple random variable with ${\mathcal{Q}}(X)=m/n\in(1/(k+1),1/k]$. Assume that $X$ is concentrated in the set $S=\left\{y_{1},\ldots,y_{M}\right\}$ with rational probabilities $\mathbb{P}\left(X=y_{i}\right)=m_{i}/n_{i}$. Let us define $$N=n\prod_{i}n_{i},\quad K=(n-km)\prod_{i}n_{i},\quad L= ((k+1)m-n)\prod_{i}n_{i}.$$ We can express the distribution of $X$ as \begin{equation*} \mathcal{L}(X)=\frac{1-\tau}{K}\sum^{K}_{l=1}\mu_{l}^{k+1}+\frac{\tau}{L}\sum^{K+L}_{l=K+1}\mu_{l}^{k}, \end{equation*} where $\tau=k(k+1)m/n-k$. \end{lemma} \begin{proof} Assume that $y_1\leq \ldots \leq y_{M}$. We can regard the distribution of $X$ as the uniform distribution on a multiset $S'$, where $S'$ is obtained from $S$ by taking the element $y_{i}$ exactly $nm_{i}\prod_{j\neq i}n_{j}$ times. Let ${x_{1},\ldots,x_{N}}$ be the elements of $S'$ in increasing order. \\ The condition ${\mathcal{Q}}(X)=m/n$ ensures that no more than $d=Nm/n$ points lie in the interval $(x,x+2]$ for all $x$. Thus the points $x_{l},x_{l+d}$ are at distance at least $2$. For $l\leq K$ the points $x_{l},x_{l+d},\ldots,x_{l+kd}$ are pairwise at distance at least two. Each point has mass $1/N$, so in order to make the measure on the latter set of points into a probability measure we must divide it by $(k+1)/N$. We have $$(k+1)/N=(k+1)(n-km)/(nK)=(1-(k(k+1)m/n-k))/K=(1-\tau)/K,$$ thus obtaining the first $K$ distributions $\mu_{l}^{k+1}$ with the desired weights.
\\ For $K+1\leq l\leq K+L$ take the points $x_{l},x_{l+d},\ldots,x_{l+(k-1)d}$ and the measures concentrated on those points will give us the required $L$ measures $\mu_{l}^{k}$. It can be checked that the proportion is again correct, but that will follow from the fact that we used up all points from $S'$ and took each of them only once. Indeed, we started constructing each measure in the representation from a different point in $\{x_{1},\ldots,x_{K+L}\}$ and then added points whose indices increase in steps of $d$; since $K+L=Nm/n=d$, these index classes modulo $d$ are disjoint, so we did not use any point twice. Furthermore, $K(k+1)+Lk=N$ and so we used them all. \end{proof} \section{The case $\alpha_i=\frac{1}{k_i}$ and a LYM type inequality for multisets} In this section we shall be dealing with multisets defined on the ground set $[n]$ in which the multiplicity of the $i$-th element is at most $k_i-1$. The case $k_1=\cdots=k_n=2$ naturally reduces to the study of sets. In the latter case we can switch between talking about the powerset of $[n]$ to the study of indicator vectors in $\left\{0,1\right\}^n$ with set inclusion corresponding to the product order in $\left\{0,1\right\}^n$. \\ Analogously, we shall view multisets as vectors in the discrete rectangle $L(k_1,\ldots,k_n)=\left\{0,\ldots,k_1-1\right\}\times \cdots \times \left\{0,\ldots,k_n-1\right\}$ by associating with a multiset the vector of multiplicities of each element in it. \\ For a vector $x\in {\mathbb{R}} ^{n}$ we shall denote its $i$-th coordinate by $x_i$. We shall endow $L(k_1,\ldots,k_n)$ with the product order. That is, $v\leq w$ if and only if $v_i\leq w_i$ for all $i$. Multiset inclusion corresponds to this order as in the case with sets.\\ We shall call a collection of distinct vectors $v_1,\ldots,v_k$ a \textbf{chain} if $v_1\leq \cdots \leq v_k$ and refer to the number $k$ as its length. We say that a family of vectors $\mathcal{F}$ is \textbf{$k$-Sperner} if it has no chains of length $k+1$. In the case $k=1$ we shall say that $\mathcal{F}$ is an \textbf{antichain} rather than $1$-Sperner.\\ Let us partition $L(k_1,\ldots,k_n)$ into classes $L_i$ where $$L_i=\left\{x\in L(k_{1},\ldots,k_{n})\, | \, x_1+\cdots+ x_n =i\right\}.$$ Note that $|L_i|$ is a symmetric sequence in the sense that $|L_i|=|L_{N-i}|$ where $N=\sum (k_i-1)$. The sequence $|L_i|$ is non-decreasing for $i\leq \left\lfloor \frac{N}{2}\right\rfloor$ and thus, by symmetry, it is non-increasing for $i\geq \left\lceil \frac{N}{2}\right\rceil$.\\ For $k\leq k_1+\cdots+k_n-n+1$ write $f(k_1,k_2,\ldots,k_n,k)$ for the sum of the sizes of the $k$ largest classes $L_{i}$. This is just the total size of the $k$ middle diagonals of the rectangle $L(k_1,\ldots,k_n)$. \\ Just as Erd\H{o}s's proof of the Littlewood-Offord problem in \cite{ELO} used a Sperner type theorem, we shall use an analogous result for multiset $k$-Sperner families in exactly the same way. The result we shall need is the following. \begin{lemma}\label{BKT} Let $\mathcal{F}$ be a $k$-Sperner family of vectors in $L(k_1,\ldots,k_n)$. Then \begin{equation*} \left|\mathcal{F}\right| \leq f(k_1,k_2,\ldots, k_n,k). \end{equation*} \end{lemma} Before we proceed with the proof, let us state a standard LYM type inequality for antichains of multisets. \begin{theorem} \label{LYM} Let $\mathcal{F}$ be an antichain in $L(k_{1},\ldots,k_{n})$. For $0\leq i \leq \sum_{j=1}^{n}(k_{j}-1)$ denote $\mathcal{F}_{i}=\mathcal{F} \cap L_{i}$. We have \begin{equation*} \sum_{i}^{} \frac{|\mathcal{F}_{i}|}{|L_{i}|}\leq 1.
\end{equation*} \end{theorem} The proof of Theorem \ref{LYM} can be found in Chapter $10$ of the book by Anderson \cite{Anderson}. In the case of sets it is known as the LYM inequality (Lubell-Yamamoto-Meshalkin).\\ {\em Proof of Lemma \ref{BKT}.} Let $\mathcal{F}$ be a $k$-Sperner family. It is easy to see that $\mathcal{F}$ is a union of $k$ antichains. Indeed, the maximal elements of $\mathcal{F}$ form an antichain and the remaining elements form a $(k-1)$-Sperner family and so the observation follows by induction on $k$. Let $\mathcal{A}$ be one of the $k$ antichains that decompose $\mathcal{F}$.\\ Using Theorem \ref{LYM} we obtain \begin{equation*} \sum_{i}^{} \frac{|\mathcal{A}_{i}|}{|L_{i}|}\leq 1. \end{equation*} Summing this inequality over all $k$ antichains we obtain \begin{equation}\label{sumaaa} \sum_{i}^{} \frac{|\mathcal{F}_{i}|}{|L_{i}|}\leq k. \end{equation} For families of vectors of fixed cardinality the sum on the left hand side of \eqref{sumaaa} is minimized by families containing vectors with coordinate sums as close to $\sum_{i}(k_i -1)/2$ as possible. This is because such vectors lie in the largest classes $L_i$ and are therefore assigned the smallest weights $1/|L_i|$.\\ Suppose now that $|\mathcal{F}|> f(k_1,\ldots,k_n,k)$. Note that for the family of vectors consisting of the middle $k$ diagonals of $L(k_{1},\ldots,k_{n})$ the corresponding sum in \eqref{sumaaa} is exactly equal to $k$ and is minimal among all families having $f(k_1,\ldots,k_n,k)$ vectors. Therefore for any family of vectors with more elements the corresponding sum is strictly greater than $k$, which contradicts \eqref{sumaaa}. Thus $|\mathcal{F}|\leq f(k_1,\ldots,k_n,k)$ and we are done. \endproof We can now obtain the statement of Theorem \ref{main} in an important special case, which is a slight generalization of the main result of Leader and Radcliffe \cite{LR}. \begin{lemma}\label{LRlemma} Let $X_1,\ldots,X_n$ be independent simple random variables such that ${\mathcal{Q}}(X_i)=1/k_i$. For every integer $\ell\geq 1$ and $x\in {\mathbb{R}}$ we have \begin{equation*} \mathbb{P} \left(X_1+\cdots+X_n \in (x-\ell,x+\ell]\right) \leq \mathbb{P}\left(T_1(1/k_1)+\cdots+T_n(1/k_n) \in (-\ell,\ell]\right), \end{equation*} where $T_{i}(1/k_i)$ are independent. \end{lemma} {\em Proof of Lemma \ref{LRlemma}.} In view of Lemma \ref{repr} we can assume that $\mathcal{L}(X_i)=\mu_{i}^{k_i}$. This is due to the fact that our optimization problem is linear with respect to the measures in the decomposition given by Lemma \ref{repr}: we can therefore pick the measure of type $\mu^{k_i}$ in the decomposition that maximizes the functional in question. For each $i$ let us denote the values $X_i$ takes, listed in increasing order, by $x_{i,1},\ldots,x_{i,k_i}$. Let us define a family of vectors (or multisets) \begin{equation*} \mathcal{F}=\left\{v\in L(k_1,\ldots,k_n)\, | \, \sum_{j=1}^{n}x_{j,v_{j}+1}\in (x-\ell,x+\ell]\right\}. \end{equation*} Note that by definition of the measures $\mu_{i}^{k_i}$ the points $x_{i,1},\ldots,x_{i,k_i}$ are pairwise at distance at least $2$ from each other. Therefore, if $\mathcal{F}$ contained a chain of vectors (or multisets) of length $\ell+1$, the sums corresponding to the top and bottom vectors of the chain would differ by at least $2\ell$, whereas any two numbers in the half-closed interval $(x-\ell,x+\ell]$ differ by strictly less than $2\ell$, a contradiction.
Therefore the family $\mathcal{F}$ is $\ell$-Sperner.\\ Using Lemma \ref{BKT} we therefore have \begin{eqnarray*} \mathbb{P} \left(X_1+\cdots+X_n \in (x-\ell,x+\ell]\right)&=&|\mathcal{F}|/\prod_{j=1}^{n}k_j\\ &\leq& f(k_1,k_2,\ldots, k_n,\ell)/\prod_{j=1}^{n}k_j\\ &=&\mathbb{P}\left(T_1(1/k_1)+\cdots+T_n(1/k_n) \in (-\ell,\ell]\right). \end{eqnarray*} \endproof \section{Putting it all together} Let $X_{1},\ldots,X_{n}$ be independent random variables such that ${\mathcal{Q}}(X_i)\leq \alpha_i$. Since the bound of Theorem \ref{main} is continuous with respect to the variables $\alpha_i$, we can consider only rational $\alpha_i =m_{i}/n_{i} \in (1/(k_{i}+1),1/k_{i}]$. We can without loss of generality assume that the random variables $X_i$ are simple (see the Appendix). Thus each random variable $X_i$ takes values in some finite set $\left\{y_{i,1},\ldots, y_{i,D}\right\}$ (we can take one value of $D$ for all variables by adding some points with $0$ probability). Moreover, we can assume that the probabilities $\mathbb{P}\left(X_{i}=y_{i,k}\right)$ are also rational. Thus $\mathbb{P}\left(X_{i}=y_{i,k}\right)=m_{i,k}/n_{i,k}$. Writing $N_{i}=n_{i}\prod_{j=1}^{D}n_{i,j}$ we can look at the distribution of $X_{i}$ as a uniform distribution on a multiset with $N_{i}$ elements. By Lemma \ref{repr} we have\\ \begin{equation*} \mathcal{L}(X_{i})=\frac{1-\tau_{i}}{K_{i}}\sum_{l_{i}=1}^{K_{i}}\mu^{k_{i}+1}_{i,l_{i}}+\frac{\tau_{i}}{L_{i}}\sum_{l_{i}=K_{i}+1}^{K_{i}+L_{i}}\mu^{k_{i}}_{i,l_{i}}, \end{equation*} where $K_i, L_i$ and $\tau_{i}$ are defined as in Lemma \ref{repr}. \\ We shall expand the product measure $\prod_{i=1}^{n}\mathcal{L}(X_{i})$ into a sum of products of the measures $\mu_{i,l_{i}}^{\tilde{k}_{i}}$, where $\tilde{k}_{i}=k_{i}+1$ for $l_{i} \leq K_{i}$ and $\tilde{k}_{i}=k_{i}$ otherwise. For the same ranges of $l_{i}$ define $\tilde{\tau}_{i}$ in a natural way, namely as the coefficient in front of $\mu_{i,l_{i}}^{\tilde{k}_{i}}$. Expanding the product measure and using Lemma \ref{LRlemma} term by term we obtain \begin{eqnarray*} &&\mathbb{P}\left(X_{1}+\cdots+X_{n}\in (x-\ell,x+\ell]\right)\\ &=&\prod_{i=1}^{n}\mathcal{L}(X_{i})((x-\ell,x+\ell])\\ &=&\prod_{i=1}^{n} \left(\frac{1-\tau_{i}}{K_{i}}\sum_{l_{i}=1}^{K_{i}}\mu^{k_{i}+1}_{i,l_{i}}+\frac{\tau_{i}}{L_{i}}\sum_{l_{i}=K_{i}+1}^{K_{i}+L_{i}}\mu^{k_{i}}_{i,l_{i}}\right)((x-\ell,x+\ell])\\ &=&\prod_{i=1}^{n}\left(\sum_{l_{i}=1}^{K_{i}+L_{i}}\tilde{\tau}_{i}\,\mu^{\tilde{k}_{i}}_{i,l_{i}}\right)((x-\ell,x+\ell])\\ &=&\sum_{l_{1},\ldots,l_{n}}\prod_{i=1}^{n}\tilde{\tau}_{i}\,\mu^{\tilde{k}_{i}}_{i,l_{i}}((x-\ell,x+\ell])\\ &\leq& \sum_{l_{1},\ldots,l_{n}}\prod_{i=1}^{n}\tilde{\tau}_{i}\,\nu^{\tilde{k}_{i}}((-\ell,\ell])\\ &=&\prod_{i=1}^{n}\left(\sum_{l_{i}=1}^{K_{i}+L_{i}}\tilde{\tau}_{i}\,\nu^{\tilde{k}_{i}}\right)((-\ell,\ell])\\ &=&\prod_{i=1}^{n}\left(\frac{1-\tau_{i}}{K_{i}}\sum_{l_{i}=1}^{K_{i}}\nu^{k_{i}+1}+\frac{\tau_{i}}{L_{i}}\sum_{l_{i}=K_{i}+1}^{K_{i}+L_{i}}\nu^{k_{i}}\right)((-\ell,\ell])\\ &=&\prod_{i=1}^{n}\big((1-\tau_{i})\nu^{k_{i}+1}+\tau_{i}\nu^{k_{i}}\big)((-\ell,\ell])\\ &=&\mathbb{P}\left(T_{1}(\alpha_1)+\cdots+T_{n}(\alpha_n)\in (-\ell,\ell]\right). \end{eqnarray*} This completes the proof of Theorem \ref{main} and even shows that the interval $(-\ell,\ell]$ has the most mass under the distribution of $T_{1}(\alpha_1)+\cdots+T_{n}(\alpha_n)$. \section{Proofs of the corollaries} Corollary \ref{cor1} follows from Theorem \ref{main} by the Local Limit Theorem when $\frac{1}{\alpha}$ is not an integer.
When $\frac{1}{\alpha}\in {\mathbb{N}}$ it follows either by continuity of the bound or one can use the Local Limit Theorem for the rescaled random variables $T_i(\alpha)$ after the application of Theorem \ref{main}.\\ {\em Proof of Corollary \ref{kant}.} When $\alpha_i\in [1/2,1]$ the sharp inequality of Theorem \ref{main} is given in terms of symmetric distributions. For symmetric independent real valued random variables $X_i$ such that $\mathbb{P}(|X_i|\geq 1)=2(1-\alpha_i)$ Kanter's inequality (see Corollary $1.3$ in \cite{Mattner}) states that for all $x\in {\mathbb{R}}$ we have $$\mathbb{P}(|X_1+\cdots +X_n-x|<1)\leq {\mathbb{P}}(\eta_1-\eta_2 \in \{0,1\}),$$ where $\eta_i$ are independent copies of a Poisson random variable with mean $\lambda=\sum_{i=1}^{n}(1-\alpha_{i})$.\\ To finish up, just notice that the extremal distribution $T_{1}(\alpha_1)+\cdots+T_{n}(\alpha_n)$ coming from Theorem \ref{main} is supported on the integers, so the largest probability of an interval of the form $(x,x+2]$ coincides with the largest probability of an open interval of the same length. This means we can apply Kanter's inequality to obtain the desired result.\\ The second inequality was established by Mattner and Roos (Lemma $1.4$ in \cite{Mattner}).\\ Let us formulate a natural conjecture regarding an analytically simpler form of the bound in Theorem \ref{main} that is of the flavor of Corollary \ref{kant}. Kanter \cite{kanter} actually proved a more detailed result: he showed, in our notation, that for $\alpha_i\geq 1/2$ the value of ${\mathcal{Q}}(T_{1}(\alpha_1)+\cdots+T_{n}(\alpha_n))$ is at most ${\mathcal{Q}}(T_{1}(\alpha)+\cdots+T_{n}(\alpha))$, where $\alpha=\frac{1}{n}\sum_{i}\alpha_i$. If $\alpha_{i}\geq 3/4$ the result follows by majorization techniques or by using the positivity of the corresponding characteristic functions. The general case is much more involved and follows by quite delicate analysis of certain integrals (see \cite{Mattner}). It is tempting to believe that a similar phenomenon remains true in the unrestricted case. Let us be more precise. In the case $\alpha_i\geq 1/2$ the variances of $T_i(\alpha_i)$ are linear in $\alpha_i$ and so averaging the values $\alpha_i$ is the same as averaging the variances of the random variables. We thus have the following conjecture. \begin{conjecture} For any $\alpha_i\in [0,1]$ we have $${\mathcal{Q}}(T_{1}(\alpha_1)+\cdots+T_{n}(\alpha_n))\leq {\mathcal{Q}}(T_{1}(\alpha)+\cdots+T_{n}(\alpha)),$$ where all random variables are independent and $\alpha$ is the unique value in $[0,1]$ such that $$\mathrm{Var}(T_{1}(\alpha_1)+\cdots+T_{n}(\alpha_n))=\mathrm{Var}(T_{1}(\alpha)+\cdots+T_{n}(\alpha)).$$ \end{conjecture} \begin{funding} The author was supported by the Czech Science Foundation, grant number 18-01472Y. \end{funding} \bibliographystyle{imsart-nameyear.bst}
{ "timestamp": "2022-01-25T02:47:33", "yymm": "2201", "arxiv_id": "2201.09861", "language": "en", "url": "https://arxiv.org/abs/2201.09861", "abstract": "Let $X$ be a random variable and define its concentration function by $$\\mathcal{Q}_{h}(X)=\\sup_{x\\in \\mathbb{R}}\\mathbb{P}(X\\in (x,x+h]).$$ For a sum $S_n=X_1+\\cdots+X_n$ of independent real-valued random variables the Kolmogorov-Rogozin inequality states that $$\\mathcal{Q}_{h}(S_n)\\leq C\\left(\\sum_{i=1}^{n}(1-\\mathcal{Q}_{h}(X_i))\\right)^{-\\frac{1}{2}}.$$In this paper we give an optimal bound for $\\mathcal{Q}_{h}(S_n)$ in terms of $\\mathcal{Q}_{h}(X_i)$, which settles a question posed by Leader and Radcliffe in 1994. Moreover, we show that the extremal distributions are mixtures of two uniform distributions each lying on an arithmetic progression.", "subjects": "Probability (math.PR); Combinatorics (math.CO)", "title": "The sharp form of the Kolmogorov--Rogozin inequality and a conjecture of Leader--Radcliffe", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9911526449170697, "lm_q2_score": 0.7154240018510026, "lm_q1q2_score": 0.7090943916717758 }
https://arxiv.org/abs/1101.1172
The existence of k-radius sequences
Let $n$ and $k$ be positive integers, and let $F$ be an alphabet of size $n$. A sequence over $F$ of length $m$ is a \emph{$k$-radius sequence} if any two distinct elements of $F$ occur within distance $k$ of each other somewhere in the sequence. These sequences were introduced by Jaromczyk and Lonc in 2004, in order to produce an efficient caching strategy when computing certain functions on large data sets such as medical images. Let $f_k(n)$ be the length of the shortest $n$-ary $k$-radius sequence. The paper shows, using a probabilistic argument, that whenever $k$ is fixed and $n\rightarrow\infty$ \[ f_k(n)\sim \frac{1}{k}\binom{n}{2}. \] The paper observes that the same argument generalises to the situation when we require the following stronger property for some integer $t$ such that $2\leq t\leq k+1$: any $t$ distinct elements of $F$ must simultaneously occur within a distance $k$ of each other somewhere in the sequence.
\section{Introduction} \label{sec:introduction} Let $n$ and $k$ be positive integers, and let $F$ be an alphabet of size $n$. A sequence $a_1,a_2,\ldots,a_{m}$ over $F$ of length $m$ is a \emph{$k$-radius sequence} if for all $x,y\in F$ there exists $i,j\in\{1,2,\ldots,m\}$ such that $a_i=x$, $a_j=y$ and $|i-j|\leq k$. The following is an example of an $8$-ary $3$-radius sequence over the alphabet $F=\{0,1,2,3,4,5,6,7\}$: \[ 0,1,2,3,4,5,6,7,0,1,2,4,5,6,3,7. \] We write $f_k(n)$ for the length of the shortest $n$-ary $k$-radius sequence; so the example above shows that $f_3(8)\leq 16$. The concept of a $k$-radius sequence was introduced by Jaromczyk and Lonc~\cite{JL}. They were interested in these sequences so they could design an efficient caching strategy to compute a function that depends on comparing pairs of a sequence of large data sets such as medical images (see the discussion in Section~\ref{sec:comments} below). Ghosh~\cite{Ghosh} showed that \[ f_1(n)=\begin{cases} \binom{n}{2}+1&\text{when $n$ is odd;}\\ \binom{n}{2}+n/2&\text{when $n$ is even.} \end{cases} \] Jaromczyk and Lonc~\cite{JL} showed that $f_2(n)=\frac{1}{2}\binom{n}{2}+O(n^2/\log n)$, and gave a construction for $k$-radius sequences of the right order of magnitude but with a leading term that is not tight. Chee, Ling, Tan and Zhang~\cite{CheeLing} provided good constructions for $n$-ary $2$-radius sequences for small values of $n$. Blackburn and McKee~\cite{BlackburnMcKee} showed how to construct asymptotically good $k$-radius sequences for many values of $k$. In particular, their constructions show that $f_{k}(n)=\frac{1}{k}\binom{n}{2}+O(n^2/\log n)$ whenever $k\leq 194$, or $k+1$ is a prime, or $2k+1$ is a prime. They asked whether $\lim_{n\rightarrow\infty} f_k(n)/\binom{n}{2}$ exists and is equal to $1/k$. The main purpose of this paper is to answer this question positively, by proving the following theorem. \begin{theorem} \label{thm:main} Let $k$ be a fixed positive integer. Then \[ f_k(n)\sim \frac{1}{k}\binom{n}{2}, \] as $n\rightarrow\infty$. \end{theorem} We use probabilistic methods, our main tool being Pippenger and Spencer's version of the Frankl-Rodl theorem on the size of the matchings in a quasi-random hypergraph~\cite{FranklRodl,PippengerSpencer} (see Section~\ref{sec:comments}). Theorem~\ref{thm:main} is proved in the next section. In the final section of the paper we observe that the proof of this theorem can be generalised (Theorem~\ref{thm:general}) to the situation where we are interested in subsets of $t$ elements, rather than pairs of elements, from the alphabet. The final section also contains various comments and open problems. \section{The proof of Theorem~\ref{thm:main}} \label{sec:proof} We begin with an elementary lemma which establishes a lower bound on $f_k(n)$. Jaromczyk and Lonc~\cite{JL} prove a slightly stronger version of this lemma in their paper. \begin{lemma} \label{lem:lower_bound} For all positive integers $n$ and $k$, \[ f_k(n)>\frac{1}{k}\binom{n}{2}. \] \end{lemma} \begin{proof} Let $a_1,a_2,\ldots,a_{m}$ be an $n$-ary $k$-radius sequence. There are less than $km$ pairs of the form $\{a_i,a_{i+z}\}$ where $i,i+z\in\{1,2,\ldots,m\}$ and $1\leq z\leq k$. The $k$-radius sequence property implies that every unordered pair of alphabet symbols must occur at least once as a pair $\{a_i,a_{i+z}\}$ for some $i$ and $z$, and so $km> \binom{n}{2}$. \end{proof} To provide an upper bound on $f_k(n)$, we use a well-known theorem in hypergraph theory. 
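Before turning to that machinery, here is a small illustrative sketch (ours, not from the paper) that checks the defining property of a $k$-radius sequence directly and verifies the $8$-ary $3$-radius example given in the introduction.
\begin{verbatim}
from itertools import combinations

def is_k_radius(seq, alphabet, k):
    # seq is a k-radius sequence over `alphabet` if every pair of distinct
    # symbols occurs within distance k of each other somewhere in seq.
    positions = {a: [i for i, s in enumerate(seq) if s == a] for a in alphabet}
    return all(
        any(abs(i - j) <= k for i in positions[x] for j in positions[y])
        for x, y in combinations(alphabet, 2)
    )

example = [0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 4, 5, 6, 3, 7]
assert is_k_radius(example, range(8), 3)      # hence f_3(8) <= 16
\end{verbatim}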
Recall that a hypergraph $\Gamma$ is \emph{$r$-uniform} if all its hyperedges have cardinality $r$. The \emph{degree} $\deg(v)$ of a vertex $v\in\Gamma$ is the number of hyperedges containing $v$; the \emph{codegree} $\mathrm{codeg}(v,w)$ of a pair of distinct vertices $v,w\in\Gamma$ is the number of hyperedges containing both $v$ and $w$. A \emph{cover} is a set of hyperedges in $\Gamma$ whose union is equal to the set of all vertices of $\Gamma$. \begin{theorem} \label{thm:hypergraph} Fix an integer $r$ and a positive real number $\delta$. Then there exists an integer $n_0$ and a positive real number $\delta'$ with the following property. Let $\Gamma$ be an $r$-uniform hypergraph on $n$ vertices, where $n\geq n_0$. Suppose that all vertices of $\Gamma$ have degree $d$ for some integer $d$. Let $c=\max \mathrm{codeg}(u,v)$, where the maximum is taken over all pairs of distinct vertices $u,v\in \Gamma$. If $c\leq \delta' d$, then there exists a cover consisting of at most $(1+\delta)n/r$ hyperedges. \end{theorem} Theorem~\ref{thm:hypergraph} can be proved using a second-moment technique that Alon and Spencer~\cite{AlonSpencer} call the `R\"odl nibble' (see~\cite{Rodl,FranklRodl} or~\cite[Theorem~8.4]{Furedi} for example). Also see Pippenger and Spencer~\cite[Theorem~1.1]{PippengerSpencer} for a stronger result. \begin{proof}[Proof of Theorem~\ref{thm:main}] We need to prove that $\lim_{n\rightarrow\infty} f_k(n)/\binom{n}{2} = 1/k$. Now $f_k(n)/\binom{n}{2}>1/k$ by Lemma~\ref{lem:lower_bound}. Let $\epsilon$ be a fixed positive real number. To prove the theorem, it suffices to show that for all sufficiently large integers $n$, we have that $f_k(n)/\binom{n}{2}\leq (1+\epsilon)/k$. Choose an integer $\ell$ and a positive real number $\delta$ such that $\ell\geq k$ and \[ \frac{(1+\delta)}{1-\frac{1}{2}(k+1)/\ell}<(1+\epsilon). \] Let $n$ be an integer such that $n\geq \ell$, and let $F$ be a set of cardinality $n$. Define a hypergraph $\Gamma_n$ as follows. The vertices of $\Gamma_n$ are the $\binom{n}{2}$ unordered pairs $\{x,y\}$ where $x,y\in F$. The hyperedges of $\Gamma_n$ are the $n(n-1)\cdots (n-(\ell-1))$ sequences $\mathbf{b}=b_1,b_2,\ldots,b_{\ell}$ of length $\ell$ over $F$ whose entries $b_i$ are all distinct. We define a vertex $\{x,y\}$ to lie in a hyperedge $\mathbf{b}$ whenever $x$ and $y$ occur in $\mathbf{b}$ within a distance of $k$; more precisely, whenever there exist $i,j\in\{1,2,\ldots ,\ell\}$ such that $b_i=x$, $b_j=y$ and $|i-j|\leq k$. Let $r$ be the number of ways of choosing an (unordered) pair of distinct positions in a sequence of length $\ell$, where the positions are at most a distance $k$ apart. So \[ r=(\ell-k)k+\sum_{i=0}^{k-1} i = \ell k - \tfrac{1}{2}k(k+1). \] Clearly $r$ does not depend on $n$. Since the entries of $\mathbf{b}$ are distinct, every hyperedge in $\Gamma_n$ contains exactly $r$ vertices, and so $\Gamma_n$ is an $r$-uniform hypergraph. The degree $d$ of any vertex $v\in \Gamma_n$ is equal to $2r (n-2)(n-3)\cdots(n-(\ell-1))$, which is of the order of $n^{\ell-2}$. The codegree of distinct vertices $v,w\in\Gamma_n$ depends on whether $v$ and $w$ are intersecting when thought of as pairs of elements of $F$. But in either case $\mathrm{codeg}(v,w)=O(n^{\ell-3})=o(d)$. So Theorem~\ref{thm:hypergraph} implies that for all sufficiently large integers $n$ there exists a cover $\mathbf{b}_1,\mathbf{b}_2,\ldots,\mathbf{b}_s$ for $\Gamma_n$ consisting of $s$ hyperedges, where $s\leq (1+\delta)\binom{n}{2}/r$. 
The definition of $\Gamma_n$ and the fact that the sequences $\mathbf{b}_i$ form a cover show that the concatenation of $\mathbf{b}_1,\mathbf{b}_2,\ldots,\mathbf{b}_s$ is a $k$-radius sequence. The length of this sequence is $\ell s$, and \begin{align*} \ell s&\leq \ell(1+\delta)\binom{n}{2}/r\\ &=\frac{1}{k}\binom{n}{2}\frac{\ell(1+\delta)}{\ell-\frac{1}{2}(k+1)}\\ &=\frac{1}{k}\binom{n}{2}\frac{(1+\delta)}{1-\frac{1}{2}(k+1)/\ell}\\ &<\frac{1}{k}\binom{n}{2}(1+\epsilon). \end{align*} So $f_k(n)/\binom{n}{2}\leq (1+\epsilon)/k$ for all sufficiently large integers $n$, as required. \end{proof} \section{Comments} \label{sec:comments} We have found the leading term for $f_k(n)$ as $n\rightarrow\infty$ with $k$ fixed, using probabilistic methods. It would be very interesting to search for explicit constructions of $k$-radius sequences that are asymptotically good for any value of $k$. (The constructions of Jaromczyk and Lonc~\cite{JL} and of Blackburn and McKee~\cite{BlackburnMcKee} only lead to asymptotically good constructions for some values of $k$.) The following problem would also be very interesting: \begin{problem} \label{prob:k_radius} Provide an upper bound (using explicit or probabilistic methods) of the form \[ f_k(n)\leq \frac{1}{k}\binom{n}{2}+g(n), \] where $g(n)$ is a function of $n$ that grows significantly more slowly than $n^2$. \end{problem} \emph{Note added in final revision:} A recent preprint of Jaromczyk, Lonc and Truszczynski~\cite{JLT} provides some beautiful recursive constructions of $k$-radius sequences, solving Open Problem~\ref{prob:k_radius}. Indeed, they show that we may take $g(n)=O(n^{1+\epsilon})$ for any positive real number $\epsilon$. They also give optimal constructions of $2$-radius sequences when $n=2p$ with $p$ a prime. We now discuss the caching application that motivated Jaromczyk and Lonc in a little more detail. Suppose we have a total of $n$ medical images, and we wish to compute some function which depends on all pairs of these images. We assume that the computation involving each pair of images is computationally intensive, so we wish to place these images in our cache before carrying out this computation. We assume our cache can hold up to $k+1$ images at one time. Then an $n$-ary $k$-radius sequence will enable us to design an efficient caching strategy, as follows. Let $a_1,a_2,\ldots,a_{m}$ be an $n$-ary $k$-radius sequence. Suppose we load image $a_t$ into our cache at time $t$, using a first-in first-out caching strategy. So at time $t$ (for $t\geq k+1$) our cache holds the images $a_{t-k},a_{t-k+1},\ldots ,a_t$. The property of being a $k$-radius sequence implies that any pair of alphabet symbols occurs in some window of length $k+1$ in the sequence, and so any pair of images simultaneously lies in our cache at some point. Short sequences correspond to efficient caching strategies for this problem. We might ask what the consequences are of removing our insistence on a first-in first-out strategy in the application above. But whatever caching strategy is used, it is clear that at most $k$ new pairs of images are introduced into our cache at every time period: the bound of Lemma~\ref{lem:lower_bound} holds for any caching strategy. So the results of this paper show that imposing the restriction to a first-in first-out strategy does not affect the asymptotic efficiency. 
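To make the first-in first-out strategy concrete, the following sketch (again ours, purely illustrative) simulates a cache holding $k+1$ images driven by a $k$-radius sequence and confirms that every pair of images shares the cache at some time step.
\begin{verbatim}
from itertools import combinations

def pairs_meeting_in_cache(seq, k):
    # Simulate a first-in first-out cache of size k+1: after loading seq[t]
    # the cache holds seq[t-k], ..., seq[t].  Return all pairs that co-reside.
    met = set()
    for t in range(len(seq)):
        window = set(seq[max(0, t - k): t + 1])
        met.update(frozenset(p) for p in combinations(window, 2))
    return met

seq = [0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 4, 5, 6, 3, 7]   # the 3-radius example
all_pairs = {frozenset(p) for p in combinations(range(8), 2)}
assert pairs_meeting_in_cache(seq, 3) == all_pairs
\end{verbatim}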
We should also remark that when we are not imposing the restriction to a first-in first-out strategy, there is a simple caching method that gives asymptotically tight results, which can be described as follows. We begin by loading the first batch of $k$ images $1,2,\ldots,k$ into the cache. Our cache can store one more image: keeping our initial batch of images in our cache, we load all the remaining images in turn. So at time $k+i$ where $1\leq i\leq n-k$, the cache holds images $1,2,3,\ldots,k$ and $k+i$. We then continue with the next batch of $k$ images $k+1,k+2,\ldots,2k$: at time $n+k+i$ where $1\leq i\leq n-2k$ the cache holds images $k+1,k+2,\ldots,2k$ and $2k+i$. We continue in this way, first loading a batch of $k$ images into our cache and then using the remaining space to load each of the later images in turn. The results of this paper are easily generalised to a wider class of combinatorial objects. Let $k$, $t$ and $n$ be fixed positive integers, with $t\leq n$. Let $F$ be an alphabet of cardinality $n$. We may define a \emph{$t$-subset $k$-radius sequence over $F$} to be a finite sequence $a_1,a_2,\ldots ,a_m$ over $F$ such that for all $t$-subsets $X\subseteq F$, there exists $i\in\{1,2,\ldots,m-k\}$ such that \[ X\subseteq\{a_i,a_{i+1},\ldots,a_{i+k}\}. \] So a $k$-radius sequence satisfies this definition in the special case when $t=2$. Let $f_{t,k}(n)$ be the length of the shortest $t$-subset $k$-radius sequence. \begin{theorem} \label{thm:general} Let $k$ and $t$ be fixed integers such that $2\leq t\leq k+1$. Then \[ f_{t,k}(n)\sim \frac{1}{\binom{k}{t-1}}\binom{n}{t} \] as $n\rightarrow\infty$. \end{theorem} \begin{proof} The lower bound follows by observing that each new element added to a sequence can `cover' at most $\binom{k}{t-1}$ new subsets $X$ (as these new subsets must involve the new element). The upper bound follows as in the proof of Theorem~\ref{thm:main}. So we define the hypergraph $\Gamma_n$ to have hyperedges as before, with vertices the $t$-subsets of the $n$-set $F$, and with a vertex lying in a hyperedge $\mathbf{b}$ if and only if the subset is contained in a set of $k+1$ consecutive elements of the sequence $\mathbf{b}$. The graph $\Gamma_n$ is $r$-uniform, where $r=\ell\binom{k}{t-1}+h(k,t)$ for some fixed function $h$ of $k$ and $t$. The degree of a vertex in $\Gamma_n$ does not depend on the vertex and is of the order of $n^{\ell-t}$, whereas the codegree of a pair of vertices depends on the size of the intersection of the subsets the vertices are identified with, but is at most $O(n^{\ell-t-1})$. We may use Theorem~\ref{thm:hypergraph} to obtain a small cover for $\Gamma_n$, and then concatenate the resulting sequences in this cover to obtain a short $t$-subset $k$-radius sequence, just as in the proof of Theorem~\ref{thm:main}. \end{proof} \begin{problem} Find good explicit constructions of $t$-subset $k$-radius sequences. \end{problem} This problem has been considered in the case $t=k+1$ by Lonc, Traczyk, and Truszczynski~\cite{LTT}. The authors show that $f_{k+1,k}(n)=\binom{n}{k+1}+O(n^{\lfloor k/2\rfloor})$, determine $f_{3,2}(n)$ exactly and determine $f_{4,3}(n)$ and $f_{6,5}(n)$ for infinitely many values of $n$. The corresponding packing rather than covering problem is also interesting combinatorially (although we do not know of an application). 
Here we may define a \emph{packing $t$-subset $k$-radius sequence over $F$} to be a sequence $a_1,a_2,\ldots,a_m$ over $F$ with the property that any $t$-subset $X\subseteq F$ only occurs as a subset of $\{a_i,a_{i+1},\ldots,a_{i+k}\}$ in at most one position in the sequence. More precisely, we require that for all $t$-subsets $X\subseteq F$ there exists at most one choice for an increasing sequence $z_1,z_2,\ldots,z_t$ of integers such that \[ X=\{a_{z_1},a_{z_2},\ldots,a_{z_t}\} \] and where $|z_t-z_1|\leq k$. \begin{problem} Define $F_{t,k}(n)$ to be the length of the longest packing $t$-subset $k$-radius sequence. Find good asymptotic lower bounds on $F_{t,k}(n)$, either using probabilistic or explicit constructions. \end{problem} This problem has been considered in the case when $t=k+1$ by Curtis, Hines, Hurlbert and Moyer~\cite{CHHM} under the name of Ucycle packings. The authors prove that for any $t$, we have \[ F_{t,t-1}(n)=(1-o(1))\binom{n}{t}. \] Their work was motivated by the concept of a universal cycle; see~\cite{CDG,Hurlbert,Jackson}. \paragraph{Acknowledgement} The author would like to thank Jason Crampton for pointing out the simple caching construction described in Section~\ref{sec:comments}, and the referees for suggesting several improvements in the exposition and bibliography.
{ "timestamp": "2011-08-08T02:01:49", "yymm": "1101", "arxiv_id": "1101.1172", "language": "en", "url": "https://arxiv.org/abs/1101.1172", "abstract": "Let $n$ and $k$ be positive integers, and let $F$ be an alphabet of size $n$. A sequence over $F$ of length $m$ is a \\emph{$k$-radius sequence} if any two distinct elements of $F$ occur within distance $k$ of each other somewhere in the sequence. These sequences were introduced by Jaromczyk and Lonc in 2004, in order to produce an efficient caching strategy when computing certain functions on large data sets such as medical images.Let $f_k(n)$ be the length of the shortest $n$-ary $k$-radius sequence. The paper shows, using a probabilistic argument, that whenever $k$ is fixed and $n\\rightarrow\\infty$ \\[ f_k(n)\\sim \\frac{1}{k}\\binom{n}{2}. \\]The paper observes that the same argument generalises to the situation when we require the following stronger property for some integer $t$ such that $2\\leq t\\leq k+1$: any $t$ distinct elements of $F$ must simultaneously occur within a distance $k$ of each other somewhere in the sequence.", "subjects": "Combinatorics (math.CO); Discrete Mathematics (cs.DM)", "title": "The existence of k-radius sequences", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9911526446557306, "lm_q2_score": 0.7154239957834733, "lm_q1q2_score": 0.7090943854709598 }
https://arxiv.org/abs/1406.7339
Block Kaczmarz Method with Inequalities
The randomized Kaczmarz method is an iterative algorithm that solves overdetermined systems of linear equations. Recently, the method was extended to systems of equalities and inequalities by Leventhal and Lewis. Even more recently, Needell and Tropp provided an analysis of a block version of the method for systems of linear equations. This paper considers the use of a block type method for systems of mixed equalities and inequalities, bridging these two bodies of work. We show that utilizing a matrix paving over the equalities of the system can lead to significantly improved convergence, and prove a linear convergence rate as in the standard block method. We also demonstrate that using blocks of inequalities offers similar improvement only when the system satisfies a certain geometric property. We support the theoretical analysis with several experimental results.
\section{Introduction} The Kaczmarz method~\cite{Kac37:Angenaeherte-Aufloesung} is an iterative algorithm for solving linear systems of equations. It is usually applied to large-scale overdetermined systems because of its simplicity and speed (but also converges in the underdetermined case to the least-norm solution under appropriate initial conditions). Each iteration projects onto the solution space corresponding to one row in the system, in a sequential fashion. Strohmer and Vershynin prove that when the rows are selected from a certain random distribution rather than sequentially, the randomized method converges to the solution at a linear rate~\cite{SV09:Randomized-Kaczmarz}. The method has been applied to fields including image reconstruction, digital signal processing, and computer tomography~\cite{SS87:Image-Recovery,CFMSS92:POCS-Variants,Nat01:Mathematics-Computerized,RefWorks:504}. Leventhal and Lewis modify the randomized Kaczmarz method to apply to systems of linear equalities and inequalities~\cite{LL10:Randomized-Methods}, thereby extending results on the standard method in this setting (see e.g.~\cite{censor1981} and references therein). Unlike the traditional randomized algorithm which enforces a single constraint at each iteration, the block Kaczmarz approach recently analyzed by Needell and Tropp~\cite{NT12:Block-Kaczmarz} enforces multiple constraints simultaneously and thus offers computational advantages. Here we demonstrate convergence for a system of linear equalities and inequalities by combining a randomized block Kaczmarz method for the equalities with a randomized Kaczmarz algorithm for the inequalities. These results indicate that the block Kaczmarz method can be used for a system of equalities and inequalities, and in some cases may quicken convergence. We also consider the case of utilizing blocking in both the equalities and inequalities, although this can be detrimental unless the geometry of the system meets certain conditions. \subsection{Model and Notation} \label{sec:assumptions} We consider a linear system \begin{equation} \label{eqn:lin-system} \mtx{A} \vct{x} = \vct{b}, \end{equation} where $\mtx{A}$ is a real (or complex) $n \times d$ matrix, typically with $n\gg d$. The $\ell_p$ vector norm for $p \in [1, \infty]$ is denoted $\norm{\cdot}_p$, while $\norm{\cdot}$ is the spectral norm and $\fnorm{\cdot}$ refers to the Frobenius norm. For an $n \times d$ matrix $\mtx{A}$, the singular values are arranged in decreasing order and we write $$ \sigma_{\max}(\mtx{A}) \overset{\mathrm{\scriptscriptstyle{def}}}{=} \sigma_1(\mtx{A}) \geq \sigma_2(\mtx{A}) \geq \dots \geq \sigma_{d}(\mtx{A}) \overset{\mathrm{\scriptscriptstyle{def}}}{=} \sigma_{\min}(\mtx{A}). $$ We define the eigenvalues $\lambda_{\min}(\mtx{A}), \ldots, \lambda_{\max}(\mtx{A})$ of a matrix analogously. For convenience we will assume that each row $\vct{a_i} $ of $\mtx{A}$ has unit norm, $\enorm{ \vct{a_i} } = 1$, and we call such matrices \textit{standardized}. We define the usual condition number $$\kappa(\mtx{A}) \overset{\mathrm{\scriptscriptstyle{def}}}{=} \sigma_{\max}(\mtx{A})/\sigma_{\min}(\mtx{A}),$$ and denote the Moore-Penrose pseudoinverse of a matrix $\mtx{A}$ by $\mtx{A}^\dagger$. Recall that for a matrix $\mtx{A}$ with full row rank, the pseudoinverse is obtained by $\mtx{A}^\dagger \overset{\mathrm{\scriptscriptstyle{def}}}{=} \mtx{A}^{*}(\mtx{A}\mtx{A}^{*})^{-1}$. 
Now we consider a system of linear equalities and inequalities and denote by $S$ its non-empty set of feasible solutions. We thus consider the matrix $\mtx{A}$ whose rows can be arranged such that \begin{equation} \label{eqn:ineq-matrix} \mtx{A} = \left[ \begin{array}{lr} \mtx{A}_{=}\\ \mtx{A}_{\leq} \end{array} \right], \end{equation} and we will write $I_{=}$ and $I_{\leq}$ to denote the row indices of $\mtx{A}_{=}$ and $\mtx{A}_{\leq}$, respectively. Therefore, we ask that \begin{equation} \label{eqn:ineq-system} \langle \vct{a}_{i}, \vct{x} \rangle \leq \vct{b}_{i} \quad(i \in I_{\leq})\quad\text{and}\quad \langle \vct{a}_{i}, \vct{x} \rangle = \vct{b}_{i} \quad(i \in I_{=}). \end{equation} We will assume that the set of rows $\{1,2,...,n\}$ is partitioned such that the first $n_{e}$ rows correspond to equalities, and the remaining $n_i = n-n_{e}$ rows to inequalities. Thus $\mtx{A}_=$ is an $n_e\times d$ matrix and $\mtx{A}_{\leq}$ is $n_i \times d$. The error bound for this system of linear inequalities uses the function $e: \textbf{R}^n \rightarrow \textbf{R}^n$ defined as in~\cite{LL10:Randomized-Methods} by $$ e(y)_{i} = \left\{ \begin{array}{lr} y_{i}^{+} & \text{for } i \in {I_{\leq}}\\ y_{i} & \text{for } i \in {I_{=}} \end{array} \right. $$ where the positive part is defined as ${x^{+}} \overset{\mathrm{\scriptscriptstyle{def}}}{=} \max(x,0)$. \subsection{Details of Kaczmarz} The simple Kaczmarz method is an iterative algorithm that approximates a least-squares minimizer $\vct{x}_{\star}$ to the problem in~\eqref{eqn:lin-system}. It takes an arbitrary initial approximation $\vct{x}_{0}$, and at each iteration $j$ the current iterate is projected orthogonally onto the solution hyperplane $\{\left\langle \vct{a}_{i}, \vct{x} \right\rangle = \vct{b}_{i}\}$, using the update rule \begin{equation} \label{eqn:simp-rule} \vct{x}_{j+1} = \vct{x}_{j} + \frac{\vct{b}_{i} - \left\langle \vct{a}_{i}, \vct{x}_{j} \right\rangle }{\ \enormsq{\vct{a}_{i}}} \vct{a}_{i} \end{equation} where $i = (j \bmod n) + 1$ \cite{Kac37:Angenaeherte-Aufloesung}. With an unfortunate ordering of the rows, this method as-is can produce very slow convergence. However, it is well known that using randomized selection often eliminates this effect \cite{HS78:Angles-Null,HM93:Algebraic-Reconstruction}. The randomized Kaczmarz method put forth by Strohmer and Vershynin \cite{SV09:Randomized-Kaczmarz} selects row $i$ at random with probability proportional to $\enormsq{\vct{a}_{i}}$. This randomization provides an algorithm that is both simple to analyze and simple to implement in many cases. In this paper we assume each row has unit norm, so each row is selected uniformly at random from $\{1,\ldots,n\}$ in the simple randomized Kaczmarz approach.\footnote{This assumption is both for notational convenience, and because the matrix paving results discussed below only hold for standardized matrices. 
In practice, one can employ pre-conditioning on non-standardized systems, or extend the construction of matrix pavings to non-standardized systems~\cite{RefWorks:Versh}.} Strohmer and Vershynin prove a linear rate of convergence for consistent systems that depends on the scaled condition number of $\mtx{A}$, and not on the number of equations $n$ \cite{SV09:Randomized-Kaczmarz}, \begin{equation}\label{rv} \mathbb{E} \enormsq{\vct{x}_{j} -\vct{x}_{\star}} \leq \left[1 - \frac{1}{K}\right]^j \enormsq{\vct{x}_{0} - \vct{x}_{\star}}, \end{equation} where $\vct{x}_{\star}$ is the solution to the consistent system~\eqref{eqn:lin-system} and $K = \|\mtx{A}\|_F^2 / \sigma^2_{\min}(\mtx{A})$ denotes the scaled condition number. Needell extended this work to the inconsistent case, proving linear convergence to within a fixed radius of the least-squares solution \cite{Nee10:Randomized-Kaczmarz}, $$ \mathbb{E} \enormsq{\vct{x}_{j} -\vct{x}_{\star}} \leq \left[1 - \frac{1}{K}\right]^j \enormsq{\vct{x}_{0} - \vct{x}_{\star}} + K\|\vct{e}\|_{\infty}^2, $$ where $\vct{e} = \mtx{A}\vct{x}_{\star} - \vct{b}$ denotes the residual vector. Because the Kaczmarz method projects directly onto each solution hyperplane, such a convergence radius is unavoidable without adding a relaxation parameter. The randomized Kaczmarz method can be adapted to the case of a linear system of equalities and inequalities described in~\eqref{eqn:ineq-system}. Leventhal and Lewis \cite{LL10:Randomized-Methods} apply the Kaczmarz method to a consistent system of linear equalities and inequalities (here consistent simply means the feasible set $S$ is non-empty). At each iteration $j$, the previous iterate is projected onto the solution hyperplane of the selected row unless that row corresponds to an inequality that is already satisfied. If the inequality is satisfied for row $i$ selected at iteration $j$ $(\vct{a}_{i}^T \vct{x} \leq \vct{b}_{i})$, the approximation $\vct{x}_{j}$ is set to $\vct{x}_{j-1}$ \cite{LL10:Randomized-Methods}. The update rule for this algorithm is thus \begin{equation} \label{eqn:ineq-rule} \vct{x}_{j+1} = \vct{x}_{j} - \frac{e(\vct{a}_{i}^{T}\vct{x}_{j} - \vct{b}_{i})}{\ \enormsq{\vct{a}_{i}}} \vct{a}_{i}. \end{equation} This algorithm converges linearly in expectation \cite{LL10:Randomized-Methods}, with $$ \mathbb{E} \left[ d(\vct{x}_{j},S)^2 \;|\; \vct{x}_{j-1} \right] \leq d(\vct{x}_{j-1},S)^2 - \frac{\enormsq{e(\mtx{A}\vct{x}_{j-1}-\vct{b})}}{\ \normsq{\mtx{A}}_{F}}. $$ In order to bound the right hand side of this expression, the authors rely on a lemma due to Hoffman \cite{Hof52:Approximate-Solutions,LL10:Randomized-Methods}. This result states that for any system~\eqref{eqn:ineq-system} with non-empty solution set $S$, there exists a constant $L$ independent of $\vct{b}$ such that for all $\vct{x}$, \begin{equation}\label{hoffman} d(\vct{x}, S) \leq L\enorm{e(\vct{Ax}-\vct{b})}. \end{equation} When $\mtx{A}_= = \mtx{A}$ has full column rank, the Hoffman constant is the inverse of the smallest singular value, $L = \sigma_{\min}^{-1}(\mtx{A})$. Using this, their result becomes \begin{equation} \label{eqn:ineq-convergence} \mathbb{E} \left[d(\vct{x}_j,S)^2\right] \ \leq \ \left[ 1 - \frac{1}{L^2\|\mtx{A}\|_F^2} \right]^j\cdot d(\vct{x}_{0},S)^2, \end{equation} which coincides with~\eqref{rv} for consistent systems of equalities. 
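Before turning to the block variant, the following sketch (a hypothetical implementation, not the authors' code) illustrates the update rule~\eqref{eqn:ineq-rule} for a standardized system with mixed equalities and inequalities; rows are drawn uniformly at random, which matches the sampling described above because all rows have unit norm.
\begin{verbatim}
import numpy as np

def kaczmarz_with_inequalities(A, b, is_ineq, n_iter=10000, seed=0):
    # A: standardized n x d matrix, b: right-hand side,
    # is_ineq: boolean mask marking which rows are inequality constraints.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        r = A[i] @ x - b[i]
        if is_ineq[i]:
            r = max(r, 0.0)   # e(.)_i: only act when the inequality is violated
        x -= r * A[i] / (A[i] @ A[i])
    return x
\end{verbatim}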
\subsection{Block Kaczmarz} A block variant of the randomized Kaczmarz method due to Elfving \cite{Elf80:Block-Iterative-Methods} has been recently analyzed by Needell and Tropp \cite{NT12:Block-Kaczmarz} and can improve the convergence rate in certain cases. The block Kaczmarz method first partitions the rows $\{1,...,n\}$ into $m$ blocks, denoted $\tau_{1}, \ldots, \tau_m$. Instead of selecting one row per iteration as done with the simple Kaczmarz method, the block Kaczmarz algorithm chooses a block uniformly at random at each iteration. Thus the block Kaczmarz method enforces multiple constraints simultaneously. At each iteration, the previous iterate $\vct{x}_{j-1}$ is projected onto the solution space to $\mtx{A}_{\tau}\vct{x} = \vct{b}_{\tau}$, which enforces the set of equations in block $\tau$ \cite{NT12:Block-Kaczmarz}. Here $\mtx{A}_{\tau}$ and $\vct{b}_{\tau}$ denote the row submatrix of $\mtx{A}$ and the subvector of $\vct{b}$ indexed by $\tau$, respectively, yielding the iterative rule \begin{equation} \label{eqn:block-algorithm} \vct{x}_{j} = \vct{x}_{j-1} + (\mtx{A}_{\tau})^\dagger(\vct{b}_{\tau} - \mtx{A}_{\tau} \vct{x}_{j-1}). \end{equation} The pseudoinverse used in~\eqref{eqn:block-algorithm} returns the least-norm solution to the underdetermined least-squares problem for a wide or square row submatrix $\mtx{A}_{\tau}$. Depending on the characteristics of the submatrix $\mtx{A}_{\tau}$, the block method can provide better convergence than the simple method. If we assume that the submatrices $\mtx{A}_{\tau}$ are well conditioned, the additional cost of computing their pseudo-inverse can be overcome by the gain in utilizing block multiplications (see our experiments in Section~\ref{sec:exps}). In fact, if the blocks admit a fast multiply (for example if the matrix is built of DFT or circulant blocks), then the computational cost of the block iteration~\eqref{eqn:block-algorithm} is similar to the cost of the simple update rule in~\eqref{eqn:simp-rule}. Since the convergence depends heavily on the conditioning of each submatrix, one seeks partitions of the rows into blocks for which each block is well-conditioned. The notion of a \textit{row-paving} allows one to do precisely that. \begin{definition} We define an ($m$, $\beta$) row paving\footnote{The standard definition of a row paving also includes a constant $\alpha$ which serves as a lower bound to the smallest singular value. We ignore that parameter here since it will not be utilized.} of matrix $\mtx{A}$ as a partition $T=\{\tau_1,\ldots,\tau_m\}$ of the row indices such that $$ \lambda_{\max}( \mtx{A}_{\tau} \mtx{A}_{\tau}^* ) \leq \beta \quad\text{for each $\tau \in T$.} $$ \end{definition} The \textit{size} of the paving, or number of blocks, is $m$. The value of $\beta$ is the upper paving bound, which controls the spectral norms of the submatrices. Needell and Tropp \cite{NT12:Block-Kaczmarz} show that these parameters determine the performance of the algorithm, with convergence for a consistent system admitting an $(m, \beta)$ paving given by \begin{equation} \label{eqn:block-convergence} \mathbb{E} \enormsq{ \vct{x}_j - \vct{x}_{\star} } \ \leq \ \left[ 1 - \frac{\sigma_{\min}^2(\mtx{A})}{\beta m} \right]^{j} \enormsq{ \vct{x}_0 - \vct{x}_{\star} }. \end{equation} Therefore the convergence rate depends on the size $m$ and upper bound $\beta$; the algorithm's performance improves with low values of $m$ and $\beta$, and large $\sigma_{\min}^2(\mtx{A})$. 
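As a complement, here is a minimal sketch (hypothetical, not the authors' implementation) of the block update~\eqref{eqn:block-algorithm}; the pseudoinverse is applied implicitly through a least-squares solve, which returns the least-norm solution when the block $\mtx{A}_\tau$ is wide with full row rank.
\begin{verbatim}
import numpy as np

def block_kaczmarz(A, b, blocks, n_iter=1000, seed=0):
    # blocks: a list of index arrays forming a row paving of A.
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        tau = blocks[rng.integers(len(blocks))]
        # x <- x + pinv(A_tau) @ (b_tau - A_tau x), via a least-squares solve
        z, *_ = np.linalg.lstsq(A[tau], b[tau] - A[tau] @ x, rcond=None)
        x += z
    return x
\end{verbatim}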
The authors also prove convergence for inconsistent systems, with the same convergence rate and a convergence radius which also depends on the minimum of all $\lambda_{\min}( \mtx{A}_{\tau} \mtx{A}_{\tau}^* )$; see \cite{NT12:Block-Kaczmarz} for details. Surprisingly, every standardized matrix admits a good row paving. The following result is due to \cite{RefWorks:485,Tro09:Column-Subset} which builds on the foundational work of \cite{BT87:Invertibility-Large,RefWorks:546}. \begin{proposition}[Existence of Good Row Pavings] \label{prop:intro-paving} For any $\delta \in (0, 1)$ and standardized $n \times d$ matrix $\mtx{A}$, there is a row paving satisfying $$ m \leq \cnst{C} \cdot \delta^{-2} \normsq{\mtx{A}} \log(1+n) \quad\text{and}\quad 1 - \delta \leq \beta \leq 1 + \delta, $$ where $\cnst{C}$ is an absolute constant. \end{proposition} Although this is an existential result, there are constructive methods to obtain such pavings, and for certain classes of matrices, they can even be obtained by a random partitioning of the rows \cite{RefWorks:556,RefWorks:536,NT12:Block-Kaczmarz}. With such a paving in tow, the convergence of~\eqref{eqn:block-convergence} becomes $$ \mathbb{E} \enormsq{ \vct{x}_j - \vct{x}_{\star} } \leq \left[ 1 - \frac{1}{\cnst{C} \kappa^2(\mtx{A}) \log(1+n)} \right]^j \enormsq{ \vct{x}_0 - \vct{x}_\star }. $$ Although often comparable to the convergence rate for the simple method~\eqref{rv}, numerical results confirm that the block method offers significant reduction in computation time due to the speed of matrix--vector multiplication (see e.g. \cite{NT12:Block-Kaczmarz}). \subsection{Contribution} This paper analyzes the system with matrix as described in~\eqref{eqn:ineq-matrix} using an algorithm that applies the block Kaczmarz approach to the equalities given by $\mtx{A}_{=}$ and the simple method to the inequalities given by $\mtx{A}_{\leq}$. A paving is created for $\mtx{A}_{=}$, with the inequalities excluded. At each iteration, we select from $\mtx{A}_{=}$ with a fixed probability $p$ and from $\mtx{A}_\leq$ with probability $1-p$. In the former case, we select a block $\tau$ from the paving $T$ uniformly at random, and in the latter case we select a row $i$ of $\mtx{A}_\leq$ uniformly at random. In the case of a block of equalities being selected, the algorithm proceeds by updating $\vct{x}_{j}$ using~\eqref{eqn:block-algorithm}. When an inequality row is selected, $\vct{x}_{j}$ is updated using the rule~\eqref{eqn:ineq-rule}. We prove that this method yields linear convergence to the solution set $S$. We also include a discussion about paving both $\mtx{A}_=$ and $\mtx{A}_\leq$, which identifies a geometric property of the system that allows for improved convergence by utilizing two pavings. We show that when this property is not satisfied, utilizing both pavings can be detrimental to convergence. \subsection{Organization} Section~\ref{sec:main} lays out our main result, Theorem~\ref{thm:convergence}, and provides a proof. We discuss blocking the full matrix in Section~\ref{sec:double}, and Section~\ref{sec:exps} explains numerical experiments and results. We conclude with discussion and related work in Section~\ref{sec:end}. \section{Analysis of the Block Kaczmarz Algorithm for a System of Inequalities}\label{sec:main} In this section we analyze the convergence of the described method, which is detailed in Algorithm~\ref{alg}. 
\begin{center} \begin{algorithm}[hb] \caption{Block Kaczmarz Method for a System of Inequalities} \label{alg} \begin{center} \fbox{ \begin{minipage}{.95\textwidth} \vspace{4pt} \alginout{\begin{itemize} \item Matrix $\mtx{A}$ with dimension $n \times d$ \item Right-hand side $\vct{b}$ with dimension $n$ \item Number of rows representing equalities, $n_{e}$, and inequalities, $n_i = n - n_e$ \item Partition $T = \{\tau_1, \dots, \tau_m\}$ of the row indices $\{1, \dots, n_{e}\}$ and paving constant $\beta$ \item Initial iterate $\vct{x}_0$ with dimension $d$ \item Convergence tolerance $\varepsilon > 0$ \end{itemize}} {An estimate $\hat{\vct{x}}$ to the solution of the system~\eqref{eqn:ineq-system}} \vspace{8pt}\hrule\vspace{8pt} \begin{algtab*} $j \leftarrow 0$ \algrepeat $j \leftarrow j + 1$ \\ Draw uniformly at random $q$ from $[0,1]$ \\ \algif{$q \leq \frac{\beta m}{n_{i} + \beta m}$} Choose a block $\tau$ uniformly at random from $T$ \\ $\vct{x}_j \leftarrow \vct{x}_{j-1} + (\mtx{A}_\tau)^\dagger (\vct{b}_\tau - \mtx{A}_\tau\vct{x}_{j-1})$ \\%\hfill \hfill ({Solve least-squares approximation}) \\ \algelse Choose a row $i$ uniformly at random from $\{n_{e} + 1, \dots, n\}$ \\ $\vct{x}_{j} \leftarrow \vct{x}_{j-1} - \frac{e(\vct{a}_{i}^{T}\vct{x}_{j-1} - \vct{b}_{i})}{\ \enormsq{\vct{a}_{i}}} \vct{a}_{i}$ \\ \algend \alguntil{$\enormsq{ e(\mtx{A}\vct{x}_j - \vct{b}) } \leq \varepsilon^2$} $\hat{\vct{x}} \leftarrow \vct{x}_j$ \end{algtab*} \end{minipage}} \end{center} \end{algorithm} \end{center} Notice that the probability of selecting a block of $\mtx{A}_=$ is $\frac{\beta m}{n_{i} + \beta m}$. This quantity corresponds to the relative size of $A_{=}$ in the system, where the size is measured in terms of the paving quantities $\beta m$. This value may be difficult to compute precisely, and the simpler threshold of $n_e/n$ appears to also work well in practice. We provide no evidence that our selection of this threshold is most efficient, nor any more efficient than using one proportional to the number of equality rows $n_{e}$. We find that this algorithm yields linear convergence in expectation with a rate that only depends on the number of inequalities $n_{i}$, paving size $m$, and upper bound $\beta$. Our main result is described in Theorem~\ref{thm:convergence}. \begin{theorem}[Convergence] \label{thm:convergence} Let the standardized matrix $\mtx{A}\in\mathbb{R}^{n \times d}$ and $b\in\mathbb{R}^n$ correspond to a system as in~\eqref{eqn:ineq-matrix} with the first $n_{e}$ rows being equalities and the remaining $n_i = n - n_{e}$ rows being inequalities. Let $T$ be an $(m, \beta)$ row paving of $\mtx{A}_{=}$. Let $\vct{x}_0$ be an arbitrary initial estimate and $S$ the non-empty feasible region. Then Algorithm~\ref{alg} satisfies for each iteration $j$ = 1,2,3,..., \begin{align*} \mathbb{E} \left[d(\vct{x}_j,S)^2\right] \ \leq \ \left( 1 - \frac{1}{L^2(n_{i} + \beta m)} \right)^j\cdot d(\vct{x}_{0},S)^2, \end{align*} where $L$ is the Hoffman constant~\eqref{hoffman}. \end{theorem} \begin{remarks}\\ \noindent{\bfseries 1. }Note that when there are no block projections, no inequalities, or neither, Theorem~\ref{thm:convergence} recovers the results of the standard randomized Kaczmarz for inequalities \cite{LL10:Randomized-Methods}, the standard randomized block Kaczmarz method \cite{NT12:Block-Kaczmarz} or the standard randomized Kaczmarz method \cite{SV09:Randomized-Kaczmarz}, respectively. We thus view this result as a completely generalized convergence bound. 
\noindent{\bfseries 2. } If we let $\rho_s$ and $\rho_b$ be the convergence rates of the simple and block methods for mixed systems, respectively, then by~\eqref{eqn:ineq-convergence} and Theorem~\ref{thm:convergence}, $$ \rho_{{s}} \geq \frac{1}{L^2 n} \quad\text{ and }\quad \rho_{{b}} \geq \frac{1}{L^2( n_{i} + \beta m)}. $$ It is evident that the expected per-iteration convergence rate of the block method is faster than that of the simple method when $n_{i} + \beta m < n$. Since $\beta$ can be chosen close to $1$ and $m<n_e$, where $n_e$ is the number of rows in $\mtx{A}_=$, this holds quite easily. \noindent{\bfseries 3. }Since a single iteration using a block $\mtx{A}_\tau$ in general may cost more than an iteration utilizing a single row, it is fairer to compare per epoch, rather than per iteration. An epoch is typically the minimum number of iterations needed to visit each row of the matrix. When there are inequalities present that are already satisfied in a given iteration, that iteration may make no contribution and cost very little computationally. Thus the notion of epoch may be slightly skewed here, but if we ignore this subtlety the simple method will have approximately $n$ iterations per epoch, compared to $n_{i} + m$ iterations per epoch with the block method. The approximate per-epoch convergence rates can thus be compared as $$ n \cdot \rho_{{s}} \geq \frac{1}{L^2} \quad\text{ and }\quad (n_{i} + m) \cdot \rho_{{b}} \geq \frac{n_{i} + m}{L^2( n_{i} + \beta m)}. $$ This result is similar to that found by Needell and Tropp \cite{NT12:Block-Kaczmarz}, with the block convergence rate at best equal to the simple convergence rate when $\beta = 1$. However, as already noted, the block method is quite advantageous computationally. \end{remarks} Combining the paving result of Proposition~\ref{prop:intro-paving} with Theorem~\ref{thm:convergence} yields the following corollary. \begin{corollary}\label{cor1} Instate the assumptions and notation of Theorem~\ref{thm:convergence} and let $\mtx{A}_=$ be equipped with an $(m, \beta)$ row-paving as in Proposition~\ref{prop:intro-paving}. Then the iterates of Algorithm~\ref{alg} satisfy \begin{align*} \mathbb{E} \left[d(\vct{x}_j,S)^2\right] \ \leq \ \gamma^j\cdot d(\vct{x}_{0},S)^2, \end{align*} where $\gamma = \left( 1 - \frac{1}{L^2(n_{i} + C\|\mtx{A}_=\|^2\log(1+n))} \right)$ and $C$ is some absolute constant. \end{corollary} \begin{proof}[Proof of Theorem~\ref{thm:convergence}] Fix an iteration $j$ of Algorithm~\ref{alg}. We proceed as in \cite{NT12:Block-Kaczmarz} and \cite{LL10:Randomized-Methods}. First, we suppose that $q \leq \frac{\beta m}{n_{i} + \beta m}$, so that a block $\tau$ of equalities is selected this iteration. Then writing $P_S$ as the orthogonal projection onto $S$, we have $\vct{b}_\tau = \mtx{A}_\tau P_S \vct{x}_{j-1}$ since $P_S \vct{x}_{j-1} \in S$. We then have \begin{align*} \vct{x}_j &= \vct{x}_{j-1} + \mtx{A}_\tau^\dagger (\vct{b}_\tau - \mtx{A}_\tau \vct{x}_{j-1}) \\ &= \vct{x}_{j-1} + \mtx{A}_\tau^\dagger (\mtx{A}_\tau P_S \vct{x}_{j-1} - \mtx{A}_\tau \vct{x}_{j-1}) \\ &= \vct{x}_{j-1} + \mtx{A}_\tau^\dagger \mtx{A}_\tau (P_S \vct{x}_{j-1} - \vct{x}_{j-1}). \end{align*} Thus, \begin{align*} \|\vct{x}_j &- P_S \vct{x}_{j-1}\|^2\\ &= \enormsq{\vct{x}_{j-1} - P_S \vct{x}_{j-1} - \mtx{A}_\tau^\dagger \mtx{A}_\tau (\vct{x}_{j-1} - P_S \vct{x}_{j-1})} \\ &= \enormsq{(\mathbf{I} - \mtx{A}_\tau^\dagger \mtx{A}_\tau) (\vct{x}_{j-1} - P_S \vct{x}_{j-1})}. 
\end{align*} Taking expectation (over the choice of the block $\tau$, conditioned on previous choices), and using the fact that $\mtx{A}_\tau^\dagger \mtx{A}_\tau$ is an orthogonal projector, along with the properties of the paving yields \begin{align*} \mathbb{E} \|\vct{x}_j &- P_S \vct{x}_{j-1}\|^2\\ &= \mathbb{E} \enormsq{(I - \mtx{A}_\tau^\dagger \mtx{A}_\tau) (\vct{x}_{j-1} - P_S \vct{x}_{j-1})} \\ &= \enormsq{\vct{x}_{j-1} - P_S \vct{x}_{j-1}} - \mathbb{E} \enormsq{\mtx{A}_\tau^\dagger \mtx{A}_\tau (\vct{x}_{j-1} - P_S \vct{x}_{j-1})} \\ &\leq \enormsq{\vct{x}_{j-1} - P_S \vct{x}_{j-1}} - \frac{1}{\ \beta} \mathbb{E} \enormsq{\mtx{A_{\tau}} (\vct{x}_{j-1} - P_S \vct{x}_{j-1})}. \end{align*} Since $d(\vct{x}_{j-1},S) = \enorm{\vct{x}_{j-1} - P_S \vct{x}_{j-1}}$ and $d(\vct{x}_{j}, S) \leq \enorm{\vct{x}_j - P_S \vct{x}_{j-1}}$, this means that \begin{align}\label{eqs} \mathbb{E} \left[d(\vct{x}_j,S)^2\right] &\leq d(\vct{x}_{j-1}, S)^2 - \frac{1}{\ \beta } \mathbb{E} \enormsq{\mtx{A_{\tau}} (\vct{x}_{j-1} - P_S \vct{x}_{j-1})} \notag\\ &= d(\vct{x}_{j-1}, S)^2 - \frac{1}{\ \beta m} \sum_{\tau \in T} \enormsq{\mtx{A}_{\tau} \vct{x}_{j-1} - \vct{b}_{\tau}} \notag\\ &= d(\vct{x}_{j-1}, S)^2 - \frac{1}{\ \beta m} \sum_{i \in I_{=}} e(\mtx{A}_{=} \vct{x}_{j-1} - \vct{b}_{=})_{i}^2. \end{align} Next suppose that instead $i \in I_\leq$ is selected. Then since each row $\vct{a_i}$ has unit norm, \begin{align*} d(\vct{x}_j,S)^2 &\leq \enormsq{\vct{x}_j - P_S \vct{x}_{j-1}}\\ &= \enormsq{\vct{x}_{j-1} - e(\mtx{Ax}_{j-1}-\vct{b})_i \vct{a_i} - P_S \vct{x}_{j-1}}\\ &= \enormsq{\vct{x}_{j-1} - P_S \vct{x}_{j-1}} + e(\mtx{Ax}_{j-1}-\vct{b})_i^2 \\ &\;\;-2e(\mtx{Ax}_{j-1}-\vct{b})_i\langle \vct{a_i} , \vct{x}_{j-1} - P_S \vct{x}_{j-1} \rangle\\ &\leq d(\vct{x}_{j-1},S)^2 - {e(\mtx{A} \vct{x}_{j-1} - \vct{b})_{i}^2}, \end{align*} where the last line follows from the fact that $\langle \vct{a_i}, P_S \vct{x}_{j-1} \rangle \leq b_i$ and $e(\mtx{A} \vct{x}_{j-1} - \vct{b})_{i} \geq 0$. Now taking expectation again we have \begin{align*} \mathbb{E} \left[d(\vct{x}_j,S)^2\right] &\leq d(\vct{x}_{j-1},S)^2 - \mathbb{E} (e(\mtx{A} \vct{x}_{j-1} - \vct{b})_{i}^2) \\ &= d(\vct{x}_{j-1},S)^2 - \frac{1}{\ n_{i}} \sum_{i \in I_\leq} e(\mtx{A}_{\leq} \vct{x}_{j-1} - \vct{b}_\leq)_{i}^2. \end{align*} Combining these results and letting $E_=$ and $E_\leq$ denote the events that a block from $T$ and a row from $I_{\leq}$ is selected, respectively, we have \begin{align*} \mathbb{E} \left[d(\vct{x}_j,S)^2\right] &= p \cdot \mathbb{E}[ d(\vct{x}_{j},S)^2 | E_=] + (1-p) \cdot \mathbb{E} [ d(\vct{x}_{j},S)^2 | E_{\leq}] \\ &\leq p\left[d(\vct{x}_{j-1}, S)^2 - \frac{1}{\ \beta m} \sum_{i \in I_=} e(\mtx{A}_{=} \vct{x}_{j-1} - \vct{b}_=)_{i}^2\right] \\ &\;\;+ (1-p) \left[d(\vct{x}_{j-1},S)^2 - \frac{1}{\ n_{i}} \sum_{i \in I_\leq} e(\mtx{A}_{\leq} \vct{x}_{j-1} - \vct{b}_\leq)_{i}^2\right] \\ &= d(\vct{x}_{j-1}, S)^2 - p \cdot \frac{1}{\ \beta m} \sum_{i \in I_=} e(\mtx{A}_{=} \vct{x}_{j-1} - \vct{b}_=)_{i}^2\\ &\;\; - (1-p) \cdot \frac{1}{\ n_{i}} \sum_{i \in I_\leq} e(\mtx{A}_{\leq} \vct{x}_{j-1} - \vct{b}_\leq)_{i}^2. 
\end{align*} Since $p = \frac{\beta m}{\ n_{i} + \beta m}$, we have $\frac{1-p}{n_i} = \frac{1}{n_{i}+\beta m}$ and we can simplify \begin{align*} \mathbb{E} \left[d(\vct{x}_j,S)^2\right] &\leq d(\vct{x}_{j-1}, S)^2 - \frac{1}{\ n_{i} + \beta m} \Big[\sum_{i \in I_=} e(\mtx{A}_{=} \vct{x}_{j-1} - \vct{b}_=)_{i}^2 \\ &\;\;\;+ \sum_{i \in I_\leq} e(\mtx{A}_{\leq} \vct{x}_{j-1} - \vct{b}_\leq)_{i}^2\Big] \\ &= d(\vct{x}_{j-1}, S)^2 - \frac{1}{\ n_{i} + \beta m} \enormsq{e(\mtx{A} \vct{x}_{j-1} - \vct{b})} \\ &\leq d(\vct{x}_{j-1}, S)^2 - \frac{1}{L^2(n_{i} + \beta m)} \cdot d(\vct{x}_{j-1},S)^2 \\ &= \left[1 - \frac{1}{L^2(n_{i} + \beta m)}\right] d(\vct{x}_{j-1},S)^2, \end{align*} where we have utilized the Hoffman bound~\eqref{hoffman} in the second inequality. Utilizing independence of the random selections and recursing on this relation yields the desired result. \end{proof} \section{A Discussion about Blocking Inequalities}\label{sec:double} It is natural to ask whether one can benefit by blocking both the equalities as above and also the inequalities, as described by Algorithm~\ref{alg2}. Indeed, Section~\ref{sec:exps} will show dramatic improvements in computational time when the rows of $\mtx{A}_{=}$ are paved and block projections as in Algorithm~\ref{alg} are used. So can one benefit even more by paving also the rows of $\mtx{A}_{\leq}$? The answer to this question heavily depends on the structure of the matrix $\mtx{A}$. \begin{center} \begin{algorithm}[hb] \caption{Double Block Kaczmarz Method for a System of Inequalities} \label{alg2} \begin{center} \fbox{ \begin{minipage}{.95\textwidth} \vspace{4pt} \alginout{\begin{itemize} \item Matrix $\mtx{A}$ with dimension $n \times d$ \item Right-hand side $\vct{b}$ with dimension $n$ \item Partition $T' = \{\tau_1', \dots, \tau_{m'}'\}$ of the row indices $\{1, \dots, n_{i}\}$ \item Partition $T = \{\tau_1, \dots, \tau_m\}$ of the row indices $\{1, \dots, n_{e}\}$ \item Initial iterate $\vct{x}_0$ with dimension $d$ \item Convergence tolerance $\varepsilon > 0$ \end{itemize}} {An estimate $\hat{\vct{x}}$ to the solution of the system~\eqref{eqn:ineq-system}} \vspace{8pt}\hrule\vspace{8pt} \begin{algtab*} $j \leftarrow 0$ \algrepeat $j \leftarrow j + 1$ \\ Draw uniformly at random $q$ from $[0,1]$ \\ \algif{$q \leq \frac{\beta m}{\beta' m' + \beta m}$} Choose a block $\tau$ uniformly at random from $T$ \\ $\vct{x}_j \leftarrow \vct{x}_{j-1} + (\mtx{A}_\tau)^\dagger (\vct{b}_\tau - \mtx{A}_\tau\vct{x}_{j-1})$ \hfill ({Solve least-squares approximation})\\ \algelse Choose a block $\tau'$ uniformly at random from $T'$ \\ Set $\sigma = \{i\in\tau' : \langle \vct{a_i}, \vct{x}_{j-1} \rangle > b_i\} \subset\tau'$ \hfill (Select unsatisfied subset)\\ $\vct{x}_j \leftarrow \vct{x}_{j-1} + (\mtx{A}_\sigma)^\dagger (\vct{b}_\sigma - \mtx{A}_\sigma\vct{x}_{j-1})$ \hfill ({Solve least-squares approximation})\\ \algend \alguntil{$\enormsq{ e(\mtx{A}\vct{x}_j - \vct{b}) } \leq \varepsilon^2$} $\hat{\vct{x}} \leftarrow \vct{x}_j$ \end{algtab*} \end{minipage}} \end{center} \end{algorithm} \end{center} If we only consider $\mtx{A}_{=}$, a block projection as in~\eqref{eqn:block-algorithm} enforces all the equations indexed by $\tau$ to be satisfied. This is of course desirable when the rows indexed by $\tau$ correspond to equalities. Also, if a single inequality corresponding to row $i$ in $\mtx{A}_{\leq}$ is not satisfied and we perform a single projection as in~\eqref{eqn:simp-rule}, we are again enforcing that inequality to hold with equality. 
However, this improves the estimation since in this case we know the solution set $S$ lies on the opposite side of the hyperplane $\{\vct{x} : \langle \vct{x}, \vct{a_i} \rangle = b_i\}$ from the current estimation (see Figure~\ref{draw1} (a)). \begin{figure}[h] \begin{tabular}{ccc} \includegraphics[scale=0.4]{draw1.pdf}& \includegraphics[scale=0.4]{draw2.pdf}\hspace*{0.5in} & \includegraphics[scale=0.4]{draw3.pdf}\\ (a) & (b) & (c)\\ \end{tabular} \caption{Possible geometries of the system. S denotes solution space (or solution point). Yellow shading denotes regions where inequalities $i_1$ and $i_2$ are both satisfied. (a) A single projection onto hyperplane $H_i = \{\vct{x} : \langle \vct{a_i}, \vct{x}\rangle = b_i\}$ provides improved estimation. (b) Block projection onto the intersection of hyperplanes may also provide improved estimation. (c) Block projection onto the intersection of hyperplanes may produce an estimate farther from the solution set.} \label{draw1} \end{figure} On the other hand, if we apply a block projection as in~\eqref{eqn:block-algorithm} to a set of inequalities indexed by $\tau$ which are not satisfied by the current estimation $\vct{x}_{j-1}$ then we enforce \textit{all of them to hold with equality simultaneously}. Depending on the geometry of the involved rows, this may result in an improved estimation or actually one much farther from the solution set. Of course, one might alternatively want to solve the convex program to project onto the intersection of the corresponding half-spaces, but we would like to maintain the efficiency and simplicity of the block Kaczmarz method. As an illustrative example, Figure~\ref{draw1} (b) and (c) demonstrate two possible scenarios in two dimensions. Here, the solution space is a single point marked $S$, and we draw two hyperplanes $H_{i_1}$ and $H_{i_2}$ where $H_i = \{\vct{x} : \langle \vct{a_i}, \vct{x}\rangle = b_i\}$. The yellow shaded regions denote areas where both inequalities hold true: $\{\vct{x} : \langle \vct{a}_{i_1}, \vct{x}\rangle \leq b_{i_1} \text{ and } \langle \vct{a}_{i_2}, \vct{x}\rangle \leq b_{i_2}\}$. Notice that in (b), when the angle between $\vct{x}_{j-1} - \vct{x}_j$ and $\vct{s} - \vct{x}_j$ is obtuse, the orthogonal projection of the estimate $\vct{x}_{j-1}$ onto their intersection is guaranteed to be closer to the solution set $S$. On the other hand, when that angle is acute we see exactly the opposite, as in (c). We can quantify this notion by the following definition. \begin{definition} For an $r\times d$ matrix $\mtx{A}$ and $\vct{b}\in\mathbb{R}^r$, for row $i$ denote by $\tilde{H}_i$ and $H_i$ the half-space $\tilde{H}_i = \{\langle \vct{a_i}, \vct{x}\rangle \leq b_i \}$ and hyperplane ${H}_i = \{\langle \vct{a_i}, \vct{x}\rangle = b_i \}$, respectively, and write $P_S$ for the orthogonal projection onto a convex set $S$. An \textit{obtuse $(m, \beta)$ row paving} of the matrix $\mtx{A}$ is an $(m, \beta)$ row paving $T=\{\tau_1, \ldots, \tau_m\}$ that also satisfies the following. Let $\tau\in T$ and let $\vct{s} \in \cap_{i\in\tau}\tilde{H}_i$, $\vct{w} \in \cap_{i\in\tau}\tilde{H}_i^c$, and $\vct{z} = P_{\cap_{i\in\tau}{H}_i}\vct{w}$. Then $$ \langle \vct{w}-\vct{z}, \vct{s}-\vct{z}\rangle < 0. $$ In other words, the angle between $\vct{w}-\vct{z}$ and $\vct{s} - \vct{z}$ is obtuse. \end{definition} We will see that performing block projections on the inequalities in the system only makes sense when one can obtain an obtuse row paving. 
We will use $\vct{w}=\vct{x}_{j-1}$, $\vct{z} = \vct{x}_j$, and $\vct{s}\in S$. Notice that if $i_1, i_2 \in \tau \in T$, then the partition used in the system depicted in Figure~\ref{draw1} (c) does not constitute an obtuse row paving. We conduct two simple experiments to demonstrate the different behavior of the algorithm. In all cases the matrix $\mtx{A}$ is a $300\times 100$ matrix with standard normal entries, $100$ rows correspond to inequalities, and $\vct{b}$ is generated so that the solution set $S$ is non-empty. We measure the residual error which we define as $\enorm{e(\mtx{A}\vct{x}_j - \vct{b})}$. Figure~\ref{fig:bad} (a) shows the behavior of the block method with this matrix and a row paving obtained via a random row partition of $30$ blocks ($10$ rows per block). This generation will create a matrix with paving that with very high probability is not an obtuse row paving. As Figure~\ref{fig:bad} demonstrates, the block method does not converge to a solution in this case. However, as Figure~\ref{fig:bad} (c) shows, the simple Kaczmarz method succeeds in identifying a point in the solution space. Next, we create a matrix in the exact same way, and create the same random row paving. Then, however, we iterate through every block in the paving corresponding to inequalities and if two rows $i$ and $k$ in a block satisfy $\langle \vct{a_i}, \vct{a}_k\rangle >0$, we replace row $\vct{a_i}$ with $-\vct{a_i}$ and entry $b_i$ with $-b_i$. This guarantees every block in the paving yields a geometry like that shown in Figure~\ref{draw1} (b), and gives an obtuse row paving. Note that of course this changes the solution space as well so one cannot employ this strategy in general. We then add positive values to the entries in $\vct{b}$ corresponding to inequalities to ensure the solution set $S$ is non-empty. With this new system and paving, we again run the block method and see that the method now converges to a point in the solution set, as seen in Figure~\ref{fig:bad} (b). \begin{figure}[ht] \begin{center} \begin{tabular}{ccc} \includegraphics[scale=0.35]{obtuse_30blocks.pdf} & \includegraphics[scale=0.35]{block_regular_30blocks.pdf} & \includegraphics[scale=0.35]{simple.pdf} \\ (a) & (b) & (c) \\ \end{tabular} \caption{ Residual error of the Kaczmarz Method per epoch: a) Median residual error of block method over $40$ trials for matrix $\mtx{A}$ not having an obtuse row paving, b) Median residual error of block method over $40$ trials for matrix $\mtx{A}$ using an obtuse row paving, c) Median residual error of simple method over $40$ trials for same matrix as in a). Shaded region spans across minimum and maximum values over all trials and solid line denotes median value.\label{fig:bad}} \end{center} \end{figure} With this definition we obtain the following result, whose proof can be found in the appendix. \begin{theorem}\label{thm:obtuse} Let $\mtx{A}$ satisfy the assumptions of Theorem~\ref{thm:convergence} and in addition have an obtuse $(m', \beta')$ row paving of $\mtx{A}_{\leq}$. Let $x_1, \ldots$ denote the iterates of Algorithm~\ref{alg2}. Then using the notation of Theorem~\ref{thm:convergence}, \begin{align*} \mathbb{E} [d(\vct{x}_j,S)^2] \leq \left[1 - \frac{1}{L^2(\beta' m' + \beta m)}\right]^j d(\vct{x}_{0},S)^2. \end{align*} \end{theorem} Note that row pavings of standardized matrices can be obtained readily, often by random partitions \cite{tropp2011improved,tropp2012user,NT12:Block-Kaczmarz}, whereas obtuse row pavings may be much more challenging to obtain in general. 
Of course, by default the trivial paving, which assigns a single row to each set $\tau$, is always an obtuse row paving. We focus on Algorithm~\ref{alg}, which paves only $\mtx{A}_=$, and leave further analysis of Algorithm~\ref{alg2} and constructions of obtuse row pavings for future work. \section{Experiments}\label{sec:exps} We use {\sc Matlab} to run some experiments using random matrices to test the convergence of the block Kaczmarz method applied to a system of equalities and inequalities. In each experiment, we create a random $500$ by $100$ matrix $\mtx{A}$ where each element is an independent standard normal random variable. Each entry is then divided by the norm of its row so that the matrix is standardized. The first 400 rows of the matrix $\mtx{A}$ form $\mtx{A}_{=}$, and the remaining 100 rows are set as the inequalities of $\mtx{A}_{\leq}$ in the method described by~\eqref{eqn:ineq-system}. The experiments are run using the following procedure. For each of 100 trials, \begin{enumerate} \item Create matrix $\mtx{A}$ in the manner described above. \item Create $\vct{x}_{\star}$ where each entry is selected independently from a standard normal distribution. Set $\vct{b} = \mtx{A} \vct{x}_{\star}$. \item Pave submatrix $\mtx{A}_{=}$ into 16 blocks with 25 equalities per block by a random partitioning of the rows. \item Set initial approximations $\vct{x}_{0}^{\text{block}} = \vct{x}_{0}^{\text{simp}} = \mtx{A}^*\vct{b}$. \item For each iteration $j$, draw $q$ uniformly at random from $[0,1]$. \begin{enumerate} \item If $q \leq \frac{n_{e}}{n}$, choose a block index uniformly at random from $\{1,...,m\}$ and update the iterate $\vct{x}_{j}^{\text{block}}$ using~\eqref{eqn:block-algorithm}. (Note that the threshold $\frac{n_{e}}{n}$ differs from that given in the main algorithm and theorem, but it is easier to calculate and seems to work well in practice.) \item Else, choose a row uniformly at random from $\{401,...,500\}$ and update the iterate $\vct{x}_{j}^{\text{block}}$ using~\eqref{eqn:ineq-rule}. \item Update the iterate $\vct{x}_{j}^{\text{simp}}$ using~\eqref{eqn:ineq-rule}. \end{enumerate} \end{enumerate} For both the simple and block algorithms, the median, minimum, and maximum values of the residual $\enormsq{e(\mtx{A}\vct{x}_{j}-\vct{b})}$ over the 100 trials are recorded for each iteration $j$. Figure~\ref{fig:plots} compares the performance of the block Kaczmarz method used in this paper and the standard Kaczmarz method described by Leventhal and Lewis \cite{LL10:Randomized-Methods}. The plot in Figure~\ref{fig:plots} (a) compares convergence per iteration. As the block Kaczmarz method enforces multiple equalities per iteration, it is unsurprising that it performs better in this experiment. Figure~\ref{fig:plots} (b) displays the convergence of the two methods per epoch. The block Kaczmarz algorithm has an epoch of $m + n_i$ iterations, and the standard Kaczmarz method has an epoch of size $n$. Here, to be fair, we only count an iteration towards an epoch if the estimated solution $\vct{x}_{j} \neq \vct{x}_{j-1}$. Thus, in the case where a chosen inequality is already satisfied at iteration $j$, this iteration does not count towards an epoch since no computation is being performed. We noticed, however, that whether or not we modified the count in this way, the results remain very similar to those in Figure~\ref{fig:plots}. Once again the experiments yielded faster convergence with the block Kaczmarz approach.
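The experimental loop is compact enough to state in a few lines. The Python/NumPy sketch below (ours; the actual experiments were run in {\sc Matlab}) mirrors the procedure for the block iterate only, with an arbitrary iteration count, the single-row inequality step written in the Leventhal--Lewis form for unit-norm rows, and the error vector $e(\cdot)$ read as the full residual on equality rows and the positive part on inequality rows.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_eq, n_ineq, d = 400, 100, 100                 # dimensions from the experiment
A = rng.standard_normal((n_eq + n_ineq, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)   # standardize the rows
x_star = rng.standard_normal(d)
b = A @ x_star                                  # consistent system
blocks = np.array_split(rng.permutation(n_eq), 16)  # 16 blocks of 25 equalities
x = A.T @ b                                     # initial approximation A^* b

for _ in range(20000):                          # iteration count chosen arbitrarily
    if rng.random() <= n_eq / (n_eq + n_ineq):
        tau = blocks[rng.integers(len(blocks))]      # pick an equality block
        x += np.linalg.lstsq(A[tau], b[tau] - A[tau] @ x, rcond=None)[0]
    else:
        i = n_eq + rng.integers(n_ineq)              # pick an inequality row
        r = float(A[i] @ x - b[i])
        if r > 0:                               # correct only violated inequalities
            x -= r * A[i]                       # rows have unit norm

res = A @ x - b
res[n_eq:] = np.maximum(res[n_eq:], 0.0)        # residual e(Ax - b)
print(np.linalg.norm(res))
\end{verbatim}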
It is interesting to compare the results of Figure~\ref{fig:plots} (b) and those of Figure~\ref{fig:bad} (b) and (c). The per-epoch convergence of the methods, and whether the block or the standard method appears faster, varies slightly and depends on both the number of rows and columns. In general, the per-epoch convergence rates are reasonably comparable, as the analysis suggests. However, Figure~\ref{fig:plots} (c) compares the rate of convergence of the two algorithms by plotting the residual against the CPU time expended in the simulation. We believe that the ability to utilize efficient matrix--vector multiplication gives the block method significantly improved convergence per second relative to the standard Kaczmarz algorithm, although other mechanisms may certainly be at work as well. \begin{figure}[ht] \begin{center} \begin{tabular}{ccc} \includegraphics[scale=0.35]{plot2fin.pdf} & \includegraphics[scale=0.35]{plot3fin.pdf} & \includegraphics[scale=0.35]{plot4fin.pdf} \\ (a) & (b) & (c) \\ \end{tabular} \caption{ Residual error of the Block Kaczmarz Method (solid red) vs. Simple Kaczmarz Method (dashed blue) as a function of (a) Iterations, (b) Epochs, (c) CPU time. Shaded region spans from minimum to maximum value over $100$ trials; lines denote the median value.\label{fig:plots}} \end{center} \end{figure} \section{Conclusion and Related Work}\label{sec:end} The Kaczmarz algorithm was first proposed in \cite{Kac37:Angenaeherte-Aufloesung}. Kaczmarz demonstrated that the method converges to the solution of the linear system $\mtx{A}\vct{x} = \vct{b}$ for a square, non-singular matrix $\mtx{A}$. Since then, the method has been utilized in the context of computed tomography as the \textit{Algebraic Reconstruction Technique} (ART) \cite{GBH70:Algebraic-Reconstruction,Byr08:Applied-Iterative,Nat01:Mathematics-Computerized,herman2009fundamentals}. Empirical results suggested that randomized selection offered improved convergence over the cyclic scheme \cite{HS78:Angles-Null,HM93:Algebraic-Reconstruction}. Strohmer and Vershynin \cite{SV09:Randomized-Kaczmarz} were the first to prove an expected linear convergence rate using a randomized Kaczmarz algorithm with specific random control. This result was extended by Needell \cite{Nee10:Randomized-Kaczmarz} to inconsistent systems, showing linear convergence to within a fixed radius around the least-squares solution. Almost-sure convergence guarantees were recently proved by Chen and Powell \cite{CP12:Almost-Sure-Convergence}. Zouzias and Freris \cite{ZF13:Kaczmarz-Least-Squares} analyze a modified version of the method in the inconsistent case, using a variant motivated by Popa \cite{Pop99:Block-Projections-Algorithms} to reduce the residual and thereby converge to the least squares solution. Relaxation parameters can also be introduced to obtain convergence to the least squares solution, see e.g. \cite{whitney1967two,censor1983strong,tanabe1971projection,hanke1990acceleration}, and partially weighted sampling can lead to a tradeoff between convergence rate and radius \cite{NSW13:SGD-Kaczmarz}. Liu, Wright, and Sridhar \cite{LWS14:Parallel-Kaczmarz} discuss applying a parallelized variant of the randomized Kaczmarz method, demonstrating that the convergence rate can be increased almost linearly by bounding the number of processors by a multiple of the number of rows of $\mtx{A}$. The block Kaczmarz updating method was introduced by Elfving \cite{Elf80:Block-Iterative-Methods} as a special case of the more general framework of Eggermont et al.
\cite{EHL81:Iterative-Algorithms}. The notion of using blocking in projection methods is certainly not new, and there is a large amount of literature on these types of methods; see e.g. \cite{XZ02:Method-Alternating,Byr08:Applied-Iterative} and the references therein. Needell and Tropp \cite{NT12:Block-Kaczmarz} provide the first analysis showing an expected linear convergence rate which depends on the properties of the matrix $\mtx{A}$ and of the submatrices $\mtx{A_{\tau}}$ resulting from the paving, thereby connecting pavings and the block Kaczmarz scheme. The use of specialized blocks appears elsewhere; in particular, the works of Popa use blocks with orthogonal rows that are beneficial for the block Kaczmarz method \cite{Pop99:Block-Projections-Algorithms,Pop01:Fast-Kaczmarz-Kovarik,Pop04:Kaczmarz-Kovarik-Algorithm}. Needell, Zhao, and Zouzias \cite{NZZ14:Block-Least-Squares} expand on the results from \cite{NT12:Block-Kaczmarz} and \cite{ZF13:Kaczmarz-Least-Squares} to demonstrate convergence to the least-squares solution for an inconsistent system using the block Kaczmarz method. Again the block approach can yield faster convergence than the simple method. The Kaczmarz method was first applied to a system of equalities and inequalities by Leventhal and Lewis \cite{LL10:Randomized-Methods}, who also consider polynomial constraints with the method. They give a linear convergence rate to the feasible solution space $S$, using $\fnormsq{\mtx{A}}$ and the Hoffman constant \cite{Hof52:Approximate-Solutions}. We apply the block Kaczmarz scheme to the system described in \cite{LL10:Randomized-Methods}, combining their result with that of Needell and Tropp \cite{NT12:Block-Kaczmarz} to obtain a fully general result. We highlight several important complications which arise when attempting to apply the block scheme to inequalities. Nonetheless, whether a paving is used only partially or for the complete system, a significant reduction in computational time can be achieved. \subsection{Future Work} There are many interesting open problems related to the block Kaczmarz method and linear systems with inequalities. It has been well observed in the literature that selecting rows (or blocks) \textit{without replacement}, rather than with replacement as in the theoretical results, leads to a faster convergence rate empirically \cite{RefWorks:533,NT12:Block-Kaczmarz}. When selecting without replacement, independence between iterations vanishes, making a theoretical analysis more challenging. Secondly, it would be interesting to further investigate the use of obtuse row pavings. In systems with a large number of inequalities, the ability to pave the submatrix $\mtx{A}_\leq$ with an obtuse row paving would lead to significantly faster convergence. In that case, one would like to identify a more general geometric property of the system that permits such pavings, or an alternative formulation that offers convergence of the full block method.
{ "timestamp": "2014-09-04T02:03:14", "yymm": "1406", "arxiv_id": "1406.7339", "language": "en", "url": "https://arxiv.org/abs/1406.7339", "abstract": "The randomized Kaczmarz method is an iterative algorithm that solves overdetermined systems of linear equations. Recently, the method was extended to systems of equalities and inequalities by Leventhal and Lewis. Even more recently, Needell and Tropp provided an analysis of a block version of the method for systems of linear equations. This paper considers the use of a block type method for systems of mixed equalities and inequalities, bridging these two bodies of work. We show that utilizing a matrix paving over the equalities of the system can lead to significantly improved convergence, and prove a linear convergence rate as in the standard block method. We also demonstrate that using blocks of inequalities offers similar improvement only when the system satisfies a certain geometric property. We support the theoretical analysis with several experimental results.", "subjects": "Numerical Analysis (math.NA)", "title": "Block Kaczmarz Method with Inequalities", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9911526430876969, "lm_q2_score": 0.7154239957834733, "lm_q1q2_score": 0.7090943843491508 }
https://arxiv.org/abs/1003.2809
Minimal paths in the commuting graphs of semigroups
Let $S$ be a finite non-commutative semigroup. The commuting graph of $S$, denoted $\cg(S)$, is the graph whose vertices are the non-central elements of $S$ and whose edges are the sets $\{a,b\}$ of vertices such that $a\ne b$ and $ab=ba$. Denote by $T(X)$ the semigroup of full transformations on a finite set $X$. Let $J$ be any ideal of $T(X)$ such that $J$ is different from the ideal of constant transformations on $X$. We prove that if $|X|\geq4$, then, with a few exceptions, the diameter of $\cg(J)$ is 5. On the other hand, we prove that for every positive integer $n$, there exists a semigroup $S$ such that the diameter of $\cg(S)$ is $n$. We also study the left paths in $\cg(S)$, that is, paths $a_1-a_2-\cdots-a_m$ such that $a_1\ne a_m$ and $a_1a_i=a_ma_i$ for all $i\in \{1,\ldots, m\}$. We prove that for every positive integer $n\geq2$, except $n=3$, there exists a semigroup whose shortest left path has length $n$. As a corollary, we use the previous results to solve an old, purely algebraic problem posed by B.M. Schein.
\section{Introduction} \setcounter{equation}{0} The commuting graph of a finite non-abelian group $G$ is a simple graph whose vertices are all non-central elements of $G$ and two distinct vertices $x,y$ are adjacent if $xy=yx$. Commuting graphs of various groups have been studied in terms of their properties (such as connectivity or diameter), for example in \cite{BaBu03}, \cite{Bu06}, \cite{IrJa08}, and \cite{Se01}. They have also been used as a tool to prove group theoretic results, for example in \cite{Be83}, \cite{RaSe01}, and \cite{RaSe02}. The concept of the commuting graph carries over to semigroups. Let $S$ be a finite non-commutative semigroup with center $Z(S)=\{a\in S:ab=ba\mbox{ for all $b\in S$}\}$. The \emph{commuting graph} of $S$, denoted $\cg(S)$, is the simple graph (that is, an undirected graph with no multiple edges or loops) whose vertices are the elements of $S-Z(S)$ and whose edges are the sets $\{a,b\}$ such that $a$ and $b$ are distinct vertices with $ab=ba$. This paper initiates the study of commuting graphs of semigroups. Our main goal is to study the lengths of minimal paths. We shall consider two types of paths: ordinary paths and so called left paths. We first investigate the semigroup $T(X)$ of full transformations on a finite set $X$, and determine the diameter of the commuting graph of every ideal of $T(X)$ (Section~\ref{stx}). We find that, with a few exceptions, the diameter of $\cg(J)$, where $J$ is an ideal of $T(X)$, is $5$. This small diameter does not extend to semigroups in general. We prove that for every $n\geq2$, there is a finite semigroup $S$ whose commuting graph has diameter $n$ (Theorem~\ref{tald}). To prove the existence of such a semigroup, we use our work on the {\em left paths} in the commuting graph of a semigroup. Let $S$ be a semigroup. A path $a_1-a_2-\cdots-a_m$ in $\cg(S)$ is called a \emph{left path} (or $l$-path) if $a_1\ne a_m$ and $a_1a_i=a_ma_i$ for every $i\in\{1,\ldots,m\}$. If there is any $l$-path in $\cg(S)$, we define the \emph{knit degree} of $S$, denoted $\kd(S)$, to be the length of a shortest $l$-path in $\cg(S)$. For every $n\geq2$ with $n\ne3$, we construct a band (semigroup of idempotents) of knit degree $n$ (Section~\ref{smlp}). It is an open problem if there is a semigroup of knit degree $3$. The constructions presented in Section~\ref{smlp} also give a band $S$ whose commuting graph has diameter $n$ (for every $n\geq4$). As another application of our work on the left paths, we settle a conjecture on bands formulated by B.M. Schein in 1978 (Section~\ref{ssch}). Finally, we present some problems regarding the commuting graphs of semigroups (Section~\ref{spro}). \section{Commuting Graphs of Ideals of $T(X)$}\label{stx} \setcounter{equation}{0} Let $T(X)$ be the semigroup of full transformations on a finite set $X$, that is, the set of all functions from $X$ to $X$ with composition as the operation. We will write functions on the right and compose from left to right, that is, for $a,b\in T(X)$ and $x\in X$, we will write $xa$ (not $a(x)$) and $x(ab)=(xa)b$ (not $(ba)(x)=b(a(x))$). In this section, we determine the diameter of the commuting graph of every ideal of $T(X)$. Throughout this section, we assume that $X=\{1,\ldots,n\}$. Let $\Gamma$ be a simple graph, that is, $\Gamma=(V,E)$, where $V$ is a finite non-empty set of vertices and $E\subseteq\{\{u,v\}:u,v\in V, u\ne v\}$ is a set of edges. We will write $u-v$ to mean that $\{u,v\}\in E$. Let $u,w\in V$. 
A \emph{path} in $\Gamma$ from $u$ to $w$ is a sequence of pairwise distinct vertices $u=v_1,v_2,\ldots,v_m=w$ ($m\geq1$) such that $v_i-v_{i+1}$ for every $i\in\{1,\ldots,m-1\}$. If $\lam$ is a path $v_1,v_2,\ldots,v_m$, we will write $\lam=v_1-v_2-\cdots-v_m$ and say that $\lam$ has \emph{length} $m-1$. We say that a path $\lam$ from $u$ to $w$ is a \emph{minimal path} if there is no path from $u$ to $w$ that is shorter than $\lam$. We say that the \emph{distance} between vertices $u$ and $w$ is $k$, and write $d(u,w)=k$, if a minimal path from $u$ to $w$ has length $k$. If there is no path from $u$ to $w$, we say that the distance between $u$ and $w$ is infinity, and write $d(u,w)=\infty$. The maximum distance $\max\{d(u,w):u,w\in V\}$ between vertices of $\Gamma$ is called the \emph{diameter} of $\Gamma$. Note that the diameter of $\Gamma$ is finite if and only if $\Gamma$ is connected. If $S$ is a finite non-commutative semigroup, then the commuting graph $\cg(S)$ is a simple graph with $V=S-Z(S)$ and, for $a,b\in V$, $a-b$ if and only if $a\ne b$ and $ab=ba$. For $a\in T(X)$, we denote by $\ima(a)$ the image of $a$, by $\Ker(a)=\{(x,y)\in X\times X:xa=ya\}$ the kernel of $a$, and by $\rank(a)=|\ima(a)|$ the rank of $a$. It is well known (see \cite[Section~2.2]{ClPr64}) that in $T(X)$ the only element of $Z(T(X))$ is the identity transformation on $X$, and that $T(X)$ has exactly $n$ ideals: $J_1,J_2,\ldots,J_n$, where, for $1\leq r\leq n$, \[ J_r=\{a\in T(X):\rank(a)\leq r\}. \] Each ideal $J_r$ is principal and any $a\in T(X)$ of rank $r$ generates $J_r$. The ideal $J_1$ consists of the transformations of rank $1$ (that is, constant transformations), and it is clear that $\cg(J_1)$ is the graph with $n$ isolated vertices. Let $S$ be a semigroup. We denote by $\cge(S)$ the subgraph of $\cg(S)$ induced by the non-central idempotents of $S$. The graph $\cge(S)$ is said to be the \emph{idempotent commuting graph} of $S$. We first determine the diameter of $\cge(J_r)$. This approach is justified by the following lemma. \begin{lemma}\label{lad1} Let $2\leq r<n$ and let $a,b\in J_r$ be such that $ab\ne ba$. Suppose $a-a_1-a_2-\cdots-a_k-b$ ($k\geq1$) is a minimal path in $\cg(J_r)$ from $a$ to $b$. Then there are idempotents $e_1,e_2,\ldots,e_k\in J_r$ such that $a-e_1-e_2-\cdots-e_k-b$ is a minimal path in $\cg(J_r)$ from $a$ to $b$. \end{lemma} \begin{proof} Since $J_r$ is finite, there is an integer $p\geq1$ such that $e_1=a_1^p$ is an idempotent in $J_r$. Note that $e_1\notin Z(J_r)$ since for any $x\in X-\ima(e_1)$, $e_1$ does not commute with $c_x\in J_r$, where $c_x$ is the constant transformation with $\ima(c_x)=\{x\}$. Since $a_1$ commutes with $a$ and $a_2$, the idempotent $e_1=a_1^p$ also commutes with $a$ and $a_2$, and so $a-e_1-a_2-\cdots-a_k-b$. Repeating the foregoing argument for $a_2,\ldots,a_k$, we obtain idempotents $e_2,\ldots,e_k$ in $J_r$ such that $a-e_1-e_2-\cdots-e_k-b$. Since the path $a-a_1-a_2-\cdots-a_k-b$ is minimal, it follows that $a,e_1,e_2,\ldots,e_k,b$ are pairwise distinct and the path $a-e_1-e_2-\cdots-e_k-b$ is minimal. \end{proof} It follows from Lemma~\ref{lad1} that if $d$ is the diameter of $\cge(J_r)$, then the diameter of $\cg(J_r)$ is at most $d+2$. \subsection{Idempotent Commuting Graphs}\label{sside} In this subsection, we assume that $n\geq3$ and $2\leq r<n$. We will show that, with some exceptions, the diameter of $\cge(J_r)$ is $3$ (Theorem~\ref{tdia}). Let $e\in T(X)$ be an idempotent. 
Then there is a unique partition $\{A_1,A_2,\ldots,A_k\}$ of $X$ and unique elements $x_1\in A_1, x_2\in A_2,\ldots, x_k\in A_k$ such that for every $i$, $A_ie=\{x_i\}$. The partition $\{A_1\ldots,A_k\}$ is induced by the kernel of $e$, and $\{x_1,\ldots,x_k\}$ is the image of $e$. We will use the following notation for $e$: \begin{equation}\label{eide} e=(A_1,x_1\ran(A_2,x_2\ran\ldots(A_k,x_k\ran. \end{equation} Note that $(X,x\ran$ is the constant idempotent with image $\{x\}$. The following result has been obtained in \cite{ArKo03} and \cite{Ko02} (see also \cite{ArKo04}). \begin{lemma}\label{lcen} Let $e=(A_1,x_1\ran(A_2,x_2\ran\ldots(A_k,x_k\ran$ be an idempotent in $T(X)$ and let $b\in T(X)$. Then $b$ commutes with $e$ if and only if for every $i\in\{1,\ldots,k\}$, there is $j\in\{1,\ldots,k\}$ such that $x_ib=x_j$ and $A_ib\subseteq A_j$. \end{lemma} We will use Lemma~\ref{lcen} frequently, not always mentioning it explicitly. The following lemma is an immediate consequence of Lemma~\ref{lcen}. \begin{lemma}\label{ljo1} Let $e,f\in J_r$ be idempotents and suppose there is $x\in X$ such that $x\in\ima(e)\cap\ima(f)$. Then $e-(X,x\ran-f$. \end{lemma} \begin{lemma}\label{ljo2} Let $e,f\in J_r$ be idempotents such that $\ima(e)\cap\ima(f)=\emptyset$. Suppose there is $(x,y)\in\ima(e)\times\ima(f)$ such that $(x,y)\in\Ker(e)\cap\Ker(f)$. Then there is an idempotent $g\in J_r$ such that $e-g-f$. \end{lemma} \begin{proof} Let $e=(A_1,x_1\ran\ldots(A_k,x_k\ran$ and $f=(B_1,y_1\ran\ldots(B_m,y_m\ran$. We may assume that $x=x_1$ and $y=y_1$. Since $(x,y)\in\Ker(e)\cap\Ker(f)$, we have $y\in A_1$ and $x\in B_1$. Let $g=(\ima(e),x\ran(X-\ima(e),y\ran$. Then $g$ is in $J_r$ since $\rank(g)=2$ and $r\geq2$. By Lemma~\ref{lcen}, we have $eg=ge$ (since $y\in A_1$) and $fg=gf$ (since $\ima(f)\subseteq X-\ima(e)$ and $x\in B_1$). Hence $e-g-f$. \end{proof} \begin{lemma}\label{lja1} Let $e,f\in J_r$ be idempotents such that $\ima(e)\cap\ima(f)=\emptyset$. Then there are idempotents $g,h\in J_r$ such that $e-g-h-f$. \end{lemma} \begin{proof} Let $e=(A_1,x_1\ran\ldots(A_k,x_k\ran$ and $f=(B_1,y_1\ran\ldots(B_m,y_m\ran$. Since $\{A_1,\ldots,A_k\}$ is a partition of $X$, there is $i$ such that $y_1\in A_i$. We may assume that $y_1\in A_1$. Let $g=(X-\{y_1\},x_1\ran(\{y_1\},y_1\ran$ and $h=(X,y_1\ran$. Then $g$ and $h$ are in $J_r$ (since $r\geq2$). By Lemma~\ref{lcen}, $eg=ge$, $gh=hg$, and $hf=fh$. Thus $e-g-h-f$. \end{proof} \begin{lemma}\label{lja2} Let $m$ be a positive integer such that $2m\leq n$, $\sigma$ be an $m$-cycle on $\{1,\ldots,m\}$, and \[ e=(A_1,x_1\ran(A_2,x_2\ran\ldots(A_m,x_m\ran\mbox{ and }f=(B_1,y_1\ran(B_2,y_2\ran\ldots(B_m,y_m\ran \] be idempotents in $T(X)$ such that $x_1,\ldots,x_m,y_1,\ldots,y_m$ are pairwise distinct, $y_i\in A_i$, and $x_{i\sigma}\in B_i$ ($1\leq i\leq m)$. Suppose that $g$ is an idempotent in $T(X)$ such that $e-g-f$. Then: \begin{itemize} \item [\rm(1)] $x_jg=x_j$ and $y_jg=y_j$ for every $j\in\{1,\ldots,m\}$. \item [\rm(2)] If $1\leq i,j\leq m$ are such that $A_i=\{x_i,y_i,z\}$, $B_j=\{y_j,x_{j\sigma},z\}$ and $A_i\cap B_j=\{z\}$, then $zg=z$. \end{itemize} \end{lemma} \begin{proof} Since $eg=ge$, $x_1g=x_i$ for some $i$. Then $x_ig=x_i$ (since $g$ is an idempotent). Thus, $e-g-f$ and Lemma~\ref{lcen} imply that $y_ig=y_i$. Since $x_i=x_{(i\sigma^{-1})\sigma}\in B_{i\sigma^{-1}}$ and $g$ commutes with $f$, we have $y_{i\sigma^{-1}}g=y_{i\sigma^{-1}}$. 
But now, since $y_{i\sigma^{-1}}\in A_{i\sigma^{-1}}$ and $g$ commutes with $e$, we have $x_{i\sigma^{-1}}g=x_{i\sigma^{-1}}$. Continuing this way, we obtain $x_{i\sigma^{-k}}g=x_{i\sigma^{-k}}$ and $y_{i\sigma^{-k}}g=y_{i\sigma^{-k}}$ for every $k\in\{1,\ldots,m-1\}$. Since $\sigma$ is an $m$-cycle, it follows that $x_jg=x_j$ and $y_jg=y_j$ for every $j\in\{1,\ldots,m\}$. We have proved (1). Suppose $A_i=\{x_i,y_i,z\}$, $B_j=\{y_j,x_{j\sigma},z\}$, and $A_i\cap B_j=\{z\}$. Then $zg\in\{x_i,y_i,z\}$ (since $x_ig=x_i$ and $eg=ge$) and $zg\in\{y_j,x_{j\sigma},z\}$ (since $y_jg=y_j$ and $fg=gf$). Since $A_i\cap B_j=\{z\}$, we have $zg=z$, which proves (2). \end{proof} \begin{lemma}\label{lja3} Let $n\geq4$. If $n\ne5$ or $r\ne4$, then for some idempotents $e,f\in J_r$, there is no idempotent $g\in J_r$ such that $e-g-f$. \end{lemma} \begin{proof} Let $n\ne5$ or $r\ne4$. Suppose that $r<n-1$ or $n$ is even. Then there is an integer $m$ such that $m\leq r$ and $r<2m\leq n$. Let $e$ and $f$ be idempotents from Lemma~\ref{lja2}. Then $e,f\in J_r$ since $m\leq r$. But every idempotent $g\in T(X)$ such that $e-g-f$ fixes at least $2m$ elements, and so $g\notin J_r$ since $r<2m$. Suppose that $r=n-1$ and $n=2m+1$ is odd. Then $n\geq7$ since we are working under the assumption that $n\ne5$ or $r\ne4$. We again consider idempotents $e$ and $f$ from Lemma~\ref{lja2}, which belong to $J_r$ since $m<n-1=r$. Note that $X=\{x_1,\ldots,x_m,y_1,\ldots,y_m,z\}$. We may assume that $z\in A_m$ and $z\in B_1$. Since $n\geq7$, we have $m\geq3$. Thus, the intersection of $A_m=\{x_m,y_m,z\}$ and $B_1=\{y_1,x_2,z\}$ is $\{z\}$, and so $zg=z$ by Lemma~\ref{lja2}. Hence $g=\id_{X}\notin J_r$, which concludes the proof. \end{proof} \begin{theorem}\label{tdia} Let $n\geq3$ and let $J_r$ be an ideal in $T(X)$ such that $2\leq r<n$. Then: \begin{itemize} \item [\rm(1)] If $n=3$ or $n=5$ and $r=4$, then the diameter of $\cge(J_r)$ is $2$. \item [\rm(2)] In all other cases, the diameter of $\cge(J_r)$ is $3$. \end{itemize} \end{theorem} \begin{proof} Suppose $n=3$ or $n=5$ and $r=4$. In these special cases, we obtained the desired result using GRAPE \cite{So06}, which is a package for GAP \cite{Scel92}. Let $n\geq4$ and suppose that $n\ne5$ or $r\ne4$. By Lemmas~\ref{ljo1} and~\ref{lja1}, the diameter of $\cge(J_r)$ is at most $3$. By Lemma~\ref{lja3}, the diameter of $\cge(J_r)$ is at least $3$. Thus the diameter of $\cge(J_r)$ is $3$, which concludes the proof of~(2). \end{proof} \subsection{Commuting Graphs of Proper Ideals of $T(X)$}\label{ssprop} In this subsection, we determine the diameter of every proper ideal of $T(X)$. The ideal $J_1$ consists of the constant transformations, so $\cg(J_1)$ is the graph with $n$ isolated vertices. Thus $J_1$ is not connected and its diameter is $\infty$. Therefore, for the remainder of this subsection, we assume that $n\geq3$ and $2\leq r<n$. It follows from Lemma~\ref{lad1} and Theorem~\ref{tdia} that the diameter of $\cg(J_r)$ is at most $5$. We will prove that this diameter is in fact $5$ except when $n=3$ or $n\in\{5,6,7\}$ and $r=4$. It also follows from Lemma~\ref{lad1} that if $e$ and $f$ are idempotents in $J_r$, then the distance between $e$ and $f$ in $\cg(J_r)$ is the same as the distance between $e$ and $f$ in $\cge(J_r)$. So no ambiguity will arise when we talk about the distance between idempotents in $J_r$. For $a\in T(X)$ and $x,y\in X$, we will write $x\ara y$ when $xa=y$. \begin{lemma}\label{lad2} Let $a,b\in T(X)$. 
Then $ab=ba$ if and only if for all $x,y\in X$, $x\ara y$ implies $xb\ara yb$. \end{lemma} \begin{proof} Suppose $ab=ba$. Let $x,y\in X$ with $x\ara y$, that is, $y=xa$. Then, since $ab=ba$, we have $yb=(xa)b=x(ab)=x(ba)=(xb)a$, and so $xb\ara yb$. Conversely, suppose $x\ara y$ implies $xb\ara yb$ for all $x,y\in X$. Let $x\in X$. Since $x\ara xa$, we have $xb\ara (xa)b$. But this means that $(xb)a=(xa)b$, which implies $ab=ba$. \end{proof} Let $a\in T(X)$. Suppose $x_1,\ldots,x_m$ are pairwise distinct elements of $X$ such that $x_ia=x_{i+1}$ ($1\leq i<m$) and $x_ma=x_1$. We will then say that $a$ contains a cycle $(x_1\,x_2\ldots\,x_m)$. \begin{lemma}\label{lad0} Let $a\in J_r$ be a transformation containing a unique cycle $(x_1\,x_2\ldots\,x_m)$. Let $e\in J_r$ be an idempotent such that $ae=ea$. Then $x_ie=x_i$ for every $i\in\{1,\ldots,m\}$. \end{lemma} \begin{proof} Since $a$ contains $(x_1\,x_2\ldots\,x_m)$, we have $x_1\ara x_2\ara\cdots\ara x_m\ara x_1$. Thus, by Lemma~\ref{lad2}, \[ x_1e\ara x_2e\ara\cdots\ara x_me\ara x_1e. \] Thus $(x_1e\,x_2e\ldots\,x_me)$ is a cycle in $a$, and is therefore equal to $(x_1\,x_2\ldots\,x_m)$. Hence, for every $i\in\{1,\ldots,m\}$, there exists $j\in\{1,\ldots,m\}$ such that $x_i=x_je$, and so $x_ie=(x_je)e=x_j(ee)=x_je=x_i$. \end{proof} To construct transformations $a,b\in J_r$ such that the distance between $a$ and $b$ is $5$, it will be convenient to introduce the following notation. \begin{nota}\label{ndi5} {\rm Let $x_1,\ldots,x_m,z_1,\ldots,z_p$ be pairwise distinct elements of $X$, and let $s$ be fixed such that $1\leq s<p$. We will denote by \begin{equation}\label{e1ndi5} a=(*\,z_s\ran(z_p\,z_{p-1}\ldots\,z_1\,x_1\ran(x_1\,x_2\ldots\,x_m) \end{equation} the transformation $a\in T(X)$ such that \begin{align} z_pa&=z_{p-1},\,z_{p-1}a=z_{p-2},\ldots,z_2a=z_1,\,z_1a=x_1,\notag\\ x_1a&=x_2,\,x_2a=x_3,\ldots,\,x_{m-1}a=x_m,\,x_ma=x_1,\notag \end{align} and $ya=z_s$ for all other $y\in X$. Suppose $w\in X$ such that $w\notin\{x_1,\ldots,x_m,z_1,\ldots,z_p\}$ and $1\leq t<p$ with $t\ne s$. We will denote by \begin{equation}\label{e2ndi5} b=(*\,z_s\ran(w\,z_t\ran(z_p\,z_{p-1}\ldots\,z_1\,x_1\ran(x_1\,x_2\ldots\,x_m) \end{equation} the transformation $b\in T(X)$ that is defined as $a$ in (\ref{e1ndi5}) except that $wb=z_t$. } \end{nota} \begin{lemma}\label{lad3} Let $a\in J_r$ be the transformation defined in {\rm(\ref{e1ndi5})} such that $m+p>r$. Let $e\in J_r$ be an idempotent such that $ae=ea$. Then: \begin{itemize} \item [\rm(1)] $x_ie=x_i$ for every $i\in\{1,\ldots,m\}$. \item [\rm(2)] $z_je=x_{m-j+1}$ for every $j\in\{1,\ldots,p\}$. \item [\rm(3)] $ye=x_{m-s}$ for every $y\in X-\{x_1,\ldots,x_m,z_1,\ldots,z_p\}$. \end{itemize} (We assume that for every integer $u$, $x_u=x_v$, where $v\in\{1,\ldots,m\}$ and $u\equiv v\pmod m$.) \end{lemma} \begin{proof} Statement (1) follows from Lemma~\ref{lad0}. By the definition of $a$, we have \[ z_p\ara z_{p-1}\ara\cdots\ara z_1\ara x_1. \] Thus, by Lemma~\ref{lad2}, \[ z_pe\ara z_{p-1}e\ara\cdots\ara z_1e\ara x_1e=x_1. \] Since $z_1e\ara x_1$, either $z_1e=x_m$ or $z_1e\notin\{x_1,\ldots,x_m\}$. We claim that the latter is impossible. Indeed, suppose $z_1e\notin\{x_1,\ldots,x_m\}$. Then $z_je\notin\{x_1,\ldots,x_m\}$ for every $j\in\{1,\ldots,p\}$. Thus the set $\{x_1,\ldots,x_m,z_1e,\ldots,z_pe\}$ is a subset of $\ima(e)$ with $m+p$ elements. But this implies that $e\notin J_r$ (since $m+p>r$), which is a contradiction. We proved the claim. Thus $z_1e=x_m$. 
Now, $z_2e\ara z_1e=x_m$, which implies $z_2e=x_{m-1}$. Continuing this way, we obtain $z_3e=x_{m-2},\,z_4e=x_{m-3},\ldots$. (A special argument is required when $j=qm+1$ for some $q\geq1$. Suppose $q=1$, that is, $j=m+1$. Then $z_je\ara z_{j-1}e=z_me=x_1$, and so either $z_je=x_m$ or $z_je=z_1$. But the latter is impossible since we would have $x_m=z_1e=z_j(ee)=z_je=z_1$, which is a contradiction. Hence, for $j=m+1$, we have $z_je=x_m$. Assuming, inductively, that $z_je=x_m$ for $j=qm+1$, we prove by a similar argument that $z_je=x_m$ for $j=(q+1)m+1$.) This concludes the proof of (2). Let $y\in X-\{x_1,\ldots,x_m,z_1,\ldots,z_p\}$. Then $y\ara z_s$, and so $ye\ara z_se=x_{m-s+1}$. Suppose $s$ is not a multiple of $m$. Then $x_{m-s+1}\ne x_1$, and so $ye\ara x_{m-s+1}$ implies $ye=x_{m-s}$. Suppose $s$ is a multiple of $m$. Then $ye\ara x_{m-s+1}=x_1$, and so either $ye=x_m$ or $ye=z_1$. But the latter is impossible since we would have $x_m=z_1e=y(ee)=ye=z_1$, which is a contradiction. Hence, for $s$ that is a multiple of $m$, we have $ye=x_m$, which concludes the proof of (3). \end{proof} The proof of the following lemma is almost identical to the proof of Lemma~\ref{lad3}. \begin{lemma}\label{lad4} Let $b\in J_r$ be the transformation defined in {\rm(\ref{e2ndi5})} such that $m+p>r$. Let $e\in J_r$ be an idempotent such that $be=eb$. Then: \begin{itemize} \item [\rm(1)] $x_ie=x_i$ for every $i\in\{1,\ldots,m\}$. \item [\rm(2)] $z_je=x_{m-j+1}$ for every $j\in\{1,\ldots,p\}$. \item [\rm(3)] $we=x_{m-t}$. \item [\rm(4)] $ye=x_{m-s}$ for every $y\in X-\{x_1,\ldots,x_m,z_1,\ldots,z_p,w\}$. \end{itemize} \end{lemma} \begin{lemma}\label{lad6} Let $n\in\{5,6,7\}$ and $r=4$. Then there are $a,b\in J_4$ such that the distance between $a$ and $b$ in $\cg(J_4)$ is at least $4$. \end{lemma} \begin{proof} Let $a=(*\,4\ran(3\,4\,1\ran(1\,2)$ and $b=(*\,1\ran(2\,1\,3\ran(3\,4)$ (see Notation~\ref{ndi5}). Suppose $e$ and $f$ are idempotents in $J_4$ such that $a-e$ and $f-b$. Then, by Lemma~\ref{lad3}, $e=(\{\ldots,3,1\},1\ran(\{4,2\},2\ran$ and $f=(\{\ldots,2,3\},3\ran(\{1,4\},4\ran$, where ``$\ldots$'' denotes ``$5$'' (if $n=5$), ``$5,6$'' (if $n=6$), and ``$5,6,7$'' (if $n=7$). Then $e$ and $f$ do not commute, and so $d(e,f)\geq2$. Thus $d(a,b)\geq4$ by Lemma~\ref{lad1}. \end{proof} \begin{lemma}\label{lad7} Let $n\in\{6,7\}$ and $r=4$. Let $a\in J_4$ be a transformation that is not an idempotent. Then there is an idempotent $e\in J_4$ commuting with $a$ such that $\rank(e)\ne3$ or $\rank(e)=3$ and $ye^{-1}=\{y\}$ for some $y\in\ima(e)$. \end{lemma} \begin{proof} If $a$ fixes some $x\in X$, then $a$ commutes with $e=(X,x\ran$ of rank $1$. Suppose $a$ has no fixed points. Let $p$ be a positive integer such that $a^p$ is an idempotent. If $a$ contains a unique cycle $(x_1\,x_2)$, then $e=a^p$ has rank $2$. If $a$ contains a unique cycle $(x_1\,x_2\,x_3\,x_4)$ or two cycles $(x_1\,x_2)$ and $(y_1\,y_2)$ with $\{x_1,x_2\}\cap\{y_1,y_2\}=\emptyset$, then $e=a^p$ has rank $4$. Suppose $a$ contains a unique cycle $(x_1\,x_2\,x_3)$. Define $e\in T(X)$ as follows. Set $x_ie=x_i$, $1\leq i\leq3$. Suppose there are $y,z\in X-\{x_1,x_2,x_3\}$ such that $ya=z$ and $za=x_i$ for some $i$. We may assume that $za=x_1$. Define $ze=x_3$ and $ye=x_2$. Let $u$ and $w$ be the two remaining elements in $X$ (only $u$ remains when $n=6$). Since $\rank(a)\leq4$, we have $\{u,w\}a\subseteq\{z,x_1,x_2,x_3\}$. Suppose $ua=wa=z$. Define $ue=x_2$ and $we=x_2$. 
Then $e$ is an idempotent of rank $3$ such that $ae=ea$ and $x_1e^{-1}=\{x_1\}$. Suppose $ua$ or $wa$ is in $\{x_1,x_2,x_3\}$, say $ua\in\{x_1,x_2,x_3\}$. Define $ue=u$, and $we=x_{i-1}$ (if $wa=x_i$), where $x_{i-1}=x_3$ if $i=1$, or $we=x_2$ (if $wa=z$). Then $e$ is an idempotent of rank $4$ such that $ae=ea$. Suppose that for every $y\in X-\{x_1,x_2,x_3\}$, $ya\in\{x_1,x_2,x_3\}$. Select $z\in X-\{x_1,x_2,x_3\}$ and define $ze=z$. For every $y\in X-\{z,x_1,x_2,x_3\}$, define $ye=x_{i-1}$ if $ya=x_i$. Then $e$ is an idempotent of rank $4$ such that $ae=ea$. Since $a\in J_4$, we have exhausted all possibilities, and the result follows. \end{proof} \begin{lemma}\label{lad8} Let $n\in\{6,7\}$ and $r=4$. Then for all $a,b\in J_4$, the distance between $a$ and $b$ in $\cg(J_4)$ is at most $4$. \end{lemma} \begin{proof} Let $a,b\in J_4$. If $a$ or $b$ is an idempotent, then $d(a,b)\leq4$ by Lemma~\ref{lad1} and Theorem~\ref{tdia}. Suppose $a$ and $b$ are not idempotents. By Lemma~\ref{lad7}, there are idempotents $e,f\in J_4$ such that $ae=ea$, $bf=fb$, if $\rank(e)=3$, then $ye^{-1}=\{y\}$ for some $y\in\ima(e)$, and if $\rank(f)=3$, then $yf^{-1}=\{y\}$ for some $y\in\ima(f)$. We claim that there is an idempotent $g\in J_4$ such that $e-g-f$. If $\ima(e)\cap\ima(f)\ne\emptyset$, then such an idempotent $g$ exists by Lemma~\ref{ljo1}. Suppose $\ima(e)\cap\ima(f)=\emptyset$. Then, since $n\in\{6,7\}$, both $\rank(e)+\rank(f)\leq7$. We may assume that $\rank(e)\leq\rank(f)$. There are six possible cases. \vskip 1mm \noindent{\bf Case 1.} $\rank(e)=1$. \vskip 1mm Then $e=(X,x\ran$ for some $x\in X$. Let $y=xf$. Then $(x,y)\in\ima(e)\times\ima(f)$ and $(x,y)\in\Ker(e)\cap\Ker(f)$. Thus, by Lemma~\ref{ljo2}, there is an idempotent $g\in J_4$ such that $e-g-f$. \vskip 1mm \noindent{\bf Case 2.} $\rank(e)=2$ and $\rank(f)=2$. \vskip 1mm We may assume that $e=(A_1,1\ran(A_2,2\ran$ and $f=(B_1,3\ran(B_2,4\ran$. If $\{1,2\}\subseteq B_i$ or $\{3,4\}\subseteq A_i$ for some $i$, then we can find $(x,y)\in\ima(e)\times\ima(f)$ such that $(x,y)\in\Ker(e)\cap\Ker(f)$, and so a desired idempotent $g$ exists by Lemma~\ref{ljo2}. Otherwise, we may assume that $3\in A_1$ and $4\in A_2$. If $1\in B_1$ or $2\in B_2$, then Lemma~\ref{ljo2} can be applied again. So suppose $1\in B_2$ and $2\in B_1$. Now we have \[ e=(\{\ldots,3,1\},1\ran(\{\ldots,4,2\},2\ran\mbox{ and } f=(\{\ldots,2,3\},3\ran(\{\ldots,1,4\},4\ran. \] We define $g\in T(X)$ as follows. Set $xg=x$ for every $x\in\{1,2,3,4\}$. Let $x\in\{5,6,7\}$ ($x\in\{5,6\}$ if $n=6$). If $x\in A_1\cap B_1$, define $xg=3$; if $x\in A_1\cap B_2$, define $xg=1$; if $x\in A_2\cap B_1$, define $xg=2$; finally, if $x\in A_2\cap B_2$, define $xg=4$. Then $g$ is an idempotent of rank $4$ and $e-g-f$. \vskip 1mm \noindent{\bf Case 3.} $\rank(e)=2$ and $\rank(f)=3$. \vskip 1mm We may assume that $e=(A_1,1\ran(A_2,2\ran$ and $f=(B_1,3\ran(B_2,4\ran(B_3,5\ran$. If $\{3,4,5\}\subseteq A_1$ or $\{3,4,5\}\subseteq A_2$, then Lemma~\ref{ljo2} applies. Otherwise, we may assume that $3,4\in A_1$ and $5\in A_2$. If $1\in B_1\cup B_2$ or $2\in B_3$, then Lemma~\ref{ljo2} applies again. So suppose $1\in B_3$ and $2\in B_1\cup B_2$. We may assume that $2\in B_1$. Note that if $z\in\{6,7\}$, then $z$ cannot be in $B_2$ since $z\in B_2$ would imply that there is no $y\in\ima(f)$ such that $yf^{-1}=\{y\}$. So now \[ e=(\{\ldots,3,4,1\},1\ran(\{\ldots,5,2\},2\ran\mbox{ and } f=(\{\ldots,2,3\},3\ran(\{4\},4\ran(\{\ldots,1,5\},5\ran. \] We define $g\in T(X)$ as follows. 
Set $xg=x$ for every $x\in\{1,2,3,5\}$ and $4g=3$. Let $z\in\{6,7\}$. If $z\in A_1\cap B_1$, define $zg=3$; if $z\in A_1\cap B_3$, define $zg=1$; if $z\in A_2\cap B_1$, define $zg=2$; finally, if $z\in A_2\cap B_3$, define $zg=5$. Then $g$ is an idempotent of rank $4$ and $e-g-f$. \vskip 1mm \noindent{\bf Case 4.} $\rank(e)=2$ and $\rank(f)=4$. \vskip 1mm We may assume that $e=(A_1,1\ran(A_2,2\ran$ and $f=(B_1,3\ran(B_2,4\ran(B_3,5\ran(B_4,6\ran$. If $\{3,4,5,6\}\subseteq A_1$ or $\{3,4,5,6\}\subseteq A_2$, then Lemma~\ref{ljo2} applies. Otherwise, we may assume that $3,4,5\in A_1$ and $6\in A_2$ or $3,4\in A_1$ and $5,6\in A_2$. Suppose $3,4,5\in A_1$ and $6\in A_2$. If $1\in B_1\cup B_2\cup B_3$ or $2\in B_4$, then Lemma~\ref{ljo2} applies. So suppose $1\in B_4$, and we may assume that $2\in B_1$. Now we have \begin{align} e&=(\{\ldots,3,4,5,1\},1\ran(\{\ldots,6,2\},2\ran,\notag\\ f&=(\{\ldots,2,3\},3\ran(\{\ldots,4\},4\ran(\{\ldots,5\},5\ran(\{\ldots,1,6\},6\ran.\notag \end{align} We define $g\in T(X)$ as follows. Set $xg=x$ for every $x\in\{1,2,3,6\}$, $4g=3$, and $5g=3$. Define $7g=3$ if $7\in A_1$ and $7\in B_1\cup B_2\cup B_3$; $7g=1$ if $7\in A_1$ and $7\in B_4$; $7g=2$ if $7\in A_2$ and $7\in B_1\cup B_2\cup B_3$; and $7g=6$ if $7\in A_2$ and $7\in B_4$. Then $g$ is an idempotent of rank $4$ and $e-g-f$. The argument in the case when $3,4\in A_1$ and $5,6\in A_2$ is similar. \vskip 1mm \noindent{\bf Case 5.} $\rank(e)=3$ and $\rank(f)=3$. \vskip 1mm Since both $e$ and $f$ have an element in their range whose preimage is the singleton, we may assume that $e=(A_1,1\ran(A_2,2\ran(\{3\},3\ran$ and $f=(B_1,4\ran(B_2,5\ran(\{6\},6\ran$. If $\{1,2\}\subseteq B_i$ or $\{4,5\}\subseteq A_i$ for some $i$, then Lemma~\ref{ljo2} applies. Otherwise, we may assume that $4\in A_1$ and $5\in A_2$. If $1\in B_1$ or $2\in B_2$, then Lemma~\ref{ljo2} applies again. So suppose $1\in B_2$ and $2\in B_1$. So now \[ e=(\{\ldots,4,1\},1\ran(\{\ldots,5,2\},2\ran(\{3\},3\ran\mbox{ and } f=(\{\ldots,2,4\},4\ran(\{\ldots,1,5\},5\ran(\{6\},6\ran. \] We define $g\in T(X)$ as follows. Set $xg=x$ for every $x\in\{1,2,4,5\}$, $3g=1$, and $6g=4$. Define $7g=4$ if $7\in A_1$ and $7\in B_1$; $7g=1$ if $7\in A_1$ and $7\in B_2$; $7g=2$ if $7\in A_2$ and $7\in B_1$; and $7g=5$ if $7\in A_2$ and $7\in B_2$. Then $g$ is an idempotent of rank $4$ and $e-g-f$. \vskip 1mm \noindent{\bf Case 6.} $\rank(e)=3$ and $\rank(f)=4$. \vskip 1mm We may assume that $e=(A_1,1\ran(A_2,2\ran(\{3\},3\ran$ and $f=(B_1,4\ran(B_2,5\ran(B_3,6\ran(\{7\},7\ran$. If $\{4,5,6\}\subseteq A_1$ or $\{4,5,6\}\subseteq A_2$, then Lemma~\ref{ljo2} applies. So we may assume that $4,5\in A_1$ and $6\in A_2$. If $1\in B_1\cup B_2$ or $2\in B_3$, then Lemma~\ref{ljo2} applies again. So we may assume that $1\in B_3$ and $2\in B_1$. So now \begin{align} e&=(\{\ldots,4,5,1\},1\ran(\{\ldots,6,2\},2\ran(\{3\},3\ran,\notag\\ f&=(\{\ldots,2,4\},4\ran(\{\ldots,5\},5\ran(\{\ldots,1,6\},6\ran(\{7\},7\ran.\notag \end{align} We define $g\in T(X)$ as follows. Set $xg=x$ for every $x\in\{1,2,4,6\}$ and $5g=4$. Define $7g=4$ if $7\in A_1$; $7g=6$ if $7\in A_2$; $3g=3$ if $3\in B_1\cup A_2$; and $3g=1$ if $3\in B_3$. Then $g$ is an idempotent of rank $4$ and $e-g-f$. \end{proof} \begin{theorem}\label{tdia2} Let $n\geq3$ and let $J_r$ be an ideal in $T(X)$ such that $2\leq r<n$. Then: \begin{itemize} \item [\rm(1)] If $n=3$ or $n\in\{5,6,7\}$ and $r=4$, then the diameter of $\cg(J_r)$ is $4$. \item [\rm(2)] In all other cases, the diameter of $\cg(J_r)$ is $5$. 
\end{itemize} \end{theorem} \begin{proof} Let $n=3$. Then the diameter of $\cg(J_2)$ is at most $4$ by Lemma~\ref{lad1} and Theorem~\ref{tdia}. On the other hand, consider $a=(3\,1\ran(1\,2)$ and $b=(2\,1\ran(1\,3)$ in $J_2$. Suppose $e$ and $f$ are idempotents in $J_2$ such that $a-e$ and $f-b$. By Lemma~\ref{lad3}, $e=(\{1\},1\ran(\{3,2\},2\ran$ and $f=(\{1\},1\ran(\{2,3\},3\ran$. Then $e$ and $f$ do not commute, and so $d(e,f)\geq2$. Thus $d(a,b)\geq4$ by Lemma~\ref{lad1}, and so the diameter of $\cg(J_2)$ is at least $4$. Let $n\in\{5,6,7\}$ and $r=4$. If $n=5$, then the diameter of $\cg(J_4)$ is at least $4$ (by Lemma~\ref{lad6}) and at most $4$ (by Lemma~\ref{lad1} and Theorem~\ref{tdia}). If $n\in\{6,7\}$, then the diameter of $\cg(J_4)$ is at least $4$ (by Lemma~\ref{lad6}) and at most $4$ (by Lemma~\ref{lad8}). We have proved (1). Let $n\geq4$ and suppose that $n\notin\{5,6,7\}$ or $r\ne4$. Then the diameter of $\cg(J_r)$ is at most $5$ by Lemma~\ref{lad1} and Theorem~\ref{tdia}. It remains to find $a,b\in J_r$ such that the distance between $a$ and $b$ in $\cg(J_r)$ is at least $5$. We consider four possible cases. \vskip 1mm \noindent{\bf Case 1.} $r=2m-1$ for some $m\geq2$. \vskip 1mm Then $2\leq m<r<2m\leq n$. Let $x_1,\ldots,x_m,y_1,\ldots,y_m$ be pairwise distinct elements of $X$. Let \[ a=(*\,y_2\ran(y_1\,y_2\ldots\,y_m\,x_1\ran(x_1\,x_2\ldots\,x_m)\mbox{ and } b=(*\,x_3\ran(x_2\,x_3\ldots\,x_{m-1}\,x_1\,y_1\ran(y_1\,y_2\ldots\,y_m) \] (see Notation~\ref{ndi5}) and note that $a,b\in J_r$ and $ab\ne ba$. Then, by Lemma~\ref{lad1}, there are idempotents $e_1,\ldots,e_k\in J_r$ ($k\geq1$) such that $a-e_1-\cdots-e_k-b$ is a minimal path in $\cg(J_r)$ from $a$ to $b$. By Lemma~\ref{lad3}, \[ e_1=(A_1,x_1\ran(A_2,x_2\ran\ldots(A_m,x_m\ran\mbox{ and }e_k=(B_1,y_1\ran(B_2,y_2\ran\ldots(B_m,y_m\ran, \] where $y_i\in A_i$ ($1\leq i\leq m$), $x_{i+1}\in B_i$ ($1\leq i<m)$, and $x_1\in B_m$. Let $g\in T(X)$ be an idempotent such that $e_1-g-e_k$. By Lemma~\ref{lja2}, $x_jg=x_j$ and $y_jg=y_j$ for every $j\in\{1,\ldots,m\}$. Hence $\rank(g)\geq 2m>r$, and so $g\notin J_r$. It follows that the distance between $e_1$ and $e_k$ is at least $3$, and so the distance between $a$ and $b$ is at least $5$. \vskip 1mm \noindent{\bf Case 2.} $r=2m$ for some $m\geq3$. \vskip 1mm Then $3\leq m<r=2m<n$. Let $x_1,\ldots,x_m,y_1,\ldots,y_m,z$ be pairwise distinct elements of $X$. Let \begin{align} a&=(*\,y_2\ran(z\,y_1\,y_2\ldots\,y_m\,x_1\ran(x_1\,x_2\ldots\,x_m),\notag\\ b&=(*\,x_1\ran(z\,x_3\ran(x_2\,x_3\ldots\,x_m\,x_1\,y_1\ran(y_1\,y_2\ldots\,y_m)\notag \end{align} (see Notation~\ref{ndi5}) and note that $a,b\in J_r$ and $ab\ne ba$. Then, by Lemma~\ref{lad1}, there are idempotents $e_1,\ldots,e_k\in J_r$ ($k\geq1$) such that $a-e_1-\cdots-e_k-b$ is a minimal path in $\cg(J_r)$ from $a$ to $b$. By Lemma~\ref{lad3}, \[ e_1=(A_1,x_1\ran(A_2,x_2\ran\ldots(A_m,x_m\ran\mbox{ and }e_k=(B_1,y_1\ran(B_2,y_2\ran\ldots(B_m,y_m\ran, \] where $y_i\in A_i$ ($1\leq i\leq m$), $x_{i+1}\in B_i$ ($1\leq i<m)$, $x_1\in B_m$, $A_m=\{x_m,y_m,z\}$, and $B_1=\{y_1,x_2,z\}$. Let $g\in T(X)$ be an idempotent such that $e_1-g-e_k$. By Lemma~\ref{lja2}, $x_jg=x_j$ and $y_jg=y_j$ for every $j\in\{1,\ldots,m\}$, and $zg=z$. Hence $\rank(g)\geq 2m+1>r$, and so $g\notin J_r$. It follows that the distance between $e_1$ and $e_k$ is at least $3$, and so the distance between $a$ and $b$ is at least $5$. \vskip 1mm \noindent{\bf Case 3.} $r=4$. 
\vskip 1mm Since we are working under the assumption that $n\notin\{5,6,7\}$ or $r\ne4$, we have $n\notin\{5,6,7\}$. Thus $n\geq8$ (since $r\leq n-1$). Let \[ a=\begin{pmatrix} 1&2&3&4&5&6&7&8&9&\!\!\!\!\ldots\, n\\2&3&4&1&2&3&4&1&1&\!\!\!\!\ldots\, 1 \end{pmatrix}\mbox{ and } b=\begin{pmatrix} 1&2&3&4&5&6&7&8&9&\!\!\!\!\ldots\, n\\5&6&7&8&6&7&8&5&1&\!\!\!\!\ldots\, 1 \end{pmatrix}. \] Note that $a,b\in J_4$, $ab\ne ba$, $(1\,2\,3\,4)$ is a unique cycle in $a$, and $(5\,6\,7\,8)$ is a unique cycle in $b$. By Lemma~\ref{lad1}, there are idempotents $e_1,\ldots,e_k\in J_4$ ($k\geq1$) such that $a-e_1-\cdots-e_k-b$ is a minimal path in $\cg(J_4)$ from $a$ to $b$. By Lemma~\ref{lad0}, $ie_1=i$ and $(4+i)e_k=4+i$ for every $i\in\{1,2,3,4\}$. By Lemma~\ref{lad2}, $5e_1=1$ or $5e_1=5$. But the latter is impossible since with $5e_1=5$ we would have $\rank(e_1)\geq5$. Similarly, we obtain $6e_1=2$, $7e_1=3$, $8e_1=4$, $2e_k=5$, $3e_k=6$, $4e_k=7$, and $1e_k=8$. Let $g\in T(X)$ be an idempotent such that $e_1-g-e_k$. By Lemma~\ref{lja2}, $jg=j$ for every $j\in\{1,\ldots,8\}$. Hence $\rank(g)\geq 8>r$, and so $g\notin J_4$. It follows that the distance between $e_1$ and $e_k$ is at least $3$, and so the distance between $a$ and $b$ is at least~$5$. \vskip 1mm \noindent{\bf Case 4.} $r=2$. \vskip 1mm In this case we let \[ a=\begin{pmatrix} 1&2&3&4&5&\!\!\!\!\ldots\, n\\2&1&2&1&1&\!\!\!\!\ldots\, 1 \end{pmatrix}\mbox{ and } b=\begin{pmatrix} 1&2&3&4&5&\!\!\!\!\ldots\, n\\3&4&4&3&3&\!\!\!\!\ldots\, 3 \end{pmatrix}. \] Note that $a,b\in J_2$, $ab\ne ba$, $(1\,2)$ is a unique cycle in $a$, and $(3\,4)$ is a unique cycle in $b$. By Lemma~\ref{lad1}, there are idempotents $e_1,\ldots,e_k\in J_2$ ($k\geq1$) such that $a-e_1-\cdots-e_k-b$ is a minimal path in $\cg(J_2)$ from $a$ to $b$. By Lemma~\ref{lad0}, $1e_1=1$, $2e_1=2$, $3e_k=3$, and $4e_k=4$. By Lemma~\ref{lad2}, $3e_1=1$ or $3e_1=3$. But the latter is impossible since with $3e_1=3$ we would have $\rank(e_1)\geq3$. Again By Lemma~\ref{lad2}, $4e_1=2$ or $4e_1=y$ for some $y\in\{4,5,\ldots,n\}$. But the latter is impossible since we would have $ye_1=y$ and again $\rank(e_1)$ would be at least $3$. Similarly, we obtain $2e_k=3$, and $1e_k=4$. Let $g\in T(X)$ be an idempotent such that $e_1-g-e_k$. By Lemma~\ref{lja2}, $jg=j$ for every $j\in\{1,\ldots,4\}$. Hence $\rank(g)\geq 4>r$, and so $g\notin J_2$. It follows that the distance between $e_1$ and $e_k$ is at least $3$, and so the distance between $a$ and $b$ is at least~$5$. Thus the diameter of $\cg(J_r)$ is at least $5$, which concludes the proof of (2). \end{proof} \subsection{The Commuting Graph of $T(X)$}\label{ssctx} Let $X$ be a finite set with $|X|=n$. It has been proved in \cite[Theorem~3.1]{IrJa08} that if $n$ and $n-1$ are not prime, then the diameter of the commuting graph of $\sym(X)$ is at most $5$, and that the bound is sharp since the diameter of $\cg(\sym(X))$ is $5$ when $n=9$. In this subsection, we determine the exact value of the diameter of the commuting graph of $T(X)$ for every $n\geq2$. Throughout this subsection, we assume that $X$ is a finite set with $n\geq2$ elements. \begin{lemma}\label{ltx} Let $n\geq4$ be composite. Let $a,f\in T(X)$ such that $a,f\ne\id_X$, $a\in\sym(X)$, and $f$ is an idempotent. Then $d(a,f)\leq4$. \end{lemma} \begin{proof} Fix $x\in\ima(f)$ and a cycle $(x_1\ldots x_m)$ of $a$ such that $x\in\{x_1,\ldots,x_m\}$. Consider three cases. \vskip 1mm \noindent{\bf Case 1.} $a$ has a cycle $(y_1\ldots y_k)$ such that $k$ does not divide $m$. 
\vskip 1mm Then $a^m$ is different from $\id_X$ and it fixes $x$. Thus $a-a^m-(X,x\ran-f$, and so $d(a,f)\leq3$. \vskip 1mm \noindent{\bf Case 2.} $a$ has at least two cycles and for every cycle $(y_1\ldots y_k)$ of $a$, $k$ divides $m$. \vskip 1mm Suppose there is $z\in\ima(f)$ such that $z\in\{y_1,\ldots,y_k\}$ for some cycle $(y_1\ldots y_k)$ of $a$ different from $(x_1\ldots x_m)$. Since $k$ divides $m$, there is a positive integer $t$ such that $m=tk$. Define $e\in T(X)$ by: \begin{equation}\label{eltx1} x_1e=y_1,\ldots,x_ke=y_k,\,x_{k+1}e=y_1,\ldots,x_{2k}e=y_k,\ldots,x_{(t-1)k+1}e=y_1,\ldots,x_{tk}e=y_k, \end{equation} and $ye=y$ for all other $y\in X$. Then $e$ is an idempotent such that $ae=ea$ and $z\in\ima(e)$. Thus, by Lemma~\ref{ljo1}, $a-e-(X,z\ran-f$, and so $d(a,f)\leq3$. Suppose that $\ima(f)\subseteq\{x_1,\ldots,x_m\}$. Consider any cycle $(y_1\ldots y_k)$ of $a$ different from $(x_1\ldots x_m)$. Since $\ima(f)\subseteq\{x_1,\ldots,x_m\}$, $y_1f=x_i$ for some $i$. We may assume that $y_1f=x_1$. Define an idempotent $e$ exactly as in (\ref{eltx1}). Then $\ima(e)\cap\ima(f)=\emptyset$, $(y_1,x_1)\in\ima(e)\times\ima(f)$, and $(y_1,x_1)\in\Ker(e)\cap\Ker(f)$. Thus, by Lemma~\ref{ljo2}, there is an idempotent $g\in T(X)-\{\id_X\}$ such that $e-g-f$. Hence $a-e-g-f$, and so $d(a,f)\leq3$. \vskip 1mm \noindent{\bf Case 3.} $a$ is an $n$-cycle. \vskip 1mm Since $n$ is composite, there is a divisor $k$ of $n$ such that $1<k<n$. Then $a^k\ne\id_X$ is a permutation with $k\geq2$ cycles, each of length $m=n/k$. By Case~2, $d(a^k,f)\leq3$, and so $d(a,f)\leq4$. \end{proof} \begin{lemma}\label{ltx1} Let $n\geq4$ be composite. Let $a,b\in T(X)$ such that $a,b\ne\id_X$ and $a\in\sym(X)$. Then $d(a,b)\leq5$. \end{lemma} \begin{proof} Suppose $b\notin\sym(X)$. Then $b^k$ is an idempotent different from $\id_X$ for some $k\geq1$. By Lemma~\ref{ltx}, $d(a,b^k)\leq4$, and so $d(a,b)\leq5$. Suppose $b\in\sym(X)$. Suppose $n-1$ is not prime. Then, by \cite[Theorem~3.1]{IrJa08}, there is a path from $a$ to $b$ in $\cg(\sym(X))$ of length at most $5$. Such a path is also a path in $\cg(T(X))$, and so $d(a,b)\leq5$. Suppose $p=n-1$ is prime. Then the proof of \cite[Theorem~3.1]{IrJa08} still works for $a$ and $b$ unless $a^p=\id_X$ or $b^p=\id_X$. (See also \cite[Lemma~3.3]{IrJa08} and its proof.) Thus, if $a^p\ne\id_X$ and $b^p\ne\id_X$, then there is a path from $a$ to $b$ in $\cg(\sym(X))$ of length at most $5$, and so $d(a,b)\leq5$. Suppose $a^p=\id_X$ or $b^p=\id_X$. We may assume that $b^p=\id_X$. Then $b$ is a cycle of length $p$, that is, $b=(x_1\ldots x_p)(x)$. Thus $b$ commutes with the constant idempotent $f=(X,x\ran$. By Lemma~\ref{ltx}, $d(a,f)\leq4$, and so $d(a,b)\leq5$. \end{proof} \begin{lemma}\label{ltx2} Let $X=\{x_1,\ldots,x_m,y_1,\ldots,y_k\}$, $a\in\sym(X)$, and $b=(y_1\ldots y_k\,x_1\ran(x_1\ldots x_m)$. If $ab=ba$ then $a=\id_X$. \end{lemma} \begin{proof} Suppose $ab=ba$. By Lemma~\ref{lad2}, \begin{equation}\label{e1ltx2} x_1a\arb x_2a\arb\cdots\arb x_ma\arb x_1a\quad\mbox{and}\quad y_1a\arb y_2a\arb\cdots\arb y_ka\arb x_1a. \end{equation} Since $(x_1\,x_2\ldots\,x_m)$ is a unique cycle in $b$, (\ref{e1ltx2}) implies that \begin{equation}\label{e2ltx2} x_1a=x_q,\, x_2a=x_{q+1},\ldots,\, x_ma=x_{q+m-1}, \end{equation} where $q\in\{1,\ldots,m\}$ ($x_{q+i}=x_{q+i-m}$ if $q+i>m$). Thus $x_1a=x_j$ for some $j$. Since $y_k\arb x_1$ and $x_m\arb x_1$, we have $y_ka\arb x_1a=x_j$ and $x_ma\arb x_1a=x_j$. Suppose $j\geq2$. Then $x_jb^{-1}=\{x_{j-1}\}$, and so $y_ka=x_{j-1}=x_ma$. 
But this implies $y_k=x_m$ (since $a$ is injective), which is a contradiction. Hence $j=1$, and so $x_1a=x_1$. But then $x_ia=x_i$ for all $i$ by (\ref{e2ltx2}). Since $y_ka\arb x_1a=x_1$, we have $y_ka=y_k$ since $x_1b^{-1}=\{y_k,x_m\}$. Let $i\in\{1,\ldots,k-1\}$ and suppose $y_{i+1}a=y_{i+1}$. Then $y_ia=y_i$ since $y_ia\arb y_{i+1}a=y_{i+1}$ and $y_{i+1}b^{-1}=\{y_{i+1}\}$. It follows that $y_ia=y_i$ for all $i\in\{1,\ldots,k\}$. \end{proof} \begin{lemma}\label{ltx3} Let $m$ be a positive integer such that $2m\leq n$, $\sigma$ be an $m$-cycle on $\{1,\ldots,m\}$, $a\in\sym(X)$, and \[ e=(A_1,x_1\ran(A_2,x_2\ran\ldots(A_m,x_m\ran\mbox{ and }f=(B_1,y_1\ran(B_2,y_2\ran\ldots(B_m,y_m\ran \] be idempotents in $T(X)$ such that $x_1,\ldots,x_m,y_1,\ldots,y_m$ are pairwise distinct, $y_i\in A_i$, and $x_{i\sigma}\in B_i$ ($1\leq i\leq m)$. Then: \begin{itemize} \item [\rm(1)] Suppose $X=\{x_1,\ldots,x_m,y_1,\ldots,y_m,z\}$ and $z\in A_i\cap B_j$ such that $A_i\cap B_j=\{z\}$. If $e-a-f$, then $a=\id_X$. \item [\rm(2)] Suppose $X=\{x_1,\ldots,x_m,y_1,\ldots,y_m,z,w\}$, $z\in A_i\cap B_j$ such that $A_i\cap B_j=\{z\}$, and $w\in A_s\cap B_t$ such that $A_s\cap B_t=\{w\}$, where $s\ne i$ and $t\ne j$. If $e-a-f$, then $a=\id_X$. \end{itemize} \end{lemma} \begin{proof} To prove (1), suppose $e-a-f$ and note that $A_i=\{x_i,y_i,z\}$ and $B_j=\{y_j,x_{j\sigma},z\}$. By Lemma~\ref{lcen}, there is $p\in\{1,\ldots,m\}$ such that $x_ia=x_p$ and $A_ia\subseteq A_p$. Suppose $p\ne i$. Then $A_p=\{x_p,y_p\}$, and so $A_ia$ cannot be a subset of $A_p$ since $a$ is injective. It follows that $p=i$, that is, $x_ia=x_i$ and $A_ia\subseteq A_i$. Similarly, $y_ja=y_j$ and $B_ja\subseteq B_j$. Thus $za\in A_i\cap B_j=\{z\}$, and so $za=z$. Hence, since $a$ is injective, $y_ia=y_i$. We have proved that $x_ia=x_i$, $y_ia=y_i$, and $za=z$. We have $B_i=\{y_i,x_{i\sigma}\}$ or $B_i=\{y_i,x_{i\sigma},z\}$ Since $y_ia=y_i$, we have $B_ia\subseteq B_i$ by Lemma~\ref{lcen}. Since $za=z$ and $a$ is injective, it follows that $x_{i\sigma}a=x_{i\sigma}$. By the foregoing argument applied to $A_{i\sigma}=\{x_{i\sigma},y_{i\sigma}\}$, we obtain $y_{i\sigma}a=y_{i\sigma}$. Continuing this way, we obtain $x_{i\sigma^k}a=x_{i\sigma^k}$ and $y_{i\sigma^k}a=y_{i\sigma^k}$ for every $k\in\{1,\ldots,m-1\}$. Since $\sigma$ is an $m$-cycle, it follows that $x_ja=x_j$ and $y_jg=y_j$ for every $j\in\{1,\ldots,m\}$. Hence $a=\id_X$. We have proved (1). The proof of (2) is similar. \end{proof} \begin{theorem}\label{tdia3} Let $X$ be a finite set with $n\geq2$ elements. Then: \begin{itemize} \item [\rm(1)] If $n$ is prime, then $\cg(T(X))$ is not connected. \item [\rm(2)] If $n=4$, then the diameter of $\cg(T(X))$ is $4$. \item [\rm(3)] If $n\geq6$ is composite, then the diameter of $\cg(T(X))$ is $5$. \end{itemize} \end{theorem} \begin{proof} Suppose $n=p$ is prime. Consider a $p$-cycle $a=(x_1\,x_2\ldots x_p)$ and let $b\in T(X)$ be such that $b\ne\id_X$ and $ab=ba$. Let $x_q=x_1b$. Then, by Lemma~\ref{lad2}, $x_ib=x_{q+i}$ for every $i\in\{1,\ldots,p\}$ (where $x_{q+i}=x_{q+i-m}$ if $q+i>m$). Thus $b=a^q$, and so, since $p$ is prime, $b$ is also a $p$-cycle. It follows that if $c$ is a vertex of $\cg(T(X))$ that is not a $p$-cycle, then there is no path in $\cg(T(X))$ from $a$ to $c$. Hence $\cg(T(X))$ is not connected. We have proved (1). We checked the case $n=4$ directly using GRAPE \cite{So06} through GAP \cite{Scel92}. We found that, when $|X|=4$, the diameter of $\cg(T(X))$ is $4$. Suppose $n\geq6$ is composite. 
Let $a,b\in T(X)$ such that $a,b\ne\id_X$. If $a\in\sym(X)$ or $b\in\sym(X)$, then $d(a,b)\leq5$ by Lemma~\ref{ltx1}. If $a,b\notin\sym(X)$, then $a,b\in J_{n-1}$, and so $d(a,b)\leq5$ by Theorem~\ref{tdia2}. Hence the diameter of $\cg(T(X))$ is at most $5$. It remains to find $a,b\in T(X)-\{\id_X\}$ such that $d(a,b)\geq5$. For $n\in\{6,8\}$, we employed GAP \cite{Scel92}. When $n=6$, we found that the distance between the $6$-cycle $a=(1\,2\,3\,4\,5\,6)$ and $b=\begin{pmatrix} 1&2&3&4&5&6\\2&3&5&1&2&4\end{pmatrix}$ in $\cg(T(X))$ is at least $5$. And when $n=8$, the distance between the $8$-cycle $a=(1\,2\,3\,4\,5\,6\,7\,8)$ and $b=\begin{pmatrix} 1&2&3&4&5&6&7&8\\2&3&1&1&4&8&6&5\end{pmatrix}$ in $\cg(T(X))$ is at least $5$. To verify this with GAP, we used the following sequence of arguments and computer calculations: \begin{enumerate} \item By Lemma \ref{lad1}, if there exists a path $a-c_1-c_2-\ldots-c_k-b$, then there exists a path $a-e_1-e_2-\ldots-e_k-b$, where each $e_i$ is either an idempotent or a permutation; \item Let $E$ be the set idempotents of $T(X)-\{\id_X\}$ and let $G=\sym(X)-\{\id_X\}$. For $A\subseteq T(X)$, let $C(A)=\{f\in E\cup G:(\exists_{a\in A}) af=fa\}$; \item Calculate $C(C(\{a\}))$ and $C(\{b\})$; \item Verify that for all $c\in C(C(\{a\}))$ and all $d\in C(\{b\})$, $cd\neq dc$; \item If there were a path $a-c_1-c_2-c_3-b$ from $a$ to $b$, then we would have $c_2\in C(C(\{a\}))$, $c_3 \in C(\{b\})$, and $c_2c_3=c_3c_2$. But, by 4., there are no such $c_2$ and $c_3$, and it follows that the distance between $a$ and $b$ is at least $5$. \end{enumerate} Let $n\geq9$ be composite. We consider two cases. \vskip 1mm \noindent{\bf Case 1.} $n=2m+1$ is odd ($m\geq4$). \vskip 1mm Let $X=\{x_1,\ldots,x_m,y_1,\ldots,y_m,z\}$. Consider \[ a=(z\,y_1\,y_2\ldots\,y_m\,x_1\ran(x_1\,x_2\ldots\,x_m)\mbox{ and } b=(x_2\,x_3\ldots\,x_m\,x_1\,z\,y_2\ran(y_1\,y_2\ldots\,y_m). \] Let $\lam$ be a minimal path in $\cg(T(X))$ from $a$ to $b$. By Lemma~\ref{ltx2}, there is no $g\in\sym(X)$ such that $g\ne\id_X$ and $ag=ga$ or $bg=gb$. Thus, by the proof of Lemma~\ref{lad1}, $\lam=a-e_1-\cdots-e_k-b$, where $e_1$ and $e_k$ are idempotents. By Lemma~\ref{lad3}, \[ e_1=(A_1,x_1\ran(A_2,x_2\ran\ldots(A_m,x_m\ran\mbox{ and }e_k=(B_1,y_1\ran(B_2,y_2\ran\ldots(B_m,y_m\ran, \] where $y_i\in A_i$ ($1\leq i\leq m$), $x_{i+1}\in B_i$ ($1\leq i<m)$, $x_1\in B_m$, $A_m=\{x_m,y_m,z\}$, and $B_1=\{y_1,x_2,z\}$. Since $m\geq4$, $A_m\cap B_1=\{z\}$. Thus, by Lemma~\ref{ltx3}, there is no $g\in\sym(X)$ such that $g\ne\id_X$ and $e_1-g-e_k$. Hence, if $\lam$ contains an element $g\in\sym(X)$, then the length of $\lam$ is at least $5$. Suppose $\lam$ does not contain any permutations. Then $\lam$ is a path in $J_{n-1}$ and we may assume that all vertices in $\lam$ except $a$ and $b$ are idempotents (by Lemma~\ref{lad3}). By Lemma~\ref{lja2}, there is no idempotent $f\in J_{n-1}$ such that $e_1-f-e_k$. (Here, the $m$-cycle that occurs in Lemmas~\ref{lja2} and~\ref{ltx3} is $\sigma=(1\,2\ldots m)$.) Hence the length of $\lam$ is at least $5$. \vskip 1mm \noindent{\bf Case 2.} $n=2m+2$ is even ($m\geq4$). \vskip 1mm Let $X=\{x_1,\ldots,x_m,y_1,\ldots,y_m,z,w\}$. Consider \[ a=(z\,y_1\,y_2\ldots\,y_m\,w\,x_2\ran(x_1\,x_2\ldots\,x_m)\mbox{ and } b=(w\,x_2\,x_3\ldots\,x_{m-2}\,x_m\,x_1\,x_{m-1}\,y_2\ran(y_1\,y_2\ldots\,y_m). \] Let $\lam$ be a minimal path in $\cg(T(X))$ from $a$ to $b$. By Lemma~\ref{ltx2}, there is no $g\in\sym(X)$ such that $g\ne\id_X$ and $ag=ga$ or $bg=gb$. 
Thus, by the proof of Lemma~\ref{lad1}, $\lam=a-e_1-\cdots-e_k-b$, where $e_1$ and $e_k$ are idempotents. By Lemma~\ref{lad3}, \[ e_1=(A_1,x_1\ran(A_2,x_2\ran\ldots(A_m,x_m\ran\mbox{ and }e_k=(B_1,y_1\ran(B_2,y_2\ran\ldots(B_m,y_m\ran, \] where $y_i\in A_i$ ($1\leq i\leq m$), $x_{i+1}\in B_i$ ($1\leq i\leq m-3$), $x_m\in B_{m-2}$, $x_1\in B_{m-1}$, $x_{m-1}\in B_m$, $A_1=\{x_1,y_1,w\}$, $A_m=\{x_m,y_m,z\}$, $B_1=\{y_1,x_2,z\}$, and $B_m=\{y_m,x_{m-1},w\}$. Since $m\geq4$, $A_m\cap B_1=\{z\}$ and $A_1\cap B_m=\{w\}$. Thus, by Lemma~\ref{ltx3}, there is no $g\in\sym(X)$ such that $g\ne\id_X$ and $e_1-g-e_k$. Hence, as in Case 1, the length of $\lam$ is at least $5$. (Here, the $m$-cycle that occurs in Lemmas~\ref{lja2} and~\ref{ltx3} is $\sigma=(1,2\ldots, m-3,m-2,m,m-1)$.) Hence, if $n\geq6$ is composite, then the diameter of $\cg(T(X))$ is $5$. This concludes the proof. \end{proof} \section{Minimal Left Paths}\label{smlp} \setcounter{equation}{0} In this section, we prove that for every integer $n\geq4$, there is a band $S$ with knit degree $n$. We will show how to construct such an $S$ as a subsemigroup of $T(X)$ for some finite set $X$. Let $S$ be a finite non-commutative semigroup. Recall that a path $a_1-a_2-\cdots-a_m$ in $\cg(S)$ is called a \emph{left path} (or $l$-path) if $a_1\ne a_m$ and $a_1a_i=a_ma_i$ for every $i\in\{1,\ldots,m\}$. If there is any $l$-path in $\cg(S)$, we define the \emph{knit degree} of $S$, denoted $\kd(S)$, to be the length of a shortest $l$-path in $\cg(S)$. We say that an $l$-path $\lam$ from $a$ to $b$ in $\cg(S)$ is a \emph{minimal $l$-path} if there is no $l$-path from $a$ to $b$ that is shorter than $\lam$. \subsection{The Even Case}\label{sseven} In this subsection, we will construct a band of knit degree $n$ where $n\geq4$ is even. For $x\in X$, we denote by $c_x$ the constant transformation with image $\{x\}$. The following lemma is obvious. \begin{lemma}\label{lcon} Let $c_x,c_y,e\in T(X)$ be such that $e$ is an idempotent. Then: \begin{itemize} \item [\rm(1)] $c_xe=ec_x$ if and only if $x\in\ima(e)$. \item [\rm(2)] $c_xe=c_ye$ if and only if $(x,y)\in\Ker(e)$. \end{itemize} \end{lemma} Now, given an even $n\geq4$, we will construct a band $S$ such that $\kd(S)=n$. We will explain the construction using $n=8$ as an example. The band $S$ will be a subsemigroup of $T(X)$, where \[ X=\{y_0,y_1,y_2,y_3,y_4=v_0,v_1,v_2,v_3,v_4,x_1,x_2,x_3,x_4,u_1,u_2,u_3,u_4,r,s\}, \] and it will be generated by idempotent transformations $a_1,a_2,a_3,a_4,b_1,b_2,b_3,b_4,e_1$, whose images are defined by Table~1. \begin{table}[h] \[ \begin{tabular}{|c|ccc|}\hline $\ima(a_1)$ & $y_0$ & $x_1$ & $y_1$ \\\hline $\ima(a_2)$ & $y_1$ & $x_2$ & $y_2$ \\\hline $\ima(a_3)$ & $y_2$ & $x_3$ & $y_3$ \\\hline $\ima(a_4)$ & $y_3$ & $x_4$ & $y_4$ \\\hline $\ima(b_1)$ & $y_4$ & $u_1$ & $v_1$ \\\hline $\ima(b_2)$ & $v_1$ & $u_2$ & $v_2$ \\\hline $\ima(b_3)$ & $v_2$ & $u_3$ & $v_3$ \\\hline $\ima(b_4)$ & $v_3$ & $u_4$ & $v_4$ \\\hline $\ima(e_1)$ & $v_4$ & $r$ & $s$ \\\hline \end{tabular} \] \caption{Images of the generators.} \end{table} We will define the kernels in such a way that the generators with the same subscript will have the same kernel. For example, $\Ker(a_1)=\Ker(b_1)=\Ker(e_1)$ and $\Ker(a_2)=\Ker(b_2)$. Let $i\in\{2,3,4\}$.
The kernel of $a_i$ will have the following three classes (elements of the partition $X/\Ker(a_i)$): \begin{align} \mbox{Class-1}&=\ima(a_{i+1})\cup\ldots\cup\ima(a_4)\cup\ima(b_1)\cup\ldots\cup\ima(b_{i-1}),\notag\\ \mbox{Class-2}&=\ima(b_{i+1})\cup\ldots\cup\ima(b_4)\cup\ima(e_1)\cup\ima(a_1)\cup\ldots\cup\ima(a_{i-1}),\notag\\ \mbox{Class-3}&=\{x_i,u_i\}.\notag \end{align} For example, $\Ker(a_2)$ has the following classes: \begin{align} \mbox{Class-1}&=\{y_2,x_3,y_3,x_4,y_4,u_1,v_1\},\notag\\ \mbox{Class-2}&=\{v_2,u_3,v_3,u_4,v_4,r,s,y_0,x_1,y_1\},\notag\\ \mbox{Class-3}&=\{x_2,u_2\}.\notag \end{align} We define the kernel of $a_1$ as follows: \begin{align} \mbox{Class-1}&=\ima(a_2)\cup\ima(a_3)\cup\ima(a_4)\cup\{s\}=\{y_1,x_2,y_2,x_3,y_3,x_4,y_4,s\},\notag\\ \mbox{Class-2}&=\ima(b_2)\cup\ima(b_3)\cup\ima(b_4)\cup\{y_0\}=\{v_1,u_2,v_2,u_3,v_3,u_4,v_4,y_0\},\notag\\ \mbox{Class-3}&=\{x_1,u_1,r\}.\notag \end{align} Now the generators are completely defined since $\Ker(b_i)=\Ker(a_i)$, $1\leq i\leq 4$, and $\Ker(e_1)=\Ker(a_1)$. Order the generators as follows: \begin{equation}\label{elist1} a_1,\,a_2,\,a_3,\,a_4,\,b_1,\,b_2,\,b_3,\,b_4,\,e_1. \end{equation} Let $S$ be the semigroup generated by the idempotents listed in (\ref{elist1}). Since the idempotents with the same subscript have the same kernel, they form a right-zero subsemigroup of $S$. For example, $\{a_1,b_1,e_1\}$ is a right-zero semigroup: $a_1a_1=b_1a_1=e_1a_1=a_1$, $a_1b_1=b_1b_1=e_1b_1=b_1$, and $a_1e_1=b_1e_1=e_1e_1=e_1$. The product of any two generators with different indices is a constant transformation. For example, $a_2a_4=c_{y_3}$, $a_4a_2=c_{y_2}$, and $a_1b_3=c_{v_3}$. The semigroup $S$ consists of the nine generators listed in (\ref{elist1}) and $10$ constants: \[ S=\{a_1,a_2,a_3,a_4,b_1,b_2,b_3,b_4,e_1,c_{y_0},c_{y_1},c_{y_2},c_{y_3},c_{y_4},c_{v_1},c_{v_2},c_{v_3},c_{v_4},c_s\}, \] so $S$ is a band. Note that $Z(S)=\emptyset$. Each idempotent in (\ref{elist1}) commutes with the next idempotent, so $a_1-a_2-a_3-a_4-b_1-b_2-b_3-b_4-e_1$ is a path in $\cg(S)$. Moreover, it is a unique $l$-path in $\cg(S)$, so $\kd(S)=8$. We will now provide a general construction of a band $S$ such that $\kd(S)=n$, where $n$ is even. \begin{defi}\label{dco1} {\rm Let $k\geq2$ be an integer. Let \[ X=\{y_0,y_1,\ldots,y_k=v_0,v_1,\ldots,v_k,x_1,\ldots,x_k,u_1,\ldots,u_k,r,s\}. \] We will define idempotents $a_1,\ldots,a_k,b_1,\ldots,b_k,e_1$ as follows. For $i\in\{1,\ldots,k\}$, let \begin{align} \ima(a_i)&=\{y_{i-1},x_i,y_i\},\notag\\ \ima(b_i)&=\{v_{i-1},u_i,v_i\},\notag\\ \ima(e_1)&=\{v_k,r,s\}.\notag \end{align} For $i\in\{2,\ldots,k\}$, define the $\Ker(a_i)$-classes by: \begin{align} \mbox{Class-1}&=\ima(a_{i+1})\cup\ldots\cup\ima(a_k)\cup\ima(b_1)\cup\ldots\cup\ima(b_{i-1}),\notag\\ \mbox{Class-2}&=\ima(b_{i+1})\cup\ldots\cup\ima(b_k)\cup\ima(e_1)\cup\ima(a_1)\cup\ldots\cup\ima(a_{i-1}),\notag\\ \mbox{Class-3}&=\{x_i,u_i\}.\notag \end{align} (Note that for $i=k$, $\mbox{Class-1}=\ima(b_1)\cup\ldots\cup\ima(b_{k-1})$ and $\mbox{Class-2}=\ima(e_1)\cup\ima(a_1)\cup\ldots\cup\ima(a_{i-1})$.) Define the $\Ker(a_1)$-classes by: \begin{align} \mbox{Class-1}&=\ima(a_2)\cup\ldots\cup\ima(a_k)\cup\{s\},\notag\\ \mbox{Class-2}&=\ima(b_2)\cup\ldots\cup\ima(b_k)\cup\{y_0\},\notag\\ \mbox{Class-3}&=\{x_1,u_1,r\}.\notag \end{align} Let $\Ker(b_i)=\Ker(a_i)$ for every $i\in\{1,\ldots,k\}$, and $\Ker(e_1)=\Ker(a_1)$. 
Now, define the subsemigroup $S_0^k$ of $T(X)$ by: \begin{equation}\label{edco1} S_0^k=\mbox{the semigroup generated by $\{a_1,\ldots,a_k,b_1,\ldots,b_k,e_1\}$.} \end{equation} } \end{defi} We must argue that the idempotents $a_1,\ldots,a_k,b_1,\ldots,b_k,e_1$ are well defined, that is, for each of them, different elements of the image lie in different kernel classes. Consider $a_i$, where $i\in\{2,\ldots,k\}$. Then $\ima(a_i)=\{y_{i-1},x_i,y_i\}$. Then $y_i$ lies in Class-1 (see Definition~\ref{dco1}) since $y_i\in\ima(a_{i+1})$ (or $y_i\in\ima(b_1)$ if $i=k$), $y_{i-1}$ lies in Class-2 since $y_{i-1}\in\ima(a_{i-1})$, and $x_i$ lies in Class-3. Arguments for the remaining idempotents are similar. For the remainder of this subsection, $S_0^k$ will be the semigroup (\ref{edco1}). Our objective is to prove that $S_0^k$ is a band such that $\pi=a_1-\cdots-a_k-b_1-\cdots-b_k-e_1$ is a shortest $l$-path in $S_0^k$. Since $\pi$ has length $2k=n$, it will follow that $S_0^k$ is a band with knit degree $n$. We first analyze products of the generators of $S_0^k$. \begin{lemma}\label{lev2} Let $1\leq i<j\leq k$. Then: \begin{itemize} \item [\rm(1)] $a_ib_i=b_i$, $b_ia_i=a_i$, $a_1e_1=b_1e_1=e_1$, $e_1a_1=b_1a_1=a_1$, and $e_1b_1=a_1b_1=b_1$. \item [\rm(2)] $a_ia_j=c_{y_{j-1}}$ and $a_ja_i=c_{y_i}$. \item [\rm(3)] $a_ib_j=c_{v_j}$ and $a_jb_i=c_{v_{i-1}}$. \item [\rm(4)] $b_ia_j=c_{y_j}$ and $b_ja_i=c_{y_{i-1}}$. \item [\rm(5)] $b_ib_j=c_{v_{j-1}}$ and $b_jb_i=c_{v_i}$. \item [\rm(6)] $e_1a_j=c_{y_{j-1}}$ and $a_je_1=c_s$. \item [\rm(7)] $e_1b_j=c_{v_j}$ and $b_je_1=c_{v_k}$. \end{itemize} \end{lemma} \begin{proof} Statement (1) is true because the generators of $S_0^k$ are idempotents and the ones with the same subscript have the same kernel. By Definition~\ref{dco1}, Class-2 of $\Ker(a_j)$ contains both $\ima(a_{j-1})=\{y_{j-2},x_{j-1},y_{j-1}\}$ and $\ima(a_i)$ (since $i<j$). Since $y_{j-1}\in\ima(a_j)=\{y_{j-1},x_j,y_j\}$, $a_j$ maps all elements of Class-2 to $y_{j-1}$. Hence $a_ia_j=c_{y_{j-1}}$. Similarly, since $i<j$, Class-1 of $\Ker(a_i)$ contains both $\ima(a_{i+1})=\{y_i,x_{i+1},y_{i+1}\}$ and $\ima(a_j)$. Since $y_i\in\ima(a_i)=\{y_{i-1},x_i,y_i\}$, $a_i$ maps all elements of Class-1 to $y_i$. Hence $a_ja_i=c_{y_i}$. We have proved (2). Proofs of (3)-(7) are similar. For example, $b_je_1=c_{v_k}$ because Class-2 of $\Ker(e_1)=\Ker(a_1)$ contains both $\ima(b_j)$ and $\ima(b_k)=\{v_{k-1},u_k,v_k\}$, and $v_k\in\ima(e_1)$. \end{proof} The following corollaries are immediate consequences of Lemma~\ref{lev2}. \begin{corollary}\label{cev2} The semigroup $S_0^k$ is a band. It consists of $2k+1$ generators from Definition~{\rm \ref{dco1}} and $2k+2$ constant transformations: \[ S_0^k=\{a_1,\ldots,a_k,b_1,\ldots,b_k,e_1,c_{y_0},c_{y_1},\ldots,c_{y_k},c_{v_1},\ldots,c_{v_k},c_s\}. \] \end{corollary} \begin{corollary}\label{cev2a} Let $g,h\in S_0^k$ be generators from the list \begin{equation}\label{ecev2} a_1,\ldots,a_k,b_1,\ldots,b_k,e_1. \end{equation} Then $gh=hg$ if and only if $g$ and $h$ are consecutive elements in the list. \end{corollary} Lemma~\ref{lev2} gives a partial multiplication table for $S_0^k$. The following lemma completes the table. \begin{lemma}\label{lev2a} Let $1\leq p\leq k$ and $1\leq i<j\leq k$. Then: \begin{itemize} \item [\rm(1)] $c_{y_p}a_p=c_{y_p}$, $c_{y_p}b_p=c_{v_{p-1}}$, $c_{y_i}a_j=c_{y_{j-1}}$, $c_{y_j}a_i=c_{y_i}$, $c_{y_i}b_j=c_{v_j}$, $c_{y_j}b_i=c_{v_{i-1}}$, $c_{y_p}e_1=c_s$, $c_{y_0}a_p=c_{y_{p-1}}$, $c_{y_0}b_p=c_{v_p}$, and $c_{y_0}e_1=c_{v_k}$. 
\item [\rm(2)] $c_{v_p}a_p=c_{y_{p-1}}$, $c_{v_p}b_p=c_{v_p}$, $c_{v_i}a_j=c_{y_j}$, $c_{v_j}a_i=c_{y_{i-1}}$, $c_{v_i}b_j=c_{v_{j-1}}$, $c_{v_j}b_i=c_{v_i}$, and $c_{v_p}e_1=c_{v_k}$. \item [\rm(3)] $c_sa_j=c_{y_{j-1}}$, $c_sb_j=c_{v_j}$, $c_sa_1=c_{y_1}$, $c_sb_1=c_{v_0}$, and $c_se_1=c_s$. \end{itemize} \end{lemma} \begin{proof} We have $c_{y_p}a_p=c_{y_p}$ since $y_p\in\ima(a_p)$. By Definition~\ref{dco1}, Class-1 of $\Ker(b_p)$ contains both $\ima(a_{p+1})$ and $\ima(b_{p-1})$. Since $y_p\in\ima(a_{p+1})$ and $v_{p-1}\in\ima(b_{p-1})$, both $y_p$ and $v_{p-1}$ are in Class-1. Hence $y_pb_p=v_{p-1}b_p=v_{p-1}$, where the last equality is true because $v_{p-1}\in\ima(b_p)$. Thus $c_{y_p}b_p=c_{v_{p-1}}$. By Definition~\ref{dco1}, $y_p$ and $s$ belong to Class-1 of $\Ker(e_1)$, and $s\in\ima(e_1)$. It follows that $c_{y_p}e_1=c_s$. Again by Definition~\ref{dco1}, $y_0$ and $y_{p-1}$ belong to Class-2 of $\Ker(a_p)$, and $y_{p-1}\in\ima(a_p)$. Hence $c_{y_0}a_p=c_{y_{p-1}}$. Similarly, $c_{y_0}b_p=c_{v_p}$ and $c_{y_0}e_1=c_{v_k}$. By Lemma~\ref{lev2}, \begin{align} c_{y_i}a_j&=(c_{y_i}a_i)a_j=c_{y_i}(a_ia_j)=c_{y_i}c_{y_{j-1}}=c_{y_{j-1}},\notag\\ c_{y_j}a_i&=(c_{y_j}a_j)a_i=c_{y_j}(a_ja_i)=c_{y_j}c_{y_i}=c_{y_i},\notag\\ c_{y_i}b_j&=(c_{y_i}a_i)b_j=c_{y_i}(a_ib_j)=c_{y_i}c_{v_j}=c_{v_j},\notag\\ c_{y_j}b_i&=(c_{y_j}a_j)b_i=c_{y_j}(a_jb_i)=c_{y_j}c_{v_{i-1}}=c_{v_{i-1}}.\notag \end{align} We have proved (1). Proofs of (2) and (3) are similar. \end{proof} Table~2 presents the Cayley table for $S_0^2$. \begin{table}[h] \[ \begin{tabular}{c|ccccccccccc} & $a_1$ & $a_2$ & $b_1$ & $b_2$ & $e_1$ & $c_{y_0}$ & $c_{y_1}$ & $c_{y_2}$ & $c_{v_1}$ & $c_{v_2}$ & $c_s$ \\\hline $a_1$ & $a_1$ & $c_{y_1}$ & $b_1$ & $c_{v_2}$ & $e_1$ & $c_{y_0}$ & $c_{y_1}$ & $c_{y_2}$ & $c_{v_1}$ & $c_{v_2}$ & $c_s$ \\ $a_2$ & $c_{y_1}$ & $a_2$ & $c_{y_2}$ & $b_2$ & $c_s$ & $c_{y_0}$ & $c_{y_1}$ & $c_{y_2}$ & $c_{v_1}$ & $c_{v_2}$ & $c_s$ \\ $b_1$ & $a_1$ & $c_{y_2}$ & $b_1$ & $c_{v_1}$ & $e_1$ & $c_{y_0}$ & $c_{y_1}$ & $c_{y_2}$ & $c_{v_1}$ & $c_{v_2}$ & $c_s$ \\ $b_2$ & $c_{y_0}$ & $a_2$ & $c_{v_1}$ & $b_2$ & $c_{v_2}$ & $c_{y_0}$ & $c_{y_1}$ & $c_{y_2}$ & $c_{v_1}$ & $c_{v_2}$ & $c_s$ \\ $e_1$ & $a_1$ & $c_{y_1}$ & $b_1$ & $c_{v_2}$ & $e_1$ & $c_{y_0}$ & $c_{y_1}$ & $c_{y_2}$ & $c_{v_1}$ & $c_{v_2}$ & $c_s$ \\ $c_{y_0}$ & $c_{y_0}$ & $c_{y_1}$ & $c_{v_1}$ & $c_{v_2}$ & $c_{v_2}$ & $c_{y_0}$ & $c_{y_1}$ & $c_{y_2}$ & $c_{v_1}$ & $c_{v_2}$ & $c_s$ \\ $c_{y_1}$ & $c_{y_1}$ & $c_{y_1}$ & $c_{y_2}$ & $c_{v_2}$ & $c_{s}$ & $c_{y_0}$ & $c_{y_1}$ & $c_{y_2}$ & $c_{v_1}$ & $c_{v_2}$ & $c_s$ \\ $c_{y_2}$ & $c_{y_1}$ & $c_{y_2}$ & $c_{y_2}$ & $c_{v_1}$ & $c_{s}$ & $c_{y_0}$ & $c_{y_1}$ & $c_{y_2}$ & $c_{v_1}$ & $c_{v_2}$ & $c_s$ \\ $c_{v_1}$ & $c_{y_0}$ & $c_{y_2}$ & $c_{v_1}$ & $c_{v_1}$ & $c_{v_2}$ & $c_{y_0}$ & $c_{y_1}$ & $c_{y_2}$ & $c_{v_1}$ & $c_{v_2}$ & $c_s$ \\ $c_{v_2}$ & $c_{y_0}$ & $c_{y_1}$ & $c_{v_1}$ & $c_{v_2}$ & $c_{v_2}$ & $c_{y_0}$ & $c_{y_1}$ & $c_{y_2}$ & $c_{v_1}$ & $c_{v_2}$ & $c_s$ \\ $c_s$ & $c_{y_1}$ & $c_{y_1}$ & $c_{y_2}$ & $c_{v_2}$ & $c_{s}$ & $c_{y_0}$ & $c_{y_1}$ & $c_{y_2}$ & $c_{v_1}$ & $c_{v_2}$ & $c_s$ \\ \end{tabular} \] \caption{Cayley table for $S_0^2$.} \end{table} \begin{lemma}\label{lev3} Let $g,h,c_z\in S_0^k$ such that $c_z$ is a constant and $g-c_z-h$ is a path in $\cg(S_0^k)$. Then $gh=hg$. \end{lemma} \begin{proof} Note that $g,h$ are not constants since different constants do not commute. Thus $g$ and $h$ are generators from list (\ref{ecev2}). 
We may assume that $g$ is to the left of $h$ in the list. Since $c_z$ commutes with both $g$ and $h$, $z\in\ima(g)\cap\ima(h)$ by Lemma~\ref{lcon}. Suppose $g=a_i$, where $1\leq i\leq k-1$. Then $h=a_{i+1}$ since $a_{i+1}$ is the only generator to the right of $a_i$ whose image is not disjoint from $\ima(a_i)$. Similarly, if $g=a_k$ then $h=b_1$; if $g=b_i$ ($1\leq i\leq k-1$) then $h=b_{i+1}$; and if $g=b_k$ then $h=e_1$. Hence $gh=hg$ by Corollary~\ref{cev2a}. \end{proof} \begin{lemma}\label{lev4} The paths \begin{itemize} \item [\rm(i)] $\tau_1=c_{y_0}-a_1-\cdots-a_k-b_1-\cdots-b_k-c_{v_k}$, \item [\rm(ii)] $\tau_2=c_{y_1}-a_2-\cdots-a_k-b_1-\cdots-b_k-e_1-c_s$ \end{itemize} are the only minimal $l$-paths in $\cg(S_0^k)$ with constants as the endpoints. \end{lemma} \begin{proof} We have that $\tau_1$ and $\tau_2$ are $l$-paths by Lemmas~\ref{lev2} and~\ref{lev2a}. Suppose that $\lam=c_z-\cdots-c_w$ is a minimal $l$-path in $\cg(S_0^k)$ with constants $c_z$ and $c_w$ as the endpoints. Recall that $z,w\in\{y_0,y_1,\ldots,y_k,v_1,\ldots,v_k,s\}$. We may assume that $z$ is to the left of $w$ in the list $y_0,y_1,\ldots,y_k,v_1,\ldots,v_k,s$. Since $\lam$ is minimal, Lemma~\ref{lev3} implies that $\lam$ does not contain any constants except $c_z$ and $c_w$. There are five cases to consider. \begin{itemize} \item [(a)] $\lam=c_{y_i}-\cdots-c_{y_j}$, where $0\leq i<j\leq k$. \item [(b)] $\lam=c_{y_i}-\cdots-c_{v_j}$, where $0\leq i\leq k$, $1\leq j\leq k$. \item [(c)] $\lam=c_{y_i}-\cdots-c_s$, where $0\leq i\leq k$. \item [(d)] $\lam=c_{v_i}-\cdots-c_{v_j}$, where $1\leq i<j\leq k$. \item [(e)] $\lam=c_{v_i}-\cdots-c_s$, where $1\leq i\leq k$. \end{itemize} Suppose (a) holds, that is, $\lam=c_{y_i}-\cdots-h-c_{y_j}$, $0\leq i<j\leq k$. Since $hc_{y_j}=c_{y_j}h$, either $h=a_j$ or $h=a_{j+1}$ (where $a_{k+1}=b_1$) (since $a_j$ and $a_{j+1}$ are the only generators that have $y_j$ in their image). Suppose $h=a_{j+1}$. Then, by Corollary~\ref{cev2a}, either $\lam=c_{y_i}-\cdots-a_j-a_{j+1}-c_{y_j}$ or $\lam=c_{y_i}-\cdots-a_{j+2}-a_{j+1}-c_{y_j}$ (where $a_{j+2}=b_1$ if $j=k-1$, and $a_{j+2}=b_2$ if $j=k$). In the latter case, \[ \lam=c_{y_i}-\cdots-a_1-e_1-b_k-\cdots-b_1-a_k-\cdots-a_{j+2}-a_{j+1}-c_{y_j}, \] which is a contradiction since $a_1$ and $e_1$ do not commute. Thus either $\lam=c_{y_i}-\cdots-a_j-c_{y_j}$ or $\lam=c_{y_i}-\cdots-a_j-a_{j+1}-c_{y_j}$. In either case, $\lam$ contains $a_j$, and so $c_{y_i}a_j=c_{y_j}a_j$ (since $\lam$ is an $l$-path). But, by Lemma~\ref{lev2a}, $c_{y_i}a_j=c_{y_{j-1}}$ and $c_{y_j}a_j=c_{y_j}$. Hence $c_{y_{j-1}}=c_{y_j}$, which is a contradiction. Suppose (b) holds, that is, $\lam=c_{y_i}-g-\cdots-h-c_{v_j}$, $0\leq i\leq k$ and $1\leq j\leq k$. Then $g$ is either $a_i$ or $a_{i+1}$ ($g=a_{i+1}$ if $i=0$) and $h$ is either $b_j$ or $b_{j+1}$ (where $b_{k+1}=e_1$). In any case, $\lam=c_{y_i}-g-\cdots-a_k-b_1-\cdots-h-c_{v_j}$. Suppose $i\geq1$. Then, by Lemma~\ref{lev2a} and the fact that $\lam$ is an $l$-path, $c_{v_0}=c_{y_i}b_1=c_{v_j}b_1=c_{v_1}$, which is a contradiction. If $i=0$ and $j<k$, then $c_{y_{k-1}}=c_{y_0}a_k=c_{v_j}a_k=c_{y_k}$, which is again a contradiction. If $i=0$ and $j=k$, then $g=a_1$, and so $\lam=\tau_1$. Suppose (c) holds, that is, $\lam=c_{y_i}-g-\cdots-a_k-b_1-\cdots-b_k-e_1-c_s$, $0\leq i\leq k$, where $g$ is either $a_i$ or $a_{i+1}$ ($g=a_{i+1}$ if $i=0$). If $i>1$, then $c_{v_{i-1}}=c_{y_i}b_i=c_sb_i=c_{v_i}$, which is a contradiction. If $i=0$, then $c_{v_k}=c_{y_0}e_1=c_se_1=c_s$, which is a contradiction. 
If $i=1$ and $g=a_1$, then $\lam$ is not minimal since $c_{y_1}-a_2$, so $a_1$ can be removed. Finally, if $i=1$ and $g=a_2$, then $\lam=\tau_2$. Suppose (d) holds, that is, $\lam=c_{v_i}-g-\cdots-h-c_{v_j}$, $1\leq i<j\leq k$, where $g$ is either $b_i$ or $b_{i+1}$ and $h$ is either $b_j$ or $b_{j+1}$ (where $b_{k+1}=e_1$). In any case, $\lam$ contains $b_j$, and so $c_{v_{j-1}}=c_{v_i}b_j=c_{v_j}b_j=c_{v_j}$, which is a contradiction. Suppose (e) holds, that is, $\lam=c_{v_i}-\cdots-e_1-c_s$, $1\leq i\leq k$. Then $c_{v_k}=c_{v_i}e_1=c_se_1=c_s$, which is a contradiction. We have exhausted all possibilities and obtained that $\lam$ must be equal to $\tau_1$ or $\tau_2$. The result follows. \end{proof} \begin{lemma}\label{lev5} The path $\pi=a_1-\cdots-a_k-b_1-\cdots-b_k-e_1$ is a unique minimal $l$-path in $\cg(S_0^k)$ with at least one endpoint that is not a constant. \end{lemma} \begin{proof} We have that $\pi$ is an $l$-path by Lemmas~\ref{lev2} and~\ref{lev2a}. Suppose that $\lam=e-\cdots-f$ is a minimal $l$-path in $\cg(S_0^k)$ such that $e$ or $f$ is not a constant. We claim that $\lam$ does not contain any constant $c_z$. By Lemma~\ref{lev3}, there is no constant $c_z$ such that $\lam=e-\cdots-c_z-\cdots-f$ (since otherwise $\lam$ would not be minimal). We may assume that $f$ is not a constant. But then $e$ is not a constant either since otherwise we would have that $ef$ is a constant and $ff=f$ is not a constant. But this is impossible since $\lam$ is an $l$-path, and so $ef=ff$. The claim has been proved. Thus all elements in $\lam$ are generators from list (\ref{ecev2}). We may assume that $e$ is to the left of $f$ (according to the ordering in (\ref{ecev2})). Since $\lam$ is an $l$-path, $e=ee=fe$. Hence, by Lemma~\ref{lev2}, $e=a_p$ and $f=b_p$ (for some $p\in\{1,\ldots,k\}$) or $e=b_1$ and $f=e_1$ or $e=a_1$ and $f=e_1$. Suppose that $e=a_p$ and $f=b_p$ for some $p$. Then, by Corollary~\ref{cev2a}, $\lam=a_p-\cdots-a_k-b_1-\cdots-b_p$. (Note that $\lam=a_p-a_{p-1}-\cdots-a_1-e_1-b_k-\cdots-b_p$ is impossible since $a_1e_1\ne e_1a_1$.) If $p>1$ then, by Lemma~\ref{lev2}, $c_{v_0}=a_pb_1=b_pb_1=c_{v_1}$, which is a contradiction. If $p=1$, then $c_{y_{k-1}}=a_1a_k=b_1b_k=c_{y_k}$, which is again a contradiction. Suppose that $e=b_1$ and $f=e_1$. Then $\lam=b_1-\cdots-b_k-e_1$, and so $c_{v_{k-1}}=b_1b_k=e_1b_k=c_{v_k}$, which is a contradiction. Hence we must have $e=a_1$ and $f=e_1$. But then, by Corollary~\ref{cev2a}, $\lam=a_1-\cdots-a_k-b_1-\cdots-b_k-e_1=\pi$. The result follows. \end{proof} \begin{theorem}\label{teve} For every even integer $n\geq2$, there is a band $S$ with knit degree $n$. \end{theorem} \begin{proof} Let $n=2$. Consider the band $S=\{a,b,c,d\}$ defined by the following Cayley table: \[ \begin{tabular}{c|cccc} & $a$ & $b$ & $c$ & $d$ \\\hline $a$ & $a$ & $b$ & $c$ & $d$ \\ $b$ & $b$ & $b$ & $b$ & $b$ \\ $c$ & $a$ & $b$ & $c$ & $d$ \\ $d$ & $d$ & $d$ & $d$ & $d$ \\ \end{tabular} \] It is easy to see that the center of $S$ is empty and $a-b-c$ is a shortest $l$-path in $\cg(S)$. Thus $\kd(S)=2$. Let $n=2k$ where $k\geq2$. Consider the semigroup $S_0^k$ defined by (\ref{edco1}). Then, by Corollary~\ref{cev2}, $S_0^k$ is a band. The paths $\tau_1$, $\tau_2$, and $\pi$ from Lemmas~\ref{lev4} and \ref{lev5} are the only minimal $l$-paths in $\cg(S_0^k)$. Since $\tau_1$ has length $2k+1=n+1$, $\tau_2$ has length $2k+2=n+2$, and $\pi$ has length $2k=n$, it follows that $\kd(S_0^k)=n$. 
\end{proof} \subsection{The Odd Case}\label{ssodd} Suppose $n=2k+1\geq5$ is odd. We will obtain a band $S$ of knit degree $n$ by slightly modifying the construction of the band $S_0^k$ from Definition~\ref{dco1}. Recall that $S_0^k$ has knit degree $2k$ (see the proof of Theorem~\ref{teve}). We will obtain a band of knit degree $n=2k+1$ by simply removing transformations $e_1$ and $c_s$ from $S_0^k$. \begin{defi}\label{dco2} {\rm Let $k\geq2$ be an integer. Consider the following subset of the semigroup $S_0^k$ from Definition~\ref{dco1}: \begin{equation}\label{edco21} S_1^k=S_0^k-\{e_1,c_s\}=\{a_1,\ldots,a_k,b_1,\ldots,b_k,c_{y_0},c_{y_1},\ldots,c_{y_k},c_{v_1},\ldots,c_{v_k}\}. \end{equation} By Lemmas~\ref{lev2} and \ref{lev2a}, $S_1^k$ is a subsemigroup of $S_0^k$. } \end{defi} \begin{rem}\label{rdco2} {\rm Note that $r$ and $s$, which still occur in the domain (but not the image) of each element of $S_1^k$, are now superfluous. We can remove them from the domain of each element of $S_1^k$ and view $S_1^k$ as a semigroup of transformations on the set \[ X=\{y_0,y_1,\ldots,y_k=v_0,v_1,\ldots,v_k,x_1,\ldots,x_k,u_1,\ldots,u_k\}. \] } \end{rem} It is clear from the definition of $S_1^k$ that the multiplication table for $S_1^k$ is the multiplication table for $S_0^k$ (see Lemmas~\ref{lev2} and \ref{lev2a}) with the rows and columns $e_1$ and $c_s$ removed. This new multiplication table is given by Lemmas~\ref{lev2} and \ref{lev2a} if we ignore the multiplications involving $e_1$ or $c_s$. Therefore, the following lemma follows immediately from Corollary~\ref{cev2} and Lemmas~\ref{lev4} and \ref{lev5}. \begin{lemma}\label{levnew1} Let $S_1^k$ be the semigroup defined by {\rm(\ref{edco21})}. Then $S_1^k$ is a band and $\tau=c_{y_0}-a_1-\cdots-a_k-b_1-\cdots-b_k-c_{v_k}$ is the only minimal $l$-path in $\cg(S_1^k)$. \end{lemma} \begin{theorem}\label{todd} For every odd integer $n\geq5$, there is a band $S$ of knit degree $n$. \end{theorem} \begin{proof} Let $n=2k+1$ where $k\geq2$. Consider the semigroup $S_1^k$ defined by (\ref{edco21}). Then, by Lemma~\ref{levnew1}, $S_1^k$ is a band and $\tau=c_{y_0}-a_1-\cdots-a_k-b_1-\cdots-b_k-c_{v_k}$ is the only minimal $l$-path in $\cg(S_1^k)$. Since $\tau$ has length $2k+1=n$, it follows that $\kd(S_1^k)=n$. \end{proof} The case $n=3$ remains unresolved. \vskip 2mm \noindent{\bf Open Question.} Is there a semigroup of knit degree $3$? \section{Commuting Graphs with Arbitrary Diameters}\label{sald} \setcounter{equation}{0} In Section~\ref{stx}, we showed that, except for some special cases, the commuting graph of any ideal of the semigroup $T(X)$ has diameter $5$. In this section, we use the constructions of Section~\ref{smlp} to show that there are semigroups whose commuting graphs have any prescribed diameter. We note that the situation might be quite different in group theory: it has been conjectured that there is an upper bound for the diameters of the connected commuting graphs of finite non-abelian groups \cite[Conjecture~2.2]{IrJa08}. \begin{theorem}\label{tald} For every $n\geq2$, there is a semigroup $S$ such that the diameter of $\cg(S)$ is $n$. \end{theorem} \begin{proof} Let $n\in\{2,3,4\}$. The commuting graph of the band $S$ defined by the Cayley table in the proof of Theorem~\ref{teve} is the cycle $a-b-c-d-a$. Thus the diameter of $\cg(S)$ is $2$.
Consider the semigroup $S$ defined by the following table: \[ \begin{tabular}{r|rrrr} & $a$ & $b$ & $c$ & $d$\\ \hline $a$ & $a$ & $a$ & $a$ & $a$ \\ $b$ & $a$ & $b$ & $c$ & $c$ \\ $c$ & $c$ & $c$ & $c$ & $c$ \\ $d$ & $c$ & $d$ & $c$ & $c$ \end{tabular} \] Note that $Z(S)=\emptyset$ and $\cg(S)$ is the chain $a-b-c-d$. Thus the diameter of $\cg(S)$ is $3$. The diameter of $\cg(J_4)$ is $4$ (where $J_4$ is an ideal of $T(X)$ with $|X|=5$). Let $n\geq5$. Suppose $n$ is even. Then $n=2k+2$ for some $k\geq2$. Consider the band $S_0^k$ from Definition~\ref{dco1}. Since $c_{y_0}$ and $a_1$ are the only elements of $S_0^k$ whose image contains $y_0$, they are the only elements of $S_0^k$ commuting with $c_{y_0}$ (see Lemma~\ref{lcon}). Similarly, $e_1$ and $c_s$ are the only elements commuting with $c_s$. Therefore, it follows from Corollary~\ref{cev2a} that $c_{y_0}-a_1-\cdots-a_k-b_1-\cdots-b_k-e_1-c_s$ is a shortest path in $\cg(S_0^k)$ from $c_{y_0}$ to $c_s$, that is, the distance between $c_{y_0}$ and $c_s$ is $2k+2=n$. Since $a_1-\cdots-a_k-b_1-\cdots-b_k-e_1$ is a path in $\cg(S_0^k)$, $c_{y_i}a_i=a_ic_{y_i}$ and $c_{v_i}b_i=b_ic_{v_i}$ ($1\leq i\leq k$), it follows that the distance between any two vertices of $\cg(S_0^k)$ is at most $2k+2$. Hence the diameter of $\cg(S_0^k)$ is $n$. Suppose $n$ is odd. Then $n=2k+1$ for some $k\geq2$. Consider the band $S_1^k$ from Definition~\ref{dco2}. Then $c_{y_0}-a_1-\cdots-a_k-b_1-\cdots-b_k-c_{v_k}$ is a shortest path in $\cg(S_1^k)$ from $c_{y_0}$ to $c_{v_k}$, that is, the distance between $c_{y_0}$ and $c_{v_k}$ is $2k+1=n$. As for $S_0^k$, we have $c_{y_i}a_i=a_ic_{y_i}$ and $c_{v_i}b_i=b_ic_{v_i}$ ($1\leq i\leq k$). Thus the distance between any two vertices of $\cg(S_1^k)$ is at most $2k+1$, and so the diameter of $\cg(S_1^k)$ is $n$. \end{proof} \section{Schein's Conjecture}\label{ssch} \setcounter{equation}{0} The results obtained in Section~\ref{smlp} enable us to settle a conjecture formulated by B.M. Schein in 1978 \cite[p.~12]{Sc78}. Schein stated his conjecture in the context of the attempts to characterize the $r$-semisimple bands. A right congruence $\tau$ on a semigroup $S$ is said to be \emph{modular} if there exists an element $e\in S$ such that $(ex)\tau x$ for all $x\in S$. The radical $R_r$ on a band $S$ is the intersection of all maximal modular right congruences on $S$ \cite{Oe66}. A band $S$ is called \emph{$r$-semisimple} if its radical $R_r$ is the identity relation on $S$. In 1969, B.D. Arendt announced a characterization of $r$-semisimple bands \cite[Theorem~18]{Ar69}. In 1978, B.M. Schein pointed out that Arendt's characterization is incorrect and proved \cite[p.~2]{Sc78} that a band $S$ is $r$-semisimple if and only if it satisfies the following infinitely many quasi-identities: (1) and $(A_n)$ for all integers $n\geq1$, where \begin{align} (1)\,\,\,\,\,&zx=zy\imp xy=yx,\notag\\ (A_n)\,\,\,\,\,&x_1x_2=x_2x_1\wedge x_2x_3=x_3x_2\wedge\ldots\wedge x_{n-1}x_n=x_nx_{n-1}\wedge\notag\\ &\wedge x_1x_1=x_nx_1\wedge x_1x_2=x_nx_2\wedge\ldots\wedge x_1x_n=x_nx_n \imp x_1=x_n.\notag \end{align} Schein observed that $(A_1)$ and $(A_2)$ are true in every band, that $(A_3)$ easily follows from (1), and that Arendt's characterization of $r$-semisimple bands is equivalent to (1). He used the last observation to show that Arendt's characterization is incorrect by providing an example of a band $T$ for which (1) holds but $(A_4)$ does not.
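Quasi-identities of this form can be checked mechanically on any concrete finite band given by its Cayley table. The following sketch (a minimal Python illustration, not part of the formal arguments; the encoding of the $m$ elements as indices $0,\ldots,m-1$ and the helper names are ours) tests associativity, condition (1), and the quasi-identity $(A_n)$ by brute force, where \texttt{table[x][y]} stores the product $xy$.
{\small
\begin{verbatim}
from itertools import product

def is_associative(table):
    # table[x][y] is the product x*y on the elements 0, ..., m-1
    m = len(table)
    return all(table[table[x][y]][z] == table[x][table[y][z]]
               for x, y, z in product(range(m), repeat=3))

def satisfies_condition_1(table):
    # condition (1): zx = zy (for some z) implies xy = yx
    m = len(table)
    return all(table[x][y] == table[y][x]
               for x, y in product(range(m), repeat=2)
               if any(table[z][x] == table[z][y] for z in range(m)))

def satisfies_A(table, n):
    # quasi-identity (A_n): if consecutive x_i commute and
    # x_1 x_i = x_n x_i for all i, then x_1 = x_n
    m = len(table)
    for xs in product(range(m), repeat=n):
        chain = all(table[xs[i]][xs[i + 1]] == table[xs[i + 1]][xs[i]]
                    for i in range(n - 1))
        ends = all(table[xs[0]][xs[i]] == table[xs[n - 1]][xs[i]]
                   for i in range(n))
        if chain and ends and xs[0] != xs[n - 1]:
            return False
    return True
\end{verbatim}
}
For instance, encoding the three-element band $\{e,f,0\}$ used in the proof of Proposition~\ref{pscon2} below as \texttt{[[0,1,2],[0,1,2],[2,2,2]]} (with $e,f,0$ corresponding to $0,1,2$), the function \texttt{satisfies\_A} returns true for $n=2$ and false for $n=3$.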
We note that Schein's example is incorrect since the Cayley table in \cite[p.~10]{Sc78}, which is supposed to define $T$, does not define a semigroup because the operation is not associative: $(4*1)*1=10\neq8=4*(1*1)$. However, Schein was right that it is not true that condition (1) implies $(A_n)$ for all $n$. The semigroup $S_0^2$ (see Table~2) satisfies (1) but it does not satisfy $(A_5)$ since $a_1-a_2-b_1-b_2-e_1$ is an $l$-path (so the premise of $(A_5)$ holds) but $a_1\ne e_1$. At the end of the paper, Schein formulates his conjecture \cite[p.~12]{Sc78}: \vskip 2mm \noindent{\bf Schein's Conjecture.} For every $n>1$, $(A_n)$ does not imply $(A_{n+1})$. \vskip 2mm The reason that Section~\ref{smlp} enables us to settle Schein's conjecture is the following lemma. \begin{lemma}\label{lscon} Let $n\geq1$ and let $S$ be a band with no central elements. Then $S$ satisfies $(A_n)$ if and only if $\cg(S)$ has no $l$-path of length $<n$. \end{lemma} \begin{proof} First note that $(A_n)$ can be expressed as: for all $x_1,\ldots,x_n\in S$, \begin{equation}\label{elscon} x_1-\cdots-x_n\mbox{ and }x_1x_i=x_nx_i\mbox{ $(1\leq i\leq n)$}\imp x_1=x_n. \end{equation} (Here, we allow $x-x$ and do not require that $x_1,\ldots,x_n$ be distinct.) Assume $S$ satisfies $(A_n)$. Suppose to the contrary that $\cg(S)$ has an $l$-path $\lam=x_1-\cdots-x_k$ of length $<n$, that is, $k\leq n$. Then $x_1-\cdots-x_k-x_{k+1}-\cdots-x_n$, where $x_i=x_k$ for every $i\in\{k+1,\ldots,n\}$, and so $x_1=x_n=x_k$ by (\ref{elscon}). This is a contradiction since $\lam$ is a path. Conversely, suppose that $\cg(S)$ has no $l$-path of length $<n$. Let $x_1-\cdots-x_n$ and $x_1x_i=x_nx_i$ ($1\leq i\leq n$). Suppose to the contrary that $x_1\ne x_n$. If there are $i$ and $j$ such that $1\leq i<j\leq n$ and $x_i=x_j$, we can replace $x_1-\cdots-x_i-\cdots-x_j-\cdots-x_n$ with $x_1-\cdots-x_i-x_{j+1}-\cdots-x_n$. Therefore, we can assume that $x_1,\ldots,x_n$ are pairwise distinct. Recall that $S$ has no central elements, so all $x_i$ are vertices in $\cg(S)$. Thus $x_1-\cdots-x_n$ is an $l$-path in $\cg(S)$ of length $n-1$, which is a contradiction. \end{proof} First, Schein's conjecture is false for $n=3$. \begin{prop}\label{pscon1} $(A_3)\imp(A_4)$. \end{prop} \begin{proof} Suppose a band $S$ satisfies $(A_3)$, that is, \begin{equation}\label{essch1} x_1x_2=x_2x_1 \wedge x_2x_3=x_3x_2 \wedge x_1x_1=x_3x_1 \wedge x_1x_2=x_3x_2 \wedge x_1x_3=x_3x_3 \imp x_1=x_3. \end{equation} To prove that $S$ satisfies $(A_4)$, suppose that \[ y_1y_2=y_2y_1 \wedge y_2y_3=y_3y_2 \wedge y_3y_4=y_4y_3 \wedge y_1y_1=y_4y_1 \wedge y_1y_2=y_4y_2 \wedge y_1y_3=y_4y_3 \wedge y_1y_4=y_4y_4. \] Take $x_1=y_1$, $x_2=y_2y_3$, and $x_3=y_4$. Then $x_1,x_2,x_3$ satisfy the premise of (\ref{essch1}): \begin{align} x_1x_2&=y_1y_2y_3=y_1y_3y_2=y_4y_3y_2=y_3y_4y_2=y_3y_1y_2=y_3y_2y_1=y_2y_3y_1=x_2x_1,\notag\\ x_2x_3&=y_2y_3y_4=y_2y_4y_3=y_2y_1y_3=y_1y_2y_3=y_4y_2y_3=x_3x_2,\notag\\ x_1x_1&=y_1y_1=y_4y_1=x_3x_1,\,x_1x_2=y_1y_2y_3=y_4y_2y_3=x_3x_2,\,x_1x_3=y_1y_4=y_4y_4=x_3x_3.\notag \end{align} Thus, by (\ref{essch1}), $y_1=x_1=x_3=y_4$, and so $(A_4)$ holds. \end{proof} Second, Schein's conjecture is true for $n\ne3$. \begin{prop}\label{pscon2} If $n>1$ and $n\ne3$, then $(A_n)$ does not imply $(A_{n+1})$. \end{prop} \begin{proof} Consider the band $S=\{e,f,0\}$, where $0$ is the zero, $ef=f$, and $fe=e$. Then $e-0-f$, $ee=fe$, $e0=f0$, $ef=ff$, and $e\ne f$. Thus $S$ does not satisfy $(A_3)$. But $S$ satisfies $(A_2)$ since $(A_2)$ is true in every band. 
Hence $(A_2)$ does not imply $(A_3)$. Let $n\geq4$. Then, by Theorems~\ref{teve} and \ref{todd} and their proofs, the band $S$ constructed in Definition~\ref{dco1} (if $n$ is even) or Definition~\ref{dco2} (if $n$ is odd) has knit degree $n$. By Lemmas~\ref{lev2} and \ref{lev2a}, $S$ has no central elements. Since $\kd(S)=n$, there is an $l$-path in $\cg(S)$ of length $n$ and there is no $l$-path in $\cg(S)$ of length $<n$. Hence, by Lemma~\ref{lscon}, $S$ satisfies $(A_n)$ and $S$ does not satisfy $(A_{n+1})$. Thus $(A_n)$ does not imply $(A_{n+1})$. \end{proof} \section{Problems}\label{spro} We finish this paper with a list of some problems concerning commuting graphs of semigroups. \begin{itemize} \item[(1)] Is there a semigroup with knit degree 3? Our guess is that such a semigroup does not exist. \item[(2)] Classify the semigroups whose commuting graph is eulerian (proposed by M. Volkov). The same problem for hamiltonian and planar graphs. \item[(3)] Classify the commuting graphs of semigroups. \item[(4)] Is it true that for all natural numbers $n\geq 3$, there is a semigroup $S$ such that the clique number (girth, chromatic number) of $\cg(S)$ is $n$? \item[(5)] Classify the semigroups $S$ such that the clique and chromatic numbers of $\cg(S)$ coincide. \item[(6)] Calculate the clique and chromatic numbers of the commuting graphs of $T(X)$ and $\fend(V)$, where $X$ is a finite set and $V$ is a finite-dimensional vector space over a finite field. \item[(7)] Let $\cg(S)$ be the commuting graph of a finite non-commutative semigroup $S$. An \emph{$\mrl$-path} is a path $a_1-\cdots-a_m$ in $\cg(S)$ such that $a_1\ne a_m$ and $a_1a_ia_1=a_ma_ia_m$ for all $i=1,\ldots,m$. For $\mrl$-paths, prove the results analogous to the results for $l$-paths contained in this paper. \item[(8)] Find classes of finite non-commutative semigroups such that if $S$ and $T$ are two semigroups in that class and $\cg(S)\cong \cg(T)$, then $S\cong T$. \end{itemize} \section{Acknowledgments} We are pleased to acknowledge the assistance of the automated deduction tool \textsc{Prover9} and the finite model builder \textsc{Mace4}, both developed by W. McCune \cite{McCune}. We also thank the developers of GAP \cite{Scel92}, L.H. Soicher for GRAPE \cite{So06}, and Aedan Pope and Kyle Pula for their suggestions after carefully reading the manuscript. The first author was partially supported by FCT and FEDER, Project POCTI-ISFL-1-143 of Centro de Algebra da Universidade de Lisboa, by FCT and PIDDAC through the project PTDC/MAT/69514/2006, by PTDC/MAT/69514/2006 Semigroups and Languages, and by\newline PTDC/MAT/101993/2008 Computations in groups and semigroups.
{ "timestamp": "2010-09-09T02:00:38", "yymm": "1003", "arxiv_id": "1003.2809", "language": "en", "url": "https://arxiv.org/abs/1003.2809", "abstract": "Let $S$ be a finite non-commutative semigroup. The commuting graph of $S$, denoted $\\cg(S)$, is the graph whose vertices are the non-central elements of $S$ and whose edges are the sets $\\{a,b\\}$ of vertices such that $a\\ne b$ and $ab=ba$. Denote by $T(X)$ the semigroup of full transformations on a finite set $X$. Let $J$ be any ideal of $T(X)$ such that $J$ is different from the ideal of constant transformations on $X$. We prove that if $|X|\\geq4$, then, with a few exceptions, the diameter of $\\cg(J)$ is 5. On the other hand, we prove that for every positive integer $n$, there exists a semigroup $S$ such that the diameter of $\\cg(S)$ is $n$. We also study the left paths in $\\cg(S)$, that is, paths $a_1-a_2-...-a_m$ such that $a_1\\ne a_m$ and $a_1a_i=a_ma_i$ for all $i\\in \\{1,\\ldot, m\\}$. We prove that for every positive integer $n\\geq2$, except $n=3$, there exists a semigroup whose shortest left path has length $n$. As a corollary, we use the previous results to solve a purely algebraic old problem posed by B.M. Schein.", "subjects": "Group Theory (math.GR); Combinatorics (math.CO)", "title": "Minimal paths in the commuting graphs of semigroups", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9911526470077808, "lm_q2_score": 0.7154239897159438, "lm_q1q2_score": 0.709094381139825 }
https://arxiv.org/abs/1802.03579
On the weighted safe set problem on paths and cycles
Let $G$ be a graph, and let $w: V(G) \to \mathbb{R}$ be a weight function on the vertices of $G$. For every subset $X$ of $V(G)$, let $w(X)=\sum_{v \in X} w(v).$ A non-empty subset $S \subset V(G)$ is a weighted safe set of $(G,w)$ if, for every component $C$ of the subgraph induced by $S$ and every component $D$ of $G-S$, we have $w(C) \geq w(D)$ whenever there is an edge between $C$ and $D$. If the subgraph of $G$ induced by a weighted safe set $S$ is connected, then the set $S$ is called a connected weighted safe set of $(G,w)$. The weighted safe number $s(G,w)$ and connected weighted safe number $cs(G,w)$ of $(G,w)$ are the minimum weights $w(S)$ among all weighted safe sets and all connected weighted safe sets of $(G,w)$, respectively. It is easy to see that for any pair $(G,w)$, ${s}(G,w) \le {cs}(G,w)$ by their definitions. In this paper, we discuss the possible equality when $G$ is a path or a cycle. We also give an answer to a problem due to Tittmann et al. [Eur. J. Combin. Vol. 32 (2011)] concerning subgraph component polynomials for cycles and complete graphs.
\section{Introduction} We start with a question about number sequences in combinatorial number theory. For a sequence $a_1, \ldots, a_n$ of positive integers and a segment $I$ consisting of a subsequence $a_i, a_{i+1},\ldots, a_{i+|I|-1}$, let $s(I)=\sum_{j=i}^{i+|I|-1}a_j$. We consider partitioning $a_1,\ldots ,a_n$ into an odd number of non-empty segments $I_1=\{a_1,\ldots, a_{|I_1|}\},\ldots, I_{2k+1}=\{a_{n-|I_{2k+1}|+1}, \ldots, a_n\}$ with $k\ge 1$ so that the sequence $s(I_1)$, $s(I_2)$, $\ldots$, $s(I_{2k+1})$ is a ``zigzag'' sequence, i.e., $\max\{s(I_{2j-1}), s(I_{2j+1})\}\le s(I_{2j})$ holds for all $j=1,\ldots,k$. Whenever such segments exist, we would like to choose them so that $\sum_{j=1}^{k}s(I_{2j})$ is as small as possible and, subject to this condition, $k$ is as small as possible among all such partitions into an odd number of segments. By our choice, $k$ is likely to be small in many cases. Assuming that the desired partition exists, we consider the special case when $n$ is odd and the optimal solution occurs only when each segment consists of a single number. Then the elements of the odd segments $I_{2j-1}$ for $j=1,\ldots,k+1$ and the elements of the even segments $I_{2j}$ for $j=1,\ldots,k$ correspond to the two number sequences that are obtained from $a_1,\ldots, a_n$ by taking their terms alternately. The question is whether number sequences $a_1,\ldots ,a_n$ exist for which $k=\frac{n-1}{2}$ is the optimal solution. Our answer to this question is positive. Actually, we construct infinitely many number sequences with this property (see Proposition~\ref{path}). Along a slightly different line, we can also ask a similar question for cyclic number sequences $a_0,a_1,\ldots , a_{n-1},$ with the indices taken modulo $n$ (that is, $a_n=a_0$), by modifying the problem into finding an even number of segments $I_1,\ldots, I_{2k}$ subject to the same requirements, where $k\ge 1$. Our answer to the modified question is rather negative. Indeed, we show that $k=1$ is optimal for any cyclic number sequence (see Theorem~\ref{thm:cycle}). In fact the above problems are related to safe set problems on weighted graphs. We use \cite{cl} for graph terminology and notation not defined here. Only finite, simple (undirected) graphs are considered. For a graph $G$, let $\delta(G)$ be the minimum degree of $G,$ and let $G[S]$ denote the subgraph of $G$ induced by a subset $S \subset V(G).$ We often identify, with a slight abuse of terminology and notation, subsets of the vertex set with the subgraphs they induce. In particular, a (connected) component is sometimes treated as a subset of the vertex set. We let $k(G)$ denote the number of components of $G$. When $A$ and $B$ are disjoint subsets of $V(G)$, the set of edges joining a vertex of $A$ to a vertex of $B$ is denoted by $E_G(A,B)$. If there is no confusion, we often denote this set by $E(A,B)$. If $E(A,B)\neq\emptyset$, then $A$ and $B$ are said to be \textit{adjacent}. A \textit{weight function} $w$ on $V(G)$ is a mapping associating each vertex of $G$ with a positive real number. Let $\mathcal{W}(G)$ be the set of all weight functions on $V(G)$. For $w\in \mathcal{W}(G)$, we refer to $(G,w)$ as a \textit{weighted graph}. For every subset $X$ of $V(G)$, let $w(X)=\sum_{v \in X} w(v);$ here we also allow ourselves to use the notation $w(G[X])$ for $w(X)$. The notion of a safe set was introduced by Fujita et al.\ \cite{SGS-safe-set} as a variation of facility location problems.
Bapat et al.\ \cite{BFLMMST-weighted-sf} extended it to weighted graphs. Assume that $(G,w)$ is a weighted graph where $G$ is connected. A non-empty subset $S \subset V(G)$ is a {\it weighted safe set} of $(G,w)$ if for every component $C$ of $G[S]$ and every component $D$ of $G-S$, we have $w(C) \geq w(D)$ whenever $E(C,D) \neq \emptyset$. The \textit{weighted safe number} of $(G,w)$ is the minimum weight $w(S)$ among all weighted safe sets of $(G,w)$, that is, $$\s(G,w)=\min\{ w(S) \mid S \text{ is a weighted safe set of }(G,w)\}.$$ If $S$ is a weighted safe set of $(G,w)$ and $w(S)=\s(G,w)$, then $S$ is called a {\it minimum weighted safe set\/}. Restricting to connected safe sets, if $S$ is a weighted safe set of $(G,w)$ and $G[S]$ is connected, then $S$ is called a \textit{connected weighted safe set} of $(G,w)$. The \textit{connected weighted safe number} of $(G,w)$ is defined by $$\cs(G,w)=\min\{ w(S) \mid S \text{ is a connected weighted safe set of }(G,w)\},$$ and a {\it minimum connected weighted safe set\/} is a connected weighted safe set $S$ of $(G,w)$ such that $w(S)=\cs(G,w)$. Throughout the paper, we will often omit `weighted', and simply speak of a safe set or a connected safe set. For a disconnected graph $G$, we can define the notion of a (connected) safe set naturally by considering a (connected) safe set of each component. So, we always assume that every graph in this paper is connected unless otherwise specified. Recently, problems on safe sets in graphs have been extensively studied, especially to investigate the algorithmic aspects. Fujita et al.\ \cite{SGS-safe-set} showed that computing the connected safe number in an unweighted graph (i.e., $(G,w)$ with a constant weight function $w$) is NP-hard in general, whereas they constructed a linear time algorithm for computing the connected safe number in unweighted trees. \'Agueda et al.\ \cite{ag} gave an efficient algorithm for computing the safe number for unweighted trees. Somewhat surprisingly, Bapat et al.\ \cite{BFLMMST-weighted-sf} showed that computing the connected weighted safe number in a tree is NP-hard even if the underlying tree is restricted to be a star, whereas they gave an efficient algorithm computing the safe number for a weighted path. More recently, Ehard and Rautenbach \cite{ED-tree} provided a polynomial-time approximation scheme (PTAS) for the connected safe number of a weighted tree. In this paper we focus on weighted graphs $(G,w)$ with $\s(G,w) = \cs(G,w)$. Recall that, for any weighted graph $(G,w)$, we have $\s(G,w) \le \cs(G,w) < 2\s(G,w)$, where the first inequality is from the definitions and the second inequality is obtained from the same method as in Proposition 2 of \cite{SGS-safe-set}. Intuitively, as in the facility location problem, if $(G,w)$ contains a minimum connected weighted safe set $S$ with $w(S)=\s(G,w)$, then one might feel that $G[S]$ plays a central role in the graph. To see that this is reasonable, consider a weighted graph $(G,w)$. When we regard $(G,w)$ as a kind of network, $G[S]$ has a majority weight compared with other components in $G-S$ and the internal structure of $G[S]$ is rather stable because it is connected; thus, we can regard $G[S]$ as a core part of the network in some sense. From this viewpoint, if $G$ is a graph such that $\s(G,w) = \cs(G,w)$ holds for a weight $w\in \mathcal{W}(G)$, then we would choose a minimum safe set $S$ which induces a connected graph, for efficiency and stability.
Consequently we would like to investigate which kind of weighted graphs $(G,w)$ satisfy $\s(G,w) = \cs(G,w)$. We thus propose the following problems. \begin{problem}\label{problem1:weights} Given a graph $G$, determine the set $\W(G)$ of weights $w$ such that $\s(G,w)=\cs(G,w)$. \end{problem} \begin{problem}\label{problem2:graphs} Determine the family $\G$ of graphs $G$ for which $\s(G,w)=\cs(G,w)$ for every $w\in \mathcal{W}(G)$. \end{problem} If $G$ is a complete graph, then $\W(G)=\mathcal{W}(G)$ and hence $G\in \G$. If $G$ is a path, then it is shown in \cite{SGS-safe-set} that any constant function belongs to $\W(G)$. Regarding Problem~\ref{problem2:graphs}, in addition to every complete graph being in $\G,$ it is not difficult to check that any star graph is in $\G$; indeed, we can even show the following. \begin{prop}\label{n-1:vertex} If $\Delta(G)=|V(G)|-1$, then $G\in \G$. \end{prop} \begin{proof}[Proof of Proposition~\ref{n-1:vertex}] Let $v$ be a vertex of degree $|V(G)|-1$ in $G$. Let $w$ be a weight function on $V(G)$, and $S$ be a minimum safe set of $(G,w)$. Suppose that $G[S]$ is not connected. Then $v\not\in S,$ hence $G-S$ contains $v$ and so is connected. Let $D$ be a component of $G[S]$ with the smallest weight, that is, \[ w(D) =\min \{ w(C) \mid C\text{ is a component of }G[S] \}.\] Let $S'=V(G)-D,$ then $G[S']$ is connected, since $v\in S'$. Since $S'$ includes a component of $G[S]$ whose weight is not less than $w(D)$, it follows that $w(D) \le w(S')$. Thus $S'$ is a connected safe set of $(G,w)$. Moreover, since $w(D) \ge w(G-S)$, $w(S')\le w(S)$. Thus $w(S')= w(S)$. Hence, $\s(G,w)=\cs(G,w)$. \end{proof} In view of Proposition~\ref{n-1:vertex}, one might ask if there exists a graph $G$ with a low maximum degree such that $G\in \G$. As we will observe later, this is not the case for paths. In the following we show that such graphs do exist. Namely, we prove the following theorem. \begin{thm}\label{thm:cycle} Any cycle belongs to $\G$. \end{thm} In fact, cycles are the unique non-complete graphs for which a minimum safe set always contains at least half the total weight of the graph. Through the study of the weighted safe set problem for cycles, we found several equivalent conditions for a graph to be a cycle or a complete graph. In particular, one of them sheds light on the study of subgraph component polynomials. Inspired by the study of {the} community structure in connection networks, Tittmann et al.\ \cite{TAM} introduced a new type of graph polynomial. For a graph $G$ and two positive integers $i$ and $j$, let \[q_{i,j}(G)=| \{X \subset V(G) \mid |X|=i \text{ and }k(G[X])=j \}|.\] The \textit{subgraph component polynomial} $Q(G;x,y)$ of $G$ is the polynomial in two variables $x$ and $y$ such that the coefficient of $x^i y^j$ is $q_{i,j}(G)$. From the definition, it is easy to check that $q_{1,1}(G)=|V(G)|$, $q_{2,1}(G)=|E(G)|$, and $q_{2,2}(G)={n\choose 2}-|E(G)|$. {In addition, $q_{1,1}(G)=q_{n-1,1}(G)$ is equivalent to the statement that $G$ is 2-connected.} Our result is the following. \begin{thm}\label{prop:ratio} Let $G$ be a connected graph with $n$ vertices, where $n\ge 5$. The following are equivalent: \begin{itemize} \item[\rm(i)] $G$ is either a complete graph or a cycle; \item[\rm(ii)] $\s(G,w)\ge \frac{w(G)}{2}$ for every $w\in \mathcal{W}(G)$; \item[\rm(iii)] $G-\{u,v\}$ is disconnected for any two nonadjacent vertices $u$ and $v$; \item[\rm(iv)] $q_{1,1}(G)=q_{n-1,1}(G)$ and $q_{2,1}(G)=q_{n-2,1}(G)$. 
\end{itemize} \end{thm} Tittmann et al.\ gave a few examples of graphs and graph families that are determined by $Q(G;x,y)$ and, in view of the importance of the study of the communication structure in networks, they proposed as an open problem to find more classes of graphs that are determined by $Q(G;x,y)$ (see Problem 35 in \cite{TAM}). Our result contributes to a solution of their open problem; it also suggests a deep relationship between safe numbers and subgraph component polynomials. Consequently, we believe that Problem~\ref{problem2:graphs} is also important in the study of communication structure in networks. As an initial step to approach this challenging problem, we consider some basic properties of $\G$ from a variety of viewpoints. For a vertex $v$ of degree two in $G$ which is not on a triangle, \textit{suppression} of $v$ is the operation of removing $v$ and adding an edge between the two neighbors of $v$. This is the reverse operation of \textit{subdivision}; the subdivision of an edge $e=xy$ yields a new graph containing one new vertex $v,$ and having an edge set in which $e$ is replaced by two new edges $xv$ and $vy$. We have the following result. \begin{thm}\label{prop:subdivision} The family $\G$ is closed under suppression. \end{thm} In contrast to the family $\G$ of graphs, one might consider another family of graphs that is in a certain sense very far from $\G.$ A family $\mathcal{G}$ of graphs is \textit{safe-finite} if $f(\mathcal{G}) < \infty,$ where \begin{center} $f(\mathcal{G}) = \max_{G\in \mathcal{G}, w\in \mathcal{W}(G) } \min \{k(G[S])\mid S \text{ is a minimum weighted safe set of } (G,w)\}.$ \end{center} Otherwise, $\mathcal{G}$ is \textit{safe-infinite}. Obviously any finite family $\mathcal{G}$ of graphs is safe-finite. The function $f$ is a mapping from a safe-finite family of graphs to a positive integer and, in particular, $\G$ is the maximal family $\mathcal{G}$ of graphs such that $f(\mathcal{G})=1.$ It would seem an interesting problem to discover what kind of families of graphs are safe-(in)finite. Considering the set of all complete bipartite graphs with a constant weight on the vertices, we observe that there exists a safe-infinite family of graphs. To present another example, we provide the following theorem. \begin{thm}\label{oddpath} The family of paths with an odd number of vertices is safe-infinite. \end{thm} This paper is organized as follows. In Section~\ref{sec:path}, we give a construction of weighted paths $(P,w)$ such that every minimum safe set has at least $N$ components, for an arbitrarily chosen positive integer $N$, thereby proving Theorem~\ref{oddpath}. We also prove Theorem~\ref{prop:subdivision} in this section. We prove Theorems~\ref{thm:cycle} and~\ref{prop:ratio} in Sections~\ref{sec:cycle} and~\ref{new}, respectively. We give some remarks on Theorems~\ref{thm:cycle} and~\ref{prop:ratio} in Section~\ref{sec:open}. \section{Weighted safe sets of paths}\label{sec:path} Subsection~\ref{subsec:path} gives the construction of a weighted path $(P,w)$ with an odd number of vertices such that any minimum safe set has exactly $\lfloor|V(P)|/2\rfloor$ components. This implies Theorem~\ref{oddpath}. Further, in Subsection~\ref{subsec:subdivision}, we use subdivisions to construct, for any path $P$, weight functions $w$ with $w\not\in \W(P)$.
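The constructions in this section involve only small, explicitly given weighted paths, so the claims below about particular examples can also be confirmed by exhaustive search. The following sketch (a minimal Python illustration, not part of the proofs; the encoding of a weighted path as the list of its vertex weights and the helper names are our own) enumerates all vertex subsets of a weighted path and returns the weighted safe number, the connected weighted safe number, and the least number of components of a minimum weighted safe set.
{\small
\begin{verbatim}
from itertools import combinations

def components_of(n, subset):
    # maximal runs of consecutive indices of 0..n-1 lying in `subset`;
    # on a path these are exactly the components induced by `subset`
    comps, current = [], []
    for v in range(n):
        if v in subset:
            current.append(v)
        elif current:
            comps.append(current)
            current = []
    if current:
        comps.append(current)
    return comps

def is_safe_set(weights, S):
    # weighted safe set condition on the path v_1 - v_2 - ... - v_n
    n = len(weights)
    inside = components_of(n, S)
    outside = components_of(n, set(range(n)) - S)
    for C in inside:
        for D in outside:
            adjacent = min(abs(c - d) for c in C for d in D) == 1
            wC = sum(weights[v] for v in C)
            wD = sum(weights[v] for v in D)
            if adjacent and wC < wD:
                return False
    return True

def safe_numbers(weights):
    # brute force over all non-empty subsets; exponential in n,
    # intended only for small examples
    n = len(weights)
    records = []  # (weight of S, number of components of S) per safe set S
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            S = set(S)
            if is_safe_set(weights, S):
                records.append((sum(weights[v] for v in S),
                                len(components_of(n, S))))
    s = min(w for w, _ in records)
    cs = min(w for w, k in records if k == 1)
    min_k = min(k for w, k in records if w == s)
    return s, cs, min_k
\end{verbatim}
}
For example, for the weighted path of Figure~\ref{fig:path} below, \texttt{safe\_numbers([5, 5, 3, 6, 6, 12, 12])} reports a weighted safe number of $23$ and that no minimum safe set has fewer than three components, in accordance with Proposition~\ref{path}.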
\subsection{Construction of weight functions on a path of odd order}\label{subsec:path} Throughout the subsection, let $n\ge 2$ be an integer, $P: v_1v_2\ldots v_{2n+1}$ be a path with $2n+1$ vertices. Fix two positive real numbers $a$ and $b$ so that \begin{eqnarray}\label{condition:ab} 2b>3a\qquad\text{ and }\qquad 2a>b>a. \end{eqnarray} We define a weight function $w$ on $V(P)$ by (See Figure~\ref{fig:path}.) \begin{eqnarray}\label{def:weight} w(v_j)=\begin{cases} b & \text{ if }j\in\{1,2\} \\ 2^{i-1}a &\text{ if }j\ge 3\text{ and }j\in\{2i,2i+1\}\text{ for some }i\in\{1, \ldots,n\}.\\ \end{cases}\end{eqnarray} \begin{figure}[h!] \centering \begin{tikzpicture} \path (0,0) coordinate (1) (2,0) coordinate (2) (4,0) coordinate (3) (6,0) coordinate (4) (8,0) coordinate (5) (10,0) coordinate (6) (12,0) coordinate (7); \fill (1) node[above]{\footnotesize$w(v_1)=5$} (2) node[above]{\footnotesize$w(v_2)=5$} (3) node[above]{\footnotesize$w(v_3)=3$} (4) node[above]{\footnotesize$w(v_4)=6$} (5) node[above]{\footnotesize$w(v_5)=6$} (6) node[above]{\footnotesize$w(v_6)=12$} (7) node[above]{\footnotesize$w(v_7)=12$}; \fill (1) node[below]{\small$v_1$} (2) node[below]{\small$v_2$} (3) node[below]{\small$v_3$} (4) node[below]{\small$v_4$} (5) node[below]{\small$v_5$} (6) node[below]{\small$v_6$} (7) node[below]{\small$v_7$}; \path (1) edge (2); \path (2) edge (3); \path (3) edge (4); \path (4) edge (5); \path (5) edge (6); \path (6) edge (7); \fill (1) circle (2pt) (2) circle (2pt) (3) circle (2pt) (4) circle (2pt) (5) circle (2pt) (6) circle (2pt) (7) circle (2pt); \end{tikzpicture} \caption{$(P,w)$ when $n=3$ and $a=3,b=5.$}\label{fig:path} \end{figure} We let $S=\{v_2,v_4,\ldots, v_{2n}\}$. Since $w(v_{1})=w(v_{2}) >w(v_{3})$ and $w(v_{2i-1})<w(v_{2i})=w(v_{2i+1})$ for all $i\in\{2, \ldots,n\}$, it is easy to see that $S$ is a safe set of $(P,w)$. Actually, $S$ is the unique minimum safe set. \begin{prop}\label{path} Let $(P,w )$ be a weighted path with a weight function defined as in \eqref{def:weight}. Then $S=\{v_2,v_4,\ldots, v_{2n}\}$ is the unique minimum safe set of $(P,w )$. \end{prop} \begin{proof}[Proof of Proposition~\ref{path}] Take a minimum safe set $X$ of $(P,w )$. Since $S$ is a safe set and $X$ is a minimum safe set of $(P,w)$, together with \eqref{condition:ab}, \begin{eqnarray} \s(P,w)= w(X) \le w(S)=\sum_{i=1}^{n}w(v_{2i}) = b+ \sum_{i=2}^{n} 2^{i-1}a = 2^na-2a+b< 2^{n}a. \label{last_two_2-0} \end{eqnarray} \begin{clm}\label{p2} $|X\cap\{ v_{2i}, v_{2i+1} \}|=1$ for all $i\in \{2, \ldots,n\}$. \end{clm} \begin{proof}[Proof of Claim~\ref{p2}] We apply induction on $n-i \geq 0,$ where $n$ is fixed. Since $w(\{v_{2n}, v_{2n+1}\})=2^na$, if $v_{2n}$ and $v_{2n+1}$ are in a same component of either $P[X]$ or $P-X$, then this component has weight at least $2^na$, contradicting \eqref{last_two_2-0}. Hence, $|X\cap\{ v_{2n}, v_{2n+1} \}|=1$. Assume $|X\cap\{ v_{2i'}, v_{2i'+1} \}|=1$ for all $i'>i$ ($2\le i\le n-1$). Then \[w(X\cap\{ v_{2i+2},v_{2i+3}, \ldots,v_{2n+1}\}) = \sum_{i'=i}^{n-1} 2^{i'}a = a(2^{i} +2^{i+1}+\cdots+ 2^{n-2} + 2^{n-1}) = a(2^n -2^i).\] If $X\cap\{ v_{2i}, v_{2i+1} \}= \{ v_{2i}, v_{2i+1} \}$, then $w(X)\ge 2^n a -2^ia +w(\{v_{2i}, v_{2i+1}\})=2^na$, contradicting \eqref{last_two_2-0}. Suppose $X\cap\{ v_{2i}, v_{2i+1} \}= \emptyset,$ and let $D$ be the component of $P-X$ that contains $v_{2i}$ and $v_{2i+1}.$ Then $w(D)\ge 2^i a$. 
If there is a component $C'$ of $P[X]$ adjacent to $D$ such that $C'\subset \{v_1,v_2,\ldots,v_{2i-1}\},$ then \[w(X)\ge 2^n a -2^ia +w(C') \ge 2^n a -2^ia +w(D)\ge 2^n a,\] a contradiction to \eqref{last_two_2-0}. Suppose that there is no component of $P[X]$ contained in $\{v_1,v_2,\ldots,v_{2i-1}\}$. Then $D \supset \{v_1,v_2,\ldots,v_{2i+1}\},$ and there is a unique component $C$ of $P[X]$ which is adjacent to $D$. By the induction hypothesis, $C=\{v_{2i+2}\}$ or $C=\{v_{2i+3}\}$ or $C=\{v_{2i+3},v_{2i+4}\}$. If $C=\{v_{2i+2}\}$ or $C=\{v_{2i+3}\}$, then, together with~\eqref{condition:ab}, \[ w(D) \ge \sum_{i'=1}^{2i+1} w(v_{i'}) = 2^{i+1}a-3a+2b > 2^{i}a=w(C),\] a contradiction to the definition of a safe set. Similarly, if $C=\{v_{2i+3},v_{2i+4}\}$, then $D= \{v_1,v_2,\ldots,v_{2i+2}\},$ and so \[ w(D) = \sum_{i'=1}^{2i+2} w(v_{i'}) = 2^{i+1}a-3a+2b +2^ia > 2^{i}a+2^{i+1}a\ge w(C),\] a contradiction to the definition of a safe set. Hence $|X\cap\{ v_{2i}, v_{2i+1} \}|=1.$ \end{proof} By Claim~\ref{p2}, $w(X\cap\{v_4,v_5,\ldots,v_{2n+1}\}) =2^na-2a$. Together with \eqref{last_two_2-0}, it follows that $|X\cap\{v_1,v_2,v_3\}|\le 1$. Furthermore, the following holds. \begin{clm}\label{p3} $X\cap\{v_1,v_2,v_3,v_4,v_5\}=\{v_2,v_4\}$. \end{clm} \begin{proof}[Proof of Claim~\ref{p3}] Suppose $v_4\not\in X,$ and let $D$ be the component of $P-X$ containing $v_4$. If $X\cap\{v_1,v_2,v_3\}\neq \emptyset$, then $X\cap\{v_1,v_2,v_3\}=\{v_i\}$ is a component of $P[X]$, which is a contradiction since $w(D)\ge w(v_4) > w(v_i)$. Thus $X\cap\{v_1,v_2,v_3\}= \emptyset$ and so $D=\{v_1,v_2,v_3,v_4\}$. Note that by Claim~\ref{p2}, $v_5\in X$, and the component of $P[X]$ containing $v_5$ is either $\{v_5\}$ or $\{v_5,v_6\}$. However, by \eqref{condition:ab}, $w(\{v_5,v_6\}) \le 2a+4a < 3a+2b =w(D)$. Hence $v_4\in X$, and by Claim~\ref{p2}, $v_5\not\in X$. By the fact that $|X\cap\{v_1,v_2,v_3\}|\le 1$, it remains to show $v_2\in X$. Suppose not, and let $C$ be the component of $P[X]$ containing $v_4$. Then $C$ is either $\{v_3,v_4\}$ or $\{v_4\}$. If $C= \{v_3,v_4\}$, then $w(v_3)+w(v_4)\ge w(v_1)+w(v_2)$ by the definition of a safe set, equivalently, $a+2a\ge b+b$, a contradiction to \eqref{condition:ab}. If $C=\{v_4\}$, then $w(v_4)\ge w(v_2)+w(v_3)$ by the definition of a safe set, equivalently, $2a\ge b+a$, a contradiction to \eqref{condition:ab}. Hence, $v_2\in X$. Consequently, the claim holds. \end{proof} Using induction on $i$ we will show for each $i\in \{1,2, \ldots,n\}$ that $\{v_{2i}\}$ is a component of $P[X].$ By Claim~\ref{p3}, each of $\{v_2\}$ and $\{v_4\}$ is a component of $P[X],$ so assume $i \ge 3.$ By the induction hypothesis, $\{v_{2i-2}\}$ is a component of $P[X]$. If $v_{2i}\not\in X$ and $v_{2i+1}\in X,$ then $\{v_{2i-1},v_{2i}\}$ is a component of $P-X$ whose weight is greater than $w(v_{2i-2}).$ But this is a contradiction to the definition of a safe set. Hence $v_{2i}\in X$ and $v_{2i+1}\not\in X$ follow by Claim~\ref{p2}, and the proposition holds. \end{proof} At the end of this subsection, we will show that, if the weights of a path $P_n$ with $n$ vertices are bounded between two fixed positive constants, then both the safe number and the connected safe number, divided by the total weight of $P_n$, tend to $\frac{1}{3}$ as $n$ goes to $\infty$. \begin{lem}\label{path-k-2k+1} Let $(P,w)$ be a weighted path, and $S$ be a safe set of $(P,w)$. Then $\frac{k}{2k+1}\leq\frac{w(S)}{w(P)}$, where $k$ is the number of components of $P[S]$.
\end{lem} \begin{proof}Let $S_1,\ldots,S_k$ be the components of $P[S]$, and $P-S=D_1\cup \cdots \cup D_{k+1}$ where the left (resp. right) neighbor of $S_i$ is $D_i$ (resp. $D_{i+1}$) for all $i \in \{1,\ldots,k\}$. Note that for every $i\not\in\{1,k+1\}$, $D_i$ is a component of $P-S$ and each of $D_1$ and $D_{k+1}$ is either a component of $P-S$ or empty. Take an integer $r\in \{1,\ldots,k\}$ such that $w(S_r)=\min\{w(S_i) : i=1,\ldots,k\}$. By the definition of a safe set, for each $i\in\{1,\ldots,r\}$, $w(S_i)\geq w(D_i)$ and for each $i\in\{r,\ldots,k\}$, $w(S_{i})\ge w(D_{i+1})$. Then \[w(S)+w(S_r)=\sum_{i=1}^{r}w(S_i) +\sum_{i=r}^{k}w(S_i) \ge \sum_{i=1}^{r} w(D_i) +\sum_{i=r}^{k}w(D_{i+1}) = w(P)-w(S),\] and hence $2w(S)+w(S_r)\ge w(P)$. Since $w(S_r)\le \frac{w(S)}{k}$, we have $\frac{k}{2k+1} \leq \frac{w(S)}{w(P)}$. \end{proof} \begin{prop} Let $a$ and $b$ be real numbers with $0<a < b$. Let $\{ (P_n,w_n) \}_{n=1}^{\infty}$ be a sequence of weighted paths $(P_n,w_n)$ where $P_n$ is a path with $n$ vertices and $a \le w_n(v)\le b$ for every vertex $v\in V(P_n)$. Then $\lim_{n \to \infty}\frac{\s(P_n,w_n)}{w_n(P_n)}=\lim_{n \to \infty}\frac{\cs(P_n,w_n)}{w_n(P_n)}=\frac{1}{3}.$ \end{prop} \begin{proof} Let $n$ be any positive integer and write $w=w_n$ for brevity. We can find a subpath $L_n$ of $P_n$ starting from one pendent vertex of $P_n$ such that $\frac{1}{3}w(P_n) - b \leq w(L_n) \leq \frac{1}{3} w(P_n)$ holds. By symmetry, we can also find a subpath $R_n$ of $P_n$ starting from the other pendent vertex such that $\frac{1}{3}w(P_n) - b \leq w(R_n) \leq \frac{1}{3} w(P_n)$ holds. Then $P_n-(L_n\cup R_n)$ is a connected safe set of $(P_n,w_n)$ and so $\cs(P_n,w_n) \leq \frac{1}{3} w(P_n) +2b$. Hence $\frac{\cs(P_n,w_n)}{w_n(P_n)} \leq \frac{1}{3} + \frac{2b}{w(P_n)} \leq \frac{1}{3} + \frac{2b}{an}$. Together with Lemma~\ref{path-k-2k+1}, we have \[ \frac{1}{3} \leq \frac{\s(P_n,w_n)}{w_n(P_n)} \leq \frac{\cs(P_n,w_n)}{w_n(P_n)}\leq \frac{1}{3} + \frac{2b}{an}.\] Since $\frac{1}{3} + \frac{2b}{an}\to \frac{1}{3}$ as $n\to \infty$, this completes the proof. \end{proof}} \subsection{Graph suppression and $\G$}\label{subsec:subdivision} \begin{proof}[Proof of Theorem~\ref{prop:subdivision}] Let $G$ be a graph obtained by suppression of a vertex $v^*$ from a graph $G^*$. Let $x$ and $y$ be the neighbors of $v^*$ in $G^*$. It is sufficient to show that if $G^*\in \G,$ then $G\in \G$. Assume $G^*\in \G$ and suppose $G\not\in\G$. Then there is a weight function $w$ on $V(G)$ such that $\s(G,w)<\cs(G,w)$. Let $\epsilon$ be a {real number} such that $0< \epsilon<\frac{\min\{\alpha,\beta\}}{2}$, where \begin{eqnarray*} &&\alpha= \min\{ \cs(G,w)-\s(G,w), w(x), w(y)\}, \\ &&\beta= \min\{\, w(D)-w(C) \,:\, T\text{ is a non-safe set of }(G,w),\ C\text{ is a component of }G[T],\ D\text{ is a component of }G-T,\ w(D)-w(C)>0 \,\}. \end{eqnarray*} We define a weight function $w^*$ on $V(G^*)$ by $w^*(v^*)= \epsilon$, $w^*(x)=w(x)-\epsilon$ and $w^*({u})=w({u})$ for every ${u}\in V(G^*)\setminus \{x,v^*\}$ (see Figure~\ref{fig:proof:subdivision}).
\begin{figure}[h!]\centering \begin{tikzpicture}[thick] \path (0,0) coordinate (x) (2,0) coordinate (y) (6.5,0) coordinate (v^*) (5,0) coordinate (x') (8,0) coordinate (y'); \fill (1,-0.7) node {\footnotesize$G$}; \fill (6.5,-0.7) node {\footnotesize$G^*$}; \fill (x) node[below]{\small$x$} (y) node[below]{\small$y$} (x') node[below]{\small$x$} (y') node[below]{\small$y$} (v^*) node[below]{\small$v^*$}; \fill (x)+(0.2,0) node[above]{\small$w(x)$} (y)+(-0.2,0) node[above]{\small$w(y)$} (x')+(0.4,0) node[above]{\small$w(x)-\epsilon$} (y')+(-0.2,0) node[above]{\small$w(y)$} (v^*) node[above]{\small$\epsilon$}; \path (x) edge (y); \path (v^*) edge (x') edge (y'); \path (y) edge (y)+(0.4,-0.4) edge (y)+(0.4,0) edge (y)+(0.4,0.4) edge (y); \path (y') edge (y')+(0.4,-0.4) edge (y')+(0.4,0) edge (y')+(0.4,0.4) edge (y'); \path (x) edge (x)+(-0.4,-0.4) edge (x)+(-0.4,0) edge (x)+(-0.4,0.4) edge (x); \path (x') edge (x')+(-0.4,-0.4) edge (x')+(-0.4,0) edge (x')+(-0.4,0.4) edge (x'); \fill (x) circle (2pt) (y) circle (2pt) (v^*) circle (2pt) (x') circle (2pt) (y') circle (2pt); \end{tikzpicture} \caption{An illustration for the proof of Theorem~\ref{prop:subdivision}}\label{fig:proof:subdivision} \end{figure} Since $G^*\in \G,$ there is a connected safe set $S^*$ of $G^*$ such that $w^*(S^*)=\s(G^*,w^*)$. For any subset $X$ of $V(G)$, we define a subset $\tilde{X}$ of $V(G^*)$ by $$\tilde{X}=\left\{\begin{array}{ll}X\cup\{v^*\}&\text{if }x\in X,\\ X&\text{otherwise.}\end{array}\right.$$ Then $w^*(\tilde{X})=w(X)$ by definition. \begin{clm} $\s(G^*,w^*)\le \s(G,w)$. \label{claim1} \end{clm} \begin{proof}[Proof of Claim~\ref{claim1}] Let $U$ be a minimum safe set of $(G,w)$, that is, $w(U)=\s(G,w)$. Then $w^*(\tilde{U})=w(U)$. Note that the adjacency between the components of $G-U$ and $G[U]$ is the same as the adjacency between the components of $G^*-\tilde{U}$ and $\tilde{U}$. Thus, $\tilde{U}$ is a safe set for $(G^*,w^*),$ and $\s(G^*,w^*) \le w^*(\tilde{U})=w(U) =\s(G,w)$. \end{proof} Let $T=S^*\setminus\{v^*\}$. If $T$ is a connected safe set of $(G,w)$, then $\cs(G,w) \le w(T)$, and so by Claim~\ref{claim1}, \[ \s(G^*,w^*)\le \s(G,w) < \cs(G,w) \le w(T)\le w^*(S^*)+\epsilon =\s(G^*,w^*)+\epsilon,\] and so $\cs(G,w)-\s(G,w) {<} \epsilon<\alpha$, a contradiction to the definition of $\epsilon$. Thus $T$ is not a connected safe set of $(G,w)$. Since $G[T]$ is connected, $T$ is not a safe set of $(G,w)$. Then there is a component $D$ of $G-T$ such that $E_G(D,T)\neq\emptyset$ and $w(D)>w(T)$. We have \begin{eqnarray}\label{eq:sub} &&w^*(\tilde{D}) > w^*(S^*)+\epsilon, \end{eqnarray} since \[ w^*(\tilde{D}) = w(D) \ge w(T) +\beta > w(T) +2\epsilon = w^*(\tilde{T})+2\epsilon \ge w^*(S^*)+\epsilon,\] where the first inequality is from the definition of $\epsilon$ and the last inequality is from \begin{eqnarray}\label{claim4} &&w^*(\tilde{T})+\epsilon \ge w^*(S^*). \end{eqnarray} We note that if $\tilde{T}=S^*$ or $T=S^*$, then \eqref{claim4} holds trivially. If $\tilde{T}\neq S^*$ and $T\neq S^*$, then $v^*\in S^*$ and $x\not\in S^*$, which implies $w^*(\tilde{T})=w^*(T)=w^*(S^*)-\epsilon$, and again \eqref{claim4} holds. Also $D\subset G-T\subset G^*-S^*,$ and $D$ is connected, hence so is $\tilde{D}.$ \begin{clm}\label{claim5} $S^*\cap \{v^*,x,y\}=\{v^*,y\}$ and $D\cap\{x,y\}=\{x\}$. \end{clm} \begin{proof}[Proof of Claim~\ref{claim5}] By \eqref{eq:sub} and the fact that $S^*$ is a connected safe set of $(G^*,w^*)$, $\tilde{D}$ cannot be contained in a component of $G^*-S^*$. 
Since $\tilde{D}$ is a connected subgraph of $G^*$, $\tilde{D}$ is not a subgraph of $G^*-S^*$. Since $D\subset G-T\subset G^*-S^*$, it follows that $v^*\in \tilde{D}$ and $v^*\not\in G^*-S^*$. Thus {$v^*\in S^*$, which implies that $x\in D$.} Then since $x\in D \subset G^*-S^*$, we have $x\not\in S^*$. Since $G^*[S^*]$ is connected and $S^*\neq \{v^*\}$ by the definition of $\epsilon$, {it follows that $y\in S^*$}. Thus $S^*\cap \{v^*,x,y\}=\{v^*,y\}$. From $ D \subset G^*-S^* $, we deduce $D\cap\{x,y\}=\{x\}$. \end{proof} {Since $D$ is a connected subgraph of $G^*-S^*$, Claim~\ref{claim5} implies that} $w^*(D) = w^*(\tilde{D})-\epsilon$. Together with~\eqref{eq:sub}, $w^*(D) = w^*(\tilde{D})-\epsilon > w^*(S^*)$, which contradicts the fact that $S^*$ is a connected safe set of $(G^*,w^*)$. \end{proof} When $w$ is a weight function on $P_{2n+1}$ defined as in Subsection~\ref{subsec:path} for some $2n+1<m$, then, since $P_m$ is a subdivision of $P_{2n+1}$, by repeatedly defining $w^*$ as in the proof of Theorem~\ref{prop:subdivision} we can obtain infinitely many weight functions on $P_m$ that do not belong to $\W(P_m)$. \section{Proof of Theorem~\ref{thm:cycle}}\label{sec:cycle} \begin{proof}[Proof of Theorem~\ref{thm:cycle}] Suppose that there is a cycle $C$ not in $\G$. We take such a $C$ with the shortest length and a weight function $w$ on $V(C)$ such that $\s(C,w)<\cs(C,w)$. {Note that $C$ has at least four vertices.} Let $S$ be a minimum safe set of $(C,w)$. Let $X_1$, $X_3$, \ldots, $X_{2m-1}$ be the $m$ components of $C[S]$ and $X_0,X_2,\ldots,X_{2m-2}$ be the $m$ components of $C-S,$ where the indices are considered as elements of $\Z_{2m}.$ We assume $E(X_i,X_{i+1})\neq\emptyset$ for each $i\in\Z_{2m}.$ \begin{clm}\label{size:one} For each $i\in\Z_{2m}$, $|X_i|=1$. \end{clm} \begin{proof}[Proof of Claim~\ref{size:one}] Note that $m\ge 2$, since otherwise $S$ would be a connected safe set and hence $\s(C,w)=\cs(C,w)$, a contradiction. Let $C^*$ be the graph such that $V(C^*)=\{X_0,X_1,\ldots,X_{2m-1}\}$ and $E(C^*)= \{X_iX_{i+1}\mid i\in \Z_{2m} \}$. Then we define a weight function $w^*$ on $V(C^*)$ by $w^*(X_i)= w(X_i)$ for each $i\in \Z_{2m}$. Suppose that $|X_i|\ge 2$ for some $i$. Then $C^*$ is a cycle shorter than $C,$ so $C^*\in \G$ follows by the minimality of $C,$ and hence $\s(C^*,w^*)=\cs(C^*,w^*).$ Since $S^*=\{X_1,X_3,\ldots,X_{2m-1}\}$ is a safe set of $(C^*,w^*)$, there is a connected safe set $S_0^*$ of $(C^*,w^*)$ whose weight is at most $w^*(S^*)$. Then $S_0=\cup_{X\in S_0^*} X$ is a connected safe set of $(C,w)$ and $w(S_0)=w^*(S_0^*)\le w^*(S^*)=w(S)$, which is a contradiction to $\s(C,w)<\cs(C,w)$. \end{proof} By Claim~\ref{size:one}, we can assume $X_i=\{u_i\}$ for each $i\in \Z_{2m},$ so that $S=\{u_1,u_3,\ldots,u_{2m-1}\}$. For simplicity, we let $V=V(C)$. Let $\min(w)=\min\{w(u) ~|~ u \in V\}$ and $\max(w)=\max\{w(u) ~|~ u \in V\}$. Then \begin{eqnarray*} &&\max(w) = \max\{w(u) ~|~ u \in S\}\quad \text{ and }\quad \min(w) = \min\{w(u) ~|~ u \in V \setminus S \}. \end{eqnarray*} Without loss of generality, we may assume $w(u_0)=\min(w)$. Let $k$ be a nonnegative integer such that $k<m$ and $w(u_{2k+1})=\max(w)$.
Note that {$w(u_{2i+1})\ge w(u_{2i})$ and $w(u_{2i-1})\ge w(u_{2i})$ for any $i\in\Z_m$ by the definition of a safe set, and so} \begin{eqnarray*} w(S)- w(V \setminus S) =\sum^{m-1}_{i=0} \big( w(u_{2i+1}) - w(u_{2i}) \big) &\geq& \sum^{k}_{i=0} \big( w(u_{2i+1}) - w(u_{2i}) \big)\\&=& w(u_{2k+1}) + \sum^{k}_{i=1} \big( -w(u_{2i}) + w(u_{2i-1}) \big) - w(u_{0})\\&\geq& w(u_{2k+1})- w(u_{0})=\max(w) - \min(w).\end{eqnarray*} Hence, \begin{eqnarray}\label{c2} && w(S) - w(V \setminus S) \ge \max(w) - \min(w). \end{eqnarray} For every $i \in \Z_{2m}$, define $I_i=\{u_i, u_{i+1}, \ldots, u_{i+m-1}\}$. Note that, for every $i \in \Z_{2m}$, at least one of the two sets $I_i$ and $I_{i+m}=V \setminus I_i$ is a safe set of $(C,w)$. Hence $w(I_r) \leq \frac{w(V)}{2} \le w(I_{r+1})$ for some $r \in \Z_{2m}$. Then $w(I_{r+m+1}) \leq \frac{w(V)}{2} \le w(I_{r+m})$. Thus both $I_{r+1}$ and $I_{r+m}$ are safe sets of $(C,w)$, and so $w(I_{r+1})- w(I_{r+m+1})\ge 0$ and $w(I_{r+m})- w(I_{r})\ge 0$. Without loss of generality, we assume \[ w(I_{r+1})-w(I_{r+m+1}) \le w(I_{r+m})-w(I_{r}).\] Since \begin{eqnarray*} 2 (w(I_{r+1})-w(I_{r+m+1})) &\leq& (w(I_{r+1}) - w(I_{r+m+1})) + ( w(I_{r+m}) - w(I_{r}) ) \\ & = & (w(I_{r+1})- w(I_{r}) ) + ( w(I_{r+m})- w(I_{r+m+1}))\\ &= & (w(u_{r+m}) - w(u_{r})) + (w(u_{r+m}) - w(u_{r}))\\ & \le&2(\max(w) - \min(w)), \end{eqnarray*} it follows that \begin{eqnarray}\label{c3} && w(I_{r+1})-w(I_{r+m+1})\le \max(w) - \min(w). \end{eqnarray} Then by \eqref{c2} and \eqref{c3}, \[ 2w(I_{r+1})-w(V) =w(I_{r+1})-w(I_{r+m+1}) \le \max(w) - \min(w) \le w(S) - w(V\setminus S) = 2w(S) - w(V),\] and hence $w(I_{r+1}) \le w(S).$ Since $I_{r+1}$ is a connected safe set of $(C,w)$ and $S$ is a minimum safe set of $(C,w)$, this gives $\cs(C,w)\le w(I_{r+1})\le w(S)=\s(C,w)$, a contradiction to $\s(C,w)<\cs(C,w)$. \end{proof} \section{Proof of Theorem~\ref{prop:ratio}}\label{new} We start with the following lemma: \begin{lem}\label{lem:P3} Let $p$ and $q$ be positive integers, where $p\ge q.$ For a graph $G$, if $\s(G,w)\ge \frac{q}{p+q} w(G)$ for any weight function $w$ on $G$, then $$\frac{k(G[S])}{k(G-S)} = \frac{q}{p}$$ for any $\emptyset \neq S\subsetneq V(G)$ such that $ \frac{k(G[S])}{k(G-S)} \le \frac{q}{p}.$ \end{lem} \begin{proof}[Proof of Lemma~\ref{lem:P3}] Let $S$ be a nonempty proper subset of $V(G)$ such that {$\frac{k(G[S])}{k(G-S)} \le \frac{q}{p}$}. Let $S_1, \ldots, S_t$ be the components of $G[S],$ where $t=k(G[S]),$ and $U_1, \ldots, U_r$ be the components of $G-S,$ where $r=k(G-S),$ so that $\frac{t}{r}\le \frac{q}{p}$. Let $w$ be a weight function on $V(G)$ such that $w(S_i)=w(U_j)>0$ for all $i$ and $j.$ Then $S$ is a safe set of $(G,w)$. Thus $\s(G,w)\le \frac{t}{t+r} w(G) $. If $\frac{t}{r} < \frac{q}{p}$, then \[ \frac{t}{r} < \frac{q}{p} \quad \Leftrightarrow \quad \frac{r}{t} > \frac{p}{q} \quad \Leftrightarrow \quad \frac{t+r}{t}=1+\frac{r}{t} > \frac{p}{q}+1= \frac{p+q}{q} \quad \Leftrightarrow \quad \frac{t}{t+r}< \frac{q}{p+q},\] which implies $\s(G,w)< \frac{q}{p+q}w(G)$, a contradiction. Thus $\frac{t}{r}=\frac{q}{p}$. \end{proof} Now we prove Theorem~\ref{prop:ratio}. \begin{proof}[Proof of Theorem~\ref{prop:ratio}] First we will show that (i), (ii), and (iii) are equivalent. That (i) implies (ii) is immediate from Theorem~\ref{thm:cycle}. By the case $p=q=1$ of Lemma~\ref{lem:P3}, applied to $S=\{u,v\}$ where $u$ and $v$ are not adjacent, it follows that (ii) implies (iii). Suppose that (iii) is true. Take a spanning tree $T$ of $G$ with maximum diameter.
For any two pendent vertices $x$ and $y$ of $T$, if $xy\not\in E(G)$, then $G-\{x,y\}$ is connected, a contradiction to (iii). Thus any two pendent vertices of $T$ are adjacent in $G$. If there are at least three pendent vertices, then we can obtain a spanning tree that has greater diameter than $T$, a contradiction to the choice of $T$. Thus, $T$ is a path and $G$ has a Hamiltonian cycle $C=v_1\cdots v_n$ {as $n\ge 5$}. If $C$ has no chord, then $G$ is a cycle. Suppose that $C$ has a chord. For any chord, say $v_1v_i$, of $C$, if a vertex $x$ in $ \{v_2,\ldots, v_{i-1} \}$ and a vertex $y$ in $ \{v_{i+1},\ldots, v_{n} \}$ are not adjacent in $G$, then $G-\{x,y\}$ is connected, a contradiction to (iii). Thus, for any chord, say $v_1v_i$, of $C$, every vertex $x$ in $ \{v_2,\ldots, v_{i-1} \}$ is adjacent in $G$ to every vertex $y$ in $ \{v_{i+1},\ldots, v_{n} \}$. Each such edge $xy$ is again a chord of $C$. Applying the same argument repeatedly to the chords of $C$, we see that $G$ is a complete graph. Hence, (iii) implies (i). Therefore, (i), (ii), and (iii) are equivalent. It is trivial that (i) implies (iv). It is sufficient to show that (iv) implies (iii). First, we give some definitions. We denote a 2-subset $\{u,v\}$ by $uv$ even when it is not an edge, and in this case we call it a non-edge. For any graph $H$, let $E_c(H)$ {(resp. $E_d(H)$)} be the set of edges $uv$ such that $H-\{u,v\}$ is connected {(resp. disconnected)}, and let $N_c(H)$ {(resp. $N_d(H)$)} be the set of non-edges $uv$ such that $H-\{u,v\}$ is connected {(resp. disconnected)}. Assume that $G$ satisfies (iv). If $G$ has a cut vertex, then $q_{1,1}(G)=n$ and $q_{n-1,1}(G)\le n-1$, a contradiction to the assumption. Thus $G$ is 2-connected. Then from the definitions, \[|E_c(G)|+|E_d(G)|=|E(G)|=q_{2,1}(G),\] and by taking the complement in $V(G)$ of each element of $E_c(G) \cup N_c(G)$, \[|E_c(G)|+|N_c(G)|=q_{n-2,1}(G).\] Then by (iv), \begin{eqnarray}\label{eq:NE} |N_c(G)|=|E_d(G)|. \end{eqnarray} Suppose that we have shown that $E_d(G)=\emptyset.$ Then $N_c(G)=\emptyset$ follows from \eqref{eq:NE}; that is, $ \{u,v\}\in N_d(G)$ holds for any two nonadjacent vertices $u$ and $v,$ and so $G-\{u,v\}$ is disconnected. This implies (iii). {Thus, in the following, we will finish the proof by showing $E_d(G)=\emptyset.$} We will use the following basic property of 2-connected graphs. \begin{itemize} \item[($\sharp$)] If $H$ is 2-connected, then for any component $D$ of $H-\{x,y\}$, each of $D\cup\{x\}$ and $D\cup\{y\}$ induces a connected subgraph of $H$. \end{itemize} If there is a component $D$ of $H-\{x,y\}$ such that $x$ is not adjacent to any vertex of $D$, then $D$ is separated by the vertex $y$, and so $H-y$ is disconnected, a contradiction. Similarly, if there is a component $D$ of $H-\{x,y\}$ which is not adjacent to $y$, then $x$ is a cut vertex of $H$, a contradiction. Thus ($\sharp$) holds. We also add some observations (O1) and (O2) on $E_d(G)$. For an edge $uv\in E_d(G)$, \begin{itemize} \item[\rm (O1)] every component $D$ of $G-\{u,v\}$ satisfies $1\le |D| \le |V(G)|-3$; \item[\rm (O2)] $N_c(G) \supset \{ S\subset (V(G)-\{u,v\}) \mid |S|=2, {|S\cap D |\le 1} \text{ for {any component} }D\text{ of }G-\{u,v\} \}$. \end{itemize} To see (O1), take any edge $uv\in E_d(G)$. Note that $G-\{u,v\}$ is disconnected {and so} $1\le |D| \le |V(G)|-3$ follows. Also (O2) holds, {to see why, let $S=\{u',v'\}\subset (V(G)-\{u,v\})$ and $|S\cap D|\le1$ for any component $D$ of $G-\{u,v\}$. Then clearly $S$ is a non-edge.
From the fact that $G$ is 2-connected, together with {$(\sharp)$}, we can see that every component of $G-S$ contains one of $u$ and $v$; since $uv\in E(G)$, it follows that $G-S$ is connected.} To show that \eqref{eq:NE} implies $E_d(G)=\emptyset,$ we prove its contrapositive, so we assume $E_d(G)\neq \emptyset.$ {Then, at the end, we will reach a contradiction to \eqref{eq:NE} by showing that $|N_c(G)|>|E_d(G)|$. Since $E_d(G)\neq \emptyset,$} we can take an edge $u_1v_1 \in E_d(G)$ so that there is a component $C_1$ of $G-\{u_1,v_1\}$ such that $u_1v_1$ is the unique edge of $G[C_1\cup\{u_1,v_1\}]$ which belongs to $E_d(G)$. We can take such an edge by considering all edges $uv\in E_d(G)$ and all components $C$ of $G-\{u,v\},$ and choose $uv$ and $C$ so that $C$ is as small as possible. Let $G_1=G-C_1$ and let $N_1$ be the set of non-edges defined by \[N_1=\{ xy \mid x\in C_1, y\in V(G)- (C_1\cup\{u_1,v_1\}) \} .\] We proceed similarly to construct a maximal sequence of subgraphs $G_0,G_1,\ldots,G_p$ of $G,$ where $G_0=G$ and $p\geq 1$ as follows. Assume that we have $u_{i-1}v_{i-1}$, $C_{i-1}$, $G_{i-1}$, and $N_{i-1}$ for some $i\ge 2$. As long as $E_d(G_{i-1})\setminus\{ u_1v_1,\ldots, u_{i-1}v_{i-1}\}\neq \emptyset$, we continue recursively: \begin{itemize} \item[](Step 1) Take an edge $u_iv_i \in E_d(G_{i-1})\setminus \{ u_1v_1,\ldots, u_{i-1}v_{i-1}\}$ so that there is a component $C_i$ of $G_{i-1}-\{u_{i},v_{i}\}$ such that $u_{i}v_{i}$ is the unique edge of $G_{i-1}[C_i\cup\{u_{i},v_{i}\}]$ which belongs to $E_d(G_{i-1})\setminus\{ u_1v_1,\ldots, u_{i-1}v_{i-1}\}$; \item[](Step 2) Let $G_i=G_{i-1}-C_i$ and let $N_{i}=\{ xy \mid x\in C_{i}, y\in V(G_{i-1})- (C_{i}\cup\{u_{i},v_{i}\}) \}$. \end{itemize} {We note that (Step 1) is possible by choosing the edge $u_iv_i \in E_d(G_{i-1})\setminus \{ u_1v_1,\ldots, u_{i-1}v_{i-1}\}$ and the component $C_i$ of $G_{i-1}-\{u_{i},v_{i}\}$ so that $|C_i|$ is as small as possible. If the subgraph induced by $C_i\cup\{ u_i,v_i\}$ contains an edge $u'v'$ in the set $E_d(G_{i-1})\setminus \{ u_1v_1,\ldots, u_{i-1}v_{i-1}\}$, then $G_{i-1}-\{u',v'\}$ has a component whose order is smaller than $|C_i|$, a contradiction to the choice of $u_iv_i$. Thus $u_{i}v_{i}$ is the unique edge of $G_{i-1}[C_i\cup\{u_{i},v_{i}\}]$ which belongs to $E_d(G_{i-1})\setminus\{ u_1v_1,\ldots, u_{i-1}v_{i-1}\}$. } \begin{clm}\label{claim:2connected}{ $G_i$ is $2$-connected for each $i\le p$. } \end{clm} \begin{proof}[Proof of Claim~\ref{claim:2connected}] For $G_0=G$ this was shown above. Suppose that $G_i$ has a cut vertex $x$ for some $i\ge1$, where $G_j$ is 2-connected for all $j<i$. Since both $u_i$ and $v_i$ are vertices of $G_i$, we may assume that $u_i$ is a vertex of $G_i-x$. Let $C$ be the component of $G_i-x$ which contains $u_i$, and $C'$ be another component of $G_i-x$. Since $u_iv_i\in E(G_i)$, note that if $v_i\neq x$, then $v_i\in C$. Recall that $G_i=G_{i-1}-C_{i}$. By the minimality of $i$, $G_{i-1}$ is 2-connected and so by ($\sharp$), both $C_{i}\cup\{u_i\}$ and $C_{i}\cup\{v_i\}$ are connected. Since $C_{i}$ is a component of $G_{i-1}-\{u_i,v_i\}$, the vertices of $C_{i}$ have no neighbors in $G_{i-1}$ outside $C_{i}\cup\{u_i,v_i\}$. Hence, $C\cup C_{i}$ is connected in $G_{i-1}$ and $C_i$ is adjacent to only $C$ among the components of $G_i-x$. This implies that $C'$ is still a component of $G_{i-1}-x$, which implies that $G_{i-1}-x$ is disconnected, a contradiction.
\end{proof} \begin{clm}\label{claim:connected} For any $\{u,v\}\subset V(G_{i}),$ if $\{u,v\}\neq \{u_i,v_i\},$ then \begin{itemize} \item[(a)] if $G_i-\{u,v\}$ is connected then $G_{i-1}-\{u,v\}$ is connected, \item[(b)] if $uv\in E_d(G_i)$, then $G_{i-1}-\{u,v\}$ is disconnected. \end{itemize} \end{clm} \begin{proof}[Proof of Claim~\ref{claim:connected}] Take $\{u,v\}\subset V(G_{i})$ so that $\{u,v\}\neq \{u_i,v_i\}$. Since $\{u,v\}\neq \{u_i,v_i\}$, we may assume that $u_i\notin \{u,v\} $. Let $C$ be the component of $G_i-\{u,v\}$ containing the vertex $u_i$. Recall that $C_i$ is a component of $G_{i-1}-\{u_i,v_i\}$ taken from (Step 1), and by {Claim~\ref{claim:2connected} and ($\sharp$)}, $\{u_i\}\cup C_i$ induces a connected graph in $G_{i-1}$. Each of $\{u_i\}\cup C_i$ and $C$ is a connected graph in $G_{i-1}$ containing the vertex $u_i$. Hence, $\{u_i\}\cup C_i\cup C=C_i\cup C$ induces a connected graph in $G_{i-1}$. Since each of $C_i$ and $C$ is disjoint from $\{u,v\}$, $C_i\cup C$ induces a connected graph $H$ in $G_{i-1}-\{u,v\}$. To show (a), suppose that $G_i-\{u,v\}$ is connected. Then $C=G_{i}-\{u,v\}$ and so $H$ is a connected spanning subgraph of $G_{i-1}-\{u,v\}$. Thus $G_{i-1}-\{u,v\}$ is a connected graph, and (a) holds. To show (b), suppose that $uv\in E_d(G_i)$. Then $G_{i}-\{u,v\}$ is disconnected. From {the fact that $u_i$ and $v_i$ are adjacent}, $v_i \in C$ if $v_i\not\in \{u,v\}$. Note that $C_i$ is only connected to two vertices $u_i$ and $v_i$ among all vertices of $G_{i-1}$. Thus, $C$ is the unique component of $G_i-\{u,v\}$ which is adjacent to $C_i$. Hence, $G_{i-1}-\{u,v\}$ is not connected, and so $uv\in E_d(G_{i-1})$. \end{proof} \begin{clm}\label{claim:finish} For every $i=1,\ldots,p,$ \[E_d(G_{i})\setminus\{ u_1v_1,\ldots, u_{i}v_{i} \} =E_d(G_{i-1})\setminus\{ u_1v_1,\ldots, u_{i}v_{i} \}.\] \end{clm} \begin{proof}[Proof of Claim~\ref{claim:finish}] For simplicity, let $E_i=E_d(G_{i})\setminus\{ u_1v_1,\ldots, u_{i}v_{i} \}$ and $E_{i-1}=E_d(G_{i-1})\setminus\{ u_1v_1,\ldots, u_{i}v_{i} \}$. Take an edge $uv \in E_i$. By (b) of Claim~\ref{claim:connected}, $G_{i-1}-\{u,v\}$ is disconnected, and so $uv\in E_d(G_{i-1})$. Since $uv\not\in \{ u_1v_1,\ldots, u_{i}v_{i} \}$, $uv\in E_{i-1}$. Thus $E_i \subset E_{i-1}$. To show that $E_{i-1} \subset E_i$, take an edge $uv \in E_{i-1}$. By the definition of $C_i$, for any edge $u'v'$ in $G_{i-1}[C_{i}\cup\{u_i,v_i\}]$ except {the edges in $\{u_1v_1,\ldots, u_iv_i\}$}, $G_{i-1}-\{u',v'\}$ is connected. Therefore, from the fact that $uv \in E_d(G_{i-1})$, $uv$ is an edge of $G_i$. By (a) of Claim~\ref{claim:connected}, $G_i-\{u,v\}$ is disconnected, and so $uv\in E_d(G_i)$. Since $uv\not\in \{ u_1v_1,\ldots, u_{i}v_{i} \}$, $uv\in E_{i}$. Hence, the claim holds. \end{proof} \begin{clm}\label{claim:last} $N_i\neq \emptyset,$ $N_i \cap N_c(G_i)=\emptyset,$ and $N_i \cup N_c(G_i) {\subset} N_c(G_{i-1})$ for every $i=1,\ldots,p.$ Moreover, $|N_1|\ge 2$. \end{clm} \begin{proof}[Proof of Claim~\ref{claim:last}] By the definition of $N_i$ and $G_i$, it is clear that $N_i\neq \emptyset$ and $N_i \cap N_c(G_i)=\emptyset$ hold. By (O2), $N_i$ is a subset of $N_c(G_{i-1})$. For any non-edge $xy\in N_c(G_i)$, $G_i-\{x,y\}$ is connected and so $G_{i-1}-\{x,y\}$ is a connected graph by (a) of Claim~\ref{claim:connected}, which implies $xy\in N_c(G_{i-1})$. Thus $N_c(G_i)\subset N_c(G_{i-1})$. Moreover, from (O1), by the assumption of $n\ge 5,$ we have $|N_1|\ge |C_1| (n-2-|C_1|)\ge n-3 \ge 2$. 
\end{proof} From Claim~\ref{claim:finish}, for each $i=1,\ldots,p,$ \begin{eqnarray*} E_d(G_{i})\setminus\{ u_1v_1,\ldots, u_{i}v_{i} \} =E_d(G_{0})\setminus\{ u_1v_1,\ldots, u_{i}v_{i} \}, \end{eqnarray*} which implies $p=|E_d(G)|,$ since $E_d(G_{i})\setminus\{ u_1v_1,\ldots, u_{i}v_{i} \}\neq \emptyset$ for $i<p,$ and $E_d(G_{p})\setminus\{ u_1v_1,\ldots, u_{p}v_{p} \} =\emptyset.$ Thus by Claim~\ref{claim:last}, \begin{eqnarray*} && N_c(G)=N_c(G_0) \ \supset \ N_1 \cup N_c(G_1) \ \supset \ N_1 \cup N_2 \cup N_c(G_2) \ \supset \ \cdots \ \supset \ N_1 \cup N_2 \cup \cdots \cup N_p, \end{eqnarray*} where $N_i$ and $N_j$ are disjoint whenever $1 \leq i < j \leq p.$ Again by Claim~\ref{claim:last}, \[ |N_c(G)| \ge |N_1| +|N_2|+ \cdots + |N_p| \ge 2 + \underbrace{1 + \cdots +1}_{ (p-1)\text{ times}} =p+1 > p =|E_d(G)|, \] which violates \eqref{eq:NE}. Hence the proof is complete. \end{proof} \section{Closing remarks}\label{sec:open} We finally give two remarks on our main results. \begin{rmk} From the proof of Theorem~\ref{thm:cycle}, we have the following algorithm, which calculates the safe number of a weighted cycle using a number of operations linear in $n$. \noindent \hrulefill \\ \textsc{{\bf WEIGHTED SAFE NUMBER OF A CYCLE GRAPH}}\\ \textsc{{\bf INPUT}}: A cycle {$C$} such that $V(C):=\{v_i | i \in \Z_n \}$ and $E(C):=\{v_iv_{i+1}| i \in \Z_n\}$ and a {positive real-valued function $w$ on $V(C)$}. \\ \textsc{{\bf OUTPUT}}: The {(connected)} safe number $\s(C,w)${$(=\cs(C,w))$}. \begin{quote} \begin{alg-enumerate} \item{Calculate the total weight $w(V):=\sum_{i=0}^{n-1} w(v_i)$.} \item{Set $W_{\min}:=w(V)$} \item{Set $W:=w(v_0)$} \item{Set $\ell:=0 \/ (\in \Z_n)$} \item{Set $k:=0 \/ (\in \Z_n)$} \item{While $W < \frac{w(V)}{2}$ do:\\ \hspace{30pt} set $W := W + w(v_{\ell+1});$ \hspace{2pt} set $\ell:=\ell+1;$}\label{6} \item{If $W < W_{\min}$ then set $W_{\min}:=W$} \item{Set $W:=W - w(v_k)$} \item{Set $k:=k+1$} \item{If $k=0$ then return the number $W_{\min}$} \item{Goto Step\hspace*{0.2em}\ref{6}} \end{alg-enumerate} \end{quote} \hrulefill {We remark that for each $k\in\Z_n$, Step 6 determines the `smallest' $\ell\in\Z_n$ such that $\{v_k,v_{k+1},\ldots,v_\ell\}$ has weight at least $\frac{w(V)}{2}$; here $W$ denotes the weight of the current arc.} \end{rmk} Note that, in contrast with the above, it was shown in \cite{BFLMMST-weighted-sf} that the safe number of a given weighted path can be calculated in $O(n^3)$ time. \begin{rmk} We note that (i), (ii), and (iii) in Theorem~\ref{prop:ratio} are equivalent without the assumption of $n \ge 5$. In addition, if we replace (iii) and/or (iv) in Theorem~\ref{prop:ratio} by any of the following stronger statements, then the theorem remains true: \begin{itemize} \item[(iii${}^{\prime}$)] $k(G-S)=k(G[S])$ for any $S\subset V(G)$ with $|S|=2$; \item[(iii${}^{\prime\prime}$)] $k(G-S)=k(G[S])$ for any $S$ with $\emptyset\neq S\subsetneq V(G)$; \item[(iv${}^{\prime}$)] $q_{k,1}(G)=q_{n-k,1}(G)$ for any $1 \le k \le n-1$. \end{itemize} Obviously, (i) implies (iii${}^{\prime}$), (iii${}^{\prime\prime}$), and (iv${}^{\prime}$). Each of (iii${}^{\prime}$) and (iii${}^{\prime\prime}$) implies (iii), and (iv${}^{\prime}$) implies (iv). \end{rmk} \section*{Acknowledgement} The authors would like to thank the two anonymous reviewers for helpful and valuable comments.
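For concreteness, the sliding-window procedure described in the first remark above can be written out as the following short Python sketch. This is only an illustrative transcription of the algorithm, not part of the results of this paper: the function name, the variable names and the toy example are ours.
\begin{verbatim}
def weighted_safe_number_of_cycle(weights):
    """Sliding-window computation of s(C,w) = cs(C,w) for a weighted cycle.

    weights: list of positive vertex weights w(v_0), ..., w(v_{n-1}).
    Returns the minimum weight of an arc whose weight is at least half of
    the total weight (see the proof of Theorem thm:cycle).
    """
    n = len(weights)
    total = sum(weights)
    half = total / 2.0
    best = total          # the whole vertex set is trivially a connected safe set
    window = 0.0          # weight of the current arc {v_k, ..., v_{l-1}}
    l = 0                 # exclusive right end of the current arc
    for k in range(n):    # left end of the current arc
        # grow the arc until its weight reaches half of the total weight
        while window < half and l < k + n:
            window += weights[l % n]
            l += 1
        best = min(best, window)
        window -= weights[k]   # slide: drop v_k before advancing the left end
    return best

# toy example: weights 3, 1, 2, 1, 3 around a cycle of length five
print(weighted_safe_number_of_cycle([3, 1, 2, 1, 3]))   # prints 6
\end{verbatim}
The inner while loop is executed at most $2n$ times in total over the whole run, which is why the procedure uses only a linear number of operations.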
https://arxiv.org/abs/1605.04890
Product of simplices and sets of positive upper density in $\mathbb{R}^d$
We establish that any subset of $\mathbb{R}^d$ of positive upper Banach density necessarily contains an isometric copy of all sufficiently large dilates of any fixed two-dimensional rectangle provided $d\geq4$. We further present an extension of this result to configurations that are the product of two non-degenerate simplices; specifically we show that if $\Delta_{k_1}$ and $\Delta_{k_2}$ are two fixed non-degenerate simplices of $k_1+1$ and $k_2+1$ points respectively, then any subset of $\mathbb{R}^d$ of positive upper Banach density with $d\geq k_1+k_2+6$ will necessarily contain an isometric copy of all sufficiently large dilates of $\Delta_{k_1}\times\Delta_{k_2}$. A new direct proof of the fact that any subset of $\mathbb{R}^d$ of positive upper Banach density necessarily contains an isometric copy of all sufficiently large dilates of any fixed non-degenerate simplex of $k+1$ points provided $d\geq k+1$, a result originally due to Bourgain, is also presented.
\section{Introduction}\label{intro} \subsection{Background} Recall that the \emph{upper Banach density} of a measurable set $A\subseteq\mathbb R^d$ is defined by \begin{equation}\label{BD} \delta^*(A)=\lim_{N\rightarrow\infty}\sup_{t\in\mathbb R^d}\frac{|A\cap(t+Q_N)|}{|Q_N|},\end{equation} where $|\cdot|$ denotes Lebesgue measure on $\mathbb R^d$ and $Q_N$ denotes the cube $[-N/2,N/2]^d$. A result of Katznelson and Weiss \cite{FKW} states that if $A\subseteq\mathbb{R}^2$ has positive upper Banach density, then its distance set \[\text{dist}(A)=\{|x-x'|\,:\, x,x'\in A\}\] contains all large numbers. This result was later reproved using Fourier analytic techniques by Bourgain in \cite{B} where he established the following more general result for arbitrary non-degenerate $k$-dimensional simplices. \begin{thm}[Bourgain \cite{B}]\label{BourSimp} Let $\Delta_k\subseteq\mathbb R^{k}$ be a fixed non-degenerate $k$-dimensional simplex. If $A\subseteq\mathbb R^d$ has positive upper Banach density and $d\geq k+1$, then there exists a threshold $\lambda_0=\lambda_0(A,\Delta_k)$ such that $A$ contains an isometric copy of $\lambda\cdot\Delta_k$ for all $\lambda\geq \lambda_0$. \end{thm} Recall that a set $\Delta_k=\{0,v_1,\dots,v_k\}$ of $k+1$ points in $\mathbb R^{k}$ is a non-degenerate $k$-dimensional simplex if the vectors $v_1,\dots,v_k$ are linearly independent and that a configuration $\Delta_k'$ is an isometric copy of $\lambda\cdot \Delta_k$ in $\mathbb R^d$ if $\Delta'_k=x+\lambda\cdot U(\Delta_k)$ for some $x\in \mathbb R^d$ and $U\in SO(d)$ when $d\geq k+1$. \subsection{Main Results} In Section \ref{newdistances} we present a new and direct proof of Theorem \ref{BourSimp} when $k=1$, namely a new proof of the aforementioned distance set result of Katznelson and Weiss. A new direct proof of Theorem \ref{BourSimp} in its full generality is also given, in fact two different new approaches are presented in Section \ref{newsimplices}. However, the main purpose of this article is to establish the following new results, namely Theorems \ref{Rect} and \ref{ProdSimp} below. \begin{thm}\label{Rect} Let $\Box=\{0,v_1,v_2, v_1+v_2\}\subseteq\mathbb R^2$ with $v_1\cdot v_2=0$ denote a fixed two-dimensional rectangle. If $A\subseteq\mathbb R^d$ has positive upper Banach density and $d\geq 4$, then there exists a threshold $\lambda_0=\lambda_0(A,\Box)$ such that $A$ contains an isometric copy of $\lambda\cdot\Box$ for all $\lambda\geq \lambda_0$. \end{thm} Since $d\geq4$ we can write $\mathbb R^d=\mathbb R^{d_1}\times\mathbb R^{d_2}$ with $d_1,d_2\geq2$. It is important to note that the isometric copies of $\lambda\cdot\Box$, whose existence in $A$ Theorem \ref{Rect} guarantees, will in fact all be of the special form \[\{(x,y), (x',y), (x,y'), (x',y')\}\subseteq \mathbb R^{d_1}\times\mathbb R^{d_2}\] where $|x-x'|=\lambda|v_1|$ and $|y-y'|=\lambda|v_2|.$ We also establish the following generalization of Theorem \ref{Rect}, but with a slight loss in the dimension $d$. \begin{thm}\label{ProdSimp} Let $\Delta_{k_1}$ and $\Delta_{k_2}$ be two fixed non-degenerate simplices of dimension $k_1$ and $k_2$. 
If $A\subseteq\mathbb R^d$ has positive upper Banach density with $d\geq k_1+k_2+6$, then there exists a threshold $\lambda_0=\lambda_0(A,\Delta_{k_1},\Delta_{k_2})$ such that $A$ contains an isometric copy of $\lambda\cdot(\Delta_{k_1}\times\Delta_{k_2})$ of the form $\Delta_{k_1}'\times\Delta_{k_2}'$ with each $\Delta_{k_i}'\subseteq\mathbb R^{d_i}$ an isometric copy of $\lambda\cdot\Delta_{k_i}$ for all $\lambda\geq \lambda_0$. \end{thm} It will be clear from the proofs of Theorems \ref{ProdSimp} and \ref{Rect} that if $1=k_1<k_2$, then the conclusion of Theorem \ref{ProdSimp} will in fact hold under the weaker hypothesis that $d\geq k_1+k_2+4$. Note further that if $A$ were a direct product set $B_1\times B_2\subseteq \mathbb R^{d_1}\times \mathbb R^{d_2}$ with each $d_i\geq k_i+1$, then the conclusion of Theorem \ref{ProdSimp} (which contains the conclusion of Theorem \ref{Rect} when each $k_i=1$) would follow immediately from Theorem \ref{BourSimp} and under the weaker hypothesis that $d\geq k_1+k_2+2$. The natural extension of Theorems \ref{Rect} and \ref{ProdSimp} to $\ell$-dimensional rectangles and $\ell$-fold products of simplices (with $\ell>2$) also holds, but as the arguments involved in establishing these results are significantly more technical than those needed for Theorems \ref{Rect} and \ref{ProdSimp} we plan to address this in a separate article. \subsection{Outline of Paper} Our approach to proving Theorems \ref{Rect} and \ref{ProdSimp} will be to reduce them to quantitative results in the compact setting of $[0,1]^{d_1}\times[0,1]^{d_2}$, namely Propositions \ref{Propn1} and \ref{Propn11}. These reductions are carried out in Section \ref{red} with the remainder of Section \ref{4} and the entirety of Sections \ref{SecPart1}-\ref{SecPart2} then devoted to establishing Propositions \ref{Propn1} and \ref{Propn11}. In Section \ref{newdistances} we present a new direct proof of Theorem \ref{BourSimp} when $k=1$ and two new proofs of Theorem \ref{BourSimp}, in its full generality, are presented in Section \ref{newsimplices}. In both cases our novel approach will be to first reduce matters to results for suitably uniformly distributed subsets of $[0,1]^d$. \comment{ Our approach to proving both Theorem \ref{BourSimp} and Theorem \ref{ProdSimp} will be to reduce them to results in the compact setting of $[0,1]^d$ and $[0,1]^{d_1}\times[0,1]^{d_2}$ respectively, namely Proposition \ref{Propn00} and Proposition \ref{Propn1}. This reduction is carried out in Section \ref{red} below, the proof of Proposition \ref{Propn00} is carried out in Sections \ref{new2}, and the remainder of the article, namely Sections \ref{SecPart1}-\ref{reglemproof}, is devoted to establishing Proposition \ref{Propn11}. } \section{Uniformly Distributed Subsets of $\mathbb R^d$ and a New Proof of Theorem \ref{BourSimp} when $k=1$}\label{newdistances} In this section we introduce a precise notion of uniform distribution for subsets of $\mathbb R^d$ and prove an (optimal) result, Proposition \ref{Propn0} below, on distances in uniformly distributed subsets of $[0,1]^d$. 
Proposition \ref{Propn0} will be critically important in our proof of Proposition \ref{Propn1}, but as we shall see below it also immediately implies Theorem \ref{BourSimp} when $k=1$ and hence provides a new direct proof of the following \begin{thm}[Katznelson and Weiss \cite{FKW}]\label{KW} If $A\subseteq\mathbb R^d$ has positive upper Banach density and $d\geq 2$, then there exists a threshold $\lambda_0=\lambda_0(A)$ such that for all $\lambda\geq \lambda_0$ there exists a pair of points \begin{equation*}\{x,x'\}\subseteq A\quad\text{with}\quad |x-x'|=\lambda.\end{equation*} \end{thm} \subsection{Uniform Distribution and Distances} \begin{defn}[$(\varepsilon,L)$-uniform distribution] Let $0<L\leq\varepsilon\ll1$ and $Q_L=[-L/2,L/2]^d$. A set $A\subseteq[0,1]^d$ is said to be $(\varepsilon,L)$-uniformly distributed if \begin{equation}\label{3.3.1} \int_{[0,1]^d}\left|\frac{|A\cap(t+Q_{L})|}{|Q_{L}|}-|A|\,\right|^2\,dt\leq\varepsilon^2. \end{equation} \end{defn} \begin{propn}[Distances in uniformly distributed sets]\label{Propn0} Let $0<c\leq1$, $0<\lambda\leq\varepsilon\ll 1$ and $d\geq2$. If $A\subseteq[0,1]^d$ is $(\varepsilon,\varepsilon^4\lambda)$-uniformly distributed with $\alpha=|A|>0$, then there exists a pair of points \begin{equation*}\{x,x'\}\subseteq A\quad\text{with}\quad |x-x'|=c\lambda.\end{equation*} In fact, \begin{equation*} \iint1_A(x)1_A(x-c\lambda x_1)\,d\sigma(x_1)\,dx=\alpha^2 +O(c^{-1/6}\varepsilon^{2/3}), \end{equation*} where $\sigma$ denotes the normalized measure on the sphere $\{x\in\mathbb R^d\,:\,|x|=1\}$ induced by Lebesgue measure. \end{propn} Before proving Proposition \ref{Propn0} we will first show that when $c=1$ it immediately implies Theorem \ref{KW}. To the best of our knowledge this observation, which gives a direct proof of Theorem \ref{KW}, is new. \subsection{Proof that Proposition \ref{Propn0} implies Theorem \ref{KW}}\label{newreduction} Let $\varepsilon>0$ and $A\subseteq\mathbb R^d$ with $\delta^*(A)>0$.\smallskip \comment{Recall that the \emph{upper Banach density} $\delta^*$ of a measurable set $A\subseteq\mathbb R^d$ is defined by \begin{equation}\label{BD} \delta^*(A)=\lim_{N\rightarrow\infty}\sup_{x\in\mathbb R^d}\frac{|A\cap(x+Q_N)|}{|Q_N|},\end{equation} where $|\cdot|$ denotes Lebesgue measure on $\mathbb R^d$ and $Q_N$ denotes the cube $[-N/2,N/2]^d$.} The following two facts follow immediately from the definition of upper Banach density, see (\ref{BD}): \begin{itemize} \item[(i)] There exists $M_0=M_0(A,\varepsilon)$ such that for all $M\geq M_0$ and all $t\in\mathbb R^d$ \[\frac{|A\cap(t+Q_M)|}{|Q_M|}\leq(1+\varepsilon^4/3)\,\delta^*(A).\] \item[(ii)] There exist arbitrarily large $N\in\mathbb R$ such that \[\frac{|A\cap(t_0+Q_N)|}{|Q_N|}\geq(1-\varepsilon^4/3)\,\delta^*(A)\] for some $t_0\in\mathbb R^d$. \end{itemize} Combining (i) and (ii) above we see that for any $\lambda\geq \varepsilon^{-4}M_0$, there exist $N\geq\varepsilon^{-4}\lambda$ and $t_0\in\mathbb R^d$ such that \begin{equation*} \frac{|A\cap(t+Q_{\varepsilon^4\lambda})|}{|Q_{\varepsilon^4\lambda}|}\leq(1+\varepsilon^4)\frac{|A\cap(t_0+Q_N)|}{|Q_N|} \end{equation*} for all $t\in\mathbb R^d$.
Consequently, Theorem \ref{KW} reduces, via a rescaling of $A\cap(t_0+Q_N)$ to a subset of $[0,1]^d$, to establishing that if $0<\lambda\leq\varepsilon\ll1$ and $A\subseteq[0,1]^d$ is measurable with $|A|>0$ and the property that \begin{equation*} \frac{|A\cap(t+Q_{\varepsilon^4\lambda})|}{|Q_{\varepsilon^4\lambda}|}\leq (1+\varepsilon^4)\,|A| \end{equation*} for all $t\in\mathbb R^d$, then there exist a pair of points $x,x'\in A$ such that $ |x-x'|=\lambda$. Now since $A\cap(t+Q_{\varepsilon^4\lambda})$ is only supported in $[-\varepsilon^4\lambda,1+\varepsilon^4\lambda]^d$ it follows that \begin{equation}\label{Q-QL} |A|=\int_{\mathbb R^d}\frac{|A\cap(t+Q_{\varepsilon^4\lambda})|}{|Q_{\varepsilon^4\lambda}|}\,dt=\int_{[0,1]^d}\frac{|A\cap(t+Q_{\varepsilon^4\lambda})|}{|Q_{\varepsilon^4\lambda}|}\,dt+O(\varepsilon^4|A|), \end{equation} from which one can easily deduce that \begin{equation}\label{most t's} \Bigl|\Bigl\{t\in[0,1]^d\,:\, \frac{|A\cap(t+Q_{\varepsilon^4\lambda})|}{|Q_{\varepsilon^4\lambda}|}\leq (1-\varepsilon^2)\,|A|\Bigr\}\Bigr|=O(\varepsilon^2) \end{equation} and hence that $A$ is $(\varepsilon,\varepsilon^4\lambda)$-uniformly distributed. The result therefore follows, provided $d\geq 2$. \qed \subsection{Proof of Proposition \ref{Propn0}} \begin{defn}[Counting Function for Distances] For $0<\lambda\ll1$ and functions \[f_0,f_{1}:[0,1]^{d}\to\mathbb R\] with $d\geq 2$ we define \begin{equation} T(f_0,f_1)(\lambda)= \iint f_0(x)f_1(x-\lambda x_1)\,d\sigma(x_1) \,dx. \end{equation} \end{defn} \begin{defn}[$U^1(L)$-norm] For $0<L\ll1$ and functions $f:[0,1]^d\to\mathbb R$ we define \begin{equation} \|f\|_{U^1(L)}^2=\int\limits_{[0,1]^d}\Bigl|\frac{1}{L^d}\int\limits_{t+Q_L}f(x)\,dx\Bigr|^2 \,dt =\int\limits_{[0,1]^d}\biggl(\frac{1}{L^{2d}}\iint\limits_{x,x'\in t+Q_L}f(x)f(x')\,dx'\,dx\biggr)dt \end{equation} where $Q_L=[-L/2,L/2]^d$. \end{defn} It is an easy, but important, observation that \begin{equation}\label{almostU1} \|f\|_{U^1(L)}^2=\iint f(x)f(x-x_1)\psi_L(x_1)\,dx_1\,dx +O(L), \end{equation} where $\psi_L=L^{-2d}\,1_{Q_L}*1_{Q_L}$. Note also that if $A\subseteq[0,1]^d$ with $\alpha=|A|>0$ and we define \begin{equation*} f_A:=1_A-\A1_{[0,1]^d} \end{equation*} then \begin{equation}\label{relate} \int\limits_{[0,1]^d}\Bigl|\frac{1}{L^d}\int\limits_{t+Q_L}f_A(x)\,dx\Bigr|^2 \,dt=\int\limits_{[0,1]^d}\left|\frac{|A\cap(t+Q_{L})|}{|Q_{L}|}-|A|\,\right|^2\,dt+O(L). \end{equation} Evidently the $U^1(L)$-norm is measuring the mean-square uniform distribution of $A$ on scale $L$. Specifically if $A$ is $(\varepsilon,L)$-uniformly distributed, then $\|f_A\|_{U^1(L)}\leq2\varepsilon$ provided $0<L\ll\varepsilon$. \smallskip At the heart of this short proof of Proposition \ref{Propn0} is the following ``generalized von-Neumann inequality''. \begin{lem}[Generalized von-Neumann for Distances]\label{GvN0} For any $c>0$, $0<\varepsilon,\lambda\ll\min\{1,c^{-1}\}$ and functions \[f_0,f_1:[0,1]^{d}\to[-1,1]\] with $d\geq2$ we have \begin{equation*} \left|T(f_0,f_1)(c\lambda)\right|\leq\prod_{j=0,1}\|f_j\|_{U^1(\varepsilon^4\lambda)}+O(c^{-1/6}\varepsilon^{2/3}). \end{equation*} \end{lem} Indeed, if $A\subseteq[0,1]^d$ with $d\geq 2$ and $\alpha=|A|>0$, then Lemma \ref{GvN0} implies that \[\left|T(1_A,1_A)(c\lambda)-T(\A1_{[0,1]^d},\A1_{[0,1]^d})(c\lambda)\right|\leq 3\,\|f_A\|_{U^1(\varepsilon^4\lambda)}+O(c^{-1/6}\varepsilon^{2/3})\] for any $0<c\leq1$ and $0<\varepsilon,\lambda \ll1$. 
Since $T(\A1_{[0,1]^d},\A1_{[0,1]^d})(c\lambda)=\alpha^{2}+O(c\lambda)$ it follows that \begin{equation*} T(1_A,1_A)(c\lambda)=\alpha^{2}+O(c^{-1/6}\varepsilon^{2/3})\end{equation*} provided $0<\lambda\leq\varepsilon\ll1$. \\ To finish the proof of Proposition \ref{Propn0} we are therefore left with the task of proving Lemma \ref{GvN0}. \begin{proof}[Proof of Lemma \ref{GvN0}] An application of Parseval followed by Cauchy-Schwarz implies that \begin{align*} T(f_{0},f_{1})(c\lambda)^2&=\Bigl(\iint f_0(x)f_1(x-c\lambda x_1)\,d\sigma(x_1) \,dx\Bigr)^2\\ &\leq \Bigl(\int_{\mathbb R^d} |\widehat{f_0}(\xi)||\widehat{f_1}(\xi)||\widehat{\sigma}(c\lambda\xi)|\,d\xi\Bigr)^2\\ &\leq \prod_{j=0,1}\int_{\mathbb R^d} |\widehat{f_j}(\xi)|^2|\widehat{\sigma}(c\lambda\xi)|\,d\xi \end{align*} where \begin{equation*} \widehat{\mu}(\xi)=\int_{\mathbb R^d}e^{-2\pi i x\cdot\xi}\,d\mu(x)\end{equation*} denotes the Fourier transform of any complex-valued Borel measure $d\mu$ and $\widehat{g}(\xi)$ is the Fourier transform of the measure $d\mu=g\,dx$. Combining the basic fact (see for example \cite{Stein}) that \begin{equation*} |\widehat{\sigma}(\xi)|\leq\min\{1,C|\xi|^{-(d-1)/2}\} \end{equation*} with the simple observation that $|1-\widehat{\psi}(\xi)|\leq\min\{1,C|\xi|\}$ gives \begin{equation*} |\widehat{\sigma}(c\lambda\xi)|=|\widehat{\sigma}(c\lambda\xi)|\widehat{\psi}(\varepsilon^4\lambda\xi)+|\widehat{\sigma}(c\lambda\xi)|(1-\widehat{\psi}(\varepsilon^4\lambda\xi))\leq \widehat{\psi}(\varepsilon^4\lambda\xi)+O(\min\{\varepsilon^4\lambda|\xi|,(c\lambda|\xi|)^{-1/2}\}). \end{equation*} The result now follows, since $\|f_j\|_2^2\leq1$, \begin{equation*} \min\{\varepsilon^4\lambda|\xi|,(c\lambda|\xi|)^{-1/2}\}\leq c^{-1/3}\varepsilon^{4/3} \end{equation*} and a further application of Parseval (and appeal to (\ref{almostU1})) reveals that \begin{equation*} \int |\widehat{f_j}(\xi)|^2\widehat{\psi}(\varepsilon^4\lambda\xi)\,d\xi=\iint f_j(x)f_j(x-x_1)\psi_{\varepsilon^4\lambda}(x_1)\,dx_1\,dx=\|f_j\|_{U^1(\varepsilon^4\lambda)}^2+O(\varepsilon^4\lambda).\qedhere \end{equation*} \end{proof} \section{A New Proof of Theorem \ref{BourSimp}}\label{newsimplices} In light of the reduction argument presented in Section \ref{newreduction} it is clear that in order to prove Theorem \ref{BourSimp} it would suffice to establish the following result for uniformly distributed subsets of $[0,1]^d$. \begin{propn}[Simplices in uniformly distributed sets]\label{Propn00} Let $\Delta_k=\{0,v_1,\dots,v_k\}$ be a fixed non-degenerate $k$-dimensional simplex with $c_{\Delta_k}=\min_{1\leq j\leq k}\text{\emph{dist}}(v_j,\text{\emph{span}}\left\{\{v_1,\dots,v_{k}\}\setminus v_j\right\})\leq 1.$ Let $0<\lambda\leq\varepsilon\ll \min\{1,c_{\Delta_k}^{-1}\}$ and $A\subseteq[0,1]^d$ with $d\geq k+1$ and $\alpha=|A|>0$. If $A$ is $(\varepsilon,\varepsilon^4\lambda)$-uniformly distributed, then $A$ contains an isometric copy of $\lambda\cdot\Delta_k$ and in fact \begin{equation}\label{intcount} \iint 1_A(x)1_A(x-\lambda \cdot U(v_1))\cdots 1_A(x-\lambda\cdot U(v_k))\,d\mu(U) \,dx=\alpha^{k+1}+O_k(c_{\Delta_k}^{-1/6}\varepsilon^{2/3}) \end{equation} where $\mu$ denotes the Haar measure on $SO(d)$.\end{propn} Note that Proposition \ref{Propn0} is the special case of Proposition \ref{Propn00} with $k=1$ and $v_1=1$. 
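As a quick numerical sanity check of the $k=1$ case, the counting integral appearing in Proposition \ref{Propn0} is easy to estimate by Monte Carlo. The short Python sketch below is purely illustrative and plays no role in the proofs: the striped test set, the parameters and the sample size are our own choices. It estimates $T(1_A,1_A)(\lambda)$ for a set $A\subseteq[0,1]^2$ of measure $\alpha=1/2$ consisting of stripes much narrower than $\lambda$, and the printed value is close to $\alpha^2=1/4$, up to the $O(\lambda)$ boundary term.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, lam, n_samples = 2, 0.05, 400_000   # dimension, dilate, Monte Carlo sample size

def indicator_A(points):
    # striped test set A = {x in [0,1]^2 : floor(400 x_1) is even}, of measure 1/2
    inside_cube = np.all((points >= 0.0) & (points <= 1.0), axis=1)
    stripes = (np.floor(400.0 * points[:, 0]) % 2 == 0)
    return inside_cube & stripes

x = rng.random((n_samples, d))                    # x uniform in [0,1]^2
u = rng.normal(size=(n_samples, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)     # u uniform on the unit circle

estimate = np.mean(indicator_A(x) & indicator_A(x - lam * u))
print(estimate)   # roughly alpha^2 = 0.25, up to the O(lambda) boundary effect
\end{verbatim}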
\subsection{Proof of Proposition \ref{Propn00}}\label{new2} Let $\Delta_k=\{0,v_1,\dots,v_k\}$ be a fixed non-degenerate $k$-dimensional simplex with \[c_{\Delta_k}=\min_{1\leq j\leq k}\text{dist}(v_j,\text{span}\left\{\{v_1,\dots,v_{k}\}\setminus v_j\right\})\leq 1.\] \begin{defn}[Counting Function for Simplices] For any $0<\lambda\ll1$ and functions \[f_0,f_1,\dots,f_{k}:[0,1]^{d}\to\mathbb R\] with $d\geq k+1$ we define \begin{equation} T_{\Delta_k}(f_0,f_1,\dots,f_{k})(\lambda)= \iint f_0(x)f_1(x-\lambda \cdot U(v_1))\cdots f_k(x-\lambda\cdot U(v_k))\,d\mu(U) \,dx. \end{equation} \end{defn} Proposition \ref{Propn00} is an immediate consequence of the following ``generalized von-Neumann inequality''. \begin{lem}[Generalized von-Neumann for Simplices]\label{GvN00} For any $0<\varepsilon, \lambda\ll1$ and functions \[f_0,f_1,\dots,f_{k}:[0,1]^{d}\to[-1,1]\] \begin{equation*} \left|T_{\Delta_k}(f_0,f_1,\dots,f_{k})(\lambda)\right|\leq\min_{j=0,1,\dots,k}\|f_j\|_{U^1(\varepsilon^4\lambda)}+O(c_{\Delta_k}^{-1/6}\varepsilon^{2/3}). \end{equation*} \end{lem} Indeed, if $A\subseteq[0,1]^d$ with $d\geq k+1$ and $\alpha=|A|>0$, then Lemma \ref{GvN00} implies \[\left|T_{\Delta_k}(1_A,\dots,1_A)(\lambda)-T_{\Delta_k}(\A1_{[0,1]^d},\dots,\A1_{[0,1]^d})(\lambda)\right|\leq (2^{k+1}-1)\|f_A\|_{U^1(\varepsilon^4\lambda)}+O_k(c_{\Delta_k}^{-1/6}\varepsilon^{2/3})\] for any $0<\varepsilon,\lambda\ll1$. Since $T_{\Delta_k}(\A1_{[0,1]^d},\dots,\A1_{[0,1]^d})(\lambda)=\alpha^{k+1}+O(\lambda)$ it follows that \begin{equation*} T_{\Delta_k}(1_A,\dots,1_A)(\lambda)=\alpha^{k+1}+O_k(c_{\Delta_k}^{-1/6}\varepsilon^{2/3})\end{equation*} provided $0<\lambda\leq\varepsilon\ll 1$. \smallskip To finish the proof of Proposition \ref{Propn00} we are therefore left with the task of proving Lemma \ref{GvN00}. \begin{proof}[Proof of Lemma \ref{GvN00}] By symmetry it suffices to show that \begin{equation}\label{now1} \left|T_{\Delta_k}(f_0,f_1,\dots,f_{k})(\lambda)\right|\leq \|f_k\|_{U^1(\varepsilon^4\lambda)}+O(c_{\Delta_k}^{-1/6}\varepsilon^{2/3}). \end{equation} As in \cite{B} we start by writing \[ T_{\Delta_k}(f_0,f_1,\dots,f_{k})(\lambda)=\iint\cdots\int f_0(x)f_{1}(x-\lambda x_1)\cdots f_k(x-\lambda x_k)\,d\sigma^{(d-k)}_{x_1,\dots,x_{k-1}}(x_k)\cdots d\sigma^{(d-2)}_{x_1}(x_2)\,d\sigma(x_{1})\,dx \] where $\sigma$ now denotes the normalized measure on the sphere $S^{d-1}(0,|v_1|)$ and $\sigma^{(d-j)}_{x_1,\dots,x_{j-1}}$ denotes, for each $2\leq j\leq k$, the normalized measure on the spheres \begin{equation} S^{d-j}_{x_1,\dots,x_{j-1}}=S^{d-1}(0,|v_j|)\cap S^{d-1}(x_1,|v_j-v_1|)\cap\cdots\cap S^{d-1}(x_{j-1},|v_j-v_{j-1}|)\end{equation} where $S^{d-1}(x,r)=\{x'\in\mathbb R^d\,:\,|x-x'|=r\}$. 
Since \[ \left|T_{\Delta_k}(f_0,f_1,\dots,f_{k})(\lambda)\right|\leq\iint\cdots\int\Bigl| \int f_k(x-\lambda x_k)\,d\sigma^{(d-k)}_{x_1,\dots,x_{k-1}}(x_k) \Bigr|\, d\sigma^{(d-k+1)}_{x_1,\dots,x_{k-2}}(x_{k-1})\cdots d\sigma^{(d-2)}_{x_1}(x_2)\,d\sigma(x_{1})\,dx \] it follows from an application of Cauchy-Schwarz that \begin{align}\label{CS} \left|T_{\Delta_k}(f_0,f_1,\dots,f_{k})(\lambda)\right|^2\leq\int\cdots\iint\Bigl| \int f_k(x-\lambda x_k) & \,d\sigma^{(d-k)}_{x_1,\dots,x_{k-1}}(x_k) \Bigr|^2\,dx \\ & \,d\sigma^{(d-k+1)}_{x_1,\dots,x_{k-2}}(x_{k-1})\cdots d\sigma^{(d-2)}_{x_1}(x_2)\,d\sigma(x_{1}).\nonumber \end{align} An application of Plancherel therefore shows that \[ \left|T_{\Delta_k}(f_0,f_1,\dots,f_{k})(\lambda)\right|^2 \leq \int|\widehat{f_k}(\xi)|^2I(\lambda\,\xi)\,d\xi\] where \begin{equation}\label{I} I(\xi)=\int\cdots\int \bigl|\widehat{\sigma^{(d-k)}_{x_1,\dots,x_{j-1}}}(\xi)\bigr|^2\,d\sigma^{(d-k+1)}_{x_1,\dots,x_{k-2}}(x_{k-1})\cdots d\sigma^{(d-2)}_{x_1}(x_2)\,d\sigma(x_{1}). \end{equation} Estimate (\ref{now1}) will follow if we can show that \begin{equation}\label{key} I(\lambda\xi)=I(\lambda\xi)\widehat{\psi}(\varepsilon^4\lambda\xi)+I(\lambda\xi)(1-\widehat{\psi}(\varepsilon^4\lambda\xi))\leq \widehat{\psi}(\varepsilon^4\lambda\xi)+O(c_{\Delta_k}^{-1/3}\varepsilon^{4/3}) \end{equation} since $\|f_k\|_2\leq1$ and an application of Parseval and appeal to (\ref{almostU1}) reveals that \begin{equation}\label{72} \int |\widehat{f_k}(\xi)|^2\widehat{\psi}(\varepsilon^4\lambda\xi)\,d\xi=\iint f_k(x)f_k(x-x_1)\psi_{\varepsilon^4\lambda}(x_1)\,dx\,dx_1=\|f_k\|_{U^1(\varepsilon^4\lambda)}^2+O(\varepsilon^4\lambda). \end{equation} To establish (\ref{key}) we argue as in \cite{B}, in particular we use the fact that in addition to being trivially bounded by 1 the Fourier transform of $\sigma^{(d-k)}_{x_1,\dots,x_{k-1}}$ also decays for large $\xi$ in certain directions, specifically \begin{equation}\label{decay} \bigl|\widehat{\sigma^{(d-k)}_{x_1,\dots,x_{k-1}}}(\xi)\bigr|\leq C \left(r(S^{d-k}_{x_1,\dots,x_{k-1}})\cdot\text{dist}(\xi,\text{span}\{x_1,\dots,x_{k-1}\})\right)^{-(d-k)/2} \end{equation} where $r(S^{d-k}_{x_1,\dots,x_{k-1}})=\text{dist}(v_k,\text{span}\{v_1,\dots,v_{k-1}\})$ denotes the radius of the sphere $S^{d-k}_{x_1,\dots,x_{k-1}}$. This estimate is a consequence of the well-known asymptotic behavior of the Fourier transform of the measure on the unit sphere $S^{d-k}\subseteq\mathbb R^{d-k+1}$ induced by Lebesgue measure, see for example \cite{Stein}. Together with the trivial uniform bound $I(\xi)\leq 1$, and an appropriate conical decomposition (depending on $\xi$) of the configuration space over which the integral $I(\xi)$ is defined, this gives \begin{equation}\label{Ibound} I(\xi)\leq \min\{1,C(c_{\Delta_k}|\xi|)^{-(d-k)/2}\} \end{equation} Combining (\ref{Ibound}) with the basic bound $ |1-\widehat{\psi}(\xi)|\leq\min\{1,C|\xi|\} $ we obtain the uniform bound \begin{equation*} |1-\widehat{\psi}(\varepsilon^4\lambda\,\xi)|I(\lambda\,\xi)\ll\min\{(\lambda c_{\Delta_k}|\xi|)^{-1/2},\varepsilon^4\lambda|\xi|\}\leq c_{\Delta_k}^{-1/3}\varepsilon^{4/3} \end{equation*} from which (\ref{key}) follows.\qedhere \end{proof} \subsection{A Second New Proof of Theorem \ref{BourSimp}} In this subsection we present an alternative approach to proving Proposition \ref{Propn00} with the slightly worse error bound $O_k(c_{\Delta_k}^{-1/12}\varepsilon^{1/3})$. 
Specifically, we show that one can in fact establish the following (slightly weaker) generalized von-Neumann inequality for simplices using only Lemma \ref{GvN0}, namely the generalized von-Neumann inequality for distances. \begin{lem}[Generalized von-Neumann for Simplices II]\label{GvN000} For any $0<\lambda\leq\varepsilon\ll 1$ and functions \[f_0,f_1,\dots,f_{k}:[0,1]^{d}\to[-1,1]\] \begin{equation*} \left|T_{\Delta_k}(f_0,f_1,\dots,f_{k})(\lambda)\right|\leq \sqrt{2\pi} \min_{j=0,1,\dots,k}\|f_j\|^{1/2}_{U^1(\varepsilon^4\lambda)}+O(c_{\Delta_k}^{-1/12}\varepsilon^{1/3}). \end{equation*} \end{lem} In the proof below we will make use of the following straightforward observations: \begin{itemize} \item[(i)] If we let $\Delta_{k-1}=\{0,v_1,\dots,v_{k-1}\}$, then \begin{equation}\label{i} T_{\Delta_k}(f_0,f_1,\dots,f_{k-1},1_{[0,1]^d})(\lambda)= T_{\Delta_{k-1}}(f_0,f_1,\dots,f_{k-1})(\lambda) + O(\lambda). \end{equation} \item[(ii)] If we let $\Delta'_{k}=\{0,v'_1,\dots,v'_{k}\}$ with $v'_j=v_{k-j}-v_k$ for $0\leq j\leq k-1$ and $v'_k=-v_k$, then \begin{equation}\label{ii} T_{\Delta_k}(f_0,f_1,\dots,f_{k})(\lambda)= T_{\Delta'_{k}}(f_k,f_{k-1},\dots,f_{0})(\lambda). \end{equation} \end{itemize} \begin{proof}[Proof of Lemma \ref{GvN000}] By symmetry it suffices to show that \begin{equation}\label{now} \left|T_{\Delta_k}(f_0,f_1,\dots,f_{k})(\lambda)\right|^2\leq 2\pi\, \|f_k\|_{U^1(\varepsilon^4\lambda)}+O(c_{\Delta_k}^{-1/6}\varepsilon^{2/3}). \end{equation} We initially follow the proof of Lemma \ref{GvN00}, but after (\ref{CS}) we now proceed differently. Instead of applying Plancherel to the right hand side of \begin{equation*} |T_{\Delta_k}(f_0,f_1,\dots,f_{k})(\lambda)|^2\leq\iint\cdots\int\Bigl| \int f_k(x-\lambda x_k)\,d\sigma^{(d-k)}_{x_1,\dots,x_{k-1}}(x_k) \Bigr|^2 \, d\sigma^{(d-k+1)}_{x_1,\dots,x_{k-2}}(x_{k-1})\cdots d\sigma(x_{1})\,dx. \end{equation*} we now ``square out" the right hand side to obtain \begin{equation}\label{82} \iint\!\!\cdots\!\!\iiint \!\! f_k(x-\lambda x_{k}) f_k(x-\lambda x_{k+1})\,d\sigma^{(d-k)}_{x_1,\dots,x_{k-1}}(x_{k+1})\,d\sigma^{(d-k)}_{x_1,\dots,x_{k-1}}(x_{k})\, d\sigma^{(d-k+1)}_{x_1,\dots,x_{k-2}}(x_{k-1})\cdots d\sigma(x_{1})\,dx. \end{equation} If $d= k+1$, then for fixed $x_1,\dots, x_k$ we can use arc-length to parameterize of the circle $S^{d-k}_{x_1,\dots,x_{k-1}}$, with $\theta=0$ and $\theta=2\pi$ corresponding to the point $x_k$, to write \begin{equation} \int f_k(x-\lambda x_{k+1})\,d\sigma^{(d-k)}_{x_1,\dots,x_{k-1}}(x_{k+1})=\int_0^{2\pi} f_k(x-\lambda x_{k+1}(x_1,\dots,x_k,\theta))\,d\theta. \end{equation} For any fixed $\theta\in[0,2\pi]$ we then define $\Delta_{k+1}(\theta)=\{0,v_1,\dots,v_k,v_{k+1}(\theta)\}$ with $v_{k+1}=v_{k+1}(\theta)$ satisfying $ |v_{k+1}|=|v_k|, $ $ |v_{k+1}-v_j|=|v_k-v_j| $ for all $1\leq j\leq k-1$ and use $\theta$ to determine the angle between $v_{k+1}$ and $v_k$ measured from the center of the circle $S^{d-k}_{x_1,\dots,x_{k-1}}$, consequently \begin{equation*} |v_{k+1}-v_k|=2\sin(\theta/2)\cdot \text{dist}(v_k,\text{span}\{v_1,\dots,v_{k-1}\}). 
\end{equation*} It follows that \begin{equation*} |T_{\Delta_k}(f_0,f_1,\dots,f_{k})(\lambda)|^2\leq\int_0^{2\pi} T_{\Delta_{k+1}(\theta)}(1_{[0,1]^d},\dots,1_{[0,1]^d},f_{k},f_{k})(\lambda)\,d\theta + O(\lambda) \end{equation*} and in light of (\ref{i}) and (\ref{ii}) that \begin{align*} |T_{\Delta_k}(f_0,f_1,\dots,f_{k})(\lambda)|^2 &\leq\int_0^{2\pi}T_{\Delta'_{k+1}(\theta)}(f_{k},f_{k},1_{[0,1]^d},\dots,1_{[0,1]^d})(\lambda)\,d\theta + O(\lambda)\\ &=\int_0^{2\pi}T_{\Delta'_{1}(\theta)}(f_{k},f_{k})(\lambda)\,d\theta + O(\lambda) \end{align*} where \[ T_{\Delta'_{1}(\theta)}(f_{k},f_{k})(\lambda)=T(f_{k},f_{k})(c(\theta)\lambda):=\iint f_k(x)f_k(x- c(\theta)\lambda x_1)\,d\sigma(x_1) \,dx \] with $c(\theta)=2\sin(\theta/2)\cdot \text{dist}(v_k,\text{span}\{v_1,\dots,v_{k-1}\})$. Lemma \ref{GvN0} now implies that \begin{equation*} |T_{\Delta'_{1}(\theta)}(f_{k},f_{k})(\lambda)|\leq\|f_k\|_{U^1(\varepsilon^4\lambda)}+O((\sin(\theta/2))^{-1/6}c_{\Delta_k}^{-1/6}\varepsilon^{2/3}) \end{equation*} since $c(\theta)\geq 2\sin(\theta/2)\,c_{\Delta_k}$. This completes the proof, when $d=k+1$, as $\int_0^{2\pi}(\sin(\theta/2))^{-1/6}\,d\theta<\infty, $ and in fact establishes the result in general, since if $d\geq k+2$, one can define a new non-degenerate simplex \[\Delta_{d-1}=\{0,v_1,\dots,v_{k-1}, v'_{k},\dots,v'_{d-2},v'_{d-1}\}\] with $v'_{d-1}=v_k$ and use the fact that \begin{equation*} T_{\Delta_k}(f_0,f_1,\dots,f_{k})(\lambda)=T_{\Delta_{d-1}}(f_0,\dots,f_{k-1},1_{[0,1]^d},\dots,1_{[0,1]^d}, f_{k})(\lambda)+O(\lambda).\qedhere \end{equation*} \end{proof} \subsection{A Direct proof of Lemma \ref{GvN000} when $d\geq k+2$}\label{direct} We choose to include an additional argument similar to the one presented above that covers the case $d\geq k+2$ directly. Arguments of this nature will be critical important in Section \ref{6.2} when we establish a ``relative generalized von-Neumann inequality" for simplices. \smallskip If $d\geq k+2$ then in (\ref{82}), for fixed $x_1,\dots, x_k$, we write \begin{equation} \sigma^{(d-k)}_{x_1,\dots,x_{k-1}}(x_{k+1})=\int_0^{\pi} (\sin\theta)^{d-k-1}\,d\sigma^{(d-k-1)}_{x_1,\dots,x_{k-1},x_k,\theta}(x_{k+1})\,d\theta \end{equation} where $\sigma^{(d-k-1)}_{x_1,\dots,x_{k-1},x_k,\theta}(x_{k+1})$ denotes the normalized measure on the sphere \begin{equation} S^{d-k-1}_{x_1,\dots,x_{k-1},x_k,\theta}=S^{d-1}(0,|v_{k+1}|)\cap S^{d-1}(x_1,|v_{k+1}-v_1|)\cap\cdots\cap S^{d-1}(x_{k},|v_{k+1}-v_{k}|)\end{equation} with $v_{k+1}=v_{k+1}(\theta)$ defined such that $ |v_{k+1}|=|v_k|, $ $ |v_{k+1}-v_j|=|v_k-v_j| $ for all $1\leq j\leq k-1$ with $\theta$ determining the angle between $v_{k+1}$ and $v_k$ measured from the center of the sphere $S^{d-k}_{x_1,\dots,x_{k-1}}$, consequently \begin{equation*} |v_{k+1}-v_k|=2\sin(\theta/2)\cdot \text{dist}(v_k,\text{span}\{v_1,\dots,v_{k-1}\}). 
\end{equation*} If we again let $\Delta_{k+1}(\theta)=\{0,v_1,\dots,v_k,v_{k+1}\}$, it follows that \begin{equation*} |T_{\Delta_k}(f_0,f_1,\dots,f_{k})(\lambda)|^2\leq\int_0^\pi (\sin\theta)^{d-k-1}T_{\Delta_{k+1}(\theta)}(1_{[0,1]^d},\dots,1_{[0,1]^d},f_{k},f_{k})(\lambda)\,d\theta + O(\lambda) \end{equation*} and in light of (\ref{i}) and (\ref{ii}) that \begin{align*} |T_{\Delta_k}(f_0,f_1,\dots,f_{k})(\lambda)|^2 &\leq\int_0^\pi (\sin\theta)^{d-k-1}T_{\Delta'_{k+1}(\theta)}(f_{k},f_{k},1_{[0,1]^d},\dots,1_{[0,1]^d})(\lambda)\,d\theta + O(\lambda)\\ &=\int_0^\pi (\sin\theta)^{d-k-1}T_{\Delta'_{1}(\theta)}(f_{k},f_{k})(\lambda)\,d\theta + O(\lambda) \end{align*} where again \begin{equation*} T_{\Delta_{1}(\theta)}(f_{k},f_{k})(\lambda)=T(f_{k},f_{k})(c(\theta)\lambda):=\iint f_k(x)f_k(x- c(\theta)\lambda x_1)\,d\sigma(x_1) \,dx \end{equation*} with $c(\theta)=2\sin(\theta/2)\cdot \text{dist}(v_k,\text{span}\{v_1,\dots,v_{k-1}\})$. Lemma \ref{GvN0} again implies that \begin{equation*} |T_{\Delta'_{1}(\theta)}(f_{k},f_{k})(\lambda)|\leq\|f_k\|_{U^1(\varepsilon^4\lambda)}+O((\sin(\theta/2))^{-1/6}c_{\Delta_k}^{-1/6}\varepsilon^{2/3}) \end{equation*} since $c(\theta)\geq 2\sin(\theta/2)\,c_{\Delta_k}$ and this completes the proof as $\int_0^\pi (\sin\theta)^{d-k-1}(\sin(\theta/2))^{-1/6}\,d\theta<\infty.$ \qed \bigskip \section{Proof of Theorems \ref{Rect} and \ref{ProdSimp}}\label{4} We now proceed with the main task, namely the proofs of Theorems \ref{Rect} and \ref{ProdSimp}. \subsection{Reducing Theorems \ref{Rect} and \ref{ProdSimp} to quantitative results for subsets of $[0,1]^{d_1}\times[0,1]^{d_2}$}\label{red} \begin{propn}[Rectangles]\label{Propn1} Let $0<c\leq1$ and $A\subseteq[0,1]^{d_1}\times[0,1]^{d_2}$ with $d_1,d_2\geq2$ and $\alpha=|A|>0$. If $\{\lambda_j\}$ is any sequence in $(0,1)$ with $\lambda_{j+1}<\frac{1}{2}\lambda_j$ for all $j\geq1$, then there exist $1\leq j\leq J(\alpha)$ and a quadruple of points \begin{equation*}\{(x,y), (x',y), (x,y'), (x',y')\}\subseteq A\quad\text{with}\quad|x-x'|=\lambda_j\text{ and\, }|y-y'|=c\lambda_j.\end{equation*} In fact, for $\lambda=\lambda_j$ \begin{equation*} \iiiint 1_A(x,y)1_A(x-\lambda x_1,y)1_A(x,y-c\lambda y_1)1_A(x-\lambda x_1,y-c\lambda y_1)\,d\sigma_1(x_1)\,d\sigma_2(y_1)\,dx\,dy\geq C(\alpha)>0 \end{equation*} where $\sigma_i$ denotes, for $i=1,2$, the normalized measure on the unit sphere $S^{d_i-1}\subseteq\mathbb R^{d_i}$ centered at the origin induced by the Lebesgue measure on $\mathbb R^{d_i}$. \end{propn} \begin{propn}[Product of Simplices]\label{Propn11} Let $\Delta_{k_i}=\{0,v^i_1,v^i_2,\dots,v^i_{k_i}\}$ be fixed non-degenerate simplices of dimension $k_i$ with \[c_{\Delta_{k_i}}=\min_{1\leq j\leq k_i}\text{\emph{dist}}(v^i_j,\text{\emph{span}}\left\{\{v^i_1,\dots,v^i_{k_i}\}\setminus v^i_j\right\})\leq1\] for $i=1,2$ and $A\subseteq[0,1]^{d_1}\times[0,1]^{d_2}$ with $d_i\geq k_i+3$ and $\alpha=|A|>0$. If $\{\lambda_j\}$ is any sequence in $(0,1)$ with $\lambda_{j+1}<\frac{1}{2}\lambda_j$ for all $j\geq1$, then there exist $1\leq j\leq J(\alpha,\Delta_{k_1},\Delta_{k_2})$ and a product $\Delta_{k_1}'\times\Delta_{k_2}'\subseteq A$ with each $\Delta_{k_i}'\subseteq[0,1]^{d_i}$ an isometric copy of $\lambda_j\cdot\Delta_{k_i}$. 
In fact, for $\lambda=\lambda_j$ \begin{equation*} \iiiint \prod_{i=0}^{k_1}\prod_{j=0}^{k_2}1_A(x-\lambda\cdot U_1(v^1_i),y-\lambda\cdot U_2(v^2_j)) \,d\mu_1(U_1)\,d\mu_2(U_2)\,dx\,dy\geq C(\alpha)>0 \end{equation*} where $v_0^1=v_0^2=0$ and $\mu_1$ and $\mu_2$ denote the Haar measures on $SO(d_1)$ and $SO(d_2)$ respectively. \end{propn} The reduction of Theorems \ref{Rect} and \ref{ProdSimp} to these results in the compact setting of $[0,1]^{d_1}\times[0,1]^{d_2}$ is straightforward and precisely the approach taken by Bourgain in \cite{B} to prove Theorem \ref{BourSimp}, but for completeness we supply the details for Theorem \ref{Rect} below. \begin{proof}[Proof that Proposition \ref{Propn1} implies Theorem \ref{Rect}] We may assume that $c:=|v_2|\leq|v_1|=1$. Arguing indirectly we suppose that $A\subseteq \mathbb R^d$ with $d\geq 4$ is a set with $\delta^*(A)>0$ for which the conclusion of Theorem \ref{Rect} fails to hold, namely that there exist arbitrarily large $\lambda\in\mathbb R$ for which $A$ does not contain an isometric copy of $\lambda \cdot\Box$. We now let $0<\alpha<\delta^*(A)$ and set $J=J(\alpha)$ from Proposition \ref{Propn1}. By our indirect assumption we can choose a sequence $\{\lambda_j\}_{j=1}^J$ with the property that $\lambda_{j+1}<\frac{1}{2}\lambda_j$ for all $1\leq j\leq J-1$ and $A$ does not contain an isometric copy of $\lambda_j \cdot\Box$ for each $1\leq j\leq J$. It follows from the definition of upper Banach density that there exist $N\in\mathbb R$ with $N\gg\lambda_1$ and $t_0\in\mathbb R^d$ for which \[\frac{|A\cap(t_0+Q_N)|}{|Q_N|}\geq\alpha.\] Rescaling $A\cap(t_0+Q_N)$ to a subset of $[0,1]^d$ and applying Proposition \ref{Propn1} leads to a contradiction. \end{proof} \subsection{Proof of Propositions \ref{Propn1} and \ref{Propn11}, Part I: A Density Increment Strategy }\label{SecPart1} \begin{propn}[Dichotomy for Rectangles]\label{part1} Let $0<c\leq1$ and $B_i\subseteq[0,1]^{d_i}$ with $d_i\geq 2$ and $\beta_i=|B_i|>0$ for $i=1,2$. If $A\subseteq B_1\times B_2$ with $|A|=\alpha\beta_1\beta_2>0$ and $0<\lambda\leq\varepsilon\ll c\beta_1^{6}\beta_2^{6}\alpha^{32}$, then either \begin{equation*} \frac{1}{\beta_1^{2}\beta_2^{2}}\iiiint 1_A(x,y)1_A(x-\lambda x_1,y)1_A(x,y-c\lambda y_1)1_A(x-\lambda x_1,y-c\lambda y_1)\,d\sigma_1(x_1)\,d\sigma_2(y_1)\,dx\,dy\geq\frac{1}{2}\alpha^{4}\end{equation*} or there exist cubes $Q_i\subseteq[0,1]^{d_i}$ of side-length $\varepsilon^4\lambda$, sets $B_i'$ in $Q_i$, and $c'>0$ for which \begin{equation*} \frac{|A\cap(B_1'\times B_2')|}{|B_1'\times B_2'|}\geq \alpha+c'\,\alpha^{32}. \end{equation*} provided $B_1$ and $B_2$ are $(\varepsilon,\varepsilon^4\lambda)$-uniformly distributed subsets of $[0,1]^{d_1}$ and $[0,1]^{d_2}$ respectively. 
\end{propn} \begin{propn}[Dichotomy for Product of Simplices]\label{part11} For $i=1,2$ let $B_i\subseteq[0,1]^{d_i}$ with $d_i\geq k_i+3$ and $\beta_i=|B_i|>0$ and $\Delta_{k_i}=\{v_0^i,v^i_1,v^i_2,\dots,v^i_{k_i}\}$ be a non-degenerate simplex of dimension $k_i$ with $v_0^i=0$ and \[c_{\Delta_{k_i}}=\min_{1\leq j\leq k_i}\text{\emph{dist}}(v^i_j,\text{\emph{span}}\left\{\{v^i_1,\dots,v^i_{k_i}\}\setminus v^i_j\right\})\leq1.\] If $A\subseteq B_1\times B_2$ with $|A|=\alpha\beta_1\beta_2>0$ and \[0<\lambda\leq\varepsilon\ll_{k_1,k_2}(c_{\Delta_{k_1}}c_{\Delta_{k_2}})^{2}(\beta_1^{k_1+1}\beta_2^{k_2+1}\alpha^{(k_1+1)(k_2+1)})^{16}\] then either \begin{equation*} \frac{1}{\beta_1^{k_1+1}\beta_2^{k_2+1}}\iiiint \prod_{i=0}^{k_1}\prod_{j=0}^{k_2}1_A(x-\lambda\cdot U_1(v^1_i),y-\lambda\cdot U_2(v^2_j)) \,d\mu_1(U_1)\,d\mu_2(U_2)\,dx\,dy\geq\frac{1}{2}\alpha^{(k_1+1)(k_2+1)}\end{equation*} or there exist cubes $Q_i\subseteq[0,1]^{d_i}$ of side-length $\varepsilon^4\lambda$, sets $B_i'$ in $Q_i$, and $c'>0$ for which \begin{equation*} \frac{|A\cap(B_1'\times B_2')|}{|B_1'\times B_2'|}\geq \alpha+c'\,\alpha^{8(k_1+1)(k_2+1)}. \end{equation*} provided $B_1$ and $B_2$ are $(\varepsilon,\varepsilon^4\lambda)$-uniformly distributed subsets of $[0,1]^{d_1}$ and $[0,1]^{d_2}$ respectively. \end{propn} Sections \ref{SecPart1} and \ref{SecPart11} below are devoted to the proofs of Propositions \ref{part1} and \ref{part11}. Central to each proof is an appropriate ``relative generalized von-Neumann inequality'', namely Lemmas \ref{GvN1} and \ref{GvN11}. These relative generalized von-Neumann inequalities in turn imply Corollaries \ref{CorGvN1} and \ref{CorGvN11}, which together with Corollaries \ref{InvCor1} and \ref{InvCor11} (which are both consequences of an appropriate common ``Inverse Theorem", namely Theorem \ref{InvThm}) immediately imply Propositions \ref{part1} and \ref{part11} respectively. It is important to note that Propositions \ref{part1} and \ref{part11} are not in and of themselves sufficient to establish Propositions \ref{Propn1} and \ref{Propn11}. In order to apply a density increment argument one would need that the sets $B_1'$ and $B_2'$ produced by Propositions \ref{part1} and \ref{part11}, for which $A$ has increased density on $B_1'\times B_2'$, were $(\eta,L')$-uniformly distributed for a sufficiently small $\eta$ and for $L'$ attached to some of the $\lambda_j$'s on $Q_1$ and $Q_2$ respectively, which they simply may not be. In Section \ref{SecPart2} we complete the proofs of Proposition \ref{Propn1} and \ref{Propn11} by showing that we can obtain suitably uniformly distributed sets $B_1'$ and $B_2'$ by appealing to a version of Szemer\'edi's Regularity Lemma \cite{Sz} adapted to a sequence of scales. \section{Proof of Proposition \ref{part1} }\label{SecPart1} At the heart of our proof of Proposition \ref{part1} will be an appropriate ``relative generalized von-Neumann inequality for rectangles", namely Lemma \ref{GvN1} below. This result, together with a companion ``Inverse Theorem" (Theorem \ref{InvThm} below) and Proposition \ref{Propn0} will ultimately furnish a proof of Proposition \ref{part1}. Throughout this section we fix $B_i\subseteq[0,1]^{d_i}$ with $d_i\geq 2$ to be arbitrary sets with $\beta_i=|B_i|>0$ for $i=1,2$. 
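Before turning to the details, we note for orientation that, with the notation introduced in the next subsection, the integral appearing in Proposition \ref{Propn1} is precisely $T_{\Box_c}(1_A,1_A,1_A,1_A)(\lambda)$. In other words, up to the normalizing weights attached to $B_1$ and $B_2$, the quantity we need to bound from below counts configurations \[\{(x,y),\,(x-\lambda x_1,y),\,(x,y-c\lambda y_1),\,(x-\lambda x_1,y-c\lambda y_1)\}\quad\text{with}\quad x_1\in S^{d_1-1},\ y_1\in S^{d_2-1},\] that is, isometric copies of $\lambda\cdot\Box$ positioned with their sides parallel to the two factors $\mathbb R^{d_1}$ and $\mathbb R^{d_2}$.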
\subsection{A Relative Generalized von-Neumann Inequality for Distances and Rectangles} \begin{defn}[A Counting Function for Rectangles] For any $0<c\leq1$, $0<\lambda\ll1$ and functions \[f_{ij}:[0,1]^{d_1}\times [0,1]^{d_2}\to\mathbb R\] with $i,j\in\{0,1\}$ we define \[T_{\Box_c}(\lambda):=T_{\Box_c}(f_{00},f_{10},f_{01},f_{11})(\lambda)\] where \begin{equation} T_{\Box_c}(\lambda)= \iiiint f_{00}(x,y)f_{10}(x-\lambda x_1,y)f_{01}(x,y-c\lambda y_1)f_{11}(x-\lambda x_1,y-c\lambda y_1)\,d\sigma_1(x_1)\,d\sigma_2(y_1)\,dx\,dy \end{equation} \end{defn} Note that if we let \begin{equation} \nu(x,y)=\nu_1(x)^{1/2}\nu_2(y)^{1/2} \end{equation} where \begin{equation} \nu_1=\beta_1^{-1}1_{B_1}\quad\text{and}\quad \nu_2=\beta_2^{-1}1_{B_2} \end{equation} then, in light of Proposition \ref{Propn0}, we have \begin{equation}\label{observation} T_{\Box_c}(\nu,\nu,\nu,\nu)(\lambda)=T(\nu_1,\nu_1)(\lambda)\cdot T(\nu_2,\nu_2)(c\lambda)=1+O(\beta_1^{-2}\beta_2^{-2}c^{-1/6}\varepsilon^{2/3}) \end{equation} for any $0<\lambda\leq\varepsilon\ll1$, provided $B_1$ and $B_2$ are $(\varepsilon,\varepsilon^4\lambda)$-uniformly distributed subsets of $[0,1]^{d_1}$ and $[0,1]^{d_2}$ respectively. \comment{ \begin{align*} T_{B_1,B_2}&(f_{00},\dots,f_{k_1k_2})(\lambda)\\ & =\idotsint \prod_{i=0}^{k_1}\prod_{j=0}^{k_2}f_{ij}(x_i,y_j)\,\Omega^{(k_1-1)}_{1,\lambda}(\mathbf{x})\,\Omega^{(k_2-1)}_{2,\lambda}(\mathbf{y})\,d\mu_{B_1}(x_1)\cdots d\mu_{B_1}(x_{k_1})\,d\mu_{B_2}(y_1)\cdots d\mu_{B_2}(y_{k_2}). \end{align*} } \begin{defn}[$\Box(L)$-norm] For $0<L\ll1$ and functions $f:[0,1]^{d_1}\times [0,1]^{d_2}\to\mathbb R$ we define \begin{equation}\label{B1} \|f\|_{\Box(L)}^4=\int_{[0,1]^{d_1}}\int_{[0,1]^{d_2}}\|f\|_{\Box(L)(t_1,t_2)}^4\,dt_2\,dt_1 \end{equation} with \begin{equation}\label{B2} \|f\|_{\Box(L)(t_1,t_2)}^4=\frac{1}{L^{2(d_1+d_2)}}\iiiint\limits_{\substack{x,x'\in t_1+Q_{1,L}\\ y,y'\in t_2+Q_{2,L}}} f(x,y)f(x',y)f(x,y')f(x',y')\,\,dx'\,dx\,dy'\,dy \end{equation} where $Q_{i,L}=[-L/2,L/2]^{d_i}$ for $i=1,2$. \end{defn} As before it is a straightforward but important observation that $\|f\|_{\Box(L)}^4$ equals \begin{equation}\label{almost2} \iiiint f(x,y)f(x-x_1,y)f(x,y-y_1)f(x-x_1,y-y_1) \psi_{1,L}(x_1)\psi_{2,L}(y_1)\,dx_1\,dx\,dy_1\,dy+O(L) \end{equation} where $ \psi_{i,L}=L^{-2d_i}\,1_{Q_{i,L}}*1_{Q_{i,L}}. $ \smallskip In this setting we have the following ``generalized von-Neumann inequality" relative to $B_1\times B_2$. \begin{lem}[Generalized von-Neumann for Rectangles relative to $B_1\times B_2$]\label{GvN1} Let $0<c\leq1$ and $ \nu=\nu_1^{1/2}\otimes\nu_2^{1/2} $ where $ \nu_1=\beta_1^{-1}1_{B_1}$ and $\nu_2=\beta_2^{-1}1_{B_2} $. For any $0<\varepsilon,\lambda\ll1$ and functions \[f_{ij}:[0,1]^{d_1}\times [0,1]^{d_2}\to[-1,1]\] with $i,j\in\{0,1\}$ we have \begin{equation*} |T_{\Box_c}(f_{00}\nu,f_{10}\nu,f_{01}\nu,f_{11}\nu)(\lambda)|\leq \prod_{i,j\in\{0,1\}}\!\! \|f_{ij}\nu\|_{\Box(\varepsilon^4\lambda)} + O(\beta_1^{-1}\beta_2^{-1}c^{-1/24}\varepsilon^{1/6}). \end{equation*} \end{lem} It is easy to see that Lemma \ref{GvN1}, combined with Proposition \ref{Propn0}, gives the following \begin{cor}\label{CorGvN1} Let $0<c\leq1$, $0<\alpha,\beta_1,\beta_2\leq 1$ and $0<\lambda\leq\varepsilon\ll c\beta_1^{6}\beta_2^{6}\alpha^{24}$. 
If $A\subseteq B_1\times B_2\subseteq[0,1]^{d_1}\times[0,1]^{d_2}$ with $|A|=\alpha\beta_1\beta_2$ and $ \|f_A\nu\|_{\Box(\varepsilon^4\lambda)}\ll\alpha^{4}, $ then \begin{equation*} T_{\Box_c}(1_A\nu,1_A\nu,1_A\nu,1_A\nu)(\lambda)\geq\frac{1}{2}\alpha^{4} \end{equation*} provided $B_1$ and $B_2$ are $(\varepsilon,\varepsilon^4\lambda)$-uniformly distributed subsets of $[0,1]^{d_1}$ and $[0,1]^{d_2}$ respectively. \end{cor} \begin{proof}[Proof of Corollary \ref{CorGvN1}] It follows immediately from Lemma \ref{GvN1} that \begin{equation*} \Bigl|T_{\Box_c}(1_A\nu,1_A\nu,1_A\nu,1_A\nu)(\lambda)-\alpha^4T_{\Box_c}(\nu,\nu,\nu,\nu)(\lambda)\Bigr| \leq 15\,\|f_A\nu\|_{\Box(\varepsilon^4\lambda)}+ O(\beta_1^{-1}\beta_2^{-1}c^{-1/24}\varepsilon^{1/6}) \end{equation*} for any $0<\varepsilon,\lambda\ll1$, where $ f_A=1_A-\A1_{B_1\times B_2} $. The result follows since, as noted in (\ref{observation}), the fact that $B_1$ and $B_2$ are $(\varepsilon,\varepsilon^4\lambda)$-uniformly distributed subsets of $[0,1]^{d_1}$ and $[0,1]^{d_2}$ allows us to use Proposition \ref{Propn0} and conclude that \[T_{\Box_c}(\nu,\nu,\nu,\nu)(\lambda)=1+O(\beta_1^{-2}\beta_2^{-2}c^{-1/6}\varepsilon^{2/3}) \] for any $0<\lambda\leq\varepsilon\ll1$, as required. \end{proof} \subsection{Proof of Lemma \ref{GvN1}} The proof of Lemma \ref{GvN1} follows from two clever applications of Cauchy-Schwarz combined with the following relative version of Lemma \ref{GvN0}. \begin{lem}[Relative Version of Lemma \ref{GvN0}]\label{GvN0withB} Let $B\subseteq[0,1]^d$ with $d\geq2$ and $\beta=|B|$. For any $0<c\leq1$, $0<\varepsilon, \lambda\ll1$ and functions $f_0,f_1:[0,1]^d\to[-1,1]$ we have \begin{equation*} \left|T(f_0\nu,f_1\nu)(c\lambda)\right|\leq\prod_{j\in\{0,1\}} \left(\iint f_j\nu(x)f_j\nu(x-x_1)\psi_{\varepsilon^4\lambda}(x_1)\,dx_1\,dx\right)^{1/2}\!\!+\,O(\beta^{-1}c^{-1/6}\varepsilon^{2/3}). \end{equation*} where $\nu=\beta^{-1}1_B$. \end{lem} \begin{proof} Same as that for Lemma \ref{GvN0} above, but noting that $\|f_j\nu\|_2^2\leq\beta^{-1}$ for $j=0,1$ \end{proof} To prove Lemma \ref{GvN1} we first observe that \begin{equation*} \left|T_{\Box_c}(f_{00}\nu,f_{10}\nu, f_{01}\nu,f_{11}\nu)(\lambda)\right| \leq\iint \left|T(g_0^{x,x_1}\nu_2,g_1^{x,x_1}\nu_2)(c\lambda)\right|\,\nu_1(x)\nu_1(x-\lambda x_1) \,d\sigma_1(x_1)\,dx \end{equation*} where \begin{align*} g_0^{x,x_1}(y)&=f_{00}(x,y)f_{10}(x-\lambda x_1,y)\\ g_1^{x,x_1}(y)&=f_{01}(x,y)f_{11}(x-\lambda x_1,y). \end{align*} Applying Lemma \ref{GvN0withB} to $T(g_0^{x,x_1}\nu_2,g_1^{x,x_1}\nu_2)(c\lambda)$ followed by an application of Cauchy-Schwarz (and switching the order of integration) shows that $|T_{\Box_c}(f_{00}\nu,\dots,f_{11}\nu)(\lambda)|^2$ is majorized by \[ \prod_{j\in\{0,1\}}\iint \left|T(h_{0j}^{y,y_1}\nu_1,h_{1j}^{y,y_1}\nu_1)(\lambda)\right|\nu_2(y)\nu_2(y-\lambda y_1) \psi_{2,\varepsilon^4\lambda}(y_1)\,dy_1\,dy+O(\beta_1^{-2}\beta_2^{-2}c^{-1/6}\varepsilon^{2/3}) \] where \begin{align*} h_{0j}^{y,y_1}(x)&=f_{0j}(x,y)f_{0j}(x,y-\lambda y_1)\\ h_{1j}^{y,y_1}(x)&=f_{1j}(x,y)f_{1j}(x,y-\lambda y_1). 
\end{align*} Applying Lemma \ref{GvN0withB} once more, this time to $T(h_{0j}^{y,y_1}\nu_1,h_{1j}^{y,y_1}\nu_1)(\lambda)$, followed by another application of Cauchy-Schwarz reveals that $\left|T_{\Box_c}(f_{00}\nu,\dots,f_{11}\nu)(\lambda)\right|^4$ is majorized by \[ \prod_{i,j\in\{0,1\}}\iiiint h_{ij}^{y,y_1}\nu_1(x)h_{ij}^{y,y_1}\nu_1(x-x_1) \,\nu_2(y)\nu_2(y-\lambda y_1) \psi_{1,\varepsilon^4\lambda}(x_1)\psi_{2,\varepsilon^4\lambda}(y_1)\,dx_1\,dx\,dy_1\,dy+O(\beta_1^{-4}\beta_2^{-4}c^{-1/6}\varepsilon^{2/3}) \] Since \begin{equation*} h_{ij}^{y,y_1}\nu_1(x)h_{ij}^{y,y_1}\nu_1(x-x_1)\nu_2(y)\nu_2(y-\lambda y_1)=f_{ij}\nu(x,y)f_{ij}\nu(x-x_1,y)f_{ij}\nu(x,y-y_1)f_{ij}\nu(x-x_1,y-y_1)\end{equation*} the result follows in light of observation (\ref{almost2}). \qed \medskip \subsection{Inverse Theorem for the $\Box(L)$-norm} The final piece in the proof of Proposition \ref{part1} is the following \begin{thm}[Inverse Theorem]\label{InvThm} Let $0<\eta, \beta_1,\beta_2\leq1$ and $B_1$ and $B_2$ be $(\varepsilon,L)$-uniformly distributed subsets of $[0,1]^{d_1}$ and $[0,1]^{d_2}$ with $0<L\leq\varepsilon\ll \eta^8\,\beta_1^{2}\beta_2^{2}$. If $f:[0,1]^{d_1}\times [0,1]^{d_2}\to[-1,1]$ satisfies \begin{equation}\label{assumption} \iint f(x,y)\nu_1(x)\nu_2(y)\,dx\,dy=0\quad\quad\text{and}\quad\quad \|f\nu\|_{\Box(L)}\geq\eta \end{equation} with $ \nu=\nu_1^{1/2}\otimes\nu_2^{1/2} $ and $ \nu_1=\beta_1^{-1}1_{B_1}$ and $\nu_2=\beta_2^{-1}1_{B_2} $, then there exist cubes $Q_i\subseteq[0,1]^{d_i}$ of side-length $L$ and sets $B_i'\subseteq B_i\cap Q_i$ such that \begin{equation}\label{3.25} \frac{1}{L^{d_1+d_2}}\iint_{B_1'\times B_2'} f(x,y)\nu_1(x)\nu_2(y)\,dx\,dy\geq c\,\eta^8. \end{equation} \end{thm} As a consequence of Theorem \ref{InvThm} we immediately obtain the following corollary which together with Corollary \ref{CorGvN1} implies Proposition \ref{part1}. \begin{cor}\label{InvCor1} Let $0<\alpha,\beta_1,\beta_2\leq 1$ and $B_1$ and $B_2$ be $(\varepsilon,\varepsilon^4\lambda)$-uniformly distributed subsets of $[0,1]^{d_1}$ and $[0,1]^{d_2}$ with $0<\lambda\leq\varepsilon\ll\beta_1^{2}\beta_2^{2}\alpha^{32}$. If $A\subseteq B_1\times B_2\subseteq[0,1]^{d_1}\times[0,1]^{d_2}$ with $|A|=\alpha\beta_1\beta_2$ and \[\|f_A\nu\|_{\Box(\varepsilon^4\lambda)}\gg\alpha^{4}\] with $f_A=1_A-\A1_{B_1\times B_2}$, then there exist cubes $Q_i\subseteq[0,1]^{d_i}$ of side-length $\varepsilon^4\lambda$ and sets $B_i'$ in $Q_i$ for which \begin{equation*} \frac{|A\cap(B_1'\times B_2')|}{|B_1'\times B_2'|}\geq \alpha+c\,\alpha^{32}. \end{equation*} \end{cor} \begin{proof}[Proof of Theorem \ref{InvThm}] If \eqref{3.25} holds for some cubes $Q_i:=t_i+Q_L$ and sets $B_i':=B_i\cap Q_i$, then Theorem \ref{InvThm} follows, so we may assume for all $t_1\in [0,1]^{d_1}$ and $t_2\in [0,1]^{d_2}$ that \begin{equation}\label{3.27} I(t_1,t_2):=\frac{1}{\beta_1\beta_2L^{d_1+d_2}}\int_{t_1+Q_L}\int_{t_2+Q_L} f(x,y)\,dx\,dy\leq c\,\eta^8 \end{equation} with say $c=2^{-16}$. 
It is then easy to see that this assumption, together with our assumption on the sets $B_i$, namely that \[\int ||B_i\cap (t+Q_L)|-\beta_i L^{d_i}|^2\,dt \leq \varepsilon^2 L^{2d_i},\] implies, via an easy averaging argument, that \begin{equation}\label{3.30} |G_{\eta,\varepsilon}|\geq \frac{\eta^4}{16}\quad\text{where}\quad G_{\eta,\varepsilon}=\left\{(t_1,t_2)\in G_\varepsilon\,:\, \|f\nu\|^4_{\Box(L)(t_1,t_2)}\geq \frac{\eta^4}{16}\right\} \end{equation} and \begin{equation*} G_\varepsilon=\left\{(t_1,t_2)\,:\, \bigl||B_i\cap (t_i+Q_L)|-\beta_i L^{d_i}\bigr|\leq \varepsilon^{1/2} L^{d_i}\ \text{for $i=1,2$}\right\}. \end{equation*} We first show that if there exists $(t_1,t_2)\in G_{\eta,\varepsilon}$ for which $|I(t_1,t_2)|\leq \eta^4/2^9$, then Theorem \ref{InvThm} holds. Indeed, by the pigeonhole principle, we see that given such a pair $(t_1,t_2)$ we may choose $x_1\in [0,1]^{d_1}$ and $y_1\in[0,1]^{d_2}$ so that \begin{equation}\label{3.31} \left|\frac{1}{\beta_1\beta_2L^{d_1+d_2}} \int_{t_1+Q_L}\int_{t_2+Q_L} f(x_2,y_2)f(x_2,y_1)f(x_1,y_2)\,dx_2\,dy_2\,\right| \geq \frac{\eta^4}{32}. \end{equation} If we now write $f_{y_1}(x_2)=f(x_2,y_1)$, $f_{x_1}(y_2)=f(x_1,y_2)$ and decompose $f_{y_1}=f_{y_1}^+-f_{y_1}^-$ and $f_{x_1}=f_{x_1}^+-f_{x_1}^-$ into their respective positive and negative parts, then it follows that \begin{equation*} \left|\frac{1}{\beta_1\beta_2L^{d_1+d_2}}\int_{t_1+Q_L}\int_{t_2+Q_L}f(x_2,y_2)g_1(x_2)g_2(y_2)\,dx_2\,dy_2\,\right| \geq \frac{\eta^4}{2^7}, \end{equation*} for some functions $g_i:[0,1]^{d_i}\to [0,1]$. Writing these functions as an average of indicator functions, namely \[g_i(x)=\int_0^1 1_{\{g_i(x)\geq s\}}\,ds \] and appealing again to the pigeonhole principle, we see that we may choose sets $U_1$ and $V_1$ so that \begin{equation}\label{8} \left|\frac{1}{\beta_1\beta_2L^{d_1+d_2}} \int_{t_1+Q_L}\int_{t_2+Q_L} f(x_2,y_2)1_{U_1}(x_2)1_{V_1}(y_2)\,dx_2\,dy_2\,\right| \geq \frac{\eta^4}{2^7}. \end{equation} We now set $U_2=U_1^c$, $V_2=V_1^c$ and define, for $j,j'\in \{1,2\}$, the integrals \begin{equation*} I_{j,j'}:= \frac{1}{\beta_1\beta_2L^{d_1+d_2}} \int_{t_1+Q_L}\int_{t_2+Q_L}f(x_2,y_2)1_{U_j}(x_2)1_{V_{j'}}(y_2)\,dx_2\,dy_2.\end{equation*} Note that we know $|I_{1,1}|\geq \eta^4/2^7$ and if $I_{1,1}\geq \eta^4/2^7$ then \eqref{3.25} holds for the sets $B_1'=B_1\cap (t_1+Q_L)\cap U_1$ and $B_2'=B_2\cap (t_2+Q_L)\cap V_1$. We may therefore assume that $I_{1,1}\leq -\eta^4/2^7$, but this assumption, together with the previous assumption that \begin{equation*} I(t_1,t_2)= I_{1,1}+I_{1,2}+I_{2,1}+I_{2,2}\geq -\eta^4/2^9\end{equation*} immediately implies that $I_{j,j'}\geq\eta^4/2^9$ for some $(j,j')\neq (1,1)$ and \eqref{3.25} again follows. It remains to consider the case when $I(t_1,t_2)\leq -\eta^4/2^9$ for all $(t_1,t_2)\in G_{\eta,\varepsilon}$. Then by \eqref{3.27} and \eqref{3.30} \begin{align*} \iint I(t_1,t_2)\,dt_1\,dt_2 &= \iint_{G_{\eta,\varepsilon}} I(t_1,t_2)\,dt_1\,dt_2 + \iint_{G_{\eta,\varepsilon}^c} I(t_1,t_2)\,dt_1\,dt_2 \leq -\frac{\eta^4}{2^4}\,\frac{\eta^4}{2^9}\,+\,2\,\frac{\eta^8}{2^{16}}\, \leq\,-\frac{\eta^8}{2^{15}}. \end{align*} On the other hand, \begin{align*} \iint I(t_1,t_2)\,dt_1\,dt_2 =O(L) \end{align*} by the first assumption of (\ref{assumption}), which is a contradiction. This proves the theorem. 
\end{proof} \section{Proof of Proposition \ref{part11}}\label{SecPart11} An appropriate ``relative generalized von-Neumann inequality'' will again be central to our proof of Proposition \ref{part11}, specifically a ``relative generalized von-Neumann inequality for product of simplices". However, the true heart of the argument is in fact the analogous result for \emph{just} simplices, the proof of this ``relative generalized von-Neumann inequality for simplices" is necessarily significantly more involved than the analogous relative result for distances (whose proof was essentially identical to the non-relative case) and it is here that our loss in dimension appears. We fix non-degenerate simplices $\Delta_{k_i}=\{v_0^i,v^i_1,v^i_2,\dots,v^i_{k_i}\}$ of dimension $k_i$ with $v_0^i=0$ and \[c_{\Delta_{k_i}}=\min_{1\leq j\leq k_i}\text{dist}(v^i_j,\text{span}\left\{\{v^i_1,\dots,v^i_{k_i}\}\setminus v^i_j\right\})\leq1\] and let $B_i\subseteq[0,1]^{d_i}$ with $d_i\geq k_i+3$ and $\beta_i=|B_i|>0$ denote arbitrary sets, for $i=1,2$. In contrast to the proof of Proposition \ref{part1}, we will need to assume that our sets $B_1$ and $B_2$ are suitably uniformly distributed, and make use of Proposition \ref{Propn00}, throughout the proof of Proposition \ref{part11}. \subsection{A Relative Generalized von-Neumann Inequality for Simplices and Products of Simplices} \begin{defn}[Counting function for $\Delta_{k_1}\times \Delta_{k_2}$] Let $0<\lambda\ll1$. For functions $f_{ij}:[0,1]^{d_1}\times [0,1]^{d_2}\to\mathbb R$ with $(i,j)\in\{0,1,\dots,k_1\}\times\{0,1,\dots,k_2\}$ we define \begin{equation} T_{\Delta_{k_1},\Delta_{k_2}}(f_{00},\dots,f_{k_1k_2})(\lambda)=\iiiint \prod_{i=0}^{k_1}\prod_{j=0}^{k_2}f_{ij}(x-\lambda\cdot U_1(v^1_i),y-\lambda\cdot U_2(v^2_j)) \,d\mu_1(U_1)\,d\mu_2(U_2)\,dx\,dy\end{equation} \end{defn} Note that if we let \begin{equation} \widetilde{\nu}(x,y)=\nu_1(x)^{1/(k_2+1)}\nu_2(y)^{1/(k_1+1)} \end{equation} where $ \nu_1=\beta_1^{-1}1_{B_1}$ and $\nu_2=\beta_2^{-1}1_{B_2} $ then \[T_{\Delta_{k_1},\Delta_{k_2}}(\widetilde{\nu},\dots,\widetilde{\nu})(\lambda)=T_{\Delta_{k_1}}(\nu_1,\dots,\nu_1)(\lambda)\cdot T_{\Delta_{k_2}}(\nu_2,\dots,\nu_2)(\lambda)\] and in light of Proposition \ref{Propn00} we can conclude that \begin{equation}\label{observation} T_{\Delta_{k_1},\Delta_{k_2}}(\widetilde{\nu},\dots,\widetilde{\nu})(\lambda)=1+O_{k_1,k_2}(\beta_1^{-k_1-1}\beta_2^{-k_2-1}c_{\Delta_{k_1}}^{-1/6}c_{\Delta_{k_2}}^{-1/6}\varepsilon^{2/3}) \end{equation} for any $0<\lambda\leq\varepsilon\ll1$, provided $B_1$ and $B_2$ are $(\varepsilon,\varepsilon^4\lambda)$-uniformly distributed subsets of $[0,1]^{d_1}$ and $[0,1]^{d_2}$. \newpage In this setting we have the following ``generalized von-Neumann inequality", for which it is essential that our count of product simplices is taken relative to suitably uniformly distributed sets $B_1$ and $B_2$. 
\begin{lem}[Generalized von-Neumann for $\Delta_{k_1}\times \Delta_{k_2}$ relative to $B_1\times B_2$]\label{GvN11} Let \[ \widetilde{\nu}=\nu_1^{1/(k_2+1)}\otimes\nu_2^{1/(k_1+1)}\quad\text{and}\quad \nu=\nu_1^{1/2}\otimes\nu_2^{1/2} \] where $ \nu_1=\beta_1^{-1}1_{B_1}$ and $\nu_2=\beta_2^{-1}1_{B_2} $ For any $0<\lambda\leq\varepsilon\ll\min\{c_{\Delta_{k_1}},c_{\Delta_{k_2}}\}$ and functions \[f_{ij}:[0,1]^{d_1}\times [0,1]^{d_2}\to[-1,1]\] with $(i,j)\in\{0,1,\dots,k_1\}\times\{0,1,\dots,k_2\}$ we have \begin{equation*} |T_{\Delta_{k_1},\Delta_{k_2}}(f_{00}\widetilde{\nu},\dots,f_{k_1k_2}\widetilde{\nu})(\lambda)|\leq \min\limits_{\substack{i=0,1,\dots,k_1\\ j=0,1,\dots,k_2}} \|f_{ij}\nu\|_{\Box(\varepsilon^4\lambda)} + O_{k_1,k_2}(\beta_1^{-k_1-1}\beta_2^{-k_2-1}c_{\Delta_{k_1}}^{-1/8}c_{\Delta_{k_2}}^{-1/8}\varepsilon^{1/16}) \end{equation*} provided $B_1$ and $B_2$ are $(\varepsilon,\varepsilon^4\lambda)$-uniformly distributed subsets of $[0,1]^{d_1}$ and $[0,1]^{d_2}$ respectively. \end{lem} It is easy to see that Lemma \ref{GvN11}, combined with Proposition \ref{Propn00}, gives the following \begin{cor}\label{CorGvN11} Let $0<\alpha,\beta_1,\beta_2\leq 1$ and \[0<\lambda\leq\varepsilon\ll_{k_1,k_2}(c_{\Delta_{k_1}}c_{\Delta_{k_2}})^{2}(\beta_1^{k_1+1}\beta_2^{k_2+1}\alpha^{(k_1+1)(k_2+1)})^{16}.\] If $A\subseteq B_1\times B_2\subseteq[0,1]^{d_1}\times[0,1]^{d_2}$ with $|A|=\alpha\beta_1\beta_2$ and $ \|f_A\nu\|_{\Box(\varepsilon^4\lambda)}\ll\alpha^{(k_1+1)(k_2+1)}, $ then \begin{equation*} T_{\Delta_{k_1},\Delta_{k_2}}(1_A\widetilde{\nu},\dots,1_A\widetilde{\nu})(\lambda)\geq\frac{1}{2}\alpha^{(k_1+1)(k_2+1)} \end{equation*} provided $B_1$ and $B_2$ are $(\varepsilon,\varepsilon^4\lambda)$-uniformly distributed subsets of $[0,1]^{d_1}$ and $[0,1]^{d_2}$ respectively. \end{cor} \begin{proof}[Proof of Corollary \ref{CorGvN11}] It follows immediately from Lemma \ref{GvN11} that \begin{align*} |T_{\Delta_{k_1},\Delta_{k_2}}(1_A\widetilde{\nu},\dots,1_A\widetilde{\nu})(\lambda)-&T_{\Delta_{k_1},\Delta_{k_2}}(\alpha\widetilde{\nu},\dots,\alpha\widetilde{\nu})(\lambda)|\\ &\leq (2^{(k_1+1)(k_2+1)}-1)\|f_A\nu\|_{\Box(\varepsilon^4\lambda)}+ O_{k_1,k_2}(\beta_1^{-k_1-1}\beta_2^{-k_2-1}c_{\Delta_{k_1}}^{-1/8}c_{\Delta_{k_2}}^{-1/8}\varepsilon^{1/16}) \end{align*} for any $0<\varepsilon,\lambda\ll\min\{c_{\Delta_{k_1}},c_{\Delta_{k_2}}\}$, where $ f_A=1_A-\A1_{B_1\times B_2} $ while, as noted in (\ref{observation}), Proposition \ref{Propn00} implies that \[ T_{\Delta_{k_1},\Delta_{k_2}}(\alpha\widetilde{\nu},\dots,\alpha\widetilde{\nu})(\lambda)= \alpha^{(k_1+1)(k_2+1)}(1+O_{k_1,k_2}(\beta_1^{-k_1-1}\beta_2^{-k_2-1}c_{\Delta_{k_1}}^{-1/6}c_{\Delta_{k_2}}^{-1/6}\varepsilon^{2/3})) \] for any $0<\lambda\leq\varepsilon\ll1$, as required. \end{proof} \subsection{A Relative Version of Lemma \ref{GvN00}}\label{6.2} Key to the proof of Lemma \ref{GvN11} is the following \begin{lem}[Lemma \ref{GvN00} relative to uniformly distributed sets]\label{relative} Let $\Delta_k=\{0,v_1,v_2,\dots,v_{k}\}$ be any non-degenerate $k$-dimensional simplex with \[c_{\Delta_k}=\min_{1\leq j\leq k}\text{\emph{dist}}(v_j,\text{\emph{span}}\left\{\{v_1,\dots,v_{k}\}\setminus v_j\right\})\leq 1\] and $B\subseteq[0,1]^{d}$ with $d\geq k+3$ be an arbitrary set with $\beta=|B|>0$. 
If we set $\nu=\beta^{-1}1_B$, then for any $0<\lambda\leq\varepsilon\ll c_{\Delta_k}$ and functions $f_0,f_1,\dots,f_{k}:[0,1]^d\to[-1,1]$ we have \begin{equation}\label{pp} \left|T_{\Delta_k}(f_0\nu,\dots,f_{k}\nu)(\lambda)\right|^2\leq\iint f_j\nu(x)f_j\nu (x-x_1)\psi_{\varepsilon^4\lambda}(x_1)\,dx\,dx_1+O_{k}(\beta^{-3k-3}c_{\Delta_k}^{-1/2}\varepsilon^{1/4}) \end{equation} for any $0\leq j\leq k$, provided $B$ is a $(\varepsilon,\varepsilon^4\lambda)$-uniformly distributed subset of $[0,1]^{d}$. \end{lem} \begin{proof} As in the proof of Lemma \ref{GvN00} it suffices, by symmetry, to establish (\ref{pp}) for $j=k$. Note also, as in (\ref{observation}) above, that Proposition \ref{Propn00} implies \begin{equation}\label{observation2} T_{\Delta_k}(\nu,\dots,\nu)(\lambda)=1+O_k(\beta^{-k-1}c_{\Delta_k}^{-1/6}\varepsilon^{2/3}), \end{equation} provided $0<\lambda\leq\varepsilon\ll1$ and $B$ is an $(\varepsilon,\varepsilon^4\lambda)$-uniformly distributed subset of $[0,1]^{d}$ with $d\geq k+1$. It is equally easy to see, using Lemma \ref{GvN00}, that if $1\leq j\leq k$ and any $j$ of the weights $\nu$ are replaced with $1_{[0,1]^d}$ then this modified count will still be asymptotically equal to $1$ and will in fact equal $1+O_k(\beta^{-k-1+j}c_{\Delta_k}^{-1/6}\varepsilon^{2/3})$. Since \begin{align*} \left|T_{\Delta_k}(f_0\nu,\dots,f_{k}\nu)(\lambda)\right|\leq\iint\cdots\int \nu(x)\nu(x-\lambda x_{1})\cdots \nu(x-\lambda x_{k-1})\Bigl| \int f_k\nu & (x-\lambda x_k) \,d\sigma^{(d-k)}_{x_1,\dots,x_{k-1}}(x_k) \Bigr|\\ &\,d\sigma^{(d-k+1)}_{x_1,\dots,x_{k-2}}(x_{k-1})\cdots d\sigma(x_{1})\,dx \end{align*} it follows from an application of Cauchy-Schwarz, facilitated by (\ref{observation2}) for the simplex $\Delta_{k-1}$, that \begin{equation*} |T_{\Delta_k}(f_0\nu,\dots,f_{k}\nu)(\lambda)|^2\leq\bigl(1+O_{k}(\beta^{-k}c_{\Delta_k}^{-1/6}\varepsilon^{2/3})\bigr)^2 \bigl(\,M(\lambda)+E(\lambda)\,\bigr) \end{equation*} where \begin{equation*} M(\lambda)=\iint\cdots\int \Bigl| \int f_k \nu (x-\lambda x_k)\,d\sigma^{(d-k)}_{x_1,\dots,x_{k-1}}(x_k) \Bigr|^2\,d\sigma^{(d-k+1)}_{x_1,\dots,x_{k-2}}(x_{k-1})\cdots d\sigma(x_{1})\,dx \end{equation*} and \begin{align*} E(\lambda)&=\iint\cdots\int \bigl[\nu(x)\nu(x-\lambda x_{1})\cdots \nu(x-\lambda x_{k-1})-1(x)\bigr] \Bigl| \int f_k \nu (x-\lambda x_k)\,d\sigma^{(d-k)}_{x_1,\dots,x_{k-1}}(x_k) \Bigr|^2\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\,d\sigma^{(d-k+1)}_{x_1,\dots,x_{k-2}}(x_{k-1})\cdots d\sigma(x_{1})\,dx \end{align*} where $1=1_{[0,1]^d}$. It follows from the proof of Lemma \ref{GvN00}, specifically the argument from (\ref{CS}) to (\ref{72})) that \begin{equation*} M(\lambda)\leq \iint f_k\nu(x_1)f_k\nu(x_2)\psi_{\varepsilon^4\lambda}(x_2-x_1)\,dx_1\,dx_2. \end{equation*} We now complete the proof by establishing that $E(\lambda)=O_k(\beta^{-k-3}c_{\Delta_k}^{-1/6}\varepsilon^{1/4})$. Our strategy will be to expand the square in the error term $E(\lambda)$ which will add a new vertex $x_{k+1}$ to the simplex. ``Fixing" the distance $|x_{k+1}-x_k|$ leads to an expression which may be viewed as the difference between a weighted and an unweighted average over all isometric copies of a fixed $(k+1)$-dimensional simplex. The reason that this difference is small is that the measure $\nu$ behaves suitably random with respect to averages of this type, expressed in (\ref{observation2}). 
To remove the uncontrolled terms $f_k$ one needs another application of Cauchy-Schwarz which leads to simplices of dimension $k+2$ and the requirement $d\geq k+3$ for the underlying dimension of the space. Writing \begin{align*} \nu(x)\nu(x-\lambda x_{1})\cdots \nu(x-\lambda x_{k-1})-1(x)&=\sum_{j=0}^{k-1}\bigl[\nu(x-\lambda x_{j})-1(x)\bigr]\nu(x-\lambda x_{j+1})\cdots \nu(x-\lambda x_{k-1}) \end{align*} with the understanding that $x_0=0$, it follows that \begin{equation*} E(\lambda)=\sum_{j=0}^{k-1}E_j(\lambda) \end{equation*} with \begin{align*} E_j(\lambda)&=\iint\cdots\int \bigl[\nu(x-\lambda x_{j})-1(x)\bigr]\nu(x-\lambda x_{j+1})\cdots \nu(x-\lambda x_{k-1}) \Bigl| \int f_k \nu (x-\lambda x_k)\,d\sigma^{(d-k)}_{x_1,\dots,x_{k-1}}(x_k) \Bigr|^2\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\,d\sigma^{(d-k+1)}_{x_1,\dots,x_{k-2}}(x_{k-1})\cdots d\sigma(x_{1})\,dx. \end{align*} Squaring out we see that \begin{align*} E_j(\lambda)=&\iint\cdots\iiint \bigl[\nu(x-\lambda x_{j})-1(x)\bigr]\nu(x-\lambda x_{j+1})\cdots \nu(x-\lambda x_{k-1}) f_k \nu (x-\lambda x_k)f_k \nu (x-\lambda x_{k+1})\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \,d\sigma^{(d-k)}_{x_1,\dots,x_{k-1}}(x_{k+1})\,d\sigma^{(d-k)}_{x_1,\dots,x_{k-1}}(x_{k}) \,d\sigma^{(d-k+1)}_{x_1,\dots,x_{k-2}}(x_{k-1})\cdots d\sigma(x_{1})\,dx. \end{align*} Since $d\geq k+3$ we can follow the argument in Section \ref{direct} and write \begin{equation*} \sigma^{(d-k)}_{x_1,\dots,x_{k-1}}(x_{k+1})=\int_0^{\pi} (\sin\theta_1)^{d-k-1}\,d\sigma^{(d-k-1)}_{x_1,\dots,x_{k-1},x_k,\theta_1}(x_{k+1})\,d\theta_1 \end{equation*} where $\sigma^{(d-k-1)}_{x_1,\dots,x_{k-1},x_k,\theta_1}(x_{k+1})$ denotes the normalized measure on the sphere \begin{equation*} S^{d-k-1}_{x_1,\dots,x_{k-1},x_k,\theta_1}=S^{d-1}(0,|v_{k+1}|)\cap S^{d-1}(x_1,|v_{k+1}-v_1|)\cap\cdots\cap S^{d-1}(x_{k},|v_{k+1}-v_{k}|)\end{equation*} with $v_{k+1}=v_{k+1}(\theta)$ satisfying $ |v_{k+1}|=|v_k|, $ $ |v_{k+1}-v_j|=|v_k-v_j| $ for all $1\leq j\leq k-1$ and $\theta_1$ determining the angle between $v_{k+1}$ and $v_k$ so that $ |v_{k+1}-v_k|=2|v_k|\sin(\theta_1/2).$ If we now let $\Delta_{k+1}(\theta_1)=\{0,v_1,\dots,v_k,v_{k+1}\}$, then it follows (again using (\ref{observation2})) that \begin{equation*} E_j(\lambda)=\int_0^\pi (\sin\theta_1)^{d-k-1}T_{\Delta_{k+1}(\theta)}(1,\dots,1,\nu-1,\nu,\dots,\nu,f_{k}\nu,f_{k}\nu)(\lambda)\,d\theta_1+O_k(\beta^{-k-1+j}c_{\Delta_k}^{-1/6}\varepsilon^{2/3}) \end{equation*} where $T_{\Delta_{k+1}(\theta)}(1,\dots,1,\nu-1,\nu,\dots,\nu,f_{k}\nu,f_{k}\nu)(\lambda)$ equals \begin{align*} &\iint\cdots\iiint \bigl[\nu(x-\lambda x_{j})-1\bigr]\nu(x-\lambda x_{j+1})\cdots \nu(x-\lambda x_{k-1}) f_k \nu (x-\lambda x_k)f_k \nu (x-\lambda x_{k+1})\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \,d\sigma^{(d-k-1)}_{x_1,\dots,x_{k-1},x_k,\theta_1}(x_{k+1})\,d\sigma^{(d-k)}_{x_1,\dots,x_{k-1}}(x_{k}) \,d\sigma^{(d-k+1)}_{x_1,\dots,x_{k-2}}(x_{k-1})\cdots d\sigma(x_{1})\,dx. 
\end{align*} In light of (\ref{i}) and (\ref{ii}) it suffices to now show that \begin{equation*} E_j'(\lambda):= \int_0^\pi (\sin\theta_1)^{d-k-1}T_{\Delta'_{k+1-j}(\theta_1)}(f_{k}\nu,f_{k}\nu,\nu,\dots\nu, \nu-1)(\lambda)\,d\theta_1=O(\beta^{-k-1+j}\varepsilon^{1/4})\end{equation*} where \begin{equation*} \Delta'_{k+1-j}(\theta_1)=\{0,v_1',\dots,v'_{k+1-j}\} \end{equation*} with $v'_i=v_{k+1-i}-v_{k+1}$ for $0\leq i\leq k$ and $v'_{k+1}=-v_{k+1}$. Since $|T_{\Delta'_{k+1-j}(\theta_1)}(f_{k}\nu,f_{k}\nu,\nu,\dots\nu, \nu-1)(\lambda)|$ is dominated by \begin{align*} \iint\cdots\int \nu(x)\nu(x-\lambda x_{1})\cdots \nu(x-\lambda x_{k-j})\Bigl| \int (\nu-1)(x-\lambda & x_{k+1-j}) \,d\sigma'^{(d-k-1+j)}_{x_1,\dots,x_{k-j}}(x_{k+1-j}) \Bigr|\\ &\,d\sigma'^{(d-k+j)}_{x_1,\dots,x_{k-j-1}}(x_{k-j})\cdots d\sigma'(x_{1})\,dx \end{align*} it follows from an application of Cauchy-Schwarz, facilitated by (\ref{observation2}) for the simplex $\Delta'_{k-j}(\theta_1)$, that \begin{equation*} |T_{\Delta'_{k+1-j}(\theta_1)}(f_{k}\nu,f_{k}\nu,\nu,\dots,\nu, \nu-1)(\lambda)|^2\leq 2\, I_{\Delta'_{k+1-j}(\theta_1)}(\lambda) \end{equation*} where \begin{align*} I_{\Delta'_{k+1-j}(\theta_1)}(\lambda)=\iint\cdots\int \nu(x)\nu(x-\lambda x_{1})\cdots \nu(x-\lambda x_{k-j})\Bigl| \int (\nu-1)(x-\lambda & x_{k+1-j}) \,d\sigma'^{(d-k-1+j)}_{x_1,\dots,x_{k-j}}(x_{k+1-j}) \Bigr|^2\\ &\,d\sigma'^{(d-k+j)}_{x_1,\dots,x_{k-j-1}}(x_{k-j})\cdots d\sigma'(x_{1})\,dx. \end{align*} Squaring out we see that $I_{\Delta'_{k+1-j}(\theta_1)}(\lambda)$ equals \begin{align*} \iint\cdots&\iiint \nu(x)\nu(x-\lambda x_{1})\cdots \nu(x-\lambda x_{k-j})(\nu-1)(x-\lambda x_{k+1-j})\,(\nu-1)(x-\lambda x_{k+2-j})\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \,d\sigma'^{(d-k-1+j)}_{x_1,\dots,x_{k-j}}(x_{k+2-j}) \,d\sigma'^{(d-k-1+j)}_{x_1,\dots,x_{k-j}}(x_{k+1-j}) \,d\sigma'^{(d-k+j)}_{x_1,\dots,x_{k-j-1}}(x_{k-j})\cdots d\sigma'(x_{1})\,dx. \end{align*} Since $d\geq k+3$ we can again argue as above to obtain \begin{align*} I_{\Delta'_{k+1-j}(\theta_1)}(\lambda)&= \int_0^\pi (\sin\theta_2)^{d-k-2+j}\,\bigl[T_1(\lambda)-T_2(\lambda)-T_3(\lambda)+T_4(\lambda)\bigr]\,d\theta_2 \end{align*} where \[T_1(\lambda)=T_{\Delta'_{k+2-j}(\theta_1,\theta_2)}(\nu,\dots,\nu)(\lambda)\] \[T_2(\lambda)=T_{\Delta'_{k+2-j}(\theta_1,\theta_2)}(\nu,\dots,\nu, 1)(\lambda)\] \[T_3(\lambda)=T_{\Delta'_{k+2-j}(\theta_1,\theta_2)}(\nu,\dots,1,\nu)(\lambda)\] \[T_4(\lambda)=T_{\Delta'_{k+2-j}(\theta_1,\theta_2)}(\nu,\dots,\nu, 1,1)(\lambda)\] with \begin{equation*} \Delta'_{k+2-j}(\theta_1,\theta_2)=\Delta'_{k+1-j}(\theta_1)\cup\{v'_{k+2-j}\} \end{equation*} with $v'_{k+2-j}=v'_{k+2-j}(\theta_2)$ satisfying $ |v'_{k+2-j}|=|v'_{k+1-j}|, $ $ |v'_{k+2-j}-v_i|=|v'_{k+1-j}-v_i| $ for all $1\leq i\leq k-j$ and $\theta_2$ determining the angle between $v'_{k+2-j}$ and $v'_{k+1-j}$ so that $ |v'_{k+2-j}-v'_{k+1-j}|=2|v_{j}|\sin(\theta_2/2).$ We have therefore ultimately established \begin{equation*} |E_j'(\lambda)|^2\leq C \int_0^\pi\int_0^\pi \bigl|T_1(\lambda)-T_2(\lambda)-T_3(\lambda)+T_4(\lambda)\bigr|\,d\theta_2\,d\theta_1 \end{equation*} for each $0\leq j\leq k$. In light of (\ref{observation2}) we know that \begin{equation*} T_i(\lambda)=1+O_k(\beta^{-k-3+j}c_{\Delta'_{k+2-j}(\theta_1,\theta_2)}^{-1/6}\varepsilon^{2/3}) \end{equation*} for $i=1,\dots,4$, and hence \begin{equation*} E_j'(\lambda)=O_k(\beta^{-k-3+j}\varepsilon^{1/4}) \end{equation*} provided \begin{equation*} c_{\Delta'_{k+2-j}(\theta_1,\theta_2)}\geq\varepsilon. 
\end{equation*} The result now follows since the fact that \begin{equation*} c_{\Delta'_{k+2-j}(\theta_1,\theta_2)}= \min\{c_{\Delta_{k}}, 2|v_k|\sin(\theta_1/2), 2|v_j|\sin(\theta_2/2)\} \end{equation*} and $\varepsilon\ll c_{\Delta_{k}}$ ensures that \begin{equation*} |\{(\theta_1,\theta_2)\in[0,\pi]\times[0,\pi]\,:\, c_{\Delta'_{k+2-j}(\theta_1,\theta_2)}\leq\varepsilon\}|=O(\varepsilon).\qedhere \end{equation*} \end{proof} \subsection{Proof of Lemma \ref{GvN11}} The proof of Lemma \ref{GvN11} will follow from two applications of Cauchy-Schwarz combined with Proposition \ref{Propn00} and Lemma \ref{relative}. We first observe that if \[ T_{\Delta_{k_1},\Delta_{k_2}}(\lambda):=T_{\Delta_{k_1},\Delta_{k_2}}(f_{00}\widetilde{\nu},\dots,f_{k_1k_2}\widetilde{\nu})(\lambda) \] then \[ T_{\Delta_{k_1},\Delta_{k_2}}(\lambda)=\iint \nu_1(x-\lambda U_1(v^1_0))\cdots\nu_1(x-\lambda U_1(v^1_{k_1})) \,T_{\Delta_{k_2}}(g_0^{x,U_1}\nu_2,g_1^{x,U_1}\nu_2,\dots,g_{k_2}^{x,U_1}\nu_2)(\lambda)\,d\mu_1(U_1)\,dx \] where \begin{equation*} g_j^{x,U_1}(y)=f_{0j}(x-\lambda\cdot U_1(v^1_0),y)\cdots f_{k_1j}(x-\lambda\cdot U_1(v^1_{k_1}),y) \end{equation*} for each $j=0,1,\dots,k_2$ and that Lemma \ref{relative} implies \[ \left|T_{\Delta_{k_2}}(g_0^{x,U_1}\nu_2,\dots,g_{k_2}^{x,U_1}\nu_2)(\lambda)\right|^2 \leq\iint g_{j}^{x,U_1}\nu_2(y)g_{j}^{x,U_1}\nu_2(y-y_1)\psi_{2,\varepsilon^4\lambda}(y_1)\,dy\,dy_1+O_{k_2}(\beta_2^{-3k_2-3}c_{\Delta_{k_2}}^{-1/2}\varepsilon^{1/4}) \] for any $0\leq j\leq k_2$. Hence by Cauchy-Schwarz, using (\ref{observation2}) for $T_{\Delta_{k_1}}(\nu_1,\dots,\nu_1)(\lambda)$, and switching the order of integration we obtain that $|T_{\Delta_{k_1},\Delta_{k_2}}(\lambda)|^2$ is majorized by \[ \iint T_{\Delta_{k_1}}(h_{0j}^{y,y_1}\nu_1,\dots,h_{k_1j}^{y,y_1}\nu_1)(\lambda)\,\nu_2(y)\nu_2(y-y_1)\psi_{2,\varepsilon^4\lambda}(y_1)\,dy\,dy_1+O_{k_1,k_2}(\beta_1^{-k_1-1}\beta_2^{-3k_2-3}c_{\Delta_{k_1}}^{-1/6}c_{\Delta_{k_2}}^{-1/2}\varepsilon^{1/4}) \] for any $0\leq j\leq k_2$ where \begin{equation*} h_{ij}^{y,y_1}(x)=f_{ij}(x,y)f_{ij}(x,y-y_1) \end{equation*} for $i=0,1,\dots,k_1$. A further application of Cauchy-Schwarz (using the fact that $\psi_{2,\varepsilon^4\lambda}$ is $L^1$-normalized) and an appeal to Lemma \ref{relative} reveals that $|T_{\Delta_{k_1},\Delta_{k_2}}(\lambda)|^4$ is majorized by \begin{align*} \iiiint h_{ij}^{y,y_1}\nu_1(x)h_{ij}^{y,y_1}\nu_1(x-x_1) \,\nu_2(y)\nu_2(y-y_1)\psi_{1,\varepsilon^4\lambda}(x_1)\,&\psi_{2,\varepsilon^4\lambda}(y_1)\,dx\,dx_1\,dy\,dy_1\\ &+O_{k_1,k_2}(\beta_1^{-4k_1-4}\beta_2^{-3k_2-4}c_{\Delta_{k_1}}^{-1/2}c_{\Delta_{k_2}}^{-1/2}\varepsilon^{1/4}) \end{align*} for any $0\leq i\leq k_1$ and $0\leq j\leq k_2$. Since \[ h_{ij}^{y,y_1}\nu_1(x)h_{ij}^{y,y_1}\nu_1(x-x_1)\nu_2(y)\nu_2(y-y_1)=f_{ij}\nu(x,y)f_{ij}\nu(x-x_1,y)f_{ij}\nu(x,y-y_1)f_{ij}\nu(x-x_1,y-y_1), \] the result follows from (\ref{almost2}). \qed \subsection{Inverse Theorem Revisited} We complete this section by noting the following immediate consequence of Theorem \ref{InvThm} which together with Corollary \ref{CorGvN11} implies Proposition \ref{part11}. \begin{cor}\label{InvCor11} Let $0<\alpha,\beta_1,\beta_2\leq 1$ and $B_1$ and $B_2$ be $(\varepsilon,\varepsilon^4\lambda)$-uniformly distributed subsets of $[0,1]^{d_1}$ and $[0,1]^{d_2}$ with $0<\lambda\leq\varepsilon\ll\beta_1^{k_1+1}\beta_2^{k_2+1}\alpha^{8(k_1+1)(k_2+1)}$. 
If $A\subseteq B_1\times B_2\subseteq[0,1]^{d_1}\times[0,1]^{d_2}$ with $|A|=\alpha\beta_1\beta_2$ and \[\|f_A\nu\|_{\Box(\varepsilon^4\lambda)}\gg\alpha^{(k_1+1)(k_2+1)}\] with $f_A=1_A-\A1_{B_1\times B_2}$, then there exist cubes $Q_i\subseteq[0,1]^{d_i}$ of side-length $\varepsilon^4\lambda$ and sets $B_i'$ in $Q_i$ for which \begin{equation*} \frac{|A\cap(B_1'\times B_2')|}{|B_1'\times B_2'|}\geq \alpha+c\,\alpha^{8(k_1+1)(k_2+1)}. \end{equation*} \end{cor} \section{Proof of Proposition \ref{Propn1}, Part II: Regularization}\label{SecPart2} To complete the proof of Proposition \ref{Propn1}, as was noted after the Proposition \ref{part1}, we need to now produce a pair of new sets $B_1''$ and $B_2''$ that are $(\eta,L')$-uniformly distributed for a sufficiently small $\eta$ and for $L'$ attached to some of the $\lambda_j$'s, but for which $A$ still has increased density on $B_1''\times B_2''$. Proposition \ref{part1} did produce a pair of sets $B_1'$ and $B_2'$ for which $A$ has increased density on $B_1'\times B_2'$, but these sets are not necessarily uniformly distributed. We will now obtain sets $B_1''$ and $B_2''$ with the desired properties from the sets $B_1$ and $B_2$ produced by Proposition \ref{part1} by appealing to a version of Szemer\'edi's Regularity Lemma \cite{Sz} adapted to a sequence of scales $\{L_j\}_{1\leq j\leq J}$. The precise result we need is stated below in Theorem \ref{reglemma}, but first we state a couple of definitions. \begin{defn}[A partition $\mathcal P$ being adapted to scale $L_j$] Let $1=L_0> L_1>L_2>\cdots >0$ be a sequence with the property that $L_{j+1}<\frac{1}{2} L_j$. We say that a partition $\mathcal P=\mathcal Q\cup\mathcal R$ of $[0,1]^{d_1}\times [0,1]^{d_2}$ into cubes $\mathcal Q$ and ``rectangles" $\mathcal R$ is \emph{adapted to the scale $L_j$} if each of the cubes in $\mathcal Q$ have sidelength $L_i$ for some $0\leq i\leq j$. \end{defn} \begin{defn}[$(\varepsilon,L)$-uniform distribution on $Q$] Let $Q$ be a cube of sidelength $L_0$ and $0<L/L_0\leq\varepsilon\ll1$. A set $B\subseteq Q$ is said to be $(\varepsilon,L)$-uniformly distributed on $Q$ if \begin{equation}\label{3.3.1} \frac{1}{|Q|}\int_{Q}\left|\frac{|B\cap(t+Q_{L})|}{|Q_{L}|}-\frac{|B|}{|Q|}\,\right|^2\,dt\leq\varepsilon^2. \end{equation} \end{defn} \begin{thm}[Regularity Lemma]\label{reglemma} Let $0<\beta_1,\beta_2,\eta\leq1$ and $B_i\subseteq[0,1]^{d_i}$ with $|B_i|=\beta_i$ for $i=1,2$. 
Given any sequence $1=L_0> L_1>\cdots >0$ with $L_{j+1}<\frac{1}{2} L_j$ there exists $0\leq j<j'\leq J(\beta_1,\beta_2,\eta)$ and a partition $\mathcal P=\mathcal Q\cup\mathcal R$ of $[0,1]^{d_1}\times [0,1]^{d_2}$ adapted to the scale $L_j$ with the following properties: \begin{itemize} \item[(i)] For every cube $Q=Q_1\times Q_2$ in $\mathcal Q$ of sidelength $L_i$ with $0\leq i\leq j-1$, the sets $B_1$ and $B_2$ are $(\eta,L_{j'})$-uniformly distributed on the cubes $Q_1$ and $Q_2$ respectively.\\ \item[(ii)] If $\mathcal N$ denotes the collection of cubes in $Q=Q_1\times Q_2$ in $\mathcal Q$ of sidelength $L_j$ for which at least one of the sets $B_1$ and $B_2$ is \emph{not} $(\eta,L_{j'})$-uniformly distributed on the cubes $Q_1$ and $Q_2$ respectively, then \[\sum_{Q\in \mathcal N} |Q|+ \sum_{R\in \mathcal R} |R|\leq\eta.\] \end{itemize} \end{thm} The proof of Theorem \ref{reglemma} follows by standard arguments, for completeness we include it in Section \ref{reglemproof}.\\ An almost immediate consequence of Theorem \ref{reglemma} is the following Corollary which, together with Proposition \ref{part1}, provides a complete proof of Proposition \ref{Propn1}, the easy verification of this we leave to the reader. \begin{cor}\label{cor3.3} Let $0<\alpha,\beta_1,\beta_2,\tau,\varepsilon\leq1$ and $A\subseteq B_1\times B_2\subseteq[0,1]^{d_1}\times [0,1]^{d_2}$ with $|A|\geq (\alpha+\tau)\beta_1\beta_2$ and $|B_i|=\beta_i$ for $i=1,2$. Given any sequence $1=L_0> L_1>\cdots >0$ with $L_{j+1}<\frac{1}{2} L_j$, there exist $0\leq j<j'\leq J(\alpha,\beta_1,\beta_2,\tau,\varepsilon)$ and squares $Q_1,\,Q_2$ of sidelength $L_j$ such that the sets \[B_i':=B_i\cap Q_i\] with $i=1,2$ have the following properties: \begin{itemize} \item[(i)] $|B'_i|\geq \dfrac{1}{3}\beta_i\tau|Q_i|$.\\ \item[(ii)] $B_i'$ is $(\varepsilon, L_{j'})$-uniformly distributed on $Q_i$\\ \item[(iii)] $\dfrac{|A\cap(B'_1\times B'_2)|}{|B'_1\times B'_2|}\geq \alpha+\dfrac{\tau}{3}$. \end{itemize} \end{cor} \begin{proof}[Proof that Theorem \ref{reglemma} implies Corollary \ref{cor3.3}] Let $\eta=\varepsilon\beta_1\beta_2\tau/3$ and $\mathcal P=\mathcal Q\cup\mathcal R$ be a partition of $[0,1]^{d_1}\times [0,1]^{d_2}$ adapted to the scale $L_j$ that satisfies the conclusions of Theorem \ref{reglemma} for some $0\leq j<j'\leq J(\beta_1,\beta_2,\eta)$. Let $B=B_1\times B_2$ and $\mathcal U$ denote the collection of all cubes in $Q=Q_1\times Q_2$ in $\mathcal Q$ of sidelength $L_i$ with $0\leq i\leq j$ for which $B_1$ and $B_2$ are $(\eta,L_{j'})$-uniformly distributed on $Q_1$ and $Q_2$ respectively. Note that property (ii) of Corollary \ref{cor3.3} holds by definition for all cubes $Q_1$ and $Q_2$ for which $Q_1\times Q_2\in\mathcal U$. If we let $\mathcal S$ denote the collection of all cubes $Q$ in $\mathcal U$ which are \emph{sparse} in the sense that $|B\cap Q|<\beta\tau|Q|/3$, then property (i) of Corollary \ref{cor3.3} will hold by definition for all cubes $Q_1$ and $Q_2$ with $Q_1\times Q_2\in\mathcal U\setminus\mathcal S$. Finally, it is straightforward to see, using property (ii) of our partition $\mathcal P$ (on the size of $\mathcal N$ and $\mathcal R$) and our assumption on the relative density of $A$ on $B$, that property (iii) of Corollary \ref{cor3.3} must hold for at least one cube $Q$ in $\mathcal U\setminus\mathcal S$. 
\end{proof} \subsection{Proof of Theorem \ref{reglemma}}\label{reglemproof} By passing to a subsequence we may assume $L_{j+1}\leq 2^{-(j+6)}\eta L_j$, and in this case we will show that the conclusions of the theorem hold with $j'=j+1$ for some $0\leq j\leq J(\beta_1,\beta_2,\eta)$. For $j=0,1,2,\dots$ we construct partitions $\mathcal P^{(j)}$ of $[0,1]^{d_1}\times [0,1]^{d_2}$ into cubes $\mathcal Q^{(j)}$ and rectangles $\mathcal R^{(j)}$ starting from the trivial partition $\mathcal P^{(0)}$ consisting of only one cube $Q=[0,1]^{d_1}\times [0,1]^{d_2}$. The partition $\mathcal P^{(j)}$ will consist of two collections of cubes $\mathcal U^{(j)},\mathcal N^{(j)}$ and a collection of rectangles $\mathcal R^{(j)}$, that is \[\mathcal P^{(j)}=\mathcal U^{(j)}\cup\mathcal N^{(j)}\cup\mathcal R^{(j)}.\] The collection $\mathcal R^{(j)}$ will consist of rectangles $R=R_1\times R_2$ whose total measure is small, specifically \begin{equation}\label{3.4.2} \sum_{R\in \mathcal R^{(j)}} |R|\leq \frac{\eta}{2}, \end{equation} while the collection $\mathcal U^{(j)}$ will consist of cubes $Q=Q_1\times Q_2$ of sidelength $L_i$ for some $1\leq i\leq j$ such that $B_1$ and $B_2$ are $(\eta,L_{i+1})$-uniformly distributed on $Q_1$ and $Q_2$ respectively. Note that the cubes in $\mathcal U^{(j)}$ may have different sizes. The remaining collection $\mathcal N^{(j)}$ will consist of those cubes $Q=Q_1\times Q_2$ of sidelength $L_j$ on which at least one of the sets $B_1$ and $B_2$ is not $(\eta,L_{j+1})$-uniformly distributed. We will stop the procedure when the total measure of the non-uniform cubes is small enough, specifically when \begin{equation}\label{3.4.3} \sum_{Q\in \mathcal N^{(j)}} |Q|\leq \frac{\eta}{2} \end{equation} and note that such a partition satisfies the conclusions of Theorem \ref{reglemma}. If $[0,1]^{d_1}\times [0,1]^{d_2}\in\mathcal U^{(0)}$, then the sets $B_1,\,B_2$ are both $(\eta, L_1)$-uniformly distributed and Theorem \ref{reglemma} holds. We thus assume that for some $j\geq 0$ we have a partition $\mathcal P^{(j)}$ for which \eqref{3.4.3} does not hold and let $Q=Q_1\times Q_2$ denote an arbitrary cube in $\mathcal N^{(j)}$. By our assumption both cubes have sidelength $L_j$ and $B_i$ is not $(\eta,L_{j+1})$-uniformly distributed on $Q_i$ for either $i=1$ or $i=2$. We assume, without loss of generality, that $i=1$. Averaging shows that for $Q_1=t_1+[0,L_j]^{d_1}$ and $L:=L_{j+1}$, we have \begin{equation}\label{3.4.5} |E_\eta|\geq \frac{\eta^2}{2} |Q_1| \end{equation} where \begin{equation} E_\eta :=\left\{t\in Q_1\,:\, \left|\frac{|B_1\cap(t+Q_L)|}{|Q_L|}-\frac{|B_1\cap Q_1|}{|Q_1|}\right| \geq \frac{\eta}{2}\right\}. \end{equation} Let $m=\lfloor L_j/L_{j+1}\rfloor$ and partition the cube $Q_1'=t_1+[0,(m+1)L]^{d_1}\supseteq Q_1$ into grids of the form $G(s_1)=s_1+\{0,L,\ldots ,mL\}^{d_1}$ with $s_1$ running through the cube $t_1+[0,L]^{d_1}$. Since $L<2^{-6}L_j$, by \eqref{3.4.5} there exists $s_1\in t_1+[0,L]^{d_1}$ such that \begin{equation}\label{3.4.51} \frac{|G(s_1)\cap E_\eta|}{m^{d_1}}\geq \frac{\eta^2}{4}. \end{equation} Fix such an $s_1$ and consider the partition of $Q_1$ into cubes of size $L=L_{j+1}$ and possibly rectangles, defined by the grid $G(s_1)$. Repeat the same partition of the cube $Q_2$ corresponding to a point $s_2$ which we can choose arbitrarily from a cube $Q'_2\subseteq Q_2$ of size $L$. Taking the direct product of these partitions gives a partition of the cube $Q=Q_1\times Q_2$ into cubes of size $L=L_{j+1}$ and possibly also into some $(d_1+d_2)$-dimensional rectangles. 
After performing this partition of all cubes in $\mathcal N^{(j)}$ we obtain the new partition $\mathcal P^{(j+1)}$ of $[0,1]^{d_1}\times [0,1]^{d_2}$. The new cubes obtained are then partitioned into classes $\mathcal U^{(j+1)}$ and $\mathcal N^{(j+1)}$ according to whether they are $(\eta,L_{j+2})$-uniform. Note that the cubes in $\mathcal U^{(j)}$ and rectangles in $\mathcal R^{(j)}$ remain cells of $\mathcal P^{(j+1)}$. Note that for each cube $Q\in\mathcal N^{(j)}$ the total measure of all the rectangles obtained is at most $16L_{j+1}L_j^{-1}|Q|$, hence summing over all cubes the total measure of the rectangles obtained this way is at most $16L_{j+1}L_j^{-1}$. We adjoin these rectangles to $\mathcal R^{(j)}$ to form $\mathcal R^{(j+1)}$. Note that this way the total measure of the rectangles is always bounded by \[\sum_{j=0}^\infty \frac{16L_{j+1}}{L_j} \leq \sum_{j=0}^\infty 2^{-(j+2)} \eta \leq \frac{\eta}{2},\] hence \eqref{3.4.2} holds. A key notion in regularization arguments is that of the \emph{index} or \emph{energy} of a set with respect to a partition. In our context we define it as follows. Let $\{C^k\}_{k=1}^K$ denote the collection of cells that constitute $\mathcal P^{(j)}$. For any given cell $C^k=Q^k_1\times Q^k_2$ in $\mathcal P^{(j)}$, where $Q^k_i$ could be either a cube or a rectangle, we let $\delta^k_i$ denote the relative density of $B_i$ in $Q^k_i$ for $i=1,2$, and define the \emph{energy} of $(B_1,B_2)$ with respect to $\mathcal P^{(j)}$ by \begin{equation}\label{3.4.6} \mathcal E(B_1,B_2;\mathcal P^{(j)}):= \frac{1}{2}\,\sum_{C^k\in \mathcal P^{(j)}} \bigl((\delta^k_1)^2 + (\delta^k_2)^2\bigr)\,|C^k|. \end{equation} It is not hard to see that the energy is always at most 1 and is increasing when the partition is refined. To be more precise, we say a partition $\mathcal P'$ is a refinement of $\mathcal P$ if every cell $C=Q_1\times Q_2$ of $\mathcal P$ is decomposed into cells $C^{\ell,{\ell'}}=Q^\ell_1\times Q^{\ell'}_2$ of $\mathcal P'$ so that cubes (or rectangles) $Q^\ell_1$ and $Q^{\ell'}_2$ form a partition of $Q_1$ and $Q_2$ respectively. Then $|Q_1|=\sum_\ell |Q^\ell_1|$ and $|B_1\cap Q_1|=\sum_\ell |B_1\cap Q^\ell_1|$, hence writing $\delta_1$ for the relative density of $B_1$ on $Q_1$ and $\delta^\ell_1$ for the relative density of $B_1$ on $Q^\ell_1$, and expanding the square using the fact that $\sum_\ell \delta^\ell_1\,|Q^\ell_1|=\delta_1\,|Q_1|$, one has \begin{equation}\label{3.4.7} \sum_{\ell} (\delta^\ell_1)^2\,|Q^\ell_1|=(\delta_1)^2\,|Q_1|+\sum_\ell (\delta^\ell_1-\delta_1)^2\,|Q^\ell_1|. \end{equation} Similarly \begin{equation}\label{3.4.8} \sum_{\ell'} (\delta^{\ell'}_2)^2\,|Q^{\ell'}_2|=(\delta_2)^2\,|Q_2|+\sum_{\ell'} (\delta^{\ell'}_2-\delta_2)^2\,|Q^{\ell'}_2|. \end{equation} Multiplying equations \eqref{3.4.7} by $|Q_2|$, \eqref{3.4.8} by $|Q_1|$, and adding, we get \begin{equation}\label{3.4.9} \sum_{\ell,{\ell'}} \bigl((\delta^\ell_1)^2+(\delta^{\ell'}_2)^2\bigr)\,|C^{\ell,{\ell'}}| = \bigl((\delta_1)^2+(\delta_2)^2\bigr)\,|C|+\sum_{\ell,{\ell'}} \bigl((\delta^\ell_1-\delta_1)^2+(\delta^{\ell'}_2-\delta_2)^2\bigr)\,|C^{\ell,{\ell'}}| . \end{equation} Going back to our construction we have decomposed each cell $C^k=Q_1\times Q_2\in\mathcal N^{(j)}$ into cubes of the form $C^{\ell,{\ell'}}=Q^\ell_1\times Q^{\ell'}_2$ where $Q^\ell_1=s_1+\ell L+Q_L$, $Q^{\ell'}_2=s_2+{\ell'}L+Q_L$ for some ${\ell}\in \{1,\ldots,m\}^{d_1}$ and ${\ell'}\in \{1,\ldots,m\}^{d_2}$, and into a collection of $(d_1+d_2)$-dimensional rectangles of small total measure. 
By \eqref{3.4.51} there are at least $\eta^2m^{d_1}/4$ values of $\ell$ for which $|\delta^\ell_1-\delta_1|^2\geq \eta^2/4$. Thus, as $|Q_1|=L_j^{d_1}$, $|Q^\ell_1|=L^{d_1}_{j+1}$ for all $\ell$, and $m=\lfloor L_j/L_{j+1}\rfloor\geq \frac{1}{2}L_j/L_{j+1}$, we have that \begin{equation}\label{3.4.10} \sum_{\ell\in \{1,\ldots,m\}^{d_1}} (\delta^\ell_1-\delta_1)^2\,|Q^\ell_1| \geq \frac{\eta^4}{64}\,|Q_1|. \end{equation} By \eqref{3.4.9} this implies that the energy of $(B_1,B_2)$ with respect to the collection of cells of $\mathcal P^{(j+1)}$ contained in $C^k=Q_1\times Q_2$ given by the left side of \eqref{3.4.9} is at least \begin{equation}\label{3.4.11} \mathcal E(B_1,B_2;\mathcal P^{(j+1)}|_{C^k}) \,\geq\, \frac{1}{2}\,(\delta_1^2+\delta_2^2)\,|C^k|+\frac{\eta^4}{128}\,|C^k|. \end{equation} This holds for all non-uniform cells $C^k\in\mathcal N^{(j)}$ and, by our assumption that the total measure of the cubes in $\mathcal N^{(j)}$ is at least $\eta/2$, it follows that \begin{equation}\label{3.4.12} \mathcal E(B_1,B_2;\mathcal P^{(j+1)}) \,\geq\, \mathcal E(B_1,B_2;\mathcal P^{(j)}) \,+\,\frac{\eta^5}{256}. \end{equation} Thus the procedure must stop in $j\leq 256\,\eta^{-5}$ steps and, as explained above, the resulting partition satisfies the conclusions of Theorem \ref{reglemma}.\qed \bigskip \bigskip \bigskip \noindent \emph{Acknowledgements.} We would like to thank the anonymous referee for useful comments and suggestions which have greatly improved the exposition of this article. \bigskip \bigskip
{ "timestamp": "2017-01-24T02:06:45", "yymm": "1605", "arxiv_id": "1605.04890", "language": "en", "url": "https://arxiv.org/abs/1605.04890", "abstract": "We establish that any subset of $\\mathbb{R}^d$ of positive upper Banach density necessarily contains an isometric copy of all sufficiently large dilates of any fixed two-dimensional rectangle provided $d\\geq4$.We further present an extension of this result to configurations that are the product of two non-degenerate simplices; specifically we show that if $\\Delta_{k_1}$ and $\\Delta_{k_2}$ are two fixed non-degenerate simplices of $k_1+1$ and $k_2+1$ points respectively, then any subset of $\\mathbb{R}^d$ of positive upper Banach density with $d\\geq k_1+k_2+6$ will necessarily contain an isometric copy of all sufficiently large dilates of $\\Delta_{k_1}\\times\\Delta_{k_2}$.A new direct proof of the fact that any subset of $\\mathbb{R}^d$ of positive upper Banach density necessarily contains an isometric copy of all sufficiently large dilates of any fixed non-degenerate simplex of $k+1$ points provided $d\\geq k+1$, a result originally due to Bourgain, is also presented.", "subjects": "Classical Analysis and ODEs (math.CA); Combinatorics (math.CO)", "title": "Product of simplices and sets of positive upper density in $\\mathbb{R}^d$", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9911526454397475, "lm_q2_score": 0.7154239897159439, "lm_q1q2_score": 0.7090943800180165 }
https://arxiv.org/abs/1302.3192
Some properties of finite rings
A well-known theorem of Wedderburn asserts that a finite division ring is commutative. In a division ring the group of invertible elements is as large as possible. Here we will be particularly interested in the case where this group is as small as possible, namely reduced to 1. We will show that, if this is the case, then the ring is boolean. Thus, here too, the ring is commutative.
\section{Boolean rings} A ring $R$ is boolean if all its elements are idempotent, i.e., $x^2=x$ for all $x\in R$. A simple example of a boolean ring is ${\mathbb Z} _2$. Products of boolean rings are also boolean, so we may construct a large class of such rings. \begin{proposition} If $R$ is a boolean ring, then $char (R)=2$, $R$ is commutative and $R^{\times}=\{1\}$. \end{proposition} \noindent \textsc{proof} We have $$ x+y = (x+y)^2 = x^2+xy+yx +y^2 = x+xy+yx +y, $$ which implies that $xy+yx=0$. If we set $x=y=1$, then we obtain $char (R)=2$. Now $char (R)=2$ implies that $xy+xy=0$. This, with the fact that $xy+yx=0$, implies that $xy=yx$, i.e., $R$ is commutative. Suppose now that $x\in R^{\times}$. Then, multiplying the expression $x^2=x$ by $x^{-1}$, we obtain $x=1$. Thus $R^{\times}$ consists of the single element 1. This finishes the proof.\hfill{$\Box $ }\\ We should point out here that not all rings of characteristic 2 are boolean. For example, the ring ${\cal M}_2({\mathbb Z} _2)$ of square $2\times 2$ matrices, with coefficients in ${\mathbb Z}_2$, is not boolean. The polynomial ring ${\mathbb Z}_2[X]$ is another example. \section{The group of invertible elements} The invertible elements of a ring form a group with the multiplication of the ring. In this section we will consider some elementary properties of this group. \begin{proposition}\label{prop.inv1} Let $R$ be a ring whose characteristic is not 2. If $x$ is invertible, then $-x\neq x$. It follows that, if $R$ is finite, then the sum of the elements of $R^{\times}$ is 0. \end{proposition} \noindent \textsc{proof} Let $x\in R^{\times}$. Then $$ x=-x \Longrightarrow x+x=0 \Longrightarrow 1+1=0, $$ a contradiction, because $char (R)\neq 2$. It follows that $x\neq -x$. Suppose now that $R$ is finite. If $x$ is invertible, then so is $-x$. Therefore $R^{\times}$ is composed of pairs $\{x,-x\}$ whose sum is 0. Thus the sum of the elements of $R^{\times}$ is 0.\hfill{$\Box $ } \begin{corollary} If $char(R)\neq 2$ and $R$ is finite, then $|R^{\times}|$ is an even number. \end{corollary} \noindent {\bf Remark.} If $x\in R$ is not invertible, then we may have $x=-x$, even if the characteristic of the ring is not 2. For example, in ${\mathbb Z} _4$, which is of characteristic 4, $2=-2$. In fact, more generally, in ${\mathbb Z} _{2n}$ we have $n=-n$.\\ We may extend Proposition \ref{prop.inv1} to finite fields of more than two elements, even if the characteristic is 2. \begin{proposition} Let $R$ be a finite field. If $|R|>2$, then the sum of the elements of $R^{\times}$ is 0. If $|R|=2$, then the sum is 1. \end{proposition} \noindent \textsc{proof} If $|R|=2$, then $R^{\times}$ consists of the single element 1, hence the result. Suppose now that $|R|>2$ and that $\alpha $ is a generator of the group $R^{\times}$, which has order $n:=|R|-1>1$. Then $$ 0 = 1-\alpha ^n = (1-\alpha )(1 + \alpha + \cdots + \alpha ^{n-1}). $$ As $\alpha \neq 1$, we have $$ 1 + \alpha + \cdots + \alpha ^{n-1} =0. $$ However, this is the sum of the elements of $R^{\times}$. Hence the result. \hfill{$\Box $ }\\ \noindent {\bf Remark.} If a finite field $R$ is of characteristic $2$, then $R$ has $2^s$ elements for some $s\in {\mathbb N}^*$. Hence $|R^{\times}|$ is an odd number. This is not the case if the characteristic is an odd prime number. \section{Matrix rings over finite fields} In this section we consider the particular case of matrix rings over finite fields of characteristic 2. We will write ${\cal M}_n(R)$ for the set of $n\times n$ matrices with coordinates in $R$.
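As a quick numerical illustration of the unit-sum propositions above (and as a warm-up for the matrix computations that follow), the following small Python sketch checks the prime fields ${\mathbb Z}_p$; it is not part of the paper's argument and the function name is ours.
\begin{verbatim}
# For the prime field Z_p the units are 1, ..., p-1.  The propositions above
# say that their sum is 0 in Z_p when p > 2, and equals 1 when p = 2.
def unit_sum_mod_p(p):
    return sum(range(1, p)) % p

for p in [2, 3, 5, 7, 11, 13]:
    print(p, unit_sum_mod_p(p))   # prints 1 for p = 2 and 0 for the odd primes
\end{verbatim}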
With the usual operations of addition and multiplication of matrices, ${\cal M}_n(R)$ is a ring. If $n=1$, then ${\cal M}_n(R)$ is isomorphic to $R$. If $|R|=2$, then $R^{\times}$ consists of the single element 1; otherwise the sum of its elements is 0. Now let us consider the case where $n>1$. In this case ${\cal M}_n(R)$ is noncommutative. \begin{proposition} If $R$ is a finite field of characteristic 2 and $n\geq 2$, then ${\cal M}_n(R)^{\times}$ has an even number of elements, whose sum is 0. \end{proposition} \noindent \textsc{proof} If $R$ is a finite field with $q$ elements, then $$ |{\cal M}_n(R)^{\times}|= (q^n-q^{n-1})(q^n-q^{n-2})\cdots (q^n-1). $$ A proof may be found in \cite{rotman}. It follows that in the case where $q$ is even, i.e., $char(R)=2$, and $n>1$, $|{\cal M}_n(R)^{\times}|$ is an even number. Let $c$ be a nonzero vector in $R^2$. As $|R|=2^s$ for a certain $s\in {\mathbb N}^*$, there are $2^s$ multiples of $c$. Therefore there are $2^{2s}-2^s$ vectors which are not multiples of $c$. Thus there is an even number of matrices in ${\cal M}_2(R)^{\times}$ having the first column $c^T$. A similar argument to the one we used for the first column shows that there is an even number of matrices in ${\cal M}_2(R)^{\times}$ having the same second column. It follows that the sum of the elements in ${\cal M}_2(R)^{\times}$ is 0. Now let us consider the case $n>2$. Let $c_1$ be a nonzero vector in $R^n$ and let $c_2,\ldots ,c_n\in R^n$ be such that $c_1,\ldots ,c_n$ form an independent set. If we fix $c_1$ and permute the other elements, then we obtain another distinct ordered set. There are $(n-1)!$ such permutations. Thus there are $(n-1)!$ matrices in ${\cal M}_n(R)^{\times}$ having the same columns with the first column fixed. As 2 divides $(n-1)!$, there is an even number of matrices with the same first column. The preceding argument applies to any column and so the sum of the matrices in ${\cal M}_n(R)^{\times}$ is 0.\hfill{$\Box $ } \section{The main theorem} Our aim in this section is to show that a finite ring $R$ in which the multiplicative group is as small as possible, i.e., $R^{\times}=\{1\}$, is a boolean ring.\\ We will first give a brief review of Artin-Wedderburn theory. The subject is well-handled in various places. A good reference is \cite{ash}.\\ A division ring is a ring $R$ such that $R^{\times}=R^*$. A theorem of Wedderburn states that, if such a ring is finite, then it is commutative, i.e., a field. Proofs of this result may be found, for example, in \cite{herstein} or \cite{lam}. It is natural to consider another 'extreme' case, namely where $R^{\times}=\{1\}$. Our aim in this section is to show that in this case $R$ is a boolean ring.\\ We will first recall some definitions and results from elementary ring theory. In a ring $R$ the intersection of the maximal left ideals is called the Jacobson radical of $R$ and is usually denoted $J(R)$. It turns out that $J(R)$ is also the intersection of all the maximal right ideals and so is an ideal. We may characterize the elements of $J(R)$ in the following way: $a\in J(R)$ if and only if $1-xay\in R^{\times}$ for all $x,y\in R$. By setting $x=y=1$, we see that if $R^{\times}=\{1\}$, then $J(R)=\{0\}$.\\ We now recall that a ring is artinian if any descending chain of ideals becomes stationary after finitely many steps. If we replace ideals by left (resp. right) ideals in the definition, then we obtain the definition of a left (resp. right) artinian ring.
Clearly a finite ring is artinian, as well as being left and right artinian.\\ We now come to the notion of semi-simplicity and to Wedderburn's structure theorem. In the definitions we will use left ideals; however, we could replace these by right, or two-sided, ideals. To be brief, we will use the term ideal for left ideal. We say that an ideal $I$ is simple if $I\neq \{0\}$ and the only sub-ideals included in $I$ are $I$ itself and $\{0\}$. A ring is semi-simple if it is a product of simple ideals. It should be noticed that a ring is semi-simple if and only if it is a direct sum of simple ideals. A fundamental result in the theory of semi-simple rings is the Wedderburn structure theorem, namely \begin{theorem} If $R$ is a semi-simple ring, then there are division rings $D_1,\ldots ,D_t$ and positive integers $n_1, \ldots ,n_t$ such that $$ R\simeq {\cal M}_{n_1}(D_1)\oplus \cdots \oplus {\cal M}_{n_t}(D_t). $$ \end{theorem} If $R$ is an artinian ring and $I$ an ideal in $R$, then the quotient ring $R/I$ is also artinian. In the case where $I=J(R)$ the ring $R/I$ is semi-simple. Of course, if $J(R)=\{0\}$, then $R$ is semi-simple. We thus have the following result: \begin{theorem} If $R$ is a finite ring and $R^{\times}=\{1\}$, then $R$ is semi-simple. \end{theorem} As a corollary we have the principal result of this paper, namely \begin{theorem} If $R$ is a finite ring and $R^{\times}=\{1\}$, then $R$ is a boolean ring. \end{theorem} \noindent \textsc{proof} As $R$ is semi-simple, we know that there are division rings $D_1,\ldots ,D_t$ and positive integers $n_1,\ldots ,n_t$ such that $$ R\simeq {\cal M}_{n_1}(D_1)\oplus \cdots \oplus {\cal M}_{n_t} (D_t). $$ As $R$ is finite, so are the division rings. By Wedderburn's little theorem these rings are fields. Given that $|R^{\times}|=1$, it must be the case that $D_i={\mathbb Z}_2$ and $n_i=1$ for all $i$. However, ${\cal M}_1({\mathbb Z}_2)\simeq {\mathbb Z}_2$ and so $R$ is isomorphic to a direct sum of boolean rings and hence is a boolean ring.\hfill{$\Box $ }\\ \noindent {\bf Remark.} As a boolean ring is commutative, if a finite ring $R$ is such that $R^{\times}=\{1\}$, then $R$ is commutative. Thus, when $R^{\times}$ is as small as possible or as large as possible, $R$ is commutative. \section{A final comment} From what we have seen above we might be tempted to think that the sum of the invertible elements in a finite ring is always either 0 or 1. However, this is not the case. We only need to consider the subring $A$ of upper triangular matrices in ${\cal M}_2({\mathbb Z} _2)$. Here $A^{\times}$ is composed of the identity matrix $I_2$ and the matrix $M=(m_{ij})$, with $m_{21}=0$ and $m_{ij}=1$ otherwise. The sum of these matrices is the matrix $N=(n_{ij})$, with $n_{12}=1$ and $n_{ij}=0$ otherwise.\\ It should be noted that the sum of the invertible upper triangular matrices in ${\cal M}_n({\mathbb Z} _2)$, with $n\geq 3$, is 0. There are $2^{\frac{(n-1)n}{2}}$ such matrices and, for $i<j$, in the position $ij$ half of them have a 0 and half of them a 1. As the number $2^{\frac{(n-1)n}{2}}$ is divisible by 4, there is an even number of matrices with 1 in any position $ij$, with $i<j$, and it follows that the sum of the matrices which interest us is $0$.
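The claims of this final comment are easy to verify by brute force. The following Python sketch (ours, not part of the paper) enumerates the invertible upper triangular matrices over ${\mathbb Z}_2$ and, for comparison, all of ${\cal M}_2({\mathbb Z}_2)^{\times}$, and sums them modulo 2.
\begin{verbatim}
from itertools import product

def upper_triangular_units(n):
    """All invertible upper triangular n x n matrices over Z_2:
    diagonal entries are 1, entries above the diagonal are free."""
    positions = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for bits in product([0, 1], repeat=len(positions)):
        M = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
        for (i, j), b in zip(positions, bits):
            M[i][j] = b
        yield M

def matrix_sum_mod2(mats):
    mats = list(mats)
    n = len(mats[0])
    return [[sum(M[i][j] for M in mats) % 2 for j in range(n)] for i in range(n)]

print(matrix_sum_mod2(upper_triangular_units(2)))  # [[0, 1], [0, 0]], the matrix N
print(matrix_sum_mod2(upper_triangular_units(3)))  # the zero matrix

# Sum of all of M_2(Z_2)^x (all 2x2 matrices with determinant 1 mod 2):
gl2 = [[[a, b], [c, d]] for a, b, c, d in product([0, 1], repeat=4)
       if (a * d - b * c) % 2 == 1]
print(matrix_sum_mod2(gl2))                        # the zero matrix
\end{verbatim}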
{ "timestamp": "2013-02-14T02:03:29", "yymm": "1302", "arxiv_id": "1302.3192", "language": "en", "url": "https://arxiv.org/abs/1302.3192", "abstract": "A well-known theorem of Wedderburn asserts that a finite division ring is commutative. In a division ring the group of invertible elements is as large as possible. Here we will be particularly interested in the case where this group is as small as possible, namely reduced to 1. We will show that, if this is the case, then the ring is boolean. Thus, here too, the ring is commutative.", "subjects": "Rings and Algebras (math.RA)", "title": "Some properties of finite rings", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9911526433490359, "lm_q2_score": 0.7154239897159438, "lm_q1q2_score": 0.7090943785222711 }
https://arxiv.org/abs/1108.3737
Representing integers as linear combinations of powers
At a conference in Debrecen in October 2010 Nathanson announced some results concerning the arithmetic diameters of certain sets. He proposed some related results on the representation of integers by sums or differences of powers of 2 and 3. In this note we prove some results on this problem and the more general problem about the representation by linear combinations of powers of some fixed integers.
\section{Introduction} Let $P$ be a nonempty finite set of prime numbers, and let $T$ be the set of positive integers that are products of powers of primes in $P$. Put $T_P=T\cup (-T)$. Then there does not exist an integer $k$ such that every positive integer can be represented as a sum of at most $k$ elements of $T_P$. This follows e.g. from Theorem 1 of Jarden and Narkiewicz \cite{jn}, cf. \cite{h, ahl}. At a conference in Debrecen in October 2010 Nathanson announced the following stronger result (see also \cite{n}): \\ \\ {\it For every positive integer $k$ there exist infinitely many integers $n$ such that $k$ is the smallest value of $l$ for which $n$ can be written as $$ n=a_1+a_2+\cdots+a_l~~(a_1,a_2,\dots,a_l\in T_P). $$ } \\ Let $f(k)$ be the smallest positive integer which cannot be represented as a sum of fewer than $k$ terms from $T_P$. In Problem 2 of \cite{n} Nathanson asked for estimates of $f(k)$. (The notation in \cite{n} is somewhat different from ours.) Problem 1 asks the same question in case $T$ consists of the pure powers of $2$ and of $3$. Observe that in both cases $f(k)$ can be represented as a sum of $k$ terms from $T_P$, since fewer than $k$ terms suffice for $f(k)-1$ and $1\in T_P$. In this note we consider Problem 1. More generally, let $B=\{b_1,\dots,b_t\}$ be any finite set of positive integers. Put $A = \{ b_i^j,-b_i^j~ :~ i=1,\dots,t;j=0,1,2,\dots\}$. Note that on writing $P=\{p\ \text{prime}\ :\ p\mid b_1\cdots b_t\}$ we have $A\subseteq T_P$. So there is no $k$ for which every positive integer can be represented as a sum of at most $k$ elements of $A$. Let $f(k)$ be the smallest positive integer which cannot be represented as a sum of fewer than $k$ terms of $A$. Similarly as above, we get that $f(k)$ can be represented as a sum of $k$ terms of $A$. In this paper we show that there exists a number $c$ depending only on $B$ and an absolute constant $C$ such that $\exp(ck)<f(k)<\exp((k \log (2t))^C)$. Moreover, we show that there are infinitely many $k$'s for which $f(k)<\exp(c^*k\log (2kt)\log\log k)$ where $c^*$ is some constant. For the upper bound we apply a method of \'Ad\'am, Hajdu and Luca \cite{ahl} in which a result of Erd\H{o}s, Pomerance and Schmutz \cite{eps} plays an important part. We refine the result of Erd\H{o}s, Pomerance and Schmutz in Section \ref{sec2} and that of \'Ad\'am, Hajdu and Luca in Section \ref{sec3}. In Section \ref{sec4} we derive lower and upper bounds for $f(k)$ in a somewhat more general setting. We conclude with some remarks in Section \ref{sec5}. \section{An extension of a theorem of Erd\H{o}s, Pomerance and Schmutz}\label{sec2} Let $\lambda(m)$ be the Carmichael function of the positive integer $m$, that is, the least positive integer for which $$ b^{\lambda(m)} \equiv 1 ~~({\rm mod}~m) $$ for all $b \in \mathbb{Z}$ with gcd$(b,m)=1$. Theorem 1 of \cite{eps} gives the following information on small values of the Carmichael function. \vskip.1cm \noindent {\it For any increasing sequence $(n_i)_{i=1}^{\infty}$ of positive integers, and any positive constant $c_0 < 1/ \log 2$, one has $$ \lambda (n_i) > ( \log n_i)^{c_0 \log \log \log n_i} $$ for $i$ sufficiently large. On the other hand, there exist a strictly increasing sequence $(n_i)_{i=1}^{\infty}$ of positive integers and a positive constant $c_1$, such that, for every $i$, $$ \lambda (n_i)<(\log n_i)^{c_1\log\log\log n_i}. $$} This nice theorem does not give any information on the size of $n_i$.
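For small moduli the Carmichael function is easy to compute directly from the factorization of $m$. The following sketch (Python; purely illustrative, with helper names that are ours) implements the standard formula $\lambda(m)=\mathrm{lcm}\{\lambda(p^a): p^a\,\|\, m\}$ and checks the defining congruence.
\begin{verbatim}
from math import gcd

def factorize(m):
    """Trial-division factorization of m, returned as a dict {p: a}."""
    factors, d = {}, 2
    while d * d <= m:
        while m % d == 0:
            factors[d] = factors.get(d, 0) + 1
            m //= d
        d += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1
    return factors

def lcm(a, b):
    return a * b // gcd(a, b)

def carmichael(m):
    """lambda(m) = lcm of lambda(p^a) over prime powers p^a dividing m, with
    lambda(p^a) = (p-1)p^(a-1) for odd p, and lambda(2) = 1, lambda(4) = 2,
    lambda(2^a) = 2^(a-2) for a >= 3."""
    result = 1
    for p, a in factorize(m).items():
        if p == 2:
            lam = 1 if a == 1 else (2 if a == 2 else 2 ** (a - 2))
        else:
            lam = (p - 1) * p ** (a - 1)
        result = lcm(result, lam)
    return result

# sanity check: b^lambda(m) = 1 (mod m) for all b coprime to m
for m in range(2, 200):
    lam = carmichael(m)
    assert all(pow(b, lam, m) == 1 for b in range(1, m) if gcd(b, m) == 1)
\end{verbatim}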
Since we need such information in this paper, we prove the following refinement of the second part. The proof is an extension of the proof in \cite{eps}. \begin{thm} \label{thm1} There exist positive constants $c_2, c_3$ such that for every large integer $i$ there is an integer $m$ with $\log m\in [\log i,(\log i)^{c_2}]$ and $$ \lambda(m)<(\log m)^{c_3\log\log\log m}. $$ \end{thm} \begin{proof} In \cite{apr} it is shown that there is a computable constant $c_4>0$ with the property that, for any $x>10$, there is a squarefree number $h_x < x^2$ for which $$ \sum_{p-1 | h_x} ~ 1>e^{c_4\log x/\log\log x}. $$ Put $x =(\log i)^{(2/c_4)\log\log\log i},y=h_{x}$, and $m=\prod_{p-1 | y} ~ p$. Note that, for $i$ sufficiently large, we have $$ m\geq\prod_{p-1| y}2 ~>~ \exp\left((\log~2)\exp\left(\frac{c_4 \log x}{\log\log x}\right)\right)>i. $$ But then, for $i$ sufficiently large and $c_3=4/c_4$, $$ \lambda(m)\leq y<x^2=(\log i)^{(4/c_4)\log\log\log i}< (\log m)^{c_3\log\log\log m}. $$ It remains to estimate $m$ from above. Let $s$ be the number of prime factors of the squarefree number $y$ and let $0<\varepsilon<0.1$. Then $y$ is at least $s^{(1-\varepsilon)s}$ if $i$ is sufficiently large. Hence $s<(1+2\varepsilon)\log y/\log\log y$. It follows that $$ \sum_{p-1 | y} ~1\leq 2^s<\frac{y^{1/\log\log y}}{\log(y+1)} $$ when $i$ is large. Thus $$ \log m=\log\left(\prod_{p-1 | y} ~ p \right)<\sum_{p-1 | y} ~ \log (y+1)<y^{1/\log\log y}< x^{2/\log\log x}<(\log i)^{c_2} $$ for some constant $c_2$. \end{proof} \section{An extension of a theorem of \'Ad\'am, Hajdu and Luca} \label{sec3} Let $B=\{b_1,\dots,b_t\}$ be any finite set of positive integers. Let $A=\{b_i^j ~:~ i=1,\dots,t;j=0,1,2,\dots\}$. Let $k$ be a positive integer and $R$ a finite set of integers of cardinality $\rho$. Put $$ H_{B,R,k}=\{n\in\mathbb{Z}:n=\sum_{i=1}^k r_ia_i\} $$ where $r_i\in R,a_i\in A ~~ (i=1,2,\dots,k)$. For $H\subseteq\mathbb{Z}$ and $m\in\mathbb{Z},m\geq 2$, we write $\sharp H$ for the cardinality of the set $H$ and $$ H({\rm mod}~m)=\{i:0\leq i<m,h\equiv i ~( {\rm mod}~ m)~ {\rm for ~ some~}h\in H\}. $$ Observe that the definition of $A$ differs from that in the introduction and that we get the situation described there by choosing $R=\{-1,1\}$. \begin{thm} \label{thm2} Let $B,R$ and $k$ be given as above. For every sufficiently large integer $i$ there exists a number $m$ with $\log m\in [\log i,(\log i)^{c_2}]$ such that $$ \sharp H_{B, R, k} ~({\rm mod}~m)<(\rho t)^k(\log m)^{c_5k\log\log\log m} $$ where $c_5$ is a constant. \end{thm} In the proof of Theorem \ref{thm2} the following lemma is used. \begin{lem} \label{lem1} {\rm (\cite{ahl}, Lemma 1)}.\\ Let $m=q_1^{\alpha_1}\cdots q_z^{\alpha_z}$ where $q_1,\dots,q_z$ are distinct primes and $\alpha_1,\dots,\alpha_z$ are positive integers, and let $b\in\mathbb{Z}$. Then $$ \sharp\{b^u~({\rm mod}~m):u\geq 0\}\leq\lambda(m)+ \max_{1\leq j\leq z}\alpha_j. $$ \end{lem} The proof of Theorem \ref{thm2} is similar to that of Theorem 3 of \cite{ahl}. In that paper there is the restriction that of each element of $B$ only one power occurs in $H_{B,R,k}$, hence $k=t$. \begin{proof}[Proof of Theorem \ref{thm2}] Let $i$ be an integer so large that Theorem \ref{thm1} applies. Choose $m$ as in Theorem \ref{thm1}. Write $m$ as in Lemma \ref{lem1} as a product of powers of distinct primes. Lemma \ref{lem1} implies that $$ \sharp\{r\cdot b^u~({\rm mod}~m):b\in B,r\in R,u\geq 0 \} \leq\rho t\left(\lambda(m)+ \max_{1\leq j\leq z}\alpha_j\right).
$$ On the other hand, with the constant $c_3$ from Theorem \ref{thm1}, $$ \lambda(m)+\max_{1\leq j\leq z}\alpha_j\leq(\log m)^{c_3 \log\log\log m}+\frac{\log m}{\log 2}. $$ The combination of both inequalities yields the theorem. \end{proof} \section{Representing integers as linear combinations of powers} \label{sec4} We use the notation of Section \ref{sec3}. Suppose we want to express the positive integer $n$ as a finite sum of powers of $b_1$. For this we apply the greedy algorithm. If we subtract the largest power of $b_1$ not exceeding $n$ from $n$, we obtain a number which is less than $n(1-1/b_1)$. We can iterate subtracting the highest power of $b_1$ not exceeding the rest from the rest and so reduce the rest each time by a factor at most $1-1/b_1$. Hence we can represent $n$ as the sum of at most $\log n/\log(1/(1-1/b_1))$ powers of $b_1$. Thus we find that the sum of $k\leq c_6\log n$ powers of $b_1$ suffices to represent $n$, where $c_6$ depends only on $b_1$. This implies the lower bound $\exp(ck)$ for $f(k)$ claimed in the introduction. More generally, let $f_R(k)$ be the smallest positive integer $n$ which cannot be represented as a sum $\sum_{j=1}^lr_ja_j$ with $l<k,r_j\in R,a_j\in A$. Then the above argument shows that $1\in R$ implies $f_R(k)>e^{k/c_6}$. For an upper bound for $f_R(k)$ suppose first that all the elements of $R$ are positive. We study the representation of positive integers up to $n$ as $\sum_{j=1}^{k-1}r_jb_j^{k_j}$ with $r_j\in R,b_j\in B,k_j\in\mathbb{Z},k_j\geq 0$. Then $k_j\leq\log n/\log b_j\leq\log n/\log 2$. Hence the number of represented integers is at most $\left(\rho t\log n/\log 2\right)^{k-1}$. If this number is less than $n$, then we are sure that some positive integer $\leq n$ is not represented. This is the case if $$ k-1<\frac{\log n}{\log(\rho t)+\log\log n-\log\log 2}. $$ Hence it suffices that $n\geq(1.5\rho kt\log(\rho kt))^{k-1}$ and for this special case we find that $$ f_R(k)\leq(1.5\rho kt\log (\rho kt))^{k-1}. $$ We now turn to the general case. Choose the smallest positive integer $i>10$ such that $i>(\rho t)^k(\log i)^{c_5k\log\log\log i}$. Then $i<2(\rho t)^k(\log i)^{c_5k\log\log\log i}$. It follows that $$ \log i<k(\log\rho t)+c_7k(\log\log i)(\log\log\log i) $$ for some constant $c_7$, thus $\log i<2k\log(\rho t)$ or $\log i<2c_7k(\log\log i)(\log\log\log i)$. In the latter case $\log i<c_{8}k(\log k)(\log\log k)$ for some suitable constant $c_{8}$. According to Theorem \ref{thm2} there exists an $m$ with $\log i\leq\log m\leq(\log i )^{c_2}$ such that all representations are covered by at most $(\rho t)^k (\log m)^{c_5k\log\log\log m}$ residue classes modulo $m$. By the definition of $i$ and the inequality $i\leq m$, we see that this number of residue classes is less than $m$, therefore at least one positive integer $n\leq m$ has no representation of the form $\sum_{j=1}^{k}r_ja_j$ with $r_j\in R,a_j\in A$ for all $j$. Since $\log m\leq(\log i)^{c_2}$, we obtain $$ \log n\leq\log m \leq(\log i)^{c_2}<\left(\max(2k\log \rho t,c_{8}k(\log k)(\log\log k))\right)^{c_2}<(k\log\rho t)^{c_{9}} $$ for some constant $c_{9}$. There are infinitely many $k$'s for which a considerably better bound for $f_R(k)$ can be derived by a variant of the above argument. According to Theorem \ref{thm1} there are infinitely many integers $m$ for which \begin{equation} \label{eps} \lambda(m)<(\log m)^{c_3\log\log\log m}. \end{equation} Let $B$, hence $A,\rho$ and $t$ be given. 
Choose $k$ as the largest integer such that $$ (\rho t)^k(\log m)^{c_5k\log\log\log m}<m $$ for an $m$ satisfying (\ref{eps}). It follows from Theorem \ref{thm1} that there are infinitely many such $k$'s. Theorem \ref{thm2} and its proof imply that there is a positive integer $n\leq m$ which is not representable as a linear combination of $k$ elements of $A$ with coefficients from $R$. Moreover, $$ \log m\leq(k+1)(\log(\rho t)+c_5\log\log m\log\log\log m). $$ Hence $\log m\leq2(k+1)\log(\rho t)$ or $\log m\leq 2c_5(k+1)\log\log m\log\log\log m$. In the latter case $\log m\leq c_{10}k\log k\log\log k$ where $c_{10}$ is some constant. Combining both inequalities we obtain, for some constant $c_{11}$, $$ \log n\leq\log m\leq c_{11}k\log(\rho kt)\log\log k. $$ So we have proved the following result. \begin{thm} \label{thm3} Let $B=\{b_1,\dots,b_t\}$ be any finite set of positive integers. Put $A=\{b_i^j ~:~ i=1,\dots,t;j=0,1,2,\dots\}$. Let $R$ be a finite set of integers of cardinality $\rho$ and $k$ a positive integer. Denote by $f_R(k)$ the smallest positive integer which cannot be represented in the form $ \sum_{i=1}^{k-1}r_ia_i$ with $r_i\in R,a_i\in A$ for all $i$. Then\\ (i) if $1\in R$, then $\log f_R(k)>k/c_6$ for some number $c_6>0$ depending only on $b_1$,\\ (ii) if all elements of $R$ are positive, then $f_R(k)\leq(1.5\rho kt\log(\rho kt))^{k-1}$,\\ (iii) there exists a constant $c_9$ such that $\log f_R(k)<(k\log(\rho t))^{c_9}$,\\ (iv) there exist a constant $c_{11}$ and infinitely many positive integers $k$ such that $\log f_R(k)\leq c_{11}k\log(\rho kt)\log\log k$. \end{thm} In Nathanson's Problem 1 mentioned in the introduction we have $R=\{-1,1\}$, and hence $\rho=2$. Thus we have the following consequences for the function $f$. \begin{cor} There is a positive number $c$ depending only on $b_1$ such that $\log f(k)>ck$.\\ On the other hand, $\log f(k)<(k\log (2t))^C$ where $C$ is a constant.\\ Moreover, $\log f(k)<c^*k \log (2kt)\log\log k$ for infinitely many integers $k$ where $c^*$ is a constant. \end{cor} \section{Some remarks} \label{sec5} \noindent{\bf Remark 1.} To prove that $f_R(k)>e^{ck}$ we assumed $1\in R$. Here we check what happens if this condition is not fulfilled. Obviously, we may assume that $0\notin R$ and that not all the elements of $R$ are negative. Further, if the elements of $R$ are not coprime, then there is a full residue class not represented as $\sum_{i=1}^k r_ia_i$. Therefore we may assume that the elements of $R$ are coprime. (In particular, since $R=\{1\}$ is now excluded, we have $\rho>1$.) Assume first that $R$ contains a negative element. There exist integers $d_j~ (j=1,\dots,\rho)$ such that $\sum_{j=1}^\rho d_jr_j=1$. Let $P=\prod_{j=1}^\rho |r_j|$. So $P$ is a multiple of $r_j$ for every $j$. Consider the sum $\sum_{j=1}^{\rho} \left(d_j+e_j\frac{P}{r_j}\right)r_j$ where the $e_j$'s are integers such that $d_j+e_j\frac{P}{r_j}\geq 0$ and $e_jr_j\geq 0$ for all $j$. Let $v=\sum_{j=1}^{\rho} e_j$. If $v>0$ then we replace $e_j$ by $e_j-v$ for some $j$ with $r_j<0$, and if $v<0$ then we do so for some $j$ with $r_j>0$. Afterwards $\sum_{j=1}^{\rho} e_j=0$, hence $\sum_{j=1}^{\rho} \left(d_j+e_j\frac{P}{r_j}\right)r_j=1$, and further $d_j+e_j\frac{P}{r_j} \geq 0$ for $j=1,\dots,\rho$. Thus $1$ admits a representation $\sum_{j=1}^k r_ja_j$ where $a_j=b_1^0=1$ for all $j$, and $k$ is bounded by $\sum_{j=1}^{\rho}\left(d_j+e_j\frac{P}{r_j}\right)$, that is by a constant $c_{12}$ which only depends on $R$.
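The greedy step used at the beginning of Section \ref{sec4}, and invoked once more in the coprime case below, is straightforward to implement. The following sketch (Python; illustrative only, the function name is ours) writes $n$ as a sum of powers of a single base $b$ and reports the number of terms, which grows like $\log n$ as claimed.
\begin{verbatim}
import math

def greedy_powers(n, b):
    """Greedily write n as a sum of powers of b (largest power first).
    Returns the list of powers used; its length is O(log n)."""
    assert n >= 1 and b >= 2
    terms = []
    while n > 0:
        p = b ** int(math.log(n, b))    # largest power of b not exceeding n
        while p > n:                    # guard against floating-point overshoot
            p //= b
        terms.append(p)
        n -= p
    return terms

for n in [10 ** 6, 10 ** 12]:
    t = greedy_powers(n, 2)
    print(len(t), sum(t) == n)          # number of terms, correctness check
\end{verbatim}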
If the elements of $R$ are coprime and all positive, then we are dealing with the so-called coin problem or Frobenius problem. Let $0<r_1\leq r_2\leq\dots\leq r_{\rho}$. Schur \cite{b} proved in 1935 that every number larger than $c_{13}:=r_1r_{\rho}+r_2+\dots+r_{\rho-1}$ can be represented as a linear combination of $r_1,r_2,\dots,r_{\rho}$ with nonnegative integer coefficients. Note that $c_{13}$ depends only on $R$. Therefore each of the integers in the interval $(c_{13},2c_{13}]$ can be represented as $\sum_{j=1}^{c_{14}} r_ja_j$ where the number $c_{14}$ depends only on $R$. We can now use the greedy algorithm as in the first block of Section \ref{sec4}, iterating until we reach this interval, to obtain a representation of $n>c_{13}$ with at most $(c_{6}+c_{14})\log n$ terms $r_ja_j$. We conclude that if the elements of $R$ are coprime, every positive integer $n>c_{13}$ can be represented as $\sum_{i=1}^{k-1} r_ia_i$ with $r_i\in R,a_i\in A$ for all $i$ and with $k<c_{15}\log n$, where $c_{15}$ is a number depending only on $R$ and $b_1$. Thus $f_R(k)>e^{ck}$ where $c$ is a number depending only on $R$ and $b_1$. \vskip.1truecm \noindent {\bf Remark 2.} The upper bound for $f(k)$ can possibly be improved by deriving a version of Theorem \ref{thm1} where the interval for $m$ is essentially smaller at the cost of a larger bound for $\lambda(m)$. We expect that the given upper bound for infinitely many values of $k$ may be close to an upper bound for all $k$. \section{Acknowledgements} The authors are grateful to the referees for their helpful remarks.
{ "timestamp": "2011-08-19T02:03:06", "yymm": "1108", "arxiv_id": "1108.3737", "language": "en", "url": "https://arxiv.org/abs/1108.3737", "abstract": "At a conference in Debrecen in October 2010 Nathanson announced some results concerning the arithmetic diameters of certain sets. He proposed some related results on the representation of integers by sums or differences of powers of 2 and 3. In this note we prove some results on this problem and the more general problem about the representation by linear combinations of powers of some fixed integers.", "subjects": "Number Theory (math.NT)", "title": "Representing integers as linear combinations of powers", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9867771786365549, "lm_q2_score": 0.7185944046238981, "lm_q1q2_score": 0.7090925591787851 }
https://arxiv.org/abs/2008.08905
Linear algebra and quantum algorithm
From a mathematical point of view, we introduce quantum algorithms and the mathematical structure of a quantum computer. A quantum algorithm is expressed by linear algebra on a finite dimensional complex inner product space. The mathematical formulation of quantum mechanics was established around 1930 by von Neumann; it uses functional analysis, linear algebra and probability theory. Knowledge of this mathematical formulation is all the quantum mechanics one needs in order to approach quantum algorithms, so starting from the mathematical formulation of QM may be an efficient route for mathematicians. We explain the mathematical formulation of quantum mechanics briefly, quantum bits, quantum gates, the quantum discrete Fourier transformation, Deutsch's algorithm and Shor's algorithm.
\section{Introduction} As quantum computer hardware, which once seemed a long way off, has made real progress recently, much attention is also being paid to the study of quantum algorithms. The class of decision problems solvable by a quantum computer in polynomial time is called BQP (bounded error quantum polynomial time). Although BQP is not yet completely understood, it has been proved that many important and hard decision problems belong to BQP. Cryptologists also regard quantum computing as a realizable threat. For example, Shor's algorithm can break ciphers that rely on the hardness of integer factorization or of the discrete logarithm, such as RSA or ECC (elliptic curve cryptography). Cryptologists are preparing for the quantum computing era; this research field is called post-quantum cryptography. A quantum algorithm is expressed by linear algebra on a finite dimensional complex inner product space. The only part of quantum mechanics one needs to know is its mathematical formulation, which is built from probability theory, linear algebra and functional analysis. So a quantum algorithm is just a mathematical object. In fact, many mathematicians, such as Peter Shor\footnote{His prime factorization quantum algorithm made a sensational impact and triggered much research on quantum computing, as well as financial investment, because it can break a strong cryptographic system.} and Michael Freedman\footnote{A Fields medalist. He works in Microsoft Quantum -- Santa Barbara.}, work on quantum algorithms. This paper is the lecture note that I wrote for the roughly six-hour lecture that I gave in the Quantum Algorithm Seminar during the 2019 spring semester. I am not familiar with physics\footnote{I am not particularly interested in the sciences, but because of Shor's algorithm I became interested in the mathematical formulations of quantum mechanics. Fortunately, }, but it did not take long to approach quantum algorithms. I expect that the readers will be able to understand it easily. \section{The mathematical formulations of quantum mechanics} Before quantum mechanics, one of the main purposes of physics was to find ``the trajectory of a particle'', $x:(a,b)\rightarrow \mathbb R^3 $, mathematically, from the initial location and momentum of the particle and from mechanical principles which are mathematically formulated mainly as a system of partial differential equations. This approach had been established since the 17th century, the birth of physics. It was believed that the initial locations and momenta of a physical system determine its future perfectly. There was also an extreme argument in this direction, known as ``Laplace's demon'', due to the French mathematician Pierre Simon Laplace. In this direction, Newtonian mechanics, Lagrangian mechanics, Hamiltonian mechanics\footnote{However, Lagrangian mechanics and Hamiltonian mechanics can be seen as preparing the way for quantum mechanics and quantum field theory.} and the theory of relativity were very successful in the description of the macroscopic physical world. However, at the atomic scale (about $10^{-9}$m), finding ``the trajectory of a particle'' is an unattainable goal according to quantum physics. In atomic scale physics, one cannot know which physical event will happen but can only speak of a probability distribution. Moreover, ``the trajectory of a particle'' does not even make sense at this scale.
Suppose that you want to know the momentum of a particle with mass $m$ in a specific potential environment\footnote{Let's consider the 1-dimensional case.} described by the real-valued function $V(x,t)$. Then you solve the Schr\"odinger equation \[i\hbar\frac{\partial}{\partial t}\psi(x,t) = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\psi(x,t) +V(x,t)\psi(x,t)\] \footnote{where $\hbar$ is the Dirac constant $1.054571817\times 10^{-34}$J$\cdot$s} to find ``the wave function of the particle'' $\psi(x,t)$, which carries all quantum mechanical information about the physical system. A wave function is a complex valued function and an element of a complex Hilbert space $\mathcal H$ (a complete inner product space over $\mathbb C$, usually an $L^2$ space). Since any nonzero constant multiple of a solution of this linear partial differential equation is also a solution\footnote{However, the zero function does not describe a physical system, so we consider only non-zero complex functions.}, we take a solution with unit norm. Now, apply ``the momentum operator'' \[\hat{p}:= \frac{\hbar }{i}\frac{ \partial}{\partial x},\] which is a Hermitian operator, to the wave function $\psi(x,t)$ and compute the inner product of $\psi(x,t)$ and $\hat{p}\psi(x,t)$:\[\int\overline{\psi(x,t)}\,(\hat{p}\psi(x,t))\,dx=\int\overline{\psi(x,t)}\,\frac{\hbar }{i}\frac{ \partial\psi(x,t)}{\partial x}\,dx.\] This value is the expectation value of the momentum of the particle. Because any constant multiple of a solution of the linear partial differential equation is also a solution, we introduce an equivalence relation on $\mathcal H\setminus\{0\}$:\[\text{For any \,}\phi , \psi\in \mathcal H\setminus\{0\},\;\;\phi \sim \psi \text{\;\;iff\;\;}\exists c\in \mathbb C \text{\;\;s.t.\;\;} \phi=c\psi.\] \\ Generally, quantum mechanics can be mathematically formulated as follows: \\ 1. A quantum mechanical system is associated with a separable\footnote{An inner product space is trivially a normed linear space (a Banach space if complete). If a Banach space is not separable, then there is no Schauder basis. If $S$ is a Schauder basis of a Banach space $\mathcal X$, then Span$\,S$ is a dense subset of $\mathcal X$.} complex Hilbert space $\mathcal H$. A quantum state is described by a 1-dimensional subspace of $\mathcal H$. In particular, the zero element of $\mathcal H$ does not describe a state, and a quantum state is exactly an element of the complex projective Hilbert space $(\mathcal H\setminus\{0\})/\sim$. Therefore, we can take an element with unit norm as a representative. 2. Let $\mathcal H_1, \mathcal H_2$ describe two quantum mechanical systems, respectively. Then, the Hilbert space which describes the composition of the two quantum mechanical systems is $\mathcal H_1\otimes \mathcal H_2$. 3. Physical observables\footnote{For example, position, momentum, energy, spin, etc. } are described by Hermitian operators on $\mathcal H$. 4. The expectation value of an observable $\hat A$ of a quantum mechanical system in the state represented by the unit element $\psi \in \mathcal H$ is the inner product of $\psi$ and $\hat A \psi$. 5. Physical symmetries in quantum mechanics are represented by unitary or anti-unitary operators.\footnote{Due to Wigner's theorem.} 6. Suppose an observable represented by $\hat A$ in a quantum mechanical system has a discrete spectrum $\{\lambda_i\;|\;i=1,2,\dots \}$.
Then, the result of the experimental measurement is one of the eigenvalues $\lambda_i$, and the probability that we get the result $\lambda_i$ is the inner product of $\psi$ and $\hat P_i\psi$, where $\hat P_i$ is the projection operator corresponding to $\lambda_i$. \section{Quantum bits} A quantum bit (qubit) is the unit of information in quantum computing; it is a unit element of a 2-dimensional complex Hilbert space $\mathcal H$ with inner product\[(\cdot,\cdot):\mathcal H\times\mathcal H\longrightarrow \mathbb C.\] Just as a bit can be physically implemented by two different voltages or a power on/off state, a qubit can be physically implemented by any two-state quantum mechanical system, such as two spin states of an electron or two polarization states of a photon. A 1-qubit state with respect to an orthonormal basis $\{b_0, b_1\}$ is represented by \[u=c_0b_0+c_1b_1 \in \mathcal H \;\;\;\text{where}\;\;(u,u)=c_0\bar c_0+c_1\bar c_1=|c_0|^2+|c_1|^2=1\]and \[u=c_0b_0+c_1b_1=c_0\begin{bmatrix}1 \\ 0 \end{bmatrix}+c_1\begin{bmatrix} 0 \\ 1 \end{bmatrix}=\begin{bmatrix} c_0 \\ c_1 \end{bmatrix}.\] Here $b_0,b_1$ represent the bits 0 and 1. The measurement of a qubit is probabilistic. The sample space of a 1-qubit measurement is $\{b_0, b_1\}$ and the probability of the event $b_i$ is $|c_i|^2$. An $n$-qubit system is associated with $\mathcal H^{\otimes n}$, and is therefore represented by a unit element of a $2^n$-dimensional complex Hilbert space\footnote{Trivially, any higher $m$-dimensional complex Hilbert space and $m^n$-dimensional complex Hilbert space are also possible. } with an orthonormal basis $\{b_i\;|\;i=0,1,\dots ,2^n-1\}$: \[v=\sum_{i=0}^{2^n-1}c_ib_i\in \mathcal H^{\otimes n}\;\;\;\text{where}\;\;\sum_{i=0}^{2^n-1}|c_i|^2=1.\] Here, the $2^n$-dimensional complex projective Hilbert space is the stage on which quantum algorithms are performed. It is easy to see that the correspondence $b_i\leftrightarrow i$ encodes all possible bit strings from $0$ to $2^n-1$, i.e. the basis $\{b_i\;|\;i=0,1,\dots ,2^n-1\}$ is the sample space of the $n$-qubit measurement and $P(b_i)=c_i\bar c_i=|c_i|^2$. Trivially, there is an element $u$ in $\mathcal H\otimes \mathcal H$ which is not a Kronecker product $a\otimes b$ with $a,b\in \mathcal H$. This is the mathematical formulation of quantum entanglement. For example, a 2-qubit system is in general \[c_0(b_0\otimes b_0)+c_1(b_0\otimes b_1)+c_2(b_1\otimes b_0)+c_3(b_1\otimes b_1)=\begin{bmatrix} c_0 \\ c_1\\ c_2 \\ c_3\end{bmatrix}\] and the Kronecker product of $u=u_0b_0+u_1b_1$ and $v=v_0b_0+v_1b_1$ is \[u\otimes v= u_0v_0(b_0\otimes b_0)+u_0v_1(b_0\otimes b_1)+u_1v_0(b_1\otimes b_0)+u_1v_1(b_1\otimes b_1)=\begin{bmatrix} u_0v_0 \\ u_0v_1\\ u_1v_0 \\ u_1v_1\end{bmatrix}.\] Therefore, there are many elements which are not Kronecker products of two elements of $\mathcal H$, such as \[w=\frac{1}{\sqrt 2}(b_1\otimes b_0)+\frac{1}{\sqrt 2}(b_0\otimes b_1)=\frac{1}{\sqrt 2}\begin{bmatrix} 0 \\ 1\\ 1 \\ 0\end{bmatrix}.\] Indeed, $w$ is not a Kronecker product of two elements of $\mathcal H$ since $\mathbb C$ has no zero divisors. In a quantum algorithm, two qubits can become entangled as a result of a quantum gate operation. \section{Quantum gates and what is a quantum algorithm} A quantum gate on $n$ qubits is a unitary linear map $U$ on the $2^n$-dimensional complex Hilbert space $ \mathcal H^{\otimes n}$, represented by a $2^n\times 2^n$ unitary matrix\footnote{Recall that a matrix $U$ is unitary if $UU^*=U^*U=I$.}. Since a unitary map preserves norms, the result $Uv$ is again a unit vector.
A quantum gate changes the probability distribution on the basis. Suppose that there is a problem and we have prepared enough qubits to express the answer of the problem. This means that the basis $B$ of $ \mathcal H^{\otimes n}$ contains an element encoding the answer. Now, \textit{a quantum algorithm to solve the problem is a sequence of quantum gates which makes the probability of the answer of the problem, encoded by a basis element $b^* \in B$, high enough that we can obtain the answer quickly by repeated runs of the quantum algorithm.} For example, suppose that a quantum algorithm produces the answer with probability $1/5$; then the probability that the results of 15 runs never meet the answer is $(4/5)^{15}\approx 0.035$. Of course, since the result of a quantum algorithm is probabilistic, we should verify whether the result really is the answer; we can use a classical computer to check it. Since quantum gates are unitary, they are invertible. Therefore, a quantum computation can be traced back from the result, and it preserves all information; this is one point where quantum computations differ from classical computations. The following matrices (quantum gates) act on a single qubit. The Hadamard matrix (gate) is \[H := \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}.\] Observe that \[Hu=\frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \end{bmatrix}=\frac{1}{\sqrt{2}}\begin{bmatrix} c_0+c_1 \\ c_0-c_1 \end{bmatrix}\]and it creates a superposition if $u$ is one of the basis bits $b_0$ or $b_1$. The $X$-gate is \[X := \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.\] It exchanges the coefficients of a qubit:\[Xu=\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \end{bmatrix}=\begin{bmatrix} c_1 \\ c_0 \end{bmatrix}.\] It is analogous to the classical NOT gate since it flips the bit when it acts on a basis bit. The twist gates \[T(\alpha) := \begin{bmatrix} 1 & 0 \\ 0 & e^{i\alpha} \end{bmatrix}\] do not change the probability distribution but change the argument of a coefficient: \[T(\alpha)u = \begin{bmatrix} 1 & 0 \\ 0 & e^{i\alpha} \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \end{bmatrix}=\begin{bmatrix} c_0 \\ e^{i\alpha}c_1 \end{bmatrix}.\] There are many matrices (quantum gates) acting on two qubits. Here we present a very important quantum gate which involves quantum entanglement, namely the CNOT (controlled-NOT) gate \[\wedge_1(X):=\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0\\ \end{bmatrix}.\] It acts as the identity on the first (control) qubit and applies the $X$-gate (the analogue of the classical NOT gate) to the second (target) qubit exactly when the control qubit is $b_1$. Observe that \[\wedge_1(X)v:=\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0\\ \end{bmatrix}\begin{bmatrix} v_0 \\ v_1 \\ v_2 \\ v_3 \end{bmatrix}=\begin{bmatrix} v_0 \\ v_1 \\ v_3 \\ v_2 \end{bmatrix}.\] In particular, \begin{align*} \wedge_1(X)(b_0\otimes b_0)&=b_0\otimes b_0,\\ \wedge_1(X)(b_0\otimes b_1)&=b_0\otimes b_1,\\ \wedge_1(X)(b_1\otimes b_0)&=b_1\otimes b_1,\\ \wedge_1(X)(b_1\otimes b_1)&=b_1\otimes b_0,\\ \end{align*} i.e. \[\wedge_1(X)(b_i\otimes b_j)=b_i\otimes b_{j\oplus i},\] where $\oplus$ denotes addition in $\mathbb Z_2$. It can be shown that the CNOT gate is the only entangling gate needed: together with single-qubit gates it suffices for any quantum circuit involving quantum entanglement, so we do not need any other entanglement-involving gates. \section{Quantum discrete Fourier transformation} The quantum discrete Fourier transformation is an important transformation in many quantum algorithms.
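Before writing out the Fourier transform in matrix form, the single- and two-qubit gates defined above are easy to experiment with numerically. The following sketch (Python with NumPy; the variable names are ours and not from the text) checks unitarity and shows that the CNOT gate applied after a Hadamard on the control qubit produces an entangled state.
\begin{verbatim}
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard gate
X = np.array([[0, 1], [1, 0]])                        # X gate (NOT)
CNOT = np.array([[1, 0, 0, 0],                        # controlled-NOT, first qubit = control
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def T(alpha):                                         # twist (phase) gate
    return np.array([[1, 0], [0, np.exp(1j * alpha)]])

for U in (H, X, CNOT, T(0.3)):
    assert np.allclose(U.conj().T @ U, np.eye(U.shape[0]))   # unitarity

b0 = np.array([1.0, 0.0])
psi = CNOT @ np.kron(H @ b0, b0)   # (1/sqrt(2)) (b0 kron b0 + b1 kron b1), an entangled state
print(psi)

# psi is not a Kronecker product u kron v: for product states the 2x2 matrix
# of coefficients has rank 1, while here it has rank 2.
print(np.linalg.matrix_rank(psi.reshape(2, 2)))       # prints 2
\end{verbatim}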
The quantum discrete Fourier transformation is just the discrete Fourier transformation \[\hat f(k)=\frac{1}{\sqrt N}\sum_{j=0}^{N-1}e^{2\pi ijk/N}f(j)\] applied to qubits. For a basis element $b_k$ of $\mathcal H^{\otimes n}$, the quantum discrete Fourier transformation $\mathcal F_n$ on $n$ qubits is \[\mathcal F_n(b_k)=\frac{1}{\sqrt {2^n}}\sum_{j=0}^{2^n-1}e^{2\pi ijk/2^n}b_j.\] Recall that if $N \mid k$, then \[\frac{1}{N}\sum_{j=0}^{N-1}e^{2\pi ijk/N}=1,\] and the sum is 0 if $N$ does not divide $k$. The quantum discrete Fourier transformation is due to the mathematician and cryptographer Don Coppersmith.\footnote{Coppersmith, D. An approximate Fourier transform useful in quantum factoring. Technical Report RC19642, IBM. 1994} Let us denote $e^{2\pi i/2^n}$ by $\zeta_{2^n}$. The quantum discrete Fourier transformation on $n$ qubits is represented by the unitary matrix \[\mathcal F_n=\frac{1}{\sqrt{2^n}}\begin{bmatrix} 1 & 1 & 1 & \cdots & 1\\ 1 & \zeta_{2^n} & \zeta_{2^n}^2 & \cdots & \zeta_{2^n}^{2^n-1}\\ 1 & \zeta_{2^n}^2 & \zeta_{2^n}^4 & \cdots & \zeta_{2^n}^{2(2^n-1)}\\ 1 & \zeta_{2^n}^3 & \zeta_{2^n}^6 & \cdots & \zeta_{2^n}^{3(2^n-1)}\\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \zeta_{2^n}^{2^n-1} & \zeta_{2^n}^{2(2^n-1)} & \cdots & \zeta_{2^n}^{(2^n-1)^2}\\ \end{bmatrix}.\] For one qubit, \[\mathcal F_1= \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & \zeta_{2} \end{bmatrix}=\frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & e^{\pi i} \end{bmatrix}=\frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}=H,\] i.e. the Hadamard gate is the quantum discrete Fourier transformation on one qubit. The quantum discrete Fourier transformation $\mathcal F_n$ can be performed by Hadamard gates and controlled twist (phase) gates. \section{Deutsch's algorithm} Deutsch's algorithm is a simple example of a quantum algorithm that shows the computational advantage of quantum algorithms. It solves the following problem\footnote{Deutsch, D. Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer. Proceedings of the Royal Society of London A. 400 (1818): 97–117. 1985}. Let $f:\{0,1\}\longrightarrow \{0,1\}$. If we want to decide by classical calculation whether $f$ is a constant function, we need to evaluate both $f(0)$ and $f(1)$. However, assume that a quantum gate \[U_f(b_j\otimes b_i):=b_j\otimes b_{f(j)\oplus i}\] is available. Then \[(H\otimes I)U_f(H\otimes H)(I\otimes X)(b_0\otimes b_0)=\frac{(-1)^{f(0)}}{2}\bigl[(1 +(-1)^{f(0)\oplus f(1)})b_0 + (1-(-1)^{f(0)\oplus f(1)})b_1\bigr]\otimes\frac{1}{\sqrt 2}(b_0-b_1).\] If $f$ is constant, then $f(0)\oplus f(1)=0$, i.e. measuring the first qubit gives $b_0$ with probability 1. Otherwise, $f(0)\oplus f(1)=1$, i.e. the measurement gives $b_1$ with probability 1. In this algorithm, we used $U_f$ only once. \section{Shor's algorithm} RSA is one of the most popular public-key cryptosystems. The security of RSA relies on the difficulty of prime factorization. It uses two very large primes $p,q$. The product $pq$ is made public, and anyone who wants to send a cryptogram uses $pq$ to encrypt the message. To decrypt the cryptogram, one has to know $p$ and $q$. Since prime factorization is believed to be very hard, one cannot in practice find $p$ and $q$ from $pq$. However, the mathematician Peter Shor published his paper "Algorithms for quantum computation: discrete logarithms and factoring" in 1994, which shows that prime factorization can be carried out fast by his quantum algorithm\footnote{Shor, P.W. Algorithms for quantum computation: discrete logarithms and factoring. Proceedings 35th Annual Symposium on Foundations of Computer Science. IEEE Comput. Soc. Press: 124–134. 1994}. Let $N$ be a product of two or more distinct odd primes.
If we find an element $g\in \mathbb{Z}^*_N$ whose order $r$ is even and which satisfies $g^{r/2}\not\equiv -1 \pmod N$,\footnote{It is proved that a substantial proportion of the elements of $ \mathbb{Z}^*_N$ have this property.} then $N$ divides $(g^{r/2}-1)(g^{r/2}+1)$, since \[g^r-1\equiv(g^{r/2}-1)(g^{r/2}+1)\equiv0 \mod N,\] while $N$ divides neither factor. Then $\gcd(g^{r/2}-1,N)$ and $\gcd(g^{r/2}+1,N)$ are non-trivial divisors of $N$. The Euclidean algorithm for computing a $\gcd$ is very fast. We use two registers of qubits: one register is an $n$-qubit system, with basis elements denoted $v_j$, where $N^2\leq 2^n <2N^2$; the other is an $m$-qubit system, with basis elements denoted $u_t$, where $m=\lceil\ln N/\ln 2\rceil $. We operate Shor's algorithm on the $(n+m)$-qubit system $v\otimes u$ as follows. 1. $H$ acts on each qubit of the $n$-qubit register:\[(H^{\otimes n}\otimes I^{\otimes m})v_0\otimes u_0=\frac{1}{\sqrt {2^n}}\sum^{2^n-1}_{j=0}v_j\otimes u_0.\] 2. Let $U_x(v_j\otimes u_t):=v_j\otimes u_{t+x^j\mod N}$ for $x\in \mathbb{Z}^*_N$. $U_x$ acts on the $(n+m)$-qubit system:\[U_x\big[\frac{1}{\sqrt {2^n}}\sum^{2^n-1}_{j=0}v_j\otimes u_0\big]=\frac{1}{\sqrt {2^n}}\sum^{2^n-1}_{j=0}v_j\otimes u_{x^j\mod N}.\] 3. $\mathcal F_n$ acts on the $n$-qubit register: \[ \mathcal F_n\otimes I^{\otimes m}\big[\frac{1}{\sqrt {2^n}}\sum^{2^n-1}_{j=0}v_j\otimes u_{x^j\mod N}\big]=\frac{1}{2^n}\sum^{2^n-1}_{j=0}\big(\sum^{2^n-1}_{c=0}e^{2\pi ijc/2^n}v_c\big)\otimes u_{x^j\mod N}.\] 4. Carry out the measurement of the resulting state \[(\mathcal F_n\otimes I^{\otimes m})U_x(H^{\otimes n}\otimes I^{\otimes m})(v\otimes u).\] From the outcome of the measurement one can read off the order of $x$, as explained below. 5. Factorize $N$ using the order of $x$. \\ Let the order of $x$ be $r$ and write $j=j_0+rk$ with $j_0\equiv j \mod r$, $0\leq j_0<r$. The probability $P(v_c\otimes u_{x^{j_0}})$ is \[\frac{1}{2^{2n}}\bigg|e^{2\pi ij_0c/2^n}\sum^{\lfloor2^n/r\rfloor+\delta}_{k=0}e^{2\pi irkc/2^n}\bigg|^2\] where $\delta=0$ or 1. If $r \mid 2^n$, then $P(v_c\otimes u_{x^{j_0}})>0$ only if $2^n/r \mid c$, and $P(v_c\otimes u_{x^{j_0}})=0$ otherwise. Therefore, the only possible results of the measurement are $v_{t2^n/r}\otimes u_{x^{j_0}}$ for integers $t$. So one can easily find the order of $x$. If $r$ does not divide $2^n$, a slightly different analysis is required, but the algorithm above is still the one that is used. \section{References} $\;\;\;\;$[1] D. Coppersmith, An approximate Fourier transform useful in quantum factoring. Technical Report RC19642, IBM. 1994. \\ [2] D. Deutsch, Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer. Proceedings of the Royal Society of London A. 400 (1818): 97–117. 1985. \\ [3] G. Mackey, Mathematical Foundations of Quantum Mechanics, W. A. Benjamin, 1963. \\ [4] J. von Neumann, Mathematical Foundations of Quantum Mechanics, 1932. \\ [5] P. W. Shor, Introduction to quantum algorithms, arXiv:quant-ph/0005003v2, 2001. \\ [6] P. W. Shor, Algorithms for quantum computation: discrete logarithms and factoring. Proceedings 35th Annual Symposium on Foundations of Computer Science. IEEE Comput. Soc. Press: 124–134. 1994. \\ [7] H. Weyl, The Theory of Groups and Quantum Mechanics, Dover Publications, 1950. \end{document}
{ "timestamp": "2020-08-21T02:12:53", "yymm": "2008", "arxiv_id": "2008.08905", "language": "en", "url": "https://arxiv.org/abs/2008.08905", "abstract": "In mathematical aspect, we introduce quantum algorithm and the mathematical structure of quantum computer. Quantum algorithm is expressed by linear algebra on a finite dimensional complex inner product space. The mathematical formulations of quantum mechanics had been established in around 1930, by von Neumann. The formulation uses functional analysis, linear algebra and probability theory. The knowledge of the mathematical formulation of QM is enough quantum mechanical knowledge for approaching to quantum algorithm and it might be efficient way for mathematicians that starting with mathematical formulations of QM. We explain the mathematical formulations of quantum mechanics briefly, quantum bits, quantum gates, quantum discrete Fourier transformation, Deutsch's algorithm and Shor's algorithm.", "subjects": "History and Overview (math.HO); Quantum Physics (quant-ph)", "title": "Linear algebra and quantum algorithm", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9867771770811146, "lm_q2_score": 0.7185944046238981, "lm_q1q2_score": 0.7090925580610544 }
https://arxiv.org/abs/2206.00601
Quantitative invertibility of non-Hermitian random matrices
The problem of estimating the smallest singular value of random square matrices is important in connection with matrix computations and analysis of the spectral distribution. In this survey, we consider recent developments in the study of quantitative invertibility in the non-Hermitian setting, and review some applications of this line of research.
\section{Introduction} Given an $N\times n$ ($N\geq n$) matrix $A$, its singular values are defined as square roots of the eigenvalues of the positive semidefinite $n\times n$ matrix $A^* A$: \[ s_i(A):=\sqrt{\lambda_i(A^* A)}, \quad i=1,2,\dots,n, \] where we assume the non-increasing ordering $\lambda_1(A^* A)\geq \lambda_2(A^* A)\geq\dots\geq \lambda_n(A^* A)$. The classical Courant--Fischer--Weyl theorem provides a variational formula \[ s_i(A)=\min\limits_{E:\;\dim(E)=n-i+1}\max\limits_{x\in E,\,\lVert x\rVert_2=1}\lVert Ax\rVert_2,\quad 1\leq i\leq n, \] where the minimum is taken over all linear subspaces $E$ of the specified dimension. In particular, \textit{the smallest} and \textit{the largest} singular values of $A$ can be computed as \[ s_{\min}(A)=s_n(A)=\min\limits_{x:\,\lVert x\rVert_2=1}\lVert Ax\rVert_2,\quad s_{\max}(A)=s_1(A)=\max\limits_{x:\,\lVert x\rVert_2=1}\lVert Ax\rVert_2. \] Additionally, if the matrix $A$ is square ($N=n$) and invertible then $s_{\min}(A)=\frac{1}{s_{\max}(A^{-1})}$. The magnitude of the smallest singular value of square random matrices has attracted much attention due to the special role it plays in several questions of theoretical significance and in applications. In particular, the ratio of the largest and smallest singular values of a square matrix --- \textit{the condition number} --- is systematically used in numerical analysis as a measure of sensitivity to round-off errors. Further, for certain random matrix models, bounds on the spectral norm of the resolvent of the matrix (or, equivalently, the smallest singular value of diagonal shifts of the matrix) are a crucial point in the study of the spectral distribution. We refer to Sections~\ref{s:num} and~\ref{s:spec} of the survey for a discussion of those directions. In this survey, we consider \textit{quantitative invertibility} of random \textit{non-Hermitian} square matrices, including matrices with independent entries and adjacency matrices of random regular digraphs. The main objective in that line of research is to obtain bounds on probabilities $\Prob\{s_{\min}(A)\leq t\}$ as a function of $t$, of the dimension, and, possibly, of some parameters of the model under consideration, such as the variance profile of the matrix or its mean. One approach to the problem, which may be called analytical, is based on comparing the distribution of $s_{\min}(A)$ with the distribution of the smallest singular value of a corresponding Gaussian random matrix. The latter is very well understood \cite{Edelman} since explicit formulas for the joint distribution of the singular values of Gaussian matrices are available \cite{James}. We refer to \cite{TV10a,CL2019} for results of that type. Another approach, which is the focus of this survey, falls in the category of \textit{non-asymptotic} methods \cite{RV survey} and is based on a combination of techniques that originated within asymptotic geometric analysis. It often produces very strong probability estimates although it typically lacks the precision of the analytical methods. The major features of that approach are (a) reducing the estimate for $s_{\min}$ to estimating distances between random vectors and random linear subspaces associated with the matrix, and (b) the use of concentration (Bernstein--type) and anti-concentration (Littlewood--Offord--type) inequalities. Often, this approach also involves constructing discretizations of certain subsets of $\R^n$ or $\Cp^n$ (\textit{$\varepsilon$-nets}) and estimating their cardinalities.
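As a concrete illustration of these definitions (an addition of ours, not part of the survey), the singular values, the smallest singular value and the condition number of a matrix can be read off from the SVD; a minimal sketch in Python with NumPy:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))        # random square matrix with i.i.d. N(0,1) entries

s = np.linalg.svd(A, compute_uv=False) # singular values s_1 >= ... >= s_n
s_max, s_min = s[0], s[-1]
print(s_max, s_min, s_max / s_min)     # largest, smallest singular value, condition number

# consistency checks with the variational / inverse characterizations
assert np.isclose(s_max, np.linalg.norm(A, 2))
assert np.isclose(s_min, 1 / np.linalg.norm(np.linalg.inv(A), 2))
assert np.isclose(s_max / s_min, np.linalg.cond(A, 2))
\end{verbatim}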
We will give a description of the features by considering multiple examples from the literature. Because of some differences in methodology, and because we wish to emphasize the importance of the matrix invertibility for numerical analysis and in the study of the spectral distribution, this survey does not cover non-quantitative results on singularity of random matrices. We note that estimating the singularity probability for several models of \textit{discrete} random matrices is a major topic within the combinatorial random matrix theory \cite{K67,KKS1995,TV07a,BVW10,CTV,Hoi sym}. In the last few years there has been significant progress in this research direction (also, as corollaries of quantitative results), in particular, on the problem of estimating the singularity probability of adjacency matrices of random regular (di)graphs \cite{JHuang,Meszaros,NM18}, of Bernoulli random matrices \cite{Tikh2020,LT2020,JSS2020a} and, more generally, discrete matrices with i.i.d entries \cite{JSS2020b}, and of random symmetric matrices \cite{CMMM,FJ19,CJMS,CJMSb}. We refer to a recent survey \cite{Van survey new} for a discussion and further references. The rest of the survey is organized as follows. Sections~\ref{s:num} and~\ref{s:spec} provide motivation for studying quantitative invertibility of non-Hermitian random matrices, and a brief account of known results. In Section~\ref{s:method}, we give an overview of the methodology, starting with the result of Rudelson and Vershynin \cite{RV08a} as a main illustration. We then discuss novel additions to the methodology made in the past 10 years, which made it possible to make progress on several important problems in the random matrix theory. Finally, in Section~\ref{s:open}, we discuss some open problems. Let us recall some notions which will be used further. A random variable $X$ on $\R$ or $\Cp$ is called \textit{subgaussian} if $\Exp\,\exp(|X|^2/K^2)<\infty$ for some number $K>0$. The smallest value of $K$ such that $\Exp\,\exp(|X|^2/K^2)-1\leq 1$ is called \textit{the subgaussian moment} of $X$. Given a sequence of \textit{random} Borel probability measures $(\mu_m)_{m=1}^\infty$ and a random probability measure $\mu$ on $\Cp$, we say that $\mu_m$ converge \textit{weakly in probability} to $\mu$ if for every bounded continuous function $f$ on $\Cp$, \[ \lim_{m\to\infty}\Prob\bigg\{\Big|\int f\,d\mu_m-\int f\,d\mu\Big|>\varepsilon\bigg\}=0,\quad \forall\varepsilon>0. \] We will denote by $\lVert \cdot\rVert$ the spectral norm of a matrix. The standard Euclidean norm in $\R^n$ or $\Cp^n$ will be denoted by $\lVert \cdot\rVert_2$. We will write $\dist(S,T)$ for the Euclidean distance between two subsets $S$ and $T$ of $\R^n$ or $\Cp^n$. By $S^{n-1}(\R)$ or $S^{n-1}(\Cp)$ we denote the unit Euclidean sphere in $\R^n$ or $\Cp^n$, respectively. The constants will be denoted by $C,c'$, etc. \section{Quantitative invertibility in matrix computations}\label{s:num} In this section, we discuss the importance of estimating the smallest singular value in numerical analysis, and provide a brief overview of related results on random matrices. \subsection{The condition number in numerical analysis} For an $n\times n$ invertible matrix $A\in \Cp^{n\times n}$, \textit{the condition number of $A$} is defined as \[ \kappa(A):=\lVert A\rVert\,\lVert A^{-1}\rVert=\frac{s_{\max}(A)}{s_{\min}(A)}. \] Consider a system of $n$ linear equations in $n$ variables, represented in the matrix-vector form as $Ax=b$.
If the system is \textit{well conditioned}, i.e.\ the condition number of the coefficient matrix $A$ is small, a perturbation of the matrix or the coefficient vector does not strongly affect the solution. In particular, the round-off errors in matrix computations such as Gaussian elimination do not significantly distort the solution vector. As an example of well known theoretical guarantees, we mention an estimate on \textit{the relative distance} between the solution of $Ax=b$ and the solution of a perturbed system \[ (A+F)y=(b+f). \] The terms $F\in \Cp^{n\times n}$ and $f\in\Cp^n$ can be thought of as consequences of measurement or round-off errors. It is not difficult to check that under the assumption that $\delta:=\max(\frac{\lVert F\rVert}{\lVert A\rVert},\frac{\lVert f\rVert_2}{\lVert b\rVert_2})$ is small, the relative distance $\frac{\lVert y-x\rVert_2}{\lVert x\rVert_2}$ satisfies \[ \frac{\lVert y-x\rVert_2}{\lVert x\rVert_2}=O\big(\delta\,\kappa(A)\big) \] (see, in particular, \cite[Section~2.6.2]{MatrixComp4}, \cite[Section~4]{Smale1985}). In the specific setting when the system $Ax=b$ is solved using Gaussian elimination with partial pivoting and the perturbation of the system is due to round-off errors, Wilkinson \cite{Wilkinson} showed that the relative distance between the computed and the actual solution can be bounded above by \[ n^{O(1)}\,\varepsilon\,\kappa(A)\,\rho. \] Here \textit{the growth factor} $\rho$ is defined as $\rho:=\frac{\max_{k=0,1,\dots;\,i,j\leq n}|a_{ij}^{(k)}|} {\max_{i,j\leq n}|a_{ij}|}$, with $a_{ij}^{(k)}$ being the $(i,j)$--th element of the matrix $A^{(k)}$ obtained from $A$ after $k$ iterations of the Gaussian elimination process, and $\varepsilon$ is the precision of the machine (see also \cite{TS1990,Sankar}). \medskip Whereas the condition number of $A$ characterizes the sensitivity of the corresponding system of linear equations to small perturbations, \textit{the eigenvector condition number} quantifies stability of the spectrum and eigenvectors of $A$. The eigenvector condition number of a diagonalizable matrix $A\in \Cp^{n\times n}$ is defined as \[ \kappa_V(A):=\min_{W\in\Cp^{n\times n}:\,W^{-1}A W\mbox{ is diagonal}}\kappa(W)= \min_{W\in\Cp^{n\times n}:\,W^{-1}A W\mbox{ is diagonal}}\frac{s_{\max}(W)}{s_{\min}(W)}. \] Clearly, $\kappa_V(A)=1$ if and only if $A$ is \textit{unitarily diagonalizable} (normal). A classical stability result for a matrix spectrum using the eigenvector condition number is the Bauer--Fike theorem \cite{BF1960}. According to the theorem, given a diagonalizable matrix $A$ and its perturbation $A+F$, the distance between any eigenvalue $\mu$ of $A+F$ and the spectrum of $A$ can be estimated as \[ \min_{\lambda\in \textrm{Spec}(A)}|\mu-\lambda|\leq \kappa_V(A)\lVert F\rVert. \] Moreover, stability of matrix functions under perturbations of the argument can be quantified using the eigenvector condition number (see \cite[Section~3.3]{Higham2008}). In this context, we also refer to a related line of research dealing with \textit{the approximate diagonalization} of matrices --- approximating a matrix by one with a small eigenvector condition number (see \cite{Davies,BKMS2019,BGKS2020,JSS2020} and references therein). A connection between $\kappa_V(A)$ and quantitative invertibility of diagonal shifts of $A$ is established through the notion of a pseudospectrum. \textit{The $\varepsilon$--pseudospectrum of $A$}, denoted $\textrm{Spec}_{\varepsilon}(A)$, is defined as the set of all points $z\in\Cp$ with $s_{\min}(A-z\,\Id)<\varepsilon$.
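As a small numerical sketch of these notions (the matrix, the perturbation and the test point below are arbitrary choices of ours, and the quantity computed from the eigenvector matrix returned by the solver is only an upper bound for $\kappa_V(A)$, which nevertheless suffices for the Bauer--Fike bound):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 40
A = rng.standard_normal((n, n))

# Eigen-decomposition A = V D V^{-1}; kappa(V) is an upper bound for kappa_V(A)
evals, V = np.linalg.eig(A)
kappa_V_upper = np.linalg.cond(V)

# Bauer--Fike check: eigenvalues of A+F stay within kappa(V)*||F|| of Spec(A)
F = 1e-6 * rng.standard_normal((n, n))
evals_pert = np.linalg.eigvals(A + F)
worst = max(np.min(np.abs(evals - mu)) for mu in evals_pert)
assert worst <= kappa_V_upper * np.linalg.norm(F, 2) + 1e-12

# Membership of a point z in the eps-pseudospectrum via the smallest singular value
z, eps = evals[0] + 1e-3, 1e-2
s_min_shift = np.linalg.svd(A - z * np.eye(n), compute_uv=False)[-1]
print("z in Spec_eps(A):", s_min_shift < eps)
\end{verbatim}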
It can be shown (see \cite[Lemma~9.2.11]{DaviesBook}) that for a diagonalizable matrix $A$ with $D$ being a corresponding diagonal matrix, $ \textrm{Spec}(D)+\kappa_V(A)^{-1}\,\varepsilon\,U\subset \textrm{Spec}_{\varepsilon}(A)\subset \textrm{Spec}(D)+\kappa_V(A)\,\varepsilon\,U$, where $U$ is the unit disk of the complex plane. \subsection{Related results on random matrices} Randomness is a natural way to model typical matrices observed in applications. For example, the LINPACK benchmark for measuring computing power involves systems of linear equations with a randomly generated coefficient matrix \cite{LINPACK}. Condition numbers of random square matrices from the computational perspective were first considered by von Neumann and Goldstine \cite{vNG}. Rigorous results were obtained much later, notably by Edelman \cite{Edelman} for Gaussian random matrices (see also Szarek \cite{Szarek}). We note here that for \textit{sufficiently dense} random matrices with i.i.d entries satisfying certain moment conditions, estimating the largest singular value up to a constant multiple can be accomplished by a simple combination of Bernstein--type inequalities and an $\varepsilon$--net argument (see, for example, \cite{RV survey}), and with precision up to a $(1\pm o(1))$ multiple via the trace method \cite{Geman,YBK,Seginer}. In what follows, we only discuss estimates for the smallest singular value. The average-case quantitative analysis of the matrix invertibility, when a typical matrix is modeled as a random matrix with independent entries with matching first two moments, has been developed in multiple works. We refer, in particular, to papers \cite{TV10a,CL2019} employing the analytical approach, as well as works \cite{Rud2008,TV2009,RV08a,RV08b,RT15,BR16,BR threshold, Luh,Tikh shifted,Livshyts,Tikh2020,LTV,Tatarko,LT2020,JSS2020a,JSS2020b,Huang} based on reduction to distance estimates and on the use of concentration/anti-concentration inequalities. Some of those results are mentioned below. In \cite{RV08a}, Rudelson and Vershynin showed that given a random $n\times n$ matrix $A$ with i.i.d real entries of zero mean, unit variance and a bounded \textit{subgaussian moment}, the smallest singular value of $A$ satisfies \[ \Prob\{s_{\min}(A)\leq n^{-1/2}\,t\}\leq C(t+c^n),\quad t>0, \] where the constants $C>0$ and $c\in(0,1)$ may only depend on the subgaussian moment (in fact, the statement is preserved if $A$ is shifted by a non-random matrix with the spectral norm of order $O(\sqrt{n})$). The moment assumptions and the requirement that the entries are equidistributed were relaxed in later works \cite{RT15,Livshyts,LTV}. On the other hand, in the special case of a matrix $A$ with i.i.d entries taking values $+1$ and $-1$ with probability $1/2$, it was proved in \cite{Tikh2020} that for any $\varepsilon>0$, \[ \Prob\big\{s_{\min}(A)\leq n^{-1/2}\,t\big\}\leq Ct+C(1/2+\varepsilon)^n,\quad t>0, \] where $C>0$ is only allowed to depend on $\varepsilon$ (see the introduction to \cite{Tikh2020} as well as \cite{Van survey new} for a discussion of this result in the context of the combinatorial random matrix theory).
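The linear dependence on $t$ in such bounds can be glimpsed in a simulation. The following rough Monte Carlo sketch (the dimension, the sample size and the grid of values of $t$ are arbitrary choices of ours, and the exponentially small additive terms are far below the resolution of such an experiment) estimates the left-hand side for random sign matrices:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, trials = 100, 2000

def smin_sign_matrix():
    # Random n x n matrix with i.i.d. +1/-1 entries
    A = rng.choice([-1.0, 1.0], size=(n, n))
    return np.linalg.svd(A, compute_uv=False)[-1]

smins = np.array([smin_sign_matrix() for _ in range(trials)])

# Empirical estimate of P{ s_min(A) <= t * n^{-1/2} } for a few values of t
for t in (0.05, 0.1, 0.2, 0.4):
    p_hat = np.mean(smins <= t / np.sqrt(n))
    print(f"t = {t}: empirical probability {p_hat:.3f}")
\end{verbatim}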
A yet stronger result is available when $A$ has i.i.d \textit{discrete} entries which are not uniformly distributed on their support \cite{JSS2020b}: for every $\varepsilon>0$ and assuming $n$ is sufficiently large, \[ \Prob\big\{s_{\min}(A)\leq n^{-1/2}\,t\big\}\leq Ct+(1+\varepsilon) \Prob\big\{\mbox{Two rows or columns of $A$ are collinear}\big\},\quad t>0, \] with $C>0$ depending only on the individual entry's distribution (see \cite{JSS2020b} for the statement in its full strength). In the setting when $A$ has i.i.d Bernoulli($p$) entries and $p$ is allowed to depend on $n$, it was shown in \cite{BR threshold,LT2020,Huang} that, as long as $p\leq c$ for a small universal constant $c>0$, for every $\varepsilon>0$ and assuming $n$ is sufficiently large, \[ \Prob\big\{s_{\min}(A)\leq n^{-C}\,t\big\}\leq t+(1+\varepsilon) \Prob\big\{\mbox{A row or a column of $A$ is zero}\big\},\quad t>0, \] where $C>0$ is a universal constant. We refer to \cite{Huang} for a generalization to matrix rank estimates, as well as to \cite{JSS2020a} for sharp bounds in the setting of constant $p\in(0,1/2)$ and \cite{BR16} for stronger quantitative estimates in a certain range for the parameter $p$. \medskip Put forward by Spielman and Teng \cite{ST02}, \textit{the smoothed analysis} of the condition number is concerned with quantitative invertibility of a typical matrix in a small neighborhood of a fixed matrix (with possibly a very large spectral norm). A basic probabilistic model of that type is of the form $A+M$, where $M$ is a non-random matrix, and $A$ has i.i.d entries. The result of Sankar--Spielman--Teng \cite{SST06} provided a small ball probability bound, \textit{independent} of the shift $M$, for the smallest singular value of a shifted \textit{Gaussian} real random matrix with i.i.d standard normal entries: \[ \Prob\big\{s_{\min}(A+M)\leq t\,n^{-1/2}\big\}\leq C\,t,\quad t>0, \] for a certain universal constant $C>0$ (see also \cite[Section~2.3]{BKMS2019}). Analogous estimates for a broader class of random matrices with continuous distributions were later obtained in \cite{Tikh shifted}. On the other hand, it was observed that for certain discrete random matrices, such as random sign (Bernoulli) matrices, no shift-independent small ball probability bounds for $s_{\min}(A+M)$ are possible \cite{TV10c,Tikh shifted,JSS2020 smooth}. In particular, it is shown in \cite{JSS2020 smooth} that if $A$ has i.i.d entries taking values $\pm 1$ with probability $1/4$ each and zero with probability $1/2$, then for every $L\geq 1$ and every positive integer $K$, \[ \sup_{M:\,\lVert M\rVert \leq n^{L}}\Prob\big\{s_{\min}(A+M)\leq C\, n^{-KL}\big\}\geq c\,n^{-K(K-1)/4}, \] where $C,c>0$ may only depend on $L$ and $K$. The smoothed analysis of the matrix condition number for discrete distributions was carried out in works \cite{TV STOC, TV10c,Jain2021a,JSS2020 smooth} (see also references therein). The following result was proved in \cite{TV10c}. Let $K,B,\varepsilon>0$ and $L\geq 1/2$ be arbitrary parameters. Then, for all sufficiently large $n$, given an $n\times n$ random matrix $A$ with i.i.d centered entries of unit variance and the subgaussian moment bounded above by $B$, and given a non-random matrix $M$ with $\lVert M\rVert\leq n^L$, one has \[ \Prob\big\{s_{\min}(A+M)\leq n^{-(2K+1)L}\big\}\leq n^{-K+\varepsilon}.
\] In \cite{JSS2020 smooth}, it is shown that the above small ball probability bound can be significantly improved to match the average-case result of Rudelson and Vershynin \cite{RV08a}, under the assumption that a positive fraction of the singular values of $M$ are of order $O(\sqrt{n})$. More specifically, for every $\tilde c\in(0,1)$ and $\tilde C>0$, and any fixed matrix $M$ with $s_{n-\lfloor \tilde cn\rfloor}(M)\leq \tilde C\sqrt{n}$, one has \[ \Prob\big\{s_{\min}(A+M)\leq t\,n^{-1/2}\big\}\leq C(t+c^{n}), \] where $C>0$ and $c\in(0,1)$ may only depend on $\tilde c$, $\tilde C$, and the subgaussian moment $B$. Under much weaker assumptions on the shift $M$, though at a price in precision, quantitative bounds for $s_{\min}(A+M)$ were obtained in \cite{Jain2021a}. \section{Invertibility and spectrum}\label{s:spec} Given a square $n\times n$ matrix $A_n$, denote by $\mu_{A_n}$ its normalized spectral measure (spectral distribution): $$ \mu_{A_n}:=\frac{1}{n}\sum_{i=1}^n \delta_{\lambda_i(A_n)}. $$ For real and complex Gaussian matrices with i.i.d standard entries (\textit{the Ginibre ensemble}), explicit formulas for the joint distribution of the eigenvalues are known \cite{Ginibre,EdelmanJMA,LS1991}. Those, in turn, were used by Mehta \cite{Mehta}, Silverstein (unpublished; see \cite[Section~3]{BC2011}) and Edelman \cite{EdelmanJMA} to derive convergence results for the spectral distribution in the Gaussian case. In the non-Gaussian setting, when no similar formulas are available, Girko \cite{Girko} proposed a \textit{Hermitization} argument based on the identity \begin{align*} \frac{1}{n}\sum_{i=1}^n\log|z-\lambda_i(A_n)| =\frac{1}{n}\log\sqrt{\det((A_n-z\,\Id)(A_n-z\,\Id)^*)}=\frac{1}{n}\sum_{i=1}^n\log s_i(A_n-z\,\Id), \end{align*} which relates the spectrum of the matrix to the singular values of its shifts $A_n-z\,\Id$. A modern form of the argument can be summarized as follows (see \cite[Lemma~4.3]{BC2011} as well as \textit{the replacement principle} in \cite{TV circ}). Assume that a sequence of random matrices $(A_n)_{n=1}^\infty$ is such that for almost every $z\in\Cp$, the sequence of measures $\mu_{\sqrt{(A_n-z\,\Id)(A_n-z\,\Id)^*}}$ converges weakly in probability to a non-random probability measure $\mu_z$. Assume further that the logarithm is \textit{uniformly integrable in probability} with respect to $(\mu_{\sqrt{(A_n-z\,\Id)(A_n-z\,\Id)^*}})_{n=1}^\infty$ for almost every $z\in\Cp$, that is \begin{equation}\label{1-9481-498-} \lim_{t\to\infty}\sup_n \Prob\bigg\{\frac{1}{n}\sum_{i\leq n:\,|\log s_i(A_n-z\,\Id)|>t}|\log s_i(A_n-z\,\Id)|>\varepsilon\bigg\}=0,\quad \forall \varepsilon>0. \end{equation} Then there is a measure $\mu$ on $\Cp$ such that the sequence $(\mu_{A_n})_{n=1}^\infty$ converges to $\mu$ weakly in probability; moreover, the measure $\mu$ can be characterized in terms of $(\mu_z)_{z\in\Cp}$. We refer to \cite{BC2011} for proofs as well as a detailed historical account of the study of the spectral distribution of non-Hermitian random matrices, up to the 2000s. In view of the uniform integrability requirement \eqref{1-9481-498-}, strong quantitative estimates for small singular values of matrices $A_n-z\,\Id$ are an essential part of the Hermitization argument.
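The identity underlying the Hermitization argument is elementary and can be verified numerically (a minimal sketch; the matrix model, the dimension and the point $z$ below are arbitrary choices of ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n = 200
A = rng.standard_normal((n, n)) / np.sqrt(n)
z = 0.3 + 0.2j

# Left-hand side: average of log|z - lambda_i| over the eigenvalues of A
lhs = np.mean(np.log(np.abs(z - np.linalg.eigvals(A))))

# Right-hand side: average of log s_i(A - z Id) over the singular values of the shift
rhs = np.mean(np.log(np.linalg.svd(A - z * np.eye(n), compute_uv=False)))

print(lhs, rhs)   # the two averages agree up to floating-point error
\end{verbatim}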
In the setting of matrices with i.i.d non-Gaussian entries, first rigorous estimates on the small singular values of $A_n-z\,\Id$ sufficient for the argument to go through were obtained for a class of continuous distributions by Bai \cite{Bai circ}, who applied the estimates to study the limiting spectral distribution in that setting. As the techniques to quantify invertibility of more general classes of matrices became available through the works of Tao--Vu \cite{TV2009}, Rudelson \cite{Rud2008}, and Rudelson--Vershynin \cite{RV08a}, the result of Bai was successively generalized in works \cite{GT circ,PZ circ,TV circ -,TV circ}. The \textit{strong circular law} under minimal moment assumptions proved in \cite{TV circ} can be formulated as follows. Let $\xi$ be a complex valued random variable of zero mean and unit absolute second moment, and let $(A_n)_{n=1}^\infty$ be a sequence of random matrices, where each $A_n$ is $n\times n$ with i.i.d entries equidistributed with $\xi$. Then the sequence of spectral distributions $(\mu_{\frac{1}{\sqrt{n}}A_n})_{n=1}^\infty$ converges weakly almost surely to the uniform probability measure on the unit disk of the complex plane. \smallskip In the context of the circular law, a most studied model of \textit{sparse} random matrices is of the form $A_n=B_n \odot M_n$, where $B_n$ is the random matrix with i.i.d Bernoulli($p_n$) entries, $M_n$ is independent from $B_n$ and has i.i.d entries equidistributed with a random variable $\xi$ of unit variance, and ``$\odot$'' denotes the Hadamard (entry-wise) product of matrices. In the regime $p_n\geq n^{-1+\varepsilon}$ for a fixed $\varepsilon>0$, the (weak) circular law has been established in \cite{Wood circ} following earlier works \cite{GT circ,TV circ -} dealing with additional moment assumptions. In the yet sparser regime, estimating the smallest singular value of $A_n-z\,\Id$ presents significant challenges, and further progress has only been made recently in \cite{BR circ,RT circ}. In \cite{RT circ}, it is proved that, assuming $\xi$ is a real valued random variable of unit variance, $n\,p_n\leq n^{1/8}$, and $n p_n$ tends to infinity with $n$, and assuming the matrices $A_n=B_n \odot M_n$ are defined as in the previous paragraph, the sequence of spectral distributions $\big(\mu_{\frac{1}{\sqrt{n p_n}}A_n}\big)_{n=1}^\infty$ converges weakly in probability to the uniform measure on the unit disk of $\Cp$. A central technical result of \cite{RT circ} is the following quantitative bound for $s_{\min}(A_n-z\,\Id)$: under the assumption that $|z|\leq np_n$ and $|\Im(z)|\geq 1$, \[ \Prob\big\{s_{\min}(A_n-z\,\Id)\leq \exp(-C\log^3 n)\big\}\leq C\,(np_n)^{-c}, \] where $C,c>0$ may only depend on the c.d.f of $\xi$. Quantitative invertibility and spectrum of adjacency matrices of random regular directed graphs have been considered in multiple works in recent years \cite{C14,Cook circ random,Cook circ,BCZ circ,LLTTY sing,LLTTY trans,LLTTY circ}. Given integers $n$ and $d$, a \textit{$d$--regular digraph} on vertices $\{1,2,\dots,n\}$ is a directed graph in which every vertex has $d$ incoming edges and $d$ outgoing edges. Here, we focus on the model when no multiedges are allowed but the graph may have loops (the latter condition is not conventional). For each $n$, denote by $A_{n,d}$ the adjacency matrix of a random graph uniformly distributed on the set of all $d$--regular digraphs on $\{1,2,\dots,n\}$ (we allow $d$ to depend on $n$).
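As a small illustration of this model, one can generate such adjacency matrices on a toy scale. The sketch below superposes $d$ uniformly random permutation matrices and rejects overlaps; this ``permutation model'' is close to, but not exactly, the uniform distribution assumed in the results quoted here, and all parameter values are arbitrary choices of ours:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)

def sample_regular_digraph(n, d, max_tries=10000):
    """0/1 matrix with all row and column sums equal to d, no multi-edges, loops allowed.
    Sampled by superposing d random permutation matrices and rejecting overlaps."""
    for _ in range(max_tries):
        A = np.zeros((n, n), dtype=int)
        for _ in range(d):
            perm = rng.permutation(n)
            A[np.arange(n), perm] += 1
        if A.max() <= 1:
            return A
    raise RuntimeError("rejection sampler failed; try smaller d or larger max_tries")

A = sample_regular_digraph(30, 3)
assert (A.sum(axis=0) == 3).all() and (A.sum(axis=1) == 3).all()
print(np.linalg.svd(A - 0.5 * np.eye(30), compute_uv=False)[-1])  # s_min of a diagonal shift
\end{verbatim}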
First results on invertibility for this model were obtained by Cook \cite{C14}. The circular law for the sequence of spectral measures $\big(\mu_{\frac{1}{\sqrt{d(1-d/n)}}A_{n,d}}\big)_{n=1}^\infty$ has been established in \cite{Cook circ} under the assumption $\min(d,n-d)\geq \log^{96} n$. Later, in works \cite{LLTTY sing,LLTTY trans,LLTTY circ}, the complementary range $d=\omega(1)$, $d\leq \log^{96} n$, was treated. Both results rely heavily on estimates of the smallest singular values of $A_{n,d}-z\,\Id$. In particular, the main theorem of \cite{LLTTY sing} is the following statement: assuming $C\leq d\leq n/\log^2 n$ and $|z|\leq d/6$, \[ \Prob\big\{s_{\min}(A_{n,d}-z\,\Id)< n^{-6}\big\}\leq \frac{C\log^2 d}{\sqrt{d}}, \] where $C>0$ is a universal constant. \smallskip Invertibility of \textit{structured} random matrices and applications to the study of limiting spectral distribution have been considered, in particular, in \cite{RZ2016,C16,Cooketal1,Cooketal2,JJLR}. A basic model of interest here is of the form $A_n=U_n\odot M_n-z\,\Id$, where $M_n$ is a matrix with i.i.d entries of zero mean and unit variance, $z\in\Cp$ is some complex number, $U_n$ is a non-random matrix with non-negative real entries encoding \textit{the standard deviation profile}, and ``$\odot$'' denotes the Hadamard (entry-wise) product of matrices. Note that $A_n$ has mutually independent entries, with $\sqrt{\Var\, a_{ij}}=u_{ij}$, $1\leq i,j\leq n$. In \cite{RZ2016}, invertibility (and, more generally, the singular spectrum) of $U_n\odot M_n$ was studied in connection with the problem of estimation of matrix permanents. In particular, strong quantitative bounds on $s_{\min}(U_n\odot M_n)$ were obtained in the setting when $M_n$ is the standard real Gaussian matrix, and $U_n$ is a \textit{broadly connected} profile (see \cite[Section~2]{RZ2016}). Significant progress in the study of structured random matrices was made by Cook in \cite{C16}, who extended the result of \cite{RZ2016} to non-Gaussian matrices, and obtained a polynomial lower bound on $s_{\min}(U_n\odot M_n-z\,\Id)$ under very general assumptions on $U_n$. Namely, assuming that all entries of $U_n$ are in the interval $[0,C]$, that $|z|\in [c\sqrt{n},C\sqrt{n}]$ for some constants $c,C>0$, and that the entries of $M_n$ have a bounded $(4+\varepsilon)$--moment, the main result of \cite{C16} asserts that \[ \Prob\big\{s_{\min}(U_n\odot M_n-z\,\Id)\leq n^{-\beta}\big\}\leq n^{-\alpha} \] for some $\alpha,\beta>0$ depending only on $c,C,\varepsilon$, and the value of the $(4+\varepsilon)$--moment. In \cite{Cooketal1}, this estimate was applied to derive limiting laws for the spectral distributions, under some additional assumptions on $U_n$. One of the results of \cite{Cooketal1} is \textit{the circular law for doubly stochastic variance profiles}: provided that $\sum_{i=1}^n (U_n)^2_{ij}=\sum_{i=1}^n (U_n)^2_{ji}=n$, $1\leq j\leq n$, and $\sup_n\max_{i,j}(U_n)_{ij}<\infty$, the sequence of spectral distributions $\big(\mu_{\frac{1}{\sqrt{n}}U_n\odot M_n}\big)_{n=1}^\infty$ converges weakly in probability to the uniform measure on the unit disc of $\Cp$. The setting of \textit{sparse} structured matrices is not well understood. For results in that direction, we refer to a recent paper \cite{JJLR} dealing with invertibility and spectrum of \textit{block band matrices}. \section{Methodology}\label{s:method} We start this section with a brief outline of \cite{RV08a} which will serve as a canonical illustration of non-asymptotic methods.
The proof of the main theorem in \cite{RV08a} relies on four major components: sphere partitioning, invertibility via distance, $\varepsilon$--net arguments, and Littlewood--Offord--type inequalities. Let $A$ be an $n\times n$ matrix with i.i.d real entries of zero mean and unit variance, and assume for simplicity that the entries are $K$--subgaussian for some constant $K>0$. We recall that a vector $x\in\R^n$ is called \textit{$m$--sparse} if the size of its support is at most $m$. We will denote the set of all $m$--sparse vectors by $\Sparse_n(m)$. The proof of \cite[Theorem~3.1]{RV08a} starts with splitting $S^{n-1}(\R)$ into sets of \textit{compressible} and \textit{incompressible} vectors, \begin{align*} \Comp_n(\delta,\rho)&:=\big\{x\in S^{n-1}(\R):\;\dist(x,\Sparse_n(\delta n))< \rho\big\};\\ \Incomp_n(\delta,\rho)&:=\big\{x\in S^{n-1}(\R):\;\dist(x,\Sparse_n(\delta n))\geq \rho\big\}. \end{align*} Here, $\delta,\rho\in(0,1)$ are small constants. The variational formula for $s_{\min}(A)$ allows one to write \begin{align*} \Prob\big\{s_{\min}(A)\leq s\big\}\leq &\Prob\big\{\lVert Ax\rVert_2\leq s\mbox{ for some $x\in \Comp_n(\delta,\rho)$}\big\}\\ + &\Prob\big\{\lVert Ax\rVert_2\leq s\mbox{ for some $x\in \Incomp_n(\delta,\rho)$}\big\},\quad s>0. \end{align*} If both $\delta$ and $\rho$ are sufficiently small, the set of compressible vectors has small \textit{covering numbers}, which makes it possible to apply an $\varepsilon$--net argument. More specifically, it can be checked that for every $\varepsilon\in(3\rho,1/2]$ there is a discrete subset $\Net\subset \Comp_n(\delta,\rho)$ of size at most $\big(\frac{C}{\varepsilon\delta}\big)^{\delta n}$ such that for every $x\in \Comp_n(\delta,\rho)$, we have $\dist(x,\Net)\leq\varepsilon$ (i.e.\ $\Net$ is an $\varepsilon$--net in $\Comp_n(\delta,\rho)$ with respect to the Euclidean metric). Consequently, for every $L>0$, \begin{align*} \Prob&\big\{\lVert Ax\rVert_2\leq s\mbox{ for some $x\in \Comp_n(\delta,\rho)$}\big\}\\ &\leq \Prob\big\{\lVert Ay\rVert_2\leq s+\varepsilon\,L\sqrt{n}\mbox{ for some $y\in \Net$}\big\} +\Prob\big\{\lVert A\rVert> L\sqrt{n}\big\}\\ &\leq |\Net|\, \sup\limits_{z\in S^{n-1}(\R)}\Prob\big\{\lVert Az\rVert_2\leq s+\varepsilon\,L\sqrt{n}\big\} +\Prob\big\{\lVert A\rVert> L\sqrt{n}\big\}. \end{align*} For any $z\in S^{n-1}(\R)$, the vector $Az$ has i.i.d subgaussian components of unit variance, and a standard Laplace transform argument implies that, as long as $s+\varepsilon\,L\sqrt{n}$ is much less than $\sqrt{n}$, the probability $\Prob\big\{\lVert Az\rVert_2\leq s+\varepsilon\,L\sqrt{n}\big\}$ is exponentially small in $n$. Moreover, for a sufficiently large constant $L$, the probability $\Prob\big\{\lVert A\rVert> L\sqrt{n}\big\}$ is exponentially small in $n$. Therefore, an appropriate choice of parameters $\delta,\rho,\varepsilon,L$ yields \[ \Prob\big\{\lVert Ax\rVert_2\leq s\mbox{ for some $x\in \Comp_n(\delta,\rho)$}\big\}\leq 2\exp(-cn),\quad s=o(\sqrt{n}). \] We refer to \cite{RV08a} as well as \cite{RV survey} for details regarding the above computations. Let us note also that the idea of sphere partitioning was applied a few years earlier in the paper \cite{LPRT} dealing with rectangular random matrices. The incompressible vectors are treated using the \textit{invertibility via distance} argument, which is based on the observation that for any incompressible vector $x$, a constant proportion of its components are of order $\Omega(n^{-1/2})$ in absolute value.
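Before turning to the distance argument itself, here is a small numerical sketch of the sphere-partition step (the parameters $\delta,\rho$ and the two test vectors are arbitrary choices of ours); it uses the fact that the distance from a vector to $\Sparse_n(m)$ is attained by keeping its $m$ largest-magnitude coordinates:
\begin{verbatim}
import numpy as np

def dist_to_sparse(x, m):
    """Euclidean distance from x to the set of m-sparse vectors:
    the norm of x with its m largest-magnitude coordinates removed."""
    idx = np.argsort(np.abs(x))          # ascending by magnitude
    return np.linalg.norm(x[idx[:-m]]) if m > 0 else np.linalg.norm(x)

n, delta, rho = 1000, 0.1, 0.3
m = int(delta * n)

# A 'flat' unit vector is incompressible; a nearly sparse one is compressible
x_flat = np.ones(n) / np.sqrt(n)
x_peaked = np.zeros(n)
x_peaked[:m // 2] = 1.0
x_peaked[m // 2:] = 1e-3
x_peaked /= np.linalg.norm(x_peaked)

for x in (x_flat, x_peaked):
    d = dist_to_sparse(x, m)
    label = "compressible" if d < rho else "incompressible"
    print(f"dist to {m}-sparse vectors: {d:.3f} -> {label}")
\end{verbatim}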
For every $1\leq i\leq n$, denote by $H_i(A)$ the linear span of columns of $A$ except the $i$-th: \[ H_i(A):=\Span\{\Col_j(A),\;j\neq i\}. \] Then for an arbitrary vector $x$ and an arbitrary ``threshold'' $\tau>0$ with $\{i:\,|x_i|\geq \tau\}\neq\emptyset$, we have \[ \lVert Ax\rVert_2\geq \max_{1\leq i\leq n}\big(|x_i|\,\dist(\Col_i(A),H_i(A))\big) \geq \tau\max_{i:\,|x_i|\geq \tau}\dist(\Col_i(A),H_i(A)), \] and hence for any $s>0$, $ \textbf{1}_{\{\lVert Ax\rVert_2\leq s\}}\leq \frac{1}{|\{i:\,|x_i|\geq \tau\}|}\sum_{i=1}^n\textbf{1}_{\{\dist(\Col_i(A),H_i(A))\leq s/\tau\}}$. This, combined with Markov's inequality and the fact that every $(\delta,\rho)$--incompressible vector is \textit{spread} --- has at least $\delta n$ components of magnitude at least $\rho n^{-1/2}$ --- gives for $t>0$, \begin{equation}\label{310984-198} \Prob\big\{\exists\,x\in \Incomp_n(\delta,\rho):\; \lVert Ax\rVert_2\leq t\,n^{-1/2}\big\} \leq \frac{1}{\delta n}\sum_{i=1}^n\Prob\{\dist(\Col_i(A),H_i(A))\leq t/\rho\} \end{equation} (see \cite[Lemma~3.5]{RV08a}). Since the distribution of $A$ is invariant under column permutations, the last relation can be rewritten as \begin{align*} \Prob\big\{\lVert Ax\rVert_2\leq t\,n^{-1/2}\mbox{ for some $x\in \Incomp_n(\delta,\rho)$}\big\} &\leq \frac{1}{\delta}\Prob\{\dist(\Col_n(A),H_n(A))\leq t/\rho\}\\ &\leq\frac{1}{\delta}\Prob\{|\langle \Col_n(A),Y_n(A)\rangle|\leq t/\rho\}, \end{align*} where $Y_n(A)$ denotes a unit normal to $H_n(A)$ measurable with respect to $\sigma(H_n(A))$. The most involved part of \cite{RV08a} is the analysis of the anti-concentration of $\langle \Col_n(A),Y_n(A)\rangle$. Recall that \textit{the L\'evy concentration function} $\cf(Z,t)$ of a real random variable $Z$ is defined as \[ \cf(Z,t):=\sup_{\lambda\in\R}\Prob\big\{|Z-\lambda|\leq t\big\},\quad t\geq 0. \] The relationship between the magnitude of $\cf\big(\sum_{i=1}^n a_i Z_i,t\big)$ for a linear combination of random variables $\sum_{i=1}^n a_i Z_i$ and the structure of the coefficient vector $(a_1,\dots,a_n)$ has been studied in numerous works, starting from the Erdos--Littlewood--Offord inequality \cite{LO1943,Erdos1945}; we refer, in particular, to works \cite{Rogozin1961,Kolmogorov1958,Kesten1969,Esseen1966} as well as \cite{TV2009} and the survey \cite{NV13} for a more recent account of \textit{the Littlewood--Offord theory} and its applications to the matrix invertibility. To characterize the structure of a coefficient vector with regard to anti-concentration, the notion of the \textit{Essential Least Common Denominator} (LCD) has been introduced in \cite{RV08a}. We quote a slightly modified definition from \cite{RV rect}: \[ \LCD(a):=\inf\big\{\theta>0:\;\dist(\theta\,a,\Z^n)<\min(\gamma\lVert \theta\,a\rVert_2,\alpha\sqrt{n})\big\},\quad a\in\R^n. \] Here, $\alpha,\gamma$ are small positive constants. The Littlewood--Offord--type inequality used in \cite{RV08a,RV rect} can be stated as follows. If $Z_1,Z_2,\dots,Z_n$ are i.i.d real valued random variables with $\Prob\{|Z_i-\Exp\,Z_i|<\beta\}\leq 1-\beta$ for some $\beta>0$ then for any unit vector $a\in\R^n$, \begin{equation}\label{3u32-4u4poiu} \cf\bigg(\sum_{i=1}^n a_i Z_i,t\bigg)\leq Ct+\frac{C}{\LCD(a)}+2\exp(-c\,n),\quad t>0, \end{equation} where $C>0$ may only depend on $\beta,\gamma$ and $c>0$ only on $\alpha,\beta$ (see \cite{RV rect} for a proof). Using an $\varepsilon$--net argument, the authors of \cite{RV08a} show that with probability exponentially close to one, the random unit normal vector $Y_n(A)$ has an exponentially large $\LCD$.
This implies \begin{align*} \Prob\{|\langle \Col_n(A),Y_n(A)\rangle|\leq s\} &\leq \Prob\big\{\LCD(Y_n(A))<\exp(c'n)\big\}+Cs+C\exp(-c'\,n) +2\exp(-c\,n)\\ &\leq Cs+C'\exp(-c''\,n),\quad s>0. \end{align*} The combination of all the ingredients now gives the final estimate \[ \Prob\big\{s_{\min}(A)\leq t\,n^{-1/2}\big\}\leq \tilde C\,t+\tilde C\,\exp(-\hat c\,n),\quad t>0, \] matching, in order of magnitude and up to the exponentially small additive term, the known asymptotics of $s_{\min}$ of Gaussian random matrices \cite{Edelman,Szarek}. \smallskip In the remaining part of this section, we will consider some of the novel additions to the methodology made over the past years. To avoid technical details as much as possible, we will refer to compressible vectors as well as all related notions from the literature as \textit{almost sparse} vectors, and to incompressible vectors and their relatives as \textit{spread} vectors. \smallskip \textbf{Invertibility over almost sparse vectors.} In the setting of \textit{dense} random matrices as described above, the set of almost sparse vectors $\textrm{AlSp}_n$ can be treated by a simple $\varepsilon$--net argument since anti-concentration estimates for $\lVert Az\rVert_2$ for an \textit{arbitrary} vector $z\in S^{n-1}$ are able to overpower the cardinality of the $\varepsilon$--net $\Net$ in $\textrm{AlSp}_n$. In the case of sparse and certain models of structured random matrices, such an argument may not be sufficient since the product $|\Net|\,\sup_{z\in S^{n-1}(\R)} \Prob\{\lVert Az\rVert_2\leq s\}$ may be much larger than one even for small $s>0$. We consider two related approaches to this problem from the literature. The first one is based on further subdividing $\textrm{AlSp}_n$ into a few subsets $T_1,T_2,\dots$ according to the number of the vector's components of non-negligible magnitude, and applying an $\varepsilon$--net argument within each of the subsets. Anti-concentration estimates for $Az$ for vectors $z\in T_i$ then compete with the cardinality of an $\varepsilon$--net on the set $T_i$ rather than on the entire collection $\textrm{AlSp}_n$, which, for certain models, allows the proof to go through. We refer, in particular, to \cite[Section~4]{RZ2016} and \cite[Section~3]{C16} for an application of this strategy to structured random matrices; as well as \cite[Proposition~3.1]{Cook circ} dealing with adjacency matrices of random $d$--regular digraphs. The second approach consists in identifying a class of non-random matrices $\mathcal{C}$ such that for every $M\in\mathcal{C}$ and every almost sparse vector $z\in S^{n-1}$, $Mz$ has a non-negligible Euclidean norm, and then showing that with probability close to one, $A\in\mathcal{C}$. As an example, consider a collection of matrices $M$ such that for every non-empty subset $I\subset[n]$ with $|I|\leq m$, there is a row $\Row_i(M)$ with $|\supp\Row_i(M)\cap I|=1$. Then, it is not difficult to check that for every non-zero $m$--sparse vector $z$ one has $Mz\neq 0$. It can further be verified that a random matrix $A$ with i.i.d Bernoulli($p$) elements and $n^{-1}\textrm{polylog}(n)\leq p\leq c m^{-1}$ belongs to this class with probability tending to one as $n\to\infty$ \cite{BR16}. The construction can be made robust to treat almost sparse vectors, and can be further elaborated to deal with diagonal shifts of very sparse matrices \cite{BR16,LLTTY sing,RT circ}.
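The following small sketch illustrates the second approach on a toy scale (the dimension, the sparsity level and the parameter $m$ are arbitrary choices of ours; the brute-force check over all small subsets is feasible only for very small $m$):
\begin{verbatim}
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n, m, p = 25, 3, 0.25
M = (rng.random((n, n)) < p).astype(int)     # sparse Bernoulli(p) 0/1 matrix

def hits_every_small_set_once(M, m):
    """Check: every non-empty I with |I| <= m meets the support of some row in exactly one column."""
    n = M.shape[0]
    for k in range(1, m + 1):
        for I in combinations(range(n), k):
            if not np.any(M[:, list(I)].sum(axis=1) == 1):
                return False
    return True

ok = hits_every_small_set_once(M, m)
print("property holds:", ok)

if ok:
    # Consequence: Mz != 0 for every non-zero m-sparse vector z (a random example)
    z = np.zeros(n)
    z[rng.choice(n, size=m, replace=False)] = rng.standard_normal(m)
    assert np.any(M @ z != 0)
\end{verbatim}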
\smallskip \textbf{Invertibility via distance.} The relation \eqref{310984-198} discovered in \cite{RV08a} can be applied to any model of randomness. However, this relation is not completely satisfactory when either (a) there are strong probabilistic dependencies between $\Col_i(A)$ and $H_i(A)$ which make estimating $\Prob\big\{\dist(\Col_i(A),H_i(A))\leq t\big\}$ challenging, or (b) invertibility over the almost sparse vectors cannot be treated with a desired precision using approaches based on $\varepsilon$--net arguments or on conditioning on a particular structure of the matrix. Here, we consider some developments of the invertibility via distance argument made in the contexts of $d$-regular random digraphs and smoothed analysis of the condition number. Let $A_{n,d}$ be the adjacency matrix of a uniform random $d$-regular directed graph on $n$ vertices. The regularity condition implies that for every $1\leq i\leq n$, $\Col_i(A_{n,d})$ is a function of $\{\Col_j(A_{n,d})\}_{j\neq i}$, creating issues with applying the original version of the argument from \cite{RV08a}. In \cite{C14}, Cook proposed a modification of the argument based on considering distances between the matrix columns and random subspaces of the form $H_{i_1,i_2,+}(A_{n,d}):= \Span\{\Col_j(A_{n,d}),\,j\neq i_1,i_2;\;\Col_{i_1}(A_{n,d}) +\Col_{i_2}(A_{n,d})\}$, for $i_1\neq i_2$. This modification was later applied in \cite{Cook circ,LLTTY sing}. Here, we quote \cite[Lemma~4.2]{LLTTY sing}: denoting by $S(\rho,\delta)$ the collection of all unit vectors $x$ in $\Cp^n$ with $\inf\limits_{\lambda\in\Cp} |\{i\leq n:\,|x_i-\lambda|>\rho\,n^{-1/2}\}|> \delta n$, one has \begin{align*} \Prob&\Big\{\inf_{x\in S(\rho,\delta)}\lVert (A_{n,d}-\,z\Id)x \rVert_2\leq t\,n^{-1/2}\Big\}\\ &\leq \frac{1}{\delta\,n^2} \sum_{\substack{i_1,i_2\in [n],\\ i_1\neq i_2}}\Prob\big\{ \dist(\Col_{i_1}(A_{n,d}-\,z\Id),H_{i_1,i_2,+}(A_{n,d}-\,z\Id))\leq t/\rho\big\}. \end{align*} Conditioned on a realization of $\Col_j(A_{n,d})$, $j\neq i_1,i_2$ (hence, also $Y:=\Col_{i_1}(A_{n,d}) +\Col_{i_2}(A_{n,d})$), the support of the $i_1$-th column of $A_{n,d}$ is uniformly distributed on the collection of $d$-subsets $Q$ satisfying $\{j\leq n:\,Y_j=2\} \subset Q\subset \supp Y$. In the regime $d\to\infty$ as $n$ goes to infinity, this is ``enough randomness'' for a satisfactory bound on $s_{\min}(A_{n,d}-\,z\Id)$ required by the Hermitization argument \cite{Cook circ,LLTTY sing}. We remark here that another version of the argument for matrices with dependencies based on evaluation of certain quadratic forms, introduced in \cite{V sym}, has been used in a non-Hermitian setting in \cite{RV unit} to estimate the smallest singular value of unitary and orthogonal perturbations of fixed matrices. We refer to \cite{RV unit} for details. In \cite{Tikh shifted}, a variant of the invertibility via distance argument was developed to deal with non-random shifts of matrices with continuous distributions. A main observation of \cite{Tikh shifted} is that the distances $\dist(\Col_i(A),H_i(A))$, $1\leq i\leq n$, are highly correlated, which allows for a more efficient analysis than the first moment method estimate \eqref{310984-198}. The invertibility via distance argument is applied in \cite{Tikh shifted} to the entire sphere rather than the set of spread vectors. As an illustration of the principle, we consider a simpler setting of centered random matrices where the argument is still able to produce new results.
Assuming $A$ is an $n\times n$ real random matrix with i.i.d entries of zero mean, unit variance, and the distribution density bounded above by $\rho$, for every $t>0$ and $1\leq k\leq n$ one has $ \Prob\big\{\exists\,I\subset[n]:\; |I|\geq k,\;\dist(\Col_i(A),H_i(A))\leq t\;\;\forall\,i\in I\big\} \leq C_\rho\,t\,(n/k)^{5/11} $, where $C_\rho>0$ may only depend on $\rho$ (see \cite[Prop.~3.8]{Tikh shifted}). This, combined with the following simple consequence of the \textit{negative second moment identity} \[ s_{\min}(A)\geq \Big(\sum_{i=1}^n\dist(\Col_i(A),H_i(A))^{-2}\Big)^{-1/2}, \] implies an estimate $\Prob\{s_{\min}(A)\leq t\,n^{-1/2}\}\leq C_\rho'\,t$, $t>0$, which does not carry the $c^n$ additive term inevitable when an $\varepsilon$-net--based approach is used. We refer to \cite{Tikh shifted} for the more involved setting of non-centered random matrices. \smallskip \textbf{Alternatives to the $\LCD$.} Functions of coefficient vectors different from the Essential Least Common Denominator have been introduced in the literature to deal with anti-concentration in the context of sparse and inhomogeneous random matrices as well as matrices with dependencies. Here, we review some of them (for non-Hermitian models only). The original notion of $\LCD$ is not applicable to the study of linear combinations of non-identically distributed variables: in fact, given any vector $a\in S^{n-1}(\R)$ with an exponentially large $\LCD$, one can easily construct mutually independent variables $Z_1,\dots,Z_n$ with $\cf(Z_i,1)\leq 1/2$, $1\leq i\leq n$, and such that $\cf\big(\sum_{i=1}^n a_i Z_i,0\big)=\Omega(n^{-1/2})$. Given a random vector $X$ in $\R^n$ and denoting by $\bar X$ the difference $X-X'$ (where $X'$ is an independent copy of $X$), the \textit{Randomized Least Common Denominator} with respect to $X$ is defined as \[ \RLCD^X(a):=\inf\big\{\theta>0:\; \Exp\,\dist^2\big((\theta a_1 \bar X_1,\dots,\theta a_n \bar X_n),\Z^n\big)< \min(\gamma\lVert \theta a\rVert_2^2,\alpha n)\big\},\; a\in\R^n. \] The notion was introduced in \cite{LTV} to deal with inhomogeneous random matrices with different entry distributions. The small ball probability inequality \eqref{3u32-4u4poiu} from \cite{RV08a,RV rect} extends to the non-i.i.d setting with the $\RLCD$ taking the place of the original notion. We refer to \cite{LTV} for details. Strong quantitative invertibility results for matrices with fixed row sums and adjacency matrices of $d$--regular digraphs obtained recently in \cite{Tran} and \cite{JSS reg}, respectively, rely on a modification of the $\LCD$ which makes it possible to treat linear combinations of Bernoulli variables conditioned on their sum. Specifically, in \cite{Tran} the notion of the \textit{Combinatorial Least Common Denominator} $\CLCD$ is defined as \[ \CLCD(a):=\inf\big\{\theta>0:\;\dist(\theta (a_i-a_j)_{i< j},\Z^{n\choose 2})<\min(\gamma\lVert \theta (a_i-a_j)_{i< j} \rVert_2,\alpha n)\big\},\;a\in\R^n, \] where $(a_i-a_j)_{i<j}$ denotes a vector in $\R^{n\choose 2}$ with $(i,j)$--th coordinate equal to $a_i-a_j$, $1\leq i<j\leq n$. It is further shown that for the random vector $(Z_1,Z_2,\dots,Z_n)$ uniformly distributed on the collection of $0/1$ vectors with exactly $n/2$ ones, an analog of the anti-concentration inequality \eqref{3u32-4u4poiu} holds, with $\LCD$ replaced with $\CLCD$. A modification of the notion, called $\QCLCD$, was further considered in \cite{JSS reg}. We refer to that paper for details.
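To make the role of such arithmetic-structure functionals more concrete, here is a small numerical sketch for the original $\LCD$ (the parameters $\gamma,\alpha$, the scan range and the test vectors are arbitrary choices of ours; the grid scan only detects an LCD within its range, so a returned value of None simply means that no small denominator was found, and the Monte Carlo estimate of the concentration function is evaluated at $\lambda=0$ only):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(8)
n, gamma, alpha = 30, 0.1, 0.1

def lcd_upper_scan(a, theta_max=50.0, step=0.01):
    """Grid scan for the essential LCD: smallest theta on the grid with
    dist(theta*a, Z^n) < min(gamma*||theta*a||_2, alpha*sqrt(n)), or None."""
    for theta in np.arange(step, theta_max, step):
        y = theta * a
        d = np.linalg.norm(y - np.round(y))
        if d < min(gamma * np.linalg.norm(y), alpha * np.sqrt(len(a))):
            return theta
    return None

def concentration_at_zero(a, t, samples=200000):
    """Monte Carlo estimate of P{|sum a_i Z_i| <= t} for i.i.d. +/-1 signs Z_i."""
    Z = rng.choice([-1.0, 1.0], size=(samples, len(a)))
    return np.mean(np.abs(Z @ a) <= t)

a_struct = np.ones(n) / np.sqrt(n)                 # highly structured: LCD of order sqrt(n)
a_generic = rng.standard_normal(n)
a_generic /= np.linalg.norm(a_generic)

for name, a in (("structured", a_struct), ("generic", a_generic)):
    print(name, "LCD scan:", lcd_upper_scan(a), " Q(a,0.01) ~", concentration_at_zero(a, 0.01))
\end{verbatim}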
Another functional --- \textit{the degree of unstructuredness} $\UD$ --- was introduced in \cite{LT2020} to study invertibility of sparse Bernoulli random matrices. A main observation exploited in \cite{LT2020} is that, for $p=o(1)$, linear combinations of i.i.d Bernoulli($p$) random variables $\sum_{i=1}^n a_i Z_i$ are often more concentrated than corresponding linear combinations of \textit{dependent} $0/1$ variables conditioned to sum to a fixed number of order $\Theta(pn)$. In \cite{LT2020}, the argument proceeds by conditioning on the size of the support of a column of the matrix and estimating anti-concentration of $\dist(\Col_i(A),H_i(A))=|\langle\Col_i(A),Y_i(A) \rangle|$ in terms of the degree of unstructuredness of the unit random normal $Y_i(A)$. The definition of $\UD$ is technically involved, and we do not provide it here; see \cite{LT2020} for details. \smallskip \textbf{Average-case analysis of anti-concentration.} The average-case study of Littlewood--Offord--type inequalities for linear combinations $\sum_{i=1}^n a_i Z_i$, introduced in the random matrix context in \cite{Tikh2020}, was a crucial element in some recent advances on quantitative invertibility of random discrete matrices \cite{Tikh2020,JSS2020a,JSS2020b}, which helped resolve some long-standing problems in the combinatorial random matrix theory. A main idea of \cite{Tikh2020} is, rather than attempting to obtain an explicit description of vectors $a$ such that $\sum_{i=1}^n a_i Z_i$ is strongly anti-concentrated, to consider the linear combination for a \textit{randomly chosen} coefficient vector (with an appropriately defined notion of randomness). This approach made it possible to strengthen the invertibility results available through the use of the $\LCD$. As an example, we consider a simplified version of the main technical result of \cite{Tikh2020}. Let $\varepsilon\in(0,1/2)$, $M\geq 1$. Then there exist $n_{0}=n_{0}(\varepsilon,M)$ depending on $\varepsilon,M$ and $L_{0}= L_{0}(\varepsilon)>0$ depending \textit{only} on $\varepsilon$ (and not on $M$) with the following property. Take $n\geq n_{0}$, $1\leq N\leq (1/2+\varepsilon)^{-n}$, and let $\mathcal A:=(\{-2N,\dots,-N-1\}\cup\{N+1,\dots,2N\})^n$. Assume that a random vector $a=(a_1,\dots,a_n)$ is uniformly distributed on $\mathcal A$. Then \[ \Prob_a\big\{\cf_Z \big(a_1 Z_1+\dots+a_n Z_n,\sqrt{n}\big) > L_{0} N^{-1} \big\}\leq e^{-M\,n}. \] Here, $\cf_Z(\cdot,\cdot)$ denotes the L\'evy concentration function with respect to the randomness of $(Z_1,\dots,Z_n)$, a vector with independent $\pm 1$ components. The main point of the statement is that the parameter $L_0$ controlling the anti-concentration of the linear combination does not depend on $M$, i.e.\ the proportion of the coefficient vectors in $\mathcal A$ for which the anti-concentration of $a_1 Z_1+\dots+a_n Z_n$ is weak becomes \textit{superexponentially small} in $n$ as $n\to\infty$. \smallskip \textbf{Matrices with heavy entries.} For invertibility of (dense) random matrices with independent entries assuming only finite second moments, we refer to \cite{RT15,Livshyts,LTV}. \section{Open problems}\label{s:open} We conclude this survey with a selection of open research problems. \textbf{Refined smoothed analysis of invertibility.} Recall that a standard model in the setting of the smoothed analysis of the condition number is of the form $A+M$, where $A$ is an $n\times n$ random matrix with i.i.d entries, and $M$ is a non-random shift.
\begin{problem}[Shift-independent estimates for matrices with continuous distributions] Let $\xi$ be a real random variable of zero mean, unit variance, and bounded distribution density. Let $A$ be an $n\times n$ matrix with i.i.d entries equidistributed with $\xi$. Is it true that for every non-random matrix $M$, \[ \Prob\big\{s_{\min}(A+M)\leq t\,n^{-1/2}\big\}\leq Ct,\quad t>0, \] where $C>0$ may only depend on the c.d.f of $\xi$ (and not on $n$)? \end{problem} For partial results on the above problem, see \cite{SST06,Tikh shifted}. \smallskip \begin{problem}[Optimal dependence of $s_{\min}(A+M)$ on the norm of the shift in the discrete setting] Let $A$ be an $n\times n$ matrix with i.i.d $\pm1$ entries, and let $T,t>0$ be parameters. For any $\varepsilon,L>0$, estimate $ \sup_{M:\,\lVert M\rVert\leq T}\Prob\big\{s_{\min}(A+M)\leq t\big\} $ up to a multiplicative error $O(n^{\varepsilon})$ and an additive error $O(n^{-L})$, that is, find an explicit function $f(n,T,t)$ such that \[ n^{-\varepsilon}\,f(n,T,t)-C\,n^{-L}\leq \sup_{M:\,\lVert M\rVert\leq T}\Prob\big\{s_{\min}(A+M)\leq t\big\}\leq n^{\varepsilon}\,f(n,T,t)+C\,n^{-L}, \] where $C>0$ may only depend on $\varepsilon$ and $L$. \end{problem} For best known partial results on the above problem, see \cite{TV10c,JSS2020 smooth}. \smallskip \begin{problem}[Dependence of $s_{\min}(A+M)$ on $M$ in the Gaussian setting] Let $A$ be an $n\times n$ matrix with i.i.d standard real Gaussian entries. Find an estimate on $\Exp\,s_{\min}(A+M)$ in terms of the singular spectrum of $M$. \end{problem} One can assume in the above problem that $M$ is a diagonal matrix with $i$-th diagonal element $s_i(M)$, $1\leq i\leq n$. Note that $A$ may either improve or degrade invertibility of $M$. \medskip \textbf{Invertibility and spectrum of very sparse matrices.} Here, we consider the problem of identifying the limiting spectral distribution for non-Hermitian matrices with \textit{constant} average number of non-zero elements in a row/column. \begin{problem}[{The oriented Kesten--McKay law; see \cite[Section~7]{BC2011}}] Let $d\geq 3$. For each $n$, let $A_{n,d}$ be the adjacency matrix of a uniform random $d$--regular directed graph on $n$ vertices. Prove that the sequence of spectral distributions $\big(\mu_{A_{n,d}}\big)_{n=1}^\infty$ converges weakly to the probability measure on $\Cp$ with the density function \[ \rho_d(z):=\frac{1}{\pi}\frac{d^2(d-1)}{(d^2-|z|^2)^2}\, \textbf{1}_{\{|z|<\sqrt{d}\}}. \] \end{problem} Assuming the standard Hermitization approach to the above problem, the following is the crucial (perhaps the main) step of the argument: \begin{problem} Let $d\geq 3$ and let $(A_{n,d})_{n=1}^\infty$ be as above. Prove that for almost every $z\in\Cp$ and every $\varepsilon>0$, \[ \lim_{n\to\infty}\Prob\big\{s_{\min}(A_{n,d}-z\,\Id)\leq \exp(-\varepsilon\,n)\big\}=0. \] \end{problem} \smallskip \begin{problem}[Spectrum of directed Erdos--Renyi graphs of constant average degree] Let $\alpha>0$. For each $n\geq \alpha$, let $A_n$ be an $n\times n$ random matrix with i.i.d Bernoulli($\alpha/n$) entries. Does the sequence of spectral distributions $\big(\mu_{A_n}\big)$ converge weakly to a non-random probability measure? \end{problem} As in the case of regular digraphs, assuming the Hermitization argument, the following problem constitutes an important step to understanding asymptotics of the spectrum: \begin{problem} For each $n\geq \alpha$, let $A_n$ be an $n\times n$ random matrix with i.i.d Bernoulli($\alpha/n$) entries.
Is it true that for almost every $z\in\Cp$ and every $\varepsilon>0$, \[ \lim_{n\to\infty}\Prob\big\{s_{\min}(A_n-z\,\Id)\leq \exp(-\varepsilon\,n)\big\}=0? \] \end{problem} \medskip \textbf{Invertibility and spectrum of structured random matrices.} The spectrum of structured random matrices in the absence of expansion-like properties (such as \textit{broad connectivity} \cite{RZ2016} or \textit{robust irreducibility} \cite{Cooketal1,Cooketal2}) is not well understood as of now. In particular, a full description of the class of inhomogeneous matrices with independent entries with spectral convergence to the circular law seems to be out of reach of modern methods. \begin{problem} Give a complete description of sequences of standard deviation profiles $(U_n)_{n=1}^\infty$ satisfying the following condition: assuming that $\xi$ is any random variable of zero mean and unit variance, and that for each $n$, $M_n$ is an $n\times n$ matrix with i.i.d entries equidistributed with $\xi$, the sequence of spectral distributions $\big(\mu_{U_n\odot M_n}\big)$ converges weakly in probability to the uniform measure on the unit disc of $\Cp$. \end{problem} A natural class of profiles considered, in particular, in \cite{Cooketal1,Cooketal2}, are \textit{doubly stochastic} profiles. One may expect that those profile sequences, under some weak assumption on the magnitude of the maximal entry, should be sufficient for the circular law to hold: \begin{problem} Assume that for each $n$, the standard deviation profile $U_n$ satisfies \[ \sum_{i=1}^n (U_n)_{ij}^2=\sum_{i=1}^n (U_n)_{ji}^2=1,\quad 1\leq j\leq n, \] and that for some $\varepsilon>0$, $\limsup_n\max_{ij}((U_n)_{ij}n^\varepsilon)=0$. Is it true that, with $M_n$ as in the above problem, the sequence $\big(\mu_{U_n\odot M_n}\big)$ converges weakly in probability to the uniform measure on the unit disc of $\Cp$? \end{problem} Note that the above setting allows sparse matrices (cf. \cite[Theorem~2.4]{Cooketal1}). Solution to the above problem, if approached with Girko's Hermitization procedure, requires satisfactory bounds on the smallest singular values of $U_n\odot M_n-z\,\Id$.
https://arxiv.org/abs/1506.00928
A Curved Brunn-Minkowski Inequality for the Symmetric Group
In this paper, we construct an injection $A \times B \rightarrow M \times M$ from the product of any two nonempty subsets of the symmetric group into the square of their midpoint set, where the metric is that corresponding to the conjugacy class of transpositions. If $A$ and $B$ are disjoint, our construction allows us to inject two copies of $A \times B$ into $M \times M$. These injections imply a positively curved Brunn-Minkowski inequality for the symmetric group analogous to that obtained by Ollivier and Villani for the hypercube. However, while Ollivier and Villani's inequality is optimal, we believe that the curvature term in our inequality can be improved. We identify a hypothetical concentration inequality in the symmetric group and prove that it yields an optimally curved Brunn-Minkowski inequality.
\section{Introduction} The classical Brunn-Minkowski inequality may be formulated as follows: given two compact nonempty sets $A,B \subset \mathbb{R}^n$, one has \begin{equation*} \label{ineq:ClassicalBM} \log |M_t| \geq (1-t) \log |A| + t \log |B| \end{equation*} \noindent for any $0 \leq t \leq 1$, where \begin{equation*} M_t = \{ (1-t)a + tb : a \in A,\ b \in B\} \end{equation*} \noindent is the set of $t$-midpoints of $A$ and $B$, and $|\cdot|$ is Lebesgue measure. If $\mathbb{R}^n$ is replaced by a smooth complete Riemannian manifold with positive Ricci curvature bounded below by $K > 0$, the Brunn-Minkowski inequality can be strengthened to \begin{equation} \label{eqn:CurvedBM} \log |M_t| \geq (1-t) \log |A| + t \log |B| + \frac{K}{2}t(1-t)\mathrm{d}(A,B)^2, \end{equation} \noindent where $\mathrm{d}$ is the Hausdorff distance and $M_t$ the set of points of the form $\gamma(t)$ with $\gamma$ a geodesic in $X$ such that $\gamma(0) \in A$ and $\gamma(1) \in B$, see \cite{OV} and references therein. The heuristic here is that midpoint sets are larger in positively curved space than in flat space, and the degree of this distortion is controlled by Ricci curvature. Y. Ollivier and C. Villani \cite{OV} have shown that a curved Brunn-Minkowski inequality analogous to \eqref{eqn:CurvedBM} holds for the discrete hypercube $\mathbb{Z}_2^N$ equipped with the Hamming distance. While the definition of $t$-midpoints in discrete space is somewhat messy, in the case $t=\frac{1}{2}$, at least, it is reasonable to define the midpoint set $M=M_{1/2}$ of $A,B \subseteq \mathbb{Z}_2^N$ to be the collection of $m \in \mathbb{Z}_2^N$ which satisfy $\mathrm{d}(a,m) + \mathrm{d}(m,b) = \mathrm{d}(a,b)$ and $\mathrm{d}(a,m) = \mathrm{d}(m,b) + \varepsilon$, with $\varepsilon \in \{-1,0,1\}$, for some $(a,b) \in A \times B$. Adopting these definitions, Ollivier and Villani proved the following curved Brunn-Minkowski inequality for the hypercube \cite[Theorem 1]{OV}. \begin{thm} \label{thm:OV} For any nonempty sets $A,B \subseteq \mathbb{Z}_2^N$, \begin{equation*} \log |M| \geq \frac{1}{2} \log |A| + \frac{1}{2} \log |B| + \frac{K}{8} \mathrm{d}(A,B)^2, \end{equation*} \noindent where $K=\frac{1}{2N}$. \end{thm} \noindent Ollivier and Villani moreover verify that the dependence of $K$ on $N$ in their result is optimal. As discussed in \cite{OV}, this result supports the statement that the ``discrete Ricci curvature'' of $\mathbb{Z}_2^N$ is of order $N^{-1}$. For any $n \geq 2N$, there is an injective group homomorphism \begin{equation*} \mathbb{Z}_2^N \longrightarrow S(n) \end{equation*} \noindent from the $N$-dimensional hypercube into the symmetric group of rank $n$ determined by \begin{equation*} \begin{split} e_1 &\mapsto (1 \mapsto 2) \\ e_2 &\mapsto (3 \mapsto 4) \\ &\vdots \\ e_N &\mapsto (2N-1 \mapsto 2N), \end{split} \end{equation*} \noindent where $e_i \in \mathbb{Z}_2^N$ is the bitstring in which all bits are zero except the $i$th bit, and $(2i-1 \mapsto 2i) \in S(n)$ is the transposition which swaps $2i$ and $2i-1$. If one equips $S(n)$ with the metric induced by the word norm corresponding to the conjugacy class of transpositions, this injection is an isometric embedding of $\mathbb{Z}_2^N$ in $S(n)$. It is thus natural to seek an extension of Theorem \ref{thm:OV} to the symmetric group, viewed as a metric space in this way. In this paper, we prove the following curved Brunn-Minkowski inequality for $S(n)$. 
\begin{thm} \label{thm:Main} For any nonempty sets $A,B \subseteq S(n)$, \begin{equation*} \log |M| \geq \frac{1}{2} \log |A| + \frac{1}{2} \log |B| + \frac{K}{8} \mathrm{d}(A,B)^2, \end{equation*} \noindent where $K=\frac{4 \log 2}{(n-1)^2}$. \end{thm} The Brunn-Minkowski inequality presented in Theorem \ref{thm:Main} is only slightly curved, and we believe that Theorem \ref{thm:Main} in fact holds with $K=\frac{c}{n-1}$, $c$ a positive constant. Although we do not prove this, we identify a hypothetical concentration inequality in the symmetric group which generalizes the hypercube concentration inequality of Ollivier and Villani \cite[Corollary 6]{OV}, and demonstrate that it implies an optimally curved Brunn-Minkowski inequality for the symmetric group. \subsubsection*{Acknowledgement} We thank Prasad Tetali for introducing us to the subject of discrete curvature, and suggesting the problem of lifting the curved Brunn-Minkowski inequality from the hypercube to the symmetric group. We are grateful to Professor Tetali for communicating to us a set of notes \cite{Tetali} on this topic compiled by a group of researchers following an AIM workshop on discrete curvature, and for pointing out the papers \cite{EMT,KKRT}. We thank Mike Lacroix for creating Figure \ref{fig:Cayley}. \section{Symmetric Group Basics} In this section we fix basic notation and terminology concerning the symmetric group $S(n)$. We identify $S(n)$ with its right Cayley graph as generated by the conjugacy class of transpositions. Thus $a,b,c,\dots \in S(n)$ are the vertices of our graph, and $\{a,b\}$ is an edge if and only if $a^{-1}b$ fixes all but two points of $\{1,\dots,n\}$. In this way $S(n)$ becomes a \emph{graded graph}: it decomposes as the disjoint union \begin{equation*} S(n) = \bigsqcup_{r=0}^{n-1} L_r, \end{equation*} \noindent where $L_r$ is the set of permutations which factor into exactly $n-r$ disjoint cycles; each $L_r$ is an independent set; and, finally, there exists an edge between $L_r$ and $L_{r'}$ if and only if $|r-r'|=1$. Figure \ref{fig:Cayley} shows the case $n=4$. \begin{figure} \includegraphics{CayleyS4-MonoGrey.pdf} \caption{\label{fig:Cayley} The Cayley graph of $S(4)$.} \end{figure} Each level $L_r$ of $S(n)$ further decomposes as the disjoint union \begin{equation*} L_r = \bigsqcup_{\substack{\lambda \vdash n\\ \ell(\lambda)=n-r} } C_\lambda, \end{equation*} \noindent where the union is over partitions $\lambda$ of $n$ with $n-r$ parts, and $C_\lambda$ is the set of permutations with cycle type $\lambda$. The sets $C_\lambda$ are the conjugacy classes of $S(n)$. In this paper we make use of a decomposition of $S(n)$ which is finer than the usual decomposition into conjugacy classes. Given $p \in S(n)$, factor $p$ into disjoint cycles, and present each cycle so that its leftmost element is its minimal element. That is, each cycle of $p$ is presented in the form \begin{equation*} (i_1 \mapsto i_2 \mapsto i_3 \mapsto \dots), \end{equation*} \noindent where $i_1 < \min\{i_2,i_3,\dots\}$. Next, list the cycles of $p$ from left to right in increasing order of their minimal elements. Thus $p$ is presented in the form \begin{equation*} p = (i_1 \mapsto i_2 \mapsto i_3 \mapsto \dots) (j_1 \mapsto j_2 \mapsto j_3 \mapsto \dots) (k_1 \mapsto k_2 \mapsto k_3 \mapsto \dots) \dots, \end{equation*} \noindent where $i_1 < j_1 < k_1 < \dots$. We call this the \emph{ordered cycle factorization} of $p$. Figure \ref{fig:Cayley} displays the elements of $S(4)$ using their ordered cycle factorizations. 
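The ordered cycle factorization is straightforward to compute. The following is a small illustrative sketch of ours (using $0$-based labels $0,\dots,n-1$ in place of $1,\dots,n$, and representing a permutation $p$ in one-line notation as the list of images $p(0),\dots,p(n-1)$):
\begin{verbatim}
def ordered_cycle_factorization(p):
    """Cycles of p, each starting at its minimal element, listed by increasing minima."""
    n = len(p)
    seen, cycles = set(), []
    for start in range(n):          # iterate in increasing order of cycle minima
        if start in seen:
            continue
        cycle, j = [start], p[start]
        seen.add(start)
        while j != start:
            cycle.append(j)
            seen.add(j)
            j = p[j]
        cycles.append(tuple(cycle))
    return cycles

# Example in S(4): the permutation (1 -> 3)(2 -> 4), written in 0-based labels
p = [2, 3, 0, 1]
print(ordered_cycle_factorization(p))                     # [(0, 2), (1, 3)]
print([len(c) for c in ordered_cycle_factorization(p)])   # ordered cycle type (2, 2)
\end{verbatim}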
We refer to the vector \begin{equation*} (i_1,j_1,k_1,\dots) \end{equation*} \noindent as the \emph{sequence of cycle minima} of $p$. The vector \begin{equation*} (\mu_1,\mu_2,\mu_3,\dots) \end{equation*} \noindent of cycle lengths \begin{equation*} p = (\underbrace{i_1 \mapsto i_2 \mapsto i_3 \mapsto \dots}_{\text{length }\mu_1}) (\underbrace{j_1 \mapsto j_2 \mapsto j_3 \mapsto \dots}_{\text{length }\mu_2}) (\underbrace{k_1 \mapsto k_2 \mapsto k_3 \mapsto \dots}_{\text{length }\mu_3}) \dots, \end{equation*} \noindent in the ordered cycle factorization of $p$ is a composition of $n$ which we call the \emph{ordered cycle type} of $p$. We denote by $\ell(\mu)$ the number of parts of $\mu$, so that $r=n-\ell(\mu)$ if $p \in L_r$. Given two permutations $p,p'$ of the same ordered cycle type, there is a \emph{unique} permutation $u$ which both conjugates $p$ into $p'$ and transforms the sequence of cycle minima of $p$ into that of $p'$. Given a composition $\mu \vDash n$, we denote by $\vec{C}_\mu$ the set of all permutations whose ordered cycle type is $\mu$. Then each conjugacy class $C_\lambda$ in $S(n)$ decomposes as the disjoint union \begin{equation*} C_\lambda = \bigsqcup_\mu \vec{C}_\mu, \end{equation*} \noindent where $\mu$ ranges over all compositions obtained by permuting the parts of $\lambda$. For the symmetric group $S(4)$, the successive decompositions we have discussed are \begin{equation*} \begin{split} S(4) &= L_0 \sqcup L_1 \sqcup L_2 \sqcup L_3 \\ & = C_{(1,1,1,1)} \sqcup C_{(2,1,1)} \sqcup C_{(3,1)} \sqcup C_{(2,2)} \sqcup C_{(4)} \\ & = \vec{C}_{(1,1,1,1)} \sqcup \vec{C}_{(2,1,1)} \sqcup \vec{C}_{(1,2,1)} \sqcup \vec{C}_{(1,1,2)} \sqcup \vec{C}_{(3,1)} \sqcup \vec{C}_{(1,3)} \sqcup \vec{C}_{(2,2)} \sqcup \vec{C}_{(4)}. \end{split} \end{equation*} \noindent Each class $\vec{C}_\mu$ contains a canonical permutation $p_\mu$, which acts by cyclically permuting the first $\mu_1$ positive integers in the canonical way, cyclically permuting the next $\mu_2$ positive integers in the canonical way, and so on. Given $p \in \vec{C}_\mu$, we denote by $u_p \in S(n)$ the unique permutation which both conjugates $p_\mu$ to $p$ and transforms the sequence of cycle minima of $p_\mu$ into that of $p$. We equip $S(n)$ with the graph theory distance $\mathrm{d}$. Thus level $L_r$ in the Cayley graph coincides with the sphere of radius $r$ centred at the identity permutation $e \in S(n)$. The following properties of $\mathrm{d}$ are easily checked: \begin{equation*} \begin{split} \mathrm{d}(a,b) &=\mathrm{d}(pap^{-1},pbp^{-1})\\ &=\mathrm{d}(ab^{-1},e) \\ &=\mathrm{d}(e,a^{-1}b) \\ &=\mathrm{d}(ap,bp)\\ &= \mathrm{d}(pa,pb). \end{split} \end{equation*} \noindent In particular, the diameter of the Cayley graph is \begin{equation*} \label{eqn:Diameter} \max \{\mathrm{d}(a,b) : a,b \in S(n)\} = \max \{\mathrm{d}(e,p) : p \in S(n)\} =n-1. \end{equation*} We have already mentioned the fact that the set of permutations which lie on a geodesic path from the identity permutation $e$ to an involution $v$ is isometrically isomorphic to a hypercube whose dimension is half the size of the support of $v$. We will also make use of the fact that a permutation lies on a geodesic path from $e$ to a forward cycle $f$ if and only if it is a product of forward cycles which together induce a noncrossing partition of the support of $f$. A proof of this folklore result may be found in \cite[Lecture 23]{NS}. 
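For small $n$, these notions are easy to experiment with directly. Below is a minimal sketch (our own helper functions, with permutations in one-line notation on $\{0,\dots,n-1\}$) computing the word metric $\mathrm{d}$ via the identity $\mathrm{d}(a,b)=n-(\text{number of disjoint cycles of }a^{-1}b)$, which is implicit in the grading of the Cayley graph described above, together with a brute-force enumeration of midpoint sets:
\begin{verbatim}
from itertools import permutations

def compose(a, b):                 # (a o b)(i) = a[b[i]]: apply b first, then a
    return tuple(a[b[i]] for i in range(len(b)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def num_cycles(p):
    seen, count = set(), 0
    for i in range(len(p)):
        if i not in seen:
            count += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j]
    return count

def dist(a, b):
    """Word metric for the transposition generators: d(a,b) = n - #cycles(a^{-1}b)."""
    return len(a) - num_cycles(compose(inverse(a), b))

def midpoints(a, b):
    """Brute force: all m with d(a,m)+d(m,b)=d(a,b) and |d(a,m)-d(m,b)| <= 1."""
    return [m for m in permutations(range(len(a)))
            if dist(a, m) + dist(m, b) == dist(a, b) and abs(dist(a, m) - dist(m, b)) <= 1]

e = (0, 1, 2, 3)                   # identity in S(4)
f = (1, 2, 3, 0)                   # the forward 4-cycle 0 -> 1 -> 2 -> 3 -> 0
print(dist(e, f))                  # 3 = n - 1, the diameter of the Cayley graph
print(len(midpoints(e, f)))        # number of midpoints of e and the full cycle

# Bi-invariance of d, checked on an arbitrary triple
a, b, p = (2, 0, 1, 3), (3, 2, 0, 1), (1, 3, 2, 0)
assert dist(a, b) == dist(compose(p, a), compose(p, b)) == dist(compose(a, p), compose(b, p))
\end{verbatim}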
\section{Midpoint Calculus} In this section, we generalize the encoding/decoding formalism of Ollivier and Villani from the hypercube to the symmetric group. Where possible, we try to be consistent with the notation and terminology of \cite{OV}. \subsection{Crossovers} Let $(a,b) \in S(n) \times S(n)$ be a pair of permutations, and let $M(a,b)$ be the corresponding midpoint set. Our first observation is that $M(a,b)$ is the isometric image of a ``standard'' set of midpoints. More precisely, let $\mu$ be the ordered cycle type of $a^{-1}b$, and let $\Cr(\mu)$ denote the set of midpoints of $e$, the identity permutation, and $p_\mu$, the canonical permutation of ordered cycle type $\mu$. Adopting the terminology of \cite{OV}, we call the elements of $\Cr(\mu)$ \emph{crossovers}, or $\mu$-\emph{crossovers} to be precise. Consider the function \begin{equation*} S(n) \longrightarrow S(n) \end{equation*} \noindent defined by \begin{equation*} x \mapsto a u_{a^{-1}b} x u_{a^{-1}b}^{-1}, \end{equation*} \noindent where $u_{a^{-1}b}$ is the unique permutation which conjugates $p_\mu$ to $a^{-1}b$ and transforms the sequence of cycle minima of $p_\mu$ into that of $a^{-1}b$. This function is an isometry, being composed of rotation by $u_{a^{-1}b}$ followed by translation by $a$. Moreover, under this mapping \begin{equation*} e \mapsto a, \quad p_\mu \mapsto b. \end{equation*} \noindent Thus the mapping restricts to a bijection \begin{equation*} \Cr(\mu) \longrightarrow M(a,b). \end{equation*} \noindent We write \begin{equation*} \varphi_c(a,b) = au_{a^{-1}b}cu_{a^{-1}b}^{-1}, \end{equation*} \noindent and view \begin{equation*} \varphi_c(a,b), \quad c \in \Cr(\mu) \end{equation*} \noindent as a parameterization of the locus $M(a,b)$ by a ``standard'' set of midpoints. Following the terminology of \cite{OV}, we call $\varphi_c(a,b) \in M(a,b)$ the midpoint of $a$ and $b$ \emph{encoded} by the crossover $c \in \Cr(\mu)$. \subsection{Duality} There is a natural geometric operation on crossovers: we view a crossover as the lower half of a geodesic path from the identity to the canonical permutation with a given ordered cycle type, and map it to the corresponding upper half. More precisely, given $c \in \Cr(\mu)$, its \emph{dual} $c^\vee$ is defined by \begin{equation*} c^\vee := c^{-1}p_\mu. \end{equation*} We now establish some technical properties of the operation $c \mapsto c^\vee$ which will be needed below. First is the basic but important closure property. \begin{prop} \label{prop:Closed} $\Cr(\mu)$ is closed under taking duals. \end{prop} \begin{proof} Let $c$ be a midpoint of $e$ and $p_\mu$; we have to check that $c^\vee$ is a midpoint of $e$ and $p_\mu$. We have \begin{equation*} \mathrm{d}(e,c^\vee) = \mathrm{d}(e,c^{-1}p_\mu) =\mathrm{d}(c,p_\mu) \end{equation*} \noindent and \begin{equation*} \mathrm{d}(c^\vee,p_\mu) = \mathrm{d}(c^{-1}p_\mu,p_\mu) =\mathrm{d}(c^{-1},e) =\mathrm{d}(e,c). \end{equation*} \noindent Thus \begin{equation*} \mathrm{d}(e,c^\vee)+\mathrm{d}(c^\vee,p_\mu) =\mathrm{d}(e,c) + \mathrm{d}(c,p_\mu) =\mathrm{d}(e,p_\mu), \end{equation*} \noindent and \begin{equation*} \mathrm{d}(e,c^\vee) - \mathrm{d}(c^\vee,p_\mu) = \mathrm{d}(c,p_\mu) - \mathrm{d}(e,c) =\varepsilon \end{equation*} \noindent with $\varepsilon \in \{-1,0,1\}$, as required. \end{proof} Note that while the map $\Cr(\mu) \rightarrow \Cr(\mu)$ defined by $c \mapsto c^\vee$ is bijective, it is not involutive: the dual of the dual of $c$ is $c$ conjugated by $p_\mu^{-1}$. 
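For small $n$ the crossover sets and the duality operation can be checked by brute force. The following continuation of our Python sketch enumerates $\Cr(\mu)$, using the characterization of a midpoint as a point splitting a geodesic as evenly as parity allows (which is how we read the definition of the midpoint set), and verifies the closure property of Proposition \ref{prop:Closed} in one small example.
\begin{verbatim}
from itertools import permutations

def is_midpoint(c, a, b):
    """c lies on a geodesic from a to b and splits it as evenly as parity allows."""
    d1, d2 = dist(a, c), dist(c, b)
    return d1 + d2 == dist(a, b) and abs(d1 - d2) <= 1

def Cr(mu):
    """All mu-crossovers, found by brute force over S(n), n = sum(mu)."""
    n = sum(mu)
    e = {i: i for i in range(1, n + 1)}
    pmu = canonical(mu)
    out = []
    for img in permutations(range(1, n + 1)):
        c = dict(zip(range(1, n + 1), img))
        if is_midpoint(c, e, pmu):
            out.append(c)
    return out

def dual(c, mu):
    """The dual crossover c^vee = c^{-1} p_mu."""
    return compose(inverse(c), canonical(mu))

mu = (3, 1)
crossovers = Cr(mu)
keys = {tuple(sorted(c.items())) for c in crossovers}
assert all(tuple(sorted(dual(c, mu).items())) in keys for c in crossovers)
print(len(crossovers), "crossovers for mu =", mu)   # closure under duals holds
\end{verbatim}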
For future use, we extend the duality operation from points to sets: given $C \subseteq \Cr(\mu)$, we define \begin{equation*} C^\vee := \{ c^\vee : c \in C\}. \end{equation*} Next is the following important property of crossover duals. \begin{lem} \label{lem:LemForDuality} If $c \in \Cr(\mu)$, then $c^{-1}c^\vee$ has both the same ordered cycle type and the same sequence of cycle minima as $p_\mu$. \end{lem} \begin{proof} First note that $c^{-1}c^\vee=c^{-2}p_\mu$. Since $c$ lies on a geodesic path linking $e$ to $p_\mu$, each cycle of $c$ is a subcycle of some cycle of $p_\mu$. Since the cycles of $p_\mu$ are intervals, our task reduces to proving the following general statement: whenever $c_1c_2\dots$ is a product of forward cycles which induce a noncrossing partition of $\{1,\dots,k\}$, the product \begin{equation*} c_1^{-2}c_2^{-2}\dots (1\mapsto \dots \mapsto k) \end{equation*} \noindent is a cyclic permutation of the numbers $1,\dots,k$. If this statement holds, then left multiplication of $p_\mu$ by $c^{-2}$ will change neither the cycle structure nor cycle minima of $p_\mu$. Note that we can assume each cycle $c_i$ is of length at least two. Suppose that the first cycle, $c_1$, is \begin{equation*} c_1 = (i_1 \mapsto i_2 \mapsto \dots \mapsto i_{k_1}), \end{equation*} \noindent where $1 \leq i_1 < i_2 < \dots < i_{k_1} \leq k$. Let us write the full forward $k$-cycle in the form \begin{equation*} (1\mapsto \dots \mapsto k)=(i_1 \mapsto I_1 \mapsto i_2 \mapsto I_2 \mapsto \dots \mapsto i_{k_1} \mapsto I_{k_1}), \end{equation*} \noindent where the $I_*$'s are intervals. Since \begin{align*} (i_1 \mapsto I_1 \mapsto i_2 \mapsto I_2 \mapsto \dots \mapsto i_{k_1} \mapsto I_{k_1}) &=(i_1\mapsto \dots \mapsto i_{k_1}) (i_1 \mapsto I_1) (i_2 \mapsto I_2) \dots (i_{k_1} \mapsto I_{k_1}), \end{align*} \noindent we have \begin{align*} (i_1\mapsto \dots \mapsto i_{k_1})^{-2} (1\mapsto \dots \mapsto k) &=(i_1\mapsto \dots \mapsto i_{k_1})^{-1} (i_1 \mapsto I_1) (i_2 \mapsto I_2) \dots (i_{k_1} \mapsto I_{k_1})\\ &=(i_{k_1}\mapsto \dots \mapsto i_1) (i_{k_1} \mapsto I_{k_1}) \dots (i_2 \mapsto I_2) (i_1 \mapsto I_1) \\ &=(i_{k_1} \mapsto I_{k_1} \mapsto \dots \mapsto i_2 \mapsto I_2 \mapsto i_1 \mapsto I_1). \end{align*} \noindent Now, since the cycles $c_1,c_2,\dots$ induce a noncrossing partition of $\{1,\dots,k\}$, the cycle $c_2$ is contained in one of the intervals $I_1,\dots,I_{k_1}$. Thus the same argument applies to compute \begin{equation*} c_2^{-2}(i_{k_1} \mapsto I_{k_1} \mapsto \dots \mapsto i_2 \mapsto I_2 \mapsto i_1 \mapsto I_1). \end{equation*} \noindent Iterating this argument over the remaining cycles shows that the full product $c_1^{-2}c_2^{-2}\dots(1\mapsto \dots \mapsto k)$ is again a single cycle on $\{1,\dots,k\}$, which proves the claim. \end{proof} \subsection{Encoding} The duality operation has been introduced, and its basic properties developed, in order to make available a structured means of encoding \emph{pairs} of midpoints using crossovers. To this end, we introduce the mapping \begin{equation*} \Cr(\mu) \longrightarrow M (a,b) \times M(a,b) \end{equation*} \noindent defined by \begin{equation*} c \mapsto \Phi_c(a,b) := (\varphi_c(a,b), \varphi_{c^\vee}(a,b)). \end{equation*} \noindent Thus $(x,y)=\Phi_c(a,b)$ is the pair of midpoints of $a$ and $b$ encoded by $c$ and $c^\vee$. The duality relationship between $c$ and $c^\vee$ induces algebraic and geometric relations between the pairs $(a,b)$ and $(x,y)$ which may be collectively called \emph{duality of midpoints}.
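Before stating these relations precisely, here is a small computational illustration of the encoding map, continuing our Python sketch (the code and its names are ours, not the paper's): it encodes midpoints of a sample pair $(a,b)$ and confirms that each encoded point is indeed a midpoint of $a$ and $b$.
\begin{verbatim}
def phi(c, a, b):
    """The midpoint of a and b encoded by the crossover c:
       phi_c(a,b) = a u c u^{-1}, where u = u_{a^{-1} b}."""
    u = u_of(compose(inverse(a), b))
    return compose(compose(compose(a, u), c), inverse(u))

def Phi(c, a, b, mu):
    """The pair of midpoints encoded by c and its dual c^vee."""
    return phi(c, a, b), phi(dual(c, mu), a, b)

mu = (3, 1)
a = {1: 2, 2: 1, 3: 4, 4: 3}               # (1 2)(3 4)
g = {1: 3, 2: 2, 3: 4, 4: 1}               # ordered cycle type (3, 1)
b = compose(a, g)                          # so that a^{-1} b = g
for c in Cr(mu):
    x, y = Phi(c, a, b, mu)
    assert is_midpoint(x, a, b) and is_midpoint(y, a, b)
print("all encoded points are midpoints of a and b")
\end{verbatim}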
\begin{prop} \label{prop:Duality} If $(x,y) = \Phi_c(a,b)$, then: \begin{enumerate} \smallskip \item $x^{-1}y$ has the same ordered cycle type and sequence of cycle minima as $a^{-1}b$; \smallskip \item $a$ and $b$ are midpoints of $x$ and $y$; \smallskip \item $u_{x^{-1}y} = u_{a^{-1}b} u_{c^{-1}c^\vee}$. \end{enumerate} \end{prop} \begin{proof} Let us prove these assertions in order. \begin{enumerate} \smallskip \item First, we have \begin{align*} x^{-1}y &= (\varphi_c(a,b))^{-1}(\varphi_{c^\vee}(a,b)) \\ &=(au_{a^{-1}b}cu_{a^{-1}b}^{-1})^{-1} (au_{a^{-1}b}c^\vee u_{a^{-1}b}^{-1}) \\ &= u_{a^{-1}b}c^{-1}c^\vee u_{a^{-1}b}^{-1}, \end{align*} \noindent Let $\mu$ be the ordered cycle type of $a^{-1}b$. By definition, $u_{a^{-1}b}$ conjugates $p_\mu$ into $a^{-1}b$ and transforms the sequence of cycle minima of $p_\mu$ into that of $a^{-1}b$. By Lemma \ref{lem:LemForDuality}, $c^{-1}c^\vee$ has both the same ordered cycle type and sequence of cycle minima as $p_\mu$. Thus, $u_{a^{-1}b}c^{-1}c^\vee u_{a^{-1}b}^{-1}$ has both the same ordered cycle type and sequence of cycle minima as $a^{-1}b$. \smallskip \item We now show that $a$ and $b$ are midpoints of $x$ and $y$. Since \begin{equation*} a^{-1}x=a^{-1}(au_{a^{-1}b}cu_{a^{-1}b}^{-1}) =u_{a^{-1}b}cu_{a^{-1}b}^{-1} \end{equation*} \noindent and \begin{align*} y^{-1}b&=(au_{a^{-1}b}c^\vee u_{a^{-1}b}^{-1})^{-1}b \\ &=(au_{a^{-1}b}c^{-1}p_\mu u_{a^{-1}b}^{-1})^{-1}b \\ &=(au_{a^{-1}b}c^{-1}u_{a^{-1}b}^{-1} u_{a^{-1}b}p_\mu u_{a^{-1}b}^{-1})^{-1}b \\ &=(au_{a^{-1}b}c^{-1}u_{a^{-1}b}^{-1}a^{-1}b)^{-1}b \\ &= (b^{-1}au_{a^{-1}b})c(b^{-1}au_{a^{-1}b})^{-1}, \end{align*} \noindent both $a^{-1}x$ and $y^{-1}b$ are conjugates of $c$. By the conjugation invariance of $\mathrm{d}$ we thus have \begin{equation*} \mathrm{d}(e,a^{-1}x) = \mathrm{d}(e,c) = \mathrm{d}(e,y^{-1}b), \end{equation*} \noindent whence \begin{equation*} \mathrm{d}(a,x) = \mathrm{d}(e,c) = \mathrm{d}(y,b). \end{equation*} \noindent Similarly, since \begin{equation*} a^{-1}y=a^{-1}(au_{a^{-1}b}c^\vee u_{a^{-1}b}^{-1}) =u_{a^{-1}b}c^\vee u_{a^{-1}b}^{-1} \end{equation*} \noindent and \begin{align*} x^{-1}b&=(au_{a^{-1}b}c u_{a^{-1}b}^{-1})^{-1}b \\ &=u_{a^{-1}b}c^{-1}u_{a^{-1}b}^{-1}a^{-1}b\\ &=u_{a^{-1}b}c^\vee p_\mu^{-1}u_{a^{-1}b}^{-1}a^{-1}b \\ &=u_{a^{-1}b}c^\vee (u_{a^{-1}b}^{-1}(a^{-1}b)^{-1}u_{a^{-1}b})u_{a^{-1}b}^{-1}a^{-1}b \\ &=u_{a^{-1}b}c^\vee u_{a^{-1}b}^{-1}, \end{align*} \noindent both $a^{-1}y$ and $x^{-1}b$ are conjugates of $c^\vee$, and we have \begin{equation*} \mathrm{d}(a,y)=\mathrm{d}(e,c^\vee)=\mathrm{d}(x,b). \end{equation*} \noindent Now, since $x^{-1}y$ has the same ordered cycle type as $a^{-1}b$, we have \begin{equation*} \mathrm{d}(x,y) = \mathrm{d}(a,b) = \mathrm{d}(e,p_\mu). \end{equation*} \noindent Putting this all together, we have \begin{align*} \mathrm{d}(x,a) + \mathrm{d}(a,y) &= \mathrm{d}(e,c) + \mathrm{d}(e,c^\vee) \\ &= \mathrm{d}(e,c) + \mathrm{d}(c,p_\mu) \\ &= \mathrm{d}(e,p_\mu) \\ &= \mathrm{d}(x,y), \end{align*} \noindent and \begin{align*} \mathrm{d}(x,a) - \mathrm{d}(a,y) &= \mathrm{d}(e,c) - \mathrm{d}(e,c^\vee) \\ &= \mathrm{d}(e,c) - \mathrm{d}(c,p_\mu) \\ &= \varepsilon, \end{align*} \noindent where $\varepsilon \in \{-1,0,1\}$ because $c$ is a midpoint of $e$ and $p_\mu$. Thus, $a$ is a midpoint of $x$ and $y$. The proof that $b$ is a midpoint of $x$ and $y$ is just the same. 
\smallskip \item Finally, we prove the identity \begin{equation*} u_{x^{-1}y} = u_{a^{-1}b} u_{c^{-1}c^\vee}, \end{equation*} \noindent which will be needed below in the proof of Proposition \ref{prop:Decoding}. Since \begin{equation*} \begin{split} x^{-1}y &= (au_{a^{-1}b} c u_{a^{-1}b}^{-1})^{-1} (au_{a^{-1}b} c^\vee u_{a^{-1}b}^{-1}) \\ &= u_{a^{-1}b} c^{-1}c^\vee u_{a^{-1}b}^{-1} \\ &= u_{a^{-1}b} u_{c^{-1}c^\vee} p_\mu u_{c^{-1}c^\vee} ^{-1} u_{a^{-1}b}^{-1} \\ &= (u_{a^{-1}b} u_{c^{-1}c^\vee}) p_\mu (u_{a^{-1}b} u_{c^{-1}c^\vee})^{-1}, \end{split} \end{equation*} \noindent we have that $u_{a^{-1}b} u_{c^{-1}c^\vee}$ conjugates $p_\mu$ into $x^{-1}y$. This is one of the two properties that uniquely defines the permutation $u_{x^{-1}y}$, the other being that it transforms the sequence of cycle minima of $p_\mu$ into the sequence of cycle minima of $x^{-1}y$. Since $u_{c^{-1}c^\vee}$ stabilizes the sequence of cycle minima of $p_\mu$ (by Lemma \ref{lem:LemForDuality}), conjugation of $p_\mu$ by $u_{a^{-1}b} u_{c^{-1}c^\vee}$ produces a permutation whose sequence of cycle minima coincides with that of $a^{-1}b$, and hence with that of $x^{-1}y$ by Part (1). \end{enumerate} \end{proof} \subsection{Decoding} Our constructions so far may be thought of in cryptographic terms, as follows. Alice and Bob wish to transmit messages to one another across an insecure channel. They meet at a secure location, and agree on a composition $\mu \vDash n$ and a crossover $c \in \Cr(\mu)$ to be used as a secret encryption key. Alice and Bob then return to their respective locations on opposite ends of the channel. The plaintext messages to be transmitted are pairs $(a,b) \in S(n) \times S(n)$ such that $a^{-1}b \in \vec{C}_\mu$. To send the message $(a,b)$ to Bob, Alice computes the ciphertext $(x,y) = \Phi_c(a,b)$ and transmits it to Bob across the channel. Bob receives the ciphertext $(x,y)$, and wishes to recover the plaintext message. Our next result proves that there is a well-defined decryption key $\delta(c)$ such that $(a,b)=\Phi_{\delta(c)}(x,y)$ --- in fact, the proof explains how to compute $\delta(c)$. \begin{prop} \label{prop:Decoding} For each composition $\mu \vDash n$, there exists a function \begin{equation*} \delta_\mu : \Cr(\mu) \longrightarrow \Cr(\mu) \end{equation*} \noindent such that \begin{equation*} \Phi_{\delta_\mu(c)}(\Phi_c(a,b)) = (a,b) \end{equation*} \noindent holds for all $c \in \Cr(\mu)$ and each $(a,b) \in S(n) \times S(n)$ verifying $a^{-1}b \in \vec{C}_\mu$. Moreover, this function is an involution. \end{prop} We call the function $\delta_\mu$ of Proposition \ref{prop:Decoding} the \emph{decoding function} of type $\mu$. In order to lighten the notation, we will henceforth omit the dependence of $\delta$ on $\mu$. \begin{proof} Fix a composition $\mu \vDash n$. We claim that the corresponding decoding function is given by \begin{equation*} \delta(c) := u_{c^{-1}c^\vee}^{-1}c^{-1}u_{c^{-1}c^\vee}. \end{equation*} First, let us check that the codomain of $\delta$ is indeed $\Cr(\mu)$, i.e. that $\delta(c)$ is in fact a midpoint of $e$ and $p_\mu$. 
We have \begin{align*} \mathrm{d}(e,\delta(c)) &= \mathrm{d}(e,u_{c^{-1}c^\vee}^{-1}c^{-1}u_{c^{-1}c^\vee})\\ &=\mathrm{d}(e,c^{-1})\\ &=\mathrm{d}(e,c) \end{align*} \noindent and \begin{align*} \mathrm{d}(\delta(c),p_\mu) &=\mathrm{d}(u_{c^{-1}c^\vee}^{-1}c^{-1}u_{c^{-1}c^\vee},p_\mu)\\ &=\mathrm{d}(c^{-1},u_{c^{-1}c^\vee}p_\mu u_{c^{-1}c^\vee}^{-1})\\ &=\mathrm{d}(c^{-1},c^{-1}c^\vee)\\ &=\mathrm{d}(e,c^\vee)\\ &=\mathrm{d}(e,c^{-1}p_\mu)\\ &=\mathrm{d}(c,p_\mu). \end{align*} \noindent Thus, since $c$ is a midpoint of $e$ and $p_\mu$, we have \begin{equation*} \mathrm{d}(e,\delta(c))+\mathrm{d}(\delta(c),p_\mu) =\mathrm{d}(e,c) + \mathrm{d}(c,p_\mu) =\mathrm{d}(e,p_\mu) \end{equation*} \noindent and \begin{equation*} \mathrm{d}(e,\delta(c))-\mathrm{d}(\delta(c),p_\mu) =\mathrm{d}(e,c) - \mathrm{d}(c,p_\mu) =\varepsilon \end{equation*} \noindent with $\varepsilon \in \{-1,0,1\}$. Next, let $(a,b) \in S(n) \times S(n)$ be a valid plaintext, i.e. a pair of permutations such that $a^{-1}b \in \vec{C}_\mu$, and let $(x,y) = \Phi_c(a,b)$ be the corresponding ciphertext. We then have \begin{equation*} \begin{split} x &= \varphi_c(a,b) = au_{a^{-1}b} c u_{a^{-1}b}^{-1} \\ y &= \varphi_{c^\vee}(a,b) = au_{a^{-1}b} c^\vee u_{a^{-1}b}^{-1}. \end{split} \end{equation*} \noindent By Proposition \ref{prop:Duality}, Part (1), we may re-encode $(x,y)$ using $\delta(c)$ as an encryption key, arriving at a new ciphertext $(x',y') = \Phi_{\delta(c)}(x,y)$. We claim that this re-encoding decrypts $(x,y)$, i.e. that $(x',y')=(a,b)$. Indeed, \begin{align*} x' &= \varphi_{\delta(c)}(x,y) \\ &= x u_{x^{-1}y} \delta(c) u_{x^{-1}y}^{-1} \\ &= (au_{a^{-1}b} c u_{a^{-1}b}^{-1}) (u_{a^{-1}b}u_{c^{-1}c^\vee}) (u_{c^{-1}c^\vee}^{-1}c^{-1}u_{c^{-1}c^\vee}) (u_{a^{-1}b}u_{c^{-1}c^\vee})^{-1} \\ &= a, \end{align*} \noindent where we made use of Proposition \ref{prop:Duality}, Part (3). Similarly, we have \begin{align*} y' &= \varphi_{\delta(c)^\vee}(x,y) \\ &= x u_{x^{-1}y} \delta(c)^{\vee} u_{x^{-1}y}^{-1} \\ &= (au_{a^{-1}b} c u_{a^{-1}b}^{-1}) (u_{a^{-1}b}u_{c^{-1}c^\vee}) (u_{c^{-1}c^\vee}^{-1}c^{-1}u_{c^{-1}c^\vee})^\vee (u_{a^{-1}b}u_{c^{-1}c^\vee})^{-1} \\ &=(au_{a^{-1}b} c u_{a^{-1}b}^{-1}) (u_{a^{-1}b}u_{c^{-1}c^\vee}) (u_{c^{-1}c^\vee}^{-1}cu_{c^{-1}c^\vee}p_\mu) (u_{a^{-1}b}u_{c^{-1}c^\vee})^{-1} \\ &=a u_{a^{-1}b}c^2u_{c^{-1}c^\vee}p_\mu u_{c^{-1}c^\vee}^{-1} u_{a^{-1}b}^{-1} \\ &= a u_{a^{-1}b} cc^\vee u_{a^{-1}b}^{-1} \\ &= a u_{a^{-1}b} p_\mu u_{a^{-1}b}^{-1} \\ &=b. \end{align*} It remains to show that \begin{equation*} \delta: \Cr(\mu) \longrightarrow \Cr(\mu) \end{equation*} is an involution. Let $c$ be a $\mu$-crossover, let $(a,b)$ be a valid message, and consider the triple encoding \begin{equation*} \Phi_{\delta^2(c)}(\Phi_{\delta(c)}(\Phi_c(a,b))). \end{equation*} \noindent Since $\delta(c)$ is the decryption key corresponding to the encryption key $c$, we have \begin{equation*} \Phi_{\delta^2(c)}(\Phi_{\delta(c)}(\Phi_c(a,b))) = \Phi_{\delta^2(c)}(a,b). \end{equation*} \noindent On the other hand, since $\delta^2(c)$ is the decryption key corresponding to the encryption key $\delta(c)$, we also have \begin{equation*} \Phi_{\delta^2(c)}(\Phi_{\delta(c)}(\Phi_c(a,b))) = \Phi_{c}(a,b). \end{equation*} \noindent Thus $\Phi_{\delta^2(c)}(a,b) = \Phi_{c}(a,b)$, which readily implies $\delta^2(c)=c$. \end{proof} \section{The Brunn-Minkowski inequality} \subsection{Without a curvature term} We now put our encoding-decoding formalism to work. We begin by proving the ``flat'' Brunn-Minkowski inequality for $S(n)$.
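Before doing so, we record a small computational check of the decoding function of Proposition \ref{prop:Decoding}, continuing the Python sketch of the previous sections (the code and its names are ours, and the check covers only one small example).
\begin{verbatim}
def delta(c, mu):
    """The decryption key delta(c) = w^{-1} c^{-1} w, where w = u_{c^{-1} c^vee}."""
    w = u_of(compose(inverse(c), dual(c, mu)))
    return compose(compose(inverse(w), inverse(c)), w)

mu = (3, 1)
a = {1: 2, 2: 1, 3: 4, 4: 3}
g = {1: 3, 2: 2, 3: 4, 4: 1}               # ordered cycle type (3, 1)
b = compose(a, g)                          # a^{-1} b = g
for c in Cr(mu):
    x, y = Phi(c, a, b, mu)
    assert Phi(delta(c, mu), x, y, mu) == (a, b)   # decoding round trip
    assert delta(delta(c, mu), mu) == c            # delta is an involution
print("decoding round trip and involutivity verified for mu =", mu)
\end{verbatim}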
\begin{thm} \label{thm:BrunnMinkowski} For any nonempty $A,B \subseteq S(n)$, we have \begin{equation*} \log |M| \geq \frac{1}{2}\log|A| + \frac{1}{2}\log|B|, \end{equation*} \noindent where $M$ is the midpoint set of $A$ and $B$. \end{thm} \begin{proof} Suppose that there exists an injection \begin{equation*} \Phi:A \times B \longrightarrow M \times M. \end{equation*} \noindent Then \begin{equation*} |M|^2 \geq |A| |B|, \end{equation*} \noindent and taking logs this becomes \begin{equation*} \log |M| \geq \frac{1}{2}\log|A| + \frac{1}{2}\log|B|. \end{equation*} To construct such an injection, let $\mu_1,\dots,\mu_m$ be an enumeration of the ordered cycle types of the permutations \begin{equation*} a^{-1}b, \quad (a,b) \in A \times B. \end{equation*} \noindent For each $1 \leq i \leq m$, choose an encryption key $c_i \in \Cr(\mu_i)$, and let $d_i = \delta(c_i)$ be the corresponding decryption key. Partition $A \times B$ into classes \begin{equation*} C_i = \{ (a,b) \in A \times B : a^{-1}b \in \vec{C}_{\mu_i}\}, \quad 1 \leq i \leq m, \end{equation*} \noindent and consider the map \begin{equation*} \Phi:A\times B \longrightarrow M \times M \end{equation*} \noindent whose restriction to each $C_i$ is given by encryption using key $c_i$: \begin{equation*} \Phi(a,b) = \Phi_{c_i}(a,b). \end{equation*} \noindent We claim that $\Phi$ is injective. Indeed, if \begin{equation*} \Phi(a,b) = \Phi(a',b') \end{equation*} \noindent for some $(a,b), (a',b') \in A \times B$, then we must have $(a,b), (a',b') \in C_i$ for some $1 \leq i \leq m$ by Proposition \ref{prop:Duality}. Hence \begin{equation*} (a,b) = \Phi_{d_i}(\Phi_{c_i}(a,b)) = \Phi_{d_i}(\Phi_{c_i}(a',b')) = (a',b'), \end{equation*} \noindent by Proposition \ref{prop:Decoding}. \end{proof} \subsection{With a curvature term} Theorem \ref{thm:BrunnMinkowski} above was proved by injecting $A \times B$ into $M \times M$. The injection used was a straightforward application of the formalism of encoding and decoding developed above. We now obtain the curved Brunn-Minkowski inequality stated as Theorem \ref{thm:Main} in the introduction by injecting two copies of $A \times B$ into $M \times M$. The construction of the required injection involves a subtler use of encoding/decoding: the proof requires choosing two parallel sequences of encryption keys which are coupled in a special way. \begin{thm} \label{thm:CurvedBrunnMinkowski} For any nonempty $A,B \subseteq S(n)$, we have \begin{equation*} \log |M| \geq \frac{1}{2}\log|A| + \frac{1}{2}\log|B| + \frac{K}{8}\mathrm{d}(A,B)^2 \end{equation*} \noindent where $M$ is the midpoint set of $A$ and $B$ and \begin{equation*} K = \frac{4\log 2}{(n-1)^2}. \end{equation*} \end{thm} \begin{proof} First note that if $\mathrm{d}(A,B)=0$, the claimed inequality degenerates to the flat Brunn-Minkowski inequality, which we have already proved. We thus assume that $A$ and $B$ are disjoint. Suppose that there exists an injection \begin{equation*} \Phi:A \times B \times \{0,1\} \longrightarrow M \times M. \end{equation*} \noindent Then \begin{equation*} |M|^2 \geq 2|A| |B|, \end{equation*} \noindent and taking logs this becomes \begin{equation*} \log |M| \geq \frac{1}{2}\log|A| + \frac{1}{2}\log|B| + \frac{\log 2}{2}. \end{equation*} \noindent The desired inequality now follows from the fact that the diameter of $S(n)$ is $n-1$. In order to construct $\Phi$ as required, let $\mu_1,\dots,\mu_m$ and $C_1,\dots,C_m$ be as in the proof of Theorem \ref{thm:BrunnMinkowski}. 
Choose a system of encryption keys \begin{equation*} c_i \in \Cr(\mu_i), \quad 1 \leq i \leq m, \end{equation*} \noindent and consider a second system of encryption keys obtained from the first system by setting \begin{equation*} \tilde{c}_i := \delta(\delta(c_i)^\vee), \quad 1 \leq i \leq m. \end{equation*} \noindent The keys $\tilde{c}_i$ are defined in this way so that, by the involutive property of $\delta$, their corresponding decryption keys are the duals of the decryption keys of the first system: \begin{equation} \label{eqn:DerivedCrossovers} \delta(\tilde{c}_i) = \delta(c_i)^\vee. \end{equation} We build $\Phi$ from the above data as follows. First, partition $A \times B \times \{0,1\}$ into the $2m$ sets \begin{equation*} C_i^{(j)} = \{ (a,b,j) : (a,b) \in C_i\}, \quad 1 \leq i \leq m,\ 0 \leq j \leq 1. \end{equation*} \noindent We define $\Phi$ by declaring its restriction to $C_i^{(j)}$ to be \begin{equation*} \Phi(a,b,j) := \begin{cases} \Phi_{c_i}(a,b), \text{ if } j=0 \\ \Phi_{\tilde{c}_i}(a,b), \text{ if } j=1 \end{cases} \end{equation*} We now prove that $\Phi$ so defined is an injection. Suppose \begin{equation*} (x,y) \in M \times M,\quad (a,b,j) \in C_i^{(j)},\quad (a',b',j') \in C_{i'}^{(j')} \end{equation*} \noindent are such that \begin{equation*} (x,y) = \Phi(a,b,j) = \Phi(a',b',j'). \end{equation*} \noindent Then, by the first part of Proposition \ref{prop:Duality}, we must have $i=i'$. We claim that also $j=j'$. If not, then (relabelling if necessary) we have \begin{equation*} (x,y) = \Phi(a,b,0) = \Phi(a',b',1), \end{equation*} \noindent so that \begin{equation*} (x,y) = \Phi_{c_i}(a,b) = \Phi_{\tilde{c}_i}(a',b'). \end{equation*} \noindent Decoding, we obtain \begin{equation*} (a,b) = \Phi_{\delta(c_i)}(x,y) =(\varphi_{\delta(c_i)}(x,y), \varphi_{\delta(c_i)^\vee}(x,y)) \end{equation*} \noindent and, using \eqref{eqn:DerivedCrossovers}, \begin{equation*} (a',b') = \Phi_{\delta(\tilde{c}_i)}(x,y) =(\varphi_{\delta(c_i)^\vee}(x,y), \varphi_{\delta(\tilde{c}_i)^\vee}(x,y)). \end{equation*} \noindent This implies $a'=b$, which is impossible since $A$ and $B$ are disjoint. There are now two cases: $j=j'=0$ and $j=j'=1$. In the first case, we have \begin{equation*} \Phi_{c_i}(a,b)= \Phi(a,b,0) =\Phi(a',b',0) =\Phi_{c_i}(a',b'), \end{equation*} \noindent whence \begin{equation*} (a,b) = \Phi_{\delta(c_i)}(\Phi_{c_i}(a,b)) =\Phi_{\delta(c_i)}(\Phi_{c_i}(a',b')) = (a',b'). \end{equation*} \noindent In the second case, we have \begin{equation*} \Phi_{\tilde{c}_i}(a,b) =\Phi(a,b,1) =\Phi(a',b',1) =\Phi_{\tilde{c}_i}(a',b'), \end{equation*} \noindent whence \begin{equation*} (a,b) = \Phi_{\delta(\tilde{c}_i)}(\Phi_{\tilde{c}_i}(a,b)) =\Phi_{\delta(\tilde{c}_i)} (\Phi_{\tilde{c}_i}(a',b')) = (a',b'). \end{equation*} \end{proof} \subsection{With an optimal curvature term} We conjecture that the curved Brunn-Minkowski inequality proved in Theorem \ref{thm:CurvedBrunnMinkowski} can be improved to the following optimal statement. \begin{conj} \label{conj:Optimal} For any nonempty $A,B \subseteq S(n)$, we have \begin{equation*} \log |M| \geq \frac{1}{2}\log|A| + \frac{1}{2}\log|B| + \frac{K}{8}\mathrm{d}(A,B)^2 \end{equation*} \noindent where $M$ is the midpoint set of $A$ and $B$, \begin{equation*} K = \frac{c}{n-1}, \end{equation*} \noindent and $c$ is a positive constant. \end{conj} While we are at present unable to prove Conjecture \ref{conj:Optimal}, we show here that it is implied by the following conjectural concentration inequality.
\begin{conj} \label{conj:Concentration} There exists a positive constant $\varepsilon > 0$ such that, for any $\mu \vDash n$, we have \begin{equation*} \mathrm{d}(C,C^\vee) \geq r \implies \mathbb{P}(C) \leq e^{-\frac{\varepsilon r^2}{n-\ell(\mu)}}, \end{equation*} \noindent where $\mathbb{P}$ is the uniform probability measure on $\Cr(\mu)$. \end{conj} \begin{remark} In the case where $\mu=(\mu_1,\mu_2,\dots)$ satisfies $\mu_i \in \{1,2\}$, Conjecture \ref{conj:Concentration} is true --- via the embedding of the hypercube described in the Introduction, it is equivalent to Corollary 6 in \cite{OV}. \end{remark} We now explain how Conjecture \ref{conj:Optimal} can be deduced from Conjecture \ref{conj:Concentration}. This argument, which lifts the proof of \cite[Theorem 1]{OV} from the hypercube to the symmetric group, differs substantially from the proofs of Theorems \ref{thm:BrunnMinkowski} and \ref{thm:CurvedBrunnMinkowski}. To prove these coarser results, we used static encoding to construct our injections, i.e. a predetermined list of encryption keys. Here, we use an adaptive coding scheme in which all crossovers associated to $A \times B$ are employed. \begin{thm} \label{thm:ConditionalOptimal} Suppose that Conjecture \ref{conj:Concentration} is true. Then, for any nonempty $A,B \subseteq S(n)$, we have \begin{equation*} \log |M| \geq \frac{1}{2}\log|A| + \frac{1}{2}\log|B| + \frac{K}{8}\mathrm{d}(A,B)^2 \end{equation*} \noindent where $M$ is the midpoint set of $A$ and $B$ and \begin{equation*} K = \frac{4\varepsilon}{n-1}. \end{equation*} \end{thm} \begin{proof} Let $\mu_1,\dots,\mu_m$ and $C_1,\dots,C_m$ be as in the proof of Theorem \ref{thm:BrunnMinkowski}. Put \begin{equation*} U_i := C_i \times \Cr(\mu_i), \quad 1\leq i \leq m. \end{equation*} \noindent These sets are pairwise disjoint, but note that they are not subsets of $A \times B$. Rather, \begin{equation*} U := \bigsqcup_{i=1}^m U_i \end{equation*} \noindent is an ``enriched'' version of $A \times B$ in which each pair $(a,b) \in C_i$ appears with multiplicity $|\Cr(\mu_i)|$. Consider the map \begin{equation*} \Phi: U \longrightarrow M \times M \end{equation*} \noindent defined by \begin{equation*} \Phi(a,b,c) := \Phi_c(a,b). \end{equation*} \noindent By Proposition \ref{prop:Duality}, we know that the image of $U_i$ under $\Phi$ is contained in \begin{equation*} V_i := \{ (x,y) \in M \times M : x^{-1}y \in \vec{C}_{\mu_i}\}. \end{equation*} \noindent The sets $V_1,\dots,V_m$ are disjoint, and \begin{equation*} \sum_{i=1}^m |V_i| \leq |M \times M|. \end{equation*} Fix an arbitrary $i \in \{1,\dots,m\}$, and an arbitrary pair of midpoints $(x,y) \in V_i$. By definition, $(a,b,c) \in U_i$ maps to $(x,y)$ under $\Phi$ if and only if \begin{equation*} \Phi_c(a,b) = (x,y), \end{equation*} \noindent which by Proposition \ref{prop:Decoding} is equivalent to \begin{equation*} \Phi_{\delta(c)}(x,y) = (a,b). \end{equation*} \noindent Thus the cardinality of the fibre of $\Phi$ over $(x,y)$ agrees with that of the set \begin{equation*} D(x,y) := \{d \in \Cr(\mu_i) : \Phi_d(x,y) \in C_i \}. \end{equation*} \noindent Now let $d_1,d_2 \in D(x,y)$. Then, by definition, \begin{equation*} \varphi_{d_1}(x,y) \in A, \quad \varphi_{d_2^\vee}(x,y) \in B. 
\end{equation*} \noindent Because the map \begin{equation*} d \mapsto \varphi_d(x,y) \end{equation*} \noindent is an isometry, this implies \begin{equation*} \mathrm{d}(d_1,d_2^\vee) = \mathrm{d}(\varphi_{d_1}(x,y),\varphi_{d_2^\vee}(x,y)) \geq \mathrm{d}(A,B), \end{equation*} \noindent and hence \begin{equation*} \mathrm{d}(D(x,y),D(x,y)^\vee) \geq \mathrm{d}(A,B). \end{equation*} \noindent Hence, assuming Conjecture \ref{conj:Concentration}, we have that \begin{equation*} |D(x,y)| \leq e^{-\frac{\varepsilon \mathrm{d}(A,B)^2}{n-\ell(\mu_i)}} |\Cr(\mu_i)| \leq e^{-\frac{\varepsilon \mathrm{d}(A,B)^2}{n-1}} |\Cr(\mu_i)|. \end{equation*} \noindent Summing over all $(x,y) \in V_i$ we thus obtain \begin{equation*} |U_i| \leq e^{-\frac{\varepsilon \mathrm{d}(A,B)^2}{n-1}} |\Cr(\mu_i)| |V_i|. \end{equation*} \noindent Since $|U_i| = |C_i| |\Cr(\mu_i)|$, this implies \begin{equation*} |C_i| \leq e^{-\frac{\varepsilon \mathrm{d}(A,B)^2}{n-1}} |V_i|. \end{equation*} \noindent Summing over $1 \leq i \leq m$, we get \begin{equation*} \sum_{i=1}^m|C_i| \leq e^{-\frac{\varepsilon \mathrm{d}(A,B)^2}{n-1}} \sum_{i=1}^m|V_i|, \end{equation*} \noindent whence \begin{equation*} |A\times B| \leq e^{-\frac{\varepsilon \mathrm{d}(A,B)^2}{n-1}} |M \times M|. \end{equation*} \noindent Taking logs, we obtain the curved Brunn-Minkowski inequality with \begin{equation*} K = \frac{4\varepsilon}{n-1}. \end{equation*} \end{proof} \bibliographystyle{amsplain}
{ "timestamp": "2015-06-03T02:11:53", "yymm": "1506", "arxiv_id": "1506.00928", "language": "en", "url": "https://arxiv.org/abs/1506.00928", "abstract": "In this paper, we construct an injection $A \\times B \\rightarrow M \\times M$ from the product of any two nonempty subsets of the symmetric group into the square of their midpoint set, where the metric is that corresponding to the conjugacy class of transpositions. If $A$ and $B$ are disjoint, our construction allows to inject two copies of $A \\times B$ into $M \\times M$. These injections imply a positively curved Brunn-Minkowski inequality for the symmetric group analogous to that obtained by Ollivier and Villani for the hypercube. However, while Ollivier and Villani's inequality is optimal, we believe that the curvature term in our inequality can be improved. We identify a hypothetical concentration inequality in the symmetric group and prove that it yields an optimally curved Brunn-Minkowski inequality.", "subjects": "Combinatorics (math.CO); Metric Geometry (math.MG)", "title": "A Curved Brunn-Minkowski Inequality for the Symmetric Group", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9867771805808551, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.709092554629115 }
https://arxiv.org/abs/1403.3479
On the boundary of weighted numerical ranges
In this article, we introduce the weighted numerical range, which is a further generalization of both the c-numerical range and the rank k numerical range. If the boundaries of weighted numerical ranges of two matrices (possibly of different sizes) overlap at sufficiently many points, then the two matrices share common generalized eigenvalues.
\section{Introduction} Let $M_n$ denote the space of all $n\times n$ complex matrices and $\IR^n$ the set of all real $n$-tuples. For any $A\in M_n$, we denote by $\lambda_1(A), \ldots, \lambda_n(A)$ the $n$ eigenvalues of $A$. In the case that $A$ is hermitian, we assume that $\lambda_1(A)\ge \lambda_2(A)\ge\cdots\ge\lambda_n(A)$. We also define $\displaystyle H_\theta(A)=\frac{e^{i\theta}A+e^{-i\theta}A^*}{2}$ for $\theta\in [0,2\pi)$. The numerical range is an extensively studied subject. The study starts with the classical numerical range of $A\in M_n$, which is defined as $$W(A)=\{x^*Ax\ :\ x\in \IC^n, x^*x=1\}$$ and is a compact set containing all the eigenvalues of $A$. $W(A)$ is a convex set by the famous Toeplitz-Hausdorff Theorem \cite{T,H}. A nice discussion can be found in \cite[Chapter 1]{HJ}. There are many papers related to the boundary of the classical numerical range \cite{K, RS, GW1, GW2, Wu, CL}. There are many different generalizations of the classical numerical range. For $1\le k\le n$, the $k$-numerical range of $A$, introduced by Halmos \cite{Hal}, is defined as $$W_k(A)=\left\{\sum_{j=1}^k\frac{1}{k}x_j^*Ax_j\ :\ x_1,\ldots,x_k\in \IC^n \mbox{ are orthonormal}\right\}$$ which was proved to be convex by Berger \cite{B}. Note that $W_1(A)=W(A)$. Indeed $W_k(A)\subseteq W(A)$. For any $c=(c_1,c_2,\ldots,c_n)^t\in \IR^n$, the $c$-numerical range of $A$, first introduced by Marcus \cite{M}, is defined as $$W_c(A)=\left\{\sum_{j=1}^nc_jx_j^*Ax_j\ :\ x_1,\ldots,x_n\in \IC^n \mbox{ are orthonormal}\right\}.$$ Westwick \cite{W} proved it is a convex set. Indeed, if $c_1=\cdots=c_k=1/k,c_{k+1}=\cdots=c_n=0$ then $W_c(A)=W_k(A)$. Therefore the $c$-numerical range is a natural generalization of the $k$-numerical range. A survey can be found in \cite{Li}. There are papers related to the boundary of the $k$-numerical range and the $c$-numerical range \cite{CL, MMF, LST}. A seemingly different generalization is the rank $k$ numerical range. For any $1\le k\le n$, the rank $k$ numerical range of $A$, first introduced by Choi, Kribs and \.{Z}yczkowski \cite{CKZ, CKZ1}, is defined as $$\Lambda_k(A)=\{\lambda\in\IC\ :\ PAP=\lambda P \mbox{ for some rank-}k \mbox{ orthogonal projection }P\}.$$ Note that $\Lambda_1(A)=W(A)$. The rank $k$ numerical range of $A$ was proven to be convex by Woerdeman \cite{Wo} and Li and Sze \cite{LS} independently. The rank $k$ numerical range is a relatively new generalized numerical range and is an active topic, partly due to its connection to quantum computing, e.g. \cite{LP, Many}. There are papers on the boundary of the rank $k$ numerical range \cite{CN, GLPS, LPS, C}, but not many geometric properties are known. As it turns out, all three generalizations have similar reformulations. In this article, we are going to introduce the weighted numerical range, which is a unified approach to the $c$-numerical range and the rank $k$ numerical range. We will prove a theorem stating that if the boundaries of the weighted numerical ranges of two matrices have sufficiently many intersection points, then the two matrices have common generalized eigenvalues. The two matrices may be of different sizes. Applying the theorem and using known results on the classical numerical range, we can deduce some properties of the $c$-numerical range and the rank $k$ numerical range.
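The half-plane descriptions recalled in the next section are easy to explore numerically. The following small Python/NumPy sketch (ours, not part of the paper) samples points $x^*Ax$ of $W(A)$ and checks them against the supporting half-planes determined by $\lambda_1(H_\theta(A))$.
\begin{verbatim}
import numpy as np

def H(A, theta):
    """H_theta(A) = (e^{i theta} A + e^{-i theta} A^*) / 2, a hermitian matrix."""
    return (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2

rng = np.random.default_rng(0)
A = np.array([[1.0, 2.0], [0.0, 1j]])

# Random points x^* A x with unit vectors x; they all lie in W(A).
pts = []
for _ in range(2000):
    x = rng.normal(size=2) + 1j * rng.normal(size=2)
    x /= np.linalg.norm(x)
    pts.append(np.conj(x) @ A @ x)

# Check Re(e^{i theta} v) <= lambda_1(H_theta(A)) on a grid of angles theta.
for theta in np.linspace(0, 2 * np.pi, 13):
    support = np.linalg.eigvalsh(H(A, theta)).max()
    assert all(np.real(np.exp(1j * theta) * v) <= support + 1e-9 for v in pts)
print("all sampled points of W(A) satisfy the supporting half-plane inequalities")
\end{verbatim}
Replacing $\lambda_1(H_\theta(A))$ by the weighted sum $c_1\lambda_1(H_\theta(A))+\cdots+c_n\lambda_n(H_\theta(A))$ gives the analogous numerical description of the weighted numerical range defined below.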
\section{Weighted Numerical Ranges} From \cite[Chapter 1]{HJ}, we have the following equivalent expression for $W(A)$: $$W(A)=\bigcap_\theta \left\{v\in \IC\ : \mbox{Re }e^{i\theta}v \le \lambda_1(H_\theta(A))\right\}.$$ There is a similar equivalent expression for $W_c(A)$: $$W_c(A)=\bigcap_\theta \left\{v\in \IC\ : \mbox{Re } e^{i\theta}v \le c_{\sigma(1)}\lambda_1(H_\theta(A))+\cdots+c_{\sigma(n)}\lambda_n(H_\theta(A))\right\}$$ where $\sigma\in S_n$ is such that $c_{\sigma(1)}\ge \ldots\ge c_{\sigma(n)}$. Li and Sze \cite{LS} proved the convexity of $\Lambda_k(A)$ by showing that there is also a similar equivalent expression for $\Lambda_k(A)$: $$\Lambda_k(A)=\bigcap_\theta \left\{v\in \IC\ : \mbox{Re }e^{i\theta}v \le \lambda_k(H_\theta(A))\right\}.$$ Inspired by the alternative expressions, we define a new type of generalized numerical range. For any $c=(c_1,c_2,\ldots,c_n)^t\in \IR^n$, we define \begin{equation} W(A;c)=\bigcap_\theta \left\{v\in \IC\ : \mbox{Re } e^{i\theta}v \le c_1\lambda_1(H_\theta(A))+\cdots+c_n\lambda_n(H_\theta(A))\right\}. \end{equation} Following the suggestion of Karol \.{Z}yczkowski, we will call it the {\it weighted numerical range}. Let us start with some simple properties of the weighted numerical ranges. \begin{proposition} Let $c\in\IR^n$ and $A\in M_n$. The weighted numerical range of $A$ has the following properties: \begin{enumerate} \item If $W(A;c)$ is nonempty, then it is a compact convex set. \item When $c=e_k=(0,\ldots,0,1,0,\ldots,0)$, $W(A;c)=\Lambda_k(A)$. \item $W_c(A)=W(A; (c_{\sigma(1)}, \ldots, c_{\sigma(n)}))$ where $\sigma$ is a permutation such that $c_{\sigma(1)}\ge \ldots\ge c_{\sigma(n)}$. \item If $A$ is normal, then $W(A;c)$ is a polygonal disc. \item If $A$ is hermitian, then $W(A;c)$ is the real segment $\{x\ :\ \sum_{j=1}^n c_{n+1-j}\lambda_j(A)\le x\le \sum_{j=1}^n c_j\lambda_j(A)\}$ which can be empty. \item $W(\gamma A+\lambda I; c)=\gamma W(A;c)+\lambda(\sum_{j=1}^nc_j)$. \end{enumerate} \end{proposition} {\it Proof.} (1), (2), (3) and (6) follow directly from the definition. Suppose $A$ is normal with eigenvalues $u_j+iv_j$ for $j=1,\ldots, n$. The eigenvalues of $H_\theta(A)$ are therefore $u_j\cos\theta-v_j\sin\theta$, $j=1,\ldots, n$. We define $\sigma=\sigma(\theta)$ to be a permutation such that $u_{\sigma(j)}\cos\theta-v_{\sigma(j)}\sin\theta$, $j=1,\ldots,n$ are in decreasing order. The inequality $$\mbox{Re }e^{i\theta}(x+i y) \le c_1\lambda_1(H_\theta(A))+\cdots+c_n\lambda_n(H_\theta(A))$$ is therefore equivalent to $$x \cos\theta - y\sin\theta \le \sum_{j=1}^n c_{\sigma^{-1}(j)} (u_j\cos\theta-v_j\sin\theta).$$ If $\cos\theta>0$, it becomes $$x- \sum_{j=1}^n c_{\sigma^{-1}(j)} u_j \le (y-\sum_{j=1}^n c_{\sigma^{-1}(j)} v_j)\tan\theta.$$ If $\cos\theta<0$, it becomes $$x- \sum_{j=1}^n c_{\sigma^{-1}(j)} u_j \ge (y-\sum_{j=1}^n c_{\sigma^{-1}(j)} v_j)\tan\theta.$$ If $\theta=\pi/2$, it becomes $$y \ge \sum_{j=1}^n c_{\sigma^{-1}(j)} v_j.$$ If $\theta=-\pi/2$, it becomes $$y \le \sum_{j=1}^n c_{\sigma^{-1}(j)} v_j.$$ Therefore for each $\sigma$, the set of corresponding inequalities can be reduced to a set of at most four inequalities. That is, for $x+i y\in W(A;c)$, $x$ and $y$ need to satisfy at most $4n!$ inequalities. In other words, $W(A;c)$ is a polygonal disc with at most $4n!$ sides. Hence we have (4). Suppose $A$ is hermitian. Applying the proof of (4) and noting that only two permutations are relevant, we obtain (5). \qed \section{The $c$-values and $c$-polynomial} Let $c=(c_1,\ldots,c_n)^t\in \IR^n$ and $A\in M_n$.
Suppose $c_{i_1}, \ldots, c_{i_r}$ are all the nonzero entries. We define $$\lambda_{j_1,\ldots,j_r}(A;c)=c_{i_1}\lambda_{j_1}(A)+\cdots+c_{i_r}\lambda_{j_r}(A)$$ and call it a $c$-value of $A$. The $c$-polynomial of $A$, $p(A;c)(t)$, is the polynomial which has all the generically distinct $c$-values of $A$ as its roots. For example, if $c=(1,0,1,2)$ and $\alpha,\beta,\gamma,\delta$ are the eigenvalues of $A$, then $\alpha+\beta+2\gamma, \alpha+\beta+2\delta, \alpha+\gamma+2\beta, \alpha+\gamma+2\delta, \alpha+\delta+2\beta, \alpha+\delta+2\gamma, \beta+\gamma+2\alpha, \beta+\gamma+2\delta, \beta+\delta+2\alpha, \beta+\delta+2\gamma, \gamma+\delta+2\alpha, \gamma+\delta+2\beta$ are all the $c$-values of $A$ and $p(A;c)(t)$ is a polynomial of degree 12. We introduce two more pieces of notation. Let $r(A;c)(x,y,t)=p(xA+yA^*;c)(t)$, which is a polynomial in $x,y,t$ satisfying $r(A;c)(1,0,t)=p(A;c)(t)$. We also write $\deg(A;c)=\deg p(A;c)(t)$. Note that if $r=\deg (A;c)$ then the coefficient of $t^k$ in $r(A;c)=p(xA+yA^*;c)(t)$ is a symmetric function of the $c$-values of $xA+yA^*$, homogeneous of degree $r-k$ in $x$ and $y$; it is therefore a polynomial in the coefficients of the characteristic polynomial of $xA+yA^*$, and hence a polynomial in the entries of $xA+yA^*$. Therefore $r(A;c)$ is a homogeneous polynomial of degree $r$ in $x,y,t$ and $\deg(A;c)= \deg r(A;c)(x,y,t)$. Applying Bezout's Theorem, we obtain two lemmas. \begin{lemma} \label{lemma1} Let $A\in M_n$, $B\in M_m$, $c\in\IR^n$ and $d\in\IR^m$. If $r(A;c)$ and $r(B;d)$ have more than $\deg(A;c) \deg(B;d)$ common roots in the projective plane, then $r(A;c)$ and $r(B;d)$ have a common factor. Consequently there exists a $c$-value of $A$ which is also a $d$-value of $B$. \end{lemma} \begin{lemma} \label{lemma2} Let $A\in M_n$, $B\in M_m$, $c\in\IR^n$ and $d\in\IR^m$. If $r(A;c)$ and $r(B;d)$ have more than $\deg(A;c) \deg(B;d)$ common roots in the projective plane and $r(A;c)$ is irreducible, then $r(A;c)$ is a factor of $r(B;d)$. Consequently all the $c$-values of $A$ are $d$-values of $B$. \end{lemma} \section{Supporting Lines and Common Boundary Points} We need two technical lemmas for further discussion. The first lemma is trivial; it tells us when a supporting line is of a special form. \begin{lemma} \label{lemma3} A supporting line of $W(A;c)$ is of the form $$\left\{v\in \IC\ : \mbox{Re }e^{i\theta}v = c_1\lambda_1(H_\theta(A))+\cdots+c_n\lambda_n(H_\theta(A))\right\}$$ whenever \begin{enumerate} \item[(a)] the supporting line is tangent to the boundary of $W(A;c)$ at a differentiable point; or \item[(b)] $c_1\ge c_2\ge \cdots\ge c_n$. In other words, every supporting line of $W_c(A)$ is of this special form. \end{enumerate} \end{lemma} The second lemma states that if there are three common boundary points of two weighted numerical ranges, then there must be a common supporting line. \begin{lemma} \label{lemma4} Let $A\in M_n$, $B\in M_m$, $c\in\IR^n$ and $d\in\IR^m$. Suppose $z_1, z_2, z_3\in \partial W(A;c)\cap \partial W(B;d)$ are positioned along the boundary of $W(A;c)$ (or $W(B;d)$) in an anticlockwise way. Let $\omega_l=Arg(i\overline{z_{l+1}-z_{l}})$, $l=1,2$. Then there exists $\phi\in [\omega_2,\omega_1]$ (defined as $[\omega_2, 2\pi]\cup [0, \omega_1]$ if $\omega_1<\omega_2$) such that $$\sum_{j=1}^n c_j \lambda_j(H_\phi(A))=\sum_{k=1}^m d_k \lambda_k(H_\phi(B)).$$ In other words, there is a common supporting line in the direction which is between the perpendicular bisectors of $[z_1,z_2]$ and $[z_2,z_3]$. Furthermore $\phi$ can be chosen so that $\phi\in (\omega_2,\omega_1)$ or at least one of $[z_1,z_2]$ and $[z_2,z_3]$ is part of $\partial W(A;c)\cap \partial W(B;d)$.
\end{lemma} {\it Proof.} Let $L:\{z\ :\ \mbox{Re } e^{i\theta}z=\sum_{k=1}^m d_k \lambda_k(H_\theta(B))\}$ be a supporting line of $W(B;d)$ passing through $z_2$. Let $z_2-z_1=r e^{i(\pi/2-\omega_1)}$; then $$\mbox{Re }e^{i\theta}(z_2-z_1)=r\cos(\theta+\pi/2-\omega_1)=r\sin(\omega_1-\theta)$$ is nonnegative precisely when: if $0\le\omega_1\le \pi$, then $0\le \theta\le \omega_1$ or $\pi+\omega_1\le \theta\le 2\pi$; if $\pi<\omega_1\le 2\pi$, then $\omega_1-\pi \le \theta\le \omega_1$. Likewise $\mbox{Re }e^{i\theta}(z_2-z_3)$ is nonnegative precisely when: if $0\le\omega_2\le\pi$, then $\omega_2\le \theta\le \pi+\omega_2$; if $\pi<\omega_2\le 2\pi$, then $0\le \theta\le \omega_2-\pi$ or $\omega_2\le\theta\le 2\pi$. If $\omega_2\le \omega_1$ then $\omega_2\le\theta\le \omega_1$. If $\omega_2>\omega_1$ and $\theta>\omega_1$ then $\theta>\omega_1+\pi$ and hence $\theta>\omega_2$. Thus $\theta\in [\omega_2, \omega_1]$. Now $\sum_{j=1}^n c_j \lambda_j(H_\theta(A))\ge \sum_{k=1}^m d_k \lambda_k(H_\theta(B))$, for otherwise $z_2\notin W(A;c)$. Likewise, we can find $\varphi\in [\omega_2,\omega_1]$ such that $\sum_{j=1}^n c_j \lambda_j(H_\varphi(A))\le \sum_{k=1}^m d_k \lambda_k(H_\varphi(B))$. By the continuity of $\lambda_j(H_\theta(A))$ and $\lambda_k(H_\theta(B))$, there exists $\phi$ between $\theta$ and $\varphi$ such that $\sum_{j=1}^n c_j \lambda_j(H_\phi(A))=\sum_{k=1}^m d_k \lambda_k(H_\phi(B)).$ If $\phi=\omega_1$, then either $\phi=\omega_1=\theta$ or $\phi=\omega_1=\varphi$. In both cases, it implies that $[z_1,z_2]\subseteq \partial W(A;c)\cap \partial W(B;d)$. Similarly for the case $\phi=\omega_2$. \qed \section{Main Theorems} We are ready to state the main results. \begin{theorem} \label{main} Let $A\in M_n$, $B\in M_m$, $c\in\IR^n$ and $d\in\IR^m$. Suppose $$c_1\lambda_1(H_\theta(A))+\cdots+c_n\lambda_n(H_\theta(A))= d_1\lambda_1(H_\theta(B))+\cdots+d_m\lambda_m(H_\theta(B))$$ for $\deg(A;c) \deg(B;d)+1$ distinct values of $\theta$. Then there exists a $c$-value of $A$ which is also a $d$-value of $B$. Furthermore, if $r(A;c)$ is irreducible, then all the $c$-values of $A$ are $d$-values of $B$. \end{theorem} {\it Proof. }The condition $$r=c_1\lambda_1(H_\theta(A))+\cdots+c_n\lambda_n(H_\theta(A)) =d_1\lambda_1(H_\theta(B))+\cdots+d_m\lambda_m(H_\theta(B))$$ implies that a $c$-value of $H_\theta(A)$ is a $d$-value of $H_\theta(B)$, which in turn implies that $(e^{i\theta}, e^{-i\theta}, r)$ is a common root of $r(A;c)$ and $r(B;d)$. The result then follows from Lemma~\ref{lemma1} and Lemma~\ref{lemma2}. \qed Applying Lemma~\ref{lemma3} and Theorem~\ref{main}, we prove a theorem on common supporting lines. \begin{theorem} \label{thm1} Let $A\in M_n$, $B\in M_m$, $c\in\IR^n$ and $d\in\IR^m$. If $W(A;c)$ and $W(B;d)$ have more than $\deg(A;c) \deg(B;d)$ common supporting lines and the following two conditions are satisfied: \begin{enumerate} \item $W(A;c)=W_c(A)$, or each supporting line touches $\partial W(A;c)$ at a differentiable point; \item $W(B;d)=W_d(B)$, or each supporting line touches $\partial W(B;d)$ at a differentiable point, \end{enumerate} then there exists a $c$-value of $A$ which is also a $d$-value of $B$. Furthermore, if $r(A;c)$ is irreducible, then all the $c$-values of $A$ are $d$-values of $B$. \end{theorem} Applying Lemma~\ref{lemma4} and Theorem~\ref{main}, we prove a theorem on common boundary points. \begin{theorem} \label{thm2} Let $A\in M_n$, $B\in M_m$, $c\in\IR^n$ and $d\in\IR^m$.
Suppose there are $z_1,\ldots,z_k \in \partial W(A;c)\cap \partial W(B;d)$, where $k=\deg(A;c) \deg(B;d)+1$, such that the segments $[z_r,z_s]$ do not lie on $\partial W(A;c)\cap \partial W(B;d)$ for $r\ne s$. Then there exists a $c$-value of $A$ which is also a $d$-value of $B$. Furthermore, if $r(A;c)$ is irreducible, then all the $c$-values of $A$ are $d$-values of $B$. \end{theorem} {\it Proof.} Suppose the $k$ points are arranged in an anticlockwise manner and that $z_{k+1}=z_1, z_{k+2}=z_2$. By Lemma~\ref{lemma4}, $\{z_l, z_{l+1}, z_{l+2}\}$ defines an angle $\phi_l\in (\omega_{l+1}, \omega_{l})$, where $\omega_s=Arg(i\overline{z_{s+1}-z_{s}})$, such that $$\sum_{j=1}^n c_j \lambda_j(H_{\phi_l}(A))=\sum_{k=1}^m d_k \lambda_k(H_{\phi_l}(B)).$$ Note that those $k$ $\phi_l$'s are distinct. Therefore, by Theorem~\ref{main}, the result follows. \qed We have two simple consequences of Theorem~\ref{thm1}. \begin{corollary} \label{thm3} Let $A\in M_n$, $B\in M_m$, $c\in\IR^n$ and $d\in\IR^m$. Suppose there is a differentiable curve, which is not a straight line, lying on $\partial W(A;c) \cap \partial W(B;d)$. Then there exists a $c$-value of $A$ which is also a $d$-value of $B$. Furthermore, if $r(A;c)$ is irreducible, then all the $c$-values of $A$ are $d$-values of $B$. \end{corollary} \begin{corollary} \label{thm4} Let $A\in M_n$, $B\in M_m$, $c\in\IR^n$ and $d\in\IR^m$. If $W_c(A)=W_d(B)$ then there exists a $c$-value of $A$ which is also a $d$-value of $B$. \end{corollary} The next two corollaries relate to some old results \cite{GW1, GW2, Wu, CL, C}. \begin{corollary} \label{cor1} Let $A\in M_n$ and $c\in \IR^n$. If $\partial W(A;c)$ contains $2\deg(A;c)+1$ points on a circle centered at $\alpha$, then $\alpha$ is a $c$-value of $A$ with multiplicity greater than $1$. Consequently, if $\partial\Lambda_k(A)$ contains a circular arc centered at $\alpha$, then $\alpha$ is an eigenvalue of $A$ with multiplicity greater than $1$. \end{corollary} {\it Proof.} Let $B=\begin{pmatrix}\alpha&2R\\0&\alpha\end{pmatrix}$ where $R$ is the radius of the arc and let $d=(1,0)$. Apply Theorem~\ref{thm2} and note that $r(B;d)$ is irreducible. \qed \begin{corollary} \label{cor2} Let $A\in M_n$ and $c\in \IR^n$. If $\partial W(A;c)$ contains $2\deg(A;c)+1$ points on an ellipse, then the two foci of the ellipse along the main axis are $c$-values of $A$. Consequently, if $\partial\Lambda_k(A)$ contains an elliptical arc, then the two foci of the elliptical arc along the main axis are two eigenvalues of $A$. \end{corollary} {\it Proof.} Let $B=\begin{pmatrix}\alpha&R\\0&\beta\end{pmatrix}$ where $\alpha$ and $\beta$ are the foci of the ellipse and $R$ is a suitable number. Let $d=(1,0)$. Apply Theorem~\ref{thm2} and note that $r(B;d)$ is irreducible. \qed \begin{remark} The bound $\deg(A;c)\deg(B;d)+1$ is sharp. Let $A$ be the $n\times n$ diagonal matrix whose eigenvalues are the $n$th roots of unity and $B=\begin{pmatrix}0&2R\\0&0\end{pmatrix}$ where $R$ is slightly less than $1$; then $W(A)$ and $W(B)$ have exactly $2n$ common boundary points, but $A$ and $B$ have no common eigenvalues. \end{remark} We end this section with two known results with new proofs. The first result is on sharp points of $c$-numerical ranges. \begin{corollary} Let $A\in M_n$ and $c\in \IR^n$. If $W_c(A)$ has a sharp point $\alpha$ then $\alpha$ is a $c$-value of $A$. \end{corollary} {\it Proof.} Let $B=\alpha I_2$ and $d=(1,0)$. Apply Theorem~\ref{thm1}. \qed The second result, from \cite{Ta}, is related to \cite{LT} and to a follow-up question listed in \cite[Section 9]{Li}.
\begin{corollary} Consider $A\in M_n(\bC)$. If $W_c(A)$ is a circular disc centered at $0$ for every $c\in \bR^n$, then $A$ is nilpotent. \end{corollary} {\it Proof. } Let $a_1,\ldots,a_n$ be the eigenvalues of $A$. Suppose that not all $a_j$ are zero. Then there exists $c=(c_1,\ldots,c_n)^t\in\bR^n$ such that $c_1a_{\sigma(1)}+\cdots+c_na_{\sigma(n)}\ne 0$ for all permutations $\sigma$. Since $W_c(A)$ is a circular disc centered at $0$, we have $W_c(A)=W(\alpha E_{12})$ for some $\alpha\in\bR$. By Corollary~\ref{cor1}, there exists a permutation $\sigma$ such that $c_1a_{\sigma(1)}+\cdots+c_na_{\sigma(n)}=0$. We now have a contradiction. \qed \section{Open Questions} \begin{problem} What happens if there is a sharp point or a line segment on $\partial W(A;c)$? \end{problem} We know very little even for the rank $k$ numerical range. \begin{problem} Could we get any meaningful results if $\partial W(A;c)\cap \partial W(B;d)$ contains a line segment? \end{problem} Again, we know very little for the rank $k$ numerical range. \begin{problem} Suppose we know that $W(A;c)\subseteq W(B;d)$ and that $\partial W(A;c)\cap \partial W(B;d)$ contains sufficiently many points. Could we say more about the geometry of $W(A;c)$ and $W(B;d)$? \end{problem} Wu \cite{Wu} proved some nice results when $W(A)$ or $W(B)$ is a circular disc. Cheung and Li \cite{CL} generalized Wu's results to elliptical discs and the $k$-numerical range. Cheung \cite{C} extended Wu's results to the rank $k$ numerical range. We believe that there should be similar results for the weighted numerical range. \section*{Acknowledgement} I would like to thank C.K. Li and Karol \.{Z}yczkowski for their valuable suggestions. I would also like to thank the referee for pointing out a mistake in the first version of Lemma~\ref{lemma4} and other errors. I would also like to thank P.S. Lau for reminding me of \cite{Ta} after the paper was accepted by LAMA.
{ "timestamp": "2015-06-16T02:09:27", "yymm": "1403", "arxiv_id": "1403.3479", "language": "en", "url": "https://arxiv.org/abs/1403.3479", "abstract": "In this article, we are going to introduce the weighted numerical range which is a further generalization both the c-numerical range and the rank k numerical range. If the boundaries of weighted numerical ranges of two matrices (possibly of different sizes) overlap at sufficiently many points, then the two matrices share common generalized eigenvalues.", "subjects": "Functional Analysis (math.FA)", "title": "On the boundary of weighted numerical ranges", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9867771794142749, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.709092553790817 }
https://arxiv.org/abs/1101.2949
Perfect powers in elliptic divisibility sequences
It is shown that there are finitely many perfect powers in an elliptic divisibility sequence whose first term is divisible by 2 or 3. For Mordell curves the same conclusion is shown to hold if the first term is greater than 1. Examples of Mordell curves and families of congruent number curves are given with corresponding elliptic divisibility sequences having no perfect power terms. The proofs combine primitive divisor results with modular methods for Diophantine equations.
\section{Introduction} Using modular techniques inspired by the proof of Fermat's Last Theorem, it was finally shown in \cite{MR2215137} that the only perfect powers in the Fibonacci sequence are $1$, $8$ and $144$. The Fibonacci sequence is just one example of an infinite sequence $(h_m)$ of integers \[ \ldots ,h_{-2}, h_{-1}, h_0, h_1, h_2, \ldots \] satisfying $h_m|h_n$ whenever $m|n$ and, up to sign, \[ h_{m+n}h_{m-n}=h_{m+1}h_{m-1}h_n^2-h_{n+1}h_{n-1}h_m^2 \] for all $m,n \in \mathbb{Z}$, where $h_2h_3 \ne 0$. Gezer and Bizim \cite{MR2669714} have recently described the squares in some of these sequences but $(h_m)$ was first studied in general by Ward \cite{MR0023275} and is related to a Weierstrass equation \begin{equation} \label{we} y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6 \end{equation} with integer coefficients. See \cite{MR2514094, MR1171452} for background on Weierstrass equations and elliptic curves. The non-singular rational points on (\ref{we}) form a group $E_{ns}(\mathbb{Q})$ and for $P \in E_{ns}(\mathbb{Q})$ different from the identity we can write \begin{equation} \label{B_P} (x(P),y(P))=\left(\frac{A_P}{B_P^2}, \frac{C_P}{B_P^3} \right), \end{equation} where $A_P,B_P,C_P \in \mathbb{Z}$ and $\gcd(A_PC_P,B_P)=1$. Let $(h_m)$ be a sequence of integers as above with $h_0=0$ and $h_1=1$. Building on work of Ward, Shipsey \cite{Shi01} has given a formula for a Weierstrass equation (\ref{we}) such that $h_m=\psi_m(0,0)$, where $\psi_m$ is the $m$th division polynomial (see Section \ref{FH}) and $h_m=\pm B_{m(0,0)}$ if $\gcd(a_3,a_4)=1$. For example, up to sign, $(0,0)$ on \[ y^2+xy+y=x^3-2x^2 \] generates the Fibonacci sequence $(B_{m(0,0)})$. In \cite{MR0023275} Ward calls $(h_m)$ an elliptic divisibility sequence; however, as in the Fibonacci case, the Weierstrass equation for $(h_m)$ may have a singular point and so not define an elliptic curve. Via the work of Everest \cite{MR2164113, MR2045409}, Ingram \cite{MR2301226}, Silverman \cite{IngrSilv, MR961918} et al, it has now become conventional to use the following \begin{defn} \label{EDS} Let $E/\mathbb{Q}$ be an elliptic curve and (\ref{we}) a Weierstrass equation for $E$. Let $P \in E(\mathbb{Q})$ be a non-torsion point. For $m \in \mathbb{N}$ denote $B_{mP}$, as in (\ref{B_P}), by $B_m$. The sequence $(B_m)$ is an \emph{elliptic divisibility sequence}. \end{defn} In the current paper, we are interested in analogues for elliptic divisibility sequences (in the sense of Definition \ref{EDS}) of the result for Fibonacci numbers. There are certainly perfect powers in some elliptic divisibility sequences. For example, \[ E: y^2+xy=x^3+x^2-7x+5 \] with $P=(2,-3)$ gives $B_m=1$ for $m=1,2,3,4,7$ and $B_{12}=2^7$. However, the following theorem shows that one can often prove that there are only finitely many perfect powers in such sequences. \begin{thm} \label{2or3} Let $(B_m)$ be an elliptic divisibility sequence whose first term is divisible by $2$ or $3$. There are finitely many perfect powers in $(B_m)$. Moreover, if $B_m=z^l$ for some integer $z$ and prime $l$ then $l$ can be bounded explicitly in terms of $E$ and $P$. \end{thm} The proof of Theorem \ref{2or3} combines a recent Frey-Hellegouarch construction for Klein forms by Bennett and Dahmen \cite{BenDah} with a primitive divisor result due to Silverman \cite{MR961918}.
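As a sanity check, the terms $B_m$ of an elliptic divisibility sequence can be computed directly from the group law with exact rational arithmetic. The following Python sketch (ours; it simply implements the standard chord-and-tangent addition formulas for a Weierstrass equation) does this for the curve and point in the example above, so that the stated values, e.g. $B_{12}=2^7$, can be checked numerically.
\begin{verbatim}
from fractions import Fraction as F
from math import isqrt

# y^2 + a1*x*y + a3*y = x^3 + a2*x^2 + a4*x + a6
a1, a2, a3, a4, a6 = 1, 1, 0, -7, 5        # the curve y^2 + xy = x^3 + x^2 - 7x + 5
P = (F(2), F(-3))                          # the point P = (2, -3)

def add(P1, P2):
    """Chord-tangent addition on the Weierstrass equation; None is the identity."""
    if P1 is None:
        return P2
    if P2 is None:
        return P1
    x1, y1 = P1
    x2, y2 = P2
    if x1 == x2 and y1 + y2 + a1 * x2 + a3 == 0:
        return None
    if x1 != x2:
        lam = (y2 - y1) / (x2 - x1)
        nu = (y1 * x2 - y2 * x1) / (x2 - x1)
    else:
        lam = (3 * x1**2 + 2 * a2 * x1 + a4 - a1 * y1) / (2 * y1 + a1 * x1 + a3)
        nu = (-x1**3 + a4 * x1 + 2 * a6 - a3 * y1) / (2 * y1 + a1 * x1 + a3)
    x3 = lam**2 + a1 * lam - a2 - x1 - x2
    y3 = -(lam + a1) * x3 - nu - a3
    return (x3, y3)

def B(Q):
    """B_Q: the denominator of x(Q) in lowest terms is the perfect square B_Q^2."""
    return isqrt(Q[0].denominator)

Q = None
for m in range(1, 13):
    Q = add(Q, P)                          # Q = mP
    print(m, B(Q))
# The text above asserts B_m = 1 for m = 1, 2, 3, 4, 7 and B_12 = 2^7 = 128.
\end{verbatim}
Of course such a computation only examines individual terms; the content of the theorems above is that the set of perfect power terms can be controlled for all $m$ at once.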
The method of proof is so flexible that it also allows one in certain concrete cases to completely determine the set of all perfect power terms, as was done for the Fibonacci sequence (see Proposition \ref{D=11} and Example \ref{exa8} below). The restriction to divisibility by $2$ or $3$ in the first term is because higher primes, such as $5$, do not give a Klein form as in Definition \ref{K}. Let $E/\mathbb{Q}$ be an elliptic curve and (\ref{we}) a Weierstrass equation for $E$. Siegel \cite{Sie01} proved that there are finitely many $P \in E(\mathbb{Q})$ with $B_P=1$. In \cite{MR2365225} it is shown that for fixed $l>1$, there are finitely many $P \in E(\mathbb{Q})$ with $B_P=z^l$ for some $z \in \mathbb{Z}$. Since the denominators of such points are perfect powers, perhaps it is reasonable to give the following \begin{defn} Call $P \in E(\mathbb{Q})$ \emph{power integral} if $B_P$ (as in (\ref{B_P})) is equal to a perfect power. \end{defn} Note that $1$ is a perfect power and so power integral points can be thought of as a generalization of the integral points. A lot of work has been done to make Siegel's theorem effective \cite{MR0231783,MR0263742, MR735341, MR1328329, MR518817} and there are many techniques which can find all of the integral points for large classes of elliptic curves \cite{MR1305199, MR1713117, MR1291875, MR1986812}. For certain curves we are now able to find all of the power integral points. \subsection{Mordell curves} Theorem \ref{2or3} can be strengthened considerably for Mordell curves. \begin{thm} \label{Mor} Let $D \in \mathbb{Z}$ be non-zero and $E: y^2=x^3+D$. There are finitely many perfect powers in an elliptic divisibility sequence $(B_m)$ whose first term is greater than $1$. As in Theorem \ref{2or3}, there is a bound depending on $D$ and $P$ for the possible prime exponents. \end{thm} By utilizing the proofs of the theorems above in a specific case we are able to find the Mordell curve of smallest conductor with non-zero rank and no power integral points. \begin{prop} \label{D=11} The elliptic curve $E: y^2=x^3+11$ has no power integral points. \end{prop} In the general case, allowing for integral points, we expect the following to hold. \begin{con} \label{con1} Let $D \in \mathbb{Z}$ be non-zero and $E: y^2=x^3+D$. For $l$ a sufficiently large prime, if $B_P$ (as in (\ref{B_P})) is an $l$th power then $B_P=1$. \end{con} At the end of Section \ref{mdc} it is explained that Conjecture \ref{con1} would follow from the Frey-Mazur conjecture \cite{MR1357210}. \subsection{Congruent number curves} A much studied class of elliptic curves is the congruent number curves $E_N : y^2 = x^3-N^2x$, where $N \ge 1$ is an integer. Let $p$ be an odd prime and $a$, $b$ non-negative integers. For $N = 2^ap^b$, a simple algorithm for the determination of the integral points in $E_N(\mathbb{Q})$ has been given in \cite{MR2219047} and~\cite{MR2219040}. In this case we are able to find all power integral points in $2E_N(\mathbb{Q})$; in fact they are all integral. \begin{thm} \label{first} Let $N = 2^ap^b$. If $P \in 2E_N(\mathbb{Q})$ is power integral then $N=2^a3^b$ and $P=(25c^2,35c^3)$, where $a,b$ are odd, $a \ge 3$ and $c= \pm 2^{(a-3)/2}3^{(b-1)/2}$. \end{thm} In Section \ref{red} Theorem \ref{first} is proven using Fermat's Last Theorem, due to Wiles \cite{MR1333035}, along with the first variants by Ribet \cite{MR1438112}, Darmon and Merel~\cite{MR1468926}. \begin{thm} \label{second} Let $N=2^ap$, where $a=0$ or $1$.
Suppose that $P \in E_N(\mathbb{Q})$ has \[ x(P) \in -{\mathbb{Q}^*}^2 \] and \[ x(P)+N \in p{\mathbb{Q}^*}^2. \] Then there are no perfect powers in the elliptic divisibility sequence generated by $P$. \end{thm} Theorem \ref{second} is proven using Theorem \ref{first} along with an equation recently solved by Bennett, Ellenberg and Ng \cite{MR2646760}. \begin{exa} \label{exa8} There are no perfect powers in the elliptic divisibility sequence generated by $(-(60/41)^2,-455700/41^3)$ on $E_5:y^2=x^3-25x$. \end{exa} Let $N=2^ap$, where $a=0$ or $1$. Points belonging to two cosets in $E_N(\mathbb{Q})/2E_N(\mathbb{Q})$ have been considered above. The remaining cases lead to equations which currently appear unresolvable in general. As the next example shows, there can be power integral points on $E_N$ which are not integral. However, in the example $N$ is equal to the odd terms of a sequence $(C_m)$ and, since these odd terms form an elliptic divisibility sequence, there are conjectured to be finitely many possibilities with $N$ prime \cite{MR2548983}. This, along with Theorem~\ref{first} and Theorem \ref{second}, suggests that the number of power integral points in $E_N(\mathbb{Q})$ which are not integral could be uniformly bounded. \begin{exa} For $(-1,1)$ on $y^2=x^3-2x$ and $m$ odd write \[ m(-1,1)=\left(-\frac{A_m^2}{B_m^2}, \frac{A_mC_m}{B_m^3} \right). \] We get a power integral point on $E_{C_m}: y^2=x^3-C_m^2x$ given by $x(P)=-(C_mA_m)^2/B_m^4$. Moreover, $C_m$ is prime for $m=3,7$ and $23$. \end{exa} \section{Properties of elliptic divisibility sequences} In this section the required properties of elliptic divisibility sequences are collected. \begin{lem} \label{edsp} Let $(B_m)$ be an elliptic divisibility sequence. \begin{enumerate}[(i)] \item Let $p$ be a prime and $m_0$ be the smallest positive integer such that $p \mid B_{m_0}$. Then for every $m \in \mathbb{N}$, \[ p \mid B_m \iff m_0 \mid m. \] \item Let $p$ be an odd prime. For any pair $n,m \in \mathbb{N}$, if $\ord_p(B_n)>0$ then \[ \ord_p(B_{mn})=\ord_p(B_n)+\ord_p(m). \] \item For any pair $n,m \in \mathbb{N}$, if $2 \mid B_n$ then \[ \ord_2(B_{mn}) = \ord_2(B_n)+\ord_2(m) \] if $a_1$ is even and \[ \left| \ord_2(B_{mn}) -(\ord_2(B_n)+\ord_2(m)) \right| \le \epsilon \] otherwise, where the constant $\epsilon$ depends only on $E$ and $P$. \item For all $m,n \in \mathbb{N}$, \[ \gcd(B_m,B_n)=B_{\gcd(m,n)}. \] \end{enumerate} \end{lem} \begin{proof} See \cite{MR961918} and Section 4 in Chapter IV of \cite{Marcothesis}. \end{proof} \begin{rem} \rm Given a prime $p$, a rational point on (\ref{we}) can be reduced modulo $p$ to give a point defined over a finite field (see Chapter VII of \cite{MR2514094}). It follows that $m_0$ in Lemma~\ref{edsp} exists for every prime at which the generator of the elliptic divisibility sequence has non-singular reduction. \end{rem} \begin{lem} \label{three} Assume that the given Weierstrass equation for $E$ has $a_1$ even. For a prime $p$ suppose that $m_0=p$ in Lemma \ref{edsp}. Write $m=p^em'$ where $p \nmid m'$. If $B_m$ is an $l$th power then so is $B_{m'}$. Moreover, $p \nmid B_{m'}$. \end{lem} \begin{proof} By Lemma \ref{edsp}, if a prime $q$ divides $B_{m'}$ then $q \ne p$ and \[ \ord_q(B_m)=\ord_q(B_{m'})+\ord_q(p^e)=\ord_q(B_{m'}) \] so the result follows. \end{proof} \begin{defn} A prime $p | B_m$ such that $p \nmid B_{m'}$ for any $m' < m$ is called a \emph{primitive divisor}. 
\end{defn} \begin{thm}[Silverman] \label{prim} For all but finitely many $m \in \mathbb{N}$, $B_m$ has a primitive divisor. Moreover, if $B_m$ does not have a primitive divisor then $m$ is bounded by an effectively computable constant which depends only on $E$ and $P$. \end{thm} \begin{proof} See \cite{MR961918} or Chapter IV of \cite{Marcothesis}. \end{proof} \begin{rem} \rm For certain minimal Weierstrass equations the number of terms without a primitive divisor has been uniformly bounded (see \cite{MR2301226, MR2605536, IngrSilv}). \end{rem} \section{The modular approach to Diophantine equations} For a more thorough exploration see \cite{Sanderthesis} and Chapter 15 in \cite{MR2312338}. As is conventional, in what follows all newforms shall have weight $2$ with a trivial character at some level $N$ and shall be thought of as a $q$-expansion \[ f=q+\sum_{n \ge 2}c_nq^n, \] where the field $K_f=\mathbb{Q}(c_2,c_3,\cdots)$ is a totally real number field. The coefficients $c_n$ are algebraic integers and $f$ is called \emph{rational} if they all belong to $\mathbb{Z}$. For a given level $N$, the number of newforms is finite. The modular symbols algorithm \cite{MR1628193}, implemented on $\mathtt{MAGMA}$ \cite{MR1484478} by William Stein, shall be used to compute the newforms at a given level. \begin{thm}[Modularity Theorem] Let $E/\mathbb{Q}$ be an elliptic curve of conductor $N$. Then there exists a newform $f$ of level $N$ such that $a_p(E)=c_p$ for all primes $p \nmid N$, where $c_p$ is $p$th coefficient of $f$ and $a_p(E)=p+1-\#E(\mathbb{F}_p)$. \end{thm} \begin{proof} This is due to Taylor and Wiles \cite{MR1333036, MR1333035} in the semi-stable case. The proof was completed by Breuil, Conrad, Diamond and Taylor \cite{MR1839918}. \end{proof} The modularity of elliptic curves over $\mathbb{Q}$ can be seen as a converse to \begin{thm}[Eichler-Shimura] Let $f$ be a rational newform of level $N$. There exists an elliptic curve $E/\mathbb{Q}$ of conductor $N$ such that $a_p(E)=c_p$ for all primes $p \nmid N$, where $c_p$ is the $p$th coefficient of $f$ and $a_p(E)=p+1-\#E(\mathbb{F}_p)$. \end{thm} \begin{proof} See Chapter $8$ of \cite{MR2112196}. \end{proof} Given a rational newform of level $N$, the elliptic curves of conductor $N$ associated to it via the Eichler-Shimura theorem shall be computed using $\mathtt{MAGMA}$. \begin{prop} \label{levellow} Let $E/\mathbb{Q}$ be an elliptic curve with conductor $N$ and minimal discriminant $\Delta_{\min}$. Let $l$ be an odd prime and define \[ N_0(E,l):=N/\mathop{\prod_{{\textrm{primes } p \mid \mid N}}}_{l \mid \ord_p(\Delta_{\min})} p. \] Suppose that the Galois representation \[ \rho_l^E: \gal(\bar{\mathbb{Q}}/\mathbb{Q}) \to \aut(E[l]) \] is irreducible. Then there exists a newform $f$ of level $N_0(E,l)$. Also there exists a prime $\mathcal{L}$ lying above $l$ in the ring of integers $\mathcal{O}_f$ defined by the coefficients of $f$ such that \[ c_p \equiv \left\{ \begin{array}{ll} a_p(E) \mod \mathcal{L} & \textrm{ if } p \nmid lN, \\ \pm(1+p) \mod \mathcal{L} & \textrm{ if } p \mid \mid N \textrm{ and } p \nmid lN_0, \end{array} \right. \] where $c_p$ is the $p$th coefficient of $f$. Furthermore, if $\mathcal{O}_f=\mathbb{Z}$ then \[ c_p \equiv \left\{ \begin{array}{ll} a_p(E) \mod l & \textrm{ if } p \nmid N, \\ \pm(1+p) \mod l & \textrm{ if } p \mid \mid N \textrm{ and } p \nmid N_0. \end{array} \right. \] \end{prop} \begin{proof} This arose from combining modularity with level-lowering results by Ribet~\cite{MR1047143, MR1265566}. 
The strengthening in the case $\mathcal{O}_f=\mathbb{Z}$ is due to Kraus and Oesterl{\'e}~\cite{MR1166121}. A detailed exploration is given, for example, in Chapter 2 of \cite{Sanderthesis}. \end{proof} \begin{rem} \rm Let $E/\mathbb{Q}$ be an elliptic curve with conductor $N$. Note that the exponents of the primes in the factorization of $N$ are uniformly bounded (see Section 10 in Chapter IV of \cite{MR1312368}). In particular, only primes of bad reduction divide $N$ and if $E$ has multiplicative reduction at $p$ then $p \mid \mid N$. \end{rem} \begin{cor} \label{lbound} Keeping the notation of Proposition \ref{levellow}, if $p$ is a prime such that $p \nmid lN_0$ and $p \mid N$ then \[ l < (1+\sqrt{p})^{2[K_f: \mathbb{Q}]}. \] \end{cor} \begin{proof} See Theorem 37 in \cite{Sanderthesis}. \end{proof} Applying Proposition \ref{levellow} to carefully constructed Frey curves has led to the solution of many Diophantine problems. The most famous of these is Fermat's Last Theorem \cite{MR1333035} but there are now constructions for other equations and we shall make use of those described below. \subsection{Recipes for Diophantine equations with signature $(l,l,3)$} Consider the equation \[ Ax^l+By^l=Cz^3, \] with non-zero pairwise coprime terms and $l \ge 5$ prime. Assume any prime $q$ satisfies $\ord_q(A)<l$, $\ord_q(B)<l$ and $\ord_q(C)<3$. Without loss of generality also assume that $Ax \nequiv 0 \mod 3$ and $By^l \nequiv 2 \mod 3$. Construct the Frey curve \[ E_{x,y}: Y^2+3CzXY+C^2By^lY=X^3. \] \begin{thm}[Bennett, Vatsal and Yazdani \cite{BenDah}] \label{reci} The conductor $N_{x,y}$ of $E_{x,y}$ is given by \[ N_{x,y}=3^{\alpha}\rad_3(ABxy)\rad_3(C)^2, \] where \[ \alpha= \left\{ \begin{array}{ll} 2 & \textrm{if } \textrm{ } 9 \mid (2+C^2By^l-3Cz), \\ 3 & \textrm{if } \textrm{ } 3\mid \mid (2+C^2By^l-3Cz), \\ 4 & \textrm{if } \textrm{ } \ord_3(By^l)=1, \\ 3 & \textrm{if } \textrm{ } \ord_3(By^l)=2, \\ 0 & \textrm{if } \textrm{ } \ord_3(By^l)=3, \\ 1 & \textrm{if } \textrm{ } \ord_3(By^l) \ge 4, \\ 5 & \textrm{if } \textrm{ } 3 \mid C. \end{array} \right. \] Suppose that $E_{x,y}$ does not correspond to one of the equations \begin{eqnarray*} 1 \cdot 2^5+27 \cdot (-1)^5 &=& 5 \cdot 1^3, \\ 1 \cdot 2^7+3 \cdot (-1)^7 &=& 1 \cdot 5^3, \\ 2\cdot 1^2+27\cdot (-1)^5&=&25 \cdot (-1)^3, \textrm{ or } \\ 2 \cdot 1^7 + 3 \cdot (-1)^7&=&(-1)^3. \end{eqnarray*} Then there exists a newform of level \[ N_0=3^{\beta}\rad_3(AB)\rad_3(C)^2, \] where \[ \beta= \left\{ \begin{array}{ll} 2 & \textrm{if } \textrm{ } 9 \mid (2+C^2By^l-3Cz), \\ 3 & \textrm{if } \textrm{ } 3\mid \mid (2+C^2By^l-3Cz), \\ 4 & \textrm{if } \textrm{ } \ord_3(By^l)=1, \\ 3 & \textrm{if } \textrm{ } \ord_3(By^l)=2, \\ 0 & \textrm{if } \textrm{ } \ord_3(B)=3, \\ 1 & \textrm{if } \textrm{ } \ord_3(By^l)\ge 4 \textrm{ and } \ord_3(B) \ne 3, \\ 5 & \textrm{if } \textrm{ } 3 \mid C. \end{array} \right. \] \end{thm} \subsection{Frey-Hellegouarch curves for Klein forms} \label{FH} Let $E$ be an elliptic curve defined over $\mathbb{Q}$ with Weierstrass coordinate functions $x,y$. For any integer $n \in \mathbb{Z}$, the $n$th \emph{division polynomial} of $E$ is the polynomial $\psi_n \in \mathbb{Q}[x,y] \subset \mathbb{Q}(E)$ as given on p. 39 of \cite{MR1771549}.
In particular, \begin{eqnarray*} \psi_2^2&=&4x^3 + b_2x^2 + 2b_4x + b_6, \\ \psi_3&=&3x^4 + b_2x^3 + 3b_4x^2 + 3b_6x + b_8, \end{eqnarray*} $\psi_n^2 \in \mathbb{Q}[x]$ and there exists $\theta_n \in \mathbb{Q}[x]$ such that \begin{equation} \label{nQ} [n]x=\frac{\theta_n}{\psi_n^2}. \end{equation} \begin{defn} \label{K} Associate to $\psi_2^2(x)$ and $\psi_3(x)$ the homogeneous polynomials: \[ K^E_n(x,y)= \left\{ \begin{array}{ll} \psi_2^2(x/y)y^3 & \textrm{ for } n=2, \\ \psi_3(x/y)y^4 & \textrm{ for } n=3. \end{array} \right. \] \end{defn} The notion of a Klein form arose from Klein's classification \cite{MR0080930} of the finite subgroups of $\aut_{\bar{\mathbb{Q}}}(\mathbb{P}_1)$. For our purposes it is enough to note that any separable cubic binary form in $\mathbb{Q}[x,y]$ is a Klein form and that a separable quartic \[ \alpha_0x^4+\alpha_1x^3y+\alpha_2x^2y^2+\alpha_3xy^3+\alpha_4y^4 \in \mathbb{Q}[x,y] \] is a Klein form precisely when \begin{equation} \label{transvectant} 12\alpha_0\alpha_4-3\alpha_1\alpha_3+\alpha_2^2= 0. \end{equation} \begin{lem} Let $E$ be an elliptic curve defined over $\mathbb{Q}$. Then $K^E_2(x,y)$ and $K^E_3(x,y)$ are Klein forms. \end{lem} \begin{proof} Since the multiplication by $n$ map is separable (see Chapter III of \cite{MR2514094}), $K^E_n(x,y)$ is separable. A small calculation checks that the coefficients of $K^E_3(x,y)$ satisfy (\ref{transvectant}). \end{proof} For $S$ a fixed finite set of primes, let \[ \mathbb{Z}_S := \{ x \in \mathbb{Q} : \ord_p(x) \ge 0 \textrm{ for all } p \notin S \} \] and let $\mathbb{Z}_S^*$ be the set of units in $\mathbb{Z}_S$. Let $F$ be a Klein form of degree $k \in \{3,4,6,12\}$ ($k=3$ or $4$ is enough for our purposes). The \emph{index} of $F$ is $n=6-12/k$. Denote by $\Delta_F$ the discriminant of $F$ and let $S_F$ be the set of primes which divide $n\Delta_F$. In \cite{BenDah, Sanderthesis} Bennett and Dahmen construct a Frey-Hellegouarch curve for the Diophantine equation \begin{equation} \label{power} F(A,B)=uC^l, \end{equation} where $\gcd(A,B)=1$, $C \ne 0$, $l$ is prime and $u \in \mathbb{Z}_{S_F}^*$. Define \[ H(x,y)=\frac{1}{(k-1)^2} \left| \begin{array}{cc} F_{xx} & F_{xy} \\ F_{xy} & F_{yy} \end{array} \right| \] and the Jacobian determinant of $F$ and $H$ by \[ G(x,y)=\frac{1}{k-2} \left| \begin{array}{cc} F_{x} & F_{y} \\ H_{x} & H_{y} \end{array} \right|, \] where $F_x$, $F_y$, etc., refer to corresponding partial derivatives. Then \[ 4H(A,B)^3+G(A,B)^2=d_nF(A,B)^n, \] where $d_2=-27\Delta_F$ and $d_3=2^8\sqrt{-\Delta_F/27}$ are integers. So \[ E_{A,B}: Y^2=X^3+3H(A,B)X+G(A,B) \] has discriminant $-2^4 \cdot 3^3d_nF(A,B)^n$. \begin{prop}[\cite{BenDah}] \label{t} There exists $t \in \{\pm 1, \pm 3\}$ such that for all primes $p \notin S_F$ we have that the quadratic twist \[ E_{A,B}^{(t)}: Y^2=X^3+3H(A,B)t^2X+G(A,B)t^3 \] is semistable at $p$ and \[ \ord_p(\Delta_{min}(E_{A,B}^{(t)}))=n\ord_p(F(A,B)). \] \end{prop} \begin{proof} This is Proposition 4.2 of \cite{BenDah}. \end{proof} \begin{prop}[\cite{BenDah}] \label{newform} Let $l>163$ in (\ref{power}) and let $t$ be as in Proposition \ref{t}. Denote by $N_{A,B}$ the conductor of $E_{A,B}^{(t)}$. Then the Galois representation \[ \rho_l^{A,B}: \gal(\bar{\mathbb{Q}}/\mathbb{Q}) \to \aut(E_{A,B}^{(t)}[l]) \] is modular of level \begin{equation} \label{N_0} N_0=\prod_{p \mid n\Delta_F}p^{\ord_p(N_{A,B})}. \end{equation} In particular, there exists a newform $f$ of level $N_0$. \end{prop} \begin{proof} This is Proposition 8.1 in \cite{BenDah}.
\end{proof} \subsection{A similar Frey curve for cubic forms} The Frey curve already given in Section~\ref{FH} would suffice for our purposes; however, for ease of reference we give a construction from~\cite{MR2406491}. Let \[ F(x,y)=t_0x^3+t_1x^2y+t_2xy^2+t_3y^3 \in \mathbb{Z}[x,y] \] be a separable cubic binary form. In \cite{MR2406491} a Frey curve is given for the Diophantine equation \begin{equation} \label{cub} F(a,b)=dc^l, \end{equation} where $\gcd(a,b)=1$, $d \in \mathbb{Z}$ is fixed and $l \ge 7$ is prime. Define a Frey curve $E_{a,b}$ by \begin{equation} \label{BillFrey} E_{a,b}: y^2=x^3+a_2x^2+a_4x+a_6, \end{equation} where \begin{eqnarray*} a_2 &=& t_1a -t_2b, \\ a_4 &=& t_0t_2a^2 +(3t_0t_3 -t_1t_2)ab + t_1t_3b^2, \\ a_6 &=& t_0^2t_3a^3-t_0(t_2^2-2t_1t_3)a^2b+t_3(t_1^2-2t_0t_2)ab^2-t_0t_3^2b^3. \end{eqnarray*} Then $E_{a,b}$ has discriminant $16 \Delta_F F(a,b)^2$. Consider the Galois representation \[ \rho_l^{a,b} : \gal(\bar{\mathbb{Q}}/\mathbb{Q}) \to \aut(E_{a,b}[l]). \] \begin{thm}[\cite{MR2406491}] \label{Bill} Let $S$ be the set of primes dividing $2d\Delta_F$. There exists a constant $\alpha(d,F) \ge 0$ such that if $l>\alpha(d,F)$ and $c \ne \pm 1$ then: \begin{itemize} \item the representation $\rho_l^{a,b}$ is irreducible; \item at any prime $p \notin S$ dividing $F(a,b)$ the equation (\ref{BillFrey}) is minimal, the elliptic curve $E_{a,b}$ has multiplicative reduction and $l \mid \ord_p(\Delta_{min}(E_{a,b}))$. \end{itemize} \end{thm} \begin{proof} This is Theorem 2.3 and Lemma 2.4 in \cite{MR2406491}. \end{proof} \section{Proof of Theorem \ref{2or3}} \begin{proof}[Proof of Theorem \ref{2or3}] Let $n=2$ or $3$ and let $S$ be the set of primes dividing $n \Delta_E$. Assume that $B_m$ is an $l$th power. Note that by Theorem 1.1 in \cite{MR2365225} it is enough to bound $l$ in terms of $E$ and $P$. To do this we shall derive an equation of the form (\ref{power}) and prove the existence of a prime divisor $p_0$ to which Corollary \ref{lbound} can be applied. Using Theorem \ref{prim}, fix $e_0 \ge 1$ such that \begin{itemize} \item $B_{n^{e_0}}$ is divisible by a prime $p_0 \nmid n\Delta_E$, \item $p_0 \nmid B_{n^e}$ for all $0 \le e <e_0$. \end{itemize} Note that $e_0$ does not depend on $m$. From Lemma \ref{edsp}, since $\ord_n(B_1)>0$, \[ \ord_n(B_{m}) -(\ord_n(B_1)+\ord_n(m)) = O(1). \] Hence, since $l \mid \ord_n(B_{m})$, we can assume that $l$ is large enough so that \begin{equation} \label{e_0} \ord_n(m) \ge e_0. \end{equation} For $Q \in E(\mathbb{Q})$, using (\ref{nQ}) gives \begin{equation} \label{2Q} \frac{A_{nQ}}{B_{nQ}^2}=\frac{\theta_n(A_Q/B_Q^2)}{\psi_n^2(A_Q/B_Q^2)}= \frac{B_Q^{2n^2}\theta_n(A_Q/B_Q^2)}{B_Q^2 \psi_n^2(A_Q/B_Q^2)B_Q^{2(n^2-1)}}, \end{equation} where \[ \psi_n^2(A_Q/B_Q^2)B_Q^{2(n^2-1)}=\left\{ \begin{array}{ll} K_2^E(A_Q,B_Q^2) & \textrm{ if } n=2, \\ (K_3^E(A_Q,B_Q^2))^2 & \textrm{ if } n=3 \end{array} \right. \] (see Definition \ref{K}). \begin{comment} \frac{A_Q^4-b_4A_Q^2B_Q^4-2b_6A_QB_Q^6-b_8B_Q^8} {B_Q^2(4A_Q^3+b_2A_Q^2B_Q^2+2b_4A_QB_Q^4+b_6B_Q^6)} \end{comment} Since $\theta_n$ is monic and the leading coefficient of $\psi_n^2$ is $n^2$, $B_Q$ is coprime with the numerator of (\ref{2Q}) and if $B_{nQ}$ is an $l$th power then $B_Q$ is a power of $n$ multiplied by an $l$th power. Write $m=n^{\ord_n(m)}m'$ with $n \nmid m'$. From (\ref{e_0}) it follows that $B_{n^{e_0}m'}$ is a power of $n$ multiplied by an $l$th power. Write $Q=n^{e_0-1}m'P$; then $n^{e_0}m'P=nQ$.
The primes which divide both the numerator and the denominator of (\ref{2Q}) also divide the discriminant $\Delta_E$ (see~\cite{MR1185022}). So \begin{equation} \label{K_n} K_n^E(A_Q,B_Q^2)=uC^l, \end{equation} where $u \in \mathbb{Z}_S^*$. Moreover, $p_0 \nmid B_Q$ (since $\gcd(B_Q,B_{n^{e_0}})=B_{n^{e_0-1}}$) and so $C \in \mathbb{Z}$ is divisible by $p_0$. In characteristic away from $n$ the multiplication by $n$ map is separable (see Chapter III of \cite{MR2514094}) so the set of primes which divide the discriminant of $K_n^E$ is equal to $S$. Applying Proposition \ref{newform} shows that there exists a newform $f$ of level $N_0$ (as in (\ref{N_0})). It follows that there are finitely many choices for $f$. We have $p_0 \nmid lN_0$ and $p_0 \mid N_{A_Q,B_Q}$ (see Proposition \ref{t}) so Corollary \ref{lbound} bounds $l$. \end{proof} \begin{rem} \rm Note that $K_n^E(A_Q,B_Q^2)$, as in (\ref{K_n}), does not belong to $\mathbb{Z}_S^*$ so Theorem 8.1 in~\cite{BenDah} along with Silverman's primitive divisor theorem proves the existence of an effectively computable bound for $l$ which depends only on $E$ and $P$. However, keeping in mind that $p_0 \nmid lN_0$, in practice a much better bound is obtained by computing the newforms at level $N_0$ and applying Proposition \ref{levellow} directly. \end{rem} \begin{rem} \label{Sext} \rm Let $S$ be a finite set of fixed primes and let $(B_m)$ be an elliptic divisibility sequence whose first term is divisible by $2$ or $3$. The results in Section \ref{FH} hold with the primes in $S$ added to $S_F$. Using this the proof above can be extended to show that there are finitely many terms in $(B_m)$ equal to a perfect power multiplied by an $S$-unit. \end{rem} \section{The Mordell curves $y^2=x^3+D$} \label{mdc} \begin{proof}[Proof of Theorem \ref{Mor}] Write $D=d^2D'$, where $D'$ is square free. Suppose that $P \in E(\mathbb{Q})$ with $x(P) \ne 0$ and $B_P=z^l$ for some prime $l$. Factorizing over $K=\mathbb{Q}(\sqrt{D'})$, \[ A_P^3=C_P^2-Dz^{6l}=(C_P+d\sqrt{D'}z^{3l})(C_P-d\sqrt{D'}z^{3l}). \] If $D'=1$ then $C_P+dz^{3l}=ua^3$ and $C_P-dz^{3l}=vb^3$, where $a,b \in \mathbb{Z}$ are coprime, $u,v$ divide $2d$ and $uv$ is a cube. Subtracting the two factors gives \begin{equation} \label{D'=1} 2dz^{3l}=ua^3-vb^3. \end{equation} In general the ring $\mathcal{O}_{KT}$ is a principal ideal domain for some finite set $T$ of prime ideals. Include in $T$ the primes in $\mathcal{O}_K$ dividing $2d\sqrt{D'}$. Using Dirichlet's unit theorem, $\mathcal{O}_{KT}^*/{\mathcal{O}_{KT}^*}^3$ is a finite set. Hence, if $D' \ne 1$ then \[ C_P+d\sqrt{D'}z^{3l}=(u+\sqrt{D'}v)(a+b\sqrt{D'})^3, \] where $a,b \in \mathbb{Z}$ are coprime and there are finitely many choices for $u,v \in \mathbb{Q}$. Subtracting the two conjugate factors gives \begin{equation} \label{1} dz^{3l}=va^3+3ua^2b+3D'vab^2+uD'b^3. \end{equation} Now suppose that $(B_m)$ is an elliptic divisibility sequence generated by a point on $E$. Multiplying through by the denominators of $u,v$ in (\ref{D'=1}) or (\ref{1}) gives an equation \[ F(a,b)=dc^l \] as in (\ref{cub}) with $c^l=B_m^3$. Note that $u$ and $v$ are non-zero in (\ref{D'=1}) and at least one of $u,v$ is non-zero in (\ref{1}); it follows that the cubic forms considered are separable. Construct a Frey curve $E_{a,b}$ as in (\ref{BillFrey}). Let $S$ be the set of primes dividing $2d\Delta_F$. Assume that $n \mid B_1$ and $n>1$ is prime.
Using the Siegel-Mahler theorem on the finiteness of $S$-integral points on elliptic curves, fix $e_0 \ge 1$ such that $B_{n^{e_0}}$ is divisible by a prime $p_0 \notin S$. Note that $e_0$ does not depend on $m$. From Lemma \ref{edsp}, since $\ord_n(B_1)>0$, \[ \ord_n(B_{m}) -(\ord_n(B_1)+\ord_n(m)) = O(1). \] Hence, since $l \mid \ord_n(B_{m})$, we can assume $l$ is large enough so that \[ \ord_n(m) \ge e_0. \] Then $B_{n^{e_0}} \mid B_m$ and, in particular, $p_0 \mid B_m$. Applying Theorem \ref{Bill} and Proposition \ref{levellow} with $p=p_0$ gives that $l$ is bounded. (Note that $p_0$ divides the conductor of the Frey curve but not the level of the newform.) The finiteness claim follows from Theorem 1.1 in \cite{MR2365225}. \end{proof} Let $D \in \mathbb{Z}$ be square-free. For $P \in 2E(\mathbb{Q})$ write $P=2Q$. Using the duplication formula, \begin{equation} \label{2Q2} \frac{A_P}{B_P^2}=\frac{A_Q(A_Q^3-8DB_Q^6)}{4B_Q^2(A_Q^3+DB_Q^6)}=\frac{A_Q(A_Q^3-8DB_Q^6)}{4B_Q^2C_Q^2}. \end{equation} Any prime dividing $C_Q$ and $A_Q^3-8DB_Q^6$ also divides $3D$. Suppose that $B_P$ is an $l$th power and that $B_Q$ is even. Since $D$ is square-free, $\gcd(A_Q,C_Q)=1$ so only $3$ can divide both $C_Q$ and the numerator of (\ref{2Q2}). If $\gcd(3,C_Q)=1$ then $C_Q$ and $2B_Q$ must be $l$th powers. \begin{proof}[Proof of Proposition \ref{D=11}] Note that $E(\mathbb{Q})=\left< (-7/4,19/8) \right>$. Let $P=m(-7/4,19/8)$ for some $m \ge 1$ and denote $B_P$, as in (\ref{B_P}), by $B_m$. Assume that $B_P$ is an $l$th power. Using Lemma \ref{three} we can assume that $3 \nmid B_P$ and $3 \nmid m$. From Lemma \ref{edsp}, \begin{equation} \label{ord} \ord_2(B_m)=\ord_2(B_1)+\ord_2(m)=1+\ord_2(m) \ge l \end{equation} so $m$ is even. Thus $P=2Q$ for some $Q \in E(\mathbb{Q})$. By (\ref{2Q2}) it follows that $C_Q$ and $2B_Q$ are $l$th powers. \begin{lem} \label{tec} If $l>2$ then $13, 19$ and $619$ divide $B_Q$. Also $7 \mid A_Q$ but $7 \nmid B_QC_Q$. \end{lem} \begin{proof} If $l>2$ then (\ref{ord}) gives that $m=4m'$ thus $Q=2m'(-7/4,19/8)$ for some $m' \ge 1$ so $B_2 \mid B_Q$ and, in particular, $19 \mid B_Q$. Using Lemma \ref{edsp} again, \[ \ord_{19}(B_Q)=\ord_{19}(B_2)+\ord_{19}(m')=1+\ord_{19}(m')=l \] so $\ord_{19}(m')>0$ and, in particular, $13 \mid B_Q$. Similarly, $B_{13} \mid B_Q$ so $619 \mid B_Q$. Since $7 \mid B_3$ and $\gcd(B_P,B_3)=2$, we have that $7 \nmid B_P$ so, from (\ref{2Q2}), $7 \nmid B_QC_Q$. Reducing the equation $C_Q^2-11B_Q^6=A_Q^3$ modulo $7$ shows that $A_Q \equiv 0 \mod 7$. \end{proof} Assume that $l \ge 5$. Consider the $(l,l,3)$ triple given by \[ C_Q^2-11B_Q^6=A_Q^3, \] where the three terms are pairwise coprime. As in Theorem \ref{reci} construct a Frey curve \[ E_Q: Y^2+3A_QXY-11B_Q^6Y=X^3 \] with conductor $N_Q=3^{\alpha} \cdot 11\rad_3(C_QB_Q)$, where $\alpha=2$ or $3$. The Galois representation $\rho_l^{E_Q}: \gal(\bar{\mathbb{Q}}/\mathbb{Q}) \to \aut (E_Q[l])$ arises from a cuspidal newform $f$ of weight $2$ and level $N_0=2\cdot 3^{\alpha} \cdot 11$.
This newform is one of \begin{eqnarray*} f_1 &=& q-q^2+q^4+2q^7-q^8+q^{11}+\cdots, \\ f_2 &=& q-q^2+q^4+4q^5-2q^7-q^8-4q^{10}-q^{11} + \cdots, \\ f_3 &=& q - q^2 + q^4 - 2q^5 - 4q^7 - q^8 + 2q^{10} + q^{11} + \cdots, \\ f_4 &=& q + q^2 + q^4 + 2q^7 + q^8 - q^{11} + \cdots, \\ f_5 &=& q + q^2 + q^4 + 2q^7 + q^8 + q^{11}+ \cdots, \end{eqnarray*} for $\alpha=2$ or (up to conjugacy) one of \begin{eqnarray*} f_6 &=& q - q^2 + q^4 - 2q^5 + q^7 - q^8 + 2q^{10} - q^{11} + \cdots \\ f_7 &=& q - q^2 + q^4 + q^5 + 4q^7 - q^8 - q^{10} - q^{11} + \cdots \\ f_8 &=& q - q^2 + q^4 - 3q^5 - 4q^7 - q^8 + 3q^{10} - q^{11} + \cdots \\ f_9 &=& q - q^2 + q^4 - 2q^5 - q^7 - q^8 + 2q^{10} + q^{11} + \cdots \\ f_{10} &=& q + q^2 + q^4 + 2q^5 - q^7 + q^8 + 2q^{10} - q^{11} + \cdots \\ f_{11} &=& q + q^2 + q^4 - q^5 + 4q^7 + q^8 - q^{10} + q^{11} + \cdots \\ f_{12} &=& q + q^2 + q^4 + 2q^5 + q^7 + q^8 + 2q^{10} + q^{11} + \cdots \\ f_{13} &=& q + q^2 + q^4 + 3q^5 - 4q^7 + q^8 + 3q^{10} + q^{11} + \cdots \\ f_{14} &=& q - q^2 + q^4 + \theta q^5 + 2q^7 - q^8 - \theta q^{10} + q^{11} + \cdots \\ f_{15} &=& q + q^2 + q^4 + \theta q^5 + 2q^7 + q^8 + \theta q^{10} - q^{11} + \cdots \end{eqnarray*} for $\alpha=3$, where the last two are defined over a quadratic number field. Applying Proposition \ref{levellow} with $p=13$, $19$, $619$ gives $l=5$ if $f=f_2$ and $l<5$ (a contradiction) otherwise. If $f=f_2$ then applying Proposition 4.2 in \cite{MR2098394} with $p=5$ gives a contradiction; note that $5$ is a prime of good reduction and $f_2$ is rational so the restriction $l \ne 5$ in the Proposition can be removed. To eliminate the possibilities of $l=2$ or $3$ consider the parameterizations given in (\ref{1}) with $D=11$. Then $K=\mathbb{Q}(\sqrt{11})$ and $\mathcal{O}_K=\mathbb{Z}[\sqrt{11}]$ is a principal ideal domain with fundamental unit $10+3\sqrt{11}$. Also $2\sqrt{11}=(10-3\sqrt{11})\sqrt{11}(3+\sqrt{11})^2$. It follows that $(u,v)=(1,0), (10,3), (10,-3), (199,60)$ or $(199,-60)$. If $(u,v)=(1,0)$ then $C_Q=a(a^2+33b^2)$ and $B_Q^3=b(3a^2+11b^2)$, where $b$ is even (since $4 \mid B_Q^3$). Since $33 \nmid C_Q$, $a$ is an $l$th power. Write $a=C^l$. Since $2 \mid b$ and $3 \nmid b$, $b=2^{3(l-1)}B^{3l}$. Write $a^2+33b^2=\bar{C}$ and $3a^2+11b^2=\bar{B}$. Then \begin{eqnarray} 3C^{2l}+2^{6(l-1)}11B^{6l}&=&\bar{B}^{3l}, \\ C^{2l}+2^{6(l-1)}33B^{6l}&=&\bar{C}^l, \\ \bar{B}^{3l}-3\bar{C}^l&=&2^{6l-3}11B^{6l}, \label{j3} \\ \bar{C}^l-3\bar{B}^{3l} &=&-8C^{2l}, \label{j4} \end{eqnarray} where the terms in each of the ternary equations are nonzero and pairwise coprime. If $l=2$ then (\ref{j3}) becomes $-3\bar{C}^2+(\bar{B}^2)^3-2^311(2B^2)^6=0$ and Proposition 6.5.9. in \cite{MR2312337} gives a rational point on the elliptic curve given by $Y^2=X^3-2376$, but there are no such points. If $l=3$ then (\ref{j4}) becomes $\bar{C}^3+3(-\bar{B}^3)^3+(2C^2)^3=0$ and Proposition 6.4.14 in \cite{MR2312337} gives a rational point with non-zero coordinates on the elliptic curve given by $Y^2=X^3+144$, but there are no such points. For the other parameterizations details are given only for $(u,v)=(10,3)$. The other cases are similar. Assume that $(u,v)=(10,3)$. Then $A_Q=a^2-11b^2$, \[ B_Q^3=3a^3+30a^2b+99ab^2+110b^3 \] and \[ C_Q=10a^3+99a^2b+330ab^2+363b^3. \] Suppose that $l=2$. Since $C_Q$ and $2B_Q$ are squares, multiplying the two expressions gives a rational point on the hyperelliptic curve \[ F: Y^2=60X^6 + 1194X^5 + 9900X^4 + 43780X^3+108900X^2+144474X + 79860. 
\] But computations implemented in $\mathtt{MAGMA}$ confirm that the Jacobian of $F$ has rank $0$ and, via the method of Chabauty, $F(\mathbb{Q})$ is empty. Finally, suppose that $l=3$. By Lemma \ref{tec}, $A_Q \equiv 0 \mod 7$. Hence, $a/b \equiv 2 \textrm{ or } 5 \mod 7$. Substituting these in the parametrization of $B_Q^3$ shows that $a/b \equiv 5 \mod 7$, but this cannot be a solution if $C_Q$ is a cube. This completes the proof of Proposition \ref{D=11}. \end{proof} \begin{comment} If $(u,v)=(10,-3)$ \[ B_Q^3=-3a^3 + 30a^2b - 99ab^2 + 110b^3 \] and \[ C_Q=10a^3 - 99a^2b + 330ab^2 - 363b^3. \] Suppose that $l=2$. Since $C_Q$ and $2B_Q$ are squares, multiplying the two expressions gives a rational point on the hyperelliptic curve \[ F: Y^2=-60x^6 + 1194x^5 - 9900x^4 + 43780x^3 - 108900x^2+144474x - 79860 \] But computations implemented in $\mathtt{MAGMA}$ confirm that the Jacobian of $F$ has rank $0$ and, via the method of Chabauty, $F(\mathbb{Q})$ is empty. If $(u,v)=(199,-60)$ then $A_Q=a^2-11b^2$, \[ B_Q^3=-60a^3 + 597a^2b - 1980ab^2+2189b^3 \] and \[ C_Q=199a^3 - 1980a^2b + 6567ab^2 - 7260b^3 \] \[ Y^2=23880X^6 + 475206X^5 + 3940200X^4 + 17424220X^3+43342200X^2 + 57499926X + 31784280 \] \end{comment} By (\ref{D'=1}) and (\ref{1}) we see that Conjecture \ref{con1} would follow from \begin{con}[\cite{MR2406491}] \label{2.1} Let $F$ be a separable homogeneous cubic binary form with integer coefficients, $d$ a fixed integer $\ge 1$ and $l$ a prime number. There exists a constant $C_{d,F}>0$ depending only on $d$ and $F$ such that if $l>C_{d,F}$ and \[ F(a,b)=dc^l \] with $\gcd(a,b)=1$ then $c=\pm 1$. \end{con} In \cite{MR2406491} it is explained that Conjecture \ref{2.1} would follow from the Frey-Mazur conjecture. \begin{rem} \rm A more direct Frey curve for $C_P^2=A_P^3+DB_P^6$ with $B_P$ an $l$th power is \[ E_P: Y^2=X^3-3A_PX+2C_P. \] However, parametrizing as above highlights the connection with cubic binary forms and, as in the proof of Proposition \ref{D=11}, helps resolve specific cases. \end{rem} \section{The congruent number curves $y^2=x^3-(2^ap^b)^2x$} \label{red} Let $E_N$ be the elliptic curve given by $y^2=x^3-N^2x$, where $N$ is a congruent number. For a non-torsion point $P \in E_N(\mathbb{Q})$ there exist non-zero integers $z_1,z_2,z_3$ so that \begin{eqnarray*} A_P&=&\alpha_1 z_1^2, \\ A_P+NB_P^2&=&\alpha_2 z_2^2, \\ A_P-NB_P^2&=&\alpha_3 z_3^2, \end{eqnarray*} where the $\alpha_i$ are square free. Note that $\alpha_1 \mid N$, $\alpha_2 \mid 2N$ and $\gcd(z_1^2,z_2^2) \mid N$. So, in particular, $\gcd(z_1,z_2)=1$ if $N$ is square free. We have \begin{eqnarray} \alpha_2z_2^2-\alpha_1z_1^2 &=& NB_P^2; \label{e1} \\ \alpha_1z_1^2-\alpha_3z_3^2 &=& NB_P^2; \label{e3} \\ \alpha_2z_2^2-\alpha_3z_3^2 &=& 2NB_P^2; \label{e4} \\ 2\alpha_1z_1^2-\alpha_2z_2^2&=&\alpha_3z_3^2. \label{e2} \end{eqnarray} \begin{thm} Suppose that $l$ is an odd prime, $r$ is a non-negative integer and $U,V,W$ are non-zero pairwise coprime integers with \begin{equation} \label{lem} U^l+2^rV^l+W^l=0. \end{equation} Then $r=1$ and $(U,V,W)=\pm(-1,1,-1)$. \end{thm} \begin{proof} The result is due to Wiles \cite{MR1333035} for $r=0$, Ribet \cite{MR1438112} for $r \ge 2$, and Darmon and Merel \cite{MR1468926} for $r=1$. \end{proof} \begin{lem} Suppose that $r$ is a non-negative integer and $U,V,W$ are non-zero pairwise coprime integers. If \begin{equation} \label{lem2} U^4-2^rV^4+W^4=0 \end{equation} then $r=1$ and $|U|=|V|=|W|=1$.
There are no solutions to the equation \begin{equation} \label{lem3} 2^rU^4-V^4+W^4=0. \end{equation} \end{lem} \begin{proof} See, for example, Section~6.5 of \cite{MR2312337}. \end{proof} \begin{proof}[Proof of Theorem \ref{first}] The only torsion points in $E_N(\mathbb{Q})$ are $2$-torsion. If $b$ is even then the rank of $E_N(\mathbb{Q})$ is zero (since it is zero when $b=0$), so assume that $b$ is odd. Assume that $P \in 2E_N(\mathbb{Q})$ is non-zero. The fundamental 2-descent map (see, for example, Section 8.2.3 in \cite{MR2312337}) shows that: \begin{eqnarray*} A_P&=&z_1^2; \\ A_P-2^ap^bB_P^2&=&z_2^2; \\ A_P+2^ap^bB_P^2&=&z_3^2. \end{eqnarray*} Suppose that $p$ divides $A_P$ exactly $e$ times. Then $e$ is even, $e<b$, and, by replacing $A_P$ by $A_P/p^e$ and $b$ by $b-e$, we can assume that $p$ does not divide $A_P$. Equations (\ref{e3})-(\ref{e2}) become: \begin{eqnarray*} -2^ap^bB_P^2&=&(z_2-z_1)(z_2+z_1); \\ 2^ap^bB_P^2&=&(z_3-z_1)(z_3+z_1); \\ 2^{a+1}p^bB_P^2&=&(z_3-z_2)(z_3+z_2). \end{eqnarray*} Now $\gcd(z_j-z_i,z_j+z_i)$ divides $2z_j$ and $2^{a+1}p^bB_P^2$, so is a power of $2$. Suppose that $B_P$ is a perfect power. Now $p$ divides $z_2+(-1)^{s_1}z_1$ and $z_3+(-1)^{s_2}z_1$, where $s_1,s_2 \in \{ 0,1 \}$. So $p$ divides $z_3+(-1)^{s_3}z_2$, where $s_3=s_1+s_2+1$. \it{Siegel's identity} \rm: \[ (-1)^{s_3+1}\frac{z_2+(-1)^{s_1}z_1}{z_3+(-1)^{s_3}z_2}-\frac{z_3+(-1)^{s_2}z_1}{z_3+(-1)^{s_3}z_2}+1=0 \] gives (\ref{lem}), (\ref{lem2}) or (\ref{lem3}). Thus \[ (-1)^{s_3+1}\frac{z_2+(-1)^{s_1}z_1}{z_3+(-1)^{s_3}z_2}=u \textrm{ and } -\frac{z_3+(-1)^{s_2}z_1}{z_3+(-1)^{s_3}z_2}=v, \] where $(u,v)=(1,-2),(-2,1)$ or $(-\frac{1}{2},-\frac{1}{2})$. So \[ -2^ap^bB_P^2=(-1)^{s_3+1}u(z_2+(-1)^{s_1+1}z_1)(z_3+(-1)^{s_3}z_2) \] and \[ 2^ap^bB_P^2=-v(z_3+(-1)^{s_2+1}z_1)(z_3+(-1)^{s_3}z_2). \] Dividing the two equations gives \[ \frac{u}{v}=(-1)^{s_3+1}\frac{z_3+(-1)^{s_2+1}z_1}{z_2+(-1)^{s_1+1}z_1}. \] But \[ \frac{z_3+(-1)^{s_3}z_2}{z_2+(-1)^{s_1+1}z_1}-\frac{z_3+(-1)^{s_2+1}z_1}{z_2+(-1)^{s_1+1}z_1}=(-1)^{s_3}, \] thus $u \ne v$, \[ \frac{z_2+(-1)^{s_1+1}z_1}{z_3+(-1)^{s_3}z_2}=(-1)^{s_3}\frac{v}{v-u}, \] and \[ \left( \frac{z_2+(-1)^{s_1+1}z_1}{z_3+(-1)^{s_3}z_2} \right) \left( \frac{z_2+(-1)^{s_1}z_1}{z_3+(-1)^{s_3}z_2} \right)=\frac{uv}{u-v}= \frac{-2^ap^bB_P^2}{(z_3+(-1)^{s_3}z_2)^2}. \] So \[ \frac{-uv}{2^ap^b(u-v)} \] is a square. Hence $(u,v)=(1,-2)$, $p=3$, $a$ is odd, $B_P=1$ and \[ (z_3+(-1)^{s_3}z_2)^2=2^{a-1}3^{b+1}. \] Thus \[ z_3=\frac{1}{2}(z_3+(-1)^{s_3}z_2+z_3+(-1)^{s_3+1}z_2)= \pm \frac{1}{2} \left( 2^{\frac{a-1}{2}}3^{\frac{b+1}{2}}+\frac{2^{a+1}3^b}{2^{\frac{a-1}{2}}3^{\frac{b+1}{2}}} \right) \] and \[ A_P=z_3^2-2^a3^b=2^{a-3}3^{b-1}25. \] From which it follows that $a \ge 3$ and $P$ is as required. \end{proof} \begin{proof}[Proof of Theorem \ref{second}] Let $N=2^ap$ where $a=0$ or $1$. Let $P \in E_N(\mathbb{Q})$ non-torsion point with $x(P) \in -{\mathbb{Q}^*}^2$ and $x(P)+N \in p{\mathbb{Q}^*}^2$. If $m$ is even then the result follows from Theorem \ref{first}. If $m$ is odd then the fundamental 2-descent map (see, for example, 8.2.3 in \cite{MR2312337}) shows that $\alpha_1=-1$ and $\alpha_2=p$ so $\alpha_3=-p$. Now (\ref{e4}) becomes $z_2^2+z_3^2=2^{a+1}B_P^2$ and (\ref{e2}) becomes $2z_1^2+pz_2^2=pz_3^2$ so \[ z_2^2+2p(z_1/p)^2=z_3^2. \] Corollary 6.3.6 in \cite{MR2312337} with the particular solution $(1, 0, 1)$ gives $dz_2=s^2-2pt^2$, $dz_1=2pst$, $dz_3=s^2+2pt^2$ where $s,t$ are coprime integers and $d \mid 2p$. 
If $d=\pm 1$ then $|z_2|=s^2-2pt^2$, $|z_1|=2pst$ and $|z_3|=s^2+2pt^2$. Since $z_1$ is even, $a=0$ and substituting into (\ref{e4}) gives $(s^2-2pt^2)^2+(s^2+2pt^2)^2=2B_P^2$ so \[ s^4+4p^2t^4=B_P^2. \] Now applying Theorem 1 in \cite{MR2646760} shows that $B_P$ cannot be a perfect power. If $d=\pm 2$ then $|z_2|=2s^2-pt^2$, $|z_1|=2pst$ and $|z_3|=2s^2+pt^2$, where $s,t$ are coprime integers. So $a=0$ and substituting into (\ref{e4}) gives \[ 4s^4+p^2t^4=B_P^2. \] So $4s^4=B_P^2-p^2t^4=(B_P+pt^2)(B_P-pt^2)$. Since $2 \nmid B_P$ and $p \nmid B_P$, we have $B_P+pt^2=\pm 2s'^4$ and $B_P-pt^2=\pm 2t'^4$ where $s',t'$ are coprime and odd. Thus $\pm s'^4 \pm t'^4=B_P$. Again applying Theorem 1 in \cite{MR2646760} shows that if $B_P$ is a perfect power then it is a square or a cube, but these remaining cases are well known (see 6.5.2 of \cite{MR2312337} and 14.6.6 of \cite{MR2312338}). Finally, the cases $d=\pm p$ and $d=\pm 2p$ give the same two parametrizations already considered above. \end{proof} \begin{comment} Consider $s^4+p^2t^4=2B^2$, where the terms are pairwise coprime. We have $s^4=2B^2-p^2t^4=-(pt^2+B\sqrt{2})(pt^2-B\sqrt{2})$. The two factors are coprime in $\mathbb{Z}[\sqrt{2}]$. Hence $pt^2+B\sqrt{2}=u(a+b\sqrt{2})^4$, where $u\bar{u}=-1$. So $u=\pm(1+\sqrt{2})^r$, where $r= \pm 1$ or $\pm 3$. For $r=1$ we have \[ B=a^4 + 4ba^3 + 12b^2a^2 + 8b^3a + 4b^4. \] $(\alpha_1=1, \alpha_2=2, \alpha_3=2)$ and $B$ an $l$th power gives $s^{4l}+p^2t^{4l}=2z_1^2$, where the terms are pairwise coprime. The level is $2^{2+6}p$. $(1, p, p)$ E.g. $p=5$, $P=(-4,6)+(0,0)$ and $B$ an $l$th power gives $p(z_1/p)^2=s^4+2^2t^4$, where the terms are pairwise coprime and $t$ is even. The level is $2p^2$. For $p=5$, $l$ can be bounded. $(2,p,2p)$ gives $2p(z_1/p)^2-2(z_2/2)^2=z_3^2$ and $(z_2/2)^2-2(z_3/2)^2=B^2$ so $-2(z_3/2)^2=B^2-(z_2/2)^2=(B+z_2/2)(B-z_2/2)$. \end{comment} \begin{comment} Case1: (1,1): Suppose that $p$ divides $z_2+z_1$ and $z_3+z_1$. Then $p$ divides $z_3-z_2$ and \it{Siegel's identity} \rm: \[ \frac{z_2+z_1}{z_3-z_2}-\frac{z_3+z_1}{z_3-z_2}+1=0 \] gives (\ref{lem}), (\ref{lem2}) or (\ref{lem3}). Thus \[ \frac{z_2+z_1}{z_3-z_2}=u \textrm{ and } -\frac{z_3+z_1}{z_3-z_2}=v, \] where $(u,v)=(1,-2),(-2,1)$ or $(-\frac{1}{2},-\frac{1}{2})$. So \[ -2^ap^bB^2=u(z_2-z_1)(z_3-z_2) \textrm{ and } 2^ap^bB^2=-v(z_3-z_1)(z_3-z_2) \] and \[ \frac{u}{v}=\frac{z_3-z_1}{z_2-z_1}. \] But \[ \frac{z_3-z_2}{z_2-z_1}-\frac{z_3-z_1}{z_2-z_1}=-1, \] thus $u \ne v$, \[ \frac{z_2-z_1}{z_3-z_2}=\frac{v}{u-v}, \] and \[ \left( \frac{z_2-z_1}{z_3-z_2} \right) \left( \frac{z_2+z_1}{z_3-z_2} \right)=\frac{uv}{u-v}=\frac{-2^ap^bB^2}{(z_3-z_2)^2}. \] So \[ \frac{-uv}{2^ap^b(u-v)}=\frac{B^2}{(z_3-z_2)^2} \] is a square. Hence $(u,v)=(1,-2)$, $p=3$, $a$ is odd, $B=1$ and \[ (z_3-z_2)^2=2^{a-1}3^{b+1}. \] Thus \[ z_3=\frac{1}{2}(z_3-z_2+z_3+z_2)= \pm \frac{1}{2} \left( 2^{\frac{a-1}{2}}3^{\frac{b+1}{2}}+\frac{2^{a+1}3^b}{2^{\frac{a-1}{2}}3^{\frac{b+1}{2}}} \right) \] and \[ A=z_3^2-2^a3^b=2^{a-3}3^{b-1}25. \] From which it follows that $a \ge 3$ and $P$ is as required. Case2: (1,q): Suppose that $q$ divides $z_2+z_1$ and $z_3-z_1$. Then $q$ divides $z_3+z_2$ and \it{Siegel's identity} \rm: \[ -\frac{z_2+z_1}{z_3+z_2}-\frac{z_3-z_1}{z_3+z_2}+1=0 \] gives (\ref{lem}), (\ref{lem2}) or (\ref{lem3}). Thus \[ -\frac{z_2+z_1}{z_3+z_2}=a \textrm{ and } -\frac{z_3-z_1}{z_3+z_2}=b, \] where $(a,b)=(1,-2),(-2,1)$ or $(-\frac{1}{2},-\frac{1}{2})$. 
So \[ -qB_m^2=-a(z_2-z_1)(z_3+z_2) \textrm{ and } qB_m^2=-b(z_3+z_2)(z_3+z_1) \] and \[ -\frac{a}{b}=\frac{z_3+z_1}{z_2-z_1}. \] But \[ \frac{z_3+z_2}{z_2-z_1}-\frac{z_3+z_1}{z_2-z_1}=1, \] thus $a \ne b$, \[ -\frac{z_2-z_1}{z_3+z_2}=\frac{b}{a-b}, \] and \[ \left( \frac{z_2-z_1}{z_3+z_2} \right) \left( \frac{z_2+z_1}{z_3+z_2} \right)=\frac{ab}{a-b}=\frac{-qB_m^2}{(z_3+z_2)^2}. \] But $-\frac{ab}{q(a-b)}$ is not a square. Case3: (q,1): Suppose that $q$ divides $z_2-z_1$ and $z_3+z_1$. Then $q$ divides $z_3+z_2$ and \it{Siegel's identity} \rm: \[ -\frac{z_2-z_1}{z_3+z_2}-\frac{z_3+z_1}{z_3+z_2}+1=0 \] gives (\ref{lem}), (\ref{lem2}) or (\ref{lem3}). Thus \[ -\frac{z_2-z_1}{z_3+z_2}=a \textrm{ and } -\frac{z_3+z_1}{z_3+z_2}=b, \] where $(a,b)=(1,-2),(-2,1)$ or $(-\frac{1}{2},-\frac{1}{2})$. So \[ -qB_m^2=-a(z_3+z_2)(z_2+z_1) \textrm{ and } qB_m^2=-b(z_3+z_2)(z_3-z_1) \] and \[ -\frac{a}{b}=\frac{z_3-z_1}{z_2+z_1}. \] But \[ \frac{z_3+z_2}{z_2+z_1}-\frac{z_3-z_1}{z_2+z_1}=1, \] thus $a \ne b$, \[ -\frac{z_2+z_1}{z_3+z_2}=\frac{b}{a-b}, \] and \[ \left( \frac{z_2-z_1}{z_3+z_2} \right) \left( \frac{z_2+z_1}{z_3+z_2} \right)=\frac{ab}{a-b}=\frac{-qB_m^2}{(z_3+z_2)^2}. \] But $-\frac{ab}{q(a-b)}$ is not a square. Case4: (q,q): Suppose that $q$ divides $z_2-z_1$ and $z_3-z_1$. Then $q$ divides $z_3-z_2$ and \it{Siegel's identity} \rm: \[ \frac{z_2-z_1}{z_3-z_2}-\frac{z_3-z_1}{z_3-z_2}+1=0 \] gives (\ref{lem}), (\ref{lem2}) or (\ref{lem3}). Thus \[ \frac{z_2-z_1}{z_3-z_2}=a \textrm{ and } -\frac{z_3-z_1}{z_3-z_2}=b, \] where $(a,b)=(1,-2),(-2,1)$ or $(-\frac{1}{2},-\frac{1}{2})$. So \[ -qB_m^2=a(z_3-z_2)(z_2+z_1) \textrm{ and } qB_m^2=-b(z_3-z_2)(z_3+z_1) \] and \[ \frac{a}{b}=\frac{z_3+z_1}{z_2+z_1}. \] But \[ \frac{z_3-z_2}{z_2+z_1}-\frac{z_3+z_1}{z_2+z_1}=-1, \] thus $a \ne b$, \[ \frac{z_2+z_1}{z_3-z_2}=\frac{b}{a-b}, \] and \[ \left( \frac{z_2-z_1}{z_3-z_2} \right) \left( \frac{z_2+z_1}{z_3-z_2} \right)=\frac{ab}{a-b}=\frac{-qB_m^2}{(z_3-z_2)^2}. \] But $-\frac{ab}{q(a-b)}$ is not a square. This completes the proof. \end{comment} \bibliographystyle{amsplain}
{ "timestamp": "2011-01-20T02:00:49", "yymm": "1101", "arxiv_id": "1101.2949", "language": "en", "url": "https://arxiv.org/abs/1101.2949", "abstract": "It is shown that there are finitely many perfect powers in an elliptic divisibility sequence whose first term is divisible by 2 or 3. For Mordell curves the same conclusion is shown to hold if the first term is greater than 1. Examples of Mordell curves and families of congruent number curves are given with corresponding elliptic divisibility sequences having no perfect power terms. The proofs combine primitive divisor results with modular methods for Diophantine equations.", "subjects": "Number Theory (math.NT)", "title": "Perfect powers in elliptic divisibility sequences", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9867771790254151, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.7090925535113846 }
https://arxiv.org/abs/1605.04207
Functional lower bounds for arithmetic circuits and connections to boolean circuit complexity
We say that a circuit $C$ over a field $F$ functionally computes an $n$-variate polynomial $P$ if for every $x \in \{0,1\}^n$ we have that $C(x) = P(x)$. This is in contrast to syntactically computing $P$, when $C \equiv P$ as formal polynomials. In this paper, we study the question of proving lower bounds for homogeneous depth-$3$ and depth-$4$ arithmetic circuits for functional computation. We prove the following results: 1. Exponential lower bounds for homogeneous depth-$3$ arithmetic circuits for a polynomial in $VNP$. 2. Exponential lower bounds for homogeneous depth-$4$ arithmetic circuits with bounded individual degree for a polynomial in $VNP$. Our main motivation for this line of research comes from our observation that strong enough functional lower bounds for even very special depth-$4$ arithmetic circuits for the Permanent imply a separation between ${\#}P$ and $ACC$. Thus, improving the second result to get rid of the bounded individual degree condition could lead to substantial progress in boolean circuit complexity. Besides, it is known from a recent result of Kumar and Saptharishi [KS15] that over constant sized finite fields, strong enough average case functional lower bounds for homogeneous depth-$4$ circuits imply superpolynomial lower bounds for homogeneous depth-$5$ circuits. Our proofs are based on a family of new complexity measures called shifted evaluation dimension, which might be of independent interest.
\section{Introduction} Arithmetic circuits are one of the most natural models of computation for studying computation with multivariate polynomials. One of the most fundamental questions in this area of research is to show that there are low degree polynomials which cannot be efficiently computed by \emph{small sized} arithmetic circuits. However, in spite of the significance of this question, progress on it has been sparse and our current state of understanding of lower bounds for arithmetic circuits continues to remain extremely modest. Most of the research in algebraic complexity theory so far considers arithmetic circuits and multivariate polynomials as \emph{formal} objects and studies the complexity of \emph{syntactic} representation of polynomials over the underlying field. However, in this work, we aim to study the \emph{semantic} or \emph{functional} analogue of the complexity of computing multivariate polynomials. We formally define this notion below and then try to motivate the definition based on our potential applications. \begin{definition}[Functional equivalence]\label{def: functional equivalence} Let $\F$ be any field and let $D$ be a subset of $\F$. We say that two $n$-variate polynomials $P_1$ and $P_2$ in $\F[x_1, x_2, \ldots, x_n]$ are \emph{functionally} equivalent over the domain $D^n$ if \[ \forall \vecx \in D^n\spaced{,} P_1(\vecx) = P_2(\vecx)\qedhere \] \end{definition} This definition of functional equivalence naturally extends to the case of arithmetic circuits functionally computing a family of polynomials, as defined below. \begin{definition}[Functional computation]\label{def: functional computation} Let $\F$ be any field and let $D$ be a subset of $\F$. A circuit family $\{C_n\}$ is said to functionally compute a family of polynomials $\{P_n\}$ over the domain $D^n$ if \[ \forall n\in \N, \vecx \in D^n\spaced{,} C_n(\vecx) = P_n(\vecx)\qedhere \] \end{definition} Having defined functional computation, we will now try to motivate the problem of proving functional lower bounds for arithmetic circuits. \subsection{Motivation} \paragraph{Improved boolean circuit lower bounds: } In the late 80's there was some spectacular progress on the question of lower bounds for bounded depth boolean circuits. In particular, Razborov and Smolensky~\cite{smolensky87, razborov87} showed exponential lower bounds for constant depth boolean circuits with AND $(\wedge)$, OR $(\vee)$, Negations $(\neg)$ and $\mod p$ gates for a prime $p$ (i.e., the class of $\mathsf{AC^0[p]}$ circuits). However, the question of proving lower bounds for constant depth boolean circuits which also have $\mod q$ gates for a composite $q$ (i.e., the class of general $\mathsf{ACC^0}$ circuits) remained wide open. In general, one major obstacle was that the techniques of Razborov and Smolensky failed for composite moduli, and we could not find alternative techniques which were effective for the problem. Although it is widely believed that the majority function should be hard for such circuits, till a few years ago, we did not even know how to show that there is such a language in $\mathsf{NEXP}$\footnote{The class of problems in nondeterministic exponential time.}. In a major breakthrough on this question, Williams~\cite{w11} showed that there is a function in $\mathsf{NEXP}$ which requires $\mathsf{ACC^0}$ circuits of superpolynomial size. Along with the result itself, the paper introduced a new proof strategy for showing such lower bounds.
However, it still remains wide open to show that there is a function in deterministic exponential time which requires $\mathsf{ACC^0}$ circuits of superpolynomial size. One of our main motivations for studying functional lower bounds for arithmetic circuits is the following lemma which shows that such lower bounds in a fairly modest setup would imply a separation between $\mathsf{\#P}$ and $\mathsf{ACC^0}$. A formal statement and a simple proof can be found in \autoref{sec: functional lb and acc}. \begin{lemma}[Informal]~\label{lem: acc to functional lb-intro} Let $\F$ be any field of characteristic zero or at least $\exp\left(\omega\left(\poly(\log n)\right)\right)$. Then, a functional lower bound of $\exp\left(\omega\left(\poly(\log n)\right)\right)$ for the permanent of an $n\times n$ matrix over $\{0,1\}^{n^2}$ for depth-$4$ arithmetic circuits with bottom fan-in $\poly(\log n)$ implies that $\mathsf{\#P} \neq \mathsf{ACC^0}$. \end{lemma} In fact, we show that something slightly stronger is true. It suffices to prove functional lower bounds for the model of sums of powers of low degree polynomials for the conclusion in \autoref{lem: acc to functional lb-intro} to hold. At this point, there are two possible interpretations of the statement of \autoref{lem: acc to functional lb-intro}. For an optimist, it provides another approach to proving new lower bounds for $\mathsf{ACC^0}$, while for a pessimist it points to the fact that functional lower bounds for depth-$4$ arithmetic circuits could possibly be very challenging. What makes us somewhat optimistic about this strategy is the fact that in the last few years, we seem to have made substantial progress on the question of proving lower bounds for homogeneous depth-$4$ circuits in the syntactic setting \cite{gkks13, FLMS13, KLSS, KS14}. In particular, even though the depth-$4$ circuits obtained in the proof of \autoref{lem: acc to functional lb-intro} are not homogeneous, an exponential lower bound for sums of powers of low degree polynomials is known in the syntactic setup. Therefore, it makes sense to try and understand if these bounds can be extended to the functional setup as well. \paragraph{Lower bounds for homogeneous depth-$5$ circuits: } In a recent work by Kumar and Saptharishi~\cite{KumarSaptharishi15}, it was shown that over constant size finite fields, \emph{average case functional} lower bounds for homogeneous depth-$4$ circuits\footnote{in fact, with bounded bottom fan-in} imply lower bounds for homogeneous depth-$5$ circuits. More precisely, the following lemma was shown: \begin{lemma}[\cite{KumarSaptharishi15}]\label{lem: avg case to depth 5} Let $\F_q$ be a finite field such that $q = O(1)$. Let $P$ be a homogeneous polynomial of degree $d$ in $n$ variables over $\F_q$, which can be computed by a homogeneous depth-$5$ circuit of size at most $O\left( \exp{\left( d^{0.499}\right)} \right)$. Then, there exists a homogeneous depth-$4$ circuit $C'$ of bottom fan-in $O(\sqrt{d})$ and top fan-in at most $O\left( \exp{\left( d^{0.499}\right)} \right)$ such that \[ \Pr_{x \in \F_q^n}\left[P(x) \neq C'(x)\right] \leq \exp(-\Omega(\sqrt{d}))\qedhere \] \end{lemma} Informally, the lemma shows that over small finite fields strong enough \emph{average case} functional lower bounds for homogeneous depth-$4$ arithmetic circuits with bounded bottom fan-in are sufficient to show superpolynomial lower bounds for homogeneous depth-$5$ circuits.
Even though in \cite{KumarSaptharishi15}, the authors do not take this route to eventually prove their lower bounds, this connection seems like a strong motivation to study the question of proving functional lower bounds for bounded depth arithmetic circuits. \paragraph{Functional lower bounds for bounded depth arithmetic circuits: } It is immediately clear from the definition that \emph{syntactic} computation implies \emph{functional} computation, but vice-versa may not be necessarily true. In this sense, proving lower bounds for functional computation could be potentially harder than proving lower bounds for syntactic computation. From this point of view, once we have syntactic lower bounds for a certain class of circuits, it seems natural to ask if these bounds can be extended to the functional framework as well. The last few years have witnessed substantial progress on the question of proving lower bounds for variants of depth-$4$ arithmetic circuits, and in this work we explore the question of whether these bounds can be extended to the functional setting. \paragraph{Applications to proof complexity lower bounds :} Functional lower bounds have recently found applications for obtaining lower bounds for algebraic proof systems. In particular, Forbes, Shpilka, Tzameret, and Wigderson~\cite{FSTW15} have given lower bounds in various algebraic circuit measures for any polynomial agreeing with certain functions of the form $\vecx\mapsto\frac{1}{p(\vecx)}$, where $p$ is a constant-degree polynomial (which is non-zero on the boolean cube). In particular, they used such lower bounds to obtain lower bounds for the various subclasses of the Ideal Proof System (IPS) of Grochow and Pitassi~\cite{GrochowPitassi14}. In the next section, we explore the connections between syntactic and functional computation in a bit more detail, and discuss why the techniques used in proving syntactic lower bounds do not seem conducive to prove lower bounds in the functional setting. Hence, the problem of proving functional lower bounds might lead us to more techniques for arithmetic circuit lower bounds. \subsection{Functional vs syntactic computation} We now discuss the differences and similarities between functional and syntactic computation in a bit more detail. The following observation is easy to see. \begin{observation}\label{obs: properties of semantic equivalence} The following properties follow from \autoref{def: functional computation}: \begin{itemize} \item Any two polynomials $P_1$ and $P_2$ which are syntactically equivalent are also functionally equivalent for every choice of $D$. \item If two polynomials of individual degrees bounded by $d$ are functionally equivalent over any domain of size at least $d+1$, then they are also syntactically equivalent. \item In particular, any two multilinear polynomials which are functionally equivalent over the hypercube $\{0,1\}^n$ are also syntactically equivalent. \end{itemize} \end{observation} For the rest of the paper, our domain of interest will be $D = \{0,1\}$ and we will be interested in polynomials which are functionally the same over the hypercube $\{0,1\}^n$. For brevity, for the rest of the paper, when we say that two polynomials are functionally equivalent, we mean that the domain is the hypercube. As an additional abuse of notation, when we say that a circuit $C$ is functionally equivalent to a polynomial $P$, we mean that for every $\vecx \in \{0,1\}^n$, $C(\vecx) = P(\vecx)$. 
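To make the last item of \autoref{obs: properties of semantic equivalence} concrete, recall that the unique multilinear polynomial which is functionally equivalent to a polynomial $P$ over $\{0,1\}^n$ can be written down explicitly: the coefficient of $\prod_{i \in S}x_i$ equals $\sum_{T \subseteq S}(-1)^{|S|-|T|}P(1_T)$, where $1_T \in \{0,1\}^n$ is the indicator vector of $T \subseteq [n]$. The following short Python sketch is purely illustrative (it is not part of any proof in this paper); it computes these coefficients by brute force. For instance, applied to $(x_1+x_2+x_3)^3$ it reports that the coefficient of $x_1x_2x_3$ in the multilinear representation is $3! = 6$.
\begin{verbatim}
from itertools import combinations

def multilinear_coeffs(f, n):
    # Coefficient of prod_{i in S} x_i in the unique multilinear polynomial
    # agreeing with f on {0,1}^n, computed by Mobius inversion over subsets.
    coeffs = {}
    for k in range(n + 1):
        for S in combinations(range(n), k):
            c = 0
            for j in range(k + 1):
                for T in combinations(S, j):
                    point = [1 if i in T else 0 for i in range(n)]
                    c += (-1) ** (k - j) * f(point)
            coeffs[S] = c
    return coeffs

n = 3
f = lambda x: sum(x) ** n          # the polynomial (x_1 + ... + x_n)^n
print(multilinear_coeffs(f, n)[(0, 1, 2)])   # prints 6 = 3!
\end{verbatim}
In particular, functional equivalence over the hypercube determines this multilinear representative uniquely, which is exactly what the observation asserts.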
Observe that functional equivalence over the hypercube is precisely the same as syntactic equivalence when we work modulo the ideal generated by the polynomials $\{x_i^2-x_i : i \in [n]\}$. However, we find the functional view easier and more convenient to work with. At this point, one might ask why the choice of $D$ as $\{0,1\}$ is a natural one. The motivation for studying a domain of size $2$ stems from the fact that most of the polynomials for which we have syntactic arithmetic circuit lower bounds are multilinear. For instance, the permanent ($\mathsf{Perm}$), the determinant ($\mathsf{Det}$), the Nisan-Wigderson polynomials ($\mathsf{NW}$) and the iterated matrix multiplication polynomial ($\mathsf{IMM}$) are known to be hard for many natural classes of arithmetic circuits, homogeneous depth three circuits being one such class. Since for any $D\subseteq \F$ such that $|D| \geq 2$, $D^n$ is an interpolating set for multilinear polynomials, it seems natural to ask if there is a small homogeneous depth three arithmetic circuit which is functionally equivalent to any of these polynomials. Another reason why $\{0,1\}^n$ seems a natural domain to study functional algebraic computation is its potential connection to boolean circuit lower bounds. It seems natural to ask if the techniques discovered in the quest for arithmetic circuit lower bounds can be adapted to say something interesting about questions in boolean circuit complexity. And, \autoref{lem: acc to functional lb-intro} seems like an encouraging step in this direction. \subsubsection{Functional lower bounds and partial derivatives }\label{sec: functional lb and partial derivatives} Almost all the bounded depth arithmetic circuit lower bounds so far have been proved using techniques based on the partial derivatives of a polynomial. This includes exponential lower bounds for homogeneous depth-$3$ circuits \cite{nw1997} and lower bounds for homogeneous depth-$4$ arithmetic circuits~\cite{gkks13, FLMS13, KLSS, KS14}. At a high level, the proofs have the following structure: \begin{itemize} \item Define a function $\Gamma : \F[\vecx] \rightarrow \N$, called the complexity measure, which serves as an indicator of the hardness of a polynomial. \item For all \emph{small} arithmetic circuits in the model of interest, show that $\Gamma$ has a non-trivial upper bound. \item For the target hard polynomial, show that $\Gamma$ is large. Comparing this with the upper bound in step $2$ leads to a contradiction if the hard polynomial had a small arithmetic circuit. \end{itemize} The precise measure $\Gamma$ used in these proofs varies, but they all build upon the notion of partial derivatives of a polynomial. The idea is to define $\Gamma(P)$ to be the dimension of a linear space of polynomials defined in terms of the partial derivatives of $P$. In the syntactic setup, if a circuit $C$ computes a polynomial $P$, then any partial derivative of $C$ must be equivalent to the corresponding partial derivative of $P$. This observation, along with bounds on the dimension of the partial derivative based linear spaces, led to circuit lower bounds. However, this clearly breaks down in the case when our only guarantee is that the circuit $C$ and the polynomial $P$ agree as functions on all of $\{0,1\}^n$. A priori, it is not clear if we can say anything meaningful about how the partial derivatives of $C$ and those of $P$ are related to each other. An extreme case of this is the following example.
Let the polynomials $P$ and $Q$ be defined as follows: \[ P = \left(\sum_{i = 1}^n x_i \right)^n \] and \[ Q = P \mod I_0 \] Here $I_0$ is the ideal generated by the polynomials $\{x_i^2-x_i : i \in [n]\}$. The following items follow easily from the definitions: \begin{itemize} \item $\forall \vecx \in \{0,1\}^n, P(\vecx) = Q(\vecx)$. \item The dimension of the span of partial derivatives of $P$ is at most $n$. \item The dimension of the span of partial derivatives of $Q$ is at least $2^n$. This follows from the fact that the leading monomial of $Q$ is $x_1\cdot x_2 \cdots x_n$. \end{itemize} So, clearly the dimension of the partial derivatives of two polynomials which are functionally the same over $\{0,1\}^n$ can be wildly different. Thus, it seems tricky to extend the proofs of syntactic lower bounds to the functional setup. Nevertheless, we do manage to get around this obstacle in certain cases as our results in the next section show. Moreover, we also show that a general solution to this question offers a possibility of proving new lower bounds for boolean circuits, that have so far been beyond our reach so far. \subsection{Our results} We now state our main results. As our first result, we show functional lower bounds for homogeneous\footnote{Our lower bounds require that the formal degree of the circuit and the degree of the polynomial are \emph{close} to each other. Homogeneity guarantees this condition, but is a much stronger condition than what we need for our proofs to work. } depth-$3$ circuits. In the syntactic setting such lower bounds were first shown by Nisan and Wigderson \cite{nw1997} using the partial derivative of a polynomial as the complexity measure. However, as we discussed in \autoref{sec: functional lb and partial derivatives}, partial derivative based proofs do not extend to the functional setting in a straightforward manner. We get around this obstacle by working with a different but related complexity measure. We now formally state the theorem : \begin{theorem}\label{thm: depth 3 lower bound} Let $\F$ be any field. There exists a family $\{P_d\}$ of polynomials of degree $d$ in $n = \poly(d)$ variables in $\VNP$ such that any $\Sigma\Pi\Sigma$ circuit of formal degree $d$ which is functionally equivalent to $P_d$ over $\{0,1\}^n$ has size at least $\exp\left( \Omega\left(d\log n \right)\right)$. \end{theorem} As our second result, we show similar functional analogues of the homogeneous depth-$4$ lower bounds of \cite{KLSS,KS14} but under the restriction that the depth-$4$ circuit computes a polynomial of \emph{low individual degree}. As discussed in the introduction, such lower bounds for depth-$4$ circuits with bounded bottom fan-in but unbounded individual degree would imply that $\mathsf{\#P} \neq \mathsf{ACC^0}$, and would be a major progress on the question of boolean circuit lower bounds. \begin{theorem}\label{thm: depth 4 lower bound} Let $\F$ be any field. There exists a family $\{P_d\}$ of polynomials of degree $d$ in $n = \poly(d)$ variables in $\VNP$ such that any $\Sigma\Pi\Sigma\Pi$ circuit of formal degree $d$ and individual degree $O(1)$ which is functionally equivalent to $P_d$ over $\{0,1\}^n$ has size at least $\exp\left( \Omega\left(\sqrt{d}\log n \right)\right)$. \end{theorem} Our techniques for the proof of \autoref{thm: depth 4 lower bound} are again different from the proofs of homogeneous depth-$4$ lower bounds in the syntactic setting. 
We introduce a family of new complexity measures, which are functional in their definition (as opposed to partial derivative based measures), and use them to capture functional computation. The family of measures, called \emph{shifted evaluation dimension}, is a shifted analogue of the well-known notion of evaluation dimension, which has had many applications in algebraic complexity (for instance, in multilinear formula and circuit lower bounds~\cite{raz2004, Raz06, raz-yehudayoff}). We believe that the measure is of independent interest, and could have other potential applications. \paragraph{Elementary symmetric polynomials:} In their paper~\cite{nw1997}, Nisan and Wigderson showed an exponential lower bound on the size of homogeneous depth-$3$ circuits computing the elementary symmetric polynomials. A curious consequence of our proof is that we are unable to show an analogue of \autoref{thm: depth 3 lower bound} for the elementary symmetric polynomials. One of the reasons for this is the fact that the elementary symmetric polynomials have a \emph{small} evaluation dimension complexity (the complexity measure used for this lower bound), hence our proof technique fails. However, it turns out that, at least over fields of sufficiently large characteristic, there are polynomial-sized depth-$3$ circuits of low formal degree which are functionally equivalent to the elementary symmetric polynomials over $\{0,1\}^n$. The upper bounds are based on the simple observation that for any $d$ and $x \in \{0,1\}^n$, the value of $Sym_d(x)$ (elementary symmetric polynomial of degree $d$) is equal to $\binom{h(x)}{d}$, where $h(x) = \sum_i x_i$ is the Hamming weight of $x$. In particular, for $d = 1$, the polynomial $\sum_i x_i$ is functionally equivalent to $Sym_1$, the polynomial $\frac{(\sum_i x_i)(\sum_i x_i - 1)}{2}$ is functionally equivalent to $Sym_2$, and so on. More generally, there is a polynomial which is a product of $d$ affine forms and which is functionally equivalent to $Sym_d$. However, over fields of low characteristic, the complexity of the elementary symmetric polynomials for functional computation by depth-$3$ (or even depth-$4$) circuits is not clear to us and is an interesting open question. \paragraph{Comparison to Kayal, Saha, Tavenas~\cite{KST15}:} In a recent independent result, Kayal, Saha and Tavenas showed exponential lower bounds for depth-$4$ circuits of bounded individual degree computing an explicit polynomial in $\VP$. Their proof uses a complexity measure called \emph{skew shifted partials} which is very similar in spirit to the notion of \emph{shifted evaluation dimension}, the complexity measure we use. Even though the results seem related, neither of them subsumes the other. For our proof, we require that the formal degree of the depth-$4$ circuit is small (homogeneity), in addition to the individual degree being small, whereas in~\cite{KST15} the authors only require the individual degree of the circuit to be small. In this sense, their result is for a more general model than ours. However, for our lower bounds, we only require the circuit to agree with the target hard polynomial over $\{0,1\}^n$, while the proof in~\cite{KST15} is for syntactically computing the hard polynomial. Hence, the results are incomparable. \subsection{Organization of the paper} We set up some notation to be used in the rest of the paper in \autoref{sec:notation}.
We prove the connections between functional lower bounds for depth-$4$ circuits and lower bounds for $\mathsf{ACC^0}$ in \autoref{sec: functional lb and acc}. We introduce our main complexity measure in \autoref{sec:complexity measure}. We define and study the properties of the hard polynomials for our lower bounds in \autoref{sec:NW}. We present the proof of \autoref{thm: depth 3 lower bound} in \autoref{sec:depth 3} and the proof of \autoref{thm: depth 4 lower bound} in \autoref{sec:depth 4}. \section{Notation}\label{sec:notation} We now set up some notation to be used for the rest of the paper. \begin{itemize} \item Throughout the paper, we shall use bold-face letters such as $\vecx$ to denote a set $\set{x_1,\dots, x_n}$. Most of the time, the size of this set will be clear from context. We shall also abuse this notation to use $\vecx^\vece$ to refer to the monomial $x_1^{e_1}\cdots x_n^{e_n}$. \item The set of formal variables in this paper, denoted by $\vecx$ and of size $n$, shall often be partitioned into sets $\vecy$ and $\vecz$. We shall use $n_y$ and $n_z$ to denote the sizes of $\vecy$ and $\vecz$ respectively. \item For an integer $m > 0$, we shall use $[m]$ to denote the set $\set{1,\dots, m}$. \item We shall use the short-hand $\partial_{\vecx^{\vece}}(P)$ to denote \[ \frac{\partial^{e_1}}{\partial x_1^{e_1}}\inparen{ \frac{\partial^{e_2}}{\partial x_2^{e_2}}\inparen{\cdots \inparen{ P }\cdots}}. \] \item For a set of polynomials $\mathcal{P}$, we shall use $\partial_{\vecy}^{=k}\mathcal{P}$ to denote the set of all $k$-th order partial derivatives of polynomials in $\mathcal{P}$ with respect to the $\vecy$ variables only, and $\partial_{\vecy}^{\leq k}\mathcal{P}$ similarly. Also, $\vecx^{=\ell} \mathcal{P}$ shall refer to the set of polynomials of the form $\vecx^{\vece} \cdot P$ where $\mathsf{Deg}(\vecx^{\vece}) = \ell$ and $P \in \mathcal{P}$. Similarly for $\vecx^{\leq \ell} \mathcal{P}$. \item For a polynomial $P \in \F[\vecx]$ and for a set $S \subseteq\F^n$, we shall denote by $\mathsf{Eval}_S(P)$ the vector of the evaluations of $P$ on points in $S$ (in some natural predefined order, say the lexicographic order). For a set of vectors $V$, their span over $\F$ will be denoted by $\mathsf{Span}(V)$ and their dimension by $\mathsf{Dim}(V)$. \end{itemize} \section{Functional lower bounds for depth-$4$ circuits and $\mathsf{ACC^0}$}\label{sec: functional lb and acc} In this section, we show that strong enough functional lower bounds for even very special depth-$4$ arithmetic circuits are sufficient to imply new lower bounds for $\mathsf{ACC^0}$. The proof follows from a simple application of a well-known characterization of $\mathsf{ACC^0}$ by Yao~\cite{yao85} and Beigel and Tarui~\cite{beigeltarui94}. The following version of the theorem is from Arora-Barak~\cite{arorabarak}. \begin{theorem}[\cite{yao85, beigeltarui94}]\label{thm: acc to sym} If a function $f:\{0,1\}^n \rightarrow \{0,1\}$ is in $\mathsf{ACC^0}$, then $f$ can be computed by a depth-$2$ circuit with a symmetric gate with quasipolynomial $\left(\exp(\log^{O(1)} n)\right)$ fan-in at the output level and $\vee$ gates with polylogarithmic $\left(\log^{O(1)} n\right)$ fan-in at the bottom level. \end{theorem} We now prove the following lemma which shows a \emph{functional} upper bound for $\mathsf{ACC^0}$. \begin{lemma}\label{lem: acc to functional ub} Let $\F$ be any field of characteristic zero or at least $\exp\left(\omega\left(\poly(\log n)\right)\right)$.
If a function $f:\{0,1\}^n \rightarrow \{0,1\}$ is in $\mathsf{ACC^0}$, then there exists a polynomial $P_f \in \F[x_1, x_2, \ldots, x_n]$ such that the following are true: \begin{itemize} \item For every $\vecx \in \{0,1\}^n$, $f(\vecx) = P_f(\vecx)$. \item $P_f$ can be computed by a quasipolynomial-sized $\Sigma\!\wedge\!\Sigma\Pi$ circuit with bottom fan-in at most $\poly(\log n)$; these are depth-$4$ circuits in which the product gates at the second level are powering gates. \end{itemize} \end{lemma} \begin{proof} From \autoref{thm: acc to sym}, we know that there exist a symmetric function $h$ and multilinear polynomials $g_1, g_2, \ldots, g_t$ such that \begin{itemize} \item $t = \exp(\poly(\log n))$. \item For every $\vecx \in \{0,1\}^n$, $f(\vecx) = h(g_1(\vecx), g_2(\vecx), \ldots, g_t(\vecx))$. \item Each $g_i$ is a multilinear polynomial in at most $\poly(\log n)$ variables. \item For every $\vecx \in \{0,1\}^n$ and $j \in [t]$, $g_j(\vecx) \in \{0,1\}$. \end{itemize} From the last item above, we know that the $g_i$s only take boolean values on inputs from $\{0,1\}^n$. Since $h$ is symmetric, it follows that its value on boolean inputs only depends upon the Hamming weight of its input. Hence, $h$ is in fact a function of $\sum_{i \in [t]} g_i$. Therefore, over any field $\F$ of characteristic zero or larger than $t$, there exists a univariate polynomial $P_h$ of degree at most $t$ over $\F$ such that \[ \forall \vecx \in \{0,1\}^n, h\left(g_1(\vecx), g_2(\vecx), \ldots, g_t(\vecx)\right) = P_h\left(\sum_{i \in [t]} g_i(\vecx)\right) \] The lemma now follows from the fact that each $g_i$ is a multilinear polynomial in $\poly(\log n)$ variables. \end{proof} \autoref{lem: acc to functional ub} now immediately implies the following lemma. \begin{lemma}~\label{lem: acc to functional lb} Let $\F$ be any field of characteristic zero or at least $\exp\left(\omega\left(\poly(\log n)\right)\right)$. Then, an $\exp\left(\omega\left(\poly(\log n)\right)\right)$ functional lower bound for a function $f$ on $n$ variables for $\Sigma\wedge\Sigma\Pi^{[\poly(\log n)]}$ circuits over $\F$ would imply that $f$ is not in $\mathsf{ACC^0}$. \end{lemma} \section{The complexity measure}~\label{sec:complexity measure} In the lower bounds for homogeneous depth-$4$ circuits \cite{KLSS,KS14}, the complexity measure used was the \emph{dimension of projected shifted partial derivatives}. The following definition is not the same as the one used in \cite{KLSS,KS14}, but this slight variant will be easier to work with for our applications. We abuse notation to call it ``projected shifted partial derivatives'' as it continues to have the essence of the original definition. A discussion of the precise differences between the following definition and the original definition of \cite{KLSS,KS14} is present in \autoref{sec:pspd-discussion}. \begin{definition}[Projected shifted partial derivatives]\label{defn:pspd} Let $\vecx = \vecy \sqcup \vecz$ with $|\vecy| = n_y$ and $|\vecz| = n_z$, and let $S$ be the set of all strings in $\set{0,1}^{n_y + n_z}$ that are zero on the first $n_y$ coordinates.
If $k, \ell$ are some parameters, the \emph{dimension of projected shifted partial derivatives} for any polynomial $P(\vecy, \vecz) \in \F[\vecy, \vecz]$, denoted by $\Gamma_{k,\ell}^{\mathrm{PSPD}}(P)$, is defined as \[ \Gamma_{k,\ell}^{\mathrm{PSPD}}(P) \spaced{:=} \mathsf{Dim}\inbrace{\mathsf{Eval}_{S}\inparen{\vecz^{=\ell} \partial_{\vecy}^{=k}(P)}}.\qedhere \] \end{definition} The above measure is still syntactic as partial derivatives are not useful in the functional setting. For the functional setting, we shall use a different measure for our lower bound that we call the \emph{shifted evaluation dimension}. We now define the complexity measure that we shall be using to prove the lower bound. For brevity, we shall assume that our set of variables $\vecx$ is partitioned into $\vecy$ and $\vecz$. For our proofs, we shall use a carefully chosen partition. We now formally define the notion of \emph{shifted evaluation dimension} of a polynomial below. \begin{definition}[Shifted evaluation dimension] Let $\ell$ and $k$ be some parameters and let $\vecx = \vecy \sqcup \vecz$ such that $|\vecy| = n_y$ and $|\vecz| = n_z$. For any polynomial $P \in \F[\vecy, \vecz]$, define $\Gamma_{k, \ell}^{\mathrm{SED}}(P)$ as \[ \Gamma_{k,\ell}^{\mathrm{SED}}(P) \spaced{:=} \mathsf{Dim}\inbrace{\mathsf{Eval}_{\{0,1\}^{n_z}}\inparen{\vecz^{=\ell} \cdot \{P(\veca,\vecz) : \veca \in \{0,1\}^{n_y}_{\leq k} \}}}.\qedhere \] \end{definition} Informally, for every polynomial $P$, we fix a partition of the input variables into $\vecy$ and $\vecz$ and generate a linear space by the following algorithm. \begin{itemize} \itemsep 0pt \item We take the projections of $P$ obtained by setting each of the $\vecy$ variables to $0$ or $1$, such that the number of $\vecy$ variables set to $1$ is at most $k$. \item We shift the polynomials obtained in step $1$ by all monomials of degree $\ell$ in the $\vecz$ variables. \item Observe that the polynomials obtained at the end of step two are polynomials only in the $\vecz$ variables. We now look at the evaluation vectors of these polynomials over $\{0,1\}^{n_z}$. \end{itemize} The complexity measure of the polynomial $P$ is defined as the dimension of the linear space generated by the vectors obtained at the end of step $3$ in the algorithm above. For our proof, we will pick a careful partition of the variables $\vecx$ into $\vecy$ and $\vecz$ and look at $\Gamma_{k, \ell}^{\mathrm{SED}}(P)$. The following lemma highlights the key reason the above measure is useful for proving functional lower bounds. \begin{lemma}[Functional equivalence and shifted evaluation dimension]\label{lem: complexity measure utility} Let $P \in \F[\vecx]$ and $Q \in \F[\vecx]$ be any two polynomials which are functionally equivalent over $\{0,1\}^n$. Then, for every choice of $k$, $\ell$ and partition $\vecx = \vecy \sqcup \vecz$, \[ \Gamma_{k, \ell}^{\mathrm{SED}}(P) \spaced{=} \Gamma_{k, \ell}^{\mathrm{SED}}(Q) \] \end{lemma} \begin{proof} The proof easily follows from the fact that the measure $\Gamma_{k, \ell}^{\mathrm{SED}}(P)$ is the dimension of a linear space which is generated by vectors which correspond to evaluations of $P$ over subcubes of $\{0,1\}^n$. Hence, it is the same for any two polynomials which agree as functions over $\{0,1\}^n$. \end{proof} \begin{remark*} Observe that a lemma analogous to \autoref{lem: complexity measure utility} is not true in general for partial derivative based measures.
Hence, the proofs of syntactic lower bounds which are based on such measures do not immediately carry over to the functional setup. \end{remark*} \subsection{Evaluations vs partial derivatives} In this section, we show that for polynomials of low individual degree, the notion of shifted evaluation dimension can be used as a proxy for the notion of shifted partial derivatives. This is the key observation that drives the proofs of \autoref{thm: depth 3 lower bound} and \autoref{thm: depth 4 lower bound}. We first consider the case when the polynomial is \emph{set-multilinear}, in which case derivatives can be directly related to carefully chosen evaluations. \subsubsection{For set-multilinear polynomials} The explicit polynomials we shall be working with in this paper will be \emph{set-multilinear}. An example to keep in mind is $\mathsf{Det}_n$ or $\mathsf{Perm}_n$, where the variables can be partitioned into rows and each monomial involves exactly one variable from each part. \begin{definition}[Set-multilinear polynomials] A polynomial $P$ is said to be \emph{set-multilinear} with respect to a partition $\vecx = \vecx_1 \sqcup \cdots \sqcup \vecx_r$ if every monomial of $P$ involves exactly\footnote{Sometimes in the literature the word `exactly' is replaced by `at most', but in this paper we will be working with this definition.} one variable from each $\vecx_i$. \end{definition} We begin with the following simple observation. \begin{observation}\label{lem: multilinear equivalence} Let $P \in \F[\vecx]$ be set-multilinear with respect to a partition $\vecx = \vecx_1 \sqcup \cdots \sqcup \vecx_r$. Let $\vecy = \vecx_1 \union \cdots \union \vecx_k$ for some $k \leq r$ and let $\vecz = \vecx \setminus \vecy$. Then, for any degree-$k$ monomial $\vecy^\vece$ that is set-multilinear with respect to $\vecx_1 \sqcup \cdots \sqcup \vecx_k$, we have \[ \frac{\partial P}{\partial \vecy^{\vece}} \spaced{=} P(\vece, \vecz). \] \end{observation} \begin{proof} We shall prove this by induction on $k$. Suppose $\vecy = \vecx_1$ and $y_1 \in \vecx_1$. Since $P$ is set-multilinear, we can write $P$ as \[ P(\vecx_1,\cdots, \vecx_r) \spaced{=} \sum_{y_i \in \vecx_1} y_i \cdot P_i(\vecx_2,\cdots, \vecx_r). \] Hence it follows that $\partial_{y_1}(P)$ equals $P_1$, which is also the partial evaluation of $P$ where $y_1$ is set to $1$ and all other $y_i \in \vecx_1$ are set to zero. Hence, if $y_1 = \vecy^\vece$, then $\partial_{y_1}(P) = P(\vece, \vecx_2,\cdots, \vecx_r)$. The claim follows by repeating this argument on $P(\vece,\vecx_2,\cdots, \vecx_r)$, which continues to be set-multilinear. \end{proof} \autoref{lem: multilinear equivalence} immediately implies the following corollary, which shows that for set-multilinear polynomials \emph{shifted evaluation dimension} and \emph{shifted partial derivatives} are the same quantity if we choose our set of derivatives carefully. \begin{corollary}\label{cor: taylor mult} Let $P(\vecx)$ be a set-multilinear polynomial with respect to $\vecx = \vecx_1 \sqcup \cdots \sqcup \vecx_r$. Suppose $\vecy = \vecx_1 \union \cdots \union \vecx_k$ and $\vecz = \vecx \setminus \vecy$. Then, if we consider the dimension of projected shifted partials with respect to set-multilinear monomials in $\vecy$, we have \[ \Gamma_{k, \ell}^{\mathrm{PSPD}}(P) \spaced{\leq} \Gamma_{k, \ell}^{\mathrm{SED}}(P).
\] \end{corollary} \subsubsection{For low individual degree polynomials} We now proceed to show that an \emph{approximation} of \autoref{cor: taylor mult} also holds for polynomials of low individual degree. \begin{lemma}\label{lemma: low ind degree taylor} Let $P(\vecy, \vecz)$ be a polynomial with individual degree at most $r$. Then, for every choice of parameters $k$ and $\ell$, \[ \set{ P(\veca, \vecz) : \veca \in \{0,1\}^{n_y}_{\leq k} } \spaced{\subseteq} \mathsf{Span}\inparen{\inparen{\partial^{\leq rk} P}_{\vecy = \mathbf{0}}}. \] \end{lemma} \begin{proof} For the rest of this proof, we shall think of $P$ as an element $P_\vecz(\vecy) \in \F[\vecz][\vecy]$. Let $\veca$ be any point in $\{0,1\}^{n_y}$. Then, by Taylor expansion, we know that \[ P_\vecz(\vecy + \veca) \spaced{=} \sum_{\vece} \veca^{\vece} \cdot \partial_{\vecy^{\vece}}(P_{\vecz})(\vecy) \] If the support of $\veca$ is at most $k$, then for every $\vece$ such that $\|\vece \|_0 > k$, we would have $\veca^{\vece} = 0$. Moreover, since $P$ is a polynomial of individual degree at most $r$, it follows that if any coordinate of $\vece$ is more than $r$ then \[ \partial_{\vecy^{\vece}}(P_{\vecz}) = 0. \] In summary, for any $\veca$ such that $\|\veca\|_0 \leq k$, \begin{eqnarray*} P_\vecz(\vecy + \veca) & = & \sum_{\substack{\vece : \|\vece\|_0 \leq k,\\ \|\vece\|_1 \leq rk}}\veca^{\vece} \cdot \partial_{\vecy^{\vece}}(P_{\vecz})(\vecy)\\ \implies P_\vecz(\veca) \quad = \quad P(\veca, \vecz) & = & \sum_{\substack{\vece : \|\vece\|_0 \leq k,\\ \|\vece\|_1 \leq rk}}\veca^{\vece} \cdot \inparen{\partial_{\vecy^{\vece}}(P_{\vecz})}_{\vecy = \mathbf{0}} \quad \in \quad \mathsf{Span}\inparen{\inparen{\partial^{\leq rk} P}_{\vecy = \mathbf{0}}}.\qedhere \end{eqnarray*} \end{proof} We are now ready to prove our main technical claim of this section. \begin{lemma}\label{lemma: eval dim vs partial derivatives} Let $P(\vecy, \vecz)$ be a polynomial with individual degree at most $r$. Then, for every choice of parameters $k$ and $\ell$, \[ \Gamma_{k, \ell}^{\mathrm{SED}}(P) \spaced{\leq} \Gamma_{rk, \ell}^{\mathrm{PSPD}}(P) \] \end{lemma} \begin{proof} From \autoref{lemma: low ind degree taylor}, we know that \begin{eqnarray*} \set{P(\veca, \vecz) : \veca \in \{0,1\}^{n_y}_{\leq k}} &\subseteq& \mathsf{Span}\inparen{\inparen{\partial^{\leq rk} P}_{\vecy = \mathbf{0}}} \\ \implies \set{\vecz^{=\ell} \cdot P(\veca, \vecz) : \veca \in \{0,1\}^{n_y}_{\leq k}} &\subseteq& \mathsf{Span}\inparen{\vecz^{=\ell} \cdot \inparen{\partial^{\leq rk} P}_{\vecy = \mathbf{0}}} \end{eqnarray*} By looking at the evaluation vectors on $\set{0,1}^{n_z}$, \begin{eqnarray*} \left\{ \mathsf{Eval}_{\set{0,1}^{n_z}}\left(\vecz^{=\ell} \cdot P(\veca, \vecz) \right): \veca \in \{0,1\}^{n_y}_{\leq k} \right\} &\subseteq& \mathsf{Span}\left( \mathsf{Eval}_{\set{0,1}^{n_z}}\left(\vecz^{=\ell}\cdot \inparen{\partial^{\leq rk} P}_{\vecy = \mathbf{0}} \right)\right)\\ & = & \mathsf{Span}\left( \mathsf{Eval}_{\set{0}^{n_y} \times \set{0,1}^{n_z}}\left(\vecz^{=\ell}\cdot \partial^{\leq rk} P \right)\right) \end{eqnarray*} Taking the dimension of the linear spans on both sides completes the proof. \end{proof} \section{Nisan-Wigderson polynomial families}~\label{sec:NW} In this section, we formally define the family of Nisan-Wigderson polynomials and mention some known results about lower bounds on their projected shifted partials complexity~\cite{KLSS, KS14, KumarSaptharishi15}. These bounds will be critically used in our proof.
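Before turning to the hard polynomials, we record a small sanity check of the discussion above; it plays no role in the proofs. The following script, which assumes the Python library \texttt{sympy} and uses purely illustrative parameters $n = 3$ and $k = \ell = 1$, illustrates \autoref{lem: complexity measure utility} on the pair $P = (x_1 + \cdots + x_n)^n$ and $Q = P \mod I_0$ from the introduction: the two polynomials have the same shifted evaluation dimension, even though the dimensions of the spans of their partial derivatives differ.
\begin{verbatim}
import itertools
import sympy as sp

n, k, ell = 3, 1, 1
xs = sp.symbols('x0:3')
ys, zs = xs[:1], xs[1:]          # a fixed partition x = y u z with |y| = 1, |z| = 2

P = sp.expand(sum(xs)**n)
# Q = P modulo the ideal <x_i^2 - x_i>: the multilinear polynomial agreeing
# with P on {0,1}^n.
_, Q = sp.reduced(P, [x**2 - x for x in xs], *xs)

def sed_dim(poly):
    # Gamma^{SED}_{k,ell} for the partition above: evaluate z-monomial shifts of
    # the restrictions poly(a, z), a in {0,1}^{|y|} of weight <= k, on {0,1}^{|z|}.
    shifts = [sp.Mul(*mon) for mon in
              itertools.combinations_with_replacement(zs, ell)]
    rows = []
    for a in itertools.product([0, 1], repeat=len(ys)):
        if sum(a) > k:
            continue
        restricted = poly.subs(dict(zip(ys, a)))
        for s in shifts:
            g = sp.expand(s * restricted)
            rows.append([g.subs(dict(zip(zs, b)))
                         for b in itertools.product([0, 1], repeat=len(zs))])
    return sp.Matrix(rows).rank()

def pd_dim(poly):
    # Dimension of the span of all partial derivatives (of all orders) of poly.
    derivs, frontier = {sp.expand(poly)}, {sp.expand(poly)}
    while frontier:
        frontier = {sp.expand(sp.diff(g, v)) for g in frontier for v in xs}
        frontier = {g for g in frontier if g != 0}
        derivs |= frontier
    dicts = [sp.Poly(g, *xs).as_dict() for g in derivs]
    monos = sorted({mon for dct in dicts for mon in dct})
    return sp.Matrix([[dct.get(mon, 0) for mon in monos] for dct in dicts]).rank()

print(sed_dim(P) == sed_dim(Q))   # True: P and Q agree on {0,1}^n
print(pd_dim(P), pd_dim(Q))       # 4 8: the derivative spans differ wildly
\end{verbatim}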
\begin{definition}[Nisan-Wigderson polynomial families]~\label{def:NW final} Let $d,m,e$ be arbitrary parameters with $m$ being a power of a prime, and $d,e\leq m$. Since $m$ is a power of a prime, let us identify the set $[m]$ with the field $\F_m$ of $m$ elements. Note that since $d \leq m$, we have that $[d] \subseteq \F_m$. The Nisan-Wigderson polynomial with parameters $d,m,e$, denoted by $\mathsf{NW}_{d,m,e}$, is defined as \[ \mathsf{NW}_{d,m,e}(\vecx) \spaced{=} \sum_{\substack{p(t) \in \F_m[t]\\ \mathsf{Deg}(p) < e}} x_{1,p(1)}\dots x_{d, p(d)} \] That is, for every univariate polynomial $p(t) \in \F_m[t]$ of degree less than $e$, we add one monomial that encodes the `graph' of $p$ on the points $[d]$. This is a homogeneous, multilinear polynomial of degree $d$ in $dm$ variables with exactly $m^e$ monomials. Furthermore, the polynomial is \emph{set-multilinear} with respect to $\vecx = \vecx_1 \sqcup \cdots \sqcup \vecx_d$ where $\vecx_i = \set{x_{i1},\cdots, x_{im}}$. \end{definition} We now state the following lemma which shows a lower bound on $\Gamma_{k, \ell}^{\mathrm{PSPD}}(\mathsf{NW}_{d,m,e})$ for an appropriate choice of parameters. We will then use this bound along with \autoref{cor: taylor mult} to show a lower bound on $\Gamma_{k, \ell}^{\mathrm{SED}}(\mathsf{NW}_{d,m,e})$. The lower bound on $\Gamma_{k, \ell}^{\mathrm{PSPD}}(\mathsf{NW}_{d,m,e})$ was shown in two independent proofs by Kayal et al.~\cite{KLSS} and by Kumar and Saraf~\cite{KS14}. The version stated below is from a strengthening of these bounds by Kumar and Saptharishi~\cite{KumarSaptharishi15}. \begin{lemma}\label{lem: psd lower bound for nw} For every $d$ and $k = O(\sqrt{d})$ there exist parameters $m,e, \epsilon$ such that $m = \Theta(d^2)$ and $\epsilon = \Theta\pfrac{\log d}{\sqrt{d}}$ with \begin{eqnarray*} m^k & \geq & (1+\epsilon)^{2(d-k)}\\ m^{e-k} & = & \pfrac{2}{1+\epsilon}^{d-k} \cdot \poly(m). \end{eqnarray*} For such a choice of parameters, let $\vecx = \setdef{x_{ij}}{i\in [d]\;,\; j\in [m]} = \vecx_1 \sqcup \cdots \sqcup \vecx_d$ where $\vecx_i = \set{x_{i1}, \ldots, x_{im}}$. Let $\vecy = \vecx_1 \sqcup \cdots \sqcup \vecx_k$ and $\vecz = \vecx \setminus \vecy$. If $\ell$ is a parameter that satisfies $\ell = \frac{n_z}{2} (1 - \epsilon)$, then over any field $\F$, we have\footnote{We remark that in the calculations in \cite{KLSS, KS14, KumarSaptharishi15}, the shifted monomials consist of both the $\vecy$ and $\vecz$ variables, while here we only shift by $\vecz$ variables. But the calculations still go through since the parameters continue to satisfy the constraints needed for soundness of the calculation. } \[ \Gamma_{k,\ell}^{\mathrm{PSPD}}(\mathsf{NW}_{d,m,e}(\vecy, \vecz)) \spaced{\geq} \binom{n_z}{\ell + d - k} \cdot \exp(-O(\log^2 d)). \] \end{lemma} From \autoref{cor: taylor mult}, we immediately have the following crucial lemma. \begin{lemma}\label{lem: NW complexity lower bound} Let $d, m, e, \ell$ be parameters as defined in \autoref{lem: psd lower bound for nw} and let $\vecy$ and $\vecz$ be the partition of the variables $\vecx$ as in \autoref{lem: psd lower bound for nw}. Then, over any field $\F$, we have \[ \Gamma_{k,\ell}^{\mathrm{SED}}(\mathsf{NW}_{d,m,e}(\vecy, \vecz)) \spaced{\geq} \binom{n_z}{\ell + d - k} \cdot \exp(-O(\log^2 d)). \] \end{lemma} \section{Functional lower bounds for depth-$3$ circuits}~\label{sec:depth 3} In this section, we complete the proof of \autoref{thm: depth 3 lower bound}.
We start by defining the exact hard polynomial for which our lower bound is shown. \subsection*{Hard polynomials for the lower bound} We will prove \autoref{thm: depth 3 lower bound} for the polynomial $\mathsf{NW}_{d,m,e}$ for an appropriate choice of the parameters. \begin{lemma}\label{lem : nw partial derivative lower bounds} Let the parameters $e$ and $d$ be chosen so that $e = d/2-1$, and let $k = e+1$. Let the variables $\vecx$ in $\mathsf{NW}_{d,m,e}$ be partitioned into $\vecy = \setdef{x_{ij}}{i\in [k], j\in [m]}$ and $\vecz = \vecx \setminus \vecy$. Then \[ \Gamma_{k, 0}^{\mathrm{SED}}(\mathsf{NW}_{d,m,e}(\vecy, \vecz)) \spaced{\geq} m^{d/2}. \] \end{lemma} \begin{proof} Let the set of monomials $S$ be defined as \[ S = \left\{\prod_{i = 1}^k x_{i,j_i} : j_i \in [m] \right\} \] Observe that for every monomial $\vecx^\alpha$ in $S$, the partial derivative of $\mathsf{NW}_{d,m,e}$ with respect to $\vecx^\alpha$ is a monomial in $\vecz$. This is due to the fact that $e < d/2$ and no two distinct univariate polynomials of degree $d/2$ can agree at more than $d/2$ points. Moreover, for every two distinct monomials $\vecx^\alpha$ and $\vecx^\beta$ in $S$, \[ \frac{{\partial \mathsf{NW}_{d,m,e}}}{{\partial \vecx^\alpha}} \spaced{\neq} \frac{\partial \mathsf{NW}_{d,m,e}}{\partial \vecx^\beta} \] Hence, \[ \Gamma_{k, 0}^{\mathrm{PSPD}}(\mathsf{NW}_{d,m,e}) = |S| = m^{d/2} \] Since $\mathsf{NW}_{d,m,e}$ is set-multilinear with respect to the rows of the variable matrix, by \autoref{lem: multilinear equivalence}, it follows that \[ \Gamma_{k, 0}^{\mathrm{SED}}(\mathsf{NW}_{d,m,e}) \geq m^{d/2} \qedhere \] \end{proof} \subsection*{Complexity of the model} \begin{lemma}\label{lem: depth 3 circuit complexity upper bound} Let $C(\vecx)$ be a $\SPS$ circuit of formal degree $d$ and top fan-in $s$. Then, for all choices of $k$ and any partition of $\vecx$ into $\vecy$ and $\vecz$, \[ \Gamma_{k, 0}^{\mathrm{SED}}(C) \spaced{\leq} s\cdot 2^d \] \end{lemma} \begin{proof} Observe that for any choice of $k$ and $\ell$, $\Gamma_{k,\ell}^{\mathrm{SED}}$ is a subadditive measure. Therefore, it is enough to upper bound the value of $\Gamma_{k, 0}^{\mathrm{SED}}$ for every product gate in $C$ by $2^d$. Let \[ Q(\vecy, \vecz) = \prod_{i = 1}^d L_i \] be any product gate of formal degree at most $d$ in $C$. Since each $L_i$ is a linear form, we can express it as $L_i = L_{yi} + L_{zi}$, where $L_{yi}$ and $L_{zi}$ are the parts of $L_i$ consisting entirely of $\vecy$ and $\vecz$ variables respectively. Therefore, \[ Q(\vecy, \vecz) = \sum_{S\subseteq [d]}\prod_{i\in S} L_{yi} \cdot \prod_{j \notin S} L_{zj} \] Now observe that \[ \left\{ Q(\veca, \vecz) : \veca \in \{0,1\}^{n_y} \right\} \spaced{\subseteq} \mathsf{Span}\left(\left\{\prod_{j \notin S} L_{zj} : S \subseteq [d]\right\} \right) \] Therefore, \[ \Gamma_{k, 0}^{\mathrm{SED}}(Q) \spaced{\leq} 2^d \] The lemma now follows by subadditivity. \end{proof} \subsection*{Wrapping up the proof} We are now ready to complete the proof of \autoref{thm: depth 3 lower bound}. \begin{theorem} Let $\F$ be any field, and let $d,m,e$ be parameters such that $e = d/2-1$ and $m = \poly(d)$. Let $C$ be a $\SPS$ circuit of formal degree $d$ which is functionally equivalent to the polynomial $\mathsf{NW}_{d,m,e}$.
Then \[ \text{Size}(C) \geq m^{d/2}/2^d \] \end{theorem} \begin{proof} Let $k=e+1$ and consider a partition of the variables into $\vecy$ and $\vecz$ where all the variables in the first $k$ rows of the variable matrix are labelled $\vecy$ and the remaining variables are labelled $\vecz$. Now, the theorem immediately follows from \autoref{lem : nw partial derivative lower bounds} and \autoref{lem: depth 3 circuit complexity upper bound}. \end{proof} \section{Functional lower bounds for depth-$4$ circuits}~\label{sec:depth 4} In this section, we prove \autoref{thm: depth 4 lower bound}. We first define the family of polynomials for which our lower bounds apply. \subsection*{Hard polynomials for the lower bound} For the proof of \autoref{thm: depth 4 lower bound}, we would have to show that a statement in the spirit of \autoref{lem: NW complexity lower bound} is also true for a \emph{random projection} of our hard polynomial. Even though we believe\footnote{In fact, \cite{KLSS, KS14} showed such statements to be true.} that this is true for the polynomial defined in \autoref{def:NW final}, for simplicity, we modify our hard polynomial and in turn prove a lower bound for the following variant of it. \begin{definition}[Hard polynomials for the lower bound] Let $d, m, e$ be parameters as defined in \autoref{def:NW final}. Let $p = p(m, d)$ be a parameter and let \[ t = \frac{dm}{p} \] The polynomial $\mathsf{NW \circ Lin}$ is defined as \[ \mathsf{NW \circ Lin}_{d,m,e,p} = \mathsf{NW}_{d, m, e}\left(L(x_{1,1}), L(x_{1,2}), \dots, L(x_{d,m}) \right) \] where for each $i \in [d], j \in [m]$, $L(x_{i,j})$ is defined as \[ L(x_{i,j}) = \sum_{u = 1}^t x_{i,j,u} \] \end{definition} For the rest of this section, we set $p = (md)^{-0.1}$, and for brevity, we will denote $\mathsf{NW \circ Lin}_{d,m,e, (md)^{-0.1}}$ by $\mathsf{NW \circ Lin}_{d,m,e}$. Observe that this setting of $p$ sets $t$ to be equal to $(md)^{1.1}$. We conclude this section with the next lemma, where we show that $\mathsf{NW \circ Lin}_{d,m,e}$ is \emph{robust} under random restrictions in which every variable is kept alive with probability $p$. \begin{lemma}\label{lem: robustness under random restrictions} Let $p$ and $t$ be as stated above and let $n = dm$. Let $P$ be a random projection of $\mathsf{NW \circ Lin}$ obtained by setting every variable in $\{x_{i,j,h} : i \in [d], j \in [m], h \in [t]\}$ to zero with probability equal to $1-p$. Then, with probability at least $1-o(1)$, $\mathsf{NW}_{d,m,e}$ is a projection of $P$. \end{lemma} \begin{proof} For every $i \in [d]$, $j \in [m]$, define the set $A_{i,j}$ as \[ A_{i,j} = \{x_{i,j,h} : h \in [t]\} \] When every variable is being set to zero with probability $1-p$, the probability that there exist an $i \in [d]$ and $j \in [m]$ such that all the variables in the set $A_{i,j}$ are set to zero is at most $dm(1-p)^t$. For $p = n^{-0.1}$, this probability is at most $n\cdot (1-n^{-0.1})^{n^{1.1}}$, which is $\exp(-\Omega(n))$. Therefore, with probability at least $1-\exp(-\Omega(n))$, each of the sets $A_{i,j}$ has at least one variable alive in $P$. Now, for each $i,j$, we set all but one of the surviving variables in $A_{i,j}$ to zero. Observe that the resulting projection of $P$ is precisely $\mathsf{NW}_{d,m,e}$ up to a relabelling of variables. This proves the lemma. \end{proof} It should be noted that the polynomial $\mathsf{NW \circ Lin}$ continues to remain set-multilinear with respect to the rows of the variable matrix.
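For concreteness, the next snippet (again assuming the Python library \texttt{sympy}, and with toy parameters $d = e = 2$, $m = 3$ that are purely illustrative and far from the regime of \autoref{lem: psd lower bound for nw}) generates $\mathsf{NW}_{d,m,e}$ directly from \autoref{def:NW final}; substituting the linear form $L(x_{i,j})$ for each variable $x_{i,j}$ then gives $\mathsf{NW \circ Lin}_{d,m,e,p}$.
\begin{verbatim}
import itertools
import sympy as sp

d, m, e = 2, 3, 2          # toy parameters: m a prime (power), d, e <= m
x = {(i, j): sp.Symbol('x_%d_%d' % (i, j))
     for i in range(1, d + 1) for j in range(m)}   # identify [m] with {0, ..., m-1}

def nw(d, m, e):
    # NW_{d,m,e}: one monomial per univariate p over F_m with deg(p) < e,
    # encoding the graph of p on the evaluation points 1, ..., d.
    terms = []
    for coeffs in itertools.product(range(m), repeat=e):   # p(t) = sum_a c_a * t^a
        monomial = sp.Integer(1)
        for i in range(1, d + 1):
            p_at_i = sum(c * pow(i, a, m) for a, c in enumerate(coeffs)) % m
            monomial *= x[(i, p_at_i)]
        terms.append(monomial)
    return sp.Add(*terms)

NW = nw(d, m, e)
print(sp.Poly(NW, *x.values()).length())   # m**e = 9 distinct set-multilinear monomials
\end{verbatim}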
\subsection*{Upper bound on the complexity of the model} We now show the upper bound on $\Gamma_{k, \ell}^{\mathrm{SED}}(C)$ when $C$ is a depth-$4$ circuit of individual degree at most $r$ and bottom support at most $s$. We will use the following upper bound on $\Gamma_{k, \ell}^{\mathrm{PSPD}}(C)$ from \cite{KLSS, KS14}. \begin{lemma}\label{lem: depth 4 circuits psd} Let $C(\vecy, \vecz)$ be a depth-$4$ circuit of formal degree at most $d$ and bottom support at most $s$. Let $k$ and $\ell$ be parameters satisfying $\ell + ks < n_z/2$. Then \[ \Gamma_{k,\ell}^{\mathrm{PSPD}}(C) \spaced{\leq} \text{Size}(C) \cdot \binom{O\left( \frac{d}{s} \right) + k }{k} \cdot \binom{n_z}{\ell + ks}\cdot \poly(n). \] \end{lemma} The following lemma now immediately follows from \autoref{lem: depth 4 circuits psd} and \autoref{lemma: eval dim vs partial derivatives}. \begin{lemma}\label{lem: depth 4 circuits sed-low bottom support} Let $C(\vecy, \vecz)$ be a depth-$4$ circuit of formal degree at most $d$, individual degree at most $r$ and bottom support at most $s$. Let $k$ and $\ell$ be parameters satisfying $\ell + krs < n_z/2$. Then \[ \Gamma_{k,\ell}^{\mathrm{SED}}(C) \spaced{\leq} \text{Size}(C) \cdot \binom{O\left( \frac{d}{s} \right) + kr }{kr} \cdot \binom{n_z}{\ell + krs}\cdot \poly(n_z). \] \end{lemma} \subsection*{Wrapping up the proof} \begin{theorem} Let $d, m, e$ be parameters as defined in \autoref{lem: psd lower bound for nw}. Let $C$ be a $\SPSP$ circuit of formal degree $d$ and individual degree at most $r = O(1)$ over any field $\F$ such that $C$ is functionally equivalent to $\mathsf{NW \circ Lin}_{d,m,e}$. Then, \[ \text{Size}(C) \geq \exp\left(\Omega\left(\sqrt{d}\log dm \right) \right) \] \end{theorem} \begin{proof} If the size of $C$ is larger than $\exp\left(\frac{\sqrt{d}\log dm}{1000r} \right)$, then we are already done; otherwise, the size of $C$ is at most $\exp\left(\frac{\sqrt{d}\log dm}{1000r} \right)$. Let us set every variable in $C$ and $\mathsf{NW \circ Lin}_{d,m,e}$ to zero independently with probability $1-(md)^{-0.1}$. The following claim easily follows via a standard application of the union bound. \begin{claim} With probability at least $1-o(1)$ over the random restrictions as defined above, every product gate at the bottom level of $C$ with support at least $\frac{\sqrt{d}}{100r}$ is set to zero. \end{claim} From the above claim and from \autoref{lem: robustness under random restrictions}, it follows that there is a $\SPSP$ circuit $C'$ of formal degree $d$ over $\F$ which is functionally equivalent to $\mathsf{NW}_{d,m,e}$. Let us relabel the variables as $\vecy$ and $\vecz$ as described in \autoref{lem: psd lower bound for nw}. Let $k = \sqrt{d}$ and let $\ell = \frac{n_z}{2}\cdot (1-\epsilon)$ where $\epsilon = O\left( \frac{\log d}{\sqrt{d}}\right)$ is to be chosen shortly.
By \autoref{lem: NW complexity lower bound}, we know that for this choice of $k$ and $\ell$ \begin{eqnarray*} \Gamma_{k,\ell}^{\mathrm{SED}}(\mathsf{NW}_{d,m,e}(\vecy, \vecz)) &\spaced{\geq}& \binom{n_z}{\ell + d - k} \cdot \exp(-O(\log^2 d))\\ &\spaced{\geq}& \binom{n_z}{\ell} \cdot (1+\epsilon)^{2d-2k} \cdot \exp(-O(\log^2 d)) \end{eqnarray*} Moreover, by \autoref{lem: depth 4 circuits sed-low bottom support}, we know that \begin{eqnarray*} \Gamma_{k,\ell}^{\mathrm{SED}}(C') &\spaced{\leq}& (dm)^{\sqrt{d}/1000r} \cdot \binom{O\left( \frac{\sqrt{d}}{r} \right) + kr }{kr} \cdot \binom{n_z}{{\ell + k\cdot r\cdot \frac{\sqrt{d}}{100r}}}\cdot \poly(n_z)\\ &\spaced{\leq}& (dm)^{\sqrt{d}/1000r} \cdot 2^{O(\sqrt{d})} \cdot \binom{n_z}{\ell} \cdot (1+\epsilon)^{\frac{d}{50}} \cdot \exp(O(\log^2 d))\\ &\spaced{\leq}& \exp{\left({\sqrt{d}\log d/100r}\right)} \cdot 2^{O(\sqrt{d})} \cdot \binom{n_z}{\ell} \cdot (1+\epsilon)^{\frac{d}{50}} \cdot \exp(O(\log^2 d)) \end{eqnarray*} Now, observe that there exists a constant $c$ such that if $\epsilon$ is set to $\frac{c\log d}{\sqrt{d}}$, then \[ \Gamma_{k,\ell}^{\mathrm{SED}}(\mathsf{NW}_{d,m,e}) > \Gamma_{k,\ell}^{\mathrm{SED}}(C') \] But this is a contradiction since $C'$ computes $\mathsf{NW}_{d,m,e}$. This completes the proof. \end{proof} \section{Open problems} We end with some open questions : \begin{itemize} \item The main challenge would be to improve \autoref{thm: depth 4 lower bound}, and prove it for the model of sums of powers of low degree polynomials. It is not clear to us if the complexity measure used in this paper would be useful. \item The functional lower bounds proved in this paper are for \emph{exact} functional computation. We believe that some of these bounds should also hold in the average case, where the circuit and the polynomial agree on a random point on $\{0,1\}^n$ with a high probability. It is not clear to us if the proof techniques in this paper can be adapted to say something in the average case setting. The most natural attempt to generalize the proofs seem to hit a \emph{matrix rigidity} like obstacle. \end{itemize} \section*{Acknowledgement} Part of this work was done while the third author was visiting Rutgers. We are grateful to Eric Allender and DIMACS for funding the visit. We are also grateful to Pravesh Kothari and Madhu Sudan for many helpful conversations. \iffalse \section{Lower bounds for sum of read once ABPs in the same order} Things to write: \begin{itemize} \item Definition of the model \item Definition of the polynomial \item Complexity measure : evaluation dimension. \item Complexity lower bound for model should be easy. There might be a reference..check with Ramprasad/Michael. \item Complexity lower bound for the polynomial should follow from one of our depth-3 statements. \end{itemize} In this section, we show functional lower bounds for polynomials of the form \begin{equation}\label{eqn: roabp model} P = \sum_{i = 1}^T R_i\cdot {Q_i}^d \end{equation} such that the following are true: \begin{itemize} \item Here, $R_1, R_2, \ldots, R_T$ are polynomials computed by roABPS in the same order. \item $Q_1, Q_2, \ldots, Q_T$ are polynomials of degree at most $\Delta$, and \item The individual degree of $P$ is at most $r = O(1)$. 
\end{itemize} The complexity measure we use will be \fi \iffalse \section{Elementary symmetric polynomials}\label{sec:esym} Nisan and Wigderson~\cite{nw94} showed that any homogeneous depth-$3$ circuit computing the elementary symmetric polynomial $(SYM_{d})$ of degree $d$ on $n$ variables has size at least $2^{-d}\cdot \binom{n}{d/2}$. The lower bound argument in\cite{nw94} uses the dimension of the span of partial derivatives of polynomials as a complexity measure, and the lower bound is for syntactic computation. It turns out that we were interested in functional computation over $\{0,1\}^n$, there are much better upper bounds over the fields of large characteristic. The observation is a simple consequence of Newton identities and we formally state the bound below. First we state the following lemma of Shpilka and Wigderson~\cite{sw2001}. \begin{lemma}[Shpilka-Wigderson~\cite{sw2001}]\label{lem: sw depth four upper bound} Let $\F$ be any field of characteristic at least $d+1$. Then, there is a $\sum\prod\sum\bigwedge$ circuit $C$ of size at most $2^{O(\sqrt{d})}\cdot n$ which syntactically computes $SYM_d$ over $\F$. \end{lemma} The depth-$3$ functional upper bound immediately follows from this. \begin{lemma}\label{lem: esym upper bound} Let $d$ be a parameter such that $1\leq d \leq n$. Let $\F$ be a field of characteristic at least $d+1$. Then, there exists a $\sum\prod\sum$ circuit $C$ of size at most $2^{O(\sqrt{d})\dot \poly(n)}$ and formal degree at most $d$ such that for every $\vecx \in \{0,1\}^n$, \[ SYM_d(\vecx) = C(\vecx) \] \end{lemma} \begin{proof} The proof follows from \autoref{lem: sw depth four upper bound} and the observation that we can replace each monomial $x^i$ by $x$, if we only intend to preserve the values over $\{0,1\}^n$. This results in a depth three circuit of formal degree at most $d$. \end{proof} \fi \bibliographystyle{customurlbst/alphaurlpp}
{ "timestamp": "2016-05-16T02:10:43", "yymm": "1605", "arxiv_id": "1605.04207", "language": "en", "url": "https://arxiv.org/abs/1605.04207", "abstract": "We say that a circuit $C$ over a field $F$ functionally computes an $n$-variate polynomial $P$ if for every $x \\in \\{0,1\\}^n$ we have that $C(x) = P(x)$. This is in contrast to syntactically computing $P$, when $C \\equiv P$ as formal polynomials. In this paper, we study the question of proving lower bounds for homogeneous depth-$3$ and depth-$4$ arithmetic circuits for functional computation. We prove the following results :1. Exponential lower bounds homogeneous depth-$3$ arithmetic circuits for a polynomial in $VNP$.2. Exponential lower bounds for homogeneous depth-$4$ arithmetic circuits with bounded individual degree for a polynomial in $VNP$.Our main motivation for this line of research comes from our observation that strong enough functional lower bounds for even very special depth-$4$ arithmetic circuits for the Permanent imply a separation between ${\\#}P$ and $ACC$. Thus, improving the second result to get rid of the bounded individual degree condition could lead to substantial progress in boolean circuit complexity. Besides, it is known from a recent result of Kumar and Saptharishi [KS15] that over constant sized finite fields, strong enough average case functional lower bounds for homogeneous depth-$4$ circuits imply superpolynomial lower bounds for homogeneous depth-$5$ circuits.Our proofs are based on a family of new complexity measures called shifted evaluation dimension, and might be of independent interest.", "subjects": "Computational Complexity (cs.CC)", "title": "Functional lower bounds for arithmetic circuits and connections to boolean circuit complexity", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9867771774699746, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.7090925523936538 }
https://arxiv.org/abs/math/0508024
Volume preserving codimension one Anosov flows in dimensions greater than three are suspensions
We show that every volume preserving codimension one Anosov flow on a closed Riemannian manifold of dimension greater than three admits a global cross section and is therefore topologically conjugate to a suspension of a linear toral automorphism. This proves a conjecture of Verjovsky from the 1970's in the volume preserving case.
\section{Introduction} The theory of hyperbolic dynamical systems, despite its long history, still abounds with open fundamental problems. Among these is the following \begin{vconj} Every codimension one Anosov flow on a closed Riemannian manifold of dimension greater than three admits a global cross section. \end{vconj} Verjovsky stated the conjecture in \cite{verj74} for all dimensions with an additional assumption that the fundamental group of the manifold is solvable. This was proved by Plante~\cite{plante81,plante83} and Armendariz~\cite{armendariz}, who showed that the conjecture is true if and only if the fundamental group of the manifold is solvable. In the above form, the conjecture first appeared in Ghys~\cite{ghys89}. However, Ghys has pointed out that Verjovsky had originally proposed it in the 1970's. In \cite{ghys89}, Ghys showed that the conjecture is true if the sum $E^{su} = E^{ss} \oplus E^{uu}$ of the strong bundles of the flow is of class $C^1$ or if the codimension one center stable bundle $E^{cs}$ is $C^2$ and the flow preserves volume. The first result of Ghys was generalized in \cite{sns+96} to Lipschitz $E^{su}$. The second one was extended in \cite{sns+97} to the case when $E^{su}$ is $\mathrm{Lip}-$ or when $E^{cs}$ is $C^{1 + \mathrm{Lip}-}$, where Lip-- means $C^\theta$ for all $\theta \in (0,1)$. In a related work, Bonatti and Guelman~\cite{bg+06} showed that if the time one map of a codimension one Anosov flow can be $C^1$ approximated by an Axiom A diffeomorphism with more than one attractor, then the flow is topologically equivalent to the suspension of an Anosov diffeomorphism. In this paper, we prove the following result. \begin{mainthm} Verjovsky's conjecture is true for volume preserving flows. More precisely, every volume preserving codimension one Anosov flow on a closed Riemannian manifold of dimension greater than three can be $C^1$ approximated by a $C^\infty$ flow of the same type whose synchronization admits a global cross section with constant first-return time. \end{mainthm} By synchronization we mean a suitable reparametrization of the flow that makes the strong unstable cocycle be of the form $e^{ct}$, i.e., independent of the space variable (see \S\ref{sbs:sync}). The induced Poincar\'e map on a global cross section is automatically a codimension one Anosov diffeomorphism, $f$. Franks~\cite{franks70} proved that if the non-wandering set of $f$ is the whole manifold, then $f$ is topologically conjugate to a linear toral automorphism. By a result of Newhouse~\cite{newhouse70}, this is indeed the case for every codimension one Anosov diffeomorphism. We therefore obtain the following classification. \begin{classif} Every volume preserving codimension one Anosov flow on a closed Riemannian manifold of dimension greater than three is topologically equivalent to a suspension of a linear toral automorphism. \end{classif} Recall that two flows are topologically equivalent if there exists a homeomorphism which takes orbits of one flow to orbits of the other, preserving the orientation but not necessarily preserving the time parameter. \subsection*{Outline of the proof} Given a $C^1$ volume preserving codimension one Anosov flow on a $C^\infty$ closed Riemannian manifold $M$ of dimension $n > 3$, the goal is to show there exists a topologically equivalent flow with jointly integrable (see \S\ref{sec:anosov}) strong foliations. The main difficulty is the lack of smoothness of the strong stable distribution $E^{ss}$. 
We start by $C^1$ approximating the original flow by a $C^1$ flow with a continuous Oseledets splitting (Step 1). To do this, we use the work of Bochi-Viana~\cite{bochi+viana+05} and Bessa~\cite{bessa+05} (see \S\ref{sbs:lyap}). Next, we use the density result of Arbieto and Matheus~\cite{arbieto+03} to $C^1$ approximate again. We obtain a $C^\infty$ volume preserving codimension one Anosov flow such that either: (A) the dimension of the top Lyapunov bundle is one and the sum of the remaining Lyapunov bundles in the strong stable bundle is continuous on the whole manifold, or (B) the top Lyapunov exponent of its synchronization is less than $\tau = (2-\theta)^{-1}$, where $\theta$ is the H\"older exponent of its strong stable bundle. Next, we \emph{synchronize} (\S\ref{sbs:sync}) the flow to obtain a $C^{1 + \mathrm{H\ddot{o}lder}}$ volume preserving codimension one Anosov flow $\{ f_t \}$, topologically equivalent to the original one, and satisfying $\det Tf_t \! \restriction_{E^{uu}} \equiv e^t$ (Step 2). For the reverse flow $f_{-t}$, the Oseledets splitting $TM = E_1 \oplus \cdots \oplus E_\ell$, corresponding to Lyapunov exponents $\chi_1 < \cdots < \chi_\ell$, satisfies either (A) or (B) above. Observe that $E_1 = E^{uu}$, $E_2 = E^c$, and $E_3 \oplus \cdots \oplus E_\ell = E^{ss}$. We show that for this flow, the foliations $W^{ss}$ and $W^{uu}$ are jointly integrable. For any $p \in M$ and $q \in W^{ss}_\mathrm{loc}(p)$, let $\huu_{p,q}: W^{cs}_\mathrm{loc}(p) \to W^{cs}_\mathrm{loc}(q)$ be the strong unstable holonomy (\S\ref{sbs:su-disk}). We prove that $T\huu_{p,q}$ takes $E^{ss}$ to itself. In case (A), this is done in Steps 3A, 4, and 5. In case (B), it is done in Steps 3B and 5. In Step 3A, we show $T\huu_{p,q}(F_{\ell-1}) \subset E^{ss}$, where $F_{\ell-1} = E_3 \oplus \cdots \oplus E_{\ell-1}$ is the invariant subbundle of $E^{ss}$ consisting of vectors whose growth rate relative to $Tf_{-t}$ is not the maximal possible one, $\chi_\ell$. In Step 3B, which treats case (B), we show that $T\huu_{p,q}$ takes the \emph{whole} bundle $E^{ss}$ onto itself. Step 4, which is a continuation of Step 3A, shows that $T\huu_{p,q}$, in fact, takes the whole bundle $E^{ss}$ onto itself. This is done using some simple linear algebra of differential forms. Step 5 completes the proof by showing how $T\huu_{p,q}(E^{ss}) = E^{ss}$ implies the existence of a global cross section to the original flow. The proofs of Steps 3A and 3B are based on one key estimate (Theorem~\ref{thm:estimate}). Let $\alpha$ be a 1-form on $M$ dual to $E^{su} = E^{ss} \oplus E^{uu}$, defined by \begin{equation} \label{eq:alpha} \mathrm{Ker}(\alpha) = E^{su} \qquad \text{and} \qquad \alpha(X) = 1. \end{equation} By Proposition~\ref{prop:key}, joint integrability of $W^{ss}$ and $W^{uu}$ is equivalent to the vanishing of the integral of $\alpha$ over the boundary of any (small) $su$-disk $D$. An $su$-disk $D$ (\S\ref{sbs:su-disk}) is a smooth 2-disks foliated by arcs of strong unstable manifolds, with piecewise smooth boundary consisting of two opposing strong unstable arcs, and one strong stable arc (the ``base'' of $D$) opposite a center stable arc (\textsc{Fig.}~\ref{fig:disc}). If $D$ is an $su$-disk, then (Proposition~\ref{prop:su}) the area of $f_{-t}D$ tends to zero, as $t \to \infty$. To take advantage of this fact we need a suitable estimate of the integral of $\alpha$ over $\partial D$ which involves the area of $D$. 
If $\alpha$ were $C^1$, then by Stokes' theorem, $\abs{\int_{\partial D} \alpha} \leq \norm{\alpha}_{C^1} \abs{D}$. For general H\"older forms, such estimates are hard to come by and are not suitable for our purposes. However, for the very special form $\alpha$, it is possible to derive an estimate in terms of both the circumference $\abs{\partial D}$ and area $\abs{D}$. The derivation is based on an analysis trick, which goes as follows. We regularize $\alpha$ to obtain a smooth form $\alpha^\varepsilon$ such that $\norm{\alpha^\varepsilon - \alpha}_{C^0} \lesssim \varepsilon^\theta$. However, along $W^{cs}$-leaves, we can ensure $\norm{(\alpha^\varepsilon - \alpha) \! \! \restriction_{W^{cs}}}_{C^0} \lesssim \varepsilon$. This yields, essentially, $\abs{\int_{\partial D} \alpha} \lesssim \abs{\partial D} \varepsilon + \abs{D} \varepsilon^{\theta-1}$. The trick is to minimize over $\varepsilon$. If $\varepsilon$ is allowed to range over a sufficiently large interval $(0,\varepsilon_0)$, then the minimum of the right hand side is $\abs{\partial D}^{1-\tau} \abs{D}^\tau$, where $\tau = 1/(2-\theta)$. To show that the integral of $\alpha$ over $\partial D$ vanishes, we use the flow invariance of $\alpha$, $f_t^\ast \alpha = \alpha$, and apply the key estimate to the integral of $\alpha$ over $\partial f_{-t} D$. For this to work, we need to decompose $D$ into smaller $su$-disks (\textsc{Fig.}~\ref{fig:Di}). We obtain $\abs{\int_{\partial f_{-t} D} \alpha} \lesssim \abs{\partial f_{-t} D}^{1-\tau} \, \abs{f_{-t} D}^\tau$ (cf., \eqref{eq:crucial}). In case (A), the right hand side goes to zero, as $t \to \infty$, if the base $\gamma$ of $D$ lies in a certain open set of full measure and is tangent to $F_{\ell-1}$, i.e., if the length of $f_{-t}(\gamma)$ does not grow at the fastest possible speed, $e^{\chi_\ell t}$. This implies that $T\huu_{p,q}(F_{\ell-1}) \subset E^{ss}$. In case (B), the same statement holds for all $su$-disks $D$ in an open set of full measure, implying $T\huu_{p,q}(E^{ss}) \subset E^{ss}$. The paper is organized as follows. In \S\ref{sec:anosov}, we review the necessary basics of Anosov flows and the existence of global cross sections. In \S\ref{sec:prelim}, we prove a series of preparatory results on Lyapunov exponents (\S\ref{sbs:lyap}), synchronization (\S\ref{sbs:sync}), $su$-disks (\S\ref{sbs:su-disk}), and regularization (\S\ref{sbs:reg}). The key estimate is proved in \S\ref{sbs:estimate}. The proof of the main theorem is given in \S\ref{sec:proof}. \subsection*{Acknowledgments} We thank an anonymous referee and M\'ario Bessa, Christian Bonatti, Federico and Jana Rodriguez Hertz, Yakov Pesin, Charles Pugh, and Marcelo Viana for their helpful comments and suggestions. \section{Anosov flows, cross sections, and suspensions} \label{sec:anosov} A non-singular smooth flow $\Phi = \{ f_t \}$ on a closed (compact and without boundary) Riemannian manifold $M$ is called \textsf{Anosov} if there exists a $Tf_t$-invariant continuous splitting of the tangent bundle, \begin{equation*} TM = E^{uu} \oplus E^c \oplus E^{ss}, \end{equation*} and constants $C > 0$, $0 < \nu < 1$, and $\lambda > 1$ such that for all $t \geq 0$, \begin{equation*} \norm{Tf_t \! \restriction_{E^{ss}}} \leq C \nu^t \qquad \qquad \text{and} \qquad \qquad \norm{Tf_t \! \restriction_{E^{uu}}} \geq C \lambda^t. \end{equation*} The center bundle $E^c$ is one dimensional and generated by the vector field $X$ tangent to the flow. 
We call $E^{uu}, E^{ss}, E^{cu} = E^c \oplus E^{uu}$, and $E^{cs} = E^c \oplus E^{ss}$ the strong unstable, strong stable, center unstable, and center stable bundle, respectively. We also set $E^{su} = E^{ss} \oplus E^{uu}$. Typically these bundles are only continuous, but they are uniquely integrable \cite{anosov+67}, giving rise to continuous foliations denoted by $W^{uu}, W^{ss}, W^{cu}$, and $W^{cs}$, respectively. Recall that a distribution $E$ is called \textsf{uniquely integrable} (or simply \textsf{integrable}) if it is tangent to a foliation and every differentiable curve everywhere tangent to $E$ is wholly contained in a leaf of the foliation. If the flow is $C^{1 + \mathrm{H\ddot{o}lder}}$, then these foliations are in fact absolutely continuous (\S\ref{sbs:su-disk}). By a classical result of Anosov~\cite{anosov+67}, Anosov flows are also structurally stable and, if they preserve a $C^1$ volume form, ergodic. If $\dim E^{uu} = 1$, we call the Anosov flow \textsf{of codimension one}~\cite{mats+95}. (The assumption $\dim E^{ss} = 1$ works just as well, since we can reverse the direction of the flow.) Verjovsky~\cite{verj+sp,verj+70,verj74} showed that if $\dim M > 3$, then codimension one Anosov flows are topologically transitive and the universal covering space of $M$ is $\R^n$. \\ \paragraph{\textbf{Regularity}} In general, the bundles $E^{uu}, E^{ss}, E^{cs}$, and $E^{cu}$ are only H\"older continuous (see \cite{hps77}). The H\"older invariant section theorem (see \cite{hps77,shubbook+87,psw+97}) implies that if two Anosov vector fields are $C^1$ close, then the H\"older exponents of their strong stable bundles are close. That is, the H\"older exponent $\theta(X)$ of the strong stable bundle $E^{ss}$ for $X$ varies continuously with $X$ in the $C^1$ topology. The foliations $W^{uu}, W^{ss}, W^{cs}$, and $W^{cu}$ are H\"older (cf., \cite{psw+97}) in the sense that their holonomy maps are uniformly H\"older, but their leaves are as smooth as the flow. If the flow is of codimension one and $n > 3$, then $E^{cs}$ is of class $C^{1 + \theta}$, for some $\theta \in (0,1)$. If in addition it preserves a $C^1$ volume form, then $E^{uu}$ is also of class $C^{1 + \theta}$, for some $\theta \in (0,1)$ (cf., \cite{hps77}). This implies that their holonomies are $C^{1 + \theta}$; see~\cite{psw+97}. A general note on regularity of foliations is in order here. Following~\cite{psw+97}, the usual variants of the definition of a $C^r$ foliation are: \begin{enumerate} \item[(a)] the leaves are tangent to a $C^r$ distribution; \item[(b)] the foliation charts are $C^r$ diffeomorphisms; \item[(c)] the leaves and the local holonomy maps along them are uniformly $C^r$. \end{enumerate} In this paper, we use (a). Since the center stable and strong unstable distributions are $C^{1+\theta}$, the relevant value of $r$ is $1 + \theta$. According to \cite{psw+97}, when $r = 1 + \theta$, $0 \leq \theta \leq \text{Lip}$, the relations among the above definitions are: $(\text{a}) \Rightarrow (\text{b})$, $(\text{b}) \not\Rightarrow (\text{a})$, and $(\text{b}) \Leftrightarrow (\text{c})$. (More can be said: by Hart's smoothing theorem, a foliation satisfying (b) is diffeomorphic by an ambient $C^r$ diffeomorphism, to a foliation satisfying (a).) Therefore, for $E^{cs}$ and $E^{uu}$, statements (a), (b) and (c) are all true. 
\\ \paragraph{\textbf{Cross sections}} Recall that a smooth compact codimension one submanifold $\Sigma$ of $M$ is called a \textsf{(global) cross section} for a flow if it intersects every orbit transversely. If this is the case, then every point $p \in \Sigma$ returns to $\Sigma$, defining the Poincar\'e or first-return map $g: \Sigma \to \Sigma$. The flow can then be reconstructed by \textsf{suspending} $g$ under the roof function equal to the first-return time~\cite{fried+82,katok+95,schwartz+57}. Existence of global cross sections to Anosov flows was studied by Plante in his Ph.D. thesis. He showed: \begin{plante}[\cite{plante72}] Let $\{ f_t \}$ be an Anosov flow. \begin{enumerate} \item[(a)] If $E^{su}$ is integrable, then the flow admits a smooth global cross section. \item[(b)] If the flow is of codimension one and $E^{su}$ is integrable, then every leaf of the corresponding foliation is a global cross section with \textsf{constant} first-return time. \item[(c)] $E^{su}$ is integrable if and only if the foliations $W^{ss}$ and $W^{uu}$ are jointly integrable. \end{enumerate} \end{plante} Foliations $W^{ss}$ and $W^{uu}$ are \textsf{jointly integrable} if in every foliation chart for $W^{ss}$ and $W^{uu}$, the $W^{uu}$-holonomy (\S\ref{sbs:su-disk}) takes $W^{ss}$-plaques to $W^{ss}$-plaques. The opposite situation is that of $su$-accessibility, where any two points of $M$ can be connected by a continuous path consisting of finitely many smooth arc alternately in $W^{ss}$ and $W^{uu}$ \cite{ps+04}. Plante's Theorem will be the main tool for proving the existence of a global cross section. Note that if the first-return time is constant, then the periods of all periodic points are rationally dependent. This property is clearly not robust, since it can be destroyed by a small non-trivial time-change. Reparametrization will consequently play an important role in the proof. \section{Preliminaries} \label{sec:prelim} This section contains preparatory results on Lyapunov exponents, synchronization, holonomy, $su$-disks, regularization, and the key estimate. \subsection{Lyapunov Exponents} \label{sbs:lyap} Let $\Phi = \{f_t\}$ be a $C^1$ flow on a compact manifold $M$. For $x \in M$ and $v \in T_x M \setminus \{ 0 \}$, recall that the \textsf{Lyapunov exponent} of $v$ is defined by \begin{displaymath} \chi = \lim_{t \to \infty} \frac{1}{t} \log \norm{T_x f_t (v)}. \tag{$\diamond$} \end{displaymath} This means that $\norm{T_x f_t(v)} \sim e^{\chi t} \norm{v}$, as $\abs{t} \to \infty$. If this limit exists, the set of vectors in $T_x M$ (including zero) with the same Lyapunov exponent $\chi$ is a vector subspace of $T_x M$, which we call the \textsf{Lyapunov space} of $\chi$ and denote by $E^\chi(x)$. The fundamental properties of Lyapunov exponents and their Lyapunov spaces are described by the celebrated \begin{oseledec}[\cite{barreira+pesin+01,osel+68}] \label{thm:osel} Suppose that $\Phi = \{ f_t \}$ is a $C^1$ flow preserving a Borel probability measure $\mu$ on a compact manifold $M$. Then there exists a set $\ms{R} \subset M$ of full measure such that every point in $\ms{R}$ is \textsf{Lyapunov regular}. 
This means that for every $x \in \ms{R}$ there exists a splitting, called the \textsf{Oseledets splitting} of $\Phi$, \begin{equation} \label{eq:splitting} T_x M = \bigoplus_{i=1}^{\ell(x)} E_i(x), \end{equation} and numbers $\chi_1 < \cdots < \chi_\ell$ such that: \begin{enumerate} \item[(a)] The bundles $E_i$ are $\Phi$-invariant, \begin{displaymath} T_x f_t(E_i(x)) = E_i(f_tx), \end{displaymath} and depend Borel measurably on $x$. \item[(b)] For all $v \in E_i(x) \setminus \{0\}$, \begin{displaymath} \lim_{\abs{t} \to \infty} \frac{1}{t} \log \norm{T_x f_t (v)} = \chi_i(x), \end{displaymath} that is, $E_i(x) = E^{\chi_i}(x)$. The convergence is uniform on the unit sphere in $E_i(x)$. \item[(c)] For any $I, J \subset \{ 1, \ldots, \ell(x) \}$ with $I \cap J = \emptyset$, the angle function is tempered, i.e., \begin{displaymath} \lim_{\abs{t} \to \infty} \frac{1}{t} \log \sphericalangle(T_x f_t(E_I(x)),T_x f_t(E_J(x))) = 0, \end{displaymath} where $E_I = \bigoplus_{i \in I} E_i$. \item[(d)] For every $x \in \ms{R}$, \begin{displaymath} \lim_{\abs{t} \to \infty} \frac{1}{t} \log \det T_x f_t = \sum_{i=1}^{\ell(x)} \chi_i(x) \dim E_i(x). \end{displaymath} \item[(e)] There is a corresponding decomposition of the cotangent bundle, \begin{displaymath} T_x^\ast M = \bigoplus_{i=1}^{\ell(x)} E_i^\ast(x). \end{displaymath} The bundles $E_i^\ast$ depend Borel measurably on $x \in \ms{R}$ and are $\Phi$-invariant in the sense that \begin{displaymath} T_x' f_t(E_i^\ast(x)) = E_i^\ast(f_t x), \end{displaymath} where \begin{displaymath} T_x' f_t = \left( T_x^\ast f_t \right)^{-1} : T_x^\ast M \to T_{f_t x}^\ast M \end{displaymath} is the inverse of the codifferential $T_x^\ast f_t = (T_x f_t)^\ast$ of $f_t$. \item[(f)] If $\Phi$ is ergodic with respect to $\mu$, then the functions $\ell$ and $\chi_i$ are $\mu$-almost everywhere constant. \end{enumerate} \end{oseledec} One can also speak of \emph{forward} (or positive) and \emph{backward} (or negative) regularity, where one considers only $t \to + \infty$ or $t \to -\infty$, respectively. Note that the Oseledets splitting need not be defined on the whole manifold nor do the above limits have to be uniform. However, it turns out that for a large set of systems one \emph{can} expect a certain amount of uniformity, in the sense explained below. \begin{defn}[Dominated splitting] For a diffeomorphism $f : M \to M$, we say that a $Tf$-invariant splitting $T_\Lambda M = E \oplus F$ over an $f$-invariant set $\Lambda$ is \textsf{dominated} (denoted by $E \prec F$) if there exists an $n \in \N$ and a constant $\sigma < 1$ such that for all $x \in \Lambda$, \begin{equation*} \norm{T_x f^n \! \! \restriction_{E(x)}} \leq \sigma m(T_x f^n \! \! \restriction_{F(x)}). \end{equation*} \end{defn} Here $m(L)$ denotes the minimum norm of a linear transformation $L$: $m(L) = \inf \{ \norm{Lv} : \norm{v} = 1 \}$. This means that for $v \in T_\Lambda M \setminus (E \cup F)$, the directions of the forward iterates of $v$ converge to $F$ and those of its backward iterates converge to $E$. The definition for flows is analogous. More generally, we say that a splitting $T_\Lambda M = E_1 \oplus \cdots \oplus E_k$ into an arbitrary number of invariant subbundles is \textsf{dominated} if for every $1 \leq i < k$, \begin{equation*} E_1 \oplus \cdots \oplus E_i \prec E_{i+1} \oplus \cdots \oplus E_k. \end{equation*} Subsequently, when talking about a dominated splitting, we will always be referring to this general definition of the term. 
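The following numerical sketch is not part of the paper; it merely illustrates the definition of the Lyapunov exponent in ($\diamond$) and the dominated splitting inequality above in the simplest possible setting, the constant-coefficient linear flow $f_t = e^{tA}$ on $\R^3$ with $A = \mathrm{diag}(-1,0,1/2)$, whose exponents are just the diagonal entries. All names in the script are ad hoc.
\begin{verbatim}
import numpy as np

# Linear flow f_t(x) = exp(tA) x on R^3 with A = diag(-1, 0, 0.5);
# its Lyapunov exponents are the diagonal entries of A.
exponents = np.array([-1.0, 0.0, 0.5])

def Tf(t):
    # T_x f_t = exp(tA), independent of x for a linear flow
    return np.diag(np.exp(t * exponents))

def lyapunov(v, t=200.0):
    # finite-time version of the limit (diamond): (1/t) log ||T_x f_t(v)||
    return np.log(np.linalg.norm(Tf(t) @ v)) / t

print(lyapunov(np.array([1.0, 0.0, 0.0])))   # close to -1.0
print(lyapunov(np.array([0.0, 0.0, 1.0])))   # close to  0.5
print(lyapunov(np.array([1.0, 1.0, 1.0])))   # close to  0.5 (largest exponent wins)

# Domination E = span(e1) < F = span(e2, e3) for the time-n map:
# ||T f^n restricted to E|| <= sigma * m(T f^n restricted to F) with sigma < 1.
n = 5
Dfn = Tf(float(n))
norm_E = np.linalg.norm(Dfn[:, 0])                        # equals e^{-n}
m_F = min(np.linalg.svd(Dfn[1:, 1:], compute_uv=False))   # minimum norm m(.) = 1
print(norm_E <= 0.5 * m_F)                                # True
\end{verbatim}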
Bochi and Viana~\cite{bochi+viana+05} showed that there exists a residual (dense $G_\delta$) set $\ms{D}$ in the space of $C^1$ volume preserving diffeomorphisms of $M$ such that for every $f \in \ms{D}$ and almost every point $x$, either \textsf{(a)} all Lyapunov exponents of $f$ are zero at $x$, or \textsf{(b)} the Oseledets splitting of $f$ is dominated on the orbit of $x$. If $f$ is ergodic, this means that either \textsf{(a)} all exponents vanish at almost every point or \textsf{(b)} the Oseledets splitting extends \emph{continuously} to a dominated splitting on the whole manifold~\cite{bochi+viana+05}. The results of Bessa~\cite{bessa+05} for volume preserving non-singular flows in dimension three and Bochi-Viana~\cite{bochi+viana+05} for volume preserving diffeomorphisms extend to volume preserving Anosov flows in any dimension~\cite{viana+pers05}. Namely, for the $C^1$ generic volume preserving Anosov flow, the Oseledets splitting is dominated and extends continuously over the whole underlying manifold (the other alternative in the dichotomy does not apply, since an Anosov flow cannot have all its Lyapunov exponents equal to zero). Let $\Phi = \{ f_t \}$ now be a volume preserving codimension one Anosov flow on $M$, $n > 3$, and let \eqref{eq:splitting} be the Oseledets splitting relative to the \textsf{reverse} flow $f_{-t}$. Recall that $\Phi$ is topologically transitive~\cite{verj74} and, being volume preserving, ergodic with respect to the Lebesgue measure, so $\ell$ and the $\chi_i$ are a.e. constant functions. Observe that $\chi_1 < 0$, $\chi_2 = \chi(X) = 0$, $\chi_i > 0$, for $3 \leq i \leq \ell$, and $E^{ss}(x) = E_3(x) \oplus \cdots \oplus E_\ell(x)$. Let \begin{equation} \label{eq:Fk} F_k = \bigoplus_{i=3}^k E_i. \end{equation} The above discussion implies that for the $C^1$-generic $\{ f_t \}$, the bundles $F_k$ are continuous. \\ \paragraph{\textbf{Oseledets Regularity functions}} For a fixed Lyapunov exponent $\chi = \chi_i$ corresponding to the Lyapunov bundle $E = E_i$, $\varepsilon > 0$, and $x \in \ms{R}$, define $R_\varepsilon(x)$ to be the infimum of all numbers $R \geq 1$ such that the following inequalities hold for all $t \geq 0$: \begin{align*} R^{-1} e^{(\chi-\varepsilon)t} & \leq \norm{T_x^E f_t} \leq R e^{(\chi+\varepsilon)t}, \\ R^{-1} e^{(-\chi-\varepsilon)t} & \leq \norm{T_x^E f_{-t}} \leq R e^{(-\chi+\varepsilon)t}, \\ R^{-1} e^{(-\chi-\varepsilon)t} & \leq \norm{T_x^E f_t'} \leq R e^{(-\chi+\varepsilon)t}, \\ R^{-1} e^{(\chi-\varepsilon)t} & \leq \norm{T_x^E f_{-t}'} \leq R e^{(\chi+\varepsilon)t}. \end{align*} Here $T^E f_t$ denotes the restriction of $Tf_t$ to $E$ and $T^E f_t' = \left\{ ( T^E f_t )^\ast \right\}^{-1}$. We refer to $R_\varepsilon : \ms{R} \to [1,\infty)$ as an \textsf{Oseledets regularity function} (relative to $\chi$ and $\varepsilon$), or simply a \textsf{regularity function} (we borrowed the name from \cite{ps+89}; see also \cite{barreira+pesin+01}). An immediate corollary of the definition of $R_\varepsilon$ is that \begin{equation} \label{eq:T} R_\varepsilon(x)^{-1} e^{-\varepsilon \abs{t}} \leq \frac{\norm{T_x^E f_t}}{e^{\chi t}} \leq R_\varepsilon(x) e^{\varepsilon \abs{t}}, \end{equation} for all $x \in \ms{R}$ and $t \in \R$. The following result was proved in \cite{sns+reg+07}. \begin{thm} \label{thm:reg} If $E$ is continuous on the entire manifold $M$, then for every $\varepsilon > 0$ there exists an open set $V_\varepsilon$ of full measure in $M$ such that $R_\varepsilon$ is locally bounded on $V_\varepsilon$. 
\end{thm} \begin{proof}[Sketch of proof] Since $E$ is continuous on $M$, it follows that $R_\varepsilon$ is lower semicontinuous on $M$, as the supremum of a collection of continuous functions. This implies that the sets $H_k = \{ x \in M: R_\varepsilon(x) \leq k\}$ are closed. Since their union equals $M$, by the Baire category theorem at least one of them has nonempty interior, hence contains an open set $U$. Then $V_\varepsilon = \bigcup_{t \in \R} f_t(U)$ is open and has full measure, by ergodicity. Furthermore, $R_\varepsilon$ is locally bounded on it. This follows from the fact that $R_\varepsilon$ is a slowly varying function: $\sqrt{R_\varepsilon(x)} e^{-\varepsilon \abs{t}} \leq R_\varepsilon(f_t x) \leq R_\varepsilon(x)^2 e^{2 \varepsilon \abs{t}}$, for all $x$ and $t$. For details, see \cite{sns+reg+07}. \end{proof} If $\Phi$ is a volume preserving codimension one Anosov flow, for each $3 \leq k \leq \ell$ and $\varepsilon > 0$, we can also consider the regularity function $R_\varepsilon^k$ responsible for the bundle $F_k$ defined in \eqref{eq:Fk}, relative to the reverse flow $f_{-t}$. This function is defined by the requirement that it satisfy \eqref{eq:T} with $E, t$ replaced by $F_k,-t$, respectively. An argument analogous to that in the proof of Theorem~\ref{thm:reg} yields the following \begin{cor} \label{cor:Rk} If $F_k$ is continuous on $M$, then for each $\varepsilon > 0$ there exists an open set of full measure on which $R_\varepsilon^k$ is locally bounded. \\ \end{cor} \subsection{Synchronization} \label{sbs:sync} In this section we show how to reparametrize an Anosov flow to obtain another Anosov flow with $\det T_x f_t \! \restriction_{E^{uu}} \equiv e^{c t}$, where $c > 0$ is a constant. This technique is called \textsf{synchronization} and was first described by Parry in \cite{parry86} who used it to obtain a system for which the SRB measure coincides with the measure of maximal entropy. A similar result, with mildly different assumptions, was proved in \cite{sns+97}. The construction goes as follows. Let $\{ f_t \}$ be a transitive $C^r$ ($r = k + \mathrm{H\ddot{o}lder}$, $k \geq 2$) Anosov flow on $M$ such that $E^{cs}$ and $E^{uu}$ are of class $C^{1 + \theta}$, for some $0 < \theta < 1$. This is the case if the flow is of codimension one and preserves a $C^1$ volume form on a manifold of dimension $> 3$, which we now assume. Without loss of generality, we may also assume that $E^{uu}$ is orientable. (Otherwise, pass to a double cover of $M$.) Let $Y$ be a $C^{1 + \theta}$ unit vector field generating $E^{uu}$; its flow is denoted by $\{ \phi_t \}$ throughout the paper. Let $\lambda(x,t) = \det T_x f_t \restriction_{E^{uu}}$ and define \begin{equation} \label{eq:psi} \psi(x) = \left. \frac{d}{dt} \right\rvert_{t=0} \log \lambda(x,t). \end{equation} It is not hard to see (cf., \cite{sns+97}) that $\psi$ is of class $C^{1+\mathrm{H\ddot{o}lder}}$ and that there exists a Riemann structure $\mathcal{R}_\ast$ on $M$ with respect to which $\psi > 0$. This Riemann structure is as smooth as $E^{uu}$ and $E^{cs}$, i.e., $C^{1+\theta}$. Reparametrize $X$ by \begin{equation*} \tilde{X} = \frac{1}{\psi} X. \end{equation*} It is a well known theorem of Anosov and Sinai \cite{anosov+sinai+67} that $\tilde{X}$ generates an Anosov flow $\{ \tilde{f}_t \}$. Furthermore \cite{sns+97}, \begin{equation*} \det T_x \tilde{f}_t \! \restriction_{\tilde{E}^{uu}} \equiv e^t, \end{equation*} where $\tilde{E}^{uu}$ denotes the strong unstable bundle of the new flow. 
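The identity above can be made plausible by the following back-of-the-envelope computation; it is not part of the formal argument of \cite{sns+97} and works with $E^{uu}$ rather than with the tilted bundle $\tilde{E}^{uu}$. By the cocycle property $\lambda(x,t+s) = \lambda(f_t x, s)\, \lambda(x,t)$ and \eqref{eq:psi}, we have $\frac{d}{dt} \log \lambda(x,t) = \psi(f_t x)$, hence $\log \lambda(x,T) = \int_0^T \psi(f_s x)\, ds$. Writing $\tilde{f}_t(x) = f_{\varrho(x,t)}(x)$ with $\frac{\partial \varrho}{\partial t}(x,t) = 1/\psi(\tilde{f}_t x)$ (this time-change function reappears in the proof of Proposition~\ref{prop:LE-sync} below), the substitution $s = \varrho(x,u)$ gives \begin{displaymath} \log \lambda\bigl(x, \varrho(x,t)\bigr) = \int_0^{\varrho(x,t)} \psi(f_s x)\, ds = \int_0^t \psi(\tilde{f}_u x)\, \frac{du}{\psi(\tilde{f}_u x)} = t, \end{displaymath} so, measured along $E^{uu}$, the unstable Jacobian of the reparametrized flow grows exactly like $e^t$; the precise statement for $\tilde{E}^{uu}$ requires the more careful argument of \cite{sns+97}.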
\begin{defn} The reparametrized flow $\{ \tilde{f}_t \}$ is called the \textsf{synchronization} of $\{ f_t \}$. \end{defn} Reparametrization alters the strong bundles but does not change the center bundles, i.e., $\tilde{W}^{cs} = W^{cs}$ and $\tilde{W}^{cu} = W^{cu}$. The new strong unstable bundle $\tilde{E}^{uu}$ can be expressed as (cf., \cite{parry86}) \begin{equation*} \tilde{E}^{uu} = \{ w + \xi(w) X: w \in E^{uu} \}, \end{equation*} where $\xi$ is a continuous 1-form on $E^{uu}$ defined by \begin{equation} \label{eq:xi} \xi_x(w) = \frac{1}{\psi(x)} \int_0^\infty d(\psi \circ f_{-t})(w) \, dt. \end{equation} There is an analogous characterization of the strong stable bundle $\tilde{E}^{ss}$: there exists a continuous 1-form $\eta$ on $E^{ss}$ such that \begin{equation} \label{eq:eta} \tilde{E}^{ss} = \{ v + \eta(v)X : v \in E^{ss} \}. \end{equation} Let us look at the regularity of $\tilde{E}^{uu}$ more closely. The synchronized flow is only $C^{1 + \mathrm{H\ddot{o}lder}}$, so we cannot use the $C^1$-Section Theorem~\cite{hps77} to show that $\tilde{E}^{uu}$ is $C^1$. However, we know that $W^{uu}$ is of class $C^{1+\theta}$ and has leaves as smooth as the system, i.e., $C^r$. Recall that the adapted Riemann structure $\mathcal{R}_\ast$ is required to have the following properties \cite{sns+97}: (\textsf{i}) $E^{uu}$ is orthogonal to $E^{cs}$ relative to $\mathcal{R}_\ast$; (\textsf{ii}) $\mathcal{R}_\ast$ coincides with the original Riemann structure on $E^{cs}$. Thus we can assume that along the $W^{uu}$-leaves, $\mathcal{R}_\ast$ is as smooth as the flow, i.e., $C^r$. Recall that $\lambda(x,t)$ is the Jacobian determinant of the $C^r$ map $f_t$ between $C^r$ leaves of the $C^1$ foliation $W^{uu}$. This implies that $x \mapsto \lambda(x,t)$ is $C^1$ and $t \mapsto \lambda(x,t)$ is $C^r$; however, in the $W^{uu}$-direction, $x \mapsto \lambda(x,t)$ is as smooth as $Tf_t$, i.e., $C^{r-1}$. By \eqref{eq:psi}, $\psi$ is $C^{r-1}$ in the $W^{uu}$-direction. Thus if $r \geq 3$, then $d\psi(Y)$ is at least of class $C^1$. By \eqref{eq:xi}, we have \begin{align*} \xi(Y(x)) & = \frac{1}{\psi(x)} \int_0^\infty d(\psi \circ f_{-t})(Y(x)) \, dt \\ & = \frac{1}{\psi(x)} \int_0^\infty d\psi(T f_{-t}(Y(x))) \, dt \\ & = \frac{1}{\psi(x)} \int_0^\infty \lambda(x,-t) \, d\psi(Y(f_{-t} x)) \, dt, \end{align*} which implies that $\xi(Y)$ is $C^1$. Therefore, $\tilde{Y} = Y + \xi(Y) X$ is $C^1$, so we have the following \begin{lem} \label{lem:uu} The strong unstable bundle $\tilde{E}^{uu}$ of the synchronized flow is of class $C^1$. \end{lem} If $\{ f_t \}$ preserves a $C^1$ volume form $\Omega$, then $\{ \tilde{f}_t \}$ preserves $\tilde{\Omega} = \psi \Omega$. However, $\tilde{\Omega}$ does not have to equal the volume form defined by the adapted Riemann structure $\mathcal{R}_\ast$, but since Lyapunov exponents are independent of the Riemann structure, this makes no difference in the subsequent analysis. The flow $\{ \tilde{f}_t \}$ is $C^{1 + \mathrm{H\ddot{o}lder}}$, so the H\"older invariant section theorem applies and guarantees that its strong stable foliation is H\"older~\cite{hps77,shubbook+87,psw+97}. Furthermore, we have: \begin{prop} \label{prop:LE-sync} Suppose $\{ \tilde{f}_t \}$ is the synchronization (or, more generally, a $C^1$ reparametrization) of a volume preserving codimension one Anosov flow $\{ f_t \}$ on $M$, $n > 3$. 
Let $\{ \chi_1, \ldots, \chi_\ell \}$, $\{ \tilde{\chi}_1, \ldots, \tilde{\chi}_{\tilde{\ell}} \}$ be the Lyapunov exponents of $f_{-t}, \tilde{f}_{-t}$ corresponding to Oseledets decompositions $\bigoplus E_i$, $\bigoplus \tilde{E}_j$ over regular sets $\ms{R}, \tilde{\ms{R}}$, respectively. Then $\tilde{\ms{R}} = \ms{R}$ and: \begin{enumerate} \item[(a)] For every $x \in \ms{R}$ and all $3 \leq i \leq \ell$, \begin{equation*} \tilde{E}_i (x) = \{ v + \eta(v) X: v \in E_i(x) \}, \end{equation*} where $X$ is the infinitesimal generator of $\{ f_t \}$ and $\eta$ is the 1-form in \eqref{eq:eta}. In particular, $\tilde{E}_i$ and $E_i$ have the same dimension and $\tilde{\ell} = \ell$. \item[(b)] There exists a constant $C$ such that $\tilde{\chi}_i = C \: \chi_i$, for all $1 \leq i \leq \ell$. \end{enumerate} \end{prop} \begin{proof} Since $\{ f_t \}$ and $\{ \tilde{f}_t \}$ have the same orbits, there exists a $C^1$-function $\varrho: M \times \R \to \R$ such that \begin{equation*} \tilde{f}_t(x) = f_{\varrho(x,t)}(x). \end{equation*} It is not hard to see that \begin{equation*} \varrho(x,t) = \int_0^t \frac{ds}{\psi(\tilde{f}_s x)}, \end{equation*} where $\psi : M \to \R_+$ is the $C^1$ time-change. Let $x$ be a regular point for $f_t$ and let $v \in E_i(x) \setminus \{ 0 \}$. Set $\tilde{v} = v + \eta(v) X$. Then \begin{align*} T\tilde{f}_t(\tilde{v}) & = Tf_{\varrho(x,t)}(\tilde{v}) + \frac{\partial \varrho}{\partial x}(\tilde{v}) X \\ & = Tf_{\varrho(x,t)}(v) + \left\{ \eta(v) \left[ 1 + \frac{\partial \varrho}{\partial x}(X) \right] + \frac{\partial \varrho}{\partial x}(v) \right\} X. \end{align*} Since $T\tilde{f}_t(\tilde{v}) \in \tilde{E}^{ss}$ and $Tf_{\varrho(x,t)}(v) \in E^{ss}$, it follows that \begin{equation*} T\tilde{f}_t(\tilde{v}) = Tf_{\varrho(x,t)}(v) + \eta(Tf_{\varrho(x,t)}(v)) X. \end{equation*} Thus $\norm{T\tilde{f}_t(\tilde{v})} \sim \norm{Tf_{\varrho(x,t)}(v)} \sim e^{\varrho(x,t) \chi_i} \norm{v}$, as $t \to \pm \infty$, so \begin{equation*} \tilde{\chi}(\tilde{v}) = \lim_{t \to -\infty} \frac{\varrho(x,t)}{t} \chi_i = C \chi_i, \end{equation*} where $C = \int_M (1/\psi)$, by Birkhoff's Ergodic Theorem. Therefore, $x \in \tilde{\ms{R}}$, and $\tilde{v}$ belongs to the Lyapunov space for $\tilde{f}_{-t}$ corresponding to $\tilde{\chi}_i = C \: \chi_i$. \end{proof} \begin{cor} \label{cor:sync} If the Oseledets splitting of $\{ f_t \}$ is continuous on all of $M$, then so is that of its synchronization $\{ \tilde{f}_t \}$. \end{cor} Now let us drop the tildes and assume $\{ f_t \}$ is synchronized. Then we have the following characterization of the Lyapunov exponents of $f_{-t}$. \begin{prop} \label{prop:LE} Suppose $\{f_t\}$ is a synchronized volume preserving codimension one Anosov flow and $n > 3$. \begin{enumerate} \item[(a)] If $n = 4$, then $\chi_3 \leq 1/2$. If $\dim E_3 = 2$, then $\chi_\ell = 1/2$. \item[(b)] If $n > 4$, then $\chi_{\ell-1} + \chi_\ell < 1$. In particular, $\chi_{\ell-1} \leq 1/2$. If in addition, $\dim E_\ell > 1$, then $\chi_\ell < 1/2$. \end{enumerate} \end{prop} \begin{proof} Note first that $\chi_1 = \chi(Y) = -1$. Then by part (d) of the Oseledets's Multiplicative Ergodic Theorem, \begin{equation*} \sum_{i=3}^\ell \chi_i \dim E_i = 1, \end{equation*} which easily implies (a). If $n > 4$, then $1 \geq \chi_{\ell-1} \dim E_{\ell-1} + \chi_\ell \dim E_\ell \geq \chi_{\ell-1} + \chi_\ell$, where at least one inequality is strict. This yields (b). 
\end{proof} \paragraph{\textbf{Standing Assumption}} Unless stated otherwise, in the remainder of the paper all flows are assumed to be \textsf{synchronized, volume preserving, codimension one and Anosov on a $C^\infty$ closed Riemannian manifold of dimension $n > 3$}. \\ \subsection{Holonomy and $su$-Disks} \label{sbs:su-disk} If $\ms{F}$ is a continuous foliation with $C^1$ leaves, we define its holonomy as follows. Fix a foliation chart $U$, a point $p \in U$, and $q$ in the plaque $\ms{F}_U(p)$. Choose $C^1$ disks $D_p$, $D_q \subset U$ transverse to $\ms{F}$, with $p \in D_p$, $q \in D_q$. Then the \textsf{holonomy} of $\ms{F}$ relative to $D_p, D_q$ is the map $\mathbf{h}^\ms{F}: D_p \to D_q$ defined by sliding points along the plaques of $\ms{F}$. Namely, for $x \in D_p$, $\mathbf{h}^\ms{F}(x) = y$ if $\{ y \} = \ms{F}_U(x) \cap D_q$. This defines a homeomorphism between $D_p$ and a subset of $D_q$. If $T\ms{F}$ is $C^1$, then so is $\mathbf{h}^\ms{F}$ \cite{psw+97}. Denote by $\Jac_x(\mathbf{h}^\ms{F})$ the Jacobian determinant of $\mathbf{h}^\ms{F}$ at $x$. Recall that for a linear isomorphism $T:V \to W$ between inner product spaces, the determinant of $T$ is the volume of the parallelepiped spanned by $T(e_1), \ldots, T(e_k)$, where $\{ e_1, \ldots, e_k \}$ is an orthonormal basis for $V$. If for every choice of $U, D_p$, and $D_q$, $\mathbf{h}^\ms{F}$ sends sets of measure zero in $D_p$ to sets of measure zero in $D_q$, we say that $\ms{F}$ is \textsf{absolutely continuous}. Then by the Radon-Nikodym theorem, $\Jac(\mathbf{h}^\ms{F})$ is well-defined. It is a classical result of Anosov~\cite{anosov+67} that the invariant foliations of a $C^{1 + \mathrm{H\ddot{o}lder}}$ Anosov system are absolutely continuous. In our case, foliations under consideration, $W^{cs}$ and $W^{uu}$, are $C^{1 + \theta}$, so the holonomy is actually continuously differentiable in the usual sense~\cite{psw+97}. Let $U \subset M$ now be a foliation chart for both $W^{cs}$ and $W^{uu}$. If $p \in U$, $q \in W^{uu}_\mathrm{loc}(p)$, and $x \in W^{ss}_\mathrm{loc}(p)$, the $uu$- and $cs$-holonomy are $C^1$ maps \begin{equation*} \huu_{p,q} : W^{cs}_\mathrm{loc}(p) \to W^{cs}_\mathrm{loc}(q), \qquad \qquad \hcs_{p,x} : W^{uu}_\mathrm{loc}(p) \to W^{uu}_\mathrm{loc}(x). \end{equation*} Similarly, we can define the $cu$-holonomy $\hcu_{p,q}: W^{ss}_{\mathrm{loc}}(p) \to W^{ss}_{\mathrm{loc}}(q)$. Note that if the unstable manifolds $W^{uu}$ are parametrized by the flow $\{ \phi_t \}$ of $Y \in E^{uu}$, then $\hcs_{p,x}$ can be regarded as a map between intervals of real numbers. For simplicity, we will later make a slight abuse of notation and identify these two versions of $\hcs_{p,x}$. For a simple $C^1$ path $\gamma: [0,1] \to W^{ss}_{\mathrm{loc}}(p)$ from $p$ to $x$, define a closed piecewise $C^1$ path $\Gamma$ as follows. Let $\Gamma$ be the sum of $-\gamma$, the $uu$-arc $[p,q]_{uu}$ from $p$ to $q$, the $C^1$-path $\huu_{p,q}(\gamma)$, and the $uu$-arc $[\huu_{p,q}(x),x]_{uu}$ from $\huu_{p,q}(x)$ to $x$ (see \textsc{Fig.}~\ref{fig:disc}). Let $D_\gamma$ be the 2-disk foliated by $W^{uu}$ whose boundary is $\Gamma$. \begin{defn} $D = D_\gamma$ is called an $su$-\textsf{disk} with \textsf{base} $\gamma$. \end{defn} Further let $(\partial D)^{cs} = \huu_{p,q}(\gamma) - \gamma$ and $(\partial D)^{uu} = [p,q]_{uu} - [x,\huu_{p,q}(x)]_{uu}$ be the $cs$- and $uu$-component of $\partial D$. Define a 1-form $\omega$ by requiring $\mathrm{Ker}(\omega) = E^{cs}$ and $\omega(Y) = 1$. 
It was shown in \cite{sns+97} that (for a synchronized flow) \begin{displaymath} d\omega = \alpha \wedge \omega, \end{displaymath} where $\alpha$ is the 1-form defined in \eqref{eq:alpha}. Recall a result from foliation theory (see, for instance, Exercise 2.3.16 on p.66 in \cite{candel+99} as well as (17.20) in \cite{anosov+67}): let $\ms{F}$ be a $C^1$ codimension one foliation such that: \begin{enumerate} \item[(i)] $T\ms{F} = \mathrm{Ker}(\omega)$, for some $C^1$ 1-form $\omega$; \item[(ii)] $d \omega = \alpha \wedge \omega$, for some continuous 1-form $\alpha$; \item[(iii)] $p_0, p_1$ lie in the same plaque of $\ms{F}$; \item[(iv)] $\Sigma_i$ is a transversal for $\ms{F}$ passing through $p_i$ ($i=0, 1$) and $h : \Sigma_0 \to \Sigma_1$ is the corresponding holonomy map of $\ms{F}$; \item[(v)] $x_0 \in \Sigma_0$ and $h(x_0) = x_1 \in \Sigma_1$; \item[(vi)] $\sigma$ is a $C^1$ path in the leaf $\ms{F}(x_0)$ connecting $x_0$ and $x_1$. \end{enumerate} Then $\log h'(x_0) = \int_\sigma \alpha$. Since $\int_{\huu_{p,q}(\gamma)} \alpha = \int_{\partial D} \alpha$, this yields \begin{equation} \label{eq:su_hol} \log \Jac_q(\hcs_{p,x}) = \int_{\partial D} \alpha. \end{equation} We now have the following characterization of joint integrability. \begin{figure}[htbp] \centerline{ \psfrag{p}[][]{$p$} \psfrag{q}[][]{$q$} \psfrag{x}[][]{$x$} \psfrag{Wp}[][]{$W^{cs}_\mathrm{loc}(p)$} \psfrag{Wq}[][]{$W^{cs}_\mathrm{loc}(q)$} \psfrag{Wup}[][]{$W^{uu}_\mathrm{loc}(p)$} \psfrag{Wux}[][]{$W^{uu}_\mathrm{loc}(x)$} \psfrag{hc}[][]{$\hcs_{p,x}$} \psfrag{hu}[][]{$\huu_{p,q}$} \psfrag{g}[][]{$\gamma \subset W^{ss}_\mathrm{loc}(p)$} \psfrag{D}[][]{$D$} \psfrag{hg}[][]{$\huu_{p,q}(\gamma)$} \includegraphics[width=0.6\hsize]{su.eps}} \caption{An $su$-disk $D$ with base $\gamma$.} \label{fig:disc} \end{figure} \begin{prop} \label{prop:key} The following statements are equivalent.\footnote{Parts (d) and (e) are not used in the remainder of the paper.} \begin{enumerate} \item[(a)] $W^{ss}$ and $W^{uu}$ are jointly integrable. \item[(b)] $T_x \huu_{p,q}(E^{ss}) = E^{ss}$, for all $p \in M$, $q \in W^{uu}_\mathrm{loc}(p)$, and $x \in W^{ss}_\mathrm{loc}(p)$. \item[(c)] $\int_{\partial D} \alpha = 0$, for every small $su$-disk $D$. \item[(d)] $\Jac_q (\hcs_{p,x}) = 1$, for all $p \in M$, $x \in W^{ss}_\mathrm{loc}(p)$, and $q \in W^{uu}_\mathrm{loc}(p)$. \item[(e)] $\Jac_x (\hcu_{p,q}) = 1$, for all $p \in M$, $x \in W^{ss}_\mathrm{loc}(p)$, and $q \in W^{uu}_\mathrm{loc}(p)$. \end{enumerate} \end{prop} \begin{proof} Equivalence of (a) and (b) is clear. Part (a) implies (c) by definition, since $E^{ss} \subset \mathrm{Ker}(\alpha)$. Assume (c) and let $D$ be a small $su$-disk as above. Recall that its base is the path $\gamma:[0,1] \to W^{ss}_\mathrm{loc}(p)$. For $0 < s \leq 1$, set $\gamma_s = \gamma\! \restriction_{[0,s]}$. Then, by assumption, $\int_{\huu_{p,q}(\gamma_s)} \alpha = 0$, for all $0 < s \leq 1$. This means that $\huu_{p,q}(\gamma)$ is entirely contained in $W^{ss}_\mathrm{loc}(q)$. Therefore, $W^{ss}$ and $W^{uu}$ are jointly integrable. Parts (c) and (d) are equivalent by \eqref{eq:su_hol}. To show that (d) and (e) are equivalent, we use the following fact, proved in \cite{sns+97}. If $\Omega$ is a $C^1$ volume form preserved by the flow and $\Theta = i_X i_Y \Omega$, where $i_X$ denotes the contraction by $X$, then $\mathrm{Ker}(\Theta) = E^{cu}$ and $d\Theta = - \alpha \wedge \Theta$. 
Therefore, by an analogue of \eqref{eq:su_hol} for $W^{cu}$ (note that $E^{cu}$ is of class $C^{1+\theta}$), we have \begin{align*} \log \Jac_x (\hcu_{p,q}) & = \int_{\partial D} (-\alpha) \\ & = - \log \Jac_q (\hcs_{p,x}). \qedhere \end{align*} \end{proof} \paragraph{\textbf{Metric properties of $su$-disks}} For any $x \in M$, $s \in \R$, and $v \in E^{ss}(x)$, let \begin{equation*} T \phi_s(v) = a_u(s,v) Y + a_c(s,v) X + Z_s(v), \end{equation*} where, as before, $\{ \phi_s \}$ is the flow of $Y \in E^{uu}$ and $Z_s(v) \in E^{ss}(\phi_s x)$. (We apologize for the overuse of the letter $s$.) We will need the following auxiliary result. \begin{lem} \label{lem:Z} \begin{enumerate} \item[(a)] For all $x \in M$ and $s \in \R$, $Z_s : E^{ss}(x) \to E^{ss}(\phi_s x)$ is a linear bundle isomorphism covering $\phi_s$. The map $s \mapsto Z_s$ is continuous and $Z_0 = \mathrm{identity}$. \item[(b)] If $v \in E^{ss}(x)$, then \begin{equation*} \lim_{t \to \infty} \frac{\norm{Tf_{-t}(Z_s(v))}}{\norm{Tf_{-t}(v)}} = 1. \end{equation*} In particular, $Z_s(F_{\ell-1}) = F_{\ell-1}$ $($cf., \eqref{eq:Fk}$)$. \end{enumerate} \end{lem} \begin{proof} Part (a) is easy to check. To prove (b), note that since $\{ f_t \}$ is synchronized, we have $f_{-t} \circ \phi_s = \phi_{s e^{-t}} \circ f_{-t}$, so \begin{equation*} Tf_{-t} (Z_s(v)) = Z_{se^{-t}}(Tf_{-t}(v)). \end{equation*} Therefore, \begin{equation*} \min_{0 \leq r \leq s e^{-t}} \norm{Z_r} \leq \frac{\norm{Tf_{-t}(Z_s(v))}}{\norm{Tf_{-t}(v)}} \leq \max_{0 \leq r \leq s e^{-t}} \norm{Z_r}. \end{equation*} As $t \to \infty$, both the left and right hand side converge to $\norm{Z_0} = 1$. \end{proof} \begin{cor} \label{cor:D-reg} Let $\gamma$ be a simple $C^1$ path in $W^{ss}_\mathrm{loc}(p)$ and let $D = D_\gamma$ be an $su$-disk with base $\gamma$. If almost every point of $\gamma$ is backward Lyapunov regular, then so is almost every point of $D$. Furthermore, if $\gamma$ is a.e. tangent to $F_k$, for some $3 \leq k \leq \ell$, then the Lyapunov exponents of the tangent vectors of $\huu_{p,q} \circ \gamma$ are $\leq \chi_k$. \end{cor} \begin{proof} Let $S$ denote the set of backward Lyapunov regular points in $\gamma$. By assumption, $S$ has full measure in $\gamma$. Assume $x \in S$ and let $y \in D \cap W^{uu}_\text{loc}(x)$ be arbitrary. Then $y = \phi_r(x)$, for some $r \in \R$. Let $v \in T_y M$, $v \neq 0$, be an arbitrary vector. We claim that the limit of $(1/t) \log \norm{T_y f_{-t}(v)}$ exists, as $t \to \infty$. It is enough to consider the cases $v \in E^{cu}$ and $v \in E^{ss}$. In the former case, $(1/t) \log \norm{T_y f_{-t}(v)}$ converges to $0$ or $-1$, as $t \to \infty$. In the latter one, by Lemma~\ref{lem:Z}(b), $\norm{T_y f_{-t}(v)}$ is asymptotically equivalent to $\norm{T_x f_{-t}(Z_{-r}(v))}$, which converges as $t \to \infty$, since $x$ is a backwards regular point. This proves that $y$ is backward regular. Since almost every point of $D$ is of this form, we obtain the first assertion. To prove the second one, observe that $(\mathbf{h}^{uu}_{p,q} \circ \gamma)'(r) = T \mathbf{h}^{uu}_{p,q} (\dot{\gamma}(r)) = a_c X + Z_\rho(\dot{\gamma}(r))$, for some $a_c$ and $\rho$ depending on $r$. The Lyapunov exponent of this vector is the greater of the Lyapunov exponents of $X$ and $Z_\rho(\dot{\gamma}(r))$, which is $\chi(Z_\rho(\dot{\gamma}(r)))$. By Lemma~\ref{lem:Z}(b), $\chi(Z_\rho(\dot{\gamma}(r)) = \chi(\dot{\gamma}(r)) \leq \chi_k$ for a.e. $r$, which completes the proof. 
\end{proof} Given an $su$-disk $D = D_\gamma$ as above, we parametrize it by \begin{equation*} \Psi(r,s) = \huu_{p,\phi_r p}(\gamma(s)), \end{equation*} where $0 \leq r \leq \kappa$, for some $\kappa > 0$, and $0 \leq s \leq 1$. Express the $W^{uu}$-holonomy $\huu_{p,q}$ as \begin{equation} \label{eq:huu} \huu_{p,q}(x) = \phi_{\hcs_{p,x}(q)}(x), \end{equation} where the $W^{cs}$-holonomy $\hcs_{p,x}: W^{uu}_\mathrm{loc}(p) \to W^{uu}_\mathrm{loc}(x)$ is regarded as a real-valued function. Since $\huu_{p,q}$ takes $W^{cs}_\mathrm{loc}(p)$ to $W^{cs}_\mathrm{loc}(q)$, differentiating with respect to $x$ in the direction of $v \in E^{ss}(x)$, we obtain $d_x \hcs_{p,x}(v) = - a_u(\hcs_{p,x}(q),v)$ and \begin{equation*} T_x \huu_{p,q}(v) = a_c(\hcs_{p,x}(q),v) X + Z_{\hcs_{p,x}(q)}(v). \end{equation*} Therefore, \begin{equation*} \frac{\partial \Psi}{\partial s} = a_c(\hcs_{p,\gamma(s)}(\phi_r p),\dot{\gamma}(s)) X + Z_{\hcs_{p,\gamma(s)}(\phi_r p)}(\dot{\gamma}(s)). \end{equation*} It follows from \eqref{eq:huu} that \begin{equation*} \frac{\partial \Psi}{\partial r} = \Jac_{\phi_r p}(\hcs_{p,\gamma(s)}) Y. \end{equation*} The area element of $D$ is $\left\lVert \frac{\partial \Psi}{\partial r} \wedge \frac{\partial \Psi}{\partial s} \right\rVert$. Recall that $\abs{\partial D}$ denotes the circumference of the boundary of $D$ and $\abs{D}$ its area. Since $f_{-t} \circ \Psi$ is a parametrization of $f_{-t}D$, the area of $f_{-t} D$ can be estimated as follows: \begin{align*} \abs{f_{-t}D} & = \iint\limits_{[0,\kappa]\times [0,1]} \left\lVert Tf_{-t} \left( \frac{\partial \Psi}{\partial r} \wedge \frac{\partial \Psi}{\partial s} \right) \right\rVert \, dr \, ds \\ & = \iint\limits_{[0,\kappa]\times [0,1]} \left\lVert Tf_{-t} \left( a_c(\hcs_{p,\gamma(s)}(\phi_r p),\dot{\gamma}(s)) \Jac_{\phi_r p}(\hcs_{p,\gamma(s)}) X \wedge Y \right. \right. \\ & \qquad \qquad + \left. \left. \Jac_{\phi_r p}(\hcs_{p,\gamma(s)}) Z_{\hcs_{p,\gamma(s)}(\phi_r p)}(\dot{\gamma}(s)) \wedge Y \right) \right\rVert \, dr \, ds \\ & \leq \iint\limits_{[0,\kappa]\times [0,1]} \abs{a_c(\hcs_{p,\gamma(s)}(\phi_r p),\dot{\gamma}(s))} \Jac_{\phi_r p}(\hcs_{p,\gamma(s)}) \norm{Tf_{-t}(X \wedge Y)} \, dr \, ds \\ & \qquad \qquad + \iint\limits_{[0,\kappa]\times [0,1]} \Jac_{\phi_r p}(\hcs_{p,\gamma(s)}) \norm{Tf_{-t}(Y \wedge Z(r,s))} \, dr \, ds \\ & \leq K \kappa e^{-t} + K \iint\limits_{[0,\kappa]\times [0,1]} \left\lVert Tf_{-t}(Y \wedge Z(r,s)) \right\rVert \, dr \, ds, \tag{$\flat$} \end{align*} where $Z(r,s) = Z_{\hcs_{p,\gamma(s)}(\phi_r p)}(\dot{\gamma}(s))$ and \begin{equation*} K = \sup \left\{ \max\left( \abs{a_c(\hcs_{p,\gamma(s)}(\phi_r p), \dot{\gamma}(s))} \, \Jac_{\phi_r p}(\hcs_{p,\gamma(s)}), \Jac_{\phi_r p}(\hcs_{p,\gamma(s)}) \right) : (r,s) \in [0,\kappa] \times [0,1] \right\}. \end{equation*} \begin{prop} \label{prop:su} Let $D = D_\gamma$ be an $su$-disk as above, with $\gamma \subset W_{\mathrm{loc}}^{ss}(p)$. Then: \begin{enumerate} \item[(a)] $e^{-t} \abs{\partial f_{-t}D} \to 0$ and $\abs{f_{-t} D} \to 0$, as $t \to \infty$. 
\item[(b)] If $\gamma$ is tangent to the bundle $F_k \subset E^{ss}$, for some $3 \leq k \leq \ell$, then for every $\varepsilon > 0$, \begin{displaymath} \abs{\partial f_{-t} D} \leq \abs{\partial D} \norm{R_\varepsilon^k}_{L^\infty(\partial D)} e^{(\chi_k + \varepsilon)t}, \end{displaymath} and \begin{displaymath} \abs{f_{-t} D} \leq A \norm{R_\varepsilon^k}_{L^\infty(D)} e^{(\chi_k + \varepsilon - 1)t}, \end{displaymath} for all $t \geq 0$, where $A$ is a constant depending only on $D$ and the flow. \end{enumerate} \end{prop} \begin{rem} The norms $\norm{R_\varepsilon^k}_{L^\infty(\partial D)}, \norm{R_\varepsilon^k}_{L^\infty(D)}$ may, of course, be infinite. \end{rem} \begin{proof} (a) If $c:[0,1] \to \partial D$ is a piecewise $C^1$ parametrization of $\partial D$, then as $t \to \infty$, \begin{align*} e^{-t} \abs{\partial f_{-t}D} & = e^{-t} \int_0^1 \norm{Tf_{-t}(\dot{c}(s))} \, ds \\ & = \int_0^1 \norm{Tf_{-t}(\dot{c}(s) \wedge Y)} \, ds \\ & \leq \int_0^1 C \nu^{(n-3)t} \norm{\dot{c}(s) \wedge Y} \, ds \\ & \leq C \nu^{(n-3)t} \abs{\partial D}. \end{align*} The inequality $\norm{Tf_{-t}(v \wedge Y)} \leq C \nu^{(n-3)t} \norm{v \wedge Y}$, for $v \in E^{ss}$, was proved in \cite{ghys89} (Lemma 1.2) and \cite{sns+97} (Lemma 3.1). Using $\norm{Tf_{-t}(Y \wedge Z(r,s))} \leq C \nu^{(n-3)t} \norm{Y \wedge Z(r,s)}$ and ($\flat$), we obtain \begin{displaymath} \abs{f_{-t} D} \leq K \kappa e^{-t} + K C \nu^{(n-3)t} \iint\limits_{[0,\kappa]\times [0,1]} \norm{Y \wedge Z(r,s)} \, dr \, ds, \end{displaymath} which converges to zero, as $t \to \infty$. \\ (b) Since $\dot{\gamma}$ is tangent to $F_k$, by Corollary~\ref{cor:D-reg} the Lyapunov exponents of the tangent vectors to $\huu_{p,q} \circ \gamma$ are $\leq \chi_k$. Therefore, $\norm{Tf_{-t}(\dot{c}(s))} \leq \norm{R_\varepsilon^k}_{L^\infty(\partial D)} e^{(\chi_k + \varepsilon)t} \norm{\dot{c}(s)}$, for all $t \geq 0$. This yields \begin{displaymath} \abs{\partial f_{-t}D} \leq \int_0^1 \norm{R_\varepsilon^k}_{L^\infty(\partial D)} e^{(\chi_k + \varepsilon)t} \norm{\dot{c}(s)} \: ds = \abs{\partial D} \norm{R_\varepsilon^k}_{L^\infty(\partial D)} e^{(\chi_k + \varepsilon)t}. \end{displaymath} Also by Corollary~\ref{cor:D-reg}. \begin{displaymath} \norm{Tf_{-t}(Z(r,s))} \leq R_\varepsilon^k(\Psi(r,s)) e^{(\chi_k + \varepsilon)t} \norm{Z(r,s)}. \end{displaymath} Since $\norm{Tf_{-t}(Y)} = e^{-t}$, it follows that \begin{displaymath} \norm{Tf_{-t}(Y \wedge Z(r,s))} \leq R_\varepsilon^k(\Psi(r,s)) e^{(\chi_k + \varepsilon - 1)t} \norm{Z(r,s)}. \end{displaymath} Using ($\flat$), we obtain \begin{displaymath} \abs{f_{-t}D} \leq K \kappa e^{-t} + K \norm{R_\varepsilon^k}_{L^\infty(D)} e^{(\chi_k + \varepsilon - 1)t} \iint_{[0,\kappa] \times [0,1]} \norm{Z(r,s)} \: dr ds. \end{displaymath} Taking $A = 2 K \kappa \max \left\{ \norm{Z(r,s)} : (r,s) \in [0,\kappa] \times [0,1] \right\}$, we obtain the second statement in (b). \end{proof} \begin{cor} \label{cor:tau} Suppose $\gamma$ is tangent to $F_k$ and $\chi_k< \tau < 1$. If $\norm{R_\varepsilon^k}_{L^\infty(D)}$ is finite for some $\varepsilon < \tau - \chi_k$, then \begin{displaymath} \lim_{t \to \infty} \abs{\partial f_{-t}D}^{1-\tau} \abs{f_{-t}D}^\tau = 0. \end{displaymath} \end{cor} \begin{proof} Let $0 < \varepsilon < \tau - \chi_k$. 
Then by Proposition~\ref{prop:su}(b), \begin{align*} \abs{\partial f_{-t}D}^{1-\tau} \abs{f_{-t}D}^\tau & \leq \left\{ \abs{\partial D} \norm{R_\varepsilon^k}_{L^\infty(\partial D)} e^{(\chi_k + \varepsilon)t} \right\}^{1-\tau} \: \left\{ A \norm{R_\varepsilon^k}_{L^\infty(D)} e^{(\chi_k + \varepsilon - 1)t} \right\}^\tau \\ & \leq A^\tau \abs{\partial D}^{1-\tau} \norm{R_\varepsilon^k}_{L^\infty(D)} e^{(\chi_k + \varepsilon - \tau)t} \\ & \to 0, \end{align*} as $t \to \infty$. \end{proof} \subsection{Regularization} \label{sbs:reg} We now review a well known method of approximating locally integrable functions by smooth ones. Suppose $u: \R^n \to \R$ is locally integrable and define its \textsf{regularization} (or mollification) by the convolution $u^\varepsilon = \eta_\varepsilon * u$, where $\eta_\varepsilon(x) = \varepsilon^{-n} \eta\left(\frac{x}{\varepsilon}\right)$, $\varepsilon > 0$, and $\eta: \R^n \to \R$ is the \textsf{standard mollifier}~\cite{evans+98,stein+70} \begin{equation*} \eta(x) = \begin{cases} A \exp \left( \frac{1}{\abs{x}^2 - 1} \right) & \text{if $\abs{x} < 1$} \\ 0 & \text{if $\abs{x} \geq 1$}, \end{cases} \end{equation*} with $A$ chosen so that $\int \eta \: dx = 1$. Note that the support of $\eta_\varepsilon$ is contained in the ball of radius $\varepsilon$ centered at $0$ and $\int \eta_\varepsilon \, dx = 1$. \begin{prop} \label{prop:reg} Let $u: \R^n \to \R$ be locally integrable. Then: \begin{enumerate} \item[(a)] $u^\varepsilon \in C^\infty(\R^n)$. \item[(b)] If $u \in L^\infty$, then $\norm{u^\varepsilon}_{L^\infty} \leq \norm{u}_{L^\infty}$. \item[(c)] If $u \in C^\theta$ $(0 < \theta < 1)$, then $\norm{u^\varepsilon - u}_{C^0} \leq \norm{u}_{C^\theta} \: \varepsilon^\theta$. \item[(d)] If $u \in C^1$, then $\norm{u^\varepsilon - u}_{C^0} \leq \norm{u}_{C^1} \, \varepsilon$. If $u$ is $C^1$ along the leaves of a $C^1$ foliation, then this estimate holds along each leaf. \item[(e)] If $u \in C^\theta$, then $\norm{du^\varepsilon}_{C^0} \leq \norm{d\eta}_{L^1} \norm{u}_{C^\theta} \varepsilon^{\theta-1}$, where $\norm{d\eta}_{L^1} = \max_i \int_{\R^n} \abs{\partial \eta/\partial x_i} dx$. \end{enumerate} \end{prop} \begin{proof} Proof of (a) and (b) can be found in \cite{evans+98} . Although (c)--(e) are probably well known facts, I have not been able to find them in the literature. We therefore sketch their proofs. For (c), we have \begin{align*} \abs{u^\varepsilon(x) - u(x)} & = \left\lvert \int_{B(0,\varepsilon)} \eta_\varepsilon(y) [u(x-y) - u(x)] \, dy \right\rvert \\ & \leq \norm{u}_{C^\theta} \varepsilon^\theta \int_{B(0,\varepsilon)} \eta_\varepsilon(y) \, dy \\ & = \norm{u}_{C^\theta} \varepsilon^\theta. \end{align*} If $u \in C^1$, then the same estimates hold with $\theta$ replaced by $1$. The leafwise version follows straightforwardly. This settles (d). Observe that since $\eta_\varepsilon$ has compact support, \begin{equation} \label{eq:compact} \int_{\R^n} \frac{\partial \eta_\varepsilon}{\partial x_i}(y) \, dy = 0, \end{equation} for $1 \leq i \leq n$. Note also that \begin{equation*} \frac{\partial \eta_\varepsilon}{\partial x_i}(x) = \frac{1}{\varepsilon^{n+1}} \frac{\partial \eta}{\partial x_i}\left( \frac{x}{\varepsilon} \right). 
\end{equation*} Using this and assuming $u \in C^\theta$, we obtain (e): \begin{align*} \left\lvert \frac{\partial u^\varepsilon}{\partial x_i}(x) \right\rvert & = \left\lvert \int_{\R^n} u(x-y) \frac{\partial \eta_\varepsilon}{\partial x_i}(y) \, dy \right\rvert \\ & \overset{\text{by} \ \eqref{eq:compact}}{=} \left\lvert \int_{B(0,\varepsilon)} [u(x-y) - u(x)] \frac{\partial \eta_\varepsilon}{\partial x_i}(y) \, dy \right\rvert \\ & \leq \norm{u}_{C^\theta} \, \varepsilon^\theta \int_{B(0,\varepsilon)} \left\lvert \frac{\partial \eta_\varepsilon}{\partial x_i}(y) \right\rvert \, dy \\ & = \norm{u}_{C^\theta} \, \varepsilon^\theta \int_{B(0,\varepsilon)} \frac{1}{\varepsilon^{n+1}} \left\lvert \frac{\partial \eta}{\partial x_i}\left(\frac{y}{\varepsilon}\right) \right\rvert \, dy \\ & \overset{z = \frac{y}{\varepsilon}}{=} \norm{u}_{C^\theta} \, \varepsilon^\theta \cdot \frac{1}{\varepsilon} \int_{B(0,1)} \left\lvert \frac{\partial \eta}{\partial x_i}(z) \right\rvert \, dz \\ & \leq \norm{d\eta}_{L^1} \norm{u}_{C^\theta} \, \varepsilon^{\theta-1}. \qedhere \end{align*} \end{proof} \begin{rem} Regularization on smooth manifolds can be done locally. If $\varphi: U \to \varphi(U) \subset \R^k$ are $C^\infty$ local coordinates, let $\hat{U}$ be an open set whose closure is contained in $U$. Define $\hat{\varphi} = \varphi \! \restriction_{\hat{U}}$. Then if $u:U \to \R$ is locally integrable and $\varepsilon > 0$ is small enough, simply take $u^\varepsilon = \left( u \circ \hat{\varphi}^{-1} \right)^\varepsilon \circ \hat{\varphi} : \hat{U} \to \R$. \\ \end{rem} \subsection{The Key Estimate} \label{sbs:estimate} We now derive an upper bound for the integral of $\alpha$ over the boundary of an $su$-disk $D$ in terms of the circumference $\abs{\partial D}$ and area $\abs{D}$. Before we begin, we recall that $\alpha$ is of class $C^\theta$, where $\theta = \theta(X)$ is the H\"older exponent of $E^{ss}$. However, restricted to any $W^{cs}$-plaque, $\alpha$ is of class $C^1$, since $\mathrm{Ker}(\alpha \! \! \restriction_{W^{cs}}) = E^{ss}$ is $C^1$ along the leaves of $W^{cs}$. Now fix a finite atlas $\ms{A} = \{(U,\varphi)\}$ of $M$. For each coordinate chart $U$ choose an open set $\hat{U}$ such that the closure of $\hat{U}$ is contained in $U$ and $\{ \hat{U} \}$ covers $M$. Let $\varepsilon_0 = \frac{1}{2} \min_U \inf \{ d(x,y): x \in \partial U, y \in \partial \hat{U} \}$. Then for every chart $U$ and every locally integrable function $u: U \to \R$, the regularization $u^\varepsilon(x)$ is defined for all $\varepsilon \in (0,\varepsilon_0)$ and $x \in \hat{U}$. Set \begin{displaymath} \norm{\alpha}_\ast = \max_U \left\{ \norm{\alpha}_{C^\theta(U)}, \sup_P \norm{\alpha}_{C^1(P)} \right\}, \end{displaymath} where $U$ is a chart in $\ms{A}$ and $P$ runs over all $W^{cs}$-plaques in $U$. Let $\delta > 0$ be the Lebesgue number of the covering $\{\hat{U}\}$. This means that for every set $S \subset M$ with $\mathrm{diam}(S) < \delta$, there exists a coordinate chart $U$ for $M$ such that $S \subset \hat{U}$. \begin{thm}[The Key Estimate] \label{thm:estimate} Let $D$ be an $su$-disk. If $\mathrm{diam}(D) < \delta$ and $\frac{\abs{D}}{\abs{\partial D}} < \frac{\varepsilon_0^{2-\theta}}{1-\theta}$, then \begin{equation} \label{eq:integral0} \left\lvert \int_{\partial D} \alpha \right\rvert \leq K(\alpha,\theta) \abs{\partial D}^{1-\tau} \abs{D}^\tau, \end{equation} where $K(\alpha,\theta) = 2 \norm{d\eta}_{L^1} \norm{\alpha}_\ast [(1-\theta)^\tau + (1-\theta)^{\tau - 1}]$ and $\tau = \frac{1}{2-\theta}$. 
\end{thm} \begin{proof} Since $\mathrm{diam}(D) < \delta$, $D$ is contained in $\hat{U}$, for some coordinate chart $U$. In $U$, $\alpha$ can be written as $\sum a_i dx_i$, for some functions $a_i : U \to \R$. These functions inherit properties from $\alpha$: they are $C^\theta$ and on $W^{cs}$-plaques, they are $C^1$. Let $a_i^\varepsilon : \hat{U} \to \R$ be the regularization of $a_i$ defined for $0 < \varepsilon \leq \varepsilon_0$. Set $\alpha^\varepsilon = \sum a_i^\varepsilon dx_i$. Then Proposition~\ref{prop:reg} states that for all $0 < \varepsilon \leq \varepsilon_0$, \begin{equation*} \norm{\alpha^\varepsilon - \alpha}_{C^0} \leq \norm{\alpha}_{C^\theta } \, \varepsilon^\theta \qquad \text{and} \qquad \norm{d\alpha^\varepsilon}_{C^0} \leq \norm{d\eta}_{L^1} \norm{\alpha}_{C^\theta} \, \varepsilon^{\theta-1}. \end{equation*} Furthermore, since $\alpha$ is $C^1$ along the plaques of $W^{cs}$ in $U$, we also have \begin{equation} \label{eq:da_i} \norm{\left( \alpha^\varepsilon - \alpha \right) \! \! \restriction_{W^{cs}}}_{C^0} \leq \norm{\alpha}_\ast \, \varepsilon. \end{equation} Recall that $(\partial D)^{cs}$ is contained in the union of two $W^{cs}$-plaques, which means that the $C^0$ distance between $\alpha$ and $\alpha^\varepsilon$ along $(\partial D)^{cs}$ is of order $\varepsilon$, as in \eqref{eq:da_i}. Therefore, \begin{align*} \left\lvert \int_{\partial D} \alpha \right\rvert & = \left\lvert \int_{(\partial D)^{cs}} \alpha \right\rvert \\ & \leq \left\lvert \int_{(\partial D)^{cs}} \negmedspace (\alpha - \alpha^\varepsilon) \right\rvert + \left\lvert \int_{(\partial D)^{cs}} \negmedspace \alpha^\epsilon \right\rvert \\ & \leq \norm{(\alpha - \alpha^\varepsilon) \! \! \restriction_{W^{cs}}}_{C^0} \: \abs{(\partial D)^{cs}} + \left\lvert \int_{\partial D} \alpha^\varepsilon \right\rvert + \left\lvert \int_{(\partial D)^{uu}} (\alpha^\varepsilon - \alpha) \right\rvert \\ & \leq \abs{\partial D} \norm{\alpha}_\ast \varepsilon + \left\lvert \int_D d \alpha^\varepsilon \right\rvert + \varepsilon \norm{\alpha}_\ast \abs{\partial D} \\ & \leq 2 \abs{\partial D} \, \norm{\alpha}_\ast \, \varepsilon + \abs{D} \, \norm{d\alpha^\varepsilon}_{C^0} \\ & \leq 2 \abs{\partial D} \, \norm{\alpha}_\ast \, \varepsilon + \abs{D} \, \norm{d\eta}_{L^1} \norm{\alpha}_\ast \, \varepsilon^{\theta-1} \\ & \leq 2 \norm{d\eta}_{L^1} \norm{\alpha}_\ast \left\{ \abs{\partial D} \, \varepsilon + \abs{D} \, \varepsilon^{\theta-1} \right\}. \tag{$\ast$} \end{align*} Note that the inequality holds for \emph{all} $ \varepsilon \in (0,\varepsilon_0)$. Let us minimize the right hand side with respect to $\varepsilon$. It is elementary to check that the function $\varepsilon \mapsto \abs{\partial D} \varepsilon + \abs{D} \varepsilon^{\theta-1}$ has an absolute minimum equal to \begin{equation*} B(\theta) \abs{\partial D}^{1-\tau} \abs{D}^\tau \qquad \text{achieved at} \qquad \varepsilon_* = \left\{ \frac{(1-\theta) \abs{D}}{\abs{\partial D}} \right\}^\tau, \end{equation*} where $B(\theta) = (1-\theta)^\tau + (1-\theta)^{\tau-1}$ and $\tau = 1/(2-\theta)$. Observe that $\varepsilon_*$ does lie in $(0,\varepsilon_0)$, the permissible range of $\varepsilon$. Therefore, we can take $\varepsilon = \varepsilon_*$ in ($\ast$), which yields \eqref{eq:integral0}. \end{proof} \begin{rem} Note that $\tau > 1/2$, for all $\theta \in (0,1)$. Furthermore, since $\theta = \theta(X)$ depends continuously on $X$ in the $C^1$ topology (see Section~\ref{sec:anosov}), so does $\tau = \tau(X)$. 
\end{rem} \section{Proof of the Main Theorem} \label{sec:proof} We start with an arbitrary $C^1$ volume preserving codimension one Anosov vector field $X_0$ on a $C^\infty$ closed Riemannian manifold $M$ of dimension $n > 3$. Recall that, as before, each Oseledets splitting is relative to the reverse flow. \\ \paragraph{\textbf{Step 1: Perturbation}} Let $\ms{U}$ be a $C^1$-structural stability neighborhood of $X_0$ such that every vector field in $\ms{U}$ is topologically equivalent to $X_0$. By the work of Bessa~\cite{bessa+05} and Bochi-Viana~\cite{bochi+viana+05} (see \S\ref{sbs:lyap}), there exists a volume preserving $X_1 \in \ms{U}$ such that its flow admits a dominated Oseledets splitting $E_1 \oplus \cdots \oplus E_\ell$ continuous over the whole manifold $M$. The density result of Arbieto and Matheus~\cite{arbieto+03} gives a $C^\infty$ volume preserving $X_2$ in $\ms{U}$ arbitrarily close to $X_1$. Since the property of possessing a dominated splitting is open in the $C^1$ topology, we can assume that $X_2$ has it. Denote it by $H_1 \oplus \cdots \oplus H_\ell$. The difficulty is that this splitting need not be the \emph{Oseledets} splitting for $X_2$, since a perturbation can cause some of the Lyapunov bundles for $X_1$ to split into lower dimensional ones. Observe, however, that since $X_1$ and $X_2$ are both codimension one Anosov, $E_1, H_1$ are the strong unstable bundles and $E_2, H_2$ are the center bundles. There are two possibilities: \begin{description} \item[Case 1] $\dim E_\ell = 1$. Then a perturbation cannot split $E_\ell$ any further, so $H_\ell$ is also 1-dimensional and is the top Lyapunov bundle for $X_2$. It follows that $H_3 \oplus \cdots \oplus H_{\ell-1}$ is the $F_{\ell-1}$-bundle for $X_2$ and is thus continuous. \item[Case 2] $\dim E_\ell > 1$. Denote the top Lyapunov exponent of $X_i$ ($i=1,2$) and its synchronization $\tilde{X}_i$ by $\chi_{\mathrm{top}}(X_i)$ and $\chi_{\mathrm{top}}(\tilde{X}_i)$, respectively. If $n > 4$, then $\chi_{\mathrm{top}}(\tilde{X}_1) = \chi_\ell(\tilde{X}_1) < 1/2$ (Proposition~\ref{prop:LE}). Thus if $X_2$ is sufficiently $C^1$-close to $X_1$, then $\chi_{\mathrm{top}}(\tilde{X}_2) = \chi_\ell(\tilde{X}_2) < 1/2$ as well. In particular, it is $< \tau(\tilde{X}_2)$. If $n=4$, things are a little more subtle. Since the Lyapunov bundle $E_\ell = E_3$ corresponding to $\chi_{\mathrm{top}}(X_1)$ has dimension two, so does the corresponding bundle of the synchronization $\tilde{X}_1$ of $X_1$ (Proposition~\ref{prop:LE-sync}). If $X_2$ is $C^1$-close to $X_1$, then $\tilde{X}_2$ is $C^1$-close to $\tilde{X}_1$, so $\chi_{\mathrm{top}}(\tilde{X}_2)$ is close to $\chi_{\mathrm{top}}(\tilde{X}_1)$ and $\tau(\tilde{X}_2)$ is close to $\tau(\tilde{X}_1)$. By Proposition~\ref{prop:LE}, $\chi_{\mathrm{top}}(\tilde{X}_1) = 1/2 < \tau(\tilde{X}_1)$. This implies that if $X_2$ is sufficiently $C^1$-close to $X_1$, then $\chi_{\mathrm{top}}(\tilde{X}_2) < \tau(\tilde{X}_2)$. \end{description} We conclude that it is always possible to find a $C^\infty$ volume preserving $X_2 \in \ms{U}$ having one of the following two properties: \begin{enumerate} \item[(A)] The top Lyapunov bundle has dimension one, and both it and the corresponding $F_{\ell-1}$-bundle are continuous on $M$. \item[(B)] The top Lyapunov exponent of its synchronization is strictly less than the corresponding number $\tau$. \end{enumerate} If $X_2$ satisfies (A), the remainder of the proof consists of \textbf{Steps 2, 3A, 4}, and \textbf{5}. 
If $X_2$ satisfies (B), the remainder of the proof consists of \textbf{Steps 2, 3B}, and \textbf{5}. \\ \paragraph{\textbf{Step 2: Synchronization}} Let us now synchronize $X_2$. We obtain a $C^{1 + \mathrm{H\ddot{o}lder}}$ Anosov vector field, which we denote by $X$, with flow $\{ f_t \}$. By Propositions~\ref{prop:LE-sync} and \ref{prop:LE} in \S\ref{sbs:sync}, $\{ f_t \}$ has the following properties: \begin{enumerate} \item[\textsf{(a)}] It is volume preserving and of codimension one; \item[\textsf{(b)}] Its center stable bundle $E^{cs}$ and strong unstable bundle $E^{uu}$ are of class $C^{1 + \mathrm{H\ddot{o}lder}}$. \item[\textsf{(c)}] The Oseledets splitting for $f_{-t}$, which we (slightly abusing the notation) denote by $E_1 \oplus \cdots \oplus E_\ell$, and the corresponding Lyapunov exponents satisfy either (A) or (B), where (B) now reads $\chi_\ell < \tau$. \\ \end{enumerate} \paragraph{\textbf{Step 3A}} We have an Anosov vector field satisfying (A) and \textsf{(a)--(c)} from Step 2. Since $F_{\ell-1}$ is continuous, by Corollary~\ref{cor:Rk}, for each $\varepsilon > 0$ there exists an open set $G_\varepsilon$ of full measure in $M$ such that the regularity function $R_\varepsilon^{\ell-1}$ is locally bounded on $G_\varepsilon$. Since $\chi_{\ell-1} \leq \frac{1}{2} < \tau = \tau(X)$, we can pick $\varepsilon > 0$ such that $\varepsilon < \tau - \chi_{\ell-1}$. Let $p \in G_\varepsilon$ and $q \in W^{uu}_\mathrm{loc}(p) \cap G_\varepsilon$ be arbitrary but fixed. We will show that $T\huu_{p,q}(F_{\ell-1}) \subset E^{ss}$. Let $\gamma: [0,1] \to W^{ss}_\mathrm{loc}(p)$ be a simple $C^1$ path tangent to $F_{\ell-1}$ and contained in the set $G_\varepsilon$. Such a path exists, since $F_{\ell-1}$ is continuous. Let $D = D_\gamma$ be the associated $su$-disk; we assume that $q$ is sufficiently close to $p$ so that $D \subset G_\varepsilon$. We now use the flow invariance ($f_t^* \alpha = \alpha$) and Theorem~\ref{thm:estimate} to estimate the integral of $\alpha$ over $\partial f_{-t}D$, where $t > 0$ is large but fixed for now. To do that, decompose $D$ into $k$ small $su$-disks $D_i$ of approximately equal size such that Theorem~\ref{thm:estimate} can be applied to each $f_{-t} D_i$. We will also make sure that $\abs{f_{-t} D_i} \lesssim \frac{1}{k} \abs{f_{-t} D}$ and $\abs{\partial f_{-t} D_i} \lesssim \frac{1}{k} \abs{\partial f_{-t} D}$, for all $1 \leq i \leq k$. First, choose an integer $k$ so that, in the notation from \S\ref{sbs:su-disk}, \begin{equation} \label{eq:k} \frac{2}{\delta} \abs{(\partial f_{-t}D)^{cs}} < k < \frac{3}{\delta} \abs{(\partial f_{-t}D)^{cs}}, \end{equation} where $\delta$ is the Lebesgue number of the covering $\{ \hat{U} \}$ defined in \S\ref{sbs:estimate}. Divide $\gamma$ into $k$ segments $\gamma_i$ (see \textsc{Fig.}~\ref{fig:Di}) so that the arcs $f_{-t}(\gamma_i)$ all have equal length. Let $D_i = D_{\gamma_i}$ be the $su$-disk defined by $\gamma_i$. Observe that for all $i$, $\abs{(\partial D_i)^{uu}} \leq c \abs{(\partial D)^{uu}}$, where $c > 0$ is a constant depending on the diameter of $D$ and the size of $\Jac(\hcs_{p,z})$, as $z$ traverses $\gamma$. Without loss of generality we can assume that $c \leq 2$. Furthermore, $\abs{(\partial f_{-t} D_i)^{cs}} \approx \abs{(\partial f_{-t} D_j)^{cs}}$, for all $i, j$, so for large enough $t$, \begin{equation} \label{eq:D_i_cs} \abs{(\partial f_{-t} D_i)^{cs}} \leq \frac{2}{k} \abs{(\partial f_{-t} D)^{cs}} < \delta. 
\end{equation} Since the diameter of $f_{-t} D_i$ is approximately $\abs{(\partial f_{-t} D_i)^{cs}}$, it follows that $\mathrm{diam}(f_{-t} D_i) < \delta$. Further note that $\abs{\partial f_{-t} D_i} = \abs{(\partial f_{-t} D_i)^{uu}} + \abs{(\partial f_{-t} D_i)^{cs}} < e^{-t} \abs{(\partial D_i)^{uu}} + \delta \leq 2 e^{-t} \abs{(\partial D)^{uu}} + \delta$ and $\abs{\partial f_{-t} D} = \abs{(\partial f_{-t} D)^{uu}} + \abs{(\partial f_{-t} D)^{cs}} > e^{-t} \abs{(\partial D)^{uu}} + \frac{1}{3} k \delta$. It is not hard to see that this implies \begin{equation} \label{eq:circ_D_i} \abs{\partial f_{-t} D_i} \leq \frac{4}{k} \abs{\partial f_{-t} D}. \end{equation} Moreover, since $\abs{f_{-t} D_i} \approx \abs{f_{-t} \gamma_i} \cdot e^{-t} \abs{(\partial D_i)^{uu}} \approx \abs{f_{-t} \gamma_j} \cdot e^{-t} \abs{(\partial D_j)^{uu}} \approx \abs{f_{-t} D_j}$, for all $i, j$, all disks $f_{-t} D_i$ have roughly the same area, so for sufficiently large $t$ and all $1 \leq i \leq k$, \begin{equation} \label{eq:area_Di} \abs{f_{-t} D_i} \leq \frac{2}{k} \abs{f_{-t} D}. \end{equation} To apply Theorem~\ref{thm:estimate}, it remains to verify that $\abs{f_{-t} D_i}/\abs{\partial f_{-t} D_i}$ is small. This is indeed the case for large $t$, as $\abs{f_{-t} D_i}/\abs{\partial f_{-t} D_i} \approx e^{-t} \delta/[2(\delta + e^{-t})] \approx e^{-t}$. \begin{figure}[htbp] \centerline{ \psfrag{f}[][]{$f_{-t}$} \psfrag{D}[][]{$D$} \psfrag{fD}[][]{$f_{-t}D$} \psfrag{Di}[][]{$D_i$} \psfrag{fDi}[][]{$f_{-t}D_i$} \psfrag{.}[][]{$\cdots$} \includegraphics[width=0.75\hsize]{Di.eps}} \caption{Decomposition of $D$ into $D_i$'s.} \label{fig:Di} \end{figure} Therefore, \begin{align*} \left\lvert \int_{\partial D} \negmedspace \alpha \right\rvert & = \left\lvert \int_{\partial f_{-t}D} \negmedspace \alpha \right\rvert \\ & \leq \sum_{i=1}^k \left\lvert \int_{\partial f_{-t}D_i} \negmedspace \alpha \right\rvert \\ & \leq \sum_{i=1}^k K(\alpha,\theta) \abs{\partial f_{-t}D_i}^{1-\tau} \abs{f_{-t}D_i}^\tau \\ & \leq K(\alpha,\theta) \sum_{i=1}^k \left( \frac{4}{k} \abs{\partial f_{-t}D} \right)^{1-\tau} \left( \frac{2}{k} \abs{f_{-t}D} \right)^\tau \tag{$\natural$} \\ & \leq 4 K(\alpha,\theta) \sum_{i=1}^k \frac{1}{k} \abs{\partial f_{-t}D}^{1-\tau} \abs{f_{-t}D}^\tau \\ & \leq 4 K(\alpha,\theta) \abs{\partial f_{-t}D}^{1-\tau} \abs{f_{-t}D}^\tau. \end{align*} Note that ($\natural$) follows from \eqref{eq:circ_D_i}, $\abs{(\partial D_i)^{uu}} \leq 2 \abs{(\partial D)^{uu}}$, and \eqref{eq:area_Di}. Using \eqref{eq:k}, we arrive at the crucial estimate: \begin{equation} \label{eq:crucial} \left\lvert \int_{\partial D} \negmedspace \alpha \right\rvert \leq 4 K(\alpha,\theta) \abs{\partial f_{-t}D}^{1-\tau} \abs{f_{-t}D}^\tau. \end{equation} Since $D \subset G_\varepsilon$, the norm $\norm{R_\varepsilon^{\ell-1}}_{L^\infty(D)}$ is finite, so by Corollary~\ref{cor:tau}, $\abs{\partial f_{-t}D}^{1-\tau} \abs{f_{-t}D}^\tau \to 0$, as $t \to \infty$. Therefore, $\int_{\partial D} \alpha = 0$ for all $su$-disks $D = D_\gamma$ in $G_\varepsilon$ with $\gamma$ tangent to $F_{\ell-1}$. This implies $T \huu_{p,q}(F_{\ell-1}) \subset E^{ss}$, for all $p \in G_\varepsilon$ and $q \in W^{uu}_{\mathrm{loc}}(p) \cap G_\varepsilon$. In fact, Lemma~\ref{lem:Z}(b) gives $T \huu_{p,q}(F_{\ell-1}) = F_{\ell-1}$. By continuity, this holds for \emph{all} $p \in M$ and $q \in W^{uu}_{\mathrm{loc}}(p)$. The remainder of the proof consists of Steps 4 and 5. 
\\ \paragraph{\textbf{Step 3B}} \label{sec:textbfstep-3b} We have an Anosov vector field satisfying (B) and \textsf{(a)--(c)} from Step 2. In particular $\chi_\ell = \chi_{\mathrm{top}}(X) < \tau = \tau(X)$. Let $0 < \varepsilon < \tau - \chi_\ell$. Since $E^{ss} = F_\ell$ is continuous, by Corollary~\ref{cor:Rk} there exists an open set $G_\varepsilon$ of full measure on which the regularity function $R_\varepsilon^\ell$ is locally bounded. Let $D$ be any $su$-disk contained in $G_\varepsilon$ with base $\gamma$ tangent to $E^{ss}$. Then estimates completely analogous to those in Step 3A show \begin{displaymath} \abs{ \int_{\partial D} \alpha} \leq 4 K(\alpha,\theta) \abs{\partial f_{-t}D}^{1-\tau} \abs{f_{-t}D}^\tau. \end{displaymath} Since $D \subset G_\varepsilon$, the norm $\norm{R_\varepsilon^\ell}_{L^\infty(D)}$ is finite, so by Corollary~\ref{cor:tau}, $\abs{\partial f_{-t}D}^{1-\tau} \abs{f_{-t}D}^\tau \to 0$, as $t \to \infty$. By the same concluding argument as in Step 3A, it follows that $T\huu_{p,q}(E^{ss}) = E^{ss}$, for all $p \in M$ and $q \in W^{uu}_{\mathrm{loc}}(p)$. We skip Step 4 and proceed to Step 5. \\ \paragraph{\textbf{Step 4}} It was shown in Step 3A that $T \huu_{p,q}(F_{\ell-1}) = F_{\ell-1}$, for all $p \in M$ and $q \in W^{uu}_{\mathrm{loc}}(p)$. In this step we will show that $T_x \huu_{p,q}$, in fact, takes the \emph{whole} bundle $E^{ss}$ onto itself, for every $p \in M$, $q \in W^{uu}_\text{loc}(p)$, and $x \in W^{cs}_\text{loc}(p)$. Recall that $\dim E_\ell = 1$, so $F_{\ell-1}$ is of codimension one in $E^{ss}$ (as well as continuous). Set \begin{displaymath} \alpha_s = \phi_s^\ast \alpha \qquad \text{and} \qquad \alpha_s^\sharp = \alpha_s \! \! \restriction_{E^{ss}}, \end{displaymath} where, as before, $\{ \phi_s \}$ denotes the flow of $Y \in E^{uu}$. It is enough to show that $\alpha_s^\sharp = 0$ for all $s$. Since $\huu_{p,q}(x) = \phi_{\zeta(x)}(x)$, for some $C^1$-function $\zeta$, it follows that \begin{displaymath} T_x \huu_{p,q}(w) = T_x \phi_{\zeta(x)} (w) + d\zeta(w) Y. \end{displaymath} (Observe that $\zeta(x) = \mathbf{h}^{cs}_{p,x}(q)$; cf., \S\ref{sbs:su-disk}.) By Step 3A and $\alpha(Y) = 0$, we obtain $\alpha_s (w) = 0$, for $w \in F_{\ell-1}$ and every $p$ and $q$, where $s = \zeta(x)$. It follows that $\alpha_s \! \restriction_{F_{\ell-1}} = 0$ for all $s \in \R$, so \begin{displaymath} F_{\ell-1} \subset \mathrm{Ker}(\alpha_s^\sharp). \end{displaymath} Since $\alpha_s^\sharp$ is a 1-form on $E^{ss}$, its kernel is either $F_{\ell-1}$ or it is all of $E^{ss}$. Suppose that for some $s \neq 0$ the former holds at some point $x \in M$. Since \begin{displaymath} f_t^\ast \alpha_s = \alpha_{se^{-t}}, \end{displaymath} the kernel of $\alpha_{se^{-t}}^\sharp$ must equal $F_{\ell-1}$ for \emph{all} $t \in \R$. In particular, this implies that $\alpha_s^\sharp$ can be viewed as a volume form for $E_\ell$. Furthermore, since they have the same kernel, $\alpha_s^\sharp$ and $\alpha_{se^{-t}}^\sharp$ are scalar multiples of each other. Let us take $s=1$. Then there exists a function $t \mapsto k(t)$ such that for all $t$, \begin{equation} \label{k(t)} \alpha_{e^{-t}}^\sharp = k(t) \alpha_1^\sharp. \end{equation} Therefore, $f_t^\ast \alpha_1^\sharp = k(t) \alpha_1^\sharp$, so $k(t)$ is the determinant of $T f_t \! \restriction_{E_\ell}$ relative to the volume form $\alpha_1^\sharp$. 
On the other hand, for small $r > 0$, in any set of local coordinates we have $\norm{T \phi_r - I} \leq e^{\text{Lip}(Y) r} - 1$, where $\text{Lip}(Y)$ is the Lipschitz constant of $Y$. Thus \begin{align*} \norm{\alpha_{e^{-t}}^\sharp}_{C^0} & = \norm{\alpha_{e^{-t}}^\sharp - \alpha_0^\sharp}_{C^0} \\ & \leq \norm{\phi_{e^{-t}}^\ast \alpha - \alpha}_{C^0} \\ & \leq \norm{\alpha}_{C^0} \left( e^{\text{Lip}(Y) e^{-t}} - 1 \right). \end{align*} Since $(e^r - 1)/r \to 1$, as $r \to 0$, it follows that as $t \to +\infty$, $e^{\text{Lip}(Y) e^{-t}} - 1$ is asymptotically equivalent to $\text{Lip}(Y) e^{-t}$. Therefore, the left hand side of \eqref{k(t)} converges to zero as $e^{-t}$. However, since $k(t)$ is the determinant of $T f_t \! \restriction_{E_\ell}$ relative to the volume form $\alpha_1^\sharp$, and $\dim E_\ell = 1$, $k(t)$ is asymptotically equivalent to $\norm{Tf_t \! \restriction_{E_\ell}}$, which goes to zero as $e^{-\chi_\ell t}$. This is a contradiction, since $\chi_\ell < 1$. Thus $\alpha_s^\sharp = 0$, for all $s$. \\ \paragraph{\textbf{Step 5}} In summary, we have shown that $T \huu_{p,q}(E^{ss}) = E^{ss}$, for all $p \in M$ and $q \in W^{uu}_\mathrm{loc}(p)$. This proves joint integrability of $W^{ss}$ and $W^{uu}$ and the existence of a smooth constant first-return time global cross section $\Sigma$ for the flow of $X$. Vector fields $X$ and $X_2$ have the same orbits, so $\Sigma$ is a cross section for $X_2$. Therefore, the flow of $X_2$ is topologically equivalent to a suspension of a linear toral automorphism. Since $X_2 \in \ms{U}$, the same is true for the flow of $X_0$. To show that $X_0$ also admits a smooth global cross section, we use Proposition 1.1 from Ghys~\cite{ghys89}: if $\Phi$ is a transitive codimension one Anosov flow, then $\Phi$ admits a global cross section if and only if no periodic orbit of $\Phi$ is homologous to zero. Since the flow of $X_2$ has this property, so does the topologically equivalent flow of $X_0$. This completes the proof. \bibliographystyle{amsplain}
{ "timestamp": "2007-03-22T01:11:11", "yymm": "0508", "arxiv_id": "math/0508024", "language": "en", "url": "https://arxiv.org/abs/math/0508024", "abstract": "We show that every volume preserving codimension one Anosov flow on a closed Riemannian manifold of dimension greater than three admits a global cross section and is therefore topologically conjugate to a suspension of a linear toral automorphism. This proves a conjecture of Verjovsky from the 1970's in the volume preserving case.", "subjects": "Dynamical Systems (math.DS); Differential Geometry (math.DG)", "title": "Volume preserving codimension one Anosov flows in dimensions greater than three are suspensions", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9867771770811146, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.7090925521142211 }
https://arxiv.org/abs/1105.6042
Weighted Integral Means of Mixed Areas and Lengths under Holomorphic Mappings
This note addresses monotonic growths and logarithmic convexities of the weighted ($(1-t^2)^\alpha dt^2$, $-\infty<\alpha<\infty$, $0<t<1$) integral means $\mathsf{A}_{\alpha,\beta}(f,\cdot)$ and $\mathsf{L}_{\alpha,\beta}(f,\cdot)$ of the mixed area $(\pi r^2)^{-\beta}A(f,r)$ and the mixed length $(2\pi r)^{-\beta}L(f,r)$ ($0\le\beta\le 1$ and $0<r<1$) of $f(r\mathbb D)$ and $\partial f(r\mathbb D)$ under a holomorphic map $f$ from the unit disk $\mathbb D$ into the finite complex plane $\mathbb C$.
\section{Introduction} From now on, $\mathbb D$ represents the unit disk in the finite complex plane $\mathbb C$, $H(\mathbb D)$ denotes the space of holomorphic mappings $f: \mathbb D\to\mathbb C$, and $U(\mathbb D)$ stands for all univalent functions in $H(\mathbb D)$. For any real number $\alpha$, positive number $r\in (0,1)$ and the standard area measure $dA$, let $$ dA_\alpha(z)=(1-|z|^2)^\alpha dA(z);\quad r\mathbb D=\{z\in\mathbb D: |z|<r\};\quad r\mathbb T=\{z\in\mathbb D: |z|=r\}. $$ In their recent paper \cite{XZ}, Xiao and Zhu have discussed the following area integral means of order $p$ ($0<p<\infty$) of $f\in H(\mathbb D)$: $$ {M}_{p,\alpha}(f,r)=\left[\frac{1}{A_\alpha(r\mathbb D)}\int_{r\mathbb D}|f|^p\,dA_\alpha\right]^{\frac1p}, $$ proving that $r\mapsto M_{p,\alpha}(f,r)$ is strictly increasing unless $f$ is a constant, and $\log r\mapsto\log M_{p,\alpha}(f,r)$ is not always convex. This last result suggests a conjecture that $\log r\mapsto\log M_{p,\alpha}(f,r)$ is convex when $\alpha\le 0$ and concave when $\alpha>0$. But, motivated by \cite[Example 10, (ii)]{XZ}, we can choose $p=2$, $\alpha=1$, $f(z)=z+c$ and $c>0$ to verify that the conjecture is not true. At the same time, this negative result was also obtained in Wang-Zhu's manuscript \cite{WZ}. So far it is unknown whether the conjecture is generally true for $p\not=2$. The foregoing observation has actually inspired the following investigation. Our focus is the fundamental case $p=1$. To understand this approach, let us take a look at $M_{1,\alpha}(\cdot,\cdot)$ from a differential geometric viewpoint. Note that $$ {M}_{1,\alpha}(f',r)=\frac{\int_{r\mathbb D}|f'|\,dA_\alpha}{A_\alpha(r\mathbb D)}=\frac{\int_0^r \big[(2\pi t)^{-1}\int_{t\mathbb T}|f'(z)||dz|\big](1-t^2)^\alpha\,dt^2}{\int_0^r (1-t^2)^\alpha\,dt^2}. $$ So, if $f\in U(\mathbb D)$, then $$ (2\pi t)^{-1}\int_{t\mathbb T}|f'(z)|\,|dz| $$ is a kind of mean of the length of $\partial f(t\mathbb D)$, and hence the square of this mean dominates a sort of mean of the area of $f(t\mathbb D)$ in the isoperimetric sense: $$ \Phi_{A}(f,t)=(\pi t^2)^{-1}\int_{t\mathbb D}|f'(z)|^2\,dA(z)\le \left[(2\pi t)^{-1}\int_{t\mathbb T}|f'(z)|\,|dz|\right]^2=\big[\Phi_{L}(f,t)\big]^2. $$ According to the P\'olya-Szeg\"o monotone principle \cite[Problem 309]{PS} (or \cite[Proposition 6.1]{BMM}) and the area Schwarz's lemma in Burckel, Marshall, Minda, Poggi-Corradini and Ransford \cite[Theorem 1.9]{BMM}, $\Phi_{L}(f,\cdot)$ and $\Phi_{A}(f,\cdot)$ are strictly increasing on $(0,1)$ unless $f(z)=a_1z$ with $a_1\not=0$. Furthermore, $\log\Phi_{L}(f,r)$ and $\log\Phi_{A}(f,r)$, equivalently, $\log L(f,r)$ and $\log A(f,r)$, are convex functions of $\log r$ for $r\in (0,1)$, due to the classical Hardy convexity theorem and \cite[Section 5]{BMM}. Perhaps it is worthwhile to mention that if $c>0$ is small enough then the universal covering map from $\mathbb D$ onto the annulus $\{e^{-\frac{c\pi}{2}}<|z|< e^{\frac{c\pi}{2}}\}$: $$ f(z)=\exp\Big[ic\log\Big(\frac{1+z}{1-z}\Big)\Big] $$ enjoys the property that $\log r\mapsto \log A(f,r)$ is not convex; see \cite[Example 5.1]{BMM}. 
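For readers who wish to experiment numerically with the counterexample $p=2$, $\alpha=1$, $f(z)=z+c$ mentioned above, the following Python sketch may be helpful. It is purely illustrative and not part of the argument; it assumes only \texttt{numpy}, the value $c=0.5$ and the grid are arbitrary, and the closed form below is obtained by elementary integration in polar coordinates. It evaluates $\log M_{2,1}(f,r)$ against $\log r$ and prints the range of its discrete second differences.
\begin{verbatim}
import numpy as np

def M21_squared(r, c):
    # M_{2,1}(f,r)^2 for f(z) = z + c and alpha = 1, computed from
    # int_{rD} |z+c|^2 (1-|z|^2) dA  divided by  int_{rD} (1-|z|^2) dA
    num = r**4/4 - r**6/6 + c**2*(r**2/2 - r**4/4)
    den = r**2/2 - r**4/4
    return num/den

c = 0.5                                    # any fixed c > 0, for illustration
u = np.linspace(-4.0, -0.05, 400)          # u = log r
g = 0.5*np.log(M21_squared(np.exp(u), c))  # log M_{2,1}(f, r)
d2 = np.diff(g, 2)                         # discrete second differences in log r
print(d2.min(), d2.max())                  # mixed signs mean neither convex nor concave
\end{verbatim}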
In the above and below, we have used the following convention: $$ \Phi_{A}(f,r)=\frac{A(f,r)}{\pi r^2}\quad\&\quad \Phi_{L}(f,r)=\frac{L(f,r)}{2\pi r}, $$ where under $r\in (0,1)$ and $f\in H(\mathbb D)$, $A(f,r)$ and $L(f,r)$ stand respectively for the area of $f(r\mathbb D)$ (the projection of the Riemannian image of $r\mathbb D$ by $f$) and the length of $\partial f(r\mathbb D)$ (the boundary of the projection of the Riemannian image of $r\mathbb D$ by $f$) with respect to the standard Euclidean metric on $\mathbb C$. For our purpose, we choose a shortcut notation $$ d\mu_\alpha(t)=(1-t^2)^\alpha dt^2\quad\&\quad \nu_\alpha(t)=\mu_\alpha([0,t])\quad\forall\quad t\in (0,1), $$ and for $0\le\beta\le 1$ define $$ \Phi_{A,\beta}(f,t)=\frac{A(f,t)}{(\pi t^2)^\beta}\quad\&\quad \Phi_{L,\beta}(f,t)=\frac{L(f,t)}{(2\pi t)^\beta}, $$ and then $$ \mathsf{A}_{\alpha,\beta}(f,r)=\frac{\int_0^r \Phi_{A,\beta}(f,t) \,d\mu_\alpha(t)}{\int_0^r d\mu_\alpha(t)}\quad\&\quad \mathsf{L}_{\alpha,\beta}(f,r)=\frac{\int_0^r \Phi_{L,\beta}(f,t)\, d\mu_\alpha(t)}{\int_0^r d\mu_\alpha(t)} $$ which are called the weighted integral means of the mixed area and the mixed length for $f(r\mathbb D)$ and $\partial f(r\mathbb D)$, respectively. In this note, we consider two fundamental properties: monotonic growths and logarithmic convexities of both $\mathsf{A}_{\alpha,\beta}(f,r)$ and $\mathsf{L}_{\alpha,\beta}(f,r)$, thereby producing two specialities: (i) if $r\mapsto \Phi_{L}(f,r)$ is monotone increasing on $(0,1)$, then so is the isoperimetry-induced function: $$ r\mapsto\frac{\int_0^r \big[\Phi_{L,1}(f,t)\big]^2\,d\mu_\alpha(t)}{\int_0^r d\mu_\alpha(t)}\ge \mathsf{A}_{\alpha,1}(f,r); $$ (ii) the log-convexity for $\mathsf{L}_{\alpha,\beta=1}(f,r)$ essentially settles the above-mentioned conjecture. The details (results and their proofs) are arranged in the forthcoming two sections. \section{Monotonic Growth} In this section, we deal with the monotonic growths of $\mathsf{A}_{\alpha,\beta}(f,r)$ and $\mathsf{L}_{\alpha,\beta}(f,r)$, along with their associated Schwarz type lemmas. In what follows, $\mathbb N$ is used as the set of all natural numbers. \subsection{Two Lemmas} The following two preliminary results are needed. \begin{lemma}\cite[Theorems 1 \& 2]{Ma}\label{l2} Let $f\in H(\mathbb D)$ be of the form $f(z)=a_0+\sum_{k=n}^\infty a_kz^k$ with $n\in\mathbb N$. Then: \item{\rm(i)} $\pi r^{2n}\Big[\frac{|f^{(n)}(0)|}{n!}\Big]^2\le A(f,r)\quad\forall\quad r\in (0,1)$. \item{\rm(ii)} $2\pi r^n \Big[\frac{|f^{(n)}(0)|}{n!}\Big]\le L(f,r)\quad\forall\quad r\in (0,1)$. \noindent Moreover, equality in (i) or (ii) holds if and only if $f(z)=a_0+a_nz^n$. \end{lemma} \begin{proof} This may be viewed as the higher order Schwarz type lemma for area and length. See also the proofs of Theorems 1 \& 2 in \cite{Ma}, and their immediate remarks on equalities. Here it is worth noticing three matters: (a) $\frac{f^{(n)}(0)}{n!}$ is just $a_n$; (b) \cite[Corollary 3]{J} presents a different argument for the area case; (c) $L(f,r)$ is greater than or equal to the length $l(r,f)$ of the outer boundary of $f(r\mathbb D)$ (defined in \cite{Ma}) which is not less than the length $l^\#(r,f)$ of the exact outer boundary of $f(r\mathbb D)$ (introduced in \cite{Y}). \end{proof} \begin{lemma}\label{l1} Let $0\le\beta\le 1$. 
\item{\rm(i)} If $f\in H(\mathbb D)$, then $r\mapsto \Phi_{A,\beta}(f,r)$ is strictly increasing on $(0,1)$ unless \[ f=\left\{\begin{array} {r@{\;}l} constant &\quad \hbox{when}\quad \beta<1\\ linear\ map &\quad \hbox{when}\quad \beta=1. \end{array} \right. \] \item{\rm(ii)} If $f\in U(\mathbb D)$ or $f(z)=a_0+a_nz^n$ with $n\in\mathbb N$, then $r\mapsto \Phi_{L,\beta}(f,r)$ is strictly increasing on $(0,1)$ unless \[ f=\left\{\begin{array} {r@{\;}l} constant &\quad \hbox{when}\quad \beta<1\\ linear\ map & \quad \hbox{when}\quad \beta=1. \end{array} \right. \] \end{lemma} \begin{proof} It is enough to handle $\beta<1$ since the case $\beta=1$ has been treated in \cite[Theorem 1.9 \& Proposition 6.1]{BMM}. The monotonic growths in (i) and (ii) follow from $$ \Phi_{A,\beta}(f,r)=(\pi r^2)^{1-\beta}\Phi_{A,1}(f,r)\quad\&\quad \Phi_{L,\beta}(f,r)=(2\pi r)^{1-\beta}\Phi_{L,1}(f,r). $$ To see the strictness, we consider two cases. (i) Suppose that $\Phi_{A,\beta}(f,\cdot)$ is not strictly increasing. Then there are $r_1,r_2\in (0,1)$ such that $r_1<r_2$, and $\Phi_{A,\beta}(f,\cdot)$ is a constant on $[r_1,r_2]$. Hence $$ \frac{d}{dr}\Phi_{A,\beta}(f,r)=0\quad\forall\quad r\in [r_1,r_2]. $$ Equivalently, $$ 2\beta A(f,r)=r\frac{d}{dr}A(f,r)\quad\forall\quad r\in [r_1,r_2]. $$ But, according to \cite[(4.2)]{BMM}: $$ 2A(f,r)\le r\frac{d}{dr} A(f,r)\quad\forall\quad r\in (0,1). $$ Since $\beta<1$, we get $A(f,r)=0$ for all $r\in [r_1,r_2]$, whence finding that $f$ is constant. (ii) Now assume that $\Phi_{L,\beta}(f,\cdot)$ is not strictly increasing. There are $r_3,r_4\in (0,1)$ such that $ r_3<r_4$ and $$ 0=\frac{d}{dr}\Phi_{L,\beta}(f,r)=(2\pi r)^{-\beta}\Big[\frac{d}{dr}L(f,r)-\frac{\beta}{r}L(f,r)\Big]\quad\forall\quad r\in [r_3,r_4]. $$ If $f\in U(\mathbb D)$ then $$ L(f,r)=\int_{r\mathbb T}|f'(z)|\,|dz| $$ and hence one has the following ``first variation formula'' $$ \frac{d}{dr}L(f,r)=\int_0^{2\pi}|f'(re^{i\theta})|d\theta+r\frac{d}{dr}\int_0^{2\pi}|f'(re^{i\theta})|d\theta\quad\forall\quad r\in [r_3,r_4]. $$ The previous three equations yield $$ 0=(1-\beta)\int_0^{2\pi}|f'(re^{i\theta})|d\theta+r\frac{d}{dr}\int_0^{2\pi}|f'(re^{i\theta})|d\theta\quad\forall\quad r\in [r_3,r_4]. $$ Since $r\mapsto\int_0^{2\pi}|f'(re^{i\theta})|\,d\theta$ is non-decreasing and $\beta<1$, both terms on the right-hand side are nonnegative, and so $$ \int_0^{2\pi}|f'(re^{i\theta})|d\theta=0\quad\forall\quad r\in [r_3,r_4]. $$ This ensures that $f$ is a constant, contradicting $f\in U(\mathbb D)$. Therefore, $f(z)$ is of the form $a_0+a_nz^n$. But then $\Phi_{L,\beta}(a_0+a_nz^n,r)=(2\pi)^{1-\beta}|a_n|r^{n-\beta}$ is strictly increasing whenever $a_n\not=0$ (recall $\beta<1\le n$), so $f$ must be constant. \end{proof} \subsection{Monotonic Growth of $\mathsf{A}_{\alpha,\beta}(f,\cdot)$} This aspect is essentially motivated by the following Schwarz type lemma. \begin{proposition}\label{pr1} Let $-\infty<\alpha<\infty$, $0\le\beta\le 1$, and $f\in H(\mathbb D)$ be of the form $f(z)=a_0+\sum_{k=n}^\infty a_k z^k$ with $n\in\mathbb N$. Then $$ \pi^{1-\beta}\Big[\frac{|f^{(n)}(0)|}{n!}\Big]^2\le \mathsf{A}_{\alpha,\beta}(f,r)\left[\frac{\nu_\alpha(r)}{\int_0^rt^{2(n-\beta)}\,d\mu_\alpha(t)}\right]\quad\forall\quad r\in (0,1) $$ with equality if and only if $f(z)=a_0+a_nz^n$. \end{proposition} \begin{proof} The inequality follows from Lemma \ref{l2} (i) right away. When $f(z)=a_0+a_nz^n$, the last inequality becomes equality due to the equality case of Lemma \ref{l2} (i). Conversely, suppose that the last inequality is an equality. 
If $f$ does not have the form $a_0+a_nz^n$, then the equality in Lemma \ref{l2} (i) fails, and hence there are $r_1,r_2\in (0,1)$ such that $r_1<r_2$ and $$ A(f,t)>\pi t^{2n}\Big[\frac{|f^{(n)}(0)|}{n!}\Big]^2\quad\forall\quad t\in [r_1,r_2]. $$ This strict inequality forces that for $r\in [r_2,1)$, \begin{eqnarray*} \pi^{1-\beta}\Big[\frac{|f^{(n)}(0)|}{n!}\Big]^2\int_0^r t^{2(n-\beta)}\,d\mu_\alpha(t)&=&\int_0^r (\pi t^2)^{-\beta}A(f,t)\,d\mu_\alpha(t)\\ &=&\left(\int_0^{r_1}+\int_{r_1}^{r_2}+\int_{r_2}^{r}\right)(\pi t^2)^{-\beta} A(f,t)\,d\mu_\alpha(t)\\ &>&\pi^{1-\beta} \Big[\frac{|f^{(n)}(0)|}{n!}\Big]^2 \int_0^{r} t^{2(n-\beta)}\,d\mu_\alpha(t), \end{eqnarray*} a contradiction. Thus $f(z)=a_0+a_nz^n$. \end{proof} Based on Proposition \ref{pr1}, we find the monotonic growth for $\mathsf{A}_{\alpha,\beta}(\cdot,\cdot)$ as follows. \begin{theorem}\label{th1} Let $-\infty<\alpha<\infty$, $0\le\beta\le 1$, and $f\in H(\mathbb D)$. Then $r\mapsto\mathsf{A}_{\alpha,\beta}(f,r)$ is strictly increasing on $(0,1)$ unless \[ f=\left\{\begin{array} {r@{\;}l} constant &\quad \hbox{when}\quad \beta<1\\ linear\ map &\quad \hbox{when}\quad \beta=1. \end{array} \right. \] Consequently, \item{\rm(i)} \[ \lim_{r\to 0}\mathsf{A}_{\alpha,\beta}(f,r)=\left\{\begin{array} {r@{\;}l} 0\quad & \hbox{when}\quad \beta<1\\ |f'(0)|^2\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] \item{\rm(ii)} If $$ \Phi_{A,\beta}(f,0):=\lim_{r\to 0}\Phi_{A,\beta}(f,r)\quad\&\quad\Phi_{A,\beta}(f,1):=\lim_{r\to 1}\Phi_{A,\beta}(f,r)<\infty, $$ then $$ 0<r<s<1\Rightarrow 0\le \frac{\mathsf{A}_{\alpha,\beta}(f,s)-\mathsf{A}_{\alpha,\beta}(f,r)}{\log\nu_\alpha(s)-\log\nu_\alpha(r)}\leq \Phi_{A,\beta}(f,s)-\Phi_{A,\beta}(f,0) $$ with equality if and only if \[ f=\left\{\begin{array} {r@{\;}l} \hbox{constant}\quad & \hbox{when}\quad \beta<1\\ \hbox{linear\ map}\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] In particular, $t\mapsto \mathsf{A}_{\alpha,\beta}(f,t)$ is Lipschitz with respect to $\log\nu_\alpha(t)$ for $t\in (0,1)$. \end{theorem} \begin{proof} Note that $\nu_\alpha(r)=\int_0^r d\mu_\alpha(t)$. So $d\nu_\alpha(r)$, the differential of $\nu_\alpha(r)$ with respect to $r\in (0,1)$, equals $d\mu_\alpha(r)$. By integration by parts we have $$ \Phi_{A,\beta}(f,r)\nu_\alpha(r)-\int_0^r \Phi_{A,\beta}(f,t)\,d\mu_\alpha(t)=\int_0^r\big[\frac{d}{dt}\Phi_{A,\beta}(f,t)\big] \nu_\alpha(t)\,dt. $$ Differentiating the function $\mathsf{A}_{\alpha,\beta}(f,r)$ with respect to $r$ and using Lemma \ref{l1} (i), we get \begin{align*} \frac{d}{dr}\mathsf{A}_{\alpha,\beta}(f,r)&=\frac{\Phi_{A,\beta}(f,r)2r(1-r^2)^\alpha \nu_\alpha(r)-\Big[\int_0^r\Phi_{A,\beta}(f,t)\, d\mu_\alpha(t)\Big]2r(1-r^2)^\alpha}{\nu_\alpha(r)^2}\\ &=\frac{2r(1-r^2)^\alpha \left[\Phi_{A,\beta}(f,r)\nu_\alpha(r)- \int_0^r \Phi_{A,\beta}(f,t)\, d\mu_\alpha(t)\right]}{\nu_\alpha(r)^2}\\ &=\frac{2r(1-r^2)^\alpha\int_0^r\big[\frac{d}{dt}\Phi_{A,\beta}(f,t)\big] \nu_\alpha(t)\, dt}{\nu_\alpha(r)^2}\geq 0. \end{align*} As a result, $r\mapsto\mathsf{A}_{\alpha,\beta}(f,r)$ increases on $(0,1)$. Next suppose that the just-verified monotonicity is not strict. Then there exist two numbers $r_1,r_2\in (0,1)$ such that $r_1<r_2$ and $$ \mathsf{A}_{\alpha,\beta}(f,r_1)=\mathsf{A}_{\alpha,\beta}(f,r)=\mathsf{A}_{\alpha,\beta}(f,r_2)\quad \forall\quad r\in [r_1,r_2]. 
$$ Consequently, $$ \frac{d}{dr}\mathsf{A}_{\alpha,\beta}(f,r)=0\quad\forall\quad r\in[r_1,r_2] $$ and so $$ \int_0^r \big[\frac{d}{dt}\Phi_{A,\beta}(f,t)\big]\nu_\alpha(t)\, dt=0\quad\forall\quad r\in [r_1,r_2]. $$ Then we must have $$ \frac{d}{dt}\Phi_{A,\beta}(f,t)=0\quad\forall\quad t\in (0,r)\quad\hbox{with}\quad r\in [r_1,r_2], $$ whence getting that if $\beta<1$ then $f$ must be constant or if $\beta=1$ then $f$ must be linear, thanks to the argument for the strictness in Lemma \ref{l1} (i). It remains to check the rest of Theorem \ref{th1}. (i) The monotonic growth of $\mathsf{A}_{\alpha,\beta}(f,\cdot)$ ensures the existence of the limit. An application of L'H\^{o}pital's rule gives $$ \lim_{r\to 0}\mathsf{A}_{\alpha,\beta}(f,r)=\lim_{r\to 0}\Phi_{A,\beta}(f,r)= \left\{\begin{array} {r@{\;}l} 0\quad & \hbox{when}\quad \beta<1\\ |f'(0)|^2\quad & \hbox{when}\quad \beta=1. \end{array} \right. $$ (ii) Again, the above monotonicity formula of $\mathsf{A}_{\alpha,\beta}(f,\cdot)$ plus the given condition yields that for $s\in (0,1)$, $$ \sup_{r\in (0,s)}\mathsf{A}_{\alpha,\beta}(f,r)=\mathsf{A}_{\alpha,\beta}(f,s)<\infty. $$ Integrating by parts twice and using the monotonicity of $\Phi_{A,\beta}(f,\cdot)$, we obtain that under $0<r<s<1$, \begin{eqnarray*} 0&\le&\mathsf{A}_{\alpha,\beta}(f,s)-\mathsf{A}_{\alpha,\beta}(f,r)\\ &=&\int_r^s\frac{d}{dt}\mathsf{A}_{\alpha,\beta}(f,t)\,dt\\ &=&\int_r^s\left(\int_0^t\big[\frac{d}{d\tau}\Phi_{A,\beta}(f,\tau)\big]\nu_\alpha(\tau)\,d\tau\right)\,\Big[\frac{d\nu_\alpha(t)}{\nu_\alpha(t)^2}\Big]\\ &=&\int_r^s\left(\nu_\alpha(t)\Phi_{A,\beta}(f,t)-\int_0^t\Phi_{A,\beta}(f,\tau)\,d\nu_\alpha(\tau)\right)\,\Big[\frac{d\nu_\alpha(t)}{\nu_\alpha(t)^2}\Big]\\ &\le&\Big[\Phi_{A,\beta}(f,s)-\Phi_{A,\beta}(f,0)\Big]\int_r^s\frac{d\nu_\alpha(t)}{\nu_\alpha(t)}. \end{eqnarray*} This gives the desired inequality right away. Furthermore, the above argument plus Lemma \ref{l1} (i) derives the equality case. \end{proof} As an immediate consequence of Theorem \ref{th1}, we get a sort of ``norm" estimate associated with $\Phi_{A,\beta}(f,\cdot)$. \begin{corollary}\label{pr2} Let $-\infty<\alpha<\infty$, $0\le\beta\le 1$, and $f\in H(\mathbb D)$. \item{\rm(i)} If $-\infty<\alpha\le -1$, then $$ \int_0^1 \Phi_{A,\beta}(f,t)\,d\mu_\alpha(t)=\sup_{r\in (0,1)}\int_0^r \Phi_{A,\beta}(f,t)\,d\mu_\alpha(t)<\infty $$ if and only if $f$ is constant. Moreover, $\sup_{r\in (0,1)}\mathsf{A}_{\alpha,\beta}(f,r)=\Phi_{A,\beta}(f,1).$ \item{\rm(ii)} If $-1<\alpha<\infty$, then $$ \mathsf{A}_{\alpha,\beta}(f,r)\le\mathsf{A}_{\alpha,\beta}(f,1):=\sup_{s\in (0,1)}\mathsf{A}_{\alpha,\beta}(f,s)\quad\forall\quad r\in (0,1), $$ where the inequality becomes an equality for all $r\in (0,1)$ if and only if \[ f=\left\{\begin{array} {r@{\;}l} \hbox{constant}\quad & \hbox{when}\quad \beta<1\\ \hbox{linear\ map}\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] \item{\rm(iii)} The following function $\alpha\mapsto\mathsf{A}_{\alpha,\beta}(f,1)$ is strictly decreasing on $(-1,\infty)$ unless \[ f=\left\{\begin{array} {r@{\;}l} \hbox{constant}\quad & \hbox{when}\quad \beta<1\\ \hbox{linear\ map}\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] \end{corollary} \begin{proof} (i) By Theorem \ref{th1}, we have $$ \mathsf{A}_{\alpha,\beta}(f,r)\leq \frac{\int_0^s \Phi_{A,\beta}(f,t)\,d\mu_\alpha(t)}{\nu_\alpha(s)}\quad\forall\quad r\in (0,s). 
$$ Note that $$\lim_{s\to 1}\nu_\alpha(s)=\infty\quad\&\quad\lim_{s\to 1}\int_0^s\Phi_{A,\beta}(f,t)\, d\mu_\alpha(t)=\int_0^1 \Phi_{A,\beta}(f,t)\,d\mu_\alpha(t). $$ So, the last integral is finite if and only if $$ \Phi_{A,\beta}(f,r)=0\quad\forall\quad r\in (0,1), $$ equivalently, $A(f,r)=0$ holds for all $r\in (0,1)$, i.e., $f$ is constant. For the remaining part of (i), we may assume that $f$ is not a constant map. Due to $\lim_{r\to 1}\nu_\alpha(r)=\infty$, we obtain $$ \lim_{r\to 1}\int_0^r \Phi_{A,\beta}(f,t)\,d\mu_\alpha(t)=\int_0^1 \Phi_{A,\beta}(f,t)\,d\mu_\alpha(t)=\infty. $$ So, an application of L'H\^{o}pital's rule yields $$ \sup_{0<r<1}\mathsf{A}_{\alpha,\beta}(f,r)=\lim_{r\to 1}\frac{\int_0^r \Phi_{A,\beta}(f,t)\, d\mu_\alpha(t)}{\nu_\alpha(r)}=\lim_{r\to 1}\frac{\Phi_{A,\beta}(f,r)r(1-r^2)^\alpha}{ r(1-r^2)^\alpha}=\Phi_{A,\beta}(f,1). $$ (ii) Under $-1<\alpha<\infty$, we have $$ \lim_{r\to 1}\nu_\alpha(r)=\nu_\alpha(1)\quad\&\quad \lim_{r\to 1}\int_0^r\Phi_{A,\beta}(f,t)\,d\mu_\alpha(t)=\int_0^1 \Phi_{A,\beta}(f,t)\,d\mu_\alpha(t). $$ Thus, by Theorem \ref{th1} it follows that for $r\in (0,1)$, $$ \mathsf{A}_{\alpha,\beta}(f,r)\le\lim_{s\to 1}\mathsf{A}_{\alpha,\beta}(f,s)=\big[\nu_\alpha(1)\big]^{-1}\int_0^1 \Phi_{A,\beta}(f,t)\, d\mu_\alpha(t)=\sup_{s\in (0,1)}\mathsf{A}_{\alpha,\beta}(f,s). $$ The equality case just follows from a straightforward computation and Theorem \ref{th1}. (iii) Suppose $-1<\alpha_1<\alpha_2<\infty$ and $\mathsf{A}_{\alpha_1,\beta}(f,1)<\infty$, then integrating by parts twice, we obtain \begin{align*} \mathsf{A}_{\alpha_2,\beta}(f,1)&= \big[\nu_{\alpha_2}(1)\big]^{-1}\int_0^1\Phi_{ A,\beta}(f,r)\,d\mu_{\alpha_2}(r)\\ &= \big[\nu_{\alpha_2}(1)\big]^{-1}\int_0^1 (1-r^2)^{\alpha_2-\alpha_1}\frac{d}{dr}\left[\int_0^r \Phi_{A,\beta}(f,t)\, d\mu_{\alpha_1}(t)\right]\, dr\\ &= \big[\nu_{\alpha_2}(1)\big]^{-1}\left[-\int_0^1\left(\int_0^r\Phi_{A,\beta}(f,t)\,d\mu_{\alpha_1}(t)\right)\, d(1-r^2)^{\alpha_2-\alpha_1}\right]\\ &\leq \big[\nu_{\alpha_2}(1)\big]^{-1}\mathsf{A}_{\alpha_1,\beta}(f,1)\int_0^1 \nu_{\alpha_1}(r)\, d\big[-(1-r^2)^{\alpha_2-\alpha_1}\big]\\ &=\mathsf{A}_{\alpha_1,\beta}(f,1)\big[\nu_{\alpha_2}(1)\big]^{-1}\left[\int_0^1 (1-r^2)^{\alpha_2-\alpha_1}\,d\mu_{\alpha_1}(r)\right] \\ &=\mathsf{A}_{\alpha_1,\beta}(f,1), \end{align*} thereby establishing $\mathsf{A}_{\alpha_2,\beta}(f,1)\le \mathsf{A}_{\alpha_1,\beta}(f,1)$. If this last inequality becomes equality, then the above argument forces $$ \int_0^r\Phi_{A,\beta}(f,t)\,d\mu_{\alpha_1}(t)=\mathsf{A}_{\alpha_1,\beta}(f,1) \nu_{\alpha_1}(r)\quad\forall\quad r\in (0,1), $$ whence yielding (via the just-verified (ii)) \[ f=\left\{\begin{array} {r@{\;}l} \hbox{constant}\quad & \hbox{when}\quad \beta<1\\ \hbox{linear\ map}\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] \end{proof} \subsection{Monotonic Growth of $\mathsf{L}_{\alpha,\beta}(f,\cdot)$} Correspondingly, we first have the following Schwarz type lemma. \begin{proposition}\label{co1} Let $-\infty<\alpha<\infty$, $0\le\beta\le 1$, and $f\in H(\mathbb D)$ be of the form $f(z)=a_0+\sum_{k=n}^\infty a_kz^k$ with $n\in\mathbb N$. Then $$ (2\pi)^{1-\beta}\Big[\frac{|f^{(n)}(0)|}{n!}\Big]\le \mathsf{L}_{\alpha,\beta}(f,r)\left[\frac{\nu_\alpha(r)}{\int_0^rt^{n-\beta}\,d\mu_\alpha(t)}\right]\quad\forall\quad r\in (0,1) $$ with equality when and only when $f=a_0+a_nz^n$. \end{proposition} \begin{proof} This follows from Lemma \ref{l2} (ii) and its equality case. 
\end{proof} The next monotonicity result requires a hypothesis stronger than that for Theorem \ref{th1}. \begin{theorem}\label{th2} Let $-\infty<\alpha<\infty$, $0\le\beta\le 1$, and $f\in U(\mathbb D)$ or $f(z)=a_0+a_nz^n$ with $n\in\mathbb N$. Then $r\mapsto\mathsf{L}_{\alpha,\beta}(f,r)$ is strictly increasing on $(0,1)$ unless \[ f=\left\{\begin{array} {r@{\;}l} constant &\quad \hbox{when}\quad \beta<1\\ linear\ map &\quad \hbox{when}\quad \beta=1. \end{array} \right. \] Consequently, \item{\rm(i)} \[ \lim_{r\to 0}\mathsf{L}_{\alpha,\beta}(f,r)=\left\{\begin{array} {r@{\;}l} 0\quad & \hbox{when}\quad \beta<1\\ |f'(0)|\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] \item{\rm(ii)} If $$ \Phi_{L,\beta}(f,0):=\lim_{r\to 0}\Phi_{L,\beta}(f,r)\quad\&\quad\Phi_{L,\beta}(f,1):=\lim_{r\to 1}\Phi_{L,\beta}(f,r)<\infty, $$ then $$ 0<r<s<1\Rightarrow 0\le \frac{\mathsf{L}_{\alpha,\beta}(f,s)-\mathsf{L}_{\alpha,\beta}(f,r)}{\log\nu_\alpha(s)-\log\nu_\alpha(r)}\leq \Phi_{L,\beta}(f,s)-\Phi_{L,\beta}(f,0) $$ with equality if and only if \[ f=\left\{\begin{array} {r@{\;}l} \hbox{constant}\quad & \hbox{when}\quad \beta<1\\ \hbox{linear\ map}\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] In particular, $t\mapsto \mathsf{L}_{\alpha,\beta}(f,t)$ is Lipschitz with respect to $\log\nu_\alpha(t)$ for $t\in (0,1)$. \end{theorem} \begin{proof} Similar to that for Theorem \ref{th1}, but this time by Lemma \ref{l1} (ii). \end{proof} Naturally, we can establish the so-called ``norm'' estimate associated with $\Phi_{L,\beta}(f,\cdot)$. \begin{corollary}\label{co2} Let $0\le\beta\le 1$ and $f\in U(\mathbb D)$ or $f(z)=a_0+a_nz^n$ with $n\in\mathbb N$. \item{\rm(i)} If $-\infty<\alpha\le -1$, then $$ \int_0^1 \Phi_{L,\beta}(f,t)\,d\mu_\alpha(t)=\sup_{r\in (0,1)}\int_0^r \Phi_{L,\beta}(f,t)\,d\mu_\alpha(t)<\infty $$ if and only if $f$ is constant. Moreover, $\sup_{r\in (0,1)}\mathsf{L}_{\alpha,\beta}(f,r)=\Phi_{L,\beta}(f,1).$ \item{\rm(ii)} If $-1<\alpha<\infty$, then $$ \mathsf{L}_{\alpha,\beta}(f,r)\le\mathsf{L}_{\alpha,\beta}(f,1):=\sup_{s\in (0,1)}\mathsf{L}_{\alpha,\beta}(f,s)\quad\forall\quad r\in (0,1), $$ where the inequality becomes an equality for all $r\in (0,1)$ if and only if \[ f=\left\{\begin{array} {r@{\;}l} \hbox{constant}\quad & \hbox{when}\quad \beta<1\\ \hbox{linear\ map}\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] \item{\rm(iii)} $\alpha\mapsto\mathsf{L}_{\alpha,\beta}(f,1)$ is strictly decreasing on $(-1,\infty)$ unless \[ f=\left\{\begin{array} {r@{\;}l} \hbox{constant}\quad & \hbox{when}\quad \beta<1\\ \hbox{linear\ map}\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] \end{corollary} \begin{proof} The argument is similar to that for Corollary \ref{pr2}, but via Lemma \ref{l1} (ii). \end{proof} \section{Logarithmic Convexity} In this section, we treat the convexities of the functions: $\log r\mapsto \log\mathsf{A}_{\alpha,\beta}(f,r)$ and $\log r\mapsto \log\mathsf{L}_{\alpha,\beta}(f,r)$ for $r\in (0,1)$. \subsection{Two More Lemmas} The following are two technical preliminaries. \begin{lemma}\cite[Corollaries 2-3 \& Proposition 7]{WZ}\label{wz} Suppose $f(x)$ and $\{h_k(x)\}_{k=0}^\infty$ are positive and twice differentiable for $x\in (0,1)$ such that the function $H(x)=\sum_{k=0}^\infty h_k(x)$ is also twice differentiable for $x\in (0,1)$. Then: \item{\rm(i)} $\log x\mapsto\log f(x)$ is convex if and only if $\log x\mapsto\log f(x^2)$ is convex. 
\item{\rm(ii)} The function $\log x\mapsto \log f(x)$ is convex if and only if the $D$-notation of $f$ satisfies $$ D(f(x)):=\frac{f'(x)}{f(x)}+ x\left(\frac{f'(x)}{f(x)}\right)'\ge 0\quad\forall\quad x\in (0,1). $$ \item{\rm(iii)} If for each $k$ the function $\log x\mapsto \log h_k(x)$ is convex, then $\log x\mapsto \log H(x)$ is also convex. \end{lemma} \begin{lemma}\label{uni} Let $f\in H(\mathbb D)$. Then $f$ belongs to $U(\mathbb D)$ provided that one of the following two conditions is valid: \item{\rm(i)} \cite{Nu} or \cite[Lemma 2.1]{AlD} $$ f(0)=f'(0)-1=0\quad\&\quad \left|\frac{z^2f'(z)}{f^2(z)}-1\right|<1\quad\forall\quad z\in \mathbb D. $$ \item{\rm(ii)} \cite[Theorem 1]{Ne} or \cite[Theorem 8.12]{Du} $$ \left|\left[\frac{f''(z)}{f'(z)}\right]'-\frac{1}{2}\left[\frac{f''(z)}{f'(z)}\right]^2\right|\leq 2(1-|z|^2)^{-2}\quad\forall\quad z\in \mathbb D. $$ \end{lemma} \subsection{Log-convexity for $\mathsf{A}_{\alpha,\beta}(f,\cdot)$} Such a property is given below. \begin{theorem}\label{th3} Let $0\le\beta\le 1$ and $0<r<1$. \item{\rm(i)} If $\alpha\in (-\infty,-3)$, then there exist $f, g\in H(\mathbb D)$ such that $\log r\mapsto\log\mathsf{A}_{\alpha,\beta}(f,r)$ is not convex and $\log r\mapsto\log \mathsf{A}_{\alpha,\beta}(g,r)$ is not concave. \item{\rm(ii)} If $\alpha\in [-3,0]$, then $\log r\mapsto \log\mathsf{A}_{\alpha,1}\big(a_nz^n,r\big)$ is convex for $a_n\not=0$ with $n\in\mathbb N$. Consequently, $$ \log r\mapsto \log\mathsf{A}_{\alpha,1}\big(f,r\big) $$ is convex for all $f\in U(\mathbb D)$. \item{\rm(iii)} If $\alpha\in (0,\infty)$, then $\log r\mapsto\log\mathsf{A}_{\alpha,\beta}(a_nz^n,r)$ is not convex for $a_n\not=0$ and $n\in \mathbb N$. \end{theorem} \begin{proof} The key issue is to check whether or not $\log r\mapsto \log\mathsf{A}_{\alpha,\beta}(z^n,r)$ is convex for $r\in (0,1)$. To see this, let us borrow some symbols from \cite{WZ}. For $\lambda\ge 0$ and $0<x<1$ we define $$ f_\lambda (x)=\int_0^x t^\lambda(1-t)^\alpha dt $$ and $$ \Delta (\lambda, x)=\frac{f_\lambda'(x)}{f_\lambda (x)}+x\left(\frac{f_\lambda'(x)}{f_\lambda(x)}\right)'-\left[\frac{f_0'(x)}{f_0(x)}+x\left(\frac{f_0'(x)}{f_0(x)}\right)'\right]. $$ Given $n\in\mathbb N$, a simple calculation shows $\Phi_{A,\beta}(z^n,t)=\pi^{1-\beta} t^{2(n-\beta)}$, and then a change of variable gives \begin{eqnarray*} \mathsf{A}_{\alpha,\beta}(z^n,r)&=&\frac{\int_0^r \Phi_{A,\beta}(z^n,t)\,d\mu_\alpha(t)}{\nu_\alpha(r)}\\ &=&\frac{\pi^{1-\beta}\int_0^{r^2}t^{n-\beta}(1-t)^\alpha \,dt }{\int_0^{r^2} (1-t)^\alpha\, dt}\\ &=& \pi^{1-\beta}\left[\frac{f_{n-\beta}(r^2)}{f_{0}(r^2)}\right]. \end{eqnarray*} In accordance with Lemma \ref{wz} (i)-(ii), it is easy to check that $\log r\mapsto\log\mathsf{A}_{\alpha,\beta}(z^n,r)$ is convex for $r\in (0,1)$ if and only if $\Delta (n-\beta, x)\ge 0$ for any $x\in (0,1)$. (i) Under $\alpha\in (-\infty,-3)$, we follow the argument for \cite[Proposition 6]{WZ} to get $$ \lim_{x\to 1}\Delta(\lambda,x)=\frac{\lambda (\alpha+1)(\lambda+2+\alpha)}{(\alpha+2)^2(\alpha+3)}. $$ Choosing \[ f(z)=z^n=\left\{\begin{array} {r@{\;}l} z\quad & \hbox{when}\quad \beta<1\\ z^2\quad & \hbox{when}\quad \beta=1 \end{array} \right. \] and $\lambda=n-\beta$, we find $\lim_{x\to 1}\Delta(\lambda,x)<0$, so that $\log r\mapsto \log \mathsf{A}_{\alpha,\beta}(f,r)$ is not convex. 
In the meantime, picking $n\in \mathbb N$ such that $n>\beta-(2+\alpha)$ and putting $g(z)=z^n$, we obtain $$ \lim_{x\to 1}\Delta(n-\beta,x)=\frac{(n-\beta)(\alpha+1)(n-\beta+2+\alpha)}{(\alpha+2)^2(\alpha+3)}>0, $$ whence deriving that $\log r\mapsto \log\mathsf{A}_{\alpha,\beta}(g,r)$ is not concave. (ii) Under $\alpha\in [-3,0]$, we handle the two situations. {\it Situation 1}: $f\in U(\mathbb D)$. Upon writing $f(z)=\sum_{n=0}^\infty a_n z^n$, we compute $$ \Phi_{A,1}\big(f(z),t\big)=(\pi t^2)^{-1}A(f,t)=\sum_{n=0}^\infty n|a_n|^2 t^{2(n-1)}, $$ and consequently, $$ \mathsf{A}_{\alpha,1}(f,r)=\frac{\sum_{n=0}^\infty n|a_n|^2\int_0^r (\pi t^2)^{-1}A(z^n,t)\, d\mu_\alpha(t)}{\nu_\alpha(r)}=\sum_{n=0}^\infty n |a_n|^2 \mathsf{A}_{\alpha,1}(z^n,r). $$ So, by Lemma \ref{wz} (iii), we see that the convexity of $$ \log r\mapsto\log\mathsf{A}_{\alpha,1}(f,r)\quad\hbox{under}\quad f\in U(\mathbb D) $$ follows from the convexity of $$ \log r\mapsto\log\mathsf{A}_{\alpha,1}(z^n,r)\quad\hbox{under}\quad n\in\mathbb N. $$ So, it remains to verify this last convexity via the coming-up-next consideration. {\it Situation 2}: $f(z)=a_nz^n$ with $a_n\not=0$. Three cases are required to control. {\it Case 1}: $\alpha=0$. An easy computation shows $$ \mathsf{A}_{0,1}(z^n,r)=n^{-1}{r^{2(n-1)}} $$ and so $\log r\mapsto\log\mathsf{A}_{0,1}(z^n,r)$ is convex. {\it Case 2}: $-2\le\alpha<0$. Under this condition, we see from the arguments for \cite[Propositions 4-5]{WZ} that $$ \Delta(n-1,x)\geq 0\quad\forall\quad n-1\geq 0\ \ \&\ \ 0<x<1, $$ and so that $\log r\mapsto\log\mathsf{A}_{\alpha,1}(z^n,r)$ is convex. {\it Case 3}: $-3\leq \alpha<-2$. With the assumption, we also get from the arguments for \cite[Propositions 4-5]{WZ} that $$ \Delta (n-1,x)\geq \Delta(-2-\alpha,x)>0\quad\forall\quad x\in (0,1)\ \ \&\ \ n-1\in [-2-\alpha,\infty) $$ and so that $\log r\mapsto\log\mathsf{A}_{\alpha,1}(z^n,r)$ is convex when $n\ge 2$. Here it is worth noting that the convexity of $\log r\mapsto\log\mathsf{A}_{\alpha,1}(z,r)=0$ is trivial. (iii) Under $0<\alpha<\infty$, from the argument for \cite[Proposition 6]{WZ} we know that $\Delta(n-\beta,x)<0$ as $x$ is sufficiently close to $1$. Thus $\log r\mapsto \log\mathsf{A}_{\alpha,\beta}(a_n z^n,r)$ is not convex under $a_n\not=0$. \end{proof} The following illustrates that the function $\log r\mapsto \log\mathsf{A}_{\alpha,\beta}(f,r)$ is not always concave for $\alpha>0$, $0\le\beta\le 1$, and $f\in U(\mathbb D)$. \begin{example} Let $\alpha=1$, $\beta\in\{0,1\}$, and $f(z)=z+\frac{z^2}{2}$. Then the function $\log r\mapsto \log\mathsf{A}_{\alpha,\beta}(f,r)$ is neither convex nor concave for $r\in (0,1)$. \end{example} \begin{proof} A direct computation shows $$ \left|\frac{z^2f'(z)}{f^2(z)}-1\right|=\left|\frac{z^2(1+z)}{(z+\frac{z^2}{2})^2}-1\right|=\frac{|z|^2}{|z+2|^2}<1 $$ since $$ |z|<1<2-|z|\leq|z+2|\quad\forall\quad z\in \mathbb D. $$ So, $f\in U(\mathbb D)$ owing to Lemma \ref{uni} (i). By $f'(z)=z+1$ we have $$ A(f,t)=\int_{t\mathbb D}|z+1|^2\, dA(z)=\pi \Big(t^2+\frac{t^4}{2}\Big), $$ plus \[ \int_0^r \Phi_{A,\beta}(f,t)\,d\mu_1(t)=\left\{\begin{array} {r@{\;}l} \frac{\pi}{2}\Big(r^4-\frac{r^6}{3}-\frac{r^8}{4}\Big)\quad & \hbox{when}\quad \beta=0\\ r^2-\frac{r^4}{4}-\frac{r^6}{6}\quad & \hbox{when}\quad \beta=1 \end{array} \right. \] Meanwhile, $$ \nu_1(r)=\int_0^r (1-t^2)dt^2=r^2-\frac{r^4}{2}. 
$$ So, we get \[ \mathsf{A}_{1,\beta}(f,r)=\left\{\begin{array} {r@{\;}l} \frac{\pi(12r^2-4r^4-3r^6)}{12(2-r^2)} \quad & \hbox{when}\quad \beta=0\\ \frac{12-3r^2-2r^4}{6(2-r^2)}\quad & \hbox{when}\quad \beta=1 \end{array} \right. \] and in turn consider the logarithmic convexities of the following function \[ h_\beta(x)=\left\{\begin{array} {r@{\;}l} \frac{12x-4x^2-3x^3}{2-x}\quad & \hbox{when}\quad \beta=0\\ \frac{12-3x-2x^2}{2-x}\quad & \hbox{when}\quad \beta=1 \end{array} \right. \] for $x\in (0,1)$. Using the so-called D-notation in Lemma \ref{wz}, we have \[ D(h_\beta(x))=\left\{\begin{array} {r@{\;}l} D(12x-4x^2-3x^3)-D(2-x)\quad & \hbox{when}\quad \beta=0\\ D(12-3x-2x^2)-D(2-x)\quad & \hbox{when}\quad \beta=1 \end{array} \right. \] for $x\in (0,1)$. By an elementary calculation, we get \[ \left\{\begin{array} {r@{\;}l} D(12x-4x^2-3x^3)=\frac{-48-144x+12x^2}{(12-4x-3x^2)^2}\\ D(2-x)=\frac{-2}{(2-x)^2}\\ D(12-3x-2x^2)=\frac{-36-96x+6x^2}{(12-3x-2x^2)^2}. \end{array} \right. \] Consequently, \[ D(h_\beta(x))=\left\{\begin{array} {r@{\;}l} \frac{2g_\beta(x)}{(12-4x-3x^2)^2(2-x)^2}\quad & \hbox{when}\quad \beta=0\\ \frac{2g_\beta(x)}{(12-3x-2x^2)^2(2-x)^2}\quad & \hbox{when}\quad \beta=1, \end{array} \right. \] where \[ g_\beta(x)=\left\{\begin{array} {r@{\;}l} 48-288x+232x^2-72x^3+15x^4\quad & \hbox{when}\quad \beta=0\\ 72-192x+147x^2-48x^3+7x^4\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] Now, under $x\in (0,1)$ we find $$ g_0'(x)=-288+464x-216x^2+60x^3\quad \&\quad g_0''(x)=464-432x+180x^2. $$ Clearly, $g_0''(x)$ is an open-upward parabola with the axis of symmetry $x=\frac{6}{5}>1$. By $g_0''(1)=212>0$ and the monotonicity of $g_0''$ on $(0,1)$, we have $g_0''(x)>0$ for all $x\in (0,1)$. Thus $g_0'$ is increasing on $(0,1)$. The following condition $$ g_0'(0)=-288<0\quad \&\quad g_0'(1)=20>0 $$ yields an $x_1\in (0,1)$ such that $g_0'(x)<0$ for $x\in(0,x_1)$ and $g_0'(x)>0$ for $x\in (x_1,1)$. Since $g_0(0)=48$ and $g_0(1)=-65$, there exists an $x_0\in (0,1)$ such that $g_0(x)>0$ for $x\in (0,x_0)$ and $g_0(x)<0$ for $x\in (x_0,1)$. Thus the function $\log x\mapsto\log h_0(x)$ is neither convex nor concave. Similarly, under $x\in (0,1)$ we have $$ g_1'(x)=-192+294x-144x^2+28x^3\quad \&\quad g_1''(x)=294-288x+84x^2. $$ Obviously, $g_1''(x)$ is an open-upward parabola with the axis of symmetry $x=\frac{12}{7}>1$. By $g_1''(1)=90>0$ and the monotonicity of $g_1''$ on $(0,1)$, we have $g_1''(x)>0$ for all $x\in (0,1)$. Thus $g_1'$ is increasing on $(0,1)$. The following condition $$ g_1'(0)=-192<0\quad \&\quad g_1'(1)=-14<0 $$ yields $g_1'(x)<0$ for $x\in(0,1)$. Since $g_1(0)=72$ and $g_1(1)=-14$, there exists an $x_0\in (0,1)$ such that $g_1(x)>0$ for $x\in (0,x_0)$ and $g_1(x)<0$ for $x\in (x_0,1)$. Thus the function $\log x\mapsto\log h_1(x)$ is neither convex nor concave. \end{proof} \subsection{Log-convexity for $\mathsf{L}_{\alpha,\beta}(f,\cdot)$} Analogously, we can establish the expected convexity for the mixed lengths. \begin{theorem}\label{th4} Let $0\le\beta\le 1$ and $0<r<1$. \item{\rm(i)} If $\alpha\in (-\infty,-3)$, then there exist $f, g\in H(\mathbb D)$ such that $\log r\mapsto\log\mathsf{L}_{\alpha,\beta}(f,r)$ is not convex and $\log r\mapsto\log \mathsf{L}_{\alpha,\beta}(g,r)$ is not concave. \item{\rm(ii)} If $\alpha\in [-3,0]$, then $\log r\mapsto \log\mathsf{L}_{\alpha,1}(a_nz^n,r\big)$ is convex for $a_n\not=0$ with $n\in\mathbb N$. Consequently, $\log r\mapsto \log\mathsf{L}_{\alpha,1}(f,r)$ is convex for $f\in U(\mathbb D)$. 
\item{\rm(iii)} If $\alpha\in (0,\infty)$, then $\log r\mapsto\log\mathsf{L}_{\alpha,\beta}(a_nz^n,r)$ is not convex for $a_n\not=0$ and $n\in \mathbb N$. \end{theorem} \begin{proof} The argument is similar to that for Theorem \ref{th3} except using the following statement for $\alpha\in [-3,0]$ -- If $f\in U(\mathbb D)$, then there exists $g(z)=\sum_{n=0}^\infty b_n z^n$ such that $g$ is the square root of the zero-free derivative $f'$ on $\mathbb D$ and $f'(0)=g^2(0)$, and hence \begin{eqnarray*} \Phi_{L,1}(f,t)&=&(2\pi t)^{-1}\int_{t\mathbb T}|f'(z)||dz|\\ &=&(2\pi t)^{-1} \int_{t\mathbb T} |g(z)|^2|dz|\\ &=&\sum_{n=0}^\infty |b_n|^2 t^{2n}. \end{eqnarray*} \end{proof} Our concluding example shows that under $0<\alpha<\infty$ and $0\le\beta\le 1$ one cannot get that $\log\mathsf{L}_{\alpha,\beta}(f,r)$ is convex or concave in $\log r$ for all functions $f\in U(\mathbb D)$. \begin{example} Let $\alpha=1$, $\beta\in\{0,1\}$, and $f(z)=(z+2)^3$. Then the function $\log r\mapsto\log\mathsf{L}_{\alpha,\beta}(f,r)$ is neither convex nor concave for $r\in (0,1)$. \end{example} \begin{proof} Clearly, we have $$ f'(z)=3(z+2)^2\ \ \&\ \ f''(z)=6(z+2) $$ as well as the Schwarizian derivative $$ \left[\frac{f''(z)}{f'(z)}\right]'-\frac{1}{2}\left[\frac{f''(z)}{f'(z)}\right]^2=\frac{-4}{(z+2)^2}. $$ It is easy to see that $$ \sqrt{2}(1-|z|^2)\leq 2-|z|\quad\forall\quad z\in \mathbb D. $$ So, $$ \left|\left[\frac{f''(z)}{f'(z)}\right]'-\frac{1}{2}\left[\frac{f''(z)}{f'(z)}\right]^2\right|=\frac{4}{|z+2|^2}\leq \frac{4}{(2-|z|)^2}\leq \frac{2}{(1-|z|^2)^2}. $$ By Lemma \ref{uni} (ii), $f$ belongs to $U(\mathbb D)$. Consequently, $$ L(f,t)=\int_0^{2\pi}|f'(te^{i\theta})|t\, d\theta=6\pi t(t^2+4) $$ and \[ \int_0^r\Phi_{L,\beta}(f,t)\,d\mu_1(t)=\left\{\begin{array} {r@{\;}l} 12\pi\Big(\frac{4}{3}r^3-\frac{3}{5}r^5-\frac{1}{7}r^7\Big) \quad & \hbox{when}\quad \beta=0\\ 12r^2-\frac{9}{2}r^4-r^6\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] Note that $\nu_1(r)=r^2-\frac{r^4}{2}$. So, \[ \mathsf{L}_{1,\beta}(f,r)=\left\{\begin{array} {r@{\;}l} \frac{24\pi(140r-63r^3-15r^5)}{105(2-r^2)} \quad & \hbox{when}\quad \beta=0\\ \frac{24-9r^2-2r^4}{2-r^2} \quad & \hbox{when}\quad \beta=1. \end{array} \right. \] To gain our conclusion, we only need to consider the logarithmic convexity of the function \[ h_\beta(x)=\left\{\begin{array} {r@{\;}l} \frac{140x-63x^3-15x^5}{2-x^2}\quad & \hbox{when}\quad \beta=0\\ \frac{24-9x-2x^2}{2-x}\quad & \hbox{when}\quad \beta=1. \end{array} \right. \] {\it Case 1}: $\beta=0$. Applying the definition of $D$-notation, we obtain $$ D(140x-63x^3-15x^5)=\frac{-35280 x-33600 x^3+3780x^5}{(140-63x^2-15x^4)^2} $$ and $$ D(2-x^2)=\frac{-8x}{(2-x^2)^2}, $$ whence reaching $$ D\big(h_0(x)\big)=D(140x-63x^3-15x^5)-D(2-x^2)=\frac{4xg_0(x)}{(140-63x^2-15x^4)^2(2-x^2)^2}, $$ where $$ g_0(x)=3920-33600x^2+28098x^4-8400x^6+1395x^8. $$ Obviously, $$ g_0(0)=3920>0\quad\&\quad g_0(1)=-8587<0. $$ Now letting $s=x^2$, we get $$ g_0(x)=G_0(s)=3920-33600s+28098s^2-8400s^3+1395s^4, $$ and $$ G'_0(s)=-33600+56196s-25200s^2+5580s^3\ \&\ G''_0(s)=56196-50400s+16740s^2. $$ Since the axis of symmetry of $G''_0$ is $s=\frac{140}{93}>1$, $G''_0$ is decreasing on $(0,1)$. Due to $G''_0(1)=22536>0$, we have $G''_0(s)>0$ for all $s\in (0,1)$, i.e., $G'_0(s)$ is increasing on $(0,1)$. By $$ G'_0(0)=-33600<0\quad\&\quad G'_0(1)=2976>0, $$ we conclude that there exists an $s_0\in(0,1)$ such that $G'_0(s)<0$ for $s\in(0,s_0)$ and $G'_0(s)>0$ for $s\in (s_0,1)$. 
Then there exists an $x_0\in (0,1)$ such that $g_0(x)$ is decreasing for $x\in (0,x_0)$ and $g_0(x)$ is increasing for $x\in(x_0,1)$. Thus there exists an $x_1\in (0,1)$ such that $g_0(x)>0$ for $x\in(0,x_1)$ and $g_0(x)<0$ for $x\in(x_1,1)$. As a result, we find that $\log r\mapsto\log\mathsf{L}_{\alpha,0}(f,r)$ is neither concave nor convex. {\it Case 2}: $\beta=1$. Again using the $D$-notation, we obtain $$ D(24-9x-2x^2)=\frac{-216-192x+18x^2}{(24-9x-2x^2)^2} $$ and $$ D(2-x)=\frac{-2}{(2-x)^2}, $$ whence deriving $$ D\big(h_1(x)\big)=D(24-9x-2x^2)-D(2-x)=\frac{2g_1(x)}{(24-9x-2x^2)^2(2-x)^2}, $$ where $$ g_1(x)=144-384x+297x^2-96x^3+13x^4. $$ Now we have $$ g'_1(x)=-384+594x-288x^2+52x^3\quad \&\quad g''_1(x)=594-576x+156x^2. $$ Since the axis of symmetry of $g''_1(x)$ is $x=\frac{24}{13}>1$, $g''_1(x)$ is decreasing on $(0,1)$. Due to $g''_1(1)=174>0$, we have $g''_1(x)>0$ for all $x\in (0,1)$, i.e., $g'_1(x)$ is increasing on $(0,1)$. By $$ g'_1(0)=-384<0\quad\&\quad g'_1(1)=-26<0, $$ we conclude that $g'_1(x)<0$ for $x\in (0,1)$. Obviously, $$ g_1(0)=144>0\quad \&\quad g_1(1)=-26<0. $$ Hence there exists an $x_0\in (0,1)$ such that $g_1(x)>0$ for $x\in (0,x_0)$ and $g_1(x)<0$ for $x\in (x_0,1)$. Consequently, we find that $\log r\mapsto\log\mathsf{L}_{\alpha,\beta=1}(f,r)$ is neither concave nor convex. \end{proof}
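As an informal numerical companion to the sign analysis above (not part of either proof, and assuming only \texttt{numpy}), one can tabulate the polynomials $g_0$ and $g_1$ from the concluding example on $(0,1)$ and locate their sign changes:
\begin{verbatim}
import numpy as np

# Polynomials governing the sign of D(h_beta) in the concluding example
# (alpha = 1, f(z) = (z+2)^3); a sign change of g_beta on (0,1) is what
# rules out both convexity and concavity of log L_{1,beta}(f,r) in log r.
g0 = lambda x: 3920 - 33600*x**2 + 28098*x**4 - 8400*x**6 + 1395*x**8
g1 = lambda x: 144 - 384*x + 297*x**2 - 96*x**3 + 13*x**4

x = np.linspace(0.0, 1.0, 100001)
for name, g in (("g_0", g0), ("g_1", g1)):
    s = np.sign(g(x))
    flips = x[np.nonzero(np.diff(s))[0]]
    print(name, "changes sign near x =", flips)
\end{verbatim}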
{ "timestamp": "2011-05-31T02:05:13", "yymm": "1105", "arxiv_id": "1105.6042", "language": "en", "url": "https://arxiv.org/abs/1105.6042", "abstract": "This note addresses monotonic growths and logarithmic convexities of the weighted ($(1-t^2)^\\alpha dt^2$, $-\\infty<\\alpha<\\infty$, $0<t<1$) integral means $\\mathsf{A}_{\\alpha,\\beta}(f,\\cdot)$ and $\\mathsf{L}_{\\alpha,\\beta}(f,\\cdot)$ of the mixed area $(\\pi r^2)^{-\\beta}A(f,r)$ and the mixed length $(2\\pi r)^{-\\beta}L(f,r)$ ($0\\le\\beta\\le 1$ and $0<r<1$) of $f(r\\mathbb D)$ and $\\partial f(r\\mathbb D)$ under a holomorphic map $f$ from the unit disk $\\mathbb D$ into the finite complex plane $\\mathbb C$.", "subjects": "Complex Variables (math.CV)", "title": "Weighted Integral Means of Mixed Areas and Lengths under Holomorphic Mappings", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9867771763033945, "lm_q2_score": 0.7185943985973773, "lm_q1q2_score": 0.7090925515553559 }
https://arxiv.org/abs/1512.01832
Variable selection with Hamming loss
We derive non-asymptotic bounds for the minimax risk of variable selection under expected Hamming loss in the Gaussian mean model in $\mathbb{R}^d$ for classes of $s$-sparse vectors separated from 0 by a constant $a > 0$. In some cases, we get exact expressions for the nonasymptotic minimax risk as a function of $d, s, a$ and find explicitly the minimax selectors. These results are extended to dependent or non-Gaussian observations and to the problem of crowdsourcing. Analogous conclusions are obtained for the probability of wrong recovery of the sparsity pattern. As corollaries, we derive necessary and sufficient conditions for such asymptotic properties as almost full recovery and exact recovery. Moreover, we propose data-driven selectors that provide almost full and exact recovery adaptively to the parameters of the classes.
\section{Introduction} In recent years, the problem of variable selection in high-dimensional regression models has been extensively studied from the theoretical and computational viewpoints. In making effective high-dimensional inference, sparsity plays a key role. With regard to variable selection in sparse high-dimensional regression, the Lasso, Dantzig selector, other penalized techniques as well as marginal regression were analyzed in detail; see, for example, \cite{MB2006, ZhaoYu2006,Wainwright2009,Lounici2008,WR2009,Zhang2010,MB2010, GJW2012,JJ2012} and the references cited therein. Several other recent papers deal with sparse variable selection in nonparametric regression; see, for example, \cite{LW2008,BL2008,Dalalyan,IngStep2014,ButStep2015}. In this paper, we study the problem of variable selection in the Gaussian sequence model \begin{equation} \label{vectormodel} X_j = \theta_j + \sigma \xi _j, \quad j =1,\dots, d, \end{equation} where $\xi_1,\dots,\xi_d$ are i.i.d. standard Gaussian random variables, $\sigma>0$ is the noise level, and $\theta = (\theta_1,\dots,\theta_d)$ is an unknown vector of parameters to be estimated. We assume that $\theta$ is $(s,a)$-{\it sparse}, which is understood in the sense that $\theta$ belongs to one of the following sets: \begin{eqnarray*} \Theta_d(s,a) & = &\lt\{ \theta \in \mathbb{R}^d : \mbox{ there exists a set } S\subseteq\{1,\dots,d\} \mbox{ with } s \mbox{ elements } \right. \\ && \left. \mbox{such that } |\theta_j| \geq a \mbox{ for all } j \in S, \, \mbox{ and } \theta_j=0 \mbox{ for all } j \not \in S \rt\} \end{eqnarray*} or \begin{eqnarray*} \Theta_d^+(s,a) & = &\lt\{ \theta \in \mathbb{R}^d : \mbox{ there exists a set } S\subseteq\{1,\dots,d\} \mbox{ with } s \mbox{ elements } \right. \\ && \left. \mbox{such that } \theta_j \geq a \mbox{ for all } j \in S, \, \mbox{ and } \theta_j=0 \mbox{ for all } j \not \in S \rt\}. \end{eqnarray*} Here, $a>0$ and $s\in \{1,\dots,d\}$ are given constants. We study the problem of selecting the relevant components of $\theta$, that is, of estimating the vector $$\eta = \eta(\theta) = (I(\theta_j \ne 0 ))_{j=1,\dots,d},$$ where $I(\cdot)$ is the indicator function. As estimators of $\eta$, we consider any measurable functions $\widehat \eta=\widehat \eta(X_1,\dots,X_d)$ of $(X_1,\dots,X_d)$ taking values in $\{0,1\}^d$. Such estimators will be called {\it selectors}. We characterize the loss of a selector $\widehat \eta$ as an estimator of $\eta$ by the Hamming distance between $\widehat \eta$ and $\eta$, that is, by the number of positions at which $\widehat \eta$ and $\eta$ differ: $$ |\widehat \eta-\eta|\triangleq\sum_{j=1}^d |\widehat \eta_j-\eta_j|= \sum_{j=1}^d I(\widehat \eta_j\ne \eta_j). $$ Here, $\widehat \eta_j$ and $ \eta_j=\eta_j(\theta)$ are the $j$th components of $\widehat \eta$ and $ \eta= \eta(\theta)$, respectively. The expected Hamming loss of a selector $\widehat \eta$ is defined as $\E_\theta |\widehat \eta - \eta|$, where $\E_\theta$ denotes the expectation with respect to the distribution $\Pb_\theta$ of $(X_1,\dots,X_d)$ satisfying~\eqref{vectormodel}. Another well-known risk measure is the probability of wrong recovery $\Pb_\theta(\widehat S \ne S(\theta))$, where $\widehat S= \{ j: \, \widehat \eta_j=1\}$ and $S(\theta)=\{ j: \, \eta_j(\theta)=1\}$. 
It can be viewed as the Hamming distance with an indicator loss and is related to the expected Hamming loss as follows: \begin{equation} \label{risks} \Pb_\theta(\widehat S \ne S(\theta)) = \Pb_\theta(|\widehat \eta - \eta|\ge 1) \le \E_\theta |\widehat \eta - \eta|. \end{equation} In view of the last inequality, bounding the expected Hamming loss provides a stronger result than bounding the probability of wrong recovery. Most of the literature on variable selection in high dimensions focuses on the recovery of the sparsity pattern, that is, on constructing selectors such that the probability $\Pb_\theta(\widehat S \ne S(\theta))$ is close to 0 in some asymptotic sense (see, for example, \cite{MB2006, ZhaoYu2006,Wainwright2009,Lounici2008,WR2009,Zhang2010,MB2010}). These papers consider high-dimensional linear regression settings with deterministic or random covariates. In particular, for the sequence model~\eqref{vectormodel}, one gets that if $a>C\sigma \sqrt{\log d}$ for some $C>0$ large enough, then there exist selectors such that $\Pb_\theta(\widehat S \ne S(\theta))$ tends to 0, while this is not the case if $a<c\sigma \sqrt{\log d}$ for some $c>0$ small enough. More insight into variable selection was provided in \cite{GJW2012,JJ2012} by considering a Hamming risk close to the one we have defined above. Assuming that $s\sim d^{1-\beta}$ for some $\beta\in (0,1)$, the papers \cite{GJW2012,JJ2012} establish an asymptotic in $d$ ``phase diagram'' that partitions the parameter space into three regions called the exact recovery, almost full recovery, and no recovery regions. This is done in a Bayesian setup for the linear regression model with i.i.d. Gaussian covariates and random $\theta$. Note also that in \cite{GJW2012,JJ2012} the knowledge of $\beta$ is required to construct the selectors, so that in this sense the methods are not adaptive. The selectors are of the form $\hat \eta_j = I(|X_j|\geq t)$ with threshold $t=\tau(\beta)\sigma \sqrt{\log d}$ for some function $\tau(\cdot)>0$. More recently, these asymptotic results were extended to a combined minimax - Bayes Hamming risk on a certain class of vectors $\theta$ in \cite{JZZ2014}. The present paper makes further steps in the analysis of variable selection with a Hamming loss initiated in \cite{GJW2012,JJ2012}. Unlike \cite{GJW2012,JJ2012}, we study the sequence model \eqref{vectormodel} rather than Gaussian regression and analyze the behavior of the minimax risk rather than that of the Bayes risk with a specific prior. Furthermore, we consider not only $s\sim d^{1-\beta}$ but general $s$ and derive non-asymptotic results that are valid for any sample size. Remarkably, we get an exact expression for the non-asymptotic minimax risk and find explicitly the minimax selectors. Finally, we construct data-driven selectors that are simultaneously adaptive to the parameters $a$ and $s$. Specifically, we consider the minimax risk \begin{eqnarray}\label{minimaxrisk} \inf_{\ti \eta} \sup_{\theta \in \Theta} \f 1s \, \E_\theta |\ti \eta - \eta| \end{eqnarray} for $\Theta= \Theta_d(s,a)$ and $\Theta= \Theta_d^+(s,a)$, where $\inf_{\ti \eta}$ denotes the infimum over all selectors~$\widetilde \eta$. In Section \ref{sec:minimax}, for both classes $\Theta= \Theta_d^+(s,a)$ and $\Theta= \Theta_d(s,a)$ we find the exact values of the minimax risks and derive minimax selectors for any fixed $d, s, a>0$ such that $s < d $. For $\Theta= \Theta_d(s,a)$ we also propose another selector attaining the minimax risk up to the factor~2. 
Interestingly, the thresholds that correspond to the minimax optimal selectors do not have the classical form $A\sigma \sqrt{\log d}$ for some $A>0$; the optimal threshold is a function of $a$ and $s$. Analogous minimax results are obtained for the risk measured by the probability of wrong recovery $\Pb_\theta(\widehat S \ne S(\theta))$. Section \ref{sec:general} considers extensions of the non-asymptotic exact minimax theorems of Section \ref{sec:minimax} to settings with non-Gaussian or dependent observations. In Section~\ref{sec:asymp}, as asymptotic corollaries of these results, we establish sharp conditions under which exact and almost full recovery are achievable. Section~\ref{sec:adapt} is devoted to the construction of adaptive selectors that achieve almost full and exact recovery without the knowledge of the parameters $a$ and $s$. Most of the proofs are given in the Appendix. Finally, note that quite recently several papers have studied the expected Hamming loss in other problems of variable selection. Asymptotic behavior of the minimax risk analogous to \eqref{minimaxrisk} for classes $\Theta$ different from the sparsity classes that we consider here was analyzed in \cite{ButStep2015} and without the normalizing factor $1/s$ in~\cite{IngStep2014}. Oracle inequalities for Hamming risks in the problem of multiple classification under sparsity constraints are established in \cite{NR2012}. The paper \cite{ZhangZhou2015} introduces an asymptotically minimax approach based on the Hamming loss in the problem of community detection in networks. \section{Non-asymptotic minimax selectors}\label{sec:minimax} In what follows, we assume that $s < d$. We first consider minimax variable selection for the class $\Theta_d^+(s,a)$. For this class, we will use a selector $\hat \eta^+$ with the components \begin{equation}\label{selector+} \hat \eta_j^+ = I(X_j\geq t), \quad j =1,\dots,d, \end{equation} where the threshold is defined by \begin{equation}\label{threshold} t = \frac a2 + \frac {\s^2} a \log\left( \frac ds - 1 \right). \end{equation} Set $$ \Psi_+(d,s,a)= \left( \frac ds - 1 \right) \Phi \left( -\frac a {2\s} - \frac \s a \log \Big( \frac ds - 1 \Big) \right) +\Phi \left( - \frac a {2\s} + \frac \s a \log \Big( \frac ds - 1 \Big) \right), $$ where $\Phi(\cdot)$ denotes the standard Gaussian cumulative distribution function. \begin{theorem}\label{t2} For any $a>0$ and $s < d$ the selector $\hat \eta^+$ in \eqref{selector+} with the threshold $t$ defined in (\ref{threshold}) satisfies \begin{equation}\label{t2:eq1} \sup_{\theta \in \Theta_d^+(s,a)} \frac 1s \E_\theta |\hat \eta^+ - \eta| \leq \Psi_+(d,s,a). \end{equation} \end{theorem} The proof is given in the Appendix. The next theorem gives a lower bound on the minimax risk showing that the upper bound in Theorem~\ref{t2} is tight. \begin{theorem}\label{thm:lb} For any $a>0$ and $s < d$ we have $$ \inf_{\widetilde \eta} \sup_{\theta \in \Theta_d^+(s,a)} \frac 1s \E_\theta |\widetilde \eta - \eta| \ge \Psi_+(d,s,a), $$ where $\inf_{\widetilde \eta}$ denotes the infimum over all selectors $\widetilde \eta$. \end{theorem} The proof is given in the Appendix. 
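For illustration only (this sketch is not part of the paper; it assumes \texttt{numpy} and \texttt{scipy}, and the parameter values below are arbitrary), the selector \eqref{selector+}, its threshold \eqref{threshold}, and the quantity $\Psi_+(d,s,a)$ can be evaluated in Python as follows:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def threshold(d, s, a, sigma=1.0):
    # t = a/2 + (sigma^2/a) * log(d/s - 1)
    return a/2.0 + (sigma**2/a)*np.log(d/s - 1.0)

def selector_plus(X, d, s, a, sigma=1.0):
    # \hat\eta^+_j = I(X_j >= t), applied coordinatewise
    return (X >= threshold(d, s, a, sigma)).astype(int)

def Psi_plus(d, s, a, sigma=1.0):
    # exact minimax value of the normalized Hamming risk on Theta_d^+(s,a)
    L = np.log(d/s - 1.0)
    return (d/s - 1.0)*norm.cdf(-a/(2*sigma) - sigma*L/a) \
           + norm.cdf(-a/(2*sigma) + sigma*L/a)

d, s = 10**4, 10
for a in (1.0, 2.0, 4.0, 6.0):
    print(a, threshold(d, s, a), Psi_plus(d, s, a))
\end{verbatim}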
As a straightforward corollary of Theorems \ref{t2} and \ref{thm:lb}, we obtain that the estimator $\hat \eta^+$ is minimax in the exact sense for the class $\Theta_d^+(s,a)$ and the minimax risk satisfies \begin{eqnarray}\label{eq:minimax:central} \inf_{\widetilde \eta} \sup_{\theta \in \Theta_d^+(s,a)} \frac 1s \E_\theta |\widetilde \eta - \eta| = \sup_{\theta \in \Theta_d^+(s,a)} \frac 1s \E_\theta |\hat \eta^+ - \eta| = \Psi_+(d,s,a). \end{eqnarray} Remarkably, this holds under no assumptions on $d,s,a$ except for, of course, some minimal conditions under which the problem ever makes sense: $a>0$ and $s < d$. Analogous non-asymptotic minimax result is valid for the class \begin{eqnarray*} \Theta_d^-(s,a) & = &\lt\{ \theta \in \mathbb{R}^d : \mbox{ there exists a set } S\subseteq\{1,\dots,d\} \mbox{ with } s \mbox{ elements } \right. \\ && \left. \mbox{such that } \theta_j \leq - a \mbox{ for all } j \in S, \, \mbox{ and } \theta_j=0 \mbox{ for all } j \not \in S \rt\}. \end{eqnarray*} We omit details here. Next, consider the class $\Theta_d(s,a)$. A direct analog of $\hat \eta^+$ for $\Theta_d(s,a)$ is a selector $\hat \eta$ with the components \begin{equation}\label{selector} \hat \eta_j = I(|X_j|\geq t), \quad j =1,\dots,d, \end{equation} where the threshold $t$ is defined in \eqref{threshold}. Set $$ \Psi(d,s,a)= \left( \frac ds - 1 \right) \Phi \left( - \frac a{2\s} - \frac \s a \log \Big( \frac ds - 1 \Big) \right) +\Phi \left( - \Big(\frac a{2\s} - \frac \s a \log \Big( \frac ds - 1 \Big) \Big)_+ \right), $$ where $x_+=\max (x,0)$. Note that \begin{equation}\label{eq_psi} \Psi(d,s,a) \le \Psi_+(d,s,a). \end{equation} We have the following bound. \begin{theorem}\label{nonasymptotic} For any $a>0$ and $s < d$ the selector $\hat \eta$ in (\ref{selector}) with the threshold $t$ defined in (\ref{threshold}) satisfies \begin{equation}\label{t1:eq1} \sup_{\theta \in \Theta_d(s,a)} \frac 1s \E_\theta |\hat \eta - \eta| \leq 2 \Psi(d,s,a). \end{equation} \end{theorem} The proof is given in the Appendix. For the minimax risk on the class $\Theta_d(s,a)$, we have the following corollary, which is an immediate consequence of Theorems~\ref{thm:lb}, \ref{nonasymptotic}, and inequality \eqref{eq_psi}. \begin{corollary}\label{cor:factor2} For any $a>0$ and $s < d$ the selector $\hat \eta$ in (\ref{selector}) with the threshold $t$ defined in (\ref{threshold}) satisfies \begin{equation}\label{cor1:eq1} \sup_{\theta \in \Theta_d(s,a)} \E_\theta |\hat \eta - \eta| \leq 2\,\inf_{\widetilde \eta} \sup_{\theta \in \Theta_d(s,a)} \E_\theta |\widetilde \eta - \eta|. \end{equation} \end{corollary} Thus, the risk of the thresholding estimator (\ref{selector}) cannot be greater than the minimax risk over the class $\Theta_d(s,a)$ multiplied by 2. \smallskip We turn now to exact minimax variable selection over the class $\Theta_d(s,a)$. Consider a selector $\overline \eta=(\overline \eta_1,\dots,\overline \eta_d)$ with the components \begin{equation}\label{selector2} \overline \eta_j = I\left(\log \Big(\cosh \Big(\frac{a X_j}{\s^2}\Big)\Big) \geq t\right), \quad j =1,\dots,d, \end{equation} where the threshold is defined by \begin{equation}\label{threshold2} t = \frac{a^2}{2 \s^2} + \log\left( \frac ds - 1 \right). 
\end{equation} Set \begin{eqnarray*} \overline \Psi (d,s,a)&=&\left(\frac ds - 1 \right) \Pb \left( e^{- \frac {a^2}{2\s^2} }\cosh \Big(\frac{a\xi }{\s}\Big) \geq \frac ds - 1 \right)\\ && + \Pb \left( e^{- \frac {a^2}{2\s^2} }\cosh \Big(\frac{a\xi }{\s}+\frac{a^2}{\s^2}\Big) < \frac ds - 1 \right), \end{eqnarray*} where $\xi$ denotes a standard Gaussian random variable. Our aim is to show that $\overline \Psi (d,s,a)$ is the minimax risk of variable selection under the Hamming loss over the class $\Theta_d(s,a)$ and that it is achieved by the selector in \eqref{selector2}. We first prove that $\overline \Psi (d,s,a)$ is an upper bound on the maximum risk of the selector \eqref{selector2}.
\begin{theorem}\label{t1} For any $a>0$ and $s < d$ the selector $\overline \eta$ in (\ref{selector2}) with the threshold $t$ defined in (\ref{threshold2}) satisfies \begin{equation*} \sup_{\theta \in \Theta_d(s,a)} \frac 1s \E_\theta |\overline \eta - \eta| \leq \overline \Psi (d,s,a). \end{equation*} \end{theorem}
The next theorem establishes the lower bound on the minimax risk showing that the upper bound in Theorem~\ref{t1} cannot be improved.
\begin{theorem}\label{thm:lb2} For any $a>0$ and $s < d$ we have $$ \inf_{\widetilde \eta} \sup_{\theta \in \Theta_d(s,a)} \frac 1s \E_\theta |\widetilde \eta - \eta| \ge \overline \Psi (d,s,a), $$ where $\inf_{\widetilde \eta}$ denotes the infimum over all selectors $\widetilde \eta$. \end{theorem}
Next, we discuss a connection to the Bayesian setting. It is not hard to check that, for each $j$, the minimax optimal selector $\hat \eta_j^+ $ coincides with the Bayes test of the null hypothesis $H_0: \ \theta_j=0$ against the alternative $H_1: \ \theta_j=a$ with prior probabilities $1-s/d$ and $s/d$, respectively. This can also be seen from the proof of Theorem~\ref{thm:lb}. Furthermore, the minimax risk on $\Theta_d^+(s,a)$ is exactly equal to the risk of the corresponding Bayes selector, as shown in the next proposition. An analogous result holds for the class $\Theta_d(s,a)$, but we do not state it here for the sake of brevity.
\begin{proposition}\label{prop:bayes} Let $\mu$ be the uniform distribution on the set $\Theta'$ of all $\theta$ in $\Theta_d^+(s,a)$ such that $s$ components $\theta_j$ of $\theta$ are equal to $a$ and the remaining $d-s$ components are 0. Then, \begin{eqnarray}\label{eq:prop:bayes} \inf_{\widetilde \eta} \sup_{\theta \in \Theta_d^+(s,a)}\E_\theta |\widetilde \eta - \eta| = \inf_{\widetilde \eta}\int \E_\theta |\widetilde \eta - \eta| \mu(d\theta). \end{eqnarray} \end{proposition}
The proof is given in the Appendix. Finally, we show how the above non-asymptotic minimax results can be extended to the probability of wrong recovery. For any selector $\widetilde \eta$, we denote by $ S_{\widetilde \eta}$ the selected set of indices: $S_{\widetilde \eta} = \{j:\, \widetilde \eta_j=1\}$. A selector $\widetilde \eta=(\widetilde \eta_1,\dots,\widetilde \eta_d)$ will be called {\it separable} if its $j$th component $\widetilde \eta_j$ depends only on $X_j$ for all $j=1,\dots,d$. We denote by $\cal T$ the set of all separable selectors.
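Before stating the theorem, we give a small numerical illustration; it is a sketch under our own naming conventions and is not part of the paper's formal content. It implements the selector \eqref{selector2} with the threshold \eqref{threshold2} in a numerically stable way and approximates $\overline \Psi (d,s,a)$ by Monte Carlo.
\begin{verbatim}
import numpy as np

def log_cosh(x):
    # Numerically stable log(cosh(x)) = logaddexp(x, -x) - log(2)
    return np.logaddexp(x, -x) - np.log(2.0)

def selector_bar(X, d, s, a, sigma=1.0):
    # bar eta_j = I( log cosh(a X_j / sigma^2) >= t ) with t as in (threshold2)
    t = a ** 2 / (2 * sigma ** 2) + np.log(d / s - 1)
    return (log_cosh(a * X / sigma ** 2) >= t).astype(int)

def Psi_bar_mc(d, s, a, sigma=1.0, n_mc=10**6, seed=0):
    # Monte Carlo approximation of bar Psi(d, s, a)
    xi = np.random.default_rng(seed).standard_normal(n_mc)
    log_c = np.log(d / s - 1)
    lw = -a ** 2 / (2 * sigma ** 2)      # log of the factor e^{-a^2/(2 sigma^2)}
    fp = np.mean(lw + log_cosh(a * xi / sigma) >= log_c)
    fn = np.mean(lw + log_cosh(a * xi / sigma + a ** 2 / sigma ** 2) < log_c)
    return (d / s - 1) * fp + fn
\end{verbatim}
Writing the decision rule through $\log\cosh$ avoids overflow of $\cosh$ for large arguments and does not change the selector, since the logarithm is increasing.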
\begin{theorem}\label{cor:recovery_pattern} For any $a>0$ and $s < d$ the selectors $\hat \eta$ in \eqref{selector} and $\hat \eta^+$ in \eqref{selector+} with the threshold $t$ defined in~\eqref{threshold}, and the selector $\overline \eta$ in (\ref{selector2}) with the threshold $t$ defined in (\ref{threshold2}) satisfy \begin{equation}\label{cor:recovery_pattern:eq1} \sup_{\theta \in \Theta_d^+(s,a)} \Pb_\theta( S_{\hat \eta^+} \ne S(\theta)) \le s\Psi_+(d,s,a), \end{equation} \begin{equation}\label{cor:recovery_pattern:eqbar2} \sup_{\theta \in \Theta_d(s,a)} \Pb_\theta( S_{\overline \eta} \ne S(\theta)) \le s \overline \Psi(d,s,a) \end{equation} and \begin{equation}\label{cor:recovery_pattern:eq2} \sup_{\theta \in \Theta_d(s,a)} \Pb_\theta( S_{\hat \eta} \ne S(\theta)) \le 2s\Psi(d,s,a). \end{equation} Furthermore, \begin{equation}\label{cor:recovery_pattern:eq3} \inf_{\widetilde \eta\in \cal T} \sup_{\theta \in \Theta_d^+(s,a)} \Pb_\theta(S_{\widetilde \eta} \ne S(\theta)) \ge \frac{s\Psi_+(d,s,a)}{1+s\Psi_+(d,s,a)}, \end{equation} and \begin{equation}\label{cor:recovery_pattern:eqbar3} \inf_{\widetilde \eta\in \cal T} \sup_{\theta \in \Theta_d(s,a)} \Pb_\theta(S_{\widetilde \eta} \ne S(\theta)) \ge \frac{s \overline \Psi(d,s,a)}{1+s \overline \Psi(d,s,a)}. \end{equation} \end{theorem} The proof is given in the Appendix. Although Theorem~\ref{cor:recovery_pattern} does not provide the exact minimax solution, it implies sharp minimaxity in asymptotic sense. Indeed, an interesting case is when the minimax risk in Theorem~\ref{cor:recovery_pattern} goes to 0 as $d\to\infty$. Assuming that $s$ and $a$ are functions of $d$, this corresponds to $s\Psi_+(d,s,a)\to 0$ as $d\to\infty$. In this natural asymptotic setup, the upper and lower bounds of Theorem~\ref{cor:recovery_pattern} for both classes $ \Theta_d^+(s,a)$ and $ \Theta_d(s,a)$ are sharp. We discuss this issue in Section \ref{sec:asymp}, cf. Theorem~\ref{th:sharp:asymp}. \begin{remark} Papers {\rm \cite{GJW2012,JJ2012,JZZ2014}} use a different Hamming loss defined in terms of vectors of signs. In our setting, this would mean considering not $|\hat \eta - \eta|$ but the following loss: $ \sum_{j=1}^d I( {\rm sign}(\hat \theta_j)\ne {\rm sign}(\theta_j)), $ where $\hat \theta_j$ is an estimator of $\theta_j$ and ${\rm sign}(x)=I(x>0)-I(x<0)$. Theorems of this section are easily adapted to such a loss, but in this case the corresponding expressions for the non-asymptotic risk contain additional terms and we do not obtain exact minimax solutions as above. On the other hand, these additional terms are smaller than $\Psi(d,s,a)$ and $\Psi_+(d,s,a)$, and in the asymptotic analysis, such as the one performed in Sections \ref{sec:asymp} and \ref{sec:adapt}, can often be neglected. Thus, in many cases, one gets the same asymptotic results for both losses. We do not discuss this issue in more detail here. \end{remark} \section{Generalizations and extensions}\label{sec:general} Before proceeding to asymptotic corollaries, we discuss some generalizations and extensions of the non-asymptotic results of Section \ref{sec:minimax}. \subsection{Dependent observations} It is easy to see that Theorems~\ref{t2} and~\ref{nonasymptotic} do not use any information on the dependence between the observations and thus remain valid for dependent~$X_j$. Furthermore, a minimax optimality property holds under dependence as well. 
To be specific, denote by $\mathcal{N}_d(\theta,\Sigma)$ the $d$-dimensional Gaussian distribution with mean $\theta$ and covariance matrix $\Sigma$. Assume that the distribution $\Pb$ of $(X_1,\dots,X_d)$ belongs to the class $$ {\mathcal P}_d^+(s,a, \sigma^2) = \{\mathcal{N}_d(\theta,\Sigma): \theta \in \Theta_d^+(s,a), \, \sigma_{ii}=\sigma^2, \mbox{ for all }i = 1,\dots,d\} $$ where we denote by $\sigma_{ii}$ the diagonal entries of $\Sigma$. Note that, for distributions in this class, $\Sigma$~can be any covariance matrix with constant diagonal elements.
\begin{theorem}\label{thm:dependence} For any $a>0$ and $s < d$, and for the selector $\hat \eta^+$ in \eqref{selector+} with the threshold $t$ defined in (\ref{threshold}), we have $$ \inf_{\widetilde \eta} \sup_{\Pb \in{\mathcal P}_d^+(s,a,\sigma^2)} \E_\Pb |\widetilde \eta - \eta| = \sup_{\Pb \in {\mathcal P}_d^+(s,a,\sigma^2)} \E_\Pb |\hat \eta^+ - \eta| = s \Psi_+(d,s,a), $$ where $\inf_{\widetilde \eta}$ denotes the infimum over all selectors $\widetilde \eta$, and $\E_\Pb$ denotes the expectation with respect to $\Pb$. \end{theorem}
\begin{proof} The upper bound $s\Psi_+(d,s,a)$ on the minimax risk follows from the fact that the proofs of Theorems~\ref{t2} and \ref{nonasymptotic} are not affected by the dependence. Indeed, both the selector and the Hamming loss act coordinatewise. The lower bound $s\Psi_+(d,s,a)$ on the minimax risk follows from Theorem~\ref{thm:lb} after observing that the maximum over ${\mathcal P}_d^+(s,a,\sigma^2)$ is not smaller than the maximum over the subfamily of Gaussian vectors with independent entries $\{\mathcal{N}_d(\theta, \sigma^2 I_d):\theta \in \Theta_d^+(s,a)\}$, where $I_d$ is the $d\times d$ identity matrix. \end{proof}
An interesting consequence of Theorem~\ref{thm:dependence} and of \eqref{eq:minimax:central} is that the model with independent $X_j$ is the least favorable model, in the exact non-asymptotic sense, for the problem of variable selection with Hamming loss on the class of vectors $\Theta_d^+(s,a)$. This fact was also noticed and discussed in \cite{HallJin2010} for the detection problem. That paper considers the Gaussian model with a covariance matrix $\Sigma$ that is not necessarily diagonal. It is shown that faster detection rates are achieved in the case of dependent observations (under some assumptions) than in the case of independent data. It would be interesting to extend these results to the variable selection problem at hand.
\subsection{Non-Gaussian models} As a building block for the extension to non-Gaussian observations, we first consider the following simple model. We observe independent random variables $X_1,\dots,X_d$ with values in a measurable space $({\mathcal X},{\mathcal U})$ such that $s$ among them are distributed according to the probability distribution $\Pb_1$ and the other $d-s$ are distributed according to the probability distribution $\Pb_0$. We assume that $\Pb_0\ne \Pb_1$. Let $f_0$ and $f_1$ be densities of $\Pb_0$ and $\Pb_1$ with respect to some dominating measure. Denote by $\eta=(\eta_1,\dots,\eta_d)$ the vector such that $\eta_j=1 $ if the distribution of $X_j$ is $\Pb_1$ and $\eta_j=0$ if it is $\Pb_0$. Let $\Theta_d(s, f_0,f_1)$ be the set of all such vectors $\eta$. Consider the selector $\hat \eta=(\hat\eta_1,\dots,\hat\eta_d)$ where \begin{equation}\label{def:newsel} \hat \eta_j = I \left(\log \frac{f_1}{f_0}(X_j) \geq \log\left(\frac ds - 1 \right) \right), \quad j=1,\dots,d.
\end{equation} \begin{theorem}\label{thm:fixed} For any $s<d$, the selector $\hat \eta$ in (\ref{def:newsel}) satisfies \begin{equation} \label{thm:fixed:eq1} \sup_{\eta \in \Theta_d(s, f_0,f_1)} \E |\hat \eta - \eta| = \inf_{\tilde \eta} \sup_{\eta \in \Theta_d(s, f_0,f_1)} \E|\tilde \eta - \eta| = s \Psi, \end{equation} where $ \inf_{\ti{\eta}}$ denotes the infimum over all selectors, and \begin{equation} \label{psi} \Psi = \Pb_1\Big(\log \frac{f_1}{f_0}(X) < \log \Big(\frac ds - 1\Big) \Big) + \Big(\frac ds - 1\Big) \Pb_0\Big(\log \frac{f_1}{f_0}(X) \ge \log \Big(\frac ds - 1\Big) \Big). \end{equation} \end{theorem} \begin{proof}The proof of the upper bound $\sup_{\eta \in \Theta_d(s, f_0,f_1)}\E |\hat \eta - \eta| \le s\Psi$ is obvious. The proof of the lower bound $\inf_{\tilde \eta} \sup_{\eta \in \Theta_d(s, f_0,f_1)}\E|\tilde \eta - \eta| \ge s\Psi$ follows the same lines as the proof of Theorem~\ref{thm:lb} with the only difference in the definition of probability measures. We replace the Gaussian distributions centered at 0 and $a$ by the distributions $\Pb_0$ and $\Pb_1$, respectively. With this change, the Bayesian version of the Neyman-Pearson lemma leads to the optimal test statistic $T^*$ of the form $$ T^*(X) = I \left(\frac{(s/d) f_1 (X)}{(1-s/d) f_0(X)} >1 \right), $$ and, respectively, to the lower bound \eqref{psi} on the minimax risk. \end{proof} Suppose now that instead of two measures $\Pb_0$ and $\Pb_1$ we have a parametric family of probability measures $\{\Pb_a, a\in \mathcal{U}\}$ where $\mathcal{U}\subseteq {\mathbb R}$. Let ${\sf f}_a$ be a density of $\Pb_a$ with respect to some dominating measure. Recall that the family $\{{\sf f}_a, a\in \mathcal{U}\}$ is said to have the Monotone Likelihood Ratio (MLR) property if, for all $a_0,a_1$ in $\mathcal{U}$ such that $a_0 < a_1$, the log-likelihood ratio $ \log({{\sf f}_{a_1}(X) }/{{\sf f}_{a_0}(X) } ) $ is an increasing function of $X$. In particular, this implies, cf. \cite[{Lemma 3.4.2}]{lehmann} that $\{{\sf f}_a, \, a \in \mathcal{U}\}$ is a stochastically ordered family, i.e., \begin{equation}\label{order} F_{a} (x) \geq F_{a'} (x) \quad \mbox{for all } x \quad \mbox{ if } a < a' \end{equation} where $F_a$ is the cumulative distribution function corresponding to ${\sf f}_a$. Using these facts, we generalize the non-asymptotic results of the previous section in two ways. First, we allow for not necessarily Gaussian distributions and second, instead of the set of parameters $\Theta_d^+(s,a)$, we consider the following set with two restrictions: \begin{eqnarray*} \Theta_d^+ (s,a_0,a_1) &=& \left\{ \theta \in \mathbb{R}^d: \mbox{ there exists a set }S\subseteq\{1,\dots,d\} \mbox{ with $s$ elements} \right.\\ && \left. \mbox{ such that $\theta_j \geq a_1$ for all $j \in S$, and $\theta_j \leq a_0$ for all } j \not \in S \right\} \end{eqnarray*} where $a_0<a_1$. In what follows, we use the notation $f_j = {\sf f}_{a_j}, j=0,1$. \begin{theorem}\label{thm:st} Let $\{{\sf f}_a, \, a \in \mathcal{U}\}$ be a family with the MLR property, and let $a_0,a_1\in\mathcal{U}$ be such that $a_0 < a_1$. Then, for any $s<d$, the selector $\hat \eta$ in (\ref{def:newsel}) with $f_0 = {\sf f}_{a_0}$ and $f_1= {\sf f}_{a_1}$ satisfies $$ \sup_{\theta \in \Theta_d^+(s, a_0,a_1)} \E_\theta |\hat \eta - \eta| = \inf_{\tilde \eta} \sup_{\theta \in \Theta_d^+(s, a_0,a_1)} \E_\theta|\tilde \eta - \eta| = s\Psi, $$ where $ \inf_{\ti{\eta}}$ denotes the infimum over all selectors and $\Psi$ is given in (\ref{psi}). 
\end{theorem} \begin{proof} We have \begin{eqnarray*} \sup_{\theta \in \Theta_d^+(s, a_0,a_1)}\frac 1s \E_\theta |\hat \eta - \eta| &=& \sup_{a \geq a_1} \Pb_a \Big(\log \frac{f_1}{f_0}(X) < \log \Big(\frac ds - 1\Big)\Big)\\ && + \sup_{a\leq a_0} \Big(\frac ds -1\Big) \Pb_a \Big(\log \frac{f_1}{f_0}(X) \geq \log \Big(\frac ds - 1\Big)\Big) = \Psi, \end{eqnarray*} where the last equality is due to the monotonicity of $\log \frac{f_1}{f_0} (X)$ and to the stochastic order property \eqref{order}. The proof of the lower bound $$\inf_{\tilde \eta} \sup_{\theta \in \Theta_d^+(s, a_0,a_1)} \frac 1s \E_\theta|\tilde \eta - \eta| \ge \Psi$$ follows from the inequality $\sup_{\theta \in \Theta_d^+(s, a_0,a_1)}\E_\theta|\tilde \eta - \eta| \ge \sup_{\eta \in \Theta_d(s, f_0,f_1)}\E|\tilde \eta - \eta|$ and from Theorem~\ref{thm:fixed}. \end{proof}
{\bf Example 1.} Let ${\sf f}_a $ be the Gaussian ${\mathcal N}(a, \sigma^2)$ density with some $\sigma^2>0$, and let $a_0<a_1$. For $f_1= {\sf f}_{a_1}$ and $f_0 = {\sf f}_{a_0}$, the log-likelihood ratio $$ \log \frac{f_1}{f_0}(X) = X \, \frac{a_1-a_0}{\sigma^2}- \frac{a_1^2 - a_0^2}{2 \sigma^2} $$ is increasing in $X$. By Theorem~\ref{thm:st}, the minimax optimal selector $\hat \eta$ on the class $\Theta_d^+(s, a_0,a_1)$ is a vector with components \begin{equation}\label{eta01} \hat \eta_j = I\left( X_j \geq t(a_0,a_1) \right), \ j=1,\dots,d, \end{equation} where $$ t(a_0,a_1) =\frac{a_1+a_0}2 + \frac{\sigma^2 \log(d/s-1)}{a_1-a_0}. $$ Note that for $a_0=0$ it coincides with the selector in \eqref{selector+} with $a=a_1$, which is minimax optimal on $\Theta_d^+(s, a_1)$. Moreover, the minimax risk only depends on $a_0$ and $a_1$ through the difference $\delta=a_1-a_0$: $$ \Psi = \Phi\Big(-\frac{\delta}{2\sigma} + \frac{\sigma \log(d/s-1)}{\delta} \Big)+ \Big(\frac ds - 1\Big) \Phi\Big(-\frac{\delta}{2\sigma} - \frac{\sigma \log(d/s-1)}{\delta} \Big) = \Psi_+(d,s,\delta). $$
{\bf Example 2.} Let $\Pb_a$ be the Bernoulli distribution $B(a)$ with parameter $a \in (0,1)$, and $0< a_0 < a_1 <1$. Denoting by ${\sf f}_a$ the density of $\Pb_a$ with respect to the counting measure, we have, for $f_1= {\sf f}_{a_1}$ and $f_0 = {\sf f}_{a_0}$, $$ \log \frac{f_1}{f_0}(X) = X \log\Big(\frac{a_1}{1-a_1} \frac{1-a_0}{a_0}\Big) + \log \frac{1-a_1}{1-a_0} $$ which is increasing in $X$ for $0< a_0 < a_1 <1$. The minimax optimal selector $\hat \eta$ on the class $\Theta_d^+(s, a_0,a_1)$ is a vector with components $\hat \eta_j$ in \eqref{eta01} where the threshold $t(a_0,a_1)$ is given by $$ t(a_0,a_1)= \frac{ \log( \frac ds-1) - \log \frac{1-a_1}{1-a_0} }{\log(\frac{a_1}{1-a_1} \frac{1-a_0}{a_0})} \,. $$ Note that the minimax selector $\hat \eta_j$ does not always coincide with the naive selector $\hat \eta_j^{n}= X_j$. Indeed, since $X_j\in\{0,1\}$, we have $\hat \eta_j\equiv 1$ if $t(a_0,a_1)\le 0$, $\hat \eta_j= X_j$ if $0< t(a_0,a_1)\le 1$, and $\hat \eta_j\equiv 0$ if $t(a_0,a_1)> 1$. The value $\Psi$ in the minimax risk has the form \begin{eqnarray*} \Psi &=& \Pb_{a_1}(X < t(a_0,a_1)) +\Big(\frac ds - 1\Big) \Pb_{a_0}(X \ge t(a_0,a_1))\\ &=& \left\{ \begin{array}{ll} d/s - 1, & t(a_0,a_1) \le 0,\\ 1- a_1 + a_0 ( d/s - 1), & 0 < t(a_0,a_1) \le 1,\\ 1, & t(a_0,a_1) > 1. \end{array} \right. \end{eqnarray*} In the asymptotic regime when $d\to \infty$ and $s\to \infty$, the minimax risk $s\Psi$ can converge to 0 only when the parameters $d,s, a_0,a_1$ are kept such that $0 < t(a_0,a_1) \le 1$, and in addition $(1-a_1) s\to 0$, $a_0(d-s) \to 0$. Thus, the risk can converge to 0 only when the Bernoulli probabilities $a_1$ and $a_0$ tend sufficiently fast to 1 and to 0, respectively.
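To make Examples 1 and 2 concrete, the following Python sketch (our own illustration; the function names are not from the paper) evaluates the thresholds $t(a_0,a_1)$ and the corresponding values of $\Psi$ displayed above.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def t_gauss(a0, a1, d, s, sigma=1.0):
    # Threshold of Example 1 (Gaussian N(a, sigma^2) family), a0 < a1
    return (a1 + a0) / 2 + sigma ** 2 * np.log(d / s - 1) / (a1 - a0)

def Psi_gauss(a0, a1, d, s, sigma=1.0):
    # Psi = Psi_+(d, s, delta) with delta = a1 - a0
    delta = a1 - a0
    u, v = delta / (2 * sigma), sigma * np.log(d / s - 1) / delta
    return norm.cdf(-u + v) + (d / s - 1) * norm.cdf(-u - v)

def t_bernoulli(a0, a1, d, s):
    # Threshold of Example 2 (Bernoulli B(a) family), 0 < a0 < a1 < 1
    num = np.log(d / s - 1) - np.log((1 - a1) / (1 - a0))
    den = np.log(a1 / (1 - a1) * (1 - a0) / a0)
    return num / den

def Psi_bernoulli(a0, a1, d, s):
    # Piecewise form of Psi in Example 2
    t = t_bernoulli(a0, a1, d, s)
    if t <= 0:
        return d / s - 1
    if t <= 1:
        return 1 - a1 + a0 * (d / s - 1)
    return 1.0
\end{verbatim}
For instance, \texttt{Psi\_gauss(0.0, 4.0, 1000, 10)} returns the same value as $\Psi_+(d,s,a)$ with $a=4$, reflecting the fact that for $a_0=0$ the selector of Example 1 reduces to \eqref{selector+}.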
{\bf Example 3.} Let $\Pb_a$ be the Poisson distribution with parameter $a>0$, and let $a_1>a_0 >0$. Denoting by ${\sf f}_a$ the density of $\Pb_a$ with respect to the counting measure, we have $$ \log \frac{f_1}{f_0}(X) = X\log\Big(\frac{a_1}{a_0}\Big) -a_1+a_0, $$ which is increasing in $X$. The components of the minimax optimal selector $\hat \eta$ are given by \eqref{eta01} with $$ t(a_0,a_1)=\frac{\log( d/s - 1) +a_1 - a_0}{\log(a_1/a_0)}. $$ Note that $t(a_0,a_1)>0$ as soon as $d/s \geq 2$ and $a_1>a_0 >0$. The minimax risk has the form $\Psi = \Pb_{a_1}(X < t(a_0,a_1)) +(d/s - 1) \Pb_{a_0}(X \ge t(a_0,a_1))$.
\subsection{Crowdsourcing with sparsity constraint} The problem of crowdsourcing with two classes is a clustering problem that can be formalized as follows, cf. \cite{GLZ}. Assume that $m$ workers provide class assignments for $d$ items. The class assignment $X_{ij}$ of the $i$th worker for the $j$th item is assumed to have a Bernoulli distribution $B(a_{i0})$ if the $j$th item belongs to class 0, and a Bernoulli distribution $B(a_{i1})$ if it belongs to class 1. Here, $a_{i0},a_{i1}\in (0,1)$ and $a_{i0} \ne a_{i1}$ for $i=1,\dots,m$. The observations $(X_{ij},i=1,\dots,m, j=1,\dots,d)$ are assumed to be jointly independent. Thus, each vector $X_j=(X_{1j},\dots,X_{mj})$ is distributed according to $\Pb_{0}$ or to $\Pb_{1}$ where each of these two measures is a product of Bernoulli measures, and $\Pb_{0}\ne\Pb_{1}$. We assume that there are $s$ vectors $X_j$ with distribution $\Pb_{1}$, and $d-s$ vectors $X_j$ with distribution $\Pb_{0}$. The aim is to recover the binary vector of class labels $\eta=(\eta_1,\dots,\eta_d)$ based on the observations ${\mathbf X}=(X_1,\dots,X_d)$. Here, $\eta_j\in \{0,1\} $ satisfies $\eta_j=k$ if the $j$th item belongs to class $k\in \{0,1\}$. Thus, we are in the framework of Theorem~\ref{thm:fixed} with a particular form of the log-likelihood ratio \begin{equation}\label{cor:crowdsourcing:eq1} \log \frac{f_1}{f_0}(X_{j}) = \sum_{i=1}^m \left(X_{ij} \log\Big(\frac{a_{i1}}{1-a_{i1}} \frac{1-a_{i0}}{a_{i0}}\Big) + \log \frac{1-a_{i1}}{1-a_{i0}}\right), \end{equation} where $f_k$ is the density of $\Pb_{k}$, $k\in \{0,1\}$. The following corollary is an immediate consequence of Theorem~\ref{thm:fixed}.
\begin{corollary}\label{cor:crowdsourcing} Let $s<d$, $a_{i0},a_{i1}\in (0,1)$ and $a_{i0} \ne a_{i1}$ for $i=1,\dots,m$. Then, the selector $\hat \eta$ in (\ref{def:newsel}) with $\log \frac{f_1}{f_0}(X_{j})$ defined in \eqref{cor:crowdsourcing:eq1} is minimax optimal on the class $\Theta_d(s, f_0,f_1)$. The corresponding minimax risk is given in \eqref{psi}. \end{corollary}
Thus, a selector which is minimax optimal in the exact non-asymptotic sense is explicitly given by formula \eqref{def:newsel}. For suitable combinations of the parameters $d,s, a_{i0},a_{i1}$, the exact value of the minimax risk $\Psi$ can be further analyzed to obtain asymptotics of interest. Gao et al. \cite{GLZ} have studied a setting of the crowdsourcing problem that is different from the one we consider here. They did not assume sparsity $s$, and instead of the class $\Theta_d(s, f_0,f_1)$ of $s$-sparse binary sequences, they considered the class of all possible binary sequences $\{0,1\}^d$. For this class, Gao et al.
\cite{GLZ} did not derive the exact minimax solution but rather analyzed specific asymptotics of the minimax risk $\inf_{\tilde \eta} \sup_{\eta \in \{0,1\}^d} d^{-1} \E|\tilde \eta - \eta|$ in large deviations perspective. \section{Asymptotic analysis. Phase transitions}\label{sec:asymp} In this section, we conduct the asymptotic analysis of the problem of variable selection. The results are derived as corollaries of the minimax bounds of Section~\ref{sec:minimax}. We will assume that $d\to\infty$ and that parameters $a=a_d$ and $s=s_d$ depend on $d$. The first two asymptotic properties we study here are {\it exact recovery} and {\it almost full recovery}. We use this terminology following \cite{GJW2012,JJ2012} but we define these properties in a different way, as asymptotic minimax properties for classes of vectors $\theta$. The papers \cite{GJW2012,JJ2012} considered a Bayesian setup with random $\theta$ and studied a linear regression model with i.i.d. Gaussian regressors rather than the sequence model~\eqref{vectormodel}. The study of {\it exact recovery} and {\it almost full recovery} will be done here only for the classes $\Theta_d(s_d,a_d)$. The corresponding results for the classes $\Theta_d^+ (s_d,a_d)$ or $\Theta_d^- (s_d,a_d)$ are completely analogous. We do not state them here for the sake of brevity. \begin{definition} Let $(\Theta_d(s_d,a_d))_{d\ge1}$ be a sequence of classes of sparse vectors. \begin{itemize}\item We say that \textbf{exact recovery is possible} for $(\Theta_d(s_d,a_d))_{d\ge1}$ if there exists a selector $\hat \eta$ such that \begin{gather}\label{er1} \lim_{d\to \infty} \sup_{\theta\in \Theta_d(s_d,a_d)}\E_{\theta}|\hat{\eta}-\eta|=0. \end{gather} In this case, we say that $\hat \eta$ achieves exact recovery. \item We say that \textbf{almost full recovery is possible} for $(\Theta_d(s_d,a_d))_{d\ge1}$ if there exists a selector $\hat \eta$ such that \begin{gather}\label{af1} \lim_{d\to \infty} \sup_{\theta\in \Theta_d(s_d,a_d)} \frac 1{s_d} \,\E_{\theta}|\hat{\eta}-\eta|=0. \end{gather} In this case, we say that $\hat \eta$ achieves almost full recovery. \end{itemize} \end{definition} It is of interest to characterize the sequences $(s_d,a_d)_{d\ge1}$, for which exact recovery and almost full recovery are possible. To describe the impossibility of exact or almost full recovery, we need the following definition. \begin{definition} Let $(\Theta_d(s_d,a_d))_{d\ge1}$ be a sequence of classes of sparse vectors. \begin{itemize}\item We say that \textbf{exact recovery is impossible} for $(\Theta_d(s_d,a_d))_{d\ge1}$ if \begin{gather}\label{er2} \liminf_{d \to \infty}\, \inf_{\ti{\eta}}\sup_{\theta\in \Theta_d(s_d,a_d)}\E_{\theta}|\ti{\eta}-\eta|>0, \end{gather} \item We say that \textbf{almost full recovery is impossible} for $(\Theta_d(s_d,a_d))_{d\ge1}$ if \begin{gather}\label{af2} \liminf_{d \to \infty} \,\inf_{\ti{\eta}}\sup_{\theta\in \Theta_d(s_d,a_d)} \frac 1{s_d} \,\E_{\theta}|\ti{\eta}-\eta|>0, \end{gather} where $ \inf_{\ti{\eta}}$ denotes the infimum over all selectors. \end{itemize} \end{definition} The following general characterization theorem is a straightforward corollary of the results of Section \ref{sec:minimax}. \begin{theorem}\label{t3a} \begin{itemize}\item[(i)] Almost full recovery is possible for $(\Theta_d(s_d,a_d))_{d\ge1}$ if and only if \begin{equation} \label{eq1:t3a} \Psi_+(d, s_d,a_d)\to 0 \quad \text{as} \ d\to \infty. 
\end{equation} In this case, the selector $\hat \eta$ defined in \eqref{selector} with threshold \eqref{threshold} achieves almost full recovery. \item[(ii)] Exact recovery is possible for $(\Theta_d(s_d,a_d))_{d\ge1}$ if and only if \begin{equation} \label{eq2:t3a} s_d\Psi_+(d, s_d,a_d)\to 0 \quad \text{as} \ d\to \infty. \end{equation} In this case, the selector $\hat \eta$ defined in \eqref{selector} with threshold \eqref{threshold} achieves exact recovery. \end{itemize} \end{theorem} Although this theorem gives a complete solution to the problem, conditions \eqref{eq1:t3a} and \eqref{eq2:t3a} are not quite explicit. Intuitively, we would like to get a ``phase transition" values $a_d^*$ such that exact (or almost full) recovery is possible for $a_d$ greater than $a_d^*$ and is impossible for $a_d$ smaller than $a_d^*$. Our aim now is to find such ``phase transition" values. We first do it in the almost full recovery framework. The following bounds for the tails of Gaussian distribution will be useful: \begin{equation} \label{AS} \sqrt{\frac 2\pi} \, \frac{e^{-y^2/2}}{y+\sqrt{y^2+4}} < \frac{1}{\sqrt{2\pi}} \int_y^\infty e^{-u^2/2} du\leq \sqrt{\frac 2\pi} \,\frac{e^{-y^2/2}}{y+\sqrt{y^2+8/\pi}}, \end{equation} for all $y\geq 0.$ These bounds are an immediate consequence of formula 7.1.13. in \cite{AbrStegun} with $x= y/\sqrt{2}$. Furthermore, we will need some non-asymptotic bounds for the expected Hamming loss that will play a key role in the subsequent asymptotic analysis. They are given in the next theorem. \begin{theorem}\label{t4} Assume that $s < d/2$. \begin{itemize} \item[(i)] If \begin{equation}\label{ass:e} a^2 \ge \s^2\Big(2 \log((d-s)/s) + W \Big) \mbox{ for some } W>0, \end{equation} then the selector $\hat \eta$ defined in \eqref{selector} with threshold \eqref{threshold} satisfies \begin{equation}\label{eq:t4:1} \sup_{\theta\in \Theta_d(s,a)} \E_\theta |\hat \eta - \eta| \leq (2 + \sqrt{2 \pi}) s\, \Phi(-\Delta), \end{equation} where $\Delta$ is defined by \begin{equation}\label{DeltaERec} \Delta = \frac{W}{2 \sqrt{2 \log((d-s)/s) +W}} \, . \end{equation} \item[(ii)] If $a>0$ is such that \begin{equation}\label{lowE} a^2 \le \s^2\Big(2 \log((d-s)/s) + W \Big) \mbox{ for some } W>0, \end{equation} then \begin{equation}\label{eq:t4:2} \inf_{\widetilde \eta} \sup_{\theta \in \Theta_d(s,a)} \E_\theta |\widetilde \eta - \eta| \geq s \, \Phi(-\Delta), \end{equation} where the infimum is taken over all selectors $\widetilde \eta$ and $\Delta >0$ is defined in \eqref{DeltaERec}. \end{itemize} \end{theorem} The proof is given in the Appendix. The next theorem is an easy consequence of Theorem~\ref{t4}. It describes a ``phase transition" for $a_d$ in the problem of almost full recovery. \begin{theorem}\label{thm:asymp} Assume that $\limsup_{d\to\infty} s_d/d<1/2$. \begin{itemize} \item[(i)] If, for all $d$ large enough, $$ a^2_d \ge \s^2\Big(2\log((d-s_d)/s_d) + A_d \sqrt{2\log((d-s_d)/s_d)}\Big) $$ for an arbitrary sequence $A_d\to \infty$, as $d\to \infty$, then the selector $\hat \eta$ defined by \eqref{selector} and \eqref{threshold} achieves almost full recovery: $$ \lim_{d \to \infty}\sup_{\theta \in \Theta_d(s_d,a_d)}\frac 1{s_d} \E_\theta |\hat \eta - \eta| = 0. 
$$ \item[(ii)] Moreover, if there exists $A>0$ such that for all $d$ large enough the reverse inequality holds: \begin{equation*}\label{eq2:thm:asympE} a^2_d \le \s^2\Big(2\log((d-s_d)/s_d) + A \sqrt{2\log((d-s_d)/s_d)}\Big) \end{equation*} then almost full recovery is impossible: \begin{equation*} \liminf_{d \to \infty} \ \inf_{\ti{\eta}}\sup_{\theta\in \Theta_d(s_d,a_d)} \frac 1{s_d} \,\E_{\theta}|\ti{\eta}-\eta|\ge \Phi\Big(-\frac{A}{2}\Big) >0. \end{equation*} Here, $\inf_{\ti{\eta}}$ is the infimum over all selectors $\widetilde \eta$. \end{itemize} \end{theorem} The proof is given in the Appendix. Under the natural sparsity assumption that \begin{equation} \label{condsp} d/s_d \to \infty \quad \mbox{ as } \quad d \to \infty, \end{equation} Theorem~\ref{thm:asymp} shows that the ``phase transition" for almost full recovery occurs at the value $a_d=a_d^*$, where \begin{equation} \label{phase} a_d^* = \s \sqrt{2\log((d-s_d)/s_d)} \Big(1+o(1)\Big). \end{equation} Furthermore, Theorem~\ref{thm:asymp} details the behavior of the $o(1)$ term here. We now state a corollary of Theorem~\ref{thm:asymp} under simplified assumptions. \begin{corollary}\label{cor1} Assume that \eqref{condsp} holds and set $$ a_d=\s\sqrt{2(1+\delta) \log(d/s_d)}, \quad \mbox{ for some } \delta>0. $$ Then the selector $\hat \eta$ defined by \eqref{selector} with threshold $t=\s\sqrt{2(1+\varepsilon(\delta)) \log(d/s_d)}$ where $\varepsilon(\delta)>0$ depends only on $\delta$, achieves almost full recovery. \end{corollary} In the particular case of $s_d = d^{1-\beta}(1+o(1))$ for some $\beta\in (0,1)$, condition \eqref{condsp} is satisfied. Then $\log(d / s_d)=\beta (1+o(1)) \log d$ and it follows from Corollary~\ref{cor1} that for $a_d=\s\sqrt{2\beta(1+\delta) \log d}$ the selector with components $\hat \eta_j = I\big(|X_j|> \s\sqrt{2\beta (1+\varepsilon) \log d}\big)$ achieves almost full recovery. This is in agreement with the findings of~\cite{GJW2012,JJ2012} where an analogous particular case of $s_d$ was considered for a different model and the Bayesian definition of almost full recovery. \smallskip We now turn to the problem of exact recovery. First, notice that if $$\limsup_{d\to\infty}s_d<\infty$$ the properties of exact recovery and almost full recovery are equivalent. Therefore, it suffices to consider exact recovery only when $s_d\to \infty$ as $d\to\infty$. Under this assumption, the ``phase transition" for $a_d$ in the problem of exact recovery is described in the next theorem. \begin{theorem}\label{thm:asympE} Assume that $s_d\to \infty$ as $d\to\infty$, and $\limsup_{d\to\infty} s_d/d<1/2$. \begin{itemize} \item[(i)] If $$ a^2_d \ge \s^2\Big(2 \log((d-s_d)/s_d) + W_d \Big) $$ for all $d$ large enough, where the sequence $W_d$ is such that \begin{equation}\label{eq1:thm:asympE} \liminf_{d\to \infty} \frac {W_d}{4 \Big(\log(s_d) + \sqrt{\log (s_d) \log (d-s_d)}\Big)} \ge 1, \end{equation} then the selector $\hat \eta$ defined by \eqref{selector} and \eqref{threshold} achieves exact recovery: \begin{equation}\label{eq2a:thm:asymp} \lim_{d \to \infty}\sup_{\theta \in \Theta_d(s_d,a_d)} \E_\theta |\hat \eta - \eta| = 0. 
\end{equation} \item[(ii)] If the complementary condition holds: $$ a^2_d \le \s^2\Big(2 \log((d-s_d)/s_d) + W_d \Big) $$ for all $d$ large enough, where the sequence $W_d$ is such that \begin{equation}\label{eq2:thm:asymp} \limsup_{d\to \infty} \frac {W_d}{4 \Big(\log(s_d) + \sqrt{\log (s_d) \log (d-s_d)}\Big)} < 1, \end{equation} then exact recovery is impossible, and moreover we have \begin{equation*} \lim_{d\to \infty} \ \inf_{\widetilde{\eta}}\sup_{\theta\in \Theta_d(s_d,a_d)} \,\E_{\theta}|\widetilde{\eta}-\eta| = \infty. \end{equation*} Here, $\inf_{\widetilde{\eta}}$ is the infimum over all selectors $\widetilde \eta$. \end{itemize} \end{theorem} The proof is given in the Appendix. Some remarks are in order here. First of all, Theorem~\ref{thm:asympE} shows that the ``phase transition" for exact recovery occurs at $W_d=4 \Big(\log(s_d) + \sqrt{\log (s_d) \log (d-s_d)}\Big)$, which corresponds to the critical value $a_d=a_d^*$ of the form \begin{equation} \label{phase1} a_d^* = \s \Big(\sqrt{2\log(d-s_d)} + \sqrt{2\log s_d}\Big). \end{equation} This value is greater than the critical value $a_d^*$ for almost full recovery, cf. \eqref{phase}, which is intuitively quite clear. The optimal threshold \eqref{threshold} corresponding to \eqref{phase1} has a simple form: $$ t_d^* = \frac{a_d^*}2+\frac{\s^2}{a_d^*}\log\left(\frac{d}{s_d} -1 \right)= \s \sqrt{2\log(d-s_d)}. $$ For example, if $s_d = d^{1-\beta}(1+o(1))$ for some $\beta \in (0,1)$, then $a_d^* \sim\s (1+ \sqrt{1-\beta}) \sqrt{2 \log d}$. In this particular case, Theorem~\ref{thm:asympE} implies that if $a_d=\s (1+ \sqrt{1-\beta})\sqrt{2(1+\delta) \log d}$ for some $\delta >0$, then exact recovery is possible and the selector with threshold $t = \s \sqrt{2 (1+\varepsilon) \log d}$ for some $\varepsilon >0$ achieves exact recovery. This is in agreement with the results of~\cite{GJW2012,JJ2012} where an analogous particular case of $s_d$ was considered for a different model and the Bayesian definition of exact recovery. For our model, even a sharper result is true; namely, a simple universal threshold $t = \s \sqrt{2 \log d}$ guarantees exact recovery adaptively in the parameters $a$ and $s$. Intuitively, this is suggested by the form of $t_d^*$. The precise statement is given in Theorem~\ref{thm:adaptE} below. Finally, we state an asymptotic corollary of Theorem~\ref{cor:recovery_pattern} showing that the selector $\hat \eta$ considered above is sharp in the asymptotically minimax sense with respect to the risk defined as the probability of wrong recovery. \begin{theorem}\label{th:sharp:asymp} Assume that exact recovery is possible for the classes $(\Theta_d(s_d,a_d))_{d\ge1}$ and $(\Theta_d^+(s_d,a_d))_{d\ge1}$, that is, condition \eqref{eq2:t3a} holds. 
Then, for the selectors $\hat \eta$ and $\hat \eta^+$ defined by \eqref{selector}, \eqref{selector+} and \eqref{threshold}, and for the selector $\overline \eta$ defined by \eqref{selector2} and \eqref{threshold2}, we have \begin{equation*} \lim_{d\to\infty} \sup_{\theta \in \Theta_d^+(s_d,a_d)} \frac{\Pb_\theta(S_{\hat \eta^+} \ne S(\theta))}{s_d\Psi_+(d,s_d,a_d)}=\lim_{d\to\infty} \inf_{\widetilde \eta\in \cal T} \sup_{\theta \in \Theta_d^+(s_d,a_d)} \frac{\Pb_\theta(S_{\widetilde \eta} \ne S(\theta))}{s_d\Psi_+(d,s_d,a_d)} = 1, \end{equation*} \begin{equation*} \lim_{d\to\infty} \sup_{\theta \in \Theta_d(s_d,a_d)} \frac{\Pb_\theta(S_{\overline \eta } \ne S(\theta))}{s_d \overline \Psi(d,s_d,a_d)}=\lim_{d\to\infty} \inf_{\widetilde \eta\in \cal T} \sup_{\theta \in \Theta_d(s_d,a_d)} \frac{\Pb_\theta(S_{\widetilde \eta} \ne S(\theta))}{s_d \overline \Psi(d,s_d,a_d)} = 1, \end{equation*} and \begin{eqnarray*}\label{recovery_pattern:eq3} \limsup_{d\to\infty} \sup_{\theta \in \Theta_d(s_d,a_d)} \frac{\Pb_\theta(S_{\hat \eta} \ne S(\theta))}{s_d\Psi_+(d,s_d,a_d)}\le 2. \end{eqnarray*} \end{theorem} Note that the threshold \eqref{threshold} depends on the parameters $s$ and $a$, so that the selectors considered in all the results above are not adaptive. In the next section, we propose adaptive selectors that achieve almost full recovery and exact recovery without the knowledge of $s$ and~$a$. \begin{remark}\label{exhsearch} Another procedure of variable selection is the exhaustive search estimator of the support $S(\theta)$ defined as $$ \tilde S = \underset{C \subseteq \{ 1,\dots,d \}: |C|=s}{\rm argmax} \sum_{j \in C} X_j. $$ This estimator was studied by Butucea {\it et al.} \cite{BIS}. The selection procedure can be equivalently stated as choosing the indices $j$ corresponding to $s$ largest order statistics of the sample $(X_1,\dots,X_d)$. In \cite[Theorem 2.5]{BIS}, it was shown that, on the class $\Theta_d^+(s_d,a_d)$, the probability of wrong recovery $\Pb_\theta (\tilde S \not = S(\theta))$ tends to 0 as $d\to\infty$ under a stronger condition on $(s_d,a_d)$ than \eqref{eq2:t3a}. The rate of this convergence was not analyzed there. If we denote by $\eta_{\tilde S} $ the selector with components $I(j \in \tilde S)$ for $j$ from 1 to $d$, it can be proved that $\E|\eta_{\tilde S} - \eta | \leq 2 \E|\hat \eta^+ - \eta|$ and thus the risk of $\eta_{\tilde S}$ is within the factor 2 of the minimax risk over the class $\Theta^+_d(s, a)$. Thus, it does not enjoy the non-asymptotic sharp optimality that we have established for the selector defined by \eqref{selector+} and \eqref{threshold} over the class $\Theta^+_d(s, a)$ and for the selector defined by \eqref{selector2} and \eqref{threshold2} over the class $\Theta_d(s, a)$. \end{remark} \section{Adaptive selectors}\label{sec:adapt} In this section, we consider the asymptotic setup as in Section~\ref{sec:asymp} and construct the selectors that provide almost full and exact recovery adaptively, that is, without the knowledge of $a$ and $s$. As discussed in Section~\ref{sec:asymp}, the issue of adaptation for exact recovery is almost trivial. Indeed, the expressions for minimal value $a_d^*$, for which exact recovery is possible (cf. \eqref{phase1}), and for the corresponding optimal threshold $t_d^*$ suggest that taking a selector with the universal threshold $t = \s \sqrt{2 \log d}$ is enough to achieve exact recovery simultaneously for all values $(a_d,s_d)$, for which the exact recovery is possible. 
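As a quick numerical illustration (a sketch of ours, not part of the formal development; the values of $d$, $s$ and the factor $1.2$ are arbitrary), the following Python code applies the selector with the universal threshold $t = \s \sqrt{2 \log d}$ to one vector $\theta$ lying above the critical value \eqref{phase1} and counts the resulting Hamming errors.
\begin{verbatim}
import numpy as np

def universal_selector(X, sigma=1.0):
    # hat eta_j = I(|X_j| >= sigma * sqrt(2 log d)); needs neither a nor s
    d = X.shape[0]
    return (np.abs(X) >= sigma * np.sqrt(2 * np.log(d))).astype(int)

def a_star_exact(d, s, sigma=1.0):
    # Critical value (phase1) for exact recovery
    return sigma * (np.sqrt(2 * np.log(d - s)) + np.sqrt(2 * np.log(s)))

rng = np.random.default_rng(1)
d, s, sigma = 10 ** 5, 100, 1.0
a = 1.2 * a_star_exact(d, s, sigma)        # above the phase transition
theta = np.zeros(d); theta[:s] = a
X = theta + sigma * rng.standard_normal(d)
eta = (theta != 0).astype(int)
errors = int(np.abs(universal_selector(X, sigma) - eta).sum())
print("Hamming errors:", errors)           # typically 0, i.e., exact recovery
\end{verbatim}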
This point is formalized in the next theorem. \begin{theorem}\label{thm:adaptE} Assume that $s_d\to \infty$ as $d\to\infty$ and that $\limsup_{d\to\infty} s_d/d<1/2$. Let the sequence $(a_d)_{d\ge1}$ be above the phase transition level for exact recovery, that is, $a_d\ge a_d^*$ for all $d$, where $a_d^*$ is defined in~\eqref{phase1}. Then the selector $\hat \eta$ defined by \eqref{selector} with threshold $t = \s \sqrt{2 \log d}$ achieves exact recovery. \end{theorem} The proof of this theorem is given in the Appendix. We now turn to the problem of adaption for almost full recovery. Ideally, we would like to construct a selector that achieves almost full recovery for all sequences $(s_d, a_d)_{d\ge1}$ for which almost full recovery is possible. We have seen in Section \ref{sec:asymp} that this includes a much broader range of values than in case of exact recovery. Thus, using the adaptive selector of Theorem~\ref{thm:adaptE} for almost full recovery does not give a satisfactory result, and we have to take a different approach. Following Section \ref{sec:asymp}, we will use the notation $$ a_0(s,A)\triangleq\s\Big(2\log((d-s)/s) + A\sqrt{\log((d-s)/s)}\Big)^{1/2}\,. $$ As shown in Section~\ref{sec:asymp}, it makes sense to consider the classes $\Theta_d(s,a)$ only when $a\ge a_0(s,A)$ with some $A>0$, since for other values of $a$ almost full recovery is impossible. Only such classes will be studied below. In the asymptotic setup of Section~\ref{sec:asymp} we have used the assumption that $d/s_d\to \infty$ (the sparsity assumption), which is now transformed into the condition \begin{equation} \label{smax} s_d\in {\mathcal S}_d\triangleq\{1,2,\dots,s^*_d\} \mbox{ where $s^*_d$ is an integer such that } \frac d{s^*_d}\to \infty \mbox{ as } d\to \infty. \end{equation} Assuming $s_d$ to be known, we have shown in Section~\ref{sec:asymp} that almost full recovery is achievable for all $a\ge a_0(s_d,A_d)$, where $A_d$ tends to infinity as $d\to \infty$. The rate of growth of $A_d$ was allowed to be arbitrarily slow there, cf. Theorem~\ref{thm:asymp}. However, for adaptive estimation considered in this section we will need the following mild assumption on the growth of $A_d$: \begin{equation} \label{a_d} A_d\ge c_0\left(\log\log\left(\frac{d}{s^*_d} -1\right)\right)^{1/2}, \end{equation} where $c_0>0$ is an absolute constant. In what follows, we will assume that $s^*_d\le d/4$, so that the right-hand side of \eqref{a_d} is well-defined. Consider a grid of points $\{g_1, \dots , g_M\}$ on ${\mathcal S}_d$, where $g_j=2^{j-1}$ and $M$ is the maximal integer such that $g_M\le s^*_d$. For each $g_m$, $m=1,\dots,M$, we define a selector $$ \hat \eta (g_m) = (\hat \eta_j (g_m))_{j=1,\dots,d} \triangleq \left( I (|X_j| \geq w(g_m))\right)_{j=1,\dots,d}, $$ where $$ w(s)=\s\sqrt{2 \log\Big(\frac ds - 1\Big)}. $$ Note that $w(s)$ is monotonically decreasing. We now choose the ``best'' index $m$, for which $g_m$ is near the true (but unknown) value of $s$, by the following data-driven procedure: \begin{eqnarray}\label{def:hat:m} \widehat{m} &=& \min \Big\{ m\in\{2,\dots,M\}: \nonumber \\ && \ \sum_{j=1}^d I \big(w(g_k)\le |X_j| < w(g_{k-1})\big) \leq \tau g_k \mbox{ for all } k\ge m \Big\}, \label{mhat} \end{eqnarray} where $$ \tau = \big( \log\left({d}/{s^*_d} -1\right)\big)^{-\frac1{7}}, $$ and we set $\widehat{m} =M$ if the set in \eqref{def:hat:m} is empty. 
Finally, we define an adaptive selector as $$\ee= \hat \eta (g_{\widehat{m}}).$$ This adaptive procedure is quite natural in the sense that it can be related to the Lepski method or to wavelet thresholding, which are widely used for adaptive estimation. Indeed, as in wavelet methods, we consider dyadic blocks determined by the grid points $g_j$. The value $\sum_{j=1}^d I \big(w(g_k)\le |X_j| < w(g_{k-1})\big)$ is the number of observations within the $k$th block. If this number is too small (below a suitably chosen threshold), we decide that the block corresponds to pure noise and it is rejected; in other words, this $k$ is not considered a good candidate for $\widehat{m}$. This argument is analogous to wavelet thresholding. We start from the largest $k$ (equivalently, the smallest $w(g_k)$) and perform this procedure until we find the first block that is not rejected. The corresponding value $k$ determines our choice of $\widehat{m}$ as defined in \eqref{def:hat:m}.
\begin{theorem}\label{th:adapt} Let $ {c_0\ge 16}$. Then the selector $\ee$ adaptively achieves almost full recovery in the following sense: \begin{equation} \label{eq:th:adapt} \lim_{d\to \infty} \sup_{\theta\in\Theta_d(s_d,a_d)} \frac 1{s_d} \E_\theta |\ee - \eta| = 0 \end{equation} for all sequences $(s_d,a_d)_{d\ge1}$ such that \eqref{smax} holds and $a_d\ge a_0(s_d,A_d)$, where $A_d$ satisfies \eqref{a_d}. \end{theorem}
\begin{remark} Another family of variable selection methods originates from the theory of multiple testing. These are, for example, the Benjamini-Hochberg, Benjamini-Yekutieli or SLOPE procedures. We refer to \cite{Bogdan} for a recent overview and comparison of these techniques. They have the same structure as the exhaustive search procedure in that they keep only the largest order statistics. The difference is that the value $s$ (which is usually not known in practice) is replaced by an estimator $\hat s$ obtained from comparing the $i$th order statistic of $(|X_1|,\dots,|X_d|)$ with a suitable normal quantile depending on $i$. The analysis of these methods in the literature is focused on the evaluation of the false discovery rate (FDR). Asymptotic power calculations for the Benjamini-Hochberg procedure are given in \cite{Arias-Castro}. To the best of our knowledge, the behavior of the risk $\Pb_\theta (\tilde S \not = S(\theta))$ and of the Hamming risk, even from a simple consistency perspective, has not been studied. \end{remark}
\begin{remark} In this paper, the noise level $\sigma$ was assumed to be known. The extension to the case of unknown $\sigma$ can be treated as described, for example, in \cite{collier_etal2016}. Namely, we replace $\sigma$ in the definition of the threshold $w(s)$ by a statistic $\hat \sigma$ defined in \cite[Section 3]{collier_etal2016}. As shown in \cite[Proposition 1]{collier_etal2016}, this statistic is such that $\sigma \le \hat \sigma \le C'\sigma$ with high probability provided that $s\le d/2$ and $d\ge d_0$ for some absolute constants $C'>1,\, d_0 \geq 1$. Then, replacing $\sigma$ by $\hat \sigma$ in the expression for $w(s)$, one can show that Theorem~\ref{th:adapt} remains valid with this choice of $w(s)$ independent of $\sigma$, up to a change in numerical constants in the definition of the adaptive procedure. With this modification, we obtain a procedure which is completely data-driven and enjoys the property of almost full recovery under the mild conditions given in Theorem~\ref{th:adapt}. The same modification can be done in Theorem~\ref{thm:adaptE}.
Namely, under the assumptions of Theorem~\ref{thm:adaptE} and $a_d\ge c' a_d^*$, where $c'\ge 1$ is a numerical constant, the selector $\hat \eta$ defined by \eqref{selector} with threshold $t = \hat \sigma \sqrt{2 \log d}$ achieves exact recovery when $\sigma$ is unknown. \end{remark}
\begin{remark} In this section, the problem of adaptive variable selection was considered only for the classes $\Theta_d(s_d,a_d)$. The corresponding results for classes $\Theta_d^+ (s_d,a_d)$ and $\Theta_d^- (s_d,a_d)$ are completely analogous. We do not state them here for the sake of brevity. \end{remark}
\section{Appendix}
\begin{proof}[Proof of Theorem~\ref{nonasymptotic}] We have, for any $t>0$, \begin{eqnarray*} | \hat \eta - \eta| & = & \sum_{j: \eta_j=0} \hat \eta_j +\sum_{j: \eta_j=1} (1- \hat \eta_j ) \\ &=& \sum_{j: \eta_j=0} I(|\s\xi_j|\geq t ) +\sum_{j: \eta_j=1} I (| \s\xi_j + \theta_j | < t ). \end{eqnarray*} Now, for any $\theta \in \Theta_d(s,a)$ and any $t>0$, \begin{eqnarray*} && \E \left(I \left(| \s\xi_j + \theta_j | < t \right) \right) \leq \Pb (|\theta_j| -|\s\xi_j| < t) \leq \Pb (|\xi| > (a - t)/\s )\\ &=& \Pb (|\xi| > (a - t)_+ /\s), \end{eqnarray*} where $\xi$ denotes a standard Gaussian random variable. Thus, for any $\theta \in \Theta_d(s,a)$, \begin{equation}\label{term1} \frac 1s \E_\theta |\hat \eta - \eta| \leq \left(\frac{d}s -1\right) \Pb(|\xi|\geq \frac t \s ) + \Pb \left( |\xi| >\frac{(a- t)_+}\s \right) = 2\Psi(d,s,a). \end{equation} Note that the inequality in \eqref{term1} is valid for any $t>0$; only the last equality uses the particular $t$ defined in (\ref{threshold}). \end{proof}
\begin{proof}[Proof of Theorem~\ref{t2}] Arguing as in the proof of Theorem~\ref{nonasymptotic}, we obtain \begin{eqnarray*} | \hat \eta^+ - \eta| &=& \sum_{j: \eta_j=0} I(\s\xi_j\geq t ) +\sum_{j: \eta_j=1} I (\s\xi_j + \theta_j < t ), \end{eqnarray*} and $\E \left(I \left( \s\xi_j + \theta_j < t \right) \right) \le \Pb (\xi < (t -a)/\s ).$ Thus, for any $\theta \in \Theta_d^+(s,a)$, \begin{equation*}\label{term1_1} \frac 1s \E_\theta |\hat \eta^+ - \eta| \leq \left(\frac{d}s -1\right) \Pb(\xi\geq t/\s) + \Pb(\xi < (t -a)/\s) = \Psi_+(d,s,a). \end{equation*} \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:lb}] An estimator $\bar \eta=(\bar \eta_1,\dots,\bar \eta_d)$ of $\eta$ (not necessarily a selector) will be called \textit{separable} if $\bar \eta_j$ depends only on $X_j$ for all $j=1,\dots,d$. First note that instead of considering all selectors, it suffices to prove the lower bound for the class of separable estimators $\bar \eta$ with components $\bar \eta_j\in [0,1]$. Indeed, for any selector $\widetilde \eta$, using Jensen's inequality, we obtain $$ \E_\theta |\widetilde \eta - \eta|= \sum_{j=1}^d \E_{\theta} |\widetilde \eta _j - \eta_j| = \sum_{j=1}^d \E_{j,\theta_j} \E_{\{\theta_i, i\ne j\}} |\widetilde \eta _j - \eta_j| \ge \sum_{j=1}^d \E_{j,\theta_j} |\bar \eta _j - \eta_j| $$ where $\bar \eta _j = \E_{\{\theta_i, i\ne j\}}(\widetilde \eta _j)$, and the symbols $ \E_{j,\theta_j}$ and $ \E_{\{\theta_i, i\ne j\}}$ stand for the expectations over the distributions of $X_j$ and $(X_1,\dots,X_{j-1},X_{j+1},\dots,X_d)$, respectively. Clearly, $\bar \eta _j$ depends only on $X_j$ and takes on values in $[0,1]$.
Thus, \begin{equation}\label{eq:lb1} \inf_{\widetilde \eta} \sup_{\theta \in \Theta_d^+(s,a)} \frac 1s \E_\theta |\widetilde \eta - \eta| \ge \inf_{\bar \eta\in {\cal T}_{[0,1]}} \ \sup_{\theta \in \Theta_d^+(s,a)} \frac 1s \sum_{j=1}^d \E_{j,\theta_j} |\bar \eta _j - \eta_j| \end{equation} where ${\cal T}_{[0,1]}$ is the class of all separable estimators $\bar \eta$ with components $\bar \eta_j\in [0,1]$. Let $\Theta'$ be the set of all $\theta$ in $\Theta_d^+(s,a)$ such that $s$ components $\theta_j$ of $\theta$ are equal to $a$ and the remaining $d-s$ components are 0. Denote by $|\Theta'|={d\choose s}$ the cardinality of $\Theta'$. Then, for any $\bar \eta\in {\cal T}_{[0,1]}$ we have \begin{eqnarray}\label{eq:lb2} &&\sup_{\theta \in \Theta_d^+(s,a)}\frac 1s \sum_{j=1}^d \E_{j,\theta_j} |\bar \eta _j - \eta_j| \geq \frac{1}{s|\Theta'|} \sum_{\theta\in \Theta'} \sum_{j=1}^d \E_{j,\theta_j} |\bar \eta _j - \eta_j| \\ &= & \frac{1}{s|\Theta'|} \sum_{j=1}^d \Big(\sum_{\theta\in \Theta':\theta_j=0} {\E}_{j,0} (\bar \eta _j) + \sum_{\theta\in \Theta':\theta_j=a} {\E}_{j,a} (1-\bar \eta _j ) \Big)\nonumber \\ &= & \frac{1}{s} \sum_{j=1}^d \Big(\Big(1-\frac sd \Big) {\E}_{j,0} (\bar \eta _j) + \frac{s}{d} \, {\E}_{j,a} (1-\bar \eta _j ) \Big)\nonumber \\ & \geq & \frac{d}{s} \inf_{T \in [0,1]} \left( \Big(1-\frac sd \Big) {\mathbb E}_0 (T)+ \frac{s}{d} \, {\mathbb E}_a (1-T)\right), \nonumber \end{eqnarray} where we have used that $|\{\theta\in \Theta':\theta_j=a\}|={d-1\choose s-1}=s|\Theta'|/d$. In the last line of display \eqref{eq:lb2}, ${\mathbb E}_u$ is understood as the expectation with respect to the distribution of $X=u+\sigma\xi$, where $\xi\sim {\cal N}(0,1)$ and $\inf_{T \in [0,1]}$ denotes the infimum over all $[0,1]$-valued statistics $T(X)$. Set $$ L^*=\inf_{T \in [0,1]} \left( \Big(1-\frac sd \Big) {\mathbb E}_0 (T)+ \frac{s}{d} \, {\mathbb E}_a (1-T)\right) $$ By the Bayesian version of the Neyman-Pearson lemma, the infimum here is attained for $T=T^*$ given by $$ T^*( X) = I\left( \frac {(s/d) \varphi_{\s} (X-a)}{(1-s/d) \varphi_{\s}(X)} >1 \right) $$ where $ \varphi_{\s}(\cdot)$ is the density of an ${\cal N}(0,\s^2)$ distribution. Thus, \begin{eqnarray*} L^* &=& \Big(1-\frac sd\Big) {\bf P}\left( \frac {\varphi_{\s} ({\s}\xi-a)}{ \varphi_{\s}({\s}\xi )} > \frac ds - 1 \right) + \frac sd\, {\bf P}\left( \frac { \varphi_{\s} ({\s}\xi)}{ \varphi_{\s}({\s}\xi +a)} \leq \frac ds - 1 \right). \end{eqnarray*} Combining this with \eqref{eq:lb1} and \eqref{eq:lb2}, we get \begin{eqnarray*} &&\inf_{\widetilde \eta} \sup_{\theta \in \Theta_d^+(s,a)}\frac 1s \E_\theta |\widetilde \eta - \eta| \nonumber \\ && \geq \left( \frac ds - 1 \right) \Pb \left( \exp\Big( \frac{a\xi}{\s} - \frac{a^2}{2\s^2} \Big) > \frac ds - 1 \right) + \Pb \left( \exp\Big(\frac{a\xi}{\s} + \frac{ a^2}{2\s^2} \Big) \leq \frac ds - 1\right)\nonumber \\ && = \left( \frac ds - 1 \right) \Pb \left( \xi > \frac{a}{2\s} + \frac \s a \log \Big( \frac ds - 1 \Big) \right) + \Pb \left( \xi \leq -\frac a{2\s} + \frac \s a \log\Big( \frac ds - 1 \Big) \right)\nonumber \\ && = \Psi_+(d,s,a) \end{eqnarray*} \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:bayes}] Using \eqref{eq:minimax:central} it suffices to show that the right hand side of \eqref{eq:prop:bayes} is bounded from above and from below by $s\Psi_+(d,s,a)$. The upper bound is obvious in view of Theorem~\ref{t2}. To prove lower bound, we follow the same lines as in the proof of Theorem~\ref{thm:lb}. 
The only difference is that, instead of \eqref{eq:lb1}, we now use the inequality \begin{equation*}\label{eq:lb1:bis} \inf_{\widetilde \eta} \frac{1}{s|\Theta'|} \sum_{\theta\in \Theta'} \E_\theta |\widetilde \eta - \eta| \ge \inf_{\bar \eta\in {\cal T}_{[0,1]}} \ \frac{1}{s|\Theta'|} \sum_{\theta\in \Theta'} \sum_{j=1}^d \E_{j,\theta_j} |\bar \eta _j - \eta_j|, \end{equation*} and we do not need the first inequality in \eqref{eq:lb2}. \end{proof} \begin{proof}[Proof of Theorem~\ref{t1}] For any $\theta \in \Theta_d(s,a)$, we have \begin{eqnarray}\label{t1:1} \E_\theta| \overline \eta - \eta| & = & \sum_{j: \theta_j=0} \Pb_{j,0}(\overline \eta_j=1) +\sum_{j: \theta_j\ge a} \Pb_{j,\theta_j}(\overline \eta_j=0) + \sum_{j: \theta_j\le -a} \Pb_{j,\theta_j}(\overline \eta_j=0) \\ &=& (d-s)\Pb \left( e^{- \frac {a^2}{2\s^2} }\cosh \Big(\frac{a\xi }{\s}\Big) > \frac ds - 1 \right) \nonumber \\ && + \sum_{j: \theta_j\ge a} \Pb_{j,\theta_j}(\overline \eta_j=0) + \sum_{j: \theta_j\le -a} \Pb_{j,\theta_j}(\overline \eta_j=0),\nonumber \end{eqnarray} where $\Pb_{j,\theta_j}$ denotes the distribution of $X_j$, and $\xi$ is a standard Gaussian random variable. We now bound from above the probabilities $\Pb_{j,\theta_j}(\overline \eta_j=0)$. Introduce the notation $$ g(x) = \cosh \left(\frac{(x+ \s\xi)a }{\s^2}\right), \quad \forall x\in \mathbb R, $$ and $$ u= \exp\left(\frac {a^2}{2 \s^2} + \log\left( \frac ds - 1 \right)\right). $$ We have \begin{eqnarray*} \Pb_{j,\theta_j}(\overline \eta_j=0) = \Pb (g(\theta_j)< u) = \Pb\left(-b-\theta_j < \s\xi < b-\theta_j\right). \end{eqnarray*} where $b= (\s^2/a){\rm arccosh}(u)>0$. It is easy to check that the function $x\mapsto \Pb\left(-b-x < \s\xi < b-x\right)$ is monotonically decreasing on $[0,\infty)$. Therefore, the maximum of $\Pb\left(-b-\theta_j < \s\xi < b-\theta_j\right)$ over $\theta_j\ge a$ is attained at $\theta_j=a$. Thus, for any $\theta_j\ge a$ we have \begin{eqnarray}\label{t1:2} \Pb_{j,\theta_j}(\overline \eta_j=0) \le \Pb (g(a)< u) = \Pb\left(e^{- \frac {a^2}{2\s^2} }\cosh \left(\frac{(a + \s\xi)a }{\s^2}\right)< \frac ds - 1 \right). \end{eqnarray} Analogously, for any $\theta_j\le -a$, \begin{eqnarray}\label{t1:3} \Pb_{j,\theta_j}(\overline \eta_j=0) & \le &\Pb\left(e^{- \frac {a^2}{2\s^2} }\cosh \left(\frac{(-a + \s\xi)a }{\s^2}\right)< \frac ds - 1 \right)\\ & = &\Pb\left(e^{- \frac {a^2}{2\s^2} }\cosh \left(\frac{(a + \s\xi)a }{\s^2}\right)< \frac ds - 1 \right) \nonumber \end{eqnarray} where the last equality follows from the fact that $\xi$ has the same distribution as $-\xi$ and $\cosh$ is an even function. Combining \eqref{t1:1} -- \eqref{t1:3} proves the theorem. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:lb2}] We follow the lines of the proof of Theorem~\ref{thm:lb} with suitable modifications. The same argument shows that instead of considering all selectors, it suffices to prove the lower bound for the class of separable estimators $\bar \eta$ with components $\bar \eta_j\in [0,1]$. Thus, \begin{equation}\label{eq:lb1_2} \inf_{\widetilde \eta} \sup_{\theta \in \Theta_d(s,a)} \E_\theta |\widetilde \eta - \eta| \ge \inf_{\bar \eta\in {\cal T}_{[0,1]}} \ \sup_{\theta \in \Theta_d(s,a)} \sum_{j=1}^d \E_{j,\theta_j} |\bar \eta _j - \eta_j| \end{equation} where ${\cal T}_{[0,1]}$ is the class of all separable estimators $\bar \eta$ with components $\bar \eta_j\in [0,1]$ and $ \E_{j,\theta_j}$ denotes the expectation with respect to $ \Pb_{j,\theta_j}$. 
Let $\Theta^+$ and $\Theta^-$ be the sets of all $\theta$ in $\Theta_d(s,a)$ such that $d-s$ components $\theta_j$ of $\theta$ are equal to $0$ and the remaining $s$ components are equal to $a$ (for $\theta\in\Theta^+$) or to $-a$ (for $\theta\in\Theta^-$). For any $\bar \eta\in {\cal T}_{[0,1]}$ we have $$ \sup_{\theta \in \Theta_d(s,a)} \sum_{j=1}^d \E_{j,\theta_j} |\bar \eta _j - \eta_j|\ge \frac12 \Big\{\sup_{\theta \in \Theta^+} \sum_{j=1}^d \E_{j,\theta_j} |\bar \eta _j - \eta_j|+ \sup_{\theta \in \Theta^-} \sum_{j=1}^d \E_{j,\theta_j} |\bar \eta _j - \eta_j|\Big\}. $$ As shown in the proof of Theorem~\ref{thm:lb}, for any $\bar \eta\in {\cal T}_{[0,1]}$, \begin{eqnarray} \sup_{\theta \in \Theta^+} \sum_{j=1}^d \E_{j,\theta_j} |\bar \eta _j - \eta_j| &\ge & \sum_{j=1}^d \Big(\Big(1-\frac sd \Big) {\E}_{j,0} (\bar \eta _j) + \frac{s}{d} \, {\E}_{j,a} (1-\bar \eta _j ) \Big).\nonumber \end{eqnarray} Analogously, \begin{eqnarray} \sup_{\theta \in \Theta^-} \sum_{j=1}^d \E_{j,\theta_j} |\bar \eta _j - \eta_j| &\ge & \sum_{j=1}^d \Big(\Big(1-\frac sd \Big) {\E}_{j,0} (\bar \eta _j) + \frac{s}{d} \, {\E}_{j,-a} (1-\bar \eta _j ) \Big).\nonumber \end{eqnarray} From the last three displays we obtain \begin{eqnarray*} \sup_{\theta \in\Theta_d(s,a)} \sum_{j=1}^d \E_{j,\theta_j} |\bar \eta _j - \eta_j| &\ge & \sum_{j=1}^d \Big(\Big(1-\frac sd \Big) {\E}_{j,0} (\bar \eta _j) + \frac{s}{d} \, {\bar\E}_{j} (1-\bar \eta _j ) \Big), \end{eqnarray*} where ${\bar \E}_j$ is the expectation with respect to the measure ${\bar \Pb}_j = (\Pb_{j,a}+\Pb_{j,-a})/2$. It follows that \begin{eqnarray}\label{eq:lb2_2} \sup_{\theta \in\Theta_d(s,a)} \sum_{j=1}^d \E_{j,\theta_j} |\bar \eta _j - \eta_j| & \geq & \inf_{T \in [0,1]} \left( (d-s) {\mathbb E}_0 (T)+ s \, {\bar {\mathbb E} }(1-T)\right). \end{eqnarray} Here, ${\mathbb E}_0$ denotes the expectation with respect to the distribution of $X$ with density $\varphi_{\s}(\cdot)$, ${\bar {\mathbb E}}$ is the expectation with respect to the distribution of $X$ with mixture density $\bar \varphi_{\s}(\cdot)=(\varphi_{\s}(\cdot+a)+\varphi_{\s}(\cdot-a))/2$, and $\inf_{T \in [0,1]}$ denotes the infimum over all $[0,1]$-valued statistics $T(X)$. Recall that $\varphi_{\s}(\cdot)$ denotes the density of the ${\cal N}(0,\s^2)$ distribution. Set $$ \tilde L=\inf_{T \in [0,1]} \left( \Big(1-\frac sd \Big) {\mathbb E}_0 (T)+ \frac{s}{d} \, {\bar {\mathbb E}} (1-T)\right). $$ By the Bayesian version of the Neyman-Pearson lemma, the infimum here is attained for $T=\tilde T$ given by $$ \tilde T( X) = I\left( \frac {(s/d) \bar\varphi_{\s} (X)}{(1-s/d) \varphi_{\s}(X)} >1 \right). $$ Thus, \begin{eqnarray}\label{eq:lb21} \tilde L &=& \Big(1-\frac sd\Big) {\bf P}\left( \frac {\bar\varphi_{\s} ({\s}\xi)}{ \varphi_{\s}({\s}\xi )} > \frac ds - 1 \right) + \frac{s}{2d}\, {\mathbb P}_a \left( \frac { \bar\varphi_{\s} (X)}{ \varphi_{\s}(X)} \leq \frac ds - 1 \right)\\ && + \frac{s}{2d}\, {\mathbb P}_{-a} \left( \frac {\bar \varphi_{\s} (X)}{ \varphi_{\s}(X)} \leq \frac ds - 1 \right)\nonumber \\ \nonumber &=&\Big(1-\frac sd\Big)\Pb \left( e^{- \frac {a^2}{2\s^2} }\cosh \Big(\frac{a\xi }{\s}\Big) > \frac ds - 1 \right)\\ && + \frac{s}{2d}\, {\mathbb P}_a \left( \frac { \bar \varphi_{\s} (X)}{ \varphi_{\s}(X)} \leq \frac ds - 1 \right) + \frac{s}{2d}\, {\mathbb P}_{-a} \left( \frac { \bar \varphi_{\s} (X)}{ \varphi_{\s}(X)} \leq \frac ds - 1 \right)\nonumber \end{eqnarray} where ${\mathbb P}_u$ denotes the probability distribution of $X$ with density $\varphi_{\s}(\cdot-u)$.
Note that, for all $x\in \mathbb R$, $$ \frac { \bar \varphi_{\s} (x)}{ \varphi_{\s}(x)} = e^{- \frac {a^2}{2\s^2} }\cosh \Big(\frac{ax }{\s^2}\Big). $$ Using this formula with $x=\s \xi +a$ and $x=\s\xi -a$, and the facts that $\cosh(\cdot)$ is an even function and $\xi$ coincides with $-\xi$ in distribution, we obtain \begin{eqnarray*} {\mathbb P}_a \left( \frac { \bar \varphi_{\s} (X)}{ \varphi_{\s}(X)} \leq \frac ds - 1 \right) &=& {\mathbb P}_{-a} \left( \frac { \bar \varphi_{\s} (X)}{ \varphi_{\s}(X)} \leq \frac ds - 1 \right)\\ &=& \Pb \left( e^{- \frac {a^2}{2\s^2} }\cosh \Big(\frac{a\xi }{\s}+\frac{a^2}{\s^2}\Big) \le \frac ds - 1 \right). \end{eqnarray*} Thus, $\tilde L= (s/d)\overline \Psi(d,s,a)$. Combining this equality with \eqref{eq:lb1_2} and \eqref{eq:lb2_2} proves the theorem. \end{proof} \begin{proof}[Proof of Theorem~\ref{cor:recovery_pattern}] The upper bounds \eqref{cor:recovery_pattern:eq1}, \eqref{cor:recovery_pattern:eqbar2} and \eqref{cor:recovery_pattern:eq2} follow immediately from \eqref{risks} and Theorems \ref{t2},~\ref{t1} and~\ref{nonasymptotic}, respectively. We now prove the lower bound \eqref{cor:recovery_pattern:eq3}. To this end, first note that for any $\theta\in \Theta_d^+(s,a)$ and any $\widetilde \eta\in \cal T$ we have \begin{eqnarray*}\label{cor:recovery_pattern:eq4} \Pb_\theta(S_{\widetilde \eta} \ne S(\theta)) = \Pb_\theta(\cup_{j=1}^d \{\widetilde \eta_j \ne \eta_j\}) = 1-\prod_{j=1}^d p_j(\theta), \end{eqnarray*} where $p_j(\theta)\triangleq\Pb_\theta(\widetilde \eta_j = \eta_j)$. Hence, for any $\widetilde\eta\in \cal T$, \begin{eqnarray}\label{cor:recovery_pattern:eq5} \sup_{\theta \in \Theta_d^+(s,a)} \Pb_\theta(S_{\widetilde \eta} \ne S(\theta)) \ge \max_{\theta \in \Theta'} \Pb_\theta(S_{\widetilde \eta} \ne S(\theta)) = 1-p_* , \end{eqnarray} where $\Theta'$ is the subset of $\Theta_d^+(s,a)$ defined in the proof of Theorem~\ref{thm:lb}, and $p_*= \min_{\theta \in \Theta'} \prod_{j=1}^d p_j(\theta)$. Next, for any selector $\widetilde\eta$ we have $\Pb_\theta(S_{\widetilde \eta} \ne S(\theta))\ge \Pb_\theta(|\widetilde \eta - \eta|=1)$. Therefore, \begin{eqnarray}\label{cor:recovery_pattern:eq5a} \sup_{\theta \in \Theta_d^+(s,a)} \Pb_\theta(S_{\widetilde \eta} \ne S(\theta)) &\ge& \frac{1}{|\Theta'|} \sum_{\theta\in \Theta'} \Pb_{\theta} (|\widetilde \eta - \eta| =1). \end{eqnarray} Here, $\Pb_{\theta} (|\widetilde \eta - \eta| =1)=\Pb_{\theta} (\cup_{j=1}^d B_j)$ with the random events $B_j=\{|\widetilde \eta_j - \eta_j| =1, \, \text{and} \, \widetilde \eta_i = \eta_i, \,\forall \, i\ne j\}$. 
Since the events $B_j$ are disjoint, for any $\widetilde\eta\in \cal T$ we get \begin{eqnarray} \nonumber && \frac{1}{|\Theta'|} \sum_{\theta\in \Theta'} \Pb_{\theta} (|\widetilde \eta - \eta| =1) = \frac{1}{|\Theta'|} \sum_{\theta\in \Theta'} \sum_{j=1}^d \Pb_{\theta} (B_j) \\ \nonumber &=& \frac{1}{|\Theta'|} \sum_{j=1}^d \Big(\sum_{\theta\in \Theta':\theta_j=0} {\Pb}_{j,0} (\widetilde \eta _j=1) \prod_{i\ne j} p_i(\theta) + \sum_{\theta\in \Theta':\theta_j=a} {\Pb}_{j,a} (\widetilde \eta _j=0) \prod_{i\ne j} p_i(\theta)\Big) \\ \nonumber &\ge & \frac{p_*}{|\Theta'|} \sum_{j=1}^d \Big(\sum_{\theta\in \Theta':\theta_j=0} {\Pb}_{j,0} (\widetilde \eta _j=1) + \sum_{\theta\in \Theta':\theta_j=a} {\Pb}_{j,a} (\widetilde \eta _j=0)\Big) \\ &= &\label{cor:recovery_pattern:eq6} \frac{p_*}{|\Theta'|} \sum_{j=1}^d \Big(\sum_{\theta\in \Theta':\theta_j=0} {\E}_{j,0} (\widetilde \eta _j) + \sum_{\theta\in \Theta':\theta_j=a} {\E}_{j,a} (1-\widetilde \eta _j)\Big) \end{eqnarray} where ${\Pb}_{j,u}$ denotes the distribution of $X_j$ when $\theta_j=u$. We now bound the right-hand side of \eqref{cor:recovery_pattern:eq6} by following the argument from the last three lines of \eqref{eq:lb2} to the end of the proof of Theorem~\ref{thm:lb}. Applying this argument yields that, for any $\widetilde \eta\in \cal T$, \begin{eqnarray}\label{cor:recovery_pattern:eq7} \frac{1}{|\Theta'|} \sum_{\theta\in \Theta'} \Pb_{\theta} (|\widetilde \eta - \eta| =1)&\ge & p_*\, d \tilde L \ge p_*\, s \Psi_+(d,s,a). \end{eqnarray} Combining \eqref{cor:recovery_pattern:eq5}, \eqref{cor:recovery_pattern:eq5a}, and \eqref{cor:recovery_pattern:eq7}, we find that, for any $\widetilde \eta\in \cal T$, $$ \sup_{\theta \in \Theta_d^+(s,a)} \Pb_\theta(S_{\widetilde \eta} \ne S(\theta)) \ge \max\{1-p_*, \, p_*\, s \Psi_+(d,s,a)\} \ge \min_{0\le p\le 1}\max\{1-p, \, p\, s \Psi_+(d,s,a)\}=\frac{s\Psi_+(d,s,a)}{1+s\Psi_+(d,s,a)}. $$ We now prove the lower bound \eqref{cor:recovery_pattern:eqbar3}. Let the sets $\Theta^+$ and $\Theta^-$ be as in the proof of Theorem~\ref{thm:lb2}, and let the probabilities $p_j(\theta)$ be defined as above. Then \begin{equation*}\label{eq:newlb1} \sup_{\theta \in \Theta_d(s,a)} \Pb_\theta(S_{\widetilde \eta} \ne S(\theta)) \ge \max_{\theta \in \Theta^+ \cup \Theta^-} \Pb_\theta(S_{\widetilde \eta} \ne S(\theta)) = 1-\bar p, \end{equation*} where $\bar p = \min_{\theta \in \Theta^+ \cup \Theta^-}\prod_{j=1}^d p_j(\theta)$. For any selector $\widetilde \eta$, we use that $\Pb_\theta(S_{\widetilde \eta} \ne S(\theta))\ge \Pb_\theta(|\widetilde \eta - \eta|=1)$ and, therefore, \begin{eqnarray*} \sup_{\theta \in \Theta_d(s,a)} \Pb_\theta(S_{\widetilde \eta} \ne S(\theta)) &\ge& \frac{1}{2|\Theta^+|} \sum_{\theta\in \Theta^+} \Pb_{\theta} (|\widetilde \eta - \eta| =1) + \frac{1}{2|\Theta^-|} \sum_{\theta\in \Theta^-} \Pb_{\theta} (|\widetilde \eta - \eta| =1). 
\end{eqnarray*} We continue along the same lines as in the derivation of \eqref{cor:recovery_pattern:eq6} to get, for any separable selector $\widetilde \eta$, \begin{eqnarray*} \sup_{\theta \in \Theta_d(s,a)} \Pb_\theta(S_{\widetilde \eta} \ne S(\theta)) &\ge& \frac{\bar p}{2 |\Theta^+| } \sum_{j=1}^d \left( \sum_{\theta \in \Theta^+: \theta_j=0} \E_{j,0}(\widetilde \eta_j) + \sum_{\theta \in \Theta^+: \theta_j=a} \E_{j,a}(1-\widetilde \eta_j)\right) \\ && + \frac{\bar p}{2 |\Theta^-| } \sum_{j=1}^d \left( \sum_{\theta \in \Theta^-: \theta_j=0} \E_{j,0}(\widetilde \eta_j) + \sum_{\theta \in \Theta^-: \theta_j=-a} \E_{j,-a}(1-\widetilde \eta_j)\right)\\ & \geq & \frac{\bar p}2 \sum_{j=1}^d \left( \left( 1- \frac sd \right) \E_{j,0} (\widetilde \eta_j) + \frac sd \E_{j, a}(1 - \widetilde \eta_j) \right)\\ && + \frac{\bar p}2 \sum_{j=1}^d \left( \left( 1- \frac sd \right) \E_{j,0} (\widetilde \eta_j) + \frac sd \E_{j, -a}(1 - \widetilde \eta_j) \right)\\ &= & \bar p \sum_{j=1}^d \left( \left( 1- \frac sd \right) \E_{j,0} (\widetilde \eta_j) + \frac sd \overline \E_{j}(1 - \widetilde \eta_j) \right), \end{eqnarray*} where again $\bar \E_j$ denotes the expected value with respect to $\bar \Pb _j = \frac 12 (\Pb_{j,a}+ \Pb_{j, -a})$. Analogously to the proof of Theorem~\ref{thm:lb2}, the expression in the last display can be further bounded from below by $\bar p d \tilde L = \bar p s \overline \Psi(d,s,a)$. Thus, $$ \sup_{\theta \in \Theta_d(s,a)} \Pb_\theta(S_{\widetilde \eta} \ne S(\theta)) \ge \max\{1-\bar p, \, \bar p s \overline \Psi(d,s,a)\} \ge \min_{0\le p \le 1}\max\{1-p, \, p\, s \overline \Psi(d,s,a)\}=\frac{s \overline \Psi(d,s,a)}{1+s \overline \Psi(d,s,a)}. $$ \end{proof} \begin{proof}[Proof of Theorem~\ref{t4}] $(i)$ In the proof of Theorem~\ref{nonasymptotic}, we have obtained that \begin{eqnarray}\label{xx} \sup_{\theta\in \Theta_d(s,a)} \frac 1s \E_\theta |\hat \eta - \eta| \leq 2 \left(\frac ds - 1 \right) \Phi(-t/\sigma) + 2 \Phi (-(a-t)_+/\sigma), \end{eqnarray} where $t= \frac a2 + \frac {\s^2}a \log\left( \frac ds - 1\right)$ is the threshold (\ref{threshold}). Since $a^2 \geq 2 \s^2\log(d/s - 1)$ we get that $a\ge t$ and that $t> a/2$, which is equivalent to $t > a-t$. Furthermore, $\big(\frac ds -1\big)e^{-t^2/(2\s^2)} = e^{-(a-t)^2/(2\s^2)} $. These remarks and \eqref{AS} imply that \begin{eqnarray*} \left(\frac ds - 1 \right) \Phi(-t/\sigma) &\leq & \sqrt{\frac 2\pi} \, \frac{\exp (-(a-t)^2/(2\s^2))}{(a-t)/\sigma + \sqrt{(a-t)^2/\sigma^2 + 8/\pi}} \\ &\leq & \frac{\exp (-(a-t)^2/(2\s^2))}{(a-t)/\sigma + \sqrt{(a-t)^2/\sigma^2 + 4}} \\ &\leq & \sqrt{\frac \pi 2} \Phi\left(- \frac{a-t}{\s} \right). \end{eqnarray*} Combining this with \eqref{xx} we get \begin{eqnarray*} \sup_{\theta\in \Theta_d(s,a)} \frac 1s \E_\theta |\hat \eta - \eta| &\leq & ( 2 + \sqrt{2 \pi}) \Phi\left(- \frac{a-t}{\s} \right). \end{eqnarray*} Now, to prove \eqref{eq:t4:1} it remains to note that under assumption \eqref{ass:e}, $$ \frac{a-t}\s = \frac a{2\s} - \frac {\s}a \log\left( \frac ds - 1\right)= \frac{a^2-2\s^2\log((d-s)/s)}{2a \s}\ge \Delta. $$ Indeed, assumption \eqref{ass:e} states that $a\ge a_0\triangleq\s\Big(2\log((d-s)/s) + W\Big)^{1/2}$, and the function $a\mapsto \big({a^2-2\s^2\log((d-s)/s)}\big)/{a}$ is monotonically increasing in $a>0$. On the other hand, \begin{equation}\label{eq:t4:3} \big( {a_0^2-2\s^2\log((d-s)/s)}\big)/(2{a_0}\s) = \Delta. \end{equation} $(ii)$ We now prove \eqref{eq:t4:2}. 
By Theorem~\ref{thm:lb}, $$ \inf_{\widetilde \eta} \sup_{\theta \in \Theta_d(s,a)} \frac 1s \E_\theta |\widetilde \eta - \eta| \geq \Psi_+(d,s,a)\ge \Phi \left( - \frac a {2\s} + \frac \s a \log \Big( \frac ds - 1 \Big) \right). $$ Here, $$- \frac a {2\s} + \frac \s a \log \Big( \frac ds - 1 \Big)= \frac{2\s^2\log((d-s)/s)-a^2}{2\s a}\,. $$ Observe that the function $a\mapsto \big({2\s^2\log((d-s)/s)-a^2}\big)/{a}$ is monotonically decreasing in $a>0$ and that assumption (\ref{lowE}) states that $a\le a_0$. In view of \eqref{eq:t4:3}, its minimum over $a\le a_0$ is attained at $a=a_0$ and equals $- \Delta$. The bound \eqref{eq:t4:2} now follows by the monotonicity of $\Phi(\cdot)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:asymp}] Assume without loss of generality that $d$ is large enough to have $(d-s_d)/s_d>1$. We apply Theorem~\ref{t4} with $ W= A \sqrt{2\log((d-s_d)/s_d)}. $ Then, $$ \Delta^2= \frac{A^2 \sqrt{2\log ((d-s_d)/s_d) }}{4\big(\sqrt{2\log ((d-s_d)/s_d)}+A\big)}. $$ By assumption, there exists $\nu>0$ such that $(2+\nu)s_d \le d$ for all $d$ large enough. Equivalently, $d/s_d - 1 \geq 1+\nu$ and therefore, using the monotonicity of $x\mapsto x/(x+A)$ for $x>0$, we find $$ \Delta^2\ge \frac{A^2 \sqrt{2\log (1+\nu) }}{4\big(\sqrt{2\log (1+\nu)}+A\big)}\to \infty \quad \mbox{ as } A\to \infty. $$ This and \eqref{eq:t4:1} imply part $(i)$ of the theorem. Part $(ii)$ follows from \eqref{eq:t4:2} by noticing that $\Delta^2\le \sup_{x>0}\frac{ A^2x}{4(x+A)}=A^2/4$ for any fixed $A>0$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:asympE}] Throughout the proof, we assume without loss of generality that $d$ is large enough to have $s_d\ge 2$ and $(d-s_d)/s_d>1$. Set $W_*(s)\triangleq 4 \Big(\log s + \sqrt{\log s \log (d-s)}\Big)$, and notice that \begin{eqnarray}\label{eq3a:thm:asympE} &&\frac{W_*(s_d)}{2 \sqrt{2 \log((d-s_d)/s_d) +W_*(s_d)}}=\sqrt{2\log s_d}, \\ \label{eq3b:thm:asympE} &&2 \log((d-s_d)/s_d) +W_*(s_d) = 2\Big(\sqrt{\log(d-s_d)} + \sqrt{\log s_d} \Big)^2. \end{eqnarray} If \eqref{eq1:thm:asympE} holds, we have $W_d\ge W_*(s_d)$ for all $d$ large enough. By the monotonicity of the quantity $\Delta$ defined in (\ref{DeltaERec}) with respect to $W$, this implies \begin{eqnarray}\label{eq4:thm:asymp} \Delta_d &\triangleq & \frac{W_d}{2 \sqrt{2 \log((d-s_d)/s_d) +W_d}} \nonumber \\ &\ge &\frac{W_*(s_d)}{2 \sqrt{2 \log((d-s_d)/s_d) +W_*(s_d)}}=\sqrt{2\log s_d}\, . \end{eqnarray} Now, by Theorem~\ref{t4} and using (\ref{AS}) we may write \begin{eqnarray}\nonumber \sup_{\theta \in \Theta_d(s_d,a_d)} \E_\theta |\hat \eta - \eta| &\leq& (2+\sqrt{2\pi})s_d \, \Phi\left(- \Delta_d \right) \nonumber \\ &\leq & 3s_d \min \left\{1, \frac 1{\Delta_d} \right\} \exp \left(- \frac {\Delta_d^2}2\right) \nonumber \\ \label{eq3:thm:asymp} &=& 3 \min \left\{1, \frac 1{\Delta_d} \right\} \exp \left(- \frac {\Delta_d^2-2\log s_d}2 \right)\,. \end{eqnarray} This and \eqref{eq4:thm:asymp} imply that, for all $d$ large enough, $$ \sup_{\theta \in \Theta_d(s_d,a_d)} \E_\theta |\hat \eta - \eta| \le 3 \min \left\{1, \frac 1{\sqrt{2\log s_d}}\right\}. $$ Since $s_d\to\infty$, part $(i)$ of the theorem follows. We now prove part $(ii)$ of the theorem. It suffices to consider $W_d> 0$ for all $d$ large enough since for non-positive $W_d$ almost full recovery is impossible and the result follows from part $(ii)$ of Theorem~\ref{thm:asymp}. If \eqref{eq2:thm:asymp} holds, there exists $A<1$ such that $W_d\le AW_*(s_d)$ for all $d$ large enough. 
By the monotonicity of the quantity $\Delta$ defined in (\ref{DeltaERec}) with respect to $W$ and in view of equation \eqref{eq3a:thm:asympE}, this implies \begin{eqnarray}\nonumber && \Delta_d^2 - 2\log s_d \nonumber\\ &\le& \frac{A^2W_*^2(s_d)}{4 (2 \log((d-s_d)/s_d) +AW_*(s_d))}- \frac{W_*^2(s_d)}{4 (2 \log((d-s_d)/s_d) +W_*(s_d))} \nonumber \\ \nonumber &=& \frac{(A-1)W_*^2(s_d)(AW_*(s_d)+2 (A+1) \log((d-s_d)/s_d))}{4 (2 \log((d-s_d)/s_d) +AW_*(s_d))(2 \log((d-s_d)/s_d) +W_*(s_d))} \\ \nonumber \\ \nonumber &\le & \frac{(A-1)AW_*^2(s_d)}{4 (2 \log((d-s_d)/s_d) +W_*(s_d))} \\ \nonumber \\ &= & \frac{2(A-1)A \Big(\log s_d + \sqrt{\log s_d \log (d-s_d)}\Big)^2}{\Big(\sqrt{\log(d-s_d)} + \sqrt{\log s_d} \Big)^2} = 2(A-1)A \log s_d, \label{eq5:thm:asymp} \end{eqnarray} where we have used the fact that $A<1$ and equations \eqref{eq3a:thm:asympE}, \eqref{eq3b:thm:asympE}. Next, by Theorem~\ref{t4} and using (\ref{AS}), we have \begin{eqnarray}\nonumber \inf_{\widetilde{\eta}} \sup_{\theta \in \Theta_d(s_d,a_d)} \E_\theta |\widetilde \eta - \eta| &\ge & s_d \, \Phi\left(- \Delta_d \right) \ge \frac{s_d}4 \min \left\{\frac12, \frac 1{\Delta_d} \right\} \exp \left(- \frac {\Delta_d^2}2\right) \\ \nonumber &=& \frac{1}4 \min \left\{\frac12, \frac 1{\Delta_d} \right\} \exp \left(- \frac {\Delta_d^2-2\log s_d}2 \right)\,. \end{eqnarray} Combining this inequality with \eqref{eq5:thm:asymp}, we find that, for all $d$ large enough, $$ \inf_{\widetilde{\eta}}\sup_{\theta \in \Theta_d(s_d,a_d)} \E_\theta |\widetilde \eta - \eta|\ge \frac{1}4 \min \left\{\frac12, \frac 1{\Delta_d} \right\} \exp \left((1-A)A \log s_d \right) $$ Since $A<1$ and $\Delta_d\le A\sqrt{2\log s_d}$ by \eqref{eq5:thm:asymp}, the last expression tends to $\infty$ as $s_d\to \infty$. This proves part $(ii)$ of the theorem. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:adaptE}] By \eqref{term1}, for any $\theta \in \Theta_d(s_d,a_d)$, and any $t>0$ we have \begin{equation*}\label{term1**} \E_\theta |\hat \eta - \eta| \leq \left(d -s_d\right) \Pb(|\xi|\geq t/\s) + s_d \Pb(|\xi| >(a_d- t)_+/\s), \end{equation*} where $\xi$ is a standard normal random variable. It follows that, for any $a_d\ge a_d^*$, any $\theta \in \Theta_d(s_d,a_d)$, and any $t>0$, \begin{equation*}\label{term1*} \E_\theta |\hat \eta - \eta| \leq d \, \Pb(|\xi|\geq t/\s) + s_d \, \Pb(|\xi| >(a_d^*- t)_+/\s). \end{equation*} Without loss of generality assume that $d\ge 6$ and $2\le s_d\le d/2$. Then, using the inequality $\sqrt{x}-\sqrt{y} \le (x-y)/\sqrt{2y}$, $\forall x>y>0,$ we find that, for $t=\s \sqrt{2 \log d}$, \begin{eqnarray*} (a_d^* -t)_+/\s&\geq& \sqrt{2} \left(\sqrt{\log(d-s_d)} -\sqrt{\log d }+ \sqrt{\log (s_d)}\right)\\ &\geq& \sqrt{2\log (s_d)} - \log\left(\frac{d}{d-s_d}\right)/\sqrt{\log(d-s_d)}\\ &\geq& \sqrt{2\log (s_d)} - (\log 2)/\sqrt{\log(d/2)} >0. \end{eqnarray*} From this we also easily deduce that, for $2\le s_d\le d/2$, we have $((a_d^* -t)_+/\s)^2/2\ge \log (s_d) - \sqrt{2}\log 2$. Combining these remarks with \eqref{AS} and \eqref{phase1}, we find \begin{eqnarray*} \sup_{\theta \in \Theta_d(s_d,a_d)} \E_\theta |\hat \eta - \eta| &\leq& \frac 1{\sqrt{2\log d }}+ \frac {s_d \exp \left(-\log (s_d) + \sqrt{2}\log 2\right)} {\sqrt{2\log (s_d)}}\,, \end{eqnarray*} which immediately implies the theorem by taking the limit as $d\to\infty$. \end{proof} \begin{proof}[Proof of Theorem~\ref{th:adapt}] Throughout the proof, we will write for brevity $s_d=s, a_d=a, A_d=A$, and set $\s=1$. 
Since $\Theta_d(s,a)\subseteq \Theta_d(s,a_0(s,A))$ for all $a\ge a_0(s,A)$, it suffices to prove that \begin{equation} \label{eq:th:adapt1} \lim_{d\to \infty} \sup_{\theta\in\Theta_d(s,a_0(s,A))} \frac 1{s} \E_\theta |\ee - \eta| = 0. \end{equation} Here $s\le s^*_d$ and recall that throughout this section we assume that $s_d^*\le d/4$; since we deal with asymptotics as $d/s^*_d\to \infty$, the latter assumption is without loss of generality in the current proof. { If $s<g_M$, let $m_0\in \{2,\dots,M\}$ be the index such that $g_{m_0}$ is the minimal element of the grid, which is greater than the true underlying~$s$. Thus, $g_{m_0}/2=g_{m_0-1} \leq s < g_{m_0}$. If $s\in [g_M,s^*_d]$, we set $m_0=M$. In both cases, \begin{equation} \label{eq:th:adapt1_bis} s\ge g_{m_0}/2. \end{equation} } We decompose the risk as follows: $$ \frac 1s \E_\theta | \ee - \eta | = I_1 + I_2, $$ where \begin{eqnarray*} I_1 & = & \frac 1s \E_\theta \left( | \hat \eta(g_{\widehat{m}}) - \eta | I(\widehat{m} \leq m_0)\right), \\ I_2 &=& \frac 1s \E_\theta \left( | \hat \eta(g_{\widehat{m}}) - \eta | I( \widehat{m} \ge m_0+1)\right). \end{eqnarray*} We now evaluate $I_1$. Using the fact that $\hat \eta_j(g_{m})$ is monotonically increasing in $m$ and the definition of $\widehat{m}$, we obtain that, on the event $\{\widehat{m} \leq m_0\}$, \begin{eqnarray*} | \hat \eta(g_{\widehat{m}}) - \hat\eta(g_{m_0}) |&\le& \sum_{m=\widehat{m} +1}^{m_0} |\hat \eta(g_{m}) - \hat\eta(g_{m-1}) |\\ &=& \sum_{m=\widehat{m}+1}^{m_0} \sum_{j=1}^{d}(\hat \eta_j(g_{m}) - \hat\eta_j(g_{m-1}) )\\ &=& \sum_{m=\widehat{m}+1}^{m_0} \sum_{j=1}^d I \big(w(g_m)\le |X_j| < w(g_{m-1})\big)\\ &\le &\tau\sum_{m=\widehat{m}+1}^{m_0} g_m \le {\tau} s \sum_{m=2}^{m_0} 2^{m-m_0+1}\le 4\tau s, \end{eqnarray*} where we have used the equality $g_m = 2^m$ { and \eqref{eq:th:adapt1_bis}}. Thus, \begin{eqnarray}\label{term10} I_1 & \le & \frac 1s \E_\theta \left( | \hat \eta(g_{\widehat{m}}) - \hat\eta(g_{m_0}) | I(\widehat{m} \le m_0)\right) + \frac 1s \E_\theta | \hat \eta(g_{m_0}) - \eta | \\ & \leq & 4\tau+ \frac 1s \E_\theta | \hat \eta(g_{m_0}) - \eta |.\nonumber \end{eqnarray} By \eqref{term1}, for any $\theta \in \Theta_d(s,a_0(s,A))$ we have \begin{equation}\label{term11} \frac 1s \E_\theta |\hat \eta(g_{m_0}) - \eta| \leq \left(\frac{d}s -1\right) \Pb(|\xi|\geq w(g_{m_0})) + \Pb(|\xi| >(a_0(s,A)- w(g_{m_0}))_+) \end{equation} where $\xi$ is a standard Gaussian random variable. Using the bound on the Gaussian tail probability and the fact that $s\ge g_{m_0}/2$, we get \begin{eqnarray}\label{term12} &&\left(\frac{d}s -1\right) \Pb(|\xi|\geq w(g_{m_0}))\le \frac{d/s-1}{d/g_{m_0}-1} \ \frac{\pi^{-1/2}}{\sqrt{\log(d/g_{m_0} - 1)}} \\ &&\qquad\le \frac{d-s}{d-2s}\ \frac{2\pi^{-1/2}}{\sqrt{\log(d/s - 1)}} \le \frac{3\pi^{-1/2}}{\sqrt{\log(d/s^*_d - 1)}}\ .\nonumber \end{eqnarray} To bound the second probability on the right-hand side of \eqref{term11}, we use the following lemma. \begin{lemma}\label{lem1} Under the assumptions of Theorem~\ref{th:adapt}, for any $m\ge m_0$ we have \begin{equation}\label{eq:lem1} \Pb(|\xi| >(a_0(s,A)- w(g_{m}))_+)\le\big( \log\left({d}/{s^*_d} -1\right)\big)^{-\frac1{2}}\,. 
\end{equation} \end{lemma} Combining \eqref{term11}, \eqref{term12} and \eqref{eq:lem1} with $m=m_0$, we find \begin{eqnarray}\label{term141} \frac 1s \E_\theta |\hat \eta(g_{m_0}) - \eta| & \leq & \frac{3\pi^{-1/2}+1}{\sqrt{\log(d/s^*_d - 1)}} \,, \end{eqnarray} which together with \eqref{term10} leads to the bound \begin{eqnarray}\label{term14} I_1 & \leq & 4\tau + \frac{3\pi^{-1/2}+1}{\sqrt{\log(d/s^*_d - 1)}} \,. \end{eqnarray} We now turn to the evaluation of $I_2$. {It is enough to consider the case $m_0\le M-1$ since $I_2=0$ when $m_0=M$. } We have \begin{eqnarray}\label{term15} I_2 &=& \frac 1s \sum_{m=m_0+1}^M \E_\theta\left( |\hat \eta (g_{\hat{m}}) - \eta| I( \widehat{m} = m) \right)\\ & \leq & \frac 1s \sum_{m=m_0+1}^M \big(\E_\theta |\hat \eta (g_{m}) - \eta|\big)^{1/2} \big(\Pb_\theta(\widehat{m} = m)\big)^{1/2}.\nonumber \end{eqnarray} By definition, the event $\{\widehat{m} = m\}$ occurs if and only if there exists some $\ell \geq m$ such that $\sum_{j=1}^d I( w_\ell \le |X_j|< w_{\ell-1} ) > \tau g_\ell \triangleq v_\ell$, where we set for brevity $w_\ell=w(g_{\ell})$. Thus, \begin{eqnarray} \Pb_\theta(\widehat{m} = m) & \leq & \sum_{\ell=m}^M \Pb_\theta \left( \sum_{j=1}^d I( w_\ell \le |X_j|< w_{\ell-1} ) > v_\ell \right)\,. \label{I2} \end{eqnarray} By Bernstein's inequality, for any $t>0$ we have \begin{eqnarray}\label{I2B} && \Pb_\theta \left( \sum_{j=1}^d I( w_\ell \le |X_j|< w_{\ell-1} ) - \E_\theta\left( \sum_{j=1}^d I( w_\ell \le |X_j|< w_{\ell-1} ) \right) > t \right)\nonumber \\ && \qquad \leq \exp \left( - \frac{t^2/2}{ \sum_{j=1}^d \E_\theta\left( I( w_\ell \le |X_j|< w_{\ell-1}) \right) +2t/3 }\right), \end{eqnarray} where we have used that, for random variables with values in $\{0,1\}$, the variance is smaller than the expectation. Now, similar to \eqref{term1}, for any $\theta \in \Theta_d(s,a_0(s,A))$, \begin{eqnarray*} &&\E_\theta\Big( \sum_{j=1}^d I(w_\ell \le |X_j|< w_{\ell-1} ) \Big)\\ &\le & (d-s) \Pb \left( w_\ell \le |\xi|< w_{\ell-1} \right) + \sum_{j:\theta_j\ne 0}\Pb \left(|\theta_j+ \xi|< w_{\ell-1} \right) \\ & \leq & (d-s) \Pb \left( |\xi|\ge w_{\ell} \right)+ s \Pb(|\xi| > (a_0(s,A) - w_{\ell-1})_+), \end{eqnarray*} where $\xi$ is a standard Gaussian random variable. Since $\ell\ge m_0+1$, from Lemma \ref{lem1} we get \begin{equation}\label{term17} \Pb(|\xi| >(a_0(s,A)- w_{\ell-1})_+)\le \big( \log\left({d}/{s^*_d} -1\right)\big)^{-\frac1{2}}. \end{equation} Next, using the bound on the Gaussian tail probability and the inequalities $g_\ell \le s^*_d\le d/4$, we find \begin{equation}\label{term18} (d-s) \Pb \left( |\xi|\ge w_{\ell} \right)\le \frac{d-s}{d/g_\ell - 1} \frac{\pi^{-1/2}}{\sqrt{\log(d/g_\ell - 1)}}\le \frac{(4/3)\pi^{-1/2}g_\ell}{\sqrt{\log(d/s^*_d - 1)}}\,. \end{equation} We now deduce from \eqref{term17} and \eqref{term18}, and the inequality $s\le g_\ell$ for $\ell\ge m_0+1$, that \begin{equation} \label{I221} \E_\theta\Big( \sum_{j=1}^d I( w_\ell \le |X_j|< w_{\ell-1}) \Big) \leq \frac{\big((4/3)\pi^{-1/2}+1\big)g_\ell }{\sqrt{\log(d/s^*_d - 1)}} \le 2 \tau g_\ell. \end{equation} Taking $t=3\tau g_\ell=3v_\ell$ in \eqref{I2B} and using \eqref{I221}, we find \begin{equation*}\label{term19} \Pb_\theta \left( \sum_{j=1}^d I( w_\ell \le |X_j|< w_{\ell-1} ) > v_\ell \right)\leq \exp(-C_1 v_\ell)=\exp(-C_1 2^\ell\tau ), \end{equation*} for all $\ell\ge m_0+1$ and some absolute constant $C_1>0$. 
This implies \begin{equation}\label{term20} \Pb_\theta(\widehat{m} = m)\le \sum_{\ell=m}^M \exp(-C_1 2^\ell\tau ) \le C_2 2^{-m} \tau^{-1} \exp(-C_1 2^m\tau) \end{equation} for some absolute constant $C_2>0$. On the other hand, notice that the bounds \eqref{term11} and \eqref{term12} are valid not only for $g_{m_0}$ but also for any $g_{m}$ with $m\ge m_0+1$. Using this observation and Lemma \ref{lem1} we get that, for any $\theta \in \Theta_d(s,a_0(s,A))$ and any $m\ge m_0+1$, \begin{eqnarray} \E_\theta |\hat \eta(g_{m}) - \eta| & \leq & s\left[\frac{d/s-1}{d/g_{m}-1} \ \frac{\pi^{-1/2}}{\sqrt{\log(d/g_{m} - 1)}} + \big( \log\left({d}/{s^*_d} -1\right)\big)^{-\frac1{2}}\right]\nonumber \\ & \leq & \frac{\big((4/3)\pi^{-1/2}+1\big)g_m}{\sqrt{\log(d/s^*_d - 1)}} \triangleq \tau' g_m= \tau' 2^m\,, \label{term241} \end{eqnarray} where the last inequality follows from the same argument as in \eqref{term18}. Now, we plug \eqref{term20} and \eqref{term241} in \eqref{term15} to obtain \begin{eqnarray} I_2 &\le & \frac {C_2^{1/2} (\tau'/\tau)^{1/2}}s \sum_{m=m_0+1}^M \exp(-C_1 2^{m-1}\tau) \label{term150} \\ \nonumber &\le& C_3 (\tau')^{1/2} \tau^{-3/2} \exp(-C_1 2^{m_0}\tau)\le C_3 (\tau')^{1/2} \tau^{-3/2} \end{eqnarray} for some absolute constant $C_3>0$. Notice that $ (\tau')^{1/2} = O\big(\big(\log\left({d}/{s^*_d} -1\right)\big)^{-\frac1{4}}\big)$ as ${d}/{s^*_d}\to \infty$ while $\tau^{-3/2}= O\big(\big(\log\left({d}/{s^*_d} -1\right)\big)^{\frac3{14}}\big)$. Thus, $I_2=o(1)$ as $d\to \infty$. Since \eqref{term14} also gives $I_1=o(1)$ as $d\to \infty$, the proof is complete. \end{proof} \begin{proof}[Proof of Lemma \ref{lem1}] { Consider first the case $s<g_M$. Then, by definition of $m_0$, we have $s<g_{m_0}$. } Therefore, $s<g_m$ for $m\ge m_0$, and we have $w(g_{m})<w(s)$. It follows that $$ a_0(s,A)- w(g_{m}) \ge a_0(s,A)- w(s) \ge \frac{\sqrt{A}}{2\sqrt{2}} \min\left(\frac {\sqrt{A}}{\sqrt{2}}, \ \log^{1/4}\left({d}/s -1\right)\right), $$ where we have used the elementary inequalities $$\sqrt{x+y}-\sqrt{x}\ge y/(2\sqrt{x+y}) \ge (2\sqrt{2})^{-1} \min\left(y/\sqrt{x},\sqrt{y}\right)$$ with $x=2\log\left({d}/s -1\right)$ and $y=A\sqrt{\log\left({d}/s -1\right)}$. By assumption, $A\ge { 16}\sqrt{\log\log\left({d}/{s^*_d} -1\right)}$, so that we get \begin{equation}\label{term111} a_0(s,A)- w(g_{m}) \ge a_0(s,A)- w(s) \ge { 4}\left(\log\log\left(\frac{d}{s^*_d} -1\right)\right)^{1/2}. \end{equation} This and the standard bound on the Gaussian tail probability imply \begin{eqnarray}\label{term13} \Pb(|\xi| >(a_0(s,A)- w(g_{m}))_+) &\le & \exp(-(a_0(s,A)- w(g_{m}))^2/2)\\ &\le& \big( \log\left({d}/{s^*_d} -1\right)\big)^{-\frac1{2}}\,. \end{eqnarray} { Now let $s\in [g_M, s^*_d]$. Then $m_0=M$ and we need to prove the result only for $m=M$. By definition of $M$ we have $s^*_d\le 2g_M$. This and \eqref{term111} imply $$ a_0(s,A)- w(g_{M}) \ge a_0(s,A)- w(s) - (w(s^*_d/2)-w(s^*_d)) \ge { 4}\left(\log\log\left(\frac{d}{s^*_d} -1\right)\right)^{1/2} - (w(s^*_d/2)-w(s^*_d)). $$ Now, using the elementary inequality $\sqrt{\log(x+y)}-\sqrt{\log(x)}\le y/(2x\sqrt{\log(x)})$ with $x=d/s^*_d-1$ and $y=d/s^*_d$, and the fact that $s^*_d\le d/4$ we find $$ w(s^*_d/2)-w(s^*_d) \le \frac1{\sqrt{2\log(d/s^*_d-1)}} \frac{d}{d-s^*_d}\le \frac{2\sqrt{2}}{3\sqrt{\log(d/s^*_d-1)}}\le 3\left(\log\log\left(\frac{d}{s^*_d} -1\right)\right)^{1/2}. $$ The last two displays yield $ a_0(s,A)- w(g_{M}) \ge \left(\log\log\left(\frac{d}{s^*_d} -1\right)\right)^{1/2}$, and we conclude as in~\eqref{term13}. 
} \end{proof} \noindent {\bf Acknowledgements.} We would like to thank Felix Abramovich for helpful discussions of the results. The work of N.A. Stepanova was supported by an NSERC grant. The work of A.B. Tsybakov was supported by GENES and by the French National Research Agency (ANR) under the grants IPANEMA (ANR-13-BSH1-0004-02) and Labex ECODEC (ANR-11-LABEX-0047). It was also supported by the ``Chaire Economie et Gestion des Nouvelles Donn\'ees'', under the auspices of Institut Louis Bachelier, Havas-Media and Paris-Dauphine.
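
\medskip \noindent {\bf Remark (illustrative numerical sketch).} The bounds used in the proofs above are explicit enough to evaluate directly. The following short Python sketch (an illustration only, not part of the original argument; the parameter values of $d$, $s$, $a$, $\sigma$ below are arbitrary sample choices) computes the threshold $t= \frac a2 + \frac {\s^2}a \log\left( \frac ds - 1\right)$, the upper bound $2 \left(\frac ds - 1 \right) \Phi(-t/\sigma) + 2 \Phi (-(a-t)_+/\sigma)$ from \eqref{xx}, and the lower-bound quantity $\Phi \left( - \frac a {2\s} + \frac \s a \log \big( \frac ds - 1 \big) \right)$ used in the proof of part $(ii)$ of Theorem~\ref{t4}, using only the Python standard library.
\begin{verbatim}
# Illustrative sketch (not part of the paper): numerical evaluation of the
# threshold and of the risk bounds appearing in the proof of Theorem t4.
# The values of d, s, a, sigma below are arbitrary sample choices.
import math

def Phi(x):
    """Standard Gaussian cumulative distribution function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def bounds(d, s, a, sigma=1.0):
    L = math.log(d / s - 1.0)                  # log(d/s - 1) = log((d-s)/s)
    t = a / 2.0 + (sigma ** 2 / a) * L         # threshold of the selector
    upper = 2.0 * (d / s - 1.0) * Phi(-t / sigma) \
        + 2.0 * Phi(-max(a - t, 0.0) / sigma)  # upper bound on (1/s) E|hat eta - eta|
    lower = Phi(-a / (2.0 * sigma) + (sigma / a) * L)  # lower bound on the minimax risk
    return t, upper, lower

for a in (1.0, 2.0, 3.0, 4.0, 5.0):
    t, up, lo = bounds(d=10**4, s=10, a=a)
    print(f"a = {a:3.1f}   t = {t:6.3f}   upper = {up:9.3e}   lower = {lo:9.3e}")
\end{verbatim}
The transition of both quantities from values close to $1$ for small $a$ to small values for large $a$ mirrors the phase-transition behaviour described in the theorems above.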
{ "timestamp": "2017-03-13T01:06:33", "yymm": "1512", "arxiv_id": "1512.01832", "language": "en", "url": "https://arxiv.org/abs/1512.01832", "abstract": "We derive non-asymptotic bounds for the minimax risk of variable selection under expected Hamming loss in the Gaussian mean model in $\\mathbb{R}^d$ for classes of $s$-sparse vectors separated from 0 by a constant $a > 0$. In some cases, we get exact expressions for the nonasymptotic minimax risk as a function of $d, s, a$ and find explicitly the minimax selectors. These results are extended to dependent or non-Gaussian observations and to the problem of crowdsourcing. Analogous conclusions are obtained for the probability of wrong recovery of the sparsity pattern. As corollaries, we derive necessary and sufficient conditions for such asymptotic properties as almost full recovery and exact recovery. Moreover, we propose data-driven selectors that provide almost full and exact recovery adaptively to the parameters of the classes.", "subjects": "Statistics Theory (math.ST)", "title": "Variable selection with Hamming loss", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9867771755256741, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.7090925509964903 }
https://arxiv.org/abs/1511.05522
On the Classification of Pointed Fusion Categories up to weak Morita Equivalence
A pointed fusion category is a rigid tensor category with finitely many isomorphism classes of simple objects, all of which are invertible. Two tensor categories $C$ and $D$ are weakly Morita equivalent if there exists an indecomposable right module category $M$ over $C$ such that $Fun_C(M,M)$ and $D$ are tensor equivalent. We use the Lyndon-Hochschild-Serre spectral sequence associated to abelian group extensions to give necessary and sufficient conditions in terms of cohomology classes for two pointed fusion categories to be weakly Morita equivalent. This result may make it possible to classify the equivalence classes of pointed fusion categories of any given global dimension.
\section*{Introduction} Pointed fusion categories are rigid tensor categories with finitely many isomorphism classes of simple objects, all of which are invertible. Any pointed fusion category $\mathcal{C}$ is equivalent to the fusion category $Vect(G,\omega)$ of complex vector spaces graded by the finite group $G$ together with the associativity constraint defined by the 3-cocycle $\omega \in Z^3(G,\ensuremath{{\mathbb C}^*})$. Whenever we have a right module category $\mathcal{M}$ over $\mathcal{C}$ we can define the dual category $\mathcal{C}_\mathcal{M}^*:= \operatorname{Fun}_\mathcal{C}(\mathcal{M},\mathcal{M})$ which becomes a tensor category via composition of functors. Whenever $\mathcal{C}$ is a fusion category and $\mathcal{M}$ is an indecomposable module category over $\mathcal{C}$, the dual category $\mathcal{C}_\mathcal{M}^*$ is also a fusion category \cite[\S 2.2]{Ost-2}. An indecomposable module category $\mathcal{M}$ of $Vect(G,\omega)$ may be defined by $\mathcal{M}=\mathcal{M}(K, \mu)$ where $K$ is the space of cosets $K :=A \backslash G$ for $A$ a subgroup of $G$ and $\mu \in C^2(G, \ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))$ is a cochain that satisfies the equation $ \delta_G \mu^{-1} = \omega$. Two tensor categories $\mathcal{C}$ and ${\mathcal D}$ are {\it{weakly Morita equivalent}} if there exists an indecomposable right module category $\mathcal{M}$ over $\mathcal{C}$ such that $\mathcal{C}_\mathcal{M}^*$ and ${\mathcal D}$ are tensor equivalent \cite[Def 4.2]{MugerI}. Now, if we have two pointed fusion categories $Vect(G,\omega)$ and $Vect(\widehat{G},\widehat{\omega})$, what are the necessary and sufficient conditions for them to be weakly Morita equivalent? This question was raised in \cite{Davydov, Movshev}; it was answered by Davydov \cite[Cor. 6.2]{Davydov} in the case where both $\omega$ and $\widehat{\omega}$ are trivial, and the general case was answered by Naidu in \cite[Theorem 5.8]{Naidu} in terms of the properties that $A$, $\omega$ and $\mu$ need to satisfy. Nevertheless, these conditions were given by equations that a priori had no interpretation in terms of known cohomology classes. We continue the work started by Naidu in \cite{Naidu} and frame all the calculations done there in the language of the double complex associated to an abelian group extension, which induces the Lyndon-Hochschild-Serre spectral sequence. By doing so we are able to obtain in Corollary \ref{omega in 2,1 and 3,0} cohomological conditions on $\omega$ in order for the tensor category $Vect(G,\omega)_{\mathcal{M}(A \backslash G, \mu)}^*$ to be pointed, namely that $\omega$ must be cohomologous to a cocycle appearing in $C^{2,1}\oplus C^{3,0}$ of the double complex which induces the Lyndon-Hochschild-Serre spectral sequence associated to the extension $1 \to A \to G \to K \to 1$. With the previous result at hand, we construct explicit representatives of $\omega$ and $\mu$ in terms of coordinates and we determine explicitly the groups $\widehat{G}$ and the cocycles $\widehat{\omega}$. The main result of this paper is Theorem \ref{main theorem} in which we give the necessary and sufficient conditions for the categories $Vect(H,\eta)$ and $Vect(\widehat{H},\widehat{\eta})$ to be weakly Morita equivalent. 
We may summarize the conditions as follows: $Vect(H,\eta)$ and $Vect(\widehat{H},\widehat{\eta})$ are weakly Morita equivalent if and only if there exist isomorphisms of groups $\phi : A \rtimes_F K \stackrel{\cong}{\to} H$ and $ \widehat{\phi} : K \ltimes_{\widehat{F}} {{\mathbb{A}}} \stackrel{\cong}{\to} \widehat{H}$ for some finite group $K$ acting on the abelian group $A$, with $F \in Z^2(K, A)$ and $\widehat{F} \in Z^2(K, {{\mathbb{A}}})$ where ${{\mathbb{A}}} := \mathrm{Hom}(A, \ensuremath{{\mathbb C}^*})$, such that both $[\widehat{F}]$ and $[F]$ survive respectively the LHS spectral sequence for the groups $A \rtimes_F K$ and $K \ltimes_{\widehat{F}} {{\mathbb{A}}}$, and such that $\phi^* \eta$ is cohomologous to $$ \omega((a_1,k_1),(a_2,k_2),(a_3,k_3)) := \widehat{F}(k_1,k_2)(a_3) \ \epsilon(k_1,k_2,k_3)$$ and $\widehat{\phi}^*\widehat{\eta}$ is cohomologous to $$ \widehat{\omega}((k_1, \rho_1), (k_2,\rho_2),(k_3 ,\rho_3)) := \epsilon(k_1,k_2,k_3) \ \rho_1(F(k_2,k_3))$$ where $\epsilon: K^3 \to \ensuremath{{\mathbb C}^*}$ satisfies $\delta_K \epsilon = \widehat{F} \wedge F$. Theorem \ref{main theorem} may be used to determine the weak Morita equivalence classes of pointed fusion categories of a given global dimension, but the cohomological calculations can become very elaborate and are beyond the scope of this article. Nevertheless, in \S \ref{section examples} we include a calculation in which we show how Theorem \ref{main theorem} can be used to prove that there are only seven weak Morita equivalence classes of pointed fusion categories of global dimension four, and to determine the pointed fusion categories which are weakly Morita equivalent to $Vect(Q_8,\eta)$ for the quaternion group $Q_8$. \section{Preliminaries} \label{section Preliminaries} \subsection{Abelian group extensions} \label{subsection Abelian group extensions} Consider the short exact sequence of finite groups \begin{eqnarray} \label{extension of G by A and K} 1 \longrightarrow A \longrightarrow G \longrightarrow K \longrightarrow 1 \end{eqnarray} with $A$ abelian. Consider any section $u : K \to G$ of the projection map $p: G \to K$, $p(g)=Ag$, such that $u(1_K)=1_G$, and denote the right $G$-action on $K$ by $$k {\vartriangleleft} g:= p(u(k)g)$$ for $k \in K$ and $g \in G$. The elements $u(k)g$ and $u(k {\vartriangleleft} g)$ differ by an element $\kappa_{k,g} \in A$ satisfying the equation \begin{align} u(k)g = \kappa_{k,g} u(k {\vartriangleleft} g) \label{equation of kappa} \end{align} which furthermore satisfies the relation $$\kappa_{k,g_1g_2}= \kappa_{k,g_1} \kappa_{k {\vartriangleleft} g_1,g_2}$$ for $ k \in K$ and $g_1,g_2 \in G$. Since $A$ is an abelian normal subgroup of $G$, there is an induced left $K$-action on $A$ by conjugation: $${}^ka: = u(k)a u(k)^{-1}$$ for $k \in K$ and $a \in A$. Since the isomorphism class of the extension \eqref{extension of G by A and K} can be classified by the cohomology class of the cocycle $F \in Z^2(K, A)$, i.e. 
a map $F : K \times K \to A$ such that $$\delta_K F (k_1,k_2,k_3)= {}^{k_1}F(k_2,k_3)F(k_1k_2,k_3)^{-1}F(k_1,k_2k_3) F(k_1,k_2)^{-1}=1,$$ without loss of generality we will further assume that $$G := A \rtimes_F K$$ where the product structure of $G$ is given by the formula $$(a_1,k_1) (a_2,k_2) := (a_1 ({}^{k_1}a_2) F(k_1,k_2),k_1k_2).$$ With this explicit choice of the group $G$, we choose the function $u: K \to G$ to be $u(k):=(1_A,k)$ and therefore we have that $$\kappa_{k_1,(a,k_2)}= {}^{k_1}aF(k_1,k_2)$$ thus obtaining $F(k_1,k_2)= \kappa_{k_1,(1,k_2)}$. We furthermore have that for $x \in K$ and $g=(a,k) \in G$ $$x {\vartriangleleft} g= x {\vartriangleleft} (a,k)=xk.$$ Denote the dual group ${{\mathbb{A}}} := \mathrm{Hom}(A, \ensuremath{{\mathbb C}^*})$ and note that there is an induced right $K$-action on ${{\mathbb{A}}}$ defined as $\rho^k(a):= \rho({}^ka)$ for $\rho \in {{\mathbb{A}}}$ and $k \in K$. \subsection{Cohomology of groups and the Lyndon-Hochschild-Serre spectral sequence} \label{subsection Cohomology of groups and the Lyndon-Hochschild-Serre spectral sequence} In what follows we will construct an explicit double complex whose cohomology calculates the cohomology of the group $G$, and whose associated spectral sequence recovers the Lyndon-Hochschild-Serre (LHS) spectral sequence of the extension \eqref{extension of G by A and K}. Endow the set $\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*})$ with the left $G$-action $(g {\vartriangleright} f) (k):= f(k {\vartriangleleft} g)$ where $g \in G$, $k \in K$ and $f : K \to \ensuremath{{\mathbb C}^*}$, and consider the complex $C^*(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))$ whose elements are the normalized cochains $$C^q(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*})):= \{ f : K \times G^q \to \ensuremath{{\mathbb C}^*} | f(k;g_1,...,g_q)=1 \ \ \mbox{whenever some} \ \ g_i=1 \}$$ and boundary map \begin{align} (\delta_G f)(k ; g_1,...,g_q) = f(k {\vartriangleleft} g_1;g_2,...,g_q) \prod_{i=1}^{q-1}f(k;g_1 ,...,g_ig_{i+1} &,...,g_q)^{(-1)^i} \nonumber \\ & f(k; g_1,...,g_{q-1})^{(-1)^{q}}. \label{differential G} \end{align} Since the natural morphism of groupoids, defined by the inclusion of the group $A$ into the action groupoid defined by the right action of $G$ on $K$, is an equivalence of categories, we have that the restriction map $$\psi : C^*(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*})) \to C^*(A, \ensuremath{{\mathbb C}^*}), \ \ \psi(f)(a_1,...,a_q):= f(1_K;a_1,...,a_q)$$ is a morphism of complexes which induces an isomorphism in cohomology $$\widetilde{\psi}: H^*(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*})) \stackrel{\cong}{\to} H^*(A, \ensuremath{{\mathbb C}^*}).$$ The inverse map can be constructed at the level of cochains as follows. \begin{lemma} \label{lemma varphi} The map $\varphi: C^q(A, \ensuremath{{\mathbb C}^*}) \to C^q(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))$ $$\varphi(\alpha)(k;g_1,...,g_q):= \alpha(\kappa_{k,g_1}, \kappa_{k {\vartriangleleft} g_1,g_2},...,\kappa_{k{\vartriangleleft} g_1g_2...g_{q-1},g_q})$$ defines a map of complexes inducing an isomorphism in cohomology $\widetilde{\varphi}: H^*(A, \ensuremath{{\mathbb C}^*}) \stackrel{\cong}{\to} H^*(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))$ which is the inverse of the map $\widetilde{\psi}$. 
\end{lemma} \begin{proof} On the one hand we have \begin{align*} \delta_G \varphi(\alpha)(k;g_1,...,g_p) = & \varphi(\alpha)(k {\vartriangleleft} g_1;g_2,...,g_q) \prod_{i=1}^{q-1} \varphi(\alpha)(k;g_1 ,..,g_ig_{i+1} ,...,g_q)^{(-1)^i}\\ & \varphi(\alpha)(k; g_1,...,g_{q-1})^{(-1)^{q}}\\ =& \alpha(\kappa_{k {\vartriangleleft} g_1,g_2}, \kappa_{k {\vartriangleleft} g_1g_2,g_3},..., \kappa_{k {\vartriangleleft} g_1...g_{q-1}, g_q})\\ & \prod_{i=1}^{q-1} \alpha(\kappa_{k,g_1},\kappa_{k {\vartriangleleft} g_1,g_2} ,...,\kappa_{k {\vartriangleleft} g_1...g_{i-1},g_ig_{i+1}},..., \kappa_{k {\vartriangleleft} g_1...g_{q-1},g_q})^{(-1)^i}\\ & \alpha((\kappa_{k,g_1},\kappa_{k {\vartriangleleft} g_1,g_2} ,...,\kappa_{k {\vartriangleleft} g_1...g_{q-2},g_{q-1}})^{(-1)^{q}} \end{align*} and on the other \begin{align*} \varphi( \delta_G & \alpha)(k;g_1,...,g_p) = \delta_G \alpha (\kappa_{k,g_1}, \kappa_{k {\vartriangleleft} g_1,g_2},...,\kappa_{k{\vartriangleleft} g_1g_2,...,g_{q-1},g_q})\\ =& \alpha(\kappa_{k {\vartriangleleft} g_1,g_2}, \kappa_{k {\vartriangleleft} g_1g_2,g_3},..., \kappa_{k {\vartriangleleft} g_1...g_{q-1}, g_q})\\ & \prod_{i=1}^{q-1} \alpha(\kappa_{k,g_1},\kappa_{k {\vartriangleleft} g_1,g_2} ,..., \kappa_{k {\vartriangleleft} g_1...g_{i-1},g_i}\kappa_{k {\vartriangleleft} g_1...g_{i-1}g_i,g_{i+1}},..., \kappa_{k {\vartriangleleft} g_1...g_{q-1},g_q})^{(-1)^i}\\ & \alpha((\kappa_{k,g_1},\kappa_{k {\vartriangleleft} g_1,g_2} ,...,\kappa_{k {\vartriangleleft} g_1...g_{q-2},g_{q-1}})^{(-1)^{q}}. \end{align*} The equality $\delta_G \varphi(\alpha)=\varphi( \delta_G \alpha)$ follows from the identity $$\kappa_{k {\vartriangleleft} g_1...g_{i-1},g_ig_{i+1}}=\kappa_{k {\vartriangleleft} g_1...g_{i-1},g_i}\kappa_{k {\vartriangleleft} g_1...g_{i-1}g_i,g_{i+1}}.$$ Finally, the composition $\psi(\varphi(\alpha))=\alpha$ follows from the equation $\kappa_{1,a}=a$ for $a \in A$. \end{proof} The complex $C^*(A, \ensuremath{{\mathbb C}^*})$ can be endowed with the structure of a right K-module by setting for $\alpha \in C^q(A, \ensuremath{{\mathbb C}^*})$ and $k \in K$ $$\alpha^k (a_1,...,a_q) := \alpha(u(k)a_1u(k)^{-1},...,u(k)a_qu(k)^{-1}),$$ and the complex $C^*(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))$ can also be endowed with the structure of a right $K$-module by setting for $f \in C^q(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))$ and $k \in K$ $$(f {\vartriangleleft} k)(x; g_1,...,g_q):= f(kx;g_1,...,g_q).$$ The map $\varphi$ fails to be a $K$-module map; nevertheless it induces a $K$-module map at the level of cohomology \begin{lemma} \label{lemma iso varphi} The isomorphism $\widetilde{\varphi}: H^*(A, \ensuremath{{\mathbb C}^*}) \stackrel{\cong}{\to} H^*(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))$ is an isomorphism of $K$-modules. \end{lemma} \begin{proof} Take $\alpha \in Z^q(A, \ensuremath{{\mathbb C}^*})$ and $k \in K$. We claim that $\psi(\varphi(\alpha) {\vartriangleleft} k)= \alpha^k$, and since $\psi(\varphi(\alpha^k))=\alpha^k$, we conclude that $\varphi(\alpha) {\vartriangleleft} k$ and $\varphi(\alpha^k)$ are cohomologous. 
Now, let us calculate \begin{align*} \psi(\varphi(\alpha) {\vartriangleleft} k)(a_1,...,a_q) =& (\varphi(\alpha) {\vartriangleleft} k)(1;a_1,...,a_q)\\ =& \varphi(\alpha) (k;a_1,...,a_q)\\ =& \alpha(\kappa_{k,a_1}, \kappa_{k {\vartriangleleft} a_1,a_2},...,\kappa_{k{\vartriangleleft} a_1a_2,...,a_{q-1},a_q})\\ =& \alpha(\kappa_{k,a_1}, \kappa_{k ,a_2},...,\kappa_{k ,a_q})\\ =& \alpha(u(k)a_1u(k)^{-1}, u(k)a_2u(k)^{-1},...,u(k)a_qu(k)^{-1})\\ =& \alpha^k(a_1,a_2,...,a_q); \end{align*} the lemma follows. \end{proof} \subsubsection{Double complex} \label{subsubsection Double complex} Since $C^*(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))$ is a complex of right $K$-modules, we can consider the complexes $$C^*(K, C^q(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*})))$$ with $C^p(K, C^q(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))$ consisting of normalized cochains $$ \{f : K^p \to C^q(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))| f(k_1,...,k_p)=1 \ \ \mbox{whenever some} \ \ k_i=1 \}$$ and whose differentials \begin{align*} (\delta_K f)(k_1,...,k_p) =& f(k_2,...,k_p) \prod_{i=1}^{p-1} f(k_1,...,k_ik_{i+1},...,k_p)^{(-1)^i}(f(k_1,...,k_{p-1}) {\vartriangleleft} k_p)^{(-1)^p}. \end{align*} These complexes assemble into a double complex $C^{p,q} :=C^p(K, C^q(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*})))$. Let us denote by $\mbox{\rm Tot\,}(C^{*,*})$ the total complex associated to the double complex and let $\delta_{Tot}:=\delta_K \oplus (\delta_G)^{(-1)^p}$ be its differential. We may filter the total complex by the degree of the $G$ cochains, thus obtaining a spectral sequence whose first page becomes $$E_1^{p,q}= H^p(K, C^q(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))).$$ Since the $K$-modules $C^q(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))$ are free $K$-modules, we conclude that the first page localizes on the $y$-axis, $$E_1^{0,q} = H^0(K, C^q(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))) = C^q(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))^K \cong C^q(G, \ensuremath{{\mathbb C}^*})$$ and $E_1^{p,q}=0$ for $p>0$. The spectral sequence collapses at the second page, with the only surviving elements on the $y$-axis $$E_2^{0,q}= H^q(G, \ensuremath{{\mathbb C}^*}).$$ Hence we have \begin{proposition} The inclusion of $K$-invariant cochains $$C^*(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))^K \hookrightarrow \mbox{\rm Tot\,}(C^*(K, C^*(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))))$$ is a quasi-isomorphism. Therefore the cohomology groups $$H^*(G,\ensuremath{{\mathbb C}^*}) \stackrel{\cong}{\to} H^*(\mbox{\rm Tot\,}(C^*(K, C^*(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))))$$ are canonically isomorphic. \end{proposition} Filtering the double complex by the degree of the $K$ cochains we obtain the Lyndon-Hochschild-Serre spectral sequence associated to the group extension $1 \to A \to G \to K \to 1$ (see \cite[\S 7.2]{Evens} and references therein). 
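
Before turning to the pages of this spectral sequence, we record a minimal computational sketch (an illustration only; the small example below, $G=\mathbb{Z}/4$ with $A=\{0,2\}$ and $K=A \backslash G\cong \mathbb{Z}/2$, is our own choice and is not taken from the text). It implements the differential \eqref{differential G} on normalized cochains in $C^*(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))$ and checks numerically that applying it twice yields the trivial cochain, as required for the columns of the double complex to be complexes.
\begin{verbatim}
# Minimal sketch (not from the text): the differential delta_G on
# C^*(G, Map(K, C^*)) for G = Z/4, A = {0,2}, K = A\G = Z/2, where the
# right action is k <| g = (k + g) mod 2.  We verify delta_G o delta_G = 1.
import math, cmath, random

nG, nK = 4, 2
G, K = range(nG), range(nK)

def act(k, g):                        # right action k <| g on K
    return (k + g) % nK

def mul(g1, g2):                      # group law of G = Z/4
    return (g1 + g2) % nG

def delta_G(f, q):
    """Send f in C^q(G, Map(K, C^*)) to delta_G f in C^{q+1}(G, Map(K, C^*))."""
    def df(k, gs):                    # gs = (g_1, ..., g_{q+1})
        val = f(act(k, gs[0]), gs[1:])
        for i in range(1, q + 1):
            merged = gs[:i - 1] + (mul(gs[i - 1], gs[i]),) + gs[i + 1:]
            val *= f(k, merged) ** ((-1) ** i)
        val *= f(k, gs[:q]) ** ((-1) ** (q + 1))
        return val
    return df

# a random normalized 1-cochain with values on the unit circle
table = {(k, g): cmath.exp(2j * math.pi * random.random()) for k in K for g in G}
for k in K:
    table[(k, 0)] = 1.0 + 0.0j        # normalization: f(k; 1_G) = 1
f1 = lambda k, gs: table[(k, gs[0])]

f2 = delta_G(f1, 1)                   # a 2-coboundary
f3 = delta_G(f2, 2)                   # must be identically 1
assert all(abs(f3(k, (g1, g2, g3)) - 1) < 1e-9
           for k in K for g1 in G for g2 in G for g3 in G)
print("delta_G o delta_G = 1 verified on a random normalized 1-cochain")
\end{verbatim}
We now return to the pages of the LHS spectral sequence.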
The first page becomes $$E_1^{p,q} = C^p(K, H^q(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*})))$$ and the second page becomes $$E_2^{p,q} = H^p(K, H^q(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))).$$ Since the projection map $\widetilde{\psi}: H^q(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*})) \stackrel{\cong}{\to} H^q(A,\ensuremath{{\mathbb C}^*})$ is an isomorphism of $K$-modules, we conclude \begin{proposition}[LHS spectral sequence] Filtering the total complex by the degree of the $K$-chains, we obtain a spectral sequence whose second page is $$E_2^{p,q} \cong H^p(K, H^q(A,\ensuremath{{\mathbb C}^*}))$$ and that converges to $H^*(G,\ensuremath{{\mathbb C}^*})$. \end{proposition} We will denote $d_i: E_i^{p,q} \to E_i^{p+i,q-i+1}$ the differentials of this spectral sequence. \subsection{Tensor categories} Following \cite[\S 1]{Bakalov-Kirillov}, a tensor category consist of $(\mathcal{C}, \otimes, 1_\mathcal{C}, \alpha, \lambda, \rho)$ where $\mathcal{C}$ is a category, $\otimes : \mathcal{C} \times \mathcal{C} \to \mathcal{C}$ is a bifunctor, $\alpha$ is the associativity constraint i.e. a functorial isomorphism $\alpha_{UVW}: (U \otimes V) \otimes W \stackrel{\sim}{\to} U \otimes (V \otimes W)$ of functors $\mathcal{C} \times \mathcal{C} \times \mathcal{C} \to \mathcal{C}$, $1_{\mathcal{C}} \in {\rm{Ob}}(\mathcal{C})$ is a unit element and $\lambda, \rho$ are functorial isomorphisms $\lambda_V : 1_\mathcal{C} \otimes V \stackrel{\sim}{\to} V$, $\rho_V : V \otimes 1_\mathcal{C} \stackrel{\sim}{\to} V$, satisfying the pentagon axiom $$\xymatrix{ & ((V_1 \otimes V_2) \otimes V_3) \otimes V_4 \ar[ld]_{\alpha_{1,2,3} \otimes id_4} \ar[dr]^{\alpha_{12,3,4}} & \\ (V_1 \otimes (V_2 \otimes V_3)) \otimes V_4 \ar[d]^{\alpha_{1,23,4}} && ( V_1 \otimes V_2) \otimes (V_3 \otimes V_4) \ar[d]^{\alpha_{1,2,34}}\\ V_1 \otimes ((V_2 \otimes V_3) \otimes V_4) \ar[rr]_{id_1 \otimes \alpha_{2,3,4}} && ( V_1 \otimes (V_2 \otimes (V_3 \otimes V_4))) }$$ and the triangle axiom $$\xymatrix{(V_1 \otimes 1_\mathcal{C}) \otimes V_2 \ar[dr]_{\rho \otimes id} \ar[rr]^\alpha && V_1 \otimes (1_\mathcal{C} \otimes V_2) \ar[ld]^{id \otimes \lambda}\\ & V_1 \otimes V_2. & }$$ \subsection{The fusion category $Vect(G, \omega)$} A fusion category over $\ensuremath{{\mathbb C}}$ is a rigid semi-simple $\ensuremath{{\mathbb C}}$-linear tensor category, with only finitely many isomorphism classes of simple objects, such that the endomorphisms of the unit object is $\ensuremath{{\mathbb C}}$ (see \cite{ENO}). For $G$ a finite group and a 3-cocycle $\omega \in Z^3(G, \ensuremath{{\mathbb C}^*})$, define the category $Vect(G, \omega)$ by setting that its objects are $G$-graded complex vector spaces $V = \bigoplus_{g \in G} V_g$, whose tensor product is $$(V \otimes W)_g : = \bigoplus_{hk=g}V_h \otimes W_k,$$ whose associativity constraint is $\alpha_{V_g,V_h,V_k} = \omega(g,h,k) \gamma$ with $\gamma((x \otimes y) \otimes z)= x \otimes (y \otimes z)$, and whose left and right unit isomorphisms are $\lambda_{V_g} =\omega(1,1,g)^{-1}id_{V_g}$ and $\rho_{V_g} =\omega(g,1,1)id_{V_g}$. The category $Vect(G, \omega)$ is a fusion category where the simple objects are the 1-dimensional vector spaces. We will assume that all group cochains are normalized, and hence the left and right unit isomorphisms become identities. For convenience we will work with a category ${\mathcal V}(G,\omega)$ which is {\it{skeletal}}, i.e. 
one on which isomorphic objects are equal, and which is equivalent to $Vect(G,\omega)$. The category ${\mathcal V}(G,\omega)$ has for simple objects the elements $g$ of the group $G$, the tensor product is $g \otimes h =gh$ and the associativity isomorphisms are $\omega(g,h,k)id_{ghk}$. A finite tensor category is called {\it{pointed}} if all its simple objects are invertible. It is thus easy to see that any finite tensor category which is pointed is equivalent to $Vect(G, \omega)$ for some finite group $G$ and some 3-cocycle $\omega$. \subsection{Module Categories} Following \cite[\S 2.3]{Ost}, a right {\it{module category}} over the tensor category $(\mathcal{C}, \otimes, 1_\mathcal{C}, \alpha, \lambda, \rho)$ consists of $(\mathcal{M}, \otimes , \mu, \tau)$ where $\mathcal{M}$ is a category, $\otimes : \mathcal{M} \times \mathcal{C} \to \mathcal{M}$ is an exact bifunctor, $\mu_{M,X,Y}: M \otimes (X\otimes Y) \stackrel{\sim}{\to} (M \otimes X) \otimes Y$ is a functorial associativity and $\tau_M: M \otimes 1_\mathcal{C} \stackrel{\sim}{\to} M$ is a unit isomorphism for any $X,Y \in \mathcal{C}$, $M \in \mathcal{M}$, satisfying the pentagon axiom \begin{align}\xymatrix{ & M \otimes ((X \otimes Y) \otimes Z) \ar[ld]_{id_M \otimes \alpha_{X,Y,Z}} \ar[dr]^{\mu_{M,X \otimes Y, Z}} & \\ M \otimes (X \otimes (Y \otimes Z)) \ar[d]^{\mu_{M,X,Y \otimes Z}} && ( M \otimes (X \otimes Y)) \otimes Z\ar[d]^{\mu_{M,X,Y} \otimes id_Z}\\ (M \otimes X) \otimes ( Y \otimes Z) \ar[rr]_{\mu_{M \otimes X,Y,Z}} && ((M \otimes X) \otimes Y) \otimes Z } \label{pentagon axiom module category}\end{align} and the triangle axiom \begin{align}\xymatrix{M \otimes (1_\mathcal{C} \otimes Y) \ar[dr]_{id_M \otimes \lambda_Y} \ar[rr]^{\mu_{M, 1_\mathcal{C}, Y}} && (M \otimes 1_\mathcal{C}) \otimes Y \ar[ld]^{\tau_M \otimes id_Y}\\ & M \otimes Y. & }\label{triangle axiom module category}\end{align} A {\it{module functor}} $(F,\gamma): (\mathcal{M}_1 , \mu^1, \tau^1)\to (\mathcal{M}_2 , \mu^2, \tau^2)$ between two module categories consist of a functor $F : \mathcal{M}_1 \to \mathcal{M}_2$ and a functorial isomorphism $\gamma_{M,X}: F(M \otimes X) \to F(M)\otimes X$ for any $X \in \mathcal{C}$, $M \in \mathcal{M}$, satisfying the pentagon axiom $$\xymatrix{ &F( M \otimes (X \otimes Y)) \ar[ld]_{F(\mu^1_{M,X,Y})} \ar[dr]^{\gamma_{M,X \otimes Y}} & \\ F(( M \otimes X) \otimes Y) \ar[d]^{\gamma_{M \otimes X, Y}} &&F( M )\otimes (X \otimes Y)\ar[d]^{\mu^2_{F(M),X,Y} }\\ F( M \otimes X) \otimes Y \ar[rr]_{\gamma_{M, X \otimes id_Y}} && (F( M ) \otimes X) \otimes Y }$$ and the triangle axiom $$\xymatrix{F(M \otimes 1_\mathcal{C}) \ar[dr]_{\gamma_{M, 1_\mathcal{C}}} \ar[rr]^{F(\tau^1_M)} && F(M) \\ & F(M) \otimes 1_\mathcal{C}. \ar[ru]_{\tau^1_{F(M)}} & }$$ Two module categories $\mathcal{M}_1$ and $\mathcal{M}_2$ over $\mathcal{C}$ are {\it{equivalent}} if there exist a module functor between the two which is moreover an equivalence of categories. The {\it{direct sum}} $\mathcal{M}_1 \oplus \mathcal{M}_2$ is the module category with the obvious structure. A module category is {\it{indecomposable}} if it is not equivalent to the direct sum of two non-trivial module categories. A {\it{natural module transformation}} $\eta: (F^1, \gamma^1) \to (F^2, \gamma^2)$ consist of a natural transformation $\eta: F^1 \to F^2$ such that the square $$\xymatrix{ F^1(M \otimes X) \ar[r]^{\eta_{M \otimes X }} \ar[d]_{ \gamma^1_{M, X}} & F^2(M \otimes X) \ar[d]^{\gamma^2_{M,X}}\\ F^1(M) \otimes X \ar[r]_{\eta_M \otimes id_X} & F^2(M) \otimes X. 
}$$ commutes for all $M \in \mathcal{M}$ and $X \in \mathcal{C}$. \subsection{Indecomposable module categories over ${\mathcal V}(G, \omega)$} \label{subsection indecomposable module categories} Let $\mathcal{M}$ be a skeletal right module category over ${\mathcal V}(G, \omega)$. The set of simple objects of $\mathcal{M}$ is a transitive right $G$-set and therefore it can be identified with the coset $K :=A \backslash G$ for $A$ a subgroup of $G$. The isomorphisms $\mu_{k,g_1,g_2}$ for $k \in K$ and $g_1,g_2 \in G$ are scalars, and we can assemble these scalars as an element $$\mu \in C^2(G, \ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*})), \ \ \mu(k;g_1,g_2):= \mu_{k,g_1,g_2}.$$ The pentagon axiom \eqref{pentagon axiom module category} translates into the equation $$\omega(g_1,g_2,g_3) \mu(k;g_1,g_2g_3) \mu(k {\vartriangleleft} g_1; g_2, g_3)= \mu(k; g_1g_2,g_3)\mu(k;g_1,g_2),$$ which in view of the definition of the differential $\delta_G$ in \eqref{differential G} becomes \begin{align} \label{delta mu = omega} \delta_G \mu^{-1} = \pi^*\omega \end{align} where $\pi^* \omega \in C^3(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))^K$ is the $K$-invariant cocycle defined by $\omega$, i.e. $$\pi^* \omega (k;g_1,g_2,g_3) : = \omega (g_1,g_2,g_3).$$ Since $\mu$ is normalized and the unit constraint in ${\mathcal V}(G, \omega)$ is trivial, we have that the triangle axiom \eqref{triangle axiom module category} implies that the unit constraint in $\mathcal{M}$ is trivial. Denote this skeletal module category $\mathcal{M} = \mathcal{M}(A \backslash G, \mu)$. Note that two ${\mathcal V}(G, \omega)$-module categories $\mathcal{M}_1 = \mathcal{M}(A_1 \backslash G, \mu_1)$ and $\mathcal{M}_2 = \mathcal{M}(A_2 \backslash G, \mu_2)$ are equivalent if and only if there exist a right $G$-equivariant isomorphism $F: A_1 \backslash G \stackrel{\cong}{\to} A_2 \backslash G$ and an element $\gamma \in C^1(G, \ensuremath{{\mathrm{Map}}}(A_1 \backslash G, \ensuremath{{\mathbb C}^*}))$ such that $$\gamma(A_1g ; g_1g_2) \mu_2(F(A_1g);g_1,g_2)= \mu_1(A_1g;g_1,g_2) \gamma(A_1gg_1;g_2) \gamma(A_1g;g_1).$$ This information implies that $A_1$ and $A_2$ are conjugate subgroups of $G$ and that $$ \delta_G \gamma = \frac{F^*\mu_2}{\mu_1}.$$ In the case that $A=A_1=A_2$, the $G$-equivariant isomorphisms are parameterized by the elements of the group $A \backslash N_G(A)$, and the equation $\delta_G \gamma = \frac{F^*\mu_2}{\mu_1}$ implies that $\frac{F^*\mu_2}{\mu_1}$ is trivial in $H^2(G,\ensuremath{{\mathrm{Map}}}(A \backslash G, \ensuremath{{\mathbb C}^*}))$. Since we know that $\widetilde{\psi}:H^2(G,\ensuremath{{\mathrm{Map}}}(A \backslash G, \ensuremath{{\mathbb C}^*})) \stackrel{\cong}{\to} H^2(A, \ensuremath{{\mathbb C}^*})$ is an isomorphism, we can conclude that the isomorphism classes of module categories over ${\mathcal V}(G, \omega)$ may be parameterized (in a non-canonical manner) by pairs $([A],[\psi(\mu)])$ where $[A]$ is a conjugacy class of subgroups of $G$, and $[\psi(\mu)]$ is a representative of a cohomology class in the group of invariants $H^2(A, \ensuremath{{\mathbb C}^*})/{N_G(A)}$. \subsection{Dual category} \label{subsection Dual category} Let $\mathcal{C}$ be a tensor category and $\mathcal{M}$ an indecomposable right module category. The dual category $\mathcal{C}^*_\mathcal{M} := \operatorname{Fun}_\mathcal{C}(\mathcal{M},\mathcal{M})$ is the category whose objects are module functors from $\mathcal{M}$ to itself and whose morphisms are natural module transformations. 
The category $\mathcal{C}_\mathcal{M}^*$ becomes a tensor category by composition of functors, namely for $(\gamma^1, F_1), (\gamma^2, F_2) \in \mbox{\rm Obj\,}(\mathcal{C}_\mathcal{M}^*)$ where $\gamma^1, \gamma^2$ represent the module structures on the functors $F_1$ and $F_2$ respectively, we define the tensor structure by $(\gamma^1, F_1)\otimes (\gamma^2, F_2):= (\gamma, F_1 \circ F_2)$ where the module structure $\gamma$ is defined by $\gamma_{M,X}:=\gamma^1_{F_2(M),X} \circ F_1(\gamma^2_{M,X})$ for $M \in \mathcal{M}$ and $X \in \mathcal{C}$. For two morphisms $\eta : (\gamma^1, F_1) \to (\gamma^2, F_2)$ and $\eta' : (\gamma'^1, F'_1) \to (\gamma'^2, F'_2)$ in $\mathcal{C}_\mathcal{M}^*$ their tensor product is $(\eta \otimes \eta' )(M): = \eta_{F_2'(M)} \circ F_1(\eta'_M)$. Whenever $\mathcal{C}$ and $\mathcal{M}$ are semisimple, the dual category $\mathcal{C}_\mathcal{M}^*$ is semisimple \cite[\S 2.2]{Ost-2}. Moreover, since $\mathcal{M}$ is itself a left module category over $\mathcal{C}_\mathcal{M}^*$ it has been shown in \cite[Cor. 4.1]{Ost} that the double dual is tensor equivalent to the original category, i.e. $(\mathcal{C}^*_\mathcal{M})^*_\mathcal{M} \simeq \mathcal{C}$. Furthermore, the module categories of $\mathcal{C}$ and of $\mathcal{C}_\mathcal{M}^*$ are in canonical bijection \cite[Prop. 2.1]{Ost-2} by the following maps. For $\mathcal{M}_1$ a module category over $\mathcal{C}$, the category $\operatorname{Fun}_{\mathcal{C}}(\mathcal{M}_1,\mathcal{M})$ of module functors from $\mathcal{M}_1$ to $\mathcal{M}$ is a left module category of $\mathcal{C}_\mathcal{M}^*=\operatorname{Fun}_{\mathcal{C}}(\mathcal{M},\mathcal{M})$ via the composition of functors. Conversely, if $\mathcal{M}_2$ is a left module category over $\mathcal{C}_\mathcal{M}^*$, then $\operatorname{Fun}_{\mathcal{C}_\mathcal{M}^*}(\mathcal{M},\mathcal{M}_2)$ is a right module category over $\operatorname{Fun}_{\mathcal{C}_\mathcal{M}^*}(\mathcal{M},\mathcal{M})=(\mathcal{C}_\mathcal{M}^*)_\mathcal{M}^* \simeq \mathcal{C}$ via composition of functors. These maps are inverse from each other. \subsection{Center of a tensor category} The {\it{center}} ${\mathcal Z}(\mathcal{C})$ of the tensor category $\mathcal{C}$ is the category whose objects are pairs $(X, \eta)$ where $X$ is an object in $\mathcal{C}$ and $\eta$ is a functorial set of isomorphisms $\eta_Y : X \otimes Y \to Y \otimes X$ such that the hexagon diagram $$\xymatrix{ (X \otimes Y) \otimes Z \ar[r]^{\alpha} \ar[d]^{\eta_Y \otimes 1}& X \otimes (Y \otimes Z) \ar[r]^{\eta_{Y \otimes Z}} & (Y \otimes Z) \otimes X \ar[d]^\alpha \\ (Y \otimes X) \otimes Z \ar[r]^\alpha & Y \otimes (X \otimes Z) \ar[r]^{1 \otimes \eta_{Z}} & Y \otimes ( Z \otimes X) }$$ and the triangle diagram $$\xymatrix{ X \otimes 1_\mathcal{C} \ar[rr]^{\eta_{1_\mathcal{C}}} \ar[rd]_\rho && 1_\mathcal{C} \otimes X \ar[dl]^\lambda\\ & X & }$$ are commutative. A morphism $f:(X, \eta) \to (Y , \nu)$ consists of a morphism $f:X \to Y$ for which the diagram $$\xymatrix{ X \otimes Z \ar[r]^{\eta_Z} \ar[d]_{f \otimes 1} & Z \otimes X \ar[d]^{1 \otimes f} \\ Y \otimes Z \ar[r]_{\nu_Z} & Z \otimes Y }$$ commutes for any object $Z$ in $\mathcal{C}$. 
The tensor structure is defined as $(X , \eta) \otimes (Y, \nu): = (X \otimes Y, \gamma)$ where $\gamma_Z$ is defined as the composition $$\xymatrix{(X \otimes Y) \otimes Z \ar[r]^\alpha & X \otimes (Y \otimes Z) \ar[r]^{1 \otimes \nu_Z} & X \otimes (Z \otimes Y ) \ar[dl]_{\alpha^{-1}} & \\ & (X \otimes Z ) \otimes Y \ar[r]^{\eta_Z \otimes 1} & (Z \otimes X) \otimes Y \ar[r]^{\alpha} & Z \otimes (X \otimes Y). }$$ The center ${\mathcal Z}(\mathcal{C})$ is moreover {\it{braided}}, and the braiding for the pair $(X, \eta), (Y , \nu)$ is precisely the map $\eta_Y$. The center ${\mathcal Z}(Vect(G, \omega))$ of the tensor category $Vect(G, \omega)$ contains the information necessary for constructing the quasi-Hopf algebra known as the Twisted Drinfeld Double $D^\omega(G)$ of the group $G$ twisted by $\omega$ (see \cite[\S 3.2]{Dijkgraaf}). \subsection{Weak Morita equivalence of tensor categories} \label{subsection Weak Morita equivalence of tensor categories} Two tensor categories $\mathcal{C}$ and ${\mathcal D}$ are {\it{weakly Morita equivalent}} if there exists an indecomposable right module category $\mathcal{M}$ over $\mathcal{C}$ such that $\mathcal{C}_\mathcal{M}^*$ and ${\mathcal D}$ are tensor equivalent \cite[Def 4.2]{MugerI}. In \cite[Prop. 4.6]{MugerI} it is shown that weak Morita equivalence is an equivalence relation, and in \cite[Thm. 3.1]{W-GT-fusion-cat} it is shown that two tensor categories are weakly Morita equivalent if and only if their centers are braided equivalent. In particular, for $\mathcal{M}$ an indecomposable module category over $\mathcal{C}$ there is a canonical equivalence of braided tensor categories ${\mathcal Z}(\mathcal{C}) \simeq {\mathcal Z}(\mathcal{C}_\mathcal{M}^*)$ \cite[Prop. 2.2]{Ost-2}. \section{The dual of ${\mathcal V}(G, \omega)$ with respect to $\mathcal{M}(A \backslash G, \mu)$} Let us consider the tensor category $\mathcal{C} = {\mathcal V}(G, \omega)$ and the right module category $\mathcal{M}=\mathcal{M}(A \backslash G, \mu)$ described in \S \ref{subsection indecomposable module categories}. In this section we will review the main results of \cite{Naidu}, where explicit conditions are stated under which the dual category $\mathcal{C}_\mathcal{M}^*$ is pointed. For the sake of completeness and clarity we will review the constructions done in \S 3 and \S 4 of \cite{Naidu} and we will reinterpret the equations given there in the terminology that we have set up in \S \ref{subsection Abelian group extensions} and \S \ref{subsection Cohomology of groups and the Lyndon-Hochschild-Serre spectral sequence}. \subsection{Conditions for $\mathcal{C}_\mathcal{M}^*$ to be pointed} Let us set up some notation for this section: let $K := A \backslash G$; let $u : K \to G$ be a section of the projection $p:G \to K$, i.e. $p \circ u = \mathrm{id}_K$, normalized so that $u(p(1_G))=1_G$; let $\kappa: K \times G \to A$ be defined by $u(k)g = \kappa_{k,g} u(k {\vartriangleleft} g)$; and let $K^A$ denote the set of elements of $K$ fixed under conjugation by elements of $A$. The module category $\mathcal{M}(A \backslash G, \mu)$ is the skeletal category whose simple objects are the elements of $K=A \backslash G$, whose module structure is $k \otimes g := k {\vartriangleleft} g$ for $k\in K$ and $g \in G$, and whose associativity constraint $\mu$ satisfies $\delta_G \mu^{-1} = \pi^* \omega$; see \eqref{delta mu = omega}. In what follows we will focus on parametrizing the invertible objects of $\mathcal{C}_\mathcal{M}^*$.
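For later use let us record the explicit form of $\kappa$ in the split case; this is only a restatement, under the product convention $(a_1,x_1)(a_2,x_2)=(a_1\, {}^{x_1}a_2\, F(x_1,x_2),\, x_1x_2)$ and the normalization of $F$ used below, of formulas that appear later in the computations. If $G = A \rtimes_F K$ and the section is chosen as $u(y)=(1,y)$, as at the end of \S \ref{subsection Abelian group extensions}, then for $g=(a,x)$ the defining relation $u(k)g = \kappa_{k,g}\, u(k {\vartriangleleft} g)$ reads $$(1,k)(a,x) = \bigl({}^{k}a\, F(k,x),\, kx\bigr) = \bigl({}^{k}a\, F(k,x),\, 1\bigr)\,(1,kx),$$ so that $\kappa_{k,(a,x)} = {}^{k}a\, F(k,x)$.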
Following \cite[Lemma 3.2]{Naidu}, any invertible module functor in $\mathcal{C}_\mathcal{M}^*$ is of the form $(F_y, \gamma)$ where the functor $F_y: \mathcal{M} \to \mathcal{M}$ is the one that extends the $G$-equivariant map $f_y : K \to K$, $f_y(k)=p(u(y)u(k))$ for $y \in K^A$, and $\gamma$ is a functorial isomorphism $\gamma_{k,g} : F_y(k \otimes g) \stackrel{\cong}{\to} F_y(k) \otimes g$ that satisfies the pentagon axiom. Writing $\gamma_{k,g} := \gamma(k;g) id_{p(u(y)u(k {\vartriangleleft} g))}$ for $\gamma \in C^1(G, \ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))$ we have that the pentagon axiom of a module functor translates into the equation $$ \mu(k;g_1,g_2) \gamma(k {\vartriangleleft} g_1; g_2) \gamma(k;g_1) = \gamma(k; g_1g_2) \mu(f_y(k); g_1,g_2)$$ which can also be written as $$\delta_G \gamma (k; g_1,g_2) = \frac{\mu(f_y(k); g_1,g_2)}{\mu(k;g_1,g_2) }.$$ The inverse of $(F_y, \gamma)$ is the module functor $(F_{p(u(y)^{-1})}, \bar{\gamma})$ with $$\bar{\gamma}(k; g):= \gamma(p(u(y)^{-1}u(k))^{-1}; g)^{-1}.$$ Defining for each $ y \in K^A$ the set $$Fun_y := \left\{ \gamma \in C^1(G, \ensuremath{{\mathrm{Map}}}(K,\ensuremath{{\mathbb C}^*})) | \delta_G \gamma(k; g_1,g_2) =\frac{\mu(f_y(k); g_1,g_2)}{\mu(k;g_1,g_2) } \mbox{ for all } k \in K, \ g_1, g_2 \in G \right\},$$ we have that the invertible objects of $\mathcal{C}_\mathcal{M}^*$ are precisely the module functors $(F_y, \gamma)$ where $y \in K^A$ and $\gamma \in Fun_y$. To simplify the notation we will denote such a module functor by the pair $(y,\gamma)$. Two invertible module functors $(y_1, \gamma^1)$ and $(y_2, \gamma^2)$ in $\mathcal{C}_\mathcal{M}^*$ are isomorphic if and only if $y_1=y_2$ and there exists a natural transformation, parameterized by a map $\eta \in C^0(G,\ensuremath{{\mathrm{Map}}}(K,\ensuremath{{\mathbb C}^*}))$, satisfying the equation \begin{align} \gamma^1(k;g) \eta(k)=\eta(k {\vartriangleleft} g) \gamma^2(k;g) \label{equation isomorphic module functors} \end{align} for all $k \in K$ and $g \in G$. These equations can be rewritten as the single equation $$\delta_G \eta = \frac{\gamma^2}{\gamma^1}$$ in $C^1(G, \ensuremath{{\mathrm{Map}}}(K,\ensuremath{{\mathbb C}^*}))$. Therefore for each $y \in K^A$ we may define an equivalence relation on the elements $\gamma^1, \gamma^2 \in Fun_y$ by setting $\gamma^2 \simeq \gamma^1$ whenever there exists $\eta$ such that $\delta_G \eta = \frac{\gamma^2}{\gamma^1}$; denote by $\overline{Fun}_y$ the associated set of equivalence classes. For each $y \in K^A$ let us choose an element $\gamma_y \in Fun_y$, and note that the maps $$Fun_y \to Z^1(G, \ensuremath{{\mathrm{Map}}}(K,\ensuremath{{\mathbb C}^*})), \ \beta \mapsto \frac{\beta}{\gamma_y}, \ \ \ \ Z^1(G, \ensuremath{{\mathrm{Map}}}(K,\ensuremath{{\mathbb C}^*})) \to Fun_y, \ \epsilon \mapsto \epsilon \gamma_y$$ are inverse to each other. Therefore we obtain bijections $$\overline{Fun}_y \cong H^1(G, \ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*})) \cong H^1(A, \ensuremath{{\mathbb C}^*}) ={{\mathbb{A}}}$$ which are realized by the maps \begin{align} \zeta_y : &{{\mathbb{A}}} \to Fun_y, & \zeta_y(\rho):=& \gamma_y \varphi(\rho) \nonumber\\ \theta_y: &Fun_y \to {{\mathbb{A}}}, & \theta_y(\beta) :=& \psi(\beta / \gamma_y). \label{maps from widehatA to Funy} \end{align} Recall from \cite[Def. 2.2]{ENO} that the {\it{global dimension}} $dim(\mathcal{C})$ of a fusion category $\mathcal{C}$ is the sum of the squared norms of its simple objects, and note that by \cite[Thm.
2.15]{ENO} we have $dim(\mathcal{C}_\mathcal{M}^*)=dim(\mathcal{C})$ whenever $\mathcal{C}$ is a fusion category and $\mathcal{M}$ is an indecomposable module category over $\mathcal{C}$. Let us suppose now that the dual category $\mathcal{C}_\mathcal{M}^*={\mathcal V}(G,\omega)_{\mathcal{M}(A \backslash G, \mu)}^*$ is pointed. Therefore its global dimension $$dim(\mathcal{C}_\mathcal{M}^*)= | {{\mathbb{A}}}||K^A|$$ must be equal to the number of isomorphism classes of invertible objects, since in a pointed category all simple objects are invertible. On the other hand, by \cite[Thm. 2.15]{ENO} we have that $dim(\mathcal{C}_\mathcal{M}^*)=dim(\mathcal{C})$ and $dim(\mathcal{C})=|G|$. Therefore in order for the category $\mathcal{C}_\mathcal{M}^*$ to be pointed it is necessary that $| {{\mathbb{A}}}||K^A|=|G|$. Since $|G|=|A||K|$, $|{{\mathbb{A}}}| \leq |A| $ and $|K^A| \leq |K|$, the equality holds if and only if $A$ is abelian (so that $|{{\mathbb{A}}}|= |A|$) and $A$ is normal in $G$ (equivalently, $K^A=K$). On the other hand, if $A$ is abelian and normal in $G$, then the number of isomorphism classes of invertible objects in $\mathcal{C}_\mathcal{M}^*$ is $|{{\mathbb{A}}}||K|=|G|$. Since $dim(\mathcal{C}_\mathcal{M}^*)=dim(\mathcal{C})=|G|$, the isomorphism classes of invertible objects exhaust the isomorphism classes of simple objects, and therefore $\mathcal{C}_\mathcal{M}^*$ must be pointed. Summarizing, we have \begin{theorem} \cite[Thm 3.4]{Naidu} \label{Theorem conditions for pointed} The tensor category $\mathcal{C}_\mathcal{M}^*={\mathcal V}(G,\omega)_{\mathcal{M}(A \backslash G, \mu)}^*$ is pointed if and only if $A$ is abelian and normal in $G$ and the cohomology class $[\frac{\mu {\vartriangleleft} y}{\mu}]$ is trivial in $H^2(G, \ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))$ for all $y \in K$. \end{theorem} Note that since $A$ is normal in $G$, we may use the notation introduced in \S \ref{subsection Cohomology of groups and the Lyndon-Hochschild-Serre spectral sequence} so that $\mu(f_y(k);g_1,g_2)=\mu(yk;g_1,g_2)= (\mu {\vartriangleleft} y)(k;g_1,g_2).$ Since we have that $\delta_G \mu^{-1} = \pi^* \omega = \delta_G (\mu^{-1} {\vartriangleleft} y)$, the quotient $\frac{\mu {\vartriangleleft} y}{\mu}$ defines a cocycle in $Z^2(G, \ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))$. The equation $\delta_G \gamma_y = \frac{ \mu {\vartriangleleft} y}{\mu }$ says precisely that this quotient is trivial in cohomology. \subsection{The Grothendieck ring of the pointed category $\mathcal{C}_\mathcal{M}^*$} \label{Subsection The Grothendieck ring of the pointed category} From now on we will assume that the dual category $\mathcal{C}_\mathcal{M}^*$ is pointed. Therefore we have that $A$ is abelian and normal in $G$ and that we can choose elements $\gamma_y \in C^1(G, \ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))$ for each $y \in K$ such that $\delta_G \gamma_y = \frac{\mu {\vartriangleleft} y}{\mu}$. The Grothendieck ring $K_0(\mathcal{C}_\mathcal{M}^*)$ of the category $\mathcal{C}_\mathcal{M}^*$ is the ring associated to the semi-ring whose elements are the isomorphism classes of objects and whose product is the one induced by the tensor product. Since $\mathcal{C}_\mathcal{M}^*$ is pointed, $K_0(\mathcal{C}_\mathcal{M}^*)$ is isomorphic to the group ring $\ensuremath{{\mathbb Z}}[\Lambda]$ for some finite group $\Lambda$. In this section we will recall the construction of this isomorphism carried out in \cite[Thm. 4.5]{Naidu}.
The tensor product of two invertible elements $(y_1,\gamma^1)$, $(y_2,\gamma^2)$ in $\mathcal{C}_\mathcal{M}^*$ as defined in \S \ref{subsection Dual category} is $$(y_1,\gamma^1) \otimes (y_2,\gamma^2) = (y_1y_2,(\gamma^1 {\vartriangleleft} y_2)\gamma^2).$$ This tensor product defines a group structure on the set of isomorphism classes of invertible objects $$\Lambda:= \bigcup_{y \in K } \{y\} \times \overline{Fun}_y $$ by the equation $(y_1,[\gamma^1]) \star (y_2,[\gamma^2]) = (y_1y_2, [(\gamma^1 {\vartriangleleft} y_2)\gamma^2])$ where $[\gamma]$ denotes the equivalence class of $\gamma$ in $Fun_y$. Define the element $\gamma \in C^1(K, C^1(G, \ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*})))$ by the equation $$\gamma(y):=\gamma_y$$ and note that the equations $\delta_G \gamma_y = \frac{\mu {\vartriangleleft} y}{\mu}$ are equivalent to the equation $$\delta_G \gamma = \delta_K \mu.$$ Define the element $\tilde{\nu} := \delta_K \gamma$, i.e. $\tilde{\nu}(y_1,y_2)= \frac{\gamma(y_2) \gamma(y_1){\vartriangleleft} y_2}{\gamma(y_1y_2)}$, and note that $$\delta_K \tilde{\nu} = \delta_K^2 \gamma = 1 \ \ \mbox{and} \ \ \delta_G \tilde{\nu} = \delta_G \delta_K \gamma= \delta_K \delta_G \gamma = \delta_K^2 \mu=1.$$ Hence $\tilde{\nu} \in Z^2(K, Z^1(G,\ensuremath{{\mathrm{Map}}}(K,\ensuremath{{\mathbb C}^*})))$ and we may define \begin{align} \label{definition nu}{\nu} := \psi \circ \tilde{\nu} \in Z^2(K, Z^1(A,\ensuremath{{\mathbb C}^*}))= Z^2(K, {{\mathbb{A}}}) \end{align} thus having ${\nu}(y_1,y_2)(a):= \tilde{\nu}(y_1,y_2)(1;a)$. With this 2-cocycle ${\nu}$ we may define the crossed product $K \ltimes_{{\nu}} {{\mathbb{A}}}$ by setting on pairs of elements of the set $K \times {{\mathbb{A}}}$ $$(y_1, \rho_1) \cdot (y_2, \rho_2) := (y_1y_2, \rho_1^{y_2} \rho_2 {{\nu}}(y_1,y_2)).$$ Using the notation of \eqref{maps from widehatA to Funy} we have \begin{theorem} \cite[Thm. 4.5]{Naidu} The map $$T : K \ltimes_{{\nu}}{{\mathbb{A}}} \to \Lambda, \ \ \ T((y, \rho))= (y,[\zeta_y(\rho)])$$ is an isomorphism of groups. Hence $K_0(\mathcal{C}_\mathcal{M}^*) \cong \ensuremath{{\mathbb Z}} [ K \ltimes_{{\nu}} {{\mathbb{A}}}]$. \end{theorem} \begin{proof}On the one hand we have \begin{align*} T((y_1, \rho_1) \cdot (y_2,\rho_2))=&T( (y_1y_2, \rho_1^{y_2} \rho_2 {{\nu}}(y_1,y_2))) \\ = &(y_1y_2,[\zeta_{y_1y_2}(\rho_1^{y_2} \rho_2 {{\nu}}(y_1,y_2))]) \end{align*} and on the other \begin{align*} T((y_1, \rho_1)) \star T((y_2,\rho_2)) =& (y_1,[\zeta_{y_1}(\rho_1)]) \star (y_2,[\zeta_{y_2}(\rho_2)]) \\ =& (y_1y_2, [( \zeta_{y_1}(\rho_1) {\vartriangleleft} y_2) \zeta_{y_2}(\rho_2)]) \end{align*} The result follows if we check the equality $$\theta_{y_1y_2}((\zeta_{y_1}(\rho_1) {\vartriangleleft} y_2) \zeta_{y_2}(\rho_2)) = \rho_1^{y_2} \rho_2 {{\nu}}(y_1,y_2)$$ since this implies that $\zeta_{y_1y_2}((\rho_1 {\vartriangleleft} y_2) \rho_2 {{\nu}}(y_1,y_2))$ and $(\zeta_{y_1}(\rho_1) {\vartriangleleft} y_2) \zeta_{y_2}(\rho_2)$ are cohomologous; hence we have \begin{align*} \theta_{y_1y_2}((\zeta_{y_1}(\rho_1) {\vartriangleleft} y_2) \zeta_{y_2}(\rho_2))(a) =& \frac{((\zeta_{y_1}(\rho_1) {\vartriangleleft} y_2) (1;a)) \zeta_{y_2}(\rho_2) (1;a)}{\gamma(y_1y_2)(1;a)}\\ =& \frac{(\gamma(y_1){\vartriangleleft} y_2 \varphi(\rho_1) {\vartriangleleft} y_2)(1;a) (\gamma(y_2) \varphi(\rho_2))(1;a)}{\gamma(y_1y_2)(1;a)}\\ =& \delta_K \gamma (y_1,y_2)(1;a) \rho_1^{y_2}(a) \rho_2(a)\\ =& ({{\nu}}(y_1,y_2) \rho_1^{y_2} \rho_2)(a). 
\end{align*} \end{proof} \subsection{A skeleton of the pointed category $\mathcal{C}_\mathcal{M}^*$} A skeleton $sk(\mathcal{C}_\mathcal{M}^*)$ of $\mathcal{C}_\mathcal{M}^*$ is a full subcategory of $\mathcal{C}_\mathcal{M}^*$ such that each object of $\mathcal{C}_\mathcal{M}^*$ is isomorphic to exactly one object in $sk(\mathcal{C}_\mathcal{M}^*)$. Let us choose as objects $$ob(sk(\mathcal{C}_\mathcal{M}^*)):= \{(y, \zeta_y(\rho)) | (y,\rho) \in K \ltimes_\nu {{\mathbb{A}}} \}$$ and define its tensor product $\bullet$ as the one induced by $\star$, i.e. $$ (y_1, \zeta_{y_1}(\rho_1)) \bullet (y_2,\zeta_{y_2}(\rho_2)) : = (y_1y_2, \zeta_{y_1y_2}({{\nu}}(y_1,y_2) \rho_1^{y_2}\rho_2)).$$ For each pair of objects, choose isomorphisms in $\mathcal{C}_\mathcal{M}^*$ \begin{align*} f((y_1, \zeta_{y_1}(\rho_1)), (y_2,\zeta_{y_2}(\rho_2))) : (y_1, \zeta_{y_1}(\rho_1)) \bullet (y_2,\zeta_{y_2}(\rho_2) )\stackrel{\sim}{\to} (y_1, \zeta_{y_1}(\rho_1)) \ensuremath{\otimes} (y_2,\zeta_{y_2}(\rho_2) ) \end{align*} which by equation \eqref{equation isomorphic module functors} satisfy \begin{align*} (( \zeta_{y_1}(\rho_1) {\vartriangleleft} y_2) \zeta_{y_2}(\rho_2)) (k;g) = \frac{ f((y_1, \zeta_{y_1}(\rho_1)), (y_2,\zeta_{y_2}(\rho_2)))(k {\vartriangleleft} g)}{ f((y_1, \zeta_{y_1}(\rho_1)), (y_2,\zeta_{y_2}(\rho_2)))(k)}&\\ \times \zeta_{y_1y_2}&({{\nu}}(y_1 ,y_2) \rho_1^{y_2}\rho_2)(k;g). \end{align*} The tensor product $\otimes$ in $\mathcal{C}_\mathcal{M}^*$ is associative since it is defined by the composition of functors, but the tensor product $\bullet$ in its skeleton $sk(\mathcal{C}_\mathcal{M}^*)$ may fail to be associative. The associativity constraint for $sk(\mathcal{C}_\mathcal{M}^*)$ is then \begin{align*} \widehat{\omega}'((y_1, \zeta_{y_1}(\rho_1)), (y_2,\zeta_{y_2}(\rho_2)),(y_3 &,\zeta_{y_3}(\rho_3))) \\ =& \frac{f((y_1, \zeta_{y_1}(\rho_1)), (y_2,\zeta_{y_2}(\rho_2))) \otimes Id_{(y_3,\zeta_{y_3}(\rho_3))}}{f((y_1, \zeta_{y_1}(\rho_1)), (y_2,\zeta_{y_2}(\rho_2))\bullet (y_3,\zeta_{y_3}(\rho_3)))} \\ &\times \frac{f((y_1, \zeta_{y_1}(\rho_1))\bullet (y_2,\zeta_{y_2}(\rho_2)),(y_3,\zeta_{y_3}(\rho_3)))}{ Id_{(y_1,\zeta_{y_1}(\rho_1))} \otimes f((y_2,\zeta_{y_2}(\rho_2)),(y_3,\zeta_{y_3}(\rho_3)))}. \end{align*} In \cite[Thm. 4.9]{Naidu} it is shown that $\widehat{\omega}'$ is $K$-invariant and moreover that it can be given in explicit form by the equation \begin{align*} \widehat{\omega}'((y_1, \zeta_{y_1}(\rho_1)), (y_2,\zeta_{y_2}(\rho_2)),(y_3 &,\zeta_{y_3}(\rho_3))) = \tilde{\nu}(y_1,y_2)(1;u(y_3)) \ \rho_1(\kappa_{y_2,u(y_3)}). \end{align*} Therefore we may define the 3-cocycle on $K \ltimes_{{\nu}} {{\mathbb{A}}}$ by the equation \begin{align*} \widehat{\omega}((y_1, \rho_1), (y_2,\rho_2),(y_3 ,\rho_3)) = \tilde{\nu}(y_1,y_2)(1;u(y_3)) \ \rho_1(\kappa_{y_2,u(y_3)}), \end{align*} and choosing $G = A \rtimes_F K$ and $u(y)=(1,y)$ as was done at the end of \S \ref{subsection Abelian group extensions}, the 3-cocycle on $K \ltimes_{{\nu}} {{\mathbb{A}}}$ becomes \begin{align} \label{definition widehat omega} \widehat{\omega}((y_1, \rho_1), (y_2,\rho_2),(y_3 ,\rho_3)) = \tilde{\nu}(y_1,y_2)(1;(1,y_3)) \ \rho_1(F(y_2,y_3)). \end{align} Therefore the skeleton $sk(\mathcal{C}_\mathcal{M}^*)$ of $\mathcal{C}_\mathcal{M}^*$ is isomorphic to ${\mathcal V}(K \ltimes_{{\nu}} {{\mathbb{A}}}, \widehat{\omega})$, which is equivalent to $Vect(K \ltimes_{{\nu}} {{\mathbb{A}}}, \widehat{\omega})$. Therefore we can conclude with \begin{theorem} \cite[Thm.
4.9]{Naidu} The fusion categories $\mathcal{C}_\mathcal{M}^*={\mathcal V}(G,\omega)_{\mathcal{M}(A \backslash G, \mu)}^*$ and $Vect(K \ltimes_{{\nu}} {{\mathbb{A}}}, \widehat{\omega})$ are equivalent. \end{theorem} Applying the results of \S \ref{subsection Weak Morita equivalence of tensor categories} we have \begin{cor} \label{corollary weak Morita equivalence} The categories $Vect( A \rtimes_F K, \omega)$ and $Vect(K \ltimes_{{\nu}} {{\mathbb{A}}}, \widehat{\omega})$ are weakly Morita equivalent. Hence their centers are canonically equivalent $${\mathcal Z}(Vect( A \rtimes_F K, \omega)) \simeq {\mathcal Z}(Vect(K \ltimes_{{\nu}} {{\mathbb{A}}}, \widehat{\omega}))$$ as braided tensor categories. \end{cor} \section{Weak Morita equivalence classes of group-theoretical tensor categories} We are interested in classifying group-theoretical tensor categories of a specific global dimension up to weak Morita equivalence. For this purpose we will fix the group $G= A \rtimes_F K$ with $A$ abelian and normal in $G$ and $F \in Z^2(K,A)$, and we will give an explicit description of the cocycles $\omega \in Z^3(A \rtimes_F K, \ensuremath{{\mathbb C}^*})$ and $\widehat{\omega} \in Z^3(K \ltimes_{{\nu}} {{\mathbb{A}}}, \ensuremath{{\mathbb C}^*})$ such that the tensor categories ${\mathcal V}( A \rtimes_F K, \omega)$ and ${\mathcal V}( K \ltimes_{{\nu}} {{\mathbb{A}}}, \widehat{\omega})$ are weakly Morita equivalent. \subsection{Description of $\omega$, $\mu$ and $\gamma$} \label{subsection omega nu} In Theorem \ref{Theorem conditions for pointed} and in \S \ref{Subsection The Grothendieck ring of the pointed category} we have seen the conditions needed for the tensor category $\mathcal{C}_\mathcal{M}^*={\mathcal V}(G,\omega)_{\mathcal{M}(A \backslash G, \mu)}^*$ to be pointed. In particular we have seen that we need the existence of $\gamma \in C^1(K, C^1(G, \ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*})))$ such that $$\delta_G \gamma= \delta_K \mu.$$ Since we also have that $\delta_G \mu^{-1}= \pi^* \omega$, we obtain the following lemma. \begin{lemma} \label{lemma omega cohomologous to tilde nu} The cocycles $\pi^* \omega$ and $\tilde{\nu}$ are cohomologous in $\mbox{\rm Tot\,}(C^*(K, C^*(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))))$. \end{lemma} \begin{proof} Recall the definition of the double complex $C^*(K, C^*(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*})))$ given in \S \ref{subsubsection Double complex}, and note that we have $\pi^* \omega \in C^{0,3}$, $\mu \in C^{0,2}$, $\gamma \in C^{1,1}$ and $\tilde{\nu} = \delta_K \gamma \in C^{2,1}$, satisfying $\pi^* \omega \cdot \delta_G \mu=1$ and $ \delta_K \mu \cdot \delta_G \gamma^{-1}=1$. Consider the element $\mu \oplus \gamma \in \mbox{\rm Tot\,}^2$ and note that $$\delta_{Tot} (\mu \oplus \gamma) = (\delta_K \oplus \delta_G^{(-1)^p}) (\mu \oplus \gamma) = \delta_G \mu \oplus \delta_K \mu \cdot \delta_G \gamma^{-1} \oplus \delta_K \gamma.$$ Therefore $$\pi^* \omega \cdot \delta_{Tot} (\mu \oplus \gamma) = \tilde{\nu}.$$ \end{proof} Lemma \ref{lemma omega cohomologous to tilde nu} implies further conditions on the cohomology class of $\omega$ for the tensor category $\mathcal{C}_\mathcal{M}^*={\mathcal V}(G,\omega)_{\mathcal{M}(A \backslash G, \mu)}^*$ to be pointed.
\begin{cor} \label{omega in 2,1 and 3,0} If the tensor category $\mathcal{C}_\mathcal{M}^*={\mathcal V}(G,\omega)_{\mathcal{M}(A \backslash G, \mu)}^*$ is pointed then $\omega$ is cohomologous to a cocycle that lives in $C^{2,1} \oplus C^{3,0}$ of the double complex that induces the Lyndon-Hochschild-Serre spectral sequence. \end{cor} \begin{rem} Note that this implies that the cohomology class of $\omega$ belongs to the subgroup of $H^3(G, \ensuremath{{\mathbb C}^*}) $ defined as $$\Omega(G;A) := \ker \left( \ker \left( H^3(G, \ensuremath{{\mathbb C}^*}) \to E^{0,3}_\infty \right) \to E^{1,2}_\infty \right)$$ which fits into the short exact sequence $$1 \to E^{3,0}_\infty \to \Omega(G;A) \to E^{2,1}_\infty \to 1.$$ The cohomology classes in $\Omega(G;A)$ are the only cohomology classes such that $\mathcal{C}_\mathcal{M}^*={\mathcal V}(G,\omega)_{\mathcal{M}(A \backslash G, \mu)}^*$ is pointed. \end{rem} In what follows we will construct explicit representatives for $\omega$ and $\mu$, but for this purpose we will start by constructing explicit 3-cocyles in $\mbox{\rm Tot\,}(C^*(K, C^*(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))))$ which appear in $\Omega(G;A)$. Let us start by determining the second differential $d_2 : E_2^{2,1} \to E_2^{4,0}$. \begin{lemma} The second differential $d_2 : E_2^{2,1} \to E_2^{4,0}$ is isomorphic to the homomorphism $$H^2(K, {{\mathbb{A}}}) \to H^4(K, \ensuremath{{\mathbb C}^*}), \ \ [\widehat{F}] \mapsto [(\widehat{F} \wedge F)^{-1}]$$ where $(\widehat{F} \wedge F) (k_1,k_2,k_3,k_4):= \widehat{F}(k_1,k_2)(F(k_3,k_4))$. \end{lemma} \begin{proof} First recall that \begin{align*}E_2^{2,1} &= H^2(K, H^1(G, \ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))) \cong H^2(K, \mathrm{Hom}(A, \ensuremath{{\mathbb C}^*})) = H^2(K, {{\mathbb{A}}})\\ E_2^{4,0} & = H^4(K, H^0(G, \ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*}))) = H^4(K, \ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*})^G) \cong H^4(K, \ensuremath{{\mathbb C}^*}). \end{align*} Take $\widehat{F} \in Z^2(K, {{\mathbb{A}}})$ and use the map $\varphi$ of Lemma \ref{lemma varphi} to lift this cocycle to $\varphi(\widehat{F}) \in C^2(K, Z^1(G,\ensuremath{{\mathrm{Map}}}(K,\ensuremath{{\mathbb C}^*})))$; in coordinates: \begin{align*} \varphi(\widehat{F})(k_1,k_2)(x_1,(a_2,x_2)) &= \widehat{F}(k_1,k_2)(\kappa_{x_1,(a_2,x_2)}) = \widehat{F}(k_1,k_2)({}^{x_1}a_2 F(x_1,x_2))\\ &= \widehat{F}(k_1,k_2)({}^{x_1}a_2) \ \widehat{F}(k_1,k_2)(F(x_1,x_2)). \end{align*} Its boundary is \begin{align*} \delta_k \varphi(\widehat{F})(k_1,k_2,k_3)&(x_1, (a_2,x_2))\\ =& \widehat{F}(k_2,k_3)({}^{x_1}a_2 F(x_1,x_2)) \widehat{F}(k_1k_2,k_3)({}^{x_1}a_2 F(x_1,x_2))^{-1}\\ & \widehat{F}(k_1,k_2k_3)({}^{x_1}a_2 F(x_1,x_2)) \widehat{F}(k_1,k_2)({}^{k_3x_1}a_2 F(k_3x_1,x_2))^{-1}\\ =& \widehat{F}(k_1,k_2)^{k_3}( F(x_1,x_2)) \widehat{F}(k_1,k_2)( F(k_3x_1,x_2))^{-1}\\ =& \widehat{F}(k_1,k_2)\left( \frac{F(k_3, x_1)}{F(k_3,x_1x_2)} \right), \end{align*} and we can define $u \in C^3(K, C^0(G, \ensuremath{{\mathrm{Map}}}(K,\ensuremath{{\mathbb C}^*})))$ as \begin{align*} u(k_1,k_2,k_3)(x) := \widehat{F}(k_1,k_2)(F(k_3,x)). 
\end{align*} On the one hand we have \begin{align*} \delta_G u(k_1,k_2,k_3)(x_1,(a_2,x_2)) = & u(k_1,k_2,k_3)(x_1x_2) u(k_1,k_2,k_3)(x_1)^{-1} \\ =& \widehat{F}(k_1,k_2)\left( \frac{F(k_3, x_1x_2)}{F(k_3,x_1)} \right) \end{align*} and on the other \begin{align*} \delta_K u(k_1, & k_2,k_3,k_4)(x) \\ =& \widehat{F}(k_2,k_3)(F(k_4,x)) \widehat{F}(k_1k_2,k_3)(F(k_4,x))^{-1} \widehat{F}(k_1,k_2k_3)(F(k_4,x))\\ &\widehat{F}(k_1,k_2)(F(k_3k_4,x))^{-1} \widehat{F}(k_1,k_2)(F(k_3,k_4x))\\ =&\widehat{F}(k_1,k_2)^{k_3}(F(k_4,x))\widehat{F}(k_1,k_2)(F(k_3k_4,x))^{-1} \widehat{F}(k_1,k_2)(F(k_3,k_4x))\\ =& \widehat{F}(k_1,k_2)(F(k_3,k_4)). \end{align*} Since $\delta_Gu=\delta_K \varphi(\widehat{F})$ we have that \begin{align*} \delta_{Tot} ( \varphi(\widehat{F}) \oplus u^{-1})= \delta_K \varphi(\widehat{F}) \delta_Gu \oplus \delta_k u^{-1}= (\widehat{F} \wedge F)^{-1}; \end{align*} therefore $d_2 [\varphi(\widehat{F})]= [(\widehat{F} \wedge F)^{-1}]$. \end{proof} Suppose that $d_2 [\varphi(\widehat{F})]=0$, hence there exists $\epsilon \in C^3(K, \ensuremath{{\mathbb C}^*})$ such that $\delta_K \epsilon = \widehat{F} \wedge F$. Define $\bar{\epsilon} \in C^3(K,C^0(G, \mathrm{Maps}(K,\ensuremath{{\mathbb C}^*})))$ by the equation $$\bar{\epsilon}(k_1,k_2,k_3)(x):= {\epsilon}(k_1,k_2,k_3)$$ and note that $\delta_K \bar{\epsilon} = \widehat{F} \wedge F$ and that $\delta_G \bar{\epsilon}=1$. Hence the class $\varphi(\widehat{F}) \oplus \bar{\epsilon} u^{-1} \in C^{2,1} \oplus C^{3,0}$ defines a 3-cocycle in the total complex: \begin{align*} \varphi(\widehat{F}) \oplus \bar{\epsilon} u^{-1} \in Z^3 \mbox{\rm Tot\,}(C^*(K, C^*(G,\ensuremath{{\mathrm{Map}}}(K, \ensuremath{{\mathbb C}^*})))). \end{align*} Define $\beta \in C^2(K,C^0(G, \mathrm{Maps}(K,\ensuremath{{\mathbb C}^*})))$ by the equation \begin{align*} \beta(k_1,k_2)(x):= \epsilon(k_1,k_2,x) \end{align*} and note that \begin{align} \delta_K \beta (k_1,k_2,k_3)(x) = &\epsilon(k_2,k_3,x) \nonumber \epsilon(k_1k_2,k_3,x)^{-1} \epsilon(k_1,k_2k_3,x) \epsilon(k_1,k_2,k_3x)^{-1} \\ \nonumber =& \delta_K \epsilon(k_1,k_2,k_3,x) \epsilon(k_1,k_2,k_3)^{-1}\\ \label{equation delta K beta} =& \widehat{F}(k_1,k_2)((F(k_3,x)) \bar{\epsilon}(k_1,k_2,k_3)(x)^{-1}. \end{align} Therefore $\delta_K \beta \ \bar{\epsilon} \ u^{-1} =1$, hence we have that the class $\varphi(\widehat{F}) \delta_G \beta \in C^{2,1}$ is a 3-cocycle in the total complex and moreover that it is cohomologous to the class $\varphi(\widehat{F}) \oplus \bar{\epsilon} u^{-1}$, in coordinates: \begin{align} (\varphi(\widehat{F}) \delta_G \beta) (k_1,k_2)(x_1,(a_2,x_2)) =& \widehat{F}(k_1,k_2)({}^{x_1}a_2) \ \widehat{F}(k_1,k_2)(F(x_1,x_2)) \nonumber\\ & \epsilon(k_1,k_2,x_1x_2) \ \epsilon(k_1,k_2,x_1)^{-1}. \label{varphi(widehat F) delta G beta} \end{align} Summarizing the previous results: \begin{proposition} \label{proposition omega in 2,1 3,0} Every cohomology class which appears in $\Omega(G;A)$ can be represented by a 3-cocycle $\varphi(\widehat{F}) \delta_G \beta \in C^{2,1}$ with $\widehat{F} \in Z^2(K, {{\mathbb{A}}})$, $\beta(k_1,k_2)(x)=\epsilon'(k_1,k_2,x)$ and $\delta_K \epsilon' = \widehat{F} \wedge F$. \end{proposition} \begin{proof} Take $[\omega] \in \Omega(G;A)$ and let $[\widehat{F}] \in E^{2,1}_2$ be a representative of the cohomology class of the image of $[\omega]$ in $E^{2,1}_\infty$. Since $d_2 [\varphi(\widehat{F})]=0$ we know that the cohomology class $[ \varphi(\widehat{F}) \oplus \bar{\epsilon} u^{-1}]$ constructed above belongs to $\Omega(G;A)$. 
Therefore we have that $$[\omega^{-1}] \cdot [ \varphi(\widehat{F}) \oplus \bar{\epsilon} u^{-1}] \in E^{3,0}_\infty$$ and hence we can choose a class $[\tau] \in H^3(K,\ensuremath{{\mathbb C}^*})\cong E_2^{3,0}$, represented by a cocycle $\tau$, such that $$[\omega] = [ \varphi(\widehat{F}) \oplus \bar{\epsilon} \ \bar{\tau} \ u^{-1}]$$ with $\bar{\tau} \in C^3(K,C^0(G, \mathrm{Maps}(K,\ensuremath{{\mathbb C}^*})))$ defined as $$\bar{\tau}(k_1,k_2,k_3)(x):= {\tau}(k_1,k_2,k_3).$$ Let $\epsilon':= \epsilon \tau$ and define $\beta \in C^2(K,C^0(G, \mathrm{Maps}(K,\ensuremath{{\mathbb C}^*})))$ by the equation \begin{align*} \beta(k_1,k_2)(x):= \epsilon'(k_1,k_2,x). \end{align*} Equation \eqref{equation delta K beta} implies that $\delta_K \beta = (\bar{\epsilon} \ \bar{\tau})^{-1} u$ and therefore the proposition follows from the equation $$( \varphi(\widehat{F}) \oplus \bar{\epsilon} \ \bar{\tau} u^{-1}) \delta_{Tot} \beta = \varphi(\widehat{F}) \delta_G \beta \oplus \delta_K \beta \ \bar{\epsilon} \ \bar{\tau} u^{-1}=\varphi(\widehat{F}) \delta_G \beta.$$ \end{proof} Now we need to find an explicit description of $\omega \in Z^3(G, \ensuremath{{\mathbb C}^*})$ such that $\pi^*\omega$ and $\varphi(\widehat{F}) \delta_G \beta$ are cohomologous. \begin{theorem} \label{theorem omega mu gamma} Let $G = A \rtimes_F K$ and consider $\omega \in C^3(G, \ensuremath{{\mathbb C}^*})$, $ \mu \in C^{0,2}$ and $\gamma \in C^{1,1}$ defined by the following equations: \begin{align*} \omega((a_1,x_1),(a_2,x_2),(a_3,x_3)) := & \widehat{F}(x_1,x_2)(a_3) \ \epsilon(x_1,x_2,x_3)\\ \mu(x_1, (a_2,x_2),(a_3,x_3)) :=& \left(\widehat{F}(x_1,x_2)(a_3)\ \epsilon(x_1,x_2,x_3) \right)^{-1}\\ \gamma(y)(x_1,(a_2,x_2)):=& \widehat{F}(y,x_1)(a_2)\ \epsilon(y,x_1,x_2). \end{align*} Then $\pi^* \omega \cdot (\delta_{Tot} \mu \oplus \gamma) =\varphi(\widehat{F}) \delta_G \beta$.
\end{theorem} \begin{proof} Let us calculate: \begin{align*} \delta_G \mu ( x_1,&(a_2,x_2),(a_3,x_3),(a_4,x_4)) \\ =& \mu(x_1x_2, (a_3,x_3),(a_4,x_4)) \ \mu(x_1, (a_2{}^{x_2}a_3F(x_2,x_3),x_2x_3),(a_3,x_3))^{-1}\\ & \mu(x_1, (a_2,x_2)(a_3{}^{x_3}a_4F(x_3,x_4),x_3x_4)) \ \mu(x_1, (a_2,x_2),(a_3,x_3))^{-1}\\ =& \widehat{F}(x_1x_2,x_3)(a_4)^{-1} \ \widehat{F}(x_1,x_2x_3)(a_4) \ \widehat{F}(x_1,x_2)(a_3{}^{x_3}a_4F(x_3,x_4))^{-1}\\ & \widehat{F}(x_1,x_2)(a_3) \ \epsilon(x_2,x_3,x_4)^{-1} \ \delta_K \epsilon(x_1,x_2,x_3,x_4)\\ =& \widehat{F}(x_2,x_3)(a_4)^{-1} \ \epsilon(x_2,x_3,x_4)^{-1}, \end{align*} and \begin{align*} \pi^* \omega(x_1,(a_2,x_2),(a_3,x_3),(a_4,x_4))=& \omega((a_2,x_2),(a_3,x_3),(a_4,x_4))\\ =& \widehat{F}(x_2,x_3)(a_4) \ \epsilon(x_2,x_3,x_4), \end{align*} hence we have that $$\delta_G \mu \cdot \pi^*\omega =1.$$ Now \begin{align*} \delta_K\mu (y) ( x_1,(a_2,x_2),(a_3,x_3& )) \\ =& \mu(x_1, (a_2,x_2),(a_3,x_3)) \ \mu(yx_1, (a_2,x_2),(a_3,x_3))^{-1}\\ =& \frac{\widehat{F}(yx_1,x_2)(a_3) \ \epsilon(yx_1,x_2,x_3)}{\widehat{F}(x_1,x_2)(a_3) \ \epsilon(x_1,x_2,x_3)}, \end{align*} and \begin{align*} \delta_G \gamma(y&)(x_1,(a_2,x_2),(a_3,x_3)) \\ = & \gamma(y)(x_1x_2,(a_3,x_3)) \ \gamma(y)(x_1,(a_2 {}^ {x_2}a_3F(x_2,x_3),x_2x_3))^{-1} \ \gamma(y)(x_1,(a_2,x_2)) \\ = & \widehat{F} (y, x_1x_2)(a_3) \ \widehat{F}(y,x_1)(a_2 {}^ {x_2}a_3F(x_2,x_3))^{-1} \ \widehat{F}(y,x_1)(a_2)\\ & \epsilon(y,x_1x_2,x_3) \ \epsilon(y,x_1,x_2x_3)^{-1} \ \epsilon(y,x_1,x_2) \\ =& \widehat{F}(yx_1,x_2)(a_3) \ \widehat{F}(x_1,x_2)(a_3)^{-1} \epsilon(yx_1,x_2,x_3) \ \epsilon(x_1,x_2,x_3)^{-1}, \end{align*} hence we have that \begin{align*} \delta_K \mu \cdot \delta_G \gamma^{-1} = 1 \end{align*} Finally we calculate \begin{align*} \delta_K \gamma(k_1,k_2)&(x_1,(a_2,x_2)) \\ =& \gamma(k_2)(x_1,(a_2,x_2)) \ \gamma(k_1k_2)(x_1,(a_2,x_2))^{-1} \ \gamma(k_1)(k_2x_1,(a_2,x_2))\\ =& \widehat{F}(k_2,x_1)(a_2) \ \widehat{F}(k_1k_2,x_1)(a_2)^{-1} \ \widehat{F}(k_1,k_2x_2)(a_2) \\ & \epsilon(k_2,x_1,x_2) \ \epsilon(k_1k_2,x_1,x_2)^{-1} \ \epsilon(k_1,k_2x_1,x_2)\\ =& \widehat{F}(k_1,k_2)({}^{x_1}a_2) \ \delta_K \epsilon(k_1,k_2,x_1,x_2) \ \epsilon(k_1,k_2,x_1x_2) \ \epsilon(k_1,k_2,x_1)^{-1}\\ =& \widehat{F}(k_1,k_2)({}^{x_1}a_2) \ \widehat{F}(k_1,k_2)(F(x_1,x_2)) \ \epsilon(k_1,k_2,x_1x_2)\ \epsilon(k_1,k_2,x_1)^{-1}, \end{align*} and since by equation \eqref{varphi(widehat F) delta G beta} we have that \begin{align*} (\varphi(\widehat{F}) \delta_G \beta) (k_1,k_2)(x_1,(a_2,x_2)) = & \widehat{F}(k_1,k_2)({}^{x_1}a_2) \ \widehat{F}(k_1,k_2)(F(x_1,x_2))\\ &\epsilon(k_1,k_2,x_1x_2)\ \epsilon(k_1,k_2,x_1)^{-1} \end{align*} we have that $$\delta_K \gamma = \varphi(\widehat{F}) \delta_G \beta.$$ Hence $\pi^* \omega \cdot (\delta_{Tot} \mu \oplus \gamma) =\varphi(\widehat{F}) \delta_G \beta$. \end{proof} \subsection{Description of $\widehat{\omega}$ and $\nu$} Assuming the explicit descriptions of $\omega$, $\mu$ and $\gamma$ described in Theorem \ref{theorem omega mu gamma}, we see that $\tilde{\nu}= \varphi(\widehat{F}) \delta_G \beta$. 
Substituting this explicit description of $\tilde{\nu}$ into the definition of $\nu$ given in \eqref{definition nu} and of $\widehat{\omega}$ given in \eqref{definition widehat omega} we obtain \begin{align*} \nu(k_1,k_2)(a):= \tilde{\nu}(k_1,k_2)(1;(a,1))= \widehat{F}(k_1,k_2)(a) \end{align*} which implies that $\nu= \widehat{F}$, and \begin{align*} \widehat{\omega}((k_1, \rho_1), (k_2,\rho_2),(k_3 ,\rho_3)) :=& \tilde{\nu}(k_1,k_2)(1;(1,k_3)) \ \rho_1(F(k_2,k_3))\\ =& \epsilon(k_1,k_2,k_3) \ \rho_1(F(k_2,k_3)). \end{align*} After applying Corollary \ref{corollary weak Morita equivalence} to the previous explicit construction of $\widehat{\omega}$ we obtain the following theorem: \begin{theorem} \label{theorem Morita equivalence} Let $K$ be a finite group acting on the finite abelian group $A$. Consider cocycles $F \in Z^2(K,A)$ and $\widehat{F} \in Z^2(K,{{\mathbb{A}}})$ such that $\widehat{F} \wedge F$ is trivial in cohomology, i.e. there exists $\epsilon \in C^3(K,\ensuremath{{\mathbb C}^*})$ such that $\delta_K \epsilon = \widehat{F} \wedge F$. Define the 3-cocycles $\omega \in Z^3(A \rtimes_F K, \ensuremath{{\mathbb C}^*})$ and $\widehat{\omega} \in Z^3(K \ltimes_{\widehat{F}} {{\mathbb{A}}}, \ensuremath{{\mathbb C}^*})$ by the equations: \begin{align*} \omega((a_1,k_1),(a_2,k_2),(a_3,k_3)) := & \widehat{F}(k_1,k_2)(a_3) \ \epsilon(k_1,k_2,k_3)\\ \widehat{\omega}((k_1, \rho_1), (k_2,\rho_2),(k_3 ,\rho_3)) :=& \epsilon(k_1,k_2,k_3) \ \rho_1(F(k_2,k_3)). \end{align*} Then the tensor categories $Vect(A \rtimes_F K, \omega)$ and $Vect(K \ltimes_{\widehat{F}} {{\mathbb{A}}}, \widehat{\omega})$ are weakly Morita equivalent, and therefore their centers are braided equivalent: $${\mathcal Z}(Vect(A \rtimes_F K, \omega)) \simeq {\mathcal Z}(Vect(K \ltimes_{\widehat{F}} {{\mathbb{A}}}, \widehat{\omega})).$$ \end{theorem} Note that we may have taken a different choice of $\mu$ and $\gamma$ in \S \ref{subsection omega nu}, thus producing different $\tilde{\nu}$ and $\widehat{\omega}$. The description of $\widehat{\omega}$ depends on the choice of cohomology class $[\widehat{F}] \in H^2(K,{{\mathbb{A}}})\cong E_2^{2,1}$ on the second page representing the image of $[\omega] $ in $E_3^{2,1}=E_\infty^{2,1}$. This choice may be changed by elements in the image of the second differential $d_2: E_2^{0,2} \to E_2^{2,1}$. Changing $\omega$ by a coboundary $\omega'=\omega\delta_G \alpha$, and writing $\omega'$ explicitly as \begin{align}\label{def omega'}\omega'((a_1,x_1),(a_2,x_2),(a_3,x_3)) := \widehat{F}'(x_1,x_2)(a_3) \ \epsilon'(x_1,x_2,x_3),\end{align} produces a cocycle $\widehat{\omega}'$ given by \begin{align}\label{def widehat omega'}\widehat{\omega}'((k_1, \rho_1), (k_2,\rho_2),(k_3 ,\rho_3)) := \epsilon'(k_1,k_2,k_3) \ \rho_1(F(k_2,k_3)).\end{align} Applying Theorem \ref{theorem Morita equivalence} and using the equivalence of categories $Vect(A \rtimes_F K, \omega) \simeq Vect(A \rtimes_F K, \omega')$ we obtain that the tensor categories $Vect(A \rtimes_F K, \omega)$ and $Vect(K \ltimes_{\widehat{F}'} {{\mathbb{A}}}, \widehat{\omega}')$ are also weakly Morita equivalent. The previous argument permits us to conclude the following corollary: \begin{cor} \label{corollary omega'} Suppose that the fusion category $\mathcal{C}_\mathcal{M}^*={\mathcal V}(A \rtimes_F K, \omega)_{\mathcal{M}(K, \mu)}^*$ is pointed.
Then it is equivalent to the category $Vect(K \ltimes_{\widehat{F}'} {{\mathbb{A}}}, \widehat{\omega}')$ where $\omega'$ and $\widehat{\omega}'$ are the cocycles defined in \eqref{def omega'} and \eqref{def widehat omega'} respectively, and $\omega'$ is cohomologous to $\omega$. \end{cor} \subsection{Classification theorem} Now we are ready to state the key result in order to establish the weak Morita equivalence classes of group-theoretical tensor categories. \begin{theorem} \label{main theorem} Let $H$ and $\widehat{H}$ be finite groups, $\eta \in Z^3(H, \ensuremath{{\mathbb C}^*})$ and $\widehat{\eta} \in Z^3(\widehat{H}, \ensuremath{{\mathbb C}^*})$. Then the tensor categories $Vect(H,\eta)$ and $Vect(\widehat{H}, \widehat{\eta})$ are weakly Morita equivalent if and only if the following conditions are satisfied: \begin{itemize} \item There exist isomorphisms of groups $$\phi : G= A \rtimes_F K \stackrel{\cong}{\to} H \ \ \ \ \widehat{\phi} : \widehat{G}= K \ltimes_{\widehat{F}} {{\mathbb{A}}} \stackrel{\cong}{\to} \widehat{H}$$ for some finite group $K$ acting on the abelian group $A$, with $F \in Z^2(K, A)$ and $\widehat{F} \in Z^2(K, {{\mathbb{A}}})$ where ${{\mathbb{A}}} := \mathrm{Hom}(A, \ensuremath{{\mathbb C}^*})$. \item There exists $\epsilon : K^3 \to \ensuremath{{\mathbb C}^*}$ such that $\widehat{F} \wedge F =\delta_K \epsilon$. \item The cohomology classes satisfy the equations $[ \phi^* \eta ]=[\omega]$ and $[\widehat{\phi}^*\widehat{\eta}]=[\widehat{\omega}]$ with \begin{align*} \omega((a_1,k_1),(a_2,k_2),(a_3,k_3)) := & \widehat{F}(k_1,k_2)(a_3) \ \epsilon(k_1,k_2,k_3)\\ \widehat{\omega}((k_1, \rho_1), (k_2,\rho_2),(k_3 ,\rho_3)) :=& \epsilon(k_1,k_2,k_3) \ \rho_1(F(k_2,k_3)). \end{align*} \end{itemize} \end{theorem} \begin{proof} Suppose that $Vect(H,\eta)$ and $Vect(\widehat{H}, \widehat{\eta})$ are weakly Morita equivalent. Then $Vect(\widehat{H}, \widehat{\eta})$ is equivalent to the dual category ${\mathcal V}(H,\eta)_{\mathcal{M}(A \backslash H, \mu)}^*$ with $K:= A \backslash H$, $\phi : G= A \rtimes_F K \stackrel{\cong}{\to} H$ and $\mathcal{M}(A \backslash H, \mu)$ some module category over ${\mathcal V}(H,\eta)$. By Corollary \ref{corollary omega'} the tensor category $Vect(\widehat{H}, \widehat{\eta})$ is furthermore equivalent to $Vect(K \ltimes_{\widehat{F}'} {{\mathbb{A}}}, \widehat{\omega}')$ where $\omega'$ and $\widehat{\omega}'$ are the cocycles defined in equations \eqref{def omega'} and \eqref{def widehat omega'} respectively, and such that $\omega'$ is cohomologous to $\phi^*\eta$. In particular we have that $ \widehat{\phi} : \widehat{G}= K \ltimes_{\widehat{F}} {{\mathbb{A}}} \stackrel{\cong}{\to} \widehat{H}$ and that $\widehat{\phi}^*\widehat{\eta}$ is cohomologous to $\widehat{\omega}'$. The converse is the statement of Theorem \ref{theorem Morita equivalence}. \end{proof} In the case that both $\omega$ and $\widehat{\omega}$ are cohomologically trivial, we conclude that $Vect(A \rtimes_F K, 1)$ and $Vect(K \ltimes_{\widehat{F}} {{\mathbb{A}}}, 1)$ are weakly Morita equivalent if and only if the cohomology class $[\widehat{F}] \in H^2(K,{{\mathbb{A}}})$ lies in the image of the second differential of the spectral sequence $d_2: H^2(A, \ensuremath{{\mathbb C}^*})^K \to H^2(K,{{\mathbb{A}}}).$ This result was originally proved in \cite[Cor. 6.2]{Davydov}.
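As an independent sanity check of Theorem \ref{theorem Morita equivalence} (this check is an addition for the reader and is not part of the original argument), the following short Python script verifies the relevant cocycle identities numerically in the smallest non-trivial instance: $A = K = \ensuremath{{\mathbb Z}}/2$ with trivial action, $F$ and $\widehat{F}$ the non-trivial $2$-cocycles, so that $A \rtimes_F K$ and $K \ltimes_{\widehat{F}} {{\mathbb{A}}}$ are both isomorphic to $\ensuremath{{\mathbb Z}}/4$, and $\epsilon(k_1,k_2,k_3) := i^{\,k_1k_2k_3}$, which is one explicitly chosen solution of $\delta_K \epsilon = \widehat{F} \wedge F$. All cocycle values are fourth roots of unity, so the script tracks their exponents modulo $4$ in order to keep the check exact.
\begin{verbatim}
# Sanity check (illustration only): A = K = Z/2 with trivial action, F and Fhat
# the non-trivial 2-cocycles, eps(k1,k2,k3) = i**(k1*k2*k3).  Every cocycle value
# is a 4th root of unity i**e, so we only track the exponent e modulo 4.
from itertools import product

Z2 = (0, 1)

def F(k1, k2):                 # F in Z^2(K, A): extension cocycle of Z/4
    return k1 * k2

def Fhat(k1, k2):              # Fhat in Z^2(K, Ahat), Ahat identified with Z/2
    return k1 * k2

def eps(k1, k2, k3):           # exponent of eps = i**(k1 k2 k3)
    return k1 * k2 * k3

# delta_K eps = Fhat wedge F   (exponents mod 4; note (-1)**m = i**(2m))
for k1, k2, k3, k4 in product(Z2, repeat=4):
    lhs = (eps(k2, k3, k4) - eps((k1 + k2) % 2, k3, k4)
           + eps(k1, (k2 + k3) % 2, k4) - eps(k1, k2, (k3 + k4) % 2)
           + eps(k1, k2, k3)) % 4
    assert lhs == (2 * Fhat(k1, k2) * F(k3, k4)) % 4

# G = A x|_F K:        (a1,k1)(a2,k2) = (a1+a2+F(k1,k2), k1+k2)
def mult_G(g1, g2):
    (a1, k1), (a2, k2) = g1, g2
    return ((a1 + a2 + F(k1, k2)) % 2, (k1 + k2) % 2)

# Ghat = K |x_Fhat Ahat: (k1,s1)(k2,s2) = (k1+k2, s1+s2+Fhat(k1,k2))
def mult_Ghat(g1, g2):
    (k1, s1), (k2, s2) = g1, g2
    return ((k1 + k2) % 2, (s1 + s2 + Fhat(k1, k2)) % 2)

def omega(g1, g2, g3):         # exponent of Fhat(k1,k2)(a3) * eps(k1,k2,k3)
    (a1, k1), (a2, k2), (a3, k3) = g1, g2, g3
    return (2 * Fhat(k1, k2) * a3 + eps(k1, k2, k3)) % 4

def omega_hat(g1, g2, g3):     # exponent of eps(k1,k2,k3) * rho1(F(k2,k3))
    (k1, s1), (k2, s2), (k3, s3) = g1, g2, g3
    return (eps(k1, k2, k3) + 2 * s1 * F(k2, k3)) % 4

def is_3_cocycle(om, mult, elements):
    return all((om(g2, g3, g4) - om(mult(g1, g2), g3, g4)
                + om(g1, mult(g2, g3), g4) - om(g1, g2, mult(g3, g4))
                + om(g1, g2, g3)) % 4 == 0
               for g1, g2, g3, g4 in product(elements, repeat=4))

elements = list(product(Z2, Z2))
assert is_3_cocycle(omega, mult_G, elements)
assert is_3_cocycle(omega_hat, mult_Ghat, elements)
print("delta_K eps = Fhat wedge F, and omega, omega_hat are 3-cocycles")
\end{verbatim}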
\section{Examples} \label{section examples} \subsection{Pointed fusion categories of global dimension 4} We can now calculate the weakly Morita equivalence classes of pointed fusion categories of global dimension 4. For $G = \ensuremath{{\mathbb Z}}/4$ we have that $H^*(\ensuremath{{\mathbb Z}}/4,\ensuremath{{\mathbb Z}}) \cong \ensuremath{{\mathbb Z}}[u]/4u$ with $|u|=2$ and that the non trivial automorphism of $\ensuremath{{\mathbb Z}}/4$ maps $u$ to $-u$; therefore $H^4(\ensuremath{{\mathbb Z}}/4,\ensuremath{{\mathbb Z}})/Aut(\ensuremath{{\mathbb Z}}/4) =\langle u^2 \rangle = \ensuremath{{\mathbb Z}}/4$. For $G= (\ensuremath{{\mathbb Z}}/2)^2$ we have that $$H^4((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}}) \cong ker(Sq^1: H^4((\ensuremath{{\mathbb Z}}/2)^2, {\mathbb{F}}_2) \to H^5((\ensuremath{{\mathbb Z}}/2)^2, {\mathbb{F}}_2)) =\langle x^4,x^2y^2,y^4 \rangle$$ where $H^*((\ensuremath{{\mathbb Z}}/2)^2, {\mathbb{F}}_2)={\mathbb{F}}_2[x,y]$ and $Sq^1$ is the Steenrod operation, and up to automorphisms of $(\ensuremath{{\mathbb Z}}/2)^2$ we get $$ H^4((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}})/Aut((\ensuremath{{\mathbb Z}}/2)^2) = \left\{ \begin{array}{l} 0\\ (x^4)= \{x^4,y^4,x^4+y^4 \} \\ (x^2y^2) =\{ x^2y^2,x^2y^2+x^4,x^2y^2+y^4 \}\\ (x^4+x^2y^2+y^4) = \{x^4+x^2y^2+y^4\}. \end{array} \right. $$ Since we have a clear description for a base of $H^4((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}})$, we will abuse the notation and denote with the symbols of $H^4((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}})$ the elements of $H^3((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb C}^*})$. With this clarification the relevant terms of the second page of the LHS spectral sequence of the extension $1 \to \ensuremath{{\mathbb Z}}/2 \to \ensuremath{{\mathbb Z}}/4 \to \ensuremath{{\mathbb Z}}/2 \to 1$ become \begin{tikzpicture} \matrix [matrix of math nodes,row sep=6mm] { 3 & [5mm] |(a)| \ensuremath{{\mathbb Z}}/2= \langle y^4 \rangle & [5mm] & [5mm] & [5mm] & [5mm] & [5mm] \\\ 2 & |(b)| 0 & |(c)| 0 & & & & \\ 1& \ensuremath{{\mathbb Z}}/2 & |(d)| \ensuremath{{\mathbb Z}}/2= \langle yx\rangle & |(e)| \ensuremath{{\mathbb Z}}/2= \langle yx^2\rangle & & & \\ 0& \ensuremath{{\mathbb C}^*} & \ensuremath{{\mathbb Z}}/2 & |(f)| 0 & |(g)| \ensuremath{{\mathbb Z}}/2=\langle x^4\rangle & 0 \\ & 0 & 1 & 2 & 3& 4&\\ }; \tikzstyle{every node}=[midway,auto,font=\scriptsize] \draw[thick] (-5,-1.7) -- (-5,2.8) ; \draw[thick] (-5,-1.7) -- (6.0,-1.7) ; \draw[-stealth] (d) -- node {$\cong$} (g); \end{tikzpicture} \noindent where the second differential is defined by the assignment $d_2(yx^k)= Sq^1(x^{k+2})$ with the class $x^2$ classifying the extension. 
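Concretely, since $Sq^1$ acts as a derivation on $H^*(\ensuremath{{\mathbb Z}}/2,{\mathbb{F}}_2)$ with $Sq^1(x)=x^2$, one has $Sq^1(x^k)=kx^{k+1}$; hence $d_2(yx)=Sq^1(x^3)=x^4$ gives the isomorphism $E_2^{1,1} \stackrel{\cong}{\to} E_2^{3,0}$ indicated by the arrow in the diagram, while $d_2(yx^2)=Sq^1(x^4)=0$, so that the class $yx^2$ survives to $E_\infty^{2,1}$.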
We conclude that the only weak Morita equivalence that appears, which does not come from an automorphism of a group, is $$Vect(\ensuremath{{\mathbb Z}}/4, 0) \simeq Vect((\ensuremath{{\mathbb Z}}/2)^2,x^2y^2).$$ Therefore we see that there are exactly seven weak Morita equivalence classes of pointed fusion categories of global dimension 4, namely the three for $\ensuremath{{\mathbb Z}}/4$: $$ Vect(\ensuremath{{\mathbb Z}}/4,u^2), Vect(\ensuremath{{\mathbb Z}}/4,2u^2), Vect(\ensuremath{{\mathbb Z}}/4,3u^2);$$ the three for $(\ensuremath{{\mathbb Z}}/2)^2$: $$Vect((\ensuremath{{\mathbb Z}}/2)^2,0),Vect((\ensuremath{{\mathbb Z}}/2)^2,x^4),Vect((\ensuremath{{\mathbb Z}}/2)^2, x^4+y^4+x^2y^2)$$ and the one that we have just constructed $$ Vect(\ensuremath{{\mathbb Z}}/4,0)\simeq_M Vect((\ensuremath{{\mathbb Z}}/2)^2,x^2y^2). $$ \subsection{Non trivial action of $\ensuremath{{\mathbb Z}}/2$ on $\ensuremath{{\mathbb Z}}/4$} \label{subsection nontrivial action} Consider the non trivial action of $\ensuremath{{\mathbb Z}}/2$ on $\ensuremath{{\mathbb Z}}/4$ and the abelian extension $1 \to \ensuremath{{\mathbb Z}}/4 \to G \to \ensuremath{{\mathbb Z}}/2 \to 1$. The group $G$ is either the Dihedral group $D_8$ in the case that the extension is a split extension or the quaternion group $Q_8$ in the case that the extension is a non-split extension. In the case of $D_8$ the relevant elements of the second page of the LHS spectral sequence associated to the extension are: \begin{tikzpicture} \matrix [matrix of math nodes,row sep=6mm] { 3 & [5mm] |(a)| \ensuremath{{\mathbb Z}}/4= \langle a\rangle & [5mm] & [5mm] & [5mm] & [5mm] & [5mm] \\\ 2 & |(b)| 0 & |(c)| 0 & & & & \\ 1& \ensuremath{{\mathbb Z}}/2 & |(d)| \ensuremath{{\mathbb Z}}/2= \langle e\rangle & |(e)| \ensuremath{{\mathbb Z}}/2= \langle b\rangle & & & \\ 0& \ensuremath{{\mathbb C}^*} & \ensuremath{{\mathbb Z}}/2 & |(f)| 0 & |(g)| \ensuremath{{\mathbb Z}}/2=\langle c\rangle & 0 \\ & 0 & 1 & 2 & 3& 4&\\ }; \tikzstyle{every node}=[midway,auto,font=\scriptsize] \draw[thick] (-4.5,-1.7) -- (-4.5,2.8) ; \draw[thick] (-4.5,-1.7) -- (5.0,-1.7) ; \end{tikzpicture} \noindent and they all survive to the page at infinity. Since $H^3(D_8, \ensuremath{{\mathbb C}^*})= \ensuremath{{\mathbb Z}}/4 \oplus \ensuremath{{\mathbb Z}}/2 \oplus \ensuremath{{\mathbb Z}}/2$ we may say that $H^3(D_8, \ensuremath{{\mathbb C}^*})\cong \langle a \rangle \oplus \langle b \rangle \oplus \langle c \rangle$, and since $D_8 \cong \ensuremath{{\mathbb Z}}/4 \rtimes \ensuremath{{\mathbb Z}}/2$ we have that $F=0$. The element $b \in H^2(\ensuremath{{\mathbb Z}}/2, {\ensuremath{{\mathbb Z}}/4})$ defines the non trivial extension $Q_8 \cong \ensuremath{{\mathbb Z}}/2 \ltimes_b {\ensuremath{{\mathbb Z}}/4}$. 
The second page of the LHS spectral sequence of the extension $Q_8 \cong \ensuremath{{\mathbb Z}}/2 \ltimes_b {\ensuremath{{\mathbb Z}}/4}$ becomes: \begin{tikzpicture} \matrix [matrix of math nodes,row sep=6mm] { 3 & [5mm] |(a)| \ensuremath{{\mathbb Z}}/4= \langle \alpha \rangle & [5mm] & [5mm] & [5mm] & [5mm] & [5mm] \\\ 2 & |(b)| 0 & |(c)| 0 & & & & \\ 1& \ensuremath{{\mathbb Z}}/2 & |(d)| \ensuremath{{\mathbb Z}}/2= \langle e\rangle & |(e)| \ensuremath{{\mathbb Z}}/2= \langle 4\alpha\rangle & & & \\ 0& \ensuremath{{\mathbb C}^*} & \ensuremath{{\mathbb Z}}/2 & |(f)| 0 & |(g)| \ensuremath{{\mathbb Z}}/2=\langle c\rangle & 0 \\ & 0 & 1 & 2 & 3& 4&\\ }; \tikzstyle{every node}=[midway,auto,font=\scriptsize] \draw[thick] (-4.5,-1.7) -- (-4.5,2.8) ; \draw[thick] (-4.5,-1.7) -- (5.0,-1.7) ; \draw[-stealth] (d) -- node {$\cong$} (g); \end{tikzpicture} \noindent where $d_2: E_2^{1,1} \stackrel{\cong}{\to} E_2^{3,0}$ is an isomorphism and $H^3(Q_8,\ensuremath{{\mathbb C}^*})=\langle \alpha \rangle= \ensuremath{{\mathbb Z}}/8$. Therefore for these extensions we only have the weak Morita equivalences: \begin{align*} Vect(D_8, b) \simeq_M Vect(Q_8,0) \simeq_M Vect(D_8,b \oplus c) \end{align*} where the equivalence of the right is obtained from the fact that $c$ does not survive the spectral sequence for the group $Q_8$, and the self Morita equivalence $$ Vect(Q_8,4\alpha) \simeq_M Vect(Q_8,4\alpha).$$ \subsection{Extension of $\ensuremath{{\mathbb Z}}/2 \times \ensuremath{{\mathbb Z}}/2$ by $\ensuremath{{\mathbb Z}}/2$} Consider the non-abelian extensions of the form $1 \to \ensuremath{{\mathbb Z}}/2 \to G \to \ensuremath{{\mathbb Z}}/2 \times \ensuremath{{\mathbb Z}}/2 \to 1$, namely $D_8$ and $Q_8$. The second page of the LHS spectral sequence for these extensions becomes: \begin{tikzpicture} \matrix [matrix of math nodes,row sep=6mm] { 3 & [5mm] |(a)| \ensuremath{{\mathbb Z}}/2 & [5mm] & [5mm] & [5mm] & [5mm] & [5mm] \\\ 2 & |(b)| 0 & |(c)| 0 & & & & \\ 1& \ensuremath{{\mathbb Z}}/2 & |(d)| (\ensuremath{{\mathbb Z}}/2)^2 & |(e)| (\ensuremath{{\mathbb Z}}/2)^3 & & & \\ 0& \ensuremath{{\mathbb C}^*} & (\ensuremath{{\mathbb Z}}/2)^2 & |(f)| \ensuremath{{\mathbb Z}}/2 & |(g)| (\ensuremath{{\mathbb Z}}/2)^3 & (\ensuremath{{\mathbb Z}}/2)^2 \\ & 0 & 1 & 2 & 3& 4&\\ }; \tikzstyle{every node}=[midway,auto,font=\scriptsize] \draw[thick] (-4.0,-1.7) -- (-4.0,2.8) ; \draw[thick] (-4.0,-1.7) -- (4.0,-1.7) ; \end{tikzpicture} \noindent and we need only to concentrate in the differentials $d_2:E_2^{p,1} \to E_2^{p+2,0}$ between the first two rows since we know that $E_2^{0,3} = \ensuremath{{\mathbb Z}}/2$ survives the spectral sequence in all the groups. First we will determine the differential $\overline{d}_2^G$ in the LHS spectral sequence for coefficients in the field of two elements ${\mathbb{F}}_2$. In this case $$E_2 \cong H^*(\ensuremath{{\mathbb Z}}/2 \times \ensuremath{{\mathbb Z}}/2, {\mathbb{F}}_2) \otimes_{{\mathbb{F}}_2} H^*(\ensuremath{{\mathbb Z}}/2, {\mathbb{F}}_2) \cong {\mathbb{F}}_2[x,y,e]$$ and $\overline{d}_2^Ge \in H^2(\ensuremath{{\mathbb Z}}/2 \times \ensuremath{{\mathbb Z}}/2, {\mathbb{F}}_2)$ represents the class that defines the extension $G$. It is known that the class $x^2+xy +y^2$ defines $Q_8$ \cite[Lemma 2.10]{AdemMilgram}, the classes $x^2+xy, xy+y^2,xy$ define $D_8$ \cite[pp. 130]{AdemMilgram}, and the classes $x^2,y^2,x^2+y^2$ define $\ensuremath{{\mathbb Z}}/2 \times \ensuremath{{\mathbb Z}}/4$. 
Second we use the fact that for the group $(\ensuremath{{\mathbb Z}}/2)^2$ we have the isomorphism $$H^j((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}}) \cong ker(Sq^1 : H^j((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}}/2) \to H^{j+1}((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}}/2)) $$ where $Sq^1$ is the first Steenrod square. This implies that the map canonical map $$H^j((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}}/2)) \to H^{j}((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb C}^*})$$ can be seen as the map \begin{align*} H^j((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}}/2)) \stackrel{Sq^1}{\longrightarrow} & ker\left(Sq^1 : H^{j+1}((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}}/2) \to H^{j+2}((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}}/2)\right)\\ & \cong H^{j+1}((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}}) \cong H^{j}((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb C}}^*). \end{align*} Therefore the second differential $$d_2^G : H^{p-2}((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}}/2) \to H^{p}((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb C}^*})$$ is isomorphic to the composite map \begin{align*} d_2^G : H^{p-2}((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}}/2) \to & ker\left(Sq^1 : H^{p+1}((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}}/2) \to H^{p+2}((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}}/2)\right)\\ & \cong H^{p+1}((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}}) \cong H^{p}((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb C}}^*)\\ z \mapsto & Sq^1(z \cup \overline{d}_2^Ge). \end{align*} Without loss of generality we may choose $\overline{d}_2^Ge=xy+x^2$ for calculating the LHS spectral sequence for $D_8$. Applying the differential $d_2^G$ to the elements $1,x,y,x^2,xy,y^2$ we obtain that the surviving terms in the infinite page of the LHS spectral sequence for $D_8$ become: \begin{tikzpicture} \matrix [matrix of math nodes,row sep=6mm] { 3 & [1mm] |(a)| \ensuremath{{\mathbb Z}}/2 & [1mm] & [1mm] & [1mm] & [1mm] & [1mm] \\\ 2 & |(b)| 0 & |(c)| 0 & & & & \\ 1& 0 & |(d)| \ensuremath{{\mathbb Z}}/2=\langle e(y) \rangle & |(e)| \ensuremath{{\mathbb Z}}/2=\langle e(xy+x^2) \rangle & & & \\ 0& \ensuremath{{\mathbb C}^*} & (\ensuremath{{\mathbb Z}}/2)^2=\langle x^2,y^2 \rangle & |(f)| 0 & |(g)| (\ensuremath{{\mathbb Z}}/2)^2=\frac{\langle x^4,x^2y^2,y^4 \rangle}{\langle x^2y^2+x^4 \rangle} & 0 \\ & 0 & 1 & 2 & 3& 4&\\ }; \tikzstyle{every node}=[midway,auto,font=\scriptsize] \draw[thick] (-5.3,-1.7) -- (-5.3,2.8) ; \draw[thick] (-5.3,-1.7) -- (5.9,-1.7) ; \end{tikzpicture} \noindent Here we are abusing the notation and we are using the explicit base of $H^4((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb Z}})$ to denote the elements in $H^3((\ensuremath{{\mathbb Z}}/2)^2, \ensuremath{{\mathbb C}^*})$. 
Since $E_3^{2,1}= \langle e(xy+x^2) \rangle$ we have that the weak Morita equivalences that we obtain in the extension are $$Vect(D_8,0) \simeq_M Vect((\ensuremath{{\mathbb Z}}/2)^3, Sq^1(e(xy+x^2)))$$ $$Vect(D_8,x^4) \simeq_M Vect((\ensuremath{{\mathbb Z}}/2)^3, Sq^1(e(xy+x^2))+x^4)$$ $$Vect(D_8,y^4) \simeq_M Vect((\ensuremath{{\mathbb Z}}/2)^3, Sq^1(e(xy+x^2))+y^4)$$ and the self Morita equivalence $$Vect(D_8,e(xy+x^2)) \simeq_M Vect(D_8,e(xy+x^2)).$$ The surviving terms for $Q_8$ with $\overline{d}_2^Ge=x^2+xy+y^2$ are: \begin{tikzpicture} \matrix [matrix of math nodes,row sep=6mm] { 3 & [3mm] |(a)| \ensuremath{{\mathbb Z}}/2 & [3mm] & [3mm] & [3mm] & [3mm] & [3mm] \\\ 2 & |(b)| 0 & |(c)| 0 & & & & \\ 1& 0 & |(d)| 0 & |(e)| \ensuremath{{\mathbb Z}}/2=\langle e(x^2 +xy+ y^2) \rangle & & & \\ 0& \ensuremath{{\mathbb C}^*} & (\ensuremath{{\mathbb Z}}/2)^2=\langle x^2,y^2 \rangle & |(f)| 0 & |(g)| \ensuremath{{\mathbb Z}}/2=\langle x^2y^2\rangle & 0 \\ & 0 & 1 & 2 & 3& 4&\\ }; \tikzstyle{every node}=[midway,auto,font=\scriptsize] \draw[thick] (-5.6,-1.7) -- (-5.6,2.8) ; \draw[thick] (-5.6,-1.7) -- (6.1,-1.7) ; \end{tikzpicture} \noindent with $E_\infty^{0,3} = \ensuremath{{\mathbb Z}}/2=\langle \alpha \rangle$, $\langle x^2 +xy+ y^2 \rangle= \langle 2\alpha \rangle$ and $\langle x^2y^2 \rangle= \langle 4\alpha \rangle$, where $\alpha$ is the generator of $H^3(Q_8,\ensuremath{{\mathbb C}^*}) = \langle \alpha \rangle$ that was defined in \S\ref{subsection nontrivial action}. Hence the only Morita equivalences that we obtain are $$Vect(Q_8,0) \simeq_M Vect((\ensuremath{{\mathbb Z}}/2)^3, Sq^1(e(x^2 +xy+ y^2)))$$ $$Vect(Q_8,4\alpha) \simeq_M Vect((\ensuremath{{\mathbb Z}}/2)^3, Sq^1(e(x^2 +xy+ y^2))+x^2y^2)$$ and the self Morita equivalences $Vect(Q_8,2\alpha) \simeq_M Vect(Q_8,2\alpha) $ and $Vect(Q_8,6\alpha) \simeq_M Vect(Q_8,6\alpha) $. Summarizing the previous results for the group $Q_8$ we obtain the following result: \begin{proposition} Let us suppose that $Vect(Q_8, k\alpha)$ is weakly Morita equivalent to $Vect(G,\eta)$. Then \begin{itemize} \item For $k$ odd or $k=2,6$, $G$ must be isomorphic to $Q_8$ and $\eta$ must correspond to $j\alpha$ with $j$ odd or $j=2,6$. \item For $k=4$, $G$ must be isomorphic to $Q_8$ or $(\ensuremath{{\mathbb Z}}/2)^3$. \item For $k=0$, $G$ must be isomorphic to $Q_8$, $D_8$ or $(\ensuremath{{\mathbb Z}}/2)^3$. \end{itemize} \end{proposition} \begin{proof} First note that the action of $Aut(Q_8)$ on $H^3(Q_8,\ensuremath{{\mathbb C}^*})$ is trivial. Second note that the only proper non-trivial normal subgroups of $Q_8$ are its center and the three cyclic subgroups of order $4$, and that they all fit into the central extension $1 \to \ensuremath{{\mathbb Z}}/2 \to Q_8 \to (\ensuremath{{\mathbb Z}}/2)^2 \to 1$ or the non-split extension $1 \to \ensuremath{{\mathbb Z}}/4 \to Q_8 \to \ensuremath{{\mathbb Z}}/2 \to 1$ that we have studied before. Since any weak Morita equivalence between pointed fusion categories comes from a normal and abelian subgroup of $Q_8$, the classification that we have done before exhausts all possibilities. For $k$ odd we know that $k\alpha$ survives the restriction to the center and to the cyclic subgroups isomorphic to $\ensuremath{{\mathbb Z}}/4$ and therefore $G$ can only be $Q_8$. The classes $2\alpha$ and $6\alpha$ trivialize on the center of $Q_8$, but these classes define extensions of $(\ensuremath{{\mathbb Z}}/2)^2$ by $\ensuremath{{\mathbb Z}}/2$ which are isomorphic to $Q_8$ and define cohomology classes which are precisely $2\alpha$ and $6\alpha$.
The class $4\alpha$ trivializes on all normal abelian subgroups; in the case of the subgroup $\ensuremath{{\mathbb Z}}/4$ the only group that may appear is $Q_8$, and in the case of the center we may obtain the weak Morita equivalence $$Vect(Q_8,4\alpha) \simeq_M Vect((\ensuremath{{\mathbb Z}}/2)^3, Sq^1(e(x^2 +xy+ y^2))+x^2y^2).$$ Finally, the trivial class produces only the group $D_8$ in the case of the subgroup $\ensuremath{{\mathbb Z}}/4$ and $(\ensuremath{{\mathbb Z}}/2)^3$ in the case of the center; the corresponding weak Morita equivalences are $$Vect(Q_8,0) \simeq_M Vect((\ensuremath{{\mathbb Z}}/2)^3, Sq^1(e(x^2 +xy+ y^2))) \simeq_M Vect(D_8,b).$$ \end{proof} \bibliographystyle{abbrv}
{ "timestamp": "2017-03-14T01:02:23", "yymm": "1511", "arxiv_id": "1511.05522", "language": "en", "url": "https://arxiv.org/abs/1511.05522", "abstract": "A pointed fusion category is a rigid tensor category with finitely many isomorphism classes of simple objects which moreover are invertible. Two tensor categories $C$ and $D$ are weakly Morita equivalent if there exists an indecomposable right module category $M$ over $C$ such that $Fun_C(M,M)$ and $D$ are tensor equivalent. We use the Lyndon-Hochschild-Serre spectral sequence associated to abelian group extensions to give necessary and sufficient conditions in terms of cohomology classes for two pointed fusion categories to be weakly Morita equivalent. This result may permit to classify the equivalence classes of pointed fusion categories of any given global dimension.", "subjects": "Algebraic Topology (math.AT); Quantum Algebra (math.QA)", "title": "On the Classification of Pointed Fusion Categories up to weak Morita Equivalence", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9867771801919951, "lm_q2_score": 0.7185943925708561, "lm_q1q2_score": 0.709092548402849 }
https://arxiv.org/abs/1312.6466
Optimal Confidence Bands for Shape-Restricted Curves
Let $Y$ be a stochastic process on $[0,1]$ satisfying $dY(t) = n^{1/2} f(t) dt + dW(t)$, where $n \ge 1$ is a given scale parameter (``sample size''), $W$ is standard Brownian motion and $f$ is an unknown function. Utilizing suitable multiscale tests we construct confidence bands for $f$ with guaranteed given coverage probability, assuming that $f$ is isotonic or convex. These confidence bands are computationally feasible and shown to be asymptotically sharp optimal in an appropriate sense.
\section{Introduction} \label{Introduction} Nonparametric statistical models often involve some unknown function $f$ defined on a real interval $J$. For instance $f$ might be the probability density of some distribution or a regression function. Nonparametric point estimators for such a curve $f$ are abundant. The available methods are based on kernels, splines, local polynomials, or orthogonal series, including wavelets; see Hart~(1997) and references cited therein. In order to quantify the precision of estimation, one often wants to replace a point estimator with a confidence band $(\hat\ell, \hat u)$ for $f$. The latter consists of two functions $\hat\ell = \hat\ell(\cdot,\mbox{data})$ and $\hat u = \hat u(\cdot,\mbox{data})$ on $J$ with values in $[-\infty,\infty]$ such that, hopefully, $\hat\ell \le f \le \hat u$ pointwise. More precisely, one is aiming at a confidence band such that \begin{equation} \mathrm{I\!\!P}\{\hat\ell \le f \le \hat u\} \ \ge \ 1 - \alpha \label{Confidence0} \end{equation} for a given level $\alpha \in \left]0,1\right[$, while $\hat \ell$ and $\hat u$ should be as close to each other as possible. Unfortunately, curve estimation is an ill-posed problem, and usually there are no nontrivial bands $(\hat\ell,\hat u)$ satisfying (\ref{Confidence0}) for arbitrary $f$; see Donoho~(1988). Therefore one has to impose some additional restrictions on $f$. One possibility is to require smoothness constraints on $f$, for instance an upper bound on a certain derivative of $f$. Under such restrictions, (\ref{Confidence0}) can be achieved approximately for large sample sizes; see for example Bickel and Rosenblatt~(1973), Knafl et al.~(1985), Hall and Titterington~(1988), H\"ardle and Marron~(1991), Eubank and Speckman~(1993), Fan and Zhang~(2000), and the references cited therein. A problem with the aforementioned methods is that smoothness constraints are hard to justify in practical situations. More precisely, even if the underlying curve $f$ is infinitely often differentiable, the actual coverage probabilities of the confidence bands mentioned above depend on quantitative properties of certain derivatives of $f$ which are difficult to obtain from the data. In many applications qualitative assumptions about $f$ such as monotonicity, unimodality or concavity/convexity are plausible. One example is given by growth curves in medicine, e.g.~where $f(x)$ is the mean body height of newborns at age $x$. Here isotonicity of $f$ is a plausible assumption. Another example is given by so-called Engel curves in econometrics, where $f(x)$ is the mean expenditure for certain consumer goods of households with annual income $x$. Here one expects $f$ to be isotonic and sometimes concave as well. Under such qualitative assumptions it is possible to construct $(1 - \alpha)$--confidence sets for $f$ based on certain goodness-of-fit tests without relying on asymptotic arguments. Examples of such procedures can be found in Davies~(1995), Hengartner and Stark~(1995) and D\"umbgen~(1998). In particular, these papers present confidence bands $(\hat\ell,\hat u)$ for $f$ such that \begin{equation} \mathrm{I\!\!P}\{\hat\ell \le f \le \hat u\} \ \ge \ 1 - \alpha \quad\mbox{whenever } f \in {\mathcal F} . \label{Confidence} \end{equation} Here ${\mathcal F}$ denotes the specified class of functions.
Given a suitable distance measure $D(\cdot,\cdot)$ for functions, the goal is to find a band $(\hat\ell,\hat u)$ satisfying (\ref{Confidence}) such that either $D(\hat u, \hat \ell)$ or $D(\hat\ell, f)$ and $D(\hat u,f)$ are as small as possible. The phrase ``as small as possible'' can be interpreted in the sense of optimal rates of convergence to zero as the sample size $n$ tends to infinity. The papers of Hengartner and Stark~(1995) and D\"umbgen~(1998) contain such optimality results. In the present paper we investigate optimality of confidence bands in more detail. In addition to optimal rates of convergence we obtain optimal constants and discuss the impact of local smoothness properties of $f$. Compared to the general confidence sets of D\"umbgen~(1998), the methods developed here are more stringent and computationally simpler. They are based on multiscale tests as developed by D\"umbgen and Spokoiny~(2001), who considered tests of qualitative assumptions rather than confidence bands. For further results on testing in nonparametric curve estimation see Hart~(1997), Fan et al.~(2001), and the references cited there. \section{Basic setting and overview} For mathematical convenience we focus on a continuous white noise model: Suppose that one observes a stochastic process $Y$ on the unit interval $[0,1]$, where $$ Y(t) \ = \ n^{1/2} \int_0^t f(x) \, dx + W(t) . $$ Here $f$ is an unknown function in $L^2[0,1]$, $n \ge 1$ is a given scale parameter (``sample size''), and $W$ is standard Brownian motion. In this context the bounding functions $\hat\ell, \hat u$ are defined on $[0,1]$, but for notational convenience the function $f$ is tacitly assumed to be defined on the whole real line with values in $[-\infty,\infty]$. From now on we assume that $$ f \ \in \ {\mathcal G} \cap L^2[0,1] , $$ where ${\mathcal G}$ denotes one of the following two function classes: \begin{eqnarray*} {\mathcal G}^{}_\uparrow & := & \Bigl\{ \mbox{non-decreasing functions } g : \mathbb{R} \to [-\infty,\infty] \Bigr\} , \\ {\mathcal G}^{}_{\rm conv} & := & \Bigl\{ \mbox{convex functions } g : \mathbb{R} \to \left]-\infty,\infty\right] \Bigr\} . \end{eqnarray*} The paper is organized as follows. In Section~\ref{GGiso Levy} we treat the case ${\mathcal G} = {\mathcal G}^{}_\uparrow$ and measure the quality of a confidence band $(\hat\ell, \hat u)$ by quantities related to the Levy distance $d_{\rm L}(\hat\ell,\hat u)$. Generally, $$ d_{\rm L}(g,h) \ := \ \inf \Bigl\{ \epsilon > 0 : g \le h(\cdot + \epsilon) + \epsilon \mbox{ and } h \le g(\cdot + \epsilon) + \epsilon \mbox{ on } [0,1-\epsilon] \Bigr\} $$ for isotonic functions $g,h : [0,1] \to [-\infty,\infty]$. It turns out that a confidence band which is based on a suitable multiscale test as introduced by D\"umbgen and Spokoiny~(2001) is asymptotically optimal in a strong sense. Throughout this paper asymptotic statements refer to $n\to\infty$, unless stated otherwise. In Section~\ref{GG smooth} we treat both classes ${\mathcal G}^{}_\uparrow$ and ${\mathcal G}^{}_{\rm conv}$ simultaneously. We discuss the construction of confidence bands $(\hat\ell,\hat u)$ satisfying (\ref{Confidence}) such that $D(\hat\ell,f)$ and $D(f,\hat u)$ are as small as possible whenever $f$ satisfies some additional smoothness constraints. Here $D(g,h)$ is a distance measure of the form $$ D(g,h) \ := \ \sup_{x \in [0,1]} \, w(x,f) (h(x) - g(x)) $$ for some weight function $w(\cdot,f) \ge 0$ reflecting local smoothness properties of $f$. 
Again it turns out that suitable multiscale procedures yield nearly optimal confidence bands without additional prior information on $f$. In Section~\ref{Examples} we present some numerical examples for the procedures of Section~\ref{GG smooth}. The proofs are deferred to Sections~\ref{Proofs}, \ref{Decision Theory} and \ref{Optimization}. In particular, Section~\ref{Decision Theory} contains a new minimax bound for confidence rectangles in a gaussian shift model, which may be of independent interest. As for the white noise model, the results of Brown and Low~(1996), Nussbaum~(1996) and Grama and Nussbaum~(1998) on asymptotic equivalence can be used to transfer the lower bounds of the present paper to other models. Moreover, one can mimic the confidence bands developed here in traditional regression models under minimal assumptions; see D\"umbgen and Johns~(2004) and D\"umbgen~(2007). \section{Optimality for isotonic functions in terms of L\'{e}vy type distances} \label{GGiso Levy} In this section we consider the class ${\mathcal G}^{}_\uparrow$. For isotonic functions $g,h : [0,1] \to [-\infty,\infty]$ and $\epsilon > 0$ let $$ D_\epsilon(g,h) \ := \ \inf \Bigl\{ \lambda \ge 0 : g \le h(\cdot + \epsilon) + \lambda \mbox{ and } h \le g(\cdot + \epsilon) + \lambda \mbox{ on } [0,1-\epsilon] \Bigr\} . $$ Then the L\'{e}vy distance $d_{\rm L}(g,h)$ is the infimum of all $\epsilon > 0$ such that $D_\epsilon(g,h) \le \epsilon$. We use these functionals $D_\epsilon(\cdot,\cdot)$ in order to quantify differences between isotonic functions. Figure~\ref{Levy-Band} depicts one such function $g$, and the shaded areas represent the sets of all functions $h$ with $D_{0.05}(g,h) \le 0.1$ and $D_{0.05}(g,h) \le 0.025$, respectively. \begin{figure}[h] \centering \includegraphics[width=7cm,height=9cm]{LevyNbsA} \hfill \includegraphics[width=7cm,height=9cm]{LevyNbsB} \caption{Two $D_{0.05}(\cdot,\cdot)$--neighborhoods of some function $g$.} \label{Levy-Band} \end{figure} The next theorem provides lower bounds for $D_\epsilon(\hat\ell, \hat u)$, $0 < \epsilon \le 1$. Here and throughout the sequel the dependence of probabilities, expectations and distributions on the functional parameter $f$ is sometimes indicated by a subscript $f$. \begin{Theorem} \label{Lower Bounds GGiso} There exists a universal function $b$ on $\left]0,1\right]$ with $\lim_{\epsilon \downarrow 0} b(\epsilon) = 0$ such that $$ \inf_{f \in {\mathcal G}^{}_\uparrow \cap L^2[0,1]} \, \mathrm{I\!\!P}_f \left\{ \hat\ell \le f \le \hat u \mbox{ and } D_\epsilon(\hat\ell, \hat u) < { (8 \log(e/\epsilon))^{1/2} - b(\epsilon) \over (n\epsilon)^{1/2}} \right\} \ \le \ b(\epsilon) $$ for any confidence band $(\hat\ell, \hat u)$ and arbitrary $\epsilon \in \left]0,1\right]$. \end{Theorem} Theorem~\ref{Lower Bounds GGiso} entails a lower bound for $d_{\rm L}(\hat\ell,\hat u)$. For let $\epsilon = \epsilon_n := c \, (\log(n)/n)^{1/3} - \delta n^{-1/3}$ with any fixed $c, \delta > 0$. Then one can show that for sufficiently large $n$, $$ {(8 \log(e/\epsilon))^{1/2} - b(\epsilon) \over (n\epsilon)^{1/2}} \ = \ \Bigl( {8 \over 3c} \Bigr)^{1/2} \Bigl( {\log n\over n} \Bigr)^{1/3} + o(n^{-1/3}) \ \ge \ \epsilon , $$ provided that $c$ equals $(8/3)^{1/3} \approx 1.387$.
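As an aside, the functionals $D_\epsilon(\cdot,\cdot)$ and $d_{\rm L}(\cdot,\cdot)$ are straightforward to evaluate numerically. The following Python sketch is not part of the original paper's software; it assumes isotonic functions tabulated on a regular grid of $[0,1]$ and treats them as piecewise constant.
\begin{verbatim}
import numpy as np

def D_eps(g, h, eps, n_grid):
    # D_eps(g, h): smallest lambda >= 0 with g <= h(. + eps) + lambda and
    # h <= g(. + eps) + lambda on [0, 1 - eps]; g, h are values on a grid of
    # n_grid equally spaced points in [0, 1].  The shift by eps is rounded
    # down to whole grid steps, which is conservative for isotonic g and h.
    shift = int(np.floor(eps * (n_grid - 1)))
    if shift >= n_grid:
        return 0.0
    lam1 = np.max(g[:n_grid - shift] - h[shift:])   # sup of g(x) - h(x + eps)
    lam2 = np.max(h[:n_grid - shift] - g[shift:])   # sup of h(x) - g(x + eps)
    return max(lam1, lam2, 0.0)

def levy_distance(g, h, n_grid):
    # d_L(g, h): smallest candidate eps with D_eps(g, h) <= eps
    # (grid search; capped at 1 in this sketch).
    for eps in np.linspace(1.0 / n_grid, 1.0, n_grid):
        if D_eps(g, h, eps, n_grid) <= eps:
            return eps
    return 1.0

grid = np.linspace(0.0, 1.0, 501)
g = np.where(grid < 0.50, 0.0, 1.0)   # a step function
h = np.where(grid < 0.55, 0.0, 1.0)   # the same step, shifted by 0.05
print(D_eps(g, h, 0.05, len(grid)), levy_distance(g, h, len(grid)))
\end{verbatim}
In this example $D_{0.05}(g,h) = 0$, and the reported L\'{e}vy distance is approximately $0.05$, reflecting that a horizontal shift by $0.05$ aligns the two step functions.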
\begin{Corollary} For each $n \ge 1$ there exists a universal constant $\beta_n$ such that $\beta_n\to 0$ and $$ \inf_{f \in {\mathcal G}^{}_\uparrow \cap L^2[0,1]} \, \mathrm{I\!\!P}_f \left\{ \hat\ell \le f \le \hat u \mbox{ and } d_{\rm L}(\hat\ell, \hat u) < \Bigl( {8\over 3} \Bigr)^{1/3} \Bigl( {\log n\over n} \Bigr)^{1/3} - \beta_n n^{-1/3} \right\} \ \le \ \beta_n $$ for any confidence band $(\hat\ell, \hat u)$. \hfill $\Box$ \end{Corollary} It is possible to get close to these lower bounds for $D_\epsilon(\hat\ell, \hat u)$ {\sl simultaneously} for all $\epsilon \in \left]0,1\right]$ while (\ref{Confidence}) is satisfied. For let $\kappa_\alpha$ be a real number such that $$ \mathrm{I\!\!P} \left\{ {|W(t) - W(s)| \over (t - s)^{1/2}} \le \Gamma(t - s) + \kappa_\alpha \mbox{ for } 0 \le s < t \le 1 \right\} \ \ge \ 1 - \alpha , $$ where $$ \Gamma(u) \ := \ (2 \log(e/u))^{1/2} \quad\mbox{for } 0 < u \le 1 . $$ The existence of such a critical value $\kappa_\alpha$ follows from D\"umbgen and Spokoiny~(2001, Theorem~2.1). With the local averages $$ F_f(s,t) \ := \ {1 \over t - s} \int_s^t f(x) \, dx $$ of $f$ and their natural estimators $$ \hat F(s,t) \ := \ {Y(t) - Y(s) \over n^{1/2}(t - s)} $$ it follows that $$ \mathrm{I\!\!P}_f \left\{ \Bigl| \hat F(s,t) - F_f(s,t) \Bigr| \le {\Gamma(t - s) + \kappa_\alpha \over (n(t - s))^{1/2}} \mbox{ \ for } 0 \le s < t \le 1 \right\} \ \ge \ 1 - \alpha . $$ But for $0 \le s < t \le 1$, $$ f(s) \ \le \ F_f(s,t) \ \le \ f(t) \quad\mbox{whenever $f \in {\mathcal G}^{}_\uparrow$} . $$ This implies the first assertion of the following theorem. \begin{Theorem} \label{Upper Bounds GGiso} With the critical value $\kappa_\alpha$ above let \begin{eqnarray*} \hat\ell(x) & := & \sup_{0 \le s < t \le x} \, \Bigl( \hat F(s,t) - {\Gamma(t - s) + \kappa_\alpha \over \sqrt{n(t - s)}} \Bigr) , \\ \hat u(x) & := & \inf_{x \le s < t \le 1} \, \Bigl( \hat F(s,t) + {\Gamma(t - s) + \kappa_\alpha \over \sqrt{n(t - s)}} \Bigr) . \end{eqnarray*} This defines a confidence band $(\hat\ell, \hat u)$ for $f$ satisfying (\ref{Confidence}) with ${\mathcal F} = {\mathcal G}^{}_\uparrow \cap L^2[0,1]$. Moreover, in case of $\hat\ell \le \hat u$, \begin{eqnarray*} D_\epsilon(\hat\ell, \hat u) & \le & {(8 \log(e/\epsilon))^{1/2} + 2 \kappa_\alpha \over (n\epsilon)^{1/2}} \quad\mbox{for } 0 < \epsilon \le 1 , \\ d_{\rm L}(\hat\ell, \hat u) & \le & \Bigl( {8\over 3} \Bigr)^{1/3} \Bigl( {\log n\over n} \Bigr)^{1/3} + o(n^{-1/3}) . \end{eqnarray*} \end{Theorem} \noindent {\bf Proof.} The preceding upper bound for $D_\epsilon(\hat\ell, \hat u)$ follows from the fact that for any $x \in [0,1-\epsilon]$, \begin{eqnarray*} \hat u(x) - \hat\ell(x+\epsilon) & \le & \Bigl( \hat F(x,x+\epsilon) + {\Gamma(\epsilon) + \kappa_\alpha \over (n\epsilon)^{1/2}} \Bigr) - \Bigl( \hat F(x,x+\epsilon) - {\Gamma(\epsilon) + \kappa_\alpha \over (n\epsilon)^{1/2}} \Bigr) \\ & = & {2 \Gamma(\epsilon) + 2 \kappa_\alpha \over (n\epsilon)^{1/2}} \\ & = & {(8 \log(e/\epsilon))^{1/2} + 2 \kappa_\alpha \over (n\epsilon)^{1/2}} . \end{eqnarray*} Letting $\epsilon = \epsilon_n = (8/3)^{1/3} (\log(n)/n)^{1/3}$ yields the upper bound for $d_{\rm L}(\hat\ell, \hat u)$. \hfill $\Box$ \section{Bands for potentially smooth functions} \label{GG smooth} A possible criticism of the preceding results is the fact that the minimax bounds are attained at special step functions. On the other hand one often expects the underlying curve $f$ to be smooth in some vague sense.
Therefore we aim now at confidence bands satisfying (\ref{Confidence}) with ${\mathcal F} = {\mathcal G} \cap L^2[0,1]$, which are as small as possible whenever $f$ satisfies some additional smoothness conditions. Throughout ${\mathcal G}$ stands for ${\mathcal G}^{}_\uparrow$ or ${\mathcal G}^{}_{\rm conv}$. In the sequel let $\langle g,h \rangle := \int_{-\infty}^\infty g(x)h(x) \, dx$ and $\|g\| := \langle g,g\rangle^{1/2}$ for measurable functions $g,h$ on the real line such that these integrals are defined. The confidence bands to be presented here can be described either in terms of kernel estimators for $f$ or in terms of tests. Both viewpoints have their own merits. \subsection{Kernel estimators for $f$} Let $\psi$ be some kernel function in $L^2(\mathbb{R})$. For technical reasons we assume that $\psi$ satisfies the following three regularity conditions: \begin{equation} \left\{\begin{array}{l} \mbox{$\psi$ has bounded total variation}; \\ \mbox{$\psi$ is supported by $[-a,b]$, where $a,b \ge 0$}; \\ \langle 1,\psi \rangle \ > \ 0 . \end{array}\right. \label{Regularity} \end{equation} For any bandwidth $h > 0$ and location parameter $t \in \mathbb{R}$ let $$ \psi_{h,t}(x) \ := \ \psi \Bigl( {x - t \over h} \Bigr) . $$ Then $\langle g, \psi_{h,t}\rangle = h \, \langle g(t + h \, \cdot), \psi\rangle$ and $\|\psi_{h,t}\| = h^{1/2} \|\psi\|$. A kernel estimator for $f(t)$ with kernel function $\psi$ and bandwidth $h$ is given by $$ \hat f_h(t) \ := \ {\psi Y(h,t) \over n^{1/2} h \, \langle 1,\psi\rangle} , $$ where $$ \psi Y(h,t) \ := \ \int_0^1 \psi_{h,t}(x) \, dY(x) . $$ From now on suppose that $ah \le t \le 1 - bh$. Then $\psi_{h,t}$ is supported by $[0,1]$ and one may write \begin{eqnarray*} \mathrm{I\!\!E} \hat f_h(t) & = & {\langle f, \psi_{h,t}\rangle \over h \, \langle 1,\psi\rangle} \ = \ {\langle f(t + h\,\cdot), \psi\rangle \over \langle 1,\psi\rangle} , \\ \mathrm{Var}(\hat f_h(t)) & = & {\|\psi_{h,t}\|^2 \over n h^2 \langle 1,\psi\rangle^2} \ = \ {\|\psi\|^2 \over nh \, \langle 1,\psi\rangle^2} . \end{eqnarray*} The random fluctuations of these kernel estimators can be bounded uniformly in $h > 0$. For that purpose we define the multiscale statistic \begin{eqnarray*} T(\pm\psi) & := & \sup_{h > 0} \, \sup_{t \in [ah,1-bh]} \Bigl( {\pm \psi W(h,t) \over h^{1/2} \|\psi\|} - \Gamma((a+b)h) \Bigr) \\ & = & \sup_{h > 0} \, \sup_{t \in [ah,1-bh]} \Bigl( \pm \, {\hat f_h(t) - \mathrm{I\!\!E} \hat f_h(t) \over \mathrm{Var}(\hat f_h(t))^{1/2}} - \Gamma((a+b)h) \Bigr) , \end{eqnarray*} similarly as in D\"umbgen and Spokoiny~(2001). It follows from Theorem~2.1 in the latter paper that $0 \le T(\pm\psi) < \infty$ almost surely. In particular, $|\hat f_h(t) - \mathrm{I\!\!E} \hat f_h(t)| \le (nh)^{-1/2} \log(e/h)^{1/2} O_p(1)$, uniformly in $h > 0$ and $ah \le t \le 1 - bh$. It is well-known that kernel estimators are biased in general. But our shape restrictions may be used to construct two kernel estimators whose bias is always non-positive or non-negative, respectively. Precisely, let $\psi^{(\ell)}$ and $\psi^{(u)}$ be two kernel functions satisfying (\ref{Regularity}) with respective supports $[-a^{(\ell)}, b^{(\ell)}]$ and $[-a^{(u)}, b^{(u)}]$.
In addition suppose that \begin{eqnarray} \langle g, \psi^{(\ell)}\rangle & \le & g(0) \langle 1,\psi^{(\ell)}\rangle \quad\mbox{for all } g \in {\mathcal G} \cap L^2[-a^{(\ell)},b^{(\ell)}] , \label{OrthogonalityL} \\ \langle g, \psi^{(u)}\rangle & \ge & g(0) \langle 1,\psi^{(u)}\rangle \quad\mbox{for all } g \in {\mathcal G} \cap L^2[-a^{(u)},b^{(u)}] . \label{OrthogonalityU} \end{eqnarray} These inequalities imply that the corresponding kernel estimators satisfy the inequalities $\mathrm{I\!\!E} \hat f_h^{(\ell)}(t) \le f(t) \le \mathrm{I\!\!E} \hat f_h^{(u)}(t)$, and the definition of $T(\pm\psi)$ yields that \begin{eqnarray} f(t) & \ge & \hat f^{(\ell)}_h(t) - {\|\psi^{(\ell)}\| \Bigl( \Gamma(d^{(\ell)} h) + T(\psi^{(\ell)}) \Bigr) \over \langle 1,\psi^{(\ell)}\rangle (nh)^{1/2}} , \label{Lower Bound} \\ f(t) & \le & \hat f^{(u)}_h(t) + {\|\psi^{(u)}\| \Bigl( \Gamma(d^{(u)} h) + T(- \psi^{(u)}) \Bigr) \over \langle 1,\psi^{(u)}\rangle (nh)^{1/2}} . \label{Upper Bound} \end{eqnarray} Here $d^{(z)} := a^{(z)} + b^{(z)}$. Now let $\kappa_\alpha$ be the $(1-\alpha)$--quantile of the combined statistic $T^* := \max \Bigl( T(\psi^{(\ell)}), T(-\psi^{(u)}) \Bigr)$, i.e. the smallest real number such that $\mathrm{I\!\!P}\{T^* \le \kappa_\alpha\} \ge 1 - \alpha$. Then \begin{eqnarray*} \hat\ell(t) & := & \sup_{h > 0 \ : \ t \in \left[a^{(\ell)}h,1-b^{(\ell)}h\right]} \left( \hat f_h^{(\ell)}(t) - {\|\psi^{(\ell)}\| (\Gamma(d^{(\ell)} h) + \kappa_\alpha) \over \langle 1,\psi^{(\ell)}\rangle (nh)^{1/2}} \right) , \\ \hat u(t) & := & \inf_{h > 0 \ : \ t \in \left[a^{(u)}h,1-b^{(u)}h\right]} \left( \hat f_h ^{(u)}(t) + {\|\psi^{(u)}\| (\Gamma(d^{(u)} h) + \kappa_\alpha) \over \langle 1,\psi^{(u)}\rangle (nh)^{1/2}} \right) \end{eqnarray*} defines a confidence band $(\hat\ell,\hat u)$ for $f$ satisfying (\ref{Confidence}). Equality holds in (\ref{Confidence}) if ${\mathcal G} = {\mathcal G}^{}_\uparrow$ and $f$ is constant, or if ${\mathcal G} = {\mathcal G}^{}_{\rm conv}$ and $f$ is linear, provided that $\kappa_\alpha > 0$. For then it follows from (\ref{OrthogonalityL}) and (\ref{OrthogonalityU}) with $g(x) = \pm 1$ or $g(x) = \pm x$ that the kernel estimators are unbiased. Thus the inequality $\hat\ell \le f \le \hat u$ fails if, and only if, $T^* > \kappa_\alpha$. Moreover, using general theory for gaussian measures on Banach spaces one can show that the distribution of $T^*$ is continuous on $\left]0,\infty\right[$. Sufficient conditions for requirements~(\ref{OrthogonalityL}) and (\ref{OrthogonalityU}) in general are provided by Lemma~\ref{Ortho} in Section~\ref{Optimization}. The confidence band presented in Section~\ref{GGiso Levy} is a special case of the one derived here, if we define $\psi^{(\ell)}(x) := 1\{x \in [-1,0]\}$ and $\psi^{(u)}(x) := 1\{x \in [0,1]\}$ and apply postprocessing as described below. \subsection{Postprocessing of confidence bands} Any confidence band $(\hat\ell,\hat u)$ for $f$ can be enhanced, if we replace $\hat\ell(x)$ and $\hat u(x)$ with $$ \hat{\hat\ell}(x) \ := \ \inf \Bigl\{ g(x) : g \in {\mathcal G}, \hat\ell \le g \le \hat u \Bigr\} \quad\mbox{and}\quad \hat{\hat u}(x) \ := \ \sup \Bigl\{ g(x) : g \in {\mathcal G}, \hat\ell \le g \le \hat u \Bigr\} , $$ respectively. Here we assume tacitly that the set $\{g \in {\mathcal G} : \hat\ell \le g \le \hat u\}$ is nonempty.
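To make the construction concrete, here is a small Python sketch (a hypothetical discretization, not the author's software): it assembles the raw band $(\hat\ell,\hat u)$ from kernel estimators over finite grids of bandwidths and locations, using increments of $Y$ over $N$ equal cells of $[0,1]$ and a critical value $\kappa_\alpha$ that has to be supplied, e.g.~from a Monte Carlo approximation of the distribution of $T^*$. The isotonic version of the postprocessing just defined reduces to running maxima and minima, as made explicit in the next paragraph, and is included at the end.
\begin{verbatim}
import numpy as np

def Gamma(u):
    # Gamma(u) = (2 log(e/u))^{1/2} for 0 < u <= 1
    return np.sqrt(2.0 * np.log(np.e / u))

def kernel_band(dY, n, psi_l, a_l, b_l, psi_u, a_u, b_u, kappa, t_grid, h_grid):
    # Raw band (l_hat, u_hat) on t_grid.  dY: increments of Y over N equal cells
    # of [0,1]; psi_l, psi_u: kernels with supports [-a_l, b_l], [-a_u, b_u];
    # kappa: critical value (an approximation of the (1-alpha)-quantile of T*).
    N = len(dY)
    x = (np.arange(N) + 0.5) / N
    l_hat = np.full(len(t_grid), -np.inf)
    u_hat = np.full(len(t_grid), np.inf)
    for psi, a, b, lower in [(psi_l, a_l, b_l, True), (psi_u, a_u, b_u, False)]:
        z = np.linspace(-a, b, 4001)
        dz = (a + b) / 4000.0
        one_psi = np.sum(psi(z)) * dz                   # <1, psi>
        norm_psi = np.sqrt(np.sum(psi(z) ** 2) * dz)    # ||psi||
        for h in h_grid:
            if (a + b) * h > 1.0:
                continue
            pen = norm_psi * (Gamma((a + b) * h) + kappa) / (one_psi * np.sqrt(n * h))
            for j, t in enumerate(t_grid):
                if a * h <= t <= 1.0 - b * h:
                    fhat = np.sum(psi((x - t) / h) * dY) / (np.sqrt(n) * h * one_psi)
                    if lower:
                        l_hat[j] = max(l_hat[j], fhat - pen)   # sup over h
                    else:
                        u_hat[j] = min(u_hat[j], fhat + pen)   # inf over h
    return l_hat, u_hat

def postprocess_isotonic(l_hat, u_hat):
    # Enhancement for isotonic f: running max of l_hat, running min of u_hat
    # from the right (cf. the explicit formulas in the next paragraph).
    return np.maximum.accumulate(l_hat), np.minimum.accumulate(u_hat[::-1])[::-1]

# Hypothetical usage with the indicator kernels mentioned above:
#   psi_l = lambda s: ((s >= -1) & (s <= 0)).astype(float)   # support [-1, 0]
#   psi_u = lambda s: ((s >= 0) & (s <= 1)).astype(float)    # support [0, 1]
#   n, N = 500, 2000
#   f = lambda s: np.minimum(2.0 * s, 1.0)                   # some isotonic f
#   dY = np.sqrt(n) * f((np.arange(N) + 0.5) / N) / N + np.random.randn(N) / np.sqrt(N)
#   l, u = kernel_band(dY, n, psi_l, 1.0, 0.0, psi_u, 0.0, 1.0, 1.5,
#                      np.linspace(0.02, 0.98, 97), np.arange(1, 51) / 100.0)
\end{verbatim}
Of course, the finite grids and the Riemann approximations of $\langle 1,\psi\rangle$ and $\|\psi\|$ are only conveniences of this sketch; the band in the text takes suprema and infima over all bandwidths $h > 0$.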
In case of ${\mathcal G} = {\mathcal G}^{}_\uparrow$ one can easily show that $$ \hat{\hat\ell}(x) \ = \ \sup_{t \in [0,x]} \, \hat\ell(t) \quad\mbox{and}\quad \hat{\hat u}(x) \ = \ \inf_{s \in [x,1]} \, \hat u(s) . $$ Note also that $\hat{\hat\ell}$ and $\hat{\hat u}$ are isotonic, whereas the raw functions $\hat\ell$ and $\hat u$ need not be. In case of ${\mathcal G} = {\mathcal G}^{}_{\rm conv}$ the modified upper bound $\hat{\hat u}$ is the greatest convex minorant of $\hat u$ and can be computed (in discrete models) by means of the pool-adjacent-violators algorithm (cf. Robertson et al.~1988). The modified lower bound $\hat{\hat\ell}(x)$ can be shown to be $$ \hat{\hat\ell}(x) \ = \ \max \left\{ \sup_{0 \le s < t \le x} \Bigl( \hat{\hat u}(s) + {\hat\ell(t) - \hat{\hat u}(s)\over t-s} \, (x-s) \Bigr) , \sup_{x \le s < t \le 1} \Bigl( \hat{\hat u}(t) - {\hat{\hat u}(t) - \hat\ell(s)\over t-s} \, (t-x) \Bigr) \right\} . $$ This improved bound $\hat{\hat\ell}$ is not a convex function, though more regular than the raw function $\hat\ell$. Figure~\ref{UpdateBand} depicts some hypothetical confidence band $(\hat\ell,\hat u)$ for a function $f \in {\mathcal G}^{}_{\rm conv}$ and its improvement $(\hat{\hat\ell},\hat{\hat u})$. \begin{figure}[h] \centering \includegraphics[height=7cm,width=10cm]{UpdateBand} \caption{Improvement $(\hat{\hat\ell},\hat{\hat u})$ of a band $(\hat\ell,\hat u)$ if ${\mathcal G} = {\mathcal G}^{}_{\rm conv}$.} \label{UpdateBand} \end{figure} \subsection{Adaptivity in terms of rates} Whenever we construct a band following the recipe above we end up with a confidence band adapting to the unknown smoothness of $f$ in terms of rates of convergence. For $\beta, L > 0$ the H\"older smoothness class ${\mathcal H}_{\beta,L}$ is defined as follows: In case of $0 < \beta \le 1$ let $$ {\mathcal H}_{\beta,L} \ := \ \Bigl\{ g : |g(x) - g(y)| \le L |x-y|^\beta \mbox{ for all } x,y \Bigr\} . $$ In case of $1 < \beta \le 2$ let $$ {\mathcal H}_{\beta,L} \ := \ \Bigl\{ g \in {\mathcal C}^1 : g' \in {\mathcal H}_{\beta-1,L} \Bigr\} . $$ \begin{Theorem} \label{Optimal Rates} Suppose that $f \in {\mathcal G} \cap {\mathcal H}_{\beta,L}$, where either ${\mathcal G} = {\mathcal G}^{}_\uparrow$ and $\beta \le 1$, or ${\mathcal G} = {\mathcal G}^{}_{\rm conv}$ and $1 \le \beta \le 2$. Let $(\hat\ell,\hat u)$ be the confidence band for $f$ based on test functions $\psi^{(\ell)}, \psi^{(u)}$ as described previously. Then there exists a constant $\Delta$ depending only on $(\beta,L)$ and $(\psi^{(\ell)},\psi^{(u)})$ such that $$ \sup_{t \in [\epsilon_n,1-\epsilon_n]} \Bigl( \hat u(t) - \hat\ell(t) \Bigr) \ \le \ \Delta \rho_n \, \Bigl( 1 + {\kappa_\alpha + T(\psi^{(u)}) + T(-\psi^{(\ell)}) \over \log(en)^{1/2}} \Bigr) , $$ where $\epsilon_n := \rho_n^{1/\beta}$ and $$ \rho_n \ := \ \Bigl( {\log(en) \over n} \Bigr)_{}^{\beta/(2\beta+1)} . $$ \end{Theorem} Using the same arguments as Khas'minskii~(1978) one can show that for any $0\le r < s \le 1$, $$ \inf_{f \in {\mathcal G} \cap {\mathcal H}_{\beta,L}} \, \mathrm{I\!\!P}_f \left\{ \sup_{t \in [r,s]} (\hat u(t) - \hat\ell(t)) \le \Delta \rho_n \right\} \ \to \ 0 , $$ provided that $\Delta > 0$ is sufficiently small. Thus our confidence bands adapt to the unknown smoothness of $f$. \subsection{Testing hypotheses about $f(t)$} In order to find suitable kernel functions $\psi^{(\ell)}, \psi^{(u)}$ we proceed similarly as D\"umbgen and Spokoiny~(2001, Section~3.2). 
That means we temporarily consider tests of the null hypothesis $$ {\mathcal F}_o \ := \ \Bigl\{ f \in {\mathcal G} \cap L^2[0,1] : f(t) \le r-\delta \Bigr\} $$ versus the alternative hypothesis $$ {\mathcal F}_A \ := \ \Bigl\{ f \in {\mathcal G} \cap {\mathcal H}_{k,L} : f(t) \ge r \Bigr\} . $$ Here $t \in [0,1]$, $r \in \mathbb{R}$ and $L,\delta > 0$ are arbitrary fixed numbers, while \begin{equation} ({\mathcal G},k) \ =\ ({\mathcal G}^{}_\uparrow,1) \quad\mbox{or}\quad ({\mathcal G},k) \ = \ ({\mathcal G}^{}_{\rm conv},2) . \label{GGk} \end{equation} Note that ${\mathcal F}_o$ and ${\mathcal F}_A$ are closed, convex subsets of $L^2[0,1]$. Suppose that there are functions $f_o\in{\mathcal F}_o$ and $f_A\in{\mathcal F}_A$ such that $$ \int_0^1 (f_o-f_A)(x)^2 \, dx \ = \ \min_{g_o\in{\mathcal F}_o, \, g_A\in{\mathcal F}_A} \int_0^1 (g_o-g_A)(x)^2 \, dx . $$ Then optimal tests of ${\mathcal F}_o$ versus ${\mathcal F}_A$ are based on the linear test statistic $\int_0^1 (f_A-f_o) \, dY$, where critical values have to be computed under the assumption $f=f_o$. The problem of finding such functions $f_o,f_A$ is treated in Section~\ref{Optimization}. Here is the conclusion: Let \begin{equation} \psi^{(\ell)}(x) \ := \ \left\{\begin{array}{ll} 1\{x \in [-1,0]\} (1 + x) & \mbox{if } {\mathcal G} = {\mathcal G}^{}_\uparrow , \\ 1\{x \in [-2,2]\} \Bigl( 1 - (3/2)|x| + x^2/2 \Bigr) & \mbox{if } {\mathcal G} = {\mathcal G}^{}_{\rm conv} . \end{array}\right. \label{PsiL} \end{equation} Then the functions \begin{equation} f_A(s) \ := \ \left\{\begin{array}{cl} r + L(s-t) & \mbox{if } {\mathcal G} = {\mathcal G}^{}_\uparrow \\ r + L(s-t)^2/2 & \mbox{if } {\mathcal G} = {\mathcal G}^{}_{\rm conv} \end{array}\right. \label{Least favorable fA} \end{equation} and $$ f_o \ := \ f_A - \delta \psi^{(\ell)}_{h,t} \quad\mbox{with } h := (\delta/L)^{1/k} $$ solve our minimization problem, provided that $a^{(\ell)}h \le t \le 1 - b^{(\ell)}h$. Thus the optimal linear test statistic may be written as $\int_0^1 \psi_{h,t} \, dY = \psi Y(h,t)$. Elementary considerations show that the inequality $$ \hat f_h^{(\ell)}(t) - {\|\psi^{(\ell)}\| (\Gamma(d^{(\ell)} h) + \kappa_\alpha) \over \langle 1,\psi^{(\ell)}\rangle (nh)^{1/2}} \ \le \ r_o $$ is equivalent to \begin{eqnarray*} \psi Y(h,t) & \le & n^{1/2} h r_o \langle 1, \psi^{(\ell)}\rangle + h^{1/2} \|\psi^{(\ell)}\| (\Gamma(d^{(\ell)} h) + \kappa_\alpha) \\ & = & \mathrm{I\!\!E}_{f_o}(\psi Y(h,t)) + \mathrm{Var}(\psi Y(h,t))^{1/2} (\Gamma(d^{(\ell)} h) + \kappa_\alpha) . \end{eqnarray*} Thus our lower confidence bound $\hat\ell$ may be interpreted as a multiple test of all null hypotheses $\{f \in {\mathcal G} : f(t) \le r_o\}$ with $t \in [0,1]$ and $r_o \in \mathbb{R}$. Analogous considerations yield a candidate for $\psi^{(u)}$: Let $$ {\mathcal F}_o \ := \ \Bigl\{ f \in {\mathcal G} \cap L^2[0,1] : f(t) \ge r+\delta \Bigr\} $$ and $$ {\mathcal F}_A \ := \ \Bigl\{ f \in {\mathcal G} \cap {\mathcal H}_{k,L} : f(t) \le r \Bigr\} . $$ Then the function $f_A$ in (\ref{Least favorable fA}) and $$ f_o \ := \ f_A + \delta \psi^{(u)}_{h,t} \quad\mbox{with } h := (\delta/L)^{1/k} $$ form a least favorable pair $(f_o,f_A)$ in ${\mathcal F}_o \times {\mathcal F}_A$, where \begin{equation} \psi^{(u)}(x) \ := \ \left\{\begin{array}{ll} 1\{x \in [0,1]\} (1 - x) & \mbox{if } {\mathcal G} = {\mathcal G}^{}_\uparrow , \\ 1\{x \in [-2^{1/2},2^{1/2}]\} (1 - x^2/2) & \mbox{if } {\mathcal G} = {\mathcal G}^{}_{\rm conv} . \end{array}\right.
\label{PsiU} \end{equation} Figures~\ref{OptKernelsGGiso} and \ref{OptKernelsGGconv} depict the functions $\psi^{(\ell)}$ in (\ref{PsiL}) and $\psi^{(u)}$ in (\ref{PsiU}). \begin{figure}[h] \includegraphics[height=6cm,width=7cm]{PsiLGGiso} \hfill \includegraphics[height=6cm,width=7cm]{PsiUGGiso} \caption{Kernel functions $\psi^{(\ell)}, \psi^{(u)}$ for ${\mathcal G}^{}_\uparrow$.} \label{OptKernelsGGiso} \end{figure} \begin{figure}[h] \includegraphics[height=6cm,width=7cm]{PsiLGGconv} \hfill \includegraphics[height=6cm,width=7cm]{PsiUGGconv} \caption{Kernel functions $\psi^{(\ell)}, \psi^{(u)}$ for ${\mathcal G}^{}_{\rm conv}$.} \label{OptKernelsGGconv} \end{figure} \subsection{Optimal constants and local adaptivity} Now we are going to show that our multiscale confidence band $(\hat\ell, \hat u)$, if constructed with the kernel functions in (\ref{PsiL}) and (\ref{PsiU}), is locally adaptive in a certain sense. Precisely, we consider an arbitrary fixed function $f_o \in {\mathcal G}\cap{\mathcal C}^k$ with $({\mathcal G},k)$ as specified in (\ref{GGk}). We analyze quantities such as $$ \|(\hat u - f_o) w\|^{+}_{r,s} \quad\mbox{and}\quad \|(f_o - \hat\ell) w\|^{+}_{r,s} , $$ where $w$ is some positive weight function on the unit interval and $$ \|g\|^{+}_{r,s} \ := \ \sup_{t \in [r,s]} \, g(t) . $$ The function $w$ should reflect local smoothness properties of $f_o$ in an appropriate way. The following theorem demonstrates that the $k$--th derivative of $f_o$, denoted by $\nabla^k f_o$, plays a crucial role. \begin{Theorem} \label{Adapt I} For arbitrary fixed numbers $0 \le r < s \le 1$ let $$ L \ := \ \max_{t \in [r,s]} \, \nabla^k f_o(t) . $$ Then for any $\gamma\in\left]0,1\right[$, \begin{eqnarray*} \inf_{(\hat\ell,\hat u)} \, \mathrm{I\!\!P}_{f_o} \Bigl\{ \|f - \hat\ell\|^{+}_{r,s} \ge \gamma \Delta^{(\ell)} L^{1/(2k+1)} \rho_n \Bigr\} & \ge & 1 - \alpha + o(1) , \\ \inf_{(\hat\ell,\hat u)} \, \mathrm{I\!\!P}_{f_o} \Bigl\{ \|\hat u - f\|^{+}_{r,s} \ge \gamma \Delta^{(u)} L^{1/(2k+1)} \rho_n \Bigr\} & \ge & 1 - \alpha + o(1) , \end{eqnarray*} where both infima are taken over all confidence bands $(\hat\ell,\hat u)$ satisfying (\ref{Confidence}), and \begin{eqnarray*} \Delta^{(z)} & := & \Bigl( (k+1/2) \|\psi^{(z)}\|^2 \Bigr)^{-k/(2k+1)} , \\ \rho_n & := & \Bigl( {\log(en)\over n} \Bigr)^{k/(2k+1)} . \end{eqnarray*} \end{Theorem} In case of ${\mathcal G}={\mathcal G}^{}_\uparrow$, the critical constants are $\Delta^{(\ell)} = \Delta^{(u)} = 2^{1/3} \approx 1.260$. In case of ${\mathcal G}={\mathcal G}^{}_{\rm conv}$, $$ \Delta^{(\ell)} \ = \ (3/4)^{2/5} \ \approx \ 0.891 \quad\mbox{and}\quad \Delta^{(u)} \ = \ 3^{2/5}/128^{1/5} \ \approx \ 0.588 . $$ This indicates that bounding a convex function from below is more difficult than finding an upper bound. In view of Theorem~\ref{Adapt I} we introduce for arbitrary fixed $\epsilon > 0$ the weight function $$ w_\epsilon \ := \ \Bigl( \max(\nabla^k f_o,\epsilon) \Bigr)^{-1/(2k+1)} $$ reflecting the local smoothness of $f_o$. The next theorem shows that our particular confidence band $(\hat\ell,\hat u)$ attains the lower bounds of Theorem~\ref{Adapt I} pointwise. Suprema such as $\|(f_o - \hat\ell) w_\epsilon\|^{+}_{r,s}$ and $\|(\hat u - f_o) w_\epsilon\|^{+}_{r,s}$ attain their respective lower bounds $\Delta^{(\ell)}$, $\Delta^{(u)}$ up to a multiplicative factor $2^{k/(k+1/2)} + o_p(1)$. \begin{Theorem} \label{Adapt II} Let $(\hat\ell,\hat u)$ be the confidence band based on the kernel functions in (\ref{PsiL}) and (\ref{PsiU}). 
If $f = f_o$, then for arbitrary $\epsilon > 0$ and any $t\in\left]0,1\right[$, \begin{eqnarray*} (f_o - \hat\ell)(t) w_\epsilon(t) & \le & \left( \Delta^{(\ell)} + o_p(1) \right) \rho_n , \\ (\hat u - f_o)(t) w_\epsilon(t) & \le & \left( \Delta^{(u)} + o_p(1) \right) \rho_n . \end{eqnarray*} Moreover, \begin{eqnarray*} \|(f_o - \hat\ell) w_\epsilon\|^{+}_{\epsilon,1-\epsilon} & \le & \left( 2^{k/(k+1/2)} \Delta^{(\ell)} + o_p(1) \right) \rho_n , \\ \|(\hat u - f_o) w_\epsilon \|^{+}_{\epsilon,1-\epsilon} & \le & \left( 2^{k/(k+1/2)} \Delta^{(u)} + o_p(1) \right) \rho_n . \end{eqnarray*} \end{Theorem} If we used kernel functions differing from (\ref{PsiL}) and (\ref{PsiU}), then pointwise optimality would be lost, and the constants for the supremum distances would get worse. \section{Simulations and numerical examples} \label{Examples} Here we demonstrate the performance of the procedures in Section~\ref{GG smooth}. We replace the continuous white noise model with a discrete one: Suppose that one observes a random vector $\vec{Y} \in \mathbb{R}^n$ with components \begin{equation} Y_i \ = \ f(x_i) + \epsilon_i , \label{WN Regression} \end{equation} where $x_i := (i-1/2)/n$, and the random errors $\epsilon_i$ are independent with Gaussian distribution ${\mathcal N}(0,\sigma^2)$. Our kernel functions $\psi^{(\ell)}$ and $\psi^{(u)}$ are rescaled as follows: \begin{eqnarray*} \psi^{(\ell)}(x) & := & \left\{\begin{array}{ll} 1\{x \in [-1,0]\} (1 + x) & \mbox{if } {\mathcal G} = {\mathcal G}^{}_\uparrow , \\ 1\{x \in [-1,1]\} \Bigl( 1 - 3 |x| + 2 x^2 \Bigr) & \mbox{if } {\mathcal G} = {\mathcal G}^{}_{\rm conv} , \end{array}\right. \\ \psi^{(u)}(x) & := & \left\{\begin{array}{ll} 1\{x \in [0,1]\} (1 - x) & \mbox{if } {\mathcal G} = {\mathcal G}^{}_\uparrow , \\ 1\{x \in [-1,1]\} (1 - x^2) & \mbox{if } {\mathcal G} = {\mathcal G}^{}_{\rm conv} . \end{array}\right. \end{eqnarray*} Note that now $a^{(\ell)}, a^{(u)}, b^{(\ell)}, b^{(u)} \in \{0,1\}$. For convenience we compute kernel estimators and confidence bounds for $f$ only on the grid ${\mathcal T}_n:=\{1/n,2/n,\ldots,1 - 1/n\}$, while the bandwidth parameter $h$ is restricted to $$ H_n \ := \ \left\{\begin{array}{cl} \{1/n, 2/n, \ldots, 1\} & \mbox{if } {\mathcal G} = {\mathcal G}^{}_\uparrow , \\ \{1/n, 2/n, \ldots, \lfloor n/2\rfloor/n\} & \mbox{if } {\mathcal G} = {\mathcal G}^{}_{\rm conv} . \end{array}\right. $$ Let $\psi$ stand for $\psi^{(\ell)}$ or $\psi^{(u)}$ with support $[-a,b]$. Then for $h\in H_n$ and $t\in {\mathcal T}_n$ with $ah \le t \le 1 - bh$ we define $$ \psi\vec{Y}(h,t) \ := \ \sum_{i=1}^n \psi \Bigl( {x_i - t\over h} \Bigr) Y_i \ = \ \sum_{j=1 - anh}^{bnh} \psi \Bigl( {j - 1/2\over nh} \Bigr) Y_{nt+j} $$ and $$ \hat f_h(t) \ := \ {\psi\vec{Y}(h,t) \over S_{nh}} , $$ where $S_d$ stands for $\sum_{j=1-d}^d \psi((j-1/2)/d)$. The standard deviation of $\hat{f}_h(t)$ equals $\sigma_h := \sigma R_{nh}^{1/2}/S_{nh}$, where $R_d := \sum_{j=1-d}^d \psi((j-1/2)/d)^2$. Tedious but elementary calculations show that in case of ${\mathcal G} = {\mathcal G}^{}_\uparrow$, $$ S_d \ = \ d/2 \quad\mbox{and}\quad R_d \ = \ d/3 - 1/(12d) . $$ In case of ${\mathcal G} = {\mathcal G}^{}_{\rm conv}$, $$ \begin{array}{ccccccc} S^{(\ell)}_d & = & d/3 - 1/(3d) & \mbox{and} & R^{(\ell)}_d & = & 4d/15 - 1/(2d) + 7/(30d^3) , \\ S^{(u)}_d & = & 4d/3 + 1/(6d) & \mbox{and} & R^{(u)}_d & = & 16d/15 + 7/(120d^3) . 
\end{array} $$ Note that here $S^{(\ell)}_1 = 0 = \psi^{(\ell)}\vec{Y}(1/n,\cdot)$, whence the bandwidth $1/n$ is excluded from any computation involving $\psi^{(\ell)}$. As for the bias of these kernel estimators, one can deduce from Lemma~\ref{Ortho} that $\mathrm{I\!\!E} \hat f^{(\ell)}_h(t) \le f(t)$ and $\mathrm{I\!\!E} \hat f^{(u)}_h(t) \ge f(t)$ whenever $f\in{\mathcal G}$. Here is a discrete version of our multiscale test statistic: $T_n^* := \max \Bigl( T_n(\psi^{(\ell)}), T_n(- \psi^{(u)}) \Bigr)$, where $$ T_n(\pm\psi) \ := \ \max_{h\in H_n} \ \max_{t\in {\mathcal T}_n \cap [ah,1-bh]} \Bigl( \pm \sigma^{-1} R_{nh}^{-1/2} \psi\vec{E}(h,t) - \Gamma((a+b)h) \Bigr) $$ with $\vec{E} := (\epsilon_i)_{i=1}^n$. Let $\kappa_{\alpha,n}$ be the $(1-\alpha)$--quantile of $T_n^*$. Then \begin{eqnarray*} \hat\ell(t) & := & \max_{h\in H_n \, : \, t \in [a^{(\ell)}h, 1 - b^{(\ell)}h]} \Bigl( \hat{f}^{(\ell)}_h(t) - \sigma^{(\ell)}_h (\Gamma(d^{(\ell)} h) + \kappa_{\alpha,n}) \Bigr) , \\ \hat{u}(t) & := & \min_{h\in H_n \, : \, t \in [a^{(u)}h, 1 - b^{(u)}h]} \Bigl( \hat{f}^{(u)}_h(t) + \sigma^{(u)}_h (\Gamma(d^{(u)} h) + \kappa_{\alpha,n}) \Bigr) , \end{eqnarray*} defines a confidence band for $f$ such that $$ \mathrm{I\!\!P} \Bigl\{ \hat\ell \le f \le \hat{u} \mbox{ on } {\mathcal T}_n \Bigr\} \ \ge \ 1 - \alpha \quad\mbox{whenever } f \in {\mathcal G} . $$ Equality holds if ${\mathcal G} = {\mathcal G}^{}_\uparrow$ and $f$ is constant, or if ${\mathcal G} = {\mathcal G}^{}_{\rm conv}$ and $f$ is linear. If the noise variance $\sigma^2$ is unknown, it may be estimated as described in D\"umbgen and Spokoiny~(2001). Then, under moderate regularity assumptions on $f$, our confidence bands have {\sl asymptotic} coverage probability at least $1 - \alpha$ as $n$ tends to infinity. {\bf Critical values.} For various values of $n$ we estimated several quantiles $\kappa_{\alpha,n}$ in 9999 Monte-Carlo simulations; see Table~\ref{Simulations}. One can easily show that the critical value $\kappa_{\alpha,n}$ converges to the corresponding quantile $\kappa_\alpha$ for the continuous white noise model as $n \to \infty$. Software for the computation of critical values as well as confidence bands may be obtained from the author's URL. \begin{table}[h] \centerline{\begin{tabular}{|c||c|c|c||c|c|c|} \hline & \multicolumn{3}{|c||}{${\mathcal G}^{}_\uparrow$} & \multicolumn{3}{c|}{${\mathcal G}^{}_{\rm conv}$} \\ $n$ & $\kappa_{0.5,n}$ & $\kappa_{0.1,n}$ & $\kappa_{0.05,n}$ & $\kappa_{0.5,n}$ & $\kappa_{0.1,n}$ & $\kappa_{0.05,n}$ \\\hline\hline 100 & 0.330 & 1.092 & 1.349 & 0.350 & 1.053 & 1.283 \\\hline 200 & 0.433 & 1.146 & 1.392 & 0.430 & 1.121 & 1.342 \\\hline 300 & 0.475 & 1.169 & 1.416 & 0.470 & 1.126 & 1.342 \\\hline 400 & 0.507 & 1.204 & 1.446 & 0.489 & 1.128 & 1.340 \\\hline 500 & 0.526 & 1.222 & 1.450 & 0.512 & 1.143 & 1.358 \\\hline 700 & 0.570 & 1.252 & 1.492 & 0.536 & 1.162 & 1.380 \\\hline 1000 & 0.585 & 1.250 & 1.483 & 0.552 & 1.178 & 1.393 \\\hline \end{tabular}} \caption{Some critical values for the discrete white noise model} \label{Simulations} \end{table} {\bf Two numerical examples.} Figure~\ref{ExampleIso} shows a simulated data vector $\vec{Y}$ with $n = 500$ components together with the corresponding $95\%$--confidence band $(\hat\ell,\hat u)$ after postprocessing, where $f$ is assumed to be isotonic. The latter function is depicted as well. Note that the band is comparatively narrow in the middle of $\left]0,1/3\right[$, on which $f$ is constant. 
On $\left]1/3,1\right]$ the width $\hat u - \hat\ell$ tends to increase, as does $\nabla f$. These findings are in accordance with Theorem~\ref{Adapt II}. An analogous plot for a convex function $f$ can be seen in Figure~\ref{ExampleConv}. Note that the deviation $f - \hat\ell$ is mostly greater than $\hat u - f$, as predicted by Theorem~\ref{Adapt II}. \begin{figure} \centering \includegraphics[height=9cm,width=12cm]{IsoBand} \caption{Data $\vec{Y}$ and $95\%$--confidence band for $f \in {\mathcal G}^{}_\uparrow$.} \label{ExampleIso} \end{figure} \begin{figure} \centering \includegraphics[height=9cm,width=12cm]{ConvBand} \caption{Data $\vec{Y}$ and $95\%$--confidence band for $f \in {\mathcal G}^{}_{\rm conv}$.} \label{ExampleConv} \end{figure} \section{Proofs} \label{Proofs} \noindent {\bf Proof of Theorem~\ref{Lower Bounds GGiso}.} In order to prove lower bounds we construct unfavorable subfamilies of ${\mathcal G}^{}_\uparrow$ similarly as in Khas'minskii~(1978). For a given integer $m > 0$ we define $I_1 := [0,1/m]$ and $I_j := \left](j-1)/m, j/m\right]$ for $1 < j \le m$. Then we define step functions $g$ and $h^{}_\xi$ for $\xi \in \mathbb{R}^m$ via $$ g(t) \ := \ 2j - 1 \quad\mbox{and}\quad h^{}_\xi(t) \ := \ \xi_j \quad\mbox{for } t \in I_j, 1 \le j \le m . $$ For any $\delta > 0$ and $\xi \in [-\delta, \delta]^m$ the function $\delta g + h^{}_\xi$ is isotonic on $[0,1]$. Now we restrict our attention to the parametric submodel ${\mathcal F}_o = \Bigl\{ \delta g + h^{}_\xi : \xi \in [-\delta,\delta]^m \Bigr\}$ of ${\mathcal G}^{}_\uparrow \cap L^2[0,1]$. Any confidence band $(\hat\ell,\hat u)$ for $f = \delta g + h^{}_\xi$ defines a confidence set $S = S_1 \times S_2 \times \cdots \times S_m$ for $\xi$ via $$ S_j \ := \ \Bigl[ \sup_{t\in I_j} \, \hat\ell(t) - \delta (2j-1) , \inf_{t\in I_j} \, \hat u(t) - \delta (2j-1) \Bigr] . $$ Here $\hat\ell \le f \le \hat u$ if, and only if, $\xi \in S$. Moreover, $$ D_\epsilon(\hat\ell, \hat u) \ \ge \ \max_{j=1,\ldots,m} {\rm length}(S_j) \quad\mbox{for } 1/(m+1) \le \epsilon < 1/m . $$ However, \begin{eqnarray*} \log {d\mathrm{I\!\!P}_{\delta g + h^{}_\xi} \over d\mathrm{I\!\!P}_{\delta g}} (Y) & = & n^{1/2} \int_0^1 h^{}_\xi \, d\tilde{Y} - n \int_0^1 h^{}_\xi(t)^2 \, dt / 2 \\ & = & \sum_{j=1}^m \Bigl( (n/m)^{1/2} \xi_j X_j - (n/m) \xi_j^2/2 \Bigr) \\ & = & \log {d{\mathcal N}((n/m)^{1/2}\xi,I)\over d{\mathcal N}(0,I)} (X) , \end{eqnarray*} where $\tilde{Y}(t) := Y(t) - n^{1/2} \int_0^t \delta g(s) \, ds$ and $X := (X_j)_{j=1}^m$ with components $$ X_j \ := \ m^{1/2} \Bigl( \tilde{Y}(j/m) - \tilde{Y}((j-1)/m) \Bigr) . $$ In case of $f = \delta g$ these random variables are independent and standard normal. Consequently, $X$ is a sufficient statistic for the parametric submodel ${\mathcal F}_o$ with distribution ${\mathcal N}_m((n/m)^{1/2}\xi,I)$ in case of $f = \delta g + h^{}_\xi$. In particular, the conditional distribution of $S$ given $X$ does not depend on $\xi$.
Hence letting $\delta = (n/m)^{-1/2} c_m$ with $c_m := (2\log m)^{1/2}$ it follows from Theorem~\ref{CS Sup-Diameter}~(b) in Section~\ref{Decision Theory} that for $1/(m+1) \le \epsilon < 1/m$, \begin{eqnarray*} \lefteqn{ \inf_{f \in {\mathcal G}^{}_\uparrow \cap L^2[0,1]} \, \mathrm{I\!\!P}_f \Bigl\{ \hat\ell \le f \le \hat u \mbox{ and } D_\epsilon(\hat\ell,\hat u) \le 2 {c_m - b_m \over (n/m)^{1/2}} \Bigr\} } \\ & \le & \min_{\xi \in [-\delta, \delta]^m} \, \mathrm{I\!\!P}_\xi \Bigl\{ \xi \in S \mbox{ and } \max_{j=1,\ldots,m} {\rm length}(S_j) \le 2 {c_m - b_m \over (n/m)^{1/2}} \Bigr\} \ \le \ b_m , \end{eqnarray*} where $b_1, b_2, b_3, \ldots$ are universal positive numbers such that $\lim_{m\to\infty} b_m = 0$. This entails the assertion of Theorem~\ref{Lower Bounds GGiso} with $\log(1/\epsilon)$ in place of $\log(e/\epsilon)$ and $$ b(\epsilon) \ := \ (2 \log(1/\epsilon))^{1/2} - (m\epsilon)^{1/2} (c_m - b_m) \quad\mbox{for } 1/(m+1) \le \epsilon < 1/m . $$ Finally note that $\log(e/\epsilon)^{1/2} = \log(1/\epsilon)^{1/2} + o(1)$ as $\epsilon \downarrow 0$. \hfill $\Box$ \noindent {\bf Proof of Theorem~\ref{Optimal Rates}.} Instead of an upper bound for $\hat u-\hat\ell$ we prove an upper bound for $\hat u-f$, because analogous arguments apply to $f-\hat\ell$. In what follows let $\psi = \psi^{(u)}$ with support $[-a,b]$. For $t \in [0,1]$ and $h>0$ with $ah\le t\le 1-bh$, \begin{eqnarray} \hat u(t) - f(t) & \le & \hat f_h(t) - f(t) + {\|\psi\| (\Gamma((a+b)h)+\kappa_\alpha) \over \langle 1,\psi\rangle (nh)^{1/2}} \nonumber \\ & = & {\Bigl\langle f(t + h\,\cdot) - f(t),\psi \Bigr\rangle \over \langle 1,\psi\rangle} + {\psi W(h,t) \over n^{1/2} h \langle 1,\psi\rangle} + {\|\psi\| (\Gamma((a+b)h)+\kappa_\alpha) \over \langle 1,\psi\rangle (nh)^{1/2}} \nonumber \\ & \le & {\Bigl\langle f(t + h\,\cdot) - f(t),\psi \Bigr\rangle \over \langle 1,\psi\rangle} + {\|\psi\| \Bigl( 2 \Gamma((a+b)h) + \kappa_\alpha + T(\psi) \Bigr) \over \langle 1,\psi\rangle (nh)^{1/2}} . \label{Optimal Rates.1} \end{eqnarray} For any function $g \in {\mathcal H}_{\beta,L}$, \begin{eqnarray*} \Bigl| g(x) - g(0) \Bigr| & \le & L |x|^\beta \quad\mbox{if } \beta \le 1 , \\ \Bigl| g(x) - g(0) - g'(0) x \Bigr| & \le & L |x|^\beta \quad\mbox{if } 1 < \beta \le 2 . \end{eqnarray*} Since $f(t + h\,\cdot) \in {\mathcal H}_{\beta,L h^\beta}$ if $f \in {\mathcal H}_{\beta,L}$, this implies that $$ {\Bigl\langle f(t + h\,\cdot) - f(t),\psi \Bigr\rangle \over \langle 1,\psi\rangle} \ \le \ {L h^\beta \int_{-a}^b |x|^\beta |\psi(x)| \, dx \over \langle 1,\psi\rangle} \ \le \ \Delta h^\beta . $$ Here and subsequently $\Delta$ denotes a generic constant depending only on $(\beta,L)$ and $\psi$. Its value may vary from one place to another. In case of $t \in [\epsilon_n,1-\epsilon_n]$ and $h = \epsilon_n/\max(a,b)$ the right-hand side of (\ref{Optimal Rates.1}) is not greater than $$ \Delta \epsilon_n^\beta + {\Delta \Bigl( \log(en)^{1/2} + \kappa_\alpha + T(\psi) \Bigr) \over (n\epsilon_n)^{1/2}} \ = \ \Delta \rho_n \Bigl( 1 + {\kappa_\alpha + T(\psi)\over \log(en)^{1/2}} \Bigr) . \eqno{\Box} $$ \noindent {\bf Proof of Theorem~\ref{Adapt I}.} We prove only the lower bound for $f_o - \hat\ell$, because $\hat u - f_o$ can be treated analogously. 
It suffices to consider the case $L > 0$ and to show that for any fixed number $\gamma \in \left]0,1\right[$, $$ \mathrm{I\!\!P}_{f_o} \Bigl\{ \|f_o - \hat\ell\|^{+}_{r,s} \ge \gamma \Delta^{(\ell)} L^{1/(2k+1)} \rho_n \Bigr\} \ \ge \ 1 - \alpha + o(1) $$ for arbitrary confidence bands $(\hat\ell,\hat u) = (\hat\ell_n,\hat u_n)$ satisfying (\ref{Confidence}). Without loss of generality one may assume that $$ \nabla^k f_o \ \ge \ L \quad\mbox{on } [r,s] . $$ Otherwise one could increase $\gamma$ and decrease $L$ without changing $\gamma L^{1/(2k+1)}$, and replace $[r,s]$ with some nondegenerate subinterval. Let $\psi$ stand for $\psi^{(\ell)}$ with support $[-a,b]$. For $0 < h \le (s-r)/(a+b)$ and positive integers $j\le m:=\lfloor (s-r)/((a+b)h)\rfloor$ let $$ t_j \ := \ r + ah + (j-1)(a+b) h \quad\mbox{and}\quad f_j \ := \ f_o - L h^k \psi^{}_{h,t_j} . $$ It follows from Lemma~\ref{Nice Psis} that these functions $f_j$ belong to ${\mathcal G}\cap L^2[0,1]$. Thus (\ref{Confidence}) implies that the event $$ A \ := \ \Bigl\{ \hat\ell \le f_j \mbox{ for some } j \le m \Bigr\} $$ satisfies the inequality $\mathrm{I\!\!P}_{f_j}(A) \ge 1-\alpha$ for all $j \le m$. Since $\|f_o - f_j\|^{+}_{r,s} \ge L h^k$, this entails the inequality $$ \mathrm{I\!\!P}_{f_o} \Bigl\{ \|f_o - \hat\ell\|^{+}_{r,s} \ge L h^k \Bigr\} \ \ge \ \mathrm{I\!\!P}_{f_o}(A) \ \ge \ 1 - \alpha - \min_{j \le m} \Bigl (\mathrm{I\!\!P}_{f_j}(A) - \mathrm{I\!\!P}_{f_o}(A) \Bigr) . $$ Now let $h := (c\rho_n)^{1/k}$ so that $Lh^k=Lc\rho_n$, where $c > 0$ is some number to be specified later. For sufficiently large $n$ this bandwidth $h$ is smaller than $(s-r)/(a+b)$. Then $$ \log {d\mathrm{I\!\!P}_{f_j}\over d\mathrm{I\!\!P}_{f_o}}(Y) \ = \ n^{1/2} h^{k+1/2} L \|\psi\| X_j - nh^{2k+1} L^2 \|\psi\|^2/2 , $$ where $X_j := h^{-1/2} \|\psi\|^{-1} \int_0^1 \psi_{h,t_j} \, d\tilde{Y}$ and $\tilde{Y}(t) := Y(t) - n^{1/2} \int_0^t f_o(x) \, dx$. Thus $X := (X_j)_{j=1}^m$ is a sufficient statistic for the restricted model $\{f_o, f_1, f_2, \ldots, f_m\}$, where ${\mathcal L}_{f_o}(X)$ is a standard normal distribution on $\mathbb{R}^m$. Thus it follows from Theorem~\ref{CS Sup-Diameter}~(a) and a standard sufficiency argument that $$ \lim_{n\to\infty} \, \min_{1 \le j \le m} \Bigl (\mathrm{I\!\!P}_{f_j}(A) - \mathrm{I\!\!P}_{f_o}(A) \Bigr) \ = \ 0 \quad\mbox{if}\quad \lim_{n\to\infty} \, {nh^{2k+1} L^2 \|\psi\|^2 \over 2 \log m} \ < \ 1 . $$ Since $\log m = (1 + o(1)) \log(n)/(2k+1)$, the limit on the right hand side is equal to $$ c^{(2k+1)/k} L^2 \|\psi\|^2 (k+1/2) $$ and smaller than one if $c$ equals $\gamma\Delta^{(\ell)}L^{-2k/(2k+1)}$. In that case, the lower bound $Lh^k=Lc\rho_n$ for $\|f_o-\hat\ell\|^{+}_{r,s}$ equals $\gamma \Delta^{(\ell)} L^{1/(2k+1)} \rho_n$ as desired. \hfill $\Box$ \noindent {\bf Proof of Theorem~\ref{Adapt II}.} Again we restrict our attention to $f_o - \hat\ell$ and let $\psi := \psi^{(\ell)}$ with support $[-a,b]$. For any fixed $\epsilon > 0$ and arbitrary $t \in [0,1]$ let $h_t > 0$ and $$ L_t \ := \ \max_{s \in [t-ah_t, t+bh_t] \cap [0,1]} \, \max(\nabla^k f_o(s),\epsilon) . $$ In case of $ah_t \le t \le 1 - bh_t$ the inequality $(f_o - \hat\ell)(t) \ge L_t h_t^k$ implies that $$ \hat f_{h_t}(t) - { \|\psi\| \Bigl( \Gamma((a+b)h_t) + \kappa_\alpha \Bigr) \over (nh_t)^{1/2} \langle 1,\psi\rangle } \ \le \ f_o(t) - L_t h_t^k . 
$$ Since $f=f_o$, this can be rewritten as \begin{eqnarray*} {\psi W(h_t,t) \over h_t^{1/2} \|\psi\|} & \le & - \, {(nh_t)^{1/2} \over \|\psi\|} \, \Bigl\langle f_o(t + h_t\,\cdot) - f_o(t) + L_t h_t^k, \psi \Bigr\rangle + \Gamma((a+b)h_t) + \kappa_\alpha \\ & \le & - n^{1/2} L_t h_t^{k+1/2} \|\psi\| + \Gamma((a+b)h_t) + \kappa_\alpha , \end{eqnarray*} where the latter inequality follows from Lemma~\ref{Nice Psis}~(c). Specifically let $$ h_t \ := \ c w_\epsilon(t)^2 \rho_n^{1/k} $$ for some positive constant $c$ to be specified later. By continuity of $\nabla^k f_o$, the weight function $w_\epsilon$ is bounded away from zero and infinity. Hence $h_t\to 0$ and $L_t \max(\nabla^k f_o(t),\epsilon)^{-1}\to 1$, uniformly in $t\in[0,1]$. In particular, \begin{eqnarray*} \Gamma((a+b)h_t) & \le & (k+1/2)^{-1/2} \log(en)^{1/2} \quad\mbox{for } n \ge n_o , \\ n^{1/2} L_t h_t^{k+1/2} \|\psi\| & \ge & c^{k+1/2} \|\psi\| \log(en)^{1/2} , \\ L_t h_t^k & \le & w_\epsilon(t)^{-1} c^k (1 + b_n) \rho_n , \end{eqnarray*} where $n_o$ and $b_n$ are positive numbers depending only on $f_o$, $\epsilon$ and $c$ such that $b_n\to 0$. Consequently, for $n \ge n_o$, $$ ah_t \leq t \leq 1 - bh_t \quad\mbox{and}\quad (f_o - \hat\ell)(t) w_\epsilon(t) \ \ge \ c^k (1 + b_n) \rho_n $$ implies that $$ {\psi W(h_t,t) \over h_t^{1/2} \|\psi\|} \ \le \ - \Bigl( c^{k+1/2} \|\psi\| - (k+1/2)^{-1/2} \Bigr) \log(en)^{1/2} + \kappa_\alpha . $$ Whenever $c > (\Delta^{(\ell)})^{1/k}$, the right-hand side of the preceding inequality tends to minus infinity, while the random variable on the left-hand side has mean zero and variance one. Since the limit of $c^k (1 + b_n)$ can be arbitrarily close to $\Delta^{(\ell)}$, these considerations show that $(f_o - \hat\ell)(t) w_\epsilon(t) \le (\Delta^{(\ell)} + o_p(1)) \rho_n$ for any fixed $t \in \left]0,1\right[$. If $n$ is sufficiently large, then $ah_t \le t \le 1-bh_t$ and $$ {\psi W(h_t,t) \over h_t^{1/2} \|\psi\|} \ \ge \ - T(-\psi) - \Gamma((a+b)h_t) $$ for all $t\in[\epsilon,1-\epsilon]$. Consequently, $$ \sup_{t\in[\epsilon,1-\epsilon]} (f_o - \hat\ell)(t) w_\epsilon(t) \ \geq \ c^k (1 + b_n) \rho_n $$ implies that \begin{eqnarray*} T(-\psi) & \ge & n^{1/2} L_t h_t^{k+1/2} \|\psi\| - 2 \Gamma((a+b)h_t) - \kappa_\alpha \\ & \ge & \Bigl( c^{k+1/2} \|\psi\| - 2 (k+1/2)^{-1/2} \Bigr) \log(en)^{1/2} - \kappa_\alpha . \end{eqnarray*} Whenever $c > 2^{1/(k+1/2)} (\Delta^{(\ell)})^{1/k}$, the right hand side of the preceding inequality tends to infinity. Since the limit of $c^k (1 + b_n)$ can be arbitrarily close to $2^{k/(k+1/2)} \Delta^{(\ell)}$, these considerations reveal that $\|(f_o - \hat\ell)w_\epsilon\|^+_{\epsilon,1-\epsilon}$ is not greater than $\Bigl( 2^{k/(k+1/2)} \Delta^{(\ell)} + o_p(1) \Bigr) \rho_n$. \hfill $\Box$ \section{Some decision theory} \label{Decision Theory} Let $X = (X_i)_{i=1}^m$ be a random vector with distribution ${\mathcal N}_m(\theta, I)$. In what follows we consider tests $\phi : \mathbb{R}^m \to [0,1]$ and confidence sets $$ S \ = \ S_1 \times S_2 \times \cdots \times S_m $$ for $\theta$ with random intervals $S_j \subset \mathbb{R}$. The conditional distribution of $S$, given $X$, does not depend on $\theta$. The possibility of randomized confidence sets $S$, i.e.~confidence sets not just being a function of $X$, has to be included for technical reasons. Unless specified differently, asymptotic statements in this section refer to $m \to \infty$. \begin{Theorem} \label{CS Sup-Diameter} Let $c_m := (2\log m)^{1/2}$. 
There are universal positive numbers $b_m$ with $b_m\to 0$ such that the following two inequalities are satisfied: \noindent {\bf (a)} For arbitrary tests $\phi$, $$ \min_{j=1,\ldots,m} \mathrm{I\!\!E}_{(c_m - b_m) e_j} \phi(X) - \mathrm{I\!\!E}_0 \phi(X) \ \le \ b_m , $$ where $e_1,e_2, \ldots, e_m$ denotes the standard basis of $\mathbb{R}^m$. \noindent {\bf (b)} For arbitrary confidence sets $S$ as above, $$ \min_{\theta \in [-c_m,c_m]^m} \, \mathrm{I\!\!P}_\theta \Bigl\{ \theta \in S \mbox{ and } \max_{j = 1,\ldots,m} {\rm length}(S_j) < 2 (c_m - b_m) \Bigr\} \ \le \ b_m . $$ \end{Theorem} \noindent {\bf Proof of Theorem~\ref{CS Sup-Diameter}.} Part~(a) is classical and can be proved by a Bayesian argument; see for instance Ingster~(1993) or D\"umbgen and Spokoiny~(2001). In order to prove part~(b) we also consider a Bayesian model: Let $\theta$ have independent components each of which is uniformly distributed on the three-point set $K_m := \{-\kappa_m,0,\kappa_m\}$, where $\kappa_m := c_m - b_m$ with constants $b_m \in [0,c_m]$ to be specified later on. Let ${\mathcal L}(X \,|\, \theta) = {\mathcal N}_m(\theta, I)$. Let $\mathrm{I\!\!P}(\cdot), \mathrm{I\!\!E}(\cdot)$ denote probabilities and expectations in this Bayesian context, whereas $\mathrm{I\!\!P}_\theta(\cdot), \mathrm{I\!\!E}_\theta(\cdot)$ are used in case of a fixed parameter $\theta$. For any confidence set $S$, \begin{eqnarray*} \lefteqn{ \min_{\theta \in [-c_m,c_m]^m} \, \mathrm{I\!\!P}_\theta \Bigl\{ \theta \in S \mbox{ and } \max_{j = 1,\ldots,m} {\rm length}(S_j) < 2 \kappa_m \Bigr\} } \\ & \le & \mathrm{I\!\!P} \Bigl\{ \theta \in S \mbox{ and } \max_{j = 1,\ldots,m} {\rm length}(S_j) < 2 \kappa_m \Bigr\} \ \le \ \mathrm{I\!\!P}\{\theta \in \tilde{S}\} , \end{eqnarray*} where $$ \tilde{S} \ := \ \left\{\begin{array}{cl} S & \mbox{if } \displaystyle\max_{j = 1,\ldots,m} {\rm length}(S_j) < 2 \kappa_m , \\ \{0\} \times \cdots \times \{0\} & \mbox{else} . \end{array}\right. $$ The conditional distribution of $\theta$ given $(X,S)$ is also a product of $m$ probability measures: For any $\eta\in K_m^m$, $$ \mathrm{I\!\!P}(\theta = \eta \,|\, X,S) \ = \ \prod_{i=1}^m g(\eta_i \,|\, X_i) \quad\mbox{with}\quad g(z \,|\, x) \ := \ { \exp(- (x - z)^2/2) \over \sum_{y \in K_m} \exp(- (x - y)^2/2) } . $$ Since each factor $\tilde{S}_j$ of $\tilde{S}$ contains at most two points from $K_m$, \begin{eqnarray*} \mathrm{I\!\!P}\{\theta \in \tilde{S}\} & = & \mathrm{I\!\!E} \mathrm{I\!\!P}(\theta \in \tilde{S} \,|\, X,S) \\ & \le & \mathrm{I\!\!E} \max_{\eta \in K_m^m} \, \mathrm{I\!\!P}(\theta_i \neq \eta_i \mbox{ for } i=1,\ldots,m \,|\, X,S) \\ & = & \mathrm{I\!\!E} \prod_{i=1}^m \Bigl( 1 - \min_{z\in K_m} g(z \,|\, X_i) \Bigr) \\ & = & \Bigl( 1 - \mathrm{I\!\!E} \min_{z\in K_m} g(z \,|\, X_1) \Bigr)^m \\ & \le & \Bigl( 1 - 3^{-1} \mathrm{I\!\!E} \min_{z\in K_m} \, \exp(- (X_1 - z)^2/2) \Bigr)^m . \end{eqnarray*} The latter expectation can be bounded from below as follows: \begin{eqnarray*} \lefteqn{ 3^{-1} \mathrm{I\!\!E} \min_{z\in K_m} \, \exp(- (X_1 - z)^2/2) } \\ & \ge & 3^{-1} \mathrm{I\!\!P}\{|X_1| \le b_m/2\} \exp(- (\kappa_m + b_m/2)^2/2) \\ & \ge & 3^{-1} \mathrm{I\!\!P}\{|\theta_1| = 0, |X_1| \le b_m/2\} \exp(- (c_m - b_m/2)^2/2) \\ & = & 9^{-1} (2\pi)^{-1/2} (b_m + O(b_m^2)) \exp(c_m b_m/2 - b_m^2/8) m^{-1} . \end{eqnarray*} In case of $b_m := 1\{m > 1\} c_m^{-1/2} = o(1)$ the latter bound is easily seen to be $a_m m^{-1}$ with $a_m = a_m(b_m)\to\infty$. 
Thus $$ \mathrm{I\!\!P}\{\theta \in \tilde{S}\} \ \le \ (1 - a_m m^{-1})^m \ \to \ 0 . $$ Replacing $b_m$ with $\max\{b_m, (1 - a_m m^{-1})^m\}$ yields the assertion of part~(b). \hfill $\Box$ \section{Related optimization problems} \label{Optimization} As in Section~\ref{GG smooth} let $({\mathcal G},k)$ be either $({\mathcal G}^{}_\uparrow,1)$ or $({\mathcal G}^{}_{\rm conv},2)$. In view of future applications to other regression models we extend our framework slightly and consider $\langle g,h\rangle := \int gh \, d\mu$, $\|g\| := \langle g,g\rangle^{1/2}$ for some measure $\mu$ on the real line such that $\mu(C)<\infty$ for bounded intervals $C\subset\mathbb{R}$. Let $\psi$ be some bounded function on the real line with $\psi(x) = 0$ for $x \not\in [-a,b]$ and $\langle 1,\psi\rangle \ge 0$, where $a,b \ge 0$. The next lemma provides sufficient conditions for one of the following two requirements: \begin{eqnarray} \langle g, \psi\rangle & \le & g(0) \langle 1,\psi\rangle \quad\mbox{whenever } g \in {\mathcal G} , 1_{[-a,b]} g \in L^1(\mu) , \label{OrthoL} \\ \langle g, \psi\rangle & \ge & g(0) \langle 1,\psi\rangle \quad\mbox{whenever } g \in {\mathcal G} , 1_{[-a,b]} g \in L^1(\mu) . \label{OrthoU} \end{eqnarray} \begin{Lemma} \label{Ortho} Let ${\mathcal G} = {\mathcal G}^{}_\uparrow$ and $\psi \ge 0$. Then $b = 0$ entails condition~(\ref{OrthoL}), while $a = 0$ implies condition~(\ref{OrthoU}). Let ${\mathcal G} = {\mathcal G}^{}_{\rm conv}$ and $\int_{-\infty}^\infty x \psi(x) \, \mu(dx) = 0$. Condition~(\ref{OrthoU}) is satisfied if $\psi\ge 0$. On the other hand, condition~(\ref{OrthoL}) is a consequence of the following two requirements: $\int x^\pm\psi(x) \, \mu(dx) = 0$ and $$ \psi \ \left\{\begin{array}{cl} \ge 0 & \mbox{on } [c,d] \\ \le 0 & \mbox{on } \mathbb{R}\setminus[c,d] \end{array}\right. $$ for some numbers $c < 0 < d$, where $\mu([-a,c]), \mu([d,b]) > 0$. (Here $y^+ := \max(y,0)$ and $y^- := \max(-y,0)$.) \end{Lemma} With Lemma~\ref{Ortho} at hand one can solve two minimization problems leading to the special kernels in (\ref{PsiL}) and (\ref{PsiU}). In both cases we consider two disjoint convex sets ${\mathcal G}_o,{\mathcal G}_A \subset {\mathcal G}$ and construct functions $G_o \in {\mathcal G}_o$, $G_A \in {\mathcal G}_A$ such that \begin{equation} \|G_o - G_A\| \ = \ \min_{g_o\in{\mathcal G}_o, \, g_A\in{\mathcal G}_A} \, \|g_o-g_A\| . \label{OptimizationG} \end{equation} \begin{Theorem} \label{OptimizationL} Let ${\mathcal G}_o := \Bigl\{ g \in {\mathcal G} : g(0) \le -1 \Bigr\}$ and ${\mathcal G}_A := \Bigl\{ g \in {\mathcal G} \cap {\mathcal H}_{k,1} : g(0) \ge 0 \Bigr\}$. In case of ${\mathcal G}={\mathcal G}^{}_\uparrow$ let $G_A(x) := x$ and $$ G_o(x) \ := \ \left\{\begin{array}{cl} -1 & \mbox{if } x \in [-1,0] , \\ G_A(x) & \mbox{else} . \end{array}\right. $$ In case of ${\mathcal G}={\mathcal G}^{}_{\rm conv}$ let $G_A(x) := x^2/2$ and $$ G_o(x) \ := \ \left\{\begin{array}{cl} -1 + (a/2+1/a) x^- + (b/2+1/b) x^+ & \mbox{if } x \in [-a,b] , \\ G_A(x) & \mbox{else} , \end{array}\right. $$ where $a,b \ge 2^{1/2}$ are chosen such that $\int x^\pm(G_A-G_o)(x)\,\mu(dx) = 0$. Then equation~(\ref{OptimizationG}) holds in both cases. More precisely, the function $\psi := G_A - G_o$ satisfies the inequalities $\langle 1,\psi\rangle \ge \|\psi\|^2$, (\ref{OrthoL}) and \begin{equation} \langle g,\psi\rangle \ \ge \ \|\psi\|^2 - \langle 1, \psi\rangle \quad\mbox{whenever } g \in {\mathcal H}_{k,1}, g(0) \ge 0 . 
\label{OrthoL2}
\end{equation}
\end{Theorem}
In case of $\mu$ being Lebesgue measure, $\psi = G_A-G_o$ coincides with the function $\psi^{(\ell)}$ in (\ref{PsiL}), where $a = b = 2$.
\begin{Theorem}
\label{OptimizationU}
Let ${\mathcal G}_o := \Bigl\{ g \in {\mathcal G} : g(0) \ge 1 \Bigr\}$, ${\mathcal G}_A := \Bigl\{ g \in {\mathcal G} \cap {\mathcal H}_{k,1} : g(0) \le 0 \Bigr\}$, and define $G_A$ as in Theorem~\ref{OptimizationL}. In case of ${\mathcal G}={\mathcal G}^{}_\uparrow$ let
$$
G_o(x) \ := \ \left\{\begin{array}{cl} 1 & \mbox{if } x \in [0,1] , \\ G_A(x) & \mbox{else} . \end{array}\right.
$$
In case of ${\mathcal G}={\mathcal G}^{}_{\rm conv}$ suppose that $\mu(\left]-\infty,0\right[),\mu(\left]0,\infty\right[)>0$ and let
$$
G_o(x) \ := \ \left\{\begin{array}{cl} 1 + cx & \mbox{if } x \in [-a,b] , \\ G_A(x) & \mbox{else} , \end{array}\right.
$$
where $a := -c + (c^2 + 2)^{1/2}$, $ b := c + (c^2 + 2)^{1/2}$, and $c$ is chosen such that $\int x (G_o - G_A)(x) \, \mu(dx) = 0$. Then equation~(\ref{OptimizationG}) is satisfied in both cases. More precisely, the function $\psi := G_o-G_A$ satisfies the inequalities $\langle 1,\psi\rangle \ge \|\psi\|^2$, (\ref{OrthoU}) and
\begin{equation}
\langle g,\psi\rangle \ \le \ \langle 1,\psi\rangle - \|\psi\|^2 \quad\mbox{whenever } g \in {\mathcal H}_{k,1}, g(0) \le 0 .
\label{OrthoU2}
\end{equation}
\end{Theorem}
In case of $\mu$ being Lebesgue measure, $\psi = G_o-G_A$ coincides with the function $\psi^{(u)}$ in (\ref{PsiU}), where $c = 0$ and $a = b = 2^{1/2}$.
The following lemma summarizes essential properties of the optimal kernels $\psi^{(\ell)}$ and $\psi^{(u)}$.
\begin{Lemma}
\label{Nice Psis}
Let $\psi^{(\ell)}$ and $\psi^{(u)}$ be the kernel functions in (\ref{PsiL}) and (\ref{PsiU}), and let $h,L > 0$ and $t \in \mathbb{R}$.
{\bf(a)} If ${\mathcal G} = {\mathcal G}^{}_\uparrow$, then $\langle 1,\psi^{(\ell)}\rangle = \langle 1,\psi^{(u)}\rangle = 1/2$ and $\|\psi^{(\ell)}\|^2 = \|\psi^{(u)}\|^2 = 1/3$. If $f:\mathbb{R}\to\mathbb{R}$ satisfies $f(y) - f(x) \ge L(y-x)$ for all $x < y$, then
$$
f - Lh^{-1} \psi^{(\ell)}_{h,t}, f + Lh^{-1} \psi^{(u)}_{h,t} \ \in \ {\mathcal G}^{}_\uparrow .
$$
{\bf(b)} If ${\mathcal G} = {\mathcal G}^{}_{\rm conv}$, then $\langle 1,\psi^{(\ell)}\rangle = 2/3$, $\|\psi^{(\ell)}\|^2 = 8/15$, $\langle 1,\psi^{(u)}\rangle = 2^{2.5}/3$ and $\|\psi^{(u)}\|^2 = 2^{4.5}/15$. Let $f:\mathbb{R}\to\mathbb{R}$ be absolutely continuous with derivative $f'$ such that $f'(y)-f'(x)\ge L(y-x)$ for all $x<y$. Then
$$
f - Lh^{-2} \psi^{(\ell)}_{h,t}, f + Lh^{-2} \psi^{(u)}_{h,t} \ \in \ {\mathcal G}^{}_{\rm conv} .
$$
{\bf(c)} In general, for any function $f \in {\mathcal H}_{k,L}$,
\begin{eqnarray*}
\left\langle f(t + h\,\cdot) - r + Lh^k, \psi^{(\ell)} \right\rangle & \ge & Lh^k \|\psi^{(\ell)}\|^2 \quad\mbox{if } f(t) \ge r , \\
\left\langle f(t + h\,\cdot) - r - Lh^k, \psi^{(u)} \right\rangle & \le & -Lh^k \|\psi^{(u)}\|^2 \quad\mbox{if } f(t) \le r .
\end{eqnarray*}
\end{Lemma}
\noindent
{\bf Proof of Lemma~\ref{Ortho}.} The assertions for ${\mathcal G} = {\mathcal G}^{}_\uparrow$ are a simple consequence of $g \le g(0)$ on $\left]-\infty,0\right]$ and $g \ge g(0)$ on $\left[0,\infty\right[$.
Now let ${\mathcal G} = {\mathcal G}^{}_{\rm conv}$. If $\psi \ge 0$ and $\int x \psi(x) \, \mu(dx) = 0$, then condition~(\ref{OrthoU}) follows from Jensen's inequality applied to the probability measure $P(dx) = \langle 1,\psi\rangle^{-1} \psi(x) \, \mu(dx)$.
On the other hand, suppose that $\psi \ge 0$ on $[c,d]$ and $\psi \le 0$ on $\mathbb{R}\setminus [c,d]$, where $c < 0 < d$ and $\mu([-a,c]), \mu([d,b]) > 0$. For $g \in {\mathcal G}^{}_{\rm conv}$ with $1_{[-a,b]} g \in L^1(\mu)$, both $g(c)$ and $g(d)$ have to be finite, and we define
$$
\tilde{g}(x) \ := \ g(x) - \left\{\begin{array}{cl} d^{-1} (g(d)-g(0)) x & \mbox{if } x \ge 0 , \\ c^{-1} (g(c)-g(0)) x & \mbox{if } x \le 0 . \end{array}\right.
$$
By convexity of $g$, this auxiliary function $\tilde{g}$ satisfies $\tilde{g} \le g(0)$ on $[c,d]$ and $\tilde{g} \ge g(0)$ on $\mathbb{R} \setminus [c,d]$. Thus $\langle \tilde{g}, \psi\rangle \le g(0) \langle 1,\psi\rangle$. If in addition $\int x^\pm \psi(x) \, \mu(dx) = 0$, then $\langle g,\psi\rangle = \langle\tilde{g},\psi\rangle$. \hfill $\Box$
\noindent
{\bf Proof of Theorem~\ref{OptimizationL}.} One can easily deduce from Lemma~\ref{Ortho} that the function $\psi = G_A - G_o$ satisfies inequality~(\ref{OrthoL}). But $G_A$ is an extremal point of ${\mathcal G}_A$ in the sense that
$$
G_A - g \ \in \ {\mathcal G} \quad\mbox{for any } g \in {\mathcal H}_{k,1} .
$$
To verify this, let $x < y$. If ${\mathcal G} = {\mathcal G}^{}_\uparrow$, then
$$
(G_A-g)(y) - (G_A-g)(x) \ = \ y - x - (g(y) - g(x)) \ \ge \ y - x - |y-x| \ = \ 0 ,
$$
whence $G_A - g$ is non-decreasing. In case of ${\mathcal G} = {\mathcal G}^{}_{\rm conv}$ the same argument applies to the first derivative of $G_A - g$. Together with (\ref{OrthoL}) this implies that
\begin{eqnarray*}
\langle g,\psi\rangle & = & \langle G_A, \psi\rangle - \langle G_A-g,\psi\rangle \\
& \ge & \langle G_A, \psi\rangle - (G_A - g)(0) \langle 1,\psi\rangle \\
& = & \langle G_A, \psi\rangle + g(0) \langle 1,\psi\rangle \\
& = & \|\psi\|^2 + \langle G_o,\psi\rangle + g(0) \langle 1,\psi\rangle \\
& = & \|\psi\|^2 + (g(0)-1) \langle 1,\psi\rangle .
\end{eqnarray*}
The latter equation follows from $\langle G_o,\psi\rangle = \langle -1,\psi\rangle$, which is easily verified. The special case $g = 0$ yields the inequality $\langle 1,\psi\rangle \geq \|\psi\|^2$. Then inequality (\ref{OrthoL2}) becomes obvious.
It remains to be shown that in case of ${\mathcal G} = {\mathcal G}^{}_{\rm conv}$ there exist numbers $a,b \ge 2^{1/2}$ such that $\psi = \psi(\cdot,a,b)$ satisfies $\int x^\pm \psi(x) \, \mu(dx) = 0$. In fact, for any fixed $x$ the number $\psi(x,a,b) \le 1$ can be shown to be continuous and decreasing in $a$ and $b$. More precisely, $\psi(0,a,b) = 1$ and $\lim_{a\to\infty} \psi(x,a,\cdot) = \lim_{b\to\infty} \psi(y,\cdot,b) = -\infty$ for $x < 0 < y$. Hence the assertion is a consequence of monotone convergence. \hfill $\Box$
\noindent
{\bf Proof of Theorem~\ref{OptimizationU}.} This proof is analogous to the proof of Theorem~\ref{OptimizationL} and thus omitted. \hfill $\Box$
\noindent
{\bf Proof of Lemma~\ref{Nice Psis}.} The calculations of $\langle 1,\psi\rangle$ and $\|\psi\|^2$ are elementary and thus omitted. Elementary calculations also show that $g := - Lh^{-k}\psi^{(\ell)}_{h,t}$ as well as $g := Lh^{-k}\psi^{(u)}_{h,t}$ satisfies
$$
\left.\begin{array}{c} g(y) - g(x) \\ g'(y)-g'(x) \end{array}\right\} \ \ge - L(y-x) \quad\mbox{if } {\mathcal G} = \left\{\begin{array}{c} {\mathcal G}^{}_\uparrow , \\ {\mathcal G}^{}_{\rm conv} , \end{array}\right.
$$
where $g'(x)$ denotes any number between the right- and left-sided derivative of $g$ at $x$. Thus $f+g$ belongs to ${\mathcal G}$, whenever $f$ satisfies the inequalities stated in parts~(a) and (b).
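Although these calculations are elementary, the constants in parts~(a) and~(b) are easy to confirm. The following Python snippet (our own illustration, not part of the original argument) verifies them numerically when $\mu$ is Lebesgue measure, using the kernels $\psi^{(\ell)} = G_A - G_o$ and $\psi^{(u)} = G_o - G_A$ from Theorems~\ref{OptimizationL} and~\ref{OptimizationU} (with $a=b=2$, respectively $c=0$ and $a=b=2^{1/2}$, in the convex case); writing them out gives $\psi^{(\ell)}(x) = (x+1)1_{[-1,0]}(x)$ and $\psi^{(u)}(x) = (1-x)1_{[0,1]}(x)$ in the monotone case, and $\psi^{(\ell)}(x) = (x^2/2 - 3|x|/2 + 1)1_{[-2,2]}(x)$ and $\psi^{(u)}(x) = (1 - x^2/2)1_{[-2^{1/2},2^{1/2}]}(x)$ in the convex case.
\begin{verbatim}
# Numerical check (ours, not part of the proof) of the constants in
# parts (a) and (b), for mu = Lebesgue measure.
# Monotone case: psi_l(x) = x + 1 on [-1,0],  psi_u(x) = 1 - x on [0,1].
# Convex case:   psi_l(x) = x^2/2 - 3|x|/2 + 1 on [-2,2],
#                psi_u(x) = 1 - x^2/2 on [-2^(1/2), 2^(1/2)].

def integrate(f, lo, hi, n=100000):
    # simple midpoint rule; accurate far beyond the 1e-6 tolerance below
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

kernels = {
    "monotone lower": (lambda x: x + 1, -1.0, 0.0, 0.5, 1.0 / 3.0),
    "monotone upper": (lambda x: 1 - x, 0.0, 1.0, 0.5, 1.0 / 3.0),
    "convex lower": (lambda x: x * x / 2 - 1.5 * abs(x) + 1, -2.0, 2.0,
                     2.0 / 3.0, 8.0 / 15.0),
    "convex upper": (lambda x: 1 - x * x / 2, -2 ** 0.5, 2 ** 0.5,
                     2 ** 2.5 / 3.0, 2 ** 4.5 / 15.0),
}
for name, (psi, lo, hi, mean, norm_sq) in kernels.items():
    assert abs(integrate(psi, lo, hi) - mean) < 1e-6               # <1, psi>
    assert abs(integrate(lambda x: psi(x) ** 2, lo, hi) - norm_sq) < 1e-6  # ||psi||^2
\end{verbatim}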
As for part~(c), for $f \in {\mathcal H}_{k,L}$ and $t\in \mathbb{R}$, $h,c > 0$ the function $c f(t + h\,\cdot)$ belongs to ${\mathcal H}_{k,cLh^k}$. If we take $c := (Lh^k)^{-1}$, then
\begin{eqnarray*}
\left\langle f(t + h\,\cdot) - r + Lh^k,\psi^{(\ell)} \right\rangle & \ge & Lh^k \left\langle c(f(t + h\,\cdot)-f(t)) + 1, \psi^{(\ell)} \right\rangle \\
& \ge & Lh^k \|\psi^{(\ell)}\|^2 ,
\end{eqnarray*}
where the first inequality uses $f(t) \ge r$ together with $\langle 1,\psi^{(\ell)}\rangle \ge 0$, and the second one is inequality~(\ref{OrthoL2}) applied to $g := c(f(t + h\,\cdot) - f(t))$. Analogously one can deduce the upper bound for $\left\langle f(t+h\,\cdot)-r-Lh^k,\psi^{(u)}\right\rangle$. \hfill $\Box$
\bigskip
{\bf Acknowledgements.} The author is grateful to Lars H\"omke for his assistance in Section~\ref{Examples}. Constructive comments of a referee and an associate editor helped to improve the presentation. This work has been supported by Deutsche Forschungsgemeinschaft, grant Du\,238/5-1.
\subsection*{References}
\begin{description}
\item[]{\sc Bickel, P.J. and M. Rosenblatt} (1973). On some global measures of the deviations of density function estimates. \ {\sl Ann. Statist.}~{\bf 1}, 1071--1095
\item[]{\sc Brown, L.D. and M.G. Low} (1996). Asymptotic equivalence of nonparametric regression and white noise. \ {\sl Ann. Statist.}~{\bf 24}, 2384--2398.
\item[]{\sc Davies, P.L.} (1995). Data features. \ {\sl Statistica Neerlandica}~{\bf 49}, 185--245
\item[]{\sc Donoho, D.L.} (1988). One-sided inference about functionals of a density. \ {\sl Ann. Statist.}~{\bf 16}, 1390--1420
\item[]{\sc D\"umbgen, L.} (1998). New goodness-of-fit tests and their application to nonparametric confidence sets. \ {\sl Ann. Statist.}~{\bf 26}, 288--314
\item[]{\sc D\"umbgen, L.} (2007). Confidence bands for convex median functions using sign tests.\\ In: \textsl{Asymptotics: Particles, Processes and Inverse Problems (E. Cator, G. Jongbloed, C. Kraai\-kamp, R. Lopuha{\"a}, J.A. Wellner, eds.)}, pp.~85--100. \ Lecture Notes--Monograph Series {\bf 55}, IMS, Hayward, USA.
\item[]{\sc D\"umbgen, L. and R.B. Johns} (2004). Confidence bands for isotonic median functions using sign tests. \ {\sl J. Comp. Graph. Statist.}~{\bf 13}, 519--533
\item[]{\sc D\"umbgen, L. and V.G. Spokoiny} (2001). Multiscale testing of qualitative hypotheses. \ {\sl Ann. Statist.}~{\bf 29}, 124--152
\item[]{\sc Eubank, R.L. and P.L. Speckman} (1993). Confidence bands in nonparametric regression. \ {\sl J. Amer. Statist. Assoc.}~{\bf 88}, 1287--1301
\item[]{\sc Fan, J. and W. Zhang} (2000). Simultaneous confidence bands and hypothesis testing in varying-coefficient models. \ {\sl Scand. J. Statist.}~{\bf 27}, 715--731
\item[]{\sc Fan, J., C. Zhang and J. Zhang} (2001). Generalized likelihood ratio statistics and Wilks phenomenon. \ {\sl Ann. Statist.}~{\bf 29}, 153--193
\item[]{\sc Grama, I. and M. Nussbaum} (1998). Asymptotic equivalence for nonparametric generalized linear models. \ {\sl Prob. Theory and Related Fields}~{\bf 111}, 167--214.
\item[]{\sc H\"ardle, W. and J.S. Marron} (1991). Bootstrap simultaneous error bars for nonparametric regression. \ {\sl Ann. Statist.}~{\bf 19}, 778--796
\item[]{\sc Hall, P. and D.M. Titterington} (1988). On confidence bands in nonparametric density estimation. \ {\sl J. Multivar. Anal.}~{\bf 27}, 228--254
\item[]{\sc Hart, J.D.} (1997). {\sl Nonparametric Smoothing and Lack-of-Fit Tests.} \ Springer, New York
\item[]{\sc Hengartner, N.W. and P.B. Stark} (1995). Finite-sample confidence envelopes for shape-restricted densities. \ {\sl Ann. Statist.}~{\bf 23}, 525--550
\item[]{\sc Khas'minskii, R.Z.} (1978).
A lower bound on the risks of nonparametric estimates of densities in the uniform metric. \ {\sl Theory Prob. Appl.}~{\bf 23}, 794--798
\item[]{\sc Knafl, G., J. Sachs and D. Ylvisaker} (1985). Confidence bands for regression functions. \ {\sl J. Amer. Statist. Assoc.}~{\bf 80}, 683--691
\item[]{\sc Nussbaum, M.} (1996). Asymptotic equivalence of density estimation and white noise. \ {\sl Ann. Statist.}~{\bf 24}, 2399--2430.
\item[]{\sc Robertson, T., F.T. Wright and R.L. Dykstra} (1988). {\sl Order Restricted Statistical Inference.} \ Wiley, New York
\item[]{\sc Ingster, Y.I.} (1993). Asymptotically minimax hypothesis testing for nonparametric alternatives, I--III. \ {\sl Math. Methods Statist.}~{\bf 2}; 85--114, 171--189, 249--268.
\end{description}
\end{document}
{ "timestamp": "2013-12-24T02:12:11", "yymm": "1312", "arxiv_id": "1312.6466", "language": "en", "url": "https://arxiv.org/abs/1312.6466", "abstract": "Let $Y$ be a stochastic process on $[0,1]$ satisfying $dY(t) = n^{1/2} f(t) dt + dW(t)$, where $n \\ge 1$ is a given scale parameter (``sample size''), $W$ is standard Brownian motion and $f$ is an unknown function. Utilizing suitable multiscale tests we construct confidence bands for $f$ with guaranteed given coverage probability, assuming that $f$ is isotonic or convex. These confidence bands are computationally feasible and shown to be asymptotically sharp optimal in an appropriate sense.", "subjects": "Statistics Theory (math.ST)", "title": "Optimal Confidence Bands for Shape-Restricted Curves", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9867771774699747, "lm_q2_score": 0.7185943925708561, "lm_q1q2_score": 0.7090925464468204 }
https://arxiv.org/abs/1908.03197
Supertrees
A $k$-universal permutation, or $k$-superpermutation, is a permutation that contains all permutations of length $k$ as patterns. The problem of finding the minimum length of a $k$-superpermutation has recently received significant attention in the field of permutation patterns. One can ask analogous questions for other classes of objects. In this paper, we study $k$-supertrees. For each $d\geq 2$, we focus on two types of rooted plane trees called $d$-ary plane trees and $[d]$-trees. Motivated by recent developments in the literature, we consider "contiguous" and "noncontiguous" notions of pattern containment for each type of tree. We obtain both upper and lower bounds on the minimum possible size of a $k$-supertree in three cases; in the fourth, we determine the minimum size exactly. One of our lower bounds makes use of a recent result of Albert, Engen, Pantone, and Vatter on $k$-universal layered permutations.
\section{Introduction} \subsection{Background}\label{subsec:background} Let $S_n$ denote the set of permutations of the set $[n]=\{1,\ldots,n\}$. We write permutations as words in one-line notation. Given $\mu\in S_m$, we say that the permutation $\sigma=\sigma_1\cdots\sigma_n\in S_n$ \emph{contains the pattern} $\mu$ if there are indices $i_1<\cdots<i_m$ such that $\sigma_{i_1}\cdots\sigma_{i_m}$ has the same relative order as $\mu$. Otherwise, we say that $\sigma$ \emph{avoids} $\mu$. Consecutive pattern containment and avoidance are defined similarly by requiring the indices $i_1,\ldots,i_m$ to be consecutive integers. An enormous amount of research in the past half-century has focused on pattern containment and pattern avoidance in permutations \cite{Bona, Kitaev, Linton}. Plenty of particularly popular permutation pattern problems possess the following form: \begin{center} What is the minimum length of a permutation that contains all patterns of a certain type? \end{center} For example, one can ask for the smallest size of a permutation containing all length-$k$ patterns; such a permutation is often called a \textit{$k$-universal permutation} or a \textit{$k$-superpermutation} \cite{Arratia, Eriksson, Miller}. The analogous question for consecutive pattern containment has also received attention \cite{Ashlock, Honner, Houston, Johnson}. Rather than discuss all of the variants of this problem that have emerged, we refer the reader to the beautiful article \cite{Engen}, which surveys many of the results in this area. In recent years, the notion of pattern containment has spread to other combinatorial objects. It is natural to ask about the minimum possible sizes of ``universal objects'' in these contexts. This idea dates back to 1964, when Rado \cite{Rado} asked for the minimum number of vertices in a graph that contains all $k$-vertex graphs as induced subgraphs. A vast amount of literature has been devoted to ``Rado's problem'' alone (see \cite{Alon, Alstrup, Butler, Chung, Esperet} and the references therein). In this paper, we focus on rooted plane trees. Several variations on the theme of contiguous and noncontiguous pattern containment in rooted plane trees have appeared in \cite{Baril,Dairyko,Dotsenko,Flajolet1, Flajolet2, Gabriel,Pudwell, Rowland}. The purpose of the present article is to investigate the minimum possible size of a \emph{$k$-universal tree}, or \textit{$k$-supertree}, in some of these contexts. Similar questions about universal trees have been studied since the 1960's \cite{CGSCaterpillars, ChungGraham1, ChungGraham2, ChungGraham3, ChungGraham4, Goldberg}. However, our notions of universal rooted plane trees are new and are inspired by more recent definitions of pattern containment in trees. \subsection{Main Definitions and Terminology}\label{subsec:main-results} Let $d\geq 2$ be an integer. A \emph{$d$-ary plane tree} is either an empty tree or a root vertex with $d$ subtrees that are linearly ordered from left to right and are themselves $d$-ary plane trees. A $2$-ary plane tree is also called a \emph{binary plane tree}. Note that the subtrees of a vertex can be empty. By the ``$i^\text{th}$ subtree" of a vertex, we simply mean the $i^\text{th}$ subtree from the left. We say an edge has ``type $i$" if it connects a vertex to the root of its $i^\text{th}$ subtree. A $d$-ary plane tree is called \emph{full} if every vertex has either $0$ or $d$ children (or, equivalently, if only leaves have empty subtrees). 
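To make the recursive definition concrete, here is a minimal Python sketch (our own illustration; the encoding and function names are not taken from the references above) in which a $d$-ary plane tree is a tuple of $d$ subtrees, with \texttt{None} playing the role of the empty tree.
\begin{verbatim}
# Minimal sketch (our own encoding): a d-ary plane tree is either None
# (the empty tree) or a tuple of d subtrees, one for each of the
# positions 1, ..., d; each entry is again a d-ary plane tree.

def single_vertex(d):
    # a vertex all of whose d subtrees are empty
    return tuple(None for _ in range(d))

def size(t):
    # number of vertices
    return 0 if t is None else 1 + sum(size(s) for s in t)

def is_full(t):
    # every vertex has either 0 or d children (nonempty subtrees)
    if t is None:
        return True
    children = [s for s in t if s is not None]
    return len(children) in (0, len(t)) and all(is_full(s) for s in children)

# Example: a binary plane tree (d = 2) with a root and two leaves.
t = (single_vertex(2), single_vertex(2))
assert size(t) == 3 and is_full(t)
\end{verbatim}
In this encoding the position of a subtree within the tuple records the type of the edge to its root, so no extra bookkeeping is needed for edge types.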
Every connected induced subgraph $T^*$ of a $d$-ary plane tree $\mathcal T$ can be viewed as a $d$-ary plane tree in the obvious way. If $T^*$ is isomorphic as a $d$-ary plane tree to another $d$-ary plane tree $T$, then we say that $T^*$ is a \emph{contiguous embedding} of $T$ in $\mathcal T$ and that $\mathcal T$ \emph{contiguously contains} $T$. For example, the $3$-ary plane tree \[\begin{array}{l} \includegraphics[height=2cm]{SupertreesPIC4} \end{array}\text{ contiguously contains }\begin{array}{l} \includegraphics[height=1.2cm]{SupertreesPIC5} \end{array}\text{ but does not contiguously contain }\begin{array}{l} \includegraphics[height=1.2cm]{SupertreesPIC6} \end{array}.\] Rowland \cite{Rowland} defined contiguous pattern containment in full binary plane trees, and the authors of \cite{Gabriel} made a similar definition for full $3$-ary plane trees. In general, for any $k \geq 0$, the operation of removing (pruning) all leaves provides a natural bijection from the set of full $d$-ary plane trees with $dk+1$ vertices to the set of $d$-ary plane trees with $k$ vertices. Using this bijection, one can easily see that our definition of contiguous pattern containment for $d$-ary plane trees corresponds to the definitions in \cite{Gabriel,Rowland} when $d\in\{2,3\}$. Our formulation has the advantage of working with smaller trees so that diagrams are not cluttered with unnecessary leaves. Given a vertex $u$ in a $d$-ary plane tree, let $\chi(u)$ be the set of all $i\in[d]$ such that $u$ has a nonempty $i^\text{th}$ subtree. Suppose $e$ is an edge of type $i$ that connects $u$ to one of its children $v$ (meaning $i\in\chi(u)$). We can consider the operation of \textit{contracting} the edge $e$. We call this operation a \emph{legal contraction} if every element of $\chi(u)\setminus\{i\}$ is either strictly smaller than $\min(\chi(v))$ or strictly greater than $\max(\chi(v))$. Informally speaking, this definition ensures that edges do not ``overlap" or ``cross'' each other during a legal contraction. After legally contracting an edge in a $d$-ary plane tree, we are left with a new $d$-ary plane tree. Given $d$-ary plane trees $\mathcal T$ and $T$, we say that $\mathcal T$ \emph{noncontiguously contains $T$} if we can obtain $T$ from $\mathcal T$ through a sequence of legal edge contractions. For example, the $3$-ary plane tree \[\begin{array}{l} \includegraphics[height=1.7cm]{SupertreesPIC7} \end{array}\text{ noncontiguously contains }\begin{array}{l} \includegraphics[height=1.2cm]{SupertreesPIC6} \end{array}\text{ but does not noncontiguously contain }\begin{array}{l} \includegraphics[height=.85cm]{SupertreesPIC13} \end{array}.\] When $d=2$, we can use the pruning bijection mentioned above to show that our definition of noncontiguous pattern containment in binary plane trees is consistent with the notion considered in \cite{Dairyko,Pudwell}. Given a set $S$ of positive integers, an \emph{$S$-tree} is a rooted tree in which the children of each vertex are linearly ordered from left to right and the number of children of each vertex is an element of $S\cup\{0\}$. One can think of a $[d]$-tree as a tree obtained from a $d$-ary plane tree by forgetting about empty subtrees and the types of edges. What we term $[2]$-trees are more commonly called ``unary-binary trees" or ``Motzkin trees." Every connected induced subgraph $T^*$ of a $[d]$-tree $\mathcal T$ is itself a $[d]$-tree. 
If $T^*$ is isomorphic as a $[d]$-tree to another $[d]$-tree $T$, then we say that $T^*$ is a \emph{contiguous embedding} of $T$ in $\mathcal T$ and that $\mathcal T$ \emph{contiguously contains} $T$. For example, the $[3]$-tree \[\begin{array}{l} \includegraphics[height=1.4cm]{SupertreesPIC8} \end{array}\text{ contiguously contains }\begin{array}{l} \includegraphics[height=.95cm]{SupertreesPIC9} \end{array}\text{ but does not contiguously contain }\begin{array}{l} \includegraphics[height=1.2cm]{SupertreesPIC10} \end{array}.\] Suppose $e$ is an edge in a $[d]$-tree that connects a vertex $u$ to one of its children $v$. If the total number of children of $u$ and $v$, excluding $v$ itself, is at most $d$, then the operation of contracting the edge $e$ is a \emph{legal contraction}. Note that if $v'$ was a child of $u$ to the left (respectively, right) of $v$, then $v'$ remains to the left (respectively, right) of the children of $v$ after we legally contract $e$. After legally contracting an edge in a $[d]$-tree, we are left with a new $[d]$-tree. Given $[d]$-trees $\mathcal T$ and $T$, we say that $\mathcal T$ \emph{noncontiguously contains $T$} if we can obtain $T$ from $\mathcal T$ through a sequence of legal edge contractions. For example, the $[3]$-tree \[\begin{array}{l} \includegraphics[height=1.7cm]{SupertreesPIC11} \end{array}\text{ noncontiguously contains }\begin{array}{l} \includegraphics[height=.95cm]{SupertreesPIC12} \end{array}\text{ but does not noncontiguously contain }\begin{array}{l} \includegraphics[height=1.2cm]{SupertreesPIC10} \end{array}.\] A \emph{contiguous $k$-universal $d$-ary plane tree} is a $d$-ary plane tree that contiguously contains all $d$-ary plane trees with $k$ vertices. Similarly, a \emph{noncontiguous $k$-universal $d$-ary plane tree} is a $d$-ary plane tree that noncontiguously contains all $d$-ary plane trees with $k$ vertices. Let $N_{d\text{-ary}}^\text{con}(k)$ (respectively, $N_{d\text{-ary}}^\text{non}(k)$) denote the minimum number of vertices in a contiguous (respectively, noncontiguous) $k$-universal $d$-ary plane tree. Contiguous and noncontiguous $k$-universal $[d]$-trees are defined analogously. Let $N_{[d]}^\text{con}(k)$ (respectively, $N_{[d]}^\text{non}(k)$) denote the minimum number of vertices in a contiguous (respectively, noncontiguous) $k$-universal $[d]$-tree. We refer to $k$-universal trees as ``$k$-supertrees" when the type of tree and the type of containment are clear from context. We say the root of a rooted plane tree has \emph{depth $0$}; a nonroot vertex has \emph{depth $r$} if its parent has depth $r-1$. The \emph{height} of a rooted plane tree is the maximum depth of its vertices. We write $|T|$ for the number of vertices in $T$. The \emph{perfect tree} $P_h^{(d)}$ is the unique $d$-ary plane tree of height $h$ that has exactly $d^r$ vertices of depth $r$ for each $r\in\{0,\ldots,h\}$. It will be useful to have a formally defined ``gluing" operation for combining trees. Suppose $T$ is a rooted plane tree and $v$ is a leaf of $T$. If $T'$ is another rooted plane tree (of the same type as $T$, of course), then we can \textit{glue} $T'$ to $v$ by attaching $T'$ to $T$, where we identify the root of $T'$ with $v$. 
For example, if $T$ and $v$ are \[\begin{array}{l} \includegraphics[height=1.6cm]{SupertreesPIC19} \end{array}\text{ and }T'\text{ is }\begin{array}{l} \includegraphics[height=.6cm]{SupertreesPIC20} \end{array},\text{ then the result of gluing $T'$ to $v$ is }\begin{array}{l} \includegraphics[height=1.53cm]{SupertreesPIC21} \end{array}.\] \subsection{Main Results} Let $\eta_2=1$, and let $\eta_d=\frac{1}{2}$ for every $d\geq 3$. In Section~\ref{sec:0,1,...,d}, we will define numbers $\rho_d$, which arise as reciprocals of roots of certain polynomials. The purpose of the subsequent sections is to prove the following estimates (where $d\geq 2$ is a fixed integer): \begin{enumerate*} \item $N_{d\text{-ary}}^{\text{con}}(k)=d^{k-1}+k-1$; \item $\eta_d\, k\log_2(k)(1+o(1))\leq N_{d\text{-ary}}^{\non}(k)\leq k^{\frac{1}{2}\log_2(k)(1+o(1))}$; \item $d^{\frac{k-2}{d}}\leq N_{[d]}^{\con}(k)\leq (\rho_d+o(1))^k$; \item $\dfrac{\eta_d}{d}k\log_2(k)(1+o(1))\leq N_{[d]}^{\non}(k)\leq k^{\frac{1}{2}\log_2(k)(1+o(1))}$. \end{enumerate*} Some remarks are in order regarding these estimates. First of all, note that it is unusual to be able to prove an exact formula for the minimum size of a universal object, as we have done in (I). Next, a contiguous $k$-universal $d$-ary plane (respectively, $[d]$-) tree is certainly also a noncontiguous $k$-universal $d$-ary plane (respectively, $[d]$-) tree, so we trivially have $N_{d\text{-ary}}^{\text{con}}(k)\geq N_{d\text{-ary}}^{\text{non}}(k)$ and $N_{[d]}^{\con}(k)\geq N_{[d]}^{\non}(k)$. Furthermore, one can change a $d$-ary plane tree into a $[d]$-tree by simply forgetting about empty subtrees and edge types. Doing so allows us to view a contiguous (respectively, noncontiguous) $k$-universal $d$-ary plane tree as a contiguous (respectively, noncontiguous) $k$-universal $[d]$-tree. Therefore, it follows from (I) that $N_{d\text{-ary}}^{\text{non}}(k)$, $N_{[d]}^{\text{con}}(k)$, and $N_{[d]}^{\text{non}}(k)$ are all at most $d^{k-1}+k-1$. However, the upper bounds in (II), (III), and (IV) greatly improve upon this observation. Indeed, the upper bounds in (II) and (IV) are subexponential in $k$, and the base of the exponential in the upper bound in (III) is much smaller than $d$. In Section~\ref{sec:0,1,...,d}, we will see that $\rho_d=1+\frac{4\log d}{d}(1+o(1))$ as $d\to\infty$. Compare this with the exponential lower bound in (III), in which the base of the exponential is $d^{1/d}=1+\frac{\log d}{d}(1+o(1))$. Finally, note that since $N_{[d]}^{\non}(k)\leq N_{[d]}^{\con}(k)$, we could deduce immediately from (III) that $N_{[d]}^{\non}(k)\leq (\rho_d+o(1))^k$. However, the subexponential upper bound in (IV) greatly improves upon this. Similarly, the lower bound in (III) beats the lower bound in (IV). We will show in Section~\ref{sec:0,1,...,d} that $N_{d\text{-ary}}^{\non}(k)$ and $N_{[d]}^{\non}(k)$ differ by at most a constant factor (for each fixed $d$); this fact explains why (II) and (IV) look similar. Producing nontrivial lower bounds for the sizes of noncontiguous $k$-universal trees is fairly difficult; this is analogous to the permutation setting, where nontrivial lower bounds are scarce. For example, the best known lower bound for the length $n$ of a permutation that contains all length-$k$ patterns is given by $n\geq k^2/e^2$; this is a consequence of the simple observation that ${n\choose k}\geq k!$. 
The number of $d$-ary plane trees with $k$ vertices is $\frac{1}{(d-1)k+1}{dk\choose k}$, so a similar argument in our setting shows that ${N_{d\text{-ary}}^{\text{non}}(k)\choose k}\geq\frac{1}{(d-1)k+1}{dk\choose k}$. This translates to a lower bound of roughly $dk$ for $N_{d\text{-ary}}^{\text{non}}(k)$. Although many of the lower bounds in the (noncontiguous) permutation setting are trivial, there is a noteworthy exception: Albert, Engen, Pantone, and Vatter managed to obtain an explicit formula for the minimum length of a permutation that (noncontiguously) contains all length-$k$ layered permutations. Making use of a bijection between $231$-avoiding permutations and binary plane trees, we will invoke this result in order to prove the lower bound in (II). \section{$d$-ary plane trees}\label{sec:d-ary} \subsection{Contiguous containment}\label{subsec:d-ary-contiguous} In the case of contiguous containment for $d$-ary plane trees, we obtain the exact size of the smallest $k$-supertree. The upper bound comes from an explicit construction, and the lower bound comes from considering the family of paths. \begin{theorem}\label{thm:d-ary-contiguous} For all integers $d \geq 2$ and $k \geq 1$, we have $N_{d\ary}^{\con}(k)=d^{k-1}+k-1$. \end{theorem} \begin{proof} We first show that $d^{k-1}+k-1$ is a lower bound for the size of a $k$-supertree. Let $\bf T$ be a contiguous $k$-universal $d$-ary plane tree. Let $T_1,\ldots,T_{d^{k-1}}$ be the $d$-ary plane trees on $k$ vertices in which each nonleaf vertex has exactly one child (i.e., the $d$-ary plane trees that are paths on $k$ vertices). For each $i\in\{1,\ldots,d^{k-1}\}$, there is a contiguous embedding $T_i^{\ast}$ of $T_i$ in $\bf T$. Let $v_i^{\ast}$ denote the vertex in $\bf T$ that corresponds to the unique leaf of $T_i$ under this embedding. Starting at $v_i^*$ and tracing up $k-1$ edges, we immediately recover all of the edges of $T_i^*$, so the location of $v_i^{\ast}$ in $\bf T$ completely determines the isomorphism class of $T_i^{\ast}$. Because the trees $T_1^*,\ldots,T_{d^{k-1}}^*$ are pairwise nonisomorphic, the vertices $v_1^*,\ldots,v_{d^{k-1}}^*$ are pairwise distinct. Thus, $\bf T$ contains at least $d^{k-1}$ vertices at depth at least $k-1$. Since $\bf T$ contains vertices at depth $k-1$, it must also contain at least one vertex at each depth $j$ for $0 \leq j \leq k-2$. This gives at least $k-1$ additional vertices, so $\bf T$ contains at least $d^{k-1}+k-1$ vertices, as desired. We now construct a contiguous $k$-universal $d$-ary plane tree $\Delta_d(k)$ on exactly $d^{k-1}+k-1$ vertices. First, the tree $\Delta_d(1)$ consists of a single vertex. Now, consider $k\geq 2$. To construct $\Delta_d(k)$, first consider the $d$-ary plane tree that is a path on $k-1$ vertices in which every edge is of type $1$. Let $v$ denote the unique leaf of this path. For each $2 \leq i \leq d$, attach a copy of the perfect tree $P_{k-2}^{(d)}$ in the $i^\text{th}$ subtree of $v$. (Recall the definition of perfect trees from the introduction.) Note that each of these $d-1$ added trees contains exactly $1+d+\cdots+d^{k-2}$ vertices, which means that together they have $d^{k-1}-1$ vertices in total. Next, consider the leftmost leaf in the copy of $P_{k-2}^{(d)}$ that is sitting in the second subtree of $v$. Add one more vertex in the first subtree of this leaf. The resulting tree $\Delta_d(k)$ has the desired number of vertices. See Figure \ref{Fig1} for an example. 
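As a sanity check on the vertex count (this is not part of the proof), the following Python sketch builds $\Delta_d(k)$ exactly as described above, encoding a vertex as a length-$d$ list of subtrees with \texttt{None} for empty subtrees, and confirms that $|\Delta_d(k)| = d^{k-1}+k-1$ for small values of $d$ and $k$.
\begin{verbatim}
# Sanity check (our own code, not part of the proof): build Delta_d(k) as
# described above and verify |Delta_d(k)| = d^(k-1) + k - 1.

def empty_vertex(d):
    return [None] * d

def perfect(d, h):
    # the perfect tree P_h^{(d)} of height h
    return empty_vertex(d) if h == 0 else [perfect(d, h - 1) for _ in range(d)]

def size(t):
    return 0 if t is None else 1 + sum(size(s) for s in t)

def delta(d, k):
    if k == 1:
        return empty_vertex(d)      # Delta_d(1) is a single vertex
    v = empty_vertex(d)             # bottom vertex of the path
    root = v
    for _ in range(k - 2):          # path on k-1 vertices, all edges of type 1
        root = [root] + [None] * (d - 1)
    for i in range(1, d):           # copies of P_{k-2}^{(d)} in subtrees 2,...,d of v
        v[i] = perfect(d, k - 2)
    u = v[1]                        # copy sitting in the second subtree of v
    while u[0] is not None:         # walk down to its leftmost leaf
        u = u[0]
    u[0] = empty_vertex(d)          # one extra vertex in the first subtree
    return root

for d in (2, 3):
    for k in range(1, 7):
        assert size(delta(d, k)) == d ** (k - 1) + k - 1
\end{verbatim}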
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.65\linewidth]{SupertreesPIC1}
\end{center}
\caption{The tree $\Delta_3(4)$ is depicted on the right. This tree contiguously contains all $3$-ary plane trees on $4$ vertices, three of which are shown on the left.}\label{Fig1}
\end{figure}
Finally, we show that $\Delta_d(k)$ is in fact a $k$-supertree. Fix any $d$-ary plane tree $T$ with $k$ vertices. If $T$ is not a path, then it has height at most $k-2$, so it fits into one of the copies of $P_{k-2}^{(d)}$. If $T$ is the path whose edges are all of type $1$, then we can embed $T$ in $\Delta_d(k)$ by mapping the root of $T$ to the root of the second subtree (i.e., the leftmost nonempty subtree) of $v$. Now, suppose $T$ is a path in which at least one edge is not of type $1$. Let $m$ be the smallest element of $\{1,\ldots,k-1\}$ such that the $m^\text{th}$ edge from the top of $T$ is not of type $1$. We can embed $T$ into $\Delta_d(k)$ by mapping the unique vertex in $T$ of depth $m-1$ to $v$. This exhausts all cases and shows that $\Delta_d(k)$ is $k$-universal.
\end{proof}
\subsection{Noncontiguous containment}\label{subsec:d-ary-noncontiguous}
\subsubsection{Lower bounds}\label{subsubsec:noncontiguous-lower}
Recall that $\eta_2=1$ and $\eta_d=\frac{1}{2}$ for all $d\geq 3$. In this subsection we will prove the following theorem.
\begin{theorem}\label{Thm3}
For all integers $d \geq 2$ and $k\geq 1$, we have
\[N_{d\ary}^{\non}(k)\geq \eta_d\left((k+1)\left\lceil\log_2(k+1)\right\rceil-2^{\left\lceil\log_2(k+1)\right\rceil}+1\right).\]
\end{theorem}
The first step is to show that it suffices to consider the specific case in which $d=2$.
\begin{proposition}\label{Prop1}
For all integers $d\geq 3$ and $k\geq 1$, we have
\[N_{d\ary}^{\non}(k)>\frac{1}{2}N_{2\ary}^{\non}(k).\]
\end{proposition}
\begin{proof}
Fix $d\geq 3$, and let $\vec{t}=(t_1,\ldots, t_m)$ be an $m$-tuple of integers satisfying $1\leq t_1<\cdots<t_m\leq d$. Consider the $d$-ary plane tree that is a path on $m-1$ vertices in which every edge has type $1$. For $2 \leq i \leq m$, attach a single child via an edge of type $t_i$ to the $(i-1)^{\text{th}}$ vertex of the path, counting from the bottom. Then attach a single child via an edge of type $t_1$ to the bottom vertex of this original path. Call the resulting $d$-ary plane tree $J_{\vec{t}}$. Let $\mathcal T = \mathcal T_0$ be a $d$-ary plane tree. We will transform $\mathcal T$ into a tree $\widetilde{\mathcal T}$ in which every vertex has at most two children. Suppose $\mathcal T$ has a vertex $v_1$ with at least $3$ children, say exactly $m_1$ children in subtrees of type $\vec{t}_1=(t_{1,1},\ldots,t_{1,m_1})$. Replace $v_1$ with a copy of $J_{\vec{t}_1}$ in the following manner. Detach the subtrees of $v_1$, and glue $J_{\vec{t}_1}$ to $v_1$. Then glue the (detached) $i^\text{th}$ nonempty subtree (counted from the left) of $v_1$ to the $i^\text{th}$ leaf (again counted from the left) of the copy of $J_{\vec{t}_1}$. Call this tree $\mathcal T_1$. Choose another vertex $v_2$ that has $m_2\geq 3$ children in subtrees of type $\vec{t}_2$, and replace it in the same fashion with a copy of $J_{\vec{t}_2}$ to obtain $\mathcal T_2$. Continue this process until reaching a tree $\widetilde{\mathcal T} = \mathcal T_r$ in which each vertex has at most $2$ children. Note that if $1\le i\le r$, then $\mathcal T_i$ noncontiguously contains $\mathcal T_{i-1}$ because we can legally contract the edges of the original path in the added $J_{\vec{t}_i}$ from top to bottom.
Iterating this procedure shows that there is a sequence of legal contractions that begins with $\widetilde{\mathcal{T}} = \mathcal T_r$ and ends with $\mathcal T_0 = \mathcal T$. We can naturally associate $\widetilde{\mathcal T}$ with a \emph{binary} plane tree $\widetilde{\mathcal T}'$. If there is an only child of type $1$ in $\widetilde{\mathcal T}$, it becomes an only child of type $1$ in $\widetilde{\mathcal T}'$. If there is an only child of type other than $1$ in $\widetilde{\mathcal T}$, it becomes an only child of type $2$ in $\widetilde{\mathcal T}'$. See Figure \ref{Fig4} for an example when $d=4$. As proven above, our construction guarantees that $\widetilde{\mathcal T}$ noncontiguously contains $\mathcal T$, so it also noncontiguously contains every $d$-ary plane tree that $\mathcal T$ noncontiguously contains. \begin{figure}[h] \begin{center} \includegraphics[width=.76\linewidth]{SupertreesPIC18} \caption{Transforming $\mathcal T$ into $\widetilde{\mathcal T}'$. We have used the color red to indicate the edges of the inserted copy of $J_{m_i}$ added in the $i^\text{th}$ step.} \label{Fig4} \end{center} \end{figure} Let $\bf T$ be a noncontiguous $k$-universal $d$-ary plane tree with $\alpha=N_{d\ary}^{\non}(k)$ vertices. The tree $\widetilde{\bf T}$ obtained via the above construction is also a noncontiguous $k$-universal $d$-ary plane tree, so $\widetilde{\bf T}'$ is a noncontiguous $k$-universal binary plane tree. The trees $\widetilde{\bf T}$ and $\widetilde{\bf T}'$ have the same number of vertices, say $\beta$. We know that $\beta\geq N_{2\ary}^{\non}(k)$. We will show that $\beta<2\alpha$, establishing the desired result. Let $f_r$ denote the number of vertices in $\bf T$ with exactly $r$ children. We obtained $\widetilde{\bf T}$ from $\bf T$ by substituting a copy of $J_m$ for each vertex $v$ of $\bf T$ with $m\geq 3$ children. Note that each such substitution increased the number of vertices in the tree by $m-2$. Thus, $\beta=\alpha+\sum_{m=3}^d(m-2)f_m$. We know that $\sum_{m=0}^d f_m=\alpha$. Furthermore, counting the $\alpha-1$ edges in $\bf T$ according to the number of children of their parent vertices gives $\alpha-1=\sum_{m=0}^dmf_m$. Consequently, \begin{align*} \beta &=\alpha+\sum_{m=3}^d(m-2)f_m =\alpha+2f_0+f_1+\sum_{m=0}^d(m-2)f_m\\ &=\alpha+2f_0+f_1+(\alpha-1)-2\alpha =2f_0+f_1-1<2\sum_{m=0}^df_m=2\alpha. \qedhere \end{align*} \end{proof} For the proof of Theorem \ref{Thm3}, it now remains only to show that \begin{equation}\label{Eq3}N_{2\text{-ary}}^{\text{non}}(k)\geq (k+1)\left\lceil\log_2(k+1)\right\rceil-2^{\left\lceil\log_2(k+1)\right\rceil}+1. \end{equation} Let us first establish some terminology and notation concerning labeled trees and tree traversals. Let $\PT_n^{(2)}$ denote the set of binary plane trees with $n$ vertices. A \emph{decreasing binary plane tree} is a binary plane tree whose vertices are labeled with distinct positive integers so that the label of each nonroot vertex is smaller than the label of its parent. Let $\DPT_n^{(2)}$ be the set of decreasing binary plane trees with $n$ vertices in which the labels form the set $[n]$. We can read the labels of a decreasing binary plane tree in \emph{in-order} by first reading the labels of the left subtree of the root in in-order, then reading the label of the root, and finally reading the labels of the right subtree of the root in in-order. Let $I(\Upsilon)$ denote the in-order reading of the decreasing binary plane tree $\Upsilon$. 
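For concreteness, the following short Python sketch (our own encoding, purely illustrative) computes the in-order reading just defined, storing a decreasing binary plane tree as a triple consisting of the root label and its left and right subtrees.
\begin{verbatim}
# Illustrative sketch (our own encoding): a decreasing binary plane tree is
# stored as (label, left, right), where left and right are subtrees or None.

def in_order(tree):
    # in-order reading I: left subtree, then root label, then right subtree
    if tree is None:
        return []
    label, left, right = tree
    return in_order(left) + [label] + in_order(right)

# The decreasing tree with root 3, left child 1 and right child 2
# has in-order reading 132.
example = (3, (1, None, None), (2, None, None))
assert in_order(example) == [1, 3, 2]
\end{verbatim}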
The map $I:\DPT_n^{(2)}\to S_n$ is a bijection \cite[Chapter 8]{Bona}. Alternatively, we can read the labels of a decreasing binary plane tree in \emph{postorder} by first reading the labels of the left subtree of the root in postorder, then reading the labels of the right subtree of the root in postorder, and finally reading the label of the root. For each unlabeled tree $T\in\PT_n^{(2)}$, there is a unique way to label the vertices of $T$ so that the resulting labeled tree $\omega(T)\in\DPT_n^{(2)}$ has postorder reading $123\cdots n$ (the increasing permutation). This gives us a map $\omega:\PT_n^{(2)}\to\DPT_n^{(2)}$. Let $\psi(T)=I(\omega(T))$. It is not difficult to check that the permutation $\psi(T)$ avoids the pattern $231$. In fact, we have the following useful proposition. \begin{proposition}\label{Prop2} The map $\psi$ is a bijection from the set of binary plane trees with $n$ vertices to the set of $231$-avoiding permutations in $S_n$. If $\mathcal T$ is a binary plane tree that noncontiguously contains the binary plane tree $T$, then the permutation $\psi(\mathcal T)$ contains the pattern $\psi(T)$. \end{proposition} \begin{proof} The map $\psi$ is injective because $\omega$ and $I$ are injective. The first statement of the proposition now follows from the fact that the number of binary plane trees with $n$ vertices and the number of $231$-avoiding permutations in $S_n$ are both equal to the $n^\text{th}$ Catalan number.\footnote{The first statement of this proposition is not new; it is essentially equivalent to the fact that a permutation is $1$-stack-sortable if and only if it avoids $231$ (see one of the references \cite{Bona, DefantPolyurethane, DefantPostorder} for more details).} To prove the second statement, we need to understand the effect of legal edge contractions on the corresponding permutations. Let $\mathcal T\in\PT_n^{(2)}$ be a binary plane tree, and let $e$ be an edge of $\mathcal T$ that can be legally contracted. Let $a$ and $b$ be, respectively, the labels of the upper and lower endpoints of $e$ in $\omega(\mathcal T)$. Let $\mathcal T/e\in\PT_{n-1}^{(2)}$ denote the tree that is obtained by contracting the edge $e$ in $\mathcal T$. One can check that if $e$ is a type-$1$ edge, then $\psi(\mathcal T/e)$ is the permutation obtained by deleting the entry $b$ from $\psi(\mathcal T)$ and then normalizing to obtain a permutation in $S_{n-1}$. Similarly, if $e$ is a type-$2$ edge, then $\psi(\mathcal T/e)$ is the permutation obtained by deleting the entry $a$ from $\psi(\mathcal T)$ and then normalizing. In either case, $\psi(\mathcal T)$ contains $\psi(\mathcal T/e)$ as a pattern. If $\mathcal T$ noncontiguously contains a binary plane tree $T$ (meaning $T$ is obtained from $\mathcal T$ via a sequence of legal edge contractions), then $\psi(\mathcal T)$ contains $\psi(T)$ as a pattern. \end{proof} Let us illustrate the proof of the second statement of Proposition \ref{Prop2} with an example. 
If \[T\text{ is }\begin{array}{l} \includegraphics[height=2.6cm]{SupertreesPIC14} \end{array},\text{ then }\omega(T)\text{ is }\begin{array}{l} \includegraphics[height=2.6cm]{SupertreesPIC15} \end{array},\text{ and }\psi(T)=I(\omega(T))=17324658.\] Contracting the edge labeled $e$, we find that \[T/e\text{ is }\begin{array}{l} \includegraphics[height=2.2cm]{SupertreesPIC16} \end{array},\text{ }\omega(T/e)\text{ is }\begin{array}{l} \includegraphics[height=2.2cm]{SupertreesPIC17} \end{array},\text{ and }\psi(T/e)=I(\omega(T/e))=1632547.\] Note that since $e$ is a left edge, the permutation $\psi(T/e)=1632547$ is obtained by deleting the entry $b=4$ from the permutation $\psi(T)=17324658$ and then normalizing. We can finally deduce inequality \eqref{Eq3}. Suppose $\mathcal T$ is a noncontiguous $k$-universal binary plane tree with $N_{2\text{-ary}}^{\text{non}}(k)$ vertices. Proposition \ref{Prop2} tells us that $\psi(\mathcal T)$ contains every $231$-avoiding permutation in $S_k$. A permutation is called \emph{layered} if it avoids both $231$ and $312$. Thus, $\psi(\mathcal T)$ is a permutation of length $N_{2\text{-ary}}^{\text{non}}(k)$ that contains all layered permutations in $S_k$. The authors of \cite{Albert} proved that the minimum size of a permutation that contains all layered permutations in $S_k$ is $(k+1)\left\lceil\log_2(k+1)\right\rceil-2^{\left\lceil\log_2(k+1)\right\rceil}+1$. This establishes \eqref{Eq3} and hence completes the proof of Theorem \ref{Thm3}. \subsubsection{Upper bounds}\label{subsubsec:noncontiguous-upper} For every $d\geq 2$ and $k\geq 1$, we now construct a noncontiguous $k$-universal $d$-ary plane tree $\xi_d(k)$. The construction is natural but fairly intricate. We begin by defining a few specific $d$-ary plane trees that will form the building blocks in our construction of $\xi_d(k)$. The \textit{$d$-crescent} is the path on $d+1$ vertices in which the vertex at depth $i$ is connected to its parent by an edge of type $i$. Now, take three copies of the $d$-crescent. Remove the lowest vertex from the first of these $d$-crescents, and glue the remaining tree to the vertex of depth $1$ in the second crescent. Next, remove the root vertex from the third $d$-crescent, and glue the remaining tree to the root of the second $d$-crescent. We call the resulting tree the \textit{$d$-vertebra} and denote it by $V_d$. Note that $V_d$ has exactly $3$ leaves, which we call the \textit{left}, \textit{center}, and \textit{right} leaves (in the obvious fashion). \begin{figure}[h] \begin{center} \includegraphics[height=4.5cm]{SupertreesPIC2} \end{center} \caption{From left to right: the $3$-crescent; the $3$-vertebra $V_3$, with the left, center, and right leaves labeled $\ell$, $c$, and $r$ (respectively); and the $2^{\text{nd}}$ $3$-spine.}\label{Fig2} \end{figure} For $m \geq 1$, we obtain the \textit{$m^{\textit{th}}$ $d$-spine} by consecutively gluing $m$ copies of the $d$-vertebra $V_d$ under a single copy of the $d$-crescent: the first $V_d$ is glued to the single leaf of the $d$-crescent, and each subsequent $V_d$ is glued to the center leaf of the previous $V_d$. We speak of the first, second, etc. $d$-vertebra beginning with the highest one. At last, we recursively define the families $\xi_d(k)$. We first describe the following base cases: \begin{itemize} \item Let $\xi_d(1)$ consist of a single vertex. \item Let $\xi_d(2)$ be the $d$-crescent. \item Obtain $\xi_d(3)$ from a $d$-crescent by giving the leaf $d$ children (one in each position). 
\end{itemize} The construction for larger $k$ is recursive and differs for $d=2$ and $d>2$. (The $d=2$ construction is a slight improvement on the $d>2$ construction.) If $d=2$, then for $k \geq 4$, we obtain $\xi_2(k)$ from the $\left(\left\lfloor \frac{k}{2}\right\rfloor-1\right)^{\text{th}}$ $2$-spine as follows: \begin{enumerate} \item For each $1 \leq i \leq \left\lfloor \frac{k}{2}\right\rfloor-2$, glue a copy of $\xi_2(i)$ to each of the left and right leaves of the $i^{\text{th}}$ $2$-vertebra. \item Glue a copy of $\xi_2(\left\lfloor \frac{k}{2}\right\rfloor-1)$ to the right leaf of the $\left(\left\lfloor \frac{k}{2}\right\rfloor-1\right)^{\text{th}}$ (i.e., lowest) $2$-vertebra. \item Glue a copy of $\xi_2(\left\lceil\frac{k}{2}\right\rceil-1)$ to the left leaf of the $\left(\left\lfloor \frac{k}{2}\right\rfloor-1\right)^{\text{th}}$ $2$-vertebra. \item Glue a copy of $\xi_2(\left\lceil\frac{k}{2}\right\rceil)$ to the center leaf of the $\left(\left\lfloor \frac{k}{2}\right\rfloor-1\right)^{\text{th}}$ $2$-vertebra. \end{enumerate} If $d>2$, then for $k \geq 4$, we obtain $\xi_d(k)$ from the $\left\lfloor \frac{k}{2}\right\rfloor^{\text{th}}$ $d$-spine as follows: \begin{enumerate} \item For each $1 \leq i \leq \left\lfloor \frac{k}{2}\right\rfloor-2$, glue a copy of $\xi_d(i)$ to each of the left and right leaves of the $i^{\text{th}}$ $d$-vertebra. \item Glue a copy of $\xi_d(\left\lfloor \frac{k}{2}\right\rfloor-1)$ to the right leaf of the $\left(\left\lfloor \frac{k}{2}\right\rfloor-1\right)^{\text{th}}$ (i.e., second-lowest) $d$-vertebra. \item Glue a copy of $\xi_d(\left\lceil\frac{k}{2}\right\rceil-1)$ to the left leaf of the $\left(\left\lfloor \frac{k}{2}\right\rfloor-1\right)^{\text{th}}$ $d$-vertebra. \item Glue a copy of $\xi_d(\left\lceil\frac{k}{2}\right\rceil)$ to the center leaf of the $\left\lfloor \frac{k}{2}\right\rfloor^{\text{th}}$ (i.e., lowest) $d$-vertebra. \item Glue a copy of $\xi_d\left( \left\lfloor \frac{k+1}{4} \right\rfloor \right)$ to each of the left and right leaves of the $\left\lfloor \frac{k}{2}\right\rfloor^{\text{th}}$ $d$-vertebra. \end{enumerate} For $k\geq 4$, the \textit{tail} of $\xi_d(k)$ is the copy of $\xi_d(\left\lceil \frac{k}{2} \right\rceil)$ that is glued to the center leaf of the bottom of the spine in step (4) (in both the $d=2$ and $d>2$ constructions). Figure~\ref{Fig3} shows $\xi_2(k)$ for some small values of $k$. \begin{figure}[h] \begin{center} \includegraphics[height=6.5cm]{SupertreesPIC3} \end{center} \caption{The trees $\xi_2(k)$ for $1\leq k\leq 5$, along with $\xi_2(9)$. In $\xi_2(4)$, $\xi_2(5)$, and $\xi_2(9)$, the pink edges represent the spine. The orange edges represent the copies of the previously-constructed trees that are glued to the spine, and the green edges represent the tail.}\label{Fig3} \end{figure} We now show that $\xi_d(k)$ noncontiguously contains every $d$-ary plane tree with $k$ vertices. The big-picture idea is that we can ``siphon off'' small subtrees of the tree that we are trying to contain until what remains fits into the tail. Many of the arguments are the same for $d=2$ and $d>2$, so we present the proofs together. The reader may find it helpful to bear in mind the example of $\xi_2(9)$ (as shown in Figure~\ref{Fig3}). \begin{theorem}\label{thm:colorful-construction} For all integers $d \geq 2$ and $k \geq 1$, the tree $\xi_d(k)$ noncontiguously contains every $d$-ary plane tree with $k$ vertices. \end{theorem} \begin{proof} Fix $d$. We proceed by strong induction on $k$. 
The statement is obviously true for $k \leq 3$. Now, consider $k \geq 4$. Let $T$ be a $d$-ary plane tree on $k$ vertices. We will show that $\xi_d(k)$ noncontiguously contains $T$ by showing that $\xi_d(k)$ noncontiguously contains a larger tree $T'$, which in turn noncontiguously contains $T$. We construct $T'$ from $T$ by defining a finite sequence of pairs $(T_i,v_i)$, where $T_i$ is a tree and $v_i$ is a vertex of $T_i$; we then let $T'$ be the last $T_i$. We will see that we can naturally view the vertices $v_0,\ldots,v_i$ as vertices in the tree $T_{i+1}$. In particular, we can view all of the vertices $v_i$ as vertices in the last tree $T'$. First, let $T_0=T$, and let $v_0$ be the root of $T_0$. If at any time the subtree in $T_i$ below $v_i$ (including $v_i$ itself) contains at most $\left\lceil \frac{k}{2} \right\rceil$ vertices, then the sequence terminates. As long as this situation is not achieved, we obtain $(T_{i+1},v_{i+1})$ from $(T_i,v_i)$ as follows. If $v_i$ has only a single child, then we let $v_{i+1}$ denote this child and let $T_{i+1}=T_i$. If $v_i$ has exactly $2$ children, then we let $v_{i+1}$ denote the child with the larger subtree (breaking ties with preference for the right child) and let $T_{i+1}=T_i$. Otherwise, $v_i$ has at least $3$ children. (This possibility of course pertains only to $d>2$.) We consider the leftmost and rightmost nonempty subtrees of $v_i$ and obtain $T_{i+1}$ by performing the following operations. If the leftmost nonempty subtree contains fewer vertices than the rightmost nonempty subtree or these two subtrees contain the same number of vertices, then detach all of the subtrees of $v_i$ except the leftmost nonempty one, add a new child $v_{i+1}$ in the $d^\text{th}$ subtree of $v_i$ via a red edge of type $d$, and reattach the detached subtrees as new subtrees of $v_{i+1}$ (so that the reattached edges have the same types that they originally had). If the rightmost nonempty subtree contains fewer vertices than the leftmost nonempty subtree, then detach all of the subtrees of $v_i$ except the rightmost nonempty one, add a new child $v_{i+1}$ in the $1^\text{st}$ subtree of $v_i$ via a red edge of type $1$, and reattach the detached subtrees as new subtrees of $v_{i+1}$. \begin{figure}[h] \begin{center} \includegraphics[width=0.7\linewidth]{SupertreesPIC22} \end{center} \caption{An illustration of the sequence transforming $T$ into $T'$, where $d=3$ and $k=10$. We also have $s_0=2$, $s_1=0$, and $s_2=1$.}\label{Fig5} \end{figure} This sequence terminates in some $(T_m,v_m)$ with $1\leq m\leq \left\lfloor \frac{k}{2} \right\rfloor$ since each $v_{i+1}$ has strictly fewer vertices below it than $v_i$. Note that each $T_i$ is either the same as $T_{i+1}$ or else can be obtained from $T_{i+1}$ by (legally) contracting the added red edge between $v_i$ and $v_{i+1}$. In particular, $T'$ noncontiguously contains $T$. (When $d=2$, $T'$ equals $T$ because we did not add any red edges.) Each vertex $v_i$, for $0 \leq i \leq m-1$, has at most $2$ children in $T'$. When $v_i$ has exactly $2$ children in $T'$, we think of the subtree containing $v_{i+1}$ as continuing down the main ``trunk'' of $T'$ and the other (smaller) subtree, which we call $\tau_i$, as ``branching off.'' In this case, we let $s_i=|\tau_i|$. If $v_i$ has only $1$ child, then we let $s_i=0$. 
Note that the difference between the number of non-red edges below $v_{i}$ and the number of non-red edges below $v_{i+1}$ is given by $$\begin{cases} 1, &\text{if }v_i\text{ has only a single child in }T'\\ s_i, &\text{if }v_i\text{ has two children in }T'\text{ and the edge between }v_i\text{ and }v_{i+1}\text{ is red}\\ s_i+1, &\text{if }v_i\text{ has two children in }T'\text{ and the edge between }v_i\text{ and }v_{i+1}\text{ is not red}. \end{cases}$$ We can write this number of non-red edges more concisely as \begin{equation} \label{eq:condition} \begin{cases} \max \{1,s_i\}+1, &\text{if }v_i\text{ has two children and the edge between }v_i\text{ and }v_{i+1}\text{ is not red}\\ \max \{1,s_i\}, &\text{otherwise}. \end{cases} \end{equation} This characterization will be useful later. Now, we condition on $m$: if $m=1$, then it is in fact easier to embed $T$ in $\xi_d(k)$ directly; if $m>1$, then we describe the embedding of $T'$ in $\xi_d(k)$. First, suppose $m=1$, i.e., the algorithm terminates after a single step. If $k$ is even, then the root of $T$ must have two children, with $\frac{k}{2}$ and $\frac{k}{2}-1$ vertices, respectively. We identify the root of $T$ with the top vertex of the $\left(\frac{k}{2}-1\right)^{\text{th}}$ vertebra. If the subtree with $\frac{k}{2}-1$ vertices is the right (respectively, left) child of the root, then we can embed this subtree in the copy of $\xi_d\left(\frac{k}{2}-1 \right)$ that is attached to the right (respectively, left) leaf of the $\left(\frac{k}{2}-1\right)^{\text{th}}$ vertebra; and we embed the subtree with $\frac{k}{2}$ vertices in the copy of $\xi_d\left( \frac{k}{2} \right)$ in the tail. (We have used the inductive hypothesis that $\xi_d(\kappa)$ is actually a $\kappa$-supertree for all $\kappa<k$.) If $k$ is odd, then there are two possibilities for the subtrees of the root of $T$. \begin{enumerate}[(i)] \item There are two subtrees, each with $\frac{k-1}{2}$ vertices. We identify the root of $T$ with the top vertex of the $\left(\frac{k-3}{2}\right)^{\text{th}}$ vertebra. We can embed the left subtree in the copy of $\xi_2\left(\frac{k-1}{2} \right)$ that is attached to the left leaf of the $\left(\frac{k-3}{2}\right)^{\text{th}}$ vertebra, and we can embed the right subtree in the tail. \item There are two subtrees, with $\frac{k-3}{2}$ and $\frac{k+1}{2}$ vertices, respectively. If the subtree with $\frac{k-3}{2}$ vertices is the right (respectively, left) child of the root, then we can embed this subtree in the right (respectively, left) tree that is glued to the $\left(\frac{k-3}{2}\right)^{\text{th}}$ vertebra; and we embed the subtree with $\frac{k+1}{2}$ vertices in the tail. \end{enumerate} We now turn to the case $m>1$. We will describe how to noncontiguously embed $T'$ into $\xi_d(k)$. We first define functions $f_2:\{0,\ldots,m\}\to \{0,\ldots,\left\lfloor \frac{k}{2}\right\rfloor\}$ and $f_{>2}:\{0,\ldots,m\}\to \{0,\ldots,\left\lfloor \frac{k}{2}\right\rfloor+1\}$ that, roughly speaking, tell us how far down $\xi_d(k)$ to embed each $v_i$. Unsurprisingly, $f_2$ will be for the $d=2$ case, and $f_{>2}$ will be for the $d>2$ case. In what follows, we will write $f_*$ in statements that apply to both $f_2$ and $f_{>2}$. Let $f_2(0)=f_{>2}(0)=s_0$. For $1 \leq i \leq m-1$, let $$f_2(i)=\max\{f_2(i-1)+1,s_i\}\quad\text{and}\quad f_{>2}(i)=\max\{f_{>2}(i-1)+1,s_i\}.$$ Finally, let $f_2(m)=\left\lfloor \frac{k}{2} \right\rfloor$ and $f_{>2}(m)=\left\lfloor \frac{k}{2} \right\rfloor+1$. 
We will see that $f_*$ is strictly increasing and in fact has the claimed codomain; before establishing these facts, we show that they will let us embed $T'$ in $\xi_d(k)$. For each $0 \leq i \leq m$, we identify $v_i$ with a vertex of $\xi_d(k)$ as follows. For the $d=2$ case, we identify $v_i$ with: the root of $\xi_2(k)$ if $f_2(i)=0$; the topmost vertex in the $f_2(i)^{\text{th}}$ vertebra of $\xi_2(k)$ if $1 \leq f_2(i) \leq \left\lfloor \frac{k}{2}\right\rfloor-1$; and the topmost vertex of the tail if $f_2(i)=\left\lfloor \frac{k}{2}\right\rfloor$. For the $d>2$ case, we identify $v_i$ with: the root of $\xi_d(k)$ if $f_{>2}(i)=0$; the topmost vertex in the $f_{>2}(i)^{\text{th}}$ vertebra of $\xi_d(k)$ if $1 \leq f_{>2}(i) \leq \left\lfloor \frac{k}{2}\right\rfloor$; and the topmost vertex of the tail if $f_{>2}(i)=\left\lfloor \frac{k}{2}\right\rfloor+1$. Consider any $i$ with $s_i>0$ and $f_*(i) \leq \left\lfloor \frac{k}{2}\right\rfloor-1$. If $\tau_i$ is the left subtree of $v_i$, then the inductive hypothesis and the definition of $f_*$ guarantee that we can embed $\tau_i$ into the copy of $\xi_d(f_*(i))$ that is glued to the left leaf of the $f_*(i)^{\text{th}}$ vertebra. Contract this glued subtree to a copy of $\tau_i$; then contract the right subtree of this vertebra to a point; then contract the vertebra itself to the edge connecting $v_i$ to $\tau_i$ and one other edge below $v_i$ of the same type as the edge connecting $v_i$ and $v_{i+1}$ in $T'$. The exact same procedure can be done in the case where $\tau_i$ is the right subtree of $v_i$. If $s_i=0$ and $i\neq m$, then we contract everything in the $f_*(i)^{\text{th}}$ vertebra except for a single edge of the type that connects $v_i$ and $v_{i+1}$ in $T'$. Things are even easier for $i>0$ with $s_i=0$ and $f_*(i) \leq \left\lfloor \frac{k}{2}\right\rfloor-1$: in this case, we simply embed the (unique) edge below $v_i$ in the ``center'' crescent of the $i^{\text{th}}$ vertebra (i.e., the crescent whose bottom is the center leaf of the vertebra). When $i=0$ and $s_0=0$, we embed this edge into the crescent at the top of the spine. For $d=2$, we finish by contracting the tail of $\xi_2(k)$ to a copy of the tree below $v_m$ in $T'$. This completes the contraction of $\xi_2(k)$ to $T'$, which, as remarked earlier, can be further contracted to $T$. Now, we turn to $d>2$. We will later show that if $f_{>2}(m-1)=\left\lfloor \frac{k}{2}\right\rfloor$, then $s_{m-1} \leq \left\lfloor \frac{k+1}{4} \right\rfloor$, so we can embed $\tau_{m-1}$ at the level of the $\left\lfloor \frac{k}{2}\right\rfloor ^{\text{th}}$ $d$-vertebra as in the previous paragraph. And then we contract the tail of $\xi_d(k)$ to a copy of the tree below $v_m$ in $T'$, which completes the contraction of $\xi_d(k)$ to $T'$. The next order of business is showing that $f_2$ and $f_{>2}$ have the desired properties for the embedding described above. We first show that $f_2(m-1)\leq \left\lfloor \frac{k}{2}\right\rfloor-1$ and $f_{>2}(m-1) \leq \left\lfloor \frac{k}{2}\right\rfloor$. Both functions are strictly increasing on $i \leq m-1$, so this will also prove that they are injective. Easy induction on $r$ shows that $$f_*(r) \leq \sum_{i=0}^r \max \{1,s_i\},$$ with equality exactly when $s_0 \geq 1$ and $s_i\leq 1$ for all $1 \leq i \leq r$. In particular, \begin{equation}\label{eq:f-inequality} f_*(m-2) \leq \sum_{i=0}^{m-2} \max \{1,s_i\}. 
\end{equation} At the same time, recall that $\max \{1,s_i\}$ is controlled by the number of edges of $T$ (non-red edges of $T'$) that ``peel away'' at the vertex $v_i$ (compare with \eqref{eq:condition}), so the condition for the termination of the sequence $(T_i,v_i)$ implies that \begin{equation}\label{eq:big-inequality} \sum_{i=0}^{m-2} \max \{1,s_i\} \leq \textstyle{\left\lfloor \frac{k}{2} \right\rfloor}-1. \end{equation} This immediately implies the claim about $f_{>2}(m-1)$. Next, we can show that the inequalities \eqref{eq:f-inequality} and \eqref{eq:big-inequality} cannot both be tight for $d=2$; this will imply that $f_2(m-2)\leq \left\lfloor \frac{k}{2} \right\rfloor-2$, and the inequality $f_2(m-1)\leq \left\lfloor \frac{k}{2} \right\rfloor-1$ will immediately follow from the definition of $f_2$. To see that this improvement is in fact achieved, note that the first condition in equation~\eqref{eq:condition} (which implies an improvement to \eqref{eq:big-inequality}) always occurs somewhere unless $T$ consists of a path on $\left\lfloor \frac{k}{2} \right\rfloor+1$ vertices with a tree on $\left\lceil \frac{k}{2} \right\rceil$ vertices glued to the bottom. But in this exceptional case, we have $s_0=0$, so inequality~\eqref{eq:f-inequality} is not tight. This completes the proof for $d=2$. We still need to check that if $d>2$ and $f_{>2}(m-1)=\left\lfloor \frac{k}{2} \right\rfloor$, then $s_{m-1} \leq \left\lfloor \frac{k+1}{4} \right\rfloor$. Suppose $f_{>2}(m-1)=\left\lfloor \frac{k}{2} \right\rfloor$, so that the inequalities \eqref{eq:f-inequality} and \eqref{eq:big-inequality} are equalities. This means that $f_{>2}(m-2)=\left\lfloor \frac{k}{2} \right\rfloor-1$. Because \eqref{eq:f-inequality} is an equality, the vertex $v_{m-1}$ has exactly $\left\lceil \frac{k}{2} \right\rceil$ vertices (including $v_{m-1}$) below it in $T_{m-1}$. If here $v_{m-1}$ has only a single nonempty subtree, then $s_{m-1}=0$ and we are done. Otherwise, $v_{m-1}$ has at least two nonempty subtrees. The (weakly) smaller of the rightmost and leftmost of these subtrees must have at most $\left\lfloor \frac{1}{2} \left\lceil \frac{k}{2} \right\rceil \right\rfloor=\left\lfloor \frac{k+1}{4} \right\rfloor$ vertices, so we conclude that $s_{m-1} \leq \left\lfloor \frac{k+1}{4} \right\rfloor$, as desired. \end{proof} We remark that in the $d=2$ case, ad hoc arguments show that this construction is in fact optimal for $k \leq 5$; however, small refinements are possible for sufficiently large $k$. Now that we have shown that the $\xi_d(k)$'s are in fact noncontiguous $k$-universal $d$-ary plane trees, we focus on their sizes. Let $M_d(k)$ denote the number of vertices in $\xi_d(k)$. The following proposition, whose proof we omit, follows from counting the various parts of $\xi_d(k)$ as described in the recursive construction above. Let $\delta_{x,y}$ denote the Kronecker delta, which has the value $1$ when $x=y$ and the value $0$ otherwise. Note that the $-1$'s below account for ``overlap'' vertices that are contained in multiple parts. 
\begin{proposition}\label{prop:colorful-size} For fixed $d$, the sequence $M_d(k)$ has the initial conditions \[M_d(1)=1,\quad M_d(2)=d+1,\quad M_d(3)=2d+1,\] and for $k \geq 4$, it obeys the recurrence \begin{align*} M_d(k) &=(d+1)+\left( \textstyle{\left\lfloor \frac{k}{2} \right\rfloor}-\delta_{d,2} \right)(3d-2)+2\sum_{i=1}^{\left\lfloor \frac{k}{2} \right\rfloor-2}(M_d(i)-1) +M_d\left(\textstyle{\left\lfloor \frac{k}{2} \right\rfloor}-1\right)-1\\ &+M_d\left(\textstyle{\left\lceil \frac{k}{2} \right\rceil}-1\right)-1+ M_d\left(\textstyle{\left\lceil \frac{k}{2} \right\rceil}\right)-1+2(1-\delta_{d,2})\left( M_d\left(\textstyle{ \left\lfloor \frac{k+1}{4} \right\rfloor} \right)-1\right).\end{align*} \end{proposition} We can use Proposition \ref{prop:colorful-size} and an argument similar to one of the proofs in \cite{Goldberg} to obtain asymptotics for $M_d(k)$. \begin{corollary}\label{cor:colorful} For fixed $d \geq 2$, we have $$N_{d\ary}^{\non}(k)\leq M_d(k)=k^{\frac{1}{2}\log_2(k)(1+o(1))}.$$ \end{corollary} \begin{proof} Fix $d$. It will be convenient to work with natural logarithms, so note that $k^{\frac{1}{2}\log_2(k)}=\exp\left(\frac{1}{2\log 2}\log^2 k\right)$. We first prove that there is some constant $C$ (depending on $d$) such that $M_d(k)<C\exp\left( \frac{1}{2 \log 2} \log^2 k \right)$ for all $k$. We proceed by induction on $k$, where making $C$ large deals with any base cases. It is obvious (by construction) that $M_d(k)=|\xi_d(k)|$ is monotonically increasing in $k$. We compute (for sufficiently large $k$): \begin{align*} M_d(k) &=M_d(k-2)+(3d-2)+2\left(M_d\left( \textstyle{\left\lfloor \frac{k}{2} \right\rfloor}-2 \right)-1\right)+M_d\left( \textstyle{\left\lfloor \frac{k}{2} \right\rfloor}-1 \right)\\ &\quad -M_d\left( \textstyle{\left\lfloor \frac{k}{2} \right\rfloor}-2 \right)+M_d\left( \textstyle{\left\lceil \frac{k}{2} \right\rceil} \right)-M_d\left( \textstyle{\left\lceil \frac{k}{2} \right\rceil}-2 \right)+2(1-\delta_{d,2})\left(M_d\left( \textstyle{\left\lfloor \frac{k+1}{4} \right\rfloor} \right)-M_d\left( \textstyle{\left\lfloor \frac{k-1}{4} \right\rfloor} \right)\right)\\ &\leq M_d(k-2)+3d-2+M_d\left( \textstyle{\left\lfloor \frac{k}{2} \right\rfloor}-1 \right)+M_d\left( \textstyle{\left\lceil\frac{k}{2} \right\rceil} \right)+2M_d\left( \textstyle{\left\lfloor \frac{k+1}{4} \right\rfloor} \right)\\ &< C\exp\left( \frac{1}{2 \log 2} \log^2 (k-2) \right)+5C\exp\left( \frac{1}{2 \log 2} \log^2 \left( \frac{k+1}{2} \right) \right)\\ &=C\exp\left( \frac{1}{2 \log 2} \log^2 (k-2) \right) \left( 1+5\exp \left( \frac{1}{2\log 2} \log\left( \frac{(k+1)(k-2)}{2} \right)\log \left(\frac{k+1}{2(k-2)} \right) \right) \right)\\ &=C\exp\left( \frac{1}{2 \log 2} \log^2 (k-2) \right) \left(1+\frac{5}{k}+o\left(\frac{1}{k} \right) \right). \end{align*} At the same time, this expression is certainly smaller than \begin{align*} C\exp\left( \frac{1}{2 \log 2} \log^2 k \right) &=C\exp\left( \frac{1}{2 \log 2} \log^2 (k-2) \right)\left(1+\frac{2\log k}{k \log 2}+o\left(\frac{\log k}{k} \right) \right) \end{align*} for sufficiently large $k$, which establishes the first claim.

Second, we show that for any $\gamma <\frac{1}{2 \log 2}$, there exists a constant $c>0$ (depending on $\gamma$) such that $M_d(k)>c\exp(\gamma \log^2 k)$ for all $k$. As above, we proceed by induction on $k$, where making $c$ small deals with the base cases.
This time, we compute: \begin{align*} M_d(k &)>M_d(k-2)+2M_d\left( \frac{k-5}{2} \right)\\ &>c\exp \left( \gamma \log^2 (k-2) \right)\left(1+2\exp \left( \gamma \log\left( \frac{(k-5)(k-2)}{2} \right) \log \left( \frac{k-5}{2(k-2)} \right) \right) \right)\\ &=c\exp \left( \gamma \log^2 (k-2) \right)\left(1+\frac{2}{k^{2\gamma \log 2}}+o\left(\frac{1}{k^{2\gamma \log 2}} \right) \right). \end{align*} Since $2\gamma \log 2<1$, this expression is larger than $$c\exp \left( \gamma \log^2 k \right)=c\exp \left( \gamma \log^2 (k-2) \right)\left(1+\frac{4\gamma \log k}{k}+o\left(\frac{\log k}{k} \right) \right)$$ for sufficiently large $k$, as desired. The two claims together imply the result. \end{proof} \section{$[d]$-trees}\label{sec:0,1,...,d} \subsection{Contiguous containment}\label{subsec:0,1,...,d-contiguous} \subsubsection{Lower bounds} We can obtain a lower bound for $N^{\text{con}}_{[d]}(k)$ by modifying the argument in the proof of the first part of Theorem~\ref{thm:d-ary-contiguous}. In particular, we apply this argument to a slightly different family of trees that are difficult to contain. \begin{theorem}\label{thm:0,...,d-contiguous-lower} For $d\geq 2$ and $k \geq 2$, we have \[N_{[d]}^{\con}(k)\geq \left(k-1-d\textstyle{\left\lfloor \frac{k-2}{d} \right\rfloor}\right)d^{\left\lfloor \frac{k-2}{d} \right\rfloor}+\textstyle{\left\lfloor \frac{k-2}{d} \right\rfloor}+1\geq d^{\frac{k-2}{d}}.\] \end{theorem} \begin{proof} Let $\bf T$ be a contiguous $k$-universal $[d]$-tree. Consider the following procedure for building $[d]$-trees with $k$ vertices. Start with a $[d]$-tree that is a path on $\left\lfloor\frac{k-2}{d}\right\rfloor+1$ vertices, and color the edges of this path red. Add $d-1$ additional children to each nonleaf vertex of this path. When adding these additional children to a nonleaf vertex $v$, we choose freely how many children to place on the left of the red edge coming down from $v$, then we place the remaining children on the right of this red edge. Finally, place $k-1-d\left\lfloor \frac{k-2}{d} \right\rfloor$ vertices as children of the leaf of the original path. This forms a $[d]$-tree with $k$ vertices, and there are $d^{\left\lfloor \frac{k-2}{d} \right\rfloor}$ trees that can be built in this fashion. Each of these trees has $k-1-d\left\lfloor \frac{k-2}{d} \right\rfloor$ vertices at depth $\left\lfloor\frac{k-2}{d}\right\rfloor+1$, an easy modification of the argument in Theorem~\ref{thm:d-ary-contiguous} shows that these vertices must correspond to pairwise distinct vertices in $\bf T$ of depth at least $\left\lfloor\frac{k-2}{d}\right\rfloor+1$. Thus, $\bf T$ has at least $$\left(k-1-d\textstyle{\left\lfloor \frac{k-2}{d} \right\rfloor}\right)d^{\left\lfloor \frac{k-2}{d} \right\rfloor}$$ vertices at depth at least $\left\lfloor\frac{k-2}{d}\right\rfloor+1$. The term $\left\lfloor\frac{k-2}{d}\right\rfloor+1$ in the statement of the theorem accounts for the fact that $\bf T$ must also have at least one vertex at each depth $0,1,\ldots, \left\lfloor\frac{k-2}{d}\right\rfloor$. \end{proof} \subsubsection{Upper bounds} As mentioned in the introduction, the quantity $N^{\text{con}}_{d\ary}(k)=d^{k-1}+k-1$ is an upper bound for $N^{\text{con}}_{[d]}(k)$. We can dramatically improve the base of the exponential by describing an explicit construction for a family $\{\Lambda_d(k)\}$ of contiguous $k$-universal $[d]$-trees. The construction of $\Lambda_d(k)$ is recursive. We first let $\Lambda_d(1)$ be a single vertex. 
For $2 \leq k \leq d$, we construct $\Lambda_d(k)$ by attaching the subtrees $\Lambda_d(1), \Lambda_d(2), \ldots, \Lambda_d\left(\left\lfloor\frac{k}{2}\right\rfloor-1\right)$, then $\Lambda_d(k-1)$, then $\Lambda_d\left(\left\lceil\frac{k}{2}\right\rceil-1\right), \Lambda_d\left(\left\lceil\frac{k}{2}\right\rceil-2\right), , \ldots, \Lambda_d(1)$ to the root, in that order from left to right. For $k>d$, we construct $\Lambda_d(k)$ by attaching the subtrees $\Lambda_d(k-d), \Lambda_d(k-d+1), \ldots, \Lambda_d\left(k-\left\lfloor\frac{d}{2}\right\rfloor-2\right)$, then $\Lambda_d(k-1)$, then $\Lambda_d\left(k-\left\lceil\frac{d}{2}\right\rceil-1\right), \Lambda_d\left(k-\left\lceil\frac{d}{2}\right\rceil-2\right), \ldots, \Lambda_d(k-d)$ to the root, in that order from left to right. The proof that these trees are in fact $k$-supertrees is similar in spirit to the proof of Theorem~\ref{thm:colorful-construction}. \begin{theorem}\label{thm:0,...,d-upper-bound} Let $d\geq 2$ and $k\geq 1$ be integers. For every $k$-vertex $[d]$-tree $T$, there is a contiguous embedding $T^*$ of $T$ in $\Lambda_d(k)$ such that the root of $T^*$ coincides with the root of $\Lambda_d(k)$. \end{theorem} \begin{proof} We proceed by strong induction on $k$, where the base case $k=1$ is trivial. For the induction step, we begin with the case in which $2 \leq k \leq d$. Let $T$ be a $[d]$-tree on $k$ vertices. Let $T_1, \ldots, T_{\ell}$ be the subtrees of the root of $T$, from left to right. Of these $\ell$ trees, let $T_m$ be one with the most vertices. We embed $T_1, \ldots, T_{\ell}$ into the subtrees of the root of $\Lambda_d(k)$ in a greedy way, with a preference for subtrees farther to the left. We consider two cases based on the size of $T_m$. If $|T_m|\geq\left\lceil \frac{k}{2} \right\rceil$, then by the induction hypothesis, we can embed $T_m$ in the subtree of the root of $\Lambda_d(k)$ that is isomorphic to $\Lambda_d(k-1)$ (with the roots coinciding). The remaining subtrees, which contain at most $\left\lfloor \frac{k}{2}\right\rfloor-1$ vertices in total, can easily be embedded in the smaller subtrees of the root of $\Lambda_d(k)$. Otherwise, $|T_m|\leq\left\lceil \frac{k}{2} \right\rceil-1 \leq \left\lfloor \frac{k}{2} \right\rfloor$. In this case, we let $f(1)=|T_1|$ and, for $2\leq i\leq \ell$, let $f(i)=\max\{1+f(i-1),|T_i|\}$. Note that $f$ is strictly increasing. We have $$f(r) \leq \sum_{i=1}^r |T_i|\leq k-1-(\ell-r)$$ for each $r$, where the second inequality uses the fact that $|T_i|\geq 1$ for all $i$. In particular, $f(\ell) \leq k-1$. We now claim that we can embed each $T_r$ in the $f(r)^{\text{th}}$ subtree of the root of $\Lambda_d(k)$ (with the roots coinciding). This is certainly possible when $f(r) \leq \left\lfloor \frac{k}{2} \right\rfloor$ by the induction hypothesis because $f(r)\geq |T_r|$. It is also possible when $f(r)> \left\lfloor \frac{k}{2} \right\rfloor$ because $f(r)\leq k-1-(\ell-r)$. (These two statements also use the fact that $|T_m|\leq \left\lfloor \frac{k}{2} \right\rfloor$.) This completes the argument when $2 \leq k \leq d$. We now assume $k>d$. Let $T$ be a $[d]$-tree on $k$ vertices, and let $T_1, \ldots, T_{\ell}$ be the subtrees of its root, from left to right. Again, we have $|T_1|+\cdots+|T_{\ell}|=k-1$. 
As above, it is easy to dispense with the case in which some $|T_i| \geq k-\left\lceil \frac{d}{2} \right\rceil$, so we restrict our attention to the case where $|T_i| \leq k-\left\lceil \frac{d}{2} \right\rceil-1\leq k-\left\lfloor \frac{d}{2} \right\rfloor-1$ for all $i$. Let $g(1)=\max\{1,|T_1|-(k-d)+1\}$, and for $2\leq i\leq\ell$, let $g(i)=\max\{1+g(i-1),|T_i|-(k-d)+1\}$. Note that $g$ is strictly increasing and $$g(r)\leq\sum_{i=1}^r \max\{1,|T_i|-(k-d)+1\}$$ for every $r$. In particular, if we let $h_s$ denote the number of trees in the set $T_1,\ldots,T_\ell$ with exactly $s$ vertices, then for $r=\ell$ we get \[g(\ell)\leq\sum_{i=1}^\ell\max\{1,|T_i|-(k-d)+1\}=\sum_{s=1}^{k-d}h_s+\sum_{s\geq k-d+1}(s-(k-d)+1)h_s\] \[=\sum_{s\geq 1}sh_s-\sum_{s=1}^{k-d}(s-1)h_s-\sum_{s\geq k-d+1}(k-d-1)h_s=k-1-\sum_{s=1}^{k-d}(s-1)h_s-\sum_{s\geq k-d+1}(k-d-1)h_s.\] If $h_s\geq 1$ for some $s\geq k-d+1$, then this shows that $g(\ell)\leq d$. Otherwise, we have \[g(\ell)\leq k-1-\sum_{s=1}^{k-d}(s-1)h_s=k-1-\sum_{s\geq 1}(s-1)h_s=k-1-\sum_{s\geq 1}sh_s+\sum_{s\geq 1}h_s=k-1-(k-1)+\ell\leq d.\] In either case, $g(\ell)\leq d$. Since $g$ is strictly increasing, $g(r)\leq d-(\ell-r)$ for all $r\in\{1,\ldots,\ell\}$. We now claim that we can embed each $T_r$ in the $g(r)^{\text{th}}$ subtree of the root of $\Lambda_d(k)$ (with the roots coinciding). As above, this is possible whenever $g(r) \leq \left\lceil \frac{d}{2} \right\rceil$ because $g(r) \geq |T_r|-(k-d)+1$. It is also possible when $g(r)>\left\lceil \frac{d}{2} \right\rceil$ because $g(r)\leq d-(\ell-r)$. \end{proof} We can also describe the sizes of these $k$-universal $[d]$-trees. Let $L_d(k)$ denote the number of vertices in $\Lambda_d(k)$. The following enumeration follows directly from the definition of $\Lambda_d(k)$. \begin{proposition}\label{prop:0,...,d-contiguous-upper} For fixed $d \geq 2$, the sequence $L_d(k)$ has the starting value $L_d(1)=1$. For $2 \leq k \leq d$, we have the recurrence $$L_d(k)=1+L_d(k-1)+\sum_{i=1}^{\left\lfloor \frac{k}{2} \right\rfloor-1} L_d(i)+\sum_{i=1}^{\left\lceil \frac{k}{2} \right\rceil-1} L_d(i).$$ For $k>d$, we have the recurrence $$L_d(k)=1+L_d(k-1)+\sum_{i=k-d}^{k-\left\lfloor \frac{d}{2} \right\rfloor-2} L_d(i)+\sum_{i=k-d}^{k-\left\lceil \frac{d}{2} \right\rceil-1} L_d(i).$$ \end{proposition} \begin{corollary}\label{cor:rhoestimate} Let \[p_d(x)=1-x-\sum_{i=\left\lfloor\frac{d}{2}\right\rfloor+2}^dx^i-\sum_{i=\left\lceil\frac{d}{2}\right\rceil+1}^dx^i.\] Let $\rho_d=1/x_d$, where $x_d$ is the smallest positive real root of $p_d(x)$. For each fixed $d$, we have \[N_{[d]}^{\con}(k)\leq L_d(k)=(\rho_d+o(1))^k.\] Furthermore, as $d\to\infty$, we have \[\rho_d=1+\frac{4\log d}{d}-\frac{4\log\log d}{d}+o\left(\frac{\log\log d}{d}\right).\] \end{corollary} \begin{proof} The inequality $N_{[d]}^{\con}(k)\leq L_d(k)$ follows immediately from Theorem \ref{thm:0,...,d-upper-bound}. To see that $L_d(k)=(\rho_d+o(1))^k$, we let $G_d(x)=\sum_{k\geq 1}L_d(k)x^k$. Using the recurrence in Proposition \ref{prop:0,...,d-contiguous-upper}, it is straightforward to check that $G_d(x)$ is a rational function with denominator $p_d(x)$. Since $G_d(x)$ has nonnegative coefficients, it follows from Pringsheim's theorem \cite[Chapter IV]{FlajoletBook} that $x_d$ is the radius of convergence of $G_d(x)$. This means that $L_d(k)=(\rho_d+o(1))^k$. To prove the last statement of the corollary, we consider only the case in which $d$ is odd; the argument is similar when $d$ is even. All asymptotics are as $d\to\infty$. 
First, note that $$p_d(x)=1-x-2(x^{\frac{d+1}{2}+1}+x^{\frac{d+1}{2}+2}+\cdots+x^d)=1-x-2\frac{x^{c+1}-x^{2c}}{1-x},$$ where $c=\frac{d+1}{2}$. We have \[(1-x_d)^2-2x_d^{c+1}+2x_d^{2c}=0.\] The additional substitution $x_d=1-\frac{\varepsilon_d}{c}$ (where clearly $\varepsilon_d>0$ since $L_d(k)$ is growing with $k$) gives $$\left(\frac{\varepsilon_d}{c}\right)^2-2\left(1-\frac{\varepsilon_d}{c}\right)^{c+1}+2\left(1-\frac{\varepsilon_d}{c}\right)^{2c}=0.$$ One can show that $x_d\to 1$, so $\frac{\varepsilon_d}{c}\to 0$. Now, $2\left(1-\frac{\varepsilon_d}{c}\right)^{c+1}=2e^{-\varepsilon_d}+o(e^{-\varepsilon_d})$ and $2\left(1-\frac{\varepsilon_d}{c}\right)^{2c}=o(e^{-\varepsilon_d})$. This means that \[\left(\frac{\varepsilon_d}{c}\right)^2=2e^{-\varepsilon_d}(1+o(1))\] and \[\frac{\varepsilon_d}{c}=\sqrt{2}e^{-\varepsilon_d/2}(1+o(1)).\] Rearranging, we find that \[\frac{\varepsilon_d}{2}e^{\varepsilon_d/2}=\frac{c}{\sqrt 2}(1+o(1)).\] Therefore, \[\varepsilon_d=2W\left(\frac{c}{\sqrt 2}(1+o(1))\right)=2\log c-2\log\log c+o(\log\log c),\] where $W$ is the Lambert $W$ function. The desired result follows. \end{proof} \subsection{Noncontiguous containment}\label{subsec:0,1,...,d-noncontiguous} The following theorem relates noncontiguous $k$-universal $[d]$-trees with noncontiguous $k$-universal $d$-ary plane trees. In particular, it shows that the minimum sizes of these trees differ by at most a constant factor. \begin{theorem}\label{thm:0,...,d-noncontiguous} For all integers $d \geq 2$ and $k \geq 1$, we have $$N_{[d]}^{\non}(k) \leq N_{d\ary}^{\non}(k) \leq d (N_{[d]}^{\non}(k)-1)+1.$$ \end{theorem} \begin{proof} The first inequality is straightforward because if we are given a noncontiguous $k$-universal $d$-ary plane tree ${\bf T}'$, then we can obtain a noncontiguous $k$-universal $[d]$-tree $\bf T$ by ``forgetting'' the exact types of all of the edges in ${\bf T}'$. In other words, we interpret ${\bf T}'$ as a $[d]$-tree. For the second inequality, suppose $\bf T$ is a noncontiguous $k$-universal $[d]$-tree on $n$ vertices. We obtain a noncontiguous $k$-universal $d$-ary plane tree ${\bf T}'$ on $d(n-1)+1$ vertices by doing the following for each edge $e$ in $\bf T$. Among all edges with the same parent vertex as $e$, suppose $e$ is the $i^{\text{th}}$ from the left. We replace the edge $e$ (along with its endpoints) with a $d$-ary plane tree path on $d$ edges whose topmost edge has type $i$ and whose remaining edges have types $1$ through $d$, skipping $i$. Note that $|{\bf T}'|= d(|{\bf T}|-1)+1$ since each of the $|{\bf T}|-1$ edges in $\bf T$ has become $d$ edges in ${\bf T}'$. We claim that ${\bf T}'$ is in fact a noncontiguous $k$-universal $d$-ary plane tree. Let $T'$ be a $d$-ary plane tree on $k$ vertices, and let $T$ be the corresponding $[d]$-tree that is obtained by forgetting the types of the edges in $T'$. By hypothesis, $\bf T$ noncontiguously contains $T$. For each edge $e'$ in $T'$, let $e$ be the corresponding edge in $T$. Let $\bf e$ be the edge in $\bf T$ that corresponds to $e$ in the noncontiguous embedding of $T$ in $\bf T$. Recall that $\bf e$ becomes $d$ edges, one of each type, in ${\bf T}'$; let ${\bf e}'$ be the edge among these with the same type as $e'$. Color every such edge ${\bf e}'$ blue, and color all other edges of ${\bf T}'$ red. It is clear that if we contract away all of the red edges, the blue edges will form a copy of $T'$, so it remains only to show that a sequence of legal contractions exists. 
We begin by contracting every red edge whose bottom vertex is a leaf. We continue this process until every leaf is incident to a blue edge. We contract the remaining red edges, starting with those at the greatest depth (i.e., farthest from the root) and working our way upwards. Thus, when we turn to a given red edge, we may assume that all edges of greater depth are blue. If the top vertex of a red edge has no other nonempty subtree, then we can legally contract that red edge. We are now left with the case where the top vertex $v$ of our red edge $r$ has multiple children. Consider the nonempty subtrees of $v$, from left to right. If $r$ is not adjacent to another red edge, then we can contract $r$ immediately. Otherwise, there are consecutive red edges $r_1, \ldots, r_s$ ($s \geq 2$). We will show that there is some $r_i$ that we can legally contract; we will then be able to sequentially contract the remaining edges by induction. Let each $r_i$ have edge type $a_i$. Let $b_i$ denote the minimum type of a (necessarily blue) edge directly below $r_i$, and let $c_i$ denote the maximum type of a (necessarily blue) edge directly below $r_i$. It follows from our construction that $$a_1<\cdots<a_s \quad \text{and} \quad b_1\leq c_1<b_2 \leq c_2<\cdots<b_s\leq c_s.$$ The condition for being able to legally contract $r_1$ is $c_1<a_2$, and the condition for being able to legally contract $r_s$ is $a_{s-1}<b_s$. For $2 \leq i \leq s-1$, the conditions for being able to contract $r_i$ are $a_{i-1}<b_i$ and $c_i<a_{i+1}$. Assume (for contradiction) that we cannot legally contract any of the edges $r_1,\ldots,r_s$. Since we cannot contract $r_1$, we must have $c_1 \geq a_2$. Since we cannot contract $r_2$, we must have either $a_1 \geq b_2$ or $c_2 \geq a_3$. In the first case, we get $$b_2\leq a_1<a_2\leq c_1<b_2,$$ which is a contradiction, so we conclude that $c_2 \geq a_3$. Similarly, since we cannot contract $r_3$, we have either $a_2 \geq b_3$ or $c_3 \geq a_4$, and the first possibility yields a contradiction in the same way. Continuing this line of reasoning, we arrive at $c_{s-1}\geq a_s$. Then $$a_{s-1}<a_s\leq c_{s-1}<b_s$$ tells us that we can legally contract $r_s$, so we are done. This demonstrates that we can legally contract all of the red edges. \end{proof}

Now that we have established this connection between $d$-ary plane trees and $[d]$-trees, we revisit the construction of the trees $\xi_d(k)$ from Section~\ref{subsec:d-ary-noncontiguous}. Because $[d]$-trees have more ``flexibility'' than $d$-ary plane trees, we can use a slightly better (and simpler!) construction to beat the first inequality in Theorem~\ref{thm:0,...,d-noncontiguous}. We call these new noncontiguous $k$-universal $[d]$-trees $\Xi_d(k)$. First, we use the path on $2$ vertices instead of the $d$-crescent. If $d>2$, we define the \textit{modified $d$-vertebra} to be the $[d]$-tree on $4$ vertices in which the root has $3$ children; when $d=2$, the \emph{modified $2$-vertebra} is the $[2]$-tree with $5$ vertices in which the root has $2$ children and the left child of the root has $2$ children. As in the case of the $d$-vertebra, we can identify the left, middle, and right children of the modified vertebra in the obvious way. We then construct the $m^{\text{th}}$ spine exactly as in Section~\ref{subsec:d-ary-noncontiguous}. Our recursive definition of the families $\Xi_d(k)$ resembles the presentation of Section~\ref{subsec:d-ary-noncontiguous}.
We begin with the following base cases: \begin{itemize} \item Let $\Xi_d(1)$ consist of a single vertex. \item Let $\Xi_d(2)$ be the path on $2$ vertices (scil., the analogue of the crescent). \item Obtain $\Xi_d(3)$ from the path on $2$ vertices by giving the bottom vertex $2$ children. \end{itemize} The construction for larger $k$ is recursive and differs for $d=2$ and $d>2$. If $d=2$, then for $k \geq 4$, we obtain $\Xi_2(k)$ from the $\left(\left\lfloor \frac{k}{2}\right\rfloor-1\right)^{\text{th}}$ $2$-spine as follows: \begin{enumerate} \item For each $1 \leq i \leq \left\lfloor \frac{k}{2}\right\rfloor-2$, glue a copy of $\Xi_2(i)$ to each of the left and right leaves of the $i^{\text{th}}$ modified $2$-vertebra. \item Glue a copy of $\Xi_2(\left\lfloor \frac{k}{2}\right\rfloor-1)$ to the right leaf of the $\left(\left\lfloor \frac{k}{2}\right\rfloor-1\right)^{\text{th}}$ (i.e., lowest) modified $2$-vertebra. \item Glue a copy of $\Xi_2(\left\lceil\frac{k}{2}\right\rceil-1)$ to the left leaf of the $\left(\left\lfloor \frac{k}{2}\right\rfloor-1\right)^{\text{th}}$ modified $2$-vertebra. \item Glue a copy of $\Xi_2(\left\lceil\frac{k}{2}\right\rceil)$ to the center leaf of the $\left(\left\lfloor \frac{k}{2}\right\rfloor-1\right)^{\text{th}}$ modified $2$-vertebra. \end{enumerate} If $d>2$, then for $k \geq 4$, we obtain $\Xi_d(k)$ from the $\left(\left\lfloor \frac{k}{2}\right\rfloor-1\right)^{\text{th}}$ $d$-spine as follows: \begin{enumerate} \item For each $1 \leq i \leq \left\lfloor \frac{k}{2}\right\rfloor-2$, glue a copy of $\Xi_d(i)$ to each of the left and right leaves of the $i^{\text{th}}$ modified $d$-vertebra. \item Glue a copy of $\Xi_d(\left\lfloor \frac{k}{2}\right\rfloor-1)$ to the right leaf of the $\left(\left\lfloor \frac{k}{2}\right\rfloor-1\right)^{\text{th}}$ (i.e., second-lowest) modified $d$-vertebra. \item Glue a copy of $\Xi_d(\left\lceil\frac{k}{2}\right\rceil-1)$ to the left leaf of the $\left(\left\lfloor \frac{k}{2}\right\rfloor-1\right)^{\text{th}}$ modified $d$-vertebra. \item Glue a copy of $\Xi_d(\left\lceil\frac{k}{2}\right\rceil)$ to the center leaf of the $\left\lfloor \frac{k}{2}\right\rfloor^{\text{th}}$ (i.e., lowest) modified $d$-vertebra. \item Glue a copy of $\Xi_d\left( \left\lfloor \frac{k+1}{4} \right\rfloor \right)$ to each of the left and right leaves of the $\left\lfloor \frac{k}{2}\right\rfloor^{\text{th}}$ $d$-vertebra. \end{enumerate} For $k\geq 4$, we still say that the \textit{tail} of $\Xi_d(k)$ is the copy of $\Xi_d(\left\lceil \frac{k}{2} \right\rceil)$ that is glued to the center leaf of the bottom of the spine in step (4). We remark that the trees $\Xi_3(k), \Xi_4(k),\ldots$ are all identical. We omit the proof of the following theorem because it is identical to the proof of Theorem~\ref{thm:colorful-construction}. \begin{theorem}\label{thm:modified-colorful-construction} For any integers $d \geq 2$ and $k \geq 1$, the tree $\Xi_d(k)$ noncontiguously contains all $[d]$-trees with $k$ vertices. \end{theorem} Also as before, simple counting gives a recursive formula for the number of vertices in $\Xi_d(k)$, which we denote $M'_d(k)$. \begin{proposition}\label{prop:modified-colorful-size} For fixed $d$, the sequence $M'_d(k)$ has the initial conditions \[M_d'(1)=1,\quad M_d'(2)=2,\quad M_d'(3)=4. 
\] For $k \geq 4$, it obeys the recurrence \begin{align*} M'_d(k) &=2+(3+\delta_{d,2})\left( \textstyle{\left\lfloor \frac{k}{2} \right\rfloor}-\delta_{d,2} \right)+2\sum_{i=1}^{\left\lfloor \frac{k}{2} \right\rfloor-2} (M'_d(i)-1) +M'_d\left(\textstyle{\left\lfloor \frac{k}{2} \right\rfloor}-1\right)-1\\ &+M'_d\left(\textstyle{\left\lceil \frac{k}{2} \right\rceil}-1\right)-1+M'_d\left(\textstyle{\left\lceil \frac{k}{2} \right\rceil}\right)-1 +2(1-\delta_{d,2})\left( M'_d\left(\textstyle{ \left\lfloor \frac{k+1}{4} \right\rfloor} \right)-1\right).\end{align*} \end{proposition} The proof of Corollary~\ref{cor:colorful} carries through to show that $M'_d(k)=k^{\frac{1}{2}\log_2(k)(1+o(1))}$. \begin{corollary}\label{cor:modified-colorful} For fixed $d \geq 2$, we have $$N_{[d]}^{\non}(k)\leq M'_d(k)=k^{\frac{1}{2}\log_2(k)(1+o(1))}.$$ \end{corollary} \section{Conclusions}\label{sec:conclusions} In Section~\ref{sec:d-ary}, we found the exact values of $N_{d\ary}^{\con}(k)$ for all $d\geq 2$ and $k\geq 1$. Furthermore, the lower and upper bounds that we obtained for $N_{[d]}^{\con}(k)$ are relatively close to each other. By contrast, our lower and upper bounds for $N_{d\ary}^{\non}(k)$ and $N_{[d]}^{\non}(k)$ are far apart. This is largely because it is difficult to obtain good lower bounds for the sizes of noncontiguous universal objects, which is also true in the setting of universal permutations. It would be nice to have better methods for producing lower bounds. Of course, we also encourage the interested reader to try improving our upper bounds. Theorem \ref{thm:0,...,d-noncontiguous} leads us naturally to ask the following. \begin{question} Fix $d\geq 2$. Does the limit \[\lim_{k \to \infty} \frac{N_{d\ary}^{\non}(k)}{N_{[d]}^{\non}(k)}\] exist, and, if so, what is its value? \end{question} Theorem \ref{thm:0,...,d-contiguous-lower} and Corollary \ref{cor:rhoestimate} suggest that $N_{[d]}^{\con}(k)$ has an exponential growth rate. It would be interesting to know its value, beyond the bounds $d^{\frac{1}{d}}$ and $\rho_d$ provided. \begin{question} Fix $d\ge 2$. Does the limit \[\lim_{k\to\infty}N_{[d]}^{\con}(k)^{\frac{1}{k}}\] exist, and, if so, what is its value? \end{question} The articles \cite{CGSCaterpillars, ChungGraham1, ChungGraham2, ChungGraham3, ChungGraham4} investigate universal trees, where the trees under consideration are unrooted and nonplane. In this setting, a tree $\mathcal T$ contains a tree $T$ if $T$ is an induced subgraph of $\mathcal T$. It would likely be interesting to consider analogous questions in a noncontiguous framework. More precisely, say that a tree $\mathcal T$ noncontiguously contains a tree $T$ if it is possible to obtain $T$ by performing a sequence of edge contractions on $\mathcal T$. In this setting, what is the smallest size of a tree that noncontiguously contains all $k$-vertex trees? There has also been recent interest in pattern containment/avoidance in labeled rooted trees \cite{Baril, Dotsenko}. It would be interesting to examine universal trees in these contexts, as well. \section{Acknowledgements}\label{sec:acknowledgements} The authors would like to thank Joe Gallian for hosting them at the University of Minnesota, Duluth, where much of this research was conducted with partial support from NSF/DMS grant 1659047 and NSA grant H98230-18-1-0010. The authors also would like to thank the anonymous referee for helpful comments.
The first author was additionally supported by a Fannie and John Hertz Foundation Fellowship and an NSF Graduate Research Fellowship.
https://arxiv.org/abs/2104.01362
On infinitely many foliations by caustics in strictly convex open billiards
Reflection in a strictly convex bounded planar billiard acts on the space of oriented lines and preserves a standard area form. A caustic is a curve $C$ whose tangent lines are reflected by the billiard to lines tangent to $C$. The famous Birkhoff Conjecture states that the only strictly convex billiards with a foliation by closed caustics near the boundary are ellipses. By Lazutkin's theorem, there always exists a Cantor family of closed caustics approaching the boundary. In the present paper we deal with an open billiard whose boundary is a strictly convex embedded (non-closed) curve $\gamma$. We prove that there exists a domain $U$ adjacent to $\gamma$ from the convex side and a $C^\infty$-smooth foliation of $U\cup\gamma$ whose leaves are $\gamma$ and (non-closed) caustics of the billiard. This generalizes a previous result by R. Melrose, which yields the existence of a germ of such a foliation at a boundary point. We show that there exists a continuum of such foliations by caustics whose germs at each point in $\gamma$ are pairwise different. We prove a more general version of this statement in the cases when $\gamma$ is just an arc and when both $\gamma$ and the caustics are immersed curves. It also applies to a billiard bounded by a closed strictly convex curve $\gamma$ and yields infinitely many "immersed" foliations by immersed caustics. For the proof of the above results, we state and prove their analogue for a special class of area-preserving maps generalizing billiard reflections: the so-called $C^{\infty}$-lifted strongly billiard-like maps. We also prove a series of results on conjugacy of billiard maps near the boundary for open curves of the above type.
\section{Introduction and main results} The billiard reflection from a strictly convex smooth planar curve $\gamma\subset\mathbb R^2$ (parametrized by either a circle, or an interval) is a map $T$ acting on a subset in the space of oriented lines: on the so-called {\it phase cylinder} consisting of those lines that are either tangent to $\gamma$, or intersect $\gamma$ transversally at two points. Namely, if a line is tangent to $\gamma$, then it is a fixed point of the reflection map. If a line $L$ intersects $\gamma$ transversally at two points, take its last intersection point $B$ with $\gamma$ (in the sense of orientation of the line $L$) and reflect $L$ from $T_B\gamma$ according to the usual reflection law: the angle of incidence is equal to the angle of reflection. By definition, the image $T(L)$ is the reflected line oriented at $B$ inside the convex domain adjacent to $\gamma$. The reflection map $T$ thus defined is called the {\it billiard ball map.} See Fig. 1. The space of oriented lines in Euclidean plane $\mathbb R^2_{x,y}$ is homeomorphic to cylinder, and it carries the standard symplectic form \begin{equation}\omega=d\phi\wedge dp,\label{defom}\end{equation} where $\phi=\phi(L)$ is the azimuth of the line $L$ (its angle with the $x$-axis) and $p=p(L)$ is its signed distance to the origin $O$ defined as follows. For each oriented line $L$ that does not pass through $O$ consider the circle centered at $O$ and tangent to $L$. We say that $L$ is {\it clockwise (counterclockwise)}, if it orients the latter circle clockwise (counterclockwise). By definition, - $p(L)=0$, if and only if $L$ passes through the origin $O$; - $p=\operatorname{dist}(L,O)$, if $L$ is clockwise; otherwise $p=-\operatorname{dist}(L,O)$. It is well-known that - the symplectic form $\omega$ is invariant under affine orientation-preserving isometries; - {\it the billiard reflections from all planar curves preserve the symplectic form $\omega$.} \begin{definition} A curve $C$ is a {\it caustic} for the billiard on the curve $\gamma$, if each line tangent to $C$ is reflected from $\gamma$ to a line tangent to $C$. Or equivalently, if the curve of (appropriately oriented) tangent lines to $C$ is an invariant curve for the billiard ball map. \end{definition} \begin{figure}[ht] \begin{center} \epsfig{file=fig-bill-caust.eps, width=15em} \caption{The billiard ball map and a caustic.} \label{fig:01} \end{center} \end{figure} The famous Birkhoff Conjecture deals with a planar billiard bounded by a strictly convex closed curve $\gamma$. Recall that such a billiard is called {\it integrable,} if there exists a domain adjacent to $\gamma$ from the convex side foliated by closed caustics, and $\gamma$ is a leaf of this foliation. See Figure 2. It is well-known that the billiard in an ellipse is integrable, since it has a family of closed caustics: confocal ellipses. The {\bf Birkhoff Conjecture} states the converse: {\it the only integrable planar billiards are ellipses.} \begin{figure}[ht] \begin{center} \epsfig{file=fig-bill-int.eps, width=10cm} \caption{A Birkhoff integrable billiard.} \label{fig:01} \end{center} \end{figure} \begin{remark} The condition of the Birkhoff Conjecture stating that the caustics in question form a {\it foliation} is important: the famous result by Vladimir Lazutkin (1973) states that {\it each strictly convex bounded planar billiard with boundary smooth enough has a Cantor family of closed caustics.} But Lazutkin's caustic family does not extend to a foliation in general. 
\end{remark} The main result of the paper presented in Subsection 1.1 shows that the other condition of the Birkhoff Conjecture stating that the caustics in question are {\it closed} is also important: the Birkhoff Conjecture is false without closeness condition. Namely we show that any open strictly convex $C^\infty$-smooth planar curve $\gamma$ has an adjacent domain $U$ (from the convex side) admitting a foliation by caustics of $\gamma$ that extends to a $C^\infty$-smooth foliation of the domain with boundary $U\sqcup\gamma$ with $\gamma$ being a leaf. Moreover, we show that $U$ can be chosen so that there exist infinitely many (continuum of) such foliations, and any two distinct foliations have pairwise distinct germs at every point in $\gamma$. We prove analogous statement for a non-injectively immersed curve $\gamma$ and "immersed foliations" by immersed caustics. We state and prove an analogue of this statement in the special case, when $\gamma$ is a closed curve. \begin{remark} Consider the map $T$ of billiard reflection from a strictly convex planar oriented curve $\gamma$. Let $\wh\gamma$ denote the family of its orienting tangent lines. Then the points of the curve $\wh\gamma$ are fixed by $T$. The map $T$ is a well-defined area-preserving map on an open subset adjacent to $\wh\gamma$ in the space of oriented lines. The latter subset consists of those lines that intersect $\gamma$ transversally and are directed to the concave side from $\gamma$ at some intersection point. Each caustic close to $\gamma$ corresponds to a $T$-invariant curve (the family of its tangent lines chosen with appropriate orientation) and vice versa. Thus, a foliation by caustics induces a foliation by $T$-invariant curves. \end{remark} We show that {\it the billiard map has infinitely many foliations by invariant curves in appropriate domain adjacent to $\wh\gamma$.} This together with the above remark implies existence of infinitely many foliations by caustics. In Subsection 1.3 we state the generalization of the above result on foliations by invariant curves to a special class of area-preserving maps: the so-called $C^{\infty}$-lifted strongly billiard-like maps, for which we prove existence of infinitely many pairwise distinct foliations by invariant curves. The results of the paper are proved in Section 2. The plan of proofs is presented in Subsection 1.4. The corresponding background material on symplectic properties of billiard ball map is recalled in Subsection 1.2. A brief historical survey is presented in Subsection 1.5. \subsection{Main result: an open convex arc has infinitely many foliations by caustics} \begin{theorem} \label{thm1} Let $\gamma\subset\mathbb R^2$ be a strictly convex injectively embedded $C^{\infty}$-smooth curve parametrized by an interval. There exists a simply connected domain $U$ adjacent to $\gamma$ from the convex side that admits a foliation by caustics of the billiard played in $\gamma$ that extends to a $C^\infty$-smooth foliation on $U\sqcup\gamma$, with $\gamma$ being a leaf. Moreover, $U$ can be chosen to admit infinitely many (continuum of) foliations as above. At each given point of the curve $\gamma$ the germs of these foliations are pairwise distinct. 
\end{theorem} \begin{remark} \label{remmelr} It follows from R.Melrose's result \cite[p.184, proposition (7.14)]{melrose1} that each point of the curve $\gamma$ has an arc neighborhood $\alpha\subset\gamma$ for which there exists a domain $U$ adjacent to $\alpha$ from the convex side such that $U\sqcup\gamma$ is $C^{\infty}$-smoothly foliated by caustics. The new result given by Theorem \ref{thm1} is the statement that the latter holds for the whole curve $\gamma$ and there exist infinitely many distinct foliations by caustics. \end{remark} \begin{theorem} \label{thm2} Let $\gamma\subset\mathbb R^2$ be a strictly convex $C^{\infty}$-smooth curve that is the image of an interval $(0,1)$ with coordinate $x$ under an {\bf immersion} $\psi:(0,1)\to\gamma$. Let $V\subset(0,1)\times\mathbb R_+\subset\mathbb R^2$ be a domain adjacent to the interval $J:=(0,1)\times\{0\}$, and let $\Psi: V\sqcup J\to\mathbb R^2$ be a fixed $C^{\infty}$-smooth immersion extending $\psi$ as a map $J\to\gamma$, sending $V$ to the convex side from $\gamma$. There exist a domain $U\subset V$ adjacent to $J$ and a $C^\infty$-smooth foliation by curves on $U\sqcup J$, with $J$ being a leaf, whose leaves in $U$ are projected by $\Psi$ to caustics of the curve $\gamma$. The above $U$ can be chosen so that it admits a continuum of the above foliations with pairwise distinct germs at each point in $J$. \end{theorem} \begin{theorem} \label{thm2closed} Let $\gamma$ be a strictly convex closed curve bijectively parametrized by circle. Fix a topological annulus $\mathcal A$ adjacent to $\gamma$ from the convex side. Let $\pi:\wt\mathcal A=\mathbb R\times[0,\varepsilon)\to\mathcal A$ be its universal covering, set $J:=\mathbb R\times\{0\}$, such that $\pi:J\to\gamma$ is the universal covering over $\gamma$. There exists a domain $U\subset\wt\mathcal A\setminus J$ adjacent to $J$ that admits a foliation by curves projected to caustics of the billiard in $\gamma$ that extends to a $C^{\infty}$-smooth foliation on $U\sqcup J$ with $J$ being a leaf. Moreover, one can choose $U$ so that there exist a continuum of foliations satisfying the above statements and having pairwise distinct germs at each point in $J$. \end{theorem} A generalization of Theorems \ref{thm1}, \ref{thm2} for the so-called $C^{\infty}$-lifted strongly billiard-like maps will be stated in Subsection 1.3. \subsection{Background material: symplectic properties of billiard ball map} Let $\gamma$ be a $C^{\infty}$-smooth strictly convex oriented curve in $\mathbb R^2$ parametrized injectively either by an interval, or by circle. Let $s$ be its natural length parameter respecting its orientation. We identify a point in $\gamma$ with the corresponding value of the natural parameter $s$. Let $\Gamma:=T_{=1}\mathbb R^2|_\gamma\subset T\mathbb R^2_\gamma$ denote the restriction to $\gamma$ of the unit tangent bundle of the ambient plane $\mathbb R^2$: $$\Gamma=\{(q,u) \ | \ q\in\gamma, \ u\in T_q\mathbb R^2, \ ||u||=1\}.$$ It is a two-dimensional surface parametrized diffeomorphically by $(s,\phi)\in\gamma\times S^1$; here $\phi=\phi(u)$ is the angle of a given unit tangent vector $u\in T_s\mathbb R^2$ with the orienting unit tangent vector $\dot\gamma(s)$ to $\gamma$. The curve $$\wt\gamma:=\{\phi=0\}=\{ (s,\dot\gamma(s)) \ | \ s\in\gamma\}$$ is the graph of the above vector field $\dot\gamma$. For every $(q,u)\in\Gamma$ set $$L(q,u):=\text{ the oriented line through } q \text{ directed by the vector } u.$$ We treat the two following cases separately. 
{\bf Case 1):} the curve $\gamma$ either is parametrized by an interval and goes to infinity in both directions, or is parametrized by circle. That is, it bounds a strictly convex infinite (respectively, bounded) planar domain. Let $\Gamma^0\subset\Gamma$ denote the neighborhood of the curve $\wt\gamma$ that consists of those $(q,u)\in\Gamma$ that satisfy the following conditions: a) the line $L(q,u)$ either intersects $\gamma$ at two points $q$ and $q'$, or is the orienting tangent line to $\gamma$ at $q$: $u=\dot\gamma(s)$; in the latter case we set $q':=q$; b) the angle between the oriented line $L(q,u)$ and any of the orienting tangent vectors to $\gamma$ at $q$ or $q'$ is acute. Let $u'$ denote the directing unit vector of the line $L(q,u)$ at $q'$. Consider the two following involutions acting on $\Gamma^0$ and $\Gamma$ respectively: $$\beta:\Gamma^0\to\Gamma^0, \ \beta(q,u)=(q',u'); \ \ \beta^2=Id;$$ $$I:\Gamma\to\Gamma \text{ is the reflection from } T_q\gamma: \ I(q,u)=(q,u^*),$$ where $u^*$ is the vector symmetric to $u$ with respect to the tangent line $T_q\gamma$. Let $\Gamma^0_+\subset\Gamma^0$ denote the open subset of those pairs $(q,u)$ in which the vector $u$ is directed to the convex side from the curve $\gamma$. \begin{remark} The domain $\Gamma^0$ is $\beta$-invariant. It is a topological disk (cylinder), if $\gamma$ is parametrized by an interval (circle). The domain $\Gamma^0_+$ is a topological disk (cylinder) adjacent to $\wt\gamma$. \end{remark} Let $\Pi_\gamma$ denote the open subset of the space of oriented lines in $\mathbb R^2$ consisting of the lines $L(q,u)$ with $(q,u)\in\Gamma^0_+$. The mapping $\Lambda:(q,u)\mapsto L(q,u)$ is a diffeomorphism $$\Lambda:\Gamma^0_+\to\Pi_\gamma$$ It extends to the set $\Gamma^0_+\cup\wt\gamma$ as a homeomorphism sending each point $(s,\dot\gamma(s))\in\wt\gamma$ to the tangent line $T_s\gamma$ directed by $\dot\gamma(s)$. \begin{remark} Let $\mathcal T$ denote the billiard ball map given by reflection from the curve $\gamma$ acting on oriented lines. It is well-known that the billiard ball map $\mathcal T$ is conjugated by $\Lambda$ to the product of two involutions $$\wt\delta_+:=I\circ\beta=\Lambda^{-1}\circ\mathcal T\circ \Lambda:\Gamma^0_+\to\Gamma.$$ If the curve $\gamma$ is $C^{\infty}$-smooth, then both involutions $I$ and $\beta$ are $C^{\infty}$-smooth on $\Gamma$ and $\Gamma^0$ respectively. Their product is well-defined and smooth on a neighborhood of the curve $\wt\gamma$ and fixes the points of the curve $\wt\gamma$. Both involutions preserve the canonical symplectic form $\sin\phi ds\wedge d\phi$ on $\Gamma\setminus\wt\gamma$, which is known to be the $\Lambda$-pullback of the standard symplectic form on the space of oriented lines. See \cite{ar2, ar3, mm, melrose1, melrose2, tab95}; see also \cite[subsection 7.1]{gpor}. \end{remark} Let us recall another representation of the billiard ball map $\mathcal T$ in a chart where it preserves the standard symplectic form. To do this, consider the orthogonal projection $\pi_\perp:(T\mathbb R^2)|_\gamma\to T\gamma$ sending each vector $u\in T_q\mathbb R^2$ with $q\in\gamma$ to its orthogonal projection to the tangent line $T_q\gamma$. It projects the unit tangent bundle $\Gamma$ to the unit ball bundle $$T_{\leq1}\gamma:=\{(q,w) \ | q\in\gamma, \ w\in T_q\gamma, \ ||w||\leq1\}.$$ A tangent vector $w=w\frac{\partial}{\partial s}\in T_q\gamma$ will be identified with its coordinate $w=\pm||w||$ in the basic vector $\frac{\partial}{\partial s}$. 
Thus, $\pi_\perp(s,\phi)=(s,\cos\phi)$. Consider the following function and differential form on $T\gamma$: \begin{equation}y:=1-w; \ \omega:=ds\wedge dy.\label{defy}\end{equation} The form $\omega$ coincides with the standard symplectic form on the tangent bundle $T\gamma$ of the curve $\gamma$ (considered as a Riemannian manifold equipped with the metric $|ds|^2$ coming from the standard Euclidean metric on $\mathbb R^2$). The curve $\wt\gamma=\{(s,\dot\gamma(s)) \ | \ s\in\gamma\}= \{ w=1\}=\{ y=0\}\subset T\gamma$ is a component of the boundary $\partial T_{\leq1}\gamma$. The projection $\pi_\perp$ sends $\Gamma^0_+$ diffeomorphically to a domain in $T_{\leq1}\gamma$ adjacent to $\wt\gamma$. It extends homeomorphically to $\Gamma^0_+\cup\wt\gamma$ as the identity map $Id:\wt\gamma\to\wt\gamma$. Let $\mu_+: \pi_\perp(\Gamma^0_+\cup\wt\gamma)\to \Gamma^0_+\cup\wt\gamma$ be the inverse to the restriction of the projection $\pi_\perp$ to $\Gamma^0_+\cup\wt\gamma$. Set \begin{equation}\delta_+:=\pi_\perp\circ\wt\delta_+\circ\mu_+=\pi_\perp\circ \Lambda^{-1}\circ\mathcal T\circ \Lambda\circ\mu_+.\label{defb}\end{equation} \begin{theorem} \label{tsym} (\cite[subsection 1.5]{tab95}, \cite{melrose1, melrose2, ar2, ar3}; see also \cite[theorem 7.3]{gpor}). The mapping $\delta_+:\pi_\perp(\Gamma^0_+)\to T_{\leq1}\gamma$ given by (\ref{defb}), is symplectic: it preserves the form $\omega=ds\wedge dy$. \end{theorem} \begin{proposition} \label{psmi} \cite[proposition 7.5]{gpor}. Let $\kappa(s)$ denote the (geodesic) curvature of the curve $\gamma$. The involutions $I$, $\beta$ and the mappings $\wt\delta_+$, $\delta_+$ admit the following (asymptotic) formulas: \begin{equation}I(s,\phi)=(s,-\phi), \ \beta(s,\phi)=(s+2\kappa^{-1}(s)\phi+O(\phi^2), -\phi+O(\phi^2)),\label{beti}\end{equation} \begin{equation} \wt\delta_+(s,\phi)=(s+2\kappa^{-1}(s)\phi+O(\phi^2), \phi+O(\phi^2)),\label{bilike}\end{equation} \begin{equation}\delta_+(s,y)=(s+2\sqrt 2\kappa^{-1}(s)\sqrt y+O(y), y+O(y^{\frac32})).\label{deltsy}\end{equation} The asymptotics are uniform on compact subsets of points $s\in\gamma$, as $\phi\to0$ (respectively, as $y\to0$). \end{proposition} {\bf Case 2).} Let $\gamma$ be parametrized by an interval, but now it does not necessarily go to infinity or bound a region in the plane. Moreover, we allow $\gamma$ to be an immersed curve that may self-intersect. In this case some lines $L(q,u)$ may intersect $\gamma$ in more than two points. Now the definition of the subset $\Gamma^0\subset\Gamma$ should be modified to be the subset of those $(q,u)\in\Gamma$ for which there exists a $q'\in\gamma\cap L(q,u)$ satisfying the conditions a) and b) from Case 1) and such that the arc $(q,q')\subset\gamma$ is disjoint from the line $L(q,u)$, injectively immersed (i.e., without self-intersections) and the orienting tangent vector of the latter arc at each its point has acute angle with $L(q,u)$. (Here $q$ and $q'$ may be not the only points of intersection $\gamma\cap L(q,u)$.) \begin{remark} \label{rkimm} For any given $(q,u)\in\Gamma^0$ the point $q'$ satisfying the conditions from the above paragraph exists, whenever $u$ is close enough to $\dot\gamma(q)$ (dependently on $q$). Whenever it exists, it is unique. All the statements and discussion in the previous Case 1) remain valid in our Case 2). 
Now the mapping $\Lambda$ is a local diffeomorphism but not necessarily a global diffeomorphism: an oriented line intersecting $\gamma$ at more than two points (if any) may correspond to at least two different tuples $(q,u)\in\Gamma^0_+$. \end{remark} \subsection{Generalization to $C^{\infty}$-lifted strongly billiard-like maps} In this subsection and in what follows we study the next class of area-preserving mappings introduced in \cite{gpor} generalizing the billiard maps (\ref{deltsy}). \begin{definition} \label{tdw} (see \cite[definition 7.6]{gpor}). Let $(a,b)$ be a (may be (semi) infinite) interval in $\mathbb R$ with coordinate $s$. Let $V\subset(a,b)\times\mathbb R_{+}$ be a domain adjacent to the interval $J:=(a,b)\times\{0\}$. A mapping $F:V\cup J\to\mathbb R\times\mathbb R_{\geq0}\subset\mathbb R^2_{s,y}$ is called {\it billiard-like,} if it satisfies the following conditions: (i) $F: V\cup J\to F(V\cup J)$ is a homeomorphim fixing the points in $J$; (ii) $F|_V$ is a diffeomorphism preserving the standard area form $ds\wedge dy$; (iii) $F$ has the asymptotics of the type \begin{equation}F(s,y)=(s+w(s)\sqrt y+O(y),y+O(y^{\frac32})), \text{ as } y\to0; \ w(s)>0,\label{fbm}\end{equation} uniformly on compact subsets in the $s$-interval $(a,b)$; (iv) the variable change $$(s,y)\mapsto (s,z), \ y=z^2$$ conjugates $F$ to a map $\wt F(s,z)$ that is smooth at $(a,b)\times\{0\}$. If, in addition to conditions (i)--(iv), the latter mapping $\wt F$ is a product of two symplectic involutions $I$ and $\beta$ fixing the points of the line $z=0$, $$\wt F=I\circ\beta, \ I(s,z)=(s,-z),$$ \begin{equation}\beta(s,z)=(s+w(s)z+O(z^2), -z+O(z^2)), \ \beta^2=Id, \label{prinv}\end{equation} then $F$ will be called {\it a (strongly) billiard-like map.} If $F$ is strongly billiard-like, and the corresponding involution $\beta$ (or equivalently, the conjugate map $\wt F$) is $C^{\infty}$-smooth, and $C^{\infty}$-smooth at the points of the boundary interval $J$, then $F$ is called {\it $C^{\infty}$-lifted.} The above definitions make sense for $F$ being a germ of map at the interval $J$. \end{definition} \begin{example} \label{exdel} The mapping $\delta_+$ from (\ref{deltsy}) (and its germ at the curve $\{ y=0\}$) is strongly billiard-like in the coordinates $(s,y)$ with $w(s)=2\sqrt 2\kappa^{-1}(s)$, see (\ref{beti}), (\ref{bilike}) and (\ref{deltsy}). If the curve $\gamma$ is $C^{\infty}$-smooth, then $\beta$ and hence, $\wt\delta_+=I\circ\beta$ are $C^{\infty}$-smooth, and hence, $\delta_+$ is $C^{\infty}$-lifted. \end{example} \begin{proposition} \label{classinv} The class of (germs at $J$ of) $C^{\infty}$-lifted strongly billiard-like maps is invariant under conjugacy by (germs at $J$ of) $C^{\infty}$-smooth symplectomorphisms $G:V\cup J\to G(V\cup J)\subset\mathbb R\times\mathbb R_{\geq0}$ sending $J$ onto an interval in $\mathbb R\times\{0\}$. Here $V\subset\mathbb R\times\mathbb R_+$ is a domain adjacent to $J$. \end{proposition} \begin{proof} Let $F$ be a $C^{\infty}$-lifted strongly billiard-like map, $\wt F=I\circ\beta$ be its lifting. Let $V\subset\mathbb R\times\mathbb R_+$ be a domain adjacent to $J$, and $G:V\cup J\to G(V\cup J)\subset\mathbb R\times\mathbb R_{\geq0}$, be a $C^{\infty}$-smooth symplectomorphism as above. Let us denote $G(s,y)=(\wh s(s,y), \wh y(s,y))$. One has $\wh y(s,0)\equiv0$, $\frac{\partial\wh s}{\partial s}(s,0)>0$, $\frac{\partial\wh y}{\partial y}(s,0)>0$, by definition and orientation-preserving property (symplectomorphicity). 
Thus, $\wh y(s,y)=yg(s,y)$, where $g(s,y)$ is a positive $C^{\infty}$-smooth function on a neighborhood of the interval $J$ in $(a,b)\times\mathbb R_{\geq0}$. The lifting $\wt G$ of the map $G$ to the variables $(s,z)$, $y=z^2$, acts as follows: \begin{equation}\wt G:(s,z)\mapsto(\wh s(s,z^2), \wh z(s,z)); \ \ \wh z=\sqrt{\wh y(s,z^2)}=z\sqrt{g(s,z^2)}.\label{wtg}\end{equation} The latter square root is well-defined and $C^{\infty}$-smooth. This implies that the map $\wt G$ is a $C^{\infty}$-smooth diffeomorphism of domains with arcs of boundaries corresponding to $V\cup J$ and $G(V\cup J)$. Hence, the lifting $\wt G\circ\wt F\circ\wt G^{-1}$ of the conjugate $F_G:=G\circ F\circ G^{-1}$ is a $C^{\infty}$-smooth diffeomorphism that is the product of $\wt G$-conjugates of the involutions $I$ and $\beta$. One has $\wt G\circ I\circ \wt G^{-1}=I$, by (\ref{wtg}); $F_G$ is a symplectomorphism, since so are $F$ and $G$; \begin{equation}G(s,y)=(\wh s,\wh y)=(\wh s(s,0)+O(y), g(s,0)y+O(y^2)),\label{gsyas}\end{equation} by diffeomorphicity. Substituting (\ref{gsyas}) and (\ref{fbm}) into the expression $F_G=G\circ F\circ G^{-1}$ and denoting $(s,0):=G^{-1}(\wh s,0)$, we get $$F_G(\wh s,\wh y)=(\wh s+\frac{\partial\wh s}{\partial s}(s,0)w(s)g^{-\frac12}(s,0) (\wh y)^{\frac12}+O(\wh y), \wh y+O(\wh y^{\frac32})).$$ This implies that the conjugate $F_G$ has type (\ref{fbm}) and hence is strongly billiard-like. This proves the proposition. \end{proof} \begin{theorem} \label{thm3} For every $C^{\infty}$-lifted strongly billiard-like map $F$ there exists a domain $U\subset\{ y>0\}$ adjacent to $J$ such that $U\cup J$ admits a $C^{\infty}$-smooth $F$-invariant function $\wt h$, $\wt h|_J\equiv0$, $\frac{\partial\wt h}{\partial y}>0$. (Thus, the foliation $\wt h=const$ is a $C^{\infty}$-smooth foliation by $F$-invariant curves, and $J$ is its leaf.) Moreover, there is a continuum of $C^{\infty}$-smooth $F$-invariant foliations $\wt h=const$ as above on the same union $U\cup J$ whose germs at each point in $J$ are pairwise distinct. \end{theorem} {\bf Addendum to Theorem \ref{thm3}: a smooth symplectic normal form for $C^{\infty}$-lifted strongly billiard-like maps.} {\it In Theorem \ref{thm3} one can choose a function $\wt h$ so that there exists a new additional coordinate $\tau=\tau(s,y)$ bijectively parametrizing the interval $J$ such that $(\tau,\wt h)$ are symplectic coordinates on a domain $U\subset\{ y>0\}$ adjacent to $J$ and in these coordinates} \begin{equation}F(\tau,\wt h)=(\tau+\sqrt{\wt h},\wt h).\label{snform}\end{equation} \begin{definition} Let $V\subset (a,b)\times\mathbb R_{+}\subset\mathbb R^2_{s,y}$ be a domain adjacent to an interval $J=(a,b)\times\{0\}$. A $C^{\infty}$-smooth function $f(s,y)$ on $V\cup J$ is {\it $y$-flat,} if $f(s,0)\equiv0$, $f(s,y)$ tends to zero with all its partial derivatives, as $y\to0$, and the latter convergence is uniform on compact subsets in the $s$-interval $(a,b)$ for the function $f$ and for each of its individual derivatives. \end{definition} \begin{remark} \label{remfl} In the setting of the above definition, let $(x,h)$ be new $C^{\infty}$-smooth coordinates on $V\cup J$ with $h(s,0)\equiv0$. Then each $y$-flat function is $h$-flat and vice versa. This follows from the definition. \end{remark} The proof of Theorem \ref{thm3} uses the Marvizi--Melrose result \cite[theorem (3.2)]{mm} stating a formal analogue of Theorem \ref{thm3}: existence of an $F$-invariant formal power series $\sum_kh_k(s)y^k$, see Theorem \ref{thmm} below.
This result implies that in appropriate coordinates $(\tau,h)$ the map $F$ takes the form $F(\tau,h)=(\tau+\sqrt h+\operatorname{flat}(h), h+\operatorname{flat}(h))$. Here $\operatorname{flat}(h)$ is an $h$-flat function, see the above definition. In the coordinates $(\tau,\phi)$, $\phi=\sqrt h$, the lifted map $\wt F$ takes the form \begin{equation}\wt F(\tau,\phi)=(\tau+\phi+\operatorname{flat}(\phi), \phi+\operatorname{flat}(\phi)).\label{tauphi} \end{equation} We prove existence of a $C^{\infty}$-smooth $\wt F$-invariant function $\wt \phi$ with $\wt\phi-\phi=\operatorname{flat}(\phi)$ (the next theorem), and then deduce Theorems \ref{thm3}, \ref{thm1}, \ref{thm2}. \begin{theorem} \label{thm33} Let $V\subset\mathbb R_{\tau}\times(\mathbb R_+)_{\phi}$ be a domain adjacent to an interval $J=(a,b)\times\{0\}$. Let $\wt F:V\cup J\to \mathbb R_{\tau}\times(\mathbb R_{\geq0})_{\phi}$ be a $C^{\infty}$-smooth mapping of type (\ref{tauphi}). (Here we do not assume any area-preserving property.) There exists a domain $W\subset V$ adjacent to $J$ and an $\wt F$-invariant $C^{\infty}$-smooth function on $W\cup J$ of the type $\wt\phi(\tau,\phi)=\phi+\operatorname{flat}(\phi)$. (Thus, the foliation $\wt\phi=const$ is $\wt F$-invariant, $C^{\infty}$-smooth, and $J$ is its leaf.) \end{theorem} {\bf Addendum to Theorem \ref{thm33}.} {\it There exists a continuum of functions $\wt\phi$ satisfying the statements of Theorem \ref{thm33} such that the corresponding foliations $\wt\phi=const$ are $C^{\infty}$-smooth on the same subset $W\cup J$ and have pairwise distinct germs at each point in $J$.} This addendum and non-uniqueness in Theorem \ref{thm3} will be proved using the following proposition. \begin{proposition} \label{distgerm} Let $J=(a,b)\times\{0\}$, $W\subset\mathbb R_\tau\times(\mathbb R_+)_\phi$ be a domain adjacent to $J$. Let $\wt F:W\cup J\to\mathbb R\times\mathbb R_{\geq0}$ be a map as in (\ref{tauphi}). Any two $\wt F$-invariant foliations (functions, line fields) on $W$ having distinct germs at $J$ have distinct germs at each point in $J$. The same statement holds for similar objects invariant under a $C^{\infty}$-lifted strongly billiard-like map. \end{proposition} \subsection{Plan of the paper} In Subsection 2.1 we recall the above-mentioned Marvizi--Melrose result \cite[theorem 3.2]{mm} (with proof) yielding $C^{\infty}$-smooth coordinates in which $F(\tau,h)=(\tau+\sqrt h+\operatorname{flat}(h), h+\operatorname{flat}(h))$. It implies that the lifted map $\wt F$, written in the coordinates $(\tau,\phi)$, $\phi=\sqrt h$, takes form (\ref{tauphi}). Theorem \ref{thm33} will be proved in Subsections 2.2--2.4. To do this, first in Subsection 2.2 we construct a fundamental domain for the map $\wt F$ (a curvilinear sector $\Delta$ with vertex at a point in $J$) and an $\wt F$-invariant function $\wt\phi$, defined on a bigger sector, that is $\phi$-flatly close to $\phi$ on that bigger sector. Then in Subsection 2.3 we construct its $\wt F$-invariant extension along the $\wt F$-orbits and show that it is well-defined on a domain adjacent to $J$. In Subsection 2.4 we prove that the function $\wt\phi$ thus extended is $C^{\infty}$-smooth and $\phi$-flatly close to $\phi$. This will prove Theorem \ref{thm33}. The existence statement of Theorem \ref{thm3} will be deduced from Theorem \ref{thm33} in Subsection 2.5. Proposition \ref{distgerm} and non-uniqueness in Theorem \ref{thm33} (Addendum) and in Theorem \ref{thm3} will be proved in Subsection 2.6. 
Theorems \ref{thm2} and \ref{thm1} will be proved in Subsection 2.7. \subsection{Historical remarks} The Birkhoff Conjecture was first stated in print by H. Poritsky \cite{poritsky}, who proved it under additional condition that for any two nested closed caustics the smaller one is a caustic of the billiard played in the bigger one; the same result was later obtained in \cite{amiran}. One of the most famous results on the Birkhoff Conjecture is due to M. Bialy \cite{bialy}, who proved that if the phase cylinder of the billiard is foliated by non-contractible invariant closed curves, then the billiard boundary is a circle; see also another proof in \cite{wojt}. Recently V. Kaloshin and A. Sorrentino proved that any integrable deformation of an ellipse is an ellipse \cite{kalsor}. Very recently M. Bialy and A. E. Mironov proved the Birkhoff Conjecture for centrally-symmetric billiards having a family of closed caustics that extends up to a caustic tangent to four-periodic orbits \cite{bm6}. For a detailed survey of the Birkhoff Conjecture see \cite{kalsor}, \cite{KS18}, \cite{bm6} and references therein. Existence of a Cantor family of caustics in every strictly convex bounded planar billiard with sufficiently smooth boundary was proved by V. F. Lazutkin \cite{laz} using KAM type arguments. R. Melrose proved that for every $C^\infty$-smooth germ $\gamma$ of strictly convex planar curve there exists a germ of $C^\infty$-smooth foliation by caustics of the billiard played on $\gamma$, with $\gamma$ being a leaf \cite[p.184, proposition (7.14)]{melrose1}. S. Marvizi and R. Melrose have shown that the billiard ball map $T$ in a planar domain bounded by $C^\infty$-smooth strictly convex closed curve $\gamma$ always has an {\it asymptotic first integral} on a domain with boundary in the space of oriented lines: a domain adjacent to the family of tangent lines to $\gamma$. Namely, there exists a $C^\infty$-smooth function $F$ on the closure of a domain as above such that the difference $F\circ T-F$ is $C^\infty$-smooth there, and it is flat at the points of the family of tangent lines to $\gamma$; see \cite[theorem (3.2)]{mm}; see also statement of their result in Theorem \ref{thmm} below. (Strongly) billiard-like maps were introduced and studied in \cite{gpor}, where results on their dynamics were applied to curves with Poritsky property. \section{Construction of foliation by invariant curves. Proofs of Theorems \ref{thm33}, \ref{thm3}, \ref{thm1}, \ref{thm2} and Proposition \ref{distgerm}} \subsection{Marvizi--Melrose construction of an "up-to-flat" first integral} Here we recall the following Marvizi--Melrose theorem with proof. \begin{theorem} \label{thmm} \cite[theorem (3.2)]{mm}. 1) Let $V\subset(a,b)\times\mathbb R_{>0}\subset\mathbb R^2_{x,y}$ be a domain adjacent to the interval $J:=(a,b)\times\{0\}$. Let $F:V\cup J\to\mathbb R\times\mathbb R_{\geq0}$ be a $C^{\infty}$-lifted strongly billiard-like map. There exist a domain $U\subset V$ adjacent to $J$ and a real-valued $C^{\infty}$-smooth function $h:U\cup J\to\mathbb R_{\geq0}$, $h|_J\equiv0$, $\frac{\partial h}{\partial y}|_J>0$, such that the difference $h\circ F-h$ is $C^{\infty}$-smooth and $y$-flat. Moreover, one can normalize $h$ as above so that the mapping $F$ coincides, up to $y$-flat terms, with the time 1 map of the flow of the Hamiltonian vector field with the Hamiltonian function $\frac23 h^{\frac32}$. This normalization determines the Taylor series $h(s,y)=\sum_{k=1}^{+\infty}h_k(s)y^k$ uniquely. 
2) The analogue of the above statement holds if $J$ is replaced by the coordinate circle $S^1_s=S^1_s\times\{0\}$ lying in the cylinder $C:=S^1_s\times[0,\varepsilon)$ equipped with the standard area form and $F$ is a strongly billiard-like map $C\to S^1\times\mathbb R_{\geq0}$. In this case the coefficients $h_k(s)$ of the above normalized series are well-defined and $C^{\infty}$-smooth on the circle $S^1_s$. 3) Let $h$ be the function normalized as in Statement 1). Let $\tau$ denote the time function for the Hamiltonian vector field with the Hamiltonian function $h$. In the coordinates $(\tau,h)$ (which are symplectic) the map $F$ takes the form \begin{equation} F:(\tau,h)\mapsto(\tau+\sqrt h+\operatorname{flat}(h), h+\operatorname{flat}(h)).\label{taut}\end{equation} \end{theorem} \begin{proof} The strongly billiard-like mapping $F$ has the form (\ref{fbm}): \begin{equation} F(s,y)=(s+w(s)\sqrt y+O(y), \ y+q(s)y^{\frac32}+O(y^2)). \label{fform}\end{equation} Its lifting $\wt F(s,z)$, $z=\sqrt y$, has the form \begin{equation} \wt F(s,z)=(s+w(s)z+O(z^2), \ z+\frac{q(s)}2z^2+O(z^3)). \label{lift}\end{equation} The mapping $\wt F(s,z)$ admits an asymptotic Taylor series in $z$, and $F(s,y)$ admits an asymptotic Puiseux series in $y$ involving powers $0, \frac12, 1, \frac32, 2,\dots$. The coefficients of both series are $C^{\infty}$-smooth functions in $s$. Therefore, the mapping $F$ acts by the formula $h\mapsto h\circ F$ not only on functions, but also on formal Puiseux series. It transforms each power series $h=\sum_{k=1}^{+\infty}h_k(s)y^k$ with coefficients being $C^{\infty}$-smooth functions on $(a,b)$ to a Puiseux series of the above type. Our goal is to find an $F$-invariant power series (or equivalently, an $\wt F$-invariant even power series $\sum_{k=1}^{+\infty}h_k(s)z^{2k}$) and then to choose its $C^{\infty}$-smooth representative. To do this, we use the following formula for the function $q(s)$ in (\ref{fform}), see \cite[formula (1.2)]{laz}, \cite[formula (7.18)]{gpor}, which follows from the area-preserving property: \begin{equation} q(s)=-\frac23 w'(s).\label{qw}\end{equation} Step 1: constructing an even series $\sum_{k=1}^{+\infty}g_k(s)z^{2k}$ whose $\wt F$-image is also an even series. We construct its coefficients $g_k(s)$ by induction as follows. Induction base: $k=1$. Let us find a function $g_1(s)$ such that the $\wt F$-image of the function $g_1(s)z^2$ contains no $z^3$-term. This is equivalent to the statement saying that the function $g_1(s+w(s)z)(z+\frac{q(s)}2z^2)^2$ contains no $z^3$-term, which in its turn is equivalent to the differential equation $$g_1'(s)w(s)+q(s)g_1(s)=0, \ \ q(s)=-\frac23w'(s),$$ which has a unique solution $g_1(s)=w^{\frac23}(s)$ up to a constant factor. (Note that $w^{\frac23}(s)y$ is a well-known function: the second Lazutkin coordinate \cite{laz, mm}.) Induction step in the case when $J=(a,b)\times\{0\}$ is an interval. Suppose that we have already found an even Taylor polynomial $G_{n-1}(s,z):=\sum_{k=1}^{n-1}g_k(s)z^{2k}$, $n\geq2$, such that the asymptotic Taylor series in $z$ of the function $G_{n-1}\circ\wt F$ contains no odd powers of $z$ of degrees no greater than $2n-1$. Let us construct $g_{n}(s)$ and set $G_n(s,z):=\sum_{k=1}^{n}g_k(s)z^{2k}$, so that \begin{equation} G_n\circ\wt F-G_n \text{ contains no } z^{2n+1}-\text{term}. \label{gnterm}\end{equation} Note that $G_n\circ\wt F-G_n$ obviously cannot contain odd powers of degrees less than $2n$. 
Let $b(s)z^{2n+1}$ denote the degree $2n+1$ term in the Taylor series of the function $G_{n-1}\circ\wt F$. Condition (\ref{gnterm}) is equivalent to the differential equation \begin{equation} g_n'(s)w(s)-\frac{2n}3w'(s)g_n(s)=-b(s),\label{gnb}\end{equation} which always has a solution $g_n(s)$ well-defined on the interval $(a,b)$. Step 2. Constructing an $\wt F$-invariant series. The mapping $\wt F$ is the product $I\circ\beta$ of two involutions: $I(s,z)=(s,-z)$ and $\beta$. Let $g:=\sum_{k=1}^{+\infty}g_k(s)z^{2k}$ be the series constructed in Step 1. One has \begin{equation} g\circ\wt F=(g\circ I)\circ\beta=g\circ\beta,\label{gcircb}\end{equation} since the series $g$ is even. The series (\ref{gcircb}) is even (Step 1). Hence, the series $$t:=g+g\circ\beta$$ is even and $\beta$-invariant by construction. Therefore, it is $\wt F$-invariant. Its first coefficient is equal to $2g_1(s)=2w^{\frac23}(s)>0$, by construction. We denote the $\wt F$-invariant series thus constructed by $t:=\sum_{k=1}^{+\infty}t_k(s)z^{2k}$. Step 3: symplectic coordinates and normalization. Let $t(s,y)$ be a function representing the series $\sum_{k=1}^{+\infty}t_k(s)y^{k}$, which is obtained from the latter series (given by Step 2) by the variable change $y=z^2$. It is defined on a domain $U$ adjacent to $J$ and $C^{\infty}$-smooth on $U\cup J$; $t|_J\equiv0$, $\frac{\partial t}{\partial y}|_J>0$. Let $H_t$ denote the corresponding Hamiltonian vector field. Fix an arbitrary $C^{\infty}$-smooth function $\theta$ such that $d\theta(H_t)\equiv1$, $\theta|_{s=0}=0$: a time function for the vector field $H_t$. Then $(\theta,t)$ are symplectic coordinates for the form $\omega=ds\wedge dy$: $\omega=d\theta\wedge dt$. Shrinking $U$ (keeping it adjacent to $J$) we can and will consider that they are global coordinates on $U\cup J$. The difference $t\circ F-t$ is $y$-flat by construction, hence $t$-flat (Remark \ref{remfl}), and hence, so is $dF(H_t)-H_t$. Therefore, in the coordinates $(\theta,t)$ the symplectic map $F$ takes the form \begin{equation}F:(\theta,t)\mapsto(\theta+\xi(t), t) + \operatorname{flat}(t).\label{xit} \end{equation} In the new coordinates $(\theta,t)$ the map $F$ is $C^{\infty}$-lifted strongly billiard-like, as it was in the old coordinates $(s,y)$, by Proposition \ref{classinv}. \medskip {\bf Claim 1.} {\it The function $\xi(t)$ in (\ref{xit}) has the form $\xi(t)=\sqrt t \psi(t)$, where $\psi(t)$ is a $C^{\infty}$-smooth function on a segment $[0,\varepsilon]$, $\varepsilon>0$, $\psi\geq0$, $\psi(0)>0$.} \begin{proof} Let $\wt F$ denote the lifting of the map $F$ to the coordinates $(\theta,\zeta)$, $\zeta=\sqrt t$. One has $\wt F=I\circ\beta$, where $I(\theta,\zeta)=(\theta,-\zeta)$ and $\beta$ is a symplectic involution, $\beta(\theta,0)\equiv(\theta,0)$. The involution $\beta$ takes the form \begin{equation}\beta(\theta,\zeta)=(\theta+q(\zeta),-\zeta)+\operatorname{flat}(\zeta), \ \ q(\zeta)=\xi(\zeta^2) \text{ for } \zeta>0.\label{betat}\end{equation} The function $q(\zeta)$ should be $C^{\infty}$-smooth, as is $\beta$, and $q'(0)>0$ (by the strong billiard-like property). The condition saying that $\beta$ is an involution implies that $q(\zeta)+q(-\zeta)=\operatorname{flat}(\zeta)$. This in turn implies that $q(\zeta)=\zeta \psi(\zeta^2)+\operatorname{flat}(\zeta)$, where $\psi$ is a $C^{\infty}$-smooth function; $\psi(0)=q'(0)>0$. This together with (\ref{betat}) implies the statement of the claim. 
\end{proof} We have to find a function $h(s,y)$, $h(s,0)\equiv0$, such that the Hamiltonian vector field with the Hamiltonian function $\frac23h^{\frac32}$ coincides with $\xi(t)\frac{\partial}{\partial\theta}$: this function will satisfy the normalization statement of Theorem \ref{thmm}, part 1), by construction. We are looking for it as a function depending only on $t$: $h(s,y)=\chi(t)$. The above Hamiltonian vector field is then equal to $\sqrt{\chi(t)}\chi'(t)\frac{\partial}{\partial\theta}$. Thus, we have to solve the equation $$\chi^{\frac12}(t)\chi'(t)=\xi(t)=\sqrt t \psi(t), \ \chi(0)=0.$$ Its solution $\chi(t)$ is given by the formula $$\chi(t)=\left(\frac32\int_0^t\sqrt p\psi(p)dp\right)^{\frac23}.$$ This is a $C^{\infty}$-smooth function, by construction and by smoothness of the function $\psi(t)$. One has $\frac{\partial h}{\partial y}|_J>0$, since $\chi'(0)=\psi(0)>0$ and $\frac{\partial t}{\partial y}(s,0)=2g_1(s)=2w^{\frac23}(s)>0$, by construction. Uniqueness of the Taylor series in $y$ of the function $h(s,y)$ satisfying the above Hamiltonian vector field statement follows directly, as in \cite[p.383]{mm}. Statement 1) of Theorem \ref{thmm} is proved. Statement 3) follows immediately from Statement 1), since in the coordinates $(\tau,h)$, see Statement 3), the Hamiltonian field with the Hamiltonian function $\frac23h^{\frac32}$ is equal to $(\sqrt h, 0)$. Statement 2) (the case when $J$ is a circle and $F$ is defined on a cylinder bounded by $J$) says that the Taylor coefficients of the series in $y$ of the function $h(s,y)$ are well-defined functions on the circle $J$. This follows from the above uniqueness statement. Theorem \ref{thmm} is proved. \end{proof} \subsection{Step 1. Construction of an invariant function on a neighborhood of the fundamental domain} Here we give the first step of the proof of Theorem \ref{thm33}. We consider a fundamental sector $\Delta$ for the map $\wt F$ that is bounded by the segment $K=[0,\frac\eta2]$ of the $\phi$-axis, by its $\wt F$-image and by the straight-line segment connecting their ends. We construct an $\wt F$-invariant function $\wt\phi$ that is $\phi$-flatly close to $\phi$ on a sectorial neighborhood $S_{\chi,\eta}$ of $\overline\Delta\setminus\{(0,0)\}$. See Fig. 1. \begin{figure}[ht] \begin{center} \epsfig{file=fig-fund-sector.eps} \caption{The fundamental domain $\Delta$ and its sectorial neighborhood $S_{\chi,\eta}$.} \label{fig:01} \end{center} \end{figure} Without loss of generality we consider that the $\tau$-interval contains the origin: $a<0<b$. Fix a number $\chi$, $0<\chi<\frac12$. Consider the sectors \begin{equation} S_{\chi}=\{ -\chi\phi<\tau<(1+\chi)\phi\}\subset \mathbb R_\tau\times(\mathbb R_+)_{\phi}, \label{secd}\end{equation} $$S_{\chi,\eta}:= S_{\chi}\cap\{0<\phi<\eta\}.$$ The domain $S_{\chi,\eta}$ will be the above-mentioned neighborhood of the fundamental sector, where we construct an $\wt F$-invariant function. \begin{proposition} \label{vi} For every $\chi\in(0,\frac12)$ and every $\eta>0$ small enough (depending on $\wt F$ and $\chi$) the following statements hold. (i) The maps $\wt F^{\pm1}$, $\wt F^{\pm2}$ are well-defined on $S_{\chi,2\eta}$. (ii) The domains $S_{\chi,2\eta}$ and $\wt F^2(S_{\chi,2\eta})$ are disjoint; the latter lies to the right of the former. (iii) The segment $K:=\{0\}\times[0,\frac\eta2]\subset\mathbb R^2_{\tau,\phi}$ and its image $\wt F(K)$ intersect only at the origin; $\wt F(K)$ lies to the right of $K$. 
The domain $\Delta\subset S_{\chi,2\eta}$ bounded by $K$, $\wt F(K)$ and the straight-line segment connecting the endpoints of the arcs $K$ and $\wt F(K)$ distinct from $(0,0)$ is a fundamental domain for the map $\wt F$. See Fig. 1. \end{proposition} \begin{proof} One has \begin{equation}d\wt F(0,0)=\left(\begin{matrix} 1 & 1\\ 0 & 1\end{matrix}\right). \label{diffo}\end{equation} The latter differential sends each line $\{\tau=\zeta\phi\}$ to the line $\{\tau=(\zeta+1)\phi\}$. This implies that for every $\eta>0$ small enough statements (i)--(iii) hold. \end{proof} \begin{proposition} \label{vii} For every $\chi\in(0,\frac12)$ and every $\eta>0$ small enough (depending on $\wt F$ and $\chi$) there exists a $C^{\infty}$-smooth and $\wt F$-invariant function $\wt\phi(\tau,\phi)$ on $S_{\chi,\eta}$ such that the difference $\wt\phi(\tau,\phi)-\phi$ is $\phi$-flat on $S_{\chi,\eta}$: that is, the latter difference tends to zero with all its partial derivatives, as $(\tau,\phi)\in S_{\chi,\eta}$ tends to zero. There exists a continuum of functions $\wt\phi$ satisfying the above statements and without critical points on the same sector $S_{\chi,\eta}$ for which the germs at $(0,0)$ of the foliations $\wt\phi=const$ are pairwise distinct. \end{proposition} \begin{proof} Let $\nu:S_{\chi}\to\mathbb R$ denote the function $$\nu:=\frac{\tau}\phi,$$ whose level curves are lines through the origin. The interval of values of the function $\nu$ on $S_{\chi}$ is $M:=(-\chi,1+\chi)$. Fix a number \begin{equation} \sigma>0, \ 2\sigma<\frac12-\chi.\label{sichi}\end{equation} Consider the covering of the interval $M$ by the intervals $$(-\chi,\frac12+\sigma), \ \ (\frac12-\sigma,1+\chi)$$ and a corresponding partition of unity $\rho_1$, $\rho_2$: \begin{equation}\rho_1\equiv1 \text{ on } (-\chi,\frac12-\sigma); \ \rho_2\equiv 1 \text{ on } (\frac12+\sigma,1+\chi);\label{partun}\end{equation} $$ \ \ \rho_1,\rho_2\geq0, \ \rho_1+\rho_2\equiv1 \text{ on } M=(-\chi, 1+\chi).$$ Set \begin{equation}\wt\phi(x):=\rho_1(\nu(x))\phi(x)+\rho_2(\nu(x))\phi\circ \wt F^{-1}(x)\label{wtphi}\end{equation} $$ =\phi(x)+\rho_2(\nu(x))(\phi\circ\wt F^{-1}(x)-\phi(x)).$$ \begin{proposition} \label{pinvf} For every fixed $\chi\in(0,\frac12)$, $\sigma\in(0,\frac12(\frac12-\chi))$ and every $\eta$ small enough (depending on $\chi$ and $\sigma$) the function $\wt\phi$ given by (\ref{wtphi}) is well-defined on $S_{\chi,\eta}$ and $\wt F$-invariant: if $x, \wt F(x)\in S_{\chi,\eta}$, then $\wt\phi(\wt F(x))=\wt \phi(x)$. It is $C^{\infty}$-smooth, and the difference $\wt\phi(x)-\phi(x)$ is $\phi$-flat on $S_{\chi,\eta}$. \end{proposition} \begin{proof} Recall that $\wt F$ satisfies asymptotic formula (\ref{tauphi}): $$\wt F(\tau,\phi)=(\tau+\phi+\operatorname{flat}(\phi), \phi+\operatorname{flat}(\phi)).$$ Well-definedness and $C^{\infty}$-smoothness of the function $\wt\phi$ on $S_{\chi,\eta}$ for small $\eta$ are obvious. The $\phi$-flatness of the difference $\wt\phi-\phi$ on $S_{\chi,\eta}$ follows from formula (\ref{wtphi}), the $\phi$-flatness of the difference $\phi\circ\wt F-\phi$, see (\ref{tauphi}), and the fact that the function $\nu(\tau,\phi)=\frac\tau\phi$ has partial derivatives of at most polynomial growth in $\phi$, as $(\tau,\phi)\to0$ along the sector $S_{\chi,\eta}$. Let us prove the $\wt F$-invariance for $\eta$ small enough. For every $\delta>0$ and every $\eta>0$ small enough (depending on $\delta$) the inclusion $\wt F(x)\in S_{\chi,\eta}$ implies that $\tau(x)\leq(\chi+\delta)\phi(x)$, by formula (\ref{tauphi}) recalled above. 
Therefore the inclusion $x,\wt F(x)\in S_{\chi,\eta}$ implies that $x$ lies in the sector $\{-\chi\phi<\tau<(\chi+\delta)\phi\}$. Choosing $\delta<\sigma$ we get that on the latter sector $\rho_1\circ\nu\equiv1$ and $\rho_2\circ\nu\equiv0$, since $\chi+\delta<\chi+\sigma<\frac12-\sigma$, see (\ref{sichi}), and by (\ref{partun}). Hence, $\wt\phi(x)=\phi(x)$, by (\ref{wtphi}). Applying the above argument similarly ``in the inverse time'' yields that the inclusion $x,\wt F(x)\in S_{\chi,\eta}$ implies that $\wt F(x)$ lies in the sector $\{(1-\chi-\delta)\phi<\tau<(1+\chi)\phi\}$. On the latter sector one has $\rho_1\circ\nu\equiv0$, $\rho_2\circ\nu\equiv1$, by (\ref{partun}) and since $$1-\chi-\delta>1-\chi-\sigma=1-\chi+\sigma-2\sigma>1-\chi+\sigma-\frac12+\chi=\frac12+\sigma.$$ Therefore, $\wt\phi(\wt F(x))=\phi\circ\wt F^{-1}(\wt F(x))=\phi(x)$, by (\ref{wtphi}). Finally we get that $\wt\phi(x)=\wt\phi\circ\wt F(x)$, and hence $\wt\phi$ is $\wt F$-invariant. The proposition is proved. \end{proof} Let us now prove non-uniqueness of the germ at $(0,0)$ of the foliation $\wt\phi=const$. The point $\frac12$ is the midpoint of the interval $M=(-\chi,1+\chi)$ defining $S_{\chi}$. One has $\frac12\pm1\notin M$, since $0<\chi<\frac12$. Fix a small $\theta>0$, set $Y:=(\frac12-\theta,\frac12+\theta)$, so that the intervals $Y\pm1$ lie outside the interval $M$, at distance greater than $\theta$ from its boundary points. Set $SY:=\{(\tau,\phi) \ | \ \frac{\tau}{\phi}\in Y\}$. If $\eta$ is small enough, then $\wt F^{\pm1}(x)\notin S_{\chi}$ for every $x\in S_{\chi,\eta}\cap SY$, by (\ref{tauphi}). Fix a partition of unity (\ref{partun}). Fix a positive $C^{\infty}$-smooth $\phi$-flat function $g(\phi)<1$, say $g(\phi)=\exp(-\frac1{\phi})$. Take an arbitrary $C^{\infty}$-smooth bump function $\rho_3(\nu)\geq0$ supported in $Y$, $\rho_3\not\equiv0$, $\rho_3,\rho_3'\leq1$. In the definition (\ref{wtphi}) of the function $\wt\phi$ let us add the new term $\eta^2\rho_3(\nu(x))g(\phi(x))$ to the right-hand side. This changes $\wt\phi$ only inside the sector $SY$ and changes neither its $\wt F$-invariance, nor its smoothness, nor its $\phi$-flatness, by construction. One can choose a continuum of different functions $\rho_3$ as above. If $\eta$ is small enough, then all the corresponding functions $\wt\phi$ are $C^{\infty}$-smooth on $S_{\chi,\eta}$ and have no critical points there. For any two different functions $\rho_3$ the corresponding germs of foliations $\wt\phi=const$ at the origin are different: the corresponding functions $\wt\phi$ coincide on $S_{\chi,\eta}\setminus SY$, while the germs of their restrictions to $SY$ differ. The set of $C^{\infty}$-smooth functions in a given finite number of variables has the cardinality of the continuum. This implies that there exists a continuum of functions $\wt\phi$ satisfying the statements of Proposition \ref{vii} for which the corresponding germs of foliations $\wt\phi=const$ are distinct. Proposition \ref{vii} is proved. \end{proof} \subsection{Step 2. Extension by dynamics} Here we show that an $\wt F$-invariant function $\wt\phi$ constructed above on a neighborhood of the fundamental domain $\Delta$ extends along $\wt F$-orbits to an $\wt F$-invariant function on a domain $W$ adjacent to $J=(a,b)\times\{0\}\subset\mathbb R^2_{\tau,\phi}$. It suffices to prove that the function $\wt\phi$ extends as above to a rectangle $(a',b')\times(0,\eta')$ adjacent to an arbitrary relatively compact subinterval $J'=(a',b')\times\{0\}\Subset J$. 
The union of the above rectangles corresponding to an exhaustion of $J$ by a sequence of subintervals $J'$ yields a domain $W$ adjacent to all of $J$, where the extended function is defined. Therefore, we make the following convention. \begin{convention} \label{coun} Everywhere below we identify the interval $J=(a,b)\times\{0\}$ with $(a,b)$ and sometimes we denote $J=(a,b)\subset\mathbb R$. We will consider that there exists a $\delta>0$ such that $\wt F^{\pm1}$ are diffeomorphisms of the rectangle $J\times[0,\delta)\subset\mathbb R^2_{\tau,\phi}$ onto its images, and the $\phi$-flat terms in its asymptotic formula (\ref{tauphi}) are {\it uniformly $\phi$-flat:} the difference $\wt F(\tau,\phi)-(\tau+\phi,\phi)$ converges to zero uniformly in $\tau\in J$, and each of its partial derivatives (of any order) also converges to zero uniformly. Indeed, the flat terms in question are uniform on compact subsets in $J$. Hence, one can achieve their uniformity by replacing $J$ with a relatively compact subinterval. Under this assumption the above difference and its differential are both uniformly $o(\phi^m)$ in $\tau\in J$ for each individual $m\in\mathbb N$. We also consider that $J$ is a finite interval: $a$, $b$ are finite. \end{convention} The next proposition describes the asymptotics of two-sided $\wt F$-orbits. \begin{proposition} \label{propit} For every $\eta$ small enough and $x:=(\tau_0,\phi_0)\in J\times[0,\eta)$ a) the iterates $\wt F^j(x)=(\tau_j,\phi_j)$ are well-defined for all $j\geq0$, $j\leq N_+$, where $N_+=N_+(x)$ is the maximal number $j$ for which $\tau_j< b$; b) the inverse iterates $\wt F^{-j}(x)=(\tau_{-j},\phi_{-j})$ are well-defined for all $j\leq N_{-}$, where $N_-=N_-(x)$ is the maximal number $j$ for which $\tau_{-j}> a$; c) $\phi_j=\phi_0(1+o(1))$ uniformly in $\tau_0$ and $j\in[-N_-,N_+]$, as $\phi_0\to0$; d) the points $\tau_j$ form an asymptotic arithmetic progression: $\tau_{j+1}-\tau_j= \phi_0(1+o(1))$ uniformly in $\tau_0\in J$ and in $j\in[-N_-,N_+-1]$, as $\phi_0\to0$. \end{proposition} \begin{proof} Consider two lines and segments through $x$: $$L_{\pm}(x):=\{\phi=\phi_0\pm\phi_0^{4}(\tau-\tau_0)\}, \ \lambda_{\pm}:=L_{\pm} \cap J\times[0,2\eta).$$ {\bf Claim 2.} {\it For every $x=(\tau_0,\phi_0)\in J\times[0,2\eta)$ with $\phi_0$ small enough e) the image $\wt F(\lambda_{\pm})$ is disjoint from $\lambda_{\pm}$ and lies to its right; f) the image $\wt F^{-1}(\lambda_{\pm})$ is disjoint from $\lambda_{\pm}$ and lies to its left. g) the right sector $S_{+}(x)$ bounded by the right subintervals in $\lambda_{\pm}$ with vertex $x$ is $\wt F$-invariant; h) the left sector $S_{-}(x)$ bounded by the left subintervals in $\lambda_{\pm}$ with vertex $x$ is $\wt F^{-1}$-invariant.} \begin{proof} If $\eta$ is small enough, then $\wt F^{\pm1}$ are well-defined on $J\times[0,3\eta)$. If $\phi_0$ is small enough, then each $\lambda_{\pm}$ projects onto all of $J$, and the $\phi$-coordinates of all its points are uniformly asymptotically equivalent to $\phi_0$ (by finiteness of $J$). The map $\wt F$ moves a point $z=(\tau,\phi)\in\lambda_{\pm}$ to $y:=(\tau+\phi,\phi)$ up to a $\phi$-flat term, which is $o(\phi_0^m)$ for every $m\in\mathbb N$. On the other hand, the distance of the latter point $y$ to the line $L_{\pm}$ is equal to $\phi\simeq\phi_0$ times the $|\sin|$ of the azimuth of the line $L_{\pm}$. The latter azimuth is asymptotic to $\phi_0^{4}$, and hence, is greater than $\frac12\phi_0^{4}$, whenever $\phi_0$ is small enough. 
Thus, $\operatorname{dist}(y,L_{\pm})\geq \frac13\phi_0^5$. Therefore, adding a term $o(\phi_0^m)$, $m\geq5$, to $y$ will not allow it to cross $L_{\pm}$: we will get a point lying on the same (right) side of the line $L_{\pm}$ as $y$. The case of inverse iterates is treated analogously. Statements e) and f) are proved. They immediately imply statements g) and h). \end{proof} Let $\eta\in(0,\frac18)$ be small enough so that $\wt F$ is defined on the rectangle $\Pi:=J\times[0,3\eta)$ and for every $x\in\Pi$ with $\phi_0=\phi(x)\in[0,2\eta]$ the sector $S_{+}(x)$ contains the points $x_j=\wt F^j(x)$ until they go out of $\Pi$ (Claim 2, g)). The intersection $S_{+}(x)\cap\partial\Pi$ is contained in the right lateral side $\{ b\}\times[0,3\eta)$. Therefore, the first $j$ for which $x_j$ goes out of $\Pi$ is the one for which $\tau(x_j)\geq b$. This proves Statement a) of Proposition \ref{propit}. The proof of Statement b) is analogous. For every $x\in\Pi$ with $\phi_0=\phi(x)$ small enough the above inclusion $x_j\in S_{+}(x)$ holds for $j=1,\dots,N_+$. It implies Statement c) for the above $j$, by the definition of the sector $S_{+}$. The proof of Statement c) for $j=-N_-,\dots,-1$ is analogous. Statement d) follows from Statement c), since $\tau\circ\wt F(x)-\tau(x)=\phi(x)+\operatorname{flat}(\phi(x))$, see (\ref{tauphi}). Proposition \ref{propit} is proved. \end{proof} \begin{corollary} \label{1-4} 1) For every $\eta>0$ small enough each point $x=(\tau_0,\phi_0)\in J\times[0,\frac{2\eta}3)$ has a two-sided orbit lying in $J\times[0,\eta)$ and consisting of points $x_j$, $j\in[-N_-(x),N_+(x)]$, with $\phi_j\simeq\phi_0$, as $\phi_0\to0$; the latter asymptotics is uniform in the above $j$ and in $\tau_0\in J$. 2) Let $\Delta$ denote the fundamental domain (curvilinear triangle) for the map $\wt F$ from Proposition \ref{vi}, Statement (iii). Let $\wh\Delta$ denote the complement in the closure $\overline\Delta$ of the union of its vertex $(0,0)$ and the opposite side. If $\eta>0$ is small enough, then the domain $W$ saturated by the above two-sided orbits of points in $\wh\Delta$ lies in $J\times[0,\frac{2\eta}3)$ and contains the strip $J\times(0,\frac\eta4)$. 3) The orbit of each point in $W$ contains either a unique point lying in the fundamental domain $\Delta$, or two consecutive points lying in its lateral boundary curves (glued by $\wt F$). 4) Each $\wt F$-invariant function $\wt\phi$ on $\wh\Delta$ extends to a unique $\wt F$-invariant function on $W$ as a function constant along the latter orbits. \end{corollary} The corollary follows immediately from Proposition \ref{propit}. Step 2 is done. \subsection{Step 3. Regularity and flatness. End of proof of Theorem \ref{thm33}} Here we will prove the following lemma, which will imply Theorem \ref{thm33}. \begin{lemma} \label{regflat} Suppose that in Corollary \ref{1-4} the function $\wt\phi$ on $\wh\Delta$ is the restriction to $\wh\Delta$ of a $C^{\infty}$-smooth $\wt F$-invariant function defined on a neighborhood of $\wh\Delta$. Suppose also that the function $\wt\phi(\tau,\phi)-\phi$ is flat on $\wh\Delta$: it tends to zero with all its partial derivatives, as $(\tau,\phi)\in \wh\Delta$ tends to zero. Consider its extension to the above domain $W$ from Corollary \ref{1-4}, Statement 4), and let us denote the extended function by the same symbol $\wt\phi$. The difference $\wt\phi(\tau,\phi)-\phi$ is $C^{\infty}$-smooth on $W\cup J$, and it is uniformly $\phi$-flat (see Convention \ref{coun}). 
\end{lemma} \begin{proof} For every point $x=(\tau,\phi)\in W$ there exists an $N=N(x)\in\mathbb Z$ such that $\wt F^N(x)\in\wh\Delta$. The latter image $\wt F^N(x)$ lies in the domain of definition of the initial function $\wt\phi$ (which is defined on a neighborhood of $\wh\Delta$), and $\wt\phi(x)=\wt\phi_N(x):=\wt\phi(\wt F^N(x))$, by definition. This immediately implies $C^{\infty}$-smoothness of the extended function $\wt\phi$ on $W$. Let us prove the $\phi$-flatness of the difference $\wt\phi-\phi$. This will automatically imply $C^{\infty}$-smoothness at the points of the boundary interval $J$. To do this, we use the asymptotics \begin{equation} d\wt F(\tau,\phi)=A+\operatorname{flat}(\phi), \ A=\left(\begin{matrix} 1 & 1\\ 0 & 1 \end{matrix}\right);\label{dwtf}\end{equation} \begin{equation} N(x)=N(\tau,\phi)=O\left(\frac1\phi\right).\label{asnx}\end{equation} Here the flat term in (\ref{dwtf}) is uniformly flat, see Convention \ref{coun}. Formula (\ref{dwtf}) follows from (\ref{tauphi}). Formula (\ref{asnx}) holds, since $N\leq N_++N_-=O(\frac1\phi)$, which follows from Proposition \ref{propit}, Statement d). We study the derivatives of the functions $\wt\phi_N-\phi$, $N=N(x)$, at the point $x=(\tau,\phi)$, as functions of $x$ with $N$ fixed and chosen as above for this particular $x$. To prove uniform flatness, we have to show that all the partial derivatives of $\wt\phi_N-\phi$ tend to zero uniformly in $\tau\in J$, as $\phi\to0$. We prove this statement for the first derivatives (step 1) and then for the higher derivatives (step 2). Without loss of generality everywhere below we consider that $N\geq1$, i.e., $x$ lies to the left of the sector $\Delta$: for negative $N$ the proof is analogous. Step 1: the first derivatives. The difference of the initial function $\wt\phi$ (defined on a neighborhood of the set $\wh\Delta$) and $\phi$ is already known to be $\phi$-flat on $\wh\Delta$. The differential of the composition $\wt\phi_N=\wt\phi\circ\wt F^N$ at the point $x$, $N=N(x)$, is equal to \begin{equation} d(\wt\phi\circ\wt F^N)(x)=d\wt\phi(\wt F^N(x))d\wt F(\wt F^{N-1}(x))\dots d\wt F(x). \label{diffwt}\end{equation} \begin{proposition} \label{diffto0} For every sequence of points $x(k)=(\tau_{0k},\phi_{0k})\in W$ with $\phi_{0k}\to0$, as $k\to\infty$, and numbers $N_k=N(x(k))\in\mathbb N$ with $\wt F^{N_k}(x(k))\in\wh\Delta$ the difference $d(\wt\phi\circ\wt F^{N_k})(x(k))-d\phi$ tends to zero, as $k\to\infty$. \end{proposition} Proposition \ref{diffto0} implies uniform convergence to zero of the first derivatives. In its proof (given below) we use the following asymptotics of the differential $d\wt F(\wt F^j(x))$ and a technical proposition on matrix products. We denote $$M(\tau,\phi):= \text{ the Jacobian matrix of the differential } d\wt F(\tau,\phi).$$ \begin{proposition} \label{propxj} Let $x=(\tau_0,\phi_0)\in J\times(0,\frac\eta4)$, $x_j=(\tau_j,\phi_j):= \wt F^j(x)$, $j=0,\dots,N(x)$. For every $m\in\mathbb N$ one has \begin{equation} M(\tau_j,\phi_j)=A+o(\phi_0^m), \ \text{ as } \phi_0\to0; \ A=\left(\begin{matrix} 1 & 1\\ 0 & 1\end{matrix}\right),\label{mjun}\end{equation} uniformly in $j=1,\dots,N(x)$ and in $\tau_0\in J$ for each individual $m$. \end{proposition} \begin{proof} Formula (\ref{mjun}) follows from (\ref{dwtf}) and Proposition \ref{propit}, part c). 
\end{proof} \begin{proposition} \label{propasm} Consider arbitrary sequences of numbers $\phi_{0k}>0$, $N_k\in\mathbb N$, $\phi_{0k}\to0$, $N_k=O(\frac1{\phi_{0k}})$, as $k\to\infty$, and matrix collections $$\mathcal M_k=(M_{1;k},\dots,M_{N_k;k}), \ M_{j;k}\in\operatorname{GL}_2(\mathbb R),$$ \begin{equation} M_{j;k}=A+o(\phi_{0k}^m) \text{ for every } m\in\mathbb N; \ A=\left(\begin{matrix} 1 & 1\\ 0 & 1 \end{matrix}\right).\label{mtph}\end{equation} Here the latter asymptotics is uniform in $j=1,\dots,N_k$ for each individual $m$, as $k\to\infty$. Then the product of the matrices $M_{j;k}$ has the asymptotics \begin{equation} \wh M_k:=M_{N_k;k}\dots M_{1;k} =\left(\begin{matrix} 1 & N_k\\ 0 & 1\end{matrix}\right) + o(\phi_{0k}^m) \ \text{ for every } m\in\mathbb N.\label{asmat} \end{equation} \end{proposition} \begin{proof} Conjugating by the diagonal matrix $H_k:=\operatorname{diag}(1,\phi_{0k}^{-1})$ transforms the matrices $M_{j;k}$ and their product respectively to the following matrices: $$\wt M_{j;k}=B_k+o(\phi_{0k}^m), \ B_k=\left(\begin{matrix} 1 & \phi_{0k}\\ 0 & 1 \end{matrix}\right); \ \wt M_k:=\wt M_{N_k;k}\dots \wt M_{1;k}.$$ {\bf Claim 3.} {\it One has} \begin{equation}\wt M_k=B_k^{N_k}+o(\phi_{0k}^m)=\left(\begin{matrix} 1 & N_k\phi_{0k}\\ 0 & 1 \end{matrix}\right)+o(\phi_{0k}^m).\label{renpro}\end{equation} \begin{proof} Without loss of generality we can and will consider that $N_k\phi_{0k}\to C\in\mathbb R_{\geq0}$, passing to a subsequence, since $N_k=O(\frac1{\phi_{0k}})$, by assumption. Let $\mathcal{UT}\subset\operatorname{GL}_2(\mathbb R)$ denote the one-parameter subgroup of unipotent upper triangular matrices. Consider the tangent vector $$V=\left(\begin{matrix} 0 & 1\\ 0 & 0 \end{matrix}\right)\in T_1\mathcal{UT}\subset T_1\operatorname{GL}_2(\mathbb R).$$ Let us extend it to a left-invariant vector field on $\operatorname{GL}_2(\mathbb R)$, which is tangent to the $\mathcal{UT}$-orbits under the right multiplication action. Take a small transverse section $S\subset\operatorname{GL}_2(\mathbb R)$ passing through the identity and consider the subset $U\subset\operatorname{GL}_2(\mathbb R)$ foliated by arcs of phase curves of the field $V$ starting in $S$ and parametrized by the time segment $[0,2C]$. The subset $U$ is a bordered domain (flowbox) diffeomorphic to the product $S\times[0,2C]$ via the diffeomorphism sending a point $y\in U$ to the pair $(s(y),t(y))$ such that the orbit issued from the point $s(y)\in S$ arrives at $y$ in time $t(y)$. Fix an arbitrary $m\geq3$. In the new chart $(s,t)$ the multiplication by a matrix $\wt M_{j;k}=B_k+o(\phi_{0k}^m)$ from the right moves a point $(s,t)$ to the point $(s,t+\phi_{0k})$ up to a small correction of order $o(\phi_{0k}^m)$. Therefore, the multiplication by $N_k\simeq\frac C{\phi_{0k}}$ such matrices $\wt M_{j;k}$, with the $o(\phi_{0k}^m)$ in their asymptotics being uniform in $j$, moves a point $(s,t)$ to a point $(s,t+N_k\phi_{0k})$ up to a correction of order $N_ko(\phi_{0k}^m)=o(\phi_{0k}^{m-1})$. This implies (\ref{renpro}) with $m$ replaced by $m-1$. Taking into account that $m$ can be chosen arbitrarily, this proves (\ref{renpro}). \end{proof} Conjugating formula (\ref{renpro}) by the matrix $H^{-1}_k$ and taking into account that $m\in\mathbb N$ is arbitrary yields (\ref{asmat}). This proves Proposition \ref{propasm}. 
\end{proof} \begin{proof} {\bf of Proposition \ref{diffto0}.} For $z\in\wh\Delta$ set $$St(z):=(\frac{\partial\wt\phi}{\partial\tau},\frac{\partial\wt\phi}{\partial\phi})(z).$$ The string of the first partial derivatives of the function $\wt\phi_N=\wt\phi\circ\wt F^N$ at the point $x$, $N=N(x)$, is equal to the product $$St(\tau_N,\phi_N)M(\tau_{N-1},\phi_{N-1})\dots M(\tau_0,\phi_0), \ (\tau_j,\phi_j)=\wt F^{j}(x), \ j=0,\dots,N-1,$$ \begin{equation}St(\tau_N,\phi_N)=\left(0,1\right)+o(\phi_0^m) \text{ for every } m\in\mathbb N,\label{stringas} \end{equation} by the $\phi$-flatness of the difference $\wt\phi-\phi$ on $\wh\Delta$ and by the uniform asymptotics $\phi_j=\phi_0(1+o(1))$, $j=1,\dots,N$ (Proposition \ref{propit}, Statement c)). Take an arbitrary sequence of points $x(k):=(\tau_{0k},\phi_{0k})$, $\tau_{0k}\in J$, $\phi_{0k}\to0$, as $k\to\infty$. Set $$(\tau_{jk},\phi_{jk}):=\wt F^j(x(k)), \ N_k:=N(x(k)).$$ The sequence of collections of Jacobian matrices $M_{j+1;k}:=M(\tau_{jk},\phi_{jk})$, $j=0,\dots,N_k-1$, satisfies the conditions of Proposition \ref{propasm}, by (\ref{dwtf}) and Convention \ref{coun}. Therefore, their product $\wh M_k$, which is the Jacobian matrix of the differential $d\wt F^{N_k}(x(k))$, has asymptotics (\ref{asmat}): \begin{equation}\wh M_k:= \text{ the Jacobian matrix of } d\wt F^{N_k}(x(k)) \ = \ \left(\begin{matrix} 1 & N_k\\ 0 & 1\end{matrix}\right) + o(\phi_{0k}^m). \label{jacas}\end{equation} Thus, the matrix-string of the differential $d\wt\phi_{N_k}(\tau_{0k},\phi_{0k})$ is the product $$St(\tau_{Nk},\phi_{Nk})\wh M_k=\left(\left(0,1\right)+o(\phi_{0k}^m)\right)\left(\begin{matrix} 1 & N_k\\ 0 & 1 \end{matrix}\right)+o(\phi_{0k}^m)=\left(0,1\right)+o(\phi_{0k}^{m-1}),$$ since $N_k=O(\frac1{\phi_{0k}})$, see (\ref{asnx}). For $m=2$ we get that the differential $d(\wt\phi_{N_k}(\tau,\phi)-\phi)$ taken at the point $x(k)$ tends to zero, as $k\to\infty$. This proves Proposition \ref{diffto0}. \end{proof} Step 2: the higher derivatives. For a smooth function $f$ defined on a neighborhood of a point $x$ we will denote by $j^\ell_x(f)$ its $\ell$-jet at $x$. Below we prove the following proposition. \begin{proposition} \label{difftol} In the conditions of Proposition \ref{diffto0} for every $\ell\in\mathbb N$ the $\ell$-jet at $x(k)$ of the difference $\wt\phi\circ\wt F^{N_k}-\phi$ tends to zero, as $k\to\infty$. \end{proposition} Proposition \ref{difftol} will imply $C^{\infty}$-smoothness and $\phi$-flatness of the extended function $\wt\phi$ at the points of the boundary interval $J\times\{0\}$. For every $\ell\in\mathbb N$ and $x\in\mathbb R^2$ let $J^\ell_x$ denote the space of $\ell$-jets of functions at the point $x$. The map $\wt F$ induces a transformation of functions, $g\mapsto g\circ\wt F$. This induces linear operators in the jet spaces, $D_\ell\wt F(x):J^\ell_{\wt F(x)}\to J^\ell_{x}$. We identify the space of $\ell$-jets at each point in $\mathbb R^2$ with the $\ell$-jet space at the origin, which in its turn is identified with the space $\mathcal P_{\leq\ell}$ of polynomials of degrees no greater than $\ell$. Thus, we consider the operator $D_\ell\wt F(x)$ as acting on the above space $\mathcal P_{\leq\ell}$. One has \begin{equation} D_\ell\wt F^N(x)=D_\ell\wt F(\wt F^{N-1}(x))\dots D_\ell\wt F(x).\label{dlf} \end{equation} Linear changes of variables $(\tau,\phi)$ act on the space $\mathcal P_{\leq\ell}$ and induce an injective linear representation $\rho:\operatorname{GL}_2(\mathbb R)\to\operatorname{GL}(\mathcal P_{\leq\ell})$. 
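For instance (an elementary illustration, stated here under the natural identification of the operators $D_\ell\wt F$ with composition operators described above): for $\ell=1$ the space $\mathcal P_{\leq1}$ is spanned by the polynomials $1$, $\tau$, $\phi$, and composition with the linear map $(\tau,\phi)\mapsto(\tau+\phi,\phi)$ acts by $$1\mapsto1, \ \ \tau\mapsto\tau+\phi, \ \ \phi\mapsto\phi.$$ In particular the corresponding operator on $\mathcal P_{\leq1}$, and hence each of its powers, fixes the polynomial $\phi$; this is the property of the operator $\rho(A)$, for the unipotent Jordan cell $A$ introduced just below, that is used at the end of the proof of Proposition \ref{difftol}.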
Let $A$ denote the unipotent Jordan cell, see (\ref{mtph}). \begin{proposition} \label{propseq} For every sequence of points $x(k)=(\tau_{0k},\phi_{0k})\in W$ with $\phi_{0k}\to0$, as $k\to\infty$, set $N_k:=N(x(k))$, one has \begin{equation}D_\ell\wt F^{N_k}(x(k))=\rho(A^{N_k})+o(\phi_{0k}^m) \ \text{ for every } m\in\mathbb N.\label{delwt}\end{equation} \end{proposition} \begin{proof} One has \begin{equation} D_\ell\wt F(\tau,\phi)=\rho(A)+\operatorname{flat}(\phi), \label{delfl}\end{equation} by (\ref{dwtf}). Set $x_j(k)=(\tau_{jk},\phi_{jk})=\wt F^j(x(k))$, $j=0,\dots, N_k-1$. One has \begin{equation} D_\ell\wt F(x_j(k))=\rho(A)+o(\phi_{0k}^m) \text{ for every } m\in\mathbb N, \label{delflj}\end{equation} by (\ref{delfl}) and Proposition \ref{propit}, Statement c). We use (\ref{dlf}) and the following multidimensional version of Proposition \ref{propasm}. \begin{proposition} \label{propasm2} Consider arbitrary sequences of numbers $\phi_{0k}>0$, $N_k\in\mathbb N$, $\phi_{0k}\to0$, $N_k=O(\frac1{\phi_{0k}})$, as $k\to\infty$, and matrix collections $$\mathcal M_k=(M_{1;k},\dots,M_{N_k;k}), \ M_{j;k}\in\operatorname{GL}(\mathcal P_{\leq\ell}),$$ \begin{equation} M_{j;k}=\rho(A)+o(\phi_{0k}^m) \text{ for every } m\in\mathbb N; \ A=\left(\begin{matrix} 1 & 1\\ 0 & 1 \end{matrix}\right).\label{mtph2}\end{equation} Here the latter asymptotics is uniform in $j=1,\dots,N_k$ for each individual $m$, as $k\to\infty$. Then the product of the matrices $M_{j;k}$ has the asymptotics \begin{equation} \wh M_k:=M_{N_k;k}\dots M_{1;k} =\rho(A^{N_k}) + o(\phi_{0k}^m) \text{ for every } m\in\mathbb N.\label{asmat2} \end{equation} \end{proposition} \begin{proof} Conjugating the matrices $M_{j;k}$ by $\rho(H_k)$, $H_k:=\operatorname{diag}(1,\phi_{0k}^{-1})$, transforms them to matrices $$\wt M_{j;k}=\rho(B_k)+o(\phi_{0k}^{m'}), \ \ B_k=\left(\begin{matrix} 1 & \phi_{0k}\\ 0 & 1 \end{matrix}\right), \ m'=m-\dim\mathcal P_{\leq\ell}.$$ It suffices to show that the product of the matrices $\wt M_{j;k}$ has asymptotics $\rho(B_k^{N_k})+o(\phi_{0k}^m)$ for every $m\in\mathbb N$, as in the proof of Claim 3. This is done by considering the left-invariant vector field on $\operatorname{GL}(\mathcal P_{\leq\ell})$ tangent to the orbits of the subgroup $\rho(\operatorname{GL}_2(\mathbb R))$ acting on $\operatorname{GL}(\mathcal P_{\leq\ell})$ by right multiplication and repeating the arguments from the proof of Claim 3. \end{proof} Formula (\ref{delwt}) is deduced from Proposition \ref{propasm2} and formulas (\ref{dlf}), (\ref{delflj}), as formula (\ref{jacas}). \end{proof} \begin{proof} {\bf of Proposition \ref{difftol}.} The polynomial representing the $\ell$-jet of the initial function $\wt\phi$ at a point $z\in\wh\Delta$ tends to the linear polynomial $P(\tau,\phi)=\phi$, as $z\to0$, so that its distance to $P(\tau,\phi)$ is $o(\phi^m)$ for every $m\in\mathbb N$, by flatness of $\wt\phi$ on $\wh\Delta$. This together with Proposition \ref{propit}, Statement c) implies that the distance of its $\ell$-jet at the point $\wt F^{N_k}(x(k))$ to the polynomial $\phi$ is asymptotic to $o(\phi_{0k}^m)$. The image of the latter $\ell$-jet under the operator $D_\ell\wt F^{N_k}(x(k))$ is also $o(\phi_{0k}^m)$-close to $\phi$ for every $m\in\mathbb N$. This follows from the previous statement, formula (\ref{delwt}), the fact that $\rho(A)$ fixes $\phi$ and the asymptotics $N_k=O(\phi_{0k}^{-1})$. 
Finally we get that the difference of the $\ell$-jet of the function $\phi$ at $x(k)$ and the $\ell$-jet $j^\ell_{x(k)}(\wt\phi\circ\wt F^{N_k})$ of the extended function tends to zero, as $k\to\infty$. Proposition \ref{difftol} is proved. \end{proof} Lemma \ref{regflat} follows from Proposition \ref{difftol}. It implies Theorem \ref{thm33}. \end{proof} \subsection{Proof of existence in Theorem \ref{thm3} and its addendum} Let $F$ be a $C^{\infty}$-lifted strongly billiard-like map. Let $(\tau,h)$ be the coordinates from Theorem \ref{thmm}. Set $\phi=\sqrt h$. Let $\wt F$ denote the map $F$ written in the coordinates $(\tau,\phi)$, which is $C^{\infty}$-smooth and takes the form $(\tau,\phi)\mapsto(\tau+\phi+\operatorname{flat}(\phi), \phi+\operatorname{flat}(\phi))$ (Theorem \ref{thmm}). There exists a $\wt F$-invariant function $\wt\phi=\phi+\operatorname{flat}(\phi)$ (Theorem \ref{thm33}). The function $\wt h:=\wt\phi^2$ is $F$-invariant, $C^{\infty}$-smooth, and $\wt h=h+\operatorname{flat}(h)$; hence $\frac{\partial\wt h}{\partial h}\neq0$ on $J$ and on some domain adjacent to $J$. The existence in Theorem \ref{thm3} is proved. Let us now prove the addendum to Theorem \ref{thm3}. Let us fix a function $\wt h$ constructed above. Let $\theta$ denote the time function of the Hamiltonian vector field with the Hamiltonian function $\wt h$, normalized to vanish on the vertical axis $\{\tau=0\}$. The coordinates $(\theta,\wt h)$ are symplectic, and in these coordinates $F(\theta,\wt h)=(\theta+\xi(\wt h), \wt h)$ for some function $\xi(\wt h)=\sqrt{\wt h}\psi(\wt h)$ in one variable, $\psi(0)>0$, as in Subsection 2.1, Claim 1. Then, modifying the functions $\wt h$ and $\theta$ as at the end of Subsection 2.1, we get new coordinates $(\tau,\wt h)$ (with a new $\tau$) in which $F$ takes the form (\ref{snform}). The addendum is proved. \subsection{Proof of Proposition \ref{distgerm} and non-uniqueness in Theorems \ref{thm33}, \ref{thm3}} Let us prove the statement of Proposition \ref{distgerm} for a map $\wt F$ of type (\ref{tauphi}). We prove it for line fields: for other objects the proof is analogous. Without loss of generality we can and will consider that the asymptotics (\ref{tauphi}) is uniform in $\tau\in J$ (replacing $J$ by a relatively compact subinterval $J'\Subset J$), as in Convention \ref{coun}. Let $G_1$ and $G_2$ be two line fields on $W$ with distinct germs at $J$. This means that there exists a sequence of points $x(k)=(\tau(k),\phi(k))$ with $\phi(k)\to0$ and $\tau(k)$ lying in a compact subset in $J$ such that the lines $G_1(x(k)),G_2(x(k))\subset T_{x(k)}\mathbb R^2$ are distinct. Taking a subsequence, we can and will consider that $x(k)\to x=(\tau_0,0)$, as $k\to\infty$. The two-sided orbit of a point $x(k)$ with large $k$ consists of points whose $\phi$-coordinates are $\phi(k)(1+o(1))$ and whose $\tau$-coordinates form an asymptotic arithmetic progression with step $\phi(k)(1+o(1))$, where the $o(1)$ are uniform; if $k$ is so large that $|o(1)|<\frac12$, then the orbit forms a $2\phi(k)$-net on the $2\phi(k)$-neighborhood of the interval $J$. See Proposition \ref{propit}. At each point of the orbit the lines of the fields $G_1$ and $G_2$ are distinct, since this holds at $x(k)$ and by $\wt F$-invariance. Therefore, passing to the limit, as $k\to\infty$, we get that for every point $z\in J$ there exist points $z'$ arbitrarily close to $z$ with $G_1(z')\neq G_2(z')$. Hence, the germs at $z$ of the line fields $G_1$ and $G_2$ are distinct. 
The first statement of Proposition \ref{distgerm}, for a map $\wt F$ of type (\ref{tauphi}), is proved. Its second statement, for a strongly billiard-like map $F:V\cup J\to F(V\cup J)\subset\mathbb R^2$, follows from its first statement and the fact that each $C^{\infty}$-lifted strongly billiard-like map is conjugate to a map $\wt F$ of type (\ref{tauphi}) by a homeomorphism that is smooth on the complement of the boundary interval $J$. The latter homeomorphism is the composition of a diffeomorphism and the map $(\tau,h)\mapsto(\tau,\phi)$, $\phi=\sqrt h$, see the discussion in the previous subsection. Proposition \ref{distgerm} is proved. Non-uniqueness in Theorem \ref{thm33} (its addendum) follows from non-uniqueness of the germ at $(0,0)$ of the foliation $\wt\phi=const$ on $S_{\chi,\eta}\supset\wh\Delta$ (Proposition \ref{vii}), Lemma \ref{regflat} and Proposition \ref{distgerm}. Let us prove the non-uniqueness statement of Theorem \ref{thm3}. Given a $C^{\infty}$-lifted strongly billiard-like map $F$ written in the above-mentioned coordinates $(\tau,h)$, the corresponding lifting $\wt F$ in the coordinates $(\tau,\phi)$ has a continuum of foliations $\wt\phi=const$ by level curves of a $\wt F$-invariant function $\wt\phi=\phi+\operatorname{flat}(\phi)$; foliations with pairwise distinct germs at each point in $J$ (the addendum to Theorem \ref{thm33}). Each of them projects via the map $(\tau,\phi)\mapsto(\tau,h)$, $h=\phi^2$, to a foliation by level curves of an $F$-invariant $C^{\infty}$-smooth function $\wt h=(\wt\phi)^{2}=h+\operatorname{flat}(h)$. The $F$-invariant foliations thus obtained have pairwise distinct germs at each point in $J$, by construction. This proves non-uniqueness in Theorem \ref{thm3}. \subsection{Proof of Theorems \ref{thm1}, \ref{thm2}, \ref{thm2closed}} \begin{proof} {\bf of Theorem \ref{thm1}.} The billiard map acting on oriented lines by reflection from the curve $\gamma$ is a $C^{\infty}$-lifted strongly billiard-like map (Example \ref{exdel} and Proposition \ref{psmi}). The corresponding interval $J$ is identified with the family of oriented lines tangent to $\gamma$ and defining the same orientation of the curve $\gamma$ as its parametrization. Each invariant curve of a strongly billiard-like map that is close to $J$ corresponds to a caustic of the billiard in $\gamma$. This together with Theorem \ref{thm3} implies the statements of Theorem \ref{thm1}. \end{proof} The proof of Theorem \ref{thm2} repeats the above proof of Theorem \ref{thm1} with obvious changes. Theorem \ref{thm2closed} follows from Theorem \ref{thm2}. \section{Acknowledgements} I am grateful to Sergei Tabachnikov, Alexander Plakhov, Ivan Beschastnyi and Stefano Baranzini for helpful discussions.
{ "timestamp": "2022-01-13T02:00:32", "yymm": "2104", "arxiv_id": "2104.01362", "language": "en", "url": "https://arxiv.org/abs/2104.01362", "abstract": "Reflection in strictly convex bounded planar billiard acts on the space of oriented lines and preserves a standard area form. A caustic is a curve $C$ whose tangent lines are reflected by the billiard to lines tangent to $C$. The famous Birkhoff Conjecture states that the only strictly convex billiards with a foliation by closed caustics near the boundary are ellipses. By Lazutkin's theorem, there always exists a Cantor family of closed caustics approaching the boundary. In the present paper we deal with an open billiard, whose boundary is a strictly convex embedded (non-closed) curve $\\gamma$. We prove that there exists a domain $U$ adjacent to $\\gamma$ from the convex side and a $C^\\infty$-smooth foliation of $U\\cup\\gamma$ whose leaves are $\\gamma$ and (non-closed) caustics of the billiard. This generalizes a previous result by R.Melrose, which yields existence of a germ of foliation as above at a boundary point. We show that there exists a continuum of above foliations by caustics whose germs at each point in $\\gamma$ are pairwise different. We prove a more general version of this statement in the cases, when $\\gamma$ is just an arc, and also when both $\\gamma$ and the caustics are immersed curves. It also applies to a billiard bounded by a closed strictly convex curve $\\gamma$ and yields infinitely many \"immersed\" foliations by immersed caustics. For the proof of the above results, we state and prove their analogue for a special class of area-preserving maps generalizing billiard reflections: the so-called $C^{\\infty}$-lifted strongly billiard-like maps. We also prove a series of results on conjugacy of billiard maps near the boundary for open curves of the above type.", "subjects": "Dynamical Systems (math.DS)", "title": "On infinitely many foliations by caustics in strictly convex open billiards", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9867771763033943, "lm_q2_score": 0.7185943925708561, "lm_q1q2_score": 0.7090925456085222 }
https://arxiv.org/abs/1904.02313
Counting self-conjugate (s,s+1,s+2)-core partitions
We are concerned with counting self-conjugate $(s,s+1,s+2)$-core partitions. A Motzkin path of length $n$ is a path from $(0,0)$ to $(n,0)$ which stays above the $x$-axis and consists of the up $U=(1,1)$, down $D=(1,-1)$, and flat $F=(1,0)$ steps. We say that a Motzkin path of length $n$ is symmetric if its reflection about the line $x=n/2$ is itself. In this paper, we show that the number of self-conjugate $(s,s+1,s+2)$-cores is equal to the number of symmetric Motzkin paths of length $s$, and give a closed formula for this number.
\section{Introduction} Let $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_{\ell})$ be a partition of a positive integer $n$. The {\it Young diagram} of $\lambda$ is a collection of $n$ boxes in $\ell$ rows with $\lambda_i$ boxes in row $i$. For example, the Young diagram for $\lambda=(5,4,2)$ is below. \begin{center} \tiny{ \begin{ytableau} ~&~&~&~&~ \\ ~&~&~&~ \\ ~&~ \end{ytableau}} \end{center} Let the leftmost column be column 1. The box in row $i$ and column $j$ is said to be in position $(i,j)$. For the Young diagram of $\lambda$, the partition $\lambda '=(\lambda_1 ',\lambda_2 ',\dots, \lambda_{\lambda_1} ')$ is called the {\it conjugate} of $\lambda$, where $\lambda'_j$ denotes the number of boxes in column $j$. A partition whose conjugate is equal to itself is called {\it self-conjugate}. For each box in its Young diagram, we define its {\it hook length} by counting the number of boxes directly to its right or below, including the box itself. Equivalently, for the box in position $(i,j)$, the hook length of $\lambda$ is defined by $$h(i,j)=\lambda_i+\lambda'_j-i-j+1.$$ For a positive integer $t$, a partition $\lambda$ is called a {\it $t$-core} if none of its hook lengths are multiples of $t$. We use the notation of a $(t_1,...,t_p)$-core if it is simultaneously a $t_1$-core,\dots, and a $t_p$-core. See for details \cite{AL,Anderson,AHJ,GKS,JK,Johnson,Wang}.\\ For a set $S$ of positive integers, we say that $a$ is generated by $S$ if $a$ can be written as a non-negative linear combination of the elements of $S$. Let $P=P_S$ be the set of elements which are not generated by $S$, and let $(P,<_P)$ be a poset by defining the cover relation so that $a$ covers $b$ if and only if $a-b\in S$. For example, see Figure \ref{fig:p8} for the poset $P_{\{8,9,10\}}$. For the detailed explanation of poset, we refer the reader to \cite{AL,S2,YZZ}. \begin{figure}[ht!] \centering \begin{tikzpicture}[scale=1] \foreach \i in {1,2,3,4,5,6,7} {\path (\i,0) coordinate (\i); \node[above] at (\i) {\i};} \foreach \i in {11,12,13,14,15} {\path (-9+\i,1) coordinate (\i); \node[above] at (\i) {\i}; \draw (\i) -- (-10+\i,0.45); \draw (\i) -- (-9+\i,0.45); \draw (\i) -- (-8+\i,0.45); } \foreach \i in {21,22,23} {\path (-18+\i,2) coordinate (\i); \node[above] at (\i) {\i}; \draw (\i) -- (-19+\i,1.45); \draw (\i) -- (-18+\i,1.45); \draw (\i) -- (-17+\i,1.45); } \path (4,3) coordinate (31); \node[above] at (31) {31}; \draw (31) -- (3,2.45); \draw (31) -- (4,2.45); \draw (31) -- (5,2.45); \end{tikzpicture} \caption{The Hasse diagram of $P_{\{8,9,10\}}$}\label{fig:p8} \end{figure} For a poset $(P,<_P)$, a set $I\subset P$ is called a \emph{lower ideal} of $P$ if $a<_P b$ and $b\in I$ implies $a\in I$. In \cite{Anderson}, Anderson gave a natural bijection between $t$-cores and lower ideals of a poset $P_{\{t\}}$. Moreover, she proved that for relatively prime positive integers $s$ and $t$, the number of $(s,t)$-cores has a nice closed formula by finding a bijection between $(s,t)$-cores and lattice paths from $(0,0)$ to $(s,t)$ consisting of north and east steps which stay above the diagonal. \begin{thm}\cite{Anderson}\label{thm:anderson} For relatively prime positive integers $s$ and $t$, the number of $(s,t)$-cores is $$\frac{1}{s+t}\binom{s+t}{s}.$$ \end{thm} Since the work of Anderson, the topic counting simultaneous cores has received growing attention. In \cite{FMS}, Ford, Mai, and Sze proved the following analog of Anderson's work. 
\begin{thm}\cite{FMS}\label{thm:FMS} For relatively prime positive integers $s$ and $t$, the number of self-conjugate $(s,t)$-cores is $$\binom{\lfloor \frac{s}{2}\rfloor+\lfloor \frac{t}{2}\rfloor}{\lfloor \frac{s}{2}\rfloor}.$$ \end{thm} An \emph{$(s,k)$-generalized Dyck path} is a path from $(0,0)$ to $(s,s)$ which stays above the diagonal and consists of the steps $N_k=(0,k)$, $E_k=(k,0)$, and $D_i=(i,i)$ for $1\leq i \leq k-1$. For example, an $(s,1)$-generalized Dyck path is a (classical) \emph{Dyck path of order} $s$. We say that an $(s,k)$-generalized Dyck path is \emph{symmetric} if its reflection about the line $y=s-x$ is itself. It has often been observed that counting simultaneous cores can be described as counting suitable lattice paths. \begin{rem} \label{rem:symmetric} Let $s$ be a positive integer. \begin{enumerate} \item[1.] The number of $(s,s+1)$-cores is the $s$th Catalan number $C_s=\frac{1}{s+1}\binom{2s}{s}$, which counts the number of Dyck paths of order $s$. \item[2.] The number of self-conjugate $(s,s+1)$-cores is $\binom{s}{\lfloor s/2 \rfloor}$, which counts the number of \emph{symmetric Dyck paths} of order $s$. \end{enumerate} \end{rem} In \cite{AL}, Amdeberhan and Leven extended Anderson's result to $(s,s+1,\dots,s+k)$-cores. \begin{thm}\cite{AL}\label{thm:AL} The following are equinumerous: \begin{enumerate} \item[(a)] The number of $(s,s+1,\dots,s+k)$-cores. \item[(b)] The number of $(s,k)$-generalized Dyck paths. \item[(c)] The number of lower ideals in $P_{\{s,s+1,\dots,s+k\}}$. \end{enumerate} \end{thm} We note that $(s,2)$-generalized Dyck paths are equivalent to Motzkin paths of length $s$. From Theorem \ref{thm:AL}, one can obtain the following corollary. \begin{cor} \label{cor:Motzkin} For a positive integer $s$, the number of $(s,s+1,s+2)$-cores is $$M_s=\sum_{i\geq 0}\frac{1}{i+1}\binom{s}{2i} \binom{2i}{i},$$ the $s$th Motzkin number, which counts the number of Motzkin paths of length $s$. \end{cor} We note that Yang, Zhong, and Zhou \cite{YZZ} proved Corollary \ref{cor:Motzkin} independently.\\ In view of Remark \ref{rem:symmetric} and Corollary \ref{cor:Motzkin}, it is natural to ask whether the number of self-conjugate $(s,s+1,s+2)$-cores is equal to the number of symmetric Motzkin paths of length $s$. In this paper, we prove that these two quantities are equal by showing that they satisfy the same recurrence relation; this is done in Section \ref{sec:counting}. Furthermore, we give a closed formula for these numbers. \section{Poset structure for self-conjugate $(s,s+1,s+2)$-cores}\label{sec:poset} In this section, we construct a poset whose lower ideals, subject to some restrictions, correspond to self-conjugate $(s,s+1,s+2)$-cores, and then give a simple diagram to visualize this poset.\\ For a partition $\lambda$, let $MD(\lambda)$ denote the set of main diagonal hook lengths. Note that $MD(\lambda)$ is a set of distinct odd integers when $\lambda$ is self-conjugate. In \cite{FMS}, the authors gave a useful criterion for determining self-conjugate $t$-cores. \begin{prop}\cite{FMS}\label{prop:FMS} Let $\lambda$ be a self-conjugate partition. Then $\lambda$ is a $t$-core partition if and only if both of the following hold: \begin{enumerate} \item[(a)] If $h\in MD(\lambda)$ with $h>2t$, then $h-2t\in MD(\lambda)$. \item[(b)] If $h_1,h_2\in MD(\lambda)$, then $h_1+h_2\not\equiv 0 \pmod{2t}$. 
\end{enumerate} \end{prop} For a positive integer $s$, we consider an induced subposet of $P=P_{\{2s,2s+1,\dots,2s+4\}}$, $$\tilde{P}_{\{s,s+1,s+2\}}=\{h\in P~:~s\not<_P h,~s+1\not<_P h,~s+2\not<_P h, \text{~and~} h \text{~is odd} \}.$$ We note that the poset $\tilde{P}_{\{s,s+1,s+2\}}$ is the disjoint union of two posets, say $Q$ and $R$, where $Q$ is the maximal induced subposet of $P$ of which minimal elements are odd integers less than $s$, and $R$ is the maximal induced subposet of $P$ of which minimal elements are odd integers $x$ such that $s+2<x<2s$. See Figures \ref{fig:tilde8} and \ref{fig:tilde9} for example. \begin{figure}[ht!] \centering \begin{tikzpicture}[scale=0.77] \foreach \i in {1,3,5,7,11,13,15} {\path (\i,0) coordinate (\i); \node[above] at (\i) {\i};} \foreach \i in {2,4,6,8,9,10,12,14} {\path (\i,0) coordinate (\i); \node[above, gray!60] at (\i) {\i};} \foreach \i in {22,24,25,26,27,28,29,30} {\path (-18+\i,1.5) coordinate (\i); \node[above, gray!60] at (\i) {\i}; \draw[gray!60] (\i) -- (-16+\i,0.55); \draw[gray!60] (\i) -- (-17+\i,0.55); \draw[gray!60] (\i) -- (-18+\i,0.55); \draw[gray!60] (\i) -- (-19+\i,0.55); \draw[gray!60] (\i) -- (-20+\i,0.55); } \foreach \i in {21,23,31} {\path (-18+\i,1.5) coordinate (\i); \node[above] at (\i) {\i}; \draw[thick] (\i) -- (-16+\i,0.55); \draw[gray!60] (\i) -- (-17+\i,0.55); \draw[thick] (\i) -- (-18+\i,0.55); \draw[gray!60] (\i) -- (-19+\i,0.55); \draw (\i) -- (-20+\i,0.55); } \foreach \i in {41,42,43,44,45,46,47} {\path (-36+\i,3) coordinate (\i); \node[above, gray!60] at (\i) {\i}; \draw[gray!60] (\i) -- (-34+\i,2.05); \draw[gray!60] (\i) -- (-35+\i,2.05); \draw[gray!60] (\i) -- (-36+\i,2.05); \draw[gray!60] (\i) -- (-37+\i,2.05); \draw[gray!60] (\i) -- (-38+\i,2.05); } \foreach \i in {61,62,63} {\path (-54+\i,4.5) coordinate (\i); \node[above, gray!60] at (\i) {\i}; \draw[gray!60] (\i) -- (-52+\i,3.55); \draw[gray!60] (\i) -- (-53+\i,3.55); \draw[gray!60] (\i) -- (-54+\i,3.55); \draw[gray!60] (\i) -- (-55+\i,3.55); \draw[gray!60] (\i) -- (-56+\i,3.55); } \end{tikzpicture} \caption{The Hasse diagram of the induced subposet $\tilde{P}_{\{8,9,10\}}$ of $P_{\{16,17,18,19,20\}}$}\label{fig:tilde8} \end{figure} \begin{figure}[ht!] 
\centering \begin{tikzpicture}[scale=0.77] \foreach \i in {1,3,5,7,13,15,17} {\path (\i,0) coordinate (\i); \node[above] at (\i) {\i};} \foreach \i in {2,4,6,8,9,10,11,12,14,16} {\path (\i,0) coordinate (\i); \node[above, gray!60] at (\i) {\i};} \foreach \i in {24,26,27,28,29,30,31,32,33,34} {\path (-20+\i,1.5) coordinate (\i); \node[above, gray!60] at (\i) {\i}; \draw[gray!60] (\i) -- (-18+\i,0.55); \draw[gray!60] (\i) -- (-19+\i,0.55); \draw[gray!60] (\i) -- (-20+\i,0.55); \draw[gray!60] (\i) -- (-21+\i,0.55); \draw[gray!60] (\i) -- (-22+\i,0.55); } \foreach \i in {23,25,35} {\path (-20+\i,1.5) coordinate (\i); \node[above] at (\i) {\i}; \draw[thick] (\i) -- (-18+\i,0.55); \draw[gray!60] (\i) -- (-19+\i,0.55); \draw[thick] (\i) -- (-20+\i,0.55); \draw[gray!60] (\i) -- (-21+\i,0.55); \draw (\i) -- (-22+\i,0.55); } \foreach \i in {45,46,47,48,49,50,51,52,53} {\path (-40+\i,3) coordinate (\i); \node[above, gray!60] at (\i) {\i}; \draw[gray!60] (\i) -- (-38+\i,2.05); \draw[gray!60] (\i) -- (-39+\i,2.05); \draw[gray!60] (\i) -- (-40+\i,2.05); \draw[gray!60] (\i) -- (-41+\i,2.05); \draw[gray!60] (\i) -- (-42+\i,2.05); } \foreach \i in {67,68,69,70,71} {\path (-60+\i,4.5) coordinate (\i); \node[above, gray!60] at (\i) {\i}; \draw[gray!60] (\i) -- (-58+\i,3.55); \draw[gray!60] (\i) -- (-59+\i,3.55); \draw[gray!60] (\i) -- (-60+\i,3.55); \draw[gray!60] (\i) -- (-61+\i,3.55); \draw[gray!60] (\i) -- (-62+\i,3.55); } \end{tikzpicture} \caption{The Hasse diagram of the induced subposet $\tilde{P}_{\{9,10,11\}}$ of $P_{\{18,19,20,21,22\}}$}\label{fig:tilde9} \end{figure} Now, we restate Proposition \ref{prop:FMS} by using the poset we constructed. \begin{prop}\label{prop:lower} Let $\lambda$ be a self-conjugate partition. Then $\lambda$ is an $(s,s+1,s+2)$-core partition if and only if the set $MD(\lambda)$ is a lower ideal of $\tilde{P}_{\{s,s+1,s+2\}}$ with no elements $h_1,h_2$ such that $h_1+h_2\in \{2s,2s+2,2s+4\}$. \end{prop} \begin{example} For a self-conjugate $(8,9,10)$-core partition $\lambda=(6,3,3,1,1,1)$, the set $MD(\lambda)=\{11,3,1\}$ of main diagonal hook lengths is a lower ideal of $\tilde{P}_{\{8,9,10\}}$ with no elements $h_1,h_2$ such that $h_1+h_2\in\{16,18,20\}$. \end{example} For convenience, we add dotted edges connecting elements $h_1$, $h_2$ in the Hasse diagram of $\tilde{P}_{\{s,s+1,s+2\}}$ with $h_1+h_2\in \{2s,2s+2,2s+4\}$ so that at most one end point of each dotted edge can be selected for the lower ideal corresponding to an $(s,s+1,s+2)$-core. From now on, we use the \emph{modified diagram} for $\tilde{P}_{\{s,s+1,s+2\}}$ as in Figure \ref{fig:modified8}. \begin{figure}[ht!] 
\centering \begin{tikzpicture}[scale=0.6] \foreach \i in {1,3,5,7} {\path (\i,0) coordinate (\i); \node[above] at (\i) {\i};} \foreach \i in {11,13,15} {\path (17-\i,-1.5) coordinate (\i); \node[above] at (\i) {\i};} \foreach \i in {21,23} {\path (-18+\i,1.5) coordinate (\i); \node[above] at (\i) {\i}; \draw (\i) -- (-16+\i,0.75); \draw (\i) -- (-18+\i,0.75); \draw (\i) -- (-20+\i,0.75); } \path (4,-2.25) coordinate (31); \node[below] at (31) {31}; \draw (31)-- (15) (31) -- (13) (31) -- (11); \foreach \i in {1,3,5} \draw[dotted] (\i)-- (2,-0.75); \foreach \i in {3,5,7} \draw[dotted] (\i)-- (4,-0.75); \foreach \i in {5,7} \draw[dotted] (\i)-- (6,-0.75); \end{tikzpicture} \qquad \qquad \quad\begin{tikzpicture}[scale=0.6] \foreach \i in {1,3,5,7} {\path (\i,0) coordinate (\i); \node[above] at (\i) {\i};} \foreach \i in {13,15,17} {\path (19-\i,-1.5) coordinate (\i); \node[above] at (\i) {\i};} \foreach \i in {23,25} {\path (-20+\i,1.5) coordinate (\i); \node[above] at (\i) {\i}; \draw (\i) -- (-18+\i,0.75); \draw (\i) -- (-20+\i,0.75); \draw (\i) -- (-22+\i,0.75); } \path (4,-2.25) coordinate (35); \node[below] at (35) {35}; \draw (35)-- (15) (35) -- (13) (35) -- (17); \foreach \i in {1,3,5} \draw[dotted] (\i)-- (2,-0.75); \foreach \i in {3,5,7} \draw[dotted] (\i)-- (4,-0.75); \foreach \i in {5,7} \draw[dotted] (\i)-- (6,-0.75); \end{tikzpicture} \caption{The modified diagrams of $\tilde{P}_{\{8,9,10\}}$ and $\tilde{P}_{\{9,10,11\}}$}\label{fig:modified8} \end{figure} We note that $$Q\cong P_{\{\lfloor \frac{s}{2}\rfloor+1,\lfloor \frac{s}{2}\rfloor+2,\lfloor \frac{s}{2}\rfloor+3 \}}\quad \text{and} \quad R\cong P_{\{\lfloor \frac{s}{2}\rfloor,\lfloor \frac{s}{2}\rfloor+1,\lfloor \frac{s}{2}\rfloor+2 \}},$$ and therefore, $\tilde{P}_{\{2s,2s+1,2s+2\}}$ is equivalent to $\tilde{P}_{\{2s+1,2s+2,2s+3\}}$. Moreover, it is not hard to notice that the modified diagrams of $\tilde{P}_{\{2s,2s+1,2s+2\}}$ and $\tilde{P}_{\{2s+1,2s+2,2s+3\}}$ are also equivalent. Thus, we have the following proposition. \begin{prop}\label{prop:evenodd} For a positive integer $s$, the number of self-conjugate $(2s,2s+1,2s+2)$-cores is equal to the number of self-conjugate $(2s+1,2s+2,2s+3)$-cores. \end{prop} \section{Counting self-conjugate simultaneous core partitions}\label{sec:counting} In this section, we give a formula for the number of symmetric Motzkin paths, and then show that the number of self-conjugate $(2s,2s+1,2s+2)$-cores and the number of symmetric Motzkin paths of length $2s$ satisfy the same recurrence relation. \subsection{Counting symmetric Motzkin paths} For a fixed $i$, there are $C_i \binom{n}{2i}$ Motzkin paths of length $n$ with exactly $i$ up steps, since there are $C_i$ Dyck paths with $i$ up steps and $\binom{n}{2i}$ ways to insert $n-2i$ flat steps into such a Dyck path. We say that a Motzkin path of length $n$ is \emph{symmetric} if its reflection about the line $x=\frac{n}{2}$ is itself. Let $S_n$ denote the number of symmetric Motzkin paths of length $n$. For example, $S_0=1,S_1=1,S_2=2,S_3=2,S_4=5$. \begin{figure}[ht!] 
\centering \begin{tikzpicture}[scale=0.5] \draw (0,0) -- (1,1) -- (2,0) -- (3,1) -- (4,0); \draw[-|] (5+0,0) -- (5+1,1); \draw[-|] (5+2,2) -- (5+3,1); \draw (5+1,1) -- (5+2,2) (5+3,1) -- (5+4,0); \draw (10+0,0) -- (10+1,0) -- (10+2,1) -- (10+3,0) -- (10+4,0); \draw[-|] (15+0,0) -- (15+1,1) -- (15+2,1); \draw (15+2,1)-- (15+3,1) -- (15+4,0); \draw[-|] (20+0,0)-- (20+1,0); \draw[-|] (20+1,0)-- (20+2,0); \draw[-|] (20+2,0)-- (20+3,0); \draw (20+3,0)-- (20+4,0); \end{tikzpicture} \caption{Symmetric Motzkin paths of length $4$} \end{figure} We note that the $(n+1)$st step of any symmetric Motzkin path of length $2n+1$ must be a flat step, and therefore, there is a natural bijection between symmetric Motzkin paths of length $2n+1$ and those of length $2n$, so that $S_{2n+1}=S_{2n}$. Now, we count the number of symmetric Motzkin paths. \begin{prop}\label{prop:symmetric} The number of symmetric Motzkin paths of length $n$ is $$S_n=\sum_{i\geq0} \binom{\lfloor \frac{n}{2}\rfloor}{i}\binom{i}{\lfloor \frac{i}{2} \rfloor}.$$ \end{prop} \begin{proof} It is enough to enumerate symmetric Motzkin paths of length $2n$. Note that a symmetric Motzkin path of length $2n$ with $i$ up steps has $2n-2i$ flat steps. Suppose we are given a symmetric Dyck path with $i$ up steps. To obtain a symmetric Motzkin path of length $2n$ with $i$ up steps, it is enough to consider inserting $n-i$ flat steps into the first half of the given symmetric Dyck path. Since there are $\binom{i}{\lfloor i/2\rfloor}$ symmetric Dyck paths with $i$ up steps as in Remark \ref{rem:symmetric}, and there are $\binom{n}{i}$ ways to insert flat steps, the number of symmetric Motzkin paths of length $2n$ with $i$ up steps is $\binom{n}{i}\binom{i}{\lfloor i/2 \rfloor}$. Therefore, $S_{2n}=S_{2n+1}=\sum_{i\geq0}\binom{n}{i}\binom{i}{\lfloor i/2 \rfloor}$. \end{proof} Now, we consider a recurrence relation for $S_{2n}$ involving $M_{n}$. For a symmetric Motzkin path $P=P_1P_2\cdots P_{2n}$ of length $2n$, where $P_i$ denotes the $i$th step, let $k\leq n$ be the largest number such that $P$ meets the $x$-axis at $(k,0)$. We note that if $k=n$, then both $P_1P_2\cdots P_{n}$ and $P_{n+1}P_{n+2}\cdots P_{2n}$ are Motzkin paths of length $n$ which are symmetric to each other. On the other hand, if $k<n$, then $P_{k+1}=U$, $P_{2n-k}=D$, the subpath $P_{k+2}P_{k+3}\cdots P_{2n-k-1}$ is a symmetric Motzkin path of length $2n-2k-2$, and the two subpaths $P_1P_2\cdots P_k$ and $P_{2n-k+1}P_{2n-k+2}\cdots P_{2n}$ are Motzkin paths of length $k$ which are symmetric to each other. Hence, we have a relation between $S_{2n}$ and $M_n$: \begin{equation} \label{eqn:re} S_{2n}=M_n+\sum_{k=0}^{n-1}S_{2n-2k-2}M_{k}. \end{equation} Equation (\ref{eqn:re}) and a closed formula for $S_{2n}$ can also be found in the OEIS as sequence A005773 \cite{OEIS}. \subsection{Counting self-conjugate $(2s,2s+1,2s+2)$-cores} The following lemma plays an important role in obtaining a recurrence relation for the number of self-conjugate $(2s,2s+1,2s+2)$-cores. \begin{lem} \label{lem:with2s-1} Let $s$ be a positive integer. The number of self-conjugate $(2s,2s+1,2s+2)$-cores $\lambda$ with $2s-1\in MD(\lambda)$ is equal to the number of self-conjugate $(2s-2,2s-1,2s)$-cores. \end{lem} \begin{proof} By Proposition \ref{prop:lower}, there is a bijection between self-conjugate $(2s,2s+1,2s+2)$-cores $\lambda$ with $2s-1\in MD(\lambda)$ and lower ideals $I$ of $\tilde{P}_{\{2s,2s+1,2s+2\}}$ that contain $2s-1$ and contain no elements $h_1,h_2$ such that $h_1+h_2\in\{4s,4s+2,4s+4\}$. Thus, it is enough to consider lower ideals of the first diagram in Figure \ref{fig:modified2s}. 
To prove the lemma, we construct a bijection $\phi$ between lower ideals $I$ of the first diagram and lower ideals $J$ of the second diagram in Figure \ref{fig:modified2s}, since the second diagram is equivalent to the modified diagram of $\tilde{P}_{\{2s-2,2s-1,2s\}}$. \begin{figure}[ht!] \centering \begin{tikzpicture}[scale=0.65] \foreach \i in {1,3,5,7,9,11} {\path (\i,0) coordinate (\i);} \node[above] at (1) {$1$}; \node[above] at (3) {$3$}; \node[above] at (5) {$\cdots$}; \node[above] at (7) {$\cdots$}; \node[above] at (9) {$2s-3$}; \node[above] at (11) {$\mathbf{2s-1}$}; \path (2,1.5) coordinate (2); \node[above] at (2) {$4s+5$}; \draw (2) -- (1,0.7); \draw (2) -- (3,0.7); \draw (2) -- (5,0.7); \path (4,1.5) coordinate (4); \node[above] at (4) {$\cdots$}; \draw (4) -- (3,0.7); \draw (4) -- (5,0.7); \draw (4) -- (7,0.7); \path (6,1.5) coordinate (6); \node[above] at (6) {$\cdots$}; \draw (6) -- (5,0.7); \draw (6) -- (7,0.7); \draw (6) -- (9,0.7); \path (8,1.5) coordinate (6); \node[above] at (6) {$6s-1$}; \draw (6) -- (7,0.7); \draw (6) -- (9,0.7); \path (4,3) coordinate (8); \node[above] at (8) {$\cdots$}; \draw (8) -- (2,2.2); \draw (8) -- (4,2.2); \draw (8) -- (6,2.2); \path (6,3) coordinate (10); \node[above] at (10) {$\cdots$}; \draw (10) -- (4,2.2); \draw (10) -- (6,2.2); \draw (10) -- (8,2.2); \foreach \i in {13,15,17,19,21} {\path (24-\i,-1.5) coordinate (\i);} \node[above] at (21) {$4s-1$}; \node[above] at (19) {$\cdots$}; \node[above] at (17) {$2s+7$}; \node[above,gray!60] at (15) {$2s+5$}; \node[above,gray!60] at (13) {$2s+3$}; \path (5,-2.25) coordinate (31); \node[below] at (31) {$\cdots$}; \draw (31)-- (21) (17) -- (31) -- (19); \foreach \i in {1,3,5} \draw[dotted] (\i)-- (3,-0.9); \foreach \i in {3,5,7} \draw[dotted] (\i)-- (5,-0.9); \foreach \i in {5,7,9} \draw[dotted] (\i)-- (7,-0.9); \foreach \i in {11} \draw[dotted] (\i)-- (9,-0.9); \foreach \i in {11} \draw[dotted] (\i)-- (11,-0.9); \end{tikzpicture} \quad \begin{tikzpicture}[scale=0.65] \node[above] at (-0.4,0) {$\cong$}; \foreach \i in {1,3,5,7,9} {\path (\i,0) coordinate (\i);} \node[above] at (1) {$1$}; \node[above] at (3) {$3$}; \node[above] at (5) {$\cdots$}; \node[above] at (7) {$\cdots$}; \node[above] at (9) {$2s-3$}; \path (2,1.5) coordinate (2); \node[above] at (2) {$4s+5$}; \draw[dotted] (2) -- (1,0.7); \draw[dotted] (2) -- (3,0.7); \draw[dotted] (2) -- (5,0.7); \path (4,1.5) coordinate (4); \node[above] at (4) {$\cdots$}; \draw[dotted] (4) -- (3,0.7); \draw[dotted] (4) -- (5,0.7); \draw[dotted] (4) -- (7,0.7); \path (6,1.5) coordinate (6); \node[above] at (6) {$\cdots$}; \draw[dotted] (6) -- (5,0.7); \draw[dotted] (6) -- (7,0.7); \draw[dotted] (6) -- (9,0.7); \path (8,1.5) coordinate (6); \node[above] at (6) {$6s-1$}; \draw[dotted] (6) -- (7,0.7); \draw[dotted] (6) -- (9,0.7); \path (4,3) coordinate (8); \node[above] at (8) {$\cdots$}; \draw (8) -- (2,2.2); \draw (8) -- (4,2.2); \draw (8) -- (6,2.2); \path (6,3) coordinate (10); \node[above] at (10) {$\cdots$}; \draw (10) -- (4,2.2); \draw (10) -- (6,2.2); \draw (10) -- (8,2.2); \foreach \i in {17,19,21} {\path (24-\i,-1.5) coordinate (\i);} \node[above] at (21) {$4s-13$}; \node[above] at (19) {$\cdots$}; \node[above] at (17) {$2s+7$}; \path (5,-2.25) coordinate (31); \node[below] at (31) {$\cdots$}; \draw (31)-- (21) (17) -- (31) -- (19); \foreach \i in {1,3,5} \draw (\i)-- (3,-0.9); \foreach \i in {3,5,7} \draw (\i)-- (5,-0.9); \foreach \i in {5,7,9} \draw (\i)-- (7,-0.9); \end{tikzpicture} \caption{The modified diagram of 
$\tilde{P}_{\{2s,2s+1,2s+2\}}$ for ideals having $2s-1$}\label{fig:modified2s} \end{figure} If $I$ is a lower ideal of the first diagram, then $I$ satisfies the following: \begin{itemize} \item $I$ contains $2s-1$. \item For $4s+5\leq h\leq 6s-1$, $h\in I$ implies $h-4s-4, h-4s-2, h-4s\in I$. \item For $2s+7\leq h\leq 4s-1$, $h\in I$ implies $4s-h, 4s-h+2, 4s-h+4\not\in I$. \end{itemize} Now, we construct a corresponding set $\phi(I)$ from $I$ as follows. \begin{itemize} \item For each $h\in I$ with $4s+5\leq h\leq 6s-1$, delete $h-4s-4, h-4s-2, h-4s$ from $I$. \item For each $h\in I$ with $2s+7\leq h\leq 4s-1$, add $4s-h, 4s-h+2, 4s-h+4$ to $I$. \item Delete $2s-1$ from the set. \end{itemize} Then $\phi(I)$ is a lower ideal of the poset structure defined by the second diagram in Figure \ref{fig:modified2s}, and it is easy to check that $\phi$ is a bijection. \end{proof} \begin{example} For self-conjugate $(8,9,10)$-cores $\lambda$ such that $7\in MD(\lambda)$, let $I_1, I_2,\dots,I_{13}$ be their corresponding lower ideals of $\tilde{P}_{\{8,9,10\}}$. Now, we list $I_i$ and $J_i=\phi(I_i)$ for $i=1,2,\dots,13$, where $\phi$ is the bijection defined in the proof of Lemma \ref{lem:with2s-1}.\\ $\begin{array}{lr} I_1=\{7\}\\ I_3=\{3,7\}\\ I_5=\{1,3,7\}\\ I_7=\{3,5,7\}\\ I_9=\{1,3,5,7,21\}\\ I_{11}=\{3,5,7,23\} \\ I_{13}=\{7,15\} \end{array} $ $\begin{array}{lr} J_1=\emptyset\\ J_3=\{3\}\\ J_5=\{1,3\} \\ J_7=\{3,5\}\\ J_9=\{21\}\\ J_{11}=\{23\} \\ J_{13}=\{1,3,5,15\} \end{array} $ \quad\qquad $\begin{array}{lr} I_2=\{1,7\}\\ I_4=\{5,7\}\\ I_6=\{1,5,7\}\\ I_8=\{1,3,5,7\}\\ I_{10}=\{1,3,5,7,21,23\}\\ I_{12}=\{1,3,5,7,23\} \end{array} $ $\begin{array}{lr} J_2=\{1\}\\ J_4=\{5\}\\ J_6=\{1,5\}\\ J_8=\{1,3,5\}\\ J_{10}=\{21,23\}\\ J_{12}=\{1,23\} \end{array} $\\ We note that for each $i$, $I_i$ is an ideal of the first diagram and $J_i$ is an ideal of the second diagram in the following figure. \begin{figure}[ht!] 
\centering \begin{tikzpicture}[scale=0.65] \foreach \i in {5,7,9,11} {\path (\i,0) coordinate (\i);} \node[above] at (5) {$1$}; \node[above] at (7) {$3$}; \node[above] at (9) {$5$}; \node[above] at (11) {$\mathbf{7}$}; \path (6,1.5) coordinate (6); \node[above] at (6) {$21$}; \draw (6) -- (5,0.7); \draw (6) -- (7,0.7); \draw (6) -- (9,0.7); \path (8,1.5) coordinate (6); \node[above] at (6) {$23$}; \draw (6) -- (7,0.7); \draw (6) -- (9,0.7); \foreach \i in {13,15,17} {\path (24-\i,-1.5) coordinate (\i);} \node[above] at (17) {$15$}; \node[above,gray!60] at (15) {$13$}; \node[above,gray!60] at (13) {$11$}; \foreach \i in {5,7,9} \draw[dotted] (\i)-- (7,-0.8); \foreach \i in {11} \draw[dotted] (\i)-- (9,-0.8); \foreach \i in {11} \draw[dotted] (\i)-- (11,-0.8); \end{tikzpicture} \quad \begin{tikzpicture}[scale=0.65] \node[above] at (3.6,0) {$\cong$}; \foreach \i in {5,7,9} {\path (\i,0) coordinate (\i);} \node[above] at (5) {$1$}; \node[above] at (7) {$3$}; \node[above] at (9) {$5$}; \path (6,1.5) coordinate (6); \node[above] at (6) {$21$}; \draw[dotted] (6) -- (5,0.7); \draw[dotted] (6) -- (7,0.7); \draw[dotted] (6) -- (9,0.7); \path (8,1.5) coordinate (6); \node[above] at (6) {$23$}; \draw[dotted] (6) -- (7,0.7); \draw[dotted] (6) -- (9,0.7); \foreach \i in {17} {\path (24-\i,-1.5) coordinate (\i);} \node[above] at (17) {$15$}; \foreach \i in {5,7,9} \draw (\i)-- (7,-0.8); \end{tikzpicture} \caption{The modified diagram of $\tilde{P}_{\{8,9,10\}}$ for ideals $I$ with $7\in I$}\label{fig:exam} \end{figure} \end{example} The following proposition is a generalization of Lemma \ref{lem:with2s-1}. \begin{prop} \label{prop:no2s-1} Let $s$ and $k$ be positive integers such that $k\leq s$. \begin{enumerate} \item[(a)] The number of self-conjugate $(2s,2s+1,2s+2)$-cores $\lambda$ satisfying $$2k-1\in MD(\lambda) \quad \text{and} \quad 2k+1,2k+3,\dots, 2s-1\not\in MD(\lambda)$$ is the number of self-conjugate $(2k-2,2k-1,2k)$-cores multiplied by $M_{s-k}$. \item[(b)] The number of self-conjugate $(2s,2s+1,2s+2)$-cores $\lambda$ with $1,3,\dots, 2s-1\not\in MD(\lambda)$ is $M_{s}$. \end{enumerate} \end{prop} \begin{proof} We consider the modified diagram of $\tilde{P}_{\{2s,2s+1,2s+2\}}$ with restrictions in Figure \ref{fig:modified2s2}. \begin{figure}[ht!] 
\centering \begin{tikzpicture}[scale=0.72] \foreach \i in {1,3,5,7,9,11,13,15,17,19,21} {\path (\i,0) coordinate (\i);} \node[above] at (1) {\small$1$}; \node[above] at (3) {\small$3$}; \node[above] at (5) {\small$\cdots$}; \node[above] at (7) {\small$\cdots$}; \node[above] at (9) {\small$2k-3$}; \node[above] at (11) {\small$\mathbf{2k-1}$}; \node[above,gray!60] at (13) {\small$2k+1$}; \node[above,gray!60] at (15) {\small$2k+3$}; \node[above,gray!60] at (17) {\small$\cdots$}; \node[above,gray!60] at (19) {\small$2s-3$}; \node[above,gray!60] at (21) {\small$2s-1$}; \path (2,1.5) coordinate (2); \node[above] at (2) {\small$4s+5$}; \draw (2) -- (1,0.7); \draw (2) -- (3,0.7); \draw (2) -- (5,0.7); \path (4,1.5) coordinate (4); \node[above] at (4) {\small$\cdots$}; \draw (4) -- (3,0.7); \draw (4) -- (5,0.7); \draw (4) -- (7,0.7); \path (6,1.5) coordinate (6); \node[above] at (6) {\small$\cdots$}; \draw (6) -- (5,0.7); \draw (6) -- (7,0.7); \draw (6) -- (9,0.7); \path (8,1.5) coordinate (6); \node[above] at (6) {\small$4s+2k-1$}; \draw (6) -- (7,0.7); \draw (6) -- (9,0.7); \path (4,3) coordinate (8); \node[above] at (8) {\small$\cdots$}; \draw (8) -- (2,2.2); \draw (8) -- (4,2.2); \draw (8) -- (6,2.2); \path (6,3) coordinate (10); \node[above] at (10) {\small$\cdots$}; \draw (10) -- (4,2.2); \draw (10) -- (6,2.2); \draw (10) -- (8,2.2); \foreach \i in {111,113,115,117,119,121,123,125,127} {\path (130-\i,-1.5) coordinate (\i);} \node[above] at (127) {\small$4s-1$}; \node[above] at (125) {\small$\cdots$}; \node[above] at (123) {\small$4s-2k+7$}; \node[above] at (115) {\small$4s-2k-1$}; \node[above] at (113) {\small$\cdots$}; \node[above] at (111) {\small$2s+3$}; \path (5,-2.25) coordinate (31); \node[below] at (31) {\small$\cdots$}; \draw (31)-- (127) (123) -- (31) -- (125); \path (17,-2.25) coordinate (33); \node[below] at (33) {\small$\cdots$}; \draw (33)-- (111) (113) -- (33) -- (115); \foreach \i in {1,3,5} \draw[dotted] (\i)-- (3,-0.9); \foreach \i in {3,5,7} \draw[dotted] (\i)-- (5,-0.9); \foreach \i in {5,7,9} \draw[dotted] (\i)-- (7,-0.9); \foreach \i in {9,11,13} \draw[dotted] (11)-- (\i,-0.9); \foreach \i in {13,15,17} \draw[dotted] (\i)-- (15,-0.9); \foreach \i in {15,17,19} \draw[dotted] (\i)-- (17,-0.9); \foreach \i in {17,19,21} \draw[dotted] (\i)-- (19,-0.9); \end{tikzpicture} \caption{The modified diagram of $\tilde{P}_{\{2s,2s+1,2s+2\}}$ with restrictions}\label{fig:modified2s2} \end{figure} \begin{enumerate} \item[(a)] On the modified diagram with restrictions, the left-hand side of $2k-1$ is equivalent to the modified diagram of $\tilde{P}_{\{2k-2,2k-1,2k\}}$ as we showed in Lemma \ref{lem:with2s-1}, and the right-hand side of $2k-1$ is equivalent to the Hasse diagram of the poset $P_{\{2s-2k-2,2s-2k-1,2s-2k\}}$. Hence, by Corollary \ref{cor:Motzkin}, the number of lower ideals $I$ satisfying $2k-1\in I$ and $2k+1,2k+3,\dots,2s-1\not \in I$ is $M_{s-k}$ times the number of self-conjugate $(2k-2,2k-1,2k)$-cores. \item[(b)] If $1,3,\dots, 2s-1\not\in I$, then $I$ is a lower ideal of a subposet of $\tilde{P}_{\{2s,2s+1,2s+2\}}$ which is equivalent to $P_{\{s,s+1,s+2\}}$. Hence, by Corollary \ref{cor:Motzkin}, the number of lower ideals $I$ with $1,3,\dots,2s-1\not \in I$ is $M_{s}$. \end{enumerate} \end{proof} Now, we are ready to prove our main theorem. 
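Before giving the proof, the reader may find a quick numerical check reassuring. The following Python sketch (an illustration only, not part of the argument; the helper names are ours) computes $M_s$ and $S_n$ from Corollary~\ref{cor:Motzkin} and Proposition~\ref{prop:symmetric} and verifies the recurrence (\ref{eqn:re}) for small values.
\begin{verbatim}
# Illustration only: check S_{2n} = M_n + sum_{k=0}^{n-1} S_{2n-2k-2} M_k.
from math import comb

def motzkin(n):        # M_n, the n-th Motzkin number
    return sum(comb(n, 2 * i) * comb(2 * i, i) // (i + 1) for i in range(n // 2 + 1))

def sym_motzkin(n):    # S_n, the number of symmetric Motzkin paths of length n
    half = n // 2
    return sum(comb(half, i) * comb(i, i // 2) for i in range(half + 1))

for n in range(1, 9):
    lhs = sym_motzkin(2 * n)
    rhs = motzkin(n) + sum(sym_motzkin(2 * n - 2 * k - 2) * motzkin(k) for k in range(n))
    assert lhs == rhs

print([sym_motzkin(n) for n in range(7)])   # [1, 1, 2, 2, 5, 5, 13]
print([motzkin(n) for n in range(7)])       # [1, 1, 2, 4, 9, 21, 51]
\end{verbatim}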
\begin{thm} For a positive integer $s$, the number of self-conjugate $(s,s+1,s+2)$-cores is $$\sum_{i\geq0}\binom{\lfloor \frac{s}{2}\rfloor}{i}\binom{i}{\lfloor \frac{i}{2} \rfloor},$$ which is also the number of symmetric Motzkin paths of length $s$. \end{thm} \begin{proof} Let $a_s$ denote the number of self-conjugate $(s,s+1,s+2)$-cores. From Proposition \ref{prop:evenodd}, we have $a_{2s+1}=a_{2s}$. Hence, it is enough to show that $a_{2s}=S_{2s}$. For $1\leq k \leq s$, let $A_k$ be the set of self-conjugate $(2s,2s+1,2s+2)$-cores $\lambda$ that satisfy $$2k-1\in MD(\lambda) \quad \text{and} \quad 2k+1,2k+3,\dots, 2s-1\not\in MD(\lambda),$$ and let $A_0$ be the set of self-conjugate $(2s,2s+1,2s+2)$-cores $\lambda$ with $2i-1\not \in MD(\lambda)$ for $1\leq i\leq s$. Then, $A_0\cup A_1\cup \cdots \cup A_s$ is the set of self-conjugate $(2s,2s+1,2s+2)$-cores and $$a_{2s}=|A_0|+|A_1|+\cdots+|A_s|.$$ From Proposition \ref{prop:no2s-1}, we have $|A_0|=M_s$ and $|A_k|=a_{2k-2}M_{s-k}$ for $1\leq k \leq s$, and therefore, $$a_{2s}=M_s+\sum_{k=1}^{s}a_{2k-2}M_{s-k}=M_s+\sum_{k=0}^{s-1}a_{2s-2k-2}M_{k}.$$ Since this recurrence for $a_{2s}$ is identical to (\ref{eqn:re}) and $a_0=S_0=1$, we conclude by Proposition \ref{prop:symmetric} that $a_{2s}=S_{2s}=\sum_{i\geq0}\binom{s}{i}\binom{i}{\lfloor i/2 \rfloor}$. \end{proof} Encouraged by this result, we offer the following more general conjecture. \begin{conj} For given positive integers $s$ and $k$, the number of self-conjugate $(s,s+1,\dots,s+k)$-cores is equal to the number of symmetric $(s,k)$-generalized Dyck paths. \end{conj} \bibliographystyle{plain}
{ "timestamp": "2019-04-05T02:07:54", "yymm": "1904", "arxiv_id": "1904.02313", "language": "en", "url": "https://arxiv.org/abs/1904.02313", "abstract": "We are concerned with counting self-conjugate $(s,s+1,s+2)$-core partitions. A Motzkin path of length $n$ is a path from $(0,0)$ to $(n,0)$ which stays above the $x$-axis and consists of the up $U=(1,1)$, down $D=(1,-1)$, and flat $F=(1,0)$ steps. We say that a Motzkin path of length $n$ is symmetric if its reflection about the line $x=n/2$ is itself. In this paper, we show that the number of self-conjugate $(s,s+1,s+2)$-cores is equal to the number of symmetric Motzkin paths of length $s$, and give a closed formula for this number.", "subjects": "Combinatorics (math.CO)", "title": "Counting self-conjugate (s,s+1,s+2)-core partitions", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9867771759145342, "lm_q2_score": 0.7185943925708561, "lm_q1q2_score": 0.7090925453290895 }
https://arxiv.org/abs/1208.5521
Small sets of reals through the prism of fractal dimensions
A separable metric space X is an H-null set if any uniformly continuous image of X has Hausdorff dimension zero. Upper H-null, directed P-null and P-null sets are defined likewise, with other fractal dimensions in place of Hausdorff dimension. We investigate these sets and show that in 2^\omega{} they coincide, respectively, with strongly null, meager-additive, T' and null-additive sets. Some consequences: A subset of 2^\omega{} is meager-additive if and only if it is E-additive; if f:2^\omega->2^\omega{} is continuous and X is meager-additive, then so is f(X), and likewise for null-additive and T'-sets.
\section{Introduction} \label{sec:intro} \subsection*{Strong measure zero and Hausdorff dimension} By the definition due to Borel, a metric space $X$ has \emph{strong measure zero} (\smz) if for any sequence $\seq{\eps_n}$ of positive numbers there is a cover $\{U_n\}$ of $X$ such that $\diam U_n\leq\eps_n$ for all $n$. By the famous Galvin--Mycielski--Solovay Theorem~\cite{GMS}, a subset $X$ of the line has \smz{} if and only if there is no meager set $M$ such that $X+M=\Rset$. The same theorem holds for subsets of the Cantor set $\Cset$, as proved e.g.~in~\cite[1.14]{MR2177439}. It is almost obvious that a \smz{} space has Hausdorff dimension zero. Since \smz{} is preserved by uniformly continuous mappings, it follows that any uniformly continuous image of a \smz{} space has Hausdorff dimension zero. It is not difficult to prove that the latter property actually characterizes \smz{}. To be more precise, denote by $\hdim$ the Hausdorff dimension and say that a metric space $X$ is \hnull{} if $\hdim f(X)=0$ for each uniformly continuous mapping of $X$ into another metric space. Then a metric space is \smz{} if and only if it is \hnull{}, and thus the Galvin--Mycielski--Solovay Theorem for $\Cset$ can be phrased ``$X\subs\Cset$ is \hnull{} if and only if there is no meager set $M$ such that $X+M=\Cset$.'' The essence of this theorem can be traced back to Besicovitch's papers~\cite{MR1555386,MR1555389}. In summary, we thus have three essentially different descriptions of \smz{} sets in $\Cset$: ``combinatorial'' (Borel's definition), ``fractal'' (by Hausdorff dimension) and ``algebraic'' (there is no meager set $M$ such that $X+M=\Cset$). \subsection*{Small spaces from other fractal dimensions} One may, just out of curiosity, investigate spaces that are defined by the same pattern as \hnull{} spaces, replacing Hausdorff dimension with some other fractal dimension. For instance, for the packing dimension $\pdim$: call $X$ \upnull{} if $\pdim f(X)=0$ for each uniformly continuous mapping of $X$ into another metric space. Besides \hnull{} and \upnull{} spaces we consider also \uhnull{} and \dpnull{} spaces arising from the so-called upper Hausdorff dimension and directed lower packing dimension, respectively. The detailed exposition of all four dimensions and the fractal measures behind them is provided below. Let us point out that all of these small sets are consistently countable: Recall the \emph{Borel Conjecture}. It is the statement ``\emph{Every \smz{} set is countable}''. As proved by Laver~\cite{MR0422027}, the Borel Conjecture is consistent. As proved in Theorem~\ref{hnullsmz}, every \hnull{} space is \smz{}. Thus it is consistent that \hnull{} sets, and \emph{a fortiori} \uhnull{}, \dpnull{} and \upnull{} sets, are countable. On the other hand, under the Continuum Hypothesis there are uncountable \upnull{} sets. \subsection*{Meager-additive sets and the like} Let $\Cset$ denote the usual Cantor cube. The coordinatewise addition makes $\Cset$ a compact topological group. Denote by $\mu$ its Haar measure; it is the usual product measure. Provide $\Cset$ with the usual least difference metric. There are three common \si ideals on $\Cset$: $\MM$, the ideal of meager sets; $\NN$, the ideal of $\mu$-null sets; and $\EE$, the ideal generated by $\mu$-null $F_\sigma$-sets. Given two sets $A,B\subs\Cset$, their sum is defined by $A+B=\{a+b:a\in A,b\in B\}$. Recall that, given an ideal $\mc J$ on $\Cset$, a set $X\subs\Cset$ is termed \emph{$\mc J$-additive} if $X+J\in\mc J$ for all $J\in \mc J$. 
Thus we have notions of \emph{$\NN$-additive}, \emph{$\MM$-additive} and \emph{$\EE$-additive} sets. Say that $X\subs\Cset$ is \emph{strongly null} if there is no set $M\in\MM$ such that $X+M=\Cset$. These notions (except perhaps $\EE$-additivity) have received a lot of attention. Shelah~\cite{MR1324470} provided several combinatorial characterizations of $\NN$-additive and $\MM$-additive sets and proved that every $\NN$-additive set is $\MM$-additive, see also~\cite{MR1350295}. Yet another related notion, that of \Tprime{}-sets, was introduced and investigated by Nowik and Weiss~\cite{MR1905154}. They proved, in particular, the implications $\NN$-additive$\Rightarrow\Tprime\Rightarrow\MM$-additive. \subsection*{The match} The goal of the present paper is to prove that the five notions of the last paragraph match the notions based on fractal dimensions introduced in the next-to-last paragraph in a perhaps unexpected manner: \begin{thm}\label{bigthm} For subsets of $\Cset$, the following diagram holds. \end{thm} \begin{center}\renewcommand{\arraystretch}{1.4} \newcommand{\lra}{\Longrightarrow} \newcommand{\uda}{\Updownarrow} \begin{tabular}{ccccccc} \upnull & $\lra$ & \dpnull & $\lra$ & \uhnull & $\lra$ & \hnull\\ $\uda$ & & $\uda$ & & $\uda$ & &$\uda$\\ $\NN$-additive & $\lra$ & \Tprime & $\lra$ & $\MM$-additive & $\lra$ & strongly null\\ & & & &$\uda$\\ & & & &$\EE$-additive \end{tabular} \end{center} The upper line implications follow trivially from the definitions and the chain~\eqref{basicinequality} of inequalities between the respective fractal dimensions. Thus once the vertical equivalences are proved, the diagram is settled. The equivalences are the subject of Theorems~\ref{mainSN}, \ref{mainME}, \ref{mainNN} and \ref{mainTT}, proved below. \smallskip The paper is organized in eleven sections. The first ten sections form three parts. The preliminary part consists of sections~\ref{sec:intro}--\ref{sec:measures}. In section~\ref{sec:null} the four notions of smallness based on fractal dimensions are introduced and section~\ref{sec:measures} reviews the fractal measures behind the four dimensions. In the second part, consisting of sections~\ref{sec:hnull}--\ref{sec:dpnull}, elementary properties of the four types of smallness are established within the framework of separable metric spaces. In the third part, consisting of sections~\ref{sec:meager}--\ref{sec:TT}, we investigate further properties of the four kinds of small sets within the Cantor set $\Cset$ and in particular we prove the vertical equivalences from the above diagram. In the concluding section we provide some comments and list several open problems. Some common notation used throughout the paper includes $\abs{A}$ for the cardinality of a set $A$, $\Nset$ for the set of natural numbers, $[\Nset]^\Nset$ for the collection of infinite subsets of $\Nset$, and $\UPset$ for the family of nondecreasing unbounded sequences of natural numbers. \section{Sets of small fractal dimension} \label{sec:null} We first briefly describe the four kinds of fractal dimensions under consideration. More details and references are provided in the next section. Let $X$ be a metric space. \subsection*{Hausdorff dimensions} Hausdorff dimension is well-known. We shall denote it by $\hdim X$. The following modification of Hausdorff dimension, called the \emph{upper Hausdorff dimension}, can be derived from the Hausdorff dimension as follows: Let $\displaystyle X^\star$ denote the completion of $X$ and define $$ \uhdim X=\inf\{\hdim K:\text{$K$ is \si compact, }X\subs K\subs X^\star\}. 
$$ \subsection*{Packing dimensions} The \emph{covering number function} $N_X(\del)$ of a nonempty metric space $X$ is defined as the minimal number of sets of diameter at most $\del$ needed to cover $X$. The \emph{upper} and \emph{lower box dimensions} of $X$ are defined, respectively, by $\ubdim X =\varlimsup_{\del\to 0}\frac{\log N_X(\del)}{\abs{\log \del}}$ and $\lbdim X =\varliminf_{\del\to 0}\frac{\log N_X(\del)}{\abs{\log \del}}$. The \emph{upper packing dimension} of $X$ is defined by $$ \updim X= \inf\bigl\{\sup_n\ubdim X_n:\{X_n\}\text{ is a cover of $X$}\bigr\}. $$ The following dimension, akin to the so called lower packing dimension, occurs naturally in the investigation of cartesian products of fractal sets, see~\cite{ZinPack}. Write $X_n\upto X$ to denote that $\{X_n\}$ is an increasing sequence of sets with union $X$. $$ \dpdim X=\inf\bigl\{\sup_n\lbdim X_n:X_n\upto X\bigr\}. $$ The following chain of inequalities holds for any space $X$, see~\eqref{basicIneq} \begin{equation}\label{basicinequality} \hdim X\leq\uhdim X\leq\dpdim X\leq\updim X \end{equation} with examples showing that each of the inequalities may be strict. \subsection*{Small sets from fractal dimensions} Using a common pattern we define four notions of small sets arising from the four fractal dimensions. Say that $f$ is a mapping on a metric space $X$ if $f:X\to Y$, where $Y$ is a metric space. \begin{defn} Let $X$ be a separable metric space. Define $X$ to be \begin{itemyze} \item \upnull{} if $\updim f(X)=0$ for each uniformly continuous mapping $f$ on $X$, \item \dpnull{} if $\dpdim f(X)=0$ for each uniformly continuous mapping $f$ on $X$, \item \uhnull{} if $\uhdim f(X)=0$ for each uniformly continuous mapping $f$ on $X$, \item \hnull{} if $\hdim f(X)=0$ for each uniformly continuous mapping $f$ on $X$. \end{itemyze} \end{defn} The inequalities \eqref{basicinequality} yield the upper line of the Theorem~\ref{bigthm} diagram: \begin{equation}\label{basicinequality2} \text{\upnull}\implies \text{\dpnull}\implies \text{\uhnull}\implies \text{\hnull} \end{equation} It is straightforward from the definitions that all of the four properties are preserved by uniformly continuous mappings: \begin{prop} A uniformly continuous image of a \upnull{} set is \upnull{}. Analogous statements hold for \dpnull{}, \uhnull{} and \hnull{} sets. \end{prop} Each of the four notions is \si additive, i.e. for any $X$ the \upnull{} subsets of $X$ form a \si additive ideal and likewise for \dpnull{}, \uhnull{} and \hnull. This is an obvious consequence of the countable stability of the corresponding dimensions, except for \dpnull{}, for which it is nontrivial and will be proved in Corollary~\ref{dpnullideal}. \section{Review of fractal measures} \label{sec:measures} Before getting any further we have to review fractal measures that are behind the four fractal dimensions. Let $X$ be a space and $d$ its metric. If $A\subs X$, then $dA$ denotes the diameter of $A$ and if $\mc A$ is a family of subsets of $X$, then $d\mc A=\sup_{A\in\mc A} dA$. A closed ball of radius $r$ centered in $x$ is denoted $B(x,r)$. Let $\HH$ denote the set of all functions $h:[0,\infty)\to[0,\infty)$ that are nondecreasing, right-continuous, and satisfy $h(r)=0$ iff $r=0$. Elements of $\HH$ are called \emph{Hausdorff functions}. The following is the common ordering of $\HH$: $$ g\prec h\quad \overset{\mathrm{def}}{\equiv} \quad\lim_{r\to0+}\frac{h(r)}{g(r)}=0. $$ Given $s>0$, we shall write $h\prec s$ to abbreviate that $h\prec g_s$, where $g_s(r)=r^s$. 
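To illustrate the ordering, consider a standard example (included here only as an illustration): for $0<s<t$ we have $g_s\prec g_t$, since $g_t(r)/g_s(r)=r^{t-s}\to0$ as $r\to0+$. Moreover, the function $h(r)=r^s\abs{\log r}$ (modified away from $0$ so that it is nondecreasing) satisfies $g_t\prec h\prec g_s$ for every $0<t<s$, i.e.~$h\prec s$, since $h(r)/g_t(r)=r^{s-t}\abs{\log r}\to0$ and $g_s(r)/h(r)=1/\abs{\log r}\to0$. Thus the scale of Hausdorff functions properly refines the scale of the power functions $g_s$.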
Notice that for any sequence of $h_n\in\HH$ there is $h\in\HH$ such that $h\prec h_n$ for all $n$. Let $\tau$ be a pre-measure, i.e.~a monotone $[0,\infty]$-valued set function such that $\tau(\emptyset)=0$. We shall denote $\NN(\tau)=\{A\in\dom\tau:\tau(A)=0\}$ the family of negligible sets, and in case $\tau$ is not \si subadditive, $\NNs(\tau)$ denotes the \si ideal generated by $\NN(\tau)$. The following operation known as Munroe's \emph{Method I construction} assigns to any pre-measure $\tau$ the maximal \si additive measure majorized by $\tau$: $$ \tau^\mathrm{I}(E)=\inf\left\{\sum_{n\in\Nset}\tau(E_n): E\subs\bigcup_{n\in\Nset}E_n\right\}. $$ \subsection*{Hausdorff measure} If $\del>0$, a cover $\mc A$ of a set $E\subs X$ is termed a \emph{$\del$-cover} if $d\mc A\leq\del$. Fix $h\in\HH$. The \emph{$h$-dimensional Hausdorff measure} $\hm^h(E)$ of a set $E$ in a space $X$ is defined thus: For each $\delta>0$ set $$ \hm^h_\delta(E)= \inf\left\{\sum_{n\in\Nset}h(dE_n): \text{$\{E_n\}$ is a countable $\delta$-cover of $E$}\right\} $$ and put $ \hm^h(E)=\sup_{\delta>0}\hm^h_\delta(E). $ Properties of Hausdorff measures are well-known. We point out that $\hm^h$ (or rather its restriction to Borel sets) is a $G_\del$-regular Borel measure and the following facts. Recall that a countable cover of a set $E$ is termed a $\lambda$-cover if every point of $E$ is contained in infinitely many $U_n$'s. Reference:~\cite{MR0281862}. \begin{prop}\label{lambda} $\hm^h(E)=0$ if and only if $E$ admits a countable $\lambda$-cover $\{E_n\}$ such that $\sum_{n\in\Nset}h(dE_n)<\infty$. \end{prop} \begin{lem}\label{lemHaus} \begin{enum} \item If $\hm^h(X)<\infty$ and $h\prec g$, then $\hm^g(X)=0$. \item If $\hm^h(X)=0$, then there is $g\prec h$ such that $\hm^g(X)=0$. \end{enum} \end{lem} \subsection*{Upper Hausdorff measure} The following variation of $\hm^h$ plays an important role in our considerations. It is defined thus: For each $\delta>0$ set $$ \uhm^h_\delta(E)= \inf\left\{\sum_{n=0}^N h(dE_n): \text{$\{E_n:n\leq N\}$ is a finite $\delta$-cover of $E$}\right\} $$ and put $ \uhm_0^h(E)=\sup_{\delta>0}\uhm^h_\delta(E), $ so that the only difference from $\hm^h$ is that only finite covers are taken in account. This also makes $\uhm_0^h$ finitely additive, but not \si additive. Put $\uhm^h(E)=(\uhm_0^h)^\mathrm{I}(E)$. We list some properties of $\uhm_0^h$ and $\uhm^h$. Some of them will be utilized below and some are provided just to shed more light on the notion of upper Hausdorff measure. The routine proofs are omitted. \begin{lem}\label{lem1} \begin{enum} \item If $\uhm_0^h(E)<\infty$, then $E$ is totally bounded. \item $\uhm_0^h(E)=\uhm_0^h(\clos E)$. \item $\uhm_0^h(E)=\hm^h(E)$ if $E$ is compact. \item If $X$ is complete, $E\subs X$ and $E\in\NNs(\uhm_0^h)$, then there is a \si compact set $K\sups E$ such that $\hm^h(K)=0$. \item If $X$ is complete and $E\subs X$, then $\uhm^h(E)=\inf\{\hm^h(K):\text{$K\sups E$ is \si compact}\}$. \item In particular $\uhm^h(E)=\hm^h(E)$ if $E$ is \si compact. \item If $g\prec h$ and $\uhm^g(E)<\infty$, then $E\in\NNs(\uhm_0^h)$. \end{enum} \end{lem} We shall need the counterpart of~\ref{lambda} for $\uhm^h$ at several occasions. \begin{defn} Let $\seq{U_n}$ be a cover of a set $X$. Recall that $\seq{U_n}$ is called a $\gamma$-cover if each $x\in X$ belongs to all but finitely many $U_n$. 
Recall that a cover $\seq{U_n}$ of a set $X$ is called \emph{$\gamma$-groupable} if there is a partition $\Nset=I_0\cup I_1\cup I_2\cup\dots$ into finite sets (or intervals, which makes no difference) such that the sequence $\seq{\bigcup_{n\in I_j}U_n:j\in\Nset}$ is a $\gamma$-cover. The finite families $\{U_n:n\in I_j\}$ will occasionally be termed \emph{witnessing groups}. \end{defn} \begin{lem}\label{gammagr} $E\in\NNs(\uhm_0^h)$ if and only if $E$ has a $\gamma$-groupable cover $\seq{U_n}$ such that $\sum_{n\in\Nset}h(dU_n)<\infty$. \end{lem} \begin{proof} \Implies: Let $E_n\upto E$, $\uhm^h_0(E_n)=0$. For each $n$ let $\mc G_n$ be a finite cover of $E_n$ such that $\sum_{G\in\mc G_n}h(dG)<2^{-n}$. Put $\mc G=\bigcup_n\mc G_n$. Since $E_n\upto E$, every point of $E$ is covered by $\mc G_n$ for all but finitely many $n$; thus $\mc G$ is a cover of $E$ with witnessing groups $\mc G_n$, and $\sum_{G\in\mc G}h(dG)\leq\sum_n2^{-n}<\infty$. $\Leftarrow$: Let $\mc G_j$ be the witnessing groups. Put $E_k=\bigcap_{j\geq k}\bigcup\mc G_j$; since the sets $\bigcup\mc G_j$ form a $\gamma$-cover of $E$, we have $E\subs\bigcup_kE_k$. Fix $k$. The set $E_k$ is covered by each $\mc G_j$, $j\geq k$, and $\sum_{G\in\mc G_j}h(dG)$ is as small as needed if $j$ is large enough. Hence $\uhm^h_0(E_k)=0$. \end{proof} \subsection*{Box measures} We could develop the theory of \upnull{} and \dpnull{} spaces from packing measures. But since packing measures are rather unpleasant to work with, we make use of the following variations instead; they are directly related to the above definitions of packing dimensions and are easier to handle. Given $h\in\HH$, set $$ \ubox_0^h(E)=\limsup_{r\to0}N_E(r)\cdot h(r). $$ The \emph{$h$-dimensional box measure} of $E\subs X$ is defined by $\ubox^h(E)=(\ubox_0^h)^\mathrm{I}(E)$. \begin{lem}\label{lemP} \begin{enum} \item If $\ubox_0^h(E)<\infty$, then $E$ is totally bounded. \item $\ubox_0^h(E)=\ubox_0^h(\clos E)$. \item If $X$ is complete, $E\subs X$ and $E\in\NNs(\ubox_0^h)$, then there is a \si compact set $K\sups E$ such that $\ubox^h(K)=0$. \item If $g\prec h$ and $\ubox^g(E)<\infty$, then $E\in\NNs(\ubox_0^h)$. \item If $E\in\NNs(\ubox_0^h)$, then there is $g\prec h$ such that $E\in\NNs(\ubox_0^g)$. \end{enum} \end{lem} The set functions $\hm^h,\uhm^h$ and $\ubox^h$ are Borel outer measures, i.e.~Borel-regular outer measures whose restrictions to the algebra of Borel sets are Borel measures. The set function that we shall introduce now is not really a measure, as it is defined from the lower box content, which is not finitely subadditive. Let $$ \lbox_0^h(E)=\liminf_{r\to0}N_E(r)\cdot h(r). $$ Write $E_n\upto E$ to denote that $\{E_n\}$ is an increasing sequence of sets with union $E$. Define $$ \dbox^h(E)=\inf\left\{\sup_{n\in\Nset}\lbox_0^h(E_n):E_n\upto E\right\}. $$ This is a ``directed'' variation of the Method I construction. \begin{lem}\label{lemDP} \begin{enum} \item If $\lbox_0^h(E)<\infty$, then $E$ is totally bounded. \item $\lbox_0^h(E)=\lbox_0^h(\clos E)$. \item If $g\prec h$ and $\dbox^g(E)<\infty$, then $E\in\NNs(\lbox_0^h)$. \item If $E\in\NNs(\lbox_0^h)$, then there is $g\prec h$ such that $E\in\NNs(\lbox_0^g)$. \end{enum} \end{lem} In the common case when $h(r)=r^s$ for some $s>0$ we write $\hm^s$ for $\hm^h$, and the same license is used for all pre-measures and measures under consideration. It is easy to check that \begin{equation}\label{basicIneq} \hm^g\leq\uhm^g\leq\dbox^g\leq\ubox^g \end{equation} and that the three measures $\uhm^g$, $\dbox^g$ and $\ubox^g$ satisfy the following continuity property. \begin{lem} \label{IncreasingSetsLemma} If $\uhm^g(X)<s$, then there is a sequence $X_n\upto X$ such that $\sup\uhm_0^g(X_n)<s$. Analogous statements hold for $\ubox^g$ and $\dbox^g$. 
\end{lem} \subsection*{Cartesian products} Given two metric spaces $X_1$ and $X_2$ with respective metrics $d_1$ and $d_2$, provide the cartesian product $X_1\times X_2$ with the maximum metric $$ d\bigl((x_1,x_2),(y_1,y_2)\bigr)=\max(d_1(x_1,y_1),d_2(x_2,y_2)). $$ A Hausdorff function $h$ is of \emph{finite order} (or \emph{blanketed} or satisfies \emph{doubling condition}) if $\limsup_{r\to0}\frac{h(2r)}{h(r)}<\infty$. \begin{lem}\label{howroyd} Let $X,Y$ be metric spaces and $h,g$ Hausdorff functions. Then \begin{enum} \item $\ubox^{hg}(X\times Y)\leq\ubox^h(X)\,\ubox^g(Y)$, \item $\hm^{hg}(X\times Y)\leq\hm^h(X)\,\ubox^g(Y)$, \end{enum} provided the rightmost products are not $0\cdot\infty$ or $\infty\cdot0$. If $h,g$ are of finite order, then \begin{enum}\setcounter{enumi}{2} \item $\hm^h(X)\,\hm^g(Y)\leq\hm^{hg}(X\times Y)$, \item $\uhm^h(X)\,\uhm^g(Y)\leq\uhm^{hg}(X\times Y)$, \end{enum} and there is a constant $c>0$ depending only on $g$ and $h$ such that \begin{enum}\setcounter{enumi}{4} \item $\ubox^h(X)\,\dbox^g(Y)\leq c\,\ubox^{hg}(X\times Y)$, \item $\dbox^h(X)\,\dbox^g(Y)\leq c\,\dbox^{hg}(X\times Y)$. \end{enum} \end{lem} \begin{proof}[Proof in outline] (i) easily follows from the definitions and the obvious inequality $N_{X\times Y}(\del)\leq N_X(\del)N_Y(\del)$. (ii) It is clearly enough to prove that $\hm^{hg}(X\times Y)\leq\hm^h(X)\,\ubox_0^g(Y)$. Fix $\eps,\del>0$ and find a $\del$-cover $\{E_n\}$ of $X$ such that $\sum_nh(dE_n)<\hm^h(X)+\eps$. For each $n$ let $\mc B_n$ be an $dE_n$-cover of $Y$ such that $\abs{\mc B_n}=N_Y(dE_n)$. If $\del$ is small enough, we thus have $\abs{\mc B_n}g(dE_n)<\ubox_0^g(Y)+\eps$. Consider the family $\mc A=\{E_n\times B:n\in\Nset,B\in\mc B_n\}$. It is clearly a $\del$-cover of $X\times Y$ and routine calculation shows that $\sum_{A\in\mc A}h(dA)g(dA)\leq(\ubox_0^g(Y)+\eps)(\hm^h(X)+\eps)$. Thus $\hm_\del^{hg}(X\times Y)\leq(\ubox_0^g(Y)+\eps)(\hm^h(X)+\eps)$. Let $\del\to0$ and $\eps\to0$ to get (ii). (iii) comes from~\cite{MR1362951}. (iv) is easily derived from the following generalization of (iii) that can be found in~\cite{howroydPhD}, see also~\cite{MR1362951,Kelly}: If $E\subs X\times Y$ and $g,h$ are of finite order, then $\int_X\hm^g(E\cap\{x\}\times Y)\mathrm{d}\hm^h(x)\leq\hm^{gh}(E)$. (v) and (vi): Let $C_X(\del)$ be the maximal number of points in $X$ that are pairwise more than $\del$ apart. This is a variation of the covering number function $N_X(\del)$ and it is obvious that $C_X(\del)\leq N_X(\del)\leq C_X(\del/2)$. So if $\tau^g$ and $\underrightarrow{\tau^g}$ are the set functions that obtain the same way as $\ubox^g$ and $\dbox^g$, respectively, from $C_X$ in place of $N_X$, it is clear that $\tau^g\leq\ubox^g\leq\tau^{g_2}$, where $g_2(r)=g(2r)$, and likewise $\underrightarrow{\tau^g}\leq\dbox^g\leq\underrightarrow{\tau}^{g_2}$. Thus if $g,h$ are of finite order, there is a constant $q$ such that $\tau^g\leq\ubox^g\leq q\tau^g$ and $\underrightarrow{\tau^g}\leq\dbox^g\leq q\underrightarrow{\tau^g}$. As proved in~\cite{ZinPack}, $\tau^h(X)\,\underrightarrow{\tau^g}(Y)\leq\tau^{hg}(X\times Y)$ and $\underrightarrow{\tau^h}(X)\,\underrightarrow{\tau^g}(Y) \leq\underrightarrow\tau^{hg}(X\times Y)$. Hence (v) and (vi) hold with $c=1/q^2$. \end{proof} \subsection*{Uniformly continuous and Lipschitz images} The following lemma on Lipschitz images and its counterpart for uniformly continuous mappings are well-known for Hausdorff measures, see e.g.~\cite[Theorem 29]{MR0281862}. 
\begin{lem}\label{lipschitz} Let $f:X\to Y$ be a Lipschitz mapping with Lipschitz constant $L$. Then $\hm^s(f(X))\leq L^s\hm^s(X)$ for any $s>0$. Analogous statements hold also for $\uhm^s,\ubox^s$ and $\dbox^s$. \end{lem} \begin{lem}\label{lipschitz2} Let $f:(X,d_X)\to (Y,d_Y)$ be a uniformly continuous mapping. Suppose $g\in\HH$ and that $f$ satisfies the condition \begin{equation}\label{lip2} d_Y(f(x),f(y))\leq g(d_X(x,y)),\quad x,y\in X. \end{equation} Then $\hm^h(f(X))\leq \hm^{h{\circ}g}(X)$ for any $h\in\HH$. Analogous statements hold also for $\uhm^h$, $\ubox^h$ and $\dbox^h$. \end{lem} \subsection*{Dimensions} Recall that the \emph{Hausdorff dimension} of $X$ is defined by $$ \hdim X=\sup\{s>0:\hm^s(X)=\infty\}=\inf\{s>0:\hm^s(X)=0\}. $$ The \emph{upper Hausdorff dimension} arising from the upper Hausdorff measure is defined by the same pattern: $$ \uhdim X=\sup\{s>0:\uhm^s(X)=\infty\}=\inf\{s>0:\uhm^s(X)=0\}. $$ It is clear that $\hdim X\leq\uhdim X$. It follows from Lemma~\ref{lem1}(v) that if $X$ is a complete metric space and $E\subs X$, then $\uhdim E=\inf\{\hdim K:K\supseteq E\text{ is \si compact}\}$. In particular, if $X$ is \si compact, then $\hdim X=\uhdim X$. The \emph{packing dimension} obtains by the same pattern from $\ubox^s$: $$ \pdim E=\inf\{s>0:\ubox^s(E)=0\}=\sup\{s>0:\ubox^s(E)=\infty\}. $$ The \emph{lower directed packing dimension} related to $\dbox^s$ is defined by $$ \dpdim X=\sup\{s>0:\dbox^s(X)=\infty\}=\inf\{s>0:\dbox^s(X)=0\}. $$ The chain of inequalities~\eqref{basicIneq} yields~\eqref{basicinequality}. \subsection*{The measures on the Cantor cube} For $p\in2^{<\Nset}$ we denote $[p]=\{x\in\Cset:p\subs x\}$. Metrize $\Cset$ as follows: Given $x\neq y\in\Cset$, set $n(x,y)=\min\{i\in\Nset:x(i)\neq y(i)\}$ and define $d(x,y)=2^{-n(x,y)}$. This is a variant of the usual least difference metric on $\Cset$. In particular, the topology induced by $d$ coincides with that of $\Cset$. Routine proofs show that in this metric, $\hm^1$ coincides on Borel sets with the usual product measure, i.e.~the Haar measure of the compact group $\Cset$, and that $$ \hm^1(\Cset)=\uhm^1(\Cset)=\ubox^1(\Cset)=\dbox^1(\Cset)=1. $$ Besides the \si ideal $\EE$ generated by closed null sets we also introduce the following family of highly regular compact subsets of $\Cset$. For each $I\in[\Nset]^\Nset$ put $C_I=\{x\in\Cset:x\rest I\equiv0\}$ and define $\CC=\{C_I:I\in[\Nset]^\Nset\}$. \begin{lem}\label{EC} \begin{enum} \item $E\in\EE$ if and only if there is $h\prec 1$ such that $E\in\NNs(\ubox^h_0)$, \item $\CC\subs\EE$, \item for each $h\prec 1$ there is $C\in\CC$ such that $\hm^h(C)>0$. \end{enum} \end{lem} \begin{proof} (i) According to Lemma~\ref{lemP}(v) it is enough to prove that if $E\subs\Cset$ is a closed set and $\hm^1(E)=0$, then $\ubox^1_0(E)=0$. Let $\eps>0$ be arbitrary. As $E$ is compact and $\hm^1(E)=0$, there is a finite cover $\mc A$ of $E$ such that $\sum_{A\in\mc A}dA<\eps$. Since any subset of $\Cset$ is a subset of a cylinder with the same diameter, we may assume all $A\in\mc A$ are cylinders, so let $\mc A=\{[p_1],[p_2],\dots,[p_k]\}$. Let $n\in\Nset$ be arbitrary subject to $n\geq\max\{\abs{p_1},\abs{p_2},\dots,\abs{p_k}\}$. Let $\mc B=\{p\in2^n:\exists i\leq k(p_i\subs p)\}$. It is clear that $\mc B$ is a $2^{-n}$-cover of $E$. Therefore $N_E(2^{-n})\leq\abs{\mc B}$. It is also clear that $\sum_{A\in\mc A}dA=\sum_{B\in\mc B}dB=2^{-n}\cdot\abs{\mc B}$. Consequently $2^{-n}\cdot N_E(2^{-n})<\eps$. 
Since this is true for all $n\in\Nset$ large enough, we get $\ubox_0^1(E)\leq\eps$. Letting $\eps\to 0$ yields $\ubox_0^1(E)=0$. (ii) Let $I\in[\Nset]^\Nset$. For each $n\in\Nset$, the family $\{[p]:p\in C_I\rest n\}$ is obviously a $2^{-n}$-cover of $C_I$ of cardinality $2^{\abs{n\setminus I}}$. Therefore $\hm^1_{2^{-n}}(C_I)\leq2^{\abs{n\setminus I}}2^{-n} =2^{-\abs{n\cap I}}$. Hence $\hm^1(C_I)\leq\lim_{n\to\infty}2^{-\abs{n\cap I}}=0$. (iii) Using Lemma~\ref{lemHaus}(i) it is enough to find $C_I$ such that $\hm^h(C_I)\geq 1$. $h\prec 1$ yields $\frac{h(2^{-n})}{2^{-n}}\to\infty$. Therefore there is $I\in[\Nset]^\Nset$ sparse enough to satisfy $2^{\abs{n\cap I}}\leq\frac{h(2^{-n})}{2^{-n}}$, i.e.~$2^{-\abs{n\setminus I}}\leq h(2^{-n})$ for all $n\in\Nset$. Consider the product measure on $C_I$ given as follows: If $p\in2^n$ and $[p]\cap C_I\neq\emptyset$, put $\lambda([p]\cap C_I)=2^{-\abs{n\setminus I}}$. Straightforward calculation shows that $h(dE)\geq\lambda(E)$ for each $E\subs C_I$. Hence $\sum_n h(dE_n)\geq\sum_n\lambda(E_n)\geq\lambda(C_I)=1$ for each cover $\{E_n\}$ of $C_I$ and $\hm^h(C_I)\geq 1$ follows. \end{proof} \section{\hnull{} spaces} \label{sec:hnull} We first establish a couple of characterizations of \hnull{} spaces in terms of Hausdorff measures and dimensions. \begin{thm}\label{basicHnull} Let $X$ be a metric space. The following are equivalent. \begin{enum} \item $X$ is \hnull, \item $\hdim (X,\rho)=0$ for each uniformly equivalent metric $\rho$ on $X$, \item $\hdim f(X)<\infty$ for each uniformly continuous $f:X\to Y$ into another metric space, \item $\hm^h(X)=0$ for each $h\in\HH$, \item $\hm^h(X)<\infty$ for each $h\in\HH$. \end{enum} \end{thm} \begin{proof} Denote by $d$ the metric of $X$. (i)\Implies(ii) and (i)\Implies(iii) are trivial. We first prove simultaneously (ii)\Implies(iv) and (iii)\Implies(iv). Let $h\in\HH$. Choose a convex Hausdorff function $g$ such that $g\prec h^{1/n}$ for each positive $n\in\Nset$. The properties of $g$ ensure that $\rho(x,y)=g(d(x,y))$ is a uniformly equivalent metric on $X$. The identity map $\id_X:(X,d)\to(X,\rho)$ is of course uniformly continuous. Thus if either (ii) or (iii) holds, then $\hdim(X,\rho)<\infty$, hence there is $n$ such that $\hm^n(X,\rho)=0$. The choice of $g$ ensures $h{\circ}g^{-1}\succ n$. Thus $\hm^{h{\circ}g^{-1}}(X,\rho)=0$ by Lemma~\ref{lemHaus}(i). Also $d(x,y)\leq g^{-1}(\rho(x,y))$. Hence Lemma \ref{lipschitz2} yields $\hm^h(X,d)\leq\hm^{h{\circ}g^{-1}}(X,\rho)=0$. (iv)\Implies(v) is trivial. (v)\Implies(i): Let $f:X\to (Y,\rho)$ be uniformly continuous. There is a function $g\in\HH$ such that $\rho(fx,fy)\leq g(d(x,y))$ for all $x,y\in X$. Fix $s>0$ and put $h(r)=r^s$. By assumption $\hm^{h{\circ}g}(X)<\infty$. Apply Lemma~\ref{lipschitz2} to conclude that $\hm^s(fX)=\hm^h(fX)\leq\hm^{h{\circ}g}(X)<\infty$. In particular $\hdim f(X)\leq s$. As $s>0$ was arbitrary, it follows that $\hdim f(X)=0$. \end{proof} Let $X$ be a space and let $\seq{U_n}$ be a sequence of subsets of $X$. Given a sequence $\Seqeps$ of positive real numbers, $\seq{U_n}$ is termed $\seqeps$-fine if $dU_n\leq\eps_n$ holds for all $n$. Recall once again that $X$ has strong measure zero (\smz{}) if for any $\Seqeps$ there is an $\seqeps$-fine cover of $X$. \begin{lem}\label{seqep2} For each $\Seqeps$ there exists $g\in\HH$ such that: If $\hm^g(X)=0$, then $X$ admits an $\seqeps$-fine $\lambda$-cover. \end{lem} \begin{proof} Assume without loss of generality that $X$ has no isolated points. Let $\seq{\eps_n}\in(0,\infty)^\Nset$. 
Choose $g\in\HH$ such that $g(\eps_n)\geq\frac1n$ for all $n\in\Nset$. Suppose $\hm^g(X)=0$. Lemma~\ref{lambda} yields a $\lambda$-cover $\{G_n\}$ such that $\sum_ng(dG_n)<\infty$. Reordering, we may assume $\sum_ng(dG_n)<1$ and $dG_0\geq dG_1\geq dG_2\geq\dots$. Therefore $$ g(dG_n)\leq\frac1n\, ng(dG_n)\leq\frac1n\sum_ng(dG_n)<\frac1n\leq g(\eps_n) $$ and since $g$ is nondecreasing, we get $dG_n\leq\eps_n$. \end{proof} \begin{thm}\label{hnullsmz} A metric space $X$ is \hnull{} if and only if it is \smz{}. \end{thm} \begin{proof} The forward implication follows at once from the above lemma. To prove the reverse one, let $h\in\HH$, fix $\del>0$ and choose $\eps_n<\del$ to satisfy $h(\eps_n)\leq 2^{-n}$. The $\seq{\eps_n}$-fine cover of $X$ witnesses $\hm_\del^h(X)\leq 1$. Therefore $\hm^h(X)\leq1$, which is by Theorem~\ref{basicHnull}(v) enough. \end{proof} \hnull{}$\,=\,$\smz{} sets are characterized by the behavior of Cartesian products. \begin{thm}\label{prodHnull} The following are equivalent. \begin{enum} \item $X$ is \hnull{}, \item $\hm^h(X\times Y)=0$ whenever $h\in\HH$ and $\uhm_0^h(Y)=0$, \item $\hm^h(X\times Y)=0$ whenever $h\in\HH$, $Y$ is \si compact and $\hm^h(Y)=0$, \item $\hm^1(X\times E)=0$ whenever $E\in\EE$, \item $\hm^1(X\times C)=0$ whenever $C\in\CC$. \end{enum} \end{thm} \begin{proof} (i)\Implies(ii): Suppose $X$ is \hnull{}. Fix $\eta>0$. Since $\uhm_0^h(Y)=0$, for each $j\in\Nset$ there is a finite cover $\mc U_j$ of $Y$ such that $\sum_{U\in\mc U_j}h(dU)<2^{-j}\eta$. We may also assume that $dU<\eta$ for all $U$. Let $\eps_j=\min\{dU:U\in\mc U_j\}$. Choose a cover $\{V_j\}$ of $X$ such that $dV_j\leq\eps_j$ and define $$ \mc W=\{V_j\times U:j\in\Nset,\,U\in\mc U_j\}. $$ It is obvious that $\mc W$ is a cover of $X\times Y$. Since $d(V_j\times U)=d(U)$ for all $j$ and $U\in\mc U_j$ by the choice of $\eps_j$, we have $$ \sum_{W\in\mc W}h(dW)= \sum_{j\in\Nset}\sum_{U\in\mc U_j}h(dU)< \sum_{j\in\Nset}2^{-j}\eta=2\eta. $$ Therefore $\hm_\eta^h(X\times Y)<2\eta$, which is enough for $\hm^h(X\times Y)=0$, as $\eta$ was arbitrary. (ii)\Implies(iii)\Implies(iv)\Implies(v) is trivial. (v)\Implies(i): Suppose $X$ is not \hnull. We will show that $\hm^1(X\times C)>0$ for some $C\in\CC$. By assumption there is $h\in\HH$ such that $\hm^h(X)>0$. \emph{Mutatis mutandis} we may assume that $h$ is concave and $h(r)\geq\sqrt r$. In particular $g(r)=r/h(r)$ is an increasing function and $\lim_{r\to0}g(r)=0$, i.e.~$g$ is a Hausdorff function, and $g\prec1$. Also both $h$ and $g$ are of finite order. Use Lemma~\ref{EC}(iii) to find $C\in\CC$ such that $\hm^g(C)>0$. Now apply Lemma~\ref{howroyd}(iii): $$ \hm^1(X\times C)=\hm^{h\cdot g}(X\times C)\geq\hm^h(X)\cdot\hm^g(C)>0. \qedhere $$ \end{proof} \begin{coro} If $X$ is \hnull{} then $\hdim X\times Y=\hdim Y$ for every \si compact metric space $Y$. \end{coro} \section{\uhnull{} spaces} \label{sec:uhnull} \begin{thm}\label{basicUhnull} The following are equivalent. \begin{enum} \item $X$ is \uhnull, \item $\uhdim(X,\rho)=0$ for each uniformly equivalent metric $\rho$ on $X$, \item $\uhdim f(X)<\infty$ for each uniformly continuous $f:X\to Y$ into another metric space, \item $X\in\NNs(\uhm_0^h)$ for each $h\in\HH$, \item $\uhm^h(X)<\infty$ for each $h\in\HH$. \end{enum} \end{thm} \begin{proof} One has to employ Lemma~\ref{lem1} instead of Lemma~\ref{lemHaus}, otherwise the proof goes the same way as that of Theorem~\ref{basicHnull}. \end{proof} Next we provide a combinatorial characterization of \uhnull{} sets that parallels Theorem~\ref{hnullsmz}.
\begin{lem}\label{seqep} For each $\Seqeps$ there exists $g\in\HH$ such that: If $\uhm^g(X)=0$, then $X$ admits an $\seqeps$-fine $\gamma$-groupable cover. \end{lem} \begin{proof} Assume without loss of generality that $X$ has no isolated points. Choose Hausdorff functions $g,h$ such that $h(\eps_n)\geq\frac1n$ for all $n\in\Nset$ and $g\prec h$. Suppose $\uhm^g(X)=0$. Then $X\in\NNs(\uhm_0^h)$ by Lemma~\ref{lem1}(vii). By Lemma~\ref{gammagr} there is a $\gamma$-groupable cover $\{G_n\}$ such that $\sum_nh(dG_n)<\infty$. Proceed as in the proof of Lemma~\ref{seqep2}. \end{proof} \begin{thm}\label{combUhnull} Let $X$ be a separable metric space. $X$ is \uhnull{} if and only if for each $\Seqeps$, $X$ has an $\seqeps$-fine $\gamma$-groupable cover. \end{thm} \begin{proof} The forward implication follows at once from the above lemma. The reverse implication is proved as the one of Theorem~\ref{hnullsmz}. \end{proof} \begin{thm}\label{prodUhnull} The following are equivalent. \begin{enum} \item $X$ is \uhnull, \item for each $h\in\HH$, $Y\in\NNs(\uhm_0^h)$ and each complete $Z\sups X$ there is $F$ \si compact, $X\subs F\subs Z$, such that $\uhm^h(F\times Y)=0$, \item $\uhm^h(X\times Y)=0$ whenever $h\in\HH$ and $\uhm_0^h(Y)=0$, \item $\uhm^1(X\times E)=0$ for each $E\in\EE$, \item $\uhm^1(X\times C)=0$ for each $C\in\CC$. \end{enum} \end{thm} \begin{proof} The proof is similar to that of Theorem~\ref{prodHnull}. The only nontrivial implications are (i)\Implies(ii) and (v)\Implies(i). (i)\Implies(ii): Let $Z\sups X$ be a complete metric space. Suppose $X$ is \uhnull{}. In particular, by Lemma~\ref{lem1} $X$ is contained in a \si compact set $K\subs Z$. Let $h\in\HH$ and $Y\in\NNs(\uhm_0^h)$. Lemma~\ref{gammagr} yields a $\gamma$-groupable cover $\mc U$ of $Y$ such that $\sum_{U\in\mc U}h(dU)<\infty$. Denote by $\mc U_j$ the witnessing groups. Let $\eps_j=\min\{dU:U\in\mc U_j\}$. Using Theorem~\ref{combUhnull} choose a $\gamma$-groupable cover $\{V_j\}$ of $X$ such that $dV_j\leq\eps_j$. We may and shall assume that each $V_j$ is a closed subset of $Z$. Denote by $\mc V_k$ the witnessing groups. Define \begin{align*} \mc W&=\{V_j\times U:j\in\Nset,\,U\in\mc U_j\},\\ F&=K\cap\bigcup_{i\in\Nset}\bigcap_{k\geq i}\bigcup\mc V_k. \end{align*} The set $F\subs Z$ is clearly an $F_\sigma$ subset of $K$ and is thus \si compact. It is easy to check that $\mc W$ is a $\gamma$-groupable cover of $F\times Y$. Since $d(V_j\times U)=d(U)$ for all $j$ and $U\in\mc U_j$ by the choice of $\eps_j$, we have $$ \sum_{W\in\mc W}h(dW)= \sum_{U\in\mc U}h(dU)<\infty. $$ Using Lemma~\ref{gammagr} it follows that $F\times Y\in\NNs(\uhm_0^h)$ and in particular $\uhm^h(X\times Y)=0$. (v)\Implies(i): Suppose $X$ is not \uhnull. We will show that $\hm^1(X\times C)>0$ for some $C\in\CC$. By assumption there is $h\in\HH$ such that $\uhm^h(X)>0$. As well as in the proof of Theorem~\ref{prodHnull} suppose $h$ is concave, hence of finite order, and find a Hausdorff function of finite order $g\prec1$ such that $g(r)h(r)=r$ and $C\in\CC$ such that $\hm^g(C)>0$. This time apply Lemma~\ref{howroyd}(iv): $$ \uhm^1(X\times C)=\uhm^{h\cdot g}(X\times C)\geq\uhm^h(X)\cdot\uhm^g(C)>0. \qedhere $$ \end{proof} \begin{coro} If $X$ is \uhnull{} then $\uhdim X\times Y=\uhdim Y$ for every metric space $Y$. In particular, $\uhdim X\times Y=\hdim Y$ if $Y$ is \si compact. \end{coro} \subsection*{Products of \hnull{} and \uhnull{} sets} It is well known that a product of two \smz{} sets need not be \smz{}. 
Thus the product of two \hnull{} sets need not be \hnull{}. But if one of the factors is \uhnull{}, the product is \hnull{}: \begin{thm}\label{productHUH} \begin{enum} \item If $X$ and $Y$ are \uhnull{}, then $X\times Y$ is \uhnull{}. \item If $X$ is \hnull{} and $Y$ is \uhnull{}, then $X\times Y$ is \hnull{}. \end{enum} \end{thm} \begin{proof} (i) Suppose $X,Y$ are \uhnull{}. By Theorem~\ref{basicUhnull}(iv) $Y\in\NNs(\uhm_0^h)$ for all $h\in\HH$. Hence Theorem~\ref{prodUhnull}(ii) yields $\uhm^h(X\times Y)=0$ for all $h\in\HH$, which is by Theorem~\ref{basicUhnull}(v) enough. (ii) is derived in the same manner from Theorems~\ref{prodHnull} and~\ref{basicHnull}. \end{proof} M.~Scheepers~\cite[Theorem 1, Lemma 3]{MR1779763} proved that a product of two \smz{} sets is \smz{} as long as one of the sets satisfies the Hurewicz property. Theorem~\ref{productHUH}(ii) improves his result (recall that \smz{}$=$\hnull{}). Indeed, it is easy to show that a \smz{} space satisfying the Hurewicz property is \uhnull{}. But since \uhnull{} is a uniform property and Hurewicz property is topological, one cannot expect \emph{a priori} that every \uhnull{} set has the Hurewicz property. (It is of course so if Borel Conjecture holds.) \begin{prop} Assuming the Continuum Hypothesis, there is an \uhnull{} set that does not have the Hurewicz property. \end{prop} \begin{proof} It follows from~\cite[Theorem 1]{MR738943} and its proof that under the Continuum Hypothesis there is a $\gamma$-set $X\subs\Cset$ that is concentrated on a countable set $D$. By~\cite[Theorem 6]{MR738943}, every $\gamma$-set is $\MM$-additive. By Theorem~\ref{mainME} \emph{infra}, every $\MM$-additive set is \uhnull{}. Hence $X$ is \uhnull{}. On the other hand, as proved in~\cite[Theorem 20]{MR1610427}, the set $X\setminus D$ does not have the Hurewicz property and since it is a subset of $X$, it is clearly \uhnull{}. \end{proof} \subsection*{Universally meager sets} Recall that a separable metric space $E$ is termed \emph{universally meager} \cite{MR1814112,MR2427418} if for any perfect Polish spaces $Y,X$ such that $E\subs X$ and every continuous one--to--one mapping $f:Y\to X$ the set $f^{-1}(E)$ is meager in $Y$. We show that \uhnull{} sets are universally meager. \begin{lem}\label{lemUM} Let $X,Y,Z$ be perfect Polish spaces and $\phi:Y\to X$ a continuous one--to--one mapping. Let $\mc F$ be an equicontinuous family of uniformly continuous mappings of $Z$ into $X$. If $E\subs Z$ is \uhnull{}, then there is a \si compact set $F\sups E$ such that $\phi^{-1}f(F)$ is meager in $Y$ for all $f\in\mc F$. \end{lem} \begin{proof} Let $\{U_n\}$ be a countable base for $Y$. As $\phi$ is one--to--one the set $\phi(U_n)$ is analytic and uncountable for each $n$. Therefore it contains a perfect set and thus is not \uhnull{}, i.e.~there is $h_n\in\HH$ such that $\uhm^{h_n}(\phi U_n)>0$. Choose $h\in\HH$ such that $h\prec h_n$ for all $n$, so that $\uhm^h(\phi U_n)>0$ for all $n$. Therefore $\uhm^h(\phi U)>0$ for each nonempty set $U$ open in $Y$. Since $\mc F$ is equicontinuous, there is a function $g\in\HH$ such that~\eqref{lip2} is satisfied by each $f\in\mc F$. By Theorem~\ref{basicUhnull} $E\in\NNs(\uhm_0^{h{\circ}g})$. Therefore there is a \si compact set $F\sups E$ such that $\uhm^{h{\circ}g}(F)=0$. Hence Lemma~\ref{lipschitz2} guarantees that $\uhm^h(fF)=0$ for all $f\in\mc F$. Therefore the $F_\sigma$-set $\phi^{-1}f(F)$ is meager in $Y$: for otherwise it would contain an open set witnessing $\uhm^h(fF)>0$. 
\end{proof} \begin{thm}\label{umg} Every \uhnull{} set is universally meager. \end{thm} \begin{proof} Apply Lemma~\ref{lemUM} with $Z=X$ and $\mc F=\{\id_X\}$. \end{proof} \section{\upnull{} spaces} \label{sec:pnull} \begin{thm}\label{basicPnull} The following are equivalent. \begin{enum} \item $X$ is \upnull, \item $\updim(X,\rho)=0$ for each uniformly equivalent metric $\rho$ on $X$, \item $\updim f(X)<\infty$ for each uniformly continuous $f:X\to Y$ into another metric space, \item $X\in\NNs(\ubox_0^h)$ for each $h\in\HH$, \item $\ubox^h(X)<\infty$ for each $h\in\HH$. \end{enum} \end{thm} \begin{proof} This is proved in the same manner as Theorems~\ref{basicHnull} and~\ref{basicUhnull}, with the aid of Lemma~\ref{lemP}. \end{proof} Next we provide a combinatorial characterization of \upnull{} sets. Note the similarity of Theorem~\ref{combPnull}(ii) with Theorem~\ref{combUhnull}. \begin{lem}\label{seqep3} For each $\Seqeps$ there exists $g\in\HH$ such that: If $\ubox^g(X)=0$, then $X$ admits an $\seqeps$-fine $\gamma$-groupable cover such that the witnessing groups $\mc G_j$ satisfy $\abs{\mc G_j}\leq j$ for each $j$. \end{lem} \begin{proof} Assume without loss of generality that $\seqeps$ is decreasing. Set $\del_n=\eps_{0+1+\dots+n}$. Choose a Hausdorff function $g$ such that $g(\del_n)>\frac1n$ for all $n\in\Nset$. Suppose $\ubox^g(X)=0$. Use Lemma~\ref{IncreasingSetsLemma} to find $X_k\upto X$ such that $\ubox_0^g(X_k)<1$. Thus for each $k$ there is $n_k$ such that $N_{X_k}(\del_n)g(\del_n)<1$ for each $n\geq n_k$. Define the witnessing groups as follows: If $n_k\leq j<n_{k+1}$, let $\mc G_j$ be a cover of $X_k$ witnessing $N_{X_k}(\del_j)g(\del_j)<1$. Clearly $\abs{\mc G_j}<\frac{1}{g(\del_j)}<j$. The cover we are looking for is of course $\bigcup_{j\in\Nset}\mc G_j=\{U_n:n\in\Nset\}$ ordered in such a way that if $i<j$ and $U_n\in\mc G_i$ and $U_m\in\mc G_j$, then $n<m$. It is clear that $\{U_n\}$ is a $\gamma$-groupable cover. If $U_n\in\mc G_j$, then $n\leq\sum_{i\leq j}\abs{\mc G_i}<0+1+\dots+j$. Hence $dU_n<\del_j=\eps_{0+1+\dots+j}<\eps_n$. Therefore $\{U_n\}$ is $\seqeps$-fine. \end{proof} \begin{thm}\label{combPnull} Let $X$ be a separable metric space. The following are equivalent. \begin{enum} \item $X$ is \upnull{}, \item for each $\Seqeps$, $X$ has an $\seqeps$-fine $\gamma$-groupable cover such that the witnessing groups $\mc G_j$ satisfy $\abs{\mc G_j}\leq j$ for each $j$, \item for each $\Seqeps$, there is a sequence $\{\mc F_n\}$ of families of sets such that $d\mc F_n\leq\eps_n$ and $\abs{\mc F_n}\leq n$ for all $n\in\Nset$ and the sequence $\{\bigcup\mc F_n\}$ is a $\gamma$-cover of $X$, \item there is $f\in\Pset$ such that for each $\Seqeps$, there is a sequence $\{\mc F_n\}$ of families of sets such that $d\mc F_n\leq\eps_n$ and $\abs{\mc F_n}\leq f(n)$ for all $n\in\Nset$ and the sequence $\{\bigcup\mc F_n\}$ is a $\gamma$-cover of $X$. \end{enum} \end{thm} \begin{proof} (i)\Implies(ii) follows from the above lemma. (ii)\Implies(iii)\Implies(iv) is trivial. (iv)\Implies(i): Let $h$ be a Hausdorff function. Choose $\seqeps$ decreasing so that $h(\eps_n)<\frac1{f(n+1)}$. Consider the families $\mc F_n$ given by (iv) and for each $k$ set $X_k=\{x\in X:\forall n\geq k\ x\in\bigcup\mc F_n\}$. Clearly $X_k\upto X$. It remains to show $\ubox_0^h(X_k)\leq 1$. Let $\eps<\eps_k$. There is $n>k$ such that $\eps_n\leq\eps<\eps_{n-1}$. Obviously $N_{X_k}(\eps)\leq N_{X_k}(\eps_n)\leq\abs{\mc F_n}\leq f(n)$. Hence $N_{X_k}(\eps)h(\eps)\leq f(n)h(\eps_{n-1})\leq\frac{f(n)}{f(n-1+1)}=1$.
Thus $\ubox_0^h(X_k)\leq1$. \end{proof} \begin{thm}\label{prodPnull} The following are equivalent. \begin{enum} \item $X$ is \upnull, \item for each $h\in\HH$, $Y\in\NNs(\ubox_0^h)$ and each complete $Z\sups X$ there is $F$ \si compact, $X\subs F\subs Z$, such that $\ubox^h(F\times Y)=0$, \item $\ubox^h(X\times Y)=0$ whenever $h\in\HH$ and $\ubox_0^h(Y)=0$, \item $\ubox^1(X\times E)=0$ for each $E\in\EE$, \item $\ubox^1(X\times C)=0$ for each $C\in\CC$. \end{enum} \end{thm} \begin{proof} (i)\Implies(ii): Suppose $Y\in\NNs(\ubox_0^h)$. By Lemma~\ref{lemP}(v) there is $g\prec h$ such that $Y\in\NNs(\ubox_0^g)$. Let $f\in\HH$ be such that $fg\geq h$. Since $X$ is \upnull{}, Theorem~\ref{basicPnull}(iv) yields $X\in\NNs(\ubox_0^f)$. Thus there is, by Lemma~\ref{lemP}(iii), a \si compact set $F\sups X$ such that $\ubox^f(F)=0$. Apply Lemma~\ref{howroyd}(i): $$ \ubox^h(F\times Y)\leq\ubox^{fg}(F\times Y)\leq\ubox^f(F)\ubox^g(Y)=0. $$ (ii)\Implies(iii)\Implies(iv)\Implies(v) is trivial. (v)\Implies(i): Suppose $X$ is not \upnull{}. We will show that $\ubox^1(X\times C)>0$ for some $C\in\CC$. By assumption there is $h\in\HH$ such that $\ubox^h(X)>0$. Proceed as in the proof of Theorem~\ref{prodHnull}, this time applying Lemma~\ref{howroyd}(v). \end{proof} \begin{thm} If $X$ and $Y$ are \upnull{}, then $X\times Y$ is \upnull{}. \end{thm} \begin{proof} Apply Theorem~\ref{basicPnull}(iv) and (v) and Theorem~\ref{prodPnull}(iii). \end{proof} \begin{prop}\label{XN} If $X$ is \upnull{}, then \begin{enum} \item $\hm^h(X\times Y)=0$ whenever $h\in\HH$ and $\hm^h(Y)=0$, \item in particular $\hm^1(X\times N)=0$ for each $N\in\NN$. \end{enum} \begin{proof} (i) follows from Theorem~\ref{basicPnull}(v) and Lemma~\ref{howroyd}(ii). (ii) follows from (i). \end{proof} \end{prop} \begin{coro} If $X$ is \upnull{} then for every metric space $Y$ \begin{enum} \item $\pdim X\times Y=\pdim Y$, \item $\hdim X\times Y=\hdim Y$. \end{enum} \end{coro} \section{\Dpnull{} spaces} \label{sec:dpnull} \begin{thm}\label{basicDpnull} The following are equivalent. \begin{enum} \item $X$ is \dpnull, \item $\dpdim(X,\rho)=0$ for each uniformly equivalent metric $\rho$ on $X$, \item $\dpdim f(X)<\infty$ for each uniformly continuous $f:X\to Y$ into another metric space, \item $\dbox^h(X)=0$ for each $h\in\HH$, \item $\dbox^h(X)<\infty$ for each $h\in\HH$. \end{enum} \end{thm} \begin{proof} This is proved in the same manner as Theorems~\ref{basicHnull} and~\ref{basicUhnull}, with the aid of Lemma~\ref{lemDP}. \end{proof} Note the similarity of the following characterization of \dpnull{}-sets with Theorem~\ref{combPnull}. \begin{thm} Let $X$ be a separable metric space. The following are equivalent. \label{combDnull} \begin{enum} \item $X$ is \dpnull{}, \item for each $\Seqeps$, there is $I\in[\Nset]^\Nset$ and a sequence $\{\mc F_n:n\in\Nset\}$ of families of sets such that $d\mc F_n\leq\eps_n$ and $\abs{\mc F_n}\leq n$ for all $n\in I$ and the sequence $\{\bigcup\mc F_n:n\in I\}$ is a $\gamma$-cover of $X$, \item there is $f\in\Pset$ such that for each $\Seqeps$, there is $I\in[\Nset]^\Nset$ and a sequence $\{\mc F_n:n\in\Nset\}$ of families of sets such that $d\mc F_n\leq\eps_n$ and $\abs{\mc F_n}\leq f(n)$ for all $n\in I$ and the sequence $\{\bigcup\mc F_n:n\in I\}$ is a $\gamma$-cover of $X$. \end{enum} \end{thm} \begin{proof} (i)\Implies(ii): Assume without loss of generality that $\seqeps$ is decreasing. Choose a Hausdorff function $g$ such that $g(\eps_n)=\frac1n$ for all $n>0$. Then $\dbox^g(X)=0$.
Thus there is a sequence $X_k\upto X$ such that $\lbox_0^g(X_k)<\frac12$. Therefore it is possible to choose a decreasing sequence $\del_k>0$ such that $N_{X_k}(\del_k)g(\del_k)<\frac12$. Fix $k$ for the moment. Let $m(k)$ be the (unique) integer satisfying $\eps_{m(k)+1}\leq\del_k<\eps_{m(k)}$. Then $$ N_{X_k}(\eps_{m(k)})\leq N_{X_k}(\del_k)\frac{g(\del_k)}{g(\eps_{m(k)+1})}\leq \frac12(m(k)+1)\leq m(k). $$ Choose a cover $\mc F_{m(k)}$ of $X_k$ witnessing $N_{X_k}(\del_k)g(\del_k)<\frac12$. Let $I=\{m(k):k\in\Nset\}$. Verification of the required properties of $\{\bigcup\mc F_n:n\in I\}$ is straightforward. (ii)\Implies(iii) is trivial. (iii)\Implies(i): Let $g$ be a Hausdorff function. Choose $\Seqeps$ decreasing subject to $g(\eps_n)\leq\frac{1}{f(n)}$. Let $I$ and $\{\mc F_n:n\in\Nset\}$ be as in (iii). For $n\in I$ and $k\in\Nset$ set $F_n=\bigcup\mc F_n$ and $X_k=\bigcap_{k\leq n\in I}F_n$. Obviously $X_k\upto X$ and $$ \forall n\geq k, n\in I\quad N_{X_k}(\eps_n)g(\eps_n)\leq \abs{\mc F_n}g(\eps_n)\leq f(n)\frac{1}{f(n)}=1, $$ which yields $\lbox_0^g(X_k)\leq 1$. Consequently $\dbox^g(X)\leq1$. Thus $X$ is \dpnull{} by Theorem~\ref{basicDpnull}(v). \end{proof} This combinatorial description of \dpnull{} sets yields a surprising consequence: though $\dbox^g$ is not even finitely additive, \dpnull{} is a \si additive property. \begin{coro} For each metric space $X$, the family of all \dpnull{} subsets of $X$ forms a \si ideal. \label{dpnullideal} \end{coro} \begin{proof} This follows by a diagonal construction. Let $\{X_k\}$ be a countable collection of \dpnull{} subsets of $X$ and $Y=\bigcup_kX_k$. Let $\Seqeps$. Repeatedly apply Theorem~\ref{combDnull}(ii) to find a diagonal sequence $\seq{n_i:i\in\Nset}$ and a triangular matrix $\{\mc F_{i,k}:k\leq i\in\Nset\}$ of collections of sets with the following properties: \begin{enumerate}[{\rm(a)}] \item $\forall i\in\Nset\,\forall k\leq i\ d\mc F_{i,k}\leq\eps_{n_i}$, \item $\forall i\in\Nset\,\forall k\leq i\ \abs{\mc F_{i,k}}\leq n_i$, \item $\forall k\in\Nset\, \{\bigcup\mc F_{i,k}:i\geq k\}$ is a $\gamma$-cover of $X_k$. \end{enumerate} For each $i$ put $\mc G_i=\bigcup_{k\leq i}\mc F_{i,k}$. Then (a) yields $d\mc G_i\leq\eps_{n_i}$, (b) yields $\abs{\mc G_i}\leq n_i^2$, and (c) yields that $\{\bigcup\mc G_i:i\in\Nset\}$ is a $\gamma$-cover of $Y$. Apply Theorem~\ref{combDnull}(iii) with $f(n)=n^2$ to conclude that $Y$ is \dpnull{}. \end{proof} \begin{thm} The following are equivalent.\label{prodDpnull} \begin{enum} \item $X$ is \dpnull, \item for each $h\in\HH$, each $Y$ such that $\lbox_0^h(Y)=0$ and each complete $Z\sups X$ there is $F$ \si compact, $X\subs F\subs Z$, such that $\dbox^h(F\times Y)=0$, \item $\dbox^h(X\times Y)=0$ whenever $h\in\HH$ and $\lbox_0^h(Y)=0$, \item $\dbox^1(X\times E)=0$ for each $E\in\EE$, \item $\dbox^1(X\times C)=0$ for each $C\in\CC$. \end{enum} \end{thm} \begin{proof} (i)\Implies(ii): Suppose $\lbox_0^h(Y)=0$. Choose $\seqeps$ such that $N_Y(\eps_n)h(\eps_n)<2^{-n}$. Since $X$ is \dpnull{}, Theorem~\ref{combDnull}(ii) yields $I\in[\Nset]^\Nset$ and a sequence of sets $X_k\upto X$ such that $N_{X_k}(\eps_n)\leq n$ whenever $k\leq n\in I$. Set $F=\bigcup_k \clos{X}_k$, the closures in $Z$. Since the $X_k$'s are of finite box content, they are by Lemma~\ref{lemP}(i) totally bounded. Since $Z$ is complete, their closures are compact. Thus $F$ is \si compact. Clearly $\clos{X}_k\times Y\upto F\times Y$. Also $N_{\clos X_k\times Y}(\eps_n)\,h(\eps_n)\leq n2^{-n}$ whenever $k\leq n\in I$.
Hence $\lbox_0^h(\clos X_k\times Y)\leq\lim n2^{-n}=0$. (ii)\Implies(iii)\Implies(iv)\Implies(v) is trivial and (v)\Implies(i) is proved as in Theorem~\ref{prodPnull}, with the aid of Lemma~\ref{howroyd}(vi). \end{proof} \begin{thm} If $X$ and $Y$ are \dpnull{}, then $X\times Y$ is \dpnull{}. \end{thm} \begin{proof} This follows from Theorems~\ref{basicDpnull} and~\ref{prodDpnull}(iii). \end{proof} \begin{coro} If $X$ is \dpnull{} then for every metric space $Y$ $\dpdim X\times Y=\dpdim Y$. \end{coro} Recall that a separable metric space $X$ is a \emph{$\gamma$-set} if each countable $\omega$-cover contains a subcover that is a $\gamma$-cover (a cover $\mc U$ is an $\omega$-cover if for each finite set $F\subs X$ there is a set $U\in\mc U$ such that $F\subs U$). Nowik and Weiss~\cite{MR1905154} introduce a notion of a \Tprime-set (cf.~Section~\ref{sec:TT}) and prove that every $\gamma$-set is \Tprime{}. In view of Theorem~\ref{mainTT} \emph{infra}, the following generalizes their result. \begin{prop} Every $\gamma$-set is \dpnull{}. \end{prop} \begin{proof} Let $X$ be a $\gamma$-set. We verify condition (ii) of Theorem~\ref{combDnull}. Let $\Seqeps$. Fix an infinite set $\{x_n:n\in\Nset\}\subs X$. For $F\in[X]^{<\omega}$ put $$ U(F)=\bigcup\nolimits_{x\in F}B\bigl(x,\tfrac12\eps_{\abs F}\bigr) \setminus\{x_{\abs F}\}. $$ The family $\{U(F):F\in[X]^{<\omega}\}$ is obviously an $\omega$-cover. Therefore there is a sequence $\{F_n\}$ of finite sets such that $\{U(F_n)\}$ is a $\gamma$-cover. We may clearly assume that $\abs{F_0}\leq\abs{F_1}\leq\abs{F_2}\leq\dots$. Since $U(F)$ misses $x_{\abs F}$, for each $k\in\Nset$ there are only finitely many $n$'s such that $\abs{F_n}=k$. Passing to a subsequence we may thus assume that $\abs{F_0}<\abs{F_1}<\abs{F_2}<\dots$. The set $I=\{\abs{F_n}:n\in\Nset\}$ and the families $\mc F_{\abs{F_n}}=\bigl\{B\bigl(x,\frac12\eps_{\abs{F_n}}\bigr): x\in F_n\bigr\}$, $n\in\Nset$ obviously witness condition (ii) of Theorem~\ref{combDnull}. \end{proof} \section{\uhnull{}-sets vs.~$\MM$-additive and $\EE$-additive sets} \label{sec:meager} Recall that $\MM$ denotes the ideal of meager sets in $\Cset$ and $\EE$ is the intersection ideal in $\Cset$. Recall that a set $X\subs\Cset$ is termed \emph{strongly null} if $X+M\neq\Cset$ for each meager set $M$. The famous Galvin-Mycielski-Solovay Theorem asserts that a subset of $\Cset$ is strongly null if and only if it is \smz{}. Together with Theorem~\ref{hnullsmz} it yields: \begin{thm} A set $X\subs\Cset$ is \hnull{} if and only if it is strongly null. \label{mainSN} \end{thm} Recall that given an ideal $\mc J$ on $\Cset$, a set $X\subs\Cset$ is termed \emph{$\mc J$-additive} if $X+J\in\mc J$ for each $J\in\mc J$. Inspired by this theorem, we attempt to establish, for subsets of $\Cset$, similar connections between \hnull{} and $\MM$- and $\EE$-additive sets (this section), \upnull{} and $\NN$-additive sets (Section~\ref{sec:nadd}) and \dpnull{} and \Tprime-sets (Section~\ref{sec:TT}), respectively. Besides $\mc J$-additive sets we also define a stronger notion of sharply $\mc J$-additive sets. \begin{defn} Given an ideal $\mc J$ on $\Cset$, a set $X\subs\Cset$ is termed \emph{sharply $\mc J$-additive} if for every $J\in \mc J$ there is a \si compact set $F\sups X$ such that $F+J\in\mc J$. We also define a set $X\subs\Cset$ to be \emph{sharply null} if for each $M\in\MM$ there is a \si compact set $F\sups X$ such that $F+M\neq\Cset$. 
It is clear that a sharply $\mc J$-additive set is $\mc J$-additive and that a sharply null set is strongly null. \end{defn} In this section we prove the following theorem that in particular implies that a set $X\subs\Cset$ is $\MM$-additive if and only if it is $\EE$-additive. \begin{thm}\label{mainME} For any set $X\subs\Cset$, the following are equivalent. \begin{enum} \item $X$ is \uhnull{}, \label{uhnull} \item $X$ is $\MM$-additive, \label{Madd} \item $X$ is $\EE$-additive, \label{Eadd} \item $X$ is sharply $\MM$-additive, \label{sMadd} \item $X$ is sharply $\EE$-additive, \label{sharpE} \item $X$ is sharply null. \label{sharpN} \end{enum} \end{thm} \begin{proof} We shall prove now \eqref{uhnull}$\Implies$\eqref{Eadd} and \eqref{sharpE}$\Implies$\eqref{sharpN}$\Implies$\eqref{sMadd}$\Implies$\eqref{Madd}. The implications \eqref{Madd}$\Implies$\eqref{uhnull} and \eqref{Eadd}$\Implies$\eqref{sharpE} are subject to standalone Propositions~\ref{MeShelah} and~\ref{EisEsharp}. \smallskip \eqref{uhnull}$\Implies$\eqref{Eadd}: Assume $X$ be \uhnull. Let $E\in\EE$. By Theorem~\ref{prodUhnull}, $X\times E\in\NNs(\uhm_0^1)$. Since the mapping $(x,y)\mapsto x+y$ is Lipschitz, Lemma~\ref{lipschitz} yields $X+E\in\NNs(\uhm_0^1)$. \smallskip \eqref{sharpE}$\Implies$\eqref{sharpN}: We employ a Pawlikowski's~\cite{MR1380640} theorem, see also~\cite[Theorem 8.1.19]{MR1350295}: \emph{For each $M\in\MM$ there exists $E\in\EE$ such that for each $Y\subs\Cset$, if $Y+E\in\NN$, then $Y+M\neq\Cset$.} Suppose $X$ is sharply $\EE$-additive. Let $M\in\MM$. Let $E\in\EE$ be the set guaranteed by the Pawlikowski's theorem. Since $X$ is sharply $\EE$-additive, there is $F\sups X$ \si compact such that $F+E\in\EE\subs\NN$. Therefore $F+M\neq\Cset$. Thus $X$ is sharply null. \smallskip \eqref{sharpN}$\Implies$\eqref{sMadd}: Suppose $X$ is sharply null and let $M\in\MM$. We may assume that $M$ is \si compact. Let $Q\subs\Cset$ be a countable dense set. Since $Q$ is countable, $Q+M$ is meager. Therefore there is $F\sups X$ such that $Q+M+F\neq\Cset$. Let $z\notin Q+M+F$. Then, for all $q\in Q$, $z\notin q+M+F$, i.e. $z+q\notin M+F$. Therefore $(M+F)\cap(Q+z)=\emptyset$. Since $Q$ is dense, so is $Q+z$. Therefore the complement of $F+M$ is dense. Since $F+M$ is a continuous image of a \si compact set $F\times M$, it is \si compact as well. Since it has a dense complement, it is meager by the Baire category theorem. \smallskip \eqref{sMadd}$\Implies$\eqref{Madd} is obvious. \end{proof} In order to prove that every $\MM$-additive set is \uhnull{} we need a Shelah's~\cite{MR1324470} characterization of $\MM$-additive sets: \begin{lem}[{\cite[Theorem 2.7.17]{MR1350295}}]\label{ShelahM} $X$ is $\MM$-additive if and only if \begin{multline*} \forall f\in\UPset\,\,\exists g\in\Pset\,\,\exists y\in\Cset\,\, \forall x\in X\,\,\forall^\infty n\,\,\exists k\\ g(n)\leq f(k)<f(k+1)\leq g(n+1) \&\ x\rest [f(k),f(k+1))=y\rest [f(k),f(k+1)). \end{multline*} \end{lem} \begin{prop}\label{MeShelah} If $X\subs\Cset$ is $\MM$-additive, then $X$ is \uhnull. \end{prop} \begin{proof} Let $X\subs\Cset$ be $\MM$-additive. Let $h$ be a Hausdorff function. We have to show that $\uhm^h(X)=0$. Define recursively $f\in\UPset$ to satisfy $$ 2^{f(k)}\cdot h\bigl(2^{-f(k+1)}\bigr)\leq 2^{-k},\quad k\in\Nset. $$ By Lemma~\ref{ShelahM} there is $g\in\Pset$ and $y\in\Cset$ such that \begin{multline}\label{ShelahM2} \forall x\in X\,\,\forall^\infty n\,\,\exists k\\ g(n)\leq f(k)<g(n+1)\ \&\ x\rest [f(k),f(k+1))=y\rest [f(k),f(k+1)). 
\end{multline} For $p\in\CCset$ denote $[p]=\{f\in\Cset:p\subs f\}$ the corresponding cylinder. Define \begin{alignat*}{3} &\mc B_k&&= \bigl\{ \bigl[p\concat y\rest [f(k),f(k+1))\bigr]:p\in 2^{f(k)} \bigr\},\qquad && k\in\Nset,\\ &\mc G_n&&=\bigcup \bigl\{\mc B_k:g(n)\leq f(k)<g(n+1)\bigr\}, && n\in\Nset,\\ & \mc B&&=\bigcup_{k\in\Nset}\mc B_k=\bigcup_{n\in\Nset}\mc G_n. \end{alignat*} With this notation~\eqref{ShelahM2} reads \begin{equation}\label{ShelahM22} \forall x\in X\,\,\forall^\infty n\,\,\exists G\in\mc G_n\,\, x\in G. \end{equation} Since each of the families $\mc G_n$ is finite, it follows that $\mc G_n$'s witness that $\mc B$ is a $\gamma$-groupable cover of $X$. Using Lemma~\ref{gammagr} it remains to show that the Hausdorff sum $\sum_{B\in\mc B}h(dB)$ is finite. Since $\abs{\mc B_k}=2^{f(k)}$ and $dB=2^{-f(k+1)}$ for all $k$ and all $B\in\mc B_k$, we have $$ \sum_{B\in\mc B}h(dB)= \sum_{k\in\Nset}\sum_{B\in\mc B_k}h(dB)= \sum_{k\in\Nset}2^{f(k)}\cdot h(2^{-f(k+1)}) \leq\sum_{k\in\Nset}2^{-k}<\infty. \qedhere $$ \end{proof} In order to prove that every $\EE$-additive set is sharply $\EE$-additive, we employ a combinatorial description of closed null sets given by Bartoszynski and Shelah~\cite{MR1186905}, see also~\cite[2.6.A]{MR1350295}. For $f\in\UPset$ let $$ \CCC_f=\left\{\seq{F_n}:\forall n\in\Nset\left(F_n\subs2^{[f(n),f(n+1))}\ \&\ \frac{\abs{F_n}}{2^{f(n+1)}}\leq\frac{1}{2^n}\right)\right\} $$ and for $f\in\UPset$ and $F\in\CCC_f$ define $$ S(f,F)=\{z\in\Cset:\fmany n\in\Nset\ z\rest[f(n),f(n+1))\in F_n\}. $$ It is easy to check that $S(f,F)\in\EE$ for all $f\in\UPset$ and $F\in\CCC_f$. By \cite[Theorem 4.2]{MR1186905} (or~\cite[2.6.3]{MR1350295}), these sets actually form a base of $\EE$. The proof therein yields a little more: \begin{lem}\label{2.6.3} For each $E\in\EE$ and each $f\in\UPset$ there is $g\in\UPset$ and $G\in\CCC_{f{\circ}g}$ such that $E\subs S(f{\circ}g,G)$. \end{lem} \begin{lem}\label{Einc3} Let $f,g\in\UPset$, $F\in\CCC_f$ and $G\in\CCC_{f{\circ}g}$. Then $S(f,F)\subs S(f{\circ}g,G)$ if and only if \begin{equation}\label{Einc} \fmany n\in\Nset\ \forall k\in[g(n),g(n+1))\quad F_k\subs\{z\rest[f(k),f(k+1)):z\in G_n\}. \end{equation} \end{lem} \begin{proof} Suppose condition~\eqref{Einc} fails. Then there is $I\in[\Nset]^\Nset$ such that \begin{equation}\label{Einc2} \forall n\in I\ \exists k_n\in[g(n),g(n+1))\ \exists z_{k_n}\in F_{k_n}\ \forall z\in G_n\ z_{k_n}\nsubseteq z. \end{equation} For each $k\notin\{k_n:n\in I\}$ choose $z_k\in F_k$ and let $z\in\Cset$ be a sequence that extends simultaneously all $z_k$'s. Then obviously $z\in S(f,F)$. On the other hand, condition~\eqref{Einc2} ensures that $z\notin S(f{\circ}g,G)$. Thus $S(f,F)\subs S(f{\circ}g,G)$ yields \eqref{Einc}. The reverse implication is straightforward. \end{proof} \begin{prop}\label{EisEsharp} If $X\subs\Cset$ is $\EE$-additive, then $X$ is sharply $\EE$-additive. \end{prop} \begin{proof} Suppose $X$ is $\EE$-additive. Let $E\in\EE$. We are looking for an $F_\sigma$-set $\widetilde X\sups X$ such that $\widetilde X+E\in\EE$. There are $f\in\UPset$ and $F\in\CCC_f$ such that $E\subs S(f,F)$. Since $S(f,F)\in\EE$, we have $X+S(f,F)\in\EE$. By Lemma~\ref{2.6.3} there are $g$ and $G\in\CCC_{f{\circ}g}$ such that $X+S(f,F)\subs S(f{\circ}g,G)$, i.e. $x+S(f,F)\subs S(f{\circ}g,G)$ for all $x\in X$. The set $\widetilde X$ we are looking for is $$ \widetilde X=\{x\in\Cset:x+S(f,F)\subs S(f{\circ}g,G)\}. $$ Obviously $X\subs\widetilde X$. 
It is also obvious that $\widetilde X+E\subs\widetilde X+S(f,F)\subs S(f{\circ}g,G)\in\EE$. Thus it remains to show that $\widetilde X$ is $F_\sigma$. For any $x\in\Cset$ and $k\in\Nset$ set $$ F_k^x=\{z+x\rest[f(k),f(k+1)):z\in F_k\} $$ and consider the sequence $F^x=\seq{F_k^x}$. Clearly $F^x\in\CCC_f$ and $S(f,F^x)=x+S(f,F)$. Therefore $\widetilde X=\{x\in\Cset:S(f,F^x)\subs S(f{\circ}g,G)\}$. Use Lemma~\ref{Einc3} to conclude that $$ x\in\widetilde X\Leftrightarrow \fmany n\in\Nset\ \forall k\in[g(n),g(n+1))\ F_k^x\subs\{z\rest[f(k),f(k+1)):z\in G_n\}. $$ It follows that $\widetilde X$ is $F_\sigma$ as long as the sets $$ A_{n,k}=\{x\in\Cset:F_k^x\subs\{z\rest[f(k),f(k+1)):z\in G_n\}\} $$ are closed. Fix $n\in\Nset$ and $k\in[g(n),g(n+1))$. Unraveling the definitions yields $$ x\in A_{n,k}\Leftrightarrow \exists y\in2^{[f(k),f(k+1))}\quad y\subs x\ \&\ \forall z\in F_k\ \exists t\in G_n\ z+y\subs t. $$ Since all three sets $2^{[f(k),f(k+1))}$, $F_k$ and $G_n$ are finite, the set $A_{n,k}$ is even clopen. We are done. \end{proof} \noindent The proof of Theorem~\ref{mainME} is now complete. \begin{coro} Let $f:\Cset\to\Cset$ be a continuous mapping. If $X$ is $\MM$-additive, then so is $f(X)$. \end{coro} \begin{coro} If $X\subs\Cset$ is $\MM$-additive, then $\phi(X\times E)\in\EE$ for each $E\in\EE$ and every Lipschitz mapping $\phi:\Cset\times\Cset\to\Cset$. \end{coro} Recall that \emph{transitive additivity} of an ideal $\mc J$ is defined by $$ \add^\star\mc J=\min\{\abs{X}:\exists J\in\mc J\ X+J\notin\mc J\}. $$ Transitive additivity and other transitive coefficients are investigated in detail in \cite[2.7]{MR1350295}. The following is an obvious consequence of the equivalence $\MM$-additive $\Leftrightarrow$ $\EE$-additive. \begin{coro} $\add^\star\EE=\add^\star\MM$. \end{coro} Recall that a set $X\subs\Cset$ is \emph{transitively meager} (or \emph{meager in the transitive sense}, or an \emph{$\mc{AFC}'$-set}) if for every perfect set $P\subs\Cset$ there is an $F_\sigma$-set $F\sups X$ such that $(F+t)\cap P$ is meager in $P$ for all $t\in\Cset$. These sets are investigated e.g.~in~\cite{MR1905154}. One can prove that if $X$ is $\MM$-additive and $Y$ is transitively meager, then $X+Y$ is transitively meager, i.e.~that $\MM$-additive sets are $\mc{AFC}'$-additive, but that requires a nontrivial proof. Nowik, Scheepers and Weiss~\cite[Theorem 9]{MR1610427} have that every strongly meager set $X\subs\Cset$ (i.e.~$X+N\neq\Cset$ for all $N\in\NN$) is transitively meager. The following statement follows at once from Lemma~\ref{lemUM}. \begin{coro} Every $\MM$-additive set is universally meager and transitively meager. \end{coro} \begin{proof} Let $E\subs\Cset$ be $\MM$-additive, i.e.~\uhnull{}. It is universally meager by Theorem~\ref{umg}. To show it is transitively meager, let $P\subs\Cset$ be a perfect set and apply Lemma~\ref{lemUM} with $Z=X=\Cset$, $Y=P$, $\phi=\id_P$ and $\mc F=\{x\mapsto x+t:t\in\Cset\}$. \end{proof} \section{\upnull{} sets vs.~$\NN$-additive sets}\label{sec:nadd} The following theorem in particular shows that a set in $\Cset$ is \upnull{} if and only if it is $\NN$-additive. \begin{thm}\label{mainNN} For any set $X\subs\Cset$, the following are equivalent. \begin{enum} \item $X$ is \upnull, \item $X$ is $\NN$-additive, \item $X$ is sharply $\NN$-additive, \item $\hm^1(X\times N)=0$ for each $N\in\NN$. \end{enum} \end{thm} We employ Shelah's~\cite{MR1324470} characterization of $\NN$-additive sets. 
\begin{lem}[{\cite[Theorem 2.7.18]{MR1350295}}]\label{ShelahN} $X\subs\Cset$ is $\NN$-additive if and only if for each $f\in\UPset$ there is a sequence $\seq{H_n:n\in \Nset}$ such that \begin{enum} \item $\forall n\ H_n\subs 2^{[f(n),f(n+1))}$, \item $\forall n\ \abs{H_n}\leq n$, \item $X\subs\{x\in\Cset:\forall^\infty n\ x\rest [f(n),f(n+1))\in H_n\}$. \end{enum} \end{lem} \begin{proof}[Proof of Theorem~\ref{mainNN}] (i)\Implies(iii): Suppose $X$ is \upnull{} and $N\in\NN$. By Lemma~\ref{lemHaus}(ii) there is $h\prec 1$ such that $\hm^h(N)=0$. Let $g\in\HH$ be such that $gh\geq1$. Then Theorem~\ref{basicPnull}(iv) and Lemma~\ref{lemP}(iii) yield a \si compact set $F\sups X$ such that $\ubox^g(F)=0$. Apply Lemma~\ref{howroyd}(ii) to get $$ \hm^1(F\times N)\leq\hm^{gh}(F\times N)\leq\ubox^g(F)\hm^h(N)=0. $$ Since $(x,y)\mapsto x+y$ is clearly a Lipschitz mapping, Lemma~\ref{lipschitz} yields $\hm^1(F+N)=0$, i.e.~$F+N\in\NN$, as required. This argument also proves (iv)\Implies(ii). (i)\Implies(iv) is nothing but Proposition~\ref{XN}(ii) and (iii)\Implies(ii) is trivial. (ii)\Implies(i): Suppose that $X\subs\Cset$ is $\NN$-additive. Let $h\in\HH$. We verify that $\ubox^h(X)\leq 1$. Choose $F\in\UPset$ to satisfy $F(n)\leq\frac{1}{h(2^{1-n})}$. Define recursively $f\in\UPset$ subject to $$ 2^{f(n)}\, (n+1)!\leq F(f(n+1)),\quad n\in\Nset. $$ Obviously \begin{equation}\label{fuj} \forall n\,\, \forall k>n\quad 2^{f(n)}\, k!\leq F(f(k)). \end{equation} Let $\seq{H_n}$ be the sequence guaranteed by the lemma. Set $$ X_n=\{x\in\Cset:\forall k\geq n\ x\rest[f(k),f(k+1))\in H_k\},\quad n\in\Nset. $$ We verify that $N_{X_n}(2^{-i})\leq F(i)$ for each $n$ and all $i\geq f(n+1)$. Let $k$ be the unique integer satisfying $f(k)\leq i<f(k+1)$. In particular, $k>n$. It is obvious that $$ N_{X_n}(2^{-i})\leq 2^{f(n)} \abs{H_n}\cdot\abs{H_{n+1}}\cdot\dots\cdot\abs{H_k}\leq 2^{f(n)}n(n+1)\dots k. $$ Therefore~\eqref{fuj} yields $N_{X_n}(2^{-i})\leq F(f(k))\leq F(i)$. The definition of $F$ thus yields $N_{X_n}(2^{-i})h(2^{1-i})\leq 1$ and therefore $$ \ubox_0^h(X_n)=\varlimsup_{r\to0}N_{X_n}(r)\cdot h(r) \leq\varlimsup_{i\to\infty}N_{X_n}(2^{-i})\cdot h(2^{1-i}) \leq 1. $$ Condition (iii) of Lemma~\ref{ShelahN} ensures that $X_n\upto X$. Therefore $\ubox^h(X)\leq1$ by Lemma~\ref{IncreasingSetsLemma}. \end{proof} \begin{coro} Let $X\subs\Cset$ and let $f:\Cset\to\Cset$ be a continuous mapping. If $X$ is $\NN$-additive, then so is $f(X)$. \end{coro} \section{\Dpnull{} sets vs.~\Tprime-sets} \label{sec:TT} Inspired by Shelah's theorem (cf.~Lemma~\ref{ShelahN}), Nowik and Weiss~\cite{MR1905154} introduced and investigated the following notion. \begin{defn}[{\cite{MR1905154}}]\label{defT'} $X$ is called a \Tprime{}\emph{-set} if there exists $g\in\Pset$ such that for each $f\in\UPset$ there is $I\in[\Nset]^\Nset$ and a sequence $\seq{H_n:n\in I}$ such that \begin{enum} \item $\forall n\in I\ H_n\subs 2^{[f(n),f(n+1))}$, \item $\forall n\in I\ \abs{H_n}\leq g(n)$, \item $X\subs\{x\in\Cset:\forall^\infty n\in I\ x\rest [f(n),f(n+1))\in H_n\}$. \end{enum} \end{defn} They proved a number of results on \Tprime{}-sets, e.g.~that \Tprime{}-sets are Ramsey null. By proving that every $\gamma$-set is \Tprime{} they showed that $\NN$-additive sets are consistently a proper subclass of \Tprime{}-sets. They also proved that every \Tprime{}-set is $\MM$-additive and provided a CH example of an $\MM$-additive set that is not \Tprime{}. Nowik and Weiss ask at the end of~\cite{MR1905154} if \Tprime{}-sets coincide with $\EE$-additive sets.
In view of Theorem~\ref{mainME}, their example proves that it is not so: Every \Tprime{}-set is $\EE$-additive, but under CH the converse fails. To date it is not known if there is some natural ideal $\mc J$ such that \Tprime{}-sets coincide with $\mc J$-additive sets. In this section we prove that \Tprime{}-sets coincide with \dpnull{} sets. \begin{thm} Let $X\subs\Cset$. The following are equivalent. \begin{enum} \item $X$ is \dpnull, \item $X$ is a \Tprime{}-set. \end{enum} \label{mainTT} \end{thm} \begin{proof} (i)\Implies(ii): Let $f\in\UPset$ and set $\eps_n=2^{-f(n+1)}$. Let $I$ and $\mc F_n$'s be as in Theorem~\ref{combDnull}(ii). By the choice of $\eps_n$ we may assume that each set $F\in\mc F_n$ is a cylinder generated by some $p_F\in2^{f(n+1)}$. For $n\in I$ set $$ H_n=\{p_F\rest[f(n),f(n+1)):F\in\mc F_n\}. $$ Clearly $H_n\subs2^{[f(n),f(n+1))}$ and $\abs{H_n}\leq\abs{\mc F_n}\leq n$, so conditions (i) and (ii) of Definition~\ref{defT'} hold with the witnessing function $g(n)=n$. Condition (iii) of Definition~\ref{defT'} follows from the fact that $\{\bigcup\mc F_n:n\in I\}$ is a $\gamma$-cover of $X$. (ii)\Implies(i): Let $h\in\HH$. We verify that $\dbox^h(X)\leq 1$. Choose $G\in\UPset$ to satisfy $G(n)\leq\frac{1}{h(2^{-n})}$. Let $g\in\Pset$ be the function from Definition~\ref{defT'} of \Tprime{}. Define recursively $f\in\UPset$ to satisfy $$ 2^{f(n)}\cdot g(n)\leq G(f(n+1)). $$ Let $I\in[\Nset]^\Nset$ and $\seq{H_n:n\in I}$ be as in the definition of \Tprime{}. Set \begin{align*} F_n&=\{x\in\Cset:x\rest[f(n),f(n+1))\in H_n\},\quad n\in I,\\ X_k&= \bigcap_{n\geq k,\, n\in I} F_n,\quad k\in\Nset \end{align*} and for each $n\in I$ put $\eps_n=2^{-f(n+1)}$. Fix $k$. It is obvious that $$ N_{X_k}(\eps_n)\leq N_{F_n}(\eps_n)\leq 2^{f(n)}\cdot\abs{H_n}\leq 2^{f(n)}\cdot g(n)\leq G(f(n+1)) $$ holds for each $n\geq k,\,n\in I$ and since $G(f(n+1))\leq\frac{1}{h(2^{-f(n+1)})}=\frac{1}{h(\eps_n)}$, we finally get $$ \lbox_0^h(X_k)\leq\varliminf_{n\in I}N_{X_k}(\eps_n)\cdot h(\eps_n) \leq 1. $$ Condition~\ref{defT'}(iii) guarantees that $X_k\upto X$. Hence $\dbox^h(X)\leq 1$, as required. \end{proof} \begin{coro} Let $X\subs\Cset$ and let $f:\Cset\to\Cset$ be a continuous mapping. If $X$ is \Tprime{}, then so is $f(X)$. \end{coro} \bibliographystyle{amsplain}
{ "timestamp": "2012-08-29T02:00:35", "yymm": "1208", "arxiv_id": "1208.5521", "language": "en", "url": "https://arxiv.org/abs/1208.5521", "abstract": "A separable metric space X is an H-null set if any uniformly continuous image of X has Hausdorff dimension zero. upper H-null, directed P-null and P-null sets are defined likewise, with other fractal dimensions in place of Hausdorff dimension. We investigate these sets and show that in 2^\\omega{} they coincide, respectively, with strongly null, meager-additive, T' and null-additive sets. Some consequences: A subset of 2^\\omega{} is meager-additive if and only if it is E-additive; if f:2^\\omega->2^\\omega{} is continuous and X is meager-additive, then so is f(X), and likewise for null-additive and T'-sets.", "subjects": "Logic (math.LO)", "title": "Small sets of reals through the prism of fractal dimensions", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9867771755256741, "lm_q2_score": 0.7185943925708561, "lm_q1q2_score": 0.7090925450496569 }
https://arxiv.org/abs/2207.11505
Monotone Subsequences in Locally Uniform Random Permutations
A locally uniform random permutation is generated by sampling $n$ points independently from some absolutely continuous distribution $\rho$ on the plane and interpreting them as a permutation by the rule that $i$ maps to $j$ if the $i$th point from the left is the $j$th point from below. As $n$ tends to infinity, decreasing subsequences in the permutation will appear as curves in the plane, and by interpreting these as level curves, a union of decreasing subsequences give rise to a surface. We show that, under the correct scaling, for any $r\ge0$, the largest union of $\lfloor r\sqrt{n}\rfloor$ decreasing subsequences approaches a limit surface as $n$ tends to infinity, and the limit surface is a solution to a specific variational problem. As a corollary, we prove the existence of a limit shape for the Young diagram associated to the random permutation under the Robinson-Schensted correspondence. In the special case where $\rho$ is the uniform distribution on the diamond $|x|+|y|<1$ we conjecture that the limit shape is triangular, and assuming the conjecture is true we find an explicit formula for the limit surfaces of a uniformly random permutation and recover the famous limit shape of Vershik, Kerov and Logan, Shepp.
\section{Introduction}\label{sec:introduction} It has been known since the 1970s that the longest decreasing (or increasing) subsequence of a random permutation of $\{1,2,\dotsc,n\}$ has length approximately $2\sqrt{n}$ for large $n$. More generally, the (scaled) limit of the cardinality of the largest union of $\lfloor r\sqrt{n}\rfloor$ disjoint decreasing subsequences is known for any $r\ge0$, where $\lfloor\cdot\rfloor$ denotes the integral part. But what does this union typically look like in the permutation diagram? And what if the permutation is not sampled from the uniform distribution? The aim of this paper is to answer these questions, at least for a family of distributions called \emph{locally uniform}. Let $\sigma$ be a finite set of points in the plane, no two of which have the same $x$- or $y$-coordinate. We can interpret any such $\sigma$ as a permutation by letting $\sigma(i)=j$ if the $i$th point from the left is the $j$th point from below. If $\sigma$ consists of $n$ points that are sampled independently from some given absolutely continuous distribution $\rho$ on the plane, $\sigma$ is said to be \emph{locally uniform (with density $\rho$)}. In particular, if $\rho$ is the uniform distribution on the unit square $(0,1)^2$, then, as a permutation, $\sigma$ is uniformly distributed among all permutations of order $n$. In this geometric setting, decreasing subsequences of $\sigma$ appear as ``decreasing subsets'' of the permutation points in the plane, and we may talk about the \emph{location} of a union of decreasing subsequences. \Cref{fig:onion} shows an example. \begin{figure} \begin{centering} \begin{tikzpicture} \node[xscale=1,yscale=-1,inner sep=0,outer sep=0](0,0){\includegraphics[scale=0.23]{uniform_100000_20_bw.pdf}}; \draw[very thick] (2.2,-0.2) -- (3.2,-1.5) -- (2.8,-2.05) -- (1.8,-0.75) -- cycle; \end{tikzpicture} \caption{The location of the largest union of 20 decreasing subsequences in a random permutation of order 100{,}000 drawn from the uniform distribution on the square $(0,1)^2$. The small dots are the points in the permutation, and adjacent points in the decreasing subsequences are connected by line segments. We have also sketched a local parallelogram.}\label{fig:onion} \end{centering} \end{figure} One could imagine a two-dimensional surface whose level curves follow the decreasing subsets, and as $n$ tends to infinity, under some rescaling one might hope to obtain a \emph{limit surface} for a maximal union of $k$ decreasing subsets, where $k$ depends on $n$. (It is not hard to see that we must require that $k$ grows as $\sqrt{n}$.) This appears to be a new question already for uniform random permutations, and we will motivate below why we think it is a both natural and powerful one. \section{Background and significance}\label{sec:background} We will give a brief introduction to the history and current situation of the research area of monotone subsequences in random permutations. For a comprehensive review, we refer to Romik~\cite{RomikBook}. Let $\sigma$ be a \emph{permutation} of order $n$. A \emph{subsequence} of $\sigma$ is an ordered sequence $(\sigma(i_1),\sigma(i_2),\dotsc,\sigma(i_k))$ where $i_1<i_2<\dotsb<i_k$. It is \emph{increasing} if $\sigma(i_1)<\sigma(i_2)<\dotsb<\sigma(i_k)$ and \emph{decreasing} if $\sigma(i_1)>\sigma(i_2)>\dotsb>\sigma(i_k)$. 
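To make the rule $\sigma(i)=j$ described above concrete, the following minimal Python sketch (NumPy is assumed, and the helper name \texttt{points\_to\_permutation} is ours, not from the paper) converts a finite point set with distinct coordinates into the corresponding permutation.
\begin{verbatim}
import numpy as np

def points_to_permutation(pts):
    # sigma(i) = j if the i-th point from the left is the j-th point from below
    pts = np.asarray(pts)
    left_to_right = np.argsort(pts[:, 0])             # indices ordered by x
    height_rank = np.argsort(np.argsort(pts[:, 1]))   # 0-based rank by y
    return height_rank[left_to_right] + 1             # 1-based permutation values

# Example: ten points sampled uniformly in the unit square, i.e. a locally
# uniform random permutation with the uniform density on (0,1)^2
rng = np.random.default_rng(0)
print(points_to_permutation(rng.random((10, 2))))
\end{verbatim}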
Let $\mathsf{L}(\sigma)$ denote the length of (any of) the longest increasing subsequence(s) of $\sigma$, that is \[ \mathsf{L}(\sigma):=\max\{k\,:\, \text{there is an increasing subsequence of length $k$}\}. \] In 1961 Ulam \cite{Ulam} asked the following question, sometimes known as {\bf Ulam's problem}: If $\sigma_n$ is chosen randomly from the uniform distribution of all permutations of order $n$, what is the expected value of $\mathsf{L}(\sigma_n)$? About ten years later, Hammersley \cite{Hammersley} was able to prove that there exists a limit not only for the expected value $\mathbb{E}\mathsf{L}(\sigma_n)$ but also for $\mathsf{L}(\sigma_n)$ itself. \begin{theo}[Hammersley, 1972]\label{th:hammersley} The limit $\Gamma=\lim_{n\rightarrow\infty}\mathbb{E}\mathsf{L}(\sigma_n)/\sqrt{n}$ exists, and $\mathsf{L}(\sigma_n)/\sqrt{n}\rightarrow\Gamma$ in probability. \end{theo} Though simulations suggested that $\Gamma=2$, this was not proven until 1977 by Vershik and Kerov \cite{VershikKerov}. They used the fact that $\mathsf{L}(\sigma)$ equals the length of the first row in the Young diagram corresponding to $\sigma$ under the Robinson-Schensted bijection, and in fact, they were able to describe a \emph{limit shape} for the Young diagram corresponding to a random permutation as $n$ grows to infinity. The latter result was also obtained independently by Logan and Shepp \cite{LoganShepp}. The next break-through happened in 1999, when Baik, Deift and Johansson \cite{BaikDeiftJohansson} were able to describe the asymptotic behaviour of $\mathsf{L}(\sigma_n)$ in detail. \begin{theo}[Baik-Deift-Johansson 1999] The random variable \[ \frac{\mathsf{L}(\sigma_n)-2\sqrt{n}}{n^{1/6}} \] converges in distribution to the Tracy-Widom distribution as $n\rightarrow\infty$. \end{theo} \subsection{Where is the longest increasing subsequence?} Hammersley's approach was to think about a random permutation as a set of dots randomly and independently positioned in the unit square, and that setting will be convenient also for us, so let us redefine our terminology a bit, in accordance with \cref{sec:introduction}. Let $\sigma$ be a finite set of points in the plane, no two of which have the same $x$- or $y$-coordinate. We can interpret any such $\sigma$ as a permutation by letting $\sigma(i)=j$ if the $i$th point from the left is the $j$th point from below. A subset $I$ of $\sigma$ is \emph{increasing} if, for any pair of points $(x,y)$ and $(x',y')$ belonging to $I$, $x<x'$ if and only if $y<y'$. It is \emph{decreasing} if $x<x'$ if and only if $y>y'$. This corresponds exactly to the increasing and decreasing subsequences that we defined earlier. In this framework a natural question arises: \begin{quest}\label{qu:wherelongest} For a random permutation $\sigma$ generated by sampling $n$ points uniformly in the unit square, where in the plane does the longest increasing subsequence typically reside? \end{quest} It follows quite easily from Hammersley's work that, with high probability, all maximal increasing subsets will be contained in a small region around the diagonal of the unit square as $n$ tends to infinity. But the new formulation also calls for a generalization: What if the points in $\sigma$ are sampled from some \emph{non-uniform} distribution? Deuschel and Zeitouni \cite{DeuschelZeitouni95} considered this generalization and were able to describe a limit curve for the maximal increasing subset when $\sigma$ is a locally uniform random permutation. 
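The constant $\Gamma=2$ appearing in Hammersley's theorem and in the Vershik--Kerov and Logan--Shepp results is easy to probe numerically. The sketch below (Python with NumPy assumed; the helper name \texttt{lis\_length} is ours) computes $\mathsf{L}(\sigma_n)$ by patience sorting in $O(n\log n)$ time and prints $\mathsf{L}(\sigma_n)/\sqrt{n}$ for a few uniform random permutations; the printed values concentrate near $2$.
\begin{verbatim}
import bisect
import numpy as np

def lis_length(perm):
    # patience sorting: piles[k] is the smallest possible last element of an
    # increasing subsequence of length k + 1 among the entries read so far
    piles = []
    for x in perm:
        k = bisect.bisect_left(piles, x)
        if k == len(piles):
            piles.append(x)
        else:
            piles[k] = x
    return len(piles)

rng = np.random.default_rng(1)
n = 100_000
print([lis_length(rng.permutation(n)) / np.sqrt(n) for _ in range(5)])
\end{verbatim}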
We will concern ourselves with the following generalized version of \cref{qu:wherelongest} for locally uniform random permutations. \begin{quest}\label{qu:whereunion} Where in the plane does a maximal union of $r\sqrt{n}$ increasing subsets typically reside? \end{quest} \subsection{Novelty} Except for the above-mentioned result on the limit curve of the longest increasing subsequence by Deuschel and Zeitouni, the idea to look at not only the cardinality but also the \emph{location} of monotone subsequences seems to be completely original. As we will see in \cref{sec:generalpde} below, this approach is potentially powerful since it enables us to use analytical tools to study maximal unions of decreasing subsequences in random permutations sampled from a non-uniform distribution. There are some results on the \emph{cardinality} of increasing subsequences for non-uniform random permutations, but to the best of our knowledge those are all concerned with $q$-analogues of the uniform distribution, where a permutation $\pi$ is sampled with probability proportional to $q^{f(\pi)+f(\pi^{-1})}$ for some classical permutation statistic $f(\cdot)$ like the number of inversions (the Mallows distribution) \cite{BhatnagarPeled, MuellerStarr} or the major index \cite{Fulman}. In contrast, the family of locally uniform distributions is much larger; it is uncountably infinite-dimensional. \subsection{Relation to limit shapes of Young diagrams} By a theorem of Greene \cite{Greene} (see \cref{pr:curtisgreene} below), the cardinality of a maximal union of $k$ decreasing subsequences is encoded in the Young diagram corresponding to the permutation under the Robinson-Schensted bijection, so the asymptotic behavior of such cardinalities corresponds to a \emph{limit shape} of a random Young diagram as $n$ tends to infinity. If the random permutation is drawn from the uniform distribution, the corresponding Young diagram is drawn from the Plancherel distribution, and its limit shape is the well-known result by Vershik, Kerov and Logan, Shepp that we mentioned above. For non-uniform random permutations, however, the limiting behavior of the corresponding Young diagram is an open problem. Limit shapes of Young diagrams have gained much interest over the years, and there are results for specific probability distributions of Young diagrams, often generated by a stochastic process \cite{JockuschProppShor, Seppalainen98}. More recent examples include \cite{ErikssonJonssonSjostrand12, ErikssonJonssonSjostrand17, ErikssonJonssonSjostrand18Bulgarian, ErikssonJonssonSjostrand18Markov}. \subsection{Relation to permutation limits} Locally uniform random permutations, our main objects of study, appear naturally as limit objects in the sense of Hoppen et~al.~\cite{HoppenEtAl}. Their main result is a definition of convergence for permutation sequences and an equivalence between such sequences and (essentially) locally uniform random permutations, and their paper is the first step towards a theory for permutations analogous to the emerging theory of limits of graphs created by Lov\'asz and many coauthors; see~\cite{LovaszSzegedy} for an overview. \section{Terminology and results}\label{sec:results} \subsection{Probabilistic setting} Our probability space will always be a simple point process in the plane viewed as a random set of points. We will define complex statements about such random point sets without worrying about the measurability of the truth-value of the statement.
Typically, such statements will be parameterized by a real number $\gamma$, and we say that the statement holds \emph{asymptotically almost surely (a.a.s.) as $\gamma\rightarrow\infty$} if it is implied by an event which happens with probability tending to one. To be precise, we make the following definition. \begin{defi} Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $\{E_\gamma\subseteq\Omega\}_{\gamma>0}$ be a collection of outcome sets indexed by a parameter $\gamma$. (Note that we do not require the sets to be elements of the $\sigma$-algebra $\mathcal{F}$.) We say that $E_\gamma$ happens \emph{asymptotically almost surely (a.a.s.) as $\gamma\rightarrow\infty$} if there is a family $\{F_\gamma\in\mathcal{F}\}_{\gamma>0}$ such that $F_\gamma\subseteq E_\gamma$ for any $\gamma$, and $P(F_\gamma)\rightarrow1$. Also, if $\{X_\gamma\}_{\gamma>0}$ is a family of functions from $\Omega$ to ${\mathbb R}$, we say that $X_\gamma\rightarrow x$ \emph{in probability} if, for any $\varepsilon>0$, the event $\{\omega\in\Omega\ :\ \abs{X_\gamma(\omega)-x}<\varepsilon\}$ happens a.a.s.\ as $\gamma\rightarrow\infty$. \end{defi} \subsection{Increasing and decreasing sets} A set $P$ of points in ${\mathbb R}^2$ is \emph{increasing} if, for any pair of points $(x,y)$ and $(x',y')$ belonging to $P$, $x<x'$ if and only if $y<y'$. It is \emph{decreasing} if $x<x'$ if and only if $y>y'$. It is \emph{$k$-increasing} (resp.~$k$-decreasing) if it is a union of $k$ increasing (resp.~decreasing) sets. \subsection{The local parallelogram} Consider a situation like that in \cref{fig:onion}, with a permutation embedded in the unit square and a chosen maximal union of $k$ decreasing subsets. Our main idea is to exploit the local uniformity of the random permutation by zooming in on a small region. Let us choose the region to have the shape of a narrow parallelogram as depicted in \cref{fig:onion} where the long edges are parallel to the decreasing subsets passing by (the curves in the figure) and the short edges have the same slope but with a positive sign. Then, we might ask what proportion of the permutation points inside the parallelogram are ``picked up'' by the passing curves. Intuitively, this value is only dependent on the local density and slope of the curves together with the local density of permutation points. In fact, since the property of a set being decreasing is invariant under rescaling of the $x$- and $y$-axes, the question can be reduced to a problem about a narrow rectangle of 45-degree slope. Given a ``density'' of lines with slope minus one, what proportion of the permutation points inside the narrow rectangle are ``picked up'' by the lines? The following theorem follows directly from \cref{pr:narrowrectangles}, proven in \cref{sec:phi}. \begin{theo}\label{th:narrowrectanglesresult} Let $\Omega$ be the open rectangle \[ 0<(x+y)/\sqrt2<1,\ \ 0<(y-x)/\sqrt2<\beta \] for some $\beta>0$, and let $r\ge0$. For each $\gamma>0$, let $\sigma_\gamma$ be a Poisson point process in the plane with homogeneous intensity $\gamma$. Define the random variable $\Lambda^{(\gamma)}$ as the size of a maximal $\lfloor r\sqrt{\gamma}\rfloor$-decreasing subset of $\sigma_\gamma\cap\Omega$. Then, as $\gamma$ and $\beta$ tends to infinity, $\Lambda^{(\gamma)}/\beta\gamma$ converges in mean to a constant $\Phi(r)$. \end{theo} The function $\Phi$ defined by the preceding theorem will play a main role throughout the paper. 
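For finite $\beta$ and $\gamma$ the quantity $\Lambda^{(\gamma)}/\beta\gamma$ can be estimated by simulation. The sketch below (Python with NumPy assumed; the helper names are ours) samples a Poisson point process on the rotated rectangle and evaluates the size of a maximal $\lfloor r\sqrt{\gamma}\rfloor$-decreasing subset via Robinson-Schensted row insertion; by \cref{pr:curtisgreene} below, this size equals $\sum_i\min(\mathrm{row}_i,k)$, where $\mathrm{row}_i$ denotes the $i$th row length of the insertion tableau.
\begin{verbatim}
import bisect
import numpy as np

def rsk_row_lengths(values):
    # row lengths of the Robinson-Schensted insertion tableau of a sequence
    # of distinct numbers (standard bumping, binary search within each row)
    rows = []
    for x in values:
        for row in rows:
            k = bisect.bisect_left(row, x)
            if k == len(row):
                row.append(x)
                break
            row[k], x = x, row[k]
        else:
            rows.append([x])
    return [len(row) for row in rows]

def max_k_decreasing(points, k):
    # largest k-decreasing subset = sum_i min(row_i, k)  (Greene's theorem)
    pts = points[np.argsort(points[:, 0])]          # scan from left to right
    heights = np.argsort(np.argsort(pts[:, 1]))     # permutation values
    return sum(min(r, k) for r in rsk_row_lengths(list(heights)))

rng = np.random.default_rng(2)
beta, gamma, r = 20.0, 400.0, 1.0
n = rng.poisson(beta * gamma)                       # Poisson process on the rectangle
u, v = rng.random(n), beta * rng.random(n)          # (x+y)/sqrt(2) and (y-x)/sqrt(2)
pts = np.column_stack(((u - v) / np.sqrt(2), (u + v) / np.sqrt(2)))
k = int(np.floor(r * np.sqrt(gamma)))
print(max_k_decreasing(pts, k) / (beta * gamma))    # crude estimate of Phi(r)
\end{verbatim}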
\subsection{Limit shape under Robinson-Schensted} While we have introduced $\Phi$ as a tool for showing our main results below, it turns out that $\Phi$ is surprisingly interesting in its own right: The derivative of $\Phi$ is a limit shape of the Young diagram associated with $\sigma$ under the Robinson-Schensted correspondence, where $\sigma$ is generated by a homogeneous Poisson point process on a diamond square. A \emph{Young diagram} $\lambda$ (in French notation) is a finite collection of unit cells, arranged in bottom-justified columns whose lengths are in non-increasing order from the left. The length of the $i$th column from the left is denoted by $\lambda_i$, and we let $\lambda_i=0$ if $i$ is larger than the number of columns.\footnote{Note that this deviates from the standard notation. In the literature, usually $\lambda_i$ denotes the length of the $i$th longest row, but since we are interested in decreasing rather than increasing subsets, the columns will be most important to us.} \Cref{fig:youngdiagram} shows an example. \begin{figure} \ydiagram{1,1,3,4} \caption{A Young diagram $\lambda$ with column lengths given by $(\lambda_1,\lambda_2,\dotsc)=(4,2,2,1,0,\dotsc)$} \label{fig:youngdiagram} \end{figure} The \emph{Robinson-Schensted correspondence} is a bijection between permutations and pairs of \emph{standard Young tableaux} of the same shape. We will not define this bijection or even the concept of standard Young tableaux since all we need is contained in the proposition of Greene below. For a comprehensive review we refer to \cite{StanleyEnum2}. \begin{defi} Suppose $\sigma$ is a finite set of points in general position in the sense that no two points share the same $x$- or $y$-coordinate. Then we define the \emph{permutation corresponding to $\sigma$} to be the permutation $\pi$ of $\{1,2,\dotsc,\#\sigma\}$ defined by letting $\pi(i)=j$ if the $i$th point from the left in $\sigma$ is the $j$th point from below. The \emph{Young diagram corresponding to $\sigma$} is the shape of the standard Young tableaux corresponding to $\pi$ under Robinson-Schensted. \end{defi} In our setting, Greene's \cite{Greene} beautiful connection between the Young diagram and the decreasing (or increasing) subsequences of the permutation can be formulated as follows. \begin{prop}[Greene]\label{pr:curtisgreene} Suppose $\sigma$ is a finite set of points in general position in the sense that no two points share the same $x$- or $y$-coordinate. For each $k$, let $\Lambda_k$ be the size of a maximal $k$-decreasing subset of $\sigma$. Then \[ \Lambda_k=\sum_{i=1}^k \lambda_i, \] where $\lambda$ is the Young diagram corresponding to $\sigma$. \end{prop} In \cref{pr:phiproperties} we will show that $\Phi$ is concave and thus differentiable almost everywhere. Our next theorem, proven in \cref{sec:mainproof}, states that $\Phi'$ is a limit shape of the Young diagram corresponding to a homogeneous Poisson point process on the diamond region $\abs{x}+\abs{y}<1/\sqrt2$. \begin{restatable*}{theo}{rhombuslimitshapeexists}\label{th:rhombuslimitshapeexists} Let $\Omega$ be the open diamond square $\abs{x}+\abs{y}<1/\sqrt2$ and, for each $\gamma>0$, let $\sigma_\gamma$ be a Poisson point process on $\Omega$ with intensity $\gamma$. Then the Young diagram $\lambda^{(\gamma)}$ corresponding to $\sigma_\gamma$ approaches the limit shape $\Phi'$ in the sense that, for any $r>0$ where $\Phi'(r)$ exists, \[ \frac1{\sqrt\gamma} \lambda^{(\gamma)}_{\lfloor r\sqrt{\gamma}\rfloor+1} \rightarrow \Phi'(r) \] in probability as $\gamma\rightarrow\infty$.
\end{restatable*} In fact, as is evident in \cref{fig:triangularlimitshape}, computer simulations strongly suggest that the limit shape is an isosceles triangle! \begin{figure} \centerfloat \begin{tabular}{ccc} \upsidedown{\includegraphics[height=42mm]{triangularLimitShape1000.pdf}} & \upsidedown{\includegraphics[height=42mm]{triangularLimitShape10000.pdf}} & \upsidedown{\includegraphics[height=42mm]{triangularLimitShape100000.pdf}} \\ $n=1000$ & $n=10{,}000$ & $n=100{,}000$ \end{tabular} \caption{The contours of Young diagrams corresponding to $n$ points sampled from the uniform distribution on the diamond shape $\abs{x}+\abs{y}<1$.}\label{fig:triangularlimitshape} \end{figure} We make the following conjecture. \begin{conj}\label{con:triangularlimitshape} \[ \Phi'(r)= \begin{cases} \sqrt2-r & \text{if $0\le r \le \sqrt2$,} \\ 0 & \text{if $r>\sqrt2$.} \end{cases} \] \end{conj} \subsection{Doubly increasing functions} As mentioned in the introduction, we want to find some kind of limit object for a bundle of decreasing subsets like those depicted by the curves in \cref{fig:onion}, and a natural idea is to view the curves as level curves of a two-dimensional surface. Such surfaces can be described by functions of $x$ and $y$ that are increasing in both variables. Define a partial order $\le$ on ${\mathbb R}^2$ by letting $(x_1,y_1)\le(x_2,y_2)$ if $x_1\le x_2$ and $y_1\le y_2$. For any subset $A$ of ${\mathbb R}^2$, a function $u:A\rightarrow{\mathbb R}$ is \emph{doubly increasing} if $u(x_1,y_1)\le u(x_2,y_2)$ whenever $(x_1,y_1)\le(x_2,y_2)$. For $r\ge0$, we let ${\mathcal U}_r(A)$ denote the set of doubly increasing functions $u$ on $A$ with $\diam u(A)\le r$, and we let ${\mathcal U}(A):=\bigcup_{r\ge0}{\mathcal U}_r(A)$ denote the set of all bounded doubly increasing functions on $A$. Let ${\mathcal U}_{h,r}(A)$ denote the subset of ${\mathcal U}_r(A)$ consisting of functions with values in $[h,h+r]$. Exactly how should a $k$-decreasing set be interpreted as level curves and mapped to a doubly increasing function? A simple idea is to convert each decreasing subset to a curve by joining adjacent points with line segments, but this is problematic: First, curves from different decreasing subsets might intersect, and second, there might be multiple ways of partitioning the $k$-decreasing set into $k$ decreasing subsets. We can avoid both of these problems by focusing instead on the \emph{increasing} subsets of the $k$-decreasing set, thanks to the following well-known combinatorial fact (for which we provide a proof for completeness). \begin{prop}\label{pr:decreasingincreasingrelation} Let $P$ be a finite set of points in general position in the sense that no two points share the same $x$- or $y$-coordinate. Then, $P$ is $k$-decreasing if and only if it has no increasing subset of cardinality larger than $k$. \end{prop} \begin{proof} Suppose $P$ is a union of $k$ decreasing sets. No two elements of an increasing set can belong to the same decreasing set, so, by the pigeonhole principle, there is no increasing subset of $P$ of cardinality larger than $k$. For the converse, suppose $P$ has no increasing subset with more than $k$ elements. Let $p_1,\dotsc,p_n$ be the points in $P$ sorted from west to east. Construct a sequence of sets $D_1,D_2,\dotsc$ by the following procedure. Initially, let $D_1,D_2,\dotsc$ be empty sets. Then, iteratively for $i=1,\dotsc,n$, add $p_i$ to the first of the sets $D_1,D_2,\dotsc$ that currently contains only points to the north-west of $p_i$.
After this procedure, if $p$ is an element of $D_{k+1}$, then $D_k$ must contain an element south-west of $p$; otherwise, $p$ would have been added to $D_k$ instead. Iterating this argument yields an increasing set of cardinality $k+1$, which contradicts our assumption. Thus, $D_{k+1}$ is empty and $P$ is a union of the $k$ decreasing sets $D_1,\dotsc,D_k$. \end{proof} Accordingly, our interpretation of point sets as doubly increasing functions looks as follows. \begin{defi}\label{def:kappa} For any finite set $P$ of points in the plane, define a map $\kappa_P:{\mathbb R}^2\rightarrow{\mathbb N}$ by letting $\kappa_P(x,y)$ be the maximal size of an increasing subset of $P\cap((-\infty,x]\times(-\infty,y])$. \end{defi} See \cref{fig:kappa} for an example. \begin{figure} \begin{tikzpicture}[scale=0.8] \draw (1,8) -- (1,4) -- (3,4) -- (3,1) -- (8,1); \draw[fill] (1,4) circle(2pt) (3,1) circle(2pt); \draw (2,8) -- (2,6) -- (4,6) -- (4,3) -- (5,3) -- (5,2) -- (8,2); \draw[fill] (2,6) circle(2pt) (4,3) circle(2pt) (5,2) circle(2pt); \draw (6,8) -- (6,5) -- (8,5); \draw[fill] (6,5) circle(2pt); \draw (1,2) node {0} (3,5) node {1} (5,5) node {2} (7,6) node {3}; \end{tikzpicture} \caption{The $\kappa_P$ function for a set $P$ of six points.} \label{fig:kappa} \end{figure} \subsection{A functional} Now, we will take a global perspective: Instead of letting the curves be defined by a maximal union of decreasing subsets, let us think of them simply as a bundle of decreasing curves that we can bend and move freely. The goal is to position these curves so that together they pick up as many permutation points as possible. With our parameterization of the bundle of curves by a two-dimensional surface, the discrete optimization problem can be approximated and formulated as a continuous variational problem where we want to choose the two-dimensional surface that maximizes a certain functional. Let $\mu$ denote the Lebesgue measure on ${\mathbb R}^2$. By a \emph{density domain} we will mean a pair $(\Omega,\rho)$ where $\Omega$ is an open subset of ${\mathbb R}^2$ of positive finite measure and $\rho$ is a nonnegative function on $\Omega$ such that $\int_{\Omega}\rho\,d\mu$ is finite. We will often write $\norm{f}_A$ as a shorthand for $\int_A\abs{f}\,d\mu$. \begin{defi}\label{df:FrhoandL} For any $\eta,\theta\ge0$, let \[ L(\eta,\theta):=\begin{cases} \eta\,\Phi(\sqrt{2\theta/\eta}) & \text{if $\eta>0$,} \\ 0 & \text{if $\eta=0$,} \end{cases} \] and, for any density domain $(\Omega,\rho)$, let $F_\rho:{\mathcal U}(\Omega)\rightarrow{\mathbb R}$ be a (nonlinear) functional given by \[ F_\rho(u) := \int_\Omega L(\rho, u_xu_y)\,d\mu=\norm{L(\rho, u_xu_y)}_\Omega, \] where $u_x$ and $u_y$ denote partial derivatives. \end{defi} We will show in \cref{sec:Fwelldefined} that $F_\rho$ is well defined. Intuitively, the factor $\Phi(\sqrt{2u_xu_y/\rho})$ of the integrand in the definition of $F_\rho$ measures the ``local efficiency'' of the surface $u$, that is, the proportion of points in the neighborhood that the curves (encoded by the surface $u$) will pick up, where we once again refer to \cref{fig:onion}. Note that it is invariant under rescaling of the $x$- and $y$-axes if the density $\rho$ is rescaled accordingly. When the density domain $(\Omega,\rho)$ is implicitly understood, we let $\functional_{\rm max}$ be the map from ${\mathbb R}_{\ge0}$ to ${\mathbb R}_{\ge0}$ defined by letting \[ \functional_{\rm max}(r):=\sup_{u\in{\mathcal U}_r(\Omega)}F_{\rho}(u).
\] \subsection{Main results} Our main results are the following two theorems. The first one is nonprobabilistic in nature and deals with doubly increasing functions and the functional $F_\rho$. We postpone its proof until \cref{sec:mainFproof}. \begin{restatable*}{theo}{mainF} \label{th:mainF} Let $(\Omega,\rho)$ be a density domain. Then the following holds. \begin{thmlist} \item\label{th:mainFmaximizer} For any $r\ge0$, $F_\rho$ attains its maximum on ${\mathcal U}_r(\Omega)$. \item\label{th:mainFrhoconcave} $F_\rho$ is a concave function on ${\mathcal U}(\Omega)$. \item\label{th:mainFmaxcontinuousincreasingconcave} $\functional_{\rm max}$ is continuous, increasing and concave. \end{thmlist} \end{restatable*} In \cref{sec:generalpde}, we show how a maximizer of $F_\rho$ can be found in practice by solving a system of partial differential equations. In the introduction we defined locally uniform random permutations as having a deterministic order $n$. However, in many situations it is more natural to consider Poisson point processes where the total number of points is a (Poisson-distributed) random variable. As the intensity tends to infinity, the difference between these random point processes becomes negligible; they approach each other in the following sense. \begin{defi} Let $\{\sigma_\gamma\}_{\gamma>0}$ and $\{\tau_\gamma\}_{\gamma>0}$ be two families of random point processes parameterized by $\gamma>0$. Then we say that \emph{$\tau_\gamma$ approaches $\sigma_\gamma$ as $\gamma\rightarrow\infty$} if $\#(\sigma_\gamma\triangle\tau_\gamma)/\gamma\rightarrow0$ in probability as $\gamma\rightarrow\infty$, where $\triangle$ denotes symmetric difference. \end{defi} Our second main theorem connects the doubly increasing functions and $F_\rho$ with random point processes. We postpone its proof until \cref{sec:mainproof}. \begin{restatable*}{theo}{main}\label{th:main} Let $(\Omega,\rho)$ be a density domain and let $\{\tau_\gamma\}_{\gamma>0}$ be random point processes on $\Omega$ approaching a Poisson point process with intensity function $\gamma\rho$ as $\gamma\rightarrow\infty$. Then the following holds for any $r\ge0$. \begin{thmlist} \item\label{th:mainlimitsurface} For any $\varepsilon>0$, a.a.s.\ as $\gamma\rightarrow\infty$, for every maximal $\lfloor r\sqrt{\gamma}\rfloor$-decreasing subset $P$ of $\tau_\gamma$ there is a $u\in{\mathcal U}_{0,r}(\Omega)$ with $F_\rho(u)=\functional_{\rm max}(r)$ such that $\norm{\kappa_P/\sqrt{\gamma}-u}_\Omega<\varepsilon$. \item\label{th:mainlimittoFmax} Let $\Lambda^{(\gamma)}$ denote the size of a maximal $\lfloor r\sqrt{\gamma}\rfloor$-decreasing subset of $\tau_\gamma$. Then, $\Lambda^{(\gamma)}/\gamma\rightarrow \functional_{\rm max}(r)$ in probability as $\gamma$ tends to infinity. \end{thmlist} \end{restatable*} As a corollary, we obtain a limit shape for the Young diagram associated with a locally uniform random permutation. \begin{restatable*}{coro}{limitshape}\label{cor:limitshape} With the same setup as in \cref{th:main}, the Young diagram $\lambda^{(\gamma)}$ corresponding to $\tau_\gamma$ approaches the limit shape $\functional_{\rm max}'$ in the sense that, for any $r>0$ where the derivative $\functional_{\rm max}'(r)$ exists, \[ \frac1{\sqrt\gamma} \lambda^{(\gamma)}_{\lfloor r\sqrt{\gamma}\rfloor+1} \rightarrow \functional_{\rm max}'(r) \] in probability as $\gamma\rightarrow\infty$. \end{restatable*} The proof will appear in \cref{sec:mainproof}.
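As a numerical illustration of \cref{cor:limitshape} in the special case treated by \cref{th:rhombuslimitshapeexists} (and of \cref{con:triangularlimitshape}), one can sample points on the diamond square $\abs{x}+\abs{y}<1/\sqrt2$ and compare the rescaled column lengths of the corresponding Young diagram with the conjectured profile $\sqrt2-r$. The sketch below is again Python with NumPy, reuses the \texttt{rsk\_row\_lengths} helper from the earlier sketch, and replaces the Poisson number of points by a fixed $n$, which does not affect the limit; all names are ad hoc and used only for this illustration.
\begin{verbatim}
import numpy as np

def diamond_sample(n, seed=None):
    # n uniform points on the diamond |x| + |y| < 1/sqrt(2).  Since
    # |x| + |y| = max(|x+y|, |x-y|), we sample the rotated coordinates
    # a = x+y and b = x-y uniformly in (-1/sqrt(2), 1/sqrt(2)).
    rng = np.random.default_rng(seed)
    a = rng.uniform(-1 / np.sqrt(2), 1 / np.sqrt(2), n)
    b = rng.uniform(-1 / np.sqrt(2), 1 / np.sqrt(2), n)
    return list(zip((a + b) / 2, (a - b) / 2))

def column_lengths(points):
    # Column lengths lambda_1 >= lambda_2 >= ... of the Young diagram
    # corresponding to the point set (the conjugate of the row lengths).
    ys = [y for _, y in sorted(points)]
    rows = rsk_row_lengths(ys)          # helper from the earlier sketch
    if not rows:
        return []
    return [sum(1 for m in rows if m >= j) for j in range(1, max(rows) + 1)]

def rescaled_profile(n=10_000, r=0.5, seed=0):
    lam = column_lengths(diamond_sample(n, seed))
    i = int(np.floor(r * np.sqrt(n)))   # index of lambda_{floor(r sqrt n)+1}
    observed = (lam[i] if i < len(lam) else 0) / np.sqrt(n)
    return observed, max(np.sqrt(2) - r, 0.0)   # observed vs. conjectured
\end{verbatim}
Increasing $n$ (as in \cref{fig:triangularlimitshape}) brings the observed values visibly closer to the conjectured triangular profile.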
\subsection{Consequences of \texorpdfstring{\cref{con:triangularlimitshape}}{the conjecture}} Our final result is concerned with the consequences of \cref{con:triangularlimitshape}. In \cref{sec:uniform}, provided \cref{con:triangularlimitshape} holds true, we obtain a simple parameterization of the limit surface for a uniformly random permutation, and we recover the celebrated limit-shape result of Logan, Shepp and Vershik, Kerov mentioned in \cref{sec:background}. \subsection{Organization of the paper} The remainder of the paper is organized as follows. \begin{description} \item[\cref{sec:phi}] We show that $\Phi$ exists and that the functional $F_\rho$ is well defined. \item[\cref{sec:generalpde}] We reduce the maximization problem to a PDE system. \item[\cref{sec:localparallelogram}] We study the probabilistic behavior of the local parallelogram. \item[\cref{sec:greatlemma}] We divide a density domain into small parallelograms. \item[\cref{sec:semicontinuity}] We show that $F_\rho$ is semicontinuous. \item[\cref{sec:mainFproof}] We prove our first main theorem, \cref{th:mainF}. \item[\cref{sec:mainproof}] We prove our second main theorem, \cref{th:main}. \item[\cref{sec:uniquemaximizers}] We show that, provided $\Phi$ is reasonably well behaved (which it is if \cref{con:triangularlimitshape} holds true), the maximizer of $F_\rho$ is essentially unique. \item[\cref{sec:uniform}] Provided \cref{con:triangularlimitshape} holds true, we find the limit surfaces for a uniformly random permutation and the limit shape for the corresponding Young diagram. \item[\cref{sec:future}] We discuss some open questions for future research. \end{description} \section{Existence of \texorpdfstring{$\Phi$}{Phi}}\label{sec:phi} \noindent In this section we will prove \cref{th:narrowrectanglesresult} and thereby establish the existence of the function $\Phi$. We will also show some basic properties of $\Phi$ and $L$ and see that the functional $F_\rho$ defined in \cref{sec:results} is well defined. Let ${\mathbb N}:=\{0,1,2,\dotsc\}$ denote the set of nonnegative integers. We will use bold letters like ${\mathbf{n}}$ for points in the integer lattice ${\mathbb N}^2$, and coordinates will be denoted by the corresponding italic letters with indices, like ${\mathbf{n}}=(n_1,n_2)$. Define the operators plus, minus and the relations $\le$ and $<$ coordinate-wise on ${\mathbb N}^2$. Also, let ${\mathbf{m}}\ast{\mathbf{n}}=(m_1n_1,m_2n_2)$ denote coordinate-wise multiplication. Finally, we let ${\mathbf{n}}\rightarrow\infty$ mean that $\min\{n_1,n_2\}\rightarrow\infty$. Hammersley's proof of the existence of a limit for the size of the longest increasing subsequence (\cref{th:hammersley}) uses Kingman's subadditive ergodic theorem \cite{Kingman}. We will need a more general version of that theorem in order to establish the limiting behavior of the decreasing subsets in our two-dimensional setting. The following theorem is a special case of a result by Sch\"urger \cite[Theorem~2.1]{Schurger88}\footnote{We put $h=0$ in Sch\"urger's theorem and change the sign of the random variables.}. \begin{theo}[Sch\"urger]\label{th:schurger} Let $\{X_{{\mathbf{m}},{\mathbf{n}}}\}$ be a family of real random variables, where the indices range over all ${\mathbf{m}},{\mathbf{n}}\in{\mathbb N}^2$ with ${\mathbf{m}}<{\mathbf{n}}$. Suppose the following holds.
\begin{description} \item[Translation invariance] For any ${\mathbf{k}}\in{\mathbb N}^2$, the family $\{X_{{\mathbf{m}}+{\mathbf{k}},{\mathbf{n}}+{\mathbf{k}}}\}_{{\mathbf{m}}<{\mathbf{n}}}$ has the same finite joint probability distributions as $\{X_{{\mathbf{m}},{\mathbf{n}}}\}_{{\mathbf{m}}<{\mathbf{n}}}$. \item[Superadditivity] For any ${\mathbf{0}}<{\mathbf{m}}<{\mathbf{n}}$, we have \begin{align*} X_{{\mathbf{0}},{\mathbf{n}}} &\ge X_{{\mathbf{0}},(m_1,n_2)}+X_{(m_1,0),{\mathbf{n}}}, \\ X_{{\mathbf{0}},{\mathbf{n}}} &\ge X_{{\mathbf{0}},(n_1,m_2)}+X_{(0,m_2),{\mathbf{n}}}. \end{align*} \item[Integrability] The set \[ \{\Expect X_{{\mathbf{0}},{\mathbf{n}}}/n_1n_2\ :\ {\mathbf{n}}>{\mathbf{0}}\} \] is bounded. \end{description} Then, $\lim_{{\mathbf{n}}\rightarrow\infty}X_{{\mathbf{0}},{\mathbf{n}}}/n_1n_2$ exists in $L^1$ and equals \[ \lim_{{\mathbf{n}}\rightarrow\infty}(n_1n_2)^{-1} \lim_{{\mathbf{m}}\rightarrow\infty}(m_1m_2)^{-1} \sum_{{\mathbf{0}}<{\mathbf{k}}\le{\mathbf{m}}}X_{{\mathbf{k}}\ast{\mathbf{n}}-{\mathbf{n}},{\mathbf{k}}\ast{\mathbf{n}}}, \] both limits existing in $L^1$. \end{theo} If $I$ and $J$ are sets of points, we say that $I$ is a \emph{$k$-decreasing set compatible with $J$} if $I\cup J$ is $k$-decreasing. \begin{prop}\label{pr:X} Fix $s>0$ and let $\sigma$ be a Poisson point process on ${\mathbb R}^2$ with homogeneous intensity $s$. For any pair $({\mathbf{m}},{\mathbf{n}})$ with ${\mathbf{m}},{\mathbf{n}}\in{\mathbb N}^2$ and ${\mathbf{m}}<{\mathbf{n}}$ we define the interval \[ [{\mathbf{m}},{\mathbf{n}}):=\{{\mathbf{x}}\in{\mathbb R}^2\ :\ {\mathbf{m}}\le{\mathbf{x}}<{\mathbf{n}}\}. \] Let $T:{\mathbb R}^2\rightarrow{\mathbb R}^2$ be the linear transformation that maps $(1,0)$ to $(1,1)$ and $(0,1)$ to $(-1,1)$, and define the random variable $X_{{\mathbf{m}},{\mathbf{n}}}$ as the maximal size of an $(n_1-m_1)$-decreasing subset of $\sigma\cap T[{\mathbf{m}},{\mathbf{n}})$ compatible with $T([m_1,n_1)\times\{m_2,n_2\})$, where $[m_1,n_1)$ denotes the set $\{m_1,m_1+1,\dotsc,n_1-1\}$. Then, $X_{{\mathbf{0}},{\mathbf{n}}}/n_1n_2$ converges in mean to a constant $c_s$ as ${\mathbf{n}}\rightarrow\infty$. \end{prop} \begin{proof} We claim that the family $\{X_{{\mathbf{m}},{\mathbf{n}}}\}_{{\mathbf{m}}<{\mathbf{n}}}$ has the three properties defined in \cref{th:schurger}. Translation invariance follows immediately from the translation invariance of $\sigma$. Integrability holds since $\Expect X_{{\mathbf{0}},{\mathbf{n}}}/n_1n_2 \le\Expect \#(\sigma\cap T[{\mathbf{0}},{\mathbf{n}}))/n_1n_2=2s$. To prove superadditivity we consider any ${\mathbf{0}}<{\mathbf{m}}<{\mathbf{n}}$. Let $A$ be an $m_1$-decreasing subset of $\sigma\cap T[{\mathbf{0}},(m_1,n_2))$ compatible with $T([0,m_1)\times\{0,n_2\})$ and let $B$ be an $(n_1-m_1)$-decreasing subset of $\sigma\cap T[(m_1,0),{\mathbf{n}})$ compatible with $T([m_1,n_1)\times\{0,n_2\})$, as depicted in \cref{fig:superadditivityone}. 
\begin{figure} \centerfloat \begin{subfigure}[b]{0.6\textwidth} \centering \begin{tikzpicture}[scale=1.1] \draw[thick,rotate around={45:(0,0)}] (0,0) rectangle (3,3) (1.2,0) -- (1.2,3); \draw[fill=black,rotate around={45:(0,0)}] (0,0) circle (2pt) (0.3,0) circle (1pt) (0.6,0) circle (1pt) (0.9,0) circle (1pt) (1.2,0) circle (2pt) (1.5,0) circle (1pt) (1.8,0) circle (1pt) (2.1,0) circle (1pt) (2.4,0) circle (1pt) (2.7,0) circle (1pt) (3,0) circle (2pt); \draw[fill=black,rotate around={45:(0,0)}] (0,3) circle (2pt) (0.3,3) circle (1pt) (0.6,3) circle (1pt) (0.9,3) circle (1pt) (1.2,3) circle (2pt) (1.5,3) circle (1pt) (1.8,3) circle (1pt) (2.1,3) circle (1pt) (2.4,3) circle (1pt) (2.7,3) circle (1pt) (3,3) circle (2pt); \pgfmathsetseed{1} \draw[rotate around={45:(0,0)},snake it] (0,0) -- (0.1,0.5) -- (0.1,2.5) -- (0,3) (0.3,0) -- (0.38,0.5) -- (0.38,2.5) -- (0.3,3) (0.6,0) -- (0.66,0.5) -- (0.66,2.5) -- (0.6,3) (0.9,0) -- (0.94,0.5) -- (0.94,2.5) -- (0.9,3) (1.2,0) -- (1.3,0.5) -- (1.3,2.5) -- (1.2,3) (1.5,0) -- (1.58,0.5) -- (1.58,2.5) -- (1.5,3) (1.8,0) -- (1.86,0.5) -- (1.86,2.5) -- (1.8,3) (2.1,0) -- (2.14,0.5) -- (2.14,2.5) -- (2.1,3) (2.4,0) -- (2.42,0.5) -- (2.42,2.5) -- (2.4,3) (2.7,0) -- (2.70,0.5) -- (2.70,2.5) -- (2.7,3); \draw[white,fill=white,rotate around={45:(0,0)}] (0.6,1.5) circle (10pt) (2.1,1.5) circle (10pt); \draw[rotate around={45:(0,0)}] (-0.2,-0.2) node {$\scriptstyle T(0,0)$} (1.4,-0.6) node {$\scriptstyle T(m_1,0)$} (3.5,-0.5) node {$\scriptstyle T(n_1,0)$} (-0.5,3.5) node {$\scriptstyle T(0,n_2)$} (1.0, 3.6) node {$\scriptstyle T(m_1,n_2)$} (3.2,3.2) node {$\scriptstyle T(n_1,n_2)$} (0.6,1.5) node {$A$} (2.1,1.5) node {$B$}; \end{tikzpicture} \subcaption{} \label{fig:superadditivityone} \end{subfigure} \begin{subfigure}[b]{0.6\textwidth} \centering \begin{tikzpicture}[scale=1.1] \draw[thick,rotate around={45:(0,0)}] (0,0) rectangle (3,3) (0,1.3) -- (3,1.3); \draw[fill=black,rotate around={45:(0,0)}] (0,0) circle (2pt) (0.3,0) circle (1pt) (0.6,0) circle (1pt) (0.9,0) circle (1pt) (1.2,0) circle (2pt) (1.5,0) circle (1pt) (1.8,0) circle (1pt) (2.1,0) circle (1pt) (2.4,0) circle (1pt) (2.7,0) circle (1pt) (3,0) circle (2pt) (0,1.3) circle (2pt) (0.3,1.3) circle (1pt) (0.6,1.3) circle (1pt) (0.9,1.3) circle (1pt) (1.2,1.3) circle (1pt) (1.5,1.3) circle (1pt) (1.8,1.3) circle (1pt) (2.1,1.3) circle (1pt) (2.4,1.3) circle (1pt) (2.7,1.3) circle (1pt) (3,1.3) circle (2pt) (0,3) circle (2pt) (0.3,3) circle (1pt) (0.6,3) circle (1pt) (0.9,3) circle (1pt) (1.2,3) circle (1pt) (1.5,3) circle (1pt) (1.8,3) circle (1pt) (2.1,3) circle (1pt) (2.4,3) circle (1pt) (2.7,3) circle (1pt) (3,3) circle (2pt); \pgfmathsetseed{1} \draw[rotate around={45:(0,0)},snake it] (0,0) -- (0.07,0.4) -- (0.07,0.9) -- (0,1.3) -- (0.07,1.8) -- (0.07,2.6) -- (0,3) (0.3,0) -- (0.3,1.3) -- (0.3,3) (0.6,0) -- (0.6,1.3) -- (0.6,3) (0.9,0) -- (0.9,1.3) -- (0.9,3) (1.2,0) -- (1.2,1.3) -- (1.2,3) (1.5,0) -- (1.5,1.3) -- (1.5,3) (1.8,0) -- (1.8,1.3) -- (1.8,3) (2.1,0) -- (2.1,1.3) -- (2.1,3) (2.4,0) -- (2.4,1.3) -- (2.4,3) (2.7,0) -- (2.7,1.3) -- (2.7,3); \draw[white,fill=white,rotate around={45:(0,0)}] (1.5,0.65) circle (10pt) (1.5,2.15) circle (10pt); \draw[rotate around={45:(0,0)}] (-0.2,-0.2) node {$\scriptstyle T(0,0)$} (-0.6,1.5) node {$\scriptstyle T(0,m_2)$} (3.5,-0.5) node {$\scriptstyle T(n_1,0)$} (-0.5,3.5) node {$\scriptstyle T(0,n_2)$} (3.6, 1.1) node {$\scriptstyle T(n_1,m_2)$} (3.2,3.2) node {$\scriptstyle T(n_1,n_2)$} (1.5,0.65) node {$A$} (1.5,2.15) node {$B$}; \end{tikzpicture} 
\subcaption{} \label{fig:superadditivitytwo} \end{subfigure} \caption{The two superadditivity situations considered in the proof of \cref{pr:X}.} \end{figure} Since the disjoint sets $A\cup T([0,m_1)\times\{0,n_2\})$ and $B\cup T([m_1,n_1)\times\{0,n_2\})$ are $m_1$- and $(n_1-m_1)$-decreasing, respectively, their union is $n_1$-decreasing. This means that $A\cup B$ is an $n_1$-decreasing set compatible with $T([0,n_1)\times\{0,n_2\})$, and it follows that $X_{{\mathbf{0}},{\mathbf{n}}}\ge X_{{\mathbf{0}},(m_1,n_2)}+X_{(m_1,0),{\mathbf{n}}}$. Now, instead let $A$ be an $n_1$-decreasing subset of $\sigma\cap T[{\mathbf{0}},(n_1,m_2))$ compatible with $T([0,n_1)\times\{0,m_2\})$ and let $B$ be an $n_1$-decreasing subset of $\sigma\cap T[(0,m_2),{\mathbf{n}})$ compatible with $T([0,n_1)\times\{m_2,n_2\})$, as depicted in \cref{fig:superadditivitytwo}. The $n_1$-decreasing set $A\cup T([0,n_1)\times\{0,m_2\})$ is a union of $n_1$ decreasing sets $A_1,A_2,\dotsc,A_{n_1}$. Since no two elements of $T([0,n_1)\times\{m_2\})$ can belong to the same decreasing set, we may assume that $T(i-1,m_2)\in A_i$ for $i=1,2,\dotsc,n_1$. Analogously, $B\cup T([0,n_1)\times\{m_2,n_2\})$ is a union of $n_1$ decreasing sets $B_1,B_2,\dotsc,B_{n_1}$ such that $T(i-1,m_2)\in B_i$ for $i=1,2,\dotsc,n_1$. Clearly, $A_i\cup B_i$ is decreasing, and \[ A\cup B\cup T([0,n_1)\times\{0,m_2,n_2\}) =\bigcup_{i=1}^{n_1} (A_i\cup B_i), \] so $A\cup B$ is an $n_1$-decreasing subset of $\sigma\cap T[{\mathbf{0}},{\mathbf{n}})$ compatible with $T([0,n_1)\times\{0,m_2,n_2\})$ and hence with $T([0,n_1)\times\{0,n_2\})$. It follows that $X_{{\mathbf{0}},{\mathbf{n}}}\ge X_{{\mathbf{0}},(n_1,m_2)}+X_{(0,m_2),{\mathbf{n}}}$. We have shown that the family $\{X_{{\mathbf{m}},{\mathbf{n}}}\}_{{\mathbf{m}}<{\mathbf{n}}}$ is superadditive. By \cref{th:schurger}, the limit $X_\infty:=\lim_{{\mathbf{n}}\rightarrow\infty}X_{{\mathbf{0}},{\mathbf{n}}}/n_1n_2$ exists in $L^1$ and equals \begin{equation}\label{eq:doublelimit} \lim_{{\mathbf{n}}\rightarrow\infty}(n_1n_2)^{-1} \lim_{{\mathbf{m}}\rightarrow\infty}(m_1m_2)^{-1} \sum_{{\mathbf{0}}<{\mathbf{k}}\le{\mathbf{m}}}X_{{\mathbf{k}}\ast{\mathbf{n}}-{\mathbf{n}},{\mathbf{k}}\ast{\mathbf{n}}}. \end{equation} For any fixed ${\mathbf{m}},{\mathbf{n}}\in{\mathbb N}^2$, \[ S_{{\mathbf{m}},{\mathbf{n}}} := (m_1m_2)^{-1} \sum_{{\mathbf{0}}<{\mathbf{k}}\le{\mathbf{m}}}X_{{\mathbf{k}}\ast{\mathbf{n}}-{\mathbf{n}},{\mathbf{k}}\ast{\mathbf{n}}} \] is an average of $m_1m_2$ i.i.d.\ random variables of finite expectation. By the law of large numbers, as ${\mathbf{m}}\rightarrow\infty$, $S_{{\mathbf{m}},{\mathbf{n}}}$ converges almost surely to $\Expect X_{{\mathbf{0}},{\mathbf{n}}}$. It follows that the limit in \cref{eq:doublelimit} is a deterministic constant, which we denote by $c_s$. \end{proof} The following proposition defines the function $\Phi:{\mathbb R}_{\ge0}\rightarrow{\mathbb R}_{\ge0}$. \begin{prop}\label{pr:phiexists} Let $\Omega$ be the open rectangle \[ 0<(x+y)/\sqrt2<\alpha,\ \ 0<(y-x)/\sqrt2<\beta \] and let $r\ge0$. For each $\gamma>0$, let $\sigma_\gamma$ be a Poisson point process in the plane with homogeneous intensity $\gamma$. Define the union of lines \[ L_\gamma:=\bigcup_{i=0}^{\lfloor \alpha r\sqrt{\gamma}\rfloor-1} \{(x,y)\in{\mathbb R}^2\,:\,(x+y)/\sqrt2=i\big/r\sqrt{\gamma}\} \] and let $P_\gamma:=\{(x,y)\in L_\gamma\,:\,y-x=0\}$ and $Q_\gamma:=\{(x,y)\in L_\gamma\,:\,(y-x)/\sqrt2=\beta\}$. See \cref{fig:rectanglewithsquares} for an illustration.
\begin{figure} \begin{tikzpicture}[scale=0.9] \draw[fill=lightgray, rotate around={-45:(0,0)}] (2,0) rectangle (5,2); \draw[thick,rotate around={-45:(0,0)}] (0,0) rectangle (7,2); \draw [decorate,decoration={brace,amplitude=5pt,raise=4pt},rotate around={-45:(0,0)}] (7,2) -- (7,0) node [black,midway,right,xshift=6pt,yshift=-12pt] {$\alpha$}; \draw [decorate,decoration={brace,amplitude=5pt,raise=4pt},rotate around={-45:(0,0)}] (7,0) -- (0,0) node [black,midway,below,xshift=-12pt,yshift=-6pt] {$\beta$}; \draw [decorate,decoration={brace,amplitude=5pt,raise=4pt},rotate around={-45:(0,0)}] (0,2) -- (2,2) node [black,midway,above,xshift=12pt,yshift=6pt] {$\alpha$}; \draw [decorate,decoration={brace,amplitude=5pt,raise=4pt},rotate around={-45:(0,0)}] (5,2) -- (7,2) node [black,midway,above,xshift=12pt,yshift=6pt] {$\alpha$}; \draw[fill=black,rotate around={-45:(0,0)}] (7,0) circle (2pt) (7,0.3) circle (2pt) (7,0.6) circle (2pt) (7,0.9) circle (2pt) (7,1.2) circle (2pt) (7,1.5) circle (2pt) (7,1.8) circle (2pt); \draw[fill=black,rotate around={-45:(0,0)}] (0,0) circle (2pt) (0,0.3) circle (2pt) (0,0.6) circle (2pt) (0,0.9) circle (2pt) (0,1.2) circle (2pt) (0,1.5) circle (2pt) (0,1.8) circle (2pt); \draw[rotate around={-45:(0,0)}] (0.5,1) node {$Q_\gamma$}; \draw[rotate around={-45:(0,0)}] (6.5,1) node {$P_\gamma$}; \end{tikzpicture} \caption{The rectangle $\Omega$ and the sets $P_\gamma$ and $Q_\gamma$ in \cref{pr:phiexists,pr:narrowrectangles}. The shaded area shows the rectangle $\Omega'$ in the proof of \cref{pr:narrowrectangles}.} \label{fig:rectanglewithsquares} \end{figure} Let $M_\gamma$ be the size of a maximal $\lfloor \alpha r\sqrt{\gamma}\rfloor$-decreasing subset of $\sigma_\gamma\cap\Omega$ compatible with $P_\gamma\cup Q_\gamma$. Then there is a constant $\Phi(r)$, independent of $\alpha$, $\beta$ and $\gamma$, such that \[ M_\gamma/\alpha\beta\gamma\rightarrow \Phi(r) \] in mean as $\alpha\sqrt{\gamma}$ and $\beta\sqrt{\gamma}$ tend to infinity simultaneously in any manner. \end{prop} \begin{proof} Clearly, $\Phi(0)$ exists and is $0$, so in the following we may assume that $r>0$. First suppose $n_1:=\alpha r\sqrt{\gamma}$ and $n_2:=\beta r\sqrt{\gamma}$ are both integers. Put $s:=r^{-2}/2$ and define $X_{{\mathbf{0}},(n_1,n_2)}$ as in \cref{pr:X}. Then $X_{{\mathbf{0}},(n_1,n_2)}$ has the same distribution as $M_\gamma$, so $M_\gamma/\alpha\beta\gamma$ converges in mean to $\Phi(r):=r^2 c_s$ as $n_1$ and $n_2$ tend to infinity under the constraint that they are integers. Let $\alpha':=\lfloor \alpha r\sqrt{\gamma}\rfloor/r\sqrt{\gamma}$ and $\beta':=\lfloor \beta r\sqrt{\gamma}\rfloor/r\sqrt{\gamma}$, and let $\Omega'$, $L'_\gamma$, $P'_\gamma$, $Q'_\gamma$ and $M'_\gamma$ be defined as $\Omega$, $L_\gamma$, $P_\gamma$, $Q_\gamma$ and $M_\gamma$ but with $\alpha$ and $\beta$ replaced by $\alpha'$ and $\beta'$. Then, $L'_\gamma=L_\gamma$ and $P'_\gamma=P_\gamma$. Any $\lfloor \alpha'r\sqrt{\gamma}\rfloor$-decreasing subset of $\sigma_\gamma\cap\Omega'$ compatible with $P'_\gamma\cup Q'_\gamma$ is also a $\lfloor \alpha r\sqrt{\gamma}\rfloor$-decreasing subset of $\sigma_\gamma\cap\Omega$ compatible with $P_\gamma\cup Q_\gamma$, so $M_\gamma\ge M'_\gamma$ and $M_\gamma-M'_\gamma\le\#(\sigma_\gamma\cap(\Omega\setminus\Omega'))$.
Since $\alpha-\alpha'$ and $\beta-\beta'$ are both nonnegative and smaller than $1/r\sqrt{\gamma}$, we have \begin{multline*} \mu(\Omega\setminus\Omega')=\alpha\beta-\alpha'\beta'=(\alpha-\alpha')\beta+(\beta-\beta')\alpha-(\alpha-\alpha')(\beta-\beta')\\ \le (\alpha-\alpha')\beta+(\beta-\beta')\alpha \le (\alpha+\beta)/r\sqrt{\gamma}, \end{multline*} where, as always, $\mu$ denotes the Lebesgue measure on ${\mathbb R}^2$. It follows that $\#(\sigma_\gamma\cap(\Omega\setminus\Omega'))$ has a Poisson distribution with mean smaller than $(\alpha+\beta)\sqrt{\gamma}/r$, and thus $\abs{M_\gamma-M'_\gamma}/\alpha\beta\gamma$ converges to zero almost surely as $\alpha\sqrt{\gamma}$ and $\beta\sqrt{\gamma}$ tend to infinity. We conclude that $M_\gamma/\alpha\beta\gamma$ converges to $\Phi(r)$ in mean. \end{proof} From the next proposition, \cref{th:narrowrectanglesresult} follows immediately. \begin{prop}\label{pr:narrowrectangles} With the same setup as in \cref{pr:phiexists}, define the random variable $\Lambda^{(\gamma)}$ as the size of a maximal $\lfloor \alpha r\sqrt{\gamma}\rfloor$-decreasing subset of $\sigma_\gamma\cap\Omega$. Then, as $\alpha\sqrt{\gamma}$ and $\beta/\alpha$ tend to infinity, we have $\Lambda^{(\gamma)}/\alpha\beta\gamma\rightarrow \Phi(r)$ in mean. Also, for any fixed $\alpha$ and $\beta$, the inequality \[ \abs{(\Lambda^{(\gamma)}/\alpha\beta\gamma)-\Phi(r)}<3\alpha/\beta \] holds a.a.s.\ as $\gamma\rightarrow\infty$. \end{prop} \begin{proof} Clearly, $\Lambda^{(\gamma)}\ge M_\gamma$. Let $\Omega'$ be the (possibly empty) rectangle given by $0<(x+y)/\sqrt2<\alpha$ and $\alpha<(y-x)/\sqrt2<\beta-\alpha$; see \cref{fig:rectanglewithsquares} for an illustration. If $A$ is a $\lfloor \alpha r\sqrt{\gamma}\rfloor$-decreasing subset of $\sigma_\gamma\cap\Omega$, then $A\cap\Omega'$ is a $\lfloor \alpha r\sqrt{\gamma}\rfloor$-decreasing subset of $\sigma_\gamma\cap\Omega$ compatible with $P_\gamma\cup Q_\gamma$. Thus, $\Lambda^{(\gamma)}-M_\gamma\le\#(\sigma_\gamma\cap(\Omega\setminus\Omega'))$. We have $\mu(\Omega\setminus\Omega')\le2\alpha^2$, so $\#(\sigma_\gamma\cap(\Omega\setminus\Omega'))$ has a Poisson distribution with mean at most $2\alpha^2\gamma$, and hence $(\Lambda^{(\gamma)}-M_\gamma)/\alpha\beta\gamma$ converges almost surely to zero as $\alpha\sqrt{\gamma}$ and $\beta/\alpha$ tend to infinity. It follows from \cref{pr:phiexists} that $\Lambda^{(\gamma)}/\alpha\beta\gamma\rightarrow \Phi(r)$ in mean. For fixed $\alpha$ and $\beta$, and for any $\delta>0$, it holds that $(\Lambda^{(\gamma)}-M_\gamma)/\alpha\beta\gamma<2(1+\delta)\alpha/\beta$ a.a.s.\ as $\gamma\rightarrow\infty$, and hence $\abs{(\Lambda^{(\gamma)}/\alpha\beta\gamma)-\Phi(r)}<3\alpha/\beta$ a.a.s. \end{proof} In \cref{sec:localparallelogram} we will need the following more flexible version of \cref{pr:narrowrectangles} that incorporates the functional $F_\rho$. \begin{prop}\label{pr:beta} Let $\Omega$ be the open parallelogram \[ \{(x,y)\in{\mathbb R}^2\ :\ \abs{ax+by}<1,\ \abs{ax-by}<\beta\} \] for some $a,b,\beta>0$, and let $\rho$ be constant on $\Omega$. Let $u_{\rm linear}(x,y)=c(ax+by)$ for some $c\ge0$, and let $\{\sigma_\gamma\}_{\gamma>0}$ be Poisson point processes on $\Omega$ with intensities $\gamma\rho$. Let $\Lambda^{(\gamma)}$ be the size of a maximal $\lfloor 2c\sqrt{\gamma}\rfloor$-decreasing subset of $\sigma_\gamma$. Then \[ \abs{(\Lambda^{(\gamma)}/\gamma) - F_\rho(u_{\rm linear})} \le 3\rho\mu(\Omega)/\beta \] a.a.s.\ as $\gamma\rightarrow\infty$.
\end{prop} \begin{proof} If $\rho=0$, we have $\Lambda^{(\gamma)}=0$ and $F_\rho(u_{\rm linear})=0$ so the conclusion of the proposition is true. In the following we assume that $\rho>0$. Rescale the $x$- and $y$-axes and generate Poisson point processes with intensities $\gamma'=2\gamma\rho/ab$ on the rectangle $\abs{x+y}<1/\sqrt2$, $\abs{x-y}<\beta/\sqrt2$. In \cref{pr:narrowrectangles} this corresponds to $r:=c\sqrt{2ab/\rho}$, $\alpha:=1$, $\beta:=\beta$, $\gamma:=\gamma'$, and the proposition yields that $\abs{(\Lambda^{(\gamma)}/\beta\gamma')-\Phi(r)} < 3/\beta$ a.a.s.\ as $\gamma\rightarrow\infty$. It is straightforward to check that $\mu(\Omega)=2\beta/ab$ and $F_\rho(u_{\rm linear})=2\beta\rho \Phi(r)/ab$. \end{proof} Our next goal is to show some nice properties of $\Phi$, in particular that it is increasing and concave. To this end, we need a couple of lemmas which will be used again later on when we concern ourselves with limit shapes of Young diagrams. Let $\partial_-$ and $\partial_+$ denote the left and right one-sided derivative operators. \begin{lemma}\label{lm:derivativelimit} Let $F_1,F_2,\dotsc$ be random concave functions from ${\mathbb R}_{\ge0}$ to ${\mathbb R}$, and suppose there is a deterministic function $F:{\mathbb R}_{\ge0}\rightarrow{\mathbb R}$ such that $F_n(x)\rightarrow F(x)$ in probability for any $x$. Then the following holds. \begin{thmlist} \item\label{lm:derivativelimitFisconcave} $F$ is concave. \item\label{lm:derivativelimitrightderivative} $\partial_-F_n(x)\rightarrow F'(x)$ and $\partial_+F_n(x)\rightarrow F'(x)$ in probability for any point $x>0$ where $F'(x)$ exists. \end{thmlist} \end{lemma} \begin{proof} \reflocal{lm:derivativelimitFisconcave} Take any $0\le x<y$ and $0<t<1$. We must show that $(1-t)F(x)+tF(y)\le F((1-t)x+ty)$. For any $\varepsilon>0$, a.a.s.\ as $n$ tends to infinity we have \begin{align} F(x) &< F_n(x)+\varepsilon, \nonumber \\ F(y) &< F_n(y)+\varepsilon\ \text{and} \nonumber \\ F((1-t)x+ty) &> F_n((1-t)x+ty)-\varepsilon. \label{eq:FgtFn} \end{align} The first two inequalities imply that \[ (1-t)F(x)+tF(y)<(1-t)F_n(x)+tF_n(y)+\varepsilon \] which is less than or equal to $F_n((1-t)x+ty)+\varepsilon$ since $F_n$ is concave. Combining this with \cref{eq:FgtFn}, we obtain \[ (1-t)F(x)+tF(y)<F((1-t)x+ty)+2\varepsilon. \] Since this holds with positive probability and $F$ is deterministic, it holds deterministically, and since it holds for any $\varepsilon>0$, we must have $(1-t)F(x)+tF(y)\le F((1-t)x+ty)$. \reflocal{lm:derivativelimitrightderivative} Suppose $x>0$ is a point where $F'(x)$ exists, and take any $\varepsilon>0$. Since $F'(x)$ exists, there is a $\delta>0$ such that \begin{equation}\label{eq:derivativeapprox} \begin{aligned} \frac{F(x)-F(x-\delta)}{\delta}-F'(x) &< \varepsilon, \\ \frac{F(x+\delta)-F(x)}{\delta}-F'(x) &> -\varepsilon. \end{aligned} \end{equation} Since $F_n(x)\rightarrow F(x)$, $F_n(x-\delta)\rightarrow F(x-\delta)$ and $F_n(x+\delta)\rightarrow F(x+\delta)$ in probability, the inequalities \begin{equation}\label{eq:functionapprox} \begin{aligned} \frac{F_n(x)-F_n(x-\delta)}{\delta} &\le\frac{F(x)-F(x-\delta)}{\delta}+\varepsilon, \\ \frac{F_n(x+\delta)-F_n(x)}{\delta} &\ge\frac{F(x+\delta)-F(x)}{\delta}-\varepsilon \end{aligned} \end{equation} hold a.a.s.\ as $n\rightarrow\infty$. 
Since $F_n$ is concave, it has a left derivative and a right derivative at $x$, and \begin{equation}\label{eq:derivatesqueeze} \frac{F_n(x+\delta)-F_n(x)}{\delta} \le\partial_+F_n(x) \le\partial_-F_n(x) \le\frac{F_n(x)-F_n(x-\delta)}{\delta}. \end{equation} Combining \cref{eq:derivativeapprox,eq:functionapprox,eq:derivatesqueeze} yields \[ F'(x)-2\varepsilon <\partial_+F_n(x) \le\partial_-F_n(x) <F'(x)+2\varepsilon, \] and, since $\varepsilon>0$ was chosen arbitrarily, we conclude that $\partial_-F_n(x)\rightarrow F'(x)$ and $\partial_+F_n(x)\rightarrow F'(x)$ in probability. \end{proof} \begin{lemma}\label{lm:Lambdatolambda} Let $\{\lambda^{(\gamma)}\}_{\gamma>0}$ be a family of random Young diagrams and let $\Lambda^{(\gamma)}_k:=\sum_{i=1}^k\lambda^{(\gamma)}_i$. Let $a$ and $b$ be positive functions of $\gamma$ such that $\lim_{\gamma\rightarrow\infty}a(\gamma)=\infty$. Suppose there is a (deterministic) function $G\,:\,{\mathbb R}_{\ge0}\rightarrow{\mathbb R}_{\ge0}$ such that \[ b(\gamma)\Lambda^{(\gamma)}_{\lfloor a(\gamma)r\rfloor}\rightarrow G(r) \] in probability for any $r\ge0$. Then $G$ is increasing and concave, and \[ a(\gamma)b(\gamma)\lambda^{(\gamma)}_{\lfloor a(\gamma)r\rfloor+1}\rightarrow G'(r) \] in probability for any $r>0$ where $G$ is differentiable. \end{lemma} \begin{proof} For each $\gamma>0$, define the function $F^{(\gamma)}\,:\,{\mathbb R}_{>0}\rightarrow{\mathbb R}_{\ge0}$ by \[ F^{(\gamma)}(r):=\int_0^{a(\gamma)r} \lambda^{(\gamma)}_{\lfloor t\rfloor+1}\,dt. \] Since the integrand is a nonnegative decreasing function, $F^{(\gamma)}$ is increasing and concave, and since the integrand is piecewise constant, \[ \abs{F^{(\gamma)}(r)-\Lambda^{(\gamma)}_{\lfloor a(\gamma)r\rfloor}} = \bigl(a(\gamma)r-\lfloor a(\gamma)r\rfloor\bigr) \lambda^{(\gamma)}_{\lfloor a(\gamma)r\rfloor+1} \le \frac{a(\gamma)r-\lfloor a(\gamma)r\rfloor}{\lfloor a(\gamma)r\rfloor} \sum_{i=1}^{\lfloor a(\gamma)r\rfloor}\!\!\!\lambda^{(\gamma)}_i \] which equals $\Lambda^{(\gamma)}_{\lfloor a(\gamma)r\rfloor}$ times a factor that tends to zero as $\gamma\rightarrow\infty$. It follows that \[ b(\gamma)F^{(\gamma)}(r)\rightarrow G(r) \] in probability. Since all $F^{(\gamma)}$ are increasing, $G$ is increasing too. Furthermore, by \cref{lm:derivativelimit}, $G$ is concave and $b(\gamma)\partial_+ F^{(\gamma)}(r)\rightarrow G'(r)$ for any $r$ where $G$ is differentiable. Since $\partial_+ F^{(\gamma)}(r)=a(\gamma)\lambda^{(\gamma)}_{\lfloor a(\gamma)r\rfloor+1}$, the lemma follows. \end{proof} \begin{prop}\label{pr:phiproperties} $\Phi$ is increasing, concave, continuous and bounded by one. Furthermore, $\Phi(0)=0$. \end{prop} \begin{proof} Consider the setup of \cref{pr:narrowrectangles} with the specialization $\alpha=1$ and $\beta=\gamma$. Then $\Lambda^{(\gamma)}/\gamma^2\rightarrow \Phi(r)$ in mean as $\gamma\rightarrow\infty$. Let $\lambda^{(\gamma)}$ be the random Young diagram corresponding to $\sigma_\gamma\cap\Omega$. By \cref{pr:curtisgreene}, $\Lambda^{(\gamma)}=\sum_{i=1}^{\lfloor r\sqrt{\gamma}\rfloor}\lambda^{(\gamma)}_i$, and \cref{lm:Lambdatolambda} with $a(\gamma)=\sqrt{\gamma}$, $b(\gamma)=1/\gamma^2$ and $G=\Phi$ yields that $\Phi$ is increasing and concave. That $\Phi(0)=0$ and that $\Phi$ is bounded by one follow directly from its definition together with the law of large numbers. Since $\Phi$ is concave, it is automatically continuous on the open set $(0,\infty)$. It remains only to show that it is continuous at 0.
For any $\beta>0$, let $\Omega_\beta$ be the open rectangle \[ 0<(x+y)/\sqrt2<1,\ \ 0<(y-x)/\sqrt2<\beta, \] and for any $\gamma>0$ and $\beta>0$, let $\sigma_{\gamma,\beta}$ be a Poisson point process on $\Omega_\beta$ with homogeneous intensity $\gamma$. Since $\Omega_\beta\subset (-\tfrac\beta{\sqrt2},\tfrac1{\sqrt2})\times(0,\tfrac{1+\beta}{\sqrt2})$, by \cref{th:hammersley}, for any $\varepsilon>0$, the size of the largest decreasing subset of $\sigma_{\gamma,\beta}$ is smaller than $\frac\Gamma{\sqrt2}(1+\varepsilon)(1+\beta)\sqrt{\gamma}$ a.a.s.\ as $\gamma\rightarrow\infty$. (We know from the result of Vershik and Kerov \cite{VershikKerov} that $\Gamma=2$ but we will not need that now.) Thus, for any $r>0$, the maximum size $\Lambda^{(\gamma)}$ of a $\lfloor r\sqrt{\gamma}\rfloor$-decreasing subset of $\sigma_{\gamma,\beta}$ is smaller than $\tfrac\Gamma{\sqrt2}r(1+\varepsilon)(1+\beta)\gamma$ a.a.s.\ as $\gamma\rightarrow\infty$. By \cref{pr:narrowrectangles}, $\abs{\Lambda^{(\gamma)}/\beta\gamma-\Phi(r)}<3/\beta$ and thus $\Lambda^{(\gamma)}/\gamma > \beta\Phi(r) - 3$ a.a.s.\ as $\gamma\rightarrow\infty$. So $\tfrac\Gamma{\sqrt2}r(1+\varepsilon)(1+\beta)>\Lambda^{(\gamma)}/\gamma > \beta\Phi(r) - 3$ a.a.s., and hence $\Phi(r)<\bigl(\tfrac\Gamma{\sqrt2}r(1+\varepsilon)(1+\beta)+3\bigr)/\beta$ for any $r,\varepsilon,\beta>0$. Letting $\beta\rightarrow\infty$ and $\varepsilon\rightarrow0$ yields $\Phi(r)\le \tfrac\Gamma{\sqrt2}r$, and it follows that $\Phi$ is continuous at 0. \end{proof} As a consequence of the nice properties of $\Phi$, the function $L$ from \cref{df:FrhoandL} is well behaved too. \begin{lemma}\label{lm:Lcontinuous} $L$ is continuous and increasing in both variables. Furthermore, for any $\eta,\theta\ge0$ it holds that $L(\eta,0)=0$ and $L(\eta,\theta)\le\eta$. \end{lemma} \begin{proof} By \cref{pr:phiproperties}, $L$ is continuous at all points $(\eta,\theta)$ with $\eta>0$. At any point $(0,\theta)$, continuity of $L$ follows from the fact that $\Phi$ is bounded. That $L$ is increasing in the second variable follows from the fact that $\Phi$ is increasing (\cref{pr:phiproperties}). That $L(\eta,0)=0$ and $L(\eta,\theta)\le\eta$ for any $\eta,\theta\ge0$ follows from the facts that $\Phi(0)=0$ and that $\Phi$ is bounded by one (\cref{pr:phiproperties}). To show that $L$ is increasing in the first variable, take any $\theta\ge0$ and any $\eta'>\eta>0$. Since $\Phi$ is concave (\cref{pr:phiproperties}), \[ \Phi(\sqrt{2\theta/\eta'}) \ge \left(1-\sqrt{\eta/\eta'}\right)\Phi(0) +\sqrt{\eta/\eta'}\,\Phi(\sqrt{2\theta/\eta}) =\sqrt{\eta/\eta'}\,\Phi(\sqrt{2\theta/\eta}), \] and it follows that $ L(\eta',\theta)=\eta'\Phi(\sqrt{2\theta/\eta'}) \ge \sqrt{\eta'/\eta}\,L(\eta,\theta)\ge L(\eta,\theta)$. \end{proof} Finally, \cref{lm:Lcontinuous}, together with the following lemma, shows that the functional $F_\rho$ from \cref{df:FrhoandL} is well defined\label{sec:Fwelldefined}. \begin{lemma}\label{lm:monotonic} Let $u$ be a function from ${\mathbb R}^2$ to ${\mathbb R}$ that is increasing in both variables. Then the following holds. \begin{thmlist} \item\label{lm:monotonicmeasurable} $u$ is measurable. \item\label{lm:monotonicdifferentiable} $u$ is differentiable almost everywhere. \item\label{lm:monotonicpartialderivatives} The partial derivatives of $u$ exist almost everywhere and are measurable. \end{thmlist} \end{lemma} \begin{proof} \reflocal{lm:monotonicmeasurable} Let $a$ be a real number. We must show that $u^{-1}((-\infty,a])$ is measurable. 
Define a function $g$ by letting $g(x):=\sup\{y\,:\,u(x,y) \le a\}$ whenever the supremum exists. Then the domain of $g$ is an interval and $g$ is decreasing, so it is measurable and its graph has measure zero. The inverse image $u^{-1}((-\infty,a])$ is the region below this graph (a measurable set) plus some subset of the graph itself (a null set). \reflocal{lm:monotonicdifferentiable} This is proved in \cite[Sec.~6]{BurkillHaslamjones}. \reflocal{lm:monotonicpartialderivatives} This follows from \reflocal{lm:monotonicmeasurable}, \reflocal{lm:monotonicdifferentiable} and \cite[Lemma~2]{BurkillHaslamjones}. \end{proof} The set of points where $u$ is differentiable is denoted by $\Diff(u)$. \section{Maximizing \texorpdfstring{$F_\rho$}{the functional} by solving a PDE system} \label{sec:generalpde} In this section we show that the problem of maximizing the functional $F_\rho$ can be reduced to a system of partial differential equations. This will let us compute, in terms of $\Phi$, the maximum for a parallelogram density domain with constant density. Furthermore, if \cref{con:triangularlimitshape} holds, the PDE system simplifies significantly and can be solved analytically for the uniform case as we will see in \cref{sec:uniform}. Let us recall some standard facts from convex analysis. Given a continuous convex function $f$ from an interval $I\subseteq{\mathbb R}$ to ${\mathbb R}$, its \emph{Legendre transform} ${\mathcal L}[f]$ is a function defined by \[ {\mathcal L}[f](s) = \sup_{r\in I} \bigl(rs-f(r)\bigr) \] for those $s$ for which the supremum exists. It is well known that ${\mathcal L}[f]$ is a continuous convex function and that its domain is an interval. Furthermore, the \emph{Fenchel-Young inequality} states that \[ f(r)+{\mathcal L}[f](s)\ge rs \] for any $r\in I$ and $s\in\dom {\mathcal L}[f]$, with equality if and only if $s\in\partial f(r)$, where the \emph{subdifferential set} $\partial f(r)$ is defined by \[ \partial f(r)=\{s\ :\ f(z)-f(r)\ge (z-r)s\ \text{for any}\ z\in I\}. \] It is also known that $\partial f(r)$ is nonempty for any $r$. \begin{defi} Let $\Phi^\ast$ be the real function defined on ${\mathbb R}_{\ge0}$ by \[ \Phi^\ast(s)=\inf_{r\ge0}\bigl(rs - \Phi(r)\bigr). \] \end{defi} Note that the infimum exists since $\Phi$ is bounded by \cref{pr:phiproperties}. \begin{lemma}\label{lm:phibound} $\Phi^\ast$ is continuous and $-1\le\Phi^\ast(s)\le0$ for any $s\ge0$. Furthermore, \begin{thmlist} \item\label{lm:phiboundA} $\Phi(r) + \Phi^\ast(s) \le rs$ for any $r, s\ge 0$, and \item\label{lm:phiboundB} for any $r$ there is an $s$ such that $\Phi(r) + \Phi^\ast(s) = rs$. \end{thmlist} \end{lemma} \begin{proof} Once we note that $\Phi^\ast(s) = -{\mathcal L}[-\Phi](-s)$, it follows that $\Phi^\ast$ is continuous, and part \reflocal{lm:phiboundA} follows from the Fenchel-Young inequality while part \reflocal{lm:phiboundB} follows from the fact that all subdifferential sets are nonempty. To see that $-1\le\Phi^\ast(s)\le0$ we observe that \[ -1\le-\sup_{r\ge0}\Phi(r)\le\inf_{r\ge0}\bigl(rs-\Phi(r)\bigr)\le 0\cdot s-\Phi(0) =0, \] where the first and last inequalities follow from \cref{pr:phiproperties}. \end{proof} Let us expand our terminology for doubly increasing functions to include functions decreasing in $x$ and increasing in $y$. Define a partial order $\le'$ on ${\mathbb R}^2$ by letting $(x_1,y_1)\le'(x_2,y_2)$ if $x_1\ge x_2$ and $y_1\le y_2$.
For any subset $A$ of ${\mathbb R}^2$, a function $v:A\rightarrow{\mathbb R}$ is \emph{reversely doubly increasing} if $v(x_1,y_1)\le v(x_2,y_2)$ whenever $(x_1,y_1)\le'(x_2,y_2)$. For $s\ge0$, we let ${\mathcal V}_s(A)$ denote the set of reversely doubly increasing functions $v$ on $A$ with $\diam v(A)\le s$, and we let ${\mathcal V}(A):=\bigcup_{s\ge0}{\mathcal V}_s(A)$ denote the set of all bounded reversely doubly increasing functions on $A$. Let ${\mathcal V}_{h,s}(A)$ denote the subset of ${\mathcal V}_s(A)$ consisting of functions with values in $[h,h+s]$. For any density domain $(\Omega,\rho)$, let $F^\ast_\rho:{\mathcal V}(\Omega)\rightarrow{\mathbb R}$ be a (nonlinear) functional given by \[ F^\ast_\rho(v) = \int_{\Omega}\rho\, \Phi^\ast\bigl(\lagomsqrt{-2v_xv_y/\rho}\bigr)\,d\mu, \] where the integrand is defined to be zero at points where $\rho=0$. This functional is well defined by \cref{lm:monotonic} together with the fact that $\Phi^\ast$ is continuous and bounded by \cref{lm:phibound}. \begin{lemma}\label{lm:injectivity} Let $\Omega$ be an open subset of ${\mathbb R}^2$, let $u\in{\mathcal U}(\Omega)$ and $v\in{\mathcal V}(\Omega)$, and let $A$ be the subset of $\Omega$ where $u_xv_y-u_yv_x>0$. Then, the map $\varphi\,:\,A\rightarrow{\mathbb R}^2$ defined by $\varphi(x,y)=(u(x,y),v(x,y))$ is injective. \end{lemma} \begin{proof} Suppose there are two distinct points $p=(x,y)$ and $q=(x',y')$ in $A$ with $\varphi(p)=\varphi(q)$. Without loss of generality we may assume that $x\le x'$. Since $\Omega$ is open, for any sufficiently small $\varepsilon>0$ the point $p_\varepsilon := (1-\varepsilon)p + \varepsilon q$ belongs to $\Omega$. If $x=x'$ we have $u(p_\varepsilon)=u(p)$ and $v(p_\varepsilon)=v(p)$ and thus $u_y(p)=v_y(p)=0$. If $y=y'$, we have $u(p_\varepsilon)=u(p)$ and $v(p_\varepsilon)=v(p)$ and thus $u_x(p)=v_x(p)=0$. If $x<x'$ and $y<y'$ we have $u(p_\varepsilon)=u(p)$ and thus $u_x(p)=u_y(p)=0$. If $x<x'$ and $y>y'$ we have $v(p_\varepsilon)=v(p)$ and thus $v_x(p)=v_y(p)=0$. In any of the four cases above, we conclude that $u_xv_y-u_yv_x=0$ at $p$, and it follows that $p\not\in A$, a contradiction. \end{proof} We will need the following ``change of variables'' theorem that appears as Theorem~263D in~\cite{Fremlin}. \begin{theo}\label{th:changeofvariables} Let $D\subseteq{\mathbb R}^n$ be any measurable set, and $\varphi:D\rightarrow{\mathbb R}^n$ a function differentiable relative to its domain\footnote{We say that $\varphi$ is \emph{differentiable relative to its domain} at a point $p\in D$ if there is a linear map $T(p)\,:\,{\mathbb R}^n\rightarrow{\mathbb R}^n$ (called a derivative of $\varphi$ relative to $D$ at $p$) such that for each $\varepsilon>0$ there is a $\delta>0$ such that $\abs{\varphi(p)+T(p)(x-p)-\varphi(x)}\le\varepsilon\abs{x-p}$ for any $x\in D$ with $\abs{x-p}<\delta$.} at each point of $D$. For each $p\in D$, let $T(p)$ be a derivative of $\varphi$ relative to $D$ at $p$, and set $J(p):=\abs{\det T(p)}$. Then $\mu(\varphi(D))\le \int_D J\,d\mu$ with equality if $\varphi$ is injective. \end{theo} Now we are ready for the main result of this section. Recall that $\Diff(u)$ denotes the set of points where $u$ is differentiable. \begin{theo}\label{th:pde} Let $(\Omega,\rho)$ be a density domain. Suppose, for some $r,s>0$, there are $u\in{\mathcal U}_r(\Omega)$ and $v\in{\mathcal V}_s(\Omega)$ with the following properties.
\begin{enumerate} \item[(a)]\label{it:measurers} The set $\{(u(x,y),v(x,y))\,:\,(x,y)\in\Diff(u)\cap\Diff(v)\}$ has measure $rs$. \item[(b)]\label{it:phiplusphistar} The PDE system \begin{gather} \label{eq:first} u_xv_y+u_yv_x = 0, \\ \label{eq:second} \rho\left(\Phi(\lagomsqrt{2u_xu_y/\rho})+\Phi^\ast(\lagomsqrt{-2v_xv_y/\rho})\right) = 2\sqrt{-u_xu_yv_xv_y} \end{gather} is satisfied almost everywhere in $\Omega$, where the left-hand side of \cref{eq:second} is defined to be zero at points where $\rho=0$. \end{enumerate} Then, $u$ is a maximizer of $F_\rho$ in ${\mathcal U}_r(\Omega)$ and $v$ is a maximizer of $F^\ast_\rho$ in ${\mathcal V}_s(\Omega)$. Furthermore, $s = \functional_{\rm max}'(r)$ if $\functional_{\rm max}'(r)$ exists. \end{theo} \begin{proof} Let $u$ and $v$ be functions in ${\mathcal U}_r(\Omega)$ and ${\mathcal V}_s(\Omega)$, respectively. By \cref{lm:phibound}, \begin{align*} F_\rho(u)+F^\ast_\rho(v) &=\int_{\Omega} \rho\,\left(\Phi\bigl(\lagomsqrt{2u_xu_y/\rho}\bigr) +\Phi^\ast\bigl(\lagomsqrt{-2v_xv_y/\rho}\bigr)\right)\,d\mu \\ &\le 2\int_{\Omega} \sqrt{-u_xu_yv_xv_y}\,d\mu \end{align*} with equality if and only if \cref{eq:second} holds almost everywhere. By the inequality of the geometric and arithmetic mean, \begin{equation}\label{eq:geom_arit_mean} 2\int_{\Omega} \sqrt{-u_xu_yv_xv_y}\,d\mu \le\int_{\Omega} (u_xv_y-u_yv_x)\,d\mu \end{equation} with equality if and only if \cref{eq:first} holds almost everywhere. Let $D:=\Diff(u)\cap\Diff(v)$. Let $\varphi$ be the map from $D$ to ${\mathbb R}^2$ defined by $\varphi(x,y):=\bigl(u(x,y),v(x,y)\bigr)$, and let $A$ be the subset of $D$ where $u_xv_y-u_yv_x$ is positive. It follows from \cref{lm:injectivity} that $\varphi$ is injective on $A$. The right-hand side of~\cref{eq:geom_arit_mean} equals \[ \int_A (u_xv_y-u_yv_x)\,d\mu, \] and since $\varphi$ is injective on $A$, by \cref{th:changeofvariables} this equals $\mu(\varphi(A))$. By the same theorem, $\int_{\Omega} (u_xv_y-u_yv_x)\,d\mu \ge \mu(\varphi(D))$, so $\mu(\varphi(A))\ge\mu(\varphi(D))$, which implies that $\mu(\varphi(A))=\mu(\varphi(D))$. Thus, we obtain \[ \mu(\varphi(A)) =\mu(\varphi(D)) \le rs \] with equality if and only if $u$ and $v$ have property \ref{it:measurers}. In conclusion, we have \[ F_\rho(u)+F^\ast_\rho(v)\le rs \] with equality if and only if $u$ and $v$ have properties \ref{it:measurers} and \ref{it:phiplusphistar}. It follows that such $u$ and $v$ are maximizers of $F_\rho$ and $F^\ast_\rho$ in ${\mathcal U}_r(\Omega)$ and ${\mathcal V}_s(\Omega)$, respectively. It remains to show that $s = \functional_{\rm max}'(r)$ if $\functional_{\rm max}'(r)$ exists. From above, it follows that \begin{equation}\label{eq:FmaxplusFmaxast} \functional_{\rm max}(r) + F_\rho^\ast(v)\le rs \end{equation} for any $r,s>0$ and any $v\in{\mathcal V}_s(\Omega)$, and that equality holds if there is a $u\in{\mathcal U}_r(\Omega)$ such that \ref{it:measurers} and \ref{it:phiplusphistar} hold. If equality holds for some particular $r,s,v$ and if $\functional_{\rm max}'(r)$ exists, then the partial derivative with respect to $r$ (while keeping $s$ and $v$ fixed) of the left- and right-hand sides of \cref{eq:FmaxplusFmaxast} must coincide, so $\functional_{\rm max}'(r)=s$. \end{proof} If \cref{con:triangularlimitshape} holds, the PDE system of \cref{th:pde} can be written explicitly. We will exploit this fact in \cref{sec:uniform} where we solve the system for the uniform case. 
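The duality between $\Phi$ and $\Phi^\ast$ that drives the proof above can also be checked numerically once a candidate for $\Phi$ is available. The following small sketch (Python with NumPy; the names are ad hoc and used only for this illustration) uses the closed form that $\Phi$ would take under \cref{con:triangularlimitshape}, namely $\Phi(r)=\sqrt2\,r-r^2/2$ for $r\le\sqrt2$ and $\Phi(r)=1$ otherwise (derived in the proof of \cref{pr:pdeunderconjecture} below), approximates $\Phi^\ast$ by a discretized infimum, and verifies the Fenchel-Young equality $\Phi(r)+\Phi^\ast(s)=rs$ at $s=\Phi'(r)$.
\begin{verbatim}
import numpy as np

R_GRID = np.linspace(0.0, 4.0, 4001)        # discretization of r >= 0

def phi_conjectured(r):
    # Phi under Conjecture con:triangularlimitshape:
    # Phi(r) = sqrt(2) r - r^2/2 for r <= sqrt(2), and 1 for r > sqrt(2).
    r = np.minimum(r, np.sqrt(2))
    return np.sqrt(2) * r - r ** 2 / 2

def phi_star(s):
    # Discretized Phi*(s) = inf_{r >= 0} (r s - Phi(r)).
    return np.min(R_GRID * s - phi_conjectured(R_GRID))

r = 0.7
s = np.sqrt(2) - r                          # Phi'(r) under the conjecture
print(phi_conjectured(r) + phi_star(s), r * s)   # both approximately 0.5
\end{verbatim}
The agreement of the two printed values (up to the discretization error of the grid) illustrates the equality case of the Fenchel-Young inequality that is exploited in the proof of \cref{th:pde}.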
\begin{prop}\label{pr:pdeunderconjecture} Suppose \cref{con:triangularlimitshape} holds. Then \cref{th:pde} holds if $\rho>0$ on $\Omega$ and \cref{eq:second} is replaced by \begin{equation}\label{eq:seconduniform} \min\{\lagomsqrt{u_xu_y/\rho},1\}+\min\{\lagomsqrt{-v_xv_y/\rho},1\}=1. \end{equation} \end{prop} \begin{proof} Suppose \cref{con:triangularlimitshape} holds. Then \[ \Phi(r)=\begin{cases} \sqrt2\,r-\frac{r^2}2 & \text{if $0\le r\le\sqrt2$,} \\ 1 & \text{if $r>\sqrt2$,} \end{cases} \] and $\Phi^\ast(s)=\Phi(s)-1$ for any $s\ge0$. Let $p=\sqrt{u_xu_y/\rho}$ and $q=\sqrt{-v_xv_y/\rho}$. If $p>1$, \cref{eq:seconduniform} implies that $q=0$ and hence $\Phi(\sqrt2\,p)+\Phi^\ast(\sqrt2\,q)=1-1=0$ so \cref{eq:second} is satisfied. Analogously, if $q>1$, \cref{eq:seconduniform} implies that $p=0$ and hence $\Phi(\sqrt2\,p)+\Phi^\ast(\sqrt2\,q)=0+0=0$ so \cref{eq:second} is satisfied. If $p,q\le1$, \cref{eq:seconduniform} can be written as $p+q=1$ which implies $2(p+q)-(p+q)^2=1$ and hence $2p-p^2+2q-q^2-1=2pq$. The last equation can be written as $\Phi(\sqrt2\,p)+\Phi^\ast(\sqrt2\,q)=2pq$ which is equivalent to \cref{eq:second}. \end{proof} \Cref{th:pde} also lets us find a maximizer of $F_\rho$ on a parallelogram with constant density. \begin{prop}\label{pr:parallelogrammaximizer} Let $\Omega$ be the open parallelogram \[ \{(x,y)\in{\mathbb R}^2\ :\ \abs{ax+by}<1,\ \abs{ax-by}<\beta\} \] for some $a,b,\beta>0$, and let $\rho$ be constant on $\Omega$. Then, for any $c\ge0$, in ${\mathcal U}_{2c}(\Omega)$ the functional $F_\rho$ is maximized by the function $u(x,y)=c(ax+by)$, and the maximum value is $2\rho\beta\Phi(c\sqrt{2ab/\rho})/ab$ if $\rho>0$ and 0 if $\rho=0$. \end{prop} \begin{proof} If $\rho=0$, the functional $F_\rho$ is identically zero, so we may assume that $\rho>0$. Note that $\sqrt{2u_xu_y/\rho}=c\sqrt{2ab/\rho}$ which is independent of $x$ and $y$. By \cref{lm:phiboundB}, there is a $d\ge0$ such that $\Phi(c\sqrt{2ab/\rho})+\Phi^\ast(d\sqrt{2ab/\rho})=2abcd/\rho$. Let $v(x,y):=d(by-ax)$. Then, $u$ and $v$ satisfy the conditions in \cref{th:pde} with $r=2c$ and $s=2\beta d$, and hence $u$ is a maximizer of $F_\rho$ in ${\mathcal U}_{2c}$. The maximum value is $\mu(\Omega)\rho\Phi(\sqrt{2u_xu_y/\rho})=(2\beta/ab)\rho\Phi(c\sqrt{2ab/\rho})$. \end{proof} \section{Probabilistic behavior of the local parallelogram} \label{sec:localparallelogram} In \cref{def:kappa}, we defined the map $P\mapsto\kappa_P$ to convert $k$-decreasing sets into doubly increasing functions. The next definition is a device to go in the other direction. \begin{defi} For any $u$ in ${\mathcal U}(\Omega)$, we let \begin{multline*} D(u):=\{(x,y)\in\Omega\ :\ u(x,y)\in\mathbb{Z}\ \text{and}\ u(x',y')<u(x,y)\\ \text{for any}\ (x',y')\in\Omega\setminus\{(x,y)\} \ \text{such that}\ x'\le x\ \text{and}\ y'\le y\}. \end{multline*} \end{defi} \begin{lemma}\label{lm:Iandkappa} The following holds. \begin{thmlist} \item\label{lm:IandkappaIudecreasing} For any $u$ in ${\mathcal U}_r(\Omega)$, $D(u)$ is $(\lfloor r\rfloor+1)$-decreasing, and if $u\in{\mathcal U}_{0,r}(\Omega)$, $D(u)$ is $\lfloor r\rfloor$-decreasing. \item\label{lm:IandkappaIkappaPisP} If $P$ is a finite set of points in $\Omega\subset{\mathbb R}^2$ with distinct $x$-coordinates and distinct $y$-coordinates, then $D(\kappa_P)=P$. \end{thmlist} \end{lemma} \begin{proof} \reflocal{lm:IandkappaIudecreasing} Let $u\in{\mathcal U}_r(\Omega)$. Suppose $(x,y)$ and $(x',y')$ are distinct points in $D(u)$ such that $u(x,y)=u(x',y')$. 
By definition of $D(u)$, it follows that $\{(x,y),(x',y')\}$ is a decreasing set. Hence, for each integer $k$, the fiber $u^{-1}(k)\cap D(u)$ is a decreasing set. The image of $u$ contains at most $\lfloor r\rfloor+1$ integers, so $D(u)$ is a union of $\lfloor r\rfloor+1$ decreasing sets. If $u\in{\mathcal U}_{0,r}(\Omega)$, the image of $u$ contains no integer outside the set $\{0,1,\dotsc,\lfloor r\rfloor\}$. Since $\Omega$ is open and $u\ge0$, for any point $(x,y)\in\Omega$ with $u(x,y)=0$ there is an $x'<x$ such that $u(x',y)=0$, so the fiber $u^{-1}(0)\cap D(u)$ is empty. \smallskip \reflocal{lm:IandkappaIkappaPisP} Let us first show that $P\subseteq D(\kappa_P)$. Take any point $(x,y)\in P$ and any point $(x',y')\in\Omega\setminus\{(x,y)\}$ such that $x'\le x$ and $y'\le y$. Let $Q$ be an increasing subset of $P\cap\bigl((-\infty,x']\times(-\infty,y']\bigr)$ of cardinality $\kappa_P(x',y')$. Since no two points in $P$ have the same $x$- or $y$-coordinates, $Q\cup\{(x,y)\}$ is an increasing subset of $P\cap\bigl((-\infty,x]\times(-\infty,y]\bigr)$ of cardinality $\kappa_P(x',y')+1$, and it follows that $\kappa_P(x',y')<\kappa_P(x,y)$. This shows that $(x,y)\in D(\kappa_P)$, and since $(x,y)$ was chosen arbitrarily in $P$, we conclude that $P\subseteq D(\kappa_P)$. To show that $D(\kappa_P)\subseteq P$, take any $(x,y)\in D(\kappa_P)$ and let $Q$ be an increasing subset of $P\cap\bigl((-\infty,x]\times(-\infty,y]\bigr)$ of cardinality $\kappa_P(x,y)$. Let $(x',y')$ be the point in $Q$ with maximal coordinates. Then, $\kappa_P(x',y')\ge\kappa_P(x,y)$ and, since $(x,y)\in D(\kappa_P)$, it follows that $(x',y')=(x,y)$ and hence that $(x,y)\in P$. \end{proof} The next lemma is essentially a reformulation of \cref{pr:beta} in terms of the $D$ operator. \begin{lemma}\label{lm:parallelogrambounds} Let $\Omega$ be the open parallelogram \[ \{(x,y)\in{\mathbb R}^2\ :\ \abs{ax+by}<1,\ \abs{ax-by}<\beta\} \] for some $a,b,\beta>0$, and let $\rho$ be constant on $\Omega$. Let $u_{\rm linear}(x,y)=c(ax+by)$ for some $c\ge0$, and let $\{\sigma_\gamma\}_{\gamma>0}$ be Poisson point processes on $\Omega$ with intensities $\gamma\rho$. Then the following two statements hold a.a.s.\ as $\gamma\rightarrow\infty$. \begin{align*} \forall w\in {\mathcal U}_{2c}(\Omega),\, \# (D(w\sqrt\gamma)\cap\sigma_\gamma)/\gamma &\le F_\rho(u_{\rm linear})+3(\rho+1)\mu(\Omega)/\beta\\ \forall d\in{\mathbb R}\ \exists w\in {\mathcal U}_{d,2c}(\Omega):\, \# (D(w\sqrt\gamma)\cap\sigma_\gamma)/\gamma &\ge F_\rho(u_{\rm linear})-3(\rho+1)\mu(\Omega)/\beta \end{align*} \end{lemma} \begin{proof} For any $w\in{\mathcal U}_{2c}(\Omega)$, by \cref{lm:IandkappaIudecreasing}, $D(w\sqrt\gamma)\cap\sigma_\gamma$ is a $(\lfloor 2c\sqrt{\gamma}\rfloor+1)$-decreasing subset of $\sigma_\gamma$. Let $c'=c + \frac{1}{2\sqrt{\gamma}}$ and let $u'_{\rm linear}(x,y)=c'(ax+by)$. Then, for any $w\in{\mathcal U}_{2c}(\Omega)$, $D(w\sqrt\gamma)\cap\sigma_\gamma$ is a $\lfloor 2c'\sqrt{\gamma}\rfloor$-decreasing subset of $\sigma_\gamma$, and it follows from \cref{pr:beta} that the statement \[ \forall w\in{\mathcal U}_{2c}(\Omega),\ \ \#(D(w\sqrt\gamma)\cap\sigma_\gamma)/\gamma \le F_\rho(u'_{\rm linear}) + 3\rho\mu(\Omega)/\beta \] holds a.a.s.\ as $\gamma\rightarrow\infty$. Note that $c'\rightarrow c$ as $\gamma\rightarrow\infty$. 
By \cref{lm:Lcontinuous}, $L$ is continuous, so
\[
F_\rho(u_{\rm linear}) - F_\rho(u'_{\rm linear}) = \mu(\Omega) \bigl(L(\rho, c^2ab) - L(\rho, c'^2ab)\bigr) \rightarrow0
\]
as $\gamma\rightarrow\infty$, and we conclude that the statement
\[
\forall w\in{\mathcal U}_{2c}(\Omega),\ \#(D(w\sqrt\gamma)\cap\sigma_\gamma)/\gamma \le F_\rho(u_{\rm linear}) + 3(\rho+1)\mu(\Omega)/\beta
\]
holds a.a.s.\ as $\gamma\rightarrow\infty$.
Now for the second part. This time, let $c'=c - \frac{1}{2\sqrt{\gamma}}$ and, as before, let $u'_{\rm linear}(x,y)=c'(ax+by)$. We will soon let $\gamma$ tend to infinity, so we may assume that $c'>0$. Let $P_\gamma$ be any maximal $\lfloor 2c'\sqrt{\gamma}\rfloor$-decreasing subset of $\sigma_\gamma$. It follows from \cref{pr:beta} that
\[
\# P_\gamma/\gamma \ge F_\rho(u'_{\rm linear})-3\rho\mu(\Omega)/\beta
\]
a.a.s.\ as $\gamma\rightarrow\infty$. Analogously to above, since $c'\rightarrow c$ as $\gamma\rightarrow\infty$ and since $L$ is continuous, it follows that
\[
\# P_\gamma/\gamma \ge F_\rho(u_{\rm linear})-3(\rho+1)\mu(\Omega)/\beta
\]
a.a.s.\ as $\gamma\rightarrow\infty$. For any $d\in{\mathbb R}$, let $w=(\kappa_{P_\gamma}+\lceil d\sqrt\gamma\rceil)/\sqrt\gamma$. By \cref{pr:decreasingincreasingrelation},
\[
0\le\kappa_{P_\gamma}\le\lfloor 2c'\sqrt{\gamma}\rfloor\le 2c'\sqrt{\gamma}= 2c\sqrt\gamma - 1,
\]
so $w$ belongs to ${\mathcal U}_{d,2c}(\Omega)$. Clearly, $D$ is invariant under translation by an integer, so $D(w\sqrt\gamma)=D(\kappa_{P_\gamma})$, which equals $P_\gamma$ by \cref{lm:IandkappaIkappaPisP}. (Note that the points in $P_\gamma$ are in general position almost surely.) We conclude that the statement
\[
\forall d\in{\mathbb R}\ \exists w\in {\mathcal U}_{d,2c}(\Omega):\ \# (D(w\sqrt\gamma)\cap\sigma_\gamma)/\gamma \ge F_\rho(u_{\rm linear})-3(\rho+1)\mu(\Omega)/\beta
\]
holds a.a.s.\ as $\gamma\rightarrow\infty$.
\end{proof}
\section{Approximating \texorpdfstring{$\Omega$}{Omega} by a collection of parallelograms}
\label{sec:greatlemma}
Now that we have studied the behavior of a parallelogram density domain, it is time to divide a general density domain into many small local parallelograms. It is vital that the number of such parallelograms is finite, since we want to infer an ``in probability'' result for the whole domain from similar results for each local parallelogram. To this end we will rely heavily on the theory of Vitali coverings. First, a pair of technical lemmas.
\begin{lemma}\label{lm:disjointdistance}
Let $A$ and $B$ be disjoint closed subsets of ${\mathbb R}^2$, and suppose $A$ is compact. Then, the distance between $A$ and $B$ is positive.
\end{lemma}
\begin{proof}
Suppose not. Then there are sequences $a_i\in A$ and $b_i\in B$ such that $\abs{a_i-b_i}\rightarrow0$. Since $A$ is compact, there is a subsequence $a_{i_j}$ of $a_i$ that converges to some $a$ in $A$. By the triangle inequality, $\abs{b_{i_j}-a}\le\abs{b_{i_j}-a_{i_j}}+\abs{a_{i_j}-a}$, which tends to zero. Since $B$ is closed, this implies that $a$ belongs to $B$, which is a contradiction since $A$ and $B$ are disjoint.
\end{proof}
\begin{lemma}\label{lm:diamconvergence}
Let $\Omega$ be an open subset of ${\mathbb R}^2$ and let $C$ be a compact subset of $\Omega$. Then there is a constant $K$ such that
\[
\diam w(C) \le \diam u(\Omega) + K\norm{w-u}_\Omega
\]
for any $u,w\in{\mathcal U}(\Omega)$.
\end{lemma} \begin{proof} By \cref{lm:disjointdistance}, the distance between $C$ and ${\mathbb R}^2\setminus\Omega$ is positive, so there exists a $d>0$ such that for each $(x,y)\in C$ we have $[x-d,x+d]\times[y-d,y+d]\subset\Omega$. It follows that, for any $u,w\in{\mathcal U}(\Omega)$, \[ \norm{w-u}_\Omega \ge \sup_{(x,y)\in C}\norm{w-u}_{[x,x+d]\times[y,y+d]} \ge \bigl(\sup w(C)-\sup u(\Omega)\bigr)d^2 \] and \[ \norm{w-u}_{\Omega} \ge \sup_{(x,y)\in C}\norm{w-u}_{[x-d,x]\times[y-d,y]} \ge \bigl(\inf u(\Omega)-\inf w(C)\bigr)d^2. \] Thus we can choose $K:=2/d^2$. \end{proof} Next, we make the idea of a local parallelogram precise. \begin{defi} Let $u\in{\mathcal U}(\Omega)$ and let $\iota>0$. A \emph{$(u,\iota)$-parallelogram} is a closed parallelogram of the form \begin{align*} P=\{(x,y)\in{\mathbb R}^2\,:\, &\abs{\tilde{u}^P_x(x-x_P)+\tilde{u}^P_y(y-y_P)} \le \iota c_P,\\ &\abs{\tilde{u}^P_x(x-x_P)-\tilde{u}^P_y(y-y_P)} \le c_P\}, \end{align*} where $(x_P,y_P)$ is a point in $\Diff(u)$, $c_P>0$, $\tilde{u}^P_x:=\max\{\iota^3, u_x(x_P,y_P)\}$ and $\tilde{u}^P_y:=\max\{\iota^3, u_y(x_P,y_P)\}$. Also, for notational convenience, define $u^P_x=u_x(x_P,y_P)$ and $u^P_y=u_y(x_P,y_P)$. We say that $P$ is \emph{well behaved} if $\tilde{u}^P_x=u^P_x$ and $\tilde{u}^P_y=u^P_y$. \end{defi} \begin{lemma}\label{lm:uiparallelogrambound} For any point $(x,y)$ in a $(u,\iota)$-parallelogram $P$, we have \[ \abs{x-x_P} + \abs{y-y_P} \le c_P(1+\iota)\iota^{-3} \] and \[ \abs{u^P_x(x-x_P)+u^P_y(y-y_P)} \le c_P(1+\iota). \] \end{lemma} \begin{proof} We have \begin{equation}\label{eq:utildex} \begin{split} & \quad 2\tilde{u}^P_x\abs{x-x_P}\\ &\le\abs{\tilde{u}^P_x(x-x_P) + \tilde{u}^P_y(y-y_P)} +\abs{\tilde{u}^P_x(x-x_P) - \tilde{u}^P_y(y-y_P)}\\ &\le c_P (1+\iota), \end{split} \end{equation} where the first inequality is the triangle inequality and the second inequality follows from the definition of a $(u,\iota)$-parallelogram. For the same reason, \begin{equation}\label{eq:utildey} 2\tilde{u}^P_y\abs{y-y_P} \le c_P (1+\iota), \end{equation} so \[ \abs{x-x_P}+\abs{y-y_P} \le c_P (1+\iota) \left(\frac1{2\tilde{u}^P_x}+\frac1{2\tilde{u}^P_y}\right) \le c_P (1+\iota)\iota^{-3}. \] Finally, by \cref{eq:utildex,eq:utildey}, \[ \abs{u^P_x(x-x_P)+u^P_y(y-y_P)} \le\tilde{u}^P_x\abs{x-x_P}+\tilde{u}^P_y\abs{y-y_P} \le c_P(1+\iota). \] \end{proof} Let us recall the definition of a regular Vitali covering. As always, we let $\mu$ denote the Lebesgue measure on ${\mathbb R}^2$. \begin{defi} Let $A\subseteq{\mathbb R}^2$ and let ${\mathcal C}$ be a collection of closed subsets of ${\mathbb R}^2$. \begin{itemize} \item ${\mathcal C}$ is a \emph{Vitali covering} of $A$ if, for any $p\in A$ and any $\delta>0$, there is a $C\in{\mathcal C}$ such that $p\in C$ and $0<\diam C<\delta$. \item ${\mathcal C}$ is \emph{regular} if there is a constant $K$ such that $(\diam C)^2\le K \mu(C)$ for any $C\in{\mathcal C}$. \end{itemize} \end{defi} \begin{lemma}\label{lm:vitali} Let $u\in {\mathcal U}(\Omega)$ and let $T$ be a subset of $\Diff(u)$ where $u_x$ and $u_y$ are both bounded. Then, for any $\iota>0$, the family of all $(u,\iota)$-parallelograms is a regular Vitali covering of $T$. \end{lemma} \begin{proof} For any $p\in T$, the diameter of a $(u,\iota)$-parallelogram $P$ centered at $(x_P,y_P)=p$ is bounded by \[ 2\sqrt{(x-x_P)^2+(y-y_P)^2}\le 2\bigl(\abs{x-x_P}+\abs{y-y_P}\bigr), \] which is at most $2c_P(1+\iota)\iota^{-3}$ by \cref{lm:uiparallelogrambound}. 
By choosing $c_P$ small enough we can make the diameter arbitrarily small, so the family of $(u,\iota)$-parallelograms is a Vitali covering of $T$. To see that it is regular, note that $\mu(P)=2\iota c_P^2/(\tilde{u}^P_x\tilde{u}^P_y)$, so the quotient
\[
\frac{\diam P}{\sqrt{\mu(P)}} \le \frac{2c_P(1+\iota)\iota^{-3}}{\sqrt{\mu(P)}}
\]
is bounded since $u_x$ and $u_y$ are bounded on $T$.
\end{proof}
The following lemma divides a density domain with a doubly increasing function $u$ into a finite number of local parallelograms such that, within each parallelogram, the situation is close to the setting of \cref{lm:parallelogrambounds}, that is, the density is nearly constant and $u$ is nearly a linear function aligned with the parallelogram. We will use the notation $o_\iota(1)$ to represent a function of $\iota$ that tends to zero as $\iota$ tends to zero.
\begin{lemma}\label{lm:great}
Let $(\Omega,\rho)$ be a density domain and let $u\in {\mathcal U}(\Omega)$ and $\varepsilon>0$. Then, for any $0<\iota<1$, there is a measurable set $S_\iota\subseteq \Diff(u)$ and a finite disjoint collection ${\mathcal P}_\iota$ of $(u,\iota)$-parallelograms such that the following holds.
\begin{thmlist}
\item\label{lm:greatmisc} For each $P\in{\mathcal P}_\iota$ it holds that $P\subset\Omega$, $(x_P,y_P)\in S_\iota$ and $c_P<1$. Also, $S_\iota\subseteq\bigcup{\mathcal P}_\iota$.
\item\label{lm:greatbounded} $\rho$, $u_x$ and $u_y$ are bounded on $\bigcup_{0<\iota<1}S_\iota$.
\item\label{lm:greatparallelogramscovereverything} $\norm{\rho}_{\Omega\setminus S_\iota}<\varepsilon + o_\iota(1)$. \intuition{$S_\iota$ nearly covers the density domain.}
\item\label{lm:greatScoversparallelograms} For each $P\in{\mathcal P}_\iota$ it holds that $\mu(P\cap S_\iota)/\mu(P)>1-\iota$. \intuition{$S_\iota$ nearly covers each parallelogram.}
\item\label{lm:greatrhoconstant} For each $P\in{\mathcal P}_\iota$ it holds that $\abs{\rho(x,y)-\rho(x_P,y_P)}\le\iota$ for any $(x,y)\in P\cap S_\iota$. \intuition{$\rho$ is nearly constant on each parallelogram.}
\item\label{lm:greatulinear} For each $P\in{\mathcal P}_\iota$ it holds that
\[
\abs{u(x,y)-\bigl(u(x_P,y_P)+u^P_x(x-x_P) +u^P_y(y-y_P)\bigr)} \le \iota^5(\abs{x-x_P}+\abs{y-y_P})
\]
for any $(x,y)\in P$. \intuition{$u$ is nearly linear on each parallelogram.}
\item\label{lm:greatLconstant} For each $P\in{\mathcal P}_\iota$ it holds that
\[
\abs{\norm{L(\rho,u_xu_y)}_P/\mu(P)-L(\rho(x_P,y_P),u^P_xu^P_y)}<\iota.
\]
\intuition{$L(\rho,u_xu_y)$ is nearly constant on each parallelogram.}
\item\label{lm:greatuvarieslittle} For each $P\in{\mathcal P}_\iota$ it holds that
\[
\sup_{(x,y)\in P}\abs{u(x,y)-u(x_P,y_P)}<
\begin{cases}
c_P\iota(1+5\iota) & \text{if $P$ is well behaved,} \\
c_P(1+7\iota) & \text{otherwise.}
\end{cases}
\]
\intuition{$u$ does not vary too much inside each parallelogram.}
\item\label{lm:greatwvarieslittle} There is a function $d:(0,1)\rightarrow{\mathbb R}_{>0}$ such that
\[
\sup_{w\in{\mathcal U}(\Omega):\ \norm{w-u}_\Omega<d(\iota)}\ \sup_{P\in{\mathcal P}_\iota} \left(\left(\frac{\diam w(P)}{2\iota c_P}\right)^2\tilde{u}^P_x\tilde{u}^P_y-u^P_x u^P_y\right) < o_\iota(1).
\]
\end{thmlist}
\end{lemma}
\begin{proof}
By \cref{lm:monotonic}, $\mu(\Omega\setminus \Diff(u))=0$.
Since $\norm{\rho}_{\Omega}<\infty$, we can choose a subset $T$ of $\Diff(u)$ with finite measure such that $\rho$, $u_x$ and $u_y$ are all smaller than some positive constant $C$ there, and such that
\begin{equation}\label{eq:Tlarge}
\norm{\rho}_{\Omega\setminus T} < \varepsilon.
\end{equation}
For each $0<\iota<1$, let $T_\iota$ be the set of points in $T$ that are Lebesgue points of $L(\rho,u_xu_y)$ with respect to the family of $(u,\iota)$-parallelograms, that is, $T_\iota$ is the set of points $(x_0,y_0)\in T$ such that for each $\varepsilon'>0$ we have $\abs{\norm{L(\rho,u_xu_y)}_P/\mu(P) - L(\rho,u_xu_y)|_{(x_0,y_0)}}<\varepsilon'$ for all sufficiently small $(u,\iota)$-parallelograms $P$ centered at $(x_0,y_0)$. By \cref{lm:vitali}, the family of $(u,\iota)$-parallelograms is a regular Vitali covering of $T$, so, by Lebesgue's differentiation theorem (see e.g.~\cite{Folland}), $\mu(T_\iota)=\mu(T)$ for any $\iota$. For any $0<\iota<1$ and any positive integer $j$, let
\[
\tilde{S}_\iota^j=\{(x,y)\in T_\iota\,:\,(j-1)\iota\le\rho(x,y)<j\iota\}
\]
and let $S_\iota^j$ be the set of points in $\tilde{S}_\iota^j$ at which the density of $\tilde{S}_\iota^j$ is $1$. By Lebesgue's density theorem, $\mu(\tilde{S}_\iota^j\setminus S_\iota^j)=0$. For any $\iota$, $T_\iota$ is the union of a finite number of sets of the form $\tilde{S}_\iota^j$, so the union $\hat{S}_\iota:=\bigcup_j S_\iota^j$ of all $S_\iota^j$ for a fixed $\iota$ has the same measure as $T_\iota$ and hence as $T$. For any $0<\iota<1$ and any positive integer $j$, let ${\mathcal A}_\iota^j$ be the family of $(u,\iota)$-parallelograms $P$ with center in $S_\iota^j$ and $c_P<1$ such that
\begin{enumerate}[label=\upshape(\Roman*),ref=(\Roman*)]
\item\label{it:a} $\mu(P\cap S_\iota^j)/\mu(P)>1-\iota$,
\item\label{it:b} $\abs{\norm{L(\rho,u_xu_y)}_P/\mu(P) - L(\rho(x_P,y_P),u^P_xu^P_y)}<\iota$, and
\item\label{it:c} the $(u,\iota)$-parallelogram $P'$ concentric with $P$ but with $c_{P'}=(1+\iota)c_P$ is contained in $\Omega$, and
\[
\abs{u(x,y) -\bigl(u(x_P,y_P)+u^P_x(x-x_P)+u^P_y(y-y_P)\bigr)} \le\iota^5\bigl(\abs{x-x_P}+\abs{y-y_P}\bigr)
\]
for any point $(x,y)\in P'$.
\end{enumerate}
Let ${\mathcal A}_\iota=\bigcup_j{\mathcal A}_\iota^j$. We claim that ${\mathcal A}_\iota^j$ is a Vitali covering of $S_\iota^j$. To see this, take any $(x,y)\in S_\iota^j$ and note the following:
\begin{itemize}
\item Since the density of $\tilde{S}_\iota^j$ is 1 at $(x,y)$ (by the choice of $S_\iota^j$) and $\mu(\tilde{S}_\iota^j\setminus S_\iota^j)=0$, \ref{it:a} holds for any sufficiently small $(u,\iota)$-parallelogram centered at $(x,y)$.
\item Since $(x,y)$ is a Lebesgue point of $L(\rho,u_xu_y)$ with respect to the family of $(u,\iota)$-parallelograms (by the choice of $T_\iota$), \ref{it:b} holds for any sufficiently small $(u,\iota)$-parallelogram centered at $(x,y)$.
\item Since $\Omega$ is open and $u$ is differentiable at $(x,y)$, \ref{it:c} holds for any sufficiently small $(u,\iota)$-parallelogram centered at $(x,y)$.
\end{itemize}
By \cref{lm:vitali}, the family of all $(u,\iota)$-parallelograms is regular, so it follows that ${\mathcal A}_\iota^j$ is a regular Vitali covering of $S_\iota^j$, and hence ${\mathcal A}_\iota$ is a regular Vitali covering of $\hat{S}_\iota$. By Vitali's covering theorem (see e.g.~\cite{EvansGariepyBook}), there is a finite disjoint subfamily ${\mathcal P}_\iota$ of ${\mathcal A}_\iota$ such that $\mu(\hat{S}_\iota\cap\cup{\mathcal P}_\iota)\ge(1-\iota)\mu(\hat{S}_\iota)$ and hence
\begin{equation}\label{eq:muSP}
\mu(T\cap\cup{\mathcal P}_\iota)\ge(1-\iota)\mu(T).
\end{equation}
Finally, define
\[
S_\iota:= \bigcup_{j=1}^\infty\bigcup_{P\in{\mathcal P}_\iota\cap{\mathcal A}_\iota^j}P\cap S_\iota^j.
\]
Let us check that $S_\iota$ and ${\mathcal P}_\iota$ have the properties claimed in the lemma.
\smallskip
\reflocal{lm:greatmisc} and \reflocal{lm:greatbounded} follow directly from the definitions.
\smallskip
\reflocal{lm:greatparallelogramscovereverything} We have
\begin{multline*}
\mu(S_\iota) =\sum_{j=1}^\infty\sum_{P\in{\mathcal P}_\iota\cap{\mathcal A}_\iota^j}\mu(P\cap S_\iota^j) \ge \{\text{by \ref{it:a}}\} \ge\sum_{j=1}^\infty\sum_{P\in{\mathcal P}_\iota\cap{\mathcal A}_\iota^j}(1-\iota)\mu(P)\\
=(1-\iota)\mu(\cup{\mathcal P}_\iota) \ge \{\text{by \cref{eq:muSP}}\} \ge(1-\iota)^2\mu(T),
\end{multline*}
so \reflocal{lm:greatparallelogramscovereverything} follows from \cref{eq:Tlarge} and the fact that $\rho$ is bounded on $T$.
\smallskip
\reflocal{lm:greatScoversparallelograms} follows from \ref{it:a}.
\smallskip
\reflocal{lm:greatrhoconstant} follows from the definition of the sets $\tilde{S}_\iota^j$.
\smallskip
\reflocal{lm:greatulinear} follows from \ref{it:c}.
\smallskip
\reflocal{lm:greatLconstant} follows from \ref{it:b}.
\smallskip
\reflocal{lm:greatuvarieslittle} requires some reasoning. From \ref{it:c} we know that, for each $P\in{\mathcal P}_\iota$,
\begin{equation}\label{eq:uvarieslittle}
\abs{u(x,y)-\bigl(u(x_P,y_P)+u^P_x(x-x_P) +u^P_y(y-y_P)\bigr)} \le \iota^5(\abs{x-x_P}+\abs{y-y_P})
\end{equation}
for any $(x,y)\in P'$, where $P'\subset\Omega$ is defined as in \ref{it:c}. Consider a parallelogram $P$ in ${\mathcal P}_\iota$. By \cref{lm:uiparallelogrambound}, inside $P'$,
\begin{equation}\label{eq:absxplusabsy}
\abs{x-x_P}+\abs{y-y_P}\le c_{P'} (1+\iota)\iota^{-3}.
\end{equation}
Also, inside $P'$,
\begin{equation}\label{eq:udiam}
\begin{split}
&\quad\abs{u(x,y) - u(x_P,y_P)} \le\{\text{triangle ineq.}\}\le\\
&\le \abs{u^P_x(x-x_P)+u^P_y(y-y_P)}\\
&\quad + \abs{u(x,y) - (u(x_P,y_P)+u^P_x(x-x_P)+u^P_y(y-y_P))}\\
&\le \abs{u^P_x(x-x_P)+u^P_y(y-y_P)} + \iota^5(\abs{x-x_P}+\abs{y-y_P})\\
&\le\abs{u^P_x(x-x_P)+u^P_y(y-y_P)} + \iota^2(1+\iota)c_{P'},
\end{split}
\end{equation}
where the last inequality follows from \cref{eq:absxplusabsy}. If $P$ is well behaved, inside $P'$ we have
\[
\abs{u^P_x(x-x_P)+u^P_y(y-y_P)} = \abs{\tilde{u}^P_x(x-x_P)+\tilde{u}^P_y(y-y_P)} \le \iota c_{P'}
\]
by the definition of a $(u,\iota)$-parallelogram. Hence, by \cref{eq:udiam}, inside $P'$,
\[
\abs{u(x,y) - u(x_P,y_P)} \le c_{P'}\iota(1+\iota+\iota^2) = c_P(1+\iota)\iota(1+\iota+\iota^2)< c_P\iota(1+5\iota),
\]
where the last inequality follows from the fact that $\iota < 1$. If $P$ is not well behaved, by \cref{lm:uiparallelogrambound}, we still have
\[
\abs{u^P_x(x-x_P)+u^P_y(y-y_P)}\le c_{P'}(1+\iota).
\]
Hence, by \cref{eq:udiam}, inside $P'$ we have
\[
\abs{u(x,y) - u(x_P,y_P)} \le c_{P'}(1+\iota)(1+\iota^2) =c_P(1+\iota)^2(1+\iota^2)<c_P(1+7\iota),
\]
where we have again used that $\iota<1$.
\smallskip
\reflocal{lm:greatwvarieslittle} requires some reasoning as well. Consider a parallelogram $P$ in ${\mathcal P}_\iota$. Let $P'$ be the larger concentric $(u,\iota)$-parallelogram as defined in \ref{it:c}. By \cref{lm:diamconvergence} applied to the open set $\interior{P'}$ and the compact set $P$, there is a $K_P>0$ such that $\diam w(P) \le \diam u(P')+K_P\norm{w-u}_\Omega$ for any $w\in{\mathcal U}(\Omega)$.
It follows from \reflocal{lm:greatuvarieslittle} that, if $P$ is well behaved, \[ \diam w(P) \le 2\iota c_P(1+5\iota) + K_P\norm{w-u}_\Omega \le 2\iota c_P(1+6\iota) \] for any $w\in{\mathcal U}(\Omega)$ with $\norm{w-u}_\Omega\le 2\iota^2 c_P/K_P$, and if $P$ is not well behaved, \[ \diam w(P) \le 2c_P(1+7\iota) + K_P\norm{w-u}_\Omega \le 2c_P(1+7\iota+\iota^2)<18c_P \] for any $w\in{\mathcal U}(\Omega)$ with $\norm{w-u}_\Omega\le 2\iota^2 c_P/K_P$. Thus, if $P$ is well behaved, for any $w\in{\mathcal U}(\Omega)$ with $\norm{w-u}_\Omega\le 2\iota^2 c_P/K_P$ we have \begin{multline*} \left(\frac{\diam w(P)}{2\iota c_P}\right)^2 \tilde{u}^P_x\tilde{u}^P_y-u^P_xu^P_y =\left[\left(\frac{\diam w(P)}{2\iota c_P}\right)^2-1\right] u^P_x u^P_y\\ \le [(1+6\iota)^2-1] u^P_x u^P_y = o_\iota(1) \end{multline*} since $u_x$ and $u_y$ are bounded in $\bigcup_\iota S_\iota$. If $P$ is not well behaved, at least one of $u^P_x$ and $u^P_y$ is smaller than $\iota^3$, and at least one of $\tilde{u}^P_x$ and $\tilde{u}^P_y$ equals $\iota^3$. It follows that, for any $w\in{\mathcal U}(\Omega)$ with $\norm{w-u}_\Omega\le 2\iota^2 c_P/K_P$ we have \[ \left(\frac{\diam w(P)}{2\iota c_P}\right)^2 \tilde{u}^P_x\tilde{u}^P_y-u^P_xu^P_y \le(9/\iota)^2 \tilde{u}^P_x\tilde{u}^P_y-u^P_xu^P_y=o_\iota(1). \] We conclude that \[ \left(\frac{\diam w(P)}{2\iota c_P}\right)^2 \tilde{u}^P_x\tilde{u}^P_y-u^P_xu^P_y < o_\iota(1) \] for any $w\in{\mathcal U}(\Omega)$ with $\norm{w-u}_\Omega\le 2\iota^2 c_P/K_P$ whether $P$ is well behaved or not. Thus, we can choose $d(\iota):=2\iota^2\min_{P\in{\mathcal P}_\iota}(c_P/K_P)$. \end{proof} \section{Semicontinuity of \texorpdfstring{$F_\rho$}{the functional} and a related probabilistic result}\label{sec:semicontinuity} Our proof of \cref{th:mainF} will rely heavily on the following result. \begin{prop}\label{pr:semicontinuity} $F_\rho$ is upper semicontinuous in the $L^1(\Omega)$-norm. \end{prop} \begin{proof} Let $u\in {\mathcal U}(\Omega)$. We must show that, for any $\varepsilon'>0$ there is a $\delta>0$ such that $F_\rho(w)<F_\rho(u)+\varepsilon'$ for any $w\in{\mathcal U}(\Omega)$ with $\norm{w-u}_\Omega<\delta$. Choose any $\varepsilon>0$ smaller than $\varepsilon'$ and apply \cref{lm:great}. Let $d:(0,1)\rightarrow{\mathbb R}_{>0}$ be the function defined in \cref{lm:greatwvarieslittle} and consider any family $\{w^{(\iota)}\in{\mathcal U}(\Omega)\}_{0<\iota<1}$ such that $\norm{w^{(\iota)}-u}_\Omega < d(\iota)$. Let \[ q^{(\iota)}:= \sup_{w\in{\mathcal U}(\Omega):\ \norm{w-u}_\Omega<d(\iota)}\ \sup_{P\in{\mathcal P}_\iota} \left(\left(\frac{\diam w(P)}{2\iota c_P}\right)^2 \tilde{u}^P_x\tilde{u}^P_y-u^P_xu^P_y\right). \] Consider any $P\in {\mathcal P}_\iota$ and let $\rho_P:=\sup_{S_\iota\cap P}\rho$. By \cref{lm:Lcontinuous}, $L$ is increasing in the first variable, so \[ \norm{L(\rho,w^{(\iota)}_xw^{(\iota)}_y)}_{P\cap S_\iota} \le \norm{L(\rho_P,w^{(\iota)}_xw^{(\iota)}_y)}_P. \] Let $r_P:=\diam w^{(\iota)}(P)$. 
By \cref{pr:parallelogrammaximizer}, on ${\mathcal U}_{r_P}(\interior P)$, $F_{\rho_P}$ is maximized by the function
\[
u^P_{\rm linear}(x,y) =\frac{r_P}{2\iota c_P}\bigl(\tilde{u}^P_x(x-x_P)+\tilde{u}^P_y(y-y_P)\bigr),
\]
so
\begin{multline*}
\norm{L(\rho_P,w^{(\iota)}_xw^{(\iota)}_y)}_P \le F_{\rho_P}(u^P_{\rm linear}) =\mu(P)L(\rho_P,(\tfrac{r_P}{2\iota c_P})^2\tilde{u}^P_x\tilde{u}^P_y)\\
\le\mu(P) L(\rho_P,q^{(\iota)}+u^P_xu^P_y)=:\text{RHS},
\end{multline*}
where the last inequality uses the fact that $L$ is increasing in the second variable (\cref{lm:Lcontinuous}). By \cref{lm:greatrhoconstant}, $\abs{\rho_P-\rho(x_P,y_P)}\le\iota$, and by \cref{lm:greatwvarieslittle}, $q^{(\iota)}<o_\iota(1)$. By \cref{lm:Lcontinuous}, $L$ is continuous and hence uniformly continuous on the set $\{(\rho(x,y),u_x(x,y)u_y(x,y))\,:\,(x,y)\in \bigcup_{\iota>0}S_\iota\}$, which is bounded by \cref{lm:greatbounded}. Again by \cref{lm:Lcontinuous}, $L$ is increasing in the second variable, so $\text{RHS} < \mu(P)\bigl(L(\rho(x_P,y_P),u^P_xu^P_y)+o_\iota(1)\bigr)$, where $o_\iota(1)$ is independent of $P$. By \cref{lm:greatLconstant},
\[
L(\rho(x_P,y_P),u^P_xu^P_y) < \tfrac{1}{\mu(P)}\norm{L(\rho,u_xu_y)}_P+\iota,
\]
so
\[
\norm{L(\rho,w^{(\iota)}_xw^{(\iota)}_y)}_{P\cap S_\iota} < \norm{L(\rho,u_xu_y)}_P+o_\iota(1)\mu(P).
\]
By \cref{lm:greatmisc}, $S_\iota\subseteq\bigcup{\mathcal P}_\iota$, so summing over all $P$ in ${\mathcal P}_\iota$ yields
\[
\norm{L(\rho,w^{(\iota)}_xw^{(\iota)}_y)}_{S_\iota}-\norm{L(\rho,u_xu_y)}_{\cup{\mathcal P}_\iota} <o_\iota(1)\mu(\cup{\mathcal P}_\iota)=o_\iota(1).
\]
By \cref{lm:Lcontinuous}, $L(\rho,w^{(\iota)}_xw^{(\iota)}_y)\le\rho$, so by \cref{lm:greatparallelogramscovereverything} it now follows that
\[
\norm{L(\rho,w^{(\iota)}_xw^{(\iota)}_y)}_\Omega\le \norm{L(\rho,u_xu_y)}_\Omega+\varepsilon + o_\iota(1).
\]
Since $\varepsilon<\varepsilon'$, the proposition follows.
\end{proof}
In the proof of \cref{th:main}, we will need the following probabilistic analogue of \cref{pr:semicontinuity}.
\begin{lemma}\label{lm:notbetterthanfunctional}
Let $(\Omega,\rho)$ be a density domain, and let $\{\sigma_\gamma\}_{\gamma>0}$ be Poisson point processes on $\Omega$ with intensity functions $\gamma\rho$. Then, for any $u\in {\mathcal U}(\Omega)$ and any $\varepsilon'>0$ there is a $\delta>0$ such that
\[
\sup_{w\in {\mathcal U}(\Omega),\ \norm{w-u}_\Omega<\delta}\# (D(w\sqrt\gamma)\cap\sigma_\gamma)/\gamma < F_\rho(u)+\varepsilon'
\]
holds a.a.s.\ as $\gamma\rightarrow\infty$.
\end{lemma}
\begin{proof}
Choose any $\varepsilon>0$ smaller than $\varepsilon'/2$ and apply \cref{lm:great}. Let $d:(0,1)\rightarrow{\mathbb R}_{>0}$ be the function defined in \cref{lm:greatwvarieslittle}. Consider any $P\in{\mathcal P}_\iota$ and any $w\in{\mathcal U}(\Omega)$ such that $\norm{w-u}_\Omega<d(\iota)$, and let $r_P:=\diam w(P)$ and $\rho_P:=\sup_{S_\iota\cap P}\rho$. Define $u^P_{\rm linear}\in{\mathcal U}_{r_P}(\interior P)$ by
\[
u^P_{\rm linear}(x,y) =\frac{r_P}{2\iota c_P}\bigl(\tilde{u}^P_x(x-x_P)+\tilde{u}^P_y(y-y_P)\bigr).
\]
Since $\sigma_\gamma\cap P\cap S_\iota$ is a subset of a Poisson point process on $P$ with homogeneous intensity $\gamma\rho_P$, \cref{lm:parallelogrambounds} yields that, a.a.s.\ as $\gamma\rightarrow\infty$,
\[
\tfrac{1}{\gamma}\# (D(w\sqrt\gamma)\cap\sigma_\gamma\cap P\cap S_\iota) \le F_{\rho_P}(u^P_{\rm linear})+3\iota\mu(P)(\rho_P+1)=:\text{RHS}.
\] As in the proof of the upper semicontinuity, we obtain \[ \text{RHS} < \norm{L(\rho,u_xu_y)}_P + \mu(P)o_\iota(1), \] where $o_\iota(1)$ is independent of $P$. Summing over all $P$ in ${\mathcal P}_\iota$ yields that, a.a.s.\ as $\gamma\rightarrow\infty$, for any $w\in{\mathcal U}(\Omega)$ such that $\norm{w-u}_\Omega<d(\iota)$, \[ \tfrac{1}{\gamma}\# (D(w\sqrt\gamma)\cap\sigma_\gamma\cap S_\iota) -\norm{L(\rho,u_xu_y)}_{\cup{\mathcal P}_\iota} <o_\iota(1)\mu(\cup{\mathcal P}_\iota)=o_\iota(1). \] By the law of large numbers, a.a.s.\ as $\gamma\rightarrow\infty$ we have $\frac{1}{\gamma}\#(\sigma_\gamma\setminus S_\iota)< \norm{\rho}_{\Omega\setminus S_\iota}+\varepsilon$ which is bounded by $2\varepsilon+o_\iota(1)$ by \cref{lm:greatparallelogramscovereverything}, so a.a.s. \[ \sup_{w\in{\mathcal U}(\Omega):\ \norm{w-u}_\Omega<d(\iota)}\tfrac{1}{\gamma}\# (D(w\sqrt\gamma)\cap\sigma_\gamma) -\norm{L(\rho,u_xu_y)}_{\cup{\mathcal P}_\iota} <2\varepsilon+o_\iota(1). \] Since $\varepsilon<\varepsilon'/2$, the lemma now follows. \end{proof} \section{Compactness, concavity and existence of maximizers}\label{sec:mainFproof} In this section we put probabilistic matters aside and concern ourselves with the functional $F_\rho$ with the goal to prove our first main theorem, \cref{th:mainF}. First, we show that doubly increasing functions can be extended to larger domains in a natural way. \begin{lemma}\label{lm:increasingextension} Let $A\subseteq B\subseteq{\mathbb R}^2$. For any $u\in{\mathcal U}(A)$ there is a $w\in{\mathcal U}(B)$ such that the restriction of $w$ to $A$ is $u$ and the images $u(A)$ and $w(B)$ have the same closure. \end{lemma} \begin{proof} For each $(x,y)\in B$, let $P(x,y):=\{(x',y')\in A\,:\,x'\le x,\,y'\le y\}$. Define $w$ by letting $w(x,y):=\sup_{P(x,y)} u$ if $P(x,y)$ is nonempty, and $w(x,y):=\inf_A u$ if $P(x,y)$ is empty. It is straightforward to verify that $w$ is doubly increasing. \end{proof} Our proof of the existence of maximizers will need two key ingredients. The first one is the semicontinuity of $F_\rho$ (\cref{pr:semicontinuity}) and the other one is the following result about the topology of the space of doubly increasing functions. \begin{prop}\label{pr:compactness} If $\Omega\subseteq{\mathbb R}^2$ is open, then ${\mathcal U}_{h,r}(\Omega)$ is a compact subset of $L^1(\Omega)$. \end{prop} \begin{proof} Let $\{u_n\}_{n=1}^\infty$ be a sequence of elements in ${\mathcal U}_{h,r}(\Omega)$. We need to show that there is a convergent subsequence. Let $Q=\{q_1,q_2,\dotsc\}$ be a countable dense subset of $\Omega$. We define a sequence $S_1,S_2,\dotsc$ of subsequences of $\{u_n\}_{n=1}^\infty$ recursively as follows. First, let $S_1$ be the original sequence $\{u_n\}_{n=1}^\infty$. Then, for $n=1,2,\dotsc$, let $S_{n+1}$ be a subsequence of $S_n$ that converges at the point $q_n$. This is possible since $[h,h+r]$ is a compact set. Finally, construct another subsequence $\{w_n\}_{n=1}^\infty$ of $\{u_n\}_{n=1}^\infty$ by letting $w_n$ be the $n$th element of $S_n$. Clearly, $\{w_n\}$ converges at each point of $Q$. By \cref{lm:increasingextension}, we can choose a $w\in{\mathcal U}_{h,r}(\Omega)$ such that $\lim_{n\rightarrow\infty}w_n(q)=w(q)$ for any $q\in Q$. We claim that $\lim_{n\rightarrow\infty}w_n(p)=w(p)$ for any continuity point $p$ of $w$. Take any $\varepsilon>0$. Since $w$ is continuous at $p$, we can pick a $\delta>0$ such that $\abs{w(p')-w(p)}<\varepsilon/2$ for any $p'$ with $\abs{p'-p}<\delta$. 
Let $A^-:=\{p'\in\Omega\,:\,\abs{p'-p}<\delta,\ p'\ \text{strictly south-west of}\ p\}$ and $A^+:=\{p'\in\Omega\,:\,\abs{p'-p}<\delta,\ p'\ \text{strictly north-east of}\ p\}$. Since $\Omega$ is open, $A^-$ and $A^+$ are both open and nonempty, so there are $q^-\in Q\cap A^-$ and $q^+\in Q\cap A^+$. For all sufficiently large $n$, we have $\abs{w_n(q^-)-w(q^-)}<\varepsilon/2$ and $\abs{w_n(q^+)-w(q^+)}<\varepsilon/2$, and hence
\begin{multline*}
w(p)-\varepsilon\le w(q^-)-\varepsilon/2\le w_n(q^-)\le w_n(p)\\
\le w_n(q^+)\le w(q^+)+\varepsilon/2 \le w(p)+\varepsilon.
\end{multline*}
Since $\varepsilon$ was chosen arbitrarily, we conclude that $\{w_n\}_{n=1}^\infty$ converges to $w$ at any continuity point of $w$. By \cref{lm:monotonic}, $w$ is continuous almost everywhere, so by the bounded convergence theorem, $w_n$ converges to $w$ in the $L^1(\Omega)$-norm.
\end{proof}
Before we prove \cref{th:mainF}, we need a technical lemma.
\begin{lemma}\label{lm:concavitycomposition}
Let $A$ be a convex subset of a vector space and let $B\subseteq{\mathbb R}$. Let $\psi:A\rightarrow B$ be a concave function and let $\phi:B\rightarrow{\mathbb R}$ be an increasing concave function. Then $\phi\circ\psi:A\rightarrow{\mathbb R}$ is concave.
\end{lemma}
\begin{proof}
Since $\psi$ is concave, for any $t\in[0,1]$ we have
\[
\psi((1-t)x+ty)\ge (1-t)\psi(x)+t\psi(y)
\]
and thus, since $\phi$ is increasing,
\[
\phi(\psi((1-t)x+ty))\ge\phi((1-t)\psi(x)+t\psi(y)),
\]
which, since $\phi$ is concave, is at least
\[
(1-t)\phi(\psi(x))+t\phi(\psi(y)).
\]
Thus, $\phi\circ\psi$ is concave.
\end{proof}
Finally, we have all we need to prove our first main theorem.
\mainF
\begin{proof}
\reflocal{th:mainFmaximizer} By \cref{pr:semicontinuity}, $F_\rho$ is upper semicontinuous in $L^1(\Omega)$, and by \cref{pr:compactness}, ${\mathcal U}_{0,r}(\Omega)$ is compact, so $F_\rho$ attains its maximum there.
\smallskip
\reflocal{th:mainFrhoconcave} For any point $(x,y)\in\Omega$, let ${\mathcal U}_{(x,y)}$ be the set of $u$ in ${\mathcal U}(\Omega)$ such that $u_x(x,y)$ and $u_y(x,y)$ exist, and define the function $\chi_{(x,y)}:{\mathcal U}_{(x,y)}\rightarrow{\mathbb R}^2$ by $\chi_{(x,y)}(u)=(u_x(x,y),u_y(x,y))$. Clearly, $\chi_{(x,y)}$ is a linear function. Let $\supp\rho=\{(x,y)\in\Omega\,:\,\rho(x,y)>0\}$. For any $(x,y)\in\supp\rho$, define $\psi_{(x,y)}\,:\,{\mathbb R}_{\ge0}^2\rightarrow{\mathbb R}_{\ge0}$ by $\psi_{(x,y)}(r,s)=\sqrt{2rs/\rho(x,y)}$. We can easily check that $\psi_{(x,y)}$ is concave; its Hessian is negative semidefinite on ${\mathbb R}_{>0}^2$, since the diagonal entries are negative and the determinant vanishes. Also, $\Phi$ is concave and increasing by \cref{pr:phiproperties}, and hence $\Phi\circ\psi_{(x,y)}$ is concave by \cref{lm:concavitycomposition}. Since $\chi_{(x,y)}$ is linear, $\Phi\circ\psi_{(x,y)}\circ\chi_{(x,y)}$ is concave. By \cref{lm:monotonicdifferentiable}, for any $u\in{\mathcal U}(\Omega)$, $\chi_{(x,y)}$ is defined at $u$ for almost every $(x,y)\in\Omega$, so the integral
\[
\int_{\supp\rho}\rho(x,y)(\Phi\circ\psi_{(x,y)}\circ\chi_{(x,y)})(u)\,d\mu
\]
is well defined and concave as a function of $u$. Since this integral equals $F_\rho(u)$, it follows that $F_\rho$ is concave.
\smallskip
\reflocal{th:mainFmaxcontinuousincreasingconcave} That $\functional_{\rm max}$ is increasing follows from the fact that ${\mathcal U}_{r_1}(\Omega)\subseteq{\mathcal U}_{r_2}(\Omega)$ whenever $r_1\le r_2$. To show that $\functional_{\rm max}$ is concave, we must check that $\functional_{\rm max}((1-t)r_1+tr_2)\ge (1-t)\functional_{\rm max}(r_1)+t\functional_{\rm max}(r_2)$ for any $r_1,r_2>0$ and any $t\in(0,1)$.
By \reflocal{th:mainFmaximizer}, there are $u^{(1)}\in{\mathcal U}_{0,r_1}(\Omega)$ and $u^{(2)}\in{\mathcal U}_{0,r_2}(\Omega)$ such that $F_\rho(u^{(1)})=\functional_{\rm max}(r_1)$ and $F_\rho(u^{(2)})=\functional_{\rm max}(r_2)$. Let $u:=(1-t)u^{(1)}+tu^{(2)}$. Clearly, $u\in{\mathcal U}_{0,(1-t)r_1+tr_2}(\Omega)$. By \reflocal{th:mainFrhoconcave}, $F_\rho$ is concave, so
\begin{multline*}
\functional_{\rm max}((1-t)r_1+tr_2)\ge F_\rho(u)=F_\rho((1-t)u^{(1)}+tu^{(2)}) \\
\ge(1-t)F_\rho(u^{(1)}) + tF_\rho(u^{(2)}) =(1-t)\functional_{\rm max}(r_1) + t\functional_{\rm max}(r_2).
\end{multline*}
This shows that $\functional_{\rm max}$ is concave. A concave function is automatically continuous on any open interval, so it remains only to show that $\functional_{\rm max}$ is continuous at zero. Choose any $\varepsilon>0$. Since $\int_\Omega\rho\,d\mu$ is finite, there is a measurable subset $S$ of $\Omega$ and positive constants $\rho_0,\rho_1$ such that $\int_{\Omega\setminus S} \rho\,d\mu<\varepsilon$ and $\rho_0<\rho<\rho_1$ on $S$. There is also a $c>0$ such that $\int_{\Omega\setminus[-c,c]^2}\rho\,d\mu<\varepsilon$. Choose any $r>0$ and any $u\in{\mathcal U}_r(\Omega)$. By \cref{lm:increasingextension}, there is a $w\in{\mathcal U}_r({\mathbb R}^2)$ that coincides with $u$ on $\Omega$, and by Tonelli's theorem,
\[
\int_{\Omega\cap[-c,c]^2}u_x\,d\mu \le\int_{[-c,c]^2}w_x\,d\mu =\int_{-c}^c\left(\int_{-c}^c w_x(x,y)\,dx\right)dy.
\]
Now, by Lebesgue's theorem for increasing functions in one dimension,
\[
\int_{-c}^c w_x(x,y)\,dx\le w(c,y)-w(-c,y),
\]
which is at most $r$ since $w\in{\mathcal U}_r({\mathbb R}^2)$. Thus, $\int_{\Omega\cap[-c,c]^2}u_x\,d\mu\le \int_{-c}^cr\,dy=2cr$ and, analogously, $\int_{\Omega\cap[-c,c]^2}u_y\,d\mu\le 2cr$. Choose any $\delta>0$ and let $T$ be the subset of $S\cap[-c,c]^2$ where $\sqrt{2u_xu_y/\rho}\ge\delta$. Since $\rho>\rho_0$ on $S$, on $T$ we have $u_xu_y\ge \delta^2\rho/2>\delta^2\rho_0/2$, so $T\subseteq T_1\cup T_2$, where $T_1$ is the subset of $S\cap[-c,c]^2$ where $u_x\ge \delta\sqrt{\rho_0/2}$ and $T_2$ is the subset of $S\cap[-c,c]^2$ where $u_y\ge \delta\sqrt{\rho_0/2}$. By Markov's inequality,
\[
\mu(T_1)\le\frac1{\delta\sqrt{\rho_0/2}}\int_{S\cap[-c,c]^2} u_x\,d\mu \le \frac{2cr}{\delta\sqrt{\rho_0/2}},
\]
and analogously for $T_2$, so $\mu(T)\le\mu(T_1)+\mu(T_2)\le 4cr/(\delta\sqrt{\rho_0/2})$. Combining all of the above, and bearing in mind that $L(\rho,u_xu_y)\le\rho$ by \cref{lm:Lcontinuous}, we obtain
\begin{align*}
F_\rho(u)&= \int_\Omega L(\rho,u_xu_y)\,d\mu \\
&\le \int\limits_{\Omega\setminus S} \rho\,d\mu\ + \!\!\!\int\limits_{\Omega\setminus [-c,c]^2}\!\!\!\rho\,d\mu\ + \int\limits_T \rho\,d\mu \ +\!\!\!\!\!\int\limits_{(S\cap[-c,c]^2)\setminus T}\!\!\!\!\!\rho\,\Phi(\sqrt{2u_xu_y/\rho})\,d\mu \\
&<2\varepsilon + \mu(T)\rho_1 + \mu((S\cap[-c,c]^2)\setminus T)\rho_1\Phi(\delta) \\
&<2\varepsilon + \frac{4cr\rho_1}{\delta\sqrt{\rho_0/2}} + \mu(\Omega)\rho_1\Phi(\delta).
\end{align*}
Since $\varepsilon$, $r$ and $\delta$ were chosen freely above, we have shown that
\[
\functional_{\rm max}(r)\le2\varepsilon + \frac{4cr\rho_1}{\delta\sqrt{\rho_0/2}} + \mu(\Omega)\rho_1\Phi(\delta)
\]
for any $\varepsilon,\delta,r>0$. Here, $c$, $\rho_0$ and $\rho_1$ depend on $\varepsilon$ but not on $r$ or $\delta$. By \cref{pr:phiproperties}, $\Phi$ is continuous, so for any $\varepsilon>0$ we can choose $\delta>0$ such that $\mu(\Omega)\rho_1\Phi(\delta)<\varepsilon$. After that, we can choose $r>0$ such that $4cr\rho_1/(\delta\sqrt{\rho_0/2})<\varepsilon$.
It follows that $\functional_{\rm max}(r)<4\varepsilon$, and since $\functional_{\rm max}$ is an increasing function, we conclude that $\lim_{r\rightarrow0+}\functional_{\rm max}(r)=0$.
\end{proof}
\section{A gluing lemma and our second main theorem}
\label{sec:mainproof}
To prove our second main theorem, \cref{th:main}, we need one final lemma. We know that we can find large unions of decreasing subsets within each local parallelogram. The following lemma glues those unions together to form a global union of decreasing subsets.
\begin{lemma}\label{lm:notworsethanfunctional}
Let $\{\sigma_\gamma\}_{\gamma>0}$ be Poisson point processes on $\Omega$ with intensity functions $\gamma\rho$, and let $r\ge0$. For any $u\in {\mathcal U}_{0,r}(\Omega)$ and any $\varepsilon'>0$, there is a family $\{w_\gamma\in{\mathcal U}_{0,r}(\Omega)\}_{\gamma>0}$ (dependent on $\{\sigma_\gamma\}_{\gamma>0}$) such that $w_\gamma\rightarrow u$ and, a.a.s.\ as $\gamma\rightarrow\infty$,
\[
\# (D(w_\gamma\sqrt\gamma)\cap\sigma_\gamma)/\gamma > F_\rho(u)-\varepsilon'.
\]
\end{lemma}
\begin{proof}
Choose any $\varepsilon>0$ smaller than $\varepsilon'$ and apply \cref{lm:great}. We will only consider $\iota$ in the interval $(0,1/2)$. For each $P\in {\mathcal P}_\iota$, let $P'$ denote an open parallelogram with the same center as $P$ but a factor $1-2\iota$ as wide and as high. Let $\rho_P:=\inf_{S_\iota\cap P'}\rho$ and let $\bar{u}_P\in {\mathcal U}(P)$ be defined by
\[
\bar{u}_P(x,y):=u(x_P,y_P)+u^P_x(x-x_P)+u^P_y(y-y_P)
\]
if $P$ is well behaved and $\bar{u}_P(x,y):=u(x,y)$ otherwise. Let $\tau_P^{(\gamma)}$ be a Poisson point process on $P'\setminus S_\iota$ with homogeneous intensity $\gamma\rho_P$. Then $(\sigma_\gamma\cap P')\cup\tau_P^{(\gamma)}$ is a superset of a Poisson point process $\tilde{\sigma}_\gamma$ on $P'$ with homogeneous intensity $\gamma\rho_P$. Let $r_P:=\diam\bar{u}_P(P')=2(1-2\iota)\iota c_P$. By \cref{lm:parallelogrambounds}, for any well-behaved $P\in{\mathcal P}_\iota$ it holds a.a.s.\ as $\gamma$ tends to infinity that there is a $w_P^{(\gamma)}\in {\mathcal U}_{u(x_P,y_P)-\frac{r_P}{2},r_P}(P')$ such that
\begin{equation}\label{eq:tildesigma}
\tfrac{1}{\gamma}\# (D(w_P^{(\gamma)}\sqrt\gamma)\cap\tilde{\sigma}_\gamma)/\mu(P') \ge L(\rho_P,u^P_xu^P_y)-3(\rho_P+1)\iota.
\end{equation}
Since $\tilde{\sigma}_\gamma\subseteq(\sigma_\gamma\cap P')\cup\tau^{(\gamma)}_P$, we have
\begin{multline}\label{eq:tildesigmatau}
\# (D(w_P^{(\gamma)}\sqrt\gamma)\cap\tilde{\sigma}_\gamma) \le\#(D(w_P^{(\gamma)}\sqrt\gamma)\cap((\sigma_\gamma\cap P')\cup\tau_P^{(\gamma)}))\\
\le\#(D(w_P^{(\gamma)}\sqrt\gamma)\cap\sigma_\gamma\cap P')+\#\tau_P^{(\gamma)},
\end{multline}
and by the law of large numbers,
\begin{equation}\label{eq:tau}
\#\tau_P^{(\gamma)} \le \gamma(\rho_P+1)\mu(P'\setminus S_\iota)
\end{equation}
a.a.s.\ as $\gamma\rightarrow\infty$. Combining \cref{eq:tildesigma,eq:tildesigmatau,eq:tau}, we obtain
\begin{multline*}
\bigl(\tfrac{1}{\gamma}\# (D(w_P^{(\gamma)}\sqrt\gamma)\cap\sigma_\gamma\cap P') +(\rho_P+1)\mu(P'\setminus S_\iota)\bigr)/\mu(P')\\
\ge L(\rho_P,u^P_xu^P_y)-3(\rho_P+1)\iota.
\end{multline*} By \cref{lm:greatrhoconstant} and the fact that $L$ is uniformly continuous (\cref{lm:Lcontinuous}) on the bounded set $\{(\rho(x,y),u_x(x,y)u_y(x,y))\,:\,(x,y)\in \bigcup_\iota S_\iota\}$, we obtain \begin{multline}\label{eq:Pprime} \bigl(\tfrac{1}{\gamma}\# (D(w_P^{(\gamma)}\sqrt\gamma)\cap\sigma_\gamma\cap P') +(\rho_P+1)\mu(P'\setminus S_\iota)\bigr)/\mu(P')\\ > L(\rho(x_P,y_P),u^P_xu^P_y)-o_\iota(1), \end{multline} where $o_\iota(1)$ is independent of $P$. But the inequality in \cref{eq:Pprime} holds also (even deterministically) for any $P\in{\mathcal P}_\iota$ that is not well behaved by the fact that $\rho$, $u_x$ and $u_y$ are bounded on $\bigcup_\iota S_\iota$ and $u^P_x$ or $u^P_y$ is at most $\iota^3$ if $P$ is not well behaved together with the fact that $L$ is continuous and $L(\rho,0)=0$ by \cref{lm:Lcontinuous}. Since $w^{(\gamma)}_P\in{\mathcal U}_{u(x_P,y_P)-\frac{r_P}{2},r_P}(P')$, we have \begin{equation}\label{eq:wclosetouxpyp} \sup_{P'}\,\abs{w_P^{(\gamma)}-u(x_P,y_P)}\le (1-2\iota)\iota c_P. \end{equation} Let $\tilde{{\mathcal P}}_\iota\subseteq{\mathcal P}_\iota$ be the set of well-behaved parallelograms in ${\mathcal P}_\iota$, and let $w_\gamma\in {\mathcal U}(\bigcup_{P\in\tilde{{\mathcal P}}_\iota}P'\cup(\Omega\setminus\interior\cup\tilde{{\mathcal P}}_\iota))$ be defined by $w_\gamma(x,y):=w_P^{(\gamma)}(x,y)$ if $(x,y)\in P'$ where $P\in\tilde{{\mathcal P}}_\iota$ and $w_\gamma(x,y):=u(x,y)$ if $(x,y)\in\Omega\setminus\interior\cup\tilde{{\mathcal P}}_\iota$. We claim that $w_\gamma$ is doubly increasing. To show this, it suffices to check the inequality condition for any pair of points $(x_1,y_1)$ and $(x_2,y_2)$ where \begin{itemize} \item $(x_1,y_1)\le(x_2,y_2)$ or $(x_1,y_1)\ge(x_2,y_2)$, and \item $(x_1,y_1)$ belongs to $P'$ for some $P\in\tilde{{\mathcal P}}_\iota$ and $(x_2,y_2)$ lies on the boundary of $P$. \end{itemize} \Cref{fig:wellbehavedparallelograms} shows an example. \begin{figure} \begin{tikzpicture}[scale=0.8] \draw[fill=lightgray] (0,0) -- (-2,4) -- (-1,6) -- (1,2) -- cycle; \draw[thick,fill=white] (-0.1,0.6) -- (-1.7,3.8) -- (-0.9,5.4) -- (0.7,2.2) -- cycle; \draw[fill=lightgray] (4,0) -- (0,6) -- (1,7.5) -- (5,1.5) -- cycle; \draw[thick,fill=white] (3.7,0.75) -- (0.5,5.55) -- (1.3,6.75) -- (4.5,1.95) -- cycle; \draw[fill=lightgray] (3,-2) -- (1,0) -- (2,1) -- (4,-1) -- cycle; \draw[thick,fill=white] (2.9,-1.7) -- (1.3,-0.1) -- (2.1,0.7) -- (3.7,-0.9) -- cycle; \draw[fill] (-0.3,3.2) circle(2pt) (0.1,3.8) circle(2pt); \node at (-0.4,2.9) {$\scriptstyle (x_1,y_1)$}; \node at (0.9,3.6) {$\scriptstyle (x_2,y_2)$}; \end{tikzpicture} \caption{The situation in the proof of \cref{lm:notworsethanfunctional}. We see three disjoint well-behaved parallelograms and their slightly smaller primed counterparts. The shaded area is initially excluded from the domain of $w_\gamma$.} \label{fig:wellbehavedparallelograms} \end{figure} We have \begin{equation}\label{eq:ubar} s\cdot\bigl(\bar{u}_P(x_2,y_2)-u(x_P,y_P)\bigr)=\iota c_P, \end{equation} where $s$ is $+1$ if $(x_1,y_1)\le(x_2,y_2)$ and $-1$ if $(x_1,y_1)\ge(x_2,y_2)$. 
Also,
\begin{equation}\label{eq:wubar}
\abs{w_\gamma(x_1,y_1)-u(x_P,y_P)}\le(1-2\iota)\iota c_P
\end{equation}
by \cref{eq:wclosetouxpyp}, and
\begin{multline}\label{wubartwo}
\abs{w_\gamma(x_2,y_2)-\bar{u}_P(x_2,y_2)}=\abs{u(x_2,y_2)-\bar{u}_P(x_2,y_2)}\\
\le(\abs{x_2-x_P}+\abs{y_2-y_P})\iota^5 \le 2\iota^2 c_P,
\end{multline}
where the first inequality follows from \cref{lm:greatulinear} and the last inequality follows from \cref{lm:uiparallelogrambound} together with the fact that $\iota<1$. Combining \cref{eq:ubar,eq:wubar,wubartwo} and the triangle inequality, we obtain
\begin{multline*}
s\cdot\bigl(w_\gamma(x_2,y_2)-w_\gamma(x_1,y_1)\bigr) \ge\\
s\cdot\bigl(\bar{u}_P(x_2,y_2)-u(x_P,y_P)\bigr) - \abs{w_\gamma(x_1,y_1)-u(x_P,y_P)} - \abs{w_\gamma(x_2,y_2)-\bar{u}_P(x_2,y_2)}\\
\ge \iota c_P - (1-2\iota)\iota c_P - 2\iota^2 c_P = 0,
\end{multline*}
and we have shown that $w_\gamma$ is doubly increasing. For any $P\in\tilde{\mathcal P}_\iota$, inside $P'$ we have
\begin{multline*}
\abs{w_P^{(\gamma)}(x,y)-u(x,y)} \le\abs{w_P^{(\gamma)}(x,y)-u(x_P,y_P)} + \abs{u(x,y)-u(x_P,y_P)}\\
\le (1-2\iota)\iota c_P + (1+5\iota)\iota c_P,
\end{multline*}
where the first inequality is the triangle inequality and the second inequality follows from \cref{eq:wclosetouxpyp,lm:greatuvarieslittle}. Since $c_P<1$ by \cref{lm:greatmisc}, it follows that
\begin{equation}\label{eq:wclosetou}
\sup_{P\in\tilde{{\mathcal P}}_\iota}\sup_{(x,y)\in P'}\abs{w_P^{(\gamma)}(x,y)-u(x,y)}\rightarrow0
\end{equation}
as $\iota\rightarrow0$. By \cref{lm:increasingextension}, $w_\gamma$ can be extended to a doubly increasing function on all of $\Omega$. Let us choose such an extension and denote it by the same symbol $w_\gamma$. Since $w_\gamma$ coincides with $u$ outside the interiors of the well-behaved parallelograms, $w_\gamma\in{\mathcal U}_{0,r}(\Omega)$. We have
\begin{multline*}
\norm{w_\gamma-u}_\Omega = \sum_{P\in\tilde{{\mathcal P}}_\iota}\norm{w_\gamma-u}_{P'} +\sum_{P\in\tilde{{\mathcal P}}_\iota}\norm{w_\gamma-u}_{P\setminus P'}\\
\le \sum_{P\in\tilde{{\mathcal P}}_\iota}\mu(P')\sup_{(x,y)\in P'}\abs{w_P^{(\gamma)}(x,y)-u(x,y)} + r\sum_{P\in\tilde{{\mathcal P}}_\iota}\mu(P\setminus P') =o_\iota(1)
\end{multline*}
by \cref{eq:wclosetou}. Also, by \cref{eq:Pprime},
\begin{multline*}
\tfrac{1}{\gamma}\# (D(w_\gamma\sqrt\gamma) \cap \sigma_\gamma) \ge \sum_{P\in{\mathcal P}_\iota}\tfrac{1}{\gamma}\#(D(w_\gamma\sqrt\gamma)\cap\sigma_\gamma\cap P') \\
\ge-(1+\sup_{S_\iota}\rho)\mu(\cup{\mathcal P}_\iota\setminus S_\iota) +\sum_{P\in{\mathcal P}_\iota}\bigl(L(\rho(x_P,y_P),u^P_xu^P_y)-o_\iota(1)\bigr)\mu(P').
\end{multline*}
By \cref{lm:great}, parts \reflocal{lm:greatScoversparallelograms} and \reflocal{lm:greatLconstant}, this is greater than
\begin{multline*}
-o_\iota(1)+\sum_{P\in{\mathcal P}_\iota}\left(\norm{L(\rho,u_xu_y)}_P\frac{\mu(P')}{\mu(P)} - o_\iota(1)\mu(P')\right)\\
\ge(1-2\iota)\norm{L(\rho,u_xu_y)}_{\cup{\mathcal P}_\iota}-o_\iota(1)
\end{multline*}
which is greater than $\norm{L(\rho,u_xu_y)}_\Omega - \varepsilon - o_\iota(1)$, by \cref{lm:greatparallelogramscovereverything} and the fact that $L(\rho,u_xu_y)\le\rho$ by \cref{lm:Lcontinuous}. Since $\varepsilon<\varepsilon'$, we are done.
\end{proof}
Finally, we are ready to prove our second main theorem.
\main
\begin{proof}
Take any $\varepsilon>0$ and $r\ge0$.
Let $M_\varepsilon$ denote the set of $u\in{\mathcal U}_{0,r}(\Omega)$ such that $\norm{u-u_{\rm max}}_\Omega<\varepsilon$ for some $u_{\rm max}\in{\mathcal U}_{0,r}(\Omega)$ with $F_\rho(u_{\rm max})=\functional_{\rm max}(r)$. Let $M_\varepsilon^c={\mathcal U}_{0,r}(\Omega)\setminus M_\varepsilon$ denote the complementary set. \begin{claim}\label{cl:supissmall} $\sup_{u\in M_\varepsilon^c} F_\rho(u)<\functional_{\rm max}(r)$. \end{claim} Suppose there is a sequence $u_1,u_2,\dotsc \in M_\varepsilon^c$ with $F_\rho(u_i)\rightarrow \functional_{\rm max}(r)$. By \cref{pr:compactness}, $M_\varepsilon^c$ is compact, so there is a convergent subsequence $u_{i_1},u_{i_2},\dotsc\rightarrow u\in M_\varepsilon^c$. Since $F_\rho$ is upper semicontinuous (\cref{pr:semicontinuity}), $F_\rho(u)\ge \limsup F_\rho(u_{i_j})=\functional_{\rm max}(r)$, which is a contradiction. This proves \cref{cl:supissmall}. By the claim above we can choose an $\varepsilon'>0$ such that \begin{equation}\label{eq:epsprim} \sup_{u\in M_\varepsilon^c}F_\rho(u) < \functional_{\rm max}(r) - 3\varepsilon'. \end{equation} Without loss of generality, we assume that $\varepsilon'<\varepsilon$. Let $\{\sigma_\gamma\}_{\gamma>0}$ be Poisson point processes on $\Omega$ with intensity functions $\gamma\rho$ such that $\tau_\gamma$ approaches $\sigma_\gamma$ as $\gamma\rightarrow\infty$. \begin{claim}\label{cl:anywbad} A.a.s.\ as $\gamma\rightarrow\infty$, any $w\in M_\varepsilon^c$ satisfies the inequality \[ \#(D(w\sqrt\gamma)\cap\sigma_\gamma)/\gamma < \functional_{\rm max}(r)-2\varepsilon'. \] \end{claim} \begin{claim}\label{cl:anyIbad} A.a.s.\ as $\gamma\rightarrow\infty$, any $\lfloor r\sqrt{\gamma}\rfloor$-decreasing subset $P$ of $\tau_\gamma$ such that $\kappa_P/\sqrt{\gamma}\in M_\varepsilon^c$ satisfies the inequality $\# P/\gamma < \functional_{\rm max}(r)-\varepsilon'$. \end{claim} \begin{claim}\label{cl:goodI} A.a.s.\ as $\gamma\rightarrow\infty$, there is a $\lfloor r\sqrt{\gamma}\rfloor$-decreasing subset $P$ of $\tau_\gamma$ such that $\# P/\gamma > \functional_{\rm max}(r)-\varepsilon'$. \end{claim} For each $u\in{\mathcal U}_{0,r}(\Omega)$, choose $\delta_u$ according to \cref{lm:notbetterthanfunctional}. The open balls $\{B_{\delta_u}(u)\,:\,u\in M_\varepsilon^c\}$ cover $M_\varepsilon^c$, which is compact by \cref{pr:compactness}, so there is a finite subcover $\{B_{\delta_u}(u)\,:\,u\in A\subset M_\varepsilon^c\}$, $A$ finite. Now \cref{cl:anywbad} follows from \cref{lm:notbetterthanfunctional} (and the choice of $\delta_u$) together with \cref{eq:epsprim}. To show \cref{cl:anyIbad}, consider a $\lfloor r\sqrt{\gamma}\rfloor$-decreasing subset $P$ of $\tau_\gamma$. We have $\# P/\gamma\le \#(P\cap\sigma_\gamma)/\gamma + \#(\tau_\gamma\setminus\sigma_\gamma)/\gamma$, which is smaller than $\frac1\gamma\#(P\cap\sigma_\gamma) + \varepsilon'$ a.a.s.\ as $\gamma\rightarrow\infty$. By \cref{lm:IandkappaIkappaPisP}, $D(\kappa_P)=P$, so we obtain the inequality $\# P/\gamma<\frac1\gamma\#(D(\kappa_P)\cap\sigma_\gamma)+\varepsilon'$. Now \cref{cl:anyIbad} follows from \cref{cl:anywbad} by putting $w=\kappa_P/\sqrt{\gamma}$. By \cref{th:mainFmaximizer}, there is a $u_{\rm max}\in{\mathcal U}_{0,r}(\Omega)$ with $F_\rho(u_{\rm max})=\functional_{\rm max}(r)$. 
By \cref{lm:notworsethanfunctional}, a.a.s.\ as $\gamma\rightarrow\infty$, there is a $w_\gamma\in{\mathcal U}_{0,r}(\Omega)$ such that $\#(D(w_\gamma\sqrt\gamma)\cap\sigma_\gamma)/\gamma > \functional_{\rm max}(r)-\frac{\varepsilon'}2$ and hence $\#(D(w_\gamma\sqrt\gamma)\cap\tau_\gamma)/\gamma > \functional_{\rm max}(r)-\varepsilon'$ since $\#(\sigma_\gamma\setminus\tau_\gamma)/\gamma$ tends to zero in probability as $\gamma\rightarrow\infty$. By \cref{lm:IandkappaIudecreasing}, $D(w_\gamma\sqrt\gamma)\cap\tau_\gamma$ is a $\lfloor r\sqrt{\gamma}\rfloor$-decreasing subset of $\tau_\gamma$, and \cref{cl:goodI} follows. Now \reflocal{th:mainlimitsurface} follows from \cref{cl:anyIbad,cl:goodI}. \begin{claim}\label{cl:anywbadagain} A.a.s.\ as $\gamma\rightarrow\infty$, any $w\in{\mathcal U}_{0,r}(\Omega)$ satisfies the inequality $\#(D(w\sqrt\gamma)\cap\sigma_\gamma)/\gamma < \functional_{\rm max}(r)+\varepsilon'$. \end{claim} \begin{claim}\label{cl:anyIbadagain} A.a.s.\ as $\gamma\rightarrow\infty$, any $\lfloor r\sqrt{\gamma}\rfloor$-decreasing subset $P$ of $\tau_\gamma$ satisfies the inequality $\# P/\gamma < \functional_{\rm max}(r)+2\varepsilon'$. \end{claim} There is a finite subcover $\{B_{\delta_u}(u)\,:\,u\in A'\subset {\mathcal U}_{0,r}(\Omega)\}$ of ${\mathcal U}_{0,r}(\Omega)$, so \cref{cl:anywbadagain} follows from \cref{lm:notbetterthanfunctional}. By \cref{pr:decreasingincreasingrelation}, for any $\lfloor r\sqrt{\gamma}\rfloor$-decreasing set $P$, we have $\kappa_P/\sqrt{\gamma}\in{\mathcal U}_{0,r}(\Omega)$. With this in mind, \cref{cl:anyIbadagain} follows from \cref{cl:anywbadagain} the same way as \cref{cl:anyIbad} followed from \cref{cl:anywbad}. Finally, \reflocal{th:mainlimittoFmax} follows from \cref{cl:goodI,cl:anyIbadagain} since $\varepsilon>0$ was chosen arbitrarily and $\varepsilon'<\varepsilon$. \end{proof} \limitshape \begin{proof} By \cref{pr:curtisgreene}, the maximal size $\Lambda^{(\gamma)}$ of a $\lfloor r\sqrt\gamma\rfloor$-decreasing subset of $\tau_\gamma$ is $\Lambda^{(\gamma)}=\sum_{i=1}^{\lfloor r\sqrt\gamma\rfloor}\lambda^{(\gamma)}_i$, and by \cref{th:mainlimittoFmax}, $\Lambda^{(\gamma)}/\gamma\rightarrow \functional_{\rm max}(r)$ in probability as $\gamma\rightarrow\infty$. Now the corollary follows from \cref{lm:Lambdatolambda} with $a(\lambda)=\sqrt\gamma$, $b(\gamma)=1/\gamma$ and $G=\functional_{\rm max}$. \end{proof} \rhombuslimitshapeexists \begin{proof} By \cref{pr:parallelogrammaximizer}, for any $r\ge0$, the maximum value of $F_1$ on ${\mathcal U}_r(\Omega)$ is $\functional_{\rm max}(r)=\Phi(r)$, so $\functional_{\rm max}'(r)=\Phi'(r)$ whenever the derivative exists. Now the theorem follows from \cref{cor:limitshape}. \end{proof} \section{Essentially unique maximizers}\label{sec:uniquemaximizers} In this section we show that, under reasonable assumptions, the maximizer of the functional $F_\rho$ is essentially unique. \begin{defi} Let $\psi$ be a real-valued function on a convex subset $A$ of a vector space and let $C$ be a subset of $A$. Then, $\psi$ is said to be \emph{strictly concave from $C$} if, for any $x\in C$ and $y\in A$ with $x\ne y$, and any $t\in(0,1)$, we have $\psi((1-t)x+ty)>(1-t)\psi(x)+t\psi(y)$. \end{defi} \begin{defi} Let $\phi$ be a function from some $B\subseteq{\mathbb R}$ to ${\mathbb R}$, and let $C$ be a subset of $B$. We say that $\phi$ is \emph{strictly increasing from $C$} if $\phi(x)<\phi(y)$ for any $C\ni x<y\in B$. 
\end{defi}
\begin{lemma}\label{lm:advancedconcavitycomposition}
Let $A$ be a convex subset of a vector space and let $B\subseteq{\mathbb R}$. Let $\psi:A\rightarrow B$ be a concave function and let $\phi:B\rightarrow{\mathbb R}$ be an increasing function. Suppose $\psi$ is strictly concave on the line through any two distinct points $x\ne y$ with $\psi(x)=\psi(y)$. Suppose also that $\phi$ is strictly concave from some subset $C$ of $B$ and strictly increasing from $C$. Then $\phi\circ\psi:A\rightarrow{\mathbb R}$ is strictly concave from $\psi^{-1}(C)$.
\end{lemma}
\begin{proof}
Take any $x\in\psi^{-1}(C)$ and $y\in A$ with $x\ne y$, and take any $t\in(0,1)$. We must show that
\[
(\phi\circ\psi)((1-t)x+ty)>(1-t)(\phi\circ\psi)(x)+t(\phi\circ\psi)(y).
\]
Since $\psi$ is concave, we have
\begin{equation}\label{eq:psiconcave}
\psi((1-t)x+ty)\ge(1-t)\psi(x)+t\psi(y)
\end{equation}
and thus, since $\phi$ is increasing,
\begin{equation}\label{eq:phiincreasing}
\phi(\psi((1-t)x+ty))\ge\phi((1-t)\psi(x)+t\psi(y)).
\end{equation}
Since $\phi$ is strictly concave from $C$, we have
\[
\phi((1-t)\psi(x)+t\psi(y))\ge(1-t)\phi(\psi(x))+t\phi(\psi(y)),
\]
with equality only if $\psi(x)=\psi(y)$. If $\psi(x)=\psi(y)$, by the assumptions in the lemma, $\psi$ is strictly concave on the line through $x$ and $y$ and hence the inequality in \cref{eq:psiconcave} holds strictly. Since $\phi$ is strictly increasing from $C$, this implies that the inequality in \cref{eq:phiincreasing} holds strictly too. In either case the desired strict inequality follows, so $\phi\circ\psi$ is strictly concave from $\psi^{-1}(C)$.
\end{proof}
\begin{prop}\label{pr:uniquemaximizer}
Suppose $\Phi$ is strictly concave on $[0,\sqrt2]$. Then, if $u^{(1)}$ and $u^{(2)}$ are two maximizers of the functional $F_\rho$ in ${\mathcal U}_r(\Omega)$, the two sets
\begin{align*}
\{(x,y)\ :\ 0<u^{(1)}_x(x,y)u^{(1)}_y(x,y)<\rho(x,y)\} & \text{\ and}\\
\{(x,y)\ :\ 0<u^{(2)}_x(x,y)u^{(2)}_y(x,y)<\rho(x,y)\} & \mbox{}
\end{align*}
are equal up to a set of measure zero, and the partial derivatives of $u^{(1)}$ and $u^{(2)}$ agree almost everywhere on that set.
\end{prop}
\begin{proof}
For $i=1,2$, let $D_i\subseteq\Omega$ be the set of points $(x,y)$ where $0<u^{(i)}_x u^{(i)}_y<\rho$. Let $A:={\mathbb R}_{>0}^2$, $B:={\mathbb R}_{\ge0}$ and $C:=[0,\sqrt2)$. For any $(x,y)\in\supp\rho$, let $\chi_{(x,y)}$ and $\psi_{(x,y)}$ be defined as in the proof of \cref{th:mainFrhoconcave}. It follows from \cref{pr:phiproperties} together with our assumption that $\Phi$ is strictly concave on $[0,\sqrt2]$ that the assumptions of \cref{lm:advancedconcavitycomposition} are satisfied for $\psi_{(x,y)}$, $\Phi$, $A$, $B$ and $C$, so $\Phi\circ\psi_{(x,y)}$ is strictly concave from $\psi_{(x,y)}^{-1}(C)=\{(r,s)\in{\mathbb R}_{>0}^2\,:\,rs<\rho(x,y)\}$. Let $D$ be the set of points in $D_1\cup D_2$ where $(u^{(1)}_x,u^{(1)}_y)\ne (u^{(2)}_x,u^{(2)}_y)$. We claim that $\mu(D)=0$. Let $w:=(u^{(1)}+u^{(2)})/2$. If $(x,y)\in D$, the points $p:=\chi_{(x,y)}(u^{(1)})$ and $q:=\chi_{(x,y)}(u^{(2)})$ are distinct and at least one of them belongs to $\psi_{(x,y)}^{-1}(C)$. Hence, with $L_{(x,y)}$ as a shorthand for $\rho(x,y)\cdot(\Phi\circ\psi_{(x,y)}\circ\chi_{(x,y)})$, we obtain
\begin{multline*}
L_{(x,y)}(w) =\rho(x,y)(\Phi\circ\psi_{(x,y)})((p+q)/2)\\
>\rho(x,y)[(\Phi\circ\psi_{(x,y)})(p)+(\Phi\circ\psi_{(x,y)})(q)]/2 =[L_{(x,y)}(u^{(1)}) +L_{(x,y)}(u^{(2)})]/2.
\end{multline*}
Suppose $\mu(D)>0$. Then,
\[
\int_{D}L_{(x,y)}(w)\,d\mu >\frac{1}{2}\left(\int_{D}L_{(x,y)}(u^{(1)})\,d\mu +\int_{D}L_{(x,y)}(u^{(2)})\,d\mu \right).
\] Also, by \cref{th:mainFrhoconcave}, \[ \int\limits_{(\supp\rho)\setminus D}\!\!\!\!\!L_{(x,y)}(w)\,d\mu \ge\frac{1}{2}\left(\,\int\limits_{(\supp\rho)\setminus D}\!\!\!\!\!L_{(x,y)}(u^{(1)})\,d\mu \ +\!\!\!\!\!\int\limits_{(\supp\rho)\setminus D}\!\!\!\!\!L_{(x,y)}(u^{(2)})\,d\mu \right), \] so it follows that \[ \int_{\supp\rho}L_{(x,y)}(w)\,d\mu >\frac{1}{2}\left(\int_{\supp\rho}L_{(x,y)}(u^{(1)})\,d\mu +\int_{\supp\rho}L_{(x,y)}(u^{(2)})\,d\mu \right). \] This means that $F_\rho(w)>[F_\rho(u^{(1)})+F_\rho(u^{(2)})]/2$, which is impossible since $u^{(1)}$ and $u^{(2)}$ both are maximizers of $F_\rho$. We conclude that $\mu(D)=0$. \end{proof} Recall the definition of ${\mathcal V}$ from \cref{sec:generalpde}. \begin{lemma}\label{lm:uniquefromderivativesgeneral} Let $\Omega$ be an open subset of ${\mathbb R}^2$, and let $u\in{\mathcal U}_{-c,2c}(\Omega)$ and $v\in{\mathcal V}_{-d,2d}(\Omega)$ for some $c,d>0$. Suppose that $u$ and $v$ are everywhere differentiable with nonzero partial derivatives and that the image of $\Omega$ under the map $\varphi_u\,:\,(x,y)\mapsto(u(x,y),v(x,y))$ is $(-c,c)\times(-d,d)$. Then, for any $w\in{\mathcal U}_{-c,2c}(\Omega)$ whose partial derivatives coincide with those of $u$ almost everywhere, it holds that $w=u$ everywhere. \end{lemma} \begin{proof} Take any $w\in{\mathcal U}_{-c,2c}(\Omega)$ whose partial derivatives coincide with those of $u$ almost everywhere. Let $(x_0,y_0)$ be any point in $\Omega$ and let $u_0:=u(x_0,y_0)$, $v_0:=v(x_0,y_0)$ and $w_0=w(x_0,y_0)$. We need to show that $w_0=u_0$, but by symmetry% \footnote{To be precise: Suppose $\Omega$, $u$, $v$ and $w$ satisfy the assumptions in the lemma, and define $\Omega'$, $u'$, $v'$ and $w'$ by $\Omega':=-\Omega=\{(-x,-y)\,:\,(x,y)\in\Omega\}$, $u'(x,y)=-u(-x,-y)$, $v'(x,y)=-v(-x,-y)$ and $w'(x,y)=-w(-x,-y)$. Then, $\Omega'$, $u'$, $v'$ and $w'$ satisfy the assumptions in the lemma too, and $w(x,y)\le u(x,y)$ if and only if $w'(x,y)\ge u'(x,y)$.} it is enough to show that $w_0\ge u_0$, so this will be our goal. Since $u$ has positive partial derivatives, for any sufficiently small $\delta>0$ there is an $x_1<x_0$ and an $y_1<y_0$ such that $u(x_1,y_0)=u(x_0,y_1)=u_0-\delta$. Since $v_x<0$ and $v_y>0$ everywhere, we have $v_-:=v(x_0,y_1)<v(x_0,y_0)<v(x_1,y_0)=:v_+$. Let $S:=(-c,u_0-\delta)\times(v_-,v_+)$ and $T:=\varphi_u^{-1}(S)$. We claim that every point in $T$ is south-west of $(x_0,y_0)$. To see this, first note that $v\le v_-$ for any point south-east of $(x_0,y_1)$ and $v\ge v_+$ for any point north-west of $(x_1,y_0)$. Then note that $u\ge u_0-\delta$ for any point north-east of $(x_0,y_1)$ or $(x_1,y_0)$. Thus, every point in $T$ is south of $(x_1,y_0)$ and west of $(x_0,y_1)$, and we have proved the claim. \Cref{fig:uandv} illustrates the situation. \begin{figure} \begin{tikzpicture} \draw[fill] (0,1) circle(2pt) (1.5,0) circle(2pt) (1.5,1) circle(2pt); \draw (-0.7,1.5) node {$(x_1,y_0)$} (2.2,-0.5) node {$(x_0,y_1)$} (2.2,1.5) node {$(x_0,y_0)$}; \draw (-3,1) -- (0,1) -- (0,3); \draw (1.5,-2) -- (1.5,0) -- (4.5,0); \draw (0,1) -- (1.5,1) -- (1.5,0); \draw (-1.5,2.5) node {$v\ge v_+$} (3,-1.5) node {$v\le v_-$} (3,2.5) node {$u\ge u_0-\delta$}; \end{tikzpicture} \caption{The situation in the proof of \cref{lm:uniquefromderivativesgeneral}} \label{fig:uandv} \end{figure} Let $J:=u_xv_y-u_yv_x$ be the Jacobian determinant of $\varphi_u$. 
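Note that, since $u_x,u_y>0$ and $v_x<0<v_y$ everywhere, this determinant is positive:
\[
J=u_xv_y-u_yv_x>0,
\]
so no absolute value is needed when $J$ is integrated in the change-of-variables estimates below.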
By \cref{th:changeofvariables}, \[ \int_T J d\mu\ge \mu(\varphi_u(T))=\mu(S), \] where the last equality follows from the surjectivity of $\varphi_u$ onto the set $(-c,c)\times(-d,d)$. Let $T'$ be the subset of $T$ where $w$ is differentiable and the partial derivatives of $w$ and $u$ coincide, and define the function $\varphi_w\,:\,T'\rightarrow[-c,c]\times[-d,d]$ by $\varphi_w(x,y)=(w(x,y),v(x,y))$. Then $\varphi_w$ is injective by \cref{lm:injectivity}, and by \cref{th:changeofvariables}, \[ \int_{T'} J d\mu = \mu(\varphi_w(T')). \] Since $\mu(T\setminus T')=0$, the two integrals above are equal, so \begin{equation}\label{eq:mumu} \mu(\varphi_w(T'))\ge\mu(S). \end{equation} Recall that any point in $T$ is south-west of $(x_0,y_0)$, so $w(x,y)\le w_0$ for any point $(x,y)\in T$. It follows that \[ \varphi_w(T')\subseteq[-c,w_0]\times(v_-,v_+), \] so $\mu(\varphi_w(T'))\le (w_0+c)(v_+-v_-)$. On the other hand, $\mu(S)=(u_0-\delta+c)(v_+-v_-)$, so \cref{eq:mumu} yields that $w_0\ge u_0-\delta$. Since this holds for arbitrarily small positive $\delta$, we conclude that $w_0\ge u_0$. \end{proof} \begin{lemma}\label{lm:uniquefromderivatives} Let $\Omega$ be the open rhombus $a\abs{x}+b\abs{y}<1$ and let $c$ be a positive number. Let $u\in{\mathcal U}_{-c,2c}$ and suppose that $u_x=ca$ and $u_y=cb$ almost everywhere in $\Omega$. Then $u(x,y)=c(ax+by)$ everywhere on $\Omega$. \end{lemma} \begin{proof} This follows from \cref{lm:uniquefromderivativesgeneral} with $d=1$ and $v(x,y)=by-ax$. \end{proof} \begin{prop}\label{pr:phioneisone} $\Phi(r)=1$ if $r\ge\sqrt2$. \end{prop} \begin{proof} By \cref{pr:phiproperties}, $\Phi$ is continuous at $r=\sqrt2$, so it suffices to show that $\Phi(r)=1$ for any $r>\sqrt2$. For any $\beta>0$, consider the density domain $(\Omega_\beta, 1)$ where $\Omega_\beta$ is the open rectangle $\abs{x+y}<1$, $\abs{x-y}<\beta$, and for any $\gamma>0$ and $\beta>0$, let $\sigma_{\gamma,\beta}$ be a Poisson point process on $\Omega_\beta$ with homogeneous intensity $\gamma$. Since $\Omega_\beta\subset(-\tfrac{1+\beta}2,\tfrac{1+\beta}2)^2$, by \cref{th:hammersley}, for any $\varepsilon>0$, the size of the largest increasing subset of $\sigma_{\gamma,\beta}$ is smaller than $(\Gamma+\varepsilon)(1+\beta)\sqrt{\gamma}$ a.a.s.\ as $\gamma\rightarrow\infty$. We know from the result of Vershik and Kerov \cite{VershikKerov} that $\Gamma=2$. Thus, by \cref{pr:decreasingincreasingrelation}, the maximum size of a $\lfloor(2+\varepsilon)(1+\beta)\sqrt{\gamma}\rfloor$-decreasing subset of $\sigma_{\gamma,\beta}$ is $\#\sigma_{\gamma,\beta}$ a.a.s.\ By the law of large numbers, we have $\#\sigma_{\gamma,\beta}/\gamma\rightarrow2\beta$ in probability as $\gamma\rightarrow\infty$, so, by \cref{th:mainlimittoFmax}, $\functional_{\rm max}((2+\varepsilon)(1+\beta))= 2\beta$ for any $\varepsilon>0$ and $\beta>0$. On the other hand, by \cref{pr:parallelogrammaximizer}, $\functional_{\rm max}((2+\varepsilon)(1+\beta))=2\beta\Phi((1+\tfrac\eps2)(1+\beta)\sqrt2)$. It follows that $\Phi((1+\tfrac\eps2)(1+\beta)\sqrt2)=1$. \end{proof} \begin{prop} Suppose $\Phi$ is strictly concave on $[0,\sqrt2]$. Let $\Omega$ be the open rhombus $a\abs{x}+b\abs{y}<1$ and $\rho>0$ be constant, and let $c$ be any positive number smaller than or equal to $\sqrt{\rho/ab}$. Then, in ${\mathcal U}_{-c,2c}(\Omega)$ the functional $F_\rho$ is uniquely maximized by the function $u_{\rm linear}(x,y)=c(ax+by)$. \end{prop} \begin{proof} By \cref{pr:parallelogrammaximizer}, $u_{\rm linear}$ is a maximizer of $F_\rho$. 
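Note that the partial derivatives of $u_{\rm linear}$ are constant:
\[
(u_{\rm linear})_x=ca,\qquad (u_{\rm linear})_y=cb,\qquad (u_{\rm linear})_x(u_{\rm linear})_y=c^2ab\le\rho,
\]
where the last inequality holds by the assumption on $c$, with equality precisely when $c=\sqrt{\rho/ab}$; this is the distinction behind the two cases treated below.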
If $c<\sqrt{\rho/ab}$, the uniqueness follows from \cref{pr:uniquemaximizer} together with \cref{lm:uniquefromderivatives}. Suppose $c=\sqrt{\rho/ab}$ and suppose there is another maximizer $u\in{\mathcal U}_{-c,2c}$. By scale invariance we can assume without loss of generality that $a=b=\rho=1$. We have $F_1(u_{\rm linear})=\mu(\Omega)\Phi(\sqrt2)$ which equals $\mu(\Omega)$ by \cref{pr:phioneisone}, so we must have $\Phi(\sqrt{2u_xu_y})=1$ and hence $u_xu_y\ge1$ almost everywhere. It follows that \begin{equation}\label{eq:integrallarge} \int_\Omega \sqrt{u_xu_y}\,d\mu\ge 2. \end{equation} On the other hand, by the inequality of the geometric and arithmetic mean, we have \begin{equation}\label{eq:integralsmallone} \int_\Omega \sqrt{u_xu_y}\,d\mu\le\frac12\int_\Omega (u_x+u_y)d\mu \end{equation} with equality if and only if $u_x=u_y$ almost everywhere in $\Omega$. Define a map $\zeta\,:\,{\mathbb R}^2\rightarrow{\mathbb R}^2$ by $\zeta(\alpha,\beta)=(\frac{\alpha+\beta}{2},\frac{\alpha-\beta}{2})$. Then, $\Omega=\zeta((-1,1)^2)$ and \begin{multline}\label{eq:integralsmalltwo} \frac12\int_\Omega (u_x+u_y)\,d\mu =\frac14\int_{(-1,1)^2} \bigl((u_x+u_y)\circ\zeta\bigr)\,d\mu =\frac12\int_{(-1,1)^2} \tfrac{\partial}{\partial\alpha}(u\circ\zeta)\,d\mu \\ =\{\text{Tonelli's theorem}\} =\frac12\int_{-1}^{1}\left(\int_{-1}^{1} \tfrac{\partial}{\partial\alpha}(u\circ\zeta)\,d\alpha\right)\,d\beta \\ \le\{\text{since $u\circ\zeta$ is increasing in the first variable}\} \le \frac12\int_{-1}^{1}2c\,d\beta=2c=2. \end{multline} Combining \cref{eq:integrallarge,eq:integralsmallone,eq:integralsmalltwo}, we see that $u_x=u_y=1$ must hold almost everywhere in $\Omega$. By \cref{lm:uniquefromderivatives}, $u=u_{\rm linear}$ almost everywhere. \end{proof} \section{The uniform case}\label{sec:uniform} In this section we suppose \cref{con:triangularlimitshape} holds true and explore the consequences for the case of uniformly random permutations. The limit surfaces turn out to have a surprisingly simple parameterization in terms of trigonometric functions, and we are able to recover the Logan-Shepp-$\!$Vershik-Kerov limit-shape result mentioned in \cref{sec:background} and depicted in \cref{fig:famouslimitshape}. Level plots of some limit surfaces are shown in \cref{fig:limitsurfaces}. 
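As a quick check on the parameterization appearing in \cref{th:uniform} below, note that
\[
\frac{d}{d\alpha}\Bigl(\frac2\pi(\sin\alpha-\alpha\cos\alpha)\Bigr)=\frac2\pi\,\alpha\sin\alpha>0
\qquad\text{for }0<\alpha<\pi,
\]
so $\alpha\mapsto\frac2\pi(\sin\alpha-\alpha\cos\alpha)$ increases strictly from $0$ to $2$ on $(0,\pi)$, and every $0<r<2$ determines a unique angle $\alpha$.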
\begin{figure} \centerfloat \begin{subfigure}[b]{0.55\textwidth} \centerfloat \begin{tikzpicture}[scale=5] \draw[->] (-0.6, 0) -- (0.6, 0) node[right] {$x$}; \draw[->] (0, -0.6) -- (0, 0.6) node[above] {$y$}; \tikzmath{\myalpha = pi/3;} \foreach \psiplusphi in { -1.04646, -0.62934, -0.43881, -0.28133, -0.13784, 0.00000, 0.13784, 0.28133, 0.43881, 0.62934, 1.04646 } { \draw[domain=abs(\psiplusphi)-pi:pi-abs(\psiplusphi), smooth, variable=\psiminusphi]% (1/2,-1/2) -- plot ({1/(2*pi)*((\psiplusphi-\psiminusphi)+sin(deg(\psiplusphi-\psiminusphi)))},% {1/(2*pi)*((\psiplusphi+\psiminusphi)+sin(deg(\psiplusphi+\psiminusphi)))})% -- (-1/2,1/2); } \foreach \psiplusphi in {\myalpha-pi,pi-\myalpha} { \draw[thick,domain=abs(\psiplusphi)-pi:pi-abs(\psiplusphi), smooth, variable=\psiminusphi,densely dashed]% plot ({-1/(2*pi)*((\psiplusphi-\psiminusphi)+sin(deg(\psiplusphi-\psiminusphi)))},% {1/(2*pi)*((\psiplusphi+\psiminusphi)+sin(deg(\psiplusphi+\psiminusphi)))}); } \end{tikzpicture} \caption{$\alpha=\pi/3$} \label{subfig:onethird} \end{subfigure}% \begin{subfigure}[b]{0.55\textwidth} \centerfloat \begin{tikzpicture}[scale=5] \draw[->] (-0.6, 0) -- (0.6, 0) node[right] {$x$}; \draw[->] (0, -0.6) -- (0, 0.6) node[above] {$y$}; \foreach \pitimesu in {-1,-0.8,...,1} { \tikzmath{\psiplusphi = rad(asin(\pitimesu));} \draw[domain=abs(\psiplusphi)-pi:pi-abs(\psiplusphi), smooth, variable=\psiminusphi]% (1/2,-1/2) -- plot ({1/(2*pi)*((\psiplusphi-\psiminusphi)+sin(deg(\psiplusphi-\psiminusphi)))},% {1/(2*pi)*((\psiplusphi+\psiminusphi)+sin(deg(\psiplusphi+\psiminusphi)))})% -- (-1/2,1/2); } \foreach \psiplusphi in {-pi/2,pi/2} { \draw[thick,domain=abs(\psiplusphi)-pi:pi-abs(\psiplusphi), smooth, variable=\psiminusphi,densely dashed]% plot ({-1/(2*pi)*((\psiplusphi-\psiminusphi)+sin(deg(\psiplusphi-\psiminusphi)))},% {1/(2*pi)*((\psiplusphi+\psiminusphi)+sin(deg(\psiplusphi+\psiminusphi)))}); } \end{tikzpicture} \caption{$\alpha=\pi/2$} \label{subfig:onehalf} \end{subfigure} \begin{subfigure}[b]{0.55\textwidth} \centerfloat \begin{tikzpicture}[scale=5] \draw[->] (-0.6, 0) -- (0.6, 0) node[right] {$x$}; \draw[->] (0, -0.6) -- (0, 0.6) node[above] {$y$}; \tikzmath{\myalpha = 2*pi/3;} \foreach \psiplusphi in { -2.09360, -1.19831, -0.82575, -0.52615, -0.25698, 0.00000, 0.25698, 0.52615, 0.82575, 1.19831, 2.09360 } { \draw[domain=abs(\psiplusphi)-pi:pi-abs(\psiplusphi), smooth, variable=\psiminusphi]% (1/2,-1/2) -- plot ({1/(2*pi)*((\psiplusphi-\psiminusphi)+sin(deg(\psiplusphi-\psiminusphi)))},% {1/(2*pi)*((\psiplusphi+\psiminusphi)+sin(deg(\psiplusphi+\psiminusphi)))})% -- (-1/2,1/2); } \foreach \psiplusphi in {\myalpha-pi,pi-\myalpha} { \draw[thick,domain=abs(\psiplusphi)-pi:pi-abs(\psiplusphi), smooth, variable=\psiminusphi,densely dashed]% plot ({-1/(2*pi)*((\psiplusphi-\psiminusphi)+sin(deg(\psiplusphi-\psiminusphi)))},% {1/(2*pi)*((\psiplusphi+\psiminusphi)+sin(deg(\psiplusphi+\psiminusphi)))}); } \end{tikzpicture} \caption{$\alpha=2\pi/3$} \label{subfig:twothirds} \end{subfigure}% \begin{subfigure}[b]{0.55\textwidth} \centerfloat \begin{tikzpicture}[scale=5] \draw[->] (-0.6, 0) -- (0.6, 0) node[right] {$x$}; \draw[->] (0, -0.6) -- (0, 0.6) node[above] {$y$}; \foreach \pitimesu in {-1,-0.9,...,1} { \tikzmath{\psiplusphi = 3*pi/4*\pitimesu;} \draw[domain=abs(\psiplusphi)-pi:pi-abs(\psiplusphi), smooth, variable=\psiminusphi]% (1/2,-1/2) -- plot ({1/(2*pi)*((\psiplusphi-\psiminusphi)+sin(deg(\psiplusphi-\psiminusphi)))},% {1/(2*pi)*((\psiplusphi+\psiminusphi)+sin(deg(\psiplusphi+\psiminusphi)))})% -- 
(-1/2,1/2); \tikzmath{\psiplusphi = 3*pi/4*\pitimesu;} \draw[domain=abs(\psiplusphi)-pi:pi-abs(\psiplusphi), smooth, variable=\psiminusphi,densely dashed]% (-1/2,-1/2) -- plot ({-1/(2*pi)*((\psiplusphi-\psiminusphi)+sin(deg(\psiplusphi-\psiminusphi)))},% {1/(2*pi)*((\psiplusphi+\psiminusphi)+sin(deg(\psiplusphi+\psiminusphi)))})% -- (1/2,1/2); } \end{tikzpicture} \caption{the family of level curves for $u$ and $v$} \label{subfig:uvlevelcurves} \end{subfigure} \caption{(\subref{subfig:onethird})--(\subref{subfig:twothirds}) show 11 evenly distributed level curves (solid) of $u$ in \cref{th:uniform} for three different values of $\alpha$ (corresponding to the marked points in \cref{fig:famouslimitshape}). The lowest and highest level curves correspond to $\psi+\phi=\pm\alpha$, and outside of these curves $u$ has a constant value of $\pm r/2$. In the region between the dashed curves $\psi-\phi=\pm(\pi-\alpha)$, any maximizer of $F_\rho$ coincides with $u$. Finally, (\subref{subfig:uvlevelcurves}) shows curves of the form $\psi+\phi=\text{const}$ (solid) and $\psi-\phi=\text{const}$ (dashed). Those are the possible level curves of $u$ and $v$, respectively, for any $0<\alpha<\pi$.} \label{fig:limitsurfaces} \end{figure} \begin{theo}\label{th:uniform} Suppose \cref{con:triangularlimitshape} holds. Let $\Omega$ be the open square $-\frac12<x,y<\frac12$, let $\rho=1$ and take any $0<r<2$. Then, there is a maximizer $u$ to $F_\rho$ in ${\mathcal U}_{-r/2,r}(\Omega)$ given by the following. Let $0<\alpha<\pi$ be defined by $r = \frac{2}{\pi}(\sin\alpha-\alpha\cos\alpha)$, and define $\phi$ and $\psi$ by \begin{align*} x &= \tfrac{1}{\pi}(\phi+\tfrac12\sin2\phi),\ \ \ -\tfrac\pi2<\phi<\tfrac\pi2, \\ y &= \tfrac{1}{\pi}(\psi+\tfrac12\sin2\psi),\ \ \ -\tfrac\pi2<\psi<\tfrac\pi2. \end{align*} Then \[ u = \begin{cases} \frac1\pi\bigl(\sin(\psi+\phi)-(\psi+\phi)\cos\alpha\bigr) & \text{if $\abs{\psi+\phi} \le \alpha$,}\\ -r/2 & \text{if $\psi+\phi<-\alpha$,}\\ r/2 & \text{if $\psi+\phi>\alpha$.}\\ \end{cases} \] Furthermore, every maximizer to $F_\rho$ in ${\mathcal U}_{-r/2,r}(\Omega)$ coincides with $u$ in the region where $\abs{\psi-\phi} \le \pi-\alpha$. \end{theo} \begin{proof} Recall the definition of ${\mathcal V}$ from \cref{sec:generalpde}. Let $s=r+2\cos\alpha$ and define $v\in{\mathcal V}(\Omega)$ by \[ v := \begin{cases} \frac1\pi\bigl(\sin(\psi-\phi)+(\psi-\phi)\cos\alpha\bigr) & \text{if $\abs{\psi-\phi}\le \pi-\alpha$,}\\ -s/2 & \text{if $\psi-\phi<-(\pi-\alpha)$,}\\ s/2 & \text{if $\psi-\phi>\pi-\alpha$.}\\ \end{cases} \] We claim that $u$ and $v$ satisfy the assumption of \cref{th:pde}. When $\psi+\phi$ spans over $(-\alpha,\alpha)$, $u$ spans over $(-r/2,r/2)$. Analogously, when $\psi-\phi$ spans over $(-(\pi-\alpha),\pi-\alpha)$, $v$ spans over $(-s/2,s/2)$. From this, property \ref{it:measurers} in \cref{th:pde} follows. In order to verify property \ref{it:phiplusphistar}, by \cref{pr:pdeunderconjecture} it is enough to check that the equations \begin{align} u_xv_y+u_yv_x&=0, \label{eq:uxvyplusuyvx} \\ \min\{\lagomsqrt{u_xu_y/\rho},1\}+\min\{\lagomsqrt{-v_xv_y/\rho},1\}&=1 \label{eq:minplusmin} \end{align} hold almost everywhere in $\Omega$. To this end, first we make the following simple calculations. 
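Here $x$ is a function of $\phi$ alone and $y$ is a function of $\psi$ alone, with
\[
\frac{dx}{d\phi}=\frac1\pi(1+\cos2\phi)=\frac2\pi\cos^2\phi,
\qquad
\frac{dy}{d\psi}=\frac1\pi(1+\cos2\psi)=\frac2\pi\cos^2\psi,
\]
so the partial derivatives below are obtained from the chain rule, e.g.\ $u_x=\frac{\partial u/\partial\phi}{dx/d\phi}$.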
\begin{align*} u_x&=\frac{\partial u/\partial\phi}{dx/d\phi} = \frac{\cos(\psi+\phi)-\cos\alpha}{2\cos^2\phi} \ \ \text{if $\abs{\psi+\phi}\le\alpha$},\\ u_y&=\frac{\partial u/\partial\psi}{dy/d\psi} = \frac{\cos(\psi+\phi)-\cos\alpha}{2\cos^2\psi} \ \ \text{if $\abs{\psi+\phi}\le\alpha$},\\ v_x&=\frac{\partial v/\partial\phi}{dx/d\phi} = -\frac{\cos(\psi-\phi)+\cos\alpha}{2\cos^2\phi} \ \ \text{if $\abs{\psi-\phi}\le\pi-\alpha$},\\ v_y&=\frac{\partial v/\partial\psi}{dy/d\psi} = \frac{\cos(\psi-\phi)+\cos\alpha}{2\cos^2\psi} \ \ \text{if $\abs{\psi-\phi}\le\pi-\alpha$}. \end{align*} Note that \begin{equation}\label{eq:psiphi} \abs{\psi+\phi}+\abs{\psi-\phi}<\pi \end{equation} in $\Omega$. Consider a point in $\Omega$ where $\abs{\psi+\phi}\ge\alpha$. Then $u_x=u_y=0$, so \cref{eq:uxvyplusuyvx} holds there. Also, from \cref{eq:psiphi} it follows that $\abs{\psi-\phi}<\pi-\alpha$ and hence $\sqrt{-v_xv_y}=\frac{\cos(\psi-\phi)+\cos\alpha}{2\cos\phi\cos\psi}=\frac{\cos(\psi-\phi)+\cos\alpha}{\cos(\psi-\phi)+\cos(\psi+\phi)}\ge1$, so \cref{eq:minplusmin} holds too. Now consider a point where $\abs{\psi-\phi}\ge\pi-\alpha$. Then $v_x=v_y=0$, so \cref{eq:uxvyplusuyvx} holds there. Also, from \cref{eq:psiphi} it follows that $\abs{\psi+\phi}<\alpha$ and hence $\sqrt{u_xu_y}=\frac{\cos(\psi+\phi)-\cos\alpha}{2\cos\phi\cos\psi}=\frac{\cos(\psi+\phi)-\cos\alpha}{\cos(\psi-\phi)+\cos(\psi+\phi)}\ge1$, so \cref{eq:minplusmin} holds too. Finally, consider a point where $\abs{\psi+\phi}<\alpha$ and $\abs{\psi-\phi}<\pi-\alpha$. Then \cref{eq:uxvyplusuyvx} and \cref{eq:minplusmin} both follow from our expressions for the partial derivatives. We have shown that $u$ is a maximizer of $F_\rho$. It remains to be shown that every maximizer of $F_\rho$ in ${\mathcal U}_{-r/2,r}(\Omega)$ coincides with $u$ within the region where $\abs{\psi-\phi}\le\pi-\alpha$. Let $w\in{\mathcal U}_{-r/2,r}(\Omega)$ be a maximizer of $F_\rho$. Let $R$ be the subregion of $\Omega$ where $\abs{\psi+\phi}<\alpha$ and $\abs{\psi-\phi}<\pi-\alpha$. Since $\sqrt{u_xu_y}$ and $\sqrt{-v_xv_y}$ are both positive in $R$, it follows from \cref{eq:minplusmin} that they are both smaller than one there. Then, by \cref{pr:uniquemaximizer}, the partial derivatives of $u$ and $w$ coincide almost everywhere in $R$, and by \cref{lm:uniquefromderivativesgeneral}, $u$ and $w$ coincide everywhere in $R$. For any point $p$ in $\Omega$ where $\psi+\phi\le-\alpha$, north-east of $p$ there are points in $R$ with $w$-values arbitrarily close to $-r/2$. Analogously, for any point $p$ in $\Omega$ where $\psi+\phi\ge\alpha$, south-west of $p$ there are points in $R$ with $w$-values arbitrarily close to $r/2$. It follows that $w$ coincides with $u$ also in the region $\abs{\psi+\phi}\ge\alpha$. Finally, for any point $p$ in $\Omega$ where $\abs{\psi-\phi}=\pi-\alpha$, both south-west and north-east of $p$ there are points in $R$ with $w$-values arbitrarily close to $u(p)$, so $w$ coincides with $u$ at $p$ as well. \end{proof} \subsection{Recovering the limit shape of Logan, Shepp and Vershik, Kerov} As a consequence of \cref{th:uniform}, under the assumption that \cref{con:triangularlimitshape} holds we are able to recover the celebrated result of Logan, Shepp~\cite{LoganShepp} and Vershik, Kerov~\cite{VershikKerov} on the limit shape of the Young diagram associated with a uniformly random permutation under the Robinson-Schensted correspondence. In the proof of \cref{th:uniform} we showed that $u$ and $v$ satisfy the assumption of \cref{th:pde}. 
One consequence of that theorem is that $s=\functional_{\rm max}'(r)$ whenever $\functional_{\rm max}'(r)$ exists. By \cref{cor:limitshape}, it follows that the limit shape in the $r$-$s$ plane is parameterized by \begin{equation}\label{eq:limitshapeparameterization} \begin{aligned} r &= \tfrac2\pi(\sin\alpha-\alpha\cos\alpha), \\ s &= \tfrac2\pi(\sin\alpha-\alpha\cos\alpha) + 2\cos\alpha, \end{aligned} \end{equation} where $0<\alpha<\pi$; see \cref{fig:famouslimitshape} for an illustration. \begin{figure} \begin{centering} \begin{tikzpicture}[scale=2.5] \draw[thick, domain=0:pi, smooth, variable=\myalpha]% plot ({2/pi*(sin(deg(\myalpha))-\myalpha*cos(deg(\myalpha)))},% {2/pi*(sin(deg(\myalpha))-\myalpha*cos(deg(\myalpha)))+2*cos(deg(\myalpha))}); \foreach \myalpha in {pi/3, pi/2, 2*pi/3} { \tikzmath{\myr = 2/pi*(sin(deg(\myalpha))-\myalpha*cos(deg(\myalpha))); \mys = \myr + 2*cos(deg(\myalpha));} \draw[fill] (\myr,\mys) circle(1pt); } \node at (0.55,1.3) {$\alpha=\pi/3$}; \node at (1,0.7) {$\alpha=\pi/2$}; \node at (1.6,0.3) {$\alpha=2\pi/3$}; \draw[->] (-0.2, 0) -- (2.2, 0) node[right] {$r$}; \draw[->] (0, -0.2) -- (0, 2.2) node[above] {$s$}; \draw (2,0.05) -- (2,-0.05) node[below] {2}; \draw (0.05,2) -- (-0.05,2) node[left] {2}; \end{tikzpicture} \caption{The Logan-Shepp-$\!$Vershik-Kerov limit shape of a Young diagram drawn from the Plancherel distribution. We have marked points with three specific $\alpha$-values in the parameterization given by \cref{eq:limitshapeparameterization}. Their corresponding limit surfaces are depicted in \cref{fig:limitsurfaces}.} \label{fig:famouslimitshape} \end{centering} \end{figure} By the substitution $\alpha=\theta+\frac\pi2$, this is equivalent to the parameterization of the Logan-Shepp-$\!$Vershik-Kerov limit shape given in \cite[Section~1.20]{RomikBook}, namely \begin{align*} r &= \left(\tfrac{2\theta}{\pi}+1\right)\sin\theta+\tfrac2\pi\cos\theta, \\ s &= \left(\tfrac{2\theta}{\pi}-1\right)\sin\theta+\tfrac2\pi\cos\theta, \end{align*} where $-\pi/2<\theta<\pi/2$. \section{Future research}\label{sec:future} The present work has generated plenty of open questions. The most significant one is \cref{con:triangularlimitshape}, of course, and since the triangular shape is arguably the simplest of all possible limit shapes, intuitively the phenomenon should have a simple explanation, even though it has evaded the author so far. As we have seen, a proof of \cref{con:triangularlimitshape} would yield, as a by-product, a new proof of the Logan-Shepp-$\!$Vershik-Kerov limit shape in \cref{fig:famouslimitshape}, a proof very different from the known proofs. Logan, Shepp \cite{LoganShepp} and Vershik, Kerov \cite{VershikKerov} found the limit shape independently of each other, but both proofs were based on the \emph{hook-length formula}, an almost magically simple formula for computing the number of standard Young tableaux of a specific shape (see e.g.~\cite{StanleyEnum2}). Another category of open questions concerns the regularity of the maximizers of the functional $F_\rho$. Is there always a continuous maximizer, or even a differentiable one? Under what conditions can we find a maximizer that satisfies the PDE system of \cref{th:pde}? 
In the uniform case, provided \cref{con:triangularlimitshape} holds true, the maximizers $u$ of the form given by \cref{th:uniform} have the following property: If $u_1$ and $u_2$ are such maximizers associated to $r=r_1$ and $r=r_2$, respectively, where $r_1\le r_2$, then every level curve of $u_1$ is also a level curve of $u_2$. Is this true in general? Our definitions of increasing and decreasing sets and density domains can be generalized to higher dimension. For instance, we might say that a finite set of points in ${\mathbb R}^3$ is \emph{increasing} if the equivalences $x<x'\Leftrightarrow y<y'\Leftrightarrow z<z'$ hold for any pair of points $(x,y,z)$ and $(x',y',z')$ in the set. Can our results be generalized to this setting? Finally, we offer another conjecture, based on evidence from computer-generated limit shapes for various density domains. \begin{conj}\label{con:limitshapeconvex} The limit shape $\functional_{\rm max}'$ that appears in $\cref{cor:limitshape}$ is always convex. \end{conj} Somewhat surprisingly, we have the following implication. \begin{prop} \Cref{con:limitshapeconvex} implies \cref{con:triangularlimitshape}. \end{prop} \begin{proof} At the end of the proof of \cref{pr:phiproperties}, we showed that $\Phi(r)\le\frac{\Gamma}{\sqrt2}r$ for any $r\ge0$, where $\Gamma=2$. Since $\Phi$ is concave, it follows that $\Phi'(r)\le\sqrt2$ for any $r\ge0$ where the derivative exists. On the other hand, by \cref{pr:phioneisone}, $\Phi'(r)=0$ for any $r>\sqrt2$. Suppose \cref{con:limitshapeconvex} holds. Since $\functional_{\rm max}=\Phi$ for the diamond region $\abs{x}+\abs{y}<1/\sqrt2$ with $\rho=1$, it follows that $\Phi'$ is convex. Above, we saw that $\Phi'(0)\le\sqrt2$ and $\Phi'(\sqrt2+\varepsilon)=0$ for any small $\varepsilon>0$, and, together with the convexity, this yields that $\Phi'(r)\le\sqrt2-r$ for $0\le r\le \sqrt2$. But \[ \int_0^{\sqrt2}\Phi'(r)\,dr=\Phi(\sqrt2)=1=\int_0^{\sqrt2}(\sqrt2-r)\,dr \] by \cref{pr:phioneisone}, so $\Phi'(r)$ must be equal to $\sqrt2-r$ in the interval $0\le r\le \sqrt2$. \end{proof} \section{Acknowledgement} This work was supported by the Swedish Research Council (reg.no.~2020-04157). The author is grateful to Prof.~Svante Jansson for feedback on an early draft of the paper. \bibliographystyle{abbrv}
{ "timestamp": "2022-07-26T02:08:49", "yymm": "2207", "arxiv_id": "2207.11505", "language": "en", "url": "https://arxiv.org/abs/2207.11505", "abstract": "A locally uniform random permutation is generated by sampling $n$ points independently from some absolutely continuous distribution $\\rho$ on the plane and interpreting them as a permutation by the rule that $i$ maps to $j$ if the $i$th point from the left is the $j$th point from below. As $n$ tends to infinity, decreasing subsequences in the permutation will appear as curves in the plane, and by interpreting these as level curves, a union of decreasing subsequences give rise to a surface. We show that, under the correct scaling, for any $r\\ge0$, the largest union of $\\lfloor r\\sqrt{n}\\rfloor$ decreasing subsequences approaches a limit surface as $n$ tends to infinity, and the limit surface is a solution to a specific variational problem. As a corollary, we prove the existence of a limit shape for the Young diagram associated to the random permutation under the Robinson-Schensted correspondence. In the special case where $\\rho$ is the uniform distribution on the diamond $|x|+|y|<1$ we conjecture that the limit shape is triangular, and assuming the conjecture is true we find an explicit formula for the limit surfaces of a uniformly random permutation and recover the famous limit shape of Vershik, Kerov and Logan, Shepp.", "subjects": "Probability (math.PR); Combinatorics (math.CO)", "title": "Monotone Subsequences in Locally Uniform Random Permutations", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9867771813585751, "lm_q2_score": 0.718594386544335, "lm_q1q2_score": 0.7090925432943133 }
https://arxiv.org/abs/1211.0328
Some lower bounds for the $L$-intersection number of graphs
For a set of non-negative integers $L$, the $L$-intersection number of a graph is the smallest number $l$ for which there is an assignment on the vertices to subsets $A_v \subseteq \{1,\dots, l\}$, such that every two vertices $u,v$ are adjacent if and only if $|A_u \cap A_v|\in L$. The bipartite $L$-intersection number is defined similarly when the conditions are considered only for the vertices in different parts. In this paper, some lower bounds for the (bipartite) $L$-intersection number of a graph for various types $L$ in terms of the minimum rank of graph are obtained.
\section{Introduction}
A {\it graph representation} is an assignment of objects from a family to the vertices of a graph, satisfying certain conditions, together with a rule which determines from the objects whether or not two vertices are adjacent. In the literature, different types of graph representations, such as the set intersection representation~\cite{small_dim,Jukna_theta} and the vector representation~\cite{Lovasz_Shanon,Lovasz_Survey,general_vector_def}, have been studied. The {\it set intersection representation} is one of the basic graph representations: it assigns sets to the vertices in such a way that two vertices are adjacent if and only if the size of the intersection of their corresponding sets satisfies a certain rule. Precisely, let $G$ be a finite simple graph with vertex set $V$ and let $L$ be a set of non-negative integers. An {\it $L$-intersection representation} of $G$ assigns to every vertex $v \in V$ a finite set $A_v$ such that two vertices $u$ and $v$ are adjacent if and only if $|A_u \cap A_v| \in L$. The quantity we are interested in is the minimum size of the universe of the sets, i.e.\ $|\cup_{v\in V}A_v|$. This parameter is denoted by $\Theta_L(G)$ and called the {\it $L$-intersection number} of $G$~\cite{small_dim}. For a bipartite graph $G$ with a fixed vertex partition $V = V_1 \cup V_2$, the definition can be modified by relaxing the condition inside the partition sets (since vertices inside a partite set are already known to be non-adjacent). Indeed, a {\it bipartite $L$-intersection representation} of a graph $G$, for a given set $L \subseteq \{0,1,2,\dots\}$, assigns to every vertex $v \in V$ a finite set $A_v$ such that two vertices $u,v$ from different partite sets are adjacent if and only if $|A_u \cap A_v| \in L$. The relaxed measure of the $L$-intersection number is denoted by a lower case theta, $\theta_L(G)$~\cite{Jukna_theta}. It is clear that $\Theta_L(G) \ge \theta_L(G)$ for every bipartite graph $G$ and set $L$. One of the important measures regarding set intersection representations is obtained by finding the optimal representation of a graph over different sets $L$. Indeed, the {\it absolute dimension} of $G$ is defined as $\Theta(G) = \min_L \Theta_L(G)$ over all sets $L$ of non-negative integers (similarly, the {\it bipartite absolute dimension} is $\theta(G) = \min_L \theta_L(G)$). Finding explicit lower bounds for the absolute dimension has important consequences in complexity theory~\cite{Jukna_theta,Pudlak_Rodl,Razborov}. However, by an easy counting argument, it was shown that there exist graphs of order $n$ with absolute dimension $\Omega(n)$. With this motivation, we are interested in finding lower bounds for the various $L$-intersection numbers of graphs. A twin-free graph is a graph without any pair of vertices $u,v$ with $N(u)-\{v\} = N(v)-\{u\}$, where $N(x)$ is the set of vertices adjacent to $x$. As a matter of fact, for every twin-free graph $G$ of order $n$, $\Theta(G) \ge \log_2 n$. This lower bound follows from the fact that in such a graph no two vertices can be assigned the same set in a representation. Although this lower bound is obtained easily, finding an explicit construction of a graph $G$ such that $\theta(G) \gg \log_2 n$, or even $\Theta(G) \gg \log_2 n$, is a very challenging problem~\cite{Eaton_thesis,Jukna_theta}. It is easy to see that if $H$ is a maximal twin-free induced subgraph of $G$, then $\Theta_L(H) \leq \Theta_L(G)$ and $\theta_L(H) = \theta_L(G)$ for every set $L$.
Thus, every lower bound for the $L$-intersection number of $H$ is a lower bound for the $L$-intersection number of $G$. Throughout this paper we consider twin-free graphs with no isolated vertices. A good summary of the known results on the $L$-intersection number is given in~\cite{Jukna_theta} (for more results on this subject see~\cite{Eaton_Unbalanced,Eaton_Gould_Rodl,Eaton_Grable,2_path}). The most studied problems in this context are related to the threshold type $L=\{1,2,\dots\}$, for which the $L$-intersection number is known as the edge clique covering number, denoted by $\Theta_1(G)$. Despite the long history of the problem, the only known general lower bound is the following one, proved for the case $L=\{0,1,\dots, k-1\}$.
\begin{thm}\label{lower_p} {\rm\cite{Eaton_Gould_Rodl}} Let $L=\{0,1,\dots, k-1\}$ for some integer $k$. Then, for every graph $G$, $\Theta_L(G^c) \ge (\Theta_1(G))^{1/k}$. \end{thm}
Similarly, the bipartite $L$-intersection number for $L=\{1,2,\dots\}$ corresponds to the well-known parameter, the edge biclique covering number~\cite{Jukna_bc}. The bipartite $L$-intersection number for various sets $L$ is studied in~\cite{Jukna_theta}, where the following lower bounds are obtained.
\begin{thm}\label{review} {\rm\cite{Jukna_theta}} Let $p$ be a prime and $R$ be a subset of residues modulo $p$ with $|R|=s$. If \linebreak $L = \{l : l \pmod p \in R\}$, then for every graph $G$ of order $n$ and maximum degree $\Delta$, \begin{enumerate} \renewcommand{\theenumii}{\roman{enumii}} \item $\tl{G^c} \geq (n/\Delta)^{1\over s}.$ \item $\tl{G} \geq ({1\over s}n/\Delta)^{1\over p-1}.$ \end{enumerate} \end{thm}
Note that a set $L$ of the kind appearing in Theorem~\ref{review} is said to be of modular type. In this paper, we are concerned with finding lower bounds for the (bipartite) $L$-intersection number of graphs for various types of sets $L$. For this purpose, our main tools are linear algebra techniques based on inclusion matrices, and we show how these techniques yield elegant proofs and stronger results. The structure of the paper is as follows. First, in Section~$2$, we present some preliminaries that we need throughout the paper. Then, in Section~$3$, we obtain some lower bounds for the $L$-intersection number for modular types and finite sets $L$. By a similar method, in Section~$4$, we find some lower bounds for the bipartite $L$-intersection number which improve the bounds in Theorem~\ref{review}. Finally, in Section~$5$, we consider the uniform set intersection representation of graphs, where all sets assigned to the vertices have the same size.
\section{Preliminaries}
In this section, we present some definitions and known results which are necessary to prove our main theorems. We start with the definition of the minimum rank of a graph. Let ${\cal M}_n(\Bbb{F})$ be the set of all $n \times n$ matrices over a field $\Bbb{F}$ and ${\cal S}_n(\Bbb{F})$ be the subset of all symmetric matrices in ${\cal M}_n(\Bbb{F})$. For $A \in {\cal S}_n(\Bbb{F})$, the graph of $A$, denoted by ${\cal G}(A)$, is the graph with vertex set $\{1, \dots , n\}$ and edge set $\{ij : A_{ij} \neq 0 \mbox{ and } i\neq j\}$. Note that the diagonal of $A$ is ignored in determining $\cal{G}(A)$. The {\it minimum rank}~\cite{mr} of a graph $G$ over a field $\Bbb{F}$ is defined to be $$\mrf{\Bbb{F}}{G} = \min\{\rk{A} : A\in {\cal S}_n(\Bbb{F}),\ {\cal G}(A) = G\}.$$ In the case of bipartite graphs, it is convenient to consider the bipartite adjacency matrix.
The bipartite adjacency matrix of an $n \times n$ bipartite graph $G$ with a vertex partition $V = V_1 \cup V_2$, denoted by $A_b(G)$, is a $(0,1)$-matrix whose rows correspond to the vertices of $V_1$ and its columns correspond to the vertices of $V_2$, and the $(i,j)$ entry of $A_b(G)$ is $1$ if and only if vertex $i$ is adjacent to vertex $j$. For $A \in {\cal M}_n(\Bbb{F})$, the bipartite graph ${\cal G}_b(A)$ is a graph with bipartite set $V_1$ and $V_2$ corresponding to the rows and the columns of $A$, respectively, and edges $\{ij : A_{ij} \neq 0 \}$. The {\it bipartite minimum rank} of a bipartite graph $G$ over a field $\Bbb{F}$ is defined to be $$\bmrf{\Bbb{F}}{G} = \min\{\rk{A} : A\in {\cal M}_n(\Bbb{F}),\ {\cal G}_b(A) = G\}.$$ It can be easily seen that, for every bipartite graph $G$, $\mrf{\Bbb{F}}{G} = 2\bmrf{\Bbb{F}}{G}.$ For convenience, when $\Bbb{F}=\Bbb{R}$, we denote $\mrf{\Bbb{F}}{G}$ and $\bmrf{\Bbb{F}}{G}$ by $\mr{G}$ and $\bmr{G}$, also for $\Bbb{F}=\Bbb{Z}_p$ we denote them by $\mrp{G}$ and $\bmrp{G}$, respectively. The following results are well-known and straightforward. \begin{pro}\label{known_mr} Over any field $\Bbb{F}$, \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item\label{itm:a} if $G = \bigcup_{i=1}^h G_i$, then $\mrf{\Bbb{F}}{G} \le \sum_{i=1}^h \mrf{\Bbb{F}}{G_i}.$ \item\label{itm:b} if $G'$ is an induced subgraph of $G$, then $\mrf{\Bbb{F}}{G'} \leq \mrf{\Bbb{F}}{G}$. \end{enumerate} \end{pro} The key tools in the proof of our main theorems is the inclusion matrices of set systems. Let ${\cal F}$ and ${\cal T}$ be two families of subsets of set $[l] = \{1,\dots, l\}$. The $({\cal F},{\cal T})$-{\it inclusion matrix}, denoted by $I({\cal F},{\cal T})$ is a $(0,1)$-matrix whose rows and columns are labeled by the members of ${\cal F}$ and ${\cal T}$, respectively. The $(F,T)$ entry of $I({\cal F},{\cal T})$ will be $1$ or $0$ according to whether or not $T \subseteq F$. In the case that $\cal T$ is the family of all $t$-subsets of $[l] \cup \{0\}$, we denote the matrix by $I({\cal F},t)$ and call it the {\it $t$-inclusion matrix} of ${\cal F}$. When ${\cal F}$ is the family of all $i$-subsets of $[l] \cup \{0\}$, the corresponding $t$-inclusion matrix is denoted by $I(i,t)$. Let $A_t({\cal F},{\cal T})=I({\cal F},t) \times I({\cal T},t)^T$, we call $A_t({\cal F},{\cal T})$ the {\it $t$-intersection matrix} of ${\cal F}$ and ${\cal T}$~\cite{Babai_Frankl_book}. Indeed, $A_t({\cal F},{\cal T})$ is a $|{\cal F|} \times |{\cal T}|$ matrix where its $(F,T)$ entry is ${|F \cap T| \choose t}$. Moreover, $$\rk{A_t({\cal F},{\cal T})} \leq \rk{I({\cal F},t)} \leq {l \choose t}.$$ The following fact is a useful relation in working with the inclusion matrix. \begin{pro}\label{relat} {\rm \cite{Babai_Frankl_book}} If ${\cal F}$ is a subfamily of $k$-subsets of $[l] \cup \{0\}$, then $$I({\cal F},i) \times I(i,t) = {k-t \choose i-t} I({\cal F},t).$$ \end{pro} \section{Lower bounds for the $L$-intersection number} In this section, we present some lower bounds for the $L$-intersection number of a graph~$G$ for modular types and finite sets $L$ in terms of the minimum rank of $G$. \begin{thm}\label{mod_mr} Let $p$ be a prime number and $R$ be a subset of residues module $p$ with $|R|=s$. 
If $L = \{l : l \pmod p \in R\}$, then for every graph $G$, \noindent {\rm(i)} $\displaystyle \mrp{G^c}\leq \sum_{t=0}^s{\ttl{G}\choose t}.$ \noindent{\rm(ii)} $\displaystyle \mrp{G}\leq \sum_{t=0}^{p-1}{\ttl{G}\choose t}.$ \end{thm} \begin{proof}{\belowdisplayskip=-20pt Assume that, $R=\{r_1,r_2,\dots,r_s\}$ and ${\cal A}=\{A_1,\dots,A_n\}$ is the family of sets assigned to the vertices of $G$ in an optimal $L$-intersection representation, i.e. $A_i\subseteq \{1,\dots, \Theta_L(G)\}$. Let $M_t=A_t({\cal A},{\cal A})$, $0 \le t \le s$, be the $t$-intersection matrix of the family $\cal A$. Remark that, $M_t$ is an $n\times n$ matrix, with ${|A_u\cap A_v|\choose t}$ in the position $(u,v)$, and $$\rk{M_t}\leq {\Theta_L(G)\choose t}.$$\\ \noindent {\rm(i)} First, we can choose $a_t$ in $\Bbb{Z}_p$, $0\leq t\leq s$, such that, for every non-negative integer $x$, \begin{equation}\label{1} \prod_{t=1}^s(x-r_t) \equiv \sum_{t=0}^s a_t{x\choose t}\quad \quad \pmod p \end{equation} Then, we define an $n \times n$ matrix $M=\sum_{t=0}^s a_tM_t$. Thus by Relation~(\ref{1}), the $(u,v)$ entry of $M$ is equal to $\prod_{t=1}^s(|A_u\cap A_v|-r_t)$ $\pmod p$. Therefore, $M$ is a symmetric matrix such that for every $u \neq v$, its $(u,v)$ entry is a multiple of $p$ if and only if vertex $u$ is adjacent to vertex $v$. Hence, over the field $\Bbb{Z}_p$, $$\mrp{G^c} \leq \rk{M}.$$\\ On the other hand, by the definition of $M$, the row space of $M$ is a subspace of the vector space spanned by the rows of $M_t$, $0\leq t\leq s$, and consequently, $$\rk{M} \leq \sum_{t=0}^s{\rk{M_t}} \leq \sum_{t=0}^s {\Theta_L(G)\choose t}.$$\\ Thus, $$\mrp{G^c} \leq \sum_{t=0}^s{\ttl{G}\choose t}.$$ \noindent {\rm(ii)} Now we choose $b_t^i$ in $\Bbb{Z}_p$, where $0 \leq t \leq p-1$ and $1 \leq i \leq s$, such that, \begin{equation}\label{2} 1-(x-r_i)^{p-1} \equiv \sum_{t=0}^{p-1} b_t^i{x\choose t}\quad \quad \pmod p. \end{equation} Then, we define an $n\times n$ matrix $M= \sum_{i=1}^s\sum_{t=0}^{p-1} b_t^iM_t$. Thus by Relation~(\ref{2}), the $(u,v)$ entry of $M$ is equal to $\sum_{i=1}^s [1-(|A_u \cap A_v|-r_i)^{p-1}]$. Hence, by the Fermat's little theorem, for every two vertices $u$ and $v$, the $(u,v)$ entry of $M$ is zero in $\Bbb{Z}_p$ if and only if vertex $u$ is not adjacent to vertex $v$. Therefore, similar to above, $$\mrp{G} \leq \rk{M} \leq \sum_{t=0}^{p-1}{\rk{M_t}} \leq \sum_{t=0}^{p-1} {\Theta_L(G)\choose t}.$$ }\end{proof} Using the following approximation for the binomial coefficients, we obtain lower bounds for $\ttl{G^c}$ and $\ttl{G}$ in terms of the minimum rank of $G$. It can be seen that, for every positive integers $x, s > 1$, we have \begin{equation}\label{*} \sum_{i=0}^s{x\choose i} \leq x^s.\tag{*} \end{equation} \begin{cor}\label{cor_mr} Let $p$ be a prime number and $R$ be a subset of residues module $p$ with $|R|=s$, where $s >1$. If $L = \{l : l \pmod p \in R\}$, then for every graph $G$, \noindent{\rm (i)} $\ttl{G^c} \geq (\mrp{G})^{1\over s}.$ \noindent{\rm (ii)} $\ttl{G} \geq (\mrp{G})^{1\over p-1}.$ \end{cor} Note that in the proof of part~(i) in Theorem~\ref{mod_mr}, if the coefficients $a_t$ in Relations~(\ref{1}), and the matrices consider over the field $\Bbb{R}$, then with the similar argument the lower bound in terms of $\mr{G}$ for $\Theta_L(G^c)$, where $L$ is a finite set, is obtained. Hence, we have the following theorem. \begin{thm}\label{finite_mr} If $L$ is a finite set of size $s$, where $s>1$, then for every graph $G$, $\ttl{G^c} \geq (\mr{G})^{1\over s}$. 
\end{thm} \section{Lower bounds for the bipartite $L$-intersection number} This section deals with the bipartite $L$-intersection number of graphs for modular types and finite types $L$. Here, by defining appropriate inclusion matrices we obtain lower bounds for $\theta_L(G)$ in terms of the bipartite minimum rank of $G$. \begin{thm}\label{mod_bmr} Let $p$ be a prime and $R$ be a subset of residues module $p$ with $|R|=s$. If $L = \{l : l \pmod p \in R\}$, then for every bipartite graph $G$, \noindent {\rm(i)} $\displaystyle \bmrp{G^c}\leq \sum_{t=0}^s{\tl{G}\choose t}.$ \noindent{\rm(ii)} $\displaystyle \bmrp{G}\leq \sum_{t=0}^{p-1}{\tl{G}\choose t}.$ \end{thm} \begin{proof}{ Suppose that, ${\cal A}=\{A_1,\dots,A_n\}$ and ${\cal B}=\{B_1,\dots,B_n\}$ are the families of sets assigned to the vertices in two partition sets of $G$ in a set representation by $\theta_L(G)$ labels. Let $M_t=A_t({\cal A},{\cal B})$ be the $t$-intersection matrix of $\cal A$ and $\cal B$. Now we follow the similar argument as in the proof of Theorem~\ref{mod_mr} and conclude that \noindent (i) $\bmrp{G^c} \leq \rk{M} \leq \sum_{t=0}^s{\rk{M_t}} \le \sum_{t=0}^s{\tl{G}\choose t},$ where $M=\sum_{t=0}^s a_tM_t$ and $a_t$, $0\le t \le s$, satisfy in Relation~(\ref{1}). \noindent (ii) $\bmrp{G} \leq \rk{M} \leq \sum_{t=0}^{p-1}{\rk{M_t}} \le \sum_{t=0}^{p-1}{\tl{G}\choose t},$ where $M= \sum_{i=1}^s\sum_{t=0}^{p-1} b_t^iM_t$ and $b_t^i$, $0 \le t \le p-1$ and $1 \le i \le s$, satisfy in Relation~(\ref{2}). }\end{proof} It is known that if in the above theorem, $L$ is the set of odd numbers, i.e. $p=2$ and $R=\{1\}$, then for every bipartite graph $G$, $\theta_L(G) = \mrf{\Bbb{Z}_2}{G}$~\cite{Jukna_theta}. This shows that the above lower bounds are tight. From Theorem~\ref{mod_bmr}, by the approximation~(\ref{*}) for the binomial coefficients, we get the following corollary. \begin{cor}\label{cor_mod_bmr} Let $p$ be a prime number and $R$ be a subset of residues module $p$ with $|R|=s$, where $s>1$. If $L = \{l : l \pmod p \in R\}$, then for every bipartite graph $G$,\\ \noindent{\rm(i)} $\tl{G^c} \geq (\bmrp{G})^{1\over s}.$ \noindent{\rm(ii)} $\tl{G} \geq (\bmrp{G})^{1\over p-1}.$ \end{cor} By the above lower bounds we obtain an alternative proof of Theorem~\ref{review} as follows. A bipartite $n\times n$ graph $G=(V_1\cup V_2, E)$ is {\it increasing} if it is possible to enumerate its vertices $V_1=\{x_1, \dots, x_n\}$ and $V_2=\{y_1,\dots, y_n\}$ so that $x_iy_i \in E$ and $x_iy_j\not\in E$ for all $i > j$. In~\cite{Jukna_theta} it is stated that every bipartite $n\times n$ graph $G$ of maximum degree $\Delta$, with no isolated vertices, contains an induced bipartite $(n/\Delta)\times(n/\Delta)$ increasing subgraph. By Proposition~\ref{known_mr}(\ref{itm:b}), if $H$ is the induced bipartite $(n/\Delta)\times(n/\Delta)$ increasing subgraph of~$G$, then $\bmrf{\Bbb{F}}{G} \geq \bmrf{\Bbb{F}}{H}$. Moreover, the adjacency matrix of $H$ is upper triangular with non-zero diagonal entry. Thus, $\bmrf{\Bbb{F}}{G} \geq n/\Delta$ over any field $\Bbb{F}$. Hence, Corollary~\ref{cor_mod_bmr} implies Theorem~\ref{review}. Furthermore, it should be note that there are graphs that $\bmrf{\Bbb{F}}{G}-n/\Delta = \Omega(n)$. \section{Uniform set intersection representation} In this section, we consider the set intersection representation of graphs which has some constraints on the size of sets assigning to the vertices. 
In fact, if all sets assigned to the vertices are of the same size, say $k$, then the representation is called the {\it $k$-uniform intersection representation}. The {\it $(L,k)$-intersection number} of $G$, denoted by $\Theta_{L,k}(G)$, is the minimum size of the universe of the sets in all $k$-uniform intersection representations of graph $G$. As a natural extension, we can assume that the size of sets assign to the vertices are restricted to $r$ different sizes in the set $K=\{k_1,k_2,\dots, k_r\}$. In this case, we denote the minimum size of the universe of the sets in all such representations with $\Theta_{L,K}(G)$. Now we investigate the uniform case and obtain the same lower bounds for $\Theta_{L,k}$ and $\Theta_{L,K}$ for various types $L$. \begin{thm}\label{mod_uni} Let $p$ be a prime number and $R$ be a subset of residues module $p$ with $|R|=s$. If $L = \{l : l \pmod p \in R\}$, then for every graph $G$, $$\mrp{G^c}\leq {\Theta_{L,k}(G)\choose s}.$$ \end{thm} \begin{proof}{ Let ${\cal A}=\{A_1,\dots,A_n\}$ be the $k$-uniform family of subsets assigned to the vertices of $G$ by $\Theta_{L,k}$ labels. Suppose that $M_t=A_t({\cal A},{\cal A})=I({\cal A},t)I({\cal A},t)^T$ be the $t$-intersection matrix of $\cal A$. Remind that, $M_t$ is an $n\times n$ matrix, with ${|A_u\cap A_v|\choose t}$ in position $(u,v)$. By Proposition~\ref{relat}, $$I({\cal A},s) \times I(s,t) = {k-t \choose s-t} I({\cal A},t).$$ Note that, the column vector space of $M_t$ is a subspace of column vector space of $I({\cal A},t)$. Moreover, if $0 \leq t \leq s \leq k$, then ${k-t \choose s-t}\neq 0$. Thus, by the above relation, the column vector space of $I({\cal A},t)$ is a subspace of column vector space of $I({\cal A},s)$. \noindent (i) We define an $n \times n$ matrix $M=\sum_{t=0}^s a_tM_t$ where $a_t$ is in $\Bbb{Z}_p$, $0\leq t\leq s$, satisfying in Relation~(\ref{1}). By the definition of $M$ the column vector space of $M$ is a subspace of the vector space spanned by the columns of $M_t$, $0\leq t\leq s$, that is the subspace of the column vector space of $I({\cal A},s)$, therefore, $$\rk{M} \leq \rk{I({\cal A},s)}\leq {\Theta_{L,k}(G)\choose s}.$$ On the other hand, by choosing $a_t$, $M$ is a symmetric matrix such that for every $u \neq v$, its $(u,v)$ entry is zero if and only if vertex $u$ is adjacent to vertex $v$. Therefore, over the field $\Bbb{Z}_p$, $$\mrp{G^c} \leq \rk{M}\leq {\Theta_{L,k}(G)\choose s}.$$ }\end{proof} A natural extension of the uniform representation is a set representation with the restriction on the size of sets to $r$ different sizes. For such representation, in the next theorem a generalization of the results of Theorem~\ref{mod_uni} is proved. \begin{thm}\label{ttkl} If $L=\{l_1,\dots,l_s\}$ and $K=\{k_1,\dots,k_r\}$ are two subsets of non-negative integers, where $k_i > s - r$, $1\le i \le r$, then for every graph $G$, $$\mr{G^c}\leq r \sum_{t=s-r+1}^s{\ttkl{G}\choose t}.$$ \end{thm} \begin{proof}{ Let ${\cal A}=\{A_1,\dots,A_n\}$ be the family of sets assigned to the vertices of $G$ with $\Theta_{L,K}(G)$ labels and ${\cal A}_i$, $1\leq i\leq r$, be the $k_i$-uniform subfamily of subsets of ${\cal A}$. Suppose that $M_t={A}_t({\cal A},{\cal A})$ be the $t$-intersection matrix of $\cal A$. We define matrix $M=\sum_{t=0}^{s} a_tM_t$, where $a_t$ in $\Bbb{R}$, $0\leq t\leq s$, satisfy in Relation~(\ref{1}). For convenience, we denote the row and column vector spaces of a matrix $Q$ by $R(Q)$ and $C(Q)$, respectively. 
By the definition of $M_t=I({\cal A},t) \times I({\cal A},t)^T$, \begin{equation*}\label{8} C(M_t) \subseteq C( I({\cal A},t) ). \end{equation*} Moreover, by Proposition~\ref{relat}, we have $$I({\cal A}_j,s-r+1) \times I(s-r+1,t) = {k_j-t \choose s-r+1-t} I({\cal A}_j,t).$$ If $0 \leq t \leq s-r+1$ then $t \leq s-r+1 \leq k_j$ and ${k_j-t \choose s-r+1-t}\neq 0$. Hence, by the above equality, for $0 \leq t \leq s-r+1$, $$C( I({\cal A}_j,t)) \subseteq C( I({\cal A}_j,s-r+1) ).$$ Therefore, we have the following chain of inclusions: $$\begin{array}{lllll} C(M) & \subseteq & \sum_{t=0}^{s} C(M_t) & \subseteq & \sum_{t=0}^{s} C( I({\cal A},t) ) \\[2mm] &&& \subseteq & \sum_{t=0}^{s} \sum_{j=1}^{r}C( I({\cal A}_j,t) )\\[2mm] &&& \subseteq & \sum_{j=1}^{r} \sum_{t=s-r+1}^{s} C( I({\cal A}_j,t) ). \end{array}$$ Thus, $$\begin{array}{lll} \rk{M} & \leq & \sum_{j=1}^{r}\sum_{t=s-r+1}^{s} \dim C( I({\cal A}_j,t) )\\[2mm] & \leq & \sum_{j=1}^{r} \sum_{t=s-r+1}^{s} {\ttkl{G}\choose t}\\[2mm] & = & r\sum_{t=s-r+1}^{s} {\ttkl{G}\choose t}. \end{array}$$ On the other hand, by the choice of the coefficients $a_t$, $M$ is a matrix such that for every $u$ and $v$, its $(u,v)$ entry is zero if and only if vertex $u$ is adjacent to vertex $v$. Therefore, over the field $\Bbb{R}$, $$\mr{G^c} \leq \rk{M}\leq r\sum_{t=s-r+1}^s{\ttkl{G}\choose t}.$$ }\end{proof} By the same argument as in Theorems~\ref{mod_uni} and~\ref{ttkl}, the same lower bounds can be obtained for the bipartite version.
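For concreteness, we close with a small example illustrating the choice of coefficients in Relation~(\ref{1}). Let $p=3$ and $R=\{0,1\}$, so that $s=2$. Then, for every non-negative integer $x$, $$x(x-1)=2{x\choose 2},$$ so one may take $a_0=a_1=0$ and $a_2=2$, and the matrix $M$ constructed in the proof of Theorem~\ref{mod_mr} is simply $2M_2$.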
{ "timestamp": "2013-08-22T02:06:02", "yymm": "1211", "arxiv_id": "1211.0328", "language": "en", "url": "https://arxiv.org/abs/1211.0328", "abstract": "For a set of non-negative integers $L$, the $L$-intersection number of a graph is the smallest number $l$ for which there is an assignment on the vertices to subsets $A_v \\subseteq \\{1,\\dots, l\\}$, such that every two vertices $u,v$ are adjacent if and only if $|A_u \\cap A_v|\\in L$. The bipartite $L$-intersection number is defined similarly when the conditions are considered only for the vertices in different parts. In this paper, some lower bounds for the (bipartite) $L$-intersection number of a graph for various types $L$ in terms of the minimum rank of graph are obtained.", "subjects": "Combinatorics (math.CO)", "title": "Some lower bounds for the $L$-intersection number of graphs", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9867771805808551, "lm_q2_score": 0.7185943865443349, "lm_q1q2_score": 0.709092542735448 }
https://arxiv.org/abs/1512.05044
The Drift Laplacian and Hermitian Geometry
Let $(M^n, h)$ be a compact Hermitian manifold. Suppose $\lambda$ is the lowest eigenvalue of the complex Laplacian on $M$. We prove that $\lambda \geq C$ where $C$ depends only on the dimension $n$, the diameter $d$, the Ricci curvature of the Levi-Civita connection on $M$, and a norm, expressed in curvature, that determines how much $M$ fails to be Kähler. We first estimate the principal eigenvalue of a drift Laplacian and then study the structure of Hermitian manifolds using recent results due to Yang and Zheng. We combine these results to obtain the main estimate.
\section{Introduction} This preprint's main goal is to obtain a lower bound on the spectrum of the complex Laplacian on a compact Hermitian manifold. To do this, we need two seemingly unrelated results. First, we derive an estimate for the principal eigenvalue of a Laplacian with drift. Second, using recent results from ~\cite{YZ}, we find inequalities which allow us to to estimate the torsion of a Hermitian manifold in terms of the Riemannian and Hermitian curvature. We then note that the complex Laplacian can be viewed as a drift Laplacian in which the drift can be expressed in terms of the torsion. We thus get the desired estimate. In Section 1, we state and discuss the results. In Section 2, we prove the estimate for the drift Laplacian. In Section 3, we discuss the torsion in greater depth and prove the lemmas in Section 1 that we need to bound the torsion. In the final section, we prove the estimate on the complex Laplacian and discuss some conjectures on the relation between curvature tensors, the torsion tensor, and orthogonal complex structures. \subsection{The Drift Laplacian} The drift Laplacian is a natural operator that appears in physical applications. The associated heat equation has been studied and the drift term acts as convection (i.e. stirring). Often, though not always, stirring speeds up the diffusion process. Therefore, we might expect to be able to derive lower bounds on the spectrum for the drift Laplacian. Theorem 1 provides an example of such bounds. \begin{theorem} Let $(M^n, g)$ be a compact Riemannian manifold without boundary, satisfying $Ric~ M \geq -(n-1)k$ and $\xi$ be a one-form on $M$. Suppose $u \in C^\infty(M)$ is a solution to the equation $\Delta u + \xi(\nabla u) = \lambda u$. Let $\| \xi \|$ be the $C_0$ norm of $\xi$ and $\| \nabla \xi \|$ be the $C_0$ norm of $\nabla \xi$ (as a two tensor). Let $D = 2nd^2$ where $d$ is the diameter of M and $E = \frac{1}{2n}( (16n^2-32n+5) \| \xi \|^2 + 2(n-1)^2 k + 2(n-1) \|\nabla \xi\|)$ Then we have: $$\lambda \geq \frac{1}{D} \frac{(1 + \sqrt{1+4DE})^2-DE}{\exp(1 + \sqrt{1+4DE})}$$ \end{theorem} Similar (and sharper) estimates have been obtained in the case of the Witten-Laplacian, where $\xi = df$ for some smooth function $f$, such as in ~\cite{AN} ~\cite{FLL}. For our purposes, $\xi$ will generally not be exact so we cannot use these results. The one-form is exact if and only if the metric is conformal to a balanced metric, which is a very restrictive condition. We would not be surprised if Theorem 1 were already known but we have not been able to find it in our literature search. In \cite{GN}, Gonzalez and Negrin study the kernel of the drift Laplacian on open domains with the same conditions that we use. More recently, Jorgen Jost and others have studied harmonic maps for a generalization of this operator (so called V-Harmonic maps). This group has proven various results and advanced the theory of harmonic maps ~\cite{CJQ}. \subsection{Structural inequalities on Hermitian manifolds} In Section 3, we use recent results from ~\cite{YZ} to derive inequalities that estimate the torsion of a Hermitian manifold in terms of the Riemannian and Hermitian curvature. We define two norms that measure the difference between Hermitian and Riemannian curvature, denoted $R^h$ and $R$, respectively. 
Given a unitary frame $\{e_i\}$ on a Hermitian manifold, we define $\| R^h-R\|^2_\star$ and $\| R^h-R \|^2_{\star \star}$ in the following way: $$\| R^h-R \|^2_\star = \sum_{i,j,k,l}| R^h_{i \bar j k \bar l} - R_{i \bar j k \bar l} |^2 + 2\sum_{i,j,k,l}| R_{i j \bar k \bar l} |^2 $$ $$\| R^h-R \|^2_{\star \star} = \sum_{i,j,k,l} | R_{i j k \bar l} |^2 $$ We show that these quantities dominate the $C^0$ and $C^1$ norm of the torsion. To be precise, let $\nabla^{c'}$ and $\nabla^{c''}$ be the $(1,0)$ and $(0,1)$ components of the covariant differentiation of the Chern connection defined by: $$\nabla^{c'}_{X+\overline{Y}} T = \nabla^c_X T \textrm{ and }$$ $$\nabla^{c''}_{X+\overline{Y}} T = \nabla^c_{\overline{Y}} T $$ where $X$ and $Y$ are any complex tangent vectors on M of type $(1,0).$ \begin{theorem} The following inequalities hold pointwise: $$||T||^2 \leq \| R^h-R \|_\star \textrm{ and } ||\eta||^2 \leq \| R^h-R \|_\star$$ $$\| \nabla^{c'} (T)\| \leq \| R^h-R \|_\star$$ $$\| \nabla^{c''} T\| \leq C(n) \| R^h-R \|_\star + \| R^h-R \|_{\star \star}$$ \end{theorem} Here, $\eta$ is Gauduchon's torsion one-form, defined by $\partial \omega^{n-1} = -2 \eta \wedge \omega^{n-1}$, where $\omega$ is the K\"ahler (metric) form of the metric h. Given a unitary frame $\{e_i\}$, we can also define $\eta$ as $\eta_i = \sum_j T_{ij}^j$. The torsion expresses the difference between the Levi-Civita connection and the Hermitian connection. Therefore, given a unit vector $X$ of type (1,0), the difference between $\nabla^{c''}_X T$ and $ \nabla_{\bar X} T$ can be bounded by a quadratic expression in torsion. We can bound the difference between $ \nabla^{c'}_X T$ and $ \nabla_X T$ in the same way. Using these observations, we obtain Theorem 3. \begin{theorem} Let $T$ be the torsion tensor and $\nabla T$ the derivative of the torsion tensor with respect to the Levi-Civita connection. Then there exists $C^\prime(n)$ so that following inequality holds: $$||\nabla T|| \leq C^\prime(n) \| R^h-R \|_\star + \| R^h -R \|_{\star \star} $$ \end{theorem} \subsection{The Complex Laplacian} Using the observation that the complex Laplacian on a Hermitian manifold can be expressed as a Laplacian with drift, we translate our estimate on the eigenvalue on the drift Laplacian into an estimate on the Laplacian on a Hermitian manifold. \begin{theorem} Suppose that $(M^n, h)$ is a compact, Hermitian manifold. Then there exists a uniform $C >0$ such that: $$\lambda \geq \frac{1}{4n} \frac{\left( \frac{2}{d^2} + 3Cn^2 (k+ \| R-R^h \|_\star + \| R-R^h \|_{\star \star} ) \right)}{\exp \left( 1 + \sqrt{1+4 Cn^2 d^2 (k+\| R-R^h \|_\star + \| R-R^h \|_{\star \star}) } \right) }$$ \end{theorem} This estimate is unsightly, but only involves the dimension, the diameter, the Ricci curvature, and the norms we defined earlier. Furthermore, the estimate scales as expected. Spectral geometry of Hermitian manifolds has been studied \cite{JP} \cite{PG}, especially in the context of finding spectral conditions which ensure a Hermitian manifold is balanced or K\"ahler. Our results suggest that one can understand the spectral geometry of Hermitian manifolds by studying the torsion. In a future preprint, we will try to strengthen these estimates and prove other results in this vein. We put forth the following conjecture that this estimate can be improved to only involve the Riemannian curvature tensor. 
\begin{conjecture} Given a compact Hermitian manifold $(M^n, h)$, there exists $C$ depending only on the Riemannian geometry such that if $\square u = \lambda u$ then $\lambda \geq C$. \end{conjecture} This would mirror the case of the Laplace-Beltrami operator, where an estimate exists in terms of the dimension, diameter, and Ricci curvature. In order to do this, one generally tries to obtain some estimate on the torsion one-form. We can show that there are certain curvature conditions which force $\eta$ to vanish, but we have not been able to establish this more generally. However, we can prove Conjecture 5 in several special cases. \begin{theorem*} Let $(M^n, g)$ be a compact globally conformally flat Hermitian manifold. Let $K= \inf_{x \in M} Ric~ M$, $k= \sup_{x \in M} Ric~ M$, $R$ be the scalar curvature of $M$, $d$ be the diameter of $M$, and $i$ be the injectivity radius of $M$. If $\lambda_1$ is the principal eigenvalue of the complex Laplacian $\square$, then we have the following estimate: $$\lambda_1 \geq C(d, K, k, n, | \nabla R|, R^2, i)$$ \end{theorem*} \begin{theorem}Let $(M^{2n}, g)$ be a compact Riemannian manifold and $J$ be an orthogonal complex structure which is $k$-Gauduchon for some $k>\frac{n}{2}$. Then the spectrum of the complex Laplacian is bounded below by some constant $C$ depending only on $(M^{2n}, g)$, independent of $J$. \end{theorem} The proof of Theorem 6 is by contradiction, so we are not able to produce a numerical lower bound in terms of the geometry of $M$ in this paper. We will do so in a future preprint. For future work, we hope to continue studying torsion and to try to understand the moduli space of complex structures which are orthogonal to a given Riemannian metric. This would show how much information the Riemannian geometry can detect about the complex structure. There are strong restrictions preventing a metric from admitting a compatible complex structure. To give a result in this vein, Gauduchon proved that hyperbolic manifolds of dimension greater than two do not admit complex structures ~\cite{PG}, a result that Hernandez-Lamoneda extended to negatively curved, strictly quarter-pinched manifolds ~\cite{LH}. However, one would hope that interesting results also hold when the moduli space is not just the empty set. In such a case, a theorem of Salamon shows that at any point of a $4$-manifold, there is an open neighborhood admitting zero, one, two, or infinitely many orthogonal complex structures ~\cite{SMS}. It is worthwhile to note that in order for a single metric to provide a counterexample to Conjecture 5, it would need to admit infinitely many compatible complex structures. This is a very restrictive condition, and it may be the case that metrics with infinitely many complex structures are well behaved enough that the torsion one-form is controlled. In this case, Conjecture 5 would be true. However, this would not be a particularly satisfying result, since it could still be possible that a sequence of metrics whose curvature and geometry are bounded in some very strong norm could be a blow-up sequence for the torsion. \subsection{Acknowledgements} We owe many thanks to Bo Guan, Bo Yang, Adrian Lam, and Fangyang Zheng for their insights and help deriving these results. Finally, thank you to Kori Brady and Fangyang Zheng for their edits and help in making the writing clearer.
\section{Estimating the principal eigenvalue of the drift Laplacian} We now prove Theorem 1, which gives an estimate for the principal eigenvalue of the drift Laplacian $L = \Delta + \xi(\nabla)$. This proof is an adaptation of the Li-Yau estimate given in Lectures on Differential Geometry ~\cite{SY} for the principal eigenvalue of the Laplacian. Recall that Theorem 1 states the following: \begin{theorem*} Suppose $(M^n, g)$ is a compact Riemannian manifold without boundary satisfying $Ric~M \geq -(n-1)k$ for some $k \geq 0$ and with diameter $d$. Suppose that $u$ satisfies: $$Lu+\lambda u = 0$$ Let $\| \xi \|$ be the $C^0$ norm of $\xi$ and $\| \nabla \xi \|$ be the $C^0$ norm of $\nabla_X \xi(X)$ for $\| X \| = 1$. Let $D = 2nd^2$, where $d$ is the diameter of $M$, and let $E = \frac{1}{2n}( (16n^2-32n+5) \| \xi \|^2 + 2(n-1)^2 k + 2(n-1) \| \nabla \xi \|)$. Then we have $$\lambda \geq \frac{1}{D} \frac{(1 + \sqrt{1+4DE})^2-DE}{\exp(1 + \sqrt{1+4DE})}$$ \end{theorem*} \begin{proof} The proof uses the standard technique of applying a Bochner identity to a family of functions $G_\beta(x)$ to establish a gradient estimate, and then integrating the estimate to obtain an inequality involving $\lambda$ (as well as the other terms that appear). Finally, we solve the inequality in terms of $\lambda$ and optimize the estimate in terms of $\beta$ to get the desired result. However, the details of the calculation are somewhat messy. Suppose $u$ satisfies: \begin{equation}\label{eq:ADE} \Delta u + \xi(\nabla u) + \lambda u = 0 \end{equation} with $\sup u = 1$. Let $\beta >1$ and consider the function $G(x)$ defined by: \begin{equation}\label{eq:G} G(x)= \dfrac{|\nabla u|^2}{(\beta-u)^2} \end{equation} We establish an estimate on $G(x)$ and use this to derive an estimate on $u$. \subsection{Set up using the Bochner formula} Suppose that $G(x)$ is maximized at $x_0.$ We have that $\nabla G(x_0) =0$ and $\Delta G(x_0) \leq 0$. Also, we have $G(x)(\beta-u)^2 = |\nabla u|^2$, so: $$(\Delta G) (\beta-u)^2 + 2 \nabla G \cdot \nabla (\beta-u)^2 + G \Delta (\beta-u)^2 = \Delta | \nabla u|^2$$ At $x_0$, we have $\nabla G(x_0) =0$, so using the Bochner formula in normal coordinates at $x_0$, we have the following: \begin{eqnarray*} 0 & \geq & \Delta | \nabla u|^2 - G \Delta (\beta-u)^2 \\ & = & 2 \sum_{i,j} u_{ij}^2 + 2 \sum_i u_i (\Delta u)_i + 2 Ric( \nabla u, \nabla u) - 2 G ((-\Delta u)(\beta - u) + |\nabla u|^2) \end{eqnarray*} Then dividing by 2, using \eqref{eq:ADE} and the curvature bounds, we have that: \begin{eqnarray*} 0 & \geq & \sum_{i,j} u_{ij}^2 + \sum_i u_i (-\xi(\nabla u) - \lambda u)_i + Ric( \nabla u, \nabla u) - G ((-\Delta u)(\beta - u) + |\nabla u|^2) \\ & \geq & \sum_{i,j} u_{ij}^2 + \sum_i u_i (-\xi(\nabla u) - \lambda u)_i - (n-1)k |\nabla u|^2 - G ((-\Delta u)(\beta - u) + |\nabla u|^2) \\ \end{eqnarray*} We pick normal coordinates at $x_0$ so that $u_i = 0$ for $i> 1$ and $u_1 = |\nabla u|$. Then $\nabla G(x_0) =0$ implies: \begin{equation}\label{eq:Grad 0} u_{11} = \frac{- |\nabla u|^2}{\beta - u} \textrm{ and $u_{1i} = 0$ for $i>1$.} \end{equation} Let $\mathcal{G} = G ((\xi(\nabla u) + \lambda u)(\beta - u) + |\nabla u|^2)$. In the following manipulation we will not change this term, so we use $\mathcal{G}$ as shorthand.
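For clarity in the computation that follows, we record the substitution that produces $\mathcal{G}$: by \eqref{eq:ADE} we have $-\Delta u = \xi(\nabla u) + \lambda u$ pointwise, so the last term of the previous inequality can be rewritten as $$- G \big((-\Delta u)(\beta - u) + |\nabla u|^2\big) = - G \big((\xi(\nabla u) + \lambda u)(\beta - u) + |\nabla u|^2\big) = -\mathcal{G}.$$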
Then, at $x_0$: \begin{eqnarray*} 0 & \geq & \sum_{i,j} u_{ij}^2 + \sum_i u_i (-\xi(\nabla u) - \lambda u)_i - (n-1)k |\nabla u|^2 - G ((\xi(\nabla u) + \lambda u)(\beta - u) + |\nabla u|^2) \\ & = & \sum_{i,j} u_{ij}^2 + u_1 (-\xi(\nabla u) - \lambda u)_1 - (n-1)k |\nabla u|^2 - \mathcal{G} \\ & = & \sum_{i,j} u_{ij}^2 - u_1 (\xi(\nabla u))_1 - \lambda u_1^2 - (n-1)k |\nabla u|^2 - \mathcal{G} \\ & = & \sum_{i,j} u_{ij}^2 - u_1 \xi(e_1) u_{11} - \xi(e_1)_1 u_1^2 - \lambda u_1^2 - (n-1)k |\nabla u|^2 - \mathcal{G} \\ & \geq & \sum_{i,j} u_{ij}^2 - u_1 |\xi| |u_{11}| - \| \nabla \xi \| u_1^2 - \lambda u_1^2 - (n-1)k |\nabla u|^2 - \mathcal{G} \\ & = & \sum_{i,j} u_{ij}^2 - u_1 |\xi| |u_{11}| - (\| \nabla \xi \| + \lambda + (n-1)k) |\nabla u|^2 - \mathcal{G} \\ \end{eqnarray*} Therefore, we have: \begin{equation}\label{eq:Bochner 1} 0 \geq \sum_{i,j} u_{ij}^2 - u_1 |\xi| |u_{11}| - (\| \nabla \xi \| + \lambda + (n-1)k) |\nabla u|^2 - \mathcal{G} \\ \end{equation} \subsection{Putting the second derivatives to use} Continuing to work at $x_0$, we now note that: \begin{eqnarray*} \sum_{i,j = 2}^n u_{ij}^2 & \geq & \sum_{i = 2}^n u_{ii}^2 \\ & \geq & \frac{1}{n-1} \left( \sum_{i = 2}^n u_{ii} \right)^2 \\ & = & \frac{1}{n-1} \left( \Delta u - u_{11} \right)^2 \\ & = & \frac{1}{n-1} \left( -\xi (\nabla u) - \lambda u - u_{11} \right)^2 \\ & = & \frac{1}{n-1} \left( \xi (u_1) + \lambda u + u_{11} \right)^2 \\ & \geq & \frac{1}{n-1} \left( \frac{u_{11}^2}{2} - (\xi (u_1) + \lambda u)^2 \right) \\ & \geq & \frac{1}{n-1} \left( \frac{u_{11}^2}{2} - 2 (\xi (u_1))^2 - 2 (\lambda u)^2 \right) \\ \end{eqnarray*} Substituting this inequality into \eqref{eq:Bochner 1}, we get the following: \begin{eqnarray*} 0 & \geq & u_{11}^2 + \frac{1}{n-1} \left( \frac{u_{11}^2}{2} - 2 (\xi (u_1))^2 - 2 (\lambda u)^2 \right) \\ & & - u_1 |\xi| |u_{11}| - (\| \nabla \xi \| + \lambda + (n-1)k) |\nabla u|^2 - \mathcal{G} \\ \end{eqnarray*} Using \eqref{eq:G} and the definition of $\mathcal{G}$, we have: \begin{eqnarray*} 0 & \geq & u_{11}^2 + \frac{1}{n-1} \left( \frac{u_{11}^2}{2} - 2 (\xi (u_1))^2 - 2 (\lambda u)^2 \right) - u_1 |\xi| |u_{11}| \\ & & - (\| \nabla \xi \| + \lambda + (n-1)k) |\nabla u|^2 - \frac{| \nabla u|^2}{(\beta-u)^2} ((\xi(\nabla u) + \lambda u)(\beta - u) + |\nabla u|^2) \\ \end{eqnarray*} Then by \eqref{eq:Grad 0}, the first and last terms cancel, leaving: \begin{eqnarray*} 0 & \geq & \frac{1}{n-1} \left( \frac{u_{11}^2}{2} - 2 (\xi (u_1))^2 - 2 (\lambda u)^2 \right) - u_1 |\xi| |u_{11}| \\ & & - (\| \nabla \xi \| + \lambda + (n-1)k) |\nabla u|^2 - \frac{| \nabla u|^2}{(\beta-u)^2} (\xi(\nabla u) + \lambda u)(\beta - u) \\ & \geq & \frac{1}{2(n-1)} u_{11}^2 - u_1 |\xi| |u_{11}| - \left( \| \nabla \xi \| + \lambda + (n-1)k + \frac{2}{n-1} |\xi|^2 \right) |\nabla u|^2 \\ & & - \frac{| \nabla u|^2}{(\beta-u)} (\xi(\nabla u) + \lambda u) - \frac{2}{n-1} \lambda^2 u^2 \\ & \geq & \frac{1}{2(n-1)} u_{11}^2 - u_1 |\xi| |u_{11}| - \left( \| \nabla \xi \| + \lambda + (n-1)k + \frac{2}{n-1} |\xi|^2 \right) |\nabla u|^2 \\ & & - \frac{| \nabla u|^2}{(\beta-u)} |\xi| |\nabla u| - \lambda u \frac{| \nabla u|^2}{(\beta-u)} - \frac{2}{n-1} \lambda^2 u^2 \\ & = & \frac{1}{2(n-1)} \frac{| \nabla u|^4}{(\beta-u)^2} - 2 |\xi| \frac{| \nabla u|^3}{(\beta-u)} - \left( \| \nabla \xi \| + \lambda + (n-1)k + \frac{2}{n-1} |\xi|^2 \right) |\nabla u|^2 \\ & & - \lambda u \frac{| \nabla u|^2}{(\beta-u)} - \frac{2}{n-1} \lambda^2 u^2 \\ \end{eqnarray*} Now we divide this inequality by $(\beta-u)^2$ to obtain: 
\begin{eqnarray*} 0 & \geq & \frac{1}{2(n-1)} \frac{| \nabla u|^4}{(\beta-u)^4} - 2 |\xi| \frac{| \nabla u|^3}{(\beta-u)^3} - \left( \| \nabla \xi \| + \lambda + (n-1)k + \frac{2}{n-1} |\xi|^2 \right) \frac{| \nabla u|^2}{(\beta-u)^2} \\ & & - \lambda u \frac{| \nabla u|^2}{(\beta-u)^3} - \frac{2}{n-1} \lambda^2 \frac{u^2}{(\beta-u)^2} \\ \end{eqnarray*} Let $\alpha = \frac{u}{\beta - u}$ and note that $\alpha \leq \frac{1}{\beta - u} \leq \frac{1}{\beta - 1}.$ Then we can rewrite this inequality in terms of $G$ and $\alpha$ as: \begin{equation}\label{eq:G inequality} 0 \geq \frac{1}{2(n-1)} G(x_0)^2 - 2 |\xi| G^{3/2} - \left( \| \nabla \xi \| + \lambda + (n-1)k + \frac{2}{n-1} |\xi|^2 \right) G(x_0) - \lambda \alpha G(x_0) - \frac{2}{n-1} \lambda^2 \alpha^2 \\ \end{equation} Since $x_0$ maximizes $G(x_0)$, this inequality holds true for $G$ throughout $M$. \subsection{Deriving and avoiding a quartic equation} By \eqref{eq:G inequality}, we have \begin{eqnarray*} 0 & \geq & G^2 - 4(n-1) |\xi| G^{3/2} - 2(n-1) \left( \| \nabla \xi \| + \lambda + (n-1)k + \frac{2}{n-1} |\xi|^2 + \lambda \alpha \right) G - 4 \lambda^2 \alpha^2 \\ &= & G^2 - 4(n-1) |\xi| G^{3/2} - 2(n-1) \left( \| \nabla \xi \| + (n-1)k + \frac{2}{n-1} |\xi|^2 + \lambda ( \alpha+1) \right) G - 4 \lambda^2 \alpha^2 \\ & \geq & G^2 - 4(n-1) |\xi| G^{3/2} - 2(n-1) \left( \| \nabla \xi \| + (n-1)k + \frac{2}{n-1} |\xi|^2 + \dfrac{\beta}{\beta -1} \lambda \right) G - 4 \lambda^2 \alpha^2 \\ \end{eqnarray*} Letting $g = \sqrt{G}$, and $A = 4(n-1) |\xi|$, $B = 2(n-1) \left( \| \nabla \xi \| + \dfrac{\beta}{\beta -1} \lambda + (n-1)k + \frac{2}{n-1} |\xi|^2 \right)$ and $C = 4 \lambda^2 \alpha^2$, this reduces to: \begin{equation}\label{eq:The quartic} 0 \geq g^4 - A g^3 - B g^2 - C \end{equation} We could try to solve this quartic and then use that to get estimates on $\lambda$. However, it is much more straightforward to try to estimate $g$. \begin{lemma}\label{Roots of quartics} Given $A,B, C >0$, if x satisfies $P(x) = x^4 - Ax^3 -Bx^2 - C \leq 0$, Then $x \leq A + \sqrt{B +\sqrt{C}} = a$. \end{lemma} Note that $P(x)+C = x^2 (x^2 - Ax -B)$ so it is sufficient to show that $P(a) >0$, because increasing $x$ will only increase both factors if the latter is already positive. Then we have the following: \begin{eqnarray*} P(a) & = & (A + \sqrt{B +\sqrt{C}})^4 - A (A + \sqrt{B +\sqrt{C}})^3 - B (A + \sqrt{B +\sqrt{C}})^2 - C \\ & = & (A + \sqrt{B +\sqrt{C}})^2 \left( (A + \sqrt{B +\sqrt{C}})^2 - A (A + \sqrt{B +\sqrt{C}}) - B \right) - C\\ & = & (A + \sqrt{B +\sqrt{C}})^2 \left( A^2 + B + \sqrt C + 2A \sqrt{B +\sqrt{C}} - A^2 - A\sqrt{B +\sqrt{C}}) - B \right) - C \\ & = & (A + \sqrt{B +\sqrt{C}})^2 \left( \sqrt C + A \sqrt{B +\sqrt{C}} \right) - C > 0 \\ \end{eqnarray*} Thus the lemma is proved. 
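As a quick illustration of the lemma (with the hypothetical values $A=B=C=1$, chosen only for concreteness): in this case $a = 1+\sqrt{2}$, and the computation above gives $$P(a) = (1+\sqrt{2})^2\big(1 + \sqrt{2}\big) - 1 = (1+\sqrt{2})^3 - 1 = 6+5\sqrt{2} > 0,$$ so any $x$ satisfying $P(x)\leq 0$ must indeed lie below $1+\sqrt{2}$.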
However, in order to make future calculations more feasible, we note the following inequality holds: $$A + \sqrt{B +\sqrt{C}} \leq 2 \sqrt{ A^2 + B +\sqrt{C}}$$ Using $2 \sqrt{ A^2 + B +\sqrt{C}}$ for $g$ in our problem now, we obtain: \begin{eqnarray*} g & \leq & \sqrt{ 16(n-1)^2 |\xi|^2 + 2(n-1) \left( \| \nabla \xi \| + \dfrac{\beta}{\beta -1} \lambda + (n-1)k + \frac{2}{n-1} |\xi|^2 \right) + \sqrt{4 \lambda^2 \alpha^2}} \\ & = & \sqrt{ (16(n-1)^2+4) |\xi|^2 +2(n-1)^2 k + 2(n-1) \| \nabla \xi \| + ( 2(n-1) \dfrac{\beta}{\beta -1} +2 \alpha) \lambda } \\ & \leq & \sqrt{ (16n^2-32n+5) |\xi|^2 +2(n-1)^2 k + 2(n-1) \| \nabla \xi \| + 2(\dfrac{\beta (n-1)+1}{\beta -1}) \lambda } \\ & \leq & \sqrt{ (16n^2-32n+5) |\xi|^2 +2(n-1)^2 k + 2(n-1) \| \nabla \xi \| + 2n(\dfrac{\beta}{\beta -1}) \lambda } \\ \end{eqnarray*} Recalling the definition of $g$, we can write this inequality as: \begin{equation}\label{eq:Gradient Estimate} |\nabla u| \leq (\beta - u) \sqrt{ (16n^2-32n+5) |\xi|^2 +2(n-1)^2 k + 2(n-1) \| \nabla \xi \| + 2n(\dfrac{\beta}{\beta -1}) \lambda } \end{equation} \subsection{Getting an inequality on $\lambda$} Now that we have a gradient estimate, we are most of the way done. It remains to integrate the inequality to get a $C^0$ estimate and pick $\beta$ so that this gives us a useful inequality on $\lambda$. Take $x_1, x_2 \in M$ such that $u(x_1) =0$ and $u(x_2) =1$. Let $\gamma$ be the shortest geodesic joining $x_1$ and $x_2$. Let $d$ be the diameter of $M$. Note that the geodesics and diameter are defined in terms of the Levi-Civita connection because the length of paths depends only on the metric. We will discuss this phenomena further in future preprints. Then: \begin{eqnarray*} && \log{\frac{\beta}{\beta-1}}~ \leq ~ \int_\gamma \frac{|\nabla u| }{\beta - u}~ \leq \\ & \leq & d \sqrt{ (16n^2-32n+5) |\xi|^2 +2(n-1)^2 k + 2(n-1) \| \nabla \xi \| + 2n(\dfrac{\beta}{\beta -1}) \lambda }\\ \end{eqnarray*} That is to say: \begin{eqnarray*} \lambda \geq \frac{\beta -1}{2n \beta} \left( \frac{1}{d^2} (\log{\frac{\beta}{\beta-1}})^2 - (16n^2-32n+5) |\xi|^2 - 2(n-1)^2 k - 2(n-1) \| \nabla \xi \|) \right) \end{eqnarray*} Let $$E = \frac{1}{2n}( (16n^2-32n+5) |\xi|^2 + 2(n-1)^2 k + 2(n-1) \| \nabla \xi \|),$$ $$D = 2nd^2,$$ $$\textrm{and } x = \frac{\beta}{\beta -1} $$ Then this boils down to: \begin{eqnarray*} \lambda \geq \frac{1}{D} \frac{(\log {x})^2}{x} - \frac{E}{x} = f(x) \end{eqnarray*} \subsection{Strengthening the inequality} Taking the derivative of this with respect to $x$, we find: $$f^\prime(x) = \frac1{Dx^2}(\log x - (\log x)^2 + DE)$$ This is zero if $\log(x) = 1 + \sqrt{1+4DE}$, which is the value of $x$ which maximizes the right hand side. Finally, this implies that: $$\lambda \geq \frac{1}{D} \frac{(1 + \sqrt{1+4DE})^2-DE}{\exp(1 + \sqrt{1+4DE})} $$ \end{proof} Despite how complicated the estimate is, notice that the quantity scales correctly under the scalar deformation $\rho g$ where $\rho$ is a positive constant. This estimate is not optimal. There are a few places that this can be improved. We did not solve the quartic equation exactly (and if we had, inverting to solve for $\lambda$ would be unpleasant). Furthermore, the integration essentially assumes that $G(x)$ is constant. It does not effectively using a barrier function as in the theorem due to Zhong-Yang which derives optimal eigenvalue bounds for compact Riemannian manifolds with $Ric(M) \geq 0$ ~\cite{SY}. 
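To give a rough sense of the size of the resulting bound, consider the hypothetical value $DE = 2$ (chosen only to make the arithmetic transparent). Then $\sqrt{1+4DE}=3$ and the estimate reads $$\lambda \geq \frac{1}{D}\,\frac{(1+3)^2-2}{e^{4}} = \frac{14}{D\,e^{4}} \approx \frac{0.26}{D},$$ which illustrates the loss coming from the exponential factor in the denominator, in contrast with the sharp bound $\pi^2/d^2$ of Zhong-Yang in the case $Ric \geq 0$.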
We imagine that such an estimate can be improved using the various methods of ~\cite{MC}, which we may attempt to do in the future. Also, the effect of the drift is overstated. There is no way that $(\xi(e_1))_1 = |\nabla \xi|$ and $|\xi| = \xi(e_1)$ and both quantities are maximized along the entire curve that we integrate along. \section{The Complex Laplacian on a Hermitian Manifold} We now apply the previous estimate to complex geometry. The complex Laplacian on a Hermitian manifold can naturally be written as a Laplacian with drift equation. If $\eta$ is the torsion one-form as before ($\eta_i$ is the $i$-th component of the form as opposed to the derivative) and $\xi$ is the Lee form, then we have the following: \begin{eqnarray} \square f = \frac12 \Delta f + \xi(\nabla f) \\ \textrm{ and } \eta + \overline{\eta} = - 2 \xi \end{eqnarray} We believe that it is well known at this point, but it is worth noting that $\Delta = 2 \square$ if $\eta = 0$, which is to say that $(M, h)$ is a balanced metric. Therefore, a sufficient condition for the Laplacian and twice the complex Laplacian to be isospectral is that the metric is balanced. Another condition ensuring that the metric is balanced is that $\xi$ = 0, and $\xi$ is exact if and only if the metric is conformal to a balanced metric. Therefore, the estimates from the Witten-Laplacian can only be used in the special case where the metric is conformally equivalent to a balanced metric. \begin{conjecture} Given, $(M^n, h)$ a complex manifold, if $\Delta$ and $2 \square$ have the same spectrum, then $(M^n, h)$ is balanced. \end{conjecture} Any counterexample would be a very interesting Hermitian manifold in its own right. It is known that if the complex Laplacian on a manifold is isospectral to the complex Laplacian on a balanced manifold, then it must be balanced as well. H. Donnelly showed in ~\cite{HD} that if $\square$ and $\Delta$ are isospectral on functions and one-forms, that the manifold is in fact K\"ahler. This is very similar to Peter Gilkey's result that if two complex manifolds are isospectral, they are either both K\"ahler or neither is ~\cite{PG}. However, we turn our attention away from the cases in which $\eta$ is zero and try to understand it for general Hermitian manifolds. From the Theorem 1, if we have control over $\xi$ and $\nabla \xi$ (where $\nabla$ is with respect to the Levi-Civita connection), then we will be able to obtain bounds on the spectrum of $\square$. By equation 9, this reduces to finding estimates on $\eta$ and $\nabla \eta$. We now derive such estimates in terms of the curvature of the Levi-Civita and Chern connections. \subsection{Structural Inequalities on Hermitian Manifolds} From a geometric (and non-rigorous) point of view, the only metric invariants should come from curvature so the need for twisting of a unitary frame can be entirely determined by how ``non-flat" the underlying space is. Furthermore, the deformation of shapes and angles occurs due to the presence of curvature, not its derivatives, so we expect the torsion is bounded somehow by the curvature and how the curvature of the Chern and Levi-Civita connections differ. This phenomena has been noted before and studied in a somewhat different context in ~\cite{SMS}, which shows that if a complex structure exists at all, certain parts of the Weyl tensor must vanish. These observations suggest that torsion should be at most a ``zero-th" order phenomena with respect to the curvature and we seek to formalize this intuition. 
The following relies heavily on the results from ~\cite{YZ}, which we cite repeatedly. By equation 41 of ~\cite{YZ}, we have that given any type $(1,0)$ vector X, $$R^h_{X \bar X X \bar X} -R_{X \bar X X \bar X} = \sum_k |T_{kX}^X|^2$$ Therefore, in any unitary frame $\{ e_i\}$, \begin{eqnarray*} \sum_{i=1}^n R^h_{i \bar i i \bar i} -R_{i \bar i i \bar i} &= & \sum_{i=1}^n \sum_k |T_{ki}^i|^2 \\ &= & \sum_{k=1}^n \sum_i |T_{ki}^i|^2 \\ & \geq & \frac{1}{(n-1)} \sum_{k=1}^n |\eta_k|^2 \\ & = & \frac{1}{(n-1)} ||\eta||_2^2 \\ \end{eqnarray*} However, we can get stronger estimates using the structure theorems of ~\cite{YZ}. For clarity, we write the results of Lemma 7 of ~\cite{YZ} here. \begin{theorem*}{(Lemma 7)} Let $(M^n,g)$ be a Hermitian manifold and let $p \in M$. Let $\{e_i\}$ be a unitary frame near $p$ such that $\theta |_p = 0$. Then, at the point $p$ we have: $$2T^k_{ij}{}_{,\bar l} = R^h_{j \bar l i \bar k} -R^h_{i \bar l j \bar k}$$ $$ T^l_{ij}{}_{, k} = R_{i j k \bar l} - T_{ri}^l T_{jk}^r + T_{rj}^l T_{ik}^r$$ $$2R_{ij \overline{kl}} = T^l_{ij}{}_{, \bar k} - T^k_{ij}{}_{,\bar l} + 2T_{ij}^r \overline{T_{kl}^r} + T_{ri}^k \overline{T_{rl}^j}+T_{rj}^l \overline{T_{rk}^i}-T_{ri}^l \overline{T_{rk}^j}-T_{rj}^k \overline{T_{rl}^i}$$ $$R_{k \bar l i \bar j} = R^h_{k \bar l i \bar j} - T^j_{ik}{}_{,\bar l} - \overline{ T^i_{jl}{}_{,\bar k}} +T_{ik}^r \overline{T_{jl}^r} - T_{rk}^j \overline{T_{rl}^i} - T_{ri}^l \overline{T_{rj}^k}$$ where r is summed through and $h_{ , i} = e_i(h)$ and $h_{ ,\bar i} = \bar e_i(h)$. \end{theorem*} Now we recall the two norms that measure how much the metric fails to be K\"ahler. We define $\| R^h-R \|^2_\star$ as: $$\| R^h-R \|^2_\star = \sum_{i,j,k,l}| R^h_{i \bar j k \bar l} - R_{i \bar j k \bar l} |^2 + 2\sum_{i,j,k,l}| R_{i j \bar k \bar l} |^2 $$ Recall that $R^h_{XY \bar Z \bar W} = 0$ by Gray's theorem so this can be thought of as a norm of the differences of Riemannian and Hermitian curvatures. Also, we define $\| R^h-R \|^2_{\star \star}$ in the following way: $$\| R^h-R \|^2_{\star \star} = \sum_{i,j,k,l}| R_{i j k \bar l} |^2 $$ Since $R^h_{XYZ \bar W} = 0$, the above notation is meaningful. The next theorem shows how $|R^h-R|_\star$ measures how much a metric fails to be K\"ahler. Recall that one possible definition of the K\"ahler condition is that the torsion identically vanishes. \begin{theorem*} The following inequalities hold pointwise: \\ $||T||^2 \leq |R^h-R|_\star$ and $||\eta||^2 \leq |R^h-R|_\star$ \end{theorem*} \begin{proof} From equation 41 of ~\cite{YZ}, \begin{eqnarray*} \frac12(R^h_{X \bar X Y \bar Y} + R^h_{Y \bar Y X \bar X}) - R_{X \bar Y Y \bar X} & = & \sum_{k} ( |T_{XY}^k|^2 + 2Re(T_{kY}^Y \overline{T_{kX}^X})) \end{eqnarray*} We also have that $R_{XY \bar X \bar Y} = R_{X \bar X Y \bar Y}- R_{X \bar Y Y \bar X}$. Therefore, we can rewrite the left-hand side of the above equation: \begin{eqnarray*} & & \sum_{k} ( |T_{XY}^k|^2 + 2Re(T_{kY}^Y \overline{T_{kX}^X}))\\ & = & \frac12(R^h_{X \bar X Y \bar Y} + R^h_{Y \bar Y X \bar X}) - R_{X \bar X Y \bar Y} - R_{XY \bar X \bar Y} \\ & = & \frac12 ( R^h_{X \bar X Y \bar Y} - R_{X \bar X Y \bar Y}) + \frac12 ( R^h_{Y \bar Y X \bar X} - R_{Y \bar Y X \bar X}) - R_{XY \bar X \bar Y} \\ \end{eqnarray*} We want to gain a better understanding of $\sum_{k=1}^n 2 Re( T_{kX}^X \overline{T_{kY}^Y})$. Choose a unitary frame and let $\sum_{i} T^i_{ki} = \eta_k$. 
\begin{eqnarray*} \sum_{i} \sum_{j} Re(T^i_{ki} \overline{ T^j_{kj} } ) & = & Re( \sum_{i} T^i_{ki} (\sum_{j} \overline{ T^j_{kj}} ) \\ & = & Re( \sum_{i} T^i_{ki} ( \bar \eta_k)) \\ & = & Re( \bar \eta_k \sum_{i} T^i_{ki}) \\ & = & Re( \bar \eta_k \eta_k) \\ & = & |\eta_k|^2 \end{eqnarray*} That is to say, $$\sum_{i} \sum_{j} 2 Re(T^i_{ki} \overline{ T^j_{kj} } ) = 2|\sum_{i} T^i_{ki}|^2 = 2 |\eta_k|^2$$ Thus, we have the following: \begin{eqnarray*} & & \sum_i \sum_{j} \frac12 ( R^h_{i \bar i j \bar j} - R_{i \bar i j \bar j}) + \frac12 ( R^h_{j \bar j i \bar i} - R_{j \bar j i \bar i}) - R_{i j \bar i \bar j} \\ & = & \sum_i \sum_{ j} \sum_{k} ( |T_{ij}^k|^2 + 2Re(T_{ki}^i \overline{T_{kj}^j})) \\ & = & \sum_{k} \sum_i \sum_{ j} ( |T_{ij}^k|^2 + 2Re(T_{ki}^i \overline{T_{kj}^j})) \\ & = & \sum_{k} \left( (\sum_i \sum_{ j} |T_{ij}^k|^2) + 2|\eta_k|^2 \right) \\ & = & ||T||^2 + 2 ||\eta||^2 \end{eqnarray*} This then immediately proves that $||T||^2 \leq \|R^h-R\|_\star$. \end{proof} There are many terms in $\| R^h - R \|_\star$ that are not needed to control the torsion, but we define $|R^h-R|_\star$ as such in view of the next lemma. \begin{lemma} The following inequality holds where $\nabla^{c''}$ is the derivative with respect to the Chern connection of a $(0,1)$ vector: $\| \nabla^{c''} T \| \leq \| R^h-R \|_\star$ \end{lemma} \begin{proof} We use the first equation of Lemma 7 and the following Bianchi identity: \begin{eqnarray*} R_{i \bar j k \bar l} - R_{k \bar j i \bar l} &=& R_{i \bar j k \bar l} + R_{\bar j k i \bar l} \\ & = & - R_{k i \bar j \bar l} = R_{i k \bar j \bar l} \end{eqnarray*} Combining these we find that: $$e_{\bar l}( T_{ik}^j )= (R^h_{i \bar j k \bar l} - R_{i \bar j k \bar l}) - (R^h_{k \bar j i \bar l} - R_{k \bar j i \bar l}) - R_{i k \bar j \bar l} $$ \end{proof} We can also bound the derivative with respect to a $(1,0)$ vector. \begin{lemma} The following inequality holds where $\nabla^{c'}$ is the derivative with respect to the Chern connection of a $(1,0)$ vector: $$| \nabla^{c'} T| \leq C(n) \| R^h-R \|_\star + \| R^h-R \|_{\star \star}$$ \end{lemma} This is an immediate consequence of the equation from Theorem 4 and the following equation from ~\cite{YZ}: $$ T^l_{ij}{}_{, k} = R_{i j k \bar l} + T_{rj}^l T_{ik}^r - T_{ri}^l T_{jk}^r$$ Using a straightforward computation, we can relate the derivatives of the torsion tensor with respect to the Levi-Civita connection to the derivatives of the torsion tensor with respect to the Chern connection and a quadratic expression in torsion. Therefore, we also have the following result: \begin{theorem*} Let $T$ be the torsion tensor and $\nabla T$ the derivative of the torsion tensor with respect to the Levi-Civita connection. Then there exists a constant $C^\prime(n)$ that grows at most linearly in $n$ such that the following inequality holds: $$||\nabla T|| \leq C^\prime(n) \|R^h-R \|_\star + \| R^h -R \|_{\star \star} $$ \end{theorem*} As a consequence to this result, we have the following observation. \begin{theorem*} Let $(M^{2n}, g, J)$ be a compact complex manifold with pluriclosed metric g. 
Consider the following initial value problem $$ \frac{\partial}{\partial t} \omega = \partial \partial^{*} \omega + \overline{ \partial \partial^{*}} \omega + \frac{\sqrt{-1}}{2} \partial \bar \partial \log \det g$$ $$w(0)= g(\cdot, J \cdot)$$ There exists a constant $c(n)$ depending only on $n$ such that there exists a unique solution $g(t)$ for $$t \in [0, \frac{c(n)}{\max(| R^h |_{C^0(g_0)}, | R |_{C^0(g_0)})}]$$ Furthermore, we have existence until either $| R^h |_{C^0(g_0)}$ or $| R |_{C^0(g_0)}$ blow up. \end{theorem*} This follows immediately from Theorem 1.2 in ~\cite{ST}. This shows that pluriclosed flow exists until either the Chern curvature or Riemannian curvature blows up. In the first case, $M$ fails to be a well behaved complex manifold and in the latter, $M$ fails to be a well behaved Riemannian manifold. It should be noted that this observation is weaker than the improved regularity theorem of ~\cite{ST2}, which provides a Bismut-Ricci curvature term which suffices to prove regularity for pluriclosed flow. In future preprints, we will explore these inequalities in much greater depth. One can derive monotonicity results and other structure theorems by leveraging the equations we have used in this paper. However, for the purposes of bounding the principal eigenvalue, what we've done is sufficient. \section{An estimate on the principal eigenvalue of the Complex Laplacian} We now combine the two theorems from the previous section with our estimate from the section before. \begin{theorem*} Let $(M^n, h)$ be a compact Hermitian manifold without boundary with Riemannian curvature satisfying $Ric \geq -(n-1)k$ for $k \geq 0$ and $diam(M) = d$. Consider the complex Laplacian $\square$. Let $u$ satisfy $\square u = \lambda u$ and $K = n^2 (k+\| R-R^h \|_\star + \| R-R^h \|_{\star \star} )d^2$. Then there exists an uniform $C >0$ such that the following estimate holds: $$\lambda \geq \frac{1}{4nd^2} \frac{\left(1 + \sqrt{1+4CK}\right)^2-CK }{\exp \left( 1 + \sqrt{1+4 CK } \right) }$$ \end{theorem*} We can simplify this estimate. There exists an uniform $C >0$ such that: $$\lambda \geq \frac{1}{4n} \frac{\left( \frac{2}{d^2} + 3Cn^2 (k+\| R-R^h \|_\star + \| R-R^h \|_{\star \star} ) \right)}{\exp \left( 1 + \sqrt{1+4 CK } \right) }$$ This is still complicated but it only involves the dimension $n$, the diameter $d$, a lower bound on the Ricci curvature $k$, and a measure of how much the metric fails to be K\"ahler involving only curvature. From the point of view of this estimate, the last term acts in the same way as the Ricci curvature term. In a future preprint, we will explore monotonicity formulae to show that in some ways the Hermitian curvature dominates the Riemannian curvature. For instance, it is well known that the Hermitian scalar curvature dominates the Riemmanian scalar curvature. However, our monotonicity results are not yet strong enough to bound the spectrum solely using the Riemannian curvature. Nonetheless, this may be possible in light of a theorem due to Yang and Zheng, which states that if the Riemannian curvature is Gray-K\"ahler-like and the space is compact, then the metric is balanced. Since balanced metrics satisfy $2 \square = \Delta$ on functions, the two operators have the same spectrum. This provides a condition which only depends on the Riemannian curvature and the diameter, that ensures that spectral geometry of the complex laplacian is the same as that of the Laplace-Beltrame operator. 
However, we cannot hope to control the entire torsion tensor solely using the Riemannian curvature because there exists Riemannian flat metrics that do not have vanishing torsion (such as the tori in \cite{BSV}). Also, the example in ~\cite{YZ} of a non-compact Gray-K\"ahler-like surface that is not K\"ahler shows that Gray-K\"ahler-like alone does not imply balanced. \begin{conjecture*} Given a compact complex manifold $(M^n, h)$, there exists $C$ depending only on the Riemannian geometry such that if $\square u = \lambda u$, then $\lambda \geq C$. \end{conjecture*} Using Theorem 1, this conjecture would be a consequence of the following conjecture. \begin{conjecture*} Given a compact complex manifold $(M^n, h)$, there exists $C$ depending only on the Riemannian geometry such that the $C^1$ norm of $\| \eta \| \leq C$ \end{conjecture*} We suspect that the $C$ in the previous two conjectures can be expressed in terms of the dimension, the diameter and injectivity radius of $M$, an upper and lower bound of the Ricci curvature, the Riemannian curvature tensor, and a $C^1$ bound on the scalar curvature. These may be more accessible in the conformally balanced or conformally K\"ahler case, in which case the analysis done on the Yamabe problem may be useful. We can also use the Riemannian geometry to bound a weaker norm of $\| \eta \|,$ but have not been able to establish bounds on the full $C^1$-norm. One strategy towards proving Conjecture 6 would be to weaken the norm on $\eta$ required to establish bounds on the eigenvalue. We use this result to reprove a result originally due to Gauduchon~\cite{PGa}. \begin{theorem*} Given a compact complex manifold $(M^n, h)$ and let $\| Riem \|$ be the pointwise norm of the Riemannian curvature tensor. Then the torsion one-form $\eta$ satisfies the following inequality: $$ \| \eta \|^2_{L^2} \leq \frac{n^2}{4} \| Riem \|_{L^2}$$ \end{theorem*} \begin{proof} Using the third formula in Lemma 7, if we set $i=k$ and $j=l$ and sum up over both we obtain the following: $$2R_{ij \overline{ij}} = T^j_{ij}{}_{, \bar i} - T^i_{ij}{}_{,\bar j} + 2T_{ij}^r \overline{T_{ij}^r} + T_{ri}^i \overline{T_{rj}^j}+T_{rj}^j \overline{T_{ri}^i}-T_{ri}^j \overline{T_{ri}^j}-T_{rj}^i \overline{T_{rj}^i}$$ This simplifies to $$ R_{ij \overline{ij}} = \eta_{i}{}_{, \bar i} + \eta_i \bar \eta_i $$ where we sum over repeated indices. We consider the following expression \begin{eqnarray*} \bar \partial \partial \omega^{n-1} & = & \bar \partial (-2 \eta \wedge \omega^{n-1})\\ & = & -2 \bar \partial \eta \wedge \omega^{n-1} - 4 \eta \wedge \bar \eta \wedge \omega^{n-1} \\ \end{eqnarray*} Combining these two formulas, we obtain: \begin{eqnarray*} \bar \partial \partial \omega^{n-1} & = & -2 (R_{ij \overline{ij}} - |\eta|^2) \omega^{n} - 4 \eta \wedge \bar \eta \wedge \omega^{n-1} \\ & = & -2 R_{ij \overline{ij}} \omega^{n} - 2 |\eta|^2 \omega^{n} \\ \end{eqnarray*} Integrating this formula, the left hand side is zero whereas the right hand side involves the curvature and the $L^2$ norm of $\eta$. Thus we have the following equation. $$ \| \eta \|^2_{L^2} \leq \| R_{ij \overline{ij}} \|_{L^1}$$ Without knowing the complex structure, there is no way to take the sum on the right hand side. However, we can estimate the term. $$|R_{ij \overline{ij}}|^2 \leq \frac{n^2}{4} \| Riem \|^2$$ Thus, we obtain the desired estimate. This is exactly equivalent to Gauduchon's result, so can be strengthened. 
Gauduchon's result states that $$ \delta \theta + |\theta|^2 = \frac{2n-2}{2n-1} S - 2 \langle W(\omega), \omega \rangle$$ Here, $\theta = -2 (\eta + \bar \eta)$, $\delta$ is the codifferential, $S$ is the scalar curvature, $W$ is the Weyl tensor viewed as an endomorphism of $2$-forms, and $\omega$ is the K\"ahler form. This immediately implies: $$ \| \eta \|^2_{L^2} \leq \int_M \frac{1}{4}( \frac{2n-2}{2n-1} S + 2 |W|) $$ \end{proof} \section{Some special cases} We are able to prove Conjecture 5 in several special cases. \subsection{Globally conformally flat metrics} The first case is where $(M^n, g)$ is globally conformally flat. \begin{theorem} Let $(M^n, g)$ be a compact globally conformally flat Hermitian manifold. Let $K= \inf_{x \in M} Ric~ M$, $k= \sup_{x \in M} Ric~ M$, $R$ be the scalar curvature of $M$, $d$ be the diameter of $M$, and $i$ be the injectivity radius of $M$. If $\lambda_1$ is the principal eigenvalue of the complex Laplacian $\square$, then we have the following estimate: $$\lambda_1 \geq C(d, K, k, n, | \nabla R|, R^2, i)$$ \end{theorem} In order to do this, we note that $\square = \frac12 \Delta + \xi( \nabla)$ where $\xi$ is the Lee form. Furthermore, we note that if $(M^n, g)$ is conformally flat (where $e^{2f} g$ is flat), then it is also conformally balanced. Since, as observed earlier, the Lee form is exact when the metric is conformally balanced, the complex Laplacian is actually a Witten-Laplacian (modulo a factor of $2$) with drift $\nabla e^{-2f}$. In this case, if one has $C^2$ bounds on $e^{-2f}$, one can often obtain lower bounds on the spectrum. However, we will use the bound from this preprint instead of the standard result using the Bakry-Ricci curvature. The reason for doing this is that the bound using Bakry-Ricci curvature can become trivial if the curvature is too negative. Our bound is often much weaker when both apply, but it does not display this type of threshold behavior. Since we will need to use a Harnack-type result to estimate the $C^0$ bounds on $f$ on the manifold and then the conformal structure to obtain $C^2$ bounds, we obtain weak bounds that interact poorly with cutoff thresholds. We start by proving a Harnack inequality for $(M^n, g)$ for the linear equation $\Delta u = fu$. This proof is completely standard ~\cite{SY} and we provide it in detail only to note explicitly which quantities are used for the estimate. \subsection{A Harnack inequality} \begin{lemma} Suppose $u$ is a positive function satisfying $\Delta u = fu$ on $(M^n, g)$. Then $\sup u \leq C_1 \inf u$ where $C_1$ depends only on $Ric ~ M, n, diam, i, f^2,$ and $|\nabla f|$. \end{lemma} Pick an orthonormal frame at a point which satisfies the same conditions as before. \begin{eqnarray*} \frac12 \Delta( |\nabla u|^2) &=& \sum u_{ij}^2 + \sum u_i (\Delta u)_i + Ric (\nabla u, \nabla u) \\ &= & \sum u_{ij}^2 + \sum u_i (f u)_i + Ric (\nabla u, \nabla u) \\ &\geq & \sum u_{ij}^2 + \sum f |u_i|^2 + \sum u_i u f_i - K |\nabla u|^2 \\ &\geq & \sum u_{ij}^2 - \frac12(|\nabla u|^2 + u^2) |\nabla f| - (K-f) |\nabla u|^2 \\ \end{eqnarray*} Set $A = (K- f + \frac12 |\nabla f| )$.
Then we have \begin{eqnarray*} \frac12 \Delta( |\nabla u|^2) &\geq& \sum u_{ij}^2 - A |\nabla u|^2 - \frac12 |\nabla f| u^2 \end{eqnarray*} We pick normal coordinates at $x_0$ so that $u_i = 0$ for $i> 1$ and $u_1 = |\nabla u|$ $$\nabla_j (|\nabla u|) = \nabla_j \sqrt{\sum u_i^2} = \frac{ \sum u_i u_{ij}}{|\nabla u|} = u_{1j}$$ So, $$|\nabla (|\nabla u|)|^2 = \sum u_{1j}^2$$ $$|\Delta (|\nabla u|)^2| =2 |\nabla u| \Delta (|\nabla u|)+ 2 |\nabla (|\nabla u|)|^2 $$ \begin{eqnarray*} |\nabla u| \Delta (|\nabla u|)+ \sum u_{1j}^2 &\geq& \sum u_{ij}^2 - A |\nabla u|^2 - \frac12 |\nabla f| u^2 \end{eqnarray*} This implies \begin{eqnarray*} |\nabla u| \Delta (|\nabla u|)+ \sum u_{1j}^2 &\geq& \sum u_{ij}^2 - A |\nabla u|^2 - \frac12 |\nabla f| u^2 \end{eqnarray*} Thus, \begin{eqnarray*} |\nabla u| \Delta (|\nabla u|) + A |\nabla u|^2 + \frac12 |\nabla f| u^2 &\geq& \sum u_{ij}^2 - \sum u_{1j}^2 \\ &\geq& \sum_{i \neq 1} u_{i1}^2 + \sum_{i \neq 1} u_{ii}^2 \\ &\geq& \sum_{i \neq 1} u_{i1}^2 + \frac{1}{n-1} \left( \sum_{i \neq 1} u_{ii} \right)^2 \\ \end{eqnarray*} $$\Delta u = \sum u_{ii} = fu$$ Thus, $f u - u_{11} = \sum u_{ii}$ \begin{eqnarray*} ( \sum_{i \neq 1} u_{ii})^2 &=& (f u - u_{11})^2 \\ &\geq& \frac12 u^2_{11} - f^2 u^2\\ \end{eqnarray*} Thus, \begin{eqnarray*} |\nabla u| \Delta (|\nabla u|) + A |\nabla u|^2 + (\frac12 +f^2)u^2 &\geq& \frac{1}{2(n-1)} |\nabla (|\nabla u|)|^2 \\ \end{eqnarray*} Set $B = (\frac12 |\nabla f| +f^2)$ Then, \begin{eqnarray*} |\nabla u| \Delta (|\nabla u|) + A |\nabla u|^2 + Bu^2 &\geq& \frac{1}{2(n-1)} |\nabla (|\nabla u|)|^2 \\ \end{eqnarray*} Then let $\phi = \dfrac{|\nabla u|}{u}$ $$\nabla \phi = \frac{\nabla | \nabla u|}{u} - \frac{\nabla u | \nabla u|}{u^2}$$ $$ | \nabla u| = \phi u$$ \begin{eqnarray*} \Delta (|\nabla u|) &=& u \Delta \phi + \phi \Delta u + 2 \nabla \phi \cdot \nabla u \end{eqnarray*} \begin{eqnarray*} \Delta (\phi u) &=& u \Delta \phi + \phi \Delta u + 2 \nabla \phi \cdot \nabla u \\ &=& u \Delta \phi + \phi f u + 2 \nabla \phi \cdot \nabla u \\ &=& u \Delta \phi + |\nabla u| f + 2 \nabla \phi \cdot \nabla u \\ \end{eqnarray*} Thus, \begin{eqnarray*} \Delta (\phi) &=& \frac{ \Delta (|\nabla u|)}{u} - \frac{2 \nabla \phi \cdot \nabla u}{u} - f \phi \\ &=& \frac{ |\nabla u| \Delta (|\nabla u|)}{|\nabla u| u} - \frac{2 \nabla \phi \cdot \nabla u}{u} - f \phi \\ &\geq& \frac{1}{|\nabla u| u} \left( \frac{1}{2(n-1)} |\nabla (|\nabla u|)|^2 - A |\nabla u|^2 - Bu^2 \right) - \frac{2 \nabla \phi \cdot \nabla u}{u} - f \phi \\ &=& \frac{1}{|\nabla u| u} \frac{1}{2(n-1)} |\nabla (|\nabla u|)|^2 - A \phi - \frac{B}{\phi} - \frac{2 \nabla \phi \cdot \nabla u}{u} - f \phi \\ \end{eqnarray*} Recalling that $A = (K- f + \frac12)$, set $\alpha = (K+ \frac12)$ to get \begin{eqnarray*} \Delta (\phi) &\geq& \frac{1}{|\nabla u| u} \frac{1}{2(n-1)} |\nabla (|\nabla u|)|^2 - \alpha \phi - \frac{B}{\phi} - \frac{2 \nabla \phi \cdot \nabla u}{u} \\ \end{eqnarray*} Let $\epsilon = \frac{1}{n-1} >0$ \begin{eqnarray*} \frac{2 \nabla \phi \cdot \nabla u}{u} &=& (2-\epsilon)\frac{\nabla \phi \cdot \nabla u}{u}+\frac{\epsilon \nabla \phi \cdot \nabla u}{u} \\ &=& (2-\epsilon)\frac{\nabla \phi \cdot \nabla u}{u}+ \epsilon \frac{ |\nabla (|\nabla u|)| | \nabla u|}{u} - \epsilon \frac{ | \nabla u|^3}{u^3} \\ \end{eqnarray*} Then \begin{eqnarray*} \epsilon \frac{ |\nabla (|\nabla u|)| | \nabla u|}{u} & = & \epsilon \frac{ |\nabla (|\nabla u|)| }{ (| \nabla u| u)^{1/2}} \frac{ | \nabla u|^{3/2}}{u^{3/2}} \\ & \leq & \frac{\epsilon}{2}\left( \frac{ |\nabla (|\nabla 
u|)|^2 }{ (| \nabla u| u)} + \frac{ | \nabla u|^{3}}{u^{3}} \right)\\ & = & \frac{1}{2(n-1)}\left( \frac{ |\nabla (|\nabla u|)|^2 }{ (| \nabla u| u)} +\phi^3 \right)\\ \end{eqnarray*} Therefore, \begin{eqnarray*} \Delta (\phi)& \geq & -\alpha \phi - \frac{B}{\phi}+ \frac{1}{2(n-1)} \phi^3 + (2-\epsilon)\frac{\nabla \phi \cdot \nabla u}{u} \\ \end{eqnarray*} Consider the maximum point of $\phi$. At this point, $\nabla \phi =0$ and $\Delta (\phi) \leq 0$ and so multiplying through by $\phi$, we obtain \begin{eqnarray*} 0& \geq & -\alpha \phi^2 - B+ \frac{1}{2(n-1)} \phi^4 + (2-\epsilon)\frac{\nabla \phi \cdot \nabla u \phi}{u} \\ 0& = & -\alpha \phi^2 - B+ \frac{1}{2(n-1)} \phi^4 \\ \end{eqnarray*} Then, we find that $$\phi^2 \leq (n-1)\left( B+\sqrt{B^2 + 2\frac{ \alpha}{n-1}} \right) $$ This is what we need to bound in order to get a Harnack inequality. We can integrate this along the geodesics to get upper and lower bounds on $\log u$ using the diameter of $M$. \subsection{Conformally Flat Geometry} If we set $f =R$, this solves the Yamabe problem in the flat case. If we force the volume to be preserved, we can integrate the result from the lemma to get upper and lower bounds on $u$ using the diameter (since we know that $\sup u > 1$ and $\inf u < 1$ in that case). More directly, this gives us bounds on $\nabla \log u$. Now, let $f = \frac{2}{n-2} \log u$. Then the conformal formula for Ricci curvature shows us that $$\tilde R_{ij} = R_{ij} - (n-2)\left[ \nabla_i\partial_j f - (\partial_i f)(\partial_j f) \right] + \left( \triangle f - (n-2)\|\nabla f\|^2 \right)g_{ij} $$ $$\tilde R = e^{-2f}\left(R + 2(n-1)\triangle f - (n-2)(n-1)\|\nabla f\|^2\right) $$ Since $\tilde g$ is flat, we have \begin{eqnarray*} 0 & = & \tilde R_{ij} - \frac{\tilde R}{2(n-1)}\tilde g_{ij} \\ & = & R_{ij} - (n-2)\left[ \nabla_i\partial_j f - (\partial_i f)(\partial_j f) \right] + \left( \triangle f - (n-2)\|\nabla f\|^2 \right)g_{ij} \\ & & - \left( \frac{R}{2(n-1)} + \triangle f - \frac{(n-2)}{2}\|\nabla f\|^2\right) g_{ij} \\ & = & R_{ij} - \frac{R}{2(n-1)} g_{ij} - (n-2)\left[ \nabla_i\partial_j f - (\partial_i f)(\partial_j f) \right] - (n-2)\|\nabla f\|^2g_{ij} + \frac{(n-2)}{2}\|\nabla f\|^2g_{ij} \end{eqnarray*} Thus we have \begin{eqnarray*} \|- (n-2) \nabla_i \partial_j f \| &= & \|R_{ij} - \frac{R}{2(n-1)} g_{ij} + (n-2)(\partial_i f)(\partial_j f) - \frac{(n-2)}{2}\|\nabla f\|^2g_{ij} \| \\ &\leq & \|R_{ij} - \frac{R}{2(n-1)} g_{ij} \| + \| (n-2)(\partial_i f)(\partial_j f) - \frac{(n-2)}{2}\|\nabla f\|^2g_{ij} \| \\ \end{eqnarray*} Setting $i=j$ (this is what we need in the proof of the lower bound on $\lambda$), and working in normal coordinates, we have \begin{eqnarray*} \|- (n-2) \nabla_i \partial_i f \| &\leq & \|R_{ii} - \frac{R}{2(n-1)} \| + \| (n-2)(\partial_i f)^2 \| + \| \frac{(n-2)}{2}\|\nabla f\|^2\| \\ &\leq & \|R_{ii} - \frac{R}{2(n-1)} \| + \| \frac{3(n-2)}{2}\|\nabla f\|^2\| \end{eqnarray*} The Harnack estimate provides bounds on the last term. Then the bounds on the Ricci and scalar curvature give us bounds on the first term on the right hand side. Now we have the needed $C^2$ bounds on $\log u$. However, in order to get $C^2$ bounds on $e^{-2f}$, we note that $$ \nabla_i e^{-2f} = -2 (\nabla_i f) e^{-2f} $$ $$ \nabla_i \nabla_i e^{-2f} = 4 (\nabla_i f)^2 e^{-2f} - 2 (\nabla_i \nabla_i f) e^{-2f} $$ Thus, with bounds on $|\nabla_X \nabla_X f|$ , $|\nabla_X f|$, and upper and lower bounds on $e^{-2f}$, we obtain $C^2$ bounds on $e^{-2f}$ in terms of $d, K, k, n, |\nabla R|$, and $R^2$. 
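To make the final step explicit, we sketch the bookkeeping (the various factors of $2$ are absorbed into the constants). Writing $\square u = \lambda u$ as $$\Delta u + 2\xi(\nabla u) = 2\lambda u,$$ where, as noted above, the drift one-form $\xi$ is given by $\nabla e^{-2f}$ up to a constant factor, we may apply Theorem 1 with the one-form $2\xi$ and the eigenvalue $2\lambda$. The quantities $\| \xi \|$ and $\| \nabla \xi \|$ appearing in $E$ are then controlled by the $C^1$ and $C^2$ bounds on $e^{-2f}$ just obtained, so the resulting lower bound for $\lambda$ depends only on $d, K, k, n, |\nabla R|, R^2$, and $i$.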
We can feed this into the theorem from the preprint to get lower bounds on $\lambda$ where $\lambda$ is the principal eigenvalue of the complex Laplacian. Note that from the perspective of differential geometry and differential topology, globally conformally flat metrics are well understood. However, such metrics admit a rich moduli of orthogonal complex structures ~\cite{KYZ}, many of which are non-K\"ahler. Without such a result, these metrics form a large obstruction to Conjecture 5. This also illustrates one possible approach to proving the conjecture in greater generality. One would start by proving that all complex structures orthogonal to a particular metric in a conformal class satisfy the estimate and then study how the Ricci curvature is affected by conformal transformations of the metric. The above argument proves the conformal stability part of the result in the conformally Ricci-flat case. This shows is that any estimate on the torsion one-form is conformally stable, in that if one can obtain a $C^1$ estimate on the torsion one-form of a Ricci-flat metric, then one can obtain a $C^1$ estimate on the torsion one form for any conformal deformation of that metric. For non-flat metrics, we only have an $L^2$ estimate on the torsion one-form for all complex structure so this is not yet strong enough to finish the proof. \subsection{$k$-Gauduchon metrics} We are able to establish lower bounds on the spectrum in one other case. The bound is given by an argument by contradiction, so is not effective, but we will present a numeric lower bound in a future preprint. Let $G$ be the Gauduchon curvature. That is $$G = \frac{2n-2}{2n-1}S - \langle W(\omega), \omega \rangle$$ Consider $$\alpha_k = i \partial \bar \partial (\omega^k) \wedge \omega^{n-k-1}$$ Using a slight generalization of the calculation in ~\cite{MT}, we see that $$\alpha_k = \frac{2k \omega^n}{n(n-1)(n-2)} \left( -(n-2) \eta_i,_{\bar i} + (2k-1)|\eta|^2 + (n-k-1)|T|^2 \right) $$ Substituting in for the Gauduchon curvature, we obtain: $$\alpha_k = \frac{2k \omega^n}{n(n-1)(n-2)} \left( -(n-2)G + (2k-n)|\eta|^2 + (n-k-1)|T|^2 \right) $$ If the metric is $k$-Gauduchon, then $0 = i \partial \bar \partial (\omega^k) \wedge \omega^{n-k-1}$. For $k>\frac{n}{2}$, then $$(n-2)G = (2k-n) |\eta|^2 + (n-k-1)|T|^2$$ Since $(2k-n), (n-k-1) >0$, this gives us pointwise bounds on torsion in terms of $G$ alone. \section{A lower bound on the spectrum} \begin{theorem}Let $(M^{2n}, g)$ be a compact Riemannian manifold and $J$ be an orthogonal complex structure which is $k$-Gauduchon for some $k>\frac{n}{2}$. Then the spectrum of the complex Laplacian is bounded below by some constant $C$ depending only on $(M^{2n}, g)$, independent of $J$. \end{theorem} The complex Laplacian on a function can be written as $\frac12 \Delta u + (2\eta +2 \bar \eta)(\nabla u)$ Suppose there were a sequence of complex structures such that $\frac12 \Delta u_i + (2\eta_i +2 \bar \eta_i)(\nabla u_i) = \lambda_i u_i$ with $\lambda_i \to 0$ where $\sup |u_i| = 1$ for all $ i$ and $\int u_i~ dVol=0$. Since $\eta_i$ is bounded in $L^\infty$, it converges in weak $L^p$ to $\eta_\infty$, after possibly passing to a subsequence. Similarly, since $u_i$ is uniformly bounded in $H^{2,p}$ for large $p$, it converges weakly to $u_\infty$ after passing to a further subsequence. Therefore, $\frac12 \Delta u_i$ converges in weak $L^2$ to $\frac12 \Delta u_\infty$. Furthermore, $\nabla u_i$ converges to $\nabla u_\infty$ strongly and $\lambda_i u_i$ converges to 0 uniformly. 
Thus we have, $\frac12 \Delta u_\infty + (2\eta_\infty +2 \bar \eta_\infty)(\nabla u_\infty) = 0$ for $u_\infty$ non-constant (since $\sup |u_\infty| = 1$ and $\int u_\infty~ dVol=0$). This contradicts the strong maximum principle. Since every complex structure is Gauduchon for one metric in the conformal class, if one were able to obtain a $C^2$ estimate on the $\log$ of the conformal factor that makes the K\"ahler form Gauduchon in terms of the Riemannian geometry, this would prove Conjecture 5 in full generality. This result does not give an explicit lower bound in terms of the geometry. In order to do that, one would need to prove a version of the Faber-Krahn inequality with drift. Such a result is known in flat Euclidean space ~\cite{HNR}, and we will prove a numeric result in a following paper.
https://arxiv.org/abs/1906.05914
Uniqueness of bubbling solutions of mean field equations with non-quantized singularities
For singular mean field equations defined on a compact Riemann surface, we prove the uniqueness of bubbling solutions if some blowup points coincide with bubbling sources. If the strength of the bubbling sources at blowup points are not multiple of $4\pi$ we prove that bubbling solutions are unique under non-degeneracy assumptions. This work extends a previous work of Bartolucci, et, al \cite{bart-4}.
\section{Introduction} The main goal of this article is to study the uniqueness property of the following mean field equations with singularities: \begin{equation}\label{m-equ} \Delta_g v+\rho\bigg(\frac{he^v}{\int_M h e^v{\rm d}\mu}-\frac{1}{vol_g(M)}\bigg)=\sum_{j=1}^N 4\pi \alpha_j (\delta_{q_j}-\frac{1}{vol_g(M)}) \quad {\rm in} \ \; M, \end{equation} where $(M,g)$ be a Riemann surface with the metric $g$, $\Delta_g$ is the Laplace-Beltrami operator ($-\Delta_g\ge 0$), $h$ is a positive smooth function on $M$, $q_1,\cdots,q_N$ are distinct points on $M$, $\rho>0,\alpha_j>-1$ are constants, $\delta_{q_j}$ is the Dirac measure at $q_j\in M$. Equation (\ref{m-equ}) is one of the most extensively studied elliptic PDE in the past few decades, partly due to its immense and profound connections with many branches of mathematics and Physics. In conformal geometry, (\ref{m-equ}) represents a metric on M with conic singularity (see \cite{fang-lai,troy,wei-zhang-pacific}). Also it is derived from the mean field limit of point vortices in the Euler flow \cite{caglioti-1,caglioti-2} and serves as a model equation in the Chern-Simons-Higgs theory \cite{spruck-yang,taran-1,y-yang} and in the electroweak theory \cite{ambjorn}, etc. The literature for the study of various form of (\ref{m-equ}) is just too numerous to be listed in any reasonable way. Recently it was found by Lin-Yan \cite{lin-yan-uniq} that the uniqueness property is particularly important for equations with concentration phenomenon. In their work \cite{lin-yan-uniq} they proved the first uniqueness property for bubbling solutions of Chern-Simon-Higgs equation and computed the exact number of solutions in certain special cases. In an important work \cite{bart-4} Bartolucci, et. al, extended Lin-Yan's result for mean field equation (\ref{m-equ}) if the blowup points are not singular sources. Our goal in this article is to further extend the uniqueness property to the case that some singular sources coincide with blowup points. \smallskip To write the main equation in an equivalent form, we invoke the standard Green's function$G(x,p)$: \begin{equation}\label{gf} \left\{\begin{array}{ll} -\Delta_g G(x,p)=\delta_p-1\quad {\rm in}\ \; M \\ \int_{M}G(x,p){\rm d}\mu=0, \end{array} \right. \end{equation} where the volume of $M$ is assumed to be $1$ for convenience. Then it is well known that in a neighborhood of $p$, $G(x,p)$ can be written as $$G(x,p)=-\frac 1{2\pi }\log dist(x,p)+R(x,p)$$ where $dist(x,p)$ is the geodesic distance from $p$ to $x$ for $x$ close to $p$. Using $G(x,p)$ we write (\ref{m-equ}) as \begin{equation}\label{r-equ} \Delta_g w+\rho\bigg(\frac{He^w}{\int_M H e^w{\rm d}\mu}-1\bigg)=0 \quad {\rm in}\ \; M, \end{equation} where \begin{equation}\label{r-sol} w(x)=v(x)+4\pi \sum_{j=1}^N \alpha_j G(x,q_j), \end{equation} and \begin{equation}\label{H1} H(x)=h(x)\prod_{j=1}^N e^{-4\pi\alpha_j G(x,q_j)}. \end{equation} Note that in a local coordinate near $q_j$, \begin{equation}\label{H2} H(x)=h_j(x)|x-q_j|^{2\alpha_j},\quad |x-q_j|\ll 1,\quad 1\leq j\leq N, \end{equation} for some $h_j(x)>0$. We say that $\{v_k\}$ is a sequence of bubbling solutions of (\ref{m-equ}) if the corresponding $w_k$ defined by (\ref{r-sol}) tends to infinity as $k$ goes to infinity. The places that $w_k$ tends to infinity are called blowup points of $v_k$ or $w_k$. In this article we use $p_1,...,p_m$ to denote blowup points. Let $q_1,...,q_N$ be the location of singular sources. If none of $p_1,...p_m$ is a singular source, Bartolucci, et. 
al have obtained the uniqueness of bubbling solutions in \cite{bart-4}. Thus in this article we consider two cases: either all blowup points are singular sources, or only some of the blowup points coincide with singular sources. In more precise terms, let \begin{equation}\label{pq} \left\{\begin{array}{ll} p_j=q_j\quad &{\rm if} \ 1\leq j \leq \tau, \\ p_j \notin \{q_1,\cdots,q_N\}\quad &{\rm if} \ \tau+1\leq j\leq m, \end{array} \right. \end{equation} for some $1\le \tau\le m$. Thus if $\tau=m$ all blowup points are singular sources, while if $1\le \tau<m$ some blowup points are singular sources and some are not. Let $4\pi\alpha_j$ be the strength of the singular source at $p_j$, so we have $\alpha_j=0$ if $j>\tau$. Since the largest $\alpha_j$ matters the most, we label the points so that the first $t$ of them have this largest strength: \begin{equation} \alpha_1=\cdots =\alpha_t>\alpha_l, \quad l\geq t+1, \quad \mbox{where } 1\le t\le \tau. \end{equation} It is well known that equation (\ref{r-equ}) is the Euler-Lagrange equation of the variational functional $$I_{\rho}(w)=\frac 12 \int_M |\nabla w|^2+\rho\int_Mw-\rho \log \int_M He^w, $$ for $w\in H^1(M)$. Since adding a constant to any solution of (\ref{r-equ}) certainly gives another solution, the space of solutions for (\ref{r-equ}) is taken to be the set of all $H^1(M)$ functions with average equal to $0$. The discussion of the variational structure of (\ref{r-equ}) can be found in \cite{machiodi-1}. \smallskip To state the main results we use the following notation: \begin{align} &G_j^*(x)=8\pi (1+\alpha_j)R(x,p_j)+8\pi \sum_{l\neq j}^{1,\cdots,m}(1+\alpha_l)G(x,p_l), \label{G_j*} \\ &L(\mathbf{p})=\sum_{j=1}^t \big[\Delta \log h(p_j)+\rho_*-N^*-2K(p_j)\big] (h_j(p_j))^{\frac{1}{1+\alpha_1}}e^{\frac{G_j^*(p_j)}{1+\alpha_1}}, \label{L} \\ &D(\mathbf{p})= \begin{pmatrix} \nabla(\log h_1+G_1^*)(p_1) \\ \cdots \\ \nabla(\log h_t+G_t^*)(p_t) \end{pmatrix} , \label{D} \end{align} where $h_j$ is defined in (\ref{H2}), and \begin{equation*} \rho_*=8\pi\sum_{j=1}^m(1+\alpha_j),\quad N^*=4\pi\sum_{j=1}^N\alpha_j. \end{equation*} Our first result is for the case when all blowup points are singular sources: \begin{thm}\label{main-theorem} Let $v_k^{(1)}$ and $v_k^{(2)}$ be two sequences of bubbling solutions of (\ref{m-equ}) with $\rho_k^{(1)}=\rho_k=\rho_k^{(2)}$ and $\alpha_j\in\mathbb{R}^+\setminus\mathbb{N}\,(1\leq j\leq m)$. If $L(\mathbf{p})\neq0$ and $D(\mathbf{p})=0$, then $v_k^{(1)}=v_k^{(2)}$ for $k$ large enough. \end{thm} Note that we use $\mathbb N$ to denote the set of positive integers. The assumption that $\alpha_j\in \mathbb{R}^+\setminus\mathbb{N}$ implies that all blowup points are singular sources. It is also essential to require $\alpha_j$ to be a non-integer, since quantized singular sources (i.e. when the strength is an integer multiple of $4\pi$) exhibit a non-simple blowup phenomenon \cite{kuo-lin,wei-zhang-19} that has to be studied in a separate work in the future. The assumption on $D({\bf p})$ is also very interesting. It is well known that if $p$ is not a singular source, then $D({\bf p})$ vanishes at a very fast rate at such a regular blowup point (\cite{gluck,chen-lin-sharp}). \medskip Our second main result is about the uniqueness of bubbling solutions when some blowup points are non-quantized singular sources and some are regular points.
So in this case we require $1\le \tau<m$, and for $(x_{\tau+1},\cdots,x_m)\in M\times\cdots \times M$ we define \begin{equation}\label{f*} f^*(x_{\tau+1},\cdots,x_m)=\sum_{j=\tau+1}^m\big[\log h(x_j)+4\pi R(x_j,x_j)\big]+4\pi \sum_{l\neq j}^{\tau +1,\cdots,m}G(x_l,x_j). \end{equation} It is well known that $(p_{\tau+1},\cdots,p_m)$ is a critical point of $f^*$. \begin{thm}\label{main-theorem-2} Let $v_k^{(1)}$ and $v_k^{(2)}$ be two sequences of bubbling solutions of (\ref{m-equ}) with $\rho_k^{(1)}=\rho_k=\rho_k^{(2)}$ and $0\leq\alpha_j<1\,(1\leq j\leq m)$. Suppose $1\le \tau<m$, $L(\mathbf{p})\neq0$, $D(\mathbf{p})=0$ and $\det \big(D^2f^*(p_{\tau+1},\cdots,p_m)\big)\neq 0$; then $v_k^{(1)}=v_k^{(2)}$ for $k$ large enough. \end{thm} The notation $D^2f^*$ in Theorem \ref{main-theorem-2} stands for the Hessian tensor field on $M$. Theorems \ref{main-theorem} and \ref{main-theorem-2} are clearly extensions of the main theorem in \cite{bart-4}, where the uniqueness of bubbling solutions around regular blowup points is established. Here in our work, the assumptions on $L(\mathbf{p})$ and $D(\mathbf{p})$ are only placed on the singular sources with the strongest strength. In addition to its importance for applications, the proof of the main theorems requires extremely delicate local analysis, just like the argument in \cite{bart-4}. Our argument relies heavily on the result of the second author in \cite{zhang2}, Chen-Lin's refined estimates in \cite{chen-lin,chen-lin-deg-2}, and the arguments used by Lin-Yan \cite{lin-yan-uniq} and Bartolucci et al. \cite{bart-4}. Even though the outline of our paper is similar to those used in \cite{lin-yan-uniq,bart-4}, we have to establish accurate estimates for certain terms in an iterative manner. The proofs of Theorems \ref{main-theorem} and \ref{main-theorem-2} can also be applied to the following locally defined Dirichlet boundary value problem: let $\Omega$ be an open and bounded domain in $\mathbb{R}^2$ with regular boundary $\partial\Omega\in C^2$, and let $v$ be a solution of \begin{equation}\label{equ-flat} \left\{\begin{array}{lll} \Delta v+\rho \frac{he^v}{\int_{\Omega} h e^v{\rm d}x}=\sum_{j=1}^N 4\pi \alpha_j \delta_{q_j} &{\rm in} \;\ \Omega, \\ v=0 &{\rm on} \;\ \partial\Omega, \end{array} \right. \end{equation} where $h>0$ is a $C^1$ function in $\Omega$, $q_1,\cdots,q_N$ are distinct points in $\Omega$, and $\rho>0$, $\alpha_j>0$ are constants. Let $\{v_k\}$ be a sequence of solutions to (\ref{equ-flat}) with $\rho=\rho_k$. We say \begin{equation}\label{blowup-flat} v_k \ {\rm blows} \ {\rm up} \ {\rm at} \ p_j\in\Omega,\quad 1\leq j\leq m, \end{equation} if $\rho_k \frac{he^{v_k}}{\int_{\Omega} h e^{v_k}{\rm d}x}\rightharpoonup8\pi\sum_{j=1}^m(1+\alpha_j)\delta_{p_j}$ in $\Omega$ in the sense of measure, where $\alpha_j=0$ if $p_j\notin\{q_1,\cdots,q_N\}$. Similarly to the notation in the first part, we assume there exist $1\leq t\leq\tau\leq m$ such that $\alpha_1=\cdots=\alpha_t>\alpha_i$ for $i\ge t+1$, and $\alpha_{\tau+1}=\cdots=\alpha_m$. Let $G_{\Omega}$ be the Green's function defined by \begin{equation*} \left\{\begin{array}{lll} -\Delta G_{\Omega}(x,p)=\delta_{p} &{\rm in} \;\ \Omega, \\ G_{\Omega}(x,p)=0 &{\rm on} \;\ \partial\Omega, \end{array} \right. \end{equation*} and let $R_{\Omega}(x,p)=G_{\Omega}(x,p)+\frac{1}{2\pi}\log |x-p|$ be the regular part of $G_{\Omega}(x,p)$.
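For orientation, we recall the standard example (not needed in what follows): when $\Omega$ is the unit disc and points of $\mathbb{R}^2$ are identified with complex numbers, $$G_{\Omega}(x,p)=\frac{1}{2\pi}\log\frac{|1-\bar{p}x|}{|x-p|},\qquad R_{\Omega}(x,p)=\frac{1}{2\pi}\log|1-\bar{p}x|,$$ so that in particular $R_{\Omega}(p,p)=\frac{1}{2\pi}\log(1-|p|^2)$.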
In order to state the uniqueness results for (\ref{equ-flat}) we denote $N^*=4\pi\sum_{j=1}^m\alpha_j$ and
\begin{align*}
&G_{j,\Omega}^*(x)=8\pi (1+\alpha_j)R_{\Omega}(x,p_j)+8\pi \sum_{l\neq j}^{1,\cdots,m}(1+\alpha_l)G_{\Omega}(x,p_l), \\
&L_{\Omega}(\mathbf{p})=\sum_{j=1}^t \big[\Delta \log h(p_j)-N^*\big] (h_j(p_j))^{\frac{1}{1+\alpha_1}}e^{\frac{G_{j,\Omega}^*(p_j)}{1+\alpha_1}}, \\
&D_{\Omega}(\mathbf{p})=
\begin{pmatrix}
\nabla(\log h_1+G_{1,\Omega}^*)(p_1) \\
\cdots \\
\nabla(\log h_t+G_{t,\Omega}^*)(p_t)
\end{pmatrix}
.
\end{align*}
Then we have the following result similar to Theorem \ref{main-theorem}.
\begin{thm}\label{main-theorem-3}
Let $v_k^{(1)}$ and $v_k^{(2)}$ be two sequences of solutions of (\ref{equ-flat}) satisfying (\ref{blowup-flat}), with $\rho_k^{(1)}=\rho_k=\rho_k^{(2)}$ and $\alpha_j\in\mathbb{R}^+\setminus\mathbb{N}$ $(1\leq j\leq m)$. If $L_{\Omega}(\mathbf{p})\neq0$ and $D_{\Omega}(\mathbf{p})=0$, then $v_k^{(1)}=v_k^{(2)}$ for $k$ large enough.
\end{thm}
If the set of blowup points is a mixture of non-quantized singular sources and regular points, we also have a uniqueness result. Let
\begin{equation*}
f_{\Omega}^*(x_{\tau+1},\cdots,x_m)=\sum_{j=\tau+1}^m\Big[\log h(x_j)+4\pi R_{\Omega}(x_j,x_j)+4\pi \sum_{l\neq j}^{\tau +1,\cdots,m}G_{\Omega}(x_l,x_j)\Big],
\end{equation*}
and let $D^2f_{\Omega}^*$ denote the Hessian of $f_{\Omega}^*$. In this case $(p_{\tau+1},\cdots,p_m)$ is a critical point of $f_{\Omega}^*$, and we obtain the following result.
\begin{thm}\label{main-theorem-4}
Let $v_k^{(1)}$ and $v_k^{(2)}$ be two sequences of solutions of (\ref{equ-flat}) satisfying (\ref{blowup-flat}), with $\rho_k^{(1)}=\rho_k=\rho_k^{(2)}$ and $0\leq\alpha_j<1$ $(1\leq j\leq m)$. If $L_{\Omega}(\mathbf{p})\neq0$, $D_{\Omega}(\mathbf{p})=0$ and $\det \big(D^2f_{\Omega}^*(p_{\tau+1},\cdots,p_m)\big)\neq 0$, then $v_k^{(1)}=v_k^{(2)}$ for $k$ large enough.
\end{thm}
When we were in the final stage of writing this article, we found that Bartolucci et al. \cite{bart-4-2} had posted an article on arXiv about the same topic. Their theorem is a special case of our results, and both works were carried out independently.

\smallskip

The organization of this paper is as follows. Section \ref{preliminary} is dedicated to notations and preliminary sharp estimates for bubbling solutions of equation (\ref{m-equ}). In section \ref{difference} we consider the difference between two bubbling sequences and establish estimates both near each blowup point and away from all blowup points. In section \ref{anal-pohozaev} we derive some Pohozaev-type identities and evaluate each term carefully. These Pohozaev identities play a key role in the proof of the main theorems. Finally, the proof of Theorem \ref{main-theorem} is given in section \ref{pf-uni-1} and that of Theorem \ref{main-theorem-2} in section \ref{pf-uni-2}. At the end of section \ref{pf-uni-2} we give a brief sketch of the proofs of Theorems \ref{main-theorem-3} and \ref{main-theorem-4}, based on well known facts \cite{ma-wei}.

\section{Preliminary Estimates}\label{preliminary}

Since the proof of the main theorems requires very delicate analysis, in this section we list some estimates established in \cite{chen-lin-sharp,chen-lin,zhang1,zhang2}. Let $w_k$ be a sequence of solutions of (\ref{r-equ}) with $\rho =\rho_k$. Suppose that $w_k$ blows up at $m$ points $\{p_1,\cdots,p_m\}$ as stated in section one.
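Before describing the bubbling profile of $w_k$ in detail, it is convenient to recall the model entire solutions of the singular Liouville equation in the plane; they will reappear as the local profiles $U_{k,j}$ in (\ref{U_kj}) below. For any $\alpha>-1$, $\mu>0$ and $\lambda\in\mathbb{R}$, a direct computation shows that the radial function
\begin{equation*}
U(x)=\lambda-2\log\Big(1+\frac{\mu e^{\lambda}}{8(1+\alpha)^2}|x|^{2(1+\alpha)}\Big)
\end{equation*}
satisfies
\begin{equation*}
\Delta U+\mu|x|^{2\alpha}e^{U}=0 \quad {\rm in} \ \; \mathbb{R}^2,
\qquad \int_{\mathbb{R}^2}\mu|x|^{2\alpha}e^{U}{\rm d}x=8\pi(1+\alpha),
\end{equation*}
which explains the local mass $8\pi(1+\alpha_j)$ appearing in (\ref{local-cov}) below.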
To describe the bubbling profile of $w_k$ near $p_j$, we set \begin{equation}\label{n-sol} u_k=w_k-\log\bigg(\int_M He^{w_k}{\rm d}\mu \bigg) \end{equation} and write the equation for $u_k$ as \begin{equation}\label{n-equ} \Delta_g u_k+\rho_k(He^{u_k}-1)=0\quad {\rm in} \ \; M. \end{equation} It is easy to observe from the definition of $u_k$ that $$\int_{M}He^{u_k}{\rm d}\mu=1.$$ From previous works of Liouville equations ( for example \cite{chen-lin-sharp} ), \begin{equation}\label{local-cov} u_k-\bar{u}_k \ \to \sum_{j=1}^m 8\pi(1+\alpha_j)G(x,p_j) \quad {\rm in} \ \; {\rm C}_{loc}^2(M\backslash \{p_1,\cdots,p_m\}) \end{equation} where $\bar{u}_k$ is the average of $u_k$ on $M$: $$\bar{u}_k=\int_{M}u_k{\rm d}\mu.$$ For the convenience later we fix $r_0>0$ small and $M_j\subset M, 1\leq j\leq m$ such that \begin{equation}\label{Mj} M=\bigcup_{j=1}^m \overline{M}_j;\quad M_j\cap M_l=\varnothing,\ {\rm if}\ j\neq l;\quad B(p_j,3r_0)\subset M_j, \ j=1,\cdots,m. \end{equation} According to this definition $M_1=M$, if $m=1$. Then we use $\lambda_{k,j}$ to denote \begin{equation}\label{lambda_kj} \lambda_{k,j}=\left\{ \begin{array}{lcl} u_k(p_j) && {\rm if}\ \,\alpha_j\neq 0, \\ u_k(p_{k,j}) \mathrel{\mathop:}=\max_{B(p_j,r_0)}u_k && {\rm if}\ \,\alpha_j= 0. \end{array} \right. \end{equation} and let $U_{k,j}$ be a global solution of \begin{equation}\label{U_kj-equ} \Delta U_{k,j}+\rho_kh_j(p_{k,j})|x-p_{k,j}|^{2\alpha_j}e^{U_{k,j}}=0 \quad {\rm in} \ \; \mathbb{R}^2 \end{equation} with the expression ($U_{k,j}$ is called a standard bubble): \begin{equation}\label{U_kj} U_{k,j}(x)=\lambda_{k,j}-2\log\Big(1+\frac{\rho_k h_j(p_{k,j})}{8(1+\alpha_j)^2}e^{\lambda_{k,j}}|x-p_{k,j}|^{2(1+\alpha_j)}\Big). \end{equation} It is well-known \cite{li-cmp,Bartolucci-Chen-Lin-T} that $u_k$ can be approximated by the standard bubbles $U_{k,j}$ near $p_j$ with $O(1)$ error: \begin{equation}\label{standard-bubble} \big|u_k(x)-U_{k,j}(x)\big| \leq C, \quad x\in B(p_j,r_0). \end{equation} As a consequence, \begin{equation}\label{lambda-ij} |\lambda_{k,i}-\lambda_{k,j}|\leq C, \quad 1\leq i,j \leq m. \end{equation} for some $C$ independent of $k$. Furthermore, it is established in \cite{bart-taran-mass} that $\rho_*=\lim_{k\to +\infty}\rho_k$. \medskip Later, sharper estimates were obtained in \cite{zhang2,chen-lin} for $1\leq j \leq \tau$ and in \cite{chen-lin-sharp,zhang1,gluck} for $\tau+1\leq j\leq m$. In order to apply those estimates, we might consider the equation in terms of the flat metric and introduce the following notations. \medskip In $B(p_j,r_0)$, the flat metric is ${\rm d}s^2=e^{\phi_j}\big(({\rm d}x_1)^2+({\rm d}x_2)^2\big)$ with $\phi_j$ satisfying \begin{equation}\label{phi-equ} \left\{\begin{array}{lcl} \Delta \phi_j+2Ke^{\phi_j}=0, && {\rm in} \ \; B(p_j,r_0), \\ \phi_j(0)=|\nabla \phi_j(0)|=0, && \quad \end{array} \right. \end{equation} where $0$ is the coordinate of $p_j$, $\Delta =\sum _{i=1}^{2}\frac{\partial^2}{\partial x_i^2}$. In this local coordinate, equation (\ref{n-equ}) is equivalent to \begin{equation}\label{flat-equ} \Delta u_k+\rho_ke^{\phi_j}(He^{u_k}-1)=0\quad {\rm in} \ \; B(p_j,r_0). \end{equation} If we denote $\tilde{h}_j=h_je^{\phi_j}$, then (\ref{flat-equ}) can be written as follows: \begin{equation}\label{flat-equ-2} \Delta u_k+\rho_k\tilde{h}_j|x-p_j|^{2\alpha_j}e^{u_k}-\rho_ke^{\phi_j}=0\quad {\rm in} \ \; B(p_j,r_0). 
\end{equation} To state the more refined asymptotic analysis we introduce the following notations: \begin{equation}\label{loc-mass} \rho_{k,j}=\int_{B(p_{k,j},r_0)}\rho_kHe^{u_k}{\rm d}\mu,\quad 1\leq j\leq m, \end{equation} \begin{equation}\label{sigma1} \sigma_k(x)=u_k(x)-\bar{u}_k-\sum_{j=1}^m\rho_{k,j} G(x,p_{k,j}), \quad x\in M\backslash \bigcup_{j=1}^mB(p_{k,j},\frac{r_0}{2}), \end{equation} \begin{equation}\label{G_kj} G_{k,j}(x)=\rho_{k,j}R(x,p_{k,j})+\sum_{l\neq j}^{1,\cdots,m}\rho_{k,l}G(x,p_{k,l}),\quad x\in B(p_{k,j},r_0), \end{equation} where $R(x,p_{k,j})$ is the regular part of $G(x,p_{k,j})$. Finally for $x\in B(p_{k,j},r_0)$, set \begin{equation*} \tilde{u}_{k,j}(x)=u_k(x)-\big(G_{k,j}(x)-G_{k,j}(p_{k,j})\big), \end{equation*} \begin{equation}\label{eta1} \eta_{k,j}(x)=\tilde{u}_{k,j}(x)-U_{k,j}(x). \end{equation} \subsection{Sharper estimates} \quad \medskip If $\alpha_j\in \mathbb{R}^+\setminus\mathbb{N}$, in order to obtain the refined estimates of the bubbling solutions, the second author considered the harmonic function $\psi_{k,j}$ in \cite{zhang2}, which satisfies \begin{equation}\label{psi-equ} \left\{\begin{array}{lcl} \Delta \psi_{k,j}=0 && {\rm in} \ \; B(p_{k,j},r_0), \\ \psi_{k,j} =\tilde{u}_{k,j}-\frac{1}{2\pi r_0}\int_{\partial B(p_{k,j},r_0)}\tilde{u}_{k,j} {\rm d}s && {\rm on} \ \; \partial B(p_{k,j},r_0). \end{array} \right. \end{equation} With the help of $\psi_{k,j}$, Zhang and Chen-Lin proved the following sharp estimate in \cite{zhang2}. \begin{thmA}\label{Theorem zhang}\cite{zhang2,chen-lin} For $x\in B(p_{k,j},r_0)$, it holds that \begin{equation}\label{zhang} \begin{split} \eta_{k,j}(x)=&\psi_{k,j}(x)-\frac{2(1+\alpha_j)}{\alpha_j}\frac{\langle a,x-p_{k,j}\rangle}{1+\frac{\rho_k h_j(p_{(k,j)})}{8{(1+\alpha_j)^2}}e^{\lambda_{k,j}}|x-p_{k,j}|^{2(1+\alpha_j)}}\\ & +d_j\log \big(2+e^{\frac{\lambda_{k,j}}{2(1+\alpha_j)}}|x-p_{k,j}|\big)e^{-\frac{\lambda_{k,j}}{1+\alpha_j}}+O(e^{-\frac{\lambda_{k,j}}{1+\alpha_j}}), \end{split} \end{equation} where $a=\nabla(\log h_j+G_{k,j})(p_{k,j}) \in \mathbb{R}^2$ and \begin{equation*} d_j=\frac{\pi}{(1+\alpha_j)\sin \frac{\pi}{1+\alpha_j}}\Big(\frac{8(1+\alpha_j)^2}{\rho_kh_j(p_{k,j})}\Big)^{\frac{1}{1+\alpha_j}}\big[\Delta\log h(p_j)+\rho_*-N^*-2K(p_j)\big]. \end{equation*} \end{thmA} In \cite{chen-lin},the following estimates for $\psi_{k,j}$ and $\sigma_k$ are established: \begin{thmA}\label{Theorem chen-lin1}\cite{chen-lin} \begin{equation}\label{psi-est} |\psi_{k,j}(x)|=O(e^{-\frac{\lambda_{k,j}}{1+\alpha_1}}),\quad x\in B(p_{k,j},r_0). \end{equation} \begin{equation}\label{sigma2} |\sigma_k(x)|+|\nabla\sigma_k(x)|=O(e^{-\frac{\lambda_{k,1}}{1+\alpha_1}}),\quad x\in M\backslash \big(\bigcup_{j=1}^mB(p_{k,j},\frac{r_0}{2})\big). \end{equation} \end{thmA} Then, by Theorem \ref{Theorem zhang} and Theorem \ref{Theorem chen-lin1}, we have \begin{equation}\label{eta2} |\eta_{k,j}(x)|=O(e^{-\frac{\lambda_{k,j}}{2(1+\alpha_j)}}+e^{-\frac{\lambda_{k,j}}{1+\alpha_1}}),\quad x\in B(p_{k,j},r_0),\quad 1\leq j\leq \tau. \end{equation} For the case $\tau+1\leq j \leq m $, the estimate for $\eta_{k,j}$, established in \cite{chen-lin-sharp}\cite{zhang1}\cite{gluck}, is \begin{equation}\label{eta3} |\eta_{k,j}(x)|=O(\lambda_{k,j}e^{-\lambda_{k,j}}),\quad x\in B(p_{k,j},r_0),\quad \tau+1\leq j\leq m. 
\end{equation}
Moreover, according to the proof of Theorem 3.5 in \cite{chen-lin}, the following estimate holds:
\begin{equation}\label{uk-ave-1}
\bar{u}_k+\lambda_{k,j}+2\log\dfrac{\rho_kh_j(p_{k,j})}{8(1+\alpha_j)^2}+G_{k,j}(p_{k,j})+\frac{d_j}{2(1+\alpha_j)}\lambda_{k,j} e^{-\frac{\lambda_{k,j}}{1+\alpha_1}}=O(e^{-\frac{\lambda_{k,j}}{1+\alpha_1}}).
\end{equation}
As a consequence, we have
\begin{equation}\label{uk-ave-2}
\lambda_{k,j}-\lambda_{k,1}=2\log\dfrac{(1+\alpha_j)^2h_1(p_{k,1})}{(1+\alpha_1)^2 h_j(p_{k,j})}+G_{k,1}(p_{k,1})-G_{k,j}(p_{k,j})+O(e^{-\frac{\lambda_{k,1}}{2(1+\alpha_1)}}).
\end{equation}
For the differences between $\rho_{k,j}$ and $8\pi(1+\alpha_j)$, and between $\rho_k$ and $\rho_*$, the following estimates have also been proved in \cite{chen-lin,chen-lin-sharp}.
\begin{thmA}\label{Theorem chen-lin2}\cite{chen-lin,chen-lin-sharp}
\begin{align}
&\rho_{k,j}-8\pi(1+\alpha_j)=2\pi d_je^{-\frac{\lambda_{k,j}}{1+\alpha_j}}+O\big(e^{-\frac{1+\gamma}{1+\alpha_1}\lambda_{k,1}}\big), && 1\leq j\leq \tau, \label{rho-kj-1} \\
&\rho_{k,j}-8\pi=O\big(\lambda_{k,j}e^{-\lambda_{k,j}}\big), &&\tau+1\leq j\leq m, \label{rho-kj-2} \\
&\rho_k-\rho_*=L^*e^{-\frac{\lambda_{k,1}}{1+\alpha_1}}+O\big(e^{-\frac{1+\gamma}{1+\alpha_1}\lambda_{k,1}}\big),&& \label{rho-k}
\end{align}
with fixed $\gamma\in (0,\min\{\alpha_1,\frac{1}{2}\})$ small and
$$L^*=\dfrac{2\pi^2}{(1+\alpha_1)\sin\frac{\pi}{1+\alpha_1}}e^{-\frac{G_1^*(p_{1})}{1+\alpha_1}}\Big(\frac{8(1+\alpha_1)^2}{\rho_* h_1(p_1)^2}\Big)^{\frac{1}{1+\alpha_1}}L(\mathbf{p}).$$
\end{thmA}

\smallskip

If $\tau<m$, as in \cite{chen-lin-sharp} and \cite{bart-4}, the non-degeneracy condition $\det\big(D^2f^*(p_{\tau+1},\cdots,p_m)\big)\neq0$ leads to
\begin{equation}\label{p_kj-location}
|p_{k,j}-p_j|=O(\lambda_{k,j}e^{-\lambda_{k,j}}),\quad \tau+1\leq j\leq m.
\end{equation}
Furthermore, in \cite{chen-lin} the authors showed that
\begin{equation}\label{first-deriv-est}
\nabla(\log h+G_j^*)(p_{k,j})=O(e^{-\frac{\lambda_{k,j}}{1+\alpha_1}}),\quad \tau+1\leq j\leq m.
\end{equation}

\subsection{The kernel of the linearized equations} \quad

\medskip

In the proof of the uniqueness, we need some facts about the linearized equation after an appropriate rescaling.

\medskip

For $\alpha\in \mathbb{R}^+\setminus\mathbb{N}$, Chen-Lin proved the following lemma in \cite{chen-lin}.
\begin{lemA}\label{linear-lem-1}
Suppose $ \alpha>0 $ is not an integer and $ \varphi $ is a $C^2$-function that satisfies
\begin{equation*}
\left\{\begin{array}{ll}
\Delta \varphi+|x|^{2\alpha}e^{U_\alpha}\varphi=0\quad & {\rm in} \ \; \mathbb{R}^2, \\
|\varphi| \leq (1+|x|)^{\kappa} \quad & {\rm in} \ \;\mathbb{R}^2,
\end{array}
\right.
\end{equation*}
where $ U_{\alpha}(x)=\log\frac{8(1+\alpha)^2}{(1+|x|^{2(1+\alpha)})^2} $ and $\kappa\in(0,1)$. Then there exists some constant $b_0$ such that
\begin{equation*}
\varphi(x)= b_0\frac{1-|x|^{2(1+\alpha)}}{1+|x|^{2(1+\alpha)}}.
\end{equation*}
\end{lemA}
For $\alpha=0$, Chen-Lin proved the following lemma in \cite{chen-lin-sharp}.
\begin{lemA}\label{linear-lem-2}
Let $ \varphi $ be a $ C^2 $ solution of
\begin{equation*}
\left\{\begin{array}{ll}
\Delta \varphi+e^U\varphi=0\quad & {\rm in} \ \;\mathbb{R}^2, \\
|\varphi| \leq c\big(1+|x|\big)^{\kappa} \quad & {\rm in} \ \;\mathbb{R}^2,
\end{array}
\right.
\end{equation*}
where $ U(x)=\log\frac{8}{(1+|x|^2)^2} $ and $ \kappa \in[0,1) $.
Then there exist constants $b_0$, $b_1$, $b_2$ such that \begin{equation*} \varphi= b_0\varphi_0+b_1\varphi_1+b_2\varphi_2, \end{equation*} where \begin{equation*} \varphi_0(x)= \frac{1-|x|^2}{1+|x|^2},\quad \varphi_1(x)= \frac{x_1}{1+|x|^2},\quad \varphi_2(x)= \frac{x_2}{1+|x|^2}. \end{equation*} \end{lemA} \section{The difference between $ u_k^{(1)} $ and $ u_k^{(2)} $}\label{difference} The way we prove the main theorems is by contradiction. So we assume that $ u_k^{(1)} $ and $ u_k^{(2)} $ are two different sequences of solutions to (\ref{r-equ}) with $\rho_k^{(1)}=\rho_k=\rho_k^{(2)}$, and common blowup points located at $p_1,\cdots,p_m$. For $i=1,2$, we use the following notations $$\lambda_{k,j}^{(i)}, u_{k,j}^{(i)}, v_{k,j}^{(i)}, \rho_{k,j}^{(i)}, \bar{u}_k^{(i)}, U_{k,j}^{(i)}, G_{k,j}^{{(i)}}, \psi_{k,j}^{(i)}, \eta_{k,j}^{(i)}, \epsilon_{k,j}^{(i)}, \sigma_k^{(i)}, p_{j}^{(i)}, $$ with obvious interpretations in the context. Finally the following three functions are defined by the difference of $u_1^k$ and $u_2^k$: \begin{align} & \varsigma_k(x)=\dfrac{u_k^{(1)}(x)-u_k^{(2)}(x)}{\parallel u_k^{(1)}-u_k^{(2)}\parallel_{L^{\infty}(M)}}, \label{varsigma} \\ &f_k(x)=\rho_k H(x)\frac{e^{u_k^{(1)}(x)}-e^{u_k^{(2)}(x)}}{\parallel u_k^{(1)}-u_k^{(2)}\parallel_{L^{\infty}(M)}}, \label{f} \\ &c_k(x)=\dfrac{e^{u_k^{(1)}(x)}-e^{u_k^{(2)}(x)}}{u_k^{(1)}(x)-u_k^{(2)}(x)}. \label{c} \end{align} Clearly $\varsigma_k$ satisfies \begin{equation}\label{sigma-equ} \Delta_g\varsigma_k(x)+f_k(x)=\Delta_g\varsigma_k(x)+\rho_kH(x)c_k(x)\varsigma_k(x)=0,\quad x\in M. \end{equation} As the first step of our proof, we give an initial estimate of $ \parallel u_k^{(1)}-u_k^{(2)}\parallel_{L^{\infty}(M)}$ using $L({\bf p})\neq 0$: \begin{lem}\label{est1} Under the assumption of $L(\mathbf{p})\neq 0$, we have \begin{equation}\label{u-est1} \parallel u_k^{(1)}-u_k^{(2)}\parallel_{L^{\infty}(M)}=O(e^{-\frac{\gamma}{1+\alpha_1}\lambda_{k,1}^{(1)}}). \end{equation} \end{lem} \begin{proof}[\textbf{Proof}] \textbf{Step 1.} For $x\in B(p_{k,j},r_0), 1\leq j\leq m$, by (\ref{eta1}) (\ref{eta2}) (\ref{loc-mass}) (\ref{G_kj}) and Theorem \ref{Theorem chen-lin2}, we have \begin{align*} &u_k^{(1)}(x)-u_k^{(2)}(x)\\ =&\,U_{k,j}^{(1)}(x)-U_{k,j}^{(2)}(x)+\eta_{k,j}^{(1)}(x)-\eta_{k,j}^{(2)}(x)+G_{k,j}^{(1)}(x)-G_{k,j}^{(2)}(x)\\ &\,+G_{k,j}^{(1)}(p_{k,j}^{(1)})-G_{k,j}^{(2)}(p_{k,j}^{(2)})\\ =&\,\lambda_{k,j}^{(1)}-\lambda_{k,j}^{(2)}-2\log\Big(1+\frac{\rho_k h_j(p_{k,j}^{(1)})}{8(1+\alpha_j)^2}e^{\lambda_{k,j}^{(1)}}\big|x-p_{k,j}^{(1)}\big|^{2(1+\alpha_j)}\Big)\\ &\,+2\log\Big(1+\frac{\rho_k h_j(p_{k,j}^{(2)})}{8(1+\alpha_j)^2}e^{\lambda_{k,j}^{(2)}}\big|x-p_{k,j}^{(2)}\big|^{2(1+\alpha_j)}\Big)+O\Big(\sum_{i=1}^{2}e^{-\frac{\lambda_{k,1}^{(i)}}{2(1+\alpha_1)}}\Big). \end{align*} Theorem \ref{Theorem chen-lin2} and $L(\mathbf{p})\neq 0$ give rise to \begin{equation*} e^{-\frac{1}{1+\alpha_1}(\lambda_{k,1}^{(1)}-\lambda_{k,1}^{(2)})}=1+O\Big(\sum_{i=1}^{2}e^{-\frac{\gamma}{1+\alpha_1}\lambda_{k,1}^{(i)}}\Big), \end{equation*} which immediately implies \begin{equation}\label{lam-est-1} \lambda_{k,1}^{(1)}-\lambda_{k,1}^{(2)}=O\Big(\sum_{i=1}^{2}e^{-\frac{\gamma}{1+\alpha_1}\lambda_{k,1}^{(i)}}\Big). \end{equation} Then by (\ref{lam-est-1}) and (\ref{uk-ave-2}), what holds for one point is also true at other blowup points: \begin{equation}\label{lam-est-2} \lambda_{k,j}^{(1)}-\lambda_{k,j}^{(2)}=O\Big(\sum_{i=1}^{2}e^{-\frac{\gamma}{1+\alpha_1}\lambda_{k,1}^{(i)}}\Big),\quad 1\leq j\leq m. 
\end{equation}
On the other hand, using (\ref{p_kj-location}) in a direct computation, we have
\begin{align*}
&\log\Big(1+\frac{\rho_k h_j(p_{k,j}^{(1)})}{8(1+\alpha_j)^2}e^{\lambda_{k,j}^{(1)}}\big|x-p_{k,j}^{(1)}\big|^{2(1+\alpha_j)}\Big)-\log\Big(1+\frac{\rho_k h_j(p_{k,j}^{(2)})}{8(1+\alpha_j)^2}e^{\lambda_{k,j}^{(2)}}\big|x-p_{k,j}^{(2)}\big|^{2(1+\alpha_j)}\Big)\\
&=O\big(|\lambda_{k,j}^{(1)}-\lambda_{k,j}^{(2)}|\big)+O\Big(\sum_{i=1}^{2}\lambda_{k,j}^{(i)}e^{-\frac{\lambda_{k,j}^{(i)}}{2}}\Big).
\end{align*}
Thus $u_k^{(1)}$ and $u_k^{(2)}$ are close in the interior of the ball $B(p_{k,j}^{(1)},r_0)$:
\begin{equation}\label{est-1}
\parallel u_k^{(1)}-u_k^{(2)}\parallel_{L^{\infty}(B(p_{k,j}^{(1)},r_0))}=O\Big(\sum_{i=1}^{2}e^{-\frac{\gamma}{1+\alpha_1}\lambda_{k,1}^{(i)}}\Big)=O(e^{-\frac{\gamma}{1+\alpha_1}\lambda_{k,1}^{(1)}}).
\end{equation}
\textbf{Step 2.} For $x\in M\backslash \bigcup_{j=1}^mB(p_{k,j}^{(1)},r_0)$, we first use the Green's representation formula to write $u_k^{(1)}-u_k^{(2)}-\big(\bar{u}_k^{(1)}-\bar{u}_k^{(2)}\big)$ in three parts:
\begin{equation}
\begin{split}
& u_k^{(1)}-u_k^{(2)}-\big(\bar{u}_k^{(1)}-\bar{u}_k^{(2)}\big) \notag\\
=& \int_{M} G(y,x)\rho_kH(y)(e^{u_k^{(1)}(y)}-e^{u_k^{(2)}(y)}){\rm d}\mu(y) \\
=& \sum_{j=1}^{m}\int_{B(p_{k,j}^{(1)},\frac{r_0}{2})} \big(G(y,x)-G(p_{k,j}^{(1)},x)\big)\rho_kH(y)(e^{u_k^{(1)}(y)}-e^{u_k^{(2)}(y)}){\rm d}\mu(y) \\
&+ \sum_{j=1}^{m}G(p_{k,j}^{(1)},x)\int_{B(p_{k,j}^{(1)},\frac{r_0}{2})}\rho_kH(y)(e^{u_k^{(1)}(y)}-e^{u_k^{(2)}(y)}){\rm d}\mu(y) \\
&+ \int_{M\backslash \bigcup_{j=1}^mB(p_{k,j}^{(1)},\frac{r_0}{2})} G(y,x)\rho_kH(y)(e^{u_k^{(1)}(y)}-e^{u_k^{(2)}(y)}){\rm d}\mu(y) \\
=&\mathrel{\mathop:}I_1+I_2+I_3.
\end{split}
\end{equation}
Before we evaluate each of them we recall a few facts. First,
\begin{equation*}
p_{k,j}^{(1)}-p_{k,j}^{(2)}=\left\{\begin{array}{ll}0, \quad \mbox{for}\ 1\le j\le \tau, \\
O(\sum_{i=1}^2\lambda_{k,j}^{(i)}e^{-\lambda_{k,j}^{(i)}}) \quad \mbox{if}\ j>\tau \ \mbox{ (see (\ref{p_kj-location}))}.
\end{array}
\right.
\end{equation*}
Next, for $x\in M\backslash \bigcup_{j=1}^mB(p_{k,j}^{(1)},r_0)$ and $ y\in B(p_{k,j}^{(1)},\frac{r_0}{2}) $,
\begin{equation*}
G(y,x)-G(p_{k,j}^{(1)},x)=\langle\partial_yG(y,x)\big|_{y=p_{k,j}^{(1)}},y-p_{k,j}^{(1)}\rangle+O(|y-p_{k,j}^{(1)}|^2).
\end{equation*}
Then using symmetry, scaling and the closeness of $u_k^{(i)}$ to the standard bubbles, we have
\begin{align*}
&I_1 =\sum_{j=1}^{m}\sum_{i=1}^2\int_{B(p_{k,j}^{(i)},\frac{r_0}{2})}\frac{\langle\partial_yG(y,x)\big|_{y=p_{k,j}^{(i)}},y-p_{k,j}^{(i)}\rangle\rho_k\tilde{h}_j(y)|y-p_{k,j}^{(i)}|^{2\alpha_j}}{\big(1+\frac{\rho_kh_j(p_{k,j}^{(i)})}{8(1+\alpha_j)^2}e^{\lambda_{k,j}^{(i)}}|y-p_{k,j}^{(i)}|^{2(1+\alpha_j)}\big)^2}\\
&\times \Big(1+O(|y-p_{k,j}^{(i)}|)+O(e^{-\frac{\lambda_{k,j}^{(i)}}{2(1+\alpha_j)}})+O(e^{-\frac{\lambda_{k,j}^{(i)}}{1+\alpha_1}})\Big){\rm d}y+ O\Big(\sum_{i=1}^{2}e^{-\frac{\lambda_{k,1}^{(i)}}{1+\alpha_1}}\Big)\\
&=O\Big(\sum_{i=1}^{2}e^{-\frac{\lambda_{k,1}^{(i)}}{1+\alpha_1}}\Big).
\end{align*}
The closeness between $\rho_{k,j}^{(1)}$ and $\rho_{k,j}^{(2)}$ leads to the smallness of $I_2$ (see (\ref{loc-mass}), (\ref{rho-kj-1}) and (\ref{rho-kj-2})):
\begin{equation}\label{I2}
I_2=\sum_{j=1}^{m}G(p_{k,j}^{(1)},x)(\rho_{k,j}^{(1)}-\rho_{k,j}^{(2)})=O\Big(\sum_{i=1}^{2}e^{-\frac{\lambda_{k,1}^{(i)}}{1+\alpha_1}}\Big).
\end{equation}
For $I_3$, the magnitude of $u_k^{(i)}$ outside the bubbling area determines the smallness of $I_3$:
$$ I_3=\rho_k\int_{M\backslash \bigcup_{j=1}^mB(p_{k,j}^{(1)},\frac{r_0}{2})} G(y,x)H(y)(e^{u_k^{(1)}(y)}-e^{u_k^{(2)}(y)}){\rm d}\mu(y)=O\Big(\sum_{i=1}^{2}e^{-\lambda_{k,1}^{(i)}}\Big). $$
Therefore
\begin{equation}\label{step2-2}
u_k^{(1)}-u_k^{(2)}-\big(\bar{u}_k^{(1)}-\bar{u}_k^{(2)}\big)=O\Big(\sum_{i=1}^{2}e^{-\frac{\lambda_{k,1}^{(i)}}{1+\alpha_1}}\Big)\quad \mbox{in}\quad M\backslash \bigcup_{j=1}^mB(p_{k,j}^{(1)},r_0).
\end{equation}
To eliminate the averages in (\ref{step2-2}) we take advantage of (\ref{uk-ave-1}) and (\ref{lam-est-1}):
\begin{equation}\label{step2-3}
\bar{u}_k^{(1)}-\bar{u}_k^{(2)}=-(\lambda_{k,j}^{(1)}-\lambda_{k,j}^{(2)})+O\Big(\sum_{i=1}^{2}\lambda_{k,j}^{(i)} e^{-\frac{\lambda_{k,1}^{(i)}}{1+\alpha_1}}\Big)=O\Big(\sum_{i=1}^{2}e^{-\frac{\gamma}{1+\alpha_1}\lambda_{k,1}^{(i)}}\Big).
\end{equation}
Using (\ref{step2-3}) in (\ref{step2-2}) we arrive at
\begin{equation}\label{step2-4}
u_k^{(1)}(x)-u_k^{(2)}(x)=O\Big(\sum_{i=1}^{2}e^{-\frac{\gamma}{1+\alpha_1}\lambda_{k,1}^{(i)}}\Big)=O(e^{-\frac{\gamma}{1+\alpha_1}\lambda_{k,1}^{(1)}})
\end{equation}
for all $x\in M\backslash \bigcup_{j=1}^mB(p_{k,j}^{(1)},r_0)$. Lemma \ref{est1} is established.
\end{proof}
As an immediate application, Lemma \ref{est1} gives (see (\ref{c}))
\begin{equation}\label{c-est}
c_k(x)=e^{u_k^{(1)}(x)}\big(1+O(\parallel u_k^{(1)}-u_k^{(2)}\parallel_{L^{\infty}(M)})\big)=e^{u_k^{(1)}(x)}\big(1+O(e^{-\frac{\gamma}{1+\alpha_1}\lambda_{k,1}^{(1)}})\big).
\end{equation}
To simplify the notations, we set
\begin{equation}
\epsilon_{k,j}=\bigg(\frac{\rho_kh_j(p_{k,j}^{(1)})}{8(1+\alpha_j)^2}\bigg)^{-\frac{1}{2(1+\alpha_j)}}e^{-\frac{\lambda_{k,j}^{(1)}}{2(1+\alpha_j)}}
\end{equation}
and
\begin{equation}\label{varsigma-kj}
\varsigma_{k,j}(z)=\varsigma_k(\epsilon_{k,j}z+p_{k,j}^{(1)}),\quad |z|<\frac{r_0}{\epsilon_{k,j}},\quad 1\leq j\leq m,
\end{equation}
which satisfies
\begin{equation}\label{sigma-t-equ}
\Delta\varsigma_{k,j}+\frac{8(1+\alpha_j)^2}{\rho_kh_j(p_{k,j}^{(1)})}\rho_k\tilde{h}_j(\epsilon_{k,j}z+p_{k,j}^{(1)})e^{-\lambda_{k,j}^{(1)}}|z|^{2\alpha_j}c_k(\epsilon_{k,j}z+p_{k,j}^{(1)})\varsigma_{k,j}=0
\end{equation}
for $ |z|<r_0\epsilon_{k,j}^{-1}$.

\medskip

The following lemma determines the limit of $\varsigma_{k,j}$ in both situations:
\begin{lem}\label{lem-limit-1}
{\rm (The limit of} $\varsigma_{k,j}${\rm )}
{\rm (i)} For $1\leq j \leq \tau$, there exists a constant $ b_{j,0} $ such that
\begin{equation*}
\varsigma_{k,j}\rightarrow b_{j,0}\varphi_{j,0}\quad {\rm in}\ \; C_{loc}(\mathbb{R}^2),
\end{equation*}
where
\begin{equation*}
\varphi_{j,0}(z)=\frac{1-|z|^{2(1+\alpha_j)}}{1+|z|^{2(1+\alpha_j)}}.
\end{equation*}
{\rm (ii)} If $\tau<m$ and $\tau+1\leq j \leq m$, there exist constants $ b_{j,0} $, $ b_{j,1} $ and $ b_{j,2} $ such that
\begin{equation*}
\varsigma_{k,j}\rightarrow b_{j,0}\varphi_{j,0}+b_{j,1}\varphi_{j,1}+b_{j,2}\varphi_{j,2}\quad {\rm in}\ \; C_{loc}(\mathbb{R}^2),
\end{equation*}
where
\begin{equation*}
\varphi_{j,0}(z)=\frac{1-|z|^2}{1+|z|^2}, \quad \varphi_{j,1}(z)=\frac{z_1}{1+|z|^2}, \quad \varphi_{j,2}(z)=\frac{z_2}{1+|z|^2}.
\end{equation*} \end{lem} \begin{proof}[\textbf{Proof}] (i) For $1\leq j \leq \tau$, it is easy to use (\ref{c-est}) (\ref{eta1}) (\ref{eta2}) and (\ref{eta3}) to obtain \begin{align*} &\frac{8(1+\alpha_j)^2}{\rho_kh_j(p_{k,j}^{(1)})}\rho_k\tilde{h}_j(\epsilon_{k,j}z+p_{k,j}^{(1)})e^{-\lambda_{k,j}^{(1)}}c_k(\epsilon_{k,j}z+p_{k,j}^{(1)}) \\ =& \frac{8(1+\alpha_j)^2}{\rho_kh_j(p_{k,j}^{(1)})}\rho_k\tilde{h}_j(\epsilon_{k,j}z+p_{k,j}^{(1)})e^{-\lambda_{k,j}^{(1)}}e^{U_{k,j}^{(1)}+G_{k,j}^{(1)}(\epsilon_{k,j}z+p_{k,j}^{(1)})-G_{k,j}^{(1)}(p_{k,j}^{(1)}) }\big(1+o(1)\big)\\ =& \frac{8(1+\alpha_j)^2}{\big(1+|z|^{2(1+\alpha_j)}\big)^2}\big(1+O(\epsilon_{k,j}|z|)+o(1)\big) \to \frac{8(1+\alpha_j)^2}{\big(1+|z|^{2(1+\alpha_j)}\big)^2} \quad {\rm in}\ \; C_{loc}(\mathbb{R}^2). \end{align*} Therefore, $\varsigma_{k,j}\rightarrow\varsigma_j$ in $C_{loc}(\mathbb{R}^2)$ and $\varsigma_j$ satisfies \begin{equation}\label{limit-equ1} \Delta\varsigma_j(z)+\frac{8(1+\alpha_j)^2|z|^{2\alpha_j}}{\big(1+|z|^{2(1+\alpha_j)}\big)^2}\varsigma_j(z)=0 \quad {\rm in} \ \; \mathbb{R}^2. \end{equation} Since it is obvious to have $|\varsigma_j|\leq 1$ from $|\varsigma_{k,j}|\leq 1$, we apply Lemma \ref{linear-lem-1} to have $\varsigma_j=b_{j,0}\varphi_{j,0}$ for some constant $b_{j,0}$ and \begin{equation*} \varphi_{j,0}(z)=\frac{1-|z|^{2(1+\alpha_j)}}{1+|z|^{2(1+\alpha_j)}}. \end{equation*} That is $\varsigma_{k,j}\rightarrow b_{j,0}\varphi_{j,0}$ in $ C_{loc}(\mathbb{R}^2)$. \medskip (ii) For $\tau+1\leq j \leq m$, by (\ref{c-est}) (\ref{eta1}) and (\ref{eta3}), we have $\varsigma_{k,j}\rightarrow\varsigma_j$ in $C_{loc}(\mathbb{R}^2)$, where \begin{equation*} \left\{\begin{array}{lcl} \Delta\varsigma_j(z)+\frac{8}{(1+|z|^{2})^2}\varsigma_j(z)=0 && {\rm in}\ \; \mathbb{R}^2, \\ |\varsigma_j|\leq 1 &&{\rm in}\ \; \mathbb{R}^2. \end{array} \right. \end{equation*} In this case we use Lemma \ref{linear-lem-2} to conclude that \begin{equation*} \varsigma_j(z)=b_{j,0}\varphi_{j,0}(z)+b_{j,1}\varphi_{j,1}(z)+b_{j,2}\varphi_{j,2}(z), \end{equation*} for some constants $ b_{j,0} $, $ b_{j,1} $ and $ b_{j,2} $. Lemma \ref{lem-limit-1} is established. \end{proof} Our next goal is to prove that all $b_{j,0}$ are the same, and equal to the limit of $\varsigma_k$ away from the bubbling area. Our approach is similar to the corresponding parts in \cite{lin-yan-uniq} for the Chern-Simons-Higgs equation and in \cite{bart-4} for regular mean field equations. \begin{lem}\label{lem-limit-2} There exists a constant $b_0$ such that \begin{equation*} \varsigma_k\rightarrow -b_0\quad {\rm in}\ \; C_{loc}(M\backslash\{p_1,\cdots,p_m\}). \end{equation*} Moreover, $b_{j,0}=b_0$ for all $1 \leq j \leq m$. \end{lem} \begin{proof}[\textbf{Proof}] Starting from the equation for $\varsigma_k$: \begin{equation*} \Delta_g\varsigma_k(x)+\rho_k H(x)c_k(x)\varsigma_k(x)=0\quad in\ \; M, \end{equation*} we observe from (\ref{c-est}) (\ref{sigma1}) and (\ref{sigma2}) that $c_k\rightarrow 0$ in $C_{loc}(M\backslash\{p_1,\cdots,p_m\})$. Since $\|\varsigma_k\|_{L^{\infty}(M)}\leq 1$, $\varsigma_k\rightarrow \varsigma_0$ in $C_{loc}(M\backslash\{p_1,\cdots,p_m\})$, where $\varsigma_0$ satisfies \begin{equation}\label{limit-equ2} \Delta_g\varsigma_0=0\quad {\rm in}\ \; M\backslash\{p_1,\cdots,p_m\}. \end{equation} The bound for $\varsigma$: $\|\varsigma_0\|_{L^{\infty}(M)}\leq 1$, which comes from $\|\varsigma_k\|_{L^{\infty}(M)}\leq 1$, yields the smoothness of $\varsigma_0$ on the whole manifold. 
Thus $\varsigma_0\equiv -b_0$ in $M$ for some constant $b_0$. In particular,
\begin{equation}\label{limit-equ3}
\varsigma_k\rightarrow -b_0\quad {\rm in}\ \; C_{loc}(M\backslash\{p_1,\cdots,p_m\}).
\end{equation}
For $1\leq j \leq m$, let
\begin{equation*}
\varphi_{k,j}(x)=\frac{1-\frac{\rho_kh_j(p_{k,j}^{(1)})}{8(1+\alpha_j)^2}e^{\lambda_{k,j}^{(1)}}|x-p_{k,j}^{(1)}|^{2(1+\alpha_j)}}{1+\frac{\rho_kh_j(p_{k,j}^{(1)})}{8(1+\alpha_j)^2}e^{\lambda_{k,j}^{(1)}}|x-p_{k,j}^{(1)}|^{2(1+\alpha_j)}},\quad x\in B(p_{k,j}^{(1)},r_0),
\end{equation*}
which is a sequence of solutions of
\begin{equation*}
-\Delta\varphi_{k,j}(x)=\rho_kh_j(p_{k,j}^{(1)})|x-p_{k,j}^{(1)}|^{2\alpha_j}e^{U_{k,j}^{(1)}}\varphi_{k,j}(x),\quad x\in B(p_{k,j}^{(1)},r_0).
\end{equation*}
Recall that
\begin{equation*}
-\Delta\varsigma_k(x)=\rho_k \tilde{h}_j(x)|x-p_{k,j}^{(1)}|^{2\alpha_j}\frac{e^{u_k^{(1)}(x)}-e^{u_k^{(2)}(x)}}{u_k^{(1)}(x)-u_k^{(2)}(x)}\varsigma_k(x),\quad x\in B(p_{k,j}^{(1)},r_0).
\end{equation*}
Using (\ref{p_kj-location}), (\ref{eta1}), (\ref{eta2}), (\ref{eta3}) and integration by parts, we find, for $ d\in (0,r_0) $, that
\begin{align*}
&\int_{\partial B(p_{k,j}^{(1)},d)}\big(\varphi_{k,j}\frac{\partial\varsigma_k}{\partial\nu}-\varsigma_k\frac{\partial\varphi_{k,j}}{\partial\nu}\big){\rm d}\sigma =\int_{B(p_{k,j}^{(1)},d)}\big(\varphi_{k,j}\Delta\varsigma_k-\varsigma_k\Delta\varphi_{k,j}\big){\rm d}x \\
=& \int_{B(p_{k,j}^{(1)},d)}\rho_k\varsigma_k\varphi_{k,j}\Big(-\tilde{h}_j|x-p_{k,j}^{(1)}|^{2\alpha_j}\frac{e^{u_k^{(1)}}-e^{u_k^{(2)}}}{u_k^{(1)}-u_k^{(2)}}+h_j(p_{k,j}^{(1)})|x-p_{k,j}^{(1)}|^{2\alpha_j}e^{U_{k,j}^{(1)}}\Big){\rm d}x \\
=& \int_{B(p_{k,j}^{(1)},d)}\rho_k\varsigma_k\varphi_{k,j}|x-p_{k,j}^{(1)}|^{2\alpha_j}\Big(-\tilde{h}_je^{u_k^{(1)}}\big(1+O(|u_k^{(1)}-u_k^{(2)}|)\big)+h_j(p_{k,j}^{(1)})e^{U_{k,j}^{(1)}}\Big){\rm d}x \\
=& \int_{B(p_{k,j}^{(1)},d)}\rho_k\varsigma_k\varphi_{k,j}|x-p_{k,j}^{(1)}|^{2\alpha_j}\Big(-\tilde{h}_je^{U_{k,j}^{(1)}+G_{k,j}^{(1)}-G_{k,j}^{(1)}(p_{k,j}^{(1)})+\eta_{k,j}^{(1)}}\big(1+O(|u_k^{(1)}-u_k^{(2)}|)\big) \\
&\qquad \quad+h_j(p_{k,j}^{(1)})e^{U_{k,j}^{(1)}}\Big){\rm d}x .
\end{align*}
By scaling $x=\epsilon_{k,j}z+p_{k,j}^{(1)}$, (\ref{eta2}), (\ref{eta3}) and the estimate of $\parallel u_k^{(1)}-u_k^{(2)}\parallel_{L^{\infty}(M)}$, it is not hard to obtain
\begin{equation}\label{div-est}
\int_{\partial B(p_{k,j}^{(1)},d)}\big(\varphi_{k,j}\frac{\partial\varsigma_k}{\partial\nu}-\varsigma_k\frac{\partial\varphi_{k,j}}{\partial\nu}\big){\rm d}\sigma=O\big(e^{-\frac{\gamma}{1+\alpha_1}\lambda_{k,1}^{(1)}}\big).
\end{equation}
Let $\varsigma_{k,j}^*(r)$ be the spherical average of $\varsigma_k$ over $\partial B(p_{k,j}^{(1)},r)$:
$$\varsigma_{k,j}^*(r)=\frac 1{2\pi}\int_{0}^{2\pi}\varsigma_k\big(p_{k,j}^{(1)}+r(\cos \theta,\sin \theta)\big){\rm d}\theta. $$
Then (\ref{div-est}) yields
\begin{equation*}
(\varsigma_{k,j}^*)'(r)\varphi_{k,j}(r)-\varsigma_{k,j}^*(r)\varphi_{k,j}'(r)=\frac{1}{r}O\big(e^{-\frac{\gamma}{1+\alpha_1}\lambda_{k,1}^{(1)}}\big),\quad r\in(R\epsilon_{k,j},r_0).
\end{equation*}
For any $r \in (R\epsilon_{k,j},r_0)$, we also notice that
\begin{align*}
&\varphi_{k,j}(r)=-1+\frac{1}{r^{2(1+\alpha_j)}}O(e^{-\lambda_{k,j}^{(1)}}),\\
&\varphi_{k,j}'(r)=\frac{1}{r^{2\alpha_j+3}}O(e^{-\lambda_{k,j}^{(1)}}).
\end{align*}
Then we conclude
\begin{equation}\label{ratial-d-est}
(\varsigma_{k,j}^*)'(r)=\frac{1}{r}O\big(e^{-\frac{\gamma}{1+\alpha_1}\lambda_{k,1}^{(1)}}\big)+\frac{1}{r^{2\alpha_j+3}}O(e^{-\lambda_{k,j}^{(1)}}),\quad r\in(R\epsilon_{k,j},r_0).
\end{equation} Integrating (\ref{ratial-d-est}) from $R\epsilon_{k,j}$ to $r$, we get for all $r\in(R\epsilon_{k,j},r_0)$ \begin{align}\label{ratial-est} \begin{split} \varsigma_{k,j}^*(r) =& \varsigma_{k,j}^*(R\epsilon_{k,j})+O\big(e^{-\frac{\gamma}{1+\alpha_1}\lambda_{k,j}^{(1)}}\big)\big(\log r+\log R+\lambda_{k,j}^{(1)}\big) \\ &+O\big(e^{-\lambda_{k,j}^{(1)}}\big) \big(r^{-2(1+\alpha_j)}+e^{\lambda_{k,j}^{(1)}}R^{-2(1+\alpha_j)}\big) \\ =& \varsigma_{k,j}^*(R\epsilon_{k,j})+o(1)\log R+O(R^{-2(1+\alpha_j)}) . \end{split} \end{align} The first term of (\ref{ratial-est}) is almost a constant ( Lemma \ref{lem-limit-1} ): \begin{equation*} \varsigma_{k,j}^*(R\epsilon_{k,j})=- b_{j,0}+o(1)+o_R(1), \end{equation*} where $\lim_{R\to +\infty}o_R(1)=0$. Then it is easy to see from (\ref{ratial-est}) and (\ref{limit-equ3}) that $b_{j,0}=b_0$ for all $1\le j\le m$. \end{proof} Next we introduce a few quantities to be used later. For $1 \leq j\leq m$, let \begin{align}\label{phi-kj} \begin{split} \phi_{k,j}(x)=&\frac{\rho_k}{\sum_{l=1}^m(1+\alpha_l)}\bigg\{(1+\alpha_j)\Big(R(x,p_{k,j}^{(1)})-R(p_{k,j}^{(1)},p_{k,j}^{(1)})\Big) \\ &+\sum_{l\neq j}^{1,\cdots,m}(1+\alpha_l)\Big(G(x,p_{k,l}^{(1)})-G(p_{k,j}^{(1)},p_{k,l}^{(1)})\Big)\bigg\}, \end{split} \end{align} \begin{equation}\label{G-k-tilde} \tilde{G}_k(x)=8\pi\sum_{l=1}^{m}(1+\alpha_l)G(x,p_{k,l}^{(1)}), \end{equation} It is easy to see that in $M\setminus \{p_{k,1}^{(1)},\cdots,p_{k,m}^{(1)}\}$, \begin{equation}\label{G-phi-equ} \nabla\big(\tilde{G}_k(x)-\phi_{k,j}(x)\big)=-4(1+\alpha_j)\frac{x-p_{k,j}^{(1)}}{|x-p_{k,j}^{(1)}|^2},\quad \Delta\big(\tilde{G}_k(x)-\phi_{k,j}(x)\big)=0. \end{equation} Set \begin{equation}\label{v-kj} v_{k,j}^{(i)}(x)=u_k^{(i)}(x)-\phi_{k,j}(x),\quad i=1,2, \end{equation} and \begin{equation}\label{A-kj} A_{k,j}=\int_{M_j} f_k \, {\rm d}\mu, \end{equation} then we estimate $\nabla v_{k,j}^{(i)}$ away from the bubbling area: \begin{lem}\label{lem-Dv-kj} For any $\theta\in(0,r_0)$ small enough and $x\in B(p_{k,j}^{(1)},2r_0)\setminus B(p_{k,j}^{(1)},\theta)$, the gradient of $v_{k,j}^{(i)}$ is very close to that of a harmonic function: \begin{align}\label{Dv-kj-est} \nabla v_{k,j}^{(i)}(x)=-4(1+\alpha_j)\frac{x-p_{k,j}^{(1)}}{|x-p_{k,j}^{(1)}|^2}+O(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}). \end{align} \end{lem} \begin{proof}[\textbf{Proof}] By (\ref{sigma1}) (\ref{sigma2}) and Theorem \ref{Theorem chen-lin2}, we have \begin{align*} O(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}})=\nabla \sigma_k^{(i)}=\nabla (v_{k,j}^{(i)}+\phi_{k,j}-\tilde{G}_k)+O(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}), \quad x\in B(p_{k,j}^{(1)},2r_0)\setminus B(p_{k,j}^{(1)},\theta) \end{align*} for $i=1,2$, $1\leq j\leq m$. Consequently using (\ref{phi-kj})$\sim$(\ref{G-phi-equ}) we have \begin{align*} \nabla v_{k,j}^{(i)}(x)=&\nabla \big(\tilde{G}_k(x)-\phi_{k,j}(x)\big)+O(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}})\\ =&-4(1+\alpha_j)\frac{x-p_{k,j}^{(1)}}{|x-p_{k,j}^{(1)}|^2}+O(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}). \end{align*} \end{proof} Next we estimate $\varsigma_k$ and its derivatives away from blowup points. \begin{lem}\label{lem-C1-est} Given $\theta\in(0,r_0)$ small enough, we have \begin{align}\label{GRF-est-1} \begin{split} \varsigma_k-\bar{\varsigma}_k=\sum_{j=1}^m A_{k,j}G(p_{k,j}^{(1)},x)+o(e^{-\frac{\lambda_{k,1}^{(1)}}{2(1+\alpha_1)}}) \quad {\rm in}\ \; M\setminus\bigcup_{j=1}^m B(p_{k,j}^{(1)},\theta). 
\end{split}
\end{align}
\end{lem}
\begin{proof}[\textbf{Proof}]
From the Green's representation formula for $u_k^{(i)}$ and the definition of $\varsigma_k$, we have the following expression of $\varsigma_k$:
$$\varsigma_k(x)-\bar \varsigma_k=\int_MG(y,x)f_k(y){\rm d}\mu(y). $$
Then for $ x\in M\setminus\bigcup_{j=1}^m B(p_{k,j}^{(1)},\theta)$ we evaluate the integral in three parts:
\begin{align}\label{J1+J2+J3}
\begin{split}
\varsigma_k(x)-\bar{\varsigma}_k=&\sum_{j=1}^m A_{k,j}G(p_{k,j}^{(1)},x)+\sum_{j=1}^\tau\int_{M_j}\big(G(y,x)-G(p_{k,j}^{(1)},x)\big)f_k(y){\rm d}\mu(y) \\
&+\sum_{j=\tau+1}^m\int_{M_j}\big(G(y,x)-G(p_{k,j}^{(1)},x)\big)f_k(y){\rm d}\mu(y) \\
=&\mathrel{\mathop:}J_1+J_2+J_3.
\end{split}
\end{align}
Note that $J_3=0$ if $\tau=m$. Then it follows from the definition of $\eta_{k,j}^{(1)}$, (\ref{eta2}) and (\ref{eta3}) that
\begin{align*}
&\int_{M_j}\langle\partial_yG(y,x)\big|_{y=p_{k,j}^{(1)}},y-p_{k,j}^{(1)}\rangle f_k(y){\rm d}\mu(y) \\
=&\int_{B(p_{k,j}^{(1)},r_0)} \langle\partial_yG(y,x)\big|_{y=p_{k,j}^{(1)}},y-p_{k,j}^{(1)}\rangle f_k(y)e^{\phi_j(y)}{\rm d}y+O(e^{-\lambda_{k,j}^{(1)}}) \\
=&\int_{B(p_{k,j}^{(1)},r_0)} \langle\partial_yG(y,x)\big|_{y=p_{k,j}^{(1)}},y-p_{k,j}^{(1)}\rangle \rho_kh_j(p_{k,j}^{(1)})|y-p_{k,j}^{(1)}|^{2\alpha_j}e^{U_{k,j}^{(1)}} \varsigma_k(y) \\
&\times\big(1+O(|y-p_{k,j}^{(1)}|+\epsilon_{k,j}+\epsilon_{k,1}^2)\big)(1+o(1)){\rm d}y+O(e^{-\lambda_{k,j}^{(1)}}).
\end{align*}
For $1\leq j\leq \tau$, using $y=\epsilon_{k,j}z+p_{k,j}^{(1)}$ in the evaluation of the identity above, we have
\begin{align*}
&\int_{B(p_{k,j}^{(1)},r_0)} \langle\partial_yG(y,x)\big|_{y=p_{k,j}^{(1)}},y-p_{k,j}^{(1)}\rangle f_k(y)e^{\phi_j}{\rm d}y \\
=&\epsilon_{k,j}\int_{|z|<\frac{r_0}{\epsilon_{k,j}}}\langle\partial_yG(y,x)\big|_{y=p_{k,j}^{(1)}},z\rangle\frac{8(1+\alpha_j)^2|z|^{2\alpha_j}}{(1+|z|^{2(1+\alpha_j)})^2}\Big(b_0\frac{1-|z|^{2(1+\alpha_j)}}{1+|z|^{2(1+\alpha_j)}}+o(1)\Big) \\
&\times\big(1+O(\epsilon_{k,j}|z|+\epsilon_{k,j}+\epsilon_{k,1}^2)+o(1)\big){\rm d}z \\
=&o(e^{-\frac{\lambda_{k,j}^{(1)}}{2(1+\alpha_j)}}).
\end{align*}
If $\tau <m$, let us recall that $\alpha_j=0$ for $\tau+1\leq j\leq m$. Similarly, by the standard scaling, Lemma \ref{lem-limit-1} and symmetry, we have
\begin{align*}
&\int_{B(p_{k,j}^{(1)},r_0)} \langle\partial_yG(y,x)\big|_{y=p_{k,j}^{(1)}},y-p_{k,j}^{(1)}\rangle f_k(y)e^{\phi_j}{\rm d}y \\
=&\epsilon_{k,j}\int_{|z|<\frac{r_0}{\epsilon_{k,j}}}\langle\partial_yG(y,x)\big|_{y=p_{k,j}^{(1)}},z\rangle\frac{8}{(1+|z|^2)^{2}}\varsigma_{k,j}(z){\rm d}z+o(e^{-\frac{\lambda_{k,j}^{(1)}}{2}}) \\
=&e^{-\frac{\lambda_{k,j}^{(1)}}{2}}\Big(\sum_{h=1}^2\partial_{y_h}G(y,x)\big|_{y=p_{k,j}^{(1)}}b_{j,h}\Big)B_j +o(e^{-\frac{\lambda_{k,j}^{(1)}}{2}}),
\end{align*}
where
\begin{equation}\label{Bj}
B_j=4\sqrt{\frac{8}{\rho_kh_j(p_{k,j}^{(1)})}}\displaystyle\int_{\mathbb{R}^2}\frac{|z|^2}{(1+|z|^2)^3}{\rm d}z .
\end{equation}
For the second order terms in the expansion of $G$, we have
\begin{align*}
&\int_{M_j}|y-p_{k,j}^{(1)}|^2f_k{\rm d}\mu(y) \\
=&\int_{B(p_{k,j}^{(1)},r_0)}|y-p_{k,j}^{(1)}|^2f_ke^{\phi_j}{\rm d}y+O(e^{-\lambda_{k,j}^{(1)}}) \\
=&O(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_j}})\int_{|z|<\frac{r_0}{\epsilon_{k,j}}}\frac{|z|^{2(1+\alpha_j)}}{(1+|z|^{2(1+\alpha_j)})^2}(1+o(1)){\rm d}z+O(e^{-\lambda_{k,j}^{(1)}}) \\
=&\left\{
\begin{array}{lcl}
O(e^{-\frac{1}{1+\alpha_j}\lambda_{k,j}^{(1)}}), && 1\leq j\leq \tau, \\
O(\lambda_{k,j}^{(1)}e^{-\lambda_{k,j}^{(1)}}), && \tau+1\leq j\leq m.
\end{array}
\right.
\end{align*}
Consequently for $J_2$ and $J_3$ we have
\begin{equation}\label{J2}
J_2=o(e^{-\frac{\lambda_{k,1}^{(1)}}{2(1+\alpha_1)}}),
\end{equation}
\begin{equation}\label{J3}
J_3=\sum_{j=\tau+1}^me^{-\frac{\lambda_{k,j}^{(1)}}{2}}\Big(\sum_{h=1}^2\partial_{y_h}G(y,x)\big|_{y=p_{k,j}^{(1)}}b_{j,h}\Big)B_j +o(e^{-\frac{\lambda_{k,1}^{(1)}}{2}}).
\end{equation}
Combining (\ref{J1+J2+J3}), (\ref{J2}) and (\ref{J3}), we conclude that (\ref{GRF-est-1}) holds. Then by standard estimates we also have
\begin{align}\label{Dsigma_k-1}
\nabla\varsigma_k(x)=\sum_{j=1}^m A_{k,j}\nabla_xG(p_{k,j}^{(1)},x)+o(e^{-\frac{\lambda_{k,1}^{(1)}}{2(1+\alpha_1)}}), \quad x\in M\setminus\bigcup_{j=1}^m B(p_{k,j}^{(1)},\theta).
\end{align}
\end{proof}

\section{Estimates associated with Pohozaev identities}\label{anal-pohozaev}

In this section, we establish some sharp estimates for certain terms that are crucial in the evaluation of Pohozaev identities.

\smallskip

The first important quantity is $A_{k,j}$, defined in (\ref{A-kj}), which is studied through the following Pohozaev identity:
\begin{lem}\label{Pohozaev identity-1}
For $1 \leq j \leq m$ and any $r\in(0,r_0)$, it holds that
\begin{align}\label{PI-1}
\begin{split}
& \frac{1}{2}\int_{\partial B(p_{k,j}^{(1)},r)}r\langle \nabla v_{k,j}^{(1)}+\nabla v_{k,j}^{(2)},\nabla \varsigma_k\rangle{\rm d}\sigma \\
& -\int_{\partial B(p_{k,j}^{(1)},r)}r\langle\nu,\nabla v_{k,j}^{(1)}+\nabla v_{k,j}^{(2)}\rangle \langle\nu,\nabla\varsigma_k\rangle {\rm d}\sigma \\
=& \int_{\partial B(p_{k,j}^{(1)},r)}\frac{r\rho_k\tilde{h}_j|x-p_{k,j}^{(1)}|^{2\alpha_j}\big(e^{v_{k,j}^{(1)}+\phi_{k,j}}-e^{v_{k,j}^{(2)}+\phi_{k,j}}\big)}{\parallel v_{k,j}^{(1)}-v_{k,j}^{(2)} \parallel_{L^{\infty}(M)} } {\rm d}\sigma \\
& - 2(1+\alpha_j)\int_{B(p_{k,j}^{(1)},r)} \frac{\rho_k\tilde{h}_j|x-p_{k,j}^{(1)}|^{2\alpha_j}\big(e^{v_{k,j}^{(1)}+\phi_{k,j}}-e^{v_{k,j}^{(2)}+\phi_{k,j}}\big)}{\parallel v_{k,j}^{(1)}-v_{k,j}^{(2)} \parallel_{L^{\infty}(M)}} {\rm d}x\\
& -\int_{B(p_{k,j}^{(1)},r)} \frac{\rho_k\tilde{h}_j|x-p_{k,j}^{(1)}|^{2\alpha_j}\big(e^{v_{k,j}^{(1)}+\phi_{k,j}}-e^{v_{k,j}^{(2)}+\phi_{k,j}}\big)}{\parallel v_{k,j}^{(1)}-v_{k,j}^{(2)} \parallel_{L^{\infty}(M)}} \langle \nabla\big(\log \tilde{h}_j+\phi_{k,j}\big),x-p_{k,j}^{(1)}\rangle {\rm d}x.
\end{split}
\end{align}
\end{lem}
This Pohozaev identity has been used in \cite{bart-4} and \cite{lin-yan-uniq}; we include the proof for the convenience of the reader.
\begin{proof}[\textbf{Proof}]
First we observe that for any two smooth functions $u$ and $v$,
\begin{align}\label{divergence}
\begin{split}
&\Delta u\{\nabla v\cdotp (x-p_{k,j}^{(1)})\}+\Delta v\{\nabla u\cdotp (x-p_{k,j}^{(1)})\} \\
=\,&{\rm div}\Big\{\nabla u\big[\nabla v\cdotp(x-p_{k,j}^{(1)})\big]+\nabla v\big[\nabla u\cdotp(x-p_{k,j}^{(1)})\big] - \nabla u\cdotp\nabla v (x-p_{k,j}^{(1)})\Big\}.
\end{split}
\end{align}
Replacing $u$, $v$ by $v_{k,j}^{(1)}- v_{k,j}^{(2)}$ and $v_{k,j}^{(1)}+ v_{k,j}^{(2)}$ respectively in (\ref{divergence}), we have
\allowdisplaybreaks
\begin{align}\label{div-formula-1}
\begin{split}
&\Delta (v_{k,j}^{(1)}- v_{k,j}^{(2)})\big\{\nabla (v_{k,j}^{(1)}+ v_{k,j}^{(2)})\cdotp (x-p_{k,j}^{(1)})\big\}
+\Delta (v_{k,j}^{(1)}+ v_{k,j}^{(2)})\big\{\nabla (v_{k,j}^{(1)}- v_{k,j}^{(2)})\cdotp (x-p_{k,j}^{(1)})\big\} \\
=&{\rm div}\Big\{\nabla (v_{k,j}^{(1)}- v_{k,j}^{(2)})\big[\nabla (v_{k,j}^{(1)}+ v_{k,j}^{(2)})\cdotp(x-p_{k,j}^{(1)})\big] \\
+&\nabla (v_{k,j}^{(1)}+ v_{k,j}^{(2)})\big[\nabla (v_{k,j}^{(1)}- v_{k,j}^{(2)})\cdotp(x-p_{k,j}^{(1)})\big]-\nabla (v_{k,j}^{(1)}- v_{k,j}^{(2)})\cdotp\nabla (v_{k,j}^{(1)}+ v_{k,j}^{(2)})(x-p_{k,j}^{(1)})\Big\}.
\end{split}
\end{align}
By the definition of $v_{k,j}^{(i)}$, we see that, for $x\in B(p_{k,j}^{(1)},r_0)$,
\begin{align}\label{v-kj-equ}
\Delta(v_{k,j}^{(1)}\pm v_{k,j}^{(2)})+\rho_ke^{\phi_{k,j}}\tilde{h}_j|x-p_{k,j}^{(1)}|^{2\alpha_j}(e^{v_{k,j}^{(1)}}\pm e^{v_{k,j}^{(2)}})=0.
\end{align}
Using (\ref{v-kj-equ}) and
$$g_i=e^{v_{k,j}^{(i)}+\phi_{k,j}+\log \tilde{h}_j+2\alpha_j\log|x-p_{k,j}^{(1)}|}$$
the right-hand side (RHS) of (\ref{div-formula-1}) can be written as:
\begin{align*}
&{\rm (RHS)}\ {\rm of}\ (\ref{div-formula-1})\\
=&-\rho_k(g_1-g_2)\big\{\nabla (v_{k,j}^{(1)}+ v_{k,j}^{(2)})\cdotp (x-p_{k,j}^{(1)})\big\}
-\rho_k(g_1+g_2)\big\{\nabla (v_{k,j}^{(1)}- v_{k,j}^{(2)})\cdotp (x-p_{k,j}^{(1)})\big\}\\
=&-2\rho_kg_1\big\{\nabla v_{k,j}^{(1)}\cdotp (x-p_{k,j}^{(1)})\big\}+2\rho_kg_2\big\{\nabla v_{k,j}^{(2)}\cdotp (x-p_{k,j}^{(1)})\big\}\\
=&-{\rm div}\big\{2\rho_k(g_1-g_2) (x-p_{k,j}^{(1)})\big\}+2\rho_k(g_1-g_2)\big\{\nabla (\log \tilde{h}_j+\phi_{k,j})\cdotp (x-p_{k,j}^{(1)})\big\}\\
&+2\rho_k(g_1-g_2)\big\{2\alpha_j\nabla \log |x-p_{k,j}^{(1)}|\cdotp (x-p_{k,j}^{(1)})\big\}+4\rho_k(g_1-g_2)\\
=&-{\rm div}\big\{2\rho_k(g_1-g_2) (x-p_{k,j}^{(1)})\big\}+4(1+\alpha_j)\rho_k(g_1-g_2)\\
&+2\rho_k(g_1-g_2)\big\{\nabla (\log \tilde{h}_j+\phi_{k,j})\cdotp (x-p_{k,j}^{(1)})\big\}.
\end{align*}
Since $\varsigma_k=\frac{v_{k,j}^{(1)} -v_{k,j}^{(2)}}{\parallel v_{k,j}^{(1)}-v_{k,j}^{(2)} \parallel_{L^{\infty}(M)}}$ and $\nu=\frac{x-p_{k,j}^{(1)}}{r}$ on $\partial B(p_{k,j}^{(1)},r)$, integrating over $B(p_{k,j}^{(1)},r)$ and applying the divergence theorem, we have
\begin{align}\label{div-formula-2}
\begin{split}
&\int_{B(p_{k,j}^{(1)},r)}\frac{{\rm (RHS)}\ {\rm of}\ (\ref{div-formula-1})}{\parallel v_{k,j}^{(1)}-v_{k,j}^{(2)} \parallel_{L^{\infty}(M)}}{\rm d}x\\
=&-2\int_{\partial B(p_{k,j}^{(1)},r)}\frac{r\rho_k\tilde{h}_j|x-p_{k,j}^{(1)}|^{2\alpha_j}\big(e^{v_{k,j}^{(1)}+\phi_{k,j}}-e^{v_{k,j}^{(2)}+\phi_{k,j}}\big)}{\parallel v_{k,j}^{(1)}-v_{k,j}^{(2)} \parallel_{L^{\infty}(M)} } {\rm d}\sigma\\
& + 4(1+\alpha_j)\int_{B(p_{k,j}^{(1)},r)} \frac{\rho_k\tilde{h}_j|x-p_{k,j}^{(1)}|^{2\alpha_j}\big(e^{v_{k,j}^{(1)}+\phi_{k,j}}-e^{v_{k,j}^{(2)}+\phi_{k,j}}\big)}{\parallel v_{k,j}^{(1)}-v_{k,j}^{(2)} \parallel_{L^{\infty}(M)}} {\rm d}x\\
& +2\int_{B(p_{k,j}^{(1)},r)} \frac{\rho_k\tilde{h}_j|x-p_{k,j}^{(1)}|^{2\alpha_j}\big(e^{v_{k,j}^{(1)}+\phi_{k,j}}-e^{v_{k,j}^{(2)}+\phi_{k,j}}\big)}{\parallel v_{k,j}^{(1)}-v_{k,j}^{(2)} \parallel_{L^{\infty}(M)}} \langle \nabla\big(\log \tilde{h}_j+\phi_{k,j}\big),x-p_{k,j}^{(1)}\rangle {\rm d}x.
\end{split}
\end{align}
On the other hand,
\begin{align}\label{div-formula-3}
\begin{split}
&\int_{B(p_{k,j}^{(1)},r)}\frac{{\rm (LHS)}\ {\rm of}\ (\ref{div-formula-1})}{\parallel v_{k,j}^{(1)}-v_{k,j}^{(2)} \parallel_{L^{\infty}(M)}}{\rm d}x\\
=&-\int_{\partial B(p_{k,j}^{(1)},r)}r\langle \nabla v_{k,j}^{(1)}+\nabla v_{k,j}^{(2)},\nabla \varsigma_k\rangle{\rm d}\sigma \\
&+2\int_{\partial B(p_{k,j}^{(1)},r)}r\langle\nu,\nabla v_{k,j}^{(1)}+\nabla v_{k,j}^{(2)}\rangle \langle\nu,\nabla \varsigma_k\rangle {\rm d}\sigma.
\end{split}
\end{align}
Then (\ref{PI-1}) follows from (\ref{div-formula-1}), (\ref{div-formula-2}) and (\ref{div-formula-3}).
\end{proof}
\begin{rem}
It is easy to see that the Pohozaev-type identity (\ref{PI-1}) also holds for $\alpha_j>-1$.
\end{rem}
\begin{lem}\label{lem-PI1-left}
For all $1\leq j\leq m$,
\begin{equation}\label{PI-1-l}
{\rm (LHS)}\ {\rm of}\ (\ref{PI-1})=-4(1+\alpha_j)A_{k,j}+O(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}\sum_{l=1}^{m}|A_{k,l}|)+o(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}).
\end{equation}
\end{lem}
\begin{proof}[\textbf{Proof}]
From (\ref{Dv-kj-est}) and (\ref{Dsigma_k-1}), we find that
\begin{align*}
&{\rm (LHS)}\ {\rm of}\ (\ref{PI-1})\\
=&\,4(1+\alpha_j)\int_{\partial B(p_{k,j}^{(1)},r)}\langle\nu,D\varsigma_k\rangle{\rm d}\sigma+O(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}\parallel D\varsigma_k\parallel_{L^{\infty}(\partial B(p_{k,j}^{(1)},r))})\\
=&\,4(1+\alpha_j)\int_{\partial B(p_{k,j}^{(1)},r)}\langle\nu,D\varsigma_k\rangle{\rm d}\sigma+O(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}\sum_{l=1}^{m}|A_{k,l}|)+o(e^{-\frac{3}{2(1+\alpha_1)}\lambda_{k,j}^{(1)}}).
\end{align*}
For $x\in\partial B(p_{k,j}^{(1)},r)$ we use the Green's representation formula to estimate $\varsigma_k(x)$:
\begin{align*}
&\varsigma_k(x)-\bar{\varsigma}_k=\int_{M}G(y,x)f_k(y){\rm d}\mu(y)\\
=&\sum_{l=1}^mA_{k,l}G(p_{k,l}^{(1)},x)+\sum_{l=1}^m\sum_{h=1}^2B_{k,l,h}\partial_{y_h}G(y,x)\big|_{y=p_{k,l}^{(1)}}+\frac{1}{2}\sum_{l=1}^m\sum_{h,i=1}^2C_{k,l,h,i}\partial_{y_hy_i}^2G(y,x)\big|_{y=p_{k,l}^{(1)}}\\
&+O(1)\sum_{l=1}^m\int_{M_l}|y-p_{k,l}^{(1)}|^3f_k{\rm d}\mu(y) \quad {\rm in}\,\ C^1\big(B(p_{k,j}^{(1)},2r_0)\setminus B(p_{k,j}^{(1)},\frac{r}{2})\big),
\end{align*}
where
\begin{align*}
& B_{k,l,h}=\int_{M_l}(y-p_{k,l}^{(1)})_hf_k(y){\rm d}\mu(y), \\
& C_{k,l,h,i}=\int_{M_l}(y-p_{k,l}^{(1)})_h(y-p_{k,l}^{(1)})_if_k(y){\rm d}\mu(y).
\end{align*}
It is easy to see that the last term is rather minor:
\begin{align*}
&\sum_{l=1}^m\int_{M_l}|y-p_{k,l}^{(1)}|^3f_k(y){\rm d}\mu(y) \\
=&O(1)\sum_{l=1}^m\int_{B(p_{k,l}^{(1)},r)}\frac{e^{\lambda_{k,l}^{(1)}}|y-p_{k,l}^{(1)}|^{2\alpha_l+3}}{\big(1+e^{\lambda_{k,l}^{(1)}}|y-p_{k,l}^{(1)}|^{2(1+\alpha_l)}\big)^2}{\rm d}y +O(e^{-\lambda_{k,1}^{(1)}}) \\
=&\sum_{l=1}^mO(e^{-\frac{3}{2(1+\alpha_l)}\lambda_{k,l}^{(1)}})\int_{|z|<\frac{r}{\epsilon_{k,l}}}\frac{|z|^{2\alpha_l+3}}{(1+|z|^{2(1+\alpha_l)})^2}{\rm d}z+O(e^{-\lambda_{k,1}^{(1)}}) \\
=&o(e^{-\frac{\lambda_{k,1}^{(1)}}{1+\alpha_1}}).
\end{align*}
Setting
\begin{align*}
\bar{G}_k(x)=&\bar{\varsigma}_k+\sum_{j=1}^m A_{k,j}G(p_{k,j}^{(1)},x)+\sum_{l=1}^m\sum_{h=1}^2B_{k,l,h}\partial_{y_h}G(y,x)\big|_{y=p_{k,l}^{(1)}}\\
&+\frac{1}{2}\sum_{l=1}^m\sum_{h,i=1}^2C_{k,l,h,i}\partial_{y_hy_i}^2G(y,x)\big|_{y=p_{k,l}^{(1)}},
\end{align*}
we now have
\begin{equation*}
\nabla\varsigma_k(x)-\nabla\bar{G}_k(x)=o(e^{-\frac{\lambda_{k,1}^{(1)}}{1+\alpha_1}}).
\end{equation*}
Thus
\begin{align}\label{pi-l-1}
\begin{split}
&{\rm (LHS)}\ {\rm of}\ (\ref{PI-1})\\
=&\,4(1+\alpha_j)\int_{\partial B(p_{k,j}^{(1)},r)}\langle\nu,\nabla\bar{G}_k\rangle{\rm d}\sigma+O(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}\sum_{l=1}^{m}|A_{k,l}|)+o(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}).
\end{split}
\end{align}
Now we take the global cancellation property into consideration: for any fixed $\theta\in(0,r)$,
\begin{equation}\label{Gbar-equ}
\Delta \bar{G}_k=\sum_{l=1}^{m}A_{k,l}=\int_{M}f_k\,{\rm d}\mu=0,\quad {\rm in}\ \;B(p_{k,j}^{(1)},2r_0)\setminus B(p_{k,j}^{(1)},\theta).
\end{equation}
Using (\ref{G-phi-equ}), (\ref{Gbar-equ}) and (\ref{divergence}), we have
\begin{align*}
0=&\int_{B_r\setminus B_{\theta}} \Big\{\Delta\bar{G}_k\big\{\nabla(\tilde{G}_k-\phi_{k,j})\cdotp (x-p_{k,j}^{(1)})\big\}+\Delta(\tilde{G}_k-\phi_{k,j})\big\{\nabla \bar{G}_k\cdotp (x-p_{k,j}^{(1)})\big\}\Big\}{\rm d}x\\
=&-4(1+\alpha_j)\Big(\int_{\partial B_r}\frac{\partial\bar{G}_k}{\partial\nu} \,{\rm d}\sigma-\int_{\partial B_{\theta}}\frac{\partial\bar{G}_k}{\partial\nu} \,{\rm d}\sigma\Big),
\end{align*}
where $B_r$ and $B_{\theta}$ stand for $B(p_{k,j}^{(1)},r)$ and $B(p_{k,j}^{(1)},\theta)$ respectively. Therefore
\begin{equation}\label{pi-l-2}
\int_{\partial B_r}\frac{\partial\bar{G}_k}{\partial\nu} {\rm d}\sigma=\int_{\partial B_{\theta}}\frac{\partial\bar{G}_k}{\partial\nu} {\rm d}\sigma.
\end{equation}
Further direct computation yields
\begin{align}\label{pi-l-3}
\begin{split}
&\int_{\partial B_{\theta}}\langle\nu,\sum_{l=1}^{m}A_{k,l}\nabla_xG(p_{k,l}^{(1)},x)\rangle{\rm d}\sigma\\
=& A_{k,j}\int_{\partial B_{\theta}}\langle\nu,\nabla_xG(p_{k,j}^{(1)},x)\rangle{\rm d}\sigma+o_{\theta}(1)\\
=& -A_{k,j}\int_{\partial B_{\theta}}\langle\nu,\nabla_x\frac{1}{2\pi}\log |x-p_{k,j}^{(1)}|\rangle{\rm d}\sigma+o_{\theta}(1) \\
=&-A_{k,j}+o_{\theta}(1),
\end{split}
\end{align}
where $\lim_{\theta\to 0}o_{\theta}(1)=0$, and we have used the fact that all the terms with $l\neq j$ are of lower order. Let us observe that
\begin{equation}\label{pi-l-4}
\begin{split}
&\int_{\partial B(0,\theta)}\langle\nu,\nabla_x\partial_{y_h}\log|z|\rangle{\rm d}\sigma=-\int_{\partial B(0,\theta)}\sum_{i=1}^2\frac{z_i}{|z|}\frac{\delta_{ih}|z|^2-2z_iz_h}{|z|^4}{\rm d}\sigma=0, \\
&\int_{\partial B(0,\theta)}\langle\nu,\nabla_x\frac{\partial^2}{\partial y_h^2}\log|z|\rangle{\rm d}\sigma=-\int_{\partial B(0,\theta)}\big(\frac{2}{|z|^3}-\frac{4z_h^2}{|z|^5}\big){\rm d}\sigma=0, \\
&\int_{\partial B(0,\theta)}\langle\nu,\nabla_x\frac{\partial^2}{\partial y_hy_i}\log|z|\rangle{\rm d}\sigma=-\int_{\partial B(0,\theta)}\big(\frac{4z_hz_i}{|z|^5}-\frac{8z_hz_i}{|z|^5}\big){\rm d}\sigma=0.
\end{split}
\end{equation}
From (\ref{pi-l-2})$\sim$(\ref{pi-l-4}), and since the left hand side below does not depend on $\theta$, letting $\theta\to 0$ we obtain
\begin{equation*}
\int_{\partial B(p_{k,j}^{(1)},r)}\langle \nu,\nabla\bar{G}_k\rangle{\rm d}\sigma=-A_{k,j},
\end{equation*}
which together with (\ref{pi-l-1}) concludes the proof of Lemma \ref{lem-PI1-left}.
\end{proof}
\begin{lem}\label{lem-PI1-right}
\begin{equation}\label{PI-1-r-1}
\begin{split}
&{\rm (RHS)}\ {\rm of}\ (\ref{PI-1})\\
=&-2(1+\alpha_j)A_{k,j}-\frac{4\pi^2\big[\Delta \log h(p_j)+\rho_*-N^*-2K(p_j)\big]}{\big(\rho_kh_j(p_{k,j}^{(1)})\big)^{\frac{1}{1+\alpha_1}}(1+\alpha_1)\sin \frac{\pi}{1+\alpha_1}}b_0e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}\\
&+o(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}),\qquad 1\leq j\leq t.
\end{split} \end{equation} \begin{align} &{\rm (RHS)}\ {\rm of}\ (\ref{PI-1})=-2(1+\alpha_j)A_{k,j}+o(e^{-\frac{\lambda_{k,j}^{(1)}}{2(1+\alpha_1)}}),\quad t+1\leq j\leq \tau. \label{PI-1-r-2} \\ &{\rm (RHS)}\ {\rm of}\ (\ref{PI-1})=-2(1+\alpha_j)A_{k,j}+o(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}),\quad \tau+1\leq j\leq m. \label{PI-1-r-3} \end{align} \end{lem} \begin{proof}[\textbf{Proof}] We use $K_1,K_2,K_3$ to denote the three terms on the right hand of (\ref{PI-1}). The first two terms are quite easy to estimate: \begin{equation}\label{K1} K_1=\int_{\partial B(p_{k,j}^{(1)},r)}rf_k\,{\rm d}\sigma=\int_{\partial B(p_{k,j}^{(1)},r)}\rho_k\tilde{h}_jr^{2\alpha_j+1}e^{u_k^{(1)}}(\varsigma_k+o(1)){\rm d}\sigma=O(e^{-\lambda_{k,j}^{(1)}}), \end{equation} \begin{equation}\label{K2} K_2=-2(1+\alpha_j)A_{k,j}. \end{equation} More work is needed for \begin{equation*} K_3=-\int_{B(p_{k,j}^{(1)},r)}f_ke^{\phi_j}\langle \nabla(\log \tilde{h}_j+\phi_{k,j}),x-p_{k,j}^{(1)}\rangle{\rm d}x. \end{equation*} First we use $\nabla\phi_j(p_{k,j}^{(1)})=0$ to write $\nabla (\log \tilde h_j+\phi_{k,j})(x)$ as \begin{equation}\label{expansion} \begin{split} &\nabla(\log \tilde{h}_j+\phi_{k,j})(x)\\ =&\nabla(\log h_j+\phi_{k,j})(p_{k,j}^{(1)})+\langle D^2(\log \tilde{h}_j+\phi_{k,j})(p_{k,j}^{(1)}),x-p_{k,j}^{(1)}\rangle+O(|x-p_{k,j}^{(1)}|^2). \end{split} \end{equation} Then we evaluate $K_3$ in three cases: $\mathbf{Case\ 1:}$ $1\leq j\leq t$ $(\alpha_j=\alpha_1)$. The assumption $D(\mathbf{p})=0$ and (\ref{phi-kj}) (\ref{rho-k}) imply \begin{equation*} \nabla(\log h_j+\phi_{k,j})(p_{k,j}^{(1)})=O(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}). \end{equation*} Thus, after the scaling $x=\epsilon_{k,j}z+p_{k,j}^{(1)}$, the first order term can be estimated as follows: \begin{equation}\label{K3-first-1} \int_{B(p_{k,j}^{(1)},r)}f_ke^{\phi_j}\langle \nabla(\log h_j+\phi_{k,j})(p_{k,j}^{(1)}),x-p_{k,j}^{(1)}\rangle{\rm d}x=O(e^{-(\frac{1}{2(1+\alpha_j)}+\frac{1}{1+\alpha_1})\lambda_{k,j}^{(1)}}). \end{equation} For the second order term that contains $D^2(\log\tilde{h}_j+\phi_{k,j})(p_{k,j}^{(1)})$, we have \begin{align*} &\int_{B(p_{k,j}^{(1)},r)}f_ke^{\phi_j} \langle D^2(\log \tilde{h}_j+\phi_{k,j})(p_{k,j}^{(1)})(x-p_{k,j}^{(1)}),x-p_{k,j}^{(1)}\rangle{\rm d}x \\ =&\int_{B(p_{k,j}^{(1)},r)}\frac{\rho_kh_j(p_{k,j}^{(1)})e^{\lambda_{k,j}^{(1)}}|x-p_{k,j}^{(1)}|^{2\alpha_j}}{\Big(1+\frac{\rho_kh_j(p_{k,j}^{(1)})}{8(1+\alpha_j)^2}e^{\lambda_{k,j}^{(1)}}|x-p_{k,j}^{(1)}|^{2(1+\alpha_j)}\Big)^2}\big(1+O(|x-p_{k,j}^{(1)}|+\epsilon_{k,j}+\epsilon_{k,1}^2)\big) \\ &\times\varsigma_k(x)(1+o(1))\langle D^2(\log \tilde{h}_j+\phi_{k,j})(p_{k,j}^{(1)})(x-p_{k,j}^{(1)}),x-p_{k,j}^{(1)}\rangle{\rm d}x \\ =&\epsilon_{k,j}^2\int_{|z|<\frac{r}{\epsilon_{k,j}}}\frac{8(1+\alpha_j)^2|z|^{2\alpha_j}}{(1+|z|^{2(1+\alpha_j)})^2}\langle D^2(\log \tilde{h}_j+\phi_{k,j})(p_{k,j}^{(1)})z,z\rangle\varsigma_{k,j}(z){\rm d}z+o(\epsilon_{k,j}^2). \end{align*} The expression above can be greatly simplified by this beautiful identity: \begin{equation*} \int_{0}^{\infty}\frac{8(1+\alpha_j)^2s^{2\alpha_j+3}}{(1+s^{2(1+\alpha_j)})^2}\frac{1-s^{2(1+\alpha_j)}}{1+s^{2(1+\alpha_j)}}{\rm d}s=-\frac{4\pi}{(1+\alpha_j)\sin\frac{\pi}{1+\alpha_j}}. 
\end{equation*} Consequently, Lemma \ref{lem-limit-1} and the two identities above lead to \begin{equation}\label{K3-second-1} \begin{split} &\int_{B(p_{k,j}^{(1)},r)}f_ke^{\phi_j} \langle D^2(\log \tilde{h}_j+\phi_{k,j})(p_{k,j}^{(1)})(x-p_{k,j}^{(1)}),x-p_{k,j}^{(1)}\rangle{\rm d}x \\ =&\epsilon_{k,j}^2\pi\Delta(\log \tilde{h}_j+\phi_{k,j})(p_{k,j}^{(1)})b_0\int_{0}^{\infty}\frac{8(1+\alpha_j)^2s^{2\alpha_j+3}}{(1+s^{2(1+\alpha_j)})^2}\frac{1-s^{2(1+\alpha_j)}}{1+s^{2(1+\alpha_j)}}{\rm d}s+o(\epsilon_{k,j}^2) \\ =&-\frac{4\pi^2\big[\Delta \log h(p_j)+\rho_*-N^*-2K(p_j)\big]}{\big(\rho_kh_j(p_{k,j}^{(1)})\big)^{\frac{1}{1+\alpha_1}}(1+\alpha_1)\sin \frac{\pi}{1+\alpha_1}}b_0e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}+o(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}). \end{split} \end{equation} Also elementary estimate gives \begin{equation}\label{K3-third-1} \begin{split} &\int_{B(p_{k,j}^{(1)},r)}f_ke^{\phi_j}\langle O(|x-p_{k,j}^{(1)}|^2),x-p_{k,j}^{(1)}\rangle{\rm d}x \\ =&O(1)e^{-\frac{3}{2(1+\alpha_j)}\lambda_{k,j}^{(1)}}\int_{|z|<\frac{r}{\epsilon_{k,j}}}\frac{|z|^{2\alpha_j+3}}{(1+|z|^{2(1+\alpha_j)})^2}{\rm d}z \\ =& O(1)\big(e^{-\frac{3}{2(1+\alpha_j)}\lambda_{k,j}^{(1)}}+e^{-\lambda_{k,j}^{(1)}}\big) = o(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_j}}). \end{split} \end{equation} Therefore, we complete the proof of (\ref{PI-1-r-1}) by using (\ref{K1}) (\ref{K2}) and (\ref{K3-first-1})$\sim$(\ref{K3-third-1}). Note that the leading term in the second order term is ignored at this stage, since the requirement of error in the current step is still crude. \smallskip $\mathbf{Case\ 2:}$ $t+1\leq j\leq \tau$ $(0<\alpha_j<\alpha_1)$. For the first term it is easy to see that \begin{equation}\label{K3-first-2} \int_{B(p_{k,j}^{(1)},r)}f_ke^{\phi_j} \langle \nabla(\log h_j+\phi_{k,j})(p_{k,j}^{(1)}),x-p_{k,j}^{(1)}\rangle{\rm d}x=o(e^{-\frac{\lambda_{k,j}^{(1)}}{2(1+\alpha_j)}}). \end{equation} For the second order term we have \begin{equation}\label{K3-second-2} \begin{split} &\int_{B(p_{k,j}^{(1)},r)}f_ke^{\phi_j} \langle D^2(\log \tilde{h}_j+\phi_{k,j})(p_{k,j}^{(1)})(x-p_{k,j}^{(1)}),x-p_{k,j}^{(1)}\rangle{\rm d}x\\ =&O(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_j}})=o(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}), \end{split} \end{equation} where we used the scaling $x=\epsilon_{k,j}z+p_{k,j}^{(1)}$ and $\alpha_j<\alpha_1$. Similar to (\ref{K3-third-1}), we know \begin{equation}\label{K3-third-2} \begin{split} \int_{B(p_{k,j}^{(1)},r)}f_ke^{\phi_j}\langle O(|x-p_{k,j}^{(1)}|^2),x-p_{k,j}^{(1)}\rangle{\rm d}x=o(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_j}}). \end{split} \end{equation} Therefore (\ref{PI-1-r-2}) follows from (\ref{K1}), (\ref{K2}) and (\ref{K3-first-2})$\sim$(\ref{K3-third-2}). \smallskip $\mathbf{Case\ 3:}$ $\tau+1\leq j\leq m$ $(\alpha_j=0)$. In view of (\ref{first-deriv-est}), we get \begin{equation*} \nabla(\log h_j+\phi_{k,j})(p_{k,j}^{(1)})=\nabla(\log h+G_j^*)(p_j)+O(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}})=O(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}). \end{equation*} The first order term is rather small: \begin{equation}\label{K3-first-3} \int_{B(p_{k,j}^{(1)},r)}f_ke^{\phi_j} \langle D(\log h_j+\phi_{k,j})(p_{k,j}^{(1)}),x-p_{k,j}^{(1)}\rangle{\rm d}x=O(e^{-(\frac{1}{2}+\frac{1}{1+\alpha_1})\lambda_{k,j}^{(1)}}). 
\end{equation} For the second order term we have \begin{align*} &\int_{B(p_{k,j}^{(1)},r)}f_ke^{\phi_j} \langle D^2(\log \tilde{h}_j+\phi_{k,j})(p_{k,j}^{(1)})(x-p_{k,j}^{(1)}),x-p_{k,j}^{(1)}\rangle{\rm d}x \\ =&\epsilon_{k,j}^2\int_{|z|<\frac{r}{\epsilon_{k,j}}}\frac{8}{(1+|z|^{2})^2}\langle D^2(\log \tilde{h}_j+\phi_{k,j})(p_{k,j}^{(1)})z,z\rangle \varsigma_{k,j}(z){\rm d}z+O(e^{-\lambda_{k,j}^{(1)}}) \\ =&\epsilon_{k,j}^2\pi\Delta(\log \tilde{h}_j+\phi_{k,j})(p_{k,j}^{(1)})b_0\int_{0}^{\frac{r}{\epsilon_{k,j}}}\frac{8s^3}{(1+s^{2})^2}\frac{1-s^{2}}{1+s^{2}}{\rm d}s+O(e^{-\lambda_{k,j}^{(1)}}), \end{align*} where we have used Lemma \ref{lem-limit-1} and symmetry. A direct computation gives \begin{equation*} \int_{0}^R\frac{8s^3}{(1+s^{2})^2}\frac{1-s^{2}}{1+s^{2}}{\rm d}s=4\Big(2-\log (1+R^2)-\frac{3}{1+R^2}+\frac{1}{(1+R^2)^2}\Big), \end{equation*} and consequently \begin{equation}\label{K3-second-3} \begin{split} \int_{B(p_{k,j}^{(1)},r)}f_ke^{\phi_j}\langle D^2(\log \tilde{h}_j+\phi_{k,j})(p_{k,j}^{(1)})(x-p_{k,j}^{(1)}),x-p_{k,j}^{(1)}\rangle{\rm d}x=O(\lambda_{k,j}^{(1)}e^{-\lambda_{k,j}^{(1)}}). \end{split} \end{equation} Finally, by scaling we immediately observe that \begin{equation}\label{K3-third-3} \begin{split} &\int_{B(p_{k,j}^{(1)},r)}f_ke^{\phi_j}\langle O(|x-p_{k,j}^{(1)}|^2),x-p_{k,j}^{(1)}\rangle{\rm d}x \\ =&O(e^{-\frac{3}{2}\lambda_{k,j}^{(1)}})\int_{|z|<\frac{r}{\epsilon_{k,j}}}\frac{|z|^{3}}{(1+|z|^{2})^2}{\rm d}z=O(e^{-\lambda_{k,j}^{(1)}}). \end{split} \end{equation} Therefore, $K_3$ is small in this case as well: \begin{equation}\label{K3-est-2} K_3=o(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}),\quad \tau+1\leq j\leq m, \end{equation} where $\alpha_1>0$ is used. Lemma \ref{lem-PI1-right} is established. \end{proof} Since $|A_{k,j}|=O(1)$, (\ref{PI-1}) along with Lemma \ref{lem-PI1-left} and Lemma \ref{lem-PI1-right} implies the initial estimate for $A_{k,j}$. \begin{cor}\label{cor-A-kj} \begin{equation}\label{A-kj-est} |A_{k,j}|=o(e^{-\frac{\lambda_{k,j}^{(1)}}{2(1+\alpha_1)}}),\quad 1 \leq j \leq m. \end{equation} \begin{flushright} \qed \end{flushright} \end{cor} Based on (\ref{A-kj-est}), we can improve the estimates in (\ref{PI-1-l}) and (\ref{GRF-est-1}): \begin{equation}\label{PI-1-l-re} {\rm (LHS)}\ {\rm of}\ (\ref{PI-1})=-4(1+\alpha_j)A_{k,j}+o(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}),\quad 1\leq j\leq m. \end{equation} \begin{equation}\label{GRF-est-2} \varsigma_k(x)-\bar{\varsigma}_k=o(e^{-\frac{\lambda_{k,1}^{(1)}}{2(1+\alpha_1)}}) \quad {\rm in}\ \; C^1\Big(M\setminus\bigcup_{j=1}^m B(p_{k,j}^{(1)},\theta)\Big). \end{equation} The estimate (\ref{GRF-est-2}), which is the refined $C^1$-estimate of $\varsigma_k$ away from the blowup points, will help to improve the estimate of the right-hand side of the Pohozaev-type identity (\ref{PI-1}) and the estimate of $D\varsigma_k$; the latter will be used in section \ref{pf-uni-2}. In order to achieve this goal, we analyse the projections of $\varsigma_{k,j}$ in more detail. For $1\leq j\leq \tau$, we recall the equation of $\varsigma_k$ in $B(p_{k,j}^{(1)},r_0)$: \begin{equation*} \begin{split} \left\{ \begin{array}{lcl} \Delta \varsigma_k+\rho_k\tilde{h}_j|x-p_{k,j}^{(1)}|^{2\alpha_j}e^{U_{k,j}^{(1)}+G_{k,j}^{(1)}-G_{k,j}^{(1)}(p_{k,j}^{(1)})+\eta_{k,j}^{(1)}}\varsigma_k\frac{1-e^{u_k^{(2)}-u_k^{(1)}}}{u_k^{(1)}-u_k^{(2)}}=0, \\ |\varsigma_k|\leq 1, \end{array} \right.
\end{split} \end{equation*} and set the following quantities for convenience: \begin{align*} &a_{k,j}=\nabla(\log h_j+G_{k,j}^{(1)})(p_{k,j}^{(1)}), \quad d_k=\parallel u_k^{(1)}-u_k^{(2)}\parallel_{L^{\infty}(M)}, \\ &n_0=\max\big\{n\in\mathbb{N}:n\leq\frac{1}{2\gamma} \big\},\quad U_{j}(r)=\log\frac{8(1+\alpha_j)^2}{(1+r^{2(1+\alpha_j)})^2}. \end{align*} Then the equation for $\varsigma_k$ becomes \begin{equation*} \Delta \varsigma_k+\frac{\rho_kh_j(p_{k,j}^{(1)})e^{\lambda_{k,j}^{(1)}}|x-p_{k,j}^{(1)}|^{2\alpha_j}e^{g_k(x)}}{\big(1+\frac{\rho_kh_j(p_{k,j}^{(1)})}{8(1+\alpha_j)^2}e^{\lambda_{k,j}^{(1)}}|x-p_{k,j}^{(1)}|^{2(1+\alpha_j)}\big)^2}\Big\{\sum_{n=0}^{n_0}\frac{(-d_k)^{n}}{(n+1)!}\varsigma_k^{n+1}+O(d_k^{n_0+1})\Big\}=0, \end{equation*} where \begin{align*} g_k(x)=&\langle a_{k,j},x-p_{k,j}^{(1)}\rangle\Big\{1-\frac{2(1+\alpha_j)}{\alpha_j}\Big(1+\frac{\rho_kh_j(p_{k,j}^{(1)})}{8(1+\alpha_j)^2}e^{\lambda_{k,j}^{(1)}}|x-p_{k,j}^{(1)}|^{2(1+\alpha_j)}\Big)^{-1}\Big\} \\ &+d_j\log\Big(2+e^{\frac{\lambda_{k,j}^{(1)}}{2(1+\alpha_j)}}|x-p_{k,j}^{(1)}|\Big)e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_j}}+O(|x-p_{k,j}^{(1)}|^2)+O(\epsilon_{k,1}^2). \end{align*} After scaling $x=\epsilon_{k,j}z+p_{k,j}^{(1)}$, we have \begin{align}\label{equ-varsigma_kj} \Delta \varsigma_{k,j}(z)+\frac{8(1+\alpha_j)^2|z|^{2\alpha_j}}{(1+|z|^{2(1+\alpha_j)})^2}\varsigma_{k,j}(z)=E_{k,j}(z), \end{align} where \begin{align*} &E_{k,j}(z)=\frac{8(1+\alpha_j)^2|z|^{2\alpha_j}}{(1+|z|^{2(1+\alpha_j)})^2} \bigg\{-\epsilon_{k,j}\langle a_{k,j},z\rangle\big(1-\frac{2(1+\alpha_j)}{\alpha_j}\frac{1}{1+|z|^{2(1+\alpha_j)}}\big)\varsigma_{k,j}(z) \\ &+ \Big[1+\epsilon_{k,j}\langle a_{k,j},z\rangle\big(1-\frac{2(1+\alpha_j)}{\alpha_j}\frac{1}{1+|z|^{2(1+\alpha_j)}}\big)\Big]\sum_{n=1}^{n_0}\frac{(-1)^{n+1}d_k^{n}}{(n+1)!}\varsigma_{k,j}^{n+1}+o(\epsilon_{k,1})\bigg\}. \end{align*} For each integer $l\ge 0$ we define the projections of frequency $l$ as \begin{align*} &\xi_l(r)=\frac{1}{2\pi}\int_{0}^{2\pi}\varsigma_{k,j}(r\cos\theta,r\sin\theta)\cos(l\theta){\rm d}\theta, \\ &\tilde{\xi}_l(r)=\frac{1}{2\pi}\int_{0}^{2\pi}\varsigma_{k,j}(r\cos\theta,r\sin\theta)\sin(l\theta){\rm d}\theta. \end{align*} Clearly it suffices to study $\xi_l$; the functions $\tilde{\xi}_l$ can be treated in the same way. Equation (\ref{equ-varsigma_kj}) shows that $\xi_l$ satisfies \begin{align*} \xi_l^{''}+\frac{1}{r}\xi_l^{'}+\Big(r^{2\alpha_j}e^{U_j}-\frac{l^2}{r^2}\Big)\xi_l=\tilde{E}_{l}(r),\quad l\ge 1, \end{align*} where \begin{align*} &\tilde{E}_{1}(r)=r^{2\alpha_j}e^{U_j}\Big\{-\frac{a_{k,j}^1}{4}\epsilon_{k,j}r\big(1-\frac{2(1+\alpha_j)}{\alpha_j}\frac{1}{1+r^{2(1+\alpha_j)}}\big)\xi_0+O(d_k\xi_1)+o(\epsilon_{k,1})\Big\}, \\ &\tilde{E}_{2}(r)=r^{2\alpha_j}e^{U_j}\Big\{-\frac{a_{k,j}^1}{4}\epsilon_{k,j}r\big(1-\frac{2(1+\alpha_j)}{\alpha_j}\frac{1}{1+r^{2(1+\alpha_j)}}\big)\xi_1+O(d_k\xi_2)+o(\epsilon_{k,1})\Big\}, \\ &\tilde{E}_{l}(r)=r^{2\alpha_j}e^{U_j}\Big\{O(d_k\xi_l)+o(\epsilon_{k,1})\Big\},\qquad l\ge 3, \end{align*} and $a_{k,j}^1$ is the first component of $a_{k,j}$. Moreover, from (\ref{GRF-est-2}) we obtain that $\xi_l(0)=o(1)$ for all $l\ge 1$ and \begin{equation}\label{ode-boundary} \begin{split} &|\xi_l(r)|\leq 1,\ \ \quad r\in(0,\frac{r_0}{\epsilon_{k,j}}),\ \ \quad l\ge 0, \\ &\xi_l(r)=o(\epsilon_{k,1}),\quad r\sim e^{\frac{\lambda_{k,j}^{(1)}}{2(1+\alpha_j)}},\quad l\ge 1. \end{split} \end{equation} From the equation of $\xi_l$ and the maximum principle, it suffices to consider finitely many values of $l$.
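Before carrying out this ODE analysis, we record a small numerical sanity check of the two closed-form radial integrals used in Cases 1 and 3 of the proof of Lemma \ref{lem-PI1-right}. The following script is only an illustration and plays no role in the argument; it assumes Python with NumPy and SciPy available.
\begin{verbatim}
# Illustrative check of the two closed-form radial integrals (Cases 1 and 3).
import numpy as np
from scipy.integrate import quad

def lhs_alpha(alpha):
    # integral over (0, infinity) appearing in Case 1 (alpha_j > 0)
    f = lambda s: (8.0 * (1 + alpha)**2 * s**(2*alpha + 3)
                   / (1 + s**(2*(1 + alpha)))**2
                   * (1 - s**(2*(1 + alpha))) / (1 + s**(2*(1 + alpha))))
    val, _ = quad(f, 0.0, np.inf, limit=200)
    return val

def rhs_alpha(alpha):
    # stated closed form: -4*pi / ((1+alpha) * sin(pi/(1+alpha)))
    return -4.0 * np.pi / ((1 + alpha) * np.sin(np.pi / (1 + alpha)))

def lhs_R(R):
    # truncated integral appearing in Case 3 (alpha_j = 0)
    f = lambda s: 8.0 * s**3 / (1 + s**2)**2 * (1 - s**2) / (1 + s**2)
    val, _ = quad(f, 0.0, R, limit=200)
    return val

def rhs_R(R):
    # stated closed form: 4*(2 - log(1+R^2) - 3/(1+R^2) + 1/(1+R^2)^2)
    return 4.0 * (2 - np.log(1 + R**2) - 3.0/(1 + R**2) + 1.0/(1 + R**2)**2)

for a in (0.5, 0.75, 1.5):
    print("alpha =", a, lhs_alpha(a), rhs_alpha(a))
for R in (1.0, 10.0, 100.0):
    print("R =", R, lhs_R(R), rhs_R(R))
\end{verbatim}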
Without loss of generality, we consider $1\leq l\leq l_0$ in the following analysis. Let us denote $\delta_{l,j}=\frac{l}{1+\alpha_j}$ and consider the homogeneous ordinary differential equation \begin{equation}\label{ode} \xi_l^{''}+\frac{1}{r}\xi_l^{'}+\Big(r^{2\alpha_j}e^{U_j}-\frac{l^2}{r^2}\Big)\xi_l=0. \end{equation} By direct computation, we can verify that the following two functions are two fundamental solutions of (\ref{ode}) \begin{align*} &\xi_{l,1}(r)=\frac{(\delta_{l,j}+1)r^l+(\delta_{l,j}-1)r^{2(1+\alpha_j)+l}}{1+r^{2(1+\alpha_j)}},\\ &\xi_{l,2}(r)=\frac{(\delta_{l,j}+1)r^{2(1+\alpha_j)-l}+(\delta_{l,j}-1)r^{-l}}{1+r^{2(1+\alpha_j)}}. \end{align*} Using $|\xi_l|\leq 1$ we have $C_{l,2}=0$, that is \begin{equation*} \xi_l(r)=C_{l,1}\xi_{l,1}(r)+\xi_{l,p}(r) \end{equation*} where $C_{l,1}$ is a constant, and \begin{equation}\label{solution-p} \xi_{l,p}(r)=\Big(\int\frac{w_1}{w}{\rm d}r\Big)\xi_{l,1}(r)+\Big(\int\frac{w_2}{w}{\rm d}r\Big)\xi_{l,2}(r) \end{equation} for \begin{equation*} w= \begin{vmatrix} \xi_{l,1} & \xi_{l,2} \\ \xi_{l,1}^{'} & \xi_{l,2}^{'} \end{vmatrix},\quad w_1=\begin{vmatrix} 0 & \xi_{l,2} \\ \tilde{E}_l & \xi_{l,2}^{'} \end{vmatrix},\quad w_2=\begin{vmatrix} \xi_{l,1} & 0 \\ \xi_{l,1}^{'} & \tilde{E}_l \end{vmatrix}.\quad \end{equation*} It is easy to see that $w^{'}=(\xi_{l,1}\xi_{l,2}^{'}-\xi_{l,1}^{'}\xi_{l,2})^{'}=-\frac{1}{r}w$, which means $w(r)\sim \frac{1}{r}$. Next, let us estimate $\xi_{l}$ in $(0,\frac{r_0}{\epsilon_{k,j}})$ for $l\ge 1$. \smallskip For $1\leq j\leq t$, the assumption $D(\mathbf{p})=0$ implies $a_{k,j}=O(\epsilon_{k,1}^2)$. Furthermore, for $t+1\leq j\leq \tau$, it is easy to see that $\epsilon_{k,j}=o(\epsilon_{k,1})$. Therefore, for all $1\leq j\leq \tau$, we estimate $\tilde{E}_l$ as follows \begin{equation}\label{E-1} \begin{split} &\tilde{E}_{l}(r)=r^{2\alpha_j}e^{U_j}\big\{o(\epsilon_{k,1})r+O(d_k\xi_l)+o(\epsilon_{k,1})\big\},\quad l=1,2; \\ &\tilde{E}_{l}(r)=r^{2\alpha_j}e^{U_j}\big\{O(d_k\xi_l)+o(\epsilon_{k,1})\big\},\, \qquad\quad\qquad l\ge 3. \end{split} \end{equation} Roughly, \begin{equation}\label{E-2} \begin{split} &\tilde{E}_{l}(r)=r^{2\alpha_j}e^{U_j}\big\{o(\epsilon_{k,1})r+O(d_k)+o(\epsilon_{k,1})\big\},\quad l=1,2; \\ &\tilde{E}_{l}(r)=r^{2\alpha_j}e^{U_j}\big\{O(d_k)+o(\epsilon_{k,1})\big\},\, \qquad\quad\qquad l\ge 3. \end{split} \end{equation} By using the above estimates (\ref{E-2}) for $\tilde{E}_l$ and (\ref{solution-p}), we have \begin{align*} \xi_{l,p}(r)=&\big(O(d_k)+o(\epsilon_{k,1})\big)\bigg\{\Big(\int_{r}^{\infty}s^{2\alpha_j+1}e^{U_j(s)}(s+1)\xi_{l,2}(s){\rm d}s\Big)\xi_{l,1}(r)\\ &+\Big(\int_{r}^{\infty}s^{2\alpha_j+1}e^{U_j(s)}(s+1)\xi_{l,1}(s){\rm d}s\Big)\xi_{l,2}(r)\bigg\},\quad l=1,2.\\ \xi_{l,p}(r)=&\big(O(d_k)+o(\epsilon_{k,1})\big)\bigg\{\Big(\int_{r}^{\infty}s^{2\alpha_j+1}e^{U_j(s)}\xi_{l,2}(s){\rm d}s\Big)\xi_{l,1}(r)\\ &+\Big(\int_{r}^{\infty}s^{2\alpha_j+1}e^{U_j(s)}\xi_{l,1}(s){\rm d}s\Big)\xi_{l,2}(r)\bigg\},\quad l\ge 3. \end{align*} Direct computation shows, for $0<r<1$ \begin{align*} \xi_{l,p}(r)=\big(O(d_k)+o(\epsilon_{k,1})\big)\big(r^l+r^{2\alpha_j+2}\big),\quad l\ge 1; \end{align*} and for $1<r<\frac{r_0}{\epsilon_{k,j}}$ \begin{align*} \xi_{l,p}(r)=\left\{ \begin{array}{lcl} \big(O(d_k)+o(\epsilon_{k,1})\big)\big(r^{-l}+r^{-(2\alpha_j+1)}\big),\quad l=1,2, \\ \big(O(d_k)+o(\epsilon_{k,1})\big)\big(r^{-l}+r^{-(2\alpha_j+2)}\big),\quad l\ge 3. \end{array} \right. 
\end{align*} Consequently \begin{equation}\label{xi-est-1} \parallel \xi_{l,p}\parallel_{L^{\infty}((0,\frac{r_0}{\epsilon_{k,j}}))}=O(d_k)+o(\epsilon_{k,1}),\quad 1\leq l\leq l_0. \end{equation} Combining (\ref{E-1}) and (\ref{xi-est-1}), we rewrite $\tilde{E}_l$ as \begin{equation}\label{E-3} \begin{split} &\tilde{E}_{l}(r)=r^{2\alpha_j}e^{U_j}\big\{o(\epsilon_{k,1})r+O(d_k^2)+o(\epsilon_{k,1})\big\},\quad l=1,2; \\ &\tilde{E}_{l}(r)=r^{2\alpha_j}e^{U_j}\big\{O(d_k^2)+o(\epsilon_{k,1})\big\},\, \qquad\quad\qquad l\ge 3. \end{split} \end{equation} Then repeating the above argument $n_0$ times, we obtain \begin{equation}\label{xi-est-2} \parallel \xi_{l,p}\parallel_{L^{\infty}((0,\frac{r_0}{\epsilon_{k,j}}))}=o(\epsilon_{k,1}),\quad 1\leq l\leq l_0. \end{equation} Then, from (\ref{ode-boundary}), we have \begin{equation*} C_{l,1}=o(\epsilon_{k,1})O(\epsilon_{k,j}^l). \end{equation*} and \begin{equation}\label{xi-cos} \parallel \xi_l\parallel_{L^{\infty}((0,\frac{r_0}{\epsilon_{k,j}}))}=o(\epsilon_{k,1}),\quad l\ge 1. \end{equation} Similarly, \begin{equation}\label{xi-sin} \parallel \tilde{\xi}_l\parallel_{L^{\infty}((0,\frac{r_0}{\epsilon_{k,j}}))}=o(\epsilon_{k,1}),\quad l\ge 1. \end{equation} In other words, all projections of high frequency of $\varsigma_{k,j}$ $(1\leq j\leq \tau)$ are $o(\epsilon_{k,1})$. \medskip Using (\ref{xi-cos}) and (\ref{xi-sin}), we now obtain an important sharper estimate of the right hand side of (\ref{PI-1}). \begin{lem}\label{lem-PI1-right-re} \begin{equation}\label{PI-1-r-4} {\rm (RHS)}\ {\rm of} (\ref{PI-1})=-2(1+\alpha_j)A_{k,j}+o(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}),\quad t+1\leq j\leq \tau. \end{equation} \end{lem} \begin{proof}[\textbf{Proof}] In view of (\ref{K1})$\sim$(\ref{expansion}) and (\ref{K3-first-2})$\sim$(\ref{K3-third-2}), we only need to improve the estimate in (\ref{K3-first-2}). In other words, it is enough to prove the following estimate \begin{equation}\label{K3-first-2-re} \int_{B(p_{k,j}^{(1)},r)}f_ke^{\phi_j}\langle \nabla(\log h_j+\phi_{k,j})(p_{k,j}^{(1)}),x-p_{k,j}^{(1)}\rangle{\rm d}x =o(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}). \end{equation} In fact, by the change of variable $x=\epsilon_{k,j}z+p_{k,j}^{(1)}$, we have \begin{align*} &\int_{B(p_{k,j}^{(1)},r)}f_ke^{\phi_j} \langle \nabla(\log h_j+\phi_{k,j})(p_{k,j}^{(1)}),x-p_{k,j}^{(1)}\rangle{\rm d}x \\ =&\int_{B(p_{k,j}^{(1)},r)}\rho_k\tilde{h}_j|x-p_{k,j}^{(1)}|^{2\alpha_j}e^{u_k^{(1)}}\varsigma_k\frac{1-e^{u_k^{(2)}-u_k^{(1)}}}{u_k^{(1)}-u_k^{(2)}}\langle a_{k,j},x-p_{k,j}^{(1)}\rangle{\rm d}x \\ =&\int_{B(p_{k,j}^{(1)},r)}\frac{\rho_kh_j(p_{k,j}^{(1)})e^{\lambda_{k,j}^{(1)}}|x-p_{k,j}^{(1)}|^{2\alpha_j}e^{g_k(x)}}{\big(1+\frac{\rho_kh_j(p_{k,j}^{(1)})}{8(1+\alpha_j)^2}e^{\lambda_{k,j}^{(1)}}|x-p_{k,j}^{(1)}|^{2(1+\alpha_j)}\big)^2} \langle a_{k,j},x-p_{k,j}^{(1)}\rangle \\ &\times\Big\{\sum_{n=0}^{n_0}\frac{(-d_k)^{n}}{(n+1)!}\varsigma_k^{n+1}+O(d_k^{n_0+1})\Big\}{\rm d}x \\ =&\epsilon_{k,j}\int_{|z|<\frac{r}{\epsilon_{k,j}}}\frac{8(1+\alpha_j)^2|z|^{2\alpha_j}}{(1+|z|^{2(1+\alpha_j)})^2}\sum_{n=0}^{n_0}\frac{(-d_k)^{n}}{(n+1)!}\varsigma_{k,j}^{n+1}<a_{k,j},z>{\rm d}x+o(\epsilon_{k,1}^2). \\ \end{align*} where we used the fact $\alpha_j<\alpha_1$ and the definition of $n_0$. 
Then, from symmetry and the estimates (\ref{xi-cos}) and (\ref{xi-sin}) on the high-frequency projections of $\varsigma_{k,j}$, we have the following estimate: \begin{align*} \int_{B(p_{k,j}^{(1)},r)}f_ke^{\phi_j}\langle \nabla(\log h_j+\phi_{k,j})(p_{k,j}^{(1)}),x-p_{k,j}^{(1)}\rangle{\rm d}x=O(\epsilon_{k,j})o(\epsilon_{k,1})+o(\epsilon_{k,1}^2) =o(\epsilon_{k,1}^2). \end{align*} Therefore, (\ref{K3-first-2-re}) holds. Finally, combining (\ref{K3-first-2-re}) with the proof of Lemma \ref{lem-PI1-right}, we obtain the estimate (\ref{PI-1-r-4}). \end{proof} Based on the Pohozaev-type identity (\ref{PI-1}) and its refined estimates, which are (\ref{PI-1-l-re}), (\ref{PI-1-r-2}), (\ref{PI-1-r-3}) and (\ref{PI-1-r-4}), we can improve the estimate for $A_{k,j}$ and prove $b_0=0$. \begin{cor}\label{cor-A-kj-re} \begin{equation}\label{A-kj-est-re} |A_{k,j}|=O(e^{-\frac{\lambda_{k,j}^{(1)}}{1+\alpha_1}}),\quad 1 \leq j \leq m. \end{equation} \begin{flushright} \qed \end{flushright} \end{cor} \begin{prop}\label{prop-b0} $b_0=0$. In particular, $b_{j,0}=0$, for $1\leq j\leq m$. \end{prop} \begin{proof}[\textbf{Proof}] Now the global cancellation property of $f_k$ plays a crucial role: \begin{equation*} \sum_{j=1}^{m}A_{k,j}=\int_{M}f_k{\rm d}\mu=0. \end{equation*} From (\ref{PI-1}), (\ref{PI-1-l-re}), (\ref{PI-1-r-2}), (\ref{PI-1-r-3}) and (\ref{PI-1-r-4}), we see that \begin{align*} b_0e^{-\frac{\lambda_{k,1}^{(1)}}{1+\alpha_1}}\sum_{j=1}^t\big[\Delta h(p_j)+\rho_*-N^*-2K(p_j)\big]\big(\rho_kh_j(p_{k,j}^{(1)})\big)^{\frac{1}{1+\alpha_1}}=o(e^{-\frac{\lambda_{k,1}^{(1)}}{1+\alpha_1}}). \end{align*} On the other hand, from (\ref{uk-ave-2}), it holds that \begin{equation*} h_j^2(p_j)=h_1^2(p_1)e^{G_1^*(p_1)}e^{-G_j^*(p_j)}+o(1),\quad 1\leq j\leq t. \end{equation*} As a consequence, we obtain \begin{equation}\label{b0-est} \begin{split} e^{-\frac{G_1^*(p_1)}{1+\alpha_1}}\big(\rho_*h_1^2(p_1)\big)^{-\frac{1}{1+\alpha_1}}L(\mathbf{p})b_0e^{-\frac{\lambda_{k,1}^{(1)}}{1+\alpha_1}}=o(e^{-\frac{\lambda_{k,1}^{(1)}}{1+\alpha_1}}), \end{split} \end{equation} which together with the assumption $L(\mathbf{p})\neq 0$ implies $b_0=0$. In particular, $b_{j,0}=0$ for $1\leq j\leq m$. \end{proof} \section{Proof of Theorem \ref{main-theorem}}\label{pf-uni-1} \begin{proof}[\textbf{Proof of Theorem \ref{main-theorem} }] Let $p_k^*$ be a maximum point of $|\varsigma_k|$, that is, \begin{equation}\label{max=1} |\varsigma_k(p_k^*)|=1. \end{equation} In view of Lemma \ref{lem-limit-2} and Proposition \ref{prop-b0}, we obtain \begin{equation*} \varsigma_k\to 0 \quad{\rm in}\ \; C_{loc}(M\backslash\{p_1,\cdots,p_m\}). \end{equation*} Therefore, \begin{equation}\label{pk*} \lim\limits_{k\rightarrow\infty}p_k^*=p_j, \end{equation} for some $p_j\in\{p_1,\cdots,p_m\}$. Moreover, denoting $s_k=|p_k^*-p_{k,j}^{(1)}|$, by Lemma \ref{lem-limit-1} and Proposition \ref{prop-b0}, it holds that \begin{equation*} \varsigma_{k,j}\rightarrow 0 \quad{\rm in}\ \; C_{loc}(\mathbb{R}^2).
\end{equation*} Thus, \begin{equation}\label{sk} \lim\limits_{k\rightarrow\infty}\epsilon_{k,j}^{-1}s_k=+\infty. \end{equation} Set $\tilde{\varsigma_k}(x)=\varsigma_k(s_kx+p_{k,j}^{(1)})$ for $|x|<s_k^{-1}r$, where $r>0$ is small enough. Then $\tilde{\varsigma_k}$ satisfies \begin{align*} 0=&\Delta\tilde{\varsigma_k}(x)+\rho_k\tilde{h}_j(s_kx+p_{k,j}^{(1)})s_k^{2(1+\alpha_j)}|x|^{2\alpha_j}c_k(s_kx+p_{k,j}^{(1)})\tilde{\varsigma_k}(x) \\ =& \Delta\tilde{\varsigma_k}(x)+\frac{8(1+\alpha_j)^2(\epsilon_{k,j}^{-1}s_k)^{2(1+\alpha_j)}|x|^{2\alpha_j}}{\big(1+(\epsilon_{k,j}^{-1}s_k)^{2(1+\alpha_j)}|x|^{2(1+\alpha_j)}\big)^2}\big(1+O(s_k|x|)+o(1)\big)\tilde{\varsigma_k}(x). \end{align*} On the other hand, by (\ref{max=1}), we also have \begin{equation}\label{scale-max=1} \Big|\tilde{\varsigma_k}\big(\frac{p_k^*-p_{k,j}^{(1)}}{s_k}\big)\Big|=|\varsigma_k(p_k^*)|=1. \end{equation} In view of (\ref{sk}) and $|\tilde{\varsigma_k}|\leq 1$, we see that $\tilde{\varsigma_k}\rightarrow\tilde{\varsigma_0}$ in $C_{loc}(\mathbb{R}^2\backslash\{0\})$, where $\tilde{\varsigma_0}$ satisfies $\Delta\tilde{\varsigma_0}=0$ in $\mathbb{R}^2\backslash\{0\}$. Since $|\tilde{\varsigma_0}|\leq 1$, the singularity at the origin is removable and $\Delta\tilde{\varsigma_0}=0$ in $\mathbb{R}^2$. Hence, by Liouville's theorem, $\tilde{\varsigma_0}$ is a constant. Recalling that $\frac{|p_k^*-p_{k,j}^{(1)}|}{s_k}=1$ and (\ref{scale-max=1}), we find that $\tilde{\varsigma_0}\equiv 1$ or $\tilde{\varsigma_0}\equiv -1$. Therefore, we obtain that for $k$ large enough \begin{equation}\label{contra1} |\varsigma_k(x)|\geq\frac{1}{2},\quad |x-p_{k,j}^{(1)}|\in\big(\frac{s_k}{2},2s_k\big). \end{equation} By using Lemma \ref{lem-limit-2}, we have \begin{equation}\label{contra2} \varsigma_k(x)=o(1)+o(1)\log R+O(R^{-2(1+\alpha_j)}),\quad |x-p_{k,j}^{(1)}|\in(R\epsilon_{k,j},d), \end{equation} for fixed $d>0$ small enough and arbitrary $R>0$ large enough. \smallskip However, by (\ref{sk}), $\epsilon_{k,j}\ll s_k$. Thus, choosing $R$ large and then $k$ large, $|\varsigma_k(x)|<\frac{1}{4}$ for $|x-p_{k,j}^{(1)}|=s_k$, which contradicts (\ref{contra1}). Theorem \ref{main-theorem} is established. \end{proof} \section{Proof of Theorem \ref{main-theorem-2}}\label{pf-uni-2} In this section, we analyse the behavior of $u_k^{(1)}$ and $u_k^{(2)}$ whose common blowup points include both singular sources and regular points. Throughout this section $\tau<m$, $0<\alpha_j< 1$ for $1\leq j\leq \tau$ and $\alpha_j=0$ for $\tau+1\leq j\leq m$. Our argument is similar to the approach in \cite{bart-4}, where all blowup points are regular points. The fact that $\varsigma_{k,j}\rightarrow 0$ in $C_{loc}(\mathbb{R}^2)$ for all $1\leq j \leq m$ plays a vital role.
\medskip In \cite{lin-yan-uniq}, Lin-Yan obtained the following Pohozaev-type identity: \begin{lemA}\label{Pohozave identity-2} \cite{bart-4,lin-yan-uniq} For $\tau+1\leq j\leq m$, it holds \begin{align}\label{PI-2} \begin{split} & \int_{\partial B(p_{k,j}^{(1)},r)}\Big(<\nu,\nabla\varsigma_k>\nabla_iv_{k,j}^{(1)}+<\nu,\nabla v_{k,j}^{(2)}>\nabla_i\varsigma_k \Big) {\rm d}\sigma \\ & \quad -\frac{1}{2}\int_{\partial B(p_{k,j}^{(1)},r)}<\nabla(v_{k,j}^{(1)}+v_{k,j}^{(2)}),\nabla\varsigma_k>\frac{(x-p_{k,j}^{(1)})_i}{|x-p_{k,j}^{(1)}|} {\rm d}\sigma, \\ =\ & -\int_{\partial B(p_{k,j}^{(1)},r)}\rho_k\tilde{h}_j(x)\frac{e^{u_k^{(1)}}-e^{u_k^{(2)}}}{\parallel u_k^{(1)}-u_k^{(2)} \parallel_{L^{\infty}(M)} }\frac{(x-p_{k,j}^{(1)})_i}{|x-p_{k,j}^{(1)}|} {\rm d}\sigma \\ & \quad + \int_{B(p_{k,j}^{(1)},r)} \rho_k\tilde{h}_j(x)\frac{e^{u_k^{(1)}}-e^{u_k^{(2)}}}{\parallel u_k^{(1)}-u_k^{(2)} \parallel_{L^{\infty}(M)}} \nabla_i\big(\log \tilde{h}_j+\phi_{k,j}\big) {\rm d}x. \end{split} \end{align} \end{lemA} By Lemma 4.6 in \cite{bart-4} and Appendix D in \cite{lin-yan-uniq}, we have: \begin{align}\label{PI-2-r} {\rm (RHS)} \ {\rm of}\ (\ref{PI-2})=e^{-\frac{\lambda_{k,j}^{(1)}}{2}}\Big(\sum_{h=1}^2 D_{h,i}^2(\log \tilde{h}_j+\phi_{k,j} )(p_{k,j}^{(1)})b_{j,h}\Big)B_j+o(e^{-\frac{\lambda_{k,j}^{(1)}}{2}}). \end{align} for $i=1,2$ and $\tau+1\le j\le m$. The detail of this proof can be found in \cite{bart-4}. The LHS of (\ref{PI-2}) boils down to sharp estimates of $\nabla v_{k,j}^{(i)}$ and $\nabla\varsigma_k$ on $\partial B(p_{k,j}^{(1)},r)$. The estimate for $\nabla v_{k,j}^{(i)}$ is established in Lemma \ref{lem-Dv-kj}, and the following lemma provides the estimates for $\nabla\varsigma_k$ (see (\ref{Dsigma_k-1}) for comparison). \begin{lem}\label{lem-C1-est-re} For any $\theta\in(0,r)$ small enough, it holds \begin{align}\label{GRF-est-re} \begin{split} \varsigma_k-\bar{\varsigma}_k=\sum_{j=\tau+1}^m e^{-\frac{\lambda_{k,j}^{(1)}}{2}}&\Big(\sum_{h=1}^2\partial_{y_h}G(y,x)\big|_{y=p_{k,j}^{(1)}}b_{j,h}\Big)B_j+o(e^{-\frac{\lambda_{k,1}^{(1)}}{2}}) \\ &{\rm in}\ \; C^1\Big(M\setminus\bigcup_{j=1}^m B(p_{k,j}^{(1)},\theta)\Big). \end{split} \end{align} \end{lem} \begin{proof}[\textbf{Proof}] Using the same notations in (\ref{J1+J2+J3}) and (\ref{J3}), now we only need to show \begin{equation*} J_1=o(e^{-\frac{\lambda_{k,1}^{(1)}}{2}}),\quad J_2=o(e^{-\frac{\lambda_{k,1}^{(1)}}{2}}). \end{equation*} Indeed, from (\ref{A-kj-est-re}) and the assumption $0< \alpha_1<1$, we have \begin{equation}\label{J1-re} J_1=\sum_{j=1}^m A_{k,j}G(p_{k,j}^{(1)},x)=O(e^{-\frac{\lambda_{k,1}^{(1)}}{1+\alpha_1}})=o(e^{-\frac{\lambda_{k,1}^{(1)}}{2}}) \end{equation} Recall that \begin{align*} J_2=&\sum_{j=1}^\tau\int_{M_j}\big(G(y,x)-G(p_{k,j}^{(1)},x)\big)f_k(y){\rm d}\mu(y) \\ =&\sum_{j=1}^\tau\int_{B(p_{k,j}^{(1)},r_0)}f_k(y)e^{\phi_j(y)}\langle\partial _yG(y,x)\big|_{y=p_{k,j}^{(1)}},y-p_{k,j}^{(1)}\rangle {\rm d}y +O(e^{-\frac{\lambda_{k,1}^{(1)}}{1+\alpha_1}}) \end{align*} Based on (\ref{xi-cos}) and (\ref{xi-sin}), by the method similar to the proof of (\ref{K3-first-2-re}) in Lemma \ref{lem-PI1-right-re}, we have \begin{equation*} \int_{B(p_{k,j}^{(1)},r_0)}f_k(y)e^{\phi_j(y)}\langle\partial _yG(y,x)\big|_{y=p_{k,j}^{(1)}},y-p_{k,j}^{(1)}\rangle {\rm d}y=O(\epsilon_{k,j})o(\epsilon_{k,1})+o(\epsilon_{k,1}^2)=O(\epsilon_{k,1}^2). \end{equation*} Therefore, \begin{equation}\label{J2-re} J_2=O(e^{-\frac{\lambda_{k,1}^{(1)}}{1+\alpha_1}})=o(e^{-\frac{\lambda_{k,1}^{(1)}}{2}}). 
\end{equation} Consequently, (\ref{GRF-est-re}) holds in $C^1\big(M\setminus\bigcup_{j=1}^m B(p_{k,j}^{(1)},\theta)\big)$ and the gradient estimate is \begin{equation}\label{Dsigma_k-2} \begin{split} \nabla\varsigma_k(x)=\sum_{j=\tau+1}^m e^{-\frac{\lambda_{k,j}^{(1)}}{2}}\nabla_x\Big(\sum_{h=1}^2\partial_{y_h}G(y,x)\big|_{y=p_{k,j}^{(1)}}b_{j,h}\Big)B_j+o(e^{-\frac{\lambda_{k,1}^{(1)}}{2}}). \end{split} \end{equation} \end{proof} By the improved estimates of $\nabla v_{k,j}^{(i)}$ and $\nabla\varsigma_k$ in (\ref{Dv-kj-est}) and (\ref{Dsigma_k-2}), we can estimate the left-hand side of (\ref{PI-2}) just as in Lemma 4.7 of \cite{bart-4} or Appendix D of \cite{lin-yan-uniq}; the result is: \begin{equation}\label{PT-l} \begin{split} {\rm (LHS)} \ {\rm of}\ (\ref{PI-2})=&-8\pi\bigg\{\sum_{l\neq j}^{\tau+1,\cdots,m}e^{-\frac{\lambda_{k,l}^{(1)}}{2}}\partial_{x_i}\Big(\sum_{h=1}^2\partial_{y_h}G(y,x)\big|_{y=p_{k,l}^{(1)}}b_{l,h}\Big)B_l\\ &\ \;+e^{-\frac{\lambda_{k,j}^{(1)}}{2}}\partial_{x_i}\Big(\sum_{h=1}^2\partial_{y_h}R(y,x)\big|_{x=y=p_{k,j}^{(1)}}b_{j,h}\Big)B_j\bigg\}+o(e^{-\frac{\lambda_{k,j}^{(1)}}{2}}). \end{split} \end{equation} Finally we prove $b_{j,1}=b_{j,2}=0$ for all $j$. \begin{prop}\label{prop-b1b2} $b_{j,1}=b_{j,2}=0$, for all $j=\tau+1,\cdots,m$. In particular, \begin{equation*} \varsigma_{k,j}\rightarrow 0\quad {\rm in}\ \; C_{loc}(\mathbb{R}^2),\quad {\rm for\ \, all} \ \, j=1,\cdots,m. \end{equation*} \end{prop} \begin{proof}[\textbf{Proof}] Obviously, (\ref{PI-2}) together with (\ref{PI-2-r}) and (\ref{PT-l}) implies, for all $i=1,2$ and $j=\tau+1,\cdots,m$, \begin{align}\label{PI-final} \begin{split} &e^{-\frac{\lambda_{k,j}^{(1)}}{2}}\Big(\sum_{h=1}^2 D_{h,i}^2(\log \tilde{h}_j+\phi_{k,j})(p_{k,j}^{(1)})b_{j,h}\Big)B_j\\ =&-8\pi\sum_{l\neq j}^{\tau+1,\cdots,m}e^{-\frac{\lambda_{k,l}^{(1)}}{2}}\partial_{x_i}\Big(\sum_{h=1}^2\partial_{y_h}G(y,x)\big|_{y=p_{k,l}^{(1)}}b_{l,h}\Big)B_l\\ &-8\pi e^{-\frac{\lambda_{k,j}^{(1)}}{2}}\partial_{x_i}\Big(\sum_{h=1}^2\partial_{y_h}R(y,x)\big|_{x=y=p_{k,j}^{(1)}}b_{j,h}\Big)B_j+o(e^{-\frac{\lambda_{k,j}^{(1)}}{2}}). \end{split} \end{align} Set $\vec{b}=(\tilde{b}_{\tau+1,1}B_{\tau+1},\tilde{b}_{\tau+1,2}B_{\tau+1},\cdots,\tilde{b}_{m,1}B_m,\tilde{b}_{m,2}B_m)$, where \begin{equation*} \tilde{b}_{l,h}=\lim\limits_{k\to +\infty}\big(e^{\frac{\lambda_{k,j}^{(1)}-\lambda_{k,1}^{(1)}}{2}}b_{l,h}\big). \end{equation*} Then by (\ref{p_kj-location}) and letting $k\to+\infty$, we obtain \begin{equation}\label{b-vector} D^2f^*(p_{\tau+1},\cdots,p_m)\cdot \vec{b}=0. \end{equation} By using the non-degeneracy assumption $\det \big(D^2f^*(p_{\tau+1},\cdots,p_m)\big)\neq 0$, we conclude that \begin{equation}\label{b=0} b_{j,1}=b_{j,2}=0,\quad j=\tau+1,\cdots,m. \end{equation} Proposition \ref{prop-b1b2} is established. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{main-theorem-2} }] From Lemma \ref{lem-limit-2} and Proposition \ref{prop-b0}, $\varsigma_k$ tends to $0$ in $C_{loc}(M\backslash\{p_1,\cdots,p_m\})$. By Lemma \ref{lem-limit-1} and Proposition \ref{prop-b1b2}, we have \begin{equation*} \varsigma_{k,j}\rightarrow 0\quad {\rm in}\ \; C_{loc}(\mathbb{R}^2),\quad 1\leq j\leq m. \end{equation*} Theorem \ref{main-theorem-2} then follows exactly as in the last step of the proof of Theorem \ref{main-theorem}. \end{proof} Finally, we prove Theorem \ref{main-theorem-3} and Theorem \ref{main-theorem-4}, which concern Dirichlet problems.
\begin{proof}[\textbf{Proof of Theorems \ref{main-theorem-3}, \ref{main-theorem-4} }] For the blowup solutions to (\ref{equ-flat}), the estimates corresponding to those in section \ref{preliminary} have also been obtained in \cite{chen-lin,zhang2} for $\alpha_j\in\mathbb{R}^+\setminus\mathbb{N}$ and in \cite{chen-lin-sharp,zhang1,gluck} for $\alpha_j=0$. Those preliminary estimates have almost the same form, except that $\phi_j=0$ and $K\equiv 0$, where $\phi_j$ is the conformal factor at $p_j$ and $K$ is the Gaussian curvature of $M$. Moreover, under the regularity assumption on $\partial\Omega$ and the assumption $q_j\in \Omega$ $(1\leq j\leq N)$, it was shown in \cite{ma-wei}, via the moving plane method and Pohozaev identities, that the blowup points of (\ref{equ-flat}) stay away from $\partial\Omega$. Consequently, the terms coming from the boundary of the domain are absorbed into the error terms; in other words, these boundary terms do not affect our argument. On the other hand, the essential estimates obtained in sections \ref{difference}, \ref{anal-pohozaev} and \ref{pf-uni-2} rely only on local analysis. Therefore, these results remain valid for the Dirichlet problem (\ref{equ-flat}), and Theorem \ref{main-theorem-3} and Theorem \ref{main-theorem-4} can be proved in the same way as Theorem \ref{main-theorem} and Theorem \ref{main-theorem-2}, respectively. \end{proof}
https://arxiv.org/abs/1101.5450
Quasi-Monte Carlo rules for numerical integration over the unit sphere $\mathbb{S}^2$
We study numerical integration on the unit sphere $\mathbb{S}^2 \subset \mathbb{R}^3$ using equal weight quadrature rules, where the weights are such that constant functions are integrated exactly. The quadrature points are constructed by lifting a $(0,m,2)$-net given in the unit square $[0,1]^2$ to the sphere $\mathbb{S}^2$ by means of an area preserving map. A similar approach has previously been suggested by Cui and Freeden [SIAM J. Sci. Comput. 18 (1997), no. 2]. We prove three results. The first one is that the construction is (almost) optimal with respect to discrepancies based on spherical rectangles. Further we prove that the point set is asymptotically uniformly distributed on $\mathbb{S}^2$. And finally, we prove an upper bound on the spherical cap $L_2$-discrepancy of order $N^{-1/2} (\log N)^{1/2}$ (where $N$ denotes the number of points). This slightly improves upon the bound on the spherical cap $L_2$-discrepancy of the construction by Lubotzky, Phillips and Sarnak [Comm. Pure Appl. Math. 39 (1986), 149--186]. Numerical results suggest that the $(0,m,2)$-nets lifted to the sphere $\mathbb{S}^2$ have spherical cap $L_2$-discrepancy converging with the optimal order of $N^{-3/4}$.
\section{Introduction} We consider the unit sphere $\mathbb{S}^2 = \{\boldsymbol{z} = (z_1,z_2,z_3) \in \mathbb{R}^3: \|\boldsymbol{z}\| = \sqrt{z_1^2+z_2^2+z_3^2} = 1\}$. Let $f:\mathbb{S}^2\to\mathbb{R}$ be integrable. Then we estimate the integral $\int_{\mathbb{S}^2} f \, \mathrm{d}\sigma_2$, where $\sigma_2$ is the normalized Lebesgue surface area measure (that is $\int_{\mathbb{S}^2} \mathrm{d} \sigma_2 = 1$), by a quasi-Monte Carlo type rule \begin{equation*} Q_N(f) = \frac{1}{N}\sum_{k=0}^{N-1} f(\boldsymbol{z}_k), \end{equation*} where $Z_N = \{\boldsymbol{z}_0,\ldots, \boldsymbol{z}_{N-1}\} \subseteq \mathbb{S}^2$ are the quadrature points on the sphere. Since the surface area measure is normalized to $1$, it follows that we have $Q_N(f) = \int_{\mathbb{S}^2} f \,\mathrm{d}\sigma_2$ for every constant function $f$. In the following we review known results to put the result of this paper into context. Although some results are known for spheres $\mathbb{S}^d$ of dimension $d \ge 2$, we only state them for the sphere $\mathbb{S}^2$ since this paper only deals with the $2$-sphere. Using Stolarsky's invariance principle \cite{St1973} (also see \cite{BrDi2011_pre} and \cite{BrWo20xx}), it follows that the worst-case error for numerical integration in a certain reproducing kernel Hilbert space is given by the spherical cap $L_2$-discrepancy of the quadrature points. To obtain quadrature points we use a transformation from $[0,1]^2$ to $\mathbb{S}^2$ which preserves area. Specifically, we use the transformation $\Phi:[0,1]^2\to\mathbb{S}^2$ with \begin{equation*} \Phi(y_1,y_2) = \left(2 \cos (2\pi y_1) \sqrt{y_2- y_2^2}, 2 \sin (2\pi y_1) \sqrt{y_2- y_2^2}, 1-2 y_2\right). \end{equation*} The function $\Phi$ maps axis-parallel rectangles in the unit square to zonal spherical rectangles of equal area. It is natural then that the discrepancy on the sphere with respect to spherical rectangles is the same as the discrepancy with respect to rectangles in $[0,1]^2$. Since point sets $Z_N$ for which a discrepancy based on $K$-regular test sets ($K$ fixed, see Sj{\"o}gren~\cite{Sj1972}) converges to zero as $N \to \infty$ are uniformly distributed over the sphere, we show that the digital nets lifted to the sphere via $\Phi$ are also uniformly distributed. Furthermore, we study the spherical cap $L_2$-discrepancy of point sets obtained in this way. We prove that the order of convergence for $(0,m,2)$-nets lifted to the sphere via $\Phi$ is $\mathcal{O}(N^{-1/2} (\log N)^{1/2})$. This improves upon the bound on the spherical cap $L_2$-discrepancy in \cite{LuPhSa1986} of $\mathcal{O}(N^{-1/2} \log N)$ by a factor of $\sqrt{\log N}$. On the other hand, the optimal order of convergence is $\mathcal{O}(N^{-3/4})$. But numerical experiments do suggest that the $(0,m,2)$-nets lifted to the sphere via $\Phi$ achieve optimal convergence rate. We conjecture that this order of convergence is the correct one. Due to the difficulties of having a satisfactory notion of 'bounded variation' on $\mathbb{S}^2$ there is no Koksma-Hlawka inequality on $\mathbb{S}^2$ per se. However, the concept of {\em uniform distribution of a sequence of $N$-point configurations with respect to every function in a function space}, say Sobolev spaces over $\mathbb{S}^2$, can be used as quality criterion. For example, Cui and Freeden~\cite{CuFr1997} introduced the concept of a generalized discrepancy on $\mathbb{S}^2$ which involves pseudodifferential operators. 
Using the operator $\mathbf{D} = ( - 2 \Delta^* )^{1/2} ( - \Delta^* + 1 / 4 )^{1/4}$ ($\Delta^*$ is the Beltrami operator on $\mathbb{S}^2$) of order $3/2$ they arrived at \begin{equation*} \left| \frac{1}{N} \sum_{k = 1}^N f(\boldsymbol{x}_k) - \int f \dd \sigma_2 \right| \leq \sqrt{6} \, \mathrm{D}(\{ \boldsymbol{x}_1, \dots, \boldsymbol{x}_N \}; \mathbf{D} ) \, \| f \|_{\mathcal{H}^{3/2}}, \end{equation*} where $f$ is from the Sobolev space $\mathcal{H}^{3/2}(\mathbb{S}^2)$. In this notion, a sequence $\{Z_N\}$ of $N$-point configurations is called {\em $\mathbf{D}$-equidistributed in $\mathcal{H}^{3/2}(\mathbb{S}^2)$} if $\lim_{N\to\infty} \mathrm{D}(\{ \boldsymbol{z}_1, \dots, \boldsymbol{z}_N \}; \mathbf{D} ) = 0$. The generalized discrepancy associated with $\mathbf{D}$ can be easily computed by way of \begin{equation} \label{eq:sum.log.discr} 4 \pi \left[ \mathrm{D}(\{ \boldsymbol{z}_1, \dots, \boldsymbol{z}_N \}; \mathbf{D} ) \right]^2 = 1 - \frac{1}{N^2} \sum_{k, \ell = 1}^N \log \left( 1 + \left\| \boldsymbol{z}_\ell - \boldsymbol{z}_k \right\| / 2 \right)^2. \end{equation} Sloan and Womersley~\cite{SlWo2004} showed that $[\mathrm{D}(\{ \boldsymbol{z}_1, \dots, \boldsymbol{z}_N \}; \mathbf{D} )]^2$ has a natural interpretation as the worst-case error for the equally weighted quadrature rule $Q_N$ associated with the points $\boldsymbol{z}_1, \dots, \boldsymbol{z}_N$ for functions $f$ from the unit ball in $\mathcal{H}^{3/2}(\mathbb{S}^2)$ provided with the reproducing kernel $K(\boldsymbol{y}, \boldsymbol{z}) = 2 [ 1 - \log( 1 + \| \boldsymbol{y} - \boldsymbol{z} \| / 2 ) ]$. In \cite{BrWo20xx} this approach is followed further, yielding an even simpler notion of discrepancy (see Section~\ref{sec:worst-case-error}), namely \begin{equation} \label{eq:sum.dist.discr} \left[ \mathrm{D}(\{ \boldsymbol{z}_1, \dots, \boldsymbol{z}_N \} ) \right]^2 = \frac{4}{3} - \frac{1}{N^2} \sum_{k, \ell = 1}^N \left\| \boldsymbol{z}_\ell - \boldsymbol{z}_k \right\| \end{equation} also used in the setting of the Sobolev space $\mathcal{H}^{3/2}(\mathbb{S}^2)$ now provided with the reproducing kernel $K(\boldsymbol{y}, \boldsymbol{z}) = (8 / 3) - \| \boldsymbol{y} - \boldsymbol{z} \|$. Low-discrepancy configurations in the above contexts can be found by maximizing the respective double sums in \eqref{eq:sum.log.discr} and \eqref{eq:sum.dist.discr}. This leads into the realm of the discrete (minimum) energy problem on the sphere where points are thought to interact according to a Riesz $s$-potential $1/r^s$ ($s>0$) or logarithmic potential $\log(1/r)$ ($s=0$) and $r$ is the Euclidean distance in the ambient space. It is known that the minimizer $Z_N$ ($N\geq2$) of the associated $s$-energy functionals form a sequence which is asymptotically uniformly distributed for each fixed $s\geq0$. We refer the interested reader to \cite{BaBr2009, BeCa2009, BeClDu2004, BoHaSa2008, Br2008, CoKu2007, HaSa2004, HaSa2005, Wa1990, Wa1992b, Wa1992}. We remark that Sun and Chen~\cite{SuCh2008} employ 'spherical basis functions' (as a counter part to radial basis function on spheres) to investigate uniform distribution on spheres. They also show that the minimizers of the functionals induced by a spherical basis function are asymptotically uniformly distributed. 
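Both \eqref{eq:sum.log.discr} and \eqref{eq:sum.dist.discr} can be evaluated directly for any given node set, since they only involve the pairwise Euclidean distances of the points. The following sketch illustrates this; it assumes Python with NumPy, and the uniform random points serve merely as a placeholder configuration (they are not a low-discrepancy construction).
\begin{verbatim}
# Evaluate the discrepancies (eq:sum.log.discr) and (eq:sum.dist.discr)
# for a given point configuration on S^2 (assumes Python 3 with NumPy).
import numpy as np

def pairwise_distances(Z):
    # Euclidean distances ||z_l - z_k|| for all pairs (including k = l)
    return np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)

def generalized_discrepancy(Z):
    # 4*pi*D^2 = 1 - (1/N^2) sum_{k,l} log(1 + ||z_l - z_k||/2)^2
    r = pairwise_distances(Z)
    val = 1.0 - np.mean(np.log((1.0 + r / 2.0)**2))
    return np.sqrt(max(val, 0.0) / (4.0 * np.pi))

def distance_based_discrepancy(Z):
    # D^2 = 4/3 - (1/N^2) sum_{k,l} ||z_l - z_k||
    r = pairwise_distances(Z)
    return np.sqrt(max(4.0 / 3.0 - np.mean(r), 0.0))

# placeholder configuration: N uniform random points on S^2
rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 3))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)
print(generalized_discrepancy(Z), distance_based_discrepancy(Z))
\end{verbatim}
Both evaluations cost $\mathcal{O}(N^2)$ operations; for i.i.d.\ random points the computed values decay like $N^{-1/2}$, whereas well-distributed configurations achieve faster rates, as discussed below.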
Let $C(\boldsymbol{z}, t) {:=} \{ \boldsymbol{y} \in \mathbb{S}^2 : \langle \boldsymbol{y}, \boldsymbol{z} \rangle \leq t \}$ be a spherical cap and let $\mathcal{C} {:=} \{C(\boldsymbol{z},t): \boldsymbol{z} \in \mathbb{S}^2, -1 \le t \le 1\}$ denote the set of all spherical caps. In \cite{St1973}, Stolarsky established a beautiful invariance principle on the $2$-sphere (and higher-dimensional spheres), \begin{equation} \label{eq:stolarsky.inv.prncpl} \frac{1}{N^2} \sum_{k, \ell = 1}^N \left\| \boldsymbol{z}_\ell - \boldsymbol{z}_k \right\| + 4 \left[ D_2( Z_N; \mathbb{S}^2, \mathcal{C} ) \right]^2 = \int_{\mathbb{S}^2} \int_{\mathbb{S}^2} \left\| \boldsymbol{y} - \boldsymbol{z} \right\| \dd \sigma_2(\boldsymbol{y}) \dd \sigma_2(\boldsymbol{z}), \end{equation} connecting the sum of distances, the $L_2$-discrepancy (with respect to spherical caps $C(\boldsymbol{z}, t)$), \begin{equation*} D_2( Z_N; \mathbb{S}^2, \mathcal{C} ) {:=} \left[ \int_{-1}^1 \int_{\mathbb{S}^2} \left| \frac{\left| Z_N \cap C( \boldsymbol{z}, t) \right|}{N} - \sigma_2( C( \boldsymbol{z}, t ) ) \right|^2 \dd \sigma_2( \boldsymbol{z} ) \dd t \right]^{1/2}, \end{equation*} and the distance integral. We observe that, on $\mathbb{S}^2$, the discrepancy in \eqref{eq:sum.dist.discr} is essentially (up to a factor $2$) the $L_2$-discrepancy. Originally, Stolarsky used his invariance principle and sharp result for discrepancy estimates by Schmidt~\cite{Sch1969} to establish bounds for the sum of distances. Beck~\cite{Be1984} then improved Stolarsky's lower bound, finally arriving at \begin{equation} \label{eq:Stolarsky.Beck.bounds} c \, N^{-3/2} \leq \int \int \left\| \boldsymbol{x} - \boldsymbol{y} \right\| \dd \sigma_2(\boldsymbol{x}) \dd \sigma_2(\boldsymbol{y}) - \frac{1}{N^2} \sum_{k, \ell = 1}^N \left\| \boldsymbol{z}_\ell - \boldsymbol{z}_k \right\| \leq C \, N^{-3/2} \end{equation} for some universal positive constants $c$ and $C$ independent of $N$. Consequently, relations \eqref{eq:Stolarsky.Beck.bounds} yield lower and upper bound for the $L_2$-discrepancy by means of the invariance principle which are sharp with respect to order of $N$: \begin{equation*} c^\prime \, N^{-3 / 4} \leq D_2( Z_N^*;\mathbb{S}^2, \mathcal{C} ) \leq C^\prime \, N^{-3 / 4} \end{equation*} for a sequence of optimal $L_2$-discrepancy $N$-point configurations $Z_N^*$ on $\mathbb{S}^2$. Observe, that optimal $L_2$-discrepancy and optimal spherical cap discrepancy configurations satisfy estimates with the same order $N^{-3/4}$ (apart from a possible $\sqrt{\log N}$ term), see \cite{Be1984}. \subsection{Quasi-Monte Carlo rules in the unit square} Quasi-Monte Carlo algorithms $\widehat{I}(f) = \frac{1}{N} \sum_{n=0}^{N-1} f(\boldsymbol{x}_n)$ are used to approximate integrals $I(f) = \int_{[0,1]^2} f(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x}$. The crux of the method is to choose the quadrature points $\boldsymbol{x}_0,\ldots, \boldsymbol{x}_{N-1}$ as uniformly distributed as possible. The difference to Monte Carlo is the method by which the sample points $\boldsymbol{x}_0,\ldots, \boldsymbol{x}_{N-1} \in [0,1)^2$ are chosen. The aim of QMC is to chose those points such that the integration error \begin{equation*} \left|\int_{[0,1]^2} f(\boldsymbol{x}) \,\mathrm{d}\boldsymbol{x} - \frac{1}{N} \sum_{n=0}^{N-1} f(\boldsymbol{x}_n) \right| \end{equation*} achieves the (almost) optimal rate of convergence as $N \to\infty$ for certain classes of functions $f:[0,1]^2 \to \mathbb{R}$. 
For instance, for the family of functions $f$ with bounded variation in the sense of Hardy and Krause (for which we write $\|f\|_{\mathrm{HK}} < \infty$) it is known that the best rate of convergence for the worst case error is \begin{equation*} e = \sup_{f, \|f\|_{\mathrm{HK}} <\infty} \left|\int_{[0,1]^2} f(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x} - \frac{1}{N} \sum_{n=0}^{N-1} f(\boldsymbol{x}_n) \right| \asymp N^{-1+\varepsilon} \quad \mbox{for all } \varepsilon > 0. \end{equation*} More precisely, there are constants $c,C > 0$ such that \begin{equation*} c N^{-1} \sqrt{\log N} \le e \le C N^{-1} \sqrt{\log N}, \end{equation*} see \cite{DiPi2010}. There is an explicit construction of the sample points $\boldsymbol{x}_0,\ldots, \boldsymbol{x}_{N-1}$ for which the optimal rate of convergence is achieved. One criterion for how uniformly a set of points $P_N = \{\boldsymbol{x}_0,\ldots, \boldsymbol{x}_{N-1}\}$ is distributed in the unit square is the star discrepancy \begin{equation*} D^\ast(P_N; [0,1]^2, \mathcal{R}^\ast) = \sup_{\boldsymbol{y} \in [0,1]^2} \left| \delta_{P_N}(\boldsymbol{y}) \right|, \qquad \delta_{P_N}(\boldsymbol{y}) = \frac{1}{N} \sum_{n=0}^{N-1} 1_{\boldsymbol{x}_i \in [\boldsymbol{0},\boldsymbol{y})} - \mathrm{Area}([\boldsymbol{0},\boldsymbol{y})), \end{equation*} where $[\boldsymbol{0},\boldsymbol{y})=\prod_{i=1}^s [0,y_i)$ with $\boldsymbol{y}=(y_1, y_2) \in [0,1]^2$, $\mathcal{R}^\ast = \{[\boldsymbol{0},\boldsymbol{y}) : \boldsymbol{y} \in [0,1]^2\}$, $\mathrm{Area}([\boldsymbol{0},\boldsymbol{y}))= y_1 y_2$ is the area of $[\boldsymbol{0},\boldsymbol{y})$ and \begin{equation*} 1_{\boldsymbol{x}_i \in [\boldsymbol{0},\boldsymbol{y})} = \begin{cases} 1 & \text{if $\boldsymbol{x}_i \in [\boldsymbol{0},\boldsymbol{y})$,} \\ 0 & \text{otherwise.} \end{cases} \end{equation*} The quantity $\delta_{P_N}(\boldsymbol{y})$ is called the local discrepancy (of $P_N$). The connection between this criterion and the integration error is given by the Koksma-Hlawka inequality \begin{equation*} \left|\int_{[0,1]^2} f(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x} - \frac{1}{N} \sum_{n=0}^{N-1} f(\boldsymbol{x}_n) \right| \le D^\ast(P_N; [0,1]^2, \mathcal{R}^\ast) \|f\|_{\mathrm{HK}}. \end{equation*} Informally, a sequence of points $\boldsymbol{x}_0,\boldsymbol{x}_1,\ldots \in [0,1)^s$ is called a low-discrepancy sequence, if \begin{equation*} D^\ast(\{\boldsymbol{x}_0,\ldots, \boldsymbol{x}_{N-1}\}; [0,1]^2, \mathcal{R}^\ast) = \mathcal{O}(N^{-1} (\log N)^2) \quad\mbox{as } N \to \infty. \end{equation*} Notice that for such a point sequence, the Koksma-Hlawka inequality implies the optimal rate of convergence of the integration error (apart from the power of the $\log N$ factor), since for a given integrand $f$ the variation $\|f\|_{\mathrm{HK}}$ does not depend on $P_N$ and $N$ at all. The concept of digital nets introduced by Niederreiter~\cite{Ni1988} provides the to date most efficient method to explicitly construct points sets $P_N = \{\boldsymbol{x}_0,\ldots, \boldsymbol{x}_{N-1} \} \in [0,1)^2$ with small discrepancy, that is \begin{equation*} D^\ast(P_N; [0,1]^2, \mathcal{R}^\ast) \le C N^{-1} \log N. \end{equation*} They are introduced in the next section. \subsection{Nets and sequences in the unit square} In this section we give a brief overview of (digital) $(0,m,2)$-nets and (digital) $(0,2)$-sequences. For a comprehensive introduction see \cite{DiPi2010}. 
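Before turning to nets and sequences, we illustrate the star discrepancy defined in the previous subsection. The sketch below (assuming Python with NumPy) evaluates the local discrepancy $\delta_{P_N}(\boldsymbol{y})$ on a finite grid of anchor points $\boldsymbol{y}$; since the supremum is restricted to the grid, this yields in general only a lower bound for $D^\ast(P_N; [0,1]^2, \mathcal{R}^\ast)$. As a test point set we use the classical two-dimensional Hammersley construction, whose second coordinate is the base $2$ radical inverse (van der Corput) of the index.
\begin{verbatim}
# Approximate the star discrepancy of a planar point set by evaluating
# the local discrepancy on a grid of anchors (assumes Python 3 with NumPy).
import numpy as np

def local_discrepancy(P, y):
    # delta_P(y) = (1/N) * #{x_n in [0, y)} - y1*y2
    return np.mean(np.all(P < y, axis=1)) - y[0] * y[1]

def star_discrepancy_grid(P, grid=128):
    # maximize |delta_P| over anchors y on a regular grid;
    # this is only a lower bound for the true star discrepancy
    g = np.arange(1, grid + 1) / grid
    return max(abs(local_discrepancy(P, np.array([a, c])))
               for a in g for c in g)

def van_der_corput(n, base=2):
    # radical inverse of n in the given base
    q, denom = 0.0, 1.0
    while n > 0:
        denom *= base
        n, r = divmod(n, base)
        q += r / denom
    return q

N = 256
hammersley = np.array([[n / N, van_der_corput(n)] for n in range(N)])
random_pts = np.random.default_rng(0).random((N, 2))
print(star_discrepancy_grid(hammersley), star_discrepancy_grid(random_pts))
\end{verbatim}
For the Hammersley points the computed value should be markedly smaller than for the random points, in line with the $\mathcal{O}(N^{-1}\log N)$ behavior of low-discrepancy constructions.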
The aim is to construct a point set $P_N = \{\boldsymbol{x}_0,\ldots, \boldsymbol{x}_{N-1}\}$ such that the star discrepancy satisfies $D^\ast(P_N;[0,1]^2, \mathcal{R}^\ast) \le C N^{-1} \log N$. To do so, we discretize the problem by choosing the point set $P_N$ such that the local discrepancy $\delta_{P_N}(\boldsymbol{y}) = 0$ for certain $\boldsymbol{y} \in [0,1]^2$ (those $\boldsymbol{y}$ in turn are chosen such that the star discrepancy of $P_N$ is small, as we explain below). It turns out that, when one chooses a base $b \geq 2$ and $N = b^m$, then for every natural number $m$ there exist point sets $P_{b^m}=\{\boldsymbol{x}_0,\ldots, \boldsymbol{x}_{b^m-1}\}$ such that $\delta_{P_{b^m}}(\boldsymbol{y}) = 0$ for all $\boldsymbol{y}=(y_1, y_2)$ of the form \begin{equation*} y_i = a_i / b^{d_i} \quad \text{for $i = 1,2$,} \end{equation*} where $0 < a_i \le b^{d_i}$ is an integer and $d_1 + d_2 \le m$ with $d_1, d_2 \ge 0$. A point set $P_N$ which satisfies this property is called a $(0,m,2)$-net in base $b$. An equivalent description of $(0,m,2)$-nets in base $b$ is given in the following definition. \begin{definition} Let $b \ge 2$ and $m \ge 1$ be integers. A point set $P_{b^m} \subseteq [0,1)^2$ consisting of $b^m$ points is called a $(0,m,2)$-net in base $b$, if for all nonnegative integers $d_1,d_2$ with $d_1 + d_2 = m$, the elementary interval \begin{equation*} \prod_{i=1}^2 \left[\frac{a_i}{b^{d_i}}, \frac{a_i+1}{b^{d_i}}\right) \end{equation*} contains exactly $1$ point of $P_{b^m}$ for all integers $0 \le a_i < b^{d_i}$. \end{definition} It is also possible to construct nested $(0,m,2)$-nets, thereby obtaining an infinite sequence of points. \begin{definition} Let $b \ge 2$ be an integer. A sequence $\boldsymbol{x}_0,\boldsymbol{x}_1,\ldots \in [0,1)^2$ is called a $(0,2)$-sequence in base $b$, if for all $m > 0$ and for all $k \ge 0$, the point set $\boldsymbol{x}_{k b^m}, \boldsymbol{x}_{kb^m + 1}, \ldots, \boldsymbol{x}_{(k+1)b^m-1}$ is a $(0,m,2)$-net in base $b$. \end{definition} It can be shown that a $(0,m,2)$-net in base $b$ satisfies \begin{equation*} D^\ast(P_N; [0,1]^2, \mathcal{R}^\ast) \le C_{b} \frac{m}{b^{m-1}}, \end{equation*} and the first $N$ points $\boldsymbol{x}_0,\ldots, \boldsymbol{x}_{N-1}$ of a $(0,2)$-sequence in base $b$ satisfy \begin{equation*} D^\ast(\{\boldsymbol{x}_0,\ldots, \boldsymbol{x}_{N-1}\}; [0,1]^2, \mathcal{R}^\ast) \le C_{b} \frac{(\log N)^2}{N} \quad \mbox{for all } N \ge 1, \end{equation*} where $C_{b} > 0$ depends on $b$ but not on $m$ and $N$. See \cite{DiPi2010,Ni1992} for details. Explicit constructions of $(0,m,2)$-nets and $(0,2)$-sequences can be obtained using the digital construction scheme. Such point sets are then called digital nets (or digital $(0,m,2)$-nets if the point set is a $(0,m,2)$-net) or digital sequences (or digital $(0,2)$-sequence if the sequence is a $(0,2)$-sequence). To describe the digital construction scheme, let $b$ be a prime number and let $\mathbb{Z}_b$ be the finite field of order $b$ (a prime power and the finite field $\mathbb{F}_b$ could be used as well). Let $C_1, C_2 \in \mathbb{Z}_b^{m \times m}$ be $2$ matrices of size $m \times m$ with elements in $\mathbb{Z}_b$. The $i$th coordinate $x_{n,i}$ of the $n$th point $\boldsymbol{x}_n=(x_{n,1}, x_{n,2})$ of the digital net is obtained in the following way. For $0 \le n < b^m$ let $n = n_0 + n_1 b + \cdots + n_{m-1} b^{m-1}$ be the base $b$ representation of $n$. 
Let $\vec{n} = (n_0,\ldots, n_{m-1})^\top \in \mathbb{Z}_b^m$ denote the vector of digits of $n$. Then let \begin{equation*} \vec{y}_{n,i} = C_i \vec{n}. \end{equation*} For $\vec{y}_{n,i} = (y_{n,i,1},\ldots, y_{n,i,m})^\top \in \mathbb{Z}_b^{m}$ we set \begin{equation*} x_{n,i} = \frac{y_{n,i,1}}{b} + \cdots + \frac{y_{n,i,m}}{b^{m}}. \end{equation*} To construct digital sequences, the generating matrices $C_1, C_2$ are of size $\infty \times \infty$. The search for $(0,m,2)$-nets and $(0,2)$-sequences has now been reduced to finding suitable matrices $C_1, C_2$. Explicit constructions of such matrices are available and where introduced (in chronological order) by Sobol~\cite{So1967}, Faure~\cite{Fa1982}, Niederreiter~\cite{Ni1988}, Niederreiter and Xing~\cite{NiXi1996,NiXi1995,XiNi1995}, as well as others. For instance, to obtain a digital $(0,m,2)$-net over $\mathbb{Z}_2$ one can choose \[ C_{1} = \left( \begin{array}{ccccc} 1 & 0 & \ldots & 0 & 0 \\ 0 & 1 & \ddots & & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & & \ddots & 1 & 0 \\ 0 & 0 & \ldots & 0 & 1 \end{array} \right) \mbox{ and }\;\; C_{2} = \left( \begin{array}{ccccc} {0 \choose 0} & {1 \choose 0} & \ldots & \ldots & {m-1 \choose 0} \\ 0 & {1 \choose 1} & \ldots & \ldots & {m-1 \choose 1} \\ \vdots & \ddots & \ddots & & \vdots \\ 0 & \ldots & 0 & {m-2 \choose m-2} & {m-1 \choose m-2}\\ 0 & \ldots & \ldots & 0 & {m-1 \choose m-1} \end{array} \right), \] where the binomial coefficients are taken modulo $2$. Notice that the matrices $C_1, C_2$ can be extended to $\infty \times \infty$ matrices yielding generating matrices for a digital $(0,2)$-sequence over $\mathbb{Z}_2$. This example was first provided by Sobol$'$ in \cite{So1967}. \section{Spherical rectangle discrepancy of point sets on the unit sphere $\mathbb{S}^2$} We introduce a transformation to lift $(0,m,2)$-nets to the sphere $\mathbb{S}^2$. To do so, we represent each point on the sphere $\mathbb{S}^2$ by using scaled spherical coordinates. We define $T: [0,1) \times [0,1] \mapsto \mathbb{S}^2$ by \begin{equation*} T(\theta,\phi) = \left(\cos (2 \pi \theta) \sin (\pi \phi),\sin (2 \pi \theta) \sin (\pi \phi), \cos (\pi \phi) \right). \end{equation*} Let $0 \le \theta_1 \le \theta \le \theta_2 \le 1$ and $0 \le \phi_1 \le \phi_2 \le 1$. The part of the sphere \begin{equation*} \Omega_{\theta_1,\theta_2, \phi_1,\phi_2} = \{T(\theta,\phi) \in \mathbb{S}^2: \theta_1 \le \theta < \theta_2, \phi_1 \le \phi < \phi_2\} \end{equation*} has area \begin{equation*} \mathrm{Area}(\Omega_{\theta_1,\theta_2, \phi_1,\phi_2}) = 2 \pi (\theta_2-\theta_1) \left[\cos (\pi \phi_1) - \cos (\pi \phi_2) \right]. \end{equation*} Let \begin{align*} \Gamma_{\theta_1,\theta_2,\phi_1,\phi_2} &:= \frac{\mathrm{Area}(\Omega_{\theta_1,\theta_2,\phi_1,\phi_2})}{\mathrm{Area}(\mathbb{S}^2)} = (\theta_2-\theta_1) \frac{\cos (\pi \phi_1) - \cos (\pi \phi_2)}{2}, \end{align*} be the area of $\Omega_{\theta_1,\theta_2,\phi_1,\phi_2}$ normalized with respect to the surface area of the unit sphere. Therefore we have $\Gamma_{0,1,0,1} = 1$. We can now define a discrepancy measure of point sets on the sphere with respect to sets $\Omega_{\theta_1,\theta_2,\phi_1,\phi_2}$. \begin{definition}\label{def_disc_sphere} Let $Z_N = \{\boldsymbol{z}_0,\ldots, \boldsymbol{z}_{N-1}\} \subseteq \mathbb{S}^2$. 
Then the extreme spherical rectangle discrepancy of $Z_N$ on $\mathbb{S}^2$ with respect to $\Omega = \{\Omega_{\theta_1,\theta_2,\phi_1,\phi_2}: 0 \le \theta_1 < \theta_2 \le 1, 0 \le \phi_1 < \phi_2 \le 1\}$ is defined by \begin{equation*} D_N(Z_N;\mathbb{S}^2,\Omega) = \sup_{\satop{0 \le \theta_1 < \theta_2 \le 1}{0 \le \phi_1 < \phi_2 \le 1}} \left|\frac{1}{N} \sum_{n=0}^{N-1} 1_{\boldsymbol{z}_n \in \Omega_{\theta_1,\theta_2,\phi_1,\phi_2}} - \Gamma_{\theta_1,\theta_2,\phi_1,\phi_2} \right|, \end{equation*} where \begin{equation*} 1_{\boldsymbol{z}_n \in \Omega_{\theta_1,\theta_2,\phi_1,\phi_2}} = \begin{cases} 1 & \text{if $\boldsymbol{z}_n \in \Omega_{\theta_1,\theta_2,\phi_1,\phi_2}$} \\ 0 & \text{otherwise.} \end{cases} \end{equation*} The spherical rectangle star-discrepancy of $Z_N$ on $\mathbb{S}^2$ with respect to $\Omega^\ast = \{\Omega_{0,\theta,0,\phi}:0\le \theta \le 1, 0 \le \phi \le 1\}$ is defined by \begin{equation*} D^\ast_N(Z_N;\mathbb{S}^2,\Omega^\ast) = \sup_{\satop{0 \le \theta \le 1}{0 \le \phi \le 1}} \left|\frac{1}{N} \sum_{n=0}^{N-1} 1_{\boldsymbol{z}_n \in \Omega_{0,\theta,0,\phi}} - \Gamma_{0,\theta,0,\phi} \right|. \end{equation*} \end{definition} The expression $\frac{1}{N} \sum_{n=0}^{N-1} 1_{\boldsymbol{z} \in \Omega_{\theta_1,\theta_2,\phi_1,\phi_2}}$ denotes the proportion of points of $Z_N$ in $\Omega_{\theta_1,\theta_2,\phi_1,\phi_2}$. Note that the set $\Omega_{\theta_1,\theta_2,\phi_1,\phi_2}$ always includes the North Pole $(0,0,1)$ and never the South Pole $(0,0,-1)$. Further, the discrepancy measure includes the discrepancy of spherical caps centered at the North Pole $(0,0,1)$ and South Pole $(0,0,-1)$. The spherical cap at the North Pole is obtained by using $\Omega_{0,1,0,\phi}$. Let $\phi > 0$ be small. Then $\frac{1}{N}\sum_{n=0}^{N-1} 1_{\boldsymbol{z}_n \in \Omega_{0,1,0,\phi}}$ is the proportion of points in the spherical cap $\Omega_{0,1,0,\phi}$ and $4\pi \Gamma_{0,1,0,\phi}$ is the area of the spherical cap. Hence \begin{equation*} \left|\frac{1}{N} \sum_{n=0}^{N-1} 1_{\boldsymbol{z}_n \in \Omega_{0,1,0,\phi}} - \Gamma_{0,1,0,\phi}\right| \end{equation*} measures the discrepancy at the North Pole. The South Pole is included in the following way. Let $\phi < 1$ be close to $1$. Then $\mathbb{S}^2 \setminus \Omega_{0,1,0,\phi}$ is the spherical cap centered at the South Pole. Then \begin{equation*} \frac{1}{N} \sum_{n=0}^{N-1} 1_{\boldsymbol{z}_n \in \mathbb{S}^2 \setminus \Omega_{0,1,0,\phi}} = 1-\frac{1}{N} \sum_{n=0}^{N-1} 1_{\boldsymbol{z}_n \in \Omega_{0,1,0,\phi}} \end{equation*} is the proportion of points in the spherical cap centered at the South Pole. Further, $4\pi (1-\Gamma_{0,1,0,\phi})$ is the area of the spherical cap $\mathbb{S}^2 \setminus \Omega_{0,1,0,\phi}$. Hence \begin{equation*} \left|\frac{1}{N} \sum_{n=0}^{N-1} 1_{\boldsymbol{z}_n \in \Omega_{0,1,0,\phi}} - \Gamma_{0,1,0,\phi} \right| = \left|\frac{1}{N} \sum_{n=0}^{N-1} 1_{\boldsymbol{z}_n \in \mathbb{S}^2 \setminus \Omega_{0,1,0,\phi}} - \frac{\mathrm{Area}(\mathbb{S}^2\setminus \Omega_{0,1,0,\phi})}{\mathrm{Area}(\mathbb{S}^2)}\right| \end{equation*} measures the discrepancy at the South Pole. In the following we construct point sets $Z_N = \{\boldsymbol{z}_0,\ldots, \boldsymbol{z}_{N-1}\}$ on the sphere $\mathbb{S}^2$ such that $D_N(Z_N;\mathbb{S}^2,\Omega)$ is small. This can be done by relating the spherical rectangle discrepancy to an analogous discrepancy on $[0,1]^2$. 
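The spherical rectangle star-discrepancy of Definition~\ref{def_disc_sphere} can be approximated numerically by restricting the supremum to a finite grid of parameters $(\theta,\phi)$, which yields a lower bound. The following sketch assumes Python with NumPy; the uniform random configuration is only a placeholder.
\begin{verbatim}
# Approximate the spherical rectangle star-discrepancy of Definition
# (def_disc_sphere) on a grid of parameters (assumes Python 3 with NumPy).
import numpy as np

def Gamma0(theta, phi):
    # normalized area of Omega_{0,theta,0,phi}
    return theta * (1.0 - np.cos(np.pi * phi)) / 2.0

def sphere_star_discrepancy_grid(Z, grid=64):
    # recover (theta, phi) of each point and maximize the local error
    # over a regular grid; this gives a lower bound for D*_N(Z_N)
    theta = (np.arctan2(Z[:, 1], Z[:, 0]) / (2.0 * np.pi)) % 1.0
    phi = np.arccos(np.clip(Z[:, 2], -1.0, 1.0)) / np.pi
    worst = 0.0
    for a in np.arange(1, grid + 1) / grid:
        for c in np.arange(1, grid + 1) / grid:
            frac = np.mean((theta < a) & (phi < c))
            worst = max(worst, abs(frac - Gamma0(a, c)))
    return worst

# placeholder configuration: uniform random points on S^2
rng = np.random.default_rng(0)
Z = rng.normal(size=(2000, 3))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)
print(sphere_star_discrepancy_grid(Z))
\end{verbatim}
The same grid approach applies verbatim to the extreme discrepancy $D_N(Z_N;\mathbb{S}^2,\Omega)$ by also varying the lower endpoints $\theta_1$ and $\phi_1$.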
\begin{definition}\label{def_disc_square} Let $P_N = \{\boldsymbol{x}_0,\ldots, \boldsymbol{x}_{N-1}\} \subseteq [0,1)^2$. Then the extreme discrepancy of $P_N$ on $[0,1)^2$ with respect to $\mathcal{R} = \{[a_1,a_2) \times [c_1,c_2): 0 \le a_1 < a_2 \le 1, 0 \le c_1 < c_2 \le 1\}$ is defined by \begin{equation*} D(P_N;[0,1)^2,\mathcal{R}) = \sup_{\satop{0 \le a_1 < a_2 \le 1}{0 \le c_1 < c_2 \le 1}} \left|\frac{1}{N} \sum_{n=0}^{N-1} 1_{\boldsymbol{x}_n \in [a_1,a_2) \times [c_1,c_2)} - (a_2-a_1)(c_2-c_1) \right|, \end{equation*} where \begin{equation*} 1_{\boldsymbol{x}_n \in [a_1,a_2) \times [c_1,c_2)} = \begin{cases} 1 & \text{if $\boldsymbol{x}_n \in [a_1,a_2) \times [c_1,c_2)$,} \\ 0 & \text{otherwise.} \end{cases} \end{equation*} The star-discrepancy of $P_N$ on $[0,1)^2$ with respect to $\mathcal{R}^\ast = \{[0,a)\times [0,c):0 \le a \le 1, 0 \le c \le 1\}$ is defined by \begin{equation*} D^\ast(P_N;[0,1)^2,\mathcal{R}^\ast) = \sup_{\satop{0 \le a \le 1}{0 \le c \le 1}} \left|\frac{1}{N} \sum_{n=0}^{N-1} 1_{\boldsymbol{x}_n \in [0,a)\times [0,c)} - ac \right|. \end{equation*} \end{definition} Let $P_N$ be an arbitrary point set in $[0,1)^2$. It is known, see for instance \cite[Chapter~3]{DiPi2010}, that \begin{equation*} D^\ast(P_N; [0,1)^2, \mathcal{R}^\ast) \le D_N(P_N; [0,1)^2, \mathcal{R}) \le 4 D^\ast(P_N; [0,1)^2, \mathcal{R}^\ast). \end{equation*} The following lemma establishes a connection between the discrepancies on $[0,1)^2$ and $\mathbb{S}^2$ defined above. \begin{lemma}\label{lem_disc} Let $P_N = \{\boldsymbol{x}_0,\ldots, \boldsymbol{x}_{N-1}\} \in (0,1)^2$ and $\boldsymbol{x}_n = (x_{n,1}, x_{n,2})$ for $0 \le n < N$. Let \begin{equation*} \boldsymbol{z}_n = T\left(x_{n,1}, \frac{\arccos (1-2x_{n,2})}{\pi}\right) \qquad \text{for $0 \le n < N$} \end{equation*} and $Z_N=\{\boldsymbol{z}_0,\ldots, \boldsymbol{z}_{N-1}\}$. Then $\boldsymbol{z}_n \in \mathbb{S}^2$ for $0 \le n < N$ and \begin{equation*} D_N(Z_N; \mathbb{S}^2, \Omega) = D_N(P_N; [0,1)^2, \mathcal{R}) \end{equation*} and \begin{equation*} D^\ast_N(Z_N; \mathbb{S}^2, \Omega^\ast) = D_N(P_N; [0,1)^2, \mathcal{R}^\ast). \end{equation*} \end{lemma} \begin{proof} It is trivial to check that $\|\boldsymbol{z}_n\|=1$ for all $n$. To show the second part, we set $\theta_1 = a_1$, $\theta_2 = a_2$, $\phi_1 = [ \arccos (1-2 c_1) ] / \pi$ and $\phi_2 = [\arccos (1-2 c_2) ] / \pi$. Then we have \begin{equation}\label{eq_area} \Gamma_{\theta_1,\theta_2,\phi_1,\phi_2} = (a_2-a_1) \frac{1-2 c_1 - (1-2 c_2)}{2} = (a_2-a_1)(c_2-c_1). \end{equation} Note that the points $(0,0,1)$ and $(0,0,-1)$ are excluded from the set $Z_N$ since we assume that $P_N \subseteq (0,1)^2$. The mapping from $P_N$ to $Z_N$ is therefore bijective. Therefore we have \begin{equation*} 1_{\boldsymbol{z}_n\in \Omega_{\theta_1,\theta_2,\phi_1,\phi_2}} = 1_{(y_{n,1},(\arccos(1-2y_{n,2}))/\pi) \in [\theta_1,\theta_2) \times [\phi_1,\phi_2)} = 1_{(y_{n,1},y_{n,2}) \in [a_1,a_2) \times [c_1, c_2)}. \end{equation*} Thus the remaining results follow from Definitions~\ref{def_disc_sphere} and \ref{def_disc_square}. \end{proof} \begin{remark} Note that $\sin (\arccos (1-2x)) = \sqrt{x-x^2}$, whence $\boldsymbol{z}_n= (z_{n,1}, z_{n,2}, z_{n,3})$ where \begin{align*} z_{n,1} &= 2 \cos (2\pi x_{n,1}) \sqrt{x_{n,2}-x_{n,2}^2}, \\ z_{n,2} &= 2 \sin (2\pi x_{n,1}) \sqrt{x_{n,2} - x_{n,2}^2}, \\ z_{n,3} &= 1- 2x_{n,2} \end{align*} for $0 \le n < N$. 
To shorten the notation we define \begin{eqnarray*} \Phi(x_1,x_2) & = & T\left(x_1, \frac{\arccos (1-2x_2)}{\pi} \right) \\ & = & \left(2\cos(2\pi x_1) \sqrt{x_2-x_2^2}, 2\sin (2\pi x_1) \sqrt{x_2-x_2^2}, 1-2x_2 \right) \end{eqnarray*} for $(x_1,x_2) \in [0,1]^2$. Hence, $\boldsymbol{z}_n = \Phi(\boldsymbol{x}_n)$ for $0 \le n < N$. Further, for $J \subseteq [0,1]^2$ we set \begin{equation*} \Phi(J) = \{\Phi(\boldsymbol{x}) \in \mathbb{S}^2: \boldsymbol{x} \in J\}. \end{equation*} \end{remark} \subsection{Digital nets on the sphere} The previous lemma can now be used to obtain a bound on the spherical rectangle discrepancy of $(0,m,2)$-nets lifted to the sphere via the transformation $\Phi$, which we state in the following result. \begin{theorem}\label{thm_construction} Let $P_N = \{\boldsymbol{x}_0,\ldots, \boldsymbol{x}_{b^m-1}\} \subseteq (0,1)^2$, with $N = b^m$, be a $(0,m,2)$-net in base $b$. Let $\boldsymbol{x}_n = (x_{n,1}, x_{n,2})$ for $0 \le n < b^m$. Let $\boldsymbol{z}_n = \Phi(\boldsymbol{x}_n)$ for $0 \le n < b^m$ and $Z_N = \{\boldsymbol{z}_0,\ldots, \boldsymbol{z}_{b^m-1}\}$. Then \begin{equation*} D_{b^m}(Z_N;\mathbb{S}^2, \Omega) \le \frac{b^2}{b+1} \frac{m}{b^m} + \frac{1}{b^m} \left(9 + \frac{1}{b}\right) + \frac{1}{b^{2m}} \left(2b - 1 - \frac{4b+3}{(b+1)^2}\right) \end{equation*} and \begin{equation*} D^\ast_{b^m}(Z_N;\mathbb{S}^2, \Omega^\ast) \le \frac{b^2}{b+1} \frac{m}{4b^m} + \frac{1}{b^m} \left(\frac{9}{4} + \frac{1}{b}\right) + \frac{1}{b^{2m}} \left(\frac{b}{2} - \frac{1}{4} - \frac{4b+3}{4(b+1)^2}\right). \end{equation*} \end{theorem} \begin{proof} The result follows from Lemma~\ref{lem_disc} and \cite[Theorem~1 and Remark~2]{DiKr2006}. \end{proof} Lemma~\ref{lem_disc} and the result of Roth~\cite{Ro1954} imply that \begin{equation*} D_N(Z_N; \mathbb{S}^2, \Omega) \ge D^\ast_N(Z_N; \mathbb{S}^2, \Omega^\ast) \ge \frac{\lfloor \log_2 N \rfloor + 3}{2^8 N} \end{equation*} for all point sets $Z_N = \{\boldsymbol{z}_0,\ldots, \boldsymbol{z}_{N-1}\} \subseteq \mathbb{S}^2$ (where $\log_2$ denotes the logarithm in base $2$). Thus the construction of Theorem~\ref{thm_construction} is optimal, up to constant factors, with respect to the discrepancy defined in Definition~\ref{def_disc_sphere}. Theorem~\ref{thm_construction} is not surprising, since by \eqref{eq_area} the transformation $\Phi$ is area preserving for all rectangles. \subsection{Uniform distribution on the sphere} A sequence of $N$-point systems $\{ Z_N \}_{N \geq 2}$ on the unit sphere $\mathbb{S}^2$ in $\mathbb{R}^{3}$ is said to be {\em asymptotically uniformly distributed on $\mathbb{S}^2$} if the exact integral $I(f) = \int_{\mathbb{S}^2} f \dd \sigma_2$ of any continuous function $f$ on $\mathbb{S}^2$, integrated with respect to the normalized surface area measure $\sigma_2$ on $\mathbb{S}^2$, can be approximated arbitrarily closely (as $N$ becomes large) by means of the equally weighted quadrature rules $Q_N$ having these $Z_N$ as node sets; that is \begin{equation*} \lim_{N \to \infty} Q_N(f) = I(f) \qquad \text{for every $f \in C(\mathbb{S}^2)$.} \end{equation*} Equivalently, the discrete measures placing a point mass $1/N$ at every point of $Z_N$ converge in the weak-$\ast$ sense to the natural measure $\sigma_2$ on $\mathbb{S}^2$. In other words, the limit distribution (as $N\to \infty$) of the $N$-point configurations $Z_N$ is given by $\sigma_2$. Let $|Z_N|$ denote the cardinality of the (finite) set $Z_N$.
Therefore, an equivalent characterization is that \begin{equation*} \lim_{N \to \infty} \left| Z_N \cap A \right| / N = \sigma_2( A ) \end{equation*} for every $\sigma_2$-measurable set $A \subseteq \mathbb{S}^2$ (whose boundary has $2$-dimensional Hausdorff measure $0$). Informally speaking, any such set $A$ gets a fair share of points as $N$ becomes large. In fact, it is sufficient to consider spherical caps on $\mathbb{S}^2$. Let $\mathcal{C} = \{C(\boldsymbol{z},t): \boldsymbol{z} \in \mathbb{S}^2, -1 \le t \le 1\}$ denote the set of all spherical caps. Thus, a natural measure to quantify 'equidistribution' of $N$-point systems on $\mathbb{S}^2$ is the {\em spherical cap discrepancy} \begin{equation*} D(Z_N; \mathbb{S}^2, \mathcal{C}) {:=} \sup_{C \subseteq \mathbb{S}^2} \left| \frac{\left|Z_N \cap C \right|}{N} - \sigma_2(C) \right|, \end{equation*} where the supremum is extended over all spherical caps. It is well-known that the sequence $\{Z_N\}_{N\geq2}$ is asymptotically uniformly distributed on $\mathbb{S}^2$ if and only if $D(Z_N; \mathbb{S}^2, \mathcal{C}) \to 0$ as $N \to \infty$. One can show that the following assertions are equivalent (cf., for example, \cite{Br2008}): \begin{enumerate} \item The sequence $\{ Z_N \}_{N \geq 2}$ is asymptotically uniformly distributed on $\mathbb{S}^2$. \item $\lim_{N \to \infty} Q_N(f) = I(f)$ for every $f \in C(\mathbb{S}^2)$. \item $\lim_{N\to\infty} Q_{N}(Y_{\ell,m}) = 0$ for all spherical harmonics $Y_{\ell,m}$ of degree $\ell\geq1$ from a (real) $L_{2}(\mathbb{S}^{2},\sigma_2)$-orthonormal basis $\{ Y_{\ell,m} \}$. (This is {\em Weyl's criterion} on the sphere.) \item $\lim_{N\to\infty} D(Z_N; \mathbb{S}^2, \mathcal{C}) = 0$. \end{enumerate} The spherical cap discrepancy can be estimated in terms of Weyl sums by means of {\em Erd{\"o}s-Tur{\'a}n type inequalities} \footnote{Such inequalities are a generalization of Erd{\"o}s and Tur{\'a}ns result for the unit circle \cite{ErTu1948I,ErTu1948II}.} (Grabner~\cite{Gr1991}, also cf. Li and Vaaler~\cite{LiVa1999}) \begin{equation*} D(Z_{N}; \mathbb{S}^2, \mathcal{C}) \leq \frac{c_{1}}{L+1} + \sum_{\ell=1}^{L} \left( \frac{c_{2}}{\ell} + \frac{c_{3}}{L+1} \right) \sum_{m=1}^{Z(d,\ell)}\left| \frac{1}{N} \sum_{j=0}^{N-1} Y_{\ell,m}(\PT{x}_{j}) \right|, \end{equation*} where $L$ is any positive integer and the positive constants $c_{1}$, $c_{2}$, and $c_{3}$ are independent of $N$, or {\em LeVeque type inequalities} (\cite{NaSuWa2010}, generalizing LeVeques result for the unit circle \cite{LeV1965}) \begin{equation*} c_4 \left[ \sum_{\ell=1}^\infty a_\ell \sum_{m=1}^{Z(d,\ell)} \left| \frac{1}{N} \sum_{j=0}^{N-1} Y_{\ell,m}(\PT{x}_{j}) \right|^2 \right]^{1/2} \leq D(Z_{N}; \mathbb{S}^2, \mathcal{C}) \leq c_5 \left[ \sum_{\ell=1}^\infty b_\ell \sum_{m=1}^{Z(d,\ell)} \left| \frac{1}{N} \sum_{j=0}^{N-1} Y_{\ell,m}(\PT{x}_{j}) \right|^2 \right]^{1/(d+2)}, \end{equation*} where $a_\ell {:=} \Gamma(\ell - 1/2) / \Gamma( \ell + d + 1/2) \asymp 1 / \ell^{d+1} {=:} b_\ell$, for some positive constants $c_4$ and $c_5$ which are independent of $N$. The integers \begin{equation*} Z(d,0) = 1, \qquad Z(d,\ell) = \left(2\ell+d-1\right) \frac{\Gamma(\ell+d-1)}{\Gamma(d)\Gamma(\ell+1)} \end{equation*} denote the number of linearly independent spherical harmonics $Y_{\ell,m}$ of degree $\ell$. 
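As a quick sanity check of this formula (an illustrative sketch only, not used elsewhere in the paper), one can verify numerically that for $d = 2$ it reduces to the familiar count of $2\ell+1$ spherical harmonics of degree $\ell$ on $\mathbb{S}^2$:
\begin{verbatim}
from math import gamma as Gamma

def Z(d, ell):
    # number of linearly independent spherical harmonics of degree ell on S^d
    if ell == 0:
        return 1
    return round((2 * ell + d - 1) * Gamma(ell + d - 1) / (Gamma(d) * Gamma(ell + 1)))

# on S^2 (d = 2) the formula reduces to Z(2, ell) = 2*ell + 1
assert all(Z(2, ell) == 2 * ell + 1 for ell in range(1, 50))
\end{verbatim}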
Instead of using spherical caps as test sets, one can define discrepancy with respect to spherical rectangles (as done in this paper) or, as proposed by Sj{\"o}gren~\cite{Sj1972}, with respect to the family of so-called $K$-regular test sets. A $\sigma_2$-measurable set $A \subseteq \mathbb{S}^2$ is defined to be {\em $K$-regular} if the $\sigma_2$-measure of the $\delta$-neighborhood ($\delta$ sufficiently small) of its boundary, \begin{equation*} \left\{ \boldsymbol{z} \in \mathbb{S}^2 : \dist( A, \boldsymbol{z} ) \leq \delta, \dist( \mathbb{S}^2 \setminus A, \boldsymbol{z} ) \leq \delta \right\}, \quad \dist( A, \boldsymbol{z} ) {:=} \inf_{\boldsymbol{z}^\prime \in A} \left\| \boldsymbol{z} - \boldsymbol{z}^\prime \right\|, \end{equation*} is linearly bounded by $K \, \delta$. Clearly, spherical caps (rectangles) are $K$-regular for some $K>0$. In \cite{AnBlGo1999} Andrievskii, Blatt and G{\"o}tz related the discrepancy of a measure to the error of integration of polynomials in the following sense. \begin{proposition} \label{prop:AnBlGo} There exists a universal constant $C_0$ such that for every probability measure $\nu$ supported on $\mathbb{S}^2$ and every $K$-regular set $A \subseteq \mathbb{S}^2$ there holds \begin{equation*} \left| \nu(A) - \sigma_2(A) \right| \leq C_0 \, \inf_{n \in \mathbb{N}} \left\{ \frac{K}{n} + C(\nu, n) \right\}, \end{equation*} where \begin{equation*} C(\nu, n) {:=} \sup\left\{\left| \int p \dd \nu - \int p \dd \sigma_2 \right| : \begin{gathered} \text{$p$ polynomial on $\mathbb{S}^2$,}\\\text{$\deg p \leq q n$, $| p | \leq 1$ on $\mathbb{S}^2$} \end{gathered} \right\} \end{equation*} and $q = 3$.\footnote{Note that in \cite{AnBlGo1999} everything is done in $\mathbb{R}^d$.} \end{proposition} In particular, if $\nu$ is the counting measure $\frac{1}{N} \sum_{\boldsymbol{z} \in Z_N} \delta_{\boldsymbol{z}}$ induced by the node set $Z_N$ of an equally weighted quadrature rule $Q_N$ on $\mathbb{S}^2$, then we have that \begin{equation*} \nu(A) - \sigma_2(A) = \frac{|Z_{N} \cap A|}{N} - \sigma_2(A), \qquad \int p \dd \nu - \int p \dd \sigma_2 = \frac{1}{N} \sum_{\boldsymbol{z} \in Z_N} p(\boldsymbol{z}) - \int p \dd \sigma_2 \end{equation*} in Proposition~\ref{prop:AnBlGo}. Let $D(Z_N; \mathbb{S}^2, \mathcal{F}_K)$ denote the discrepancy with respect to the family of $K$-regular test sets and $D(Z_N; \mathbb{S}^2, \Omega)$ be the spherical rectangle discrepancy. \begin{theorem} \label{thm:nets.sphere.equidistr} Let $\{ Z_N \}$ be a sequence of $N$-point configurations with $N = b^m$, $m \geq 1$, defined by $(0,m,2)$-nets $P_N$ in base $b$ lifted to the sphere $\mathbb{S}^2$ by means of $Z_N = \Phi(P_N)$. Then the following holds: \begin{enumerate} \item[(i)] $\lim_{N \to \infty} D(Z_N; \mathbb{S}^2, \mathcal{F}_K) = 0$ for every fixed $K > 0$. \item[(ii)] $\lim_{N \to \infty} D(Z_N; \mathbb{S}^2, \mathcal{C} ) = 0$. \item[(iii)] $\lim_{N \to \infty} D(Z_N; \mathbb{S}^2, \Omega ) = 0$. \end{enumerate} Consequently, the sequence $\{Z_N\}$ is asymptotically uniformly distributed on $\mathbb{S}^2$. \end{theorem} \begin{proof} Let $P_N$ be a $(0,m,2)$-net with $N = b^m$ points ($m \geq 1$) lifted to the sphere $\mathbb{S}^2$ and $\boldsymbol{z}_k = \Phi(\boldsymbol{x}_k)$ ($0 \leq k < N$).
Then, using 'cylindrical' coordinates $\boldsymbol{z} = ( \sqrt{1 - t^2} \boldsymbol{z}^*, t )$, where $\boldsymbol{z}^* \in \mathbb{S}^1$ and $-1 \leq t \leq 1$, and the decomposition $\dd \sigma_2( \boldsymbol{z} ) = (1 / 2) \dd t \dd \sigma_1( \boldsymbol{z}^*)$ (\cite{Mu1966}), for every polynomial $p$ on $\mathbb{S}^2$ (satisfying $| p(\boldsymbol{z}) | \leq 1$ on $\mathbb{S}^2$) we get \begin{equation*} Q_N(p) - I(p) = \frac{1}{N} \sum_{k=0}^{N-1} p( \boldsymbol{z}_k ) - \int_{\mathbb{S}^2} p \dd \sigma_2 = \int_{\mathbb{S}^1} \left[ \frac{1}{N} \sum_{k=0}^{N-1} p( \boldsymbol{z}_k ) - \frac{1}{2} \int_{-1}^1 p( \boldsymbol{z} ) \dd t \right] \dd \sigma_1( \boldsymbol{z}^* ). \end{equation*} For each fixed $\boldsymbol{z}^* \in \mathbb{S}^1$ we define a new $N$-point configuration $\widehat{Z}_N$ by aligning all points in $Z_N$ such that they have the 'same $\boldsymbol{z}^*$', that is $\widehat{\boldsymbol{z}}_k = ( \sqrt{1 - t_k^2} \boldsymbol{z}^*, t_k )$ ($0 \leq k < N$). Clearly, $\widehat{Z}_N$ depends on $\boldsymbol{z}^*$. $\widehat{Z}_N$ has $N$ points, since the underlying system $P_N$ is a $(0,m,2)$-net. Indicating the dependence on $\boldsymbol{z}^*$ by $\widehat{\boldsymbol{z}}_k(\boldsymbol{z}^*)$ we write $Q_N(p) - I(p) = \Delta_1(p) + \Delta_2(p)$, where \begin{align*} \Delta_1(p) &{:=} \frac{1}{N} \sum_{k=0}^{N-1} p( \boldsymbol{z}_k ) - \frac{1}{N} \sum_{k=0}^{N-1} \int_{\mathbb{S}^1} p( \widehat{\boldsymbol{z}}_k(\boldsymbol{z}^*) ) \dd \sigma_1( \boldsymbol{z}^* ), \\ \Delta_2(p) &{:=} \int_{\mathbb{S}^1} \left[ \frac{1}{N} \sum_{k=0}^{N-1} p( \widehat{\boldsymbol{z}}_k(\boldsymbol{z}^*) ) - \frac{1}{2} \int_{-1}^1 p( \boldsymbol{z} ) \dd t \right] \dd \sigma_1( \boldsymbol{z}^* ). \end{align*} First, we consider $\Delta_1(p)$. By definition of $(0,m,2)$-nets every elementary interval \begin{equation*} \left[ a / b^d, \left( a + 1 \right) / b^d \right) \times \left[ a^\prime / b^{m-d}, \left( a^\prime + 1 \right) / b^{m-d} \right), \qquad 0 \leq a < b^d, \, 0 \leq a^\prime < b^{m - d}, \end{equation*} contains exactly one point of $P_N$. By construction of $Z_N$, the $2$-sphere can be partitioned into $M \asymp \sqrt{N}$ polar zones $A_1, \dots, A_M$ of equal sizes $1 / \sqrt{N}$, each $A_\ell$ containing precisely $N_\ell \asymp \sqrt{N}$ points of $Z_N$ ($N_1 + \cdots + N_M = N$). Picking $M$ heights $\tau_1, \dots, \tau_M$ such that the circle $C_\ell$ with height $\tau_\ell$ lies in $A_\ell$, we move the points in $A_\ell$ on the circle $C_\ell$ giving them the same height $\tau_\ell$ without changing the 'longitudes', that is $\boldsymbol{z}_k^\prime {:=} (\sqrt{1-\tau_\ell^2} \boldsymbol{z}_k^*, \tau_\ell)$ for $\boldsymbol{z}_k = (\sqrt{1-t_k^2} \boldsymbol{z}_k^*, t_k) \in A_\ell$. The error introduced in this way can be made arbitrarily small by increasing $N$, since the polynomial $p$ is bounded and uniformly continuous on $\mathbb{S}^2$ and the widths of the polar zones uniformly decrease with increasing $N$. The integral $\int_{\mathbb{S}^1} p( \widehat{\boldsymbol{z}}_k(\boldsymbol{z}^*) ) \dd \sigma_1( \boldsymbol{z}^* )$ averages the polynomial $p$ over the circle on $\mathbb{S}^2$ with height $t_k$. In the zone $A_\ell$ we may use the approximation $\int_{\mathbb{S}^1} p( (\sqrt{1-\tau_\ell^2} \boldsymbol{z}^*, \tau_\ell) ) \dd \sigma_1( \boldsymbol{z}^* )$. 
Therefore \begin{equation*} \Delta_1(p) = \sum_{\ell=1}^M \frac{N_\ell}{N} \left[ \frac{1}{N_\ell} \sum_{\boldsymbol{z}_k \in A_\ell} p( \boldsymbol{z}_k^\prime ) - \int_{\mathbb{S}^1} p( (\sqrt{1-\tau_\ell^2} \boldsymbol{z}^*, \tau_\ell) ) \dd \sigma_1( \boldsymbol{z}^* ) \right] + R_1(p), \end{equation*} where the error $R_1(p)$ goes to zero as $N\to \infty$. Observe that the square-bracketed expression is the error of integration of an equally weighted quadrature rule on the circle $C_\ell$ with integration points $\boldsymbol{z}_k^\prime$ for $\boldsymbol{z}_k \in A_\ell$, induced by a horizontal strip of the underlying $(0,m,2)$-net $P_N$, for the polynomial $p$ restricted to $C_\ell$. Since the extreme discrepancy of any $(0,m,2)$-net tends to zero as $m \to \infty$, it follows that the limit distribution of the integration nodes is given by $\sigma_1$. Therefore the square-bracketed expression tends to zero uniformly for all $1 \leq \ell \leq M$ as $N\to\infty$. Next, we consider $\Delta_2(p)$. We define the function $f(\boldsymbol{z}^*; t) {:=} p( (\sqrt{1-t^2} \boldsymbol{z}^*, t) )$ for $| t | \leq 1$ and each fixed $\boldsymbol{z}^* \in \mathbb{S}^1$, which is uniformly continuous and bounded in $t$ and $\boldsymbol{z}^*$. Thus \begin{equation*} \Delta_2(p) = \int_{\mathbb{S}^1} \left[ \frac{1}{N} \sum_{k=0}^{N-1} f( \boldsymbol{z}^*; t_k ) - \frac{1}{2} \int_{-1}^1 f( \boldsymbol{z}^*; t ) \dd t \right] \dd \sigma_1( \boldsymbol{z}^* ). \end{equation*} The square-bracketed expression is the error of integration for an equally weighted quadrature rule with integration nodes given by $t_k = 1 - 2 x_k$, where the $x_k$'s are the second coordinates of the points of the $(0,m,2)$-net $P_N$, integrating a function from the class $\{ f( \boldsymbol{z}^*; \cdot ) : \boldsymbol{z}^* \in \mathbb{S}^1 \}$. We can apply Koksma's inequality, using that the second coordinates of a $(0,m,2)$-net have star-discrepancy tending to zero and that the variation of $f(\boldsymbol{z}^*;\cdot)$ is bounded uniformly in $\boldsymbol{z}^*$, to see that $\Delta_2(p) \to 0$ as $N \to \infty$. Thus, for every polynomial $p$ on $\mathbb{S}^2$ (satisfying $| p(\boldsymbol{z}) | \leq 1$ on $\mathbb{S}^2$) we get that $| Q_N(p) - I(p) | \leq | \Delta_1(p) | + | \Delta_2(p) | \to 0$ as $N \to \infty$. Now, we can apply Proposition~\ref{prop:AnBlGo}. Let $q = 3$. Set $\nu_N = \frac{1}{N} \sum_{\boldsymbol{z} \in Z_N} \delta_{\boldsymbol{z}}$. Then to every $\varepsilon > 0$ we can choose a smallest integer $n$ such that $K / n < \varepsilon / 2$ and find an $N_n$ such that $C(\nu_N,n) < \varepsilon / 2$ for $N \geq N_n$, yielding \begin{equation*} \left| \frac{| Z_N \cap A |}{N} - \sigma_2(A) \right| \leq C_0 \left\{ \frac{K}{n} + C(\nu_N,n) \right\} \leq C_0 \, \varepsilon, \end{equation*} which holds for every $K$-regular test set $A$ ($K$ is fixed). Item~(i) follows. Since spherical caps (rectangles) are $K$-regular ($K^\prime$-regular) for some $K, K^\prime > 0$, Items~(ii) and (iii) follow. By the list of equivalent characterizations of uniform distribution on $\mathbb{S}^2$, the sequence $\{Z_N\}$ is asymptotically uniformly distributed on $\mathbb{S}^2$.
\end{proof} Concerning the convergence rate of the spherical cap discrepancy, it was an observation of Beck~\cite{Be1984} that to any $N$-point set $Z_{N}$ on $\mathbb{S}^2$ there exists a spherical cap $C$ such that \begin{equation*} c_{1} N^{-3/4} < \left| \frac{|Z_{N} \cap C|}{N} - \sigma_2(C) \right| \end{equation*} and (using probabilistic arguments) there exist $N$-point sets $Z_{N}$ on $\mathbb{S}^2$ such that \begin{equation*} \left| \frac{| Z_{N} \cap C|}{N} - \sigma_2(C)\right| < c_{2} N^{-3/4} \, \sqrt{\log N} \end{equation*} for any spherical cap $C$. (The numbers $c_{i}>0$ are constants independent of $N$.) \begin{remark} Thus, the correct order of decay (up to a $\sqrt{\log N}$ factor) of the spherical cap discrepancy of a sequence of low discrepancy $N$-point configurations on $\mathbb{S}^2$ is given by $N^{-3/4}$ as $N \to \infty$. \end{remark} This is in contrast to the convergence rates (essentially [up to a logarithmic factor] $1/N$) given in Theorem~\ref{thm_construction} for the discrepancy with respect to spherical rectangles. \section{Numerical integration on $\mathbb{S}^2$}\label{sec:worst-case-error} We consider now numerical integration of functions over $\mathbb{S}^2$ in a reproducing kernel Hilbert space defined on $\mathbb{S}^2$. Let $\widehat{P}_\ell$, $\ell \in \mathbb{N}_0$, denote the Legendre polynomials. We define a reproducing kernel \begin{equation*} K(\boldsymbol{z},\boldsymbol{z}^\prime) = \sum_{\ell=0}^\infty \lambda_\ell \widehat{P}_\ell(\langle \boldsymbol{z},\boldsymbol{z}^\prime \rangle), \end{equation*} for some real numbers $\lambda_\ell \ge 0$. The rate of decay of $\lambda_\ell$ is related to the smoothness of the functions in the associated reproducing kernel Hilbert space. In particular, if $\lambda_\ell \asymp \ell^{-2s}$, then the reproducing kernel Hilbert space $\mathcal{H}^s$ consists of functions with smoothness $s$, see \cite{BrWo20xx} for more details. In \cite[Theorem~5.1]{BrWo20xx} a choice of $\lambda_\ell$ was introduced for which $\lambda_\ell \asymp \ell^{-3}$ and for which the reproducing kernel can be written explicitly as \begin{equation*} K(\boldsymbol{z},\boldsymbol{z}^\prime) = \sum_{\ell=0}^\infty \lambda_\ell \widehat{P}_\ell(\langle \boldsymbol{z},\boldsymbol{z}^\prime \rangle) = 2 \mathcal{I} - \|\boldsymbol{z}-\boldsymbol{z}^\prime \|, \end{equation*} where $\mathcal{I}$ is the distance integral \begin{equation*} \mathcal{I} = \int_{\mathbb{S}^2} \int_{\mathbb{S}^2} \|\boldsymbol{z}-\boldsymbol{z}^\prime \| \, \mathrm{d} \sigma_2(\boldsymbol{z}) \mathrm{d} \sigma_2(\boldsymbol{z}^\prime). \end{equation*} This integral can be computed and has value $\mathcal{I} = 4/3$ (see \cite{BrWo20xx}). The coefficients $\lambda_\ell$ are given by \begin{equation*} \lambda_0 = \mathcal{I}, \qquad \lambda_\ell = \mathcal{I} \frac{-(-1/2)_\ell}{(3/2)_\ell} \quad \text{for $\ell \ge 1$,} \end{equation*} where $(a)_\ell = a (a+1) \cdots (a+\ell-1) = \Gamma(a+\ell) / \Gamma(a)$ is the Pochhammer symbol. See \cite{BrWo20xx} for more details. The function $K(\boldsymbol{z},\boldsymbol{z}^\prime)$ is the reproducing kernel of a Hilbert space $\mathcal{H}^{3/2}$ over $\mathbb{S}^2$. In \cite{BrWo20xx} it is shown that the squared worst-case error for numerical integration in this $\mathcal{H}^{3/2}$ is given by \begin{equation} \label{eq_wce_32} e^2(Q_N, \mathcal{H}^{3/2}) = \mathcal{I} - \frac{1}{N^2} \sum_{k,\ell=0}^{N-1} \|\boldsymbol{z}_k - \boldsymbol{z}_\ell\|. \end{equation} For quadrature rules $Q_N$ with node sets maximizing the sum of distances $\sum_{k,\ell=0}^{N-1} \|\boldsymbol{z}_k - \boldsymbol{z}_\ell\|$ (that is, minimizing the right-hand side in \eqref{eq_wce_32}, which in turn means that the nodes have minimal $L_2$-discrepancy), it is known that there are constants $c, C > 0$ such that \begin{equation} \label{eq:bounds.e.squared} c \, N^{-3/2} \le e^2(Q_N, \mathcal{H}^{3/2}) \le C \, N^{-3/2} \end{equation} for $N$ sufficiently large, see again \cite[Corollary~5.2]{BrWo20xx}. Note that, as shown in Section~\ref{subsec:lower.bound}, the lower estimate holds for arbitrary $N$-point configurations. \subsection{A lower bound on the worst-case error} \label{subsec:lower.bound} Let $Q_N^*$ be a quadrature rule whose integration points $\boldsymbol{z}_0^*, \dots, \boldsymbol{z}_{N-1}^*$ maximize the sum of distances $\sum_{k,\ell=0}^{N-1} \|\boldsymbol{z}_k - \boldsymbol{z}_\ell\|$ or, equivalently, have minimal $L_2$-discrepancy. (Such points always exist by the continuity of the distance function and compactness of the sphere.) Then, using the lower bound in \eqref{eq:bounds.e.squared}, we have \begin{equation*} e^2(Q_N,\mathcal{H}^{3/2}) = \mathcal{I} - \frac{1}{N^2} \sum_{k,\ell=0}^{N-1} \left\| \boldsymbol{z}_k - \boldsymbol{z}_\ell \right \| \geq \mathcal{I} - \frac{1}{N^2} \sum_{k,\ell=0}^{N-1} \left\| \boldsymbol{z}_k^* - \boldsymbol{z}_\ell^* \right \| = e^2(Q_N^*,\mathcal{H}^{3/2}) \geq c \, N^{-3/2} \end{equation*} for some positive constant $c$ not depending on $N$ and for $N$ sufficiently large. Thus, one obtains that $e^2(Q_N,\mathcal{H}^{3/2}) \geq c \, N^{-3/2}$ for sufficiently large $N$ for the equal weight quadrature rule $Q_N$ induced by any $N$-point configuration. This is in agreement with the results obtained in \cite{HeSl2005b}. \subsection{An upper bound on the worst-case error} By Stolarsky's invariance principle \eqref{eq:stolarsky.inv.prncpl} and representation \eqref{eq_wce_32} we have \begin{equation} \label{eq:wce.l2.discr} e^2(Q_N,\mathcal{H}^{3/2}) = \mathcal{I} - \frac{1}{N^2} \sum_{k,\ell=0}^{N-1} \left\|\boldsymbol{z}_k-\boldsymbol{z}_\ell \right\| = 4 \left[ D_{2}( Z_N; \mathbb{S}^2, \mathcal{C} ) \right]^2. \end{equation} Since the spherical cap discrepancy provides an upper bound for the $L_2$-discrepancy via $D_2( Z_N; \mathbb{S}^2, \mathcal{C} ) \leq \sqrt{2} D(Z_N; \mathbb{S}^2, \mathcal{C})$, one has necessarily $e^2(Q_N,\mathcal{H}^{3/2}) \to 0$ as $N \to \infty$ for a sequence of asymptotically uniformly distributed $N$-point configurations on $\mathbb{S}^2$. This applies in particular to digital nets on the sphere of the type considered in Theorem~\ref{thm:nets.sphere.equidistr}. A natural question is whether such nets achieve the optimal order of convergence of the worst-case error over the unit ball of $\mathcal{H}^{3/2}$ over $\mathbb{S}^2$, provided with the reproducing kernel $K(\boldsymbol{x}, \boldsymbol{y}) = (8 / 3) - \| \boldsymbol{x} - \boldsymbol{y} \|$, which by \eqref{eq:wce.l2.discr} is the same as the optimal order of decay of the $L_2$-discrepancy on $\mathbb{S}^2$, as suggested by the numerical results in Section~\ref{subsec:numerical.results}. \begin{theorem} \label{thm:upper.bound} Let $\{ Z_N \}$ be a sequence of $N$-point configurations on $\mathbb{S}^2$ defined by point sets $P_N \subseteq [0,1]^2$ lifted to the sphere $\mathbb{S}^2$ by means of $Z_N = \Phi(P_N)$. Then the equal weight quadrature rule $Q_N$ associated with $Z_N$ satisfies \begin{equation*} e^2(Q_N,\mathcal{H}^{3/2}) \le \left( \frac{24}{\sqrt{3}} + 2 \sqrt{2} \right) D^\ast(P_N; [0,1)^2, \mathcal{R}^\ast).
\end{equation*} \end{theorem} \begin{proof} Notice that $\int_{\mathbb{S}^2} \| \boldsymbol{z} - \boldsymbol{z}^\prime \| \, \dd \sigma_2( \boldsymbol{z}^\prime)$ does not depend on $\boldsymbol{z} \in \mathbb{S}^2$. Thus, defining the function \begin{equation} \label{eq:function} \Delta( \boldsymbol{w} ) {:=} \int_{\mathbb{S}^2} \left\| \boldsymbol{w} - \boldsymbol{z} \right\| \dd \sigma_2( \boldsymbol{z} ) - \frac{1}{N} \sum_{k = 0}^{N-1} \left\| \boldsymbol{w} - \boldsymbol{z}_k \right\|, \qquad \boldsymbol{w} \in \mathbb{S}^2, \end{equation} one obtains \begin{equation*} \frac{1}{N} \sum_{\ell = 0}^{N-1} \Delta( \boldsymbol{z}_\ell ) = \int_{\mathbb{S}^2} \int_{\mathbb{S}^2} \left\| \boldsymbol{z} - \boldsymbol{z}^\prime \right\| \dd \sigma_2( \boldsymbol{z}) \dd \sigma_2( \boldsymbol{z}^\prime) - \frac{1}{N^2} \sum_{k, \ell = 0}^{N-1} \left\| \boldsymbol{z}_k - \boldsymbol{z}_\ell \right\| = e^2(Q_N,\mathcal{H}^{3/2}) \end{equation*} and therefore \begin{equation*} e^2(Q_N,\mathcal{H}^{3/2}) \leq \max_{0 \leq \ell < N} \Delta(\boldsymbol{z}_\ell) \leq \sup_{\boldsymbol{z} \in \mathbb{S}^2} \Delta(\boldsymbol{z}). \end{equation*} Note that the sum in \eqref{eq:function} can be seen as the potential (with respect to the distance kernel) of the discrete measure associated with $Z_N$ at $\boldsymbol{w}$. Thus, $\Delta( \boldsymbol{z}_\ell )$ gives the deviation from the leading term of the asymptotic expansion (as $N \to \infty$) of this potential at $\boldsymbol{z}_\ell$. Let \begin{equation*} g(\alpha,\tau) = \|\boldsymbol{w} - \Phi(\alpha,\tau) \|. \end{equation*} Since, by \eqref{eq_area}, the transformation $\Phi$ is area-preserving, it follows that $$\int_{\mathbb{S}^2} \|\boldsymbol{w} - \boldsymbol{z}\| \,\mathrm{d} \sigma_2(\boldsymbol{z}) = \int_{0}^1 \int_0^1 g(\alpha,\tau) \,\mathrm{d} \alpha \,\mathrm{d} \tau.$$ Thus, for any fixed $\boldsymbol{w} \in \mathbb{S}^2$, we can view $\Delta(\boldsymbol{w})$ as the integration error of integrating the function $g$ using the points $P_N$. By the Koksma-Hlawka inequality, this integration error is bounded by the star-discrepancy $D^\ast(P_N; [0,1]^2, \mathcal{R}^\ast)$ of $P_N$ times the total variation of the function $g$ in the sense of Hardy and Krause. The variation of $g$ in the sense of Vitali is given by \begin{equation*} V^{(1,2)}(g) = \sup_{\mathcal{P}} \sum_{J \in \mathcal{P}} |\delta(g,J)|, \end{equation*} where the supremum is taken over all partitions of $[0,1)^2$ into subintervals $J = [a_1,a_2) \times [b_1,b_2)$ and $\delta(g,J) = g(a_1,b_1)-g(a_1,b_2)-g(a_2,b_1)+g(a_2,b_2)$. The variation of the one-dimensional projections is given by \begin{equation*} V^{(1)}(g) = \sup_{0=x_0 < x_1 < \cdots < x_M =1} \sum_{k=1}^M |g(x_k,1)-g(x_{k-1},1)| \end{equation*} and \begin{equation*} V^{(2)}(g) = \sup_{0=y_0 < y_1 < \cdots < y_M =1} \sum_{k=1}^M |g(1,y_k)-g(1,y_{k-1})|. \end{equation*} The total variation of $g$ in the sense of Hardy and Krause is then given by \begin{equation*} V(g) = V^{(1)}(g) + V^{(2)}(g) + V^{(1,2)}(g). \end{equation*} Let $\boldsymbol{w} = (0, 0,\pm 1)$. Then $\delta(g,J) = 0$ for all rectangles $J$ and therefore $V^{(1,2)}(g) = 0$. Further, $\Phi(\alpha,1) = (0,0,-1)$ for all $0 \leq \alpha \leq 1$ and therefore $V^{(1)}(g) = 0$. Finally, $V^{(2)}(g) = |g(1,0) - g(1,1)| = 2$ because of the monotonicity of $g(1,\cdot)$. Thus we have $V(g) = 2$. Let $\boldsymbol{w} \neq (0,0, \pm 1)$. Then there are $u,v \in (0,1)^2$ such that $\boldsymbol{w} = \Phi(u,v)$. Again, we have $V^{(1)}(g) = 0$. 
Further, we have $V^{(2)}(g) \le 2 \sqrt{2}$, where we have equality if $u = 1$ and $v=1/2$. Now we consider $V^{(1,2)}(g)$. The variation $V^{(1,2)}(g)$ does not change when $u$ is varied, because of the rotational symmetry of $g$ about the polar axis. We may therefore assume without loss of generality that $u=1/2$. First, let $v = 1/2$. The mixed derivative \begin{equation*} \frac{\partial^2 g}{\partial \alpha \partial \tau}(\alpha, \tau) = \frac{\partial^2 g}{\partial \tau \partial \alpha}(\alpha, \tau) = - 4 \pi \frac{\left( 1 - 2 \tau \right) \left( 1 + \sqrt{\left( 1 - \tau \right) \tau} \cos(2 \pi \alpha ) \right) \sin( 2 \pi \alpha )}{\sqrt{\left( 1 - \tau \right) \tau} \left( 2 + 4 \sqrt{\left( 1 - \tau \right) \tau} \cos( 2 \pi \alpha ) \right)^{3/2}} \end{equation*} is finite for $0 \leq \alpha \leq 1$ and $0 < \tau < 1$ with $\Phi(\alpha, \tau) \neq \boldsymbol{w}$. In particular, its sign does not change for $(\alpha, \tau)$ in the interior of one of the four quadrants $\widehat{T}_{1} = [0,1/2]\times [0,1/2]$, $\widehat{T}_{2} = [0,1/2]\times [1/2,1]$, $\widehat{T}_{3} = [1/2,1]\times [0,1/2]$, $\widehat{T}_{4} = [1/2,1] \times [1/2,1]$ of the square. For a subinterval $J$ with $\overline{J}$ contained in the interior of one of these quadrants, one can use \begin{equation}\label{eq:delta.relation} \delta( g; J ) = \int_J \frac{\partial^2 g}{\partial \alpha \partial \tau}( \boldsymbol{x} ) \dd \boldsymbol{x} = \mathrm{VOL}(J) \frac{\partial^2 g}{\partial \alpha \partial \tau}( \boldsymbol{x}_J ) \qquad \text{for some $\boldsymbol{x}_J \in \overline{J}$} \end{equation} to determine the sign of $\delta(g; J)$. For a subinterval $J$ which shares an upper or lower boundary with the square, the quantity $\delta(g; J)$ reduces to a difference of two function values, since $g$ remains constant for $\tau=0,1$. In this case one can use monotonicity of the function $g$ along horizontals (decreasing towards $\alpha = 1/2$). For $\overline{J}$ with $\boldsymbol{w}$ as one corner, one can still determine whether $\delta(g; J) \leq 0$ or $\delta(g; J) \geq 0$. It suffices to consider partitions $\mathcal{P}_{\boldsymbol{w}}$ defined by horizontal and vertical lines including the lines with $\tau = 1/2$ and $\alpha = 1/2$. From our observations on the sign of $\delta(g; J)$ we conclude that the expression $\sum_{J \in \mathcal{P}_{\boldsymbol{w}}} |\delta(g,J)|$ can be reduced to two summands for each quadrant (and by symmetry) \begin{align*} V^{(1,2)}(g) &= \sum_{J \in \mathcal{P}_{\boldsymbol{w}}} \left|\delta(g,J) \right| = - 2 \big( g(0, \Delta\tau^\prime) - g(1/2, \Delta\tau^\prime) + 0 - 2 \big) + 2 \big( g(0, \Delta\tau^\prime) - g(1/2, \Delta\tau^\prime) \big) \\ &\quad + 2 \big( 2 - 0 + g(1/2, 1 - \Delta\tau^{\prime\prime}) - g(0, 1 - \Delta\tau^{\prime\prime}) \big) + 2 \big( g(0, 1 - \Delta\tau^{\prime\prime}) - g(1/2, 1 - \Delta\tau^{\prime\prime}) \big), \end{align*} where $\Delta\tau^\prime$ and $\Delta\tau^{\prime\prime}$ denote the ``heights'' of the first and last row of subintervals of $\mathcal{P}_{\boldsymbol{w}}$. Cancelling terms, we arrive at \begin{equation*} V^{(1,2)}(g) = 8 \qquad \text{for $(u,v) = (1/2, 1/2)$.} \end{equation*} Therefore, for the case $v=1/2$, we can use the Koksma-Hlawka inequality to obtain \begin{equation*} |\Delta(\boldsymbol{w})| \le D^\ast(P_N; [0,1)^2, \mathcal{R}^\ast) \, V(g) \leq D^\ast(P_N; [0,1)^2, \mathcal{R}^\ast) \left( 8 + 2 \sqrt{2} \right). \end{equation*} Now assume that $0 < v < 1/2$ (the case $1/2 < v < 1$ follows by symmetry).
Here, the mixed derivative is given by \begin{equation*} \frac{\partial^2 g}{\partial \alpha \partial \tau}(\alpha, \tau) = \frac{\partial^2 g}{\partial \tau \partial \alpha}(\alpha, \tau) = - 16 \pi \left( 1 - v \right) v \left( 1 - 2 \tau \right) \, R( \cos(2\pi \alpha) ) \, \sin( 2 \pi \alpha ) \left[ g(\alpha, \tau) \right]^{-3}, \end{equation*} where the linear form $R$ is given by \begin{equation*} R(x) = x + \frac{v - \tau + \left( 1 - 2 v \right) \left( 1 - \tau \right) \tau}{\sqrt{\left( 1 - v \right) v \left( 1 - \tau \right) \tau} \left( 1 - 2 \tau \right)}. \end{equation*} Hence the mixed derivative is again finite for $0 \leq \alpha \leq 1$ and $0 < \tau < 1$ with $\Phi(\alpha, \tau) \neq \boldsymbol{w}$. Furthermore, it vanishes on the lines $\alpha = 0,1$ and on the line $\alpha = 1/2$ (except when $\tau = 1/2$, where it is undefined). It does not vanish on the line $\tau = 1/2$. Finally, it vanishes on the set (cf. Figure~\ref{fig:fig.g}) \begin{equation*} \Gamma {:=} \left\{ (\alpha, \tau) : \left( 1 - 2 \tau \right) \, R( \cos(2\pi \alpha) ) = 0, 0 \leq \alpha \leq 1, 0 < \tau < 1 \right\}. \end{equation*} (Note that the mixed derivative is singular everywhere along the lower and upper boundary of the unit square which are mapped to the poles of the sphere.) The set $\Gamma$ is symmetric in the sense that $(\alpha, \tau) \in \Gamma$ if and only if $(1-\alpha, \tau) \in \Gamma$. From the observation \begin{equation*} R( x ) \big|_{\tau = v} = x + \frac{0 + \left( 1 - 2v \right) \left( 1 - v \right) v}{\left(1-v\right) v \left( 1 - 2 v \right)} = x + 1 \end{equation*} it follows that $\Gamma$ and the line $\tau = v$ intersect only in the point $(1/2,v)$. We observe that the following three relations are equivalent: \begin{eqnarray*} \left| R(x) - x \right| & \leq & 1, \\ \left( v - \tau \right) \left( v - 3 v \tau + 3 v \tau^2 - \tau^3 \right) & \leq & 0, \\ \tau \geq v \quad \text{and} \quad \tau^3 + 3 v \tau & \leq & 3 v \tau^2 + v. \end{eqnarray*} From the last pair we obtain the equivalence \begin{equation*} \left| R(x) - x \right| \leq 1 \qquad \text{if and only if} \qquad v \leq \tau \leq v + \left( 1 - v \right)^{2/3} v^{1/3} - \left( 1 - v \right)^{1/3} v^{2/3} {=:} \tau_v. \end{equation*} We conclude that $\Gamma$ is the graph of a curve contained in the strip $v \leq \tau \leq \tau_v$ which is a function $\tau(\alpha)$ (assuming the value $\tau_v$ at $\alpha=0,1$ and $v$ at $\alpha=1/2$) and is a function $\alpha(\tau)$ when restricted to either the left or right half of the unit square. We record that $\tau_v < 1/2$ if $0<v<1/2$ and $\tau_v = 1/2$ if $v=1/2$. Moreover, the vertical line test also shows that the function $\tau(\alpha)$ is strictly monotonically decreasing towards $\alpha = 1/2$. It is also continuous and continuously differentiable as can be seen from an implicit differentiation of $R(\cos(2\pi \alpha))=0$ (the factor $(1-2\tau)$ can not be zero in our setting): \begin{equation*} \tau^\prime(\alpha) = - 4 \pi \sin( 2 \pi \alpha ) \frac{\left( 1 - 2 \tau \right)^2 \left( 1 - \tau \right)^2 \tau \sqrt{\left( 1 - v \right) v \left( 1 - \tau \right) \tau}}{v - 3 v \tau + 3 v \tau^2 - \tau^3 + 3 \left( \tau - v \right) \left( 1 - \tau \right) \tau}, \qquad \text{where $\tau = \tau(\alpha)$.} \end{equation*} (The denominator is a strictly monotonically increasing function on the interval $(v,1/2)$.) 
Using the relation $R( \cos( 2 \pi \alpha ) ) = 0$ along $\Gamma$ in order to substitute $\cos( 2 \pi \alpha )$ in $g(\alpha, \tau)$, we obtain $g$ restricted to $\Gamma$ and its first derivative in the forms \begin{equation*} g_\Gamma(\alpha) {:=} g( \alpha, \tau(\alpha) ) = 2 \sqrt{\frac{\tau(\alpha) - v}{1-2v}}, \qquad g_\Gamma^\prime( \alpha ) = \frac{2}{1-2v} \frac{\tau^\prime(\alpha)}{g_\Gamma(\alpha)}. \end{equation*} Finally, we observe that $g$ is decreasing along horizontal lines as $\alpha$ tends to $1/2$, since \begin{equation*} \frac{\partial g}{\partial \alpha}(\alpha, \tau) = - 8 \pi \sqrt{\left( 1 - v \right) v \left( 1 - \tau \right) \tau} \, \frac{\sin(2\pi \alpha)}{g(\alpha,\tau)}, \qquad (\alpha, \tau) \in [0,1]^2, (\alpha, \tau) \neq (1/2,v) \end{equation*} and $g$ is decreasing along vertical lines in the strip $v \leq \tau \leq 1/2$ as $\tau$ tends to $v$, since \begin{equation*} \frac{\partial g}{\partial \tau}(\alpha, \tau) = 2 \, \frac{H( \cos( 2 \pi \alpha ) )}{g( \alpha, \tau )}, \qquad H(x) = 1 - 2 v + \sqrt{\frac{\left( 1 - v \right) v}{\left( 1 - \tau \right) \tau}} \left( 1 - 2 \tau \right) x. \end{equation*} \begin{figure}[ht] \begin{center} \includegraphics[scale=.08]{contourplot.jpg} \includegraphics[scale=.075]{variationVitali.jpg} \caption{\label{fig:fig.g} Contour plot of $g(\alpha,\tau)$ with curve $\Gamma$ and scheme to compute $V^{(1,2)}(g)$.} \end{center} \end{figure} Similar as in the case $v = 1/2$, the curve $\Gamma$ and the vertical line $\alpha = 1/2$ divide the unit square into four regions $T_1, T_2, T_3, T_4$. In the interior of each region the mixed derivative has the same sign. By \eqref{eq:delta.relation}, the sign of $\delta(g; J)$ is the same for all subintervals contained in a fixed region $T_k$. Suppose $\mathcal{P}_{\boldsymbol{w}}$ is a partition of the unit square determined by a grid which is symmetric with respect to $\alpha = 1/2$ and also contains the vertical $\alpha = u ( = 1/2 )$ and the horizontals $\tau = v$ and $\tau = \tau_v$. (It suffices to consider this type of partitions.) The horizontals $\tau = v$ and $\tau = \tau_v$ subdivide the regions further into a rectangle $T_k^\prime$ without the curve $\Gamma$ and a part $T_k^{\prime\prime}$ with $\Gamma$ as part of the boundary. By symmetry (cf. 
Figure~\ref{fig:fig.g}) \begin{align*} &V^{(1,2)}(g) = \sum_{J \in \mathcal{P}_{\boldsymbol{w}}} \left|\delta(g,J) \right| = 2 \left( g(0,\Delta\tau^\prime) - g(1/2,\Delta\tau^\prime) \right) - 2 \sideset{}{^\prime}{\sum}_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_1^\prime}}} \delta(g,J) - 2 \sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_1^{\prime\prime}}, \\ J \cap \Gamma = \emptyset}} \delta(g,J) \\ &\phantom{=\pm}+ 2 \sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_1} \cup \overline{T_4}, \\ J \cap \Gamma \neq \emptyset}} \left| \delta(g,J) \right| + 2 \sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_4^{\prime\prime}}, \\ J \cap \Gamma = \emptyset}} \delta(g,J) + 2 \sideset{}{^\prime}{\sum}_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_4^\prime}}} \delta(g,J) + 2 \left( g(0,1- \Delta\tau^{\prime\prime}) - g(1/2,\Delta\tau^{\prime\prime}) \right), \end{align*} where $\Delta\tau^\prime$ and $\Delta\tau^{\prime\prime}$ denote the ``heights'' of the first and last row of subintervals of $\mathcal{P}_{\boldsymbol{w}}$ and in the sums $\sideset{}{^\prime}{\sum}$ one excludes the $J$'s from the first and last row. Using cancellation at ``interior'' nodes, we obtain \begin{align*} &V^{(1,2)}(g) = 2 \left( g(0,\Delta\tau^\prime) - g(1/2,\Delta\tau^\prime) \right) + 2 \left( g(0, v) - 0 - g(0, \Delta\tau^\prime) + g(1/2, \Delta\tau^\prime) \right) \\ &\phantom{=\pm}- 2 \sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_1^{\prime\prime}}, \\ J \cap \Gamma = \emptyset}} \delta(g,J) + 2 \sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_1} \cup \overline{T_4}, \\ J \cap \Gamma \neq \emptyset}} \left| \delta(g,J) \right| + 2 \sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_4^{\prime\prime}}, \\ J \cap \Gamma = \emptyset}} \delta(g,J) + 2 \left( g(0,1- \Delta\tau^{\prime\prime}) - g(1/2,\Delta\tau^{\prime\prime}) \right) \\ &\phantom{=\pm}+ 2 \left( g(0, \tau_v) - g(1/2, \tau_v) + g(1/2, 1- \Delta\tau^{\prime\prime}) - g(0,1- \Delta\tau^{\prime\prime}) \right) \\ &\phantom{=}= 2 g(0, v) - 2 \sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_1^{\prime\prime}}, \\ J \cap \Gamma = \emptyset}} \delta(g,J) + 2 \sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_1} \cup \overline{T_4}, \\ J \cap \Gamma \neq \emptyset}} \left| \delta(g,J) \right| + 2 \sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_4^{\prime\prime}}, \\ J \cap \Gamma = \emptyset}} \delta(g,J) + 2 \left( g(0, \tau_v) - g(1/2, \tau_v) \right). \end{align*} Let $v = y_m < y_{m+1} < \cdots < y_{m+K+1} = \tau_v$ denote the $\tau$-values of the grid defining $\mathcal{P}_{\boldsymbol{w}}$ in the strip $v \leq \tau \leq \tau_v$. To every $y_{m+k}$ with $0 \leq k \leq K-1$ there exists a maximal $x_{\ell_{m+k}}$ of the $\alpha$-values defining the grid such that the subinterval $[x_{\ell_{m+k-1}}, y_{m+k}) \times [x_{\ell_{m+k}}, y_{m+k+1})$ does not intersect $\Gamma$. For consistency reason we set $\ell_{m+K} = 0$, so that $x_{\ell_{m+K}} = 0$. 
Then \begin{align*} - \sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_1^{\prime\prime}}, \\ J \cap \Gamma = \emptyset}} \delta(g,J) &= \sum_{k=0}^{K-1} \left( g(0,y_{m+k+1}) - g(x_{\ell_{m+k}},y_{m+k+1}) + g(x_{\ell_{m+k}},y_{m+k}) - g(0,y_{m+k}) \right) \\ &= g(0,y_{m+K}) - g(0,y_{m}) - \sum_{k=0}^{K-1} \left( g(x_{\ell_{m+k}},y_{m+k+1}) - g(x_{\ell_{m+k}},y_{m+k}) \right). \end{align*} Since $g$ is decreasing along horizontals in the strip $v \leq \tau \leq 1/2$ as $\tau$ goes to $v$, one gets \begin{equation*} - \sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_1^{\prime\prime}}, \\ J \cap \Gamma = \emptyset}} \delta(g,J) \leq g(0,y_{m+K}) - g(0,y_{m}) \leq g(0,\tau_v) - g(0,v). \end{equation*} Similarly, to every $y_{m+k}$ with $1 \leq k \leq K$ there exists a minimal $x_{p_{m+k}}$ such that the subinterval $[x_{p_{m+k}}, y_{m+k}) \times [x_{p_{m+k+1}}, y_{m+k+1})$ does not intersect $\Gamma$. For consistency reason we set $p_{m}$ to that index with $x_{p_{m}} = 1/2$. Then \begin{align*} \sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_4^{\prime\prime}}, \\ J \cap \Gamma = \emptyset}} \delta(g,J) &= \sum_{k=1}^K \left( g(x_{p_{m+k}}, y_{m+k}) - g(1/2, y_{m+k}) + g(1/2, y_{m+k+1}) - g(x_{p_{m+k}}, y_{m+k+1}) \right) \\ &= g(1/2, y_{m+K+1}) - g(1/2, y_{m+1}) - \sum_{k=1}^K \left( g(x_{p_{m+k}}, y_{m+k+1}) - g(x_{p_{m+k}}, y_{m+k}) \right) \\ &\leq g(1/2, y_{m+K+1}) - g(1/2, y_{m+1}) \leq g(1/2, y_{m+K+1}) = g(1/2, \tau_v). \end{align*} As an intermediate result we have \begin{align*} V^{(1,2)}(g) &\leq 2 g(0, v) + 2 \left( g(0,\tau_v) - g(0,v) \right) + 2 \sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_1} \cup \overline{T_4}, \\ J \cap \Gamma \neq \emptyset}} \left| \delta(g,J) \right| + 2 g(1/2, \tau_v) \\ &\phantom{=}+ 2 \left( g(0, \tau_v) - g(1/2, \tau_v) \right) = 4 g(0,\tau_v) + 2 \sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_1} \cup \overline{T_4}, \\ J \cap \Gamma \neq \emptyset}} \left| \delta(g,J) \right|. \end{align*} Using triangle inequality and monotonicity along horizontals we get \begin{align*} &\sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_1} \cup \overline{T_4}, \\ J \cap \Gamma \neq \emptyset}} \left| \delta(g,J) \right| = \sum_{k=0}^{K} \sum_{j=\ell_{m+k}}^{p_{m+k}-1} \left| g(x_j,y_{m+k}) - g(x_{j+1},y_{m+k}) + g(x_{j+1},y_{m+k+1}) - g(x_{j},y_{m+k+1}) \right| \\ &\phantom{=}\leq \sum_{k=0}^{K} \sum_{j=\ell_{m+k}}^{p_{m+k}-1} \left( g(x_j,y_{m+k}) - g(x_{j+1},y_{m+k}) \right) + \sum_{k=0}^{K} \sum_{j=\ell_{m+k}}^{p_{m+k}-1} \left( g(x_{j},y_{m+k+1}) - g(x_{j+1},y_{m+k+1}) \right) \\ &\phantom{=}= \sum_{k=0}^{K} \left( g(x_{\ell_{m+k}},y_{m+k}) - g(x_{p_{m+k}},y_{m+k}) \right) + \sum_{k=0}^{K} \left( g(x_{\ell_{m+k}},y_{m+k+1}) - g(x_{p_{m+k}},y_{m+k+1}) \right) \\ &\phantom{=}\leq 2 \sum_{k=0}^{K} \left( g_\Gamma(x_{\ell_{m+k}}) - g_\Gamma(x_{p_{m+k}}) \right). \end{align*} In the last step we used that the left side of a horizontal segment of successive rectangles of the covering of $\Gamma$ is below and the right side is above the curve $\Gamma$ by construction, cf. Figure~\ref{fig:fig.g}. Recall, that $g_\Gamma$ is strictly monotonically decreasing as $\alpha$ tends to $1/2$. 
Since $g_\Gamma(x_{\ell_{m+k-1}}) > g_\Gamma(x_{p_{m+k}})$ for $0 \leq k \leq K - 1$ by monotonicity of $g_\Gamma$, we have \begin{align*} &\sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_1} \cup \overline{T_4}, \\ J \cap \Gamma \neq \emptyset}} \left| \delta(g,J) \right| \leq 2 \Big\{ \left( g_\Gamma(x_{\ell_{m}}) - g_\Gamma(x_{p_{m}}) \right) + \sum_{k=1}^{K} \left( g_\Gamma(x_{\ell_{m+k}}) - g_\Gamma(x_{\ell_{m+k-1}}) \right) \\ &\phantom{=}+ \sum_{k=1}^{K} \left( g_\Gamma(x_{\ell_{m+k-1}}) - g_\Gamma(x_{p_{m+k}}) \right) \Big\} = 2 \Big\{ g_\Gamma(0) - g_\Gamma(1/2) + \sum_{k=1}^{K} \left( g_\Gamma(x_{\ell_{m+k-1}}) - g_\Gamma(x_{p_{m+k}}) \right) \Big\}. \end{align*} By construction $x_{\ell_{m+K-1}} > x_{p_{m+K}} > x_{\ell_{m+K-2}} > x_{p_{m+K-1}} > \cdots > x_{\ell_{m+k}} > x_{p_{m}} = 1/2$. So, because of monotonicity of $g_\Gamma$, we increase the right-hand side of the estimate above when including terms terms for the gaps $[x_{p_{m+k}},x_{\ell_{m+k-2}}]$. Thus \begin{equation*} \sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_1} \cup \overline{T_4}, \\ J \cap \Gamma \neq \emptyset}} \left| \delta(g,J) \right| \leq 4 \left( g_\Gamma(0) - g_\Gamma(1/2) \right) = 4 g_\Gamma(0), \end{equation*} where the expression in parentheses is the total variation of (monotone) one-dimensional function $g_\Gamma$ restricted to $[0,1/2]$. Putting everything together, we obtain \begin{equation*} V^{(1,2)}(g) \leq 4 g(0,\tau_v) + 2 \sum_{\substack{J \in \mathcal{P}_{\boldsymbol{w}}, \\ J \subseteq \overline{T_1} \cup \overline{T_4}, \\ J \cap \Gamma \neq \emptyset}} \left| \delta(g,J) \right| \leq 4 g(0,\tau_v) + 8 g_\Gamma(0) = 12 g_\Gamma(0) = 24 \sqrt{ \frac{\tau_v - v}{1-2v} } \end{equation*} It can be shown that the right-most side above as function of $v$ is strictly monotonically increasing on $[0,1/2)$ with \begin{equation*} \lim_{v \to 1/2^-} \sqrt{ \frac{\tau_v - v}{1-2v} } = \frac{1}{\sqrt{3}}. \end{equation*} We can use the Koksma-Hlawka inequality to obtain \begin{equation*} |\Delta(\boldsymbol{w})| \le D^\ast(P_N; [0,1]^2, \mathcal{R}^\ast) V(g) \leq D^\ast(P_N; [0,1]^2, \mathcal{R}^\ast) \left( \frac{24}{\sqrt{3}} + 2 \sqrt{2} \right) \quad \text{for $0 < v < 1/2$.} \end{equation*} This concludes the proof. \end{proof} Digital nets \cite{DiPi2010, Ni1992} are explicit constructions of point sets in $[0,1)^2$ with $D^\ast(P_N; [0,1]^2, \mathcal{R}^\ast) = \mathcal{O}(N^{-1} \log N)$. By mapping them to the unit sphere $\mathbb{S}^2$ one also obtains explicit constructions of points on $\mathbb{S}^2$ with favorable discrepancy. We have the following corollary. \begin{corollary} Let $b\ge 2$ and $m \ge 1$ be integers and let $N= b^m$. Let $\{ Z_N \}$ be a sequence of $N$-point configurations on $\mathbb{S}^2$ defined by digital $(0,m,2)$-nets $P_N \in [0,1]^2$ lifted to the sphere $\mathbb{S}^2$ by means of $Z_N = \Phi(P_N)$. Then the equal weight quadrature rules $Q_N$ associated with $Z_N$ satisfy \begin{equation*} e^2(Q_N,\mathcal{H}^{3/2}) = \mathcal{O}(N^{-1} \log N) \qquad \text{as $N \to \infty$.} \end{equation*} \end{corollary} \subsection{Numerical results} \label{subsec:numerical.results} We consider now the worst case error \eqref{eq_wce_32} for quadrature rules defined by digital nets based on a Sobol' sequence, see \cite{So1967}. For efficient implementations of Sobol' sequences see \cite{AnSa1979,BrFo1988,JoKu2003}. Figure~\ref{fig1} shows the squared worst-case error $e^2(Q_N, \mathcal{H}^{3/2})$. 
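For orientation, the following minimal sketch (in Python; it is not the implementation used to produce Figure~\ref{fig1} and Table~\ref{table1}, which is based on a Sobol' sequence) evaluates \eqref{eq_wce_32} for a two-dimensional Hammersley point set, which is a $(0,m,2)$-net in base $2$, lifted to $\mathbb{S}^2$ by the map $\Phi$; the value $\mathcal{I} = 4/3$ is used for the distance integral.
\begin{verbatim}
import numpy as np

def van_der_corput(n, base=2):
    # radical inverse of n in the given base
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

def phi_map(x):
    # lift points from [0,1)^2 to the sphere via the area-preserving map Phi
    s = np.sqrt(x[:, 1] - x[:, 1] ** 2)
    return np.column_stack((2 * np.cos(2 * np.pi * x[:, 0]) * s,
                            2 * np.sin(2 * np.pi * x[:, 0]) * s,
                            1 - 2 * x[:, 1]))

def squared_worst_case_error(z):
    # e^2(Q_N, H^{3/2}) = 4/3 - (1/N^2) * sum of all pairwise Euclidean distances
    n = len(z)
    dist = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    return 4.0 / 3.0 - dist.sum() / n ** 2

m = 10
N = 2 ** m
x = np.array([[n / N, van_der_corput(n)] for n in range(N)])  # Hammersley point set
print(squared_worst_case_error(phi_map(x)))
\end{verbatim}
The value obtained need not coincide with the corresponding entry of Table~\ref{table1}, since a different $(0,m,2)$-net is used there.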
The numerical results suggest that the squared worst-case error $e^2(Q_N, \mathcal{H}^{3/2})$ converges with order $\mathcal{O}(N^{-3/2})$ as $N \to \infty$. Note that the first point of the Sobol' sequence (or any digital $(0,2)$-sequence for that matter) is always $(0,0)$. Since $\Phi(0,0) = (0,0,1)$, this point gets mapped to the North Pole. On the other hand, no point of the digital sequence gets mapped to the South Pole. This might not be a desirable feature. To remedy this situation one can randomize the $(0,2)$-sequence using a scrambling algorithm, see \cite{Ma1998,Ow2003}. In this case the sequence in $[0,1]^2$ is still a $(0,2)$-sequence, but the point $(0,0)$ only occurs with probability $0$. Numerical investigations of scrambled Sobol' sequences yield results similar to those shown in Figure~\ref{fig1}. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.55]{wce_h32_m1_20.png} \caption{\label{fig1} The dashed lines show $N^{-3/2}$ and $(9/4) N^{-3/2}$, and the curve shows the squared worst-case error $e^2(Q_N,\mathcal{H}^{3/2})$, where the quadrature points are a digital net mapped to the sphere.} \end{center} \end{figure} \begin{table}[ht] \begin{center} \begin{tabular}{l|l|l|l|l} $m$ & $N = 2^m$ & $e^2(Q_N,\mathcal{H}^{3/2})$ & $N^{-3/2}$ & $N^{3/2} e^2(Q_N,\mathcal{H}^{3/2})$ \\ \hline 1 & 2 & 6.2622e-01 & 3.5355e-01 & 1.7712 \\ 2 & 4 & 2.1149e-01 & 1.2500e-01 & 1.6920 \\ 3 & 8 & 8.1448e-02 & 4.4194e-02 & 1.8430 \\ 4 & 16 & 3.5091e-02 & 1.5625e-02 & 2.2459 \\ 5 & 32 & 8.0526e-03 & 5.5242e-03 & 1.4577 \\ \hline 6 & 64 & 2.6309e-03 & 1.9531e-03 & 1.3470 \\ 7 & 128 & 9.4336e-04 & 6.9053e-04 & 1.3661 \\ 8 & 256 & 3.4501e-04 & 2.4414e-04 & 1.4132 \\ 9 & 512 & 1.3374e-04 & 8.6316e-05 & 1.5495 \\ 10 & 1024 & 4.6029e-05 & 3.0517e-05 & 1.5083 \\ \hline 11 & 2048 & 1.8846e-05 & 1.0789e-05 & 1.7468 \\ 12 & 4096 & 6.4670e-06 & 3.8146e-06 & 1.6953 \\ 13 & 8192 & 1.7873e-06 & 1.3486e-06 & 1.3252 \\ 14 & 16384 & 5.6815e-07 & 4.7683e-07 & 1.1915 \\ 15 & 32768 & 1.9912e-07 & 1.6858e-07 & 1.1811 \\ \hline 16 & 65536 & 6.3194e-08 & 5.9604e-08 & 1.0602 \\ 17 & 131072 & 2.4122e-08 & 2.1073e-08 & 1.1447 \\ 18 & 262144 & 9.1906e-09 & 7.4505e-09 & 1.2335 \\ 19 & 524288 & 3.7001e-09 & 2.6341e-09 & 1.4047 \\ 20 & 1048576 & 1.3068e-09 & 9.3132e-10 & 1.4032 \\ \hline \end{tabular} \end{center} \caption{Numerical results: the squared worst-case error $e^2(Q_N,\mathcal{H}^{3/2})$ obtained when using digital nets over $\mathbb{Z}_2$ lifted to the sphere $\mathbb{S}^2$.} \label{table1} \end{table} The numerical results lead us to the following conjecture. \begin{conjecture} Let $\boldsymbol{x}_0,\boldsymbol{x}_1,\ldots \in [0,1)^2$ be a $(0,2)$-sequence and let $\boldsymbol{z}_0,\boldsymbol{z}_1,\ldots \in \mathbb{S}^2$ be the corresponding points on the sphere obtained by using the mapping $\Phi$. Let $Q_{N}(f) = \frac{1}{N} \sum_{n=0}^{N-1} f(\boldsymbol{z}_n)$. Then we have \begin{equation*} e^2(Q_N, \mathcal{H}^{3/2}) = \mathcal{O}(N^{-3/2}) \qquad \text{as $N \to \infty$.} \end{equation*} \end{conjecture} In other words, a $(0,2)$-sequence lifted to the $2$-sphere via the mapping $\Phi$ achieves the optimal rate of convergence of the worst-case integration error in $\mathcal{H}^{3/2}$. By Stolarsky's invariance principle, the conjecture also implies that a $(0,2)$-sequence lifted to the $2$-sphere via the mapping $\Phi$ achieves the optimal rate of decay of the spherical cap $L_2$-discrepancy. \vspace{10mm} {\bf Acknowledgement:} The first author is grateful to the School of Mathematics and Statistics at UNSW for their support.
\bibliographystyle{abbrv}
{ "timestamp": "2011-08-01T02:01:18", "yymm": "1101", "arxiv_id": "1101.5450", "language": "en", "url": "https://arxiv.org/abs/1101.5450", "abstract": "We study numerical integration on the unit sphere $\\mathbb{S}^2 \\subset \\mathbb{R}^3$ using equal weight quadrature rules, where the weights are such that constant functions are integrated exactly.The quadrature points are constructed by lifting a $(0,m,2)$-net given in the unit square $[0,1]^2$ to the sphere $\\mathbb{S}^2$ by means of an area preserving map. A similar approach has previously been suggested by Cui and Freeden [SIAM J. Sci. Comput. 18 (1997), no. 2].We prove three results. The first one is that the construction is (almost) optimal with respect to discrepancies based on spherical rectangles. Further we prove that the point set is asymptotically uniformly distributed on $\\mathbb{S}^2$. And finally, we prove an upper bound on the spherical cap $L_2$-discrepancy of order $N^{-1/2} (\\log N)^{1/2}$ (where $N$ denotes the number of points). This slightly improves upon the bound on the spherical cap $L_2$-discrepancy of the construction by Lubotzky, Phillips and Sarnak [Comm. Pure Appl. Math. 39 (1986), 149--186]. Numerical results suggest that the $(0,m,2)$-nets lifted to the sphere $\\mathbb{S}^2$ have spherical cap $L_2$-discrepancy converging with the optimal order of $N^{-3/4}$.", "subjects": "Numerical Analysis (math.NA)", "title": "Quasi-Monte Carlo rules for numerical integration over the unit sphere $\\mathbb{S}^2$" }
https://arxiv.org/abs/1409.4490
Rigidity and tolerance for perturbed lattices
A perturbed lattice is a point process $\Pi=\{x+Y_x:x\in \mathbb{Z}^d\}$ where the lattice points in $\mathbb{Z}^d$ are perturbed by i.i.d.\ random variables $\{Y_x\}_{x\in \mathbb{Z}^d}$. A random point process $\Pi$ is said to be rigid if $|\Pi\cap B_0(1)|$, the number of points in a ball, can be exactly determined given $\Pi \setminus B_0(1)$, the points outside the ball. The process $\Pi$ is called deletion tolerant if removing one point of $\Pi$ yields a process with distribution indistinguishable from that of $\Pi$. Suppose that $Y_x\sim N_d(0,\sigma^2 I)$ are Gaussian vectors with $d$ independent components of variance $\sigma^2$. Holroyd and Soo showed that in dimensions $d=1,2$ the resulting Gaussian perturbed lattice $\Pi$ is rigid and deletion intolerant. We show that in dimension $d\geq 3$ there exists a critical parameter $\sigma_r(d)$ such that $\Pi$ is rigid if $\sigma<\sigma_r$ and deletion tolerant (hence non-rigid) if $\sigma>\sigma_r$.
\section{Introduction}\label{sec:intro} Let $\Pi=\{x+Y_x:x\in \mathbb{Z}^d\}$ denote the lattice $\mathbbm{Z}^d$ perturbed by independent and identically distributed random variables $\{Y_x\}_{x\in \mathbb{Z}^d}$ taking values in $\R^d$. In this paper we address the questions of rigidity and deletion tolerance of such point processes. Rigidity holds if given the points of $\Pi$ outside a ball, one can determine exactly the number of points of $\Pi$ inside that ball. Deletion tolerance concerns the effect of removing a single point. If one point, say $Y_0$, is removed from $\Pi$, can this be detected? More formally, are the laws of $\Pi$ and $\Pi_u = \big\{x+Y_x:x\in \mathbb{Z}^d \setminus \{u\}\big\}$ mutually singular for $u \in \mathbb{Z}^d$? \begin{definition} A $\Pi$-point is an $\mathbb{R}^d$ valued random variable $Z$ such that $Z\in\Pi$ a.s. A point process $\Pi$ is {\bf deletion tolerant} if for any $\Pi$-point $Z$, the point process $\Pi \setminus Z$ is absolutely continuous\footnote{When discussing absolute continuity or singularity of two random objects, we are referring to their laws.} with respect to $\Pi$. The point process $\Pi$ is {\bf deletion singular} if $\Pi$ and $\Pi\setminus Z$ are mutually singular for any $\Pi$-point $Z$. We say that $\Pi$ is {\bf insertion tolerant} if for any Borel set $V \subset \mathbb{R}^d$ with Lebesgue measure $\mathcal{L}(V)\in(0,\infty)$, if $U$ is independent of $\Pi$ and uniform in $V$ then~$\Pi \cup U$ is absolutely continuous with respect to $\Pi$. If $\Pi$ and $\Pi \cup U$ are mutually singular for all such $V$, then we say that $\Pi$ is {\bf insertion singular}. For a point process $\Pi$ and a ball $B\subset\mathbb{R}^d$ we define $\Pi_{\hbox{in}}=\Pi_{\hbox{in}}(B):=\Pi \cap B$ and $\Pi_{\hbox{out}}(B)=\Pi \cap B^c$. We say that $\Pi$ is {\bf rigid} if for all balls $B\subset\mathbb{R}^d$ there exists a measurable function $N=N_B$ on the collection of discrete point sets in $\mathbb{R}^d$ such that $N_B(\Pi_{\hbox{out}}(B))= | \Pi_{\hbox{in}}(B)| \ a.s.$. \end{definition} Rigidity turns out to be closely related to deletion tolerance where we consider removing multiple points. We write $\Pi_S := \big\{x+Y_x:x\in \mathbb{Z}^d \setminus S \big\}$. \begin{proposition}\label{p:rigid-k-tolerant} If the distribution of $Y_x$ has a density which is everywhere positive, then the perturbed lattice $\Pi=\{x + Y_x: x\in\mathbb{Z}^d\}$ is rigid if and only if $\Pi$ and $\Pi_S$ are mutually singular for all finite sets $S\subset \mathbb{Z}^d$. \end{proposition} It was shown in~\cite{HolSoo:10} that the perturbed lattice is deletion singular in dimension $d=1$ when the perturbations $Y_x$ have bounded first moment and in dimension $d=2$ when the perturbations have bounded second moment. In contrast, we show that when $d\geq 3$, the question of deletion tolerance depends more delicately on the law of the perturbations; in particular, for Gaussian perturbations it exhibits a phase transition. \begin{theorem}\label{t:gaussian} Let $\Pi$ be the perturbed lattice in $\mathbb{Z}^d$ with Gaussian $N_d(0,\sigma^2 I)$ perturbations. For $d \geq 3$ there exist critical variances $0 < \sigma_r(d) \leq \sigma_c(d)$ such that \begin{itemize} \item If $\sigma>\sigma_c$ then $\Pi$ is deletion tolerant and is mutually absolutely continuous with respect to $\Pi_0$. \item If $0<\sigma<\sigma_c$ then $\Pi$ is deletion singular. \item If $0<\sigma<\sigma_r$ then $\Pi$ is rigid. \item If $\sigma>\sigma_r$ then $\Pi$ is non-rigid. 
\end{itemize} \end{theorem} We conjecture that in fact $\sigma_c=\sigma_r$ and that for all i.i.d.\ perturbations, the perturbed lattice is rigid if and only if the perturbed lattice is deletion singular. However, in Theorem~\ref{t:twoButNotOne} we show that for similar point processes these notions may differ. Given the results of~\cite{HolSoo:10}, it is natural to ask if heavy-tailed random variables with infinite mean may be deletion tolerant. In the case of $\alpha$-stable perturbations we give a complete characterization. \begin{theorem}\label{t:stable} Let $\Pi$ be a one-dimensional perturbed lattice with symmetric $\alpha$-stable perturbations. If $\alpha<1$ then the perturbed lattice $\Pi$ is deletion and insertion tolerant and mutually absolutely continuous with $\Pi_0$, while if $\alpha \geq 1$ then it is deletion singular and rigid. \end{theorem} In Section~\ref{s:generalPerturbation} we give a more general categorization of which perturbations give rise to deletion tolerance and rigidity. \subsection{Absolute Continuity} Assuming that the distribution of the perturbations has a density which is everywhere positive, we establish equivalences between the different notions of deletion and insertion tolerance. \begin{proposition}\label{p:equivalence} If the distribution of $Y_x$ has a density which is everywhere positive then the following are equivalent: \begin{enumerate} \item The perturbed lattice is deletion tolerant.\label{it:delTol} \item The perturbed lattice is insertion tolerant.\label{it:insTol} \item The perturbed lattice is not deletion singular. \label{it:delSing} \item The perturbed lattice is not insertion singular. \label{it:insSing} \item The measures $\Pi$ and $\Pi_0$ are mutually absolutely continuous.\label{it:mutAbs} \item The measures $\Pi$ and $\Pi_0$ are not mutually singular.\label{it:delSingOld} \end{enumerate} \end{proposition} We will also consider the case where $k$ points are inserted or deleted. Generalizing the earlier definitions, a point process $\Pi$ is {\bf $k$-deletion tolerant} if for any distinct $\Pi$-points $Z_1,\ldots,Z_k \in\Pi$, the process $\Pi \setminus \{Z_1,\ldots,Z_k\}$ is absolutely continuous with respect to $\Pi$, and {\bf $k$-deletion singular} if they are always mutually singular. We say that $\Pi$ is {\bf $k$-insertion tolerant} if for any Borel set $V \subset \mathbb{R}^d$ with Lebesgue measure $\mathcal{L}(V)\in(0,\infty)$, if $U_1,\ldots,U_k$ are independent points uniform in $V$ and independent of $\Pi$ then $\Pi \cup \{U_1,\ldots,U_k\}$ is absolutely continuous with respect to $\Pi$. If $\Pi$ and $\Pi \cup \{U_1, \ldots,U_k\}$ are mutually singular then we say $\Pi$ is {\bf $k$-insertion singular}. Perhaps surprisingly, there exists a translation-invariant point process $\hat{\Pi}$ that is deletion singular but not rigid. In fact, such a process can be $2$-deletion tolerant; that is, removing a single point from $\hat{\Pi}$ yields a singular measure, but removing any two points from $\hat{\Pi}$ yields a process which is absolutely continuous with respect to the original! We construct $\hat{\Pi}$ as the union of two correlated perturbed lattices. For $d\geq 3$ and $0<\delta<\sigma$, let $Y_x$ be i.i.d.\ $N_d\Bigl(0,(\sigma^2-\delta^2)I\Bigr)$ variables. For $(x,i)\in \mathbb{Z}^d\times \{1,2\}$ let $Y'_{x,i}$ be i.i.d.\ $N_d(0,\delta^2I)$ variables. Setting $\hat{Y}_{x,i}=Y_x+Y'_{x,i}$, we define the point process $\hat{\Pi}=\{x+\hat{Y}_{x,i}:(x,i)\in \mathbb{Z}^d\times \{1,2\}\}$. The next theorem is proved in Section~\ref{s:twoButNotOne}.
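For concreteness, here is a minimal simulation sketch (in Python, restricted to a finite window of the lattice, with arbitrary illustrative parameters; it plays no role in the proofs) of a Gaussian perturbed lattice and of the process $\hat{\Pi}$ built from two correlated copies.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, L = 3, 5                      # dimension and half-width of the finite window (illustrative)
sigma, delta = 2.0, 0.5          # any values with 0 < delta < sigma

# lattice points x in {-L, ..., L}^d
axes = [np.arange(-L, L + 1)] * d
grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, d)

# Gaussian perturbed lattice: Pi = {x + Y_x}, with Y_x ~ N_d(0, sigma^2 I)
Pi = grid + rng.normal(0.0, sigma, size=grid.shape)

# hat(Pi): common part Y_x ~ N_d(0, (sigma^2 - delta^2) I) plus
# independent parts Y'_{x,i} ~ N_d(0, delta^2 I) for i = 1, 2
Y = rng.normal(0.0, np.sqrt(sigma ** 2 - delta ** 2), size=grid.shape)
Pi_hat = np.vstack([grid + Y + rng.normal(0.0, delta, size=grid.shape) for _ in (1, 2)])
\end{verbatim}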
\begin{theorem}\label{t:twoButNotOne} There exist $0<\delta<\sigma$ such that $\hat{\Pi}$ is deletion singular but $2$-deletion tolerant and hence non-rigid. \end{theorem} \subsection{Exponential Intersection Tails property} We say that a measure $\eta$ on oriented paths in the lattice has {\bf Exponential Intersection Tails} with parameter $0< \theta <1$, denoted $EIT(\theta)$, if for some $C>0$, \[ \eta\times\eta\Big\{(\gamma,\gamma'):|\gamma\cap\gamma'|\geq n \Big\} \leq C\theta^n. \] The uniform measure on oriented paths on $\mathbb{Z}^d$ has Exponential Intersection Tails for some $\theta<1$ when $d\geq 4$ but not when $d=3$. Moreover, in~\cite{BPP:98} a measure on oriented paths in $\mathbb{Z}^3$ was constructed with Exponential Intersection Tails while it was shown that no such measure exists when $d\leq 2$. \section{Proof of Theorem~\ref{t:gaussian}} In this section we establish Theorem~\ref{t:gaussian} by first proving results about more general perturbations. Let $e_1,\ldots,e_d$ denote the standard basis vectors in $\mathbb{R}^d$. \begin{proposition}\label{p:walkTolerant} For $d\geq 3$ there exists $\rho(d)>1$ such that the following holds. Let $\Pi$ be a $d$-dimensional perturbed lattice with i.i.d.\ perturbations $\{Y_x\}_{x\in\mathbb{Z}^d}$ with density $g(x)$ which is everywhere positive. If \begin{equation}\label{e:muIcondition} \max_{i} \int_{\mathbbm{R}^d} \left(\frac{g(x+e_i)}{g(x)} \right)^2 g(x) d x < \rho(d) \, , \end{equation} then the perturbed lattice $\Pi$ is deletion tolerant. \end{proposition} \begin{proof} Let $\eta$ denote a distribution over oriented paths with $EIT(\theta)$ for some $\theta=\theta(d)\in(0,1)$. Choose $\rho(d)$ so that $\rho(d)\theta(d)<1$. By the Cauchy-Schwartz inequality, the hypothesis of the theorem implies that \begin{equation}\label{e:ratioL2Bound} \max_{i,j} \int_{\mathbbm{R}^d} \left(\frac{g(x+e_i)}{g(x)} \right)\left(\frac{g(x+e_j)}{g(x)} \right) g(x) dx < \rho(d). \end{equation} Let $\gamma=\{\gamma_0,\gamma_1,\ldots \}\subset\mathbb{Z}^d$ be an oriented walk on $\mathbb{Z}^d$ from the origin, (i.e. $\gamma_0=0$ and $\gamma_i-\gamma_{i-1}$ is a standard basis vector). We define $Y^\gamma=\{Y^\gamma_x\}_{x\in\mathbb{Z}^d}$ to be a field of independent random variables distributed as \[ Y^\gamma_x \stackrel{d}{=} \begin{cases} Y_x + \gamma_{i+1}-\gamma_i &x\in\gamma, x=\gamma_i,\\ Y_x & x\not\in \gamma. \end{cases} \] By construction the point $\gamma_i + Y^\gamma_{\gamma_i}$ has the same distribution as $\gamma_{i+1} + Y_{\gamma_{i+1}}$ so changing the perturbations in this way has the effect of shifting the points on $\gamma$ one step along the path. As there is then a point centered at every vertex in $\mathbb{Z}^d$ except 0, it follows that the point process $\{x+Y^\gamma_x:x\in \mathbb{Z}^d\}$ has the same law as $\Pi_0$. Denote by $\nu$ and $\nu^\gamma$ the distributions of $\ty$ and $Y^\gamma$, respectively. We would be done if $Y$ and $Y^\gamma$ were mutually absolutely continuous, but of course they are singular, since we have altered significantly the distribution of a specific infinite sequence of points. However, using a similar argument to that of~\cite{ACHZ:08} and~\cite{BerPer:13}, this singularity can be smoothed away by averaging over $\gamma$. Let $\Gamma$ denote a random path with the law $\eta$ satisfying $EIT(\theta)$, and define $ \ty= Y^\Gamma $. Thus if $\tnu$ denotes the distribution of $\ty$, then \[ \tnu=\int \nu^\gamma \ \eta(d\gamma) \, . 
\] We will now show that, under condition~\eqref{e:muIcondition}, the measure $\tnu$ is absolutely continuous with respect to $\nu$. Denote $Y(m)=\{Y_x\}_{|x|\leq m}$ and let $\nu_m$ denote the measure induced by $Y(m)$. Define $\ty(m)$, $\tnu_m$, $Y^\gamma(m)$ and $\nu^\gamma_m$ similarly. We let $L(y)$ denote the Radon--Nikodym derivative $\frac{d \tnu}{d\nu}$ and let $L_m(y)$ denote $\frac{d \tnu_m}{d\nu_m}$. Observe that $L_m(Y(m))$ is a martingale which converges to $L(Y)$ almost surely. If $L_m(Y(m))$ is an $L^2$ bounded martingale then $\tnu \ll \nu$ (see, e.g., \cite[Theorem 12.32]{MP:10} or~\cite{Dur:10}). By definition of $L_m$, \begin{align}\label{e:LM2expression} \E \big[L_m(Y(m))^2\big] &= \int \left[L_m(y)\right]^2 \nu_m(dy)\nonumber\\ &= \int \left[\int \frac{d\nu^\gamma_m}{d\nu_m}(y) \eta_m(d\gamma) \right]^2 \nu_m(dy)\nonumber\\ &= \int \int \int \frac{d\nu^\gamma_m}{d\nu_m} (y) \frac{d\nu_m^{\gamma'}}{d\nu_m} (y) \eta_m(d\gamma) \eta_m(d\gamma') \nu_m(dy)\nonumber\\ &= \int \int \int \frac{d\nu^\gamma_m}{d\nu_m} (y) \frac{d\nu_m^{\gamma'}}{d\nu_m} (y) \nu_m(dy) \eta_m(d\gamma) \eta_m(d\gamma') . \end{align} For fixed $\gamma$ the measure $\nu^\gamma$ is a product measure, so \begin{equation}\label{e:LMproduct} \int \frac{d\nu_m^\gamma}{d\nu_m} (y) \frac{d\nu_m^{\gamma'}}{d\nu_m} (y) \nu_m(dy)=\prod_{x\in\mathbb{Z}^d:|x|\leq m} \int_{\mathbb{R}^d} \frac{d\nu_{m,x}^\gamma}{d\nu_{m,x}} (y_x) \frac{d\nu_{m,x}^{\gamma'}}{d\nu_{m,x}} (y_x) \nu_{m,x}(dy_x)\, , \end{equation} where $\nu_{m,x}=\mu$ is the distribution of $Y_x$. If $x\not\in\gamma$, then $\frac{d\nu_{m,x}^\gamma}{d\nu_{m,x}}=1$ and hence \begin{align*} \int_{\mathbb{R}^d} \frac{d\nu_{m,x}^\gamma}{d\nu_{m,x}} (y_x) \frac{d\nu_{m,x}^{\gamma'}}{d\nu_{m,x}} (y_x) \nu_{m,x}(dy_x) &= \int_{\mathbb{R}^d} \frac{d\nu_{m,x}^{\gamma'}}{d\nu_{m,x}} (y_x) \nu_{m,x}(dy_x)\\ &=\int_{\mathbb{R}^d} \nu^{\gamma'}_{m,x}(dy_x) =1 \,. \end{align*} A similar result holds when $x\not\in\gamma'$, so it remains to consider $x\in\gamma\cap\gamma'$. In this case $x=\gamma_{|x|}=\gamma'_{|x|}$ and, for some $1\leq j,j'\leq d$, we have $e_j = \gamma_{|x|+1}-\gamma_{|x|}$ and $e_{j'} = \gamma'_{|x|+1}-\gamma'_{|x|}$. Then by definition of $\nu^\gamma_{m,x}$ and equation~\eqref{e:ratioL2Bound} we have that \begin{align*} \int_{\mathbb{R}^d} \frac{d\nu_{m,x}^\gamma}{d\nu_{m,x}} (y_x) \frac{d\nu_{m,x}^{\gamma'}}{d\nu_{m,x}} (y_x) \nu_{m,x}(dy_x) &= \int_{\mathbb{R}^d} \left(\tfrac{g(w+e_j)}{g(w)} \right)\left(\tfrac{g(w+e_{j'})}{g(w)} \right) g(w) dw\leq \rho. \end{align*} Defining $N=N_{\gamma,\gamma'}=|\gamma \cap \gamma'|$ and substituting in equation~\eqref{e:LMproduct} we have that \[ \int \frac{d\nu_m^\gamma}{d\nu_m} (y) \frac{d\nu_m^{\gamma'}}{d\nu_m} (y) \nu_m(dy) \leq \rho^{N} \] and so by equation~\eqref{e:LM2expression} \[ \sup_m \E \big[L_m(Y(m))^2\big] \leq \E \big[\rho^{N}\big] < \infty \, , \] by the $EIT(\theta)$ assumption on $\eta$. It follows that $L_m(Y(m))$ converges almost surely to $L(Y)$, which is finite $\nu$-almost everywhere, and hence that $\tnu$ is absolutely continuous with respect to $\nu$. Since $\tnu$ generates the point process $\Pi_0$ it follows that $\Pi_0$ is absolutely continuous with respect to $\Pi$. The result then follows by Proposition~\ref{p:equivalence}. \end{proof} \subsection{Deletion intolerance for small perturbations} In this section we show that if the perturbations are small enough then we have deletion intolerance.
Let $\gamma=(u_0,u_1,\ldots)$ denote a nearest neighbor path in $\Z^d$ with $u_0=0$ and let $\gamma_n = (u_0,\ldots,u_n)$. For an i.i.d.\ field $\{Y_u\}_{u\in \Z^d}$ let \[ M_{n,d} := \sup_{\gamma} \sum_{u\in \gamma_n} Y_u \] and \[ M_d := \limsup_{n\to\infty} \frac1n M_{n,d}. \] Since $M_d$ is not affected by changing a finite number of the $Y_u$, it is almost surely constant, depending only on the distribution of $Y_u$, so we will denote this constant by $M_d(Y)$. A simple union bound over paths implies that $M_d$ is finite when $Y$ is Gaussian, while Theorem~1 of~\cite{GanKes:94} implies that $M_d(Y)$ is finite provided that \begin{equation}\label{e:greedyCondition} \E Y^d \log^{d+\epsilon} Y < \infty. \end{equation} We have the following result when $M_d(|Y|_1)<\frac12$. \begin{lemma}\label{l:greedyIntolerant} Suppose that $Y_x$ has an absolutely continuous distribution with respect to Lebesgue measure on $\mathbb{R}^d$, that the $\ell^1$ norm $|Y_x|_1$ satisfies equation~\eqref{e:greedyCondition}, and that $M_d(|Y_x|_1)<\frac12$. Then the perturbed lattice with perturbations $\{Y_x\}_{x\in\mathbb{Z}^d}$ is $k$-deletion singular for all $k\geq 1$. \end{lemma} \begin{proof} We consider only the case $k=1$, the case of larger $k$ following essentially without change. With $\gamma=(u_0,u_1,\ldots)$ and $\gamma_n$ defined as above, for a countable set of points $A\subset \mathbb{R}^d$ define \[ f(A) = \inf_{\psi:\Z^d \to A} \sup_{\gamma} \limsup_n \frac1n \sum_{u\in \gamma_n} |\psi(u)-u|_1, \] where the supremum is over all paths $\gamma$ and the infimum is over all bijections from $\Z^d$ onto $A$. Taking $A=\Pi$ and $\psi(u) = u +Y_u$ we have that \[ f(\Pi) \leq \sup_{\gamma} \limsup_n \frac1n \sum_{u\in \gamma_n} |Y_u|_1 < \frac12, \] since $M_d(|Y_u|_1)<\frac12$. Now consider $f(\Pi_0)$. We define the bijection $W:\Pi \to \mathbb{Z}^d$ so that $y=W(y)+Y_{W(y)}$; this is uniquely defined almost surely since the $Y_x$ have distributions with no atoms. Given a bijection $\psi: \Z^d \to \Pi_0$, construct a path $\gamma$ as follows. Let $v_0=0$, set $v_{j+1}= W(\psi(v_j))$ for $j \geq 0$, and let $s_j = \sum_{k=1}^j |v_k - v_{k-1}|_1$. Suppose that $v_j=v_{j'}$ for some $j'>j$. Then \[ \psi(v_{j-1})= W^{-1} (v_{j}) = W^{-1} (v_{j'}) = \psi(v_{j'-1}) \] and so $v_{j-1}=v_{j'-1}$. Iterating, we have that $0=v_0=v_{j'-j}$, which is a contradiction since $v_{j'-j}\in W(\Pi_0)= \mathbb{Z}^d\setminus \{0\}$. In particular the $v_j$ are distinct. Let $\gamma$ be a nearest neighbor path constructed by sequentially joining the $v_i$ with shortest intermediate paths, that is, $\gamma=(u_0,\ldots)$ satisfies $u_0=v_0=0$ and $u_{s_j} = v_j$. Then, since $Y_{v_{k+1}} = W^{-1}(v_{k+1}) - v_{k+1} = \psi(v_k)- v_{k+1}$, we have that \begin{align*} \limsup_n \frac1n \sum_{u\in \gamma_n} |\psi(u)-u|_1 &\geq \limsup_j \frac1{s_j} \sum_{k=0}^j |\psi(v_k)-v_k|_1\\ & \geq \limsup_j \frac1{s_j} \sum_{k=0}^j \big(|v_{k+1}-v_k|_1 - |v_{k+1}-\psi(v_k)|_1\big)\\ &\geq \limsup_j \frac1{s_j} \left( s_j - \sum_{k=0}^j |Y_{v_{k+1}}|_1 \right)\\ &\geq 1 - \limsup_n \frac1n \sum_{u\in \gamma_n} |Y_u|_1 > \frac12. \end{align*} Since almost surely $f(\Pi) < \frac12$ and $f(\Pi_0)>\frac12$ we have that the two measures are mutually singular.
\end{proof} \subsection{Proof of Theorem~\ref{t:gaussian}} \begin{proof} If $g(x)=\frac1{\sqrt{2\pi}\sigma} e^{-x^2/(2 \sigma^2)}$ is a one-dimensional Gaussian $N(0,\sigma^2)$ density then \begin{align}\label{e:GaussianRNration} \int_{\mathbbm{R}} \left(\frac{g(x+1)}{g(x)} \right)^2 g(x) dx & = \int_{\mathbbm{R}} \left(\frac{\exp[-(x+1)^2/(2\sigma^2)]}{\exp[-x^2/(2\sigma^2)]} \right)^2 g(x) dx\\ \nonumber & = \int_{\mathbbm{R}} \exp[-(2x+1)/\sigma^2] g(x) dx\\ \nonumber & = \exp[1/\sigma^2]. \end{align} As the $d$-dimensional Gaussian measure with density $g_d(x)$ is a product measure, when calculating~\eqref{e:muIcondition} the contributions to the product from the coordinates other than the $i$-th equal one, and the integral reduces to \begin{equation*} \int_{\mathbbm{R}^d} \left(\frac{g_d(x+e_i)}{g_d(x)} \right)^2 g_d(x) d x = \int_{\mathbbm{R}} \left(\frac{g(x+1)}{g(x)} \right)^2 g(x) d x = \exp[1/\sigma^2]. \end{equation*} It follows from Proposition~\ref{p:walkTolerant}, together with Proposition~\ref{p:equivalence}, that for sufficiently large $\sigma$ the process $\Pi$ is deletion and insertion tolerant and mutually absolutely continuous with $\Pi_0$. We now consider the case when $\sigma$ is small. By scaling, the greedy lattice animal with weights $|Y_x|_1$ has a finite limiting value, with $M_d(|Y_x|_1)$ proportional to $\sigma$. It follows by Lemma~\ref{l:greedyIntolerant} that $\Pi$ is deletion singular for sufficiently small $\sigma>0$. The existence of a critical value $\sigma_c(d)$ follows from the observation that increasing $\sigma$ is equivalent to letting a semigroup act on $\Pi$, shifting the points according to independent Brownian motions. If $\Pi$ and $\Pi_0$ are not singular for some value of $\sigma$ then they can be coupled so as to coincide with positive probability, and hence they can be so coupled for all larger values of $\sigma$ as well. Hence, by Proposition~\ref{p:equivalence}, there must be a critical $\sigma_c(d)$ with deletion tolerance for $\sigma>\sigma_c(d)$ and deletion singularity for $\sigma<\sigma_c(d)$. We similarly have that for each $k$ there exists a threshold $\sigma_c(k,d)$ with $k$-deletion tolerance above $\sigma_c(k,d)$ and $k$-deletion singularity below. Letting $\sigma_r(d)=\inf_k \sigma_c(k,d)$, by Proposition~\ref{p:k-equivalence}, when $\sigma>\sigma_r$ there is some $k$ for which $\Pi$ is not $k$-deletion singular, and hence $\Pi$ is not rigid by Proposition~\ref{p:rigid-k-tolerant}. Conversely, if $\sigma < \sigma_r$, then $\Pi$ is $k$-deletion singular for all $k$ and hence rigid by Propositions~\ref{p:k-equivalence} and~\ref{p:rigid-k-tolerant}. It follows from Proposition~\ref{p:walkTolerant} and Lemma~\ref{l:greedyIntolerant} that $0<\sigma_r(d)<\infty$; this completes the proof. \end{proof} \section{General Perturbations}\label{s:generalPerturbation} In this section we consider more general perturbations and analyze the effect of tails on deletion tolerance. In particular, we exhibit a transition occurring at a power-law decay of exponent $-2d$. \begin{theorem}\label{t:general} Let $\Pi$ be the perturbed lattice with perturbations $Y_x$ with density $g(y)$. \begin{itemize} \item If $\alpha< 2 d$ and \begin{equation}\label{e:heavyTailCond} \inf_{x\in \mathbb{R}^d} \frac{g(x)}{1\wedge |x|^{-\alpha}} > 0 \, , \end{equation} then the perturbed lattice $\Pi$ is $k$-deletion and $k$-insertion tolerant for all $k$ and mutually absolutely continuous with $\Pi_S$ for any finite set $S\subset\mathbb{Z}^d$.
\item If $\alpha > 2 d$ and \begin{equation}\label{e:lightTailCond} \sup_{x\in \mathbb{R}^d} \frac{g(x)}{1\wedge |x|^{-\alpha}} < \infty \, , \end{equation} then there exists $\epsilon>0$ such that the perturbed lattice with perturbations $\epsilon' Y_x$ is $k$-deletion singular for all $0<\epsilon'<\epsilon$ and all $k$. This result also holds under the condition that $\E|Y_x|^{\alpha-d}<\infty$. \end{itemize} \end{theorem} \begin{proof} We first establish the second half of the theorem, when the tails are light. Both the assumption on the density in equation~\eqref{e:lightTailCond} and the assumption $\E|Y_x|^{\alpha-d}<\infty$ imply that equation~\eqref{e:greedyCondition} holds for $|Y_x|_1$, so for small enough $\epsilon>0$ the limiting constant $M_d$ defined before Lemma~\ref{l:greedyIntolerant} satisfies $M_d(|\epsilon' Y_x|_1)<\frac12$ for all $0<\epsilon'<\epsilon$. Applying Lemma~\ref{l:greedyIntolerant} then establishes that the perturbed lattice with perturbations $\epsilon' Y_x$ is $k$-deletion singular, completing the proof of this part. The remainder of the proof is devoted to establishing the first half of Theorem~\ref{t:general}. We will prove the claim in the case $k=1$, with the extension to larger $k$ following similarly. Let $B_r(0)$ be the Euclidean ball of radius $r$ around the origin. Define a partition of $\mathbb{R}^d$ into subsets $\{H_i\}_{i\geq 1}$ by $H_1=B_2(0)$, and $H_i=B_{2^i}(0)\setminus B_{2^{i-1}}(0)$ for $i\geq 2$. Given that equation~\eqref{e:heavyTailCond} holds, the density $g$ is everywhere positive. Since the perturbations are independent, it is sufficient to show that $\Pi_0$ can be coupled with positive probability to $\hpi$, the perturbed point process identical to $\Pi$ except that the perturbation of $0$ is taken according to the uniform distribution on $H_1 \cup H_2$ instead of as $Y_0$ (note that, since $g$ is positive on $H_1\cup H_2$, the law of $\hpi$ is absolutely continuous with respect to that of $\Pi$). We denote these perturbations as $\hat{Y}_x$ and will construct a coupling so that $\P(\hpi=\Pi_0)>0$. By construction there exist constants $0<c_1<c_2$ such that \begin{equation}\label{e:HiSize} c_1 2^{id}\leq \left|H_i\cap(\mathbb{Z}^d\setminus\{0\})\right| \leq c_2 2^{id},\qquad c_1 2^{id}\leq \left|H_i\right| \leq c_2 2^{id} \, . \end{equation} By equation~\eqref{e:heavyTailCond} we then have that for some $c_3>0$ and all $i\geq 1$, \[ \inf_{x\in H_i,y\in H_i\cup H_{i+1}} g(y-x) \geq c_3 2^{-\alpha i} \, . \] It follows that, with $c_4=c_1c_3$, for all $i$ and $x\in H_i\cap(\mathbb{Z}^d\setminus\{0\})$ we can decompose the law of $x +Y_x$ into a mixture of the uniform distribution on $H_i\cup H_{i+1}$, with probability $p_i = c_4 2^{i(d-\alpha)}$, and another probability measure $\mu_x$, with probability $1-p_i$. The first step of our coupling is to construct independent Bernoulli random variables $\{\zeta_x\}_{x\in\mathbb{Z}^d\setminus\{0\}}$ where $\P(\zeta_x=1)=p_i$ when $x\in H_i$. When $\zeta_x=0$ we choose $x+Y_x$ according to $\mu_x$ and set $\hat{Y}_x=Y_x$, so it remains to couple the vertices with $\zeta_x=1$, whose associated points are distributed uniformly on $H_i\cup H_{i+1}$. Let $\mathcal{Z}= \{Z_i\}_i$ where $Z_i$ denotes the number of $x\in H_i\cap(\mathbb{Z}^d\setminus\{0\})$ with $\zeta_x=1$. Accounting for the fact that $0+\hat{Y}_0$ is uniform on $H_1\cup H_2$, set $\hat{Z}_1 = 1+Z_1$ and $\hat{Z}_i = Z_i$ for $i\geq 2$. In summary, the remaining, not yet coupled, points in $\Pi_0$ (respectively $\hpi$) consist of $Z_i$ (resp.\ $\hat{Z}_i$) independent points uniform in $H_i\cup H_{i+1}$, for each $i\geq 1$.
Now sampling according to the uniform distribution on $H_i\cup H_{i+1}$ is equivalent to first selecting $H_i$ or $H_{i+1}$ with probability proportional to their Lebesgue measure and then sampling the selected region uniformly. So set $r_i=\frac{|H_i|}{|H_i\cup H_{i+1}|}$ and note that the $r_i$ are uniformly bounded away from $0$ and $1$. Hence let $W_i$ denote the number of the $Z_i$ points associated with $\Pi_0$ which fall in $H_i$, so that, conditionally on $\mathcal{Z}$, the $W_i$ are independent binomial random variables $B(Z_i,r_i)$, and set $\mathcal{U}= \{U_i\}_i$, where $U_1=W_1$ and $U_i=W_i+Z_{i-1}-W_{i-1}$ for $i\geq 2$. Define $\hat{W}_i$, $\hat{U}_i$ and $\hat{\mathcal{U}}=\{\hat{U}_i\}_i$ similarly, using the $\hat{Z}_i$ points associated with $\hpi$. With these definitions the remaining not yet coupled points in $\Pi_0$ (respectively $\hpi$) are $U_i$ (resp.\ $\hat{U}_i$) points independent and uniform in $H_i$ for each $i\geq 1$. So our procedure for coupling the remaining points is as follows. Given~$\mathcal{Z}$, we take the coupling maximizing the probability that $\mathcal{U} \equiv \hat{\mathcal{U}}$. Conditional on this event the remaining points in $\Pi_0$ and $\hat{\Pi}$ have the same law, namely the union of $U_i$ independent uniformly chosen points in $H_i$ for each $i \geq 1$. Thus on the event $\mathcal{U} \equiv \hat{\mathcal{U}}$ we can couple $\Pi_0$ and $\hat{\Pi}$, and hence, to show deletion tolerance, it remains to establish that $\mathcal{U}$ and $\hat{\mathcal{U}}$ can be coupled so as to agree with positive probability. \begin{claim} With the definitions above, $\P(\mathcal{U} \equiv \hat{\mathcal{U}})>0$. \end{claim} With $c_5=\tfrac12 c_1 c_4>0$ denote by $\mathcal{E}$ the event that \begin{equation}\label{e:ZiCond} Z_i \geq 1\vee c_5 2^{i(2d-\alpha)}, \qquad i \geq 1 \, . \end{equation} We will show that $\P(\mathcal{U} \equiv \hat{\mathcal{U}}\mid \mathcal{Z},\mathcal{E})>0$. First, we will check that $\P(\mathcal{E})>0$. By construction the $Z_i$ are independent, with $Z_i$ distributed as $B\big(|H_i\cap(\mathbb{Z}^d\setminus\{0\})|,p_i\big)$, and so by equation~\eqref{e:HiSize} we have that $\E Z_i \geq c_1 c_4 2^{i(2d-\alpha)}$. Hence, with our choice of $c_5$, by the Azuma--Hoeffding inequality, \[ \P(Z_i \leq c_5 2^{i(2d-\alpha)}) \leq c_6\exp[-c_7 2^{i(2d-\alpha)}] \, , \] for large $i$. Given this (better than) exponential decay, and as $\P(Z_i \geq 1\vee c_5 2^{i(2d-\alpha)})>0$ for all $i$, it follows that $\P(\mathcal{E})>0$. Now for $\mathcal{U} \equiv \hat{\mathcal{U}}$ we must have $W_1=\hat{W}_1$ and $W_i=1+\hat{W}_i$ for all $i\geq 2$. The optimal coupling is at least as good as taking the optimal coupling independently for each $i$ so we have that \begin{align} \P(\mathcal{U} \equiv \hat{\mathcal{U}}\mid \mathcal{Z}) \geq (1-d_{\mathrm{TV}}(W_1,\hat{W}_1\mid \mathcal{Z}))\prod_{i=2}^\infty (1-d_{\mathrm{TV}}(W_i,\hat{W}_i+1\mid \mathcal{Z})) \end{align} where $d_{\mathrm{TV}}(\cdot,\cdot\mid \mathcal{Z})$ denotes the total variation distance given $\mathcal{Z}$. Since $d_{\mathrm{TV}}(W_1,\hat{W}_1\mid \mathcal{Z})<1$ and $d_{\mathrm{TV}}(W_i,\hat{W}_i+1\mid \mathcal{Z},\mathcal{E})<1$ for all $i\geq 2$ it is sufficient to show that \begin{align}\label{e:WTVBound} \sum_{i=2}^\infty d_{\mathrm{TV}}(W_i,\hat{W}_i+1\mid \mathcal{Z},\mathcal{E}) < \infty. \end{align} Hence we estimate $d_{\mathrm{TV}}(B(n,p),B(n,p)+1)$.
When $p \leq \frac12$ and $|j-np|\leq (np)^{3/4}$ then \begin{align}\label{e:binomialRatio} &\frac{\left|\P(B(n,p)=j)-\P(B(n,p)=j-1)\right|}{\P(B(n,p)=j)} \nonumber\\ &\qquad= \frac{\left|{n \choose j}p^j(1-p)^{n-j} - {n \choose j-1}p^{j-1}(1-p)^{n-j+1}\right|}{{n \choose j}p^j(1-p)^{n-j}}\nonumber\\ &\qquad= \left|1-\frac{j }{np}\cdot \frac{n(1-p)}{(n-j+1)}\right|\nonumber\\ &\qquad= \left|1-\left(1 + \frac{j -np }{np}\right)\left( 1 - \frac{j - np - 1}{n(1-p)}\right)^{-1}\right|\nonumber\\ &\qquad \leq c_8 (np)^{-1/4} \end{align} provided $np$ and $n(1-p)$ are sufficiently large. It follows that \begin{align}\label{e:binomialTV} d_{\mathrm{TV}}(B(n,p),B(n,p)+1) &= \frac12\sum_{j=0}^{n+1} \left|\P(B(n,p)=j)-\P(B(n,p)=j-1)\right| \nonumber\\ &\leq \P(|B(n,p)-np|\geq (np)^{3/4})\nonumber\\ &\quad + c_8(np)^{-1/4}\sum_{j=np-(np)^{3/4}}^{np+(np)^{3/4}}\P(B(n,p)=j) \nonumber\\ &\leq 2\exp\left(-\frac{(np)^{3/2}}{n}\right)+ c_8 (np)^{-1/4} \end{align} where the first inequality is by equation~\eqref{e:binomialRatio} and the second is by Azuma's inequality. Since the $r_i$ are uniformly bounded away from $0$ and $1$, \begin{align}\label{e:WTVBound2} d_{\mathrm{TV}}(W_i,\hat{W}_i+1\mid \mathcal{Z}) &= d_{\mathrm{TV}}(B(Z_i,r_i),B(Z_i,r_i)+1\mid \mathcal{Z})\nonumber\\ & \leq 2\exp\left(-\frac{(Z_i r_i)^{3/2}}{Z_i}\right)+ c_8 (Z_i r_i)^{-1/4}. \end{align} Substituting~\eqref{e:WTVBound2} into equation~\eqref{e:WTVBound} we have that \begin{align*} &\sum_{i=2}^\infty d_{\mathrm{TV}}(W_i,\hat{W}_i+1\mid \mathcal{Z},\mathcal{E})\\ & \ \ \leq \sum_{i=2}^\infty 2\exp\left(-(1\vee c_5 2^{i(2d-\alpha)})^{1/2} r_i^{3/2}\right)+ c_8 ((1\vee c_5 2^{i(2d-\alpha)}) r_i)^{-1/4} < \infty, \end{align*} which establishes that $\P(\mathcal{U} \equiv \hat{\mathcal{U}}\mid \mathcal{Z},\mathcal{E})>0$, completing the claim. Thus the claim ensures we can couple $\mathcal{U}$ and $\hat{\mathcal{U}}$ with positive probability, which completes the coupling of $\Pi_0$ and $\hpi$ and proves that $\Pi$ and $\Pi_0$ are not mutually singular. Then the deletion and insertion tolerance of $\Pi$ and its mutual absolute continuity with $\Pi_0$ follow by Proposition~\ref{p:equivalence}. \end{proof} \section{Proof of Theorem~\ref{t:stable}} The proof of Theorem~\ref{t:stable} follows essentially from results already established. When $\alpha>1$ the perturbations have finite first moment and the deletion intolerance result of~\cite{HolSoo:10} applies. When $\alpha<1$ the perturbations satisfy~\eqref{e:heavyTailCond} and so deletion and insertion tolerance follow from Theorem~\ref{t:general}. The sole remaining case is to show that $\Pi$ is deletion singular when $\alpha=1$ (Cauchy perturbations), which is verified as a special case in the following subsection. \subsection{Cauchy perturbations} \begin{lemma} If $d=1$ and $\{Y_x\}$ are i.i.d.\ Cauchy distributed, then the perturbed lattice is $k$-deletion singular for all $k$. \end{lemma} \begin{proof} Our proof follows the approach of~\cite{HolSoo:10}. Let $S\subset\mathbb{Z}$ be a finite set and let $\Phi_{m,x} = \max\{m-|x+Y_x|,0\}$. We define \[ \Psi_m(\Pi) = \frac1m\int_{-m}^m (m-|z|) \Pi(dz) = \frac1m\sum_{x\in\mathbb{Z}}\Phi_{m,x}. \] We similarly have \[ \Psi_m(\Pi_S) = \frac1m\int_{-m}^m (m-|z|) \Pi_S(dz) = \frac1m\sum_{x\in\mathbb{Z}\setminus S}\Phi_{m,x}, \] and so \begin{equation}\label{e:PsiDiff} \Psi_m(\Pi) - \Psi_m(\Pi_S) = \frac1m \sum_{x\in S}\Phi_{m,x}\to |S| \end{equation} as $m\rightarrow\infty$ almost surely.
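Indeed, for each fixed $x\in S$ we have $|x+Y_x|<\infty$ almost surely, so that \[ \frac{\Phi_{m,x}}{m}=\max\Big\{1-\frac{|x+Y_x|}{m},\,0\Big\}\longrightarrow 1 \qquad \text{as } m\rightarrow\infty, \] which justifies the almost sure convergence in \eqref{e:PsiDiff}.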
We next consider the variance of $\Psi_m(\Pi)$. For every $x$ we have the bound \begin{equation}\label{e:localVarContribution} \var(\Phi_{m,x}) \leq \E\big[(\Phi_{m,x} - \max\{m-|x|,0\})^2\big] \leq \E\big[(|Y_x| \wedge m)^2\big] \leq C m. \end{equation} If $|x|> 2m$ then, since $|\Phi_{m,x}| \leq m$ and since the density of the Cauchy decays like $c y^{-2}$, \begin{align}\label{e:distantVarContribution} \var(\Phi_{m,x}) &\leq \E[\Phi_{m,x}^2] \leq m^2 \P(Y_x \in [-m-x,m-x]) \nonumber\\ &\leq C m^2 \int_{-m-x}^{m-x} y^{-2} dy \leq C' m^3 |x|^{-2}. \end{align} Since the $\Phi_{m,x}$ are independent (over $x$), combining equations~\eqref{e:localVarContribution} and~\eqref{e:distantVarContribution} we have that \begin{equation}\label{e:PhiVarBound} \var \Psi_m(\Pi) \leq \frac1{m^2}\Bigg[ \sum_{x=-2m}^{2m} C m + \sum_{|x|>2m} C' m^3 |x|^{-2} \Bigg] = O(1). \end{equation} Now for $m'>m$ we bound the covariance of $\Psi_m(\Pi)$ and $\Psi_{m'}(\Pi)$ as \begin{align*} \Cov(\Psi_{m}(\Pi),\Psi_{m'}(\Pi))&= \frac1{m m'} \sum_x \Cov(\Phi_{m,x},\Phi_{m',x}) \\ &\leq \frac1{m m'} \sum_x \sqrt{\var(\Phi_{m,x})\var(\Phi_{m',x})} \\ &\leq \frac1{m m'}\Bigg[ \sum_{x=-2m}^{2m} \sqrt{C^2 m m'} + \sum_{2m<|x|\leq 2m'} \sqrt{C C' m^3 m' |x|^{-2}}\\ &\qquad + \sum_{|x|>2m'} C' m^{3/2} m'^{3/2}|x|^{-2}\Bigg]\\ &\leq (m/m')^{1/2} \left[C_1 + C_2 \log(m'/m) +C_3\right]\\ &\leq C_4 (m/m')^{1/2}\log(m'/m). \end{align*} Then if we take $m_\ell=e^{2\ell^2}$ we have that $\Cov(\Psi_{m_\ell}(\Pi),\Psi_{m_{\ell'}}(\Pi)) = O(e^{-(\ell\vee \ell')})$ when $\ell\neq \ell'$ and hence \[ \var \left[\frac1n \sum_{\ell=1}^n \Psi_{m_\ell}(\Pi)\right] = o(1). \] So we have that $\frac1n \sum_{i=1}^n \big(\Psi_{m_i}(\Pi)-\E\Psi_{m_i}(\Pi)\big)$ converges to $0$ in probability, while by~\eqref{e:PsiDiff} we have that $\frac1n \sum_{i=1}^n \big(\Psi_{m_i}(\Pi_S)-\E\Psi_{m_i}(\Pi)\big)$ converges to $-|S|$ in probability. It follows that $\Pi$ and $\Pi_S$ are mutually singular, and so by Proposition~\ref{p:k-equivalence} $\Pi$ is $k$-deletion singular for all $k$. \end{proof} \section{Absolute Continuity} In this section we prove the equivalences between the various notions of deletion and insertion tolerance and deletion and insertion singularity. \begin{proof}[Proof of Proposition~\ref{p:equivalence}] Let $\Q$ denote the law of $\Pi$ and, for a finite set $S\subset \Z^d$, denote $\Pi_S=\{x+Y_x:x\in \Z^d \setminus S\}$ and its law by $\Q_S$. \emph{\eqref{it:mutAbs} $\Longleftrightarrow$ \eqref{it:delSingOld}}. If $\Pi$ and $\Pi_0$ are mutually singular then clearly $\Pi$ and $\Pi_0$ are not mutually absolutely continuous. Now assume that $\Pi$ and $\Pi_0$ are not mutually singular but that $\Q_0$ is not absolutely continuous with respect to $\Q$. Then we can find a measurable set $A$ such that $\Q(A)=1$, $0<\Q_0(A)<1$, and, on $A$, $\Q_0$ is absolutely continuous with respect to $\Q$ with Radon--Nikodym derivative $\kappa=\frac{d \Q_0}{d \Q}$. We will show that $\{\Pi_0 \in A\}$ is a tail event for the $\{Y_x\}$. For a finite set $S\subset \Z^d\setminus\{0\}$ define the set \[ B=B_S := \{b:\Q_0(A\mid \Pi_{S\cup\{0\}} = b) \in (0,1) \}. \] Suppose that $\P[\Pi_{S\cup\{0\}} \in B]>0$. Defining the sub-probability measure \[ \tilde{\Q}_0(E):=\P[\Pi_0\in E\cap A, \Pi_{S\cup\{0\}} \in B], \] we have that \[ \tilde{\Q}_0(A)=\P[\Pi_0\in A, \Pi_{S\cup\{0\}} \in B]=\int_B \Q_0(A\mid \Pi_{S\cup\{0\}}=b) d\Q_{S\cup\{0\}}(b)>0.
\] Since $\tilde{\Q}_0$ is dominated by the restriction of $\Q_0$ to $A$, it is absolutely continuous with respect to $\Q$, and, being non-zero, it is not mutually singular with $\Q$. Hence there exists a coupling of $(Y_x,\Pi)$ and an identically distributed copy $(Y_x^\star,\Pi^\star)$ such that \[ \P[ \Pi=\Pi_0^\star, \Pi_{S\cup\{0\}}^\star \in B] > 0. \] Since the points $\{x+Y_x^\star:x\in S\}$ must coincide with points of $\Pi$ when the point processes are equal, and as there are only countably many choices of $\hat{S}\subset \Z^d$ with $|\hat{S}|=|S|$, we have that for some $\hat{S}$, \[ \P[ \Pi_{\hat{S}}=\Pi_{S\cup\{0\}}^\star, \Pi_{S\cup\{0\}}^\star \in B] > 0. \] As the $Y_x$ have a positive density everywhere, the laws of the configurations $\{x+Y_x^\star:x\in S\}$ and $\{x+Y_x:x\in \hat{S}\}$ are mutually absolutely continuous, and hence the conditional distributions $\Q_0(\cdot \mid \Pi_{S\cup\{0\}} = b)$ and $\Q(\cdot \mid \Pi_{\hat{S}} = b)$ are also mutually absolutely continuous. Then by definition of $B$ we have that for all $b\in B$, $\Q(A \mid \Pi_{\hat{S}} = b) < 1$ and hence \begin{align*} \Q(A^c)\geq \P[\Pi_{\hat{S}} \in B, \Pi \not \in A] & = \int_B \Q(A^c \mid \Pi_{\hat{S}}=b) d\Q_{\hat{S}}(b)>0. \end{align*} But $\Q(A^c)=0$, so we have a contradiction and hence $\P[\P(\Pi_0 \in A\mid \Pi_{S\cup\{0\}}) \in (0,1)]=0$ for all finite $S$. This implies that $\{\Pi_0 \in A\}$ is a tail event, and so by the Kolmogorov zero-one law we have that $\P[\Pi_0 \in A]=1$ since $\Q_0(A)>0$. This contradicts our assumption that $\Q_0(A)<1$, so we have that $\Q_0$ is absolutely continuous with respect to $\Q$. That $\Q$ is absolutely continuous with respect to $\Q_0$ follows similarly, so the laws are mutually absolutely continuous. \emph{\eqref{it:delTol} $\Longleftrightarrow$ \eqref{it:delSing} $\Longleftrightarrow$ \eqref{it:delSingOld}}. Suppose $\Q$ and $\Q_0$ are mutually singular. If $Z$ is a $\Pi$-point then, by an abuse of notation, let $\Pi_Z$ denote $\Pi \setminus Z$. Let $X\in\Z^d$ be the random lattice point such that $X+Y_X=Z$, so that $\Pi_Z=\Pi_X$. Since, by translation invariance, each $\Pi_x$ is mutually singular with $\Pi$, and $X$ takes values in the countable set $\Z^d$, the same holds for $\Pi_X$. Hence $\Pi_Z$ is mutually singular with $\Pi$, so $\Pi$ is deletion singular and hence also deletion intolerant. Conversely, suppose that $\Q$ and $\Q_0$ are not mutually singular, so, by the equivalence just established, they must be mutually absolutely continuous. Then for any measurable set $A$, if $\P[\Pi\in A]=0$ then $\P[\Pi_x \in A]=0$, whence $\P[\Pi_Z\in A] \leq \sum_{x\in\Z^d} \P[\Pi_x \in A] =0$. Thus $\Pi_Z$ is absolutely continuous with respect to $\Pi$, so $\Pi$ is deletion tolerant and not deletion singular. \emph{\eqref{it:insTol} $\Longleftrightarrow$ \eqref{it:insSing} $\Longleftrightarrow$ \eqref{it:delSingOld}}. Suppose $\Q$ and $\Q_0$ are mutually singular. Let $V \subset \mathbb{R}^d$ be a Borel set with Lebesgue measure $\mathcal{L}(V)\in(0,\infty)$ and $U$ a random variable independent of $\{Y_x\}_{x\in\Z^d}$ uniform on $V$. Suppose that $\Pi \cup U$ is not mutually singular with respect to $\Pi$. Then there exists an identically distributed copy $(U^\star, Y_x^\star,\Pi^\star)$ and a coupling such that \[ \P[ \Pi^\star \cup U^\star = \Pi] > 0. \] On the event that they agree, let $X$ denote the random lattice point such that $X+Y_X=U^\star$. For some $x$ we have $\P[X=x, \Pi^\star \cup U^\star = \Pi] > 0$ and hence $\P[\Pi^\star = \Pi_x] > 0$. But $\Q$ and $\Q_x$ are mutually singular, which is a contradiction, so $\Pi \cup U$ is mutually singular with respect to $\Pi$; hence $\Pi$ is insertion singular and, in particular, insertion intolerant.
Conversely, if $\Q$ and $\Q_0$ are mutually absolutely continuous then $(U,\Pi)$ is absolutely continuous with respect to $(Y_0,\Pi_0)$, since $U$ is independent of $\Pi$ and $Y_0$ has a positive density everywhere. It follows that $\Pi\cup U$ is absolutely continuous with respect to $\Pi_0 \cup \{Y_0\} = \Pi$, so $\Pi$ is insertion tolerant and hence not insertion singular. \end{proof} \section{Rigidity} We begin with the following proposition, relating the $k$-deletion versions of these notions, which follows by a minor modification of the proof of Proposition~\ref{p:equivalence}. \begin{proposition}\label{p:k-equivalence} If the distribution of $Y_x$ has a density which is everywhere positive and $S\subset \mathbb{Z}^d$ is of size $k$ then the following are equivalent \begin{enumerate} \item The perturbed lattice is $k$-deletion tolerant. \item The perturbed lattice is $k$-insertion tolerant. \item The perturbed lattice is not $k$-deletion singular. \item The perturbed lattice is not $k$-insertion singular. \item The measures $\Pi$ and $\Pi_S$ are mutually absolutely continuous. \item The measures $\Pi$ and $\Pi_S$ are not mutually singular. \end{enumerate} \end{proposition} The proof of Proposition~\ref{p:k-equivalence} is the same as that of Proposition~\ref{p:equivalence}, with the minor alteration of adding or removing $k$ points instead of one. Finally we prove Proposition~\ref{p:rigid-k-tolerant} relating rigidity and deletion tolerance. \begin{proof}[Proof of Proposition~\ref{p:rigid-k-tolerant}] Suppose first that there exists some finite $S$ such that $\Pi_S$ is not singular with respect to $\Pi$, but that $\Pi$ is rigid. Then $N(\Pi_{\hbox{out}})= | \Pi_{\hbox{in}}| \ a.s.$, but also $N((\Pi_S)_{\hbox{out}})= | (\Pi_S)_{\hbox{in}}| \ a.s.$, since $\Pi$ is mutually absolutely continuous with respect to $\Pi_S$ by Proposition~\ref{p:k-equivalence}. However, on the event $A=\{\forall x\in S: \ x+Y_x\in B_1(0)\}$ (taking $B=B_1(0)$ above) we have by definition $\Pi_{\hbox{out}}=(\Pi_S)_{\hbox{out}}$ but $|\Pi_{\hbox{in}}|=|(\Pi_S)_{\hbox{in}}|+|S|$. Since $\P[A]>0$ this is a contradiction, and so $\Pi$ is not rigid. Now suppose that $\Pi$ is not rigid and fix some ball $B$ for which it fails. Let $\psi(\Pi_{\hbox{out}},j)=\P[|\Pi_{\hbox{in}}|=j\mid \Pi_{\hbox{out}}]$. Since $\Pi$ is not rigid it follows that \[ \P\left[ \psi(\Pi_{\hbox{out}},|\Pi_{\hbox{in}}|) < 1 \right] >0, \] that is, with positive probability the conditional law $\psi(\Pi_{\hbox{out}},\cdot)$ is not concentrated on a single value. On this event, with positive conditional probability $|\Pi_{\hbox{in}}|$ is not the smallest value charged by $\psi(\Pi_{\hbox{out}},\cdot)$, and hence \[ \P\left[ \sum_{j<|\Pi_{\hbox{in}}|} \psi(\Pi_{\hbox{out}},j) >0 \right] >0. \] In particular for some positive integer $k$ we have that \[ \P\left[ \psi(\Pi_{\hbox{out}},|\Pi_{\hbox{in}}|-k) >0 \right] >0. \] Thus we can construct a copy $\Pi'$ of $\Pi$, with $\Pi'_{\hbox{out}}=\Pi_{\hbox{out}}$ and with $\Pi'_{\hbox{in}}$ resampled from the conditional law given $\Pi_{\hbox{out}}$ independently of $\Pi_{\hbox{in}}$, such that \[ \P[\Pi_{\hbox{out}}'=\Pi_{\hbox{out}},|\Pi'_{\hbox{in}}|+k = |\Pi_{\hbox{in}}|]>0. \] Now, since there are a countable number of finite subsets of $\mathbb{Z}^d$, we can find sets $S,S'\subset \mathbb{Z}^d$ with $|S|=|S'|+k$ such that \[ \P[\Pi_{\hbox{out}}'=\Pi_{\hbox{out}},\{x+Y'_x:x\in S'\}=\Pi'_{\hbox{in}},\{x+Y_x:x\in S\}=\Pi_{\hbox{in}}]>0 \] and so by removing these points \begin{equation}\label{e:set-removed-coupling} \P[\Pi'_{S'}=\Pi_{S}]>0. \end{equation} Let $S^*\subset S$ with $|S^*|=k$ and let $U_1,\ldots,U_{|S'|}$ be i.i.d.\ standard $d$-dimensional Gaussians.
Then, since each $U_i$ is mutually absolutely continuous with respect to $x+Y_x$ for any $x$, the process $\Pi_{S}\cup\{U_1,\ldots,U_{|S'|}\}$ is mutually absolutely continuous with respect to $\Pi_{S^*}$, and $\Pi'_{S'}\cup\{U_1,\ldots,U_{|S'|}\}$ is mutually absolutely continuous with respect to $\Pi'$, and hence with respect to $\Pi$. Combining this with~\eqref{e:set-removed-coupling} implies that $\Pi$ and $\Pi_{S^*}$ are not mutually singular, which completes the proof. \end{proof} \section{Deletion singularity without rigidity}\label{s:twoButNotOne} In this section we prove Theorem~\ref{t:twoButNotOne}. \begin{proof} First we show that $\hat{\Pi}$ is $2$-deletion tolerant if $\sigma^2-\delta^2>\sigma^2_c$. By Theorem~\ref{t:gaussian} we have that $\Pi=\{x+Y_x:x\in \mathbb{Z}^d\}$ is deletion tolerant. We can construct $\hat{\Pi}$ from $\Pi$ by replacing each point $z\in\Pi$ with the two points $z+G_{z,1}$ and $z+G_{z,2}$, for independent $N_d(0,\delta^2 I)$ Gaussians $G_{z,1}$ and $G_{z,2}$. Since $\Pi$ and $\Pi_0$ are mutually absolutely continuous by Proposition~\ref{p:equivalence}, and $\hat{\Pi}$ and $\hat{\Pi}_0$ are obtained from them by applying the same independent randomization, we have that $\hat{\Pi}$ and $\hat{\Pi}_0=\{x+\hat{Y}_{x,i}:(x,i)\in (\mathbb{Z}^d\setminus\{0\})\times \{1,2\}\}$ are mutually absolutely continuous. Arguing as in the proof of Proposition~\ref{p:equivalence}, it follows that $\hat{\Pi}$ is $2$-deletion tolerant. To prove that $\hat{\Pi}$ is deletion singular we again argue by contradiction from a coupling, as in the proof of Lemma~\ref{l:greedyIntolerant}. Let $\mathcal{V}_n$ be the set of all pairs of sequences $V_n=\Big((v_0,\ldots,v_n),(v^\star_0,\ldots,v^\star_n)\Big)$ taking values in $\mathbb{Z}^d\times\{1,2\}$ such that the first coordinates $v_0(1),\ldots,v_n(1)$ are distinct, as are $v^\star_0(1),\ldots,v^\star_n(1)$, and such that $v^\star_0=(0,1)$. Let \[ L_n=L_n(V_n) = \sum_{i=0}^n |v^\star_i(1) - v_{i}(1)|_1 + \sum_{i=0}^{n-1} |v_i(1) - v^\star_{i+1}(1)|_1. \] Note that since the $v_i(1)$ are distinct we have that $L_n \geq n$. We define the following collection of events, for a constant $C_1>0$ to be fixed later. \begin{itemize} \item Let $\mathcal{I}_n(V_n)$ (respectively $\mathcal{I}^\star_n(V_n)$) be the event that \[ \sum_{i=0}^n \sum_{j=1}^2 |\hat{Y}_{v_i(1),j}|_1 \geq \frac12 L_n, \quad \hbox{resp. } \ \sum_{i=0}^n \sum_{j=1}^2 |\hat{Y}^\star_{v^\star_i(1),j}|_1 \geq \frac12 L_n. \] \item Let $\mathcal{J}_n(V_n)$ be the event that \begin{align*} &\sum_{i=0}^{n-1} I(|\hat{Y}_{v_i(1),1}-\hat{Y}_{v_i(1),2}|_1 \geq C_1) \geq \frac12 n \end{align*} where $I(\cdot)$ denotes the indicator function. \item Let $\mathcal{J}^\star_n(V_n)$ be the event that \[ \sum_{i=0}^{n-1} I(|(v^\star_{i}(1) + \hat{Y}^\star_{v^\star_{i}(1),3-v^\star_{i}(2)})-(v^\star_{i+1}(1) + \hat{Y}^\star_{v^\star_{i+1}(1),v^\star_{i+1}(2)})|_1 \leq C_1) \geq \frac12 n. \] \end{itemize} By standard large deviation estimates, since the $\hat{Y}_{v_i(1),j}$ are $N_d(0,\sigma^2 I)$, for a sufficiently large constant $C_2=C_2(d,\sigma)$ we have, whenever $L_n \geq C_2 n$, \begin{equation}\label{e:InBound} \P[\mathcal{I}_n(V_n)] \leq (4d)^{-L_n}, \end{equation} and similarly for $\mathcal{I}^\star_n(V_n)$. Let $\mathcal{F}_i$ be the $\sigma$-algebra generated by $\{\hat{Y}^\star_{v^\star_{i'}(1),1},\hat{Y}^\star_{v^\star_{i'}(1),2}\}_{1\leq i' \leq i}$.
By choosing $C_1 = C_1(d,\sigma)$ to be sufficiently small we can make \[ \P[|(v^\star_{i}(1) + \hat{Y}^\star_{v^\star_{i}(1),3-v^\star_{i}(2)})-(v^\star_{i+1}(1) + \hat{Y}^\star_{v^\star_{i+1}(1),v^\star_{i+1}(2)})|_1 \leq C_1\mid \mathcal{F}_i ]< \frac14(4d)^{-2 C_2}, \] for all $V_n$ and $i$, since $\hat{Y}^\star_{v^\star_{i+1}(1),v^\star_{i+1}(2)}$ is distributed as $N_d(0,\sigma^2 I)$ and is independent of $\mathcal{F}_i$. Hence \begin{equation}\label{e:JnStarBound} \P[\mathcal{J}^\star_n(V_n)] \leq {n \choose n/2}\left(\frac14(4d)^{-2 C_2}\right)^{\frac12 n} \leq (4d)^{-C_2 n}, \end{equation} for large enough $n$. Finally, we may choose $\delta>0$ to be sufficiently small so that \[ \P[|\hat{Y}_{v_i(1),1}-\hat{Y}_{v_i(1),2}|_1 \geq C_1] \leq \frac14(4d)^{-2C_2}, \] since $\hat{Y}_{v_i(1),1}-\hat{Y}_{v_i(1),2}$ is distributed as $N_d(0,2\delta^2 I)$, and hence, by the same computation as in \eqref{e:JnStarBound}, \begin{equation}\label{e:JnBound} \P[\mathcal{J}_n(V_n)] \leq (4d)^{-C_2 n}. \end{equation} Finally we note that $\#\{V_n\in\mathcal{V}_n:L_n(V_n) = \ell\}\leq (2d)^{\ell}$. Now suppose that $\hat{\Pi}$ and $\hat{\Pi}_{(0,1)}=\big\{x+\hat{Y}_{x,i}:(x,i)\in (\mathbb{Z}^d\times \{1,2\})\setminus\{(0,1)\}\big\}$ are not mutually singular. Then there exists an identically distributed copy $(\{\hat{Y}_{x,i}^\star\},\hat{\Pi}^\star)$ and a coupling so that the event $\mathcal{A}=\{\hat{\Pi} = \hat{\Pi}_{(0,1)}^\star\}$ has positive probability. We define the bijections $W:\hat{\Pi} \to \mathbb{Z}^d\times\{1,2\}$ and $W^\star:\hat{\Pi}_{(0,1)}^\star\to (\mathbb{Z}^d\times\{1,2\})\setminus \{(0,1)\}$ so that \[ y=W(y)(1)+\hat{Y}_{W(y)},\qquad y=W^\star(y)(1)+\hat{Y}^\star_{W^\star(y)}, \] where $W(y)(1)$ denotes the first coordinate of $W(y)$. On $\mathcal{A}$ define the sequence $u_0^\star=(0,1)$ and \[ u_i=W(u^\star_{i}(1)+\hat{Y}^\star_{u^\star_{i}(1),3-u^\star_{i}(2)}), \qquad u^\star_{i+1}=W^\star(u_{i}(1)+\hat{Y}_{u_{i}(1),3-u_{i}(2)}), \] where $u_i(1)$ denotes the first coordinate of $u_i$. By construction the $\{u_i(1)\}_{i\geq 0}$ are distinct, as are the $\{u^\star_i(1)\}_{i\geq 0}$, and $U_n=\Big((u_0,\ldots,u_n),(u^\star_0,\ldots,u^\star_n)\Big)\in\mathcal{V}_n$. Also by construction \begin{align*} u^\star_i(1) + \hat{Y}^\star_{u^\star_i(1),3-u^\star_i(2)} &= u_i(1) + \hat{Y}_{u_i(1),u_i(2)},\\ u_i(1) + \hat{Y}_{u_i(1),3-u_i(2)} &= u^\star_{i+1}(1) + \hat{Y}^\star_{u^\star_{i+1}(1),u^\star_{i+1}(2)}, \end{align*} and hence by the triangle inequality we have that \begin{align*} L_n(U_n) &= \sum_{i=0}^n |u^\star_i(1) - u_{i}(1)|_1 + \sum_{i=0}^{n-1} |u_i(1) - u^\star_{i+1}(1)|_1\\ &\leq \sum_{i=0}^n \sum_{j=1}^2 |\hat{Y}_{u_i(1),j}|_1 + \sum_{i=0}^n \sum_{j=1}^2 |\hat{Y}^\star_{u^\star_i(1),j}|_1 \end{align*} and so the event $\mathcal{I}_n(U_n) \cup \mathcal{I}^\star_n(U_n)$ holds on $\mathcal{A}$. Again by the definition of $U_n$, for each $i$, \[ \big|\hat{Y}_{u_i(1),1}-\hat{Y}_{u_i(1),2}\big|_1 = \big|(u^\star_{i}(1) + \hat{Y}^\star_{u^\star_{i}(1),3-u^\star_{i}(2)})-(u^\star_{i+1}(1) + \hat{Y}^\star_{u^\star_{i+1}(1),u^\star_{i+1}(2)})\big|_1, \] and hence at least one of $\mathcal{J}_n(U_n)$ and $\mathcal{J}^\star_n(U_n)$ holds on $\mathcal{A}$.
Hence \begin{align*} \P[\mathcal{A}]&\leq \sum_{V_n\in\mathcal{V}_n} \P\left[\big(\mathcal{I}_n(V_n) \cup \mathcal{I}^\star_n(V_n)\big)\cap \big(\mathcal{J}_n(V_n) \cup \mathcal{J}^\star_n(V_n)\big)\right]\\ &\leq \sum_{m= n}^{C_2 n} \sum_{\substack{V_n\in\mathcal{V}_n\\L_n(V_n)=m}} \P\left[\mathcal{J}_n(V_n) \cup \mathcal{J}^\star_n(V_n)\right]\\ &\qquad + \sum_{m=C_2 n}^{\infty} \sum_{\substack{V_n\in\mathcal{V}_n\\L_n(V_n)=m}} \P\left[\mathcal{I}_n(V_n) \cup \mathcal{I}^\star_n(V_n)\right]. \end{align*} By equation~\eqref{e:InBound} \[ \sum_{m=C_2 n}^{\infty} \sum_{\substack{V_n\in\mathcal{V}_n\\L_n(V_n)=m}} \P\left[\mathcal{I}_n(V_n) \cup \mathcal{I}^\star_n(V_n)\right] \leq \sum_{m=C_2 n}^{\infty} (2d)^{m} (4d)^{-m}, \] and by equations~\eqref{e:JnBound} and~\eqref{e:JnStarBound} \[ \sum_{m= n}^{C_2 n} \sum_{\substack{V_n\in\mathcal{V}_n\\L_n(V_n)=m}} \P\left[\mathcal{J}_n(V_n) \cup \mathcal{J}^\star_n(V_n)\right] \leq \sum_{m= n}^{C_2 n} (2d)^{m} (4d)^{-C_2 n} \] and since both of these bounds tend to 0 as $n$ tends to infinity we have that $\P[\mathcal{A}]=0$. This is a contradiction and hence $\hat{\Pi}$ is deletion singular. \end{proof} \noindent {\bf Acknowledgements}. The authors would like to thank Alexander Holroyd for useful discussions. \begin{bibdiv} \begin{biblist} \bib{ACHZ:08}{article}{ title={Searching for a trail of evidence in a maze}, author={Arias-Castro, E.}, author={Candes, E.J.}, author={Helgason, H.}, author={Zeitouni, O.}, journal={The Annals of Statistics}, volume={36}, pages={1726--1757}, year={2008}, } \bib{BPP:98}{article}{ title={Unpredictable paths and percolation}, author={Benjamini, I.}, author={Pemantle, R.}, author={Peres, Y.}, journal={The Annals of Probability}, pages={1198--1211}, year={1998}, publisher={JSTOR} } \bib{BerPer:13}{article}{ title={Detecting the trail of a random walker in a random scenery}, author={Berger, N.}, author={Peres, Y.}, journal={Electron. J. Probab.}, volume={18}, date={2013}, pages={no.
87, 18}, issn={1083-6489}, review={\MR{3119085}}, doi={10.1214/EJP.v18-2367}, } \bib{CGGK:93}{article}{ title={Greedy lattice animals I: Upper bounds}, author={Cox, J.T.}, author={Gandolfi, A.}, author={Griffin, P.S.}, author={Kesten, H.}, journal={The Annals of Applied Probability}, volume={3}, pages={1151--1169}, } \bib{DeGaKe:01}{article}{ title={Greedy lattice animals: negative values and unconstrained maxima}, author={Dembo, A.}, author={Gandolfi, A.}, author={Kesten, H.}, journal={The Annals of Probability}, volume={29}, pages={205--241}, date={2001}, } \bib{GanKes:94}{article}{ title={Greedy lattice animals II: Linear growth}, author={Gandolfi, A.}, author={Kesten, H.}, journal={The Annals of Applied Probability}, volume={4}, number={1}, pages={76--107}, year={1994}, publisher={JSTOR} } \bib{GhoPer:12}{article}{ title={Rigidity and Tolerance in point processes: Gaussian zeroes and Ginibre eigenvalues}, author={Ghosh, S.}, author={Peres, Y.}, journal={arXiv:1211.2381}, date={2012} } \bib{HagMos:98}{article}{ AUTHOR = {H{\"a}ggstr{\"o}m, Olle and Mossel, Elchanan}, TITLE = {Nearest-neighbor walks with low predictability profile and percolation in {$2+\epsilon$} dimensions}, JOURNAL = {The Annals of Probability}, VOLUME = {26}, YEAR = {1998}, NUMBER = {3}, PAGES = {1212--1231}, } \bib{Hammond:06}{article}{ title={Greedy lattice animals: Geometry and criticality}, author={Hammond, A.}, journal={The Annals of Probability}, volume={34}, pages={593--637}, date={2006} } \bib{HolSoo:10}{article}{ title={Insertion and Deletion Tolerance of Point Processes}, author={Holroyd, A.E.}, author={Soo, T.}, journal={arXiv:1007.3538}, date={2010} } \bib{Per:99}{book}{ AUTHOR = {Peres, Yuval}, TITLE = {Probability on trees: an introductory climb}, BOOKTITLE = {Lectures on probability theory and statistics ({S}aint-{F}lour, 1997)}, SERIES = {Lecture Notes in Math.}, VOLUME = {1717}, PAGES = {193--280}, PUBLISHER = {Springer}, ADDRESS = {Berlin}, YEAR = {1999}, URL = {http://dx.doi.org/10.1007/978-3-540-48115-7_3}, } \bib{MP:10}{book}{ author={M{\"o}rters, Peter}, author={Peres, Yuval}, title={Brownian motion}, series={Cambridge Series in Statistical and Probabilistic Mathematics}, note={With an appendix by Oded Schramm and Wendelin Werner}, publisher={Cambridge University Press, Cambridge}, date={2010}, pages={xii+403}, isbn={978-0-521-76018-8}, review={\MR{2604525 (2011i:60152)}}, doi={10.1017/CBO9780511750489}, } \bib{Dur:10}{book}{ author={Durrett, Rick}, title={Probability: theory and examples}, series={Cambridge Series in Statistical and Probabilistic Mathematics}, edition={4}, publisher={Cambridge University Press, Cambridge}, date={2010}, pages={x+428}, isbn={978-0-521-76539-8}, review={\MR{2722836 (2011e:60001)}}, doi={10.1017/CBO9780511779398}, } \end{biblist} \end{bibdiv} \end{document}
https://arxiv.org/abs/1805.02225
An Upper Bound for the Moments of a G.C.D. related to Lucas Sequences
Let $(u_n)_{n \geq 0}$ be a non-degenerate Lucas sequence, given by the relation $u_n=a_1 u_{n-1}+a_2 u_{n-2}$. Let $\ell_u(m)=\mathrm{lcm}(m, z_u(m))$, for $(m,a_2)=1$, where $z_u(m)$ is the rank of appearance of $m$ in $u_n$. We prove that $$\sum_{\substack{m>x\\ (m,a_2)=1}}\frac{1}{\ell_u(m)}\leq \exp(-(1/\sqrt{6}-\varepsilon+o(1))\sqrt{(\log x)(\log \log x)}),$$ when $x$ is sufficiently large in terms of $\varepsilon$, and where the $o(1)$ depends on $u$. Moreover, if $g_u(n)=\gcd(n,u_n)$, we show that for every $k\geq 1$, $$\sum_{n\leq x}g_u(n)^{k}\leq x^{k+1}\exp(-(1+o(1))\sqrt{(\log x)(\log \log x)}),$$ when $x$ is sufficiently large and where the $o(1)$ depends on $u$ and $k$. This gives a partial answer to a question posed by C. Sanna. As a by-product, we derive bounds on $\#\{n\leq x: (n, u_n)>y\}$, at least in certain ranges of $y$, which strengthen results already obtained by Sanna. Finally, we begin the study of the multiplicative analogue of $\ell_u(m)$, obtaining some interesting results.
\section{Introduction} \label{section 1} Let $(u_n)_{n\geq 0}$ be an integral linear recurrence, that is, $(u_n)_{n\geq 0}$ is a sequence of integers and there exist $a_1, \dots, a_k\in\mathbb{Z}$, with $a_k\neq 0$, such that $$u_{n}=a_{1}u_{n-1}+\cdots+a_{k}u_{n-k},$$ for all integers $n\geq k$, with $k$ a fixed positive integer. We recall that $(u_n)_{n\geq 0}$ is said to be non-degenerate if none of the ratios $\alpha_{i}/\alpha_{j}$ $(i \neq j)$ is a root of unity, where $\alpha_{1}, \dots,\alpha_{r}\in\mathbb{C}$ are all the pairwise distinct roots of the characteristic polynomial $$f_{u}(X)=X^{k}-a_{1}X^{k-1}-\cdots-a_{k}.$$ Moreover, $(u_n)_{n\geq 0}$ is said to be a Lucas sequence if $u_0=0, u_1=1,$ and $k=2$. We note that the Lucas sequence with $a_1=a_2=1$ is known as the Fibonacci sequence. We refer the reader to \cite[Chapter 1]{EPSW} for the basic terminology and theory of linear recurrences. The function $g_{u}(n):=\gcd(n,u_n)$ has attracted the interest of several authors. For example, the set of fixed points of $g_{u}(n)$, or equivalently the set of positive integers $n$ such that $n\mid u_n$, has been studied by Alba~Gonz\'alez, Luca, Pomerance, and Shparlinski \cite{ALPS}, under the mild hypotheses that $(u_{n})_{n\geq 0}$ is non-degenerate and that its characteristic polynomial has only simple roots. Moreover, this problem has been studied also by Andr\'e-Jeannin \cite{J}, Luca and Tron \cite{LT}, Sanna \cite{S2}, Smyth \cite{SM} and Somer \cite{SO}, when $(u_{n})_{n\geq 0}$ is a Lucas or the Fibonacci sequence. On the other hand, Sanna and Tron \cite{S3, ST} have analysed the fiber $g_{u}^{-1}(y)$, when $(u_{n})_{n\geq 0}$ is non-degenerate and $y=1$, and when $(u_{n})_{n\geq 0}$ is the Fibonacci sequence and $y$ is an arbitrary positive integer. Moreover, the image $g_{u}(\mathbb{N})$ has been investigated by Leonetti and Sanna \cite{LS}, again when $(u_{n})_{n\geq 0}$ is the Fibonacci sequence. Other important questions about the function $g_{u}(n)$ are related to its behaviour on average and its distribution as an arithmetic function. From now on, we focus on the specific case in which $(u_n)_{n \geq 0}$ is a non-degenerate Lucas sequence with non-zero discriminant $\Delta_u = a_1^2 + 4a_2$. Otherwise, the sequence reduces to $u_n=n\alpha^{n-1}$, for a suitable $\alpha\in\mathbb{Z}$, and $g_u(n)=n$, for every positive integer $n$. Even in this particular situation, it is very difficult to find information on the distribution of $g_{u}(n)$, because of its oscillatory behaviour. For this reason, it is natural to consider the flatter function $\log(g_{u}(n))$, for which an asymptotic formula for its mean value, and more generally for its moments, has been given by Sanna, who proved the following theorem \cite[Theorem 1.1]{S}. \begin{thm} \label{thm 1.1} Fix a positive integer $\lambda$ and some $\varepsilon>0$. Then, for all $x$ sufficiently large in terms of $a_1,a_2,\lambda$ and $\varepsilon$, we have \begin{equation} \label{eq: 1.1} \sum_{n\leq x}(\log g_{u}(n))^{\lambda}=M_{u,\lambda}x+E_{u,\lambda}(x), \end{equation} where $M_{u,\lambda}>0$ is a constant depending on $a_1,a_2$ and $\lambda$, and the error term is bounded by $$E_{u, \lambda}(x)\ll_{u,\lambda}x^{(1+3\lambda)/(2+3\lambda)+\varepsilon}.$$ \end{thm} Also, Sanna showed that the constant $M_{u,\lambda}$ can be expressed through a convergent series. An immediate consequence of the previous result is the possibility of finding information about the distribution of $g_{u}$ \cite[Corollary 1.3]{S}.
\begin{cor} \label{cor 1.2} For each positive integer $\lambda$, we have \begin{equation} \label{eq: 1.2} \#\{n\leq x: g_{u}(n)>y\}\ll_{u,\lambda}\frac{x}{(\log y)^{\lambda}}, \end{equation} for all $x,y>1$. \end{cor} In the same article, Sanna raised the question of finding an asymptotic formula for the moments of the function $g_{u}(n)$ itself. We are not able to answer this apparently difficult question, but we can at least give a non-trivial estimate for these moments. The result is the following. \begin{thm} \label{thm 1.3} For every integer $k\geq 1$ and every non-degenerate Lucas sequence $(u_n)_{n\geq 0}$, we have \begin{equation} \label{eq: 1.3} \sum_{n\leq x} g_u(n)^{k}\leq x^{k+1}\exp\left(-\left(1+o(1)\right)\sqrt{(\log x)(\log \log x)}\right), \end{equation} as $x$ tends to infinity and where the $o(1)$ depends on $u$ and $k$. \end{thm} For each positive integer $m$ relatively prime to $a_2$, let $z_u(m)$ be the rank of appearance of $m$ in the Lucas sequence $(u_n)_{n\geq 0}$, that is, $z_u(m)$ is the smallest positive integer $n$ such that $m$ divides $u_n$. It is well known that $z_u(m)$ exists (see, e.g., \cite{R}). Also, put $\ell_u(m) :=\mathrm{lcm}(m, z_u(m))$. There is a simple trick to relate the moments of $g_u(n)$ to the rate of convergence of the series $\sum_{m>x, (m,a_2)=1}1/ \ell_u(m)$, which has been partially studied by several authors. From this relation and the following bound we will deduce a slightly weaker version of Theorem \ref{thm 1.3}, in which the constant $1$ in the exponential is replaced by $1/\sqrt{6}-\varepsilon$, for every $\varepsilon>0$. \begin{prop} \label{prop 1.4} For every non-degenerate Lucas sequence $u_n$, we have \begin{equation} \label{eq: 1.4} \sum_{\substack{m>x \\(m,a_2)=1}}\frac{1}{\ell_u(m)}\leq\exp(-(1/ \sqrt{6}-\varepsilon+o(1))\sqrt{(\log x)(\log \log x)}), \end{equation} when $x$ is large in terms of $\varepsilon$ and where the $o(1)$ depends on $u$. \end{prop} In the proof of Proposition \ref{prop 1.4} we highlight a method, based essentially on the distribution of smooth numbers, to achieve the above bound. It seems reasonable to think that a deeper analysis of the structure of $\ell_u(n)$ could lead to a better understanding of the behaviour of $\sum_{\substack{m>x,(m,a_2)=1}}1/\ell_u(m)$ and consequently to an improvement of the result about the moments of $g_u(n)$. Nevertheless, using a completely different and more direct approach that we will describe later, we can obtain the stronger estimate stated in Theorem \ref{thm 1.3}. It is immediate to deduce from Theorem \ref{thm 1.3} the following improvement on the distribution of $g_{u}(n)$, at least when $y$ varies in a certain range. \begin{cor} \label{cor 1.5} We have \begin{equation} \label{eq: 1.5} \#\{ n\leq x : g_u(n)>y\}\leq \frac{x^{2}}{y\exp((1+o_{u}(1))\sqrt{(\log x)(\log \log x)})}, \end{equation} for every $y\geq 1$, when $x$ is sufficiently large. \end{cor} \begin{proof} By using \eqref{eq: 1.3} with $k=1$ we obtain \begin{equation} \label{eq: 1.6} \#\{ n\leq x : g_u(n) >y\}\leq \sum_{n\leq x}\frac{g_u(n)}{y} \end{equation} $$\leq \frac{x^{2}}{y\exp((1+o_{u}(1))\sqrt{(\log x)(\log \log x)})},$$ for every $y\geq 1$. \end{proof} We observe that this is an improvement of \eqref{eq: 1.2} only for certain values of $y$, e.g.\ for those satisfying \begin{equation} \label{eq: 1.7} x\exp(-(1/2+o_{u}(1))\sqrt{(\log x)(\log \log x)})\leq y\leq x.
\end{equation} Consider now the multiplicative function $L_u(n)$ such that $L_u(p^{k})=\ell_u(p^{k})$, for every prime number $p\nmid a_2$ and power $k\geq 1$, and $L_u(p^{k})=p^{k}$ otherwise. Using arguments coming from the theory of Dirichlet series of multiplicative functions, we end up with the following estimate. \begin{prop} \label{prop 1.6} For every non-degenerate Lucas sequence $(u_n)_{n\geq 0}$, we have \begin{equation} \label{eq: 1.8} \sum_{n>x}\frac{1}{L_u(n)}\ll_u x^{-1/3+\varepsilon}, \end{equation} for every $\varepsilon>0$, when $x$ is sufficiently large with respect to $\varepsilon$. \end{prop} The above result shows that the lack of multiplicativity of $\ell_u(n)$ is the principal cause of the weaker upper bound in \eqref{eq: 1.4}. \section{Notations} For a pair of real functions $f(x), g(x)$, with $g(x)>0$, we write $f(x)=O(g(x))$ or $f(x)\ll g(x)$ to mean that there exists an absolute constant $c>0$ such that $|f(x)|\leq cg(x)$ for $x$ sufficiently large. When the implicit constant $c$ depends on a parameter $\alpha$ we write $f(x)\ll_{\alpha} g(x)$ or, equivalently, $f(x)=O_{\alpha}(g(x))$. Throughout, the letter $p$ is reserved for a prime number. We write $(a,b)$ and $[a,b]$ to denote the greatest common divisor and the least common multiple of integers $a,b$. As usual, we denote by $\lfloor w\rfloor$ the integer part of a real number $w$ and by $P(n)$ the greatest prime factor of a positive integer $n$. \section{Preliminaries} We begin by recalling the definition of Jordan's totient function. \begin{defi} \label{def 3.1} Jordan's totient function of degree $k$ is defined as $$J_{k}(n)=n^{k}\prod_{p\mid n}\left(1-\frac{1}{p^{k}}\right),$$ for every $k\geq 1$ and every positive integer $n$. \end{defi} Clearly, $J_{1}(n)=\varphi(n)$, Euler's totient function, and it is immediate to see that $J_k(n)$ satisfies the following identity. \begin{lem} We have \begin{equation} \label{eq: 3.1} n^{k}=\sum_{d\mid n}J_{k}(d), \end{equation} for any $k\geq 1$ and every positive integer $n$. \end{lem} The next lemma summarizes some basic properties of $\ell_u(n)$ and $z_u(n)$, which we will implicitly use later without further mention. \begin{lem} \label{lem 3.2} For all positive integers $m$, $n$ and all odd prime numbers $p$, we have: \begin{enumerate} \item $m \mid u_n$ if and only if $z_u(m) \mid n$ and $(m,a_2)=1$. \item $z_u([m, n]) = [z_u(m), z_u(n)]$, whenever $(mn,a_2)=1$. \item $m \mid \gcd(n, u_n)$ if and only if $(m, a_2)=1$ and $\ell_u(m) \mid n$. \item $\ell_u([m, n]) = [\ell_u(m), \ell_u(n)]$, whenever $(mn,a_2)=1$. \item $\ell_u(p^{j}) = p^{j} z_u(p)$ if $p\nmid \Delta_u,$ and $\ell_u(p^{j}) = p^{j}$ if $p\mid\Delta_u$, for every $p\nmid a_2$ and $j\geq 1$. \item $z_u(p)\mid p\pm 1$, if $p\nmid \Delta_u,$ and $z_u(p)=p$ if $p\mid\Delta_u$, for every $p\nmid a_2$. \end{enumerate} \end{lem} For any $\gamma>0$, let us define $$\mathcal{Q}_{u,\gamma}:=\{p: p\nmid a_2,\ z_u(p)\leq p^{\gamma}\},$$ and let $\mathcal{Q}_{u,\gamma}(x)$ denote the number of elements of $\mathcal{Q}_{u,\gamma}$ not exceeding $x$. The following is \cite[Lemma 2.1]{ALPS}. \begin{lem} \label{lem 3.3} For all $x^{\gamma},y\geq 2$ and for any non-degenerate Lucas sequence $(u_n)_{n\geq 0}$, we have $$\#\{p: z_u(p)\leq y\}\ll_u \frac{y^{2}}{\log y},\ \ \mathcal{Q}_{u,\gamma}(x)\ll_u \frac{x^{2\gamma}}{\gamma\log x}.$$ \end{lem} It has been proven by Sanna and Tron \cite[Lemma 3.2]{ST} that the series $\sum_{(n, a_2)=1}1/\ell_u(n)$ converges.
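For a concrete illustration of these quantities in the case of the Fibonacci sequence ($a_1=a_2=1$), note that $z_u(2)=3$ and $z_u(5)=5$, so that part (2) of Lemma~\ref{lem 3.2} gives $z_u(10)=[z_u(2),z_u(5)]=15$ and hence $\ell_u(10)=[10,15]=30$; accordingly, by part (3), $10\mid\gcd(n,u_n)$ precisely when $30\mid n$.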
We consider the following identity
\begin{equation} \label{eq: 3.2} \sum_{\substack{n>x \\ (n,a_2)=1}}\frac{1}{\ell_u(n)}=\sum_{\substack{n>x\\ P(n)>y\\ (n,a_2)=1}}\frac{1}{\ell_u(n)}+\sum_{\substack{n>x\\ P(n)\leq y\\ (n,a_2)=1}}\frac{1}{\ell_u(n)}. \end{equation}
We note that the first sum on the right hand side of \eqref{eq: 3.2} has already been investigated by Sanna \cite[Lemma 2.5]{S} and we record here the result which he obtained.
\begin{prop} \label{prop 3.4} We have
\begin{equation} \label{eq: 3.3} \sum_{\substack{(m,a_2)=1\\P(m)>y}}\frac{1}{\ell_u(m)}\ll_u\frac{1}{y^{1/3-\varepsilon}}, \end{equation}
for all $\varepsilon\in(0, 1/4]$ and $y\gg_{u,\varepsilon}1.$ \end{prop}
Regarding the second sum on the right hand side of \eqref{eq: 3.2}, we provide an estimate in the next lemma.
\begin{lem} \label{lem 3.5} Supposing that $y>(\log x)^{2}$ and that $v=\log x/\log y$ tends to infinity as $x$ tends to infinity, we have
\begin{equation} \label{eq: 3.4} \sum_{\substack{n>x\\ P(n)\leq y\\ (n, a_2)=1}}\frac{1}{\ell_u(n)}\ll_u (\log y)e^{-\sqrt{y}/(2\log y)}+\frac{\log y}{\log v}e^{-v\log v}. \end{equation}
\end{lem}
\begin{proof} Since $\ell_u(n)\geq n$, we may write
\begin{equation*} \sum_{\substack{n>x\\ P(n)\leq y\\ (n, a_2)=1}}\frac{1}{\ell_u(n)}\leq \int_{x}^{\infty}\frac{d\psi(t,y)}{t}, \end{equation*}
where $\psi(t,y)$ is the counting function of the $y$-smooth numbers not exceeding $t$. Clearly, we have
\begin{equation} \label{eq: 3.5} \int_{x}^{\infty}\frac{d\psi(t,y)}{t}=\frac{\psi(t,y)}{t}\bigg|_{x}^{\infty}+\int_{x}^{\infty}\frac{\psi(t,y)}{t^{2}}dt. \end{equation}
To estimate the second term on the right hand side of \eqref{eq: 3.5} we suppose first that $y>(\log x)^{2}$ and then we split it into two parts:
\begin{equation*} \int_{x}^{\infty}\frac{\psi(t,y)}{t^{2}}dt=\int_{x}^{z}\frac{\psi(t,y)}{t^{2}}dt+\int_{z}^{\infty}\frac{\psi(t,y)}{t^{2}}dt, \end{equation*}
where we put $z=e^{\sqrt{y}}$. Using the estimate \cite[Theorem 1, \S 5.1, Chapter III]{T}
\begin{equation} \label{eq: 3.6} \psi(t,y)\ll te^{-\log t/(2\log y)}=t^{1-1/(2\log y)}, \end{equation}
valid uniformly for $t\geq y\geq 2$, we obtain
\begin{equation} \label{eq: 3.7} \int_{z}^{\infty}\frac{\psi(t,y)}{t^{2}}dt\ll \int_{z}^{\infty}t^{-1-1/(2\log y)}dt\ll (\log y) z^{-1/(2\log y)}=(\log y)\exp\left(-\frac{\sqrt{y}}{2\log y}\right). \end{equation}
By the Corollary to Theorem 3.1 in \cite{CEP}, we know that
$$\psi(t,y)\leq t\exp\left(-(1+o(1))\frac{\log t}{\log y}\log\left(\frac{\log t}{\log y}\right)\right),$$
in the region $y>(\log t)^{2}$. Here the $o(1)$ is with respect to $\log t/\log y\rightarrow \infty$. If $v=\log x/ \log y$ tends to infinity as $x$ tends to infinity, then we may use the simpler bound
\begin{equation} \label{eq: 3.8} \psi(t,y)\leq t\exp\left(-\frac{\log t}{\log y}\log\left(\frac{\log t}{\log y}\right)\right), \end{equation}
for any $x\leq t\leq z$. Note that equation \eqref{eq: 3.8} also follows from the aforementioned Corollary in \cite{CEP}. We assume from now on that we are in this situation.
Now, inserting this bound and using the change of variable $s=\log t$, we get
$$\int_{x}^{z}\frac{\psi(t,y)}{t^{2}}dt\leq \int_{\log x}^{\sqrt{y}}\exp\left(-\frac{s}{\log y}\log\left(\frac{s}{\log y}\right)\right)ds,$$
which after another change of variable $s=w\log y$ becomes
$$(\log y)\int_{\log x/\log y}^{\sqrt{y}/\log y}\exp(-w\log w)dw.$$
Using that $w\geq v$ and putting $w\log v=r$, we find
\begin{equation} \label{eq: 3.9} \int_{x}^{z}\frac{\psi(t,y)}{t^{2}}dt\leq \frac{\log y}{\log v}\int_{v\log v}^{\sqrt{y}\log v/\log y}e^{-r}dr\leq \frac{\log y}{\log v}e^{-v\log v}. \end{equation}
Regarding the first term on the right hand side of \eqref{eq: 3.5}, we note that
$$\frac{\psi(t,y)}{t}\bigg|_{x}^{\infty}\leq \lim_{t\rightarrow\infty}\frac{\psi(t,y)}{t}\ll \lim_{t\rightarrow\infty}t^{-1/(2\log y)}=0,$$
by \eqref{eq: 3.6}. Collecting the results, we obtain the estimate \eqref{eq: 3.4}. \end{proof}\ \\
Finally, we can deduce the stated estimate on $\sum_{n>x}1/\ell_u(n)$.
\begin{proof}[Proof of Proposition \ref{prop 1.4}] By Proposition \ref{prop 3.4} and Lemma \ref{lem 3.5} we conclude that
$$\sum_{\substack{n>x\\ (n,a_2)=1}}\frac{1}{\ell_u(n)}\ll_u \frac{1}{y^{1/3-\varepsilon}}+\frac{\log y}{\log v}e^{-v\log v},$$
for every $\varepsilon>0$, if $y$ is sufficiently large in terms of $\varepsilon$; here we have used that, for such $y$, the first term in \eqref{eq: 3.4} is dominated by $1/y^{1/3-\varepsilon}$. It is immediate to see that the best choice for $y$ is of the form $y=\exp(C\sqrt{(\log x)(\log\log x)})$, with $C$ a suitable positive constant to be chosen later. After some easy considerations, we obtain
$$\sum_{\substack{n>x\\(n,a_2)=1}}\frac{1}{\ell_u(n)}\ll_u \exp\left(-C(1/3-\varepsilon)\sqrt{(\log x)(\log \log x)}\right)$$
$$+\exp\left(-\frac{1}{2C} (1-o(1))\sqrt{(\log x)(\log \log x)}\right),$$
where the $o(1)$ tends to zero from above as $x$ goes to infinity. Now, choosing $C=1/\sqrt{2(1/3-\varepsilon)}$, which balances the two exponents, we see that
$$\sum_{\substack{n>x\\ (n,a_2)=1}}\frac{1}{\ell_u(n)}\ll_u \exp\left(-\frac{(1-o(1))(1-\varepsilon)}{\sqrt{6}}\sqrt{(\log x)(\log \log x)}\right),$$
for every $\varepsilon>0$ and $x$ sufficiently large with respect to $\varepsilon$. \end{proof}
\section{Proof of the weak version of Theorem 1.3}
\begin{proof} We start by inserting identity \eqref{eq: 3.1} into our main sum:
\begin{equation} \label{eq: 4.1} \sum_{n\leq x}(n, u_{n})^{k}=\sum_{n\leq x}\sum_{\substack{d\mid (n, u_n)}} J_{k}(d)=\sum_{d\leq x}J_{k}(d)\sum_{\substack{n\leq x\\ d\mid (n, u_n)}}1=\sum_{\substack{d\leq x\\ (d,a_2)=1}}J_{k}(d)\sum_{\substack{n\leq x\\ \ell_u(d)\mid n }}1, \end{equation}
by part (3) of Lemma \ref{lem 3.2}. Clearly, the last expression in \eqref{eq: 4.1} is
\begin{equation} \label{eq: 4.2} \sum_{\substack{d\leq x\\ (d, a_{2})=1}}J_{k}(d)\bigg\lfloor \frac{x}{\ell_u(d)}\bigg\rfloor\leq x\sum_{\substack{d\leq x\\ (d, a_{2})=1}}\frac{J_{k}(d)}{\ell_{u}(d)}\leq x\sum_{\substack{d\leq x\\ (d, a_{2})=1}}\frac{d^{k}}{\ell_{u}(d)}. \end{equation}
But now we observe that
\begin{equation*} \sum_{\substack{d\leq x\\ (d, a_{2})=1}}\frac{d^{k}}{\ell_{u}(d)}=\sum_{\substack{d\leq x^\delta \\ (d, a_{2})=1}}\frac{d^{k}}{\ell_{u}(d)}+\sum_{\substack{x^\delta <d\leq x\\ (d, a_{2})=1}}\frac{d^{k}}{\ell_{u}(d)} \end{equation*}
$$\ll x^{k\delta}+x^{k}\sum_{\substack{d>x^\delta\\ (d, a_{2})=1}}\frac{1}{\ell_{u}(d)}$$
$$\ll x^{k}\exp(-(1/\sqrt{6}-\varepsilon+o(1))\sqrt{\delta}\sqrt{(\log x)(\log \log x)}),$$
for any $\delta\in (0,1)$, using that the series $\sum_{n}1/\ell_u(n)$ converges and the bound \eqref{eq: 1.4}, and for any $x$ large in terms of $\delta$ and $\varepsilon$.
Now, choosing $\delta$ close to $1$ as a function of $\varepsilon$, and by the arbitrariness of $\varepsilon$, we find
\begin{equation} \label{eq: 4.3} \sum_{\substack{d\leq x\\ (d, a_{2})=1}}\frac{d^{k}}{\ell_{u}(d)}\leq x^{k}\exp(-(1/\sqrt{6}-\varepsilon+o(1))\sqrt{(\log x)(\log \log x)}), \end{equation}
where the $o(1)$ depends on $u$ and $k$ and $x$ is chosen large enough with respect to $\varepsilon$. Inserting \eqref{eq: 4.3} in \eqref{eq: 4.2} and \eqref{eq: 4.2} in \eqref{eq: 4.1}, the proof is finished. \end{proof}
\section{Proof of Theorem 1.3}
\begin{proof} Let $y:=\exp(\frac{1}{2}\sqrt{(\log x)(\log\log x)})$. We define a partition of $\{n: n\leq x\}$ by setting
\begin{equation*} \begin{array}{lllll} E_{1}(x)=\{n\leq x: P(n)\nmid u_n\};\\ \\ E_2(x)=\{n\leq x: P(n)\leq y\};\\ \\ E_3(x)=\{n\leq x: P(n)>y^{6},\ P(n)\in \mathcal{Q}_{u,1/3}(x)\};\\ \\ E_4(x)=\{n\leq x: P(n)>y^{6},\ P(n)\not\in \mathcal{Q}_{u,1/3}(x)\};\\ \\ E_5(x)=\{n\leq x\}\setminus (E_1(x)\cup E_2(x)\cup E_3(x)\cup E_4(x)).\end{array} \end{equation*}
Let $S_i=\sum_{n\in E_i(x)}(n, u_n)^{k}$, for every $i\in\{1,2,3,4,5\}$. We note that if $n\in E_1(x)$, then $(n,u_n)\mid(n/P(n))$ and we deduce that
\begin{equation} \label{eq: 5.1} S_1\leq \sum_{n\leq x}\left(\frac{n}{P(n)}\right)^{k}\leq x^{k}\sum_{n\leq x}\frac{1}{P(n)^{k}}\leq x^{k+1}\exp((-\sqrt{2k}+o(1))\sqrt{(\log x) (\log\log x)}), \end{equation}
where the last inequality follows by \cite[equation 1.6]{IP}. Moreover, it is immediate to see that
\begin{equation*} S_2\leq x^{k}\psi(x,y)\leq x^{k+1}\exp(-(1+o(1))v\log v), \end{equation*}
by the Corollary to Theorem 3.1 in \cite{CEP}, where $v=\log x/\log y$ and the $o(1)$ tends to zero as $v$ tends to infinity. We observe that we can apply this result because we chose a value of $y$ sufficiently large. Notice also that by our choice of $y$ we in fact obtain
\begin{equation} \label{5.2} S_2\leq x^{k+1}\exp(-(1+o(1))\sqrt{(\log x)(\log\log x)}), \end{equation}
which dominates \eqref{eq: 5.1}. Regarding the third sum, we simply use $S_3\leq x^{k}\# E_3(x)$. Now, if $n\in E_3(x)$ we can factorize $n=P(n)m$, with $P(n)>y^{6}$ and $P(n)\in \mathcal{Q}_{u,1/3}(x)$. This implies that $m<x/y^6$ and that $P(n)\in \mathcal{Q}_{u,1/3}(x/m)$. Consequently,
\begin{equation*} \#E_3(x)\leq \sum_{m\leq x/y^6}\#\mathcal{Q}_{u,1/3}(x/m)\ll x^{2/3}\sum_{m\leq x/y^6}\frac{1}{m^{2/3}}\ll \frac{x}{y^{2}}, \end{equation*}
by Lemma \ref{lem 3.3} and a standard final computation. This leads to
\begin{equation} \label{5.3} S_3\ll x^{k+1}\exp(-2\log y), \end{equation}
which is of the same order of magnitude as \eqref{5.2}. As for the fourth sum, by parts (1) and (6) of Lemma \ref{lem 3.2}, we have that $z_u(P(n))\mid n$ and $z_u(P(n))\mid P(n)\pm 1$, implying that $P(n)z_u(P(n))\mid n$. Note that we can affirm the first two divisibility conditions because we may assume that $P(n)$ is odd and that $P(n)\nmid a_2 \Delta_u$, since $y$ is large enough. We deduce that
\begin{equation*} \#E_4(x)\leq \sum_{\substack{p>y^6 \\ p\not\in \mathcal{Q}_{u,1/3}(x)}}\frac{x}{pz_u(p)}\leq \sum_{p>y^6}\frac{x}{p^{4/3}}\ll \frac{x}{y^{2}}, \end{equation*}
by a standard computation. Therefore, we find
\begin{equation} \label{5.4} S_4\leq x^{k}\# E_4(x)\ll x^{k+1}\exp(-2\log y), \end{equation}
which coincides with \eqref{5.3}. We are then left with the estimate of $S_{5}$. To this end we closely follow an argument already employed in the proof of \cite[Theorem 2]{ALPS}. For any non-negative integer $j$, let $I_j:=[2^j,2^{j+1})$. We cover $I:=[y,y^{6})$ by these dyadic intervals, and we define $a_j$ via $2^j=y^{a_j}$.
We shall assume the variable $j$ runs over just those integers with $I_j$ not disjoint from $I$. For any integer $k$, define $\mathcal{P}_{j,k}$ as the set of primes $p\in I_j$ with $z_u(p)\in I_k$. Note that, by Lemma \ref{lem 3.3}, we have $\#\mathcal{P}_{j,k}\ll_u 4^k$. We have
\begin{equation} \#E_5(x)\leq\sum_j\sum_k\sum_{p\in\mathcal{P}_{j,k}}\sum_{\substack{n\leq x\\P(n)\mid u_n\\ P(n)=p}}1\leq \sum_j\sum_k\sum_{p\in\mathcal{P}_{j,k}}\psi\left(\frac{x}{pz_u(p)},p\right) \end{equation}
$$\leq \sum_j\sum_k\sum_{p\in\mathcal{P}_{j,k}}\frac{x}{pz_u(p)y^{2/a_j+o(1)}},$$
as $x\to\infty$, where we have used the Corollary to Theorem 3.1 in \cite{CEP} for the last estimate. For $k>j/2$, we use the estimate
$$ \sum_{p\in\mathcal{P}_{j,k}}\frac1{pz_u(p)}\leq 2^{-k}\sum_{p\in I_j}\frac1p\leq 2^{-k} $$
for $x$ large. For $k\le j/2$, we use the estimate
$$ \sum_{p\in\mathcal{P}_{j,k}}\frac1{pz_u(p)}\ll\frac{4^k}{2^j2^k}=2^{k-j}, $$
since, as noted above, there are $\ll_u 4^k$ such primes. Thus,
\begin{equation} \sum_k\sum_{p\in\mathcal{P}_{j,k}}\frac{1}{pz_u(p)}= \sum_{k>j/2}\sum_{p\in\mathcal{P}_{j,k}}\frac{1}{pz_u(p)}+\sum_{k\le j/2}\sum_{p\in\mathcal{P}_{j,k}}\frac{1}{pz_u(p)}\ll 2^{-j/2}=y^{-a_j/2}. \end{equation}
Collecting the above computations, we find
$$ \#E_5(x)\leq\sum_j\frac{x}{y^{a_j/2+2/a_j+o(1)}},\ \textrm{as}\ x\to\infty. $$
Since the minimum value of $t/2+2/t$ for $t>0$ is $2$, occurring at $t=2$, we conclude that
$$\#E_5(x)\leq x/y^{2+o(1)},\ \textrm{as}\ x\to\infty,$$
which leads to an estimate for $S_5$ of the same size as the one for $S_2$. We conclude that
$$\max\{S_1,S_2,S_3,S_4,S_5\}\leq x^{k+1}\exp(-(1+o(1))\sqrt{(\log x) (\log\log x)}),$$
proving Theorem \ref{thm 1.3}. \end{proof}
\section{The multiplicative analogue of $\ell_u(n)$}
Let us define the multiplicative function $L_u(n)$ such that $L_u(p^{k})=\ell_u(p^{k})$, for every prime number $p\nmid a_2$ and every power $k\geq 1$, and $L_u(p^{k})=p^{k}$, otherwise. Now, consider the Dirichlet series of the function $n/L_u(n)$, given by
$$\alpha(s)=\sum_{n\geq 1} \frac{n}{n^{s}L_u(n)}.$$
Let $\sigma_{c}$ be the abscissa of convergence of $\alpha(s)$; since the coefficients are positive, this coincides with the abscissa of absolute convergence. Certainly, since $\ell_u(n)\leq L_u(n)$, for every $n$, and since we know that the series of the reciprocals of $\ell_u(n)$ converges, we have $\sigma_{c}\leq 1$. Then, for any $s\in\mathbb{C}$ with $\Re(s)=\sigma>\sigma_{c}$, we can consider the Euler product, which converges to the Dirichlet series in this range. Therefore, we can write
$$ \alpha(s)=\prod_{p\nmid 2a_2\Delta_u}\left(1+\sum_{k\geq 1}\frac{f(p^{k})}{p^{ks}}\right)\beta(s), $$
where $f(n)=n/L_u(n)$ and $\beta(s)$ is an analytic function in $\Re(s)>0$. Since by property (5) of Lemma \ref{lem 3.2} we find that $f(p^{k})=1/z_u(p)$, for any $k\geq 1$ and prime $p\nmid 2a_2\Delta_u$, we have
\begin{equation} \label{eq: 6.1} \alpha(s)=\prod_{p\nmid 2a_2\Delta_u}\left(1+\frac{f(p)}{p^{s}}\frac{p^{s}}{p^{s}-1}\right)\beta(s)=\prod_{p\nmid 2a_2\Delta_u}\left(1+\frac{1}{z_u(p)(p^{s}-1)}\right)\beta(s). \end{equation}
Now, the final product in \eqref{eq: 6.1} converges if and only if
\begin{equation*} \sum_{p\nmid 2a_2\Delta_u}\frac{1}{z_u(p)(p^{s}-1)} \end{equation*}
converges. Therefore, it suffices to prove that
$$\lim_{x\rightarrow \infty}\sum_{\substack{p>x}}\frac1{z_u(p)(p^{\sigma}-1)}=0.$$
We estimate the last sum by separating the primes $p\in\mathcal{Q}_{u,\gamma}$ from those $p\not\in \mathcal{Q}_{u,\gamma}$.
In the first case we obtain
\begin{equation} \label{eq: 6.2} \sum_{\substack{p>x\\ p\in\mathcal{Q}_{u,\gamma}}}\frac{1}{z_u(p)(p^{\sigma}-1)}\ll \int_{x}^{\infty}\frac{d(\#\mathcal{Q}_{u,\gamma}(t))}{t^{\sigma}}\ll_u \frac{1}{(\sigma-2\gamma)x^{\sigma-2\gamma}}, \end{equation}
by Lemma \ref{lem 3.3}, if we choose $\sigma>2\gamma$. On the other hand, in the second case we get
\begin{equation} \label{eq: 6.3} \sum_{\substack{p>x\\p\not\in\mathcal{Q}_{u,\gamma}}}\frac{1}{z_u(p)(p^{\sigma}-1)}\ll \sum_{p>x}\frac{1}{p^{\sigma+\gamma}}\ll \frac{1}{(\sigma+\gamma-1)x^{\sigma+\gamma-1}}, \end{equation}
if we choose $\sigma+\gamma>1$. Comparing \eqref{eq: 6.2} with \eqref{eq: 6.3}, we are led to take $\gamma=1/3$ and we have shown that
\begin{equation} \label{eq: 6.4} \sum_{\substack{p>x}}\frac1{z_u(p)(p^{\sigma}-1)}\ll_u \frac{1}{\varepsilon x^{\varepsilon}}, \end{equation}
if $\sigma=2/3+\varepsilon$, for every $\varepsilon>0$, and consequently that $\alpha(s)$ converges for every $s$ with $\Re(s)>2/3$, or equivalently that $\sigma_c\leq 2/3$. An immediate application of this result is the following. Let us define
$$F(s)=\sum_{n\geq 1}\frac{1}{n^{s}L_u(n)}.$$
Then, since $F(s)=\alpha(s+1)$, the series $F(s)$ has abscissa of convergence $\sigma_c '\leq -1/3$. This is equivalent to a strong bound on the tail of $F(0)$. The intermediate step is made explicit in the next lemma (see e.g. \cite[\S 11.3, Lemma 1]{A}).
\begin{lem} \label{lem 6.1} Suppose that $G(s)=\sum_{n\geq 1}a_n n^{-s}$ is the Dirichlet series of a sequence $(a_n)_{n\geq 1}$ of positive real numbers, with abscissa of convergence $\sigma_c '$. Suppose that $G(0)$ converges. Then, we have $\sigma_c '=\inf\{\theta : \sum_{n>x}a_n\ll x^{\theta}\}.$ \end{lem}
Since $F(s)$ satisfies the hypotheses of Lemma \ref{lem 6.1}, by \eqref{eq: 6.4}, we deduce that
\begin{equation*} \sum_{n>x}\frac{1}{L_u(n)}\ll_u x^{-1/3+\varepsilon}, \end{equation*}
for every $\varepsilon>0$, proving Proposition \ref{prop 1.6}.
\begin{rmk} We believe that a finer study of $L_u(n)$ could lead to a better understanding of the structure of $\ell_u(n)$, though the lack of multiplicativity of $\ell_u(n)$ makes it difficult to study it starting from information about $L_u(n)$. For instance, it can be shown that the integers $n$ having at least two prime factors $p_1,p_2$ such that a fixed prime $q$ divides both $z_u(p_1)$ and $z_u(p_2)$ have asymptotic density $1$. Thus, when calculating $z_u(n)$ as a least common multiple, there is a cancellation of a factor $q$. In other words, for any positive real number $C$, most integers $n$ have $L_u(n)/\ell_u(n) > C$. This suggests that the two mentioned functions are not always very close to each other. \end{rmk}
\section*{Acknowledgements}
I would like to thank Carlo Sanna for suggesting this problem and for introducing me to the theory of linear recurrences. Special thanks go also to the anonymous referee for a careful reading and useful advice.
\bibliographystyle{amsplain}
https://arxiv.org/abs/2205.01456
Schur properties of randomly perturbed sets
A set $A$ of integers is said to be Schur if any two-colouring of $A$ results in monochromatic $x,y$ and $z$ with $x+y=z$. We study the following problem: how many random integers from $[n]$ need to be added to some $A\subseteq [n]$ to ensure with high probability that the resulting set is Schur? Hu showed in 1980 that when $|A|> \lceil\tfrac{4n}{5}\rceil$, no random integers are needed, as $A$ is already guaranteed to be Schur. Recently, Aigner-Horev and Person showed that for any dense set of integers $A\subseteq [n]$, adding $\omega(n^{1/3})$ random integers suffices, noting that this is optimal for sets $A$ with $|A|\leq \lceil\tfrac{n}{2}\rceil$. We close the gap between these two results by showing that if $A\subseteq [n]$ with $|A|=\lceil\tfrac{n}{2}\rceil+t<\lceil\tfrac{4n}{5}\rceil$, then adding $\omega(\min\{n^{1/3},nt^{-1}\})$ random integers will with high probability result in a set that is Schur. Our result is optimal for all $t$, and we further provide a stability result showing that one needs far fewer random integers when $A$ is not close in structure to the extremal examples. We also initiate the study of perturbing sparse sets of integers $A$ by using algorithmic arguments and the theory of hypergraph containers to provide nontrivial upper and lower bounds.
\section{Introduction} \label{sec:intro} A \emph{Schur triple} in a set $A\subseteq \mb{N}$ is a triple $(x,y,z)\in A^3$ such that $x+y=z$, and we say a set $A \subseteq \mb{N}$ is \emph{$r$-Schur} if any $r$-colouring of the elements in $A$ results in a monochromatic Schur triple. Note that the property of $A$ being $1$-Schur is just the property of containing a Schur triple. We call sets that are not $1$-Schur \emph{sum-free}. This terminology stems from a classic theorem of Schur \cite{schur1916kongruenz} which asserts that for every $r$, there is some $n_0 = n_0(r)$ such that $[n]$ is $r$-Schur for all $n \ge n_0$. Given this, it is natural to ask which subsets of $[n]$ are also $r$-Schur. From an extremal perspective, this leads to the question of establishing the maximum size of a subset $A\subseteq [n]$ that is \emph{not} $r$-Schur. It is a simple exercise to show that if $|A|> \ceil*{\frac{n}{2}}$, $A$ must be $1$-Schur. Taking $A\subseteq[n]$ to be the set of all odd integers or the large integers $\left\{ \floor*{\frac{n}{2}}+1,\ldots,n \right\}$ shows that this is best possible. For $2$-colourings, one can take $A$ to be all integers in $[n]$ that are \emph{not} divisible by $5$, colouring those that are congruent to $1$ or $4$ $(\mbox{mod } 5)$ red and those congruent to $2$ or $3$ $(\mbox{mod } 5)$ blue. This colouring gives no monochromatic Schur triples and hence there exist sets of size $\ceil*{ \frac{4n}{5}}$ that are not $2$-Schur. Hu~\cite{hu1980note} showed with an elegant argument that one can not do better. \begin{thm} \label{thm:hu} For any $n\in \mb{N}$ and $A\subseteq [n]$ with $\card{A}>\ceil*{ \frac{4n}{5}}$, $A$ is $2$-Schur. \end{thm} For $r\geq 3$, it remains an open problem to determine what density forces a subset to be $r$-Schur. Abbott and Wang \cite{abbott1977sum} posed this question in 1977 and provided constructions which they conjecture to be best possible, while some upper bounds have been provided in \cite{abbott1977sum,hancock2019independent}. Deviating from the problem of determining the size of extremal sets, one can also study the behaviour of typical subsets of $[n]$ by adopting a probabilistic perspective. For this, we fix some probability $p=p(n)\in[0,1]$ and randomly sparsify the set $[n]$, defining $[n]_p$ to be the set obtained by taking each integer of $[n]$ into $[n]_p$ independently with probability $p$. The goal is to understand for which $p$ we can expect the resulting set to be $r$-Schur. Here, and throughout, we say an event holds with high probability (whp, for short) if the probability that it holds tends to $1$ as $n$ tends to infinity. Again, establishing the appearance of Schur triples is an easy task and standard tools (the first and second moment methods) give that if $p=o(n^{-2/3})$, then $[n]_p$ is sum-free whp whilst if $p=\omega(n^{-2/3})$ then $[n]_p$ will be $1$-Schur whp. For more colours, the behaviour was determined by Graham, R\"odl and Ruci\'nski \cite{graham1996schur} for $r=2$ and by R\"odl and Ruci\'nski \cite{rodl1997rado} for $r\geq 3$. \begin{thm} \label{thm:random} For any $2\leq r\in \mb{N}$ we have that if $p=o(n^{-1/2})$ then whp $[n]_p$ is not $r$-Schur whilst if $p=\omega(n^{-1/2})$ then whp $[n]_p$ is $r$-Schur. \end{thm} For the rest of the paper we restrict to the case $r=2$ and say that a set $A\subseteq [n]$ is \emph{Schur} if it is $2$-Schur. 
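As a quick sanity check of the mod-$5$ colouring just described (purely illustrative, and not part of any argument in this paper), the following short Python sketch verifies for a concrete value of $n$ that neither colour class contains a Schur triple; the helper \texttt{has\_schur\_triple} is our own.
\begin{verbatim}
def has_schur_triple(S):
    """True if S contains x, y, z (x = y allowed) with x + y = z."""
    S = set(S)
    return any(x + y in S for x in S for y in S)

n = 200
red  = [a for a in range(1, n + 1) if a % 5 in (1, 4)]
blue = [a for a in range(1, n + 1) if a % 5 in (2, 3)]
assert not has_schur_triple(red) and not has_schur_triple(blue)
print(len(red) + len(blue))   # 4n/5 = 160 integers, 2-coloured with no
                              # monochromatic Schur triple
\end{verbatim}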
\subsection*{Randomly perturbed sets of integers} The study of randomly perturbed structures appeared with the notion of \emph{smoothed analysis of algorithms}, introduced by Spielman and Teng \cite{spielman2004smoothed}, where one is interested in interpolating between worst-case and average-case analysis of algorithms by randomly perturbing an input. At a similar time, Bohman, Frieze and Martin~\cite{bfm1} initiated the study of combinatorial properties in random perturbed graphs by looking at how many random edges need to be added to an arbitrary dense graph to make it Hamiltonian. As with smoothed analysis, their work bridges the gap between probabilistic and extremal points of view. This inspired a wealth of results exploring properties of randomly perturbed graphs and hypergraphs. Most pertinent to this work is the study of Ramsey properties. In an analogous fashion to a set of integers being Schur, for $s\in \mb{N}$, we say a graph $G$ is $s$-(edge-)Ramsey if every 2-colouring of the edges of $G$ results in a monochromatic copy of $K_s$. A series of results~\cite{das2020ramsey,kst,power} have determined the number of random edges one needs to add to an arbitrary dense graph to ensure that the resulting graph is $s$-Ramsey for all $s\geq 3$. Several variants of this edge-Ramsey problem were also explored in~\cite{das2020ramsey,kst,power} and randomly perturbed graphs have also been studied with respect to vertex-Ramsey~\cite{das2020vertex} and anti-Ramsey~\cite{aigner2019large,aigner2020small} properties. Aigner-Horev and Person initiated the study of randomly perturbed structures in the setting of additive combinatorics. From our discussion above, if we have a set $A \subseteq [n]$ of integers with $\card{A}\leq \tfrac{4n}{5}$, one can ask how much we need to randomly perturb $A$ in order to obtain a set that is Schur. For dense sets of integers $A$, Aigner-Horev and Person~\cite{aigner2019monochromatic} showed the following. \begin{thm} \label{thm:posdense} Let $\varepsilon > 0$. If $A \subseteq [n]$, $\card{A} \ge \varepsilon n$, and $p = \omega(n^{-2/3})$, then whp $A \cup [n]_p$ is Schur. \end{thm} This can be interpreted as saying that any dense set is close to being Schur, since a small random perturbation is enough to force the set to be Schur. From a probabilistic point of view, one can also see that, in comparison to Theorem~\ref{thm:random}, one can save a great deal of randomness by starting with an arbitrary set of positive density. Note that Theorem~\ref{thm:posdense} is easily seen to be tight for $\card{A} \le \ceil*{\frac{n}{2}}$: taking $A$ to be a sum-free set, we can colour $A$ red and $[n]_p \setminus A$ blue. Then the only possible monochromatic Schur triples can come from $[n]_p$, and the threshold for their appearance, as previously mentioned, is $p = n^{-2/3}$. \vspace{1mm} Our first result precisely describes the amount of randomness needed when the size of the deterministic set grows beyond $\tfrac{n}{2}$. \begin{thm} \label{thm:main_dense} Let $n$ and $t=t(n)$ be positive integers such that $\ceil*{\frac{n}{2}} + t\leq \ceil*{\tfrac{4n}{5}}$, and define $p(n,t)= \min \left\{n^{-2/3}, t^{-1} \right\}$. Then the following statements hold. \begin{enumerate} \item[(0)] There exists a set $A \subseteq [n]$ with $\card{A}=\ceil*{\frac{n}{2}} + t$ such that for $p=o(p(n,t))$, whp $A\cup[n]_p$ is not Schur. \item[(1)] For all $A \subseteq [n]$ with $\card{A}=\ceil*{\frac{n}{2}} + t$ and $p=\omega(p(n,t))$, whp $A\cup[n]_p$ is Schur. 
\end{enumerate} \end{thm}
In particular, if $|A|\geq \tfrac{n}{2}+\Omega(n)$ then adding a super-constant number of random integers already suffices to force the resulting set to be Schur. Along with Theorems~\ref{thm:hu} and~\ref{thm:posdense}, this completes our understanding of the behaviour of perturbed sets of integers when the starting set is dense, continuing a recent trend in the perturbed setting of exploring the full range of dense starting structures and describing in detail the transition in the random perturbation required (see e.g.~\cite{bottcher2020triangles,bottcher2020cycles,hmt}).
\paragraph{Stability.} Theorem~\ref{thm:main_dense} \emph{(0)} shows that there are sets $A$ with $|A|=\ceil*{\frac{n}{2}} + t$ for which (asymptotically) at least $p(n,t)n$ random integers must be added to make the set Schur. Our next result demonstrates that any such set $A$ must have a certain structure. Here we are interested in the case when $t=o(n)$, since when $t=\Omega(n)$, we have $p(n,t)=\Theta(n^{-1})$, and so the question of whether $p(n,t)n$ random integers are necessary simply reduces to determining if $A$ is already Schur or not. When $t=o(n)$, however, we can expect a significant saving for non-extremal examples. In this case, $A$ has size close to $\tfrac{n}{2}$, and natural candidates for sets requiring many random integers before becoming Schur are those that are close in structure to the extremal sum-free sets. As discussed at the beginning of the introduction, there are two examples of sum-free sets of size $\ceil*{\frac{n}{2}}$, namely the set of odd integers or the set of large integers $\floor*{\frac{n}{2}}+1,\ldots,n$. Moreover, it is well known that there is stability for sum-free sets, in the sense that any large sum-free set must be close in structure to one of these two constructions (see Theorem~\ref{thm:sumstab}). Our next result shows that any set $A$ needing many random integers to be added in order to become Schur must also be close in structure to one of these two examples.
\begin{thm} \label{thm:stability} Let $n$ and $t=t(n)$ be positive integers. If $A \subseteq [n]$ with $\card{A} = \ceil*{\tfrac{n}{2}} + t$ and $q=\omega(n^{-1})$ is such that whp $A\cup[n]_q$ is not Schur, then either $\card*{ \left[ \ceil*{\tfrac{n}{2}} , n \right] \setminus A }=O(q^{-1})$ or $A$ contains $O(q^{-2}n^{-1})$ even numbers. \end{thm}
Theorem~\ref{thm:stability} shows that even if we only require $\omega(1)$ random integers to be added to $A$ to give a set that is Schur, we can remove $o(n)$ integers from $A$ to obtain a set contained in one of the two extremal sum-free constructions. Moreover, the dependence of the distance to the sum-free construction on the number of random integers needed is different in the two cases, showing that the set of large numbers is in some sense more sum-free than the set of odd integers. Indeed, note that due to the size constraint, the set $A$ must contain at least $t$ even integers. Thus, if $q=\omega((nt)^{-1/2})$, the second case of Theorem~\ref{thm:stability} cannot occur, and so if $A$ is such that whp $A\cup[n]_q$ is not Schur, then $A$ must be close to the set of large integers. For $t=\omega(n^{1/3})$ we have $(nt)^{-1/2} =o\left( \min \left\{ n^{-2/3}, t^{-1} \right\}\right)$, and so this shows that we can make significant savings in the amount of randomness required by only imposing the condition that $A$ is far from the set of large numbers.
\paragraph{Sparse base sets.} One can also explore the behaviour of the perturbed model when the base set $A$ is sparse. This direction has recently been explored in the graph setting~\cite{hahn3513random} and aims to elucidate the full picture of how the randomly perturbed model transitions between the probabilistic and the extremal thresholds. In our setting, the result of Graham, R\"odl and Ruci\'nski (Theorem~\ref{thm:random}) determines the threshold if we have no deterministic elements, while Theorem~\ref{thm:posdense} gives that we can save some randomness when starting from a base set of size $\Omega(n)$. For base sets $A$ of size $o(n)$, we begin by noting that if $|A|=o(n^{1/2})$, we gain nothing compared to starting with an empty base set. Indeed, for any $s=o(n^{1/2})$ we may take $A$ to be $[n]_{q}$, where $q=2sn^{-1}$. Then we have that whp $|A|\geq s$ and, for any $p$, one has $A\cup [n]_p\sim [n]_{p+q-pq}$. By Theorem~\ref{thm:random}, one needs $p=\omega(n^{-1/2})$ in order to ensure the resulting set is Schur whp. Here, we take a closer look at how many random integers are needed when the size of $A$ transitions from $n^{1/2}$ to $\varepsilon n$. The following theorem provides non-trivial lower and upper bounds on the perturbed threshold in this sparse case.
\begin{thm} \label{thm:main_sparse} Let $n$ and $s=s(n)$ be positive integers with $s=\Omega\left( n^{1/2} \right)$ and $s\le n/2$. Then the following two statements hold.
\begin{enumerate}
\item[(0)] There exists a set $A\subseteq [n]$ with $|A|= s$ such that for $p=o\left((ns)^{-1/3}\right)$, whp $A\cup [n]_p$ is not Schur.
\item[(1)] For every $A\subseteq [n]$ with $\card{A}=s$ and $p=\omega\left((n^{13}s)^{-1/27}\log n\right)$, whp $A\cup [n]_p$ is Schur.
\end{enumerate}
\end{thm}
\subsection*{Organisation and remarks}
We conclude the introduction with some comments on our proofs and the organisation of the rest of the paper. We will treat the dense base sets of Theorem~\ref{thm:main_dense} and the sparse base sets of Theorem~\ref{thm:main_sparse} separately, as the two settings seem to require very different approaches. Before turning to our proofs, we will outline in Section~\ref{sec:TandT} our notation and collect several number theoretic and probabilistic tools which will be of use to us. We will then address the dense setting in Section \ref{sec:dense}, where we start by analysing an explicit colouring to prove the $0$-statement of Theorem~\ref{thm:main_dense} in Section~\ref{sec:0dense}. In Section~\ref{sec:1dense} we prove the $1$-statement by adopting the approach of Aigner-Horev and Person~\cite{aigner2019monochromatic}, finding small (11-integer) configurations in our randomly perturbed sets that are themselves Schur. In order to find these configurations, we will use some powerful number theoretic machinery, such as Green's arithmetic removal lemma~\cite{green2005szemeredi}, and our proof will split into cases depending on the structure of our base set $A$. We will also deduce Theorem~\ref{thm:stability} from the proof of the $1$-statement. In Section~\ref{sec:0sparse} we prove the $0$-statement, where, in contrast to the dense setting, the proof is non-constructive; we do not give an explicit colouring.
We instead build upon ideas of Graham, R\"odl and Ruci\'nski~\cite{graham1996schur} from their proof of Theorem~\ref{thm:random}, showing that $A\cup [n]_p$ being Schur implies the existence of certain substructures that are whp not present. For the 1-statement, we appeal to the hypergraph container method developed by Saxton and Thomason \cite{saxton2015hypergraph} and independently Balogh, Morris and Samotij~\cite{BMS15}. In order to use this method in the randomly perturbed setting, we have to apply it to a hypergraph that encodes certain colouring configurations that will force one to create a monochromatic Schur triple when colouring the base set $A$. We believe the use of this hypergraph is one of the most interesting novelties of our proof and we introduce this, as well as the container method in general, in Section~\ref{sec:containers}. We then use the containers to establish the $1$-statement in Section~\ref{sec:1sparse}. Finally, in Section~\ref{sec:conc}, we outline some directions for future research. In particular, we discuss the issue of closing the gap between the bounds in Theorem~\ref{thm:main_sparse}, which we find to be a very intriguing open problem. \section{Terminology and tools} \label{sec:TandT} Throughout this paper, we will rely on a series of results from number theory and combinatorics. For the convenience of the reader we state the results in this section. First though, we fix some notation and terminology. \subsection{Notation} \label{sec:Notation} As discussed in the introduction, a \emph{Schur triple} in a set $A\subseteq \mathbb{N}$ is a triple $(x,y,z)\in A^3$ such that $x+y=z$. We say that a triple is \emph{degenerate} if $x=y$ and \emph{non-degenerate} otherwise. We say a set $S\subseteq \mathbb{N}$ \emph{hosts} a Schur triple $(x,y,z)$ if $S=\{x\}\cup \{y\} \cup \{z\}$. Note that if a set $S$ hosts a degenerate Schur triple then $|S|=2$ whilst if $S$ hosts a non-degenerate Schur triple then $|S|=3$. Given a set $A\subseteq \mathbb{N}$, we will sometimes work with the \emph{Schur hypergraph} $\mc H_{\textrm{Schur}}(A)$ generated by $A$, whose vertex set is $A$ and whose edge set consists of all sets contained in $A$ that host Schur triples. We say a set $A\subseteq \mathbb{N}$ is \emph{sum-free} if it contains no sets that host Schur triples. We say a set $A\subseteq \mathbb{N}$ is \emph{Schur} if there is no way to partition $A$ into two sum-free sets. In other words, $A$ is Schur if any red/blue-colouring of $A$ results in a monochromatic Schur triple. If $A$ is \emph{not} Schur, we call any red/blue-colouring of $A$ in which both colour classes form sum-free sets a \emph{Schur colouring}. We will work with both tuples (in $[n]^\ell$ for some $\ell\in \mathbb{N}$) and subsets of $[n]$. We introduce the following notation to ease the exposition. We say a tuple $T\in [n]^\ell$ \emph{contains} a set $S\subseteq [n]$ if all the elements in $S$ appear as entries of $T$. Similarly, we say a set $S\subseteq [n]$ \emph{contains} a tuple $T\in [n]^\ell$ if all the entries of $T$ appear in the set $S$. We also define the \emph{intersection} of two tuples $T,T'$, denoted $T\cap T'$, to be the \emph{set} $S\subseteq[n]$ of elements that feature in \emph{both} $T$ and $T'$. Hence $T\cap T'$ is the largest set contained in both $T$ and $T'$. Given $p=p(n)$, the random set $[n]_p$ is the set obtained by keeping each element of $[n]=\{1,\ldots,n\}$ independently with probability $p$. 
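For very small instances, the notions above can be checked directly by exhaustive search. The following Python sketch (illustrative only; the function names are ours and play no role in the proofs) implements sum-freeness, the Schur property via a brute-force search over all red/blue-colourings, and the random set $[n]_p$.
\begin{verbatim}
import random
from itertools import product

def is_sum_free(S):
    """No x, y, z in S (x = y allowed) with x + y = z."""
    S = set(S)
    return not any(x + y in S for x in S for y in S)

def is_schur(A):
    """True if every red/blue-colouring of A has a monochromatic Schur triple,
    i.e. A admits no Schur colouring."""
    A = sorted(set(A))
    for colours in product((0, 1), repeat=len(A)):
        red = [a for a, c in zip(A, colours) if c == 0]
        blue = [a for a, c in zip(A, colours) if c == 1]
        if is_sum_free(red) and is_sum_free(blue):
            return False  # a Schur colouring exists
    return True

def random_subset(n, p):
    """The random set [n]_p."""
    return {a for a in range(1, n + 1) if random.random() < p}

print(is_schur(range(1, 5)), is_schur(range(1, 6)))  # False, True: [4] is not Schur, [5] is
A = set(range(1, 13, 2))                             # the odd numbers in [12] (sum-free)
print(is_schur(A | random_subset(12, 0.5)))          # depends on the random perturbation
\end{verbatim}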
\subsection{Number theoretic tools} \label{sec:NT} We start with the following arithmetic removal lemma of Green~\cite{green2005szemeredi}. \begin{thm} \label{thm:removal} For every $\varepsilon > 0$ there is a $\delta > 0$ such that if $A \subseteq [n]$ is a set containing at most $\delta n^2$ sets that host Schur triples, then there is a sum-free $A' \subseteq A$ with $\card{A \setminus A'} \le \varepsilon n$. \end{thm} The next powerful result we will use is a stability statement for large sum-free sets due to Deshouillers, Freiman and S\'os \cite{deshouillers1999structure}. \begin{thm} \label{thm:sumstab} If $A \subseteq [n]$ is sum-free and $\card{A} > \frac25 n+1$, then either \begin{itemize} \item[(i)] $A$ only consists of odd numbers, or \item[(ii)] $\min A > \card{A}$. \end{itemize} \end{thm} We will also often need to find many arithmetic progressions, and the following result of Varnavides \cite{varnavides1959certain} will be repeatedly applied. Here, and throughout, a \emph{4-AP} in a set $A$ is a sequence $a,a+d,a+2d,a+3d\in A$ and $d\in \mathbb{N}$ is said to be the \emph{difference} of the AP. \begin{thm} \label{thm:APsupersat} For every $\delta > 0$ there is a $\xi = \xi(\delta) > 0$ such that if $A \subseteq [n]$ is a set with $\card{A} \ge \delta n$, then $A$ contains at least $\xi n^2$ $4$-APs. In particular, there are at least $\xi n$ distinct differences of $4$-APs in $A$. \end{thm} \subsection{Probabilistic tools} \label{sec:Prob} We will use concentration inequalities to guarantee the existence of certain configurations in our random set of integers. First, we will use the well-known theorem of Chebyshev, which bounds the deviation from the expectation in terms of the variance; see, for example,~\cite[Chapter 4]{alonspencer}. \begin{thm}[Chebyshev's inequality] \label{thm:chebyshev} Let $X$ be a random variable and let $t\ge 0$. Then \[\Pr[\card{X- \mb{E}[X]}\ge t]\le \frac{\Var[X]}{t^2}.\] \end{thm} Our second inequality bounds lower tails and can be used to give exponential concentration. \begin{thm}[Janson's inequality \cite{janson1990poisson}] Let $\Omega$ be a finite universal set and let $S_1,\ldots,S_m\subseteq \Omega$ for some $m\in \mathbb{N}$ be a collection of subsets of $\Omega$ (with repetitions allowed). Consider a random subset of $\Omega$ with each element being chosen independently with some probability $p$ and, for $i\in [m]$, let $X_i$ be the indicator random variable for the event that all elements of $S_i$ are chosen in the random set. Let $X=\sum_ {i\in[m]}X_i$ count the number of sets $S_i$ that appear in the random set and, writing $i\sim j$ if $i\neq j$ and $S_i\cap S_j\neq \emptyset$, let \[\mu:=\mb{E}[X]=\sum_{i\in [m]}\mb{E}[X_i] \qquad \mbox{ and } \qquad \Delta:=\sum_{\substack{i\sim j}}\mb{E}[X_iX_j].\] Then for $0\leq t\leq \mu$ we have \[\Pr[X\leq \mu -t]\leq e^{-\frac{t^2}{2(\mu+\Delta)}}.\] \label{thm:Janson} \end{thm} As a key example for how we use Janson's inequality, the following lemma shows that for any large enough collection of Schur triples in $[n]$, our random set whp contains a member of the collection. \begin{lem} \label{lem:JansonSTs} For any $\xi>0$ and $C>0$, there exists a $\zeta>0$ such that the following holds for any $p=p(n)\le Cn^{-1/2}$. 
If $\mathcal{F}\subset [n]^3$ is a collection of non-degenerate Schur triples such that $|\mathcal{F}|\ge \xi n^2$, then we have that \[\Pr[\mathcal{F}\cap [n]_p^3= \emptyset]\le e^{-\zeta n^2p^3}.\] \end{lem} \begin{proof} First, we take a largest subset $\mathcal{F}'\subseteq \mathcal{F}$ such that all the sets that host Schur triples in $\mathcal{F}'$ are \emph{distinct} sets. That is, if $(x,y,z)$ and $(y,x,z)$ both lie in $\mathcal{F}$, we only take one of these triples in $\mathcal{F}'$. From this point on, we will only consider triples in $\mathcal{F}'$ in order to simplify calculations, noting that we have that $\card{\mathcal{F}'}\geq \frac{\xi}{2}n^2$. For each Schur triple $S\in \mathcal{F}'$, let $X_S$ be the indicator variable for the event that all three elements of $S$ appear in $[n]_p$. Moreover, let $X=\sum_{S\in \mathcal{F}'}X_S$. For each $S\in \mathcal{F}$, we have that $\mb{E}[X_S]=\Pr\left[S\in [n]_p^3\right]=p^3$. Hence, by linearity of expectation, we have that \[\mu:=\mb{E}[X]=\sum_{S\in \mathcal{F}'}\mb{E}[X_S]=\sum_{S\in \mathcal{F}'}p^3\ge \tfrac{\xi}{2} n^2p^3.\] Now for Schur triples $S$ and $S'$, we write $S\sim S'$ if $S\cap S'\ne \emptyset$, recalling our definition of the intersection of tuples from Section~\ref{sec:Notation}. We will upper bound \[\Delta:=\sum_{\substack{S\ne S'\in \mathcal{F}'\\S\sim S'}}\mb{E}[X_SX_{S'}] \qquad \mbox{by the larger quantity }\qquad \Delta^*:=\sum_{\substack{S\ne S'\in \mathcal{S}\\S\sim S'}}\mb{E}[X_SX_{S'}],\] where $\mathcal{S}$ denotes the set of \emph{all} non-degenerate Schur triples in $[n]^3$. We then have that \[\Delta\le \Delta^*\le 3n^2\cdot 3\cdot 2\cdot p^4+3n^2\cdot 3\cdot 3n\cdot p^5\le 27(n^2p^4+n^3p^5),\] where the first summand comes from considering pairs of Schur triples $S\ne S'$ that intersect in 2 elements and the second summand considers pairs that intersect in 1 element. Indeed, in both cases there are at most $3n^2$ choices of $S$ (each pair of elements is contained in at most 3 Schur triples) and then 3 choices of the elements in $S\cap S'$. In the case that $|S\cap S'|=2$, there are then at most 2 choices for $S'$ given $S\cap S'$, such that $S\ne S'$. In the case that $|S\cap S'|=1$, there are at most $3n$ choices of Schur triple $S'$ containing the already chosen element of $S\cap S'$. Finally we have that $\Delta\le \Delta^*\le 54C^2n^2p^3$ due to our upper bound on $p$, and hence by Theorem~\ref{thm:Janson}, we have that \[\Pr[X=0]\le \exp \left(-\frac{\mu^2}{2(\mu+\Delta)}\right)\le \exp\left(- \frac{1}{2}\min\left\{\frac{\mu}{2}, \frac{\mu^2}{2\Delta}\right\} \right)\leq \exp(-\zeta n^2p^3), \] using our lower bound on $\mu$, our upper bound on $\Delta$, and the fact that $\zeta>0$ is chosen small enough with respect to $\xi$ and $C$. \end{proof} We will also use Janson's inequality for other larger configurations. We make the following definition. \begin{dfn} \label{def:wicket} We say a $9$-tuple \[W=(x_i,y_i,z_i:i=1,2,3)\in [n]^9\] of nine \emph{distinct} elements of $[n]$ is a \emph{wicket}\footnote{The terminology here is motivated by viewing these configurations in the Schur hypergraph $\mc H_{\textrm{Schur}}([n])$, as well as the sporting interests of one of the authors.} if $x_i+y_i=z_i$ for $i=1,2,3$ and $x_1+x_2=x_3$. \end{dfn} Note that as we define a wicket to have nine distinct elements, all the four Schur triples in the definition will be non-degenerate. 
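For concreteness, the wicket condition is easy to state in code. The following tiny Python check (a hypothetical helper of ours, not taken from the proofs) verifies, for example, that $(1,4,5,2,6,8,3,7,10)$ is a wicket in $[10]$.
\begin{verbatim}
def is_wicket(t):
    """Check the wicket condition for t = (x1, y1, z1, x2, y2, z2, x3, y3, z3)."""
    x1, y1, z1, x2, y2, z2, x3, y3, z3 = t
    return (len(set(t)) == 9 and x1 + y1 == z1 and x2 + y2 == z2
            and x3 + y3 == z3 and x1 + x2 == x3)

print(is_wicket((1, 4, 5, 2, 6, 8, 3, 7, 10)))  # True
\end{verbatim}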
As with Lemma~\ref{lem:JansonSTs}, we will be interested in showing that if we have a large collection of wickets, a random set will whp contain one of them. \begin{lem} \label{lem:JansonWickets} For any $\xi>0$ and $C>0$, there exists a $\zeta>0$ such that the following holds for any $p=p(n)\le Cn^{-1/2}$. If $\mathcal{W}\subseteq [n]^9$ is a collection of wickets such that $|\mathcal{W}|\ge \xi n^5$, then we have that \[\Pr[\mathcal{W}\cap [n]_p^9= \emptyset]\le e^{-\zeta n^5p^9}.\] \end{lem} \begin{proof} As with the proof of Lemma \ref{lem:JansonSTs}, for each wicket $W\in \mathcal{W}$, we let $X_W$ be the indicator random variable for the event that $W\subseteq [n]_p$. Then, letting $X:=\sum_{W\in \mathcal{W}}X_W$, we have that \[\mu:=\mathbb{E}[X]=|\mathcal{W}|p^9\geq \xi n^5p^9.\] In order to bound $\Delta=\sum\left\{\mathbb{E}[X_WX_{W'}]:W\ne W'\in \mathcal{W}, W\cap W'\ne \emptyset\right\}$, we consider the number of wickets that can contain a fixed set of a certain size. \begin{clm} \label{clm:wicket} For $\ell=0,1,2,3,4$ and any nonempty set $U\subseteq [n]$ of size at least $8-2\ell$, there are at most $(9!)^{\ell+1} n^{\ell}$ wickets $W\subseteq [n]^9$ containing $U$. \end{clm} Before proving the claim, let us see how it implies the lemma. We upper bound $\Delta$ by \[ |\mathcal{W}|\cdot 2^9\cdot p^9 \cdot \Big((9!)^5n^4 p^8+(9!)^4n^3 p^7+(9!)^4n^3 p^6+(9!)^3n^2 p^5+(9!)^3n^2 p^4+(9!)^2n p^3+(9!)^2n p^2 +9! p +9!\Big). \] Here we first choose a wicket $W\in \mathcal{W}$ ($|\mathcal{W}|$ choices) and then choose some subset of entries of $W$ which will be the intersection $W\cap W'$ (at most $2^9$ choices). The $p^9$ then comes from all the elements of $W$ appearing in $[n]_p$. In the parentheses we then consider the number of choices of $W'$ that intersect $W$ in our already chosen elements of $W\cap W'$. The $i^{th}$ summand in the parentheses corresponds to an intersection of size exactly $i$ with $W$ and hence we have a factor of $p^{9-i}$ to account for the new elements. In each case we use Claim~\ref{clm:wicket} with $\ell = \lceil\frac{8-i}{2}\rceil$ to upper bound the number of wickets $W'$ that can intersect $W$ in our fixed set of size $i$. Note that we require these wickets to intersect in exactly $i$ elements but we can still use the count given by the claim as we are only concerned with an upper bound here. Simplifying, we have that \[\Delta \le 2^{104}|\mathcal{W}|p^9(n^4p^8+n^3p^6+n^2p^4+np^2+1) \le 2^{108}C^4|\mathcal{W}|p^9, \] using our upper bound on $p$ in the final inequality. Applying Theorem~\ref{thm:Janson}, we have that \[\Pr[X=0]\le \exp \left(-\frac{\mu^2}{2(\mu+\Delta)}\right)\le \exp\left(- \frac{1}{2}\min\left\{\frac{\mu}{2}, \frac{\mu^2}{2\Delta}\right\} \right)\le \exp(-\zeta n^5p^9), \] choosing $\zeta$ sufficiently small. It remains to prove the claim. \begin{proof} We address the cases $\ell=0,1,2,3,4$ in that order. For $\ell=0$, we are interested in a set $U$ of size at least $8$. After a choice of (distinct) label from $\{x_i,y_i,z_i:i=1,2,3\}$ for each element of $U$ (at most $9!$ choices), any wicket $W$ which is labelled to match the labels of $U$ is already fully determined, as the only possible as yet unchosen element of $W$ is contained in a Schur triple with two already labelled elements in $U$. For $\ell=1$, consider a set $U$ of size at least 6 and choose distinct labels for the elements of $U$ from $\{x_i,y_i,z_i:i=1,2,3\}$ (at most $9!$ choices of labels). 
Now consider the number of wickets $W$ which contain $U$ and whose labels coincide with how we have labelled $U$. Note that there are two indices $i_1,i_2\in [3]$, such that for $i=i_1,i_2$, at least two of the labels in the set $\{x_i,y_i,z_i\}$ have been assigned to $U$ already. As we must have that $x_i+y_i=z_i$ for $i=i_1,i_2$ and any wicket whose labels are compatible with the labels of $U$, we have that the labels $\{x_i,y_i,z_i:i=i_1,i_2\}$ of any such wicket are already determined. Hence, letting $j=[3]\setminus\{i_1,i_2\}$, we have that $x_j$ is also determined. A choice of $y_j$ (at most $n$ choices) then completely determines $W$. For $\ell=2$, consider a set $U$ of size at least $4$ and choose labels for the elements of $U$ from $\{x_i,y_i,z_i:i=1,2,3\}$ (at most $9!$ choices). Note that there is some $i^*\in [3]$ such that at least two of the labels $\{x_{i^*},y_{i^*},z_{i^*}\}$ have been placed on elements of $U$. Let $j^*\in [3]\setminus \{i^*\}$ be such that at least one of the labels of $\{x_{j^*},y_{j^*},z_{j^*}\}$ have been placed on $U$ (note that such a $j^*$ is guaranteed to exist). If more than two of the labels of $\{x_{j^*},y_{j^*},z_{j^*}\}$ appear on $U$, then any wicket whose labelling is compatible with $U$ must contain the 6 elements $\{x_k,y_k,z_k:k=i^*,j^*\}$ and so we can appeal to the upper bound for the $\ell=1$ case. Similarly, if only one of the labels $\{x_{j^*},y_{j^*},z_{j^*}\}$ appears on $U$, then a choice of a further element in $[n]$ (at most $n$ choices) labelled with another label from $\{x_{j^*},y_{j^*},z_{j^*}\}$ (the first free label according to the predetermined ordering of the tuple) gives a set of size 6 that any wicket compatible with the already labelled elements of $[n]$ must contain. Again, we can appeal to induction in this case and conclude that there are at most $9!\cdot (9!)^2n\cdot n\le ( 9!)^3n^2$ wickets containing $U$, as required. The case $\ell=3$ is similar. We consider a set $U\subseteq [n]$ of size at least $2$. If $|U|\ge 4$ then we are done by the $\ell=2$ case. Hence we can assume that $|U|=2$ or $|U|=3$. We choose labels for the elements of $U$ from $\{x_i,y_i,z_i:i=1,2,3\}$ (fewer than $9!$ choices) and consider first the case that there is some $i_0\in [3]$ such that exactly one label of $\{x_{i_0},y_{i_0},z_{i_0}\}$ has been assigned. By making a choice of an element to receive one of the other labels in $\{x_{i_0},y_{i_0},z_{i_0}\}$ (at most $n$ choices) we fix all the elements in $\{x_{i_0},y_{i_0},z_{i_0}\}$, and we therefore have at least $4$ elements already labelled in $[n]$. Counting wickets containing these $4$ elements reduces to the $\ell=2$ case and we can use induction. Similarly, if there is no such $i_0$, then there must be some $i_0'$ such that at least two of the labels in $\{x_{i'_0},y_{i'_0},z_{i'_0}\}$ have been used in $U$ and so all the elements in $\{x_{i'_0},y_{i'_0},z_{i'_0}\}$ are determined for any wicket containing $U$. Choosing any further element (at most $n$ choices) and labelling it (with the first free label of the tuple) gives 4 fixed elements and we reduce again to the $\ell=2$ case. Finally when $\ell=4$, we consider sets $U$ with $|U|\geq 1$ (as $U$ is nonempty by assumption). If $|U|\ge 2$, we are already done so we can assume $ |U|=1$. Choose a label for the element of $U$ (at most 9 choices) and choose another element of $[n]$ to label with the first free label. 
Counting wickets containing these two elements reduces to the $\ell=3$ case and we have at most $9\cdot n \cdot (9!)^4 n^3\le (9!)^5 n^4$ wickets containing $U$ as required. \end{proof} \end{proof} \section{Dense base sets} \label{sec:dense} In this section we will prove Theorems~\ref{thm:main_dense} and~\ref{thm:stability}. Let us first look at the $0$-statement of Theorem~\ref{thm:main_dense}: we will prove the lower bound by analysing a particular colouring of an explicit dense set that has been randomly perturbed. \subsection{Proof of the $0$-statement of Theorem~\ref{thm:main_dense}} \label{sec:0dense} When $t=\Omega(n)$, and $\card{A} = \ceil*{\frac{n}{2}} + t\leq \ceil*{\tfrac{4n}{5}}$, we simply take $A$ to be any set which is not Schur (for example, the construction that removes the integers divisible by 5 discussed in the introduction). Then for $p=o(n^{-1})$, an application of Markov's inequality gives that whp $[n]_p$ is empty and $A\cup [n]_p$ remains 2-colourable without monochromatic Schur triples. \begin{figure}[h!] \centering \includegraphics[scale= 0.8]{lower_bound_with_letters.pdf} \caption{Visualisation of the lower bound construction} \label{fig:lower_bound} \end{figure} For $1\leq t=o(n)$, let $A = [\ceil*{\frac{n+1}{2}} - t, n]$ and let $p = o \left( \min \{ n^{-2/3}, t^{-1} \} \right)$. Write $B = [\ceil*{\frac{n+1}{2}} - t, n-2t]$, $C = [n-2t+1, n]$, and $R = [n]_p \setminus A = [\floor{\frac{n}{2}} - t ]_p$, noting that $A \cup [n]_p = B \cup C \cup R$. We colour $B$ blue and $C \cup R$ red, as pictured in Figure~\ref{fig:lower_bound}. Note that $B$ is sum-free, and therefore we have no monochromatic Schur triples in blue. We also have that $C$ is sum-free, and since $\min C > 2 \max R$, the only possible monochromatic red Schur triples are of the form $x + y = z$ with $x,y,z \in R$ or with $x \in R$ and $y, z \in C$. The former amounts to the random set containing a Schur triple, which we know whp does not happen for $p = o(n^{-2/3})$. For the latter, we require the element $x$ to belong to the difference set $C - C$. Since $C$ is an interval of length $2t$, there are $2t - 1$ possible differences. As $p = o(t^{-1})$, whp none of these elements $x$ appear in $R$. Thus, this colouring has no monochromatic Schur triples whp, thereby demonstrating that $A \cup [n]_p$ is whp not Schur.\qed \subsection{Proof of the $1$-statement of Theorem~\ref{thm:main_dense}} \label{sec:1dense} We use the following variations of a fact used by Aigner-Horev and Person~\cite{aigner2019monochromatic}, which observes that certain sets are Schur. Our proof of the $1$-statement of Theorem~\ref{thm:main_dense} will then reduce to proving the existence of one of these sets in the randomly perturbed set. \begin{prop} \label{prop:11set} Let $a,x,d\in [n]$. Then the following two sets are Schur: \begin{itemize} \item[(i)] $L_1(a,x,d) = \{ d, x, x+d, a, a+d, a+2d, a+3d, a+x, a+x+d, a+x+2d, a+x+3d \}$, and \item[(ii)] $L_2(a,x,d) = \{ d, x-d, x, a, a+d, a+2d, a+3d, x-a-3d, x-a-2d, x-a-d, x-a \}$. \end{itemize} \end{prop} The proof of this proposition follows from a simple case analysis and we omit the details. One can also derive it from the proof of Lemma 2 in \cite{aigner2019monochromatic}. Indeed, Aigner-Horev and Person define a configuration similar\footnote{For any $a,x,d$ we have that $L_1(a,x,d) \supseteq \mc L(a+d,x+a+2d,d)$ where $\mc L$ is as defined in \cite{aigner2019monochromatic}.} to our $L_1(a,x,d)$ and prove that such a configuration is Schur. 
The proof can be followed directly to prove that $L_1(a,x,d)$ is Schur for all $a,x,d\in [n]$. Moreover, the proof relies solely on the Schur triples depicted in Figure~\ref{fig:compare_struct} and, as shown in the figure, there is an isomorphism between these Schur triples in $L_1(a,x,d)$ and the Schur triples in $L_2(a,x,d)$, thus verifying that $L_2(a,x,d)$ is also Schur. \begin{figure} \centering \begin{minipage}{0.43\linewidth} \centering \includegraphics[scale=0.4]{configuration_with_edges.pdf} \label{fig:L1} \subcaption{A visualisation of $L_1(a,x,d)$} \end{minipage} \vspace{0.01\linewidth} \begin{minipage}{0.43\linewidth} \centering \includegraphics[scale=0.4]{configuration_with_edges_shift.pdf} \label{fig:L2} \subcaption{A visualisation of $L_2(a,x,d)$} \end{minipage} \caption{A comparison of the sum structure of $L_1(a,x,d)$ and $L_2(a,x,d)$.} \label{fig:compare_struct} \end{figure} Given an element $x \in A$, we define $S^+_A(x) = \{ y \in A: x + y \in A \}$, $S^-_A(x) = \{ y \in A : x - y \in A \}$, and $S_A(x) = S^+_A(x) \cup S^-_A(x)$. The following result shows that it will suffice to find some structure in these links of Schur triples in the set $A$. \begin{lem} \label{lem:chebyshev} Suppose $\lambda=\lambda(n),\kappa=\kappa(n)$ are integers with $\kappa\geq 2$ and we have a set $X \subseteq A \subseteq [n]$ of size $\lambda$, and that, for each $x \in X$, there is a set $D_x$ of size $\kappa$ such that for every $d \in D_x$, either $S^+_A(x)$ or $S^-_A(x)$ contains a $4$-AP with common difference $d$. If $p = \omega \left( \max \{ ( \lambda \kappa)^{-1/2}, \kappa^{-1} \} \right)$, then $A \cup [n]_p$ is Schur whp. \end{lem} Before proving this lemma, let us see how it implies the $1$-statement of Theorem~\ref{thm:main_dense}. \begin{proof}[Proof of the $1$-statement of Theorem~\ref{thm:main_dense}] First observe that if $t = O(n^{2/3})$, then $p(n,t) = n^{-2/3}$, and it follows from Theorem~\ref{thm:posdense} that $A \cup [n]_p$ is whp Schur. Hence, we may assume that $t = \omega (n^{2/3})$, and that $p(n,t) = t^{-1}$. We split the proof into two cases, depending on the number of Schur triples in $A$. Set $\varepsilon = \tfrac{1}{28}$ and let $\delta = \delta_{\ref{thm:removal}}(\tfrac{1}{28}) > 0$ be the resulting value from Theorem~\ref{thm:removal}. Recall that $|A|= \ceil{\frac{n}{2}}+t$, and note that by monotonicity we may assume $t \le \varepsilon n$. Indeed, if this is not the case then we can shrink $A$ to a subset of size $\ceil{\frac{n}{2}}+\varepsilon n$ and work with this base set instead. \paragraph{Case I: there are at least $\delta n^2$ Schur triples in $A$.} Let $X = \{ x \in A : \card{S^+_A(x)} \ge \tfrac12 \delta n \}$. By counting Schur triples, we have \[ \delta n^2 \le \sum_{x \in A} \card{S^+_A(x)} \le n \cdot \tfrac12 \delta n + \card{X} \cdot n, \] and so $\card{X} \ge \tfrac12 \delta n$. Now, by Theorem~\ref{thm:APsupersat}, there is some $\xi > 0$ such that, for each $x \in X$, there is a set $D_x$ of at least $\xi n$ values $d$ such that $S^+_A(x)$ contains a $4$-AP with common difference $d$. We may therefore apply Lemma~\ref{lem:chebyshev} with $\lambda = \tfrac12 \delta n$ and $\kappa = \xi n$ to deduce that, when $p = \omega (n^{-1})$, $A \cup [n]_p$ is Schur whp. \paragraph{Case II: there are fewer than $\delta n^2$ Schur triples in $A$.} By Theorem~\ref{thm:removal}, we can remove at most $\varepsilon n$ elements from $A$ to obtain a sum-free subset $A' \subseteq [n]$. 
It follows that $\card{A'} \ge (\tfrac12 - \varepsilon)n$, and hence we can apply Theorem~\ref{thm:sumstab} to obtain structural information about $A'$ --- it either consists entirely of odd integers or of large integers. \paragraph{Case II.1: $A'$ is contained in the odd integers.} Since $\card{A'} \ge ( \tfrac12 - \varepsilon )n$, it follows that at most $\varepsilon n$ odd integers are missing from $A$. Furthermore, letting $X\subseteq A$ be the set of even integers in $A$, we have that $\ell:=\card{X}\geq t$, since $\card{A} = \ceil{\frac{n}{2}} + t$. Now for $x\in X$, if $x \le \frac{n}{2}$, there are at least $\frac{n}{4}$ sums $a + x = b$ in $[n]$ with $a, b$ odd. Since each missing odd integer appears in at most two of these sums, it follows that $\card{S^+_A(x)} \ge (\tfrac14 - 2 \varepsilon)n \ge \tfrac18 n$. Thus, by Theorem~\ref{thm:APsupersat}, there is some $\xi > 0$ such that there is a set $D_x$ of at least $\xi n$ distinct differences of $4$-APs contained in $S^+_A(x)$. On the other hand, if $x > \frac{n}{2}$, then there are at least $\frac{n}{4}$ sums $a + b = x$ in $[n]$ with $a,b$ odd. Each missing odd integer appears in at most two of these sums, and so $\card{S^-_A(x)} \ge (\tfrac14 - 2\varepsilon) n \ge \tfrac18 n$. As before, we can find a set $D_x$ of at least $\xi n$ distinct differences of $4$-APs contained in $S^-_A(x)$. Hence, applying Lemma~\ref{lem:chebyshev} with $\lambda = \ell$ and $\kappa = \xi n$, we find that $p = \omega( (\ell n)^{-1/2} )$ suffices to ensure $A \cup [n]_p$ is whp Schur. As $\ell \ge t$ and $p(n,t)=t^{-1}\geq (tn)^{-1/2} \ge (\ell n)^{-1/2}$, this completes the proof in this case. \paragraph{Case II.2: $A'$ consists of large integers.} In this case, $\min A' > \card{A'}$. Since $\card{A'} > ( \tfrac12 - \varepsilon )n$, it follows that $\min A' > (\tfrac 12 - \varepsilon) n$. Thus, if we write $M = \left[\ceil*{\tfrac{n}{2}}, n \right] \setminus A$, we have \[ \card{M} = \card*{ \left[ \ceil*{\tfrac{n}{2}}, n \right] \setminus A } \le \card*{ \left[ \ceil*{( \tfrac12 - \varepsilon) n}, n \right] \setminus A'} = \card*{ \left[ \ceil*{(\tfrac12 - \varepsilon)n}, n \right]} - \card{A'} < 2 \varepsilon n. \] Let $m := \card{M}$. Since $\card{A} = \ceil{ \tfrac{n}{2} } + t$, we must have $\card*{A \cap \left[1, \floor*{\tfrac{n}{2}} \right]} = m + t$. Let $X$ be the set consisting of the $\tfrac13 (m+t)$ smallest elements in $A$, and consider $x \in X$. Observe that \[S^+_A(x) = \{ y \in A : x + y \in A \} = A \cap (A - x),\] where $A - x = \{ z - x : z \in A \}$. If $x \le \tfrac{n}{2} - 3(m+t)$, consider the interval $I_1 := \left[ \ceil*{\tfrac{n}{2}}, \ceil*{\tfrac{n}{2}} + 3(m+t) \right]$. We then have \[S^+_A(x) \cap I_1 = A \cap (A-x) \cap I_1 = I_1 \setminus \left( M \cup (M-x) \right),\] and so at most $2m$ elements of $I_1$ can be missing from $S^+_A(x)$. Therefore $S^+_A(x)$ contains at least $m+3t$ elements out of an interval of length $3(m+t)+1$, and hence by Theorem~\ref{thm:APsupersat} (which we may apply over any interval, as arithmetic progressions are translation-invariant), there is some $\xi = \xi(\tfrac13) > 0$ and a set $D_x$ of size $3 \xi (m+t)$, such that for each $d \in D_x$ there is a $4$-AP in $S^+_A(x)$ with common difference $d$. Otherwise, we must have $\tfrac{n}{2} - 3(m+t) < x \le \tfrac{n}{2} - \tfrac23 (m+t)$ (since there are $\tfrac23(m+t)$ elements of $A \setminus X$ that are at most $\ceil{\tfrac{n}{2}}$, $x$ cannot be any larger). 
This time, consider the interval $I_2 := \left[ x, \ceil*{\tfrac{n}{2}} + \tfrac23(m+t) \right]$. We have \[ I_2 \setminus S^+_A(x) = I_2 \setminus \left(A \cap (A - x) \right) = \left( I_2 \setminus A \right) \cup \left( I_2 \setminus (A - x) \right) = \left( I_2 \setminus A \right) \cup \left( \left( (I_2 + x) \setminus A \right) - x \right). \] In this case, since $x$ is large and the interval $I_2$ is small, no missing element of $A$ can contribute to both $I_2 \setminus A$ and $I_2 \setminus (A - x)$ simultaneously. We therefore have \begin{align*} \card*{I_2 \setminus S^+_A(x)} &= \card*{\left(I_2 \cup (I_2 + x) \right) \setminus A} \\ &= \card*{ \left[ x, \floor*{\tfrac{n}{2}} \right] \setminus A} + \card*{ \left( \left[ \ceil*{\tfrac{n}{2}}, \ceil*{\tfrac{n}{2}} + \tfrac23(m+t) \right] \cup \left[ 2x, \ceil*{\tfrac{n}{2}} + \tfrac23(m+t) + x \right] \right) \setminus A} \\ &\le \card*{ \left[ x, \floor*{\tfrac{n}{2}} \right] \setminus A} + \card*{ \left[ \ceil*{\tfrac{n}{2}}, n \right] \setminus A} \\ &= \card*{ \left[ x, \floor*{\tfrac{n}{2}} \right] \setminus A} + m \\ &\le \tfrac{n}{2} - x - \tfrac23(m+t) + m < \tfrac{n}{2} - x + \tfrac13 (m + t), \end{align*} where the penultimate inequality follows from the fact that there are at least $\tfrac23(m+t)$ elements of $A$ that are larger than $x$ and smaller than $\tfrac{n}{2}$. Therefore \[ \card*{S^+_A(x) \cap I_2} > \card{I_2} - \left( \tfrac{n}{2} - x + \tfrac13(m+t) \right) \ge \tfrac13 (m+t). \] Since $\card{I_2} \le \tfrac{11}{3} (m+t)$, it follows that $S^+_A(x)$ is dense in $I_2$, and so we may again apply Theorem~\ref{thm:APsupersat} to find some $\xi = \xi(\tfrac{1}{11}) > 0$ and a set $D_x$ of size at least $\xi \card{I_2} \ge \tfrac43 \xi (m+t)$ such that, for every $d \in D_x$, there is a $4$-AP in $S^+_A(x)$ with common difference $d$. We may therefore apply Lemma~\ref{lem:chebyshev} with $\lambda = \tfrac13(m+t)$ and $\kappa = \tfrac43 \xi(\tfrac{1}{11}) (m+t)$ to deduce that $p = \omega \left( (m+t)^{-1} \right) = \omega(t^{-1})$ suffices to ensure $A \cup [n]_p$ is whp Schur. \end{proof} \vspace{1em} We now complete the proof of the $1$-statement of Theorem~\ref{thm:main_dense} by proving Lemma~\ref{lem:chebyshev}. \begin{proof}[Proof of Lemma~\ref{lem:chebyshev}] Given some $x \in X$ and $d \in D_x$, suppose first that $d$ is the common difference of a $4$-AP in $S^+_A(x)$. Let $a$ be the first term of such a $4$-AP. Then $\{a, a+d, a+2d, a+3d\} \subseteq S^+_A(x) \subseteq A$, and thus, by definition of $S^+_A(x)$, we also have \[\{ a+x, a+x+d, a+x + 2d, a+x+3d\} \subseteq A.\] In this case, we define $P(x,d) = \{d, x+d\}$. Note that $P(x,d) \subseteq [n]$, since $1 \le d \le x + d \le a + x + d \le n$, where the final inequality follows from the fact that $a + x + d \in A \subseteq [n]$. We further have $L_1(a,x,d) \setminus A \subseteq P(x,d)$. Since, by Proposition~\ref{prop:11set}, $L_1(a,x,d)$ is Schur, it follows that $A \cup [n]_p$ will be Schur if $P(x,d) \subseteq [n]_p$. If, instead, $d$ is the common difference of a $4$-AP in $S^-_A(x)$ and $x\neq 2d$, letting $a$ be the first term of such a $4$-AP, then $\{a, a+d, a+2d, a+3d\} \subseteq S^-_A(x) \subseteq A$ and, by definition of $S^-_A(x)$, \[\{x-a-3d, x-a-2d, x-a-d, x-a\} \subseteq A.\] Here, we define $P(x,d) = \{d, x-d\}$, which is again easily seen to be contained in $[n]$. Then we have $L_2(a,x,d) \setminus A \subseteq P(x,d)$, and so, since $L_2(a,x,d)$ is Schur, it once more suffices to have $P(x,d) \subseteq [n]_p$. 
Note that the map $(x,d) \mapsto P(x,d)$ is at most three-to-one; given a pair $\{u,v\}$ in the image with $u \le v$, it either takes the form of $\{d, x + d \}$ with $d = u$ and $x = v-u$, or the form $\{d, x-d\}$ with $x = u + v$ and $d = u$ or $d = v$. Hence, since there are at least $\tfrac12 \lambda \kappa$ pairs $(x,d)$ with $x\neq 2d$ (the factor of $\tfrac12$ comes from ignoring the pairs with $x=2d$), there are at least $\tfrac16 \lambda \kappa$ distinct pairs $P(x,d)$ whose appearance in $[n]_p$ would make $A \cup [n]_p$ Schur. Moreover, as we ignored cases in which $x=2d$, all the pairs $P(x,d)$ are indeed sets of size 2. Let $Y$ be the random variable counting how many of these pairs are contained in $[n]_p$. We have \[ \mb{E}[Y] \geq \tfrac16 \lambda \kappa p^2 = \omega(1), \] since $p = \omega \left( (\lambda \kappa)^{-1/2} \right)$. To bound the variance of $Y$, note that the events of $P(x,d)$ and $P(x',d')$ appearing in $[n]_p$ are independent unless $P(x,d) \cap P(x',d') \neq \emptyset$. Furthermore, a given element $u$ can be in at most $4 \lambda$ pairs $P(x,d)$; once we specify which type of pair it is, and which role $u$ plays in the pair, each pair determines a unique $x \in X$. Thus, fixing a pair $P(x,d)$, it follows that there are at most $8\lambda$ other pairs which intersect it. Any such intersecting pair of pairs consists of a total of three elements, and they all appear in $[n]_p$ with probability $p^3$. Thus \[ \Var (Y) \le 8 \lambda^2 \kappa p^3 = o(\mb{E}[Y]^2), \] since $p = \omega(\kappa^{-1})$. Hence it follows from Chebyshev's Inequality (Theorem~\ref{thm:chebyshev}) that $\mb{P}(Y = 0) = o(1)$. That is, $[n]_p$ will whp contain some pair $P(x,d)$, and thus $A \cup [n]_p$ will be Schur. \end{proof} \subsubsection{Proof of Theorem~\ref{thm:stability}} Suppose $A \subseteq [n]$ is a set of size $\ceil*{\tfrac{n}{2}} + t$, and $q = \omega \left(n^{-1} \right)$ is such that $A \cup [n]_q$ is whp not Schur. From the proof of Theorem~\ref{thm:main_dense}, we have that if $A$ fell under Case I then $p=\omega(n^{-1})$ would be sufficient to ensure that $A \cup [n]_p$ is whp Schur. Hence, we must have that $A$ falls into Case II. Now if $A$ falls into Case II.1, the proof of Theorem~\ref{thm:main_dense} shows that if $A$ contains $\ell$ even numbers and $p=\omega((\ell n)^{-1/2})$ then whp $A\cup [n]_p$ is Schur. In this case we must thus have $q=O((\ell n)^{-1/2})$ and so $\ell=O(q^{-2}n^{-1})$. On the other hand, if $A$ falls under Case II.2, the proof shows that, for $ m := \card*{\left[ \ceil*{\tfrac{n}{2}}, n\right] \setminus A}$, $p = \omega \left( (m+t)^{-1} \right)$ suffices to make $A \cup [n]_p$ Schur whp. Thus we must have $q = O \left((m+t)^{-1} \right)$, which in particular implies $m = O \left( q^{-1} \right)$, proving the stability result. \qed \section{Sparse base sets} \label{sec:sparse} In this section we prove Theorem~\ref{thm:main_sparse}, starting by proving the $0$-statement in Section~\ref{sec:0sparse}. In Section~\ref{sec:containers} we then introduce containers for colourings, which will be a key tool in proving the $1$-statement of Theorem~\ref{thm:main_sparse}, which we carry out in Section~\ref{sec:1sparse}. \subsection{The $0$-statement of Theorem~\ref{thm:main_sparse}} \label{sec:0sparse} As with the proof of the $0$-statement of Theorem~\ref{thm:main_dense}, we will take $A$ to be the set of the largest integers in $[n]$, which we denote by $A_s:=[n-s+1,n]$. 
However, in contrast to the setting of dense base sets, our proof here is non-constructive. We obtain a contradiction by assuming that the random perturbation of $A_s$ \emph{is} Schur and deriving that the random set must then satisfy certain properties, none of which holds whp. In this way, our approach shares features with the proof of the $0$-statement of Theorem~\ref{thm:random} due to Graham, R\"odl and Ruci\'nski~\cite{graham1996schur}. Recall from Section~\ref{sec:Notation} the definition of $\mc H_{\textrm{Schur}}(X)$ for a set $X\subseteq [n]$ and note that the edges of $\mc H_{\textrm{Schur}}(X)$ have cardinality two (in the case of degenerate Schur triples) or three. Note further that finding a Schur colouring of a set $X \subseteq [n]$ is equivalent to finding a proper two-colouring of $\mc H_{\textrm{Schur}}(X)$; that is, a two-colouring without monochromatic edges. Therefore, to prove the $0$-statement of the theorem, we need to show that if $p = o((sn)^{-1/3})$, then whp, after we add the perturbation $P := [n-s]_p$ to the set $A_s$, we can properly two-colour $\mc H_{\textrm{Schur}}(A_s \cup P)$. We will in fact prove a bit more, showing that one can find such a colouring where all elements in $A_s$ are coloured blue. Now suppose that $\mc H_{\textrm{Schur}}(A_s \cup P)$ does not admit a proper two-colouring where all elements of $A_s$ are coloured blue, and let $\mc H_{min}=\mc{H}_{min}(s,P) \subseteq \mc H_{\textrm{Schur}}(A_s\cup P)$ be an edge-minimal subgraph with this property. That is, $\mc H_{min}$ does not have such a two-colouring, but the hypergraph obtained by removing any edge $h$ from $\mc H_{min}$ does. We now explore the structure of $\mc H_{min}$. \begin{lem} \label{lem:extension} Suppose $P$ is such that $\mc H_{\textrm{Schur}}(A_s \cup P)$ does not admit a proper two-colouring where all elements of $A_s$ are coloured blue. Then $\mc H_{min}=\mc{H}_{min}(s,P)$, as defined above, has the following property. Every edge $h \in \mc H_{min}$ contains at least one element from $P$, and, for every $x \in h \cap P$, there is an edge $h' \in \mc H_{min}$ such that $h \cap h' = \{x \}$. Moreover, if $h$ contains an element from $A_s$, then there exists such an edge $h'$ that does not contain elements from $A_s$. \end{lem} \begin{proof} First observe that, as $s \le \floor{\tfrac{n}{2}}$, the set $A_s$ is sum-free. Hence, $A_s$ is an independent set in $\mc H_{\textrm{Schur}}(A_s \cup P)$, and every edge of $\mc H_{min} \subseteq \mc H_{\textrm{Schur}}(A_s \cup P)$ must contain an element of $P$. Now fix an edge $h \in \mc H_{min}$ and an element $x \in h \cap P$. By minimality of $\mc H_{min}$, we know that there is a proper colouring of $\mc H_{min} \setminus \{h\}$ where all elements in $A_s$ are blue. As $\mc H_{min}$ itself does not admit such a colouring, $h$ must be monochromatic under this colouring. If we swap the colour of $x$, then $h$ will no longer be monochromatic, so we must create another monochromatic edge, say $h'$. As $x$ was the only element to change colour, we must have $x \in h'$, and as $x$ did change colour, $h'$ must be monochromatic in the opposite colour to $h$. Hence $h$ and $h'$ cannot have any other elements in common, and $h \cap h' = \{x\}$. To establish the final assertion, observe that if $h \cap A_s$ is non-empty, $h$ must have been coloured blue. Thus the edge $h'$ must be coloured red after recolouring $x$, and hence cannot contain any element from $A_s$. \end{proof} While the above result holds for any outcome of the random set $P$, our next proposition gives some additional structure that holds whp. 
\begin{prop} \label{prop:minhypergraph} The random set $P$ is such that the following holds whp. If $P$ is such that $\mc H_{\textrm{Schur}}(A_s \cup P)$ does not admit a proper two-colouring where all elements of $A_s$ are coloured blue, then the hypergraph $\mc H_{min}=\mc H_{min}(s,P)$ has the following properties. \begin{enumerate} \item[(a)] The hypergraph $\mc H_{min}$ is three-uniform. \item[(b)] Every edge of $\mc H_{min}$ contains at most one element from $A_s$. \item[(c)] The hypergraph $\mc H_{min}$ is linear. That is, any pair of distinct edges of $\mc H_{min}$ intersect in at most one vertex. \end{enumerate} \end{prop} \begin{proof} We start with part (a). As noted earlier, all edges of $\mc H_{\textrm{Schur}}(A_s \cup P)$ have cardinality either two or three, with the two-edges $h$ of the form $\{x,2x\}$. If $2x \le n-s$, then $h \subseteq P$, and so each of its elements appears independently with probability $p$. Thus, the expected number of such edges in $\mc H_{\textrm{Schur}}$ is bounded by $np^2$. Since $p=o( n^{-1/2})$, this is $o(1)$ and so whp none of these edges appear. On the other hand, if $2x \ge n-s+1$, then $2x \in A_s$, and there are at most $s$ choices for $2x$. However, $x \le \tfrac{n}{2} < n-s$, and so $x \in P$. By Lemma~\ref{lem:extension}, there is an edge $h'$ that meets $h$ only in $x$, and this edge must be fully contained in $P$. By the preceding paragraph, $\card{h'} = 3$. Since the element $x$ participates in at most $n$ sums, there are at most $n$ choices for $h'$, and the probability that $h' \subseteq P$ is $p^3$. Thus, the probability that any such $h'$ exists in $\mc H_{min}$ (and, therefore, that $h$ does) is at most $np^3$. Taking a union bound over the possible choices for $h$, we find the probability of the existence of such a two-edge is $snp^3 = o(1)$. We now turn to part (b). We know from Lemma~\ref{lem:extension} that there is no edge fully contained in $A_s$, so we need only show that there are no edges in $\mc H_{min}$ containing two elements of $A_s$. Suppose $h = \{x,y,z\}$ were such an edge, with $x + y = z$, where necessarily $x \in P$ and $y,z \in A_s$. We must then have $x \in [s-1]$, and so there are fewer than $s$ choices for $x$. By Lemma~\ref{lem:extension}, there is an edge $h' \subseteq P$ with $h \cap h' = \{x\}$. For each choice of $x$, there are at most $n$ choices for $h'$, each of which appears with probability $p^3$. Hence, taking a union bound over all choices of $x$ and $h'$, the probability that such a supporting edge exists in $\mc H_{min}$ is at most $snp^3 = o(1)$. For part (c), suppose we have two edges $h = \{x,y,z\}$ and $h' = \{x,y,w\}$. First consider the case $h \cup h' \subseteq P$. There are $n$ choices for $x$ and $n$ choices for $y$, but once these are fixed, there are only constantly many choices for $z$ and $w$. As each element appears in $P$ independently with probability $p$, the probability of finding such a configuration is at most $O(n^2p^4) = o(1)$. Next we handle the case $x \in A_s$ (the case $y \in A_s$ is symmetric). There are $s$ choices for $x$ and $n$ choices for $y$. Once this pair is fixed, there are again only a constant number of choices for $z$ and $w$. Moreover, by part (b), we must have $y, z, w \in P$. Hence, taking a union bound over all possible such configurations, the probability that one appears is $snp^3 = o(1)$. Finally, suppose we have $z \in A_s$. In this case, by part (b), we must have $x,y \in P$, with the relation $x + y = z$. 
We must further have $w = \card{y-x}$, and so $w \in P$ as well. There are then at most $s$ choices for $z$ and $n$ choices for $x$, after which $y$ and $w$ are determined. The probability of finding such a configuration is therefore at most $snp^3 = o(1)$, completing the proof of (c). \end{proof} In the following it will help to differentiate between two kinds of edges that appear in $\mc H_{min}$. We call an edge \emph{type 1} if it is fully contained in $P$, and \emph{type 2} if it contains exactly one vertex from $A_s$. See Figure~\ref{fig:edge_type} for an example. Also, in what follows, a \emph{loose cycle} in a $3$-uniform hypergraph $\mc H$ is a collection of $\ell\geq 3$ edges $e_1,\ldots,e_\ell\in E(\mc H)$ such that for all $i\in [\ell]$, $|e_i\cap e_j|=1$ for $j=i-1,i+1 \mod \ell$, and $e_i\cap e_j= \emptyset$ if $j\neq i-1,i,i+1 \mod \ell$. \begin{figure}[b] \centering \includegraphics{lower_bound_sparse_type} \caption{An edge of type 1 and an edge of type 2} \label{fig:edge_type} \end{figure} \begin{lem} The random set $P$ is such that the following holds whp. If $P$ is such that $\mc H_{\textrm{Schur}}(A_s \cup P)$ does not admit a proper two-colouring where all elements of $A_s$ are coloured blue, then there is a loose cycle in $\mc H_{min}=\mc H_{min}(s,P)$ that has at most one pair of consecutive type 2 edges and in which every vertex of degree 2, except possibly the common vertex of such a pair of consecutive type 2 edges, belongs to $P$. \label{lem:cycle} \end{lem} \begin{proof} Using Lemma~\ref{lem:extension} and Proposition~\ref{prop:minhypergraph} we construct a walk in $\mc H_{min}$. This walk must eventually repeat a vertex, at which point it creates a cycle. To build the walk, start from an arbitrary edge $h_0 \in \mc H_{min}$. By Lemma~\ref{lem:extension}, there must be some $x_0 \in h_0 \cap P$. Now suppose we have already taken $i \ge 0$ steps in the walk, and have some edge $h_i \in \mc H_{min}$ and vertex $x_i \in h_i \cap P$. By Lemma~\ref{lem:extension}, there is some edge $h_{i+1} \in \mc H_{min}$ such that $h_i \cap h_{i+1} = \{x_i\}$. Furthermore, by Proposition~\ref{prop:minhypergraph}(a), $\card{h_{i+1}} = 3$, and by Proposition~\ref{prop:minhypergraph}(b), we can find some $x_{i+1} \in h_{i+1} \cap P$ that is distinct from $x_i$. Hence, we can extend the walk with the edge $h_{i+1}$, and proceed to the next step using the vertex $x_{i+1}$. We repeat this process until the last edge $h_t$ added contains a vertex, apart from $x_{t-1}$, that we have seen previously. That is, there is some $r \le t-1$ and some $y \in h_t \setminus \{ x_{t-1} \}$ such that $y \in h_t \cap h_r$. If there are multiple choices for the index $r$, we choose the larger one. Note that, by Proposition~\ref{prop:minhypergraph}(c), $\mc H_{min}$ is linear, and so we must in fact have $r \le t-2$, as $h_{t-1}$ and $h_t$ already share the vertex $x_{t-1} \neq y$. This means the edges $h_r, h_{r+1}, \hdots, h_{t-1}, h_t$ form a loose cycle in $\mc H_{min}$. Lemma~\ref{lem:extension} guarantees that if an edge $h_i$ is of type 2, then the following edge $h_{i+1}$ will be of type 1. Hence, we do not have two consecutive type 2 edges in the walk. Note that it is possible that the first and last edges $h_r$ and $h_t$ might both be of type 2. Thus, in the cyclic ordering of the edges, there is at most one pair of consecutive type 2 edges. Finally, the vertices of degree 2 in this cycle are the connecting vertices $x_r, \ldots, x_{t-1}$, which all lie in $P$ by construction, together with the closing vertex $y \in h_r \cap h_t$. The latter can lie in $A_s$ only if it is the $A_s$-vertex of both $h_r$ and $h_t$, in which case these two edges form the pair of consecutive type 2 edges. \end{proof} Using these properties, we can prove the $0$-statement of Theorem~\ref{thm:main_sparse}. 
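Before giving the proof, we record the elementary estimate that underlies all of the first-moment calculations below: since $p=o\big( (sn)^{-1/3}\big)$ and $p=o\big( n^{-1/2}\big)$, we have
\[ np^2 = o(1) \qquad \text{and} \qquad snp^3 = o(1), \]
so every geometric series below with common ratio $2np^2+2snp^3$ has ratio $o(1)$ and, in particular, bounded sum.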
\begin{proof}[Proof of the $0$-statement of Theorem~\ref{thm:main_sparse}] Assume that $P$ satisfies the conclusion of Lemma~\ref{lem:cycle} (which holds whp) and suppose for a contradiction that $P$ is such that $\mc H_{\textrm{Schur}}(A_s \cup P)$ does not admit a proper two-colouring where all elements of $A_s$ are coloured blue. Then, appealing to Lemma~\ref{lem:cycle}, we can construct a loose cycle in the associated minimal hypergraph $\mc H_{min}=\mc H_{min}(s,P) \subseteq \mc H_{\textrm{Schur}}(A_s \cup P)$ that has at most one pair of consecutive type 2 edges. In the following we show that whp such a cycle does not exist in $\mc H_{min}$, and hence there must exist a Schur colouring of $A_s\cup P$. We start by ruling out cycles without any consecutive type 2 edges. If the cycle has length $\ell$, for some $\ell \ge 3$, we label the edges in cyclic order as $e_1, e_2, \hdots, e_\ell$, choosing an ordering where $e_{\ell}$ is of type 1. We can then label the vertices of the cycle in such a way that, for each $1 \le i \le \ell-1$, $e_i = \{x_{i-1}, y_i, x_i\}$, while $e_\ell = \{x_{\ell-1}, y_{\ell}, x_0\}$. Note that, since we always extended our walk from vertices in $P$, each degree-two vertex $x_i$ belongs to $P$, while $y_i \in A_s$ if and only if the edge $e_i$ is of type 2. We can now bound the expected number of such cycles. We start by choosing the vertex $x_0$, for which there are at most $n$ choices, each appearing with probability $p$. Hence, this choice contributes a factor of $np$. Now, consider the choices for the edges $e_i$, where $1 \le i \le \ell-\varepsilon$, where $\varepsilon = 1$ if $e_{\ell-1}$ is of type 1 and $\varepsilon = 2$ if $e_{\ell-1}$ is of type 2. If $e_i$ is of type 1, there are at most $n$ choices of $y_i$, and then at most two choices for $x_i$ that complete a sum with $x_{i-1}$ and $y_i$. The elements $y_{i}$ and $x_i$ appear in $P$ independently, each with probability $p$. Thus a type 1 edge contributes a factor of at most $2np^2$. On the other hand, if $e_i$ is of type 2, then there are at most $s$ choices of $y_i \in A_s$, and then $x_i = y_i - x_{i-1} \in P$ is determined, and appears in $P$ with probability $p$. Thus, this edge contributes a factor of $sp$. However, we then know the next edge $e_{i+1}$ is of type 1, and contributes a factor of $2np^2$. We group these two factors: every type 2 edge, together with its subsequent type 1 edge, contributes a factor of $2snp^3$. Hence, in total, each such intermediate extension contributes a factor of at most $(2np^2 + 2snp^3)$. This leaves us with the task of closing the cycle. If $e_{\ell-1}$ is of type 1, then we have already accounted for it previously, and need only consider the choice of $e_\ell$. Recall that we chose our labelling so that $e_{\ell}$ is of type 1. The vertices $x_{\ell-1}$ and $x_0$ are already fixed, and there are at most two choices for the final vertex $y_\ell \in P$, which appears with probability $p$. Thus, we gain a factor of $2p$ in this case. If, on the other hand, $e_{\ell -1}$ is of type 2, then the same arguments as before show that we gain a factor of at most $sp$ for the edge $e_{\ell-1}$, and a factor of $2p$ for the final type 1 edge $e_{\ell}$. Hence, in this case, we gain a factor of $2sp^2$. In total, closing the cycle contributes a factor of at most $2p(1 + sp)$. 
When we sum over all possible cycles, therefore, we can bound the expectation by \[ np \cdot \sum_{k \ge 0} \left(2np^2 + 2snp^3\right)^k \cdot 2p(1 + sp) = \sum_{k \ge 0} \left( 2np^2 + 2snp^3 \right)^{k+1}, \] where $k$ represents the number of intermediate extensions in forming the cycle. By virtue of the fact that $p=o( (sn)^{-1/3})$ and $p=o( n^{-1/2})$, this sum is $o(1)$, and so whp we do not have any such cycles. \medskip This leaves us with those cycles whose initial and final edges $h_r$ and $h_t$ (adopting the notation from the proof of Lemma~\ref{lem:cycle}) are both of type 2. We further split into two subcases, based on whether their common vertex lies in $A_s$ or $P$. In the former setting, we index the edges in cyclic order starting with $h_r$ and closing the cycle with $h_t$, so that we have $h_r = e_1, e_2, e_3, \hdots, e_{\ell - 1}, e_{\ell} = h_t$. We label the vertices within the edges similarly to before, except when it comes to $e_{\ell}$, as the common vertex with $e_1$ will be the $A_s$ vertex $y_1$. That is, $e_i = \{x_{i-1}, y_i, x_i\}$ for all $i \in [\ell - 1]$, while $e_{\ell} = \{x_{\ell - 1}, y_1, x_{\ell} \}$. As before, each vertex $x_i$ lies in $P$, while $y_i$ lies in $P$ if $e_i$ is of type 1, and in $A_s$ otherwise. We can then bound the expected number of such cycles just as we did before. For the initial edge $e_1$, we have at most $n$ choices for $x_0 \in P$, at most $s$ choices for $y_1 \in A_s$, and then $x_1 \in P$ is determined uniquely. The vertices $x_0$ and $x_1$ appear in $P$ independently, each with probability $p$, and hence the contribution of $e_1$ to the expectation is at most a factor of $snp^2$. Now, since the final edge $e_{\ell}$ is of type 2, its predecessor $e_{\ell-1}$ must be of type 1, and hence the intermediate extensions account for the edges $e_2$ through to $e_{\ell - 1}$. As before, each extension contributes a factor of at most $(2np^2 + 2snp^3)$. Finally, when closing the cycle with the edge $e_{\ell}$, we have already fixed the elements $x_{\ell-1}$ and $y_1$, and so the edge $e_{\ell}$ is determined. However, $x_{\ell} \in P$ is a new vertex, and appears with probability $p$. We therefore collect a factor $p$ for the final edge $e_{\ell}$. Putting this all together, the expected number of cycles of this type can be bounded by \[ snp^2 \cdot \sum_{k \ge 0} \left( 2np^2 + 2snp^3 \right)^k \cdot p = snp^3 \sum_{k \ge 0} \left( 2np^2 + 2snp^3 \right)^k = o(1), \] and so whp we do not have any such cycles. \medskip For the latter subcase, where the edges $h_r$ and $h_t$ meet in a vertex $x_1 \in P$, we shall instead have these two edges be the first two in our cyclic ordering. We label the edges of the cycle as $e_1 = h_t, e_2 = h_r, e_3, e_4, \hdots, e_{\ell - 1}, e_{\ell}$, and we label the vertices within the edges as before, with $e_i = \{ x_{i-1}, y_i, x_i \}$ for $i \in [\ell - 1]$, and $e_{\ell} = \{x_{\ell-1}, y_{\ell}, x_0\}$. Note that $e_{\ell}$ must be a type 1 edge, as we cannot have another pair of consecutive type 2 edges. Furthermore, observe that the vertex $x_1$ is only used in the cycle as the intersection between $e_1$ and $e_2$, two edges of type 2. By Lemma~\ref{lem:extension}, we are guaranteed the existence of another edge, say $f$, of type 1, such that $f \cap e_1 = \{x_1\}$. If $f$ were to contain another vertex from the cycle, that would create a cycle without a pair of consecutive type 2 edges, but we previously showed that such cycles do not exist. 
Hence, the vertices in $f \setminus \{x_1\}$ must be new. We now bound the expected number of copies of these cycles, together with the pendant edge $f$. There are $n$ choices for the vertex $x_1$, which appears with probability $p$. There are then $s$ choices for each edge $e_1$ and $e_2$, and their other $P$-vertices, $x_0$ and $x_2$, appear independently with probability $p$ each. Finally, there are at most $n$ choices for the edge $f$, and the two vertices in $f \setminus \{x_1\}$ also each appear with probability $p$. Thus, the initial constellation of edges $e_1, e_2$ and $f$ contributes a factor of at most $s^2 n^2 p^5$ to the expectation. Since $e_2$ is of type 2, the edge $e_3$ must be of type 1. Hence, for the edges $e_3, e_4, \hdots, e_{\ell - 1}$, every type 2 edge is preceded by a type 1 edge, and so we can again\footnote{Previously we paired a type 2 edge with the type 1 edge succeeding it, but the calculations here are identical.} bundle these together when bounding the expectation. Then, as before, each intermediate extension provides a factor of at most $(2np^2 + 2snp^3)$, while closing the cycle with the type 1 edge $e_{\ell}$ gives an additional factor of $2p$. Thus the expectation can be bounded by \[ s^2n^2p^5 \cdot \sum_{k \ge 0} \left( 2np^2 + 2snp^3 \right)^k \cdot 2p = 2 \left( snp^3 \right)^2 \sum_{k \ge 0} \left( 2np^2 + 2snp^3 \right)^k = o(1), \] and so again we do not have any such cycle whp. In summary, we find that whp $\mc H_{min}$ does not contain any cycle that would result from Lemma~\ref{lem:cycle}, which means that our initial assumption that $A_s \cup [n]_p$ does not have a Schur colouring with all elements of $A_s$ coloured blue, cannot hold. This completes the proof. \end{proof} \subsection{Containers for colourings}~\label{sec:containers} For the $1$-statement of Theorem~\ref{thm:main_sparse}, we fix a set $A \subseteq [n]$ of the appropriate size, and wish to show that when $p$ is sufficiently large, then whp a random perturbation $P \sim [n]_p$ is such that $A \cup P$ is Schur. Roughly speaking, the idea is that, for a given colouring of $[n]$, the perturbation $P$ is very likely to contain elements that, in combination with $A$, form a monochromatic Schur triple. Unfortunately, there are far too many potential colourings of $[n]$, rendering the union bound ineffective. To resolve this issue, we make use of hypergraph containers, introduced by Saxton and Thomason~\cite{saxton2015hypergraph} and Balogh, Morris and Samotij~\cite{BMS15}, which have been successfully applied to a wide range of problems in combinatorics. In our setting, the key idea is to group similar colourings into so-called containers, and then show that the random set is unlikely to fit with not just a given colouring, but also the container at large. As the number of containers will be much smaller, we will then be able to proceed with a union bound and obtain the desired result. \medskip To put things on a formal footing, we define a colouring hypergraph $\mc H_A$ that will encode the colourings of subsets of $[n]$. The vertex set $V(\mc H_A)$ is the disjoint union of two copies of $[n]$, which we call $V_R$ and $V_B$. Colouring the element $i \in [n]$ red will then be represented by the vertex $i_R \in V_R$, while colouring it blue will be represented by $i_B \in V_B$. 
Thus, given a subset $S \subseteq [n]$ and a colouring $\varphi: S \to \{ \text{red}, \text{blue} \}$, we can identify $\varphi$ with the vertex set $\{i_R \in V_R: i \in S, \varphi(i) = \text{red} \} \cup \{i_B \in V_B: i \in S, \varphi(i) = \text{blue} \}$. The edges of the hypergraph will correspond to coloured configurations that cannot appear in a Schur colouring of $A \cup P$. Indeed, given some $a \in A$, let $x,y,w,z \in [n]$ be such that the sets $\{a,x,y\}$ and $\{a,z,w\}$ both host Schur triples. If we were to colour $x$ and $y$ red and colour $z$ and $w$ blue, then assigning either colour to $a$ creates a monochromatic sum. Hence, this colouring of the four elements $x,y,z$ and $w$ can be forbidden, motivating our definition of the edge set of $\mc H_A$: \begin{align*} E(\mc H_A) = \Big\{ \{ x_R, y_R, z_B, & w_B\} : \\ \exists \; a \in A & \text{ for which } \{a,x,y\} \text{ and } \{a,z,w\} \text{ host non-degenerate Schur triples} \Big\}. \end{align*} We restrict here only to non-degenerate Schur triples to ease the analysis. Given an edge $e=\{x_R, y_R, z_B, w_B\}$, we call the associated element $a \in A$ the \emph{target}\footnote{When $\{x,y\} = \{z,w\}$, the target of the edge need not be unique --- it could be either the difference or the sum of the pair. In such a case, when referring to the target of the edge, we arbitrarily choose one such target.} of the edge $e$. It thus follows that if $A \cup P$ admits a Schur colouring $\varphi$, then $\varphi \subseteq V(\mc H_A)$ must be an independent set. The theory of hypergraph containers asserts that when the edges of a hypergraph are well-distributed, in the sense that no set of vertices is contained in a disproportionally large number of edges, then all its independent sets can be grouped together into a small number of containers, each of which induces few edges. To make the condition on the hypergraph precise, we must define the codegree function. Given an $r$-uniform hypergraph $\mc H$ on $N$ vertices and with average (vertex) degree $d$, some vertex $v \in V(\mc H)$, and some $1\leq j \le r$, we define $d_j(v) = \max \left\{ d(\sigma) : v \in \sigma \subseteq V(\mc H), \card{\sigma} = j \right\}$, where $d(\sigma)$ is the number of edges containing $\sigma$. Then, given any $\tau > 0$, we set $\delta_j = \frac{\sum_v d_j(v)}{\tau^{j-1} N d}$, and define the co-degree function \[\delta(\mc H, \tau) = 2^{\binom{r}{2} - 1} \sum_{j=2}^r 2^{-\binom{j-1}{2}} \delta_j.\] With this notation in place, we can state a version of the hypergraph containers theorem due to Saxton and Thomason. \begin{thm}[Container theorem, Corollary 3.6 in \cite{saxton2015hypergraph}] \label{thm:container} Let $\mc H$ be an $r$-uniform hypergraph on the vertex set $[N]$, and suppose that $0<\tau,\varepsilon<1/2$ satisfy $\delta(\mc H,\tau)\le \varepsilon/12r!$. Then there are constants $c=c(r)$ and $z \le c \log (1 / \varepsilon)$ and a function $ \Psi \colon\mathcal{P}([N])^z \to \mathcal{P}([N])$ with the following properties. Let $\mathcal{T} =\{ (T_1,\ldots, T_z)\in \mathcal{P}([N])^z\colon |T_i|\le c\tau N, 1\le i\le z\}$, and let $\mathcal{C}=\{ \Psi(T)\colon T\in \mathcal{T}\}$. Then \begin{enumerate} \item For every independent set $I$ there exists $T=(T_1,\ldots,T_z)\in \mathcal{T}\cap \mathcal{P}([N])^z$ with $I\subseteq \Psi(T)\in \mathcal{C}$, \item $e(\mc H[C])\le \varepsilon e(\mc H)$ for all $C\in \mathcal{C}$, and \item $\log |\mc C|\le c\log (1 / \varepsilon) \tau N \log (1/\tau)$. 
\end{enumerate} \end{thm} Applying this to our hypergraph $\mc H_A$, we arrive at the following. \begin{cor} \label{cor:containers} For every fixed $\varepsilon > 0$ there is a constant $c = c_{\varepsilon}$ such that, if $A \subseteq [n]$ is a set of size $s = \Omega \left( n^{1/2} \right)$, then there is a collection $\mc C$ of subsets of $V(\mc H_A)$ for which the following hold: \begin{enumerate} \item For every $P \subseteq [n]$ and Schur colouring $\varphi$ of $A \cup P$, there is some $C \in \mc C$ such that $\varphi \subseteq C$. \item For every $C \in \mc C$, $e(\mc H_A[C]) \le \varepsilon sn^2$. \item $\log \; \card{\mc C} \le c s^{-1/3} n^{2/3} \log n$. \end{enumerate} \end{cor} \begin{proof} In order to derive this from Theorem~\ref{thm:container}, we need to compute the codegree function of the hypergraph $\mc H_A$. To start, we count the edges of $\mc H_A$. Observe that for each of the $s$ choices of the target of the edge, there are at least $\tfrac12 n-1$ and at most $n$ choices for the red pair forming a Schur triple, with the same bounds holding for the blue pair. Moreover, every edge has at most 2 targets. Thus, we have $\tfrac1{16} sn^2 \le e(\mc H_A) \le sn^2$. As there are $2n$ vertices, it follows that the average degree $d$ satisfies $d \ge \tfrac18 sn$. We next need to bound the quantities $d_j(v)$, $2 \le j \le 4$, from above. To this end, we define $\Delta_j$ to be the maximum degree of a set $\sigma$ of $j$ vertices, noting that $\Delta_j = \max \{d_j(v) : v \in V(\mc H_A)\}$. When $j = 2$, there are two cases to consider. If the two vertices in $\sigma$ are from the same colour, then there can be at most two choices for the target of the edge. This in turn leaves at most $n$ choices for the pair of vertices of the opposite colour. Hence, for any $\sigma$ of this form, we have $d(\sigma) \le 2n$. On the other hand, if $\sigma$ has one vertex of each colour, then there are $s$ choices for the target of the edge. Once the target is chosen, there are again at most two choices for the remaining vertex of each colour, and thus $d(\sigma) \le 4s \le 4n$. Hence we deduce $\Delta_2 \le 4n$. When $j = 3$, observe that $\sigma$ must contain both vertices from one of the colours, and thus there are only at most two possibilities for the target of the edge. Once this is fixed, and since we already have one vertex from the other colour, there are again at most two choices for the missing vertex, and thus $\Delta_3 \le 4$. Finally, we trivially have $\Delta_4 = 1$, since each edge consists of four vertices. We then have $\delta_j = \frac{\sum_v d_j(v)}{\tau^{j-1} N d} \le \frac{N \Delta_j}{\tau^{j-1} N d} \le \frac{8\Delta_j}{\tau^{j-1} sn}$, and so \[ \delta(\mc H_A, \tau) = 32 \delta_2 + 16 \delta_3 + 4 \delta_4 \le \frac{2^{10}}{\tau s} + \frac{2^{9}}{\tau^2 sn} + \frac{2^5}{\tau^3 sn}. \] Therefore there is a constant $c'$ depending on $\varepsilon$ such that, if $\tau \ge c' \max \{ s^{-1}, (sn)^{-1/2}, (sn)^{-1/3} \}$, we have $\delta(\mc H_A, \tau) \le \tfrac{1}{288} \varepsilon$. Since $s = \Omega \left( n^{1/2} \right)$, the bound simplifies to $\tau \ge c' (sn)^{-1/3}$. We can then apply Theorem~\ref{thm:container} with this choice of $\tau$, and the corollary follows immediately. Indeed, the first conclusion follows from our observation that such a Schur colouring $\varphi$ corresponds to an independent set in $\mc H_A$, and hence must be contained in a container. The second conclusion is a consequence of our earlier calculation showing $e(\mc H_A) \le sn^2$. 
Finally, the bound on the number of containers comes from making the substitutions $\tau = c' (sn)^{-1/3}$ and $N = 2n$ in the corresponding bound from the theorem. \end{proof} \subsection{The $1$-statement of Theorem~\ref{thm:main_sparse}} \label{sec:1sparse} We will now use the containers of the previous section to prove the $1$-statement for sparse base sets. As previously discussed, the value of containers lies in the fact that when considering potential colourings, we can work on the level of the containers, which allows for a much more efficient union bound. To make things precise, suppose we have a set of containers $\mc C$ provided by Corollary~\ref{cor:containers}. We say that a random perturbation $P \subseteq [n]$ is \emph{compatible} with a container $C \in \mc C$, denoted $P \triangleleft C$, if there is some Schur colouring $\varphi$ of $A \cup P$ such that $\varphi \subseteq C$; that is, $\{ i_R: i \in A \cup P, \; \varphi(i) = \textrm{red} \} \cup \{ i_B : i \in A \cup P, \; \varphi(i) = \textrm{blue} \} \subseteq C$. Note that if $P \sim [n]_p$ is such that $A \cup P$ is not Schur, then $A \cup P$ admits a Schur colouring, and thus by the first part of Corollary~\ref{cor:containers}, $P$ must be compatible with some container $C$. The key proposition below shows that this is exceedingly unlikely. \begin{prop} There exists $\varepsilon >0$ such that the following holds. Let $n$ and $s=s(n)$ be positive integers such that $s=\Omega(n^{1/2})$ and $s=o(n)$. Furthermore, let $A\subset [n]$ be a set of size $s$, let $\mc{H}_A$ be the colouring hypergraph as described in Section~\ref{sec:containers} and let $\mc{C}$ be the collection of containers corresponding to this hypergraph given by Corollary~\ref{cor:containers} with parameter $\varepsilon$. Finally, let $p=p(n)$ such that $p=\omega((sn^{13})^{-1/27}\log n)$ and $p = O\left(n^{-1/2}\right)$ and let $P\sim [n]_p$. Then \[\Pr\left[P\triangleleft C\right] = e^{-\omega\left(s^{-1/3}n^{2/3}\log n\right)},\] for any container $C\in \mc{C}$. \label{prop:probability_container} \end{prop} Before giving the proof of this proposition, let us see how it implies the 1-statement of Theorem~\ref{thm:main_sparse}. \begin{proof}[Proof of the 1-statement of Theorem~\ref{thm:main_sparse}] Let $A$ be our base set with $|A|= s = \Omega(n^{1/2}) $ and let $p =\omega( (sn^{13})^{-1/27}\log n)$. If it is the case that $p=\omega\left(n^{-1/2}\right)$ then the conclusion follows directly from Theorem~\ref{thm:random} (as the random perturbation $P$ itself will be Schur whp) and so we may assume that $p=O\left(n^{-1/2}\right)$. Furthermore, if $s=\Omega(n)$, then the desired conclusion follows from Theorem~\ref{thm:posdense}, and so we may assume that $s=o(n)$. Now let $\mc H_A$ be the $4$-uniform colouring hypergraph described in Section~\ref{sec:containers} and let $P\sim [n]_p$ be the random perturbation. Let $\varepsilon$ be the constant given by Proposition~\ref{prop:probability_container} and apply the container lemma, or more precisely, Corollary~\ref{cor:containers}, with the parameter $\varepsilon$. Proposition~\ref{prop:probability_container} tells us that the probability that $P$ is compatible with a given container is at most $e^{-\omega\left(s^{-1/3}n^{2/3}\log n\right)}$. Moreover, if $A\cup P$ is not Schur, then there is a Schur colouring $\varphi$ of $A\cup P$ and, appealing to part 1 of Corollary~\ref{cor:containers}, $\varphi$ corresponds to a subset of one of the containers $C\in\mc C$ in $\mc H_A$. 
Hence the event that $A\cup P$ is not Schur is contained in the event that $P$ is compatible with some container $C\in \mc C$. Taking a union bound over all containers we get that \[ \Pr[A\cup P \text{ is not Schur}] \le \sum_{C\in \mc C} \Pr[P\triangleleft C] \le e^{cs^{-1/3}n^{2/3}\log n }\: e^{-\omega\left(s^{-1/3}n^{2/3}\log n\right)} = o(1)\] as required, where we used part $3$ of Corollary~\ref{cor:containers} to upper bound the number of containers. \end{proof} We now turn to the proof of Proposition~\ref{prop:probability_container}. \begin{proof}[Proof of Proposition~\ref{prop:probability_container}] We shall take $\varepsilon = \varepsilon_{\ref{lem:lotsmc}}$ to be the constant given by Lemma~\ref{lem:lotsmc}, which will be stated later, and fix an arbitrary container $C\in \mathcal{C}$. We partition the elements of $[n]$ into four different sets: let $M_C$ be the elements \emph{missing} from the container $C$ (that is, those present in neither the red nor the blue copy of $[n]$), $R_C$ be the elements that are \emph{only} present in $C$ in the red copy of $[n]$, $B_C$ be the elements only present in the blue copy of $[n]$, and let $T_C$ be the \emph{two-coloured} elements (that is, those that are present in the container in both colours --- think of these as the elements where the container does not restrict the colouring). Before diving into the details of the proof, we make some observations on what it means for a set $P$ to be compatible with the container $C$, thereby sketching our proof strategy. First, note that if $P$ contains an element that is missing from $C$, then clearly there is no colouring $\varphi$ of $A \cup P$, let alone a Schur colouring, such that $\varphi \subseteq C$, and so we do not have $P \triangleleft C$. In other words, if $P \triangleleft C$, then $P \cap M_C = \emptyset$. Similarly, we can use the fixed red and blue elements of $C$ to derive restrictions on $P$ in the event that $P\triangleleft C$. Indeed, suppose that either $P\cap R_C$ or $P\cap B_C$ contains a Schur triple. Then, as the colour of these elements is predetermined by the container $C$, any colouring $\varphi$ of $A\cup P$ such that $\varphi\subseteq C$ will have a monochromatic Schur triple, and hence will not be a Schur colouring. Therefore, if $P\triangleleft C$, then both $P\cap R_C$ and $P\cap B_C$ must be sum-free. These simple implications will already allow us to handle some types of containers. Indeed, if a container $C$ is such that the set $M_C$ is linearly large, then it is highly unlikely that the random perturbation $P$ avoids it. Similarly, if $R_C$ or $B_C$ contains quadratically many Schur triples, then $P$ will almost surely contain one of them. This leaves us with those containers for which $M_C$ is small and there are few Schur triples in $R_C$ and $B_C$, and this final case is more subtle. Using these conditions, together with the fact that $C$ spans few edges of the hypergraph $\mc H_A$, we will show that there are many wickets (recall Definition~\ref{def:wicket}) where the elements $y_i, z_i$ all have the same colour (say red); that is, they belong to $R_C$ (see Lemma~\ref{lem:lotsmc} for the precise statement). Then, with high probability, such a wicket appears in $P$, and this prohibits a Schur colouring of $A \cup P$. Indeed, if any of the elements $x_i$ are coloured red, they form a red Schur triple with $y_i$ and $z_i$. Otherwise, all the $x_i$ are blue, forming a blue Schur triple. We now proceed to give the details of each case. 
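To orient the reader, the three cases below yield bounds on $\Pr[P\triangleleft C]$ of the form
\[ e^{-\Omega(np)}, \qquad e^{-\Omega(n^2p^3)} \qquad \text{and} \qquad e^{-\Omega(n^5p^9)}, \]
respectively, and each of these is $e^{-\omega\left(s^{-1/3}n^{2/3}\log n\right)}$ once $p=\omega\big((sn^{13})^{-1/27}\log n\big)$ and $s=\Omega(n^{1/2})$; it is the third bound, coming from Case III, that forces the exponent $-\tfrac{1}{27}$ appearing in the statement of the proposition.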
\paragraph{Case I: $M_{C}$ contains at least $\varepsilon n$ elements.} As discussed in the proof sketch above, the event that $P\triangleleft C$ is contained in the event that $P\cap M_C=\emptyset$. Hence, if there are at least $\varepsilon n$ missing elements, we have that \[\Pr[P\triangleleft C] \leq \Pr[P\cap M_C=\emptyset ]\leq (1-p)^{\varepsilon n}\le e^{- \varepsilon np }.\] Thus, when $p = \omega \left( (sn)^{-1/3} \log n \right)$, we have the required bound $\Pr[P\triangleleft C]\leq e^{-\omega\left(s^{-1/3}n^{2/3}\log n\right)}$. The bound on $p$ holds due to the fact that $p=\omega\left((sn^{13})^{-1/27}\log n\right)$ and $s=\Omega(n^{1/2})$. \paragraph{Case II: either $R_{C}$ or $B_{C}$ contains at least $\varepsilon n^2$ Schur triples.} Assume without loss of generality that $R_{C}$ has at least $\varepsilon n^2$ Schur triples and let $RT(C)$ be the set of all \emph{non-degenerate} Schur triples in $R_{C}$. As there are only $n/2$ degenerate Schur triples in $[n]$ and we are only interested in asymptotics, we may assume that $|RT(C)|\geq \tfrac{\varepsilon}{2} n^2$. Now, as discussed above, if $P\triangleleft C$, then we must have that $RT(C)\cap P^3=\emptyset$. Hence, appealing to Lemma~\ref{lem:JansonSTs} with $\xi = \varepsilon/2$, we conclude that the probability that $P\triangleleft C$ is at most $e^{-\Omega( n^2p^3)}$, and so we have the desired bound on $\Pr[P\triangleleft C]$ as long as $p=\omega((sn^4)^{-1/9}(\log n)^{1/3})$. The latter holds as $p=\omega\left((sn^{13})^{-1/27}\log n\right)$ and $s=\Omega(n^{1/2})$. \paragraph{Case III: all remaining containers.} If a container $C$ is not covered by the previous two cases, then $|M_{C}|<\varepsilon n$ and both $R_{C}$ and $B_{C}$ contain fewer than $\varepsilon n^2$ Schur triples. The following lemma, which we shall prove later, shows that any such container must contain many wickets where, as illustrated in Figure~\ref{fig:wicket_red}, the elements $y_i$ and $z_i$ are all prescribed by the container to have the same colour. \begin{lem} \label{lem:lotsmc} There is some $\varepsilon > 0$ such that applying Corollary~\ref{cor:containers} to $\mc H_A$ with $\varepsilon$ yields the following. Let $C \in \mc C$ be a container for $\mc H_A$ for which $\card{M_C} < \varepsilon n$ and $R_C$ and $B_C$ each contain fewer than $\varepsilon n^2$ Schur triples. Then there is a constant $\xi > 0$ and a set $\chi \in \{ R_C, B_C \}$ such that there are at least $\xi n^5$ wickets where the elements $y_i, z_i$ belong to $\chi$. \end{lem} Assuming the lemma, the proof of Proposition~\ref{prop:probability_container} in this case is now straightforward. Indeed, we may without loss of generality take $\chi = R_C$. Suppose such a wicket was contained in $P$. Then, in any colouring contained in $C$, no element $x_i$ can be coloured red, as that would create a red Schur triple $(x_i, y_i, z_i)$. However, colouring each $x_i$ blue instead creates the blue Schur triple $(x_1, x_2, x_3)$. Hence, if $P \triangleleft C$, then $P$ cannot contain any of the $\xi n^5$ wickets given by Lemma~\ref{lem:lotsmc}. By Lemma~\ref{lem:JansonWickets}, this occurs with probability at most $e^{-\zeta n^5 p^9}$ for some constant $\zeta > 0$. By our choice of $p$, we have $n^5p^9 = \omega\left( s^{-1/3}n^{2/3}\log n\right)$, and so this is $e^{-\omega( s^{-1/3} n^{2/3} \log n) }$, as required. \end{proof} \begin{figure}[h!] 
\centering \includegraphics{wicket_with_red.pdf} \caption{A wicket where all the $y_i$ and $z_i$ are red.} \label{fig:wicket_red} \end{figure} To prove Lemma~\ref{lem:lotsmc}, we will make use of the following claim, which asserts that we can find an interval on which one of the two monochromatic sets is quite dense. \begin{clm}\label{clm:mcdense} For $\varepsilon \le 10^{-7}$, let $C \in \mc C$ be a container for $\mc H_A$ for which $\card{M_C} < \varepsilon n$ and $R_C$ and $B_C$ each contain fewer than $\varepsilon n^2$ Schur triples. Then there exists an $\eta \in [n]$ with $\eta \ge \tfrac12 n$ and a set $\chi \in \{ R_C, B_C \}$ such that $\card{ \chi \cap [\eta] } \ge \tfrac{9}{20} \eta$. \end{clm} We shall prove Claim~\ref{clm:mcdense} in due course, but let us first derive Lemma~\ref{lem:lotsmc} from it. \begin{proof}[Proof of Lemma~\ref{lem:lotsmc}] Choose $\varepsilon = \tfrac{1}{4} \min\left\{\delta_{\ref{thm:removal}}\left(\tfrac{1}{100}\right), 10^{-8}\right\}$, where $\delta_{\ref{thm:removal}}\left(\tfrac{1}{100}\right)$ is the constant from Theorem~\ref{thm:removal} (Green's removal lemma) applied with $\varepsilon_{\ref{thm:removal}} = \tfrac{1}{100}$, and let $\eta$ and $\chi$ be as given by Claim~\ref{clm:mcdense}, assuming without loss of generality that $\chi=R_C$. By assumption, $R_C \cap [\eta]$ has fewer than $\varepsilon n^2 \le 4 \varepsilon \eta^2$ Schur triples. By our choice of $\varepsilon$, we can apply Theorem~\ref{thm:removal} to obtain a sum-free set $R'\subseteq R_C \cap [\eta]$ such that $\card{(R_C\cap[\eta])\setminus R'}\leq \tfrac{1}{100}\eta $. Hence, we have $|R'| \ge \card{R_C \cap [\eta]} - \tfrac{1}{100}\eta \ge \tfrac{2}{5}\eta+1$, and Theorem~\ref{thm:sumstab} then yields that $R'$ is either contained in the odd numbers or does not have small elements (that is, $\min R' > \card{R'} > \tfrac{2}{5}\eta$). We now construct the desired wickets. First, let $X$ be the set of all even integers not larger than $\tfrac{1}{10}\eta$. Given a pair of elements in $X$, their difference also lies in $X$. Thus, discounting the $\tfrac{1}{40}\eta$ Schur triples of the form $(x, x, 2x)$, we find at least $10^{-3} \eta^2$ triples $(x_1, x_2, x_3) \in X^3$ with $x_1 \neq x_2$ and $x_3 = x_1 + x_2$. We next show that for any element $x\in X$, there are at least $\tfrac{1}{10}\eta$ Schur triples of the form $x+y=z$ with $y$ and $z$ distinct elements in $R'$. Then, to build one of the desired wickets, we can first choose a non-degenerate Schur triple $x_1 + x_2 = x_3$ in $X$, and subsequently choose for each $i$ a Schur triple $x_i + y_i = z_i$ with $y_i, z_i \in R'$. We shall need to ensure that these triples all use distinct elements (to obtain the nine-element wicket), but such considerations rule out at most a constant number of triples at each stage, and so we can build at least $10^{-7} \eta^5$ wickets in this fashion. Since $\eta \ge \tfrac12 n$, we can safely take $\xi = 10^{-9}$. Given $x\in X$, suppose first that $R'$ is contained in the odd numbers. There are $\tfrac12 (\eta - x)$ odd numbers $y$ such that $z = x+y \le \eta$. Moreover, each odd number in $[\eta]$ appears in at most two of these sums. Since $\card{R'} \ge \tfrac25 \eta$, there are at most $\tfrac{1}{10} \eta$ odd numbers in $[\eta]$ missing from $R'$, and hence for at least $\tfrac12 (\eta - x) - \tfrac{1}{5} \eta$ of these pairs, we have $y,z \in R'$. 
As $x \in X$, we have $x \le \tfrac{1}{10} \eta$, and hence we have at least $\tfrac{1}{4} \eta$ Schur triples of the desired form in this case. In the other case, $R'$ is such that $\min R' \ge |R'| \ge \tfrac25 \eta$, and we denote by $I$ the interval $[\tfrac25 \eta, \eta]$. There are $\tfrac35 \eta - x$ integers $y \in I$ for which the sum $z = x + y$ lies in $I$ as well. Since $\card{R'} \ge \tfrac25 \eta$, there are at most $\tfrac15 \eta$ integers in $I$ missing from $R'$, and each missing element appears in at most two of the sums. Hence, there are at least $\tfrac35 \eta - x - \tfrac25 \eta$ pairs $y,z \in R'$ with $x + y = z$, and since $x \le \tfrac{1}{10}\eta$, this leaves us with at least $\tfrac{1}{10} \eta$ Schur triples, as required. \end{proof} The proof of Claim~\ref{clm:mcdense} is still outstanding, a situation we now rectify. \begin{proof}[Proof of Claim~\ref{clm:mcdense}] By Corollary~\ref{cor:containers}, we know that the container $C$ hosts at most $\varepsilon sn^2$ edges of $\mc H_A$. Since each edge determines at most two targets in $A$, we can fix some element $\alpha\in A$ that is the target of at most $2\varepsilon n^2$ edges. Our first aim is to find some $\eta\ge \tfrac{n}{2}$ and $Q \subseteq [\eta] \setminus \{\alpha\}$ such that the following hold: \begin{align} \label{eq:largepartition} |Q| &\ge \tfrac{19}{20}\eta, \text{ and there is a partition } \Pi \text{ of } Q \text{ into pairs such that} \\ \nonumber &\text{for every $\{\pi_1,\pi_2\}\in \Pi$, the set $\{\pi_1,\pi_2,\alpha\}$ hosts a Schur triple.} \end{align} In other words, we aim to find some large $\eta$ such that almost all of the interval $[\eta]$ can be partitioned into pairs that form Schur triples with $\alpha$. We will then be able to show that there is some set $\chi\in \{R_C,B_C\}$ such that almost all the pairs of the partition $\Pi$ contain an element in $\chi$, which will complete the proof of the claim. Now, in order to establish \eqref{eq:largepartition}, we split into two cases, depending on how large $\alpha$ is. \paragraph{Case a: $\alpha\le \tfrac{1}{2}n$.} Let $\ell = \left\lfloor\tfrac{n}{2\alpha}\right\rfloor $, set $\eta = 2 \alpha \ell$ and take $Q=[\eta]\setminus\{\alpha,2\alpha\}$. We then choose \[ \Pi = \left\{\{2\alpha j +i, 2\alpha j + i + \alpha\}:0 \le j \le \ell-1, 1 \le i \le \alpha \right\}\setminus\{\{\alpha,2\alpha\}\}.\] Clearly $Q$ and $\Pi$ have the required properties for \eqref{eq:largepartition} and $\eta \ge \max\{2\alpha,n-2\alpha\}\ge n/2$. \paragraph{Case b: $\tfrac{1}{2}n <\alpha$.} Fix $\eta=\alpha$, $Q=[\eta]\setminus \{\floor{\tfrac{\alpha}{2}},\ceil{\tfrac{\alpha}{2}},\alpha\}$ and \[\Pi=\left\{\{i,\alpha-i\}:1 \le i \le \floor{\tfrac{\alpha}{2}} -1 \right\}.\] Again, it is easy to check that $Q$ and $\Pi$ satisfy the conditions of \eqref{eq:largepartition}. \vspace{2mm} Finally, given $\eta$, $Q$ and $\Pi$ as in \eqref{eq:largepartition}, we will show that there is a set $\chi\in\{R_C,B_C\}$ such that all but $2 \varepsilon^{1/2} n$ pairs in $\Pi$ contain an element from $\chi$. Given the lower bound on the size of $Q$, the lower bound on $\eta$ and our upper bound on $\varepsilon$, this will complete the proof of Claim~\ref{clm:mcdense}. 
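Indeed, granting that assertion, the pairs of $\Pi$ are disjoint subsets of $Q\subseteq[\eta]$, so each pair meeting $\chi$ contributes a distinct element of $\chi\cap[\eta]$, and hence
\[ \card{\chi\cap[\eta]} \ge \card{\Pi} - 2\varepsilon^{1/2} n \ge \tfrac12\cdot\tfrac{19}{20}\eta - 4\varepsilon^{1/2}\eta \ge \tfrac{9}{20}\eta, \]
using $\card{\Pi}=\tfrac12\card{Q}$, $\eta\ge\tfrac{n}{2}$ and $\varepsilon\le 10^{-7}$.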
Firstly we define the following subsets of $\Pi$: \begin{align*} \Pi_0&=\{{\bf{\pi}}\in \Pi:{\bf{\pi}}\cap M_C\neq \emptyset\}, \\ \Pi_R&=\{{\bf{\pi}}\in \Pi\setminus \Pi_0:{\bf{\pi}}\cap B_C= \emptyset\}, \\ \Pi_B&=\{{\bf{\pi}}\in \Pi\setminus \Pi_0:{\bf{\pi}}\cap R_C= \emptyset\}, \\ \Pi_{1}&=\{{\bf\pi}\in \Pi:|{\bf{\pi}}\cap B_C|=|{\bf{\pi}}\cap R_C|=1 \}. \end{align*} Note that $\Pi\subseteq \Pi_0 \cup \Pi_R \cup \Pi_B \cup \Pi_1$, but this is not quite a partition, as $\Pi_R$ and $\Pi_B$ intersect in pairs whose elements are both in $T_C$. By assumption, $|M_C|\le \varepsilon n$, and so $|\Pi_0|\le \varepsilon n$. Also, we have that either $\Pi_R$ or $\Pi_B$ contains at most $(2 \varepsilon)^{1/2} n$ pairs. Indeed, observe that if $\pi_R = \{x,y\} \in \Pi_R$ and $\pi_B = \{z,w\} \in \Pi_B$, then the set $\{x_R, y_R, z_B, w_B\}$ forms an edge of $\mc H_A$ with target $\alpha$. By our choice of $\alpha$, there are at most $2 \varepsilon n^2$ such edges, and so the smaller of $\Pi_R$ and $\Pi_B$ can contain at most $(2 \varepsilon)^{1/2} n$ pairs. Hence, without loss of generality, $|\Pi_B|\le (2\varepsilon)^{1/2}n$. Since all pairs in $\Pi_1$ and $\Pi_R \setminus \Pi_B$ contain at least one element of $R_C$, and we can partition $\Pi$ as $\Pi_0 \cup \Pi_B \cup (\Pi_R \setminus \Pi_B) \cup \Pi_1$, it follows that \[ \card{ ( \Pi_R \setminus \Pi_B ) \cup \Pi_1 } = \card{\Pi_R \setminus \Pi_B } + \card{\Pi_1} = \card{\Pi} - \card{\Pi_0} - \card{\Pi_B} \ge \card{\Pi} - \varepsilon n - (2 \varepsilon)^{1/2} n \ge \card{\Pi} - 2 \varepsilon^{1/2} n, \] completing the proof of the claim, and thereby the proposition. \end{proof} \section{Concluding Remarks} \label{sec:conc} In this paper, we explored Schur properties of randomly perturbed sets of integers. We addressed the case of sparse base sets $A$ and also dense base sets, describing the behaviour of the model as $|A|$ transitions from sublinear to linear and from $\tfrac{n}{2}$ to $\tfrac{4n}{5}$. A visualisation of our findings, as well as the previous works discussed in the introduction, can be seen in Figure~\ref{fig:summary}. We remark that our work completes the picture for dense base sets, bridging the gap between the extremal threshold of Hu (Theorem~\ref{thm:hu}) and the work of Aigner-Horev and Person (Theorem~\ref{thm:posdense}) giving a perturbed result that is tight for dense base sets $A$ with $|A|\le \tfrac{n}{2}$. \begin{figure}[h!] \centering \includegraphics{graph_overview.pdf} \caption{The complete picture so far} \label{fig:summary} \end{figure} On the other hand, for sparse base sets, we obtain upper and lower bounds that are a polynomial factor away from each other. We believe that the lower bound is likely the truth. Indeed, some evidence for this comes from our proof of the $1$-statement of Theorem~\ref{thm:main_sparse}. There, we perform a union bound over choices of container that our random perturbation can be compatible with. Our analysis then splits into cases depending on properties of the container and in each case, we upper bound the probability of our random perturbation being compatible with such a container. Now in Case I, we look at containers that `miss' linearly many elements, forcing the random perturbation also to avoid these elements in order to be compatible. In this case our proof gives that a bound of $p=\omega((sn)^{-1/3}\log n)$ is already sufficient to give our desired upper bound on the probability of compatibility. 
Therefore, if it were the case that all containers fall under Case I, then we would be able to provide a $1$-statement which gives the same bound as the $0$-statement up to a $\log$ factor. In fact, even more is true. If all containers were Case I, then we could remove the $\log$ factor appearing in the $1$-statement by using the fact that, for the random perturbation to be compatible with a container, it must also contain a small subset of the container (known as the `fingerprint' of the container, see the set $\mathcal{T}$ in Theorem~\ref{thm:container}). Similar ideas have been used in previous arguments using containers for sparse Ramsey theory (see for example~\cite{nenadov2016short}) and can be used to remove the need for $\log$ factors in $1$-statement probabilities. Unfortunately, however, we see no reason why all containers should fall into Case I of our analysis. Indeed, one can cook up examples of candidate containers that satisfy property $2$ of Corollary~\ref{cor:containers} but do not miss any elements. For example, suppose $A$ is the set of the largest $s$ integers in $[n]$, and we create $C$ by taking all elements less than $\tfrac{n-s}{2}$ in both the red and blue copy of $[n]$ and all integers larger than $\tfrac{n-s}{2}$ in only the red copy $V_R$ of $[n]$. Such a set $C$ has no edges of $\mc{H}_A$ and does not fall into Case I (or Case II, for that matter). Despite the apparent necessity for more cases, a deeper analysis of the other cases could perhaps reduce the required probability for the $1$-statement, perhaps all the way down to the $0$-statement. To summarise, we pose the following problem. \begin{prob} Is it true that, for $s=s(n)\in \mathbb{N}$ and $p=p(n)$ such that $\Omega(\sqrt n)\le s\le n/2$ and $p=\omega((ns)^{-1/3})$, and for any $A\subseteq [n]$ with $|A|=s$, we have that $A\cup [n]_p$ is Schur whp? \end{prob} Although we believe our argument falls short of the truth, we think that our approach for the $1$-statement is of value because it illustrates how the container method can be combined with structural information about the underlying set of the hypergraph. Note that improving the bound on the probability for Case III in the proof of Proposition~\ref{prop:probability_container} suffices to obtain a better $1$-statement. \vspace{2mm} Returning our focus to dense base sets, it would be interesting to extend the results to $r\ge3$ colours. As discussed in the introduction, whilst the random threshold is the same for any number of colours (see \cite{rodl1997rado}), the extremal threshold is already unknown for $r\ge 3$. For the perturbed model, as noted by Aigner-Horev and Person~\cite{aigner2019monochromatic}, when $r\ge 3$, the $r$-Schur problem is only interesting for very dense base sets. Indeed, for $|A|\le n/2$, taking $A$ to be a sum-free set (and thus only using one colour for $A$) means that adding $o(n^{1/2})$ random elements does not help, as this random set can be $2$-coloured without a monochromatic Schur triple. On the other hand, with $\omega(n^{1/2})$ random elements, the random set is already $r$-Schur, without the need to consider the base set at all. Generalising this argument, we only see a separation between the behaviour of the randomly perturbed model and that of the purely random model for the Schur property with $r$ colours when the base set is large enough that it cannot be $(r-2)$-coloured without a monochromatic Schur triple.
Let $\mc{E}(r,n)$ be the extremal threshold for $r$ colours; that is, the minimum integer $m$ such that any subset $A\subset [n]$ of size at least $m$ is $r$-Schur. Then the following problem arises naturally. \begin{prob} \label{prob:morecols} For $r,n,s(n)\in \mathbb{N}$ with $r\ge 3$, determine $p_r^*(n,s)$ such that the following statements hold. \begin{enumerate} \item[(0)] There exists a set $A \subseteq [n]$ with $\card{A}=\mc{E}(r-2,n) + s$ such that for $p=o(p_r^*(n,s))$, whp $A\cup[n]_p$ is not Schur. \item[(1)] For all $A \subseteq [n]$ with $\card{A}=\mc{E}(r-2,n) + s$ and $p=\omega(p_r^*(n,s))$, whp $A\cup[n]_p$ is Schur. \end{enumerate} \end{prob} Note that as long as we can colour $A$ with $r-1$ colours without creating a monochromatic Schur triple, in order to obtain a set that is $r$-Schur, the perturbation probability needs to be at least $n^{-2/3}$, as otherwise the random set is sum-free whp. This shows that if $s$ is such that $\mc{E}(r-2,n) + s\le \mc{E}(r-1,n)$, we have $p_r^*(n,s)\geq n^{-2/3}$. It would be interesting to determine if the behaviour of $p_r^*$ is similar to what we observe here in the two colour case as the size of the base set moves beyond $\mc{E}(r-1,n)$. Whilst we believe it may be possible to make progress on Problem~\ref{prob:morecols} without knowing the values of the extremal thresholds, a better understanding of the extremal thresholds for $r\ge 3$ remains a central and very appealing problem in this area. \begin{prob} Determine $\mc{E}(r,n)$ for $r\ge 3$. \end{prob} \section*{Acknowledgements} This project began at a workshop organised by Tibor Szab\'o from Freie Universit\"at Berlin, and the authors would like to thank him for his hospitality. \bibliographystyle{abbrv}
https://arxiv.org/abs/2112.06755
Equilateral Chains and Cyclic Central Configurations of the Planar 5-body Problem
Central configurations and relative equilibria are an important facet of the study of the $N$-body problem, but become very difficult to rigorously analyze for $N>3$. In this paper we focus on a particular but interesting class of configurations of the 5-body problem: the equilateral pentagonal configurations, which have a cycle of five equal edges. We prove a variety of results concerning central configurations with this property, including a computer-assisted proof of the finiteness of such configurations for any positive five masses with a range of rational-exponent homogeneous potentials (including the Newtonian case and point-vortex model), some constraints on their shapes, and we determine some exact solutions for particular N-body potentials.
\section{Introduction} In this work we consider some particular classes of relative equilibria of a planar $N$-body problem, in which $N$ point particles with non-negative masses $m_i$ interact through a central potential $U$: $$m_i \ddot{q}_{i;j} = \frac{\partial U}{\partial q_{i;j}}, \ \ i \in \{0,\ldots N-1\}, $$ $$ U = \sum_{i<j} m_i m_j/ r_{i,j}^{A-2}$$ where $q_i \in \mathbf{R}^2$ is the position of particle $i$, and $r_{i,j} = |q_i - q_j|$ are the mutual distances between the particles. The exponent $A$ is a real parameter in $(2,\infty)$. The most interesting and important case is the Newtonian gravitational model with $A=3$, but we believe it can be useful to generalize the problem since many features of the relative equilibria do not strongly depend on the exponent $A$. The potential can be extended to the case $A=2$ by using $$ U = \sum_{i<k} m_i m_k \log(r_{i,k}) $$ which has been used in models of fluid vortex tubes \cite{HelmholtzV,kirchhoff_vorlesungen_1883,aref1992grobli}. The relative equilibria (equilibria in a uniformly rotating reference frame) must satisfy the equations for a central configuration \cite{wintner41}, defined as configurations for which \begin{equation} \label{def_cc} \lambda (q_i - c) = \sum_{j = 1, j \neq i}^n \frac{m_j (q_i - q_j)}{r_{i,j}^A} \end{equation} The vector $c$ is the center of mass, $$c = \frac{1}{M} \sum_{i=1}^n m_i q_i,$$ with $$M = \sum_{i=1}^{n} m_i$$ the total mass, which we will always assume to be nonzero (for the special case in which $M=0$, see \cite{celli_2005}). The parameter $\lambda$ is real. The masses $m_i$ are also assumed to be real, and we are primarily interested in positive masses. In some earlier literature central configurations are also referred to as {\it permanent configurations} \cite{MacMillanBartky32, rayl39, Brumberg57}. The study of central configurations and relative equilibria provides an avenue for progress into the N-body problem, which otherwise presents formidable difficulty. There is a rich literature on these configurations, starting with Euler \cite{Euler_col} and Lagrange \cite{Lagrange_3} who completely characterized the relative equilibria for the Newtonian three-body problem. The collinear three-body configurations studied by Euler were further elucidated by Moulton \cite{moulton_straight_1910}, who showed there is a unique (up to scaling) central configuration for any ordering of $N$ positive masses on a line. For $N>3$ it is much harder to characterize the central configurations. One of the most basic questions is whether there are finitely many equivalence classes of central configurations for each choice of positive masses. This question has been highlighted in the planar case by several authors \cite{chazy18, wintner41, smale98}. It has been resolved for the Newtonian four-body problem \cite{hampton_finiteness_2005}, the four-vortex problem \cite{hampton_finiteness_2009}, and partially for the Newtonian five-body problem \cite{albouykaloshin}. There is much work on other interesting questions on central configurations, such as their stability. Rather than attempt to summarize this work we recommend the excellent surveys by Moeckel \cite{LMS, moeckel_central_1990}. 
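As a quick sanity check on definition \eqref{def_cc} (an illustration we add here for the reader; it plays no role in the proofs below), one can verify numerically that the regular pentagon with equal masses is a central configuration for the Newtonian exponent $A=3$:

\begin{verbatim}
import numpy as np

# Illustrative check (not part of the computations used in this paper): the
# regular pentagon with equal masses satisfies the central configuration
# equations (def_cc) for the Newtonian exponent A = 3.
N, A = 5, 3
m = np.ones(N)
theta = 2 * np.pi * np.arange(N) / N
q = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # regular pentagon vertices
c = (m[:, None] * q).sum(axis=0) / m.sum()            # center of mass (the origin here)

rhs = np.zeros_like(q)
for i in range(N):
    for j in range(N):
        if j != i:
            d = q[i] - q[j]
            rhs[i] += m[j] * d / np.linalg.norm(d) ** A

# the multiplier lambda must be the same for every body: rhs_i = lambda (q_i - c)
lam = [rhs[i] @ (q[i] - c) / np.dot(q[i] - c, q[i] - c) for i in range(N)]
print(np.allclose(rhs, np.array(lam)[:, None] * (q - c)), lam)
\end{verbatim}

The printed multiplier $\lambda$ is indeed the same for all five bodies, as required by \eqref{def_cc}; the regular pentagon will reappear several times in the analysis below.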
\section{Equations for Central Configurations and Equilateral Chains} Choosing two indices $i$ and $j$, we can take the inner product of (\ref{def_cc}) with $q_i - q_j$ to get $$\lambda (q_i - c) \cdot (q_i - q_j) = \sum_{k = 1, k \neq i}^n \frac{m_k (q_i - q_k) \cdot (q_i - q_j)}{r_{i,k}^A}$$ and then the left-hand side becomes $$\lambda (q_i - c) \cdot (q_i - q_j) = \frac{\lambda}{M} \sum_{k=1}^n m_k (q_i - q_j) \cdot (q_i - q_k) = \tilde{\lambda} \sum_{k=1}^n m_k (r_{i,j}^2 + r_{i,k}^2 - r_{j,k}^2)/2$$ in which we have introduced $\tilde{\lambda} = \frac{\lambda}{M}$. The inner-products on the right-hand side can be rewritten in terms of the mutual distances as well. After putting all the terms on one side of the equation and cancelling a factor of $1/2$, we obtain for each choice of $i \neq j$ the equations \begin{equation} \label{elem_aceqs} \sum_{k=1}^n m_k (r_{i,k}^{-A} - \tilde{\lambda}) (r_{i,j}^2 + r_{i,k}^2 - r_{j,k}^2) = 0 \end{equation} If we now introduce variables $S_{i,j} = r_{i,j}^{-A} - \tilde{\lambda}$ and $A_{i,j,k} = r_{j,k}^2 - r_{i,k}^2 - r_{i,j}^2$ we obtain the compact form $$f_{i,j} = \sum_{k = 1}^n m_k S_{i,k} A_{i,j,k} = 0.$$ Gareth Roberts has observed that these equations follow from the developments given in \cite{albouy_probleme_1997}; they are sometimes referred to as the `asymmetric Albouy-Chenciner equations'. If we combine $f_{i,j}$ and $f_{j,i}$ we obtain $n(n-1)/2$ equations $$g_{i,j} = f_{i,j} + f_{j,i} = \sum_{k = 1}^n m_k (S_{i,k} A_{i,j,k} + S_{j,k} A_{j,i,k}) = 0.$$ These are the equations presented as the Albouy-Chenciner equations in \cite{hampton_finiteness_2005}. These can be interpreted kinematically as the statement that there exists a $\lambda$ for which $$\lambda r_{i,j}^2 = - (q_i - q_j) \cdot (\ddot{q}_i - \ddot{q}_j)$$ for all $i \neq j$. By taking the wedge product instead of the inner product, we obtain a different set of equations referred to as the Laura-Andoyer equations \cite{laura1905sulle, andoyer1906main} \begin{equation} \label{eq:alb1} L_{i,j} := \sum_{k \neq i,j} m_k (R_{i,k} - R_{j,k}) \Delta_{i,j,k} = 0 \end{equation} where $\Delta_{i,j,k}$ is the oriented area $(q_i - q_j) \wedge (q_i - q_k)$, with $q_i \in \mathbb{R}^2$, and $R_{i,j} = r_{i,j}^{-A} = (|q_i - q_j|)^{-A}$. Sometimes the $\Delta_{i,j,k}$ will be replaced by the non-negative $D_{i,j,k} = |\Delta_{i,j,k}|$ in order to make the sign of the terms in our equations more apparent. We define an $N$-body configuration to be an {\em equilateral chain} if at least $N-1$ consecutive distances involving all of the points are equal; we will choose the particular convention with $$ r_{1,2} = r_{2,3} = \ldots = r_{N-2,N-1} = r_{N-1,N} $$ Similarly, an {\em equilateral cyclic configuration} has $N$ equal distances in a complete cycle, and we will choose our indexing so that $$ r_{1,2} = r_{2,3} = \ldots = r_{1,N} $$ The five-body cyclic configurations generalize the rhomboidal configurations of the four-body problem, which have been well studied in both the Newtonian and vortex cases \cite{long2002four,perez2007convex,hampton_vort_pairs2014,leandro2019structure,oliveira2020stability}. The five-body configurations of a rhombus with a central mass are another interesting and well-studied extension, which contain continua of central configurations if a negative central mass is allowed \cite{roberts_continuum_1999, gidea2010symmetric, albouykaloshin, cornelio2021central}.
The Laura-Andoyer equations for the equilateral pentagon case fall into two sets of five; one of these sets contains equations involving only two of the masses: \begin{align} & m_4 \Delta_{1,3,4}(R_{1,4}-R_{1,2}) + m_5 \Delta_{1,3,5} (R_{1,2} - R_{3,5}) = 0 , \label{LA2} \\ & m_1 \Delta_{1,2,4}(R_{1,4}-R_{1,2}) + m_5 \Delta_{2,4,5} ( R_{1,2}-R_{2,5} ) = 0 ,\nonumber \\ & m_3 \Delta_{2,3,5}(R_{3,5}-R_{1,2}) + m_4 \Delta_{2,4,5} (R_{1,2} - R_{2,4}) = 0 , \nonumber\\ & m_1 \Delta_{1,3,5} (R_{1,3} -R_{1,2}) + m_2 \Delta_{2,3,5} (R_{1,2} - R_{2,5}) = 0, \nonumber\\ & m_2 \Delta_{1,2,4} ( R_{2,4}-R_{1,2}) + m_3 \Delta_{1,3,4} (R_{1,2}- R_{1,3} ) = 0, \nonumber \end{align} while the remaining equations involve three masses: \begin{align*} & m_3 \Delta_{1,2,3} (R_{1,3} -R_{1,2} ) + m_4 \Delta_{1,2,4} (R_{1,4}- R_{2,4})+ m_5 \Delta_{1,2,5} (R_{1,2} - R_{2,5} ) = 0, \\ & m_2 \Delta_{1,2,5} (R_{2,5} -R_{1,2} ) + m_3 \Delta_{1,3,5} (R_{3,5}- R_{1,3})+ m_4 \Delta_{1,4,5} (R_{1,2} - R_{1,4} ) = 0, \\ & m_1 \Delta_{1,2,3} (R_{1,2} - R_{1,3} ) + m_4 \Delta_{2,3,4} (R_{2,4}- R_{1,2})+ m_5 \Delta_{2,3,5} (R_{2,5} - R_{3,5} ) = 0, \\ & m_1 \Delta_{1,3,4} (R_{1,3} -R_{1,4} ) + m_2 \Delta_{2,3,4} (R_{1,2}- R_{2,4})+ m_5 \Delta_{3,4,5} (R_{3,5} - R_{1,2} ) = 0, \\ & m_1 \Delta_{1,4,5} (R_{1,4} -R_{1,2} ) + m_2 \Delta_{2,4,5} (R_{2,4}- R_{2,5})+ m_3 \Delta_{3,4,5} (R_{1,2} - R_{3,5} ) = 0, \end{align*} If we normalize the configurations by choosing $q_1 = (-1/2, 0)$ and $q_2 = (1/2,0)$ (so $r_{1,2} = 1$), then the $\Delta_{i,j,k}$ are $$ \Delta_{1,2,3} = y_3, \ \ \ \ \Delta_{1,2,4} = y_4, \ \ \ \ \Delta_{1,2,5} = y_5$$ $$ \Delta_{1,3,4} = -x_{4} y_{3} + x_{3} y_{4} + \frac{1}{2} (y_{4} - y_{3} ) $$ $$ \Delta_{1,3,5} = -x_{5} y_{3} + x_{3} y_{5} + \frac{1}{2} (y_{5} - y_{3} )$$ $$ \Delta_{1,4,5} = -x_{5} y_{4} + x_{4} y_{5} + \frac{1}{2} (y_{5} - y_{4} )$$ $$ \Delta_{2,3,4} = -x_{4} y_{3} + x_{3} y_{4} + \frac{1}{2} (y_{3} - y_{4} )$$ $$ \Delta_{2,3,5} = -x_{5} y_{3} + x_{3} y_{5} + \frac{1}{2} (y_{3} - y_{5} )$$ $$ \Delta_{2,4,5} = -x_{5} y_{4} + x_{4} y_{5} + \frac{1}{2} (y_{4} - y_{5} )$$ $$ \Delta_{3,4,5} = -x_{4} y_{3} + x_{5} y_{3} + x_{3} y_{4} - x_{5} y_{4} - x_{3} y_{5} + x_{4} y_{5} $$ \section{Finiteness results} To study the finiteness of the planar equilateral configurations we use the asymmetric Albouy-Chenciner equations $f_{i,j}=0$ and the Cayley-Menger determinants for all four-point subconfigurations \begin{equation} \label{CM} C_{i,j,k,l} = det \left ( \begin{array}{ccccc} 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & r_{i,j}^2 & r_{i,k}^2 & r_{i,l}^2 \\ 1 & r_{i,j}^2 & 0 & r_{j,k}^2 & r_{j,l}^2 \\ 1 & r_{i,k}^2 & r_{j,k}^2 & 0 & r_{k,l}^2 \\ 1 & r_{i,l}^2 & r_{j,l}^2 & r_{k,l}^2 & 0 \\ \end{array} \right ) = 0 \end{equation} (with distinct $i$, $j$, $k$, and $l$). Following a very similar procedure to that described in \cite{hampton_finiteness_2011}, with the assistance of the software Sage \cite{sage_2020}, Singular \cite{DGPS21}, and Gfan \cite{gfan} we find relatively easily that there are finitely many equilateral central configurations for $A=2$ and $A=3$, for any positive masses. The tropical prevariety of the system is much simpler in the vortex case $A=2$, having only 22 generating rays compared to 37 in the Newtonian case. In fact the Newtonian case generalizes for potential exponents greater than 3 as well. 
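To make this computational setup concrete, the following minimal SymPy sketch (our own illustration; the computations behind the results in this paper were carried out with Sage, Singular, and Gfan as described above) assembles the asymmetric Albouy-Chenciner polynomials $f_{i,j}$ and a four-point Cayley-Menger determinant for five bodies:

\begin{verbatim}
import itertools
import sympy as sp

# Illustrative SymPy sketch (not the Sage/Singular/Gfan scripts used for the
# proof): assemble the asymmetric Albouy-Chenciner polynomials f_{ij} and a
# four-point Cayley-Menger determinant for n = 5 bodies and exponent A.
n, A = 5, 3
r = {}
for i, j in itertools.combinations(range(1, n + 1), 2):
    r[i, j] = r[j, i] = sp.Symbol('r%d%d' % (i, j), positive=True)
m = sp.symbols('m1:6', positive=True)      # masses m_1, ..., m_5
lam = sp.Symbol('lam')                     # the normalized multiplier

def S(i, k):                               # S_{ik} = r_{ik}^{-A} - lambda
    return r[i, k]**(-A) - lam

def Acoef(i, j, k):                        # A_{ijk} = r_{jk}^2 - r_{ik}^2 - r_{ij}^2
    rjk = sp.Integer(0) if j == k else r[j, k]
    return rjk**2 - r[i, k]**2 - r[i, j]**2

def f(i, j):                               # the singular k = i term is omitted
    return sum(m[k - 1] * S(i, k) * Acoef(i, j, k)
               for k in range(1, n + 1) if k != i)

def cayley_menger(i, j, k, l):             # vanishes iff the four points are coplanar
    idx = [i, j, k, l]
    M = sp.zeros(5, 5)
    for a in range(1, 5):
        M[0, a] = M[a, 0] = 1
    for a, p in enumerate(idx, 1):
        for b, q in enumerate(idx, 1):
            if p != q:
                M[a, b] = r[p, q]**2
    return M.det()

f12, C1234 = f(1, 2), cayley_menger(1, 2, 3, 4)
\end{verbatim}

Together with the equilateral relations $r_{1,2}=r_{2,3}=r_{3,4}=r_{4,5}=r_{1,5}$, polynomial systems of this kind form the input to the tropical prevariety and elimination computations discussed in this section.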
Because Gfan cannot currently compute a prevariety for variable exponents, we used a slight variation of the Albouy-Chenciner equations to compute and to analyze the initial form systems for rational exponents $A>2$. Let $Q_{i,j} = r_{i,j}^{-A+2}$, so that $$ f_{i,j} = \sum_{k = 1}^n m_k S_{ik} A_{i,j,k} = \sum_{k = 1}^n m_k (Q_{i,j}r_{i,j}^{-2} - \tilde{\lambda}) A_{i,j,k} $$ We used Gfan to compute the prevariety of these equations with the normalization $\tilde{\lambda} = 1$, and cleared denominators to obtain a polynomial system in $r_{i,j}$ and $Q_{i,j}$, combined with the four-point Cayley-Menger determinants. For rational $A > 2$, the rays of the tropical prevariety fall into 9 equivalence classes under the action of the cyclic group $C_5$ on the indices of the $r_{i,j}$. Since we have chosen the equilateral configurations to have $r_{i,j} = r_{i+1, j+1}$, this action fixes our first coordinate $r_{i,j}$ and cyclically permutes the remaining five distances. In the table below, the coordinate exponents are in the order $(r_{1,2}, r_{1,3}, r_{1,4}, r_{2,4}, r_{2,5}, r_{3,5})$. Six of these rays are independent of $A$. \begin{center} \begin{tabular}{|c|c|c|} \hline Label & Ray Representative & Multiplicity \\ \hline $h_1$ & $(-1,-1,-1,-1,-1,-1)$ & 1 \\ \hline $h_2$ & $(1,1,1,1,1,1)$ & 1 \\ \hline $h_3$ & $(0,1,1,1,1,1)$ & 1 \\ \hline $h_4$ & $(0,0,1,1,1,1)$ & 5 \\ \hline $h_5$ & $(1,0,1,1,1,1)$ & 5 \\ \hline $h_6$ & $(1, 0, 1, 0, 1, 1)$ & 5 \\ \hline $h_7$ & $(A-2, -2, A-2, -2, A-2, A-2)$ & 5 \\ \hline $h_8$ & $(A-2, -2, A-2, 0, A-2, A-2)$ & 10 \\ \hline $h_9$ & $(A-2, -2, A-2, A-2, A-2, A-2)$ & 5 \\ \hline \end{tabular} \end{center} Because of the balance condition for tropical varieties \cite{Maclagan}, we can restrict our analysis to cones that intersect the half-space containing exponent vectors with a non-negative sum. This excludes the first ray in our list. 
Again after reducing by the $C_5$ symmetry we have cones \begin{center} \begin{tabular}{|c|c|} \hline Label & Representative Cone Rays \\ \hline $C_1$ & \{(0, 0, 1, 1, 1, 1)\} \\ \hline $C_2$ & \{(0, 1, 1, 1, 1, 1)\} \\ \hline $C_3$ & \{(1, 0, 1, 0, 1, 1)\} \\ \hline $C_4$ & \{(1, 0, 1, 1, 1, 1)\} \\ \hline $C_5$ & \{(1, 1, 1, 1, 1, 1)\} \\ \hline $C_6$ & \{(A-2, -2, A-2, -2, A-2, A-2)\} \\ \hline $C_7$ & \{(A-2, -2, A-2, 0, A-2, A-2)\} \\ \hline $C_8$ & \{(A-2, -2, A-2, A-2, 0, A-2)\} \\ \hline $C_9$ & \{(A-2, -2, A-2, A-2, A-2, A-2)\} \\ \hline $C_{10}$ & \{(0, 0, 1, 1, 1, 1), (0, 1, 1, 0, 1, 1)\} \\ \hline $C_{11}$ & \{(0, 0, 1, 1, 1, 1), (0, 1, 1, 1, 0, 1)\} \\ \hline $C_{12}$ & \{(0, 0, 1, 1, 1, 1), (0, 1, 1, 1, 1, 1)\} \\ \hline $C_{13}$ & \{(0, 0, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1)\} \\ \hline $C_{14}$ & \{(0, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1)\} \\ \hline $C_{15}$ & \{(1, 0, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1)\} \\ \hline $C_{16}$ & \{(1, 0, 1, 0, 1, 1), (A-2, -2, A-2, 0, A-2, A-2)\} \\ \hline $C_{17}$ & \{(1, 0, 1, 0, 1, 1), (A-2, 0, A-2, -2, A-2, A-2)\} \\ \hline $C_{18}$ & \{(1, 0, 1, 1, 1, 1), (A-2, -2, A-2, A-2, A-2, A-2)\} \\ \hline $C_{19}$ & \{(A-2, -2, A-2, -2, A-2, A-2), (A-2, -2, A-2, 0, A-2, A-2)\} \\ \hline $C_{20}$ & \{(A-2, -2, A-2, -2, A-2, A-2), (A-2, 0, A-2, -2, A-2, A-2)\} \\ \hline $C_{21}$ & \{(0, 0, 1, 1, 1, 1), (0, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1)\} \\ \hline $C_{22}$ & \{(1, 0, 1, 0, 1, 1), (A-2, -2, A-2, -2, A-2, A-2), \\ & (A-2, -2, A-2, 0, A-2, A-2), (A-2, 0, A-2, -2, A-2, A-2)\} \\ \hline \end{tabular} \end{center} Remarkably, for each cone in the tropical prevariety the initial form polynomials factor enough that we can compute the elimination ideal of the initial form system in the ring $\mathbb{Q} [m_1, m_2, m_3, m_4, m_5]$, using Singular \cite{DGPS420} within Sage \cite{sagemath91}, without the need to specialize to a particular value of $A$ (apart from the condition that $A \neq 2$. To rule out a nontrivial (i.e. nonmonomial) initial form ideal we only need to assume that no subset of the masses has a vanishing sum, including the total mass. It seems possible that this formulation of the equations with the $Q_{i,j}$ may be useful in studying central configurations in other contexts. We can include the case of rational $A$ in this result, since if $A = p/q$ then we can use the polynomial conditions $Q_{i,j}^q r_{i,j}^p - r_{i,j}^{2 q}=0$ to define the $Q_{i,j}$. Altogether this gives a computer-assisted proof of \begin{customthm}1 For nonzero masses with nonzero subset sums (i.e. $m_i + m_j \neq 0$, $m_i + m_j + m_k \neq 0$, $m_i + m_j + m_k + m_l \neq 0$, $m_1 + m_2 + m_3 + m_4 + m_5 \neq 0$), there are finitely many planar equilateral 5-body central configurations for any rational potential exponent $A \ge 2$. \end{customthm} This result strongly suggests the conjecture that there are finitely many planar equilateral 5-body central configurations for any real $A \ge 2$, but the proof of that would require different methods. \section{The symmetric case} In this section we impose the further restrictions of an axis of symmetry $r_{1,3} = r_{2,5}$, $r_{1,5} = r_{2,3}$, and $r_{1,4} = r_{2,4}$. In addition to normalizing the size of the configuration with $r_{1,2} = 1$, we can choose cartesian coordinates $q_1 = (-1/2, 0)$, $q_2 = (1/2,0)$, $q_3 = (x_3, y_3)$, $q_4 = (0, y_4)$, and $q_5 = (-x_3, y_3)$. 
The equilateral constraints in these coordinates become: $$ 4 x_3^2 - 4 x_3 + 4 y_3^2 - 3 = 0 $$ $$ x_3^2 + y_3^2 - 2 y_3 y_4 + y_4^2 - 1 = 0 $$ We can parameterize the configurations by $y_4$, in terms of which $$ y_3 = \frac{8 \, y_{4}^{3} + 2 \, y_{4} \pm \sqrt{-16 \, y_{4}^{4} + 56 \, y_{4}^{2} + 15}}{4 \, {\left(4 \, y_{4}^{2} + 1\right)}} $$ $$ x_3 = \frac{4 \, y_{4}^{2} + 1 \pm 2 \, y_4 \sqrt{-16 \, y_{4}^{4} + 56 \, y_{4}^{2} + 15} }{4 \, {\left(4 \, y_{4}^{2} + 1\right)}} $$ Note that the choices of sign must be the same, giving us two curves of configurations. We will refer to the positive choice of sign as branch A, and the other as branch B. The wedge products $\Delta_{i,j,k}$ in these coordinates are \begin{align*} \Delta_{1,2,3} & = y_3 \\ \Delta_{1,2,4} & = y_4 \\ \Delta_{1,3,4} & = x_3 y_4 + (y_4 - y_3)/2 \\ \Delta_{1,3,5} & = 2 y_3 x_3 \\ \Delta_{1,4,5} & = x_3 y_4 - (y_4 - y_3)/2 \\ \Delta_{3,4,5} & = 2 x_3 (y_4 - y_3) \\ \end{align*} These configurations are shown in Figure \ref{equifig}. \begin{figure} \includegraphics[width=3.5in]{symmetric_equipentagonalsAB.pdf} \caption{Normalized symmetric equilateral pentagons. Branch A is in blue, branch B in red.} \label{equifig} \end{figure} For the vortex case $A=2$, it is possible to compute a Groebner basis of the system used in the finiteness proof. This basis shows there are only two possible symmetric equilateral configurations: the regular pentagon with equal masses, and a configuration (using normalized masses $m_1 = m_2 = 1$) with $m_4$ satisfying $$ 64 m_4^9 - 752 m_4^8 + 2316 m_4^7 - 109 m_4^6 - 2830 m_4^5 + 45 m_4^4 + 1362 m_4^3 + 215 m_4^2 - 149 m_4 - 17 = 0 $$ which has a single positive root at $m_4 \approx 0.34199$ and then $m_3=m_5 \approx 2.32$. We were also able to compute a Groebner basis for $A=4$, with the mass polynomial \begin{align*} & 12288 m_4^{16} - 232064 m_4^{15} + 636883 m_4^{14} + 5616221 m_4^{13} + 2342977 m_4^{12} - 15626678 m_4^{11} \\ & - 6546497 m_4^{10} + 17143788 m_4^9 - 1407668 m_4^8 - 5326884 m_4^7 + 456601 m_4^6 + 2374416 m_4^5 \\ & - 239673 m_4^4 - 387130 m_4^3 - 33431 m_4^2 + 25519 m_4 + 957 = 0 \end{align*} \begin{figure}[h!b] \includegraphics[width=3in]{5bodysym.png} \caption{Symmetric vortex ($A=2$) central configuration} \end{figure} The Laura-Andoyer equations for the symmetric case are $$ L_{1,3} = m_3 (1 - R_{3,5}) \Delta_{1,3,5} + m_4 (R_{1,4} - 1) \Delta_{1,3,4} = 0 $$ $$ L_{1,4} = m_1 (1 - R_{1,4}) \Delta_{1,2,4} + m_3 (R_{1,3} - 1) \Delta_{1,3,4} = 0 $$ $$ L_{1,5} = m_1 (1 - R_{1,3}) \Delta_{1,2,3} + m_3 (R_{1,3} - R_{3,5}) \Delta_{1,3,5} + m_4 (R_{1,4} - 1)\Delta_{1,4,5} = 0 $$ $$ L_{3,4} = m_1 \left [ (R_{1,3} - R_{1,4}) \Delta_{1,3,4} + (1 - R_{1,4}) \Delta_{2,3,4} \right ] + m_3 (R_{3,5} - 1) \Delta_{3,4,5} = 0 $$ We can highlight the linearity of these equations in the masses by forming the mass-coefficient matrix: $$ \left(\begin{array}{ccc} 0 & ( 1- R_{35}) \Delta_{1,3,5} & (R_{1,4} - 1) \Delta_{1,3,4} \\ (1 - R_{1,4}) \Delta_{1,2,4} & (R_{1,3} - 1) \Delta_{1,3,4} & 0 \\ (1 - R_{1,3}) \Delta_{1,2,3} & (R_{1,3} - R_{3,5}) \Delta_{1,3,5} & (R_{1,4} - 1)\Delta_{1,4,5} \\ (R_{1,3} - R_{1,4}) \Delta_{1,3,4} + (1 - R_{1,4}) \Delta_{1,4,5} & (R_{3,5} - 1) \Delta_{3,4,5} & 0 \end{array}\right) $$ We can row-reduce this a little to get \begin{equation} \label{MassCoef} \left(\begin{array}{ccc} 0 & ( 1- R_{35} )\Delta_{1,3,5} & (R_{1,4} - 1) \Delta_{1,3,4} \\ (1 - R_{1,4}) \Delta_{1,2,4} & (R_{1,3} - 1) \Delta_{1,3,4} & 0 \\ (1 - R_{1,3}) \Delta_{1,2,3} & \left [ (R_{1,3} - 
R_{3,5}) - (1 - R_{3,5}) \frac{\Delta_{1,4,5}}{\Delta_{1,3,4}} \right ] \Delta_{1,3,5}& 0 \\ (R_{1,3} - R_{1,4}) \Delta_{1,3,4} + (1 - R_{1,4}) \Delta_{1,4,5} & (R_{3,5} - 1) \Delta_{3,4,5} & 0 \end{array}\right) \end{equation} This matrix must have a kernel vector of masses in the positive orthant. This imposes many constraints. To simplify the analysis of these constraints, we assume (without loss of generality) that $y_4 \ge 0$. This convention means that $\Delta_{1,2,4} \ge 0$. It is immediate from equation $L_{1,4}$ that $y_4$ cannot be zero, so we can assume that $\Delta_{1,2,4}$ is strictly positive. In terms of the signs of the $\Delta_{i,j,k}$ and the magnitudes of the mutual distances relative to $r_{1,2}$, there are five cases for the branch A configurations. Representatives of these are shown in Figure \ref{branch_As}. \begin{figure}[h!b] \includegraphics[width=5in]{ABsL.pdf} \caption{Sign type representatives of A branch configurations} \label{branch_As} \end{figure} All of the branch A configurations have $\Delta_{1,2,3} > 0$, $\Delta_{1,2,4} > 0$, $\Delta_{1,3,5} > 0$, $\Delta_{1,4,5} > 0$, and $r_{1,3} > \frac{\sqrt{6}}{2} > 1$. The distinguishing geometric properties of the branch A configurations are: \begin{enumerate} \item[A1)] Concave, $\Delta_{1,3,4} < 0$, $\Delta_{3,4,5} < 0$, $r_{1,4} < 1$, $r_{3,5} < 1$. \item[A2)] Concave, $\Delta_{1,3,4} < 0$, $\Delta_{3,4,5} < 0$, $r_{1,4} < 1$, $r_{3,5} > 1$. \item[A3)] Concave, $\Delta_{1,3,4} > 0$, $\Delta_{3,4,5} < 0$, $r_{1,4} < 1$, $r_{3,5} > 1$. \item[A4)] Convex, $\Delta_{1,3,4} > 0$, $\Delta_{3,4,5} > 0$, $r_{1,4} > 1$, $r_{3,5} > 1$. \item[A5)] Convex, $\Delta_{1,3,4} > 0$, $\Delta_{3,4,5} > 0$, $r_{1,4} > 1$, $r_{3,5} < 1$. \end{enumerate} \begin{customthm}{2} The only possible branch A central configurations are of type A2 or A4. There is a unique type A2 central configuration for all $A \ge 2$. \end{customthm} \begin{proof} For type A1 configurations, we can immediately verify that $L_{1,3} < 0$, so no such central configurations are possible. This is also true for the borderline A1/A2 configuration with $r_{3,5} = 1$. The type A2 configurations contain an interesting central configuration. For the mass-coefficient matrix \eqref{MassCoef} to have nonzero mass solutions, the following minor must vanish: \begin{align*} F(y_4) & = \det \left(\begin{array}{cc} (1 - R_{1,4}) \Delta_{1,2,4} & (R_{1,3} - 1) \Delta_{1,3,4} \\ (R_{1,3} - R_{1,4}) \Delta_{1,3,4} + (1 - R_{1,4}) \Delta_{1,4,5} & (R_{3,5} - 1) \Delta_{3,4,5} \end{array}\right) \\ & = (1 - R_{1,4})(R_{3,5} - 1) \Delta_{1,2,4} \Delta_{3,4,5} \\ & \ \ + (1 - R_{1,3}) \Delta_{1,3,4} ((R_{1,3} - R_{1,4}) \Delta_{1,3,4} + (1 - R_{1,4}) \Delta_{1,4,5}) = 0 \end{align*} We can prove the existence of a type A2 central configuration for any $A \ge 2$ by examining the sign of $F$ at the regional endpoints. At the lower endpoint, where $y_4 = \frac{2 - \sqrt{3}}{2}$, the points $1$, $2$, $3$, and $5$ form a square, with $r_{3,5} = 1$, $r_{1,4} = \sqrt{ 2 - \sqrt{3}}$, $\Delta_{1,3,4} = \frac{1 - \sqrt{3}}{2} < 0$, and $\Delta_{1,4,5} = \frac{1}{2}$. Since $R_{3,5} = 1$, the first portion of $F$ is zero, which makes it elementary to check the sign of the remaining terms and see that $F(\frac{2 - \sqrt{3}}{2}) > 0$. At the other endpoint, $y_4 = \frac{\sqrt{5 - 2 \sqrt{5}}}{2}$, and the points $1$, $4$, and $3$ are collinear (as are points $2$, $4$, and $5$), so $\Delta_{1,3,4} = 0$.
Only the first portion of $F$ is nonzero for this configuration, and it is straightforward to see that $F(\frac{\sqrt{5 - 2 \sqrt{5}}}{2}) < 0$. We can prove the uniqueness of the type A2 central configuration for each $A \ge 2$ using interval arithmetic, simply evaluating $F$ and $\frac{d F}{d y_4}$ for intervals of $y_4$ and $A$. The lack of any common zeros shows that no bifurcations occur, and it suffices to check the uniqueness for $A=2$, for which we have a Groebner basis. For type A3 configurations, and the boundary case with $D_{1,3,4} = 0$, it is immediate that $L_{1,3} > 0$, so no such central configurations are possible. Type A4 configurations include the regular pentagon, which is a central configuration for equal masses for all potential exponents $A$. Remarkably, there is a bifurcation at $A_c \approx 3.12036856$. It appears that for $A \in [2, A_c]$ the regular pentagon is the only central configuration of type A4, and for $A>A_c$ there are 3 type A4 central configurations (including the regular pentagon). For large $A$, these two new central configurations converge to the A3/A4 boundary case for which $D_{3,4,5} = 0$, and the house-like configuration with $r_{3,5} = 1$ and $y_4 = 1 + \frac{\sqrt{3}}{2}$. With interval arithmetic we can prove the uniqueness of the regular pentagon for $A \in [2,3]$, but we do not have an exact value for the bifurcation value $A_c$. For type A5 configurations once again $L_{1,3} < 0$, and no such central configurations are possible. \end{proof} For the branch B configurations there are three subcases of convex configurations and two concave, along with some exceptional cases on the borders between them. For all branch B configurations $\Delta_{1,4,5} < 0$ and $r_{3,5} < 1$. The subcases are: \begin{enumerate} \item[B1)] Convex, $\Delta_{1,2,3} < 0$, $\Delta_{1,3,4} > 0$, $\Delta_{1,3,5} < 0$, $\Delta_{3,4,5} > 0$, $r_{1,3} > 1$, and $r_{1,4} < 1$. \item[B2)] Convex, $\Delta_{1,2,3} < 0$, $\Delta_{1,3,4} > 0$, $\Delta_{1,3,5} > 0$, $\Delta_{3,4,5} < 0$, $r_{1,3} < 1$, and $r_{1,4} < 1$. \item[B3)] Convex, $\Delta_{1,2,3} > 0$, $\Delta_{1,3,4} < 0$, $\Delta_{1,3,5} < 0$, $\Delta_{3,4,5} < 0$, $r_{1,3} < 1$, and $r_{1,4} > 1$. \item[B4)] Concave, $\Delta_{1,2,3} > 0$, $\Delta_{1,3,4} \geq 0$, $\Delta_{1,3,5} < 0$, $\Delta_{3,4,5} < 0$, $r_{1,3} < 1$, and $r_{1,4} > 1$. \item[B5)] Concave, $\Delta_{1,2,3} > 0$, $\Delta_{1,3,4} > 0$, $\Delta_{1,3,5} > 0$, $\Delta_{3,4,5} > 0$, $r_{1,3} > 1$, and $r_{1,4} > 1$. \end{enumerate} \begin{figure}[h!b] \includegraphics[width=5in]{BBs2.pdf} \caption{Sign type representatives of B branch configurations} \end{figure} \begin{customthm}{3} The only possible branch B central configuration is the regular pentagon for all $A \ge 2$ (type B2). \end{customthm} \begin{proof} The case B1 has no solutions with positive masses, as both terms in equation $L_{1,3}$ are positive. The case B2 contains the regular pentagon. Using interval arithmetic with the function $F(y_4)$ and its derivative we can verify that there are no other type B2 central configurations for all $A \ge 2$. The case B3 has no solutions with positive masses, as both terms in equation $L_{1,3}$ are positive. The case B4 has no solutions with positive masses, as both terms in equation $L_{1,4}$ are positive. The case B5 has no solutions with positive masses, as both terms in equation $L_{1,3}$ are negative. 
\end{proof} \section{General planar equilateral pentagons} We know that the two diagonals are always greater than the four exterior edges for planar convex four-body central configurations. For the strictly convex planar five-body problem, Chen and Hsiao \cite{chen2018strictly} showed that at least two interior edges are less than the exterior edges if the five bodies form a central configuration. They also presented numerical examples of strictly convex central configurations with five bodies that have either one or two interior edges less than the exterior edges. However, for convex planar equilateral 5-body central configurations, we have the following result: \begin{customlemma}1 For planar convex equilateral 5-body central configurations, all interior edges are greater than the exterior edges. \end{customlemma} \begin{proof} We consider the Laura-Andoyer equations involving only two of the masses (equations (\ref{LA2})). For the convex case, we know that $ \Delta_{1,2,3}, \Delta_{1,2,4}, \Delta_{1,2,5}, \Delta_{1,3,4}, \Delta_{1,3,5}, \Delta_{1,4,5}, \Delta_{2,3,4}, \Delta_{2,3,5}$ and $\Delta_{3,4,5}$ are all positive. There is at least one interior edge greater than the exterior edges for any planar equilateral 5-body configuration. Without loss of generality, let $r_{1,4} > r_{1,2}$, and then $R_{1,4} < R_{1,2}$ and $R_{1,4} - R_{1,2} < 0$. From the first equation of (\ref{LA2}) we must have $R_{1,2} - R_{3,5} > 0$. So $R_{1,2} > R_{3,5}$ and $r_{3,5} > r_{1,2}$. Similarly, from the third equation, we have $r_{2,4} > r_{1,2}$; from the second equation above, we get $r_{2,5} > r_{1,2}$; and from the fifth equation, we obtain $r_{1,3} > r_{1,2}$. Thus, we have $$r_{1,3}, r_{1,4}, r_{2,4}, r_{2,5}, r_{3,5} > r_{1,2}.$$ \end{proof} The five simpler Laura-Andoyer equations (\ref{LA2}) can be used to further restrict the possible shapes of planar equilateral 5-body central configurations. The allowed regions fall into three classes: \begin{enumerate} \item[Region I:] defined by $r_{1,2} < r_{i,j}$ and containing the regular pentagon (in 12345 order) \item[Region II:] defined by $r_{1,2} > r_{i,j}$ and containing the regular pentagon (in 13524 order) \item[Region III:] five disjoint regions, which are equivalent under permutations. These are concave configurations. The region with point 5 in the interior is defined by $\theta_{1,2,3} + \theta_{2,3,4} \le 3 \pi$, $\theta_{1,2,3} \le 5 \pi/3$, $\theta_{2,3,4} \le 5 \pi/3$, $\Delta_{1,3,5} \ge 0$, and $\Delta_{2,4,5} \ge 0$. \end{enumerate} These regions are shown in Figure \ref{Regions}. \begin{center} \begin{figure} \includegraphics[width=4in]{equipent_hiresC.png} \caption{Regions of allowable angles $\theta_{1,2}$ and $\theta_{2,3}$ for positive mass equilateral central configurations (white). The more heavily shaded central oval consists of angles that are not geometrically realizable. The regular pentagon configurations are indicated as icons in green and red.} \label{Regions} \end{figure} \end{center} \begin{conj} The only positive mass planar equilateral 5-body central configurations in the Newtonian case are the regular pentagon and star with equal masses, and the symmetric configurations discussed above. \end{conj} \section{Acknowledgements} The authors would like to thank Manuele Santoprete for the suggestion to study this class of configuration. \section{Competing Interests} The authors have no competing interests to declare.
https://arxiv.org/abs/2012.08807
Mean-field and graph limits for collective dynamics models with time-varying weights
In this paper, we study a model for opinion dynamics where the influence weights of agents evolve in time via an equation which is coupled with the opinions' evolution. We explore the natural question of the large population limit with two approaches: the now classical mean-field limit and the more recent graph limit. After establishing the existence and uniqueness of solutions to the models that we will consider, we provide a rigorous mathematical justification for taking the graph limit in a general context. Then, establishing the key notion of indistinguishability, which is a necessary framework to consider the mean-field limit, we prove the subordination of the mean-field limit to the graph one in that context. This actually provides an alternative (but weaker) proof for the mean-field limit. We conclude by showing some numerical simulations to illustrate our results.
\section{Introduction} Over the past few years, there has been a soaring interest for the study of multi-agent collective behavior models. Indeed, they can be applied in many different areas: biology with the study of collective flocks and swarms \cite{MR2310046,MR2205679,MR2064375,1582249}, aviation \cite{MR1617559}, and social dynamics \cite{HK} (among many others). In this article, we will take a particular interest in models for opinion dynamics. The first ones were introduced in the 50's \cite{social,social2} and already stressed the highly non-linear nature of social interactions. Some early models for consensus formation have been introduced by DeGroot \cite{DG} and more recently by Hegselmann and Krause \cite{HF,HK,MR1792007}. Since then, a wealth of models have been developed in order to study one of the key features of collective dynamics: the intriguing emergence of global patterns from local interaction rules. This phenomenon is referred to as \emph{self-organization} \cite{ACMPPRT17,MR3714980,MR2343706,JabinMotsch14,MR2064375,1582249}. Most social dynamics models have a common structure: the dynamics of the agents' opinions are described by a system of ODEs in which the agents interact pairwise, i.e. \begin{equation}\label{eq:HKintro} \displaystyle \frac{d}{dt} x_i(t) = \frac{1}{N} \sum_{j=1}^N a_{ij}\,(x_j(t)-x_i(t)), \end{equation} where $x_i$ represents the time-evolving opinion of agent $i$. Depending on the nature of the interaction coefficients $a_{ij}$, these models can be roughly classified in two categories. In the first one, interactions are pre-determined by a given interaction network, which represents the inherent structure of the population's interactions \cite{olfati2007,WS98}. Then each pairwise interaction coefficient $a_{ij}$ is non-zero if and only if the edge $(i,j)$ is part of the underlying graph of interactions. The second approach only considers the state space of opinions and defines the interaction coefficients as a function of the pairwise distances: $a_{ij}:=a(\|x_i-x_j\|)$. In this case, every agent can potentially interact with any other if the distance separating them belongs to the support of the interaction function $a$: there is no underlying network (see for instance \cite{MT14}).\\ Recently, a variant of this model has introduced the concept of \emph{weights of influence} \cite{McQuadePiccoliPouradierDuteil19,MR3965293}. In this augmented model, each agent is not only defined by its opinion $x_i$, but also by its weight $m_i$. Then, the influence of an agent $j$ on an agent $i$'s opinion is proportional to its weight $m_j$. These weights are assumed to evolve in time via an equation which is coupled with the opinions' evolution. In other words, the evolution of each agent's opinion does not only depend on its proximity with another agent, but also on the charisma or popularity of the latter - and this charisma also evolves in time. This can be formulated as a system of $2N$ ODEs, as follows: \begin{equation}\label{eq:syst-gen-intro} \displaystyle \frac{d}{dt} x_i(t) = \frac{1}{N} \sum_{j=1}^N m_j(t)\,a(\|x_i-x_j\|)(x_j(t)-x_i(t)), \qquad \frac{d}{dt} m_i(t) = \psi_i(x(t),m(t)). \end{equation} Interestingly, although the interaction coefficients are given by a function of the pairwise distances between opinions, in this approach we can again view the opinions as nodes of an underlying network. 
The corresponding weighted graph is \emph{non-symmetric} (the edge between $i$ and $j$ being weighted by $m_i$ in one direction and by $m_j$ in the other), and \emph{time-evolving} (the weights' dynamics being coupled with the dynamics of the nodes). \\ As in all models of collective dynamics, several natural questions arise. A first one concerns the \emph{large time limit}, that is the asymptotic behavior of the system. Many works in the literature have delved into the question of self-organization, i.e. the spontaneous emergence of well-organized group patterns such as consensus, alignment, clustering or dancing equilibrium \cite{MR3714980,MR2343706,MR3392625}. This was studied for the augmented model with time-varying weights in \cite{McQuadePiccoliPouradierDuteil19}.\\ In this paper, we will explore another natural question, the \emph{large population limit}. When the number of agents tends to infinity, the previous system of $2N$ equations becomes unmanageable, a problem well-known as the \emph{curse of dimension}. A common answer to this issue consists of studying the \emph{mean-field limit} of the system. First introduced in the context of gas dynamics (see \cite{braun1977} for instance), when describing particles interacting via a force, a mean-field limit is a limit in which the number of particles $N$ is large ($N$ goes to infinity) but is such that the interaction between particles is both weak enough so that the forces applying on one particle remain finite at the limit, and strong enough so that all the particles continue to interact. The mean-field limit process consists of representing the population by its density probability, instead of following each agent's individual trajectory. In the case of the classical opinion dynamics \eqref{eq:HKintro}, the limit measure $\mu(t,x)$ represents the density of agents with opinion $x$ at time $t$. The mean-field limit of this system is now a classical result, and one can show that the limit measure satisfies a non-local transport equation \cite{Dobrushin79}. In the context of the augmented system with time-varying weights \eqref{eq:syst-gen-intro}, the limit measure $\mu(t,x)$ represents the total weight of the agents with opinion $x$ at time $t$. It was shown in \cite{PouradierDuteil21} that it solves a non-local transport equation with non-local source. \\ However, there is a limitation to the mean-field approach. Since it describes the population by its density, it requires all particles to be \emph{indistinguishable}. This not only entails a significant information loss, but also greatly reduces the span of models that can be studied. It is also incompatible with the graph viewpoint. In particular, in the case of the augmented model \eqref{eq:syst-gen-intro} with time-varying weights, it requires strong assumptions on the mass dynamics $\psi_i$. This leads us to the other approach that will be central to this paper: the graph limit method. \\ In 2014, Medvedev used techniques from the recent theory of graph limit \cite{MR2455626,MR2277152,MR2274085,LS,MR3012035} to derive rigorously the continuum limit of dynamical models on deterministic graphs \cite{Medvedev14}. In the present paper, we extend this idea to our collective dynamics model with time-varying weights, adopting the graph point of view described above. The central point of this approach consists of describing the infinite population by two functions $x(s)$ and $m(s)$ over the space of continuous indices $s$. 
The discrete system of ODEs is then shown to converge as $N$ goes to infinity to a system of two non-local diffusive equations in the space of continuous indices. We show that this approach is more general than the mean-field one, and the Graph Limit can be derived for a much greater variety of models. \\ In the case of dynamics preserving the indistinguishability of particles, we show that both the graph limit and the mean-field limit can be derived. In particular, we show that there is a hierarchy between the two limit equations: the mean-field limit equation can be derived from the graph limit one. This subordination of the mean-field limit equation to the graph limit equation, pointed out in \cite{BiccariKoZuazua19}, is natural: indeed, the mean-field limit process eliminates all individuality from the particles by considering only the population density. Thus, there would be no hope of recovering the graph limit equation from the mean-field one. \\ The paper is organized as follows: in Section \ref{sec:presentation}, we present the model and state the main results. In Section \ref{sec:graphlim}, we focus on the graph limit. We start by establishing the existence and uniqueness of a solution to the graph limit equation, and then prove the convergence of the discrete system to the graph limit equation. In Section \ref{sec:mfl}, we study the mean-field limit. We first define the key notion of indistinguishability for a particle system. We then prove the subordination of the mean-field limit to the graph one and finish with an alternative (but weaker) proof of the mean-field limit based on that subordination. Lastly, we present some numerical simulations in Section \ref{sec:numeric} with concrete models to illustrate our results. \section{Presentation of the model and main results} \label{sec:presentation} We study a social dynamics model with time-varying weights introduced in \cite{McQuadePiccoliPouradierDuteil19}. From here onward, $d\in\mathbb{N}$ will represent the dimension of the space of opinions, and $N\in\mathbb{N}$ will represent the number of agents whose opinions evolve in $\mathbb{R}^d$. More specifically, let $x^N=(x_i^{N})_{i\in\{1,\cdots,N\}}:[0,T]\rightarrow(\mathbb{R}^d)^N$ represent the \emph{opinions} (or positions) of $N$ agents, and let $m^N=(m_i^{N})_{i\in\{1,\cdots,N\}}:[0,T]\rightarrow \mathbb{R}^N$ represent their individual \emph{weights of influence}. Each opinion's time-evolution is affected by the opinion of each neighboring agent via the interaction function $\phi\in\mathrm{Lip}(\mathbb{R}^d; \mathbb{R}^d)$, proportionally to the neighboring agent's weight of influence. In turn, the agents' weights are assumed to evolve in time and their dynamics may depend on the opinions and weights of all the other agents, via functions $\psi_i^{(N)}:(\mathbb{R}^d)^N\times\mathbb{R}^N\rightarrow\mathbb{R}$.
Given a set $(x_i^{0,N})_{i\in\{1,\cdots,N\}}$ of initial opinions and $(m_i^{0,N})_{i\in\{1,\cdots,N\}}$ of initial weights, the evolution of the opinions and weights is given by the following system: \begin{equation}\label{eq:syst-gen} \begin{cases} \displaystyle \frac{d}{dt} x_i^{N}(t) = \frac{1}{N} \sum_{j=1}^N m_j^N(t)\, \phi(x_j^{N}(t)-x_i^{N}(t))\\ \displaystyle \frac{d}{dt} m_i^{N}(t) = \psi_i^{(N)}(x^{N}(t),m^{N}(t)), \qquad i\in\{1,\cdots,N\}, \end{cases} \end{equation} supplemented by the initial conditions \begin{equation*} \forall i\in\{1,\cdots,N\}, \quad x_i^{N}(0) = x_i^{0,N} \quad \text{ and } \quad m_i^{N}(0) = m_i^{0,N} \end{equation*} such that \begin{equation} \label{eq:sum_M} \sum_{i=1}^N m_i^{0,N}=N. \end{equation} We point out that the choice $m_i^{0,N}=1$ and $\psi_i^{(N)}\equiv 0$ for all $i\in\{1,\cdots,N\}$ brings us back to the classical Hegselmann-Krause model for opinion dynamics \cite{HK}: \begin{equation}\label{eq:HK} \displaystyle \frac{d}{dt} x_i^{N}(t) = \frac{1}{N} \sum_{j=1}^N \phi(x_j^{N}(t)-x_i^{N}(t)), \qquad i\in\{1,\cdots,N\}. \end{equation} This model has been thoroughly studied in the literature (see \cite{ACMPPRT17} for a (non-exhaustive) review) and provides a well-known example of emergence of global patterns, such as convergence to consensus or clustering, from local interaction rules. The augmented model with time-varying weights \eqref{eq:syst-gen} was also shown to exhibit richer types of long-term behavior, such as the emergence of a single (or several) leader(s) \cite{McQuadePiccoliPouradierDuteil19}. \begin{rem} In the literature, the interaction function can be found of the form $\phi(x):=a(\|x\|)x$ for some continuous function $a\in C(\mathbb{R}^+;\mathbb{R})$ (see \cite{BiccariKoZuazua19,HK}). In other works, the interaction between agents takes the form of the gradient of an interaction potential $W:\mathbb{R}\rightarrow\mathbb{R}$, i.e. $\phi(x):=\nabla W(\|x\|)$ (see for instance \cite{CarrilloChoiHauray14}). Here, we will keep the general notation $\phi$, also used in \cite{JabinMotsch14}, which can cover these various cases. \end{rem} From here onward, we will make the following assumptions on the interaction function: \begin{hyp}\label{hyp:phi} The interaction function $\phi$ satisfies $\phi(0)=0$ and $\phi\in\mathrm{Lip}(\mathbb{R}^d; \mathbb{R}^d)$, with $\|\phi\|_\mathrm{Lip}=L_\phi$. \end{hyp} Notice that at this stage, we have not made any assumptions on the functions $\psi_i$ driving the weight dynamics. Actually, unlike the position dynamics, the weight dynamics are allowed to differ for each agent $i$. In this paper, the dependence of $\psi_i$ on the opinions $x^N$ and the weights $m^N$ will take two main forms, which will be specified in the subsequent sections. The aim of this work is to derive the continuum limit of these dynamics, that is, their limit when the number of agents goes to infinity. We will show that, using the graph limit method, we obtain the following limit equation (that we will refer to as the graph limit equation): \begin{equation} \left\{\begin{array}{l} \displaystyle \partial_t x(t,s) = \int_I m(t,s_*)\phi(x(t,s_*) - x(t,s)) ds_* \\ \partial_t m(t,s) = \psi(s,x(t,\cdot),m(t,\cdot)), \end{array}\right. \label{eq:GraphLimit-gen} \end{equation} where $x\in C([0,T];L^\infty(I;\mathbb{R}^d))$ and $m\in C([0,T];L^\infty(I;\mathbb{R}))$ are associated with the respective continuum limits of $x^N$ and $m^N$.
Here, $s$ represents the continuous index variable taking values in $I:=[0,1]$, as introduced in \cite{Medvedev14} and \cite{BiccariKoZuazua19}, and $\psi:I\times C([0,T];L^\infty(\mathbb{R}^d))\times C([0,T];L^\infty(\mathbb{R}))\rightarrow\mathbb{R}$ will have to be specified. Notice that the dependence of $\psi_i^{(N)}$ on the index $i$ in the microscopic dynamics \eqref{eq:syst-gen} is translated by the dependence of $\psi$ on the continuous variable $s$ in the limit \eqref{eq:GraphLimit-gen}. Similarly, the dependence of $\psi_i$ on all agents' opinions $x^N(t)$ and weights $m^N(t)$ is encoded by the non-local dependence of $\psi$ on the functions $x(t,\cdot)$ and $m(t,\cdot)$. \begin{example}\label{ex} A simple example of mass dynamics depending non-locally on the opinions and weights can be given by functions of the form: \begin{equation} \label{eq:psi-part} \displaystyle \psi(s,x(t,\cdot),m(t,\cdot))= m(t,s) \int_I m(t,\tilde{s}) S(x(t,s),x(t,\tilde{s})) d\tilde{s} \end{equation} where $S:\mathbb{R}^d\times\mathbb{R}^d\rightarrow\mathbb{R}$. Note that in this example, $\psi$ depends on the continuous index $s$ only through $x$ and $m$. More specific examples of mass dynamics $\psi$ will be presented in Section \ref{sec:numeric}. The choice \eqref{eq:modelsimuGL} of Section \ref{sec:numericindisting} provides another example of mass dynamics depending only on the opinions and weights, and not on the individual indices. The choice \eqref{eq:modelsimuGL2} of Section \ref{sec:numericnotindisting} provides an example of mass dynamics depending explicitly on $s$. \end{example} In order to give a meaning to the limit, we will reformulate the discrete system \eqref{eq:syst-gen} in a continuous way, using two operators $P_\mathrm{c}^N$ and $P_\mathrm{d}^N$ respectively transforming vectors into piecewise-constant functions and $L^\infty$ functions into $N$-dimensional vectors. From here onward, subscripts (as in $x_N$, $m_N$) will indicate functions over the continuous space $I$, while superscripts (as in $x^N$, $m^N$) will indicate vectors of $(\mathbb{R}^d)^N$ or $\mathbb{R}^N$. Given initial conditions $x_0\in L^\infty(I;\mathbb{R}^d)$ and $m_0\in L^\infty(I;\mathbb{R})$ for the continuous dynamics \eqref{eq:GraphLimit-gen}, satisfying \begin{equation} \label{eq:integral_egal_a_1} \int_I m_0(s) ds =1, \end{equation} we can define initial conditions for the microscopic dynamics \eqref{eq:syst-gen}. For each $N\in\mathbb{N}$, we define \begin{equation} \label{eq:ICx} \begin{cases} \displaystyle x^{0,N} = P_\mathrm{d}^N(x_0) := \left( N \int_{\frac{i-1}{N}}^{\frac{i}{N}} x_0(s) ds \right)_{i \in \{1, \dots, N \}} \in (\mathbb{R}^d)^N \\ \displaystyle m^{0,N} = P_\mathrm{d}^N(m_0) := \left( N \int_{\frac{i-1}{N}}^{\frac{i}{N}} m_0(s) ds \right)_{i \in \{1, \dots, N \}} \in \mathbb{R}^N. \end{cases} \end{equation} A schematic illustration of the transformation $P_\mathrm{d}^N$ is provided in Figure \ref{fig:Transfo}.
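For readers who wish to experiment numerically, a minimal sketch of the discretization operator $P_\mathrm{d}^N$ could read as follows (this snippet is our illustration only: the function name and the midpoint quadrature used to approximate the cell averages are choices made here, not part of the model):

\begin{verbatim}
import numpy as np

# Minimal sketch of the discretization operator P_d^N from (eq:ICx): the
# cell averages N * int_{(i-1)/N}^{i/N} x_0(s) ds, approximated here by a
# midpoint rule (the quadrature is an illustrative choice, not part of the model).
def P_d(x0, N, n_quad=100):
    out = []
    for i in range(N):                                       # i = 0,...,N-1 labels agent i+1
        s = (i + (np.arange(n_quad) + 0.5) / n_quad) / N     # midpoints of the i-th cell
        out.append(np.mean(x0(s), axis=0))
    return np.array(out)

# example: x_0(s) = cos(2*pi*s) discretized with N = 10 agents
x0 = lambda s: np.cos(2 * np.pi * s)
print(P_d(x0, N=10))
\end{verbatim}

The companion operator $P_\mathrm{c}^N$, defined just below, goes in the other direction, mapping an $N$-vector to the corresponding piecewise-constant function on $I$.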
Notice that condition \eqref{eq:integral_egal_a_1} implies that \eqref{eq:sum_M} is fulfilled.\\ Now, for all $t\in [0,T]$, the solution $(x^N(t),m^N(t))$ to the microscopic system \eqref{eq:syst-gen} at time $t$ can be transformed into a pair of piecewise-constant functions $s\mapsto(x_N(t,s),m_N(t,s))$ via the following operation: for all $s\in I$, \begin{equation} \label{eq:xN} \begin{cases} \displaystyle x_N(t,s) = P_\mathrm{c}^N(x^{N}(t)):= \sum_{i=1}^N x_i^{N}(t) \mathbf{1}_{[\frac{i-1}{N}, \frac{i}{N})}(s)\\ \displaystyle m_N(t,s) = P_\mathrm{c}^N(m^{N}(t)) := \sum_{i=1}^N m_i^{N}(t) \mathbf{1}_{[\frac{i-1}{N}, \frac{i}{N})}(s). \end{cases} \end{equation} The transformation $P_\mathrm{c}^N$ is also illustrated in Figure \ref{fig:Transfo}. In turn, this transformation will allow us to define the discrete weight dynamics $\psi_i^{(N)}$ from the continuous ones. More specifically, given a functional $\psi: I \times L^2(I;\mathbb{R}^d)\times L^2(I;\mathbb{R}) \rightarrow \mathbb{R}$, we define $\psi_i^{(N)}$ in the following way: \begin{equation}\label{eq:psi} \forall i\in\{1,\cdots,N\},\qquad \psi_i^{(N)}(x^{N}(t),m^{N}(t)) = N \int_{\frac{i-1}{N}}^{\frac{i}{N}} \psi(s,x_N(t,s),m_N(t,s)) ds, \end{equation} where $x_N=P_\mathrm{c}^N(x^N)$ and $m_N=P_\mathrm{c}^N(m^N)$ as defined in \eqref{eq:xN}. \begin{example} With this definition, for $\psi$ taken as \eqref{eq:psi-part} in Example \ref{ex}, the system \eqref{eq:syst-gen} becomes \begin{equation*} \begin{cases} \displaystyle \frac{d}{dt} x_i^{N}(t) = \frac{1}{N} \sum_{j=1}^N m_j^N(t) \phi(x_j^{N}(t)-x_i^{N}(t))\\ \displaystyle \frac{d}{dt} m_i^{N}(t) = \frac{1}{N} m_i^{N}(t) \sum_{j=1}^N m_j^{N}(t) S(x_i^{N}(t),x_j^{N}(t)), \qquad i\in\{1,\cdots,N\}. \end{cases} \end{equation*} \end{example} \begin{figure} \includegraphics[trim= 2cm 6cm 2cm 6cm, clip=true, width=\textwidth]{ContinuousToDiscrete.pdf} \caption{Illustration of the two transformations $P_\mathrm{d}^N$ and $P_\mathrm{c}^N$ for $N=10$ and $x_0\in L^\infty(I;\mathbb{R})$. Notice that $\lim_{N\rightarrow\infty} P_\mathrm{c}^N(P_\mathrm{d}^N(x_0))= x_0$.}\label{fig:Transfo} \end{figure} These transformations allow us to reveal an equivalence between the microscopic system \eqref{eq:syst-gen} and the continuous one \eqref{eq:GraphLimit-gen} in the case of piecewise-constant functions. More precisely, we have the following \begin{prop}\label{prop:equiv} The vectors $(x^{N},m^{N})\in \mathcal{C}([0,T]; \mathbb{R}^d)^{N}\times \mathcal{C}([0,T];\mathbb{R})^{N}$ satisfy the differential system~\eqref{eq:syst-gen} with initial conditions $x^{0,N}\in(\mathbb{R}^d)^N$, $m^{0,N}\in\mathbb{R}^N$, and mass dynamics given by \eqref{eq:psi}, if and only if the piecewise constant functions $x_N\in C([0,T];L^2(I;\mathbb{R}^d))$ and $m_N\in \mathcal{C}([0,T];L^2(I;\mathbb{R}))$ defined by \eqref{eq:xN} satisfy the following system of integro-differential equations: \begin{equation}\label{eq:syst-contN} \begin{cases} \displaystyle \partial_t x_N(t,s) = \int_I m_N(t,s_*)\, \phi(x_N(t,s_*)-x_N(t,s)) \,ds_*\\ \displaystyle \partial_t m_N(t,s) = N \int_{\frac{1}{N}\floor{sN}}^{\frac{1}{N}(\floor{sN}+1)} \psi(s_*, x_N(t,\cdot),m_N(t,\cdot)) \,ds_* \end{cases} \end{equation} with initial conditions $\displaystyle x_N(0,s) = \sum_{i=1}^N x_i^{0,N} \mathbf{1}_{[\frac{i-1}{N}, \frac{i}{N})}(s)$ and $\displaystyle m_N(0,s) = \sum_{i=1}^N m_i^{0,N} \mathbf{1}_{[\frac{i-1}{N}, \frac{i}{N})}(s)$.
\end{prop} \begin{rem}\label{Rem:x0m0} Notice that according to the Lebesgue differentiation theorem, under the assumptions that $x_0\in L^1(I)$ and $m^0\in L^1(I)$, it holds $$\lim_{N\rightarrow\infty} x_N(0,s) = x_0(s) \quad \text{ and } \quad \lim_{N\rightarrow\infty}m_N(0,s) = m_0(s) $$ for almost every $s\in I$. Figure \ref{fig:Transfo} illustrates the relationship between $x_0$ and $x_N(0,\cdot)=P_\mathrm{c}^N(P_\mathrm{d}^N(x_0))$ for a finite $N$. \end{rem} The main interest of this proposition is to have recast the discrete system \eqref{eq:syst-gen} into the framework of $L^\infty(I)$ functions. This will allow us to stay in this framework to state the convergence to a limit function, also belonging to $L^\infty(I)$. Using this trick, we can now state one of the main results of this paper, namely the convergence of the solution to system \eqref{eq:syst-contN} to the solution to system \eqref{eq:GraphLimit-gen}. \begin{theorem}\label{Th:GraphLimit} Let $x_0\in L^\infty(I;\mathbb{R}^d)$ and $m_0\in L^\infty(I;\mathbb{R})$ satisfying \eqref{eq:integral_egal_a_1}, and consider a functional $\psi:I \times L^2(I;\mathbb{R}^d)\times L^2(I;\mathbb{R}) \rightarrow \mathbb{R}$. Suppose that the function $\phi$ satisfies Hyp. \ref{hyp:phi}, that $x\mapsto\psi(\cdot,x,m)$ and $m\mapsto\psi(\cdot,x,m)$ are uniformly Lipschitz functions in the $L^2$ norm, and that $m\mapsto\psi(\cdot,x,m)$ is sublinear in the $L^\infty$ norm. Let $x^{0,N}=P_\mathrm{d}^N(x_0)$ and $m^{0,N}=P_\mathrm{d}^N(m_0)$ given by \eqref{eq:ICx}. Then the solution $(x_N,m_N)$ to \eqref{eq:syst-contN} with initial conditions \[\displaystyle x_N(0,s) = \sum_{i=1}^N x_i^{0,N} \mathbf{1}_{[\frac{i-1}{N}, \frac{i}{N})}(s) \quad \text{ and } \quad \displaystyle m_N(0,s) = \sum_{i=1}^N m_i^{0,N} \mathbf{1}_{[\frac{i-1}{N}, \frac{i}{N})}(s) \] converges when $N$ tends to infinity in the $\mathcal{C}([0,T];L^2(I))$ topology. More specifically, there exists $(x,m)\in \mathcal{C}([0,T];L^2(I,\mathbb{R}^d))\times \mathcal{C}([0,T];L^2(I,\mathbb{R}))$ such that \begin{equation}\label{eq:convxm} \displaystyle \|x-x_N\|_{\mathcal{C}([0,T];L^2(I,\mathbb{R}^d))} \xrightarrow[N\rightarrow+\infty]{} 0 \quad \text{ and } \quad \|m-m_N\|_{\mathcal{C}([0,T];L^2(I,\mathbb{R}))} \xrightarrow[N\rightarrow+\infty]{} 0. \end{equation} Furthermore, the limit functions $x$ and $m$ are solutions to the integro-differential system \eqref{eq:GraphLimit-gen} supplemented by the initial conditions $x(0,\cdot) = x_0$ and $m(0,\cdot) = m_0$. \end{theorem} Secondly, we will show that if the mass dynamics satisfy an \emph{indistinguishability} property (that we will define in Section \ref{sec:indisting}), we can take this limit process further and derive the mean-field limit of the system from the continuum graph limit. The mean-field limit will be shown to be a solution to the following transport equation with source (see \cite{PiccoliRossi14, PouradierDuteil21}): \begin{equation*} \partial_t \mu_t(x) + \nabla\cdot ( V[\mu_t](x) \mu_t(x)) = h[\mu_t](x) \end{equation*} where the non-local transport vector field $V$ and source term $h$ will be respectively defined from the interaction function $\phi$ and the mass dynamics $\psi$ (see Theorem \ref{Th:mfl}). \section{The Graph Limit} \label{sec:graphlim} From here onward, in all of Section \ref{sec:graphlim}, we will assume the following properties for the mass dynamics $\psi$. 
\begin{hyp}\label{hyp:psi}
The function $\psi:I\times L^\infty(I;\mathbb{R}^d)\times L^\infty(I;\mathbb{R})\rightarrow\mathbb{R}$ is assumed to satisfy the following Lipschitz properties: there exists $L_\psi>0$ such that for all $(x_1,x_2,m_1,m_2)\in L^2(I)^4$,
\begin{equation}\label{eq:psilip2}
\begin{cases}
\|\psi(\cdot,x_1,m_1)-\psi(\cdot,x_2,m_1)\|_{L^2(I)} \, \leq \,L_\psi \|x_1-x_2\|_{L^2(I)}\\
\|\psi(\cdot,x_1,m_1)-\psi(\cdot,x_1,m_2)\|_{L^2(I)} \, \leq \, L_\psi \|m_1-m_2\|_{L^2(I)}.
\end{cases}
\end{equation}
Assume also that there exists $C_\psi>0$ such that for all $(x,m)\in L^\infty(I,\mathbb{R}^d\times\mathbb{R})$, for all $s\in I$,
\begin{equation}\label{eq:psisublin}
|\psi(s, x, m)| \leq C_\psi( 1+\|m\|_{L^\infty(I)}).
\end{equation}
\end{hyp}
Although the assumption of sublinear growth \eqref{eq:psisublin} may seem restrictive, it is necessary in order to prevent finite-time blow-up of the weight function $m$. It is also consistent with the framework of the Graph Limit developed in~\cite{Medvedev14} for graphs with $L^\infty$ weights. Indeed, we can view our system \eqref{eq:GraphLimit-gen} as the evolution of the opinions $x$ on a weighted non-symmetric graph with (time-dependent) weights $W(s,s_*)=m(t,s_*)$.
\subsection{Well posedness of the graph limit model}
This paragraph is devoted to proving the existence and uniqueness of a solution to the graph limit equation \eqref{eq:GraphLimit-gen} in the general case, where the mass dynamics are allowed to depend on the continuous index $s$. As a first step, we study a decoupled system in which the dynamics of the opinions and of the weights are independent.
\begin{lemma}\label{Lemma:WellPos-decoupled}
Let ${\tilde{x} \in \mathcal{C}([0,T];L^\infty(I;\mathbb{R}^d))}$ and ${\tilde{m} \in \mathcal{C}([0,T];L^\infty(I;\mathbb{R}))}$. Let $x_0\in L^\infty(I;\mathbb{R}^d)$ and $m_0\in L^\infty(I;\mathbb{R})$. Let $\phi$ satisfy Hyp. \ref{hyp:phi} and $\psi$ satisfy Hyp. \ref{hyp:psi}. Then for any $T>0$, there exists a unique solution $(x,m)\in\mathcal{C}^1([0,T];L^\infty(I;\mathbb{R}^d \times \mathbb{R}))$ to the decoupled integro-differential system
\begin{equation*}
\left\{\begin{array}{l}
\displaystyle \partial_t x(t,s) = \int_I \tilde{m}(t,s_*)\phi(x(t,s_*) - x(t,s)) ds_*; \qquad x(0,\cdot)=x_0 \\
\partial_t m(t,s) = \psi(s,\tilde{x}(t,\cdot),m(t,\cdot)); \qquad m(0,\cdot)=m_0.
\end{array}\right.
\end{equation*}
\end{lemma}
\begin{proof}
Since the two equations are decoupled, we will treat them independently. Let $\tilde{T}>0$ (to be specified later such that $\tilde{T} \leq T$) and let $M_{x_0}$ be the metric subspace of $\mathcal{C}([0,\tilde{T}];L^\infty(I;\mathbb{R}^d))$ consisting of functions $x$ satisfying $x(0,\cdot) = x_0$. Let $K_{x_0}$ be the operator defined by:
$$
\begin{array}{l l l }
K_{x_0}: & M_{x_0} & \rightarrow M_{x_0} \\
 & x & \displaystyle \mapsto (K_{x_0} x):(t,s)\mapsto x_0(s) + \int_0^t \int_I \tilde{m}(\tau,s_*)\phi(x(\tau,s_*) - x(\tau,s)) ds_* d\tau.
\end{array}
$$
We will show that $K_{x_0}$ is contracting for the norm $\|\cdot\|_{M_{x_0}} := \sup_{[0,\tilde{T}]} \esssup_{I} \|\cdot\|$. Let $(x_1,x_2)\in \mathcal{C}([0,\tilde{T}];L^\infty(I;\mathbb{R}^d))^2$.
Then for all $s\in I$, for all $t\leq \tilde{T}$,
\begin{equation*}
\begin{split}
\| K_{x_0} x_1 - K_{x_0} x_2 \| (t,s) & = \left \| \int_0^t \int_I \tilde{m}(\tau,s_*)\left[ \phi(x_1(\tau,s_*) - x_1(\tau,s))-\phi(x_2(\tau,s_*) - x_2(\tau,s))\right] ds_* d\tau \right \| \\
& \leq \int_0^t \int_I |\tilde{m}(\tau,s_*)| L_\phi \left\| (x_1(\tau,s_*) - x_1(\tau,s))- (x_2(\tau,s_*) - x_2(\tau,s))\right \| ds_* d\tau \\
& \leq \int_0^t \int_I |\tilde{m}(\tau,s_*)| L_\phi \left( \| x_1(\tau,s)- x_2(\tau,s)\| + \| x_1(\tau,s_*) - x_2(\tau,s_*)\| \right) ds_* d\tau \\
\end{split}
\end{equation*}
from which we get:
$$
\| K_{x_0} x_1 - K_{x_0} x_2 \|_{M_{x_0}} \leq 2 L_\phi \tilde{T} \sup_{t\in [0,T]} \|\tilde{m}(t,\cdot)\|_{L^1(I;\mathbb{R})} \, \|x_1-x_2\|_{M_{x_0}}.
$$
We remark that $\sup_{t\in [0,T]} \|\tilde{m}(t,\cdot)\|_{L^1(I;\mathbb{R})}$ is well defined since $\tilde{m}\in \mathcal{C}([0,T];L^\infty(I;\mathbb{R}))$ and $I=[0,1]$. Since $\tilde{m}$ is given, choosing $\tilde{T}\leq (4 L_\phi \sup_{t\in [0,T]} \|\tilde{m}(t,\cdot)\|_{L^1(I;\mathbb{R})})^{-1}$ ensures that $K_{x_0}$ is contracting on $[0,\tilde{T}]$. By the Banach contraction mapping principle, there exists a unique solution $x\in \mathcal{C}([0,\tilde{T}];L^\infty(I;\mathbb{R}^d))$. We then take $x(\tilde{T},\cdot)$ as the initial condition, and the local solution can be extended to $[0, 2\tilde{T}]$, and by repeating the same argument, to $[0,T]$. Moreover, since the integrand in $K_{x_0}$ is continuous as a map $L^\infty(I) \to L^\infty(I) $, $x$ is continuously differentiable and $x$ belongs to $\mathcal{C}^1([0,T];L^\infty(I;\mathbb{R}^d))$.\\
We now show existence and uniqueness of $m\in \mathcal{C}^1([0,T];L^2(I;\mathbb{R}))$, solution to the second decoupled equation. Let $M_{m_0}$ be the metric subspace of $\mathcal{C}([0,\tilde{T}];L^2(I;\mathbb{R}))$ consisting of functions $m$ satisfying $m(0,\cdot) = m_0$, with again $\tilde{T}>0$ to be specified later such that $\tilde{T} \leq T$. Let $K_{m_0}$ be the operator defined by:
$$
\begin{array}{l l l }
K_{m_0}: & M_{m_0} & \rightarrow M_{m_0} \\
 & m &\displaystyle \mapsto (K_{m_0} m):(t,s)\mapsto m_0(s) + \int_0^t \psi(s,\tilde{x}(\tau,\cdot),m(\tau,\cdot)) d\tau.
\end{array}
$$
We will show that $K_{m_0}$ is contracting for the norm $\|\cdot\|_{M_{m_0}} := \sup_{[0,\tilde{T}]} \|\cdot\|_{L^2(I)}$. Let $(m_1,m_2)\in \mathcal{C}([0,\tilde{T}];L^2(I;\mathbb{R}))^2$. Then for all $t\leq \tilde{T}$,
\begin{equation*}
\begin{split}
\int_I | K_{m_0} m_1 - K_{m_0} m_2 |^2 (t,s) ds & = \int_I \left | \int_0^t \psi(s,\tilde{x}(\tau,\cdot),m_1(\tau,\cdot))- \psi(s,\tilde{x}(\tau,\cdot),m_2(\tau,\cdot)) d\tau \right | ^2 ds \\
& \leq t \int_0^t \int_I \left | \psi(s,\tilde{x}(\tau,\cdot),m_1(\tau,\cdot))- \psi(s,\tilde{x}(\tau,\cdot),m_2(\tau,\cdot)) \right | ^2 ds \, d\tau \\
& \leq t \int_0^t L_\psi^2 \|m_1(\tau,\cdot)-m_2(\tau,\cdot)\|_{L^2(I)}^2 \, d\tau,
\end{split}
\end{equation*}
where the first inequality is a consequence of Cauchy-Schwarz and Jensen's inequalities and Fubini's theorem, and the second inequality comes from \eqref{eq:psilip2}. We obtain:
\begin{equation*}
\sup_{[0,\tilde{T}]} \| K_{m_0} m_1 - K_{m_0} m_2 \|_{L^2(I)}^2 \leq \tilde{T}^2 L_\psi^2 \sup_{[0,\tilde{T}]} \| m_1 - m_2 \|_{L^2(I)}^2
\end{equation*}
which implies: $\| K_{m_0} m_1 - K_{m_0} m_2 \|_{M_{m_0}} \leq \tilde{T} L_\psi \| m_1 - m_2 \|_{M_{m_0}}$.
Thus, if $\tilde{T}\leq \frac{1}{2L_\psi}$, by the Banach contraction mapping principle, there exists a unique solution $m\in \mathcal{C}([0,\tilde{T}];L^2(I;\mathbb{R}))$. We then take $m(\tilde{T},\cdot)$ as the initial condition, and the local solution can be extended to $[0, 2\tilde{T}]$, and by repeating the same argument, to $[0,T]$. We thus showed that there exists a unique solution $m\in \mathcal{C}([0,T];L^2(I;\mathbb{R}))$.
To prove that $m\in \mathcal{C}([0,T];L^\infty(I;\mathbb{R}))$, we will use the second assumption on $\psi$ given by the sub-linearity~\eqref{eq:psisublin}. For all $(t,s)\in [0,T]\times I$,
$$
|m(t,s)| = \left | m_0(s) + \int_0^t \psi(s,\tilde{x}(\tau,\cdot),m(\tau,\cdot)) d\tau \right| \leq |m_0(s)| + \int_0^t C_\psi (1+ \|m(\tau,\cdot)\|_{L^\infty(I)}) d\tau.
$$
This implies
$$
\| m(t,\cdot)\|_{L^\infty(I)} \leq (\|m_0\|_{L^\infty(I)} + C_\psi t ) + \int_0^t C_\psi \|m(\tau,\cdot)\|_{L^\infty(I)} d\tau
$$
and from Gronwall's lemma,
\begin{equation}\label{eq:mbound}
\| m(t,\cdot)\|_{L^\infty(I)} \leq (\|m_0\|_{L^\infty(I)} + C_\psi t ) e^{C_\psi t}.
\end{equation}
Hence $m\in \mathcal{C}([0,T];L^\infty(I;\mathbb{R}))$. As previously, since the integrand in $K_{m_0}$ is continuous as a map $L^\infty(I) \to L^\infty(I) $, $m$ is continuously differentiable and $m$ belongs to $\mathcal{C}^1([0,T];L^\infty(I;\mathbb{R}))$. This concludes the proof.
\end{proof}
\begin{rem}\label{Rem:Gronwall}
We have used the following result: If $u(t)\leq \alpha(t) + \int_0^t \beta(\tau) u(\tau) d\tau$, where $\beta$ is positive and $\alpha$ is non-decreasing, then $u(t) \leq \alpha(t)\exp(\int_0^t \beta(\tau) d\tau )$.
\end{rem}
By the previous lemma, we have proven that the two decoupled integro-differential equations are well-posed in $\mathcal{C}^1([0,T];L^\infty(I;\mathbb{R}^d \times \mathbb{R}))$. Using this result, we are now ready to demonstrate the well-posedness of the fully coupled system \eqref{eq:GraphLimit-gen}.
\begin{theorem}\label{Th:GL-wellpos}
Let $x_0\in L^\infty(I;\mathbb{R}^d)$ and $m_0\in L^\infty(I;\mathbb{R})$. Let $\phi$ satisfy Hyp. \ref{hyp:phi} and $\psi$ satisfy Hyp. \ref{hyp:psi}. Then for any $T>0$, there exists a unique solution $(x,m)\in {\mathcal{C}^1([0,T];L^\infty(I;\mathbb{R}^d \times \mathbb{R}))}$ to the integro-differential system
\begin{equation} \label{eq:graphlimitcoupled}
\left\{\begin{array}{l}
\displaystyle \partial_t x(t,s) = \int_I m(t,s_*)\phi(x(t,s_*) - x(t,s)) ds_*; \qquad x(0,\cdot)=x_0 \\
\partial_t m(t,s) = \psi(s,x(t,\cdot),m(t,\cdot)); \qquad m(0,\cdot)=m_0.
\end{array}\right.
\end{equation}
\end{theorem}
\begin{proof}
The proof will consist of proving the convergence of a sequence of functions $(x^n,m^n)_{n\in\mathbb{N}}$ of $\mathcal{C}([0,T];L^\infty(I;\mathbb{R}^d \times \mathbb{R}))$ defined as follows:
\begin{itemize}
\item For almost every $s \in I$, for all $t \in [0,T]$, $x^0(t,s) = x_0(s)$ and $m^0(t,s) = m_0(s)$.
\item For all $n\in \mathbb{N}^*$,
\begin{equation*}
\left\{\begin{array}{l}
\displaystyle \partial_t x^n(t,s) = \int_I m^{n-1}(t,s_*)\phi(x^n(t,s_*) - x^n(t,s)) ds_*; \qquad x^n(0,\cdot)=x_0 \\
\partial_t m^n(t,s) = \psi(s,x^{n-1}(t,\cdot),m^n(t,\cdot)); \qquad m^n(0,\cdot)=m_0.
\end{array}\right.
\end{equation*}
\end{itemize}
From Lemma \ref{Lemma:WellPos-decoupled}, the sequence is well defined and each term $(x^n,m^n)$ is indeed in $\mathcal{C}([0,T];L^\infty(I;\mathbb{R}^d \times \mathbb{R}))$.
We begin by highlighting the fact that the terms of the sequence are uniformly bounded in $L^\infty(I;\mathbb{R}^d\times\mathbb{R})$. Indeed, from equation \eqref{eq:mbound}, we know that for every $n\in \mathbb{N}$, for all $t\in [0,T]$, $$ \| m^n(t,\cdot)\|_{L^\infty(I)} \leq M_T := (\|m_0\|_{L^\infty(I)} + C_\psi T ) e^{C_\psi T}.$$ Moreover, we can now use this bound to estimate the growth of $x$, noticing that Hypothesis \ref{hyp:phi} implies that $\|\phi(r)\| \leq L_\phi \|r\|$, from the fact that $\phi(0)=0$. We estimate: $$ \|x^n(t,s)\| \leq \|x^n(0,s)\| + M_T \int_0^t \int_I \| \phi(x^n(\tau,s_*)-x^n(\tau,s) ) \| ds_* d\tau \leq \|x_0(s)\| + 2 M_T L_\phi \int_0^t \|x^n(\tau,\cdot) \|_{L^\infty(I)} d\tau . $$ Then Gronwall's lemma implies that for every $n\in \mathbb{N}$, for all $t\in [0,T]$, \begin{equation*} \|x^n(t,\cdot)\|_{L^\infty(I)} \leq X_T := \|x_0\|_{L^\infty(I)} e^{2 M_T L_\phi T}. \end{equation*} We now prove that the sequence $(x^n,m^n)_{n\in \mathbb{N}}$ is a Cauchy sequence. For every $n\in \mathbb{N}^*$, \begin{equation*} \begin{split} |x^{n+1}-x^n|(t,s) = & \bigg | \int_0^t \int_I m^{n}(\tau,s_*)\phi(x^{n+1}(\tau,s) - x^{n+1}(\tau,s_*)) ds_* d\tau \\ & - \int_0^t \int_I m^{n-1}(\tau,s_*)\phi(x^n(\tau,s) - x^n(\tau,s_*)) ds_* d\tau \bigg | \\ = & \bigg | \int_0^t \int_I (m^{n}(\tau,s_*)-m^{n-1}(\tau,s_*)) \phi(x^{n+1}(\tau,s) - x^{n+1}(\tau,s_*)) ds_* d\tau \\ & + \int_0^t \int_I m^{n-1}(\tau,s_*)\left( \phi(x^{n+1}(\tau,s) - x^{n+1}(\tau,s_*)) - \phi(x^n(\tau,s) - x^n(\tau,s_*)) \right) ds_* d\tau \bigg | \\ \leq & \int_0^t \int_I |m^{n}(\tau,s_*)-m^{n-1}(\tau,s_*)| 2 L_\phi\, X_T \, ds_* d\tau \\ & + \int_0^t \int_I M_T \, L_\phi \|(x^{n+1}(\tau,s) - x^{n+1}(\tau,s_*)) - (x^n(\tau,s) - x^n(\tau,s_*)) \| ds_* d\tau \\ \end{split} \end{equation*} Then for all $t\in [0,T]$, for every $n\in \mathbb{N}^*$, \begin{equation*} \begin{split} |x^{n+1}-x^n|^2(t,s) \leq & 8 t L_\phi^2 \, X_T^2 \int_0^t \int_I |m^{n}(\tau,s_*)-m^{n-1}(\tau,s_*)|^2 \, ds_* d\tau \\ & + 2 L_\phi^2 M_T^2 t \int_0^t \int_I 2(\|(x^{n+1}(\tau,s) - x^n(\tau,s)\|^2 + \| x^{n+1}(\tau,s_*)- x^n(\tau,s_*) \|^2) ds_* d\tau \end{split} \end{equation*} and we get \begin{equation*} \begin{split} \|x^{n+1}-x^n\|_{L^2(I)}^2(t) \leq & 8 t L_\phi^2 \, X_T^2 \int_0^t \|m^{n}-m^{n-1}\|_{L^2(I)}^2 (\tau) d\tau + 8t L_\phi^2 M_T^2 \int_0^t \|x^{n+1} - x^n\|_{L^2(I)}^2 (\tau) d\tau. \end{split} \end{equation*} A similar computation for $m$ gives for every $n\in\mathbb{N}$ \begin{equation*} \begin{split} |m^{n+1}-m^n|(t,s) = & \bigg | \int_0^t (\psi(s,x^n(\tau,\cdot),m^{n+1}(\tau,\cdot)) - \psi(s,x^{n-1}(\tau,\cdot),m^{n}(\tau,\cdot)) d\tau \bigg | \\ \leq & \int_0^t \bigg ( |\psi(s,x^n(\tau,\cdot),m^{n+1}(\tau,\cdot)) - \psi(s,x^n(\tau,\cdot),m^{n}(\tau,\cdot)) | \\ & + \psi(s,x^n(\tau,\cdot),m^{n}(\tau,\cdot)) - \psi(s,x^{n-1}(\tau,\cdot),m^{n}(\tau,\cdot)) | \bigg ) d\tau . \\ \end{split} \end{equation*} Squaring and integrating yields: \begin{equation}\label{eq:ineqmn} \begin{split} \|m^{n+1}-m^n\|_{L^2(I)}^2(t) \leq & 2 t \int_0^t \int_I |\psi(s,x^n(\tau,\cdot),m^{n+1}(\tau,\cdot)) - \psi(s,x^n(\tau,\cdot),m^{n}(\tau,\cdot)) |^2 ds \, d\tau \\ & + 2t \int_0^t \int_I | \psi(s,x^n(\tau,\cdot),m^{n}(\tau,\cdot)) - \psi(s,x^{n-1}(\tau,\cdot),m^{n}(\tau,\cdot)) | ^2 ds \, d\tau \\ \leq & 2 t L_\psi^2 \int_0^t \|m^{n+1}-m^{n}\|_{L^2(I)}^2(\tau) d\tau + 2 t L_\psi^2 \int_0^t \| x^n - x^{n-1}\|_{L^2(I)}^2(\tau) d\tau . 
\end{split} \end{equation} Thus, denoting $A_T = \max(8 T L_\phi^2 \, X_T^2,\, 8 T L_\phi^2 \, M_T^2,\, 2 T L_\psi^2)$, and denoting by $(u_n)_{n\in\mathbb{N}}$ the sequence defined by $u_n(t) := \|x^{n+1}-x^n\|_{L^2(I)}^2(t) + \|m^{n+1}-m^n\|_{L^2(I)}^2(t)$, we obtain for every $n\in \mathbb{N}^*$ and all $t\in [0,T]$: \begin{equation*} \begin{split} u_{n}(t) \leq A_T \int_0^t u_n(\tau) d\tau + A_T \int_0^t u_{n-1}(\tau) d\tau . \end{split} \end{equation*} Since $t\mapsto A_T \int_0^t u_{n-1}(\tau) d\tau$ is non-decreasing, Gronwall's lemma implies (see Remark \ref{Rem:Gronwall}): $$ u_{n}(t) \leq A_T \; e^{A_T T} \int_0^t u_{n-1}(\tau) d\tau . $$ Denoting $U_0 := \sup_{[0,T]} u_0(t)$, one can easily show by induction that for all $t\in [0,T]$, for all $n\in \mathbb{N}$, $$ u_n(t) \leq \frac{(A_T e^{A_T T}t)^n}{n!} U_0. $$ This is the general term of a convergent series, hence for all $t\in [0,T]$, $\lim_{n\rightarrow \infty } u_n(t) = 0$, which implies that for all $t\in [0,T]$, $\|x^{n+1}-x^n\|_{L^2(I)}(t)$ and $\|m^{n+1}-m^n\|_{L^2(I)}(t)$ also converge to $0$ as $n$ tends to infinity. Thus, $(x^n)_{n\in\mathbb{N}}$ and $(m^n)_{n\in\mathbb{N}}$ are Cauchy sequences in the Banach spaces $\mathcal{C}([0,T];L^2(I;\mathbb{R}^d))$ and $\mathcal{C}([0,T];L^2(I;\mathbb{R}))$. One can easily show that their limits $(x,m)$ satisfy the integro-differential system \eqref{eq:GraphLimit-gen}. Furthermore, from the uniform bounds on $\|x^n(t,\cdot)\|_{L^\infty(I)}$ and $\|m^n(t,\cdot)\|_{L^\infty(I)}$, we deduce that $(x,m)\in \mathcal{C}([0,T];L^\infty(I;\mathbb{R}^d \times \mathbb{R}))$, with $\|x(t,\cdot)\|_{L^\infty(I)} \leq X_T$ and $\|m(t,\cdot)\|_{L^\infty(I)} \leq M_T$ for all $t\in [0,T]$. Finally, as previously, looking at the integrand in the integral formulation of \eqref{eq:graphlimitcoupled}, we deduce that $(x,m)\in \mathcal{C}^1([0,T];L^\infty(I;\mathbb{R}^d \times \mathbb{R}))$ which concludes the proof of existence. \\ Let us now deal with the uniqueness. Let us assume that there exist two solutions to the equation~\eqref{eq:graphlimitcoupled} denoted $(x_1,m_1)$ and $(x_2,m_2)$ with the same initial condition. Then, we have \begin{equation*} \left\{\begin{array}{l} \displaystyle (x_1-x_2)(t,s) = \int_0^t \int_I m_1(\tau,s_*) \phi(x_1(\tau,s_*)-x_1(\tau,s))ds_* d\tau\\ \displaystyle ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - \int_0^t \int_I m_2(\tau,s_*) \phi(x_2(\tau,s_*)-x_2(\tau,s))ds_* d\tau \\ \displaystyle (m_1-m_2)(t,s) = \int_0^t (\psi(s,x_1(\tau,\cdot), m_1(\tau,\cdot)) -\psi(s,x_2(\tau,\cdot), m_2(\tau,\cdot))) d\tau \end{array}\right. \end{equation*} that we rewrite \begin{equation*} \left\{\begin{array}{l} \displaystyle (x_1-x_2)(t,s) = \int_0^t \int_I (m_1-m_2)(\tau,s_*) \phi(x_1(\tau,s_*)-x_1(\tau,s))ds_* d\tau\\ \displaystyle ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \int_0^t \int_I m_2(\tau,s_*) (\phi(x_1(\tau,s_*)-x_1(\tau,s)) -\phi(x_2(\tau,s_*)-x_2(\tau,s)))ds_* d\tau \\ \displaystyle (m_1-m_2)(t,s) = \int_0^t (\psi(s,x_1(\tau,\cdot), m_1(\tau,\cdot)) -\psi(s,x_1(\tau,\cdot), m_2(\tau,\cdot))) d\tau\\ \displaystyle ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \int_0^t (\psi(s,x_1(\tau,\cdot), m_2(\tau,\cdot)) -\psi(s,x_2(\tau,\cdot), m_2(\tau,\cdot))) d\tau \end{array}\right. 
\end{equation*}
Thus, we have
\begin{equation*}
\begin{array}{l}
\displaystyle |x_1-x_2|(t,s) \leq \int_0^t \int_I 2 L_\phi X_T |m_1-m_2|(\tau,s_*) ds_* d\tau + \int_0^t \int_I M_TL_\phi (|x_1-x_2|(\tau,s_*) + |x_1-x_2|(\tau,s)) ds_* d\tau
\end{array}
\end{equation*}
from which we deduce
\begin{equation*}
\begin{array}{l}
\displaystyle |x_1-x_2|^2(t,s) \leq 8 L_\phi^2 X_T^2 t \int_0^t \int_I |m_1-m_2|^2(\tau,s_*) ds_* d\tau \\
\displaystyle ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + 4 M_T^2 L_\phi^2 t \int_0^t \int_I |x_1-x_2|^2(\tau,s_*) ds_* d\tau + 4 M_T^2 L_\phi^2 t \int_0^t |x_1-x_2|^2(\tau,s) d\tau.
\end{array}
\end{equation*}
Thus, we have,
\begin{equation*}
\begin{array}{l}
\displaystyle \|x_1-x_2\|^2_{L^2(I)}(t) \leq A_T \left( \int_0^t \|m_1-m_2\|^2_{L^2(I)}(\tau) d\tau + \int_0^t \|x_1-x_2\|^2_{L^2(I)}(\tau) d\tau \right).
\end{array}
\end{equation*}
Similarly,
\begin{equation*}
\begin{array}{rl}
\displaystyle \|m_1-m_2\|^2_{L^2(I)}(t) &\leq 2 t \int_0^t \|\psi(\cdot,x_1(\tau),m_1(\tau)) - \psi(\cdot,x_1(\tau),m_2(\tau)) \|_{L^2(I)}^2 d\tau \\
&\displaystyle ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + 2 t \int_0^t \|\psi(\cdot,x_1(\tau),m_2(\tau)) - \psi(\cdot,x_2(\tau),m_2(\tau)) \|_{L^2(I)}^2 d\tau\\
&\displaystyle \leq 2t L_\psi^2 \int_0^t \left(\|m_1-m_2\|^2_{L^2(I)}(\tau) + \|x_1-x_2\|^2_{L^2(I)}(\tau) \right) d\tau.
\end{array}
\end{equation*}
Finally,
\begin{equation*}
\|x_1-x_2\|^2_{L^2(I)}(t) + \|m_1-m_2\|^2_{L^2(I)}(t) \leq 2 A_T \int_0^t \left(\|x_1-x_2\|^2_{L^2(I)}(\tau) + \|m_1-m_2\|^2_{L^2(I)}(\tau) \right) d\tau.
\end{equation*}
By Gronwall's lemma, we deduce that, for all $t \in [0,T]$,
\begin{equation*}
\|x_1-x_2\|^2_{L^2(I)}(t) + \|m_1-m_2\|^2_{L^2(I)}(t) = 0,
\end{equation*}
which concludes the proof of uniqueness.
\end{proof}
\subsection{Well posedness of the microscopic system}
In this paragraph, we state the existence and uniqueness result for solutions to the discrete system \eqref{eq:syst-gen}. We do not provide the proof, since it is a straightforward adaptation, at the discrete level, of the fixed-point argument developed for the continuous case, that is, the graph limit equation.
\begin{theorem}
Let $(x^{0,N},m^{0,N}) \in \mathbb{R}^{dN} \times \mathbb{R}^N$. Let $\phi$ satisfy Hyp.~\ref{hyp:phi}, $\psi$ satisfy Hyp.~\ref{hyp:psi}, and $\psi_i^{(N)}$ be defined by \eqref{eq:psi}. Then for any $T>0$, there exists a unique solution $(x^{N},m^{N}) \in \mathcal{C}^1([0,T];\mathbb{R}^{dN} \times \mathbb{R}^N)$ to the discrete system \eqref{eq:syst-gen} with initial condition $(x^{0,N},m^{0,N}) \in \mathbb{R}^{dN} \times \mathbb{R}^N$. Moreover, there exist constants $\overline{X}$ and $\overline{M}$ such that for all $t \in [0,T]$, for all $i \in \{1, \dots, N \}$,
\begin{equation} \label{eq:bornex}
\| x_i^{N}(t)\| \leq \overline{X}
\end{equation}
and
\begin{equation} \label{eq:bornem}
| m_i^{N}(t)| \leq \overline{M}.
\end{equation}
\end{theorem}
\begin{rem}
Although this theorem provides existence and uniqueness of solutions to system \eqref{eq:syst-gen} for the special class of mass dynamics given by \eqref{eq:psi}, it can easily be adapted to general mass dynamics $\psi^{(N)}_i(x^N,m^N)$ satisfying the two conditions: there exist $L_\psi>0$ and $C_\psi>0$ such that for all $x^N,y^N\in (\mathbb{R}^d)^N$, for all $m^N,p^N\in\mathbb{R}^N$,
\begin{equation*}
\begin{cases}
\|\psi^{(N)}(x^N,m^N)-\psi^{(N)}(y^N,m^N)\| \, \leq \,L_\psi \|x^N-y^N\| \\
\|\psi^{(N)}(x^N,m^N)-\psi^{(N)}(x^N,p^N)\| \, \leq \, L_\psi \|m^N-p^N\|,
\end{cases}
\end{equation*}
where $\|\cdot\|$ denotes the standard Euclidean norm in $(\mathbb{R}^d)^N$ or in $\mathbb{R}^N$, and for every $i\in\{1,\cdots,N\}$
\begin{equation*}
\displaystyle |\psi_i^{(N)}( x^N, m^N)| \leq C_\psi\left( 1 + \max_{j\in\{1,\cdots,N\}} |m^N_j| \right).
\end{equation*}
\end{rem}
\subsection{Convergence to the graph limit equation}
Let us now prove the main result of this article, namely that under our assumptions, the solution to the discrete problem \eqref{eq:syst-gen} converges to the solution to the integro-differential equation \eqref{eq:GraphLimit-gen} when $N$ goes to infinity. For the sake of clarity, we restate the main theorem announced in Section \ref{sec:presentation}.
\begin{maintheorem}
Let $x_0\in L^\infty(I;\mathbb{R}^d)$ and $m_0\in L^\infty(I;\mathbb{R})$ satisfying \eqref{eq:integral_egal_a_1}, and consider a functional $\psi:I \times L^2(I;\mathbb{R}^d)\times L^2(I;\mathbb{R}) \rightarrow \mathbb{R}$. Suppose that the function $\phi$ satisfies Hyp.~\ref{hyp:phi}, and that $\psi$ satisfies Hyp.~\ref{hyp:psi}. Let $x^{0,N}:=P_\mathrm{d}^N(x_0)$ and $m^{0,N}:=P_\mathrm{d}^N(m_0)$ be given by \eqref{eq:ICx}. Then the solution $(x_N,m_N)$ to \eqref{eq:syst-contN} with initial conditions
\[\displaystyle x_N(0,s) = \sum_{i=1}^N x_i^{0,N} \mathbf{1}_{[\frac{i-1}{N}, \frac{i}{N})}(s) \quad \text{ and } \quad \displaystyle m_N(0,s) = \sum_{i=1}^N m_i^{0,N} \mathbf{1}_{[\frac{i-1}{N}, \frac{i}{N})}(s)
\]
converges when $N$ tends to infinity in the $\mathcal{C}([0,T];L^2(I))$ topology. More specifically, there exist $(x,m)\in \mathcal{C}([0,T];L^\infty(I,\mathbb{R}^d))\times \mathcal{C}([0,T];L^\infty(I,\mathbb{R}))$ such that
\begin{equation}
\displaystyle \|x-x_N\|_{\mathcal{C}([0,T];L^2(I,\mathbb{R}^d))} \xrightarrow[N\rightarrow+\infty]{} 0 \quad \text{ and } \quad \|m-m_N\|_{\mathcal{C}([0,T];L^2(I,\mathbb{R}))} \xrightarrow[N\rightarrow+\infty]{} 0.
\end{equation}
Furthermore, the limit functions $x$ and $m$ are solutions to the integro-differential system \eqref{eq:GraphLimit-gen} supplemented by the initial conditions $x(0,\cdot) = x_0$ and $m(0,\cdot) = m_0$.
\end{maintheorem}
\begin{rem}
Theorem \ref{Th:GraphLimit} proves the convergence of the solution to \eqref{eq:syst-contN} to the solution to \eqref{eq:GraphLimit-gen}. This is equivalent to proving the convergence of the solution to the discrete system \eqref{eq:syst-gen} to the solution to \eqref{eq:GraphLimit-gen}, as shown in Proposition \ref{prop:equiv}.
\end{rem}
\begin{proof}
The proof follows the graph limit method of \cite{Medvedev14}. Let $\xi_N:= x_N - x$ and $\zeta_N := m_N -m$. We will also use the slight abuse of notation $y(t):=y(t,\cdot)$, with $y$ standing for $x$, $x_N$, $m$ or $m_N$.
We compute \begin{equation*} \begin{split} \frac{\partial\xi_N(t,s)}{\partial t} = & \int_I m_N(t,s_*)\, \phi(x_N(t,s_*)-x_N(t,s)) \, ds_* - \int_I m(t,s_*)\, \phi(x(t,s_*)-x(t,s)) \, ds_* \\ = & \int_I m_N(t,s_*)\, \left[ \phi(x_N(t,s_*)-x_N(t,s))- \phi(x(t,s_*)-x(t,s)) \right] \, ds_* \\ & + \int_I (m_N(t,s_*)- m(t,s_*)) \, \phi(x(t,s_*)-x(t,s)) \, ds_*. \end{split} \end{equation*} By multiplying by $\xi_N$ and integrating over $I$, we obtain \begin{equation}\label{eq:dxi} \begin{split} \frac{1}{2}\int_I \frac{\partial\xi_N(t,s)^2}{\partial t} ds = & \int_{I^2} m_N(t,s_*)\, \left[ \phi(x_N(t,s_*)-x_N(t,s))- \phi(x(t,s_*)-x(t,s)) \right] \xi_N(t,s) \, ds_*\, ds \\ & + \int_{I^2} \zeta_N(t,s_*) \xi_N(t,s) \, \phi(x(t,s_*)-x(t,s)) \, ds_*\, ds. \end{split} \end{equation} We study the first term. Since the solution to \eqref{eq:syst-gen} satisfies \eqref{eq:bornex}-\eqref{eq:bornem}, we have \begin{equation} \label{eq:bornemN} \sup_{t \in [0,T]} \esssup_{s \in I} |m_N(t,s)| \leq \overline{M}. \end{equation} Then, since $\phi$ is Lipschitz, there exists $L>0$ such that \begin{equation*} \begin{split} & \left | \int_{I^2} m_N(t,s_*)\, \left[ \phi(x_N(t,s_*)-x_N(t,s))- \phi(x(t,s_*)-x(t,s)) \right] \xi_N(t,s)\, ds_*\, ds \right| \\ \leq \;& \overline{M} L \int_{I^2} |\xi_N(t,s)| \,\left| x_N(t,s_*)-x_N(t,s)- x(t,s_*)+x(t,s) \right| \, ds_*\, ds \\ \leq \;& \overline{M} L \int_{I^2} |\xi_N(t,s)| \,\left| \xi_N(t,s_*) - \xi_N(t,s) \right| \, ds_*\, ds \leq 2 \overline{M} L \|\xi_N(t)\|^2_{L^2(I)} \end{split} \end{equation*} We now look at the second term of \eqref{eq:dxi}. Since $x \in \mathcal{C}([0,T];L^\infty(I;\mathbb{R}^d))$ and $\phi$ is continuous, there exists a constant $\displaystyle{C_\phi:={\mathrm{ess}\sup}_{(s,s_*,t)\in I^2\times [0,T]}|\phi(x(t,s_*)-x(t,s))| }$ which is finite, and we have the following bound: \begin{equation*} \begin{split} & \int_{I^2} \zeta_N(t,s_*) \xi_N(t,s) \, \phi(x(t,s_*)-x(t,s)) \, ds_*\, ds \leq \; C_\phi \; \int_{I^2} |\zeta_N(t,s_*) \xi_N(t,s)| ds_*\, ds \\ \leq & \; C_\phi \; \|\xi_N(t)\|_{L^1(I)}\, \|\zeta_N(t)\|_{L^1(I)} \; \leq \; C_\phi \; \|\xi_N(t)\|_{L^2(I)}\, \|\zeta_N(t)\|_{L^2(I)}. \end{split} \end{equation*} Hence from \eqref{eq:dxi}, \begin{equation}\label{eq:dxi2} \frac{1}{2}\frac{d}{dt}\|\xi_N(t)\|_{L^2(I)}^2 \, \leq \, 2 \overline{M} L \|\xi_N(t)\|^2_{L^2(I)} + C_\phi \; \|\xi_N(t)\|_{L^2(I)}\, \|\zeta_N(t)\|_{L^2(I)}. \end{equation} We now compute \begin{equation*} \begin{split} \frac{\partial\zeta_N(t,s)}{\partial t} = & N \int_{\frac{1}{N}\floor{sN}}^{\frac{1}{N}(\floor{sN}+1)} \psi(s_*,x_N(t),m_N(t)) \,ds_* - \psi(s,x(t),m(t))\\ = & N \int_{\frac{1}{N}\floor{sN}}^{\frac{1}{N}(\floor{sN}+1)} \left[ \psi(s_*,x_N(t),m_N(t)) - \psi(s_*,x(t),m(t)) \right] \,ds_* \\ & + N \int_{\frac{1}{N}\floor{sN}}^{\frac{1}{N}(\floor{sN}+1)} \psi(s_*,x(t),m(t) \,ds_* - \psi(s,x(t),m(t)). 
\end{split} \end{equation*} Multiplying by $\zeta_N(t,s)$ and integrating over $I$, we get: \begin{equation}\label{eq:dzn} \begin{split} \frac{1}{2}\int_I \frac{\partial\zeta_N(t,s)^2}{\partial t} \, ds = & \int_I N \int_{\frac{1}{N}\floor{sN}}^{\frac{1}{N}(\floor{sN}+1)} \left[ \psi(s_*,x_N(t),m_N(t)) - \psi(s_*,x(t),m(t)) \right] ds_*\, \zeta_N(t,s) \, ds \\ & + \int_I\left[ N \int_{\frac{1}{N}\floor{sN}}^{\frac{1}{N}(\floor{sN}+1)} \psi(s_*,x(t),m(t)) \,ds_* - \psi(s,x(t),m(t)) \right] \zeta_N(t,s)\, ds \\ \leq & \; \|h_N(t)\|_{L^2(I)} \|\zeta_N(t)\|_{L^2(I)} \; + \; \|g_N(t)\|_{L^2(I)} \, \|\zeta_N(t)\|_{L^2(I)} , \end{split} \end{equation} where the last inequality was obtained from the Cauchy-Schwarz inequality, and we denoted by {$h_N$ and $g_N$} the functions $$ h_N:(t,s) \mapsto N \int_{\frac{1}{N}\floor{sN}}^{\frac{1}{N}(\floor{sN}+1)} \left[ \psi(s_*,x_N(t),m_N(t)) - \psi(s_*,x(t),m(t)) \right] ds_*$$ and $$ g_N:(t,s) \mapsto N \int_{\frac{1}{N}\floor{sN}}^{\frac{1}{N}(\floor{sN}+1)} \psi(s_*,x(t),m(t)) \,ds_* - \psi(s,x(t),m(t)). $$ We start with the first term: \begin{equation*} \begin{split} \|h_N(t)\|_{L^2(I)}^2 & = \int_I \left( N \int_{\frac{1}{N}\floor{sN}}^{\frac{1}{N}(\floor{sN}+1)} \left[ \psi(s_*,x_N(t),m_N(t)) - \psi(s_*,x(t),m(t)) \right] ds_*\, \right)^2ds \\ & = \sum_{i=1}^N\, \int_{\frac{i-1}{N}}^{\frac{i}{N}} N^2 \left( \int_{\frac{i-1}{N}}^{\frac{i}{N}} \left[ \psi(s_*,x_N(t),m_N(t)) - \psi(s_*,x(t),m(t)) \right] ds_*\, \right)^2ds \\ & = \sum_{i=1}^N\, N \left( \int_{\frac{i-1}{N}}^{\frac{i}{N}} \left[ \psi(s_*,x_N(t),m_N(t)) - \psi(s_*,x(t),m(t)) \right]\cdot 1 \, ds_*\, \right)^2 \\ & \leq \sum_{i=1}^N\, N \left( \left[ \int_{\frac{i-1}{N}}^{\frac{i}{N}} \left[ \psi(s_*,x_N(t),m_N(t)) - \psi(s_*,x(t),m(t)) \right]^2 ds_*\,\right]^{1/2} \left[ \int_{\frac{i-1}{N}}^{\frac{i}{N}} 1^2 ds_*\,\right]^{1/2}\right)^2 \\ & \leq \sum_{i=1}^N\, \int_{\frac{i-1}{N}}^{\frac{i}{N}} \left[ \psi(s_*,x_N(t),m_N(t)) - \psi(s_*,x(t),m(t)) \right]^2 ds_* \\ & = \| \psi(\cdot,x_N(t),m_N(t)) - \psi(\cdot,x(t),m(t)) \|_{L^2(I)}^2, \end{split} \end{equation*} using the Cauchy Schwarz inequality. Then, from \eqref{eq:psilip2}, \begin{equation*} \begin{split} \|h_N(t)\|_{L^2(I)} & \leq \| \psi(\cdot,x_N(t),m_N(t)) - \psi(\cdot,x(t),m_N(t)) \|_{L^2(I)} + \| \psi(\cdot,x(t),m_N(t)) - \psi(\cdot,x(t),m(t)) \|_{L^2(I)} \\ & \leq L_\psi ( \| x_N(t)-x(t) \|_{L^2(I)} + \| m_N(t)-m(t) \|_{L^2(I)}) = L_\psi ( \| \xi_N(t) \|_{L^2(I)} + \| \zeta_N(t) \|_{L^2(I)}). \end{split} \end{equation*} We now study the second term of \eqref{eq:dzn}. According to Lebesgue's differentiation theorem, for almost every $s\in I$, $$ \lim_{N\rightarrow+\infty} N \int_{\frac{1}{N}\floor{sN}}^{\frac{1}{N}(\floor{sN}+1)} \psi(s_*,x(t),m(t)) \,ds_* = \psi(s,x(t),m(t)). $$ which implies \begin{equation}\label{eq:h2} \lim_{N\rightarrow+\infty} \|g_N(t)\|_{L^2(I)} = 0. \end{equation} Summing up the contributions of $h_N$ and $g_N$, from \eqref{eq:dzn} we obtain: \begin{equation}\label{eq:dzn2} \frac{1}{2}\frac{d}{dt}\|\zeta_N(t)\|_{L^2(I)}^2 \, \leq \, L_\psi \left( \| \xi_N(t) \|_{L^2(I)} + \| \zeta_N(t) \|_{L^2(I)} \right) \|\zeta_N(t)\|_{L^2(I)} + \|g_N(t)\|_{L^2(I)} \|\zeta_N(t)\|_{L^2(I)}. \end{equation} Now, for all $\epsilon >0$ and $t\in[0,T]$, let $\bar{\zeta}_N^\epsilon(t):=\sqrt{\|\zeta_N(t)\|_{L^2(I)}^2+\epsilon}$ and $\bar{\xi}_N^\epsilon(t):=\sqrt{\|\xi_N(t)\|_{L^2(I)}^2+\epsilon}$. 
From \eqref{eq:dxi2} and \eqref{eq:dzn2},
\begin{equation*}
\begin{cases}
\frac{1}{2}\frac{d}{dt}(\bar{\xi}_N^\epsilon(t)^2) \, \leq \, 2 L \overline{M} \, \bar{\xi}_N^\epsilon(t)^2 + C_\phi \; \bar{\xi}_N^\epsilon(t) \, \bar{\zeta}_N^\epsilon(t)\\
\frac{1}{2}\frac{d}{dt}(\bar{\zeta}_N^\epsilon(t)^2) \, \leq \, L_\psi \left( \bar{\xi}_N^\epsilon(t) + \bar{\zeta}_N^\epsilon(t) \right) \bar{\zeta}_N^\epsilon(t) + \|g_N(t)\|_{L^2(I)} \bar{\zeta}_N^\epsilon(t),
\end{cases}
\end{equation*}
and since for all $t\in [0,T]$, $\bar{\xi}_N^\epsilon(t)>0$ and $\bar{\zeta}_N^\epsilon(t)>0$, this implies:
\begin{equation*}
\begin{cases}
\frac{d}{dt}\bar{\xi}_N^\epsilon(t) \, \leq \, 2 L \overline{M} \, \bar{\xi}_N^\epsilon(t) + C_\phi \, \bar{\zeta}_N^\epsilon(t)\\
\frac{d}{dt}\bar{\zeta}_N^\epsilon(t) \, \leq \, L_\psi \left( \bar{\xi}_N^\epsilon(t) + \bar{\zeta}_N^\epsilon(t) \right) + \|g_N(t)\|_{L^2(I)} .
\end{cases}
\end{equation*}
Summing up, we get
$$
\frac{d}{dt}(\bar{\xi}_N^\epsilon(t)+\bar{\zeta}_N^\epsilon(t)) \, \leq \, K (\bar{\xi}_N^\epsilon(t)+\bar{\zeta}_N^\epsilon(t)) + \|g_N(t)\|_{L^2(I)}
$$
where $K:=\max\{ 2 L \overline{M} ,\, C_\phi\} + L_\psi$. Now, from Gronwall's inequality, for all $t\in [0,T]$,
$$
\bar{\xi}_N^\epsilon(t)+\bar{\zeta}_N^\epsilon(t) \, \leq \, (\bar{\xi}_N^\epsilon(0)+\bar{\zeta}_N^\epsilon(0))e^{Kt} + \int_0^t \|g_N(\tau)\|_{L^2(I)}e^{K(t-\tau)}d\tau.
$$
Letting $\epsilon$ tend to zero, we obtain:
$$
\sup_{t\in [0,T]} \left(\|\xi_N(t)\|_{L^2(I)}+\|\zeta_N(t)\|_{L^2(I)} \right) \leq \left(\|\xi_N(0)\|_{L^2(I)}+\|\zeta_N(0)\|_{L^2(I)}+\int_0^T \|g_N(\tau)\|_{L^2(I)}d\tau \right) e^{KT}.
$$
As seen in Remark \ref{Rem:x0m0}, $\lim_{N\rightarrow +\infty} \xi_N(0,s) = \lim_{N\rightarrow +\infty} x_N(0,s)-x(0,s) = 0$ and $\lim_{N\rightarrow +\infty} \zeta_N(0,s) = \lim_{N\rightarrow +\infty} m_N(0,s)-m(0,s) = 0$ for almost all $s\in I$, which implies that $\lim_{N\rightarrow +\infty}\|\xi_N(0)\|_{L^2(I)}=0$ and $\lim_{N\rightarrow +\infty}\|\zeta_N(0)\|_{L^2(I)}=0$. From \eqref{eq:h2}, for all $t$, $\lim_{N\rightarrow +\infty}\|g_N(t)\|_{L^2(I)}=0$, so using the dominated convergence theorem for the last term, we can finally deduce the convergence result \eqref{eq:convxm}.
\end{proof}
\section{Relation between Graph Limit and Mean-field Limit}
\label{sec:mfl}
In Section \ref{sec:graphlim}, we have derived the \emph{Graph Limit} of the microscopic model \eqref{eq:syst-gen} when the number of agents $N$ goes to infinity, showing that the limit functions representing the opinions and weights satisfy a system of integro-differential equations. The aim of this section is to relate the \emph{Graph Limit} that we obtained to the \emph{Mean-Field Limit}, which has been studied much more extensively in the field of collective dynamics. However, the Mean-Field Limit can only be derived for a particular form of mass dynamics that satisfy an \emph{indistinguishability} property. We begin by shedding light on this concept.
\subsection{Indistinguishability and mean-field limit}
\label{sec:indisting}
In the context of the classical Hegselmann-Krause model without weights \eqref{eq:HK}, the Mean-Field Limit process consists of representing the population by its density rather than by a collection of individual opinions. The limit density $\nu(t,x)$ represents the (normalized) quantity of agents with opinion $x$ at time $t$ and satisfies a non-local transport equation, where the transport vector field $V$ is defined by the convolution of $\nu$ with the interaction function $\phi$.
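For the reader's convenience, let us write this limit equation explicitly: in this unweighted setting, the limit density formally satisfies
\begin{equation*}
\partial_t \nu_t(x) + \nabla\cdot \big( V[\nu_t](x)\, \nu_t(x) \big) = 0, \qquad V[\nu_t](x) = \int_{\mathbb{R}^d} \phi(y-x)\, d\nu_t(y),
\end{equation*}
which is the special case of the transport equation with source \eqref{eq:mfl} stated in Theorem \ref{Th:mfl} below, with vanishing source term.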
The proof of the limit relies on the fact that the empirical measure
\begin{equation*}
\nu^N(t,x) := \frac{1}{N} \sum_{i=1}^N \delta(x-x^{N}_i(t))
\end{equation*}
satisfies the very same transport equation. It is crucial to notice that there is an irretrievable information loss in this formalism. Indeed, the empirical measure keeps count of the number of agents with opinion $x$ at time $t$, but loses track of the individual labeling of the agents (i.e.\ of the indices). In the case of our augmented system \eqref{eq:syst-gen} with time-evolving weights, we generalize the notion of empirical measure by defining
\begin{equation}\label{eq:empmes}
\mu^N(t,x) := \frac{1}{N} \sum_{i=1}^N m^{N}_i(t) \delta(x-x^{N}_i(t)).
\end{equation}
We stress once again the information loss: the empirical measure only keeps track of the total weight of the group of agents with opinion $x$, but loses track of the individual labeling, the individual weights, and the number of agents at each point $x$. More specifically, we draw attention to the fact that the empirical measure is invariant:
\begin{enumerate}[(i)]
\item by relabeling of the indices: for every $(x^N,m^N)\in (\mathbb{R}^d)^N\times\mathbb{R}^N$, for any permutation $\sigma$ of $\{1,\cdots,N\}$,
$$
\frac{1}{N} \sum_{i=1}^N m^{N}_i \delta(x-x^{N}_i)=\frac{1}{N} \sum_{i=1}^N m^{N}_{\sigma(i)} \delta(x-x^{N}_{\sigma(i)})
$$
\item by grouping of the agents: for every $(x^N,m^N)\in (\mathbb{R}^d)^N\times\mathbb{R}^N$, for every $J\subset\{1,\cdots,N\}$ such that $x^N_i = x_J$ for all $i\in J$,
$$
\frac{1}{N} \sum_{i=1}^N m^{N}_i \delta(x-x^{N}_i) = \frac{1}{N} \left[ \left(\sum_{i\in J} m^{N}_{i}\right) \delta(x-x_{J}) + \sum_{i\in\{1,\cdots,N\}\setminus J} m^{N}_i \delta(x-x^{N}_i) \right].
$$
\end{enumerate}
Figure \ref{fig:Indist} illustrates this invariance by comparing two microscopic systems corresponding to the same empirical measure. It highlights the fact that, contrary to the graph limit studied in the previous section, the mean-field limit process entails a non-reversible information loss. The empirical measure only retains the information concerning the total mass of agents at each point, and is incapable of differentiating between differently-labeled agents or between agents grouped at the same point.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=1]
\input{Micro_xm_1}
\end{tikzpicture}%
\begin{tikzpicture}[scale=1]
\input{Micro_xm_2}
\end{tikzpicture}%
\begin{tikzpicture}[scale=1]
\input{Micro_xm_3}
\end{tikzpicture}%
\end{center}
\caption{Schematic representation of two microscopic sets of agents $(x^5,m^5)\in\mathbb{R}^5\times \mathbb{R}^5$ and $(y^5,p^5)\in\mathbb{R}^5\times \mathbb{R}^5$ corresponding to the same empirical measure $\mu^5\in\mathcal{P}(\mathbb{R})$. Left: Representation of $(x^5,m^5)$ with $x^5 = (0.5, 0.5, 1.5, 2.5, 3 )$ and $m^5 = (1.5,0.5,1.25,0.75,1)$. Center: Representation of $(y^5,p^5)$ with $y^5 = (0.5, 0.5, 3, 2.5,1.5 )$ and $p^5 = (1.25,0.75,1,0.75,1.25)$.
Right: Symbolic representation of the empirical measure $\mu^5 = \frac{1}{5}( 2\,\delta_{0.5} + 1.25 \,\delta_{1.5} + 0.75 \,\delta_{2.5}+ \,\delta_{3})$.}\label{fig:Indist}
\end{figure}
Hence, in order to study the mean-field limit, we will require System \eqref{eq:syst-gen} to satisfy the following \textit{indistinguishability} property:
\begin{definition}\label{Def:indist}
We say that system \eqref{eq:syst-gen} satisfies \emph{indistinguishability} if for all $J\subset \{1,\cdots,N\}$, for all initial conditions $(x^0, m^0)\in \mathbb{R}^{dN}\times\mathbb{R}^N$ and $(y^0, p^0)\in \mathbb{R}^{dN}\times\mathbb{R}^N$ satisfying
\begin{equation}\label{eq:prop-indist_0}
\begin{cases}
x_i^0 = y_i^0 = x_j^0 = y_j^0 \qquad \text{ for all } (i,j)\in J^2 \\
x_i^0 = y_i^0 \qquad \text{ for all } i\in \{1,\cdots,N\} \\
m_i^0 = p_i^0 \qquad \text{ for all } i\in J^c \\
\sum_{i\in J} m_i^0 = \sum_{i\in J} p_i^0,
\end{cases}
\end{equation}
the solutions $t\mapsto (x(t),m(t))$ and $t\mapsto (y(t),p(t))$ to system \eqref{eq:syst-gen} with respective initial conditions $(x^0, m^0)$ and $(y^0, p^0)$ satisfy for all $t\geq 0$,
\begin{equation*}
\begin{cases}
x_i(t) = y_i(t) = x_j(t) = y_j(t) \qquad \text{ for all } (i,j)\in J^2 \\
x_i(t) = y_i(t) \qquad \text{ for all } i\in \{1,\cdots,N\} \\
m_i(t) = p_i(t) \qquad \text{ for all } i\in J^c \\
\sum_{i\in J} m_i(t) = \sum_{i\in J} p_i(t).
\end{cases}
\end{equation*}
\end{definition}
In the above definition and in all that follows, $J^c$ denotes the complement of the set $J$ in $\{1,\cdots,N\}$, i.e. $J^c=\{1,\cdots,N\}\setminus J$. We begin by noticing that part of the required property is automatically satisfied by the general system \eqref{eq:syst-gen}, without further specifying the mass dynamics. Namely, we prove that if two agents start at the same position, they remain at the same position for all time.
\begin{prop}\label{Prop:equal}
Let $(x^{0,N}_i)_{i\in\{1,\cdots,N\}}\in\mathbb{R}^{dN}$ and $(m^{0,N}_i)_{i\in\{1,\cdots,N\}}\in\mathbb{R}^{N}$. Let $\psi_i^{(N)}:(\mathbb{R}^d)^N\times\mathbb{R}^N\rightarrow\mathbb{R}$ and $\phi\in\mathrm{Lip}(\mathbb{R}^d;\mathbb{R}^d)$ be such that the system of ODEs \eqref{eq:syst-gen} with initial condition $\left( x^{0,N},m^{0,N}\right)$ admits a unique solution $(x^N,m^N)$. If $x_k^{0,N}=x_l^{0,N}$ for some $(k,l)\in \{1,\cdots,N\}^2$, then it holds $x_k^N(t) = x_l^N(t)$ for all $t\geq 0$.
\end{prop}
\begin{proof}
Let $(x^0_i)_{i\in\{1,\cdots,N\}}\in\mathbb{R}^{dN}$ and $(m^0_i)_{i\in\{1,\cdots,N\}}\in\mathbb{R}^{N}$. Without loss of generality, suppose that $x_1^0 = x_2^0$. Now consider the slightly modified differential system
\begin{equation}\label{eq:syst-mod12}
\begin{cases}
\displaystyle \frac{d}{dt} x_i= \frac{1}{N} \sum_{j=3}^N m_j a(\|x_i-x_j\|) (x_j-x_i), \qquad x_i(0) = x_i^0, \qquad \text{ for } i=1,2 \\
\displaystyle \frac{d}{dt} x_i = \frac{1}{N} \sum_{j=1}^N m_j a(\|x_i-x_j\|) (x_j-x_i), \qquad x_i(0) = x_i^0, \qquad \text{ for all } i\in\{3,\cdots,N\} \\
\displaystyle \frac{d}{dt} m_i = \psi_i^{(N)}(x,m), \qquad m_i(0) = m_i^0, \qquad \text{ for all } i\in\{1,\cdots,N\}.
\end{cases}
\end{equation}
System \eqref{eq:syst-mod12} has a unique solution, which we denote by $(\tilde{x},\tilde{m})$. Notice that $\tilde{x}_1$ and $\tilde{x}_2$ satisfy the same differential equation, so since $x_1^0=x_2^0$, we have $\tilde{x}_1(t) = \tilde{x}_2(t)$ for all $t\geq 0$. Furthermore, one easily sees that $(\tilde{x}, \tilde{m})$ is also a solution to \eqref{eq:syst-gen}: indeed, since $\tilde{x}_1(t)=\tilde{x}_2(t)$, the terms corresponding to $j\in\{1,2\}$ in the equations for $i\in\{1,2\}$ vanish.
By uniqueness, we conclude that the unique solution $(x,m)$ to \eqref{eq:syst-gen} satisfies $x_1(t) = x_2(t)$ for all $t\geq 0$.
\end{proof}
Thus, if System \eqref{eq:syst-gen} is well-posed, part of the requirements for indistinguishability stated in Def. \ref{Def:indist} is automatically met. However, one can easily show that for general weight dynamics $\psi^{(N)}$, System \eqref{eq:syst-gen} does not satisfy indistinguishability. For this reason, from here onward, we will focus on a particular class of weight dynamics given by
\begin{equation} \label{eq:psi_i_gen}
\psi_i^{(N)}(x,m) = m_i\,\frac{1}{N^k} \sum_{j_1=1}^N \cdots \sum_{j_k=1}^N m_{j_1}\cdots m_{j_k}\, S(x_i, x_{j_1}, \cdots, x_{j_k}),
\end{equation}
where $k\in\mathbb{N}$ and $S:(\mathbb{R}^d)^{k+1}\rightarrow \mathbb{R}$ satisfies the following assumptions:
\begin{hyp}\label{hyp:S}
$S\in C((\mathbb{R}^d)^{k+1}; \mathbb{R})$ is globally bounded and Lipschitz. More specifically, there exist $\bar{S}$, $L_S>0$ such that
\begin{equation*}\label{eq:psiSkbound}
\forall y\in(\mathbb{R}^d)^{k+1}, \quad |S(y)|\leq \bar{S},
\end{equation*}
and
\begin{equation*}\label{eq:psiSklip}
\forall y\in (\mathbb{R}^d)^{k+1}, \forall z\in (\mathbb{R}^d)^{k+1}, \quad | S(y_0,\cdots,y_k) - S(z_0,\cdots,z_k) | \leq L_S \sum_{i=0}^k \|y_i-z_i\|.
\end{equation*}
Furthermore, we require that $S$ satisfy the following skew-symmetry property: there exists $(i,j)\in \{0,\cdots,k\}^2$ with $i\neq j$ such that for all $y\in(\mathbb{R}^d)^{k+1}$,
\begin{equation}\label{eq:condS}
S(y_0,\cdots, y_i,\cdots,y_j,\cdots,y_k) =-S(y_0,\cdots, y_j,\cdots,y_i,\cdots,y_k) .
\end{equation}
\end{hyp}
We show that with the weight dynamics given by \eqref{eq:psi_i_gen} and Hyp. \ref{hyp:S}, System \eqref{eq:syst-gen} satisfies the indistinguishability property of Def. \ref{Def:indist}.
\begin{prop}
Let $S\in C((\mathbb{R}^d)^{k+1}; \mathbb{R})$ satisfy Hyp. \ref{hyp:S}, and consider the collective dynamics system \eqref{eq:syst-gen}, with mass dynamics given by \eqref{eq:psi_i_gen}. Then the system satisfies the indistinguishability property of Def. \ref{Def:indist}.
\end{prop}
\begin{proof}
For conciseness and clarity, we prove the statement for the case $k=1$, i.e. $S\in C(\mathbb{R}^d\times\mathbb{R}^d;\mathbb{R})$ and
\begin{equation}\label{eq:Sbis}
\psi_i^{(N)}(x,m) = \frac{1}{N} m_i \sum_{j=1}^N m_j S(x_i,x_j).
\end{equation}
The proof in the general case $k\in\mathbb{N}$ is essentially identical. Let $J\subset \{1,\cdots,N\}$, let $J^c = \{1,\cdots,N\}\setminus J$, and let $N_c:=|J^c|\leq N$ denote the cardinality of $J^c$. Let $(x^0, m^0)\in \mathbb{R}^{dN}\times\mathbb{R}^N$ and $(y^0, p^0)\in \mathbb{R}^{dN}\times\mathbb{R}^N$ satisfy \eqref{eq:prop-indist_0}. Let $x_J^0:= x_i^0 = y_i^0$ denote the common initial position of the agents $i\in J$.
Let us start by considering a different system of dimension $d(N_c+1)+(N_c+1)$:
\begin{equation}\label{eq:syst-condensed}
\begin{cases}
\displaystyle \frac{d}{dt} \tilde{x}_i = \frac{1}{N} \tilde{m}_J a(\|\tilde{x}_i-\tilde{x}_J\|) (\tilde{x}_J-\tilde{x}_i) + \frac{1}{N} \sum_{j\in J^c} \tilde{m}_j a(\|\tilde{x}_i-\tilde{x}_j\|) (\tilde{x}_j-\tilde{x}_i) , \qquad \text{ for all } i\in J^c \\
\displaystyle \frac{d}{dt} \tilde{x}_J = \frac{1}{N} \sum_{j\in J^c} \tilde{m}_j a(\|\tilde{x}_J-\tilde{x}_j\|) (\tilde{x}_j-\tilde{x}_J), \\
\displaystyle \frac{d}{dt} \tilde{m}_i = \frac{1}{N} \tilde{m}_J \tilde{m}_i S(\tilde{x}_i,\tilde{x}_J) + \frac{1}{N} \tilde{m}_i \sum_{j\in J^c} \tilde{m}_j S(\tilde{x}_i,\tilde{x}_j), \qquad \text{ for all } i\in J^c \\
\displaystyle \frac{d}{dt} \tilde{m}_J = \frac{1}{N} \tilde{m}_J^2 S(\tilde{x}_J,\tilde{x}_J) + \frac{1}{N} \tilde{m}_J \sum_{j\in J^c} \tilde{m}_j S(\tilde{x}_J,\tilde{x}_j).
\end{cases}
\end{equation}
Given a set of initial conditions, the Cauchy problem associated with \eqref{eq:syst-condensed} has a unique solution. Let us now consider the solutions $t\mapsto (x(t),m(t))$ and $t\mapsto(y(t), p(t))$ to \eqref{eq:syst-gen} with mass dynamics given by \eqref{eq:Sbis} with respective initial conditions $(x^0, m^0)$ and $(y^0, p^0)$. First of all, from Prop. \ref{Prop:equal}, $x_i(t)=x_j(t)$ and $y_i(t) = y_j(t)$ for all $(i,j)\in J^2$ and all $t\geq 0$. Let $t\mapsto x_J(t):=x_i(t)$ and $t\mapsto y_J(t):=y_i(t)$ for $i\in J$. We can compute:
\begin{equation*}
\begin{cases}
\displaystyle \frac{d}{dt} x_i = \frac{1}{N} (\sum_{j\in J} m_j) a(\|x_i-x_J\|) (x_J-x_i) + \frac{1}{N} \sum_{j\in J^c} m_j a(\|x_i-x_j\|) (x_j-x_i) \qquad \text{ for all } i\in J^c,\\
\displaystyle \frac{d}{dt} x_J = \frac{1}{N} \sum_{j\in J^c} m_j a(\|x_J-x_j\|) (x_j-x_J), \\
\displaystyle \frac{d}{dt} m_i = \frac{1}{N} (\sum_{j\in J} m_j) m_i S(x_i,x_J) + \frac{1}{N} m_i \sum_{j\in J^c} m_j S(x_i,x_j) \qquad \text{ for all } i\in J^c , \\
\displaystyle \frac{d}{dt}(\sum_{j\in J} m_j) = \frac{1}{N} (\sum_{j\in J} m_j)^2 S(x_J,x_J) + \frac{1}{N} (\sum_{j\in J} m_j) \sum_{j\in J^c} m_j S(x_J,x_j).
\end{cases}
\end{equation*}
This shows that $( (x_i)_{i\in J^c}, x_J, (m_i)_{i\in J^c}, (\sum_{j\in J} m_j))$ satisfies the differential system \eqref{eq:syst-condensed}. Similarly, we can show that $( (y_i)_{i\in J^c}, y_J, (p_i)_{i\in J^c}, (\sum_{j\in J} p_j))$ satisfies \eqref{eq:syst-condensed}. Furthermore, from \eqref{eq:prop-indist_0},
$$( (x_i^0)_{i\in J^c}, x_J^0, (m_i^0)_{i\in J^c}, (\sum_{j\in J} m_j^0))=( (y_i^0)_{i\in J^c}, y_J^0, (p_i^0)_{i\in J^c}, (\sum_{j\in J} p_j^0)).$$
By uniqueness, for all $t\geq 0$ it holds:
$$( (x_i)_{i\in J^c}, x_J, (m_i)_{i\in J^c}, (\sum_{j\in J} m_j)) = ( (y_i)_{i\in J^c}, y_J, (p_i)_{i\in J^c}, (\sum_{j\in J} p_j)),$$
which yields precisely the conclusion of Definition \ref{Def:indist}.
\end{proof}
\subsection{General context and results}\label{sec:MFpres}
Let $\P(\mathbb{R}^d)$ denote the set of probability measures on $\mathbb{R}^d$. As explained in Section \ref{sec:indisting}, the Mean-Field Limit process only makes sense for a subclass of mass dynamics that satisfy an indistinguishability property (Def. \ref{Def:indist}). Hence, from here onward, we will focus on the indistinguishability-preserving mass dynamics given by \eqref{eq:psi_i_gen}. In this framework, the convergence of the empirical measure $\mu^N$ to a limit measure $\mu$ was proven in \cite{PouradierDuteil21}, and can be stated more precisely as follows:
\begin{theorem}\label{Th:mfl}
Let $T>0$.
For each $N\in\mathbb{N}$, let $(x^{0,N}_i)_{i\in\{1,\cdots,N\}}\in\mathbb{R}^{dN}$ and $(m^{0,N}_i)_{i\in\{1,\cdots,N\}}\in\mathbb{R}^{N}$ satisfying \eqref{eq:sum_M}. Let $S\in C((\mathbb{R}^d)^{k+1}; \mathbb{R})$ satisfying Hyp. \ref{hyp:S}. For all $t\in [0,T]$, let $t\mapsto(x^N(t), m^N(t))$ be the corresponding solution to \eqref{eq:syst-gen}-\eqref{eq:psi_i_gen} with initial data $(x^{0,N},m^{0,N})$, and let $\mu^N_t:= \frac{1}{N}\sum_{i=1}^N m_i^N(t) \delta_{x_i^N(t)}\in \P(\mathbb{R}^d)$ be the corresponding empirical measures. If there exists $\mu_0 \in \P(\mathbb{R}^d)$ such that $$ \lim_{N\rightarrow\infty} \rho(\mu^N_0,\mu_0) = 0, $$ then for all $t\in [0,T]$, $$ \lim_{N\rightarrow\infty} \rho(\mu^N_t,\mu_t) = 0, $$ where $\mu_t$ is the solution to the transport equation with source \begin{equation}\label{eq:mfl} \partial_t \mu_t(x) + \nabla\cdot ( V[\mu_t](x) \mu_t(x)) = h[\mu_t](x) \end{equation} with the non-local vector-field given by \[ \forall \mu\in\P(\mathbb{R}^d),\quad \forall x\in\mathbb{R}^d, \quad V[\mu](x) = \int_{\mathbb{R}^d} \phi(y-x) d\mu(y), \] the non-local source term given by \[ \forall \mu\in \P(\mathbb{R}^d), \quad \forall x\in\mathbb{R}^d, \quad h[\mu](x) = \left(\int_{(\mathbb{R}^d)^k} S(x,y_1,\cdots,y_k) d\mu(y_1)\cdots d\mu(y_k)\right) \mu(x). \] and with initial condition $\mu_{t=0} = \mu_0$. \end{theorem} \noindent In Theorem \ref{Th:mfl}, $\rho$ denotes the Bounded Lipschitz distance, also known as the generalized Wasserstein distance $W^{1,1}_1$ \cite{PiccoliRossi14,PouradierDuteil21}. The aim of this section is to explore the link between the Graph Limit equation \eqref{eq:GraphLimit-gen} and the Mean-Field Limit equation \eqref{eq:mfl}. In the spirit of \cite{BiccariKoZuazua19}, we will actually show that the Mean-Field Limit is subordinated to the Graph Limit. Moreover, this link can be used to revisit the proof of the mean-field limit of the discrete system \eqref{eq:syst-gen}. It is important to note however that the result is weaker than Theorem \ref{Th:mfl} and its interest lies mainly in its ability to link the two approaches. We will prove the following result : \begin{theorem} \label{Th:mfl2} Let $T>0$. Let $x_0\in L^\infty(I;\mathbb{R}^d)$ and $m_0\in L^\infty(I;\mathbb{R}^{*+})$ satisfying \eqref{eq:integral_egal_a_1}. Let $\phi$ satisfy Hyp.~\ref{hyp:phi}. Let $(x_N,m_N)$ denote the solution to \eqref{eq:syst-contN} with mass dynamics given by \eqref{eq:psi_i_gen}, with $S$ satisfying Hyp.~\ref{hyp:S} and with initial conditions \[\displaystyle x_N(0,s) = \sum_{i=1}^N x_i^{0,N} \mathbf{1}_{[\frac{i-1}{N}, \frac{i}{N})}(s) \quad \text{ and }\quad \displaystyle m_N(0,s) = \sum_{i=1}^N m_i^{0,N} \mathbf{1}_{[\frac{i-1}{N}, \frac{i}{N})}(s), \] where $x^{0,N}=P_\mathrm{d}^N(x_0)$ and $m^{0,N}=P_\mathrm{d}^N(m_0)$ are defined by \eqref{eq:ICx}.\\ Then there exist $x\in \mathcal{C}([0,T];L^\infty(I;\mathbb{R}^d))$ and $m\in \mathcal{C}([0,T];L^\infty(I;\mathbb{R}))$ such that \[ \displaystyle \|x-x_N\|_{\mathcal{C}([0,T];L^2(I,\mathbb{R}^d))} \xrightarrow[N\rightarrow+\infty]{} 0 \quad \text{ and } \quad \|m-m_N\|_{\mathcal{C}([0,T];L^2(I,\mathbb{R}))} \xrightarrow[N\rightarrow+\infty]{} 0. \] Moreover, $(x,m)$ are solutions to the integro-differential system \eqref{eq:GraphLimit-gen} with $\psi:=\psi_{S,k}$ defined by \begin{equation*} \psi_{S,k}(s,x,m) = m(s) \int_{I^k} m(s_1) \cdots m(s_k) \, S(x(s), x(s_1), \cdots, x(s_k) ) \; ds_1\cdots ds_k. 
\end{equation*}
In addition, let $\tilde{\mu}\in C([0,T];\P(\mathbb{R}^d))$ be defined by
\begin{equation*}
\tilde{\mu}_t(x) := \int_I m(t,s_*) \delta(x-x(t,s_*)) ds_*.
\end{equation*}
Then, the empirical measure $\mu^N$ defined in \eqref{eq:empmes} converges weakly to $\tilde{\mu}$, and $\tilde{\mu}$ is a solution to the transport equation with source term \eqref{eq:mfl}.
\end{theorem}
Theorem \ref{Th:mfl2} contains many different results. We show the well-posedness of the system of integro-differential equations given by \eqref{eq:GraphLimit-gen} with $\psi:=\psi_{S,k}$ in Section \ref{sec:WellPos-Indist}. The well-posedness of the microscopic system \eqref{eq:syst-gen}-\eqref{eq:psi_i_gen} will be stated in Section \ref{sec:WellPos-Indist-micro}. Section \ref{sec:mflsubord} is devoted to proving that $\tilde{\mu}$ is a weak solution to \eqref{eq:mfl}.
\subsection{Well-posedness of the Graph Limit equation}\label{sec:WellPos-Indist}
From here onward, as mentioned in Section \ref{sec:indisting}, we will consider a particular class of mass dynamics which preserves mass as well as \emph{indistinguishability}. Let $\psi_{S,k}:I\times L^\infty(I;\mathbb{R}^d)\times L^\infty(I;\mathbb{R})\rightarrow \mathbb{R}$ be defined by
\begin{equation}\label{eq:psiSk}
\psi_{S,k}(s,x,m) = m(s) \int_{I^k} m(s_1) \cdots m(s_k) \, S(x(s), x(s_1), \cdots, x(s_k) ) \; ds_1\cdots ds_k
\end{equation}
where $k\in\mathbb{N}$ and as previously, $S\in C((\mathbb{R}^d)^{k+1}; \mathbb{R})$ satisfies Hyp. \ref{hyp:S}.
\begin{rem}
The condition \eqref{eq:condS} of Hyp. \ref{hyp:S} implies that
\begin{equation}\label{eq:psiSkint}
\int_{I} \psi_{S,k}(s,x,m) ds = 0.
\end{equation}
Indeed, \eqref{eq:psiSkint} follows from \eqref{eq:condS} by exchanging the two integration variables corresponding to the skew-symmetric pair of arguments of $S$ in $\int_I \psi_{S,k}(s,x,m)\, ds$. Actually, condition \eqref{eq:psiSkint} is sufficient to prove all the subsequent results, but for simplicity purposes, we will keep referring to the more particular (and more tangible) condition \eqref{eq:condS}.
\end{rem}
We notice that at first glance, the mass dynamics $\psi_{S,k}$ given by \eqref{eq:psiSk} do not satisfy Hyp. \ref{hyp:psi}, which was necessary in order to prove the existence and uniqueness of the solution to equation \eqref{eq:GraphLimit-gen}. The aim of this section is to show that nevertheless, the system of coupled integro-differential equations \eqref{eq:GraphLimit-gen} with $\psi= \psi_{S,k}$ is well-posed, as long as Hyp. \ref{hyp:S} is satisfied. The proof of existence and uniqueness will rely on the following a priori observations:
\begin{prop}\label{Prop:psiSkprop}
Let $(x_0,m_0)\in L^\infty(I,\mathbb{R}^d\times\mathbb{R}^{*+})$. Consider a solution $(x,m)$ on an interval $[0,T]$ to the integro-differential system \eqref{eq:GraphLimit-gen} with initial condition $(x_0,m_0)$ and weight dynamics $\psi_{S,k}$ given by \eqref{eq:psiSk} and Hyp.~\ref{hyp:S}. Then, the following properties hold:
\begin{enumerate}[i)]
\item For almost every $s\in I$ and all $t\in [0,T]$, $m(t,s)>0$
\item For all $t\in [0,T]$, $\int_I m(t,s) ds = M_0 := \int_I m_0(s) ds$
\item For almost every $s\in I$ and all $t\in [0,T]$, $m(t,s)\leq m_0(s) \exp(M_0^k \bar{S} t)$
\end{enumerate}
\end{prop}
\begin{rem}
With the assumption \eqref{eq:integral_egal_a_1}, Properties $(ii)$ and $(iii)$ simplify to
\[\int_I m(\cdot,s) ds \equiv 1 \qquad \text{ and } \qquad m(t,s)\leq m_0(s) \exp(\bar{S} t). \]
\end{rem}
\begin{proof}
The second property is an immediate consequence of the antisymmetry property \eqref{eq:condS}: integrating the equation for $m$ over $I$ and using \eqref{eq:psiSkint}, we obtain $\frac{d}{dt}\int_I m(t,s)\, ds = 0$.\\
We now focus on the first point.
Suppose that there exist $t_- \in [0,T]$ and a non-negligible set $\overline{\mathcal{N}}$ in $I$ such that for $s \in \overline{\mathcal{N}}$, we have $m(t_-,s)\leq 0$. Let
$$t^* := \inf\{t\in [0,T], \, \text{there exists a non-negligible set } \overline{\mathcal{N}}_t \text{ such that for } s \in \overline{\mathcal{N}}_t, ~ m(t,s)\leq 0\}.$$
Since $m_0(s)>0$ for almost every $s\in I$, there exists $s^* \in \overline{\mathcal{N}}_{t^*}$ such that $m_0(s^*) >0$ and by continuity, $m(t^*,s^*)=0$. Moreover, by definition of $t^*$, for $t<t^*$ and for almost every $s\in I$, we have $m(t,s)>0$. Using the global bound on $S$, we then compute, for all $t\leq t^*$:
\begin{equation*}
\begin{split}
\partial_t m(t,s^*) = & m(t,s^*) \int_{I^k} m(t,s_1) \cdots m(t,s_k) \, S(x(t,s^*), x(t,s_1), \cdots, x(t,s_k) ) \; ds_1\cdots ds_k \\
\geq& - \bar{S} \, m(t,s^*) \int_{I^k} m(t,s_1) \cdots m(t,s_k) \, ds_1\cdots ds_k = - \bar{S} \, M_0^k \, m(t,s^*)
\end{split}
\end{equation*}
where we used the second property. Thus, from Gronwall's lemma, we obtain that for all $t\leq t^*$, $m(t,s^*)\geq m_0(s^*) e^{-\bar{S} \, M_0^k\, t} > 0 $, which contradicts $m(t^*,s^*)=0$. We deduce that $m(t,s)>0$ for almost every $s\in I$ and all $t\in [0,T]$. \\
The third point can be proven easily by using the positivity of the weights and the boundedness of $S$ by $\bar{S}$.
\end{proof}
We now prove that the results of Lemma \ref{Lemma:WellPos-decoupled} also hold with $\psi=\psi_{S,k}$ satisfying \eqref{eq:psiSk} and Hyp. \ref{hyp:S}.
\begin{lemma}\label{Lemma:WellPos-decoupled-psiSk}
Let $\tilde{x} \in \mathcal{C}([0,T];L^\infty(I;\mathbb{R}^d))$ and $\tilde{m} \in \mathcal{C}([0,T];L^\infty(I;\mathbb{R}))$. Let $x_0\in L^\infty(I;\mathbb{R}^d)$ and $m_0\in L^\infty(I;\mathbb{R}^{*+})$. Let $\phi$ satisfy Hyp. \ref{hyp:phi} and let $\psi_{S,k}$ satisfy \eqref{eq:psiSk} and Hyp. \ref{hyp:S}. \\
Then for any $T>0$, there exists a unique solution $(x,m)\in\mathcal{C}^1([0,T];L^\infty(I;\mathbb{R}^d \times \mathbb{R}^{*+}))$ to the decoupled integro-differential equations
\begin{equation*}
\left\{\begin{array}{l}
\displaystyle \partial_t x(t,s) = \int_I \tilde{m}(t,s_*)\phi(x(t,s_*) - x(t,s)) ds_*; \qquad x(0,\cdot)=x_0 \\
\partial_t m(t,s) = \psi_{S,k}(s,\tilde{x}(t,\cdot),m(t,\cdot)); \qquad m(0,\cdot)=m_0.
\end{array}\right.
\label{eq:GraphLimit-decoupled-psiSk}
\end{equation*}
\end{lemma}
\begin{proof}
Since the two equations are decoupled, the proof of existence and uniqueness of the solution $x\in\mathcal{C}([0,T];L^\infty(I;\mathbb{R}^d))$ to the first equation was already given in Lemma \ref{Lemma:WellPos-decoupled}. We focus on the well-posedness of the second equation. Let $M_0:=\int_I m_0(s) ds$. Let $M_{m_0}'$ be the metric subspace of $\mathcal{C}([0,T];L^1(I;\mathbb{R}^{*+}))$ consisting of functions $m$ satisfying $m(0,\cdot) = m_0$ and $\int_I m(t,s) ds = M_0$ for all $t\in [0,T]$. Let $K_{m_0}'$ be the operator defined by:
$$
\begin{array}{l l l }
K_{m_0}': & M_{m_0}' & \rightarrow M_{m_0}' \\
 & m &\displaystyle \mapsto (K_{m_0}' m):(t,s)\mapsto m_0(s) + \int_0^t \psi_{S,k}(s,\tilde{x}(\tau,\cdot),m(\tau,\cdot)) d\tau.
\end{array}
$$
From Proposition \ref{Prop:psiSkprop}, it is clear that $K_{m_0}'$ maps $M_{m_0}'$ into $M_{m_0}'$. Let $\tilde{T}>0$. We will show that $K_{m_0}'$ is contracting for the norm $\|\cdot\|_{M_{m_0}'} := \sup_{[0,\tilde{T}]} \|\cdot\|_{L^1(I)}$. Let $(m_1,m_2)\in M_{m_0}'^2$.
Then for all $t\leq \tilde{T}$, \begin{equation*} \begin{split} \int_I & | K_{m_0}' m_1 - K_{m_0}' m_2 | (t,s) ds = \int_I \left | \int_0^t \psi_{S,k}(s,\tilde{x}(\tau,\cdot),m_1(\tau,\cdot))- \psi_{S,k}(s,\tilde{x}(\tau,\cdot),m_2(\tau,\cdot)) d\tau \right | ds \\ & \leq \int_I \int_0^t \int_{I^k} \Big | [m_1(s) m_1(s_1) \cdots m_1(s_k) - m_2(s) m_2(s_1) \cdots m_2(s_k)] \, S(\tilde{x}(s), \tilde{x}(s_1), \cdots, \tilde{x}(s_k) ) \Big | \; ds_1\cdots ds_k \, dt \, ds\\ & \leq \bar{S} \int_0^t \int_{I^{k+1}} \left | m_1(s) m_1(s_1) \cdots m_1(s_k) - m_2(s) m_2(s_1) \cdots m_2(s_k) \right | \;ds \, ds_1\cdots ds_k \, dt \\ & \leq \bar{S} \int_0^t \Big [ \int_{I^{k+1}} | m_1(s) - m_2(s) | m_1(s_1) \cdots m_1(s_k) \;ds \, ds_1\cdots ds_k \\ & \qquad + \int_{I^{k+1}} m_2(s) | m_1(s_1) - m_2(s_1) | m_1(s_2) \cdots m_1(s_k) \;ds \, ds_1\cdots ds_k \\ & \qquad + \cdots + \int_{I^{k+1}} m_2(s) m_2(s_1) \cdots m_2(s_{k-1}) | m_1(s_k) - m_2(s_k) | \;ds \, ds_1\cdots ds_k \, \Big ] dt \\ & \leq \bar{S} (k+1) \int_0^t \int_I | m_1(s) - m_2(s) | ds M_0^k \, dt, \end{split} \end{equation*} where, from the second line onward we omitted the time dependence for compactness of notation. We obtain: \begin{equation*} \| K_{m_0}' m_1 - K_{m_0}' m_2 \|_{M_{m_0}'} \leq \tilde{T} \bar{S} (k+1) M_0^k \| m_1 - m_2 \|_{M_{m_0}'}. \end{equation*} Thus, if $\tilde{T}\leq (2\bar{S} (k+1) M_0^k)^{-1}$, by the Banach contraction mapping principle, there exists a unique solution $m\in \mathcal{C}([0,\tilde{T}];L^1(I;\mathbb{R}^{*+}))$. We then take $m(\tilde{T},\cdot)$ as the initial condition, and the local solution can be extended to $[0, 2\tilde{T}]$, and by repeating the same argument, to $[0,T]$. We thus showed that there exists a unique solution $m\in \mathcal{C}([0,T];L^1(I;\mathbb{R}^{*+}))$. Furthermore, the third property of Prop. \ref{Prop:psiSkprop} implies that $m\in \mathcal{C}([0,T];L^\infty(I;\mathbb{R}^{*+}))$. Lastly, since the integrand is continuous with respect to $\tau$, $m(\cdot,s)$ is continuously differentiable for almost all $s\in I$, which proves that ${m\in \mathcal{C}^1([0,T];L^\infty(I;\mathbb{R}^{*+}))}$. \end{proof} The proof of Theorem \ref{Th:GL-wellpos} relied on the fact that $\psi$ satisfies \eqref{eq:psilip2}. Although $\psi_{S,k}$ does not satisfy \eqref{eq:psilip2}, we notice that a similar property does hold as long as $m$ belongs to $L^\infty$. \begin{lemma}\label{Lemma:psiSklip2} Let $\psi_{S,k}$ satisfy \eqref{eq:psiSk} and Hyp. \ref{hyp:S}. Let $(m_1, m_2) \in L^\infty(I)^2$ with $max(\|m_2\|_{L^\infty(I)},\|m_1\|_{L^\infty(I)}) \leq M_1$. Then for all $(x_1,x_2)\in L^\infty(I)^2$, it holds: \begin{equation}\label{eq:psiSklip2} \begin{cases} \|\psi_{S,k}(\cdot,x_1,m_1)-\psi_{S,k}(\cdot,x_2,m_1)\|_{L^2(I)} \, \leq \, (k+1) L_S M_1^{k+1} \|x_1-x_2\|_{L^2(I)}\\ \|\psi_{S,k}(\cdot,x_1,m_1)-\psi_{S,k}(\cdot,x_1,m_2)\|_{L^2(I)} \, \leq \, \sqrt{k+1} \bar{S} M_1^k \|m_1-m_2\|_{L^2(I)}. 
\end{cases} \end{equation} \end{lemma} \begin{proof} We start by computing \begin{equation*} \begin{split} \|\psi_{S,k} & (\cdot,x_1,m_1)-\psi_{S,k}(\cdot,x_2,m_1)\|_{L^2(I)}^2 \\ & = \int_I \bigg | \int_{I^k} m_1(s_0)\cdots m_1(s_k) (S(x_1(s_0),\cdots,x_1(s_k))-S(x_2(s_0),\cdots,x_2(s_k))) ds_1 \cdots ds_k \bigg |^2 ds_0 \\ & \leq \int_{I^{k+1}} m_1(s_0)^2\cdots m_1(s_k)^2 \Big | S(x_1(s_0),\cdots,x_1(s_k))-S(x_2(s_0),\cdots,x_2(s_k))\Big |^2 ds_0 \cdots ds_k \\ & \leq \int_{I^{k+1}} m_1(s_0)^2\cdots m_1(s_k)^2 L_S^2 \Big (\sum_{i=0}^k \|x_1(s_i)-x_2(s_i)\|\Big )^2 ds_0 \cdots ds_k \\ & \leq M_1^{2(k+1)} L_S^2 \int_{I^{k+1}} (k+1) \sum_{i=0}^k \|x_1(s_i)-x_2(s_i)\|^2 ds_0 \cdots ds_k \\ & \leq (k+1)^2 L_S^2 M_1^{2(k+1)} \|x_1-x_2\|_{L^2(I)}^2 \end{split} \end{equation*} from which we get the first inequality of \eqref{eq:psiSklip2}. For the second inequality, we compute: \begin{equation*} \begin{split} \|\psi_{S,k} & (\cdot,x_1,m_1)-\psi_{S,k}(\cdot,x_1,m_2)\|_{L^2(I)}^2 \\ & = \int_I \bigg | \int_{I^k} \big ( m_1(s_0)\cdots m_1(s_k) - m_2(s_0)\cdots m_2(s_k) \big ) S(x_1(s_0),\cdots,x_1(s_k)) ds_1 \cdots ds_k \bigg |^2 ds_0 \\ & \leq \bar{S}^2 \int_{I^{k+1}} \big | m_1(s_0)\cdots m_1(s_k) - m_2(s_0)\cdots m_2(s_k) \big |^2 ds_0 \cdots ds_k \\ & \leq \bar{S}^2 \int_{I^{k+1}} \sum_{i=0}^k \Big( \prod_{j=0}^{i-1} m_2(s_j)^2\Big) | m_1(s_i)-m_2(s_i)|^2 \Big( \prod_{j=i+1}^{k} m_1(s_j)^2\Big) ds_0 \cdots ds_k \\ & \leq \bar{S}^2 M_1^{2k} \int_{I^{k+1}} \sum_{i=0}^k | m_1(s_i)-m_2(s_i)|^2 ds_0 \cdots ds_k = (k+1) \bar{S}^2 M_1^{2k} \|m_1-m_2\|_{L^2(I)}^2. \end{split} \end{equation*} \end{proof} We are now fully equipped to prove the well-posedness of the coupled system, with $\psi=\psi_{S,k}$. \begin{theorem}\label{Th:WellPos-psiSk} Let $x_0\in L^\infty(I;\mathbb{R}^d)$ and $m_0\in L^\infty(I;\mathbb{R}^{*+})$. Let $\phi$ satisfy Hyp. \ref{hyp:phi} and let $\psi_{S,k}$ satisfy \eqref{eq:psiSk} and Hyp. \ref{hyp:S}. \\ Then for any $T>0$, there exists a unique solution $(x,m)\in\mathcal{C}^1([0,T];L^\infty(I;\mathbb{R}^d \times \mathbb{R}^{*+}))$ to the integro-differential system \begin{equation*} \left\{\begin{array}{l} \displaystyle \partial_t x(t,s) = \int_I m(t,s_*)\phi(x(t,s) - x(t,s_*)) ds_*; \qquad x(\cdot,0)=x_0 \\ \partial_t m(t,s) = \psi_{S,k}(s,x(t,\cdot),m(t,\cdot)); \qquad m(\cdot,0)=m_0. \end{array}\right. \end{equation*} \end{theorem} \begin{proof} The proof is almost identical to the one of Theorem \ref{Th:GL-wellpos}. The only difference lies in the inequality \eqref{eq:ineqmn}. We notice that from the third point of Proposition \ref{Prop:psiSkprop}, we can apply Lemma \ref{Lemma:psiSklip2} with $m_1:=m^n$ and $m_2:= m^{n+1}$, and $\|m^n\|_{L^\infty(I)} \leq M_T:=\| m_0\|_{L^\infty(I)} \exp(M_0^k \bar{S} T)$. We can thus replace \eqref{eq:ineqmn} by: \begin{equation*} \begin{split} \|m^{n+1}-& m^n\|_{L^2(I)}^2(t) \\ \leq & 2 t \int_0^t \int_I |\psi_{S,k}(s,x^n(\tau,\cdot),m^{n+1}(\tau,\cdot)) - \psi_{S,k}(s,x^n(\tau,\cdot),m^{n}(\tau,\cdot)) |^2 ds \, d\tau \\ & + 2t \int_0^t \int_I | \psi_{S,k}(s,x^n(\tau,\cdot),m^{n}(\tau,\cdot)) - \psi_{S,k}(s,x^{n-1}(\tau,\cdot),m^{n}(\tau,\cdot)) | ^2 ds \, d\tau \\ \leq & 2 t (k+1) \bar{S}^2 M_1^{2k} \int_0^t \|m^{n+1}-m^{n}\|_{L^2(I)}^2(\tau) d\tau + 2 t (k+1)^2 L_S^2 M_1^{2(k+1)} \int_0^t \| x^n - x^{n-1}\|_{L^2(I)}^2(\tau) d\tau . 
\end{split} \end{equation*} The rest of the proof of existence is identical, replacing the previous definition of $A_T$ by $$A_T = \max(8 T L_\phi^2 \, X_T^2,\, 8 T L_\phi^2 \, M_T^2,\, 2 T (k+1) \bar{S}^2 M_T^{2k}, 2T (k+1)^2 L_S^2 M_T^{2(k+1)} ).$$ The proof of uniqueness can be adapted similarly. \end{proof}
\subsection{Well-posedness of the microscopic system}\label{sec:WellPos-Indist-micro} The existence and uniqueness for the microscopic system \eqref{eq:syst-gen} can be shown in the case of functions $\psi_{S,k}$ satisfying Hyp. \ref{hyp:S} instead of Hyp. \ref{hyp:psi}, in the same way that we adapted the proof of well-posedness for the integro-differential equations in Section \ref{sec:WellPos-Indist}. For this reason, we do not provide the proof, which would be redundant, and merely state the result. \begin{theorem} Let $(x^{0,N},m^{0,N}) \in \mathbb{R}^{dN} \times \mathbb{R}^N$. Let $\phi$ satisfy Hyp. \ref{hyp:phi}, let $\psi_{S,k}$ satisfy \eqref{eq:psiSk} and Hyp. \ref{hyp:S}, and $\psi_i^{(N)}$ be defined by \begin{equation*} \forall i\in\{1,\cdots,N\},\qquad \psi_i^{(N)}(x^{N}(t),m^{N}(t)) = N \int_{\frac{i-1}{N}}^{\frac{i}{N}} \psi_{S,k}(s,x_N(t,\cdot),m_N(t,\cdot)) ds. \end{equation*} Then for any $T>0$, there exists a unique solution $(x^{N},m^{N}) \in \mathcal{C}^1([0,T];\mathbb{R}^{dN} \times \mathbb{R}^N)$ to the discrete system \eqref{eq:syst-gen} with initial condition $(x^{0,N},m^{0,N})$. Furthermore, the solution to the system has the following properties: \begin{enumerate} \item[(i)] $\forall i \in \{1, \dots, N \}$, $\forall t \in [0,T]$, $m_i^{(N)}(t) >0$, \item[(ii)] $\forall t \in [0,T]$, $\displaystyle{\sum_{i=1}^N m_i^{(N)}(t) = N}$, \item[(iii)] there exist constants $\overline{X}$ and $\overline{M}$ such that for all $t \in [0,T]$, for all $i \in \{1, \dots, N \}$, \begin{equation*} \| x_i^{(N)}(t)\| \leq \overline{X} \quad \text{ and } \quad | m_i^{(N)}(t)| \leq \overline{M}. \end{equation*} \end{enumerate} \end{theorem} \begin{rem} The theorem can also be stated for functions $\psi_i^{(N)}$ given by \eqref{eq:psi_i_gen} satisfying Hyp. \ref{hyp:S}. \end{rem}
\subsection{Subordination of the mean-field equation to the graph limit equation}\label{sec:mflsubord} The first part of Theorem \ref{Th:mfl2} consists of deriving the Graph Limit, as in Theorem \ref{Th:GraphLimit}, but for mass dynamics \eqref{eq:psiSk} satisfying Hyp. \ref{hyp:S} instead of Hyp. \ref{hyp:psi}. Nevertheless, as for the proof of the existence of a solution to the graph limit equation, the convergence proof can be adapted in a straightforward way, by using the assumptions on $\psi_{S,k}$ given by Hyp. \ref{hyp:S}. Thus, we can show that the solution to the discrete system \eqref{eq:syst-gen}-\eqref{eq:psi_i_gen} converges to the functions $(x,m)\in \mathcal{C}([0,T]; L^2(I;\mathbb{R}^d))\times \mathcal{C}([0,T]; L^2(I;\mathbb{R}))$, solutions to the integro-differential system \eqref{eq:GraphLimit-gen} with $\psi:I \times L^2(I;\mathbb{R}^d)\times L^2(I;\mathbb{R})\rightarrow \mathbb{R}$ defined in \eqref{eq:psiSk}, as stated in the following: \begin{prop} \label{prop:conv} Let $x_0\in L^\infty(I;\mathbb{R}^d)$ and $m_0\in L^\infty(I;\mathbb{R}^{*+})$ satisfy \eqref{eq:integral_egal_a_1}. Let $\phi$ satisfy Hyp. \ref{hyp:phi} and let $\psi:=\psi_{S,k}$ satisfy \eqref{eq:psiSk} and Hyp. \ref{hyp:S}.
Then the solution $(x_N,m_N)$ to \eqref{eq:syst-contN} with initial conditions $x_N(0,s)=P_\mathrm{c}^N(P_\mathrm{d}^N(x_0))$ and $m_N(0,s)=P_\mathrm{c}^N(P_\mathrm{d}^N(m_0))$ defined by \eqref{eq:ICx}-\eqref{eq:xN} converges, as $N$ tends to infinity, in the $\mathcal{C}([0,T];L^2(I))$ topology, i.e. there exists $(x,m)\in \mathcal{C}([0,T];L^2(I,\mathbb{R}^d))\times \mathcal{C}([0,T];L^2(I,\mathbb{R}))$ such that \[ \displaystyle \|x-x_N\|_{\mathcal{C}([0,T];L^2(I,\mathbb{R}^d))} \xrightarrow[N\rightarrow+\infty]{} 0 \quad \text{ and } \quad \|m-m_N\|_{\mathcal{C}([0,T];L^2(I,\mathbb{R}))} \xrightarrow[N\rightarrow+\infty]{} 0. \] Furthermore, the limit functions $x$ and $m$ are solutions to the integro-differential system \eqref{eq:GraphLimit-gen} with $\psi=\psi_{S,k}$, supplemented by the initial conditions $x(0,\cdot) = x_0$ and $m(0,\cdot) = m_0$. \end{prop} Let us now study the link between the non-local diffusive model coming from the graph limit equation~\eqref{eq:GraphLimit-gen} and the non-local transport equation with source \eqref{eq:mfl} obtained by the mean-field approach. This will give us an alternative proof of the mean-field limit. \begin{prop} \label{Prop:link_mfl-gl} Let $(x,m)\in \mathcal{C}([0,T]; L^2(I;\mathbb{R}^d))\times \mathcal{C}([0,T]; L^2(I;\mathbb{R}))$ be such that \begin{equation} \label{eq:GL-TE} \begin{cases} \partial_t x(t,s) = \displaystyle \int_I m(t,s_*) \phi(x(t,s_*)-x(t,s)) ds_* \\ \partial_t m(t,s) = \displaystyle m(t,s) \int_{I^k} m(t,s_1) \cdots m(t,s_k) \, S(x(t,s), x(t,s_1), \cdots, x(t,s_k) ) \; ds_1\cdots ds_k. \end{cases} \end{equation} Let $\tilde{\mu}\in \mathcal{C}([0,T];\P(\mathbb{R}^d))$ be defined by \begin{equation}\label{eq:def_mu} \tilde{\mu}_t(x) := \int_I m(t,s_*) \delta(x-x(t,s_*)) ds_*. \end{equation} Then $\tilde{\mu}$ satisfies the transport equation with source \eqref{eq:mfl}. \end{prop} \begin{proof} Given a test function $\varphi=\varphi(x)$, we consider the term $$(\tilde{\mu}_t,\varphi) := \int_{\mathbb{R}^d} \varphi(x) d \tilde{\mu}_t(x).$$ Let us study the quantity $\displaystyle{\frac{d}{dt} (\tilde{\mu}_t,\varphi)}.$ We start by noticing that $$ (\tilde{\mu}_t,\varphi) = \displaystyle \int_{\mathbb{R}^d} \varphi(x) d \tilde{\mu}_t(x)=\displaystyle \int_I m(t,s) \varphi(x(t,s)) ds $$ by definition of $\tilde{\mu}_t$. Therefore, it holds $$ \displaystyle \frac{d}{dt} (\tilde{\mu}_t,\varphi) = \displaystyle \int_I \partial_t m(t,s) \varphi(x(t,s)) ds + \int_I m(t,s) \langle \partial_t x(t,s), \nabla_x \varphi(x(t,s))\rangle ds $$ where $\langle \cdot,\cdot\rangle$ denotes the inner product in $\mathbb{R}^d$. Let us deal with the first term. Using \eqref{eq:GL-TE}, we have \[ \begin{split} & \displaystyle \int_I \partial_t m(t,s) \varphi(x(t,s)) ds\\ =& \displaystyle \int_{I^{k+1}} m(t,s) m(t,s_1) \cdots m(t,s_k) \, S(x(t,s), x(t,s_1), \cdots, x(t,s_k) )\varphi(x(t,s)) \; ds\, ds_1\cdots ds_k\\ =& \displaystyle \int_{\mathbb{R}^d} \varphi(x) dh[\tilde{\mu}_t](x). \end{split} \] Let us now deal with the second term. We start by rewriting the right-hand side of the first equation of \eqref{eq:GL-TE}. $$ \displaystyle \int_I m(t,s_*) \phi(x(t,s_*)-x(t,s)) ds_* = \int_{\mathbb{R}^d} \phi(x_*-x(t,s)) d\tilde{\mu}_t(x_*) = V[\tilde{\mu}_t](x(t,s)). $$ Thus, we obtain \[ \begin{split} \displaystyle \int_I m(t,s) \langle \partial_t x(t,s), \nabla_x \varphi(x(t,s))\rangle ds = & \displaystyle \int_I m(t,s) \langle V[\tilde{\mu}_t](x(t,s)), \nabla_x \varphi(x(t,s))\rangle ds \\ \displaystyle = & \displaystyle \int_{\mathbb{R}^d} \langle V[\tilde{\mu}_t](x), \nabla_x \varphi(x)\rangle d \tilde{\mu}_t(x).
\end{split} \] Finally, we obtain \begin{equation*} \frac{d}{dt} (\tilde{\mu}_t,\varphi) = \int_{\mathbb{R}^d} \langle V[\tilde{\mu}_t](x), \nabla_x \varphi(x)\rangle d \tilde{\mu}_t(x) + \int_{\mathbb{R}^d} \varphi(x) dh[\tilde{\mu}_t](x) \end{equation*} which is the weak version of \eqref{eq:mfl}. \end{proof} In order to prove Theorem \ref{Th:mfl2}, it remains to show the convergence of $\mu^N$ to $\tilde{\mu}$, where $\mu^N$ is the empirical measure for the microscopic system defined in \eqref{eq:empmes}, and $\tilde{\mu}$ is defined from the solution to the graph limit equation by \eqref{eq:def_mu}. The key point in order to do that is to rewrite $\mu^N$ using the functions $x_N$ and $m_N$ introduced to perform the graph limit and to pass to the limit in that expression. More precisely, we prove the following proposition: \begin{prop}\label{prop:convmu} Let $x_0\in L^\infty(I;\mathbb{R}^d)$ and $m_0\in L^\infty(I;\mathbb{R}^{*+})$. Let $(x^{N},m^{N})\in \mathcal{C}([0,T]; \mathbb{R}^d)^{N}\times \mathcal{C}([0,T];\mathbb{R})^{N}$ satisfy the differential system~\eqref{eq:syst-gen} with initial condition $x^{0,N}=P_\mathrm{d}^N(x_0)$ and $m^{0,N}=P_\mathrm{d}^N(m_0)$ given by \eqref{eq:ICx} and mass dynamics given by \eqref{eq:psi}-\eqref{eq:psiSk}. Let $\mu^N$ be the empirical measure associated with $(x^N,m^N)$, i.e. for all $t\in [0,T]$, \[ \mu^N_t(x) := \frac{1}{N} \sum_{i=1}^N m^N_i(t) \delta_{x^N_i(t)}. \] Secondly, let $(x,m)$ be the solution to the integro-differential system \eqref{eq:GraphLimit-gen} with weight dynamics given by \eqref{eq:psiSk} and initial conditions given by $x(0,\cdot)=x_0$ and $m(0,\cdot)=m_0$. Let \[ \tilde{\mu}_t(x) := \int_I m(t,s) \delta(x-x(t,s)) ds. \] Then, for every test function $\varphi\in \mathcal{C}^\infty_c(\mathbb{R}^d)$ and all $t\in [0,T]$, it holds \begin{equation*} \lim_{N\rightarrow\infty} \int_{\mathbb{R}^d} \varphi(x) (d\mu^N_t(x) - d\tilde{\mu}_t(x)) =0. \end{equation*} \end{prop} \begin{proof} Let $x_N= P_\mathrm{c}^N(x^N)\in \mathcal{C}([0,T];L^\infty(I;\mathbb{R}^d))$ and $m_N=P_\mathrm{c}^N(m^N)\in \mathcal{C}([0,T];L^\infty(I;\mathbb{R}))$ defined by \eqref{eq:xN} be the solutions to the integro-differential system \eqref{eq:syst-contN}-\eqref{eq:psiSk} with initial conditions $x_N(0,\cdot) = P_\mathrm{c}^N(x^{0,N})$ and $m_N(0,\cdot) = P_\mathrm{c}^N(m^{0,N})$ given by \eqref{eq:ICx}. We begin by showing that for every test function $\varphi\in \mathcal{C}^\infty_c(\mathbb{R}^d)$, \begin{equation}\label{eq:equivmeas} \int_{\mathbb{R}^d} \varphi(x) d\mu^N_t(x) = \int_{\mathbb{R}^d} \varphi(x) d\tilde{\mu}_t^N(x) , \end{equation} where $\tilde{\mu}^N_t\in\P(\mathbb{R}^d)$ is the measure defined by \[ \tilde{\mu}^N_t(x) := \int_I m_N(t,s) \delta(x-x_N(t,s)) ds. \] Since $x_N(t,s) = \sum_{i=1}^N x_i^{N}(t) \mathbf{1}_{[\frac{i-1}{N}, \frac{i}{N})}(s)$ and $m_N(t,s) = \sum_{i=1}^N m_i^{N}(t) \mathbf{1}_{[\frac{i-1}{N}, \frac{i}{N})}(s)$, we can compute: \[ \begin{split} \int_{\mathbb{R}^d} \varphi(x) \int_I m_N(t,s) \delta(x-x_N(t,s)) \, ds \, dx = & \int_I m_N(t,s)\, \varphi(x_N(t,s)) \, ds = \sum_{i=1}^N \int_{\frac{i-1}{N}}^{\frac{i}{N}} m_i^{N}(t)\, \varphi(x_i^{N}(t)) \, ds \\ = & \frac{1}{N} \sum_{i=1}^N m_i^{N}(t)\, \varphi(x_i^{N}(t)) = \int_{\mathbb{R}^d} \varphi(x)\, d\mu^N_t(x) . \end{split} \] Secondly, we prove the following weak convergence: \begin{equation*} \tilde{\mu}^N_t \rightharpoonup \tilde{\mu}_t \text{~when~} N \to \infty.
\end{equation*} Using the definitions of $\tilde{\mu}^N_t$ and $\tilde{\mu}_t$, we write: \[ \begin{split} \bigg|\int_{\mathbb{R}^d} \varphi(x) & (d\tilde\mu^N_t(x) - d\tilde{\mu}_t(x))\bigg| = \left| \int_{\mathbb{R}^d} \varphi(x) \int_I (m_N(t,s) \delta(x-x_N(t,s)) - m(t,s) \delta(x-x(t,s))) \, ds \, dx \right| \\ = &\left| \int_{I} \varphi(x_N(t,s)) m_N(t,s) ds - \int_{I} \varphi(x(t,s)) m(t,s) ds \right| \\ = & \left| \int_{I} (\varphi(x_N(t,s))- \varphi(x(t,s)) ) m_N(t,s) ds - \int_{I} \varphi(x(t,s)) (m(t,s)- m_N(t,s) ) ds\right| \\ \leq & \left( \int_I (\varphi(x_N(t,s))- \varphi(x(t,s)) )^2ds \right)^{1/2} \left(\int_I m_N(t,s)^2 ds \right)^{1/2} \\ & + \left(\int_I \varphi(x(t,s))^2 ds \right)^{1/2} \left( \int_I (m_N(t,s) - m(t,s))^2 ds\right)^{1/2}\\ \leq & \, K \left( \int_I \|x_N(t,s)- x(t,s)\|^2 ds \right)^{1/2} \left(\int_I m_N(t,s)^2 ds \right)^{1/2} \\ & + \left(\int_I \varphi(x(t,s))^2 ds \right)^{1/2} \left( \int_I (m_N(t,s) - m(t,s))^2 ds\right)^{1/2} \end{split} \] for a certain constant $K>0$, namely a Lipschitz constant of the test function $\varphi$. Moreover, since the solution to \eqref{eq:syst-gen} satisfies \eqref{eq:bornemN} and since $x \in \mathcal{C}([0,T];L^\infty(I;\mathbb{R}^d))$, $\varphi$ being regular, there exists a constant $C>0$ such that we finally have \[ \begin{split} & \left|\int_{\mathbb{R}^d} \varphi(x) (d\tilde\mu^N_t(x) - d\tilde{\mu}_t(x)) \right|\\ \leq & K \overline{M} \left( \int_I \|x_N(t,s)- x(t,s)\|^2 ds \right)^{1/2} + C \left( \int_I (m_N(t,s) - m(t,s))^2 ds\right)^{1/2}. \end{split} \] Proposition \ref{prop:conv} allows us to conclude that \[ \lim_{N\rightarrow\infty} \left|\int_{\mathbb{R}^d} \varphi(x) (d\tilde{\mu}^N_t(x) - d\tilde{\mu}_t(x)) \right| =0, \] and together with the equality \eqref{eq:equivmeas}, this proves the desired convergence. \end{proof} Theorem \ref{Th:mfl2} is thus proven by combining Propositions \ref{prop:conv}, \ref{Prop:link_mfl-gl} and \ref{prop:convmu}.
\section{Numerical simulations} \label{sec:numeric} \subsection{Dynamics not preserving indistinguishability}\label{sec:numericnotindisting} In this section, we illustrate the convergence of the solutions of the microscopic model \eqref{eq:syst-gen} to the solution of the graph limit equation \eqref{eq:GraphLimit-gen}, as stated in Theorem \ref{Th:GraphLimit}. We focus on mass dynamics that \textit{do not} preserve indistinguishability, for which, consequently, the classical mean-field limit process does not apply. In particular, we consider a situation in which the agents are divided into $K$ groups $I_k$, with $k\in\{1,\cdots,K\}$. Each group is composed of leaders $I_k^L$ and followers $I_k^F$ so that $I_k=I_k^L\cup I_k^F$ and $I_k^L\cap I_k^F=\emptyset$. Within each group, the weight of each leader increases proportionally to itself and to the total weight of all the followers of the group. Conversely, the weight of each follower decreases proportionally to itself and to the total weight of all the leaders. More specifically, consider the function $\psi$ given by \begin{equation}\label{eq:modelsimuGL2} \forall x\in L^2(I;\mathbb{R}^d),\quad \forall m\in L^2(I;\mathbb{R}),\qquad \psi(s,x,m) = \begin{cases} \beta m(s) \int_{I_k^F} m(s')ds' \quad \text{ if } s\in I_k^L \\ -\beta m(s) \int_{I_k^L} m(s')ds' \quad \text{ if } s\in I_k^F. \end{cases} \end{equation} One easily checks that the total mass $\int_I m(s) ds$ is conserved, and that $\psi$ satisfies \eqref{eq:psilip2} on any time interval $[0,T]$ with $T<\infty$.
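For readers who wish to experiment with this model, the following Python sketch (not part of the original study; the grid size, group layout and parameter values are illustrative choices of ours) evaluates a discretized version of the weight dynamics \eqref{eq:modelsimuGL2} on a uniform grid of $I$ and checks numerically that the total mass is conserved.
\begin{verbatim}
import numpy as np

def psi_leader_follower(m, groups, beta=5.0):
    # Discretized leader/follower weight dynamics on a uniform grid of I = [0,1]:
    # leaders of a group gain weight proportionally to the follower mass of that
    # group, followers lose weight proportionally to its leader mass.
    M = m.size
    ds = 1.0 / M                                 # uniform grid spacing on I
    psi = np.zeros_like(m, dtype=float)
    for leaders, followers in groups:
        follower_mass = m[followers].sum() * ds  # approximates int_{I_k^F} m(s') ds'
        leader_mass = m[leaders].sum() * ds      # approximates int_{I_k^L} m(s') ds'
        psi[leaders] = beta * m[leaders] * follower_mass
        psi[followers] = -beta * m[followers] * leader_mass
    return psi

if __name__ == "__main__":
    M, K, r = 200, 2, 0.1                        # illustrative values only
    m = np.ones(M)                               # any positive initial weight profile
    groups = []
    for k in range(K):
        block = np.arange(k * M // K, (k + 1) * M // K)
        n_lead = int(r * block.size)
        groups.append((block[:n_lead], block[n_lead:]))
    psi = psi_leader_follower(m, groups)
    # conservation of the total mass: int_I psi(s) ds should vanish up to round-off
    print("time derivative of the total mass:", psi.sum() / M)
\end{verbatim}
Feeding this right-hand side to any explicit time integrator gives a quick approximation of the graph limit weights for this model.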
Given a total number of groups $K\in\mathbb{N}$ and a proportion $r\in (0,1)$ of leaders in each group, we define $I_k := [\frac{k-1}{K},\frac{k}{K})$, $I_k^L = [\frac{k-1}{K},\frac{k-1}{K}+\frac{r}{K})$ and $I_k^F = [\frac{k-1}{K}+\frac{r}{K},\frac{k}{K})$. Provided that $n := N/K \in\mathbb{N}$ and that $rn\in\mathbb{N}$, the corresponding microscopic dynamics can be written simply as: \begin{equation}\label{eq:modelsimumicro2} \forall k\in\{1,\cdots ,K\}, \quad \dot m_{(k-1)n + i} = \begin{cases} \displaystyle \frac{\beta}{N} m_{(k-1)n + i} \sum_{j=rn+1}^{n} m_{(k-1)n+j} \quad \text{ if } i\in\{1,\cdots , r n\}, \\ \displaystyle - \frac{\beta}{N} m_{(k-1)n + i} \sum_{j=1}^{rn} m_{(k-1)n+j} \quad \text{ if } i\in\{rn+1,\cdots , n\}. \end{cases} \end{equation} We show the behavior of the model and the convergence of the microscopic dynamics to the macroscopic ones in cases in which we have one ($K=1$) and two ($K=2$) groups, with a proportion $r=10\%$ of leaders in each one. The initial conditions for the graph limit equation were taken to be $s\mapsto x_0(s) = \sin^2(4s)$ and $s\mapsto m_0(s)= \frac{1}{M_0}\, s \cos^2(5s)$, with $M_0:=\int_0^1 s \cos^2(5s) ds$. The corresponding initial conditions for the microscopic model were computed from \eqref{eq:ICx}. In all the following examples, the interaction function used was $y\mapsto a(y) = \frac{1}{1+y^2}$. Figure \ref{fig:Micro_K1} shows the evolution of the opinions and weights of 20 agents divided into 2 leaders (indexed $i=1$ and $i=2$) and 18 followers (indexed $i=3,\dots,20$). Observe that the leaders' weights quickly increase to sum up to the total weight of the group. As a consequence, consensus is achieved at a value $x=0.2$ close to the leaders' initial positions. Figure \ref{fig:microGL2} illustrates the convergence of the microscopic dynamics to the graph limit ones by comparing the opinions and weights for $N=100$. \begin{figure}[h] \begin{center} \includegraphics[scale=0.18]{MicroK1_x.png} \includegraphics[scale=0.18]{MicroK1_m.png} \includegraphics[scale=0.5, clip=true]{SpaceI} \end{center} \caption{ Evolution of opinions (left) and weights (right) of 20 agents. Center: color scale for the index $i$ ranging from 1 (blue) to 20 (red). Parameters: $K=1$, $r=0.1$, $\beta=5$.}\label{fig:Micro_K1} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.32\textwidth]{GLxN100t0p05.eps} \includegraphics[width=0.32\textwidth]{GLxN100t1p4.eps} \includegraphics[width=0.32\textwidth]{GLxN100t5.eps} \\ \includegraphics[width=0.32\textwidth]{GLmN100t0p05.eps} \includegraphics[width=0.32\textwidth]{GLmN100t1p4.eps} \includegraphics[width=0.32\textwidth]{GLmN100t5.eps} \\ \caption{\label{fig:microGL2} Representation of the solutions $s\mapsto x_N(t,s)$ (top) and $s \mapsto m_N(t,s)$ (bottom) to the microscopic dynamics \eqref{eq:syst-gen}-\eqref{eq:modelsimumicro2} and of the solutions $s\mapsto x(t,s)$ (top) and $s\mapsto m(t,s)$ (bottom) to the Graph Limit equation \eqref{eq:GraphLimit-gen}-\eqref{eq:modelsimuGL2} for $t=0.05$, $t=1.4$ and $t=5$. Parameters: $K = 1$, $r = 0.1$, $\beta = 5$.} \end{figure} Figure \ref{fig:Micro_K2} shows the evolution of the opinions and weights of 20 agents divided into 2 groups, each one containing one leader (respectively indexed $i=1$ and $i=11$) and 9 followers (respectively indexed $i=2,\dots,10$ and $i=12,\dots,20$).
Observe that the second group leader's weight increases much faster than the first group leader's weight, since the total weight of the second group is larger than the total weight of the first group. As a consequence, consensus is achieved at a value $x=0.6$ close to the second leader's initial position. Figure \ref{fig:microGL3} illustrates the convergence of the microscopic dynamics to the graph limit ones by comparing the opinions and weights for $N=100$. \begin{figure}[h] \begin{center} \includegraphics[scale=0.18]{MicroK2_x.png} \includegraphics[scale=0.18]{MicroK2_m.png} \includegraphics[scale=0.5, clip=true]{SpaceI} \end{center} \caption{ Evolution of opinions (left) and weights (right) of 20 agents. Center: color scale for the index $i$ ranging from 1 (blue) to 20 (red). Parameters: $K=2$, $r=0.1$, $\beta=5$.}\label{fig:Micro_K2} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.32\textwidth]{GL_x_N100t0p05.eps} \includegraphics[width=0.32\textwidth]{GL_xN100t1p4.eps} \includegraphics[width=0.32\textwidth]{GL_x_N100t5.eps} \\ \includegraphics[width=0.32\textwidth]{GL_m_N100_t0p05.eps} \includegraphics[width=0.32\textwidth]{GL_m_N100_t1p4.eps} \includegraphics[width=0.32\textwidth]{GL_m_N100t5.eps} \\ \caption{\label{fig:microGL3} Representation of the solutions $s\mapsto x_N(t,s)$ (top) and $s \mapsto m_N(t,s)$ (bottom) to the microscopic dynamics \eqref{eq:syst-gen}-\eqref{eq:modelsimumicro2} and of the solutions $s\mapsto x(t,s)$ (top) and $s\mapsto m(t,s)$ (bottom) to the Graph Limit equation \eqref{eq:GraphLimit-gen}-\eqref{eq:modelsimuGL2} for $t=0.05$, $t=1.4$ and $t=5$. Parameters: $K = 2$, $r = 0.1$, $\beta = 5$.} \end{figure}
\subsection{Dynamics preserving indistinguishability} \label{sec:numericindisting} \subsubsection{The model} In this series of simulations, we will consider a particular case of mass dynamics of the form \eqref{eq:psiSk}. Recall that as shown in Section \ref{sec:indisting}, all mass dynamics of this form preserve indistinguishability, thus we can study both limits (graph and mean-field) of the microscopic model. More precisely, let us focus on the following model associated with the weight dynamics \begin{equation}\label{eq:modelsimumicro} \psi_i(x,m) = \frac{1}{N} m_i \left( \frac{1}{N} \sum_{k=1}^N \sum_{j=1}^N m_k m_j \| \phi(x_k-x_j)\| - \sum_{j=1}^N m_j \| \phi(x_i-x_j)\|\right). \end{equation} Let us explain its origin. We denote by $e_{j \to i}$ the influence of $j$ on $i$ and define it as $e_{j \to i} = m_j \|\phi(x_i-x_j)\|$. Let $e_i$ represent the total group influence on $i$, defined as $$e_i=\sum_{j=1}^N e_{j \to i} = \sum_{j=1}^N m_j \|\phi(x_i-x_j)\|.$$ Now denoting by $\overline{e}$ the weighted average of the total group influence, $$\overline{e} = \sum_{k=1}^N \frac{m_k}{N} e_k = \sum_{k=1}^N \sum_{j=1}^N \frac{m_k}{N} m_j \|\phi(x_k-x_j)\|,$$ the mass dynamics of model \eqref{eq:modelsimumicro} can be rewritten as: \[ \psi_i(x,m) = \frac{1}{N} m_i \left( \overline{e} - e_i \right). \] Thus, in our model, if the group influence on $i$ is lower than the weighted average of the group influence among the population (i.e. agent $i$ is being less influenced than average), $i$ gains weight proportionally to this difference and to its own weight $m_i$. On the other hand, if the group influence on $i$ is higher than the weighted average among the population (i.e. agent $i$ is more influenced than average), $i$ loses weight proportionally to this difference and to its own weight $m_i$.
In other words, in this model, the least influenced agents gain weight, thus becoming the most influential. The Graph Limit of the microscopic model \eqref{eq:syst-gen}-\eqref{eq:modelsimumicro} is given by \eqref{eq:GraphLimit-gen}, with \begin{equation}\label{eq:modelsimuGL} \psi(s,x,m) = m(s) \left( \int_I \int_I m(s_*) m(\tilde{s}) \| \phi(x(\tilde s)-x(s_*))\|\, ds_*\, d\tilde{s} - \int_I m(s_*) \| \phi(x(s)-x(s_*))\|\, ds_* \right). \end{equation} As seen in Section \ref{sec:mfl}, the mean-field limit of \eqref{eq:syst-gen}-\eqref{eq:modelsimumicro} is given by the transport equation with source \eqref{eq:mfl}, with \begin{equation}\label{eq:modelsimumacro} h[\mu](x) = \left(\int_{\mathbb{R}^d} \int_{\mathbb{R}^d} \|\phi(y-z)\| d\mu(z) d\mu(y) - \int_{\mathbb{R}^d} \|\phi(y-x)\| d\mu(y)\right) \mu(x). \end{equation} \subsubsection{Numerical results} For simplicity of the numerical simulations, we take $d=1$ and we choose an interaction function compactly supported on $[0,R]$. Let $a\in \mathcal{C}(\mathbb{R}^+, \mathbb{R})$ be defined by $a:\delta \mapsto a(\delta) = \frac{1}{\delta} \sin^2( \frac{\pi}{R}\delta ) \, \mathbbm{1}_{(0,R)}(\delta)$, so that for all $x\in\mathbb{R}$, \[ \phi(x) = a(|x|) x = \begin{cases} \frac{x}{|x|} \sin^2( \frac{\pi}{R}|x| )\quad \text{ for all } x\in (-R,R)\setminus \{0\} \\ 0\quad \text{ otherwise}. \end{cases} \] Initial conditions are given by the functions $s\mapsto x_0(s)$ and $s\mapsto m_0(s)$ defined by \begin{equation}\label{eq:ICsimuGL} \begin{cases} x_0(s) = \frac{1}{\pi}\arccos(2s-1) \\ m_0(s) = \tilde{m}_0(s) \left(\int_I \tilde{m}_0(s_*) ds_*\right)^{-1} \end{cases} \end{equation} where $\tilde{m}_0(s) = s^{1/4}\cos^2(5s)+ 0.2 s^2+0.5$. Graphical representations of $x_0$ and $m_0$ can be found in the left panels of Figure \ref{fig:microGL}. Notice that all opinions are initially in the interval $[0,1]$, thus if $R\geq 1$, all agents interact with all others, and we expect consensus. From here onward, we choose $R=0.2$. We begin by showing numerical simulations of the microscopic model \eqref{eq:syst-gen}-\eqref{eq:modelsimumicro} for $N=30$. Initial conditions were computed from \eqref{eq:ICsimuGL}-\eqref{eq:ICx}. The left panel of Figure \ref{fig:micro_30} shows the time evolution of the opinions $t\mapsto x_i(t)$, where the (time-dependent) thickness of the lines is proportional to the corresponding weights $t\mapsto m_i(t)$, whose evolution is shown in the right panel. Notice that due to the compact support of the interaction function, the population divides into three clusters separated by distances greater than $R$, the interaction radius. Although the weights initially all start within the interval $[0.5,1.6]$, the weight dynamics spread them out, as the least influenced agents gain mass. This can be observed in the evolution of $m_{30}$ (represented in light green): agent $30$ feels little group influence since $x_{30}$ is at the lower edge of the group. Likewise, $m_1$ (in red) initially increases since $x_1$ is at the upper edge of the group, but after $x_1$ joins its closest neighbors, the group influence that it feels increases and $m_1$ decreases.
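As an illustration of how the microscopic model \eqref{eq:syst-gen}-\eqref{eq:modelsimumicro} can be simulated, here is a minimal Python sketch of its right-hand side. It is not the authors' code: it assumes the opinion update $\dot x_i = \frac{1}{N}\sum_{j} m_j\,\phi(x_j-x_i)$, in line with the first equation of \eqref{eq:GL-TE}, and uses $d=1$ together with the compactly supported interaction $\phi$ defined above.
\begin{verbatim}
import numpy as np

R = 0.2  # interaction radius used in this section

def phi(z):
    # phi(z) = a(|z|) z with a(delta) = sin^2(pi delta / R)/delta on (0,R), 0 elsewhere
    y = np.abs(z)
    out = np.zeros_like(z, dtype=float)
    mask = (y > 0) & (y < R)
    out[mask] = np.sign(z[mask]) * np.sin(np.pi * y[mask] / R) ** 2
    return out

def rhs(x, m):
    # Right-hand side (dx/dt, dm/dt) of the microscopic model for opinions x, weights m.
    N = x.size
    diff = x[None, :] - x[:, None]          # diff[i, j] = x_j - x_i
    dx = (phi(diff) * m[None, :]).sum(axis=1) / N
    influence = np.abs(phi(diff))           # |phi(x_i - x_j)|  (phi is odd)
    e = influence @ m                       # e_i = sum_j m_j |phi(x_i - x_j)|
    e_bar = (m @ e) / N                     # weighted average of the group influence
    dm = m * (e_bar - e) / N                # psi_i = m_i (e_bar - e_i) / N
    return dx, dm
\end{verbatim}
The pair returned by \texttt{rhs} can be passed to any standard ODE integrator in order to approximate the dynamics described above.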
\begin{figure}[!h] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{ModelMicro2_opinions1D-1.eps} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{ModelMicro2_weights1D-1.eps} \end{subfigure} \caption{\label{fig:micro_30} Evolution of opinions (left) and weights (right) for the microscopic model \eqref{eq:syst-gen}-\eqref{eq:modelsimumicro} with $N=30$ agents. The weighted barycenter $\bar x = \frac{1}{N}\sum_{i=1}^N m_i x_i$ and the average mass $\bar m = \frac{1}{N}\sum_{i=1}^N m_i$ are represented by black dotted lines. In the left panel, the thickness of the curve representing $x_i(t)$ is proportional to the corresponding weight $m_i(t)$.} \end{figure} Figure \ref{fig:microGL} shows the profiles of the graph limit solutions $s\mapsto x(t,s)$ and $s\mapsto m(t,s)$ to \eqref{eq:GraphLimit-gen}-\eqref{eq:modelsimuGL} with initial conditions given by \eqref{eq:ICsimuGL} at times $t=0$, $t=0.45$ and $t=1.5$ (black line). For comparison purposes, the solutions $s\mapsto x_N(t,s)$ and $s\mapsto m_N(t,s)$ to the microscopic model \eqref{eq:syst-gen}-\eqref{eq:modelsimumicro} for $N=50$ are plotted on the same figures, using the representation via step functions given by \eqref{eq:xN} (red line). The clustering behavior is now shown by the convergence of $x$ and $x_N$ to a step function taking three distinct values. Notice that the weight function converges to a function with three local maxima attained at the centers of the three clusters, while the weights of the agents at each cluster's edge form local minima. \begin{figure}[!h] \centering \includegraphics[width=0.3\textwidth]{MicGL_x_1.eps} \includegraphics[width=0.3\textwidth]{MicGL_x_10.eps} \includegraphics[width=0.3\textwidth]{MicGL_x_31.eps} \\ \includegraphics[width=0.3\textwidth]{MicGL_m_1.eps} \includegraphics[width=0.3\textwidth]{MicGL_m_10.eps} \includegraphics[width=0.3\textwidth]{MicGL_m_31.eps} \\ \caption{\label{fig:microGL} Representation of the solutions $s\mapsto x_N(t,s)$ (top) and $s \mapsto m_N(t,s)$ (bottom) to the microscopic dynamics \eqref{eq:syst-gen}-\eqref{eq:modelsimumicro} and of the solutions $s\mapsto x(t,s)$ (top) and $s\mapsto m(t,s)$ (bottom) to the Graph Limit equation \eqref{eq:GraphLimit-gen}-\eqref{eq:modelsimuGL} for $t=0$, $t=0.45$ and $t=1.5$.} \end{figure} Figure \ref{fig:microGLmacro} represents the solution $x\mapsto \mu_t(x)$ to the mean-field equation \eqref{eq:mfl}-\eqref{eq:modelsimumacro} with initial condition given by \[ \mu_0(x) = \int_I m_0(s) \delta( x-x_0(s)) ds \] at times $t=0$, $t=0.45$ and $t=1.5$. Again we observe convergence of the population to three clusters, as the measure converges to three Dirac masses located at the centers of mass of the clusters. As in the previous two representations, convergence to the left-most cluster is slower than convergence to the center and right clusters. For comparison, the solution to the microscopic model \eqref{eq:syst-gen}-\eqref{eq:modelsimumicro} with $N=50$ was plotted on the same figure (red line) using the step-function representation $\mu_t^{N,n}$ defined as follows: \[ \forall j\in \{1,\dots,n\}, \quad \forall x\in E_j, \quad \mu^{N,n}_t(x):= \sum_{i=1}^N \frac{m_i(t)}{N\,|E_j|}\mathbbm{1}_{E_j}(x_i(t)), \] where for all $j\in \{1,\dots,n\}$, $E_j:=[\frac{j-1}{n},\frac{j}{n})$, and $n=25$.
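The step-function representation $\mu_t^{N,n}$ is simply a weighted histogram of the opinions. A possible implementation (illustrative only, and assuming as in this example that the opinions stay in $[0,1]$) reads:
\begin{verbatim}
import numpy as np

def histogram_measure(x, m, n_bins=25):
    # Weighted histogram mu^{N,n}: the value m_i/(N |E_j|) is accumulated on the
    # bin E_j = [(j-1)/n, j/n) containing x_i; opinions are assumed to lie in [0,1].
    N = x.size
    width = 1.0 / n_bins                                  # |E_j|
    values = np.zeros(n_bins)
    j = np.clip(np.floor(x * n_bins).astype(int), 0, n_bins - 1)
    np.add.at(values, j, m / (N * width))
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    return edges, values
\end{verbatim}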
Subordination of the mean-field limit to the graph limit is illustrated by also plotting (black line) the measure $\tilde{\mu}_t$ reconstructed from the solution to the graph limit equation \eqref{eq:GraphLimit-gen}-\eqref{eq:modelsimuGL}: \[ \forall x\in \mathbb{R}, \quad \tilde{\mu}_t(x) = \int_I m(t,s) \delta( x-x(t,s)) ds. \] The solution $t\mapsto (x_i(t), m_i(t))_{i\in\{1,\cdots,N\}}$ to the system of ODEs was computed using the Matlab solver ode45. The solution $(t,s)\mapsto (x(t,s),m(t,s))$ to the integro-differential equation was computed using the explicit Euler method in time and Simpson's rule for the integrals in space. Lastly, the solution $(t,x)\mapsto\mu(t,x)$ to the transport PDE with source was computed using a standard Lax-Wendroff scheme. \begin{figure}[!h] \centering \includegraphics[width=0.3\textwidth]{MicMac_1.eps} \includegraphics[width=0.3\textwidth]{MicMac_10.eps} \includegraphics[width=0.3\textwidth]{MicMac_31.eps} \\ \caption{\label{fig:microGLmacro} Representation of the solution $\mu_t^{N,n}$ (red) to the microscopic dynamics \eqref{eq:syst-gen}-\eqref{eq:modelsimumicro} for $N=50$ and $n=25$, of the solution $\tilde{\mu}_t$ (black) to the graph limit equation \eqref{eq:GraphLimit-gen}-\eqref{eq:modelsimuGL} and of the solution $\mu_t$ (blue) to the transport equation with source \eqref{eq:mfl}-\eqref{eq:modelsimumacro} at times $t=0$, $t=0.45$ and $t=1.5$.} \end{figure} \newpage
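As a rough indication of how the graph limit solution can be computed, the sketch below implements the explicit Euler time stepping just described on a uniform grid of $I$. It is only a simplified illustration: a plain rectangle rule replaces Simpson's rule, and the interface (function names and arguments) is our own choice rather than the authors'.
\begin{verbatim}
import numpy as np

def euler_graph_limit(x0, m0, psi, phi, T=1.5, dt=1e-3):
    # Explicit Euler time stepping for the graph limit system on a uniform grid of I.
    # psi(x, m, ds) returns the discretized weight dynamics, phi acts elementwise
    # on opinion differences; the space integral uses a rectangle rule.
    x = np.array(x0, dtype=float).copy()
    m = np.array(m0, dtype=float).copy()
    M = x.size
    ds = 1.0 / M
    for _ in range(int(T / dt)):
        diff = x[None, :] - x[:, None]                  # x(s_*) - x(s)
        dx = (phi(diff) * m[None, :]).sum(axis=1) * ds  # int_I m(s_*) phi(x(s_*)-x(s)) ds_*
        dm = psi(x, m, ds)
        x += dt * dx
        m += dt * dm
    return x, m
\end{verbatim}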
https://arxiv.org/abs/1007.1791
Fredman's reciprocity, invariants of abelian groups, and the permanent of the Cayley table
Let $R$ be the regular representation of a finite abelian group $G$ and let $C_n$ denote the cyclic group of order $n$. For $G=C_n$, we compute the Poincare series of all $C_n$-isotypic components in $S^{\cdot} R\otimes \wedge^{\cdot} R$ (the symmetric tensor exterior algebra of $R$). From this we derive a general reciprocity and some number-theoretic identities. This generalises results of Fredman and Elashvili-Jibladze. Then we consider the Cayley table, $M_G$, of $G$ and some generalisations of it. In particular, we prove that the number of formally different terms in the permanent of $M_G$ equals $(S^n R)^G$, where $n$ is the order of $G$.
\section{Introduction} \noindent In the early 1970s, M.~Fredman \cite{fred} considered the problem of computing the number of vectors $(\lambda_0,\lambda_1,\dots,\lambda_{n-1})$ with non-negative integer components that satisfy \begin{equation} \label{eq:fred} \lambda_0+\dots +\lambda_{n-1}=m \quad \text{ and } \quad \sum_{j=0}^{n-1} j\lambda_j \equiv i \mod n . \end{equation} He denoted this number by $S(n,m,i)$. Using generating functions, Fredman obtained an explicit formula for $S(n,m,i)$, which immediately showed that $S(n,m,i)=S(m,n,i)$. The latter is called {\it Fredman's reciprocity}. Using a necklace interpretation, he also constructed a bijection between the vectors enumerated by $S(n,m,i)$ and those enumerated by $S(m,n,i)$. However, these results did not attract attention and remained unnoticed. Later, Elashvili and Jibladze \cite{elji-1,elji-2} (partly with Pataraia \cite{EJP}) rediscovered these results using Invariant Theory. Let $\gc_n\simeq \BZ/n\BZ$ be the cyclic group of order $n$ and $\EuScript R$ the space of its regular representation over $\BC$. Choose a basis $(v_0,v_1,\dots,v_{n-1})$ for $\EuScript R$ consisting of $\gc_n$-eigenvectors. More precisely, if $\gamma\in\gc_n$ is a generator and $\zeta=\sqrt[n]1$ a fixed primitive root of unity, then $\gamma{\cdot}v_i=\zeta^i v_i$. Write $\chi_i$ for the linear character $\gc_n\to \BC^\times$ that takes $\gamma$ to $\zeta^i$. The monomial $v_0^{\lambda_0}v_1^{\lambda_1}\dots v_{n-1}^{\lambda_{n-1}}$ has degree $m$ and weight $\chi_i$ if and only if $(\lambda_0,\lambda_1,\dots,\lambda_{n-1})$ satisfies \eqref{eq:fred}. Thus, $S(n,m,i)$ is the dimension of the space of $\gc_n$-semi-invariants of weight $\chi_i$ in the $m$th symmetric power $\cs^m \EuScript R$. This space can also be understood as the $\gc_n$-isotypic component of type $\chi_i$ in $\cs^m \EuScript R$, denoted by $(\cs^m\EuScript R)_{\gc_n,\chi_i}$. To stress the connection with cyclic groups, we will write $a_i(\gc_n,m)$ in place of $S(n,m,i)$. The celebrated Molien formula provides a closed expression for the generating function (Poincar\'e series) \[ \cf((\cs^{\cdot}\EuScript R)_{\gc_n,\chi_i};t)=\sum_{m=0}^\infty a_i(\gc_n,m)t^m , \] where $(\cs^{\cdot}\EuScript R)_{\gc_n,\chi_i}=\bigoplus_{m\ge 0} (\cs^m\EuScript R)_{\gc_n,\chi_i}$ is the $(\gc_n,\chi_i)$-isotypic component in $\cs^{\cdot}\EuScript R$. Then extracting the coefficient of $t^m$ yields a formula for $a_i(\gc_n,m)$, see \eqref{f-ela-jib}. It is worth stressing that Molien's formula is a very efficient tool that provides a uniform approach to various combinatorial problems and paves the way for further generalisations, see e.g. \cite{st79}. In this note, we elaborate on two topics. {\sl First}, generalising results of Fredman and Elashvili-Jibladze, we compute the Poincar\'e series for each $\gc_n$-isotypic component in the bi-graded algebra $\cs^{\cdot}\EuScript R \otimes \wedge^{\cdot}\EuScript R$ and then $\dim(\cs^{p}\EuScript R \otimes \wedge^{m}\EuScript R)_{\gc_n, \chi_i}$ for all $p,m,i$ (Theorem~\ref{thm:iso-comp}). From this we derive a more general reciprocity, see \eqref{eq:gen-Hermite}. As a by-product of these computations, we obtain some interesting identities, e.g., \[ \exp(\frac{z}{1-z^2})=\prod_{d=1}^\infty (1+z^d)^{\varphi(d)/d} , \] where $\varphi$ is Euler's totient function.
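For small parameters, both the count in \eqref{eq:fred} and the reciprocity $S(n,m,i)=S(m,n,i)$ are easy to check by brute force. The following short Python script, included only as an illustration, does so by direct enumeration of the vectors $(\lambda_0,\dots,\lambda_{n-1})$.
\begin{verbatim}
from itertools import product

def S(n, m, i):
    # number of vectors (l_0, ..., l_{n-1}) of non-negative integers with
    # l_0 + ... + l_{n-1} = m and sum_j j*l_j = i (mod n)
    count = 0
    for lam in product(range(m + 1), repeat=n):
        if sum(lam) == m and sum(j * l for j, l in enumerate(lam)) % n == i % n:
            count += 1
    return count

# Fredman's reciprocity S(n, m, i) = S(m, n, i) on a few small instances
for n, m, i in [(3, 4, 0), (4, 3, 2), (5, 2, 1)]:
    assert S(n, m, i) == S(m, n, i)
print("reciprocity verified on the sample instances")
\end{verbatim}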
In Section~\ref{sect:exterior}, several identities related to isotypic components in $\wedge^{\cdot}\EuScript R$ are given; some of them are valid for an arbitrary finite abelian group $G$, see Theorem~\ref{thm:sum=0}. {\sl Second}, in Section~\ref{sect:Keli-table}, we study some properties of the Cayley table, $\EuScript M_G$, of $G$. If $G=\{x_0,x_1,\dots,x_{n-1}\}$, then $\EuScript M_G$ can be regarded as $n$ by $n$ matrix with entries in $\BC[x_0,\dots,x_{n-1}]\simeq \cs^{\cdot}\EuScript R$. For $G=\gc_n$, \ $\EuScript M_G$ is nothing but a generic {\it circulant matrix}. The permanent of $\EuScript M_G$, $\per(\EuScript M_G)$, is a sum of monomials in $x_i$'s of degree $n$. Using \cite{hall}, we prove that the number of different monomials occurring in this sum equals $\dim(\cs^n\EuScript R)^G$. Then we introduce the extended Cayley table, $\widetilde{\EuScript M}_G$ (which is a matrix of order $n+1$), and characterise the monomials occurring in $\per(\widetilde{\EuScript M}_G)$ (Theorem~\ref{thm:per-ext-keli}). This characterisation implies that the number of different monomials in $\per(\widetilde{\EuScript M}_G)$ equals $\dim(\cs^{n+1}\EuScript R)^G$. Both $\per(\EuScript M_G)$ and $\det(\EuScript M_G)$ belong to $\cs^n\EuScript R$, and we prove that $\per(\EuScript M_G)$ is $G$-invariant, whereas $\det(\EuScript M_G)$ is a semi-invariant whose weight is the sum of all elements of the dual group $\hat G$. The latter means that in many cases $\det(\EuScript M_G)$ is invariant, too. In Section~\ref{sect:questions}, we discuss some open problems related to $(\cs^{\cdot}\EuScript R)^G$ and $\per(\EuScript M_G)$. \noindent \un{Notation:} $\# (M)$ is the cardinality of a finite set $M$; \ $(n,m)$ is the greatest common divisor of $n,m\in \BN$; $G$ is always a finite group. {\small {\bf Acknowledgements.} A part of this work was done during my stay at the Centro di Ricerca Matematica Ennio De Giorgi (Pisa) in July 2009. I am grateful to this institution for the warm hospitality and support.} \section{Preliminaries} \label{prelim} \subsection{Ramanujan's sums} Two important number-theoretic functions are {\it Euler's totient function\/} $\varphi$ and the {\it M\"obius function} $\mu$. Recall that $\varphi(n)$ is the number of all primitive roots of unity of order $n$. Given $i, n\in\BN$, $n\ge 1$, the {\it Ramanujan's sum}, $c_n(i)$, is the sum of $i$-th powers of the primitive roots of unity of order $n$. In particular, $c_n(0)=\varphi(n)$. There are two useful expressions for Ramanujan's sums: \[ c_n(i)= \sum_{d \vert (n,i)} \mu\left(\displaystyle \frac{n}{d}\right)d , \qquad c_n(i)= \frac{\varphi(n)}{\varphi\left(\displaystyle\frac{n}{(n,i)}\right)} {\cdot} \mu\left({\frac{n}{(n,i)}}\right), \] see \cite[Theorems 271 \& 272]{ha-wr}. These formulae also show that $c_n(1)=\mu(n)$, $c_n(i)=c_n(n{-}i)$, and $c_n(i)$ is always a rational integer. \subsection{Molien's formula for the symmetric algebra} \label{subs:Molien} Let $G$ be a finite group and $V$ a finite-dimensional $G$-module. The original Molien formula computes the Poincar\'e series of the graded algebra of invariants $(\cs^{\cdot}V)^G=\bigoplus_{m\ge 0} (\cs^m V)^G$. More generally, there is a similar formula for the Poincar\'e series of any $G$-isotypic component in $\cs^{\cdot} V$. Let $\chi$ be an irreducible representation of $G$ and $(\cs^{\cdot} V)_{G,\chi}$ the isotypic component of type $\chi$ in $\cs^{\cdot} V$. 
By definition, the Poincar\'e series of $(\cs^{\cdot} V)_{G,\chi}$ is the power series $\cf((\cs^{\cdot} V)_{G,\chi};t):=\sum_{m\ge 0}\dim \bigl( (\cs^m V)_{G,\chi}\bigr) t^m$. Then \[ \cf\bigl((\cs^{\cdot} V)_{G,\chi};t\bigr)=\frac{\deg(\chi)}{\#(G)}\sum_{\gamma\in G} \frac{\tr(\chi(\gamma^{-1}))}{\det_V({\mathrm{1\hspace{0.5pt}\!\! I}}-t\gamma )} , \] see e.g. \cite[Thm.\,2.1]{st79}. Here ${\mathrm{1\hspace{0.5pt}\!\! I}}$ is the identity matrix in $GL(V)$. (The algebra of invariants corresponds to the trivial one-dimen\-si\-onal representation, i.e., if $\deg(\chi)=1$ and $\chi(\gamma)=1$ for all $\gamma\in G$.) Let $\EuScript R$ be the space of the regular representation of $G$. For the $G$-module $\EuScript R$, Molien's formula can be presented in a somewhat simpler form. \begin{prop}[\protect{\cite[V.1.8]{AF76}}] \label{molien_reg} Let $\varphi_G(d)$ be the number of elements of order $d$ in $G$. Then \[ \displaystyle \cf((\cs^{\cdot}\EuScript R)^{G};t)=\frac{1}{\#(G)}\sum_{d\ge 1} \frac{\varphi_G(d)}{(1-t^d)^{(\# G)/d}} . \] \end{prop} \noindent This can easily be extended to an arbitrary $\chi$. If $\mathsf{ord}(\gamma)$ is the order of $\gamma\in G$, then \begin{equation} \label{eq:molin-reg-chi} \cf((\cs^{\cdot}\EuScript R)_{G,\chi};t)= \frac{\deg(\chi)}{\#(G)}\sum_{d\vert \# G} \frac{\sum_{\gamma:\,\mathsf{ord}(\gamma)=d}\tr(\chi(\gamma^{-1}))}{(1-t^d)^{(\# G)/d}} . \end{equation} In fact, we prove below a more general formula (Lemma~\ref{lem:gen-formula}). \subsection{Formulae of Fredman and Elashvili-Jibladze} \label{subs:el-ji} \noindent Recall that $a_i(\gc_n,m)=\dim \cs^m(\EuScript R)_{\gc_n,\chi_i}$ or, equivalently, it is the number of vectors satisfying \eqref{eq:fred}. In particular, $a_0(\gc_n,m)=\dim \cs^m(\EuScript R)^{\gc_n}$. If the elements of $\gc_n$ are regarded as the roots of unity of order $n$, then $\chi_i$ is the character $\xi\mapsto \xi^i,\ \xi\in\gc_n$. Here $\varphi_{\gc_n}(d)$ is almost Euler's totient function. That is, $\varphi_{\gc_n}(d)=\varphi(d)$, if $d\vert n$; and $\varphi_{\gc_n}(d)=0$ otherwise. Using \eqref{eq:molin-reg-chi} with $G=\gc_n$ and $\chi=\chi_i$, we see that $\deg(\chi_i)=1$ and $\sum_{\gamma:\,\mathsf{ord}(\gamma)=d}\chi_i(\gamma^{-1})=c_d(d-i)$. Then extracting the coefficient of $t^m$ yields a nice-looking formula (Fredman \protect\cite{fred}, Elashvili-Jibladze \protect\cite{elji-2} \begin{equation} \label{f-ela-jib} a_i(\gc_n,m)=\frac{1}{n+m}\sum_{d\vert (n,m)}c_d(i) \genfrac{(}{)}{0pt}{}{n/d+m/d}{n/d} \ . \end{equation} \begin{rmk} Both Fredman's approach, see \eqref{eq:fred}, and cyclic group interpretation presuppose that $a_i(\gc_n,m)$ is defined for $n\ge 1$ and $m\ge 0$. But \eqref{f-ela-jib} shows that $a_i(\gc_n,m)$ is naturally defined for $(n,m)\in\BN^2$, $(n,m)\ne (0,0)$. \end{rmk} \noindent It follows from \eqref{f-ela-jib} that $a_i(\gc_n,m)=a_i(\gc_m,n)$. In \cite{elji-1,elji-2,EJP}, this equality is named the ``Hermite reciprocity''. As it has no relation to Hermite and was first proved by Fredman, the term {\it Fredman's reciprocity\/} seems to be more appropriate. From \eqref{f-ela-jib}, one can derive the equality \begin{equation} \label{rmk:log-elji} \sum_{(n,m)\in \BN^2,\,(n,m)\ne (0,0)} a_i(\gc_n,m)x^ny^m=-\sum_{d=1}^\infty \frac{c_d(i)}{d}\log(1-x^d-y^d) \ . \end{equation} (Cf. \cite[Remark\,2]{elji-1}, \cite[Sect.\,4]{EJP}.) \section{Symmetric tensor exterior algebra and Poincar\'e series} \label{sect:symm-exterior} \noindent As above, let $V$ be a $G$-module. 
We consider the Poincar\'e series of the $G$-isotypic components in $\cs^{\cdot} V\otimes \wedge^{\cdot}V$. Let $(\cs^{\cdot} V\otimes \wedge^{\cdot}V )_{G,\chi}$ denote the isotypic component corresponding to an irreducible representation $\chi$. It is a bi-graded vector space and its Poincar\'e series is the formal power series \[ \cf\bigl((\cs^{\cdot} V\otimes \wedge^{\cdot}V )_{G,\chi}; s,t\bigr)= \sum_{p,m\ge 0} \dim(\cs^p V\otimes \wedge^m V)_{G,\chi}\,s^pt^m . \] (Clearly, it is a polynomial with respect to $t$.) It is known that \[ \cf\bigl((\cs^{\cdot} V\otimes \wedge^{\cdot}V )^{G}; s,t\bigr)= \frac{1}{\# G}\sum_{\gamma\in G} \frac{\det_V({\mathrm{1\hspace{0.5pt}\!\! I}}+t\gamma )}{\det_V({\mathrm{1\hspace{0.5pt}\!\! I}}-s\gamma )}, \] see \cite[Theorem\,1.33]{alm82}. A similar argument provides the formula for an arbitrary $G$-iso\-ty\-pic component: \begin{equation} \label{eq:gen-iso-comp} \cf\bigl((\cs^{\cdot} V\otimes \wedge^{\cdot}V )_{G,\chi}; s,t\bigr)= \frac{\deg(\chi)}{\# G}\sum_{\gamma\in G}\tr(\chi(\gamma^{-1})) \frac{\det_V({\mathrm{1\hspace{0.5pt}\!\! I}}+t\gamma )}{\det_V({\mathrm{1\hspace{0.5pt}\!\! I}}-s\gamma )}. \end{equation} For, in place of the Reynolds operator $\displaystyle\frac{1}{\#(G)}\sum_{\gamma\in G}\gamma$ \ (the projection to the subspace of $G$-invariants), one should merely exploit the operator $\displaystyle\frac{\deg(\chi)}{\#(G)}\sum_{\gamma\in G}\tr(\chi(\gamma^{-1}))\gamma$ \ (the projection to the isotypic component of type $\chi$). \begin{lm} \label{lem:gen-formula} For the regular representation $\EuScript R$ of $G$, the right-hand side of \eqref{eq:gen-iso-comp} can be written as \[ \frac{\deg(\chi)}{\# G}\sum_{d\ge 1} \left( \sum_{\gamma:\,\mathsf{ord}(\gamma)=d}\tr(\chi(\gamma^{-1})){\cdot} \left(\frac{1-(-t)^d)}{1-s^d}\right)^{(\# G)/d} \right) . \] \end{lm}\begin{proof} If $\gamma\in G$ is of order $d$, then $\langle \gamma\rangle\simeq \gc_d$ and each coset of $\langle \gamma\rangle$ in $G$ is a cycle of length $d$ with respect to the multiplication by $\gamma$. Hence, in a suitable basis of $\EuScript R$, the matrix of $\gamma$ in $GL(\EuScript R)$ consists of $(\# G)/d$ diagonal \\blocks $ \begin{pmatrix} 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 & \ddots & 0 \\ 0 & 0 & \ddots & \ddots & 0 \\ 0 & 0 & \ddots & \ddots & 1 \\ 1 & 0 & 0 & \dots & 0 \end{pmatrix}$ of size $d$. Since $\det\begin{bmatrix} 1 & -s & 0 & \dots & 0 \\ 0 & 1 & -s & \ddots & 0 \\ 0 & 0& \ddots & \ddots & 0 \\ 0 & 0 & \ddots & \ddots & -s \\ -s & 0 & 0 & \dots & 1 \end{bmatrix}=1-s^d$, we obtain $\displaystyle \frac{\det_{\EuScript R}({\mathrm{1\hspace{0.5pt}\!\! I}}+t\gamma )}{\det_{\EuScript R}({\mathrm{1\hspace{0.5pt}\!\! I}}-s\gamma )}= \left(\frac{1-(-t)^d)}{1-s^d}\right)^{(\# G)/d}$, which proves the lemma. \end{proof} Now, we apply this lemma to the regular representation of $\gc_n$. Recall that the number $\displaystyle \genfrac{(}{)}{0pt}{}{a+b+c}{a,\, b,\, c}$ is defined to be $\displaystyle \frac{(a+b+c)!}{a!\, b!\, c!}$. \begin{thm} \label{thm:iso-comp} The Poincar\'e series of the $(\gc_n,\chi_i)$-isotypic component equals \begin{multline} \label{eq:cyclic-poinc} \cf\bigl((\cs^{\cdot}\EuScript R \otimes \wedge^{\cdot}\EuScript R )_{\gc_n,\chi_i}; s,t\bigr)= \frac{1}{n}\sum_{d\vert n} c_d(i)\frac{(1-(-t)^d)^{n/d}}{(1-s^d)^{n/d}} \\ =\frac{1}{n}\sum_{d\vert n} c_d(i) \Bigl(\sum_{a=0}^{n/d} (-1)^{(d+1)a}\genfrac{(}{)}{0pt}{}{n/d}{a}t^{ad} \Bigr) \Bigl( \sum_{b\ge 0} \genfrac{(}{)}{0pt}{}{(n/d)+b-1}{(n/d)-1}s^{bd}\Bigr) . 
\end{multline} Consequently, \begin{equation} \label{dim-nonsym} \dim \bigl(\cs^p \EuScript R\otimes \wedge^m \EuScript R \bigr)_{\gc_n,\chi_i}=\frac{(-1)^m}{p+n} \sum_{d\vert n,p,m} (-1)^{m/d} c_d(i) \genfrac{(}{)}{0pt}{}{(n+p)/d}{m/d,\, p/d,\, (n{-}m)/d} . \end{equation} \end{thm} \begin{proof} This is a straightforward consequence of Lemma~\ref{lem:gen-formula}. If $G=\gc_n$, then $\deg(\chi_i)=1$ and $\sum_{\gamma:\,\mathsf{ord}(\gamma)=d}\chi_i(\gamma^{-1})= c_d(n{-}i)=c_d(i)$, which proves \eqref{eq:cyclic-poinc}. \noindent We leave it to the reader to extract the coefficient of $t^m s^p$ in \eqref{eq:cyclic-poinc} and obtain \eqref{dim-nonsym}. \end{proof} Letting $n=q+m$ yields a more symmetric form of \eqref{dim-nonsym}: \begin{equation} \label{eq:dim-sym} \dim \bigl(\cs^p \EuScript R\otimes \wedge^m \EuScript R\bigr)_{\gc_{q+m},\chi_i}=\frac{(-1)^m}{p+q+m} \sum_{d\vert p,q,m} (-1)^{m/d} c_d(i) \genfrac{(}{)}{0pt}{}{(m+p+q)/d}{m/d,\, p/d,\, q/d} . \end{equation} As the right-hand side is symmetric with respect to $p$ and $q$, we get an equality for dimensions of isotypic components related to the regular representations of two cyclic groups, $(\gc_{q+m},\EuScript R)$ and $(\gc_{p+m},\tilde{\EuScript R})$: \begin{equation} \label{eq:gen-Hermite} \dim \bigl(\cs^p \EuScript R\otimes \wedge^m \EuScript R\bigr)_{\gc_{q+m},\chi_i}= \dim \bigl(\cs^q \tilde{\EuScript R}\otimes \wedge^m \tilde{\EuScript R}\bigr)_{\gc_{p+m},\chi_i}. \end{equation} For $m=0$, this simplifies to Fredman's reciprocity \cite[(4)]{fred}. It would be interesting to have a combinatorial interpretation of this symmetry in the spirit of Fredman's approach. \begin{rmk} (1) \ Letting $t=0$ in \eqref{eq:cyclic-poinc} or $m=0$ in \eqref{dim-nonsym}, we get known formulae for the isotypic components in the symmetric algebra of $\EuScript R$, see \cite{elji-1, elji-2}. Letting $s=0$ in \eqref{eq:cyclic-poinc} or $n=0$ in \eqref{dim-nonsym}, we get interesting formulae for the isotypic components in the exterior algebra of $\EuScript R$, see the next section. (2) \ If $d$ is always odd (e.g. at least one of $m, p, q$ is odd), then $(-1)^{m+\frac{m}{d}}=1$ and the right-hand side of \eqref{eq:dim-sym} becomes totally symmetric with respect to $p,q,m$. \end{rmk} The following is a generalisation of \eqref{rmk:log-elji}: \begin{prop} \label{prop:log} \[ \sum_{(p,q,m)\in \BN^3,\,p{+}q{+}m\ge 1} \dim \bigl(\cs^p \EuScript R\otimes \wedge^m \EuScript R\bigr)_{\gc_{q+m},\chi_i}{\cdot} x^py^q z^m=-\sum_{d=1}^\infty \frac{c_d(i)}{d}\log(1-x^d-y^d+(-z)^d) \ . \] \end{prop}\begin{proof} By \eqref{eq:dim-sym}, the left-hand side equals \[ \sum_{p+q+m\ge 1}\frac{(-1)^m}{p+q+m}\sum_{d\vert p,q,m} (-1)^{m/d}c_d(i) \genfrac{(}{)}{0pt}{}{(p{+}q{+}m)/d}{p/d,\, q/d,\, m/d}x^py^qz^m . \] Letting $p/d=\alpha,\, q/d=\beta,\, m/d=\gamma$, we rewrite it as \begin{multline*} \sum_{d=1}^\infty \frac{c_d(i)}{d}\sum_{\alpha+\beta+\gamma\ge 1} \frac{(-1)^\gamma}{\alpha+\beta+\gamma}\genfrac{(}{)}{0pt}{}{\alpha+\beta+\gamma}{\alpha,\, \beta,\, \gamma}x^{\alpha d}y^{\beta d}(-z)^{\gamma d}= \\ \sum_{d=1}^\infty \frac{c_d(i)}{d}\sum_{k\ge 1}\Bigl(\sum_{\alpha+\beta+\gamma=k} \frac{1}{k}\genfrac{(}{)}{0pt}{}{k}{\alpha,\, \beta,\, \gamma}(x^d)^\alpha(y^d)^\beta (-(-z)^d)^\gamma\Bigr)=\\ \sum_{d=1}^\infty \frac{c_d(i)}{d}\sum_{k\ge 1} \frac{(x^d+y^d-(-z)^d)^k}{k}= -\sum_{d=1}^\infty \frac{c_d(i)}{d}\log(1-x^d-y^d+(-z)^d). \end{multline*} \end{proof} Specializing the equality of Proposition~\ref{prop:log}, we get some interesting identities. 
\\ A) Taking $x=y=0$ forces that $p=q=0$ in the left-hand side, which leads to the equality \[ \sum_{m\ge 1}\dim (\wedge^m\EuScript R)_{\gc_m,\chi_i}z^m =- \sum_{d=1}^\infty \frac{c_d(i)}{d}\log(1+(-z)^d). \] For $i=0$, we have $c_d(0)=\varphi(d)$ and $\dim (\wedge^m\EuScript R)^{\gc_m}=\begin{cases} 1, & m \text{ odd }; \\ 0, & m \text{ even}. \end{cases}$ \\ \text{That is, } $\displaystyle\frac{z}{1-z^2}=-\sum_{d=1}^\infty \frac{\varphi(d)}{d}\log(1+(-z)^d).$ Replacing $z$ with $-z$ and exponentiating, we finally obtain: \[ \exp\left(\frac{z}{1-z^2}\right)=\prod_{d\ge 1} (1+z^d)^{\varphi(d)/d} . \] \noindent B) \ Likewise, for $x=z=0$ (or just $x=0$ in \eqref{rmk:log-elji}), we get \[ -\sum_{d=1}^\infty \frac{c_d(i)}{d}\log(1-y^d)=\begin{cases} y/(1-y), & i=0 \\ 0, & i\ne 0 . \end{cases} \] In particular, \[ \exp\left(\frac{-y}{1-y}\right)=\prod_{d\ge 1} (1-y^d)^{\varphi(d)/d} . \] \section{On the exterior algebra of the regular representation} \label{sect:exterior} \noindent In case of the exterior algebra of a $G$-module, the Poincar\'e series of an isotypic component is actually a polynomial in $t$, which can be evaluated for any $t$. Here we gather some practical formulae for the regular representations and for cyclic groups. First, using \eqref{eq:gen-iso-comp} and Lemma~\ref{lem:gen-formula} with trivial $\chi$ and $s=0$, we obtain \[ \cf((\wedge^{\cdot}\EuScript R)^G;t)=\frac{1}{\#(G)}\sum_{d\ge 1} \varphi_G(d)(1-(-t)^d)^{\#(G)/d} . \] It follows that $\cf((\wedge^{\cdot}\EuScript R)^G;t)$ always has the factor $1+t$ and \begin{equation} \label{eq:dim-ext-inv} \dim(\wedge^{\cdot}\EuScript R)^G=\frac{1}{\#(G)}\sum_{ d\text{ odd}} \varphi_G(d)2^{\#(G)/d} . \end{equation} Note that here $G$ is not necessarily abelian! \begin{ex} For $G=\EuScript S_3$, we have $\varphi_G(1)=1$, $\varphi_G(2)=3$, and $\varphi_G(3)=2$. Therefore, \[ \cf((\wedge^{\cdot}\EuScript R)^{\EuScript S_3};t)= \frac{1}{6}\bigl( (1+t)^6+3(1-t^2)^3+2(1+t^3)^2\bigr)=1+t+t^2+4t^3+4t^4+t^5 . \] \end{ex} For $G=\gc_n$, there are precise assertions for all $G$-isotypic components in $\wedge^{\cdot}\EuScript R$. Using Theorem~\ref{thm:iso-comp} with $s=0$ and $p=0$, we obtain \begin{gather} \label{eq:poinc-ext} \cf((\wedge^{\cdot}\EuScript R)_{\gc_n,\chi_i}; t )= \frac{1}{n}\sum_{d\vert n} c_d(i)(1-(-t)^d)^{n/d}, \\ \label{eq:dim-ext} \dim \bigl((\wedge^m\EuScript R)_{\gc_n,\chi_i}\bigr)=\frac{(-1)^m}{n} \sum_{d\vert n,m} (-1)^{m/d} c_d(i) \genfrac{(}{)}{0pt}{}{n/d}{m/d}=: b_i(\gc_n,m) . \end{gather} Again, it is convenient to replace $n$ with $q+m$ in \eqref{eq:dim-ext}. Then \[ b_i(\gc_{q+m},m)=\dim \bigl((\wedge^m\EuScript R)_{\gc_{q+m},\chi_i}\bigr)=\frac{(-1)^m}{q+m} \sum_{d\vert q,m} (-1)^{m/d} c_d(i) \genfrac{(}{)}{0pt}{}{q/d+m/d}{m/d} . \] From this we derive the following observation: \begin{prop} If $q$ or $m$ is odd, then $b_i(\gc_{q+m}, m)=a_i(\gc_{q}, m)$ and also $b_i(\gc_{q+m}, m)=b_i(\gc_{q+m}, q)$. \end{prop} \begin{ex} $b_i(\gc_{2n-1},n-1)=a_i(\gc_{n-1},n)$, and it is the $(n-1)$-th Catalan number regardless of $i$. \end{ex} \begin{rem} If $n$ is odd, then $\wedge^n \EuScript R$ is the trivial $\gc_n$-module and therefore $\wedge^m \EuScript R \simeq \wedge^{n-m} \EuScript R$ as $\gc_n$-modules. This ``explains'' the equality $b_i(\gc_{n}, m)=b_i(\gc_{n}, n-m)$ for $n$ odd. 
\end{rem} Substituting $t=1$ in \eqref{eq:poinc-ext} yields a nice formula for the dimension of the whole isotypic component: \begin{equation} \label{dim-ext-comp} \dim \bigl((\wedge^{\cdot}\EuScript R)_{\gc_n,\chi_i}\bigr)= \frac{1}{n} \sum_{d\vert n,\ d\ \text{odd}} c_d(i)\, 2^{n/d} . \end{equation} For $i=0$, this becomes a special case of \eqref{eq:dim-ext-inv}. There is a down-to-earth interpretation of \eqref{dim-ext-comp} that does not invoke Invariant Theory. As in the introduction, choose a basis $\{v_0,v_1,\dots,v_{n-1}\}$ for $\EuScript R$ such that $v_i$ has weight $\chi_i$. Then \[ v_{j_1}\wedge\dots \wedge v_{j_m}\in (\wedge^m\EuScript R)_{\gc_n,\chi_i} \quad \Longleftrightarrow \quad j_1+\dots + j_m \equiv i \mod n . \] Consequently, $\dim(\wedge^{\cdot}\EuScript R)_{\gc_n,\chi_i}$ equals the number of subsets $J\subset \{0,1,\dots,n{-}1\}$ such that $|J| \equiv i \mod n$. (Here $|J|$ stands for the sum of the elements of $J$.) Hence our invariant-theoretic approach proves the following purely combinatorial fact: \[ \#\{ J\subset \{0,1,\dots,n{-}1\} \mid |J| \equiv i \mod n \} = \frac{1}{n} \sum_{d\vert n,\ d\ \text{odd}} c_d(i)\, 2^{n/d} . \] For $i=0$, this is nothing but the number of subsets of $\gc_n$ summing to the neutral element (in the additive notation). We provide a similar interpretation for any abelian group. \begin{thm} \label{thm:sum=0} For an abelian group $G$, let $\N_G$ denote the number of subsets $S$ of $G$ such that $|S|:=\sum_{\gamma\in S}\gamma=0\in G$. Then $\displaystyle \N_G=\dim(\wedge^{\cdot}\EuScript R)^G=\frac{1}{\#(G)}\sum_{d\text{ \rm odd}} \varphi_G(d)2^{\#(G)/d}$. \end{thm} \begin{proof} In view of \eqref{eq:dim-ext-inv}, only the first equality requires a proof. Let $(z_0,\dots,z_{n-1})$ be a basis for $\EuScript R$ consisting of $G$-eigenvectors, $n=\#(G)$. Here the weight of $z_i$ is a linear character $\chi_i$ and $\hat{G}=\{\chi_0,\chi_1,\dots,\chi_{n-1}\}$ is the dual group of $G$. One of the $\chi_i$'s is the neutral element of $\hat G$, denoted by $\hat 0$ in the additive notation. Then \[ z_{j_1}\wedge\dots \wedge z_{j_m}\in (\wedge^m\EuScript R)^G \quad \Longleftrightarrow \quad \chi_{j_1}+\dots + \chi_{j_m}=\hat 0\in \hat G. \] Thus, $\dim(\wedge^{\cdot}\EuScript R)^G$ equals the number of subsets of $\hat G$ summing to $\hat 0$. However, the groups $\hat G$ and $G$ are (non-canonically) isomorphic, hence $\N_G=\N_{\hat G}$ and we are done. \end{proof} \section{On the permanent of the Cayley table of an abelian group} \label{sect:Keli-table} \noindent In this section, $G$ is an abelian group, $G=\{x_0,x_1,\dots,x_{n-1}\}$. The {\it Cayley table\/} of $G$, denoted $\EuScript M_G=(m_{i,j})$, can be regarded as an $n\times n$ matrix with entries in the polynomial ring $\BC[x_0,x_1,\dots,x_{n-1}]\simeq \cs^{\cdot}\EuScript R$. To distinguish between the addition in $\BC[x_0,x_1,\dots,x_{n-1}]$ and the group operation in $G$, the latter is denoted by `$\dotplus $'. By definition, $m_{i,j}=x_i\dotplus x_j$, $i,j=0,\dots, n-1$. Hence $\EuScript M_G$ is a symmetric matrix. The permanent of $\EuScript M_G$, $\per(\EuScript M_G)$, is a homogeneous polynomial of degree $n$ in the $x_i$'s, and it does not depend on the ordering of the elements of $G$. Let $p(G)$ denote the number of formally different monomials occurring in $\per(\EuScript M_{G})$. \begin{rmk} In place of the Cayley table, one can consider the matrix $\hat{\EuScript M}_G$ with entries $\hat m_{i,j}=x_i\circleddash x_j$ (the difference in $G$).
Clearly, $\hat{\EuScript M}_G$ is obtained from $\EuScript M_G$ by rearranging the columns only (or, the rows only), using the permutation on $G$ that takes each element to its inverse. Therefore $\per(\hat{\EuScript M}_G)=\per(\EuScript M_{G})$ and $\det(\hat{\EuScript M}_G)=\pm\det(\EuScript M_{G})$. Although $\hat{\EuScript M}_G$ is not symmetric in general, an advantage is that every entry on the main diagonal is the neutral elements of $G$. \end{rmk} \begin{ex} \label{ex:circulant} For $G=\gc_n$ and the natural ordering of its elements (i.e., $x_i$ corresponds to $i$), one obtains a generic {\it circulant matrix} (the latter means that the rows are successive cyclic permutations of the first row). More precisely, $\EuScript M_{\gc_n}$ (resp. $\hat{\EuScript M}_{\gc_n}$) is a circulant matrix in Hankel (resp. Toeplitz) form. For instance, $\EuScript M_{\gc_3}=\begin{pmatrix} x_0 & x_1 & x_2 \\ x_1 & x_2 & x_0 \\x_2 & x_0 & x_1 \\ \end{pmatrix}$. Here $\per(\EuScript M_{\gc_3})=x_0^3+x_1^3+x_2^3+3x_0x_1x_2$. Therefore $p(\gc_3)=4$. \end{ex} The function $n\mapsto p(\gc_n)$ was studied in \cite{bru-new}, where it was pointed out that the main result of Hall~\cite{hall} shows that $p(\gc_n)$ equals the number of solutions to \[ \begin{cases}\lambda_0+\dots +\lambda_{n-1}=n, \\ \displaystyle \sum_{j=0}^{n-1} j\lambda_j \equiv 0 \mod n .\end{cases} \] That is, $p(\gc_n)=a_0(\gc_n,n)$ in our notation. Because results of \cite{hall} apply to arbitrary finite abelian groups, one can be interested in $p(G)$ in this more general setting. Below, we give an invariant-theoretic answer using that result of Hall. Let $\EuScript S_n$ denote the symmetric group acting by permutations on $\{0,1,\dots,n-1\}$. Accordingly, $\EuScript S_n$ permutes the elements of $G$ by the rule $\pi(x_i):=x_{\pi(i)}$. Recall that \[ \per(m_{i,j})=\sum_{\pi\in\EuScript S_n}\prod_{i=0}^{n-1}m_{i,\pi(i)} . \] For the matrix $\EuScript M_G$, \[ \prod_{i=0}^{n-1}m_{i,\pi(i)}=\prod_{i=0}^{n-1}(x_i\dotplus x_{\pi(i)})= \prod_{i=0}^{n-1}x_i^{k_i(\pi)}=:\boldsymbol{x}(\pi) \] is a monomial in $x_i$'s of degree $n$. Note that different permutations may result in the same monomial. The following is essentially proved by M.~Hall. \begin{thm}[\protect{\cite[n.\,3]{hall}}] \label{thm:hall-perm} A monomial\/ $\me=\prod_{i=0}^{n-1}x_i^{k_i}$ is of the form $\boldsymbol{x}(\pi)$ for some $\pi\in\EuScript S_n$ (i.e., occurs in $\per(\EuScript M_G)$) if and only if\/ $ \sum_i k_i=n$ \text{ and } $k_0x_0\dotplus \dots \dotplus k_{n-1}x_{n-1}= 0\in G$. {\rm [Of course, here $k_i x_i$ stands for $x_i\dotplus \dots\dotplus x_i$ ($k_i$ times). ]} \end{thm} \noindent The necessity of the conditions is easy; a non-trivial argument is required for the sufficiency, i.e., for the existence of $\pi$. \begin{thm} \label{thm:number-in-perm} $p(G)=\dim \cs^n(\EuScript R)^G$. \end{thm} \begin{proof} Let $(z_0,\dots,z_{n-1})$ be a basis for $\EuScript R$ consisting of $G$-eigenvectors. Recall that the weight of $z_i$ is $\chi_i$ and $\hat{G}=\{\chi_0,\dots,\chi_{n-1}\}$ is the dual group. The monomial $z_0^{k_0}\dots z_{n-1}^{k_{n-1}}\in \cs^{\cdot}\EuScript R$ is a semi-invariant of $G$ of weight $k_0\chi_0\dotplus \dots \dotplus k_{n-1}\chi_{n-1}\in \hat G$. It follows that \[ \dim \cs^n(\EuScript R)^G=\{(k_0,\dots,k_{n-1}) \mid \sum_i k_i=n \ \ \& \ \ k_0\chi_0\dotplus \dots \dotplus k_{n-1}\chi_{n-1}=\hat 0\} . \] Modulo the passage from $G$ to $\hat G$, these conditions coincide with those of Theorem~\ref{thm:hall-perm}. 
Since $G\simeq \hat G$, we are done. \end{proof} Our next goal is to extend these results to a certain matrix of order $n+1$. We begin with two assertions on $\per(\EuScript M_G)$, which are of independent interest. \begin{prop} \label{prop:nat-action} There is a natural action $\ast: G\times \EuScript S_n\to \EuScript S_n$ such that, for $\gamma\in G$ and $\pi\in\EuScript S_n$, $\text{\rm sign}(\gamma{\ast}\pi)=\text{\rm sign}(\pi)$ and $\boldsymbol{x}(\gamma{\ast}\pi)=\boldsymbol{x}(\pi)$. \end{prop} \begin{proof} Every $\gamma\in G$ determines a permutation $\sigma_\gamma$ on $G$ and thereby an element of $\EuScript S_n$. Namely: \[ (x_0,\dots,x_{n-1}) \ \stackrel{\sigma_\gamma}{\mapsto} \ (\gamma\dotplus x_0,\dots, \gamma\dotplus x_{n-1}) . \] Equivalently, $x_{\sigma_\gamma(i)}=x_i\dotplus \gamma$. Define the $G$-action on $\EuScript S_n$ by $\gamma{\ast}\pi=\sigma_\gamma \pi \sigma_\gamma$. Hence $\text{\rm sign}(\gamma\ast\pi)=\text{\rm sign}(\pi)$. Recall that $\boldsymbol{x}(\pi)=\prod_{i=0}^{n-1}(x_i\dotplus x_{\pi(i)})$. Then \[ \boldsymbol{x}(\gamma{\ast}\pi)=\prod_{i=0}^{n-1}(x_i\dotplus x_{\sigma_\gamma\pi\sigma_\gamma(i)})=\prod_{j=0}^{n-1}(x_{\sigma_\gamma^{-1}(j)}\dotplus x_{\sigma_\gamma\pi(j)}) , \] where $j=\sigma_\gamma(i)$. By definition, $x_{\sigma_\gamma\pi(j)}=x_{\pi(j)}\dotplus \gamma$ and $x_j=x_{\sigma_\gamma^{-1}(j)} \dotplus \gamma$. Thus, the linear factors of $\boldsymbol{x}(\gamma{\ast}\pi)$ remain the same. \end{proof} \begin{rmk} Our action `$\ast$' can be regarded as a generalisation of Lehmer's ``operator $S$'' for circulant matrices~\cite[p.\,45]{lehm}, i.e., essentially, for $G=\gc_n$. Using that operator, Lehmer proved that, for $n=p$ an odd prime, \[ \det(\EuScript M_{\gc_p})=x_0^p+\dots +x_{p-1}^p+ p F(x_0,\dots,x_{p-1}), \] where $F\in \BZ[x_0,\dots,x_{p-1}]$. We note that Lehmer's argument applies to $\per(\EuScript M_{\gc_p})$ as well. \end{rmk} \begin{prop} \label{prop:sdvig} Suppose that $\me$ is a monomial in $\per(\EuScript M_G)$ such that $x_k$ occurs in $\me$. If $x_k=x_i\dotplus x_j$ for some $i,j$, then there is $\sigma \in\EuScript S_n$ such that $\sigma(i)=j$ and $\me=\boldsymbol{x}(\sigma)$. \end{prop} \begin{proof} By the assumption on $\me$, there is a $\pi\in\EuScript S_n$ such that $\me=\boldsymbol{x}(\pi)$ and $x_k=x_\alpha\dotplus x_\beta$ for some $\alpha,\beta$ with $\pi(\alpha)=\beta$. If $\{\alpha,\beta\}\ne \{ i,j\}$, then we have to correct $\pi$. Take $\gamma\in G$ such that $x_i\dotplus \gamma =x_\alpha$. Then $x_\beta\dotplus \gamma=x_j$ and for $\sigma=\gamma{\ast}\pi$ we have \[ \sigma(x_i)= \sigma_\gamma\pi\sigma_\gamma(x_i)= \sigma_\gamma\pi(x_\alpha)= \sigma_\gamma(x_\beta) =x_\beta\dotplus \gamma=x_j . \] Thus, $\sigma(i)=j$ and also $\boldsymbol{x}(\sigma)=\boldsymbol{x}(\pi)$ in view of Proposition~\ref{prop:nat-action}. \end{proof} \noindent The Cayley table of $G$ is the ``addition table'' of all elements of $G$. Define the {\it extended Cayley table} as an $(n{+}1)\times (n{+}1)$ matrix that is the ``addition table'' of $n{+}1$ elements of $G$, with the neutral element taken twice. More precisely, we assume that $x_0=x_n=0$ is the neutral element of $G$ and consider the matrix $\widetilde{\EuScript M}_G=(m_{i,j})$, where $m_{i,j}=x_i\dotplus x_j$, $i,j=0,1,\dots,n$. In this context, $\EuScript S_{n+1}$ is regarded as a permutation group on $\{0,1,\dots,n\}$. Then $\per(\widetilde{\EuScript M}_G)=\sum_{\tilde\pi\in \EuScript S_{n+1}} \boldsymbol{x}(\tilde\pi)$ is a sum of monomials of degree $n+1$.
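For small $n$ and $G=\gc_n$, this expansion is easy to carry out by brute force directly from the definition. A minimal Python sketch (the helper name is ad hoc) encodes each monomial $\boldsymbol{x}(\tilde\pi)$ by its exponent vector and tallies coefficients; for $n=3$ it is expected to reproduce the expansion in the example below.
\begin{verbatim}
from itertools import permutations
from collections import Counter

def per_extended_cayley(n):
    # Elements of C_n listed as x_0,...,x_{n-1},x_n = x_0 (neutral element twice);
    # entry (i,j) of the extended Cayley table is x_{(e_i + e_j) mod n}.
    elems = list(range(n)) + [0]
    coeffs = Counter()
    for pi in permutations(range(n + 1)):
        expo = [0] * n
        for i in range(n + 1):
            expo[(elems[i] + elems[pi[i]]) % n] += 1
        coeffs[tuple(expo)] += 1    # each permutation contributes one monomial
    return coeffs                   # exponent vector -> coefficient

# For n = 3 this should match the example:
# 2 x0^4 + 10 x0^2 x1 x2 + 4 x0 x1^3 + 4 x0 x2^3 + 4 x1^2 x2^2.
print(per_extended_cayley(3))
\end{verbatim}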
\begin{exa} $\widetilde{\EuScript M}_{\gc_3}=\begin{pmatrix} x_0 & x_1 & x_2 & x_0\\ x_1 & x_2 & x_0 & x_1 \\x_2 & x_0 & x_1 & x_2\\ x_0 & x_1 & x_2 & x_0\\ \end{pmatrix}$, $\per(\widetilde{\EuScript M}_{\gc_3})=2x_0^4+10x_0^2x_1x_2+4x_0x_1^3+4x_0x_2^3+4x_1^2x_2^2$. \end{exa} \begin{thm} \label{thm:per-ext-keli} The monomial\/ $\me=\prod_{i=0}^{n-1}x_i^{k_i}$ occurs in\/ $\per(\widetilde{\EuScript M}_G)$ if and only if \[ \sum_i k_i=n+1 \ \text{ and } \ k_0x_0\dotplus \dots \dotplus k_{n-1}x_{n-1}= 0\in G . \] \end{thm} \begin{proof} ``$\Rightarrow$''. Suppose $\me=\boldsymbol{x}(\tilde\pi)$ for some $\tilde\pi\in\EuScript S_{n+1}$. Obviously, $\deg\me=n+1$. Next, \[ k_0x_0\dotplus \dots \dotplus k_{n-1}x_{n-1}=(x_0\dotplus x_{\tilde\pi(0)})\dotplus (x_1\dotplus x_{\tilde\pi(1)})\dotplus \dots \dotplus (x_n\dotplus x_{\tilde\pi(n)})=0, \] since the multiset $\{x_0,x_1,\dots,x_{n-1},x_n=x_0\}$ is closed with respect to taking inverses. ``$\Leftarrow$''. Suppose $\me$ satisfies the conditions of the theorem. \textbullet \ \ $k_0>0$. Take $\me'=x_0^{k_0-1}x_1^{k_1}\dots x_{n-1}^{k_{n-1}}$. Then $\me'$ satisfies the conditions of Theorem~\ref{thm:hall-perm}. Therefore $\me'$ is a monomial of $\per(\EuScript M_G)$ and there is a $\pi\in \EuScript S_n$ such that $\me'=\boldsymbol{x}(\pi)$. Embed $\EuScript S_n$ into $\EuScript S_{n+1}$ as the subgroup preserving the last element $n$. Let $\tilde\pi$ denote $\pi$ considered as an element of $\EuScript S_{n+1}$. Then $\me=\boldsymbol{x}(\tilde\pi)$. \textbullet \ \ $k_0=0$. Choose any factor $x_ix_j$ of $\me$ and replace it with $(x_i\dotplus x_j)x_0=x_kx_0$ (i.e., $x_i\dotplus x_j=x_k$). That is, $\me=\me''x_ix_j$ is replaced with $\me''x_kx_0=:\me' x_0$. By the previous argument, we can find $\pi\in \EuScript S_{n}$ such that $\me'=\boldsymbol{x}(\pi)$ and $\me' x_0=\boldsymbol{x}(\tilde\pi)$. Since $x_k=x_i\dotplus x_j$ occurs in $\boldsymbol{x}(\pi)$, we can apply Proposition~\ref{prop:sdvig} and assume that $\pi(i)=j$ and hence $\tilde\pi(i)=j$. Finally, we replace $\tilde\pi$ with $\tilde\pi \tau$, where the transposition $\tau\in \EuScript S_{n+1}$ permutes $i$ and $n$. One readily verifies that $\boldsymbol{x}(\tilde\pi\tau)=\me''x_ix_j=\me$. \end{proof} \begin{cl} The number of different monomials in\/ $\per(\widetilde{\EuScript M}_G)$ equals\/ $\dim (\cs^{n+1}\EuScript R)^G$. \end{cl} The proof is almost identical to that of Theorem~\ref{thm:number-in-perm} and is left to the reader. It follows from Frobenius' theory of group determinants (see e.g. \cite[\S\,2]{johnson}) that, for abelian groups, $\det(\EuScript M_{G})$ is a product of linear forms in the $x_i$'s. In the case of generic circulant matrices, this fact plays an important role in \cite{lehm} and \cite{thomas}. For future use, we provide a quick derivation. Recall that $G=\{x_0,x_1,\dots,x_{n-1}\}$ and $\hat G=\{\chi_0,\chi_1,\dots,\chi_{n-1}\}$. Consider the $n\times n$ complex matrix $\EuScript K_G$, with $(\EuScript K_G)_{i,j}=\chi_j(x_i)$, and the vectors $v_j=\sum_{i=0}^{n-1}\chi_j(x_i)x_i\in \EuScript R$, $j=0,1,\dots,n-1$.
\begin{prop} \label{prop:lin-forms-det} Under the above notation, we have: \begin{enumerate} \item $v_j$ is an eigenvector of $G$ corresponding to the weight $\chi_j^{-1}$; \item $\det(\EuScript M_G){\cdot} \det(\EuScript K_G)=\det(\ov{\EuScript K_G})v_0v_1\dots v_{n-1}$, where `bar' stands for the complex conjugation; \item $\det(\ov{\EuScript K_G})/ \det(\EuScript K_G)$ equals the sign of the permutation $\pi_0\in\EuScript S_n$ that takes each $x_i$ to its inverse. Hence $\det(\EuScript M_G)=\text{\rm sign}(\pi_0) v_0v_1\dots v_{n-1}$. \end{enumerate} \end{prop} \begin{proof} (1) Obvious. (2) \ It is easily seen that $(\EuScript M_G{\cdot}\EuScript K_G)_{ij}=\chi_j(x_i)^{-1}v_j= \ov{\chi_j(x_i)}v_j=(\ov{\EuScript K_G})_{i,j} v_j$. (3) \ Assuming that $x_0$ is the neutral element, compare the coefficient of $x_0^n$ in both parts of the equality in (2). \end{proof} Note that $\widetilde{\EuScript M}_G$ has equal columns and hence $\det(\widetilde{\EuScript M}_G)=0$. \begin{rmk} 1. The set of vectors $\{v_j\}$ is closed with respect to complex conjugation, and letting $z_j=\ov{v_j}=\sum_i \ov{\chi_j(x_i)} x_i$ one obtains the eigenvector corresponding to $\chi_j$. 2. The orthogonality relations for the characters imply that $\EuScript K_G (\ov{\EuScript K_G})^t= n {\mathrm{1\hspace{0.5pt}\!\! I}}_n$; that is, $\frac{1}{\sqrt n}\EuScript K_G$ is unitary and $|\det(\EuScript K_G)|^2=n^n$. \end{rmk} For the sake of completeness, we mention some other easy properties. \begin{prop} \label{prop:izi-prop} Suppose $\gamma\in G$ and $\pi\in \EuScript S_n$. \begin{enumerate} \item $\gamma{\cdot}\boldsymbol{x}(\pi)=\boldsymbol{x}(\pi\sigma_\gamma^{-1})$, where `$\cdot$' stands for the natural $G$-action on $\cs^{n}\EuScript R$; \item $\boldsymbol{x}(\pi)=\boldsymbol{x}(\pi^{-1})$; \item $\per(\EuScript M_G)\in (\cs^n\EuScript R)^G$; \item If $\hat G$ has a unique element of order 2, say $\psi$, then $\det(\EuScript M_G)$ is a semi-invariant of weight $\psi$. In all other cases, $\det(\EuScript M_G)\in (\cs^n\EuScript R)^G$. \end{enumerate} \end{prop}\begin{proof} 1) $\gamma{\cdot}\boldsymbol{x}(\pi)=\gamma{\cdot}\prod_{i=0}^{n-1}(x_i\dotplus x_{\pi(i)})= \prod_{i=0}^{n-1}(x_i\dotplus x_{\pi(i)}\dotplus \gamma)= \prod_{i=0}^{n-1}(x_{\sigma_\gamma(i)}\dotplus x_{\pi(i)})= \boldsymbol{x}(\pi\sigma_\gamma^{-1})$. 2) Obvious. 3) follows from (1). 4) Proposition~\ref{prop:lin-forms-det} shows that $\det(\EuScript M_G)$ is a semi-invariant whose weight equals the sum of all elements of $\hat G$. The sum of all elements of an abelian group is known to be the neutral element unless the group has a unique element of order 2, in which case the sum is this unique element. \end{proof} Note that $\per(\widetilde{\EuScript M}_G)$ is an element of $\cs^{n+1}\EuScript R$, but it does not belong to $(\cs^{n+1}\EuScript R)^G$. \section{Some open problems} \label{sect:questions} \noindent Associated with previous results on $\per(\EuScript M_G)$, there are some interesting problems. Let $d(G)$ denote the number of different monomials in $\det(\EuScript M_G)$. In view of possible cancellations, we have $d(G)\le p(G)$. Using the factorisation of $\det(\EuScript M_{\gc_n})$ and theory of symmetric functions, Thomas~\cite{thomas} proved that $d(\gc_n)=p(\gc_n)$ whenever $n$ is a prime power. He also computed these values up to $n=12$ (e.g. $d(\gc_6)=68<80 =p(\gc_6)$) and suggested that the converse could be true. 
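Both quantities are easy to compute by brute force for small cyclic groups, which gives a quick check against the values quoted above ($p(\gc_3)=4$ and $d(\gc_6)=68<80=p(\gc_6)$). A minimal Python sketch (the helper names are ad hoc) uses the natural ordering of Example~\ref{ex:circulant}, encodes a monomial by its exponent vector, and counts the distinct monomials of $\per(\EuScript M_{\gc_n})$ and the monomials of $\det(\EuScript M_{\gc_n})$ that survive cancellation:
\begin{verbatim}
from itertools import permutations

def perm_sign(p):
    # Sign of the permutation p (given as a tuple), via the number of inversions.
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def p_and_d(n):
    # p(C_n): distinct monomials in per(M_{C_n});
    # d(C_n): monomials with nonzero coefficient in det(M_{C_n}).
    per_monomials, det_coeffs = set(), {}
    for pi in permutations(range(n)):
        expo = [0] * n
        for i in range(n):
            expo[(i + pi[i]) % n] += 1   # entry m_{i,pi(i)} = x_{(i+pi(i)) mod n}
        key = tuple(expo)
        per_monomials.add(key)
        det_coeffs[key] = det_coeffs.get(key, 0) + perm_sign(pi)
    return len(per_monomials), sum(1 for c in det_coeffs.values() if c != 0)

for n in range(2, 8):
    print(n, p_and_d(n))   # p(C_3) = 4; for n = 6 one should see d < p
\end{verbatim}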
\begin{qtn} \label{vopr:1} What are necessary/sufficient conditions on a finite abelian group $G$ for the equality $d(G)=p(G)$? Specifically, is it still true that the condition `$\#(G)$ is a prime power' is sufficient? \end{qtn} The equality $\det(\EuScript M_G)=\text{\rm sign}(\pi_0) v_0v_1\dots v_{n-1}$ might be helpful in resolving Problem~\ref{vopr:1}. The following problem is more general and vague. \begin{qtn} Let $\me$ be a monomial that satisfies conditions of Theorem~\ref{thm:hall-perm}. Is there a group-theoretic (or invariant-theoretic) interpretation of the coefficient of\/ $\me$ in $\per(\EuScript M_G)$ or $\det(\EuScript M_G)$? \end{qtn} \noindent For $G=\gc_2\oplus\gc_2$, we have $p(G)=d(G)=11$. Because $p(\gc_4)=d(\gc_4)=10$, one may speculate that $p(G)\ge p(\gc_n)$ and $d(G)\ge d(\gc_n)$ if $\#(G)=n$. By Theorem~\ref{thm:number-in-perm}, $p(G)$ is the coefficient of $t^{n}$ in the Poincar\'e series $\cf( (\cs^{\cdot}\EuScript R)^G;t)$, and one can consider a related \begin{qtn} Is it true that \ $[t^m]\cf( (\cs^{\cdot}\EuScript R)^G;t)\ge [t^m]\cf( (\cs^{\cdot}\EuScript R)^{\gc_n};t)$ \ for any $m\in \BN$ ? \end{qtn} Given $G=\{x_0,\dots,x_{n-1}\}$, a family of matrices $\EuScript M_{G,l}\in {\rm Mat}_l(\BC[x_0,\dots,x_{n-1}])$, $l\ge n$, is said to be {\it admissible}, if $\EuScript M_{G,n}=\EuScript M_G$, $\EuScript M_{G,l}$ is a principal submatrix of $\EuScript M_{G,l+1}$, and the number of different monomials in $\per(\EuScript M_{G,l})$ equals $\dim (\cs^l\EuScript R)^G$. \begin{qtn} For what $G$, does an admissible family exist? \end{qtn} So far, we only have matrices $\EuScript M_{G,l}$ for $l=n, n+1$. It is possible to jump up to $l=2n$ by letting $\EuScript M_{G,2n}=\begin{pmatrix} \EuScript M_G & \EuScript M_G \\ \EuScript M_G & \EuScript M_G \end{pmatrix}$. It is the addition table for two consecutive sets of group elements, and it can be proved that $\per(\EuScript M_{G,2n})$ has the required property. Then, similarly to the construction of the extended Cayley table, one defines a larger matrix $\EuScript M_{G,2n+1}$. This procedure can be iterated, so one obtains a suitable collection of matrices of orders $kn, kn+1$, $k\in\BN$. However, it is not clear whether it is possible to define matrices $\EuScript M_{G,l}$ for all other $l$. Maybe the reason is that, for arbitrary abelian $G$, there is no natural ordering of its elements. But, for a cyclic group, one does have a natural ordering, and we provide a conjectural definition of an admissible family of matrices. For $G=\gc_n$, it will be convenient to begin with the circulant matrix in the Toeplitz form, see Example~\ref{ex:circulant}. That is to say, our initial matrix is $\hat{\EuScript M}_{\gc_n}=(\hat m_{i,j})$, where $\hat m_{i,j}=x_{i-j}$, $i,j=0,1,\dots,n-1$, and the subscripts of $x$'s are interpreted $\pmod n$. For any $l\ge n$, we then define the entries of $\hat{\EuScript M}_{\gc_n, l}$ by the same formula, only the range of $i,j$ is extended. In particular, $\hat{\EuScript M}_{\gc_n, l}$ is a Toeplitz matrix for any $l$. 
\begin{exa} $\hat{\EuScript M}_{\gc_3,5}=\begin{pmatrix} x_0 & x_1 & x_2 & x_0 & x_1\\ x_2 & x_0 & x_1 & x_2 & x_0 \\x_1 & x_2 & x_0 & x_1 & x_2\\ x_0 & x_1 & x_2 & x_0 & x_1 \\ x_2 & x_0 & x_1 & x_2 & x_0 \end{pmatrix}$ \end{exa} \begin{conj} \label{conj:quasi-circ} For $l\ge n$, the monomial $x_0^{\lambda_0}x_1^{\lambda_1}\dots x_{n-1}^{\lambda_{n-1}}$ occurs in\/ $\per(\hat{\EuScript M}_{\gc_n,l})$ if and only if \\ \hbox to \textwidth{ \ $(\ast)$ \hfil $\lambda_0+\dots +\lambda_{n-1}=l \quad \text{ and } \quad \displaystyle \sum_{j=1}^{n-1} j\lambda_j \equiv 0 \mod n$. \hfil } In particular, the number of different monomials in $\per(\hat{\EuScript M}_{\gc_n,l})$ equals $a_0(\gc_n,l)$. \end{conj} It is not hard to verify the necessity of $(\ast)$ and that the conjecture is true for $n=2$.
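For slightly larger $n$ and $l$, the conjecture can at least be tested experimentally. A minimal Python sketch (the helper names are ad hoc; this is a numerical check, not a proof) compares the exponent vectors occurring in $\per(\hat{\EuScript M}_{\gc_n,l})$ with the set of vectors satisfying $(\ast)$:
\begin{verbatim}
from itertools import permutations, product

def monomials_in_per(n, l):
    # Exponent vectors of the monomials occurring in per(\hat M_{C_n, l}).
    seen = set()
    for pi in permutations(range(l)):
        expo = [0] * n
        for i in range(l):
            expo[(i - pi[i]) % n] += 1   # entry \hat m_{i,j} = x_{(i-j) mod n}
        seen.add(tuple(expo))
    return seen

def vectors_satisfying_star(n, l):
    # Vectors (lambda_0,...,lambda_{n-1}) with sum l and sum_j j*lambda_j = 0 mod n.
    return {lam for lam in product(range(l + 1), repeat=n)
            if sum(lam) == l and sum(j * lam[j] for j in range(n)) % n == 0}

for n in range(2, 5):
    for l in range(n, n + 4):
        star = vectors_satisfying_star(n, l)
        # The last number printed is the count of vectors satisfying (*).
        print(n, l, monomials_in_per(n, l) == star, len(star))
\end{verbatim}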
{ "timestamp": "2010-07-13T02:01:43", "yymm": "1007", "arxiv_id": "1007.1791", "language": "en", "url": "https://arxiv.org/abs/1007.1791", "abstract": "Let $R$ be the regular representation of a finite abelian group $G$ and let $C_n$ denote the cyclic group of order $n$. For $G=C_n$, we compute the Poincare series of all $C_n$-isotypic components in $S^{\\cdot} R\\otimes \\wedge^{\\cdot} R$ (the symmetric tensor exterior algebra of $R$). From this we derive a general reciprocity and some number-theoretic identities. This generalises results of Fredman and Elashvili-Jibladze. Then we consider the Cayley table, $M_G$, of $G$ and some generalisations of it. In particular, we prove that the number of formally different terms in the permanent of $M_G$ equals $(S^n R)^G$, where $n$ is the order of $G$.", "subjects": "Representation Theory (math.RT)", "title": "Fredman's reciprocity, invariants of abelian groups, and the permanent of the Cayley table", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.986777178636555, "lm_q2_score": 0.7185943805178139, "lm_q1q2_score": 0.7090925353914515 }
https://arxiv.org/abs/1704.01251
On collisions times of self-sorting interacting particles in one-dimension with random initial positions and velocities
We investigate a one-dimensional system of $N$ particles, initially distributed with random positions and velocities, interacting through binary collisions. The collision rule is such that there is a time after which the $N$ particles do not interact and become sorted according to their velocities. When the collisions are elastic, we derive asymptotic distributions for the final collision time of a single particle and the final collision time of the system as the number of particles approaches infinity, under different assumptions for the initial distributions of the particles' positions and velocities. For comparison, a numerical investigation is carried out to determine how a non-elastic collision rule, which conserves neither momentum nor energy, affects the median collision time of a particle and the median final collision time of the system.
\section{Introduction}
We consider a collection of $N$ `identical' point-particles, with equal mass, moving on $\mathbb{R}$ and interacting through a linear binary collision rule given by \begin{equation} \begin{split} v_i' &= (1-\epsilon)v_j \\ v_j' &= (1-\epsilon)v_i . \\ \end{split} \label{eq:coll_rule} \end{equation} Here $v_i,v_j$ ($v_i',v_j'$) are the pre (post) collision velocities of particles $i$ and $j$, and the parameter $\epsilon < 1$ controls conservation and dissipation of momentum and energy. Between collisions, particles undergo free flight. When $\epsilon=0$, a collision is elastic, preserving momentum and energy, and corresponds to an exchange of velocity between the two colliding particles. For $\epsilon \ne 0$, we refer to the collisions as non-elastic since Eq. (\ref{eq:coll_rule}) differs from the traditional construction of inelastic collisions, which conserve momentum but not energy. In Section \ref{sec:inelastic_intro}, we discuss the motivations for choosing Eq. (\ref{eq:coll_rule}) and its connection to standard inelastic collisions through a generalized linear collision framework. When $\epsilon <0$, collisions generate energy, and when $\epsilon \in (0,1)$ they dissipate energy. The case $\epsilon =1$ is degenerate, in the sense that particles stop their motion and remain `frozen' together. For any $\epsilon < 1$, as we show in Sections \ref{sec:sorting} and \ref{sec:inelastic_intro}, each particle experiences a final collision, as eventually the velocities of all of the particles become sorted. In the above context, it is natural to ask (1) when a particle will experience its final collision, (2) when the final collision of the entire collection will occur, and (3) how these statistics depend on the initial position and velocity distributions, the number of particles in the system, and $\epsilon$. The purpose of this article is to address these questions, analytically in the case of elastic collisions ($\epsilon = 0$), and numerically for non-elastic collisions ($\epsilon \ne 0$).

The motivations for this work are twofold. First, in a recent article by Bardos et al.\ \cite{levermore}, the long time limit of solutions to the Boltzmann equation over $\mathbb{R}^d$ was investigated. In the case of particles interacting through hard collisions, it was found that in unbounded domains, as opposed to bounded domains where the phenomenon is different, the dispersive effects of particle free flights were sufficient to quench the dissipative effects of collisions, thereby preventing the system from reaching a state of maximal entropy. Specifically, in terms of microscopic dynamics, one can envision a finite collection of particles in $\mathbb{R}^d$ with random initial positions and velocities. As time grows, the particles will likely spread out and interact with each other until a time when no more collisions occur. At this point, the system would be in a steady state different from what might arise if the particles were kept confined to a bounded domain in $\mathbb{R}^d$. Therefore, from this view, understanding the final time of collision in the system is a natural question. Second, microscopic descriptions in terms of non-overlapping particles have proven to be central to the modeling of diffusion in single-file systems (see e.g. \cite{Roedenbeck98} and references therein), single-lane traffic flow \cite{Helbing96}, and self-organization in one-dimensional systems of self-propelled particles \cite{Czirok99}. In this context, the system we study can be viewed as the microscopic equivalent of a one-dimensional gas of point particles, which interact through a generalized collision rule that is not necessarily subject to conservation of momentum or energy. A related example of dynamics on one-dimensional lattices is the Stirring exclusion process \cite{Ligget1985}.

Understanding the final collision time of a distinguished particle and the final collision time in a system of $N$ particles are questions about maximal order statistics for certain functions of the initial positions and velocities. Although we will assume the initial positions and velocities of the particles are independent and identically distributed, when collisions are elastic the collision times turn out to be an array of $\binom{N}{2}$ non-independent `exchangeable' random variables without a finite mean. In this setting, we are able to analyze the collision times rigorously. When the collisions are non-elastic, however, an analytic approach is more difficult. As such, we use molecular dynamics simulations to assess the effects of non-elastic collisions on the collision times. For both non-elastic and elastic collisions, to avoid degeneracies, we will assume that the distributions of the initial positions and velocities are continuous random variables.

Informally, in the case of elastic collisions one set of our main results is that the final collision time of a distinguished particle, in the system of $N$ particles, depends on the moment properties of the position random variable (Theorems \ref{FTC_particle1}, \ref{FTC_particle2}, \ref{FTC_particle3}). That is, this final time scales with $N$ when the position r.v. has a finite mean. It scales with $N^{1/\alpha}$ when the position r.v. has the form of a symmetric stable law with parameter $0<\alpha<1$, and it scales with $N\log(N)$ when the position r.v. is a Cauchy distribution. In both of these cases, the limit distribution is a mixture of Fr\'echet distributions. Another set of our results concerns the final collision time $T^{(N)}$ for the whole system of $N$ particles. Here again, this time depends on how many moments the position r.v. possesses. In particular, when the latter has at least a $3/2$-moment, $T^{(N)}$ scales with $\binom{N}{2}$. When the position r.v. has the form of a symmetric stable law with parameter $0<\alpha<1$, $T^{(N)}$ scales with $N^{2/\alpha}$, and when the position r.v. is a Cauchy distribution, $T^{(N)}$ scales with $N^2\log N$. In each of these cases, the limit is of Fr\'echet type (Theorems \ref{FTC_system}, \ref{FTC_system2}, \ref{FTC_system3}). However, we also show that when the position r.v. has a first moment, the sequence of final collision times for $N\geq 2$ is tight in the scale $\binom{N}{2}$ (Proposition \ref{prop:tight}). Moreover, some numerical simulations are provided which indicate that the result of Theorem \ref{FTC_system} should extend to the case when the position r.v. has a mean without further restrictions.

For non-elastic collisions, we present numerical results in the case where the initial positions are standard normal distributions. Numerical simulations indicate that, unlike elastic collisions, the final collision time of a distinguished particle does not scale with $N$, and the final collision time of the system does not scale with $N^2$. Instead, both times require an exponential correction depending on $N\epsilon$ and the initial distribution of velocities. An ansatz is discussed which offers a limited explanation of the observed changes.

An outcome of this work is the formulation of a `sorting model' through which the collision times under elastic interactions can be conveniently analyzed. In this elastic interaction framework, the $\binom{N}{2}$ collision times are understood through methods for exchangeable arrays. Various types of these arrays have been considered in \cite{berman, Barbour_Eagleson,lao,silverman}. From the perspective of colliding particles, this work is related to a different collection of processes, where the initial positions are nonrandom but the particles interact randomly, which was considered in \cite{angel, Angel2012,Angel2007,Angel2009}.

The rest of this article is organized as follows. We state precisely, in the next subsection, the setting and the main quantities of interest, recast the space-time dynamics of the $N$ particles in terms of a `sorting process', and formulate the questions studied with respect to elastic collisions. In subsection \ref{sec:inelastic_intro}, we prove the existence of a final collision time under non-elastic collisions ($\epsilon \ne 0$). In Section \ref{sec:results}, we state the main theorems for elastic collisions. In Section \ref{sec:sim}, we discuss numerical results for non-elastic collisions. Finally, the proofs of Theorems 1, 2, 4, and 5 are given in Section \ref{sec:proof}. As the proofs of Theorems 3 and 6 are similar in structure to those of Theorems 2 and 5, we have elected to include them in an appendix.

\subsection{Elastic collisions on the line as a sorting process} \label{sec:sorting}
Consider $N$ point particles with equal mass and initial positions $\{X_i\}_{i=1}^N$, which are assumed to be independent, identically distributed random variables with density $f_x$. The initial velocities of the particles are denoted by $\{V_i\}_{i=1}^N$ and are also assumed to be independent, identically distributed random variables with continuous, bounded density $f_v$. In particular, almost surely, no two particles have the same position or velocity. Between collisions, particles undergo free flight. We start by proving that all collisions occur within a finite time $T^{(N)}$, as long as $N$ remains finite.

Since an elastic collision corresponds to an exchange of velocities between the colliding particles, switching particle labels during collisions turns each labelled trajectory into a straight line. This point of view gives a simple way of calculating all the collision times, past and present, in terms of the initial data only. Indeed, let $\ell_i(t) = X_i + tV_i$ denote the labelled trajectory of the $i$th particle, and let $\tau_{i,j}$ denote the intersection time of the paths $\ell_i(\cdot)$ and $\ell_j(\cdot)$ of particles $i$ and $j$. Then, $$\tau_{i,j} = \frac{X_j-X_i}{V_i - V_j}$$ and the set of all the line intersection times, $\{ \tau_{i,j}: 1\le i \neq j \le N\}$, is in one-to-one correspondence with the set of all collision times of the particles undergoing elastic collisions if we consider both positive and negative times. We remark that, since the distributions of the initial positions and velocities are continuous, these times are distinct almost surely. Also, although these intersection times are not independent random variables (for example, $\tau_{1,2}$ and $\tau_{1,3}$ both depend on $(X_1,V_1)$), they are `exchangeable' in the sense that if $\pi$ is a permutation of $\{1,\ldots, N\}$, then $\{\tau_{i,j}: 1\leq i\neq j \leq N\} \mathop{=}\limits^d \{\tau_{\pi(i),\pi(j)}: 1\leq i \neq j\leq N\}$. The intersection times, $\tau_{i,j}$, are examples of ratio distributions. One can construct an integral representation of the distribution of such a ratio, although evaluating the integral in closed form is often difficult. However, for example, if $\{X_i\}_{i=1}^N$ and $\{V_i\}_{i=1}^N$ are all iid $N(\mu,\sigma)$, then any $\tau_{i,j}$ follows a Cauchy distribution.

As the collection of collision times consists of $\binom{N}{2}$ terms, after the final (random) time $T^{(N)}:=\max_{i\neq j}\tau_{i,j}$, there will be no more collisions between particles. After this time, the labels of the lines have been sorted according to their velocities: in a vertical cross-section of the space-time diagram, they are, from top to bottom, in order from the largest to the smallest initial velocities. Similarly, after time $T^{(N)}$, all particles are arranged on the one-dimensional line in order of increasing speed, moving only by free flight since no further collisions can occur.

Let $t_i^{(N)}$ be the last time the $i$th particle interacts with any other particle. Since the initial positions and velocities are independent and identically distributed, the random variables $\{t_i^{(N)}: 1\leq i\leq N\}$ are exchangeable, although not independent. On the other hand, let $r_i^{(N)} := \max_{j: j\neq i} \tau_{i,j}$ denote the last time another line intersects $\ell_i(\cdot)$. Then, as the last collision time of a particle must be the last intersection time of a line, we have $\{t_i^{(N)}: 1\leq i\leq N\} \mathop{=}\limits^d \{r_i^{(N)}: 1\leq i\leq N\}$. Moreover, as the initial positions and velocities are independent, $r^{(N)}_i$ equals any one of the $\{t_i^{(N)}: 1\leq i\leq N\}$ with equal probability. In particular, $$P(r_i^{(N)} \leq x) = N^{-1}\sum_{j=1}^N P(t_j^{(N)}\leq x) = P(t_i^{(N)}\leq x),$$ and so $t_i^{(N)}$ and $r_i^{(N)}$ have the same distribution. Also, $$T^{(N)} = \max_{1\leq i\leq N} r_i^{(N)} = \max_{1\leq i\leq N}t_i^{(N)}.$$ Although not discussed here, we remark that it is also natural to consider, as an alternative, the minimum of the collision times. Given the symmetric distribution of $\tau_{i,j}$, it follows that $$\min_{1\le i < j \le N} \tau_{i,j} \mathop{=}\limits^d -T^{(N)}.$$

\subsection{Non-elastic collisions} \label{sec:inelastic_intro}
Consider the more general, linear inelastic collision rule \begin{equation} \begin{split} v_i' &= (1-\epsilon)v_j + \beta v_i,\\ v_j' &= (1-\epsilon)v_i + \beta v_j. \end{split} \label{eq:coll_rule_general} \end{equation} Recall that $v_i,v_j$ ($v_i',v_j'$) are the pre (post) collision velocities of particles $i$ and $j$.
\jld{The} parameters $\jld{\beta \ge\ } 0$ and $\epsilon < 1$ control conservation and dissipation of momentum and energy. \jld{In the case $\beta=\epsilon\ne0$, Eq. \eqref{eq:coll_rule_general} corresponds to inelastic collisions with coefficient of restitution $C_R = 1 - 2 \epsilon$}, which conserve momentum but not energy. \jld{In what follows we choose $\beta = 0$ to reduce} the computational cost of \jld{the simulations, thereby making} the large numerical study from Sec. \ref{sec:sim} possible. The simplicity of the collision rule \jld{obtained when $\beta = 0$ allows} for some analytic investigation \ayd{as well}. \jld{Moreover, s}ince reversing the direction of time when a collision occurs amounts to changing $\epsilon$ into \ay{$1- \frac{1}{1-\epsilon} \simeq -\epsilon$} at leading order when $\epsilon$ is small, the distributions of collision times \jld{are, for small values of $\epsilon$, expected} to be nearly symmetric under the transformation $\cal T$ sending $t \to -t$ and $\epsilon \to -\epsilon$\jld{. This is confirmed by the simulations of Fig. \ref{fig:tau_diss}, which interestingly indicate that the symmetry under $\cal T$ is qualitatively observed, even for values of $\epsilon$ equal to 0.1.} \jld{We also note that, t}\ay{he transformation $\cal T$ can be used to understand properties of the minimum order statistics of the collision times through study of $t_i^{(N)}$ and $T^{(N)}$.} \jld{As already mentioned,} the pairwise collision times display a system size dependency which \jld{is} not present for elastic collisions. \jld{Together with the} broken symmetry, \jld{this effect} can be viewed as \jld{resulting from} the cooling (heating) of the system in the case of energy dissipating (generating) collisions\jld{: a} greater number of particles results in more collisions\jld{, which is compounded into increased} cooling (or heating). \begin{figure}[h!] \begin{center} \includegraphics[width = 3.2 in,height=3.2 in]{Pairwise_collisions_inelastic_diss.pdf} \includegraphics[width = 3.2 in,height=3.2in]{Pairwise_collisions_inelastic_gen.pdf} \caption{Histograms of \jl{collision} times \jl{$\tau_{i,j}$ for positive (left) and negative (right) values of $\epsilon$ in \jld{the non-}elastic collision rule (\ref{eq:coll_rule})} \jld{with $\beta = 0$}. The initial positions and velocities are sampled from a standard normal distribution. \jl{For comparison, the densities} of collision times for elastic collisions \jl{are} shown in \ay{dashed} red. In this case, the elastic collision times follow a Cauchy distribution.} \label{fig:tau_diss} \end{center} \end{figure} \ay{For elastic collisions, the one-to-one correspondence of path intersection times and collision times made the argument for the existence of a final collision time straightforward. For \jld{non-}elastic collisions, the existence of a final collision time is less \jld{obvious}. Nonetheless, \jld{the proposition below indicates} there is almost surely a final collision time for a system of particles interacting with the collision rule from Eq. \eqref{eq:coll_rule} \jld{when} $\epsilon <1$.} \begin{proposition}\label{prop:FTC_inelastic} Suppose $N$ point particles with equal mass have initial positions $\{X_i\}_{i=1}^N$ and velocities $\{V_i\}_{i=1}^N$ \jld{that} are continuous random variables, so that $X_i \ne X_j$, $V_i\ne V_j$ for $i\ne j$ a.s. If the particle\jld{s} interact \jld{through} the collision rule given in Eq. 
\eqref{eq:coll_rule} with $\epsilon \ne 0,\ \epsilon < 1 $, there is a final collision time almost surely. \end{proposition} \begin{proof}[Proof ] Suppose at the time of a collision, in addition to the change in velocity, the labels of particles are also switched. Unlike \ayd{the case of} elastic collisions, the path of a particle \jld{in space-time} is not a straight line. Instead, individual particles follow piecewise linear trajectories where the slope of \jld{each segment} changes by a factor of $1-\epsilon$ \jld{after each \ayd{intersection with another path}}. Again, let $\ell_i(t)$ denote the position of the $i$th particle and $\tau_{i,\star_1}<\tau_{i,\star_2}<\dots$ denote the times when the path of particle $i$ intercepts the path of another particle. Then \begin{equation*} \ell_i(t) = \begin{cases} X_i +t V_i & 0<t \le \tau_{i,\star_1} \\ (X_i+\tau_{i,\star_1} V_1) + (t-\tau_{i,\star_1}) (1-\epsilon) V_i & \tau_{i,\star_1} < t \le \tau_{i,\star_2} \\ \vdots \end{cases} \end{equation*} As was the case for elastic collisions, the collection of intersection times of the trajectories of particles (now piecewise linear) is in one-to-one correspondence with the collection of collision times. \jld{Therefore,} for a system to have infinitely many collision times, there must be two paths which intersect infinitely many times. \ay{Since the initial velocities are continuous random variables, it follows that for all $1\le i < j \le N$ and $k,m \in \{0,1,2,\dots\}$, $(1-\epsilon)^k V_i \neq (1-\epsilon)^m V_j$ almost surely which ensures that any two paths which intersect do so by crossing one another a.s. } We now assume the paths $\ell_1$ and $\ell_2$ intersect infinitely often and derive a contradiction. Since the paths cross, we may choose two successive intersection times, $s_1^{(2)}<s_2^{(2)}$, such that $\ell_1(t) > \ell_2(t)$ for $t\in (s_1^{(2)},s_2^{(2)})=\mathbb{S}^{(2)}.$ It may be the case that a third path also intersects $\ell_1$ and $\ell_2$ at time $s_1^{(2)}$ or $s_2^{(2)}$. However, we may relabel the paths so that a ternary intersection does not alter the velocities of paths $\ell_1$ and $\ell_2$ on the open interval $\mathbb{S}^{(2)}.$ For example, suppose $\ell_3$ also intersects with $\ell_1$ and $\ell_2$ at $s_1^{(2)}$ or $s_2^{(2)}$. There are two possibilities. Firstly, $\ell_3$ does not intersect $\ell_1$ or $\ell_2$ on the interval $(s_1^{(2)},s_2^{(2)}).$ Thus, $\ell_3$ will not alter the velocities of $\ell_1,\ell_2$ on this interval and so will not influence the time, $s_2^{(2)}$, when $\ell_1$ and $\ell_2$ intersect again. Secondly, $\ell_3$ could intersect either $\ell_1$ or $\ell_2$ at a second time $s\in (s_1^{(2)},s_2^{(2)}).$ For instance, suppose $\ell_3$ intersects with $\ell_1$ at times $s$ and $s_2^{(2)}$, and that $\ell_3$ does not intersect $\ell_2$ on the interval $(s,s_2^{(2)}).$ We can then relabel the paths and times so that, $\ell_1$ and $\ell_2$ (formerly $\ell_1$ and $\ell_3$), intersect at successive times $s_1^{(2)}$ and $s_2^{(2)}$ (formerly $s$ and $s_2^{(2)}$). Then on the interval $(s_1^{(2)},s_2^{(2)})$, the path $\ell_3$ (formerly $\ell_2$) intersects with neither $\ell_1$ nor $\ell_3$. Other possibilities are handled similarly. As a result of this choice, on the closed interval $\overline{\mathbb{S}^{(2)}}$, any path which crosses both $\ell_1$ and $\ell_2$ does so an equal number of times. 
Let us return to case where $\ell_1$ and $\ell_2$ intersect at successive times $s_2^{(1)}$ and $s_2^{(2)}$ and $\ell_1(t)>\ell_2(t)$ for $t\in\mathbb{S}^{(2)}$. Furthermore, assume these paths are chosen so that there is no other path which intersects both $\ell_1$ or $\ell_2$ at $s_1^{(2)}$ or $s_2^{(2)}$ and either $\ell_1$ or $\ell_2$ at another time in the interval $\mathbb{S}^{(2)}.$ Let $$ u_i^+ = \lim_{h\to 0^+}\frac{\ell_i(s_1^{(2)}+h)-\ell_i(s_1^{(2)})}{h}, \hspace{2mm} i=1,2\\ $$ be the velocities of particles $1$ and $2$ immediately after their collision at $s_1^{(2)}.$ Let $$ U_i^- = \lim_{h\to 0^-}\frac{\ell_i(s_2^{(2)}+h)-\ell_i(s_2^{(2)})}{h}, \hspace{2mm} i=1,2\\ $$ be the velocities of particles $1$ and $2$ immediately before their collision at $s_2^{(2)}.$ Since $\ell_1(t) > \ell_2(t)$ for $t\in \mathbb{S}^{(2)}$ and the paths cross a.s., it follows that \begin{equation} u_1^+>u_2^+ \hspace{2 mm} \text{ and } \hspace{2 mm} U_1^- < U_2^- \hspace{2mm} \text{ a.s}. \end{equation} We note that the velocities of both paths $1$ and $2$ must be of the same sign since $U_i^-$ is related to $u_i^+$ by the equation $U_i^-=(1-\epsilon)^{k_i}u_i^+$ where $(1-\epsilon)^{k_1}>0$ and $k_i$ is the number of additional path intersections of $i$ during $\mathbb{S}^{(2)}$. Consider the path intersections involving $\ell_1$ on the interval $\mathbb{S}^{(2)}$ (Fig. \ref{FIG:Path_Intersections_Diagram}). There are two possibilities. (1) A path, (shown as a dotted, red line in Fig. \ref{FIG:Path_Intersections_Diagram}), crosses both $\ell_1$ and $\ell_2$ (the dashed, blue lines in Fig. \ref{FIG:Path_Intersections_Diagram}) $k$ times, and alters the velocities of both particles by a multiplicative factor of $(1-\epsilon)^k.$ Alternatively, (2) \jld{a path} may only intersect $\ell_1$ leaving the velocity of particle 2 unchanged (the solid black line in Fig. \ref{FIG:Path_Intersections_Diagram}). \begin{figure}[h!] \begin{center} \begin{tikzpicture}[scale=0.75] \def\stwo{7}; \draw[dashed] (-\stwo,-3) -- (-\stwo,-2.9); \draw[dashed] (\stwo,-3) -- (\stwo,-2.9); \draw (-\stwo,-3) node[anchor=north] {$s_1^{(2)}$}; \draw (\stwo,-3) node[anchor=north] {$s_2^{(2)}$}; \def\sthree{4}; \draw[dashed] (-\sthree,-3) -- (-\sthree,-2.9); \draw[dashed] (\sthree,-3) -- (\sthree,-2.9); \draw (-\sthree,-3) node[anchor=north] {$s_1^{(3)}$}; \draw (\sthree,-3) node[anchor=north] {$s_2^{(3)}$}; \draw[solid,black] (-7.5,-3) --(7.5,-3); \draw[solid, blue,dashed] (-\stwo-1,-2.75) -- (0,-0.5) -- (\stwo+1,2.5); \draw (\stwo+1,2.5) node[anchor=north] {$\ell_1$}; \draw[solid, blue,dashed] (-\stwo-1,-2.6) -- (0,-1.5) -- (4,-0.5) -- (\stwo+1,3); \draw (\stwo+0.9,3) node[anchor=east] {$\ell_2$}; \draw[solid, black] (-7,-1.85) -- (0,-1.35) -- (7,2.75); \draw (7,3) node[anchor=east] {$\ell_3$}; \draw[red,dotted,thick] (-5,3) -- (7,-3); \end{tikzpicture} \caption{Paths $\ell_1$ and $\ell_2$ are shown as dashed, blue lines. The dotted red line serves as an example of a path which intersects both $\ell_1$ and $\ell_2$. A path which intersects $\ell_1$ twice without hitting $\ell_k$ (shown as a solid black line) implies the existence of another `nested' path, $\ell_{k+1}$, also crossing $\ell_1$ twice but between the times, $s_1^{(k)}<s_2^{(k)}$, when $\ell_{k}$ hits $\ell_1$. 
} \label{FIG:Path_Intersections_Diagram} \end{center} \end{figure} If only $k_1$ intersections of the first type occur, then $$U_1^-=u_1^+(1-\epsilon )^{k_1} > u_2^+(1-\epsilon )^{k_1}=U_2^-$$ which contradicts the assumption that $U_1^- < U_2^-.$ \ayd{Indeed, $\ell_1$ and $\ell_2$ must experience a different number of intersections, $k_1$ and $k_2$ respectively, so that $$U_1^- = (1-\epsilon)^{k_1}u_1^+ < (1-\epsilon)^{k_2} u_2^+ = U_2^-.$$ The details of the ordering of $k_1$ and $k_2$ depend on both the sign of $u_1^+,u_2^+$ and on the sign of $\epsilon.$ However, we may assume without loss of generality that $k_1 > k_2 \ge 0.$} Thus, there must be a path which intersects $\ell_1$ at least twice without intersecting $\ell_2$. Call it $\ell_3$. Again, we may choose successive intersection times, $s_1^{(3)} < s_2^{(3)}$ in $\mathbb{S}^{(2)},$ such that $\ell_1(t)>\ell_3(t)$ and repeat the preceding argument (Fig. \ref{FIG:Path_Intersections_Diagram}). As such, the existence of a $k$th path with successive intersections of $\ell_1$ at times $s_1^{(k)}<s_2^{(k)}$ then implies the existence of a $(k+1)$th path which has successive intersections with $\ell_1$ on a subinterval $\mathbb{S}^{(k+1)}=(s_1^{(k+1)},s_2^{(k+1)})\subset (s_1^{(k)},s_2^{(k)})=\mathbb{S}^{(k)}$. By construction, $\ell_2,\dots,\ell_k$ cannot collide with $\ell_1$ on $\mathbb{S}^{(k+1)}$. Through induction we reach a contradiction as this requires an $(N+1)$th path in a system with only $N$ paths. \end{proof} \jl{Like in the system with elastic collisions discussed above, collisions stop once particles become sorted in order of increasing velocity. However, the distribution of velocities of the particles \ay{\jld{is} no longer independent of collisions, but instead} depends on the number of collisions \jld{that} each particle experiences. } \section{Results for Elastic Collisions}\label{sec:results} \subsection{Final Collision Time for a Single Particle} The first theorem concerns the setting where $E|X_1|<\infty$. In this case, the \ay{initial} \jl{position of the} center of mass of the system of particles converges to a constant value a.s. as $N$ tends towards infinity. \begin{theorem}\label{FTC_particle1} Let $t_i^{(N)}=\max_{i\ne j} \tau_{i,j}$ be the final path intersection time for a single particle in a system of $N$ particles with iid initial position $\{X_i\}_{i=1}^N$ \jl{with density $f_x$} and iid initial velocities $\{V_i\}\jld{_{i=1}^N}$ \jl{with density $f_v$}. Suppose $E|X_1| < \infty$ \jl{and that} $f_v$ is continuous and bounded. \jl{Then, f}or $\mu > 0$, $$\lim_{N\to \infty} P\bigg(\frac{t_i^{(N)}}{N} < \mu\bigg) = \int_{\mathbb{R}^2}f_x(X)f_v(V)e^{-\frac{C(X,V)}{\mu}}dX dV $$ where $$C(X,V) = f_v(V)\int_\mathbb{R} |X-y|f_x(y) dy.$$ \end{theorem} \jl{In other words, the period of time during which each point particle undergoes elastic collisions scales linearly with the size $N$ of the system, as long as the position of the center of mass of the system remains finite. } \ay{For example, if the positions and velocities are iid $U[-1/2,1/2]$, then $$ \lim_{N\to \infty} P\bigg(\frac{t_i^{(N)}}{N} < \mu\bigg) =\begin{cases} e^{-\frac{1}{4\mu}} \int_{-1/2}^{1/2} e^{-x^2/\mu} dx & \mu >0 \\ 0 & \mu\le 0. \end{cases} $$ } Theorems \ref{FTC_particle2} and \ref{FTC_particle3} \seth{ consider when $X_1$ does not have mean, or}\ayd{,} when the \ay{initial }position of the center of mass \jl{of the system of particles} does not converge \seth{a.s.} as $N$ tends towards infinity. 
In this case, we require different rescalings of $t_i^{(N)}$ depending on the behavior of the tail of the distribution of $X_1.$ \begin{theorem}\label{FTC_particle2} Let $t_i^{(N)}$ and $f_v$ be as in Thm. \ref{FTC_particle1}. Suppose $\{X_i \}_{i=1}^N$ are iid with density $f_x(X) = \frac{C_\alpha}{1+|x|^{1+\alpha}}$ for some $\alpha \in (0,1)$. \jl{Then, f}or $\mu > 0,$ $$\lim_{N\to\infty} P\bigg( \frac{t_i^{(N)}}{N^{1/\alpha}} < \mu \bigg) = \int_{\mathbb{R}} f_v(V) \exp\bigg( -\frac{C(V)}{\mu^\alpha}\bigg) dV $$ where $$C(V) = \frac{C_\alpha}{\alpha}\int_\mathbb{R} f_v(V+w)\frac{1}{|w|^\alpha} dw.$$ \end{theorem} \begin{theorem}\label{FTC_particle3} Let $t_i^{(N)}$ and $f_v$ be as in Thm. \ref{FTC_particle1}. Suppose $\{X_i\}_{i=1}^N$ are iid Cauchy random variables. \jl{Then, f}or $\mu > 0,$ $$\lim_{N\to \infty} P\bigg( \frac{t_i^{(N)}}{N \log N} \le \mu \bigg) = \int_\mathbb{R}f_v(V) \exp \bigg( - \frac{2f_v(V)}{\pi \mu} \bigg) dV.$$ \end{theorem} \jld{We remark, in passing, that} in the event $f_x(z)$ has different scalings as $z\to \pm\infty$, one can still construct asymptotic distributions of $t_i^{(N)}$. The more slowly decaying tail will dominate the asymptotic behavior and the distribution will look similar to the result from Thm. \ref{FTC_particle2} or Thm. \ref{FTC_particle3} depending on the details of $f_x.$ \jl{The proofs of the above theorems are provided in Section \ref{sec:proof_tiN}.} \subsection{Final Collision Time of the System} We now turn to the scaling properties of $T^{(N)}$. Again, the first theorem presented concerns the situation when $X_1$ has a finite mean. \seth{We will in fact} require that the position distribution has at least a $3/2$-moment for technical reasons\jld{; details} of this proof are discussed in Sec. \ref{sec:proofs_TN}. \begin{theorem}\label{FTC_system} Let $T^{(N)}=\max\limits_{1\le i < j \le N} \tau_{i,j}$ be the final collision time of a collection of $N$ particles. Suppose $E|X_1|^{3/2}<\infty$ and that $f_v$ is continuous and bounded. Then, for $t >0$, $$\jld{\lim_{N\to \infty} } P\bigg(\frac{T^{(N)}}{\binom{N}{2}} \le t\bigg) \jld{=\ } e^{-C/t },$$ where $$C = \int_{\mathbb{R}^2} |x-y|\, f_x(x)\, f_x(y)\, dx\, dy \cdot \int_\mathbb{R} f_v^2(v)\, dv.$$ \end{theorem} This indicates that the final collision time of the system scales like the total number of particle pairs. The numerical simulation of Figure \ref{fig:TN_error_XN_VN} illustrates the convergence of the cumulative distribution of $T^{(N)}/\binom{N}{2}$ towards its asymptotic value on the interval $t \in (0,5)$ when both $\{X_i\}_{i=1}^N$ and $\{V_i\}_{i=1}^N$ are iid $N(0,1)$. The Silverman-Brown limit law used in the proof shown in Section \ref{sec:proofs_TN} indicates that the rate of convergence is $O(N^{-1})$, which is consistent with the numerical results. In order to numerically observe this rate of convergence, a total of $N^4$ trials had to be conducted to reconstruct the cumulative distribution of $T^{(N)}$. \begin{figure}[h!] \begin{center} \includegraphics[width = 6 in]{FCT_Comparison_System_XN_VN} \caption{The maximum difference between the empirical cumulative distribution function and theoretical asymptotic cumulative distribution function of $T^{(N)}/\binom{N}{2}$ over all values $t$ in the interval $(0,5)$ when both $\{X_i\}_{i=1}^N$ and $\{V_i\}_{i=1}^N$ are iid $N(0,1)$ with system sizes, $N$, between 5 and 100. 
For reference, the dashed line has slope negative one.} \label{fig:TN_error_XN_VN} \end{center} \end{figure} Two points are worth noting regarding the proof of Thm. \ref{FTC_system}. First, since both $\{X_i\}_{i=1}^N$ and $\{V_i\}_{i=1}^N$ are identically distributed, the intersection times $\tau_{i,j}$ are also identically distributed. However, they are not independent. Nonetheless, if one assumes they are independent, then through the usual construction of the maximal order statistics of i.i.d. random variables, one recovers the same distribution as \jld{in} Thm. \ref{FTC_system} as $N\to\infty.$ Second, although the proof of the theorem requires that $E|X_1|^{3/2}<\infty$ for technical reasons, it is natural to wonder if $T^{(N)}/\binom{N}{2}$ is still converging weakly to some random variable if the requirements of the moment of $X_1$ are lessened. \begin{proposition}\label{prop:tight} Suppose $E|X_1| <\infty$ and $f_v$ is continuous and bounded, then the sequence of random variables $\{T^{(N)}/\binom{N}{2}\}_{N=2}^\infty$ is tight. \end{proposition} A proof of this proposition is given in Sec. \ref{sec:proofs_TN}. Since the sequence, $\{T^{(N)}/\binom{N}{2}\}_{N=2}^\infty$ is tight, there is a random variable $T$ and subsequence $N_j$ such that $$\frac{T^{(N_j)}}{\binom{N_j}{2} }\Rightarrow T.$$ In the case of Thm. \ref{FTC_system}, the asymptotic distribution of $T^{(N)}/\binom{N}{2}$ is known. However, the tightness of the sequence $\{T^{(N)}/\binom{N}{2}\}$ requires only that $E|X_1| <\infty$, a condition which also guarantees the constant $C$ of Thm. \ref{FTC_system} is well-defined. \jld{It is natural to ask whether} the limiting distribution given in Thm. \ref{FTC_system} still hold\jld{s} if the requirement that $E|X_1|^{3/2}\jld{< \infty}$ is removed. \jld{Even though we do not provide a definite answer to this question, t}he following example \jld{gives} some insight. We simulated systems \jld{of increasing size $N$}, with random initial positions $\{X_i\}_{i=1}^N$ generated, via inverse sampling, from the density \begin{equation} \label{eq:alpha_dens} f_x(x) = \frac{9\cos(\pi/18)}{8\pi(1+|x|^{9/4})}, \end{equation} for which $E|X_1|<\infty$ but $E|X_1|^\alpha$ is infinite for $\alpha \ge 5/4$. Random \jld{initial} velocities were sampled from a $N(0,1)$ distribution. Figure \ref{fig:TN_error_XAlpha_VN} shows a comparison between the empirical and theoretical cumulative distribution functions, similar to the example of Fig. \ref{fig:TN_error_XN_VN}, but for a larger range of values of $N$. The distribution of $T^{(N)}/\binom{N}{2}$ appears to be converging to the same theoretical limit as Thm. \ref{FTC_system}, but at a much slower rate, of order $O(N^{-0.35})$. This suggests that an argument for the convergence of $T^{(N)}$ to the same limiting distribution may exist \jld{for this example}. \seth{However, we show \jld{in the appendix} that the method \jld{of proof} we use cannot apply in this case. \begin{figure}[h!] \begin{center} \includegraphics[width = 6 in]{FCT_Comparison_System_XAlpha_VN.jpg} \caption{ The maximum difference between the empirical cumulative distribution function and theoretical asymptotic cumulative distribution function of $T^{(N)}/\binom{N}{2}$ in the infinity norm over the interval $\mu\in(0,5)$\jld{. Here,} $\{X_i\}_{i=1}$ are \jld{iid} with density given in Eq. \eqref{eq:alpha_dens}, $\{V_i\}_{i=1}^N$ are iid $N(0,1)$\jld{, and the size of the} system, $N$, \jld{ranges} between 5 and 1000. For reference, the dashed line has slope of -0.35. 
For $N$ between 5 and 100, $N^4$ trials were conducted. Over that interval the observed convergence rate was $N^{-0.35}$. As such, only $N^2$ trials were conducted \jld{for} systems sizes between 100 and 1000.} \label{fig:TN_error_XAlpha_VN} \end{center} \end{figure} \seth{We now discuss} asymptotic distributions for $T^{(N)}$ when the \jld{initial} position r.v. does not have a first moment. Similar to Theorems \ref{FTC_particle2} and \ref{FTC_particle3} an alternate scaling is required. \begin{theorem} Let $T^{(N)}$ be the final collision time of a collection of $N$ particles. Suppose the initial positions are iid with density $\displaystyle f_x(x) = \frac{C_\alpha}{1+\jld{|x|}^{1+\alpha}}$ for $\alpha \in (0,1)$ and that $f_v$ is continuous \jld{and bounded}. Then for $t > 0$, $$\jld{\lim_{N\to \infty} } P\bigg(\frac{T^{(N)}}{N^{2/\alpha}} \le t \bigg) \jld{=\ } e^{-C/t^\alpha}$$ where $$C = \frac{C_\alpha}{2\alpha} \int_{\mathbb{R}^2}f_v(V)\frac{f_v(V+w)}{|w|^\alpha} dw \, dV. $$ \label{FTC_system2} \end{theorem} \begin{theorem} Let $T^{(N)}$ be the final collision time of a collection of $N$ particles. Suppose the initial positions are iid Cauchy and that $f_v$ is continuous \jld{and bounded}. Then for $t > 0$, $$\jld{\lim_{N\to \infty} } P\bigg(\frac{T^{(N)}}{N^2\log N} \le t \bigg) \jld{=\ } e^{-C/t}$$ where $$C = \frac{\jld{2}}{\pi } \int_\mathbb{R}f_v^2(V) dV.$$ \label{FTC_system3} \end{theorem} Like Theorem \ref{FTC_system}, the limiting distributions in Theorems \ref{FTC_system2} and \ref{FTC_system3} are of Fr\'echet type. Additionally, as was the case for Theorem \ref{FTC_system}, one recovers the same asymptotic distribution if the pairwise collision times are assumed to be independent. % The proofs of these theorems follow from the same asymptotic arguments employed in the proofs of Theorems \ref{FTC_particle2} and \ref{FTC_particle3} which are discussed in detail in Section \ref{sec:proof_tiN} \jld{and in the Appendix}. In summary, the asymptotic properties of $T^{(N)}$ depend on the moment properties of the initial position r.v. In the simple case where the initial velocity r.v. has a continuous density and the initial position r.v. has a density of the form $f_x(x) = \frac{C_\alpha}{1+|x|^{1+\alpha}}$, \jld{we have the following results:} for $\alpha \in (0,1)$, Theorem \ref{FTC_system2} holds and $T^{(N)}$ scales like $N^{2/\alpha}$\jld{;} for $\alpha = 1$, Theorem \ref{FTC_system3} holds and $T^{(N)}$ scales like $N^2\log N$\jld{;} and for $\alpha >3/2$ Theorem \ref{FTC_system} holds and $T^{(N)}$ scales like $N^2$. \seth{As mentioned earlier,} the case for $\alpha \in (1,3/2]$ is open, but \jld{the} numerical stud\jld{y of Fig. \ref{fig:TN_error_XAlpha_VN} suggests} the result of Theorem \ref{FTC_system} still holds and $T^{(N)}$ scales like $N^2$. \section{Results for \jld{Non-}elastic Collisions}\label{sec:sim} \jld{The presence of non-elastic collisions ($\epsilon \ne 0$) significantly \jld{affects} the asymptotic behavior of the system.} First, due in part to the heating/cooling effects observed in Sec. \ref{sec:inelastic_intro}, the asymptotic properties of $t_i^{(N)}$ and $T^{(N)}$ are \jld{modified.} \jld{Importantly,} the \jld{initial} velocity distribution play\jld{s} a role, \jld{which was not the case for} elastic collision\jld{s, whose asymptotic behavior was solely determined by} the distribution of initial positions. 
Second, the analytic framework we used to determine the correct asymptotic behavior of $t_i^{(N)}$ and $T^{(N)}$ is no longer viable. As a consequence, we do not have the means to rigorously derive the asymptotic distributions of these quantities at this time. Instead, we describe how their medians scale through a numerical study using molecular dynamics (MD) simulations. Since, under elastic collisions, neither the limiting distribution of $t_i^{(N)}/N$ nor that of $T^{(N)}/N^2$ has a mean, we assume this is also the case under non-elastic collisions. We thus investigate how the medians of $t_i^{(N)}$ and $T^{(N)}$ scale for large $N$. We restrict our focus to $N(0,1)$ initial position distributions, and study separate cases when the initial velocities are iid $N(0,1)$, $N(1,1)$, $U(-1,1)$, and $U(0,2).$ Due to numerical constraints, we limit our investigation to systems no larger than $N=500$ and to values of $\epsilon \in (-5\times 10^{-3},5\times 10^{-3}).$ We generate $10^4$ simulations for each value of $N$ and $\epsilon$ studied, and use bootstrapping to estimate confidence intervals for $M_t$, the median of the final collision time of a particle, and for $M_T$, the median of the final collision time of the system. We begin with a review of the properties of the sampling distribution of the sample median, highlighting consistency with known properties of the sample median under elastic collisions. We then discuss the effects of non-elastic collisions on $t_i^{(N)}$ and $T^{(N)}$. Consider initial velocities and positions which are standard normal random variables. Under elastic collisions, we know that the final collision time for a single particle scales linearly with the number of particles in the system (Thm. \ref{FTC_particle1}), while the final collision time for the system scales linearly with $N^2$ (Thm. \ref{FTC_system}). Thus, the median of the final collision time of a particle is of the form $M_t = C_t N$ at leading order as $N\to \infty$, where $C_t$ is the value of $\mu$ satisfying the equation $$\frac{1}{2} = \frac{1}{2\pi} \int_{\mathbb{R}^2} \exp\bigg( -\frac{X^2+V^2}{2} - \frac{1}{2\pi \mu}e^{-V^2/2} \int_\mathbb{R} |X-y| e^{-y^2/2}dy \bigg)dX dV.$$ A numerical calculation indicates $C_t\approx 0.412.$ Similarly, the median of the final collision time of the system behaves like $M_T = C_T N^2$ at leading order as $N\to \infty$, where $C_T= 1/(\pi\log 4)$, which one can calculate directly from the asymptotic distribution provided in Thm. \ref{FTC_system}. \begin{figure}[h!] \begin{center} \includegraphics[width = 5 in]{Density_Sample_Median_N400_Elastic.jpg} \caption{Statistical properties of sample medians when initial positions and velocities are iid $N(0,1).$ (a) Numerical reconstruction of the sampling distribution of the median final collision time of a particle for increasing sample size in a system of 400 particles under elastic collisions. The vertical dashed line in black corresponds to a sample median generated from a sample of size $10^4.$ The line in green corresponds to the leading order behavior $M_t\approx C_tN.$ (b) Numerical reconstruction of the sampling distribution of the median final collision time of the system for increasing sample size in a system of 400 particles under elastic collisions.
The vertical dashed line \jld{in} black corresponds to a sample median generated from a sample of size $10^4.$ The line in green corresponds to the leading order behavior $M_T\approx C_TN^2.$} \label{FIG:Density_sample_median} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[width = 5 in]{sample_median_error} \caption{(a) The sample standard deviation \jld{of} final collision time medians is $O(1/\sqrt{k})$ consistent with the results of \cite{Chu}. (b) The average relative error of the sample medians when compared to the true median is $O(1/k)$ and less than 1\% for both $\tilde{M_t}$ and $\tilde{M_T}$ when the sample size $k\ge 200.$ } \label{FIG:sample_median_error} \end{center} \end{figure} For a general random variable, $Y$, with density $f_Y$ and median $M_Y$, the sampling distribution of the sample median of $k$ iid samples from $Y$ converges to a normal distribution as the sample size approaches infinity. The mean of the normal distribution is $M_Y$ and \jld{its} variance $\jld{1 / \big(}4k\, [f_Y(M_Y)]^2\jld{\big )}$ \cite{Chu}. Histograms generated from sample medians under increasing sample sizes suggest this result extends to the sample median of both $t_i^{(N)}$ and $T^{(N)}$ (Fig. \ref{FIG:Density_sample_median}). \jld{Indeed, the} $O(1/k)$ decay in the variance \jld{is} observed across all particle numbers simulated. Additionally, if one uses the sample median of $10^4$ simulations as an approximation for the true medians, $M_t$ and $M_T$, then the mean of the sample medians converges to the true median at a rate which is $O(1/k)$ (Fig. \ref{FIG:sample_median_error}). \jld{Specifically, f}or $k\ge 200$, the relative error between the mean sample median and \jld{the} true median approximated from $10^4$ simulations was less than 1\% for both $M_t$ and $M_T$ across all values of particle number simulated. \subsection{Median of $t_i^{(N)}$ under \jld{non-}elastic collisions} We \jld{first present results on} the behavior of the final collision time of a particle, $t_i^{(N)}.$ The median $M_t$ was approximated by selecting at random the final collision \jld{time} of one particle from each of $10^4$ simulations and taking the median of these values. This process was repeated 100 times and the results were used to construct the 99\% confidence intervals for the true median. This construction of the confidence intervals assumes the approximately normal distribution of the sample medians observed for elastic collisions persists under \jld{non-}elastic collisions. We use $M_t^{elas}$ to denote the median final collision time under elastic collisions and $M_t^{inel}$ to denote the median final collision time under \jld{non-}elastic collisions. Before discussing the results, we propose an ansatz which relates path intersection times under elastic collisions to those under \jld{non-}elastic collisions. Let $\epsilon =0$ and suppose that the path of particle 1 first intersects the path of particle 2\jld{, and} then proceeds to intersect the path of particle 3. Let $\tau_{1,j} = \frac{X_1-X_j}{V_j-V_1}$, $j=2,3$ be the path intersection times between particle 1 and particle\jld{s} $j=2,3$\jld{, and l}et $\tau_{1,3}'$ be \jld{the} time at which the paths of particles 1 and 3 intersect under \jld{non-}elastic collisions. 
If we assume particle 3 experiences a path intersection at approximately the same time that the paths of particles 1 and 2 coincide, then $$\tau_{1,3}' \approx \tau_{1,2} +\frac{(X_1+\tau_{1,2} V_1) - (X_3 + \tau_{1,2}V_3)}{(1-\epsilon)(V_3-V_1)} = \frac{1}{1-\epsilon}\tau_{1,3}.$$ \begin{figure}[h!] \begin{center} \includegraphics[width = 6 in]{Path_Intersection_Proportion_vs_Nepsilon.jpg} \caption{The sample mean of the proportion of path intersections, $\overline{\alpha(\epsilon,N)},$ exhibits a clear dependence on $\epsilon N$, which is influenced by the initial velocity distribution.} \label{FIG:Path_Intersection_Proportions} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[width= 6 in]{Median_single_particle_vs_Nepsilon.jpg} \caption{Confidence intervals (95\%) of the relative change in the median single particle final collision time. This quantity exhibits a clear dependence on $\epsilon N$, which is influenced by the velocity distribution. The coefficients of the regression $D_1(\epsilon N) +D_2(\epsilon N)^2$ applied to each of the four velocity distributions are displayed in Table \ref{TABLE:Regression_Coefficient_t}.} \label{FIG:Medians_inelastic_t} \end{center} \end{figure} Thus, under non-elastic collisions, the second path intersection time differs from the elastic path intersection time by a factor of $\frac{1}{1-\epsilon}.$ Generalizing this argument to the $j^{th}$ interaction of particle 1, $$\tau_{1,j}' \approx \frac{1}{(1-\epsilon)^{j-1}} \tau_{1,j}.$$ This extension implicitly assumes that particle 1 has experienced an (approximately) equal number of path intersections as every other particle with which it interacts. Furthermore, the interactions should have occurred at approximately the same time. As such, it is not reasonable to expect the ansatz to apply to the longest intersection times, i.e. those which correspond to the final collision time of the system. However, assuming that one can extend this ansatz to the final collision time of a given particle, non-elasticity alters $t_i^{(N)}$ by a multiplicative factor of the form $$\frac{1}{(1-\epsilon)^{N\alpha(\epsilon,N)}} = e^{-\log(1-\epsilon) N \alpha(\epsilon,N)}$$ where $\alpha(\epsilon,N)$ represents the proportion of path intersections experienced by particle $i$. Since, for $|\epsilon | \ll 1$, the multiplicative correction is approximately $\exp(\epsilon N \alpha(\epsilon,N))$, we expect $\log(M_t^{inel} / M_t^{elas})$ to depend on $\epsilon N$ at leading order. Furthermore, numerical simulations indicate that the mean proportion of path intersections, $\overline{\alpha(\epsilon,N)}$, is a function of $\epsilon N$ (Fig. \ref{FIG:Path_Intersection_Proportions}), with leading order contribution equal to $1/2$. This is consistent with the sorting process interpretation of elastic collisions: each particle experiences $N-1$ path intersections, including both positive and negative times; given the symmetric distribution of $\tau_{i,j}$ about zero, half of these path intersections are expected to occur for $t>0$ on average. Therefore, one would expect $\log(M_t^{inel}/M_t^{elas})\sim \frac{\epsilon N}{2}$ at leading order.
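As a quick sanity check on the preceding symmetry argument, the following minimal script (not part of the original study; the random seed, system sizes, and trial counts are arbitrary choices) estimates the fraction of a tagged particle's path intersection times that are positive when $\epsilon=0$ and the initial positions and velocities are iid $N(0,1)$. By the symmetry of $\tau_{i,j}$ about zero, the reported values should be close to $1/2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def positive_fraction(N, trials=2000):
    """Average fraction of a tagged particle's intersection times tau_{0,j}
    that are positive, for elastic dynamics (epsilon = 0)."""
    fractions = np.empty(trials)
    for k in range(trials):
        X = rng.standard_normal(N)   # initial positions
        V = rng.standard_normal(N)   # initial velocities
        # path intersection times of particle 0 with every other particle
        tau = (X[0] - X[1:]) / (V[1:] - V[0])
        fractions[k] = np.mean(tau > 0)
    return fractions.mean()

for N in (10, 100, 500):
    print(N, positive_fraction(N))   # each value should be close to 1/2
\end{verbatim}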
While the general structure of this relationship is correct, including the dependence on $\epsilon N$, a numerical reconstruction of $\log(M_t^{inel}/M_t^{elas})$ indicates this assumption is quantitatively inaccurate (Fig. \ref{FIG:Medians_inelastic_t}). Indeed, applying a regression of the form $D_1 (\epsilon N) + D_2 (\epsilon N)^2$ to the median final collision time results indicates that $D_1\approx 1/3$ rather than the value of $1/2$ predicted by the ansatz (Table \ref{TABLE:Regression_Coefficient_t}). The difference is likely due to the assumption that particles experience equal numbers of path intersections, which is only reasonable for small times. A more rigorous study of the number of path intersections experienced by colliding particles could shed some light on this issue. \begin{table}[ht] \centering \begin{tabular}{|c|| c | c |} \hline Distribution & $D_1$ & $D_2$ \\ \hline $N(0,1)$ & 0.2994 & 0.0237 \\ \hline $U(-1,1)$& 0.2996 & -0.0284 \\ \hline $N(1,1)$ & 0.3154 & 0.2213 \\ \hline $U(0,2)$ & 0.3340 & 0.4038 \\ \hline \end{tabular} \caption{Coefficients of the regression $D_1 \epsilon N + D_2(\epsilon N)^2$ applied to $\log(M_t^{inel}/M_t^{elas})$. The linear term has a slope $D_1$ approximately equal to $1/3$ across all velocity distributions studied. The quadratic coefficient $D_2$ is much larger for the distributions centered about $1$.} \label{TABLE:Regression_Coefficient_t} \end{table} This study highlights an important difference between elastic and non-elastic collisions. While the velocity distribution plays no role in determining the scaling of $t_i^{(N)}$ when $\epsilon = 0$, it clearly affects the value of $D_2$ when $\epsilon \ne 0$, although the effect is weaker for those velocity distributions that are symmetric about zero. \subsection{Median of $T^{(N)}$ under non-elastic collisions} The median $M_t$ considered in the previous section captures the final collision time of a randomly selected particle within the system, and therefore reflects the behavior of `typical' particles. On the other hand, the final collision time of the system, $T^{(N)}$, arises from those particles that exhibit the largest final collision times, whose behavior is thus atypical. As such, the ansatz proposed in the previous section cannot be expected to adequately describe the behavior of $T^{(N)}$. Nonetheless, the numerical results of Fig. \ref{FIG:Medians_T_inelastic} confirm that $\log(M_T^{inel}/M_T^{elas})$ still depends on $\epsilon N$ and on the velocity distribution. Table \ref{TABLE:Regression_Coefficient_T} shows the coefficients obtained by applying a regression of the form $D_1(\epsilon N) + D_2(\epsilon N)^2$ to this quantity.
As before, the effect on the quadratic term is stronger for distributions centered about $1$. \begin{table}[ht] \centering \begin{tabular}{| c || c | c|} \hline Distribution & $D_1$ & $D_2$ \\ \hline $N(0,1)$ & 0.3161 & 0.3531 \\ \hline $U(-1,1)$ & 0.2949 & 0.4401 \\ \hline $N(1,1)$ & 0.2701 & 1.6546 \\ \hline $U(0,2)$ & 0.1840 & 2.5348 \\ \hline \end{tabular} \caption{Coefficients of the regression $D_1 \epsilon N + D_2(\epsilon N)^2$ applied to $\log(M_T^{inel}/M_T^{elas})$. As in the study of $t_i^{(N)}$, the quadratic term is much larger for the distributions centered about $1$. } \label{TABLE:Regression_Coefficient_T} \end{table} \begin{figure}[h!] \begin{center} \includegraphics[width=6 in]{Median_system_vs_Nepsilon.jpg} \caption{Confidence intervals for $\log(M_T^{inel}/M_T^{elas})$. The coefficients of the regression $D_1(\epsilon N) +D_2(\epsilon N)^2$ applied to each of the four velocity distributions are displayed in Table \ref{TABLE:Regression_Coefficient_T}.} \label{FIG:Medians_T_inelastic} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[width=6 in]{Median_system_v_single_particle_vs_Nepsilon.jpg} \caption{Confidence intervals for $\log(M_T^{inel}/NM_t^{elas})$ are symmetric about $\epsilon N = 0.$ The coefficients of the regression $D_0+D_1(\epsilon N) +D_2(\epsilon N)^2$ applied to each of the four velocity distributions are displayed in Table \ref{TABLE:Regression_Coefficient_TNt}. } \label{FIG:Medians_TN_inelastic} \end{center} \end{figure} The asymmetry of $\log(M_T^{inel}/M_T^{elas})$ about $\epsilon N = 0$ observed in Fig. \ref{FIG:Medians_T_inelastic} is due to the linear dependence of $t_i^{(N)}$ on $\epsilon N$. We checked that the ratio $M_T/N M_t$ behaves like $e^{O(1)+D_2(\epsilon N)^2}$ (Figure \ref{FIG:Medians_TN_inelastic} and Table \ref{TABLE:Regression_Coefficient_TNt}). The $O(1)$ term arises from differences in the coefficients $C_t$ and $C_T$ that appear in the scaling of the medians, $M_t \sim C_t N$ and $M_T \sim C_T N^2$, under elastic collisions. \begin{table}[h!] \centering \begin{tabular}{| c || c | c | c|} \hline Distribution & $D_0$ & $D_1$ & $D_2$ \\ \hline $N(0,1)$ & -0.5427 & 0.0171 & 0.3129 \\ \hline $U(-1,1)$ & -0.5177 & -0.0047 & 0.4246 \\ \hline $N(1,1)$ & -0.5130 & -0.0466 & 1.4089 \\ \hline $U(0,2)$ & -0.4192 & -0.1531 & 2.0563 \\ \hline \end{tabular} \caption{Coefficients of the regression $D_0 + D_1 \epsilon N + D_2(\epsilon N)^2$ applied to $\log(M_T^{inel}/NM_t^{elas})$. The values of $D_1$ are much smaller than in the previous tables. } \label{TABLE:Regression_Coefficient_TNt} \end{table} \subsection{Discussion of Non-elastic Collisions} The rescaling of velocities by the factor $(1-\epsilon)>0$ each time a collision occurs results in an increase (decrease) in kinetic energy when $\epsilon <0$ ($\epsilon >0)$. While the overall increase (decrease) in the speed of particles is likely to increase (decrease) the rate at which collisions occur, surprisingly, the final collision times do not respond accordingly.
For $|\epsilon N| \ll 1$, both $t_i^{(N)}$ and $T^{(N)}$ are decreasing functions of $\epsilon N$, consistent with the heating/cooling effects of the non-elastic collisions. However, as $|\epsilon N|$ moves away from zero, $T^{(N)}$ increases until it reaches values larger than those of elastic collisions. In the present simulations, this behavior is independent of the value, $0$ or $1$, of the average initial particle velocity. However, under non-elastic collisions, $t_i^{(N)}$ increases to values larger than those of elastic collisions only when the average initial velocity is $1$. The numerical investigations discussed in this section lay out a possible path towards a rigorous construction of the asymptotic behavior of the final collision times by identifying three central questions: (i) Does the proposed ansatz fail to accurately capture the effects of non-elastic collisions due to differences in path intersection counts and, if so, to what extent? (ii) How do properties of the initial velocity distribution affect the scaling of the (median) collision times? And (iii), what aspects of the collision dynamics give rise to the observed increase in the median collision time? This last question is perhaps most difficult to answer for $T^{(N)}$, which is influenced largely by rare events (large collision times). We believe connections with the sorting processes outlined in Sections \ref{sec:sorting} and \ref{sec:inelastic_intro} may be able to offer some insight, although a preliminary numerical investigation (which is beyond the scope of this article) has proven inconclusive to this point. \section{Proofs of Theorems}\label{sec:proof} \subsection{Proofs of Theorems Regarding $t_i^{(N)}$}\label{sec:proof_tiN} We begin the proofs concerning Thms. \ref{FTC_particle1} - \ref{FTC_particle3} by deriving a formula for the exact distribution of $t_i^{(N)}=\max_{j\ne i} \tau_{i,j}$. Using the independence of $\{(X_i,V_i)\}_{i=1}^N$ and by conditioning on the initial velocity and position of particle $i$, we can express the conditional probability of the event $\{t_i^{(N)}< \mu_N | (x_i,v_i)=(X,V)\}$ as follows. \begin{equation} \begin{split} P(t_i^{(N)} < \mu_N | (x_i,v_i) = (X,V)) &= P(\tau_{i,1} < \mu_N, \dots , \tau_{i,i-1} < \mu_N, \tau_{i,i+1}<\mu_N , \dots , \tau_{i,N}< \mu_N |(x_i,v_i)=(X,V)) \\ &=[P(\tau_{i,1} < \mu_N|(x_i,v_i)=(X,V))]^{N-1} \\ &=\bigg[P\bigg(\frac{x_1-X}{V-v_1} < \mu_N \bigg)\bigg]^{N-1} . \end{split} \label{eq:cond_prob} \end{equation} Integrating Eq. \eqref{eq:cond_prob} with respect to the distribution of $(X,V)$ yields the unconditional probability of $\{t_i^{(N)} < \mu_N\}$, \begin{equation} \begin{split} P(t_i^{(N)} < \mu_N) &= \int\int f_x(X)f_v(V)P(t_i^{(N)} < \mu_N | (x_i,v_i) = (X,V)) dX dV \\ &=\int\int f_x(X)f_v(V)\bigg[P\bigg(\frac{x_1-X}{V-v_1} < \mu_N \bigg)\bigg]^{N-1} dX dV. \end{split} \label{eq:part_time_max_dist} \end{equation} Note that the integrand in Eq. \eqref{eq:part_time_max_dist} is dominated by $f_x(X)f_v(V)$. Thus, by dominated convergence, one can determine the asymptotic behavior of $P(t_i^{(N)} < \mu_N)$ by studying the large $N$ limit of Eq. \eqref{eq:cond_prob}. We search for a scaling of $\mu_N$ depending on $N$ such that $$P(t_i^{(N)} < \mu_N|(x_i,v_i)=(X,V))=\bigg[P\bigg(\frac{x_1-X}{V-v_1} < \mu_N \bigg)\bigg]^{N-1}$$ has a non-trivial limit.
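Although the proofs below proceed analytically, the scaling $\mu_N = \mu N$ identified in Thm. \ref{FTC_particle1} can also be illustrated numerically. The following minimal sketch (not part of the original study) assumes iid $N(0,1)$ positions and velocities and compares a Monte Carlo estimate of $P(t_i^{(N)}/N < \mu)$ with a Monte Carlo evaluation of the limit in Thm. \ref{FTC_particle1}, using the closed form $\int_\mathbb{R}|X-y|\,\varphi(y)\,dy = 2\varphi(X)+X(2\Phi(X)-1)$ for the standard normal density $\varphi$ and cdf $\Phi$; the system size, trial counts, and values of $\mu$ are arbitrary choices.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def empirical_cdf(mu, N=200, trials=4000):
    """Monte Carlo estimate of P(t_i^(N)/N < mu) for iid N(0,1) data."""
    hits = 0
    for _ in range(trials):
        X, V = rng.standard_normal(N), rng.standard_normal(N)
        tau = (X[0] - X[1:]) / (V[1:] - V[0])   # intersection times of particle 0
        hits += tau.max() / N < mu
    return hits / trials

def limit_cdf(mu, samples=200_000):
    """Limit from Thm (FTC_particle1), averaged over (X,V) ~ N(0,1)^2."""
    X, V = rng.standard_normal(samples), rng.standard_normal(samples)
    m = 2 * norm.pdf(X) + X * (2 * norm.cdf(X) - 1)   # int |X-y| phi(y) dy
    return np.mean(np.exp(-norm.pdf(V) * m / mu))

for mu in (0.2, 0.5, 1.0, 2.0):
    # the two columns should agree for moderately large N
    print(mu, empirical_cdf(mu), limit_cdf(mu))
\end{verbatim}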
We begin by constructing an integral representation of $\displaystyle P\bigg(\frac{x_1-X}{V-v_1} < \mu_N \bigg)$ by conditioning on the value of $v_1$. \begin{equation} \begin{split} P\bigg(\frac{x_1-X}{V-v_1} < \mu _N\bigg) &= P\bigg(x_1>X+(V-v_1)\mu_N, V < v_1\bigg) + P\bigg(x_1<X+(V-v_1)\mu_N ,V\ge v_1\bigg) \\ &=\int_V^\infty \int_{X+\mu_N(V-v_1)}^\infty f_x(x_1)f_v(v_1) dx_1 dv_1 + \int_{-\infty}^V \int_{-\infty}^{X+\mu_N(V-v_1)}f_v(v_1)f_x(x_1) dx_1 dv_1 \\ &=\int_V^\infty f_v(v_1)[1-F_x(X+\mu_N(V-v_1))]dv_1 +\int_{-\infty}^V f_v(v_1)F_x(X+\mu_N(V-v_1) )dv_1 \end{split} \label{eq:cond_prob_int1} \end{equation} Applying the change of variables $w=v_1-V$ gives a more useful representation of Eq. \eqref{eq:cond_prob_int1}. \begin{equation} \begin{split} P\bigg(\frac{x_1-X}{V-v_1} < \mu _N\bigg) & =\int_0^\infty f_v(w+V) [1-F_x(X-\mu_N w)]dw +\int_{-\infty}^0 f_v(w+V) F_x(X-\mu_N w)dw \\ &=1 - \int_0^\infty f_v(w+V) F_x(X-\mu_N w ) dw - \int_{-\infty}^0 f_v(w+V)[1-F_x(X-\mu_N w )]dw \end{split} \label{eq:cond_prob_int2} \end{equation} Inserting the result from Eq. \eqref{eq:cond_prob_int2} into Eq. \eqref{eq:part_time_max_dist} gives an integral representation of the distribution of $t_i^{(N)}.$ \begin{equation} \begin{split} P(t_i^{(N)} < \mu_N) &= \int_{\mathbb{R}^2} f_x(X)f_v(V)\bigg[1-\int_0^\infty f_v(w+V) F_x(X-\mu_N w ) dw \\ & \hspace{4 cm}-\int_{-\infty}^0 f_v(w+V)[1-F_x(X-\mu_N w )]dw\bigg]^{N-1}dXdV \end{split} \label{eq:tiN_dist_exact} \end{equation} The results of Thms. \ref{FTC_particle1} - \ref{FTC_particle3} depend on the pointwise limit of \begin{equation} g(X,V,N) = \bigg[1-\int_0^\infty f_v(w+V) F_x(X-\mu_N w ) dw -\int_{-\infty}^0 f_v(w+V)[1-F_x(X-\mu_N w )]dw\bigg]^{N-1} \label{eq:g(X,V,N)} \end{equation} as $N\to \infty$ under a suitable choice of $\mu_N.$ In particular, since $g(X,V,N)$ is a probability, and hence bounded by one, dominated convergence yields the following proposition. \begin{proposition} Suppose that $\lim_{N\to\infty} g(X,V,N)$ exists. Then, \begin{equation} \begin{split} \lim_{N\to \infty} P\bigg(t_i^{(N)}< \mu_N\bigg) &= \lim_{N\to \infty}\int_{\mathbb{R}^2} f_x(X)f_v(V) g(X,V,N) dX dV= \int_{\mathbb{R}^2} f_x(X) f_v(V) \bigg(\lim_{N\to \infty} g(X,V,N) \bigg) dX dV \end{split} \label{eq:FCT_particle_limit} \end{equation} \end{proposition} \noindent The proofs of Thms. \ref{FTC_particle1} - \ref{FTC_particle3} proceed by computing the pointwise limit of $g(X,V,N)$ as $N\to \infty.$ Since the proof of Thm. \ref{FTC_particle3} is similar to that of Thm. \ref{FTC_particle2}, we have placed it in the appendix. \begin{proof}[Proof of Theorem \ref{FTC_particle1}] Suppose that $E|X_1| < \infty$ and $f_v$ is continuous and bounded. Let $\mu_N = \mu N$ with $\mu>0.$ Consider the pointwise limit of $g(X,V,N)$ as $N\to \infty.$ \begin{equation} \begin{split} g(X,V,N) &= \bigg[1-\int_0^\infty f_v(w+V) F_x(X-\mu N w ) dw -\int_{-\infty}^0 f_v(w+V)[1-F_x(X-\mu N w )]dw\bigg]^{N-1} \\ &=\bigg[1-\frac{1}{ N}\int_0^\infty f_v(w/N+V) F_x(X-\mu w ) dw - \frac{1}{N} \int_{-\infty}^0 f_v(w/N+V)[1-F_x(X-\mu w )]dw \bigg]^{N-1}\\ &=\exp\bigg\{(N-1) \ln \bigg(1-\frac{1}{ N}\int_0^\infty f_v(w/N+V) F_x(X-\mu w ) dw \\ & \hspace{5 cm}- \frac{1}{N} \int_{-\infty}^0 f_v(w/N+V)[1-F_x(X-\mu w )]dw \bigg) \bigg\} \end{split} \end{equation} Recall that $E|X_1|<\infty$, so both $\displaystyle \int_0^\infty F_x(X-\mu w) dw$ and $\displaystyle \int_{-\infty}^0 [1-F_x(X-\mu w )] dw$ are finite.
Additionally, \begin{equation*} \begin{split} 0 &\le f_v(w/N +V) F_x(X-\mu w) \le M_v F_x(X-\mu w) \\ 0&\le f_v(w/N +V)[1-F_x(X-\mu w )] \le M_v [1-F_x(X-\mu w )] \\ \end{split} \end{equation*} where $M_v = \sup |f_v| $. By dominated convergence, it follows that \begin{equation*} \begin{split} \lim_{N\to \infty} \int_0^\infty f_v(w/N+V) F_x(X-\mu w ) dw &= \int_0^\infty f_v(V) F_x(X-\mu w) dw \\ \lim_{N\to \infty} \int_{-\infty}^0 f_v(w/N +V)[1-F_x(X-\mu w )] dw &= \int_{-\infty}^0 f_v(V)[1-F_x(X-\mu w )] dw \end{split} \end{equation*} Hence, using the relation $\ln (1-x)= -x +o(x)$ for small $x$, as $N$ tends to $\infty$, \begin{equation} \begin{split} g(X,V,N) &= \exp \bigg\{ -\frac{N-1}{N}\bigg( \int_0^\infty f_v(V)F_x(X-\mu w ) dw+ \int_{-\infty}^0 f_v(V)[1-F_x(X-\mu w )] dw \bigg) +o(1) \bigg\} \\ & \to \exp\bigg\{ -f_v(V) \bigg(\int_0^\infty F_x(X-\mu w ) dw+ \int_{-\infty}^0 [1-F_x(X-\mu w )] dw \bigg) \bigg\}.\\ \end{split} \end{equation} Furthermore, \begin{equation} \begin{split} &\int_0^\infty F_x(X-\mu w ) dw+ \int_{-\infty}^0 [1-F_x(X-\mu w )] dw =\frac{1}{\mu }\int_0^\infty F_x(X-y)dy + \frac{1}{\mu} \int_{-\infty}^0 [1-F_x(X-y)] dy \\ &=\frac{1}{\mu} \bigg(\int_0^\infty y f_x(X-y)dy - \int_{-\infty}^0 y f_x(X-y)dy \bigg)=\frac{1}{\mu} \bigg( \int_{-\infty}^X (X-y) f_x(y) dy -\int_X^\infty (X-y) f_x(y) dy\bigg) \\ &=\frac{1}{\mu} \int_\mathbb{R} |X-y| f_x(y) dy \end{split} \label{eq:E|X-y|} \end{equation} Therefore, for $\mu >0$ and $\mu_N = \mu \cdot N$, we have the desired pointwise limit $$\lim_{N\to \infty} g(X,V,N) = \exp\bigg\{ -\frac{1}{\mu} f_v(V) \int_\mathbb{R} |X-y| f_x(y) dy\bigg\}. $$ Thus, by the representation of $P\big(\frac{t_i^{(N)}}{N}\le \mu\big)$ from Eq. \eqref{eq:FCT_particle_limit}, $$\lim_{N\to\infty}P\bigg(\frac{t_i^{(N)}}{N} < \mu \bigg) = \int_{\mathbb{R}^2} f_x(X)f_v(V) \exp\bigg\{ -\frac{1}{\mu} f_v(V) \int_\mathbb{R} |X-y| f_x(y) dy\bigg\} dX dV. $$ \end{proof} \begin{proof}[Proof of Theorem \ref{FTC_particle2}] As in the proof of Thm. \ref{FTC_particle1}, we proceed by investigating the large $N$ behavior of $g(X,V,N)$. However, unlike Thm. \ref{FTC_particle1}, we assume that $\{X_i\}_{i=1}^N$ have density $f_x(x) = \frac{C_\alpha}{1+|x|^{1+\alpha}}$ for $0<\alpha< 1$, so that $E|X_i|= \infty$ and $$\int_0^\infty F_x(X- w) dw =\int_{-\infty}^0 [1-F_x(X- w )] dw=\infty.$$ Now, for $\mu>0$, we take $\mu_N = \mu\cdot N^{1/\alpha}$ and begin with the form of $g(X,V,N)$ from Eq. \eqref{eq:g(X,V,N)}. \begin{equation} \begin{split} g(X,V,N) &= \bigg[ 1 - \int_0^{N^{\alpha-2}} f_v(w+V)F_x(X-\mu N^{1/\alpha} w) dw -\int_{-{N^{\alpha-2}} }^0 f_v(w+V)[1-F_x(X-\mu N^{1/\alpha} w)] dw \\ & \hspace{0.6 cm} - \int_{{N^{\alpha-2}} }^\infty f_v(w+V)F_x(X-\mu N^{1/\alpha} w) dw- \int_{-\infty}^{-{N^{\alpha-2}} } f_v(w+V)[1-F_x(X-\mu N^{1/\alpha} w)]dw \bigg]^{N-1} \end{split} \label{eq:g(X,V,N)_split} \end{equation} We now investigate the large $N$ behavior of the different integrals within Eq. \eqref{eq:g(X,V,N)_split}. The following statements follow from the boundedness of $f_v$. For fixed $(X,V)$, \begin{equation*} \begin{split} \int_0^{{N^{\alpha-2}} } f_v(w+V)F_x(X-\mu N^{1/\alpha} w) dw &= O(1)N^{\alpha-2} = o\bigg(\frac{1}{N}\bigg)\\ \int_{-{N^{\alpha-2}} }^0 f_v(w+V)[1-F_x(X-\mu N^{1/\alpha} w)] dw &= O(1) {N^{\alpha-2}} = o\bigg(\frac{1}{N}\bigg) \end{split} \end{equation*} For the remaining integrals in $g(X,V,N)$, we need bounds on $F_x(-z)$ and $1-F_x(z)$ for large $z$.
Note that \begin{equation} F_x(-z)= \int_{-\infty}^{-z} \frac{C_\alpha}{1+|t|^{1+\alpha}} dt=C_\alpha \int_{-\infty}^{-z} \frac{1}{|t|^{1+\alpha}} \frac{1}{1+\frac{1}{|t|^{1+\alpha}}} dt \\ \end{equation} Thus, by the symmetry of $f_x$, we have the following bounds on $F_x(-z)=1-F_x(z)$ \begin{equation} \frac{1}{1+\frac{1}{|z|^{1+\alpha}}}\frac{C_\alpha}{\alpha} |z|^{-\alpha}=\frac{C_\alpha}{1+\frac{1}{|z|^{1+\alpha}}} \int_{-\infty}^{-z}\frac{1}{|t|^{1+\alpha}} dt \le F_x(-z)=1-F_x(z) \le C_\alpha \int_{-\infty}^{-z} \frac{1}{|t|^{1+\alpha}}dt=\frac{C_\alpha}{\alpha} |z|^{-\alpha}. \label{eq:FX_bounds} \end{equation} The upper and lower bounds coincide to leading order as $z\to\infty$, so we have the following scaling law for large $z$, \begin{equation} F_x(-z) =1-F_x(z) = \frac{C_\alpha}{\alpha} |z|^{-\alpha} + O\bigg(\frac{1}{|z|^{1+2\alpha}}\bigg) \\ \label{eq:scalings} \end{equation} Note that $N^{\alpha -2}\cdot N^{1/\alpha} = N^{(1-\alpha)^2/\alpha}.$ Thus, we can take $N$ large, depending on $\mu$ and $X$, so that $\mu N^{1/\alpha} w - X\ge \mu N^{(1-\alpha)^2/\alpha} - X \gg 1$ for all $w\ge N^{\alpha-2}$, allowing us to make use of the bounds from Eq. \eqref{eq:FX_bounds} in the remaining integrals in $g(X,V,N).$ Then, for fixed $(X,V)$, \begin{equation} \int_{N^{\alpha-2}}^\infty f_v(w+V) F_x(X-\mu N^{1/\alpha} w) dw =\int_{N^{\alpha-2}}^\infty f_v(w+V) \bigg(\frac{C_\alpha}{\alpha} \frac{1}{|X-\mu N^{1/\alpha} w |^\alpha} + O\bigg(\frac{1}{|X-\mu N^{1/\alpha}w|^{1+2\alpha}} \bigg) \bigg)dw \\ \end{equation} Recall that $f_v$ is bounded. Let $M_v=\sup |f_v|$. We observe that the correction in the preceding integral is $o(1/N)$ as follows. \begin{equation} \begin{split} \int_{N^{\alpha-2}}^\infty\frac{f_v(w+V) }{|X-\mu N^{1/\alpha} w|^{1+2\alpha}} dw &\le \int_{N^{\alpha-2}}^\infty\frac{M_v}{|X-\mu N^{1/\alpha} w|^{1+2\alpha}} dw \\ & = \frac{M_v}{2\alpha \mu N^{1/\alpha}} \cdot \frac{1}{|\mu N^{(1-\alpha)^2/\alpha} - X|^{2\alpha} } = O\bigg( \frac{1}{N^{1/\alpha+2(1-\alpha)^2}}\bigg) \end{split} \end{equation} The quantity $2(1-\alpha)^2\ge 0$. Therefore, $\alpha \in (0,1)$ implies $1/\alpha+ 2(1-\alpha)^2 > 1$. Hence, \begin{equation} \begin{split} \int_{N^{\alpha-2}}^\infty f_v(w+V) F_x(X-\mu N^{1/\alpha} w) dw&= \frac{1}{N} \frac{C_\alpha}{\alpha \mu^\alpha } \int_{N^{\alpha-2}}^\infty f_v(w+V) \frac{1}{w^\alpha} \frac{1}{|X/(\mu w N^{1/\alpha}) -1|^\alpha} dw + o\bigg(\frac{1}{N}\bigg) \\ & = \frac{1}{N} \frac{C_\alpha}{\alpha \mu^\alpha} \int_0^\infty \frac{f_v(w+V)}{w^\alpha} dw + o\bigg(\frac{1}{N}\bigg) \end{split} \end{equation} Finally, we consider the last integral in Eq. \eqref{eq:g(X,V,N)_split}, for which we make the following change of variables.
$$ \int_{-\infty}^{-{N^{\alpha-2}} } f_v(w+V)[1-F_x(X-\mu N^{1/\alpha} w)]dw = \int_{N^{\alpha-2}}^\infty f_v(V-w)[1-F_x(X+\mu N^{1/\alpha} w)] dw.$$ Applying the same arguments as above then shows that $$\int_{-\infty}^{-{N^{\alpha-2}} } f_v(w+V)[1-F_x(X-\mu N^{1/\alpha} w)]dw = \frac{C_\alpha}{\alpha \mu^\alpha N} \int_0^\infty \frac{f_v(V-w)}{w^{\alpha}} dw + o\bigg(\frac{1}{N}\bigg)$$ Thus, \begin{equation} \begin{split} &\lim_{N\to\infty} g(X,V,N) = \lim_{N\to\infty} \bigg(1- \frac{C_\alpha}{\alpha \mu^{\alpha} N} \int_0^\infty \frac{f_v(V+w)+f_v(V-w)}{w^\alpha}dw +o(1/N) \bigg)^{N-1} \\ & = \exp \bigg\{ -\frac{C_\alpha}{\alpha \mu^\alpha} \int_0^\infty \frac{f_v(V+w)+f_v(V-w)}{w^\alpha}dw \bigg\}= \exp\bigg\{- \frac{C_\alpha}{\alpha \mu^\alpha} \int_{-\infty}^\infty \frac{f_v(V+w)}{|w|^\alpha} dw\bigg\} \end{split} \end{equation} which in turn gives the desired result, $$\lim_{N\to\infty} P\bigg( \frac{t_i^{(N)}}{N^{1/\alpha}} < \mu \bigg) = \int_{\mathbb{R}^2} f_x(X) f_v(V) \exp\bigg( -\frac{C(V)}{\mu^\alpha}\bigg) dX dV $$ where $$C(V) = \frac{C_\alpha}{\alpha}\int_\mathbb{R} f_v(V+w)\frac{1}{|w|^\alpha} dw. $$ \end{proof} \subsection{Proofs of Theorems Regarding $T^{(N)}$}\label{sec:proofs_TN} The key observation in the proofs of Theorems \ref{FTC_system}, \ref{FTC_system2}, and \ref{FTC_system3} and Proposition \ref{prop:tight} is that for any time $z_N(t)\in\mathbb{R}$, the final collision time $T^{(N)}$ does not exceed $z_N(t)$ if and only if the random variable $A_{N,z_N(t)} = \sum_{1\le i<j\le N} \mathbf{1}_{\tau_{i,j} > z_N(t)}$ is equal to zero. The distribution of $A_{N,z_N(t)}$ can be approximated by a Poisson distribution \cite{lao, silverman,Barbour_Eagleson}. We state Corollary 2.1 from \cite{lao}. \begin{theorem1} Let $\xi_1,\dots, \xi_N$ be $\mathbb{S}$-valued random variables and $h:\mathbb{S}^k \to \mathbb{R}$ be a symmetric Borel function. Let \begin{equation*} \begin{split} p_{N,z} &= P(h(\xi_1,\dots,\xi_k)>z), \\ \lambda_{N,z}& = \binom{N}{k} p_{N,z} \end{split} \end{equation*} If for some sequence of transformations, $z_N:\mathbb{T} \to \mathbb{R}$ with $\mathbb{T} \subset \mathbb{R}$, the conditions \begin{equation*} \lim_{N\to \infty} \lambda_{N,z_N(t)} = \lambda_t > 0 \end{equation*} and \begin{equation*} \lim_{N\to\infty} N^{2k-1} P\bigg(h(\xi_1,\dots,\xi_k) > z_N(t), h(\xi_{1+k-r},\dots, \xi_{2k-r}) > z_N(t) \bigg) = 0 \end{equation*} hold for all $t \in \mathbb{T} $ and $r=1,2,\dots,k-1$, then \begin{equation*} \lim_{N\to\infty} P(H_N \le z_N(t)) = \exp(-\lambda_t) \end{equation*} for all $t \in \mathbb{T} $ where $H_N = \max\limits_{1 \le j_1 < \dots < j_k\le N} h(\xi_{j_1},\dots, \xi_{j_k}).$ \end{theorem1} Returning to our problem, let $\xi_i = (X_i,V_i)$ be the initial position and velocity of particle $i$ and $h:\mathbb{S}^2 \to \mathbb{R}$ be the function which gives the path intersection time of two particles $$h(\xi_i,\xi_j) = h\bigg((X_i,V_i),(X_j,V_j)\bigg) = \frac{X_i-X_j}{V_j-V_i}=\tau_{i,j}.$$ Then $H_N$ from the Silverman-Brown limit law stated above is the final collision time of the system, $T^{(N)}$. To prove Theorems \ref{FTC_system}, \ref{FTC_system2}, and \ref{FTC_system3}, we need to find a time scale $z_N(t)$ such that \begin{align} \binom{N}{2} P(\tau_{1,2} > z_N(t)) &\to \lambda_t >0 \label{eq:gen_req_a} \\ N^3P(\tau_{1,2} >z_N(t), \tau_{2,3} > z_N(t) ) &\to 0\label{eq:gen_req_b}.
\end{align} The random variable $A_{N,z_N(t)}$ previously introduced is a summation of the $\binom{N}{2}$ dependent Bernoulli random variables $\mathbf{1}_{\tau_{i,j} > z_N(t)}$. Recounting the idea of the proof of the Silverman-Brown law, when Eq. \eqref{eq:gen_req_b} is satisfied, these random variables become sufficiently uncorrelated as $N\to\infty$ so that the distribution of $A_{N,z_N(t)}$ is approximately $Bin(\binom{N}{2},p_{N,z_N(t)})$. The choice of $z_N(t)$ which gives a non-trivial limit in Eq. \eqref{eq:gen_req_a} allows one to approximate the distribution of $A_{N,z_N(t)}$ as $Poisson(\lambda_t)$. We mention that detailed error bounds for the Poisson approximation can be found in \cite{Barbour_Eagleson}. \begin{proof}[Proof of Theorem \ref{FTC_system}] Let $z_N(t) = \binom{N}{2} t$ and consider the behavior of $\lambda_{N,z_N(t)}$ as $N\to \infty.$ \begin{equation} \begin{split} \lambda_{N,z_N(t)} & = \binom{N}{2} P(\tau_{1,2} > z_N(t) ) =\binom{N}{2} \bigg( 1 - P(\tau_{1,2} \le z_N(t) ) \bigg)\\ &=\binom{N}{2} \bigg(\int_{\mathbb{R}^2}\int_0^\infty f_x(X)f_v(V) f_v(w+V)F_x(X-z_N(t) w) dwdXdV \\ & \hspace{2.5 cm} +\int_{\mathbb{R}^2}\int_{-\infty}^0f_x(X)f_v(V) f_v(w+V) (1-F_x(X-z_N(t) w))dw dX dV \bigg) \label{Eq:ptau>z} \end{split} \end{equation} where the last equality follows from Eq. \eqref{eq:tiN_dist_exact} by taking $N=2$ and noting that $t_1^{(2)}=\tau_{1,2}$ (recall that $t_1^{(2)}$ is the (first and) last collision time in a system of two particles). Making use of the change of variable $y=\binom{N}{2} w$, we have the following representation for $\lambda_{N,z_N(t)}$ \begin{equation} \begin{split} \lambda_{N,z_N(t)} &= \int_{\mathbb{R}^2}\int_0^\infty f_x(X)f_v(V) f_v\bigg(V+\frac{y}{\binom{N}{2}}\bigg) F_x(X-t y)dy dX dV\\ & \hspace{2 cm}+ \int_{\mathbb{R}^2}\int_{-\infty}^0 f_x(X)f_v(V) f_v\bigg(V+\frac{y}{\binom{N}{2}}\bigg) (1-F_x(X-t y)) dy dX dV \end{split} \end{equation} Under the assumption that $f_v$ is bounded and $E|X_1|<\infty$, we can evaluate $\lim_{N\to \infty} \lambda_{N,z_N(t)}$ by use of the dominated convergence theorem. \begin{equation} \begin{split} \lim_{N\to \infty} \lambda_{N,z_N(t)} &= \int_{\mathbb{R}^2}\int_0^\infty f_x(X)(f_v(V))^2 F_x(X-ty) dy dX dV \\ & \hspace{2 cm} + \int_{\mathbb{R}^2}\int_{-\infty}^0 f_x(X) (f_v(V))^2 (1-F_x(X-ty)) dy dX dV \\ &=\bigg(\int_{-\infty}^\infty f_v^2(V) dV \bigg) \int_{-\infty}^\infty f_x(X) \bigg(\int_0^\infty F_x(X-ty) dy + \int_{-\infty}^0 (1-F_x(X-ty))dy \bigg)dX \\ &=\frac{1}{t} \int_{-\infty}^\infty f_v^2(V) dV \cdot \int_{\mathbb{R}^2} |X-y|f_x(X)f_x(y) dy dX \end{split} \end{equation} where the last equality follows from the same argument as in Eq. \eqref{eq:E|X-y|}. We have now determined the appropriate scaling to use in the Silverman-Brown limit law. All that remains to be shown is that $$N^3 P(\tau_{1,2} >z_N(t), \tau_{2,3}>z_N(t)) \to 0$$ as $N\to \infty.$ To prove this limit, we first note that, conditional on the initial position and velocity $(x_2,v_2)=(X,V)$ of particle 2, the events $\{\tau_{1,2} >z_N(t)\}$ and $\{\tau_{2,3}>z_N(t)\}$ are independent and identically distributed. We can therefore construct an integral representation for $P(\tau_{1,2} >z_N(t),\, \tau_{2,3}>z_N(t))$ following a similar argument to the one used in the construction of Eq. \eqref{eq:tiN_dist_exact}.
\begin{equation} \begin{split} & N^3P(\tau_{1,2} >z_N(t), \tau_{2,3}>z_N(t)) \\ =\ & N^3 \int_{\mathbb{R}^2}f_x(X)f_v(V)P\bigg(\frac{x_1-X}{V-v_1} > z_N(t), \frac{x_3-X}{V-v_3} \jld{> z_N(t)} \bigg| (x_2,v_2)=(X,V)\bigg) dXdV \\ =\ & N^3 \int_{\mathbb{R}^2}f_x(X)f_v(V)\bigg[P\bigg(\frac{x_1-X}{V-v_1} > z_N(t) \bigg) \bigg]^2dXdV \end{split} \label{eq:joint_int} \end{equation} From Eq. \eqref{eq:cond_prob_int2}, it follows that \begin{align*} P\bigg(\frac{x_1-X}{V-v_1} > z_N(t) \bigg) &= 1- P\bigg(\frac{x_1-X}{V-v_1} \le z_N(t) \bigg) \\ &=\int_0^\infty f_v(w+V)F_x(X-z_N(t) w) dw+\int_{-\infty}^0 f_v(w+V) (1-F_x(X-z_N(t) w))dw \end{align*} Substituting the above equality into Eq. \eqref{eq:joint_int}, we continue the calculation. \begin{equation} \begin{split} &N^3P(\tau_{1,2} >z_N(t), \tau_{2,3}>z_N(t)) \\ &=N^3\int_{\mathbb{R}^2} f_x(X)f_v(V) \bigg( \int_0^\infty f_v(w+V)F_x(X-z_N(t) w) dw +\int_{-\infty}^0 f_v(w+V) (1-F_x(X-z_N(t) w))dw \bigg)^2 dX dV \\ &=N^3\int_{\mathbb{R}^2} f_x(X)f_v(V) \bigg( \int_0^\infty f_v(w+V) P(X'\le X-z_N(t)w) dw + \int_0^\infty f_v(V-w)P(X'> X + z_N(t)w) dw \bigg)^2dX dV \end{split} \end{equation} where $X'$ has the same distribution as $X_1$. We now make use of the assumption that $E|X_1|^{3/2} <\infty$ . Also, recall $z_N(t) = \binom{N}{2} t.$ \begin{equation} \begin{split} &N^3P(\tau_{1,2} >z_N(t), \tau_{2,3}>z_N(t)) \\ &\le N^3\int_{\mathbb{R}^2} f_x(X)f_v(V) \bigg( \int_0^\infty f_v(w+V) P( |X-X'|\ge z_N(t) w) dw \\ &\hspace{4 cm}+ \int_0^\infty f_v(V-w)P(|X'- X| > z_N(t)w) dw \bigg)^2dX dV \\ &=N^3\int_{\mathbb{R}^2} f_x(X)f_v(V) \bigg( \int_0^\infty [f_v(V-w)+f_v(V+w)] P(|X'-X| \ge z_N(t) w ) dw \bigg)^2 dXdV \\ &\le N^3 \int_{\mathbb{R}^2} f_x(X)f_v(V) \bigg(\int_0^\infty [f_v(V-w)+f_v(V+w)] \frac{ E_{X'} (|X'-X|^{3/4}\mathbbm{1}_{|X'-X|>z_N(t) w})}{(z_N(t) w)^{3/4}} dw \bigg)^2 dX dV \\ &= C_N(t) \int_{\mathbb{R}^2} f_x(X)f_v(V) \bigg(\int_0^\infty \frac{f_v(V-w)+f_v(V+w)}{w^{3/4}} E_{X'} (|X'-X|^{3/4}\mathbbm{1}_{|X'-X|>z_N(t) w}) \jld{dw}\bigg)^2 dX dV \end{split} \label{eq:Pnz_ineq} \end{equation} where \seth{$E_{X'}[\cdot]$ is expectation with respect to $X'$ and} $C_N(t) = \frac{N^3}{(N^2-N)^{3/2}} \frac{2^{3/2}}{t^{3/2}} \to \big(\frac{2}{t}\big)^{3/2}$ as $N\to\infty.$ Thus, to show $N^3P(\tau_{1,2}>z_N(t),\tau_{2,3}>z_N(t))\to 0$ as $N\to\infty$, we need to show the final integral in Eq. \eqref{eq:Pnz_ineq} approaches zero as $N\to\infty.$ The integrand can be dominated as follows. \begin{equation} \begin{split} &f_x(X)f_v(V) \bigg(\int_0^\infty \frac{f_v(V-w)+f_v(V+w)}{w^{3/4}} E_{X'} (|X'-X|^{3/4}\mathbbm{1}_{|X'-X|>z_N(t) w}) \jld{dw} \bigg)^2 \\ &\le f_x(X)f_v(V) \bigg( \int_0^\infty \frac{f_v(V-w)+f_v(V+w)}{w^{3/4}}dw\bigg)^2 \bigg(E_{X'} (|X'-X|^{3/4}) \bigg)^2 \\ &\le f_x(X)f_v(V) \bigg( \int_1^\infty (f_v(V-w)+f_v(V+w)) \jld{dw} + \int_0^1 \frac{2M_v}{w^{3/4}} dw\bigg)^2 E_{X'} |X'-X|^{3/2} \\ &\le f_x(X)f_v(V)(1+8M_v)^2 E_{X'}(|X'|+|X|)^{3/2} \end{split} \label{eq:Pnz_dom} \end{equation} where $M_v = \max\limits_{y} f_v(y).$ The final equation in Eq. \eqref{eq:Pnz_dom} is integrable since $X_1$ has a 3/2 moment. 
Using dominated convergence twice, \begin{equation} \begin{split} &\lim_{N\to \infty} \int_{\mathbb{R}^2} f_x(X)f_v(V) \bigg(\int_0^\infty \frac{f_v(V-w)+f_v(V+w)}{w^{3/4}} E_{X'} (|X'-X|^{3/4}\mathbbm{1}_{|X'-X|>z_N(t) w}) dw \bigg)^2 dX dV \\ &= \int_{\mathbb{R}^2} f_x(X)f_v(V) \bigg(\lim_{N\to\infty} \int_0^\infty \frac{f_v(V-w)+f_v(V+w)}{w^{3/4}} E_{X'} (|X'-X|^{3/4}\mathbbm{1}_{|X'-X|>z_N(t) w}) dw \bigg)^2 dX dV \\ &=\int_{\mathbb{R}^2} f_x(X)f_v(V) \bigg(\int_0^\infty \frac{f_v(V-w)+f_v(V+w)}{w^{3/4}} \lim_{N\to\infty} E_{X'} (|X'-X|^{3/4}\mathbbm{1}_{|X'-X|>z_N(t) w}) dw \bigg)^2 dX dV \end{split} \end{equation} Since $E|X'|^{3/2}<\infty$, it follows that $$\lim_{N\to\infty} E\bigg(|X'-X|^{3/4}\mathbbm{1}_{|X'-X|>z_N(t)w}\bigg) = 0$$ for all $w> 0.$ Thus, we can conclude that $$N^3P\big(\tau_{1,2}>z_N(t), \tau_{2,3}>z_N(t) \big)\to 0$$ as $N\to\infty$. The final requirement of the Silverman-Brown limit law is satisfied, therefore $$P\bigg(\frac{T^{(N)}}{\binom{N}{2}} \le t\bigg) \to e^{-C/t } $$ where $$C = \int_{\mathbb{R}^2} |x-y|f_x(x) f_x(y) dx dy \cdot \int_\mathbb{R} f_v^2(v) dv. $$ \end{proof} \begin{proof}[Proof of Theorem \ref{FTC_system2}] Applying the Silverman-Brown limit law used in the proof of Theorem \ref{FTC_system}, again with $z_N(t) = N^{2/\alpha} t$, we need only verify the following two requirements: \begin{align*} &\binom{N}{2} P(\tau_{1,2} > N^{2/\alpha} t) \to \frac{C_\alpha}{2\alpha t^{\alpha}}\int_{\mathbb{R}^2} f_v(V) \frac{f_v(V+w)}{|w|^\alpha} dw dV \\ &N^3 P(\tau_{1,2} > N^{2/\alpha}t, \tau_{2,3} > N^{2/\alpha} t ) \to 0. \end{align*} We begin by making use of Eq. \eqref{Eq:ptau>z} \begin{equation} \begin{split} \binom{N}{2} P(\tau_{1,2} > N^{2/\alpha} t ) &=\binom{N}{2} \bigg(\int_{\mathbb{R}^2}f_x(X)f_v(V) \int_0^\infty f_v(w+V)F_x(X-N^{2/\alpha}t w) dwdXdV \\ & \hspace{2.5 cm} +\int_{\mathbb{R}^2}f_x(X)f_v(V) \int_{-\infty}^0 f_v(w+V) (1-F_x(X-N^{2/\alpha}t w))dw dX dV \bigg) \end{split} \end{equation} In the proof of Thm. \ref{FTC_particle2}, it was shown that for large $N$, \begin{equation} \begin{split} \int_0^\infty f_v(w+V)F_x(X-N^{1/\alpha}t w) dw &= \frac{C_\alpha}{\alpha t^\alpha N } \int_0^\infty \frac{f_v(V+w)}{|w|^\alpha} dw + o(N^{-1}) \\ \int_{-\infty}^0 f_v(w+V) (1-F_x(X-N^{1/\alpha}t w))dw &= \frac{C_\alpha}{\alpha t^\alpha N } \int_0^\infty \frac{f_v(V-w)}{|w|^\alpha} dw + o(N^{-1}) \end{split} \label{eq:stablealpha_asym} \end{equation} Replacing $N$ with $N^2$, we have the following relationship for large $N$. \begin{equation} \begin{split} \binom{N}{2} P(\tau_{1,2} > N^{2/\alpha} t ) & = \binom{N}{2} \int_{\mathbb{R}^2} f_x(X)f_v(V) \bigg(\frac{C_\alpha}{\alpha t^\alpha N^2} \int_\mathbb{R} \frac{f_v(V+w)}{|w|^\alpha}dw +o(N^{-2}) \bigg) dXdV \\ &\to \frac{C_\alpha}{2\alpha t^\alpha} \int_{\mathbb{R}^2} f_v(V)\frac{f_v(V+w)}{|w|^\alpha} dw dV \qquad \text{as } N \to \infty. \end{split} \end{equation} For the second requirement, we make use of the same asymptotic expansion as in Eq. \eqref{eq:stablealpha_asym}.
\begin{equation} \begin{split} &N^3P(\tau_{1,2} >N^{2/\alpha} t , \tau_{2,3} > N^{2/\alpha} t ) \\ &=N^3 \int_{\mathbb{R}^2}f_x(X)f_v(V) \bigg(\int_0^\infty f_v(w+V)F_x(X-N^{2/\alpha}t w) dw + \int_{-\infty}^0 f_v(w+V) (1-F_x(X-N^{2/\alpha}t w))dw\bigg)^2 dX dV \\ &= N^3 \int_{\mathbb{R}^2}f_x(X)f_v(V) \bigg(\frac{C_\alpha}{\alpha t^\alpha N^2} \int_\mathbb{R} \frac{f_v(V+w)}{|w|^\alpha} dw +o(N^{-2}) \bigg)^2 dXdV \\ &= \frac{C_\alpha^2}{\alpha^2 t^{2 \alpha}N} \int_\mathbb{R}f_v(V) \bigg(\int_\mathbb{R} \frac{f_v(V+w)}{|w|^\alpha} dw\bigg)^2 dV +o(N^{-1}) \to 0 \qquad \text{as } N \to \infty. \end{split} \end{equation} Having satisfied the requirements of the limit law, the desired result follows and the proof is complete. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:tight}] It is sufficient to show $$\lim_{A\to\infty} \limsup_{N\to\infty} P\bigg(\bigg|\frac{T^{(N)}}{\binom{N}{2}} \bigg| > A\bigg)=0.$$ Fix $A>0$, then \begin{equation} \begin{split} P\bigg(\frac{T^{(N)}}{\binom{N}{2}} > A \bigg) &= P\bigg(\max_{i,j} \tau_{i,j}> \binom{N}{2} A \bigg) = P\bigg(\sum_{i,j} \mathbbm{1}_{\tau_{i,j} > \binom{N}{2} A} \ge 1 \bigg) \\ &\le E\bigg(\sum_{i,j} \mathbbm{1}_{\tau_{i,j} > \binom{N}{2} A}\bigg) =\binom{N}{2} P\bigg(\tau_{1,2} > \binom{N}{2} A\bigg) \\ &\xrightarrow{N\to\infty} \frac{1}{A} \int_{\mathbb{R}} f_v^2(v)dv \cdot \int_{\mathbb{R}^2} |x-y|f_x(x)f_x(y) dx dy \end{split} \end{equation} The convergence in the final line follows from arguments made in the proof of Thm. \ref{FTC_system}. Thus, $$\lim_{A\to \infty } \limsup_{N\to\infty} P\bigg(\frac{T^{(N)}}{\binom{N}{2}} > A \bigg) =0.$$ Also, for any $A>0$, \begin{equation} \begin{split} P\bigg(\frac{T^{(N)}}{\binom{N}{2}} < -A \bigg) &\le P(T^{(N)} < 0) \\ &\le P(\tau_{1,2} < 0, \tau_{3,4} < 0, \dots, \tau_{2\left\lfloor N/2\right\rfloor -1, 2\left\lfloor N/2\right\rfloor}< 0) \\ &=\bigg[P(\tau_{1,2} < 0 )\bigg]^{\left\lfloor N/2\right\rfloor} \xrightarrow{N\to\infty} 0. \end{split} \end{equation} Thus, $\displaystyle \lim_{A\to\infty} \limsup_{N\to\infty} P\bigg(\bigg|\frac{T^{(N)}}{\binom{N}{2}} \bigg| > A\bigg)=0.$ \end{proof} \section*{Acknowledgments} A large number of simulations were required in the numerical reconstruction of the distribution of $T^{(N)}/\binom{N}{2}$ under elastic collisions and in the investigation of $M_t$ and $M_T$ under non-elastic collisions. An allocation of computer time from the UA Research Computing High Performance Computing (HPC) at the University of Arizona is gratefully acknowledged. This material is based upon work supported by ARO grant W911NF-14-1-0179.
https://arxiv.org/abs/1605.03040
A note on the statistical view of matrix completion
A very simple interpretation of matrix completion problem is introduced based on statistical models. Combined with the well-known results from missing data analysis, such interpretation indicates that matrix completion is still a valid and principled estimation procedure even without the missing completely at random (MCAR) assumption, which almost all of the current theoretical studies of matrix completion assume.
\section*{Acknowledgements}% \addtocontents{toc}{\protect\vspace{6pt}}% \addcontentsline{toc}{section}{Acknowledgements}% } \setlength{\textwidth}{15.3 truecm} \setlength{\textheight}{23.9 truecm} \newcommand{\nonumber \\}{\nonumber \\} \def\pr{\textsf{P}} \def\ep{\textsf{E}} \def\Cov{\textsf{Cov}} \def\Var{\textsf{Var}} \def\Cal#1{{\mathcal #1}} \def\bk#1{{\mathbf #1}} \def\bkg#1{\mbox{\boldmath{$#1$}}} \def\smallbkg#1{\mbox{\scriptsize \boldmath{$#1$}}} \begin{document} \def\text#1{\mbox{\rm #1}} \def\overset#1#2{\stackrel{#1}{#2} } \def\mathbf{\mathbf} \def\mathrm{\mathrm} \def\displaystyle\sum{\displaystyle\sum} \def\displaystyle\int{\displaystyle\int} \def\displaystyle\frac{\displaystyle\frac} \renewcommand{\baselinestretch}{1} \newcommand{\pkg}[1]{\textsf{#1}} \title{\LARGE \bf A note on the statistical view of matrix completion} \author{Tianxi Li \\ \normalsize Department of Statistics,\\ \normalsize University of Michigan, Ann Arbor \\} \maketitle \bigskip \begin{abstract} A very simple interpretation of matrix completion problem is introduced based on statistical models. Combined with the well-known results from missing data analysis, such interpretation indicates that matrix completion is still a valid and principled estimation procedure even without the missing completely at random (MCAR) assumption, which almost all of the current theoretical studies of matrix completion assume. \end{abstract} \section{Introduction} Matrix completion has attracted a great amount of attention in the past ten years \cite{candes2009exact, candes2010power, candes2010matrix, mazumder2010spectral, keshavan2009matrix, davenport20141, klopp2015matrix}. A low-rank assumption is made to ensure a matrix, though with a larger proportion of entries being unobserved, to be estimable in certain senses. Given a matrix $Y \in \bR^{m\times n}$ with missing entries, we denote the index set of observed positions as $\Omega \subset [m]\times[n]$ and define the projection operator $P_{\Omega}$ as a mapping from $\bR^{m\times n}$ to $\bR^{m\times n}$, such that $$P_{\Omega}A = (A_{ij}I((i,j)\in \Omega))_{m\times n},$$ where by convention, we define a missing value multiplied by zero is still zero. With these notations, the most straightforward way of formulating a low-rank matrix completion procedure is \begin{equation}\label{eq:SVD-lowrank} \ourmin_{\Rank(Z)=q}\norm{P_{\Omega}Y - P_{\Omega}Z}_F^2, \end{equation} where $\norm{\cdot}_F$ is the Frobenius norm. This is a very natural generalization of the classical SVD problem to the situation where missing entries are present, so we call this problem missing value SVD. Missing value SVD is non-convex, due to the constraint on rank, and currently there is no efficient algorithm that guarantees global optimal solution. One approach to deal with the non-convexity is taking a convex relaxation of the rank. The most popular relaxation currently used is the nuclear norm $\norm{Z}_*$, which is the sum of singular values of $Z$. For instance, \cite{mazumder2010spectral} takes the Langrangian problem \begin{equation}\label{eq:nuclear} \ourmin_{Z}\norm{P_{\Omega}Y - P_{\Omega}Z}_F^2 + \lambda\norm{Z}_*. \end{equation} Other similar formulations are also available \cite{candes2010matrix, davenport20141}. \newline Now we proceed to introduce a few terminologies of missing data mechanism in statistics from \cite{little2014statistical}. Let $M$ be the indicator matrix of missing positions, such that $M_{ij}=1$ if and only if $(i,j) \notin \Omega$. 
Then we say the missing mechanism is {\em missing completely at random (MCAR)} if $$\p(M|Y,\phi) = \p(M|\phi)$$ where $\phi$ is certain unknown underlying parameters of the missing process. For example, in the case of uniformly missing, $\phi$ can be the probability that each entry is missing. A more general mechanism is called {\em missing at random (MAR)} which assumes the missing indicator only depends on the observed values as $$\p(M|Y,\phi) = \p(M|\{Y_{ij}\}_{(i,j)\in \Omega},\phi).$$ If the missing indicator also has dependence on the missing entries, given all the observed ones, then it is called {\em not missing at random (NMAR)}. To the best of our knowledge, all of the theoretical results about matrix completion are assuming that the missing positions are MCAR. Such assumption is not realistic in many applications. For instance, in the Netflix problem where the missing entries are the movie scores that users never watched, MCAR is clearly unrealistic (while MAR may be better but still a bit too strong). In next section, we will treat the matrix completion as a statistical model estimation problem and show that the matrix completion is valid for MAR. \section{Statistical models for matrix completion} Assume $Y = Z +E$, where $Z$ is a parameter matrix of rank $q$. $E = (e_{ij})_{m\times n}$ such that $e_{ij}$'s are i.i.d $\ncal(0,\sigma^2)$. $\sigma^2$ can be assumed to be known without loss of generality. It is easy to see that the log likelihood kernel of the model is $$\ell(Z;Y) \propto \exp(\frac{1}{2\sigma^2}\sum_{ij}(Y_{ij}-Z_{ij})^2).$$ Therefore, using the fact that the rank-$q$ SVD is the rank-$q$ matrix that has the smallest Frobenius error, we know that the rank-$q$ SVD is the MLE of $Z$. Using the same idea, we can see that the missing-value SVD is actually trying to solve the MLE for the log likelihood $$\ell(Z;Y_{ij}, (i,j)\in \Omega).$$ This is the likelihood after integrating out all of the missing entries in the model. According to \cite{little2014statistical}, we just need the following ignorable assumption the make such MLE is a valid estimate. \begin{ass}[Ignorable assumption 1] The missing mechanism is MAR and the model parameter space for $(Z, \phi)$ is a product space of sets for $Z$ and $\phi$. \end{ass} The second half of the assumption is trivial, thus the ignorable assumption essentially indicates that MAR is enough for a valid estimation. On the other hand, problem \eqref{eq:nuclear} can be seen as an approximate MLE to the ignorable likelihood. However, it will also be interesting to see what is the statistical interpretation by itself, which may give a better view of the needed missing mechanism. We now define the following Bayesian model: Let $O(m,n)$ be the space of $m \times n$ orthogonal matrices. \begin{itemize} \item Noninformative improper prior $U \sim Unif(O(m,q))$, $V \sim Unif(O(n,q))$. \item $D = \diag(\theta)$ and $\theta_k$'s are i.i.d from a Laplace distribution $\lcal(0,b)$, $k=1, \cdots, q$. \item $Y|U,D,V = UDV^T + E$, $e_{ij}$ i.i.d $\ncal(0,\sigma^2)$. \item $M|Y,\phi \sim \p(M|Y,\phi)$. \end{itemize} Now suppose in the likelihood, we integrate the missing positions out and use the resulting marginal likelihood of $Y_{ij}, (i,j)\in \Omega|U,D,V$ in the Bayes estimation. It is not hard to see that the problem for solving the posterior mode is the following one: $$\ourmin \sum_{(i,j)\in \Omega}(Y_{ij}-Z_{ij})^2 + \lambda\norm{Z}_*$$ which is exactly problem \eqref{eq:nuclear}. 
To make the estimation valid, we just need a Bayesian ignorable assumption as discussed in \cite{little2014statistical}: \begin{ass}[Ignorable assumption 2] The missing mechanism is MAR and the priors for $Z$ and $\phi$ are independent. \end{ass} The second part is trivially satisfied, thus the condition we need is still MAR. \section{Simulation example of matrix completion under MCAR and MAR} In this example, we generate data under MCAR and MAR and then compare the performance of \eqref{eq:nuclear} in the two situations. In the simulation, we constrain the MAR data to have the same number of missing entries and a similar distribution of missing positions as the MCAR data, up to permutation of rows, so any performance difference should be attributable purely to the missing mechanism. Performance is measured by the average relative error over 100 replications. The relative error is defined as $$\sum_{(i,j)\in \Omega}(Z_{ij}-\hat{Z}_{ij})^2/\norm{Z}_F^2 \times 100\%.$$ The performance on $300 \times 300$ matrices is given in Table~\ref{tab1}. As can be seen, the performance under MCAR and MAR is nearly the same. This can also be checked by two-sample tests, which give large p-values. Essentially, it shows that matrix completion still works well under MAR. \begin{table}[ht] \centering \begin{tabular}{r|r|rrrr} \hline Rank & Mechanism & 10\% & 30\% & 50\% & 80\% \\ \hline \multirow{2}{*}{ 5}& MAR & 0.0538 & 0.1644 & 0.2838 & 0.5512 \\ &MCAR & 0.0539 & 0.1641 & 0.2822 & 0.5535 \\ \hline \multirow{2}{*}{20}& MAR & 0.0626 & 0.1985 & 0.3630 & 1.3950 \\ & MCAR & 0.0627 & 0.1975 & 0.3645 & 1.4018 \\ \hline \end{tabular} \caption{Relative imputation errors for different missing proportions and ranks. The performance differences between the two missing mechanisms are very small.} \label{tab1} \end{table} \section{Summary} We draw statistical interpretations of the two most popular matrix completion techniques. These statistical interpretations reveal that the MAR missing mechanism is enough for the matrix completion procedure to be a valid statistical estimation. Though assuming MCAR makes it easier to derive theoretical properties of matrix completion in general, such a condition may be unnecessary for good performance in practice.
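For completeness, a rough sketch of how an MCAR/MAR comparison of this kind can be set up is given below (Python/NumPy, reusing the \texttt{soft\_impute} sketch from the previous section; the mask construction, seed, and regularization level are our own illustrative choices and are not claimed to match the exact experimental protocol above).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m = n = 300
q = 5
Z = rng.normal(size=(m, q)) @ rng.normal(size=(q, n))   # rank-q signal
Y = Z + rng.normal(size=(m, n))                          # noisy observations
target = 0.30                                            # overall missing proportion

# MCAR: each entry missing independently with probability `target`
mcar_obs = rng.random((m, n)) > target

# MAR: column 0 is always observed; missingness of the remaining entries in a
# row depends only on the (observed) value in column 0
score = Y[:, 0]
p_miss = 2 * target * (score - score.min()) / (score.max() - score.min())
mar_obs = rng.random((m, n)) > p_miss[:, None]
mar_obs[:, 0] = True

for name, obs in [("MCAR", mcar_obs), ("MAR", mar_obs)]:
    Zhat = soft_impute(Y, obs, lam=2.0)
    rel_err = 100 * np.sum((Z - Zhat) ** 2) / np.sum(Z ** 2)   # relative error in %
    print(name, round(rel_err, 4))
\end{verbatim}
Here the relative error is computed over all entries for simplicity, a slight variant of the criterion defined above.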
{ "timestamp": "2016-05-11T02:11:27", "yymm": "1605", "arxiv_id": "1605.03040", "language": "en", "url": "https://arxiv.org/abs/1605.03040", "abstract": "A very simple interpretation of matrix completion problem is introduced based on statistical models. Combined with the well-known results from missing data analysis, such interpretation indicates that matrix completion is still a valid and principled estimation procedure even without the missing completely at random (MCAR) assumption, which almost all of the current theoretical studies of matrix completion assume.", "subjects": "Machine Learning (stat.ML)", "title": "A note on the statistical view of matrix completion", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.965899575269305, "lm_q2_score": 0.7341195327172402, "lm_q1q2_score": 0.709085744848483 }
https://arxiv.org/abs/1703.08057
PageRank in Undirected Random Graphs
PageRank has numerous applications in information retrieval, reputation systems, machine learning, and graph partitioning. In this paper, we study PageRank in undirected random graphs with an expansion property. The Chung-Lu random graph is an example of such a graph. We show that in the limit, as the size of the graph goes to infinity, PageRank can be approximated by a mixture of the restart distribution and the vertex degree distribution. We also extend the result to Stochastic Block Model (SBM) graphs, where we show that there is a correction term that depends on the community partitioning.
\section*{\nomname \renewcommand{\nomname}{List of Symbols} \usepackage{etoolbox,ragged2e,siunitx} \newcommand{\DescrCol}[1]{\hfill\parbox[t]{12em}{#1}\ignorespaces} \newcommand{\nomsubtitle}[1]{\item[\large\bfseries #1]} \renewcommand\nomgroup[1]{\def\csname nomstart#1\endcsname}\nomtemp{\csname nomstart#1\endcsname}\csname nomstart#1\endcsname}\nomtemp} \newcommand{\nomstartD}{\nomsubtitle{}% \item[\bfseries Symbol]\textbf{Definition}} \newcommand{\nomDef}[1]{\parbox[t]{10cm}{\RaggedRight #1}} \newcommand{\nomwithdim}[5]{\nomenclature[#1]{#2}% {\nomDef{#3}\DimensUnits{#4}{#5}}} \newcommand{\nomtypeD}[3][]{\nomenclature[D#1]{#2}{\nomDef{#3}}} \makenomenclature \RequirePackage{ifthen} \setcounter{tocdepth}{3} \usepackage{longtable} \usepackage{url} \usepackage{floatrow} \newcommand{\mathbf{1}}{\mathbf{1}} \newcommand{\mathbf}{\mathbf} \newcommand\myeq[1]{\mathrel{\overset{\makebox[0pt]{#1}}{=}}} \newcommand\mylt[1]{\mathrel{\overset{\makebox[0pt]{#1}}{\le}}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newtheorem{defi}{Definition} \newtheorem{assumption}{Assumption} \newtheorem{cor}{Corollary} \begin{document} \include{symbols} \mainmatter \title{PageRank in Undirected Random Graphs} \titlerunning{PageRank in Undirected Random Graphs} \author{K. Avrachenkov$^1$, A. Kadavankandy$^1$\thanks{Primary author, {\tt arun.kadavankandy@inria.fr}}, \\ L. Ostroumova Prokhorenkova$^{2,3}$ and A. Raigorodskii$^{2,3}$} \authorrunning{K. Avrachenkov et al.} \institute{Inria Sophia Antipolis, France \and Yandex, Russia \and Moscow Institute of Physics and Technology, Russia} \maketitle \begin{abstract} PageRank has numerous applications in information retrieval, reputation systems, machine learning, and graph partitioning. In this paper, we study PageRank in undirected random graphs with an expansion property. The Chung-Lu random graph is an example of such a graph. We show that in the limit, as the size of the graph goes to infinity, PageRank can be approximated by a mixture of the restart distribution and the vertex degree distribution. We also extend the result to Stochastic Block Model (SBM) graphs, where we show that there is a correction term that depends on the community partitioning. \keywords{PageRank, undirected random graphs, expander graphs, Chung-Lu random graphs, Stochastic Block Model} \end{abstract} \vspace{-0.5cm} \section{Introduction} \vspace{-0.1cm} PageRank has numerous applications in information retrieval \cite{H02,PBMW97,Yetal09}, reputation systems \cite{Getal13,KSG03}, machine learning \cite{Aetal08,AGMS12}, and graph partitioning \cite{ACL06,C09}. A large complex network can often be conveniently modeled by a random graph. It is surprising that not many analytic studies are available for PageRank in random graph models. We mention the work \cite{AL06} where PageRank was analysed in preferential attachment models and the more recent works \cite{CLO14a,CLO14b}, where PageRank was analysed in directed configuration models. According to several studies \cite{Detal02,Fetal08,LSV07,VL10}, PageRank and in-degree are strongly correlated in directed networks such as the Web graph. Apart from some empirical studies~\cite{B13,Perra}, to the best of our knowledge, there is no rigorous analysis of PageRank on basic undirected random graph models such as the Erd\H{o}s-R\'enyi graph \cite{ER59} or the Chung-Lu graph \cite{CL02}. 
In this paper, we attempt to fill this gap and show that under certain conditions on the preference vector and the spectrum of the graphs, PageRank in these models can be approximated by a mixture of the preference vector and the vertex degree distribution when the size of the graph goes to infinity. First, we show the convergence in total variation norm for a general family of random graphs with an expansion property. Then, we specialize the results for the Chung-Lu random graph model, proving element-wise convergence. We also analyse the asymptotics of PageRank on Stochastic Block Model (SBM) graphs, which are random graph models used to benchmark community detection algorithms \cite{holland1983}. In these graphs, the asymptotic expression for PageRank contains an additional correction term that depends on the community partitioning. This demonstrates that PageRank captures properties of the graph not visible in the stationary distribution of a simple random walk. We conclude the paper with numerical experiments and several future research directions. \section{Definitions} Let $G^{(n)}=(V^{(n)},E^{(n)})$ denote a family of random graphs, where $V^{(n)}$ is a vertex set, $|V^{(n)}|=n$, and $E^{(n)}$ is an edge set, $|E^{(n)}|=m$. Matrices and vectors related to the graph are denoted by bold letters, while their components are denoted by non-bold letters. We denote by $\mathbf A^{(n)}$ the associated adjacency matrix with elements \[ A^{(n)}_{ij} = \left\{ \begin{array}{ll} 1, & \mbox{if} \ i \ \mbox{and} \ j \ \mbox{are connected},\\ 0, & \mbox{otherwise}. \end{array}\right. \] In the interest of compactness of notation, the superscript $n$ is dropped when it is not likely to cause confusion. In this work, since we analyze PageRank on undirected graphs, we have $\mathbf A^T=\mathbf A$. The personalized PageRank vector is denoted by $\boldsymbol{\pi}.$ We consider unweighted graphs; however, our analysis easily extends to some families of weighted undirected graphs. Let $\mathbf{1}$ be a column vector of $n$ ones and let $\mathbf d=\mathbf A\mathbf{1}$ be the vector of degrees. It is helpful to define $\mathbf D=\mbox{diag}(\mathbf d)$, a diagonal matrix with the degree sequence on its diagonal. Let $\mathbf P=\mathbf A \mathbf D^{-1}$ be the column-stochastic Markov transition matrix corresponding to the standard random walk on the graph and let $\mathbf Q=\mathbf D^{-1/2}\mathbf A \mathbf D^{-1/2}$ be the symmetrized transition matrix, whose eigenvalues are the same as those of $\mathbf P.$ Note that the symmetrized transition matrix is closely related to the normalized Laplacian ${\boldsymbol{\cal L}}=\mathbf I-\mathbf D^{-1/2}\mathbf A\mathbf D^{-1/2}=\mathbf I-\mathbf Q$ \cite{C97}, where $\mathbf I$ is the identity matrix. Further, we will also use the resolvent matrix $\mathbf R=[\mathbf I-\alpha \mathbf P]^{-1}$ and the symmetrized resolvent matrix $\mathbf S=[\mathbf I-\alpha \mathbf Q]^{-1}$. Note that since $\mathbf Q$ is a symmetric matrix, its eigenvalues $\lambda_i,$ $i=1,...,n,$ are real and can be arranged in decreasing order, i.e., $\lambda_1 \ge \lambda_2 \ge \ldots$ In particular, we have $\lambda_1=1$. The value $\delta = 1-\max\{|\lambda_2|,|\lambda_n|\}$ is called the spectral gap. In what follows, let $K,C$ be arbitrary constants independent of the graph size $n,$ which may change from one line to the next (of course, not causing any inconsistencies).
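For concreteness, all of the quantities above are easy to compute for a given graph. The following Python/NumPy sketch (our own notation, for illustration only) builds $\mathbf d$, $\mathbf P$, $\mathbf Q$, the spectral gap $\delta$, and the personalized PageRank vector defined below from the adjacency matrix of a connected undirected graph.
\begin{verbatim}
import numpy as np

def graph_quantities(A, v, alpha=0.85):
    """A: symmetric 0/1 adjacency matrix of a connected graph;
       v: preference vector; alpha: damping factor (both defined below)."""
    d = A.sum(axis=1)                               # degree vector d = A 1
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    P = A / d                                       # A D^{-1}: column j divided by d_j
    Q = D_inv_sqrt @ A @ D_inv_sqrt                 # symmetrized transition matrix
    lam = np.sort(np.linalg.eigvalsh(Q))            # real eigenvalues, ascending
    delta = 1.0 - max(abs(lam[-2]), abs(lam[0]))    # 1 - max(|lambda_2|, |lambda_n|)
    pi = (1 - alpha) * np.linalg.solve(np.eye(len(d)) - alpha * P, v)  # resolvent formula
    return d, P, Q, delta, pi
\end{verbatim}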
For two functions $f(n),g(n),$ $g(n) = O(f(n))$ if $\exists C,N$ such that $\left| \frac{g(n)}{f(n)} \right| \leq C,$ $\forall n>N$ and $g(n) = {o}(f(n))$ if $\limsup_{n \to \infty} \left| \frac{g(n)}{f(n)} \right| = 0.$ Also $f(n) = \mathrm\omega(g(n))$ or $f(n) \gg g(n) $ if $g(n) = {o}(f(n)).$ We use $\mathbb P, \mathbb E$ to denote probability and expectation respectively. An event $E$ is said to hold with high probability (w.h.p.) if $\exists N$ such that (s.t.) $\mathbb{P}(E) \geq 1 - {O}(n^{-c})$ for some $c > 0,$ $\forall n>N.$ Recall that if a finite number of events hold true w.h.p., then so does their intersection. Furthermore, we say that a sequence of random variables $X_n = {o}(1)$ w.h.p. if there exists a function $\psi(n) = {o}(1)$ such that the event $\{X_n \leq \psi(n)\}$ holds w.h.p. In the first part of the paper, we study the asymptotics of PageRank for a family of random graphs with the following two properties: \begin{comment} \noindent {\bf Property I}: w.h.p., $d^{(n)}_{max}/d^{(n)}_{min} \le K,$ where $d^{(n)}_{max}$ and $d^{(n)}_{min}$ are the maximum and minimum degrees, respectively. \bigskip \noindent {\bf Property II}: w.h.p., $\max\{|\lambda^{(n)}_2|,|\lambda^{(n)}_n|\} = o(1).$ \end{comment} \smallskip \begin{property}\label{prop:bounded_degrees} For some $K$ w.h.p., $d^{(n)}_{max}/d^{(n)}_{min} \le K,$ where $d^{(n)}_{max}$ and $d^{(n)}_{min}$ are the maximum and minimum degrees, respectively. \end{property} \begin{property}\label{prop:fast_mixing} W.h.p., $\max\{|\lambda^{(n)}_2|,|\lambda^{(n)}_n|\} = o(1).$ \end{property} The above two properties can be regarded as a variation of the expansion property. In the standard case of an expander family, one requires the graphs to be regular and the spectral gap $\delta = 1-\max\{|\lambda_2|,|\lambda_n|\}$ to be bounded away from zero (see, e.g., \cite{V12}). Property 1 is a relaxation of the regularity condition, whereas Property 2 is stronger than the requirement for the spectral gap to be bounded away from zero. These two properties allow us to consider several standard families of random graphs such as ER graphs, regular random graphs with increasing average degrees, and Chung-Lu graphs. For Chung-Lu graphs Property 1 imposes some restriction on the degree spread of the graph. \vspace{5pt} \noindent \textit{Remark:} Property 2 implies that the graph is connected w.h.p., since the spectral gap is strictly greater than zero. \vspace{5pt} Later, we study the asymptotics of PageRank for specific classes of random graphs namely the Chung-Lu graphs, and the Stochastic Block Model. Recall that the Personalized PageRank vector with preference vector $\mathbf v$ is defined as the stationary distribution of a modified Markov chain with transition matrix \begin{equation}\label{eq:tildeP} \widetilde{\mathbf P} = \alpha \mathbf P + (1-\alpha) \mathbf v \mathbf{1}^T, \end{equation} where $\alpha$ is the so-called damping factor \cite{H02}. 
In other words, $\boldsymbol{\pi}$ satisfies \begin{equation} \boldsymbol{\pi} = \widetilde{\mathbf P} \boldsymbol{\pi} \label{eq:PRstat}, \\ \end{equation} or, \begin{equation} \boldsymbol{\pi} = (1-\alpha) [\mathbf I-\alpha \mathbf P]^{-1} \mathbf v = (1-\alpha) \mathbf R \mathbf v \label{eq:PPRexpl}, \end{equation} where (\ref{eq:PPRexpl}) holds when $\alpha < 1.$ \vspace{-0.2cm} \section{Convergence in total variation}\label{sec:tvconv} \vspace{-0.1cm} We recall that for two discrete probability distributions $u$ and $v$, the total variation distance $d_{\text{TV}}(u,v)$ is defined as $d_{\text{TV}}(u,v) = \frac{1}{2}\sum_i |u_i - v_i|.$ This can also be thought of as the $L^1$-norm distance measure in the space of probability vectors, wherein for $\mathbf x \in \mathbb{R}^{n},$ the $L^1$-norm is defined as $\norm{\mathbf x}_1 = \sum_i |x_i|.$ Since for any probability vector $\boldsymbol{\pi}, \text{ } \norm{\boldsymbol{\pi}}_1 = 1$ $\forall n,$ it makes sense to talk about convergence in 1-norm or TV-distance. Also recall that for a vector $\mathbf x \in \mathbb{R}^n,$ $\norm{\mathbf x}_2 = \sqrt{\sum_i |x_i|^2}$ is the $L^2$-norm. Now we are in a position to formulate our first result. \begin{theorem}\label{th:1norm} \label{total variation convergence} Let a family of graphs $G^{(n)}$ satisfy Properties 1 and 2. If, in addition, $\norm{\mathbf v}_2 = O(1/\sqrt{n})$, PageRank can be asymptotically approximated in total variation norm by a mixture of the restart distribution $\mathbf v$ and the vertex degree distribution. Namely, w.h.p., $$ d_{TV}(\boldsymbol{\pi}^{(n)},\overline{\boldsymbol{\pi}}^{(n)}) = o(1) \,\, \text{ as } n \to \infty, $$ where \begin{equation} \label{eq:asymexpr} \overline{\boldsymbol{\pi}}^{(n)}= \frac{\alpha \mathbf d^{(n)}}{\textnormal{vol}(G^{(n)})} +(1-\alpha) \mathbf v, \end{equation} with $\textnormal{vol}(G^{(n)}) = \sum_i d_i^{(n)}$. \end{theorem} \vspace{2pt} \textit{Observations:} \vspace{-2pt} \begin{enumerate} \item This result says that PageRank vector asymptotically behaves like a convex combination of the preference vector and the stationary vector of a standard random walk with transition matrix $\mathbf P;$ with the weight being $\alpha,$ and that it starts to resemble the random walk stationary vector as $\alpha$ gets close to $1.$ \item One of the possible intuitive explanations of the result of Theorem \ref{th:1norm} is based on the observation that when Properties 1 \& 2 hold, as $n \to \infty,$ the random walk mixes approximately in one step and so for any probability vector $\mathbf x$ $\mathbf{P} \mathbf x$ is roughly equal to $\mathbf d/\mbox{vol}(G),$ the stationary distribution of the simple random walk. 
The proposed asymptotic approximation for PageRank can then be seen to follow from the series representation of PageRank if we replace $\mathbf P \mathbf v$ by $\mathbf d/\mbox{vol}(G).$ Note that since $\mathbf d/\textnormal{vol}(G)$ is the stationary vector of the simple random walk, if $\mathbf P \mathbf v = \mathbf d/\textnormal{vol}(G),$ it also holds that $\mathbf P^k \mathbf v = \mathbf d/\textnormal{vol}(G), \forall k \ge 2.$ Making these substitutions in the series representation of PageRank, namely \begin{equation} \label{eq:forpi} \boldsymbol{\pi} = (1 - \alpha) \left( \mathbf I + \alpha \mathbf P + \alpha^2 \mathbf P^2 + \ldots \right) \mathbf v, \end{equation} we obtain \begin{align*} \boldsymbol{\pi} &= (1-\alpha) \mathbf v + (1-\alpha)\alpha (1 + \alpha + \alpha^2 + \ldots)\frac{\mathbf d}{\textnormal{vol}(G)}\\ &= (1-\alpha) \mathbf v + \alpha \frac{\mathbf d}{\textnormal{vol}(G)}. \end{align*} \item The condition on the 2-norm of the preference vector $\mathbf v$ can be viewed as a constraint on its allowed localization. \end{enumerate} \textbf{Proof of Theorem \ref{th:1norm}:} First observe from (\ref{eq:tildeP}) that when $\alpha = 0,$ we have $\widetilde{\mathbf{P}} = \mathbf v \mathbf{1}^T,$ hence from (\ref{eq:PRstat}) we obtain $\boldsymbol{\pi} = \mathbf v,$ since $\mathbf{1}^T \boldsymbol{\pi} = 1.$ Similarly for the case $\alpha = 1,$ $\widetilde{\mathbf{P}} = \mathbf P$ and so $\boldsymbol{\pi}$ in this case is just the stationary distribution of the original random walk, which is well-defined and equals $\frac{\mathbf d}{\textnormal{vol}(G)}$ since by Property 2 the graph is connected. Examining (\ref{eq:asymexpr}) for these two cases we can see that the statement of the theorem holds trivially for both $\alpha = 0$ and $\alpha = 1.$ In what follows, we consider the case $0 < \alpha < 1.$ We first note that the matrix $\mathbf Q = \mathbf D^{-1/2}\mathbf A \mathbf D^{-1/2}$ can be written as follows by Spectral Decomposition Theorem \cite{BhatiaSpr}: \begin{equation} \label{eq:Qeig} \mathbf Q = \mathbf u_1 \mathbf u_1^T + \sum_{i=2}^n \lambda_i \mathbf u_i \mathbf u_i^T, \end{equation} where $1 = \lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n$ are the eigenvalues and $\{\mathbf u_1,\mathbf u_2, \ldots \mathbf u_n \}$ with $\mathbf u_i \in \mathbf{R}^n$ and $\norm{\mathbf u_i}_2 = 1$ are the corresponding orthogonal eigenvectors of $\mathbf Q.$ Recall that $\mathbf u_1=\mathbf D^{1/2}\mathbf{1}/\sqrt{\mathbf{1}^T \mathbf D\mathbf{1}}$ is the Perron--Frobenius eigenvector. Next, we rewrite (\ref{eq:PPRexpl}) in terms of the matrix $\mathbf Q$ as follows \begin{equation}\label{eq:expprank} \boldsymbol{\pi} = (1-\alpha) \mathbf D^{1/2} [\mathbf I-\alpha \mathbf Q]^{-1} \mathbf D^{-1/2} \mathbf v. \end{equation} Substituting (\ref{eq:Qeig}) into (\ref{eq:expprank}), we obtain \begin{align*} \boldsymbol{\pi} & = (1-\alpha) \mathbf D^{1/2} \left(\frac{1}{1-\alpha}\mathbf u_1 \mathbf u_1^T + \sum_{i=2}^n \frac{1}{1-\alpha\lambda_i} \mathbf u_i \mathbf u_i^T \right) \mathbf D^{-1/2} \mathbf v \\ & = \mathbf D^{1/2}\mathbf u_1 \mathbf u_1^T \mathbf D^{-1/2}\mathbf v + (1 - \alpha) \mathbf D^{1/2} \left( \sum_{i\neq 1} \frac{1}{1 - \alpha \lambda_i} \mathbf u_i \mathbf u_i^T \right)\mathbf D^{-1/2} \mathbf v. \\ \end{align*} Let us denote the error vector by $\boldsymbol{\epsilon} = \boldsymbol{\pi} - \overline{\boldsymbol{\pi}}$. 
Note that since $\mathbf u_1 = \frac{\mathbf D^{1/2} \mathbf{1}}{\sqrt{\textnormal{vol}(G)}},$ we can write $\overline{\boldsymbol{\pi}}$ as \begin{align*} \overline{\boldsymbol{\pi}} &= \alpha \frac{\mathbf d}{\textnormal{vol}(G)} + (1-\alpha) \mathbf v \\ &\myeq{(a)} \alpha \frac{\mathbf D \mathbf{1} \mathbf{1}^T \mathbf v}{\textnormal{vol}(G)} + (1-\alpha) \mathbf D^{1/2}\mathbf D^{-1/2} \mathbf v\\ &= \alpha \mathbf D^{1/2} \frac{\mathbf D^{1/2} \mathbf{1}}{\sqrt{\textnormal{vol}(G)}} \frac{\mathbf{1}^T\mathbf D^{1/2}}{\sqrt{\textnormal{vol}(G)}} \mathbf D^{-1/2} \mathbf v + (1-\alpha) \mathbf D^{1/2}\mathbf D^{-1/2} \mathbf v\\ &= \alpha \mathbf D^{1/2}\mathbf u_1 \mathbf u_1^T \mathbf D^{-1/2}\mathbf v + (1-\alpha) \mathbf D^{1/2}\mathbf D^{-1/2}\mathbf v, \end{align*} where in (a) above we used the fact that $\mathbf{1}^T \mathbf v = 1,$ since $\mathbf v$ is a probability vector. Then, we can write $\boldsymbol{\epsilon}$ as \begin{align} \boldsymbol{\epsilon} &= \boldsymbol{\pi} - \alpha \mathbf D^{1/2}\mathbf u_1 \mathbf u_1^T \mathbf D^{-1/2} \mathbf v - (1 - \alpha) \mathbf D^{1/2} \mathbf I \mathbf D^{-1/2}\mathbf v \nonumber \\ &= (1 - \alpha) \mathbf D^{1/2} \left ( \sum_{i \neq 1} \frac{\mathbf u_i \mathbf u_i^T}{1 - \alpha \lambda_i} - (\mathbf I - \mathbf u_1 \mathbf u_1^T)\right ) \mathbf D^{-1/2} \mathbf v \nonumber \\ &= (1 - \alpha) \mathbf D^{1/2} \left ( \sum_{i \neq 1} \mathbf u_i \mathbf u_i^T \frac{\alpha \lambda_i}{1 - \alpha \lambda_i} \right) \mathbf D^{-1/2} \mathbf v. \label{eq:epsilonerr} \end{align} Now let us bound the $L^1$-norm $\norm{\boldsymbol{\epsilon}}_1$ of the error: \begin{align} \norm{\boldsymbol{\epsilon} }_1 /(1 - \alpha) & \mylt{(a)} \sqrt{n} \|\boldsymbol{\epsilon}\|_2/(1-\alpha) \nonumber \\ & \mylt{(b)} \sqrt{n}\|\mathbf D^{1/2}\|_2 \left\| \sum_{i \neq 1} \mathbf u_i \mathbf u_i^T \frac{\alpha \lambda_i}{1 - \alpha \lambda_i} \right\|_2\|\mathbf D^{-1/2}\|_2 \|\mathbf v\|_2 \nonumber \\ &\mylt{(c)} \sqrt{d_{max}/d_{min}} \sqrt{n} \max_{i > 1}\left| \frac{\alpha \lambda_i}{1 - \alpha \lambda_i}\right| \norm{\mathbf v}_2 \nonumber \\ & \le C \sqrt{d_{max}/d_{min}} \max(|\lambda_2|, |\lambda_{n}|) \label{normbound} \end{align} where in (a) we used the fact that for any vector $\mathbf{x} \in \mathbb R^n,$ $\|\mathbf x\|_1 \le \sqrt{n} \|\mathbf x\|_2$ by Cauchy-Schwartz inequality. In (b) we used the submultiplicative property of matrix norms, i.e., $\norm{\mathbf A \mathbf B}_2 \leq \norm{\mathbf A}_2 \norm{\mathbf B}_2$. We obtain (c) by noting that the norm of a diagonal matrix is the leading diagonal value and the fact that for a symmetric matrix the 2-norm is the largest eigenvalue in magnitude. The last inequality is obtained by noting that the assumption $\lambda_i = o(1) $ w.h.p. $\forall i>1$ implies that $\exists N$ s.t. $\forall n > N,$ $|1-\alpha\lambda_i| > C$ for some constant C and the fact that $\norm{\mathbf v}_2 = O(1/\sqrt{n}).$ \noindent Observing that $d_{max}/d_{min}$ is bounded w.h.p. by Property~1 and $\max(|\lambda_2|, |\lambda_{n}|) = o(1)$ w.h.p. by Property~2 we obtain our result. \qed \smallskip Note that in the case of standard PageRank, $v_i=1/n, 1\le i\le n,$ and hence $\norm{\mathbf v}_2 = O(1/\sqrt{n}),$ but Theorem~\ref{th:1norm} also admits more general preference vectors than the uniform one. 
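Theorem \ref{th:1norm} is easy to probe numerically. As a quick illustration (a sketch under our own parameter choices, not part of the argument), the following Python/NumPy snippet samples Erd\H{o}s--R\'enyi graphs of increasing size, computes the exact PageRank via \eqref{eq:PPRexpl} with the uniform preference vector, and reports the total variation distance to the approximation \eqref{eq:asymexpr}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.85
for n in [200, 800, 3200]:
    p = 5 * np.log(n) ** 2 / n                  # dense enough for Properties 1 and 2
    A = (rng.random((n, n)) < p).astype(float)
    A = np.triu(A, 1); A = A + A.T              # symmetric, no self-loops
    d = A.sum(axis=1)
    P = A / d                                   # A D^{-1}
    v = np.full(n, 1.0 / n)                     # uniform preference vector
    pi = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * P, v)
    pi_bar = alpha * d / d.sum() + (1 - alpha) * v
    print(n, 0.5 * np.abs(pi - pi_bar).sum())   # d_TV(pi, pi_bar), shrinking with n
\end{verbatim}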
\begin{cor} The statement of Theorem~\ref{th:1norm} also holds with respect to the weak convergence, i.e., for any function $f$ on $V$ such that $\max_{x \in V} |f(x)| \leq 1,$ $$ \sup \left \{ \sum_v f(v) \pi_v - \sum_v f(v) \overline{\pi}_v \right \} = o(1) \quad \mbox{w.h.p.} $$ \end{cor} \textbf{Proof:} This follows from Theorem~\ref{th:1norm} and the fact that the left-hand side of the above equation is upper bounded by $2\, d_{\text{TV}}(\boldsymbol{\pi}_n , \overline{\boldsymbol{\pi}}_n)$ \cite{AsherPeres2009}. \qed \vspace{-0.2cm} \section{Chung-Lu random graphs} \vspace{-0.1cm} In this section, we study the PageRank for the Chung-Lu model~\cite{CL02} of random graphs. These results naturally hold for ER graphs also. The spectral properties of Chung-Lu graphs have been studied extensively in a series of papers by Fan Chung et al \cite{CLV03,Chung2011}. \subsection{Chung-Lu Random Graph Model} Let us first provide a definition of the Chung-Lu random graph model. \begin{defi}{\bf Chung-Lu Random Graph Model} A Chung-Lu graph $\mathcal{G}(w)$ with an expected degree vector $\mathbf w = (w_1,w_2, \ldots w_n)$, where $w_i$ are positive real numbers, is generated by drawing an edge between any two vertices $v_i$ and $v_j$ independently of all other pairs, with probability $p_{ij} = \frac{w_i w_j}{\sum_k w_k}.$ To ensure that the probabilities $p_{ij}$ are well-defined, we need $\max_i w_i^2 \leq \sum_k w_k$. \end{defi} In the following, let $w_{\max} = \max_i w_i$ and $w_{\min} = \min_i w_i.$ Below we specify a corollary of Theorem \ref{total variation convergence} as applied to these graphs. But before that we need the following lemmas about Chung-Lu graphs mainly taken from \cite{CLV03,Chung2011}. \begin{lemma} \label{degree_concentration} If the expected degrees $w_1,w_2,\ldots w_n$ satisfy $w_{\min} \gg \log(n),$ then in $\mathcal{G}(w)$ we have, w.h.p., $\max_i |\frac{d_i}{w_i} - 1| = o(1)$. \end{lemma} In the proof we use Bernstein Concentration Lemma~\cite{Bilsley}: \begin{lemma}\label{lemma:berny}(Bernstein Concentration Lemma~\cite{Bilsley}) If $Y_n = X_1 + X_2 + \ldots X_n,$ where $X_i$ are independent random variables such that $|X_i|\le b$ and if $B_n^2 = \mathbb{E}(Y_n - \mathbb{E}(Y_n))^2 $ then \[ \mathbb{P}\{|Y_n - \mathbb{E}(Y_n)| \geq \epsilon\} \leq 2 \exp \frac{-\epsilon^2}{2(B_n^2 + b\epsilon/3 )}, \] for any $\epsilon > 0.$ \end{lemma} \textbf{Proof of Lemma \ref{degree_concentration}:} This result is shown in the sense of convergence in probability in the proof of \cite[Theorem~2]{Chung2011}; using Lemma \ref{lemma:berny} we show the result holds w.h.p. By a straight forward application of Lemma \ref{lemma:berny} to the degrees $d_i$ of the Chung-Lu graph we obtain $$ \mathbb{P}\left(\max_{1 \leq i \leq n} \left|\frac{d_i}{w_i} - 1 \right| \geq \beta \right) \leq \frac{2}{n^{c/4 - 1}}, \quad \mbox{if} \quad \beta \geq \sqrt{\frac{c\log(n)}{w_{\min}}}=o(1) $$ if $w_{\min} \gg \log(n)$.\qed We present below a perturbation result for the eigenvalues of Hermitian matrices, called Weyl's inequalities, which we will need for our proofs. \begin{lemma} \cite[Theorem ~4.3.1]{hornjohn12} \label{lem:weyl} Let $\mathbf A,\mathbf B \in \mathbb R^{n\times n}$ be Hermitian and let the eigenvalues $\lambda_i(\mathbf A),$ $\lambda_i(\mathbf B)$ and $\lambda_i(\mathbf A+\mathbf B)$ be arranged in decreasing order. 
For each $k = 1,2, \ldots n$ we have \[ |\lambda_k(\mathbf A+\mathbf B) - \lambda_k(\mathbf A)| \le \|\mathbf B\|_2, \] where $\|\mathbf B\|_2$ is the induced 2-norm or the spectral norm of $\mathbf B.$ \end{lemma} The following lemma is an application of Theorem 5 in \cite{CLV03}. \begin{lemma} \label{twonormchunglu} If $w_{\max} \leq K w_{\min},$ for some $K>0$ and $\overline{w} = \sum_k w_k/n \gg \log^6(n)$, then for $\mathcal{G}(w)$ we have almost surely (a.s.) $$\norm{\mathbf C}_2 = \frac{2}{\sqrt{\overline{w}}}(1 + o(1)),$$ where $ \mathbf C = \mathbf W^{-1/2}\mathbf A \mathbf W^{-1/2} - \boldsymbol{\chi}^{T} \boldsymbol{\chi}$, $\mathbf W = \mbox{diag}(\mathbf{w}),$ and $\mathbf \chi_i = \sqrt{w_i/\sum_k w_k}$ is a row vector. \end{lemma} \textbf{Proof:} It can be verified that when $w_{\max} \leq K w_{\min}$ and $\overline{w} \gg \log^6(n),$ the condition in \cite[Theorem~5]{CLV03}, namely, $w_{\min} \gg \sqrt{\overline{w}} \log^3(n),$ is satisfied and hence the result follows.\qed \begin{lemma} \label{second eigen chung} For $\mathcal{G}(w)$ with $w_{\max}\leq K w_{\min},$ and $\overline{w}\gg \log^6(n),$ $$\max(\lambda_2(\mathbf P),-\lambda_{n}(\mathbf P)) = o(1) \quad \mbox{w.h.p.},$$ where $\mathbf P$ is Markov matrix. \end{lemma} {\bf Proof:} Recall that $\mathbf Q = \mathbf D^{-1/2} \mathbf A \mathbf D^{-1/2}$ is the normalized adjacency matrix. We want to be able to bound the eigenvalues $\lambda_i, i \ge 2$ of $\mathbf Q.$ We do this in two steps. Using Lemmas \ref{degree_concentration} and \ref{lem:weyl} we first show that if we replace the degree matrix $\mathbf D$ in the expression for $\mathbf Q$ by the expected degree matrix $\mathbf W = \mathbb{E}(\mathbf D),$ the eigenvalues of the resulting matrix are close to those of $\mathbf Q.$ Then, using Lemma \ref{twonormchunglu} we show that the eigenvalues of $\mathbf W^{-1/2}\mathbf A \mathbf W^{-1/2}$ roughly coincide with those of $\boldsymbol{\chi}^{T} \boldsymbol{\chi},$ which is a unit rank matrix and hence only has a single non-zero eigenvalue. Thus we arrive at the result of Lemma \ref{second eigen chung}. Now we give the detailed proof. The first step, $\|\mathbf Q - \mathbf W^{-1/2}\mathbf A \mathbf W^{-1/2}\|_2 = o(1)$ w.h.p. follows from Lemma \ref{degree_concentration} and the same argument as in the last part of the proof of Theorem 2 in \cite{Chung2011}. We present the steps in the derivation here for the sake of completeness. Since the 2-norm of a diagonal matrix is the maximum diagonal in absolute value, we have \begin{equation}\label{eq:normerr} \|\mathbf W^{-1/2}\mathbf D^{1/2} - \mathbf I \|_2 = \max_{\{i = 1,2,\ldots \}} \left|\sqrt{\frac{d_i}{w_i}} - 1\right| \le \max_{\{i = 1,2,\ldots \}} \left|{\frac{d_i}{w_i}} - 1 \right| = o(1), \end{equation} by Lemma \ref{degree_concentration}. Also observe that \begin{equation} \label{eqnormq} \|\mathbf Q\|_2 = \max_{\{i=1,2,\ldots n \}} |\lambda_i(\mathbf Q)| = \max_{\{i=1,2,\ldots n \}} |\lambda_i(\mathbf P)| = 1. 
\end{equation} We now proceed to bound the norm of the difference $\|\mathbf Q - \mathbf W^{-1/2}\mathbf A \mathbf W^{-1/2}\|_2$ as follows: \begin{eqnarray} \lefteqn{\|\mathbf Q - \mathbf W^{-1/2}\mathbf A \mathbf W^{-1/2}\|_2} \nonumber\\ &=& \| \mathbf Q - \mathbf W^{-1/2} \mathbf D^{1/2} \mathbf D^{-1/2}\mathbf A \mathbf D^{-1/2} \mathbf D^{1/2} \mathbf W^{-1/2}\|_2 \nonumber\\ &=& \|\mathbf Q - \mathbf W^{-1/2} \mathbf D^{1/2} \mathbf Q \mathbf D^{1/2} \mathbf W^{-1/2} \|_2 \nonumber\\ &=& \|\mathbf Q - \mathbf W^{-1/2} \mathbf D^{1/2} \mathbf Q + \mathbf W^{-1/2} \mathbf D^{1/2} \mathbf Q - \mathbf W^{-1/2} \mathbf D^{1/2} \mathbf Q \mathbf D^{1/2} \mathbf W^{-1/2}\|_2 \nonumber \\ &\mylt{(a)}& \|(\mathbf I - \mathbf W^{-1/2} \mathbf D^{1/2}) \mathbf Q\|_2 + \|\mathbf W^{-1/2} \mathbf D^{1/2} \mathbf Q(\mathbf I - \mathbf D^{1/2} \mathbf W^{-1/2})\|_2 \nonumber \\ &\mylt{(b)}& \|(\mathbf I - \mathbf W^{-1/2} \mathbf D^{1/2})\|_2 \|\mathbf Q\|_2 + \|\mathbf W^{-1/2} \mathbf D^{1/2}\|_2 \|\mathbf Q\|_2 \|\mathbf I - \mathbf D^{1/2} \mathbf W^{-1/2}\|_2 \nonumber \\ &\mylt{(c)}& o(1) + (1 + o(1))o(1) = o(1) \quad \mbox{w.h.p.}, \label{eq:boundonq} \end{eqnarray} where (a) follows from the triangle inequality for norms, in (b) we used the submultiplicativity of matrix norms, and (c) follows from (\ref{eq:normerr}), (\ref{eqnormq}) and the fact that $\|\mathbf W^{-1/2} \mathbf D^{1/2}\|_2 \le \|\mathbf I \|_2 + \|\mathbf W^{-1/2} \mathbf D^{1/2} - \mathbf I \|_2 = (1 + o(1)).$ By Lemma \ref{lem:weyl} we have, for any $i,$ \begin{equation}\label{eq:fpart} |\lambda_i(\mathbf Q) - \lambda_i(\mathbf W^{-1/2} \mathbf A \mathbf W^{-1/2})| \le \|\mathbf Q - \mathbf W^{-1/2} \mathbf A \mathbf W^{-1/2}\|_2 = o(1), \end{equation} by (\ref{eq:boundonq}). Furthermore, using Lemma \ref{lem:weyl} and the fact that $\lambda_i(\boldsymbol{\chi}^T \boldsymbol{\chi}) = 0$ for $i>1,$ we have, for $i \ge 2,$ \begin{eqnarray} \lefteqn{|\lambda_i(\mathbf W^{-1/2} \mathbf A \mathbf W^{-1/2})|}\nonumber \\ &=& |\lambda_i(\mathbf W^{-1/2} \mathbf A \mathbf W^{-1/2}) - \lambda_i(\boldsymbol{\chi}^T \boldsymbol{\chi})| \le \|\mathbf W^{-1/2} \mathbf A \mathbf W^{-1/2} - \boldsymbol{\chi}^T \boldsymbol{\chi}\|_2 \nonumber \\ &=& o(1), \label{eq:diffbnd} \end{eqnarray} where the last inequality follows from Lemma \ref{twonormchunglu}.\\ Now recall that $\max(\lambda_2(\mathbf P),-\lambda_{n}(\mathbf P)) = \max_{\{i \ge 2\}}|\lambda_i(\mathbf Q)|.$ We have, for any $i,$ \begin{align} |\lambda_i(\mathbf Q)| \le |\lambda_i(\mathbf Q) - \lambda_i(\mathbf W^{-1/2} \mathbf A \mathbf W^{-1/2})| + |\lambda_i(\mathbf W^{-1/2} \mathbf A \mathbf W^{-1/2})|, \end{align} which implies from (\ref{eq:fpart}) and (\ref{eq:diffbnd}): \[ \max_{\{i \ge 2\}} |\lambda_i(\mathbf Q)| = o(1). \] \qed Armed with these lemmas, we now present the following corollary of Theorem \ref{th:1norm} in the case of Chung-Lu graphs. \begin{cor} \label{prChungLuTV} Let $\norm{\mathbf v}_2 = O(1/\sqrt{n})$ and $\alpha \in (0,1).$ Then the PageRank $\boldsymbol{\pi}$ of the Chung-Lu graph $\mathcal{G}(w)$ can asymptotically be approximated in TV distance by $\overline{\boldsymbol{\pi}},$ defined in Theorem \ref{total variation convergence}, if $\overline{w} \gg \log^6(n)$ and $w_{\max} \leq K w_{\min}$ for some $K$ that does not depend on $n.$ \end{cor} \textbf{Proof:} Using Lemma \ref{degree_concentration} and the condition that $w_{\max} \leq K w_{\min},$ one can show that $\exists K^{'}$ s.t. $\frac{d_{max}}{d_{min}} \leq K^{'}$ w.h.p.
Then the result is a direct consequence of Lemma \ref{second eigen chung} and the inequality from~\eqref{normbound}.\qed We further note that this result also holds for ER graphs $\mathcal{G}(n,p_n)$ with $n$ nodes and edge probability $p_n$ such that $n p_n \gg \log^6(n),$ where we have $(w_1,w_2,\ldots w_n) = (np_n, np_n, \ldots np_n).$ \subsection{Element-wise Convergence} In Corollary \ref{prChungLuTV} we proved the convergence of PageRank in TV distance for Chung-Lu random graphs. Note that since each component of PageRank could decay to zero as the graph size grows to infinity, this does not necessarily guarantee convergence in an element-wise sense. In this section, we provide a proof for our convergence conjecture to include the element-wise convergence of the PageRank vector. Here we deviate slightly from the spectral decomposition technique and eigenvalue bounds used hitherto, and instead rely on well-known concentration bounds to bound the error in convergence. Let $\overline{\boldsymbol{\Pi}} = \mbox{diag} \{ \overline{\pi}_1,\overline{\pi}_2, \ldots \overline{\pi}_n \}$ be a diagonal matrix whose diagonal elements are made of the components of the approximated PageRank vector and $\widetilde{\boldsymbol{\delta}} = \overline{\boldsymbol{\Pi}}^{-1}(\boldsymbol{\pi} - \overline{\boldsymbol{\pi}}),$ i.e., $\widetilde{\delta}_i = (\pi _i - \overline{\pi} _i)/\overline{\pi}_i = \epsilon_i/\overline{\pi}_i,$ where $\boldsymbol{\epsilon}$ is the unnormalized error defined in Section \ref{sec:tvconv}. Then using (\ref{eq:epsilonerr}) we obtain \[ \widetilde{\delta}_i = \left((1 - \alpha)v_i + \alpha \frac{d_i}{\text{vol(G)}}\right)^{-1} \left [\mathbf D^{1/2} \sum_{j \neq 1} \frac{\alpha \lambda_j}{1 - \alpha \lambda_j} \mathbf u_j \mathbf u_j^T \mathbf D^{-1/2} \mathbf v \right ]_i. \] Therefore, using $\mathbf v'$ to denote $n \mathbf D^{-1/2} \mathbf v$ we can bound $\norm{\widetilde{\boldsymbol{\delta}}}_{\infty} = \max_i |\widetilde{\delta}_i|$ as follows \begin{align} \norm{\widetilde{\boldsymbol{\delta}}}_{\infty} &\le \frac{1}{\min_i \left((1-\alpha)v_i + \alpha \frac{d_i}{\textnormal{vol}(G)}\right)}\left\|\mathbf D^{1/2} \sum_{j \neq 1} \frac{\alpha \lambda_j}{1 - \alpha \lambda_j} \mathbf u_j \mathbf u_j^T \mathbf D^{-1/2} \mathbf v\right\|_{\infty}\\ &\leq \frac{\sum_i d_i/n}{\alpha d_{\min}} \sqrt{d_{max}} \norm{\sum_{j \neq 1} \frac{\alpha \lambda_j}{1 - \alpha \lambda_j} \mathbf u_j \mathbf u_j^{T} \mathbf v'}_{\infty}\label{infty_normbound}. \end{align} Here $d_{\min}$ denotes $\min_i d_i.$ To obtain (\ref{infty_normbound}) we used the submultiplicativity property of matrix norms, the fact that $\|\mathbf D^{1/2}\|_{\infty} = \sqrt{\max_i d_i} = \sqrt{d_{\max}}$ and the fact that $v_i \ge 0, \forall i \in V.$ Define $\widetilde{\mathbf Q} = \mathbf Q - \mathbf u_1 \mathbf u_1^T,$ the restriction of the matrix $\mathbf Q$ to the orthogonal subspace of $\mathbf u_1.$ \begin{lemma} \label{Bound on Sv} For a Chung-Lu random graph $\mathcal{G}(w)$ with expected degrees $w_1,\ldots w_n$, where $w_{\max} \leq K w_{\min}$ and $w_{\min} \gg \log(n),$ we have w.h.p., \[ \norm{\widetilde{\mathbf Q}\mathbf v'}_{\infty} = o(1/\sqrt{w_{\min}}), \] when $v_i = O(1/n) \ \forall i.$ \end{lemma} This lemma can be proven by a few applications of Lemma \ref{degree_concentration} and Bernstein's concentration inequality. To keep the train of thought intact, please refer to Appendix \ref{ap:lemmaSv} for a detailed proof of this lemma. 
In the next lemma we prove an upper bound on the infinity norm of the matrix $\mathbf S = (\mathbf I - \alpha \mathbf Q)^{-1}.$ \begin{lemma} \label{bound_inv_infty} Under the conditions of Lemma \ref{Bound on Sv}, $\norm{\mathbf S}_{\infty} \leq C $ w.h.p., where $\mathbf C$ is a number independent of $n$ that depends only on $\alpha$ and $K$. \end{lemma} {\bf Proof:} Note that $\mathbf S = (\mathbf I - \alpha \mathbf Q)^{-1} = \mathbf D^{-1/2}(\mathbf I - \alpha \mathbf P)^{-1} \mathbf D^{1/2}.$ Therefore, $\norm{\mathbf S}_{\infty} \leq \sqrt{\frac{d_{max}}{d_{min}}} \norm{(\mathbf I - \alpha \mathbf P)^{-1}}_{\infty}$ and the result follows since $\norm{(\mathbf I - \alpha \mathbf P)^{-1}}_{\infty} \leq \frac{1}{1 - \alpha}$ \cite{LangMeyer} and using Lemma~\ref{degree_concentration}. \qed Now we are in a position to present our main result in this section. \begin{theorem} \label{elementwise} Let $v_i = O(1/n) \,\, \forall i,$ and $\alpha < 1.$ PageRank $\boldsymbol{\pi}$ converges element-wise to $\overline{\boldsymbol{\pi}} = (1 - \alpha) \mathbf v + \alpha \mathbf d/\textnormal{vol}(G),$ in the sense that $\max_i \ (\pi _i - \overline{\pi} _i)/\overline{\pi}_i = o(1)$ w.h.p., on the Chung-Lu graph $\mathcal G(w)$ with expected degrees $\{w_1,w_2,\ldots w_{n} \}$ such that $w_{\min} > \log^c(n)$ for some $c > 1$ and $w_{\max} \leq K w_{\min},$ for some $K,$ a constant independent of $n.$ \end{theorem} \textbf{Proof:} Define $\mathbf Z = \sum_{i \neq 1} \frac{\alpha \lambda_i}{1 - \alpha \lambda_i} \mathbf u_i \mathbf u_i^T. $ We then have: \begin{align} \mathbf Z &= \sum_{i = 1}^{n} \frac{\alpha \lambda_i}{1 - \alpha \lambda_i} \mathbf u_i \mathbf u_i^T - \frac{\alpha}{1 - \alpha} \mathbf u_1 \mathbf u_1^T \nonumber \\ & = (\mathbf I - \alpha \mathbf Q)^{-1} \alpha \mathbf Q - \frac{\alpha}{1 - \alpha} \mathbf u_1 \mathbf u_1^T \nonumber \\ & = \mathbf S \left [ \alpha \mathbf Q - \frac{\alpha}{1 - \alpha} (\mathbf I - \alpha \mathbf Q) \mathbf u_1 \mathbf u_1^T\right ] \nonumber \\ & = \alpha \mathbf S \widetilde{\mathbf Q} \label{eq:z} \end{align} Now from (\ref{infty_normbound}) we have \begin{align*} \norm{\widetilde{\boldsymbol\delta}}_{\infty} &\le C \frac{\sum_i d_i/n}{ d_{\min}} \sqrt{d_{max}} \|\mathbf S \widetilde{\mathbf Q}\mathbf v^{'}\|_{\infty} \\ & \mylt{(a)} C \frac{\sum_i d_i/n}{ d_{\min}} \sqrt{d_{max}}o(1/\sqrt{w_{\min}}) \nonumber \\ & \le C \frac{d_{\max}}{d_{\min}} \sqrt{w_{\max}(1+o(1))} \frac{1}{\sqrt{w_{\min}}}o(1) \label{replacedw} \\ &= C \frac{w_{\max}}{w_{\min}} \sqrt{\frac{w_{\max}}{w_{\min}}} (1+o(1)) o(1) \\ &= C \left(\frac{w_{\max}}{w_{\min}}\right)^{\frac{3}{2}}o(1) \\ & \le C o(1) \quad \mbox{w.h.p.}, \end{align*} where in (a) we used (\ref{eq:z}) and Lemmas \ref{Bound on Sv} and \ref{bound_inv_infty}. The rest of the inequalities are obtained by repeatedly using the fact that $d_{\max} = w_{\max}(1+o(1))$ and $d_{min} = w_{\min}(1+o(1)),$ from Lemma \ref{degree_concentration}. The last step follows from the assumption that $w_{\max}\le Kw_{\min}$ for some constant $K.$\qed \begin{corollary} [ER Graphs]\\ For an ER graph $\mathcal{G}(n,p_n)$ such that $np_n \gg \log(n),$ we have that asymptotically the personalized PageRank $\boldsymbol\pi$ converges pointwise to $\overline{\boldsymbol\pi}$ for $\mathbf v$ such that $v_i = O(1/n).$ \end{corollary} \section{Asymptotic PageRank for the Stochastic Block Model}\label{sec:sbmprank} In this section, we extend the analysis of PageRank to Stochastic Block Models (SBM) with constraints on average degrees. 
The SBM is a random graph model that reflects the community structure prevalent in many online social networks. It was first introduced in \cite{holland1983} and has been analyzed subsequently in several works, specifically in the community detection literature, including \cite{condon2001},\cite{Karrer2011}, \cite{rohe2011}, \cite{ACK15} and several extensions thereof as in \cite{Heimlicher2012} and \cite{zhao2012}, and the references therein. For the sake of simplicity we focus on an SBM graph with two communities, but the idea of the proof extends easily to generalizations of this simple model. \begin{definition} \label{def:sbm} [Stochastic Block Model (SBM) with two communities]: An SBM graph $\mathcal{G}(m,n-m,p,q)$ with two communities is an undirected graph on a set of disjoint vertices $C_1, C_2$ such that $ C_1 \cup C_2 = V,$ and let $|C_1| = m$ and $|C_2| = n-m$. Furthermore, if two vertices $i,j \in C_k, k = 1,2$, then $\mathbb{P}( (i,j) \in E ) = p$, if $i \in C_1$ and $j \in C_{2}$, then $\mathbb{P}( (i,j) \in E ) = q.$ The probabilities $p,q$ may scale with $n$ and we assume that $m > n/2$ and $p > q;$ this last assumption is necessary for modeling the community structure of a network. \end{definition} \noindent \textit{Remark:} For the sake of simplicity, we assume that the edge probabilities within both communities are equal to $p,$ but this is a minor assumption and can be generalised so that community 1 has a different edge probability to community 2. For an SBM graph we use $w_{\max}$ and $w_{\min}$ to denote the maximum and the minimum expected degrees of the nodes respectively. From Definition \ref{def:sbm}, by our assumption on $m,p$ and $q,$ we have $w_{\max} = m p+ (n-m) q$ and $w_{\min} = (n-m)p + m q.$ Note that our results only depend on these two parameters. We present our main result on SBM graphs in the following theorem. \begin{theorem} \label{theo:sbm} For a Stochastic Block Model with $w_{\min} = \mathrm{\omega}(\log^3(n))$ and $\frac{w_{\max}}{w_{\min}} \le C,$ PageRank with preference vector $\mathbf{v}$ such that $\|\mathbf{v}\|_2 = O(\frac{1}{\sqrt{n}})$ satisfies \[ \| \boldsymbol{\pi} - \overline{\boldsymbol{\pi}}_{\textnormal{SBM}} \|_{\text{TV}} = o(1) \] w.h.p., where \begin{equation} \label{eq:prSBM} \overline{\boldsymbol{\pi}}_{\textnormal{SBM}} = (1- \alpha) \left( \mathbf I - \alpha \overline{\mathbf P} \right)^{-1} \mathbf{v}. \end{equation} Here $\overline{\mathbf P}$ represents the ``average'' Markov matrix given as $\overline{\mathbf P} = \overline{\mathbf A} \mathbf W^{-1}$ where $\mathbf W= \mathbb{E}(\mathbf D)$ and $\overline{\mathbf A}= \mathbb{E}(\mathbf{A}).$ \end{theorem} \noindent \textit{Discussion:} Let us look at the permissible values of $m,p,q$ under the assumptions in the above theorem. Recall that we have $w_{\min} = (n-m)p + m q > nq.$ Therefore the condition on the growth of minimum expected degree is met, for example, if $q = \omega(\log^3(n)/n).$ On the other hand we have \[ \frac{w_{\max}}{w_{\min}} = \frac{mp + (n-m)q}{(n-m)p + m q} = \frac{\frac{m}{n-m} \frac{p}{q} + 1}{\frac{m}{n-m}+\frac{p}{q}} \quad, \] which remains bounded if either $m/(n-m)$ or $p/q$ tends to infinity, but not both. The following corollary of Theorem \ref{theo:sbm} gives an interesting expression for PageRank for an SBM graph with two equal-sized communities. 
\begin{corollary} For an SBM graph as in Definition \ref{def:sbm}, with $m = n/2,$ (n assumed to be even) such that $p+q \gg \log^3(n)/n$ the PageRank vector $\boldsymbol{\pi}$ with preference vector $\mathbf{v}$ such that $\|\mathbf{v}\|_2 = O(\frac{1}{\sqrt{n}})$ satisfies \[ \| \boldsymbol{\pi} - \overline{\boldsymbol{\pi}}_{\textnormal{SBM}} \|_{TV} \to 0 \] w.h.p as $n \to \infty$ where \begin{equation}\label{eq:asymsbm} \overline{\boldsymbol{\pi}}_{\textnormal{SBM}}= \alpha \frac{1}{n} \mathbf{1} + (1 - \alpha) \left( \mathbf{v} + \frac{\alpha \beta}{1 - \alpha \beta} (\mathbf v^T \mathbf u) \mathbf{u} \right) , \end{equation} where $\beta \coloneqq \frac{p-q}{p+q},$ and $\mathbf{u} \in \mathbb{R}^n$ is a unit vector such that $u_i = \frac{1}{\sqrt{n}},$ for $i \in C_1$ and $u_i = -\frac{1}{\sqrt{n}}$ for $i \in C_2.$ \end{corollary} \textbf{Proof:} With equal-sized communities, i.e., $m = n/2$, we have $w_{\max} = w_{\min} = \frac{n}{2}(p+q).$ Therefore the conditions of Theorem \ref{theo:sbm} are satisfied if $p + q \gg \log^3(n)/n.$ Observe that the expected adjacency matrix can be written as $\overline{\mathbf{A}} = \frac{p+q}{2} \mathbf{1} \mathbf{1}^T + \frac{n}{2}(p-q) \mathbf u \mathbf u^T.$ Furthermore, $\mathbf W = \frac{n}{2}(p+q) \mathbf I.$ Therefore $\overline{\mathbf P} = \overline{\mathbf A} \mathbf W^{-1} = \frac{1}{n} \mathbf{1} \mathbf{1}^T + \frac{p-q}{p+q} \mathbf{u} \mathbf{u}^T.$ From (\ref{eq:prSBM}), the asymptotic PageRank $\overline{\boldsymbol{\pi}}_{\textnormal{sbm}}$ is therefore given as \[ \overline{\boldsymbol{\pi}}_{\textnormal{sbm}} = \alpha \overline{\mathbf P} \overline{\boldsymbol{\pi}}_{\textnormal{sbm}} + (1- \alpha) \mathbf v. \] Consequently, $\overline{\boldsymbol{\pi}}_{\textnormal{sbm}} = \frac{\alpha}{n} \mathbf{1} + \alpha \beta \mathbf u \mathbf u^T \overline{\boldsymbol{\pi}}_{\textnormal{sbm}} + (1 - \alpha) \mathbf v,$ or $\left[ \mathbf I - \alpha \beta \mathbf u \mathbf u^T\right] \overline{\boldsymbol{\pi}}_{\textnormal{sbm}} = \frac{\alpha}{n} \mathbf{1} + (1 - \alpha) \mathbf v.$ By Woodbury Matrix Inversion Lemma in \cite{hornjohn12}, $\left[ \mathbf I - \alpha \beta \mathbf u \mathbf u^T\right]^{-1} = \mathbf I + \frac{\alpha \beta}{1 - \alpha \beta} \mathbf u \mathbf u^T.$ Hence we obtain $\overline{\boldsymbol{\pi}}_{\textnormal{sbm}} = \frac{\alpha}{n} \mathbf{1} + (1 - \alpha) \left (\mathbf v + \frac{\alpha \beta}{1 - \alpha \beta} (\mathbf u^T \mathbf v) \mathbf u \right),$ using the fact that $\mathbf u$ and $\mathbf{1}$ are orthogonal vectors. \qed The above corollary asserts that on an SBM matrix the PageRank is well approximated in the asymptotic regime of large graph size by the convex combination of the uniform probability vector $\frac{1}{n} \mathbf{1}$, which is the asymptotic stationary distribution of a simple random walk on the SBM graph, and a linear combination of the preference vector $\mathbf v$ and the projection of the preference vector onto the community partitioning vector $\mathbf u.$ Thus in this simple scenario of SBM graphs with equally sized communities, we observe that PageRank incorporates information about the community structure, in the form of a term correlated with the partition vector $\mathbf u,$ as opposed to the usual random walk, which misses this information. 
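As a numerical sanity check on \eqref{eq:asymsbm} (an illustration with parameter values of our own choosing, not results reported in this paper), one can sample an SBM graph with two equal communities, take a preference vector supported on the first community, and compare the exact PageRank with the closed-form expression:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, alpha = 2000, 0.85
p, q = 0.10, 0.01
half = n // 2

labels = np.r_[np.zeros(half), np.ones(half)]
probs = np.where(labels[:, None] == labels[None, :], p, q)
A = (rng.random((n, n)) < probs).astype(float)
A = np.triu(A, 1); A = A + A.T

d = A.sum(axis=1)
P = A / d
v = np.zeros(n); v[:half] = 2.0 / n             # preference vector on community 1
pi = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * P, v)

beta = (p - q) / (p + q)
u = np.r_[np.ones(half), -np.ones(half)] / np.sqrt(n)
# closed-form asymptotic PageRank from the corollary above
pi_sbm = alpha / n + (1 - alpha) * (v + alpha * beta / (1 - alpha * beta) * (v @ u) * u)

print(0.5 * np.abs(pi - pi_sbm).sum())          # total variation error, small for large n
\end{verbatim}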
It can also be inferred from (\ref{eq:asymsbm}) that if the correlation between the preference vector $\mathbf v$ and $\mathbf u$ is large, e.g., when the seed set of PageRank is chosen to be in one of the communities, the resulting PageRank will display a clear delineation of the communities. This provides a mathematical rationale for why PageRank works for semi-supervised graph partitioning \cite{AGMS12}, at least in the asymptotic regime. To prove Theorem \ref{theo:sbm} we need the following Lemmas, whose proofs are given in Appendix \ref{ap:secsbmprank}. \begin{lemma} \label{lemma:deg_conc} For an SBM graph $\mathcal{G}(m,n-m,p,q),$ when $w_{\min} = \mathrm\omega(\log^3(n))$ it can be shown that for some $C,$ \[ \max_{ 1 \leq i \leq n} \left|\frac{D_i}{\mathbb{E} (D_i)} - 1 \right| \leq C\sqrt{\frac{\log(n)}{ w_{\min}}} \text{ w.h.p}. \] \end{lemma} The proof of this lemma follows from applying Bernstein's concentration lemma to the degrees of SBM graph. The proof is given in Appendix \ref{ap:lemsbmdeg}. For ease of notation, let $\overline {\mathbf{Q}} = \mathbf W^{-1/2} \mathbb{E} (\mathbf{A}) \mathbf W^{-1/2},$ where $\mathbf{W} = \mathbb{E}(\mathbf{D}).$ As before $\mathbf{Q} = \mathbf{D}^{1/2} \mathbf{A} \mathbf{D}^{1/2}.$ We need the following concentration result on $\mathbf Q.$ \begin{lemma}\label{errorboundadj} For an SBM graph for which $w_{\min} = \mathrm\omega(\log^3(n)),$ and $\frac{w_{\max}}{w_{\min}} \le C$ for some $C,$ it can be shown that \[ \| \mathbf{Q} - \overline{\mathbf{Q}}\|_2 = C \frac{\sqrt{\log(n)w_{\max}}}{w_{\min}} = o(1) \] w.h.p. \end{lemma} We prove this lemma in Appendix \ref{ap:errorbadj}.\\ \textbf{Proof of Theorem \ref{theo:sbm}:} We write the error between $\boldsymbol{\pi}$ and $\overline{\boldsymbol{\pi}}$ as follows \begin{align} \boldsymbol{\delta} &= \boldsymbol{\pi} - \overline{\boldsymbol{\pi}} \nonumber \\ &= (1 - \alpha) \left[\mathbf D^{1/2} (\mathbf I - \alpha \mathbf Q )^{-1} \mathbf D^{-1/2} - \mathbf W^{1/2} (\mathbf I - \alpha \overline{\mathbf Q})^{-1} \mathbf W^{-1/2} \right] \mathbf v \nonumber \\ &= (1-\alpha)\biggl[ \mathbf W^{1/2} \left((\mathbf I - \alpha {\mathbf Q} )^{-1} - (\mathbf I - \alpha \overline{\mathbf Q} )^{-1}\right) \mathbf W^{-1/2} \biggr ]\mathbf v + \nonumber \\ & (1-\alpha) \biggl[\mathbf D^{1/2} (\mathbf I - \alpha \mathbf Q )^{-1} \mathbf D^{-1/2} - \mathbf W^{1/2}(\mathbf I - \alpha \mathbf Q )^{-1} \mathbf W^{-1/2} \biggr ] \mathbf v , \label{eq:theosteps} \end{align} where in the last equality we added and subtracted $\mathbf W^{1/2}(\mathbf I - \alpha \mathbf Q )^{-1} \mathbf W^{-1/2}$ and reordered terms. Now we analyse the two terms in square brackets in the last equality in (\ref{eq:theosteps}), which we denote $T_1$ and $T_2,$ respectively. Notice that we have $\| \boldsymbol{\delta} \|_1 \le \|T_1\|_1 + \|T_2\|_1 .$ Next we show that as $n \to \infty,$ $\|T_1\|_1$ and $\|T_2\|_1$ are $o(1)$ separately and consequently we obtain the result of the theorem. 
Let us first consider $T_1.$ We have \begin{align*} T_1 &=(1-\alpha)\biggl[ \mathbf W^{1/2} \left((\mathbf I - \alpha {\mathbf Q} )^{-1} - (\mathbf I - \alpha \overline{\mathbf Q} )^{-1}\right) \mathbf W^{-1/2} \biggr ]\mathbf v\\ &= (1- \alpha) \mathbf{W}^{1/2}(\mathbf{I} - \alpha {\mathbf{Q}})^{-1}\left( \overline{\mathbf{Q}} - \mathbf{Q} \right)(\mathbf{I} - \alpha \overline{\mathbf{Q}})^{-1} \mathbf{W}^{-1/2} \mathbf{v}, \end{align*} which we obtained by factoring out $(\mathbf{I} - \alpha {\mathbf{Q}})^{-1}$ and $(\mathbf{I} - \alpha \overline{\mathbf{Q}})^{-1}$ on the left and right sides of the square brackets. Next we focus on the 2-norm of $T_1.$ \begin{align*} \|T_1\|_2 &\mylt{(a)} (1- \alpha) \sqrt{ w_{\max}} \|(\mathbf{I} - \alpha {\mathbf{Q}})^{-1}\|_2 \|\overline{\mathbf{Q}} - \mathbf{Q}\|_2 \|(\mathbf{I} - \alpha \overline{\mathbf{Q}})^{-1}\|_2 \frac{1}{\sqrt{w_{\min}}}\|\mathbf{v}\|_2 \\ &\mylt{(b)} \frac{1}{1-\alpha}\sqrt{\frac{w_{\max}}{w_{\min}}} \|\mathbf Q - \overline{\mathbf Q}\|_2\|\mathbf v\|_2 \\ &\mylt{(c)} C \frac{\sqrt{\log(n)w_{\max}}}{w_{\min}\sqrt{n}} \\ &= C \sqrt{\frac{\log(n)}{nw_{\max}}} \frac{w_{\max}}{w_{\min}}. \end{align*} This proves $\|T_1\|_1 \le \sqrt{n}\|T_1\|_2 \footnote{By Cauchy Schwartz inequality on norms.}\le C \sqrt{\frac{\log(n)}{w_{\max}}} \frac{w_{\max}}{w_{\min}} = o(1),$ from the assumptions of the theorem. Here in (a) we used the submultiplicative property of matrix norms and the fact that 2-norm of diagonal matrices is the maximum diagonal element in magnitude. The inequality (b) follows because $\|(\mathbf{I} - \alpha {\mathbf{Q}})^{-1}\|_2 \le \frac{1}{1- \alpha}$ and $\|(\mathbf{I} - \alpha \overline{\mathbf{Q}})^{-1}\|_2 \le \frac{1}{1 - \alpha}$ and step (c) follows from Lemma \ref{errorboundadj} and the assumption that $\|\mathbf v\|_2 = O(1/\sqrt{n})$. Next we analyse the second term $T_2.$ For ease of notation we denote $\widetilde{\mathbf R} = \mathbf W^{1/2} \left ( \mathbf{I} - \alpha \mathbf Q \right )^{-1} \mathbf W^{-1/2}.$ Then by simple algebraic manipulations \begin{align*} T_2 &= (1 - \alpha) \left [ \mathbf D^{1/2} \left ( \mathbf{I} - \alpha \mathbf Q \right )^{-1} \mathbf D^{-1/2} - \mathbf W^{1/2} \left ( \mathbf{I} - \alpha \mathbf Q \right )^{-1} \mathbf W^{-1/2}\right]\mathbf v \\ &= (1-\alpha)\left(\mathbf D^{1/2} \mathbf W^{-1/2} \widetilde{\mathbf R} \mathbf W^{1/2} \mathbf D^{-1/2} - \widetilde{\mathbf R} \right) \mathbf v\\ &= (1-\alpha)\left(\mathbf D^{1/2} \mathbf W^{-1/2} \widetilde{\mathbf R} \left( \mathbf W^{1/2} \mathbf D^{-1/2} - \mathbf I \right) + \left( \mathbf D^{1/2}\mathbf W^{-1/2} - \mathbf I \right) \widetilde{\mathbf R} \right) \mathbf v, \end{align*} where the last step is obtained by adding and subtracting $\mathbf D^{1/2} \mathbf W^{-1/2} \widetilde{\mathbf R}.$ \noindent Now we have $\| \mathbf D^{1/2} \mathbf W^{-1/2} - \mathbf I \|_2 = \max_{i} \left |\sqrt{\frac{d_i}{w_i}} - 1 \right| \le \max_{i} \left |\frac{d_i}{w_i} - 1 \right| \le C \sqrt{\frac{\log(n)}{w_{\min}}} $ w.h.p. by Lemma \ref{lemma:deg_conc} and similarly $\|\mathbf D^{1/2} \mathbf W^{-1/2} \|_2 \le \|\mathbf D^{1/2} \mathbf W^{-1/2} - \mathbf I\|_2 + \|\mathbf I\|_2 \le C \sqrt{\frac{\log(n)}{w_{\min}}} + 1.$ In addition $\|\mathbf W^{1/2} \mathbf{D}^{-1/2} - \mathbf I\|_2 = \max_i \left| \sqrt{\frac{w_i}{d_i}} - 1 \right| \le \max_i \left| \frac{w_i}{d_i} - 1 \right|.$ It can be shown that since $\max_i \left| \frac{d_i}{w_i} - 1 \right| \le C\sqrt{\frac{\log(n)}{w_{\min}}}$ w.h.p. 
(by Lemma \ref{lemma:deg_conc}), then $\max_i \left| \frac{w_i}{d_i} - 1 \right| \le C\sqrt{\frac{\log(n)}{w_{\min}}}$ w.h.p.\footnote{{\footnotesize This follows since we can write $\frac{d_i}{w_i} = 1 + \eta_i$, with $\max_i |\eta_i| = O\left(\sqrt{\frac{\log(n)}{w_{\min}}}\right) = o(1)$ w.h.p., then $\frac{w_i}{d_i} = \frac{1}{1+\eta_i} = 1 - \eta_i + O(\eta_i^2),$ hence $\max_i |\frac{w_i}{d_i} - 1| = O(\max_i |\eta_i|) = O\left(\sqrt{\frac{\log(n)}{w_{\min}}}\right) = o(1)$ w.h.p.}} Therefore $\|\mathbf W^{1/2} \mathbf{D}^{-1/2} \|_2 \le \|\mathbf W^{1/2} \mathbf{D}^{-1/2} - \mathbf I \|_2 + \|\mathbf I\|_2 \le C\sqrt{\frac{\log(n)}{w_{\min}}} + 1 $ w.h.p. Using the above facts and denoting $\delta = C\sqrt{\frac{\log(n)}{w_{\min}}} $ we obtain \begin{align} \|T_2\|_2 &\le \left (\|\mathbf D^{\frac{1}{2}}\mathbf W^{-\frac{1}{2}}\|_2 \|\widetilde{\mathbf R}\|_2 \|\mathbf W^{\frac{1}{2}} \mathbf D^{-\frac{1}{2}} - \mathbf I\|_2 + \|\mathbf D^{\frac{1}{2}} \mathbf W^{-\frac{1}{2}} - \mathbf I\|_2 \|\widetilde{\mathbf R}\|_2 \right)\|\mathbf v\|_2 \nonumber \\ \label{eq:proofstep} &\le C(\delta(\delta + 1)\frac{1}{1-\alpha} + \delta) \frac{1}{1-\alpha}\sqrt{\frac{w_{\max}}{n w_{\min}}}\\ &\le C \delta\sqrt{\frac{w_{\max}}{n w_{\min}}} \mbox{w.h.p.} \end{align} Hence we have $\|T_2\|_1 \le \sqrt{n}\|T_2\|_2 \le C \delta\sqrt{\frac{w_{\max}}{w_{\min}}}$ w.h.p., which from our assumptions is $o(1).$ Here in (\ref{eq:proofstep}) we used the fact that \[ \|\widetilde{\mathbf R}\|_2 = \|\mathbf W^{1/2} \left ( \mathbf{I} - \alpha \mathbf Q \right )^{-1} \mathbf W^{-1/2}\|_2 \le \sqrt{\frac{w_{\max}}{w_{\min}}}\|\mathbf I - \alpha \mathbf{Q}\|_2 \le \frac{1}{1 - \alpha} \sqrt{\frac{w_{\max}}{w_{\min}}} \le C, \] and that $\|\mathbf v\|_2 \le C/\sqrt{n} ,$ for some $C.$ \qed \vspace{5pt} \noindent \textit{Remark:} This method of proof can be extended to similar models like the Stochastic Block Model with multiple communities and their generalizations, e.g., Random Dot Product Graphs \cite{athreya2013}. \section{Experimental Results} \begin{comment} \begin{figure} \label{fig1} \begin{center} \includegraphics[scale=0.6]{maxerror1.eps} \end{center} \caption{Maximum Normalized error for ER and Chung-Lu graphs\vspace{-20pt}} \end{figure} \end{comment} \begin{figure} \centering \includegraphics[scale=0.65]{maxerror2.eps} \caption{Log-log plot of maximum normalized error for ER and Chung-Lu graphs} \label{fig1} \end{figure} In this section, we provide experimental evidence to further illustrate the analytic results obtained in the previous sections. In particular, we simulated ER graphs with $p_n = C\frac{\log^7(n)}{n}$ and Chung-Lu graphs with the degree vector $w$ sampled from a geometric distribution so that the average degree $\overline{w} = c n^{1/3},$ clipped such that $w_{\max}= 7w_{\min}$, for various values of graph size, and plotted the maximum of normalized error $\widetilde{\delta}$ and TV distance error $\norm{\delta}_1$, respectively, in Figures \ref{fig1} and \ref{fig:tverror}. As expected, both these errors decay as functions of $n,$ which illustrates that the PageRank vector does converge to the asymptotic value. 
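A minimal script reproducing the flavour of this experiment is sketched below (Python/NumPy; the weight construction, constants, and graph sizes are simplified choices of ours and need not coincide with those used to produce the figures).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.85

def chung_lu(w):
    """Sample a Chung-Lu graph with expected degree vector w."""
    probs = np.minimum(np.outer(w, w) / w.sum(), 1.0)
    A = (rng.random((len(w), len(w))) < probs).astype(float)
    A = np.triu(A, 1)
    return A + A.T

for n in [500, 1000, 2000]:
    w_min = 2 * n ** (1 / 3)
    w = np.clip(w_min * rng.geometric(0.4, size=n), w_min, 7 * w_min)  # w_max = 7 w_min
    A = chung_lu(w)                       # assumed connected; redraw or densify if not
    d = A.sum(axis=1)
    P = A / d
    v = np.full(n, 1.0 / n)
    pi = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * P, v)
    pi_bar = alpha * d / d.sum() + (1 - alpha) * v
    tv_err = 0.5 * np.abs(pi - pi_bar).sum()
    max_err = np.max(np.abs(pi - pi_bar) / pi_bar)     # maximum normalized error
    print(n, tv_err, max_err)
\end{verbatim}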
\begin{figure} \centering \includegraphics[scale=0.65]{tverror2.eps} \caption{Log-log plot of TV distance error for ER and Chung-Lu graphs} \label{fig:tverror} \end{figure} In the spirit of further exploration, we have also conducted simulations on power-law graphs with exponent $\beta = 4$ using the Chung-Lu graph model with $w_i = ci^{-1/(\beta - 1)},$ for $i_0 \le i \le n + i_0 $ with \[ c = \frac{\beta - 2}{\beta - 1}dn^{1/(\beta - 1)}, \] \[ i_0 = n \left[ \frac{d(\beta-2)}{m(\beta-1)}\right]^{\beta - 1}. \] Please refer to \cite{CLV03} for details. We set max degree $m = n^{1/3}$ and average degree $d = n^{1/6}.$ In Figure \ref{figpowererror} we observe that for this graph the max-norm of the relative error does not converge to zero. On the other hand, the TV-norm seems to converge to zero with graph size, albeit very slowly. Note that these graphs satisfy Property \ref{prop:fast_mixing} \cite{CLV03}, but they do not satisfy Property \ref{prop:bounded_degrees}. Therefore, at this point, it is not possible to conclude whether the assumption of bounded variation of degrees is necessary for the convergence to hold. It might be interesting to investigate in detail the asymptotic behavior of PageRank in undirected power-law graphs. \begin{comment} \begin{figure} \begin{center} \includegraphics[scale=0.4]{powerlaw.eps} \end{center} \vspace{-15pt} \label{fig:powererror} \caption{TV distance and maximum relative error for power-law graphs} \end{figure} \end{comment} \begin{figure} \centering \includegraphics[scale=0.5]{prpowerlawfinal.eps} \caption{Log-log plot of TV distance and maximum error for power-law graphs} \label{figpowererror} \end{figure} \begin{comment} \begin{figure} \label{fig:v1} \begin{center} \includegraphics[scale=0.65]{vdeltaER.eps} \end{center} \caption{TV distance and maximum relative error for ER-graph when $v_1 = 1$} \end{figure} \end{comment} \begin{figure} \centering \includegraphics[scale=0.6]{vdeltaER2.eps} \caption{Log-log plot of TV distance and maximum relative error for ER-graph when $v = e_1$} \label{figv1} \end{figure} Furthermore, we also see that in the case $\mathbf v= \mathbf e_i,$ the standard unit vector, for some $i$ we do not have the conjectured convergence, as can be seen in Figure \ref{figv1} in the case of ER graphs. It can also be seen from our analysis that if $v_k = 1 $ for some $k,$ the quantity $\norm{\widetilde{Q}D^{-1/2}v}_{\infty}$ becomes: \vspace{-0.2cm} \begin{equation*} \max_i \left|\sum_j \left ( \frac{A_{ij}}{\sqrt{d_i d_j}} - \frac{\sqrt{d_i d_j}}{\sum_l d_l} \right ) v_j /\sqrt{d_j}\right| = \max_{i} \frac{1}{\sqrt{d_i} d_k} \left|A_{ik} - \frac{d_i d_k}{\sum_l d_l}\right|, \vspace{-0.1cm} \end{equation*} which is $O\left(\frac{1}{\sqrt{w_{\min}}w_{k}}\right)$ and does not fall sufficiently fast. We simulated an SBM graph with two communities of equal size, with $p = 0.1$ and $q = 0.01.$ In Figure \ref{figsbm} we plot the maximum normalized error and the TV-distance error against graph size on a log-log plot. As expected, both errors go to zero for large graph sizes. \begin{figure} \centering \includegraphics[scale=0.5]{prSBM_2.eps} \caption{Log-log plot of maximum normalized error and TV-distance error for an SBM graph} \label{figsbm} \end{figure} \section{Conclusions} In this work, we have shown that when the size of a graph tends to infinity, the PageRank vector lends itself to be approximated by a mixture of the preference vector and the degree distribution, for a class of undirected random graphs including the Chung-Lu graph.
We expect that these findings will shed more light on the behaviour of PageRank on undirected graphs, and possibly help to optimize the computation of PageRank, or suggest further modifications to better capture local graph properties. We also obtain an asymptotic expression for the PageRank on SBM graphs. It is seen that this asymptotic expression contains information about the community partitioning in the simple case of an SBM with equal-sized communities. It would be interesting to study the implications of our results for community detection algorithms. \vspace{-0.1cm}
\section*{Acknowledgements} We would like to thank Nelly Litvak for stimulating discussions on the topic of the paper. The work of K. Avrachenkov and A. Kadavankandy was partly funded by the French Government (National Research Agency, ANR) through the ``Investments for the Future'' Program, reference \#ANR-11-LABX-0031-01, and the work of L. Ostroumova Prokhorenkova and A. Raigorodskii was supported by the Russian Science Foundation (grant \#16-11-10014).
{ "timestamp": "2017-03-24T01:05:46", "yymm": "1703", "arxiv_id": "1703.08057", "language": "en", "url": "https://arxiv.org/abs/1703.08057", "abstract": "PageRank has numerous applications in information retrieval, reputation systems, machine learning, and graph partitioning. In this paper, we study PageRank in undirected random graphs with an expansion property. The Chung-Lu random graph is an example of such a graph. We show that in the limit, as the size of the graph goes to infinity, PageR- ank can be approximated by a mixture of the restart distribution and the vertex degree distribution. We also extend the result to Stochastic Block Model (SBM) graphs, where we show that there is a correction term that depends on the community partitioning.", "subjects": "Probability (math.PR)", "title": "PageRank in Undirected Random Graphs", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9658995742876885, "lm_q2_score": 0.734119526900183, "lm_q1q2_score": 0.709085738509166 }
https://arxiv.org/abs/cond-mat/0701707
Relaxation dynamics in strained fiber bundles
Under an applied external load the global load-sharing fiber bundle model, with individual fiber strength thresholds sampled randomly from a probability distribution, will relax to an equilibrium state, or to complete bundle breakdown. The relaxation can be viewed as taking place in a sequence of steps. In the first step all fibers weaker than the applied stress fail. As the total load is redistributed on the surviving fibers, a group of secondary fiber failures occur, etc. For a bundle with a finite number of fibers the process stops after a finite number of steps, $t$. By simulation and theoretical estimates, it is determined how $t$ depends upon the stress, the initial load per fiber, both for subcritical and supercritical stress. The two-sided critical divergence is characterized by an exponent -1/2, independent of the probability distribution of the fiber thresholds.
\section{Introduction} Bundles of fibers, with a statistical distribution of breakdown thresholds for the individual fibers, are simple and interesting models of failure processes in materials. They can be analyzed to an extent that is not possible for most materials (for reviews, see \cite{Herrmann,Chakrabarti,Sahimi,Sornette,Bhattacharyya}). We consider a bundle with a large number $N$ of elastic and parallel fibers, clamped at both ends. When the load on fiber $i$ is increased beyond a threshold value $x_{i}$, the fiber ruptures. The breakdown thresholds $x_{i}$ for the separate fibers are assumed to be independent random variables with a probability density $p(x)$, and a corresponding cumulative distribution function $P(x)$: \begin{equation} \mbox{ Prob }(x_{i}\leq x)\equiv P(x)=\int_{0}^{x}\; p(y)\; dy.\end{equation} The mechanism for how the extra stress caused by a fiber failure is redistributed among the unbroken fibers must be specified. We study here the classical version, the equal-load-sharing model, in which a ruptured fiber carries no load, and the increased stress caused by a failed element is shared equally by all the remaining intact fibers in the bundle \cite{Peirce}. If an external load $F$ is applied to a fiber bundle, the resulting failure events can be seen as a sequential process \cite{PC,PBC,BPC}. In the first step all fibers that cannot withstand the applied load break. Then the stress is redistributed on the surviving fibers, which compels further fibers to fail, etc. This iterative process continues until all fibers fail, or an equilibrium situation with a nonzero bundle strength is reached. Since the number of fibers is finite, the number of steps, $t$, in this sequential process is {\em finite}. In this paper we determine how $t$ depends upon the number of fibers and, more importantly, upon the stress $\sigma$, the applied external load per fiber, \begin{equation} \sigma=F/N.\end{equation} At a force $x$ per surviving fiber, the total force on the bundle is $x$ times the number of intact fibers. The expected or average force at this stage is therefore \begin{equation} F(x)=N\, x\,(1-P(x)).\label{load}\end{equation} One may consider $x$ to represent the elongation of the bundle, with the elasticity constant set equal to unity. The maximum $F_{c}$ of $F(x)$ corresponds to the value $x_{c}$ for which $dF/dx$ vanishes. Thus \begin{equation} 1-P(x_{c})-x_{c}p(x_{c})=0.\end{equation} We characterize the state of the bundle as \textit{subcritical} or \textit{supercritical} depending upon the stress value relative to the critical stress \begin{equation} \sigma_{c}=F_{c}/N,\end{equation} above which the bundle collapses completely. Critical properties of fiber bundles have been discussed before, but with a signature different from the one that we use here, and always with the critical point approached from the subcritical side \cite{Sornette,Hansen,PBC}. The function $t(\sigma)$ that we focus on, however, exhibits critical divergence when the critical point is approached from \textit{either} side. As an example, we show in Fig.\ 1 $t(\sigma)$ obtained by simulation for a uniform threshold distribution. \begin{center}\includegraphics[width=3in,height=3in]{relax-fig1}\par\end{center} {\footnotesize FIG.\ 1. Number of relaxation steps $t(\sigma)$ for a fiber bundle with a uniform threshold distribution (\ref{uniform}). Here $\sigma_{c}=0.25$. 
The figure is based on 1000 samples, each with $N=10^{6}$ fibers.}\\ We study the stepwise failure process in the bundle, when a fixed external load $F=N\sigma$ is applied. Let $N_{t}$ be the number of intact fibers at step no.\ $t$, with $N_{0}=N$. We want to determine how $N_{t}$ decreases until the degradation process stops. With $N_{t}$ intact fibers, an expected number \begin{equation} \left[NP(N\sigma/N_{t})\right]\end{equation} of fibers will have thresholds that cannot withstand the load, and consequently these fibers break immediately. Here $[X]$ denotes the largest integer not exceeding $X$. The number of intact fibers in the next step is therefore \begin{equation} N_{t+1}=N-\left[NP(N\sigma/N_{t})\right].\label{Nt}\end{equation} Since $N$ is a large number, the ratio \begin{equation} n_{t}=\frac{N_{t}}{N}\end{equation} can for most purposes be considered a continuous variable. By (\ref{Nt}) we have essentially \cite{PC,PBC,BPC} \begin{equation} n_{t+1}=1-P(\sigma/n_{t}).\label{n}\end{equation} In Sec.\ II we study $t(\sigma)$ in the supercritical domain, while Sec.\ III is devoted to subcritical situations. Simulation results are presented for two threshold distributions, the uniform and a Weibull distribution, and these are compared with detailed analytic results. The theoretical analysis is, however, not limited to these special threshold distributions. In Sec.\ IV we summarize our results and discuss briefly the approximations involved.
\section{Supercritical relaxation} We investigate first the supercritical situation, $\sigma>\sigma_{c}$, with positive values of \begin{equation} \epsilon=\sigma-\sigma_{c},\end{equation} and start with the simplest model.
\subsection{Uniform threshold distribution} Consider a bundle in which the failure thresholds are distributed according to the uniform distribution \begin{equation} P(x)=\left\{ \begin{array}{cl} x & \mbox{ for }0\leq x\leq1\\ 1 & \mbox{ for }x>1\end{array}\right.\label{uniform}\end{equation} For this case the load curve (\ref{load}) is parabolic, \begin{equation} F=Nx(1-x),\end{equation} with the critical point at $x_{c}=1/2$, $\sigma_{c}=1/4$. Simulation results for the uniform threshold distribution are presented in Fig.\ 1. The basic equation (\ref{n}) takes the form \begin{equation} n_{t+1}=1-\frac{\sigma}{n_{t}}=1-\frac{\frac{1}{4}+\epsilon}{n_{t}}.\label{ntuni}\end{equation} This nonlinear iteration can be transformed into a linear one by the following procedure. Introduce first \begin{equation} n_{t}={\textstyle \frac{1}{2}}-y_{t}\sqrt{\epsilon},\end{equation} into (\ref{ntuni}), with a result that may be written \begin{equation} \frac{y_{t+1}-y_{t}}{1+y_{t}y_{t+1}}=2\sqrt{\epsilon}.\end{equation} Putting \begin{equation} y_{t}=\tan v_{t},\end{equation} we have \begin{equation} 2\sqrt{\epsilon}=\frac{\tan v_{t+1}-\tan v_{t}}{1+\tan v_{t+1}\;\tan v_{t}}=\tan(v_{t+1}-v_{t}).\end{equation} Hence $v_{t+1}-v_{t}=\tan^{-1}(2\sqrt{\epsilon})$, with solution \begin{equation} v_{t}=v_{0}+t\;\tan^{-1}(2\sqrt{\epsilon}).\end{equation} In the original variable the solution reads {\small \begin{eqnarray} n_{t} & = & {\textstyle \frac{1}{2}}-\sqrt{\epsilon}\;\tan\left(\tan^{-1}(\frac{\frac{1}{2}-n_{0}}{\sqrt{\epsilon}})+\; t\;\tan^{-1}(2\sqrt{\epsilon})\right)\\ & = & {\textstyle \frac{1}{2}}-\sqrt{\epsilon}\;\tan\left(-\tan^{-1}(1/2\sqrt{\epsilon})+\; t\;\tan^{-1}(2\sqrt{\epsilon})\right),\label{finaluni}\end{eqnarray} }where $n_{0}=1$ has been used.
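For readers who wish to reproduce Fig.~1, the stepwise failure process behind (\ref{Nt}) can be simulated directly. The following sketch is ours and purely illustrative (variable names, the counting convention at the final step, and the sample values of $\sigma$ are our choices); it draws $N$ thresholds from the uniform distribution (\ref{uniform}), applies the fixed load $F=N\sigma$, and counts relaxation steps until equilibrium or complete breakdown.
\begin{verbatim}
# Illustrative sketch (Python/NumPy): number of relaxation steps t(sigma)
# for an equal-load-sharing bundle with uniform thresholds on [0, 1].
import numpy as np

def relaxation_steps(N, sigma, rng):
    x = np.sort(rng.random(N))        # fiber thresholds, P(x) = x on [0, 1]
    Nt, t = N, 0
    while True:
        t += 1
        load = N * sigma / Nt         # load per surviving fiber
        Nt_next = N - np.searchsorted(x, load)  # fibers with threshold >= load survive
        if Nt_next == 0:              # complete breakdown
            return t
        if Nt_next == Nt:             # equilibrium; conventions may differ by 1
            return t
        Nt = Nt_next

rng = np.random.default_rng(1)
N = 10**6
for sigma in (0.24, 0.249, 0.25, 0.251, 0.26):
    print(sigma, relaxation_steps(N, sigma, rng))
\end{verbatim}
Averaging such counts over many samples reproduces the qualitative shape of Fig.~1, with the number of steps peaking around $\sigma_{c}=0.25$.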
Eq.\ (\ref{ntuni}) shows that when $n_{t}$ obtains a value in the interval $(0,\sigma)$, the next iteration gives complete bundle failure. Taking $n_{t}=\sigma$ as the penultimate value gives a lower bound, $t_{l}$, for the number of iterations, while using $n_{t}=0$ in (\ref{finaluni}) gives an upper bound $t_{u}$. Adding unity for the final iteration, (\ref{finaluni}) gives the bounds \begin{equation} t_{u}(\sigma)=1+\frac{2\tan^{-1}(1/2\sqrt{\epsilon})}{\tan^{-1}(2\sqrt{\epsilon})},\label{tu}\end{equation} and \begin{equation} t_{l}(\sigma)=1+\frac{\tan^{-1}((\frac{1}{4}-\epsilon)/\sqrt{\epsilon})+\tan^{-1}(1/2\sqrt{\epsilon})}{\tan^{-1}(2\sqrt{\epsilon})}.\label{tl}\end{equation} It is easy to show that $t_{u}(\sigma)-t_{l}(\sigma)=1$. In Fig.\ 2A we show that these bounds nicely embrace the simulation results.
\begin{center}\includegraphics[width=3in,height=2.5in]{relax-fig2A}\par\end{center} \begin{center}\includegraphics[width=3in,height=2.5in]{relax-fig2B}\par\end{center} {\footnotesize FIG.\ 2. Simulation results with supercritical stress for (A) the uniform threshold distribution (\ref{uniform}), and (B) the Weibull distribution (\ref{Weibull}). The graphs are based on 10000 samples with $N=10^{6}$ fibers in each bundle. The dashed lines represent the theoretical estimates (\ref{tu}), (\ref{tl}) and (\ref{estimateW}).} \\ Note that both the upper and the lower bound behave as $\epsilon^{-\frac{1}{2}}$ for small $\epsilon$. A rough approximation near the critical point is \begin{equation} t(\sigma)\approx{\textstyle \frac{1}{2}}\pi(\sigma-\sigma_{c})^{-\frac{1}{2}}.\end{equation}
\subsection{General threshold distributions} The uniform distribution is amenable to analysis to a degree not shared by other threshold distributions. Therefore we now discuss how to handle other distributions, and we start with a special case, a Weibull distribution of index $5$, \begin{equation} P(x)=1-e^{-x^{5}},\hspace{1cm}x\geq0.\label{Weibull}\end{equation} The critical parameters for this case are $x_{c}=5^{-1/5}=0.72478$ and $\sigma_{c}=(5e)^{-1/5}=0.5933994$. Simulation results for $t(\sigma)$ are displayed in Fig.\ 2B for the Weibull supercritical case. The variation with the external stress $\sigma$ is qualitatively similar to the results for the uniform threshold distribution. The interesting values of the external stress are close to $\sigma_{c}$, because for large supercritical stresses the bundle breaks down almost immediately. For $\sigma$ slightly above $\sigma_{c}$ the iteration function \begin{equation} n_{t+1}=f(n_{t})=1-P(\sigma/n_{t})=e^{-(\sigma/n_{t})^{5}},\label{it-W}\end{equation} takes the form sketched in Fig.\ 3.
\begin{center}\includegraphics[width=3in,height=2.5in]{relax-fig3}\par\end{center} {\footnotesize FIG.\ 3. The iteration function $f(n)$ for the Weibull distribution (\ref{Weibull}). Here $\sigma=0.6$, slightly greater than the critical value $\sigma_{c}=0.5933994.$}\\ The iteration function is almost tangent to the reflection line $n_{t+1}=n_{t}$ and a long channel of width proportional to $\epsilon$ appears. The dominant share of the iterations occurs within this channel (see Fig.\ 3). The channel wall formed by the iteration function is almost parabolic and is well approximated by a second-order expression \begin{equation} n_{t+1}=n_{c}+(n_{t}-n_{c})-a(n_{t}-n_{c})^{2}+b(\sigma_{c}-\sigma).\label{quadr}\end{equation} Here $n_{c}=e^{-1/5}$ is the fixed point, $n_{t+1}=n_{t}$, of the iteration at $\sigma=\sigma_{c}$.
With $u=(n-n_{c})/b$ and $\epsilon=\sigma-\sigma_{c}$ (\ref{quadr}) takes the form \begin{equation} u_{t+1}-u_{t}=-Au_{t}^{2}-\epsilon,\end{equation} with $A=ab$. In the channel $u$ changes very slowly, so we may treat the difference equation as a differential equation: \begin{equation} \frac{du}{dt}=-Au^{2}-\epsilon,\end{equation} with solution \begin{equation} t\sqrt{A\epsilon}=-\tan^{-1}\left(u\sqrt{A/\epsilon}\right)+\mbox{ constant }.\end{equation} Thus \begin{equation} t_{f}-t_{i}=(A\epsilon)^{-\frac{1}{2}}\left\{ \tan^{-1}(u_{i}\sqrt{A/\epsilon})-\tan^{-1}(u_{f}\sqrt{A/\epsilon})\right\} \label{tf-ti}\end{equation} is the number of iterations \textit{in the channel}, starting with $u_{i}$, ending with $u_{f}$. This treatment is general and can be applied to any threshold distribution near criticality. Although the vast majority of the iterations occur in the channel, there are a few iterations at the entrance and at the exit of the channel that may require attention in special cases. The situation is similar to type I intermittency in dynamical systems \cite{Pomeau}, but in our case the channel is traversed merely once.
For the Weibull distribution the expansion (\ref{quadr}) has the precise form\begin{eqnarray} n_{t+1} & = & e^{-(\sigma/n_{t})^{5}}\simeq e^{-1/5}+(n_{t}-n_{c})\nonumber \\ & & -{\textstyle \frac{5}{2}}e^{1/5}(n_{t}-n_{c})^{2}-5^{1/5}(\sigma-\sigma_{c}),\label{b1}\end{eqnarray} where $n_{c}=e^{-1/5}$, $a={\textstyle \frac{5}{2}}e^{1/5}$, $b=5^{1/5}$ and $A={\textstyle \frac{5}{2}}(5e)^{1/5}$. For completeness we must also consider the number of iterations needed to reach the entrance to the channel. It is not meaningful to use the quadratic approximation (\ref{b1}) where it is not monotonically increasing, i.e.\ for $n>n_{m}=n_{c}+1/(2a)=\frac{6}{5}e^{-1/5}\simeq0.98$. Thus we take $n_{i}=n_{m}$ as the entrance to the channel, and add one extra iteration to arrive from $n_{0}=1$ to the channel entrance. (Numerical evidence for this extra step: For $\sigma=\sigma_{c}$ the iteration (\ref{it-W}) starts as follows: $n_{0}=1.00$, $n_{1}=0.93$, $n_{2}=0.90$, while using the quadratic function with $n_{0}=n_{m}=0.98$ as the initial value, we get after one step $n_{1}=0.90$, approximately the same value that the exact iteration reaches after two steps.) With $n_{f}=0$ we obtain from (\ref{tf-ti}) in the Weibull case the estimate\begin{eqnarray} t & = & 1+(A\epsilon)^{-1/2}\left\{ \tan^{-1}(e^{-1/5}\sqrt{A/\epsilon}\,/5b)\right.\nonumber \\ & & \left.+\tan^{-1}(e^{-1/5}\sqrt{A/\epsilon}\,/b)\right\} ,\label{estimateW}\end{eqnarray} with $A=\frac{5}{2}(5e)^{1/5}$ and $b=5^{1/5}$. We see in Fig.\ 2B that the theoretical estimate (\ref{estimateW}) gives an excellent representation of the simulation data. Near the critical point (\ref{estimateW}) has the asymptotic form \begin{equation} t\approx\pi(A\epsilon)^{-1/2}\propto(\sigma-\sigma_{c})^{-1/2},\label{div}\end{equation} with the same critical index as for the uniform threshold distribution. The divergence is caused by the large number of iterations in the narrow channel in Fig.\ 3. For a general threshold distribution such a channel will always be present, and therefore the divergence (\ref{div}) is universal. Moreover, the amplitude of $(\sigma-\sigma_{c})^{-1/2}$, as well as an excellent representation of the complete $t(\sigma)$ function may, for a given threshold distribution, be obtained by the same procedure as used above for the Weibull case.
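The accuracy of the channel estimate (\ref{estimateW}) is easy to check numerically by iterating (\ref{it-W}) directly. The following sketch is ours and only illustrative (the breakdown cutoff, mimicking a bundle of $N\sim10^{6}$ fibers, and the chosen values of $\epsilon$ are assumptions); it compares the directly counted number of steps with the estimate.
\begin{verbatim}
# Illustrative sketch (Python): direct iteration n_{t+1} = exp(-(sigma/n_t)^5)
# versus the channel estimate for the supercritical Weibull case.
import math

A = 2.5 * (5 * math.e) ** 0.2      # A = (5/2)(5e)^{1/5}
b = 5 ** 0.2                       # b = 5^{1/5}
sigma_c = (5 * math.e) ** -0.2     # critical stress

def t_direct(sigma, n_min=1e-6):
    n, t = 1.0, 0
    while n > n_min:               # proxy for complete breakdown
        n = math.exp(-(sigma / n) ** 5)
        t += 1
    return t

def t_estimate(sigma):
    eps = sigma - sigma_c
    r = math.exp(-0.2) * math.sqrt(A / eps)
    return 1 + (math.atan(r / (5 * b)) + math.atan(r / b)) / math.sqrt(A * eps)

for eps in (1e-2, 1e-3, 1e-4):
    s = sigma_c + eps
    print(eps, t_direct(s), round(t_estimate(s), 1))
\end{verbatim}
Both counts diverge like $\epsilon^{-1/2}$ as $\epsilon\to0$ and stay close to each other, consistent with Fig.~2B.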
\section{Subcritical relaxation} We now assume the external stress to be subcritical, $\sigma<\sigma_{c}$, and introduce the positive parameter \begin{equation} \varepsilon=\sigma_{c}-\sigma\end{equation} to characterize the deviation from the critical point. Also in the subcritical situation a bundle with the uniform threshold distribution (\ref{uniform}) can be analyzed analytically to a greater extent than for other distributions, and consequently we start with this case.
\subsection{Uniform threshold distribution} Using a method similar to the one in the supercritical situation we introduce into (\ref{ntuni}) \begin{equation} n_{t}={\textstyle \frac{1}{2}}+\sqrt{\varepsilon}/z_{t},\end{equation} as well as $\sigma=\frac{1}{4}-\varepsilon$, with the result \begin{equation} 2\sqrt{\varepsilon}=\frac{z_{t+1}-z_{t}}{1-z_{t+1}\; z_{t}}.\end{equation} In this case \begin{equation} z_{t}=\tanh w_{t}\end{equation} is the useful substitution. It gives \begin{equation} 2\sqrt{\varepsilon}=\frac{\tanh w_{t+1}-\tanh w_{t}}{1-\tanh w_{t+1}\;\tanh w_{t}}=\tanh(w_{t+1}-w_{t}).\end{equation} Thus $w_{t+1}-w_{t}=\tanh^{-1}(2\sqrt{\varepsilon})$, i.e. \begin{equation} w_{t}=w_{0}+t\;\tanh^{-1}(2\sqrt{\varepsilon}).\end{equation} Starting with $n_{0}=1$, we obtain $z_{0}=2\sqrt{\varepsilon}$ and hence \begin{equation} w_{t}=(1+t)\;\tanh^{-1}(2\sqrt{\varepsilon}).\end{equation} This corresponds to \begin{equation} n_{t}={\textstyle \frac{1}{2}}+\frac{\sqrt{\varepsilon}}{\tanh\left\{ (1+t)\tanh^{-1}(2\sqrt{\varepsilon})\right\} }\label{ntor}\end{equation} in the original variable. Apparently $n_{t}$ reaches a fixed point $n^{*}=\frac{1}{2}+\sqrt{\varepsilon}$ after an infinite number of iterations. However, our bundle contains a \textit{finite} number of fibers, and therefore only a finite number of steps is needed for the iteration to arrive at a fixed point $N^{*}$ of the integer iteration (\ref{Nt}), \begin{equation} N_{t+1}=N-[\sigma N^{2}/N_{t}].\label{intuni}\end{equation} Since $X-1<[X]\leq X$ a fixed point $N^{*}$ of (\ref{intuni}) must satisfy \cite{PBC} {\small \begin{equation} \frac{N}{2}\left(1+\sqrt{1-4\sigma}\right)\leq N^{*}<\frac{1}{2}\left(N+1+\sqrt{N^{2}(1-4\sigma)+2N+1}\right).\label{Nfix}\end{equation} } It is interesting to note that (\ref{intuni}) has in general several fixed points for a given value of $\sigma$. With $N=10^{6}$ and $\sigma=0.249$, for instance, there are nine fixed points, viz.\ $531623,531624,\ldots,531631$, the complete set of integers within the interval (\ref{Nfix}). Since our iteration starts high at $N_{0}=N$, with steadily decreasing values of $N_{t}$, it will stop at the \textit{upper} fixed point, the largest integer satisfying (\ref{Nfix}). As long as $N(1-4\sigma)\gg1$, which is fulfilled in our simulations, we may take \begin{equation} N_{{\rm u}}^{*}=\frac{N}{2}\left(1+\sqrt{1-4\sigma}\right)+\frac{1}{2}\left(1+(1-4\sigma)^{-1/2}\right)\label{app}\end{equation} as a good approximation to the upper fixed point (in the example above (\ref{app}) gives $531631.1$, compared with $N_{{\rm u}}^{*}=531631$). As a consequence we use \begin{equation} n_{t}=\frac{N_{{\rm u}}^{*}}{N}=\frac{1}{2}+\sqrt{\varepsilon}+\frac{1}{4N}\left(2+\varepsilon^{-1/2}\right)\end{equation} as the final value in (\ref{ntor}).
Consequently we obtain the following estimate for the number of iterations to reach this value: \begin{equation} t(\sigma)=-1+\frac{\coth^{-1}\left\{ 1+(1+2\sqrt{\varepsilon})/4N\varepsilon\right\} }{\tanh^{-1}(2\sqrt{\varepsilon})}.\label{subnorm}\end{equation} We see in Fig.\ 4A that the simulation data are well approximated by the analytic formula (\ref{subnorm}). For very large $N$ (\ref{subnorm}) is approximated by \begin{equation} t=\frac{\ln(8N\varepsilon)}{4}\;\varepsilon^{-1/2}.\end{equation} The critical behavior is again characterized by a square root divergence, in this case somewhat modified by a logarithmic term.
\subsection{General threshold distribution} Again we use the Weibull distribution (\ref{Weibull}) as an example threshold distribution. Simulation results for the subcritical Weibull distribution are shown in Fig.\ 4B.
\begin{center}\includegraphics[width=3in,height=2.5in]{relax-fig4A}\par\end{center} \begin{center}\includegraphics[width=3in,height=2.5in]{relax-fig4B}\par\end{center} {\footnotesize FIG.\ 4. Simulation results with subcritical stress for (A) the uniform threshold distribution, and (B) the Weibull distribution (\ref{Weibull}). The graphs are based on $10000$ samples with $N=10^{6}$ fibers in each bundle. The dotted lines are the theoretical estimates, eq.\ (\ref{subnorm}) for the uniform distribution case, and eqs.\ (\ref{par1}-\ref{par2}) for the Weibull case.} \\ Forgetting for the moment the finiteness of the fiber bundle, the iteration (\ref{n}), \begin{equation} n_{t+1}=1-P(\sigma/n_{t}),\end{equation} will reach a fixed point $n^{*}$ after infinitely many steps. The deviation from the fixed point, $n_{t}-n^{*}$, will decrease exponentially near the fixed point \cite{PC,PBC,BPC,PAS}: \begin{equation} n_{t}-n^{*}\propto e^{-t/\tau},\label{exp}\end{equation} with \begin{equation} \tau=1/\ln\left\{ n^{*2}\sigma^{-1}/p(\sigma/n^{*})\right\} .\end{equation} For the Weibull threshold distribution, in particular, \begin{equation} p(\sigma/n^{*})=5(\sigma/n^{*})^{4}\;\exp\left(-(\sigma/n^{*})^{5}\right)=5\sigma^{4}/n^{*3},\end{equation} and thus \begin{equation} \tau=1/\ln\!\left(n^{*5}/(5\sigma^{5})\right)\end{equation} for the Weibull case. If we allow ourselves to use the exponential formula (\ref{exp}) all the way from $n_{0}=1$, we obtain \begin{equation} n_{t}-n^{*}=(1-n^{*})e^{-t/\tau}.\label{fixW}\end{equation} For a finite number $N$ of fibers the iteration will stop after a finite number of steps. It is reasonable to assume that the iteration stops when $N_{t}-N^{*}$ is of order $1$. This corresponds to setting the left-hand side of (\ref{fixW}) equal to $1/N$. The corresponding number of iterations is then given by \begin{equation} t=\tau\;\ln\left(N(1-n^{*})\right)=\frac{\ln\left(N(1-n^{*})\right)}{\ln\!\left(n^{*5}/(5\sigma^{5})\right)}.\label{tWsub}\end{equation} Solving the Weibull iteration $n^{*}=\exp(-(\sigma/n^{*})^{5})$ with respect to $\sigma$ and inserting into (\ref{tWsub}), we obtain \begin{eqnarray} t & = & -\frac{\ln\left\{ N(1-n^{*})\right\} }{\ln\left\{ 5(-\ln n^{*})\right\} }\label{par1}\\ \sigma & = & n^{*}(-\ln n^{*})^{1/5}.\label{par2}\end{eqnarray} These two equations represent the function $t(\sigma)$ in parametric form, with $n^{*}$ running from $n_{c}=e^{-1/5}$ to $n^{*}=1$. In Fig.\ 4B this theoretical estimate is compared with the simulation data. The agreement is satisfactory. For $n^{*}=n_{c}=e^{-1/5}$ (\ref{par1}) shows that $t$ is infinite, as it should be.
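As a small numerical illustration (ours, not part of the original analysis), the parametric representation (\ref{par1})--(\ref{par2}) can be compared with a direct iteration of (\ref{n}) that is stopped once $n_{t}-n^{*}<1/N$, mimicking a finite bundle of $N$ fibers; the chosen values of $n^{*}$ are arbitrary.
\begin{verbatim}
# Illustrative sketch (Python): subcritical Weibull case; the parametric
# estimate t(sigma) versus a direct iteration stopped when n_t - n* < 1/N.
import math

N = 10**6

def t_direct(sigma):
    n_star = 1.0
    for _ in range(10000):                 # converge to the fixed point n*
        n_star = math.exp(-(sigma / n_star) ** 5)
    n, t = 1.0, 0
    while n - n_star >= 1.0 / N:
        n = math.exp(-(sigma / n) ** 5)
        t += 1
    return t

def parametric(n_star):
    sigma = n_star * (-math.log(n_star)) ** 0.2
    t = -math.log(N * (1.0 - n_star)) / math.log(5.0 * (-math.log(n_star)))
    return sigma, t

for n_star in (0.95, 0.90, 0.85):
    sigma, t_est = parametric(n_star)
    print(round(sigma, 4), t_direct(sigma), round(t_est, 1))
\end{verbatim}
Closer to the critical point (larger $\sigma$, i.e.\ $n^{*}$ approaching $n_{c}=e^{-1/5}$) the step counts grow, in accordance with the $(\sigma_{c}-\sigma)^{-1/2}$ divergence derived below.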
To investigate the critical neighborhood we put $n^{*}=n_{c}(1+\xi)$ with $\xi$ small, to obtain to lowest order \begin{eqnarray} t & = & {\cal O}(\xi^{-1}),\\ \sigma_{c}-\sigma & = & {\cal O}(\xi^{2}).\end{eqnarray} This gives, once more, the divergence \begin{equation} t(\sigma)\propto(\sigma_{c}-\sigma)^{-1/2}.\end{equation} For a general threshold distribution the same procedure may be followed. Using the exponential approach to the fixed point, as we have done, may seem doubtful. But the rationale is that for small $\sigma$ the starting point $n_{0}=1$ is already rather close to the fixed point, while for larger $\sigma$ it does not matter much if the first few iterations are not described well by the exponential formula, since in any case the majority of the iterations occur close to the fixed point.
\section{Concluding remarks} A detailed numerical and analytic study of the relaxation dynamics in finite fiber bundles subjected to external loads is presented. The relaxation takes place in a number, $t(\sigma)$, of steps: In each step all fibers weaker than the load per surviving fiber burst, and the relaxation proceeds until equilibrium is reached or until all fibers have failed. As a function of the initial stress $\sigma$, the number of steps, $t(\sigma)$, shows a divergence $\left|\sigma-\sigma_{c}\right|^{-1/2}$ at the critical point, both on the subcritical and supercritical side. This is a generic result, valid for a general probability distribution of the individual fiber breakdown thresholds. On the supercritical side $t(\sigma)$ is, for large $N$, independent of the system size $N$. On the subcritical side there is, however, a weak (logarithmic) $N$-dependence, as witnessed by eqs. ($46$), ($47$) and ($55$). The analytic estimates are based on the average strength of a group of fibers. The comparison with the simulation data, which are based on the probabilistic distribution of fiber strengths, shows that this is a satisfactory calculation procedure. \vskip.2in
\begin{center}\noun{Acknowledgments}\par\end{center} \vskip.1in S. P. thanks the Research Council of Norway (NFR) for financial support through Grants No.\ 166720/V30 and 177958/V30.
{ "timestamp": "2007-01-29T12:12:46", "yymm": "0701", "arxiv_id": "cond-mat/0701707", "language": "en", "url": "https://arxiv.org/abs/cond-mat/0701707", "abstract": "Under an applied external load the global load-sharing fiber bundle model, with individual fiber strength thresholds sampled randomly from a probability distribution, will relax to an equilibrium state, or to complete bundle breakdown. The relaxation can be viewed as taking place in a sequence of steps. In the first step all fibers weaker than the applied stress fail. As the total load is redistributed on the surviving fibers, a group of secondary fiber failures occur, etc. For a bundle with a finite number of fibers the process stops after a finite number of steps, $t$. By simulation and theoretical estimates, it is determined how $t$ depends upon the stress, the initial load per fiber, both for subcritical and supercritical stress. The two-sided critical divergence is characterized by an exponent -1/2, independent of the probability distribution of the fiber thresholds.", "subjects": "Statistical Mechanics (cond-mat.stat-mech); Soft Condensed Matter (cond-mat.soft)", "title": "Relaxation dynamics in strained fiber bundles", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9658995782141545, "lm_q2_score": 0.734119521083126, "lm_q1q2_score": 0.7090857357729684 }
https://arxiv.org/abs/1701.03378
Linearizing the Word Problem in (some) Free Fields
We describe a solution of the word problem in free fields (coming from non-commutative polynomials over a commutative field) using elementary linear algebra, provided that the elements are given by minimal linear representations. It relies on the normal form of Cohn and Reutenauer and can be used more generally to (positively) test rational identities. Moreover we provide a construction of minimal linear representations for the inverse of non-zero elements.
\section{Representing Elements}\label{sec:wp.rep} Although there are several ways of representing elements in (a subset of) the free field (linear representation \cite{Cohn1999a}, linearization \cite{Cohn1985a}, realization \cite{Helton2006a}, proper linear system \cite{Salomaa1978a}, etc.) the concept of a \emph{linear representation} seems to be the most convenient. It has the advantage (among others) that in the special case of regular elements, the general definition of the rank coincides with the Hankel rank \cite{Fliess1974a}, \cite[Section~II.3]{Salomaa1978a}.\index{Hankel rank} Closely related to LRs are \emph{admissible linear systems} (ALS for short) \cite{Cohn1985a}, which could be seen as a special case. Both notations will be used synonymously. Depending on the context an ALS will be written as a triple, for example $\als{A} = (u,A,v)$, or as a linear system $As = v$, sometimes as $u = tA$. Like the rational operations defined on linear representation level \cite{Cohn1999a}, similar constructions can be done easily on ALS level. Thus, starting from systems for monomials (Proposition~\ref{pro:min.mon}) only, a representation for \emph{each} element in the free field can be constructed recursively.
\smallskip \begin{notation} Zero entries in matrices are usually replaced by (lower) dots to stress the structure of the non-zero entries unless they result from transformations where there were possibly non-zero entries before. We denote by $I_n$ the identity matrix and $\Sigma_n$ the permutation matrix that reverses the order of rows/columns of size $n$. If the size is clear from the context, $I$ and $\Sigma$ are used respectively. \end{notation}
\smallskip Let $\field{K}$ be a \emph{commutative} field and $X = \{ x_1, x_2, \ldots, x_d\}$ be a \emph{finite} alphabet. $\ncPOLY{\field{K}}{X}$ denotes the \emph{free associative algebra}\index{free associative algebra} (or ``algebra of non-commutative polynomials'') and $\freeFLD{\field{K}}{X}$ denotes the \emph{universal field of fractions}\index{universal field of fractions} (or ``free field'') of $\ncPOLY{\field{K}}{X}$ \cite{Cohn1995a}, \cite{Cohn1999a}. In the examples the alphabet is usually $X=\{x,y,z\}$.
\begin{definition}[Inner Rank, Full Matrix, Hollow Matrix \cite{Cohn1985a}, \cite{Cohn1999a}] \index{inner rank}\index{full matrix}\index{hollow matrix}%
Given a matrix $A \in \ncPOLY{\field{K}}{X}^{n \times n}$, the \emph{inner rank} of $A$ is the smallest number $m\in \mathbb{N}$ such that there exists a factorization $A = T U$ with $T \in \ncPOLY{\field{K}}{X}^{n \times m}$ and $U \in \ncPOLY{\field{K}}{X}^{m \times n}$. The matrix $A$ is called \emph{full} if $m = n$, \emph{non-full} otherwise. It is called \emph{hollow} if it contains a zero submatrix of size $k \times l$ with $k + l > n$. \end{definition}
\begin{definition}[Associated and Stably Associated Matrices \cite{Cohn1995a}]\label{def:wp.ass} \index{associated matrix}\index{stable associated matrix}%
Two matrices $A$ and $B$ over $\ncPOLY{\field{K}}{X}$ (of the same size) are called \emph{associated} over a subring $R\subseteq \ncPOLY{\field{K}}{X}$ if there exist invertible matrices $P,Q$ over $R$ such that $A = P B Q$. $A$ and $B$ (not necessarily of the same size) are called \emph{stably associated} if $A\oplus I_p$ and $B\oplus I_q$ are associated for some unit matrices $I_p$ and $I_q$. Here by $C \oplus D$ we denote the diagonal sum $\bigl[\begin{smallmatrix} C & . \\ . & D \end{smallmatrix}\bigr]$.
\end{definition} In general it is hard to decide whether a matrix is full or not. For a linear matrix, that is, a matrix of the form $A = A_0 \otimes 1 + A_1 \otimes x_1 + \ldots + A_d \otimes x_d$ with $A_\ell$ over $\field{K}$, the following criterion is known, which is used in (the proof of) Theorem~\ref{thr:cohn99.41}. If a matrix over $\ncPOLY{\field{K}}{X}$ is not linear, then Higman's trick \cite[Section~5.8]{Cohn1985a \ can be used to linearize it by enlargement. The inner rank is also discussed in \cite{Fortin2004a . \begin{lemma}[% \protect{\cite[Corollary~6.3.6]{Cohn1995a }]\label{lem:cohn95.636} A linear square matrix over $\ncPOLY{\field{K}}{X}$ which is not full is associated over $\field{K}$ to a linear hollow matrix. \end{lemma} \begin{definition}[Linear Representations \cite{Cohn1994a ]\label{def:rep} \index{linear representation}% Let $f \in \freeFLD{\field{K}}{X}$. A \emph{linear representation} of $f$ is a triple $(u,A,v)$ with $u \in \field{K}^{1 \times n}$, $A = A_0 \otimes 1 + A_1 \otimes x_1 + \ldots + A_d \otimes x_d$, $A_\ell \in \field{K}^{n\times n}$ and $v \in \field{K}^{n\times 1}$ such that $A$ is full, that is, $A$ is invertible over the free field $\freeFLD{\field{K}}{X}$, and $f = u A^{-1} v$. The \emph{dimension} of the representation is $\dim \, (u,A,v) = n$. It is called \emph{minimal} if $A$ has the smallest possible dimension among all linear representations of $f$. The ``empty'' representation $\pi = (,,)$ is the minimal representation for $0 \in \freeFLD{\field{K}}{X}$ with $\dim \pi = 0$. \end{definition} \begin{remark} In Definition~\ref{def:wp.lin} it can be seen that $f=u A^{-1} v$ is (up to sign) the \emph{Schur complement} of the linearization $\bigl[\begin{smallmatrix} 0 & u \\ v & A \end{smallmatrix}\bigr]$ with respect to the upper left $1 \times 1$ block. \end{remark} \begin{definition}[\cite{Cohn1999a ] \index{equivalent linear representations}% Two linear representations are called \emph{equivalent} if they represent the same element. \end{definition} \begin{definition}[Rank \cite{Cohn1999a ] \index{rank}% Let $f \in \freeFLD{\field{K}}{X}$ and $\pi$ be a \emph{minimal} representation of $f$. Then the \emph{rank} of $f$ is defined as $\rank f = \dim \pi$. \end{definition} \begin{remark} The connection to the related concepts of \emph{inversion height} and \emph{depth} can be found in \cite{Reutenauer1996a , namely \emph{inversion height} $\le$ \emph{depth} $\le$ \emph{rank}. Additional discussion about the depth appears in \cite[Section 7.7]{Cohn2006a . \end{remark} \begin{definition}\label{def:rep.reg} Let $M = M_1 \otimes x_1 + \ldots + M_d \otimes x_d$. An element in $\freeFLD{\field{K}}{X}$ is called \emph{regular}, if it has a linear representation $(u,A,v)$ with $A = I - M$, that is, $A_0 = I$ in Definition~\ref{def:rep}, or equivalently, if $A_0$ is regular (invertible). \end{definition} \begin{definition}[Left and Right Families \cite{Cohn1994a ]\label{def:cohn94.family} \index{left family}\index{right family}% Let $\pi=(u,A,v)$ be a linear representation of $f \in \freeFLD{\field{K}}{X}$ of dimension $n$. The families $( s_1, s_2, \ldots, s_n )\subseteq \freeFLD{\field{K}}{X}$ with $s_i = (A^{-1} v)_i$ and $( t_1, t_2, \ldots, t_n )\subseteq \freeFLD{\field{K}}{X}$ with $t_j = (u A^{-1})_j$ are called \emph{left family} and \emph{right family} respectively. $L(\pi) = \linsp \{ s_1, s_2, \ldots, s_n \}$ and $R(\pi) = \linsp \{ t_1, t_2, \ldots, t_n \}$ denote their linear spans. 
\end{definition} \begin{remark} The left family $(A^{-1} v)_i$ (respectively the right family $(u A^{-1})_j$) and the solution vector $s$ of $As = v$ (respectively $t$ of $u = tA$) will be used synonymously. \end{remark} \begin{proposition}[% \cite{Cohn1994a , Proposition 4.7] A representation $\pi=(u,A,v)$ of an element $f \in \freeFLD{\field{K}}{X}$ is minimal if and only if both, the left family and the right family are $\field{K}$-linearly independent. \label{pro:cohn94.47} \end{proposition} \begin{definition}[Admissible Linear Systems \cite{Cohn1972a ]\label{def:wp.als} \index{admissible linear system}\index{ALS|see{admissible linear system}}% A linear representation $\als{A} = (u,A,v)$ of $f \in \freeFLD{\field{K}}{X}$ is called \emph{admissible linear system} (for $f$), denoted by $A s = v$, if $u=e_1=[1,0,\ldots,0]$. The element $f$ is then the first component of the (unique) solution vector $s$. \end{definition} \begin{remark} In \cite{Cohn1985a , Cohn defines admissible linear systems with $v = v_0 \otimes 1 + v_1 \otimes x_1 + \ldots + v_d \otimes x_d$ with $v_i \in \field{K}^{n \times 1}$, and $u = [0,\ldots,0,1]$. Writing $B = [-v,A]$ as block of size $n \times (n+1)$ the first $n$ columns of $B$ serve as numerator and the last $n$ columns of $B$ as denominator. However, in this setting, for regular elements, the dimension of such a minimal system could differ from the Hankel rank \cite{Fliess1974a , \cite[Section~II.3]{Salomaa1978a .\index{Hankel rank} \end{remark} \begin{definition}[Admissible Transformations] \label{def:trn.adm} \index{admissible transformation}% Given a linear representation $\als{A} = (u,A,v)$ of dimension $n$ of $f \in \freeFLD{\field{K}}{X}$ and invertible matrices $P,Q \in \field{K}^{n\times n}$, the transformed $P\als{A}Q = (uQ, PAQ, Pv)$ is again a linear representation (of $f$). If $\als{A}$ is an ALS, the transformation $(P,Q)$ is called \emph{admissible} if the first row of $Q$ is $e_1 = [1,0,\ldots,0]$. \end{definition} \begin{Remark}[Elementary Transformations]\label{rem:trn.ele} In practice, transformations can be done by elementary row- and column operations (with respect to the system matrix $A$). If we add $\alpha$-times row~$i$ to row~$j\neq i$ in $A$, we also have to do this in $v$. If we add $\beta$-times column~$i$ to column~$j\neq i$ we have to \emph{subtract} $\beta$-times row~$j$ from row~$i$ in $s$. Since it is not allowed to change the first entry of $s$, column~1 cannot be used to eliminate entries in other columns! As an example consider the ALS \begin{displaymath} \begin{bmatrix} 1 & x-1 \\ . & 1 \end{bmatrix} s = \begin{bmatrix} . \\ 1 \end{bmatrix},\quad s = \begin{bmatrix} 1-x \\ 1 \end{bmatrix} \end{displaymath} for the element $1-x \in \freeFLD{\field{K}}{X}$. Adding column~1 to column~2, that is, $Q = \bigl[\begin{smallmatrix} 1 & 1 \\ . & 1 \end{smallmatrix}\bigr]$ (and $P = I$), yields the ALS \begin{displaymath} \begin{bmatrix} 1 & x \\ . & 1 \end{bmatrix} s = \begin{bmatrix} . \\ 1 \end{bmatrix},\quad s = \begin{bmatrix} -x \\ 1 \end{bmatrix} \end{displaymath} for the element $-x \neq 1-x$. \end{Remark} \begin{proposition}[Rational Operations]\label{pro:wp.ratop} Let $f,g,h \in \freeFLD{\field{K}}{X}$ be given by the admissible linear systems $\als{A}_f = (u_f, A_f, v_f)$, $\als{A}_g = (u_g, A_g, v_g)$ and $\als{A}_h = (u_h, A_h, v_h)$ respectively, with $h \ne 0$ and let $\mu \in \field{K}$. 
Then admissible linear systems for the rational operations can be obtained as follows: \smallskip\noindent The scalar multiplication $\mu f$ is given by \begin{displaymath} \mu \als{A}_f = \bigl( u_f, A_f, \mu v_f \bigr). \end{displaymath} The sum $f + g$ is given by \begin{displaymath} \als{A}_f + \als{A}_g = \left( \begin{bmatrix} u_f & . \end{bmatrix}, \begin{bmatrix} A_f & -A_f u_f^{\!\top} u_g \\ . & A_g \end{bmatrix}, \begin{bmatrix} v_f \\ v_g \end{bmatrix} \right). \end{displaymath} The product $fg$ is given by \begin{displaymath} \als{A}_f \cdot \als{A}_g = \left( \begin{bmatrix} u_f & . \end{bmatrix}, \begin{bmatrix} A_f & -v_f u_g \\ . & A_g \end{bmatrix}, \begin{bmatrix} . \\ v_g \end{bmatrix} \right). \end{displaymath} And the inverse $h^{-1}$ is given by \begin{displaymath} \als{A}_h^{-1} = \left( \begin{bmatrix} 1 & . \end{bmatrix}, \begin{bmatrix} -v_h & A_h \\ . & u_h \end{bmatrix}, \begin{bmatrix} . \\ 1 \end{bmatrix} \right). \end{displaymath} \end{proposition} \begin{proof} The reader can easily verify that the solution vectors for the admissible linear systems defined above are \begin{displaymath} \mu s_f, \quad \begin{bmatrix} s_f + u_f^{\!\top} g \\ s_g \end{bmatrix}, \quad \begin{bmatrix} s_f g \\ s_g \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} h^{-1} \\ s_h h^{-1} \end{bmatrix} \end{displaymath} respectively, compare \cite{Cohn1995a \ and \cite{Cohn1999a . It remains to check that the system matrices are full. For the sum and the product this is clear from the fact that the free associative algebra ---being a \emph{free ideal ring} (FIR)--- has \emph{unbounded generating number} (UGN) and therefore the diagonal sum of full matrices is full, see \cite[Section~7.3]{Cohn1985a . The system matrix for the inverse is full because $h\ne 0$ and therefore the linearization of $\als{A}_h$ is full, compare \cite[Section~4.5]{Cohn1995a . \end{proof} \begin{Remark}\label{rem:span.lf} For the rational operations from Proposition~\ref{pro:wp.ratop} we observe that the left families satisfy the relations \begin{align*} L(\mu \als{A}_f) &= L(\als{A}_f), \\ L(\als{A}_f + \als{A}_g) &= L(\als{A}_f) + L(\als{A}_g) \quad\text{and}\quad \\ L(\als{A}_g) &\subseteq L(\als{A}_f \cdot \als{A}_g) = L(\als{A}_f) g + L(\als{A}_g). \end{align*} And similarly for the right families we have \begin{align*} R(\mu \als{A}_f) &= R(\als{A}_f), \\ R(\als{A}_f + \als{A}_g) &= R(\als{A}_f) + R(\als{A}_g) \quad\text{and}\quad \\ R(\als{A}_f) &\subseteq R(\als{A}_f \cdot \als{A}_g) = R(\als{A}_f) + f R(\als{A}_g). \end{align*} \end{Remark} \begin{definition}[Linearization \cite{Belinschi2013a , \cite{Cohn1999a ]\label{def:wp.lin} \index{linearization}% Let $f \in \freeFLD{\field{K}}{X}$. A \emph{linearization} of $f$ is a matrix $L = L_0 \otimes 1 + L_1 \otimes x_1 + \ldots + L_d \otimes x_d$, with $L_\ell \in \field{K}^{m \times m}$, of the form \begin{displaymath} L = \begin{bmatrix} c & u \\ v & A \end{bmatrix} \quad \in \ncPOLY{\field{K}}{X}^{m \times m} \end{displaymath} such that $A$ is invertible over the free field and $f$ is the Schur complement, that is, $f = c - u A^{-1} v$. If $c=0$ then $L$ is called a \emph{pure} linearization. The \emph{size} of the linearization is $\size L = m$, the \emph{dimension} is $\dim L = m-1$. 
\end{definition} \begin{proposition}[% \protect{\cite[Proposition~3.2]{Belinschi2013a }]\label{pro:belinschi13.32} Let $\fsf{F} = \freeFLD{\field{K}}{X}$ and $A\in \fsf{F}^{k \times k}$, $B \in \fsf{F}^{k \times l}$, $C \in \fsf{F}^{l \times k}$ and $D \in \fsf{F}^{l \times l}$ be given and assume that $D$ is invertible in $\fsf{F}^{l \times l}$. Then the matrix $\bigl[\begin{smallmatrix} A & B \\ C & D \end{smallmatrix}\bigr]$ is invertible in $\fsf{F}^{(k+l) \times (k+l)}$ if and only if the \emph{Schur complement} $A-BD^{-1} C$ is invertible in $\fsf{F}$. In this case \begin{displaymath} \begin{bmatrix} A & B \\ C & D \end{bmatrix} ^{-1} = \begin{bmatrix} . & . \\ . & D^{-1} \end{bmatrix} + \begin{bmatrix} I_k \\ -D^{-1} C \end{bmatrix} \bigl(A - BD^{-1} C\bigr)^{-1} \begin{bmatrix} I_k & -BD^{-1} \end{bmatrix}. \end{displaymath} \end{proposition} \begin{Remark} (i) Let $f \in \freeFLD{\field{K}}{X}$ be given by the linearization $L$. Then $f = \qdet{L}_{1,1}$ is the $(1,1)$-quasideterminant \cite{Gelfand2005a \ of $L$. (ii) Given a linear representation $(u,A,v)$ of $f \in \freeFLD{\field{K}}{X}$, then $L = \bigl[\begin{smallmatrix} . & u \\ -v & A \end{smallmatrix}\bigr]$ is a pure linearization of $f$. (iii) Talking about a \emph{minimal} linearization, one has to specify which class of matrices is considered: Scalar entries in the first row and column? Pure? And, if applicable, selfadjoint? \end{Remark} \begin{proposition}\label{pro:lin.ext} Let \begin{displaymath} L = \begin{bmatrix} c & u \\ v & A \end{bmatrix} \end{displaymath} be a linearization of size $n$ for some element $f\in\freeFLD{\field{K}}{X}$ and define another element $g\in \freeFLD{\field{K}}{X}$ by the pure linearization \begin{displaymath} \tilde{L} = \begin{bmatrix} . & \tilde{u} \\ \tilde{v} & \tilde{A} \end{bmatrix} \quad\text{with}\quad \tilde{A} = \begin{bmatrix} c & u & -1 \\ v & A & . \\ -1 & . & . \end{bmatrix},\quad \tilde{u} = [0,\ldots,0,1],\quad\tilde{v} = \tilde{u}^{\!\top} \end{displaymath} of size $n+2$. Then $g = f$. \end{proposition} \begin{proof} Using Proposition~\ref{pro:belinschi13.32} ---taking the Schur complement with respect to the block entry $(2,2)$--- and $b = [-1,0,\ldots,0]$, the inverse of $\tilde{A}=\bigl[\begin{smallmatrix} L & b^{\!\top} \\ b & . \end{smallmatrix}\bigr]$ can be written as \begin{displaymath} \tilde{A}^{-1} = \begin{bmatrix} L^{-1} & . \\ . & . \end{bmatrix} -\begin{bmatrix} -L^{-1} b^{\!\top} \\ 1 \end{bmatrix} \bigl(b L^{-1} b^{\!\top}\bigr)^{-1} \begin{bmatrix} - b L^{-1} & 1 \end{bmatrix}. \end{displaymath} Hence \begin{align*} -\tilde{u}\tilde{A}^{-1}\tilde{v} &= -\Bigl( \begin{bmatrix} . & . \end{bmatrix} - (b L^{-1} b^{\!\top})^{-1} \begin{bmatrix} -b L^{-1} & 1 \end{bmatrix} \Bigr) \begin{bmatrix} . \\ 1 \end{bmatrix} \\ &= \bigl(b L^{-1} b^{\!\top}\bigr)^{-1} \\ &= \Biggl( b \left( \begin{bmatrix} . & . \\ . & A^{-1} \end{bmatrix} + \begin{bmatrix} 1 \\ -A^{-1} v \end{bmatrix} \bigl(c - u A^{-1} v\bigr)^{-1} \begin{bmatrix} 1 & -u A^{-1} \end{bmatrix} \right) b^{\!\top} \Biggr)^{-1} \\ &= \Biggl( \left( \begin{bmatrix} . & . \end{bmatrix} -\bigl(c - u A^{-1} v\bigr)^{-1} \begin{bmatrix} 1 & -u A^{-1} \end{bmatrix} \right) \begin{bmatrix} -1 \\ . \end{bmatrix} \Biggr)^{-1} \\ &= c - u A^{-1} v . \end{align*} \end{proof} If the first row and column of a linearization for some $f \in \freeFLD{\field{K}}{X}$ contains non-scalar entries, then Proposition \ref{pro:lin.ext} can be used to construct a linear representation of $f$. 
On the other hand, given a linear representation of dimension $n$ (of $f$) which can be brought to such a form, a linearization of size $n-1$ can be obtained. The characterization of minimality for linearizations will be considered in future work.
\begin{example}\label{ex:anticomm} For the anticommutator $xy + yx$ a minimal ALS is given by \begin{displaymath} \left( \begin{bmatrix} 1 & . & . & . \end{bmatrix}, \begin{bmatrix} 1 & -x & -y & . \\ . & 1 & . & -y \\ . & . & 1 & -x \\ . & . & . & 1 \end{bmatrix}, \begin{bmatrix} . \\ . \\ . \\ 1 \end{bmatrix} \right). \end{displaymath} Permuting the columns and multiplying the system matrix by $-1$ we get the linearization \begin{displaymath} L'_{xy+yx} = \begin{bmatrix} . & . & . & . & 1 \\ . & . & y & x & -1 \\ . & y & . & -1 & . \\ . & x & -1 & . & . \\ 1 & -1 & . & . & . \end{bmatrix} \end{displaymath} which is of the form in Proposition \ref{pro:lin.ext} and yields a \emph{minimal} (pure) linearization of the anticommutator \begin{displaymath} L_{xy+yx} = \begin{bmatrix} . & y & x \\ y & . & -1 \\ x & -1 & . \end{bmatrix}. \end{displaymath} \end{example}
\begin{definition}[Realization \cite{Helton2006a}]\label{def:wp.realization} \index{realization}\index{butterfly realization}%
A \emph{realization} of a matrix $F \in \freeFLD{\field{K}}{X}^{p \times q}$ is a quadruple $(A,B,C,D)$ with $A = A_0 \otimes 1 + A_1 \otimes x_1 + \ldots + A_d \otimes x_d$, $A_\ell \in \field{K}^{n \times n}$, $B \in \field{K}^{n \times q}$, $C \in \field{K}^{p \times n}$ and $D \in \field{K}^{p \times q}$ such that $A$ is invertible over the free field and $F = D - C A^{-1} B$. The \emph{dimension} of the realization is $\dim\,(A,B,C,D) = n$. \end{definition}
\begin{remark} A realization $\mathcal{R} = (A,B,C,D)$ could be written in block form \begin{displaymath} L_\mathcal{R} = \begin{bmatrix} D & C \\ B & A \end{bmatrix}\quad \in \ncPOLY{\field{K}}{X}^{(p+n) \times (q+n)}. \end{displaymath} Here, the definition is such that $F = \qdet{L_\mathcal{R}}_{1',1'}$ is the $(1,1)$-block-quasideterminant \cite{Gelfand2005a} with respect to block $D$. For $A = -J+L_A(X)$ we obtain the \emph{descriptor realization} in \cite{Helton2006a}. Realizations where $B$ and/or $C$ contain non-scalar entries are sometimes called ``butterfly realizations'' \cite{Helton2006a}. Minimality with respect to realizations is investigated in \cite{Volcic2015a}. \end{remark}
\section{The Word Problem}\label{sec:wp.wp} Let $f,g \in \freeFLD{\field{K}}{X}$ be given by the linear representations $\pi_f = (u_f, A_f, v_f)$ and $\pi_g = (u_g, A_g, v_g)$ of dimension $n_f$ and $n_g$ respectively and define the matrix \begin{displaymath} M = \begin{bmatrix} . & u_f & u_g \\ v_f & A_f & . \\ v_g & . & -A_g \end{bmatrix}, \end{displaymath} which is a linearization of $f-g$, of size $n = n_f + n_g + 1$. Then $f=g$ if and only if $M$ is not full \cite[Section~4.5]{Cohn1995a}. For the word problem see also \cite[Section~6.6]{Cohn1995a}. Whether $M$ is full or not can be decided by the following theorem.
\begin{theorem}[%
\protect{\cite[Theorem 4.1]{Cohn1999a}}]\label{thr:cohn99.41} For each $r\in \{1,2,\ldots,n\}$, denote by $I_r$ the ideal of $\field{K}[a,b]$ generated by the polynomials $\det A - 1$, $\det B -1$ and the coefficients of each $x \in \{ 1 \} \cup X$ in the $(i,j)$ entries of the matrix $AMB$ for $1 \le i \le r$, $r \le j \le n$. Then the linear matrix $M$ is full if and only if for each $r\in \{1,2,\ldots,n\}$, the ideal $I_r$ is trivial.
\end{theorem} \begin{remark} Notice that there is a misprint in \cite{Cohn1999a \ and the coefficients of $M_0$ are omitted. \end{remark} So far we were not able to apply this theorem practically for $n \ge 5$, where $50$ or more unknowns are involved. However, if we have any ALS (or linear representation) for $f-g$, say from Proposition \ref{pro:wp.ratop}, then we could check whether it can be (admissibly) transformed into a smaller system, for example $A's' = 0$. For polynomials (with $A = I-Q$ and $Q$ upper triangular and nilpotent) this could be done row by row. In general the pivot blocks (the blocks in the diagonal) can be arbitrarily large. Therefore this elimination has to be done \emph{blockwise} by setting up a single linear system for row \emph{and} column operations. This idea is used in the following lemma. Note that the existence of a solution for this linear system is \emph{invariant} under admissible transformations (on the subsystems). This is a key requirement since the normal form \cite{Cohn1994a \ is defined modulo similarity transformations (more general by stable association, Definition~\ref{def:wp.ass}). \begin{theorem}[% \protect{\cite[Theorem 1.4]{Cohn1999a }]\label{thr:cohn99.14} If $\pi' = (u',A',v')$ and $\pi''=(u'',A'',v'')$ are equivalent (pure) linear representations, of which the first is minimal, then the second is isomorphic to a representation $(u,A,v)$ which has the block decomposition \begin{displaymath} u = \begin{bmatrix} * & u' & . \end{bmatrix},\quad A = \begin{bmatrix} * & . & . \\ * & A' & . \\ * & * & * \end{bmatrix} \quad\text{and}\quad v = \begin{bmatrix} . \\ v' \\ * \end{bmatrix}. \end{displaymath} \end{theorem} \smallskip \begin{lemma}\label{lem:wp.ri} Let $f,g \in \freeFLD{\field{K}}{X}$ be given by the admissible linear systems $\als{A}_f = (u_f, A_f, v_f)$ and $\als{A}_g = (u_g, A_g, v_g)$ of dimension $n_f$ and $n_g$ respectively. If there exist matrices $T,U \in \field{K}^{n_f \times n_g}$ such that $u_f U = 0$, $T A_g - A_f U = A_f u_f^{\!\top} u_g$ and $T v_g = v_f$, then $f = g$. \end{lemma} \begin{proof} The difference $f - g$ can be represented by the admissible linear system $A s = v$ with \begin{displaymath} A = \begin{bmatrix} A_f & -A_f u_f^{\!\top} u_g \\ . & A_g \end{bmatrix}, \quad s = \begin{bmatrix} s_f - u_f^{\!\top} g \\ -s_g \end{bmatrix} \quad\text{and}\quad v = \begin{bmatrix} v_f \\ -v_g \end{bmatrix}. \end{displaymath} Defining the (invertible) transformations \begin{displaymath} P = \begin{bmatrix} I_{n_f} & T \\ . & I_{n_g} \end{bmatrix} \quad\text{and}\quad Q = \begin{bmatrix} I_{n_f} & -U \\ . & I_{n_g} \end{bmatrix} \end{displaymath} and $A' = P A Q$, $s' = Q^{-1} s$ and $v' = P v$ we get a new system $A' s' = v'$: \begin{align*} A' &= \begin{bmatrix} I_{n_f} & T \\ . & I_{n_g} \end{bmatrix} \begin{bmatrix} A_f & -A_f u_f^{\!\top} u_g \\ . & A_g \end{bmatrix} \begin{bmatrix} I_{n_f} & -U \\ . & I_{n_g} \end{bmatrix} \\ &= \begin{bmatrix} A_f & - A_f u_f^{\!\top} u_g + TA_g \\ . & A_g \end{bmatrix} \begin{bmatrix} I_{n_f} & -U \\ . & I_{n_g} \end{bmatrix} \\ &= \begin{bmatrix} A_f & -A_f u_f^{\!\top} u_g + T A_g - A_f U \\ . & A_g \end{bmatrix} = \begin{bmatrix} A_f & 0 \\ . & A_g \end{bmatrix},\\ s' &= \begin{bmatrix} I_{n_f} & U \\ . & I_{n_g} \end{bmatrix} \begin{bmatrix} s_f - u_f^{\!\top} g \\ -s_g \end{bmatrix} = \begin{bmatrix} s_f - u_f^{\!\top} g - U s_g \\ -s_g \end{bmatrix}, \\ v' &= \begin{bmatrix} I_{n_f} & T \\ . 
& I_{n_g} \end{bmatrix} \begin{bmatrix} v_f \\ -v_g \end{bmatrix} = \begin{bmatrix} v_f -T v_g \\ -v_g \end{bmatrix}. \end{align*} Invertibility of $A'$ over the free field implies $s_f - u_f^{\!\top} g - U s_g = 0$, in particular \begin{align*} 0 &= u_f s_f - u_f u_f^{\!\top} g - u_f U s_g \\ &= f - g \end{align*} because $u_f U = 0$. \end{proof}
Let $d$ be the number of letters in the alphabet $X$, $\dim \als{A}_f = n_f$ and $\dim \als{A}_g = n_g$. To determine the transformation matrices $T,U \in \field{K}^{n_f \times n_g}$ from the lemma we just have to solve a linear system of $(d+1)n_f(n_g+1)$ equations in $2 n_f n_g$ unknowns. If there is a solution then $f=g$. Neither $\als{A}_f$ nor $\als{A}_g$ has to be minimal. Computer experiments show that Hua's identity \cite{Amitsur1966a} \begin{displaymath} x - \bigl(x^{-1} + (y^{-1} - x)^{-1} \bigr)^{-1} = xyx \end{displaymath} can be tested positively by Lemma~\ref{lem:wp.ri} when the left-hand side is constructed by the rational operations from Proposition \ref{pro:wp.ratop}. However, without assuming minimality, the fact that there is no solution does \emph{not} imply that $f\neq g$, see Example~\ref{ex:wp.path} below.
\begin{theorem}[``Linear solution'' of the Word Problem]\label{thr:wp.wp} Let $f,g \in \freeFLD{\field{K}}{X}$ be given by the \emph{minimal} admissible linear systems $\als{A}_f = (u_f, A_f, v_f)$ and $\als{A}_g = (u_g, A_g, v_g)$ of dimension $n$ respectively. Then $f = g$ if and only if there exist matrices $T,U \in \field{K}^{n\times n}$ such that $u_f U = 0$, $T A_g - A_f U= A_f u_f^{\!\top} u_g$ and $T v_g = v_f$. \end{theorem}
\begin{proof} If $f=g$ then, since admissible linear systems correspond to (pure) linear representations, by Theorem~\ref{thr:cohn99.14} there exist invertible matrices $P, Q \in \field{K}^{n\times n}$ such that $A_f = P A_g Q$ and $v_f = P v_g$. Let $T = P$ and $U = Q^{-1} - u_f^{\!\top} u_g$. The admissible linear systems are minimal. Hence, the left family $s_f$ is $\field{K}$-linearly independent. Since the first component of $s_g$ is equal to the first component of $s_f = Q^{-1} s_g$ and the left family $s_g$ is $\field{K}$-linearly independent, the first row of $Q^{-1}$ must be $[1,0,\ldots,0]$. Therefore $u_f U = u_f(Q^{-1} - u_f^{\!\top} u_g) = 0$. Clearly $v_f = T v_g$ and \begin{align*} T A_g - A_f U & = P A_g - A_f Q^{-1} + A_f u_f^{\!\top} u_g \\ &= A_f u_f^{\!\top} u_g. \end{align*} The other implication follows from Lemma~\ref{lem:wp.ri}. \end{proof}
\begin{example}\label{ex:wp.path} Let $f=x^{-1}$ and $g=x^{-1}$ be given by the admissible linear systems \begin{displaymath} [x] s_f = [1] \quad\text{and}\quad \begin{bmatrix} x & -z \\ . & 1 \end{bmatrix} s_g = \begin{bmatrix} 1 \\ . \end{bmatrix} \end{displaymath} respectively. Then the ALS \begin{displaymath} \begin{bmatrix} x & -x & . \\ . & x & -z \\ . & . & 1 \end{bmatrix} s = \begin{bmatrix} 1 \\ -1 \\ . \end{bmatrix},\quad s = \begin{bmatrix} 0 \\ -x^{-1} \\ 0 \end{bmatrix} \end{displaymath} represents $f - g = 0$. \end{example}
While it is obvious here that the second component of the solution vector $s_g$ is zero, it is not clear in general how one can exclude such ``pathological'' LRs without a minimality assumption. One might ask for which class of constructions (rational operations \cite{Cohn1999a}, Higman's trick \cite{Higman1940a,Cohn1985a}, selfadjoint linearization trick \cite{Anderson2013a}, etc.)
there are sufficient conditions for the existence of matrices $T,U$ (over $\field{K}$) in Lemma~\ref{lem:wp.ri} if $f=g$. Unfortunately this seems to be impossible except for some specific examples as the following ALS (constructed by the rational operations from Proposition~\ref{pro:wp.ratop}) for $x - x y y^{-1} = 0$ suggests (some zeros are kept to emphasize the block structure): \begin{displaymath} \begin{bmatrix} 1 & -x & -1 & . & . & . & . & . & . \\ 0 & 1 & . & . & . & . & . & . & . \\ . & . & 1 & -x & . & . & . & . & . \\ . & . & 0 & 1 & -1 & . & . & . & . \\ . & . & . & . & 0 & 1 & -y & . & . \\ . & . & . & . & -1 & 0 & 1 & . & . \\ . & . & . & . & 0 & 1 & 0 & -1 & . \\ . & . & . & . & . & . & . & 1 & -y \\ . & . & . & . & . & . & . & 0 & 1 \end{bmatrix} s = \begin{bmatrix} . \\ 1 \\ . \\ . \\ . \\ . \\ . \\ . \\ -1 \end{bmatrix}. \end{displaymath} There are no $T,U$ and therefore no $P,Q$ (admissible, with blocks $T,U$) such that $PAQ$ has a $2\times 7$ upper right block of zeros and the first two components of $Pv$ are zero. Therefore we would like to construct minimal linear representations directly. For regular elements, algorithms are known, see Section~\ref{sec:wp.reg}. How to proceed in general is open except for the inverse. This is discussed in Section~\ref{sec:wp.min}. \section{Regular Elements}\label{sec:wp.reg} For regular elements (Definition \ref{def:rep.reg}) in the free field minimal linear representations can be obtained via the Extended Ho-Algorithm \cite{Fornasini1980a \ from the Hankel matrix\index{Hankel matrix} or by minimizing a given linear representation via the algorithm in \cite{Cardon1980a \ by detecting linearly dependent rows in the controllability matrix and linearly dependent columns in the observability matrix, see Definition~\ref{def:mtx.co}. The basic idea goes back to Schützenberger \cite{Schutzenberger1961b . Controllability and observability is discussed in \cite[Chapter~10]{Kalman1969a . \smallskip For an alphabet $X = \{ x_1, x_2, \ldots, x_d \}$ (finite, non-empty set), the free monoid generated by $X$ is denoted by $X^*$. A formal power series (in non-commuting variables) is a mapping $f$ from $X^*$ to a commutative field $\field{K}$, written as formal sum \begin{displaymath} f = \sum_{w\in X^*} (f,w)\, w \end{displaymath} with coefficients $(f,w)\in \field{K}$. In general, $\field{K}$ could be replaced by a ring or a skew field. \cite{Salomaa1978a \ and \cite{Berstel2011a \ contain detailed introductions. On the set of formal power series $\ncPOWS{\field{K}}{X}$ the following rational operations are defined for $f,g,h \in \ncPOWS{\field{K}}{X}$ with $(h,1) = 0$, and $\mu \in \field{K}$: \smallskip \noindent The scalar multiplication \begin{displaymath} \mu f = \sum_{w\in X^*} \mu(f,w) \, w. \end{displaymath} The sum \begin{displaymath} f + g = \sum_{w\in X^*} \bigl( (f,w) + (g,w) \bigr) \, w. \end{displaymath} The product \begin{displaymath} f g = \sum_{w\in X^*} \left( \sum_{uv=w} (f,u)(g,v) \right) \, w. \end{displaymath} And the quasiinverse \begin{displaymath} h^+ = \sum_{n\ge 1} h^n. \end{displaymath} The set of non-commutative (nc) rational series $\ncRATS{\field{K}}{X}$ is the smallest rationally closed (that is, closed under scalar multiplication, sum, product and quasiinverse) subset of $\ncPOWS{\field{K}}{X}$ containing the nc polynomials $\ncPOLY{\field{K}}{X}$. 
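The rational operations just defined are easy to experiment with on truncated series. The following sketch is ours and purely illustrative (the alphabet, the truncation length and the encoding of words as strings are arbitrary choices); it represents a series by its coefficients on words up to a fixed length and implements sum, product and quasiinverse by truncation.
\begin{verbatim}
# Illustrative sketch (Python): truncated non-commutative power series,
# with the rational operations sum, product and quasiinverse.
from fractions import Fraction
from itertools import product as cartesian

X = ('x', 'y')     # alphabet; words are strings, '' is the empty word
L = 4              # truncation length

def words(max_len):
    for k in range(max_len + 1):
        for w in cartesian(X, repeat=k):
            yield ''.join(w)

def add(f, g):
    return {w: f.get(w, 0) + g.get(w, 0) for w in set(f) | set(g)}

def mul(f, g):
    h = {}
    for u, cu in f.items():
        for v, cv in g.items():
            if len(u) + len(v) <= L:
                h[u + v] = h.get(u + v, 0) + cu * cv
    return h

def quasi_inverse(h):
    # h^+ = h + h^2 + h^3 + ...; requires (h, 1) = 0
    assert h.get('', 0) == 0
    result, power = dict(h), dict(h)
    for _ in range(L - 1):
        power = mul(power, h)
        result = add(result, power)
    return result

# example: (x + y)^+ has coefficient 1 on every non-empty word
f = {'x': Fraction(1), 'y': Fraction(1)}
s = quasi_inverse(f)
print(all(s.get(w, 0) == 1 for w in words(L) if w))   # True
\end{verbatim}
This is of course only a finite window on the series; it is nevertheless a convenient way to check small identities before passing to the linear-representation machinery described next.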
A series $f\in \ncPOWS{\field{K}}{X}$ is called \emph{recognizable} if there exists a natural number $n$, a monoid homomorphism $\mu : X^* \to \field{K}^{n\times n}$ and two vectors $\alpha\in\field{K}^{1 \times n}$, $\beta\in\field{K}^{n\times 1}$ such that $f$ can be written as \begin{displaymath} f = \sum_{w\in X^*} \alpha\, \mu(w)\, \beta\, w. \end{displaymath} The triple $(\alpha, \mu, \beta)$ is called a \emph{linear representation} \cite[Section~II.2]{Salomaa1978a . \begin{theorem}[\cite{Schutzenberger1961b ] A series $f\in \ncPOWS{\field{K}}{X}$ is rational if and only if it is recognizable. \end{theorem} A rational series $f$ can be represented by a \emph{proper linear system} (PLS for short) $s = v + Qs$ where $f$ is the first component of the \emph{unique} solution vector $s$ (with $v \in \field{K}^{n \times 1}$, $Q = Q_1 \otimes x_1 + Q_2 \otimes x_2 + \ldots + Q_d \otimes x_d$, $Q_i \in \field{K}^{n \times n}$ for some $n\in \mathbb{N}$). Rational operations are then formulated on this level \cite[Section~II.1]{Salomaa1978a . Clearly, every proper linear system gives rise to an admissible linear system $\als{A} = (u, I-Q, v)$ with $u = e_1$. When $(\alpha, \mu, \beta)$ is a linear representation of a recognizable series, then $\pi = (\alpha, I-Q, \beta)$ with \begin{equation}\label{eqn:wp.regQ} Q = \sum_{x\in X} \mu(x) \otimes x. \end{equation} is a linear representation of $f \in \freeFLD{\field{K}}{X}$. For a PLS the solution vector $s$ can be computed by the quasiinverse $Q^+$: \begin{align*} s &= (I-Q)^{-1} v \\ &= (I+Q^+) v \\ &= (I + Q + Q^2 + Q^3 + \ldots) v. \end{align*} \begin{definition}[Controllability and Observability Matrix]\label{def:mtx.co} \index{controllability matrix}\index{observability matrix}% Let $\als{P} = (u,I-Q,v)$ be a proper linear system of dimension $n$ (for some nc rational series). Then the \emph{controllability matrix} and the \emph{observability matrix} are defined as \begin{displaymath} \mathcal{V} = \mathcal{V}(\als{P}) = \begin{bmatrix} v & Qv & \ldots & Q^{n-1}v \end{bmatrix} \quad\text{and}\quad \mathcal{U} = \mathcal{U}(\als{P}) = \begin{bmatrix} u \\ uQ \\ \vdots \\ uQ^{n-1} \end{bmatrix} \end{displaymath} \vspace{-1.5ex} \noindent respectively. \end{definition} \begin{remark} Note that the monomials (in the polynomials) in $Q^k$ have length $k$. The matrices $\mathcal{V}$ and $\mathcal{U}$ are over $\ncPOLY{\field{K}}{X}$. A priori these matrices would have an infinite number of columns and rows respectively. However, by \cite[Lemma~6.6.3]{Cohn1995a , it suffices to use the columns of $\mathcal{V}$ and rows of $\mathcal{U}$ only. This gives the connection to \cite[Section I.2]{Berstel2011a \ and could be used for minimizing proper linear systems \cite{Salomaa1978a . In other words: Instead of identifying $\field{K}$-linear dependence of the left family $s = (I-Q)^{-1} v = (I + Q + Q^2 + \ldots) v$, we can restrict to the ``approximated'' left family $\tilde{s} = (I + Q + \ldots + Q^{n-1})v$. \end{remark} Now let $X_{k}^* \subseteq X^*$ denote the set of words of length $k$ and use $\mu:X^* \to \field{K}^{n \times n}$ from~\eqref{eqn:wp.regQ} to define \raisebox{0pt}[0pt][0pt]{$V_k \in \field{K}^{n \times d^k}$} with columns $\mu(w) v$ for $w \in X_k^*$ and \raisebox{0pt}[0pt][0pt]{$U_k \in \field{K}^{d^k \times n}$} with rows $u \mu(w)$ for $w \in X_k^*$. 
Then the \emph{controllability matrix} and the \emph{observability matrix} can be defined alternatively as \vspace{-1.5ex} \begin{displaymath} \mathcal{V}' = \begin{bmatrix} V_0 & V_1 & \ldots & V_{n-1} \end{bmatrix} \quad\text{and}\quad \mathcal{U}' = \begin{bmatrix} U_0 \\ U_1 \\ \vdots \\ U_{n-1} \end{bmatrix} \quad\text{respectively} \end{displaymath} with entries in $\field{K}$. Note that the rank of $\mathcal{V}'$ is at most $n$ while the number of columns of the blocks $V_k$ is $d^k$. So ---for an alphabet with more than one letter--- most of the columns are not needed. For $X=\{ x \}$ and $Q = Q_x \otimes x$ we have $V_k = Q_x^k v$ and $U_k = u Q_x^k$. Hence $\mathcal{V}'$ and $\mathcal{U}'$ can be written as \vspace{-1.5ex} \begin{displaymath} \mathcal{V}' = \begin{bmatrix} v & Q_x v & \ldots & Q_x^{n-1} v \end{bmatrix} \quad\text{and}\quad \mathcal{U}' = \begin{bmatrix} u \\ u Q_x \\ \vdots \\ u Q_x^{n-1} \end{bmatrix} \quad\text{respectively.} \end{displaymath} Compare with \cite[Section~6.3]{Kalman1969a . For controllability and observability in connection with realizations (Definition~\ref{def:wp.realization}) see also \cite{Helton2006a . \section{Minimizing the Inverse}\label{sec:wp.min} Having solved the word problem, our next goal is ``minimal arithmetics'' in the free field $\freeFLD{\field{K}}{X}$. That is, given elements by minimal admissible linear systems, to compute minimal ones for the rational operations. For the scalar multiplication this is trivial. For the inverse some preparation is necessary. The result is presented in Theorem~\ref{thr:min.inv}. The ``minimal sum'' and the ``minimal product'' are considered in future work. The main difficulty is \emph{not} minimality but the restriction to \emph{linear} techniques. \begin{proposition}\label{pro:min.mon} Let $k \in \mathbb{N}$ and $f= x_{i_1} x_{i_2} \cdots x_{i_k}$ be a monomial in $\ncPOLY{\field{K}}{X} \subseteq \freeFLD{\field{K}}{X}$. Then \begin{displaymath} \als{A} = \left( \begin{bmatrix} 1 & . & \cdots & . \end{bmatrix}, \begin{bmatrix} 1 & -x_{i_1} \\ & 1 & -x_{i_2} \\ & & \ddots & \ddots \\ & & & 1 & -x_{i_k} \\ & & & & 1 \end{bmatrix}, \begin{bmatrix} . \\ . \\ \vdots \\ . \\ 1 \end{bmatrix} \right) \end{displaymath} is a \emph{minimal} ALS of dimension $\dim \als{A} = k+1$. \end{proposition} \begin{proof} The system matrix of $\als{A}$ is full. For row indices $[1,x_{i_1},x_{i_1}x_{i_2},$ $\ldots,x_{i_1}\cdots x_{i_k}]$ and column indices $[1,x_{i_k},x_{i_{k-1}}x_{i_k},$ $\ldots,x_{i_1}\cdots x_{i_k}]$ the Hankel matrix \cite{Fliess1974a , \cite[Section~II.3]{Salomaa1978a \ of $f$ is \begin{displaymath} H(f) = \begin{bmatrix} & & & 1 \\ & & 1 \\ & \revddots \\ 1 \end{bmatrix} \end{displaymath} and has rank $k+1$. \end{proof} \begin{remark} Trivially, $\als{A} = ([1], [1], [1])$ is a minimal ALS for the \emph{unit element} (empty word). \end{remark} The following proposition is a variant of the inverse in Proposition~\ref{pro:wp.ratop} and is motivated by inverting the inverse of a monomial, for example, $f = (xyz)^{-1}$. A minimal ALS for $f$ is given by \begin{displaymath} \begin{bmatrix} z & -1 & . \\ . & y & -1 \\ . & . & x \end{bmatrix} s = \begin{bmatrix} . \\ . \\ 1 \end{bmatrix}, \quad s = \begin{bmatrix} z^{-1} y^{-1} x^{-1} \\ y^{-1} x^{-1} \\ x^{-1} \end{bmatrix}. \end{displaymath} Minimality is clear immediately by also checking the $\field{K}$-linear independence of the right family. 
Using the construction of the inverse from Proposition~\ref{pro:wp.ratop} we get the system \begin{displaymath} \begin{bmatrix} . & z & -1 & . \\ . & . & y & -1 \\ -1 & . & . & x \\ . & 1 & . & . \end{bmatrix} s = \begin{bmatrix} . \\ . \\ . \\ 1 \end{bmatrix}, \quad s = \begin{bmatrix} xyz \\ 1 \\ z \\ yz \end{bmatrix}. \end{displaymath} for $f^{-1} = xyz$. To obtain the form of Proposition~\ref{pro:min.mon} we have to reverse the rows $1,2,3$ and the columns $2,3,4$ and multiply the rows $1,2,3$ by $-1$. \begin{proposition}[Standard Inverse]\label{pro:inv.std} Let $0 \neq f \in \freeFLD{\field{K}}{X}$ be given by the admissible linear system $\als{A} = (u,A,v)$ of dimension $n$. Then an admissible linear system of dimension $n+1$ for $f^{-1}$ is given by \begin{equation}\label{eqn:wp.inv0} \als{A}^{-1} = \left( \begin{bmatrix} 1 & . \\ \end{bmatrix}, \begin{bmatrix} \Sigma v & -\Sigma A \Sigma \\ . & u \Sigma \end{bmatrix}, \begin{bmatrix} . \\ 1 \end{bmatrix} \right). \end{equation} (Recall that the permutation matrix $\Sigma = \Sigma_n$ reverses the order of rows/columns.) \end{proposition} \begin{definition}[Standard Inverse] \index{standard inverse} Let $\als{A}$ be an ALS for a non-zero element. We call the ALS~\eqref{eqn:wp.inv0} the \emph{standard inverse} of $\als{A}$, denoted by $\als{A}^{-1}$. \end{definition} \begin{proof} The reader can easily verify that the solution vector of $\als{A}^{-1}$ is \begin{displaymath} \begin{bmatrix} f^{-1} \\ \Sigma_n s_f f^{-1} \end{bmatrix}. \end{displaymath} Compare with Proposition~\ref{pro:wp.ratop}. \end{proof} We proceed with the calculation of minimal admissible linear systems for the inverse. We distinguish four types of ALS according to the form of the system matrix. Later, in the remark following Lemma~\ref{lem:wp.for1} and~\ref{lem:wp.for2}, we will see how to bring a system matrix to one of these forms depending on the left and right families. \begin{lemma}[Inverse Type~$(1,1)$]\label{lem:wp.inv3} Assume that $0 \neq f \in \freeFLD{\field{K}}{X}$ has a \emph{minimal} admissible linear system of dimension $n$ of the form \begin{equation}\label{eqn:wp.inv3a} \als{A} = \left( \begin{bmatrix} 1 & . & . \end{bmatrix}, \begin{bmatrix} 1 & b' & b \\ . & B & b'' \\ . & . & 1 \end{bmatrix}, \begin{bmatrix} . \\ . \\ \lambda \end{bmatrix} \right). \end{equation} Then a minimal ALS for $f^{-1}$ of dimension $n-1$ is given by \begin{equation}\label{eqn:wp.inv3b} \als{A}'=\left( \begin{bmatrix} 1 & . \end{bmatrix}, \begin{bmatrix} -\lambda \Sigma b'' & -\Sigma B \Sigma \\ -\lambda b & - b'\Sigma \end{bmatrix}, \begin{bmatrix} . \\ 1 \end{bmatrix} \right) \end{equation} with $1\notin R(\als{A}')$ and $1\notin L(\als{A}')$. \end{lemma} \begin{proof} The standard inverse of $\als{A}$ is \begin{displaymath} \begin{bmatrix} \lambda & -1 & . & . \\ . & -\Sigma b'' & -\Sigma B \Sigma & . \\ . & -b & -b'\Sigma & -1 \\ . & . & . & 1 \end{bmatrix} \tilde{s} = \begin{bmatrix} . \\ . \\ . \\ 1 \end{bmatrix}. \end{displaymath} Adding row 4 to row 3 and $\lambda$-times column 2 to column 1 gives \begin{displaymath} \begin{bmatrix} 0 & -1 & . & . \\ -\lambda \Sigma b'' & -\Sigma b'' & -\Sigma B \Sigma & . \\ -\lambda b & -b & -b'\Sigma & 0 \\ . & . & . & 1 \end{bmatrix} s' = \begin{bmatrix} . \\ . \\ 1 \\ 1 \end{bmatrix}. \end{displaymath} It follows that $s_2'=0$ and $s_{n+1}'$ does not contribute to the solution $s_1'$ and thus the first and the last row as well as the second and the last column can be removed. 
If $\als{A}'$ were not minimal, then there would exist a system $\als{A}''$ of dimension $m < n-1$ for $f^{-1}$. The standard inverse $(\als{A}'')^{-1}$ would give a system of dimension $m+1 < n$ for $f$, contradicting minimality of $\als{A}$. It remains to show that $1\notin R(\als{A}')$ and $1\notin L(\als{A}')$. Let $t = (t_1, t_2, \ldots, t_n)$ be the right family of $\als{A}$ which is (due to minimality) $\field{K}$-linearly independent. Then the right family of $\als{A}^{-1}$ is $(t_n f^{-1}, \ldots, t_2 f^{-1}, t_1 f^{-1}, f^{-1})$, that after the row operation becomes $(t_n, \ldots, t_2, t_1, 1-t_1) f^{-1}$. Removing the first and the last component (corresponding to the first and the last row) yields the right family $(t_{n-1}, \ldots, t_2, t_1) f^{-1}$. Therefore $1\notin R(\als{A}')$, because otherwise $f \in \linsp \{ t_{n-1}, \ldots, t_2, t_1 \}$, contradicting $\field{K}$-linear independence of $t$. Similar arguments show that $1\notin L(\als{A}')$. \end{proof} \begin{lemma}[Inverse Type~$(1,0)$]\label{lem:wp.inv2} Assume that $0 \neq f \in \freeFLD{\field{K}}{X}$ has a \emph{minimal} admissible linear system of dimension $n$ of the form \begin{equation}\label{eqn:wp.inv2a} \als{A} = \left( \begin{bmatrix} 1 & . & . \end{bmatrix}, \begin{bmatrix} 1 & b' & b \\ . & B & b'' \\ . & c' & c \end{bmatrix}, \begin{bmatrix} . \\ . \\ \lambda \end{bmatrix} \right) \end{equation} with $1 \not\in L(\als{A})$. Then a minimal ALS for $f^{-1}$ of dimension $n$ is given by \begin{equation}\label{eqn:wp.inv2b} \als{A}' = \left( \begin{bmatrix} 1 & . & . \end{bmatrix}, \begin{bmatrix} 1 & - \frac{1}{\lambda} c & - \frac{1}{\lambda} c' \Sigma \\ . & -\Sigma b'' & -\Sigma B \Sigma \\ . & -b & -b'\Sigma \end{bmatrix}, \begin{bmatrix} . \\ . \\ 1 \end{bmatrix} \right) \end{equation} with $1\notin L(\als{A}')$. \end{lemma} \begin{proof} The standard inverse of $\als{A}$ is \begin{displaymath} \begin{bmatrix} \lambda & -c & -c' \Sigma & . \\ . & -\Sigma b'' & -\Sigma B \Sigma & . \\ . & -b & -b'\Sigma & -1 \\ . & . & . & 1 \end{bmatrix} \tilde{s} = \begin{bmatrix} . \\ . \\ . \\ 1 \end{bmatrix} \end{displaymath} and has dimension $n+1$. After adding row~$n+1$ to row~$n$ we can remove row and column~$n+1$ because $\tilde{s}_{n+1}$ does not contribute to the solution $\tilde{s}_1 = f^{-1}$. Then we divide the first row by $\lambda$ and obtain \eqref{eqn:wp.inv2b}. It remains to show that $\als{A}'$ is minimal and $1\notin L(\als{A}')$. Let $(s_1, s_2, \ldots, s_n)$ be the left family of $\als{A}$ which is (due to minimality) $\field{K}$-linearly independent. Then the left family of $\als{A}^{-1}$ is $(f^{-1}, s_n f^{-1}, \ldots, s_2 f^{-1}, 1)$. Note that (admissible) row operations do not affect the left family. Since we removed the last entry $s_1 f^{-1} = \tilde{s}_{n+1}=1$, the left family of $\als{A}'$ is $(1, s_n, \ldots, s_2) f^{-1}$. By assumption $1 \notin L(\als{A})$. Therefore $1 \notin \linsp \{ s_2, s_3, \ldots, s_n \}$, hence $(1,s_n, \ldots, s_2)$ is $\field{K}$-linearly independent. Clearly, $1\notin L(\als{A}')$ because $f \notin \linsp \{ 1, s_n, \ldots, s_2 \}$. Similarly, let $(t_1, t_2, \ldots, t_n)$ be the right family of $\als{A}$ which is $\field{K}$-linearly independent. Then the right family of $\als{A}^{-1}$ is $(t_n f^{-1}, \ldots, t_2 f^{-1}, t_1 f^{-1}, f^{-1})$, that after the row operation is $(t_n f^{-1}, \ldots, t_2 f^{-1}, t_1 f^{-1}, f^{-1} -t_1 f^{-1} )$. 
Since we removed the last entry, the right family of $\als{A}'$ is $(t_n, \ldots, t_2, t_1)f^{-1}$ which is clearly $\field{K}$-linearly independent. Proposition~\ref{pro:cohn94.47} gives minimality of $\als{A}'$. \end{proof} \begin{lemma}[Inverse Type~$(0,1)$]\label{lem:wp.inv1} Assume that $0 \neq f \in \freeFLD{\field{K}}{X}$ has a \emph{minimal} admissible linear system of dimension $n$ of the form \begin{equation}\label{eqn:wp.inv1a} \als{A} = \left( \begin{bmatrix} 1 & . & . \end{bmatrix}, \begin{bmatrix} a & b' & b \\ a' & B & b'' \\ . & . & 1 \end{bmatrix}, \begin{bmatrix} . \\ . \\ \lambda \end{bmatrix} \right) \end{equation} with $1 \not\in R(\als{A})$. Then a minimal ALS for $f^{-1}$ of dimension $n$ is given by \begin{equation}\label{eqn:wp.inv1b} \als{A}' = \left( \begin{bmatrix} 1 & . & . \end{bmatrix}, \begin{bmatrix} -\lambda \Sigma b'' & -\Sigma B \Sigma & -\Sigma a'. \\ -\lambda b & -b'\Sigma & -a \\ . & . & 1 \end{bmatrix}, \begin{bmatrix} . \\ . \\ 1 \end{bmatrix} \right) \end{equation} with $1\notin R(\als{A'})$. \end{lemma} \begin{proof} The standard inverse of $\als{A}$ is \begin{displaymath} \begin{bmatrix} \lambda & -1 & . & . \\ . & -\Sigma b'' & -\Sigma B \Sigma & -\Sigma a' \\ . & -b & -b'\Sigma & -a \\ . & . & . & 1 \end{bmatrix} \tilde{s} = \begin{bmatrix} . \\ . \\ . \\ 1 \end{bmatrix}. \end{displaymath} After adding $\lambda$-times column~2 to column~1 we can remove row~1 and column~2, because $\tilde{s}_2=0$. Showing minimality and $1\notin R(\als{A}')$ is similar to the proof of Lemma~\ref{lem:wp.inv2} (column operations affect the left family). \end{proof} \begin{lemma}[Inverse Type~$(0,0)$]\label{lem:wp.inv0} Let $\als{A} = (u,A,v)$ be a minimal admissible linear system of dimension $n$ for $0 \neq f \in \freeFLD{\field{K}}{X}$ with $1\notin R(\als{A})$ and $1\notin L(\als{A})$. Then the standard inverse $\als{A}^{-1}$ is a minimal ALS of dimension $n+1$ for $f^{-1}$. \end{lemma} \begin{proof} If $\als{A}^{-1}$ were not minimal, then there would be a system $\als{A}'$ of dimension $m < n+1$ for $f^{-1}$. Applying Lemma~\ref{lem:wp.inv3} (Inverse Type~$(1,1)$), we would get a system of dimension $m-1 < n$ for $f$, contradicting minimality of $\als{A}$. \end{proof} \begin{example} Taking the minimal ALS (for the anticommutator) from Example~\ref{ex:anticomm}, we get, by Lemma~\ref{lem:wp.inv3} (Inverse Type~$(1,1)$), a \emph{minimal} ALS for $(xy+yx)^{-1}$: \begin{displaymath} \als{A}'= \left( \begin{bmatrix} 1 & . & . \end{bmatrix}, \begin{bmatrix} x & - 1 & . \\ y & . & -1 \\ . & y & x \end{bmatrix}, \begin{bmatrix} . \\ . \\ 1 \end{bmatrix} \right). \end{displaymath} Lemma~\ref{lem:wp.inv0} (Inverse Type~$(0,0)$) gives again a minimal system for $xy+yx$. \end{example} Since it is possible to construct minimal linear representations for regular elements (see Section~\ref{sec:wp.reg}), this is in particular true for polynomials. These are of the form \eqref{eqn:wp.inv3a}, since $A = I-Q$ with nilpotent $Q$, which can be choosen upper triangular. This can be seen either by looking at proper linear systems (Section~\ref{sec:wp.reg}) where admissible transformations are conjugations (of the system matrix such that the first component of the solution vector is left untouched) or by the following proposition. \begin{proposition}[% \protect{\cite[Proposition 2.1]{Cohn1999a }]\label{thr:cohn99.21} Let $f\in \freeFLD{\field{K}}{X}$. (i) $f$ is a power series if and only if in any minimal representation, the constant term of its system matrix is invertible. 
There is then a minimal representation which is unital, that is, $A_0 = I$. (ii) $f$ is a polynomial if and only if in any unital minimal representation, the matrix $A = A_1 \otimes x_1 + \ldots + A_d \otimes x_d$ is nilpotent. There is then a minimal representation with a unitriangular (ones on and zeros below the diagonal) system matrix. \end{proposition} \begin{example} The element $xyz^{-1}$ admits the following \emph{minimal} ALS \begin{displaymath} \als{A} = \left( \begin{bmatrix} 1 & . & . \end{bmatrix}, \begin{bmatrix} 1 & -x & . \\ . & 1 & -y \\ . & . & z \end{bmatrix}, \begin{bmatrix} . \\ . \\ 1 \end{bmatrix} \right) \end{displaymath} with right family $t = (1, x, xyz^{-1})$ and left family $s = (xyz^{-1}, yz^{-1}, z^{-1})$. Now Lemma~\ref{lem:wp.inv2} (Inverse Type~$(1,0)$) can be applied to get the minimal ALS \begin{displaymath} \als{A}' = \left( \begin{bmatrix} 1 & . & . \end{bmatrix}, \begin{bmatrix} 1 & -z & . \\ . & y & -1 \\ . & . & x \end{bmatrix}, \begin{bmatrix} . \\ . \\ 1 \end{bmatrix} \right) \end{displaymath} for $zy^{-1} x^{-1}$. Note that we can apply Lemma~\ref{lem:wp.inv2} again, because $1\notin L(\als{A}')$. \end{example} If an admissible linear system $\als{A}$ is of the form \eqref{eqn:wp.inv1a} in Lemma~\ref{lem:wp.inv1} (Inverse Type~$(0,1)$), then it follows immediately that $1 \in L(\als{A})$. Conversely, assuming $1\in L(\als{A})$, the proof of the existence of an admissible transformation $(P,Q)$ such that $P\als{A}Q$ is of the form \eqref{eqn:wp.inv1a} is a bit more involved and requires minimality. \begin{lemma}[for Inverse Type~$(0,1)$]\label{lem:wp.for1} Let $\als{A} = (u,A,v)$ be a \emph{minimal} admissible linear system with $\dim \als{A} = n \ge 2$ and $1\in L(\als{A})$. Then there exists an admissible transformation $(P,Q)$ such that $(uQ, PAQ, Pv)$ is of the form \eqref{eqn:wp.inv1a}. \end{lemma} \begin{proof} Without loss of generality, assume that $v = [0,\ldots,0,1]^{\!\top}$ and the left family $s = A^{-1} v$ is $(s_1, s_2,$ $\ldots, s_{n-1}, 1)$. Otherwise it can be brought to this form by some admissible transformation $(P^\circ, Q^\circ)$. Now let $\bar{A}$ denote the upper left $(n-1)\times (n-1)$ block of $A$, let $\bar{s} = (s_1, \ldots, s_{n-1})$ and write $As = v$ as \begin{displaymath} \begin{bmatrix} \bar{A} & b \\ c & d \end{bmatrix} \begin{bmatrix} \bar{s} \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. \end{displaymath} This system is equivalent to \begin{displaymath} \begin{bmatrix} \bar{A} & b \\ c & d-1 \end{bmatrix} \begin{bmatrix} \bar{s} \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}. \end{displaymath} Since the left family is $\field{K}$-linearly independent (by minimality of $\als{A}$), the matrix $\tilde{A} = \bigl[\begin{smallmatrix} \bar{A} & b \\ c & d-1 \end{smallmatrix}\bigr]$ cannot be full. We claim that there is only one possibility to transform $\tilde{A}$ to a hollow matrix, namely with zero last row. If we cannot produce a $(n-i) \times i$ block of zeros (by invertible transformations) in the first $n-1$ rows of $\tilde{A}$, then we cannot get blocks of zeros of size $(n-i+1) \times i$ and we are done. Now assume that there are invertible matrices $P' \in \field{K}^{(n-1) \times (n-1)}$ and (admissible) $Q\in\field{K}^{n \times n}$ with $(Q^{-1} s)_1 = s_1$, such that $P'[ \bar{A} , b]Q$ contains a zero block of size $(n-i) \times i$ for some $i=1,\ldots, n-1$. There are two cases. 
If the first $n-i$ entries in column~1 cannot be made zero, we construct an upper right zero block: \begin{displaymath} \hat{A} = \begin{bmatrix} A_{11} & . \\ A_{21} & A_{22} \end{bmatrix}, \quad \hat{s} = Q^{-1} s \quad\text{and}\quad \hat{v} = P v = v \end{displaymath} where $A_{11}$ has size $(n-i) \times (n-i)$. If $A_{11}$ were not full, then $A$ would not be full (the last row is not involved in the transformation). Hence this pivot block is invertible over the free field. Therefore $\hat{s}_1 = \hat{s}_2 = \ldots = \hat{s}_{n-i} = 0$. Otherwise we construct an upper left zero block in $PAQ$. But then $\hat{s}_{i+1} = \hat{s}_{i+2} = \ldots = \hat{s}_{n} = 0$. Both contradict $\field{K}$-linear independence of the left family. So there is only one block left, which can make $\tilde{A}$ non-full. Hence, by Lemma~\ref{lem:cohn95.636}, the modified (system) matrix $\tilde{A}$ is associated over $\field{K}$ to a linear hollow matrix with a $1 \times n$ block of zeros, say in the last row (the columns and the first $n-1$ rows are left unouched): \begin{displaymath} \begin{bmatrix} I_{n-1} & . \\ T & 1 \end{bmatrix} \begin{bmatrix} \bar{A} & b \\ c & d-1 \end{bmatrix} I_n = \begin{bmatrix} \bar{A} & b \\ 0 & 0 \end{bmatrix}. \end{displaymath} Hence we have $T\bar{A} + c = 0$ and therefore \begin{displaymath} \begin{bmatrix} \bar{A} & b \\ 0 & Tb + d \end{bmatrix} \begin{bmatrix} \bar{s} \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. \end{displaymath} Clearly $Tb + d = 1$. The transformation \begin{displaymath} (P,Q) = \left( \begin{bmatrix} I_{n-1} & . \\ T & 1 \end{bmatrix} P^\circ, Q^\circ \right) \end{displaymath} does the job. \end{proof} \begin{lemma}[for Inverse Type~$(1,0)$]\label{lem:wp.for2} Let $\als{A} = (u,A,v)$ be a \emph{minimal} admissible linear system with $\dim \als{A} = n \ge 2$ and $1\in R(\als{A})$. Then there exists an admissible transformation $(P,Q)$ such that $(uQ, PAQ, Pv)$ is of the form \eqref{eqn:wp.inv2a}. \end{lemma} \begin{proof} The proof is similar to the previous one switching the role of left and right family. \end{proof} \begin{remark} If $1 \in R(\als{A})$ for some \emph{minimal} ALS $\als{A} = (u,A,v)$, say $\dim \als{A} = n$, then, by Lemma~\ref{lem:wp.for2}, there is an admissible transformation $(P,Q)$ such that the first column in $PAQ$ is $[1,0,\ldots,0]^{\!\top}$. So, if the first column of $A=(a_{ij})$ is not in this form, an admissible transformation can be found in two steps: Firstly, we can set up a linear system to determine an $(n-1)$-duple of scalars $(\mu_2, \mu_3, \ldots, \mu_n)$ such that $a_{i1} + \mu_2 a_{i2} + \mu_3 a_{i3} + \ldots + \mu_n a_{in}$ is in $\field{K}$ for $i=1,2,\ldots,n$. Secondly, we use elementary row transformations (Gaussian elimination in the first column) and ---if necessary--- permutations to get the desired form of the first column. Together, these transformations give some (admissible) transformation $(P',Q')$. An analogous procedure can be applied if $1\in L(\als{A})$. And it can be combined for the case as in Lemma~\ref{lem:wp.inv3} (Inverse Type~$(1,1)$). It works more generally for non-minimal systems, but can fail in ``pathological'' cases. Compare with Example~\ref{ex:wp.path}. \end{remark} \begin{theorem}[Minimal Inverse]\label{thr:min.inv} Let $0 \neq f \in \freeFLD{\field{K}}{X}$ be given by the minimal system $\als{A} = (u, A, v)$ of dimension $n$. 
Then a minimal admissible linear system for $f^{-1}$ is given by \begin{displaymath} \als{A}' = \begin{cases} \text{\eqref{eqn:wp.inv3b} of $\dim \als{A}' = n-1$} & \text{if $1\in L(\als{A})$ and $1\in R(\als{A})$,}\\ \text{\eqref{eqn:wp.inv2b} of $\dim \als{A}' = n$} & \text{if $1\not\in L(\als{A})$ and $1\in R(\als{A})$,}\\ \text{\eqref{eqn:wp.inv1b} of $\dim \als{A}' = n$} & \text{if $1\in L(\als{A})$ and $1\not\in R(\als{A})$, and}\\ \text{\eqref{eqn:wp.inv0} of $\dim \als{A}' = n+1$} & \text{if $1\not\in L(\als{A})$ and $1\not\in R(\als{A})$} \end{cases} \end{displaymath} provided that the necessary transformations according to Lemmas~\ref{lem:wp.for1} and~\ref{lem:wp.for2} are carried out beforehand. \end{theorem} \begin{proof} See Lemmas~\ref{lem:wp.inv3}, \ref{lem:wp.inv2}, \ref{lem:wp.inv1} and~\ref{lem:wp.inv0}. \end{proof} \section*{Introduction} Free (skew) fields arise as universal objects when it comes to embedding the ring of non-commutative polynomials, that is, polynomials in (a finite number of) non-commuting variables, into a skew field \cite[Chapter~7]{Cohn1985a}. The notion of ``free fields'' goes back to Amitsur \cite{Amitsur1966a}. A brief introduction can be found in \cite[Section~9.3]{Cohn2003b}; for details we refer to \cite[Section~6.4]{Cohn1995a}. In the present paper we restrict the setting to \emph{commutative} ground fields, as a special case. See also \cite{Roberts1984a}. In \cite{Cohn1994a}, Cohn and Reutenauer introduced a \emph{normal form} for elements in free fields in order to extend results from the theory of formal languages. In particular they characterize minimality of linear representations in terms of linear independence of the entries of a column and a row vector, generalizing the concept of ``controllability and observability matrix'' (Section~\ref{sec:wp.reg}). It is difficult to solve the word problem, that is, given linear representations (LRs for short) of two elements $f$ and $g$, to decide whether $f=g$. In \cite{Cohn1999a} the authors describe an answer to this question (for free fields coming from non-commutative polynomials over a \emph{commutative} field). In practice, however, this technique (using Gröbner bases) can be impractical even for representations of small dimensions. Fortunately, it turns out that the word problem is equivalent to the solvability of a \emph{linear} system of equations if \emph{both} elements are given by \emph{minimal} linear representations. Constructions of the latter are known for \emph{regular} elements (non-commutative rational series), but in general non-linear techniques are necessary. This is considered in future work. Here we present a simple construction of minimal LRs for the inverses of arbitrary non-zero elements given by \emph{minimal} LRs. In particular this applies to the inverses of non-zero polynomials with vanishing constant coefficient (which are not regular anymore). In any case, \emph{positive} testing of rational identities becomes easy. Furthermore, the implementation in computer (algebra) software needs only a basic data structure for matrices (linear matrix pencil) and an \emph{exact} solver for linear systems. \bigskip Section~\ref{sec:wp.rep} introduces the required notation concerning linear representations and admissible linear systems in free fields. Rational operations on the representation level are formulated and the related concepts of linearization and realization are briefly discussed. Section~\ref{sec:wp.wp} describes the word problem.
Theorem~\ref{thr:wp.wp} shows that the (in general non-linear) problem of finding appropriate transformation matrices can be reduced to a \emph{linear system of equations} if the given LRs are \emph{minimal}. Examples can be constructed for regular elements (rational series) as special cases (of elements in the free field), which are summarized in Section~\ref{sec:wp.reg}. Here, algorithms for obtaining minimal LRs are already known. Section~\ref{sec:wp.min} provides a first step in the construction of minimal LRs (with linear techniques), namely for the inverses of non-zero elements that are themselves given by \emph{minimal} linear representations. This is formulated in Theorem~\ref{thr:min.inv}. \bigskip The main result is Theorem~\ref{thr:wp.wp}, the ``linear'' word problem. Although it is rather elementary, it opens up the possibility of working \emph{directly} on linear representations (instead of the spaces they ``span''). Or, using Bergman's words \cite{Bergman1978a}: ``The main results in this paper are trivial. But what is trivial when described in the abstract can be far from clear in the context of a complicated situation where it is needed.'' \input{math_01a} \section*{Acknowledgement} Special thanks go to Marek Bo\.zejko and Victor Vinnikov for the hospitality and all the motivating discussions in Wrocław and Beer-Sheva, respectively. Above all, I am very indebted and grateful to Franz Lehner. He gave me the chance to enter the world of non-commutative mathematics. Without the freedom, support and advice he offered, this work would not have been possible. Thanks a lot! \bibliographystyle{alpha}
{ "timestamp": "2017-01-13T02:06:28", "yymm": "1701", "arxiv_id": "1701.03378", "language": "en", "url": "https://arxiv.org/abs/1701.03378", "abstract": "We describe a solution of the word problem in free fields (coming from non-commutative polynomials over a commutative field) using elementary linear algebra, provided that the elements are given by minimal linear representations. It relies on the normal form of Cohn and Reutenauer and can be used more generally to (positively) test rational identities. Moreover we provide a construction of minimal linear representations for the inverse of non-zero elements.", "subjects": "Rings and Algebras (math.RA)", "title": "Linearizing the Word Problem in (some) Free Fields", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.965899570361222, "lm_q2_score": 0.7341195269001831, "lm_q1q2_score": 0.7090857356266704 }
https://arxiv.org/abs/1701.02191
Actuator design for parabolic distributed parameter systems with the moment method
In this paper, we model and solve the problem of designing in an optimal way actuators for parabolic partial differential equations settled on a bounded open connected subset $\Omega$ of IR n. We optimize not only the location but also the shape of actuators, by finding what is the optimal distribution of actuators in $\Omega$, over all possible such distributions of a given measure. Using the moment method, we formulate a spectral optimal design problem, which consists of maximizing a criterion corresponding to an average over random initial data of the largest L 2-energy of controllers. Since we choose the moment method to control the PDE, our study mainly covers one-dimensional parabolic operators, but we also provide several examples in higher dimensions. We consider two types of controllers: either internal controls, modeled by characteristic functions, or lumped controls, that are tensorized functions in time and space. Under appropriate spectral assumptions, we prove existence and uniqueness of an optimal actuator distribution, and we provide a simple computation procedure. Numerical simulations illustrate our results.
\section{Introduction and modeling of the problem}\label{secintro} In this article, we model and solve the problem of finding the optimal shape and location of internal controllers for parabolic equations with (mainly) Dirichlet boundary conditions and (mainly) in the one-dimensional case $\Omega=(0,\pi)$. Such questions are frequently encountered in engineering applications. We provide a possible mathematical model for investigating such issues. For mathematical reasons that will be clarified in the sequel, we will focus in the whole article on controls obtained by using the so-called {\it moment method}. As it will be underlined, it requires in general some spectral gap assumptions on the operators involved that essentially reduce the applications of our results to one-dimensional partial differential equations, but our results also cover several particular situations in larger dimension. To avoid technicalities and highlight the main ideas, we first present the results in the simplified framework of the controlled one-dimensional heat equation with Dirichlet boundary conditions, without introducing (at this step) the more general parabolic framework in which our results are actually valid. Generalizations to a more general framework will be described in Section \ref{sec:Control}. Unlike the simplified case of the one-dimensional heat equation, it requires a discussion on the M\"untz-Sz\'asz theorem as well as specific spectral considerations. Notice that the general control framework in which this problem could be addressed is much more intricate and will be evoked as a possible perspective at the end of this article. \subsection{Reminders on the controllability of the 1D heat equation}\label{sec:control2001} Consider the internally controlled one-dimensional heat equation \begin{equation}\label{heatEqcontrolled_intro} \partial_t y(t,x)-\partial_{xx} y(t,x)=\chi_\omega (x)u(t,x), \quad (t,x)\in (0,T)\times (0,\pi), \end{equation} with Dirichlet boundary conditions \begin{equation}\label{dir_cond} y(t,0)=y(t,\pi)=0, \qquad t\in (0,T), \end{equation} where $u\in L^2((0,T)\times(0,\pi))$ is a control function, and $\omega$ is a measurable subset of $(0,\pi)$ standing for the support of the controller. Here, $\chi_\omega$ is the characteristic function of $\omega$, defined by $\chi_\omega(x)=1$ if $x\in\omega$ and $\chi_\omega(x)=0$ otherwise. For a given subset $\omega$, the equation \eqref{heatEqcontrolled_intro} is said to be exactly null controllable in time $T$ whenever every initial datum $y(0,\cdot)\in L^2(0,\pi)$ can be steered to $0$ in time $T$ by means of an appropriate control function $u\in L^2((0,T)\times(0,\pi))$. It is well known that, for a given subset $\omega$, the system \eqref{heatEqcontrolled_intro} is exactly null controllable if and only if there exist a positive constant $C$ (only depending on $T$ and $\omega$) such that \begin{equation}\label{ineqobs} C \int_0^\pi z(T,x)^2\, dx \leq \int_0^T\int_\omega z(t,x)^2 \,dx \, dt, \end{equation} (observability inequality) for every solution of \begin{equation}\label{heatEq} \begin{split} & \partial_t z(t,x)-\partial_{xx} z(t,x)=0, \quad (t,x)\in (0,T)\times(0,\pi),\\ & z(t,0)=z(t,\pi)=0, \qquad\qquad t\in (0,T), \end{split} \end{equation} with $z(0,\cdot)\in L^2(0,\pi)$. 
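Let us make both sides of \eqref{ineqobs} explicit on the eigenbasis of the Dirichlet-Laplacian; this elementary computation is added here only to fix ideas. Writing $z(0,\cdot)=\sum_{j\geq 1}c_j\phi_j$ with $\phi_j(x)=\sqrt{\frac{2}{\pi}}\sin(jx)$, the solution of \eqref{heatEq} is
$$
z(t,x)=\sum_{j=1}^{+\infty}c_j e^{-j^2 t}\phi_j(x),
\qquad\text{so that}\qquad
\int_0^\pi z(T,x)^2\, dx=\sum_{j=1}^{+\infty}c_j^2 e^{-2j^2 T}
$$
by orthonormality of the $\phi_j$ in $L^2(0,\pi)$. The inequality \eqref{ineqobs} thus requires the energy of $z$ localized on $\omega$ over the time interval $(0,T)$ to dominate, uniformly with respect to the coefficients $(c_j)_{j\geq 1}$, the exponentially damped modal contents at the final time.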
\paragraph{Exact null controllability by the moment method.} The observability inequality \eqref{ineqobs} has been shown to hold true for any subset $\omega$ of $[0,\pi]$ of positive Lebesgue measure in \cite{Russell}, by the moment method, which we will use as well in the present paper and which we recall hereafter. The eigenfunctions of the Dirichlet-Laplacian, given by $\phi_j(x)=\sqrt{\frac{2}{\pi}}\sin(jx)$ for every $j\in\textrm{I\kern-0.21emN}^*$, associated with the eigenvalues $\lambda_j=j^2$, make up an orthonormal basis of $L^2(0,\pi)$. From the M\"untz-Sz\'asz theorem, there exists a sequence $(\theta_j^T)_{j\in\textrm{I\kern-0.21emN}^*}$ in $L^2(0,T)$ that is biorthogonal to the sequence of functions $t\mapsto e^{-j^2 t}$. The following lemma provides an exact null controllability result for \eqref{heatEqcontrolled_intro}-\eqref{dir_cond}. \begin{lemma}\label{lemm:contHeat}\cite{Russell} Let $T>0$ and let $\omega$ be a measurable subset of $(0,\pi)$ of positive measure. Then every initial datum $$ y(0,x)=y^0(x)=\sum_{j=1}^{+\infty} a_j\sin(jx), $$ in $L^2(0,\pi)$, can be steered to zero in time $T$ with the control $u\in L^2((0,T)\times (0,\pi))$ defined by $$ u(t,x)=-\sum_{j=1}^{+\infty} \frac{a_j e^{-j^2T}}{\int_\omega\sin^2(jy) \, dy}\theta_j^T(T-t)\sin(jx) . $$ \end{lemma} A proof of this lemma is given in a more general setting in Section \ref{secproof31}. We set $\Gamma_{\omega}(y^0)=\chi_\omega u$. The operator $\Gamma_{\omega}:L^2(0,\pi)\rightarrow L^2((0,T)\times(0,\pi))$ is linear and continuous, and is called the \textit{moment control operator}. The norm of this operator, given by $\Vert \Gamma_{\omega}\Vert = \sup \{ \Vert \Gamma_{\omega} (y^0)\Vert_{L^2((0,T)\times(0,\pi))} \mid \Vert y^0\Vert_{L^2(0,\pi)}=1 \}$, accounts for the worst possible initial datum to be controlled to zero, in terms of the effort ($L^2$ energy) required to steer this initial datum to zero. Minimizing $\Vert \Gamma_{\omega}\Vert$ over a class of admissible domains (that we will denote by $\mathcal{U}_L$ in the sequel) is then an interesting problem, which will be discussed in the next section.
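Although the biorthogonal sequence $(\theta_j^T)_{j\in\textrm{I\kern-0.21emN}^*}$ provided by the M\"untz-Sz\'asz theorem is an infinite family, the control of Lemma~\ref{lemm:contHeat} can be illustrated numerically after truncation to the first $N$ modes. The following sketch is added only as an illustration: the control window $\omega$, the number of modes, the Fourier coefficients and all identifiers are ad hoc choices, and the functions built below are biorthogonal to $t\mapsto e^{-j^2t}$, $1\leq j\leq N$, only within the span of these $N$ exponentials, which is all a finite computation can provide. The Gram matrix involved becomes severely ill-conditioned as $N$ grows, so only small truncations are meaningful in floating-point arithmetic.
\begin{verbatim}
# Rough numerical sketch: truncated moment-method control (first N modes).
import numpy as np

N, T = 5, 0.5                  # number of modes kept, time horizon
a, b = 1.0, 2.0                # omega = (a, b), a subinterval of (0, pi)
coef = np.array([1.0, -0.5, 0.3, 0.1, 0.05])  # Fourier coefficients a_j of y^0

j = np.arange(1, N + 1)

# Gram matrix G[j-1, k-1] = int_0^T exp(-(j^2 + k^2) t) dt, in closed form
S = j[:, None] ** 2 + j[None, :] ** 2
G = (1.0 - np.exp(-S * T)) / S
M = np.linalg.inv(G)           # severely ill-conditioned for larger N

def theta(jj, t):
    # biorthogonal to exp(-k^2 t), 1 <= k <= N, within span of these exponentials
    return sum(M[k, jj - 1] * np.exp(-(k + 1) ** 2 * t) for k in range(N))

# int_omega sin^2(j y) dy over omega = (a, b), in closed form
int_sin2 = (b - a) / 2.0 - (np.sin(2 * j * b) - np.sin(2 * j * a)) / (4.0 * j)

def control(t, x):
    # truncated version of the control formula above (first N modes only)
    return -sum(coef[k] * np.exp(-(k + 1) ** 2 * T) / int_sin2[k]
                * theta(k + 1, T - t) * np.sin((k + 1) * x)
                for k in range(N))
\end{verbatim}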
In many of these contributions the sensors or actuators have a prescribed shape (for instance, balls with a prescribed radius) and then the problem consists of placing optimally a finite number of points (the centers of the balls) and thus is finite-dimensional, since the class of optimal designs is replaced with a compact finite-dimensional set. We stress that, in the present paper, the shape of the control domain is an unknown of the optimization procedure. From the mathematical point of view, the issue of studying a relaxed version of optimal design problems for the shape and position of sensors or actuators has been investigated in a series of articles. In \cite{munchHeat}, the authors study a homogenized version of the optimal location of controllers for the heat equation problem (for fixed initial data), noticing that such problems are often ill-posed. In \cite{allaireMunch}, the authors consider a similar problem and study the asymptotic behavior as the final time $T$ goes to infinity of the solutions of the relaxed problem; they prove that optimal designs converge to an optimal relaxed design of the corresponding two-phase optimization problem for the stationary heat equation. We also mention \cite{munchPedr} where, for fixed initial data, numerical investigations are used to provide evidence that the optimal location of null-controllers of the heat equation problem is an ill-posed problem. Concerning the problem of optimal shape and location of sensors for fixed initial data (instead of controllers in \cite{munchPedr}) we proved in \cite{PTZobspb1} that it is always well posed for heat, wave or Schr\"odinger equations (in the sense that no relaxation phenomenon occurs); we showed that the complexity of the optimal set depends on the regularity of the initial data, and in particular we proved that, even for smooth initial data, the optimal set may be of fractal type (and there is no relaxation). In \cite{PTZparabND}, we modeled and solved the problem of optimal shape and location of the observation domain having a prescribed measure. This problem was motivated by the question of shaping and placing sensors in some domain in such a way to optimize the quality of the observation. Here, we rather investigate the dual question of the best shape and location of actuators. In \cite{munchZuazua}, the authors investigate numerical approximations of exact or trajectory controls for the heat equation, by developing a numerical version of the so-called transmutation method. \subsection{Modeling of the optimal design problems: a randomization procedure} In the present paper, our objective is to search the internal control domain over \textit{all} possible subsets of $(0,\pi)$, without assuming any a priori regularity. We optimize not only the placement but also the shape of the actuators. \medskip Note that, for any problem consisting of optimizing the quality of the control, certainly the best strategy consists of controlling the solutions over the whole domain $(0,\pi)$. This is however obviously not reasonable and in practice the domain covered by actuators is limited, due for instance to cost considerations. From the mathematical point of view, we model this basic limitation by considering as the set of unknowns, the set of all possible measurable subsets $\omega$ of $(0,\pi)$ that are of Lebesgue measure $\vert\omega\vert = L\pi$, where $L\in(0,1)$ is some fixed real number. Any such subset $\omega$ represents the actuators put in $(0,\pi)$. 
Finally, for mathematical reasons, it is more convenient to assimilate a measurable subdomain $\omega$ of $(0,\pi)$ to its characteristic function $\chi_{\omega}$, vanishing outside $\omega$ and equal to 1 else. Hence, let us introduce the class of admissible control domains \begin{equation}\label{defUL} \boxed{ \mathcal{U}_L = \{ \chi_\omega\in L^\infty(0,\pi,\{0,1\})\ \vert\ \omega\subset \Omega\ \textrm{measurable} ,\ \vert\omega\vert=L \pi\} . } \end{equation} In view of modeling the optimal design of actuators, a first approach consists of minimizing the functional $\chi_{\omega}\mapsto \Vert \Gamma_{\omega}\Vert$ over the set $\mathcal{U}_{L}$. However, even for simple choices of control domains $\omega$, the quantity $\Vert \Gamma_{\omega}\Vert$ is not explicitly computable and therefore the cost functional is hard to handle. Besides, note that the moment control operator norm $\Vert \Gamma_{\omega}\Vert$ is \textit{deterministic} and thus provides an account for the \textit{worst possible case}; in this sense, it is a \textit{pessimistic} constant. One can argue that, in practice, when running a large number of experiments, it is expected that the worst possible case does not occur so often. For these reasons, we are next going to consider an average criterion which, in some sense, does not take into account rare events. Nevertheless, we stress that the issue of minimizing $\Vert \Gamma_{\omega}\Vert$ with respect to the domain $\omega$ has not only a mathematical interest, but appears also naturally in some practical situations, where it is imperative that the worst possible case be avoided, even if it is a rare event. We refer to the conclusion section \ref{sec:conclpb} for some comments about such a problem. The same kind of difficulty arises when modeling optimal design problems for sensors, as discussed in \cite{PTZobsND,PTZparabND}. In this paper, we propose another approach based on the controllability result stated in Lemma \ref{lemm:contHeat}, and on a randomization argument reflecting what happens when a large number of experiments is expected to be done. We are going to use a probabilistic argument, by considering random initial data. We follow the approach developed in \cite{PTZobsND,PTZparabND}. Let us fix an arbitrary initial datum $y(0,\cdot)=y^0(\cdot)\in L^2(0,\pi)$ of the controlled system \eqref{heatEqcontrolled_intro}, with Fourier coefficients defined by \begin{equation}\label{defajbj} a_j = \int_0^\pi y^0(x) \sin (jx)\, dx, \end{equation} These coefficients are now randomized according to $a_j^\nu=\beta^\nu_{j}a_j$ for every $j\in\textrm{I\kern-0.21emN}^*$, where $(\beta_{j}^\nu)_{j\in\textrm{I\kern-0.21emN}^*}$ is a sequence of independent real random variables on a probability space $(\mathcal{X},\mathcal{F},\mathbb{P})$, having mean equal to $0$, variance equal to $1$, and with a super exponential decay\footnote{Recall that the sequence $(\beta_{j}^\nu)_{j\in\textrm{I\kern-0.21emN}^*}$ is said to have a {\it super-exponential decay} whenever $$ \exists (C, \delta)\in (\textrm{I\kern-0.21emR}_+^*)^2\ \mid \ \forall \alpha \in \textrm{I\kern-0.21emR}, \ \mathbb{E}(e^{\alpha |\beta_{j}^\nu|})\leq Ce^{\delta \alpha^2}. $$} (for instance, independent Bernoulli random variables, see \cite{Burq,BurqTzvetkov1} for details and properties of randomization). 
For every event $\nu\in\mathcal{X}$, the control steering the initial datum $$ y^0_\nu (x)= \sum_{j=1}^{+\infty}\beta_j^\nu a_j \sin(jx) $$ to zero by the moment method is, according to Lemma \ref{lemm:contHeat}, $$ u^\nu (t,x)=-\sum_{j=1}^{+\infty} \frac{\beta_j^\nu a_j e^{-j^2T}}{\int_\omega\sin^2(jy) \, dy}\theta_j^T(T-t)\sin(jx). $$ Using the previous notations, one has $\Gamma_{\omega}(y^0_\nu )=\chi_\omega u^\nu$. We propose, then, to model the problem of best actuator shape and location as the problem of minimizing the averaged functional $$ \mathcal{K}(\chi_\omega) = \sup_{\Vert y^0\Vert_{L^2(0,\pi)}=1} \mathbb{E}\left(\Vert \Gamma_{\omega}(y^0_{\nu})\Vert^2_{L^2((0,T)\times (0,\pi))}\right) $$ over $\mathcal{U}_L$, where $\mathbb{E}$ is the expectation over the probability space $(\mathcal{X},\mathcal{F},\mathbb{P})$. This is the randomized counterpart to the deterministic quantity $\Vert \Gamma_{\omega}\Vert$. One of the advantages is that $\mathcal{K}(\chi_\omega)$ can be explicitly computed, as follows. \begin{lemma}\label{lem2} One has $$ \mathcal{K}(\chi_\omega) =\left( \inf_{j\in\textrm{I\kern-0.21emN}^*} \frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt} \int_{\omega}\sin^2(jx)\, dx\right)^{-1} , $$ for every measurable subset $\omega\subset(0,\pi)$. \end{lemma} Lemma \ref{lem2} is proved in Section \ref{sec_proof_lem2}. Therefore, the problem of best shape and location of actuators is finally written as \begin{equation}\label{def:odp1D} \boxed{ \sup_{\chi_{\omega}\in\mathcal{U}_{L}}\inf_{j\in\textrm{I\kern-0.21emN}^*} \frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt} \int_{\omega}\sin^2(jx)\, dx. } \end{equation} The article is organized as follows: Section \ref{sec31} is devoted to comments on the control of parabolic equations by the moment method, the use of biorthogonal sequences and modeling of the optimal design problem issues. In Section \ref{sec32}, we solve the problem and provide a numerical illustration as well as a series of examples, mainly in the 1D case due to the restrictions imposed by the choice of the control method. Finally, in Section \ref{sec:OLC}, we investigate a variant of the previously studied optimal design problem, where the control acts on the system by means of tuning the time-intensity. \section{Solving the problem \eqref{def:odp1D}}\label{sec:mainResIntro} \subsection{Main results, comments and illustration}\label{sec:mainrescomillu} We first provide an existence result. \begin{theorem}\label{thm:odp1D} The shape optimization problem \eqref{def:odp1D} has a unique\footnote{Here and in the sequel, it is understood that the optimal set is unique within the class of all measurable subsets of $(0,\pi)$ quotiented by the set of all measurable subsets of $\Omega$ of zero measure.} solution $\chi_{\omega^*}$. \end{theorem} This theorem is proved in Section \ref{sec_thm:odp1D}. In addition to this result, what is remarkable is that we have a simple and numerically efficient procedure to compute the optimal control domain $\omega^*$. \paragraph{Algorithmic computation procedure.} The optimal set $\omega^*$ of Theorem \ref{thm:odp1D} can actually be built from a finite-dimensional spectral approximation, by keeping only a finite number of modes. Let us provide the details of the procedure. 
For every integer $N\in\textrm{I\kern-0.21emN}^*$, we define the functional $\mathcal{J}_N$ by \begin{equation*} \mathcal{J}_N(\chi_\omega) = \inf_{1\leq j\leq N} \frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt} \int_{\omega} \sin^2(jx)\, dx, \end{equation*} for every measurable subset $\omega$ of $(0,\pi)$. The functional $\mathcal{J}_N$ is a spectral truncation to the $N$ first terms. We consider the shape optimization problem \begin{equation}\label{pb_max_JN} \sup_{\chi_\omega\in {\mathcal{U}}_L} \mathcal{J}_N(\chi_\omega), \end{equation} called truncated problem, which is a spectral approximation of the problem \eqref{def:odp1D}. We have then the following results, proved in Sections \ref{sec_A2} and \ref{sec_thm:odp1D}. \begin{proposition}\label{truncTheo} For every $N\in\textrm{I\kern-0.21emN}^*$, the truncated problem \eqref{pb_max_JN} has a unique solution $\chi_{\omega^N}\in\mathcal{U}_L$. Moreover, $\omega^N$ has a finite number of connected components, and there exists $\eta^N>0$ such that $\omega^N\subset [\eta^N,\pi-\eta^N]$. \end{proposition} Proposition \ref{prop:trunc} further (see Section \ref{sec32}) will provide an extension of this result to higher dimensions. We will however provide two different proofs. Indeed, in the one-dimensional case investigated here, we will show in the proof that the problem \eqref{pb_max_JN} can be expressed in an equivalent way as a classical optimal control problem. This point of view (already used in \cite{PTZ_HUM1D}) is interesting not only for the proof but also in order to derive efficient numerical methods for the numerical computation of the optimal domains. Let us now give the main result that is at the base of the algorithmic procedure. \begin{theorem}\label{thm:stat1D} There exists $N_0\in \textrm{I\kern-0.21emN}^*$ such that $\omega^*=\omega^N$ for every $N\geq N_0$. Furthermore, we have $N_0\leq \widetilde{N}_0$, where $\widetilde{N}_0$ is the first integer (which exists and is finite) such that $$ \forall j\geq \widetilde{N}_0, \quad \Vert \theta_j^T\Vert_{L^2(0,T)}^2\leq \frac{e^2\left(\pi L-\sin (\pi L)\right)}{128}e^{2T(j^2-1)}. $$ As a result, $N_0$ is equal to $1$ if $T$ is large enough. \end{theorem} In other words, Theorem \ref{thm:stat1D} says that the sequence $(\omega^N)_{N\in\textrm{I\kern-0.21emN}^*}$ of optimal sets, whose existence is stated in Proposition \ref{truncTheo}, is stationary. The numerical procedure consists of computing these sets, and once it has become stationary, then we have found the optimal set $\omega^*$, solution of the shape optimization problem \eqref{def:odp1D}. A natural issue concerns the characterization of the minimal integer $N_0$ such that the sequence of optimal domains $(\omega^N)_{N\geq N_0}$ remains constant. Even if a partial answer is provided in Theorem \ref{thm:stat1D}, it is likely that the determination of $N_0$ is in general intricate. As a numerical illustration of this computation procedure, we provide on Figure \ref{figpb2Dcont} several numerical simulations of the optimal control domain, solution of the truncated problem \eqref{pb_max_JN} in the 1D case, for the Dirichlet-Laplacian. We observe the expected stationarity property of the sequence of optimal domains $\omega^N$ from $N=5$ on. \bigskip \begin{figure}[h!] 
\begin{center} \includegraphics[width=3.5cm]{fig/controlN1L20.pdf} \includegraphics[width=3.5cm]{fig/controlN2L20.pdf} \includegraphics[width=3.5cm]{fig/controlN3L20.pdf} \includegraphics[width=3.5cm]{fig/controlN4L20.pdf} \includegraphics[width=3.5cm]{fig/controlN5L20.pdf} \includegraphics[width=3.5cm]{fig/controlN6L20.pdf} \includegraphics[width=3.5cm]{fig/controlN7L20.pdf} \includegraphics[width=3.5cm]{fig/controlN8L20.pdf} \caption{${\Omega}=(0,\pi)$, $L=0.2$, $T=0.05$. From left to right, and top to down: optimal solution $\chi_{\omega^N}$ for $N=1,\ldots,8$.}\label{figpb2Dcont} \end{center} \end{figure} In the forthcoming section devoted to providing the proofs of the results above, it will be required to consider a convexified version of the problem \eqref{def:odp1D}, which may fail to have some solutions because of the hard constraint\footnote{Indeed, equality constraints in $L^\infty$ are in general not preserved by the natural topologies such as the $L^\infty$ weak-star topology.} $\chi_\omega\in\mathcal{U}_L$ (which is a binary constraint almost everywhere). This is usually referred to as relaxation (see, e.g., \cite{BucurButtazzo}). Since the set $\mathcal{U}_L$ (defined by \eqref{defUL}) does not share nice compactness properties, we consider the convex closure of $\mathcal{U}_L$ for the weak star topology of $L^\infty$, which is \begin{equation}\label{defALbar} \overline{\mathcal{U}}_L = \left\{ a\in L^\infty(0,\pi;[0,1])\ \vert\ \int_{\Omega} a(x)\,dx=L\pi\right\}. \end{equation} Such a relaxation was used as well in \cite{munchHeat,PTZObs1,PTZobsND}. Replacing $\chi_\omega\in\mathcal{U}_L$ with $a\in\overline{\mathcal{U}}_L$, we consider the relaxed (or convexified) formulation of the problem \eqref{def:odp1D} given by \begin{equation}\label{defJa} \sup_{a\in \overline{\mathcal{U}}_L} \mathcal{J}(a) , \end{equation} where the functional $J$ is naturally extended to $\overline{\mathcal{U}}_L$ by \begin{equation}\label{defJrelax} \mathcal{J}(a)=\inf_{j\in \textrm{I\kern-0.21emN}^*}\frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt} \int_{\Omega}a(x) \sin^2(jx)\, dx, \end{equation} for every $a\in \overline{\mathcal{U}}_L$. We consider as well a relaxed formulation of the truncated optimal design problem \eqref{pb_max_JN} by \begin{equation}\label{defJaN} \sup_{a\in \overline{\mathcal{U}}_L} \mathcal{J}_N(a) , \end{equation} where the functional $\mathcal{J}_{N}$ is naturally extended to $\overline{\mathcal{U}}_L$ by \begin{equation}\label{defJaNrelax} \mathcal{J}_N(a)=\inf_{1\leq j\leq N}\frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt} \int_{\Omega}a(x)\sin^2(jx)\, dx, \end{equation} for every $a\in \overline{\mathcal{U}}_L$. Being defined as the infimum of linear functions, continuous for the $L^\infty$ weak star topology, the functional $\mathcal{J}$ is upper semi continuous for the $L^\infty$ weak star topology. The set $\overline{\mathcal{U}}_L$ being compact for this topology, we then have the following result. \begin{lemma}\label{propExistRelax} For every $L\in (0,1)$, the relaxed problem \eqref{defJa} (respectively \eqref{defJaN}, for any $N\in\textrm{I\kern-0.21emN}^*$) has at least one solution $a^*\in\overline{\mathcal{U}}_L$ (respectively $a^{N}\in\overline{\mathcal{U}}_L$). 
\end{lemma} \subsection{Proof of Proposition \ref{truncTheo}}\label{sec_A2} Considering the functions $a(\cdot)$ of $\overline{\mathcal{U}}_L$ as controls, and interpreting the problem \eqref{pb_max_JN} as an optimal control problem, leads to consider the control system \begin{equation}\label{contsys1henrot} \begin{split} y'(x) &= a(x), \\ y_j'(x) & = \frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt} a(x)\sin^2(jx),\quad j\in\{1,\ldots,N\}, \\ z'(x) & = 0, \end{split} \end{equation} for almost every $x\in[0,\pi]$, with initial conditions \begin{equation}\label{initcond1henrot} y(0)=0,\quad y_j(0)=0,\ j\in\{1,\ldots,N\}. \end{equation} The additional function $z$ above stands for the cost functional $\mathcal{J}_N(a)$ and will be defined with the help of inequality constraints below since it is written as the minimum of the quantities $y_j (\pi)$ over $ j\in\{1,\ldots,N\}$. The relaxed problem \eqref{defJa} is then equivalent to the optimal control problem of determining a control $a\in\overline{\mathcal{U}}_L$ steering the control system \eqref{contsys1henrot} from the initial conditions \eqref{initcond1henrot} to the final condition \begin{equation}\label{finalcondhenrot} y(\pi)=L\pi, \end{equation} and maximizing the quantity $z (\pi)$ (or similarly $z(0)$, since $z$ in constant on $[0,\pi]$), with the additional final conditions \begin{equation}\label{finalcondhenrot2} z (\pi)\leq y_j (\pi),\quad \textnormal{for all }j\in\{1,\ldots,N\}. \end{equation} Indeed, this follows directly from the observation that the unique solution of $$ \max \{z\ \mid \ z\leq y_j (\pi), \ j\in\{1,\ldots,N\} \} $$ is $z=\min_{1\leq j\leq N}y_j (\pi)$. Therefore, $a^*$ is a solution of the optimal control problem above. The existence of an optimal control is standard. According to the Pontryagin Maximum Principle (see \cite{Pontryagin}), if $a$ is optimal then there exist real numbers\footnote{Note that, since the dynamics of \eqref{contsys1henrot} do not depend on the state, it follows that the adjoint states of the Pontryagin Maximum Principle are constant.} $(p_y,p_1,\ldots,p_N)\in \textrm{I\kern-0.21emR}_{-}\times \textrm{I\kern-0.21emR}_{+}^{N}\backslash (0,\ldots,0)$, such that \begin{equation}\label{maxcondhenrot} a(x) = \left\{ \begin{split} 1 & \quad \textrm{if}\ \varphi^N(x)>0,\\ 0 &\quad \textrm{if}\ \varphi^N(x)<0, \end{split} \right. \end{equation} for almost every $x\in[0,\pi]$, where the so-called switching function $\varphi^N$ is defined by \begin{equation}\label{defvarphihenrot} \varphi^N(x) = p_y+\sum_{j=1}^{N}\frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt} p_j\sin^2(jx). \end{equation} Moreover, the control $a(\cdot)$ is nonsingular (see \cite{trelat}) since $\varphi^N$ is a finite trigonometric sum and thus cannot be constant on any subset of positive measure. In particular, this implies that the optimal control $a^N$ is the characteristic function of a measurable subset $\omega^N(L)$ of $[0,\pi]$ of measure $L\pi$. Note that the minimum of $\varphi^N$ on $[0,\pi]$ is reached at $0$ and $\pi$, hence from \eqref{maxcondhenrot} the optimal set $\omega^N$ does not contain $0$ and $\pi$. To prove uniqueness, according to the previous discussion where it is stated that every maximizer of $J$ over $\overline{\mathcal{U}}_{L}$ is the characteristic function of some subset of $[0,\pi]$, assume that there exist two distinct minimizers $\chi_{\omega_1}$ and $\chi_{\omega_2}$ in $\mathcal{U}_{L}$. 
As an infimum of linear functionals, the functional $a\mapsto \mathcal{J}_N(a)$ is concave on $\overline{\mathcal{U}}_L$, and it follows that for every $t\in (0,1)$ the function $t\chi_{\omega_1}+(1-t)\chi_{\omega_2}$ is also a solution of the problem \eqref{defJaN}, which is in contradiction with the fact that any solution of this problem is extremal. Finally, the fact that $\omega^N(L)$ has at most $N$ connected components follows from the facts that the elements of $\partial\omega^N(L)$ are the solutions of $\varphi^N(x)=0$ and that $\varphi^N$ can be written as $$ \varphi^N(x)=p_y+\frac{1}{2}\sum_{j=1}^N\frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt} p_j-\frac{1}{2}\sum_{j=1}^N \frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt} p_{j}T_{2j}(\cos x), $$ where $T_{2j}$ denotes the $2j$-th Chebyshev polynomial of the first kind. The degree of the polynomial $\varphi^N(\arccos X)$ (in the argument $X$) is at most $2N$, whence the result. \subsection{Proofs of Theorems \ref{thm:odp1D} and \ref{thm:stat1D}}\label{sec_thm:odp1D} The main idea of this proof is close to the one of \cite[Theorem 1]{PTZparabND}. According to Lemma \ref{propExistRelax}, the relaxed optimal design problem \eqref{defJa} has at least one solution $a^*\in\overline{\mathcal{U}}_L$. We will prove simultaneously Theorems \ref{thm:odp1D} and \ref{thm:stat1D}, by showing that $a^*$ coincides with the solution $a^N$ of the truncated problem \eqref{pb_max_JN} for $N$ large enough. First of all, as a consequence of \cite[Lemma 6]{PTZ_HUM1D}, we have $ \int_{0}^\pi a^*(x)\sin^2(jx)\, dx\geq \frac{L\pi-\sin(L\pi)}{2}, $ and therefore, \begin{equation}\label{metz1144} \frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt}\int_{0}^\pi a^*(x)\sin^2(jx)\, dx\geq \frac{e^{2j^2T}(L\pi-\sin(L\pi))}{2\int_{0}^T\theta_j^T(t)^2\, dt} \end{equation} for every $j\in \textrm{I\kern-0.21emN}^*$. Besides, we have the following result on the growth of the biorthogonal sequence $(\theta_j^T)_{j\in\textrm{I\kern-0.21emN}^*}$, following from \cite[Theorem 3.2]{MicuZuazua}. \begin{lemma}\label{lemma:CTbiorth} Let $T>0$. There exists $C_{T}>0$ such that $$ C_{T} \int_{0}^T\theta_j^T(t)^2\, dt\leq e^{2\pi j}, $$ for every $j\in\textrm{I\kern-0.21emN}^*$. \end{lemma} It follows from this result that $$ \frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt}\geq C_{T}e^{2j^2T-2\pi j}, $$ for every $j\in \textrm{I\kern-0.21emN}^*$. Combining these two facts, we infer that \begin{equation}\label{eq:limproof} \lim_{j\to +\infty}\frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt}\int_{0}^\pi a^*(x)\sin^2(jx)\, dx=+\infty , \end{equation} and moreover, there exists $N_0\in\textrm{I\kern-0.21emN}^*$ such that \begin{equation}\label{propN0} \inf_{j>N_0}\frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt}\int_{\Omega} a^*(x) \sin^2(jx)\, dx > \frac{e^{2T}}{\int_{0}^T\theta_1^T(t)^2\, dt} . \end{equation} Since there holds in particular $$ \mathcal{J}_{N_0}(a^*)\leq \frac{e^{2T}}{\int_{0}^T\theta_1^T(t)^2\, dt}\int_{\Omega} a^*(x)\sin^2(x)\, dx\leq \frac{e^{2T}}{\int_{0}^T\theta_1^T(t)^2\, dt}, $$ we infer from \eqref{propN0} that $$ \mathcal{J}(a^*) = \min \left(\mathcal{J}_{N_0}(a^*),\inf_{j>N_0}\frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt}\int_{\Omega} a^*(x) \sin^2(jx)\, dx\right) = \mathcal{J}_{N_0}(a^*) . $$ Let us actually prove that $\mathcal{J}(a^*) = \mathcal{J}_{N_0}(a^{N_0})$, where $a^{N_0}\in\mathcal{U}_L$ denotes the unique maximizer of $\mathcal{J}_{N_0}$, as stated in Lemma \ref{propExistRelax}.
Since $a^{N_0}$ maximizes $\mathcal{J}_{N_0}$ over $\overline{\mathcal{U}}_L$, one has $\mathcal{J}(a^*) =\mathcal{J}_{N_0}(a^*) \leq \mathcal{J}_{N_0}(a^{N_0})$. Let us argue by contradiction and assume that $\mathcal{J}_{N_0}(a^*) < \mathcal{J}_{N_0}(a^{N_0})$. For every $t\in[0,1]$, we set $ a_t = a^* + t ( a^{N_0} - a^*)$. Since $\mathcal{J}_{N_0}$ is concave (as an infimum of linear functionals), we get $$ \mathcal{J}_{N_0}(a_t) \geq (1-t)\mathcal{J}_{N_0}(a^*) + t \mathcal{J}_{N_0}(a^{N_0}) > \mathcal{J}_{N_0}(a^*) = \mathcal{J}(a^*), $$ for every $t\in(0,1]$, which means that \begin{equation}\label{train17h39} \inf_{1\leq j\leq N_0} \frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt} \int_{\Omega} a_t(x) \sin^2(jx) \, dx > \inf_{1\leq j\leq N_0} \frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt} \int_{\Omega} a^*(x) \sin^2(jx) \, dx \geq \mathcal{J}(a^*), \end{equation} for every $t\in(0,1]$. Besides, there exist $\varepsilon>0$ and $t\in(0,1]$ small enough such that $$ \frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt} \int_{\Omega} a_t(x) \sin^2(jx) \, dx \geq (1-t)\frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt} \int_{\Omega} a^*(x)\sin^2(jx) \, dx \geq \frac{e^{2T}}{\int_{0}^T\theta_1^T(t)^2\, dt} +\varepsilon, $$ for every $j>N_0$. Therefore, \begin{equation}\label{train17h44} \inf_{j> N_0} \frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt}\int_{\Omega} a_t(x) \sin^2(jx) \, dx > \frac{e^{2T}}{\int_{0}^T\theta_1^T(t)^2\, dt}. \end{equation} Since there holds in particular $\mathcal{J}_{N_0}(a_t)\leq \frac{e^{2T}}{\int_{0}^T\theta_1^T(t)^2\, dt}$, we infer from \eqref{train17h39} and \eqref{train17h44} that $\mathcal{J}(a_t) = \mathcal{J}_{N_0}(a_t) > \mathcal{J}(a^*)$, which contradicts the optimality of $a^*$. Therefore $\mathcal{J}_{N_0}(a^*) =\mathcal{J}(a^*)= \mathcal{J}_{N_0}(a^{N_0})$, whence the result. \paragraph{Estimate of the integer $N_0$.} It remains to provide an estimate for $N_0$. We claim that any nonzero integer $\widetilde{N}_0$ such that the inequality $$ \inf_{j>\widetilde{N}_0} \frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt} \int_{\Omega} \chi_{\omega^N}(x) \sin(jx)^2\, dx > \frac{e^{2T}}{\int_{0}^T\theta_1^T(t)^2\, dt}, $$ holds true satisfies $N_0\leq \widetilde{N}_0$ (in the sequel, we denote by $\widetilde{N}_0$ any integer such that the sequence $(\omega^N)_{N\geq \widetilde{N}_0}$ remains constant). To prove this claim, let us consider the simple case where $T\geq 1$. Notice that, in the next explanations, the lower bound on the time $T$ is not a restriction of our approach and can be chosen as small as desired with a slight adaptation of the following arguments. It is possible to perform more precise computations since, in this case, we know several properties of the involved biorthogonal sequences $(\theta_j^T)_{j\in\textrm{I\kern-0.21emN}^*}$ as well as the useful spectral property: for all $j\in\textrm{I\kern-0.21emN}^*$, one has $\int_{0}^\pi \chi_{\omega^N}(x)\sin^2(jx)\, dx\geq \frac{L\pi-\sin(L\pi)}{2}$ according to \cite[Lemma 6]{PTZ_HUM1D}. As a consequence, following the proof of Theorem \ref{thm:odp1D} and by using in particular \eqref{metz1144}, $\widetilde{N}_0$ can be chosen to be any integer such that $$ \forall j\geq \widetilde{N}_0, \quad \frac{e^{2j^2T}}{\Vert \theta_j^T\Vert_{L^2(0,T)}^2}\left(\frac{\pi L-\sin (\pi L)}{2}\right)\geq \frac{e^{2T}}{\Vert \theta_1^T\Vert_{L^2(0,T)}^2}.
$$ According to \cite[Theorem 3.2]{MicuZuazua}, there holds $\Vert \theta_1^T\Vert_{L^2(0,T)}^2\geq \frac{e^2}{64}$, and we infer that $\widetilde{N}_0$ can also be chosen such that $$ \forall j\geq \widetilde{N}_0, \quad \Vert \theta_j^T\Vert_{L^2(0,T)}^2\leq \frac{e^2\left(\pi L-\sin (\pi L)\right)}{128}e^{2T(j^2-1)}. $$ It remains to provide an upper bound of the quantity $\theta_j^T$ for any $j\in\textrm{I\kern-0.21emN}^*$. To this aim, we will use that for a given $j\in \textrm{I\kern-0.21emN}^*$, the mapping $v:[0,T]\ni t\mapsto \frac{(-1)^j}{2j}e^{-j^2T}\theta_j^T(t)$ is the control of minimal $L^2(0,T)$-norm for the boundary control problem of steering the system \begin{equation}\label{eqphicont} \begin{split} & \partial_t \varphi(t,x)-\partial_{xx} \varphi (t,x)=0, \qquad (t,x)\in (0,T)\times(0,\pi),\\ & \varphi(t,0)=0, \quad \varphi (t,\pi)=v(t), \qquad\qquad t\in (0,T), \end{split} \end{equation} with initial datum $\varphi(0,x)=\sin(jx)$ to zero in time $T$, as highlighted in \cite[Proposition 2.2]{MicuZuazua}. Consider the particular control function $w_j$ vanishing on the time interval $[0,T-1]$ and equal to the control constructed with the help of the Hilbert Uniqueness Method (HUM) for the control problem of steering the system \eqref{eqphicont} with initial datum $\varphi(0,x)=e^{-j^2(T-1)}\sin (jx)$ to zero in time $1$. The existence of $w_j$ is well-known and we refer for instance to \cite{TucsnakWeiss,zuazua}. More generally, the controllability property of \eqref{eqphicont} also implies the existence $C>0$ that does not depend on $j$ nor $T$ such that $\Vert w_j\Vert_{L^2(0,T)}\leq Ce^{-(T-1)j^2}$ for all $j\in\textrm{I\kern-0.21emN}^*$ since the sequence of functions $(x\mapsto\sin(jx))_{j\in\textrm{I\kern-0.21emN}^*}$ is uniformly bounded in $L^2(0,\pi)$. We thus infer that $$ \Vert \theta_j^T\Vert_{L^2(0,T)}\leq 2Cje^{j^2} $$ for every $j\in\textrm{I\kern-0.21emN}^*$, and therefore, it suffices to choose $\widetilde{N}_0$ such that $$ j\geq \widetilde{N}_0\ \Rightarrow\ 2Cje^{j^2}\leq \frac{e^2\left(\pi L-\sin (\pi L)\right)}{128}e^{2T(j^2-1)}. $$ This estimate shows in particular that $\widetilde{N}_0$ and thus $N_0$ are equal to $1$ if $T$ is large enough. \section{Generalization to parabolic distributed parameter systems, and lumped control}\label{sec:Control} In this section, we generalize the results obtained previously for the one-dimensional heat equation, to a large family of parabolic systems. In a second step, we consider an alternative way of acting on the system, by means of lumped controls. \subsection{Problem setting}\label{sec31} Let $n\in\textrm{I\kern-0.21emN}^*$ be an integer, and let $\Omega$ be a bounded open connected subset of $\textrm{I\kern-0.21emR}^n$. We consider the internally controlled parabolic distributed parameter system \begin{equation}\label{heatEqcontrolled} \partial_t y+A_{0} y=\chi_\omega u, \quad t\in (0,T), \end{equation} where $A_0:D(A_{0})\rightarrow L^2(\Omega,\C)$ is a densely defined operator that generates a strongly continuous semigroup on $L^2(\Omega,\C)$, $u\in L^2((0,T)\times \Omega,\C)$ is the control function, and $\omega\subset\Omega$ is a measurable subset standing for the control domain. We assume that there exists an orthonormal basis $(\phi_j)_{j\in\textrm{I\kern-0.21emN}^*}$ of $L^2(\Omega,\C)$ consisting of eigenfunctions of $A_{0}$, associated with (complex) eigenvalues $(\lambda_j)_{j\in\textrm{I\kern-0.21emN}^*}$ such that $\mathrm{Re}(\lambda_1)\leq\cdots\leq \mathrm{Re}(\lambda_j)\leq\cdots$. 
The one-dimensional heat equation investigated previously fits into this framework, but the setting is now much more general. \medskip The objective of this section is to give a precise sense to the question of optimizing the control domain $\omega$. As a first remark, let us note that, since the equation is parabolic and thus has smoothing properties, we focus on the exact null controllability problem, that is, the problem of steering the system from any initial condition (in an appropriate functional space) to zero, within a time $T>0$. We use the moment method in order to derive a relevant model of optimal sensor shape and location, with results valid for almost every initial datum. This method provides a way of constructing a control achieving exact null controllability, for a given initial datum $y^0\in L^2(\Omega)$. As explained below, this approach suffers, however, from restrictions related to the M\"untz-Sz\'asz theorem, and thus cannot be applied to every parabolic system. We address this control problem in the framework developed in \cite{fattorini2} (see also the survey \cite{Russell}) where the controllability problem is reduced to a moment problem which is solved explicitly with the help of a biorthogonal sequence to the family of exponential functions $\Lambda = (e^{-\lambda_{j}t })_{j\geq 1}$. Consider the control system \eqref{heatEqcontrolled} with the initial data \begin{equation}\label{y0} y(0)=y^0=\sum_{j\in\textrm{I\kern-0.21emN}^*} a_j\phi_j \in L^2(\Omega). \end{equation} The moment method provides a control steering the parabolic system \eqref{heatEqcontrolled} to zero, as stated in the following result. \begin{lemma}\label{lemma:momentCont} We formally define the function $u$ by \begin{equation}\label{defu} u(t,x)=-\sum_{j\in\textrm{I\kern-0.21emN}^*} \frac{a_j e^{-\lambda_jT}}{\int_\omega|\phi_{j}(y)|^2 \, dy}\theta_j^T(T-t)\phi_j(x) , \end{equation} for almost every $t\in (0,T)$ and every $x\in \Omega$. If this series defines a function of $L^2((0,T)\times\Omega)$, then this control is a solution of the problem of steering the system \eqref{heatEqcontrolled} from $y^0$ to $0$ in time $T$. \end{lemma} The proof of this lemma is given in Section \ref{secproof31}. \begin{remark}\label{rem_Muntz} Recall that such a biorthogonal sequence exists if and only if the family $\Lambda$ is \textit{minimal}, that is, every element $t\mapsto e^{-\lambda_jt}$ lies outside of the closure in $L^2(0,T)$ of the vector space spanned by all other elements $t\mapsto e^{-\lambda_kt}$, with $k\neq j$. If this condition is fulfilled, then this biorthogonal sequence is uniquely determined if and only if the family $\Lambda$ is complete in $L^2(0,T)$. It is well known, by the M\"untz-Sz\'asz theorem, that the family $\Lambda$ is complete in $L^2(0,T)$ (but not independent) if and only if $$\sum_{j\in\textrm{I\kern-0.21emN}^*}\frac{1}{\mathrm{Re}(\lambda_j)+\lambda}=+\infty,$$ for some real number $\lambda$ such that $\mathrm{Re}(\lambda_j)+\lambda>0$ for every $j\in\textrm{I\kern-0.21emN}^*$ (for instance, $\lambda=-\mathrm{Re} (\lambda_1)+1$ is suitable). On the contrary, if this series is convergent then the closure of the span of $\Lambda$ is a proper subspace of $L^2(0,T)$; moreover, $\Lambda$ is minimal and thus a biorthogonal sequence exists. Here, we are thus led to assume that the series is convergent, which is quite a strong restriction on the parabolic system under consideration.
\end{remark} For every $y^0\in L^2(\Omega)$, we set $\Gamma_{\omega}(y^0)=\chi_\omega u$, where $u$ is the control defined by \eqref{defu}, steering the system \eqref{heatEqcontrolled} from $y^0$ to $0$ in time $T$. This defines an operator $\Gamma_{\omega}:L^2(\Omega)\rightarrow L^2((0,T)\times\Omega)$, called the \textit{moment control operator}, which is linear and continuous. Its norm is $ \Vert \Gamma_{\omega}\Vert = \sup \{ \Vert \Gamma_{\omega} (y^0)\Vert_{L^2((0,T)\times\Omega)} \mid \Vert y^0\Vert_{L^2(\Omega)}=1 \}. $ As in the previous section, we randomize the Fourier coefficients of a given $y^0\in D(A_{0})$, with $y^0=\sum_{j=1}^{+\infty}a_j\phi_j$, by defining $a_j^\nu=\beta^\nu_{j}a_j$ for every $j\in\textrm{I\kern-0.21emN}^*$, where $(\beta_{j}^\nu)_{j\in\textrm{I\kern-0.21emN}^*}$ is a sequence of independent real-valued random variables on a probability space $(\mathcal{X},\mathcal{A},\mathbb{P})$ having mean equal to $0$, variance equal to $1$, and a super exponential decay (for instance, independent Bernoulli random variables). Then we define $$ \mathcal{K}(\chi_\omega) = \sup_{\Vert y^0\Vert_{L^2(\Omega)}=1} \mathbb{E}\left(\Vert \Gamma_{\omega}(y^0_{\nu})\Vert^2_{L^2((0,T)\times \Omega)}\right), $$ where $y^0_\nu= \sum_{j=1}^{+\infty}\beta_j^\nu a_j \phi_j$, and $\mathbb{E}$ is the expectation over the space $\mathcal{X}$ with respect to the probability measure $\mathbb{P}$. \begin{lemma}\label{lemmContrand} There holds $$ \mathcal{K}(\chi_\omega) = \left( \inf_{j\in\textrm{I\kern-0.21emN}^*}\gamma_{j}(T)\int_{\omega}|\phi_j(x)|^2\, dx \right)^{-1} , $$ where the coefficients $\gamma_{j}(T)$ are defined by \begin{equation}\label{defgammajcontrol} \gamma_{j}(T)=\frac{e^{2\mathrm{Re}(\lambda_{j})T}}{\int_{0}^T\theta_j^T(t)^2\, dt}, \end{equation} for every $j\in\textrm{I\kern-0.21emN}^*$. \end{lemma} This lemma is proved in Section \ref{sec_proof_lem2}. As discussed previously, we model the best actuator shape and placement problem as the problem of minimizing $\mathcal{K}$ over the set $\mathcal{U}_{L}$ defined by \begin{equation}\label{defULnew} \mathcal{U}_L=\{ \chi_\omega\in L^\infty(\Omega ,\{0,1\})\ \vert\ \omega\subset \Omega\ \textrm{measurable} ,\ \vert\omega\vert=L |\Omega|\}. \end{equation} According to Lemma \ref{lemmContrand}, the problem of optimal actuator placement is equivalent to the problem \begin{equation}\label{optDesignContrand} \boxed{ \sup_{\chi_{\omega}\in \mathcal{U}_{L}}\inf_{j\in\textrm{I\kern-0.21emN}^*}\gamma_{j}(T)\int_{\omega}|\phi_j(x)|^2\, dx, } \end{equation} where the coefficients $\gamma_{j}(T)$ are defined by \eqref{defgammajcontrol}. In what follows, we define $$ \mathcal{J}(\chi_\omega)=\inf_{j\in\textrm{I\kern-0.21emN}^*}\gamma_{j}(T)\int_{\omega}|\phi_j(x)|^2\, dx, $$ for every measurable subset $\omega\subset\Omega$. \subsection{Main result and examples}\label{sec32} We consider the following assumptions. \begin{itemize} \item[$\mathbf{(H_1)}$] (\textit{Strong Conic Independence Property}) If there exist a subset $E$ of $\Omega$ of positive Lebesgue measure, an integer $N\in\textrm{I\kern-0.21emN}^*$, a $N$-tuple $(\alpha_{j})_{1\leq j\leq N}\in (\textrm{I\kern-0.21emR}_{+})^N$, and $C\geq 0$ such that $\sum_{j=1}^N \alpha_j \vert\phi_j(x)\vert^2 = C$ almost everywhere on $E$, then there must hold $C = 0$ and $\alpha_j = 0$ for every $j\in \{1,\cdots,N\}$. 
\item[$\mathbf{(H_2)}$] For every $a\in L^\infty(\Omega;[0,1])$ such that $\int_{\Omega}a(x)\, dx = L|\Omega|$, one has \begin{equation* \liminf_{j\rightarrow+\infty} \ \gamma_j(T) \int_\Omega a(x) \vert\phi_j(x)\vert^2\, dx > \gamma_1(T) ; \end{equation*} \item[$\mathbf{(H_3)}$] The eigenfunctions $\phi_j$ are analytic in $\Omega$. \end{itemize} These assumptions have been considered as well in \cite{PTZparabND} and are commented in that reference. For instance, they are satisfied for $A_0=(-\triangle)^\alpha$ with $\alpha>\frac{1}{2}$ and $\triangle$ is the Dirichlet-Laplacian on a piecewise $C^1$ domain $\Omega$ (see \cite[Section 2.4]{PTZparabND}). The problem \eqref{optDesignContrand} is similar to the optimal design problem \eqref{def:odp1D}, except that now the weights $\gamma_j(T)$ are defined by \eqref{defgammajcontrol}. It appears then important to estimate the asymptotics of $\gamma_j(T)$ as $j$ tends to $+\infty$. But this has been done in \cite{Avdonin,fattorini2,Hansen,MicuZuazua}. Those estimates will impose further restrictions on the problem under consideration. For every $N\in\textrm{I\kern-0.21emN}^*$, we define the truncated criterion \begin{equation*} \mathcal{J}_N(\chi_\omega)=\inf_{1\leq j\leq N}\gamma_{j}(T)\int_{\omega}|\phi_j(x)|^2\, dx, \end{equation*} for every measurable subset $\omega\subset\Omega$. We have the following result. \begin{proposition}\label{prop:trunc} Let $N\in\textrm{I\kern-0.21emN}^*$. Under $\mathbf{(H_1)}$, the problem \begin{equation}\label{pb:truncated} \sup_{\chi_{\omega}\in\mathcal{U}_{L}}\mathcal{J}_N(\chi_\omega) \end{equation} has a unique solution $\chi_{\omega^N}$ in $\mathcal{U}_{L}$. Moreover, under the additional assumption $\mathbf{(H_3)}$, $\omega^N$ is an open semi-analytic\footnote{A subset $\omega$ of a real analytic finite dimensional manifold $M$ is said to be semi-analytic if it can be written in terms of equalities and inequalities of analytic functions. We recall that such semi-analytic subsets are stratifiable in the sense of Whitney (see \cite{Hardt,Hironaka}), and enjoy local finitetess properties, such that: local finite perimeter, local finite number of connected components, etc.} set. \end{proposition} This proposition is proved in Section \ref{secproof:prop:trunc}. The main result is then the following theorem, proved in Section \ref{sec:proofcormaintheo}. \begin{theorem}\label{cormaintheo} Assume that there exist $m_1>0$, $m_2\in(0,2T)$, and a sequence $(\theta_j^T)_{j\in\textrm{I\kern-0.21emN}^*}$ biorthogonal to the family $\Lambda=(t\mapsto e^{-\lambda_{j}t})_{j\geq 1}$, such that \begin{equation}\label{cond_vplambdaj} \Vert \theta_j^T\Vert_{L^2(0,T)}^2 \leq m_1 e^{m_2 \mathrm{Re}(\lambda_j)}, \end{equation} for every $j\in\textrm{I\kern-0.21emN}^*$. Then, under $\mathbf{(H_1)}$ and $\mathbf{(H_2)}$, the problem \eqref{optDesignContrand} has a unique solution $\chi_{\omega^*}\in\mathcal{U}_L$. Moreover there exists $N_{0}\in \textrm{I\kern-0.21emN}^*$ such that $\omega^*=\omega^N$, for every $N\geq N_0$. In particular, if $\mathbf{(H_3)}$ is moreover satisfied, then $\omega^*$ is an open semi-analytic subset of $\Omega$, and thus, it has a finite number of connected components. \end{theorem} The same considerations as in Section \ref{sec:mainResIntro} on the algorithmic computation procedure still hold in this general framework. \bigskip To finish, we provide hereafter some classes of examples for which the existence of a biorthogonal sequence satisfying \eqref{cond_vplambdaj} is known. 
\begin{itemize} \item Assume that there exist $\delta>0$, $\beta>1$, $\varepsilon>0$, $A\geq 0$ and $B\geq\delta$ such that \begin{equation}\label{cond_vpj} \vert \lambda_j-\lambda_k\vert \geq \delta \vert j^\beta-k^\beta \vert\quad\textrm{and}\quad \varepsilon(A+Bj^\beta)\leq\vert\lambda_j\vert<A+Bj^\beta, \end{equation} for all $(j,k)\in(\textrm{I\kern-0.21emN}^*)^2$, where the elements of the sequence $(\lambda_k)_{k\in\textrm{I\kern-0.21emN}^*}$ are assumed to lie in $\{\lambda \in \C\mid |\arg \lambda |\leq \theta\}$ for some given $\theta\in (0,\pi/2)$. As argued in Remark \ref{rem_Muntz}, under the condition \eqref{cond_vpj} there exists a sequence $(\theta_j^T)_{j\in\textrm{I\kern-0.21emN}^*}$ biorthogonal to $\Lambda$, and it is proved in \cite{Hansen} that there exist two positive constants $\tilde A$ and $\tilde B$ such that $$ \Vert \theta_j^T\Vert_{L^2(0,T)}^2\leq \tilde B e^{\tilde A j}, $$ and since $$ \mathrm{Re} (\lambda_j)\geq |\lambda_j|\cos \theta\geq \varepsilon(A+Bj^\beta)\cos\theta $$ for every $j\in\textrm{I\kern-0.21emN}^*$, we infer the existence of $m_1$ and $m_2$ such that the estimate \eqref{cond_vplambdaj} holds. We also refer to \cite[Theorem 3.2]{MicuZuazua} for an elementary proof of \eqref{cond_vplambdaj} for the eigenvalues the one-dimensional Dirichlet Laplacian operator. For example, assume that $A_{0}=(-\triangle)^\alpha$ is a positive power of the one-dimensional Dirichlet-Laplacian on $\Omega=(0,\pi)$; then \eqref{cond_vpj} is satisfied if and only if $\alpha>1/2$. In \cite{Hansen} other examples are provided where \eqref{cond_vpj} is satisfied, such as the damped Euler-Bernoulli plate in dimension two. \item Assume that $(\lambda_n)_{n\in\textrm{I\kern-0.21emN}^*}$ is a sequence of positive real numbers and that there exist $K>0$, $\alpha>0$ and $\beta>1$ such that $$ \lambda_{n}=K(n+\alpha)^\beta+\operatorname{o}(n^{\beta-1}), $$ as $n$ tends to $+\infty$. It is proved in \cite[Formula (3.25)]{fattorini2} that there exists two constants $\tilde A$ and $\tilde B$ such that $$ \Vert \theta_j^T\Vert_{L^2(0,T)}^2\leq \tilde A e^{\tilde B \lambda_j^{1/\beta }} $$ for every $j\in \textrm{I\kern-0.21emN}^*$ and the estimate \eqref{cond_vplambdaj} then holds true. Note that the authors of \cite{fattorini2} use it to derive exact controllability results for a Sturm-Liouville one-dimensional equation. We also mention the article \cite{ammar} where the authors extend the above approach and estimate to the framework of systems of one-dimensional parabolic equations, in view of establishing exact boundary controllability properties. \item Assume that $A_0$ is the Dirichlet-Laplacian on the unit ball $\Omega=\{x\in \textrm{I\kern-0.21emR}^n\mid \Vert x\Vert < 1\}$, with $n$ arbitrary. Using a refined study of the sequences of eigenfunctions and eigenvalues, it is proved in \cite[Section 6, (6.27)]{fattorini1} that \eqref{cond_vplambdaj} holds true with a constant $m_{2}$ not depending on $T$, and the authors use it to investigate boundary controllability issues for the heat equation in $\Omega$. Then Theorem \ref{cormaintheo} can be applied, provided that $T$ is large enough (since it is required that $m_2\in (0,2T)$). \end{itemize} \subsection{Optimal lumped controls}\label{sec:OLC} In this section, we investigate a variant of the previously studied optimal design problem, based on another kind of controls referred to in the literature as the \textit{lumped controls} (see \cite[Chapter 4]{Russell} or \cite[Chapter 1.4]{Khapalov}). 
This wording designates tensorized controls that are the product of separated variables functions in time and space, the space profile of the control term being given. Then one only acts on the system by means of tuning the time-intensity of the control. Let $\Omega$ be an open connected subset of $\textrm{I\kern-0.21emR}^n$ and $A_0:D(A_{0})\rightarrow L^2(\Omega,\C)$ be a densely defined operator that generates a strongly continuous semigroup on $L^2(\Omega,\C)$. We adopt the same framework as in Section \ref{sec31}, assuming the existence of an orthonormal basis $(\phi_j)_{j\in\textrm{I\kern-0.21emN}^*}$ of $L^2(\Omega,\C)$ consisting of eigenfunctions of $A_{0}$, associated with (complex) eigenvalues $(\lambda_j)_{j\in\textrm{I\kern-0.21emN}^*}$ such that $\mathrm{Re}(\lambda_1)\leq\cdots\leq \mathrm{Re}(\lambda_j)\leq\cdots$. Consider the internally controlled parabolic system \begin{equation}\label{heatEqcontrolled_lumped} \partial_t y(t,x)+A_{0} y(t,x)+g(x)u(t)=0, \quad (t,x)\in (0,T)\times \Omega, \end{equation} with Dirichlet boundary conditions, where $g\in L^2(\Omega,\C)$ is the control profile and $u\in L^2(0,T)$ is the control function. The controlled system \eqref{heatEqcontrolled_lumped} is a particular version of \eqref{heatEqcontrolled_intro}. In some sense, the function $g$ plays the role of $\chi_\omega$ in \eqref{heatEqcontrolled}, but here, the control function $u$ depends only on $t$. The function $g$ is usually fixed and the control is $u$. Here, we propose to optimize the control profile $g$. Performing the same analysis as in Section \ref{sec:control2001} and using the same notations, one proves easily that every initial datum $y^0=\sum_{j=1}^{+\infty}a_{j}\phi_{j}\in L^2(\Omega)$ can be steered to zero in time $T$ with the control $u\in L^2(0,T)$ given by $$ u(t)=-\sum_{j=1}^{+\infty} \frac{a_j e^{-\lambda_{j}T}}{\int_\Omega g(y)\overline{\phi}_j(y) \, dy}\theta_j^T(T-t), $$ provided that the Fourier coefficients $\int_\Omega g(y)\phi_{j}(y) \, dy$ of $g$ do not vanish. As previously, we define the \textit{moment control operator} $\tilde \Gamma_{g}:L^2(\Omega)\rightarrow L^2((0,T)\times \Omega)$ by $\tilde \Gamma_{g}(y^0)=f$, with $f(t,x)=g(x)u(t)$. Its norm is given by $$ \Vert \tilde \Gamma_{g}\Vert =\sup_{\Vert y^0\Vert_{L^2(\Omega)}=1}\Vert \tilde \Gamma_{g}(y^0)\Vert _{L^2((0,T)\times\Omega )}=\Vert g\Vert_{L^2(\Omega)}\sup_{\Vert y^0\Vert_{L^2(\Omega)}=1}\Vert u\Vert_{L^2(0,T)} $$ Following the framework developed in Sections \ref{sec:control2001} and \ref{sec31} leads to define a randomized criterion by defining $a_j^\nu=\beta^\nu_{j}a_j$ for every $j\in\textrm{I\kern-0.21emN}^*$. Then we define $$ \tilde{\mathcal{K}}_g(\chi_\omega) =\sup_{\Vert y^0\Vert_{L^2(\Omega)}=1}\mathbb{E}(\Vert \tilde \Gamma_{g}(y^0_\nu)\Vert_{L^2((0,T)\times \Omega)}^2 ), $$ where $y^0_{\nu}$ denotes the function of $L^2(\Omega)$ whose Fourier coefficients are the $a_{j}^\nu$ defined above. \begin{lemma} There holds $$ \tilde{\mathcal{K}}_g(\chi_\omega) = \sup_{j\in\textrm{I\kern-0.21emN}^*}\frac{e^{-2\mathrm{Re}(\lambda_{j})T}\int_{0}^T\theta_j^T(t)^2\, dt}{\left|\int_{\Omega} g(x)\overline{\phi}_{j} (x)\, dx\right|^2}\int_{\Omega} |g(x)|^2\, dx. $$ \end{lemma} The proof is similar to the proofs of Lemmas \ref{lemm:contHeat} and \ref{lemma:momentCont}, and thus is skipped. We model the ``best design of lumped controller" as the problem of minimizing $\tilde{\mathcal{K}}_g(\chi_\omega)$ over the set of all possible profiles $g\in L^2(\Omega)$. 
The functional $g\mapsto \tilde{\mathcal{K}}_g(\chi_\omega)$ being homogeneous according to the previous lemma, the problem of optimal lumped control placement is then equivalent to the problem \begin{equation}\label{pboptGamma1010} \boxed{ \sup_{\Vert g\Vert_{L^2(\Omega)}=1}\ \inf_{j\in\textrm{I\kern-0.21emN}^*}\gamma_{j}(T)\left|\int_{\Omega} g(x)\overline{\phi}_{j} (x)\, dx\right|^2, } \end{equation} with $$ \gamma_{j}(T)=\frac{e^{2\mathrm{Re}(\lambda_{j})T}}{\int_{0}^T\theta_j^T(t)^2\, dt} , $$ for every $j\in \textrm{I\kern-0.21emN}^*$. Let us now solve this optimal design problem. \begin{theorem}\label{thm:lumped} We assume that \begin{equation}\label{assump:specGammaj} \sum_{j=1}^{+\infty}\frac{1}{\gamma_{j}(T)}<+\infty . \end{equation} Then, the problem \eqref{pboptGamma1010} has at least one solution, and we have $$ \sup_{\Vert g\Vert_{L^2(\Omega )}=1}\ \inf_{j\in\textrm{I\kern-0.21emN}^*}\gamma_{j}(T)\left|\int_{\Omega} g(x)\overline{\phi}_{j} (x)\, dx\right|^2=\left(\sum_{j=1}^{+\infty}\frac{1}{\gamma_{j}(T)}\right)^{-1}. $$ Moreover, the set of solutions consists of all functions $g$ in $L^2(\Omega)$ that can be expanded as $$ g=\sum_{j=1}^{+\infty}g_{j}\phi_{j}\qquad\textrm{with}\qquad |g_{j}|^2=\left(\sum_{j=1}^{+\infty}\frac{1}{\gamma_{j}(T)}\right)^{-1}\frac{1}{\gamma_{j}(T)}. $$ \end{theorem} This theorem is proved in Section \ref{secproof:lumpedthm}. \begin{remark} Consider the case where $A_{0}=-\partial_{xx}$ is defined on $H^2(0,\pi)\cap H^1_{0}(0,\pi)$ (one-dimensional Dirichlet-Laplacian). Then $\phi_{j}(x)=\sqrt{\frac{2}{\pi}}\sin (jx)$ and $\lambda_{j}=j^2$ for every $j\in\textrm{I\kern-0.21emN}^*$. Denote by $g$ any solution of the problem \eqref{pboptGamma1010}. According to \cite[Theorem 3.2]{MicuZuazua}, there exists a positive constant $C_{T}$ such that $$ \frac{e^{2j^2T}}{\int_{0}^T\theta_j^T(t)^2\, dt}\geq C_{T}e^{2j^2T-2\pi j} , $$ for every $j\in\textrm{I\kern-0.21emN}^*$. According to Theorem \ref{thm:lumped}, it follows that the Fourier coefficients $g_{j}$ decrease exponentially with respect to $j$, and as a consequence, the optimal functions $g$ are analytic (see e.g. \cite[Chapter 11, \S 63]{akhiezer}). \end{remark} \begin{remark} It might seem natural and of physical interest to investigate what happens if we restrict our search of the control profile $g$ to a set of characteristic functions of a measurable subset $\omega$, with the measure of $\omega$ possibly fixed. Doing this, we get a kind of instability: indeed, assuming that $\omega$ is the finite union of rational intervals (in other words, intervals whose extremities are rational multiples of $\pi$), one can easily check that $$ \inf_{j\in\textrm{I\kern-0.21emN}^*}\gamma_{j}(T)\left(\int_{0}^\pi\chi_{\omega}(x)\sin (jx)\, dx\right)^2=0. $$ Therefore, this problem appears to be ill-posed in some sense, and is probably not so much relevant with respect to practical issues. \end{remark} \subsection{Conclusion}\label{sec:conclpb} To conclude, let us provide several further comments and open problems. \paragraph{Generalization to other methods of control and higher dimensions.} As underlined in the previous sections, the use of controllers obtained by the moment method reduces mainly the perimeter of our study to one-dimensional operators. 
In view of generalizing our approach to other control operators, let us use the framework described in Section \ref{sec31}, considering the controlled system \begin{equation}\label{syst:concl} \partial_t y+A_{0} y=\chi_\omega u, \quad t\in (0,T), \end{equation} where $y(0,\cdot)=y^0\in L^2(\Omega)$ and $u$ is a control steering this system from $y^0$ to 0 in time $T$, whenever it is possible. Let us assume that for every $T>0$ and every Lebesgue measurable subset $\omega$ of positive measure, the system \eqref{syst:concl} is null-controllable in time $T$. In this case, let us write $\Gamma_\omega=\chi_\omega u$. For instance, the Hilbert Uniqueness Method (see \cite{lions2,lions}) is a well-known method used to design a null control for \eqref{heatEqcontrolled_intro}-\eqref{dir_cond}, with the additional property that this control has a minimal $L^2$ norm over all possible null controls. The null-controllability property in time $T$ of this system is equivalent to an observability property on the pair $(\omega,T)$. Note that in the case where $A_0$ is the Dirichlet-Laplacian operator $-\triangle$, it has been showed in \cite{AEWZ} that the observability inequality holds true for every $T>0$ and every Lebesgue measurable subset $\omega$ of positive measure. Following the approach described in Section \ref{sec31}, we define $$ \mathcal{K}(\chi_\omega) = \sup_{\Vert y^0\Vert_{L^2(\Omega)}=1} \mathbb{E}\left(\Vert \Gamma_{\omega}(y^0_{\nu})\Vert^2_{L^2((0,T)\times \Omega)}\right), $$ where $y^0_\nu= \sum_{j=1}^{+\infty}\beta_j^\nu a_j \phi_j$, and $\mathbb{E}$ is the expectation over the space $\mathcal{X}$ with respect to the probability measure $\mathbb{P}$. Similar computations as those of Section \ref{sec_proof_lem2} enable to show that $$ \mathcal{K}(\chi_\omega)= \sup_{j\in\textrm{I\kern-0.21emN}^*} \big\Vert \Gamma_{\omega}(\phi_j) \big\Vert_{L^2((0,T)\times \Omega)}^2 $$ As discussed previously, we model the best actuator shape and placement problem as the problem of minimizing $\mathcal{K}$ over the set $\mathcal{U}_{L}$. Analyzing this optimal design problem does not seem easy since it requires to know fine regularity properties of each control function $u_j$ defined by $\Gamma_\omega(\phi_j)=\chi_\omega u_j$. \paragraph{Analysis of the full control operator.} One of the main issues that remains to be developed is whether one can attack the problem of the optimal design of the controllers and actuators without the diagonalization procedure by randomization. The issue is then much harder to handle, as it occurs at the level of the observability problem. Note also that in that case, because of possible interactions of all modes, it is unclear how complex the optimal sets are. Actually, if one defines the \textit{Gramian} operator $G_T$ as the infinite dimensional symmetric nonnegative matrix whose coefficient at row $j$ and column $k$ is given by $\int_0^T e^{(j^2+k^2)t}\, dt \int_\omega \sin(jx)\sin(kx)\, dx$, the operator norm $\Vert \Gamma_\omega\Vert$ is the inverse of the smallest eigenvalue of $G_T$. The randomization procedure consists in dropping the non-diagonal terms in $G_T$, by considering the inverse of smallest eigenvalue of $\operatorname{diag}(G_T)$. 
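As an illustration of this last point, the following minimal numerical sketch (in Python with \texttt{numpy} and \texttt{scipy}; the horizon $T$, the truncation order $N$ and the interval $\omega$ are illustrative assumptions only) builds a finite truncation of the Gramian $G_T$ above for the one-dimensional Dirichlet-Laplacian on $(0,\pi)$ and compares the smallest eigenvalue of $G_T$ with the smallest diagonal entry, that is, the smallest eigenvalue of $\operatorname{diag}(G_T)$ kept by the randomization procedure.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Minimal sketch.  The horizon T, the truncation order N and the interval
# omega = (a, b) below are illustrative assumptions only.
T, N = 0.05, 5
a, b = 0.2 * np.pi, 0.6 * np.pi

def time_factor(j, k):
    # int_0^T exp((j^2 + k^2) t) dt, in closed form
    s = j ** 2 + k ** 2
    return (np.exp(s * T) - 1.0) / s

def space_factor(j, k):
    # int_omega sin(jx) sin(kx) dx, computed numerically
    val, _ = quad(lambda x: np.sin(j * x) * np.sin(k * x), a, b)
    return val

G = np.array([[time_factor(j, k) * space_factor(j, k)
               for k in range(1, N + 1)] for j in range(1, N + 1)])

lam_full = np.linalg.eigvalsh(G).min()   # smallest eigenvalue of the truncated G_T
lam_diag = np.diag(G).min()              # smallest eigenvalue of diag(G_T)
print(lam_full, lam_diag)
\end{verbatim}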
Concerning the particular case of controllers given by the Hilbert Uniqueness Method (see \cite{lions2,lions}) to design a null control for \eqref{heatEqcontrolled_intro}-\eqref{dir_cond}, minimizing the control efforts in a deterministic way is actually equivalent to maximizing $$ C_T(\chi_\omega)=\inf\left\{ \frac{ \int_0^T\int_\omega \vert y(t,x)\vert^2\,dx \, dt }{\Vert y(T,\cdot)\Vert^2_{L^2(0,\pi)}} \ \big\vert\ y(0,\cdot)\in L^2(0,\pi) \setminus\{0\} \right\} , $$ which is the largest possible observability constant $C$ in the inequality \eqref{ineqobs}, over $\mathcal{U}_L$, because of the duality between controllability and observability. An interesting problem then consists of maximizing the functional $C_T(\chi_\omega)$ over the set $\mathcal{U}_L$. This problem has been discussed in \cite{PTZobsND,PTZparabND}, and for the same reasons as above it has appeared more relevant to introduce the concept of randomized observability constant $C_{T,\textrm{rand}}(\chi_\omega)$. At this step, one may think of coming back, by duality, to the controllability problem. Unfortunately, the problem of maximizing $C_{T,\textrm{rand}}(\chi_\omega)$ does not admit any nice interpretation in terms of controlling, say, almost every initial data to $0$ in time $T$. This is due to the fact that the randomization procedure does not commute with the duality operator realizing the duality between observability and controllability. More precisely, the Gramian $G_T$ defined above does not commute with the randomization procedure. To describe which kind of initial data can be steered to $0$ in a random way, it would be required to compute the image under $G_T$ of the random laws used in the randomization procedure, and then show that these random laws share appropriate probability properties, as in \cite{BurqTzvetkov1}. Hence, here, we have found more relevant to combine the randomization procedure with the moment method, in which case the problem of the lack of commutation arising in the HUM procedure disappears. \paragraph{Use of other biorthogonal families.} We have here used the moment problem approach but in a very special way, taking advantage of the fact that eigenvalues grow sufficiently fast so to ensure the existence of a family of time-biorthogonals that allow to build by separation of variables biorthogonal families for all possible supports of the control $\omega$.\\ Of course the issue can be formulated without that restrictive assumption taking advantage of the existence of biorthogonal families in the $(x, t)$ variables, in other words of families $\Lambda=(\theta_k^{\omega,T})_{k\in\textrm{I\kern-0.21emN}^*}$ such that $$ \int_0^T\int_\omega \theta^{\omega,T}_k(t,x)e^{-\lambda_jt}\phi_j(x)\, dxdt=\delta_{jk}. $$ But their dependence with respect to $\omega$ seems to be hard to analyze.
https://arxiv.org/abs/1807.08672
A Bound on the Rate of Convergence in the Central Limit Theorem for Renewal Processes under Second Moment Conditions
A famous result in renewal theory is the Central Limit Theorem for renewal processes. As in applications usually only observations from a finite time interval are available, a bound on the Kolmogorov distance to the normal distribution is desirable. Here we provide an explicit non-uniform bound for the Renewal Central Limit Theorem based on Stein's method and track the explicit values of the constants. For this bound the inter-arrival time distribution is only required to have a finite second moment. As an intermediate result of independent interest we obtain explicit bounds in a non-uniform Berry-Esseen theorem under second moment conditions.
\section{Introduction} Let $Z, Z_i, i=1, 2, \ldots$ be i.i.d. non-negative random variables with positive mean $\mu$ and finite variance $\sigma^2$, and let $$X_t = \max \{ n: \sum_{i=1}^n Z_i \le t\}.$$ Then $(X_t, t \ge 0)$ is a classical renewal process. Renewal processes are a cornerstone in applied probability and appear in a number of applications, see for example \cite{omey} and references therein. As time is finite in applications, a quantification of the rate of convergence to the normal distribution is desirable. Also note that $X_t$ only takes on values in $\{ 0, 1, \ldots\}$. In \cite{englund} it is shown that when $\gamma:= {\mathbb E} ( | Z - \mu|^3) < \infty$ then \begin{equation}\label{englund} \sup_{n =0, 1,\ldots} \left| {\mathbb P} (X_t < n) - \Phi \left( \frac{( n\mu - t) \sqrt{\mu}}{\sigma \sqrt{t}}\right) \right| \le 4 \left( \frac{\gamma}{\sigma} \right)^3 \left( \frac{\sqrt{\mu}}{\sqrt{t}} \right)^\frac12 \end{equation} where $\Phi$ is the c.d.f. of the standard normal distribution. Also in \cite{englund} a similar bound is indicated when $Z$ possesses moments of order $\alpha$ for some $2 < \alpha < 3$. Under the third moment assumption, this bound was generalised to the bivariate case in \cite{ahmad}, which in turn was generalised to a $k$-variate process in \cite{niculescu}. The result was extended in \cite{roginsky} to allow for non-identically distributed inter-arrival times $Z_i$, again under third moment assumptions. In \cite{billingsley}, Theorem 17.3, a functional central limit theorem for the renewal process is shown. In particular, as $t \rightarrow \infty$, $X_t$ is asymptotically normally distributed with mean $\frac{t}{\mu}$ and variance $\frac{\sigma^2 t }{ \mu^3}$. Hence second moments suffice for the normal approximation. Unfortunately \cite{billingsley} does not give a bound on the rate of convergence. In this paper we provide a bound on the rate of convergence in the case that $Z$ has only second moments; this bound is of the order $t^{-\frac{1}{2}}$. As an intermediate result we provide explicit constants for a non-uniform Berry-Esseen theorem, quantifying Theorem 2.2 in \cite{chenshao} (also Theorem 8.1 in \cite{ChGoSh}). Our main tool is Stein's method. The paper is organised as follows. In Section \ref{notations} we introduce notation, we give some bounds on the tail of the normal distribution, and we provide some background from Stein's method. Section \ref{main} gives the main result, with a proof. The proof is based on the approach for obtaining non-uniform bounds for sums of i.i.d. random variables in Chapter 8 of \cite{ChGoSh}, while deriving explicit bounds for the required intermediary results from that chapter. Proofs of auxiliary results are given in Section \ref{proofs}. For convenience, in the Appendix we re-state results from \cite{ChGoSh} which are used in this paper. \section{Notations, tail bounds, and results from Stein's method}\label{notations} \subsection{Notations} Let $Z_n$, $n\geq0$, be independent identically distributed, positive random variables. Let $T_n = Z_0 + \ldots + Z_{n-1}$, $n\geq1$. The process $X=(X_t, t\geq0)$ defined by $X_t = \#\left\{n \geq1 : T_n \leq t\right\}$ is the renewal process of interest.
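With these notations, the quality of the normal approximation recalled in \eqref{englund} can be explored by simulation. The following minimal sketch (in Python with \texttt{numpy} and \texttt{scipy}; the Gamma inter-arrival distribution and all numerical values are illustrative assumptions only) compares the empirical value of ${\mathbb P}(X_t < n)$ with $\Phi\left(\frac{(n\mu-t)\sqrt{\mu}}{\sigma\sqrt{t}}\right)$.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Minimal simulation sketch.  The Gamma(2,1) inter-arrival distribution and
# all parameter values below are illustrative assumptions only.
rng = np.random.default_rng(0)
mu, sigma = 2.0, np.sqrt(2.0)      # mean and standard deviation of Gamma(2,1)
t, reps = 200.0, 10_000

counts = np.empty(reps, dtype=int)
for r in range(reps):
    total, n = 0.0, 0
    while True:
        total += rng.gamma(shape=2.0, scale=1.0)
        if total > t:
            break
        n += 1
    counts[r] = n                  # realisation of X_t

# Empirical P(X_t < n) against Phi((n*mu - t)*sqrt(mu)/(sigma*sqrt(t)))
for n in (90, 100, 110):
    empirical = np.mean(counts < n)
    normal = norm.cdf((n * mu - t) * np.sqrt(mu) / (sigma * np.sqrt(t)))
    print(n, empirical, normal)
\end{verbatim}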
For a renewal process $X_t$ whose inter-arrival times $Z_i$ have mean $\mu$ and variance $\sigma^2$, and $n \in \{0, 1, \ldots\}$, $t>0$ fixed, we aim to compare ${\mathbb P}( X_t \le n) = {\mathbb P}\left( \frac{X_t - \frac{t}{\mu}}{\sigma \sqrt{t} \mu^{-\frac32}} \le \frac{(n \mu - t)\sqrt{\mu}}{\sigma \sqrt{t}} \right) $ to $\Phi\left(\frac{(n \mu - t)\sqrt{\mu}}{\sigma \sqrt{t}}\right)$. \subsection{Normal tail bounds} The following results will be useful when we develop the bounds. Firstly, for every $w>0$, the standard normal tail bound \begin{align} \frac{1}{4(1+w^2)}e^{-\frac{w^2}{2}}\leq \Phi(-w)= 1-\Phi(w) \leq \min\left(\frac{1}{2},\frac{1} {w\sqrt{2\pi}} \right)e^{-\frac{w^2}{2}} \label{4normaltail} \end{align} holds. This is a well-known result, see for example Inequality (2.11) and p.243 in \cite{ChGoSh}. The next result assesses the smoothness of the standard normal c.d.f., as follows. \begin{lem}\label{lemma2.4} For $\mu>0$, $\sigma>0$, $n \ge 1$ and $t>0$, let $$ I = \left| \Phi \left( \frac{n \mu -t }{\sigma \sqrt{n}} \right) - \Phi \left( \frac{( n\mu - t) \sqrt{\mu}}{\sigma \sqrt{t}}\right)\right| . $$ Then \begin{equation*} I \leq \begin{cases} \frac{\sqrt{2}}{e \sqrt{\pi}} \frac{\sigma}{\sqrt{t \mu }} & \mbox{if}\; \,t \le n \mu;\\ \frac{16 }{ e^2 \sqrt{2 \pi } } \frac{{t^2}\sigma^3 }{{n^{\frac{1}{2}} \mu^2}{(t - n \mu)^2(\sqrt{n \mu t} + t)} } & \mbox{if}\,\, t> n \mu. \end{cases} \end{equation*} \end{lem} A proof of Lemma \ref{lemma2.4} is in Section \ref{proofs}. \subsection{Results from Stein's method} Stein's method, originating from \cite{stein}, is a powerful tool to assess distances between distributions. The proof of the statements below can be found in \cite{ChGoSh}, pp.13--16. Let $W$ be a random variable and suppose that the aim is to bound $| {\mathbb P}(W \le z) - \Phi (z)| $ for all real $z$. For fixed $z \in \mathbb{R}$, the unique bounded solution $f(w):=f_z(w)$ of the so-called {\it Stein equation} \begin{equation} f'(w)-wf(w)=\mathbbm{1}({w \leq z}) -\Phi(z) \label{3steineqn} \end{equation} is given by \begin{equation} f_z(w) = \begin{cases} \sqrt{2\pi}e^{w^2/2}\Phi(w)[1-\Phi(z)] & \mbox{if} \;w \leq z;\\ \sqrt{2\pi}e^{w^2/2}\Phi(z)[1-\Phi(w)] & \mbox{if} \;w>z.\\ \end{cases} \label{3steinsoln} \end{equation} With this solution, $$ {\mathbb P}(W \le z) - \Phi (z) = {\mathbb E} \{ f'(W) - W f(W) \}$$ and the right-hand side depends only on the distribution of $W$ and can often be bounded using Taylor expansion. Moreover, for the solution $f_z$ of the Stein equation \eqref{3steineqn}, $wf_z(w)$ is an increasing function of $w$, and for all real $w$, \begin{align} |f'_z(w)| &\leq 1; \label{3lemma43}\\ 0 < f_z(w) &\leq \min\left(\frac{\sqrt{2\pi}}{4}, \frac{1}{|z|}\right). \label{3lemma45} \end{align} \section{A non-uniform bound for the Renewal Central Limit Theorem}\label{main} Our main result is Theorem \ref{theorem31}. As ${\mathbb P} (X_t \le n) = 0$ for $n < 1$, we restrict attention to the regime that $n \ge 1$. \begin{theorem}[\bf{Bound for the Renewal Central Limit Theorem Under Second Moment Assumptions}]\label{theorem31} Let $X=(X_t, t\geq 0)$ be a renewal process whose inter-arrival times $Z_n$, $n\geq0$, have finite mean $\mu \in (0,\infty)$ and finite variance $\sigma^2 \in (0,\infty)$.
Then for $n \ge 1$, \begin{eqnarray} \lefteqn{\left| {\mathbb P} (X_t \le n) - \Phi \left( \frac{( n\mu - t) \sqrt{\mu}}{\sigma \sqrt{t}}\right) \right| } \nonumber \\ &\leq & \mathbbm{1}( t \le n \mu) \frac{\sqrt{2}}{e \sqrt{\pi}} \frac{\sigma}{\sqrt{t} \mu } + \mathbbm{1}( t > n \mu) \frac{32 }{ e^2 \sqrt{2 \pi } } \frac{1}{\sqrt{t} } \left( \frac{\sigma^3 }{\mu^2\sqrt{t} } + \frac{ \sigma }{ (224)^2 \sqrt{\mu}}\right) \nonumber \\ && + \, 50,990 \left(1+\left|\frac{t- n\mu} {\sigma \sqrt{n}}\right|\right) ^{-2} . \label{55theorem7} \end{eqnarray} \end{theorem} Before we prove this result, here are some remarks. \begin{remark} \begin{enumerate} \item The explicit value of the constant in Theorem $3.1$ is large. This is because the calculation of the constant is not optimized. As a result, the bound is not informative for small values of $n$. \item The bound is of the order $t^{-\frac{1}{2}}$. The bound deteriorates for $t$ close to the expectation $n \mu$. \item Theorem $3.1$ does not assume the existence of finite third moments. It holds as long as the inter-arrival times have finite variance. This result enables us to assess the rate of convergence in the Central Limit Theorem, for example, for a renewal process whose inter-arrival times $Z_i$ follow a $\mbox{Pareto}(n, \alpha)$-distribution with $\alpha \in (2,3)$ for $i\geq 1$. \end{enumerate} \end{remark} For the proof of Theorem \ref{theorem31}, recall that $T_n = \sum_{i=1}^n Z_i$ has mean $n \mu $ and variance $n \sigma^2$, and ${\mathbb P}(X_t \le n) = {\mathbb P} (T_n \ge t)$. Moreover, the standardised $T_n$ satisfies the Central Limit Theorem. We decompose \begin{eqnarray} {{\mathbb P} (X_t \le n) - \Phi \left( \frac{( n\mu - t) \sqrt{\mu}}{\sigma \sqrt{t}}\right)} &=& {\mathbb P} (T_n \ge t) - \left\{ 1 - \Phi \left( \frac{t - n \mu }{\sigma \sqrt{n}} \right) \right\} \label{term1} \\ &&+ \Phi \left( \frac{n \mu -t }{\sigma \sqrt{n}} \right) - \Phi \left( \frac{( n\mu - t) \sqrt{\mu}}{\sigma \sqrt{t}}\right). \label{term2} \end{eqnarray} We bound the terms \eqref{term1} and \eqref{term2} separately. For \eqref{term2} we employ the tail bounds for the normal distribution from Lemma \ref{lemma2.4}. For \eqref{term1} we derive non-uniform bounds using ideas from Chapter 8 in \cite{ChGoSh}: our Theorem \ref{theorem5.1} is a version of Theorem 8.1 in \cite{ChGoSh}, but with the constants in the bound made explicit. This bound is of interest in its own right and hence we state it as a theorem. \begin{theorem}\label{theorem5.1} Let $\xi_1, \xi_2, \ldots, \xi_n$ be independent random variables with zero means and variances satisfying $\sum_{i=1}^n {\mathbb E} \xi_i^2 = 1$. Let $W$ denote their sum, $W=\sum_{i=1}^n \xi_i$. Let \begin{equation*} \beta_2 = \sum_{i=1}^n {\mathbb E} \xi_i^2 \mathbbm{1}({|\xi_i|> 1}) \;\; \mbox{and} \;\; \beta_3 = \sum_{i=1}^n {\mathbb E} |\xi_i|^3 \mathbbm{1}({|\xi_i|\leq 1}). \end{equation*} Then, for all $z\in \mathbb{R}$, \begin{equation} |{\mathbb P}(W \leq z)-\Phi(z)|\leq 2\sum_{i=1}^n {\mathbb P}\left(|\xi_i|>\frac{1\vee |z|}{4}\right) + C_2(1+|z|)^{-2}(\beta_2+\beta_3), \label{5theorem3} \end{equation} where \begin{equation} C_2\leq \begin{cases} 15 & \mbox{if}\; \beta_2+\beta_3\geq 1;\\ 37 & \mbox{if}\;\beta_2+\beta_3<1 \;\mbox{and}\; |z|\leq 2;\\ 25431 & \mbox{if}\; \beta_2+\beta_3<1 \;\mbox{and}\; |z|>2. \end{cases} \label{52C2} \end{equation} \end{theorem} The proof of Theorem \ref{theorem5.1} is found in Section \ref{proofs}. The proof of Theorem \ref{theorem31} is now almost immediate.
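Before giving this proof, we note that the right-hand side of \eqref{55theorem7} is straightforward to evaluate numerically. The following minimal sketch (in Python with \texttt{numpy}; the values of $\mu$, $\sigma$, $t$ and the choices of $n$ are illustrative assumptions only) evaluates the stated bound and illustrates that, in line with the remarks above, it is informative only when $n$ is far from $t/\mu$.
\begin{verbatim}
import numpy as np

def rhs_bound(mu, sigma, t, n):
    """Right-hand side of the bound in Theorem 3.1, evaluated literally."""
    if t <= n * mu:
        first = np.sqrt(2.0) / (np.e * np.sqrt(np.pi)) * sigma / (np.sqrt(t) * mu)
    else:
        first = (32.0 / (np.e ** 2 * np.sqrt(2.0 * np.pi)) / np.sqrt(t)
                 * (sigma ** 3 / (mu ** 2 * np.sqrt(t))
                    + sigma / (224.0 ** 2 * np.sqrt(mu))))
    second = 50990.0 * (1.0 + abs(t - n * mu) / (sigma * np.sqrt(n))) ** (-2)
    return first + second

# Illustrative values only: mean 2, standard deviation sqrt(2), horizon t = 10^6.
mu, sigma, t = 2.0, np.sqrt(2.0), 1.0e6
for k in (250, 500, 1000):    # n taken k "standard deviations" above t/mu
    n = int(t / mu + k * sigma * np.sqrt(t / mu ** 3))
    print(k, n, rhs_bound(mu, sigma, t, n))
\end{verbatim}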
\begin{proof}[Proof of Theorem \ref{theorem31}] First, Term \eqref{term2} is bounded directly in Lemma \ref{lemma2.4}. The bound arising from \eqref{term1} is less than 1 only when $$\frac{| t - n \mu| }{\sigma \sqrt{n}} \ge \sqrt{50,990} -1.$$ Hence if $\frac{| t - n \mu| }{\sigma \sqrt{n}} \le 224$, the claim is trivially true. So we apply Lemma \ref{lemma2.4} for $\frac{| t - n \mu| }{\sigma \sqrt{n}} > 224$, which turns the non-uniform bound for the regime $t > n \mu$ and $n \ge 1$ into a uniform bound for the regime $ \frac{ t - n \mu }{\sigma \sqrt{n}} > 224$, for which it holds that $$\frac{t}{t - n \mu} \le 1 + \frac{n \mu}{ 224 \sigma \sqrt{n}}$$ so that \begin{eqnarray*} {\frac{16 }{ e^2 \sqrt{2 \pi } } \frac{t^2\sigma^3 }{\sqrt{n} \mu^2(t - n \mu)^2(\sqrt{n \mu t} + t)} } & \le & \frac{32 }{ e^2 \sqrt{2 \pi } } \frac{1}{\sqrt{t } }\left( \frac{\sigma^3 }{\mu^2\sqrt{t} } + \frac{ \sigma }{ (224)^2 \sqrt{\mu}}\right) . \end{eqnarray*} This gives the first part of the bound. For Term \eqref{term1}, using Theorem \ref{theorem5.1} it remains to show that $$ 2\sum_{i=1}^n {\mathbb P}\left(|\xi_i|>\frac{1\vee |z|}{4}\right) \le 128 \left(\frac{1}{1+|z|}\right)^2 $$ with $\xi_i = \frac{Z_i - \mu}{\sigma \sqrt{n}}$ and then apply this inequality to $z = \left| \frac{t - n \mu}{\sigma \sqrt{n}} \right| .$ Note that $\frac{1+|z|}{2}\leq 1\vee |z|$. So, using Markov's inequality, \begin{align*} \sum_{i=1}^n {\mathbb P}\left(|\xi_i|>\frac{1\vee |z|}{4}\right) &\leq \sum_{i=1}^n {\mathbb P}\left(|\xi_i|>\frac{1+|z|}{8}\right)\\ &\leq \left(\frac{8}{1+|z|}\right)^2 \sum_{i=1}^n {\mathbb E} \xi_i^2 = \left(\frac{8}{1+|z|}\right)^2 . \end{align*} Setting $C= 128 + 2 C_2$ gives the assertion. \end{proof} \begin{remark} With the notation from Theorem \ref{theorem5.1}, under the same assumptions as for Theorem \ref{theorem31}, using Theorem 3.3 in \cite{chenshao2} with $\xi_i =\frac{Z_i - \mu}{\sigma \sqrt{n}} $ to bound \eqref{term1} gives the bound \begin{eqnarray*} \lefteqn{\left| {\mathbb P} (X_t \le n) - \Phi \left( \frac{( n\mu - t) \sqrt{\mu}}{\sigma \sqrt{t}}\right) \right| } \nonumber \\ &\leq & \frac{1}{\sqrt{t}} \max \left\{ \frac{\sqrt{2}}{e \sqrt{\pi}} \frac{\sigma}{\mu } , \frac{32 }{ e^2 \sqrt{2 \pi } } \left( \frac{\sigma^3 }{\mu^2\sqrt{t} } + \frac{\sigma}{(224)^2 \sqrt{\mu}} \right) \right\} + 4 ( 4 \beta_2 + 3 \beta_3). \end{eqnarray*} The terms $\beta_2 $ and $\beta_3$ depend on $n$ as well as on $t$ in an implicit fashion but may be straightforward to calculate in some situations. \end{remark} \section{Remaining proofs of results}\label{proofs} \begin{proof}[Proof of Lemma \ref{lemma2.4}] To bound $I = \left| \Phi \left( \frac{n \mu -t }{\sigma \sqrt{n}} \right) - \Phi \left( \frac{( n\mu - t) \sqrt{\mu}}{\sigma \sqrt{t}}\right)\right| $ we consider two cases. {\bf Case 1: $n \mu \ge t$:} If $t \le n \mu$ then $\frac{n \mu}{t} \ge 1$ and \begin{eqnarray*} I &\leq& \frac{1}{\sqrt{2 \pi} } \frac{n \mu -t}{\sigma \sqrt{n}}\left( \frac{\sqrt{n \mu}}{\sqrt{t}} -1 \right) \exp\left\{-\frac12 \left( \frac{n \mu -t}{\sigma \sqrt{n}} \right)^2\right\} \\ &\leq& \frac{1}{\sqrt{2 \pi} } \frac{\sigma \sqrt{n}}{ t + \sqrt{ t n \mu}} \sup_{x \ge 0} \left\{ x^2 e^{-\frac12 x^2} \right\} \\ &\leq& \frac{\sqrt{2}}{e \sqrt{\pi } } \frac{\sigma }{ \sqrt{ t \mu}} .
\end{eqnarray*} \medskip {\bf Case 2: $t > n \mu $:} If $t > n \mu$ then $\frac{n \mu}{t} < 1$ and \begin{eqnarray*} I &\le& \frac{1}{\sqrt{ 2 \pi}} \frac{t - n \mu }{\sigma \sqrt{n}} \left( 1 - \frac{\sqrt{n \mu}}{ \sqrt{t}}\right) \exp\left\{-\frac12 \left( \frac{( t - n\mu ) }{\sigma \sqrt{n}} \frac{\sqrt{n \mu}}{\sqrt{t}} \right)^2 \right\} \\ &\le&\frac{1}{\sqrt{ 2 \pi}} \frac{{t^2}\sigma^3 {n^{\frac{3}{2}}}}{{(n \mu)^2}{(t - n \mu)^2(\sqrt{n \mu t} + t)} } \sup_{x \ge 0} \left\{ x^4 e^{-\frac12 x^2} \right\} \\ &\le & \frac{16 }{ e^2 \sqrt{2 \pi } } \frac{{t^2}\sigma^3 }{{n^{\frac{1}{2}} \mu^2}{(t - n \mu)^2(\sqrt{n \mu t} + t)} } . \end{eqnarray*} This completes the proof. \end{proof} \bigskip {\bf{Proof of Theorem \ref{theorem5.1}}} \medskip For the proof of Theorem \ref{theorem5.1} we first show an auxiliary result, Lemma \ref{lemma8.4explicit}, which gives an explicit bound for Lemma 8.4 in \cite{ChGoSh}. Let $\xi_1,...,\xi_n$ denote independent random variables with zero means and variances summing to one. Let $W$ denote their sum, $W=\sum_{i=1}^n \xi_i$. We consider the truncated random variables and their sums \begin{equation} \Bar{x_i}=\xi_i\mathbbm{1}({\xi_i\leq 1}), \;\; \overline{W}=\sum_{i=1}^n \Bar{x_i}, \;\; \mbox{and} \;\; \overline{W}^{(i)}=\overline{W}-\Bar{x_i}. \label{2notation} \end{equation} \begin{lem}\label{lemma8.4explicit} Let $f_z$ denote the solution to the Stein Equation \eqref{3steineqn}. For $z>2$ and for all $s\leq t\leq 1$, we have \begin{eqnarray} \lefteqn{{\mathbb E} [(\overline{W}^{(i)}+t) f_z(\overline{W}^{(i)}+t) - (\overline{W}^{(i)}+s)f_z(\overline{W}^{(i)}+s)]} \nonumber \\ &\leq & \left(25.8+\frac{20e^{e^2-2}}{\sqrt{2\pi}}\right) e^{-\frac{z}{2}}\min(1,|s|+|t|). \label{51lem5} \end{eqnarray} \end{lem} \begin{proof}[Proof of Lemma \ref{lemma8.4explicit}] Let $g(w)=(wf_z(w))'$. Then for all $s\leq t\leq 1$, \begin{equation} {\mathbb E} [(\overline{W}^{(i)}+t) f_z(\overline{W}^{(i)}+t)-(\overline{W}^{(i)}+s) f_z(\overline{W}^{(i)}+s)] = \int_s^t {\mathbb E} g(\overline{W}^{(i)}+u)du. \label{5lemma9proof1} \end{equation} Using \eqref{3steinsoln}, we can compute that \begin{equation} g(w) = \begin{cases} \sqrt{2\pi}(1-\Phi(z))((1+w^2)e^{w^2/2}\Phi(w)+\frac{w}{\sqrt{2\pi}}) & \mbox{if} \;w \leq z;\\ \sqrt{2\pi}\Phi(z)((1+w^2)e^{w^2/2}(1-\Phi(w))-\frac{w} {\sqrt{2\pi}}) & \mbox{if} \;w>z.\\ \end{cases} \label{5gw0} \end{equation} Instead of $w\leq z$, we consider whether or not $w\leq \frac{z}{2}$. We split the problem into four cases. Case $1$. If $w\leq 0$, then $(5.4)$ from \cite{chenshao} gives \begin{equation} \sqrt{2\pi}(1+w^2)e^{w^2/2}\Phi(w)+w \leq \frac{2}{1+|w|^3} \;\;\; \mbox{for}\; w\leq 0. \label{5ChenShao2011} \end{equation} In this case, $w\leq 0<z$, so \begin{equation} g(w)\leq (1-\Phi(z))\frac{2}{1+|w|^3}\leq \frac{4(1+z^2)(1+z^3)}{1+|w|^3} e^{\frac{z^2}{8}}(1-\Phi(z)). \label{5gw1} \end{equation} Case $2$. If $0<w\leq \frac{z}{2}$, then \begin{align} g(w)&\leq (1-\Phi(z))(3(1+z^2)e^{\frac{z^2}{8}}+z) \nonumber\\ &\leq \frac{4(1+z^2)(1+z^3)}{1+|w|^3} e^{\frac{z^2}{8}}(1-\Phi(z)).\label{5gw2} \end{align} Case $3$. If $\frac{z}{2}<w\leq z$, then \begin{align} g(w)&\leq \sqrt{2\pi}(1-\Phi(z))((1+z^2)e^{z^2/2}+\frac{z}{\sqrt{2\pi}})\nonumber\\ &\leq 8(1+z^2)e^{z^2/2}(1-\Phi(z)). \label{5gw3} \end{align} Case $4$.
If $z<w$, then replacing $w$ by $-w$ in \eqref{5ChenShao2011} gives $$\sqrt{2\pi}(1+w^2)e^{w^2/2}\Phi(-w)-w \leq \frac{2}{1+|w|^3}.$$ In this case, we use the standard normal tail bound \eqref{4normaltail} to obtain \begin{align} g(w)&\leq \Phi(z)\frac{2}{1+|w|^3} \leq 2= 8(1+z^2)e^{\frac{z^2}{2}} \frac{e^{-z^2/2}}{4(1+z^2)} \leq 8(1+z^2)e^{\frac{z^2}{2}}(1-\Phi(z)).\label{5gw4} \end{align} Collecting \eqref{5gw1}, \eqref{5gw2}, \eqref{5gw3} and \eqref{5gw4}, \begin{equation} g(w) \leq \begin{cases} \frac{4(1+z^2)(1+z^3)}{1+|w|^3} e^{\frac{z^2}{8}}(1-\Phi(z)) & \mbox{if}\; w\leq \frac{z}{2};\\ 8(1+z^2)e^{\frac{z^2}{2}}(1-\Phi(z)) & \mbox{if}\; w>\frac{z}{2}.\\ \end{cases} \label{5gw5} \end{equation} So for any $u \in [s,t]$, since $z>2$, we have \begin{align*} {\mathbb E} g(\overline{W}^{(i)}+u)&= {\mathbb E}\left[g(\overline{W}^{(i)}+u)\mathbbm{1}_{\overline{W}^{(i)}+u\leq \frac{z}{2}}\right] + {\mathbb E}\left[g(\overline{W}^{(i)}+u)\mathbbm{1}_{\overline{W}^{(i)}+u> \frac{z}{2}}\right] \\ &\leq {\mathbb E}\left[\frac{1}{1+|\overline{W}^{(i)}+u|^3}\right]4(1+z^2)(1+z^3)e^{\frac{z^2}{8}}(1-\Phi(z))\\ &\;\;\;\;\;+ 8(1+z^2)e^{\frac{z^2}{2}}(1-\Phi(z)){\mathbb P}\left(\overline{W}^{(i)}+u>\frac{z}{2}\right) \\ &\leq {\mathbb E} \left[\frac{1}{1+|\overline{W}^{(i)}+u|^3}\right]4(1+z^2)(1+z^3)e^{\frac{z^2}{8}} \frac{1}{z\sqrt{2\pi}}e^{-\frac{z^2}{2}} \\ &\;\;\;\;\;+ 8(1+z^2)e^{\frac{z^2}{2}} \frac{1}{z\sqrt{2\pi}} e^{-\frac{z^2}{2}} {\mathbb P}(e^{2u}e^{2\overline{W}^{(i)}}>e^z). \end{align*} Using Markov's Inequality, since $u\leq t\leq 1$, we obtain \begin{align} {\mathbb E} g(\overline{W}^{(i)}+u)&\leq \frac{4(1+z^2)(1+z^3)e^{-\frac{3z^2}{8}}}{z\sqrt{2\pi}} {\mathbb E} \left[\frac{1}{1+|\overline{W}^{(i)}+u|^3}\right] \nonumber \\ &+ \frac{8(1+z^2)}{z\sqrt{2\pi}}e^{2u-z} {\mathbb E} [e^{2\overline{W}^{(i)}}] \nonumber\\ &\leq \frac{4(1+z^2)(1+z^3)e^{-\frac{3z^2}{8}}}{z\sqrt{2\pi}} + \frac{8(1+z^2)}{z\sqrt{2\pi}}e^2 e^{-z} e^{e^2-3}\nonumber \\ &\leq \left(25.8+\frac{20}{\sqrt{2\pi}}e^{e^2-2}\right) e^{-\frac{z}{2}}, \label{5lemma9proof2} \end{align} where we used Lemma 8.2 from \cite{ChGoSh} with $t=2$ and $\alpha=B=1$. So for $z>2$, from \eqref{5lemma9proof2} we have \begin{align} \int_s^t {\mathbb E} g(\overline{W}^{(i)}+u)du &\leq \left(25.8+\frac{20e}{\sqrt{2\pi}}e^{e^2-3}\right) e^{-\frac{z}{2}}(t-s) \nonumber\\ &\leq \left(25.8+\frac{20e^{e^2-2}}{\sqrt{2\pi}}\right) e^{-\frac{z}{2}} (|t|+|s|). \label{5lemma9proof3} \end{align} The assertion follows. \end{proof} \begin{proof}[Proof of Theorem \ref{theorem5.1}] Note that it is enough to consider $z\geq 0$. To see this, replacing $W$ by $-W$ gives \begin{equation} |{\mathbb P}(-W\leq z)-\Phi(z)|=|{\mathbb P}(-W\geq z)-\Phi(-z)|=|{\mathbb P}(W\leq -z)-\Phi(-z)| . \label{5z<0} \end{equation} {\bf The case $\beta_2+\beta_3\geq 1$} We start with the case of $\beta_2+\beta_3\geq 1$. Note that $$ |{\mathbb P}( W \le z) - \Phi (z) | = | {\mathbb P} (W > z) - ( 1 - \Phi(z))| \le {\mathbb P}(W>z) + 1-\Phi(z).$$ As $W$ is sum of independent random variables with zero means and variances less than or equal to one, we apply Lemma 8.1 in \cite{ChGoSh} with $B=1$ and $p=2$ to obtain \begin{align} {\mathbb P}(W\geq z)&\leq {\mathbb P}\left(\max_{1\leq i\leq n}|\xi_i|> \frac{z\vee 1}{2}\right) + e^2\left(1+\frac{z^2}{2}\right)^{-2}\nonumber\\ &\leq \sum_{i=1}^n {\mathbb P}\left(|\xi_i|>\frac{z\vee 1}{4}\right) + e^2\left(1+\frac{z^2}{2}\right)^{-2} \label{5beta>1aim}. 
\end{align} To write \eqref{5beta>1aim} as a bound of the form \eqref{5theorem3}, we bound $e^2(1+\frac{z^2}{2})^{-2}$ by $1.867e^2(1+z)^{-2}$, which is valid for all $z \geq 0$. For $1-\Phi(z)$ we apply the standard normal tail bound \eqref{4normaltail} and, using that $\beta_2+\beta_3\geq 1$ in the present case, obtain \begin{align} |{\mathbb P}(W\leq z)-\Phi(z)| &\leq {\mathbb P}(W\geq z) +|1-\Phi(z)|\nonumber\\ &\leq \sum_{i=1}^n {\mathbb P}\left(|\xi_i|>\frac{z\vee 1}{4}\right) + 1.867e^2(1+z)^{-2}(\beta_2+\beta_3) \nonumber\\ &\;\;\;\; + \min\left(\frac{1}{2},\frac{1}{z\sqrt{2\pi}}\right)e^{-\frac{z^2}{2}}. \label{51.8662} \end{align} Now we bound the standard normal tail term in \eqref{51.8662} by \begin{equation} \min\left(\frac{1}{2},\frac{1}{z\sqrt{2\pi}}\right)e^{-\frac{z^2}{2}} \leq 1.176(1+z)^{-2}. \label{51.176} \end{equation} Substituting \eqref{51.176} into \eqref{51.8662} gives that for $z\geq 0$, \begin{align} \lefteqn{|{\mathbb P}(W\leq z)-\Phi(z)|} \nonumber\\ &\leq \sum_{i=1}^n {\mathbb P}\left(|\xi_i|>\frac{z\vee 1}{4}\right) + (1.867e^2+1.176)(1+z)^{-2}(\beta_2+\beta_3)\nonumber\\ &\leq 2\sum_{i=1}^n {\mathbb P}\left(|\xi_i|>\frac{1\vee |z|}{4}\right) + (1.867e^2+1.176)(1+|z|)^{-2}(\beta_2+\beta_3). \label{5nonunifboundbeta>1} \end{align} Since $1.867e^2+1.176< 15$, we have proved the theorem for the case that $\beta_2+\beta_3 \geq 1$. \medskip {\bf{The case $\beta_2+\beta_3 <1$ and $z \le 2$}} Next, we consider the case of $\beta_2+\beta_3 <1$. We distinguish whether or not $z>2$. If $z\in [0, 2]$, then we use the uniform bound $(3.31)$ from \cite{ChGoSh}, which states that \begin{equation} \sup_{z\in \mathbb{R}} |{\mathbb P}(W\leq z)-\Phi(z)| \leq 4.1(\beta_2+\beta_3).\label{54.1} \end{equation} We bound $4.1$ by $37(1+|z|)^{-2}$ for $z\in [0, 2]$ because $4.1\times (1+2)^2 <37$. So we have \begin{equation} |{\mathbb P}(W\leq z)-\Phi(z)|\leq 2\sum_{i=1}^n {\mathbb P}\left(|\xi_i|>\frac{1\vee |z|}{4}\right) + 37(1+|z|)^{-2}(\beta_2+\beta_3). \label{536.9} \end{equation} Thus we have proved the theorem when $\beta_2+\beta_3 <1$ and $z\in[0,2]$. \medskip {\bf{The case $\beta_2+\beta_3 <1$ and $z > 2$}} Our remaining task is to prove the theorem when $\beta_2+\beta_3 <1$ and $z>2$. Recall the notation from \eqref{2notation}: $\Bar{x_i}=\xi_i\mathbbm{1}_{\xi_i\leq 1}$, $\overline{W}=\sum_{i=1}^n \Bar{x_i}$, and $\overline{W}^{(i)}=\overline{W}-\Bar{x_i}$. The idea is to show that ${\mathbb P}(W>z)$ is close to ${\mathbb P}(\overline{W}>z)$ for $z>2$. Observing that \begin{align} \{W>z\}&=\{W>z, \max_{1\leq i\leq n}\xi_i >1\}\cup \{W>z, \max_{1\leq i\leq n}\xi_i \leq 1\} \nonumber\\ &\subset \{W>z, \max_{1\leq i\leq n}\xi_i >1\}\cup\{\overline{W}>z\},\label{5Wset} \end{align} and that $W\geq \overline{W}$ implies $\{\overline{W}>z\}\subset\{W>z\}$, we obtain \begin{equation} {\mathbb P}(\overline{W}>z)\leq {\mathbb P}(W>z)\leq {\mathbb P}(\overline{W}>z)+{\mathbb P}(W>z, \max_{1\leq i\leq n}\xi_i >1). \label{5Wset2} \end{equation} From Lemma $8.3$ in \cite{ChGoSh}, with $p=2$ and $z>2$, \begin{equation} {\mathbb P}(W \geq z, \max_{1\leq i\leq n}\xi_i >1) \leq 2\sum_{i=1}^n {\mathbb P}\left(|\xi_i|>\frac{z}{4}\right) + e^2\left(1+\frac{z^2}{8}\right)^{-2}\beta_2. \label{5Wset3} \end{equation} For a bound of type \eqref{5theorem3}, we bound $(1+\frac{z^2}{8})^{-2}$ by $4(1+z)^{-2}$, which holds for all $z\geq 0$ since $1+z \leq 2(1+\frac{z^2}{8})$.
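(As a parenthetical remark, the elementary constants used so far in this case analysis can be verified numerically. The following short script is only a sanity check, not part of the argument; the grid search over $z$ is illustrative.)
\begin{verbatim}
import numpy as np

z = np.linspace(0, 50, 200001)

# e^2 (1 + z^2/2)^{-2} <= 1.867 e^2 (1 + z)^{-2} for z >= 0
assert np.all((1 + z)**2 / (1 + z**2 / 2)**2 <= 1.867)

# min(1/2, 1/(z sqrt(2 pi))) exp(-z^2/2) <= 1.176 (1 + z)^{-2} for z >= 0
tail = np.minimum(0.5, 1 / (np.maximum(z, 1e-12) * np.sqrt(2 * np.pi)))
assert np.all(tail * np.exp(-z**2 / 2) * (1 + z)**2 <= 1.176)

# constants collected in the text
assert 1.867 * np.e**2 + 1.176 < 15      # case beta_2 + beta_3 >= 1
assert 4.1 * (1 + 2)**2 < 37             # case z in [0, 2]

print("elementary bounds verified on the grid")
\end{verbatim}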
Thus from \eqref{5Wset2} and \eqref{5Wset3}, \begin{align*} |{\mathbb P}(W\geq z)-{\mathbb P}(\overline{W}>z)| &\leq 2\sum_{i=1}^n {\mathbb P}\left(|\xi_i|>\frac{z}{4}\right) + e^2\left(1+\frac{z^2}{8}\right)^{-2}\beta_2\\ &\leq 2\sum_{i=1}^n {\mathbb P}\left(|\xi_i|>\frac{z}{4}\right) + 4e^2(1+z)^{-2} (\beta_2+\beta_3), \end{align*} where for the last inequality we used that $\beta_3 \ge 0$. Hence, using the triangle inequality, we have for $z>2$, \begin{align*} \lefteqn{ |{\mathbb P}(W\leq z)-\Phi(z)|}\\ &\leq |{\mathbb P}(W\geq z)-{\mathbb P}(\overline{W}>z)|+ |{\mathbb P}(\overline{W}>z)-\Phi(-z)| \\ &\leq 2\sum_{i=1}^n {\mathbb P}\left(|\xi_i|>\frac{z}{4}\right) + 4e^2(1+z)^{-2} (\beta_2+\beta_3) + |{\mathbb P}(\overline{W}\leq z)-\Phi(z)|. \end{align*} Now we claim that for $z>2$, \begin{equation} |{\mathbb P}(\overline{W}\leq z)-\Phi(z)|\leq 7115 e^{-\frac{z}{2}}(\beta_2+\beta_3)\label{5e-z/2}. \end{equation} If \eqref{5e-z/2} holds, then for $z>2$, bounding $e^{-\frac{z}{2}} \le \frac{16}{e^{1.5}}(1+z)^{-2}$, we obtain \begin{align} \lefteqn{|{\mathbb P}(W\leq z)-\Phi(z)| } \nonumber \\ &\leq 2\sum_{i=1}^n {\mathbb P}\left(|\xi_i|>\frac{z}{4}\right)+ \left(4e^2+\frac{16}{e^{1.5}}\times7115\right)(1+z)^{-2}(\beta_2+\beta_3) \nonumber \\ &\leq 2\sum_{i=1}^n {\mathbb P}\left(|\xi_i|>\frac{1\vee |z|}{4}\right)+ 25431(1+|z|)^{-2}(\beta_2+\beta_3) \label{5z>2bound} \end{align} which proves the theorem when $\beta_2+\beta_3<1$ and $z>2$, and therefore completes the proof of Theorem \ref{theorem5.1}. So our remaining work is to prove \eqref{5e-z/2}. \medskip {\bf Proof of \eqref{5e-z/2}} To prove \eqref{5e-z/2} we use Stein's method as well as properties of the solution $f_z$ to the Stein Equation \eqref{3steineqn}. We define the function \begin{equation} \Bar{K_i}(t)={\mathbb E}[\Bar{x_i}(\mathbbm{1}_{0 \leq t \leq \Bar{x_i}}- \mathbbm{1}_{\Bar{x_i} \leq t <0})] \label{5kfunction} \end{equation} where $\Bar{x_i}=\xi_i \mathbbm{1}_{\xi_i\leq 1}$. Equation (8.24) in \cite{ChGoSh} and $\sum_{i=1}^n {\mathbb E}{\xi_i}^2=1$ give \begin{equation} \sum_{i=1}^n \int_{-\infty}^1 \Bar{K_i}(t) dt= \sum_{i=1}^n {\mathbb E}\Bar{x_i}^2 = 1- \sum_{i=1}^n {\mathbb E}[{\xi_i}^2 \mathbbm{1}_{\xi_i>1}]. \label{5kfunction2} \end{equation} Using the independence between $\overline{W}^{(i)}$ and $\Bar{x_i}$, \begin{align} {\mathbb E}\left[\overline{W}f_z(\overline{W})\right] &= \sum_{i=1}^n {\mathbb E}\left[\Bar{x_i}f_z(\overline{W})\right]\nonumber\\ &=\sum_{i=1}^n {\mathbb E}[\Bar{x_i}(f_z(\overline{W})-f_z(\overline{W}^{(i)}) )]+ \sum_{i=1}^n {\mathbb E}\Bar{x_i} {\mathbb E}[f_z(\overline{W}^{(i)})]\nonumber\\ &=\sum_{i=1}^n {\mathbb E}\left[\Bar{x_i} \int_0^{\Bar{x_i}} f'_z(\overline{W}^{(i)}+t) dt\right] + \sum_{i=1}^n {\mathbb E}\Bar{x_i} {\mathbb E}[ f_z(\overline{W}^{(i)})] \label{52p33}. \end{align} The first term in \eqref{52p33} can be written as \begin{eqnarray*} \lefteqn{\sum_{i=1}^n {\mathbb E}\left[\Bar{x_i} \int_0^{\Bar{x_i}} f'_z(\overline{W}^{(i)}+t) dt\right] }\\ &=& \sum_{i=1}^n {\mathbb E}\int_{-\infty}^1 f'_z(\overline{W}^{(i)}+t) \Bar{x_i} \mathbbm{1}_{0 \leq t \leq \Bar{x_i}} dt - \sum_{i=1}^n {\mathbb E}\int_{-\infty}^1 f'_z(\overline{W}^{(i)}+t) \Bar{x_i} \mathbbm{1}_{\Bar{x_i}\leq t<0}dt\\ &= &\sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[f'_z(\overline{W}^{(i)}+t)] \; \Bar{K_i}(t) dt \end{eqnarray*} where the last equality follows from independence.
Therefore, \begin{equation} {\mathbb E}[\overline{W}f_z(\overline{W})] = \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[f'_z(\overline{W}^{(i)}+t)] \; \Bar{K_i}(t) dt + \sum_{i=1}^n {\mathbb E}\Bar{x_i} {\mathbb E} [f_z(\overline{W}^{(i)})]. \label{5steineqn} \end{equation} Next we replace $w$ by $\overline{W}$ and take expectations in the Stein Equation \eqref{3steineqn}, together with \eqref{5kfunction2} and \eqref{5steineqn}, to obtain \begin{align} {\mathbb P}(\overline{W}\leq z)-\Phi(z)&= {\mathbb E}[f'_z(\overline{W})] - {\mathbb E}[\overline{W}f_z(\overline{W})] \nonumber \\ &={\mathbb E}[f'_z(\overline{W})]\left(\sum_{i=1}^n \int_{-\infty}^1 \Bar{K_i}(t) dt+\sum_{i=1}^n {\mathbb E}[{\xi_i}^2 \mathbbm{1}_{\xi_i>1}]\right) \nonumber \\ &= \sum_{i=1}^n {\mathbb E}[{\xi_i}^2 \mathbbm{1}_{\xi_i>1}] E[f'_z(\overline{W})] \label{5R1}\\ &\;\;\;\;\;+ \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[f'_z(\overline{W}^{(i)}+\Bar{x_i}) - f'_z(\overline{W}^{(i)}+t)] \Bar{K_i}(t) dt \label{5R2}\\ &\;\;\;\;\;+ \sum_{i=1}^n {\mathbb E}[\xi_i \mathbbm{1}_{\xi_i> 1}] E[ f_z(\overline{W}^{(i)})] \label{5R3}\\ &= R_1 + R_2 + R_3. \nonumber \end{align} In order to prove \eqref{5e-z/2}, we bound each of $R_1$ given in \eqref{5R1}, $R_2$ given in \eqref{5R2} and $R_3$ given in \eqref{5R3} and show that the sum of the three bounds is less than or equal to $7115e^{-\frac{z}{2}}(\beta_2+\beta_3)$ for $z>2$. \medskip {\it Bound for $R_1$} For $R_1= \sum_{i=1}^n {\mathbb E}[{\xi_i}^2 \mathbbm{1}_{\xi_i>1}] {\mathbb E}[f'_z(\overline{W})]$, substituting \eqref{3steinsoln} into the Stein Equation \eqref{3steineqn} gives \begin{align*} f_z'(w)&=w f_z(w) + \mathbbm{1}_{w \leq z}-\Phi(z) \\ &=\begin{cases} (\sqrt{2\pi}we^{w^2/2}\Phi(w)+1)(1-\Phi(z)) & \mbox{if} \;w \leq z;\\ (\sqrt{2\pi}we^{w^2/2}(1-\Phi(w))-1)\Phi(z) & \mbox{if}\; w>z. \end{cases} \end{align*} Using \eqref{3lemma43}, \begin{align*} {\mathbb E}|f'_z(\overline{W})|&= {\mathbb E}[|f'_z(\overline{W})|\mathbbm{1}_{\overline{W}\leq \frac{z}{2}}] + {\mathbb E}[|f'_z(\overline{W})|\mathbbm{1}_{\overline{W}>\frac{z}{2}}]\\ &= {\mathbb E}[(\sqrt{2\pi}we^{w^2/2}\Phi(w)+1)(1-\Phi(z))\mathbbm{1}_{\overline{W}\leq \frac{z}{2}})]+ {\mathbb E}[|f'_z(\overline{W})| \mathbbm{1}_{\overline{W} >\frac{z}{2}}]\\ &\leq \left(\sqrt{2\pi}\; \frac{z}{2} e^{z^2/8}+1\right)(1-\Phi(z)) + {\mathbb P}\left(\overline{W} >\frac{z}{2}\right). \end{align*} By Markov's inequality, $ {\mathbb P}\left(\overline{W} >\frac{z}{2}\right)={\mathbb P} \left(e^{\overline{W}} >e^{\frac{z}{2}}\right) \leq e^{-\frac{z}{2}}{\mathbb E}[e^{\overline{W}}]. $ By definition $\Bar{x_i}\leq 1$, so ${\mathbb E}\Bar{x_i}\leq 0$ and $\sum_{i=1}^n {\mathbb E}{\Bar{x_i}}^2\leq 1$. Applying Lemma 8.2 in \cite{ChGoSh} with $\alpha=B=t=1$ gives $${\mathbb E}[e^{\overline{W}}]\leq \exp(e-1-1)=e^{e-2}.$$ Again employing the standard normal tail bound \eqref{4normaltail}, \begin{align*} {\mathbb E}|f'_z(\overline{W})| &\le \frac{1}{2}e^{-\frac{3}{8}z^2} + \frac{1}{z\sqrt{2\pi}}e^{-\frac{z^2}{2}} + e^{-\frac{z}{2}}e^{e-2}\\ &\le \frac{1}{2}e^{-\frac{1}{2}}e^{-\frac{z}{2}} + \frac{e^{-1}}{2\sqrt{2\pi}} e^{-\frac{z}{2}} + e^{-\frac{z}{2}}e^{e-2}. \end{align*} Hence, we have shown that \begin{align} |R_1| &\leq \left(\frac{1}{2}e^{-\frac{1}{2}}+\frac{e^{-1}}{2\sqrt{2\pi}}+e^{e-2}\right) e^{-\frac{z}{2}} \sum_{i=1}^n {\mathbb E}[{\xi_i}^2 \mathbbm{1}_{\xi_i>1}]\nonumber\\ &\leq \left( \frac{1}{2}e^{-\frac{1}{2}}+\frac{e^{-1}}{2\sqrt{2\pi}}+e^{e-2}\right) e^{-\frac{z}{2}} (\beta_2+\beta_3). 
\label{5boundR1} \end{align} \medskip {\it Bound for $R_2$} For $R_2= \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[f'_z(\overline{W}^{(i)}+\Bar{x_i}) - f'_z(\overline{W}^{(i)}+t)] \Bar{K_i}(t) dt$, we use the Stein Equation \eqref{3steineqn} to write $R_2$ as the sum of two quantities, and then bound them separately; \begin{align} R_2 &= \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[(\overline{W}^{(i)}+\Bar{x_i}) f_z(\overline{W}^{(i)}+\Bar{x_i}) + \mathbbm{1}_{\overline{W}^{(i)}+\Bar{x_i} \leq z} - \Phi(z) \nonumber\\ &\;\;\;\;\;- (\overline{W}^{(i)}+t)f_z(\overline{W}^{(i)}+t) - \mathbbm{1}_{\overline{W}^{(i)}+t \leq z}+\Phi(z) ] \Bar{K_i}(t) dt \nonumber\\ &= R_{21}+R_{22} \label{5R21+R22} \end{align} with \begin{align*} R_{21} &= \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[\mathbbm{1}_{\overline{W}^{(i)}+\Bar{x_i} \leq z}- \mathbbm{1}_{\overline{W}^{(i)}+t \leq z}]\Bar{K_i}(t) dt ; \\ R_{22} &= \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[(\overline{W}^{(i)}+\Bar{x_i}) f_z(\overline{W}^{(i)}+\Bar{x_i})-(\overline{W}^{(i)}+t)f_z(\overline{W}^{(i)}+t)]\Bar{K_i}(t) dt. \end{align*} Since the difference between two indicator functions is always less than or equal to one, $R_{21}$ can be bounded by \begin{align*} R_{21} &\leq \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[\mathbbm{1}_{\Bar{x_i} \leq t} P(z-t<\overline{W}^{(i)} \leq z-\Bar{x_i}|\Bar{x_i})] \Bar{K_i}(t) dt. \end{align*} Applying Proposition 8.1 from \cite{ChGoSh} with $a=z-t$ and $b=z-\Bar{x_i}$ gives \begin{align} R_{21} &\leq \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[6(\min(1, t- \Bar{x_i})+\beta_2+\beta_3)e^{-\frac{z-t}{2}}] \Bar{K_i}(t) dt\nonumber\\ &\leq 6e ^{-\frac{z}{2}}e^{\frac{1}{2}} \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[\min(1, |t|+|\Bar{x_i}|)+\beta_2+\beta_3] \Bar{K_i}(t) dt \nonumber \\ &\le 6e ^{-\frac{z}{2}}e^{\frac{1}{2}} (\beta_2 + \beta_3) + 6e ^{-\frac{z}{2}}e^{\frac{1}{2}} \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[\min(1, |t|+|\Bar{x_i}|)] \Bar{K_i}(t) dt , \label{5R211} \end{align} where we used \eqref{5kfunction2} for the last step. Note that $\mathbbm{1}_{0 \leq t\leq \Bar{x_i}}+\mathbbm{1}_{\Bar{x_i} \leq t<0} \leq \mathbbm{1}_{|t|\leq |\Bar{x_i}|}$, so $\Bar{K_i}(t) \leq {\mathbb E}[|\Bar{x_i}| \mathbbm{1}_{|t|\leq |\Bar{x_i}|}]$. Moreover, as both $\min(1, |t|+|\Bar{x_i}|)$ and $|\Bar{x_i}| \mathbbm{1}_{|t|\leq |\Bar{x_i}|}$ are increasing functions of $|\Bar{x_i}|$, they are positively correlated. So \begin{align*} {\mathbb E}[\min(1, |t|+|\Bar{x_i}|)] \Bar{K_i}(t) &\leq {\mathbb E}[\min(1, |t|+|\Bar{x_i}|)] {\mathbb E}[|\Bar{x_i}| \mathbbm{1}_{|t|\leq |\Bar{x_i}|}]\\ &\leq {\mathbb E}[\min(1, |t|+|\Bar{x_i}|) |\Bar{x_i}| \mathbbm{1}_{|t|\leq |\Bar{x_i}|}] \\ &\leq 2{\mathbb E}[\min(1, |\Bar{x_i}|) |\Bar{x_i}| \mathbbm{1}_{|t|\leq |\Bar{x_i}|}]. \end{align*} This gives \begin{align} \lefteqn{ \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[\min(1, |t|+|\Bar{x_i}|)] \Bar{K_i}(t) dt } \nonumber \\ &\leq \sum_{i=1}^n \int_{-\infty}^1 2{\mathbb E}[\min(1, |\Bar{x_i}|) |\Bar{x_i}| \mathbbm{1}_{|t|\leq |\Bar{x_i}|}] dt \\ &\leq 4\sum_{i=1}^n {\mathbb E}[\min(1, |\Bar{\xi_i}|) |\Bar{\xi_i}|^2] = 4(\beta_2+\beta_3). \label{5R213} \end{align} Substituting \eqref{5R213} into \eqref{5R211}, \begin{align} R_{21}\leq 6e^{-\frac{z}{2}}e^{\frac{1}{2}} (4+1) (\beta_2+\beta_3) = 30e^{\frac{1}{2}}(\beta_2+\beta_3)e^{-\frac{z}{2}}. 
\label{5R21upperbound} \end{align} Similarly, we can construct a lower bound for $R_{21}$ by symmetry, \begin{align} R_{21} &\geq \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[-\mathbbm{1}_{t \leq \Bar{x_i}} P(z-\Bar{x_i}<\overline{W}^{(i)} \leq z-t |\Bar{x_i})] \Bar{K_i}(t) dt \nonumber\\ &\geq - \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[6(\min(1, \Bar{x_i}-t)+\beta_2+\beta_3) e^{-\frac{z-\Bar{x_i}}{2}}] \Bar{K_i}(t) dt\nonumber\\ &\geq - 6e^{-\frac{z}{2}}e^{\frac{1}{2}} \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[\min(1, |t|+|\Bar{x_i}|)+\beta_2+\beta_3] \Bar{K_i}(t) dt. \label{5R21lowerbound} \end{align} Proceeding now as for \eqref{5R213} gives that $R_{21}\geq -30e^{\frac{1}{2}}(\beta_2+\beta_3)e^{-\frac{z}{2}}$ and therefore, \begin{equation} |R_{21}|\leq 30e^{\frac{1}{2}}(\beta_2+\beta_3)e^{-\frac{z}{2}}. \label{5R21bound} \end{equation} For $R_{22}$, since $wf_z(w)$ is increasing in $w$, Lemma \ref{lemma8.4explicit} gives \begin{align} R_{22} &\leq \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[\mathbbm{1}_{t\leq \Bar{x_i}}(\overline{W}^{(i)}+\Bar{x_i}) f_z(\overline{W}^{(i)}+\Bar{x_i})|\Bar{x_i} \nonumber \\ & \quad \quad \quad \quad \quad -(\overline{W}^{(i)}+t)f_z(\overline{W}^{(i)}+t)]\Bar{K_i}(t) dt \nonumber \\ &\leq \left(25.8+\frac{20e^{e^2-2}}{\sqrt{2\pi}}\right) e^{-\frac{z}{2}} \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[\min(1, |\Bar{x_i}|+|t|)]\Bar{K_i}(t) dt \nonumber\\ &\leq \left(103.2+\frac{80}{\sqrt{2\pi}}e^{e^2-2}\right) e^{-\frac{z}{2}}(\beta_2+\beta_3). \label{5R221} . \end{align} Here we used \eqref{5R213} for the last step. A lower bound for $R_{22}$ follows similarly, \begin{align} R_{22}&\geq \sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[\mathbbm{1}_{\Bar{x_i}\leq t} (\overline{W}^{(i)}+\Bar{x_i}) f_z(\overline{W}^{(i)}+\Bar{x_i})|\Bar{x_i}\nonumber \\ & \quad \quad \quad \quad \quad -(\overline{W}^{(i)}+t)f_z(\overline{W}^{(i)}+t)]\Bar{K_i}(t) dt \nonumber\\ &\geq -\sum_{i=1}^n \int_{-\infty}^1 {\mathbb E}[\mathbbm{1}_{\Bar{x_i}\leq t} (\overline{W}^{(i)}+t)f_z(\overline{W}^{(i)}+t) \nonumber \\ & \quad \quad \quad \quad \quad -(\overline{W}^{(i)}+\Bar{x_i}) f_z(\overline{W}^{(i)}+\Bar{x_i})|\Bar{x_i}]\Bar{K_i}(t) dt \nonumber\\ &\geq -\left(103.2+\frac{80}{\sqrt{2\pi}}e^{e^2-2}\right) e^{-\frac{z}{2}}(\beta_2+\beta_3). \label{5R222} \end{align} Collecting \eqref{5R21bound}, \eqref{5R221} and \eqref{5R222} gives \begin{equation} |R_2|\leq |R_{21}|+|R_{22}|\leq \left(30e^\frac{1}{2}+103.2+\frac{80}{\sqrt{2\pi}}e^{e^2-2}\right) e^{-\frac{z}{2}}(\beta_2+\beta_3). \label{5boundR2} \end{equation} \medskip {\it Bound for $R_3$} Finally, for $R_3=\sum_{i=1}^n {\mathbb E}[\xi_i \mathbbm{1}_{\xi_i> 1}] {\mathbb E}[ f_z(\overline{W}^{(i)})]$, we use similar arguments as for $R_1$; \begin{align*} {\mathbb E}|f_z(\overline{W}^{(i)})|&= {\mathbb E}\left[|f_z(\overline{W}^{(i)})|\mathbbm{1}_{\overline{W}^{(i)}\leq \frac{z}{2}}\right] + {\mathbb E}\left[|f_z(\overline{W}^{(i)})|\mathbbm{1}_{\overline{W}^{(i)}>\frac{z}{2}}\right]\\ &\leq \sqrt{2\pi} \; e^{\frac{z^2}{8}} (1-\Phi(z)) + {\mathbb E}\left[|f_z(\overline{W}^{(i)})| \mathbbm{1}_{\overline{W}^{(i)} > \frac{z}{2}}\right]. \end{align*} From \eqref{3lemma45}, $0<f_z\leq \min(\frac{\sqrt{2\pi}}{4},\frac{1}{|z|}) =\frac{1}{|z|}\leq \frac{1}{2}$ for $z>2$. 
The standard normal tail bound \eqref{4normaltail} and Lemma $8.2$ in \cite{ChGoSh} with $\alpha=B=t=1$ give \begin{align*} {\mathbb E}|f_z(\overline{W}^{(i)})|&\leq \sqrt{2\pi}e^{\frac{z^2}{8}}\frac{1}{z\sqrt{2\pi}} e^{-\frac{z^2}{2}} + \frac{1}{2} {\mathbb P}\left(\overline{W}^{(i)} >\frac{z}{2}\right) \\ &\leq \frac{1}{z}e^{-\frac{3}{8}z^2} + \frac{1}{2} e^{-\frac{z}{2}}e^{e-2}\\ &\leq \frac{1}{2} e^{-\frac{1}{2}} e^{-\frac{z}{2}}+ \frac{1}{2} e^{-\frac{z}{2}}e^{e-2} . \end{align*} Hence, we have shown that \begin{align} |R_3| &\leq \frac{1}{2}(e^{-\frac{1}{2}}+e^{e-2})e^{-\frac{z}{2}} \sum_{i=1}^n{\mathbb E}[\xi_i \mathbbm{1}_{\xi_i> 1}] \nonumber\\ &\leq \frac{1}{2}(e^{-\frac{1}{2}}+e^{e-2}) e^{-\frac{z}{2}}(\beta_2+\beta_3).\label{5boundR3} \end{align} Applying \eqref{5boundR1}, \eqref{5boundR2} and \eqref{5boundR3} to \eqref{5R1}, \eqref{5R2} and \eqref{5R3} respectively, we have \begin{align} \lefteqn{ |{\mathbb P}(\overline{W}\leq z)-\Phi(z)|} \nonumber \\ &\leq \left(31e^{-\frac{1}{2}}+\frac{3}{2}e^{e-2}+103.2+\frac{0.5e^{-1} +80e^{e^2-2}} {\sqrt{2\pi}}\right)e^{-\frac{z}{2}}(\beta_2+\beta_3) \nonumber\\ &\leq 7115e^{-\frac{z}{2}}(\beta_2+\beta_3) \label{5e-z/22}. \end{align} This completes the proof of \eqref{5e-z/2} and therefore of \eqref{5z>2bound}; thus the theorem also holds when $\beta_2+\beta_3<1$ and $z>2$, and the proof of Theorem \ref{theorem5.1} is complete. \end{proof}
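As a closing remark, the remaining elementary optimizations behind the numerical constants used above can be confirmed with a few lines of code. The script below is only a sanity check (not part of any proof); the grids are illustrative.
\begin{verbatim}
import numpy as np

# sup_{x >= 0} x^4 exp(-x^2/2) = 16/e^2, attained at x = 2 (used in Case 2 above)
x = np.linspace(0, 20, 200001)
assert abs(np.max(x**4 * np.exp(-x**2 / 2)) - 16 / np.e**2) < 1e-6

# for z > 2: exp(-z/2) <= (16/e^{1.5}) (1+z)^{-2}; the maximum of
# (1+z)^2 exp(-z/2) over z > 2 is 16 exp(-1.5), attained at z = 3
z = np.linspace(2, 100, 400001)
assert abs(np.max((1 + z)**2 * np.exp(-z / 2)) - 16 * np.exp(-1.5)) < 1e-6

# constant collected in the display labelled 5z>2bound:
# 4 e^2 + (16/e^{1.5}) * 7115 < 25431
assert 4 * np.e**2 + (16 / np.e**1.5) * 7115 < 25431

# constant 25.8 in the proof of the explicit version of Lemma 8.4:
# sup_{z > 2} 4 (1+z^2)(1+z^3) exp(-3 z^2/8 + z/2) / (z sqrt(2 pi)) <= 25.8
val = 4 * (1 + z**2) * (1 + z**3) * np.exp(-3 * z**2 / 8 + z / 2) / (z * np.sqrt(2 * np.pi))
assert np.max(val) <= 25.8

print("constants verified")
\end{verbatim}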
{ "timestamp": "2018-07-24T02:22:52", "yymm": "1807", "arxiv_id": "1807.08672", "language": "en", "url": "https://arxiv.org/abs/1807.08672", "abstract": "A famous result in renewal theory is the Central Limit Theorem for renewal processes. As in applications usually only observations from a finite time interval are available, a bound on the Kolmogorov distance to the normal distribution is desirable. Here we provide an explicit non-uniform bound for the Renewal Central Limit Theorem based on Stein's method and track the explicit values of the constants. For this bound the inter-arrival time distribution is required to have only a second moment. As an intermediate result of independent interest we obtain explicit bounds in a non-central Berry-Essén theorem under second moment conditions.", "subjects": "Probability (math.PR)", "title": "A Bound on the Rate of Convergence in the Central Limit Theorem for Renewal Processes under Second Moment Conditions", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9740426405416756, "lm_q2_score": 0.727975460709318, "lm_q1q2_score": 0.7090791399988469 }
https://arxiv.org/abs/1910.06922
Gradient penalty from a maximum margin perspective
A popular heuristic for improved performance in Generative adversarial networks (GANs) is to use some form of gradient penalty on the discriminator. This gradient penalty was originally motivated by a Wasserstein distance formulation. However, the use of gradient penalty in other GAN formulations is not well motivated. We present a unifying framework of expected margin maximization and show that a wide range of gradient-penalized GANs (e.g., Wasserstein, Standard, Least-Squares, and Hinge GANs) can be derived from this framework. Our results imply that employing gradient penalties induces a large-margin classifier (thus, a large-margin discriminator in GANs). We describe how expected margin maximization helps reduce vanishing gradients at fake (generated) samples, a known problem in GANs. From this framework, we derive a new $L^\infty$ gradient norm penalty with Hinge loss which generally produces equally good (or better) generated output in GANs than $L^2$-norm penalties (based on the Fréchet Inception Distance).
\subsubsection*{\bibname}} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{lmodern} \usepackage{url} \usepackage{booktabs} \usepackage{amsfonts} \usepackage{nicefrac} \usepackage{microtype} \usepackage{amsmath} \usepackage{amsthm} \usepackage{algpseudocode,algorithm,algorithmicx} \usepackage{graphicx} \graphicspath{ {images/} } \usepackage[usenames, dvipsnames]{color} \def\do\/\do-{\do\/\do-} \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \usepackage{mathtools} \definecolor{dark-gray}{gray}{.35} \definecolor{myorange}{RGB}{246, 164, 16} \definecolor{mygreen}{RGB}{1, 100, 3} \usepackage{footnote} \makesavenoteenv{table} \makesavenoteenv{tabular} \usepackage{cleveref}[2012/02/15 \crefformat{footnote}{#2\footnotemark[#1]#3} \begin{document} \twocolumn[ \aistatstitle{Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs} \aistatsauthor{ Alexia Jolicoeur-Martineau \And Ioannis Mitliagkas} \aistatsaddress{ Mila, University of Montreal \And Mila, University of Montreal} ] \begin{abstract} We generalize the concept of maximum-margin classifiers (MMCs) to arbitrary norms and non-linear functions. Support Vector Machines (SVMs) are a special case of MMC. We find that MMCs can be formulated as Integral Probability Metrics (IPMs) or classifiers with some form of gradient norm penalty. This implies a direct link to a class of Generative adversarial networks (GANs) which penalize a gradient norm. We show that the Discriminator in Wasserstein, Standard, Least-Squares, and Hinge GAN with Gradient Penalty is an MMC. We explain why maximizing a margin may be helpful in GANs. We hypothesize and confirm experimentally that $L^\infty$-norm penalties with Hinge loss produce better GANs than $L^2$-norm penalties (based on common evaluation metrics). We derive the margins of Relativistic paired (Rp) and average (Ra) GANs \end{abstract} \section{Introduction} Support Vector Machines (SVMs) \cite{cortes1995support} are a very popular type of maximum-margin classifier (MMC). The margin can be conceptualized as the minimum $L^p$ distance between the decision boundary of the classifier and any data-point. An SVM is a linear classifier which maximizes the minimum $L^2$ margin. A significant body of work has been done on generalizing SVM beyond a simple linear classifier through the kernel trick \cite{aizerman1964theoretical}. However, until very recently, SVMs had not been generalized to arbitrary norms with non-linear classifiers (e.g., neural networks). In this paper, we describe how to train MMCs (which generalize SVMs) through different approximations of the $L^p$-norm margin and we show that this results in loss functions with a gradient norm penalty. Generative adversarial networks (GANs) \citep{GAN} are a very successful class of generative models. Their most common formulation involves a game played between two competing neural networks, the discriminator $D$ and the generator $G$. $D$ is a classifier trained to distinguish real from fake examples, while $G$ is trained to generate fake examples that will confuse $D$ into recognizing them as real. When the discriminator's objective is maximized, it yields the value of a specific divergence (i.e., a distance between probability distributions) between the distributions of real and fake examples. The generator then aims to minimize that divergence (although this interpretation is not perfect; see \citet{jolicoeur2018beyonddivergence}). 
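To make the game above concrete, a deliberately minimal PyTorch-style sketch of one alternating update is given below. The networks, optimizers and data pipeline are assumptions (they are not specified here), and the sketch uses the non-saturating variant of the standard GAN loss discussed later.
\begin{verbatim}
import torch
import torch.nn.functional as F

def gan_step(D, G, x_real, opt_D, opt_G, z_dim=128):
    # --- discriminator step: push D(x_real) up and D(G(z)) down ---
    z = torch.randn(x_real.size(0), z_dim)
    d_real, d_fake = D(x_real), D(G(z).detach())
    loss_D = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # --- generator step: push D(G(z)) up (non-saturating loss) ---
    z = torch.randn(x_real.size(0), z_dim)
    d_gen = D(G(z))
    loss_G = F.binary_cross_entropy_with_logits(d_gen, torch.ones_like(d_gen))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
\end{verbatim}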
Importantly, many GANs apply some form of gradient norm penalty to the discriminator \citep{WGAN-GP,fedus2017many,mescheder2018training,karras2019style}. This penalty is motivated by a Wasserstein distance formulation in \citet{WGAN-GP}, or by numerical stability arguments \cite{mescheder2018training,karras2019style}. In this paper, we show that discriminator loss functions that use a gradient penalty correspond to specific types of MMCs. Our contributions are the following: \begin{enumerate} \item We define the concept of expected margin maximization and show that Wasserstein-, Standard-, Least-Squares-, and Hinge-GANs can be derived from this framework. \item We derive a new method from this framework, a GAN that penalize $L^\infty$ gradient norm values above 1 (instead of penalizing all values unequal to 1 as done by \citet{WGAN-GP}). We hypothesize and experimentally show that this method leads to better generated outputs. \item We describe how margin maximization (and thereby gradient penalties) help reduce vanishing gradients at fake (generated) samples, a known problem in many GANs. \item We derive the margins of Relativistic paired and average GANs \citep{jolicoeur2018relativistic}. \end{enumerate} It is worth noting that \cite{lim2017geometric} explore a similar connection between GANs and SVMs, which they use to propose Geometric GANs. The main difference to our work is that they assume a linear classifier working on the {\em feature space of a neural network's output} instead of the {\em input space}. Furthermore, that work does not exploit the duality theory of SVMs. Thereby, it does not draw a connection to gradient penalty terms. Our work explores this new connection which motivates an $L^\infty$ norm gradient penalty and shows great promise over the standard $L^2$ norm gradient penalty. The paper is organized as follows. In Section~\ref{sec:2}, we review SVMs and GANs. In Section~\ref{sec:3}, we generalize the concept of maximum-margin classifiers (MMCs). In Section~4, we explain the connections between MMCs and GANs with gradient penalty. In Section~\ref{sec:4.1}, we mention that enforcing 1-Lipschitz is equivalent to assuming a bounded gradient; this implies that Wasserstein's distance can be approximated with an MMC formulation. In Section~\ref{sec:4.2}, we describe the benefits of using MMCs in GANs. In Section~\ref{sec:4.3}, we hypothesize that $L^1$-norm margins may lead to more robust classifiers. In Section~\ref{sec:4.4}, we derive margins for Relativistic paired and average GANs. Finally, in Section~\ref{sec:5}, we provide experiments to support the hypotheses in our contributions. \section{Review of SVMs and GANs} \label{sec:2} \subsection{Notation} \label{sec:notation} We focus on binary classifiers. Let $f$ be the classifier and $(x,y) \sim \mathbb{D}$ the distribution (of a dataset $D$) with $n$ data samples $x$ and labels $y$. As per SVM literature, $y=1$ when $x$ is sampled from class 1 and $y=-1$ when $x$ is sampled from class 2. Furthermore, we denote $x_1 = x | (y = 1) \sim \mathbb{P}$ and $x_2 = x | (y = -1) \sim \mathbb{Q}$ as the data samples from class 1 and class 2 respectively (with distributions $\mathbb{P}$ and $\mathbb{Q}$). When discussing GANs, $x_1 \sim \mathbb{P}$ (class 1) refer to real data samples and $x_2 \sim \mathbb{Q}$ (class 2) refer to fake data samples (produced by the generator). The $L^{\infty}$-norm is defined as: $|| x ||_\infty = \max(|x_1|,|x_2|,\ldots,|x_k|)$. 
Note that we sometimes refer to a function $F$; this is an objective function to be maximized (not to be confused with the classifier $f$). The critic (C) is the discriminator (D) before applying any activation function (i.e., $D(x)=a(C(x))$, where $a$ is the activation function). For consistency with existing literature, we will generally refer to the critic rather than the discriminator. \subsection{SVMs} In this section, we explain how to obtain a linear maximum-margin classifier (MMC) which maximizes the minimum $L^2$-norm margin (i.e., SVMs). \subsubsection{Decision boundary and margin} The \em decision boundary \em of a classifier is defined as the set of points $x_0$ such that $f(x_0)=0$. The margin is either defined as i) the minimum distance between a sample and the boundary, or ii) the minimum distance between the \em closest sample \em to the boundary and the boundary. The former thus corresponds to the \em margin of a sample \em and the latter corresponds to the \em margin of a dataset \em. In order to disambiguate the two cases, we refer to the former as the \em margin \em and the latter as the \em minimum margin\em. The first step towards obtaining a linear MMC is to define the $L^p$-norm margin: \begin{align}\label{eq:10} \gamma(x) = &\min_{x_0} || x_0 - x ||_p \hspace{5pt} \text{ s.t. } \hspace{5pt} f(x_0)=0 \end{align} With a linear classifier (i.e., $f(x) = w^T x - b$) and $p=2$, we have: $$\gamma(x) = \frac{|w^T x - b|}{||w||_2} = \frac{\alpha(x)}{\beta}$$ Our goal is to maximize this margin, but we also want to obtain a classifier. To do so, we simply replace $\alpha(x)=|w^T x - b|$ by $\widetilde{\alpha}(x,y) = y(w^T x - b)$. We call $\widetilde{\alpha}$ the \em functional margin\em. After replacement, we obtain the \em geometric margin\em: $$\widetilde{\gamma}(x,y) = \frac{y(w^T x - b)}{||w||_2} = \frac{\widetilde{\alpha}(x,y)}{\beta}$$ The specific goal of SVMs is to find a linear classifier which maximizes the \em minimum \em $L^2$-norm geometric margin (in each class): \begin{align}\label{eqn:1} \max_{w,b} \min_{(x,y) \in D} \widetilde{\gamma}(x,Y). \end{align} \subsubsection{Formulations} Directly solving equation \eqref{eqn:1} is an ill-posed problem for multiple reasons. Firstly, the numerator and denominator are dependent on one another; increasing the functional margin also increases the norm of the weights (and vice-versa). Thereby, there are infinite solutions which maximize the geometric margin. Secondly, maximizing $\widetilde{\alpha}$ means minimizing the denominator which can cause numerical issues (division by near zero). Thirdly, it makes for a very difficult optimization given the max-min formulation. For these reasons, we generally prefer to i) constrain the numerator and minimize the denominator, or ii) constrain the denominator and maximize the numerator. The classical approach is to minimize the denominator and constrain the numerator using the following formulation: \begin{align}\label{eqn:2} \min_{w,b} {||w||^2_2} \hspace{5pt} \text{ s.t. } \hspace{5pt} y(w^T x - b) \ge 1 \hspace{2pt}\forall\hspace{2pt} (x,y) \in D \end{align} This formulation corresponds to Hard-Margin SVM. The main limitation of this approach is that it only works when the data are separable. However, if we take the opposite approach of maximizing a function of $y(w^T x - b)$ and constraining the denominator $|| w ||_2$, we can still solve the problem with non-separable data. 
For this reason, we prefer solving of the following Soft-Margin SVM: \begin{align*} \min_{w,b} \frac{1}{n} \sum_{(x,y)\in D}\left[\max(0,1-y(w^T x - b))\right] \hspace{1pt} \text{ s.t. } \hspace{1pt} ||w||_2 = 1 \end{align*} This can be rewritten equivalently with a KKT multiplier $\lambda$ in the following way: \begin{align*} \min_{w,b} \frac{1}{n} \sum_{(x,y)\in D}\left[\max(0,1-y(w^T x - b))\right] + \lambda (||w||^2_2 - 1), \end{align*} Note that the Hinge function $max(0,1-y(w^T x - b)$ is simply a relaxation of the hard constraint $y(w^T x - b) \ge 1 \hspace{2pt}\forall\hspace{2pt} (x,y) \in D$. Thereby, we are not actually solving equation \eqref{eqn:1} anymore. \subsection{GANs}\label{sec:GAN} GANs can be formulated in the following way: \begin{align}\label{eqn:3} &\max_{C: \mathcal{X} \to \mathbb{R}} \mathbb{E}_{x_1 \sim \mathbb{P}}\left[ f_1(C(x_1)) \right] + \mathbb{E}_{z \sim \mathbb{Z}} \left[ f_2(C(G(z))) \right], \\ &\min_{G: Z \to \mathcal{X}} \mathbb{E}_{z \sim \mathbb{Z}} \left[ f_3(C(G(z))) \right], \end{align}\label{eqn:4} where $f_1, f_2, f_3:\mathbb{R} \to \mathbb{R}$, $\mathbb{P}$ is the distribution of real data with support $\mathcal{X}$, $\mathbb{Z}$ is a multivariate normal distribution with support $Z=\mathbb{R}$, $C(x)$ is the critic evaluated at $x$, $G(z)$ is the generator evaluated at $z$, and $G(z) \sim \mathbb{Q}$, where $\mathbb{Q}$ is the distribution of fake data. Many variants exist; to name a few: Standard GAN (SGAN) \citep{GAN} corresponds to $f_1(z)=\log(\text{sigmoid}(z))$, $f_2(z)=\log(\text{sigmoid}(-z))$, and $f_3(z)=-f_1(z)$. Least-Squares GAN (LSGAN) \citep{LSGAN} corresponds to $f_1(z)=-(1-z)^2$, $f_2(z)=-(1+z)^2$, and $f_3(z)=-f_1(z)$. HingeGAN \citep{lim2017geometric} corresponds to $f_1(z)=-max(0,1-z)$, $f_2(z)=-max(0,1+z)$, and $f_3(z)=-z$. An important class of GANs are those based on Integral probability metrics (IPMs) \citep{muller1997integral}. IPMs are statistical divergences (distances between probability distributions) defined in the following way: \[ IPM_{F} (\mathbb{P} || \mathbb{Q}) = \sup_{C \in \mathcal{F}} \mathbb{E}_{x_1 \sim \mathbb{P}}[C(x_1)] - \mathbb{E}_{x_2 \sim \mathbb{Q}}[C(x_2)], \] where $\mathcal{F}$ is a class of real-valued functions. Of note, certain connections between IPMs and SVMs have been identified in \citet{sriperumbudur2009integral}. IPM-based GANs attempt to solve the following problem $$ \min_{G} \max_{C \in \mathcal{F}} \mathbb{E}_{x_2 \sim \mathbb{P}}[C(x_1)] - \mathbb{E}_{z \sim \mathbb{Z}}[C(G(z))].$$ There are many GANs based on IPMs \citep{mroueh2017sobolev,Fisher}, but we will focus on two of them: WGAN \citep{WGAN} and WGAN-GP \citep{WGAN-GP}. WGAN is an IPM-based GAN which uses the first-order Wasserstein's distance ($W_1$), the IPM restricted to the class of all 1-Lipschitz functions. This corresponds to the set of functions $C$ such that $\frac{C(x_1)-C(x_2)}{d(x_1,x_2)} \leq 1$ for all $x_1$,$x_2$, where $d(x_1,x_2)$ is a metric. $W_1$ also has a primal form which can be written in the following way: $$ W_1 (\mathbb{P}, \mathbb{Q}):= \inf_{\pi \in \Pi (\mathbb{P}, \mathbb{Q})} \int_{M \times M} d(x_1, x_2) \, \mathrm{d} \pi (x_1, x_2),$$ where $\Pi (\mathbb{P}, \mathbb{Q})$ is the set of all distributions with marginals $\mathbb{P}$ and $\mathbb{Q}$ and we call $\pi$ a coupling. The original way to enforce the 1-Lipschitz property on the critic was to clamp its weights after each update. This was later shown to be problematic \citep{WGAN-GP}. 
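For concreteness, the weight-clipping heuristic just mentioned is simply a projection of the critic's parameters after every update; a minimal PyTorch-style sketch is given below (the clipping constant $c$ is a hyperparameter, $0.01$ in the original WGAN paper).
\begin{verbatim}
import torch

@torch.no_grad()
def clip_critic_weights(critic, c=0.01):
    # project every parameter of the critic back into the box [-c, c],
    # a crude way of keeping the critic (approximately) Lipschitz
    for p in critic.parameters():
        p.clamp_(-c, c)
\end{verbatim}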
Albeit with its issues, WGAN improves the stability of GANs and reduces the incidence of mode collapse (when the generator produces less diversity than the training dataset) \citep{WGAN}. \citet{WGAN-GP} showed that if the optimal critic $f^{*}(x)$ is differentiable everywhere and $\hat{x} = \alpha x_1 + (1-\alpha)x_2$ for $0 \leq \alpha \leq 1$, we have that $||\nabla C^{*}(\hat{x})|| = 1$ almost everywhere for all pair $(x_1,x_2)$ which comes from the optimal coupling $\pi^{*}$. Sampling from the optimal coupling is difficult so they suggest to softly penalize $\mathbb{E}_{\tilde{x}}{(||\nabla_{\tilde{x}} C(\tilde{x})||_2-1)^2}$, where $\tilde{x} = \alpha x + (1-\alpha)y$, $\alpha \sim U(0,1)$, $x \sim \mathbb{P}$, and $y \sim \mathbb{Q}$. This penalty works well in practice and is a popular way to approximate Wasserstein's distance. However, this is not equivalent to estimating Wasserstein's distance since we are not sampling from $\pi^{*}$ and $f^{*}$ does not need to be differentiable everywhere \citep{petzka2017regularization}. Of importance, gradient norm penalties of the form $\mathbb{E}_{x}{(||\nabla_{x} D(x)||_2-\delta)^2}$, for some $\delta \in \mathbb{R}$ are very popular in GANs. Remember that $D(x)=a(C(x))$; in the case of IPM-based-GANs, we have that $D(x)=C(x)$. It has been shown that the GP-1 penalty ($\delta=1$), as in WGAN-GP, also improves the performance of non-IPM-based GANs \citep{ManyPaths}. Another successful variant is GP-0 ($\delta=0$ and $x \sim \mathbb{P}$) \citep{mescheder2018training,karras2019style}. Although there are explanations to why gradient penalties may be helpful \citep{mescheder2018training, kodali2017convergence, WGAN-GP}, the theory is still lacking. There are other GAN variants which improve the stability of training and will be relevant to our discussion. The first one is HingeGAN \citep{lim2017geometric} which uses the Hinge loss as objective function. This corresponds to using equation \eqref{eqn:3} and \eqref{eqn:4} using $f_1(z)=-max(0,1-z)$, $f_2(z)=-max(0,1+z)$, and $f_3(z)=-z$. Another class of GANs relevant to our discussion are Relativistic paired GANs (RpGANs) \citep{jolicoeur2018relativistic,jolicoeur2019relativistic}: \begin{align*} \max\limits_{C:\mathcal{X} \to \mathbb{R}} \hspace{1pt} &\underset{ \substack{x_1 \sim \mathbb{P} \\ x_2 \sim \mathbb{Q}}}{\mathbb{E}\vphantom{p}} \left[ f_1 \left( C(x_1) - C(x_2) \right) \right], \\ \min_{G} \hspace{1pt}& \underset{ \substack{x_1 \sim \mathbb{P} \\ x_2 \sim \mathbb{Q}}}{\mathbb{E}\vphantom{p}} \left[ f_2 \left( C(x_1) - C(x_2) \right) \right], \end{align*} and Relativistic average GANs (RaGANs) \citep{jolicoeur2018relativistic,jolicoeur2019relativistic}: \begin{align*} \max\limits_{C:\mathcal{X} \to \mathbb{R}} &\mathbb{E}_{x_1 \sim \mathbb{P}}\left[ f_1\left( C(x_1)-\mathbb{E}_{x_2 \sim \mathbb{Q}} \hspace{1pt} C(x_2) \right)) \right] + \\ &\mathbb{E}_{x_2 \sim \mathbb{Q}} \left[ f_2 \left( C(x_2)-\mathbb{E}_{x_1 \sim \mathbb{P}} \hspace{1pt} C(x_1) \right) \right], \\ \max_{G} \hspace{2pt} &\mathbb{E}_{x_2 \sim \mathbb{Q}}\left[ f_1\left( C(x_2)-\mathbb{E}_{x_1 \sim \mathbb{P}} \hspace{1pt} C(x_1) \right)) \right] + \\ &\mathbb{E}_{x_1 \sim \mathbb{P}} \left[ f_2 \left( C(x_1)-\mathbb{E}_{x_2 \sim \mathbb{Q}} \hspace{1pt} C(x_2) \right) \right]. \end{align*} Most loss functions can be represented as RaGANs or RpGANs; SGAN, LSGAN, and HingeGAN all have relativistic counterparts. 
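To make the penalty described above explicit, here is a PyTorch-style sketch of the two-sided $L^2$ gradient penalty evaluated at random interpolates between real and fake samples; the critic is assumed to act sample-wise on a batch.
\begin{verbatim}
import torch

def wgan_gp_penalty(critic, x_real, x_fake):
    # random interpolates x_tilde = a*x_real + (1-a)*x_fake, a ~ U(0,1)
    a = torch.rand(x_real.size(0), *([1] * (x_real.dim() - 1)), device=x_real.device)
    x_tilde = (a * x_real.detach() + (1 - a) * x_fake.detach()).requires_grad_(True)
    # gradients of the critic at the interpolates (sum -> per-sample gradients)
    (grad,) = torch.autograd.grad(critic(x_tilde).sum(), x_tilde, create_graph=True)
    grad_norm = grad.flatten(1).norm(2, dim=1)   # ||grad C(x_tilde)||_2 per sample
    return ((grad_norm - 1) ** 2).mean()         # E[(||grad C||_2 - 1)^2]
\end{verbatim}
In WGAN-GP this term is added to the critic loss with a weight $\lambda$ ($\lambda=10$ in \citet{WGAN-GP}).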
\section{Generalizing SVMs} \label{sec:3} The main approach used to generalize SVMs beyond the linear classifier is to apply the kernel trick \citep{aizerman1964theoretical}. This simply consists in replacing $f(x)=w^T x - b$ by $f(x) = w^T \phi(x) - b$, where $\phi(x)$ is a kernel. Kernels can be chosen a priori or learned \citep{goodfellow2016deep}. In this section, we generalize SVMs to arbitrary classifiers $f(x)$, $L^p$-norms and loss functions. We start by showing how to derive an $L^p$-norm geometric margin. Then, we present the concept of maximizing the \em expected \em margin, rather than the \em minimum \em margin. \subsection{Approximating the geometric margin} Calculating the geometric margin involves computing a projection. For general $L^p$ norms it has no closed form. One way to approximate it, is using a Taylor expansion. Depending on the order of the Taylor expansion (before or after solving for the projection), we can get two different approximations: one new and one existing. \subsubsection{Taylor approximation (After solving)}\label{sec:3.1.1} The formulation of the $L^p$-norm margin \eqref{eq:10} has no closed form for arbitrary non-linear classifiers. However, when $p=2$, if we use a Taylor's expansion, we can show that \begin{align*} \gamma_2(x) &= \frac{|\nabla_{x_0} f(x_0)^T (x-x_0)|}{|| \nabla_{x_0} f(x_0) ||_2} \\ &\approx \frac{|f(x)|}{|| \nabla_{x_0} f(x_0) ||_2} \hspace*{2pt} \text{ (Taylor's expansion)} \\ &\approx \frac{|f(x)|}{|| \nabla_x f(x) ||_2} \hspace*{2pt} \text{ (if $f(x)\approx w^T x - b$)} \end{align*} This approximation depends on approximate linearity of the classifier. If $f(x)=w^T x - b$, we have that $|| \nabla_x f(x) ||_2 = ||w||$ (and vice-versa). This means that if we enforce $|| \nabla_x f(x) ||_2 \approx 1$ for all $x$, we have $|| \nabla_{x_0} f(x_0) ||_2 \approx 1$ for all points $x_0$ on the boundary. This may appear to bring us back to the original scenario with a linear classifier. However, in practice, we only penalize the gradient norm in expectation which means that we do not obtain a linear classifier. Thus, we can use the following pseudo-margin: $$ \gamma(x)_2^{+} = \frac{yf(x)}{|| \nabla_x f(x) ||_2}. $$ \subsubsection{Taylor approximation (Before solving)} An alternative approach to derive a pseudo-margin is to use Taylor's approximation before solving the problem rather than after (as done by \citet{matyasko2017margin} and \citet{elsayed2018large}): \begin{align*} \gamma_p(x) &= \min_{r} || r ||_p \hspace{5pt} \text{ s.t. } \hspace{5pt} f(x+r)=0 \\ &\approx \min_{r} || r ||_p \hspace{5pt} \text{ s.t. } \hspace{5pt} f(x)+\nabla_x f(x)^T r=0 \\ &=\frac{|f(x)|}{|| \nabla_x f(x) ||_q}, \end{align*} where $||\cdot||_q$ is the dual norm \citep{boyd2004convex} of $||\cdot||_p$. By Hölder's inequality \citep{holder1889ueber, rogers1888extension}, we have that $1/p + 1/q=1$. This means that if $p=2$, we still get $q=2$; if $p=\infty$, we get $q=1$; if $p=1$, we get $q=\infty$. We can then define the geometric margin as: $$ \gamma_p^{-} = \frac{yf(x)}{|| \nabla_x f(x) ||_q}.$$ \subsection{Maximizing the expected margin} As previously discussed, the goal of hard-margin SVMs is to maximize the \em minimum margin \em as in equation \eqref{eqn:1}. However, this problem is infeasible in non-linearly separable datasets. 
In these cases, the soft-margin formulation of SVM is most common: \begin{align}\label{eqn:4andahalf} \max_f \mathbb{E}_{(x,y)\sim D}\left[ F(\gamma(x,y)) \right], \end{align} where $F:\mathbb{R}\to \mathbb{R}$ is an objective to be maximized (not to be confused with the classifier $f$) and the expectation represents the empirical average over a sampled dataset $D$. For large datasets, the empirical average is a good approximation of the expectation of the data distribution, $\mathbb{D}$. This is an easier optimization problem to solve compared to equation \eqref{eqn:1}, and is also always feasible. If $F$ is chosen to be the negative hinge function (i.e., $F(z)=-max(0,1-z)$), we ignore samples far from the boundary (as in SVMs). For general choices of $F$, every sample may influence the solution. The identity function $F(z)=z$, cross entropy with sigmoid activation $F(z)=\log(\text{sigmoid}(z)))$ and least-squares $F(z)=-(1-z)^2$ are also valid choices. However, as before, we prefer to separate the numerator from the denominator of the margin. Furthermore, the denominator (the norm of the gradient) is now a random variable. To make things as general as possible, we use the following formulation: \begin{align}\label{eqn:6} \max_f \mathbb{E}_{(x,y)\sim \mathbb{D}}\left[F(yf(x)) - \lambda g(||\nabla_x f(x)||_q)\right]. \end{align} where $F,g:\mathbb{R}\to \mathbb{R}$ and $\lambda$ is a scalar penalty term. There are many potential choices of $F$ and $g$ which we can use. The standard choice of $g$ (in SVMs) is $g(z)=(z^2-1)$. This corresponds to constraining $|| \nabla_x f(x) || = 1$ or $|| \nabla_x f(x) || \leq 1$ for all $x$ (by KKT conditions). Since the gradient norm is a random variable, we do not want it to be equal to one everywhere. For this reason, we will generally work with softer constraints of the form $g(z)=(z-1)^2$ or $g(z)=max(0,z-1)$. The first function enforces a soft equality constraint so that $z\approx 1$ while the second function enforces a soft inequality constraint so that $z \leq 1$. Of note, under perfect separation of the data and with a linear classifier, it has been shown that the empirical version of equation \eqref{eqn:6} (integrating over a dataset $D$ drawn from distribution $\mathbb{D}$) divided by its norm is equivalent to \eqref{eqn:1} under the constraint $||w||=1$ when $\lambda \to 0$ \citep{rosset2004margin}. This is true for cross-entropy and Hinge loss functions, but not least-squares. This implies that, under strong assumptions, maximizing the expected margin could also maximize the minimum margin. \subsection{Better approximation of the margin} In Section~\ref{sec:3.1.1}, we showed an approximation to the $L^2$-norm geometric margin. To reach a closed form, we had to assume that the classifier was approximately linear. This approximation is problematic since samples are pushed away from the boundary so we may never minimize the gradient norm at the boundary (as needed to actually maximize the geometric margin). Given that we separate the problem of estimating an MMC into maximizing a function of the numerator ($yf(x)$) and minimizing a function of the denominator (gradient norm), we do not need to make this approximation. Rather than finding the closest element of the decision boundary $x_0$ for a given sample $x$, we can simply apply the penalty on the decision boundary. However, working on the boundary is intractable given the infinite size of the decision boundary. 
Although sampling from the decision boundary is difficult, sampling around it is easy. Rather than working on the decision boundary, we can instead apply the constraint in a bigger region encompassing all points of the decision boundary. A simple way to do so is to sample from all linear interpolations between samples from classes 1 and 2. This can be formulated as: \begin{align}\label{eqn:7} \max_f \mathbb{E}_{(x,y)\sim \mathbb{D}}\left[F(yf(x)) \right] - \lambda \mathbb{E}_{\tilde{x}} \left[g(||\nabla_{\tilde{x}} f(\tilde{x})||_2)\right], \end{align} where $\tilde{x} = \alpha x + (1-\alpha)y$, $\alpha \sim U(0,1)$, $x \sim \mathbb{P}$, and $y \sim \mathbb{Q}$. This is same interpolation as used in WGAN-GP; this provides an additional argument in favor of this practice. \section{Connections to GANs} Let $f(x)=C(x)$. Although not immediately clear given the different notations, the objective functions of the discriminator/critic in many penalized GANs are equivalent to the ones from MMCs based on \eqref{eqn:7}. If $g(z)=(z-1)^2$, we have that $F(z)=z$ corresponds to WGAN-GP, $F(z)=\log(\text{sigmoid}(z))$ corresponds to SGAN, $F(z)=-(1-z)^2$ corresponds to LSGAN, and $F(z)=-max(0,1-z)$ corresponds to HingeGAN with gradient penalty. Thus, all of these penalized GANs maximize an expected $L^2$-norm margin. \subsection{Equivalence between gradient norm constraints and Lipschitz functions} \label{sec:4.1} As stated in Section~\ref{sec:GAN}, the popular approach of softly enforcing $|| \nabla_x f(x) ||_2 \approx 1$ at all interpolations between real and fake samples does not ensure that we estimate the Wasserstein distance ($W_1$). On the other hand, we show here that enforcing $|| \nabla_x f(x) ||_2 \leq 1$ is sufficient in order to estimate $W_1$. Assuming $d(x_1,x_2)$ is a $L^p$-norm, $p \ge 2$ and $f(x)$ is differentiable, we have that: \begin{align*} || \nabla f(x) ||_p \leq K \iff f \text{ is K-Lipschitz on $L^p$}. \end{align*} See appendix for the proof. \citet{adler2018banach} showed a similar result on dual norms. This suggests that, in order to work on the set of Lipschitz functions, we can penalize $|| \nabla_x f(x) || \leq 1$ for all $x$. This can be done easily through \eqref{eqn:6} by choosing $g(z)=\max(0,z-1)$. \citet{petzka2017regularization} also suggested using a similar function (the square hinge) in order to only penalize gradient norms above 1. If we let $F(z)=z$ and $g(z)=\max(0,z-1)$, we have an IPM over all Lipschitz functions; thus, we effectively approximate $W_1$. This means that $W_1$ can be found through maximizing a geometric margin. Importantly, most successful GANs \citep{brock2018large,karras2019style,karras2017progressive} either enforce the 1-Lipschitz property using Spectral normalization \citep{miyato2018spectral} or use some form of gradient norm penalty \citep{WGAN-GP,mescheder2018training}. Since 1-Lipschitz is equivalent to enforcing a gradient norm constraint (as shown above), we have that most successful GANs effectively train a discriminator/critic to maximize a geometric margin. This suggests that the key contributor to stable and effective GAN training may not be having a 1-Lipschitz discriminator, but may be maximizing a geometric margin. \subsection{Why do maximum-margin classifiers make good GAN discriminators/critics?} \label{sec:4.2} To answer this question, we focus on a simple two-dimensional example where $x=(x_{(1)},x_{(2)})$. Let real data (class 1) be uniformly distributed on the line between $(1,-1)$ and $(1,1)$. 
Let fake data (class 2) be uniformly distributed on the line between $(-1,-1)$ and $(-1,1)$. This is represented by Figure~\ref{fig:fig1}. Clearly, the maximum-margin boundary is the line $x_{(1)}=0$ and any classifier should learn to ignore $x_{(2)}$. For a classifier of the form $f(x)=c + w_1 x_{(1)} + w_2 x_{(2)}$, the maximum-margin classifier is $f^{*}(x)=w_1 x_{(1)}$ for any choice of $w_1 > 0$. We can see this by looking at the expected geometric margin (note that $y\,x_{(1)}=1$ for every sample in this example): \begin{align*} \mathbb{E}_{(x,y)\sim \mathbb{D}}\left[ \gamma(x,y) \right] &= \mathbb{E}_{(x,y)\sim \mathbb{D}}\left[\frac{y w_1 x_{(1)}}{|w_1|} \right] \\ &= \frac{w_1}{|w_1|} \\ &= \begin{cases} 1 &\text{if $w_1 > 0$}\\ -1 &\text{if $w_1 < 0$}. \end{cases} \end{align*} \vspace*{-15pt} \begin{figure}[!ht] \centering \includegraphics[scale=0.59]{fig1.pdf} \caption{Two-dimensional GAN example with different choices of boundaries.} \label{fig:fig1} \end{figure} This means that the problem is overparameterized (there are infinitely many solutions). We will show that this is problematic in GANs. In GANs, the dynamics of the game depend in large part on $||\nabla_{x_2} f(x_2)||$, where the $x_2$'s are samples from the fake, or generated, distribution (not to be confused with $x_{(2)}$; see Section~\ref{sec:notation} for the definition). This is because the generator only learns through the discriminator/critic and it uses $\nabla_{x_2} f(x_2)$ in order to improve its objective function. Thus, for stable training, $||\nabla_{x_2} f(x_2)||$ should be neither too big nor too small. There are two ways of ensuring this property in this example: we can either i) fix the gradient norm to 1 or ii) fix $y w_1 x_{(1)} = 1$. Both solutions lead to $w_1=1$. The former is the approach taken by soft-margin SVMs and the latter is the approach taken by hard-margin SVMs. This means that, in order to get stable GAN training, maximizing a margin is not enough. We need to ensure that we obtain a solution with a stable non-zero gradient around fake samples. Thus, it is preferable to solve the penalized formulation from equation \eqref{eqn:7} and choose a large penalty term $\lambda$ in order to obtain a small-gradient solution. When the gradient norm is 1 everywhere, the only solution is a linear classifier, which leads to the gradient being fixed everywhere. In this case, the placement of the margin may not be particularly important for GAN stability since the gradient is the same everywhere (although we do still obtain an MMC). When we have a non-linear classifier and we impose $||\nabla_{x} f(x)|| \leq 1$ through $g(z)=\max(0,z-1)$, the gradient norm will fade toward zero as we move away from the boundary. Thus, in this case, obtaining a maximum-margin solution is important because it reduces the risk of vanishing gradients at fake samples. To see this, we can consider our simple example, but assume $f(x)=\text{sigmoid}(w_1 x_{(1)}+w_0)$ (see Figure~\ref{fig:fig2}). \begin{figure}[!ht] \centering \includegraphics[scale=0.59]{fig3.pdf} \caption{$\nabla f(x_{(1)})$ at different values of $x_{(1)}$ for the two-dimensional example assuming a sigmoid function.} \label{fig:fig2} \end{figure} We can enforce $||\nabla_{x} f(x)|| \leq 1$ by choosing $w_1 \leq 4$. We let $w_1 = 4$ because it leads to the best classifier. The maximum-margin boundary is at $x_{(1)}=0$ (which we get by taking $w_0=0$; blue curve in Figure~\ref{fig:fig2}); for this choice, we have that $f(x_1)=.98$ and $f(x_2)=.02$ for real and fake samples respectively.
Meanwhile, if we take a slightly worse margin with boundary at $x_{(1)} = \frac{1}{4}$ (equivalent to choosing $w_0=-1$; red curve in Figure~\ref{fig:fig2}), we have that $f(x_1)=.95$ and $f(x_2)=.01$ for real and fake samples respectively. Thus, both solutions almost perfectly classify the samples. However, the optimal margin has gradient $.07$, while the worse margin has gradient $.03$ at fake samples. Thus, the maximum-margin solution provides a stronger signal for the generator. Had we not imposed a gradient penalty constraint, we could have chosen $w_1 = 8$ (green curve in Figure~\ref{fig:fig2}) and we would have ended up with vanishing gradients at fake samples while still using a maximum-margin classifier. In summary, imposing $||\nabla_{x} f(x)|| \approx 1$, as done in WGAN-GP, may be helpful because it approximately fixes the gradient norm to 1 everywhere, which stabilizes the generator's training. However, it imposes a strong constraint (linear discriminator) which only leads to a lower bound on Wasserstein's distance. Meanwhile, imposing $||\nabla_{x} f(x)|| \leq 1$, as we suggested to properly estimate Wasserstein's distance, may be helpful because it reduces the risk of having no gradient at fake samples. \subsection{Are certain margins better than others?} \label{sec:4.3} It is well known that $L^p$-norms (with $p\ge 1$) are more sensitive to outliers as $p$ increases, which is why many robust methods minimize the $L^1$-norm \citep{bloomfield1983least}. Furthermore, minimizing the $L^1$-norm loss results in a median estimator \citep{bloomfield1983least}. This suggests that an $L^2$ gradient norm penalty ($p=2$) may not lead to the most robust classifier. We hypothesize that $L^\infty$ gradient norm penalties may improve robustness in comparison to $L^2$ gradient norm penalties since they correspond to maximizing the $L^1$-norm margin. In Section~\ref{sec:experiments} we provide experimental evidence in support of our hypothesis. \subsection{Margins in Relativistic GANs} \label{sec:4.4} Relativistic paired GANs (RpGANs) and Relativistic average GANs (RaGANs) \citep{jolicoeur2018relativistic} are GAN variants which tend to be more stable than their non-relativistic counterparts. Below, we explain how we can link both approaches to MMCs. \subsubsection{Relativistic average GANs} From the loss function of RaGAN, we can deduce its decision boundary. Contrary to typical classifiers, we define two boundaries, depending on the label. The two surfaces are defined as two sets of points $(x_0,y_0)$ such that: \begin{align*} f(x_0) &= \mathbb{E}_{x \sim \mathbb{Q}}[f(x)] \text{, when } y_0 = 1 \;(\text{real}) \\ f(x_0) &= \mathbb{E}_{x \sim \mathbb{P}}[f(x)] \text{, when } y_0 = -1 \;(\text{fake}) \end{align*} It can be shown that the relativistic average geometric margin is approximated as: \begin{align*} \gamma_p^{Ra-}(x,y) = &\frac{((y+1)/2)(f(x)-\mathbb{E}_{x \sim \mathbb{Q}}[f(x)])}{|| \nabla_x f(x) ||_q} + \\ &\frac{ ((y-1)/2)(f(x)-\mathbb{E}_{x \sim \mathbb{P}}[f(x)])}{|| \nabla_x f(x) ||_q} \\ = &\frac{\alpha_{Ra}(x,y)}{\beta(x)}. \end{align*} Maximizing the expected relativistic average margin can be done in the following way: \begin{align*} \max_f \mathbb{E}_{(x,y)\sim \mathbb{D}}\left[F(\alpha_{Ra}(x,y)) - \lambda g(||\nabla_x f(x)||_q)\right]. \end{align*} Of note, when $F(z)=z$ (identity function), the loss is equivalent to its non-relativistic counterpart (WGAN-GP).
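As an illustration, with the Hinge choice $F(z)=-\max(0,1-z)$ the critic part of this objective can be written as follows (a PyTorch-style sketch of the RaHingeGAN critic loss, stated as a quantity to minimize; the gradient-penalty term is the same as in the earlier sketches and is omitted).
\begin{verbatim}
import torch
import torch.nn.functional as F

def rahinge_critic_loss(critic, x_real, x_fake):
    c_real, c_fake = critic(x_real), critic(x_fake)
    rel_real = c_real - c_fake.mean()   # C(x_1) - E_Q[C(x_2)]
    rel_fake = c_fake - c_real.mean()   # C(x_2) - E_P[C(x_1)]
    # hinge losses on the relativistic average differences
    return (F.relu(1 - rel_real) + F.relu(1 + rel_fake)).mean()
\end{verbatim}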
Of all the RaGAN variants presented by \citet{jolicoeur2018relativistic}, RaHingeGAN with $F(z)=-\max(0,1-z)$ is the only one which maximizes the relativistic average geometric margin when using a gradient norm penalty. \subsubsection{Relativistic paired GANs} From its loss function (as described in section \ref{sec:GAN}), it is not clear what the boundary of RpGANs can be. However, through reverse engineering, it is possible to realize that the boundary is the same as the one from non-relativistic GANs, but using a different margin. We previously derived that the approximated margin (non-geometric) for any point is $\gamma_p(x) \approx \frac{|f(x)|}{|| \nabla_x f(x) ||_q}$. We define the geometric margin as the margin after replacing $|f(x)|$ by $yf(x)$ so that it depends on both $x$ and $y$. However, there is an alternative way to transform the margin in order to achieve a classifier. We call it the \em relativistic paired margin\em: \begin{align*} \gamma^{*}_p(x_1,x_2) &= \gamma_p(x_1) - \gamma_p(x_2) \\ & = \frac{f(x_1)}{|| \nabla_{x_1} f(x_1) ||_q} - \frac{f(x_2)}{|| \nabla_{x_2} f(x_2) ||_q}. \end{align*} where $x_1$ is a sample from $\mathbb{P}$ and $x_2$ is a sample from $\mathbb{Q}$. This alternate margin does not depend on the label $y$, but only ask that for any pair of class 1 (real) and class 2 (fake) samples, we maximize the relativistic paired margin. This margin is hard to work with, but if we enforce $|| \nabla_{x_1} f(x_1) ||_q \approx || \nabla_{x_2} f(x_2) ||_q$, for all $x_1 \sim \mathbb{P}$,$x_2 \sim \mathbb{Q}$, we have that: \begin{align*} \gamma^{*}_p(x_1,x_2) \approx \frac{f(x_1) - f(x_2)}{|| \nabla_{x} f(x) ||_q}, \end{align*} where $x$ is any sample (from class 1 or 2). Thus, we can train an MMC to maximize the relativistic paired margin in the following way: \begin{align*} \max_f \underset{ \substack{x_1 \sim \mathbb{P} \\ x_2 \sim \mathbb{Q}}}{\mathbb{E}\vphantom{p}}&\left[F(f(x_1)-f(x_2))\right] - \\ \lambda& \mathbb{E}_{(x,y)\sim \mathbb{D}}\left[g(||\nabla_x f(x)||_q)\right], \end{align*} where $g$ must constrain $||\nabla_x f(x)||_q$ to a constant. This means that maximizing $F(f(x_1)-f(x_2))$ without gradient penalty can be problematic if we have different gradient norms at samples from class 1 (real) and 2 (fake). This provides an explanation as to why RpGANs do not perform very well unless using a gradient penalty \citep{jolicoeur2018relativistic}. \section{Experiments} \label{sec:5} \label{sec:experiments} Following our analysis and discussion in the previous sections, we hypothesized that $L^1$ margins, corresponding to a $L^\infty$ gradient norm penalty, would perform better than $L^2$ margins ($L^2$ gradient norm penalty). As far as we know, researchers have not yet tried using a $L^\infty$ gradient norm penalty in GANs. In addition, we showed that it would be more sensible to penalize violations of $||\nabla f(x)||_q \leq 1$ rather than $||\nabla f(x)||_q \approx 1$. To test these hypotheses, we ran experiments on CIFAR-10 \citep{krizhevsky2009learning} using HingeGAN ($F(z)=-\max(0,1-z)$) and WGAN ($F(z)=z$) loss functions with $L^1$, $L^2$, $L^\infty$ gradient norm penalties. We enforce either $||\nabla f(x)||_q \approx 1$ using Least Squares (LS) $(g(z)=(z-1)^2)$ or $||\nabla f(x)||_q \leq 1$ using Hinge $(g(z)=\max(0,z-1))$. We used the ADAM optimizer \citep{Adam} with $\beta=(.5,.99)$ and a DCGAN architecture \citep{DCGAN}. 
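For completeness, the $L^\infty$ gradient norm penalty with the Hinge constraint $g(z)=\max(0,z-1)$, as used in these experiments, can be sketched as follows (PyTorch-style; only the penalty term is shown, and the points $x$ at which it is evaluated, e.g.\ interpolates between real and fake samples, are chosen as described earlier).
\begin{verbatim}
import torch

def linf_hinge_penalty(critic, x):
    # max(0, ||grad_x C(x)||_inf - 1), averaged over the batch
    x = x.detach().requires_grad_(True)
    (grad,) = torch.autograd.grad(critic(x).sum(), x, create_graph=True)
    grad_norm = grad.flatten(1).abs().max(dim=1).values  # L-infinity norm per sample
    return torch.relu(grad_norm - 1).mean()
\end{verbatim}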
As per convention, we report the Fréchet Inception Distance (FID) \citep{heusel2017gans}; lower values correspond to better generated outputs (higher quality and diversity). We ran all experiments using seed 1 and with gradient penalty $\lambda=20$. Details on the architectures are in the Appendix. Code is available on \em https://github.com/AlexiaJM/MaximumMarginGANs\em. The results are shown in Table~\ref{tab:1}. \begin{table}[!ht] \caption{Fréchet Inception Distance (FID) after 100k generator iterations on CIFAR-10.} \label{tab:1} \centering \begin{tabular}{ccc} \toprule $g(||\nabla_x f(x))||_q)$ & WGAN & HingeGAN \\ \cmidrule(){1-3} $(||\nabla_x f(x))||_1-1)^2$ & 99.7 & 88.9 \\ $\max(0,||\nabla_x f(x))||_1-1)$ & 65.6 & 77.3 \\ \cmidrule(){1-3} $(||\nabla_x f(x))||_2-1)^2$ & 37.6 & 32.8 \\ $\max(0,||\nabla_x f(x))||_2-1)$ & 37.8 & 33.9 \\ \cmidrule(){1-3} $(||\nabla_x f(x))||_{\infty}-1)^2$ & 33.4 & 33.6 \\ $\max(0,||\nabla_x f(x))||_{\infty}-1)$ & 36 & \fontseries{b}\selectfont 27.1 \\ \bottomrule \end{tabular} \end{table} Due to space constraint, we only show the previously stated experiments in Table~\ref{tab:1}. However, we also ran additional experiments on CIFAR-10 with 1) Relativistic paired and average HingeGAN, 2) $\beta=(0,.90)$, 3) the standard CNN architecture from \citet{miyato2018spectral}. Furthermore, we ran experiments on CAT \citep{cat} with 1) Standard CNN (in 32x32), and 2) DCGAN (in 64x64). These experiments correspond to Table~\ref{tab:2},~\ref{tab:3},~\ref{tab:4},~\ref{tab:5}, and~\ref{tab:6} from the appendix. In all sets of experiments, we generally observed that we obtain smaller FIDs by using: i) a larger $q$ (as theorized), ii) the Hinge penalty to enforce an inequality gradient norm constraint (in both WGAN and HingeGAN), and iii) HingeGAN instead of WGAN. \section{Conclusion} This work provides a framework in which to derive MMCs that results in very effective GAN loss functions. In the future, this could be used to derive new gradient norm penalties which further improve the performance of GANs. Rather than trying to devise better ways of enforcing 1-Lipschitz, researchers may instead want to focus on constructing better MMCs (possibly by devising better margins). This research shows a strong link between GANs with gradient penalties, Wasserstein's distance, and SVMs. Maximizing the minimum $L^2$-norm geometric margin, as done in SVMs, has been shown to lower bounds on the VC dimension which implies lower generalization error \citep{vapnik1998statistical,mount2015sure}. This paper may help researchers bridge the gap needed to derive PAC bounds on Wasserstein's distance and GANs/IPMs with gradient penalty. Furthermore, it may be of interest to theoreticians whether certain margins lead to lower bounds on the VC dimension. \section{Acknowledgements} This work was supported by Borealis AI through the Borealis AI Global Fellowship Award. We would also like to thank Compute Canada and Calcul Québec for the GPUs which were used in this work. This work was also partially supported by the FRQNT new researcher program (2019-NC-257943), the NSERC Discovery grant (RGPIN-2019-06512), a startup grant by IVADO, a grant by Microsoft Research and a Canada CIFAR AI chair. \bibliographystyle{unsrtnat}
{ "timestamp": "2019-10-16T02:18:58", "yymm": "1910", "arxiv_id": "1910.06922", "language": "en", "url": "https://arxiv.org/abs/1910.06922", "abstract": "A popular heuristic for improved performance in Generative adversarial networks (GANs) is to use some form of gradient penalty on the discriminator. This gradient penalty was originally motivated by a Wasserstein distance formulation. However, the use of gradient penalty in other GAN formulations is not well motivated. We present a unifying framework of expected margin maximization and show that a wide range of gradient-penalized GANs (e.g., Wasserstein, Standard, Least-Squares, and Hinge GANs) can be derived from this framework. Our results imply that employing gradient penalties induces a large-margin classifier (thus, a large-margin discriminator in GANs). We describe how expected margin maximization helps reduce vanishing gradients at fake (generated) samples, a known problem in GANs. From this framework, we derive a new $L^\\infty$ gradient norm penalty with Hinge loss which generally produces equally good (or better) generated output in GANs than $L^2$-norm penalties (based on the Fréchet Inception Distance).", "subjects": "Machine Learning (cs.LG); Machine Learning (stat.ML)", "title": "Gradient penalty from a maximum margin perspective", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9740426382811477, "lm_q2_score": 0.727975460709318, "lm_q1q2_score": 0.709079138353238 }
https://arxiv.org/abs/1005.2988
$L^p$ spectrum and heat dynamics of locally symmetric spaces of higher rank
The aim of this paper is to study the spectrum of the $L^p$ Laplacian and the dynamics of the $L^p$ heat semigroup on non-compact locally symmetric spaces of higher rank. Our work here generalizes previously obtained results in the setting of locally symmetric spaces of rank one to higher rank spaces. Similarly as in the rank one case, it turns out that the $L^p$ heat semigroup on $M$ has a certain chaotic behavior if $p\in(1,2)$ whereas for $p\geq 2$ such a chaotic behavior never occurs.
\section{Introduction} The aim of this paper is to study the spectrum of the $L^p$ Laplacian $\DMp$ and the dynamics of the $L^p$ heat semigroup $e^{-t\DMp}: L^p(M)\to L^p(M)$ on non-compact locally symmetric spaces $M=\Gamma\backslash X$ of higher rank with finite volume. More precisely, $X$ is a symmetric space of non-comact type and $\Gamma$ a non-uniform arithmetic subgroup of isometries. Our work here generalizes previously obtained results in the setting of locally symmetric spaces of rank one (see \cite{Ji:yq}) to higher rank spaces.\\ Similarly as in the rank one case, it turns out that the $L^p$ heat semigroup has a certain chaotic behavior if $p\in(1,2)$ whereas for $p\geq 2$ such a chaotic behavior never occurs (see Theorem \ref{thm dynamics}). Let us briefly recall related previous work before we give a more detailed description of the contents of this paper.\\ Davies, Simon, and Taylor completely determined in \cite{MR937635} the $L^p$ spectrum of the Laplacian on the $n$-dimensional hyperbolic space $X=\mathbb{H}^n$ and on non-compact quotients $M=\Gamma\backslash X$ of $X$ by a geometrically finite group $\Gamma$ of isometries such that $M$ has no cusps or has finite volume.\\ Taylor generalized some of these results to symmetric spaces $X$ of non-compact type \cite{MR1016445}. More precisely, he proved the following result (for a definition of $\rho$ we refer to Section \ref{symmetric spaces}) . \begin{theorem}[cf. \cite{MR1016445}]\label{taylor} Let $X$ denote a symmetric space of non-compact type. Then for any $p\in [1,\infty)$ we have $$\sigma(\Delta_{X,p}) = {\cal P}_{X,p},$$ where $${\cal P}_{X,p}=\left\{ ||\rho||^2 - z^2 : z\in\mathbb{C}, |\re z|\leq ||\rho||\cdot |\frac{2}{p}-1|\right\}.$$ Furthermore, if $p>2$ every point in the interior of the parabolic region ${\cal P}_{X,p}$ is an eigenvalue for $\Delta_{X,p}$ and eigenfunctions corresponding to these eigenvalues are given by spherical functions. \end{theorem} In the case $p\leq 2$ the Helgason-Fourier-Transform ${\cal F}$ turns the Laplacian into a multiplication operator, i.e. $({\cal F}\Delta f)(\lambda) = (||\rho||^2 + \langle \lambda,\lambda\rangle){\cal F}f$. Hence, together with the $L^p$ inversion formula by Stanton and Tomas \cite{MR518528}, it follows that there are no eigenvalues in the case $p\leq 2$. \\ Taylor, using similar ideas as in \cite{MR937635}, was also able to give an upper bound for the $L^p$ spectrum on quotients $M=\Gamma\backslash X$: \begin{proposition}[Proposition 3.3 in \cite{MR1016445}]\label{taylor upper} Let $M$ denote a non-compact locally symmetric space whose universal cover is a symmetric space of non-compact type and assume $$ \sigma(\Delta_{M,2}) \subset \{\lambda_0,\ldots,\lambda_r\}\cup [b,\infty), $$ where $\lambda_j, j=0,\ldots, r$, are eigenvalues of finite multiplicity. Then, if $\vol(M)<\infty$ or if the injectivity radius of $M$ is bounded away from $0$, we have for $p\in (1,\infty)$ $$ \sigma(\Delta_{M,p}) \subset \{\lambda_0,\ldots,\lambda_r\}\cup {\cal P}_{M,p}', $$ where $$ {\cal P}_{M,p}' = \left\{ b - z^2 : z\in\mathbb{C}, |\re z|\leq ||\rho||\cdot |\frac{2}{p}-1|\right\}. $$ \end{proposition} Some of these results have been generalized in \cite{Ji:2007fk,Ji:yq,MR2342629} to rank one locally symmetric spaces. 
For results concerning the $L^p$ heat dynamics on symmetric spaces of non-compact type we refer to \cite{Ji:nr}.\\ In this paper, after recalling some basic definitions and facts concerning $L^p$ heat semigroups and locally symmetric spaces in the next section, we study Eisenstein series $E(\bP|\varphi,\Lambda)$ on $M=\Gamma\backslash X$ which are generalized (smooth) eigenfunctions for the $L^2$ Laplacian $\Delta_{M,2}$. In order to show that if $p<2$ these Eisenstein series are contained in $L^p(M)$ for certain choices of $\Lambda$ (cf. Theorem \ref{thm general parabolic} and Corollary \ref{eigenvalues minimal}), and hence are eigenfunctions of $\DMp$, we first derive an upper bound for Eisenstein series (Proposition \ref{upper bound}, Corollary \ref{corollary eisenstein upper bound minimal}).\\ From these results it follows that in the case $p\in(1,2)$ there is an open subset of $\mathbb{C}$ consisting of eigenvalues of the $L^p$ Laplacian $\DMp$ whereas in the case $p\geq 2$ the point spectrum of the $L^p$ Laplacian is discrete (Theorem \ref{theorem eigenvalues}). Both of these facts are a main ingredient in the proof of Theorem \ref{thm dynamics} which is devoted to the dynamics of the $L^p$ heat semigroup $$ e^{-t\DMp}: L^p(M)\to L^p(M). $$ Note that Theorem \ref{theorem eigenvalues} seems to be the first result in which it is shown that some subset of $\mathbb{C}$ with non-empty interior (a parabolic region) is contained in the $L^p$ spectrum of the Laplacian on a locally symmetric space of higher $\mathbb{Q}$-rank. Nevertheless, the exact determination of the $L^p$ spectrum of a non-compact locally symmetric space with finite volume remains open in the higher rank case (cf. Conjecture \ref{conjecture}). Even for rank one spaces the $L^p$ spectrum is completely known only in certain situations, cf. \cite{MR937635,Ji:yq}. However, as the $L^p$ spectrum of a Riemannian product $M=M_1\times M_2$ equals the set theoretic sum of the $L^p$ spectra of its factors, i.e. $\sigma(\DMp) = \sigma(\Delta_{M_1,p}) + \sigma(\Delta_{M_2,p})$, cf. \cite{Weber:2008ve}, we can restric ourselves to irreducible manifolds. \section{Preliminaries} \subsection{The heat semigroup on $\boldsymbol{L^p}$ spaces}\label{heat semigroup} In this section we denote by $M$ an arbitrary complete Riemannian manifold and by $\Delta=-\dive(\grad)$ the Laplace-Beltrami operator acting on differentiable functions of $M$. If we denote by $\DM$ the Laplacian on $L^2(M)$ with domain $C_c^{\infty}(M)$ (the set of differentiable functions with compact support), this is an essentially self-adjoint operator and hence, its closure $\Delta_{M,2}$ is a self-adjoint operator on the Hilbert space $L^2(M)$. Since $\Delta_{M,2}$ is positive, $-\Delta_{M,2}$ generates a bounded analytic semigroup $e^{-t\Delta_{M,2}}$ on $L^2(M)$. The semigroup $e^{-t\Delta_{M,2}}$ is a {\em submarkovian semigroup} (i.e., $e^{-t\Delta_{M,2}}$ is positive and a contraction on $L^{\infty}(M)$ for any $t\geq 0$) and we therefore have the following: \begin{itemize} \item[(1)] The semigroup $e^{-t\Delta_{M,2}}$ leaves the set $L^1(M)\cap L^{\infty}(M)\subset L^2(M)$ invariant and thus, $e^{-t\Delta_{M,2}}|_{L^1\cap L^{\infty}}$ may be extended to a positive contraction semigroup $T_p(t)$ on $L^p(M)$ for any $p\in [1,\infty]$. These semigroups are strongly continuous if $p\in [1,\infty)$ and {\em consistent} in the sense that $T_p(t)|_{L^p\cap L^q} = T_q(t)|_{L^p\cap L^q}$. 
\item[(2)] Furthermore, if $p\in (1,\infty)$, the semigroup $T_p(t)$ is a bounded analytic semigroup with angle of analyticity $\theta_p \geq \frac{\pi}{2} - \arctan\frac{|p-2|}{2\sqrt{p-1}}$. \end{itemize} For a proof of (1) we refer to \cite[Theorem 1.4.1]{MR1103113}. For (2) see \cite{MR1224619}. In general, the semigroup $T_1(t)$ needs not be analytic. However, if $M$ has bounded geometry $T_1(t)$ is analytic in {\em some} sector (cf. \cite{MR924464,MR1023321}). In the following, we denote by $-\DMp$ the generator of $T_p(t)$ and by $\sigma(\DMp)$ the spectrum of $\DMp$. Furthermore, we write $e^{-t\DMp}$ for the semigroup $T_p(t)$. Because of (2) from above, the $L^p$ spectrum $\sigma(\DMp)$ of $M$ is contained in the sector \begin{multline*} \left\{ z\in \mathbb{C}\setminus\{0\} : |\arg(z)| \leq \frac{\pi}{2}-\theta_p\right\}\cup\{0\} \subset\\ \left\{ z\in \mathbb{C}\setminus\{0\} : |\arg(z)| \leq \arctan\frac{|p-2|}{2\sqrt{p-1}} \right\}\cup\{0\} = \Sigma_p. \end{multline*} If we identify as usual the dual space of $L^p(M), 1\leq p<\infty$, with $L^{p'}(M), \frac{1}{p}+\frac{1}{p'}=1$, the dual operator of $\DMp$ equals $\Delta_{M,p'}$ and therefore we always have $\sigma(\DMp) = \sigma(\Delta_{M,p'})$. It should also be mentioned that the family $\DMp, p\geq 1,$ is consistent, which means that the restrictions of $\DMp$ and $\Delta_{M,q}$ to the intersection of their domains coincide: \begin{lemma}\label{lemma consistent} If $p,q \in [1,\infty)$, the operators $\DMp$ and $\Delta_{M,q}$ are consistent, i.e. $$ \DMp f = \Delta_{M,q} f\qquad\mbox{for any~} f\in \dom(\DMp)\cap\dom(\Delta_{M,q}). $$ \end{lemma} For a proof see e.g. \cite[Lemma 2.1]{Ji:yq}.\\ Since it is not obvious that a differentiable $L^p$ function $f$ that satisfies the eigenequation $\Delta f = \mu f$ is contained in the domain of $\DMp$, we state this result in a lemma: \begin{lemma}\label{lemma Lp eigenfunctions} Let $p\in (1,\infty)$ and $f: M\to \mathbb{R}$ denote a differentiable function such that $f\in L^p(M)$ and $\Delta f = \mu f$ for some $\mu\in\mathbb{R}$. Then $f\in \dom(\DMp)$ and $\DMp f = \mu f$. \end{lemma} For a proof we refer to \cite[Corollary 2.3]{Ji:yq}.\\ An essential ingredient in this proof is a uniqueness result concerning $L^p$ solutions of the heat equation by Strichartz, cf. \cite[Theorem 3.9]{MR705991}. As in this theorem $p$ is required to be contained in $(1,\infty)$, we have excluded the case $p=1$ here as well. \subsubsection{Manifolds with finite volume} If $M$ is Riemannian manifold with finite volume we have by H\"older's inequality $L^q(M) \hookrightarrow L^p(M)$ for $1\leq p\leq q\leq \infty$. Hence, by consistency, the semigroup $e^{-t\Delta_{M,p}}$ can be regarded as extension of the semigroup $e^{-t\Delta_{M,q}}$, $p\leq q$. It also follows an analogous result for the $L^p$ Laplacians, which is stronger than Lemma \ref{lemma consistent} in the case of finite volume: \begin{lemma}\label{domains} Let $M$ denote a complete Riemannian manifold with finite volume. If $1\leq p\leq q<\infty$, we have $$ \dom(\Delta_{M,q})\subset\dom(\DMp) $$ and for $f\in\dom(\Delta_{M,q})$ it follows $\Delta_{M,q}f=\DMp f$, i.e. $\DMp$ is an extension of $\Delta_{M,q}$. \end{lemma} \begin{proof} Let $f\in\dom(\Delta_{M,q})$. 
Because of H\"older's inequality and because of consistency of the $L^p$ heat semigroups, we have \begin{eqnarray*} || \frac{1}{t}(e^{-\DMp}f - f) - \Delta_{M,q}f ||_{L^p} & \leq & C || \frac{1}{t}(e^{-\DMp}f - f) - \Delta_{M,q}f ||_{L^q} \\ & = & C || \frac{1}{t}(e^{-\Delta_{M,q}}f - f) - \Delta_{M,q}f ||_{L^q}\\ &\to& 0 \qquad (t\to 0), \end{eqnarray*} i.e. $f\in \dom(\DMp)$ with $\Delta_{M,q}f=\DMp f$. \end{proof} \begin{proposition} Let $M$ denote a complete Riemannian manifold with finite volume. If $2\leq p\leq q<\infty$, we have $$ \sigma(\DMp) \subset \sigma(\Delta_{M,q}). $$ In particular, for $p\in (1,\infty)$, it follows that $$ \sigma(\Delta_{M,2}) \subset \sigma(\DMp). $$ \end{proposition} \begin{proof} For a proof of the first statement we refer to \cite[Proposition 3.3]{MR2342629}. The second statement follows from the first together with the fact $\sigma(\DMp)=\sigma(\Delta_{M,p'}), \frac{1}{p} + \frac{1}{p'}=1$. \end{proof} Note that in the case of a compact manifold $M$, we always have $\sigma(\DMp) = \sigma(\Delta_{M,2})$ whereas for non-compact manifolds this needs not be true \cite{MR1250269}. Examples for manifolds whose spectrum depends non-trivially on $p$ are non-compact arithmetic locally symmetric spaces, cf. Section \ref{Lp spectrum}. It is actually shown there that the $L^p$ spectrum (if $p\neq 2$) contains a subset of $\mathbb{C}$ with non-empty interior (see also \cite{MR937635,Ji:nr,Ji:yq}). On the one hand, this proposition states that the $L^p$ spectrum is bigger than the $L^2$ spectrum. On the other hand, how much bigger it is, is often difficult to say. In Conjecture \ref{conjecture} we give a precise picture of the $L^p$ spectrum of non-compact locally symmetric spaces $M=\Gamma\backslash X$ with arithmetic $\Gamma$. \subsection{Locally symmetric spaces}\label{symmetric spaces} Let $X$ denote always a symmetric space of non-compact type. Then $G= \isom^0(X)$ is a non-compact, semi-simple Lie group with trivial center that acts transitively on $X$ and $X\cong G/K$ for any maximal compact subgroup $K$ of $G$. We denote the respective Lie algebras by $\mathfrak{g}$ and $\mathfrak{k}$. Given a corresponding Cartan involution $\theta: \mathfrak{g}\to\mathfrak{g}$ we obtain the Cartan decomposition $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$ of $\mathfrak{g}$ into the eigenspaces of $\theta$. The subspace $\mathfrak{p}$ of $\mathfrak{g}$ can be identified with the tangent space $T_{eK}X$ and we assume that the Riemannian metric $\langle\cdot,\cdot\rangle$ of $X$ in $\mathfrak{p}\cong T_{eK}X$ coincides with the restriction of the Killing form $B(Y,Z) = \tr(\ad Y\circ \ad Z ), Y, Z\in \mathfrak{g},$ to $\mathfrak{p}$. For any maximal abelian subspace $\mathfrak{a}\subset \mathfrak{p}$ we refer to $\Phi(\mathfrak{g},\mathfrak{a})$ as the set of restricted roots for the pair $(\mathfrak{g},\mathfrak{a})$, i.e. $\Phi(\mathfrak{g},\mathfrak{a})$ contains all $\alpha\in \mathfrak{a}^*\setminus\{0\}$ such that $$ \mathfrak{h}_{\alpha} = \{ Y\in \mathfrak{g} : \ad(H)(Y) = \alpha(H)Y \mbox{~for all~} H\in\mathfrak{a} \}\neq \{0\}. 
$$ These subspaces $ \mathfrak{h}_{\alpha}\neq \{0\}$ are called root spaces.\\ Once a positive Weyl chamber $\mathfrak{a}^+$ in $\mathfrak{a}$ is chosen, we denote by $\Phi(\mathfrak{g},\mathfrak{a}) ^+$ the subset of positive roots and by $$ \rho = \frac{1}{2}\sum_{\alpha\in\Phi^+(\mathfrak{g},\mathfrak{a}) } (\dim \mathfrak{h}_{\alpha})\alpha $$ half the sum of the positive roots (counted according to their multiplicity).\\ In what follows, we denote by $\Gamma$ a non-uniform irreducible lattice in $G$ and hence, $M=\Gamma\backslash X=\Gamma\backslash G/K$ is a non-compact locally symmetric space with finite volume. From Margulis' famous arithmeticity result it follows that such a $\Gamma$ is always arithmetic if $\rank(X)\geq 2$ (\cite{MR1090825,MR776417}). In the rank one case however, it is known that non-arithmetic lattices exist (\cite{MR932135,MR1090825}). Since we treated the rank one case already in \cite{Ji:yq}, we will restrict ourselves here to arithmetic lattices. \\ We will now recall some basic facts about the geometry and $L^2$ spectral theory of arithmetic locally symmetric spaces in order to fix notation. More details can be found e.g. in \cite{MR0232893,MR1839581,MR1025165}. \subsubsection{Langlands decomposition of rational parabolic subgroups} Since $G= \isom^0(X)$ is a non-compact, semi-simple Lie group with trivial center, we can find a connected, semi-simple algebraic group $\bG\subset \glnc$ defined over $\mathbb{Q}$ such that the groups $G$ and $\bG(\mathbb{R})^0$ are isomorphic as Lie groups (cf. \cite[Proposition 1.14.6]{MR1441541}). \\ A closed subgroup $\bP\subset \bG$ defined over $\mathbb{Q}$ is called {\em rational parabolic subgroup} if $\bP$ contains a maximal, connected solvable subgroup of $\bG$.\\ For any rational parabolic subgroup $\bP$ its real locus $P=\bP(\mathbb{R})$ admits a so-called Langlands decomposition \begin{equation} P = N_{\bP}A_{\bP}M_{\bP}. \end{equation} Here $N_{\bP}=\bN_{\bP}(\mathbb{R})$ denotes the real points of the unipotent radical $\bN_{\bP}$ of $\bP$, $A_{\bP} = \bS_{\bP}(\mathbb{R})^0$, where $\bS_{\bP}$ denotes the maximal $\mathbb{Q}$-split torus in the center of the Levi quotient $\bL_{\bP} = \bP/\bN_{\bP}$ and hence, $A_{\bP}$ is abelian, and $M_{\bP}$ are the real points of a reductive algebraic group $\bM_{\bP}$ defined over $\mathbb{Q}$. More precisely, this means that the map \begin{equation} P\to N_{\bP}\times A_{\bP}\times M_{\bP},\quad g\mapsto \left( n(g), a(g), m(g)\right) \end{equation} is a real analytic diffeomorphism. Note that such a Langlands decomposition depends on a choice of a maximal compact subgroup $K$ in $G$ (or equivalently on a base point $x_0\in X$) and $\bM_{\bP}$ is not for all such choices defined over $\mathbb{Q}$. However, it is known that there exists always a maximal compact subgroup $K$ such that the algebraic group $\bM_{\bP}$ is defined over $\mathbb{Q}$. For more details we refer the reader to the discussion in \cite{MR1906482}. If we denote by $X_{\bP}$ the {\em boundary symmetric space} $$ X_{\bP} := M_{\bP}/ K\cap M_{\bP} $$ we obtain the {\em rational horocyclic decomposition} of $X$: $$ X\cong N_{\bP}\times A_{\bP}\times X_{\bP}, $$ since the subgroup $P$ acts transitively on the symmetric space $X=G/K$. 
More precisely, if we denote by $\tau: M_{\bP}\to X_{\bP}$ the canonical projection, we have an analytic diffeomorphism \begin{equation}\label{rational horocyclic decomposition} \mu: N_{\bP}\times A_{\bP}\times X_{\bP} \to X,\,\, (n,a,\tau(m)) \mapsto nam\cdot x_0, \end{equation} where $x_0\in X$ denotes a certain base point.\\ Note that the boundary symmetric space $X_{\bP}$ is a Riemannian product of a symmetric space of non-compact type by possibly a Euclidean space. \subsubsection{$\mathbb{Q}$-roots and reduction theory}\label{Q-Roots and Reduction Theory} Let us fix some proper rational parabolic subgroup $\bP$ of $\bG$. We denote in the following by $\mathfrak{g}, \mathfrak{a}_{\bP}$, and $\mathfrak{n}_{\bP}$ the Lie algebras of the real Lie groups $G, A_{\bP}$, and $N_{\bP}$. Associated with the pair $(\mathfrak{g}, \mathfrak{a}_{\bP})$ there is a system $\Phi(\mathfrak{g}, \mathfrak{a}_{\bP})$ of so-called {\em $\mathbb{Q}$-roots} and for each $\alpha\in \Phi(\mathfrak{g}, \mathfrak{a}_{\bP})$ we have a {\em root space} $$ \mathfrak{g}_{\alpha} = \{ Z\in \mathfrak{g} : \ad(H)(Y) = \alpha(H)(Y) \mbox{~for all~} H\in \mathfrak{a}_{\bP} \}. $$ These root spaces yield a decomposition $$ \mathfrak{g} = \mathfrak{g}_0 \bigoplus_{\alpha\in \Phi(\mathfrak{g}, \mathfrak{a}_{\bP})}\mathfrak{g}_{\alpha}, $$ where $\mathfrak{g}_0$ is the Lie algebra of $Z(\bS_{\bP}(\mathbb{R}))$, the center of $\bS_{\bP}(\mathbb{R})$. Furthermore, the rational parabolic subgroup $\bP$ defines an ordering of $\Phi(\mathfrak{g}, \mathfrak{a}_{\bP})$ such that $$ \mathfrak{n}_{\bP} = \bigoplus_{\alpha\in \Phi^+(\mathfrak{g}, \mathfrak{a}_{\bP})} \mathfrak{g}_{\alpha}, $$ and the root spaces $\mathfrak{g}_{\alpha}, \mathfrak{g}_{\beta}$ of distinct positive roots $\alpha, \beta\in \Phi^+(\mathfrak{g}, \mathfrak{a}_{\bP})$ are orthogonal with respect to the Killing form: $$ B(\mathfrak{g}_{\alpha}, \mathfrak{g}_{\beta}) = \{0\}. $$ We also define $$ \rho_{\bP} = \sum_{\alpha\in\Phi^{+}(\mathfrak{g}, \mathfrak{a}_{\bP})}(\dim\mathfrak{g}_{\alpha})\alpha, $$ and denote by $\Phi^{++}(\mathfrak{g}, \mathfrak{a}_{\bP})$ the set of simple positive roots, i.e. the set of all $\alpha\in \Phi^{+}(\mathfrak{g}, \mathfrak{a}_{\bP})$ such that $\frac{1}{2}\alpha$ is not a root.\\ Let us now discuss reduction theories which describe the structure of fundamental sets for $\Gamma$ in terms of Siegel sets associated with rational parabolic subgroups.\\ If we define for $t\in \mathfrak{a}_{\bP}$ $$ A_{\bP,t} = \{ e^{H} \in A_{\bP} : \alpha(H) > \alpha(t) \mbox{~for all~} \alpha\in \Phi^{++}(\mathfrak{g}, \mathfrak{a}_{\bP}) \}, $$ a {\em Siegel set} (associated with $\bP$) is a subset of $X = N_{\bP}\times A_{\bP}\times X_{\bP}$ of the form $U\times A_{\bP,t}\times V$ with bounded $U\subset N_{\bP}$ and $V\subset X_{\bP}$. As it turns out, Siegel sets associated with minimal rational parabolic subgroups are the building blocks of a fundamental set for $\Gamma$ (for a proof see e.g. \cite{MR0244260}): \begin{proposition}\label{non-precise} Let $\bP_1,\ldots, \bP_k$ denote representatives of (the finitely many) $\Gamma$ conjugacy classes of minimal rational parabolic subgroups. Then there are associated Siegel sets ${\cal S}_1,\ldots, {\cal S}_k$ such that $F = \bigcup_{j=1}^k {\cal S}_j$ covers a fundamental domain for $\Gamma$ and for any $g\in \bG(\mathbb{Q})$ the set $\{\gamma : gF \cap \gamma F\neq\emptyset\}$ is finite, i.e. $F$ is a fundamental set. 
\end{proposition} If we take into account all rational parabolic subgroups of $\bG$, a refined (precise) reduction theory yields even a fundamental domain for $\Gamma$. As Proposition \ref{non-precise} suffices for our purposes, we only refer to e.g. \cite[Proposition III.2.21]{MR2189882} for further details. \subsubsection{$\boldsymbol{L^2}$ spectral theory and Eisenstein series}\label{L2 spectral theory} One knows that the $L^2$ spectrum $\sigma(\Delta_{M,2})$ of the Laplace-Beltrami operator $\Delta_{M,2}$ on a non-compact arithmetic locally symmetric space $M$ is the union of a point spectrum and an absolutely continuous spectrum. The point spectrum consists of a (possibly infinite) sequence of eigenvalues $$ 0=\lambda_0 < \lambda_1\leq \lambda_2\leq \dots $$ with finite multiplicities such that below any finite number there are only finitely many eigenvalues. The absolutely continuous spectrum equals $[b,\infty)$ for $b = \min_{\bP} ||\rho_{\bP}||^2$ where the minimum is taken over all proper rational parabolic subgroups $\bP$. In what follows, we denote by $L^2_{dis}(M)$ the subspace spanned by all eigenfunctions of $\Delta_{M,2}$ and by $L^2_{con}(M)$ the orthogonal complement of $L^2_{dis}(M)$ in $L^2(M)$. Generalized eigenfunctions for the absolutely continuous part $\sigma_{ac}(\Delta_{M,2})$ of the $L^2$ spectrum are given by Eisenstein series. Therefore, we recall several basic facts about these important functions. Our main reference here is \cite{MR0232893}. \begin{definition} Let $f$ be a measurable, locally integrable function on $\Gamma\backslash X$. The {\em constant term} $f_{\bP}$ of $f$ along some rational parabolic subgroup $\bP$ of $\bG$ is defined as $$ f_{\bP}(x) = \int_{(\Gamma_{\bP}\cap N_{\bP})\backslash N_{\bP}} f(nx) dn, $$ where $\Gamma_{\bP} = \Gamma\cap P$ and the measure $dn$ is normalized such that the volume of $(\Gamma_{\bP}\cap N_{\bP})\backslash N_{\bP}$ equals one.\\ A function $f$ on $\Gamma\backslash X$ with the property $f_{\bP}=0$ a.e. for all rational parabolic subgroups $\bP\neq \bG$ is called {\em cuspidal} and the subspace of cuspidal functions in $L^2(\Gamma\backslash X)$ is denoted by $L^2_{cus}(\Gamma\backslash X)$. \end{definition} It is known that $$ L^2_{cus}(M) \subset L^2_{dis}(M) $$ and this inclusion is in general strict as the non-zero constant functions are not contained in $L^2_{cus}(M)$ if $M$ is non-compact. Let $\bP$ be a rational parabolic subgroup of $\bG$ and $\Gamma_{M_{\bP}}$ the image of $\Gamma_{\bP}=\Gamma\cap P$ under the projection $N_{\bP}A_{\bP}M_{\bP} \to M_{\bP}$. Then $\Gamma_{M_{\bP}}$ acts discretely on the boundary symmetric space $X_{\bP}$ and the respective quotient $\Gamma_{M_{\bP}}\backslash X_{\bP}$, called {\em boundary locally symmetric space}, has finite volume. Furthermore, we denote by $\mathfrak{a}_{\bP}^*$ the dual of $\mathfrak{a}_{\bP}$ and put $$ \mathfrak{a}_{\bP}^{*+} = \{ \lambda \in \mathfrak{a}_{\bP}^* : \langle \lambda, \alpha\rangle > 0 \mbox{~for all~} \alpha \in \Phi^{++}(\mathfrak{g}, \mathfrak{a}_{\bP}) \}. 
$$ For any $\varphi\in L^2_{cus}(\Gamma_{M_{\bP}}\backslash X_{\bP})$ and $\Lambda \in \mathfrak{a}_{\bP}^*\otimes\mathbb{C}$ with $\re(\Lambda) \in \rho_{\bP} + \mathfrak{a}_{\bP}^{*+}$ we define the {\em (cuspidal) Eisenstein series} $E(\bP|\varphi,\Lambda)$ as follows: \begin{equation}\label{eisenstein series} E(\bP|\varphi,\Lambda:x) = \sum_{\gamma\in \Gamma_{\bP}\backslash \Gamma} e^{(\rho_{\bP}+\Lambda)(H_{\bP}(\gamma x))}\varphi(z_{\bP}(\gamma x)), \end{equation} where $\mu(n_{\bP}(x),e^{H_{\bP}(x)}, z_{\bP}(x)) = x$ (cf. (\ref{rational horocyclic decomposition})). This series converges uniformly for $x$ in compact subsets of $X$ and is holomorphic in $\Lambda$ (cf. \cite[Lemma 4.1]{MR0579181}). Furthermore, $E(\bP|\varphi,\Lambda)$ can meromorphically be continued (as a function of $\Lambda$) to $\mathfrak{a}_{\bP}^*\otimes\mathbb{C}$ (cf. \cite[Chapter 7]{MR0579181} or \cite[Theorem 9]{MR0232893}). By definition, the Eisenstein series are $\Gamma$ invariant and hence, they define functions on $M=\Gamma\backslash X$. Eisenstein series are in general not contained in $L^2(\Gamma\backslash X)$ but it is known that they satisfy an eigenequation of the Laplacian $\Delta$ on $\Gamma\backslash X$: \begin{lemma}\label{generalized eigenfunctions} Let $\varphi\in L^2_{cus}(\Gamma_{M_{\bP}}\backslash X_{\bP})$ be an eigenfunction of $\Delta_{\Gamma_{M_{\bP}}\backslash X_{\bP},2}$ with respect to some eigenvalue $\nu$. Then we have for any $\Lambda\in\mathfrak{a}_{\bP}^*\otimes\mathbb{C}$ that is not a pole of $E(\bP|\varphi,\Lambda)$ the following: $$ \Delta E(\bP|\varphi,\Lambda) = (\nu + ||\rho_{\bP}||^2 -\langle \Lambda,\Lambda\rangle) E(\bP|\varphi,\Lambda). $$ \end{lemma} For a proof we refer to \cite{MR1906482} or \cite[Lemma 2.5]{Ji:yq}. \section{An upper bound of Eisenstein series and the $\boldsymbol{L^p}$ spectrum of the Laplacian} The estimates in this section are a precise form of the general philosophy that an automorphic form is bounded by its constant terms. In order to state these results, we recall that two rational parabolic subgroups $\bP_1, \bP_2 \subset \bG$ are called {\em associate} ($\bP_1\sim\bP_2$) if there is some $g\in \bG(\mathbb{Q})$ with \[ \Ad(g)\mathfrak{a}_{\bP_1} = \mathfrak{a}_{\bP_2}. \] The set of such isomorphisms $\mathfrak{a}_{\bP_1}\to \mathfrak{a}_{\bP_2}$ is denoted by $W(\mathfrak{a}_{\bP_1}, \mathfrak{a}_{\bP_2})$ and $W(\mathfrak{a}_{\bP}) = W(\mathfrak{a}_{\bP}, \mathfrak{a}_{\bP})$. For $\Lambda\in\mathfrak{a}_{\bP_1}^*$ and $w\in W(\mathfrak{a}_{\bP_1}, \mathfrak{a}_{\bP_2})$ we define $w\Lambda = \Lambda\circ w^{-1} \in \mathfrak{a}_{\bP_2}^*.$ Furthermore, for two rational parabolic subgroups $\bP_1\subset\bP_2$ in $\bG$, and hence $\mathfrak{a}_{\bP_2}\subset \mathfrak{a}_{\bP_1}$, we extend an element $\Lambda\in\mathfrak{a}_{\bP_2}^*$ to an element in $\mathfrak{a}_{\bP_1}^*$ by defining it to be zero on the orthogonal complement (w.r.t. the Killing form) of $\mathfrak{a}_{\bP_2}$ in $\mathfrak{a}_{\bP_1}$. \begin{proposition}\label{upper bound} Denote by ${\mathcal S}$ a Siegel set associated with a minimal rational parabolic subgroupf $\bP_0$ of $\bG$ and by $\bP$ a proper rational parabolic subgroup. 
If $\varphi \in L^2_{cus}(\Gamma_{M_{\bP}}\backslash X_{\bP})$, the Eisenstein series $E(\bP | \varphi,\Lambda)$ satisfies the following upper bound for $x\in \mathcal S$: \begin{equation} |E(\bP | \varphi,\Lambda : x)| \leq C \sum_{\bP'\sim\bP, \atop \bP'\supset\bP_0}\sum_{w\in W(\mathfrak{a}_{\bP},\mathfrak{a}_{\bP'})} e^{(w\re\Lambda + \rho_{\bP'})(\log a_{\bP_0}(x))}, \end{equation} where the constant $C>0$ depends only on $\bP, \varphi,$ and $\mathcal S$. \end{proposition} \begin{proof} The key point is to determine the so-called cuspidal data for the Eisenstein series $E(\bP | \varphi,\Lambda)$ along rational parabolic subgroups $\bP'$. Then the result basically follows from \cite[Lemma I.4.1]{MR1361168}. We recall the definition of cuspidal data from \cite{MR1361168} in the notations we use here: Given an automorphic form $\varphi$ on $\Gamma\backslash X$, for any rational parabolic subgroup $\bP$, the constant term $\varphi_{\bP}$ of $\varphi$ along $\bP$ has a cuspidal component $\varphi_{\bP}^{cusp}$ characterized by the condition that for every cuspidal function $\psi$ on $\Gamma_{M_\bP}\backslash X_\bP$, $$\langle \psi, \varphi_{\bP}\rangle = \langle \psi, \varphi_{\bP}^{cusp}\rangle,$$ \cite[p.39]{MR1361168}.\\ The cuspidal component $\varphi_{\bP}^{cusp}(x)$ can be written as a finite sum of functions of the form $Q(\log a_{\bP}(x)) e^{(\Lambda + \rho_{\bP}) (\log a_{\bP}(x))} \psi(z_{\bP}(x))$, where $Q$ is a polynomial and $\psi$ a cuspidal form on $\Gamma_{M_{\bP}}\backslash X_{\bP}$. The finite triples $(Q, \Lambda, \psi)$ are called the cuspidal data of $\varphi_{\bP}^{cusp}$, or the cuspidal data of $\varphi$ along $\bP$, cf. \cite[p.44]{MR1361168}. Note that in \cite{MR1361168} representations $\pi$ are used instead of characters $\Lambda$ (or rather functionals in $\mathfrak{a}_{\bP}^*\otimes\mathbb{C}$). This is the reason for the shift by $+\rho_{\bP}$ appearing here (cf. also \cite[Chapter VII.1]{0993.22001}).\\ Let us now determine the cuspidal data for the Eisenstein series $E(\bP | \varphi,\Lambda)$ along some rational parabolic subgroup $\bP'$. If $\bP, \bP'$ are not associate, the constant term $E_{\bP'}(\bP | \varphi,\Lambda)$ is orthogonal to all cusp forms on $\Gamma_{M_{\bP}}\backslash X_{\bP}$, cf. \cite[Lemma 39]{MR0232893} or \cite[p.86]{MR643242} and hence, in this case, the set of cupidal data is empty.\\ If on the other hand $\bP$ and $\bP'$ are associate, we have the following formula for the constant term of $E(\bP | \varphi,\Lambda)$ along $\bP'$: \begin{equation} E_{\bP'}(\bP | \varphi,\Lambda : x) = \sum_{w\in W(\mathfrak{a}_{\bP},\mathfrak{a}_{\bP'})} e^{(w\Lambda + \rho_{\bP'})(\log(a_{\bP'}(x))}\cdot (c_{cus}(\bP' : \bP : w : \Lambda)\varphi)(z_{\bP'}(x)), \end{equation} where $c_{cus}(\bP' : \bP : w : \Lambda)$ denotes the intertwining operator from the space of cusp forms on $\Gamma_{M_{\bP}}\backslash X_{\bP}$ to the space of cusp forms on $\Gamma_{M_{\bP'}}\backslash X_{\bP'}$, cf. \cite[Theorem 5]{MR0232893} or \cite[p.86]{MR643242}. Note that in the last mentioned book a slightly different definition of the ``constant term'' of an automorphic form is used, see \cite[p.79]{MR643242} for the definition used therein.\\ Hence, for associated $\bP, \bP'$ the set of cuspidal data of $E(\bP | \varphi,\Lambda)$ along $\bP'$ consists of the triples \begin{equation} (Q=1, w\Lambda, c_{cus}(\bP' : \bP : w : \Lambda)\varphi), \qquad w\in W(\mathfrak{a}_{\bP},\mathfrak{a}_{\bP'}). 
\end{equation} From \cite[Lemma I.4.1]{MR1361168} it now follows that there is some $C>0$ such that for $x\in S$ we have \begin{eqnarray*} |E(\bP | \varphi,\Lambda)| &\leq& C\sum_{\bP'\sim\bP, \atop \bP'\supset \bP_0}\sum_{w\in W(\mathfrak{a}_{\bP},\mathfrak{a}_{\bP'})} e^{(w\re\Lambda + \rho_{\bP'})(\log a_{\bP_0}(x))}\, . \end{eqnarray*} Note that in \cite[Lemma I.4.1]{MR1361168} the summation considers only standard parabolic subgroups, which are defined as rational parabolic subroups $\bP$ with $\bP\supset \bP_0$. In our situation here, the subgroups $\bP'$ associated with $\bP$ are the only standard parabolic subgroups which lead to non-empty cuspidal data. \end{proof} Recall that all minimal rational parabolic subgroups are conjugate under $\bG(\mathbb{Q})$. Furthermore, if $\bP$ is a minimal rational parabolic subgroup, the boundary locally symmetric space $\Gamma_{M_{\bP}}\backslash X_{\bP}$ is compact and hence any $L^2$ eigenfunction $\varphi$ on $\Gamma_{M_{\bP}}\backslash X_{\bP}$ is cuspidal, i.e $L^2_{cus}(\Gamma_{M_{\bP}}\backslash X_{\bP}) = L^2(\Gamma_{M_{\bP}}\backslash X_{\bP})$. Therefore Proposition \ref{upper bound} simplifies in the case where $\bP$ is a minimal rational parabolic subgroup: \begin{corollary}\label{corollary eisenstein upper bound minimal} Let $\bP_0, \bP$ denote minimal rational parabolic subgroups of $\bG$, ${\mathcal S}$ a Siegel set associated with $\bP_0$, and $E(\bP | \varphi,\Lambda)$ an Eisenstein series associated with $\bP$. Then there exists a constant $C>0$ such that for all $x\in{\mathcal S}$ \begin{eqnarray*} |E(\bP | \varphi,\Lambda : x)| &\leq& C\sum_{w\in W(\mathfrak{a}_{\bP},\mathfrak{a}_{\bP_0})} e^{(w\re\Lambda + \rho_{\bP_0})(\log a_{\bP_0}(x))}. \end{eqnarray*} \end{corollary} \subsection{$\boldsymbol{L^p}$ spectrum} \label{Lp spectrum} Let us denote for rational parabolic subgroups $\bP, \bP'$ of $\bG$ by \begin{equation} C(\bP|\rho_{\bP'}) = \mathrm{int}\, \mathrm{conv}\{w\rho_{\bP'} : w\in W(\mathfrak{a}_{\bP'},\mathfrak{a}_{\bP})\} \subset \mathfrak{a}_{\bP}^* \end{equation} the interior of the convex hull of the points $w\rho_{\bP'}, w\in W(\mathfrak{a}_{\bP'},\mathfrak{a}_{\bP})$, and by $C(\rho_{\bP}) = C(\bP|\rho_{\bP}) = \mathrm{int}\,\mathrm{conv}\{w\rho_{\bP} : w\in W(\mathfrak{a}_{\bP})\}$. Then we have the following lemma. \begin{lemma}\label{lemma estimate siegel set} Let $\bP$ denote a rational parabolic subgroup, $E(\bP | \varphi, \Lambda)$ an associated Eisenstein series on $M=\Gamma\backslash X$ with $\varphi \in L^2_{cus}(\Gamma_{M_P}\backslash X_P)$, ${\mathcal S}$ a Siegel set associated with a minimal rational parabolic subgroup $\bP_0$, and $p\in [1,2)$. If \begin{eqnarray}\label{condition lambda} \re\Lambda &\in& \left(\frac{2}{p}-1\right)\bigcap_{\bP'\sim\bP,\atop \bP'\supset \bP_0}C(\bP|\rho_{\bP'}) \end{eqnarray} and if $\Lambda$ is not a pole, we have $E(\bP | \varphi, \Lambda) \in L^p({\mathcal S})$. \end{lemma} \begin{proof} Recall that the Riemannian measure on $X$ with respect to the horocyclic decomposition $X = N_{\bP_0}\times A_{\bP_0}\times X_{\bP_0}$ is given by $dvol_X = h(n_{\bP_0},z_{\bP_0})e^{-2\rho_{\bP_0}(\log a_{\bP_0})}dndzda$, where $h$ is smooth on $N_{\bP_0}\times X_{\bP_0}$, cf. \cite[Proposition 1.6]{MR0338456} or \cite[Proposition 4.3]{MR0387496}. 
Therefore, we obtain for $p\in [1,2)$ with Proposition \ref{upper bound} \begin{eqnarray*} \int_{{\mathcal S}} |E(\bP | \varphi,\Lambda)(x)|^p dvol_X &\leq& C\sum_{\bP'\sim\bP,\atop \bP'\supset\bP_0} \sum_{w\in W(\mathfrak{a}_{\bP},\mathfrak{a}_{\bP'})}\int_{A_{\bP_0, t}} e^{(pw\re\Lambda + p\rho_{\bP'} - 2\rho_{\bP_0})(\log a_{\bP_0}(x))} da \end{eqnarray*} and hence, $E(\bP | \varphi, \Lambda) \in L^p(S)$ if \begin{equation}\label{inequality1} (pw\re\Lambda + p\rho_{\bP'} - 2\rho_{\bP_0})(\log a_{\bP_0}) < 0 \end{equation} for all $a_{\bP_0}\in A_{\bP_0, t}, w\in W(\mathfrak{a}_{\bP},\mathfrak{a}_{\bP'})$, and all rational parabolic subgroups $\bP'\sim\bP$ with $\bP'\supset\bP_0$. As by definition $w\re\Lambda = \rho_{\bP'} = 0$ on $\mathfrak{a}_{\bP'}^{\perp}$ (the orthogonal complement of $\mathfrak{a}_{\bP'}$ in $\mathfrak{a}_{\bP_0}$) and since the restriction of $\rho_{\bP_0}$ to $\mathfrak{a}_{\bP'}$ coincides with $\rho_{\bP'}$ (cf. \cite{MR0231321}), inequality (\ref{inequality1}) is equivalent to \begin{eqnarray*} (pw\re\Lambda + (p-2)\rho_{\bP'})(\log a_{\bP'}) & < & 0, \end{eqnarray*} for all $a_{\bP'}\in A_{\bP',t} = A_{\bP'}\cap A_{\bP_0,t}$. This inequailty in turn is equivalent to \begin{eqnarray*} \re\Lambda &<& \left(\frac{2}{p} - 1\right)w\rho_{\bP'} \end{eqnarray*} on $\log(A_{\bP',t})$ for all $w\in W(\mathfrak{a}_{\bP'},\mathfrak{a}_{\bP})$ and $\bP'\sim\bP$ with $\bP'\subset\bP_0$. This proves the claim. \end{proof} We are now prepared to prove the following result. \begin{theorem}\label{thm general parabolic} Let $\bP$ denote a rational parabolic subgroup of $\bG$, $E(\bP | \varphi, \Lambda)$ an associated Eisenstein series on $M=\Gamma\backslash X$ with $\varphi \in L^2_{cus}(\Gamma_{M_P}\backslash X_P)$, $\bP_0$ a minimal rational parabolic subgroup, and $p\in [1,2)$. If \begin{eqnarray}\label{condition lambda X} \re\Lambda &\in& \left(\frac{2}{p}-1\right) \bigcap_{\bP'\sim\bP,\atop \bP'\supset \bP_0}C(\bP|\rho_{\bP'}) \end{eqnarray} and if $\Lambda$ is not a pole, we have $E(\bP | \varphi, \Lambda) \in L^p(M)$. \end{theorem} \begin{proof} Let $\bP_j, j=0,\ldots,k,$ denote representatives of $\Gamma$ conjugacy classes of minimal rational parabolic subgroups. From Proposition \ref{non-precise} it follows that there are Siegel sets ${\cal S}_j$ associated with $\bP_j, j=0,\ldots,k,$ such that the union of these Siegel sets covers a fundamental domain for $\Gamma$. From Lemma \ref{lemma estimate siegel set} it follows $E(\bP | \varphi, \Lambda) \in L^p({\cal S}_j)$ if \begin{eqnarray*} \re\Lambda &\in& \left(\frac{2}{p}-1\right)\bigcap_{\bP'\sim\bP,\atop \bP'\supset \bP_j}C(\bP|\rho_{\bP'}). \end{eqnarray*} Since all minimal rational parabolic subgroups are conjugate under $\bG(\mathbb{Q})$, the sets $$ \bigcap_{\bP'\sim\bP,\atop \bP'\supset \bP_j}C(\bP|\rho_{\bP'}), \qquad j=0,\ldots, k, $$ coincide and hence, $E(\bP | \varphi, \Lambda) \in L^p(M)$ if $\Lambda$ satisfies condition (\ref{condition lambda X}). \end{proof} \begin{corollary} Let $\bP$ denote a rational parabolic subgroup of $\bG$, $E(\bP | \varphi, \Lambda)$ an associated Eisenstein series on $M=\Gamma\backslash X$ with $\varphi \in L^2_{cus}(\Gamma_{M_{\bP}}\backslash X_{\bP})$, $\bP_0$ a minimal rational parabolic subgroup, and $p\in (1,2)$. 
If \begin{eqnarray*} \re\Lambda &\in& \left(\frac{2}{p}-1\right) \bigcap_{\bP'\sim\bP,\atop \bP'\supset \bP_0}C(\bP|\rho_{\bP'}) \end{eqnarray*} and if $\Lambda$ is not a pole, $E(\bP | \varphi, \Lambda)$ is an eigenfunction of $\DMp$ with eigenvalue $\nu + ||\rho_{\bP}||^2 - \langle \Lambda,\Lambda\rangle$, where $\nu$ is the eigenvalue of the $L^2$ Laplacian on the boundary locally symmetric space $\Gamma_{M_{\bP}}\backslash X_{\bP}$ for $\varphi$. \end{corollary} \begin{proof} This follows from Lemma \ref{lemma Lp eigenfunctions} and Lemma \ref{generalized eigenfunctions} together with the preceeding theorem. \end{proof} In the case where $\bP$ is a minimal rational parabolic subgroup, these results simplify significantly. \begin{corollary}\label{eigenvalues minimal} Let $\bP$ denote a minimal rational parabolic subgroup of $\bG$, $E(\bP | \varphi, \Lambda)$ an associated Eisenstein series on $M=\Gamma\backslash X$, and $p\in [1,2)$. If \begin{eqnarray} \re\Lambda &\in& \left(\frac{2}{p}-1\right)C(\rho_{\bP}) \end{eqnarray} and if $\Lambda$ is not a pole, we have $E(\bP | \varphi, \Lambda) \in L^p(M)$. Furthermore, if $p\in (1,2)$, $$ \DMp E(\bP|\varphi,\Lambda) = (\nu + ||\rho_{\bP}||^2 - \langle \Lambda,\Lambda\rangle)E(\bP|\varphi,\Lambda), $$ where $\nu$ is the eigenvalue of the $L^2$ Laplacian on $\Gamma_{M_\bP}\backslash X_\bP$ for $\varphi$. \end{corollary} \begin{proof} Let $\bP'$ denote a minimal rational parabolic subgroup $\bP'$ of $\bG$. Since $\bP'$ and $\bP$ are conjucate under $\bG(\mathbb{Q})$, the sets % \begin{eqnarray*} C(\bP|\rho_{\bP'}) &=& \mathrm{int}\, \mathrm{conv}\{w\rho_{\bP'} : w\in W(\mathfrak{a}_{\bP'},\mathfrak{a}_{\bP})\} \subset \mathfrak{a}_{\bP}^* \end{eqnarray*} and \begin{eqnarray*} C(\rho_{\bP}) &=& \mathrm{int}\, \mathrm{conv}\{w\rho_{\bP} : w\in W(\mathfrak{a}_{\bP})\} \end{eqnarray*} coincide. Hence, the claim follows from Theorem \ref{thm general parabolic}. \end{proof} This result resembles the behavior of spherical functions $$ \varphi_{\lambda}(gK) = \int_K \e^{(i\lambda + \rho)(A(kg))} dk,\qquad \lambda\in\mathfrak{a}^*_{\mathbb{C}} $$ on (globally) symmetric spaces $X=G/K$ of non-compact type: If $C_X(\rho)$ denotes the convex hull of the points $s\rho\in\mathfrak{a}^*, s\in W$, where $W$ denotes the Weyl group, for any $p>2$ and any $\lambda \in \mathfrak{a}^* + i(1-\frac{2}{p})C_X(\rho)$ the spherical function $\varphi_{\lambda}$ is contained in $L^p(X)$, cf. \cite[Proposition 2.2]{MR1016445} (note that the roles of $p>2$ and $p<2$ are interchanged). \\ If we define for $p\in [1,\infty)$ the parabolic region \begin{equation} {\cal P}_{M,p}(\rho_{\bP}) = \left\{ ||\rho_{\bP}||^2 - z^2 : z\in\mathbb{C}, |\re z|\leq ||\rho_{\bP}||\cdot |\frac{2}{p}-1|\right\}\subset \mathbb{C}, \end{equation} we obtain as in \cite[Proposition 2.2]{MR1016445} or \cite[Corollary 3.6]{Ji:yq} the following: \begin{theorem}\label{theorem eigenvalues} Let $\bP$ denote a minimal rational parabolic subgroup of $\bG$. \begin{itemize} \item[\textup{(a)}] If $p\in (1,2)$, there exists a discrete set $B\subset {\cal P}_{M,p}(\rho_{\bP})$ such that all points in the interior of ${\cal P}_{M,p}(\rho_{\bP}) \setminus B$ are eigenvalues of $\DMp$. \item[\textup{(b)}] If $p\in [2,\infty)$, the point spectrum $\sigma_{pt}(\DMp)$ is a discrete subset of $[0,\infty)$. \end{itemize} Furthermore, ${\cal P}_{M,p}(\rho_{\bP}) \subset \sigma(\DMp)$ for all $p\in[1,\infty)$. 
\end{theorem} Recall that two minimal rational parabolic subgroups $\bP,\bP'$ are conjugate under $\bG(\mathbb{Q})$ and in particular, $||\rho_{\bP}|| = ||\rho_{\bP'}||$. Therefore we have ${\cal P}_{M,p}(\rho_{\bP}) = {\cal P}_{M,p}(\rho_{\bP'})$. \begin{proof} Let first $p\in (1,2)$ and let $E(\bP | \varphi, \Lambda)$ denote an Eisenstein series associated with $\bP$ on $M=\Gamma\backslash X$ with $\varphi= const \in L^2_{cus}(\Gamma_{M_{\bP}}\backslash X_{\bP})$. Note that such a choice is always possible as -- due to the minimality of $\bP$ -- the boundary locally symmetric space $\Gamma_{M_{\bP}}\backslash X_{\bP}$ is compact and hence, $L^2_{cus}(\Gamma_{M_{\bP}}\backslash X_{\bP}) = L^2(\Gamma_{M_{\bP}}\backslash X_{\bP})$.\\ From Corollary \ref{eigenvalues minimal} it follows that for $\Lambda$ in $$ \left(\frac{2}{p}-1\right)\mathrm{int}\,\mathrm{conv}\{0, \rho_{\bP} \} + i \mathfrak{a}_{\bP}^* $$ all the points $||\rho_{\bP}||^2 - \langle \Lambda, \Lambda\rangle$ are contained in the point spectrum of $\DMp$ if $\Lambda$ is not a pole and therefore there exists a discrete set $B$ such that all points in the interior of ${\cal P}_{M,p}(\rho_{\bP}) \setminus B$ are eigenvalues of $\DMp$. Since the spectrum is a closed subset of $\mathbb{C}$, we obtain ${\cal P}_{M,p}(\rho_{\bP}) \subset \sigma(\DMp)$ for $p\in (1,2)$ and by duality for $p\in(2,\infty)$. In the case $p=2$, we have ${\cal P}_{M,2}(\rho_{\bP}) = [||\rho_{\bP}||^2,\infty)$, and it is well known that this set belongs to the $L^2$ spectrum of $M$. This completes the proof of (a) and the last statement.\\ To prove (b), assume $p>2$ and $\DMp \psi = \lambda \psi$ for some $\psi\in \dom(\DMp)$. Since $vol(M)<\infty$ it follows from Lemma \ref{domains} that $\psi$ is an eigenfunction of $\DMq$ with eigenvalue $\lambda$ if $1<q\leq p$. Since it is known that the point spectrum of $\Delta_{M,2}$ is a discrete subset of $[0,\infty)$, the proof is complete. \end{proof} An application of Theorem \ref{thm general parabolic} yields similarly for non-minimal rational parabolic subgroups the existence of an open subset of $\mathbb{C}$ that consists only of eigenvalues of $\DMp$. However, an explicit description of this set seems to be more complicated than in the case of minimal rational parabolic subgroups.\\ On the other hand, an ``upper bound'' for the $L^p$ spectrum of $M$ follows from a result by Taylor: \begin{proposition} Let $M=\Gamma\backslash X$ denote a non-compact arithmetic locally symmetric space and denote by $\bP$ a minimal rational parabolic subgroup. For $p\in (1,\infty)$ we have $$ \{\lambda_0,\ldots, \lambda_r\}\cup {\cal P}_{M,p}(\rho_{\bP})\subset \sigma(\DMp) \subset \{\lambda_0,\ldots, \lambda_r\}\cup {\cal P}_{M,p}', $$ where $$ {\cal P}_{M,p}' = \left\{ b - z^2 : z\in\mathbb{C}, |\re z|\leq ||\rho||\cdot |\frac{2}{p}-1|\right\} $$ with $b = \inf \sigma_{ac}(\Delta_{M,2}) = \inf_{\bP\neq \bG} ||\rho_{\bP}||^2 > 0,$ and where $\lambda_0 = 0, \lambda_1,\ldots, \lambda_r$ are eigenvalues of $\Delta_{M,2}$ with finite multiplicity. \end{proposition} \begin{proof} The second inclusion follows from Taylor's Proposition 3.3 in \cite{MR1016445} (see also Proposition \ref{taylor upper} in this paper). Taking into account Theorem \ref{theorem eigenvalues}, it remains to show $\lambda_j \in \sigma(\DMp), j=0,\ldots, r$. For $p\leq 2$ it follows similarly as in the proof of Theorem \ref{theorem eigenvalues} that each $\lambda_j$ is an eigenvalue of $\DMp$. 
By duality, it then follows $\lambda_j\in \sigma(\DMp)$ for any $p>2$ and the proof is complete. \end{proof} If $X=G/K$ denotes a symmetric space of non-compact type and $M=\Gamma\backslash X$ for a non-uniform arithmetic $\Gamma$, in the case where $\qrank(\Gamma) = \rank(X)$, we have for any minimal rational parabolic subgroup $\bP\subset\bG$ $$ \dim(A_{\bP}) = \dim(A), $$ where $A$ denotes the abelian subgroup in the Iwasawa decomposition $G=KAN$. Hence, in this situation, we can conclude $||\rho|| = ||\rho_{\bP}||$ and therefore ${\cal P}_{M,p}'$ is a shift of ${\cal P}_{M,p}(\rho_{\bP})$, more precisely ${\cal P}_{M,p}' = (b-||\rho_{\bP}||^2) + {\cal P}_{M,p}(\rho_{\bP})$. See Figure \ref{parabolic regions}. \begin{figure}[htb] \centering \includegraphics[scale=0.4]{JiWeber_hr_chaos-fig1} \caption{The parabolic region ${\cal P}_{M,p}(\rho_{\bP})$ tangent to the sector $\Sigma_p$ defined in Section \ref{heat semigroup} and the parabolic region $ {\cal P}_{M,p}'$.} \label{parabolic regions} \end{figure} We conclude this section with a conjecture about the precise form of the $L^p$ spectrum. \begin{conjecture}[cf. Figure \ref{conjecture figure}] \label{conjecture} Let $M=\Gamma\backslash X$ denote a non-compact locally symmetric space with arithmetic fundamental group $\Gamma$ and denote by $0=\lambda_0,\ldots,\lambda_r$ the eigenvalues of $\Delta_{M,2}$ below the absolutely continuous spectrum $\sigma_{ac}(\Delta_{M,2})$. Then, for any $p\in(1,\infty)$, each proper rational parabolic subgroup $\bP$ defines a parabolic region ${\cal P}_{\bP,p}$ tangent to the boundary of the sector $\Sigma_p = \left\{ z\in \mathbb{C}\setminus\{0\} : |\arg(z)| \leq \arctan\frac{|p-2|}{2\sqrt{p-1}} \right\}$ with apex at the point $z(\bP)=\frac{4||\rho_{\bP}||^2}{p}\left(1- \frac{1}{p}\right)$ such that the $L^p$ spectrum $\sigma(\DMp)$ of $M$ coincides precisely with the union of these parabolic regions ${\cal P}_{\bP,p}$ and the $L^2$ eigenvalues $\lambda_0,\ldots,\lambda_r$: $$ \sigma(\DMp) = \{\lambda_0,\ldots,\lambda_r\}\cup \bigcup_{\bP} {\cal P}_{\bP,p}. $$ Furthermore, if $p\in (1,2)$, there is a discrete subset $B\subset \mathbb{C}$ such that all points in $\left(\{\lambda_0,\ldots,\lambda_r\}\cup \bigcup {\cal P}_{\bP,p}\right)\setminus B$ are eigenvalues of $\DMp$. \end{conjecture} \begin{figure}[htb] \centering \includegraphics[scale=0.7]{JiWeber_hr_chaos-fig2} \caption{Conjecture about the $L^p$ spectrum with sector $\Sigma_p$.} \label{conjecture figure} \end{figure} We actually assume that, similarly as in the case of minimal rational parbolic subgroups, each {\em non-cuspidal} Eisenstein series $E(\bP|\varphi_0,\Lambda)$ with $\varphi_0=const.$ defines a parabolic region ${\cal P}_{\bP,p}$. For fixed $\bP$, it seems to be plausible that for any $\varphi\in L^2(\Gamma_{M_{\bP}}\backslash X_{\bP})$ the parabolic region defined by $E(\bP|\varphi,\Lambda)$ should be contained in ${\cal P}_{\bP,p}$. More precisely, for any $\varphi\in L^2(\Gamma_{M_{\bP}}\backslash X_{\bP})$ the parabolic region defined by $E(\bP|\varphi,\Lambda)$ should be a shift (by the eigenvalue corresponding to $\varphi$) of the parabolic region defined by $E(\bP|\varphi_0,\Lambda)$.\\ In the case $p=2$ each parabolic region ${\cal P}_{\bP,p}$ of course needs to degenerate to a subset of $\mathbb{R}$ as $\Delta_{M,2}$ is self-adjoint. 
It is very plausible that this subset is of the form $[||\rho_{\bP}||^2,\infty)$.\\ If Conjecture \ref{conjecture} is true, the $L^p$ spectrum (as set), for any $p\neq 2$, contains more information about $M$ than the $L^2$ spectrum. For example, if the $L^p$ spectrum containes $n$ different parabolic regions (i.e. $n$ parabolic regions tangent to $\Sigma_p$ with pairwise different apex), there must be $n$ different rational parabolic subgroups $\bP$ such that $||\rho_{\bP}||$ are pairwise different numbers.\\ Conjecture \ref{conjecture} is known to be true for non-compact locally symmetric spaces $M=\Gamma\backslash X$ with finite volume if $X$ is a rank one symmetric space of non-compact type \cite{MR937635,Ji:yq}, for Hilbert modular varieties $M$ \cite{Ji:yq}, and by the result in \cite{Weber:2008ve} for any Riemannian product of these spaces. \section{Heat dynamics} In this section we use our results of the previous sections to show that the $L^p$ heat semigroup has a certain chaotic behavior if $p\in(1,2)$ whereas such a chaotic behavior cannot occur if $p\geq 2$. \\ Similar results have been proven in the context of globally symmetric spaces of non-compact type \cite{Ji:nr} and for rank one locally symmetric spaces (which need not neccessarily be arithmetic) in \cite{Ji:yq}. \subsection{Chaotic semigroups} There are many different definitions of chaos. We will use the following one which is basically an adaption of Devaney's definition \cite{MR1046376} to the setting of strongly continuous semigroups (cf. \cite{MR1468101}). \begin{definition} A strongly continuous semigroup $T(t)$ on a Banach space ${\cal B}$ is called {\em chaotic} if the following two conditions are satisfied: \begin{itemize} \item[\textup{(i)}] There exists an $f\in {\cal B}$ such that its orbit $\{T(t)f : t\geq 0 \}$ is dense in ${\cal B}$, i.e. $T(t)$ is {\em hypercyclic}. \item[\textup{(ii)}] The set of periodic points $\{ f\in {\cal B} : \exists t>0 \mbox{~such that~} T(t)f=f \}$ is dense in ${\cal B}$. \end{itemize} \end{definition} \begin{remark}\label{remark1} \begin{itemize} \item[\textup{(1)}] As with $\{T(t)f : t\geq 0 \}$ also the set $\{T(q)f : q\in\mathbb{Q}_{\geq 0} \}$ is dense, ${\cal B}$ is necessarily separable. \item[\textup{(2)}] The orbit of any point $T(t)f$ in a dense orbit $\{T(t)f : t\geq 0 \}$ is again dense in ${\cal B}$. Hence, the set of points with a dense orbit is a dense subset of ${\cal B}$ or empty. \item[\textup{(3)}] If $A$ is the generator of a hypercyclic semigroup, its dual operator $A'$ has empty point spectrum \cite[Theorem 3.3]{MR1468101}. \end{itemize} \end{remark} A sufficient condition for a strongly continuous semigroup to be chaotic in terms of spectral properties of its generator was given by Desch, Schappacher, and Webb: \begin{theorem}[\cite{MR1468101}] \label{thm dsw} Let $T(t)$ denote a strongly continuous semigroup on a separable Banach space ${\cal B}$ with generator $A$ and let $\Omega$ denote an open, connected subset of $\mathbb{C}$ with $\Omega\subset \sigma_{pt}(A)$ (the point spectrum of $A$). Assume that there is a function $F: \Omega\to {\cal B}$ such that \begin{itemize} \item[\textup{(i)}] $\Omega \cap i\mathbb{R} \neq \emptyset$. \item[\textup{(ii)}] $F(\lambda) \in \ker(A-\lambda)$ for all $\lambda \in \Omega$. \item[\textup{(iii)}] For all $\phi \in {\cal B'}$ in the dual space of ${\cal B}$, the mapping $F_{\phi}:\Omega\to \mathbb{C},\, \lambda\mapsto \phi\circ F $ is analytic. 
Furthermore, if for some $\phi \in {\cal B'}$ we have $F_{\phi}=0$ then already $\phi = 0$ holds. \end{itemize} Then the semigroup $T(t)$ is chaotic. \end{theorem} In \cite{MR1468101} it was also required that the elements $F(\lambda)$, $\lambda \in \Omega$, are non-zero but as remarked in \cite{MR2128736} this assumption is redundant. \\ In the theory of dynamical systems chaotic semigroups are highly unwanted because of their difficult dynamics. Not much more appreciated are so called subspace chaotic semigroups: \begin{definition} A strongly continuous semigroup $T(t)$ on a Banach space ${\cal B}$ is called {\em subspace chaotic} if there is a closed, $T(t)$ invariant subspace ${\cal V}\neq \{0\}$ of ${\cal B}$ such that the restriction $T(t)|_{\cal V}$ is a chaotic semigroup on ${\cal V}$. \end{definition} \noindent Because of Remark \ref{remark1} such a subspace is always infinite dimensional.\\ Banasiak and Moszy\'nski showed that a subset of the conditions in Theorem \ref{thm dsw} yields a sufficient condition for subspace chaos: \begin{theorem}\textup{(\cite[Criterion 3.3]{MR2128736}).}\label{thm ban} Let $T(t)$ denote a strongly continuous semigroup on a separable Banach space ${\cal B}$ with generator $A$. Assume, there is an open, connected subset $\Omega\subset \mathbb{C}$ and a function $F: \Omega\to {\cal B}, F\neq 0,$ such that \begin{itemize} \item[\textup{(i)}] $\Omega \cap i\mathbb{R} \neq \emptyset$. \item[\textup{(ii)}] $F(\lambda) \in \ker(A-\lambda)$ for all $\lambda \in \Omega$. \item[\textup{(iii)}] For all $\phi \in {\cal B'}$, the mapping $F_{\phi}:\Omega\to \mathbb{C},\, \lambda\mapsto \phi\circ F $ is analytic. \end{itemize} Then the semigroup $T(t)$ is subspace chaotic.\\ Furthermore, the restriction of $T(t)$ to the $T(t)$ invariant subspace ${\cal V} = \overline{\spa}F(\Omega)$ is chaotic. \end{theorem} Note that it is not required $\Omega\subset \sigma_{pt}(A)$ here, i.e. either $F(\lambda)$ is an eigenvector or $F(\lambda)=0$. But, as explained in \cite{MR2128736}, the assumption $\Omega\subset \mathbb{C}$ is not really weaker. \subsection{$\boldsymbol{L^p}$ heat dynamics on locally symmetric spaces} \begin{theorem}\label{thm dynamics} Let $M=\Gamma\backslash X$ denote a non-compact locally symmetric space with arithmetic fundamental group $\Gamma$. \begin{itemize} \item[\textup{(a)}] If $p\in (1,2)$ there is a constant $c_p>0$ such that for any $c>c_p$ the semigroup $$ e^{-t(\DMp -c)}: L^p(M) \to L^p(M) $$ is subspace chaotic. \item[\textup{(b)}] For any $p\geq 2$ and $c\in\mathbb{R}$ the semigroup $e^{-t(\DMp -c)}: L^p(M) \to L^p(M)$ is not subspace chaotic. \end{itemize} \end{theorem} \begin{proof} For the proof of part (a), we will check the conditions of Theorem \ref{thm ban}. If $p\in (1,2)$ and if $\bP$ denotes a minimal rational parabolic subgroup of $\bG$, the interior of $$ ({\cal P}_{M,p}(\rho_{\bP})\setminus B)\cap\{ z\in\mathbb{C} : \im(z) <0\} $$ consists completely of eigenvalues (cf. Theorem \ref{theorem eigenvalues}), and the apex of ${\cal P}_{M,p}(\rho_{\bP})$ is at the point $$ c_p = ||\rho_{\bP}||^2 - ||\rho_{\bP}||^2 \cdot \left(\frac{2}{p} - 1\right)^2 = \frac{4||\rho_{\bP}||^2}{p}\left(1 - \frac{1}{p}\right). $$ Hence, the point spectrum of $(\DMp - c)$ intersects the imaginary axis for any $c>c_p$. We assume in the following $c > c_p$ and denote by $\Omega$ the interior of the set $$ \left({\cal P}_{M,p}(\rho_{\bP})\setminus B - c\right)\cap\{ z\in\mathbb{C} : \im(z) <0\}. 
$$ Then, if the usual analytic branch of the square root is chosen, $\Omega$ is mapped (analytically) by $h(z) = i ||\rho_{\bP}||^{-1}\sqrt{z+c-||\rho_{\bP}||^2}$ onto the strip $$ \left\{z\in\mathbb{C} : \im(z)>0, 0<\re(z)< \left(\frac{2}{p}-1\right) \right\} \setminus h(B). $$ If we now define, for some non-zero constant function $\varphi \in L^2_{cus}(\Gamma_{M_{\bP}}\backslash X_{\bP})$ (note that $L^2_{cus}(\Gamma_{M_{\bP}}\backslash X_{\bP}) = L^2(\Gamma_{M_{\bP}}\backslash X_{\bP})$ as $\Gamma_{M_{\bP}}\backslash X_{\bP}$ is compact), $$ F: \Omega \to L^p(M),\, z\mapsto E(\bP| \varphi, h(z)\rho_{\bP}), $$ then the map $F_f: \Omega\to \mathbb{C}, z\mapsto \int_M F(z)(x)f(x) dx$ is analytic as a composition of analytic mappings for all $f\in L^{p'}(M)$, where $\frac{1}{p} + \frac{1}{p'} = 1$. Note that the integral is always finite as the Eisenstein series $F(z)$ are contained in $L^p(M)$. Furthermore, it follows from Corollary \ref{eigenvalues minimal} and Theorem \ref{theorem eigenvalues} that each $F(z)$ is an eigenfunction of $(\DMp-c)$ for the eigenvalue $z$, and the proof of part (a) is complete.\\ If $p\geq 2$, the point spectrum of $\DMp$, and hence of $(\DMp - c)$, is a discrete subset of $\mathbb{R}$. On the other hand, the intersection of the point spectrum of the generator of a chaotic semigroup with the imaginary axis is always infinite, cf. \cite{MR1855839}. \end{proof} \subsubsection{Periods of $\boldsymbol{L^p}$ heat semigroups} \begin{definition} If $T(t)$ denotes a strongly continuous semigroup on a Banach space ${\cal B}$, we call any $t>0$ such that there is some $f\neq 0$ with $T(t)f=f$ and $T(s)f\neq f$ for all $0<s<t$ a {\em period} of the semigroup $T(t)$. \end{definition} In \cite{Munoz:wj} it is shown that the periods of a strongly continuous semigroup can be determined if the eigenvalues of the generator on the imaginary axis are known: \begin{lemma}\label{characterization periods} Let $T(t)$ denote a strongly continuous semigroup on a Banach space ${\cal B}$ with generator $A$. Then $t>0$ is a period of $T(t)$ if and only if there exist $\alpha_k\in i\mathbb{R} \cap \sigma_{pt}(A), k=1,\ldots,l$ ($l=\infty$ is allowed) with eigenvectors $h_{\alpha_k}$ such that $\sum_{k=1}^l h_{\alpha_k}$ is convergent and $t$ is the smallest positive number with $$ t\alpha_k \in 2\pi i \mathbb{Z} $$ for all $k$. \end{lemma} We use this result in order to describe the set of periods of the semigroups $e^{-t(\DMp -c)}: L^p(M) \to L^p(M)$ on a non-compact locally symmetric space $M=\Gamma\backslash X$ with arithmetic fundamental group $\Gamma$. Let $\bP$ denote a minimal rational parabolic subgroup. As the boundary of the parabolic region $$ {\cal P}_{M,p}(\rho_{\bP}) = \left\{ ||\rho_{\bP}||^2 - z^2 : z\in\mathbb{C}, |\re z|\leq ||\rho_{\bP}||\cdot |\frac{2}{p}-1|\right\}\subset \mathbb{C} $$ is parametrized by the curve $$ s\mapsto \left(\frac{2||\rho_{\bP}||}{p}+ is \right)\left(2||\rho_{\bP}|| - \frac{2||\rho_{\bP}||}{p} - is\right), \qquad s\in\mathbb{R}, $$ the intersection of ${\cal P}_{M,p}(\rho_{\bP}) - c$ with the imaginary axis consists of the points in $i[-r,r]$, where $r= r(c,p) = 2||\rho_{\bP}||\left|\frac{2}{p}-1\right|\sqrt{c-c_p}$ if $c>c_p$. \begin{proposition} Let $M=\Gamma\backslash X$ denote a non-compact locally symmetric space with arithmetic $\Gamma$. \begin{itemize} \item[\textup{(a)}] If $p\in (1,2)$ and $c>c_p$, all but finitely many points in $[2\pi/r(c,p),\infty)$ are periods of the semigroup $e^{-t(\DMp -c)}: L^p(M) \to L^p(M)$.
\item[\textup{(b)}] If $p\geq 2$ the semigroup $e^{-t(\DMp -c)}: L^p(M) \to L^p(M)$ has no periods. \end{itemize} \end{proposition} \begin{proof} This follows immediately from Lemma \ref{characterization periods} and Theorem \ref{theorem eigenvalues}. \end{proof} \bibliographystyle{amsplain}
{ "timestamp": "2010-05-18T02:03:04", "yymm": "1005", "arxiv_id": "1005.2988", "language": "en", "url": "https://arxiv.org/abs/1005.2988", "abstract": "The aim of this paper is to study the spectrum of the $L^p$ Laplacian and the dynamics of the $L^p$ heat semigroup on non-compact locally symmetric spaces of higher rank. Our work here generalizes previously obtained results in the setting of locally symmetric spaces of rank one to higher rank spaces. Similarly as in the rank one case, it turns out that the $L^p$ heat semigroup on $M$ has a certain chaotic behavior if $p\\in(1,2)$ whereas for $p\\geq 2$ such a chaotic behavior never occurs.", "subjects": "Differential Geometry (math.DG); Dynamical Systems (math.DS)", "title": "$L^p$ spectrum and heat dynamics of locally symmetric spaces of higher rank", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9740426450627306, "lm_q2_score": 0.7279754548076477, "lm_q1q2_score": 0.7090791375415855 }
https://arxiv.org/abs/1903.06968
Creation of discontinuities in circle maps
Circle maps frequently arise in mathematical models of physical or biological systems. Motivated by Cherry flows and `threshold' systems such as integrate and fire neuronal models, models of cardiac arrhythmias, and models of sleep/wake regulation, we consider how structural transitions in circle maps occur. In particular we describe how maps evolve near the creation of a discontinuity. We show that the natural way to create discontinuities in the maps associated with both threshold systems and Cherry flows results in a square root singularity in the map as the discontinuity is approached from either one or both sides. We analyse the generic properties of maps with gaps displaying this local behaviour, showing how border collisions and saddle-node bifurcations are interspersed. This highlights how the Arnold tongue picture for tongues bordered by saddle-node bifurcations is amended once gaps are present. For the threshold models we also show that a loss of injectivity naturally results in the creation of multiple gaps giving rise to a novel codimension two bifurcation.
\section{Introduction}\label{section:intro} Degree one circle maps $f:{\mathbb S}^1\to {\mathbb S}^1$ are described by real functions $F:{\mathbb R}\to {\mathbb R}$ with $F(x+1)=F(x)+1$ and $f(x)=F(x)$ modulo 1. These maps arise naturally in many situations and $F$ may be injective (or not) and continuous (or not), leading to four different types of map: $f$ is injective (monotonic) and continuous, the classic case; non-injective and continuous, \cite{Glass_1979}; injective and discontinuous, \cite{Glass_1991}; and non-injective and discontinuous, \cite{Skeldon_2014}. In many applications the type of map is fixed. However, for maps derived from the classes of models considered below, changes of type occur naturally as parameters vary. Close to the transitions between types, the maps have a well-defined structure. This structure in turn changes some dynamical properties of the systems (note that the transition between types is not necessarily a bifurcation \emph{per se}). These transitions and their consequences are the subject of this paper. Our motivation comes from two classes of models. The first arises in many biological models including models of cardiac arrhythmias (see \cite{Arnold_1991} and \cite{Glass_1991} and the references therein), neuronal models \cite{Bressloff_1990,Glendinning_1995}, and the two-process model of sleep-wake regulation \cite{Borbely_1982,Daan_1984}. We refer to these examples as `threshold systems', since in each case a variable of interest increases until it hits an upper threshold, decreases until it hits a lower threshold and then repeats. Some typical examples are shown in Fig.~\ref{fig:thresholdmodels}. If the thresholds are periodic with the same period, then each system can be represented by a circle map \cite{Glass_1991, Nakao_1998, Skeldon_2014} and the resulting observed phenomena include phase-locked solutions, `period-adding', period-doubling and chaos. The second class of models has found application in problems of breathing rhythms \cite{Baesens_2013a} and arises naturally in coupled oscillator problems at appropriate parameter values \cite{Baesens_2013b}. The initial model is a flow on the torus. If there are no stationary points and a global cross section (a Poincar\'e flow), then the return map on this section is a continuous monotonic circle map. If, as parameters are varied, a pair of stationary points is created in a saddle-node bifurcation, then the resulting flow is known as a Cherry flow \cite{Cherry_1938}. These can generate return maps which have either discontinuities or regions where the map is not defined \cite{Palis_1982}. \begin{figure}[tbhp] \centering{\includegraphics[scale=1.2]{Figure1.pdf}} \caption{ (a) A model of cardiac arrhythmias, attributed to Gel'fand and Tsetlin. Reprinted from \cite{Arnold_1991} with the permission of AIP Publishing. (b) The two-process model of sleep-wake regulation, sketch based on the model in \cite{Daan_1984}. (c) An integrate and fire model. Reprinted from \cite{Glass_1991} with the permission of AIP Publishing. This model will be described in detail in section~\ref{sect:STS} and be called the sinusoidal threshold system (STS). } \label{fig:thresholdmodels} \end{figure} There is a vast literature on circle maps which are both continuous and monotonic (e.g. \cite{Katok_1995}). All points have the same rotation number (average rate of rotation) under iteration by such maps. If the rotation number is rational then solutions tend to periodic orbits.
If the rotation number is irrational, there are no periodic solutions and, if the map is sufficiently smooth (e.g. $C^2$), all orbits are dense in the circle. Deeper results about the smoothness of conjugacies to rigid rotations for maps with irrational rotation numbers were developed by Herman \cite{Herman_1979}, and led to many technical results in this direction \cite{vStrien_1993}. For typical families of circle maps the rotation number takes rational values on closed intervals of parameters; this is called mode-locking, and a region of parameter space with a given rational rotation number is an Arnold tongue. The Arnold tongues are bounded by saddle-node bifurcations. If the circle map is continuous but not monotonic then the rotation number is replaced by a rotation interval \cite{Ito_1981}. Many properties can be understood using classic results for maps of the interval and the transition from continuous and monotonic circle maps to continuous and non-monotonic maps has been described in some detail, including the transitions to chaos which involve different sequences of period-doubling bifurcations \cite{Boyland_1986, Mackay_1986,MacKay_1988}. The injective and discontinuous circle maps arise in a number of contexts and basic results such as the existence of a well-defined rotation number can be found in \cite{Keener_1980,Rhodes_1986,Rhodes_1991}. The review paper \cite{Granados_2017} gives a thorough summary of the current literature on these monotonic circle maps with gaps, i.e. intervals with no pre-images, and shows how maps of the real line with gaps can be framed as circle maps. This is particularly important because it highlights how many results known from the study of circle maps have been rediscovered in the context of maps of the real line. The non-injective discontinuous circle maps can be divided into sub-classes. If the continuous branches are increasing then this includes the Lorenz maps and again a lot is understood, e.g. \cite{Glendinning_1993}. Less is known about the details of the dynamics of non-injective discontinuous maps in general (although the techniques of kneading theory do apply), partly because it is less clear what results would be useful without further context. Both threshold systems and the transition from Poincar\'e flows to Cherry flows provide natural settings to consider the transition from continuous circle maps to piecewise continuous circle maps with discontinuities. In each case, specific properties of the original dynamical system induce transitions between different circle map types. For the transition to discontinuous maps in both of these cases, we show that the derivative of the map is singular on at least one side of the discontinuity and derive scaling results. Although the derivative becomes singular, it is large only in a small neighbourhood of the singular value and so the singularity can be difficult to resolve numerically. Nevertheless, we show that the presence of the singularity is essential to understanding how a continuous circle map with phase-locked regions bounded by saddle-node bifurcations transitions to a circle map with a gap, with creation/destruction of periodic solutions via border collisions and saddle-node bifurcations. The layout of the paper is as follows. In section~\ref{sect:threshold} we define smooth threshold systems and the associated circle maps. An extension of the standard Arnold map is presented as an example.
In section~\ref{sect:tangencies}, we discuss the creation of gaps in smooth threshold systems, deriving the typical scalings for the gap size, and showing that the map to one side of the gap has a square root singularity. In section~\ref{sect:squareroot} we consider a general form for a piecewise continuous map with a gap, on one side of which the map has a square root singularity, and discuss two examples. In section~\ref{sect:nonmonotonic} we discuss the creation of non-monotonicity in threshold systems and how this can result in multiple gaps and lead to codimension two bifurcations. In section~\ref{sect:Cherryflow} we consider Cherry flows and discuss the creation of a discontinuity, showing that a finite gap is created instantaneously and that both sides of the gap have a square root singularity. We end with a short discussion. \section{Threshold systems}\label{sect:threshold} The essential feature of a threshold system is that there is a dependent variable which increases until it hits an upper threshold, decreases until it hits a lower threshold, and then repeats. The following definition formalises this idea in the case of smooth thresholds, which provide the generic cases described later in this paper. In this definition, $x$ represents the independent time-like variable, and for any flow $\phi_x:{\mathbb R}\to{\mathbb R}$, $\phi_x$ depends smoothly on the independent variable $x$, $\phi_0$ is the identity and for all $r,s\in{\mathbb R}$, $\phi_r\circ\phi_s=\phi_{r+s}$. \begin{definition}\label{def:TS} A smooth threshold system is a pair of flows $\phi_x$ and $\psi_x$, the up flow and down flow respectively, and an upper threshold and a lower threshold such that \begin{enumerate} \item[(i)] The up flow is strictly increasing and the down flow is strictly decreasing. \item[(ii)] The upper and lower thresholds are the graphs of smooth real functions $y=h(x)$ and $y=g(x)$ with period one respectively such that for all $x\in {\mathbb R}$ \begin{equation} g(x) < h(x). \label{eq:threshold_ordering} \end{equation} \item[(iii)] Starting from the lower threshold, the up flow reaches the upper threshold in finite time and vice versa. Formally, if $y_0= g(x_0)$ then there exists $\tau >0$ such that $\phi_{\tau}(y_0)=h(x_0+\tau )$, and if $y_0 =h(x_0)$ then there exists $\tau^\prime>0$ such that $\psi_{\tau^\prime}(y_0) =g(x_0+\tau^\prime )$. \end{enumerate} \end{definition} To determine the dynamics of a threshold system consider an initial condition on the upper threshold, $(x_n,y_n)$ with $y_n=h(x_n)$. By property (iii) there is a smallest $\tau_n >0$ such that $({x}_n+\tau_n,\tilde{y}_n)$ is on the lower threshold with $\tilde{y}_n=\psi_{\tau_n} (y_n)=g(x_n+\tau_n)$. Now, using the up flow part of property (iii), there exists a smallest $\tau^\prime_n >0$ such that \[ y_{n+1} :=\phi_{\tau^\prime_n}(\tilde{y}_n) = h(x_n+\tau_n+\tau^\prime_n). \] Thus starting at $x_n$ on the upper boundary, the trajectory returns to the upper boundary at time $x_n+\tau_n +\tau_n^\prime$, generating a map \[ x_{n+1}=F(x_n)=x_n+\tau_n+\tau^\prime_n. \] If the trajectory had started at the upper boundary with $x$-coordinate $x_n+1$ then the periodicity of $g$ and $h$ implies that the return to the upper boundary would be at $x_n+1+\tau_n+\tau^\prime_n$, and so $F(x+1)=F(x)+1$; thus $F$ is the lift of a degree one circle map.
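To make the construction concrete, the following is a minimal numerical sketch (ours, not part of the analysis above) of how $F$ can be evaluated in practice: the crossing times $\tau_n$ and $\tau'_n$ are obtained by locating the first sign change of the corresponding crossing functions on a grid and refining with a root finder. The function names and the particular flows and thresholds used at the end are illustrative choices only (they happen to coincide with the STS example introduced in the next subsection); SciPy is assumed to be available.
\begin{verbatim}
# Minimal sketch: evaluate the threshold return map F by root-finding.
import numpy as np
from scipy.optimize import brentq

def first_crossing(gfun, t_max=20.0, n_grid=4000):
    """Smallest t>0 with gfun(t)=0: scan for a sign change, refine with brentq."""
    ts = np.linspace(1e-9, t_max, n_grid)
    vals = np.array([gfun(t) for t in ts])
    for i in range(len(ts) - 1):
        if vals[i] * vals[i + 1] < 0:
            return brentq(gfun, ts[i], ts[i + 1])
    raise RuntimeError("no crossing found; increase t_max")

def threshold_return_map(x, h, g, phi, psi):
    """One return to the upper threshold, starting from (x, h(x)).
    h, g: upper/lower thresholds (period one); phi(y0,t), psi(y0,t): up/down flows."""
    tau = first_crossing(lambda t: psi(h(x), t) - g(x + t))           # down to lower threshold
    x_low = x + tau
    taup = first_crossing(lambda t: phi(g(x_low), t) - h(x_low + t))  # first up-crossing
    return x_low + taup

# Illustrative inputs: linear flows and a sinusoidal upper threshold
alpha, beta, gamma = 0.4, 0.3, 0.5
h = lambda x: beta + alpha / (2 * np.pi) * (1 + np.sin(2 * np.pi * x))
g = lambda x: 0.0
phi = lambda y0, t: y0 + gamma * t   # up flow, dy/dx = gamma
psi = lambda y0, t: y0 - t           # down flow, dy/dx = -1

print(threshold_return_map(0.1, h, g, phi, psi))
print(threshold_return_map(1.1, h, g, phi, psi) - 1.0)  # same value: F(x+1) = F(x)+1
\end{verbatim}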
In sections~\ref{sect:tangencies} and \ref{sect:nonmonotonic} we will show that monotonicity and continuity of the circle map of a smooth threshold system are related to the absence of tangencies between the thresholds and flows. Throughout this paper we illustrate our findings with the standard example of a sinusoidal threshold, sketched in Fig.~\ref{fig:thresholdmodels}(c) and described below. The section below also contains the Arnold tongue structure for this example. \subsection{Example: the Sinusoidal Threshold System (STS)}\label{sect:STS} We believe that this dynamical system first appeared explicitly in a paper by Glass and Belair in 1986~\cite{Glass_1986}. In a 1991 paper~\cite{Glass_1991}, Glass refers to it as the Gel'fand and Tsetlin integrate and fire model, though he was unable to locate a reference. He notes that it is also studied by Arnold in his 1959 thesis, an excerpt of which is in~\cite{Arnold_1991}, though no explicit model is given. Although the STS appears in \cite{Glass_1991}, the dynamics were not fully analysed. In~\cite{Glass_1986}, three special cases are considered: infinite slope for the down flow (reset dynamics), equal rates for the up and down flow, and infinite slope for the up flow. The first and last cases seem to have been investigated in detail in various papers, but the middle one is referred to as a case they ``hope to investigate later''. The case with the infinite slope in the down flow is considered by Winfree in~\cite{Winfree_1980} in the context of the entrainment of circadian rhythms. The STS can also be thought of as a simplified form of the two-process model of sleep-wake regulation~\cite{Borbely_1982,Daan_1984}. For the STS, the upper and lower thresholds are given by the functions \begin{eqnarray} y & = & h(x) = \beta + \frac{\alpha}{2 \pi} \left ( 1 + \sin 2 \pi x \right ), \nonumber \\ y & = & g(x) = 0, \label{eq:toymap1} \end{eqnarray} respectively, with $\alpha >0$ and $\beta >0$. The up and down flows are linear functions, as shown in Fig.~\ref{fig:thresholdmodels}(c): $\phi_{x}(y_0)=y_0+\gamma \,x$, $\gamma \ge 0$ (i.e. the solution to $\frac{dy}{dx}=\gamma$), and $\psi_{x}(y_0)=y_0-x$ (the solution to $\frac{dy}{dx}=-1$). This system is equivalent to the system in~\cite{Glass_1991} with a rescaling of the parameters. Suppose that at $x_n$ the system is on the upper threshold, with $y_n=h(x_n)$. Then the trajectory of the down flow will reach the lower threshold after additional time $\tau_n$, where $\tau_n=h(x_n)$, and the new value of the independent variable is $\tilde{x}_n=x_n+\tau_n$ with $y$-coordinate equal to zero. The time taken to return to the upper threshold using the up flow is $\tau_n^\prime$, which is determined implicitly by \begin{equation} \gamma \tau^\prime_n= h(x_n+\tau_n+\tau^\prime_n). \label{eq:toymap3} \end{equation} If (\ref{eq:toymap3}) has multiple positive solutions then the smallest possible $\tau^\prime_n$ is chosen. The return map to the upper threshold is therefore $x_{n+1}=F(x_n)=x_n+\tau_n+\tau^\prime_n$ or (as an implicit difference equation) \begin{equation} x_{n+1}=x_n+h(x_n)+\frac{1}{\gamma}h(x_{n+1}). \label{eq:implicitF} \end{equation} An immediate consequence is that as $\gamma\to \infty$ (a classic reset) this reduces to the classic Arnold (sine) circle map \cite{Arnold_1991,Glass_1991}, which has been extensively studied. The gradient of the map determines properties such as monotonicity.
Implicit differentiation of (\ref{eq:implicitF}) gives \begin{equation} \frac{d x_{n+1}}{d x_n} = \frac{1 + h'(x_n)}{1 - \frac{1}{\gamma}h'(x_{n+1})} . \label{eq:mapgradient} \end{equation} Since $h^\prime (x)= \alpha\cos (2\pi x)$, the numerator of (\ref{eq:mapgradient}) is always positive if $\alpha <1$ (recall $\alpha$ is positive) and the denominator is positive provided $\alpha <\gamma$. In particular, the map is monotonic and continuous if $\alpha < \min (1,\gamma)$. The derivative becomes singular when the up flow becomes tangent to the upper threshold. In section~\ref{sect:tangencies} we will show that this is a generic feature in maps generated by threshold systems and describe the generic development of a discontinuity and singular derivative in the map. Similarly,~\eref{eq:mapgradient} shows that the derivative vanishes when the down flow becomes tangent to the upper boundary. In section~\ref{sect:nonmonotonic} we will show that such a tangency generates non-monotonicity in the map generated by a threshold system. Thus the STS example illustrates the generic transition from monotonicity to non-monotonicity when $\alpha=1$ and the generic transition from continuous to non-continuous when $\alpha=\gamma$. One attractive feature of the STS is that explicit expressions for some of the bifurcations can be found. A periodic solution on the circle corresponds to a solution of the form $F^q(x)=x+p$ and has rotation number $p/q$. We will refer to such solutions as $(p,q)$-periodic orbits; they satisfy \begin{equation} x_{n+q} = x_{n} + p. \label{eq:STMpqsolution} \end{equation} In the STS, a necessary condition for the existence of $(p,1)$-periodic solutions is that there exists an $x$ such that \begin{equation}\label{eq:p1crit} \sin 2 \pi x = \frac{2 \pi}{\alpha} \left ( p \tilde \gamma - \beta \right ) -1, \quad\mbox{with} \quad\tilde \gamma = \frac{\gamma}{1+\gamma}. \end{equation} \begin{figure} \centering{\includegraphics[scale=1.0]{Figure2.pdf}} \caption{(a) Bifurcation set showing the largest few tongues that bound the regions of existence for periodic solutions ($\gamma=0.5$). The blue lines are lines of saddle-node bifurcations. (b)-(e) Trajectories for periodic solutions with $(p,q)=(1,1), (2,1), (1,2)$ and $(4,3)$ respectively ($\alpha=0.4, \gamma=0.5$ and $\beta=0.3, 0.65, 0.097$ and $0.39$ respectively). } \label{fig:tongues_saddlenodes} \end{figure}
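For completeness, condition (\ref{eq:p1crit}) can be read off directly from the implicit map (\ref{eq:implicitF}): a $(p,1)$-periodic orbit has $x_{n+1}=x_n+p$ with $p\in\mathbb{Z}$, and since $h$ has period one, $h(x_{n+1})=h(x_n)$, so (\ref{eq:implicitF}) reduces to
\[
p \;=\; h(x_n)+\frac{1}{\gamma}h(x_n) \;=\; \frac{1+\gamma}{\gamma}\,h(x_n),
\qquad\mbox{i.e.}\qquad h(x_n)=p\tilde\gamma .
\]
Substituting $h(x_n)=\beta+\frac{\alpha}{2\pi}(1+\sin 2\pi x_n)$ and solving for $\sin 2\pi x_n$ gives (\ref{eq:p1crit}).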
Saddle-node bifurcations for general $(p,q)$ tongues can be found by numerically solving equation~(\ref{eq:STMpqsolution}) along with the condition that the gradient of the $q^{\rm th}$ iterate of the map is one. Typical Arnold tongues for the STS with $\alpha<\gamma$ are shown in Fig.~\ref{fig:tongues_saddlenodes}. We note that there is a parameter symmetry for existence regions for periodic solutions. Specifically, if for some $\alpha, \beta, \gamma$ there exists a $(p,q)$-periodic orbit, where $p,q \in \mathbb Z^+$, and $p$ and $q$ are relatively prime, then there also exists a $(\tilde p,q)$-periodic orbit, where $\tilde p = p+mq$, $m \in \mathbb Z^+$ for $\alpha, \gamma$ and $$ \tilde \beta = \beta + \frac{m \gamma}{1+\gamma} =\beta + m\tilde\gamma . $$ This symmetry is then reflected in the positions of the tongues: in Fig.~\ref{fig:tongues_saddlenodes}, the $(2,1)$-tongue is a translation of the $(1,1)$-tongue, the $(3,2)$-tongue is a translation of the $(1,2)$-tongue, the $(4,3)$-tongue is a translation of the $(1,3)$-tongue and the $(5,3)$-tongue is a translation of the $(2,3)$-tongue. At $\alpha=\gamma$, the up flow becomes tangent to the upper threshold and the map looses smoothness at the pre-image of this tangency point (see~\eref{eq:mapgradient}). As we will show in the next section, the map develops a discontinuity for $\alpha>\gamma$. We will continue with this example after deriving the general theory. \section{Tangencies leading to gaps}\label{sect:tangencies} As shown by Arnold~\cite{Arnold_1991}, gaps in a threshold map are a result of `shadow' regions in the dynamics, that is, regions for which the upper threshold is unreachable, as illustrated in Fig.~\ref{fig:gaps} for the STS. In this example, there are regions of the upper threshold such that every trajectory from the lower threshold that intersects this region must already have crossed the upper threshold at least once. For sufficiently smooth flows and thresholds, the generic transition from no gaps to gaps will occur as a result of either tangency of the up flow with the upper threshold or the down flow with the lower threshold. \begin{figure} \begin{center} \includegraphics[scale=1.0]{Figure3.pdf} \end{center} \caption{(a) A tangency of the up-flow with the upper threshold for the STS ($\alpha=0.7, \beta=0.15, \gamma=0.5$). The tangency occurs at the point $x=b$ which has pre-image at $x=a$. The figure illustrates how the region local to $a$ maps to two disjoint sets, one local to $b$ and one local to $c$, where $x=c$ is the position of the second intersection. The shadow region is shaded in dark grey and corresponds to $x \in (b,c]$. (b) Corresponding circle map. } \label{fig:gaps} \end{figure} \newcommand{\mathcal{T}}{\mathcal{T}} In this section we will look at parameterised families of threshold systems. We determine generic criteria for the creation of a discontinuity and describe the local behaviour nearby. The construction of the return map involves solving for the zeroes of a function which can be treated in almost precisely the same way as the standard bifurcation theory for fixed points (e.g. \cite{Kuz_1995}, chapters 4 and 8). Let $\mathbb{P}\subset\mathbb R$ be the parameter space. 
Parameterised threshold maps can be thought of as the composition of two maps: the down map $T_d:\mathbb R\times \mathbb P\to \mathbb R$ from the upper boundary to the lower boundary, and the up map $T_u:\mathbb R \times \mathbb P\to \mathbb R$ from the lower boundary to the upper boundary, as described in section~\ref{sect:threshold}, but with the addition of a real parameter, so $T_{u,d}=T_{u,d}(x,\mu)$. The periodicity of the thresholds implies that the crossing times, and hence $T_{u,d}(x,\mu)-x$, have period one in the first variable: $T_{u,d}(x+1,\mu ) = T_{u,d}(x,\mu)+1$ for all $x\in \mathbb R$. Assume that the down map is a smooth bijection of the real line for all $\mu$ in the region of interest. Thus the down trajectory from $(x_n,h(x_n,\mu ))$ will intersect the lower threshold at $(x, g(x,\mu ))$ with $x=T_d(x_n,\mu )$ and for every $x\in \mathbb R$ such an $x_n$ exists. The trajectory under the up flow $\phi$ starting at $(x,g(x, \mu ))$ on the lower threshold will intersect the upper threshold $y=h(x, \mu)$ at time(s) $\tau$ which satisfy \begin{equation}\label{eq:basic0} W(\tau ,x,\mu ):=\phi_\tau (g(x,\mu), \mu )-h(x+\tau,\mu) =0. \end{equation} We will be interested in local behaviour near a solution $(\tau^*,x^*,\mu^*)$ of (\ref{eq:basic0}) representing a first intersection of the up flow with the upper boundary. By shifting coordinates we may choose $(x^*,\mu^*) = (0,0)$ and from now on we assume this shift of coordinates has been implemented. The essential genericity condition on the $x$ and $\mu$ behaviour, assumed throughout this section, is that \begin{equation}\label{eq:bgen} W_2(\tau^*,0,0)\ne 0 \quad \textrm{and} \quad W_3(\tau^*,0,0)\ne 0. \end{equation} Here we use subscripts to denote partial differentiation with respect to the $i^{\rm th}$ variable, e.g., $W_2 = \frac{\partial}{\partial x} W$. \begin{figure} \begin{center} \includegraphics[scale=0.9]{Figure4a.pdf} \includegraphics[scale=0.9]{Figure4b.pdf} \includegraphics[scale=0.9]{Figure4c.pdf} \end{center} \caption{Tangencies leading to gaps. (a) Unique solution $\tau^*$. (b) Existence of a simple tangency between the up flow and the upper boundary. (c) The cusp catastrophe.} \label{fig:gapcreation} \end{figure} The map is locally continuous if in addition $W_1(\tau^*,0,0)\ne 0$. Indeed, a standard application of the Implicit Function Theorem yields a unique and smooth local solution $\tau (x,\mu)$ to~\eref{eq:basic0} of the form \[ \tau =\tau^*-x\,\frac{W_2}{W_1}-\mu\,\frac{W_3 }{W_1}+ O(2) \] where the partial derivatives are evaluated at $(\tau^*,0,0)$ (see Fig.~\ref{fig:gapcreation}(a)). There is a simple tangency between the upwards trajectory and the upper boundary if \begin{equation}\label{eq:tan} W(\tau^*,0,0)=0, \quad W_1(\tau^*,0,0)=0, \quad W_{11}(\tau^*,0,0)\ne 0. \end{equation} Although this is not the \emph{first} tangency it is worth considering as it shows the persistence of jumps in the one-dimensional map. In this case, the intersection equation~(\ref{eq:basic0}) can be written locally as \begin{equation}\label{eq:x2} (\tau -\tau^*)^2=-\frac{2}{W_{11}}\left( W_2x+W_3\mu \right) + O(2). \end{equation} Hence there is a fold along a curve in the $(x,\mu )$-plane given by $W_2x+W_3\mu =0$ in lowest order. The fold has two interpretations: it represents a unique local solution to~\eref{eq:basic0} (a fold point) $\tau(\mu) = \tau^* + O(\mu)$. In the threshold system it also gives a persisting simple tangency between the upwards trajectory and the upper threshold at this point.
When $W_{11}(W_2x+W_3\mu)< 0$ then~\eref{eq:basic0} has locally two solutions, one less and one greater than $\tau^*$. The map is defined using the negative solution to (\ref{eq:x2}) as this represents the first intersection of the up flow and the upper threshold. Locally the second solution does not play a role in defining the map. Furthermore, the derivative $\frac{\partial \tau}{\partial x}=\frac{W_2}{W_{11}(\tau^*-\tau)}$ in lowest order; hence for $(x,\mu)$ with $W_{11}(W_2x+W_3\mu)<0$ approaching the fold curve, the derivative of the map becomes unbounded at $\tau^*$ and exhibits a square root singularity (see Fig.~\ref{fig:gapcreation}(b)). By Definition~\ref{def:TS} there are solutions to $W(\tau ,x,\mu )=0$ even if $W_{11}(W_2x+W_3\mu)>0$. Generically this implies that~\eref{eq:basic0} has a second (typically non-local) solution $(T^*,0,0)$, $T^*>\tau^*$, representing the second intersection between the up flow and the upper threshold (after the first one at $(\tau^*,0,0)$). Generically all the first derivatives at $(T^*,0,0)$ are non-zero and the map can be continued for $W_{11}(W_2x+W_3\mu)>0$, with a discontinuity and finite derivative on approaching the fold curve $W_2x+W_3\mu=O(2)$ from this side. This proves that once the jump exists, it is stable to perturbations of the map. This analysis also implies that there is a second fold between $\tau^*$ and $T^*$ for some fold curve $(x,\mu)$ with $W_{11}(W_2x+W_3\mu)<0$. This fold is not part of the return map and, at the risk of confusion with the standard names in piecewise smooth dynamics (of which this is a natural counterpart), we refer to this second fold as an \emph{invisible} fold and to the fold at $\tau^*$ as a \emph{visible} fold, see Fig.~\ref{fig:gapcreation}(b). So at a standard fold satisfying (\ref{eq:tan}) the discontinuity already exists ($T^*$ of the previous paragraphs is non-local). To obtain the transition from continuity to discontinuity we need a further condition: \begin{equation}\label{eq:tantan} W(\tau^*,0,0)=W_1(\tau^*,0,0)=W_{11}(\tau^*,0,0)= 0, \quad W_{111}(\tau^*,0,0)\ne 0, \end{equation} see Fig.~\ref{fig:gapcreation}(c). Taken together with the genericity condition (\ref{eq:bgen}) this defines a standard unfolding of the cubic singularity: a cusp catastrophe. The standard form for the unfolding of the cusp is \begin{equation}\label{eq:cuspa} A +B u+u^3=0, \quad u=\tau -\tau^*, \end{equation} which has three solutions if $A^2<\frac{4}{27}|B|^3$, $B <0$ and otherwise one solution except on the curves $A^2=\frac{4}{27}|B|^3$, $B <0$, which are the loci of local folds. Using the Taylor series for $W$, the intersection equation (\ref{eq:basic0}) becomes \[\begin{array}{rl} 0= & \left(W_2x+W_3\mu + O((|x|+|\mu |)^2) \right)+\left(W_{12}x+W_{13}\mu +O((|x|+|\mu |)^2) \right)u\\ & +\frac{1}{2}\left(W_{112}x+W_{113} \mu +O((|x|+|\mu |)^2) \right)u^2+\frac{1}{6}\left(W_{111}+O(|x|+|\mu |) \right)u^3+O(u^4). \end{array} \] The transformation $u\to u-\frac{1}{W_{111}}( W_{112}x+W_{113} \mu)$ transforms this to \begin{equation}\label{eq:cuspb} 0=a(x,\mu )+b(x,\mu )u+u^3+O(u^4), \end{equation} where \begin{equation}\label{eq:axmu} a(x,\mu )=\frac{6}{W_{111}}\left(W_2x+W_3\mu\right) +O((|x|+|\mu |)^2) \end{equation} and \begin{equation}\label{eq:bxmu} b(x,\mu )=\frac{6}{W_{111}}\left(W_{12}x+W_{13}\mu\right) +O((|x|+|\mu |)^2). \end{equation} Note that $a(0,0)=b(0,0)=0$. Provided the transformation from $(a,b)$ to $(x,\mu )$ is non-singular, i.e.
provided \begin{equation}\label{eq:Jac0} \det\left(\frac{\partial (a,b)}{\partial (x,\mu)}\right)\Big|_{(0,0)}\ne 0, \end{equation} all the standard results for the unfolding of the $x^3$ singularity in the $(a,b)$-plane carry over to the $(x,\mu )$-plane. A glance at the standard cusp manifold shows that one of the branches of the folds defined by the cusp is invisible and the other is visible, creating a bifurcation curve of jumps in the $(x,\mu )$-plane terminating at $(0,0)$. \begin{lemma}\label{thm:cusp}Let $a(x,\mu )$, $b(x,\mu )$ be as defined in~\eref{eq:axmu} and~\eref{eq:bxmu}. Suppose that the nondegeneracy condition (\ref{eq:bgen}) and the double tangency condition (\ref{eq:tantan}) hold with \begin{equation}\label{eq:ngen} W_2W_{13}-W_3 W_{12}\ne 0. \end{equation} Then for $\sigma\in\{+1,-1\}$ the locus $a=\sigma \frac{2}{3\sqrt{3}}|b|^{\frac{3}{2}}$, $b<0$, gives a fold at $u = \sigma \sqrt\frac{|b|}{3}$. \end{lemma} \begin{proof} Equation (\ref{eq:ngen}) is just a rewriting of (\ref{eq:Jac0}). The remainder is a restatement of properties of the standard unfolding of the singularity $\pm u^3$. \end{proof} Lemma~\ref{thm:cusp} is a re-interpretation of the cusp bifurcation (\cite[section 8.2]{Kuz_1995}). The up map $T_u$ is \[ x \to x+u_m+\tau^*-\frac{W_{112}x+W_{113} \mu}{W_{111}} \] where $u=u_m$ is the smallest $u$ value satisfying (\ref{eq:cuspb}), hence the visible fold corresponds to $\sigma=-1$ in Lemma~\ref{thm:cusp}. The lemma determines the locus of the discontinuity but not the effect. \begin{corollary}\label{cor:jumps}Suppose that (\ref{eq:bgen}), (\ref{eq:tantan}) and~\eref{eq:ngen} hold. Then the corresponding up map $T_u$ has a discontinuity of size of order $\sqrt{|\mu |}$ on the visible fold line. \end{corollary} \begin{proof} The size of the discontinuity is the change in $\tau$ values from the value on the visible fold to the other value of the cubic (\ref{eq:cuspb}). On the visible fold $u^2=-\frac{1}{3}b>0$ to lowest order, and if we choose the negative solution $u=-\sqrt{\frac{-b}{3}}$, then substituting back into (\ref{eq:cuspb}) gives $a=-\frac{2}{3\sqrt{3}}|b|^{\frac{3}{2}}$ \cite{Kuz_1995}. At the fold there are only two distinct solutions. The solution $u\approx -\sqrt{\frac{-b}{3}}$ already discussed is a repeated root, and so the other solution can be found by solving $(u+\sqrt{\frac{-b}{3}})^2(u-\alpha )=a+bu+u^3$ to give $\alpha = 2\sqrt{\frac{-b}{3}}$ to lowest order. The jump is therefore the difference of the two roots, i.e. $3\sqrt{\frac{-b}{3}}$. On the fold $a^2=-\frac{4}{27}b^3$, with $a$ and $b$ linear functions of the original variables $x$ and $\mu$ via (\ref{eq:axmu}) and (\ref{eq:bxmu}). To lowest order this implies that the fold is tangential to the line $a=0$, i.e. $x\sim -W_3\mu/W_2$, and so, evaluating $b$ on this line, the jump becomes \[ 3\sqrt{2\Big| \frac{W_{12}W_3-W_{13}W_2}{W_2W_{111}}\mu\Big|}. \] Thus the jump is of order $\sqrt{|\mu |}$. \end{proof} Note that although we have derived these properties for the up map alone, they are preserved by composition with a smoothly invertible down map. \section{Square root discontinuities in monotonic circle maps}\label{sect:squareroot} The discontinuous circle maps derived in section~\ref{sect:tangencies} have two features: there is an interval of values which cannot be reached by the map, and on one side of this gap the derivative of the map tends to infinity.
In this section, we will consider the fundamental bifurcation sequences for periodic solutions of the simplest form of such maps and illustrate the observations with two examples. In terms of the lift $F$ of such a circle map, defined on the real line with $F(x+1)=F(x)+1$, coordinates can be chosen such that \[ \lim_{x\downarrow 0}F(x)+1>\lim_{x\uparrow 1}F(x) , \] and $F$ is continuously differentiable and strictly increasing on $(0,1)$. In other words the gap is a jump upwards at integer values of $x$. The square root discontinuity implies that \[ \lim_{x\downarrow 0}F^\prime (x)=c>0,\quad F^\prime (x)\to \infty~\textrm{as}~x\uparrow 1, \] or vice versa. Thus $F$ is strictly increasing and hence has a unique rotation number \cite{Rhodes_1986}. Now consider families $(F_\mu )$ of such maps. Provided $F_\mu$ is continuous in $\mu$, the rotation number is a continuous function of $\mu$ (\cite[Theorem 5.8]{Rhodes_1991}). If $F_\mu$ is an increasing function of $\mu$ for each fixed $x$, then the rotation number is monotonic in $\mu$, irrational values of the rotation number exist at isolated values of $\mu$ (\cite[Proposition 6.1]{Rhodes_1991}), and the invariant set for an irrational rotation number is a Cantor set \cite{Keener_1980}. Finally, under the same conditions, the set of $\mu$ with a given rational rotation number is a non-trivial closed interval, i.e. the maps have phase locking analogous to that of continuous homeomorphisms of the circle (\cite[Theorem 6.6]{Rhodes_1991}). The gaps introduce a second mechanism by which phase-locked solutions can be created or destroyed, that is, via border collisions. We will call a border collision with an endpoint with an infinite derivative a \emph{type I} border collision and use \emph{type II} for a border collision with an endpoint with a finite derivative. Within each phase-locked region, there is a branch of periodic solutions connecting $0^+$ with $1^-$. If $F_\mu^\prime (x)<1$ for all $x\in (0,1)$ then there are two border collision bifurcations, one creating a stable fixed point and the other destroying it, and similarly for $F_\mu^q$ in a $p/q$ phase locked region. There can be no saddle-node bifurcations in such maps. However, for maps with a square root singularity $F_\mu^\prime (x)$ cannot be less than one for all $x\in (0,1)$, although, to remain an injection, $F_\mu^\prime (x) <1$ for some $x\in (0,1)$. For maps $F_\mu(x)$ that are monotonic increasing in both $x$ and $\mu$, the border collision with the square root singularity at $1^-$ always leads to the creation of an unstable periodic orbit when $\mu$ is increased. The border collision with~$0^+$ can either lead to the creation of a stable periodic orbit (if $(F^q)^\prime (0^+)<1$) or the loss of an unstable periodic orbit (if $(F^q)^\prime (0^+)>1$). There appear to be two possible `simplest' sequences of bifurcations for the creation/destruction of periodic orbits of a given rotation number $p/q$: \begin{enumerate} \item[(a)] border collision $\to$ border collision $\to$ saddle-node bifurcation; \item[(b)] saddle-node bifurcation $\to$ border collision $\to$ border collision $\to$ saddle-node bifurcation. \end{enumerate} These two bifurcation sequences are illustrated in the bifurcation diagram shown in Fig.~\ref{fig:bifseq}.
Case (a) occurs if $(F^q)^\prime (0^+)<1$: first a stable periodic orbit is created in a border collision at $0^+$, then a second unstable periodic orbit is created in a border collision at $1^-$ and finally both solutions disappear in a saddle-node bifurcation. Case~(b) occurs if $(F^q)^\prime (0^+)>1$. In both bifurcation sequences, there is always a stable periodic solution in each phase-locked region. Of course, the bifurcation sequences could have extra pairs of saddle-node bifurcations. However, if $F_\mu$ is convex, then $F^q_\mu$ is also piecewise convex and case (a) will always occur as the convexity implies that $(F^q)^\prime$ is monotonic increasing, hence at most one saddle-node bifurcation is possible. Note also that $F^\prime (0^+)\lessgtr1$ does not imply that $ (F^q)^\prime (0^+)\lessgtr1$ for all $q>1$, so it may be possible to find both case (a) and case (b) in examples. \begin{figure} \begin{center} \includegraphics[scale=1]{Figure5.pdf} \end{center} \caption{Bifurcation diagrams illustrating the two sequences (a) border collision $\to$ border collision $\to$ saddle-node bifurcation; (b) saddle-node bifurcation $\to$ border collision $\to$ border collision $\to$ saddle-node bifurcation. } \label{fig:bifseq} \end{figure} The monotonicity of the map in the parameter ensures that structures are not repeated; if this parameter monotonicity assumption is not true, the essential results hold true but there may be multiple parameter values with a given irrational rotation number (or even intervals), and the bifurcation structure of phase-locked regions can be considerably more complicated. The important observation is that the number of times $0^+$ (resp. $1^-$) is periodic in a phase locked region must be odd (counting with multiplicity, so a solution that approaches 0 from above and then moves back up again is counted as two intersections) if the rotation number changes from below a given rational number to above it, and so at least one branch of solutions connects $0^+$ to $1^-$. For the fundamental cases (a) and (b) this implies that there may be extra pairs of saddle-node bifurcations and/or border collision bifurcations whose net effect is to add no changes to the stability of orbits, or create isolas. In the next sections, we illustrate the bifurcation sequences (a) and (b) with a `canonical' example and then show how they are manifested in the STS. \subsection{Example: `canonical' square root singularity maps} To illustrate the consequences of the square root singularity we have constructed an example which is monotonic in $x$ and its parameters and demonstrates the bifurcation sequences described above. This example can be derived from threshold models, by using the inverse to define an upper threshold with the gap replaced by a smooth continuation and taking the lower threshold to be a constant. For $n\geq 2$, define the lift $F_n$ by \begin{equation} F_n(x) = a +b\left(1 - c_1 \sqrt{1-x} + (c_1-1)\,\left(\sqrt{1-x}\, \right)^n\right), \quad x\in [0,1), \label{eq:sqrmap} \end{equation} and extend $F_n$ to the real line using the periodicity condition $F_n(x+m)=F_n(x)+m$, $m\in{\mathbb Z}$. Taking \[ c_1 = \frac{nb-2c}{(n-1)b} \] gives \begin{equation}\label{eq:mon_incr} F_n(0)=a, \quad F_n^\prime (0)=c, \quad \textrm{and}\quad \lim_{x\uparrow 1}F_n(x)=a+b. \end{equation} The assumption $b\in(0,1)$ gives a discontinuity at $x=m\in\mathbb{Z}$.
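As a quick check that this choice of $c_1$ indeed gives (\ref{eq:mon_incr}), note that from (\ref{eq:sqrmap}),
\[
F_n(0)=a+b\bigl(1-c_1+(c_1-1)\bigr)=a,\qquad
\lim_{x\uparrow 1}F_n(x)=a+b,
\]
and, differentiating (\ref{eq:sqrmap}) at $x=0$,
\[
F_n^\prime(0)=\frac{b}{2}\bigl(c_1+(1-c_1)n\bigr)=\frac{b}{2}\Bigl(n-(n-1)c_1\Bigr)=\frac{b}{2}\cdot\frac{2c}{b}=c .
\]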
Since \begin{equation}\label{eq:sqrtderiv} F_n^\prime (x) = \frac{b}{2\sqrt{1-x}} \, \left( c_1+(1-c_1)n\left(\sqrt {1-x}\, \right)^{n-1} \right), \end{equation} $F_n$ is the lift of a monotonic circle map if $0<c_1\leq\frac{n}{n-1}$, i.e. $0\le c<\frac{nb}{2}$. Note that at $c=\frac{nb}{2}$, $c_1=0$ and the square root singularity disappears. The lift $F_n(x)$ is increasing in the parameter~$a$. First we look at the case $n=2$, see Fig.~\ref{fig.mapplots}(a), as the structure of the mode-locked region corresponding to rotation number zero (i.e. the fixed points) can be described analytically. We have \begin{equation}\label{eq:simplesqr} F_2(x) = a+ 2(b-c) + (2c-b)x +2(c-b)\sqrt{1-x}. \end{equation} Since \[ F_2^\prime (x)=2c-b+\frac{b-c}{\sqrt{1-x}}, \quad x\in(0,1), \] $F_2$ is a monotonic increasing function on the real line with a quadratic singularity at $x=m\in \mathbb{Z}$ provided $0\leq c<b$. This implies that $F_2$ is convex and $F_2'(0)=c<b<1$. Fixed points of $F_2$ in $(0,1)$ satisfy \[ x=a+b + (b-2c)(1-x) -2(b-c)\sqrt{1-x}, \] and these are created either in border collisions, in which the fixed point is at the discontinuity (0 or 1), or in saddle-node bifurcations (as the map is increasing there can be no other smooth bifurcations). Thus there is a border collision at $x=0$ if $a=0$, and since $F_2'(0)<1$, this is a border collision creating a stable fixed point in $a>0$ locally. There is a border collision at $x=1$ if $a=1-b$, and this is a border collision creating an unstable fixed point in $a>1-b$ locally. Saddle-node bifurcations occur if $F_2^\prime (x)=1$ at a fixed point. A straightforward manipulation shows that the locus of these bifurcations is \begin{equation}\label{eq:locsnb} a=1-b+\frac{(b-c)^2}{1+b-2c}. \end{equation} Fig.~\ref{fig.mapplots}(b) shows the bifurcation structure in the $(a,c)$-plane for $b=0.7$. As $a$ increases there is a border collision bifurcation at $a=0$ which creates a stable fixed point, followed by a border collision bifurcation at $a=0.3$ which creates an unstable fixed point and finally the saddle-node bifurcation destroying the pair of orbits created in the two border collision bifurcations. This is precisely case~(a) as described at the start of this section. \begin{figure}[htb] \centering \includegraphics[scale=1]{Figure6.pdf} \caption{ (a) The map with lift $F_2(x)$ (black) and its second (magenta) and third (cyan) iterates for $a=0.4$, $b=0.7$, $c=0.5$. (b) The bifurcations in the $(a,c)$-plane of the fixed points ($\rho=0$) of the map with lift $F_2$ defined by (\ref{eq:simplesqr}) with $b=0.7$. Border collision bifurcations are shown in red and saddle-node bifurcations are shown in blue. In the light/dark grey regions there are one/two fixed points.} \label{fig.mapplots} \end{figure} Next we return to the general map $F_n$. Since the map has a discontinuity at $x=1$, if the range of $F_n^{m-1}$ contains an integer, then the $m^{th}$ iterate of the circle map $f_n$ with lift~$F_n$ has $m$ discontinuities at the pre-images of $x=1, f_n^{-1}(1), \ldots , f_n^{-(m-1)}(1)$, and the derivative of $f_n^m$ is unbounded on one side of each of these pre-images as illustrated in Fig.~\ref{fig.mapplots}(a). If the map is convex, then the higher iterates are piecewise convex.
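The fixed-point structure of $F_2$ is also easy to explore numerically. The following is a minimal sketch (ours, not the authors' code) that iterates the lift of (\ref{eq:simplesqr}) to estimate the rotation number and compares the right-hand boundary of the $\rho=0$ region with the saddle-node locus (\ref{eq:locsnb}); the function names are illustrative.
\begin{verbatim}
# Minimal sketch: rotation number of the lift F_2 and a check of (eq:locsnb).
import numpy as np

def F2(x, a, b, c):
    """Lift of the circle map F_2 on [0,1), extended by F_2(x+1) = F_2(x)+1."""
    m = np.floor(x)
    xf = x - m
    return m + a + 2*(b - c) + (2*c - b)*xf + 2*(c - b)*np.sqrt(1.0 - xf)

def rotation_number(a, b, c, n_iter=4000, x0=0.5):
    x = x0
    for _ in range(n_iter):
        x = F2(x, a, b, c)
    return (x - x0) / n_iter

b, c = 0.7, 0.5
a_sn = 1 - b + (b - c)**2 / (1 + b - 2*c)   # saddle-node value predicted by (eq:locsnb)
print("predicted saddle-node at a =", a_sn)
for a in (a_sn - 0.02, a_sn + 0.02):
    print("a =", round(a, 4), " rho =", rotation_number(a, b, c))
# Below a_sn the rotation number is 0 (a stable fixed point persists);
# just above a_sn it becomes positive, consistent with the saddle-node.
\end{verbatim}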
\subsubsection{Periodic solutions} For $a\in[0,1]$, $b\in(0,1)$ and $c\in \left[0,\frac {nb}2\right)$, the lift $F_n(x;a,b,c)$ is monotonically increasing in $x$ (hence orientation preserving) and piecewise continuous, so the rotation number is well-defined and, if it is rational, it can be linked to periodic solutions of the map. The map is also increasing in all its parameters, as the derivatives of the lift $F_n$ with respect to the parameters are positive. Indeed \[ \frac{\partial}{\partial a} F_n(x) = 1; \quad \frac{\partial}{\partial b} F_n(x) =\frac{(1-\sqrt{1-x})}{n-1}\, \left(n-\sum_{j=0}^{n-1}\left(\sqrt{1-x}\right)^j\right); \] \[ \frac{\partial}{\partial c} F_n(x) = \frac{2\sqrt{1-x}}{n-1}\left(1-\left(\sqrt{1-x}\right)^{n-1}\right); \quad x\in[0,1]. \] As the maps are increasing injections, the rotation number \[ \rho(a,b,c) = \lim_{k\to\infty}\frac{F_n^k(x;a,b,c)-x}{k} \] is well-defined and increases continuously~\cite{Rhodes_1986} from~$\rho=0$ to~$\rho=1$ when~$a$ increases from~$0$ to~$1$ and~$b$ and~$c$ are fixed. Since $F_n(0^+)=a$, it follows immediately that if $a=m\in\mathbb{Z}$, then $\rho(m,b,c)=m$ for any $b,c$. Consider the border collision bifurcations of fixed points for $F_n$ in $(0,1)$, i.e., the $a$ values for which $\rho\in\mathbb{Z}$. There is a border collision bifurcation if either $F_n(0)=0$ or $\lim_{x\uparrow 1}F_n(x)=1$. These occur at $a=0$ and $a=1-b$ respectively. As $a$ increases, the latter always creates an unstable fixed point, whilst the former creates a stable fixed point if $c<1$ and destroys an unstable fixed point if $c>1$. Compared to $F_2$, the new case here is that if $c>1$ then both border collision bifurcations involve unstable fixed points. In this case the simplest bifurcation sequence is to have case~(b): two saddle-node bifurcations bounding the mode-locked region. Further bifurcation curves have to be determined numerically. \begin{figure}[htb] \begin{center} \includegraphics[scale=1]{Figure7.pdf} \end{center} \caption{The bifurcation set in the $a$-$c$ plane for the map $F_n(x)$ up to the fourth iterate. Parameters: $b=0.9,$ $n=5$, $c\in[0.8,1.85]$. The light/dark shaded regions are regions where one/two fixed points exist with the labelled rotation number. Transitions between different numbers of fixed points are either saddle-node bifurcation curves (blue) or border collisions (red). For each pair of border collision curves, the right curve is the type I border collision and the left curve is the type II border collision. } \label{fig.bif5_9} \end{figure} In Fig.~\ref{fig.bif5_9} the saddle-node bifurcation and border collision curves associated with the map and its first four iterates in the $a$-$c$ plane for $b=0.9$, $n=5$ are depicted. It illustrates the two possible types of fundamental bifurcation sequences and the smooth change from the bifurcation sequence~(a) to the bifurcation sequence~(b). The transition occurs when $(F^q)'(0^+)=1$ and the saddle-node curve merges with the type~II border collision curve. Furthermore it shows: \begin{itemize} \item Every phase locked region is always bounded by a saddle-node curve on the right. \item The border collision on the right is associated with a type~I border collision with an infinite derivative and hence cannot merge with the right saddle-node curve. \item The border collision on the left is associated with a type~II border collision with a finite derivative. For the smaller values of $c$ (the value of $F_n'$ at $0^+$), this is also the left bound of the phase locked region.
\item For some phase locked regions, the left bound for the larger $c$ values is formed by a saddle-node curve. This curve merges with the type~II border collision curve when $c$ decreases. \item The function $F_n$ is convex for $c\in \left[0,\frac{bn}{2(n-1)}\right] = \left[0,0.5625\right]$ and only the bifurcation sequence~(a) occurs for those parameter values. \end{itemize} \subsection{Example: the Sinusoidal Threshold System (STS)} Next we continue with our illustrative example of a threshold system, the STS given by equations (\ref{eq:toymap1})-(\ref{eq:toymap3}). As discussed in section~\ref{sect:tangencies}, this system has an associated circle map with a gap if the up flow is tangent to the upper threshold on contact. This will happen if $h'(x) > \gamma$ for some $x$, i.e. $ \alpha > \gamma$. Thus increasing $\alpha$ smoothly changes the map from continuous to a map with a gap. This enables us to see how the familiar saddle-node bifurcation structure for smooth maps without gaps, as shown in Fig.~\ref{fig:tongues_saddlenodes}, transitions to the bifurcation sequences: first (b) saddle-node bifurcation $\to$ border collision $\to$ border collision $\to$ saddle-node bifurcation, and then (a) border collision $\to$ border collision $\to$ saddle-node bifurcation. Recall that the necessary conditions for a saddle-node bifurcation curve of a $(p,1)$ fixed point are $\beta = p\tilde\gamma$ or $\beta = p\tilde\gamma -\frac\alpha\pi$. We will now see why these conditions are not sufficient. An explicit expression for the infinite derivative type I border collision of a $(p,1)$ fixed point can be derived (recall that $\tilde \gamma = \frac{\gamma}{\gamma+1}$): $$ \alpha= \frac{\gamma^2 + 4\pi^2 \left (\tilde \gamma{p} - \beta\right )^2}{ 4 \pi \left( \tilde \gamma{p}- \beta \right) }, \quad \mbox{for} \quad {\max\left(0\,,\,{p}\tilde \gamma - \frac{\gamma}{2 \pi}\right) \leq \beta < \tilde \gamma{p}}. $$ This curve is a monotonically increasing function of $\beta$, starting at the minimum value of $\beta=\tilde \gamma{p} - \frac{\gamma}{2\pi}$, $\alpha =\gamma$, where $x_n=x_{n+1}=1$, and asymptoting to the saddle-node line $\beta=\tilde \gamma{p}$ where $x_n=x_{n+1}=3/4$. The finite derivative type II border collisions for $(p,1)$ fixed points satisfy \begin{eqnarray*} \cos 2 \pi x_b & = & \frac{\gamma}{\alpha}, \\ \sin 2 \pi x_c & = & \frac{2\pi}{\alpha} \left ( p \tilde \gamma - \beta \right ) - 1, \\ \sin 2 \pi x_c -\sin 2 \pi x_b & = & \frac{2 \pi \gamma}{\alpha} \left ( x_c - x_b \right ), \end{eqnarray*} where $x_b$ and $x_c$ correspond to the points on the periodic cycle indicated by $b$ and $c$ in Fig.~\ref{fig:gaps}. Note that $x_b=x_c$ when $\alpha=\gamma$ and $\beta=p \tilde \gamma - \gamma /2\pi$, at which point the type I and type II border collisions coalesce. To find the intersection of the type~II border collision curve and the saddle-node curve, we observe that $x_c=\frac14$ along the saddle-node curve $\beta=p\tilde\gamma-\frac \alpha\pi$. Substituting this into the equations gives the relation $1+\sqrt{1-q^2}=q(\frac\pi2 + \arccos(q))$, where $q=\frac\gamma\alpha$. This has a unique solution $q^*\in(0,1)$, $q^*\approx 0.725$. Hence when $\alpha = \frac{\gamma}{q^*}$ and $\beta=p\tilde\gamma-\frac \gamma{q^*\pi}$, the type II border collision curve and the saddle-node curve collide and the border collision curve terminates the saddle-node curve. This implies that at $\alpha = \frac{\gamma}{q^*}$, the bifurcation sequence (b) transitions to the sequence (a).
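As a quick numerical check of the quoted value (this short snippet is ours and only confirms the root of the relation above; SciPy is assumed to be available):
\begin{verbatim}
# Solve 1 + sqrt(1-q^2) = q*(pi/2 + arccos(q)) for q in (0,1).
import numpy as np
from scipy.optimize import brentq

f = lambda q: 1 + np.sqrt(1 - q**2) - q*(np.pi/2 + np.arccos(q))
q_star = brentq(f, 1e-6, 1 - 1e-9)
print(q_star)   # approximately 0.7246
\end{verbatim}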
\begin{figure} \centering{\includegraphics[scale=1.0]{Figure8.pdf}} \caption{(a) Bifurcation set for $\gamma=0.5$ showing the relation between border collisions (red) and saddle-node bifurcations (blue). Border collisions to the left hand side of each minimum are of type II and to the right hand side are of type I. The dashed horizontal line at $\alpha=\gamma=0.5$ marks the transition from a continuous map to a map with a gap. The dashed horizontal line at $\alpha=1$ marks the transition from monotonicity to nonmonotonicity. The dashed lines forming the `v' shape mark the transition from single to multiple gaps, see section~\ref{sect:nonmonotonic}. For $\alpha<1$, the light/dark shaded regions correspond to regions of existence of one stable/a pair of fixed points. For $\alpha>1$ the map is nonmonotonic and the dynamics can be more complicated. In this region, period-doubling bifurcations also exist (not shown). (b) Bifurcation diagram showing stable solutions for $\gamma=0.5, \alpha=0.6$ (corresponding to the upper light grey line in (a)). The gaps in the map appear as bands of `forbidden' regions in the bifurcation diagram and result in the Cantor structure for quasi-periodic solutions. (c) Bifurcation diagram showing stable solutions for $\gamma=0.5, \alpha=0.4$ (corresponding to the lower light grey line in (a)). The numerical bifurcation diagram has dark bands corresponding to the fact that there exist quasiperiodic solutions that densely fill the circle.} \label{fig:bifdiagnogapandgap} \end{figure} Border collisions for general $(p,q)$ solutions can be found numerically by solving the fixed point condition (\ref{eq:STMpqsolution}) along with the requirement that the fixed point occurs at the appropriate endpoint of the gap. In Fig.~\ref{fig:bifdiagnogapandgap}(a) the bifurcation set for $\gamma=0.5$ is shown for the first few iterates of the map. Between the lines $\alpha=\gamma=0.5$ and $\alpha=1$ the circle map is monotonic with a gap. In this region, the border collisions form u-shaped regions inside each tongue. The left/right hand side of each u-shaped region corresponds to the type II/type I border collision bifurcations respectively. This illustrates how the sequences of bifurcations seen in continuous circle maps transition to the sequences seen in maps with gaps. The infinite one-sided derivative present in the threshold maps implies that one side of the saddle-node tongues is terminated by a border collision and the other side persists. For $\alpha>1$, the map is non-monotonic. We consider the general transition from monotonic to non-monotonic maps in section~\ref{sect:nonmonotonic} and then continue with this example. \section{Tangencies leading to non-monotonic maps}\label{sect:nonmonotonic} We return to the general threshold maps. We have seen in section~\ref{sect:tangencies} that tangencies of the up flow with the upper threshold create discontinuities in the map. In this section we will show that tangencies of the down flow with the upper threshold lead to multiple pre-images (non-monotonicity) in the down map $T_d$ (see Figure~\ref{fig:nonmonotonicity}). Similarly, tangencies of the up flow with the lower threshold lead to non-monotonicity in the up map $T_u$. We will also discuss how and when tangencies in the up or down flow imply non-monotonicity in the full circle map.
\begin{figure}
\centering{\includegraphics[scale=1.0]{Figure9.pdf}}
\caption{(a) The point $x=0$ has multiple pre-images in the down map $T_d$, leading to non-monotonicity in the associated threshold system circle map; see (b).}
\label{fig:nonmonotonicity}
\end{figure}

Finally we will discuss how simultaneous tangencies in the up and down maps lead to the combination of gaps and nonmonotonicity. This can lead to the presence of multiple gaps (see Fig.~\ref{fig:multiplegaps}) and give rise to a codimension~2 bifurcation which organises the local bifurcation structure. We illustrate the mechanism and the consequent bifurcations with the STS example.

\begin{figure}
\centering{\includegraphics[scale=1.0]{Figure10.pdf}}
\caption{A tangency between the down flow and the upper threshold leads to multiple pre-images, as shown in (a) for the STS ($\alpha=4, \beta=0.5, \gamma=3$). If there is also a tangency between the up flow and the upper threshold, then the corresponding threshold map has multiple gaps, where each gap has the same size with an infinite derivative on one side (at $x_{n+1}=d$) and a finite derivative on the other (at $x_{n+1} = e$).}
\label{fig:multiplegaps}
\end{figure}

\subsection{Existence of nonmonotonicity}
As in section~\ref{sect:tangencies}, we consider a family of parameterised threshold maps with $\mathbb{P}$ the parameter space. The threshold maps are the composition of two maps: the down map $T_d:\mathbb R\times \mathbb P\to \mathbb R$ from the upper boundary to the lower boundary, and the up map $T_u:\mathbb R \times \mathbb P\to \mathbb R$ from the lower boundary to the upper boundary. A map is monotonic if every point in its range has exactly one pre-image. By using the backward flow, we can find the pre-images of the down map. Consider the function
\begin{equation} \label{eq:tW}
\widetilde{W}(\tau,x,\mu) = \psi_{-\tau}(g(x,\mu),\mu)-h(x-\tau),
\end{equation}
i.e., $\widetilde{W}$ is very similar to~$W$ as defined in~(\ref{eq:basic0}), but uses the backward down flow starting at the lower threshold $g(x,\mu)$. If $(\tau^*,x^*,\mu^*)$ satisfies $\widetilde{W}(\tau^*,x^*,\mu^*)=0$, then $x^*-\tau^*$ is a pre-image of $x^*$ for the down map~$T_d$. Using the convention for derivatives from section~\ref{sect:tangencies}, if also $\widetilde{W}_1(\tau^*,x^*,\mu^*)=0$, then there is a tangency between the down flow~$\psi_\tau(h(x,\mu),\mu)$ and the upper threshold $h(x,\mu)$ at $x=x^*-\tau^*$, $\mu=\mu^*$ and $\tau=0$. Due to the similarity between $\widetilde{W}$ and $W$, the results of section~\ref{sect:tangencies} give the local behaviour near a pre-image. Let $(\tau^*,x^*,\mu^*)$ satisfy $\widetilde{W}(\tau^*,x^*,\mu^*)=0$, hence $x^*-\tau^*$ is a pre-image for $x^*$ under $T_d$. Assume the non-degeneracy conditions $\widetilde{W}_2(\tau^*,x^*,\mu^*)\neq 0$ and $\widetilde{W}_3(\tau^*,x^*,\mu^*)\neq 0$; then we have the following results.
\begin{itemize}
\item If $\widetilde{W}_1(\tau^*,x^*,\mu^*)\neq 0$, then for $(x,\mu)$ near $(x^*,\mu^*)$ there is a locally unique pre-image.
\item If $\widetilde{W}_1(\tau^*,x^*,\mu^*)=0$ and $\widetilde{W}_{11}(\tau^*,x^*,\mu^*)\neq 0$, then there is a fold along a curve in the $(x,\mu )$-plane given by $\widetilde{W}_2x+\widetilde{W}_3\mu =0$ in lowest order. The fold again has two interpretations: it represents a unique pre-image along the fold line, and in the threshold system it also gives a persisting simple tangency between the downward trajectory and the upper threshold at this point.
When $\widetilde{W}_{11}(\widetilde{W}_2x+\widetilde{W}_3\mu)< 0$ there are two pre-images, one less than and one greater than $x^*-\tau^*$. Both pre-images are relevant for the map $T_d$, which has turning points at the unique pre-images on the fold line, i.e., at the tangency points; see Fig.~\ref{fig:nonmonotonicity}.
\item If $\widetilde{W}_1(\tau^*,x^*,\mu^*)=0=\widetilde{W}_{11}(\tau^*,x^*,\mu^*)$ and $\widetilde{W}_{111}(\tau^*,x^*,\mu^*)\neq 0$, then generically we have a cusp unfolding and locally there is a change in monotonicity with two turning points emerging in the map~$T_d$.
\end{itemize}
As $T_d$ is periodic, this local analysis shows that globally the map $T_d$ always has an even number of tangency points and an odd number of pre-images (counting multiplicity at the degenerate points).

\subsection{Simultaneous tangencies}
We have now shown that, for both the up and down maps, tangencies between the flow and the threshold from which it departs correspond to non-monotonicity, while tangencies between the flow and the opposite (target) threshold correspond to discontinuities in those maps (section~\ref{sect:tangencies}). The threshold map is a composition of the up and down maps, hence these tangencies will influence the monotonicity and continuity of the threshold map. The derivative of the threshold map is
\[ (T_u\circ T_d)'(x) = T_u'(T_d(x))\, T'_d(x), \]
thus a tangency between the down flow and the upper threshold leads to a turning point in the threshold map. A tangency between the up flow and the lower threshold leads to a turning point in the threshold map if the tangency point lies in the range of the down map~$T_d$. If both the up and down maps have tangencies with the upper threshold, then the result will be a non-monotonic, discontinuous map.

The presence of both gaps and non-monotonicity can also lead to further structural changes in the map: a transition from a single gap to three (or more) gaps. The transition to multiple gaps occurs as follows (see also Figure~\ref{fig:multiplegaps}). Suppose that the up flow is tangent to the upper threshold at a first intersection point at $x=d$. This leads to a gap in the up map~$T_u$, say at $x=x_d$, i.e., $T_u(x^-_d)=d$ and $T_u(x^+_d)=e$ for some $e>d$. When the down map is monotonic (i.e.\ has no tangencies between the down flow and the upper threshold), the point $x=x_d$ will have a single pre-image in the down map and there will be a single gap in the threshold map. When the down map is non-monotonic, the point $x=x_d$ can have three (or more) pre-images, as illustrated in Fig.~\ref{fig:multiplegaps}. In the corresponding circle map, there will be a gap associated with each of these pre-images. In each case the gap arises from the same tangency, thus the size and the qualitative nature of the gap will be preserved. The unbounded derivative can occur either to the left or the right of the gap depending on the slope of the upper threshold at the pre-image. The transition point between one and three gaps occurs when the down flow is tangent to the upper threshold and the corresponding tangency point is mapped by $T_d$ into $x=x_d$. In the notation of Figure~\ref{fig:multiplegaps}, at this special point, we have $b=c$ and the threshold map maps $b$ into $d$. This is an isolated point in the map as all points nearby $b$ get mapped nearby~$e$. The finite derivative at $e$ vanishes and there is no derivative at $d$ as it is an isolated point. How the creation of multiple gaps plays out in the STS circle map is illustrated in Fig.~\ref{fig:wedge}.
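In practice the number of gaps can be read off from the number of pre-images of $x_d$ under the down map, and these pre-images can be located directly from the roots of~(\ref{eq:tW}). The following Python sketch is an illustration only, under our own naming conventions: the user must supply an implementation of $\widetilde W$ (with the parameter $\mu$ held fixed and therefore suppressed); the routine scans for sign changes in $\tau$ and refines each bracketed root by bisection.
\begin{verbatim}
import numpy as np

def down_preimages(x_star, W_tilde, tau_max, n_grid=4000):
    """Pre-images of x_star under the down map T_d via roots of W_tilde.

    W_tilde(tau, x) implements (eq:tW): the backward down flow started on
    the lower threshold at g(x), minus the upper threshold at x - tau.
    Each root tau* of tau -> W_tilde(tau, x_star) gives the pre-image
    x_star - tau*.  Applied with x_star = x_d (the location of the gap in
    T_u), the number of roots counts the gaps in the threshold map.
    Degenerate tangential roots, where W_tilde touches zero without a
    sign change, are not detected by this simple scan.
    """
    taus = np.linspace(0.0, tau_max, n_grid)
    vals = np.array([W_tilde(t, x_star) for t in taus])
    found = []
    for i in range(n_grid - 1):
        a, b = taus[i], taus[i + 1]
        fa, fb = vals[i], vals[i + 1]
        if fa == 0.0:
            found.append(x_star - a)
        elif fa * fb < 0.0:
            for _ in range(60):          # bisection refinement
                m = 0.5 * (a + b)
                fm = W_tilde(m, x_star)
                if fa * fm <= 0.0:
                    b = m
                else:
                    a, fa = m, fm
            found.append(x_star - 0.5 * (a + b))
    return found
\end{verbatim}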
The two points at which tangencies with the upper threshold map into $x=x_d$ correspond to a local maximum and a local minimum of the circle map. These tangencies are mapped into $x=x_d$ when the local maximum coincides with the infinite derivative (see Fig.~\ref{fig:wedge}(a)) or the local minimum coincides with the finite derivative (see Fig.~\ref{fig:wedge}(b)). The isolated point in both cases is marked in orange.

Tangencies in both the up and down flow with the lower threshold do not lead to multiple gaps. This apparent asymmetry is a consequence of the fact that we have considered the map from the upper threshold to the upper threshold ($T_u \circ T_d$). Generically, the tangency of the down flow with the lower threshold creates a gap; this gap persists under the action of the up map and hence creates a gap in the threshold map. Nonmonotonicity of the up map (tangencies of the up flow with the lower threshold) does not affect the pre-images of the gap. In this case, the only mechanism by which multiple gaps can be created is non-monotonicity of the down map, i.e., the down flow being tangent simultaneously to the upper and lower thresholds. Thus the multiple gaps in the threshold map reflect the multiple tangencies in the down map. In section~\ref{sect:cod2} we discuss further the influence of the order of composition on the apparent structure of the map.

The consequences of non-monotonicity for bifurcations in the standard circle map are discussed in~\cite{Mackay_1986}. All the typical features of period-doubling bifurcations and the associated transition to chaos can be seen in threshold systems too.

\subsection{Example: The sinusoidal threshold system (STS)}
Consider the STS given by equations (\ref{eq:toymap1})-(\ref{eq:toymap3}). As the lower threshold is flat, there are no tangencies of the up or down flow with the lower threshold, hence the up map~$T_u$ is monotonic and the down map~$T_d$ is continuous. Thus the STS map is monotonic if and only if the down map~$T_d$ is monotonic, i.e., if and only if $h'(x) \leq 1$ for all~$x$, which is equivalent to $0<\alpha \leq 1$. Transitions from one to three gaps occur when both the up and down flow have tangencies with the upper threshold and the tangency point of the down flow with the upper threshold is mapped into the $T_u$ pre-image of the tangency point of the up flow with the upper threshold. These transitions can be computed numerically and are shown for the particular case $\gamma=0.5$ by the v-shaped curves formed by the dashed lines in Fig.~\ref{fig:bifdiagnogapandgap}(a) and Fig.~\ref{fig:wedge}(c). Even though there are three gaps, no new border collision curves will be formed. This follows from the observation that the fixed points and bifurcation curves of the maps $T_u\circ T_d$ (mapping the upper threshold into the upper threshold) and $T_d\circ T_u$ (mapping the lower threshold into the lower threshold) are equivalent. The latter map is the composition of a non-monotonic, continuous map acting on a monotonic, discontinuous map. So the gap has a unique pre-image, implying that there are at most two border collision curves; see also the section below.

\begin{figure}
\begin{center}
\centering{\includegraphics[scale=1.0]{Figure11.pdf}}
\end{center}
\caption{ (a) Map on the left-hand edge of the v-shaped wedge for $\alpha=1.3$, $\beta=0.3508$, $\gamma=0.5$. This left-hand edge corresponds to the point when the local maximum coincides with the side of the gap with infinite derivative. The orange dot denotes the isolated point in the map.
(b) Map on the right-hand edge of the v-shaped wedge for $\alpha=1.3$, $\beta=0.3653$, $\gamma=0.5$. (c) Bifurcation set for the STS for $\gamma=0.5$ showing a blow-up of the v-shaped region. }
\label{fig:wedge}
\end{figure}

Fig.~\ref{fig:wedge}(c) shows that the type I border collisions for the $(1,1)$-tongue and the type~II border collisions for the $(2,1)$-tongue cross in the three-gap region (inside the v-shaped region). The map at this point is shown in Fig.~\ref{fig:cod2}(c). The two border collision points are denoted by $c_0$ (type II border collision) and $c_1$ (type I border collision), i.e., the tangency between the up flow and the upper threshold is at $c_1$. This implies that there is some $x_d$ such that $T_u(x^-_d)=c_1$, $T_u(x^+_d)=c_0$, and $T_d(c_0)=x_d=T_d(c_1)$. Many of the border collisions from the intermediate tongues appear to converge on this crossing point, suggesting that it forms a codimension two point that organises the local bifurcation structure. In the next section we will show that this is indeed the case.

\begin{figure}
\centering{\includegraphics[scale=0.9]{Figure12.pdf}}
\caption{Maps at the intersection point of the border collisions, the point which is marked by a red dot in Fig.~\ref{fig:wedge}(c). (a) The down map $T_d$, which is non-monotonic because of a tangency of the down flow with the upper threshold. (b) The up map $T_u$, which contains a gap as a consequence of a tangency of the up flow with the upper threshold. (c) The composition $T_d \circ T_u$. (d) The composition $T_u \circ T_d$. }
\label{fig:cod2}
\end{figure}

Though we will not go into the details here, we note that explicit expressions can be derived for the first period-doubling bifurcation for $(p,1)$ fixed points, giving rise to $(2p,2)$ solutions. The case $\gamma=3$ is particularly interesting since at this value, the period-doubling and type~I border collision curves coincide.

\subsection{A codimension two bifurcation}\label{sect:cod2}
In this section we conjecture that if the up flow has a generic tangency with the upper threshold and the down map maps both end points of this gap into the pre-image of the up map associated with the tangency, then the corresponding codimension~2 border collision bifurcation point is an organising centre for the local bifurcations. Introducing the parameter vector $\bmu\in\mathbb{R}^2$, such a codimension~2 point is characterised by the following properties. There exist $c_0$, $c_1$, $c_0\neq c_1$ such that
\begin{eqnarray*}
T_u(x_0^-;\mathbf{0}) & = & c_0, \qquad T_u(x_0^+;\mathbf{0})=c_1,\\
T_d(c_1;\mathbf{0}) & = & T_d(c_0;\mathbf{0})=:x_0, \qquad T'_d(c_1;\mathbf{0})\,T'_d(c_0;\mathbf{0})<0.
\end{eqnarray*}
This implies that the threshold map~$T_u\circ T_d$ has multiple gaps and there are simultaneously two border collisions: one type I border collision at~$c_0$ and one type II border collision at~$c_1$. To analyse this border collision, it is convenient to consider the map from the lower threshold to the lower threshold, i.e., $T_d\circ T_u$. This map has the same fixed points and bifurcations as the threshold map~$T_u\circ T_d$. Under the conditions above, the border collisions in the map $T_d\circ T_u$ collide when $\bmu=\mathbf{0}$. The map is continuous at $x_0$ and the derivatives on each side have opposite signs (with one of them having a square root singularity).
In section~\ref{sect:tangencies} it is shown that the gap in $T_u$ persists for $\bmu$ small and that generically the derivative of $T_u$ near the gap has the same sign at both sides of the gap (with one of them having a square root singularity). For $\bmu\neq\mathbf{0}$, the discontinuity in $T_u$ leads to a discontinuity in the threshold map $T_d\circ T_u$, with the derivatives at each side of the gap still having opposite signs. Hence the parameter plane near $\bmu=\mathbf{0}$ can be divided into four regions which are such that the threshold map has two solutions in one region, one solution in two regions, and no solutions in one region. In~\cite[\S 7.1.1]{Granados_2017}, it is shown that if the local derivatives are less than~1 (i.e., the map is contracting) then such points are organising centres for the local bifurcations. It is also noted that this bifurcation point is equivalent to the \emph{gluing bifurcation} in~\cite{Gambaudo_1984,Gambaudo_1988} and the \emph{big bang bifurcation} in~\cite{Avrutin_2006}. Although we do not satisfy the condition that the derivatives are less than~1 (there is a square root singularity at one of the end points), we still see the organising centre in the bifurcation diagram. Avrutin et al.\ \cite{Avrutin_2010} have studied maps on the real line with similar singularities in the derivative, although our local behaviour does not seem to be one of the cases they study in detail. Our behaviour looks more like a 1D Nordmark map at the grazing point.

\section{Other mechanisms: Cherry flows}\label{sect:Cherryflow}
Circle maps arise naturally in other contexts. If two oscillators interact then a lowest order model might relate the evolution of the phase of each oscillator. In this case the natural phase space is the torus (one angle for each phase), leading to a differential equation on the torus. Examples include neuronal models such as the Kuramoto equations \cite{Kopell_2002} and models of breathing patterns \cite{Baesens_2013b}. Suppose that a flow on the torus has a global cross-section transverse to the flow. Then the return map on the global section is a circle map as discussed in previous sections. There are two natural classes \cite{Baesens_2013a}. In a Poincar\'e flow this map is continuous and monotonic, so the classic results about the existence of rotation numbers and the dichotomy of dynamics depending on whether the rotation number is rational or irrational hold. On the other hand, a Cherry flow \cite{Baesens_2013a,Palis_1982,Palmisano_2015} has at least one unstable stationary point and one saddle, which can create a return map which is monotonic but has a discontinuity. Such maps have a well-defined rotation number and for continuous perturbations of the defining vector field this rotation number varies continuously (see section~\ref{section:intro} and \cite{Rhodes_1986,Rhodes_1991}). In particular, if the family of maps has parameter values with different rotation numbers, then there are parameters with irrational rotation numbers. Since the return map on the cross-section is not surjective, because of the discontinuity, this is a natural way to construct a Denjoy counterexample (a map with an irrational rotation number but no dense orbits). Although Cherry flows are classic examples from geometric dynamics \cite{Palis_1982}, the transition from a Poincar\'e flow to a Cherry flow has not been discussed in the literature.
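Since rotation numbers are central both here and in the threshold setting, we remark that they are straightforward to estimate numerically from a lift of the return map. The following Python fragment is a minimal sketch (the name is ours): it approximates $\rho = \lim_{n\to\infty}(F^n(x)-x)/n$ for any lift $F$ satisfying $F(x+1)=F(x)+1$, which covers the monotonic maps with gaps considered in this paper.
\begin{verbatim}
def rotation_number(lift, x0=0.0, n_iter=100000):
    # lift must satisfy lift(x + 1) == lift(x) + 1
    x = x0
    for _ in range(n_iter):
        x = lift(x)
    return (x - x0) / n_iter
\end{verbatim}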
In this section we give a brief account of the scalings predicted by a theoretical model of this transition and describe a piecewise smooth example. We will show that
\begin{itemize}
\item the size of the gap is finite at the transition point, so the transition is discontinuous; and
\item the slope of the map tends to infinity at both boundaries of the jump.
\end{itemize}
A pair of stationary points can be created in a saddle-node bifurcation. Suppose that $\tilde\mu$ is a real parameter and that if $\tilde\mu >0$ there are no stationary points of the flow, whilst if $\tilde\mu<0$ there is an unstable stationary point and a saddle. Then there are local coordinates $(\xi ,\eta )$ such that in a neighbourhood of $(\xi, \eta ,\tilde\mu )=(0,0,0)$ the leading order terms of the differential equation are
\begin{equation}\label{eq:locsn}\begin{array}{rl} \dot\xi & = \mu +\xi^2\\ \dot\eta & = \lambda \eta\end{array} \end{equation}
where $\lambda>0$ and $\mu$ is a rescaled version of the original parameter $\tilde\mu$. We would like to derive a leading order return map through a neighbourhood of the origin from $\xi<0$ to $\xi >0$ which can then be composed with the standard return maps for Poincar\'e type flows away from this singularity to obtain a theoretical model of the global return map for the transition from a Poincar\'e flow to a Cherry flow. In the classic form (\ref{eq:locsn}), the unstable manifolds of the stationary points are vertical and so it is not possible to define a return map from negative to positive $\xi$. This suggests that for this problem we should add a further change of coordinates
\begin{equation}\label{eq:coords} x=\xi+a\eta^2 , \quad y=\eta, \quad (a>0) \end{equation}
so that in these new coordinates the unstable manifold of the saddle-node stationary point at $\mu=0$ is $x=ay^2$ (i.e.\ $\xi =0$ in (\ref{eq:coords})), making a return map from negative $x$ to positive $x$ possible even if $\mu <0$. Another way of seeing this is to note that generically the unstable manifolds will be quadratic at the saddle-node bifurcation, and this modification of coordinates has the effect of making the unstable manifolds quadratic without complicating the underlying dynamics. In the new coordinates (\ref{eq:locsn}) becomes
\begin{equation}\label{eq:locsna}\begin{array}{rl} \dot x & = \mu +x^2+2a\lambda y^2-2axy^2+a^2y^4\\ \dot y & = \lambda y.\end{array} \end{equation}
As with (\ref{eq:locsn}), (\ref{eq:locsna}) is the leading order approximation of the vector field in a neighbourhood ${\mathcal N}$ of the origin in phase space and parameter space, i.e.\ $|x|^2+|y|^2+|\mu |^2<\epsilon^2$ for some small $\epsilon >0$. Fix $\epsilon >0$ and ${\mathcal N}$ as above and let $k$ be a constant, $0<k<1$, to be determined. Our goal is to derive a return map of (\ref{eq:locsna}) from $x=-k\epsilon$ to $x=k\epsilon$ in ${\mathcal N}$. If the initial condition is $(-k\epsilon, y_0)$ in ${\mathcal N}$ then this corresponds to $(\xi_0 ,\eta_0 )= (-k\epsilon -ay_0^2, y_0)$ and since $a>0$, $\xi_0<0$. Suppose that $\mu >0$, so we can write
\begin{equation}\label{eq:posmu} \mu=\sigma^2, \quad \sigma >0 . \end{equation}
Solutions to (\ref{eq:locsn}) are
\begin{equation}\label{eq:sl} \xi =\sigma\tan (\sigma t+C), \quad \eta =\eta_0\exp(\lambda t), \end{equation}
or
\begin{equation}\label{eq:sol} x =\sigma\tan (\sigma t+C)+ay_0^2\exp(2\lambda t), \quad y=y_0\exp(\lambda t).
\end{equation}
The initial condition $(-k\epsilon, y_0)$ implies that
\[ \sigma\tan C+ay_0^2=-k\epsilon \]
and so as $\sigma \to 0$
\begin{equation}\label{eq:C} C=-\frac{\pi}{2}+\frac{\sigma}{k\epsilon +ay_0^2} +O(\sigma^{2}). \end{equation}
Provided it stays in ${\mathcal N}$ this solution intersects $x=k\epsilon$ after time $T$ given by
\begin{equation}\label{eq:bal} \sigma\tan (\sigma T+C)+ay_0^2\exp(2\lambda T)=k\epsilon . \end{equation}
If $|y_0|$ is sufficiently small so that the first term on the left hand side of (\ref{eq:bal}) dominates the second term, this leaves
\[ \sigma\tan (\sigma T+C)\approx k\epsilon , \]
so
\begin{equation}\label{eq:T} T\approx \frac{\pi}{\sigma} +O(1), \end{equation}
and $T\to \infty$ as $\sigma \downarrow 0$. This approximation holds in an exponentially small range of initial conditions with
\[ |y_0|\ll \epsilon\exp (-\lambda T). \]
However, in this very small neighbourhood of $y_0=0$ the return map through the region where the saddle-node bifurcation is about to take place is approximately
\[ y\to \exp(\lambda T)y\approx (\textrm{e}^{\lambda\pi})^{\frac{1}{\sigma}}y . \]
In other words there is a small neighbourhood on which the slope $s$ of the map is very steep and for $\mu\to 0^+$ tends to infinity with
\begin{equation} \label{eq:slopscal} \log s \approx \frac{\lambda\pi}{\sigma}=\frac{\lambda\pi}{\sqrt{\mu}} . \end{equation}
Note that the constant $\lambda\pi$ is determined by the normal form (\ref{eq:locsn}), and in general $\log s\approx \kappa /\sqrt{\mu}$ for some constant $\kappa$. Now suppose that $\mu<0$, so
\begin{equation}\label{eq:negmu} \mu =-\sigma^2, \quad \sigma \ge 0. \end{equation}
The derivation of the return map is more standard in this case. By construction there are stationary points at $(\pm \sigma , 0)$ and $(-\sigma ,0)$ is a saddle. The unstable manifold of the saddle in $(x,y)$ coordinates is the curve
\[ x=-\sigma +ay^2 \]
and so this intersects $x=k\epsilon$ at $y=\pm\sqrt{(k\epsilon+\sigma)/a}$. Most importantly, this is non-zero for all $\sigma \ge 0$. In other words the map develops a non-zero discontinuity at the bifurcation point $\mu=0$. Moreover, standard analysis (e.g.\ \cite{Glendinning_1994}) close to the saddle shows that the slope of the return map at the discontinuity tends to infinity, as the leading order non-constant term of the return map is
\[ C_1|y|^\alpha, \quad \alpha =2\sqrt{|\mu |}/\lambda <1. \]
To summarise the theoretical predictions we have
\begin{itemize}
\item if $\mu >0$ then as $\mu \downarrow 0$ the global return map develops an exponentially small region on which the slope $s$ of the return map grows large and scales with $\log s\approx \frac{\kappa}{\sqrt{\mu}}$ for some constant $\kappa$;
\item if $\mu\le 0$ then there is a finite discontinuity and if $\mu <0$ then the slope of the map tends to infinity at the discontinuity.
\end{itemize}
The system (\ref{eq:locsna}) can be embedded in a global flow to create a piecewise smooth example of the transition from a Poincar\'e flow to a Cherry flow that can be analysed numerically. Note that there is no reason why a $C^\infty$ interpolating function could not be used to smooth out the discontinuities in the defining flow, but this would not add significantly to the discussion here. We will define three vector fields and then show that they can be used in different regions of the phase space ${\mathbb T}^2=[0,1]^2$ to define a continuous flow on the torus with the desired properties.
In
\[ A =\{(x,y)~|~\textstyle{\frac{3}{8}}<x<\textstyle{\frac{5}{8}},~\textstyle{\frac{3}{8}}<y<\textstyle{\frac{5}{8}}\} \]
we use the saddle-node bifurcation (\ref{eq:locsna}) transformed to the centre of the square:
\begin{equation}\label{Um}
U_\mu (x,y)=\left(\begin{array}{c}\mu +(x-\textstyle{\frac{1}{2}})^2+2a(\lambda -(x-\textstyle{\frac{1}{2}}))(y-\textstyle{\frac{1}{2}})^2+a^2(y-\textstyle{\frac{1}{2}})^4\\ \lambda (y-\textstyle{\frac{1}{2}})\end{array}\right).
\end{equation}
Below the square $A$, in
\[ B =\{(x,y)~|~\textstyle{\frac{3}{8}}<x<\textstyle{\frac{5}{8}},~0<y<\textstyle{\frac{3}{8}}\} \]
define
\begin{equation}\label{V}
V(x,y)=\left(\begin{array}{c}1 \\ b-y \end{array}\right), \quad 0<b<\textstyle{\frac{3}{8}}.
\end{equation}
Finally, in the remainder of the torus
\[ C=[0,1]^2\backslash (A\cup B) \]
define
\begin{equation}\label{W}
W(x,y)=\left(\begin{array}{c}1 \\ c\end{array}\right), \quad c>0.
\end{equation}
The constants in the equations now need to be restricted so that trajectories cross the boundaries between the regions in the same direction, which implies that solutions can be continuously extended across these boundaries (technically this implies that the system has unique Carath\'eodory solutions for all initial conditions \cite{Filippov}). Consider first the boundary between regions $B$ and $C$. This consists of two vertical line segments and one horizontal line segment. On the vertical lines $\dot x =1$ in both (\ref{V}) and (\ref{W}), so both flows are transverse to these surfaces in the same direction. On the horizontal line $y=0$, $\dot y = c$ from below using (\ref{W}) and $\dot y=b$ from above using (\ref{V}), so again the flow is transverse to the boundary and in the same direction since $b,c >0$. There is only one boundary between regions $A$ and $B$: the line segment with $y=\frac{3}{8}$ and $\textstyle{\frac{3}{8}}<x<\textstyle{\frac{5}{8}}$. If $y=\frac{3}{8}$ then the flow in $A$ has $\dot y=-\frac{1}{8}\lambda<0$ whilst the flow in $B$ has $\dot y=b-\frac{3}{8}$, so if
\begin{equation}\label{eq:bcond}
0<b<\textstyle{\frac{3}{8}}
\end{equation}
then $\dot y<0$ from below, and hence once again the flow is transverse to the boundary line segment and crosses it in the same direction from each side. The horizontal boundary between $A$ and $C$ has $\dot y >0$ on both sides, but the vertical boundaries require further constraints. On these vertical boundaries $x-\frac{1}{2}=\pm\frac{1}{8}$ and $|y-\frac{1}{2}|\le\frac{1}{8}$, so writing $z=x-\frac{1}{2}$, from (\ref{Um}) on these boundaries approached from $A$
\[ \dot x = \mu+z^2+2a(\lambda -z)(y-\textstyle{\frac{1}{2}})^2 + a^2(y-\textstyle{\frac{1}{2}})^4 \ge \mu+z^2 \]
provided $\lambda >z$. On these boundaries $z=\pm\frac{1}{8}$, and so if
\begin{equation}\label{eq:ACcond}
\lambda > \textstyle{\frac{1}{8}}\quad \textrm{and} \quad \mu >-\textstyle{\frac{1}{64}}
\end{equation}
then $\dot x> 0$ as either boundary is approached from within $A$. Since $\dot x=1$ in $C$, the consistent transversality condition is satisfied, and solutions pass transversely across these boundaries too. Fig.~\ref{fig:Cherry} illustrates the flow and return maps associated with this model as $\mu$ passes through zero. The parameters used are
\begin{equation}\label{eq:cherrypars}
\lambda =1, \quad a=45, \quad b=0.66\times \textstyle{\frac{3}{8}}, \quad c=0.25, \quad \mu=\pm\textstyle{\frac{1}{70}}.
\end{equation}
With these parameters we expect a gap of size $2\sqrt{\textstyle{\frac{1}{8\times 45}}}\approx 0.105$ to open up as $\mu$ decreases through zero and to increase in size as $\mu$ decreases further.
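The model above is also easy to simulate directly. The following Python sketch is an illustration only, under our own naming conventions, using SciPy's initial value problem solver; a more careful implementation would locate the region boundaries exactly rather than relying on a small maximum step size. It computes the return map on the section $x=0$ and can be used to reproduce the qualitative features of Fig.~\ref{fig:Cherry}(b) and (c).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

lam, a, b, c = 1.0, 45.0, 0.66 * 3.0 / 8.0, 0.25    # parameters (eq:cherrypars)

def vector_field(s, mu):
    """Piecewise vector field (Um), (V), (W) in regions A, B, C of the torus."""
    x, y = s[0] % 1.0, s[1] % 1.0
    if 3/8 < x < 5/8 and 3/8 < y < 5/8:             # region A, equation (Um)
        zx, zy = x - 0.5, y - 0.5
        return [mu + zx**2 + 2*a*(lam - zx)*zy**2 + a**2*zy**4, lam*zy]
    if 3/8 < x < 5/8 and y < 3/8:                   # region B, equation (V)
        return [1.0, b - y]
    return [1.0, c]                                  # region C, equation (W)

def return_map(y0, mu, t_max=2000.0):
    """Return map on the section x = 0: follow the flow until x has increased by 1."""
    def hit(t, s):
        return s[0] - 1.0
    hit.terminal, hit.direction = True, 1
    sol = solve_ivp(lambda t, s: vector_field(s, mu), (0.0, t_max), [0.0, y0],
                    events=hit, max_step=1e-2, rtol=1e-8, atol=1e-10)
    if len(sol.y_events[0]) == 0:                   # did not return within t_max
        return np.nan
    return sol.y_events[0][0, 1] % 1.0

ys = np.linspace(0.0, 1.0, 400, endpoint=False)
poincare = [return_map(y, +1.0/70.0) for y in ys]   # cf. Fig. 13(b)
cherry   = [return_map(y, -1.0/70.0) for y in ys]   # cf. Fig. 13(c)
\end{verbatim}
For $\mu=-\frac{1}{70}$ the sampled map should exhibit a gap whose size agrees with the unstable manifold calculation above.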
\begin{figure}
\centering{\includegraphics[scale=1.1]{Figure13.pdf}}
\caption{The piecewise smooth model of the transition to a Cherry flow with parameters (\ref{eq:cherrypars}). (a) Flow with $\mu =\frac{1}{70}$ showing a stable solution that winds many times around the torus; (b) return map on $x=0$ for $\mu=\frac{1}{70}$ with an enlargement around the region with high derivative ($x_n\in[0.406245, 0.406255]$, $x_{n+1}\in[0.45, 0.7]$); and (c) return map on $x=0$ for $\mu=-\frac{1}{70}$ with enlargements around each end of the discontinuity ($x_n\in[0.4061, 0.4064]$, with $x_{n+1}\in[0.6674, 0.6677]$ for the upper end and $x_{n+1}\in[0.5198, 0.5201]$ for the lower end). }
\label{fig:Cherry}
\end{figure}

Figs~\ref{fig:Cherry}(a-b) show the flow and the associated return map with $\mu=\frac{1}{70}$, i.e.\ when the flow is still a Poincar\'e flow. The return map clearly displays the very steep derivative over a significant region in the $x_{n+1}$ variable, but a very small region in the $x_n$ variable. The inset shows a blow-up of the map in a segment of $x_n$-values containing the region of high derivative. This emphasises just how narrow the regions involved become and illustrates the rapid transition between shallow and steep derivatives. Fig.~\ref{fig:Cherry}(c) shows the associated return map with $\mu=-\frac{1}{70}$, at which the flow is a Cherry flow. For most $x_n$ values, the return map is almost identical, except that the steep curve has been replaced by a gap. The insets show a neighbourhood of the points of discontinuity. This reveals the infinite slope at the point of discontinuity.

\section{Conclusion}
In this paper our focus has been to understand how structural transitions occur in maps derived from fundamental models. We have considered transitions from continuity to discontinuity, monotonicity to nonmonotonicity and the creation of multiple gaps, and have described how these transitions can alter the bifurcations and dynamics of circle maps. Understanding how these structural transitions occur and their consequences suggests some new phenomena and gives a wider context within which to interpret some of the existing literature. For example, in the study of maps with gaps much of the focus has been on gaps where the derivatives at either side of the gap are bounded. Applications of such maps included threshold maps with non-smooth thresholds, like the combs in~\cite{Glendinning_1995} or the triangles and rectangles in~\cite{Arnold_1991}. However, these non-smooth thresholds were introduced as approximations of smooth thresholds to allow for explicit calculations, and there are other applications (e.g.\ impact oscillators) in which finite derivatives do not occur \cite{Avrutin_2010,BBCK}. Generically, we have shown that both in threshold systems and in the creation of Cherry flows one expects the associated discontinuous circle map to have a singularity in the derivative on one (threshold models) or both (Cherry flows) sides of the gap. In the case of threshold systems, it is the contact between the up/down flow and the upper/lower thresholds that is important in determining the local behaviour: if the contact is a tangency, then generically a gap with a square root singularity results. At the first tangency in a family of such maps the size of the gap increases continuously from zero for the threshold models, but for Cherry flows there is a discontinuous jump to a finite gap at the transition point.
We have shown that the natural consequence of the square root singularity is that one expects to see sequences of border collisions (types I and II of section~\ref{sect:squareroot}) interspersed with saddle-node bifurcations. Using a specific example threshold model we have illustrated how the Arnold tongue bifurcation set for continuous monotonic circle maps, with periodic solutions created and destroyed by saddle-node bifurcations, transitions to a bifurcation set where periodic solutions can in addition be created or destroyed by border collisions. This structure underpins the bifurcation sets found numerically by Glass et al.~\cite{Glass_1991,Glass_1979} and the recent work on the two process model for sleep-wake regulation \cite{Bailey_2018}. The transition from no gaps to gaps in piecewise smooth monotonic maps also has an important consequence for non-periodic solutions. With no gaps, non-periodic solutions are quasiperiodic and typically they are dense in the circle. With gaps, solutions tend to a Cantor set \cite{Rhodes_1986}. Once noted, this difference is readily observable in numerically computed bifurcation diagrams, as illustrated in Fig.~\ref{fig:bifdiagnogapandgap}(b) and (c).

For threshold systems, we have identified that the transition to nonmonotonicity is also the result of tangency, this time of the up/down flow with the lower/upper threshold. The presence of both gaps and nonmonotonicity gives many different new possibilities. For example, we have shown that there is a natural transition from circle maps with a single gap to multiple gaps. This in turn leads to a novel codimension two point at which two border collisions coincide. A provisional analysis of this codimension two point shows how it acts as a local organising centre, out of which an infinite sequence of other border collisions emerge (cf.\ \cite{Granados_2017}); details will be published elsewhere. Even the simple example model that we have chosen to illustrate many of our results, the STS, has extremely rich dynamics which we have not classified exhaustively and which will be the subject of future work. Our aim has been more to understand the structure of some specific novel generic situations and provide an overall framework.

\section*{Acknowledgements}
The authors would like to thank the intensive programme on Advances in Nonsmooth Dynamics at the Centre de Recerca Matem\'{a}tica in Spring 2016. Discussions at this programme gave an important impetus to this work. ACS would like to thank Leon Glass for interesting discussions on the background of the STS model at the Biological Oscillator meeting at the European Molecular Biology Laboratory in 2018.

\section*{References}
\bibliographystyle{siamplain}
https://arxiv.org/abs/1203.5845
Accuracy and Stability of Filters for Dissipative PDEs
Data assimilation methodologies are designed to incorporate noisy observations of a physical system into an underlying model in order to infer the properties of the state of the system. Filters refer to a class of data assimilation algorithms designed to update the estimation of the state as data is acquired sequentially. For linear problems subject to Gaussian noise, filtering can be performed exactly using the Kalman filter. For nonlinear systems it can be approximated in a systematic way by particle filters. However, in high dimensions these particle filtering methods can break down. Hence, for the large nonlinear systems arising in applications such as oceanography and weather forecasting, various ad hoc filters are used, based on Gaussian approximations. In this work, we study the accuracy and stability of these ad hoc filters in the context of the 2D incompressible Navier-Stokes equation. The ideas readily generalize to a range of dissipative partial differential equations (PDEs). By working in this infinite dimensional setting we provide an analysis which is useful for the understanding of high dimensional filtering, and is robust to mesh-refinement. We describe theoretical results showing that, in the small observational noise limit, the filters can be tuned to perform accurately in tracking the signal itself (filter accuracy), provided the system is observed in a sufficiently large low dimensional space; roughly speaking, this space should be large enough to contain the unstable modes of the linearized dynamics. The tuning corresponds to what is known as variance inflation in the applied literature. Numerical results are given which illustrate the theory. The positive results herein concerning filter stability complement recent numerical studies which demonstrate that the ad hoc filters can perform poorly in reproducing statistical variation about the true signal.
\section{Introduction}
\label{sec:intro}
Assimilating large data sets into mathematical models of time-evolving systems presents a major challenge in a wide range of applications. Since the data and the model are often uncertain, a natural overarching framework for the formulation of such problems is that of Bayesian statistics. However, for high dimensional models, investigation of the Bayesian posterior distribution of model state given data is not computationally feasible in on-line situations. For this reason various {\em ad hoc} filters are used. The purpose of this paper is to provide an analysis of such filters.

The paradigmatic example of data assimilation is weather forecasting: computational models to predict the state of the atmosphere currently involve ${\mathcal O}(10^{8})$ unknowns, but these models must be run with an initial condition which is only known incompletely. This is compensated for by a large number, currently ${\mathcal O}(10^{6})$, of partial observations of the atmosphere at subsequent times. Filters are widely used to make forecasts by combining the mathematical model of the atmosphere with the data. Indeed the particular method of data assimilation which we study here includes, as a special case, the algorithm commonly known as 3DVAR. This method originated in weather forecasting. It was first proposed at the UK Met Office in 1986 \cite{article:Lorenc1986}, and was developed by the US National Oceanic and Atmospheric Administration (NOAA) soon thereafter; see \cite{article:Parrish1992}. More details of the implementation of 3DVAR by the UK Met Office can be found in \cite{article:Lorenc2000}, and by the European Centre for Medium-Range Weather Forecasts (ECMWF) in \cite{article:Courtier1998}. The 3DVAR algorithm is prototypical of the many more sophisticated filters which are now widely used in practice and it is thus natural to study it. The reader should be aware, however, that the development of new filters is a very active area and that the analysis here constitutes an initial step towards the analyses required for these more recently developed algorithms. For insight into some of these more sophisticated filters see \cite{toth1997ensemble,evensen2009data,VL09, harlim2008filtering,majda2010mathematical} and the references therein.

Filtering can be performed exactly for linear systems subject to Gaussian noise: the Kalman filter \cite{harvey1991forecasting}. For nonlinear or non-Gaussian scenarios the particle filter \cite{doucet2001sequential} may be used and provably approximates the desired probability distribution as the number of particles is increased \cite{bain2008fundamentals}. However in practice this method performs poorly in high dimensional systems \cite{Bickel}. Whilst there is considerable research activity aimed at overcoming this degeneration \cite{van2010nonlinear, chorin2010implicit}, the methodology cannot yet be viewed as a provably accurate tool within the context of the high dimensional problems arising in geophysical data assimilation. In order to circumvent problems associated with the representation of high dimensional probability distributions some form of {\em ad hoc} Gaussian approximation is typically used to create practical filters, and the 3DVAR method which we analyze here is perhaps the simplest example of this. These {\em ad hoc} filters may also be viewed in the framework of nonlinear control theory.
Proving filter stability and accuracy has a long history in this field and the paper \cite{tarn76} is closely related to the work we develop here. However we work in an infinite dimensional setting, in order to directly confront the high dimensionality of many current real-world filtering applications, and this brings new issues to the problem of establishing filter stability and accuracy; overcoming these problems provides the focus of this paper. In the paper \cite{lawstuart} a wide range of Gaussian approximate filters, including 3DVAR, are evaluated by comparing the distributions they produce with a highly accurate (and impractical in realistic online scenarios) MCMC simulation of the desired distributions. The conclusion of that work is that the Gaussian approximate filters perform well in tracking the mean of the desired distribution, but poorly in terms of statistical variability about the mean. In this paper we provide a theoretical analysis of the ability of the filters to estimate the mean state accurately. Although filtering is widely used in practice, much of the analysis of it, in the context of fluid mechanics, works with finite-dimensional dynamical models. Our aim is to work directly with a PDE model relevant in fluid mechanics, the Navier-Stokes equation, and thereby confront the high-dimensional nature of the problem head-on. Study of the stability of filters for data assimilation has been a developing research area over the last few years and the paper \cite{carrassi2008data} contains a finite dimensional theoretical result, numerical experiments in a variety of finite and (discretized) infinite dimensional systems not covered by the theory, and references to relevant applied literature. The paper \cite{trevisan2011chaos} gives a review of many important aspects relating to assimilating the evolving unstable directions of the underlying dynamical system. Our analysis will build in a concrete fashion on the approach in \cite{olson2003determining} and \cite{hayden2011discrete} which were amongst the first to study data assimilation directly through PDE analysis, using ideas from the theory of determining modes in infinite dimensional dynamical systems. However, in contrast to those papers, we will allow for noisy observations in our analysis. Nonetheless the estimates in \cite{hayden2011discrete} form an important component of our analysis. Furthermore the large time asymptotic results in \cite{hayden2011discrete} constitute a limiting case of our theory, where there is no observational noise. The presentation will be organized as follows: in Section \ref{sec:combine} we introduce the Navier-Stokes equation as the forward model of interest and formulate the inverse problem of estimating the velocity field given partial noisy observations. This leads to a family of filters for the velocity field which have the form of a non-autonomous dynamical system which blends the forward model with the data in a judicious fashion; Theorem \ref{t:min} describes this dynamical system via the solution of a sequence of inverse problems. In Section \ref{sec:stability} we introduce notions of stability and prove the Main Theorems \ref{t:m} and \ref{t:mz} concerning filter stability and accuracy for sufficiently small observational noise. In Section \ref{sec:numerics} we present numerical results which corroborate the analysis; and finally, in Section \ref{sec:conclusions} we present conclusions. 
\section{Combining Model and Data} \label{sec:combine} In subsection \ref{ssec:NSE} we describe the forward model that we employ throughout the remainder of the paper: the Navier Stokes equation on a two dimensional torus. Then, in subsection \ref{ssec:filters}, we describe the observational data model that we employ; using this we apply Tikhonov-Phillips regularization to derive the filter which we use to combine model and data. Subsection \ref{ssec:3DVar} contains a specific example of this filter, used later in the paper for our numerical illustrations. This example constitutes a particular instance of the 3DVAR method. \subsection{Forward Model: Navier-Stokes equation} \label{ssec:NSE} In this section we establish a notation for, and describe the properties of, the Navier-Stokes equation. This is the forward model which underlies the inverse problem which we study in this paper. We consider the 2D Navier-Stokes equation on the torus $\mathbb{T}^{2} := [0,L) \times [0,L)$ with periodic boundary conditions: \begin{eqnarray*} \begin{array}{cccc} \partial_{t}u - \nu \Delta u + u \cdot \nabla u + \nabla p &=& f & {\rm for~ all ~} (x, t) \in \mathbb{T}^{2} \times (0, \infty), \\ \nabla \cdot u &=& 0 &{\rm for~ all~ } (x, t) \in \mathbb{T}^{2} \times (0, \infty), \\ u(x, 0) &=& u_{0}(x) &{\rm for~ all ~} x \in \mathbb{T}^{2}. \end{array} \label{eq:NSE} \end{eqnarray*} Here $u \colon \mathbb{T}^{2} \times (0, \infty) \to \mathbb{R}^{2}$ is a time-dependent vector field representing the velocity, $p \colon \mathbb{T}^{2} \times (0,\infty) \to \mathbb{R}$ is a time-dependent scalar field representing the pressure, $f \colon \mathbb{T}^{2} \to \mathbb{R}^{2}$ is a vector field representing the forcing (which we assume to be time-independent for simplicity), and $\nu$ is the viscosity. In numerical simulations (see section~\ref{sec:numerics}), we typically represent the solution via the vorticity $w$ and stream function $\zeta$; these are related through $u = \nabla^{\perp} \zeta$ and $w=\nabla^{\perp} \cdot u$, where $\nabla^{\perp} = (\partial_{2}, -\partial_{1})^{\mathrm{T}}$. We define $${\mathcal H}:= \left\{ {\rm {trigonometric\,polynomials\,}} u:\mathbb{T}^2 \to {\mathbb R}^2\,\Bigl|\, \nabla \cdot u = 0, \,\int_{\mathbb{T}^{2}} u(x) \, \mathrm{d} x = 0 \right\} $$ and $H$ as the closure of ${\mathcal H}$ with respect to the $(L^{2}(\mathbb{T}^{2}))^{2}$ norm. We define $P:(L^{2}(\mathbb{T}^{2}))^{2} \to H$ to be the Leray-Helmholtz orthogonal projector. Given $k = (k_{1}, k_{2})^{\mathrm{T}}$, define $k^{\perp} := (k_{2}, -k_{1})^{\mathrm{T}}$. Then an orthonormal basis for $H$ is given by $\psi_{k} \colon \mathbb{R}^{2} \to \mathbb{R}^{2}$, where \begin{equation} \psi_{k} (x) := \frac{k^{\perp}}{|k|} \exp\Bigl(\frac{2 \pi i k \cdot x}{L}\Bigr)\quad {\rm for} \quad k \in \mathbb{Z}^{2} \setminus \{0\}. 
\label{psik} \end{equation} Thus for $u \in H$ we may write $$u = \sum_{k \in \mathbb{Z}^{2} \setminus \{0\}} u_{k}(t) \psi_{k}(x)$$ where, since $u$ is a real-valued function, we have the reality constraint $u_{-k} = - \overline{u_{k}}.$ We then define the projection operators $P_{\lambda}: H \to H$ and $Q_{\lambda}:H \to H$ by $$P_{\lambda} u = \sum_{|2\pi k|^2 <\lambda L^2} u_{k}(t) \psi_{k}(x), \quad Q_{\lambda}=I-P_{\lambda}.$$ We let $W_{\lambda}=P_{\lambda} H$ and $W_{\lambda}^c=Q_{\lambda} H.$ Using the Fourier decomposition of $u$, we define the fractional Sobolev spaces \begin{equation} \label{eq:Hs} H^{s}:= \left\{ u \in H : \sum_{k \in \mathbb{Z}^{2} \setminus \{0\}} (4\pi^{2}\abs{k}^{2})^{s}\abs{u_{k}}^{2} < \infty \right\} \end{equation} with the norm $\norm{u}_{s}:= \bigl(\sum_{k} (4\pi^{2}\abs{k}^{2})^{s} \abs{u_{k}}^{2}\bigr)^{1/2}$, where $s \in \mathbb{R}$. We use the abbreviated notation $\norm{u}$ for the norm on $H^1$, and $|\cdot|$ for the norm on $H=H^0$. Applying the projection $P$ to the Navier-Stokes equation we may write it as an ODE (ordinary differential equation) in $H$; see \cite{constantin1988navier, temam1995navier, book:Robinson2001} for details. This ODE takes the form \begin{equation} \frac{\mathrm{d} u}{\mathrm{d} t} + \nu Au + \mathcal{B}(u, u) = f, \quad u(0)=u_0. \label{eq:nse} \end{equation} Here, $A = -P \Delta$ is the Stokes operator, the term $\mathcal{B}(u,u) = P(u \cdot \nabla u)$ is the bilinear form found by projecting the nonlinear term $u \cdot \nabla u$ onto $H$ and finally, with abuse of notation, $f$ is the original forcing, projected into $H$. We note that $A$ is diagonalized in the basis comprised of the $\{\psi_k\}_{k \in {\mathbb{Z}}^2\backslash\{0\}}$, on $H$, and the smallest eigenvalue of $A$ is $\lambda_1=4\pi^2/L^2.$ The following proposition is a classical result which implies the existence of a dissipative semigroup for the ODE \eref{eq:nse}. See Theorems 9.5 and 12.5 in \cite{book:Robinson2001} for a concise overview and \cite{temam1995navier,book:Temam1997} for further details. \begin{proposition} \label{prop:1} Assume that $u_0 \in H^1$ and $f \in H$. Then \eref{eq:nse} has a unique strong solution on $t \in [0,T]$ for any $T>0:$ $$u \in L^{\infty}\bigl((0,T);H^1\bigr)\cap L^{2}\bigl((0,T);D(A)\bigr),\quad \frac{du}{dt} \in L^{2}\bigl((0,T);H\bigr).$$ Furthermore the equation has a global attractor $\mathcal{A}$ and there is $K>0$ such that, if $u_0 \in \mathcal{A}$, then $\sup_{t \ge 0}\|u(t)\|^2 \le K.$ \end{proposition} We let $\{\Psi(\cdot,t): H^1 \to H^1\}_{t \ge 0}$ denote the semigroup of solution operators for the equation \eref{eq:nse} through $t$ time units. We note that by working with weak solutions, $\Psi(\cdot,t)$ can be extended to act on larger spaces $H^s$, with $s \in [0,1)$, under the same assumption on $f$; see Theorem 9.4 in \cite{book:Robinson2001}. We will, on occasion, use this extension of $\Psi(\cdot,t)$ to act on larger spaces. The key properties of the Navier-Stokes equation that drive our analysis of the filters are summarized in the following proposition, taken from the paper \cite{hayden2011discrete}. To this end, define $\Psi(\cdot)=\Psi(\cdot,h)$ for some fixed $h>0.$ Note that the statement here is closely related to the {\em squeezing property} \cite{book:Robinson2001} of the Navier-Stokes equation, a property employed in a wide range of applied contexts. Furthermore, many other dissipative PDEs are known to satisfy similar properties. 
\begin{proposition} \label{prop:3}
Let $u \in \mathcal{A}$ and $v \in H^1$. There is $\beta=\beta(|f|,L,\nu)>0$ such that
\begin{equation} \label{eq:1} \|\Psi(u)-\Psi(v)\|^2 \le \exp(\beta h)\|u-v\|^2. \end{equation}
Now let $\|u-v\| \le R$ and assume that
$$\lambda>\lambda^{\star}:=\frac{9c^{8/3}}{\lambda_1^{\frac13}}\Bigl( \frac{2K^\frac12+R^{\frac12}}{\nu}\Bigr)^{8/3}$$
where $c$ is a dimensionless positive constant and $K$ is the constant appearing in Proposition \ref{prop:1} above. Then there exists $t^*=t^*(|f|,L,\nu,\lambda,R)$ with the property that, for all $h \in (0,t^*],$ there is $\gamma \in (0,1)$ such that
\begin{equation} \label{eq:2} \|Q_{\lambda}\bigl(\Psi(u)-\Psi(v)\bigr)\|^2 \le \gamma^2\|u-v\|^2. \end{equation}
\end{proposition}
\begin{proof}
The first statement is simply Theorem 3.8 from \cite{hayden2011discrete}. The second statement follows from the proof of Theorem 3.9 in the same paper, modified at the end to reflect the fact that, in our setting, $P_{\lambda} \delta(0) \ne 0.$ Note also that the constant $\lambda$ appearing on the right hand side of the lower bound for $\lambda$ in the statement of Theorem 3.9 in \cite{hayden2011discrete} should be $\lambda_1$ (as the proof that follows in \cite{hayden2011discrete} shows) and that use of the definition of $K$ (see Theorem 3.6 of that paper) allows the bound to be rewritten in terms of $K$ -- indeed the proof in that paper is expressed in terms of $K$.
\end{proof}

\subsection{Inverse Problem: Filtering}
\label{ssec:filters}
In this section we describe the basic problem of {\em filtering} the Navier-Stokes equation \eref{eq:nse}: estimating properties of the state of the system sequentially from partial, noisy, sequential observations of the state. Throughout the following we write $\mathbb{N}$ for the natural numbers $\{ 1, 2, 3, \dots \}$, and $\mathbb{Z}^{+} := \mathbb{N} \cup \{ 0 \}$ for the non-negative integers $\{ 0, 1, 2, 3, \dots \}$. Let $H$ be the Hilbert space with inner product $\inner{\cdot}{\cdot}$ and norm $|\cdot|$ defined via \eref{eq:Hs} with $s=0$. If $\mathcal{L}^{-1}$ is a self-adjoint, positive-definite compact operator on $H$, which therefore has a symmetric square root $\mathcal{L}^{-1/2}$, we define
\[ \inner{\cdot}{\cdot}_{\mathcal{L}} := \inner{\cdot}{\mathcal{L}^{-1} \cdot}, \qquad \norm{\cdot}_{\mathcal{L}} := |\mathcal{L}^{-1/2} \cdot|. \]
Recall that we have defined $\Psi(\cdot)=\Psi(\cdot,h)$ for some fixed $h>0.$ We let $X$ denote $H^1$ and define $\{ u_j \}_{j \in \mathbb{Z}^{+}}$, $u_j \in X$ by \footnote{With abuse of notation, subscripts $j$ will indicate times, while subscripts $k$ will, as before, denote Fourier coefficients; the meaning should be clear from the context.}
\begin{equation} \label{eqn:ModelWithoutNoise} u_{j+1} := \Psi(u_j). \end{equation}
This discrete dynamical system is well-defined by virtue of Proposition \ref{prop:3}. Thus $u_j=u(jh)$ where $u$ is the solution of \eref{eq:nse}. Our interest is in determining $u_j$ from noisy observations of $P_{\lambda} u_j$. We now let $\{ {\xi}_j \}_{j \in \mathbb{N}}$ be a noise sequence in $W_{\lambda}$ which perturbs the sequence $\{P_{\lambda} u_j \}_{j \in \mathbb{N}}$ to generate the observation sequence $\{ y_j \}_{j \in \mathbb{N}}$ in $W_{\lambda}$ given by
\begin{equation} \label{eqn:Observation} y_{j+1} := P_{\lambda} u_{j+1} + {\xi}_{j+1}, \quad j \in \mathbb{Z}^{+}.
\end{equation}
This observation operator allows us to study partially observed infinite dimensional systems in a clean fashion and the resulting analysis will be a useful building block for the study of other partial observations such as pointwise, or smoothed, observations on a regular grid in physical space. We let $Y_j=\{y_i\}_{i=1}^j$, the accumulated data up to time $t=jh.$ We assume that $u_0$ is not known exactly. The {\em goal} of filtering is to determine the state $u_j$ from the data $Y_j.$

The approach to the problem of blending data and model that we study here is based on Tikhonov-Phillips regularization. We find a point which represents the best compromise between information given by the model and by the data. More precisely, we use the model to provide a regularization of a least squares problem designed to match the data. This will result in a sequence $\{\widehat {u}_j\}_{j \in \mathbb{Z}^+}$ which approximates the true signal $\{u_j\}_{j \in \mathbb{Z}^+}$ giving rise to the data. To define the sequence $\{\widehat {u}_j\}_{j \in \mathbb{Z}^+}$ we introduce two sequences of operators $\{\Gamma_j\}_{j \in \mathbb{N}}$ and $\{\mathcal{C}_j\}_{j \in \mathbb{N}}$ which will be used to weight the contributions of model and data at discrete time $j.$ Then define
\begin{equation} \label{eq:Ij} I_j(u)=\frac12\|y_{j+1}-P_{\lambda} u\|_{\Gamma_{j+1}}^2 +\frac12\|u-\Psi(\widehat {u}_j)\|_{\mathcal{C}_{j+1}}^2. \end{equation}
Choosing a minimizer of this functional encodes the idea of a compromise between the model output $\Psi(\widehat {u}_j)$ and the data $y_{j+1}$, to estimate the state of the system at time $t=(j+1)h$. The operators $\mathcal{C}_{j+1}$ and $\Gamma_{j+1}$ give appropriate weights on the two sources of information. With this in mind we set
\begin{equation} \widehat {u}_{j+1}={\rm arginf}_{u \in H^1} I_j(u). \label{eq:update} \end{equation}
The following theorem shows that this is justified:
\begin{theorem} \label{t:min}
Assume that $\mathcal{C}_{j+1}$ and $\Gamma_{j+1}$ are positive-definite self-adjoint operators on $H$ and $P_{\lambda} H$ respectively, that $\mathcal{C}_{j+1}$ is a bounded operator from $H^1$ into itself and that the norm $\|\cdot\|_{\mathcal{C}_{j+1}}$ is equivalent to the $H^r$ norm for some $r \ge 1.$ Then, if $\widehat {u}_0 \in H^1$, the iteration (\ref{eq:Ij}, \ref{eq:update}) determines a unique sequence $\{\widehat {u}_j\}_{j \in \mathbb{N}}$ in $H^1$ given by
\begin{equation} \label{eq:min} \widehat {u}_{j+1}=\bigl(I-K_{j+1}\bigr)\Psi(\widehat {u}_j)+K_{j+1}y_{j+1} \end{equation}
where
\begin{equation} \label{eq:gain} K_{j+1}=\mathcal{C}_{j+1}P_{\lambda}^*\Bigl(\Gamma_{j+1}+P_{\lambda}\mathcal{C}_{j+1}P_{\lambda}^*\Bigr)^{-1} P_{\lambda}. \end{equation}
\end{theorem}
\begin{proof}
The proof follows by a simple induction. Assume that $\widehat {u}_j \in H^1$. Then let $u=\Psi(\widehat {u}_j)+v$ and note that the functional $J_j: H^{r} \to \mathbb{R}^+$ given by
\begin{equation} \label{eq:Jj} J_j(v)=\frac12\|y_{j+1}-P_{\lambda}\bigl(\Psi(\widehat {u}_j)+v\bigr)\|_{\Gamma_{j+1}}^2 +\frac12\|v\|_{\mathcal{C}_{j+1}}^2 \end{equation}
is convex and weakly lower semi-continuous and hence has a unique minimizer $\widehat {v} \in H^{r}$. Thus $\widehat {u}_{j+1}=\Psi(\widehat {u}_{j})+\widehat {v}$ is in $H^1$, since $\Psi(\widehat {u}_j) \in H^1$ (because $\widehat {u}_j \in H^1$) and $\widehat {v} \in H^{r}$ with $r \ge 1$. Equations (\ref{eq:min}, \ref{eq:gain}) for the minimizer follow from the standard theory of the calculus of variations.
\end{proof} In the case where $\Psi(\cdot)$ is a linear map, the operator $K_{j+1}$ is the {\em Kalman gain} \cite{harvey1991forecasting}, composed with the observation operator $P_{\lambda}.$ We will also study the situation where complete observations are made, obtained by taking $\lambda \to \infty$ in the preceding analyses. The observations are given by \begin{equation} \label{eqn:Observation2} y_{j} := u_{j} + {\xi}_{j}, \quad j \in \mathbb{Z}^{+} \end{equation} where now $y_j, \xi_j \in H.$ The operator $\Gamma_j$ is now a positive self-adjoint operator on $H$. The filter that we study takes the form (\ref{eq:min}) with $P_{\lambda}$ replaced by the identity so that (\ref{eq:gain}) is replaced by \begin{equation} \label{eq:gain2} K_{j+1}=\mathcal{C}_{j+1}\Bigl(\Gamma_{j+1}+\mathcal{C}_{j+1}\Bigr)^{-1}. \end{equation} If we define \begin{equation} \label{eq:Bdef} B_j=I-K_{j+1} \end{equation} then Theorem \ref{t:min} yields the key equation \begin{equation} \label{eq:meanu} \widehat {u}_{j+1}=B_j\Psi(\widehat {u}_j)+(I-B_j)y_{j+1}. \end{equation} This equation and \eref{eq:min} demonstrate that the estimate of the solution at time $j+1$ is found as an operator-convex combination of the true dynamics applied to the estimate of the solution at time $j$, and the data at time $j+1$. We will favor the use of $B_j$ in what follows, rather than $K_{j+1}$, as $B_j$ is the operator which controls the stability, and hence accuracy, of the estimate. This problem can be given a Bayesian formulation in which the probability distribution of $u_j|Y_j$ is the primary object of interest. In practice, for high dimensional systems arising in applications in the atmospheric sciences, various {\em ad hoc} Gaussian approximations are typically used to approximate these probability distributions; the reader interested in understanding the derivation of filters from this probabilistic perspective is directed to \cite{lsetal} for details; this unpublished technical report is an expanded version of the material contained here, incorporating the probabilistic perspective. The mean of a Gaussian posterior distribution has an equivalent formulation as the solution of a Tikhonov-Phillips regularized least squares problem, as we have here, and this perspective on data assimilation is adopted in \cite{pot12}. In \cite{pot12} linear autonomous dynamical systems are studied and filter accuracy and stability results similar to ours are derived. It is demonstrated numerically in \cite{lawstuart} that the Gaussian approximations are, in general, not good approximations. More precisely, they fail to accurately capture covariance information. However, the same numerical experiments reveal that the methodology can perform well in replicating the mean, if parameters are chosen correctly, even if the initial estimate is in error. Indeed this accurate tracking of the mean is often achieved by means of {\em variance inflation} -- increasing the model uncertainty, here captured in the exogenously imposed $\mathcal{C}_j$, in comparison with the data uncertainty, here captured in the $\Gamma_j$. The purpose of the remainder of the paper is to explain, and illustrate, this phenomenon by means of analysis and numerical experiments. \subsection{Example of a Filter: 3DVAR} \label{ssec:3DVar} The algorithm described in the previous section yields the well-known 3DVAR method, discussed in the introduction, when $\mathcal{C}_j \equiv \mathcal{C}$ and $\Gamma_j \equiv \Gamma$ for some fixed operators $\mathcal{C}, \Gamma$.
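For concreteness, the following minimal sketch (in Python, assuming a finite-dimensional state so that the operators become matrices) illustrates the constant-gain update (\ref{eq:meanu}) with gain (\ref{eq:gain2}); the forward map \texttt{Psi}, the matrices and all names are purely illustrative and this is not the spectral implementation used in section \ref{sec:numerics}.
\begin{verbatim}
import numpy as np

def run_filter(Psi, C, Gamma, obs, u0_hat):
    # Constant-gain mean update, complete observations:
    #   u_hat_{j+1} = B Psi(u_hat_j) + (I - B) y_{j+1},
    # with K = C (Gamma + C)^{-1} and B = I - K.
    d = u0_hat.size
    K = C @ np.linalg.inv(Gamma + C)
    B = np.eye(d) - K
    estimates = [u0_hat]
    for y in obs:
        estimates.append(B @ Psi(estimates[-1]) + K @ y)
    return np.array(estimates)
\end{verbatim}
A toy usage would take, say, \texttt{Psi = lambda u: 0.9*u} together with diagonal matrices \texttt{C} and \texttt{Gamma}; the point of the sketch is only to make explicit the role played by $B$ in \eref{eq:meanu}.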
To impose commutativity with $A$, we assume that the operators $\Gamma$ and $\mathcal{C}$ are both fractional powers of the Stokes operator $A$, in $W_{\lambda}$ and $H$ respectively. We choose $A_0=\ell A$ (the parameter $\ell$ forms a useful normalizing constant in the numerical experiments of section \ref{sec:numerics}) and set $\mathcal{C} =\delta^2 A_0^{-2\zeta}$ in $H$ and $\Gamma = \sigma^2 A_0^{-2\beta}$ in $W_{\lambda}.$ Substituting into the update formula \eref{eq:meanu} for $\widehat {u}_j$, and defining $\eta = \sigma / \delta,$ $\alpha = \zeta - \beta$ and $B_0({\eta})=(I+\eta^2 A_0^{2\alpha})^{-1}\eta^2 A_0^{2\alpha}$ in $W_{\lambda}$, we find that in (\ref{eq:meanu}) we have $B_j=B:W_{\lambda} \times W_{\lambda}^c \to W_{\lambda} \times W_{\lambda}^c$, the constant operator \begin{eqnarray} B=\left( \begin{array}{cc} B_0({\eta}) & 0\\ 0 & I \end{array} \right). \label{eq:B} \end{eqnarray} Using this we obtain the mean update formula \begin{equation} \widehat {u}_{j+1} = B\Psi(\widehat {u}_j) + ( I- B) y_{j+1}. \label{eq:up2} \end{equation} Notice that for $\mathcal{C}, \Gamma$ given as above, the algorithm depends only on the three parameters $\lambda, \alpha$ and $\eta$, once the constant of proportionality $\ell$ in $A_0$ is set. The parameter $\lambda$ measures the size of the space in which observations are made; for fixed wavevector $k$, the parameter $\eta$ is a measure of the ratio of the uncertainty in the observations to the uncertainty in the model; and the sign of the parameter $\alpha$ determines whether, for fixed $\eta$ and asymptotically for large wavevectors, the model is trusted more ($\alpha>0$) or less ($\alpha<0$) than the data. This can be seen by noting that if $\alpha>0$ then $B \psi_k \rightarrow \psi_k$, while if $\alpha<0$ then $B \psi_k \rightarrow 0$. In the case $\lambda=\infty$, the case of complete observations where the whole velocity field is noisily observed, we again obtain \eref{eq:up2}, with $B=B_0({\eta})=(I+\eta^2 A_0^{2\alpha})^{-1}\eta^2 A_0^{2\alpha}$ in $H$. The roles of $\eta$ and $\alpha$ are the same as in the finite $\lambda$ (partial observations) case. The discussion concerning parametric dependence with respect to varying $\eta$ shows that, for the example of 3DVAR introduced here, and for both $\lambda$ finite and infinite, variance inflation, which refers to reducing faith in the model in comparison with the data, can be achieved by decreasing the parameter $\eta.$ We will show that variance inflation does indeed improve the ability of the filter to track the signal. \section{Accuracy and Stability} \label{sec:stability} In this section we develop conditions under which it is possible to prove stability of the nonautonomous dynamical system defined by the mean update equation \eref{eq:meanu} and show that, after a sufficiently long time, the true signal is accurately recovered. By stability we here mean that two filters driven by the same noisy observations will converge towards the same estimate of the solution. By accuracy we mean that when the noise perturbing the observations is ${\mathcal O}(\epsilon)$, the filter will converge to an ${\mathcal O}(\epsilon)$ neighbourhood of the true signal, even if initially it is at an ${\mathcal O}(1)$ distance from the true signal. In subsection \ref{ssec:m1} we study the case of partial observations; subsection \ref{ssec:m2} contains the (easier) result for the case of complete observations.
The third subsection \ref{ssec:s2} shows how our results can be applied to the specific instance of the 3DVAR algorithm introduced in subsection \ref{ssec:3DVar}, for any $\alpha \in {\mathbb R}$, provided $\eta$, which is a measure of the ratio of uncertainty in the data to uncertainty in the model, is sufficiently small: this, then, is a result concerning variance inflation. For simplicity, we will assume a ``truth'' which is on the global attractor, as in \cite{hayden2011discrete}. This is not necessary, but streamlines the presentation as it gives an automatic uniform-in-time bound in $H^1$. Recall that $\|\cdot\|$ denotes the norm on $H^1$, and $|\cdot|$ the norm on $H$; similarly we use $\|\cdot\|$ to denote the induced norm of linear operators from $H^1$ to $H^1$. It is useful to recall the filter in the form (\ref{eq:meanu}): \begin{equation} \widehat {u}_{j+1}=B_j\Psi(\widehat {u}_j)+(I-B_j)y_{j+1}. \end{equation} It is also useful to consider a second filter driven by the same data $\{y_j\}_{j \in \mathbb{Z}^+}$, but possibly started at a different point: \begin{equation} \label{eq:meanw} \widehat {w}_{j+1}=B_j\Psi(\widehat {w}_j)+(I-B_j)y_{j+1}. \end{equation} \subsection{Main Result: Partial Observations} \label{ssec:m1} In this case we will see that it is crucial that the observation space $W_{\lambda}$ is sufficiently large, i.e. that a sufficiently large number of modes are observed. This, together with the contractivity in the high modes encapsulated in Proposition \ref{prop:3} from \cite{hayden2011discrete}, can be used to ensure stability when combined with variance inflation. We study filters of the form given in \eref{eq:meanu} and make the following assumption on the observations $\{y_j\}.$ \begin{assumption} \label{a:1} Consider a sequence $u_j=u(jh)$, where $u(t)$ is a solution of \eref{eq:nse} lying on the global attractor $\mathcal{A}$. Then, for some $\lambda \in (\lambda_1,\infty)$, $$y_j=P_{\lambda} u_j+\xi_j$$ for some sequence $\xi_j$ satisfying $\sup_{j \ge 1}\|\xi_j\| \le \epsilon.$ \end{assumption} Note that this assumption, concerning uniform boundedness of the noise, is not satisfied in the i.i.d. Gaussian case. However we do expect that a more involved analysis would enable us to handle Gaussian noise, at the expense of proving results in mean square, or in probability. Indeed in \cite{blsz12} we study a continuous time limit of the set-up contained in this paper, in which white noise forcing arises from the i.i.d. Gaussian noise; accuracy and stability results can then indeed be proved in mean square and in probability. However, we believe that, for clarity, the assumption made in this paper enables us to convey the important ideas in the most straightforward fashion. We make the following assumption about the family $\{B_j\}$ and its assumed dependence on a parameter $\eta \in {\mathbb R}^+.$ Recall that the inverse of $\eta$ quantifies the amount of variance inflation. \begin{assumption} \label{a:2} The family of positive operators $\{B_j(\eta)\colon H^1 \to H^1\}_{j \ge 1}$ commute with $A$, satisfy $\sup_{j \ge 1}\|B_j(\eta)\| \le 1$, and $\sup_{j \ge 1}\|I-B_j(\eta)\| \le b$ for some $b \in {\mathbb R}^+,$ uniformly with respect to $\eta$. Furthermore, $\bigl(I-B_j(\eta)\bigr)Q_{\lambda} \equiv 0$ and there is, for all $\lambda>\lambda_1$, a constant $c=c(\lambda)>0$ such that $\sup_{j \ge 1}\|P_{\lambda} B_j(\eta)\| \le c\eta^2.$ \end{assumption} We now study the asymptotic behaviour of the filter under these assumptions.
\begin{theorem} \label{t:m} Let Assumptions \ref{a:1} and \ref{a:2} hold, choose any $\widehat {u}_0, \widehat {w}_0 \in \mathbb{B}_{H^1}\bigl(u(0),r\bigr)$ and let $(\lambda^{\star},t^*)$ be as given in Proposition \ref{prop:3}. Assume that $\lambda>\lambda^{\star}$. Then for any $h \in (0,t^*]$ there is $\eta$ sufficiently small so that the sequences $\{\widehat {u}_j\}_{j \ge 0}$, $\{\widehat {w}_j\}_{j \ge 0}$ given by \eref{eq:meanu}, \eref{eq:meanw} satisfy, for some $a \in (0,1)$, $$\|\widehat {u}_j-\widehat {w}_j\| \le a^j\|\widehat {u}_0-\widehat {w}_0\|$$ and $$\|\widehat {u}_j-u_j\| \le a^j r+2b\epsilon\sum_{i=0}^{j-1} a^i.$$ Hence $$\limsup_{j \to \infty}\|\widehat {u}_j-u_j\| \le \frac{2b}{1-a}\epsilon.$$ \end{theorem} \begin{proof} We prove the second, accuracy, result concerning $\|\widehat {u}_j-u_j\|.$ The stability result concerning $\|\widehat {u}_j-\widehat {w}_j\|$ is proved similarly. Assumption \ref{a:1}, together with \eref{eqn:ModelWithoutNoise}, shows that $y_{j+1}=P_{\lambda}\Psi(u_j)+\xi_{j+1}.$ Recall that in \eref{eq:meanu} $y_{j+1}$ has been extended to an element of $H$, by defining it to be zero in $W_{\lambda}^c$, and we do the same with $\xi_{j+1}.$ Substituting the resulting expression for $y_{j+1}$ in \eref{eq:meanu} we obtain $$\widehat {u}_{j+1}=B_j \Psi(\widehat {u}_j)+(I-B_j)P_{\lambda}\Psi(u_j)+(I-B_j)\xi_{j+1} $$ but since $(I-B_j)Q_{\lambda} \equiv 0$ by assumption we have \begin{equation} \label{eq:need} \widehat {u}_{j+1}=B_j \Psi(\widehat {u}_j)+(I-B_j)\Psi(u_j)+(I-B_j)\xi_{j+1}. \end{equation} Note also that $$u_{j+1}=B_j \Psi(u_j)+(I-B_j)\Psi(u_j).$$ Subtracting gives the basic equation for error propagation, namely \begin{equation} \widehat {u}_{j+1}-u_{j+1}=B_j\bigl(\Psi(\widehat {u}_j)-\Psi(u_j)\bigr) +(I-B_j)\xi_{j+1}. \label{eq:error} \end{equation} Since $\lambda>\lambda^{\star}$ the second statement in Proposition \ref{prop:3} holds. Fix $a \in (\gamma,1)$ where $\gamma$ is defined in Proposition \ref{prop:3}. Assume, for the purposes of induction, that $$\|\widehat {u}_j-u_j\| \le a^j r+2b\epsilon\sum_{i=0}^{j-1} a^i.$$ Define $R=2r$ noting that the inductive hypothesis implies that, for $\epsilon$ sufficiently small, $\|\widehat {u}_j-u_j\| \le r+2b(1-a)^{-1}\epsilon \le R.$ Applying $P_{\lambda}$ to \eref{eq:error} and using \eref{eq:1} gives \begin{eqnarray*} \begin{array}{cc} \|P_{\lambda}(\widehat {u}_{j+1}-u_{j+1})\| &\le \|P_{\lambda}B_j\|\|\bigl(\Psi(\widehat {u}_j)-\Psi(u_j)\bigr)\| +\|P_{\lambda}(I-B_j)\|\epsilon\\ &\le c(\lambda) \eta^2 \exp(\beta h/2)\|\widehat {u}_j-u_j\|+b\epsilon. \end{array} \end{eqnarray*} Applying $Q_{\lambda}$ to \eref{eq:error} and using \eref{eq:2} gives\footnote{The term $b\epsilon$ on the right-hand side of the final inequality can here be set to zero because $(I-B_j)Q_{\lambda} \equiv 0$; however in the analogous proof of Theorem \ref{t:mz} it is present and so we retain it for that reason.} \begin{eqnarray*} \begin{array}{cc} \|Q_{\lambda}(\widehat {u}_{j+1}-u_{j+1})\| &\le \|B_j\|\|Q_{\lambda}\bigl(\Psi(\widehat {u}_j)-\Psi(u_j)\bigr)\| +\|Q_{\lambda}(I-B_j)\|\epsilon\\ &\le \gamma\|\widehat {u}_j-u_j\|+b\epsilon.
\end{array} \end{eqnarray*} Now note that, for any $w \in H^1$, $\|w\| =\bigl(\|P_{\lambda} w\|^2+\|Q_{\lambda} w\|^2\bigr)^{\frac12} \le \|P_{\lambda} w\|+\|Q_{\lambda} w\|.$ Thus, by adding the two previous inequalities, we find that $$\|\widehat {u}_{j+1}-u_{j+1}\| \le \bigl(c(\lambda) \eta^2 \exp(\beta h/2)+\gamma\bigr) \|\widehat {u}_j-u_j\|+2b\epsilon.$$ Since $\gamma\in (0,1)$ and $a \in (\gamma,1)$, we may choose $\eta$ sufficiently small so that $$\|\widehat {u}_{j+1}-u_{j+1}\| \le a\|\widehat {u}_j-u_j\|+2b\epsilon,$$ and the inductive hypothesis holds with $j \mapsto j+1$. Taking $j \to \infty$ gives the desired result concerning the limsup. \end{proof} \begin{remark} \label{rem:n} Note that the proof exploits the fact that $B_j\Psi(\cdot)$ induces a contraction within a finite ball in $H^1$. This contraction is established by means of the contractivity of $B_j$ in $W_{\lambda}$, via variance inflation, and the squeezing property of $\Psi(\cdot)$ in $W_{\lambda}^c$, for a large enough observation space, from Proposition \ref{prop:3}. There are two important conclusions from this theorem. The first is that, even though the solution is only observed in the low modes, there is sufficient contraction in the high modes to obtain an error in the entire estimated state which is of the same order of magnitude as the error in the (low mode only) observations. The second is that this phenomenon occurs even when the initial estimate suffers from an ${\mathcal O}(1)$ error. Of course this result also shows that, if the estimator starts in an ${\mathcal O}(\epsilon)$ neighbourhood of the truth, then it remains in an ${\mathcal O}(\epsilon)$ neighbourhood of the truth for all positive time. \label{rem:1} \end{remark} \subsection{Main Result: Complete Observations} \label{ssec:m2} Here we study filters of the form given in \eref{eq:meanu} with observations given by \eref{eqn:Observation2}. In this situation the whole velocity field is observed and so, intuitively, it should be no harder to obtain stability than in the partially observed case. The proof is in fact almost identical to the case of partial observations, and so we omit the details. We observe that, although there is no parameter $\lambda$ in the problem statement itself, it is introduced in the proof: as in the previous subsection, see Remark \ref{rem:n}, the key to stability is to obtain contraction in $W_{\lambda}^c$ using the squeezing property of the Navier-Stokes equation, and contraction in $W_{\lambda}$ using the properties of the filter to control unstable modes. We make the following assumptions: \begin{assumption} \label{a:1z} Consider a sequence $u_j=u(jh)$ where $u(t)$ is a solution of \eref{eq:nse} lying on the global attractor $\mathcal{A}$. Then $$y_j=u_j+\xi_j$$ for some sequence $\xi_j$ satisfying $\sup_{j \ge 1}\|\xi_j\| \le \epsilon.$ \end{assumption} \begin{assumption} \label{a:2z} The family of positive operators $\{B_j(\eta)\colon H^1 \to H^1\}_{j \ge 1}$ commute with $A$, satisfy $\sup_{j \ge 1}\|B_j(\eta)\| \le 1$, and $\sup_{j \ge 1}\|I-B_j(\eta)\| \le b$ for some $b \in {\mathbb R}^+,$ uniformly with respect to $\eta.$ Furthermore, for all $\lambda>\lambda_1$, there is a constant $c=c(\lambda)>0$ such that $\sup_{j \ge 1}\|P_{\lambda} B_j(\eta)\| \le c\eta^2.$ \end{assumption} We now study the asymptotic behaviour of the filter under these assumptions.
\begin{theorem} \label{t:mz} Let Assumptions \ref{a:1z} and \ref{a:2z} hold and choose any $\widehat {u}_0, \widehat {w}_0 \in \mathbb{B}_{H^1}\bigl(u(0),r\bigr).$ Then for any $h \in (0,t^*]$, with $t^*$ given in Proposition \ref{prop:3}, there is $\eta$ sufficiently small so that the sequences $\{\widehat {u}_j\}_{j \ge 0}$, $\{\widehat {w}_j\}_{j \ge 0}$ given by \eref{eq:meanu}, \eref{eq:meanw} satisfy, for some $a \in (0,1)$, $$\|\widehat {u}_j-\widehat {w}_j\| \le a^j\|\widehat {u}_0-\widehat {w}_0\|$$ and $$\|\widehat {u}_j-u_j\| \le a^j r+2b\epsilon\sum_{i=0}^{j-1} a^i.$$ Hence $$\limsup_{j \to \infty}\|\widehat {u}_j-u_j\| \le \frac{2b}{1-a}\epsilon.$$ \end{theorem} \begin{proof} The proof is nearly identical to that of Theorem \ref{t:m}. Differences arise only because we have not assumed that $(I-B_j)Q_{\lambda} \equiv 0.$ This fact arises in two places in the proof of Theorem \ref{t:m}. The first is where we obtain \eref{eq:need}. However in this case we directly obtain \eref{eq:need} since the whole velocity field is observed. The second place it arises is already dealt with in the footnote appearing in the proof of Theorem \ref{t:m} when estimating the contraction properties in $W_{\lambda}^c$; there we indicate that the proof is already adjusted to allow for the situation required here. \end{proof} \begin{remark} \label{r:2} If $\sup_{j \ge 1}\|B_j(\eta)\|<c\eta^2$ then the proof may be simplified considerably as it is not necessary to split the space into two parts, $W_{\lambda}$ and $W_{\lambda}^c$. Instead the contraction of $B_j$ can be used to control any expansion in $\Psi(\cdot)$, provided $\eta$ is sufficiently small. \end{remark} \subsection{Example of Main Result: 3DVAR} \label{ssec:s2} We demonstrate that the 3DVAR algorithm from subsection \ref{ssec:3DVar} satisfies Assumptions \ref{a:2} and \ref{a:2z} in the partially and completely observed cases respectively, and hence Theorems \ref{t:m} and \ref{t:mz} respectively may be applied to the resulting filters. In particular, the filters will locate the true signal, provided $\eta$ is sufficiently small. Satisfaction of Assumptions \ref{a:2} and \ref{a:2z} follows from the properties of $$B_0({\eta})=(I+\eta^2A_0^{2\alpha})^{-1}\eta^2 A_0^{2\alpha}, \quad I-B_0({\eta})=(I+\eta^2A_0^{2\alpha})^{-1}.$$ Note that the eigenvalues of $B_0({\eta})$ are $$\frac{\eta^2\bigl(4\ell\pi^2|k|^2\bigr)^{2\alpha}}{1+\eta^2\bigl(4\ell\pi^2|k|^2\bigr)^{2\alpha}},$$ if $A_0=\ell A.$ Clearly the spectral radius of $B_0({\eta})$ is less than or equal to one on $W_{\lambda}$ or $H$, independently of the sign of $\alpha.$ The difference is just that $|k|^2<\lambda/\lambda_1$ in the former, and $|k|$ is unbounded in the latter. First we consider the partially observed situation. We note that $B_j \equiv B$ and that it is given by \eref{eq:B}: \begin{eqnarray} B=\left( \begin{array}{cc} (I+\eta^2A_0^{2\alpha})^{-1}\eta^2 A_0^{2\alpha} & 0\\ 0 & I \end{array} \right) \label{be} \end{eqnarray} so that the Kalman gain-like operator $I-B$ is given by \begin{eqnarray} I-B=\left( \begin{array}{cc} (I+\eta^2A_0^{2\alpha})^{-1} & 0\\ 0 & 0 \end{array} \right). \label{ibe} \end{eqnarray} From this it is clear that $(I-B)Q_{\lambda} \equiv 0.$ Furthermore, since the spectral radius of $B_0({\eta})$ does not exceed one, the same is true of $B$.
Hence for the operator norms from $H^1$ into itself we have $\|B\| \le 1.$ Similarly, if $\alpha<0$ then $b:=\|I-B\|=1$, whilst if $\alpha \ge 0$ then $b=\Bigl(1+\eta^2(\ell \lambda_1)^{2\alpha}\Bigr)^{-1}<1.$ Thus Theorem \ref{t:m} applies. Note that $P_{\lambda}B=B_0P_{\lambda}$ and that $\|B_0\|=O(\eta^2)$ on $W_{\lambda}$, so that the final condition of Assumption \ref{a:2} also holds. In the fully observed case we simply have $B_j \equiv B$ where $B=B_0(\eta)$ defined above on $H$. Again $\|B\| \le 1$ and if $\alpha<0$ then $\|I-B\|=b=1$, whilst if $\alpha \ge 0$ then $b=\Bigl(1+\eta^2(\ell\lambda_1)^{2\alpha}\Bigr)^{-1}<1.$ Thus Theorem \ref{t:mz} applies. Note (see Remark \ref{r:2}) that if $\alpha<0$ then the proof of that theorem could be simplified considerably because $\|B\|<1$ and in fact $\sup_{j \ge 1}\|B\|<c\eta^2.$ \begin{remark} \label{r:3} We observe that the key conclusion of Theorems \ref{t:m} and \ref{t:mz} is the asymptotic accuracy of the algorithm, when started at an ${\mathcal O}(1)$ distance from the truth. The asymptotic bound, although of ${\mathcal O}(\epsilon)$, has constant $\frac{2b}{1-a}$ which may exceed $1$ and so the bound may exceed the error obtained by simply using the observations to estimate the signal. Our numerics will show, however, that in practice the algorithm gives estimates of the state which improve upon the observations. \end{remark} \section{Numerical Results} \label{sec:numerics} In this section we describe a number of numerical results designed to illustrate the range of filter stability phenomena studied in the previous sections. We start, in subsection \ref{ssec:prob}, by describing two useful bounds on the error committed by filters; we will use these as guides in the subsequent numerics. Subsection \ref{ssec:results} describes the common setup for all the subsequent numerical results shown. Subsection \ref{ssec:complete} describes these results in the case of complete observations in discrete time, whilst Subsection \ref{ssec:partial} extends these to the case of partial observations, also in discrete time. Our theoretical results have been derived under Assumptions \ref{a:1} and \ref{a:1z} on the errors. These are incompatible with the assumption that the observational noise sequence is Gaussian. This is because i.i.d. Gaussian sequences will not have finite supremum. However, in order to test the robustness of our theory we will conduct numerical experiments with Gaussian noise sequences. \subsection{Useful Error Bounds} \label{ssec:prob} We describe two useful bounds on the error which help to guide and evaluate the numerical simulations. To derive these bounds we assume that the observational noise sequence $\xi_j$ is i.i.d. with $\mathbb{E} \xi_j=0$ and $\mathbb{E} \xi_j \otimes \xi_j = \Gamma$. Then $$\mathbb{E} |\xi_j|^2={\rm tr} (\Gamma) = \sum_k g_k$$ where $\{g_k\}$ are the eigenvalues of the operator $\Gamma.$ \begin{itemize} \item The lower bound is derived from (\ref{eq:error}). Using the assumed independence of the sequence we see that \begin{equation} \mathbb{E} |\widehat {u}_{j+1}-u_{j+1} |^2 \ge \mathbb{E} |(I-B_j)\xi_{j+1} |^2= {\rm tr}\Bigl((I-B_j)\Gamma(I-B_j)^*\Bigr). \label{eq:lowerbd} \end{equation} \item The upper bound on the filter error is found by noting that a trivial filter is obtained by simply using the observation sequence as the filter mean; this corresponds to setting $B_j \equiv 0$ in (\ref{eq:meanu}).
For this filter we obtain \begin{equation} \mathbb{E} |\widehat {u}_{j+1}-u_{j+1} |^2 \le \mathbb{E} |\xi_{j+1} |^2= {\rm tr}\bigl(\Gamma\bigr) \label{eq:upperbd} \end{equation} in the case of complete observations, and \begin{equation} \mathbb{E} |\widehat {u}_{j+1}-u_{j+1} |^2 \le \mathbb{E} |\xi_{j+1} |^2+|Q_{\lambda} u_{j+1}|^2= {\rm tr}\bigl(\Gamma\bigr)+|Q_{\lambda} u_{j+1}|^2 \label{eq:upperbd2} \end{equation} in the case of incomplete observations. \end{itemize} Although the lower bound (\ref{eq:lowerbd}) does not hold {\em pathwise}, only on average, it provides a useful guide for our pathwise experiments. The upper bounds (\ref{eq:upperbd}) and (\ref{eq:upperbd2}) do not apply to any numerical experiment conducted with non-zero $B_j$, but they nonetheless serve as a useful guide to those experiments: it is clearly undesirable to greatly exceed the error committed by simply trusting the data. We will hence plot the lower and upper bounds as useful comparators for the actual error incurred in our numerical experiments below. We note that, for the 3DVAR example from subsection \ref{ssec:3DVar} with complete observations, the upper and lower bounds coincide in the limit $\eta \to 0$ as then $B \to 0$. For partial observations they differ by the second term in the upper bound. \subsection{Experimental Setup} \label{ssec:results} For all the results shown we choose a box side of length $L=2$. The forcing in Eq. (\ref{eq:nse}) is taken to be $f=\nabla^{\perp}\psi$, where $\psi=\cos(\pi k \cdot x)$ and $\nabla^{\perp}=J\nabla$ with $J$ the canonical skew-symmetric matrix, and $k=(5,5)$. The method used to approximate the forward model (\ref{eq:nse}) is a modification of a fourth-order Runge-Kutta method, ETD4RK \cite{cox2002exponential}, in which the Stokes semi-group is computed exactly by working in the incompressible Fourier basis $\{\psi_{k}(x)\}_{k \in {\mathbb Z}^2\backslash\{0\}}$ in Eq. (\ref{psik}), and Duhamel's principle (variation of constants formula) is used to incorporate the nonlinear term. Spatially, a Galerkin spectral method \cite{hesthaven2007spectral} is used, in the same basis, and the convolutions arising from products in the nonlinear term are computed via FFTs. We use a double-sized domain in each dimension, buffered with zeros, resulting in $64^2$ grid-point FFTs, and only half the modes in each direction are retained when transforming back into spectral space in order to prevent aliasing, which is avoided as long as fewer than 2/3 of the modes are retained. The dimension of the attractor is determined by the viscosity parameter $\nu$. For the particular forcing used there is an explicit steady state for all $\nu>0$ and for $\nu \geq 0.035$ this solution is stable (see \cite{majda2006non}, Chapter 2 for details). As $\nu$ decreases the flow becomes increasingly complex and the regime $\nu \leq 0.016$ corresponds to strongly chaotic dynamics with an upscale cascade of energy in the spectrum. We focus subsequent studies of the filter on a strongly chaotic ($\nu = 0.01$) parametric regime. For this small viscosity parameter, we use a time-step of $\delta t = 0.005$. The data is generated by computing a true signal solving \eref{eq:nse} at the desired value of $\nu$, and then adding Gaussian random noise to it at each observation time.
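For concreteness, the zero-padded (de-aliased) evaluation of products described above can be sketched as follows; this is a minimal illustration in Python, assuming coefficients stored in the standard \texttt{numpy} FFT ordering, and it is not the code used for the experiments reported below.
\begin{verbatim}
import numpy as np

def dealiased_product(u_hat, v_hat):
    # Pointwise product of two fields given by their Fourier coefficients,
    # computed on a grid of twice the size (padded with zeros) so that the
    # retained modes are free of aliasing errors; n is assumed even.
    n = u_hat.shape[0]
    m, k = 2 * n, n // 2

    def embed(a_hat):
        # place the n x n coefficients into the corners of an m x m array
        big = np.zeros((m, m), dtype=complex)
        big[:k, :k], big[:k, -k:] = a_hat[:k, :k], a_hat[:k, -k:]
        big[-k:, :k], big[-k:, -k:] = a_hat[-k:, :k], a_hat[-k:, -k:]
        return big

    scale = (m / n) ** 2                    # keeps physical values unchanged
    u = np.fft.ifft2(embed(u_hat)) * scale  # fields on the doubled grid
    v = np.fft.ifft2(embed(v_hat)) * scale
    w_big = np.fft.fft2(u * v) / scale      # coefficients on the doubled grid

    w_hat = np.empty((n, n), dtype=complex) # keep only the original modes
    w_hat[:k, :k], w_hat[:k, -k:] = w_big[:k, :k], w_big[:k, -k:]
    w_hat[-k:, :k], w_hat[-k:, -k:] = w_big[-k:, :k], w_big[-k:, -k:]
    return w_hat
\end{verbatim}
Doubling the grid (rather than using the minimal $3/2$ padding) is a simple way of guaranteeing that fewer than $2/3$ of the modes are retained.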
The Gaussian observational noise just described does not satisfy Assumption \ref{a:1z}, since the supremum of the norm of the noise sequence is not finite, and so this setting provides a severe test beyond what is predicted by the theory; nonetheless, it should be noted that Gaussian random variables only attain arbitrarily large values arbitrarily rarely. All experiments are conducted using the 3DVAR setup, and it is useful to reread the end of subsection \ref{ssec:3DVar} in order to interpret the parameters $\alpha$ and $\eta.$ We consider both the choices $\alpha=\pm 1$ for 3DVAR, noting that in the case $\alpha=-1$ the operator $B$ has norm strictly less than one and so we expect the algorithm to be more robust in this case (see Remark \ref{r:2} for discussion of this fact). For all experiments we set $\ell=\lambda_1^{-1}$ which ensures that the action of $A_0^{2\alpha}$, and hence $B$, on the first eigenfunction is independent of the value of $\alpha$; this is a useful normalization when comparing computations with $\alpha=1$ and $\alpha=-1.$ We take the observational noise covariance to be constant in time and white in space, $\Gamma_j = \Gamma = \sigma^2 I$ (i.e. $\beta=0$ in section \ref{ssec:3DVar}). Here $\sigma = 0.04$, which gives a standard deviation of approximately 10\% of the maximum standard deviation of the strongly chaotic dynamics. Since we are computing in a truncated finite-dimensional basis, the eigenvalues are summable; the situation can be considered as an approximation of an operator whose eigenvalues decay rapidly outside the basis in which we compute. To be more precise regarding the algorithm, let $U$ denote the finite-dimensional spectral representation, which is a complex-valued vector of dimension $32^2$, including redundancy arising from reality and zero mass constraints. The computation of $\mathcal{B}$ in Eq. (\ref{eq:nse}) requires padding this with zeros to a $64^2$ vector, computing inverse FFTs on the discretization of the 6 fields $u_i, u_{i,j}$ for $i,j \in \{1,2\}$, performing products, computing FFTs on the 2 resulting (discrete) spatial fields, and finally discarding the now-populated padding modes. Denoting the discrete solution map from time $t$ to time $t+s$ by $\Phi_s(\cdot)$, the experiments proceed precisely as follows: \begin{itemize} \item Evolve $U_t=\Phi_t(U_0)$ until statistical equilibrium, as judged by observing the energy fluctuations $E(t) = \|U_t\|_2^2$. Set $U_0=U_t$ so that the initial condition is on the attractor. \item Compute the observations $Y_j = \Phi_{j h}(U_0) + \xi_j$, with $\xi_j \sim \mathcal{N}(0,\Gamma)$ drawn independently. \item Draw $\widehat{U}_0 \sim \mathcal{N}(0,\kappa A^{2\alpha})$, where $\kappa \gg 1$ [this is essentially arbitrary, as long as the initial condition is something sensible and such that $\| \widehat{U}_0-U_0\| = O(1)$]. \item Compute $B$ and $I-B$ as given in Equations (\ref{be}, \ref{ibe}). \item For $j=1, \dots, J$: compute $\widehat{U}_j = B \Phi_h(\widehat{U}_{j-1}) + (I-B) Y_j.$ \end{itemize} \subsection{Complete Observations} \label{ssec:complete} We start by considering complete observations in discrete time and illustrate Theorem \ref{t:mz}, and in particular the role of the parameter $\eta.$ The experiments presented employ a large observation increment of $h = 0.5 = 100 \delta t$. For $\alpha=1$ we find that when $\eta=\sigma$ (Fig. \ref{a1.1}) the estimator stabilizes from an initial ${\mathcal O}(1)$ error and then remains stable.
The upper and lower bounds are satisfied (the upper bound after an initial rapid transient), and even the high modes, which are slaved to the low modes, synchronize to the true signal. For $\eta=10\sigma$ (Fig. \ref{a1.10}) the estimator fails to satisfy the upper bound, but remains stable over a long time horizon; there is now significant error in the $k=(7,7)$ mode, in contrast to the situation with smaller $\eta$ shown in Fig. \ref{a1.1}. Finally, when $\eta=100\sigma$ (Fig. \ref{a1.100}), the estimator really diverges from the signal, although it still remains bounded. When $\alpha=-1$ the lower and upper bounds are almost indistinguishable and, for all values of $\eta$ examined, the error either exceeds or fluctuates around the upper bound; see Figures \ref{am1.1}, \ref{am1.10} and \ref{am1.100} where $\eta=\sigma, 10\sigma$ and $100\sigma$ respectively. It is not until $\eta=100\sigma$ (Fig. \ref{am1.100}) that the estimator really loses the signal. Notice also that the high modes of the estimator always follow the noisy observations and this could be undesirable. For both $\eta=100\sigma$ and $10\sigma$, the $\alpha=-1$ estimator performs better than the one for $\alpha=1$ in terms of overall error, illustrating the robustness alluded to in Remark \ref{r:2} since for $\alpha<0$ we have $\|B\|<1.$ However, an appropriately tuned $\alpha=1$ filter has the potential to perform remarkably well, both in terms of overall error and individual error of all modes (see Fig. \ref{a1.1}, in contrast to Fig. \ref{am1.1}). In particular, this filter has an expected error substantially smaller than the upper bound, which does not happen for the case of $\alpha=-1$ when complete observations are assimilated. \begin{figure*} \includegraphics[width=.45\textwidth]{da25_a1_bound1} \includegraphics[width=.45\textwidth]{da25_a1_mo011} \includegraphics[width=.45\textwidth]{da25_a1_mo771} \includegraphics[width=.45\textwidth]{da25_a1_mo12121} \caption{Example of a stable trajectory for 3DVAR with $\nu=0.01, h=0.5, \eta=\sigma=0.04, \alpha=1$. The top left plot shows the norm-squared error between the estimated mean, $m(t_n)=\hat{m}_n$, and the signal, $u(t_n)$, in comparison to the preferred upper bound (i.e. the total observation error ${\rm tr} (\Gamma) = \Xi$) and the lower bound ${\rm tr} [(I-B_n) \Gamma (I-B_n)]$. The other three plots show the estimator, $m(t)$, together with the signal, $u(t)$, and the observations, $y_n$, for a few individual modes.} \label{a1.1} \end{figure*} \begin{figure*} \includegraphics[width=.45\textwidth]{da25e10_a1_bound1} \includegraphics[width=.45\textwidth]{da25e10_a1_mo011} \includegraphics[width=.45\textwidth]{da25e10_a1_mo771} \includegraphics[width=.45\textwidth]{da25e10_a1_mo12121} \caption{Example of a destabilized trajectory for 3DVAR with the same parameters as in Fig. \ref{a1.1} except for the larger value of $\eta=10\sigma=0.4$. Panels are the same.} \label{a1.10} \end{figure*} \begin{figure*} \includegraphics[width=.45\textwidth]{da25e100_a1_bound1} \includegraphics[width=.45\textwidth]{da25e100_a1_mo011} \includegraphics[width=.45\textwidth]{da25e100_a1_mo771} \includegraphics[width=.45\textwidth]{da25e100_a1_mo12121} \caption{Example of a destabilized trajectory for 3DVAR with the same parameters as in Fig. \ref{a1.1} except for the larger value of $\eta=100\sigma=4$.
Panels are the same.} \label{a1.100} \end{figure*} \begin{figure*} \includegraphics[width=.45\textwidth]{da25_am1_bound1} \includegraphics[width=.45\textwidth]{da25_am1_mo011} \includegraphics[width=.45\textwidth]{da25_am1_mo771} \includegraphics[width=.45\textwidth]{da25_am1_mo12121} \caption{Example of a stable trajectory for 3DVAR with the same parameters as in Fig. \ref{a1.1} except with $\alpha=-1$. Panels are the same.} \label{am1.1} \end{figure*} \begin{figure*} \includegraphics[width=.45\textwidth]{da25e10_am1_bound} \includegraphics[width=.45\textwidth]{da25e10_am1_mo01} \includegraphics[width=.45\textwidth]{da25e10_am1_mo77} \includegraphics[width=.45\textwidth]{da25e10_am1_mo1212} \caption{Example of a stable trajectory for 3DVAR with the same parameters as in Fig. \ref{a1.10} except with $\alpha=-1$. Panels are the same.} \label{am1.10} \end{figure*} \begin{figure*} \includegraphics[width=.45\textwidth]{da25e100_am1_bound1} \includegraphics[width=.45\textwidth]{da25e100_am1_mo011} \includegraphics[width=.45\textwidth]{da25e100_am1_mo771} \includegraphics[width=.45\textwidth]{da25e100_am1_mo12121} \caption{Example of a destabilized trajectory for 3DVAR with the same parameters as in Fig. \ref{a1.100} except with $\alpha=-1$. Panels are the same.} \label{am1.100} \end{figure*} \subsection{Partial Observations} \label{ssec:partial} We now proceed to examine the case of partial observations, illustrating Theorem \ref{t:m}. Note that the forced mode has $|k_f|^2 =50$, so ensuring that it is observed requires that $\lambda > 50\lambda_1$. When enough modes are retained, for example when $\lambda=100\lambda_1$ in our setting, the results for the $\alpha=1$ case remain roughly the same and are not shown. However, in the case $\alpha=-1$, in which the observations are trusted more than the model at high wavevectors, the results are greatly improved by ignoring the observations of the high frequencies. See Fig. \ref{am1.10.p10}. This improvement, and indeed the improvement beyond setting $B=0$ for both cases $\alpha=\pm 1$, disappears as $\lambda$ is decreased. In particular, when $\lambda=25 \lambda_1$ the error is never very much smaller than the upper bound. This is due to the fact that the dynamics of the low wavevectors tend to be unpredictable and they contain very little useful information for the assimilation. Then, for much smaller $\lambda=4 \lambda_1$, once enough unstable modes are left unobserved, there is no convergence. The order of magnitude of the error in the asymptotic regime as a function of $\eta$ remains roughly consistent as $\lambda$ is decreased until the estimator no longer converges. For small $h$ (high-frequency in time observations) and complete observations, the estimator can be slow to converge to the asymptotic regime. In this case, the number of iterations until convergence, for a sufficiently small $\eta$, becomes significantly larger as $\lambda$ is decreased (again until the estimator fails to converge at all). Given $\lambda \approx k_\lambda^2\lambda_1$, we expect that for $\eta$ sufficiently small the contribution of the model to the filter will be negligible for all $k$ with $|k|<k_\lambda$ for $\alpha=1$. Hence the estimators for both $\alpha=\pm 1$ will behave similarly. An example of this is shown in Fig. \ref{a1am1.7.p01}, where $\eta=0.01\sigma=0.0004$ and $\lambda=49 \lambda_1$. In both cases, the estimator is essentially utilizing all the available observations.
There are enough observations to draw the higher wavevectors of the estimator closer to the truth than if we just set the population of those modes to zero. In contrast, as mentioned above, when $\lambda=25 \lambda_1$, there are not enough observations even when they are all used, and the error is roughly the same as the upper bound as $\eta \rightarrow 0$ (not shown). \begin{figure*} \includegraphics[width=.45\textwidth]{da2510p10_am1_bound} \includegraphics[width=.45\textwidth]{da2510p10_am1_mo01} \includegraphics[width=.45\textwidth]{da2510p10_am1_mo77} \includegraphics[width=.45\textwidth]{da2510p10_am1_mo1212} \caption{Example of an improved estimator for partial observations with $\lambda=100 \lambda_1$ and otherwise the same parameters as in Fig. \ref{am1.10}. Panels are the same.} \label{am1.10.p10} \end{figure*} \begin{figure*} \includegraphics[width=.45\textwidth]{da25p01_a1_bound} \includegraphics[width=.45\textwidth]{da25p01_am1_bound} \includegraphics[width=.45\textwidth]{da25p01_a1_mo01} \includegraphics[width=.45\textwidth]{da25p01_am1_mo01} \caption{Examples of estimators for partial observations with $\lambda=49 \lambda_1$ and $\eta=0.1\sigma=0.004$, otherwise the same parameters as in Figs. \ref{a1.1} and \ref{am1.1}. Left panels are for $\alpha=1$ and right panels are for $\alpha=-1$.} \label{a1am1.7.p01} \end{figure*} \section{Conclusion} \label{sec:conclusions} This paper contains three main components: \begin{itemize} \item we show that the filtering problem for the Navier-Stokes equation may be formulated as a sequence of well-posed inverse problems: Theorem \ref{t:min}; \item we prove Theorems \ref{t:m} and \ref{t:mz}, which establish filter accuracy and stability provided variance inflation is employed; \item we describe numerical results which illustrate, and extend the validity of, the theory. \end{itemize} We note that the analysis will also apply to other semilinear dissipative PDEs which possess a squeezing property (contraction when projected into the high modes) and a global attractor; such equations are studied in depth in \cite{book:Temam1997}, and include the Cahn-Allen and Cahn-Hilliard equations, the Kuramoto-Sivashinsky equation and the Ginzburg-Landau equation. These two structural properties of the underlying model, when combined with sufficient variance inflation ($\eta$ small enough in 3DVAR), enable a proof that the 3DVAR filter has a contractive property, even when the underlying dynamical system itself has positive Lyapunov exponents. There are a number of natural future directions which stem from this work: \begin{itemize} \item to develop analogous filter stability theorems for more sophisticated filters, such as the extended and ensemble-based methods, when applied to the Navier-Stokes equation; \item to study model-data mismatch by looking at filter stability for data generated by forward models which differ from those used to construct the filter; \item to study the effect of filtering in the presence of model error, by similar methods, to understand how this may be used to overcome problems arising from model-data mismatch; \item to combine the analysis in this paper, which concerns nonlinear problems, but assumes that the observation operator and the Stokes operator commute, and the recent work \cite{pot12} which concerns only linear autonomous problems but does not assume commutativity of the solution operator for the forward model and the observation operator.
\end{itemize} \vspace{0.1in} \noindent{\bf Acknowledgements} {AMS would like to thank the following institutions for financial support: EPSRC, ERC, ESA, and ONR; KJHL was supported by EPSRC, ESA, and ONR; and CEAB, KFL, DSM and MRS were supported by EPSRC, through the MASDOC Graduate Training Centre at Warwick University. The authors also thank The Mathematics Institute and Centre for Scientific Computing at Warwick University for supplying valuable computation time. Finally, the authors thank Masoumeh Dashti for valuable input.}
{ "timestamp": "2013-08-06T02:07:10", "yymm": "1203", "arxiv_id": "1203.5845", "language": "en", "url": "https://arxiv.org/abs/1203.5845", "abstract": "Data assimilation methodologies are designed to incorporate noisy observations of a physical system into an underlying model in order to infer the properties of the state of the system. Filters refer to a class of data assimilation algorithms designed to update the estimation of the state as data is acquired sequentially. For linear problems subject to Gaussian noise filtering can be performed exactly using the Kalman filter. For nonlinear systems it can be approximated in a systematic way by particle filters. However in high dimensions these particle filtering methods can break down. Hence, for the large nonlinear systems arising in applications such as oceanography and weather forecasting, various ad hoc filters are used, based on Gaussian approximations. In this work, we study the accuracy and stability of these ad hoc filters in the context of the 2D incompressible Navier-Stokes equation. The ideas readily generalize to a range of dissipative partial differential equations (PDEs). By working in this infinite dimensional setting we provide an analysis which is useful for the understanding of high dimensional filtering, and is robust to mesh-refinement. We describe theoretical results showing that, in the small observational noise limit, the filters can be tuned to perform accurately in tracking the signal itself (filter accuracy), provided the system is observed in a sufficiently large low dimensional space; roughly speaking this space should be large enough to contain the unstable modes of the linearized dynamics. The tuning corresponds to what is known as variance inflation in the applied literature. Numerical results are given which illustrate the theory. The positive results herein concerning filter stability complement recent numerical studies which demonstrate that the ad hoc filters can perform poorly in reproducing statistical variation about the true signal.", "subjects": "Optimization and Control (math.OC)", "title": "Accuracy and Stability of Filters for Dissipative PDEs", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9740426435557122, "lm_q2_score": 0.7279754548076477, "lm_q1q2_score": 0.709079136444513 }
https://arxiv.org/abs/1912.10175
Quantitative results on the multi-parameters Proximal Point Algorithm
We give a quantitative analysis of a theorem due to Fenghui Wang and Huanhuan Cui concerning the convergence of a multi-parametric version of the proximal point algorithm. Wang and Cui's result ensures the convergence of the algorithm to a zero of the operator. Our quantitative analysis provides explicit bounds on the metastability (in the sense of Terence Tao) for the convergence and the asymptotic regularity of the iteration. Moreover, our analysis bypasses the need for sequential weak compactness and only requires a weak form of the metric projection argument.
\section{Introduction} In this paper we give a quantitative analysis of a theorem due to Fenghui Wang and Huanhuan Cui concerning the strong convergence of a multi-parametric version of the proximal point algorithm in Hilbert spaces. The \emph{proximal point algorithm} $(\mathsf{PPA})$ is recognized as a powerful and successful tool in approximating a zero of a maximal monotone operator in a Hilbert space. The algorithm was studied by Ralph Rockafellar in \cite{Rockafellar76}, where weak convergence for $(\mathsf{PPA})$ was established. A counter-example by Osman G\"{u}ler in \cite{G(91)} showed that, in general, one cannot guarantee strong convergence for this iteration. This has prompted a series of variants in an attempt to obtain strong convergence. Motivated by the success of the \emph{Halpern iterations} in fixed point theory \cite{Halpern67}, the \emph{Halpern-type proximal point algorithm} ($\mathsf{HPPA}$) was introduced by Shoji Kamimura and Wataru Takahashi in \cite{KT(00)} and, independently, by Hong-Kun Xu in \cite{X(02)}. With given points $u, z_0$, a regularization sequence of positive real numbers $(c_n)$ and a sequence of errors $(e_n)$, ($\mathsf{HPPA}$) is recursively defined by \begin{equation}\label{HPPA} \tag{\textsf{HPPA}} z_{n+1}=\lambda_n u+(1-\lambda_n)J_{c_n}(z_n) +e_n, \end{equation} where $J_{c_n}$ is the resolvent function associated with $c_n$ and the maximal monotone operator. Strong convergence for \eqref{HPPA} was shown e.g.\ in \cite[Theorem~2]{BM(11)} and \cite[Theorem~5.1]{X(02)}. These two strong convergence results received quantitative analyses in \cite{LLPP(ta)} and \cite{PP(ta)}, respectively. A generalization to Banach spaces was discussed in \cite{AM(17)}. This generalization received a quantitative analysis by Ulrich Kohlenbach in \cite{Koh(ta)}. Yonghong Yao and Muhammad Aslam Noor studied in \cite{YaoNoor2008} a generalization of $\eqref{HPPA}$, in an attempt to obtain a result of strong convergence, in Hilbert spaces, under weaker assumptions. This generalization involves the use of several parameters and is called the \emph{multi-parameters proximal point algorithm} $\eqref{PPA}$. This was partially achieved in \cite[Theorem~3.3]{YaoNoor2008}; however, a new condition was necessary which prevented the reduction to \eqref{HPPA}. A metastable version of this result was given in \cite{DP(ta)}. In this paper we give a quantitative analysis of a strong convergence result for \eqref{PPA} by Wang and Cui \cite[Theorem~1]{WangCui2012}. Wang and Cui's result can be viewed as a generalization of previous results (see e.g.\ \cite{KT(00), X(02), MX(04), YaoNoor2008, BM(11)1}). Indeed, it relies on weaker conditions and enables a reduction to \eqref{HPPA}. The output of our analysis consists of explicit bounds on metastability properties (in the sense of Terence Tao \cite{T(08a),T(08b)}). Namely, we obtain functionals $\rho$ and $\widetilde{\rho}$ such that for every natural number $k$ and function $f:\mathbb{N} \to \mathbb{N}$ the following properties hold \begin{equation}\label{rho1} \exists n \leq \rho(k,f) \, \forall i,j \in [n,f(n)] \left(\norm{z_i-z_j}\leq \frac{1}{k+1} \right), \end{equation} \begin{equation}\label{rho2} \exists n \leq \widetilde{\rho}(k,f) \, \forall i \in [n,f(n)] \left(\norm{J_{c_i}(z_i)-z_i}\leq \frac{1}{k+1} \right), \end{equation} with the sequence $(z_n)$ defined by \eqref{PPA}.
The properties \eqref{rho1} and \eqref{rho2} are the \emph{metastable} versions of the Cauchy property and the asymptotic regularity for $(z_n)$, respectively. Notice that the original properties and their metastable versions are, in fact, (ineffectively) equivalent. While in general one cannot guarantee rate extraction for such properties, the underlying theoretical techniques ensure that we are always able to extract information for the corresponding metastable versions. For a discussion on the history and relevance of the notion of metastability we refer to \cite{K(18)}. In this analysis, and similarly to previous analyses (cf. \cite{DP(ta),LLPP(ta),FFLLPP(19),PP(ta)}), we were guided by Fernando Ferreira and Paulo Oliva's \emph{bounded functional interpretation} \cite{FO(05)}, more specifically the classical variant from \cite{F(09)}. Functional interpretations are helpful to navigate the original proof, to avoid certain non-essential principles used therein (such as sequential weak compactness) and to obtain explicit bounds. The use of functional interpretations to analyse mathematical proofs has been very successful, particularly in areas such as approximation theory, ergodic theory, fixed point theory and optimization theory. We refer to \cite{K(17)} and the book \cite{K(08)} for an overview of such results. We would like to point out that, even though a proof-theoretical technique underlies the analysis in this paper, no knowledge of Mathematical Logic is required for the understanding of its results. We work only with a weaker version of the metric projection principle where the projection point, crucial in the original proof, is replaced by approximations. Also, we bypass the sequential weak compactness arguments used in the original proof. The way to deal with the projection argument and sequential weak compactness is explained in full detail in \cite{FFLLPP(19)} (these qualitative improvements first appeared in \cite{K(11)}). Furthermore, the original proof has a discussion by cases which, in our quantitative analysis, imposes a discussion by cases for each approximation to the projection point. Namely, for each natural number $k$ and function $f$, one must consider a ``good enough'' approximation to the projection point and carry out the discussion by cases relative to that point. The structure of the paper is the following. In Section~\ref{sectionPrelim} we recall the relevant terminology as well as some well-known results from the theory of monotone operators in Hilbert spaces. We also recall some results necessary for our analysis. Also, in Subsection~\ref{sectionoriginalproof} we give a detailed description of the original proof by Wang and Cui in order to clarify the different steps that our analysis requires. The main analysis is carried out in Section~\ref{Sectionanalysis}. As in the original proof we divide it into two cases depending on whether a certain auxiliary sequence is eventually decreasing or not. Some final remarks are left to Section~\ref{sectionremarks}. \section{Preliminaries}\label{sectionPrelim} \subsection{Background on Monotone Operators on Hilbert spaces} Throughout we let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\norm{\cdot}$. We recall that an operator $\mathsf{A}:H \to 2^{H}$ is said to be \emph{monotone} if and only if whenever $(x,y)$ and $(x',y')$ are elements of the graph of $\mathsf{A}$, it holds that $\langle x-x',y-y'\rangle \geq 0$.
A monotone operator $\mathsf{A}$ is said to be \emph{maximal monotone} if the graph of $\mathsf{A}$ is not properly contained in the graph of any other monotone operator on $H$. We define $S:=\mathsf{A}^{-1}(0)$, the set of all \emph{zeros} of $\mathsf{A}$. For a comprehensive introduction to convex analysis and the theory of monotone operators in Hilbert spaces we refer to \cite{BC(17)}. We fix a maximal monotone operator $\mathsf{A}$ and assume henceforth that $S$ is nonempty. For every positive real number $\sigma$, we use $J_\sigma$ to denote the \emph{resolvent function} of $\mathsf{A}$, i.e.\ the single-valued function defined by $J_\sigma = (I + \sigma\mathsf{A} )^{-1}.$ \begin{definition} A mapping $T : H \to H$ is called \emph{nonexpansive} if $$\forall x, y \in H \left(\norm{T (x) -T (y)}\leq \norm{x -y}\right),$$ and \emph{firmly nonexpansive} if $$\forall x, y \in H\left( \norm{T(x)-T(y)}^2 \leq \norm{x-y}^2-\norm{(Id -T)(x)- (Id-T)(y)}^2 \right).$$ \end{definition} Note that if $T$ is firmly nonexpansive then it is nonexpansive. The set $\{x \in H: T(x)=x\}$ of fixed points of the mapping $T$ will be denoted by $\mathrm{Fix}(T)$. If $T$ is nonexpansive, then $\mathrm{Fix}(T)$ is a closed and convex subset of $H$. For $\sigma>0$, the resolvent function $J_\sigma$ is firmly nonexpansive and the set of fixed points of $J_\sigma$ is $S$. Consider the following multi-parametric version of the proximal point algorithm introduced in \cite{YaoNoor2008}, \begin{equation}\label{PPA}\tag{\textsf{mPPA}} z_{n+1}=\lambda_nu+\gamma_nz_n+\delta_nJ_{c_n}(z_n)+e_n, \end{equation} where $u,z_0 \in H$ are given, $(c_n) \subset (0,+\infty) $, $(\lambda_n),(\delta_n) \subset (0,1)$ and $(\gamma_n) \subseteq [0,1)$ such that $\lambda_n+\gamma_n+\delta_n=1$, for all $n \in \mathbb{N}$. Sometimes it is useful to consider the following \emph{exact} version of the algorithm \eqref{PPA}, \begin{equation}\label{exactPPA}\tag{\textsf{mPPA}$_{\mathsf{e}}$} y_{n+1}=\lambda_nu+\gamma_ny_n+\delta_nJ_{c_n}(y_n), \end{equation} where $u,y_0 \in H$ are given, $(c_n) \subset (0,+\infty)$, $(\lambda_n),(\delta_n) \subset (0,1)$ and $(\gamma_n) \subseteq [0,1)$ such that $\lambda_n+\gamma_n+\delta_n=1$, for all $n \in \mathbb{N}$. Since we will only look at \eqref{exactPPA} as a way to prove strong convergence for \eqref{PPA}, we will always assume that $y_0=z_0$. The following lemmas are well-known. \begin{lemma}[Resolvent identity] For $a,b>0$, the identity \begin{equation*} J_a (x) = J_b\left(\frac{b}{a}x+ \left(1-\frac{b}{a} \right)J_a (x)\right), \end{equation*} holds for every $x \in H$. \end{lemma} \begin{lemma}[\cite{MX(04)}]\label{lemmaresolvineq} If $0<a\leq b$, then $\norm{J_a (x)-x} \leq 2 \norm{J_b (x)-x}$, for all $x \in H$. \end{lemma} \begin{lemma}\label{Lemmabasic} Let $x, y \in H$ and let $t, s \geq 0$. Then \begin{enumerate} \item $\norm{x+y}^2\leq \norm{x}^2+2 \langle y,x+y \rangle$; \item $\norm{tx+sy}^2=t(t+s)\norm{x}^2+s(t+s)\norm{y}^2-st\norm{x-y}^2$. \end{enumerate} \end{lemma} \iffalse \begin{lemma}[Demiclosedness principle \cite{B(65)}] \label{Lemmademiclosedness} Let $C$ be a closed convex subset of $H$ and let $f : C \to C$ be a nonexpansive mapping such that $Fix(f)\neq \emptyset$. Assume that $(x_n)$ is a sequence in $C$ such that $(x_n)$ weakly converges to $x \in C$ and $((I - f)(x_n))$ converges strongly to $y \in H$. Then $(I - f)(x) = y$. \end{lemma} \fi We will use the following result due to Xu.
\begin{lemma}[\cite{X(02)}]\label{LemmaXu} Let $(\alpha_n) \subset (0,1)$ and $(b_n)$ be real sequences such that \begin{enumerate}[$(i)$] \item $\sum \alpha_n = \infty$. \item $\lim \alpha_n =0$. \item $\limsup b_n \leq 0$ or $\sum \alpha_n |b_n|<\infty$. \end{enumerate} Let $(a_n)$ be a nonnegative real sequence satisfying $a_{n+1}\leq (1-\alpha_n)a_n+ \alpha_nb_n$. Then $\lim a_n = 0$. \end{lemma} In this paper we carry out a quantitative analysis of Theorem~\ref{ThmWangCui} below, due to Wang and Cui, which relies on the following conditions: \begin{enumerate}[($C_1$)] \item\label{C1} $\lim \lambda_n=0$, \item\label{C2} $\sum_{n=0}^{\infty} \lambda_n=\infty$, \item\label{C3} $\liminf c_n>0$, \item\label{C4} $\liminf\delta_n>0$, \item\label{C5} $\sum_{n=1}^{\infty}\norm{e_n}<\infty$ or $\lim\frac{\norm{e_n}}{\lambda_n}=0$. \end{enumerate} \begin{theorem}{\rm (\cite[Theorem 1]{WangCui2012})}\label{ThmWangCui} Let $(z_n)$ be generated by \eqref{PPA}. Assume that conditions $(C_{\ref{C1}})-(C_{\ref{C5}})$ hold. Then $(z_n)$ converges strongly to a point $z \in S$ (the nearest point projection of $u$ onto $S$). \end{theorem} \subsection{Quantitative Lemmas} We recall the notion of monotone functional for two particular cases. First consider the strong majorizability relation $\leq^{\ast}$ from \cite{bezem1985strongly} for functions $f,g:\mathbb{N}\to\mathbb{N}$. $$g\leq^* f := \forall n, m\in\mathbb{N}\,\left(m\leq n\to \left( g(m)\leq f(n) \land f(m)\leq f(n)\right)\right).$$ A function $f:\mathbb{N}\to\mathbb{N}$ is said to be \emph{monotone} if $f\leq^* f$, which corresponds to saying that $f$ is an increasing function, i.e.\ $\forall n\in\mathbb{N}\, \left(f(n)\leq f(n+1)\right)$. We say that a functional $\varphi:\mathbb{N}\times \mathbb{N}^{\mathbb{N}}\to\mathbb{N}$ is \emph{monotone} if for all $m,n\in\mathbb{N}$ and all $f,g:\mathbb{N}\to\mathbb{N}$, $$\left(m\leq n \land g\leq^* f\right) \to \left(\varphi(m,g)\leq \varphi(n,f)\right).$$ A function depending on several variables (ranging over $\mathbb{N}$ or over $\mathbb{N}^\mathbb{N}$) is said to be monotone if it is monotone in all the variables. \begin{remark}\label{maj} We usually restrict our arguments to monotone functions in $\mathbb{N}^\mathbb{N}$. There is no real restriction in doing so, as for $f: \mathbb{N} \to \mathbb{N}$, one has $f \leq^* \! f^{\mathrm{maj}}$, where $f^{\mathrm{maj}}$ is the monotone function defined by $f^{\mathrm{maj}}(n):= \max\{f(i)\, :\, i \leq n\}$. In this way, we avoid constantly having to switch from $f$ to $f^{\mathrm{maj}}$, and simplify the notation. \end{remark} \begin{notation}\label{notation} Consider a function $\varphi$ on tuples of variables $\bar{x}$, $\bar{y}$. If we wish to consider the variables $\bar{x}$ as parameters we write $\varphi[\bar{x}](\bar{y})$. For simplicity of notation we may then even omit the parameters and simply write $\varphi(\bar{y})$. \end{notation} We will use the following lemma.
\begin{lemma}[\cite{LLPP(ta)}]\label{LemmaLLPP} Let $(s_n)$ be a bounded sequence of non-negative real numbers, with $d\in\mathbb{N} \setminus\{0\}$ an upper bound for $(s_n)$, such that for any $n\in \mathbb{N}$ \begin{equation*} s_{n+1}\leq (1-\alpha_n)s_n+\alpha_nr_n + \gamma_n, \end{equation*} \noindent where $(\alpha_n)\subset [0,1]$, $(r_n)$ and $(\gamma_n)\subset [0,+\infty)$ are given sequences of real numbers.\\ Assume that there exist functions $A$, $R$, $G:\mathbb{N} \to \mathbb{N}$ such that \begin{itemize} \item[$(i)$] $\forall k\in \mathbb{N} \, \left( \sum\limits_{i=1}^{A(k)} \alpha_i \geq k \right)$, \item[$(ii)$] $\forall k\in \mathbb{N} \, \forall n\geq R(k) \, \left( r_n \leq \dfrac{1}{k+1} \right)$, \item[$(iii)$] $\forall k \in \mathbb{N} \, \forall n \in \mathbb{N} \, \left( \sum\limits_{i=G(k)+1}^{G(k)+n} \gamma_i \leq \dfrac{1}{k+1} \right)$. \end{itemize} Then \begin{equation*} \forall k \in \mathbb{N} \,\forall n\geq \theta(k) \, \left(s_n\leq \dfrac{1}{k+1}\right), \end{equation*} \noindent with $\theta(k):=\theta[A, R, G, d](k):=A(N-1+\lceil \ln(3d(k+1))\rceil)+1$, where $N:=\max\{ R(3k+2), G(3k+2)+1 \}$. \end{lemma} It is well-known that for a sequence $(\alpha_n) \subset (0,1)$, having $\sum \alpha_n = \infty$ is equivalent to $\prod (1-\alpha_n)=0$. An alternative version of Lemma~\ref{LemmaLLPP} can be given where one assumes the existence of a rate of convergence $A'$ for the product $\prod (1-\alpha_n)$ instead of a rate of divergence $A$ for the sum $\sum \alpha_n $ (see \cite{LLPP(ta)} and \cite[Lemma~2.4]{Korn}). \subsection{The proof by Wang and Cui}\label{sectionoriginalproof} Let us discuss the proof of Theorem~\ref{ThmWangCui} in detail in order to better understand the required steps of our quantitative analysis. We write $J_n$ to denote $J_{c_n}$, for $n\in\mathbb{N}$. \begin{enumerate}[$1)$] \item The proof starts by showing that $\norm{z_n - y_n}\to 0$, using Lemma~\ref{LemmaXu}. This allows us to reduce the convergence of a sequence $(z_n)$ given by \eqref{PPA} to that of a sequence $(y_n)$ given by the exact variant \eqref{exactPPA}. \item By an easy induction argument it is shown that $(y_n)$ is bounded. \item Using Lemma~\ref{Lemmabasic} and the fact that the resolvent functions are firmly nonexpansive it is shown that \begin{equation} \sigma \norm{J_n(y_n)-y_n}^2\leq M \lambda_n+ s_n-s_{n+1}, \end{equation} where $\sigma, M$ are positive constants and $(s_n)$ is the sequence defined by $s_n:=\norm{y_n-\widetilde{z}\,}^2$, with $\widetilde{z}$ the projection point of $u$ onto $S$. \item From this point on, the proof proceeds by distinguishing the cases: $i)$ $(s_{n})$ is eventually decreasing, $ii)$ $(s_{n})$ is not eventually decreasing. In each case it is shown that $(s_n) \to 0$, which entails the result. Let us describe how the proof proceeds in each case. \begin{enumerate} \item[$i)$] \underline{$(s_{n})$ is eventually decreasing.} \begin{enumerate}[$a)$] \item Since $(y_n)$ is bounded we have that $(s_n)$ is also bounded and therefore it is convergent. \item By step 3 and the fact that $\lambda_n \to 0$ it follows that $\norm{J_n(y_n)-y_n}\to 0$. \item From the previous step and Lemma~\ref{lemmaresolvineq} it follows that $\norm{J_{\sigma}(y_n)-y_n}\to 0$. \item It is shown that $s_{n+1}\leq (1-\lambda_n)s_n+2 \lambda_n \langle u-\widetilde{z},y_{n+1}-\widetilde{z}\, \rangle$. 
\item By sequential weak compactness, using the demiclosedness principle \cite{B(65)} and the fact that $\widetilde{z}$ is the projection point, it follows that $\limsup \langle u-\widetilde{z},y_{n+1}-\widetilde{z} \, \rangle \leq 0$. \item By Lemma~\ref{LemmaXu} one concludes that $s_n \to 0$. \end{enumerate} \item[$ii)$] \underline{$(s_{n})$ is not eventually decreasing.} \begin{enumerate}[$a)$] \item For a certain sequence $\tau(n)$ and $n_0 \in \mathbb{N}$, we have $s_{\tau(n)}\leq s_{\tau(n)+1}$, for $n \geq n_0$. By step 3, it holds that \begin{equation*} \sigma\norm{J_{\tau(n)}(y_{\tau(n)})-y_{\tau(n)}}^2 \leq M\lambda_{\tau(n)}. \end{equation*} The sequence $\tau(n)$ is obtained using \cite[Lemma~3.1]{Mainge}. \iffalse \begin{lemma}[\cite{Mainge}]\label{LemmaMainge} Let $(s_n)$ be a sequence of real numbers for which there exists a subsequence $(s_{n_j})_{j\geq 0}$ such that $s_{n_j} < s_{n_j+1}$, for all $j\geq 0$. Let $n_0 \in \mathbb{N}$ and $(\tau(n))_{n\geq n_0}$ be the sequence of integers defined by $$\tau(n):= \max\{n_0 \leq k \leq n : s_{k} < s_{k+1}\}.$$ Then $(\tau(n))_{n\geq n_0}$ is a nondecreasing sequence verifying $\lim \tau(n)=+\infty$ and, for all $n \geq n_0$, one has $s_{\tau(n)}\leq s_{\tau(n)+1}$ and $s_n \leq s_{\tau(n)+1}$. \end{lemma} \fi \item Since $\lambda_{n}\to 0$ and $\tau(n)\to \infty$ one obtains that $\norm{J_{\tau(n)}(y_{\tau(n)})-y_{\tau(n)}}\to 0$. \item From the previous step using sequential weak compactness and the demiclosedness principle, it follows that $\limsup \langle u-\widetilde{z},y_{\tau(n)}-\widetilde{z}\, \rangle\leq 0.$ \item The previous step implies that $\limsup \langle u-\widetilde{z},y_{\tau(n)+1}-\widetilde{z}\, \rangle\leq 0$. \item Since $s_{\tau(n)}\leq 2 \langle u-\widetilde{z},y_{\tau(n)+1}-\widetilde{z} \,\rangle$ it follows that $\limsup s_{\tau(n)}\leq 0$ and since $(s_{n})\subset [0,+\infty)$ one concludes that $\lim s_{\tau(n)}=0$ and consequently $\lim s_{\tau(n)+1}=0$. \item The result follows because $s_n \leq s_{\tau(n)+1}$. \end{enumerate} \end{enumerate} \end{enumerate} \section{Quantitative analysis}\label{Sectionanalysis} We start by stating our quantitative assumptions. We assume that there exist $c \in \mathbb{N}\setminus \{0\}$ and monotone functions $\ell,L, E: \mathbb{N} \to \mathbb{N}$ and $h: \mathbb{N}\to \mathbb{N}\setminus \{0\}$ such that \begin{enumerate}[($Q_1$)] \item\label{Q0} $\forall n \in \mathbb{N}\left(\lambda_n \geq \frac{1}{h(n)} \right)$, \item\label{Q1} $\forall k \in \mathbb{N} \,\forall n \geq \ell(k) \left(\lambda_n \leq \frac{1}{k+1}\right)$, \item\label{Q2} $\forall k \in \mathbb{N} \left(\sum_{i=1}^{L(k)}\lambda_i \geq k\right)$, \item\label{Q3} $\forall n \in \mathbb{N} \left(\min\{c_n, \delta_n^2\} \geq \frac{1}{c}\right)$, \item[($Q_{5a}$)]\label{Q5} $\forall k \in \mathbb{N} \, \forall n \in \mathbb{N} \left(\sum_{i=E(k)+1}^{E(k)+n}\norm{e_i}\leq \frac{1}{k+1} \right)$, \item[($Q_{5b}$)]\label{Q5b} $\forall k \in \mathbb{N} \, \forall n \geq E(k) \left(\frac{\norm{e_n}}{\lambda_n}\leq \frac{1}{k+1} \right)$. \end{enumerate} The first condition is a quantitative version of the fact that the sequence $(\lambda_n)$ is always positive. Condition $(Q_{\ref{Q1}})$ states that $\ell$ is a rate of convergence for the sequence $(\lambda_n)$. Condition $(Q_{\ref{Q2}})$ postulates that $L$ is a rate of divergence for the series $\sum_{n=0}^{\infty} \lambda_n$. 
Condition ($Q_{\ref{Q3}}$) expresses the fact that the terms of the sequences $(c_n)$ and $(\delta_n)$ are bounded away from zero. Finally, the last two conditions express, respectively, that the sequence of the partial sums $\left(\sum_{i=0}^n \norm{e_i}\right)$ is a Cauchy sequence with rate $E$, or that the sequence $\left(\frac{\norm{e_n}}{\lambda_n}\right)$ converges towards zero with rate of convergence $E$. In the following we will assume, unless stated otherwise, that we are under the conditions $(Q_1)-(Q_4)$ and either $(Q_{5a})$ or $(Q_{5b})$. \begin{notation} In order to make the notation less cumbersome we will write $J_n$ instead of $J_{c_n}$ and $J$ instead of $J_{\frac{1}{c}}$. \end{notation} The following functions are useful for our analysis. \begin{definition}\label{definitionfunctions1} We define functions $\zeta, \overline{\sigma}, \phi_1,\phi_2,\widetilde{f}, r_1,r_2,r_3, r_4$ and $\Phi$, as follows. \begin{enumerate}[$(i)$] \item Given $c\in\mathbb{N}$ and $\mathsf{C}: \mathbb{N} \to \mathbb{N}$, for all $k, n\in \mathbb{N},$ $$\zeta(k,n):=\zeta[c,\mathsf{C}](k,n):= c(k+1)\mathsf{C}(n)-1 \text{ (cf. Lemma~\ref{samefix-pt-set})}.$$ \item Given $D\in\mathbb{N}$ and $L: \mathbb{N} \to \mathbb{N}$, for all $k, n\in \mathbb{N}$, $$ \overline{\sigma}(k,n):=\overline{\sigma}[L,D](k,n):=L\left(n+\lceil \ln(12D^2(k+1))\rceil\right)+1 \text{ (we have $\overline{\sigma}[L,D]=\sigma[L,4D^2]$, cf. Lemma~\ref{lemmaqtXu1})}.$$ \item Given $D\in\mathbb{N}$, for all $k, n\in \mathbb{N}$ and $f: \mathbb{N} \to \mathbb{N}$, $$\phi_1(k,n,f):=\phi_1[D](k,n,f):=f\left(\phi_2(k,n,f)\right) \text{, with $\phi_2$ as defined below (cf. also Lemma~\ref{lemmaJyi})}.$$ \item Given $D\in\mathbb{N}$, for all $k, n\in \mathbb{N}$ and $f: \mathbb{N} \to \mathbb{N}$, $$\phi_2(k,n,f):=\phi_2[D](k,n,f):= \max \{n, f^{(4D^2(k+1))}(n)\} \text{ (cf. Lemma~\ref{lemmaJyi})}.$$ \item Given $k, D\in \mathbb{N}$ and $f, L: \mathbb{N} \to \mathbb{N}$, for all $n\in \mathbb{N}$, $$\widetilde{f}(n):=\widetilde{f}[k, f, L, D](n):=f(\overline{\sigma}(k,n)).$$ \item Given $c\in\mathbb{N}$, for all $n \in \mathbb{N}$, $$ r_1(n):=r_1[c](n):= 12c(n+1)^2-1.$$ \item Given $k,D \in \mathbb{N}$, for all $n \in \mathbb{N}$, $$r_2(n):=r_2[k, D](n):=\max\{2(n+1),128D(k+1)^2\}.$$ \item Given $k, c, D \in \mathbb{N}$ and $\ell : \mathbb{N} \to \mathbb{N}$, for all $n\in\mathbb{N}$, $$r_3(n):=r_3[k, c, \ell, D](n):=\ell\left(\max \{96cD^2(n+1)^2-1,256D^2(k+1)^2-1, 16c(r_2(n))^2D^2\}\right).$$ \item Given $k, c, D \in \mathbb{N}$ and $f, \ell, L: \mathbb{N}\to \mathbb{N}$, for all $n\in\mathbb{N}$, $$r_4(n):=r_4[k,c,f,\ell, D,L](n) :=3(k+1)(\widetilde{f}(\phi_2(r_1(n),r_3(n),\widetilde{f}+2))+1).$$ \item Given $k, c, D\in\mathbb{N}$ and $f, \ell, L: \mathbb{N} \to \mathbb{N}$, for all $n\in\mathbb{N}$, $$\Phi(n):=\Phi[k, c, f, \ell, L, D](n):= \phi_1(r_1(n),r_3(n),\widetilde{f}+2).$$ \end{enumerate} \end{definition} Using the functions from Definition~\ref{definitionfunctions1}, we can now present the main functions. 
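Before presenting them, and purely for illustration, we note one admissible choice of data for the conditions above; this example is ours and is not needed in the sequel. For $\lambda_n=\frac{1}{n+2}$, $\gamma_n\equiv\frac{1}{10}$ (so that $\delta_n=\frac{9}{10}-\frac{1}{n+2}$), $c_n\equiv 1$ and $e_n\equiv 0$, conditions $(Q_1)-(Q_4)$ and $(Q_{5a})$ hold with $$h(n):=n+2,\qquad \ell(k):=k,\qquad L(k):=\lceil e^{k+2}\rceil,\qquad c:=7,\qquad E\equiv 0,$$ where for $(Q_3)$ one uses $\sum_{i=1}^{N}\frac{1}{i+2}\geq \ln(N+3)-\frac{3}{2}$ and for $(Q_4)$ that $\delta_n^2\geq \frac{4}{25}\geq\frac{1}{7}$; moreover, since $c_n\equiv 1$, one may take $\mathsf{C}\equiv 1$ below.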
\begin{definition}\label{definitionfunctions2} Given natural numbers $k, c, D\in \mathbb{N}$ and functions $f, \mathsf{C}, \ell, L:\mathbb{N}\to\mathbb{N}$, we define for all $n\in\mathbb{N}$, \begin{equation*} \Xi_1(n):=\Xi_1[k,f,c, \mathsf{C},\ell, L, D](n):=\zeta(2(6D+1)\max\{r_1, r_4 \}-1, \phi_1(r_1, r_3, \widetilde{f}+2)), \end{equation*} abbreviating $r_1=r_1(n), r_3=r_3(n)$ and $r_4=r_4(n)$.\\ Given natural numbers $k, c, D \in \mathbb{N}$ and functions $f, \mathsf{C},\ell, L, h:\mathbb{N}\to\mathbb{N}$, we define for all $n\in\mathbb{N}$, \begin{equation*} \xi(n) := \xi[k, f,c, \mathsf{C},\ell, L, D, h](n):=\zeta(\max \{16h(f(n))(k+1)^2(6D+1),4c(r_2)^2 (6D+1)\},f(n)), \end{equation*} abbreviating $r_2=r_2(n)$, and also \begin{equation*} \Xi_2(n) := \Xi_2[k, f,c, \mathsf{C},\ell, L,D, h](n):=\xi(\Phi(n)). \end{equation*} For every natural number $n\in\mathbb{N}$, we consider $\Xi(n):=\max\{ \Xi_1(n), \Xi_2(n)\}$. \end{definition} \begin{remark}\label{remrakmonotone} It is easy to check that all the functions defined in Definitions~\ref{definitionfunctions1} and \ref{definitionfunctions2}, except $\phi_1$ and $\phi_2$, are monotone, provided that the parameter functions are also monotone. For $\phi_2$ we always have monotonicity in $n,f$. In order to obtain also monotonicity in $k$ it is enough that $f(n) \geq n$. Similarly for $\phi_1$. \end{remark} We are now able to formulate our main result. We then show some easy consequences, in particular a metastable version of Theorem~\ref{ThmWangCui} (cf. Corollary~\ref{cor_metazn}). Below, for each point $z$, $(s_{n}^{z})$ denotes the auxiliary sequence defined by $s_{n}^{z}:=\norm{y_n-z}^2$. \begin{theorem}\label{theoremwangcui} Let $(z_n)$ be generated by \eqref{PPA}. Assume that there exist $c \in \mathbb{N}\setminus\{0\}$ and monotone functions $h:\mathbb{N} \to \mathbb{N} \setminus \{0\}$ and $\ell,L, E:\mathbb{N} \to\mathbb{N}$ such that conditions $(Q_{\ref{Q0}})-(Q_{\ref{Q3}})$ and either $(Q_{5a})$ or $(Q_{5b})$ hold. Let $D \in \mathbb{N} \setminus \{ 0 \}$ be such that $D\geq \max\{2 \norm{u-p},\norm{z_0-p} \}$, for some $p \in S$. Let $\mathsf{C}: \mathbb{N} \to \mathbb{N}$ be such that $c_n \leq \mathsf{C}(n)$, for all $n \in \mathbb{N}$. Then for all $k \in \mathbb{N}$ and monotone function $f:\mathbb{N}\to \mathbb{N}$ \begin{equation*} \exists n \leq \mu(k,f) \, \exists z \in B_D \, \forall i \in [n,f(n)] \left(s_{i}^{z}\leq \frac{1}{k+1} \right), \end{equation*} where $\mu(k,f):=\max\{\overline{\sigma}(k, \phi_2(\bar{r},\bar{n},\widetilde{f}+2)),\Phi(\beta(\overline{k}, \Xi))\}$, with $\bar{r}:=r_1(\beta(\overline{k},\Xi))$, $\bar{n}:=r_3(\beta(\overline{k},\Xi))$, $\overline{k}:=32(k+1)^2-1$ and $\overline{\sigma}, \widetilde{f},r_1, r_3,\phi_2,\Phi$ as in Definition~\ref{definitionfunctions1}, $\Xi$ as in Definition~\ref{definitionfunctions2}, and $\beta$ as in Proposition~\ref{lemmaprojectarg}. \end{theorem} In the conditions of Theorem~\ref{theoremwangcui}, we have the following corollaries exhibiting, respectively, a metastability bound and a metastable version of asymptotic regularity for the iteration \eqref{exactPPA}. \begin{corollary}\label{cor_metayn} For all $k \in \mathbb{N}$ and monotone function $f:\mathbb{N}\to \mathbb{N}$, \begin{equation*} \exists n \leq \mu(4(k+1)^2-1,f) \, \forall i,j \in [n,f(n)] \left(\norm{y_i-y_j}\leq \frac{1}{k+1} \right). 
\end{equation*} \end{corollary} \begin{proof} By Theorem~\ref{theoremwangcui} there exist $n \leq \mu(4(k+1)^2-1,f)$ and $z \in B_{D}$ such that for all $i \in [n,f(n)]$, $\norm{y_i-z}\leq \frac{1}{2(k+1)}$. Hence, for $i,j \in [n,f(n)]$ we have that $\norm{y_i-y_j}\leq \norm{y_i-z}+\norm{y_j-z}\leq \frac{1}{k+1}$. \end{proof} \begin{corollary}\label{cor_metaJi} For all $k \in \mathbb{N}$ and monotone function $f:\mathbb{N}\to \mathbb{N}$, we have \begin{enumerate}[$(i)$] \item\label{cor_metJi1} $\exists n \leq \widetilde{\mu}(k,f) \, \forall i \in [n,f(n)] \left(\norm{J_i(y_i)-y_i}\leq \frac{1}{k+1} \right)$, \item\label{cor_metJi2} $\exists n \leq \widetilde{\mu}(2k+1,f) \, \forall i \in [n,f(n)] \left(\norm{J(y_i)-y_i}\leq \frac{1}{k+1} \right)$, \end{enumerate} where $\widetilde{\mu}(k,f):=\max\{\mu(16c^2(k+1)^2-1,\check{f}+1),\ell(4cD(k+1)-1)\}$ and $\check{f}(m):=f(\max\{m, \ell(4cD(k+1)-1)\})$. \end{corollary} \begin{proof} It follows from Corollary~\ref{cor_metayn} that there exists $n_0 \leq \mu(16c^2(k+1)^2-1,\check{f}+1)$ such that $$\forall i \in [n_0,\check{f}(n_0)]\left(\norm{y_{i+1}-y_{i}}\leq \frac{1}{2c(k+1)} \right).$$ Let $n:=\max\{n_0,\ell(4cD(k+1)-1)\}$. Observe that $n\leq \widetilde{\mu}(k,f)$ and $[n,f(n)] \subseteq [n_0,\check{f}(n_0)]$. Then, for $i \in [n,f(n)]$ we have that $\norm{y_{i+1}-y_i}\leq \frac{1}{2c(k+1)}$ and, by condition $(Q_2)$, $\lambda_i\leq \frac{1}{4cD(k+1)}$. We have that \begin{equation*} \begin{split} \norm{J_i(y_i)-y_i} &\leq \norm{J_i(y_i)-(\lambda_i u + \gamma_i y_i+ \delta_iJ_i(y_i))}+\norm{y_{i+1}-y_i} \\ & \leq \lambda_i \norm{J_i(y_i)-u}+\gamma_i \norm{J_i(y_i)-y_i}+\norm{y_{i+1}-y_i} . \end{split} \end{equation*} Hence $(1-\gamma_i)\norm{J_i(y_i)-y_i} \leq \lambda_i \norm{J_i(y_i)-u}+\norm{y_{i+1}-y_i}$. Since \begin{equation*} \frac{1}{c}\norm{J_i(y_i)-y_i} \leq \delta_i\norm{J_i(y_i)-y_i} \leq (\lambda_i+\delta_i)\norm{J_i(y_i)-y_i}=(1-\gamma_i)\norm{J_i(y_i)-y_i}, \end{equation*} and $\norm{J_i(y_i)-u}\leq 2D$, we conclude that for $i \in [n,f(n)]$ \begin{equation*} \norm{J_i(y_i)-y_i} \leq c\lambda_i \norm{J_i(y_i)-u}+c \norm{y_{i+1}-y_i}\leq \frac{2cD}{4cD(k+1)}+\frac{c}{2c(k+1)}=\frac{1}{k+1}. \end{equation*} This concludes the proof of Part~\eqref{cor_metJi1}. Part~\eqref{cor_metJi2} then follows from Lemma~\ref{lemmaresolvineq}. \end{proof} In the original proof, the convergence of \eqref{PPA} is reduced to that of the exact variant \eqref{exactPPA}. The quantitative version of that reduction is shown in Lemma~\ref{lemma_zn-yn_small}, provided that the sequence $\norm{z_n-y_n}$ is bounded, as shown in Lemma~\ref{aux_bounds}. \begin{lemma}\label{aux_bounds} Consider a monotone function $E:\mathbb{N}\to\mathbb{N}$. Let $d_0,d_1, d_2\in \mathbb{N}\setminus\{0\}$ be natural numbers satisfying $d_0\geq \max\left\{\norm{u-p},\norm{z_0-p} \right\}$ for some $p\in S$, $d_1\geq \sum_{i=0}^{E(0)}\norm{e_i}+1$, and $d_2\geq\max\{\norm{z_i-y_i}\, :\, i\leq E(0)\}$. \begin{enumerate}[$(i)$] \item \label{raiodabola} $\forall n \in \mathbb{N} \left( \norm{y_{n}-p}\leq d_0\right)$, \item If $E$ satisfies $(Q_{5a})$, then $\forall n\in \mathbb{N}\, \left( \norm{z_n-y_n} \leq 2d_0+d_1\right)$, \item If $E$ satisfies $(Q_{5b})$, then $\forall n\in \mathbb{N}\, \left( \norm{z_n-y_n} \leq d_2\right)$. 
\end{enumerate} \end{lemma} \begin{proof} Since the resolvent is nonexpansive we have that \begin{equation}\label{eqladecima} \begin{split} \norm{z_{n+1}-y_{n+1}}&\leq \gamma_n\norm{z_n-y_n}+\delta_n\norm{J_{n}z_n-J_{n}y_n}+\norm{e_n}\\ &\leq \gamma_n\norm{z_n-y_n}+\delta_n\norm{z_n-y_n}+\norm{e_n}\\ &=(1-\lambda_n)\norm{z_n-y_n}+\norm{e_n}. \end{split} \end{equation} One sees that $\forall n \in \mathbb{N} \left( \norm{y_{n}-p}\leq d_0\right)$ by induction. Indeed, clearly $\norm{y_0-p}=\norm{z_0-p}\leq d_0$. As for the induction step we have \begin{equation*} \begin{split} \norm{y_{n+1}-p}&\leq \lambda_n\norm{u-p}+\gamma_n\norm{y_n-p}+\delta_n\norm{y_n-p}\\ &=\lambda_n\norm{u-p}+(1-\lambda_n)\norm{y_n-p}\\ &\leq \lambda_nd_0+(1-\lambda_n)d_0=d_0. \end{split} \end{equation*} Assume that $E$ satisfies $(Q_{5a})$. In particular, for $k=0$ we have $\forall n\in \mathbb{N}\left(\sum_{i=E(0)+1}^{E(0)+n}\norm{e_i}\leq 1 \right)$. \noindent For all $m\in\mathbb{N}$ \begin{equation*} \sum_{i=0}^{m}\norm{e_i}\leq \sum_{i=0}^{E(0)}\norm{e_i}+\sum_{i=E(0)+1}^{m}\norm{e_i}\leq d_1. \end{equation*} Then, similarly to \eqref{raiodabola}, one shows that $\norm{z_n-p}\leq d_0+ d_1$. Hence, for all $n\in \mathbb{N}$ \begin{equation*} \norm{z_{n}-y_{n}}\leq\norm{z_n-p}+\norm{y_n-p}\leq 2d_0+d_1. \end{equation*} Assume now that $E$ satisfies $(Q_{5b})$. We show by induction that $\norm{z_n-y_n}\leq d_2$, for all $n \in \mathbb{N}$. Clearly $\norm{z_0-y_0}\leq d_2$. Assume that $\norm{z_n-y_n}\leq d_2$. If $n<E(0)$, then $n+1 \leq E(0)$ and therefore $\norm{z_{n+1}-y_{n+1}}\leq d_2$. For $n \geq E(0)$, we have $\norm{e_n}\leq \lambda_n$. Then by \eqref{eqladecima}, the induction hypothesis, and the fact that $d_2\geq 1$ \begin{equation*} \norm{z_{n+1}-y_{n+1}}\leq (1-\lambda_n)d_2+\lambda_n\leq(1-\lambda_n)d_2+\lambda_nd_2= d_2.\qedhere \end{equation*} \end{proof} \begin{lemma}\label{lemma_zn-yn_small} Let $(z_n)$, $(y_n)$ be given, respectively, by \eqref{PPA} and \eqref{exactPPA}. Consider monotone functions $L, E:\mathbb{N}\to\mathbb{N}$ such that $L$ satisfies $(Q_3)$ and $E$ satisfies either $(Q_{5a})$ or $(Q_{5b})$. Let $d_0, d_1, d_2\in \mathbb{N}\setminus\{0\}$ be natural numbers as in Lemma~\ref{aux_bounds}. Define $d:=\max\{2d_0+d_1,d_2\}$. Then the sequence $(z_n-y_n)$ converges to zero and has rate of convergence $\Theta$, i.e. \begin{equation*} \forall k \in \mathbb{N} \, \forall n \geq \Theta(k)\left(\norm{z_n -y_n}\leq \frac{1}{k+1} \right), \end{equation*} where $\Theta(k):=\Theta[L, E, d_0, d_1, d_2](k):= L(E(3k+2)+\lceil\ln(3d(k+1))\rceil)+1$. \end{lemma} \begin{proof} In the case where $E:\mathbb{N} \to \mathbb{N}$ satisfies $(Q_{5a})$, by Lemma~\ref{aux_bounds}, we have $\norm{z_n-y_n}\leq 2d_0+d_1$ for all $n\in\mathbb{N}$. By inequality \eqref{eqladecima} we can instantiate Lemma~\ref{LemmaLLPP} with $s_n=\norm{z_n-y_n}$, $\alpha_n=\lambda_n$, $r_n\equiv 0$, $\gamma_n=\norm{e_n}$, $A=L$, $R\equiv 0$ and $G=E$. Hence \begin{equation}\label{theta1} \forall k \in \mathbb{N} \,\forall n \geq \theta_1(k)\left(\norm{z_n-y_n}\leq\frac{1}{k+1}\right), \end{equation} with $\theta_1(k):=L\left(E(3k+2)+\lceil \ln(3(2d_0+d_1)(k+1))\rceil \right)+1$. In the case where $E:\mathbb{N} \to \mathbb{N}$ satisfies $(Q_{5b})$, by Lemma~\ref{aux_bounds}, we have $\norm{z_n-y_n}\leq d_2$ for all $n\in\mathbb{N}$. 
Instantiating Lemma~\ref{LemmaLLPP} with $s_n=\norm{z_n-y_n}$, $\alpha_n=\lambda_n$, $r_n=\frac{\norm{e_n}}{\lambda_n} $, $\gamma_n\equiv 0$, $A=L$, $R=E$ and $G\equiv 0$, \begin{equation}\label{theta2} \forall k \in \mathbb{N} \, \forall n \geq \theta_2(k)\left(\norm{z_n-y_n}\leq\frac{1}{k+1}\right), \end{equation} with $\theta_2(k):=L\left(\max\{E(3k+2)-1,0\}+\lceil \ln(3d_2(k+1))\rceil \right)+1$. The monotonicity of the function $L$ implies $\max\{\theta_1(k),\theta_2(k)\}\leq \Theta(k)$. From \eqref{theta1} and \eqref{theta2}, we conclude the result. \end{proof} In the conditions of Theorem~\ref{theoremwangcui} and Lemma~\ref{lemma_zn-yn_small}, we have the following corollary exhibiting a metastability bound for the iteration \eqref{PPA}. \begin{corollary}\label{cor_metazn} For all $k \in \mathbb{N}$ and monotone function $f:\mathbb{N}\to \mathbb{N}$, \begin{equation*} \exists n \leq \nu(k,f) \, \forall i,j \in [n,f(n)] \left(\norm{z_i-z_j}\leq \frac{1}{k+1} \right), \end{equation*} where $\nu(k,f):= \max\{\mu(36(k+1)^2-1,\widehat{f}), \Theta(3k+2)\}$, with $\widehat{f}(m):=f(\max\{m, \Theta(3k+2)\})$, $\mu$ as in Theorem~\ref{theoremwangcui} and $\Theta$ as in Lemma~\ref{lemma_zn-yn_small}. \end{corollary} \begin{proof} By Corollary~\ref{cor_metayn}, there exists $n_0 \leq \mu(36(k+1)^2-1,\widehat{f})$ such that for all $i,j \in \left[n_0,\widehat{f}(n_0)\right]$ it holds that $\norm{y_i-y_j}\leq \frac{1}{3(k+1)}$. Define $n:= \max\{n_0,\Theta(3k+2)\}\leq \nu(k,f)$. Then clearly $[n,f(n)]\subset [n_0,\widehat{f}(n_0)]$ and for $i \in [n,f(n)]$, we have $i \geq \Theta(3k+2)$. Using Lemma~\ref{lemma_zn-yn_small} we conclude that \begin{equation*} \norm{z_i-z_j}\leq \norm{z_i-y_i}+\norm{y_i-y_j}+\norm{y_j-z_j}\leq \frac{1}{k+1}.\qedhere \end{equation*} \end{proof} \begin{remark}\label{rho1remark1} The functional $\rho$ defined by $\rho(k,f):=\nu(k,f^{maj})$ satisfies \eqref{rho1}, i.e. the restriction to monotone functions in Corollary~\ref{cor_metazn} poses no real limitation (cf. Remark~\ref{maj}). \end{remark} Using Corollary~\ref{cor_metaJi} and Lemma~\ref{lemma_zn-yn_small} we obtain a metastable version of the asymptotic regularity for the iteration \eqref{PPA}. \begin{corollary}\label{cor_metaJiz} For all $k \in \mathbb{N}$ and monotone function $f:\mathbb{N}\to \mathbb{N}$, we have \begin{enumerate}[$(i)$] \item\label{cor_metJiz1} $\exists n \leq \widetilde{\nu}(k,f) \, \forall i \in [n,f(n)] \left(\norm{J_i(z_i)-z_i}\leq \frac{1}{k+1} \right)$, \item\label{cor_metJiz2} $\exists n \leq \widetilde{\nu}(2k+1,f) \, \forall i \in [n,f(n)] \left(\norm{J(z_i)-z_i}\leq \frac{1}{k+1} \right)$, \end{enumerate} where $\widetilde{\nu}(k,f):=\max\{\widetilde{\mu}(2k+1,\breve{f}),\Theta(4k+3)\}$ and $\breve{f}(m):=f(\max\{m, \Theta(4k+3)\})$. \end{corollary} \begin{proof} By Corollary~\ref{cor_metaJi} there exists $n_0 \leq \widetilde{\mu}(2k+1, \breve{f})$ such that \begin{equation*} \forall i \in [n_0,\breve{f}(n_0)] \left( \norm{J_i(y_i)-y_i}\leq \frac{1}{2(k+1)} \right). \end{equation*} Let $n:=\max\{n_0, \Theta(4k+3)\}$. Then $n \leq \widetilde{\nu}(k,f)$ and for $i \in [n,f(n)] \subseteq [n_0, \breve{f}(n_0)]$ we have that \begin{equation*} \begin{split} \norm{J_i(z_i)-z_i}& \leq \norm{z_i-y_i} + \norm{y_i-J_i(y_i)}+ \norm{J_i(y_i)-J_i(z_i)}\\ & \leq 2\norm{z_i-y_i} + \norm{y_i-J_i(y_i)} \leq \frac{1}{k+1}. \end{split} \end{equation*} This shows Part~\eqref{cor_metJiz1}. Part~\eqref{cor_metJiz2} then follows from Lemma~\ref{lemmaresolvineq}. 
\end{proof} \begin{remark}\label{rho1remark2} Similarly to Remark~\ref{rho1remark1}, the functional $\widetilde{\rho}$ defined by $\widetilde{\rho}(k,f):=\widetilde{\nu}(k,f^{maj})$ satisfies \eqref{rho2}. \end{remark} In the remainder of this section we carry out the analysis of Theorem~\ref{ThmWangCui} which provides a proof to Theorem~\ref{theoremwangcui}. We begin with a lemma relating the resolvent functions $J$ and $J_n$. \begin{lemma}[\cite{DP(ta)}]\label{samefix-pt-set}Consider a monotone function $\mathsf{C}:\mathbb{N}\to \mathbb{N}$ such that $c_n\leq \mathsf{C}(n)$, for all $n\in\mathbb{N}$. For any $k,n\in \mathbb{N}$ and any $z\in H$, \[ \norm{J(z)-z}\leq \frac{1}{\zeta(k,n)+1} \;\to\; \forall n'\leq n\, \left(\norm{J_{n'}(z)-z}\leq \frac{1}{k+1}\right), \] with $\zeta(k,n):=\zeta[c,\mathsf{C}](k,n):= c(k+1)\mathsf{C}(n)-1$. \end{lemma} \begin{notation} For $p \in S$ and $D \in \mathbb{N}$, we denote by $B_{D}$ the closed ball centered at $p$ with radius $D$, i.e.\ $ B_{D}:=\{ z \in H: \norm{z-p}\leq D\}.$ In the following, a point $p$ is always made clear from the context. \end{notation} We recall the following quantitative result related to the projection argument. \begin{proposition}[\cite{PP(ta)}]\label{lemmaprojectarg} Let $D\in \mathbb{N}\setminus \{0\}$ be such that $D\geq 2\norm{u-p}$ for some $p\in S$.\\ For any natural number $k$ and monotone function $f:\mathbb{N} \to \mathbb{N} $, there are $n \leq \beta(k,f)$ and $z\in B_D$ such that $$\norm{J(z)-z} \leq \frac{1}{f(n)+1} \, \land \, \forall y\in B_D \left(\norm{J(y)-y}\leq \frac{1}{n+1} \to \langle u-z,y-z\rangle \leq \frac{1}{k+1}\right),$$ where $\beta(k,f):=\beta[D](k,f):=24D\left(w_{f}^{(R)}(0)+1\right)^2$,\\ with $R:=R[D,k]:=4D^4(k+1)^2$ and $w_{f}(m):=w_f[D,f](m):=\max\{ f(24D(m+1)^2),\, 24D(m+1)^2 \}$. \end{proposition} The fact that we are working with almost fixed points instead of actual fixed points creates a new error term $P_{n}^{z}$ in the main inequalities (cf. \eqref{Desigmain} and \eqref{desigJnyn-yn} below). Since we can consider good enough almost fixed points $z$, this error $P_{n}^{z}$ can be made small so as not to affect the convergence of the algorithm. \begin{lemma}\label{Lemmaoriginaleq10} Let $D \in \mathbb{N} \setminus \{ 0 \}$ be such that $D\geq \max\{2 \norm{u-p},\norm{z_0-p} \}$, for some $p \in S$, and $c \in \mathbb{N} \setminus \{0\}$ satisfying $(Q_4)$. For all $n \in \mathbb{N}$ and $z \in B_{D}$ we have \begin{equation}\label{Desigmain} s^{z}_{n+1}\leq (1-\lambda_{n})(s^{z}_{n}+P_{n}^{z})+2\lambda_n \langle u-z, y_{n+1}-z\rangle \end{equation} and \begin{equation}\label{desigJnyn-yn} \norm{J_{n}(y_n)-y_n}^2 \leq c\left(8D^2\lambda_n + s_{n}^z-s_{n+1}^{z}+P_{n}^{z}\right), \end{equation} where $s_{n}^{z}:=\norm{y_n-z}^2$, $P_{n}^{z}:=2\norm{J_{n}(z)-z}\left(3\norm{y_n-z}+\norm{J_{n}(z)-z} \right)$. \end{lemma} \begin{proof} Let $z$ be a point in $B_{D}$. Since the resolvent is nonexpansive we have that \begin{equation}\label{ineqJ} \begin{split} \norm{y_n-J_n(y_n)- (z-J_n(z))}^2 &\geq \left(\norm{J_n(y_n)-y_n}-\norm{J_n(z)-z}\right)^2\\ &= \norm{J_n(y_n)-y_n}^2 + \norm{J_n(z)-z}\left(-2 \norm{J_n(y_n)- y_n}+\norm{J_n(z)-z}\right)\\ &\geq \norm{J_n(y_n)-y_n}^2 +\\ &\norm{J_n(z)-z} \left(-2\left(\norm{J_n(y_n)-J_n(z)} + \norm{J_n(z)-z}+ \norm{y_n- z}\right) +\norm{J_n(z)-z}\right)\\ & =\norm{J_n(y_n)-y_n}^2 - \norm{J_n(z)-z} \left(4 \norm{y_n- z} +\norm{J_n(z)-z}\right). 
\end{split} \end{equation} Using \eqref{ineqJ} and the fact that the resolvent is both nonexpansive and firmly nonexpansive we derive \begin{equation*} \begin{split} \norm{J_{n}(y_n)-z}^2&\leq \left( \norm{J_{n}(y_n)-J_{n}(z)}+\norm{J_{n}(z)-z}\right)^2\\ &=\norm{J_{n}(y_n)-J_{n}(z)}^2+\norm{J_{n}(z)-z}\left(2\norm{J_{n}(y_n)-J_{n}(z)}+\norm{J_{n}(z)-z} \right)\\ &\leq \norm{y_n-z}^2-\norm{y_n-J_{n}(y_n)-z+J_{n}(z)}^2+\norm{J_{n}(z)-z}\left(2\norm{y_n-z}+\norm{J_{n}(z)-z}\right)\\%J is firmly nonexpansive & \leq \norm{y_n-z}^2-\norm{J_{n}(y_n)-y_n}^2+2\norm{J_{n}(z)-z}\left(3\norm{y_n-z}+\norm{J_{n}(z)-z}\right). \end{split} \end{equation*} Then, the definition of $y_{n+1}$ and Lemma~\ref{Lemmabasic} entail \begin{equation*}\label{ineqtoapplemma} \begin{split} \norm{y_{n+1}-z}^2&\leq \norm{\gamma_n(y_n-z)+\delta_n\left(J_{n}(y_n)-z\right)}^2+2\lambda_n\langle u-z,y_{n+1}-z \rangle\\ &= \gamma_n(\gamma_n+\delta_n)\norm{y_n-z}^2+\delta_n(\gamma_n+\delta_n)\norm{J_{n}(y_n)-z}^2\\ & \qquad -\gamma_n\delta_n\norm{J_{n}(y_n)-y_n}^2+2\lambda_n\langle u-z,y_{n+1}-z \rangle\\ &\leq \gamma_n(\gamma_n+\delta_n)\norm{y_n-z}^2+\delta_n(\gamma_n+\delta_n)\Big[\norm{y_n-z}^2-\norm{J_{n}(y_n)-y_n}^2 \\ & \qquad +2\norm{J_{n}(z)-z}\left(3\norm{y_n-z}+\norm{J_{n}(z)-z}\right)\Big]\\ &\qquad -\gamma_n\delta_n\norm{\left(J_{n}(y_n)-y_n \right)}^2+2\lambda_n\langle u-z,y_{n+1}-z\rangle\\ &\leq (1- \lambda_n)\norm{y_n-z}^2+2\lambda_n\langle u-z,y_{n+1}-z \rangle-\delta_n(2\gamma_n+\delta_n)\norm{J_{n}(y_n)-y_n}^2\\ &\qquad +2(1-\lambda_n)\norm{J_{n}(z)-z}\left(3\norm{y_n-z}+\norm{J_{n}(z)-z} \right). \end{split} \end{equation*} We conclude that \eqref{Desigmain} holds. Also, since $\delta_n(2\gamma_n+\delta_n) \geq \delta_n^2 \geq \frac{1}{c}$, \begin{equation}\label{desig10} s_{n+1}^z-s_{n}^{z}+\lambda_n s_{n}^{z}+\frac{1}{c}\norm{J_{n}(y_n)-y_n}^2\leq 2 \lambda_n\langle u-z,y_{n+1}-z\rangle+P_{n}^{z}. \end{equation} The inequality \eqref{desigJnyn-yn} follows from the fact that $2\langle u-z,y_{n+1}-z\rangle \leq 2 \norm{u-z}\norm{y_{n+1}-z}\leq8D^2$ . \end{proof} To deal with the remainder of the analysis we must discuss two cases depending on whether the sequence $(s^{z}_n)$ is decreasing or not. \subsection{First case} The first case that we are going to consider is the case where the sequence $(s^{z}_n)$ is eventually decreasing. Our goal is to apply Lemma~\ref{lemmaqtXu1} below, which is a quantitative version of Lemma~\ref{LemmaXu}, with $(v_n):=P_{n}^{z}$, $(r_{n}):= 2 \langle u-z, y_{n+1}-z\rangle$, for an adequate choice of $z$, in order to obtain a rate of metastability for $(s^{z}_n)$. The result is an easy adaptation of \cite[Lemma~14]{PP(ta)} for the case where $(\gamma_n)\equiv 0$. \begin{lemma}\label{lemmaqtXu1} Let $(s_n)$ be a bounded sequence of non-negative real numbers and $M\in\mathbb{N}$ a positive upper bound on $(s_n)$. Consider sequences of real numbers $(\lambda_n)\subset (0,1)$, $(r_n)$ and $(v_n)$ and assume the existence of a monotone function $ L$ satisfying condition $(Q_{\ref{Q2}})$. For natural numbers $k, n$ and $q$ assume \[\forall i\in[n,q]\, \left(v_i\leq \frac{1}{3(k+1)(q+1)}\land r_i\leq \frac{1}{3(k+1)}\right),\] and for all $i\in\mathbb{N}$, \[s_{i+1}\leq (1-\lambda_i)(s_i+v_i)+\lambda_ir_i.\] Then \[\forall i\in[\sigma(k,n),q]\, \left(s_i\leq \frac{1}{k+1}\right),\] with $\sigma(k,n):=\sigma[L, M](k,n):=L\left(n+\lceil \ln(3M(k+1))\rceil\right)+1$. 
\end{lemma} \begin{remark}\label{remarksigma} Observe that since $\lambda_n \leq 1$, for all $n \in \mathbb{N}$, by $(Q_3)$ it follows that for all $n \in \mathbb{N}$ we have $L(n) \geq n$. Hence the function $\sigma$ defined in Lemma~\ref{lemmaqtXu1} verifies the condition $\sigma(k,n) \geq n$, for all $n \in \mathbb{N}$. \end{remark} In the original proof of Theorem~\ref{ThmWangCui}, metric projection is used, followed by a sequential weak compactness argument. Sequential weak compactness can be eliminated in a way similar to \cite{DP(ta),PP(ta)}. This is to be expected in light of the arguments given in \cite{FFLLPP(19)}. The next result is an easy adaptation of \cite[Proposition~2.27]{K(08)} (see also Remark~2.29 in the same reference). \begin{lemma}\label{lemmaJyi} Let $D \in \mathbb{N} \setminus \{ 0 \}$ be such that $D\geq \max\{2 \norm{u-p},\norm{z_0-p} \}$, for some $p \in S$. For $k,n \in \mathbb{N}$, $f:\mathbb{N} \to \mathbb{N}$ monotone and $z \in B_{D}$, if \begin{equation*} \forall i \in [n, \phi_1(k,n,f)] \left(s_{i+1}^{z} \leq s_{i}^{z} \right), \end{equation*} then there exists $n' \leq \phi_2 (k,n,f)$ such that \begin{equation}\label{eqCauchymeta} \forall i,j \in [n',f(n')] \left(\left|s_{i}^{z}-s_{j}^{z}\right|\leq \frac{1}{k+1}\right), \end{equation} where $\phi_1(k,n,f):=\phi_1[D](k,n,f):=f(\phi_2(k,n,f))$ and $\phi_2(k,n,f):=\phi_2[D](k,n,f):= \max\{n, f^{(4D^2(k+1))}(n)\}$. Moreover, there is $n'\in \{f^{(i)}(n): i \leq 4D^2(k+1)\}$ satisfying \eqref{eqCauchymeta}. \end{lemma} We will need the following particular instance of Lemma~\ref{lemmaJyi}. \begin{lemma}\label{lemmametadifference} Let $D \in \mathbb{N} \setminus \{ 0 \}$ be such that $D\geq \max\{2 \norm{u-p},\norm{z_0-p} \}$, for some $p \in S$. For $k,n \in \mathbb{N}$, $f:\mathbb{N} \to \mathbb{N}$ monotone and $z \in B_{D}$, if \begin{equation*} \forall i \in [n, \phi_1(k,n,f+1)] \left(s_{i+1}^{z} \leq s_{i}^{z} \right), \end{equation*} then there exists $n' \leq \phi_2 (k,n,f+1)$ such that \begin{equation}\label{eqCauchymeta2} \forall i \in [n',f(n')] \left(s_{i}^{z}-s_{i+1}^{z}\leq \frac{1}{k+1}\right), \end{equation} where $\phi_1$, $\phi_2$ are as in Lemma~\ref{lemmaJyi}. Moreover, there is $n' \in \{(f+1)^{(i)}(n): i \leq 4D^2(k+1)\}$ satisfying \eqref{eqCauchymeta2}. \end{lemma} \begin{proof} We may assume that $f(n) \geq n$, because otherwise the result is trivial. By Lemma~\ref{lemmaJyi} we have that $$\forall i,j \in [n', f(n')+1] \left(\left|s_{i}^{z}-s_j^{z} \right|\leq \frac{1}{k+1} \right),$$ with $n' \in \{(f+1)^{(i)}(n): i \leq 4D^2(k+1)\}$. If $i \in [n',f(n')]$, then $i+1 \in [n', f(n')+1]$ and so $\left|s_{i}^{z}-s_{i+1}^{z} \right|\leq \frac{1}{k+1} $. Since $n' \geq n $ and $f(n')\leq f(\phi_2(k,n,f+1)) \leq \phi_1(k,n,f+1)$, using the monotonicity of the function $f$ we have $[n',f(n')] \subseteq [n,\phi_1(k,n,f+1)]$. Then for $i \in [n',f(n')]$ it holds that $s_{i}^{z}-s_{i+1}^{z} \geq 0$ which entails the result. \end{proof} In the discussion of the first case we need a quantitative version of the fact that $(y_n)$ is a sequence of almost fixed points for the resolvent function. This is accomplished with Lemmas~\ref{linnerproduct0} and \ref{lemma_g}. \begin{lemma}\label{linnerproduct0} Let $D \in \mathbb{N} \setminus \{ 0 \}$ be such that $D\geq \max\{2 \norm{u-p},\norm{z_0-p} \}$, for some $p \in S$, and $c\in \mathbb{N} \setminus \{0\} $ satisfying $(Q_4)$. 
For $m,n \in \mathbb{N}$, $f:\mathbb{N} \to \mathbb{N}$ monotone and $z \in B_{D}$, if $n \geq \ell(96cD^2(m+1)^2-1)$ and \begin{equation*} \forall i \in [n, f(n)] \left(s_{i}^{z}-s_{i+1}^{z}\leq \frac{1}{12c(m+1)^2} \wedge P_{i}^{z}\leq \frac{1}{12c(m+1)^2} \right), \end{equation*} then \begin{equation*} \forall i \in [n, f(n)] \left(\norm{J(y_i)-y_i}\leq \frac{1}{m+1} \right). \end{equation*} \end{lemma} \begin{proof} For $i\in [n, f(n)]$, we have $i\geq n\geq \ell(96cD^2(m+1)^2-1)$. Hence, by condition $(Q_{\ref{Q1}})$, \begin{equation*} \lambda_{i}\leq \frac{1}{96cD^2(m+1)^{2}}. \end{equation*} By inequality \eqref{desigJnyn-yn} \begin{equation*} \norm{J_{i}(y_i)-y_i}^2 \leq c\left(\frac{8D^2}{96cD^2(m+1)^{2}} + \frac{1}{12c(m+1)^2}+\frac{1}{12c(m+1)^2}\right) = \frac{1}{4(m+1)^2}. \end{equation*} Hence \begin{equation*} \forall i \in [n,f(n)] \left(\norm{J_{i}(y_i)-y_i} \leq \frac{1}{2(m+1)} \right), \end{equation*} and the result follows by Lemma~\ref{lemmaresolvineq}. \end{proof} \begin{lemma}\label{lemma_g} Let $D \in \mathbb{N} \setminus \{ 0 \}$ be such that $D\geq \max\{2 \norm{u-p},\norm{z_0-p} \}$, for some $p \in S$, and $c\in \mathbb{N} \setminus \{0\} $ satisfying $(Q_4)$. For $m,n \in \mathbb{N}$, $f:\mathbb{N} \to \mathbb{N}$ monotone and $z \in B_{D}$, if $n \geq \ell ((r_1+1)8D^2-1)$ and \begin{equation*} \forall i \in [n,\phi_1(r_1,n,f+1)] \left( s_{i+1}^{z}\leq s_i^{z} \wedge P_i^{z} \leq \frac{1}{r_1+1}\right), \end{equation*} then \begin{equation*} \exists n' \leq \phi_2 (r_1,n,f+1)\, \forall i \in [n',f(n')] \left(\norm{J(y_i)-y_i}\leq \frac{1}{m+1}\right), \end{equation*} where $r_1:=r_1(m)=12c(m+1)^2-1$, as in Definition~\ref{definitionfunctions1}. \end{lemma} \begin{proof} We may assume that $f(n) \geq n$, because otherwise the result is trivial. By Lemma~\ref{lemmametadifference}, there exists $n' \in [n,\phi_{2}(r_1,n,f+1)]$ such that \begin{equation*} \forall i \in [n', f(n')] \left(s_{i}^{z}-s_{i+1}^{z} \leq \frac{1}{12c(m+1)^2} \right). \end{equation*} Since $n'\geq n$ and $f(n')\leq \phi_1(r_1, n, f+1)$ we have that $n' \geq \ell((r_1+1)8D^2-1)$ and $[n',f(n')]\subseteq [n, \phi_1(r_1,n,f+1)],$ which implies that $n' \geq \ell (96cD^2(m+1)^2-1)$ and \begin{equation*} \forall i \in [n',f(n')] \left(s_{i}^{z}-s_{i+1}^{z}\leq \frac{1}{r_1+1} \wedge P_{i}^{z}\leq \frac{1}{r_1+1} \right). \end{equation*} Hence, by Lemma~\ref{linnerproduct0}, \begin{equation*} \forall i \in [n', f(n')] \left(\norm{J(y_i)-y_i}\leq \frac{1}{m+1} \right).\qedhere \end{equation*} \end{proof} The analysis of the first case is concluded with the following result. It gives a rate of metastability for the convergence of the sequence $(s_n^{z})$ provided that $z$ is a sufficiently good approximation to the projection point and that the decreasing property of the sequence $(s_n^{z})$ holds long enough. \begin{lemma}\label{maincase1} Let $D \in \mathbb{N} \setminus \{ 0 \}$ be such that $D\geq \max\{2 \norm{u-p},\norm{z_0-p} \}$, for some $p \in S$. For $k,m \in \mathbb{N}$, $f:\mathbb{N} \to \mathbb{N}$ monotone and $z \in B_{D}$, assume that \begin{enumerate}[$(i)$] \item\label{lc13} $\forall i\in [r_3, \phi_1(r_1, r_3, \widetilde{f}+2)]\, \left( s_{i+1}^z\leq s_i^z \right)$, \item\label{lc11} $\norm{J(z)-z}\leq \dfrac{1}{\Xi_1(m)+1}$, \item\label{lc12} $\forall y\in B_D\, \left( \norm{J(y)-y}\leq \frac{1}{m+1}\to \langle u-z, y-z\rangle \leq \frac{1}{6(k+1)} \right)$. 
\end{enumerate} Then \begin{equation*} \exists n\leq \overline{\sigma}(k,\phi_2(r_1,r_3,\widetilde{f}+2))\, \forall i\in[n, f(n)]\, \left(s_i^z\leq \frac{1}{k+1} \right), \end{equation*} where $\overline{\sigma}$, $\widetilde{f}$, $r_1:=r_1(m)$, $r_3:=r_3(m)$, $\phi_1$ and $\phi_2$ are as in Definition~\ref{definitionfunctions1} and $\Xi_1$ is as Definition~\ref{definitionfunctions2}. \end{lemma} \begin{proof} Let $r_4:=r_4(m)$ be as in Definition~\ref{definitionfunctions1}. By \eqref{lc11}, using Lemma~\ref{samefix-pt-set} and the definition of $\Xi_1$, we have \begin{equation*} \forall i\leq \phi_1(r_1,r_3,\widetilde{f}+2)\, \left(\norm{J_i(z)-z}\leq \frac{1}{2(6D+1)\max \{r_1,r_4\}} \right). \end{equation*} Noticing that $\norm{J_i(z)-z}\leq 1$ and $\norm{y_i-z}\leq 2D$, for $i\leq \phi_1(r_1,r_3,\widetilde{f}+2)$, we have \begin{equation}\label{eqr1r2} P_{i}^{z}=2\norm{J_{i}(z)-z}\left(3\norm{y_i-z}+\norm{J_{i}(z)-z} \right)\leq \frac{2(6D+1)}{2(6D+1)\max \{r_1,r_4\}}=\frac{1}{\max \{r_1,r_4\}}. \end{equation} By \eqref{eqr1r2}, \eqref{lc13} and the fact that $r_3 \geq \ell(96cD^2(m+1)^2-1)$, it follows from Lemma~\ref{lemma_g} that \begin{equation}\label{eqJyim} \exists n \leq \phi_2(r_1,r_3,\widetilde{f}+2) \, \forall i \in [n,\widetilde{f}(n)+1] \left(\norm{J(y_i)-y_i}\leq \frac{1}{m+1} \right). \end{equation} For $i \in [n, \widetilde{f}(n)]$, we have that $i+1 \in [n,\widetilde{f}(n)+1]$. Hence $\norm{J(y_{i+1})-y_{i+1}}\leq \frac{1}{m+1}$. It follows from \eqref{lc12} and the fact that $y_{i+1} \in B_{D}$ that \begin{equation}\label{eqinnerproduct} \exists n \leq \phi_2(r_1,r_3,\widetilde{f}+2) \, \forall i \in [n, \widetilde{f}(n)] \left(\langle u-z, y_{i+1}-z \rangle \leq \frac{1}{6(k+1)}\right). \end{equation} Since $\widetilde{f}(n)\leq \widetilde{f}(\phi_2(r_1,r_3,\widetilde{f}+2))$, we have $r_4 \geq 3(k+1)(\widetilde{f}(n)+1)$. Since $\widetilde{f}(n)\leq \phi_1(r_1,r_3,\widetilde{f}+2)$, by \eqref{eqr1r2} and \eqref{eqinnerproduct} it follows that \begin{equation}\label{eqinnerproductPsmall} \exists n \leq \phi_2(r_1,r_3,\widetilde{f}+2) \, \forall i \in [n, \widetilde{f}(n)] \left(P_{i}^{z}\leq \frac{1}{3(k+1)(\widetilde{f}(n)+1)} \wedge \langle u-z, y_{i+1}-z \rangle \leq \frac{1}{6(k+1)}\right). \end{equation} Then from \eqref{eqinnerproductPsmall}, by applying Lemma~\ref{lemmaqtXu1} with $q=\widetilde{f}(n):=f(\overline{\sigma}(k,n))$ and using inequality \eqref{Desigmain} we conclude \begin{equation*} \exists n\leq \phi_2(r_1,r_3,\widetilde{f}+2)\, \forall i\in[\overline{\sigma}(k,n), f(\overline{\sigma}(k,n))]\, \left(s_i^z\leq \frac{1}{k+1} \right), \end{equation*} which entails the result. \end{proof} \subsection{Second case} We are now going to consider the case where the sequence $(s^{z}_n)$ is not eventually decreasing. For $s:\mathbb{N} \to \mathbb{N}$ and $m \in \mathbb{N}$ we define a functional $\tau$ as follows. \begin{equation*} \tau^{s}_{m}(n):= \begin{cases} n & n<m\\ \max\{ k \in [m,n]: s_{k}<s_{k+1}\} & n \geq m \wedge\exists k \in [m,n] \left(s_{k}<s_{k+1}\right) \\ n & n \geq m \wedge \forall k \in [m,n] \left(s_{k+1}\leq s_{k}\right) . \end{cases} \end{equation*} \begin{remark} Given $s:\mathbb{N} \to \mathbb{N}$ and $m \in \mathbb{N}$, the definition of $\tau^{s}_{m}$ implies immediately that for all $i \in \mathbb{N}$ it holds that $\tau^{s}_{m}(i) \leq i$. Moreover, $\tau^{s}_{m}(m)=m$ and for $n \in \mathbb{N}$, if $s_{j+1} \leq s_{j}$, for all $j \in [m,n]$, then $\tau^{s}_{m}(i) =i$, for all $i\leq n$. 
\end{remark} The following proposition shows that the functional $\tau^{s}_{m}$ is monotone in $m$ and $n$, respectively. \begin{proposition}\label{proptau} Let $s:\mathbb{N} \to \mathbb{N}$ and $m \in \mathbb{N}$. The functional $\tau^{s}_{m}$ enjoys the following properties \begin{enumerate}[$(i)$] \item\label{tau3} $\forall i \in \mathbb{N} \left(\tau^{s}_{m}(i) \leq \tau^{s}_{m}(i+1) \right)$. \item\label{tau4} $\forall i \in \mathbb{N} \left(\tau^{s}_{m}(i) \leq \tau^{s}_{m+1}(i) \right)$. \end{enumerate} \end{proposition} \begin{proof} \eqref{tau3}. If $i+1\leq m$, then $\tau^{s}_{m}(i) \leq i+1 =\tau^{s}_{m}(i+1)$. Assume that $i+1>m$. Then $i \geq m$. If $\exists j \in [m,i](s_j <s_{j+1})$, then clearly $\exists j \in [m,i+1](s_j <s_{j+1})$ and \begin{equation*} \tau^{s}_{m}(i)= \max \{ j \in [m,i]: s_j <s_{j+1}\} \leq \max \{ j \in [m,i+1]: s_j <s_{j+1}\}= \tau^{s}_{m}(i+1). \end{equation*} On the other hand, if $\forall j \in [m,i](s_{j+1}\leq s_j)$ then $\tau^{s}_{m}(i)=i \leq i+1 =\tau^{s}_{m}(i+1)$. \eqref{tau4}. If $i \leq m$, then $\tau^{s}_{m}(i)=i=\tau^{s}_{m+1}(i)$. If $i = m+1$, then $\tau^{s}_{m}(m+1) \leq m+1 = \tau^{s}_{m+1}(m+1)$. If $i> m+1$ and $\forall j \in [m,i](s_{j+1}\leq s_{j})$, then clearly also $\forall j \in [m+1,i](s_{j+1}\leq s_{j})$ and $\tau^{s}_{m}(i)=i=\tau^{s}_{m+1}(i)$. If $i> m+1$ and $\exists j \in [m,i](s_j< s_{j+1})$, we have that either \begin{equation}\label{condtodo} \forall j \in [m+1,i](s_{j+1}\leq s_{j}) \end{equation} or \begin{equation}\label{condexiste} \exists j \in [m+1,i](s_j< s_{j+1}). \end{equation} If \eqref{condtodo} holds, then we must have $s_m< s_{m+1}$ and hence $\tau^{s}_{m}(i)=m < i = \tau^{s}_{m+1}(i)$. If \eqref{condexiste} holds, then \begin{equation*} \tau^{s}_{m}(i)= \max \{ j \in [m,i]: s_j <s_{j+1}\} = \max \{ j \in [m+1,i]: s_j <s_{j+1}\}= \tau^{s}_{m+1}(i). \qedhere \end{equation*} \end{proof} We recall that the original proof relies on \cite[Lemma~3.1]{Mainge}. As it turns out, for our quantitative analysis we do not need a full quantitative version of that result as the following weakening is sufficient. \begin{lemma}\label{qtlemmaMainge} Let $s:\mathbb{N} \to \mathbb{N}$ and $m,r \in \mathbb{N}$ be arbitrary. If $m \geq r$ and $s_{m}<s_{m+1}$, then \begin{equation*} \forall i \geq m\left(\tau^{s}_{m}(i) \geq r \wedge \max\{s_{\tau^{s}_{m}(i)},s_i\}\leq s_{\tau^{s}_{m}(i)+1} \right) . \end{equation*} \end{lemma} \begin{proof} We show first that \begin{equation}\label{tau5} s_{m}<s_{m+1} \to \forall i \geq m\left(\max\{s_{\tau^{s}_{m}(i)},s_i\}\leq s_{\tau^{s}_{m}(i)+1} \right) \end{equation} Assume that $s_{m}<s_{m+1}$ and take arbitrary $i \geq m$. Then $\tau^{s}_{m}(i)=\max \{j \in [m,i]: s_{j}<s_{j+1}\}$ and, in particular, $s_{\tau^{s}_{m}(i)} \leq s_{\tau^{s}_{m}(i)+1}$. On the other hand, consider the following three cases: (i) $\tau^{s}_{m}(i)=i$, (ii) $\tau^{s}_{m}(i)=i-1$ and (iii) $\tau^{s}_{m}(i)< i-1$. (i) From $s_{\tau^{s}_{m}(i)}\leq s_{\tau^{s}_{m}(i)+1}$, we get $s_i \leq s_{\tau^{s}_{m}(i)+1}$. (ii) This case reduces to $s_i \leq s_i$ which is trivially true. (iii) Note that for $j \in [\tau^{s}_{m}(i)+1, i-1]$ we must have $s_{j+1} \leq s_{j}$. Hence $s_i \leq s_{i-1}\leq \dots \leq s_{\tau^{z}_{m}(i)+1}$. This finishes the proof of \eqref{tau5}. Assume that $m \geq r $. Then by Part~\eqref{tau3} of Proposition~\ref{proptau}, for $i \geq m$, it follows that $\tau^{s}_{m}(i) \geq \tau^{s}_{m}(m)=m \geq r$, which concludes the proof. 
\end{proof} In the following, since we are going to use sequences $(s^z_n)$ that involve a parameter $z \in H$, we simplify the notation by writing $\tau^{z}_{m}$ instead of $\tau^{s^z}_{m}$. \begin{lemma}\label{lemmamaincase2} Let $D \in \mathbb{N} \setminus \{ 0 \}$ be such that $D\geq \max\{2 \norm{u-p},\norm{z_0-p} \}$, for some $p \in S$. For $k,m,n \in \mathbb{N}$, $f:\mathbb{N} \to \mathbb{N}$ monotone and $z \in B_{D}$, assume that \begin{enumerate} [$(i)$] \item\label{quantcase2} $n \geq r_3(m) \wedge s_n^{z} <s_{n+1}^{z}$, \item\label{IneqJz-z} $\norm{J(z)-z}\leq \frac{1}{\xi(n)+1}$, \item\label{hypprodint} $\forall y \in B_{D} \left(\norm{J(y)-y}\leq \frac{1}{m+1} \to \langle u-z,y-z \rangle \leq \frac{1}{32(k+1)^2} \right).$ \end{enumerate} Then $$\forall i \in [n,f(n)]\left(s_i^z \leq \frac{1}{k+1} \right),$$ where $\zeta$, $r_2$ and $r_3$ are as in Definition~\ref{definitionfunctions1} and $\xi$ is as in Definition~\ref{definitionfunctions2}. \end{lemma} \begin{proof} In this proof we omit the parameter $z$, whenever possible, and write $s_{(\cdot)}$, $\tau_n(\cdot)$ and $P_{(\cdot)}$ instead of $s_{(\cdot)}^{z}$, $\tau_n^{z}(\cdot)$ and $P_{(\cdot)}^{z}$, respectively. By Lemma~\ref{qtlemmaMainge} we have that \begin{equation*} \forall i \geq n \left( \tau_{n}(i) \geq r_3(m) \wedge \max\{s_{\tau_{n}(i)},s_i\}\leq s_{\tau_{n}(i)+1} \right). \end{equation*} Let $i \in [n,f(n)]$. Since $s_{\tau_{n}(i)} \leq s_{\tau_{n}(i)+1}$, by inequality \eqref{desigJnyn-yn} we have that \begin{equation}\label{eq10caso2} \norm{J_{\tau_{n}(i)}(y_{\tau_{n}(i)})-y_{\tau_{n}(i)}}^2 \leq 8cD^2\lambda_{\tau_{n}(i)}+cP_{\tau_{n}(i)}. \end{equation} By the monotonicity of $\ell$ and the definition of $r_3(m)$ we have that $\tau_{n}(i) \geq \ell\left(16c(r_2(m))^2D^2\right)$. Hence, by $(Q_{\ref{Q1}})$ \begin{equation}\label{Ineqlambda} 8cD^2\lambda_{\tau_{n}(i)}\leq \frac{8cD^2}{16c(r_2(m))^2D^2}=\frac{1}{2(r_2(m))^2}. \end{equation} By \eqref{IneqJz-z}, the monotonicity of $\zeta$ and the definition of $\xi$ we have that $\norm{J(z)-z}\leq \frac{1}{\zeta\left(4c(r_2(m))^2(6D+1),f(n)\right)+1}$. Since $\tau_{n}(i)\leq i \leq f(n)$, by Lemma~\ref{samefix-pt-set} we have \begin{equation*} \norm{J_{\tau_{n}(i)}(z)-z}\leq\frac{1}{4c(r_2(m))^2(6D+1)}\, (\leq 1). \end{equation*} Then, since $\norm{y_{\tau_{n}(i)}-z}\leq 2D$, \begin{equation}\label{IneqP} P_{\tau_{n}(i)}=2\norm{J_{\tau_{n}(i)}(z)-z}\left(3\norm{y_{\tau_{n}(i)}-z}+\norm{J_{\tau_{n}(i)}(z)-z}\right)\leq \frac{2(6D+1)}{4c(r_2(m))^2(6D+1)}=\frac{1}{2c(r_2(m))^2}. \end{equation} Combining $\eqref{eq10caso2} - \eqref{IneqP}$ we conclude that \begin{equation*} \norm{J_{\tau_{n}(i)}(y_{\tau_{n}(i)})-y_{\tau_{n}(i)}}^2 \leq \frac{1}{2(r_2(m))^2}+c\frac{1}{2c(r_2(m))^2}=\frac{1}{(r_2(m))^2}. \end{equation*} By the definition of $r_2(m)$ we conclude that \begin{equation}\label{ineqJtau} \norm{J_{\tau_{n}(i)}(y_{\tau_{n}(i)})-y_{\tau_{n}(i)}} \leq \dfrac{1}{128D(k+1)^2}. \end{equation} Moreover, the definition of $r_2(m)$ and Lemma~\ref{lemmaresolvineq} entail that $\norm{J(y_{\tau_{n}(i)})-y_{\tau_{n}(i)}}\leq \frac{1}{m+1}.$ We show that \begin{equation}\label{ineqQ} P_{\tau_{n}(i)} \leq \frac{1}{8h(i)(k+1)^2}. \end{equation} Indeed, by the definition of $\xi$ we have $\norm{J(z)-z}\leq \frac{1}{\zeta(16h(f(n))(k+1)^2(6D+1),f(n))+1}$. Then, by Lemma~\ref{samefix-pt-set}, for $n'\leq f(n) $ $$\norm{J_{n'}(z)-z}\leq \frac{1}{16h(f(n))(k+1)^2(6D+1)}.$$ Since $h$ is monotone and $i\leq f(n)$ we have $\tau_{n}(i)\leq f(n)$ and $h(i)\leq h(f(n))$. 
Then $$\norm{J_{\tau_{n}(i)}(z)-z}\leq \frac{1}{16h(i)(k+1)^2(6D+1)}(\leq 1).$$ Hence \begin{equation*} P_{\tau_{n}(i)} \leq \frac{2(6D+1)}{16h(i)(k+1)^2(6D+1)}=\frac{1}{8h(i)(k+1)^2}. \end{equation*} By $(Q_{\ref{Q1}})$ and the fact that $\tau_{n}(i) \geq r_3(m)$ we have that $\lambda_{\tau_{n}(i)} \leq \frac{1}{256D^2(k+1)^2}.$ Then, the definition of \eqref{exactPPA} and the fact that $\norm{y_{\tau_{n}(i)}-u}\leq 2D$ entail \begin{equation}\label{designormytau} \begin{split} \norm{y_{\tau_{n}(i)}-y_{\tau_{n}(i)+1}} &=\norm{\lambda_{\tau_{n}(i)}(y_{\tau_{n}(i)}-u)+\delta_{\tau_{n}(i)}\left(y_{\tau_{n}(i)}-J_{\tau_{n}(i)}(y_{\tau_{n}(i)})\right)}\\ &\leq \lambda_{\tau_{n}(i)}\norm{y_{\tau_{n}(i)}-u}+ \delta_{\tau_{n}(i)} \norm{J_{\tau_{n}(i)}(y_{\tau_{n}(i)})-y_{\tau_{n}(i)}} \\ & \leq 2D\lambda_{\tau_{n}(i)} +\norm{J_{\tau_{n}(i)}(y_{\tau_{n}(i)})-y_{\tau_{n}(i)}}. \end{split} \end{equation} Hence, using \eqref{ineqJtau}, we derive that $\norm{y_{\tau_{n}(i)}-y_{\tau_{n}(i)+1}} \leq \frac{2D}{256D^2(k+1)^2}+\frac{1}{128D(k+1)^2} =\frac{1}{64D(k+1)^2},$ which trivially implies that \begin{equation}\label{ineqprinc2} \norm{y_{\tau_{n}(i)}-y_{\tau_{n}(i)+1}} \leq \frac{1}{2(k+1)}. \end{equation} Since $y_{\tau_{n}(i)} \in B_D$ and $\norm{J(y_{\tau_{n}(i)})-y_{\tau_{n}(i)}} \leq \frac{1}{m+1}$, from \eqref{hypprodint} we obtain \begin{equation}\label{desigprodint} \langle u-z,y_{\tau_{n}(i)}-z \rangle \leq \frac{1}{32(k+1)^2}. \end{equation} We have that $\norm{u-z}\leq 2D$. Then, using \eqref{desigprodint} \begin{equation*} \begin{split} \langle u-z, y_{\tau_{n}(i)+1}-z\rangle &= \langle u-z, y_{\tau_{n}(i)+1}-y_{\tau_{n}(i)}\rangle+\langle u-z, y_{\tau_{n}(i)}-z\rangle \\ & \leq \norm{u-z} \norm{y_{\tau_{n}(i)+1}-y_{\tau_{n}(i)}}+\langle u-z, y_{\tau_{n}(i)}-z\rangle \\ & \leq \frac{1}{16(k+1)^2}. \end{split} \end{equation*} By \eqref{desig10}, using \eqref{ineqQ}, condition $(Q_1)$ and the fact that $h\left(\tau_{n}(i)\right)\leq h(i)$, we derive \begin{equation}\label{desigstn} \begin{split} s_{\tau_{n}(i)} &\leq 2 \langle u-z,y_{\tau_{n}(i)+1}-z \rangle +h\left(\tau_{n}(i)\right)\left(s_{\tau_{n}(i)}-s_{\tau_{n}(i)+1}+P_{\tau_{n}(i)}\right)\\ &\leq 2 \langle u-z,y_{\tau_{n}(i)+1}-z \rangle +h\left(\tau_{n}(i)\right)P_{\tau_{n}(i)}\\ &\leq \frac{1}{4(k+1)^2}. \end{split} \end{equation} Observe that \begin{equation}\label{desigsqrt} \begin{split} \sqrt{s_{\tau_{n}(i)+1}} &= \norm{y_{\tau_{n}(i)}-z-\left(y_{\tau_{n}(i)}-y_{\tau_{n}(i)+1}\right)}\\ &\leq \sqrt{s_{\tau_{n}(i)}}+ \norm{y_{\tau_{n}(i)}-y_{\tau_{n}(i)+1}}. \end{split} \end{equation} Then, by \eqref{ineqprinc2}, \eqref{desigstn} and \eqref{desigsqrt} we have \begin{equation*} \sqrt{s_{\tau_{n}(i)+1}} \leq \sqrt{\frac{1}{4(k+1)^2}}+\frac{1}{2(k+1)}=\frac{1}{k+1}. \end{equation*} Hence $s_{\tau_{n}(i)+1} \leq \frac{1}{(k+1)^2} \leq \frac{1}{k+1}$, which entails the result. \end{proof} \subsection{Putting it together} We are now able to prove our main result. \begin{proof}[Proof of Theorem~\ref{theoremwangcui}] By Proposition~\ref{lemmaprojectarg} there exist $m_0 \leq \beta(\overline{k},\Xi)$ and $z \in B_{D}$ such that \begin{equation*} \norm{J(z)-z}\leq \frac{1}{\Xi(m_0)+1} \end{equation*} and \begin{equation*} \forall y \in B_{D} \left(\norm{J(y)-y}\leq \frac{1}{m_0+1} \to \langle u-z,y-z\rangle \leq \frac{1}{32(k+1)^2} \right). \end{equation*} Consider $r_1$ and $r_3$ to be, respectively, the natural numbers $r_1(m_0)$ and $r_3(m_0)$. Observe that by monotonicity (cf. 
Remark~\ref{remrakmonotone}), $r_1\leq \bar{r}$ and $r_3\leq \bar{n}$. We may assume that $\widetilde{f}(r_3)\geq r_3$. Indeed, if $f(\overline{\sigma}(k,r_3))<r_3$, then $f(r_3)<r_3$ by monotonicity and the fact that $\overline{\sigma}(k,r_3)\geq r_3$ (cf. Remark~\ref{remarksigma}). Notice that by the definition of $\phi_2$ we obtain $r_3 \leq \overline{n}\leq \phi_2(\overline{r}, \overline{n}, \widetilde{f}+2)$. Again by monotonicity and the fact that $\overline{\sigma}(k,r_3)\geq r_3$ we would then have that $r_3 \leq \mu(k,f)$ and the result would be trivially true. The condition $\widetilde{f}(r_3)\geq r_3$ implies that $\phi_2(r_1,r_3,\widetilde{f}+2) \leq \phi_2(\overline{r},\overline{n},\widetilde{f}+2)$ and consequently $\phi_1(r_1,r_3,\widetilde{f}+2) \leq \phi_1(\overline{r},\overline{n},\widetilde{f}+2)$. If \begin{equation*} \forall i \in [r_3, \phi_1(r_1,r_3,\widetilde{f}+2)] \left(s_{i+1}^{z}\leq s_{i}^{z}\right), \end{equation*} then by Lemma~\ref{maincase1}, there is $n\leq \overline{\sigma}(k, \phi_2(r_1,r_3,\widetilde{f}+2)) \leq \mu(k,f)$ such that \begin{equation*} \forall i \in [n,f(n)] \left(s_{i}^{z}\leq \frac{1}{k+1} \right). \end{equation*} On the other hand, if $s_{n}^{z}< s_{n+1}^{z}$ for some $n \in [r_3, \phi_1(r_1,r_3,\widetilde{f}+2)]=[r_3, \Phi(m_0)]$, we have \begin{equation*} \norm{J(z)-z}\leq \frac{1}{\Xi(m_0)+1}\leq \frac{1}{\xi(\Phi (m_0))+1}\leq \frac{1}{\xi(n)+1}. \end{equation*} By Lemma~\ref{lemmamaincase2}, we conclude that there is $n\leq \Phi(m_0)\leq \mu(k,f)$ such that \begin{equation*} \forall i \in [n,f(n)] \left(s_{i}^{z}\leq \frac{1}{k+1} \right). \qedhere \end{equation*} \end{proof} \section{Final remarks}\label{sectionremarks} We observe that conditions $(Q_1) - (Q_{4})$ together with either condition $(Q_{5a})$ or $(Q_{5b})$ allow the sequence $(\gamma_n)$ to be identically equal to zero and so, by taking that choice for $(\gamma_n)$, the iteration \eqref{PPA} reduces to \eqref{HPPA}. In that case, condition $(Q_4)$ can be written as $\forall n \in \mathbb{N} \left(\min\{c_n, (1-\lambda_n)^2\} \geq \frac{1}{c}\right)$ and a quantitative version holds with the same bounds. In fact, that quantitative version is a generalization of previous analyses \cite{LLPP(ta),PP(ta)}, as Theorem~\ref{ThmWangCui} has weaker conditions than those of \cite[Theorem~5.1]{X(02)} and \cite[Theorem~2]{BM(11)}. However, the analyses in \cite{LLPP(ta),PP(ta)} are still of interest as the bounds obtained there are much simpler than the ones obtained in this paper. Under the quantitative conditions $(Q_1) - (Q_{4})$ together with either $(Q_{5a})$ or $(Q_{5b})$, in Corollaries \ref{cor_metayn} and \ref{cor_metazn} we gave explicit bounds on the metastability of the iterations \eqref{exactPPA} and \eqref{PPA}, and in Corollaries \ref{cor_metaJi} and \ref{cor_metaJiz} we computed a bound on (the metastable version of) the asymptotic regularity of these iterations. Let us argue that these results provide a quantitative version of Theorem~\ref{ThmWangCui}. By Corollary~\ref{cor_metayn} it follows (ineffectively) that \eqref{exactPPA} is a Cauchy sequence. Hence it converges strongly to a point $\widetilde{y} \in H$. By Corollary~\ref{cor_metaJi} and the continuity of the resolvent functions it follows that $\widetilde{y}$ must be a fixed point, and therefore a zero of the operator $\mathsf{A}$. Furthermore, one can argue that $\widetilde{y}$ must be the projection point of $u$ onto $S$. 
Indeed, consider the sequence $\left(s_n^{\widetilde{z}}\right)$ with $\widetilde{z}$ a projection point. One can argue, as in Lemmas~\ref{maincase1} and \ref{lemmamaincase2}, to conclude that Theorem~\ref{theoremwangcui} holds with $z=\widetilde{z}$ for every $k$ and $f$. Notice that one cannot guarantee the third assumption in either of those lemmas. However, those conditions are only required to show equations \eqref{eqinnerproduct} and \eqref{desigprodint}, respectively, which follow from the fact that $\langle u- \widetilde{z}, \widetilde{y}-\widetilde{z} \,\rangle \leq 0$ and the fact that $\widetilde{y}= \lim y_n$. Since Theorem~\ref{theoremwangcui} is always true with $z=\widetilde{z}$, we conclude that $\widetilde{y}$ must be the projection point. By Lemma~\ref{lemma_zn-yn_small} one concludes that the iteration \eqref{PPA} must also converge strongly to a zero of the operator, namely the projection point. \section*{Acknowledgements} Both authors acknowledge the support of FCT - Funda\c{c}\~ao para a Ci\^{e}ncia e Tecnologia under the project: UID/MAT/04561/2019 and the research center Centro de Matem\'{a}tica, Aplica\c{c}\~{o}es Fundamentais e Investiga\c{c}\~{a}o Operacional, Universidade de Lisboa. The second author also acknowledges the support of the `Future Talents' short-term scholarship at Technische Universit{\"a}t Darmstadt. The paper also benefited from discussions with Fernando Ferreira and Ulrich Kohlenbach.
https://arxiv.org/abs/1911.09084
Breakdown of Liesegang precipitation bands in a simplified fast reaction limit of the Keller-Rubinow model
We study solutions to the integral equation \[ \omega(x) = \Gamma - x^2 \int_{0}^1 K(\theta) \, H(\omega(x\theta)) \, \mathrm d \theta \] where $\Gamma>0$, $K$ is a weakly degenerate kernel satisfying, among other properties, $K(\theta) \sim k \, (1-\theta)^\sigma$ as $\theta \to 1$ for constants $k>0$ and $\sigma \in (0, \log_2 3 -1)$, $H$ denotes the Heaviside function, and $x \in [0,\infty)$. This equation arises from a reaction-diffusion equation describing Liesegang precipitation band patterns under certain simplifying assumptions. We argue that the integral equation is an analytically tractable paradigm for the clustering of precipitation rings observed in the full model. This problem is nontrivial as the right hand side fails a Lipschitz condition, so that classical contraction mapping arguments do not apply. Our results are the following. Solutions to the integral equation, which initially feature a sequence of relatively open intervals on which $\omega$ is positive ("rings") or negative ("gaps"), break down beyond a finite interval $[0,x^*]$ in one of two possible ways. Either the sequence of rings accumulates at $x^*$ ("non-degenerate breakdown") or the solution cannot be continued past one of its zeroes at all ("degenerate breakdown"). Moreover, we show that degenerate breakdown is possible within the class of kernels considered. Finally, we prove existence of generalized solutions which extend the integral equation past the point of breakdown.
\section{Introduction} Reaction-diffusion equations with discontinuous hysteresis occur in a range of modeling problems \cite{BrokateS:1996:HysteresisPT, KrasnoselskiiP:1989:SystemsH, Mayergoyz:1991:MathematicalMH, Visintin:1994:DifferentialMH, Visintin:2014:TenIH}. We are particularly interested in non-ideal relays---two-valued operators where the output switches from the ``off-state'' $0$ to the ``on-state'' $1$ when the input crosses a threshold $\beta$, and switches back to zero only when the input drops below a lower threshold $\alpha<\beta$. There are different choices to define the behavior of the relay at the threshold. The relay may be restricted to binary values and jump when the threshold is reached or exceeded. Alternatively, the relay may be \emph{completed}: when the threshold is reached but not exceeded, the relay may take fractional values which can change monotonically in time; when the input drops below the threshold without having crossed it, the attained fractional value gets ``frozen in''. See, e.g., \cite{CurranGT:2016:RecentAR} for a detailed description of different relay behaviors. Rigorous mathematical results are of two types. For reaction-diffusion equations with completed relays, weak limit arguments lead to existence of solutions \cite{Visintin:1986:EvolutionPH, AikiK:2008:MathematicalMB} but not necessarily their uniqueness and continuous dependence on the data. For reaction-diffusion equations with non-completed non-ideal relays, local well-posedness, including uniqueness and continuous dependence, holds true provided that a certain transversality condition on the data is satisfied. The solution can be continued in time for as long as the transversality condition remains satisfied \cite{GurevichT:2012:UniquenessTS, GurevichST:2013:ReactionDE, CurranGT:2016:RecentAR}. We finally remark that for some types of spatially distributed hysteresis, variational approaches may be available \cite{MielkeTL:2002:VariationalFR}. In this paper, we study an explicit example of a reaction-diffusion equation with relay hysteresis which demonstrates that, in general, global-in-time solutions require the notion of a completed relay. Our example is motivated by the study of the fast reaction limit, introduced by Hilhorst \emph{et al.}\ \cite{HilhorstHM:2007:FastRL, HilhorstHM:2009:MathematicalSO}, of the Keller and Rubinow model for Liesegang precipitation rings \cite{KellerR:1981:RecurrentPL}. This limit model, which we will refer to as the \emph{HHMO-model}, is a scalar reaction-diffusion equation driven by a point source which is constant in parabolic similarity variables, with a reaction term modeled by a relay with a positive upper threshold and zero lower threshold. As a consequence, at a fixed location in space, the reaction, once switched on, can never switch off. The loci of reaction then form a spatial precipitation pattern. Simple as it seems, an analysis of the HHMO-model faces the same type of difficulty as the analysis of other reaction-diffusion equations with relay hysteresis; in particular, the questions of global uniqueness and continuous dependence on the data remain open. Our aim here is to provide insight into the essential features of the distributed relay dynamics. We make use of a remarkable feature of the HHMO-model: it can be formally simplified to an equation, different but qualitatively similar to the actual HHMO-model, that is self-similar in parabolic similarity variables.
This new model, which we shall refer to as the \emph{simplified HHMO-model}, reduces to a single scalar integral equation, i.e., can be considered as a scalar dynamical system with memory. The simplified model is finally simple enough that a fairly complete explicit analysis is possible, which is the main contribution of this paper. We prove that the binary precipitation pattern in the dynamics of the simplified HHMO-model must break down in finite space-time. Beyond the point of breakdown, it can only be continued as a generalized solution. We think of the behavior prior to breakdown as analogous to the well-posedness result for binary switching relays in the spirit of Gurevich \emph{et al.}\ \cite{GurevichST:2013:ReactionDE} and the behavior past the point of breakdown as generalized solutions in the sense of Visintin \cite{Visintin:1986:EvolutionPH}. While these analogies are tentative and we make no claim that the simplified HHMO-model reflects the behavior of true Liesegang precipitation patterns, the study of this model offers a paradigm for the breakdown of binary patterns. In particular, it gives insight that breakdown can happen in two distinct ways. We believe that more general models---which may not share the symmetry which makes the explicit results of this paper possible---are capable of exhibiting the behaviors observed here, so that the results of this paper provide a lower bound on the complexity which must be addressed when studying more general situations. We also offer a possible perspective for a reformulation of the problem that may lead to well-posedness past the point of breakdown. To be specific, the simplified HHMO-model can be formulated as \begin{equation} \label{omega.explicit.0} \omega(x) = \Gamma - x^2 \int_{0}^1 K(\theta) \, H(\omega(x\theta)) \, \d \theta \,, \end{equation} where $\omega(x)$ is the excess reactant concentration at the source point, $\Gamma$ is a positive constant, $H$ denotes the Heaviside function, and $K$ is a unimodal kernel, continuous on $[0,1]$, continuously differentiable on $[0,1)$, and twice continuously differentiable in the interior of this interval, with the following properties: \begin{enumerate}[label={\upshape(\roman*)}] \item \label{i.k1} $K(\theta)$ is non-negative with $K(0)=K'(0)=0$, \item \label{i.k2} $K(\theta) \sim k \, \sqrt{1-\theta}$ as $\theta\to1$ for some $k>0$, \item \label{i.k3} there exists $\theta^\star \in (0,1)$ such that $K''(\theta)>0$ for $\theta \in (0,\theta^\star)$ and $K''(\theta)<0$ for $\theta \in (\theta^\star,1)$. \end{enumerate} These properties imply, in particular, that $K>0$ on $(0,1)$ and $K(1)=0$. Clearly, at $x_0=0$, $\omega(x_0) = \Gamma >0$ and there must be a point $x_1$ at which $\omega$ changes sign, i.e., where the concentration falls below the super-saturation threshold. Continuing, we may define a sequence $x_i$ of loci where $\omega$ changes sign, so that $(x_i,x_{i+1})$ corresponds to a ``ring'' or ``band'' where precipitation occurs when $i$ is even and to a precipitation gap when $i$ is odd. Given the physical background of the problem, we might think that the $x_i$ form an unbounded sequence, indicating that the entire domain is covered by a pattern of rings or gaps, or, if the sequence is finite, that the last ring or gap extends to infinity. 
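For orientation, \eqref{omega.explicit.0} can be marched forward in $x$ on a grid: writing it as $\omega(x) = \Gamma - x \int_0^x K(y/x) \, H(\omega(y)) \, \d y$, the right hand side at $x$ only involves values of $\omega$ at points $y \le x$, and the value at $y = x$ enters with weight $K(1) = 0$. The following minimal Python sketch illustrates this; the kernel, the constant $\Gamma$, and the grid parameters are ad hoc choices for illustration only (the toy kernel has the endpoint behavior of \ref{i.k2} but is not the kernel derived in Section~\ref{s.simplified}).
\begin{verbatim}
# Illustrative sketch: march omega(x) = Gamma - x * int_0^x K(y/x) H(omega(y)) dy
# forward on a uniform grid; toy kernel K(t) = t^2 * sqrt(1 - t), Gamma = 1.
import numpy as np

Gamma = 1.0

def K(t):
    return t**2 * np.sqrt(1.0 - t)   # K(0) = K'(0) = 0 and K ~ sqrt(1-t) near t = 1

h = 1e-3                             # ad hoc grid spacing
x = np.arange(0.0, 5.0, h)
omega = np.empty_like(x)
H = np.zeros_like(x)                 # values of H(omega(y)) on the grid
omega[0], H[0] = Gamma, 1.0

boundaries = []                      # approximate sign changes x_1, x_2, ...
for n in range(1, len(x)):
    y = x[:n + 1]
    Hy = np.concatenate([H[:n], [0.0]])               # endpoint has weight K(1) = 0
    f = K(y / x[n]) * Hy
    integral = h * (f.sum() - 0.5 * (f[0] + f[-1]))   # composite trapezoidal rule
    omega[n] = Gamma - x[n] * integral
    H[n] = 1.0 if omega[n] > 0.0 else 0.0
    if omega[n] * omega[n - 1] < 0.0:
        boundaries.append(x[n])

print(boundaries)
\end{verbatim}
On such a coarse grid only the first few boundaries are resolved; the analysis of Section~\ref{s.nondeg} describes what happens beyond.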
Our first result proves that this is not the case: The sequence $x_i$ either has a finite accumulation point $x^*$ or there is a finite index $i$ such that $\omega$ cannot be extended past $x^*=x_i$ in the sense of equation \eqref{omega.explicit.0}. We call the former case non-degenerate, the latter degenerate. Our second result demonstrates the existence of degenerate solutions to \eqref{omega.explicit.0}. To this end, we present the construction of a kernel where the solution cannot be continued past the first gap, i.e., where the point of breakdown is $x^*=x_2$. To extend the solution past $x^*$, we introduce the concept of \emph{extended solutions}, reflecting the concept of a completed relay in the spirit of \cite{Visintin:1986:EvolutionPH} and also \cite{HilhorstHM:2009:MathematicalSO}. Extended solutions are pairs $(\omega,\rho)$ where $\omega \in C([0,\infty))$ and \begin{equation} \label{HHMO.extended.0} \omega(x) = \Gamma-x^2 \int_0^1K(\theta) \, \rho(x\theta) \, \d\theta \,, \end{equation} subject to the condition that $\rho$ takes values from the Heaviside graph, i.e., \begin{equation} \label{Heaviside} \rho(y)\in H(\omega(y)) = \begin{cases} 0 & \text{if }\omega(y)<0 \,, \\ [0,1] & \text{if }\omega(y)=0 \,, \\ 1 & \text{otherwise} \,. \end{cases} \end{equation} As our third result, we prove existence of extended solutions. Extended solutions are unique under the condition that they are \emph{regularly extended}, namely that $\omega$ remains identically zero on some right neighborhood $[x^\star, b)$ past the point of breakdown. The remainder of the paper is structured as follows. In Section~\ref{s.HHMO}, we recall some background on Liesegang rings and the fast reaction limit of the Keller--Rubinow model. In Section~\ref{s.simplified}, we simplify the model to the scalar integral equation \eqref{omega.explicit.0} and present arguments and numerical evidence that the simplified model reflects the qualitative behavior of the full model. We then proceed to show, in Section~\ref{s.nondeg}, that the sequence of precipitation bands either terminates finitely or has a finite accumulation point. In Section~\ref{s.degenerate}, we provide a construction that shows that within the class of kernels considered, finite termination is possible. Section~\ref{exist.simplified} discusses extended solutions in the sense of \eqref{HHMO.extended.0}. We conclude with a brief discussion and outlook. \section{The Keller--Rubinow model in the fast reaction limit} \label{s.HHMO} Liesegang precipitation bands are structured patterns in reaction-diffusion kinetics which emerge when, in a chain of two chemical reactions, the second reaction is triggered upon exceeding a supersaturation threshold and is maintained until the reactant concentration falls below a lower so-called saturation threshold. Within suitable parameter ranges, the second reaction will only ignite in restricted spatial regions. When the product of the final reaction precipitates, these regions may be visible as ``Liesegang rings'' or ``Liesegang bands'' in reference to German chemist Raphael Liesegang who described this phenomenon in 1896. For a review of the history and chemistry of Liesegang patterns, see \cite{Henisch:1988:CrystalsGL,Stern:1954:LiesegangP}. Keller and Rubinow \cite{KellerR:1981:RecurrentPL} gave a quantitative model of Liesegang bands in terms of coupled reaction-diffusion equations, see \cite{DuleyFM:2017:KellerRM, DuleyFM:2019:RegularizationOS} for recent results and further references. 
We note that there is a competing description in terms of competitive growth of precipitation germs \cite{Smith:1984:OstwaldST} which will not play any role in the following; see, e.g., \cite{KrugB:1999:MorphologicalCL} for a comparative discussion. Our starting point is the fast reaction limit of the Keller--Rubinow model, where the first-stage reaction rate constant is taken to infinity and one of the first-stage reactants is assumed to be immobile. Hilhorst \emph{et al.}\ \cite{HilhorstHM:2007:FastRL,HilhorstHM:2009:MathematicalSO} proved that, in this limit, the first-stage reaction can be solved explicitly and contributes a point source of reactant to the second-stage process. Thus, only one scalar reaction-diffusion equation for the second-stage reactant concentration $u = u(x,t)$ remains. Formulated on the half-line, the fast reaction limit, which we shall refer to as the \emph{full} HHMO-model, reads as follows: \begin{subequations} \label{e.original} \begin{gather} u_t = u_{xx} + \frac{\alpha \beta}{2 \sqrt t} \, \delta (x - \alpha \sqrt{t}) - p[x,t;u] \, u \,, \label{e.original.a} \\ u_x(0,t) = 0 \quad \text{for } t \geq 0 \,, \\ u(x,0) = 0 \quad \text{for } x>0 \,, \label{e.original.c} \end{gather} where $\alpha$ and $\beta$ are positive constants and the precipitation function $p[x,t;u]$ is constrained by \begin{equation} \label{e.hhmo-p-weak-alternative} p(x,t)\in \begin{cases} 0&\text{ if }\sup_{s\in[0,t]}u(x,s)<u^* \,,\\ [0,1]&\text{ if }\sup_{s\in[0,t]}u(x,s)=u^* \,,\\ 1 &\text{ if }\sup_{s\in[0,t]}u(x,s)>u^* \,. \end{cases} \end{equation} \end{subequations} In this expression, $u^*>0$ is the super-saturation threshold, i.e., the ignition threshold for the second-stage reaction. For simplicity, the saturation threshold is taken to be zero. This means that once the reaction is ignited at some spatial location $x$, it will never be extinguished at $x$. Hilhorst \emph{et al.}\ \cite{HilhorstHM:2009:MathematicalSO} proved existence of weak solutions to \eqref{e.original}; the question of uniqueness was left open. It is important to note that a weak solution is always a tuple $(u,p)$ where $p$ is constrained, but not defined uniquely in terms of $u$, by \eqref{e.hhmo-p-weak-alternative}. The analytic difficulties lie in the fact that the onset of precipitation is a free boundary in the $(x,t)$-plane. Moreover, the precipitation term is discontinuous, so that most of the standard analytical tools are not applicable; in particular, estimates based on energy stability fail. In \cite{Darbenas:2018:PhDThesis,DarbenasO:2018:UniquenessSK}, we are able to prove uniqueness for at least an initial short interval of time and derive a sufficient condition for uniqueness at later times; we conjecture that it is possible to obtain instances of non-uniqueness when the problem is considered with arbitrary smooth initial data or smooth additional forcing. One of the questions posed in \cite{HilhorstHM:2009:MathematicalSO} is the problem of proving that the precipitation function $p$ takes only binary values. Numerical evidence suggests that after an initial transient period in which a small number of rapidly shrinking rings is visible, the solution appears to precipitate on single grid points whose exact locations are unstable with respect to grid refinement. Results specifically for the HHMO-model are reported in Section~\ref{s.simplified} below; Duley \emph{et al.}\ \cite{DuleyFM:2017:KellerRM} report similar behavior also for the original Keller--Rubinow model.
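In numerical computations for such models, a binary precipitation function is assumed for lack of a clear alternative (cf.\ Section~\ref{s.simplified} below); at a fixed location, $p$ then acts in time as a latching relay: it switches on once the running maximum of $u$ exceeds $u^*$ and, the lower threshold being zero, never switches off. A minimal sketch of this rule follows (illustrative only; the function name is hypothetical and not part of any scheme used in this paper).
\begin{verbatim}
# Illustrative sketch of the binary (non-completed) precipitation rule at one
# fixed spatial location: p jumps from 0 to 1 the first time u exceeds the
# super-saturation threshold u_star and stays at 1 afterwards, since the
# lower (saturation) threshold is taken to be zero.
def latching_relay(u_samples, u_star):
    p, history = 0.0, []
    for u in u_samples:        # u sampled along increasing time at a fixed x
        if u > u_star:
            p = 1.0            # ignition; never extinguished
        history.append(p)
    return history
# A completed relay would instead allow p to take any value in [0, 1] while
# the running maximum of u sits exactly at u_star.
\end{verbatim}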
The main result of this paper is that we suggest a mechanism by which an actual breakdown of the ring structure in the Keller--Rubinow model occurs. It is made rigorous for a simplified version of the HHMO-model, introduced in the following, but displays features that are also seen in both of its parent models. \section{The simplified HHMO-model} \label{s.simplified} In the following, we detail the connection between the full HHMO-model \eqref{e.original} and the integral equation \eqref{omega.explicit.0}. The key observation is that, when written in a suitable equivalent form, there are only two terms in the full model which do not possess a parabolic scaling symmetry. We cite a mixture of analytic and numerical evidence that suggest that these terms have a negligible impact on the long-time behavior of the solution: One of the neglected terms represents linear damping toward equilibrium. It is asymptotically subdominant relative to the precipitation term; moreover, its presence could only enhance relaxation to equilibrium. The other term is observed to be asymptotically negligible as the width of the precipitation rings decreases, hence its contribution vanishes as the point of breakdown is approached. Leaving only terms which scale parabolically self-similarly, one of the variables of integration in the Duhamel formula representation of the simplified model can be integrated out, leaving an expression of the form \eqref{omega.explicit.0} with a complicated, yet explicit expression for the kernel $K$ which is shown, using a mixture of analysis and numerical verification, to satisfy properties \ref{i.k1}--\ref{i.k3}. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{simplification-comparison.pdf} \caption{Numerical verification of the convergence of the full and the simplified HHMO-model to the self-similar profile $\Phi$. Even though the transients are different, the difference field $w$ converges to zero in both cases. Model parameters are $\alpha=\beta=1$ and $u^*=0.2$. The simulation shows that numerical artifacts at large times decrease with improved resolution, where $\Delta s = \Delta \eta = 10^{-2}$ (dotted lines), $\Delta s = \Delta \eta = 10^{-3}$ (dashed lines), and $\Delta s = \Delta \eta = 10^{-4}$ (solid lines). See text for a detailed discussion.} \label{f.comparison} \end{figure} Numerical evidence suggests that solutions to the full HHMO-model converge robustly to a steady state $\Phi(\eta)$ with respect to the parabolic similarity variable $\eta = x/s$ as $s = \sqrt{t} \to \infty$, see Figure~\ref{f.comparison} which is explained in detail further below. A proof of convergence to a steady state is difficult for much the same reasons that well-posedness is difficult, but \cite{DarbenasHO:2018:LongTA} were able to prove a slightly weaker result: \emph{assuming} that the HHMO-solution converges to a steady state $\Phi(\eta)$ at all, this steady state must satisfy the differential equation \begin{subequations} \label{e.v-selfsimilar} \begin{gather} \Phi'' + \frac\eta2 \, \Phi' + \frac{\alpha\beta}2 \, \delta(\eta-\alpha) - \frac\gamma{\eta^2} \, H(\alpha-\eta) \, \Phi = 0 \,, \label{e.ode2} \\ \Phi'(0) = 0 \,, \label{e.v-selfsimilar-b} \\ \Phi(\eta) \to 0 \quad \text{as } \eta \to \infty \,, \label{e.v-selfsimilar-c} \\ \Phi(\alpha) = u^* \,. \label{e.v-selfsimilar-d} \end{gather} \end{subequations} In this formulation, $\gamma$ is an unknown constant. 
To determine $\gamma$ uniquely, this second order system has an additional internal boundary condition \eqref{e.v-selfsimilar-d} which expresses that the reactant concentration in the HHMO-model converge to the critical value $u^*$ at the source point which, in similarity coordinates, moves along the line $\eta=\alpha$. There exists a unique solution $(\Phi,\gamma)$ to \eqref{e.v-selfsimilar} with $\gamma>0$ and $\Phi$ given by \begin{gather} \Phi(\eta) = \begin{dcases} \frac{u^*\,\eta^\kappa \, M \bigl(\frac\kappa2,\kappa+\frac12, -\frac{\eta^2}4 \bigr)} {\alpha^\kappa \, M \bigl(\frac\kappa2,\kappa+\frac12, -\frac{\alpha^2}4 \bigr)} & \text{ if }\eta<\alpha \,, \\ \frac{u^*}{\operatorname{erfc} (\frac\alpha2)} \, \operatorname{erfc} \Bigl( \frac\eta2 \Bigr) & \text{ if }\eta\ge\alpha \,, \end{dcases} \label{e.phi.gamma} \end{gather} where $M$ is Kummer's confluent hypergeometric function \cite{AbramowitzS:1972:HandbookMF}, $\kappa$ is a solution of the algebraic equation \begin{equation} \label{eq.kappa.2} u^* = u^*_\gamma \equiv \left(\frac{\kappa \, M \bigl( \frac\kappa2+1,\kappa+\frac12, -\frac{\alpha^2}4 \bigr)}% {\alpha \, M \bigl( \frac\kappa2,\kappa+\frac12,-\frac{\alpha^2}4 \bigr)} + \, \frac{\exp \bigl(-\frac{\alpha^2}4 \bigr)}% {\sqrt\pi \, \operatorname{erfc} (\frac\alpha2 )}\right)^{-1} \frac{\alpha\beta}2 \,, \end{equation} and $\gamma = \kappa (\kappa-1)$, subject to the solvability condition \begin{equation} \label{e.solvability} u^* < u_0^* \,. \end{equation} In $x$-$t$ coordinates, the self-similar solution to \eqref{e.v-selfsimilar} takes the form \begin{equation} \label{e.phi.gamma.x-t} \phi(x,t)=\Phi(x/{\sqrt t}) \,. \end{equation} Throughout this paper, we assume that $\alpha$, $\beta$, and $u^*$ satisfy the solvability condition \eqref{e.solvability}, so that the self-similar solution $\phi$ exists. (For triples $\alpha$, $\beta$, and $u^*$ which violate the solvability condition, the corresponding weak solution precipitates in only a bounded region and the asymptotic state is easy to determine explicitly; for details, see \cite{DarbenasHO:2018:LongTA}.) \begin{figure} \centering \includegraphics[width=\textwidth]{omega-full-model} \caption{Relative concentration $u/u^*$ and precipitation function $p$ along the parabola $t = x^2/\alpha^2$ for the full HHMO-model. Each subsequent graph zooms into the boxed area of the previous. The simulation parameters are $\alpha=\beta=1$, $u^*=0.2$, $\Delta s = \Delta \eta = 5 \cdot 10^{-5}$. Note that the $x$-scale here coincides directly with similarity time $s=x/\alpha$ used in Figure~\ref{f.comparison}.} \label{f.omega-full} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{omega-simplified-model} \caption{Relative concentration $u/u^*$ and precipitation function $p$ along the parabola $t = x^2/\alpha^2$ for the simplified HHMO-model. Each subsequent graph zooms into the boxed area of the previous. The simulation parameters are $\alpha=\beta=1$, $u^*=0.1$, $\Delta s = \Delta \eta = 3.33 \cdot 10^{-5}$. Note that the $x$-scale here coincides directly with similarity time $s=x/\alpha$ used in Figure~\ref{f.comparison}.} \label{f.omega-simplified} \end{figure} We now write $w=u-\phi$ to denote the difference between the solution of the full HHMO-model \eqref{e.original} and the self-similar profile \eqref{e.phi.gamma.x-t}. 
Then $w$ solves the equation \begin{subequations} \label{e.u-phi} \begin{gather} w_t - w_{xx} + p \, w = \biggl( \frac\gamma{x^2} \, H \Bigl( \alpha - \frac{x}{\sqrt t} \Bigr) - p \biggr) \, \phi \Bigl( \frac{x}{\sqrt t} \Bigr) \,, \label{e.u-phi.a} \\ w_x(0,t) = 0 \quad \text{for } t \geq 0 \,, \\ w(x,0) = 0 \quad \text{for } x > 0 \,. \end{gather} \end{subequations} To pass to the simplified HHMO-model, we make two changes to this equation: \begin{enumerate}[label={\upshape(\alph*)}] \item\label{i.1} Precipitation is triggered on the condition that $u>u^*$ on the line $x^2 = \alpha^2 t$, and \item\label{i.2} the damping term $pw$ in \eqref{e.u-phi.a} is neglected. \end{enumerate} The simplified model then reads \begin{subequations} \label{e.simplified} \begin{gather} w_t - w_{xx} = \biggl( \frac\gamma{x^2} - H \Bigl(w\Bigl(x,\frac{x^2}{\alpha^2} \Bigr) \Bigr) \biggr) \, H \Bigl( \alpha - \frac{x}{\sqrt t} \Bigr) \, \phi (x,t) \,, \\ w_x(0,t) = 0 \quad \text{for } t \geq 0 \,, \\ w(x,0) = 0 \quad \text{for } x > 0 \,. \end{gather} \end{subequations} To illustrate the impact of these simplifications, we resort to numerical simulation. In the context of this problem, two general comments about numerical simulation are necessary. First, we are looking at the long-time behavior of the solution where breakdown has already occurred early in the simulation. For lack of a clear alternative, numerical schemes are constructed under the assumption of a binary precipitation function. Thus, we cannot expect pointwise convergence of the precipitation function; convergence of the numerical precipitation function to the precipitation function of a weak solution in the sense of \cite{HilhorstHM:2009:MathematicalSO} can at best take place in the weak-$*$ topology. The concentration, on the other hand, may converge point-wise. High-order convergence cannot be expected in this setting so that we limit ourselves to the lowest order finite difference approximations. Second, the asymptotic profile is stationary in \emph{similarity} variables, but a spatial grid cell of constant size in similarity variables maps to a cell in physical space whose size increases in time. This leaves fundamentally two options: \begin{enumerate}[label={\upshape(\roman*)}] \item Compute on a fixed grid in physical space and accept that the essential part of the solution will run out of the computational domain in a finite time. \item Compute on a fixed grid in similarity variables and accept that the smallest unit of precipitation that can be represented by the scheme will get coarser as time goes on, leading to spurious oscillations with increasing amplitude at late times. \end{enumerate} It is conceivable that an $h$-$p$-adaptive method on an essentially infinite domain in physical space could break this dichotomy, but would also raise the issue of how much such a complex scheme can be trusted, and how to adapt it to the essentially non-smooth nature of this problem. For these reasons, we choose not to take this route but rather accept that accuracy is lost as time progresses. The time of validity can be extended by choosing a finer spatial grid in similarity variables, or choosing a bigger domain in physical space. For any given grid, however, the time of validity is finite. For our code, we have chosen a finite difference scheme formulated in similarity variables $\eta = x/\sqrt{t}$ and $s = \sqrt t$; it is detailed in Appendix~\ref{a.numerics}. 
The advantage is that it makes the identification of the limit profile particularly easy; the drawback is that it requires a transport scheme for the precipitation function, making the code just slightly longer than a direct solve in physical variables. Figure~\ref{f.comparison} demonstrates that both the full HHMO-model and the simplified model converge to the asymptotic profile $\Phi$, i.e., $\lVert w(t) \rVert \to 0$ as $t \to \infty$. As expected, due to the lack of the linear damping term $pw$, the simplified model takes longer to equilibrate, but the asymptotics remain unchanged. We note that the loss of accuracy with time, described above, is clearly visible. Increasing the numerical resolution moves the point of visible onset of numerical error to larger times, but the behavior of the scheme is fundamentally non-uniform in time. Figures~\ref{f.omega-full} and~\ref{f.omega-simplified} show details of the initial transient of the full and the simplified model, respectively. We see that even though the transients are quantitatively different, the two models have the same qualitative features: The amplitude of the variation of concentration about the threshold concentration at the source point decreases extremely rapidly, as does the width of the precipitation rings and gaps. In both simulations, we were able to clearly resolve two precipitation rings and two gaps, where the last gap is only visible by zooming in about five orders of magnitude. We cannot determine whether there is a third distinct ring; simulating this numerically would require at least one order of magnitude more resolution in space and, due to the additional timestepping, at least two orders of magnitude more in computational expense. In the following, we prove, for the simplified model, that the ring structure must break down within a finite interval; the simulations suggest that this interval is not particularly large. We have also observed that most of the quantitative change comes from simplification \ref{i.2}. Implementing simplification \ref{i.1} without simplification \ref{i.2} yields a solution that is visually indistinguishable from the solution to the full model. Note that simplification \ref{i.1} implies that there is no precipitation below the line $x^2 = \alpha^2 t$, even when $u>u^*$. The advantage of this simplification is that onset of precipitation now ceases to be a free boundary problem and follows parabolic scaling. A motivation for the validity of this simplification comes from the following fact: it is proved in \cite{DarbenasHO:2018:LongTA} that \emph{if} the solution to the full HHMO-model converges to a parabolically self-similar profile as $t \to \infty$, then the contribution to the HHMO-dynamics from precipitation below the parabola $\alpha^2 \, t=x^2$ is asymptotically negligible. Simplification \ref{i.2} is justified by the numerical observation that the equation without the damping term $pw$ already converges to the same profile, so that an additional linear damping toward the equilibrium will not make a qualitative difference. Moreover, assuming that the HHMO-solution converges to equilibrium, $pw$ becomes asymptotically small while the right hand side of \eqref{e.u-phi.a} remains an order-one quantity. It is very difficult, however, to estimate the quantitative effect of \ref{i.1} and \ref{i.2} due to the discontinuous reaction term and the free boundary of onset of precipitation, so that a rigorous justification of these two steps remains open.
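As a point of reference for such comparisons, the self-similar profile $\Phi$ from \eqref{e.phi.gamma} and the exponent $\kappa$ solving \eqref{eq.kappa.2} can be evaluated directly. The following minimal Python sketch is illustrative only and is not the finite difference code of Appendix~\ref{a.numerics}; it assumes SciPy, whose \texttt{hyp1f1} implements Kummer's function $M$, and the root bracket for $\kappa$ is an ad hoc choice adequate for the sample parameters $\alpha=\beta=1$, $u^*=0.2$ of Figure~\ref{f.comparison}.
\begin{verbatim}
# Illustrative sketch: evaluate the self-similar profile Phi and the exponent
# kappa of the algebraic equation u* = u*_gamma(kappa), using SciPy.
import numpy as np
from scipy.special import hyp1f1, erfc     # hyp1f1(a, b, x) is Kummer's M
from scipy.optimize import brentq

alpha, beta, ustar = 1.0, 1.0, 0.2         # sample parameters

def ustar_of_kappa(kappa):
    a2 = alpha**2 / 4.0
    bracket = (kappa * hyp1f1(kappa/2 + 1.0, kappa + 0.5, -a2)
               / (alpha * hyp1f1(kappa/2, kappa + 0.5, -a2))
               + np.exp(-a2) / (np.sqrt(np.pi) * erfc(alpha/2)))
    return alpha * beta / (2.0 * bracket)

# The relevant root has kappa > 1, so that gamma = kappa*(kappa - 1) > 0; the
# bracket [1, 10] is an ad hoc assumption that contains it for these parameters.
kappa = brentq(lambda K: ustar_of_kappa(K) - ustar, 1.0, 10.0)
gamma = kappa * (kappa - 1.0)

def Phi(eta):
    # Kummer branch below the source line eta = alpha, erfc branch above
    if eta < alpha:
        return (ustar * eta**kappa * hyp1f1(kappa/2, kappa + 0.5, -eta**2/4.0)
                / (alpha**kappa * hyp1f1(kappa/2, kappa + 0.5, -alpha**2/4.0)))
    return ustar * erfc(eta/2.0) / erfc(alpha/2.0)

print(kappa, gamma, Phi(alpha))            # Phi(alpha) = u* by construction
\end{verbatim}
Such a direct evaluation can serve as a cross-check of the asymptotic profile against which the simulated solutions are compared.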
To proceed, we extend the simplified HHMO-model \eqref{e.simplified} to the entire real line by even reflection and abbreviate \begin{equation} \rho(x) = \frac\gamma{x^2} - H \Bigl(w\Bigl(x,\frac{x^2}{\alpha^2} \Bigr) \Bigr) \,. \end{equation} Proceeding formally, we apply the Duhamel principle---a detailed justification of the Duhamel principle in context of weak solutions is given in \cite{Darbenas:2018:PhDThesis}---then change the order of integration and implement the change of variables $s=y^2/\zeta^2$, so that \begin{align} w(x,t) & = \int_{-\alpha \sqrt t}^{\alpha \sqrt t} \int_{y^2/\alpha^2}^t \HK(x-y,t-s) \, \phi (y,s) \, \d s \, \rho(y) \, \d y \notag \\ & = 2 \int_{-\alpha \sqrt t}^{\alpha \sqrt t} \int_{\lvert y \rvert/\sqrt t}^{\alpha} \HK(x-y,t-y^2/\zeta^2) \, \frac{\Phi (\zeta)}{\zeta^3} \, \d \zeta \, \rho(y) \, y^2 \, \d y \label{e.fixed-point2} \end{align} where $\HK$ is the standard heat kernel \begin{equation} \HK(x,t) = \begin{dcases}\frac1{\sqrt{4\pi t}} \, \e^{-\tfrac{x^2}{4t}}&\text{if }t>0 \,,\\ 0&\text{if }t\le0 \,. \end{dcases} \end{equation} We are specifically interested in the solution on the parabola $x^2=\alpha^2 \, t$. For notational convenience, we assume in the following that $x$ is nonnegative; solutions for negative $x$ are obtained by even reflection. Then, setting $\omega(x) = w(x,x^2/\alpha^2)$ and inserting the fundamental solution of the heat equation explicitly, we find \begin{align} \omega(x) & = \frac1{\sqrt \pi} \int_{-x}^{x} \int_{\alpha \lvert y \rvert/x}^{\alpha} \frac1{\sqrt{\tfrac{x^2}{\alpha^2}-\tfrac{y^2}{\zeta^2}}} \, \exp \biggl( - \frac{(x-y)^2}% {4 \, \bigl( \tfrac{x^2}{\alpha^2}-\tfrac{y^2}{\zeta^2} \bigr)} \biggr) \, \frac{\Phi (\zeta)}{\zeta^3} \, \d \zeta \, \rho(y) \, y^2 \, \d y \notag \\ & = \frac1x\int_{-x}^{x} G \Bigl( \frac{y}x \Bigr) \, \rho(y) \, y^2 \, \d y \notag \\ & = x^2 \int_{-1}^1 G(\theta) \, \rho(x\theta) \, \theta^2 \, \d \theta \,, \label{e.omega1} \end{align} where \begin{equation} \label{e.G} G(\theta) = \frac\alpha{\sqrt \pi} \int_{\alpha\lvert\theta\rvert}^\alpha \frac1{\sqrt{\zeta^2-\alpha^2 \, \theta^2}} \, \exp \biggl( - \frac{\zeta^2 \, \alpha^2 \, (1-\theta)^2}% {4 \, (\zeta^2 - \alpha^2 \, \theta^2)} \biggr) \, \frac{\Phi (\zeta)}{\zeta^2} \, \d \zeta \,. \end{equation} Inserting the explicit expression for $\rho$ into \eqref{e.omega1} and noting that $\omega$ is extended to negative arguments by even reflection, we obtain \begin{equation} \label{omega.explicit} \omega(x) = \Gamma - x^2 \int_{0}^1 K(\theta) \, H(\omega(x\theta)) \, \d \theta \end{equation} with \begin{equation} \Gamma = \gamma \int_{-1}^1 G(\theta) \, \d \theta \label{e.Gamma} \end{equation} and \begin{equation} \label{e.K.G} K(\theta) = \theta^2 \, (G(\theta) + G(-\theta)) \,. \end{equation} A graph of the kernel $K$ is shown in Figure~\ref{f.K}. In Appendix~\ref{kernel.prop}, we give a combination of analytic and numerical evidence showing that this kernel satisfies properties \ref{i.k1}--\ref{i.k3} stated in the introduction. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Kplot} \caption{Plot of the kernel $K(\theta)$ for $\alpha=\beta=1$ and $u^*=0.15$.} \label{f.K} \end{figure} We conclude that the simplified HHMO-model implies an integral equation of the form \eqref{omega.explicit.0}. Vice versa, given a solution $\omega$ to \eqref{omega.explicit}, we can reconstruct a solution to the PDE-formulation of the simplified HHMO-model. 
Indeed, setting \begin{align} W(x,t) &= \int_0^t \int_{-\alpha \sqrt s}^{\alpha \sqrt s} \HK(x-y,t-s) \, \Bigl(\frac\gamma{y^2}-H(\omega(y))\Bigr) \, \phi (y,s) \, \d y \, \d s \,, \label{e.reconstruction} \end{align} we can repeat the calculation leading to \eqref{e.omega1}, which proves that $W(x,\alpha^{-2}x^2)=\omega(x)$. Thus, $W$ solves \eqref{e.fixed-point2} so that it provides a mild solution to \eqref{e.simplified}. \section{Non-degenerate breakdown of precipitation bands} \label{s.nondeg} In this section, we investigate the structure of solutions to the integral equation \eqref{omega.explicit.0} for kernels $K$ which satisfy assumptions \ref{i.k1}--\ref{i.k3}. Specifically, we seek solutions $\omega$ defined on a half-open interval $[0,x^*)$ or on $[0,\infty)$ which change sign at isolated points $x_i$ for $i=1,2,\dots$, ordered in increasing sequence. Setting $x_0=0$ and noting that precipitation must occur in a neighborhood of the origin if it sets in at all, the precipitation bands are the intervals $(x_i,x_{i+1})$ for even integers $i\geq 0$. Hence, \begin{equation} H(\omega(z)) = \sum_{i \text{ even}} \I_{[x_i,x_{i+1}]}(z) \,, \end{equation} where we write $\I_A$ to denote the indicator function of a set $A$. Thus, the one-dimensional precipitation equation \eqref{omega.explicit.0} takes the form \begin{align} \omega(x) & = \Gamma - x^2 \sum_{\substack{i \text{ even}\\ x_{i} < x}} \int_{x_i/x}^{\min\{x_{i+1}/x,1\}} K (\theta) \, \d \theta \notag \\ & = \Gamma - \sum_{x_{i} < x} (-1)^i \, \rho_i(x) \label{e.abstract-prec-eqn} \end{align} with \begin{equation} \rho_i(x) = x^2 \int_{x_i/x}^{1} K (\theta) \, \d \theta \,. \end{equation} For $x \geq x_{n-1}$, we also define the partial sums \begin{equation} \omega_n(x) = \Gamma - \sum_{i=0}^{n-1} (-1)^i \, \rho_i(x) \,. \label{e.partialsum} \end{equation} Thus, $\omega_n(x) = \omega(x)$ for $x \in [x_{n-1}, x_n]$. With this notation in place, we are able to define the notion of degenerate solutions. \begin{definition} \label{d.degenerate} A solution $\omega$ to \eqref{omega.explicit.0} is \emph{degenerate} if \eqref{e.abstract-prec-eqn} holds up to some finite $x_i \equiv x^*$ and it is not possible to apply this formula on $[x^*,x^*+\varepsilon)$ for any $\varepsilon>0$; it is \emph{non-degenerate} if $\omega$ possesses a finite or infinite sequence of isolated zeros $\{x_i\}$ and the solution can be continued in the sense of \eqref{e.abstract-prec-eqn} to some right neighborhood of any of its zeros. \end{definition} In the remainder of this section, we characterize non-degenerate solutions. We cannot exclude that a solution is degenerate, i.e., that it cannot be continued at all beyond an isolated root; in fact, Section~\ref{s.degenerate} shows that kernels with degenerate solutions exist. We note that a degenerate solution provides an extreme scenario of a breakdown in which the solution reaches equilibrium in finite time. Thus, the main result of this section, Theorem~\ref{th.shrink} below, can be understood as saying that even when the solution is non-degenerate, it still fails to exist outside of a bounded interval. \begin{lemma} \label{l.infrings} Suppose $K \in C([0,1])$ is non-negative, strictly positive somewhere, and $K(\theta) = o(\theta)$ as $\theta \to 0$. Then a non-degenerate solution to \eqref{omega.explicit.0} has an infinite number of precipitation rings. \end{lemma} \begin{proof} A non-degenerate solution, by definition, is a solution that can be extended to the right in some neighborhood of any of its zeros. 
Now suppose there is a largest zero $x_{n}$. Then $\omega$ is well-defined and equals $\omega_{n+1}$ on $[x_n,\infty)$. Now let $x>x_n$ and consider the limit $x \to \infty$. When $n$ is even, since $\rho_{i+1}(x) < \rho_i(x)$, \begin{equation} 0 < \omega(x) < \Gamma - \rho_n(x) \to -\infty \,, \end{equation} a contradiction. When $n$ is odd, \begin{align} 0 > \omega(x) & > \Gamma - x^2 \int_0^{x_n/x} K(\theta) \, \d \theta \notag \\ & > \Gamma - x^2 \, \sup_{\theta \in [0,x_{n}/x]} \frac{K(\theta)}{\theta} \int_0^{x_{n}/x} \theta \, \d\theta \notag \\ & = \Gamma - \frac12 \, x_{n}^2 \, \sup_{\theta\in[0,x_{n}/x]} \frac{K(\theta)}{\theta} \to \Gamma > 0 \,, \end{align} once again a contradiction. \end{proof} \begin{lemma} \label{rings.schrinking} Suppose $K \in C([0,1]) \cap C^1([0,1))$ with $K(1)=0$ and $K'(\theta) \to -\infty$ as $\theta \to 1$. Let $\omega$ be a non-degenerate solution to \eqref{omega.explicit.0} with an infinite number of precipitation rings. Then $x_{2n}/x_{2n+1} \to 1$ as $n\to\infty$. Moreover, $x_{2n+1}-x_{2n}$, the width of the $n$th precipitation ring, converges to zero. \end{lemma} \begin{proof} When the sequence $\{x_i\}$ is bounded, the claim is obvious. Thus, assume that this sequence is unbounded. Since $K'$ is negative on $(1-\varepsilon,1)$ for sufficiently small $\varepsilon>0$, $K$ is positive on this interval. As in the proof of Lemma~\ref{l.infrings}, \begin{equation} \label{r.eq} 0 \equiv \omega(x_{2n+1}) < \Gamma - \rho_{2n}(x_{2n+1}) = \Gamma - x_{2n+1}^2 \int_{x_{2n}/x_{2n+1}}^1 K(\theta) \, \d \theta \,. \end{equation} We can directly conclude that $x_{2n}/x_{2n+1} \to 1$ as $n \to \infty$. Further, noting that $K(1)=0$ and using the fundamental theorem of calculus, we obtain \begin{align} \Gamma & > x_{2n+1}^2 \int_{x_{2n}/x_{2n+1}}^1 (K (\theta) - K(1)) \, \d \theta \notag \\ & = - x_{2n+1}^2 \int_{x_{2n}/x_{2n+1}}^1 \int_\theta^1 K'(\zeta) \, \d \zeta \, \d \theta \notag \\ & = - x_{2n+1}^2 \int_{x_{2n}/x_{2n+1}}^1 K'(\zeta) \int_{x_{2n}/x_{2n+1}}^\zeta \d \theta \, \d \zeta \notag \\ & = - \frac12 \, K'(\zeta_n) \, (x_{2n+1}-x_{2n})^2 \end{align} where, by the mean value theorem of integration, the last equality holds for some $\zeta_n \in [x_{2n}/x_{2n+1},1]$. Since $\zeta_n \to 1$, we conclude that $x_{2n+1}-x_{2n} \to0$ as $n \to \infty$. \end{proof} \begin{theorem} \label{th.shrink} Suppose $K \in C([0,1])$, differentiable with absolutely continuous first derivative on $[0,z]$ for every $z\in(0,1)$, and unimodal, i.e., there exists $\theta^* \in (0,1)$ such that $K''(\theta)>0$ for $\theta \in (0,\theta^*)$ and $K''(\theta)<0$ for $\theta \in (\theta^*,1)$, and that $K(\theta) \sim k \, (1-\theta)^\sigma$ for some $k>0$ and $\sigma \in (0, \log_2 3 -1)$ as $\theta \to 1$. Further, assume that equation \eqref{omega.explicit.0} has a non-degenerate solution $\omega$ with an infinite number of precipitation rings. Then its zeros have a finite accumulation point. \end{theorem} \begin{proof} We begin by recalling the second order mean value theorem, which states that for a twice continuously differentiable function $f$ and nodes $a<b<c$ there exists $y \in [a,c]$ such that \begin{equation} \frac{f(c)-f(b)}{c-b} - \frac{f(b)-f(a)}{b-a} = \frac{c-a}2 \, f''(y) \,. \label{f.second.der} \end{equation} We apply this result to the partial sum function $\omega_n$ with $a=x_n$, $b=x_{n+1}$, and $c=x \in (x_{n+1},x_{n+2}]$. We note that $\omega_n(x_n)=0$. 
Further, subtracting \eqref{e.partialsum} from \eqref{e.abstract-prec-eqn}, we obtain, for $x \in [x_{n+1},x_{n+2}]$, that \begin{equation} \omega_n(x) = \omega(x) + (-1)^n \, (\rho_{n}(x) - \rho_{n+1}(x)) \label{e.omegax} \end{equation} so that, in particular, $\omega_n(x_{n+1}) = (-1)^{n} \, \rho_{n}(x_{n+1})$. Equation \eqref{f.second.der} then reads \begin{equation} \frac{\omega(x) + (-1)^n \, (\rho_{n}(x) - \rho_{n+1}(x) - \rho_n(x_{n+1}))}% {x-x_{n+1}} - \frac{(-1)^{n} \, \rho_{n}(x_{n+1})}{x_{n+1}-x_n} = \frac{x-x_n}2 \, \omega_n''(y) \label{e.second.der2} \end{equation} for some $y \in [x_n,x]$. To estimate the right hand expression, we compute \begin{equation} \omega_n''(y) = \sum_{i=0}^{n-1} (-1)^i \, F \Bigl(\frac{x_i}{y} \Bigr) \equiv \sum_{i=0}^{n-1} (-1)^i \, f_i \end{equation} where \begin{equation} \label{def.F} F(z) = z^2 \, K'(z) - 2 \, z \, K(z) - 2 \int^1_z K(\theta) \, \d\theta \,. \end{equation} By direct computation, $F'(z) = z^2 \, K''(z)$. Since $K$ is unimodal, this implies that $F$ has an isolated maximum on $[0,1]$. Now suppose that $x \in (x_{n+1},x_{n+2})$. We consider two separate cases. When $n$ is even, for every $y \geq x_{n-1}$ there exists a unique odd index $\ell$ such that the sequence of $f_i$ is strictly increasing for $i = 1, \dots, \ell-1$ and is strictly decreasing for $i = \ell+1, \dots, n-1$. Hence, \begin{align} \omega_n''(y) & = f_0 + (-f_1 + f_2) + \dots - f_\ell + (f_{\ell+1} - f_{\ell+2}) + \dots + (f_{n-2} - f_{n-1}) \notag \\ & > f_0 - f_\ell \geq F (0) - \max_{z \in [0,1]} F(z) \equiv -M \,, \end{align} where $M$ is a strictly positive constant. Further, $\omega(x)<0$. Inserting these two estimates into \eqref{e.second.der2}, we obtain \begin{equation} \frac{\rho_{n}(x) - \rho_{n+1}(x) - \rho_n(x_{n+1})}% {x-x_{n+1}} - \frac{\rho_{n}(x_{n+1})}{x_{n+1}-x_n} > - \frac{x-x_n}2 \, M \,. \label{e.main-ineq} \end{equation} When $n$ is odd, for every $y \geq x_{n-1}$ there exists a unique even index $\ell$ such that the sequence of $f_i$ is strictly increasing for $i = 0, \dots, \ell-1$ and is strictly decreasing for $i = \ell+1, \dots, n-1$. Hence, \begin{align} \omega_n''(y) & = (f_0 - f_1) + \dots + f_\ell + (-f_{\ell+1} + f_{\ell+2}) + \dots + (- f_{n-2} + f_{n-1}) \notag \\ & < f_\ell \leq \max_{z \in [0,1]} F(z) = M + F (0) < M \end{align} Further, $\omega(x)>0$. As before, inserting these two estimates into \eqref{e.second.der2}, we obtain \begin{equation} - \frac{\rho_{n}(x) - \rho_{n+1}(x) - \rho_n(x_{n+1})}% {x-x_{n+1}} + \frac{\rho_{n}(x_{n+1})}{x_{n+1}-x_n} < \frac{x-x_n}2 \, M \,. \label{e.main-ineq2} \end{equation} Thus, we again obtain an estimate of exactly the form \eqref{e.main-ineq} and we do not need to further distinguish between $n$ even or odd. To proceed, we define \begin{equation} R(\theta) = \dfrac{\int_\theta^1 K(\zeta) \, \d \zeta}% {k \int_\theta^1 \left(1-\zeta\right)^\sigma \, \d \zeta} \end{equation} so that, by assumption, $R(\theta) \to 1$ as $\theta \to 1$. Further, \begin{equation} \rho_i(x) = \frac{k}{1+\sigma} \, x^2 \, \Bigl( 1 - \frac{x_i}x \Bigr)^{1+\sigma} \, R \Bigl( \frac{x_i}x \Bigr) \,. 
\end{equation} Changing variables to \begin{equation} d_n = x_{n+1} - x_n \,, \quad r_n = \frac{x_n}{x_{n+1}} \,, \quad \text{and} \quad q = \frac{x-x_{n+1}}{d_n} \,, \end{equation} we write inequality \eqref{e.main-ineq} in the form \begin{equation} G(q) > - S_0 (r_n,q) + S_1(r_n,q) \, (1+q)^{1+\sigma} - S_2(r_n,q) \, q^{1+\sigma} - S_3(r_n,q) \, (q+1) \label{e.G-ineq} \end{equation} where \begin{subequations} \begin{gather} S_0 (r,q) = -q \, (q+1) \, \biggl( \frac{1-r}{1+(1-r)q} \biggr)^{1-\sigma} \, \frac{M \, (1+\sigma)}{2k} \,, \\ S_1 (r,q) = 1 - R \biggl( \frac{r}{1+(1-r)q} \biggr) \,, \\ S_2 (r,q) = 1 - R \biggl( \frac1{1+(1-r)q} \biggr) \,, \\ S_3 (r,q) = 1 - \biggl( \frac1{1+(1-r)q} \biggr)^{1-\sigma} \, R(r) \,, \end{gather} and \begin{gather} G(q) = (1+q)^{1+\sigma} - q^{1+\sigma} - q - 1 \,. \label{e.Gdef} \end{gather} \end{subequations} Observe that $G(0)=0$, $G'(0)>0$, $G''(q)<0$ for all $q>0$, and $G(1)=2^{1+\sigma}-3<0$. Hence, there is a unique root $q^* \in (0,1)$ such that $G(q)<0$ for all $q>q^*$. Now fix $\varepsilon>0$, define \begin{gather} q_n = \frac{x_{n+2}-x_{n+1}}{x_{n+1}-x_n} \,, \end{gather} and consider any even index $j$ for which $q_j > q^* + \varepsilon$. Since \eqref{e.G-ineq} was derived under the assumption $x \in (x_{j+1},x_{j+2})$, or equivalently $q\in(0,q_j)$, this inequality must hold for each tuple $(r_j, q^* + \varepsilon)$. Now if there were an infinite set of indices for which $q_j > q^* + \varepsilon$, we could pass to the limit $j \to \infty$ on the subsequence of such indices. Since $K''(\theta)<0$ for $\theta\in(\theta^*,1)$ and $K(\theta) \sim k \, (1-\theta)^\sigma$ as $\theta \to 1$, Lemma~\ref{rings.schrinking} is applicable and implies that $r_{2k}=x_{2k}/x_{2k+1}\to1$. As for any fixed $q$, each of the $S_i(r,q)$ converges to zero as $r\to 1$, we arrive at the contradiction $G(q^* + \varepsilon)>0$. Hence, \begin{equation} \limsup_{\substack{k \to \infty\\ k \text{ even}}} q_k \le q^* < 1 \,. \label{e.limsup} \end{equation} To extend this result to odd $n$, we note that \begin{gather} r_{n+1} = \frac{x_{n+1}}{x_{n+2}} = \frac1{1+q_n(1-r_n)} = \frac{(1-r_n)(1-r_n q_n)}{1+q_n(1-r_n)} + r_n > r_n \end{gather} for all large enough even $n$. This implies an even stricter bound on the right hand side of \eqref{e.G-ineq} when $n$ is replaced by $n+1$, so that \eqref{e.limsup} holds on the subsequence of odd integers as well. Altogether, this proves that the sequence of internodal distances $d_n = x_{n+1}-x_n$ is geometric, thus the $x_n$ have a finite limit. \end{proof} \begin{remark} Note that in the proof of Theorem~\ref{th.shrink}, we only need a result which is weaker than the statement of Lemma~\ref{rings.schrinking}, namely that $r_n \to 1$ for even $n$ going to infinity \end{remark} \begin{remark} Note that the argument yields an explicit upper bound for $q_n$, namely \begin{equation} \limsup_{n\to \infty} q_n \le q^* < 1 \end{equation} where, as in the proof, $q^*$ is the unique positive root of $G$, which is defined in \eqref{e.Gdef}. \end{remark} \begin{remark} \label{r.shrink-generalization} It is possible to relax the unimodality condition in the statement of Theorem~\ref{th.shrink}. In fact, it suffices that $\lim_{\theta\nearrow1}K''(\theta)=-\infty$. Indeed, assume that $K''$ is defined on $U_K$. Take any $\theta^\star\in(0,1)$ such that $K''<0$ on $[\theta^\star,1)\cap U_K$. 
Changing variables in \eqref{e.partialsum}, we obtain \begin{equation} \omega_n(x) = \Gamma-x\int_0^x \I_{[0,x_n]}(y) \, K \Bigl( \frac yx \Bigr) \, H(\omega(y)) \,\d y \,. \end{equation} When $n$ is even and $x\in(x_{n+1},x_{n+2})$, the singularity of $K'$ and $K''$ at $\theta=1$ is separated from the domain of integration. We can therefore differentiate under the integral, so that \begin{align} \omega_n'(x) = \frac1x\int_0^x \I_{[0,x_n]}(y) \, K' \Bigl( \frac yx \Bigr) \, y \, H(\omega(y)) \, \d y - \int_0^x\I_{[0,x_n]}(y) \, K \Bigl( \frac yx \Bigr) \, H(\omega(y)) \, \d y \end{align} and \begin{align} \omega_n''(x) & = - \frac1{x^3}\int_0^x \, \I_{[0,x_n]}(y) \, K'' \Bigl( \frac yx \Bigr) \, y^2 \, H(\omega(y)) \, \d y \notag \\ & = - \int_0^1 \I_{[0,x_n/x]}(\theta) \, K''(\theta) \, \theta^2 \, H(\omega(\theta x)) \, \d \theta \notag \\ & \ge -\int_0^{\theta^\star} \lvert K''(\theta) \rvert \, \theta^2 \, \d\theta >-\infty \,. \end{align} When $n$ is odd, then for every $x\in(x_{n+1},x_{n+2})$ there is an even index $\ell$ such that $x_{\ell-1}<x\theta^\star\le x_{\ell+1}$ (with the provision that $x_{-1}=0$), and \begin{equation} \omega_n''(x) = - x^{-3} \int_0^{x_{\ell-1}} K'' \Bigl( \frac yx \Bigr) \, y^2 \, H(\omega(y)) \, \d y + \sum_{i=\ell}^{n-1}(-1)^i \, F \Bigl( \frac{x_i}x \Bigr) \,, \end{equation} where $F$ is as in \eqref{def.F}. As in the proof of the theorem, $F'(z)=z^2 \, K''(z)<0$ on $[\theta^\star,1)$ so that $M^*=\max_{z\in[0,1]}F(z)$ is finite and \begin{equation} \sum_{i=\ell}^{n-1}(-1)^i \, F \Bigl(\frac{x_i}x \Bigr) \le F \Bigl( \frac{x_\ell}x \Bigr) \le M^* \,. \end{equation} Therefore, \begin{align} \omega_n''(x) \le - \int_0^{x_\ell/x} K''(\theta) \, \theta^2 \, \d \theta + M^* \le \int_0^{\theta^\star} \lvert K''(\theta) \rvert \, \theta^2 \, \d\theta + M^* < \infty \,. \end{align} Hence, \eqref{e.main-ineq} and \eqref{e.main-ineq2} continue to hold and the remainder of the proof proceeds as before. \end{remark} \begin{corollary} Suppose that $K$ satisfies conditions \ref{i.k1}--\ref{i.k3} stated in the introduction. Then there exists $x^*<\infty$ so that the maximal interval of existence of a precipitation ring pattern in the sense of \eqref{e.abstract-prec-eqn} is $[0,x^*]$. \end{corollary} \begin{proof} When the solution is degenerate, such $x^*$ exists by definition. Otherwise, property \ref{i.k1} and the positivity of the kernel on $(0,1)$ imply that Lemma~\ref{l.infrings} is applicable, i.e., there exists an infinite number of precipitation rings. Then, due to properties \ref{i.k2} and \ref{i.k3}, Theorem~\ref{th.shrink} applies and asserts the existence of a finite accumulation point $x^*$ of the ring pattern. \end{proof} \section{Existence of degenerate solutions} \label{s.degenerate} In this section, we show that degenerate solutions exist. These are solutions to \eqref{omega.explicit.0} which cannot be continued past a finite number of zeros. While we cannot settle this question for the concrete kernel introduced in Section~\ref{s.simplified}, we construct a kernel $K$ such that the solution cannot be continued in the sense of \eqref{omega.explicit.0} past $x_2$, the end point of the first precipitation gap. \begin{theorem} \label{t.degenerate} There exist a non-negative kernel $\K \in C([0,1])$ and a constant $\Gamma$ such that the solution of the integral equation \eqref{omega.explicit.0} is degenerate. Moreover, $\K$ is differentiable at $\theta=0$ and satisfies conditions \ref{i.k1} and \ref{i.k2}. 
\end{theorem} \begin{remark} The proof starts from a kernel template which is then modified on a subinterval $[0,r)$ to produce a kernel $\K$ with the desired properties. When the kernel template is $C^1([0,1))$ and $C^2((0,1))$, the resulting kernel $\K$ will inherit these properties except possibly at the gluing point $\theta=r$ where continuity of the first derivative is not enforced. This is a matter of convenience, not of principle: The existence of degenerate solutions does not hinge on the existence of a jump discontinuity for $\K'$. Straightforward, yet technical modifications of the gluing construction employed in the proof will yield a kernel producing degenerate solutions within the same class of kernels to which Theorem~\ref{th.shrink}, in the sense of Remark~\ref{r.shrink-generalization}, applies. \end{remark} \begin{remark} In Section~\ref{s.simplified}, we derived a concrete kernel by simplifying the HHMO-model. In that setting, there exists an integrable function $G$ such that $K(\theta) = \theta^2 \, G(\theta)$ and \begin{gather} \Gamma = \int_0^1 \G(\theta) \, \d\theta \,. \label{e.gamma-relation} \end{gather} In the proof of the theorem, we preserve this relationship, i.e., the constant $\Gamma$ here will also satisfy \eqref{e.gamma-relation}. \end{remark} \begin{proof} Take any continuous template kernel $\G_\star \colon [0,1] \to \R_+$ with $\G_\star(\theta)\sim k \, \sqrt{1-\theta}$ as $\theta \to 1$ for some positive constant $k$. Set $\K_\star(\theta)=\theta^2 \, \G_\star(\theta)$ and \begin{equation} \Gamma = \int_0^1 \G_\star(\theta) \, \d\theta \,. \end{equation} Then $\K_\star(0)=0$ and \begin{equation} \K^\prime_\star(0) = \lim_{\theta\searrow0} \frac{\K_\star(\theta)} \theta = \lim_{\theta\searrow0} \theta \, \G_\star(\theta) = 0 \,. \end{equation} Now consider the solution to \eqref{omega.explicit.0} with template kernel $\K_\star$ in place of $K$. As in the proof of Lemma~\ref{l.infrings}, the solution must have at least two zeros $x_1$ and $x_2$. Let \begin{equation} \omega_\star(x) = \begin{dcases} \Gamma - x^2\int_0^1\K_\star(\theta) \, \d\theta & \text{if $x<x_1$} \,, \\ \Gamma - x^2 \int_0^{x_1/x} \K_\star(\theta) \, \d\theta & \text{otherwise} \,. \end{dcases} \end{equation} In the following we assume, for simplicity, that $\omega_\star^\prime(x_2)>0$. This is true for generic template kernels $\G_\star$ and thus suffices for the construction. However, it is also possible to modify the procedure to come to the same conclusion when $\omega_\star^\prime(x_2)=0$; for details, see \cite{Darbenas:2018:PhDThesis}. Note that $\omega_\star$ is continuously differentiable on $[0,x_2]$, so $\omega^\prime_\star(x_2-\varepsilon)>0$ for all $\varepsilon>0$ small enough. For each such small $\varepsilon$, set \begin{equation} \omega_{\varepsilon} (x) = \begin{dcases} \omega_\star(x) & \text{for } x \in [0,x_2-\varepsilon] \,, \\ \frac12 \, x^2 \int_{\tfrac{x_2+\varepsilon}x}^1 \K_\star(\theta) \, \d\theta & \text{for } x \in [x_2+\varepsilon,z_\varepsilon] \,, \end{dcases} \label{e.omega_epsilon} \end{equation} where $z_\varepsilon \in (x_2+\varepsilon,x_2+2\varepsilon]$ is chosen such that $\omega_\varepsilon(z_\varepsilon) \leq \Gamma/2$. 
(This is always possible because $\omega_\varepsilon(x_2+\varepsilon)=0$ and the second case expression in \eqref{e.omega_epsilon} is continuous, so we can take $z_\varepsilon$ such that $\omega_\varepsilon(z_\varepsilon)=\Gamma/2$ if such solution exists on $(x_2+\varepsilon,x_2+2\varepsilon]$, otherwise we take $z_\varepsilon = x_2+2\varepsilon$.) We now fill the gap in the definition of $\omega_{\varepsilon}$ such that \begin{enumerate}[label={\upshape(\alph*)}] \item $\omega_\varepsilon \colon [0,x_2 + 2 \varepsilon] \to \R$ is continuously differentiable, \item \label{i.omega.ii} $\omega_\varepsilon$ is increasing on $[x_2-\varepsilon,x_2+2\varepsilon]$, \item \label{i.omega.iii} $\omega_\varepsilon<\Gamma$ on $[x_2-\varepsilon,x_2+2\varepsilon]$. \end{enumerate} Observe that $\omega_\varepsilon(x_2+\varepsilon)=0$. Moreover, due to the positivity of $\K_\star$, $\omega_\varepsilon$ is positive and strictly increasing on the interval $(x_2+\varepsilon,z_\varepsilon]$, and \begin{equation} \label{omega.prime} \omega^\prime_\varepsilon(x) = x\int_{\tfrac{x_2+\varepsilon}x}^1 \K_\star(\theta) \, \d\theta + \frac12 \, (x_2+\varepsilon) \, \K_\star \Bigl( \frac{x_2+\varepsilon}x \Bigr) \,. \end{equation} Hence, $\omega_\varepsilon^\prime(x_2+\varepsilon)=0$. We now define a new kernel $\K_\varepsilon \colon [x_1/(x_2+2\varepsilon),1] \to \R_+$ via \begin{align} \label{K.def.2} \K_\varepsilon (\theta) &=\frac{\omega_\varepsilon^\prime(x_1/\theta)}{x_1} - 2 \theta \, \frac{\omega_\varepsilon(x_1/\theta) - \Gamma}{x_1^2} \end{align} and set $\G_\varepsilon(\theta)=\K_\varepsilon(\theta)/\theta^2$. Due to \ref{i.omega.ii} and \ref{i.omega.iii} above, $\K_\varepsilon$ is positive on its interval of definition. Moreover, for $x\in(x_1,x_2+2\varepsilon)$, \begin{equation} \K_\varepsilon\Bigl(\frac{x_1}x\Bigr) = \frac{x^2}{x_1} \, \frac\d{\d x} \frac{\omega_\varepsilon(x)-\Gamma}{x^2} \,. \label{e.K.def.1} \end{equation} Thus, for $x \in (x_1,x_2-\varepsilon)$ where $\omega_\varepsilon = \omega_\star$, \begin{equation} \K_\varepsilon\Bigl(\frac{x_1}x\Bigr) = \frac{x^2}{x_1} \, \frac\d{\d x} \frac{\omega_\star(x)-\Gamma}{x^2} = \frac{x^2}{x_1} \, \frac\d{\d x} \int_{x_1/x}^1\K_\star(\theta) \, \d\theta = \K_\star\Bigl(\frac{x_1}x\Bigr) \,, \label{e.K.def.3} \end{equation} or, equivalently, \begin{equation} \K_\varepsilon(\theta)=\K_\star(\theta)\text{ for }\theta \in \Bigl[\frac{x_1}{x_2-\varepsilon},1\Bigr] \,. \label{e.K.def.4} \end{equation} Noting that \begin{subequations} \begin{gather} \I_{[x_1/(x_2+2\varepsilon),1]}(\theta) \, \K_\varepsilon(\theta) \to \I_{[x_1/x_2,1]}(\theta) \, \K_\star(\theta) \intertext{and} \I_{[x_1/(x_2+2\varepsilon),1]}(\theta) \, \G_\varepsilon(\theta) \to \I_{[x_1/x_2,1]}(\theta) \, \G_\star(\theta) \end{gather} \end{subequations} pointwise for a.e.\ $\theta$ as $\varepsilon\searrow0$, we find that \begin{multline} \frac{\int_0^1 \K_\star(\theta) \, \d\theta - \int^1_{x_1/(x_2+2\varepsilon)} \K_\varepsilon(\theta) \, \d\theta}% {\int_0^1 \G_\star(\theta) \, \d\theta - \int^1_{x_1/(x_2+2\varepsilon)} \G_\varepsilon(\theta) \, \d\theta} - \biggl( \frac{x_1}{x_2+2\varepsilon} \biggr)^2 \to \\ \frac{\int_0^1 \K_\star(\theta) \, \d\theta - \int^1_{x_1/x_2} \K_\star(\theta) \, \d\theta}% {\int_0^1 \G_\star(\theta) \, \d\theta - \int^1_{x_1/x_2} \G_\star(\theta) \, \d\theta} - \frac{x_1^2}{x_2^2} = \frac{\int_0^{x_1/x_2} \K_\star(\theta) \, \d\theta}% {\int_0^{x_1/x_2} \G_\star(\theta) \, \d\theta} - \frac{x_1^2}{x_2^2} \,. 
\label{e.intfrac} \end{multline} Recall that $0\le\K_\star(\theta)=\G_\star(\theta) \, \theta^2$, so that $\K_\star(\theta)\le\G_\star(\theta) \, x_1^2/x_2^2$ on $[0, x_1/x_2]$. This implies that the right hand side of \eqref{e.intfrac} is negative so that there exists $\varepsilon>0$ such that \begin{equation} \frac{\int_0^1 \K_\star(\theta) \, \d\theta - \int^1_{x_1/(x_2+2\varepsilon)} \K_{\varepsilon}(\theta) \, \d\theta}% {\int_0^1 \G_\star(\theta) \, \d\theta - \int^1_{x_1/(x_2+2\varepsilon)} \G_{\varepsilon}(\theta) \, \d\theta} < \biggl( \frac{x_1}{x_2+2\varepsilon} \biggr)^2 \,. \end{equation} In all of the following, we fix $\varepsilon>0$ such that this inequality holds true, abbreviate $r={x_1}/(x_2+2\varepsilon)$, and set $\G(\theta) = \G_{\varepsilon}(\theta)$ and $\K(\theta) = \K_{\varepsilon}(\theta)$ for $\theta \in [r,1]$. We still need to define $\G$ and $\K$ on the interval $[0,r)$, which is done as follows. Since $\int_0^r r^{-n} \, \theta^n \, \d \theta\to0$ as $n\to\infty$, we can choose $n$ such that \begin{equation} r_\star^2 \equiv \frac{\int_0^1 \K_\star(\theta) \, \d\theta - \int^1_r \K_{\varepsilon}(\theta) \, \d\theta - \G_{\varepsilon}(r) \int_0^r r^{-n} \, \theta^{n+2} \, \d\theta}% {\int_0^1 \G_\star(\theta) \, \d\theta - \int^1_r \G_{\varepsilon}(\theta) \, \d\theta - \G_{\varepsilon}(r) \int_0^r r^{-n} \, \theta^n \, \d \theta} < r^2 \,. \label{e.rstar2} \end{equation} Define $b_1,b_2 \in C^3([0,r],\R_+)$ as the spline functions \begin{subequations} \label{B.spline} \begin{gather} b_1(\theta) = \I_{[0,r_\star]}(\theta) \, \theta^4 \, (r_\star-\theta)^4 \,, \\ b_2(\theta) = \I_{[r_\star,r]}(\theta) \, (r-\theta)^4 \, (\theta-r_\star)^4 \,. \end{gather} \end{subequations} By the integral mean value theorem, \begin{gather} \frac{\int_0^r \theta^2 \, b_1(\theta) \, \d \theta}% {\int_0^r b_1(\theta) \, \d \theta} = \frac{\int_0^{r_\star} \theta^2 \, b_1(\theta) \, \d \theta}% {\int_0^{r_\star} b_1(\theta) \, \d \theta} < r_\star^2 < \frac{\int^r_{r_\star} \theta^2 \, b_2(\theta) \, \d \theta}% {\int^r_{r_\star} b_2(\theta) \, \d \theta} = \frac{\int_0^r \theta^2 \, b_2(\theta) \, \d \theta}% {\int_0^r b_2(\theta) \, \d \theta} \,. \end{gather} Now define $B_1,B_2 \colon [0,1]\to\R_+$ as \begin{gather} B_1(\lambda) = \lambda \int_0^r \theta^2 \, b_1(\theta) \, \d \theta + (1-\lambda) \int_0^r \theta^2 \, b_2(\theta) \, \d \theta \,, \\ B_2(\lambda) = \lambda \int_0^r b_1(\theta) \, \d \theta + (1-\lambda) \int_0^r b_2(\theta) \, \d \theta \,. \end{gather} Clearly, $\frac{B_1(0)}{B_2(0)}>r_\star^2$ and $\frac{B_1(1)}{B_2(1)}<r_\star^2$. Hence, due to continuity with respect to $\lambda$, we can find $\lambda_\star \in (0,1)$ such that \begin{equation} \label{lambda.star} \frac{B_1(\lambda_\star)}{B_2(\lambda_\star)} = r_\star^2 \,. \end{equation} Finally, on the interval $[0,r)$, we define \begin{gather} \K (\theta) = \frac{k_\star}{B_1(\lambda_\star)} \, \bigl( \lambda_\star \, \theta^2 \, b_1(\theta) + (1-\lambda_\star) \, \theta^2 \, b_2(\theta) \bigr) + \G_{\varepsilon}(r) \, \frac{\theta^{n+2}}{r^n} \,, \end{gather} where $k_\star$ denotes the numerator of the fraction defining $r_\star^2$ in \eqref{e.rstar2}. Further, we set $\G(\theta) = \K(\theta)/\theta^2$. 
The functions $\K$ and $\G$ so defined are continuous on $[0,1]$, strictly positive on $(0,1)$, and, by direct computation, satisfy \begin{subequations} \begin{gather} \int_0^1\K(\theta) \, \d\theta = \int_0^1\K_\star(\theta) \, \d\theta \label{e.k-star-01} \\ \intertext{and} \int_0^1\G(\theta) \, \d\theta = \int_0^1\G_\star(\theta) \, \d\theta = \Gamma \,. \end{gather} \end{subequations} Further, using \eqref{e.k-star-01}, \eqref{e.K.def.1}, and the fact that $\omega_\star(x_1)=0$, we verify that \begin{equation} \omega_{\varepsilon} (x) = \begin{dcases} \Gamma - x^2\int_0^1 \K (\theta) \, \d\theta & \text{for } x \in [0,x_1) \,, \\ \Gamma -x^2 \int_0^{x_1/x} \K(\theta) \, \d\theta & \text{for } x \in [x_1, x_2+2\varepsilon] \end{dcases} \label{e.omega-newdef} \end{equation} holds on the entire interval $[0, x_2+2\varepsilon]$. Moreover, comparing \eqref{e.omega-newdef} with \eqref{omega.explicit.0} and noting that $x_1$ is the first and $x_2+\varepsilon$ the second zero of $\omega_\varepsilon$ by construction, we see that $\omega_{\varepsilon}$ satisfies \eqref{omega.explicit.0} at least on the interval $[0,x_2+\varepsilon]$. To complete the proof, we show that it is not possible to find a solution $\omega$ to \eqref{omega.explicit.0} that extends to a non-degenerate solution past the interval $[0,x_2+\varepsilon]$ on which $\omega = \omega_{\varepsilon}$. Assume, on the contrary, that such an extension exists. Then there exists a small interval $I=(x_2+\varepsilon,x_2+\varepsilon+\varepsilon_\star)$ on which $\omega$ is either positive or negative. (Note that we must require that $\varepsilon_\star< z_{\varepsilon}-(x_2+\varepsilon)$, cf.\ \eqref{e.omega_epsilon} where $z_\varepsilon$ is first introduced.) Suppose first that $\omega<0$ on $I$. Then \eqref{e.omega-newdef} continues to provide a solution for \eqref{omega.explicit.0} on $I$, but $\omega_{\varepsilon}$ is positive there, a contradiction. Suppose then that $\omega>0$ on $I$. Then, using \eqref{e.abstract-prec-eqn} and \eqref{e.omega_epsilon}, we express $\omega$ on the interval $I$ as \begin{align} \omega(x) & = \Gamma - x^2 \int_0^{\tfrac{x_1}x} \K(\theta) \, \d\theta - x^2\int_{\tfrac{x_2+\varepsilon}x}^1 \K (\theta) \, \d\theta \notag\\ & = \omega_{\varepsilon}(x) - x^2 \int_{\tfrac{x_2+\varepsilon}x}^1 \K (\theta) \, \d\theta \notag\\ & = - \frac12 \, x^2 \int_{\tfrac{x_2+\varepsilon}x}^1 \K_\star(\theta) \, \d\theta < 0 \,. \label{e.extension-positive} \end{align} The last equality is due to \eqref{e.omega_epsilon} and \eqref{e.K.def.4}, which is applicable for sufficiently small $\varepsilon_\star$. Again, this contradicts the assumed sign of $\omega$. Thus, we conclude that $\omega$ cannot be extended via formula \eqref{e.abstract-prec-eqn} onto any right neighborhood of $x=x_2+\varepsilon$. \end{proof} \section{Extended solutions for the simplified HHMO-model} \label{exist.simplified} So far, we have seen that precipitation band patterns, that is, sequences of intervals on which $\omega(x)>0$ and the reactant concentration thus exceeds the super-saturation threshold, must break down at a finite location $x^*$, which is either an accumulation point of bands as described by Theorem~\ref{th.shrink}, or a point at which the solution degenerates after a finite number of precipitation bands as in Theorem~\ref{t.degenerate}. In this section, we consider a more general notion of solution which is motivated by the construction of weak solutions to the full HHMO-model in \cite{HilhorstHM:2009:MathematicalSO}.
\begin{definition} \label{d.extended} A pair $(\omega, \rho)$ is an \emph{extended solution} of the simplified HHMO-model if $\omega \in C([0,\infty))$, \begin{equation} \label{HHMO.extended} \omega(x) = \Gamma-x^2\int_0^1K(\theta) \, \rho(x\theta) \, \d\theta \,, \end{equation} and $\rho$ is a measurable function on $[0,\infty)$ taking values from the Heaviside graph, i.e., \begin{equation} \label{Heaviside} \rho(y)\in H(\omega(y)) = \begin{cases} 0 & \text{if }\omega(y)<0 \,, \\ [0,1] & \text{if }\omega(y)=0 \,, \\ 1 & \text{otherwise} \,. \end{cases} \end{equation} \end{definition} \begin{theorem} If $K \in C([0,1])$, then an extended solution to \eqref{HHMO.extended} exists. \end{theorem} \begin{proof} Changing variables, we write \eqref{HHMO.extended} as \begin{equation} \omega(x) = \Gamma - x \int_0^x K \Bigl( \frac y x \Bigr) \, \rho(y) \, \d y \,. \end{equation} Now consider a family of mollified Heaviside functions $H_\varepsilon\in C^\infty(\R,[0,1])$ parameterized by $\varepsilon>0$ such that $H_\varepsilon(z)=1$ for $z\ge\varepsilon$ and $H_\varepsilon(z)=0$ for $z\le-\varepsilon$. We claim that, for fixed $\varepsilon>0$, the corresponding mollified equation \begin{equation} \label{omega.epsilon} \omega_\varepsilon(x) = \Gamma - x \int_0^x K \Bigl( \frac y x \Bigr) \, H_\varepsilon(\omega_\varepsilon(y)) \, \d y \end{equation} has a solution $\omega_\varepsilon \in C([0,\infty))$. Indeed, suppose that $\omega_\varepsilon$ is already defined on some interval $[0,a]$, where $a$ may be zero. We seek $\omega_\varepsilon \in C([a,a+\delta])$ which continuously extends $\omega_\varepsilon$ past $x=a$ as a fixed point of a map $T$ from $C([a,a+\delta])$ endowed with the supremum norm into itself, defined by \begin{equation} T[\phi] (x) = \Gamma-x\int_0^a K\left(\frac{y}x \right) \, H_\varepsilon(\omega_\varepsilon(y)) \, \d y - x\int_a^x K\left(\frac{y}x \right) \, H_\varepsilon(\phi(y)) \, \d y \,. \end{equation} Since \begin{align} \bigl| T[\phi](x) - T[\psi](x) \bigr| & = \biggl| x \int_a^x K \Bigl( \frac y x \Bigr) \, \bigl( H_\varepsilon(\psi(y)) - H_\varepsilon(\phi(y)) \bigr) \, \d y \biggr| \notag \\ & \leq x \int_a^x \Bigl| K \Bigl( \frac y x \Bigr) \Bigr| \, \norm{}{H_\varepsilon(\phi) - H_\varepsilon(\psi)}{L^\infty} \, \d y \notag \\ & \leq x \, (x-a) \, \norm{}{K}{L^\infty} \, \norm{}{H_\varepsilon^\prime}{L^\infty} \, \norm{}{\phi - \psi}{L^\infty} \,, \end{align} $T$ is a strict contraction for $\delta>0$ small enough, hence has a unique fixed point. In addition, the maximal interval of existence of $\omega_\varepsilon$ is closed, as the right hand side of \eqref{omega.epsilon} is continuous, and open at the same time due to the preceding argument. Thus, a solution $\omega_\varepsilon \in C([0,\infty))$ exists (and is unique). By direct inspection, for every fixed $b>0$, the families $\{\omega_\varepsilon\}$ and $\{H_\varepsilon\circ\omega_\varepsilon\}$ are uniformly bounded in $C([0,b])$ endowed with the supremum norm. Moreover, $\{\omega_\varepsilon\}$ is equicontinuous. Indeed, for $y,z \in (0,b]$, \begin{equation} \biggl| \frac{\omega_{\varepsilon_i}(z)-\Gamma}{z} - \frac{\omega_{\varepsilon_i}(y)-\Gamma}y \biggr| \leq \int_0^{\max \{y,z\}} \biggl| K \Bigl(\frac \theta{z} \Bigr) - K \Bigl(\frac\theta{y} \Bigr) \biggr| \, \d \theta \,, \end{equation} where, by the dominated convergence theorem, the right hand side converges to zero as $z \to y$. Equicontinuity at $y=0$ is obvious. 
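Purely as a numerical aside, and not needed for the argument, the continuation procedure above is easy to mimic on a grid: an explicit left-endpoint quadrature determines $\omega_\varepsilon$ at a grid point from previously computed values only. In the Python sketch below the kernel, $\Gamma$ and $\varepsilon$ are illustrative choices of ours, not quantities fixed by the model.
\begin{verbatim}
# Illustration only: explicit marching scheme for the mollified equation
#   omega(x) = Gamma - x * int_0^x K(y/x) H_eps(omega(y)) dy
# with an ad hoc kernel K(t) = sqrt(1 - t) and Gamma = 1.
import numpy as np

Gamma, eps, b, m = 1.0, 1e-2, 3.0, 4000
K = lambda t: np.sqrt(np.clip(1.0 - t, 0.0, None))
H_eps = lambda z: np.clip((z + eps) / (2 * eps), 0.0, 1.0)  # piecewise-linear stand-in
                                                            # for the mollified Heaviside
x = np.linspace(0.0, b, m + 1)
h = x[1] - x[0]
omega = np.empty_like(x)
omega[0] = Gamma
for j in range(1, m + 1):
    # left Riemann sum over [0, x_j): only already-computed omega values enter
    omega[j] = Gamma - x[j] * h * np.sum(K(x[:j] / x[j]) * H_eps(omega[:j]))

print("omega first becomes negative near x =", x[np.argmax(omega < 0)])
\end{verbatim}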
By the Arzel\'a--Ascoli theorem, the uniform bounds and the equicontinuity established above yield a decreasing sequence $\varepsilon_i \to 0$ and a function $\omega \in C([0,b])$ such that $\omega_{\varepsilon_i} \to \omega$ in $C([0,b])$. Further, by the Banach--Alaoglu theorem, there exists $\rho \in L^\infty([0,b])$ such that, possibly passing to a subsequence, $H_{\varepsilon_i}\circ\omega_{\varepsilon_i} \rightharpoonup \rho$ weakly-$*$ in $L^\infty$. This implies that $\rho$ takes values a.e.\ in the interval $[0,1]$, since the weak-$*$ limit inherits the pointwise bounds $0 \le H_{\varepsilon_i}\circ\omega_{\varepsilon_i} \le 1$. Passing to the limit in \eqref{omega.epsilon}, we conclude that \begin{equation} \omega(x) = \Gamma - x \int_0^x K \Bigl( \frac y x \Bigr) \, \rho(y) \, \d y \,. \end{equation} Finally, we claim that $\rho(y)=1$ whenever $\omega(y)>0$. Indeed, fixing $y$ such that $\omega(y)>0$, equicontinuity of $\omega_\varepsilon$ implies that there exists a neighborhood of $y$ on which $\omega_{\varepsilon_i}$ is eventually strictly positive. On this neighborhood, $H_{\varepsilon_i}\circ\omega_{\varepsilon_i}$ converges strongly to $1$. A similar argument proves that $\rho(y)=0$ whenever $\omega(y)<0$. So far, we have shown that the pair $(\omega,\rho)$ satisfies \eqref{HHMO.extended} and \eqref{Heaviside} on $[0,b]$. To extend the interval of existence, we can iteratively restart the compactness argument on intervals $[0,nb]$ for $n\in \N$, passing to a subsequence each time. This proves existence of an extended solution on $[0,\infty)$. \end{proof} Uniqueness of extended solutions is a much more delicate issue. In the following particular case, we can give a positive answer to the question of uniqueness. \begin{definition} \label{reg.extend} An extended solution $(\omega,\rho)$ to the simplified HHMO-model is \emph{regularly extended} to an interval $[x^\star,x^\star+\varepsilon]$, where $x^\star$ is the point of breakdown in the sense of Theorem~\ref{th.shrink} or Theorem~\ref{t.degenerate} and $\varepsilon>0$, if $\omega \equiv 0$ on this interval. \end{definition} If $(\omega_1, \rho_1)$ and $(\omega_2,\rho_2)$ are regularly extended solutions to \eqref{HHMO.extended}, then $\omega_1$ and $\omega_2$ coincide on $[0,x^\star]$ by construction and on $[x^\star,x^\star+\varepsilon]$ by definition. Moreover, $\Delta \rho=\rho_1-\rho_2 = 0$ on $[0,x^\star]$. Thus, the question of uniqueness reduces to a statement on the non-existence of non-trivial solutions to the linear homogeneous integral equation \begin{equation} \label{WDCVIE} \int_0^1K(\theta) \, \Delta \rho(x\theta) \, \d \theta = 0 \end{equation} for $x\in[0,x^\star+\varepsilon]$. Due to properties \ref{i.k1}--\ref{i.k3} of the kernel, the problem falls into the general class of weakly degenerate cordial Volterra integral equations. In \cite{DarbenasO:2019:UniquenessSW}, we answer this question in the affirmative. While we believe that extended solutions are generically regularly extended, we cannot exclude the possibility that extended solutions develop a precipitation band pattern that accumulates at $x^\star$ from above. Thus, the general question of unique extendability remains open. We conjecture that the question of uniqueness of extended solutions might be addressed by replacing Definition~\ref{d.extended} by a formulation in terms of a mixed linear complementarity problem. To be concrete, write $\omega = \omega_+ - \omega_-$, where $\omega_+$ is the positive part and $\omega_-$ the negative part of $\omega$.
Further, set $\sigma = 1-\rho$ and define the vector functions \begin{equation} V = \begin{pmatrix} \sigma \\ \rho \end{pmatrix} \qquad \text{and} \qquad W = \begin{pmatrix} \omega_+ \\ \omega_- \end{pmatrix} \,. \end{equation} Then we can formulate the extended solution as follows. Find $V \geq 0$, $W \geq 0$ such that \begin{subequations} \begin{equation} \mathcal L V + \mathcal M W + B = 0 \end{equation} subject to \begin{equation} \langle V, W \rangle = 0 \,, \label{e.complementarity-condition} \end{equation} where $\mathcal L$ and $\mathcal M$ are linear operators defined by \begin{gather} \mathcal L V = \begin{pmatrix} \displaystyle x^2 \int_0^1 K(\theta) \, \rho(x\theta) \, \d \theta \\ \rho + \sigma \end{pmatrix} \,, \\ \mathcal M W = \begin{pmatrix} \omega_+ - \omega_- \\ 0 \end{pmatrix} \,, \end{gather} and \begin{gather} B = \begin{pmatrix} \Gamma \\ -1 \end{pmatrix} \,. \end{gather} \end{subequations} The angle brackets in \eqref{e.complementarity-condition} denote the canonical inner product for vectors of $L^2((0,a))$-functions, \begin{equation} \langle V, W \rangle = \int_0^a \omega_+ (x) \, \sigma(x) \, \d x + \int_0^a \omega_- (x) \, \rho(x) \, \d x \,. \end{equation} This formulation is known as a mixed linear complementarity problem. To make progress here, it is necessary to adapt Lions--Stampaccia theory \cite{LionsS:1967:VariationalI} to the case of mixed complementarity problems; see, e.g., \cite{CryerD:1980:EquivalenceLC,ZengAY:2009:EquivalenceML}, for different reformulations in a Sobolev space setting. \section{Discussion} \label{s.discussion} In this paper, we have identified a mechanism which leads to very rapid, i.e., finite-time equilibrization of a dynamical system with memory that is ``damped'' via relay hysteresis. Past a certain point, a solution can only be continued in a generalized sense by ``completing the relay''. We can assert existence and conditional uniqueness for the generalized solution, but full well-posedness remains open. Possible approaches are a reformulation of the concept of generalized solution in terms of a mixed linear complementarity problem as outlined above, or possibly a fixed point formulation using fractional integral operators. We believe that the integral equation \eqref{omega.explicit.0} is a useful test bed for studying such approaches, with the hope to eventually transfer results to more general reaction-diffusion equations with relay hysteresis. The detailed observed behavior is very much tied to property \ref{i.k1}, the square-root degeneracy of the kernel $K$ near $\theta=1$. From the perspective of solving integral equations, this behavior is too degenerate for classical contraction mapping arguments as used by Volterra to apply, but it is sufficiently non-degenerate that strong results can still be proved, see the discussion in \cite{DarbenasO:2019:UniquenessSW}. This degeneracy is associated with the scaling behavior of the heat kernel, so even when an exact reduction from a PDE to an integral equation, as is possible for the simplified HHMO-model, is not available, the associated phenomenology is expected to survive. Let us finally remark on the connection of our models to the real-world Liesegang precipitation phenomenon. Our detailed results on the breakdown of patterns for the simplified model imply that we cannot expect the Keller--Rubinow family of models to provide a good literal description of Liesegang rings. 
However, the fact that, independent of the details of simplified vs.\ full dynamics, the models converge rapidly toward a steady-state which \emph{only exists as a generalized solution}, we believe that it might be possible to interpret fractional values of the precipitation function as a \emph{precipitation density}. In this view, the model would provide a coarse-grained description of the phenomenon in the sense that the precise information about the location of the rings is lost, but the local average fraction of space covered by precipitation rings can still be asserted. If this suggested interpretation is valid, the explicit asymptotic profile detailed in Section~\ref{s.simplified} will provide a direct relationship between the parameters of the system and the \emph{macroscopic} properties of the Liesegang pattern. This point of view is different from that taken by Duley \emph{et al.}\ \cite{DuleyFM:2019:RegularizationOS}. As they numerically encountered a qualitatively similar breakdown of solutions in the full Keller--Rubinow model (the model without the fast reaction limit taken), they chose to modify the dynamics of the model by introducing a delay variable to smooth out the onset of precipitation. Their approach aims at a more physical description at the \emph{microscopic} level, i.e., toward a model that can correctly represent the width and location of each individual Liesegang ring.
{ "timestamp": "2020-10-29T01:23:52", "yymm": "1911", "arxiv_id": "1911.09084", "language": "en", "url": "https://arxiv.org/abs/1911.09084", "abstract": "We study solutions to the integral equation \\[ \\omega(x) = \\Gamma - x^2 \\int_{0}^1 K(\\theta) \\, H(\\omega(x\\theta)) \\, \\mathrm d \\theta \\] where $\\Gamma>0$, $K$ is a weakly degenerate kernel satisfying, among other properties, $K(\\theta) \\sim k \\, (1-\\theta)^\\sigma$ as $\\theta \\to 1$ for constants $k>0$ and $\\sigma \\in (0, \\log_2 3 -1)$, $H$ denotes the Heaviside function, and $x \\in [0,\\infty)$. This equation arises from a reaction-diffusion equation describing Liesegang precipitation band patterns under certain simplifying assumptions. We argue that the integral equation is an analytically tractable paradigm for the clustering of precipitation rings observed in the full model. This problem is nontrivial as the right hand side fails a Lipschitz condition so that classical contraction mapping arguments do not apply.Our results are the following. Solutions to the integral equation, which initially feature a sequence of relatively open intervals on which $\\omega$ is positive (\"rings\") or negative (\"gaps\") break down beyond a finite interval $[0,x^*]$ in one of two possible ways. Either the sequence of rings accumulates at $x^*$ (\"non-degenerate breakdown\") or the solution cannot be continued past one of its zeroes at all (\"degenerate breakdown\"). Moreover, we show that degenerate breakdown is possible within the class of kernels considered. Finally, we prove existence of generalized solutions which extend the integral equation past the point of breakdown.", "subjects": "Classical Analysis and ODEs (math.CA)", "title": "Breakdown of Liesegang precipitation bands in a simplified fast reaction limit of the Keller-Rubinow model", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9740426412951847, "lm_q2_score": 0.7279754548076478, "lm_q1q2_score": 0.7090791347989046 }
https://arxiv.org/abs/1207.7209
Concentration inequalities for order statistics
This note describes non-asymptotic variance and tail bounds for order statistics of samples of independent identically distributed random variables. Those bounds are checked to be asymptotically tight when the sampling distribution belongs to a maximum domain of attraction. If the sampling distribution has non-decreasing hazard rate (this includes the Gaussian distribution), we derive an exponential Efron-Stein inequality for order statistics: an inequality connecting the logarithmic moment generating function of centered order statistics with exponential moments of Efron-Stein (jackknife) estimates of variance. We use this general connection to derive variance and tail bounds for order statistics of Gaussian samples. Those bounds are not within the scope of the Tsirelson-Ibragimov-Sudakov Gaussian concentration inequality. Proofs are elementary and combine Rényi's representation of order statistics and the so-called entropy approach to concentration inequalities popularized by M. Ledoux.
\section{Introduction} \label{sec:introduction} The purpose of this note is to develop non-asymptotic variance and tail bounds for order statistics. In the sequel, $X_1,\ldots,X_n$ are independent random variables, distributed according to some probability distribution $F$, and $X_{(1)}\geq X_{(2)} \geq \ldots \geq X_{(n)}$ denote the corresponding order statistics (the non-increasing rearrangement of $X_1,\ldots,X_n$). The cornerstone of Extreme Value Theory (\textsf{EVT}), the Fisher-Tippett-Gnedenko Theorem describes the asymptotic behavior of $X_{(1)}$ after centering and normalization \cite{dHaFei06}. The median $X_{(\lfloor n/2\rfloor)}$ is a widely used location estimator. Its asymptotic properties are well documented (See for example \cite{vandervaart:1998} for a review). Much less seems to be available if the sample size $n$ is fixed. The distribution function of $X_{(1)}$ is obviously explicitly known ($F^n$ !), but simple and useful variance or tail bounds do not seem to be publicized. Our main tools will be the Efron-Stein inequalities that assert that the jackknife estimate(s) of the variance of functions of independent random variables are on average upper bounds, and extensions of those inequalities that allow to derive exponential tail bounds (See Theorem \ref{thm:ess:1}). We refer to \cite{Mil74,ShaoWu89} and references therein for an account of the interplay between jackknife estimates, order statistics, extreme value theory and statistical inference. The search for non-asymptotic variance and tail bounds for extreme order statistics is not only motivated by the possible applications of \textsf{EVT} to quantitative risk management, but also by our desire to understand some aspects of the concentration of measure phenomenon \cite{ledoux:2001,massart:2003}. Concentration of measure theory tells us that a function of many independent random variables that does not depend too much on any of them is almost constant. The best known results in that field are the Poincar\'e and Gross logarithmic Sobolev inequalities and the Tsirelson-Ibragimov-Sudakov tail bounds for functions of random Gaussian vectors. If $X_1,\ldots,X_n$ are independent standard Gaussian random variables, and $f\colon\mathbb{R}^n\rightarrow\mathbb{R}$ is $L$-Lipschitz, then $Z=f(X_1,\ldots,X_n)$ satisfies $\operatorname{Var}(Z)\leq L^2$, $\log \mathbb{E}[\exp(\lambda(Z-\mathbb{E}Z))]\leq \tfrac{\lambda^2L^2}{2}$ and $\mathbb{P}\{ Z-\mathbb{E}Z\geq t\}\leq \exp(-\tfrac{t^2}{2L^2})\, .$ If we apply those bounds to $X_{(1)}$ (resp. to $X_{(\lfloor n/2\rfloor )}$) that is the maximum (resp. the median) of $X_1,\ldots,X_n$, the Lipschitz constant is (almost surely) $L=1$, so Poincar\'e inequality allows to establish $\operatorname{Var}(X_{(1)})\leq 1$ (resp. $\operatorname{Var}(X_{(\lfloor n/2\rfloor )})\leq 1$ ). This easy upper bound is far from being satisfactory, it is well-known in \textsf{EVT} that $\operatorname{Var}(X_{(1)})= O\big(\tfrac{1}{\log n }\big)$ and $\operatorname{Var}(X_{(\lfloor n/2\rfloor )})= O\big(\tfrac{1}{n}\big)$. Naive use of off-the-shelf concentration bounds does not work when handling maxima or order statistics at large. This situation is not uncommon. The analysis of the largest eigenvalue of random matrices from the Gaussian Unitary Ensemble (\textsf{GUE}) \cite{Led03} provides a setting where the derivation of sharp concentration inequalities require ingenuity and combining concentration/hypercontractivity with special representations. 
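To make the gap concrete, here is a small Monte Carlo experiment (ours, not part of the note): for the maximum of $n$ standard Gaussian variables the Lipschitz-based bound is $1$ for every $n$, while the empirical variance visibly decays with $n$.
\begin{verbatim}
# Monte Carlo illustration: the bound Var <= L^2 = 1 is very loose
# for Z = max(X_1, ..., X_n) with X_i standard Gaussian.
import numpy as np

rng = np.random.default_rng(0)
reps = 10000
for n in (10, 100, 1000):
    X = rng.standard_normal((reps, n))
    print(f"n = {n:4d}   empirical Var(X_(1)) = {X.max(axis=1).var():.3f}   Lipschitz bound = 1")
\end{verbatim}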
Our purpose is to show that the tools and methods used to investigate the concentration of measure phenomenon are relevant to the analysis of order statistics. When properly combined with R\'enyi's representation for order statistics (see Theorem \ref{thm:renyi}) the so-called entropy method developed and popularized by Ledoux \cite{ledoux:2001} allows to recover sharp variance and tail bounds. Proofs are elementary and parallel the approach followed in \cite{Led03} in a much more sophisticated setting: whereas Ledoux builds on the determinantal structure of the joint density of the eigenvalues of random matrices from the \textsf{GUE} to upper bound tail bounds by sums of Gaussian integrals that can be handled by concentration/hypercontractivity arguments, in the sequel, we build on R\'enyi's representation of order statistics: $X_{(1)},\ldots,X_{(n)}$ can be represented as the image of the order statistics of a sample of the exponential distribution by a monotone function. The order statistics of an exponential sample turn out to be represented as partial sums of independent random variables. In Section~\ref{sec:gener-purp-ineq}, using Efron-Stein inequalities and modified logarithmic Sobolev inequalities, we derive simple relations between the variance or the entropy of order statistics $X_{(k)}$ and moments of spacings $\Delta_k=X_{(k)}-X_{(k+1)}$. When the sampling distribution has non-decreasing hazard rate (a condition that is satisfied by Gaussian, exponential, Gumbel, logistic distributions, etc, see \ref{dfn:hazard} for a definition), we are able to build on the connection between the fluctuations of order statistics $X_{(k)}$ and spacings. Combining Proposition \ref{thm:maud:1} and R\'enyi's representation for order statistics, we connect the variance and the logarithmic moment generating function of $X_{(k)}$ with moments of spacings, Theorem \ref{prp:var:hazard:dec} may be considered as an exponential Efron-Stein inequality for order statistics. In the framework of \textsf{EVT}, those relations are checked to be asymptotically tight (see Section \ref{sec:assessment}). In Section \ref{sec:gaussian-samples}, using explicit bounds on the Gaussian hazard rate, we derive Bernstein-like inequalities for the maximum and the median of a sample of independent Gaussian random variables with a correct variance and scale factors (Proposition~\ref{prp:order-stat-gauss}). We provide non-asymptotic variance bounds for order statistics of Gaussian samples with the right order of magnitude in Propositions \ref{prp:var:gaussian}, and \ref{prp:cheap:tight}. \section{Order statistics and spacings} \label{sec:gener-purp-ineq} Efron-Stein inequalities (\cite{efron:stein:1981}) allow us to derive upper bounds on the variance of functions of independent random variables. \begin{thm}\textsf{(Efron-Stein inequalities.)} Let $f \colon \mathbb{R}^n \rightarrow \mathbb{R}$ be measurable, and let $Z=f(X_1,\ldots,X_n)$. Let $Z_i= f_i (X_1,\ldots,X_{i-1},X_{i+1},\ldots, X_n)$ where $f_i \colon \mathbb{R}^{n-1} \rightarrow \mathbb{R}$ is an arbitrary measurable function. Suppose $Z$ is square-integrable. Then \label{thm:ess:1} \begin{displaymath} \operatorname{Var}[Z] \le \sum_{i=1}^n \mathbb{E} \left[ \left( Z- Z_i \right)^2\right] \enspace . \end{displaymath} \end{thm} The quantity $\sum_{i=1}^n (Z-Z_i) ^2$ is called a jackknife estimate of variance. 
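As an illustration (ours), take $Z = X_{(1)}$ and let $Z_i$ be the maximum over the sample with $X_i$ removed: the jackknife quantity then reduces almost surely to the squared top spacing $(X_{(1)}-X_{(2)})^2$, and a quick simulation confirms that its expectation dominates the variance of the maximum.
\begin{verbatim}
# Sketch: Efron-Stein / jackknife bound for the maximum of n Gaussians.
# Removing a non-maximal coordinate leaves Z unchanged, so
# sum_i (Z - Z_i)^2 = (X_(1) - X_(2))^2 almost surely.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 100, 20000
X = rng.standard_normal((reps, n))
top2 = np.sort(X, axis=1)[:, -2:]                  # two largest entries per row
print("Var(X_(1))           ~", X.max(axis=1).var())
print("E[(X_(1) - X_(2))^2] ~", ((top2[:, 1] - top2[:, 0])**2).mean())
\end{verbatim}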
Efron-Stein inequalities form a special case of a more general collection of inequalities that encompasses the so-called modified logarithmic Sobolev inequalities \cite{ledoux:2001}. \\ Henceforth, the entropy of a non-negative random variable $X$ is defined by $\operatorname{Ent}[X]= \mathbb{E} [X \log X] - \mathbb{E} X \log \mathbb{E} X$. The next inequality from \cite{massart:2000} has been used to derive a variety of concentration inequalities \cite{boucheron:lugosi:massart:2003}. \begin{thm}\textsf{(Modified logarithmic Sobolev inequality.)} \label{thm:modlogsob} Let $\tau(x)=e^x-x-1$. Then for any $\lambda \in \mathbb{R}$, \begin{displaymath} \operatorname{Ent}\left[e^{\lambda Z}\right] = \lambda \mathbb{E} \left[ Ze^{\lambda Z}\right] - \mathbb{E} \left[ e^{\lambda Z}\right] \log \mathbb{E} \left[ e^{\lambda Z} \right] \le \sum_{i=1}^n \mathbb{E} \left[ e^{\lambda Z} \tau \left( -\lambda (Z-Z_i) \right) \right] \enspace . \end{displaymath} \end{thm} Theorems \ref{thm:ess:1} and \ref{thm:modlogsob} provide a transparent connexion between moments of order statistics and moments of spacings. Henceforth, let $\psi\colon \mathbb{R}\rightarrow\mathbb{R}_+$ be defined by $\psi(x)=e^x\tau(-x)=1 + (x-1)e^x$. \begin{prp}\textsf{(Order statistics and spacings.)} \label{thm:maud:1} For all $1\le k \le n/2$, \begin{displaymath} \operatorname{Var} [X_{(k)}] \leq k \ensuremath{\mathbb{E}} \left[ (X_{(k)}-X_{(k+1)})^2\right] \, \end{displaymath} and for all $\lambda \in \mathbb{R}$, \begin{displaymath} \operatorname{Ent} \left[e^{\lambda X_{(k)} }\right] \le k \mathbb{E} \left[e^{\lambda X_{(k+1)}} \psi( \lambda (X_{(k)} - X_{(k+1)})) \right]\enspace . \end{displaymath} For all $n/2 < k \le n$, \begin{displaymath} \operatorname{Var} [X_{(k)}] \leq (n-k+1) \ensuremath{\mathbb{E}} \left[ (X_{(k-1)}-X_{(k)})^2\right] \, \end{displaymath} and for all $\lambda \in \mathbb{R}$, \begin{displaymath} \operatorname{Ent} \left[e^{\lambda X_{(k)} }\right] \le (n-k+1) \mathbb{E} \left[e^{\lambda X_{(k)}} \tau( \lambda (X_{(k-1)} - X_{(k)})) \right]\enspace . \end{displaymath} \end{prp} \begin{proof} Let $Z=X_{(k)}$ and for $k\leq n/2$ define $Z_i$ as the rank $k$ statistic from subsample $X_1,\dots,X_{i-1},X_{i+1},\ldots, X_{n}$, that is $Z_i = X_{(k+1)}$ if $X_i\geq X_{(k)}$ and $Z_i=Z$ otherwise. Apply Theorem \ref{thm:ess:1}. For $k > n/2$, define $Z_i$ as the rank $k-1$ statistic from $X_1, \ldots, X_{i-1}, X_{i+1},\ldots, X_n$, that is $Z_i = X_{(k-1)}$ if $X_i\leq X_{(k)}$ and $Z_i=Z$ otherwise. Apply Theorem \ref{thm:ess:1} again. For $k\leq n/2 $, define $Z$ and $Z_i$ as before, apply Theorem \ref{thm:modlogsob}: \begin{eqnarray*} \operatorname{Ent} \left[ e^{\lambda X_{(k)}}\right] &\le & k \mathbb{E} \left[ e^{\lambda X_{(k)} }\tau \left( - \lambda ( X_{(k)} -X_{(k+1)} ) \right) \right] \\ & = & k \mathbb{E} \left[ e^{\lambda X_{(k+1)} }e^{\lambda (X_{(k)}-X_{(k+1)}) }\tau \left( - \lambda ( X_{(k)} -X_{(k+1)} ) \right) \right] \\ &=& k \mathbb{E} \left[e^{\lambda X_{(k+1)}} \psi( \lambda (X_{(k)} - X_{(k+1)})) \right] \enspace . \end{eqnarray*} The proof of the last statement proceeds by the same argument. 
\end{proof} Proposition \ref{thm:maud:1} can be fruitfully complemented by R\'enyi's representation of order statistics (See \cite{dHaFei06} and references therein). In the sequel, if $f$ is a monotone function from $(a,b)$ (where $a$ and $b$ may be infinite) to $(c,d)$, its generalized inverse $f^\leftarrow : (c,d) \rightarrow (a,b)$ is defined by $f^\leftarrow(y) = \inf\{ x : a<x< b, f(x)\geq y \}\, $ (See \cite{dHaFei06} for properties of this transformation). \begin{dfn} \label{dfn:U} The $U$-transform of a distribution function $F$ is defined as a function on $(1,\infty)$ by $U=(1/(1-F))^{\leftarrow}$, \begin{math} U(t) = \inf \{ x~:~F(x)\geq 1-1/t \}= F^\leftarrow(1-1/t)\enspace . \end{math} \end{dfn} R\'enyi's representation asserts that the order statistics of a sample of independent exponentially distributed random variables are distributed as partial sums of independent exponentially distributed random variables. \begin{thm}\textsf{(R\'enyi's representation)} \label{thm:renyi} Let $X_{(1)}\geq\ldots\geq X_{(n)}$ be the order statistics of a sample from distribution $F$, let $U= (1/(1-{F}))^\leftarrow,$ let $Y_{(1)} \geq Y_{(2)}\geq\ldots \geq Y_{(n)}$ be the order statistics of an independent sample of the exponential distribution, then \begin{displaymath} \left( Y_{(n)}, \ldots, Y_{(i)},\ldots, Y_{(1)} \right) \sim \big( \tfrac{E_n}{n},\ldots, \sum_{k=i}^n \tfrac{E_{k}}{k},\ldots, \sum_{k=1}^n \tfrac{E_{k}}{k}\big) \end{displaymath} where $E_1,\ldots,E_n$ are independent and identically distributed standard exponential random variables, and \begin{displaymath} (X_{(n)}, \ldots, X_{(1)} ) \sim \big(U\circ \exp(Y_{(n)}), \ldots , U\circ \exp(Y_{(1)}) \big) \, . \end{displaymath} \end{thm} We may readily test the tightness of Proposition \ref{thm:maud:1}. By Theorem \ref{thm:renyi}, $Y_{(k)} = \frac{E_n}{n} +\ldots + \frac{E_k}{k}$ and \begin{math} \operatorname{Var}[Y_{(k)}] = \sum_{i=k}^n \tfrac{1}{i^2}\enspace . \end{math} Hence, for any sequence $(k_n)_n$ with $\lim_n k_n= \infty$ and $\lim_n k_n/n= 0$, \begin{math} \lim_{n\rightarrow \infty}k_n \operatorname{Var}[Y_{(k_n)}]= 1 , \end{math} while by Proposition \ref{thm:maud:1}, \begin{math} \operatorname{Var}[Y_{(k)}] \le k \mathbb{E} \big[ \big( {E_k}/{k} \big)^2 \big] = \tfrac{2}{k}. \end{math} The next condition makes combining Propositions \ref{thm:maud:1} and Theorem \ref{thm:renyi} easy. \begin{dfn}\textsf{(Hazard rate.)} \label{dfn:hazard} The hazard rate of an absolutely continuous probability distribution with distribution function $F$ is: \begin{math} h = {f}/{\overline{F}} \end{math} where $f$ and $\overline{F}=1-F$ are respectively the density and the survival function associated with $F$. \end{dfn} From elementary calculus, we get $(U\circ \exp)'= 1/h(U\circ\exp)$ where $U=(1/(1-F))^\leftarrow$, which translates into \begin{prp} \label{prp:incr:hazard} Let $F$ be an absolutely continuous distribution function with hazard rate $h$, let $U=(1/(1-F))^\leftarrow$. Then, $h$ is non-decreasing if and only if $U \circ \exp$ is concave. \end{prp} Observe that if the hazard rate $h$ is non-decreasing, then for all $t>0$ and $x>0$, \begin{math} U \left( \exp({t+x}) \right) - U \left( \exp(t) \right) \le {x}/{h(U(\exp(t)))} \enspace . \end{math} Moreover, a non-decreasing hazard rate warrants negative association between spacings and the related order statistics, as stated in Proposition~\ref{prp:hazard:rate:neg:assoc} below.
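Before proceeding, here is a quick numerical check (ours, not from the note) of R\'enyi's representation and of the variance identity used above.
\begin{verbatim}
# Sketch: Renyi's representation for exponential order statistics.
# Y_(k) has the law of sum_{i=k}^n E_i/i, hence Var[Y_(k)] = sum_{i=k}^n 1/i^2.
import numpy as np

rng = np.random.default_rng(2)
n, k, reps = 50, 5, 100000
kth_largest = np.sort(rng.exponential(size=(reps, n)), axis=1)[:, n - k]
partial_sums = (rng.exponential(size=(reps, n - k + 1)) / np.arange(k, n + 1)).sum(axis=1)
print("Var of k-th largest order statistic ~", kth_largest.var())
print("Var of Renyi partial sum            ~", partial_sums.var())
print("exact sum_{i=k}^{n} 1/i^2            =", (1.0 / np.arange(k, n + 1)**2).sum())
\end{verbatim}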
\begin{prp} \label{prp:hazard:rate:neg:assoc} If $F$ has non-decreasing hazard rate, then the $k^{\text{th}}$ spacing $\Delta_k =X_{(k)}-X_{(k+1)}$ and $X_{(k+1)}$ are negatively associated: for any pair of non-decreasing functions $g_1$ and $g_2$, \begin{displaymath} \mathbb{E}[g_1(X_{(k+1)})g_2(\Delta_k)] \le \mathbb{E}[g_1(X_{(k+1)})] \mathbb{E}[g_2(\Delta_k)] \enspace . \end{displaymath} \end{prp} \begin{proof} Let $Y_{(n)},\ldots,Y_{(1)}$ be the order statistics of an exponential sample. Let $E_k=Y_{(k)}-Y_{(k+1)}$ be the $k^{\text{th}}$ spacing of the exponential sample. By Theorem \ref{thm:renyi}, $E_k$ and $Y_{(k+1)}$ are independent. Let $g_1$ and $g_2$ be two non-decreasing functions. By Theorem \ref{thm:renyi}, \begin{eqnarray*} \mathbb{E} [g_1(X_{(k+1)}) g_2(\Delta_k) ] &= &\mathbb{E}[ g_1 (U (e^{Y_{(k+1)}})) g_2 ( U(e^{E_k+Y_{(k+1)}}) - U(e^{Y_{(k+1)}}))] \\ &= & \mathbb{E}\left[ \mathbb{E}\left[ g_1 (U (e^{Y_{(k+1)}})) g_2 ( U(e^{E_k+Y_{(k+1)}}) - U(e^{Y_{(k+1)}}))\mid Y_{(k+1)} \right] \right] \\ & = & \mathbb{E}\left[ g_1 (U (e^{Y_{(k+1)}})) \mathbb{E}\left[ g_2 ( U(e^{E_k+Y_{(k+1)}}) - U(e^{Y_{(k+1)}}))\mid Y_{(k+1)} \right] \right] \enspace . \end{eqnarray*} The function $ g_1 \circ U \circ \exp$ is non-decreasing. Almost surely, as the conditional distribution of $k E_k$ with respect to $Y_{(k+1)}$ is the exponential distribution, \begin{eqnarray*} \mathbb{E}\left[ g_2 ( U(e^{E_k+Y_{(k+1)}}) - U(e^{Y_{(k+1)}}))\mid Y_{(k+1)} \right] & = & \int_0^\infty e^{-x} g_2 (U(e^{\frac{x}{k}+Y_{(k+1)}}) - U(e^{Y_{(k+1)}})) \mathrm{d}x\enspace . \end{eqnarray*} As $F$ has non-decreasing hazard rate, $ U(\exp({x/k+y})) - U(\exp(y))=\int_0^{x/k} (U\circ\exp)'(y+z) \mathrm{d}z$ is non-increasing with respect to $y$. \\ This entails that $\mathbb{E}\left[ g_2 ( U(e^{E_k+Y_{(k+1)}}) - U(e^{Y_{(k+1)}}))\mid Y_{(k+1)} \right]$ is a non-increasing function of $Y_{(k+1)}$. Hence, by Chebyshev's association inequality, \begin{eqnarray*} \lefteqn{\mathbb{E} [g_1(X_{(k+1)}) g_2(\Delta_k) ] } \\ & \leq & \mathbb{E}\left[ g_1 (U (e^{Y_{(k+1)}})) \right]\times \mathbb{E} \left[ \mathbb{E}\left[g_2 ( U(e^{E_k+Y_{(k+1)}}) - U(e^{Y_{(k+1)}}))\mid Y_{(k+1)}\right] \right] \\ & = & \mathbb{E}\left[ g_1 (U (e^{Y_{(k+1)}}))\right] \times \mathbb{E}\left[g_2 ( U(e^{E_k+Y_{(k+1)}}) - U(e^{Y_{(k+1)}}))\right]\\ & = & \mathbb{E}\left[ g_1 (X_{(k+1)})\right] \times \mathbb{E}\left[g_2 (\Delta_k)\right] \enspace . \end{eqnarray*} \end{proof} Negative association between order statistics and spacings allows us to establish our main result. \begin{thm} \label{prp:var:hazard:dec} Let $X_1,\ldots,X_n$ be independently distributed according to $F$, let $X_{(1)}\geq \ldots\geq X_{(n)}$ be the order statistics and let $\Delta_k =X_{(k)}-X_{(k+1)}$ be the $k^{\text{th}}$ spacing. Let $V_k= k \Delta_k^2$ denote the Efron-Stein estimate of the variance of $X_{(k)}$ (for $k=1,\ldots,n/2$). If $F$ has non-decreasing hazard rate $h$, then for $1 \leq k\leq n/2$, \begin{displaymath} \operatorname{Var}\big[X_{(k)} \big] \leq \mathbb{E} V_k \leq \frac{2}{k}\ensuremath{\mathbb{E}} \left[ \big( \tfrac{1}{h(X_{(k+1)})}\big)^2\right] \, , \end{displaymath} while for $k > n/2$, \begin{displaymath} \operatorname{Var}\big[X_{(k)} \big] \leq \frac{2(n-k+1)}{(k-1)^2}\ensuremath{\mathbb{E}} \left[ \big( \tfrac{1}{h(X_{(k)})}\big)^2\right] \, . 
\end{displaymath} For $\lambda\geq 0$, and $1\leq k \leq n/2, $ \begin{equation}\label{eq:1} \log \mathbb{E} e^{\lambda(X_{(k)} -\mathbb{E} X_{(k)} ) } \leq \lambda \frac{k}{2} \mathbb{E} \left[\Delta_k \left(e^{\lambda\Delta_k}-1\right) \right] = \lambda \frac{k}{2} \mathbb{E} \left[\sqrt{\frac{V_k}{k}} \left(e^{\lambda\sqrt{V_k/k}}-1\right) \right] \, . \end{equation} \end{thm} Inequality~\eqref{eq:1} may be considered as an exponential Efron-Stein inequality for order-statistics: it connects the logarithmic moment generating function of the $k^{\text{th}}$ order statistic with the exponential moments of the square root of the Efron-Stein estimate of variance $k\Delta_k^2$. This connection provides correct bounds for exponential distribution whereas the exponential Efron-Stein inequality described in \cite{boucheron:lugosi:massart:2003} does not. This comes from the fact that negative association between spacing and order statistics leads to an easy decoupling argument, there is no need to resort to the variational representation of entropy as in \cite{boucheron:lugosi:massart:2003}. It is then possible to carry out the so-called Herbst's argument in an effortless way. \begin{proof}Throughout the proof $Y_{(k)}, Y_{(k+1)}$ denote the $k^{\text{th}}$ and $k+1^{\text{th}}$ order statistics of an exponential sample of size $n.$ By Proposition~\ref{thm:maud:1}, using R\'enyi's representation (Theorem~\ref{thm:renyi}), and Proposition~\ref{prp:incr:hazard}, for $k\leq n/2$, \begin{eqnarray*} \operatorname{Var}[ X_{(k)}] &\le& k \mathbb{E} \left[ \left(U\left( e^{Y_{(k+1)}}e^{Y_{(k)}-Y_{(k+1)}} \right) - U \left( e^{Y_{(k+1)}} \right)\right)^2\right] \\ & \le& k\mathbb{E} \left[ \left( e^{Y_{(k+1)}} U' \left( e^{Y_{(k+1)}} \right) \right)^2 \left( Y_{(k)} -Y_{(k+1)} \right)^2\right]\\ & \leq & \frac{2}{k} \ensuremath{\mathbb{E}} \left[ \Big( \tfrac{1}{h(X_{(k+1)})}\Big)^2\right] \enspace , \end{eqnarray*} as by Theorem \ref{thm:renyi}, \begin{math} Y_{(k)}-Y_{(k+1)} \end{math} is independent of $Y_{(k+1)}$ and exponentially distributed with scale parameter $1/k$. By Proposition \ref{thm:maud:1} and~\ref{prp:incr:hazard}, as $\psi$ is non-decreasing over $\mathbb{R}_+$, \begin{eqnarray*} \operatorname{Ent} \left[e^{\lambda X_{(k)} }\right] &\le & k \mathbb{E} \left[e^{\lambda X_{(k+1)}} \psi( \lambda \Delta_k) \right]\\ & \leq & k \mathbb{E} \left[ e^{\lambda X_{(k+1)}} \right] \times \mathbb{E} \left[ \psi( \lambda \Delta_k) \right] \\ & \leq & k \mathbb{E} \left[ e^{\lambda X_{(k)}} \right] \times \mathbb{E} \left[ \psi( \lambda \Delta_k) \right] \, . \end{eqnarray*} Multiplying both sides by $\exp(-\lambda \mathbb{E} X_{(k)})$, \begin{eqnarray*} \operatorname{Ent} \left[e^{\lambda (X_{(k)}-\mathbb{E} X_{(k)}) }\right] & \leq & k \mathbb{E} \left[ e^{\lambda (X_{(k)}-\mathbb{E} X_{(k)})} \right] \times \mathbb{E} \left[ \psi( \lambda \Delta_k) \right] \enspace . \end{eqnarray*} Let $G(\lambda)= \mathbb{E} e^{\lambda \Delta_k}$. Obviously, $G(0)=1$, and as $\Delta_k\geq 0$, $G$ and its derivatives are increasing on $[0,\infty)$, \begin{displaymath} \mathbb{E} \left[ \psi( \lambda \Delta_k) \right] = 1 - G(\lambda) + \lambda G'(\lambda) = \int_0^\lambda sG^{\prime\prime}(s)\mathrm{d}s \leq G^{\prime\prime}(\lambda) \frac{\lambda^2}{2} \, . 
\end{displaymath} Hence, for $\lambda\geq 0,$ \begin{displaymath} \frac{\operatorname{Ent} \left[e^{\lambda (X_{(k)}-\mathbb{E} X_{(k)}) }\right] }{\lambda^2 \mathbb{E} \left[ e^{\lambda (X_{(k)}-\mathbb{E} X_{(k)})} \right]} =\frac{\mathrm{d}\frac{1}{\lambda} \log \mathbb{E} e^{\lambda(X_{(k)} -\mathbb{E} X_{(k)} ) } }{\mathrm{d}\lambda} \leq \frac{k}{2}\frac{\mathrm{d} G'}{\mathrm{d}\lambda} \, . \end{displaymath} Integrating both sides, using the fact that $\lim_{\lambda \rightarrow 0} \frac{1}{\lambda} \log \mathbb{E} e^{\lambda(X_{(k)} -\mathbb{E} X_{(k)} ) } =0, $ \begin{displaymath} \frac{1}{\lambda} \log \mathbb{E} e^{\lambda(X_{(k)} -\mathbb{E} X_{(k)} ) } \leq \frac{k}{2} (G'(\lambda)-G'(0))= \frac{k}{2} \mathbb{E} \left[\Delta_k \left(e^{\lambda\Delta_k}-1\right) \right] \, . \end{displaymath} \end{proof} \section{Asymptotic assessment} \label{sec:assessment} Assessing the quality of the variance bounds from Proposition \ref{thm:maud:1} in full generality is not easy. However, Extreme Value Theory (\textsf{EVT}) describes a framework where the Efron-Stein estimates of variance are asymptotically of the right order of magnitude. \begin{dfn} The distribution function $F$ belongs to a maximum domain of attraction with tail index $\gamma \in \mathbb{R}$ ($F\in \textsf{MDA}(\gamma)$), if and only if there exists a non-negative auxiliary function $a$ on $[1,\infty)$ such that for $x\in [0,\infty)$ (if $\gamma>0$), $x \in [0,-1/\gamma)$ (if $\gamma<0$), and $x\in \mathbb{R}$ (if $\gamma=0$), $$\lim_n \mathbb{P} \left\{ \frac{\max(X_1,\ldots,X_n)-F^\leftarrow(1-1/n)}{a(n)} \leq x \right\} = \exp(-(1+\gamma x)^{-1/\gamma}) \, . $$ If $\gamma=0,$ $(1+\gamma x)^{-1/\gamma}$ should be read as $\exp(-x).$ \end{dfn} If $F \in \mathsf{MDA}(\gamma)$ and has finite variance ($\gamma<1/2$), the variance of $(\max(X_1,\ldots,X_n)-F^\leftarrow(1-1/n))/a(n)$ converges to the variance of the limiting extreme value distribution~\cite{dHaFei06}. Membership in a maximum domain of attraction is characterized by the \emph{extended regular variation} property of $U= (1/(1-F))^{\leftarrow}$: $F\in \mathsf{MDA}(\gamma)$ with auxiliary function $a$ iff for all $x>0$ \begin{displaymath} \lim_{t\rightarrow \infty} \frac{U(tx)-U(t)}{a(t)} = \frac{x^\gamma-1}{\gamma} \, , \end{displaymath} where the right-hand-side should be read as $\log x$ when $\gamma=0$ \cite{dHaFei06}. Using Theorem 2.1.1 and Theorem 5.3.1 from \cite{dHaFei06}, and performing simple calculus, we readily obtain \begin{prp}\label{prop:assessment:var:bound} Assume $X_{(1)}\geq \ldots \geq X_{(n)}$ are the order statistics of an independent sample distributed according to $F$, where $F\in \textsf{MDA}(\gamma)$, $\gamma < 1/2$, with auxiliary function~$a$. Then \begin{displaymath} \lim_n \tfrac{\mathbb{E}\left[\left( X_{(1)} -X_{(2)}\right)^2\right]}{a(n)^2} = \tfrac{2\Gamma(2(1-\gamma))}{(1-\gamma)(1-2\gamma)} \quad \mathrm{ while }\quad \lim_n \tfrac{\operatorname{Var}\left( X_{(1)} \right)}{a(n)^2} = \frac{1}{\gamma^2} \left( \Gamma(1-2\gamma) -\Gamma(1-\gamma)^2\right) . \end{displaymath} For $\gamma=0$, the last expression should be read as $\pi^2/6.$ \end{prp} The asymptotic ratio between the Efron-Stein upper bound and the variance of $X_{(1)}$ converges toward a limit that depends only on $\gamma$ (for $\gamma=0$ this limit is $12/\pi^2 \approx 1.21$).
When the tail index $\gamma<0$, the asymptotic ratio degrades as $\gamma \rightarrow -\infty,$ it scales like~$-4\gamma.$ \section{Order statistics of Gaussian samples} \label{sec:gaussian-samples} We now turn to the Gaussian setting. We will establish Bernstein inequalities for order statistics of absolute values of independent Gaussian random variables. A real-valued random variable $X$ is said to be {\sl sub-gamma on the right tail with variance factor $v$ and scale parameter $c$} if \[ \log \mathbb{E} e^{\lambda (X-\mathbb{E} X)}\leq\frac{\lambda^2v}{2(1-c\lambda) } \text{ for every }\lambda\quad \mbox{such that} \quad 0<\lambda<1/c~. \] Such a random variable satisfies a so-called Bernstein-inequality: for $t>0,$\\ \begin{math} \mathbb{P} \left\{ X\geq \mathbb{E} X +\sqrt{2vt } + ct\right\}\leq \exp\left( -t\right) . \end{math} A real-valued random variable $X$ is said to be {\sl sub-gamma on the left tail with variance factor $v$ and scale parameter $c$}, if $-X$ is sub-gamma on the right tail with variance factor $v$ and scale parameter $c$. A Gamma random variable with shape parameter $p$ and scale parameter $c$ (expectation $pc$ and variance $pc^2$) is sub-gamma on the right tail with variance factor $pc^2$ and scale factor $c$ while it is sub-gamma on the left-tail with variance factor $pc^2$ and scale factor $0$. The Gumbel distribution (with distribution function $\exp(-\exp(-x))$ is sub-gamma on the right-tail with variance factor $\pi^2/6$ and scale factor $1$, it is sub-gamma on the left-tail with scale factor $0$ (note that this statement is not sharp, see Lemma \ref{lem:stupid} below). Order statistics of Gaussian samples provide an interesting playground for assessing Theorem~\ref{prp:var:hazard:dec}. Let $\Phi$ and $\phi$ denote respectively the standard Gaussian distribution function and density. Throughout this section, let $\widetilde{U}\colon ]1,\infty) \rightarrow [0,\infty) $ be defined by \begin{math} \widetilde{U}(t) = \Phi^\leftarrow(1-1/(2t)) \, , \end{math} $\widetilde{U}(t)$ is the $1-1/t$ quantile of the distribution of the absolute value of a standard Gaussian random variable, or the $1-1/(2t)$ quantile of the Gaussian distribution. \begin{prp} Absolute values of Gaussians have non-decreasing hazard rate : \label{prp:absgaussian}\\ i) $\widetilde{U} \circ \exp$ is concave; \\ ii) For $y>0$, \begin{math} \phi(\widetilde{U}(\exp(y)))/\overline{\Phi}(\widetilde{U}(\exp(y))) \ge {\sqrt{\kappa_1(y + \log 2)}} \end{math} where $\kappa_1\geq 1/2.$\\ iii) For $t\geq 3,$ \begin{displaymath} \sqrt{2 \log(2t)-\log\log(2t) -\log (4\pi) }\leq \widetilde{U}(t) \leq\sqrt{2 \log(2t)-\log\log(2t) -\log \pi } \enspace . \end{displaymath} \end{prp} \begin{proof} i) As $(\widetilde{U}\circ\exp)'(t)=\overline{\Phi}(\widetilde{U}(e^t))/\phi(\widetilde{U}(e^t)) $ it suffices to check that the standard Gaussian distribution has non-decreasing hazard rate on $[0,\infty).$ Let $h=\phi/\overline{\Phi}$, by elementary calculus, for $x>0$, \begin{math} h'(x) = \left( \phi(x) - x\overline{\Phi}(x) \right) {\phi(x)}/{\overline{\Phi}^2(x)} \geq 0 \end{math} where the last inequality is a well known fact. ii) For $\kappa_1=1/2$, for $p\in (0,1/2]$, the fact that \begin{math} p\sqrt{\kappa_1\log 1/p} \leq \phi\circ \Phi^{\leftarrow}(p) \end{math} follows from $\phi(x) - x\overline{\Phi}(x) \geq 0$ for $x>0$. 
Hence, \begin{displaymath} \frac{\overline{\Phi} ({\Phi}^\leftarrow(1-e^{-y}/2))}{\phi({\Phi}^\leftarrow(1-e^{-y}/2))} = \frac{e^{-y}/2}{\phi ( \Phi^{\leftarrow}(e^{-y}/2))} \le \frac{1}{ \sqrt{\kappa_1(\log 2 + y)}} \enspace . \end{displaymath} iii) The first inequality can be deduced from $\phi\circ \Phi^{\leftarrow}(p) \leq p\sqrt{2\log 1/p} \, , $ for $p\in (0,1/2)$ \cite{tillich2001dii}, the second from \begin{math} p\sqrt{\kappa_1\log 1/p} \leq \phi\circ \Phi^{\leftarrow}(p) \enspace . \end{math} \end{proof} The next proposition shows that when used in a proper way, Efron-Stein inequalities may provide seamless bounds on extreme, intermediate and central order statistics of Gaussian samples. \begin{prp \label{prp:var:gaussian} Let $n\geq 3$, let $X_{(1)}\geq \ldots \geq X_{(n)}$ be the order statistics of absolute values of a standard Gaussian sample, \begin{displaymath} \text{For $1 \leq k\leq n/2$,}\quad \operatorname{Var} [X_{(k)}] \le \frac{1}{k\log 2} \frac{8}{\log\tfrac{2n}{k} -\log (1+ \tfrac{4}{k} \log \log \tfrac{2n}{k})} \enspace . \end{displaymath} \end{prp} By Theorem 5.3.1 from \cite{dHaFei06}, $\lim_n 2\log n \operatorname{Var}[X_{(1)}]=\pi^2/6,$ while the above described upper bound on $\operatorname{Var}[X_{(1)}]$ is equivalent to $(8/\log 2)/\log n $. If $\lim_n k_n =\infty$ while $\lim_n k_n/n=0$, Smirnov's lemma \cite{dHaFei06} implies that $\lim_n k(\widetilde{U}(n/k))^2 \operatorname{Var}[X_{(k)}]=1.$ For the asymptotically normal median of absolute values, $\lim_n (4 \phi(\widetilde{U}(2))^2n)\operatorname{Var} [X_{(n/2)}]=1$ \cite{vandervaart:1998}. Again, the bound in Proposition~\ref{prp:var:gaussian} has the correct order of magnitude. \begin{lem}\label{lem:stupid} Let $Y_{(k)}$ be the $k^{\text{th}}$ order statistics of a sample of $n$ independent exponential random variables, let $\log 2<z< \log (n/k),$ then \begin{displaymath} \mathbb{P}\left\{ Y_{(k+1)} \leq \log(n/k)-z \right\} \leq \exp\left( - \tfrac{k(e^z-1)}{4 }\right)\enspace . \end{displaymath} \end{lem} \begin{proof} \begin{eqnarray*} \mathbb{P}\left\{ Y_{(k+1)} \leq \log(n/k)-z \right\} &= & \sum_{j=0}^k \binom{n}{j} \left(1-\frac{ke^{z}}{n}\right)^{n-j} \left(\frac{ke^{z}}{n} \right)^j \\ & \leq & \exp\left(-\frac{k (e^{z}-1)^2}{2 e^{z} }\right) \end{eqnarray*} since the right-hand-side of the first line is the probability that a binomial random variable with parameters $n$ and $\frac{ke^{z}}{n}$ is less than $k$, which is sub-gamma on the left-tail with variance factor less than $ke^z$ and scale factor $0$. 
\end{proof} \begin{proof} By Propositions \ref{prp:var:hazard:dec} and \ref{prp:absgaussian}, letting $\kappa_1=1/2$ \begin{eqnarray*} \operatorname{Var} \left( X_{(k)} \right)& \le& \frac{2}{k}\mathbb{E} \left[\frac{2}{ \log 2 + Y_{(k+1)}}\right] \\ &=& \frac{1}{\log2}\frac{4}{k} \mathbb{P} \Big \{ Y_{(k+1)} \le \log(n/k) -z \Big \} + \frac{4}{k} \frac{1}{\log \frac{n}{k}- z+ \log 2} \\ & \leq & \frac{4}{k \log 2 } \frac{1}{ \log\tfrac{2n}{k}} + \frac{4}{k} \frac{1}{\log\tfrac{2n}{k} -\log (1+ \tfrac{4}{k} \log \log \tfrac{2n}{k})} \, , \end{eqnarray*} where we used Lemma \ref{lem:stupid} with $z= \log (1+\frac{4}{k}\log \log \frac{2n}{k}).$ \end{proof} Our next goal is to establish that the order statistics of absolute values of independent Gaussian random variables are sub-gamma on the right-tail with variance factor close to the Efron-Stein estimates of variance derived in Proposition~\ref{prp:absgaussian} and scale factor not larger than the square root of the Efron-Stein estimate of variance. Before describing the consequences of Theorem \ref{prp:var:hazard:dec}, it is interesting to look at what can be obtained from R\'enyi's representation and exponential inequalities for sums of Gamma-distributed random variables. \begin{prp}\label{prp:cheap:tight} Let $X_{(1)}$ be the maximum of the absolute values of $n$ independent standard Gaussian random variables, and let $\widetilde{U}(s)=\Phi^\leftarrow(1-1/(2s))$ for $s\geq 1.$ For $t>0$, \begin{displaymath} \mathbb{P}\left\{ X_{(1)} -\ensuremath{\mathbb{E}} X_{(1)} \geq t/(3\widetilde{U}(n)) +\sqrt{t}/\widetilde{U}(n) +\delta_n\right\} \leq \exp\left( -t\right) \, , \end{displaymath} where $\delta_n>0$ and \begin{math} \lim_n (\widetilde{U}(n))^{3}\delta_n = \tfrac{\pi^2}{12}\enspace . \end{math} \end{prp} This inequality looks like what we are looking for: ${\widetilde{U}(n)}(X_{(1)} -\ensuremath{\mathbb{E}} X_{(1)}) $ converges in distribution, but also in quadratic mean, or even according to the Orlicz norm defined by $x\mapsto \exp(|x|)-1$, toward a centered Gumbel distribution. The centered Gumbel distribution is sub-gamma on the right tail with variance factor $\pi^2/6$ and scale factor $1$, we expect $X_{(1)}$ to satisfy a Bernstein inequality with variance factor of order $1/\widetilde{U}(n)^2$ and scale factor $1/\widetilde{U}(n)$. Up to the shift $\delta_n$, this is the content of the proposition. Note that the shift is asymptotically negligible with respect to the typical order of magnitude of the fluctuations. The constants in the next proposition are not sharp enough to make the next proposition competitive with Proposition~\ref{prp:cheap:tight}. Nevertheless it illustrates that Proposition~\ref{prp:var:hazard:dec} captures the correct order of growth for the right-tail of Gaussian maxima. \begin{prp} \label{prp:deviation:max:gaussian} For $n$ such that the solution $v_n$ of equation $16/x+\log(1+2/x+4\log(4/x))=\log (2n)$ is smaller than 1, for all $0 \le \lambda <\frac{1}{\sqrt{v_n}}$, \begin{eqnarray*} \log \mathbb{E} e^{ \lambda(X_{(1)} - \mathbb{E} X_{(1)})} \le \frac{v_n \lambda ^2 }{2(1-\sqrt{v_n}\lambda) } \enspace . \end{eqnarray*} For all $t >0$, \begin{displaymath} \mathbb{P} \left \{ X_{(1)} - \mathbb{E} X_{(1)} > \sqrt{v_n} (t+\sqrt{2t}) \right \} \leq e^{-t}\enspace . 
\end{displaymath} \end{prp} \begin{proof} By Proposition \ref{prp:var:hazard:dec}, \begin{displaymath} \log \mathbb{E} e^{\lambda( X_{(1)} - \mathbb{E} X_{(1)})} \le \frac{\lambda}{2} \mathbb{E} \left[ \Delta \left( e^{\lambda \Delta}-1 \right) \right] \end{displaymath} where $\Delta = X_{(1)} - X_{(2)} \sim U(2e^{Y_{(2)} +E_1}) - U(2e^{Y_{(2)}})$, with $E_1$ is exponentially distributed and independent of $Y_{(2)}$ which is distributed like the $2^{\text{nd}}$ largest order statistics of an exponential sample.\\ On the one hand, the conditional expectation \begin{displaymath} \mathbb{E} \left[ \left(U(2e^{E_1+ Y_{(2)}}) - U(2e^{Y_{(2)}})\right) \left(e^{\lambda (U(2e^{E_1+Y_{(2)}})-U(2e^{Y_{(2)}}))}-1\right) \vert Y_{(2)} \right] \end{displaymath} is a non-increasing function of $Y_{(2)}$. The maximum is achieved for $Y_{(2)}=0$, and it is equal to : \begin{displaymath} 2 \int_{0}^{\infty} \frac{e^{-x^2/2}}{\sqrt{2 \pi}} x(e^{\lambda x}-1) \mathrm{d}x \le 2 \lambda e^{\frac{\lambda^2}{2}} \enspace . \end{displaymath} On the other hand, by Proposition \ref{prp:absgaussian}, \begin{displaymath} U(2e^{E_1+Y_{(2)}}) - U(2e^{Y_{(2)}}) \le \frac{\sqrt{2} E_1}{ \sqrt{ (\log 2 + Y_{(2)})}} \enspace . \end{displaymath} Meanwhile, for $0 \le \mu < 1/2$, \begin{displaymath} \int_{0}^{\infty} \mu x (e^{\mu x}-1)e^{-x} \mathrm{d}x = \frac{\mu^2(2-\mu)}{(1-\mu)^2} \le \frac{2 \mu^2}{1-2\mu} \enspace . \end{displaymath} Hence, \begin{displaymath} \begin{split} { \lambda \mathbb{E} \left[\left( U(2 e^{E_1+Y_{(2)}})- U(2e^{Y_{(2)}}) \right) \left(e^{\lambda(U(2 e^{E_1+Y_{(2)}})- U(2e^{Y_{(2)}}))}-1\right) \mid Y_{(2)}\right]}\qquad \qquad \qquad\\ \leq \frac{4\lambda^2}{\log 2+Y_{(2)}} \frac{1}{1-\tfrac{2\sqrt{2}\lambda}{\sqrt{\log 2+Y_{(2)}}}}\enspace . \end{split} \end{displaymath} Letting $\tau= \log n - \log (1+2\lambda^2+4\log(4/v_n)), \begin{eqnarray*} \log \mathbb{E} \left[ e^{\lambda(X_{(1)}-\mathbb{E} X_{(1)})} \right] &\le& \underbrace{\lambda^2 e^{\lambda^2/2} \mathbb{P} \left \{ Y_{(2)} \le \tau \right \}}_{:= \textsf{i}} + \underbrace{\frac{4\lambda^2}{ \log 2+\tau} \frac{1}{1-\tfrac{2\sqrt{2}\lambda}{\sqrt{\log 2+\tau}}}}_{:= \textsf{ii}}\, . \end{eqnarray*} By Lemma \ref{lem:stupid}, \begin{math} (\textsf{i}) \leq \frac{v_n\lambda^2}{ 4} \enspace . \end{math}\\ As $\lambda\leq 1/\sqrt{v_n} $, $\log 2 +\tau\geq 16/v_n$ and \begin{math} (\textsf{ii}) \le \frac{v_n \lambda^2}{4(1-\sqrt{v_n} \lambda)}\enspace . \end{math} \end{proof} We may also use Theorem \ref{prp:var:hazard:dec} to provide a Bernstein inequality for the median of absolute values of a Gaussian sample. We assume $n/2$ is an integer. \begin{prp}\label{prp:order-stat-gauss} Let $v_n =8/(n \log 2)$. \\ For all $0 \le \lambda < n/(2 \sqrt{v_n})$, \\ \begin{displaymath} \log \mathbb{E} e^{\lambda(X_{(n/2)} - \mathbb{E} X_{(n/2)})} \le \frac{v_n \lambda^2}{2(1- 2\lambda \sqrt{v_n/n})} \enspace . \end{displaymath} For all $t >0$, \begin{displaymath} \mathbb{P} \left \{ X_{(n/2)} - \mathbb{E} X_{(n/2)} > \sqrt{2 v_n t} + 2 \sqrt{v_n/n}t\right\} \le e^{-t} \enspace . 
\end{displaymath} \end{prp} \begin{proof} By Proposition \ref{prp:var:hazard:dec}, \begin{displaymath} \log \mathbb{E} e^{\lambda(X_{(n/2)} - \mathbb{E} X_{(n/2)})} \le \frac{n}{4} \lambda \mathbb{E} \left[ \Delta_{n/2} \left( e^{\lambda \Delta_{n/2} } -1 \right) \right] \end{displaymath} where $\Delta_{n/2} = X_{(n/2)} - X_{(n/2+1)} \sim U \left(2e^{E/({n/2}) + Y_{(n/2 +1)}} \right) -U \left( e^{\lambda Y_{(n/2+1)}} \right)$ where $E$ is exponentially distributed and independent of $Y_{(n/2+1)}$. By Proposition \ref{prp:absgaussian}, \begin{displaymath} \lambda \Delta_{n/2} \le \frac{\sqrt{2}\lambda E}{(n/2)\sqrt{\log 2 + Y_{(n/2 +1)}}} \le \frac{\sqrt{2}\lambda E}{(n/2) \sqrt{\log2}} = \lambda\sqrt{\frac{v_n}{n}}E \enspace . \end{displaymath} Reasoning as in the proof of Proposition \ref{prp:deviation:max:gaussian}, \begin{displaymath} \log \mathbb{E} e^{\lambda(X_{(n/2)} - \mathbb{E} X_{(n/2)})} \le \frac{v_n \lambda^2}{2(1- {2\lambda\sqrt{v_n/n}})} \enspace . \end{displaymath} \end{proof} As the hazard rate $\phi(x)/\overline{\Phi}(x)$ of the Gaussian distribution tends to $0$ as $x$ tends to $-\infty$, the preceding approach does not work when dealing with order statistics of Gaussian samples. Nevertheless, Proposition \ref{prp:var:gaussian} paves the way to simple bounds on the variance of maxima of Gaussian samples. \begin{prp \label{prp:max:gaussian} Let $X_1,\ldots,X_{n}$ be $n$ independent and identically distributed standard Gaussian random variables, let $X_{(1)} \ge \ldots \ge X_{(n)}$ be the order statistics.\\ For all $n \geq 11$, \begin{eqnarray*} \operatorname{Var}[X_{(1)}] &\le& \frac{8/\log 2}{\log(n/2)-\log (1+4\log\log(n/2))} + 2^{-n}+ \exp(-\tfrac{n}{8})+ \frac{4\pi}{n} \enspace . \end{eqnarray*} \end{prp} \begin{proof}[Proof of Proposition~\ref{prp:max:gaussian}.] We may generate $n$ independent standard Gaussian random variables in two steps: first generate $n$ independent random signs ($\epsilon_1,\ldots, \epsilon_n$: $\mathbb{P}\{ \epsilon_i=1\}=1-\mathbb{P}\{ \epsilon_i=-1\}=1/2$), then generate absolute values ($V_1,\ldots,V_n$), the resulting sample ($X_1, \ldots, X_n$) is obtained as $X_i=\epsilon_i V_i$. Let $N$ be the number of positive random signs. \begin{displaymath} \operatorname{Var}(X_{(k)}) = \underbrace{\mathbb{E} \left[ \operatorname{Var}\big( X_{(k)} \mid \sigma(N) \big)\right]}_{\textsf{i}} + \underbrace{\operatorname{Var}\left( \mathbb{E}\left[ X_{(k)}\mid \sigma({N})\right] \right)}_{\textsf{ii}}\, . \end{displaymath} Conditionally on $N=m$, if $k\leq m$, $ X_{(k)}$ is distributed as the $k^{\text{th}}$ order statistic of a sample of $m$ independent absolute values of Gaussian random variables. If $k>m$, $X_{(k)}$ is negative, its conditional variance is equal to the variance of the statistics of order $n-k+1$ in a sample of size $n-m$. Hence, letting $V_{(k)}^m$ denote the $k^{\text{th}}$ order statistic of a sample of $n$ independent absolute values of Gaussian random variables. For $k=1$, $\textsf{(ii)}\ll\textsf{(i)}$, as \begin{eqnarray*} \textsf{(i)} & = & \sum_{m=1}^n \binom{n}{m} 2^{-n} \operatorname{Var}\big( V^m_{(1)} \big) +\binom{n}{0} 2^{-n} \operatorname{Var}\big( V^{n}_{(n)} \big) \\ &\leq & \sum_{m=1}^{n/4} \binom{n}{m} 2^{-n} + \operatorname{Var}\big( V^{n/4}_{(1)} \big) + 2^{-n} \\ & \leq & \exp(-n/8) +\operatorname{Var}\big( V^{n/4}_{(1)} \big) + 2^{-n} \, \end{eqnarray*} where the middle term can be upper bounded using Proposition \ref{prp:var:gaussian}. 
Let $H_N$ be the random harmonic number $H_N= \sum_{i=1}^N 1/i$. Then \begin{eqnarray*} \textsf{(ii)} & = & \mathbb{E}_{N,N'} \left[\Big( \mathbb{E}[X_{(1)} \mid N]- \mathbb{E}[X_{(1)} \mid N']\Big)_+^2 \right] \\ &= & \mathbb{E}_{N,N'} \left[ \left(\mathbb{E} \left[ \widetilde{U}\Big( \exp\Big( \textstyle{\sum_{i=1}^N \tfrac{E_i}{i}}\Big) \Big)-\widetilde{U}\Big( \exp\Big( \textstyle{\sum_{i=1}^{N'} \tfrac{E_i}{i}}\Big) \Big)\right] \right)_+^2\right]\\ & \leq & \mathbb{E}_{N,N'} \left[ \left( 1/h\big(\widetilde{U}\big( \exp\big( \textstyle{\sum_{i=1}^{N'} \tfrac{E_i}{i}}\big) \big)\big) (H_N-H_{N'})\right)^2_+\right]\\ & \leq & \frac{\pi}{2} \mathbb{E}_{N,N'} \Big[ (H_N-H_{N'})_+^2\Big] \\ & = & \frac{\pi}{2} \operatorname{Var}(H_N) \, . \end{eqnarray*} Now, as $N= \sum_{i=1}^n (1+\epsilon_i)/2$, letting $Z=H_N$ and $Z_i= \sum_{j=1}^{N-(1+\epsilon_i)/2} 1/j$, the Efron-Stein inequality yields $\operatorname{Var}(Z) \leq \mathbb{E}[1\wedge 1/N]$. Finally, using Hoeffding's inequality in a crude way leads to $\mathbb{E}[1\wedge 1/N]\leq \exp(-n/8)+ 4/n\leq 8/n$. We may conclude that $\textsf{(ii)}\leq 4\pi/n$. \end{proof} \begin{rem} Trading simplicity for tightness, sharper bounds on $\textsf{(ii)}$ could be derived, showing that $\textsf{(ii)}=O(1/(n\log n))$. \end{rem} \bibliographystyle{abbrv}
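As a purely illustrative complement, the bound of Proposition~\ref{prp:max:gaussian} can be compared with a crude Monte Carlo estimate. The following Python sketch (assuming \texttt{numpy} is available; the sample size, number of repetitions and seed are arbitrary choices) evaluates both quantities for $n=1000$; up to Monte Carlo error, the empirical variance should fall below the bound.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 2000

# Empirical variance of the maximum of n standard Gaussian variables.
samples = rng.standard_normal((reps, n))
empirical = samples.max(axis=1).var()

# Right-hand side of the variance bound (valid for n >= 11).
bound = (8 / np.log(2)) / (np.log(n / 2) - np.log(1 + 4 * np.log(np.log(n / 2))))
bound += 2.0 ** (-n) + np.exp(-n / 8) + 4 * np.pi / n

print(f"empirical Var[X_(1)] ~ {empirical:.3f}, bound ~ {bound:.3f}")
\end{verbatim}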
{ "timestamp": "2012-08-01T02:03:07", "yymm": "1207", "arxiv_id": "1207.7209", "language": "en", "url": "https://arxiv.org/abs/1207.7209", "abstract": "This note describes non-asymptotic variance and tail bounds for order statistics of samples of independent identically distributed random variables. Those bounds are checked to be asymptotically tight when the sampling distribution belongs to a maximum domain of attraction. If the sampling distribution has non-decreasing hazard rate (this includes the Gaussian distribution), we derive an exponential Efron-Stein inequality for order statistics: an inequality connecting the logarithmic moment generating function of centered order statistics with exponential moments of Efron-Stein (jackknife) estimates of variance. We use this general connection to derive variance and tail bounds for order statistics of Gaussian sample. Those bounds are not within the scope of the Tsirelson-Ibragimov-SudakovGaussian concentration inequality. Proofs are elementary and combine Rényi's representation of order statistics and the so-called entropy approach to concentration inequalities popularized by M. Ledoux.", "subjects": "Probability (math.PR); Statistics Theory (math.ST)", "title": "Concentration inequalities for order statistics", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9740426397881662, "lm_q2_score": 0.7279754548076477, "lm_q1q2_score": 0.7090791337018321 }
https://arxiv.org/abs/1108.3909
Relative Commutator Theory in Semi-Abelian Categories
Basing ourselves on the concept of double central extension from categorical Galois theory, we study a notion of commutator which is defined relative to a Birkhoff subcategory B of a semi-abelian category A. This commutator characterises Janelidze and Kelly's B-central extensions; when the subcategory B is determined by the abelian objects in A, it coincides with Huq's commutator; and when the category A is a variety of omega-groups, it coincides with the relative commutator introduced by the first author.
\section{Introduction} The aim of this article is to fill in the question mark in the diagram \[ \xymatrix@!0@C=6em@R=3em{& \fbox{\small ?} \ar@{-}[dl] \ar@{-}[dd] \ar@{-}[rd] \\ \fbox{\small Janelidze \& Kelly} \ar@{-}[dd] && \fbox{\txt{\small Huq}} \ar@{-}[dd]\\ & \fbox{\small Everaert} \ar@{-}[dl] \ar@{-}[rd] \\ \fbox{\txt{\small Froehlich}} && \fbox{\small Higgins}} \] which relates several non-equivalent concepts of \emph{commuting normal subobjects}, here named after the authors who introduced them. This diagram is meant to be read in the following manner. The bottom triangle restricts itself to theories which make sense for varieties of $\Omega$-groups, while the top triangle extends those theories to a categorical context. In the left hand side column we have theories which are \emph{one-dimensional} and \emph{relative}; the theories in the right hand side column, however, are \emph{two-dimensional} and \emph{absolute}, while the ones in the middle column are \emph{two-dimensional} and \emph{relative}. So we are looking for a \emph{categorical commutator theory which is both relative and two-dimensional}. Let us explain in more detail what this means for us. \subsection{The bottom triangle} Recall that a \textbf{variety of $\Omega$-groups}~\cite{Higgins} is a variety in the sense of universal algebra which is pointed (i.e., it has exactly one constant) and has amongst its operations and identities those of the variety of groups. Apart from groups, the examples include the varieties of abelian groups, of non-unital rings, of commutative algebras, of modules and of Lie algebras, and also the categories of crossed modules and of precrossed modules are known (essentially from~\cite{LR}) to be equivalent to varieties of $\Omega$-groups. In this context there are two classical approaches to commutator theory. On the one hand, there is the Higgins commutator of normal subobjects~\cite{Higgins} which has as particular cases the ordinary commutators of groups, rings, etc. It is \textbf{two-dimensional} in the sense that any two normal subobjects (i.e., ideals or kernels) $N$ and $M$ of an object~$A$ in a variety of $\Omega$-groups~$\ensuremath{\mathcal{A}}$ have a commutator $[N,M]^{\ensuremath{\mathrm{\Omega}}}$, namely, the normal subobject of the join $M\vee N=M\cdot N$ of $M$ and~$N$ generated by the set \[ \{w(\mathitbf{m}\mathitbf{n})w(\mathitbf{n})^{-1}w(\mathitbf{m})^{-1}\mid \text{$w$ is a term, $\mathitbf{m}\in M$ and $\mathitbf{n}\in N$}\}, \] where the notation ``$\mathitbf{m}\in M$'' means that $\mathitbf{m}$ is a finite sequence $(m_{1},\dots,m_{r})$ of elements in $M$. Call an object~$A$ of $\ensuremath{\mathcal{A}}$ \textbf{abelian} when it can be endowed with the structure of an internal abelian group (necessarily in a unique way). The subcategory of~$\ensuremath{\mathcal{A}}$ determined by the abelian objects is denoted by $\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}$. It is well known (and easily verified) that when $\ensuremath{\mathcal{A}}$ is a variety of $\Omega$-groups, an algebra~$A$ is in~$\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}$ precisely when the product map $A\times A\to A$ (sending a pair of elements $(a,a')$ to its product $aa'$) is a homomorphism in the variety. From this it follows immediately that the Higgins commutator characterises the abelian objects: $A$ is abelian if and only if $[A,A]^{\ensuremath{\mathrm{\Omega}}}=0$. 
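To make this concrete in the most familiar case, the variety of groups, recall that there the Higgins commutator of two normal subgroups $M$ and $N$ is the ordinary commutator subgroup, generated by the elements $mnm^{-1}n^{-1}$ with $m\in M$ and $n\in N$. The following Python sketch (purely illustrative; permutations of $\{0,1,2\}$ are encoded as tuples and only the standard library is assumed) verifies that $[A_{3},A_{3}]$ is trivial, so that $A_{3}$ is abelian, while $[S_{3},S_{3}]=A_{3}$.
\begin{verbatim}
def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations are encoded as tuples such as (1, 2, 0)
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def generated_subgroup(gens):
    # naive closure of a non-empty set of permutations under composition and inverses
    gens = set(gens)
    n = len(next(iter(gens)))
    elems = {tuple(range(n))} | gens
    changed = True
    while changed:
        changed = False
        for g in list(elems):
            for h in list(elems):
                for x in (compose(g, h), inverse(g)):
                    if x not in elems:
                        elems.add(x)
                        changed = True
    return elems

def commutator_subgroup(M, N):
    # subgroup generated by the commutators m n m^{-1} n^{-1}
    comms = {compose(compose(m, n), compose(inverse(m), inverse(n)))
             for m in M for n in N}
    return generated_subgroup(comms)

S3 = generated_subgroup({(1, 0, 2), (1, 2, 0)})    # a transposition and a 3-cycle generate S_3
A3 = generated_subgroup({(1, 2, 0)})               # the 3-cycles: a normal abelian subgroup

assert commutator_subgroup(A3, A3) == {(0, 1, 2)}  # trivial, so A_3 is abelian
assert commutator_subgroup(S3, S3) == A3           # the derived subgroup of S_3 is A_3
print("checks passed")
\end{verbatim}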
On the other hand there is the \textbf{relative} notion of central extension due to Fr\"ohlich~\cite{Froehlich} (see also Lue~\cite{Lue} and Furtado-Coelho~\cite{Furtado-Coelho}). This notion of central extension corresponds to a \textbf{one-dimensional} commutator. Here one starts from a variety of $\Omega$-groups $\ensuremath{\mathcal{A}}$ together with a chosen subvariety $\ensuremath{\mathcal{B}}$ of~$\ensuremath{\mathcal{A}}$. The subvariety $\ensuremath{\mathcal{B}}$ is completely determined by a set of identities of terms of the form $w(\mathitbf{x}) = 1$; the set of all corresponding terms $w(\mathitbf{x})$ is denoted by \[ W_{\ensuremath{\mathcal{B}}}=\{w(\mathitbf{x})\mid w(\b)=1,\forall B\in \ensuremath{\mathcal{B}},\forall \b\in B\}, \] and an object $A$ of $\ensuremath{\mathcal{A}}$ belongs to $\ensuremath{\mathcal{B}}$ if and only if $w(\a)=1$ for all $w\in W_{\ensuremath{\mathcal{B}}}$ and all~$\a\in A$. An \textbf{extension} $f\colon{A\to B}$ in $\ensuremath{\mathcal{A}}$ is a regular epimorphism, i.e., a surjective homomorphism. Let $K$ denote the kernel of $f$. The normal subobject $[K,A]^{\Omega}_{\ensuremath{\mathcal{B}}}$ of $A$ generated by the set \[ \{w(\k\a)w(\a)^{-1}\mid \text{$w\in W_{\ensuremath{\mathcal{B}}}$, $\k\in K$ and $\a\in A$}\} \] is called the \textbf{relative commutator (with respect to $\ensuremath{\mathcal{B}}$)} of $K$ and~$A$. (Note that Fr\"ohlich uses the notation $V_1$ for the relative commutator.) The extension~$f$ is \textbf{central (with respect to $\ensuremath{\mathcal{B}}$)} when $[K,A]^{\Omega}_{\ensuremath{\mathcal{B}}}$ is zero. It is easily seen that this relative commutator characterises objects of~$\ensuremath{\mathcal{B}}$ as follows: $A$~belongs to~$\ensuremath{\mathcal{B}}$ if and only if $[A,A]^{\Omega}_{\ensuremath{\mathcal{B}}}$ is zero. In the \textbf{absolute} case when the subvariety $\ensuremath{\mathcal{B}}$ consists of all abelian objects in $\ensuremath{\mathcal{A}}$, it was shown in~\cite{Furtado-Coelho} that the two commutators coincide, \[ [K,A]^{\Omega}_{\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}}=[K,A]^{\ensuremath{\mathrm{\Omega}}}. \] (Note here that $K\vee A=A$.) The main advantage of the relative approach is that one may consider many situations which are not covered by the Higgins commutator. For instance, the notion of central extension of precrossed modules relative to the subvariety of crossed modules is of this type. The main advantage of the Higgins commutator is that it is two-dimensional. So the Higgins commutator is two-dimensional and absolute, the Fr\"ohlich commutator is one-dimensional and relative, and in the one-dimensional absolute case the two commutators coincide. What about the two-dimensional relative case? In his article~\cite{EveraertCommutator} the first author of the present article aims at answering precisely this question. He introduces a two-dimensional relative commutator for varieties of $\Omega$-groups which restricts to the Higgins commutator in the absolute case and which characterises Fr\"ohlich's relative central extensions. Given any pair of normal subobjects $M$ and $N$ of an object $A$ of $\ensuremath{\mathcal{A}}$, the commutator~$[M,N]_{\ensuremath{\mathcal{B}}}$ is the normal subobject of $M\vee N$ generated by the set \[ \{w(\mathitbf{m}\mathitbf{n})w(\mathitbf{n})^{-1}w(\mathitbf{m})^{-1} w(\mathitbf{p})\mid w\in W_{\ensuremath{\mathcal{B}}}, \mathitbf{m}\in M, \mathitbf{n}\in N, \mathitbf{p}\in M\wedge N\}. 
\] The examples give an indication of how good his definition is. For instance, when considering the variety of precrossed modules together with the subvariety of crossed modules, the relative commutator obtained is the so-called \emph{Peiffer commutator}, which is exactly what one would expect. \subsection{The left hand side column} Basing themselves on ideas from categorical Galois theory~\cite{Janelidze:Pure, Borceux-Janelidze}, in the article~\cite{Janelidze-Kelly} Janelidze and Kelly introduce a general notion of central extension, relative with respect to a Birkhoff subcategory~$\ensuremath{\mathcal{B}}$ of a (Barr) exact category~$\ensuremath{\mathcal{A}}$. This notion of relative central extension is a generalisation of Fr\"ohlich's definition. In what follows, we shall restrict ourselves to the case of \textbf{semi-abelian} categories~\cite{Janelidze-Marki-Tholen}: pointed, exact and protomodular with binary sums. So let~$\ensuremath{\mathcal{A}}$ be a semi-abelian category and $\ensuremath{\mathcal{B}}$ a \textbf{Birkhoff subcategory} of $\ensuremath{\mathcal{A}}$---full, reflective and closed under subobjects and regular quotients; a Birkhoff subcategory of a variety is nothing but a subvariety. Let $I\colon{\ensuremath{\mathcal{A}}\to \ensuremath{\mathcal{B}}}$ denote the reflector, and $\eta\colon{1_{\ensuremath{\mathcal{A}}}\Rightarrow I}$ the unit of the adjunction. Recall from~\cite{Janelidze-Kelly} that the closure of $\ensuremath{\mathcal{B}}$ under subobjects and regular quotients is equivalent to the condition that the commutative square \begin{equation}\label{Diagram-unitsquare} \vcenter{\xymatrix@!0@=3.5em{A \ar[r]^-{f} \ar[d]_-{\eta_{A}} & B \ar[d]^-{\eta_{B}}\\ IA \ar[r]_-{If} & IB}} \end{equation} is a pushout of regular epimorphisms, for any regular epi $f\colon A\to B$. An \textbf{extension} in $\ensuremath{\mathcal{A}}$ is a regular epimorphism. Such an extension $f\colon{A\to B}$ is called \textbf{trivial (with respect to $\ensuremath{\mathcal{B}}$)} when the induced commutative square~\eqref{Diagram-unitsquare} is a pullback. $f$ is \textbf{central (with respect to $\ensuremath{\mathcal{B}}$)} when it is \emph{locally trivial} in the sense that there exists a regular epimorphism $p\colon E\to B$ such that the pullback $p^*(f)\colon E\times_B A\to E$ of $f$ along $p$ is a trivial extension. Since, in the present context, this implies that $f^*(f)$ is trivial, we have that $f$ is central if and only if it is \textbf{normal}: either one of the projections in the kernel pair $(R[f],f_{0},f_{1})$ of~$f$ is a trivial extension. It is explained in the articles~\cite{Janelidze-Kelly, Bourn-Gran} why these central extensions reduce to Fr\"ohlich's when the category $\ensuremath{\mathcal{A}}$ is a variety of $\Omega$-groups. This notion of relative central extension induces a \textbf{one-dimensional relative commutator} as follows~\cite{EverVdL1,EGVdL}. Let $[-]_{\ensuremath{\mathcal{B}}}\colon{\ensuremath{\mathcal{A}}\to \ensuremath{\mathcal{A}}}$ denote the \emph{radical} induced by~$\ensuremath{\mathcal{B}}$: the functor which maps an object $A$ of $\ensuremath{\mathcal{A}}$ to the object $[A]_{\ensuremath{\mathcal{B}}}$ defined through the short exact sequence \[ \xymatrix@!0@=3.5em{0 \ar[r] & [A]_{\ensuremath{\mathcal{B}}} \ar[r]^-{\mu_{A}} & A \ar[r]^-{\eta_{A}} & IA \ar[r] & 0,} \] and a morphism $a\colon{A'\to A}$ to its (co)restriction $[a]_{\ensuremath{\mathcal{B}}}\colon{[A']_{\ensuremath{\mathcal{B}}}\to [A]_{\ensuremath{\mathcal{B}}}}$. 
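For instance, when $\ensuremath{\mathcal{A}}$ is the variety of groups and $\ensuremath{\mathcal{B}}=\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}$ is the subvariety of abelian groups, the reflector $I$ sends a group $A$ to its abelianisation $A/[A,A]$, so that the radical $[A]_{\ensuremath{\mathcal{B}}}$ is the derived subgroup of $A$ and $\mu_{A}$ is its inclusion into $A$.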
Let again $f\colon{A\to B}$ be an extension and let $K$ be its kernel. By protomodularity, $f$ is $\ensuremath{\mathcal{B}}$-central if and only if for the kernel pair $(R[f],f_{0},f_{1})$ of $f$, the (co)restrictions \[ [f_{0}]_{\ensuremath{\mathcal{B}}},[f_{1}]_{\ensuremath{\mathcal{B}}}\colon{[R[f]]_{\ensuremath{\mathcal{B}}}\to [A]_{\ensuremath{\mathcal{B}}}} \] of the two projections are isomorphisms (see~\cite{Bourn-Gran}). Hence the kernel $[K,A]_{\ensuremath{\mathcal{B}}}$ of $[f_{0}]_{\ensuremath{\mathcal{B}}}$ measures how far $f$ is from being central: $f$ is $\ensuremath{\mathcal{B}}$-central if and only if~$[K,A]_{\ensuremath{\mathcal{B}}}$ is zero. The object $[K,A]_{\ensuremath{\mathcal{B}}}$ may be considered as a normal subobject of $A$ via the composite \[ \mu_{A}\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} [f_{1}]_{\ensuremath{\mathcal{B}}}\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} \ker [f_{0}]_{\ensuremath{\mathcal{B}}}\colon{[K,A]_{\ensuremath{\mathcal{B}}}\to A}; \] the induced extension $A/[K,A]_{\ensuremath{\mathcal{B}}}\to B$ is the \textbf{$\ensuremath{\mathcal{B}}$-centralisation} of $f$. We interpret~$[K,A]_{\ensuremath{\mathcal{B}}}$ as a commutator of $K$ with $A$, relative to the Birkhoff subcategory~$\ensuremath{\mathcal{B}}$ of $\ensuremath{\mathcal{A}}$. When $\ensuremath{\mathcal{A}}$ is a variety of $\Omega$-groups, $[K,A]_{\ensuremath{\mathcal{B}}}$ coincides with the relative commutator $[K,A]^{\Omega}_{\ensuremath{\mathcal{B}}}$, because they induce the same central extensions. And as in the varietal case, an object $A$ of $\ensuremath{\mathcal{A}}$ belongs to~$\ensuremath{\mathcal{B}}$ if and only if $[A,A]_{\ensuremath{\mathcal{B}}}=0$, because the extension $A\to 0$ is a split epimorphism, and therefore central if and only if it is trivial~\cite{Janelidze-Kelly}. \subsection{The right hand side column}\label{Subsection-Huq-Commutator} In his article~\cite{Huq}, Huq introduces a categorical notion of commutator of coterminal morphisms which makes sense in quite diverse algebraic settings. Using ``old-style'' axioms, he formulates his results for those categories we would nowadays call semi-abelian~\cite{Janelidze-Marki-Tholen}. Recast in more modern terminology by Bourn, his definition takes the following shape~\cite{Bourn-Huq}. In a semi-abelian category, consider two coterminal morphisms, $m\colon {M\to A}$ and $n\colon {N\to A}$, and the resulting square of solid arrows \[ \xymatrix@!0@=3.5em{& M \ar[ld]_-{\langle 1_{M},0\rangle } \ar@{.>}[d] \ar[rd]^-{m}\\ M\times N \ar@{.>}[r] & Q & A. \ar@{.>}[l]|-{q}\\ & N \ar[lu]^-{\langle 0,1_{N}\rangle } \ar[ru]_-{n} \ar@{.>}[u]} \] The colimit of this square consists of an object $Q$ together with four morphisms with codomain $Q$ as indicated in the diagram. The morphism $q$ turns out to be a normal epimorphism; its kernel is denoted \[ [m,n]^{\ensuremath{\mathrm{H}}}\colon {[M,N]^{\ensuremath{\mathrm{H}}}\to A} \] and called the \textbf{Huq commutator of $m$ and~$n$}. It is convenient for us to restrict its use to the situation when $M$ and~$N$ are normal subobjects of $A$, i.e., $m$ and~$n$ are kernels. The commutator $[M,N]^{\ensuremath{\mathrm{H}}}$ becomes the ordinary commutator of normal subgroups $M$ and~$N$ in the case of groups, the ideal generated by $MN+NM$ in the case of rings, the Lie bracket in the case of Lie algebras, and so on. 
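In the case of rings the Huq commutator of two ideals is again easy to compute in small examples. The following Python sketch (purely illustrative; the ambient object is the ring of integers modulo $8$, written as the set $\{0,\dots,7\}$ with arithmetic modulo $8$) computes the ideal generated by $MN+NM$: the ideals $(2)$ and $(4)$ commute, while $[(2),(2)]^{\ensuremath{\mathrm{H}}}$ is the ideal $(4)$.
\begin{verbatim}
MOD = 8                                     # the ring of integers modulo 8
M = {0, 2, 4, 6}                            # the ideal (2)
N = {0, 4}                                  # the ideal (4)

def ideal_generated_by(S):
    # smallest subset containing S and 0, closed under addition and
    # under multiplication by arbitrary ring elements
    I = {0} | set(S)
    changed = True
    while changed:
        changed = False
        new = {(x + y) % MOD for x in I for y in I}
        new |= {(a * x) % MOD for a in range(MOD) for x in I}
        if not new <= I:
            I |= new
            changed = True
    return I

def huq_commutator(M, N):
    products = {(m * n) % MOD for m in M for n in N}
    products |= {(n * m) % MOD for m in M for n in N}
    return ideal_generated_by(products)

print(huq_commutator(M, N))   # {0}:    (2) and (4) commute
print(huq_commutator(M, M))   # {0, 4}: [(2),(2)] is the ideal (4)
\end{verbatim}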
More generally, when computed in the join $M\vee N$, we know from~\cite{Huq} that in any variety of $\Omega$-groups the Huq commutator~$[M,N]^{\ensuremath{\mathrm{H}}}$ coincides with the Higgins commutator $[M,N]^{\ensuremath{\mathrm{\Omega}}}$. Just as the Higgins commutator, the Huq commutator characterises the Birkhoff subcategory~$\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}$ of $\ensuremath{\mathcal{A}}$ of abelian objects in $\ensuremath{\mathcal{A}}$. This is a consequence of the fact that, in a semi-abelian category $\ensuremath{\mathcal{A}}$, an object~$A$ admits at most one internal abelian group structure, and such a structure is entirely determined by a morphism $m\colon{A\times A\to A}$ which satisfies $m\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} \langle 1_{A},0\rangle = 1_{A}=m\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} \langle 0,1_{A}\rangle $~\cite{Huq,Bourn2002}. \subsection{The question mark}\label{Question} By now it is clear, we hope, that the purpose of the present article is to introduce a categorical version of the relative commutator for varieties of $\Omega$-groups, in such a way that \begin{enumerate} \item it characterises the $\ensuremath{\mathcal{B}}$-central extensions of $\ensuremath{\mathcal{A}}$, \item it coincides with the Huq commutator when $\ensuremath{\mathcal{B}}$ is $\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}$. \end{enumerate} In~\cite{EverVdL4} the present authors already introduced a relative concept of commuting normal subobjects, based on categorical Galois theory and valid in the context of semi-abelian categories. This notion was shown to be compatible with the relative commutator for varieties of $\Omega$-groups. What we still have to do now is \begin{itemize} \item explain how this induces a two-dimensional commutator; \item prove that this commutator satisfies (1) and (2) above; \item explore the commutator's basic properties. \end{itemize} One may ask whether it is worth the effort at all to leave the context of $\Omega$-groups and study a relative commutator from a categorical perspective. We claim that the categorical approach not only provides us with a conceptual explanation of the definitions (in terms of Galois theory) but also with interesting new examples. For instance, in the case of loops vs.\ groups considered in~\cite{EverVdL4}, the commutator becomes an \emph{associator}, and it effectively measures how well two normal subloops of a loop \emph{associate with each other}. \subsection{Definition of the commutator} Let us now briefly sketch how the relative commutator $[-,-]_{\ensuremath{\mathcal{B}}}$ is defined. Let $\ensuremath{\mathcal{A}}$ again be a semi-abelian category and $\ensuremath{\mathcal{B}}$ a Birkhoff subcategory of $\ensuremath{\mathcal{A}}$. $M$ and~$N$ will be normal subobjects of an object $A$ of~$\ensuremath{\mathcal{A}}$. $R_{M}$ and~$R_{N}$ are the equivalence relations on the join $M\vee N$ (taken in the lattice of normal subobjects of $A$) corresponding to $M$ and~$N$, and \[ \xymatrix@!0@R=3.5em@C=6em{R_{M}\square R_{N} \ar@<.5ex>[r]^-{r_{1}} \ar@<-.5ex>[r]_-{r_{0}} \ar@<.5ex>[d]^-{p_{1}} \ar@<-.5ex>[d]_-{p_{0}} & R_{N} \ar@<.5ex>[d] \ar@<-.5ex>[d]\\ R_{M} \ar@<.5ex>[r] \ar@<-.5ex>[r] & M\vee N} \] is the largest double equivalence relation on $R_{M}$ and $R_{N}$: the object $R_{M}\square R_{N}$ ``consists of'' all quadruples $(x, y, z, t)\in M\vee N$ where $(x,z)$, $(y,t)\in R_{M}$ and $(x,y)$, $(z,t)\in R_{N}$. 
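In a concrete case this double equivalence relation can be enumerated directly. The following Python sketch (purely illustrative; the join $M\vee N$ is the cyclic group of order $6$, written additively as $\{0,\dots,5\}$, with $M=\{0,2,4\}$ and $N=\{0,3\}$) lists the quadruples just described; since $M\wedge N=0$ here, every admissible triple $(x,y,z)$ determines $t$ uniquely, so that $36=6\cdot 3\cdot 2$ quadruples are found.
\begin{verbatim}
from itertools import product

A = range(6)                      # the cyclic group of order 6, written additively
M = {0, 2, 4}                     # the subgroup generated by 2
N = {0, 3}                        # the subgroup generated by 3

def in_RM(x, z):                  # (x, z) in R_M  iff  x - z lies in M
    return (x - z) % 6 in M

def in_RN(x, y):                  # (x, y) in R_N  iff  x - y lies in N
    return (x - y) % 6 in N

square = [(x, y, z, t) for x, y, z, t in product(A, repeat=4)
          if in_RM(x, z) and in_RM(y, t) and in_RN(x, y) and in_RN(z, t)]

print(len(square))                # 36: x is free, z ranges over x + M, y over x + N,
                                  # and t is the unique point of (y + M) meet (z + N)
\end{verbatim}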
The commutator of $M$ and $N$ is the meet \[ [M,N]_{\ensuremath{\mathcal{B}}}=K[[p_{0}]_{\ensuremath{\mathcal{B}}}]\wedge K[[r_{0}]_{\ensuremath{\mathcal{B}}}] \] of the kernels of the morphisms $[p_{0}]_{\ensuremath{\mathcal{B}}}$ and $[r_{0}]_{\ensuremath{\mathcal{B}}}$ in the following diagram, obtained by applying the functor $[-]_{\ensuremath{\mathcal{B}}}$ to the diagram above. \begin{equation}\label{Commutator-Squares} \vcenter{\xymatrix@!0@R=3.5em@C=6em{[R_{M}\square R_{N}]_{\ensuremath{\mathcal{B}}} \ar@<.5ex>[r]^-{[r_{1}]_{\ensuremath{\mathcal{B}}}} \ar@<-.5ex>[r]_-{[r_{0}]_{\ensuremath{\mathcal{B}}}} \ar@<.5ex>[d]^-{[p_{1}]_{\ensuremath{\mathcal{B}}}} \ar@<-.5ex>[d]_-{[p_{0}]_{\ensuremath{\mathcal{B}}}} & [R_{N}]_{\ensuremath{\mathcal{B}}} \ar@<.5ex>[d] \ar@<-.5ex>[d]\\ [R_{M}]_{\ensuremath{\mathcal{B}}} \ar@<.5ex>[r] \ar@<-.5ex>[r] & [M\vee N]_{\ensuremath{\mathcal{B}}}}} \end{equation} It may be considered as a normal subobject of $M\vee N$. \subsection{Interpretation in terms of double central extensions}\label{Galois-Structure} We have to explain why $[M,N]_{\ensuremath{\mathcal{B}}}$ is defined the way it is. The reason comes from categorical Galois theory, in particular the theory of \emph{higher central extensions}. Just like the concept of central extension which is defined with respect to the adjunction \begin{equation}\label{Adjunction-I} \xymatrix@!0@=3.5em{{\ensuremath{\mathcal{A}}} \ar@<1ex>[r]^-{I} \ar@{}[r]|-{\perp} & {\ensuremath{\mathcal{B}},} \ar@<1ex>[l]^-{\supset}} \end{equation} one may consider double central extensions which are defined with respect to the reflection of extensions to central extensions---the adjunction \begin{equation}\label{Adjunction-I_{1}} \xymatrix@!0@=6em{{\ensuremath{\mathsf{Ext}}\ensuremath{\mathcal{A}}} \ar@<1ex>[r]^-{I_{1}} \ar@{}[r]|-{\perp} & {\ensuremath{\mathsf{CExt}}_{\ensuremath{\mathcal{B}}}\ensuremath{\mathcal{A}}} \ar@<1ex>[l]^-{\supset}} \end{equation} where $\ensuremath{\mathsf{Ext}}\ensuremath{\mathcal{A}}$ is the category of extensions and commutative squares between them, and $\ensuremath{\mathsf{CExt}}_{\ensuremath{\mathcal{B}}}\ensuremath{\mathcal{A}}$ its full subcategory determined by those extensions which are central. The reflector $I_{1}$ takes an extension $f\colon{A\to B}$ with kernel $K$ and maps it to the central extension \[ I_{1}f\colon{A/[K,A]_{\ensuremath{\mathcal{B}}}\to B}. \] This may be repeated ad infinitum, so that notions of \emph{$n$-fold central extension} are obtained, but for the present purposes the second step is sufficient. Double central extensions, first introduced by Janelidze for groups~\cite{Janelidze:Double}, are an important tool in semi-abelian (co)homology~\cite{EGVdL, Janelidze:Hopf-talk, RVdL}, and turn out to be precisely what is needed to understand how the relative commutator works. We refer the reader to the articles~\cite{EGVdL,EverHopf} for more details on higher central extensions. As we explain below, the commutator $[M,N]_{\ensuremath{\mathcal{B}}}$ is zero if and only if any (hence, all) of the four commutative squares in the diagram~\eqref{Commutator-Squares} is a pullback. Galois theory shows that this condition is equivalent to the square \begin{equation}\label{Double-Central-Extension} \vcenter{\xymatrix@!0@R=3.5em@C=6em{M\vee N \ar[r]^-{q_{M}} \ar[d]_-{q_{N}} & \tfrac{M\vee N}{M} \ar[d]\\ \tfrac{M\vee N}{N} \ar[r] & 0}} \end{equation} being a double central extension. (Here $q_M$ denotes the cokernel of the normal monomorphism $M\to M\vee N$.) 
When this happens, we say that $M$ and $N$ \textbf{commute (with respect to $\ensuremath{\mathcal{B}}$)}. Accordingly, given any two normal subobjects $M$ and $N$ of an object~$A$, the commutator $[M,N]_{\ensuremath{\mathcal{B}}}$ is the smallest normal subobject $J$ of $M\vee N$ such that $M/J$ and $N/J$ commute; it is the normal subobject which must be divided out of $M\vee N$ to turn the double extension~\eqref{Double-Central-Extension} into a double central extension. \subsection{Structure of the text} In the following sections we shall explain why the commutator has the properties (1) and (2) mentioned in~\ref{Question}. With this purpose in mind, the text is structured as follows. In Section~\ref{Section-Preliminaries} we provide the necessary background for understanding the definition of the commutator: semi-abelian categories, normal subobjects, double extensions and double central extensions. Its basic technical properties and the proof of (1) are given in Section~\ref{Section-Definition}. In Section~\ref{Section-Huq} we prove~(2): the commutator $[-,-]_{\ensuremath{\mathcal{B}}}$ coincides with the Huq commutator in case~$\ensuremath{\mathcal{B}}$ is~$\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}$. Finally, Section~\ref{Further-Remarks} brings up some further remarks and unanswered questions. \tableofcontents \pagebreak \section{Preliminaries}\label{Section-Preliminaries} We recall some basic definitions and results which we shall need in the following sections. \subsection{Semi-abelian categories} A category is \textbf{regular} when it is finitely complete with coequalisers of kernel pairs and with pullback-stable regular epimorphisms~\cite{Barr}. In a regular category, any morphism $f$ may be factored as a regular epimorphism followed by a monomorphism (called the \textbf{image} of $f$), and this \textbf{image factorisation} is unique up to isomorphism. Given a monomorphism $m\colon {M\to A}$ and a regular epimorphism $f\colon A\to B$, the \textbf{direct image} $f(m)\colon{fM\to B}$ of $m$ along $f$ is the image of the composite~${f\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} m}$. When a category is pointed and regular, \textbf{protomodularity} can be defined via the following property, which is equivalent to the Short Five Lemma~\cite{Bourn1991,Bourn2001}: given any commutative diagram \begin{equation}\label{Short-Five-Lemma} \vcenter{\xymatrix@!0@=4em{K[f'] \ar[r]^-{\ker f'} \ar[d]_-k & A' \ar[r]^-{f'} \ar[d]_-a & B' \ar[d]^-b \\ K[f] \ar[r]_-{\ker f} & A \ar[r]_-{f} & B}} \end{equation} such that $f$ and $f'$ are regular epimorphisms, $k$ is an isomorphism if and only if the right hand square $b\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} f'=f\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} a$ is a pullback. (Here, we use the notation $\ker f\colon K[f]\to A$ for the kernel of $f$.) A \textbf{homological} category is pointed, regular and protomodular~\cite{Borceux-Bourn}. In such a category, a regular epimorphism is always the cokernel of its kernel, and there is the following notion of short exact sequence. A \textbf{short exact sequence} is any sequence \[ \xymatrix@!0@=3.5em{K \ar[r]^-{k} & A \ar[r]^-{f} & B} \] with $k=\ker f$ and $f$ a regular epimorphism. We denote this situation by \[ \xymatrix@!0@=3.5em{0 \ar[r] & K \ar[r]^-{k} & A \ar[r]^-{f} & B \ar[r] & 0.} \] The following property holds. 
\begin{lemma}\cite{Bourn2001}\label{Lemma-Pullback} Consider a morphism of short exact sequences such as~\eqref{Short-Five-Lemma} above. The left hand side square $\ker f\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} k=a\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} \ker f'$ is a pullback if and only if $b$ is a mono. \hfill \qed \end{lemma} A \textbf{(Barr) exact} category is regular and such that every internal equivalence relation is a kernel pair~\cite{Barr}. A homological category is exact if and only if the direct image of a normal monomorphism along a regular epimorphism is again a normal monomorphism. A \textbf{semi-abelian} category is homological and exact with binary coproducts~\cite{Janelidze-Marki-Tholen}. A \textbf{regular pushout square} is a commutative square \begin{equation}\label{Diagram-Square} \vcenter{\xymatrix@!0@=3.5em{X \ar[r]^-{c} \ar[d]_-{d} & C \ar[d]^-{g}\\ D \ar[r]_-{f} & Z}} \end{equation} such that all its maps and the comparison map $\langle d,c\rangle \colon{X\to D\times_Z C}$ to the pullback of $f$ with $g$ are regular epimorphisms. In a semi-abelian category, every pushout of a regular epimorphism along a regular epimorphism is a regular pushout~\cite{Carboni-Kelly-Pedicchio}, and the following dual to Lemma~\ref{Lemma-Pullback} holds: \begin{lemma}\label{Rotlemma}\cite{Bourn-Gran} Given a morphism of short exact sequences such as~\eqref{Short-Five-Lemma} above with $a$ and $b$ regular epi, the right hand side square $f\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} a=b\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} f'$ is a (regular) pushout if and only if $k$ is a regular epimorphism. \hfill \qed \end{lemma} \subsection{Normal subobjects} A \textbf{normal subobject} $N$ of an object~$A$ of a semi-abelian category is a subobject represented by a normal monomorphism $n\colon{N\to A}$. Let $M$ and~$N$ be two normal subobjects of $A$ with representing normal monomorphisms $m$ and~$n$. Taking into account Lemma~\ref{Lemma-Pullback} and the stability of normal monomorphisms under regular images, we may always form the $3\times 3$ diagram in Figure~\ref{3x3-Diagram} (in which all rows and columns are short exact sequences). The meet ${M\wedge N}$ and the join $M\vee N$ of the subobjects~$M$ and~$N$ are taken in the lattice of normal subobjects of $A$. We see that ${M\wedge N}$ is computed as the pullback~\texttt{(i)} and~$M\vee N$ is obtained through the pushout~\texttt{(ii)}, as the kernel of the composite morphism ${A\to {A}/{(M\vee N)}}$. Of course, $M\wedge N$ coincides with the meet~$M\cap N$ in the lattice of (all) subobjects of~$A$. One could also compute the join of~$M$ and~$N$ as (ordinary) subobjects of $A$ by taking the image $M\cup N$ of the morphism~$\langle \begin{smallmatrix}m \\ n\end{smallmatrix}\rangle\colon {M+N\to A}$. It is known~\cite{Borceux-Semiab,Huq} that both constructions yield the same result. We shall give an alternative proof of this fact below, but first we prove a weaker property. 
\begin{figure} \[ \xymatrix@!0@=3.5em{& 0 \ar[d] & 0 \ar[d] & 0 \ar[d]\\ 0 \ar[r] & M\wedge N \ar@{}[rd]|-{\texttt{(i)}} \ar[r] \ar[d] \ar@{}[rd]|<<{\copy\pullbackbox} & N \ar[d]^-{n} \ar[r] & \tfrac{N}{M\wedge N} \ar[d] \ar[r] & 0\\ 0 \ar[r] &M \ar[r]_-{m} \ar[d] & A \ar[r] \ar[d] \ar@{}[rd]|>{\copy\pushoutbox} \ar@{}[rd]|-{\texttt{(ii)}} & \tfrac{A}{M} \ar[d] \ar[r] & 0\\ 0 \ar[r] & \tfrac{M}{M\wedge N} \ar[r] \ar[d] & \tfrac{A}{N} \ar[r] \ar[d] & \tfrac{A}{M\vee N} \ar[d] \ar[r] & 0\\ & 0 & 0 & 0 } \] \caption{The $3\times 3$ diagram induced by $M$, $N$ normal in $A$}\label{3x3-Diagram} \end{figure} Let us fix some notation: we write $j$ for the normal monomorphism representing~$M\vee N$, and $m'\colon M\to M\vee N$ and $n'\colon N\to M\vee N$ for the induced factorisations. Since $m'$ and $n'$ are normal monomorphisms, we may also consider the join of $M$ and $N$ as normal subobjects of $M\vee N$. We denote it by $M\curlyvee N$ and write $j'\colon M\curlyvee N\to M\vee N$ for the representing normal monomorphism. \begin{lemma} The two joins $M\vee N$ and $M\curlyvee N$ coincide: $j'$ is an isomorphism. \end{lemma} \begin{proof} First of all note that the commutative square \[ \xymatrix@!0@R=3.5em@C=5em{ {M\vee N} \ar[r] \ar[d]_j & \frac{M\vee N}{M} \ar[d]\\ A \ar[r] & \frac{A}{M}} \] is a pullback by protomodularity, so that the right hand vertical morphism is a monomorphism because, in a protomodular category, pullbacks reflect monos~\cite{Bourn1991}. (One could, alternatively, use Lemma~\ref{Lemma-Pullback} to prove that this morphism is a monomorphism.) Now, the normal monomorphisms~$m'$ and~$n'$ induce a $3\times 3$ diagram similar to Figure~\ref{3x3-Diagram}, and $j$ induces a morphism between the two $3\times 3$ diagrams, of which we consider only the last row: \[ \xymatrix@!0@R=3.5em@C=5em{ 0 \ar[r] & \frac{N}{M\wedge N} \ar[r] \ar@{=}[d] & \frac{M\vee N}{M} \ar[r] \ar[d] & \frac{M\vee N}{M\curlyvee N} \ar[d] \ar[r] & 0\\ 0 \ar[r] & \frac{N}{M\wedge N} \ar[r] & \frac{A}{M} \ar[r] & \frac{A}{M\vee N} \ar[r] & 0} \] We have just explained why the middle vertical morphism is a monomorphism. Hence, using the same arguments as above, we find that also the right hand vertical morphisms is a mono. Since the composite \[ M\vee N \to (M\vee N)/(M\curlyvee N)\to A/(M\vee N) \] is zero, we find that $(M\vee N)/(M\curlyvee N)=0$, i.e., the factorisation $j'$ is an isomorphism. \end{proof} Now, taking this lemma into account, when $A=M\vee N$ in the $3\times 3$ diagram above, the object $A/(M\vee N)$ is zero, and we regain the \textbf{Noether isomorphisms}~\cite{Borceux-Bourn} \begin{equation}\label{Noether} \frac{N}{M\wedge N}\cong \frac{M\vee N}{M}\qquad\text{and}\qquad\frac{M}{M\wedge N}\cong \frac{M\vee N}{N}. \end{equation} We are ready to prove the identity $M\vee N=M\cup N$. \begin{notation}\label{Notation-q} Given a normal subobject $J$ of an object~$A$, the induced quotient of $A$ is denoted \[ q_{J}^{A}\colon{A\to A/J}; \] we write $R^{A}_{J}$ for the kernel pair $A\times_{A/J} A$ of $q^{A}_{J}$. Most of the time $A$ will be a join $M\vee N$, in which case we drop the $A$ from the notation and simply write \[ q_{J}\colon{M\vee N\to (M\vee N) / J} \] for the quotient and $R_{J}$ for the kernel pair of $q_{J}$. \end{notation} \begin{proposition}\cite{Borceux-Semiab,Huq}\label{Lemma-Join} If $M$ and $N$ are normal in $A$, then their join as normal subobjects $M\vee N$ coincides with their join as subobjects $M\cup N$. 
Hence the morphism \[ \langle \ensuremath{\mathrm{coker\,}} n,\ensuremath{\mathrm{coker\,}} m\rangle \colon{A\to (A/N)\times (A/M)} \] is a regular epimorphism if and only if such is the morphism \[ \langle\begin{smallmatrix}m \\ n\end{smallmatrix}\rangle\colon {M+N\to A}. \] \end{proposition} \begin{proof} If $J$ is a subobject of $M\vee N$ containing $M$ and $N$, then by Lemma~\ref{Lemma-Pullback} it induces a factorisation of the first of the isomorphisms~\eqref{Noether} as a morphism ${N/(M\wedge N)\to J/M}$ followed by a monomorphism \[ j\colon{J/M\to {(M\vee N)}/{M}}. \] This $j$ is also a split epimorphism; hence it is an isomorphism, and $J$ is equal to~$M\vee N$ by the Short Five Lemma. Now $M\cup N$ is a subobject of $M\vee N$ containing $M$ and $N$, and the two joins coincide. As to the latter statement, the first condition holds if and only if the square \[ \xymatrix@!0@R=3.5em@C=5em{A \ar[r] \ar[d] & \tfrac{A}{M} \ar[d]\\ \tfrac{A}{N} \ar[r] & 0} \] is a regular pushout. Since, in a semi-abelian category, a pushout of regular epimorphisms is necessarily regular, this happens when $A=M\vee N$. But then~$A$ is~$M\cup N$ by the former part of the proof, and the second condition holds only when this is the case. \end{proof} Given a monomorphism $m\colon{M\to A}$, the \textbf{normal closure} $\overline{M}^{A}$ of $M$ in $A$ always exists, and is computed as the kernel of the cokernel of $m$. It is the smallest normal subobject of $A$ that contains~$M$. \subsection{Double (central) extensions}\label{Subsection-Double-Central-Extensions} A \textbf{double extension} is a regular pushout square~\eqref{Diagram-Square}. For instance, given any two normal subobjects $M$ and~$N$ of an object $A$ of $\ensuremath{\mathcal{A}}$, the induced pushout square~\eqref{Double-Central-Extension} is a double extension. Recall from~\cite{EGVdL} that pullbacks of double extensions exist in $\ensuremath{\mathsf{Ext}}\ensuremath{\mathcal{A}}$ and are degree-wise pullbacks in $\ensuremath{\mathcal{A}}$. Moreover, double extensions are pullback-stable. The category of double extensions in $\ensuremath{\mathcal{A}}$ and commutative cubes between them is denoted~$\ensuremath{\mathsf{Ext}}^{2}\!\ensuremath{\mathcal{A}}$. Double central extensions are defined with respect to the adjunction~\eqref{Adjunction-I_{1}} in the same way as central extensions are defined with respect to the adjunction~\eqref{Adjunction-I}. More precisely, a double extension~\eqref{Diagram-Square}, considered as a map $(c,f)\colon{d\to g}$ in the category $\ensuremath{\mathsf{Ext}} \ensuremath{\mathcal{A}}$, is \textbf{trivial} when the left hand commutative square below, induced by the unit of the adjunction~\eqref{Adjunction-I_{1}}, is a pullback in~$\ensuremath{\mathsf{Ext}}\ensuremath{\mathcal{A}}$; this means that the right hand commutative square, in which the vertical morphisms are the canonical quotient maps, is a pullback in~$\ensuremath{\mathcal{A}}$. \[ \vcenter{\xymatrix@1@!0@=3.5em{ d \ar[r]^{(c,f)} \ar[d] & g \ar[d]\\ I_1d \ar[r] & I_1g}} \qquad\qquad \vcenter{\xymatrix@!0@R=3.5em@C=5em{ X \ar[r]^c \ar[d] & C \ar[d] \\ \frac{X}{[K[d],X]_{\ensuremath{\mathcal{B}}}} \ar[r] & \frac{C}{[K[g],C]_{\ensuremath{\mathcal{B}}}}}} \] The square~\eqref{Diagram-Square} is a \textbf{double central extension (with respect to $\ensuremath{\mathcal{B}}$)} when its pullback along some double extension is a trivial double extension. 
It is a \textbf{double normal extension (with respect to $\ensuremath{\mathcal{B}}$)} when the first projection of its kernel pair \[ \vcenter{\xymatrix@!0@R=3.5em@C=5em{R[c] \ar[r]^-{c_{0}} \ar[d]_-{r} & X \ar[d]^-{d} \\ R[f] \ar[r]_-{f_{0}} & D}} \] is a trivial double extension. (Alternatively, one could use the square of second projections.) By protomodularity, this amounts to the (one-dimen\-sional, relative) commutators $[K[r],R[c]]_{\ensuremath{\mathcal{B}}}$ and $[K[d],X]_{\ensuremath{\mathcal{B}}}$ being isomorphic. Similar to the one-dimensional case, double central extensions and double normal extensions coincide. \subsection{Higher extensions} In what follows we shall also need three-fold extensions, so let us recall the definition of $n$-fold extension for arbitrary $n$. Given $n\geq 0$, denote by $\ensuremath{\mathsf{Arr}}^n\!\ensuremath{\mathcal{A}}$ the category of $n$-dimensional arrows in $\ensuremath{\mathcal{A}}$. (Zero-dimensional arrows---as well as zero-dimensional extensions---are just objects of $\ensuremath{\mathcal{A}}$.) A \textbf{(one-fold) extension} is a regular epimorphism in $\ensuremath{\mathcal{A}}$. For $n\geq 1$, an \textbf{$(n+1)$-fold extension} is a commutative square~\eqref{Diagram-Square} in~$\ensuremath{\mathsf{Arr}}^{n-1}\!\ensuremath{\mathcal{A}}$ (an arrow in~$\ensuremath{\mathsf{Arr}}^n\!\ensuremath{\mathcal{A}}$) such that all its maps and the comparison map $\langle d,c\rangle\colon{X\to D\times_Z C}$ to the pullback of~$f$ with~$g$ are~$n$-fold extensions. Thus for $n=2$ we regain the notion of double extension. A three-fold extension is a commutative cube \[ \vcenter{\xymatrix@!0@=3.5em{& X \ar[rr] \ar@{.>}[dd] && C \ar[dd] \\ X' \ar[rr] \ar[dd] \ar[ru] && C' \ar[dd] \ar[ru]\\ & D \ar@{.>}[rr] && Z\\ D' \ar[rr] \ar@{.>}[ru] && Z' \ar[ru]}} \qquad\qquad \vcenter{\xymatrix@!0@R=3.5em@C=4.5em{X' \ar[r] \ar[d] & D'\times_{Z'}C' \ar[d] \\ X \ar[r] & D\times _{Z}C}} \] of which all faces as well as the induced right-hand square are double extensions. Since, in a semi-abelian category, regular epimorphisms are normal, the three-fold extension above is completely determined by the object $X'$ and the three normal subobjects given by the kernels of its ``initial ribs'' ${X'\to X}$, ${X'\to C'}$ and ${X'\to D'}$. Conversely, given an object $X'$ and three normal subobjects $J$,~$M$ and $N$ of $X'$, the following lemma determines when the induced cube is a three-fold extension. \begin{lemma}\label{Lemma-Three-Fold-Extension} Given normal subobjects $J$,~$M$ and $N$ of an object $X'$ in a semi-abelian category, the cube obtained by pushing out the induced quotients is a three-fold extension if and only if \[ {q^{X'}_{J}(M\wedge N)=q^{X'}_{J}M\wedge q^{X'}_{J}N}. \] \end{lemma} \begin{proof} Since, in a semi-abelian category, pushouts of regular epimorphisms are regular, the induced cube is a three-fold extension as soon as the square \[ \vcenter{\xymatrix@!0@R=4em@C=7em{X' \ar[r] \ar[d] & \tfrac{X'}{M}\times_{\tfrac{X'}{M\vee N}}\tfrac{X'}{N} \ar[d] \\ \tfrac{X'}{J} \ar[r] & \tfrac{X'}{J\vee M}\times_{\tfrac{X'}{J\vee M\vee N}}\tfrac{X'}{J\vee N}}} \] is a double extension. We already know that all morphisms in this square are regular epimorphisms, so by Lemma~\ref{Rotlemma} it is a double extension if and only if $q^{X'}_{J}(M\wedge N)=q^{X'}_{J}M\wedge q^{X'}_{J}N$. \end{proof} Further results on higher-dimensional extensions and central extensions may be found in~\cite{EGVdL} and~\cite{EverHopf}. 
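To see that the condition in Lemma~\ref{Lemma-Three-Fold-Extension} is not automatic, one may take, in the category of groups, for $X'$ the Klein four-group and for $J$, $M$ and $N$ its three subgroups of order two: then $M\wedge N=0$, so that $q^{X'}_{J}(M\wedge N)=0$, while $q^{X'}_{J}M=q^{X'}_{J}N=X'/J$ and hence $q^{X'}_{J}M\wedge q^{X'}_{J}N=X'/J\neq 0$; the induced cube is not a three-fold extension.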
Let us just recall here that, for any $n\geq 0$, a split epimorphism of $n$-fold extensions is always an $(n+1)$-fold extension, and it is an $(n+1)$-fold central extension if and only if it is a trivial $(n+1)$-fold extension. Higher-dimensional central extensions are important in homology where they appear in the higher Hopf formulae, and in cohomology where (in the absolute case, and in low dimensions) they are classified by the cohomology groups~\cite{Gran-VdL,RVdL}. \section{Definition and basic properties}\label{Section-Definition} In this section we recall the categorical definition of the relative commutator from the introduction and we explore its basic properties: compatibility with the central extensions introduced by Janelidze and Kelly (Proposition~\ref{Proposition-Characterisation-Central-Extensions}), basic stability properties (Theorem~\ref{Proposition-Commutator-Properties}) and the case of $\Omega$-groups (Proposition~\ref{Theorem-Characterisation}). In what follows, $\ensuremath{\mathcal{A}}$ will be a semi-abelian category and $\ensuremath{\mathcal{B}}$ a Birkhoff subcategory of $\ensuremath{\mathcal{A}}$. \begin{definition}\label{Definition-Commuting-Subobjects} Let $M$ and $N$ be normal subobjects of an object $A$ of $\ensuremath{\mathcal{A}}$. We say that $M$ and $N$ \textbf{commute (with respect to $\ensuremath{\mathcal{B}}$)} when the double extension \begin{equation}\label{Double-Extension-MN} \vcenter{\xymatrix@!0@R=3.5em@C=5em{M\vee N \ar[r]^-{q_{M}} \ar[d]_-{q_{N}} & \tfrac{M\vee N}{M} \ar[d]\\ \tfrac{M\vee N}{N} \ar[r] & 0}} \end{equation} is central (with respect to $\ensuremath{\mathcal{B}}$). \end{definition} It is immediately clear that this notion of commuting subobjects characterises the $\ensuremath{\mathcal{B}}$-central extensions of $\ensuremath{\mathcal{A}}$ and the objects of $\ensuremath{\mathcal{B}}$: \begin{proposition}\label{Proposition-Characterisation-Central-Extensions} An extension $f\colon{A\to B}$ in $\ensuremath{\mathcal{A}}$ is $\ensuremath{\mathcal{B}}$-central if and only if the object $A$ and the kernel $K$ of $f$ commute. An object $A$ of $\ensuremath{\mathcal{A}}$ lies in $\ensuremath{\mathcal{B}}$ if and only if~$A$ commutes with itself. \end{proposition} \begin{proof} The first result holds because the double extension \[ \xymatrix@!0@=3.5em{A \ar[r]^-{q_{A}} \ar[d]_-{f=q_{K}} & 0 \ar@{=}[d] \\ B \ar[r] & 0,} \] being a split epimorphism of extensions, is central if and only if it is trivial, which happens precisely when $f$ is a central extension. The second result follows from the first, since $A$ is in $\ensuremath{\mathcal{B}}$ if and only if the split epimorphism ${A\to 0}$ is a $\ensuremath{\mathcal{B}}$-central extension. \end{proof} \begin{lemma}\cite[Proposition 2.9]{EverVdL4}\label{Lemma-Characterisation-Square} Let $M$ and $N$ be normal subobjects of an object~$A$.
$M$ and $N$ commute if and only if any of the four commutative squares in the diagram \begin{equation}\label{Square-Commutator} \vcenter{\xymatrix@!0@C=7em@R=3.5em{[R_{M}\square R_{N}]_{\ensuremath{\mathcal{B}}} \ar@<.5ex>[r]^-{[r_{1}]_{\ensuremath{\mathcal{B}}}} \ar@<-.5ex>[r]_-{[r_{0}]_{\ensuremath{\mathcal{B}}}} \ar@<.5ex>[d]^-{[p_{1}]_{\ensuremath{\mathcal{B}}}} \ar@<-.5ex>[d]_-{[p_{0}]_{\ensuremath{\mathcal{B}}}} & [R_{N}]_{\ensuremath{\mathcal{B}}} \ar@<.5ex>[d]^-{[\pi_{1}]_{\ensuremath{\mathcal{B}}}} \ar@<-.5ex>[d]_-{[\pi_{0}]_{\ensuremath{\mathcal{B}}}}\\ [R_{M}]_{\ensuremath{\mathcal{B}}} \ar@<.5ex>[r]^-{[\rho_{1}]_{\ensuremath{\mathcal{B}}}} \ar@<-.5ex>[r]_-{[\rho_{0}]_{\ensuremath{\mathcal{B}}}} & [M\vee N]_{\ensuremath{\mathcal{B}}}}} \end{equation} is a pullback.\hfill \qed \end{lemma} \begin{definition}\label{Definition-Commutator} Let $M$ and $N$ be normal subobjects of an object $A$. Let \[ [R_{M}]_{\ensuremath{\mathcal{B}}}\times_{[M\vee N]_{\ensuremath{\mathcal{B}}}}[R_{N}]_{\ensuremath{\mathcal{B}}} \] denote the pullback of the morphisms $[\pi_{0}]_{\ensuremath{\mathcal{B}}}$ and $[\rho_{0}]_{\ensuremath{\mathcal{B}}}$ from Diagram~\eqref{Square-Commutator}. The \textbf{commutator $[M,N]_{\ensuremath{\mathcal{B}}}$} is the kernel of the morphism \[ \langle [p_{0}]_{\ensuremath{\mathcal{B}}},[r_{0}]_{\ensuremath{\mathcal{B}}}\rangle \colon{[R_{M}\square R_{N}]_{\ensuremath{\mathcal{B}}}\to [R_{M}]_{\ensuremath{\mathcal{B}}}\times_{[M\vee N]_{\ensuremath{\mathcal{B}}}}[R_{N}]_{\ensuremath{\mathcal{B}}}}, \] considered as a normal subobject of $M\vee N$. \end{definition} \begin{remark} Two normal subobjects $M$ and $N$ of an object $A$ commute if and only if $[M,N]_{\ensuremath{\mathcal{B}}}$ is zero. Indeed, the morphism $\langle [p_{0}]_{\ensuremath{\mathcal{B}}},[r_{0}]_{\ensuremath{\mathcal{B}}}\rangle $ is a regular (hence, normal) epimorphism because the square $[\pi_0]_{\ensuremath{\mathcal{B}}}\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} [r_0]_{\ensuremath{\mathcal{B}}}=[\rho_0]_{\ensuremath{\mathcal{B}}}\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} [p_0]_{\ensuremath{\mathcal{B}}}$ is a double extension as a split epimorphism of split epimorphisms. Hence its kernel is zero if and only if it is an isomorphism---which, by Lemma~\ref{Lemma-Characterisation-Square}, means that~$M$ and~$N$ commute. 
\end{remark} \begin{remark}\label{Remark-Subobject} The kernel of $\langle [p_{0}]_{\ensuremath{\mathcal{B}}},[r_{0}]_{\ensuremath{\mathcal{B}}}\rangle $ may indeed be considered as a normal subobject of $M\vee N$, namely, through the composition of \[ \ker \langle [p_{0}]_{\ensuremath{\mathcal{B}}},[r_{0}]_{\ensuremath{\mathcal{B}}}\rangle \colon{K[\langle [p_{0}]_{\ensuremath{\mathcal{B}}},[r_{0}]_{\ensuremath{\mathcal{B}}}\rangle ]\to [R_{M}\square R_{N}]_{\ensuremath{\mathcal{B}}}} \] with \[ \rho_{1}\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} p_{1}\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} \mu_{R_{M}\square R_{N}}\colon{[R_{M}\square R_{N}]_{\ensuremath{\mathcal{B}}}\to M\vee N.} \] First of all, this composite is a monomorphism, because \[ \mu_{R_{M}\square R_{N}}\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} \ker \langle [p_{0}]_{\ensuremath{\mathcal{B}}},[r_{0}]_{\ensuremath{\mathcal{B}}}\rangle= \ker \langle p_0,r_0 \rangle \circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} \mu_{K[\langle [p_{0}]_{\ensuremath{\mathcal{B}}},[r_{0}]_{\ensuremath{\mathcal{B}}}\rangle]} \] and both $\mu_{K[\langle [p_{0}]_{\ensuremath{\mathcal{B}}},[r_{0}]_{\ensuremath{\mathcal{B}}}\rangle]}$ and $\rho_{1}\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} p_{1}\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} \ker \langle p_0,r_0 \rangle$ are monomorphisms. Now $\mu_{R_{M}\square R_{N}}\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} \ker \langle [p_{0}]_{\ensuremath{\mathcal{B}}},[r_{0}]_{\ensuremath{\mathcal{B}}}\rangle$ is a normal monomorphism as a meet of two normal monomorphisms. This follows from Lemma~\ref{Lemma-Pullback}, since the induced morphism \[ {[R_{M}]_{\ensuremath{\mathcal{B}}}\times_{[M\vee N]_{\ensuremath{\mathcal{B}}}}[R_{N}]_{\ensuremath{\mathcal{B}}}\to R_{M}\times_{M\vee N}R_{N}} \] is a monomorphism. Hence \[ \rho_{1}\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} p_{1}\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} \mu_{R_{M}\square R_{N}}\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} \ker \langle [p_{0}]_{\ensuremath{\mathcal{B}}},[r_{0}]_{\ensuremath{\mathcal{B}}}\rangle \] is normal, being the direct image of this latter normal monomorphism along the regular epimorphism $\rho_{1}\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} p_{1}$. \end{remark} \begin{remark} On the other hand, there is no reason why $[M,N]_{\ensuremath{\mathcal{B}}}$ should be a normal subobject of $A$. A counterexample is given in~\cite{MM-NC}. \end{remark} \begin{remark} The commutator $[M,N]_{\ensuremath{\mathcal{B}}}$ is nothing but $L_{2}$ of the double extension~\eqref{Double-Extension-MN} as considered in the article~\cite{EGVdL}. \end{remark} \begin{theorem}\label{Proposition-Commutator-Properties} Let $M$, $N$, $L$ (resp.~$M'$, $N'$) be normal subobjects of an object~$A$ (resp.~$A'$). Let $J$ be a normal subobject of $M\vee N$. 
The following hold: \begin{enumerate} \item $[0,N]_{\ensuremath{\mathcal{B}}}=0$; \item $[M,N]_{\ensuremath{\mathcal{B}}}=[N,M]_{\ensuremath{\mathcal{B}}}$; \item $[M,N]_{\ensuremath{\mathcal{B}}}\leq M\wedge N$; \item if $N\leq L$ then $[M,N]_{\ensuremath{\mathcal{B}}}\leq [M,L]_{\ensuremath{\mathcal{B}}}$ as subobjects of $A$; \item $q_{J}[M,N]_{\ensuremath{\mathcal{B}}}\leq [q_{J}M,q_{J}N]_{\ensuremath{\mathcal{B}}}$; \item $[M\times M',N\times N']_{\ensuremath{\mathcal{B}}}=[M,N]_{\ensuremath{\mathcal{B}}}\times [M',N']_{\ensuremath{\mathcal{B}}}$; \item $q_{J}[M,N]_{\ensuremath{\mathcal{B}}}= [q_{J}M,q_{J}N]_{\ensuremath{\mathcal{B}}}$ as soon as $q_{J}(M\wedge N)=q_{J}M\wedge q_{J}N$, which happens, for instance, when either $M\leq N$ or $J\leq M\wedge N$; \item $[M,N]_{\ensuremath{\mathcal{B}}}$ is the smallest normal subobject $J$ of $M\vee N$ such that $q_{J}M$ and $q_{J}N$ commute. \end{enumerate} \end{theorem} \begin{proof} The first property holds because, for any object $N$, the square \[ \xymatrix@!0@=3.5em{N \ar[d]_-{q_{N}} \ar@{=}[r]^-{q_{0}} & N \ar[d] \\ 0 \ar@{=}[r] & 0} \] is a double central extension with respect to $\ensuremath{\mathcal{B}}$. Property (2) follows from the symmetry of Diagram~\eqref{Square-Commutator}; see~\cite{EverHopf} for a detailed explanation. (3) follows from the definition of $[M,N]_{\ensuremath{\mathcal{B}}}$. To see this, consider the diagram \[ \xymatrix@!0@C=7.5em@R=4em{K[[r_{0}]_{\ensuremath{\mathcal{B}}}] \ar@<.5ex>[d]^-{k_{1}} \ar@<-.5ex>[d]_-{k_{0}} \ar[r]^-{\ker [r_{0}]_{\ensuremath{\mathcal{B}}}} & [R_{M}\square R_{N}]_{\ensuremath{\mathcal{B}}} \ar@<.5ex>[r]^-{[r_{1}]_{\ensuremath{\mathcal{B}}}} \ar@<-.5ex>[r]_-{[r_{0}]_{\ensuremath{\mathcal{B}}}} \ar@<.5ex>[d]^-{[p_{1}]_{\ensuremath{\mathcal{B}}}} \ar@<-.5ex>[d]_-{[p_{0}]_{\ensuremath{\mathcal{B}}}} & [R_{N}]_{\ensuremath{\mathcal{B}}} \ar@<.5ex>[d]^-{[\pi_{1}]_{\ensuremath{\mathcal{B}}}} \ar@<-.5ex>[d]_-{[\pi_{0}]_{\ensuremath{\mathcal{B}}}}\\ K[[\rho_{0}]_{\ensuremath{\mathcal{B}}}] \ar[r]^-{\ker [\rho_{0}]_{\ensuremath{\mathcal{B}}}} \ar[d]_-{l} & [R_{M}]_{\ensuremath{\mathcal{B}}} \ar@<.5ex>[r]^-{[\rho_{1}]_{\ensuremath{\mathcal{B}}}} \ar@<-.5ex>[r]_-{[\rho_{0}]_{\ensuremath{\mathcal{B}}}} \ar[d]_-{\mu_{R_{M}}} & [M\vee N]_{\ensuremath{\mathcal{B}}} \ar[d]^-{\mu_{M\vee N}} \\ M \ar[r]_-{\ker \rho_{0}} & R_{M} \ar@<.5ex>[r]^-{\rho_{1}} \ar@<-.5ex>[r]_-{\rho_{0}} & M\vee N.} \] Since $[M,N]_{\ensuremath{\mathcal{B}}}$, being the kernel of $\langle [p_{0}]_{\ensuremath{\mathcal{B}}},[r_{0}]_{\ensuremath{\mathcal{B}}}\rangle$, may be computed as the meet of the kernels of $[p_{0}]_{\ensuremath{\mathcal{B}}}$ and $[r_{0}]_{\ensuremath{\mathcal{B}}}$, it is also the kernel of~$k_{0}$. Hence, considered as a subobject of $M\vee N$ via Remark~\ref{Remark-Subobject}, it is a subobject of $M$ through the morphism $l\circ}%{\raisebox{0.2mm}{\ensuremath{\scriptstyle{\circ}}} k_{1}$. Likewise, $[M,N]_{\ensuremath{\mathcal{B}}}$ is contained in $N$. The fourth property follows from the functoriality of the construction of $[-,-]_{\ensuremath{\mathcal{B}}}$. So does the fifth. To see that the relative commutator preserves binary products, it suffices to note that the zero-dimensional commutator~$[-]_{\ensuremath{\mathcal{B}}}$ preserves them, and that joins commute with products. The former property is well known. 
It is a consequence of the fact that the reflector $I\colon \ensuremath{\mathcal{A}}\to\ensuremath{\mathcal{B}}$ preserves pullbacks of split epimorphisms along split epimorphisms (because the components of the unit are extensions) together with the fact that a split epimorphism of split epimorphisms in $\ensuremath{\mathsf{Ext}}\ensuremath{\mathcal{A}}$ is always a three-fold extension. The latter property holds because the product of two regular pushouts is a regular pushout: products of pullbacks are pullbacks, products of regular epis are regular epis. To prove (7), first of all recall that the square~\eqref{Diagram-unitsquare} induced by the unit~$\eta$ is a pushout of regular epimorphisms for any regular epimorphism $f$, by the Birkhoff condition. Hence, by Lemma~\ref{Rotlemma}, the zero-dimensional commutator $[-]_{\ensuremath{\mathcal{B}}}\colon \ensuremath{\mathcal{A}}\to\ensuremath{\mathcal{A}}$ preserves extensions. Now assume that $q_{J}(M\wedge N)=q_{J}M\wedge q_{J}N$. Then by Lemma~\ref{Lemma-Three-Fold-Extension} the left hand side commutative cube \[ \xymatrix@!0@=3.5em{& \tfrac{M\vee N}{J} \ar[rr] \ar@{.>}[dd] && \tfrac{M\vee N}{M\vee J} \ar[dd] \\ M\vee N \ar[rr] \ar[dd] \ar[ru] && \tfrac{M\vee N}{M} \ar[dd] \ar[ru]_-{\beta}\\ & \tfrac{M\vee N}{N\vee J} \ar@{.>}[rr] && 0\\ \tfrac{M\vee N}{N} \ar[rr] \ar@{.>}[ru]^-{\alpha} && 0 \ar@{=}[ru] } \qquad \xymatrix@!0@=3.5em{& R_{\tfrac{M\vee J}{J}}\square R_{\tfrac{N\vee J}{J} } \ar@<.5ex>[rr] \ar@<-.5ex>[rr] \ar@{.>}@<.5ex>[dd] \ar@{.>}@<-.5ex>[dd] && R_{\tfrac{N\vee J}{J}} \ar@<.5ex>[dd] \ar@<-.5ex>[dd]\\ R_{M}\square R_{N} \ar@<.5ex>[rr] \ar@<-.5ex>[rr] \ar@<.5ex>[dd] \ar@<-.5ex>[dd]\ar[ru] && R_{N} \ar@<.5ex>[dd] \ar@<-.5ex>[dd] \ar[ru]\\ & R_{\tfrac{M\vee J}{J} } \ar@{.>}@<.5ex>[rr] \ar@{.>}@<-.5ex>[rr] && \tfrac{M\vee N}{J} \\ R_{M} \ar@<.5ex>[rr] \ar@<-.5ex>[rr] \ar@{.>}[ru] && M\vee N \ar[ru] } \] is a three-fold extension. As a consequence, so are all the commutative cubes in the right hand side diagram, being pullbacks of three-fold extensions. This is still true if we apply the functor $[-]_{\ensuremath{\mathcal{B}}}$ to the right hand side diagram, since $[-]_{\ensuremath{\mathcal{B}}}$ preserves extensions and because a split epimorphism of extensions is a double extension, and a split epimorphism of double extensions a three-fold extension. The identity in (7) now follows. If~${M\leq N}$ then $q_{J}(M\wedge N)=q_{J}M=q_{J}M\wedge q_{J}N$. If, on the other hand, we assume that $J\leq M\wedge N$, then the morphism $\alpha$ and, by symmetry, also $\beta$, are isomorphisms. This implies that the left hand side cube above is a three-fold extension, so that $q_{J}(M\wedge N)=q_{J}M\wedge q_{J}N$ by Lemma~\ref{Lemma-Three-Fold-Extension}. Properties (3) and (7) together imply that $q_{[M,N]_{\ensuremath{\mathcal{B}}}}M$ and $q_{[M,N]_{\ensuremath{\mathcal{B}}}}N$ commute. Using (5) it is now easily seen that $[M,N]_{\ensuremath{\mathcal{B}}}$ is minimal amongst all $J$ such that ${[q_{J}M,q_{J}N]_{\ensuremath{\mathcal{B}}}=0}$. \end{proof} It was shown in~\cite{EverVdL4} that two normal subobjects of an $\Omega$-group commute in the sense of~\cite{EveraertCommutator} if and only if they commute in the sense of our Definition~\ref{Definition-Commuting-Subobjects}. 
Since both notions of relative commutator satisfy the same universal property (see Theorem~\ref{Proposition-Commutator-Properties} (8)), we find: \begin{proposition}\label{Theorem-Characterisation} Let $\ensuremath{\mathcal{A}}$ be a variety of $\Omega$-groups and $\ensuremath{\mathcal{B}}$ a subvariety of $\ensuremath{\mathcal{A}}$. Given any two normal subobjects $M$ and $N$ of an object $A$ of $\ensuremath{\mathcal{A}}$, we have \[ [M,N]^{\Omega}_{\ensuremath{\mathcal{B}}}\cong [M,N]_{\ensuremath{\mathcal{B}}}. \] In particular, the commutator $[M,N]^{\Omega}_{\ensuremath{\mathcal{B}}}$ is zero if and only if the double extension~\eqref{Double-Extension-MN} is central.\hfill \qed \end{proposition} \begin{remark}\label{Remark-examples} This already gives us the examples worked out in~\cite{EveraertCommutator}: precrossed modules vs.\ crossed modules, where the relative commutator is the Peiffer commutator, for instance. An example which is not a consequence of this theorem---loops vs.\ groups, where the relative commutator is an associator---was considered in the article~\cite{EverVdL4}. Another example which falls outside the scope of~\cite{EveraertCommutator} is the case of compact Hausdorff topological groups vs.\ profinite groups. Here, the relative commutator $[M,N]_{\ensuremath{\mathcal{B}}}$ is the connected component of the intersection $M\cap N$, as follows from results in~\cite{Everaert-Gran-TT}. More generally, in any situation where the reflector $I\colon \ensuremath{\mathcal{A}}\to\ensuremath{\mathcal{B}}$ is \emph{protoadditive}~\cite{EG-honfg,Everaert-Gran-TT} (for instance, when $\ensuremath{\mathcal{A}}$ is abelian), one has the identity $[M,N]_{\ensuremath{\mathcal{B}}}=[M\cap N]_{\ensuremath{\mathcal{B}}}$ for any object $A$ of $\ensuremath{\mathcal{A}}$ and any pair of normal subobjects~$M$ and~$N$ of~$A$. The ``absolute'' case of abelianisation is treated in the following section. \end{remark} \begin{remark}\label{Remark-Preservation} It suffices to consider the case $\ensuremath{\mathcal{B}}=0$ (where $0$ is the category with one object and one arrow) to see that the equality in Statement~(5) of Theorem~\ref{Proposition-Commutator-Properties} does not hold in general. The case $\ensuremath{\mathcal{B}}=0$ shows, furthermore, that unlike the Smith/Pedicchio commutator---cf.\ Lemma~\ref{Proposition-Smith-Commutator-Properties}---the commutator $[-,-]_{\ensuremath{\mathcal{B}}}$ need not preserve binary joins. \end{remark} \section{The absolute case: abelianisation}\label{Section-Huq} In the case of $\Omega$-groups, the relative commutator $[-,-]_{\ensuremath{\mathcal{B}}}^{\Omega}$ in~$\ensuremath{\mathcal{A}}$ reduces to the Higgins commutator when $\ensuremath{\mathcal{B}}$ is the Birkhoff subcategory~$\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}$ of all abelian objects of $\ensuremath{\mathcal{A}}$. Likewise, when $\ensuremath{\mathcal{A}}$ is an arbitrary semi-abelian category and $\ensuremath{\mathcal{B}}$ is~$\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}$, the relative $[-,-]_{\ensuremath{\mathcal{B}}}$ is nothing but the Huq commutator. To show this we take a detour via the Smith/Pedicchio commutator of equivalence relations. First, in Lemma~\ref{Proposition-Commutator-Is-Smith}, we prove that the equivalence relation corresponding to the commutator of two normal subobjects is exactly the commutator of the equivalence relations corresponding to those normal subobjects. 
Then we prove Proposition~\ref{Proposition-Huq-Is-Smith} which states that the Huq commutator of a pair of normal subobjects $M$ and $N$ of an object~$A$ is the normalisation of the Smith/Pedicchio commutator of the corresponding equivalence relations, when $M\vee N=A$. Combining both results, we obtain Theorem~\ref{Proposition-Commutator-Is-Huq}: given any two normal subobjects~$M$ and~$N$ of $A$, their Huq commutator $[M,N]^{\ensuremath{\mathrm{H}}}$, computed in $M\vee N$, coincides with $[M,N]_{\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}}$. \subsection{The commutator of equivalence relations} In his book~\cite{Smith}, Smith introduced a commutator of equivalence relations in the context of Mal'tsev varieties. It was extended to a purely categorical setting by Pedicchio~\cite{Pedicchio} and may be presented in a manner which is similar to the definition of the Huq commutator of normal subobjects~\cite{Borceux-Bourn, BG}. Let $A$ be an object of a semi-abelian category $\ensuremath{\mathcal{A}}$. The largest equivalence relation on $A$ is denoted by $\nabla_{A}=(A\times A,\pi_{0},\pi_{1})$ and the smallest one by~$\Delta_{A}=(A,1_{A},1_{A})$. Two equivalence relations $R=(R,r_{0},r_{1})$ and $S=(S,s_{0},s_{1})$ on~$A$ are said to \textbf{centralise each other} when they admit a \textbf{centralising double relation} \begin{equation}\label{Centralising-Relation} \vcenter{\xymatrix@!0@=3.5em{C \ar@<.5ex>[r] \ar@<-.5ex>[r] \ar@<.5ex>[d] \ar@<-.5ex>[d] & S \ar@<.5ex>[d] \ar@<-.5ex>[d]\\ R \ar@<.5ex>[r] \ar@<-.5ex>[r] & A,}} \end{equation} i.e., a (unique) double equivalence relation $C$ on $R$ and $S$ such that any of the four commutative squares in~\eqref{Centralising-Relation} is a pullback. (Then all of the commutative squares in~\eqref{Centralising-Relation} are pullbacks.) $R$ and $S$ centralise each other if and only if there exists a partial Mal'tsev operation on $R$ and $S$, a morphism $p\colon{R\times_{A}S\to A}$ which satisfies $p(\alpha,\alpha,\gamma)=\gamma$ and $p(\alpha,\gamma,\gamma)=\alpha$. The \textbf{commutator} $[R,S]^{\ensuremath{\mathrm{S}}}$ of $R$ and $S$ is the universal equivalence relation on $A$ which, when divided out, makes them centralise each other. Consider the pullback \[ \vcenter{\xymatrix@!0@=4em{R\times_{A}S \ar@<.5ex>[d]^-{p_{R}} \ar@<.5ex>[r]^-{p_{S}} \ar@{}[rd]|<<{\copy\pullbackbox} & S \ar@<.5ex>[d]^-{s_{0}} \ar@<.5ex>[l]^-{i_{S}}\\ R \ar@<.5ex>[r]^-{r_{1}} \ar@<.5ex>[u]^-{i_{R}} & A \ar@<.5ex>[l] \ar@<.5ex>[u]}} \] of $r_{1}$ and $s_{0}$; then $[R,S]^{\ensuremath{\mathrm{S}}}$ is the kernel pair $R[q]$ of the morphism $q$ in the diagram \[ \xymatrix@!0@=3.5em{& R \ar[ld]_-{i_{R}} \ar@{.>}[d] \ar[rd]^-{r_{0}}\\ R\times_{A}S \ar@{.>}[r] & Q & A \ar@{.>}[l]|-{q}\\ & S \ar[lu]^-{i_{S}} \ar[ru]_-{s_{0}} \ar@{.>}[u]} \] where the dotted arrows denote the colimit of the outer square. The direct images~$qR$ and~$qS$ of~$R$ and~$S$ along the regular epimorphism $q$ centralise each other; hence $R$ and $S$ do so if and only if $[R,S]^{\ensuremath{\mathrm{S}}}=\Delta_{A}$. The following properties of this commutator will be useful for us. \begin{lemma}\label{Proposition-Smith-Commutator-Properties}\cite{Borceux-Bourn, Bourn-Gran-Maltsev, Pedicchio} Let $R$, $S$, $S'$ be equivalence relations on an object~$A$ and $f\colon{A\to B}$ a regular epimorphism. 
The following hold: \begin{enumerate} \item $[\Delta_{A},S]^{\ensuremath{\mathrm{S}}}=\Delta_{A}$; \item $[R,S]^{\ensuremath{\mathrm{S}}}=[S,R]^{\ensuremath{\mathrm{S}}}$; \item $[R,S]^{\ensuremath{\mathrm{S}}}\leq R\wedge S$; \item if $S\leq S'$ then $[R,S]^{\ensuremath{\mathrm{S}}}\leq [R,S']^{\ensuremath{\mathrm{S}}}$; \item $[R,S\vee S']^{\ensuremath{\mathrm{S}}}=[R,S]^{\ensuremath{\mathrm{S}}}\vee [R,S']^{\ensuremath{\mathrm{S}}}$; \item if $[R,S]^{\ensuremath{\mathrm{S}}}=\Delta_{A}$ then $[fR,fS]^{\ensuremath{\mathrm{S}}}=\Delta_{B}$.\hfill \qed \end{enumerate} \end{lemma} The double central extensions with respect to the Birkhoff subcategory~$\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}$ of abelian objects in a semi-abelian category $\ensuremath{\mathcal{A}}$ have been characterised in terms of this commutator of equivalence relations as follows. \begin{lemma}\cite{RVdL, EverVdL3}\label{Proposition-Central-With-Commutators} A double extension~\eqref{Diagram-Square} in a semi-abelian category~$\ensuremath{\mathcal{A}}$ satisfies \[ [R[d],R[c]]^{\ensuremath{\mathrm{S}}}=\Delta_A=[R[d]\wedge R[c], \nabla_A]^{\ensuremath{\mathrm{S}}} \] if and only if it is central with respect to $\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}$.\hfill \qed \end{lemma} This immediately implies that $[-,-]_{\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}}$ corresponds to $[-,-]^{\ensuremath{\mathrm{S}}}$ in the following sense: \begin{lemma}\label{Proposition-Commutator-Is-Smith} Given any two normal subobjects $M$ and $N$ of $A$, \[ [R_{M},R_{N}]^{\ensuremath{\mathrm{S}}}=R_{[M,N]_{\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}}}. \] \end{lemma} \begin{proof} By definition, $M$ and $N$ commute when the square~\eqref{Double-Extension-MN} is a double central extension with respect to $\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}$. According to Lemma~\ref{Proposition-Central-With-Commutators}, this happens if and only if \begin{equation}\label{Two-Equalities} [R_{M},R_{N}]^{\ensuremath{\mathrm{S}}}=\Delta_{M\vee N}=[R_{M}\wedge R_{N}, \nabla_{M\vee N}]^{\ensuremath{\mathrm{S}}}. \end{equation} Using $\nabla_{M\vee N}=R_{M}\vee R_{N}$ we see that \[ [R_{M}\wedge R_{N}, \nabla_{M\vee N}]^{\ensuremath{\mathrm{S}}}= [R_{M}\wedge R_{N}, R_{M}]^{\ensuremath{\mathrm{S}}}\vee [R_{M}\wedge R_{N}, R_{N}]^{\ensuremath{\mathrm{S}}} \leq [R_{M},R_{N}]^{\ensuremath{\mathrm{S}}} \] and the second equality in~\eqref{Two-Equalities} follows from the first. Hence $[M,N]_{\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}}$ is zero if and only if $[R_{M},R_{N}]^{\ensuremath{\mathrm{S}}}=\Delta_{M\vee N}$. The commutator $[R_{M},R_{N}]^{\ensuremath{\mathrm{S}}}$ now coincides with $R_{[M,N]_{\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}}}$ because these two equivalence relations satisfy the same universal property. \end{proof} \subsection{The Huq commutator} It is well known that in general, the Huq commutator does \emph{not} correspond to the commutator of equivalence relations: the relation~$R^{A}_{[M,N]^{\ensuremath{\mathrm{H}}}}$ need not be isomorphic to $[R_{M}^{A},R_{N}^{A}]^{\ensuremath{\mathrm{S}}}$ for arbitrary normal subobjects $M$ and $N$ of an object~$A$---a counterexample is given in~\cite{Bourn2004} for digroups, a variety of $\Omega$-groups. There are essentially two ways to remedy this situation. 
On the one hand, the context may be strengthened to that of \emph{Moore categories} by imposing the \emph{strong protomodularity} axiom~\cite{Borceux-Bourn, Rodelo:Moore}; but then the theory no longer applies to all varieties of $\Omega$-groups. On the other hand, it is known that the induced notions of centrality coincide in \emph{any} semi-abelian category (see~\cite[Proposition~2.2]{Gran-VdL}). That is to say, $R_{[M,N]^{\ensuremath{\mathrm{H}}}}^{A}$ is isomorphic to $[R_{M}^{A},R_{N}^{A}]^{\ensuremath{\mathrm{S}}}$ when $N$ is equal to $A$. In fact, according to an unpublished result by M.~Gran and the first author (presented here as Proposition~\ref{Proposition-Huq-Is-Smith} below) this assumption is too strong: as we shall see, the commutators coincide as soon as $A=M\vee N$. Two coterminal morphisms $m\colon {M\to A}$ and $n\colon {N\to A}$ \textbf{commute} when there exists a (necessarily unique) morphism $\varphi_{m,n}\colon{M\times N\to A}$ such that
\[
m=\varphi_{m,n}\circ \langle 1_{M},0\rangle\qquad\text{and}\qquad n=\varphi_{m,n}\circ \langle 0,1_{N}\rangle.
\]
It is clear that $m$ and $n$ commute if and only if their Huq commutator
\[
[m,n]^{\ensuremath{\mathrm{H}}}\colon [M,N]^{\ensuremath{\mathrm{H}}}\to A
\]
is zero, see Subsection~\ref{Subsection-Huq-Commutator}.
\begin{proposition}\label{Proposition-Huq-Is-Smith}
Given any two normal subobjects $M$ and $N$ of $A$ such that ${M\vee N=A}$ we have $R_{[M,N]^{\ensuremath{\mathrm{H}}}}=[R_{M},R_{N}]^{\ensuremath{\mathrm{S}}}$.
\end{proposition}
\begin{proof}
We show that the representing normal monomorphisms $m$ and $n$ of~$M$ and~$N$ commute if and only if the equivalence relations $R_{M}$ and~$R_{N}$ centralise each other; the result then follows, because the commutators $[-,-]^{\ensuremath{\mathrm{H}}}$ and $[-,-]^{\ensuremath{\mathrm{S}}}$ satisfy the same universal property. One implication is Proposition~3.2 in~\cite{BG} which states that $m$ and $n$ commute whenever $[R_{M},R_{N}]^{\ensuremath{\mathrm{S}}}$ is~$\Delta_{A}$. Indeed, if $p\colon{R_{M}\times_{A}R_{N}\to A}$ is a partial Mal'tsev operation on $R_{M}$ and $R_{N}$, then its restriction to $M\times N$ is the needed $\varphi_{m,n}$. To prove the other implication, suppose that $\varphi_{m,n}\colon{M\times N\to A}$ exists. By assumption, the morphism $\langle\begin{smallmatrix}m \\ n\end{smallmatrix}\rangle\colon {M+N\to A}$, and hence also $\varphi_{m,n}$, is a regular epimorphism. This implies that $R_{M}=\varphi_{m,n}(\varphi_{m,n}^{-1}R_{M})$ and $R_{N}=\varphi_{m,n}(\varphi_{m,n}^{-1}R_{N})$. Since the images of two equivalence relations which centralise each other still centralise each other (by (6) in Lemma~\ref{Proposition-Smith-Commutator-Properties}), it suffices to show that so do $\varphi_{m,n}^{-1}R_{M}$ and $\varphi_{m,n}^{-1}R_{N}$. Now these relations turn out to be particularly simple. Via Lemma~\ref{Lemma-Pullback}, the Noether isomorphism $N/(M\wedge N)\cong(M\vee N)/M$ implies that the left hand side square in the diagram with exact rows
\[
\xymatrix@!0@R=3.5em@C=6.5em{0 \ar[r] & M\times (M\wedge N) \ar[d] \ar[r] & M\times N \ar[d]^-{\varphi_{m,n}} \ar[r] & \tfrac{N}{M\wedge N} \ar[d]^-{\cong} \ar[r] & 0\\
0 \ar[r] & M \ar[r] & M\vee N \ar[r] & \tfrac{M\vee N}{M} \ar[r] & 0}
\]
is a pullback, so $\varphi_{m,n}^{-1}R_{M}=\nabla_{M}\times R_{M\wedge N}^{N}$. Similarly, $\varphi_{m,n}^{-1}R_{N}=R_{M\wedge N}^{M}\times \nabla_{N}$.
Since
\[
[M\wedge N,M]^{\ensuremath{\mathrm{H}}}=0=[M\wedge N,N]^{\ensuremath{\mathrm{H}}},
\]
Proposition~2.2 in~\cite{Gran-VdL} may be used to see that both $[\nabla_{M},R_{M\wedge N}^{M}]^{\ensuremath{\mathrm{S}}}=\Delta_{M}$ and $[R_{M\wedge N}^{N},\nabla_N]^{\ensuremath{\mathrm{S}}}=\Delta_N$, so that $[\varphi_{m,n}^{-1}R_{M},\varphi_{m,n}^{-1}R_{N}]^{\ensuremath{\mathrm{S}}}=\Delta_{M\times N}$---which finishes the proof.
\end{proof}
Combining Lemma~\ref{Proposition-Commutator-Is-Smith} with Proposition~\ref{Proposition-Huq-Is-Smith}, we obtain
\begin{theorem}\label{Proposition-Commutator-Is-Huq}
Given any two normal subobjects $M$ and $N$ of $A$, their Huq commutator $[M,N]^{\ensuremath{\mathrm{H}}}$, computed in $M\vee N$, coincides with $[M,N]_{\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}}$.\hfill \qed
\end{theorem}
\begin{remark}
Given any monomorphism $i\colon{A\to B}$, two coterminal morphisms $m\colon{M\to A}$ and $n\colon{N\to A}$ commute if and only if $i\circ m$ and $i\circ n$ commute---both in Huq's sense and relative to any $\ensuremath{\mathcal{B}}$. This implies that the concept of ``commuting subobjects'' is independent of the surrounding object~$A$. As a consequence,
\[
\overline{[M,N]_{\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}}}^{A}=[M,N]^{\ensuremath{\mathrm{H}}}.
\]
\end{remark}
\section{Further remarks}\label{Further-Remarks}
\subsection{Finding the right context}
We have defined the relative commutator in the framework of semi-abelian categories. However, looking at the diagram in the introduction, this is not entirely satisfactory, because:
\begin{itemize}
\item Central extensions were defined in~\cite{Janelidze-Kelly} in the context of exact categories~$\ensuremath{\mathcal{A}}$, relative to a choice of admissible Birkhoff subcategory; and it was shown that if $\ensuremath{\mathcal{A}}$ is Mal'tsev (every reflexive relation internal in~$\ensuremath{\mathcal{A}}$ is an equivalence relation) then any Birkhoff subcategory is admissible. More recently, V.~Rossi proved in~\cite{Val} the admissibility of Birkhoff subcategories in a context which includes every regular Mal'tsev category that is ``almost exact'' in the sense that every regular epimorphism is an effective descent morphism.
\item The Huq commutator can be considered in a context as general as that of finitely cocomplete unital categories; in particular, in any finitely cocomplete pointed Mal'tsev category~\cite{Bourn-Huq}.
\end{itemize}
Thus one may ask whether it is possible to consider the relative commutator in a more general context than that of semi-abelian categories, say, in finitely cocomplete, pointed, regular, ``almost exact'' Mal'tsev categories. We do not know the answer, but let us mention here two apparent obstacles and comment on each of them. (1) Double central extensions, on which the notion of relative commutator depends, were defined in~\cite{EGVdL} in the semi-abelian context. One reason for this was that the construction of the left adjoint to the inclusion functor $\ensuremath{\mathsf{CExt}}_{\ensuremath{\mathcal{B}}}\ensuremath{\mathcal{A}}\to \ensuremath{\mathsf{Ext}}\ensuremath{\mathcal{A}}$ given in~\cite{EGVdL} is only valid if $\ensuremath{\mathcal{A}}$ is semi-abelian (and~$\ensuremath{\mathcal{B}}$ is a Birkhoff subcategory of $\ensuremath{\mathcal{A}}$).
In this case, the same construction can be applied to higher dimensions, giving us, in particular, a left adjoint to the inclusion functor $\ensuremath{\mathsf{CExt}}^2_{\ensuremath{\mathcal{B}}}\ensuremath{\mathcal{A}}\to \ensuremath{\mathsf{Ext}}^2\!\ensuremath{\mathcal{A}}$ of double central extensions into double extensions. The existence of the latter adjoint or, more precisely, of the reflection into $\ensuremath{\mathsf{CExt}}^2_{\ensuremath{\mathcal{B}}}\ensuremath{\mathcal{A}}$ of double extensions of the form \eqref{Double-Central-Extension} is what allows us to define the relative commutator. There is no a priori reason, though, why the left adjoints $\ensuremath{\mathsf{Ext}}\ensuremath{\mathcal{A}}\to\ensuremath{\mathsf{CExt}}_{\ensuremath{\mathcal{B}}}\ensuremath{\mathcal{A}}$ and $\ensuremath{\mathsf{Ext}}^2\!\ensuremath{\mathcal{A}}\to\ensuremath{\mathsf{CExt}}^2_{\ensuremath{\mathcal{B}}}\ensuremath{\mathcal{A}}$ could not exist when the category $\ensuremath{\mathcal{A}}$ is not semi-abelian. In fact, the former adjoint is known to exist in a wide variety of cases (see \cite{Janelidze-Kelly:Reflectiveness,Janelidze:Hopf}). For instance, it exists if $\ensuremath{\mathcal{A}}$ is a finitely cocomplete exact Mal'tsev category and $\ensuremath{\mathcal{B}}$ the Birkhoff subcategory of abelian objects, and in this case the characterisation of Lemma~\ref{Proposition-Central-With-Commutators} above remains valid (see~\cite{EverVdL3}). (2) In an exact Mal'tsev category any pushout of regular epimorphisms is a regular pushout~\cite{Carboni-Kelly-Pedicchio}, and we have used this property to conclude the crucial fact that the square~\eqref{Double-Central-Extension} is always a double extension. Furthermore, we know from~\cite{Carboni-Kelly-Pedicchio} that in every regular, but not exact, Mal'tsev category there exist pushout squares of regular epimorphisms that are not double extensions. This seems to indicate that exactness is unavoidable in defining a relative commutator. However, we can say the following. First of all we recall from~\cite{Bourn1996} that a finitely complete category $\ensuremath{\mathcal{A}}$ is Mal'tsev if and only if for any square of split epimorphisms \[ \vcenter{\xymatrix@!0@=3.5em{X \ar@<.5ex>[d]^-{d} \ar@<-.5ex>[r]_-{c} & C \ar@<.5ex>[d]^-{g} \ar@<-.5ex>[l]\\ D \ar@<-.5ex>[r]_-{f} \ar@<.5ex>[u] & Z \ar@<-.5ex>[l] \ar@<.5ex>[u]}} \] which ``reasonably'' commutes (in the sense that it represents a split epimorphism in the category of split epimorphisms, with given splitting, in $\ensuremath{\mathcal{A}}$), the factorisation $\langle d,c\rangle\colon X\to D\times_Z C$ to the pullback of $f$ with $g$ is a strong epimorphism. A finitely complete pointed category $\ensuremath{\mathcal{A}}$ is called \textbf{unital} if the same property holds, but only in the case where $Z$ is the zero object. Equivalently, $\ensuremath{\mathcal{A}}$ is unital if for any two objects $C$ and $D$ the ``product injections'' $\langle 0, 1_{C}\rangle\colon C\to D\times C$ and $\langle 1_{D},0\rangle\colon D\to D\times C$ are jointly strongly epimorphic~\cite{Bourn1996,Bourn2002}. A third characterisation of unital categories is given by the following proposition. 
\begin{proposition}
If $\ensuremath{\mathcal{A}}$ is a finitely complete pointed category, then the first of the following conditions implies the second:
\begin{enumerate}
\item $\ensuremath{\mathcal{A}}$ is unital;
\item for any pair of strong epimorphisms $c$ and $d$
\[
\xymatrix@!0@=3.5em{ D & X \ar[l]_-d \ar[r]^-c & C}
\]
such that the kernels $\ker d$ and $\ker c$ are jointly strongly epimorphic, the induced morphism to the product $\langle d,c\rangle\colon X\to D\times C$ is a strong epimorphism.
\end{enumerate}
If, moreover, $\ensuremath{\mathcal{A}}$ has finite coproducts, then the two conditions are equivalent.
\end{proposition}
\begin{proof}
Assume that $\ensuremath{\mathcal{A}}$ is unital and that $d$ and $c$ are as in (2). First of all note that a morphism is a strong epimorphism if it is jointly strongly epimorphic with a zero morphism. Since $\ker d$ and $\ker c$ are jointly strongly epimorphic, and $d$ is a strong epimorphism, this implies that the composite $d\circ \ker c$ is strongly epimorphic. Similarly, $c\circ \ker d$ is a strong epimorphism. Since~$\ensuremath{\mathcal{A}}$ is unital, the product injections $\langle 0, 1_{C}\rangle$ and $\langle 1_{D},0\rangle$ are jointly strongly epimorphic, hence, by the above, so are $\langle 0, 1_{C}\rangle\circ c\circ \ker d=\langle d,c\rangle\circ \ker d$ and $\langle 1_{D},0\rangle\circ d\circ \ker c=\langle d,c\rangle\circ \ker c$. Hence $\langle d,c\rangle$ is a strong epimorphism. Conversely, for any two objects $D$ and $C$ of $\ensuremath{\mathcal{A}}$, applying condition (2) to the ``coproduct projections''
\[
\xymatrix@!0@=5em{ D & D+C \ar[l]_-{\langle\begin{smallmatrix}1_{D} \\ 0\end{smallmatrix}\rangle} \ar[r]^-{\langle\begin{smallmatrix}0 \\ 1_{C}\end{smallmatrix}\rangle} & C}
\]
gives us that the product injections $\langle 1_{D},0\rangle$ and $\langle 0, 1_{C}\rangle$ are jointly strongly epimorphic. Hence $\ensuremath{\mathcal{A}}$ is unital.
\end{proof}
Now suppose that $\ensuremath{\mathcal{A}}$ is finitely cocomplete, regular and unital. Then, in particular, any two normal subobjects $M$ and $N$ of an object $A$ in $\ensuremath{\mathcal{A}}$ admit a union~${M\cup N}$, and the above proposition implies that the square
\begin{equation}\label{Diagram-Non-Exact}
\vcenter{\xymatrix@!0@R=3.5em@C=5em{M\cup N \ar[r]^-{q_{M}} \ar[d]_-{q_{N}} & \tfrac{M\cup N}{M} \ar[d]\\
\tfrac{M\cup N}{N} \ar[r] & 0}}
\end{equation}
is a double extension (here $q_M$ and $q_N$ are the cokernels of the inclusions in~$M\cup N$ of $M$ and $N$, respectively). This indicates that it might be possible, after all, to consider the relative commutator in a non-exact context, but we would need to have an appropriate notion of double central extension. In that case, we could say that $M$ and $N$ commute if and only if the double extension~\eqref{Diagram-Non-Exact} is central.
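For instance, in the category of groups (which is semi-abelian, hence unital) the square~\eqref{Diagram-Non-Exact} takes a familiar form: for normal subgroups $M,N\trianglelefteq A$ one has $M\cup N=MN$, and the comparison map
\[
MN\to \tfrac{MN}{M}\times \tfrac{MN}{N},\qquad x\mapsto (xM,xN),
\]
is surjective. Indeed, given $a,b\in MN$, the set $aM\cap bN$ is non-empty because $b^{-1}a\in MN=NM$, so some $x\in MN$ satisfies $xM=aM$ and $xN=bN$. Since regular epimorphisms of groups are the surjective homomorphisms and the pullback of $\tfrac{MN}{M}\to 0\leftarrow \tfrac{MN}{N}$ is the product, this surjectivity is exactly the condition for \eqref{Diagram-Non-Exact} to be a double extension in this case.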
\subsection{Stability under regular images} We proved in Theorem~\ref{Proposition-Commutator-Properties} that
\begin{equation}\label{Identity-Stable-Images}
p[M,N]_{\ensuremath{\mathcal{B}}}=[pM,pN]_{\ensuremath{\mathcal{B}}}
\end{equation}
for any regular epimorphism $p\colon A\to B$ and normal subobjects $M$ and $N$ of~$A$ such that either $K[p]\leq M\wedge N$ or $M\leq N$. As noted in Remark~\ref{Remark-Preservation}, this identity need not hold for arbitrary $p$, $M$ and $N$. However, we know from~\cite{Huq} that~\eqref{Identity-Stable-Images} \emph{does} hold for arbitrary $p$, $M$ and $N$ if $\ensuremath{\mathcal{B}}=\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}$, and the same is true, for instance, for the Peiffer commutator of precrossed modules or the associator of loops (considered in~\cite{EverVdL4}). This suggests looking for necessary and sufficient conditions on the Birkhoff subcategory $\ensuremath{\mathcal{B}}$ for $[-,-]_{\ensuremath{\mathcal{B}}}$ to be \emph{stable under regular images}, i.e., for the identity~\eqref{Identity-Stable-Images} to hold for any regular epimorphism $p\colon A\to B$ and any normal subobjects $M$ and $N$ of $A$. We do not have a satisfactory answer to this question, although a characterisation of such $\ensuremath{\mathcal{B}}$ in the case of $\Omega$-groups was given in~\cite{EveraertCommutator}, in terms of the identities that define the subvariety $\ensuremath{\mathcal{B}}$. Let us just recall here the following necessary condition, again taken from the article~\cite{EveraertCommutator}: we need the subcategory $\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}$ of abelian objects of~$\ensuremath{\mathcal{A}}$ to be contained in~$\ensuremath{\mathcal{B}}$. Indeed, if we assume that the relative commutator $[-,-]_{\ensuremath{\mathcal{B}}}$ is stable under regular images, and that $A$ is an abelian object with ``multiplication'' $\pi\colon {A\times A\to A}$, then
\begin{align*}
[A,A]_{\ensuremath{\mathcal{B}}} &= \Bigl[\pi \bigl(A\times 0\bigr), \pi \bigl(0 \times A\bigr)\Bigr]_{\ensuremath{\mathcal{B}}} = \pi \Bigl(\bigl[A \times 0, 0 \times A\bigr]_{\ensuremath{\mathcal{B}}}\Bigr)\\
&\subseteq \pi\Bigl(\bigl(A \times 0\bigr) \wedge \bigl(0 \times A\bigr)\Bigr) = 0.
\end{align*}
However, the converse is not true. The condition $\ensuremath{\mathcal{B}}\supseteq\ensuremath{\mathsf{Ab}}\ensuremath{\mathcal{A}}$ does \emph{not} imply the stability under regular images of $[-,-]_{\ensuremath{\mathcal{B}}}$; a counterexample was given in~\cite{EveraertCommutator}. A similar question may be asked with respect to preservation of joins, see Remark~\ref{Remark-Preservation}.
\subsection{Higher dimensions} In this article, we considered what we have called zero-dimensional, one-dimensional and two-dimensional relative commutators, but what about higher dimensions? Keeping in mind examples such as the associator of loops, this does not seem to be an unreasonable question to ask. Let us write $[L,M,N]_{\ensuremath{\mathcal{B}}}$ for a three-dimensional relative commutator defined on triples of normal subobjects $L$, $M$, $N$ of an object $A$ of $\ensuremath{\mathcal{A}}$, with respect to a Birkhoff subcategory $\ensuremath{\mathcal{B}}$ of $\ensuremath{\mathcal{A}}$. Then, for instance, if $\ensuremath{\mathcal{A}}$ is the variety of loops and $\ensuremath{\mathcal{B}}$ is the subvariety of groups, we would like that
\[
[L,M,N]_{\ensuremath{\mathcal{B}}}=[L,M,N],
\]
where the right hand side is the associator of loops considered in~\cite{EverVdL4}.
It is not clear to us what would be the appropriate definition of $n$-dimen\-sion\-al relative commutator (for $n\geq 3$), or whether it is even possible to obtain a convenient theory.
{ "timestamp": "2011-08-22T02:01:01", "yymm": "1108", "arxiv_id": "1108.3909", "language": "en", "url": "https://arxiv.org/abs/1108.3909", "abstract": "Basing ourselves on the concept of double central extension from categorical Galois theory, we study a notion of commutator which is defined relative to a Birkhoff subcategory B of a semi-abelian category A. This commutator characterises Janelidze and Kelly's B-central extensions; when the subcategory B is determined by the abelian objects in A, it coincides with Huq's commutator; and when the category A is a variety of omega-groups, it coincides with the relative commutator introduced by the first author.", "subjects": "Category Theory (math.CT); Rings and Algebras (math.RA)", "title": "Relative Commutator Theory in Semi-Abelian Categories", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.974042647323258, "lm_q2_score": 0.7279754489059775, "lm_q1q2_score": 0.7090791334387154 }
https://arxiv.org/abs/2211.07862
Improved expected $L_2$-discrepancy formulas on jittered sampling
We study the expected $L_2$-discrepancy under two classes of partitions; an explicit and an exact formula are derived, respectively. These results give smaller expected $L_2$-discrepancy than jittered sampling.
\section{Introduction}\label{intro}
It is well known that classical jittered sampling (JS) patterns perform better than traditional Monte Carlo (MC) patterns in terms of convergence order, see \cite{jittsamp,ZD2016,KP2}. In other words, stratified sampling is a refinement of the traditional Monte Carlo method. Studying it requires a measure of the irregularity of a point distribution; volume partitions are used for this purpose, which leads us to the definition of the $L_2$-discrepancy.

\textbf{$L_{2}$-discrepancy}. The $L_{2}$-discrepancy of a sampling set $P_{N, d}=\{t_{1}, t_{2}, \ldots , t_{N}\}$ is defined by
\begin{equation}\label{lpdefn}
L_{2}(D_{N},P_{N, d})=\Big(\int_{[0,1]^{d}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(t_{i})|^{2}dz\Big)^{1/2},
\end{equation}
where $\lambda$ denotes the Lebesgue measure and $\mathbf{1}_{A}$ denotes the characteristic function of the set $A$. For applications of the $L_{2}$-discrepancy, see \cite{Dick2005,Dick2006,Dick2014,Dick2020}. If we introduce the counting measure $\#$, \eqref{lpdefn} can also be expressed as
\begin{equation}
L_{2}(D_{N},P_{N, d})=\Big(\int_{[0,1]^{d}}|\lambda([0,z))- \frac{1}{N}\#\big(P_{N, d}\cap[0,z)\big)|^{2}dz\Big)^{1/2},
\end{equation}
where $\#\big(P_{N, d}\cap[0,z)\big)$ denotes the number of points falling into the set $[0,z).$ To simplify the expression of the $L_{2}$-discrepancy, we employ the discrepancy function $\Delta(P_{N,d},z)$ defined by
\begin{equation}\label{disfun1}
\Delta(P_{N,d},z)=\lambda([0,z))- \frac{1}{N}\#\big(P_{N, d}\cap[0,z)\big).
\end{equation}
The discrepancy function \eqref{disfun1} measures the difference between the fraction of the unit cube occupied by the test box $[0,z)$ and the fraction of sample points falling into that box. Its $L_2$-norm (the $L_2$-discrepancy) therefore measures the uniformity of a random sampling point set. Accordingly, the $L_2$-discrepancy can be extended to a fixed compact convex set $K\subset \mathbb{R}^{d}$ with $\lambda(K)>0,$ see \cite{KP}. The discrepancy function in \eqref{disfun1} of a finite set of points $P=\{x_1,x_2,\ldots,x_N\}\subset K$ is then given by
\begin{equation}\label{deltapx}
\Delta(P,x)=\frac{\lambda\big((-\infty,x]\cap K\big)}{\lambda(K)}- \frac{1}{N}\#\big(P\cap(-\infty,x]\big).
\end{equation}
The $L_2$-discrepancy is well established and has found applications in number theory, see \cite{Dick2005,Dick2006,Dick2014,Dick2020}. Special deterministic point set constructions are called \textbf{low discrepancy point sets}; the best known asymptotic upper bounds on the $L_2$-discrepancy of such point sets are of the form
$$O(\frac{(\ln N)^{\frac{d-1}{2}}}{N}).$$
In the present paper, we introduce random factors into the study of the $L_2$-discrepancy. Extensive research has been conducted on the $L_2$-discrepancy under random sampling. In \cite{jittsamp}, F. Pausinger and S. Steinerberger proved that the expected $L_2$-discrepancy of a point set generated from any equivolume partition is no larger than that of simple random sampling; the strengthening to `strictly smaller' was given in \cite{KP2}. The explicit expected $L_2$-discrepancy under simple random sampling $P_N$ is
\begin{equation}\label{srsfor}
\mathbb{E}L_{2}^2(D_{N},P_{N})=\frac{1}{N}[\frac{1}{2^d}-\frac{1}{3^d}],
\end{equation}
for the corresponding sampling manner, see Figure \ref{srs0}.
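Formula \eqref{srsfor} is easy to check numerically. Expanding the square inside the integral in \eqref{lpdefn} and integrating term by term yields the classical closed form (Warnock's formula)
$$
L_{2}^2(D_{N},P_{N,d})=\frac{1}{3^d}-\frac{2}{N}\sum_{i=1}^{N}\prod_{k=1}^{d}\frac{1-t_{i,k}^2}{2}+\frac{1}{N^2}\sum_{i,j=1}^{N}\prod_{k=1}^{d}\big(1-\max(t_{i,k},t_{j,k})\big),
$$
where $t_{i,k}$ is the $k$-th coordinate of $t_i$. The following sketch (in Python; the function names and sample sizes are ours and purely illustrative) uses this closed form to estimate the left-hand side of \eqref{srsfor} by averaging over independent simple random samples.
\begin{verbatim}
import numpy as np

def l2_discrepancy_sq(points):
    """Squared L2 star discrepancy via Warnock's formula.
    points: array of shape (N, d) with entries in [0, 1]."""
    N, d = points.shape
    term1 = 3.0 ** (-d)
    term2 = np.prod((1.0 - points ** 2) / 2.0, axis=1).sum() * 2.0 / N
    # pairwise products of (1 - max(t_i, t_j)) over the d coordinates
    pair_max = np.maximum(points[:, None, :], points[None, :, :])
    term3 = np.prod(1.0 - pair_max, axis=2).sum() / N ** 2
    return term1 - term2 + term3

def expected_l2_sq_mc(N, d, reps=2000, seed=0):
    """Monte Carlo estimate of E L_2^2 for simple random sampling."""
    rng = np.random.default_rng(seed)
    vals = [l2_discrepancy_sq(rng.random((N, d))) for _ in range(reps)]
    return np.mean(vals)

if __name__ == "__main__":
    N, d = 16, 2
    print("simulated:", expected_l2_sq_mc(N, d))
    print("formula  :", (1.0 / N) * (2.0 ** (-d) - 3.0 ** (-d)))
\end{verbatim}
For instance, with $N=16$ and $d=2$ the simulated value should be close to $\frac{1}{16}(\frac{1}{4}-\frac{1}{9})\approx 0.0087$.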
Jittered sampling, however, has been shown not to be the optimal equivolume partition manner under the $L_2$-discrepancy measure; this was proved in \cite{KP}. Recently, an explicit expected $L_2$-discrepancy formula for jittered sampling was given in \cite{kirk2022expected}, namely
\begin{equation}\label{jittfor}
\mathbb{E}L_{2}^2(D_{N},P_{\Omega})=\frac{1}{m^{2d}}[(\frac{m-1}{2}+\frac{1}{2})^d-(\frac{m-1}{2}+\frac{1}{3})^d],
\end{equation}
where $N=m^d$ and $P_{\Omega}$ is the jittered sampling set; for the corresponding partition, see Figure \ref{jss0}. Combining this explicit formula with the partition manner $\Omega^{*}_{\backslash}$ of \cite{KP}, one can prove
\begin{equation}\label{pamin9}
\mathbb{E}L_{2}^2(D_{N},P_{\Omega^{*}_{\backslash}})=\frac{1}{m^{2d}}[(\frac{m-1}{2}+\frac{1}{2})^d-(\frac{m-1}{2}+\frac{1}{3})^d]-\frac{2}{5}\cdot\frac{1}{3^d}\cdot\frac{1}{m^{3d}},
\end{equation}
where $N=m^d$; for the corresponding partition, see Figure \ref{ss000}. In the present paper, we study \textbf{a class of unequivolume partitions}, see Figure \ref{unpar}, which achieves a smaller expected $L_2$-discrepancy than both jittered sampling and the stratified sampling sets formed by the partition manner of \cite{KP}; an explicit formula is given.

The rest of this paper is organized as follows. Section \ref{prelim} presents preliminaries on random sampling. Section \ref{sec3} presents our main result, which provides the explicit expected $L_2$-discrepancy for a certain class of partitions. Section \ref{Prf} contains the proofs of the main results. Finally, in Section \ref{conclu} we conclude the paper with a short summary.

\section{Preliminaries on random sampling}\label{prelim}
Before introducing the main result, we list the preliminaries used in this paper.

\subsection{Simple random sampling}
In a sense, simple random sampling is Monte Carlo sampling: a uniformly distributed point set is selected in $[0,1]^{d}$, see Figure \ref{srs0}. For the discrepancy under simple random sampling, see \cite{aistleitner2014, Gnewuch2020}.
\begin{figure*}[h]
\centering
\subfigure[Simple random sampling in two dimensions]{
\begin{minipage}{7cm}
\centering
\includegraphics[width=0.6\textwidth]{srs.png}
\end{minipage}
}
\subfigure[Simple random sampling in three dimensions]{
\begin{minipage}{7cm}
\centering
\includegraphics[width=0.7\textwidth]{srs3d.png}
\end{minipage}
}
\caption{\label{srs0} Simple random sampling.}
\end{figure*}

\subsection{Jittered sampling}
Jittered sampling is a type of grid-based equivolume partition: $[0,1]^{d}$ is divided into $m^d$ axis-parallel boxes $Q_{i},1\leq i\leq N,$ each with sides of length $\frac{1}{m}$; see the illustration in Figure \ref{jss0}. Research on jittered sampling is extensive, see \cite{shirely1994,Doerr2,KP,KP2,jittsamp}.
\begin{figure*}[h]
\centering
\subfigure[Jittered sampling in two dimensions]{
\begin{minipage}{7cm}
\centering
\includegraphics[width=0.6\textwidth]{jittsamp0.png}
\end{minipage}
}
\subfigure[Jittered sampling in three dimensions]{
\begin{minipage}{7cm}
\centering
\includegraphics[width=0.7\textwidth]{js3d.png}
\end{minipage}
}
\caption{\label{jss0} Jittered sampling formed by an isometric grid partition.}
\end{figure*}
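The jittered construction is straightforward to implement, and \eqref{jittfor} can be checked empirically by averaging the squared $L_2$-discrepancy over independent jittered samples. The following sketch (in Python; the function names are ours, and it repeats the Warnock-formula routine from the previous listing so as to be self-contained) does this for small $m$ and $d$.
\begin{verbatim}
import itertools
import numpy as np

def l2_discrepancy_sq(points):
    # Warnock's closed form, as in the previous listing
    N, d = points.shape
    term2 = np.prod((1.0 - points ** 2) / 2.0, axis=1).sum() * 2.0 / N
    pair_max = np.maximum(points[:, None, :], points[None, :, :])
    term3 = np.prod(1.0 - pair_max, axis=2).sum() / N ** 2
    return 3.0 ** (-d) - term2 + term3

def jittered_sample(m, d, rng):
    # one uniform point in each of the m^d axis-parallel cubes of side 1/m
    corners = np.array(list(itertools.product(range(m), repeat=d))) / m
    return corners + rng.random(corners.shape) / m

def jittered_formula(m, d):
    # right-hand side of the explicit formula for E L_2^2 under jittered sampling
    return (((m - 1) / 2 + 1 / 2) ** d - ((m - 1) / 2 + 1 / 3) ** d) / m ** (2 * d)

if __name__ == "__main__":
    m, d, reps = 3, 2, 2000
    rng = np.random.default_rng(0)
    sim = np.mean([l2_discrepancy_sq(jittered_sample(m, d, rng)) for _ in range(reps)])
    print("simulated:", sim)
    print("formula  :", jittered_formula(m, d))
\end{verbatim}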
\subsection{Partition model in \cite{KP}}
Starting from the grid-based equivolume partition in two dimensions, the two squares in the upper right corner are merged to form a rectangle
$$I=[a_1,a_1+2b]\times [a_2,a_2+b],$$
where $a_1,a_2,b$ are positive constants. The diagonal of $I$ is taken as the partition line, which yields a special partition manner. We denote it by
$$\Omega_{\backslash}=(\Omega_{1,\backslash},\Omega_{2,\backslash},Q_3,\ldots,Q_{N}),$$
where $\Omega_{2,\backslash}=I\setminus\Omega_{1,\backslash}$.
\begin{figure*}[h]
\centering
\subfigure[]{
\begin{minipage}{7cm}
\centering
\includegraphics[width=0.6\textwidth]{pmin29.png}
\end{minipage}
}
\subfigure[]{
\begin{minipage}{7cm}
\centering
\includegraphics[width=0.7\textwidth]{jmt1.png}
\end{minipage}
}
\caption{\label{ss000} The partition model designed in \cite{KP}.}
\end{figure*}

\subsection{Class of partition models}
Consider again the rectangle
$$I=[a_1,a_1+2b]\times [a_2,a_2+b],$$
where $a_1,a_2,b$ are positive constants. A straight line is now used to divide the rectangle into two parts: the line is parallel to the diagonal of $I$, and the distance from its intersection point $Q$ with the boundary to the upper right corner of $I$ is $b\in (0,\frac{2}{m})$; the lower left corner of $I$ is denoted by $O'$, see Figure \ref{unpar}. For convenience of notation, we denote this partition model by
$$\Omega_{b,\sim}=(\Omega_{1,b,\sim},\Omega_{2,b,\sim},Q_3,\ldots,Q_{N}),$$
where $\Omega_{2,b,\sim}=I\setminus\Omega_{1,b,\sim}$.
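To make the construction concrete, the following sketch (in Python; the helper names are ours) draws one stratified sample from $\Omega^{*}_{b,\sim}$ for the specific placement used in Section \ref{sec3}, namely $a_1=\frac{m-2}{m}$, $a_2=\frac{m-1}{m}$ and cells of side $\frac{1}{m}$; the dividing line $z_2=\frac{3}{2}-\frac{b}{2}-\frac{z_1}{2}$ is the one written out in the proofs of Section \ref{Prf}. Each of the $N-2$ ordinary grid cells receives one uniform point, and the two pieces of the merged rectangle are sampled by rejection.
\begin{verbatim}
import itertools
import numpy as np

def in_small_piece(z, b):
    """Piece of I cut off at the upper right corner by the line
    z_2 = 3/2 - b/2 - z_1/2 (first two coordinates only)."""
    return z[1] > 1.5 - b / 2.0 - z[0] / 2.0

def sample_partition_b(m, d, b, seed=None):
    """One point per stratum of the partition Omega*_{b,~} of [0,1]^d (d >= 2)."""
    rng = np.random.default_rng(seed)
    h = 1.0 / m
    pts = []
    # the two grid cells merged into I = [1-2h, 1] x [1-h, 1] x [1-h, 1]^{d-2}
    merged = {(m - 2,) + (m - 1,) * (d - 1), (m - 1,) * d}
    for cell in itertools.product(range(m), repeat=d):
        if cell in merged:
            continue
        pts.append(np.array(cell) * h + rng.random(d) * h)
    lo = np.array([1 - 2 * h] + [1 - h] * (d - 1))
    size = np.array([2 * h] + [h] * (d - 1))
    for want_small in (True, False):  # one point in each of the two pieces of I
        while True:
            z = lo + rng.random(d) * size
            if in_small_piece(z, b) == want_small:
                pts.append(z)
                break
    return np.array(pts)

if __name__ == "__main__":
    P = sample_partition_b(m=3, d=2, b=0.5, seed=0)
    print(P.shape)  # (m**d, d)
\end{verbatim}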
\begin{figure}[H]
\centering
\includegraphics[width=0.40\textwidth]{pqb.png}
\caption{\label{unpar} Unequivolume partition}
\end{figure}
Now, consider the $d$-dimensional cuboid
\begin{equation}\label{eq28}
I_d=I\times\prod_{i=3}^{d}[a_i,a_i+b]
\end{equation}
and its two partitions $\Omega'_{\backslash}=(\Omega'_{1,\backslash},\Omega'_{2,\backslash})$ and $\Omega'_{b,\sim}=(\Omega'_{1,b,\sim},\Omega'_{2,b,\sim})$ into two closed convex bodies with
\begin{equation}\label{o1blaksla}
\Omega'_{1,\backslash}= \Omega_{1,\backslash}\times \prod_{i=3}^{d}[a_i,a_i+b],
\end{equation}
and
$$
\begin{aligned}
\Omega'_{1,b,\sim}= \Omega_{1,b,\sim}\times \prod_{i=3}^{d}[a_i,a_i+b].
\end{aligned}
$$
We choose $a_1=\frac{m-2}{m}$, $a_2=\frac{m-1}{m}$ and $b=\frac{1}{m}$ in $\Omega'_{1,\backslash}$ and $\Omega'_{1,b,\sim}$, denote the resulting sets by $\Omega^{*}_{1,\backslash}$ and $\Omega^{*}_{1,b,\sim}$, and obtain the partitions
\begin{equation}\label{omga1}
\Omega^{*}_{\backslash}=(\Omega^{*}_{1,\backslash},\Omega^{*}_{2,\backslash},Q_3,\ldots,Q_{N}),
\end{equation}
and
\begin{equation}\label{omgaBsim}
\Omega^{*}_{b,\sim}=(\Omega^{*}_{1,b,\sim},\Omega^{*}_{2,b,\sim},Q_3,\ldots,Q_{N}).
\end{equation}
\section{Explicit Expected $L_2$-discrepancy for stratified random sampling formed by a class of partitions}\label{sec3}
In this section, an explicit expected $L_2$-discrepancy formula is given for the class of partition models of Subsection 2.4.
\begin{thm}\label{th33}
Let $\Omega^{*}_{b,\sim}$ be the above partition of $[0,1]^d$, and let $m\ge 2$ and $b\in[\frac{3}{2m},\frac{2}{m})$. Then
\begin{equation}\label{eq35}
\mathbb{E}L_{2}^2(D_{N},P_{\Omega^{*}_{b,\sim}})=\frac{1}{m^{2d}}[(\frac{m-1}{2}+\frac{1}{2})^d-(\frac{m-1}{2}+\frac{1}{3})^d]-\frac{P_0(b)}{2^d\cdot m^{3d}}-\frac{P_1(b)}{3^d\cdot m^{3d}},
\end{equation}
where
$$P_0(b)=\frac{8-m^2b^2}{3}-\frac{16}{24-3m^2b^2},$$
$$P_1(b)=\frac{m^4b^4}{40}+\frac{114m^2b^2}{40}+\frac{19}{5}-\frac{6m^3b^3-3m^5b^5+352}{40-5m^2b^2}.$$
\end{thm}
\begin{rem}
Formula \eqref{eq35} gives an explicit result for the model of Subsection 2.4, in which the dividing line lies above the diagonal of $I$. The same argument can be applied to a dividing line below the diagonal; this, however, does not give better results than the partition model of Subsection 2.3. Note that the parameter satisfies $b\in[\frac{3}{2m},\frac{2}{m})$; within this range, and for any $d$, we obtain a class of partitions with a smaller expected $L_2$-discrepancy than the one obtained from the partition model of Subsection 2.3.
\end{rem}
\begin{rem}
The function $P_0(b)$ is decreasing on $(0,\frac{2}{m})$ and vanishes at $b=\frac{2}{m}$, hence $P_0(b)>0$ on this interval. Subtracting \eqref{pamin9} from \eqref{eq35} gives
$$-\frac{P_0(b)}{2^d\cdot m^{3d}}-\frac{P_1(b)}{3^d\cdot m^{3d}}+\frac{2}{5\cdot 3^d\cdot m^{3d}}.$$
Since $-\frac{P_0(b)}{2^d\cdot m^{3d}}<0$, it remains to consider the sign of $\frac{2}{5}-P_1(b)$. For $b\in [\frac{3}{2m},\frac{2}{m})$ we can ensure that this quantity is negative, which proves a smaller expected $L_2$-discrepancy; for $b\in (0,\frac{3}{2m})$ the situation is undetermined, and \eqref{eq35} may need to be considered as a whole, depending on the dimension $d$.
\end{rem}
\begin{cor}\label{uniformnoise0}
Let $m,d\in \mathbb{N}$ with $m\ge 2$ and $b\in[\frac{3}{2m},\frac{2}{m})$. Let $N=m^d$ and let $P_{\Omega}=\{x_1,x_2,\ldots,x_N\}$ be a jittered sampling set. Let the stratified random $d$-dimensional point sets $P_{\Omega^{*}_{b,\sim}}=\{s_1,s_2,\ldots,s_N\}$ and $P_{\Omega^{*}_{\backslash}}=\{y_1,y_2,\ldots,y_N\}$ be uniformly distributed in the strata of $\Omega^{*}_{b,\sim}$ and $\Omega^{*}_{\backslash}$, respectively. Then
\begin{equation}\label{formula31}
\mathbb{E}L_{2}^2(D_{N},P_{\Omega^{*}_{b,\sim}})<\mathbb{E}L_{2}^2(D_{N},P_{\Omega^{*}_{\backslash}})< \mathbb{E}L_{2}^2(D_{N},P_{\Omega}).
\end{equation} \end{cor} \begin{rem} In \eqref{eq35}, Let $b\in [\frac{3}{2m},\frac{2}{m})$, for any fixed $d$, using \eqref{eq35} minus \eqref{pamin9}, easy to show that the desired result. \end{rem} \section{Proofs}\label{Prf} We prove Theorem 3.1 by the definition of $L_2-$discrepancy, we first give some lemmas. \begin{lma} Let $\Omega^{*}_{b,\sim}$ of $[0,1]^d$ and $m\ge 2, b\in[\frac{3}{2m},\frac{2}{m}]$, stratified random point sets $P_{\Omega^{*}_{b,\sim}}=\{s_1,s_2,\ldots,s_N\}$, assume $I_{\sim}=\Omega^{*}_{1,b,\sim}\cup\Omega^{*}_{2,b,\sim}$, if we divide $I_{\sim}$ into three area $I,II,III$ as Figure \ref{sptI}, then for $z\in I\cup II$ we have \begin{equation} \begin{aligned} &\int_{P_{\Omega^{*}_{b,\sim}}}\int_{I+II}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(s_{i})|^{2}dzd\omega-\int_{I+II}\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)\setminus [O',z)}(s_{i}))dz\\&=[(-\frac{b^4m^4}{24}+\frac{2b^3m^3}{3}-2b^2m^2+4)\cdot \frac{4}{8-b^2m^2}]\cdot (\frac{1}{2m^3})^d\\&+[(\frac{-b^2m^2}{8-b^2m^2})\cdot(-\frac{b^6m^6}{160}+\frac{3b^5m^5}{20}-\frac{3b^4m^4}{2}+6b^3m^3-9b^2m^2+8)]\cdot (\frac{1}{3m^3})^{d}, \end{aligned} \end{equation} where $\text{Var}$ denotes the variance. \end{lma} \begin{lma} Let $\Omega^{*}_{b,\sim}$ of $[0,1]^d$ and $m\ge 2, b\in[\frac{3}{2m},\frac{2}{m}]$, stratified random point sets $P_{\Omega^{*}_{b,\sim}}=\{s_1,s_2,\ldots,s_N\}$, assume $I_{\sim}=\Omega^{*}_{1,b,\sim}\cup\Omega^{*}_{2,b,\sim}$, if we divide $I_{\sim}$ into three area $I,II,III$ as Figure \ref{sptI}, then for $z\in III$ we have \begin{equation} \begin{aligned} &\int_{P_{\Omega^{*}_{b,\sim}}}\int_{III}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(s_{i})|^{2}dzd\omega-\int_{III}\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)\setminus [O',z)}(s_{i}))dz\\&=[\frac{24b^2m^2-8b^3m^3}{24-3b^2m^2})\cdot \frac{4}{8-b^2m^2}+\frac{b^2m^2}{6}]\cdot (\frac{1}{2m^3})^d\\&+[(1-\frac{8}{b^2m^2})\cdot\frac{9b^6m^6}{960}-\frac{1}{8-b^2m^2}\cdot(\frac{b^8m^8}{80}-\frac{3b^7m^7}{40}+\frac{9b^6m^6}{8}-6b^5m^5+9b^4m^4)\\&+2(1-\frac{4}{b^2m^2})(1-\frac{4}{8-b^2m^2})\cdot(-\frac{9b^6m^6}{1152}-\frac{9b^5m^5}{240}+\frac{9b^4m^4}{48})]\cdot (\frac{1}{3m^3})^{d}, \end{aligned} \end{equation} where $\text{Var}$ denotes the variance. \end{lma} \begin{figure}[H] \centering \includegraphics[width=0.40\textwidth]{tub.png} \caption{Division of the integral region I}\label{sptI} \end{figure} \textbf{Proof of Lemma 4.1} Let $\Omega_z=[0,z)\setminus[O',z)$, then we have \begin{equation*} \begin{aligned}&\int_{P_{\Omega^{*}_{b,\sim}}}\int_{I+II}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(s_{i})|^{2}dzd\omega\\&=\int_{P_{\Omega^{*}_{b,\sim}}}\int_{I+II}|\lambda(\Omega_z)+\lambda([O',z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{\Omega_z}(s_{i})- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(s_{i})|^{2}dzd\omega. \end{aligned} \end{equation*} Therefore, we compute \begin{equation}\label{hfcwgqyfb} \begin{aligned} &\int_{P_{\Omega^{*}_{b,\sim}}}\int_{I+II}|\lambda(\Omega_z)+\lambda([O',z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{\Omega_z}(s_{i})- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(s_{i})|^{2}dz d\omega\\&=\int_{I+II}\int_{P_{\Omega^{*}_{b,\sim}}}|\lambda(\Omega_z)+\lambda([O',z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{\Omega_z}(s_{i})- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(s_{i})|^{2} d\omega dz. 
\end{aligned} \end{equation} Besides, \begin{equation}\label{tjy1} \mathbb{E}(\frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{\Omega_z}(s_{i}))=\lambda(\Omega_z), \end{equation} and \begin{equation}\label{tjy2} \mathbb{E}(\frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(s_{i}))=\frac{4m^{d-2}}{8m^{d-2}-Nb^2}\cdot\lambda([O',z)). \end{equation} If we set $$\mathbb{P}(s_i\in V_i)=\frac{\lambda(V_i)}{\lambda(\Omega^{*}_{i,b,\sim})}$$ for $i=1,2,$ and $$\mathbb{P}(s_i\in U_i)=\frac{\lambda(U_i)}{\lambda(Q_i)}$$ for $i=3,4,\ldots,N.$ Then, \eqref{hfcwgqyfb} can be converted to \begin{equation}\label{befcaozbf} \begin{aligned} &\int_{I+II}\int_{P_{\Omega^{*}_{b,\sim}}}| \mathbb{E}(\frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{\Omega_z}(s_{i}))+\mathbb{E}(\frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(s_{i}))+(1-\frac{4m^{d-2}}{8m^{d-2}-Nb^2})\cdot\lambda([O',z))\\&- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{\Omega_z}(s_{i})- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(s_{i})|^{2} d\omega dz\\&=\int_{I+II}\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)\setminus [O',z)}(s_{i}))dz+\int_{I+II}\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(s_{i}))\\&+|(1-\frac{4m^{d-2}}{8m^{d-2}-Nb^2})\cdot\lambda([O',z))|^2dz. \end{aligned} \end{equation} \begin{equation*} \text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(s_{i}))=\frac{1}{N^2}\cdot\frac{\lambda([O',z)])}{\lambda(I+II)}\cdot(1-\frac{\lambda([O',z)])}{\lambda(I+II)}). \end{equation*} \begin{equation*} \lambda(I+II)=\frac{2}{N}-\frac{b^2}{4}\cdot\frac{1}{m^{d-2}}. \end{equation*} Thus, \begin{equation}\label{gs1} \begin{aligned} &\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(s_{i}))+|(1-\frac{4m^{d-2}}{8m^{d-2}-Nb^2})\cdot\lambda([O',z))|^2\\&=\frac{1}{N}\cdot \frac{4}{8-m^2b^2}\cdot \lambda([O',z)])+(1-\frac{8}{8-m^2b^2})\cdot\lambda^2([O',z)]). \end{aligned} \end{equation} The equation of the dividing line is the following \begin{equation*} Z_2=-\frac{1}{2}Z_1+\frac{3}{2}-\frac{b}{2}. \end{equation*} Thus, \begin{equation*} \begin{aligned} \int_{I}\lambda([O',z))&=\int_{1-\frac{2}{m}}^{1-b}\int_{1-\frac{1}{m}}^{1}(Z_1-1+\frac{2}{m})(Z_2-1+\frac{1}{m})dZ_1dZ_2\cdot\\&\int_{1-\frac{1}{m}}^{1}\int_{1-\frac{1}{m}}^{1}\ldots \int_{1-\frac{1}{m}}^{1}\prod_{i=3}^{d}(Z_i-1+\frac{1}{m})\\&=(b^2m^2-4bm+4)\cdot (\frac{1}{2m^2})^{d}. \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} \int_{II}\lambda([O',z))&=\int_{1-b}^{1}\int_{1-\frac{1}{m}}^{-\frac{Z_1}{2}+\frac{3}{2}-\frac{b}{2}}(Z_1-1+\frac{2}{m})(Z_2-1+\frac{1}{m})dZ_1dZ_2\cdot\\&\int_{1-\frac{1}{m}}^{1}\int_{1-\frac{1}{m}}^{1}\ldots \int_{1-\frac{1}{m}}^{1}\prod_{i=3}^{d}(Z_i-1+\frac{1}{m})\\&=(-\frac{b^4m^4}{24}+\frac{2b^3m^3}{3}-3b^2m^2+4bm)\cdot (\frac{1}{2m^2})^{d}. \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} \int_{I}\lambda^2([O',z))&=\int_{1-\frac{2}{m}}^{1-b}\int_{1-\frac{1}{m}}^{1}(Z_1-1+\frac{2}{m})^2(Z_2-1+\frac{1}{m})^2dZ_1dZ_2\cdot\\&\int_{1-\frac{1}{m}}^{1}\int_{1-\frac{1}{m}}^{1}\ldots \int_{1-\frac{1}{m}}^{1}(\prod_{i=3}^{d}(Z_i-1+\frac{1}{m}))^2\\&=(-bm^3+6b^2m^2-12bm+8)\cdot (\frac{1}{3m^3})^{d}. \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} \int_{II}\lambda^2([O',z))&=\int_{1-b}^{1}\int_{1-\frac{1}{m}}^{-\frac{Z_1}{2}+\frac{3}{2}-\frac{b}{2}}(Z_1-1+\frac{2}{m})^2(Z_2-1+\frac{1}{m})^2dZ_1dZ_2\cdot\\&\int_{1-\frac{1}{m}}^{1}\int_{1-\frac{1}{m}}^{1}\ldots \int_{1-\frac{1}{m}}^{1}\prod_{i=3}^{d}(Z_i-1+\frac{1}{m})\\&=(-\frac{b^6m^6}{160}+\frac{3b^5m^5}{20}-\frac{3b^4m^4}{2}+7b^3m^3-15b^2m^2+12bm)\cdot (\frac{1}{3m^3})^{d}. 
\end{aligned} \end{equation*} Combining with \eqref{befcaozbf} and \eqref{gs1}, we obtain the desired result. \textbf{Proof of Lemma 4.2} For area $z\in$III, we have, $$\lambda(\Omega_{2,1})=\frac{1}{4}(Z_1+2Z_2+b-3)^2\cdot\prod_{i=3}^{d}(Z_i-1+\frac{1}{m}),$$ and $$\lambda(\Omega_{2,2})=(Z_1-1+\frac{2}{m})(Z_2-1+\frac{1}{m})\cdot\prod_{i=3}^{d}(Z_i-1+\frac{1}{m})-\lambda(\Omega_{2,1}).$$ \begin{figure}[H] \centering \includegraphics[width=0.40\textwidth]{tubIII.png} \caption{Division of the integral region II}\label{sptII} \end{figure} Furthermore, we have \begin{equation}\label{gs3} \begin{aligned} &\int_{P_{\Omega^{*}_{b,\sim}}}\int_{III}|\lambda(\Omega_z)+\lambda([O',z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{\Omega_z}(s_{i})- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(s_{i})|^{2}dz d\omega\\&=\int_{III}\int_{P_{\Omega^{*}_{b,\sim}}}|\lambda(\Omega_z)+\lambda([O',z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{\Omega_z}(s_{i})- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(s_{i})|^{2} d\omega dz\\&\int_{III}\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)\setminus [O',z)}(s_{i}))dz+\int_{III}\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(s_{i}))\\&+|(1-\frac{4m^{d-2}}{Nb^2})\cdot\lambda(\Omega_{2,1})+(1-\frac{4m^{d-2}}{8m^{d-2}-Nb^2})\cdot\lambda(\Omega_{2,2})|^2dz. \end{aligned} \end{equation} Besides, \begin{equation}\label{gs2} \begin{aligned} &\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(s_{i}))+|(1-\frac{4m^{d-2}}{Nb^2})\cdot\lambda(\Omega_{2,1})+(1-\frac{4m^{d-2}}{8m^{d-2}-Nb^2})\cdot\lambda(\Omega_{2,2})|^2\\&=\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(s_{i}))+|(1-\frac{4m^{d-2}}{Nb^2})\cdot\lambda(\Omega_{2,1})|^2+|(1-\frac{4m^{d-2}}{8m^{d-2}-Nb^2})\cdot\lambda(\Omega_{2,2})|^2\\&+2((1-\frac{4m^{d-2}}{Nb^2})\cdot\lambda(\Omega_{2,1}))((1-\frac{4m^{d-2}}{8m^{d-2}-Nb^2})\cdot\lambda(\Omega_{2,2}))\\&=\frac{4}{Nm^2b^2}\lambda(\Omega_{2,1})+\frac{4}{8N-Nm^2b^2}\lambda(\Omega_{2,2})+(1-\frac{8}{m^2b^2})\lambda^2(\Omega_{2,1})+(1-\frac{8}{8-m^2b^2})\lambda^2(\Omega_{2,2})\\&+2((1-\frac{4m^{d-2}}{Nb^2})\cdot\lambda(\Omega_{2,1}))((1-\frac{4m^{d-2}}{8m^{d-2}-Nb^2})\cdot\lambda(\Omega_{2,2})). \end{aligned} \end{equation} Furthermore, we have, \begin{equation*} \begin{aligned} &\int_{III}\lambda(\Omega_{2,2})dZ_1dZ_2\ldots Z_d=\frac{-2b^3m^3+6b^2m^2}{3}(\frac{1}{2m^2})^{d}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} &\int_{III}\lambda^2(\Omega_{2,2})dZ_1dZ_2\ldots Z_d\\&=(\frac{b^6m^6}{80}-\frac{9b^5m^5}{120}+\frac{9b^4m^4}{8}-\frac{18b^3m^3}{3}+9b^2m^2)(\frac{1}{3m^3})^{d}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} &\int_{III}\lambda(\Omega_{2,1})dZ_1dZ_2\ldots Z_d=\frac{b^4m^4}{24}(\frac{1}{2m^2})^{d}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} &\int_{III}\lambda^2(\Omega_{2,1})dZ_1dZ_2\ldots Z_d=\frac{9b^6m^6}{960}(\frac{1}{3m^3})^{d}, \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} &\int_{III}\lambda(\Omega_{2,1})\lambda(\Omega_{2,2}) dZ_1dZ_2\ldots Z_d\\&=(-\frac{9b^6m^6}{1152}-\frac{9b^5m^5}{240}+\frac{9b^4m^4}{48})(\frac{1}{3m^3})^{d}. \end{aligned} \end{equation*} Combining with \eqref{gs3} and \eqref{gs2}, we obtain the disred result. \textbf{Proof of Theorem 3.1} From the definition of $L_2$-discrepancy. 
For point set $P_{\Omega^{*}_{b,\sim}}=\{s_1,s_2,\ldots,s_N\}$, we have \begin{equation}\label{eqpoasim} \mathbb{E}L_{2}^2(D_{N},P_{\Omega^{*}_{b,\sim}})=\int_{P_{\Omega^{*}_{b,\sim}}}\int_{[0,1]^{d}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(s_{i})|^{2}dzd\omega. \end{equation} Therefore, from \eqref{eqpoasim}, we get \begin{equation}\label{dwdwftxk} \begin{aligned}\mathbb{E}L_{2}^2(D_{N},P_{\Omega^{*}_{b,\sim}})&=\int_{P_{\Omega^{*}_{b,\sim}}}\int_{[0,1]^{d}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(s_{i})|^{2}dzd\omega\\&=\int_{P_{\Omega^{*}_{b,\sim}}}\int_{[0,1]^d \setminus I_{\sim}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(s_{i})|^{2}dzd\omega\\&+\int_{P_{\Omega^{*}_{b,\sim}}}\int_{I_{\sim}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(s_{i})|^{2}dzd\omega. \end{aligned} \end{equation} Now, we first only care about $I_{\sim}$, then we have \begin{equation}\label{eqgsxk} \begin{aligned}&\int_{P_{\Omega^{*}_{b,\sim}}}\int_{I_{\sim}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(s_{i})|^{2}dzd\omega\\&=\int_{P_{\Omega^{*}_{b,\sim}}}\int_{I_{\sim}}|\lambda([0,z)\setminus[O',z))+\lambda([O',z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)\setminus[O',z)}(s_{i})- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(s_{i})|^{2} dzd\omega. \end{aligned} \end{equation} From Lemma 4.1 and 4.2, we obatin \begin{equation}\label{zuihouyishi} \begin{aligned}\mathbb{E}L_{2}^2(D_{N},P_{\Omega^{*}_{b,\sim}})&=\int_{P_{\Omega^{*}_{b,\sim}}}\int_{[0,1]^{d}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(s_{i})|^{2}dzd\omega\\&=\int_{P_{\Omega^{*}_{b,\sim}}}\int_{[0,1]^d \setminus I_{\sim}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(s_{i})|^{2}dzd\omega\\&+\int_{P_{\Omega^{*}_{b,\sim}}}\int_{I_{\sim}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(s_{i})|^{2}dzd\omega\\&=\int_{P_{\Omega^{*}_{b,\sim}}}\int_{[0,1]^d \setminus I_{\sim}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(s_{i})|^{2}dzd\omega\\&+\int_{I_{\sim}}\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)\setminus [O',z)}(s_{i})) dz+[(-\frac{b^4m^4}{24}+4)\cdot \frac{4}{8-b^2m^2}+\frac{b^2m^2}{6}]\cdot (\frac{1}{2m^3})^d\\&+[(1-\frac{8}{b^2m^2})\cdot\frac{9b^6m^6}{960}+(1-\frac{8}{8-b^2m^2})\cdot(\frac{b^6m^6}{160}+\frac{9b^5m^5}{120}-\frac{3b^4m^4}{8}+8)\\&+2(1-\frac{4}{b^2m^2})(1-\frac{4}{8-b^2m^2})\cdot(-\frac{9b^6m^6}{1152}-\frac{9b^5m^5}{240}+\frac{9b^4m^4}{48})]\cdot (\frac{1}{3m^3})^{d}. \end{aligned} \end{equation} Simplifying \eqref{zuihouyishi} and let $$P_2(b)=\frac{m^2b^2}{3}+\frac{16}{24-3m^2b^2}+\frac{4}{3},$$ $$P_3(b)=-\frac{m^4b^4}{40}-\frac{114m^2b^2}{40}-\frac{352}{40}+\frac{6m^3b^3-3m^5b^5+352}{40-5m^2b^2}.$$ We obtain \begin{equation}\label{410} \begin{aligned}\mathbb{E}L_{2}^2(D_{N},P_{\Omega^{*}_{b,\sim}})&=\int_{P_{\Omega^{*}_{b,\sim}}}\int_{[0,1]^d \setminus I_{\sim}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(s_{i})|^{2}dzd\omega\\&+\int_{I_{\sim}}\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)\setminus [O',z)}(s_{i})) dz+P_2(b)\cdot (\frac{1}{2m^3})^d+P_3(b)\cdot (\frac{1}{3m^3})^{d}. 
\end{aligned} \end{equation} \begin{comment} Let $$P_1(t)=(-\frac{t^4}{24}+4)\cdot \frac{4}{8-t^2}+\frac{t^2}{6}$$ and $s=8-t^2$, we have $$P_1(s)=-\frac{s}{3}+\frac{16}{3s}+4.$$ Thus, $$P_1(b)=\frac{m^2b^2}{3}+\frac{16}{24-3m^2b^2}+\frac{4}{3}.$$ Let $$P_2(t)=(1-\frac{8}{8-t^2})\cdot \frac{9t^5}{120}-(1-\frac{4}{t^2})(1-\frac{4}{8-t^2})\cdot \frac{9t^5}{120}$$ and $s=8-t^2$, we have $$P_2(s)=\frac{3(s-6)\cdot(8-s)^{\frac{3}{2}}}{5s}.$$ Thus, $$P_2(b)=\frac{6m^3b^3-3m^5b^5}{40-5m^2b^2}.$$ Let $$P_3(t)=(1-\frac{8}{t^2})\cdot\frac{9t^6}{960}+(1-\frac{8}{8-t^2})\cdot(\frac{t^6}{160}-\frac{3t^4}{8}+8)$$ and $s=8-t^2$, we have $$P_3(s)=-\frac{s^3}{64}-\frac{s^2}{40}+6s-\frac{256}{5}+\frac{512}{5s}.$$ Let $$P_4(t)=2(1-\frac{4}{t^2})(1-\frac{4}{8-t^2})\cdot(-\frac{9t^6}{1152}+\frac{9t^4}{48})$$ and $s=8-t^2$, we have $$P_4(s)=\frac{s^3}{64}-\frac{11s}{4}+18-\frac{32}{s}.$$ Besides, $$P_3(s)+P_4(s)=-\frac{s^2}{40}+\frac{13s}{4}-\frac{166}{5}+\frac{352}{5s}.$$ Thus, $$P_3(b)=-\frac{m^4b^4}{40}-\frac{114m^2b^2}{40}-\frac{352}{40}+\frac{352}{40-5m^2b^2}.$$ $$P_2(b)+P_3(b)=-\frac{m^4b^4}{40}-\frac{114m^2b^2}{40}-\frac{352}{40}+\frac{6m^3b^3-3m^5b^5+352}{40-5m^2b^2}.$$ \end{comment} \begin{comment} Let $b=\frac{9}{5m}$, therefore, we have \begin{equation}\label{tmb95} \begin{aligned}\mathbb{E}L_{2}^2(D_{N},P_{\Omega^{*}_{b,\sim}})&=\int_{P_{\Omega^{*}_{b,\sim}}}\int_{[0,1]^d \setminus I_{\sim}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(s_{i})|^{2}dzd\omega\\&+\int_{I_{\sim}}\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)\setminus [O',z)}(s_{i})) dz+\frac{523}{148}\cdot (\frac{1}{2m^3})^d\\&-\frac{782}{177}\cdot (\frac{1}{3m^3})^{d}. \end{aligned} \end{equation} \end{comment} For jittered grid area $[0,1]^d \setminus I_{\sim}$ and $[0,z)\setminus [O',z)$, we obtain \begin{equation}\label{J1} \begin{aligned} &\int_{P_{\Omega^{*}_{b,\sim}}}\int_{[0,1]^d \setminus I_{\sim}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(s_{i})|^{2}dzd\omega+\int_{I_{\sim}}\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)\setminus [O',z)}(s_{i})) dz\\&=\int_{P_{\Omega}}\int_{[0,1]^d \setminus I_{\sim}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(x_{i})|^{2}dzd\eta+\int_{I_{\sim}}\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)\setminus [O',z)}(x_{i})) dz. \end{aligned} \end{equation} For jittered sampling point set $P_{\Omega}=\{x_1,x_2,\ldots,x_N\}$, \begin{equation}\label{J2} \begin{aligned}\mathbb{E}L_{2}^2(D_{N},P_{\Omega})&=\int_{P_{\Omega}}\int_{[0,1]^{d}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(x_{i})|^{2}dzd\eta\\&=\int_{P_{\Omega}}\int_{[0,1]^d\setminus I_{\sim}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(x_{i})|^{2}dzd\eta\\&+\int_{P_{\Omega}}\int_{I_{\sim}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(x_{i})|^{2}dzd\eta. \end{aligned} \end{equation} Besides, \begin{equation}\label{J3} \begin{aligned} &\int_{P_{\Omega}}\int_{I_{\sim}}|\lambda([0,z))- \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)}(x_{i})|^{2}dzd\eta\\&=\int_{I_{\sim}}\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[0,z)\setminus [O',z)}(x_{i}))dz+\int_{I_{\sim}}\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(x_{i}))dz. \end{aligned} \end{equation} Easy to obtain \begin{equation}\label{J4} \int_{I_{\sim}}\text{Var}( \frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{[O',z)}(x_{i}))dz=4\cdot \frac{1}{2^d}\cdot \frac{1}{N^3}-5\cdot \frac{1}{3^d}\cdot \frac{1}{N^3}. 
\end{equation}
Therefore, from \eqref{410}, \eqref{J1}, \eqref{J2}, \eqref{J3} and \eqref{J4}, we obtain the desired result.
\section{Conclusion}\label{conclu}
We have studied the expected $L_2$-discrepancy under a class of unequivolume partitions and given an explicit formula; this result improves on the expected $L_2$-discrepancy obtained under the partition manner of \cite{KP}. In future work, we will study the optimal partition within this class and give the corresponding explicit $L_2$-discrepancy formula.
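For reference, the three closed-form expressions \eqref{jittfor}, \eqref{pamin9} and \eqref{eq35} are easy to tabulate. The following sketch (in Python; the function names are ours) simply evaluates their right-hand sides as stated above for chosen values of $(m,d,b)$, which is convenient when comparing the three sampling schemes numerically.
\begin{verbatim}
def jittered(m, d):
    # right-hand side of the formula for jittered sampling
    return (((m - 1) / 2 + 1 / 2) ** d - ((m - 1) / 2 + 1 / 3) ** d) / m ** (2 * d)

def diagonal(m, d):
    # right-hand side of the formula for the partition Omega*_backslash
    return jittered(m, d) - (2 / 5) / (3 ** d * m ** (3 * d))

def class_b(m, d, b):
    # right-hand side of Theorem 3.1, with P_0 and P_1 as stated
    t = m * b
    P0 = (8 - t ** 2) / 3 - 16 / (24 - 3 * t ** 2)
    P1 = (t ** 4 / 40 + 114 * t ** 2 / 40 + 19 / 5
          - (6 * t ** 3 - 3 * t ** 5 + 352) / (40 - 5 * t ** 2))
    return jittered(m, d) - P0 / (2 ** d * m ** (3 * d)) - P1 / (3 ** d * m ** (3 * d))

if __name__ == "__main__":
    m, d = 4, 3
    for b in (3 / (2 * m), 1.8 / m, 1.95 / m):
        print(b, jittered(m, d), diagonal(m, d), class_b(m, d, b))
\end{verbatim}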
{ "timestamp": "2022-11-16T02:07:41", "yymm": "2211", "arxiv_id": "2211.07862", "language": "en", "url": "https://arxiv.org/abs/2211.07862", "abstract": "We study the expected $ L_2-$discrepancy under two classes of partitions, explicit and exact formulas are derived respectively. These results attain better expected $L_2-$discrepancy formulas than jittered sampling.", "subjects": "Computation (stat.CO); Probability (math.PR)", "title": "Improved expected $L_2$-discrepancy formulas on jittered sampling", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9740426458162398, "lm_q2_score": 0.7279754489059775, "lm_q1q2_score": 0.7090791323416432 }
https://arxiv.org/abs/2212.05617
Decomposition of the Leinster-Cobbold Diversity Index
The Leinster and Cobbold diversity index possesses a number of merits; in particular, it generalises many existing indices and defines an effective number. We present a scheme to quantify the contribution of richness, evenness, and taxonomic similarity to this index. Compared to the work of van Dam (2019), our approach gives unbiased estimates of both evenness and similarity in a non-homogeneous community. We also introduce a notion of taxonomic tree equilibration which should be of use in the description of community structure.
\section{Introduction} Measuring biodiversity is a difficult task due to sampling issues and accounting for missing data, but also because there is no universally accepted definition of what biodiversity is \citep{Daly2018}. In ecological practice, definitions of biodiversity can include contributions from multiple channels of information such as the number of species (``richness''), dominance or rarity relations among the constituent species (``evenness''), and measures of ``similarity'' among the species (estimated either from taxonomic or phylogenetic relationships, or from functional trait relationships) \citep{Purvis2000, LeinsterCobbold2012}. Biogeographic patterns of diversity can depend on the definitions used. For example, \citep{Stuart2013} showed that biodiversity hotspots can shift from the tropics to higher latitudes if one only considers abundances or also takes into account the functional trait similarity of species. An outstanding challenge in the field of conservation ecology is to relate the various aspects of biodiversity data to the functioning of ecosystems \citep{Maureaud2019, Hillebrand2018}. Thus the goal is to construct a biodiversity index that would carry information about as many aspects of diversity as possible. This goal has been actively pursued \citep{Rao1982, LeinsterCobbold2012, ChaoChiuJost2014}. By ``carrying information'' we mean that, for example, we should be able to extract information about richness or evenness from our index. One way of extracting such information is to decompose the index additively or multiplicatively into components that can be interpreted in a biologically meaningful way; see for example discussions of $\alpha$-, $\beta$- and $\gamma$-diversity \citep{Jost2007, Anderson2011}. A priori it is not clear why such a decomposition would exist, whether it has to be unique, and, in cases of non-uniqueness, what the conditions for optimality of a decomposition are. As an example of this approach, van Dam \citep{vanDam2019} has recently proposed a straightforward decomposition of the Leinster--Cobbold (LC) index \citep{LeinsterCobbold2012}. In a sense, our work below is a generalisation of the work of van Dam: it uses intrinsic properties of the LC index to remove an important bias in van Dam's decomposition, with intriguing and far-reaching consequences. The structure of the paper is as follows. In Section~\ref{Concepts} we discuss the definitions of richness, evenness and similarity using \citep{ChaoChiuJost2014,ChiuJostChao2014,Daly2018,GregoriusGillet2022}. In Section~\ref{LC} we collect the required information about the LC index following \citep{LeinsterCobbold2012, LeinsterMeckes2016}; it subsumes many other diversity indices such as Rao's index, which is widely used in functional ecology \citep{Rao1982, RicottaMoretti2011}. In Section~\ref{Scheme} we present van Dam's and then our decomposition and its consequences. Finally, in Section~\ref{Remarks} we discuss the relation of our work to that of Chao and Ricotta~\citep{ChaoRicotta2019}, extensions and open problems. \section{Diversity components} \label{Concepts} \subsection{Notation} First of all, we need to establish notation. Everywhere below we assume that the number of species in a community is fixed at $n>1$.
We will use $\mb{p}=(p_1,\, \ldots, p_n)$ to denote the vector of relative abundances, and let \begin{equation}\label{simp} \Delta(n) = \{ \mb{x} \in {\mathbb R}^n \; | \; x_k \geq 0, \; \; k= 1, \ldots, n, \; \; \sum_{k=1}^n x_k =1\} \end{equation} be the standard $(n-1)$-simplex in ${\mathbb R}^n$. \begin{rem} It has to be emphasised that admissible relative abundance vectors $\mb{p}$ take values in $\Delta(n)^\circ$, the interior of $\Delta(n) $: \begin{equation}\label{simpo} \Delta(n)^\circ = \{ \mb{x} \in {\mathbb R}^n \; | \; x_k > 0, \; \; k= 1, \ldots, n, \; \; \sum_{k=1}^n x_k =1\}, \end{equation} which is not a closed set in ${\mathbb R}^n$; a consequence is that there are convergent sequences $\seq{\mb{p}_m}$ in $\Delta(n)^\circ$ whose limits are contained in the boundary $\partial \Delta(n) = \Delta(n) \backslash \Delta(n)^\circ$; such limits by necessity correspond to communities with fewer than $n$ species. \end{rem} We will often use the vector \begin{equation}\label{vect} \mb{p}_h = \left( \frac1n, \ldots, \, \frac1n \right) \in \Delta(n)^\circ; \end{equation} the subscript $h$ stands for ``homogeneous''. $\mb{p}_h$ is the relative abundance vector of a community where each species is represented equally. We denote by $M(n) \subset \Delta(n) \backslash \Delta(n)^\circ$ the set of $n$-vectors having one component equal to 1 and the rest equal to zero. Thus, $\mb{m} \in M(n)$ is the relative abundance vector of a monomorphic community. $\bf{1}$ will stand below for an $n$-vector with all components equal to $1$. Next, we need to discuss sets of $n \times n$ matrices. First of all, we will denote the $n \times n$ identity matrix by $I_n$. We will use the notation $J_n$ for the $n \times n$ matrix of ones. In the present paper, for simplicity, we will be working with ultrametric matrices; this choice is motivated by the fact that similarity matrices (see subsection~\ref{dlc}) constructed using taxonomic trees are necessarily ultrametric and by the fact that using them simplifies the theory of \citep{LeinsterMeckes2016}. For more information on ultrametric matrices, please see \citep[Ch. 3]{Della2014} and \citep{Leinster2013, LeinsterMeckes2016}. We will denote the set of all ultrametric $n \times n$ matrices by $\mathcal{U}(n)$ and its interior by $\mathcal{U}(n)^\circ$. \begin{defn}[Defn. 3.2 of \citep{Della2014}] \label{ultra} A symmetric $n \times n$ matrix $A$ is {\bf ultrametric} if $A_{i,i} \geq \max_{k \neq i} A_{ik}$, $i, k \in \{1,\ldots, n\}$ and $A_{ik} \geq \min \{ A_{ij}, A_{jk} \}$, $i,j,k \in \{1,\ldots, n\}$. \end{defn} \begin{rem} Note that in \citep[Example 12]{LeinsterMeckes2016} Leinster and Meckes take the matrices they call ultrametric to be strictly diagonally dominant. That would preclude the set of ultrametric matrices from being closed; so in our definition $J_n \in \mathcal{U}(n)$. \end{rem} \subsection{Evenness} The concept of evenness (for which see, e.g., \citep{ChaoChiuJost2014,ChaoRicotta2019,GregoriusGillet2022} and references therein) is rather problematic. First of all, the terminology is badly chosen as it would immediately seem that the ``most even'' population of $n$ species is one for which the vector of relative abundances is the homogeneous vector $\mb{p}_h$, i.e., one where every species is equally represented. Thus, like the case of richness discussed below, the terminology seems to be precluding discussion.
As rightly pointed out in \citep{GregoriusGillet2022}, such a categorical answer to the question of maximal evenness leaves open the discussion of what would constitute ``maximum unevenness'' in $\Delta(n)^\circ$. van Dam \citep{vanDam2019} uses instead the concept of ``balance'', which seems to us a better term; this is the concept which, after defining it properly (see (\ref{bal})), we will be using below. \subsection{Richness} Richness is sometimes summarily defined to be the number of species; see, e.g., \citep{Daly2018}. Our approach below allows us to retain this definition, but at a price. Such a definition is open to the same criticism as the notion of maximal evenness defined by $\mb{p}_h$ that we have discussed above. It is again ``species-centric'', and takes into account only the last level of taxonomic classification. Below, in Section~\ref{Scheme}, we suggest how to introduce a defensible new notion of richness that uses taxonomic information. \subsection{Similarity}\label{sim} In this section we discuss the construction of taxonomic similarity matrices $Z$ for a community with $n$ species. The usual way of constructing similarity matrices $Z$ that are automatically ultrametric is to assign distances between different levels of a taxonomic tree. Then the taxonomic distance $d(i,j)$ between two species is the sum of the distances from the nodes corresponding to these species to their first common node, and one puts $Z_{ij}=e^{-d(i,j)}$ or, if the maximal distance in the tree has been normalised to 1, one could put $Z_{ij}=1-d(i,j)$. As an example, consider the tree in Figure~\ref{tree1} \begin{figure}[h] \hspace*{-2in} \Tree [.$F$ [ .$G_1$ [ $S_1$ $S_2$ ] ] [.$G_2$ $S_3$ ] ] \caption{A taxonomic tree}\label{tree1} \end{figure} and set the species-genus and the genus-family distance to be $0.3$. If we use the additive recipe, we get \begin{equation}\label{zex} Z_1 = \bemat{ccc} 1 & 0.7 & 0.4\\ 0.7 & 1 & 0.4\\ 0.4 & 0.4 & 1 \emat. \end{equation} \section{The Leinster--Cobbold diversity index}\label{LC} In their influential paper \citep{LeinsterCobbold2012}, Leinster and Cobbold introduced the LC index, a far-reaching generalisation of Hill numbers (for discussions of which see \citep{ChiuJostChao2014,Hill1973}). For details on its properties, see \citep{LeinsterCobbold2012,LeinsterMeckes2016}; here we just collect the bare minimum in the framework of taxonomic (ultrametric) similarity matrices. \subsection{Definition of the LC index} \label{dlc} As in the definition of Hill numbers, below $q \in [0,\infty)$ is the sensitivity parameter, measuring the importance given to rare species. Then for a community of $n$ species with relative abundance vector $\mb{p}$ and (ultrametric) similarity matrix $Z$, we have \begin{defn}\label{LCi} The {\bf LC diversity of order $q$} is \begin{equation}\label{lcdef} F(Z,\mb{p},q) :=\left(\sum_{i=1}^n p_i \left(Z \mb{p}^T\right)_i^{q-1}\right)^{1/(1-q)}. \end{equation} \end{defn} Note that \citep{LeinsterCobbold2012} use a different notation, similar to the Hill number notation in the literature; they denote the right-hand side of (\ref{lcdef}) by ${}^qD^Z(\mb{p})$. We prefer the notation used here as it clearly shows functional dependencies and allows easy generalisation, which we discuss briefly in Section~\ref{Remarks}. We collect the required properties of the LC index in the proposition below and in subsection~\ref{max}. \begin{prop} \label{LCprop} Let $\mb{p}\in \Delta(n)^\circ$ and $Z \in \mathcal{U}(n)$.
Then \begin{enumerate}[(a)] \item $F(\mb{p},Z,q)$ is a monotone decreasing function of $q$; \item $F(\mb{p},Z,q) < F(\mb{p}, I_n,q)$ for all $q$ if $Z \neq I_n$; \item $F(\mb{p}_h,I_n,q)=n$ for all $q$; \item $F(\mb{p},J_n,q)=1$ for all $q$. \end{enumerate} \end{prop} For proofs of (a) and (b), please see \citep{LeinsterCobbold2012}; the rest are immediate. Following \citep{LeinsterMeckes2016}, we now discuss the concept of a \emph{maximally balanced abundance vector} for a community of $n$ species with an ultrametric similarity matrix $Z$. \subsection{A crucial property of the LC index}\label{max} Using only ultrametric taxonomic similarity matrices simplifies the presentation considerably. For the more general case where the similarity matrix is simply a symmetric matrix with positive elements, see~\citep{LeinsterMeckes2016}. The results of that paper have not, in our opinion, been sufficiently seriously considered by the biodiversity community. We present two theorems from \citep{LeinsterMeckes2016}. First of all, we have the following existence and uniqueness result for maximisers of the LC diversity index. \begin{theo}\label{exmax} For each $Z \in \mathcal{U}(n)^\circ$ there exists a unique abundance vector $\mb{p}^* \in \Delta(n)^\circ$ that maximises $F(Z,\mb{p},q)$ for every value of $q \in [0, \infty)$. \end{theo} \begin{defn}\label{mb} Given $Z \in \mathcal{U}(n)^\circ$, we call the corresponding abundance vector $\mb{p}^*$ the {\bf maximally balanced} abundance vector. \end{defn} This vector plays the role that $\mb{p}_h$ plays in theories that do not take taxonomic similarity into account. Computing the maximally balanced vector in the case of ultrametric similarity matrices is a simple matter of solving a system of linear equations and normalising. If the similarity matrix is not ultrametric, the situation is more complex; see \citep{LeinsterMeckes2016} for details. \begin{theo}\label{comp} Given $Z \in \mathcal{U}(n)^\circ$, $\mb{p}^*$ is given by \[ p^*_i = \frac{w_i}{\sum_{j=1}^n w_j}, \] where $\mb{w}$ solves the system of equations $Z\mb{w} = \mb{1}$, in which $\mb{1}$ is a column vector of ones. \end{theo} Note that \citep[Lemma 6]{LeinsterMeckes2016} provides an alternative way of computing $\mb{p}^*$. \begin{defn}\label{pt} A taxonomic tree will be called {\bf taxonomically equilibrated} if $\mb{p}^* = \mb{p}_h$. \end{defn} Of course we have \begin{prop}\label{balance} If at each level of the tree all the nodes have the same degree, the taxonomic tree is equilibrated. \end{prop} The converse of Proposition~\ref{balance} does not hold, i.e., there are taxonomic trees that do not satisfy the conditions of that proposition for which a metric $d(\cdot,\cdot)$ can be assigned such that the resulting $\mb{p}^*$ is $\mb{p}_h$. An example is provided by the following tree: \begin{figure}[h] \hspace*{-2in} \Tree [ .$O$ [ .$F_1$ [ .$G_1$ [ $S_1$ $S_2$ ] ] [ .$G_2$ [ $S_3$ $S_4$ ] ] ] [ .$F_2$ [.$G_3$ $S_5$ $S_6$ $S_7$ ] ] ] \caption{A taxonomic tree that allows $\mb{p}^*=\mb{p}_h$.} \label{tree2} \end{figure} It is not hard to show that assigning species-genus, genus-family and family-order distances of $0.25$ and using the additive recipe results in a similarity matrix for which $\mb{p}^*=\mb{p}_h$.
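Whether a given assignment of distances makes a tree taxonomically equilibrated can be checked directly with Theorem~\ref{comp}. The following minimal sketch (in Python with NumPy; the encoding of the additive recipe as $Z_{ij}=1-d_i$, with $d_i$ the distance from a species to the most recent common taxon, is our reading of the worked example (\ref{zex}), and all variable names are ours) verifies this for the tree of Figure~\ref{tree2}:
\begin{verbatim}
import numpy as np

# Similarity matrix for the tree of Figure 2 (species S1,...,S7), with all
# level distances equal to 0.25, so that Z_ij = 0.75 (same genus),
# 0.5 (same family), 0.25 (same order only) and Z_ii = 1.
Z = np.full((7, 7), 0.25)
Z[:4, :4] = 0.5          # S1-S4 share family F1
Z[:2, :2] = 0.75         # S1, S2 share genus G1
Z[2:4, 2:4] = 0.75       # S3, S4 share genus G2
Z[4:, 4:] = 0.75         # S5, S6, S7 share genus G3
np.fill_diagonal(Z, 1.0)

# Theorem (comp): p* = w / sum(w), where w solves Z w = 1.
w = np.linalg.solve(Z, np.ones(7))
p_star = w / w.sum()
print(np.allclose(p_star, np.full(7, 1 / 7)))   # True: p* = p_h
\end{verbatim}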
Thus there is a trichotomy of taxonomic trees: those of Proposition~\ref{balance}, in which $\mb{p}^*=\mb{p}_h$ holds for every assignment of distances; those for which such an assignment can be chosen, as in Figure~\ref{tree2}; and those for which no assignment of distances results in a homogeneous maximally balanced abundance vector, an example of which is given in Figure~\ref{tree3}. \begin{figure}[h] \hspace*{-2in} \Tree [.$O$ [ .$F_1$ [ .$G_1$ $S_1$ ] [.$G_2$ [$S_2$ $S_3$ ] ] ] [.$F_2$ [.$G_3$ [$S_4$ $S_5$ ] ] ] ] \caption{A taxonomic tree for which $\mb{p}^* \neq \mb{p}_h$ is guaranteed.}\label{tree3} \end{figure} We will discuss this trichotomy in more detail in \citep{ChenGrinfeld2023}. \begin{rem} Compared to the diversity metrics proposed by Chao {\em et al.} \citep{ChaoChiuJost2014}, the LC index has the flexibility to take into account taxonomic, phylogenetic, and functional diversity simultaneously. However, in that case the resulting similarity matrices are no longer ultrametric and we leave that more general case for future study. Also note that the above trichotomy of taxonomic trees would not necessarily exist if there were a canonical way of constructing similarity matrices. \end{rem} \section{An unbiased decomposition scheme} \label{Scheme} We are now ready to propose a decomposition scheme for the LC index. It is best to start with the decomposition scheme proposed by van Dam \citep{vanDam2019} and see why it has to be modified. van Dam writes \begin{equation}\label{dec1} F(\mb{p},Z,q) = \frac{F(\mb{p},Z,q)}{F(\mb{p},I_n,q)} \cdot \frac{F(\mb{p},I_n,q)}{F(\mb{p}_h,I_n, q)} \cdot F(\mb{p}_h,I_n, q). \end{equation} The first fraction is clearly a measure of dissimilarity, while the second fraction is a measure of balance; of course $F(\mb{p}_h,I_n, q)=n$, so the last term on the right-hand side is the richness. From many points of view, this is a good decomposition as the two fractions always lie in the interval $(1/n,1]$. Consult Proposition~\ref{LCprop} to see that the infimum $1/n$ is never reached. The problem here is with the definition of the measure of balance, as it does not take into account the taxonomic similarity matrix $Z$, while the dissimilarity measure uses information from both $\mb{p}$ and $Z$. We call such a decomposition {\bf asymmetrically biased}. A different asymmetrically biased decomposition is given by \begin{equation}\label{dec2} F(\mb{p},Z,q) = \frac{F(\mb{p},Z,q)}{F(\mb{p}_h,Z,q)} \cdot \frac{F(\mb{p}_h,Z,q)}{F(\mb{p}_h,I_n, q)} \cdot F(\mb{p}_h,I_n, q). \end{equation} In this decomposition the first fraction is a measure of balance, the second a measure of dissimilarity, while, as before, the last term is richness. Of course here it is the measure of dissimilarity that is asymmetrically biased. A possibility that might be considered is to multiply (\ref{dec1}) and (\ref{dec2}) and take the square root. That would give us a decomposition which we would call {\bf unbiased} (though it could also be called ``symmetrically biased''). However, there is an additional problem in (\ref{dec2}), which is that the first fraction on the right-hand side can take values larger than one if, for example, $Z$ is not a similarity matrix of a taxonomically equilibrated tree and $\mb{p}=\mb{p}^*$. We do not pursue this direction as we see no reason for a relative measure to take values outside $[0,1]$. The price of ensuring normalisation is having to deal with richness in more detail.
Consider, instead of (\ref{dec2}), the following decomposition: \begin{equation}\label{dec3} F(\mb{p},Z,q) =\frac{F(\mb{p},Z,q)}{F(\mb{p}^*,Z,q)} \cdot \frac{F(\mb{p}^*,Z,q)}{F(\mb{p}^*,I_n, q)} \cdot F(\mb{p}^*,I_n, q). \end{equation} It is asymmetrically biased as the second factor does not involve $\mb{p}$. We will discuss the interpretation of $F(\mb{p}^*,I_n, q)$ later. To obtain an {\bf unbiased} decomposition, we therefore multiply (\ref{dec1}) and (\ref{dec3}) and take the square root. The result is \begin{equation}\label{dec4} F(\mb{p},Z,q) =\sqrt{\frac{F(\mb{p},Z,q)}{F(\mb{p}^*,Z,q)} \frac{F(\mb{p},I_n,q)}{F(\mb{p}_h,I_n, q)}} \cdot \sqrt{\frac{F(\mb{p},Z,q)}{F(\mb{p},I_n,q)} \frac{F(\mb{p}^*,Z,q)}{F(\mb{p}^*,I_n, q)}} \cdot \sqrt{n F(\mb{p}^*,I_n, q)}. \end{equation} The last term on the right-hand side can be rewritten as \begin{equation}\label{equil} \sqrt{n F(\mb{p}^*,I_n, q)}= \sqrt{ \frac{F(\mb{p}^*,I_n,q)}{n}}\, n =: E(Z,q)\,n. \end{equation} The term $E(Z,q)$ expresses the lack of equilibration in the taxonomic tree; see Definition~\ref{pt} and Proposition~\ref{balance}. Putting \begin{equation}\label{bal} B(\mb{p},Z,q) = \sqrt{\frac{F(\mb{p},Z,q)}{F(\mb{p}^*,Z,q)} \frac{F(\mb{p},I_n,q)}{F(\mb{p}_h,I_n, q)}}, \end{equation} \begin{equation}\label{dis} D(\mb{p},Z,q) = \sqrt{\frac{F(\mb{p},Z,q)}{F(\mb{p},I_n,q)} \frac{F(\mb{p}^*,Z,q)}{F(\mb{p}^*,I_n, q)}}, \end{equation} we finally write our decomposition as \begin{equation}\label{decf} F(\mb{p},Z,q) = B(\mb{p},Z,q) D(\mb{p},Z,q) E(Z,q)\,n, \end{equation} i.e., a product of measures of balance $B(\mb{p},Z,q)$, dissimilarity $D(\mb{p},Z,q)$, (lack of) equilibration $E(Z,q)$ and the classical richness $n$. Note that by construction both the measure of balance (\ref{bal}) and the measure of dissimilarity (\ref{dis}) are constrained to lie in $[0,1]$ by Proposition~\ref{LCprop}. Both the measure of balance and that of dissimilarity are geometric means of an unbiased measure and a biased one. It does not seem possible to find a truly unbiased decomposition of the LC index, which is the reason we could call the decomposition (\ref{decf}) symmetrically biased. Note that though $B(\mb{p},Z,q) \geq 1/\sqrt{n\,F(\mb{p}^*,Z,q)}$ (the right-hand side being independent of $q$), it is not clear what vector $\mb{p}(q)$ maximises it for a particular value of $q$. Of course the value $1$ is reached for $q=0$ by the choice $\mb{p}(0)= \mb{p}^*$. $E(Z,q)$ depends on $Z$, as $Z$ defines $\mb{p}^*$, and hence $E(Z,q)$ reflects the structure of the underlying taxonomic tree. Note that for similarity matrices of taxonomically equilibrated trees we have $\mb{p}^*= \mb{p}_h$ and hence $F(\mb{p}^*,I_n, q)=n$, so that $E(Z,q)=1$. If $Z$ does not correspond to a taxonomically equilibrated taxonomic tree, $F(\mb{p}^*, I_n, q)$ depends on $q$. \begin{rem} We could have defined a notion of ``richness'' by $R(Z,q):=E(Z,q)\,n$, but the decomposition (\ref{decf}) seems to us more insightful and does not necessitate advocating a new notion of richness. Definition~\ref{pt} singles out a class of taxonomic trees for which the two notions of richness coincide. \end{rem} \section{Practical examples} \label{Examples} To illustrate how our decomposition approach differs from the ``ABC'' approach suggested by van Dam \citep{vanDam2019}, we use a simple example from Leinster and Cobbold \citep{LeinsterCobbold2012} (their Example 3; original data from \citep{deVries1997}).
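Before turning to this example, we record a minimal numerical sketch showing how the quantities in (\ref{decf}) can be computed in practice. It is written in Python with NumPy, uses the matrix $Z_1$ of (\ref{zex}) together with an arbitrary abundance vector rather than the de Vries data, is restricted to $q \neq 1$, and all function and variable names are ours:
\begin{verbatim}
import numpy as np

def lc_diversity(Z, p, q):
    # LC diversity of order q, formula (lcdef); q != 1 is assumed here.
    Zp = Z @ p
    return (p @ Zp ** (q - 1)) ** (1.0 / (1.0 - q))

def decompose(Z, p, q):
    # Balance B, dissimilarity D, lack of equilibration E and richness n
    # of the decomposition (decf).
    n = len(p)
    I = np.eye(n)
    p_h = np.full(n, 1.0 / n)
    w = np.linalg.solve(Z, np.ones(n))        # Theorem (comp): Z w = 1
    p_star = w / w.sum()
    B = np.sqrt(lc_diversity(Z, p, q) / lc_diversity(Z, p_star, q)
                * lc_diversity(I, p, q) / lc_diversity(I, p_h, q))
    D = np.sqrt(lc_diversity(Z, p, q) / lc_diversity(I, p, q)
                * lc_diversity(Z, p_star, q) / lc_diversity(I, p_star, q))
    E = np.sqrt(lc_diversity(I, p_star, q) / n)
    return B, D, E, n

Z1 = np.array([[1.0, 0.7, 0.4], [0.7, 1.0, 0.4], [0.4, 0.4, 1.0]])
p = np.array([0.5, 0.3, 0.2])
B, D, E, n = decompose(Z1, p, q=2.0)
print(B * D * E * n, lc_diversity(Z1, p, 2.0))  # the two values agree
\end{verbatim}
By construction the product $B\,D\,E\,n$ reproduces the LC diversity itself, which provides a quick consistency check for any implementation.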
This example has the nice feature that the difference between the two communities changes sign as $q$ increases from 0 to 2. \begin{figure} \centerline{\includegraphics[width=12cm]{DeVeries.eps}} \caption{Comparing our approach (``This study'') and van Dam \citep{vanDam2019} to decompose (A) the LC diversity index into (B) Lack of equilibration, (C) Balance (or Evenness), and (D) Dissimilarity for the example of \textit{Charaxinae} in Leinster and Cobbold \citep{LeinsterCobbold2012}.} \label{fig1} \end{figure} Compared to van Dam \citep{vanDam2019}, our new decomposition approach yields different interpretations regarding what aspects of diversity lead to the differences in diversity estimates between the two communities (Canopy vs.\ Understory). When we put more emphasis on rare species ($q < 1$), the Canopy community is more diverse because it has a higher species richness (6 vs.\ 5) and its species are slightly more dissimilar from each other (Fig. \ref{fig1}A,D). By contrast, when we focus more on abundant species ($q > 1$), the Understory community becomes more diverse because of its greater balance (evenness) of dominant species (Fig. \ref{fig1}A,C). The difference between our decomposition approach and that of van Dam \citep{vanDam2019} is that our approach predicts a much larger difference in Balance (and a smaller difference in species dissimilarity) when we increasingly focus on dominant species (larger $q$). Our approach also shows that as $q$ increases, the Understory community shows a stronger lack of equilibration (i.e., deviation of $\mb{p}^*$ from $\mb{p}_h$) than the Canopy community, although this difference is small. In summary, our interpretation is that the greater diversity of the Understory community when we focus on dominant species arises because its dominant species are more balanced than those in the Canopy community. By contrast, with the approach of van Dam \citep{vanDam2019} the interpretation would be that the dominant species in the Understory community are more dissimilar to each other. \section{Discussion} \label{Remarks} The LC index uses three ``information streams'': the number of species (richness) $n$, the relative abundance vector $\mb{p}$ and the similarity matrix $Z$. We could in theory consider a diversity index $F(c_1, \; \ldots, c_m; \, q)$, where $c_1, \ldots, \; c_m$ are information streams, sources of information about the structure of the community, expressed as some mathematical objects (vectors, matrices, higher-order tensors). We could then follow the decomposition process of Section~\ref{Scheme}: find $m!$ biased decompositions, multiply them together and take the $m!$-th root. However, this is already unwieldy in the case of $m=3$. But note that this procedure is unnecessary as the LC theory has a lot of built-in flexibility in the definition of similarity matrices. As explained in subsection~\ref{sim}, one can define a similarity matrix by setting $Z_{ij}=e^{-d(i,j)}$, where $d(i,j)$ is some suitably defined distance between species $i$ and $j$. Hence incorporating more information streams can be thought of as changing the distance function $d(\cdot,\cdot)$; in the process of incorporating such information, such as functional similarity, the ultrametricity of the similarity matrix is lost; it is possible that the resulting function $d(\cdot,\cdot)$ will no longer be a metric, becoming more generally a divergence measure.
The point is that dependence of a diversity index on $\mb{p}$ and on a suitably redefined $Z$ is sufficient to incorporate all relevant information. In \citep{ChaoRicotta2019}, Chao and Ricotta show how to quantify evenness using divergence measures. It is therefore useful to consider the LC diversity index (\ref{LCi}) in this context. As we now deal with balance as opposed to evenness, we will denote the resulting balance index by ${\mathbb B}$. First of all, let us note that (\ref{bal}) cannot give rise to a divergence measure-based index of balance as there is no well-defined upper bound for it for all $q$. It is still of course useful in providing an estimate of the balance contribution to the LC diversity index. These two statements are not in contradiction. Clearly, the LC index itself provides a divergence measure-based estimate (index) of balance via \begin{equation}\label{e1} {\mathbb B} = \frac{F(\mb{p},Z,q)-1}{F(\mb{p}^*,Z,q)-1}, \end{equation} where one could alternatively write $1=F(\mb{m},Z,q)$ with $\mb{m}$ any vector in $M(n)$. Concerning similarity indices, it again does not seem to be possible to utilise $D(\mb{p},Z,q)$ of (\ref{dis}) to this end. On the other hand, the LC index itself provides a divergence measure-based similarity index by \begin{equation}\label{d1} \S=\frac{F(\mb{p},I_n,q)-F(\mb{p},Z,q)}{F(\mb{p},I_n,q)-F(\mb{p},J_n,q)}. \end{equation} Again, here the value 1 is never reached over $\mathcal{U}(n)^\circ$. It is not hard to show the following proposition: \begin{prop} The indices ${\mathbb B}$, $\S$ satisfy all the requirements in \citep{ChaoRicotta2019}. \end{prop} To conclude, we have proposed a novel decomposition of the LC index. Compared to the previous decomposition due to van Dam \citep{vanDam2019}, our approach estimates the balance and dissimilarity of the community more comprehensively (e.g., we do not only estimate dissimilarity for a homogeneous community but also take the observed vector of relative abundances into account). As such, we believe that our inference is more robust when comparing balance and dissimilarity among communities. In addition, we found it necessary to introduce a notion of taxonomic tree equilibration (which turns out to be an important concept in our ongoing work on quantifying unevenness \citep{ChenGrinfeld2023}), which is another descriptor of a biological community. We advocate the use of our decomposition (\ref{decf}) as a ``maximally unbiased'' estimate of the contributions of balance and (dis)similarity to diversity. \bibliographystyle{apa}
{ "timestamp": "2022-12-13T02:16:03", "yymm": "2212", "arxiv_id": "2212.05617", "language": "en", "url": "https://arxiv.org/abs/2212.05617", "abstract": "The Leinster and Cobbold diversity index possesses a number of merits; in particular, it generalises many existing indices and defines an effective number. We present a scheme to quantify the contribution of richness, evenness, and taxonomic similarity to this index. Compared to the work of van Dam (2019), our approach gives unbiased estimates of both evenness and similarity in a non-homogeneous community. We also introduce a notion of taxonomic tree equilibration which should be of use in the description of community structure.", "subjects": "Populations and Evolution (q-bio.PE)", "title": "Decomposition of the Leinster-Cobbold Diversity Index", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9740426443092215, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.7090791312445708 }
https://arxiv.org/abs/2010.14137
The leapfrog algorithm as nonlinear Gauss-Seidel
Several applications in optimization, image, and signal processing deal with data that belong to the Stiefel manifold St(n,p), that is, the set of n-by-p matrices with orthonormal columns. Some applications, like the Riemannian center of mass, require evaluating the Riemannian distance between two arbitrary points on St(n,p). This can be done by explicitly constructing the geodesic connecting these two points. An existing method for finding geodesics is the leapfrog algorithm of J. L. Noakes. This algorithm is related to the Gauss-Seidel method, a classical iterative method for solving a linear system of equations that can be extended to nonlinear systems. We propose a convergence proof of leapfrog as a nonlinear Gauss-Seidel method. Our discussion is limited to the case of the Stiefel manifold, however, it may be generalized to other embedded submanifolds. We discuss other aspects of leapfrog and present some numerical experiments.
\section{Introduction} \label{sec:geodesic_exp_log} The object of study in this paper is the compact Stiefel manifold, i.e., the set of orthonormal $n$-by-$p$ matrices \[ \Stnp = \lbrace X \in \R^{n \times p}: \ X\tr\! X = I_p \rbrace. \] Here, we are concerned with computing the Riemannian distance between two points on the Stiefel manifold. The distance between two points on a manifold is related to the concept of minimizing geodesic. A geodesic $ \gamma \colon [0,t] \to \cM $ is a curve with zero acceleration, which generalizes the notion of straight lines in Euclidean space to a Riemannian manifold~\cite{AMS:2008}. Geodesics allow us to introduce the \emph{Riemannian exponential} $ \mathrm{Exp}_{x}\colon T_{x}\cM \to \cM $ that maps a tangent vector $\xi = \dot{\gamma}(0) \in T_{x}\cM $ to the geodesic endpoint $\gamma(1) = y $ such that $ \mathrm{Exp}_{x}(\xi) = y$. The Riemannian exponential is a local diffeomorphism, i.e., it is locally invertible and its inverse is called the \emph{Riemannian logarithm} of $y$ at $x$ satisfying $ \mathrm{Log}_{x}(y) = \xi $. The \emph{injectivity radius} at a point $x$ of a Riemannian manifold $\cM$ is the largest radius for which the exponential map $\mathrm{Exp}_{x}$ is a diffeomorphism from the tangent space to the manifold. The global injectivity radius of a manifold is the infimum of all the injectivity radii over all points of the manifold. Given two points $ x $ and $ y $ on a manifold $ \cM $, if the Riemannian distance $ d(x,y) $ is smaller than $ \mathrm{inj}(\cM) $, then there exists a unique length-minimizing geodesic from $ x $ to $ y $. For the Stiefel manifold, the injectivity radius is at least $ 0.89\,\pi $; see \protect{\cite[Eq.~(5.13)]{Rentmeesters:2013}}. The distance on the Stiefel manifold is involved in numerous fields of applications, among which, computer vision \protect{\cite{Cetingul:2009,Sundaramoorthi:2011,Turaga:2011,Yin:2015}}, statistics \protect{\cite{Srivastava:2016}}, reduced-order models \protect{\cite{Amsallem:2011,Benner:2015}}. Given two points on the Stiefel manifold, our goal is to compute the length of the minimizing geodesic connecting them. For some manifolds, there are explicit formulas available for computing the distance. For the Stiefel manifold there is no closed-form solution known. In general, the problem of finding the distance given two points on a Riemannian manifold is related to the Riemannian logarithm. The problem of computing the Riemannian logarithm on the Stiefel manifold has already been tackled by several authors, who proposed some numerical algorithms. Rentmeesters \protect{\cite{Rentmeesters:2013}} and Zimmermann \protect{\cite{Zimmermann:2017,Zimmermann:2019}} proposed a similar algorithm which is only locally convergent and depends upon the definition of the (standard) matrix logarithm function. Another method for finding geodesics is the leapfrog algorithm introduced by J. L. Noakes \cite{Noakes:1998}. This method has global convergence properties, but it slows down when the solution is approached \protect{\cite[p.~2796]{Kaya:2008}}. Moreover, Noakes realized that his leapfrog algorithm was related to the Gauss--Seidel method \protect{\cite[p.~39]{Noakes:1998}}. The link between leapfrog and nonlinear Gauss--Seidel was not further investigated, since there is no trace of this idea being developed in the other related papers \cite{Kaya:1997,Kaya:2008}. In this paper, we will prove convergence of leapfrog as a nonlinear block Gauss--Seidel method. 
Even though our focus will be on $\Stnp$, most of our discussion may be generalized to other embedded submanifolds. A Riemannian metric has to be specified in order to turn $\Stnp$ into a Riemannian manifold, and in general different choices are possible. In this paper, we consider the non-Euclidean \emph{canonical metric} inherited by $\Stnp$ from its definition as a quotient space of the orthogonal group \protect{\cite[Eq.~(2.39)]{Edelman:1998}}. Given $ Y \in \Stnp $ and $ \xi \in T_{Y}\Stnp $, the canonical metric reads \begin{equation}\label{eq:formula_canonical_metric} g_{c}(\xi,\xi) = \trace\!\big( \xi\tr ( I - \tfrac{1}{2}YY\tr) \, \xi \big). \end{equation} Tangent vectors to the Stiefel manifold may be expressed in the form \[ \xi = Y_{0}\Omega + Y_{0\perp}K, \quad \text{with} \quad \Omega \in \mathcal{S}_{\mathrm{skew}}(p), \quad K\in\R^{(n-p)\times p}, \] with $ \mathcal{S}_{\mathrm{skew}}(p) $ the vector space of $p$-by-$p$ skew-symmetric matrices. An explicit formula for a geodesic with initial acceleration the tangent vector $ \xi $ and base point $ Y_{0} $ is \protect{\cite[Eq.~(2.42)]{Edelman:1998}} \begin{equation}\label{eq:closed-form-sol-geodesic} Y(t) = Q \exp\!\left( \begin{bmatrix} \Omega & -K\tr \\ K & O_{n-p} \end{bmatrix} t \right) \begin{bmatrix} I_{p} \\ O_{(n-p)\times p} \end{bmatrix}, \end{equation} with $ Q = \big[ Y_{0} \ Y_{0\perp} \big] $ and $Y_{0\perp}$ being any matrix whose range is $ (\mathrm{span}(Y_{0}))^{\perp}$. Given two points $Y_0$, $Y_1$ on $\Stnp$ that are sufficiently close to each other, finding the distance between them is equivalent to finding the tangent vector $\xi^{\ast} \in T_{Y_0}\Stnp$ with the shortest possible length such that $ \Exp_{Y_0}(\xi^{\ast}) = Y_1 $ \protect{\cite{Lee:2018,boumal2020intromanifolds}}. The solution to this problem is equivalent to the Riemannian logarithm of $Y_{1}$ with base point $Y_{0}$, i.e., $ \xi^{\ast} = \mathrm{Log}_{Y_0}(Y_1) $. Given the endpoints $ Y_0 $ and $ Y_1 $, we do not know what the matrices $ \Omega $ and $ K $ in \eqref{eq:closed-form-sol-geodesic} are. So the problem becomes: Find the matrices $ \Omega $ and $ K $ such that the explicit formula \eqref{eq:closed-form-sol-geodesic} gives the endpoint $ Y_{1} $. \section{Leapfrog algorithm} When $X, Y \in \cM$ are sufficiently close, their connecting geodesic can be found by applying Newton's method to~\eqref{eq:closed-form-sol-geodesic} such that $Y(1)=Y$ with $Y(0) = X$. This is more generally known as single shooting\footnote{In this context, there is no need to solve an ordinary differential equation as in a normal shooting method, because we have the solution \eqref{eq:closed-form-sol-geodesic}. Hence, it is actually Newton's method, but we keep the shooting terminology because it is typical for boundary value problems.}. However, when $X$ and $Y$ are far apart, it is well known that single shooting will have difficulty finding the connecting geodesic. The main idea behind the leapfrog algorithm of Noakes~\cite{Noakes:1998} is to exploit the success of single shooting by subdividing the global problem into several local problems, where intermediate points $X_i \in \cM$ are introduced between $X$ and $Y$, for which the endpoint geodesic problem can be solved again by single shooting. The algorithm then iteratively updates a piecewise geodesic to obtain a globally smooth geodesic between $X$ and $Y$. This idea is actually not new and goes back as early as 1963 by Milnor~\cite[III.\S16]{Milnor:1963}. 
It also resembles the better-known multiple shooting method for boundary value problems but it is different. \subsection{Formal description of the algorithm} In this section, we describe the leapfrog algorithm by following the presentation in~\cite{Noakes:1998}. Let $\cM$ be a $C^{\infty}$ path-connected Riemannian manifold. Consider a \emph{piecewise} (or \emph{broken}) \emph{geodesic} $ \omega_{X} $ joining $ X_{0} $ to $ X_{m-1} $, having $ m-1 $ geodesic segments. Assuming $ X_{i} $ and $ X_{i+1} $ are sufficiently close to each other, $ \omega_{X} $ is uniquely identified by the $ m $-tuple $ X = ( X_{0}, X_{1}, \ldots, X_{m-1} ) \in \cM^{m} $, where the $ X_{i} $ are the junctions of the geodesic segments. The leapfrog algorithm now proceeds as follows: for $ i=1,\ldots, m-2 $, each $ X_i $ is mapped onto the minimizing geodesic joining $ X_{i-1} $ and $ X_{i+1} $. This achieves the largest possible decrease in length while keeping the other variables fixed. Though there are several ways to do this, leapfrog maps $ X_i $ onto the midpoint of the geodesic joining $ X_{i-1} $ and $ X_{i+1} $. By iterating this procedure, the algorithm generates a sequence $\Omega = \left\lbrace \omega_{X^{(k)}}\colon [0,1] \to \cM \colon k = 0,1, \ldots \right\rbrace $ of broken geodesics whose lengths are decreasing. Figure \ref{fig:leapfrog} illustrates one iteration of the leapfrog algorithm. It is clear that leapfrog generates a sequence of broken geodesics $\omega_{X^{(k)}}$ that are determined by $ X^{(k)}$. In addition, the length of $\omega_{X^{(k)}}$ is non-increasing in $k$ since at each step two neighboring geodesics get replaced by one global geodesic connecting their endpoints. \begin{figure} \centering \includegraphics[width=0.85\columnwidth]{Leapfrog_process.pdf} \caption{Illustration of one full iteration of the leapfrog scheme for some non-Euclidean metric (the lengths for the Euclidean metric clearly increase during iteration).} \label{fig:leapfrog} \end{figure} \subsection{Known results} Let $ \cY $ be the set of all tuples $X=(X_{0},X_{1},\ldots,X_{m-1}) \in \cM^{m} $ satisfying $d(X_{i-1},X_{i})\leq \delta$ for all $i=1,2,\ldots,m-2$. In \cite[\S2]{Noakes:1998}, $\delta$ is related to the notion of Lebesgue number of an open cover. Here, we can assume that $ \delta $ is equal to $ \tfrac{1}{2} \inj\!\big(\cM\big) $, where $ \inj $ is the injectivity radius (see Sect.~\ref{sec:geodesic_exp_log}). Let $ \cF \colon \cY \to \cY $ represent one full leapfrog iteration and let $X^{\ast}$ be the limit of any convergent subsequence of $ S = \lbrace \cF^{k}(X^{(0)}) \colon k \geq 1 \rbrace $ with $X^{(0)} \in \cY$. By compactness, \cite{Noakes:1998} shows that at least one convergent subsequence of $S$ exists and that the limit of any such subsequence is a tuple of points that lie on a global geodesic connecting the endpoints $ X_0 $ and $ X_{m-1} $. The following result is stated in \cite[Theorem~5.2]{Kaya:2008}. \begin{theorem}\label{thm:kaya_noakes_convg_leapfrog} $S$ has a unique accumulation point. \end{theorem} The theorem guarantees convergence of the iterates $X^{(k)} = \cF(X^{(k-1)})$ with $X^{(0)} \in \cY$. From \cite[Lemma~3.2]{Noakes:1998} we also know that leapfrog will converge to a uniformly distributed $ m $-tuple $ X^{\ast} = ( X_{0}, X_{1}^{\ast}, \ldots, X_{m-2}^{\ast}, X_{m-1} ) $, i.e., the distances $ d(X_{i}^{\ast},X_{i+1}^{\ast}) $ are all equal for $ i = 0, \ldots, m-2 $. In other words, at convergence, the geodesic segments connecting the junction points will all have the same length.
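Before turning to the convergence analysis, it may help to see the closed-form geodesic \eqref{eq:closed-form-sol-geodesic} in code: once the velocity connecting two junction points has been found by single shooting, the leapfrog midpoint update is simply an evaluation of this map at $t=1/2$. The following sketch uses Python with NumPy and SciPy; the random data is purely illustrative and the function name is ours.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, null_space

def stiefel_geodesic(Y0, Omega, K, t):
    # Geodesic through Y0 with velocity xi = Y0 Omega + Y0_perp K,
    # evaluated at parameter t (the closed-form formula of Section 1).
    n, p = Y0.shape
    Y0_perp = null_space(Y0.T)           # orthonormal basis of span(Y0)^perp
    A = np.block([[Omega, -K.T],
                  [K, np.zeros((n - p, n - p))]])
    Q = np.hstack([Y0, Y0_perp])
    return Q @ expm(t * A)[:, :p]

rng = np.random.default_rng(0)
n, p = 6, 2
Y0 = np.linalg.qr(rng.standard_normal((n, p)))[0]    # a point on St(n, p)
W = rng.standard_normal((p, p))
Omega, K = W - W.T, rng.standard_normal((n - p, p))  # a tangent direction
Y_mid = stiefel_geodesic(Y0, Omega, K, 0.5)          # midpoint, as in leapfrog
print(np.allclose(Y_mid.T @ Y_mid, np.eye(p)))       # True: stays on St(n, p)
\end{verbatim}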
An apparent drawback in the current theory is that it lacks a classical convergence proof as a fixed point iteration method, although leapfrog can be easily recognized as such. In the next section, we will provide the details of how to analyze leapfrog as a nonlinear block Gauss--Seidel method. \section{Convergence of leapfrog as nonlinear Gauss--Seidel}\label{sec:lf_as_ngs} Let $\cM = \Stnp$ with the Riemannian distance function $d$. The starting point is to realize that leapfrog solves the optimization problem \[ \min_{ X_{1}, \ldots, X_{m-2} \in \Stnp } F(X_{1}, \ldots, X_{m-2} ) \quad \text{with} \quad F(X_{1}, \ldots, X_{m-2}) = \sum_{i=1}^{m-1} d^{2}(X_{i-1},X_{i}), \] by cyclically minimizing over each variable $X_{i}$ for $i=1,2,\ldots, m-2 $. Specifically, at the $k$th iteration, leapfrog updates $X_i^{(k-1)}$ by the minimizer of the problem \begin{equation}\label{eq:constrained_opt_pb} \begin{aligned} &\min_{X_{i} \in \Stnp } F( X_{1}^{(k)}, \ldots, X_{i-1}^{(k)}, X_{i}, X_{i+1}^{(k-1)}, \ldots, X_{m-2}^{(k-1)} ) \\ &= \min_{X_{i} \in \Stnp } d^2(X_{i-1}^{(k)},X_{i}) + d^2(X_{i},X_{i+1}^{(k-1)}) + \text{constant}. \end{aligned} \end{equation} Since $d$ is the Riemannian distance, this problem coincides with the definition of the Riemannian center of mass\footnote{The Riemannian center of mass was constructed in \protect{\cite{Grove:1973}}. As H. Karcher points out in \protect{\cite{Karcher:2014}}, ``Probably in 1990 someone renamed it without justification into \emph{karcher mean} and references to the older papers were omitted by those using the new name. (...) I think it is fair to say that a substantial amount of damage was caused by the renaming''. For this reason, in this paper, we decided to stick to the original name.} between the two points $X_{i-1}^{(k)}$ and $X_{i+1}^{(k-1)}$; see~\cite[Eq.~(1.1)]{Karcher:1977}. For the compact Stiefel manifold, a Riemannian center of mass always exists, but it does not need to be unique \cite[p.~37]{Rentmeesters:2013}. However, a sufficient condition for uniqueness is $d(X_{i-1}^{(k)},X_{i+1}^{(k-1)}) < \inj\!\big(\Stnp\big)$, where $ \inj $ is the injectivity radius (see Sect.~\ref{sec:geodesic_exp_log}). This is true if all $X_i$ are close enough (we will make this more precise later). In that case, the unique solution that solves~\eqref{eq:constrained_opt_pb} is the midpoint of the minimizing geodesic between $X_{i-1}^{(k)}$ and $X_{i+1}^{(k-1)}$. Leapfrog now proceeds to update the $X_i$ in a Gauss--Seidel fashion where the most recent $X_{i-1}^{(k)}$ is used to update $X_{i}^{(k-1)}$. This kind of optimization scheme is known as \emph{block coordinate descent method} of Gauss--Seidel type~\cite{Ortega:2000}. \subsection{Nonlinear block Gauss--Seidel method} Let us first consider the case of Gauss--Seidel in $ \R^{n} $. Let the variable $ x \in \R^{n} $ be partitioned as $ x = (x_{1}, x_{2}, \ldots, x_{m}) $, where $ x_{i} \in \R^{q_{i}} $ and $ \sum_{i} q_{i} = n $, and group correspondingly the components of $ \widetilde{F} \colon D \subset \R^{n} \to \R^{n} $ into mappings $ \widetilde{F}_{i} \colon \R^{n} \to \R^{q_{i}} $, $ i = 1, \ldots, m $. The minimizers of the function $ \widetilde{F}(x) $ satisfy the first-order optimality condition $ \nabla \widetilde{F}(x) = 0 $. Let us define $ \cG_{i} = \nabla \widetilde{F}_{i} $, $ i = 1, \ldots, m $. 
If we interpret the linear Gauss--Seidel iteration in terms of obtaining $ x_{i}^{(k)} $ as the solution of the $i$th equation of the system with the other $ m-1 $ block variables held fixed, then we may immediately consider the same prescription for nonlinear equations \cite[p.~219]{Ortega:2000}. Then solving \begin{equation}\label{eq:ith_nonlinear_eq} \cG_{i}( x_{1}^{(k)},\ \ldots,\ x_{i-1}^{(k)}, \ y,\ x_{i+1}^{(k-1)}, \ \ldots,\ x_{m}^{(k-1)}) = 0 \end{equation} for $ y $ and defining $ x_{i}^{(k)} = y $ describes a nonlinear block Gauss--Seidel process in which a complete iteration requires the solution of $ m $ nonlinear systems of dimensions $ q_{i} $, $ i = 1, \ldots, m $; see~\cite[p.~225]{Ortega:2000}. The convergence theory in \cite{Ortega:2000} applies only to functions whose domain of definition is Euclidean space $\R^{n}$. This theory cannot be applied to functions that are defined on manifolds. For instance, the Riemannian distance $ d $ is only defined on a subset of $\R^{n}$, i.e., the embedded submanifold. For this reason, in the next section we will introduce a smooth extension of the Riemannian distance function that can also be evaluated for points that do not belong to the manifold. \subsection{Extended objective function} \label{sec:ext_dist_function} As we have seen above, leapfrog solves in an alternating way the problem \begin{equation*} \min_{X_1,\ldots, X_{m-2} \in \Stnp} F(X_{1}, \ldots, X_{m-2} ) = \sum_{i=1}^{m-1} d^2(X_{i-1},X_{i}), \end{equation*} where $X_0$ and $X_{m-1}$ are the fixed endpoints. This objective function $F$ is only defined on the manifold $\Stnp$. In this section, we will identify an \emph{extended objective function} $\widetilde F$ that is defined on $\Rnp$ for which the standard nonlinear block Gauss--Seidel method produces the same iterates as the leapfrog algorithm. The key result of this section is stated in Prop.~\ref{prop:leapfrog_is_GS}. This will allow us to analyze the convergence of leapfrog using standard results for nonlinear Gauss--Seidel. We claim the extended cost function can be chosen as \begin{equation*} \min_{X_1,\ldots, X_{m-2} \in \Rnp} \widetilde F(X_{1}, \ldots, X_{m-2} ) = \sum_{i=1}^{m-1} \widetilde d^2(X_{i-1},X_{i}), \end{equation*} with \emph{extended distance function} \begin{equation}\label{eq:ext_dist_function} \widetilde d^2(\widetilde X, \widetilde Y) = \begin{cases} d^{2}(\PSt \widetilde X,\PSt \widetilde Y) + \| \widetilde X - \PSt \widetilde X \|_{\F}^{2} + \| \widetilde Y - \PSt \widetilde Y\|_{\F}^{2}, \\ \quad\quad\quad\qquad\qquad\qquad\qquad \text{if $\sigma_p(\widetilde X) > 0$ and $\sigma_p(\widetilde Y) > 0$;}\\ +\infty, \qquad\qquad\qquad\quad\quad\quad \ \ \text{otherwise,} \end{cases} \end{equation} where $ \PSt $ denotes the orthogonal projector onto the Stiefel manifold, and $ \sigma_{p} $ is the smallest singular value. The condition $\sigma_p(\widetilde X)>0$ is equivalent to the existence of a unique best approximation of $\widetilde X$ in $\Stnp$. In other words, $\PSt \widetilde X$ is well defined. Concretely, we can define the projector $ \PSt \colon \Rnp \to \Stnp $ by $ \PSt(Z) = Z(Z\tr Z)^{-1/2} $, that is, the orthogonal factor of the polar decomposition of $ Z $ \cite[p.~58]{AMS:2008}. Figure \ref{fig:extended_distance} illustrates the extended distance function $ \widetilde d^2(\widetilde X, \widetilde Y) $. 
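To make the construction concrete, the following minimal sketch (Python with NumPy) implements the projector $\PSt$ and the extended squared distance \eqref{eq:ext_dist_function}; the routine computing the Riemannian distance on $\Stnp$ is deliberately left abstract, since no closed-form expression is available, and the function names are ours.
\begin{verbatim}
import numpy as np

def proj_stiefel(Z):
    # Orthogonal projector onto St(n,p): the orthogonal polar factor
    # Z (Z^T Z)^(-1/2), computed here via the thin SVD Z = U S V^T.
    U, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ Vt

def extended_dist_sq(X, Y, riem_dist):
    # Extended squared distance; riem_dist(A, B) is assumed to return the
    # Riemannian distance between two points A, B of St(n,p).
    sx = np.linalg.svd(X, compute_uv=False).min()
    sy = np.linalg.svd(Y, compute_uv=False).min()
    if sx <= 0.0 or sy <= 0.0:
        return np.inf
    PX, PY = proj_stiefel(X), proj_stiefel(Y)
    return (riem_dist(PX, PY) ** 2
            + np.linalg.norm(X - PX, "fro") ** 2
            + np.linalg.norm(Y - PY, "fro") ** 2)
\end{verbatim}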
\begin{figure} \centering \includegraphics[width=0.55\columnwidth]{extended_distance.pdf} \caption{The extended distance function.} \label{fig:extended_distance} \end{figure} \subsection{Leapfrog as nonlinear Gauss--Seidel} In order to show that nonlinear Gauss--Seidel applied to $\widetilde F$ is equivalent to leapfrog for $F$, we need a few lemmas. The first one is a known result which addresses the problem of how close the points on $\Stnp$ need to be so that their connecting geodesic is unique. \begin{lemma}\label{lemma:geodesic_exists_for_close_neighbours} Let $X,Y \in \Stnp$ such that $d(X,Y)\leq \delta_g $, with $ \delta_g = 0.89 \, \pi $. Then there exists a unique minimizing geodesic between $X$ and $Y$. As a consequence, also the Riemannian center of mass between $X$ and $Y$ exists and is uniquely defined. \end{lemma} \begin{remark}\label{rmk:distance} We can compare the Riemannian and Euclidean distances between $X$ and $ Y \in \Stnp$ asymptotically in the following way\footnote{For the Riemannian distance $d_\mathrm{e}$ based on the embedded metric, it is easy to see that $ \| X - Y \|_{\F} \leq d_\mathrm{e}(X,Y) $ since the Euclidean length of a geodesic on $\Stnp$ is always larger than that of a straight line.}. From the expansion of the canonical distance in~\eqref{eq:expansion_canonical_dist}, it is clear that \[ d(X,Y) \leq \| X - Y \|_{\F} + \cO (\| X - Y \|^{2}_{\F}) \quad \text{for} \quad \| X - Y \|_{\F} \to 0. \] By neglecting $ \cO (\| X - Y \|^{2}_{\F}) $, we thus have $ d(X,Y) \lesssim \| X - Y \|_{\F} $. In particular, $\|X - Y\|_{\F} \leq \delta_g $ implies $ d(X,Y) \lesssim \delta_g$. \end{remark} Let $X_{i-1}, X_{i+1} \in \Stnp$. Denote \[ F_i(Y) = d^2(X_{i-1},Y) + d^2(Y,X_{i+1}), \qquad \widetilde F_i(\widetilde Y) = \widetilde d^2(X_{i-1}, \widetilde Y) + \widetilde d^2(\widetilde Y,X_{i+1}), \] where $X_{i-1}$, $ X_{i+1}$ are constant and hidden in the notation. \begin{lemma}\label{lemma:substep_leapfrog_equals_substep_GS} With the notation from above, assume that $d(X_{i-1},X_{i+1})\leq \delta_g$. Then the $i$th substep of leapfrog produces the same solution $Y^{\ast}$ as the minimization of $\widetilde{F}_{i}$, \[ \argmin_{Y \in \Stnp} F_i(Y) = \argmin_{\widetilde Y \in \Rnp} \widetilde F_i(\widetilde Y) = Y^{\ast}, \] with $Y^{\ast}$ the Riemannian center of mass on $\Stnp$ of $X_{i-1}$ and $X_{i+1}$. \end{lemma} \begin{proof} Since $d(X_{i-1},X_{i+1})\leq \delta_g$, Lemma~\ref{lemma:geodesic_exists_for_close_neighbours} gives that the minimizer of $F_i$ on $\Stnp$ is unique and equals the Riemannian center of mass $Y^{\ast}$. To show that it also equals the minimizer of $\widetilde F_i$ on $\Rnp$, take any $\widetilde Y \in \Rnp$. If $\sigma_p(\widetilde Y) > 0$, then we can write \[ \widetilde Y = Y + \Delta, \qquad Y = \PSt \widetilde Y \in \Stnp. \] Using that $Y^{\ast}$ is the minimizer of $F_i$ on $\Stnp$, we thus get \[ \widetilde F_i(\widetilde Y) = d^{2}(X_{i-1}, Y) + d^{2}(Y,X_{i+1}) + 2 \| \Delta \|_{\F}^{2} \geq F_i(Y) \geq F_i(Y^{\ast}). \] The same inequality holds trivially if $\sigma_p(\widetilde Y) = 0$ since then $\widetilde F_i(\widetilde Y) = +\infty$. Finally, since $\widetilde F_i(Y^{\ast}) = F_i(Y^{\ast})$, we obtain that $\widetilde F_i$ is also uniquely minimized by $Y^{\ast}$. \end{proof} \begin{lemma}\label{lem:leapfrog_one_step_is_GS} Suppose that for all iterations $k=0,1,\ldots$, the iterates of leapfrog satisfy \[ d(X_{i-1}^{(k)}, X_{i+1}^{(k-1)}) \leq \delta_g, \] for all $ i = 1, 2, \ldots, m-2 $.
Then, the leapfrog algorithm started in $X^{(0)}$ generates the same iterates as the nonlinear Gauss--Seidel algorithm started in $X^{(0)}$ and applied to \[ \min_{X_1,\ldots, X_{m-2} \in \Rnp} \widetilde F(X_{1}, \ldots, X_{m-2} ). \] \end{lemma} \begin{proof} By induction. Suppose the statement holds up to substep $i-1$ of iteration $k$. Then, leapfrog computes the new iterate as \[ X_i^{(k)} = \argmin_{Y \in \Stnp} d^2(X_{i-1}^{(k)},Y) + d^2(Y,X_{i+1}^{(k-1)}). \] The uniqueness of the minimizer follows from Lemma \ref{lemma:geodesic_exists_for_close_neighbours} and $d(X_{i-1}^{(k)}, X_{i+1}^{(k-1)}) \leq \delta_g$. Likewise, nonlinear Gauss--Seidel computes \[ \widetilde X_i^{(k)} = \argmin_{\widetilde Y \in \Rnp} \widetilde F(X_1^{(k)}, \ldots, X_{i-1}^{(k)}, \widetilde Y, X_{i+1}^{(k-1)}, \ldots, X_{m-2}^{(k-1)}), \] and the uniqueness of the minimizer follows from our reasoning below. Both minimization problems are the same as minimizing $F_i$ and $\widetilde F_i$ from Lemma~\ref{lemma:substep_leapfrog_equals_substep_GS} but with $X_{i-1}^{(k)}$ and $X_{i+1}^{(k-1)}$ taking the roles of $X_{i-1}$ and $X_{i+1}$, respectively. By Lemma~\ref{lemma:substep_leapfrog_equals_substep_GS}, the minimizers of both problems are the same and hence $X_i^{(k)} = \widetilde X_i^{(k)}$. The above reasoning can also be applied to the base case $k=i=1$ since $X^{(1)}_0 = X^{(0)}_0$. Hence, we have proven the result. \end{proof} If the initial points are close enough, the iterates in leapfrog stay close. \begin{lemma}\label{lem:leapfrog_iterates_stay_close} Let $X^{(0)} \in \Stnp^{m}$ be such that $d(X_{i-1}^{(0)}, X_{i}^{(0)}) \leq \tfrac{1}{2}\delta_g$ for all $1 \leq i \leq m -1 $. Then, leapfrog started at $X^{(0)}$ is well defined and all its iterates $X^{(k)}$ satisfy, for all $1 \leq i \leq m-2$ and $k\geq 1$, \begin{equation}\label{eq:closer_points_during_leapfrog} d(X_{i-1}^{(k)}, X_{i}^{(k)}) = d(X_{i}^{(k)}, X_{i+1}^{(k-1)}) \leq \tfrac{1}{2}\delta_g. \end{equation} \end{lemma} \begin{proof} By induction. Suppose the statement holds for all substeps $i$ up to iteration $k-1$ and up to substep $i-1$ of iteration $k$. This implies in particular \[ d(X_{i-1}^{(k)}, X_{i}^{(k-1)}) \leq \tfrac{1}{2}\delta_g , \quad d(X_{i}^{(k-1)}, X_{i+1}^{(k-1)}) \leq \tfrac{1}{2} \delta_g. \] By the triangle inequality for the Riemannian distance, \[ d(X_{i-1}^{(k)}, X_{i+1}^{(k-1)}) \leq d(X_{i-1}^{(k)}, X_{i}^{(k-1)}) + d(X_{i}^{(k-1)}, X_{i+1}^{(k-1)}) \leq \delta_g, \] so Lemma~\ref{lemma:geodesic_exists_for_close_neighbours} gives that the leapfrog iteration is well defined and produces the unique minimizer \[ X_i^{(k)} = \argmin_{Y \in \Stnp} d^2(X_{i-1}^{(k)},Y) + d^2(Y,X_{i+1}^{(k-1)}). \] We thus have \[ d^2(X_{i-1}^{(k)},X_i^{(k)}) + d^2(X_i^{(k)},X_{i+1}^{(k-1)}) \leq d^2(X_{i-1}^{(k)},X_i^{(k-1)}) + d^2(X_i^{(k-1)},X_{i+1}^{(k-1)}) \leq \tfrac{1}{2} \delta_g^2. \] Since $X_i^{(k)}$ is the midpoint of the geodesic connecting $X_{i-1}^{(k)}$ to $X_{i+1}^{(k-1)}$, we also have \[ d(X_{i-1}^{(k)},X_i^{(k)}) = d(X_i^{(k)},X_{i+1}^{(k-1)}). \] Combining these two results proves~\eqref{eq:closer_points_during_leapfrog} up to substep $i$ at iteration $k$. Since $X_0^{(k+1)} = X_0^{(k)} = X_0^{(0)}$, the case of substep $i=1$ at iteration $k+1$ follows by the same reasoning as above. The same is true for the base case $i=k=1$, which ends the proof.
\end{proof} Hence, combining Lemmas~\ref{lem:leapfrog_one_step_is_GS} and~\ref{lem:leapfrog_iterates_stay_close}, we get our desired result: \begin{proposition}\label{prop:leapfrog_is_GS} Let $X^{(0)} \in \Stnp^{m}$ be such that $d(X_{i-1}^{(0)}, X_{i}^{(0)}) \leq \tfrac{1}{2}\delta_g$ for all $1 \leq i \leq m$. Then the leapfrog algorithm applied to $F$ is equivalent to the nonlinear Gauss--Seidel method applied to $\widetilde F$. \end{proposition} We can now proceed and analyze the convergence of this nonlinear Gauss--Seidel method using standard theory. \subsection{First-order optimality} From Prop.~\ref{prop:leapfrog_is_GS}, we know that at iteration $k \geq 1$ and for subinterval $i \in \{ 1,\ldots, m-2\}$, leapfrog solves the following unconstrained optimization problem \begin{equation* \min_{X_{i} \in \Rnp} \widetilde F_{i}^{k}(X_{i}), \end{equation*} where the objective function is defined as \begin{equation*} \widetilde F_i^{k}(Y) = \widetilde d^2(X_{i-1}^{(k)}, Y) + \widetilde d^2(Y,X_{i+1}^{(k-1)}). \end{equation*} Recall that $X_{i-1}^{(k)}, X_{i+1}^{(k-1)} \in \Stnp$ are the neighboring points of $X_i$ and that $X_{i-1}^{(k)}$ was previously updated and that $X_{i+1}^{(k-1)}$ will be updated next. Let us define \begin{equation*} \cG_{i}(Y) = \nabla_{Y} \widetilde F_{i}^{k}(Y) = \nabla_{Y} \widetilde{d}^{2}(X_{i-1}^{(k)},Y) + \nabla_{Y} \widetilde{d}^{2} (X_{i+1}^{(k-1)}, Y). \end{equation*} At the minimizer $X_i$, the gradient of $\widetilde F_i^{k}$ vanishes, i.e., $ \cG_{i}(X_{i}) = 0 $. Likewise, if we take all the minimizers $X = (X_1, \ldots, X_{m-2})$ together, they will satisfy \begin{equation*} \begin{cases} \cG_{1}(X_{}) = \nabla_{X_{1}} \widetilde{d}^{2}(X_{0},X_{1}) + \nabla_{X_{1}} \widetilde{d}^{2} (X_{1}, X_{2}) = 0, \\ \cG_{2}(X_{}) =\nabla_{X_{2}} \widetilde{d}^{2}(X_{1},X_{2}) + \nabla_{X_{2}} \widetilde{d}^{2} (X_{2}, X_{3}) = 0, \\ \qquad \vdots \\ \cG_{m-2}(X_{}) =\nabla_{X_{m-2}} \widetilde{d}^{2}(X_{m-3},X_{m-2}) + \nabla_{X_{m-2}} \widetilde{d}^{2} (X_{m-2}, X_{m-1}) = 0. \end{cases} \end{equation*} This can be written compactly as $\cG(X)=0$, where $\cG$ is defined componentwise $ \cG_{i} \colon \Rnp \to \Rnp $, for $i = 1,\ldots, m-2$. \subsection{Known results on local convergence} Assuming convergence to the limit point $X_1^{\ast}, X_2^{\ast}, \ldots, X_{m-2}^{\ast}$, the asymptotic convergence rate is determined by the spectral radius of a certain blockwise partitioning of the Hessian of $\widetilde F$ at this limit point. \begin{theorem}[Nonlinear block Gauss--Seidel theorem]\label{thm:asymptotic_speed_nonlinearBGS} Let $ \cG \colon \cD \subset \R^{(m-2)np} \to \R^{(m-2)np} $ be continuously differentiable in an open neighborhood $ \cB_{0} \subset \cD $ of a point $ X^{\ast} \in \cD $ for which $ \cG(X^{\ast}) = 0 $. Consider the decomposition of $ \cG' = D - L - U $ into its block diagonal, strictly lower-, and strictly upper-triangular parts, and suppose that $ D(X^{\ast} ) $ is nonsingular and $ \rho( M^{\mathrm{BGS}}(X^{\ast} ) ) < 1 $, where $ M^{\mathrm{BGS}} = (D-L)^{-1} U $. Then there exists an open ball $ \cB = \cB(X^{\ast}, \delta ) $ in $ \cB_{0} $ such that, for any $ X^{(0)} \in \cB $, there is a unique sequence $ \lbrace X^{(k)} \rbrace \subset \cB $ which satisfies the nonlinear Gauss--Seidel prescription. 
Moreover, $ \lim_{k\to\infty} X^{(k)} = X^{\ast} $ and for any $ X^{(0)} \in \cB_{0} $, the convergence rate in the form $ \limsup_{k\to \infty} \sqrt[k]{\| X^{(k)} - X^{\ast} \|} $ is upper bounded by $ \rho( M^{\mathrm{BGS}}(X^{\ast} ) ) $. \end{theorem} \begin{proof} As a direct extension of \cite[Theorem~10.3.5]{Ortega:2000}. \end{proof} This theorem shows the need for the Hessian of $ \widetilde{F} $ (i.e., $ \cG' $) and its block $ D-L-U $ decomposition. As we shall see, our matrix $ \cG' $ is given by the sum of two matrices $ \cG' = A + E $, where $ A $ is symmetric block tridiagonal and positive definite, and $ E $ can be regarded as a perturbation matrix. Since it is very difficult to compute the spectral radius of $ M^{\mathrm{BGS}} $ with this perturbation $ E $, we will not use Theorem~\ref{thm:asymptotic_speed_nonlinearBGS} directly. Instead, we will use the \emph{Householder--John theorem} \cite[Corollary~3.42]{Hackbusch:2016aa}, which states that if $ \cG' $ is positive definite, then the $ M^{\mathrm{BGS}} $ from Theorem \ref{thm:asymptotic_speed_nonlinearBGS} satisfies $ \rho(M^{\mathrm{BGS}}) < 1 $. In other words, (linear) block Gauss--Seidel for a symmetric and positive definite $ \cG' $ always converges monotonically in the energy norm \cite[Theorem~3.53]{Hackbusch:2016aa}. Therefore, we only need to restrict the perturbation $ E $ such that the whole matrix $ \cG' $ is symmetric and positive definite. In order to do that, we will also use a block version of the Gershgorin circle theorem \cite[Theorem~2]{Feingold:1962aa}. \subsection{Local convergence}\label{sec:local_convergence} As required in Theorem~\ref{thm:asymptotic_speed_nonlinearBGS}, we compute the Hessian as the Jacobian matrix $\cG'(X)$, a square matrix of size $(m-2)np$. \begin{scriptsize} \begin{equation*} \cG' = \begin{bmatrix} \nabla^{2}_{X_{1}}\widetilde{d}^{2}(X_{0},X_{1}) + \nabla^{2}_{X_{1}}\widetilde{d}^{2}(X_{1},X_{2}) & \nabla_{X_{2}} \nabla_{X_{1}}\widetilde{d}^{2}(X_{1},X_{2}) & & \\ \nabla_{X_{1}} \nabla_{X_{2}}\widetilde{d}^{2}(X_{1},X_{2}) & \nabla^{2}_{X_{2}}\widetilde{d}^{2}(X_{1},X_{2}) + \nabla^{2}_{X_{2}}\widetilde{d}^{2}(X_{2},X_{3}) & \quad \nabla_{X_{3}} \nabla_{X_{2}}\widetilde{d}^{2}(X_{2},X_{3}) & \\ & {\scriptstyle\ddots} & {\scriptstyle\ddots} & {\scriptstyle\ddots} \end{bmatrix}. \end{equation*} \end{scriptsize} By symmetry of the Hessian, we can write this compactly as \begin{equation*} \cG' = \begin{bmatrix} D_{10} + D_{12} & L_{12}\tr & & & \\ L_{12} & D_{21} + D_{23} & L_{23}\tr & & \\ & \ddots & \ddots & \ddots \\ & & L_{m-3,m-2} & D_{m-2,m-3} + D_{m-2,m-1} \end{bmatrix}, \end{equation*} where \[ L_{ij} = \nabla_{X_{i}} \nabla_{X_{j}} \widetilde{d}^{2}(X_{i},X_{j}) \qquad \text{and} \qquad D_{ij} = \nabla^2_{X_{i}} \widetilde{d}^{2}(X_{i},X_{j}) \] denote the mixed and double derivatives\footnote{Observe that $L_{ij} = L_{ji}\tr$ by equality of mixed derivatives but in general $D_{ij} \neq D_{ji}\tr$ since only the variable corresponding to the first index is derived.}. We now turn to the computation of these derivatives $L_{ij}$ and $D_{ij}$. To that end, the following lemma is convenient since it writes $\widetilde{d}^{2}(X_{i},X_{j})$ as an expansion that does not explicitly use the Riemannian distance. 
\begin{lemma}\label{lemma:expansion_ext_dist_function} Let $\widetilde{X}, \widetilde{Y} \in \Rnp $ be such that $\sigma_p(\widetilde X) > 0$ and $\sigma_p(\widetilde Y) > 0$. Then \begin{equation}\label{eq:expansion_ext_dist_function} \begin{split} \widetilde{d}^2(\widetilde{X}, \widetilde{Y}) = & \ \| \PSt\widetilde{X} - \PSt\widetilde{Y} \|^{2}_{\F} - \tfrac{1}{2} \| I_{p} - \big(\PSt\widetilde{X}\big)\tr\PSt\widetilde{Y}\|^{2}_{\F} \\ & + \| \widetilde{X} - \PSt\widetilde{X} \|_{\F}^{2} + \| \widetilde{Y} - \PSt\widetilde{Y} \|_{\F}^{2} + \cO (\| \PSt\widetilde{X} - \PSt\widetilde{Y} \|^{4}_{\F}). \end{split} \end{equation} \end{lemma} \begin{proof} See App.~\ref{app:proof_expansion_ext_dist_function}. \end{proof} In the following, denote $\delta_{ij} = \| X_i - X_j \|_2$ for any $X_i, X_j \in \Stnp$. \begin{lemma}\label{lemma:hessian_at_X_Z_on_Stiefel} Let $ X_{i}, X_{j} \in \Stnp $. Then \begin{align} D_{ij} &= 2 I_{np} + \tfrac{1}{2}\,( X_{i}\tr \otimes X_{i} ) \, \Pi_{p,n} - \tfrac{1}{2}\,(I_{p} \otimes X_{i}X_{i}\tr ) + \Delta_{ij}, \label{eq:Dij_on_Stiefel} \\ L_{ij} &= - 2I_{np} + \tfrac{1}{2}(X_i\tr \otimes X_i) \, \Pi_{p,n} + \tfrac{3}{2}(I_{p}\otimes X_iX_i\tr) + \Lambda_{ij}, \label{eq:Lij_on_Stiefel} \end{align} with $ \| \Delta_{ij} \|_2 \leq 14 \delta_{ij} + 10 \delta_{ij}^2 $ and $\| \Lambda_{ij} \|_2 \leq \tfrac{11}{2} \delta_{ij} + 10 \delta_{ij}^2 + 4 \delta_{ij}^3$. Here, $\Pi_{p,n}$ is the vec-permutation matrix defined as the matrix that satisfies $\vecop (X) = \Pi_{n,p} \vecop (X\tr)$; see, e.g., \cite[Eq.~(5)]{Henderson:1981}. \end{lemma} \begin{proof} See App.~\ref{app:proof_hessian_at_X_Z_on_Stiefel}. \end{proof} Our aim is to diagonalize $\cG'$. We will do this in a few steps. First, observe that $\cG'$ remains block tridiagonal if it is transformed using a compatible block diagonal matrix $\cQ = \diag\lbrace Q_1, Q_2, \ldots, Q_{m-2} \rbrace $: \begin{equation}\label{eq:block_QGQ} \cQ\tr \cG' \cQ = \begin{bmatrix} Q_1\tr (D_{10} + D_{12}) Q_1 & Q_1\tr L_{12}\tr Q_2 & & & \\ Q_2\tr L_{12} Q_1 & Q_2\tr (D_{21} + D_{23})Q_2 & Q_2\tr L_{23}\tr Q_3 & & \\ & \ddots & \ddots & \ddots \\ \end{bmatrix}, \end{equation} where the $Q_1, \ldots, Q_{m-2} \in \R^{np \times np}$ can be any orthogonal matrices. The lemma below shows us how to choose these matrices so that we obtain diagonal blocks in $\cQ\tr \cG' \cQ$, up to first order in $\delta_{ij}$. \begin{lemma}\label{lemma:diagonalization_hessian_on_Stiefel} Let $X_i^\perp \in \R^{n \times (n-p)}$ be such that $X_i\tr X_i^{\perp} = O_{p \times (n-p)} $ and $(X_i^{\perp})\tr X_i^{\perp} = I_{(n-p)}$. Define the orthogonal matrices \begin{equation*} \overbar{Q}_i = \big[ I_{p}\otimes X_i \quad I_{p}\otimes X_i^{\perp} \big], \end{equation*} and similarly for~$ \overbar{Q}_{j}$. Then, there exists an orthogonal matrix $\widehat{Q}$, only depending on $n$ and $p$, such that $ Q_i = \bar{Q}_i \widehat{Q} $ and $ Q_j = \bar{Q}_j \widehat{Q} $ satisfy \begin{align} \| Q_i \tr D_{ij} Q_i - D \|_2 &\leq C_D^{(ij)}, & D &= \diag\left\lbrace I_{p(p-1)/2}, \ 2 \, I_{np-p(p-1)/2} \right\rbrace, \label{eq:diag_Dij_on_Stiefel} \\ \|Q_j \tr L_{ij} Q_i - L \|_2 &\leq C_L^{(ij)}, & L &= \diag\left\lbrace -I_{p(p-1)/2}, \ -2 I_{(n-p)p}, \ O_{p(p+1)/2} \right\rbrace, \label{eq:diag_Lij_on_Stiefel} \end{align} where $C_D^{(ij)} = 14 \delta_{ij} + 10 \delta_{ij}^2$ and $C_L^{(ij)} = \tfrac{15}{2} \delta_{ij} + \tfrac{31}{2} \delta_{ij}^2 + 14 \delta_{ij}^3 + 4 \delta_{ij}^4$. \end{lemma} \begin{proof} See App.~\ref{app:proof_diag_Hij}.
\end{proof} The matrix $\widehat{Q}$ above is related to the diagonalization of the vec-permutation matrix $\Pi_{p,p}$; see~\eqref{eq:def_hat_Pi} in App.~\ref{app:proof_diag_Hij} for its definition. It is therefore also independent of $X_i$. This is a crucial property to obtain the following result. \begin{lemma}\label{lemma:condition_for_pos_def_Gprime} Define $ \delta = \max_{0 \leq i \leq m-2} \delta_{i,i+1} $ and assume $ \delta \leq 1 $. Then the minimal eigenvalue of $\cG'$ is bounded from below by \[ \lambda_{\min}(\cG') \geq 2 - 2 \cos\tfrac{\pi}{m-1} - 43 \delta - 90 \delta^{2}. \] As a consequence, $\cG'$ is symmetric and positive definite when \[ \delta < \frac{1}{180}\left( \sqrt{2\,569 - 720\cos\tfrac{\pi}{m - 1}} - 43 \right). \] \end{lemma} \begin{proof} From Lemma~\ref{lemma:diagonalization_hessian_on_Stiefel}, recall the diagonal matrices $D$ and $L$, and the orthogonal matrices $Q_1, \ldots, Q_{m-2}$. Define $\cQ = \diag\lbrace Q_1, Q_2, \ldots, Q_{m-2} \rbrace $. Replacing the nonzero blocks in~\eqref{eq:block_QGQ} with \[ Q_i\tr(D_{i,i-1} + D_{i,i+1}) Q_i = 2D + E_{ii} , \qquad Q_{i+1}\tr L_{i,i+1} Q_i = L + E_{i,i+1}, \] we can write $\cQ\tr \cG' \cQ $ as \begin{equation}\label{eq:block_QGQ_diag} \cQ\tr \cG' \cQ = \begin{bmatrix} 2D & L & & & \\ L & 2D & L & & \\ & \ddots & \ddots & \ddots \\ \end{bmatrix} + \begin{bmatrix} E_{11} & E_{12}\tr & & & \\ E_{12} & E_{22} & E_{23}\tr & & \\ & \ddots & \ddots & \ddots \\ \end{bmatrix} \eqqcolon A + E. \end{equation} Eq. \eqref{eq:block_QGQ_diag} is an approximate block tridiagonalization of the matrix $\cG'$. Observe that the symmetric matrices $A$ and $E$ have compatible block partitioning. Furthermore, from Lemma~\ref{lemma:diagonalization_hessian_on_Stiefel}, we get immediately that \[ \| E_{ii} \|_2 \leq 28 \delta + 20 \delta^2 \eqqcolon C_D, \qquad \| E_{i,i+1} \|_2 \leq \tfrac{15}{2} \delta + \tfrac{31}{2} \delta^2 + 14 \delta^3 + 4 \delta^4 \eqqcolon C_L. \] We will regard $\cQ\tr \cG' \cQ$ as an $\cO (\delta)$ perturbation of $A$. Using properties of Kronecker products, we can write \begin{equation}\label{eq:cQtrcGcQ} A = 2I_{m-2} \otimes D + M \otimes L, \qquad M = \begin{bmatrix} 0 & 1 & & \\ 1 & \ddots & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & 0 \end{bmatrix} \in \R^{(m-2) \times (m-2)}. \end{equation} Thanks to the Kronecker structure in~\eqref{eq:cQtrcGcQ} and the diagonal matrices $D$ and $L$, the eigenvalues of $A$ are easily determined as \[ \lambda_{jk} = 2 d_j + \mu_k \ell_j, \quad j=1,\ldots,np, \quad k=1,\ldots, m-2, \] where $d_j$ and $\ell_j$ are the diagonal entries of $D$ and $L$, respectively, and $\mu_k$ are the eigenvalues of the Toeplitz matrix $M$. Using~\cite[Eq.~(2.7)]{Gover:1994}, we find \[ \mu_k = -2 \cos \tfrac{k \pi}{m-1}, \qquad k=1,\ldots, m-2. \] Together with~\eqref{eq:diag_Dij_on_Stiefel} and~\eqref{eq:diag_Lij_on_Stiefel}, this allows us to determine that the minimal value among all $\lambda_{jk}$ corresponds to $ j = 1 $ and $ k = m-2 $. We thus obtain \[ \lambda_{\min}(A) = 2 - 2 \cos\tfrac{\pi}{m-1} > 0 \quad \text{for all $m \geq 2$}. \] By Weyl's inequality \cite[Corollary~4.9]{Stewart:1990}, $\lambda_{\min}(\cG') = \lambda_{\min}(A+E)> 0$ is guaranteed if $\|E\|_2 < \lambda_{\min}(A)$. To bound $\|E\|_2$, we use a block version of the Gershgorin circle theorem (see \cite[Theorem~2]{Feingold:1962aa} and also~\cite[Remark~1.13.2]{Tretter:2008aa}).
Applied to the symmetric block tridiagonal matrix $E$, it guarantees that its eigenvalues are included in the union of intervals \[ \bigcup_{i=1}^{m-2} \bigcup_{k=1}^{np} [\varepsilon^{(i)}_k - R_i, \varepsilon^{(i)}_k + R_i], \qquad R_i = \|E_{i-1,i}\|_2 + \|E_{i,i+1}\tr\|_2 \leq 2 C_L, \] where $\varepsilon^{(i)}_k$ is the $k$th eigenvalue of $E_{ii}$. These eigenvalues $\varepsilon^{(i)}_k$ are all bounded in magnitude by $C_D$. Hence $\|E\|_2 \leq C_D + 2 C_L = 43 \delta + 51 \delta^2 + 28 \delta^3 + 8 \delta^4$. Since $ \delta<1 $, it is easily verified that $\|E\|_2 \leq 43 \delta + 90 \delta^2 $ and thus the matrix $\cG'$ remains positive definite if $ 43 \delta + 90 \delta^2 < \lambda_{\min}(A) $, i.e., \[ \delta < \frac{1}{180}\left( \sqrt{2\,569 - 720\cos\tfrac{\pi}{m - 1}} - 43 \right). \] \end{proof} Putting everything together, we obtain the final local convergence result. \begin{theorem} If the leapfrog algorithm is started with $ \delta $ satisfying the condition of Lemma~\ref{lemma:condition_for_pos_def_Gprime}, then it converges to the unique minimizing geodesic connecting $X_0$ and $X_{m-1}$, provided that the initial intermediate points are sufficiently close to that geodesic. \end{theorem} \begin{proof} We use \cite[Corollary~3.42]{Hackbusch:2016aa}, which states that if $ \cG' $ is positive definite and can be split into the sum of an arbitrary positive definite matrix and an arbitrary symmetric matrix, then the (scalar) Gauss--Seidel method converges, i.e., $ \rho(M^{\mathrm{BGS}}) < 1 $, and the convergence is monotone with respect to the energy norm $ \| \cdot \|_{\cG'} $. By~\cite[Theorem~3.53]{Hackbusch:2016aa}, we know that this theorem remains valid for any block version. Now, the splitting \eqref{eq:block_QGQ_diag} has exactly the form prescribed by~\cite[Corollary~3.42]{Hackbusch:2016aa}, because $ A $ is positive definite and $ E $ is symmetric. By Lemma~\ref{lemma:condition_for_pos_def_Gprime}, we know that $ \cG' $ remains positive definite if $ \delta < \frac{1}{180}\left( \sqrt{2\,569 - 720\cos\tfrac{\pi}{m - 1}} - 43 \right) $. Under these conditions, the leapfrog algorithm converges as a block Gauss--Seidel method to the minimizing geodesic connecting $X_{0}$ and $X_{m-1}$. \end{proof} \section{Some observations and open problems}\label{sec:open_problems} For $m$ large, Lemma~\ref{lemma:condition_for_pos_def_Gprime} gives that $\cG'$ is positive definite when $ \delta \lesssim \pi^2 / (43 m^2) $. Let $ d_{0} = \|X_0 - X_{m-1}\|_{2} $ be the distance between the two endpoints. Then by equidistant partitioning of the intermediate points, one has $\delta \simeq d_{0} / m$. To guarantee a positive definite $\cG'$, we would then need $d_{0} / m \lesssim \pi^2 / (43 m^2)$, which implies $m \lesssim 0.23 / d_{0}$. This result is unsatisfactory, since it would have been desirable to guarantee positive definiteness of $\cQ\tr \cG' \cQ =A+E$ with orthogonal $\cQ$ by increasing the number of points $ m $ given a fixed $ d_{0} $. Unfortunately, we cannot guarantee this with our proof. The problem is that $\|E\|_2 = \cO (\delta)$ whereas $\lambda_{\min}(A) = \cO (1/m^2)$, which leads to the condition that $ m $ must be smaller than a fixed constant divided by the original distance $d_0$. If $\|E\|_2 = \cO (\delta^2)$, then there would be no condition on $m$ since $\delta^2 \simeq d_0^2 / m^2 \lesssim 1/m^2$ is sufficient to guarantee $\lambda_{\min}(A+E)>0$.
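As a small sanity check on the closed form $\lambda_{\min}(A) = 2 - 2 \cos\tfrac{\pi}{m-1}$ obtained in the proof of Lemma~\ref{lemma:condition_for_pos_def_Gprime}, the following sketch (assuming Python with NumPy; the sizes $n$, $p$, $m$ below are arbitrary illustrative choices, not taken from our experiments) assembles $A = 2I_{m-2}\otimes D + M\otimes L$ from its Kronecker structure and compares the numerically computed smallest eigenvalue with the formula.
\begin{verbatim}
# Numerical check of lambda_min(A) = 2 - 2 cos(pi/(m-1)), assuming NumPy.
import numpy as np

n, p, m = 6, 2, 12                     # illustrative sizes
q = p * (p - 1) // 2
D = np.diag(np.concatenate([np.ones(q), 2 * np.ones(n * p - q)]))
L = np.diag(np.concatenate([-np.ones(q), -2 * np.ones((n - p) * p),
                            np.zeros(p * (p + 1) // 2)]))
M = np.diag(np.ones(m - 3), 1) + np.diag(np.ones(m - 3), -1)   # tridiagonal Toeplitz
A = np.kron(2 * np.eye(m - 2), D) + np.kron(M, L)

lam_min = np.linalg.eigvalsh(A).min()
print(lam_min, 2 - 2 * np.cos(np.pi / (m - 1)))  # the two values agree numerically
\end{verbatim}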
However, even a bound $\|E\|_2 = \cO (\delta^2)$ would not be fully satisfactory, since the perturbation would still not lead to an improvement with increasing $ m $; for that, one probably needs $\|E\|_2 = \cO (\delta^3)$. As we show below, there is strong numerical indication that with our choice of extended distance function this is not the case. Numerical experiments reported in Figure \ref{fig:error_plot_lambdas_limit_false} suggest that the minimal eigenvalues of $\cG'$ and $A$ differ by $\cO ( \delta^2 )$, whereas our perturbation analysis only showed $ \|E\|_2= \cO ( \delta )$. It is however not trivial to prove this result. Indeed, up to first order, we can study the eigenvalues of the symmetric matrix $A+E$ by using the derivative formula~\cite[Theorem~2.3]{Stewart:1990} \begin{equation}\label{eq:eigenval_perturb_formula} \lambda_{\min} (A + E) = \lambda_{\min} (A) + v_{\min}\tr E v_{\min} + \cO ( \| E \|^2 ), \end{equation} where $\lambda_{\min} (A) $ is assumed to be isolated (as is the case here) and $ v_{\min} $ is its associated eigenvector. One possibility to improve on our bounds, at least asymptotically, would be to prove that $ \vert v_{\min}\tr E v_{\min} \vert= \cO ( \delta^{3} ) $. However, in the same figure, $\vert v_{\min}\tr E v_{\min} \vert $ seems to be again $\cO (\delta^2)$. In addition, all these conclusions remain true at the limiting geodesic; see Figure~\ref{fig:error_plot_lambdas_limit_true}. \begin{figure} \centering \begin{minipage}{.48\textwidth} \centering \includegraphics[height=.725\linewidth]{error_plot_lambdas_limit_false.pdf} \caption{Eigenvalue perturbations -- not at the limiting geodesic.}\label{fig:error_plot_lambdas_limit_false} \end{minipage} \hfill \begin{minipage}{.48\textwidth} \centering \includegraphics[height=.725\linewidth]{error_plot_lambdas_limit_true.pdf} \caption{Eigenvalue perturbations -- at the limiting geodesic.}\label{fig:error_plot_lambdas_limit_true} \end{minipage} \end{figure} Another problem with the matrices $ A $ and $ \cG' $ is that their spectral gap $ \gamma $ (i.e., the difference between the smallest and second-smallest eigenvalues) becomes small as $ m $ grows. Numerical observations suggest that the spectral gap might be $ \cO ( 1/m ) $, which complicates non-asymptotic bounds. As a last remark, one could resort to a more general theory for the convergence of nonlinear block Gauss--Seidel for a quasi-convex objective function \cite{Grippo:2000}, which requires quasi-convexity for each $ X_i $ alone. Looking at the Hessian $ \cG' $ where all $X_j$ except $ X_i $ are constant, the only block that is left in the matrix $ \cG' $ is the diagonal one, namely $ D_{i,i-1} + D_{i,i+1} $. Using Lemma~\ref{lemma:diagonalization_hessian_on_Stiefel}, we immediately get the eigenvalues of this block. Now, for $ C_D^{(ij)} < 1 $ in~\eqref{eq:diag_Dij_on_Stiefel} we get strong convexity in $ X_i $ alone. One problem with this approach is that the feasible set has to be a Cartesian product of convex subsets of $ \Rnp $. Moreover, the result in \cite{Grippo:2000} only guarantees subsequence convergence, and there is no rate of convergence or contraction rate for the whole sequence. Hence the convergence behavior could also be slower than linear. \section{Numerical experiments}\label{sec:stiefel_experiments} The leapfrog algorithm and the single shooting method were implemented in MATLAB; see also~\protect{\cite[chapter~2]{Sutti:2020}}. The software package has been named ``LFMS\_Stiefel'', where ``LFMS'' stands for ``leapfrog multiple shooting'', and can be downloaded from~\url{https://github.com/MarcoSutti/LFMS_Stiefel}.
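Before describing the Stiefel experiments, the Gauss--Seidel structure of leapfrog can be illustrated on a simplified Euclidean analogue. The sketch below (Python with NumPy; it is only an analogue under that simplification, not the Stiefel implementation of the package above) exploits the fact that in Euclidean space each two-point subproblem has the closed-form solution $X_i = (X_{i-1}+X_{i+1})/2$, so one leapfrog sweep is exactly one block Gauss--Seidel sweep on a tridiagonal system; the observed error reduction factor deteriorates as $m$ grows, in line with the discussion of Section~\ref{sec:open_problems}.
\begin{verbatim}
# Euclidean analogue of leapfrog as a Gauss-Seidel sweep (sketch, assuming NumPy).
import numpy as np

rng = np.random.default_rng(0)
d = 5                                         # ambient dimension (illustrative)
X0, Xend = np.zeros(d), np.ones(d)

for m in (10, 20, 50):
    X = rng.standard_normal((m - 2, d))       # random interior points
    X_star = np.array([X0 + (i / (m - 1)) * (Xend - X0) for i in range(1, m - 1)])
    errs = []
    for _ in range(100):
        for i in range(m - 2):                # one Gauss-Seidel (leapfrog) sweep
            left = X0 if i == 0 else X[i - 1]
            right = Xend if i == m - 3 else X[i + 1]
            X[i] = 0.5 * (left + right)       # closed-form two-point subproblem
        errs.append(np.linalg.norm(X - X_star))
    print(f"m = {m}: final error {errs[-1]:.2e}, reduction factor {errs[-1]/errs[-2]:.4f}")
\end{verbatim}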
We conducted our experiments on a Lenovo ThinkPad T460s laptop with Ubuntu 20.04 LTS and MATLAB R2018a installed, with Intel Core i7-6600 CPU, 20GB RAM, and Mesa Intel HD Graphics 520. As a concrete example to demonstrate the leapfrog algorithm, let us consider the Stiefel manifold $ \mathrm{St}(12,3) $. We fix one point $ X = [ I_{3} \ \ O_{9 \times 3} ]\tr $, while the other point $ Y $ is placed at the distance $ L^{\ast} = 0.95\,\pi $ from $ X $. This is done by creating a tangent vector to $ \mathrm{St}(12,3) $ at $ X $ of length $ L^{\ast} $, and then mapping it to $ \mathrm{St}(12,3) $ via the Riemannian exponential~\eqref{eq:closed-form-sol-geodesic}. For this choice of $L^{\ast}$, single shooting will not work (recall that the injectivity radius on $ \Stnp $ is at least $ 0.89 \, \pi $, a bound that $L^{\ast}$ exceeds). We want to recover this distance using the leapfrog algorithm and study its convergence. For each value of $m \in \{ 10,20,50,100 \}$, we construct an initial guess $X^{(0)}$ by placing $ m-2 $ intermediate points randomly along the linear segment connecting $ X $ and $ Y $ in the embedding space, and projecting them to the Stiefel manifold. We then apply leapfrog for 300 iterations and monitor the convergence behavior of \[ \textrm{err-$k$} = \| X^{(k)} - X^{\ast} \|_{\F}, \] where $X^{\ast}$ is the solution of leapfrog (i.e., a uniformly distributed tuple corresponding to the global geodesic that was constructed above), and $ X^{(k)} $ is the approximate solution at iteration $ k $ of leapfrog. This is illustrated in Figure~\ref{fig:Convg_LF_12_3_m10_100_maxiter_300}, from which it is clear that leapfrog converges for all these values of $m$, albeit very slowly for large $ m $. \begin{figure} \centering \begin{minipage}{.48\textwidth} \centering \includegraphics[height=.725\linewidth]{Convg_LF_12_3_m10_100_maxiter_300.pdf} \caption{Convergence behavior of err-$k$ for increasing values of $m$.}\label{fig:Convg_LF_12_3_m10_100_maxiter_300} \end{minipage} \hfill \begin{minipage}{.48\textwidth} \centering \includegraphics[height=.725\linewidth]{Boxplot_LF_n12_p3_m4_100_tot_num_tests_100.pdf} \caption{Boxplot of $ \max_{k} \lbrace \mu_{k}^{(i)} \rbrace $ for increasing values of $m$.}\label{fig:boxplot} \end{minipage} \end{figure} Next, we apply leapfrog for 50 iterations and for each $ m \in \{ 4,6,8,10,\ldots, 100 \} $ we repeat this experiment for 100 random initial guesses $X^{(0)}$. For each experiment $i$, we define the error reduction rate\footnote{In the limit $ k \to \infty $, this gives the asymptotic Q-rate of convergence of the sequence.} as \[ \mu_{k}^{(i)} = \frac{\textrm{err-$(k+1)$}}{\textrm{err-$k$}}, \quad \text{for} \quad k = 0, 1, \ldots, 49, \quad i=1,\ldots,100, \] and we compute the worst and the median reduction rates across all the experiments, namely, $\max_{i,k} \{ \mu_{k}^{(i)} \}$ and $\med_{i} \max_{k} \{ \mu_{k}^{(i)} \}$. Since leapfrog is faster during the first iterations, we also compute the first-iteration convergence factor $\max_{i} \{ \mu_{0}^{(i)} \}$. From Table~\ref{tab:1}, we see that the convergence factor of leapfrog deteriorates as $ m $ increases but remains strictly smaller than $1$. For small values of $ m $, $ \max_{i} \lbrace \mu_{0}^{(i)} \rbrace $ and $ \max_{i,k} \lbrace \mu_{k}^{(i)} \rbrace $ are significantly different, whereas for large values of $ m $, they are quite similar. The same conclusion can be reached from Figure~\ref{fig:boxplot}, where boxplots show the dispersion and skewness in the $ \mu^{(i)}_k $.
Clearly, the convergence factors become very concentrated for large $m$. \begin{table} \begin{center} \caption{Values of $ \max_{i} \lbrace \mu_{0}^{(i)} \rbrace $, $ \max_{i,k} \lbrace \mu_{k}^{(i)} \rbrace $ and $\med_{i} \max_{k} \{ \mu_{k}^{(i)} \}$ versus number of points $m$, for the experiment described in Sect.~\ref{sec:stiefel_experiments}.} \label{tab:1} \begin{tabular}{cccccccc} \hline\noalign{\smallskip} $m$ & 4 & 6 & 8 & 10 & 15 & 20 & 30 \\ \noalign{\smallskip}\hline\noalign{\smallskip} $ \max_{i} \lbrace \mu_{0}^{(i)} \rbrace $ & 0.5577 & 0.7058 & 0.7829 & 0.8296 & 0.8604 & 0.8824 & 0.8980 \\ $ \max_{i,k} \lbrace \mu_{k}^{(i)} \rbrace $ & 0.8776 & 0.9443 & 0.9671 & 0.9781 & 0.9843 & 0.9881 & 0.9906 \\ $\med_{i} \max_{k} \{ \mu_{k}^{(i)} \}$ & 0.8774 & 0.9443 & 0.9671 & 0.9781 & 0.9843 & 0.9881 & 0.9906 \\ \noalign{\smallskip}\hline \hline\noalign{\smallskip} $m$ & 40 & 50 & 60 & 70 & 80 & 90 & 100 \\ \noalign{\smallskip}\hline\noalign{\smallskip} $ \max_{i} \lbrace \mu_{0}^{(i)} \rbrace $ & 0.9390 & 0.9573 & 0.9728 & 0.9799 & 0.9843 & 0.9870 & 0.9888 \\ $ \max_{i,k} \lbrace \mu_{k}^{(i)} \rbrace $ & 0.9836 & 0.9799 & 0.9898 & 0.9940 & 0.9959 & 0.9969 & 0.9976 \\ $\med_{i} \max_{k} \{ \mu_{k}^{(i)} \}$ & 0.9822 & 0.9790 & 0.9898 & 0.9940 & 0.9958 & 0.9968 & 0.9975 \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table}
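For completeness, the construction of the initial guess used in these experiments can be sketched as follows (Python with NumPy; here the projection onto $\Stnp$ is taken to be the polar factor, i.e., the nearest matrix with orthonormal columns in the Frobenius norm, whereas the MATLAB package may use a different retraction, and $Y$ is generated as a generic nearby Stiefel point rather than via the Riemannian exponential used above).
\begin{verbatim}
# Sketch of the initial-guess construction, assuming NumPy.
import numpy as np

def project_to_stiefel(M):
    """Polar factor of M: the nearest matrix with orthonormal columns."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

def initial_guess(X, Y, m, rng):
    """m-2 interior points on the chord from X to Y, pushed onto St(n,p)."""
    ts = np.sort(rng.uniform(0.0, 1.0, size=m - 2))
    return [project_to_stiefel((1.0 - t) * X + t * Y) for t in ts]

rng = np.random.default_rng(1)
n, p, m = 12, 3, 10                                  # sizes as in the experiment above
X = np.vstack([np.eye(p), np.zeros((n - p, p))])     # X = [I_3 ; 0]
Y = project_to_stiefel(X + 0.5 * rng.standard_normal((n, p)))  # a nearby Stiefel point
guess = initial_guess(X, Y, m, rng)
print(max(np.linalg.norm(Z.T @ Z - np.eye(p)) for Z in guess))  # orthonormality check
\end{verbatim}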
{ "timestamp": "2022-09-13T02:15:55", "yymm": "2010", "arxiv_id": "2010.14137", "language": "en", "url": "https://arxiv.org/abs/2010.14137", "abstract": "Several applications in optimization, image, and signal processing deal with data that belong to the Stiefel manifold St(n,p), that is, the set of n-by-p matrices with orthonormal columns. Some applications, like the Riemannian center of mass, require evaluating the Riemannian distance between two arbitrary points on St(n,p). This can be done by explicitly constructing the geodesic connecting these two points. An existing method for finding geodesics is the leapfrog algorithm of J. L. Noakes. This algorithm is related to the Gauss-Seidel method, a classical iterative method for solving a linear system of equations that can be extended to nonlinear systems. We propose a convergence proof of leapfrog as a nonlinear Gauss-Seidel method. Our discussion is limited to the case of the Stiefel manifold, however, it may be generalized to other embedded submanifolds. We discuss other aspects of leapfrog and present some numerical experiments.", "subjects": "Numerical Analysis (math.NA)", "title": "The leapfrog algorithm as nonlinear Gauss-Seidel", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9740426435557124, "lm_q2_score": 0.7279754489059775, "lm_q1q2_score": 0.7090791306960348 }
https://arxiv.org/abs/1608.00032
On the Power of Likelihood Ratio Tests in Dimension-Restricted Submodels
Likelihood ratio tests are widely used to test statistical hypotheses about parametric families of probability distributions. If interest is restricted to a subfamily of distributions, then it is natural to inquire if the restricted LRT is superior to the unrestricted LRT. Marden's general LRT conjecture posits that any restriction placed on the alternative hypothesis will increase power. The only published counterexample to this conjecture is rather technical and involves a restriction that maintains the dimension of the alternative. We formulate the dimension-restricted LRT conjecture, which posits that any restriction that replaces a parametric family with a subfamily of lower dimension will increase power. Under standard regularity conditions, we then demonstrate that the restricted LRT is asymptotically more powerful than the unrestricted LRT for local alternatives. Remarkably, however, even the dimension-restricted LRT conjecture fails in the case of finite samples. Our counterexamples involve subfamilies of multinomial distributions. In particular, our study of the Hardy-Weinberg subfamily of trinomial distributions provides a simple and elegant demonstration that restrictions may not increase power.
\section{Introduction} \label{intro} We compare restricted and unrestricted likelihood ratio tests in situations where the restriction decreases the dimension of the alternative. The issues that concern us are motivated by an elementary example. \subparagraph{Basic Example} Suppose that $X=(X_1,X_2)$ has a bivariate normal distribution with mean vector $\theta = (\theta_1,\theta_2)$ and identity covariance matrix, in which case the parametric family of possible probability distributions (the model) is $2$-dimensional. Let $\vec{0}$ denote the origin in $\Re^2$ and consider testing the simple null hypothesis $H_0: \theta = \vec{0}$ against the $2$-dimensional composite alternative hypothesis $H_A: \theta \neq \vec{0}$. Under this model, the likelihood function is \[ L_x(\theta) = \frac{1}{2 \pi} \exp \left( -\frac{1}{2} \left\| x-\theta \right\|^2 \right), \] the (unrestricted) maximum likelihood estimate (MLE) of $\theta$ is $\hat{\theta}=x$, and the (unrestricted) likelihood ratio test (LRT) rejects $H_0$ if and only if \[ \Lambda_2(x) = -2 \log \frac{L_x(\vec{0})}{L_x(\hat{\theta})} = -2 \log \frac{\exp \left( -\frac{1}{2} \| x-0 \|^2 \right)}{\exp \left( -\frac{1}{2} \| x-x \|^2 \right)} = \| x \|^2 = x_1^2 + x_2^2 \] is sufficiently large. Because the random variable $X_1^2+X_2^2 \sim \chi^2_2$, a (central) chi-squared distribution with $2$ degrees of freedom, the unrestricted LRT rejects $H_0$ at significance level $\alpha$ if and only if \[ \phi_2(x) = \| x \|^2 > c_{2,\alpha}, \] where $c_{2,\alpha}$ is the $1-\alpha$ quantile of $\chi^2_2$. Suppose that we know that $\theta = (\theta_1,\theta_2) = (\theta_1,0)$. Restricting attention to this $1$-dimensional submodel, we can write the null hypothesis as $H_0: \theta_1 = 0$ and the alternative hypothesis as $H_A: \theta_1 \neq 0$. Under this submodel, the likelihood function is \[ L_x(\theta) = \frac{1}{2 \pi} \exp \left( -\frac{1}{2} \left[ \left( x_1-\theta_1 \right)^2 + \left( x_2-0\right)^2 \right] \right), \] the (restricted) maximum likelihood estimate (MLE) of $\theta$ is $\tilde{\theta}=(x_1,0)$, and the (restricted) likelihood ratio test (LRT) rejects $H_0$ if and only if \[ \Lambda_1(x) = -2 \log \frac{L_x(\vec{0})}{L_x(\tilde{\theta})} = -2 \log \frac{\exp \left( -\frac{1}{2} \left[ x_1^2+x_2^2 \right] \right)}{\exp \left( -\frac{1}{2} x_2^2 \right)} = x_1^2 \] is sufficiently large. Because the random variable $X_1^2 \sim \chi^2_1$, the restricted LRT rejects $H_0$ at significance level $\alpha$ if and only if \[ \phi_1(x) = x_1^2 > c_{1,\alpha}, \] where $c_{1,\alpha}$ is the $1-\alpha$ quantile of $\chi^2_1$. We now compare the power of $\phi_1$ and $\phi_2$ at alternatives of the form $\theta=(\delta,0)$. If $X_1 \sim \mbox{Normal}(\delta,1)$ and $X_2 \sim \mbox{Normal}(0,1)$, then \begin{eqnarray*} X_1^2 \sim \chi^2_1(\delta^2) & \mbox{ and } & X_1^2+X_2^2 \sim \chi^2_2(\delta^2), \end{eqnarray*} where $\chi^2_m(\lambda)$ is the noncentral chi-squared distribution with $m$ degrees of freedom and noncentrality parameter $\lambda$. The power of $\phi_1$ is \[ \pi_\alpha \left( \delta^2; 1 \right) = P \left( \chi^2_1 \left( \delta^2 \right) > c_{1,\alpha} \right), \] the power of $\phi_2$ is \[ \pi_\alpha \left( \delta^2; 2 \right) = P \left( \chi^2_2 \left( \delta^2 \right) > c_{2,\alpha} \right), \] and the following lemma implies that \[ \pi_\alpha \left( \delta^2; 1 \right) > \pi_\alpha \left( \delta^2; 2 \right). 
\] \hfill $\Box$ \bigskip \noindent {\bf Lemma} {$\mathbf \chi^2$} (Das Gupta and Perlman \citep{dasgupta&perlman:1974}): {\em Fix $\alpha \in (0,1)$ and $\lambda>0$. For $m \geq 1$, define $c_{m,\alpha}$ by $P(\chi^2_m > c_{m,\alpha}) = \alpha$. Then \[ \pi_\alpha ( \lambda; m ) = P \left( \chi^2_m ( \lambda ) > c_{m,\alpha} \right) \] is strictly decreasing in $m$.} \bigskip The essential property of the Basic Example is that the restricted LRT is more powerful on the submodel than the unrestricted LRT. Numerous studies have explored the extent to which this property can be generalized. The most studied problem is that of order-restricted inference on normal means, exemplified in the Basic Example by restricting attention to the $2$-dimensional submodel defined by $\theta_1 \leq \theta_2$. The most general result of this type appears to be that of Praestgaard \citep{praestgaard:2012}: \begin{quote} ``Let $X_1,X_2,\ldots,X_n$ be normally distributed random variables with means $\vec{\mu}=(\mu_1,\mu_2,\ldots,\mu_n)$ and known positive definite covariance matrix $\Sigma$. Let ${\mathcal C}$ be a closed convex cone in $\Re^n$ which contains a linear space ${\mathcal L} \subset {\mathcal C}$. The present note considers the likelihood ratio test for null and alternative hypotheses \begin{eqnarray*} H_0: \vec{\mu} \in {\mathcal L} & \mbox{ versus } & H_1: \vec{\mu} \in {\mathcal C}/{\mathcal L}. \end{eqnarray*} We prove that the restricted likelihood ratio test is uniformly more powerful over $\vec{\mu} \in {\mathcal C}$ than the omnibus test with alternative hypothesis $\vec{\mu} \in \Re^n$.'' \end{quote} Studies of restricted LRTs for order-restricted inference date to the pioneering work of Bartholomew \citep{bartholomew:1959a,bartholomew:1959b,bartholomew:1961} in the late 1950s and early 1960s. By the 1980s, it was imagined that the power superiority of restricted LRTs might be a universal phenomenon. Al-Rawwash's 1986 Ph.D. dissertation \citep{alrawwash:1986} stated a general LRT conjecture (``the more restrictions which are put on the alternative space, the higher the power of the L.R.T.''), attributing it to a 1982 NSF proposal submitted by his advisor, J. Marden; however, Al-Rawwash \citep[Chapter 6]{alrawwash:1986} only studied the Basic Example with conic submodels. As late as 1992, Tsai \citep{tsai:1992} was able to state that ``The long time conjecture of the power superiority of the restricted LRT to its unrestricted version in the entire parameter space of alternatives for the general setting is of considerably analytic difficulty and lack of the definitive results.'' In 2003, however, Abu-Dayyeh, Al-Jararha, and Madan \citep{abudayyeh&etal:2003} constructed a counterexample using the Basic Example with nonconic submodels of the form $\theta \in [-k,k]^2$. Surprisingly, smaller values of $k$ give more restricted alternatives, but {\em not}\/ uniformly greater power. In our presentation of the Basic Example, the power superiority of the restricted LRT appears to be a consequence of the fact that the chi-squared distribution of the restricted test statistic has fewer degrees of freedom than the chi-squared distribution of the unrestricted test statistic. This fact, in turn, is a consequence of the fact that the dimension of the submodel is smaller than the dimension of the model. 
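Lemma~$\chi^2$ is easy to verify numerically. The following sketch (assuming Python with SciPy; the size $\alpha$ and the noncentrality $\lambda$ are arbitrary illustrative values) computes $\pi_\alpha(\lambda;m)$ for several degrees of freedom and confirms that it is strictly decreasing in $m$.
\begin{verbatim}
# Numerical check of Lemma chi^2, assuming SciPy is available.
from scipy.stats import chi2, ncx2

alpha, lam = 0.05, 2.5          # illustrative size and noncentrality parameter
prev = 1.0
for m in range(1, 7):
    c = chi2.ppf(1.0 - alpha, df=m)        # critical value c_{m,alpha}
    power = ncx2.sf(c, df=m, nc=lam)       # pi_alpha(lambda; m) = P(chi^2_m(lambda) > c)
    assert power < prev                    # strictly decreasing in m
    prev = power
    print(f"m = {m}: c = {c:.4f}, power = {power:.4f}")
\end{verbatim}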
Noting that the dimension of the submodels $\theta \in [-k,k]^2$ is the same as the dimension of the model, it is inviting to modify the general LRT conjecture by speculating that it holds if the submodel has lower dimension than the model. We will refer to this special case of Marden's general LRT conjecture as the dimension-restricted LRT conjecture. One would expect the dimension-restricted LRT conjecture to hold at least asymptotically because (under suitable regularity conditions) LRT statistics have asymptotic chi-squared distributions. In Section \ref{asymptotic} we exploit that fact and demonstrate that the dimension-restricted LRT conjecture {\em does}\/ hold asymptotically. The more interesting question is whether or not it holds for finite samples. Despite considerable interest in Marden's general LRT conjecture, we are not aware of any previous statements of the more plausible dimension-restricted LRT conjecture. We demonstrate in Section \ref{HW} that the dimension-restricted LRT conjecture does not hold for finite samples. To construct a counterexample, we abandon normal models and study the $1$-dimensional Hardy-Weinberg submodel of the $2$-dimensional trinomial model. The implications of this counterexample are considered in Section~\ref{disc}. \section{Asymptotic Theory} \label{asymptotic} We begin by recalling some basic properties of differentiable manifolds, which will serve as index sets for our statistical models and submodels. Let $M$ denote a completely separable Hausdorff space. (For our purposes, it will suffice to assume that $M$ is a subset of Euclidean space.) Let $U \subseteq M$ and $V \subseteq \Re^k$ denote open sets. If $\varphi : U \rightarrow V$ is a homeomorphism, then $\varphi(u) = (x_1(u),\ldots,x_k(u))$ defines a coordinate system on $U$. The $x_i$ are the coordinate functions and $\varphi^{-1}$ is a parametrization of $U$. The pair $(U,\varphi)$ is a chart. An atlas on $M$ is a collection of charts $\{ (U_a,\varphi_a) \}$ such that the $U_a$ cover $M$. The set $M$ is a $k$-dimensional topological manifold if and only if it admits an atlas for which each $\varphi_a(U_a)$ is open in $\Re^k$. It is smooth if and only if the transition maps $\varphi_b \varphi_a^{-1}$ are diffeomorphisms. A subset $S \subset M$ is a $d$-dimensional embedded submanifold if and only if, for every $p \in S$, there is a chart $(U,\varphi)$ such that $p \in U$ and \[ \varphi(U \cap S) = \varphi(U) \cap \left( \Re^d \times \{ \vec{0} \in \Re^{k-d} \} \right) = \left\{ y \in \varphi(U) : y_{d+1}= \cdots = y_k=0 \right\}. \] We will assume that the statistical model $\{ P_\theta : \theta \in \Theta \}$ is indexed by a $k$-dimensional manifold $\Theta$ and that the statistical submodel $\{ P_\theta : \theta \in \Psi \}$ is indexed by a $d$-dimensional embedded submanifold $\Psi \subset \Theta$. Let $X_1,\ldots,X_n$ be independent and identically distributed as $P_\theta$ and suppose that $\Theta_0 \subset \Psi$ is an $\ell$-dimensional manifold. For testing $H_0 : \theta \in \Theta_0$, the unrestricted LRT statistic is \[ \Lambda_{k,n} \left( X_1,\ldots,X_n \right) = -2 \log \frac{\sup_{\theta \in \Theta_0} \prod_{i=1}^n p_\theta (X_i)}{ \sup_{\theta \in \Theta} \prod_{i=1}^n p_\theta (X_i)} \] and the restricted LRT statistic is \[ \Lambda_{d,n} \left( X_1,\ldots,X_n \right) = -2 \log \frac{\sup_{\theta \in \Theta_0} \prod_{i=1}^n p_\theta (X_i)}{ \sup_{\theta \in \Psi} \prod_{i=1}^n p_\theta (X_i)}. 
\] We will compare the power of these LRTs at local alternatives in $\Psi/\Theta_0$ as $n \rightarrow \infty$. To do so, we require the asymptotic distributions of $\Lambda_{k,n}$ and $\Lambda_{d,n}$. Under classical regularity conditions, the asymptotic null distribution of $\Lambda_{k,n}$ is $\chi^2_{k-\ell}$ \citep{wilks:1938,chernoff:1954}. The same conclusion was drawn by van der Vaart \citep[Chapter 16]{vandervaart:1998}, who based his derivation on the convergence of experiments. Using this approach, ``the main conditions are that the model is differentiable in $\theta$ [more precisely, that the map $\theta \mapsto \sqrt{p_\theta}$ is differentiable in quadratic mean] and that the null hypothesis $\Theta_0$ and the full parameter set $\Theta$ are (locally) equal to linear spaces [i.e., $\Theta_0$ and $\Theta$ are manifolds].'' By the same reasoning, of course, the asymptotic null distribution of $\Lambda_{d,n}$ is $\chi^2_{d-\ell}$. We might assume any set of conditions that ensure these asymptotic distributions; for greatest generality, we simply assume that, for $\vartheta \in \Theta_0$, \begin{eqnarray} \lim_{n \rightarrow \infty} P_\vartheta \left( \Lambda_{k,n} > c \right) = P \left( \chi^2_{k-\ell} > c \right) & \mbox{ and } & \lim_{n \rightarrow \infty} P_\vartheta \left( \Lambda_{d,n} > c \right) = P \left( \chi^2_{d-\ell} > c \right). \label{eq:null} \end{eqnarray} Local asymptotic power functions were also derived (in the special case of local asymptotic normality, although the approach is more general) by van der Vaart \citep[Section 16.4]{vandervaart:1998}. Local alternatives are alternatives of the form $\theta = \vartheta + h/\sqrt{n}$ for $\vartheta \in \Theta_0$ and $n$ sufficiently large. In order to compare the restricted and unrestricted LRTs, we study power at local alternatives $\vartheta + h/\sqrt{n} \in \Psi$. Let $I_k(\vartheta)$ and $I_d(\vartheta)$ denote the Fisher information matrices at $\vartheta$ for the model and submodel respectively. Following van der Vaart \citep[Chapter 16]{vandervaart:1998}, define sets \[ H_{n,0} = \sqrt{n} \left( \Theta_0 - \vartheta \right) = \left\{ \sqrt{n} \left( \vartheta^\prime - \vartheta \right) : \vartheta^\prime \in \Theta_0 \right\} \] and let $H_0$ denote the set of all limits of convergent sequences $\{ h_n \in H_{n,0} \}$. Assume that the limits of all convergent subsequences $\{ h_{n_i} \in H_{n_i,0} \}$ lie in $H_0$, in which case we write $H_{n,0} \rightarrow H_0$. In the case of a simple null hypothesis $\Theta_0 = \{ \theta_0 \}$, \[ H_{n,0} = \sqrt{n} \left( \theta_0 - \theta_0 \right) = \{ \vec{0} \in \Re^k \} = H_0. 
\] Then, under suitable regularity conditions, \begin{equation} \lim_{n \rightarrow \infty} P_{\vartheta + h/\sqrt{n}} \left( \Lambda_{k,n} > c \right) = P \left( \chi^2_{k-\ell} \left( \lambda_k \right) > c \right), \label{eq:altk1} \end{equation} with noncentrality parameter \begin{equation} \lambda_k = \delta_k^2 = \inf_{h^\prime \in H_0} \left( h-h^\prime \right)^T I_k (\vartheta) \left( h-h^\prime \right), \label{eq:altk2} \end{equation} and \begin{equation} \lim_{n \rightarrow \infty} P_{\vartheta + h/\sqrt{n}} \left( \Lambda_{d,n} > c \right) = P \left( \chi^2_{d-\ell} \left( \lambda_d \right) > c \right), \label{eq:altd1} \end{equation} with noncentrality parameter \begin{equation} \lambda_d = \delta_d^2 = \inf_{h^\prime \in H_0} \left( h-h^\prime \right)^T I_d (\vartheta) \left( h-h^\prime \right). \label{eq:altd2} \end{equation} Again, for greatest generality, we simply assume that (\ref{eq:altk1})--(\ref{eq:altd2}) hold. To apply Lemma~$\chi^2$, we will require the following result. \bigskip \begin{lemma} If $\Psi$ is an embedded submanifold of $\Theta$, then $\lambda_k=\lambda_d$. \label{lm:noncentral} \end{lemma} \subparagraph{Proof} Because $\vartheta \in \Theta_0 \subset \Psi$ and $\Psi$ is an embedded submanifold of $\Theta$, there exists a chart $(U,\varphi)$ such that $\vartheta \in U$ and \[ \varphi(u) = (y_1,\ldots,y_d,0,\ldots,0) \in \Re^k \] if $u \in U \cap \Psi$. Choose $n$ large enough that $\vartheta + h/\sqrt{n} \in U$; then write (\ref{eq:altk2}) and (\ref{eq:altd2}) in the coordinate system defined by $\varphi$. We obtain \begin{eqnarray*} \lambda_k & = & \inf_{(z_1,\ldots,z_d,0,\ldots,0) \in H_0} \left[ \begin{array}{ccc|ccc} y_1-z_1 & \ldots & y_d-z_d & 0 & \ldots & 0 \end{array} \right] \left[ \begin{array}{c|c} I_d (\vartheta) & \cdot \\ \hline \cdot & \cdot \end{array} \right] \left[ \begin{array}{c} y_1-z_1 \\ \vdots \\ y_d-z_d \\ \hline 0 \\ \vdots \\ 0 \end{array} \right] \\ & = & \inf_{(z_1,\ldots,z_d) \in H_0} \left[ \begin{array}{ccc} y_1-z_1 & \ldots & y_d-z_d \end{array} \right] I_d (\vartheta) \left[ \begin{array}{c} y_1-z_1 \\ \vdots \\ y_d-z_d \end{array} \right] \\ & = & \lambda_d. \end{eqnarray*} \hfill $\Box$ \bigskip Now we establish the asymptotic power superiority of the restricted LRT. Given $\alpha \in (0,1)$, define quantiles $c_{k,\alpha}$ and $c_{d,\alpha}$ by \[ P \left( \chi^2_{k-\ell} > c_{k,\alpha} \right) = \alpha = P \left( \chi^2_{d-\ell} > c_{d,\alpha} \right). \] The unrestricted LRT rejects $H_0: \theta \in \Theta_0$ if and only if \[ \phi_k \left( \vec{x}_n \right) = \Lambda_{k,n} \left( x_1,\ldots,x_n \right) > c_{k,\alpha}, \] and the restricted LRT rejects $H_0: \theta \in \Theta_0$ if and only if \[ \phi_d \left( \vec{x}_n \right) = \Lambda_{d,n} \left( x_1,\ldots,x_n \right) > c_{d,\alpha}. \] We compare the power of $\phi_k$ and $\phi_d$ at local alternatives $\vartheta + h/\sqrt{n} \in \Psi$. \bigskip \begin{theorem} Fix $\vartheta \in \Theta_0$ and $h$ such that $\vartheta + h/\sqrt{n} \in \Psi$ for $n \geq N$. If $\Psi$ is an embedded submanifold of $\Theta$, then \[ P_{\vartheta + h/\sqrt{n}} \left( \Lambda_{d,n} > c_{d,\alpha} \right) > P_{\vartheta + h/\sqrt{n}} \left( \Lambda_{k,n} > c_{k,\alpha} \right) \] for $n$ sufficiently large. \label{thm:asymptotic} \end{theorem} \subparagraph{Proof} Applying Lemma \ref{lm:noncentral}, let $\lambda = \lambda_d = \lambda_k$.
Let \[ \epsilon = P \left( \chi^2_{d-\ell}(\lambda) > c_{d,\alpha} \right) - P \left( \chi^2_{k-\ell}(\lambda) > c_{k,\alpha} \right), \] which is strictly positive by virtue of Lemma {$\mathbf \chi^2$}. Now use (\ref{eq:altd1}) and (\ref{eq:altk1}) to choose $n \geq N$ sufficiently large that \[ \left| P_{\vartheta + h/\sqrt{n}} \left( \Lambda_{d,n} > c_{d,\alpha} \right) - P \left( \chi^2_{d-\ell} \left( \lambda_d \right) > c_{d,\alpha} \right) \right| < \epsilon/2 \] and \[ \left| P_{\vartheta + h/\sqrt{n}} \left( \Lambda_{k,n} > c_{k,\alpha} \right) - P \left( \chi^2_{k-\ell} \left( \lambda_k \right) > c_{k,\alpha} \right) \right| < \epsilon/2. \] For such $n$, combining these two bounds with the definition of $\epsilon$ yields \[ P_{\vartheta + h/\sqrt{n}} \left( \Lambda_{d,n} > c_{d,\alpha} \right) > P \left( \chi^2_{d-\ell} (\lambda) > c_{d,\alpha} \right) - \epsilon/2 = P \left( \chi^2_{k-\ell} (\lambda) > c_{k,\alpha} \right) + \epsilon/2 > P_{\vartheta + h/\sqrt{n}} \left( \Lambda_{k,n} > c_{k,\alpha} \right), \] which is the desired conclusion. \hfill $\Box$ \bigskip Under classical regularity conditions or, more generally, under any set of conditions that ensure (\ref{eq:null})--(\ref{eq:altd2}), Theorem \ref{thm:asymptotic} establishes that the restricted LRT $\phi_d$ is asymptotically more powerful than the unrestricted LRT $\phi_k$ for local alternatives. The following section compares restricted and unrestricted LRTs for finite samples. \section{Hardy-Weinberg Equilibrium} \label{HW} Consider an experiment with three possible outcomes. The model $\mbox{Trinomial}(\theta)$ specifies that the outcomes occur with probabilities $\theta=(\theta_1,\theta_2,\theta_3)$. It is parametrized by the unit simplex in $\Re^3$, $\Theta = \{ \theta \in [0,1]^3 : \theta_1+\theta_2+\theta_3=1 \}$. Suppose that one draws $n$ i.i.d.\ observations from $\mbox{Trinomial}(\theta)$ and counts $x=(x_1,x_2,x_3)$, where $x_i$ records the number of occurrences of outcome $i$. The unrestricted likelihood function of $\theta$ is \[ L_x(\theta) = P_\theta \left( X = \left( x_1,x_2,x_3 \right) \right) = \frac{n!}{x_1! x_2! x_3!} \theta_1^{x_1} \theta_2^{x_2} \theta_3^{x_3}, \] and the unrestricted maximum likelihood estimate of $\theta$ is $\hat{\theta} = (x_1/n,x_2/n,x_3/n)$. The unrestricted LRT rejects $H_0: \theta=\bar{\theta}$ if and only if \[ \Lambda_2(x) = -2 \log \frac{L_x(\bar{\theta})}{L_x(\hat{\theta})} = -2 \log \frac{\bar{\theta}_1^{x_1}\bar{\theta}_2^{x_2}\bar{\theta}_3^{x_3}}{ \hat{\theta}_1^{x_1}\hat{\theta}_2^{x_2}\hat{\theta}_3^{x_3}} \] is sufficiently large. Define $\psi : [0,1] \rightarrow \Theta$ by $\psi(\tau) = \left( \tau^2, 2\tau(1-\tau), (1-\tau)^2 \right)$. The Hardy-Weinberg subfamily of trinomial distributions is parametrized by the embedded submanifold $\Psi = \{ \psi(\tau) : \tau \in [0,1] \}$. Notice that $\mbox{dim } \Psi =1 < 2 = \mbox{dim } \Theta$. Writing $\mbox{HW}(\tau) = \mbox{Trinomial}(\psi(\tau))$ and $m=2x_1+x_2$, the likelihood function of $\tau$ is \begin{eqnarray*} L_x(\psi(\tau)) & = & P_{\psi(\tau)} \left( X = \left( x_1,x_2,x_3 \right) \right) = \frac{n!}{x_1! x_2! x_3!} [\tau^2]^{x_1} [2\tau(1-\tau)]^{x_2} [(1-\tau)^2]^{x_3} \\ & = & \frac{n!}{x_1! x_2! x_3!} 2^{x_2} \tau^m (1-\tau)^{2n-m}, \end{eqnarray*} the maximum likelihood estimate of $\tau$ is $\hat{\tau} = m/(2n)$, the restricted maximum likelihood estimate of $\theta$ is $\tilde{\theta} = \psi(\hat{\tau})$, and the restricted LRT rejects $H_0: \theta=\psi(\bar{\tau})$ if and only if \[ \Lambda_1(x) = -2 \log \frac{L_x(\psi(\bar{\tau}))}{ L_x(\psi(\hat{\tau}))} = -2 \log \frac{\bar{\tau}^m(1-\bar{\tau})^{2n-m}}{ \hat{\tau}^m(1-\hat{\tau})^{2n-m}} \] is sufficiently large. \subparagraph{Counterexample} The trinomial experiment with $n=3$ has $10$ possible outcomes, enumerated in the first column of Table~\ref{tbl:counter}.
The three outcomes with the largest values of $\Lambda_2(x)$ are $(3,0,0)$, $(2,1,0)$, and $(2,0,1)$. Denote this set of outcomes by $C_2$ and let \[ \alpha = P_{\psi(0.3)}(C_2) = 0.000729+0.010206+0.011907 = 0.022842, \] so that $C_2$ is the critical region for the unrestricted LRT of $H_0: \theta=\psi(0.3)$ of size $\alpha$. In contrast, the three outcomes with the largest values of $\Lambda_1(x)$ are $(3,0,0)$, $(2,1,0)$, and $(0,0,3)$. Because \[ P_{\psi(0.3)} (0,0,3) = 0.117649 > 0.011907 = P_{\psi(0.3)} (2,0,1) , \] the restricted LRT of $H_0: \theta=\psi(0.3)$ must be randomized in order to have size $\alpha$. The randomized test will reject $H_0$ with certainty if $x=(3,0,0)$ or $x=(2,1,0)$, and with probability $0.011907/0.117649$ if $x=(0,0,3)$. \begin{table}[tb] \[ \begin{array}{|crrcrr|} \hline x_1,x_2,x_3 & L_x(\psi(0.3)) & L_x(\hat{\theta}) & L_x(\psi(\hat{\tau})) & \Lambda_2(x) & \Lambda_1(x) \\ \hline 3,0,0 & 0.000729 & 27/27 & 1 & 14.447674 & 14.447674 \\ 2,1,0 & 0.010206 & 12/27 & 6 \cdot 5^5 \cdot 1^1 / 6^6 & 7.547699 & 7.346343 \\ 2,0,1 & 0.011907 & 12/27 & 3 \cdot 4^4 \cdot 2^2 / 6^6 & 7.239397 & 3.420312 \\ 1,2,0 & 0.047628 & 12/27 & 12 \cdot 4^4 \cdot 2^2 / 6^6 & 4.466808 & 3.420312 \\ 1,1,1 & 0.111132 & 6/27 & 12 \cdot 3^3 \cdot 3^3 / 6^6 & 1.385918 & 1.046120 \\ 1,0,2 & 0.064827 & 12/27 & 3 \cdot 2^2 \cdot 4^4 / 6^6 & 3.850206 & 0.031121 \\ 0,3,0 & 0.074088 & 27/27 & 8 \cdot 3^3 \cdot 3^3 / 6^6 & 5.205003 & 1.046120 \\ 0,2,1 & 0.259308 & 12/27 & 12 \cdot 2^2 \cdot 4^4 / 6^6 & 1.077617 & 0.031121 \\ 0,1,2 & 0.302526 & 12/27 & 6 \cdot 1^1 \cdot 5^5 / 6^6 & 0.769316 & 0.567960 \\ 0,0,3 & 0.117649 & 27/27 & 1 & 4.280099 & 4.280099 \\ \hline \end{array} \] \caption{Unrestricted (Trinomial) and restricted (Hardy-Weinberg) LRTs of $H_0:\theta=\psi(0.3)$ with $n=3$ observations. Columns 1--2 list the possible outcomes and their exact probabilities under $H_0$; Column 3--4 list exact probabilities under the most likely Trinomial and Hardy-Weinberg distributions; Columns 5--6 list the unrestricted and restricted LRT statistics.} \label{tbl:counter} \end{table} It is now apparent that the relative powers of the unrestricted and restricted LRTs at an alternative will depend on the probabilities of observing $(2,0,1)$ and $(0,0,3)$: the restricted LRT will be more powerful at $\theta=\psi(\tau)$ if and only if \begin{equation} \frac{0.011907}{0.117649} P_{\psi(\tau)} (0,0,3) > P_{\psi(\tau)} (2,0,1). \label{eq:cex} \end{equation} Some calculation reveals that (\ref{eq:cex}) holds when $\tau < 0.3$, but not when $\tau > 0.3$. For $\tau \in (0.3,1)$, the restricted LRT is {\em less}\/ powerful than the unrestricted LRT. \hfill $\Box$ Our explication of the Counterexample reveals three key elements that allow construction of other counterexamples: first, a choice of $\bar{\tau}$ that causes $\Lambda_2$ and $\Lambda_1$ to order the possible outcomes differently; second, a level $\alpha$ for which the critical regions $C_2$ and $C_1$ differ with respect to a single outcome; and third, an alternative $\tau \neq \bar{\tau}$ under which the probability of $C_2$ exceeds the probability of $C_1$. The remainder of this section is devoted to demonstrating that these elements occur far more generally. 
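To make the Counterexample fully explicit, the calculations behind Table~\ref{tbl:counter} can be reproduced with a few lines of code. The sketch below (plain Python, no external packages; the alternatives $\tau$ that are printed are illustrative choices) enumerates the ten outcomes, evaluates $\Lambda_2$ and $\Lambda_1$, builds the two size-$\alpha$ critical regions including the randomization on $(0,0,3)$, and compares the exact powers at $\theta=\psi(\tau)$.
\begin{verbatim}
# Sketch reproducing the n = 3 counterexample (plain Python).
from itertools import product
from math import factorial, log

n, tau0 = 3, 0.3
psi = lambda t: (t * t, 2 * t * (1 - t), (1 - t) ** 2)       # Hardy-Weinberg map

def pmf(x, p):                                               # trinomial pmf (0**0 = 1)
    coef = factorial(n) // (factorial(x[0]) * factorial(x[1]) * factorial(x[2]))
    return coef * p[0] ** x[0] * p[1] ** x[1] * p[2] ** x[2]

outcomes = [x for x in product(range(n + 1), repeat=3) if sum(x) == n]

def lam2(x):                                                 # unrestricted LRT statistic
    theta_hat = tuple(xi / n for xi in x)
    return -2 * log(pmf(x, psi(tau0)) / pmf(x, theta_hat))

def lam1(x):                                                 # restricted LRT statistic
    tau_hat = (2 * x[0] + x[1]) / (2 * n)
    return -2 * log(pmf(x, psi(tau0)) / pmf(x, psi(tau_hat)))

# Unrestricted critical region: the three outcomes with the largest Lambda_2.
C2 = sorted(outcomes, key=lam2, reverse=True)[:3]
alpha = sum(pmf(x, psi(tau0)) for x in C2)                   # 0.022842

# Restricted randomized LRT of the same size: reject surely on (3,0,0) and
# (2,1,0), and with probability r on (0,0,3), as described above.
r = pmf((2, 0, 1), psi(tau0)) / pmf((0, 0, 3), psi(tau0))

def powers(tau):
    p = psi(tau)
    beta2 = sum(pmf(x, p) for x in C2)                                     # unrestricted
    beta1 = pmf((3, 0, 0), p) + pmf((2, 1, 0), p) + r * pmf((0, 0, 3), p)  # restricted
    return beta1, beta2

for tau in (0.15, 0.25, 0.35, 0.6):
    b1, b2 = powers(tau)
    print(f"tau = {tau}: restricted power {b1:.5f}, unrestricted power {b2:.5f}")
\end{verbatim}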
In what follows, it will be convenient to work with the inverse likelihood ratios, \[ R_2(x) = \frac{L_x \left( \hat{\theta} \right)}{ L_x \left( \psi \left( \bar{\tau} \right) \right)} = \frac{ \left( \frac{x_1}{n} \right)^{x_1} \left( \frac{x_2}{n} \right)^{x_2} \left( \frac{x_3}{n} \right)^{x_3}}{2^{x_2} \bar{\tau}^m \left( 1-\bar{\tau} \right)^{2n-m}} \] and \[ R_1(x) = \frac{L_x \left( \psi \left( \hat{\tau} \right) \right)}{ L_x \left( \psi \left( \bar{\tau} \right) \right)} = \frac{ \left( \frac{m}{2n} \right)^m \left( 1-\frac{m}{2n} \right)^{2n-m}}{ \bar{\tau}^m \left( 1-\bar{\tau} \right)^{2n-m}}, \] instead of $\Lambda_2$ and $\Lambda_1$. Let $R_{2i}$ denote the level sets of $R_2$, ordered from largest $R_2$ value to smallest $R_2$ value, and let \[ \alpha_{2j} = P_{\psi(\bar{\tau})} \left( \bigcup_{i=1}^j R_{2i} \right) = \sum_{i=1}^j P_{\psi(\bar{\tau})} \left( R_{2i} \right). \] For testing $H_0: \theta=\psi(\bar{\tau})$, the unrestricted LRT of size $\alpha_{2j}$ has critical region \[ C_2 \left( \alpha_{2j} \right) = \bigcup_{i=1}^j R_{2i}, \] i.e., it rejects $H_0$ if and only if $x \in C_2(\alpha_{2j})$. To obtain the LRT of size $\alpha \in (\alpha_{2j},\alpha_{2,j+1})$, it is necessary to randomize. The conventional randomized LRT rejects $H_0$ with probability one if $x \in C_2(\alpha_{2j})$, with probability $(\alpha-\alpha_{2j})/(\alpha_{2,j+1}-\alpha_{2j})$ if $x \in R_{2,j+1}$, and with probability zero otherwise. The restricted case is analogous. Let $R_{1i}$ denote the level sets of $R_1$, ordered from largest $R_1$ value to smallest $R_1$ value, and let \[ \alpha_{1j} = P_{\psi(\bar{\tau})} \left( \bigcup_{i=1}^j R_{1i} \right) = \sum_{i=1}^j P_{\psi(\bar{\tau})} \left( R_{1i} \right). \] For testing $H_0: \theta=\psi(\bar{\tau})$, the restricted LRT of size $\alpha_{1j}$ has critical region \[ C_1 \left( \alpha_{1j} \right) = \bigcup_{i=1}^j R_{1i}, \] i.e., it rejects $H_0$ if and only if $x \in C_1(\alpha_{1j})$. The conventional randomized LRT of size $\alpha \in (\alpha_{1j},\alpha_{1,j+1})$ rejects $H_0$ with probability one if $x \in C_1(\alpha_{1j})$, with probability $(\alpha-\alpha_{1j})/(\alpha_{1,j+1}-\alpha_{1j})$ if $x \in R_{1,j+1}$, and with probability zero otherwise. Notice that $R_1$ is constant on the level sets of the integer-valued random variable $M=2X_1+X_2$. For $R_1$ to assume the same value with $m_1 \neq m_2$, it must be that \[ \frac{ \left( \frac{m_1}{2n} \right)^{m_1} \left( 1-\frac{m_1}{2n} \right)^{2n-m_1}}{ \bar{\tau}^{m_1} \left( 1-\bar{\tau} \right)^{2n-m_1}} = \frac{ \left( \frac{m_2}{2n} \right)^{m_2} \left( 1-\frac{m_2}{2n} \right)^{2n-m_2}}{ \bar{\tau}^{m_2} \left( 1-\bar{\tau} \right)^{2n-m_2}}, \] which is equivalent to \begin{equation} \left( \frac{\bar{\tau}}{1-\bar{\tau}} \right)^{m_1-m_2} = \frac{m_1^{m_1} \left( 2n-m_1 \right)^{2n-m_1}}{ m_2^{m_2} \left( 2n-m_2 \right)^{2n-m_2}}. \label{eq:rational} \end{equation} Because the right-hand side of (\ref{eq:rational}) is rational, (\ref{eq:rational}) cannot obtain if $\bar{\tau}$ is irrational. We thus establish \begin{lemma} If $\bar{\tau}$ is irrational, then the level sets of $R_1$ coincide with the level sets of $2X_1+X_2$. In particular, each level set of $R_1$ is associated with a single value of $2X_1+X_2$. \label{lm:irrational1} \end{lemma} The level sets of $R_2$ are not so easily characterized, but suppose that $R_2(x)=R_2(y)$. Set $m_1=2x_1+x_2$ and $m_2=2y_1+y_2$. 
Then \[ \frac{\bar{\tau}^{m_1} (1-\bar{\tau})^{2n-m_1}}{ \bar{\tau}^{m_2} (1-\bar{\tau})^{2n-m_2}} = \frac{(2x_1)^{x_1}x_2^{x_2}(2x_3)^{x_3}}{ (2y_1)^{y_1}y_2^{y_2}(2y_3)^{y_3}}, \] the right-hand side of which is rational and the left-hand side of which is irrational if $\bar{\tau}$ is irrational and $m_1 \neq m_2$. We thus establish \begin{lemma} If $\bar{\tau}$ is irrational, then each level set of $R_2$ is associated with a single value of $2X_1+X_2$. \label{lm:irrational2} \end{lemma} Next we compare how $R_1$ and $R_2$ arrange possible outcomes into critical regions. Our first result states that the restricted and unrestricted LRTs agree on which outcome (or set of outcomes) is most adverse to $H_0:\theta=\psi(\bar{\tau})$. \begin{lemma} For every $\bar{\tau} \in (0,1)$, $R_{11}=R_{21}$. \label{lm:worst} \end{lemma} \subparagraph{Proof} The level set $R_{11}$ consists of the outcomes that maximize \[ R_1(x) = \frac{ \left( \frac{m}{2n} \right)^m \left( 1-\frac{m}{2n} \right)^{2n-m}}{ \bar{\tau}^m \left( 1-\bar{\tau} \right)^{2n-m}}. \] If $\bar{\tau} \in (0,0.5)$, then the denominator is minimized by $m=2n$, which also maximizes the numerator. In this case, $R_{11} = \{ (n,0,0) \}$. By the same reasoning, if $\bar{\tau} \in (0.5,1)$, then $R_{11} = \{ (0,0,n) \}$. If $\bar{\tau}=0.5$, then the denominator is constant and the numerator is maximized by either $m=2n$ or $m=0$; hence, $R_{11} = \{ (n,0,0),(0,0,n) \}$. The level set $R_{21}$ consists of the outcomes that maximize \[ R_2(x) = \frac{ \left( \frac{x_1}{n} \right)^{x_1} \left( \frac{x_2}{n} \right)^{x_2} \left( \frac{x_3}{n} \right)^{x_3}}{2^{x_2} \bar{\tau}^m \left( 1-\bar{\tau} \right)^{2n-m}}. \] If $\bar{\tau} \in (0,0.5)$, then the denominator is minimized by $(n,0,0)$, which also maximizes the numerator. Hence, $R_{21} = \{ (n,0,0) \} = R_{11}$. By the same reasoning, if $\bar{\tau} \in (0.5,1)$, then $R_{21} = \{ (0,0,n) \} = R_{11}$. If $\bar{\tau}=0.5$, then the denominator is minimized by any $x$ with $x_2=0$ and the numerator is maximized by either $(n,0,0)$ or $(0,0,n)$; hence, $R_{21} = \{ (n,0,0),(0,0,n) \} = R_{11}$. \hfill $\Box$ Next we establish some conditions under which $R_2$ and $R_1$ induce different orderings of the possible outcomes. More precisely, the conditions in Lemma~\ref{lm:order} ensure the existence of outcomes $x$ and $y$ such that $R_2(x) > R_2(y)$ and $R_1(x) < R_1(y)$. Note the strict inequalities in this definition, which are essential to the proof of Theorem~\ref{thm:HWirrational}. \begin{lemma} The unrestricted and restricted LRTs order the possible outcomes differently under any of the following conditions: \begin{enumerate} \item $\bar{\tau} \in (0.2,1/3) \cup (2/3,0.8)$; \item $\bar{\tau} \in (1/3,2/3)$ and $n \geq 2$; \item $\bar{\tau} \in \{ 1/3, 2/3 \}$ and $n \geq 3$. \end{enumerate} \label{lm:order} \end{lemma} \subparagraph{Proof} First, consider the three extremal outcomes displayed in Table~\ref{tbl:extremal0}. The ordering of these outcomes is determined by $R_2(x)=L_x(\hat{\theta})/L_x(\psi(\bar{\tau}))$ for the unrestricted LRT and by $R_1(x)=L_x(\psi(\hat{\tau}))/L_x(\psi(\bar{\tau}))$ for the restricted LRT. For $\bar{\tau} \in (0.2,1/3)$, \[ \frac{1}{2 \bar{\tau} (1-\bar{\tau})} > \frac{1}{(1-\bar{\tau})^2} \] and the unrestricted LRT orders $(0,n,0) \succ (0,0,n)$, whereas \[ \frac{1}{(1-\bar{\tau})^2} > \frac{1/2}{2 \bar{\tau} (1-\bar{\tau})} \] and the restricted LRT orders $(0,0,n) \succ (0,n,0)$.
By symmetry, the unrestricted and restricted orders also differ if $\bar{\tau} \in (2/3,0.8)$. \begin{table}[tb] \[ \begin{array}{|cccc|} \hline x_1,x_2,x_3 & L_x(\psi(\bar{\tau})) & L_x(\hat{\theta}) & L_x(\psi(\hat{\tau})) \\ \hline (n,0,0) & \bar{\tau}^{2n} & 1 & 1 \\ (0,n,0) & 2^n \bar{\tau}^n (1-\bar{\tau})^n & 1 & 2^{-n} \\ (0,0,n) & (1-\bar{\tau})^{2n} & 1 & 1 \\ \hline \end{array} \] \caption{Probabilities of three extremal outcomes under $\theta=\psi(\bar{\tau})$ (null hypothesis), $\theta=\hat{\theta}$ (unrestricted MLE), and $\theta=\psi(\hat{\tau})$ (restricted MLE).} \label{tbl:extremal0} \end{table} Next, assume that $n \geq 2$ and consider the outcomes displayed in Table~\ref{tbl:extremal1}. The unrestricted LRT places $(n-1,1,0) \succ (n-1,0,1)$ if and only if \[ 2 \bar{\tau}^{2n-1} (1-\bar{\tau}) < \bar{\tau}^{2n-2} (1-\bar{\tau})^{2}, \] which obtains if and only if $\bar{\tau} < 1/3$. In contrast, the restricted LRT places $(n-1,1,0) \succ (n-1,0,1)$ if and only if \[ \left( \frac{2n-1}{2n} \right)^{2n-1} \frac{1}{2n} \div \bar{\tau}^{2n-1} (1-\bar{\tau}) > \left( \frac{2n-2}{2n} \right)^{2n-2} \left( \frac{2}{2n} \right)^{2} \div \bar{\tau}^{2n-2} (1-\bar{\tau})^{2}, \] which obtains if and only if \[ \bar{\tau} < b(n) = \frac{(2n-1)^{2n-1}}{(2n-1)^{2n-1}+4(2n-2)^{2n-2}}. \] If $n=2$, then \[ 4(2n-2)^{2n-2} = 16<27 = (2n-1)^{2n-1} \] and $b(n)>0.5$. If $n \geq 3$, then \[ 4(2n-2)^{2n-2} \leq (2n-2)^{2n-1} < (2n-1)^{2n-1} \] and again $b(n) > 0.5$. It follows that the unrestricted and restricted LRT orders differ if $\bar{\tau} \in (1/3,0.5]$. By symmetry, they also differ if $\bar{\tau} \in [0.5,2/3)$. \begin{table}[tb] \[ \begin{array}{|cccc|} \hline x_1,x_2,x_3 & L_x(\psi(\bar{\tau})) & L_x(\hat{\theta}) & L_x(\psi(\hat{\tau})) \\ \hline (n-1,1,0) & 2n \bar{\tau}^{2n-1} (1-\bar{\tau}) & \left( \frac{n-1}{n} \right)^{n-1} & 2n \left( \frac{2n-1}{2n} \right)^{2n-1} \frac{1}{2n} \\ (n-1,0,1) & n \bar{\tau}^{2n-2} (1-\bar{\tau})^{2} & \left( \frac{n-1}{n} \right)^{n-1} & n \left( \frac{2n-2}{2n} \right)^{2n-2} \left( \frac{2}{2n} \right)^{2} \\ (1,n-1,0) & 2^{n-1}n \bar{\tau}^{n+1} (1-\bar{\tau})^{n-1} & \left( \frac{n-1}{n} \right)^{n-1} & 2^{n-1}n \left( \frac{n+1}{2n} \right)^{n+1} \left( \frac{n-1}{2n} \right)^{n-1} \\ (0,n-1,1) & 2^{n-1}n \bar{\tau}^{n-1} (1-\bar{\tau})^{n+1} & \left( \frac{n-1}{n} \right)^{n-1} & 2^{n-1}n \left( \frac{n-1}{2n} \right)^{n-1} \left( \frac{n+1}{2n} \right)^{n+1} \\ (1,0,n-1) & n \bar{\tau}^{2} (1-\bar{\tau})^{2n-2} & \left( \frac{n-1}{n} \right)^{n-1} & n \left( \frac{2}{2n} \right)^{2} \left( \frac{2n-2}{2n} \right)^{2n-2} \\ (0,1,n-1) & 2n \bar{\tau} (1-\bar{\tau})^{2n-1} & \left( \frac{n-1}{n} \right)^{n-1} & 2n \frac{1}{2n} \left( \frac{2n-1}{2n} \right)^{2n-1} \\ \hline \end{array} \] \caption{Probabilities of six more outcomes under $\theta=\psi(\bar{\tau})$ (null hypothesis), $\theta=\hat{\theta}$ (unrestricted MLE), and $\theta=\psi(\hat{\tau})$ (restricted MLE), assuming $n>1$.} \label{tbl:extremal1} \end{table} Finally, let $\bar{\tau}=1/3$ and $n \geq 3$. Then \[ n \left( \frac{1}{3} \right)^2 \left( \frac{2}{3} \right)^{2n-2} = n \frac{2^{2n-2}}{3^{2n}} < n \frac{2^{2n}}{3^{2n}} = 2n \frac{1}{3} \left( \frac{2}{3} \right)^{2n-1} \] and it follows that the unrestricted LRT orders $(1,0,n-1) \succ (0,1,n-1)$. In contrast, \[ \frac{L_{(1,0,n-1)}(\psi(\hat{\tau}))}{L_{(1,0,n-1)}(\psi(1/3))} \div \frac{L_{(0,1,n-1)}(\psi(\hat{\tau}))}{L_{(0,1,n-1)}(\psi(1/3))} = c(n) = \frac{8(2n-2)^{2n-2}}{(2n-1)^{2n-1}}.
\] If $n=3$, then \[ 8(2n-2)^{2n-2} = 2048 < 3125 = (2n-1)^{2n-1} \] and $c(n)<1$. If $n=4$, then \[ 8(2n-2)^{2n-2} = 373248 < 823543 = (2n-1)^{2n-1} \] and $c(n)<1$. If $n \geq 5$, then \[ 8(2n-2)^{2n-2} \leq (2n-2)^{2n-1} < (2n-1)^{2n-1} \] and $c(n)<1$. It follows that the restricted LRT orders $(1,0,n-1) \prec (0,1,n-1)$. By symmetry, the unrestricted and restricted LRT orders differ if $\bar{\tau}=2/3$ and $n \geq 3$. \hfill $\Box$ We now establish our crucial result. \begin{theorem} Let $\psi$ parametrize the Hardy-Weinberg submodel of the trinomial experiment. Suppose that $\bar{\tau} \in (0,1)$ is irrational, and that $(\bar{\tau},n)$ is such that the restricted and unrestricted LRTs of $H_0:\theta=\psi(\bar{\tau})$ order the possible outcomes of the trinomial experiment differently. Then there exist $\alpha \in (0,1)$ and $\tau \in (0,1)$ such that the restricted LRT of size $\alpha$ is less powerful at $\tau$ than the unrestricted LRT of size $\alpha$. \label{thm:HWirrational} \end{theorem} \subparagraph{Proof} Lemma~\ref{lm:worst} states that $R_{11}=R_{21}$. If $R_{1i}=R_{2i}$ for $i=1,\ldots,j$ and \[ x,y \in \bigcup_{i=1}^j R_{1i} = \bigcup_{i=1}^j R_{2i}, \] then it cannot be that $R_2(x) > R_2(y)$ and $R_1(x) < R_1(y)$. Hence, for the restricted and unrestricted LRTs to order the possible outcomes differently, there must exist a value of $j$ for which $R_{1j} \neq R_{2j}$. Let $j*$ denote the smallest such value of $j$. Because of Lemmas \ref{lm:irrational1} and \ref{lm:irrational2}, there are two possibilities: either \begin{enumerate} \item $R_{1j*}$ and $R_{2j*}$ are associated with different values of $2X_1+X_2$; or \item $R_{1j*}$ and $R_{2j*}$ are associated with the same value of $2X_1+X_2$, in which case $R_{1j*} \neq R_{2j*}$ implies that $R_{2j*}$ is a proper subset of $R_{1j*}$. \end{enumerate} The first case is straightforward. Let $m_1$ and $m_2$ denote the values of $2X_1+X_2$ associated with $R_{1j*}$ and $R_{2j*}$. Let $\alpha = \min(\alpha_{1j*},\alpha_{2j*})$, so that the restricted and unrestricted LRTs of size $\alpha$ have critical regions that are identical except for (possibly randomized) outcomes in $R_{1j*}$ or $R_{2j*}$. Any power differences between the two LRTs will accrue from these outcomes. The probability that the restricted LRT will reject $H_0$ as a result of $x \in R_{1j*}$ is \[ r_1 P_{\psi(\tau)} \left( R_{1j*} \right) = r_1 n! \tau^{m_1} (1-\tau)^{2n-m_1} \sum_{x \in R_{1j*}} \frac{2^{x_2}}{x_1!x_2!x_3!}, \] and the probability that the unrestricted LRT will reject $H_0$ as a result of $x \in R_{2j*}$ is \[ r_2 P_{\psi(\tau)} \left( R_{2j*} \right) = r_2 n! \tau^{m_2} (1-\tau)^{2n-m_2} \sum_{x \in R_{2j*}} \frac{2^{x_2}}{x_1!x_2!x_3!}, \] where $r_1$ and $r_2$ are randomization factors. If $\tau=\bar{\tau}$, then these quantities are equal. If \[ r_1 P_{\psi(\tau)} \left( R_{1j*} \right) < r_2 P_{\psi(\tau)} \left( R_{2j*} \right), \] which obtains when \[ \left( \frac{\tau}{1-\tau} \right)^{m_1-m_2} < \left( r_2 \sum_{x \in R_{2j*}} \frac{2^{x_2}}{x_1!x_2!x_3!} \right) \div \left( r_1 \sum_{x \in R_{1j*}} \frac{2^{x_2}}{x_1!x_2!x_3!} \right), \] then the unrestricted LRT has greater power at $\tau$ than the restricted LRT. The key to the preceding argument lies in identifying a size for which the critical regions of the respective LRTs differ only in replacing one level set of $2X_1+X_2$ with another of different value.
If $R_{1j*}$ and $R_{2j*}$ are associated with the same value of $2X_1+X_2$, then identifying such a size is more complicated. If $R_{2j*}$ is a proper subset of $R_{1j*}$, then we set $\alpha = \min(\alpha_{1j*},\alpha_{2,j*+1})$ and consider $R_{2,j*+1}$. If $R_{2,j*+1}$ is associated with a different value of $2X_1+X_2$, then the probability that the unrestricted LRT will reject $H_0$ as a result of $x \in R_{2j*} \cup R_{2,j*+1}$ is \begin{eqnarray*} P_{\psi(\tau)} \left( R_{2j*} \right) + r_2 P_{\psi(\tau)} \left( R_{2,j*+1} \right) & = & n! \tau^{m_1} (1-\tau)^{2n-m_1} \sum_{x \in R_{2j*}} \frac{2^{x_2}}{x_1!x_2!x_3!} + \\ & & r_2 n! \tau^{m_2} (1-\tau)^{2n-m_2} \sum_{x \in R_{2,j*+1}} \frac{2^{x_2}}{x_1!x_2!x_3!}. \end{eqnarray*} If \[ r_1 P_{\psi(\tau)} \left( R_{1j*} \right) < P_{\psi(\tau)} \left( R_{2j*} \right) + r_2 P_{\psi(\tau)} \left( R_{2,j*+1} \right), \] which obtains when \begin{eqnarray*} \left( \frac{\tau}{1-\tau} \right)^{m_1-m_2} & < & \left( r_2 \sum_{x \in R_{2,j*+1}} \frac{2^{x_2}}{x_1!x_2!x_3!} \right) \div \\ & & \left( r_1 \sum_{x \in R_{1j*}} \frac{2^{x_2}}{x_1!x_2!x_3!} - \sum_{x \in R_{2j*}} \frac{2^{x_2}}{x_1!x_2!x_3!} \right), \end{eqnarray*} then the unrestricted LRT has greater power at $\tau$ than the restricted LRT. Continuing in this manner, if $R_{2,j*+1}$ is not associated with a different value of $2X_1+X_2$, then we increment $i$ in $\alpha = \min(\alpha_{1j*},\alpha_{2,j*+i})$ and $R_{2,j*+i}$ until either we do encounter a different value or until both LRTs have the same critical region for size $\alpha_{1j*}=\alpha_{2,j*+i*}$. If we encounter a different value, then the same reasoning used in the previous paragraph establishes values of $\tau$ for which the unrestricted LRT has greater power at $\tau$ than the restricted LRT. If we exhaust the outcomes in $R_{1j*}$ without encountering a different value, then we obtain \begin{eqnarray*} C_1 \left( \alpha_{1j*} \right) & = & R_{11} \cup \cdots \cup R_{1,j*-1} \cup R_{1j*} \\ & = & R_{21} \cup \cdots \cup R_{2,j*-1} \cup \bigcup_{i=1}^{i*} R_{2,j*+i} \\ & = & C_2 \left( \alpha_{1j*} \right). \end{eqnarray*} In this case, however, we have not yet discovered \[ x,y \in C_1 \left( \alpha_{1j*} \right) = C_2 \left( \alpha_{1j*} \right) \] for which $R_2(x) > R_2(y)$ and $R_1(x) < R_1(y)$. We have already observed that no such reversal is possible for \[ x,y \in \bigcup_{i=1}^{j*-1} R_{1i} = \bigcup_{i=1}^{j*-1} R_{2i}, \] and it is certainly not possible for \[ x,y \in R_{1j*} = \bigcup_{i=1}^{i*} R_{2,j*+i} \] because $R_{1j*}$ is a single level set of $R_1$. If we reach the case of $C_1 (\alpha_{1j*}) = C_2 (\alpha_{1j*})$, then we progress to $R_{1,j*+1}$ versus $R_{2,j*+i*+1}$ and apply the same reasoning. Because there exist $x$ and $y$ for which $R_2(x) > R_2(y)$ and $R_1(x) < R_1(y)$, we will eventually identify a size for which the critical regions of the two LRTs differ only in replacing one level set of $2X_1+X_2$ with another of different value, and we have already demonstrated how to derive the desired result when that circumstance obtains. \hfill $\Box$
Combining Theorem~\ref{thm:HWirrational} and Lemma~\ref{lm:order}, we obtain the following result.
\begin{corollary} Let $\psi$ parametrize the Hardy-Weinberg submodel of the trinomial experiment with $n \geq 2$ and consider the null hypothesis $H_0: \theta = \psi(\bar{\tau})$.
For almost every $\bar{\tau} \in (0.2,0.8)$, there exist $\alpha \in (0,1)$ and $\tau \in (0,1)$ such that the restricted LRT of size $\alpha$ is less powerful at $\tau$ than the unrestricted LRT of size $\alpha$. \label{cor:HWae} \end{corollary}
\section{Discussion} \label{disc}
Marden's 1982 conjecture, that restricting the set of alternatives increases the power of a likelihood ratio test, was falsified in 2003. Unfortunately, the counterexample constructed in \citep{abudayyeh&etal:2003} is highly technical and involves a restriction that maintains the dimension of the alternative. It is tempting to modify the general LRT conjecture and speculate that it holds if the submodel has lower dimension than the model. As demonstrated in Section~\ref{asymptotic}, the dimension-restricted LRT conjecture does hold asymptotically. However, as demonstrated in Section~\ref{HW}, the Hardy-Weinberg submodel of the trinomial experiment provides a wealth of counterexamples when sample size is finite. This finding may surprise many readers, as it surprised us. Several observations are in order. First, the counterexamples presented in Section~\ref{HW} demonstrate only that the restricted LRT is not {\em uniformly}\/ more powerful than the unrestricted LRT. The fact that there exist sizes and alternatives for which the restricted LRT is less powerful does not imply that we should prefer the unrestricted LRT. Our first counterexample, in which the restricted LRT is more powerful than the unrestricted LRT for alternatives in $(0,0.3)$ but less powerful for alternatives in $(0.3,1)$, is dramatic. Nevertheless, in most of the Hardy-Weinberg examples that we have studied, the restricted LRT outperforms the unrestricted LRT in more cases---and by wider margins---than the reverse. We display an example in Figure~\ref{fig:HWpower}. \begin{figure}[tb] \begin{center} \includegraphics[width=0.8\textwidth]{lrtfig1.pdf} \end{center} \caption{Power of the unrestricted LRT ($\beta_2$) minus power of the restricted LRT ($\beta_1$) for testing $H_0: \theta = \psi(0.3)$ with $\alpha=0.05$ and $n=10$. The alternatives $\{ \theta=\psi(\tau) : \tau \in (0,1) \}$ are displayed on the horizontal axis. The restricted LRT is superior where the power difference is less than zero (blue), inferior where it exceeds zero (red).} \label{fig:HWpower} \end{figure} The sufficient conditions identified in Section~\ref{HW}, e.g., in Corollary~\ref{cor:HWae}, are expedient but hardly necessary. We expect that more elaborate calculations will extend the scope of Lemma~\ref{lm:order}, while continuity arguments will extend the results of Theorem~\ref{thm:HWirrational} from irrational $\bar{\tau}$ to real intervals of $\bar{\tau}$. This is not our present concern. Because the Hardy-Weinberg subfamily of trinomial distributions is widely used to model genetic equilibrium, it provides a compelling counterexample to the dimension-restricted LRT conjecture. It is hardly unique, however, for we have constructed other submodels of multinomial models (including submodels of dimension and/or co-dimension greater than one) that also falsify the conjecture. Our current efforts are focused on identifying conditions under which the dimension-restricted LRT conjecture does---or does not---hold. Our counterexample is also compelling from the perspective of statistical theory. As a submodel of the trinomial model, the Hardy-Weinberg distributions constitute a curved exponential family.
Furthermore, as demonstrated in \citep[Example~2.3.4]{kass&voss:1997}, these distributions can be parametrized as a regular $1$-parameter exponential family. For testing simple null hypotheses, it follows from results in \citep[Section~4.2]{lehmann:tsh2} that there exists a uniformly most powerful test among all unbiased tests. The difficulty is not that one cannot improve power by restricting the trinomial model to the Hardy-Weinberg submodel, but that LRTs may fail to do so. For testing simple null and alternative hypotheses, the Neyman-Pearson Lemma states that a test based on the ratio of the null and alternative probability densities is most powerful. Likelihood ratio tests are heuristic extensions of this construction that possess various desirable asymptotic properties. However, although LRTs are asymptotically unbiased \citep[Section~53.3]{borokov:1999}, they may be biased for finite sample sizes. The counterexample in Section~\ref{HW} considered the restricted LRT of $H_0: \theta = \psi(0.3)$ with size $\alpha = 0.22842$. The power of that test at alternative $\tau$ is \[ \tau^6 + 3 \cdot 2 \cdot \tau^5(1-\tau) + \frac{0.011907}{0.117649} (1-\tau)^6, \] which numerical calculation reveals is slightly less than $\alpha$ for a small interval of alternatives. From a modeling perspective, our report is a cautionary tale: restricting inferences about a submodel to that submodel will strike most statisticians as natural and intuitive, yet doing so is not necessarily most powerful. As noted above, this finding is not due to some pathology unique to the Hardy-Weinberg submodel, which is nicely behaved; we have constructed additional counterexamples using various multinomial submodels, including submodels of dimension and/or co-dimension greater than one. Taken together, these examples constitute a compelling demonstration that efforts to exploit submodel structure may require considerable care. \section*{Acknowledgments} This work was partially supported by a National Security Science and Engineering Faculty Fellowship, DARPA XDATA contract FA8750-12-2-0303, DARPA SIMPLEX contract N66001-15-C-4041, DARPA GRAPHS contract N66001-14-1-4028, and NSF award DBI-1451081.
https://arxiv.org/abs/2105.05645
Homotopy Comomentum Maps in Multisymplectic Geometry
Homotopy comomentum maps are a higher generalization of the notion of moment map introduced to extend the concept of Hamiltonian actions to the framework of multisymplectic geometry. Loosely speaking, higher means passing from considering a symplectic 2-form to considering differential forms of higher degree. The goal of this thesis is to provide new explicit constructions and concrete examples related to group actions on multisymplectic manifolds admitting homotopy comomentum maps. The first result is a complete classification of compact group actions on multisymplectic spheres. The existence of a homotopy comomentum map pertaining to the latter depends on the dimension of the sphere and the transitivity of the group action. Several concrete examples of such actions are also provided. The second novel result is the explicit construction of the higher analogue of the embedding of the Poisson algebra of a given symplectic manifold into the corresponding twisted Lie algebroid. A compatibility condition for such an embedding, for gauge-related multisymplectic manifolds in the presence of a compatible Hamiltonian group action, is also proved. The latter construction could play a role in determining the multisymplectic analogue of the geometric quantization procedure. Finally, a concrete construction of a homotopy comomentum map for the action of the group of volume-preserving diffeomorphisms on the multisymplectic 3-dimensional Euclidean space is proposed. This map can be naturally related to hydrodynamics. For instance, it transgresses to the standard hydrodynamical co-momentum map of Arnold, Marsden and Weinstein and others. A slight generalization of this construction to a special class of Riemannian manifolds is also provided. The explicitly constructed homotopy comomentum map can also be related to knot theory by virtue of the aforementioned hydrodynamical interpretation.
\chapter{Abstract} \label{ch:abstract} \vspace{-3em} \Momaps are a higher generalization of the notion of moment map introduced to extend the concept of Hamiltonian actions to the framework of multisymplectic geometry. Loosely speaking, higher means passing from considering a symplectic $2$-form to considering differential forms of higher degree. The goal of this thesis is to provide new explicit constructions and concrete examples related to group actions on multisymplectic manifolds admitting \momaps. \\ The first result is a complete classification of compact group actions on multisymplectic spheres. The existence of a \momap pertaining to the latter depends on the dimension of the sphere and the transitivity of the group action. Several concrete examples of such actions are also provided. \\ The second novel result is the explicit construction of the higher analogue of the embedding of the Poisson algebra of a given symplectic manifold into the corresponding twisted Lie algebroid. A compatibility condition for such an embedding, for gauge-related multisymplectic manifolds in the presence of a compatible Hamiltonian group action, is also proved. The latter construction could play a role in determining the multisymplectic analogue of the geometric quantization procedure. \\ Finally, a concrete construction of a \momap for the action of the group of volume-preserving diffeomorphisms on the multisymplectic 3-dimensional Euclidean space is proposed. This map can be naturally related to hydrodynamics. For instance, it transgresses to the standard hydrodynamical co-momentum map of Arnol'd, Marsden and Weinstein and others. A slight generalization of this construction to a special class of Riemannian manifolds is also provided. The explicitly constructed \momap can also be related to knot theory by virtue of the aforementioned hydrodynamical interpretation. Namely, it allows for a reinterpretation of (higher-order) linking numbers in terms of multisymplectic conserved quantities. As an aside, it also paves the way for a semiclassical interpretation of the HOMFLYPT polynomial in the language of geometric quantization. \ifstandalone \bibliographystyle{../../hep} \chapter{Action of compact groups on multisymplectic manifolds}\label{Chap:LeonidPaper} \note{ \textbf{Abstract:} We investigate the existence of \momaps (comoments) for high-dimensional spheres seen as multisymplectic manifolds. In particular, we solve the existence problem for compact effective group actions on spheres and provide explicit constructions for such comoments in interesting particular cases. } In this chapter, based on \cite{Miti2019}, we focus our attention on \momaps pertaining to multisymplectic actions of compact groups. More specifically, we solve the existence problem for compact effective group actions on oriented high-dimensional spheres seen as multisymplectic manifolds and provide explicit constructions in some relevant cases. The main result is subsumed in the following theorem: \begin{theorem}\label{thm:Leomainresult} (Proposition \ref{prop:intransitive} and Theorem \ref{thm:surprise}) Let $G$ be a compact Lie group acting multisymplectically and effectively on the $n$-dimensional sphere $S^n$ equipped with the standard volume form. Then the action admits a \momap if and only if $n$ is even or the action is not transitive.
\end{theorem} The outline of the chapter is as follows: in the first section we briefly survey the theory of cohomological obstructions to the existence of \momaps as introduced in \cite{Callies2016} and further developed in \cite{Fregier2015,zbMATH06448534}. We include proofs for some known results in order to achieve a complete and self-contained exposition. The main novelty in this section is an intrinsic proof of Theorem \ref{cpteqcom}, which does not depend on the choice of a model for equivariant cohomology. \\ We then prove Theorem \ref{thm:Leomainresult} for the non-transitive case in Section \ref{secintrans} and the transitive case in Section \ref{sectra}. In addition to proving the abstract theorem, we give explicit constructions for important classes of group actions and highlight interesting phenomena. \begin{remark}[Right-actions convention]\label{rem:RightActionMess} In this chapter, all groups considered will be connected unless stated otherwise, and all actions will be smooth and on the right. A right smooth action $\vartheta:G\action M$ will be encoded by the smooth map $\vartheta: G\times M \to M$, \ie with the acting group $G$ appearing as the first factor of the Cartesian product (in place of the more natural choice $\vartheta: M\times G \to M$), in order to agree with the sign conventions usually found in the literature. The corresponding infinitesimal action (via fundamental vector fields) will be given by the Lie algebra morphism $\vAct: \mathfrak{g}\to \mathfrak{X}(M)$ defined as \begin{displaymath} \vAct_\xi(m) = \left.\dfrac{d}{dt}\right\vert_0 \vartheta(\exp( t\xi),m) \qquad \forall m \in M , \xi \in \mathfrak{g} ~; \end{displaymath} (\cf equation \eqref{eq:LeftFundVF}, notice the opposite sign). \\ Recall that preferring right actions to left actions is mostly a matter of taste. Furthermore, for any given left action $\lambda:G\action M$, the smooth action $\rho:G\action M$, defined as $\rho_g = \lambda_{g^{-1}}$ for any $g\in G$, is a right action. \end{remark} \section{Cohomological obstructions to \Hcmm for compact groups} In this section, we give a short introduction to the cohomological obstructions for \momaps, focusing on their geometric description. Most results can be found in the literature \cite{Rogers2010,Callies2016,Fregier2015,zbMATH06448534} but we present some of the proofs for a clearer and self-contained exposition. Our main contribution is an intrinsic proof of Theorem \ref{cpteqcom} which does not depend on a choice of model for the equivariant cohomology. Consider an infinitesimal action of $\mathfrak{g}$ on the pre-$n$-plectic manifold $(M,\omega)$ preserving the form $\omega$. As shown in \cite{Fregier2015, zbMATH06448534}, \momaps for this action can be interpreted as primitives of a certain cocycle in a cochain complex. \begin{definition}\label{def:comomentbicomplex} The bi-complex naturally associated to the action of $\mathfrak{g}$ on $M$ is defined by \begin{displaymath} (C_\mathfrak{g}^{\bullet,\bullet} = \Lambda^{\geq 1} \mathfrak{g}^*\otimes \Omega^\bullet(M), \delta_\text{CE},d), \end{displaymath} where $d$ denotes the de Rham differential and $\delta_{CE}:\Lambda^k\mathfrak g^*\to \Lambda^{k+1}\mathfrak g^*$ the Lie algebra cohomology differential (see reminder \ref{Rem:CEconventions}), defined, for $f \in \Lambda^k\mathfrak g^*$, by \begin{displaymath} \delta_{CE}(f)(\xi_1,\dots,\xi_{k+1})=\sum_{i<j}(-1)^{i+j}f([\xi_i,\xi_j],\xi_1,\dots,\hat{\xi}_i,\dots,\hat{\xi}_j,\dots,\xi_{k+1}) ~.
\end{displaymath} The corresponding total complex is given by \begin{displaymath} (C_\mathfrak{g}^{\bullet}, d_{\tot} = \delta_\text{CE}\otimes \id + \id\otimes d), \end{displaymath} where, according to the Koszul sign convention, $d_{\tot}$ acts on an element of $\Lambda^k \mathfrak{g}^*\otimes \Omega^\bullet(M)$ as $\delta_\text{CE} + (-1)^k d$. \end{definition} \begin{theorem}[Proposition 2.5 in \cite{Fregier2015}, Lemma 3.3 in \cite{zbMATH06448534}]\label{Thm:CoboundaryinCGRANDE} Let $(M,\omega)$ be a pre-$n$-plectic manifold and $v:\mathfrak g\to \mathfrak X(M)$ be an infinitesimal multisymplectic action. The primitives of the natural cocycle \begin{equation}\label{eq:omegatildeobstruction} \tilde{\omega} = \sum_{k=1}^{n+1} (-1)^{k-1} \iota^k_\mathfrak{g} \omega \in C_\mathfrak{g}^{n+1}, \end{equation} where, as already introduced in remark \ref{Rem:SignedMultiContraction}, \begin{displaymath} \morphism{\iota^k_\mathfrak{g}} {\Omega^\bullet(M)} {\Lambda^k \mathfrak{g}^\ast \otimes \Omega^{\bullet-k}(M)} {\omega} {\left(p \mapsto \iota_{v_p} \omega \right)} ~, \end{displaymath} \noindent are in one-to-one correspondence with comoments of $v$. In particular, a \comoment exists if and only if~$[\tilde{\omega}]=0\in H^{n+1}(C_\mathfrak g^\bullet,d_{\tot})$. \end{theorem} \note{ Check that the old notation $\omega_k$ is not actually used in the end: \begin{align*} \iota^k_\mathfrak{g} \colon \Omega^\bullet(M) &\to \Lambda^k \mathfrak{g}^\ast \otimes \Omega^{\bullet-k}(M) \\ \omega&\mapsto \omega_k = \left(p \mapsto \iota_{v_p} \omega \right) , \end{align*} } \begin{proof} Being linear maps, the components $f_k$ can be regarded as elements of a vector space \begin{displaymath} f_k \in \Lambda^k \mathfrak{g}^\ast \otimes \Omega^{n-k}(M)\cong \text{Hom}_{\text{Vect}}(\Lambda^k \mathfrak{g}, \Omega^{n-k}(M)) \end{displaymath} satisfying equation (\ref{eq:fk_hcmm}) or, equivalently, as vectors $\tilde{f}_k =\varsigma(k) f_k$ satisfying \begin{equation}\label{eq:fk_hcmm_tilde} \tilde{f}_{k-1}(\partial p ) + (-1)^k d \tilde{f}_k ( p) = (-1)^{k-1}\iota(v_p) \omega . \end{equation} The last equation is obtained by multiplying equation (\ref{eq:fk_hcmm}) by the sign factor $\varsigma(k-1)$ and noting that $\varsigma(k-1)\varsigma(k) = (-1)^k$. \\ Upon considering the direct sum of these vectors \begin{displaymath} \tilde{f} = \left(\sum_{k=1}^n \tilde{f}_k \right) \in \bigoplus_{k=1}^n \left(\Lambda^k \mathfrak{g}^\ast \otimes \Omega^{n-k}(M)\right) \end{displaymath} equation (\ref{eq:fk_hcmm_tilde}) can be recast as: \begin{equation}\label{eq:fk_hcmm_tilde_complex_1} \left[\delta_{\text{CE}}\otimes \id + \id\otimes d \right] \tilde f = \sum_{k=1}^{n+1} (-1)^{k-1} \iota^k_\mathfrak{g} \omega \end{equation} where $\iota^k_\mathfrak{g}$ is the operator defined above. Note that we are implicitly using the Koszul convention; therefore the action of $\id\otimes d$ on a homogeneous element $f_k \in \Lambda^k \mathfrak{g}^*\otimes \Omega^\bullet(M)$ yields $(\id\otimes d) f_k = (-1)^k (\id \otimes d f_k) $. \\ If we set \begin{displaymath} \tilde{\omega} = \sum_{k=1}^{n+1} (-1)^{k-1} \iota^k_\mathfrak{g} \omega \in C_\mathfrak{g}^{n+1}, \end{displaymath} equation (\ref{eq:fk_hcmm_tilde_complex_1}) corresponds to \begin{equation}\label{eq:fk_hcmm_tilde_complex_2} d_{\tot} \tilde{f} = \tilde{\omega} \end{equation} which is exactly the condition that $\tilde{f}$ is a primitive of $\tilde{\omega}$.
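For instance, in the $2$-plectic case $n=2$ (spelled out here only to unpack the bidegree bookkeeping), one has $\tilde{f} = \tilde{f}_1 + \tilde{f}_2$ with $\tilde{f}_1 \in \mathfrak{g}^\ast\otimes\Omega^1(M)$ and $\tilde{f}_2 \in \Lambda^2\mathfrak{g}^\ast\otimes\Omega^0(M)$, and equation (\ref{eq:fk_hcmm_tilde_complex_2}) splits into the three homogeneous conditions \begin{displaymath} - d \tilde{f}_1 = \iota^1_\mathfrak{g}\omega , \qquad \delta_\text{CE}\tilde{f}_1 + d \tilde{f}_2 = -\iota^2_\mathfrak{g}\omega , \qquad \delta_\text{CE}\tilde{f}_2 = \iota^3_\mathfrak{g}\omega , \end{displaymath} one for each bidegree in $C^3_\mathfrak{g}$.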
\\ It follows from Lemma \ref{lemma:multicartan} that $d_{tot}\tilde \omega=0$ for all actions preserving $\omega$. Therefore the vanishing of the cohomology class $[\tilde{\omega}] \in H^{n+1}(C_\mathfrak{g}^\bullet)$ is a necessary and sufficient condition for the existence of a \comoment for the infinitesimal action of $\mathfrak{g}$ on $M$. \end{proof} The upshot is that \momaps pertaining to a certain multisymplectic action $\vAct:\mathfrak{g}\to \mathfrak{X}_{\msy}(M,\omega)$ are in a one-to-one correspondence with the primitives of the cochain $\tilde{\omega}$, hence they form an affine space modelled on the vector space $Z^{n}(C_\mathfrak{g})$ of $d_{\tot}$-closed $n$-cochains. \begin{remark}\label{rk:kuenneth} By the K\"{u}nneth theorem, the cohomology $H^\bullet(C_\mathfrak{g}^\bullet)$ is isomorphic to $H^{\geq 1}(\mathfrak g)\otimes H^\bullet(M)$; we will give a geometric interpretation of this fact in the next section. \end{remark} \begin{remark}\label{rk:cp_obsruction} In Chapter \ref{Chap:MauroPaper} we will consider a slightly different obstruction to the existence of a \comoment for $v:\mathfrak{g}\to \mathfrak{X}(M,\omega)$. Namely, we are going to consider the following cocycle in the Chevalley-Eilenberg complex of $\mathfrak{g}$ \begin{displaymath} \morphism{c^{\mathfrak{g}}_{p}=(\iota^{n+1}_\mathfrak{g}\omega)\big\vert_p} {\Lambda^{n+1} \mathfrak{g}} {\mathbb{R}} {x_{1} \wedge \dots \wedge x_{n+1}} {(\iota( v_{1} \wedge \dots \wedge v_{n+1})\omega)\big\vert_p} \end{displaymath} where $p\in M$ is a fixed (arbitrarily chosen) point of $M$. Lemma \ref{lemma:multicartan} guarantees that $\delta_\text{CE} c^{\mathfrak{g}}_{p} = 0$. Note that, when $M$ is connected, the cohomology class $[c_p^{\mathfrak{g}}] \in H^{n+1}(\mathfrak{g})$ is independent of the point $p$, see \cite[Proposition 9.1]{Callies2016}. \\ The vanishing of $[\tilde{\omega}]$ implies in particular that $(\iota^{n+1}_\mathfrak{g}\omega) \in C^{n+1}_\mathfrak{g}$ must be a boundary, hence the vanishing of $[c_p^{\mathfrak{g}}]$ is a necessary condition for the existence of a \comoment. \\ Moreover, it follows from Remark \ref{rk:kuenneth} that when $H^{i}_{\mathrm{dR}}(M) =0$ for $1 \leq i \leq n-1$ the vanishing of $[c_p^{\mathfrak{g}}]$ is also a sufficient condition for the existence of a \comoment. \end{remark} \begin{remark}[Relations between $C_\mathfrak{g}$ and equivariant cohomology] At this stage, this cohomological construction might appear to be a simple formal rearrangement of the moment map definition. Its real power is manifested when one can draw a correspondence between the cohomology groups of $C_\mathfrak{g}$ and those of the other geometric structures involved, namely the manifold $M$, the Lie group $G$ and the action $\vartheta$. \\ When talking about $G$-spaces, \ie triples $(M,G,\vartheta)$, the cohomology theory of interest is called \emph{equivariant cohomology}. It can be shown, see \cite[\S 6]{Callies2016}, that the complex $C_\mathfrak{g}$ can be embedded into the Bott-Shulman-Stasheff model $\Omega(G\ltimes M)$ pertaining to the action. Hence the statement of theorem \ref{Thm:CoboundaryinCGRANDE} can be framed as a coboundary condition in equivariant cohomology since $\Omega(G\ltimes M)$ is a model of equivariant cohomology (see proposition 6.4 in \cite{Callies2016}). In the same article, it is also shown how to frame the same cohomological obstruction in another complex, the \emph{Cartan model}, which is a model for equivariant cohomology only in the case of compact group actions.
\end{remark} In the following subsections we will focus on compact groups and give a geometric interpretation of this obstruction that does not depend on the model used to represent the equivariant cohomology of the action. \subsection{A geometric interpretation of the obstruction class}\label{subsec:geomint} In symplectic geometry the existence of a comoment map implies that $\omega$ can be extended to a cocycle in equivariant de Rham cohomology (\cite{MR721448}). Following \cite{Callies2016}, we illustrate this fact by giving a geometric interpretation to the obstruction class $[\tilde \omega]$ defined above and explain its analogue in the multisymplectic setting. When the Lie algebra action $v$ comes from a Lie group action, we can interpret the complex $C^\bullet_\mathfrak g$ and the cocycle $\tilde \omega$ in terms of the de Rham cohomology of invariant forms. \begin{definition} Let $\vartheta:G\times M\to M$ be a Lie group action. We denote by $\Omega^\bullet(M,\vartheta)$ the subcomplex of $\vartheta$-invariant differential forms. The cohomology of this complex is called \emph{invariant de Rham cohomology} and denoted by $H^\bullet(M,\vartheta)$. \end{definition} \begin{remark} It is more common in the literature to denote these invariant spaces by $\Omega^\bullet(M)^G$ and $H^\bullet(M)^G$. We use the above notation to emphasize the specific Lie group action involved. \end{remark} \begin{remark}\label{comactintegration} The invariant cohomology is not the same as the equivariant cohomology, which we will define later. For example, whenever $G$ is compact, the natural map $H^\bullet(M,\vartheta)\to H^\bullet(M)$ induced by the inclusion of the subcomplex is an isomorphism, as in this case any form can be made invariant by averaging. Pullbacks along equivariant maps lead to homomorphisms of the invariant cohomology groups. For details we refer to \cite[\textsection 4.3]{MR0336651}. \end{remark} \note{Recall that $C_\mathfrak{g}$ is built from $\wedge^{\geq 1}\mathfrak{g}^*\otimes \Omega^\bullet(M)$ \ie neglecting $\wedge^0\mathfrak g^*$ } \begin{lemma}[Lemma 6.3 in \cite{Callies2016}]\label{old_infmom} Let $\vartheta:G\times M\to M$ be a right Lie group action and denote by \begin{displaymath} \morphism{(r\times \id)} {G\times (G\times M)} {(G\times M)} {(h,(g,m))} {(gh,m)} \end{displaymath} the action of $G$ on $G\times M$ given by right multiplication on the factor $G$. \\ The complex ~$\Omega^\bullet(G\times M,{r\times \id})$ is naturally isomorphic to ~$C_\mathfrak g^\bullet\oplus\Omega^\bullet(M)$. \end{lemma} \begin{proof} We have a natural morphism in the category of cochain complexes: \begin{equation}\label{eq:naturalmap} \varphi:~ \tot(\Lambda^\bullet\mathfrak g^* \otimes \Omega^\bullet(M)) \to \tot(\Omega^\bullet(G,r) \otimes \Omega^\bullet(M)) \to \Omega^\bullet(G\times M, {r\times \id}). \end{equation} The first arrow comes from the isomorphism $\Lambda^k \mathfrak{g}^\ast \to \Omega^k(G,r)$, which associates to any element of $\Lambda^k\mathfrak g^*=\Lambda^kT^*_eG$ its right-invariant differential form extension on $G$. This is in particular an isomorphism of cochain complexes, obtained by dualizing the Chevalley-Eilenberg chain complex pertaining to right-invariant vector fields on $G$, that is $CE(\mathfrak{g})\cong CE(\mathfrak{X}(G,r))$. Note that, by Lemma \ref{lemma:multicartan}, the cochain differential $\delta_{CE}$ on $(CE(\mathfrak{X}(G,r)))^\ast \cong \Omega(G,r)$ corresponds to the usual de Rham differential.
\\ The second arrow comes from the exterior wedge product, \ie from the map \begin{displaymath} \begin{tikzcd}[column sep= small,row sep=0ex] \Omega^q(G,r) \otimes \Omega^p(M) \arrow[r] & \Omega^{q+p}(G\times M, {r\times \id}) \\ \alpha\otimes\beta \ar[r, mapsto] & \pi_1^*\alpha \wedge \pi_2^* \beta \end{tikzcd} \end{displaymath} where $\pi_i$ are the projections on the $i$-th factor of $G\times M$. Regarding the complexes involved as graded vector spaces, the previous map can be extended to a degree-$0$ bilinear map \begin{equation}\label{eq:kunnetqiso} \begin{tikzcd}[column sep= small,row sep=0ex] \Omega^\bullet(G,r) \otimes \Omega^\bullet(M) \arrow[r] & \Omega^{\bullet}(G\times M, {r\times \id}) \\ \alpha\otimes\beta \ar[r, mapsto] & (-1)^{|\alpha|}\pi_1^*\alpha \wedge \pi_2^* \beta \end{tikzcd} \end{equation} where the extra signs come from the Koszul convention we employed in definition \ref{def:comomentbicomplex} when defining the differential in the total complex. \\ The map $\varphi$ defined in \eqref{eq:naturalmap} admits an inverse given by restricting a form $\alpha\in \Omega^\bullet(G\times M, r\times \id)$ to the submanifold ${\{e\}\times M}$. Explicitly one has $\varphi^{-1} := (\iota_e)^\ast $, where $\iota_e : \{e\}\times M \hookrightarrow G \times M$ is the standard inclusion, since \begin{displaymath} \begin{aligned} \Gamma(\iota_e^* \Lambda^k T^*(G\times M)) &\cong \Gamma(\Lambda^k((T^*_e G)_M \oplus T^*M )) \cong \\ & \cong \Gamma \left(\bigoplus_{n=0}^k \Lambda^n T^*_e G \otimes \Lambda^{k-n} T^*M \right) \cong \\ &\cong \bigoplus_{n=0}^k \Lambda^n\mathfrak{g}^*\otimes \Omega^{k-n}(M) ~. \end{aligned} \end{displaymath} (In the above equalities, $(T^*_e G)_M$ stands for the trivial vector bundle $(T^*_e G)\times M \to M$.) \\ The statement follows from the observation that $C_\mathfrak g^\bullet\oplus( \Lambda^0\mathfrak g^*\otimes \Omega^\bullet(M))\cong C_\mathfrak g^\bullet\oplus \Omega^\bullet(M)$ is the total complex of $\Lambda^\bullet\mathfrak g^*\otimes \Omega^\bullet(M)$ and that the second arrow defined above is precisely the map inducing the K\"unneth isomorphism \cite{Bott-Tu82}. \end{proof} \begin{proposition}[Proposition 6.4 in \cite{Callies2016}]\label{infmom} Assume that $G$ preserves a pre-$n$-\-plec\-tic form $\omega$. Call $v$ the infinitesimal action induced by $G$. \\ Then the cocycle $\tilde \omega \in C_\mathfrak g^{n+1}\hookrightarrow\Omega^\bullet(G\times M, {r\times \id})$ with respect to the infinitesimal action $v$ is given by \begin{displaymath} \tilde{\omega}= \varphi^{-1}\left(\vartheta^*\omega-\pi^*\omega\right) ~, \end{displaymath} where $\pi:G\times M\to M$ is the projection onto the second factor and $\varphi$ is the natural isomorphism defined by equation \eqref{eq:naturalmap}. \end{proposition} \begin{proof} First, one must check that $\vartheta^* \omega - \pi^* \omega$ is a well-defined $(r\times \id)$-invariant form. Being an action, the map $\vartheta\colon G \times M \to M$ is equivariant (with respect to $r\times \id$ in the domain and $\vartheta$ in the target). Hence, the cochain-map $\vartheta^*:\Omega^\bullet(M)\to \Omega^\bullet(G\times M)$ restricts to a well-defined map on the invariant subcomplexes $\vartheta^*:\Omega^\bullet(M,{\vartheta})\to \Omega^\bullet(G\times M,{r \times \id}) $, and in particular we have a well-defined map in cohomology. \\ Proceeding by inspection, we consider, without loss of generality, the point $(e,p)\in G\times M$.
From the general theory of product manifolds one has that \begin{displaymath} T_{(e,p)}(G\times M) \cong T_e G \oplus T_p M \cong \mathfrak{g}\oplus T_p M ~. \end{displaymath} Let $X_1,...,X_{n+1-i}\in T_pM$ and $\xi_1,...,\xi_{i}\in \mathfrak g$. For all $ 1\leq i\leq n+1 $ we get \begin{align*} \vartheta^*\omega(\xi_1,...,\xi_{i},X_1,...,X_{n+1-i}) & = \omega(\vartheta_*\xi_1,...,\vartheta_*\xi_{i},\vartheta_*X_1,...,\vartheta_*X_{n+1-i}) =\\ &=\omega(v(\xi_1),...,v(\xi_{i}),X_1,...,X_{n+1-i})=\\ &= \left[(\iota^i_\mathfrak g\omega)(\xi_1,...,\xi_{i})\right](X_1,...,X_{n+1-i}) \end{align*} and for $i=0$ we get \begin{displaymath} \vartheta^*\omega(X_1,\dots,X_{n+1} )= \omega(X_1,\dots,X_{n+1}) = \pi^* \omega (X_1,\dots, X_{n+1}) ~. \end{displaymath} Observe then that, by the sign convention in equation \eqref{eq:kunnetqiso}, it follows that \begin{displaymath} \varphi (\tilde{\omega}) = \sum_{k=1}^{n+1}(-1)^{k-1} \varphi (\iota^k_{\mathfrak{g}}\omega ) = -\sum_{k=1}^{n+1}\iota^k_{\mathfrak{g}}\omega \end{displaymath} where the last $\iota^k_{\mathfrak{g}}\omega$ is seen as a differential form on $G\times M$. Since $\xi_1,\dots,\xi_i,X_1,\dots,X_{n+1-i}$ are arbitrary, it follows that \begin{displaymath} \vartheta^* \omega = - \varphi(\tilde{\omega}) + \pi^*(\omega) \end{displaymath} and the invertibility of $\varphi$ (see Lemma \ref{old_infmom}) finishes the proof. One should note that this holds even though $\pi$ is not an equivariant map with respect to $( r\times \id, \vartheta)$. \end{proof} \begin{remark} The previous results are subsumed by the following sequence in the category of cochain complexes: \begin{displaymath} \begin{tikzcd} \Omega(M,\vartheta) \ar[r,"\vartheta^\ast-\pi^\ast"'] & \Omega(G\times M,r\times id) \ar[r,equal,"\varphi^{-1}"',"\sim"] & \Lambda^{\geq 0}\mathfrak{g}^\ast\otimes\Omega(M) \ar[r,equal,"\sim"] &[-3em] C_{\mathfrak{g}}\oplus\Omega(M) \end{tikzcd} \end{displaymath} for any smooth action $\vartheta:G\action M$. Given a multisymplectic form $\omega\in \Omega(M,\vartheta)$, one obtains the following cohomological characterizations of the existence of a \momap: \begin{displaymath} \left( \underset{\text{admits \Hcmm}}{\vartheta:G\action (M,\omega)} \right) ~ \Leftrightarrow ~ \Big( [\tilde{\omega}]=0 \in H(C_{\mathfrak{g}}) \Big) ~ \Leftrightarrow ~ \Big( [(\vartheta^\ast-\pi^\ast)\omega]=0 \in H(G\times M, r\times id) \Big) \end{displaymath} Notice, in particular, that the right-hand side provides an obstruction in terms of the $G$-invariant cohomology, which is closer to the problem's geometric data and less ``ad hoc'' than $C_{\mathfrak{g}}$. \\ All of this does not require the group to be compact. In the case of compact groups, the obstruction can be read directly in de Rham cohomology, see corollary \ref{cor:core}. \end{remark} \begin{remark} We think that the above proposition is central to the understanding of multisymplectic \comoments, as it enables an elementary and unified treatment of many phenomena in multisymplectic geometry. \begin{itemize} \item For symplectic manifolds, this result gives a nice interpretation for the result of \cite{MR721448} that moment maps are in correspondence with equivariant extensions of $\omega$ and also explains why this correspondence fails in the general multisymplectic setting (\cf Example \ref{exnongen} and \cite[\textsection 4]{weinstein1977lectures}). \item A \comoment exists whenever the multisymplectic form $\omega$ can be lifted to a class in the equivariant cohomology $H_G^{n+1}(M)$. (See Theorem \ref{cpteqcom}).
\item Let $G_i$ act on the multisymplectic manifolds $(M_i,\omega_i)$ for $i\in\{1,2\}$. If there exists a \comoment for $\vartheta_i:G_i\action (M_i,\omega_i)$, then there exists a \comoment for the product $\vartheta=(\vartheta_1\times \vartheta_2): (G_1\times G_2) \action (M_1\times M_2)$ with respect to the multisymplectic structure $\omega=(\tau_1^*\omega_1\wedge \tau_2^*\omega_2)\in \Omega(M_1\times M_2)$. (Here $\tau_i:M_1\times M_2 \to M_i$ denotes the standard projection of the Cartesian product). This theorem from \cite{Shahbazi2016}, proven by explicit exhibition of the sought \comoment, can be derived from Proposition \ref{infmom} by observing that \begin{align*} [ (\vartheta^\ast - & \pi^\ast) \omega] = \\ =&~ [ (\vartheta_1^\ast\times\vartheta_2^\ast - \pi_1^\ast \times \pi_2^\ast) (\tau_1^*\omega_1\wedge \tau_2^*\omega_2)] = \\ =&~ [\tau_1^*\vartheta_1^\ast \omega_1 \wedge \tau_2^*\vartheta_2^\ast \omega_2 - \tau_1^*\pi_1^\ast \omega_1 \wedge \tau_2^*\pi_2^\ast \omega_2 ] ~= \\ =&~ [ (\tau_1^*(\vartheta_1^*-\pi_1^*)\omega_1)\wedge (\tau_2^*\vartheta_2^* \omega_2) + (\tau_1^\ast\pi_1^\ast\omega_1)\wedge (\tau_2^\ast(\vartheta_2^\ast-\pi_2^\ast)\omega_2)] ~= \\ =&~ \kappa\left( \cancel{[(\vartheta_1^\ast -\pi_1^\ast) \omega_1]},[\vartheta_2^\ast \omega_2] \right) + \kappa\left( [\pi_1^\ast \omega_1], \cancel{[(\vartheta_2^\ast -\pi_2^\ast )\omega_2]} \right) ~= \\ =&~ 0 ~, \end{align*} where $\kappa$ denotes the K\"unneth isomorphism and the last cancellations are due to the existence of a \momap with respect to the components $\vartheta_i$. A similar result also holds when endowing $M_1\times M_2$ with the multisymplectic form $\tau_1^\ast \omega_1 + \tau_2^\ast \omega_2$. \note{More details in the file LeoQ.tex} \end{itemize} \end{remark} \begin{remark}\label{rk:invpot} In particular, Proposition \ref{infmom} immediately implies that a $\vartheta$-invariant potential of $\omega$ induces a \comoment, as an invariant potential $\alpha$ of $\omega$ would be mapped to a potential $(\vartheta^*\alpha-\pi^*\alpha) \in \Omega^\bullet( G\times M , r\times \id)$ of $\tilde{\omega}=\vartheta^*\omega-\pi^*\omega$. Note that $\omega$ being exact is not a sufficient condition, because a primitive $\vartheta^*\alpha-\pi^*\alpha$ need not be an element of the invariant cochain complex $\Omega^\bullet(G\times M, r\times \id)$ in general. \\ \end{remark} When such an invariant potential exists, it is fairly easy to give an explicit expression for the components of a \comoment, as illustrated by the following Lemma: \begin{lemma}[Section 8.1 in \cite{Callies2016}]\label{lem:extexact} Let $M$ be a manifold with a $G$-action, let $\alpha\in \Omega^n(M,G)$ be a $G$-invariant $n$-form and consider the pre-$n$-plectic form $\omega=d\alpha$ on $M$.\\ The action $G\action \left(M,d\alpha\right)$ admits a $G$-equivariant \momap $(f):\mathfrak{g} \to L_{\infty}(M,\omega)$, given by $(k=1,\dots,n)$: \begin{displaymath} \morphism{f_{k}} {\Lambda^k\mathfrak{g}} {\Omega^{n-k}(M)} {q} {(-1)^{k-1}\varsigma(k)\iota(v_q)(\alpha)~.} \end{displaymath} \end{lemma} \begin{proof} A direct proof of this statement can be given by showing that equation \eqref{eq:fk_hcmm} is satisfied.
Upon employing Lemma \ref{lemma:multicartan}, we have: \begin{align*} \textrm{d} f_m (p) &= (-1)^{m-1} \varsigma(m) \textrm{d} \iota_{v_p} \alpha = \\ &= -\varsigma(m) (-1)^{m} \textrm{d} \iota(v_1\wedge\dots\wedge v_m) \alpha =\\ &= -\varsigma(m) \left(\iota_{v_p} \textrm{d}\alpha + \iota_{\partial v_p} \alpha + \sum_{k=1}^{m} (-1)^k \iota( v_1\wedge\dots\wedge \hat{v}_k \wedge\dots\wedge v_m) \cancel{\mathcal{L}_{v_k} \alpha} \right) =\\ &= -\varsigma(m) \iota_{v_p} \omega + (-1)^{m-1} \varsigma(m-1) \iota_{\partial v_p} \alpha = -\varsigma(m) \iota_{v_p} \omega - f_{m-1}(\partial v_p) , \end{align*} thus $G$-equivariance follows from lemma \ref{lemma:multicartan}. \end{proof} \begin{corollary}[$SO(n)$-action on $\mathbb{R}^n$, Example 8.4 in \cite{Callies2016}]\label{cor:sorn} The canonical action $$SO(n) \action \left( \mathbb{R}^{n}, dx^{1\dots n}\right),$$ where~$x = (x^i)$ are the standard coordinates on~$\mathbb{R}^n$ and~$dx^{1\dots n} = d x^1\wedge\dots \wedge dx^{n}$ is the standard volume form of $\mathbb{R}^n$, admits a \comoment given by $(k=1,\dots,n-1)$: \begin{displaymath} \morphism{f_{k}} {\Lambda^k\mathfrak{g}} {\Omega^{n-1-k}(M)} {q} {(-1)^{k-1} \frac{\varsigma(k)}{n} \iota(E \wedge v_q)\left(dx^{1\dots n}\right)} \end{displaymath} where $E =\sum_i x^i\partial_i$ is the Euler vector field. \end{corollary} \begin{proof} The proof follows from Lemma \ref{lem:extexact} by noting that the standard volume form admits the following $G=SO(n)$-invariant primitive $$\alpha = \dfrac{\iota_E\left(dx^{1\dots n}\right)}{n}~.$$ \end{proof} \subsection{Cohomological obstructions for compact group actions} Let us specialize our description of the obstructions to the existence of a \momap pertaining to a compact Lie group action. In this case, we do not need to worry about the invariance of forms and Proposition \ref{infmom} reads as follows: \begin{corollary}\label{cor:core} Let $\vartheta:G\times M\to M$ be an action of a compact Lie group on a pre-$n$-plectic manifold, preserving the pre-multisymplectic form $\omega$. A \comoment exists if and only if $[\vartheta^*\omega-\pi^*\omega]=0\in H^{n+1}_{dR}(G\times M)$. \end{corollary} \begin{proof} From Proposition \ref{infmom} we get the following sequence of maps between complexes together with the induced maps in cohomology: \begin{center} \begin{tikzcd} \Omega^\bullet(M,\vartheta) \ar[d,"\vartheta^\ast-\pi^\ast"] &\quad &[-2em] H_\text{dR}(M) \ar[d,"\vartheta^\ast-\pi^\ast"] &[-2em] \lbrack \omega \rbrack \ar[d,mapsto] \\ \Omega^\bullet(G\times M, r\times \id) \ar["\cong",leftrightarrow]{d} &\quad & H_\text{dR}(G\times M) \ar[leftrightarrow,"\cong"]{d}[swap]{\text{\tiny (K\"unneth)}} & \lbrack \vartheta^\ast \omega - \pi^\ast \omega \rbrack \ar[ddd,mapsto] \\ \Omega^\bullet(G,r) \otimes \Omega^\bullet(M) \ar["\cong",leftrightarrow]{d}[swap]{} &\quad & H_\text{dR}(G) \otimes H_\text{dR}(M) \ar[d,"\cong",leftrightarrow] \\ \Lambda^\bullet \mathfrak{g}^* \otimes \Omega^\bullet(M)\ar["\cong",leftrightarrow]{d} &\quad & H_\text{CE}(\mathfrak{g}) \otimes H_\text{dR}(M) \ar["\cong",leftrightarrow]{d} & \\ C_\mathfrak{g}^\bullet \oplus ( \mathbb{R}\otimes \Omega^\bullet(M))&\quad & H(C_\mathfrak{g}^\bullet)\oplus H_\text{dR}(M) & \lbrack \tilde{\omega}\rbrack \end{tikzcd} \end{center} The statement follows by resorting to Remark \ref{comactintegration}, \ie noting that $H_\text{dR}(G) \cong H(G,r)$ and $H_\text{dR}(G\times M) \cong H(G\times M, r\times \id)$ via the averaging trick on compact connected Lie groups (see reminder \ref{rem:Averaging}).
\end{proof} \begin{reminder}[Averaging trick]\label{rem:Averaging} Let $\vartheta: G\action M$ be an action of a compact and connected Lie group. By \emph{averaging trick} we mean the procedure that associates to any differential form on $M$ a $\vartheta$-invariant one. \\ Recall first that Lie groups are orientable (since the tangent bundle $TG\cong G\times \mathfrak{g}$ is trivial); in particular, it is always possible to find left or right invariant volume forms thereon. Such volume forms are unique up to a multiplicative constant. When $G$ is a compact group, this constant can be fixed via normalization. Namely, there exists a unique right-invariant volume form $dg \in (\Omega^{\text{top}}(G),r)$, called the \emph{Haar volume}, such that $\int_G \d g = 1$ (see for instance \cite[\S 3.13]{Duistermaat2000}). \\ Consider now the aforementioned compact group action $\vartheta:G\action M$. For any $0\leq k \leq \text{dim}(M)$, we define the \emph{averaging} as the mapping \begin{displaymath} \morphism{A} {\Omega^k(M)} {(\Omega^k(M),\vartheta)} {\omega} {\displaystyle \int_G (\vartheta^\ast_g \omega) dg}~, \end{displaymath} where $dg$ denotes the Haar measure, $\vartheta_g:x\mapsto x.g$ is the action of $g$ and $(\vartheta_g^\ast \omega)$ is the pullback of $\omega$ along the action, seen as a vector-valued smooth function on $G$: \begin{displaymath} \vartheta^\ast_\blank \omega : G \to \Omega^k(M)~. \end{displaymath} A simple consequence of this construction is that any $\vartheta$-invariant exact form admits a $\vartheta$-invariant primitive. Namely, considering $\omega= \d \beta$, one has $$ \omega = A(\omega) = \int_G(\vartheta_g^\ast \d \beta) dg = \d A(\beta)~. $$ \end{reminder} \medskip \noindent We will now investigate the connection between \comoments and equivariant cohomology. The latter is the most general cohomology theory pertaining to group actions on topological spaces (see \cite{Tu2011} for a gentle introduction). \begin{definition}[{Equivariant Cohomology (\cite{Borel1960}, see also \cite[\S 3]{Hsiang1975}\cite[\S 1]{Guillemin1999})}] \label{def:equivariantcoho} Consider a smooth Lie group action $\vartheta:G\action M$ on a manifold $M$. Let $EG$ be a contractible space on which $G$ acts freely by $\vartheta^{EG}$. Then we define the \emph{equivariant cohomology of $M$} as $H^\bullet_G(M):=H^\bullet((M\times EG)/G)$, where $G$ acts on $M\times EG$ diagonally. \end{definition} \begin{remark} As $EG$ might not be a manifold, we have to interpret $H^\bullet_G(\cdot)$ as a suitable cohomology theory (\eg singular cohomology with real coefficients) in the above definition. \\ When the group $G$ is compact, any action $\vartheta:G\times M\to M$ is in particular proper. If the action $\vartheta$ is also free, then the quotient manifold theorem applies and $H_G^\bullet(M)=H^\bullet_{dR}(M/G)$.
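For non-free actions, equivariant cohomology genuinely differs from both the ordinary and the invariant de Rham cohomology. A standard example, recalled here only for illustration, is the trivial action of $S^1$ on a point: since $ES^1 \simeq S^\infty$ and $ES^1/S^1 \simeq \mathbb{CP}^\infty$, one has \begin{displaymath} H^\bullet_{S^1}(\mathrm{pt}) \cong H^\bullet(\mathbb{CP}^\infty) \cong \mathbb{R}[u] , \qquad \deg u = 2 , \end{displaymath} where $u$ denotes a degree-$2$ generator, whereas the (invariant) de Rham cohomology of a point is just $\mathbb{R}$ concentrated in degree $0$.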
\end{remark} For a not necessarily free action $\vartheta$, we have the following diagram in the category of topological spaces \begin{center} \begin{tikzcd}[column sep=large] G\times (M\times EG) \ar[r,shift left=1.5ex,"\vartheta\times \vartheta^{EG}"]\ar[r,shift right=1.5ex,"\pi"] & M\times EG \ar[r,"q"]& (M\times EG)/G, \end{tikzcd} \end{center} where $q$ is the projection to the orbits, which induces a sequence in cohomology: \begin{center} \begin{tikzcd}[column sep=large] H^\bullet(G\times M) & H^\bullet(M) \ar[l,"\vartheta^*-\pi^*"] & \ar[l,"q^*"]H^\bullet_G(M) \end{tikzcd} \end{center} The latter diagram is obtained using the definition of equivariant cohomology $H(\frac{M\times EG}{G})\cong H_G(M)$ on the rightmost term and the contractibility of $EG$, \ie \begin{displaymath} \begin{split} H(M\times EG) &\underset{\text{de Rham thm.}}{\cong} H_{dR}(M\times EG) \\ &\underset{\text{K\"unneth thm.}}{\cong} H_{dR}(M)\otimes H_{dR}(EG) \\ &\underset{\text{$EG$ contractible}}{\cong} H_{dR}(M)~. \end{split} \end{displaymath} Since $q$ is the projection onto orbits, one has $q\circ \vartheta=q\circ\pi$ (where $\vartheta$ here is shorthand for the diagonal action $\vartheta\times \vartheta^{EG}$ on $M\times EG$). Therefore $(\vartheta^*-\pi^*)\circ q^*=0$. Using Remark \ref{comactintegration} and Proposition \ref{infmom} we get the following statement: \begin{theorem}[\cite{Callies2016}]\label{cpteqcom} Let $\vartheta: G\times M\to M$ be an action of a compact Lie group preserving a pre-$n$-plectic form $\omega$. If $[\omega]\in H^\bullet(M)$ lies in the image of $q^*:H^\bullet_G(M)\to H^\bullet(M)$, then $\vartheta$ admits a \comoment. \end{theorem} \begin{remark} The advantage of our approach to the theorem is that it is much simpler and more intrinsic; in particular, we do not need to choose a model for equivariant cohomology. In Section 6 of \cite{Callies2016} one can find similar results framed in the Bott-Shulman-Stasheff and in the Cartan model. Furthermore, in Section 7.5 of \cite{Callies2016}, a generalization of this statement to non-compact groups is discussed. \end{remark} Observe that the vector space $\text{Im}(q^\ast) \subset H_{dR}(M)$ consists of the cohomology classes (of closed forms) coming from equivariant cohomology classes; in other words, elements in $\text{Im}(q^\ast)$ are exactly those that can be extended to an equivariant cohomology class. The condition for $[\omega]$ to come from an equivariant cohomology class is sufficient for the existence of a \momap. \\ Unfortunately, unlike the symplectic case (\cf \cite{MR721448}), the converse statement does not hold in general. Even if a (pre-)multisymplectic action of $G$ on $(M,\omega)$ admits a \comoment, $[\omega]$ need not come from an equivariant cocycle. We will illustrate this fact with an example. \begin{example} \label{exnongen} Consider the action $\vartheta: S^1\times S^3\to S^3$ given by the Hopf fibration $S^1\hookrightarrow S^3 \twoheadrightarrow S^2$. Let $\omega$ be the standard volume on $S^3$. By Remark \ref{rk:kuenneth} the obstructions to a \comoment sit in $H^1(\mathfrak u(1))\otimes H^2(S^3) $, $ H^2(\mathfrak u(1))\otimes H^1(S^3)$ and $H^3(\mathfrak u(1))\otimes H^0(S^3) $, which are trivial. Hence, a \comoment exists. \\ As the action is free (and the quotient is $S^2$), we have $H^3_{S^1}(S^3)= H^3(S^3/S^1)=H^3(S^2)=0$. But $[\omega]\neq 0$ in $H^3(S^3)$, so the class $[\omega]$ cannot come from an equivariant cocycle.
\end{example} \begin{remark} We note that this example has a different character from the ones provided in Section 7.5 of \cite{Callies2016}. They exhibit cases where individual \comoments do not come from equivariant cocycles, whereas in our case no equivariant cocycle can be found for any of the possible \comoments. \end{remark} \section{Non-transitive multisymplectic group actions on spheres}\label{secintrans} The goal of this section is to prove the existence of \comoments for non-transitive actions and construct an explicit \comoment for the $SO(n)$-action on $S^n$. \begin{remark}[Canonical multisymplecticity of the sphere] Recall that the $n$-dimensional sphere $S^n$ is a connected and, for $n\geq 2$, simply connected manifold; in particular, it is orientable. Recall also that $S^n$ can be canonically embedded in the Euclidean space $\R^{n+1}$ as the standard unit sphere centered at the origin. Thus, it possesses a canonical Riemannian structure induced by the standard metric on $\R^{n+1}$. One can induce a volume form on $S^n$ by pulling back the contraction of the Euclidean volume form with a nowhere-vanishing normal vector field. To this end, it is customary to choose the outward-pointing vector field $E = x^i \partial_i$, called the \emph{Euler vector field}. The unique Riemannian volume form singled out by this procedure will be the standard multisymplectic form considered on spheres. (See example \ref{Ex:VolumesAreMultiSymp}.) \end{remark} Recall first that the \emph{orbit map of $p\in M$} with respect to the smooth action $\vartheta:G\action M$ is the smooth function \begin{displaymath} \morphism{\vartheta_p} {G} {M} {g} {g.p = \vartheta(g,p)} \end{displaymath} whose image is an immersed submanifold of $M$, simply called \emph{the orbit of $p$}. \\ The existence of a \momap for actions on a sphere can be ascertained by studying the restriction of the multisymplectic form to orbits: \begin{lemma}\label{lem:core} Let $\vartheta: G\times S^n\to S^n$ be a multisymplectic action of a compact Lie group on $S^n$ equipped with the standard volume form $\omega\in \Omega^n(S^n)$. Let $p\in S^n$ be any point. \\ A \momap exists if and only if $\vartheta_p^*[\omega]\in H^{n}(G)$ vanishes. \end{lemma} \begin{proof} By Corollary \ref{cor:core}, we only have to check that $$ [\vartheta^*\omega-\pi^*\omega]=0\in H^{n}(G\times S^n) \quad\Leftrightarrow\quad \vartheta_p^*[\omega]= 0\in H^{n}(G)~.$$ The direct implication follows by considering the map \begin{displaymath} \morphism{i} {G} {G\times S^n} {g} {(g,p)} \end{displaymath} and its induced linear map in cohomology $i^* : H^\bullet(G \times S^n) \to H^\bullet(G)$ which acts on $[\vartheta^*\omega - \pi^*\omega] \in H^n(G\times S^n)$ as \begin{displaymath} i^* [\vartheta^* \omega - \pi^* \omega ] = [(\vartheta \circ i)^\ast \omega - \cancel{(\pi \circ i)^\ast \omega}] = \vartheta^*_p[\omega] , \end{displaymath} because $\vartheta \circ i = \vartheta_p$ and $\pi \circ i $ is the constant map with value $p\in S^n$. \\ For the converse implication, note first that the cohomology of the sphere implies $$H^{n}(G\times S^n)=(H^n(G)\otimes H^0(S^n))\oplus (H^0(G)\otimes H^n(S^n)) . $$ Therefore, recalling Proposition \ref{infmom}, the obstruction $[\vartheta^*\omega-\pi^*\omega]$ lies entirely in $H^n(G)\otimes H^0(S^n)$ as $[\tilde{\omega}]$ has vanishing component in $ H^0(G)\otimes H^n(S^n)$.
Since the restriction of $i^*$ to $H^n(G)\otimes H^0(S^n)$ is an isomorphism (the $0$-th cohomology group of a connected manifold is isomorphic to $\mathbb{R}$), one can conclude that $[\vartheta^*\omega-\pi^*\omega]$ vanishes if and only if $$i^*[\vartheta^*\omega-\pi^*\omega]=\vartheta_p^*[\omega]=0\in H^n(G) .$$ \end{proof} \begin{remark} Note that the direct implication in the proof of Lemma \ref{lem:core} does not depend on the base manifold being a sphere. In other words, a necessary condition for the existence of a \comoment is that the pullback of the class $[\omega]$ along any orbit map vanishes. \end{remark} \begin{remark} Observe that a result similar to Lemma \ref{lem:core} can be stated for any compact multisymplectic action $G \action ( M,\omega)$, with $\omega$ in degree $n+1$, such that the following cohomological condition holds $$ \bigoplus_{k=1}^n H^k(G)\otimes H^{n-k}(M) = 0 . $$ In particular, the same result is true for any compact, connected and semisimple Lie group $G$ (\ie such that $H^1(\mathfrak{g})=H^2(\mathfrak{g})=0$) acting on a $2$-plectic manifold. See \cite[Proposition 7.1]{Callies2016} for an existence result for \comoments related to this kind of action in the presence of a fixed point. \end{remark} An action $G\action M$ is said to be \emph{transitive} if the orbit map $\vartheta_p$ is surjective for any $p\in M$. When this condition is not met we have the following result: \begin{proposition}\label{prop:intransitive} Let $G$ be a compact Lie group acting non-transitively on $S^n$ and preserving the standard volume form. Then the action $G\action S^n$ admits a \comoment. \end{proposition} \begin{proof} If $G$ acts non-transitively then there exists an orbit $O\subset S^n$ of dimension strictly less than $n$. Let $p\in O$. Then we have $\vartheta_p^*[\omega]=\vartheta_p^*[\omega|_O]$, but $\omega|_O\in \Omega^n(O)$ is zero for dimensional reasons. Hence, the action admits a \comoment by Lemma \ref{lem:core}. \end{proof} \subsection{The action of $SO(n)$ on $S^n$}\label{subsecson} The goal of this section is to give an explicit construction for the \comoment of the action $SO(n) \action S^n$ by resorting to Corollary \ref{cor:inducedmachinery}. \vspace{1ex} In what follows, we will consider the standard $SO(n+1)$-invariant embedding $ j: S^n \to \mathbb{R}^{n+1}$ of $S^n$ as the sphere with unit radius and consider the linear action of the group $SO(n)$ on $\mathbb{R}^{n+1}$ as the subgroup of special orthogonal linear transformations fixing the axis $x^0$. \\ In~$\mathbb{R}^{n+1}$ we consider the standard coordinates~$x = (x^0,\dots,x^n)$ and the corresponding volume form~$\Vol = d x^0\wedge\dots \wedge dx^{n}$. Furthermore, we will make use of the following notation for the cylindrical radius (with axis $x^0$) and the central radius respectively: \begin{displaymath} r = \sqrt{(x^1)^2 + \dots + (x^n)^2} \qquad R = \sqrt{(x^0)^2 + r^2} ~. \end{displaymath} \vspace{1ex} Recall that the volume form on the unit sphere embedded in $\mathbb{R}^{n+1}$ is given by $$ \omega = j^\ast \iota_E ~\Vol$$ where $E$ is the Euler vector field. $E$ can be seen as the fundamental vector field of the action \begin{align*} \vartheta: \mathbb{R} \times \mathbb{R}^{n+1} &\to \mathbb{R}^{n+1}\\ (\lambda,x)&\mapsto e^{\lambda} x \end{align*} of $\mathbb{R}$ on the Euclidean space via dilations, that is the linear action generated by the identity matrix $\id_{n+1} \in \mathfrak{gl}(n+1,\mathbb{R})$, \ie $ E= v_{\id_{n+1}} = \sum_i x^i\, \partial_i$.
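For instance, in the lowest-dimensional case $n=1$ (included here only as a sanity check of the formula $\omega = j^\ast \iota_E \Vol$), parametrizing the unit circle by $j(\theta)=(\cos\theta,\sin\theta)$ one finds \begin{displaymath} j^\ast \iota_E \left( dx^0\wedge dx^1 \right) = j^\ast\left( x^0\, dx^1 - x^1\, dx^0 \right) = \left( \cos^2\theta + \sin^2\theta \right) d\theta = d\theta , \end{displaymath} \ie the standard arc-length form on $S^1$.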
Let us call $H = SO(n)\times\mathbb{R}$ the subgroup of $GL(n+1,\mathbb{R})$ generated by \begin{displaymath} \mathfrak{h} = \left\lbrace \begin{bmatrix} 0 & 0 \\ 0 & a \end{bmatrix} \; \vert \: a \in \mathfrak{so}(n) \right\rbrace \oplus \langle \id_{n+1} \rangle \simeq \mathfrak{so}(n)\oplus \mathbb{R} ~. \end{displaymath} The group $H$ acts linearly on $\mathbb{R}^{n+1}$ through the standard infinitesimal action \begin{displaymath} \morphism{v} {\mathfrak{h}} {\mathfrak{X}(\mathbb{R}^{n+1})} {A} {\displaystyle\sum_{i,j} [A]_{i j} ~ x^j \partial_i} \end{displaymath} where $[A]_{i j}$ denotes the $(i,j)$ entry of the matrix $A$. \begin{lemma}\label{lem:rescaledvolume} The differential form \begin{displaymath} \eta = \rescaling \,\Vol \in \Omega^{n+1}(\mathbb{R}^{n+1} \setminus \{0\}) \qquad \text{with} \qquad \rescaling = \dfrac{1}{R^{n+1}} \end{displaymath} is multisymplectic on $N = (\mathbb{R}^{n+1} \setminus \{0\})$, invariant under the action $H \action N$, and restricts to the standard volume form on the unit sphere. \end{lemma} \begin{proof} Multisymplecticity follows from the closedness and non-degeneracy of $\Vol$ together with the fact that $\rescaling$ never vanishes. \\ The form is clearly $SO(n)$-invariant because $\rescaling$ depends only on $R$. The $H$-invariance then follows from the invariance along the Euler vector field: $$ \mathcal{L}_E (\rescaling~\Vol) = (\mathcal{L}_E \rescaling + (n+1) \rescaling ) \Vol $$ and $$ \mathcal{L}_E \rescaling = \sum_i x^i \dfrac{\partial}{\partial x^i} R^{-(n+1)} = -\dfrac{n+1}{2}\sum_i 2 (x^i)^2 R^{-(n+1)-2} =-(n+1) \rescaling ~. $$ Finally, $\eta$ restricts to the volume form on $S^n$ because $j^\ast \rescaling = 1$. \end{proof} The function $\rescaling \in C^\infty(\mathbb{R}^{n+1}\setminus\{0\})$ is precisely the scaling factor that makes the Euclidean volume invariant with respect to the extended group $SO(n+1)\times\mathbb{R}$. The problem of finding explicitly a \comoment \begin{displaymath} s \colon \mathfrak{h} \rightarrow L_{\infty} \left(\mathbb{R}^{n+1}\setminus\{0\},\eta = \rescaling \, \Vol\right) \end{displaymath} can be solved by exhibiting an $H$-invariant primitive of the rescaled volume $\eta$ and then resorting to Lemma \ref{lem:extexact}. \begin{lemma}\label{lem:uglyprimitive} The differential $(n+1)$-form $\eta$ admits an $H$-invariant potential $n$-form \begin{displaymath} \beta = (\hat{\varphi}~x^0)~dx^1\wedge\dots\wedge dx^n \end{displaymath} where $\hat{\varphi}\in C^\infty (\mathbb{R}^{n+1}\setminus\{0\})$ depends only on the cylindrical coordinates $(x^0,r)$ and is given by \begin{equation}\label{eq:uglyprimitive} \hat{\varphi}(x^0,r) = \begin{cases} \frac{1}{\left((x^0)^2 + r^2\right)^{\frac{n+1}{2}}} \left(x^0 (n+1) - r \arctan\left(\dfrac{x^0}{r}\right)\right) &\quad r \neq 0 \\ (n+1)\frac{1}{\vert x^0 \vert^n} &\quad r=0,\, x^0 > 0 \\ -(n+1)\frac{1}{\vert x^0 \vert^n} &\quad r=0,\, x^0 < 0 \end{cases} \end{equation} \end{lemma} \begin{proof} Let us start from the following ansatz $$\beta =\iota_{(x^0\partial_0)} \varphi(x^0,r) \eta $$ for a potential $n$-form of the rescaled volume $\eta$ as defined in Lemma \ref{lem:rescaledvolume}. At this point, $\varphi$ is an arbitrary smooth function depending only on the cylindrical parameters $(x^0,r)$.
\\ Since $x^0 \partial_0$ is the fundamental vector field of \begin{displaymath} \zeta = \begin{bmatrix} 1 & 0 \\ 0 & 0_n \end{bmatrix} \in \mathfrak{gl}(n+1,\mathbb{R}) , \end{displaymath} one gets \begin{displaymath} \mathcal{L}_{v_\xi} \beta = \left( \iota(\cancel{v_{[\xi,\zeta]}}) + \iota({v_\zeta}) \mathcal{L}_{v_\xi} \right) \varphi \, \eta = 0 \qquad \forall \xi \in \mathfrak{so}(n), \end{displaymath} because the $(n+1)$-form $\varphi \, \eta $ depends only on $(x^0,r)$, \ie $\beta$ is $SO(n)$-invariant. On the other hand, one has: \begin{displaymath} \begin{aligned} \mathcal{L}_E \beta &= \left( \cancel{\iota_{[\id,\zeta]}} + \iota_{x^0 \partial_0} \mathcal{L}_{E} \right) \varphi \, \rescaling \, \Vol = \\ &= \iota_{x^0 \partial_0} \left[ \left(\mathcal{L}_E \varphi \right)\, \rescaling \, \Vol + \varphi \cancel{\mathcal{L}_E\,\rescaling\Vol} \right] \end{aligned} \end{displaymath} and \begin{displaymath} \begin{aligned} \textrm{d} \beta &= \left( \dfrac{\partial \varphi}{\partial x^0}\,\rescaling\,x^0 + \varphi\, x^0 \, \dfrac{\partial \rescaling}{\partial x^0} + \varphi \rescaling \right) \Vol \\ &= \left( \partial_0 \varphi \, x^0 - (n+1)\dfrac{(x^0)^2}{(r^2 + (x^0)^2)} + \varphi \right)\, \rescaling\,\Vol ~. \end{aligned} \end{displaymath} Therefore, in order for $\beta$ to be an $H$-invariant primitive of $\eta$, the following conditions on $\varphi$ have to be met: \begin{displaymath} \begin{cases} \mathcal{L}_E \varphi = r\,\partial_r\,\varphi + x^0\,\partial_0\,\varphi = 0 \\ x^0\,\partial_0 \varphi - (n+1)\dfrac{(x^0)^2}{(r^2 + (x^0)^2)} + \varphi = 1 \end{cases} \end{displaymath} The general solution of this system reads: \begin{displaymath} \varphi(x^0,r) = - \dfrac{r}{x^0}\arctan\left(\dfrac{x^0}{r}\right)(n+1) + n + 2 \end{displaymath} which is a smooth function defined on $\left\{ x \in \mathbb{R}^{n+1} ~\vert~ x^0 \neq 0, r \neq 0\right\}$. Recalling that $$\lim_{y\to 0}\dfrac{\arctan(y)}{y}=1 \qquad \lim_{y\to \infty}\dfrac{\arctan(y)}{y}=0 ~,$$ one can see that the limits of $\varphi$ at all the points where it is not defined, except the origin $(x^0,r)=(0,0)$, are finite. Hence, considering the unique smooth extension $\hat{\varphi}\in C^\infty(\mathbb{R}^{n+1}\setminus\{0\})$ of $\varphi$, given explicitly by equation (\ref{eq:uglyprimitive}), we conclude that \begin{displaymath} \beta = (x^0~\hat{\varphi})~dx^1\wedge\dots\wedge dx^n \end{displaymath} is the sought $H$-invariant primitive. \end{proof} \begin{proposition}\label{Prop:SonSn} A \comoment for the action $SO(n) \action \left( S^{n}, \omega\right)$, for $n \geq 2$, is given by \begin{displaymath} \morphism{f_i} {\Lambda^i\mathfrak{so}(n)} {\Omega^{n-1-i}(S^n)} {q} {-j^\ast\iota(v_q)(\iota_E \beta)} ~, \end{displaymath} where $\beta$ is the primitive given in Lemma \ref{lem:uglyprimitive}. \end{proposition} \begin{proof} The statement follows directly from Corollary \ref{cor:inducedmachinery} upon considering \begin{align*} (N,\eta) = (\mathbb{R}^{n+1}\setminus\{0\},\rescaling\Vol ) \qquad\quad &(M,\omega) = (S^n, j^\ast \iota_E \Vol) \\ p= \id_{(n+1)} \in Z_1(\mathfrak{h}) \qquad\quad & H = SO(n)\times \mathbb{R} = H_E \supset SO(n) \end{align*} and noting that an explicit \comoment for the $H$-action is given by Lemma \ref{lem:extexact} by employing the primitive constructed in Lemma \ref{lem:uglyprimitive}. \end{proof}
\begin{remark} Proposition \ref{Prop:SonSn} extends to spheres of arbitrary dimension a similar result given in \cite[Paragraph 8.3.2]{Callies2016} up to dimension $5$. \end{remark} \section{Transitive multisymplectic group actions on spheres}\label{sectra} Recall first an important property of transitive compact group actions: \begin{proposition}[Isotropies of transitive compact group actions \emph{(see e.g.\ \cite{encyclopedia:Isotropy})}] Let $G\action M$ be a smooth transitive action of a compact group and let $H\subset G$ be an isotropy subgroup. Then $H$ is a closed subgroup, all isotropy subgroups are isomorphic to $H$, \ie \begin{displaymath} G_x:= \left\lbrace g \in G ~\vert~ g.x = x \right\rbrace\cong H \qquad \forall x \in M \end{displaymath} and there exists a canonical diffeomorphism $G/H \cong M$. \end{proposition} \begin{remark}\label{Rem:ActionasBundle} For brevity, we will denote any transitive action of a compact group $G$ on the manifold $M$ with isotropy subgroup $H$ simply by the corresponding canonical isomorphism $G/H = M$. Recall also that there exists a corresponding $H$-principal bundle over $M$ given by fixing a certain point $p\in M$ and considering the corresponding orbit map \begin{displaymath} \begin{tikzcd}[column sep= small,row sep=0ex] H \cong G_p \ar[r,hook,"i"] & G \ar[r,two heads, "\vartheta_p"] & M \end{tikzcd} ~. \end{displaymath} \end{remark} Focussing on our case, all effective transitive compact connected group actions on spheres have been classified (\cf \cite{MR2371700} for an overview of the results): \begin{proposition}[Classification of transitive compact group actions on spheres, \cite{MR0008817, MR0034768,MR0029915} ]\label{Prop:ClassificActionSpheres} The only compact groups $G$ acting transitively and effectively on spheres, with isotropy group $H$ and $G/H$ the corresponding sphere, are given by the following list: \begin{itemize} \item $SO(n)/SO(n-1)=S^{n-1}$ \item $SU(n)/SU(n-1)=U(n)/U(n-1)=S^{2n-1}$ \item $Sp(n)Sp(1)/Sp(n-1)Sp(1)=Sp(n)U(1)/Sp(n-1)U(1)=Sp(n)/Sp(n-1)=S^{4n-1}$ \item $G_2/SU(3)=S^6$ \item $Spin(7)/G_2=S^7$ \item $Spin(9)/Spin(7)=S^{15}$. \end{itemize} \end{proposition} Observe that effectiveness is not a particularly stringent requirement, since the action of any group $G$ on $M$ descends to an effective action of the quotient group $G/N$, where $\displaystyle N :=\bigcap_{x\in M} G_x$ is a closed normal subgroup. The goal of this section is to prove the following theorem: \begin{theorem}\label{thm:surprise} Let $G$ be a compact Lie group acting multisymplectically, transitively and effectively on $S^n$ equipped with the standard volume form. Then the action admits a \comoment if and only if $n$ is even. \end{theorem} \begin{proof} According to the classification given by Proposition \ref{Prop:ClassificActionSpheres}, it will suffice to prove the following statements: \begin{enumerate} \item The action of $SU(n)$ on $S^{2n-1}$ does not admit a \comoment. As $SU(n)\subset U(n)\subset SO(2n)$, from this we automatically get that $U(n)$ and $SO(2n)$ do not admit a \comoment when acting on $S^{2n-1}$. Moreover, as $SU(4)\cong Spin(6)\subset Spin(7)$, this implies that $Spin(7)$ does not admit a \comoment when acting on $S^7$. \item The action of $Sp(n)$ on $S^{4n-1}$ does not admit a \comoment. Hence, neither $Sp(n)U(1)$ nor $Sp(n)Sp(1)$ do. \item $Spin(9)$ does not admit a \comoment when acting on $S^{15}$. \item $SO(2n+1)$ has a \comoment when acting on $S^{2n}$.
As $G_2\subset SO(7)$, this implies that $G_2$ admits a \comoment when acting on $S^6$. \end{enumerate} Here we have employed the standard inclusions and isomorphisms coming from the well-known classification of compact connected Lie groups (see for example \cite{fulton2013representation,knapp2013lie}). The theorem can therefore be proved by resorting to Lemma \ref{lem:core}, once the following Proposition \ref{prop:stiefel} and Proposition \ref{prop:annoying} are established. \end{proof} \begin{proposition}\label{prop:stiefel} Let $\omega_n$ be the volume form of $S^n$, $N$ the north pole and consider the orbit map $\vartheta_N$ of $N$ for a certain group acting on the sphere. \begin{itemize} \item Let $\vartheta_N:SU(n)\to S^{2n-1}$. Then $\vartheta_N^*[\omega_{2n-1}]\neq 0$. \item Let $\vartheta_N:Sp(n)\to S^{4n-1}$. Then $\vartheta_N^*[\omega_{4n-1}]\neq 0$. \item Let $\vartheta_N:Spin(9)\to S^{15}$. Then $\vartheta_N^*[\omega_{15}]\neq 0$. \end{itemize} \end{proposition} \begin{proof} Starting from the first case, consider the principal bundle \begin{displaymath} \begin{tikzcd}[row sep=0ex] SU(n-1) \ar[r,hook,"i"] & SU(n) \ar[r,two heads, "\vartheta_N"] & S^{2n-1} \end{tikzcd} \end{displaymath} corresponding to the action $SU(n)\action S^{2n-1}$ with respect to the orbit of the north pole $N$, as introduced in Remark \ref{Rem:ActionasBundle}. This bundle has a $(2n-2)$-connected base manifold, hence the pair $(SU(n),SU(n-1))$ is $(2n-2)$-connected and the pull-back maps $i^\ast: H^k(SU(n))\to H^k(SU(n-1))$ are isomorphisms for $k\leq 2n-2$ (see \cite{MR1867354} for extra details). The cohomology of $SU(n)$ is completely described by its generators, namely one has \begin{displaymath} H^\bullet(SU(n),\mathbb{R})\simeq \Lambda_{\mathbb{R}}[u_3,u_5,\dots,u_{2n-1}] \end{displaymath} where the right-hand side denotes the exterior algebra generated by elements $u_k$ in odd degree $k$ (\cf \cite[Corollary 4D.3]{MR1867354}). In particular, one can see that all generators of $H^\bullet(SU(n-1))$ come from the restriction of the generators of $H^\bullet(SU(n))$, \ie are images under $i^\ast$. This means that the Leray--Hirsch theorem \cite[Theorem 4D.1]{MR1867354} can be applied, hence the following isomorphism holds: \begin{displaymath} H^\bullet(S^{2n-1})\otimes H^\bullet ( SU(n-1)) \xrightarrow{\sim} H^\bullet(SU(n)) ~. \end{displaymath} Inserting the explicit values for the real cohomology of the higher dimensional sphere, the left-hand side in degree $(2n-1)$ results in \begin{displaymath} \mathclap{ \left(H^{(2n-1)}(S^{2n-1})\otimes H^0(SU(n-1))\right) \oplus \left(H^0(S^{2n-1})\otimes H^{2n-1}(SU(n-1))\right) \cong H^{2n-1}(S^{2n-1})} \end{displaymath} and the Leray--Hirsch isomorphism restricts in this degree to the pullback $\vartheta_N^\ast$ \begin{displaymath} H^{2n-1}(S^{2n-1}) \xrightarrow{\vartheta_N^\ast} H^{2n-1}(SU(n)) ~. \end{displaymath} Since the volume class $[\omega_{2n-1}]$ is the degree-$(2n-1)$ generator of the cohomology of $S^{2n-1}$, this means that $\vartheta_N^*[\omega_{2n-1}]$ is a generator of the cohomology of $SU(n)$ and thus non-zero in $H^{2n-1}(SU(n))$. The same reasoning applies almost verbatim to the second case. The expression for the generators of $H^\bullet(Sp(n))$ can again be found in \cite[Corollary 4D.3]{MR1867354}.
Regarding the third case, one has only to observe that $Spin(n)$ is the universal covering of $SO(n)$ and, in particular, the two groups are locally isomorphic; hence the real cohomologies of $Spin(n)$ and $SO(n)$ are isomorphic (see \cite[\textsection 11.1]{Borel1955}). The cohomology groups can be explicitly given through the set of generators \begin{equation}\label{Eq:SOn-cohomology} H^\bullet(Spin(k)) \simeq H^\bullet(SO(k)) \simeq \begin{cases} \Lambda_{\mathbb{R}}[a_3,a_5,\dots,a_{4n-1}] &~ k = 2n+1 \\ \Lambda_{\mathbb{R}}[a_3,a_5,\dots,a_{4n-1},a'_{2n+1}] &~ k = 2n + 2 \end{cases} \end{equation} where, as before, subscripts denote degrees (see, for instance, \cite[Theorem 3D.4]{MR1867354}). As before, the pair $(Spin(9),Spin(7))$ is $14$-connected and the maps $H^i(Spin(9))\to H^i(Spin(7))$ are isomorphisms for $i\leq 13$. Hence the Leray--Hirsch theorem applies and $\vartheta_N^\ast$ maps $[\omega_{15}]$ to the generator $a_{15}\in H^{15}(Spin(9))$, which is necessarily non-zero. \end{proof} In the case of $SO(2n+1)$, the Leray--Hirsch theorem cannot be applied, since it is clear from equation \eqref{Eq:SOn-cohomology} that $SO(2n)$ has a class in degree $2n-1$ which does not come from any class of $SO(2n+1)$. In fact we have the following: \begin{proposition}\label{prop:annoying} Let $\omega_{2n}$ be the volume form of $S^{2n}$ and $N$ the north pole. Let $\vartheta_N:SO(2n+1)\to S^{2n}$. Then $\vartheta_N^*[\omega_{2n}]= 0$. \end{proposition} \begin{proof} Let $i:SO(2n)\hookrightarrow SO(2n+1)$ be the inclusion. According to equation \eqref{Eq:SOn-cohomology}, the cohomologies of $SO(2n+1)$ and $SO(2n)$ are isomorphic up to degree $2n-2$, since their generators correspond through $i^*$ up to that degree. Nevertheless, $i^*:H^{2n}(SO(2n+1))\to H^{2n}(SO(2n))$ is also an isomorphism: the first new generator is $a'_{2n-1}\in H^{2n-1}(SO(2n))$, and it cannot contribute any element in degree $2n$ since the lowest degree generator has degree $3$. \\ Observe then that the class $[i^*\vartheta_N^*\omega_{2n}]$ is the obstruction to the existence of a \momap for the $SO(2n)$-action on $S^{2n}$. Since this action is not transitive, we know from Proposition \ref{prop:intransitive} that it admits a \comoment, \ie $[i^*\vartheta_N^*\omega_{2n}]=0\in H^{2n}(SO(2n))$. But as $i^*$ is an isomorphism, this implies that $[\vartheta_N^*\omega_{2n}]=0\in H^{2n}(SO(2n+1))$. \end{proof}
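The following low-dimensional instances, which follow immediately from Lemma \ref{lem:core} together with the computations above, may help to illustrate the dichotomy of Theorem \ref{thm:surprise}.
\begin{example} For $n=3$, the transitive action of $SU(2)$ on $S^{3}$ has trivial isotropy, so the orbit map $\vartheta_N\colon SU(2)\to S^3$ is a diffeomorphism; hence $\vartheta_N^*[\omega_3]$ generates $H^3(SU(2))\cong\mathbb{R}$ and no \comoment exists. For $n=2$, the transitive action of $SO(3)$ on $S^{2}$ satisfies $H^2(SO(3))=0$ by equation \eqref{Eq:SOn-cohomology}, so the obstruction $\vartheta_N^*[\omega_2]$ vanishes for purely degree-theoretic reasons and a \comoment exists. \end{example}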
\subsection{Explicit \comoment for $SO(n+1)$ on $S^{n}$}\label{subsectra} Giving an explicit expression for a \comoment of $SO(n+1) \action S^{n}$ requires finding iteratively, for $k \in \{1,\dots,n-1\}$ and for all $p \in \Lambda^k \mathfrak{so}(n+1)$, a primitive, denoted by $f_k(p)$, of the closed differential $(n-k)$-form \begin{equation}\label{eq:comomentasprimitive} \mu_k (p) = -f_{k-1} (\partial p) - \varsigma(k) \iota(v_p) \omega \end{equation} with $f_0 = 0$ and satisfying the following constraint \begin{equation} \label{eq:comomentconstraints} f_n(\partial p ) = - \varsigma(k+1) \iota_{v_p} \omega ~. \end{equation} As $H^0(S^n)$ and $H^{n}(S^n)$ are the only non-trivial cohomology groups, it is always possible to find primitives of $\mu_k (p)$. The only thing that could fail, and actually fails when $n$ is odd, is the fulfilment of condition \eqref{eq:comomentconstraints}. In the latter case, it is however possible to consider an extension of $\mathfrak g$ to a suitable Lie-$n$ algebra and consider \emph{Lie-$n$ \momaps} instead of our notion of \comoments. (See \cite{Callies2016} or \cite{Mammadova2019} for the explicit construction in the $2$-plectic case. The latter applies to the action $SO(4)\action S^3$ in the situation we are considering here.) \vspace{1em} Instead of dealing with the analytical problem of finding explicit potentials for the form $\mu_k (p)$, let us translate the problem into a more algebraic one, focusing on the particular structure of the Chevalley--Eilenberg complex of $\mathfrak{so}(n)$. \\ In general, it is fairly easy to express the action of a \comoment on boundaries: \begin{lemma}[\Momaps on boundaries] Let $v:\mathfrak{g}\to \mathfrak{X}(M)$ be a multisymplectic infinitesimal action on $(M,\omega)$.\\ Let $F^k: B_k(\mathfrak{g}) \to \Lambda^{k+1}\mathfrak{g}$ be such that $\partial \circ F^k = \id_{B_k(\mathfrak{g})}$, \ie $F^k$ gives a choice of primitive for every $k$-boundary of $\mathfrak{g}$.\\ Then, the function $ f_k \colon B_k(\mathfrak{g}) \to\Omega^{n-k}(M)$ defined as \begin{displaymath} f_k(p) = -\varsigma(k+1) \iota(v_{F^k(p)}) \omega \end{displaymath} satisfies, for every boundary $p$, equation (\ref{eq:fk_hcmm}) defining the $k$-th component of a \comoment of the infinitesimal action. \end{lemma} \begin{proof} It is a straightforward application of Lemma \ref{lemma:multicartan} together with the multisymplecticity of the infinitesimal action: \begin{align*} d f_k (p) & = -\varsigma(k+1) d \iota(v_{F^k(p)}) \omega = \\ &= -(-1)^{k+1}\varsigma(k+1) \iota(v_{\partial F^k(p)}) \omega =\\ & = \cancel{- f_{k-1}(\partial p)} - \varsigma(k) \iota_{v_p} \omega. \end{align*} \end{proof} \begin{remark} Note that the constraint given by equation (\ref{eq:comomentconstraints}) is precisely the requirement that the action on boundaries of the highest component $f_n$ of the \comoment $(f)$ is independent of the choice of $F^n$. \end{remark} In other words, finding the action of the component $f_k$ of the \comoment $(f)$ on boundaries is tantamount to finding a function $F^k: B_k(\mathfrak{g}) \to \Lambda^{k+1}\mathfrak{g}$ mapping a boundary $p$ to a specific primitive $q$.
\\ Note that this is nothing more than replacing the problem of finding a potential of an exact differential form with that of finding a primitive of a boundary in the CE-complex. It follows from the previous lemma that the $k$-th component of the \comoment is completely determined by its action on boundaries when the $k$-th Chevalley--Eilenberg homology group vanishes: \begin{corollary} Consider a Lie algebra with $H_k(\mathfrak{g})=0$ and fix a choice of primitive maps $F^k$ and $F^{k-1}$ as before. Then, the function $ f_k \colon \Lambda^k \mathfrak{g} \to\Omega^{n-k}(M)$ defined as \begin{displaymath} f_k(q) = \varsigma(k+1)\iota(v_{F^k(F^{k-1}(\partial q) -q)})\omega \end{displaymath} satisfies equation (\ref{eq:fk_hcmm}) for every chain $q \in \Lambda^k\mathfrak{g}$. \end{corollary} \begin{proof} Consider a function $f_{k-1}$ defined by means of $F^{k-1}$ according to the previous lemma. For every chain $q \in \Lambda^k\mathfrak{g}$ we get \begin{displaymath} \begin{split} - f_{k-1}(\partial q) -\varsigma(k)\iota_{v_q}\omega =&~ \varsigma(k)[\iota(v_{F^{k-1}(\partial q)}) - \iota(v_{q})] \omega= \\ =&~ \varsigma(k)\iota( v_r )\omega \end{split} \end{displaymath} where $r =(F^{k-1}(\partial q) - q)$ is a cycle, hence a boundary (as $H_k(\mathfrak{g})=0$). Again from Lemma \ref{lemma:multicartan} it follows that the right-hand side is equal to \begin{displaymath} \begin{split} \varsigma(k)\iota(v_{\partial F^k(r)}) \omega =&~ \varsigma(k)(-1)^{k+1} d \iota(v_{F^k(r)}) \omega = \\ =&~ d f_k(q) ~. \end{split} \end{displaymath} \end{proof} An explicit construction of a \comoment is generally more delicate in the presence of cycles that are not boundaries. Nevertheless, we know from equation \eqref{Eq:SOn-cohomology} that the first two homology groups of $\mathfrak{so}(n)$ are trivial; therefore it is easy to give the first two components of the \comoment. From the linearity of $f_k$, it is clear that we only need to give its action on the standard basis of the finite-dimensional vector space $\mathfrak{so}(n)$. \begin{remark}[Standard basis of $\mathfrak{so}(n)$] Recall that $\mathfrak{so}(n)$ is the Lie sub-algebra of $\mathfrak{gl}(n,\mathbb{R})$ consisting of all skew-symmetric square matrices. A basis can be constructed as follows: \begin{equation}\label{eq:standard-basis} \mathcal{B}\coloneqq \big\lbrace A_{a b} = (-1)^{1+a+b} \left( E_{a b} - E_{b a}\right) \quad \vert \quad 1\leq a<b\leq n \big\rbrace \end{equation} where $E_{a b}$ is the matrix whose entries are all zero except for the $(a,b)$ entry, which equals one. \\ The fundamental vector field of $A_{a b}$ associated to the linear action of $SO(n)$ on $\mathbb{R}^n$ reads as follows: \begin{displaymath} v_{A_{a b}}= \sum_{i,j}[A_{a b}]_{i j}x^j \partial_i = (-1)^{1+a+b}\left(x^a \partial_b - x^b \partial_a\right) \end{displaymath} \noindent Using such a basis, the structure constants read as follows: \begin{displaymath} \begin{aligned} [A_{a b}, A_{c d}] =& (-1)^{(b+c+1)}\delta_{b c} A_{a d} + (-1)^{(a+d+1)}\delta_{a d} A_{b c} +\\ & (-1)^{(d+b+1)}\delta_{d b} A_{a c} + (-1)^{(a+c+1)}\delta_{c a} A_{d b} \end{aligned} \end{displaymath} in particular: \begin{equation}\label{eq:reductionformula} [A_{k a}, A_{k b}] = A_{a b} \qquad \forall k \neq a,b ~. \end{equation} \end{remark} \paragraph{$f_1$ for any $SO(n)$.} Since all 1-chains in the CE complex are automatically cycles, $H_1(\mathfrak{so}(n))=0$ implies that all elements of $\mathfrak{so}(n)$ are boundaries.
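Although elementary, the reduction formula \eqref{eq:reductionformula} is the key computational input for what follows; the small sketch below (Python, with the ad hoc helper \texttt{A(a, b, n)} building the matrix of \eqref{eq:standard-basis}) double-checks it numerically for $\mathfrak{so}(5)$:
\begin{verbatim}
import numpy as np
from itertools import permutations

def A(a, b, n):
    # standard basis element of so(n):
    # A_{ab} = (-1)^(1+a+b) * (E_{ab} - E_{ba}),  1-based indices, a != b
    M = np.zeros((n, n))
    M[a - 1, b - 1], M[b - 1, a - 1] = 1.0, -1.0
    return (-1) ** (1 + a + b) * M

def bracket(X, Y):
    return X @ Y - Y @ X

n = 5
for k, a, b in permutations(range(1, n + 1), 3):
    # reduction formula: [A_{ka}, A_{kb}] = A_{ab} whenever k, a, b are distinct
    assert np.allclose(bracket(A(k, a, n), A(k, b, n)), A(a, b, n))
print("reduction formula verified for so({})".format(n))
\end{verbatim}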
\\ Formula \eqref{eq:reductionformula} suggests a natural choice for the function $F^1$ when acting on elements of the standard basis: \begin{equation}\label{eq:primitiveMapF} F^1(A_{a b}) = -\sum_{k=1}^n \dfrac{1}{n-2} A_{k a}\wedge A_{k b} ~. \end{equation} Therefore the first component of the \comoment is given by \begin{displaymath} \begin{split} f_1 (A_{a b}) =&~ - \iota(v_{F^1(A_{a b})}) \omega = \\ =&~ \dfrac{1}{n-2}\sum_{k=1}^n \iota(v_{A_{k b}})\iota(v_{A_{k a}})\omega~. \end{split} \end{displaymath} \begin{example} In the three-dimensional case, denoting the three generators of $\mathfrak{so}(3)$ as $l_x,l_y,l_z$, namely \begin{displaymath} \mathclap{ l_x = A_{1\, 2} = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \quad l_y = A_{1\, 3} = \begin{bmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix} \quad l_z = A_{2\, 3} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{bmatrix} ~, } \end{displaymath} one gets \begin{displaymath} F^1 (l_i) = - \dfrac{1}{2}\sum_{j,k=1}^3\epsilon_{i j k} l_j \wedge l_k, \end{displaymath} where $\epsilon_{i j k}$ is the Levi-Civita symbol, and \begin{displaymath} \begin{split} f_1(l_x) =&~ \iota(v_{l_y \wedge l_z}) \omega = \\ =&~ j^\ast \iota(E \wedge v_{l_y}\wedge v_{l_z}) dx^{123}. \end{split} \end{displaymath} \end{example} \paragraph{$f_2$ for any $SO(n)$.} In this case there are two subsets of generators of $\Lambda^2 \mathfrak{so}(n)$ to consider: \begin{displaymath} \begin{cases} p = A_{a b} \wedge A_{c d} \xmapsto{\,\partial\,} 0 & \quad\text{for } a,b,c,d \text{ different} \\ q = A_{j a} \wedge A_{j b} \xmapsto{\,\partial\,} -A_{a b} & \quad\text{for } j,a,b \text{ different} \end{cases} \end{displaymath} The elements of the first set are cycles and, in fact, boundaries: a primitive can be given as follows \begin{displaymath} F^2 (A_{a b} \wedge A_{c d}) = \dfrac{n-2}{4}\left( F^1(A_{a b}) \wedge A_{c d} - A_{a b}\wedge F^1(A_{c d}) \right) \end{displaymath} where $F^1$ is given again by equation \eqref{eq:primitiveMapF}. In the second case, we need to find a primitive of \begin{displaymath} \begin{aligned} F^1(\partial q) -q &= -F^1(A_{a b}) - A_{j a} \wedge A_{j b} = \\ &= \dfrac{1}{n-2}\sum_{k=1}^n( A_{k a}\wedge A_{k b} - A_{j a} \wedge A_{j b}) =\\ &=\dfrac{1}{n-2}\sum_{k=1}^n \partial (A_{k a}\wedge A_{j b}\wedge A_{k j}) =\\ &=\partial \left(\dfrac{1}{n-2}\sum_{k=1}^n (A_{k a}\wedge A_{j b}\wedge A_{k j}) \right) \end{aligned} \end{displaymath} The last equality suggests the following choice \begin{displaymath} F^2(F^1(\partial q) -q) = \left(\dfrac{1}{n-2}\sum_{k=1}^n (A_{k a}\wedge A_{j b}\wedge A_{k j}) \right). \end{displaymath} Finally, one gets \begin{displaymath} \begin{aligned} f_2(A_{a b} \wedge A_{c d}) &= \dfrac{n-2}{4}\left( \iota(v_{F^1(A_{a b}) \wedge A_{c d}}) - \iota(v_{A_{a b}\wedge F^1(A_{c d})})\right)\omega\\ f_2(A_{j a} \wedge A_{j b}) &= \dfrac{-1}{n-2} \sum_{k=1}^n \left(\iota(v_{A_{k a}\wedge A_{j b}\wedge A_{k j}})\right)\omega. \end{aligned} \end{displaymath} \begin{remark} The construction above does not extend verbatim to the higher components $f_k$ with $k\geq 3$: by Theorem \ref{thm:son-cohomology}, the group $H^3(\mathfrak{so}(n))$ never vanishes, so the argument used for $f_1$ and $f_2$ is no longer available and the construction of the higher components remains open. \end{remark} \subsection{Explicit \comoment for $G_2$ on $(S^6,\phi)$} We finish by providing an example of a \comoment for a non-volume multisymplectic form on a sphere.\\ Recall that $G_2$ is a subgroup of $SO(7)$ acting transitively and multisymplectically on $S^6$ with the standard volume.
Therefore, according to Theorem \ref{thm:surprise}, the action $G_2\action (S^6,\omega)$ admits a \comoment. This group can be explicitly defined as the subgroup of $GL(7,\mathbb{R})$ preserving the multisymplectic $3$-form \begin{equation} \phi =d x^{123}+ d x^{145}+ d x^{167}+ d x^{246}- d x^{257}- d x^{356}-d x^{347}, \end{equation} where~$x = (x^i)$ are the standard coordinates on~$\mathbb{R}^7$ and~$d x^{ijk} = d x^i \wedge d x^j \wedge d x^k$. (See \cite{MR1939543} for further remarks on $G_2$-homogeneous multisymplectic forms and \cite{MR2253159} for details on the $G_2$-manifold $S^6$.) Considering the multisymplectic structure $j^\ast\phi$ on $S^6$, where $j$ is the inclusion of the sphere in $\mathbb{R}^7$, instead of the standard volume, it is possible to give an explicit \comoment for the action of $G_2$: \begin{lemma} The action $G_2 \circlearrowright (S^6, j^\ast\phi)$ admits an equivariant \comoment given by \begin{displaymath} \morphism{f_k} {\Lambda^k\mathfrak{g}_2} {\Omega^{2-k}(S^6)} {q} {\dfrac{(-1)^{k-1}}{3}~j^\ast\left(\iota_{v_q}~\iota_E~ \phi\right)} ~, \end{displaymath} for $k=1,2$, where $E=x^i \partial_i \in \mathfrak{X}(\R^7)$ is the Euler vector field. \end{lemma} \begin{proof} It follows from Lemma \ref{lem:extexact}, noting that $(\frac{1}{3} \iota_E \phi)$ is a $G_2$-invariant primitive of $\phi$ on $\mathbb{R}^7$. \end{proof} \ifstandalone \bibliographystyle{../../hep} \chapter{Graded multilinear algebra}\label{App:GradedMultilinearAlgebra} This chapter introduces some basic definitions of linear algebra on graded vector spaces and establishes the notation adopted in the body of the thesis. \\ Although this material can now be considered standard, it is useful to include it in order to fix unambiguously the several conventions related to graded objects. \\ Most of the content is drawn from the following sources: \cite{Manetti-website,Bandiera2016,Schatz2009,Reinhold2019,Doubek2007,Delgado2015,Fiorenza2006}. To clarify how certain conventions and sign rules emerge, we thought it worthwhile to arrive at the definition of graded vector space starting from the more abstract notion of \emph{category of graded objects}. However, the reader may decide to jump directly to section \ref{Section:GradedVectorSpaces}, where the category of graded vector spaces, the main setting of this thesis, is introduced. \section{Categorical prelude: categories of graded objects}\label{Section:GradedObjects} In this section some basic notions of category theory will be assumed; see \cite{Riehl2016} or \cite{MacLane1978} for an introductory exposition. Note in particular that all the categories considered are \emph{locally small}. \subsection{Basic definitions} Consider an arbitrary set $S$ and a category $\cat$. We call an \emph{$S$-graded object} any family of objects of $\cat$ indexed by $S$.
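In all that concerns this thesis, the relevant instance will be $S=\mathbb{Z}$ and $\cat=\Vect$; before giving the formal categorical definition, it may help to keep in mind the following toy picture (a purely illustrative Python sketch with made-up component dimensions), which records such a family as a dictionary indexed by degrees and implements the degree shift discussed in section \ref{sec:degreeshifts} below:
\begin{verbatim}
# a Z-graded vector space, recorded through the dimensions of its components
V = {-1: 2, 0: 3, 2: 1}   # V^{-1} = R^2, V^0 = R^3, V^2 = R, the rest are 0

def component(V, k):
    return V.get(k, 0)    # absent keys stand for the zero vector space

def shift(V, j):
    # degree shift [j]:  (V[j])^k = V^{k+j}
    return {k - j: d for k, d in V.items()}

assert component(shift(V, 2), 0) == component(V, 2)
\end{verbatim}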
\begin{definition}[Category of $S$-Graded $\cat$-objects]\label{Def:GradedObjects}\index{$S$-Graded $\cat$-objects} We define the \emph{Category of $S$-Graded $\cat$-objects} as the functor category \begin{displaymath} \cat^{S}:= \left\lbrace \hat{S}\to \cat \text{ functors} \right\rbrace \end{displaymath} where $\hat{S}$ is the set $S$ regarded as a discrete category (objects are the elements on the set $S$ and the only arrows are identities). \end{definition} % \begin{remark} Definition \ref{Def:GradedObjects} implies that a morphism $\phi$ between two graded object $A$ and $B$ in $\cat^S$ is a natural transformation \begin{displaymath} \begin{tikzcd} \left(\phi: A \xrightarrow[]{~\cdot~}\right) B & \equiv & S \arrow[r, bend left=50, "A"{name=U, above}] \arrow[r, bend right=50, "B"{name=D, below}] & \cat \arrow[shorten <=1pt,shorten >=1pt,Rightarrow, from=U, to=D, "\phi"] \end{tikzcd} \end{displaymath} \end{remark} % \begin{notation} A $S\times S$ graded $\cat$-object is also called \emph{S-bi-graded}. Such terminology could be iterated straightforwardly. \end{notation} % \begin{remark} Concretely, any object $A \in \cat^{S}$ is equivalent to a family $\{A_s\}_{s\in S}$ of objects in $\cat$ labelled by the elements of $S$. Similarly, any morphism $\phi\in \Hom_{\cat^S}(A,B)$ can be seen as a collection $\phi = \lbrace \phi_s : A_s \to B_s \rbrace_{s\in S}$ of morphisms in $\cat$ indexed by $S$. \\ Informally, both can be seen as functions from $S$ to, respectively, objects and morphisms of $\cat$. This definition is more correctly phrased in terms of functors since $ob(\cat)$ is not in general a set. \end{remark} % \begin{remark} Observe that $\hat{S}=\hat{S}^{op}$, therefore graded objects are in particular presheaves. \end{remark} % \begin{notation} When one specializes the category $\cat$ to be the category of sets (resp. $R$-modules, $\mathbb{K}$-vector spaces or $\mathbb{K}$-algebras; Being $R$ a ring and $\mathbb{K}$ a field), $S$-graded objects are called $S$-graded sets (resp. $S$-graded $R$-modules, $S$-graded $\mathbb{K}$-vector spaces or $S$-graded $\mathbb{K}$-algebras). \end{notation} Since $\cat^S$ is a category of functors, it inherits several properties from the categorical structure of $\cat$: \begin{proposition}\label{prop:limitsofFunctorCat} Consider two categories $\cat$ and $\cat[D]$ and denote by $[\cat,\cat[D]]$ the category of functors from $\cat$ to $\cat[D]$. \begin{enumerate} \item If $\cat[D]$ has limits or colimits of a certain shape, then so does $[\cat,\cat[D]]$. \item If $\cat$ is small and $\cat[D]$ is Cartesian closed and complete, then $[\cat,\cat[D]]$ is Cartesian closed. \item If $\cat[D]$ is pre-additive so is $[\cat,\cat[D]]$, in particular if $\cat[D]$ is Abelian so is $[\cat,\cat[D]]$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item Limits in $[\cat,\cat[D]]$ can be computed "point-wise" from the limits in $\cat[D]$. Note that if $\cat[D]$ is not complete, then can exist other limits in $[\cat,\cat[D]]$ that are not defined point-wise. (See \href{https://ncatlab.org/nlab/show/functor+category#properties}{\cite{nlab:functor_category}} for details). \item The sketch of the proof can be found in \href{https://ncatlab.org/nlab/show/cartesian+closed+category#exponentials_of_cartesian_closed_categories}{\cite{nlab:cartesian_closed_category}} . \item The $Ab$-enrichment property on $[\cat,\cat[D]]$, as well as the $ker$ and the $coker$ of a morphism, can be defined again point-wise from the pre-additive structure on $\cat[D]$. 
(The specific proof for $\mathbb{Z}$-graded objects in an Abelian category can be found in \href{https://stacks.math.columbia.edu/tag/09MF}{\cite[Lemma 12.16.2.]{stacks-project}}.) \end{enumerate} \end{proof} In practice, there are three main choices for the grading set $S$ that appear most frequently in the literature: the natural numbers $\mathbb{N}$, the integers $\mathbb{Z}$ and the group $\mathbb{Z}/2$ \footnote{A $\mathbb{Z}/2$-graded object it is also called a \emph{super-object}.}. Note that these three objects are not simply sets but they carry a canonical algebraic structures: $\mathbb{N}$ is a monoid, $\mathbb{Z}$ and $\mathbb{Z}/2$ are Abelian groups. In the following, we will see how the algebraic structure of $S$ reverberates in the properties of $\cat^S$. \subsection{Degree shifts}\label{sec:degreeshifts} Let be $\cat$ an arbitrary category and be $S$ a grading set. Observe that, given a $S$-graded object $A\in \cat^S$, exchanging the components associated to two fixed labels define a different graded object. More generally, an $S$-indexed family of objects can be re-indexed giving a different $S$-graded object. This operation can be encoded as a functor: \begin{definition}[Degree shift functor]\label{Def:ShiftFunctor}\index{Shift functor} Given a function $f:S\to S$, we define the \emph{degree shift pertaining to $f$} as the endofunctor $[f]:\cat^S \to \cat^S$ defined on a graded object $A$ as the precomposition of the associated functor with $\hat{f}$, \ie \begin{displaymath} [f](A) := A[f] = A \circ \hat{f} ~, \end{displaymath} diagrammatically denoted as \begin{displaymath} \begin{tikzcd} S \ar[r,"f"] \ar[dr,dashed,"A\lbrack f \rbrack"'] & S \ar[d,"A"] \\ & \cat \end{tikzcd} ~, \end{displaymath} where $\hat{f}$ is the function $f$ regarded as an endofunctor on $\hat{S}$. The action on a morphism $\phi:A\to B$ is given by the (horizontal) precomposition of the natural transformation $\phi$ with the functor $\hat{f}$. \end{definition} % Concretely, for any two graded objects $A$ and $B$ and for any graded morphism $\phi:A\to B$, one has that $(A[f])_i = A_{f(i)}$ and $\phi[f]_i =\phi_{f(i)}$, \ie the following diagram commutes in the category $\cat$ \begin{displaymath} \begin{tikzcd} (A{[f]})_i \ar[r,equal] \ar[d, "\phi{[f]}_i"'] & A_{f(i)} \ar[d, "\phi_{f(i)}"] \\ (B{[f]})_i \ar[r,equal] & B_{f(i)} \end{tikzcd} ~. \end{displaymath} \begin{remark}[Grading given by a monoid]\label{rem:GradingGivenByMonoid} A little bit more abstractly, one could say that there exists a monoid action\footnote{One must pay extra care when considering the collection of natural transformations since not it is not a set in general. However, when $\cat$ is small, $\cat^S$ is locally small \cite{Freyd1995}.} \begin{displaymath} (\Hom_{Set}(S,S),\circ) \xrightarrow{[~\cdot~]} (\Hom_{Cat}(\cat^S,\cat^S),\circ)) ~, \end{displaymath} \ie there is a sort of "action" of the monoid of endomorphism of $S$ on the category of $S$-graded $\cat$-object. \\ In particular, when $S$ is a monoid, with product denoted by $\cdot$, it is natural to consider the subset of (left) actions of $S$ on itself \begin{displaymath} \left\{ \left[ \morphism{L_s}{S}{S}{q}{s \cdot q} \right] \right\}_{s \in S} \subset (\Hom_{Set}(S,S),\circ) \end{displaymath} which is in a one-to-one correspondence to $S$. 
In other words, there is an action of $S$ on $\cat^S$, that is a monoid homomorphism \begin{displaymath} \morphism{[\cdot]} {S} {\Hom_{Cat}(\cat^S,\cat^S)} {s} { \left( \Almorphism{[s]} {\cat^S} {\cat^S} {(k\mapsto A_k)} {(k\mapsto A_{s\cdot k})} \right) } \end{displaymath} Being a monoid morphism guarantees that $[s'][s]=[s'\cdot s]$. \\ When the monoid is commutative, there is no need to distinguish between left or right action; hence we simply talk about $[s]$ as the \emph{shift by $s \in S$}. \\ When $S$ is also a group, $[L_s]$ functors are invertible with $[L_s]^{-1} = [L_{s^{-1}}]$. \end{remark} % \begin{remark} Note that the definition of shift can be seen as a particular instance of the action of $\End(\cat[S])=\Hom_{cat}(\cat[S],\cat[S])$ on $[\cat[S],\cat[D]]$ given by "precomposition" or "pullback". In other terms, \begin{displaymath} [\cdot]: \End(\cat[S]) =\Hom_{cat}(\cat[S],\cat[S]) \to \Hom_{cat}(\cat[S],\cat[D]) ~. \end{displaymath} \end{remark} % \begin{remark} Let be $A \in \cat^S$ a $S$-graded object. When $S$ is a group, it is trivial to observe that, taken all together, the shifts define a $S$-graded $\cat^S$ object \begin{displaymath} \left(k \mapsto A[k]\right) ~. \end{displaymath} Therefore, one could alternative regard the shift operation as a functor $[\bullet]: \cat^S \to (\cat^S)^S$. \end{remark} % \begin{notation}[Graded-morphisms vs homogeneous maps] Given any $k\in S$ and $A,B$ graded objects, it is customary to call any map in $\Hom_{\cat^S}(A,B[k])$ \emph{homogeneous map in degree $k$}\index{Homogeneous maps}. In particular \emph{morphisms in $\cat^S$} are \emph{degree $0$ homogeneous map}. \end{notation} \subsection{Graded objects in categories with (co)limits} Recall that, according to proposition \ref{prop:limitsofFunctorCat}, any limit\footnote{In the categorical sense, \ie universal cone over a certain diagram. See \cite[Chap. V]{MacLane1978}.} $Lim$ in $\cat$ can be easily extended to a limit in $\cat^S$ simply defining $(Lim(D))_i= Lim (D_i)$ for any diagram $D$ in the category of $S$-graded $\cat$-objects Any category $\cat$ with terminal object can be easily regarded has a full subcategory of $\cat^S$: \begin{lemma}\label{Lemma:CembedsinCS} Given $k\in S$, a category $\cat$ and a fixed object $T\in ob(\cat)$, the functor \begin{displaymath} G_k : \cat \to \cat^S ~, \end{displaymath} acting on objects as \begin{displaymath} (G_k(A))_i = \begin{cases} A & i=k \\ T & i\neq k \end{cases} \end{displaymath} and on morphisms as \begin{displaymath} \biggr(G_k\big(A\xrightarrow \phi B\big)\biggr)_i = \begin{cases} (A\xrightarrow \phi B\big) & i=k \\ (T \xrightarrow {id_T} T ) & i\neq k \end{cases}~, \end{displaymath} is faithful. If $T$ is terminal object in $\cat$ the functor $G_k$ is also full. \\ Given two indices $k,k'$ in $S$, one has $G_{k'} = [f] \circ G_k$ where $f$ is the unique invertible function on $S$ such that $f(k)= k', f(k')= k$ and $f(i)=i$ when $i\neq k,k'$. In a diagram, the following triangle of functors commutes \begin{displaymath} \begin{tikzcd} \cat \ar[r,"G_k"] \ar[dr,"G_{k'}"'] & \cat^S \ar[d,"{[f]}"] \\ & \cat^S \end{tikzcd} ~. \end{displaymath} \end{lemma} \begin{proof} The definition is well-posed, functoriality is enforced by the composition rule of morphisms of $\cat$. \\ Faithfulness follows easily from the definition; namely $G_k(f) = G_k(g)$ if and only if $f = g$ for any $f,g \in \Hom_{\cat}(A,B)$. 
\\ Fullness is a consequence of the universal property of the terminal object since the only possible morphism between $(G_k(A))_i = (G_{k'}(A))_i$ when $i\neq k,k'$ is the terminal arrow $T \to T$. \\ The last property follows from the definition of shift, for any $A \in \cat$ one has \begin{displaymath} \big( G_k(A)[f] \big)_i = \big( G_k(A)\big)_{f(i)} = \begin{cases} 0 & f(i) \neq k\\ A & f(i) = k \end{cases} = \begin{cases} 0 & i \neq k'\\ A & i = k' \end{cases} = \big( G_{k'}(A) \big)_i \end{displaymath} \end{proof} % \begin{remark}\label{Rem:TrivialExtension} Observe that, given $A\in\cat^S$ and $T\in\cat$ terminal object, any morphisms $f\in \Hom_{\cat}(A_k,B)$ can be uniquely extended to a morphism $f \in \Hom_{\cat^S}(A,G_k(B))$ in the trivial way \begin{displaymath} f_i = \begin{cases} A_i \xrightarrow ! T & i\neq k \\ A_k \xrightarrow f B & i = k \end{cases} ~. \end{displaymath} Uniqueness is a simple consequence of the universal property for $T$. \end{remark} It follows from Lemma \ref{Lemma:CembedsinCS} that any object in a category $\cat$ with terminal object can be seen as a "special" object in $\cat^S$. % When $\cat$ possesses all coproducts $\coprod$, and the terminal object $T$ is initial (\ie zero object), also the "converse" statement holds: % \begin{lemma}\label{lemma:totalfunctor} Given a category with all coproducts $\coprod$, consider the functor \begin{displaymath} \morphism{F}{\cat^S}{\cat}{A}{\coprod_{i\in S} A_i} ~, \end{displaymath} acting on morphisms as \begin{displaymath} \biggr(F\big(A\xrightarrow \phi B\big)\biggr) = \coprod_{i\in I} \varphi_i ~, \end{displaymath} where $\coprod_{i\in S} \varphi_i$ is the unique arrow provided by the universal property of coproducts. Namely, given two arrows $(f:A_1\to B_1)$ and $(g:A_2\to B_2)$ one could define their sum as the unique arrow $f\sqcup g $ provided by the universal property of the coproduct $\sqcup$: \begin{displaymath} \begin{tikzcd}[column sep=small, row sep =tiny,ampersand replacement=\&] A_1 \ar[dr]\ar[dd,"f"']\& \& A_2\ar[dl]\ar[dd,"g"] \\ \& A_1 \sqcup A_2 \ar[dd,"f\sqcup g ","!"'] \& \\ B_1 \ar[dr] \& \& B_2 \ar[dl] \\ \& B_1 \sqcup B_2 \& \\ \end{tikzcd} ~. \end{displaymath} When $T$ defining $G_k$ is an initial object, $F$ is a left inverse of $G_k$, \ie there exists a canonical natural isomorphism $$F \circ G_k \Rightarrow id_{\cat}~.$$ \end{lemma} \begin{proof} $F$ is well-defined since the universal property of $\coprod$ guarantees functoriality. The natural isomorphism is given on every component by the canonical isomorphism implieds by the universal property of coproducts and initial objects $$ A \coprod I \simeq A \simeq I \coprod A ~. $$ \end{proof} % \begin{remark} Note that the two functors $F$ and $G_k$ are not adjoint. \end{remark} % \begin{example}\label{Example:GradedSets} When $\cat$ is the category of sets with functions and $\coprod$ is the disjoint union of sets, a \emph{S-graded set} $A$ can be regarded as a pair: \begin{displaymath} \left( \tilde{A}= \coprod_{s\in S} A_s ~,~ \morphism{\vert\cdot\vert}{\tilde{A}}{S}{A_s\ni a}{s} \right) \end{displaymath} where $\tilde{A}=\{(a,s)\vert s\in S, a\in A_s\}$ is called the \emph{(total) set of homogeneous elements} and $\vert\cdot\vert$ is the grading function. \end{example} \begin{remark}\label{Remark:GradedObjectasCoproduct} The reasoning of example \ref{Example:GradedSets} can be extended to any \emph{concrete category}, \ie categories of structured sets. 
Recall that a concrete category can be formalized as category $\cat$ together with a faithful functor $J:\cat \to Set$, called \emph{forgetful functor}, which associates to any object $A \in \cat$ the underlying sets of points $\underline{A} \in Set$. The forgetful functor implies a functor from the category of $S$-graded $\cat$-objects to the category of $S$-graded sets simply given by post-composition with $J$, \begin{displaymath} \morphism{J'}{\cat^S}{Set^{S}}{A}{J\circ A} ~. \end{displaymath} Therefore, one can define a "grading" as a function on the set of homogeneous elements $\coprod_{s\in S}\underline{A_s}$. \\ When $\cat$ is a concrete category with coproducts $\coprod$ one could mimic the definition in example \ref{Example:GradedSets} identifying any graded object $A$ as a pair \begin{displaymath} \left( \coprod_{s\in S} A_s ~,~ \morphism{\vert\cdot\vert}{ \coprod_{s\in S} \underline{A_s}}{S}{\underline{A_s}\ni a}{s} \right) ~. \end{displaymath} Note that one can not define the grading as a function on $\underline{\coprod_{s\in S} A_s }$, or as morphism in $\cat^S$, since, in general, the subset of homogeneous elements $\coprod_{s\in S} \underline{A_s}$ is only a subset of $\underline{\coprod_{s\in S} A_s}$. When the two coincide the category is said to \emph{admit concrete coproducts} \cite[1.3.3]{Bourles2017}. % \end{remark} \begin{remark}[About the total functor]\label{rem:totaloplus} When $\cat$ is pre-additive, \ie finite products and coproducts coincide (see for example the category $\Vect$ of vector spaces in section \ref{Section:GradedVectorSpaces}), it is customary to denote coproducts as $\oplus$ and the functor $F$ of lemma \ref{lemma:totalfunctor} as $(-)^\oplus$. Given a graded object $A$, $A^\oplus$ is sometimes called the \emph{total object} or \emph{total space}. \end{remark} \subsection{Monoidal structure}\label{Section:CategoricalGradedMonoidalStructure} In simple terms, a monoidal category is a category equipped with a notion of "tensor product" (see \cite{MacLane1978} or also \cite{nlab:monoidal_category}). Most of the subtleties that can be encountered when dealing with graded spaces derive from consistently using the monoidal structure. The keypoint is that, when certain conditions are met, a category of graded objects $\cat^S$ inherit the monoidal structure from $\cat$. Observe first that that the Cartesian structure $\times$ on Cat and Set implies that there exists a canonical functor \begin{displaymath} \morphism{\times} {\cat^S \times \cat^S} {(\cat\times\cat)^{S\times S}} {((k\mapsto A_k),(s\mapsto B_s))} {((k,s)\mapsto A_k\times B_s)} ~. \end{displaymath} When $\cat$ is equipped with the monoidal structure $\otimes$, the postcomposition of the previous functor with $\otimes$ yields a functor from pairs of $S$-graded objects to $S$-bigraded objects: \begin{displaymath} \morphism{(\otimes) \circ (\times)} {\cat^S \times \cat^S} {(\cat)^{S\times S}} {((k\mapsto A_k),(s\mapsto B_s))} {((k,s)\mapsto A_k\otimes B_s)} ~. \end{displaymath} In the case that the grading set $S$ is equipped with a monoid product $\cdot$, and given a monoidal structure $\oplus$ on $\cat$ (possibly different from $\otimes$) one can also introduce the canonical functor \begin{equation}\label{eq:totalbigradedcomplex} \morphism{\tot} {\cat^{S\times S}} {\cat^S} {((k,s)\mapsto A_{(k,s)})} {(g\mapsto \bigoplus_{s\cdot k =g}A_{s,k})} ~. 
\end{equation} % In general, when two suitable compatible monoidal structures occus, it is possible to introduce a canonical monoidal structure on the category of graded objects. Namely, one can consider the consecutive composition of the three previous functors: % \begin{proposition}\label{Prop:GradedMonoidalStructure} Given a monoidal structure $(\cdot,1)$ on $S$ and be $\cat$ a "bimonoidal" \footnote{A \emph{Rig-category} \cite{nlab:rig_category} is a category with two monoidal structures $(\cat,\oplus,\mathbb{0})$ and $(\cat,\otimes,\mathbb{1})$, where the first one is symmetric, together with left and right distributivity natural isomorphisms \begin{align*} d_\ell: A\otimes(B\oplus C) \to (A \otimes B) \oplus (A \otimes C) \\ d_r: (A\oplus B) \otimes C \to (A \otimes C) \oplus (B \otimes C) \end{align*} and absorption/annihilation isomorphisms \begin{align*} a_\ell: A\otimes \mathbb{0} \to \mathbb{0} \\ a_r : \mathbb{0} \otimes A \to \mathbb{0} \end{align*} satisfying a set of coherence laws (see for example \cite{Laplaza1972}). } category $(\cat,\oplus,\mathbb{0},\otimes,\mathbb{1})$, the endofunctor $ \tilde{\otimes} = (\tot)\circ(\otimes)\circ(\times): \cat^S\times \cat^S \to \cat^S$, defined on objects as \begin{displaymath} (A \,\tilde{\otimes}\, B)_s = \bigoplus_{g h =s} A_g \otimes B_h ~, \end{displaymath} yields a monoidal structure on $\cat^S$. \end{proposition} \begin{proof} Defining the unit object in $\cat^S$ as \begin{displaymath} \begin{tikzcd}[ column sep=2em, row sep=-1ex, ] \tilde{\mathbb{1}}\colon &[-3em] S \arrow[r] & \cat \\ & 1 \arrow[r,mapsto] & \mathbb{1} \\ & s\neq 1 \arrow[r,mapsto] & \mathbb{0} \end{tikzcd} \end{displaymath} the definition of the associator, left unitor and right unitor of $\tilde{\otimes}$ follows straightforwardly from the their counterparts related to $\otimes$ on each degree. \end{proof} % \begin{remark}\label{Rem:CanonicalInducedBraiding} In the case that $\cat$ is equipped with a braiding with respect to $\otimes$ (\ie $\cat$ is a "braided monoidal" category), there is a natural induced braiding pertaining to $\tilde{\otimes}$, \begin{displaymath} (B_{V,W})_n = \bigoplus_{k+\ell = n} B_{V_k,W_\ell} : (V\otimes W)_n \to (W \otimes V)_n ~, \end{displaymath} where $B_{V_k,W_\ell}$ is the braiding provided on $\cat$ and $(V,W)$ is an arbitrary pair of $S$-graded $\cat$-objects. \end{remark} % \begin{remark}\label{Rem:AnticipazionediKoszul} When the grading set is a ring $(S,+,\cdot)$, there are two possible monoidal structures to consider when constructing $\tilde{\otimes}$. In the case that $S=\mathbb{Z}$, it is customary to choose the sum operation as the monoid structure defining $\tilde{\otimes}$ while the product is used to twist the induced braiding with an extra sign. (Compare with the so-called "Koszul convention" in section \ref{sec:bloodyKoszulConvention}). \end{remark} \section{The category of graded vector spaces}\label{Section:GradedVectorSpaces} In what concern this thesis, we are only interested in the case where $S=\mathbb{Z}$ and $\cat=\Vect$ is the category of vector spaces over the field of real numbers $\mathbb{R}$. Most of what follows could be extended to vector spaces over a generic field $\mathbb{K}$ of characteristic $0$ and $R$-modules with a relatively small effort. % \begin{definition}[Graded vector spaces]\label{Def:GradedVectorSpaces} We call \emph{graded vector space} any object in the $\mathbb{Z}$-graded vector spaces category $\Vect^{\mathbb{Z}}$. 
In other words, a graded vector space is a collection $\{V^k\}_{k\in\mathbb{Z}}$ of vector spaces over $\mathbb{R}$ indexed by $\mathbb{Z}$. A morphism of graded vector spaces $V \to W$ is a family of linear maps $\{f^k : V^k \to W^k \}_{k\in \mathbb{Z}}$. \end{definition} % \begin{notation} Notice that, when dealing with graded vector spaces, we slightly changed the notation introduced in section \ref{Section:GradedObjects}. Namely, we denote the degree $k$ sector of a graded vector space as $V^k$, \ie we put a superscript $k$ instead of a subscript. This choice is in accordance with the "cohomological convention" (see section \ref{sec:HomologicalAlgebrasConventions}) employed throughout the text. \end{notation} % \begin{remark}[Categorical properties of $\Vect$]\label{rem:Vectpdvs} Recall that $\Vect$ is an extremely rich category. we will be interested in the following properties: \begin{itemize} \item $\Vect$ is a complete and cocomplete category. In particular the trivial 0-dimensional vector space $\mathbb{0}$ is a \emph{zero object}, \ie it is both initial and terminal: \begin{displaymath} \begin{tikzcd}[row sep =tiny] V \ar[r,two heads,"!"] & \mathbb{0} \ar[r,hook,"!"]& W\\[-.7em] v \ar[r,mapsto] & 0 \ar[r,mapsto] & 0_W \end{tikzcd} ~. \end{displaymath} % $\Vect$ admits products, called \emph{direct products}, for any set of indices $J$, given by: \begin{displaymath} \mathclap{ \prod_{j\in J} W_j = \left( \underbrace{\dots\times W_j \times \dots}_{\text{Set-theoretic Cartesian product}}, \underbrace{ \begin{aligned} (\omega_j)_{j\in J} + (\omega_j')_{j'\in J} &= (\omega_j + \omega_{j'})_{j\in J} \\ \lambda(\omega_j)_{j\in J} &= (\lambda \omega_j)_{j \in J} \end{aligned} }_{\text{index-wise \\linear structure}} \right) } \end{displaymath} with surjective "projectors": \begin{displaymath} \morphism{{\pi_j}} {\displaystyle\prod_{j\in J} W_j } {W_j} {(\omega_j)_{j\in J}} {\omega_j} ~. \end{displaymath} % $\Vect$ admits infinite coproducts, called \emph{direct sums}, given by: \begin{displaymath} \bigoplus_{j\in J} W_j = \left\lbrace\left. (\omega_j)_{j\in J} \in \prod_{j \in J} W_j \right\vert \quad\omega_j \neq 0_{W_j} \quad\parbox{9em}{only for a finite\\ numbers of indices $j$} \right\rbrace \end{displaymath} with injective "injectors": \begin{displaymath} \morphism{{\iota_j}}{W_j }{\bigoplus_{j\in J} W_j} {\omega}{\left(\omega_h= \begin{cases} 0_h & h \neq j \\ \omega & h = j \end{cases} \right)_{h \in J}} ~. \end{displaymath} % By definition $\displaystyle \bigoplus_{j \in J} W_j \subseteq \prod_{j\in J} W_j $ is a linear subspace, the equality holds when $\#J<\infty$. \item $\Vect$ is a symmetric monoidal category. The tensor product functor is given by the usual tensor product of vector spaces: \begin{displaymath} V_1 \bigotimes_{\mathbb{K}} V_2 = \frac{\Free( V_1 \oplus V_2)}{\sim} \end{displaymath} where $\Free( V_1 \oplus V_2)$ denotes the free vector space generated by the elements of set $V_1\times V_2$ and $\sim$ is the vector subspace encoding distributivity \begin{displaymath} \begin{aligned} (v+v',w) \sim (v,w) + (v',w) \\ (v,w+w') \sim (v,w) + (v,w') \end{aligned} \end{displaymath} and scalar multiplication \begin{displaymath} (\lambda v , w) \sim \lambda (v,w) \sim (v,\lambda w)~. \end{displaymath} Such object is a vector space by the definition of quotient of a vector space with respect to a linear subspace. % The unit object is given by the 1-dimensional vector space $\mathbb{R}$. 
The corresponding associator, left unitor, and right unitor isomorphisms, are trivial and usually treated as simple identifications\footnote{Technically, $\Vect$ is not "strict" monoidal (see \cite{Muger2010}), \eg ${V_1 \otimes (V_2 \otimes V_3)}$ and ${(V_1 \otimes V_2 ) \otimes V_3}$ are isomorphic but -in principle- different. However, most of the time they will be considered "equal" implying the underlying isomorphism $\alpha$. The same idea is applied also to $\lambda$ and $\rho$.}: \begin{displaymath} \isomorphism{\alpha} {V_1 \otimes (V_2 \otimes V_3)} {(V_1 \otimes V_2 ) \otimes V_3} {x_1\otimes(x_2 \otimes x_3)} {(x_1\otimes x_2)\otimes x_3} \end{displaymath} % \begin{displaymath} \isomorphism{\lambda} {V \otimes \mathbb{R}} {V} {v\otimes \lambda} {\lambda v} \end{displaymath} % \begin{displaymath} \isomorphism{\rho} {\mathbb{R} \otimes V} {V} {\lambda \otimes v} {\lambda v} ~. \end{displaymath} % There is also a trivial symmetric braiding: \begin{displaymath} \morphism{B_{V_1,V_2}} {V_1 \otimes V_2} {V_2 \otimes V_1} {v_1\otimes v_2} {v_2 \otimes v_1} ~. \end{displaymath} \item $\Vect$ is a \href{https://ncatlab.org/nlab/show/distributive+monoidal+category}{distributive monoidal category} \cite{nlab:distributive_monoidal_category}. That means that the monoidal structure $\otimes$ distributes over the (cartesian) monoidal structure given by direct sums, thus we have two canonical isomorphisms \begin{displaymath} \isomorphism{d_\ell} {X \otimes (Y \oplus Z)} {(X\otimes Y)\oplus(X\otimes Z)} {x\otimes(y+z)} {x\otimes y + x \otimes z} \end{displaymath} \begin{displaymath} \isomorphism{d_r} {(X\oplus Y) \otimes Z} {(X\otimes Z)\oplus(Y\otimes Z)} {(x+y)\otimes z} {x\otimes z + y \otimes z} ~. \end{displaymath} This is a particular case of \emph{Rig-category} \cite{nlab:rig_category}. \item $\Vect$ is a $\Vect$-enriched category (also known as $\mathbb{R}$-linear category or \emph{algebroid} \cite{nlab:algebroid}). That means that there is a canonical $\mathbb{R}$-structure on the space of linear maps $\Hom_{\Vect}(V,W)$ or, in other words, there exists an "internal hom functor" \begin{displaymath} [-,-] \,:~ {\Vect^{op}\times \Vect} \to {\Vect} \end{displaymath} defined on a pair of objects $V,W\in\Vect$ as \begin{displaymath} [V,W] = \left( \underbrace{\Hom_{\Vect}(V,W)}_{\text{Set of linear maps}}, \underbrace{ \begin{aligned} (A+B)(v) &= A(v)+B(v) \\ (\lambda A)(v) &= \lambda A(v) \end{aligned} }_{\substack{\text{vector-wise linear structure}\\ \forall v \in V \\ \forall A, B \in \Hom_{\Vect}(V,W)}} \right) ~. \end{displaymath} Clearly the usual $\Hom$ functor factors through the internal one and the forgetful functor neglecting the linear structure. It follows easily from the definition that the composition map of linear maps is bilinear with respect to the linear structure on the hom-space, therefore there is natural morphism \begin{displaymath} \morphism{-\circ-} {{[V,W]\otimes [W,V']}} {{[V,V']}} {(A:V\to W)\otimes (B:W\to V')} {(B\circ A : V \to V')} ~. \end{displaymath} \item $\Vect$ is \href{https://ncatlab.org/nlab/show/closed+monoidal+category}{monoidal closed} \cite{nlab:closed_monoidal_category}. The functor $[-,-]$ defined above is also \emph{internal} in the sense of monoidal closed categories; \ie, for any $V \in \Vect$, $[V,-]$ is right adjoint to $-\otimes V$. 
The latter can be proved by exhibiting an explicit "currying" \cite{nlab:currying} natural isomorphism in the category of sets: \begin{equation}\label{eq:Curry} \mathclap{ \isomorphism{"Curry"} {\Hom_{\Vect}(V\otimes W, L)} {\Hom_{\Vect}(V,[W,L])} {(h:V\otimes W \to L)} { \left( \Almorphism{\hat{h}} {V} {[W,L]} {v} {(h(v,-):W\to L)} \right) }} \end{equation} Note that this isomorphism realizes precisely the \emph{universal property of tensor products of vector spaces}. It is not difficult to see that $\Hom_{\Vect}(V,[W,L])$ consists of functions from $V\times W$ to $L$ which are separately linear in both of the two entries \ie coincides with the set of $L$-valued bilinear maps $\text{Multi}(V,W;L)$ . \item $\Vect$ is an Abelian category. Being enriched in vector spaces, $\Vect$ is in particular an ab-enriched category, also called pre-additive. The further condition that for any linear maps is possible to define a kernel and a cokernel ultimately makes $\Vect$ Abelian. \end{itemize} \end{remark} \begin{remark}[Specializing the general theory of graded objects]\label{Rem:GVectCategoricalProps} Most of the structures explained in section \ref{Section:GradedObjects} specialize to the category $\Vect^{\mathbb{Z}}$; in particular: \begin{itemize} \item The category $\Vect^{\mathbb{Z}}$ is complete and cocomplete. Limits are defined component-wise. For instance there is a zero object \begin{displaymath} \mathbb{0} = (k \mapsto \mathbb{0}\in \Vect) ~, \end{displaymath} product and coproduct are given by \begin{displaymath} \prod_{j\in J }V_j = \left( k \mapsto \prod_{j \in J}(V_j)^k \right) \end{displaymath} and \begin{displaymath} \bigoplus_{j\in J }V_j = \left( k \mapsto \bigoplus_{j \in J}(V_j)^k \right)~. \end{displaymath} \item The category $\Vect^{\mathbb{Z}}$ is Abelian. Kernels and cokernels are defined component-wise: \begin{align*} \ker(f:V\to W) =&~ \big(k \mapsto \ker( f^k: V^k \to W^k) \big) \\ \coker(f:V\to W) =&~ \big(k \mapsto \coker( f^k: V^k \to W^k) \big) ~. \end{align*} \item The category $\Vect^{\mathbb{Z}}$ is $\Vect^{\mathbb{Z}}$-enriched. For any two $V,W \in \GVect$, the internal hom-functor is defined component-wise as the graded vector space\footnote{We ought to notice that the convention we are employing here is slightly non-standard. A popular choice (see \cite{nlab:internal_hom}) is to consider the graded vector space homogeneous graded maps, defined below in remark \ref{rem:neglectinginternalgrading}, as the internal hom-space for the category of graded vector spaces.} \begin{displaymath} [V,W] = \big( k \mapsto [V^k,W^k] \big) ~. \end{displaymath} In particular it is also monoidal closed \cite[ex 1.1.]{Delgado2015}. Considering the direct sums of all the components one gets an enrichment over $\Vect$. \item The grading set $\ZZ$ is a field and in particular a ring. We thus may consider the \emph{shift by $j$} functor for any $j\in\mathbb{Z}$: \begin{displaymath} V[j] = \big( k \mapsto V^{k+j} \big) ~. \end{displaymath} \end{itemize} \end{remark} \begin{remark}\label{Remark:GvsAsTotalSpace} Similarly to what hinted in \ref{Remark:GradedObjectasCoproduct}, it is customary to understand a graded vector space as its \emph{associated total vector space} $V^\oplus \cong \bigoplus_{k\in\mathbb{Z}} V^k$ (see also remark \ref{rem:totaloplus}) keeping implied the choice of a particular $\mathbb{Z}$-labelled decomposition, or \emph{grading}, and referring to elements completely contained in $V^p$ as "\emph{homogeneous} of degree $p$". 
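As a simple illustration: the graded vector space $V$ with $V^k=\mathbb{R}\cdot t^k$ for $k\geq 0$ and $V^k=0$ otherwise has total space $V^\oplus\cong\mathbb{R}[t]$, the polynomials in one variable, and a monomial $t^p$ is homogeneous of degree $p$.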
\\
In particular, the standard isomorphism $A\oplus 0 \simeq A$ implies that the category defined in Definition \ref{Def:GradedVectorSpaces} includes any $S$-graded vector space where $S$ is a countable set. For instance, if $S=\{0,1\}$, any $S$-graded vector space $A$ can be regarded as a $\mathbb{Z}$-graded vector space simply by imposing that $A^i=0$ for all $i\notin S$.
\\
Furthermore, $\GVect$ can be viewed as the (non-full) subcategory of $\Vect$ consisting of decomposable vector spaces $V^\oplus = \oplus_{k\in \mathbb{Z}} V^k$ with morphisms given by degree-preserving linear maps.
According to this, one can easily define the direct sum and tensor product of two graded vector spaces out of the corresponding operators $\oplus$ and $\otimes$ on $\Vect$.
Namely, it suffices to specify a grading on the following "decomposable" ordinary vector spaces
\begin{displaymath}
	\begin{aligned}
		(V\oplus W)^\oplus &= V^\oplus \oplus W^\oplus ~,\\
		(V\otimes W)^\oplus &= V^\oplus \otimes W^\oplus ~.
	\end{aligned}
\end{displaymath}
For instance, one can enforce that $(V\oplus W)^k = V^k \oplus W^k$ and $(V\otimes W)^k = \bigoplus_{i+j=k} V^i\otimes W^j$ for any $k\in\mathbb{Z}$.
\end{remark}
%
\begin{notation}[Concentrated graded vector spaces]
	A graded vector space $V$ is said to be \emph{concentrated in degrees $S\subset \mathbb{Z}$} if $V^k=0$ for all $k \not\in S$.
	Note that we can regard ordinary (in the sense of "ungraded") vector spaces as $\mathbb{Z}$-graded vector spaces concentrated in degree $0$.
\end{notation}
%
\begin{definition}[Bi-graded vector spaces]
	We call \emph{bi-graded vector space} any object in the $(\mathbb{Z}\times\mathbb{Z})$-graded vector space category $\Vect^{(\mathbb{Z}\times\mathbb{Z})}$.
	In other words, it is a collection $\{V_{s,t}\}_{(s,t)\in\mathbb{Z}\times\ZZ}$ of vector spaces over $\mathbb{R}$ indexed by pairs of integers.
	(The definition extends trivially to any multi-grading on a generic graded object.)
\end{definition}
%
\begin{remark}\label{Rem:TotGradedVecSpace}
	To any bi-graded vector space one can associate a graded one through the \emph{total space} construction, \ie, for any given $\overline{V}=( s,t \mapsto V_{s, t}) \in \Vect^{\mathbb{Z}\times\mathbb{Z}}$, one can introduce
	\begin{displaymath}
		\tot(\overline{V}) = (q \mapsto \bigoplus_{s+t=q}V_{s,t}) ~.
	\end{displaymath}
\end{remark}
%
\begin{definition}[Homogeneous maps]
	It is customary to call any graded morphism $f \in \Hom_{\Vect^\mathbb{Z}}(V,W[k])$ a \emph{homogeneous map from $V$ to $W$ in degree $k$}.
	In particular any graded morphism $V \to W$ is a \emph{degree $0$ homogeneous map}.
\end{definition}
\begin{remark}[Neglecting the internal grading]\label{rem:neglectinginternalgrading}
	The set of all homogeneous maps from $V$ to $W$ constitutes a $\mathbb{Z}$-graded object in the category $\Vect^\mathbb{Z}$.
	In other words, there is a bi-graded vector space
	\begin{displaymath}
		\left( (k,s) \mapsto \Hom_{\Vect}(V^s,W[k]^s) \right) ~.
	\end{displaymath}
	It is customary to neglect the "internal grading" given by the index $s$. Namely, one introduces the \emph{$\ZZ$-graded} vector space of homogeneous maps as
	\begin{displaymath}
		\underline{\Hom}_{\Vect^{\ZZ}}(V,W):=
		\left( k \mapsto \left(\Hom_{\Vect^{\ZZ}}(V,W[k])\right)^{\oplus} \right) ~,
	\end{displaymath}
	where the superscript $\oplus$ denotes the direct sum of all the components of the graded vector space, \ie $V^\oplus=\oplus_{k\in\ZZ}V^k$, as defined in remark \ref{rem:totaloplus}.
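	As a minimal illustration (recorded here only for orientation and not used in the sequel): if $V$ is concentrated in degrees $0$ and $1$, an element of degree $1$ in $\underline{\Hom}_{\Vect^{\ZZ}}(V,V)$ amounts to a single linear map $V^0\to V^1$, since the only possibly non-vanishing component of a graded morphism $V\to V[1]$ is the one in degree $0$, namely $\Hom_{\Vect}(V^0,(V[1])^0)=\Hom_{\Vect}(V^0,V^1)$.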
Accordingly, we will often refer to elements in $\underline{\Hom}_{\Vect^{\ZZ}}^k(V,W)$ simply as \emph{linear maps} in degree $k$. This is completely consistent with the fact that $f\in \underline{\Hom}^k_{\Vect^{\ZZ}}(V,W)$ is, by its very definition, a linear map between ordinary vector spaces $f:V^{\oplus}\to W^\oplus$ such that $\im(f\eval_{V^i})\subset W^{i+k}$. \end{remark} \begin{notation}[Shifted elements]\label{not:shiftedelements} Given $v \in V^{|v|}$, we denote by $v_{[k]}$ the element $v$ seen as an element in $V[k]^{|v|-k}$. In other words, $|v_{[k]}|= |v|-k$. An homogeneous map $f$ in degree $|f|=k$ from $V$ to $W$ is a graded morphism (\ie degree $0$ linear map) given by: \begin{displaymath} \morphism{f} {V} {W[k]} {v} {(f(v))_{[|f|]}} ~. \end{displaymath} The latter will be often denoted as a linear map $f:V\to W$ implying the passage to $(f)^\oplus$ explained in remark \ref{rem:neglectinginternalgrading}. \\ The action of the shift functor $[\ell]$ on an homogeneous map $f$ (graded morphism $f:V\to W[|f|]$) is given by \begin{equation}\label{eq:shiftHomoMaps} \morphism{f[\ell]} {V[\ell]} {W[|f|][\ell]} {v_{[\ell]}} {(f(v))_{[|f|][\ell]}} ~. \end{equation} \end{notation} % \begin{definition}[Composition of homogeneous maps]\label{Def:compositionofhomogeneousmaps} Given two homogeneous maps $f \in \Hom_{\GVect}(V,W[k])$ and $g \in \Hom_{\GVect}(W,X[\ell])$ in degree $k$ and $\ell$ respectively, we define their composition as the homogeneous map $$ g\circ f := g[k] \circ f \in \Hom_{\GVect}(V,X[k+\ell]) ~.$$ \end{definition} % \begin{remark} According to the notation introduced in remark \ref{rem:neglectinginternalgrading}, one has \begin{displaymath} \underline{\Hom}^i_{\GVect}(V,W[l])\cong \underline{\Hom}^{i+l}_{\GVect}(V,W) ~. \end{displaymath} Notice also that linearity ensures the following compatibility rule on homogeneous maps \begin{equation}\label{Eq:HomOfDirectSum} \underline{\Hom}_{\GVect}(A\oplus B, C) \cong \underline{\Hom}_{\GVect}(A,C)\oplus \underline{\Hom}_{\GVect}(B,C) ~. \end{equation} \end{remark} % \begin{notation}[Graded maps vs. homogeneous maps]\label{Not:GradedvsHomogeneous-maps} From now on, we will only work in the category of graded vector spaces, therefore we will omit the subscript $\GVect$ when denoting the hom-sets. \\ Namely, we denote as $\Hom(V,W)$ the graded vector space (due to the internal hom-functor) of \emph{graded morphisms} between $V$ and $W$. We also denote as $ \underline{\Hom}^k(V,W) = \Hom (V,W[k])$ the graded vector space of \emph{homogeneous maps in degree $k$}. Therefore we call $\underline{\Hom}(V,W)=\oplus_{k\in\mathbb{Z}}\Hom (V,W[k])$ the graded vector space of \emph{homogeneous maps in any degree}. \\ Recall that $\Hom(V,W) = \underline{\Hom}^0(V,W)$, we stress that in our convention \emph{arrows between graded vector spaces} are always homogeneous map in degree $0$. \end{notation} \subsection{Monoidal structure and Koszul convention}\label{sec:bloodyKoszulConvention} Some slightly more subtle conventions appear when dealing with the induced monoidal structure on $\GVect$. Since $\Vect$ is a bi-monoidal\footnote{Namely with respect to the monoidal structures given by the tensor product $\otimes$ and by the direct sum $\oplus$.} category and $\mathbb{Z}$ is, in particular, a monoid, proposition \ref{Prop:GradedMonoidalStructure} assures the existence of an induced monoidal structure on graded vector spaces. 
The action on objects is given by \begin{displaymath} V\otimes W = \left( k \mapsto \bigoplus_{i+j=k} V^i\otimes W^j \right) ~, \end{displaymath} and the action of morphism $f:X\to Y$ and $g:W\to Z$ is given by \begin{displaymath} f \otimes g \left( k \mapsto \bigoplus_{i+j=k}\left(f^i\otimes g^j : X^i\otimes W^j \to Y^i\otimes Z^j \right) \right)~. \end{displaymath} The associator and unitor isomorphisms come from the associativity and unity isomorphisms in $\Vect$. Note that $\Vect$ is also braided, therefore, according again to proposition \ref{Prop:GradedMonoidalStructure}, there is a canonical braiding induced on the category $\Vect^\mathbb{Z}$ of graded vector spaces. As it has been anticipated in remark \ref{Rem:AnticipazionediKoszul}, it is customary to consider on $\Vect^\mathbb{Z}$ a "twisted" version of the canonical Braiding called "Koszul braiding". \begin{definition}[Koszul braiding]\label{Def:KoszulBraiding} We call \emph{Koszul braiding} the braiding natural transformation defined on homogeneous elements by the isomorphism \begin{displaymath} \morphism{B_{V,W}} {V\otimes W} {W \otimes V} {x\otimes y} {(-)^{|x||y|}y\otimes x} \end{displaymath} for any $V,W \in \Vect^\mathbb{Z}$. \end{definition} % \begin{remark}[Braiding notation]\label{not:shortenBraiding} The braiding is clearly symmetric since \begin{displaymath} B_{V,W} \circ B_{W,V} = \id_{W\otimes V} ~,\qquad B_{W,V} \circ B_{V,W} = \id_{V\otimes W} ~. \end{displaymath} \end{remark} \begin{notation} We will often omit the subscript $V,W$ in $B_{V,W}$ when there is no ambiguity about the domain $V\otimes W$ of the braiding operator. \end{notation} \begin{remark}[Koszul convention]\index{Koszul convention}\label{rem:KoszulRuleofThumb} Informally, the choice of the Koszul braiding implies that the exchange of two homogeneous elements keeps track of their degree of the two exchanged elements. Namely, whenever two elements in degree $m$ and $n$ are swapped, a sign $(-)^{m n }$ is introduced. \end{remark} % This convention has several tricky consequences mostly coming from the tensor product of shifted spaces. % \begin{remark}[Suspension]\index{Suspension isomorphism}\label{Rem:suspension} A first important observation is that the three graded vector spaces, $V[1]$, $\mathbb{R}[1]\otimes V$, and $V \otimes \mathbb{R}[1]$, coincides components-wise \begin{displaymath} (V[1])_k \equiv (\mathbb{R}[1]\otimes V)_k \equiv (V \otimes \mathbb{R}[1])_k = V_{k+1} \qquad \forall k \in \mathbb{Z} ~. \end{displaymath} Nevertheless, we cannot assume that the above three spaces coincide because $\mathbb{R}[1]\otimes V$ and $V \otimes \mathbb{R}[1]$ are isomorphic through the braiding but, in principle, different. % Therefore there is the freedom to choose which of the two spaces can be identified with $V[1]$. In the wording of \cite{Fiorenza2006}: \begin{quotation} \underline{ we adopt the convention that \emph{"degrees are shifted on the left"}}. \end{quotation} Namely we understand the following natural identification $V[1]\cong \mathbb{R}[1]\otimes V$ realized by the \emph{suspension isomorphism}\footnote{Would be better to refer to it as \emph{suspension natural transformation}, $\susp:\mathbb{R}[1]\otimes \blank \Rightarrow [1]$.}, \begin{displaymath} \isomorphism{\susp} {V[1]} {\mathbb{R}[1]\otimes V} {v_{[1]}} {1_{[1]}\otimes v} ~. 
\end{displaymath} According to this convention, there is also a corresponding "suspension on the right" by composing the suspension with the Braiding: \begin{displaymath} \begin{tikzcd}[row sep=small] & \mathbb{R}[1]\otimes V \ar[dd,"B"] \\ V[1] \ar[ru,"\susp"] \ar[dashed,dr] & \\ & V \otimes \mathbb{R}[1] \end{tikzcd} \end{displaymath} that is : \begin{displaymath} \morphism{B \circ \susp} {V[1]} {V\otimes \mathbb{R}[1]} {v_{[1]}} {(-)^{|v|} v\otimes 1_{[1]}} ~. \end{displaymath} Basically, we are imposing that the shift functor is equivalent to left tensor product with $\mathbb{R}[1]$. Iterating the suspension, one obtains the following isomorphism \begin{displaymath} \isomorphism{\susp} {V[k]} {\mathbb{R}[k]\otimes V} {v_{[k]}} {1_{[k]}\otimes v} ~, \end{displaymath} denoted again by $\susp$ with a slight abuse of notation. In particular we assume that \begin{displaymath} \isomorphism{\text{susp}} {\RR[k]\otimes \RR[\ell]} {\RR[k+\ell]} {1_{[k]}\otimes 1_{[\ell]}} {(1_{[\ell]})_{[k]}=1_{[\ell+k]}} ~. \end{displaymath} The latter implies the following canonical identification\footnote{In the sense that no extra sign arises.} $\RR[k]\otimes \RR[\ell]\equiv \RR[\ell+k] \equiv \RR[\ell]\otimes\RR[k]$ and, according to remark \ref{rem:GradingGivenByMonoid}, the following shifted spaces are also identified $$ V[k][\ell]\equiv V[k+\ell] \equiv V [\ell][k] ~. $$ We stress that the latter convention differs from some references in the bibliography, see \eg \cite{Delgado2018b}, where the shift is precisely defined by tensor multiplication on the left and $V[k][\ell]$ and $V[\ell][k]$ are not identified but considered isomorphic (implying in particular an extra sign). \end{remark} % More in general, the choice of a suspension implies the following isomorphism defined on any pair of graded vector spaces: \begin{definition}[D\'ecalage]\label{Def:DecIso} \begin{displaymath} \isomorphism{\dec} {V[k]\otimes W[\ell]} {(V\otimes W)[k+\ell]} {v_{[k]}\otimes w_{[\ell]}} {(-)^{\ell\cdot |v|}(v\otimes w)_{[k+\ell]}} \end{displaymath} \end{definition} % \begin{remark}[Construction of $\dec$]\label{rem:decConstruction} The d\'ecalage operator introduced in definition \ref{Def:DecIso} can be explicitly constructed by a suitable composition of the "building block" introduced so far (suspension, braiding, associator and unitors). For instance, $dec$ could be given in four "steps" as follows: \begin{displaymath} \begin{tikzcd} V[k]\otimes W[\ell] \ar[r,"\susp\otimes\susp"] &[2em] \RR[k]\otimes V \otimes \RR[\ell]\otimes W \ar[r] &[-2em] \cdots \\[-2em] v_{[k]}\otimes w_{[\ell]} \ar[r,mapsto]& 1_{[k]}\otimes v \otimes 1_{[\ell]}\otimes w \ar[r,mapsto] & \cdots \\ \cdots \ar[r,"\Unit\otimes B \otimes \Unit"] & \RR[k]\otimes \RR[\ell] \otimes V \otimes W \ar[r]& \cdots \\[-2em] \cdots \ar[r,mapsto] & (-)^{\ell\,|v|}~1_{[k]}\otimes 1_{[\ell]}\otimes v \otimes w \ar[r,mapsto] & \cdots \\ \cdots \ar[r,"\susp^{-1} \otimes \Unit"] & \RR[k+\ell] \otimes (V \otimes W) \ar[r]& \cdots \\[-2em] \cdots \ar[r,mapsto] & (-)^{\ell\,|v|}~1_{[k+\ell]}\otimes (v \otimes w) \ar[r,mapsto] & \cdots \\ \cdots \ar[r,"\susp^{-1}"] & (V \otimes W)[k+\ell] \\[-2em] \cdots \ar[r,mapsto] & (-)^{\ell\,|v|}~(v \otimes w)_{[k+\ell]} \end{tikzcd} \end{displaymath} (In the above construction, we understood the isomorphisms $\alpha,\lambda$ and $\rho$ given by the monoidal structure as identifications, see remark \ref{rem:Vectpdvs}). 
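	We also record, purely as a sanity check, where the sign comes from: the only step producing a sign is the braiding $B_{V,\RR[\ell]}$, which swaps $v$ (of degree $|v|$) past $1_{[\ell]}$ (of degree $-\ell$); the resulting Koszul sign is $(-)^{-\ell\,|v|}=(-)^{\ell\,|v|}$, matching the prefactor in definition \ref{Def:DecIso}.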
\end{remark} Iterating the previous definition, one can introduce the \emph{d\'ecalage} isomorphism, defined on any $n$-tuple of graded vector spaces $(V_1,\dots, V_n)$ as: \begin{displaymath} \isomorphism{dec} {V_1[1]\otimes\dots\otimes V_n[1]} {(V_1\otimes\dots \otimes V_n)[n]} {v_{1[1]}\otimes\dots v_{n[1]}} {(-)^{\sum_{i=1}^{n}(n-i)|v_i|}(v_1\otimes\dots\otimes v_n)_{[n]}} ~. \end{displaymath} % \begin{remark}[Comparison with other conventions] Observe that with the building blocks in our hands one could make other slightly different choices on how to build a canonical isomorphism $V[k]\otimes W[\ell]\cong (V\otimes W)[k+\ell]$. For instance, one could also introduce the operator \begin{equation}\label{eq:decstrano} \morphism{\overline{\dec}} {V[k]\otimes W[\ell]} {(V\otimes W)[k+\ell]} {v_{[k]}\otimes w_{[\ell]}} {(-)^{\ell\cdot |v_{[k]}|}(v\otimes w)_{[k+\ell]}} \end{equation} which can be obtained substituting the second arrow in the diagram of remark \ref{rem:decConstruction}, given by $(\Unit_{\RR[k]}\otimes B_{V,\RR[\ell]} \otimes \Unit_W)$, with $(B_{\RR[k]\otimes V, \RR[\ell]}\otimes \Unit_W)$. Such a choice is rather common in the literature (see remark \ref{rem:comparesignmesswithliterature}). In our convention, the operator $\overline{\dec}$ will only play a role in the definition of the tensor product of graded homogeneous maps (see remark \ref{Rem:TensorHomogeneousMaps} below). \end{remark} \begin{remark}\label{rem:decVsBraiding} It is clear, from the characterization of the shift functor in terms of the suspension, that $[k]$ is not a "monoidal functor". Namely, one has that \begin{displaymath} (V\otimes W)[k] = \mathbb{R}[k]\otimes V \otimes W ~\cancel{\cong}~ \mathbb{R}[k]\otimes V \otimes \mathbb{R}[k]\otimes W = V[k]\otimes W[k] ~. \end{displaymath} However, the d\'ecalage isomorphism defines a sort of "compatibility rule" between $[k]$ and $\otimes$. In other terms, it determines an iso-natural transformation \begin{displaymath} \otimes \circ ( [k]\times [l] ) \Rightarrow [k+l]\circ \otimes ~. \end{displaymath} Accordingly, the shift functor is not compatible with the braiding structure. This implies that, without loss of generality, the following diagram do not plain commutes: \begin{displaymath} \begin{tikzcd} {V[k]\otimes W[l]} \ar[r,"\dec"]\ar[d,"{B_{V[k],W[l]}}"'] \ar[rd,phantom,"\cancel{\circlearrowleft}"]& {(V\otimes W)[k+l]} \ar[d,"{[k+l]B_{V,W}}"] \\ {W[l]\otimes V[k]} \ar[r,"\dec"']& {W\otimes V [k+l]} \end{tikzcd} \end{displaymath} but it is required an extra sign on the right vertical arrow in order to achieve the commutation: \begin{equation}\label{Eq:DecalageInterwiningPermutations} \begin{tikzcd} {V[k]\otimes W[l]} \ar[r,"\dec"]\ar[d,"{B_{V[k],W[l]}}"']& {(V\otimes W)[k+l]} \ar[d,"{(-)^{k l}[k+l]B_{V,W}}"] \\ {W[l]\otimes V[k]} \ar[r,"\dec"']& {W\otimes V [k+l]} \end{tikzcd} ~. \end{equation} The appearance of this extra sign determines the crucial property of $\dec$ to intertwine odd and even permutations of homogeneous vectors (see remark \ref{Rem:abstractpermutationofGVS}). \end{remark} According to this choice, there is also a corresponding non-trivial formula for the tensor product of graded maps. \begin{remark}[Tensor product of homogeneous maps]\label{Rem:TensorHomogeneousMaps} Recall at first that an homogeneous map $f:V\to W$ in degree $|f|=k$ is simply a graded morphism $f:V\to W[k]$. \\ Consider now any two homogeneous maps $f \in \underline{\Hom}^{|f|}(V,W), f' \in \underline{\Hom}^{|f'|}(V',W')$ seen as graded morphisms in the above sense. 
Applying the monoidal product functor $\otimes$ defined on $\GVect$ one obtains the following graded morphism
	\begin{equation}\label{eq:CompletelyAccurateTensorOfMaps}
		\morphism{f\otimes f'}
		{V~\otimes~ V'}
		{W[k]\otimes W'[k']}
		{v \otimes v'}
		{{f(v)}_{[|f|]}\otimes {f'(v')}_{[|f'|]}}
	\end{equation}
which, technically speaking, is not an homogeneous map\footnote{
	According to the previous conventions:
	$$ f\otimes f' \in \Hom(V\otimes V', W[k]\otimes W'[k']) \cong \underline{\Hom}^k(V\otimes V',W\otimes (W'[k'])) ~. $$
} from $V\otimes V'$ to $W\otimes W'$.
\\
To get an honest homogeneous map in $\underline{\Hom}(V\otimes V',W\otimes W')$ one should postcompose the above map $f\otimes f'$ with a suitable isomorphism $W[|f|]\otimes W'[|f'|]\cong (W\otimes W')[|f|+|f'|]$.
\\
Accordingly, we define, with a provisionally decorated notation, the tensor product of the above homogeneous maps $f$ and $f'$ to be the homogeneous map $f~\widetilde{\otimes}~f' \in \underline{\Hom}^{|f|+|f'|}(V\otimes V', W \otimes W')$ acting on homogeneous elements as
	\begin{equation}\label{eq:KoszulConvTensorProducts-appendix}
		\morphism{f~\widetilde{\otimes}~ f'}
		{V\otimes V'}
		{(W\otimes W')[k+k']}
		{v \otimes v'}
		{(-)^{|v||f'|}(f(v)\otimes f'(v'))_{[|f|+|f'|]}}
		~.
	\end{equation}
In equation \eqref{eq:KoszulConvTensorProducts-appendix} it is understood that the above $\otimes$ has been postcomposed with $\overline{\dec}$ (see equation \eqref{eq:decstrano}), since $|{f(v)}_{[|f|]}|=|v|$.
\\
This choice is perfectly consistent with the \emph{Koszul convention} (see remark \ref{rem:KoszulRuleofThumb}), which, as a rule of thumb, prescribes that a sign $(-)^{|\alpha||\beta|}$ appears whenever two graded elements $\alpha$ and $\beta$ are swapped.
In the case of \eqref{eq:KoszulConvTensorProducts-appendix}, the sign comes from the permutation $(f,f',v,v')\mapsto(f,v,f',v')$.
\\
Equation \eqref{eq:KoszulConvTensorProducts-appendix} also implies a sign rule for the composition of tensor products of homogeneous maps:
	\begin{equation}\label{Eq:TensorHomogeneousMaps-appendix}
		(f'~\widetilde{\otimes}~ g') \circ (f ~\widetilde{\otimes}~ g) = (-)^{|g'||f|}(f'\circ f)~\widetilde{\otimes}~ (g'\circ g) ~.
	\end{equation}
\end{remark}

\begin{notation}[Dropping the decorated notation for $\widetilde{\otimes}$]\label{not:abuseotimes}
	It is an almost universally accepted practice to drop the decorated notation $\widetilde{\otimes}$ and denote with $\otimes$, with a slight abuse of notation, the tensor product of homogeneous maps
	\begin{displaymath}
		\otimes:~ \underline{\Hom}^k(V,W)\times \underline{\Hom}^{k'}(V',W') \to \underline{\Hom}^{k+k'}(V\otimes V',W\otimes W')
	\end{displaymath}
	whose image is given by equation \eqref{eq:KoszulConvTensorProducts-appendix}.
\end{notation}
\begin{remark}[Considering homogeneous maps as morphisms in $\GVect$]\label{rem:homomapsasmorphism}
	Some sources in the literature propose to consider as morphisms in the category of graded vector spaces all homogeneous maps, therefore not only maps in degree $0$ as we are considering here.
	In that case, equation \eqref{Eq:TensorHomogeneousMaps-appendix} must be imposed in order to define the action of the functor $\otimes$ on arrows properly.
	\\
	We stress again that this is not the convention that we are employing here. In our diagrams, arrows must always be interpreted as degree $0$ maps.
\end{remark}
\begin{remark}[Shift of homogeneous maps]\label{rem:shiftHomogMapsTrickySign}
	There is a possible source of confusion coming from notation \ref{not:abuseotimes} that we shall clarify.
	\\
	Recall that we mentioned in remark \ref{Rem:suspension} that the convention "shift is suspension from the left" can be read as a natural transformation. The naturality condition implies that the square diagram below (on the right) ought to commute, in the category of graded vector spaces, for any arrow (below, on the left)
	\begin{displaymath}
		\begin{tikzcd}
			V \ar[d,"f"]&[2em] & V[\ell] \ar[r,"\text{susp}"] \ar[d,"{f[\ell]}"]& \mathbb{K}[\ell]\otimes V \ar[d,"{\id_{\mathbb{K}[\ell]}\otimes f}"] \\
			W[|f|] & & W[|f|][\ell] \ar[r,"\text{susp}"] & \mathbb{K}[\ell]\otimes W[|f|]
		\end{tikzcd}
		~.
	\end{displaymath}
	Observe that the naturality is with respect to the monoidal structure of $\Vect^\ZZ$, hence, even if the arrow $f$ is an homogeneous map in degree $|f|$, the rightmost arrow $({\id_{\mathbb{K}[\ell]}\otimes f})$ has to be interpreted in the sense of equation \eqref{eq:CompletelyAccurateTensorOfMaps}, \underline{not} in the sense of \eqref{eq:KoszulConvTensorProducts-appendix}.
	Chasing a generic homogeneous element $v\in V$ incorrectly, \ie interpreting $\otimes$ in the sense of notation \ref{not:abuseotimes}, would imply that
	\begin{displaymath}
		(f[\ell])(v_{[\ell]}) = (\id_{\mathbb{K}[\ell]} \otimes f) ~ (1_{[\ell]}\otimes v) = (-)^{\ell k } 1_{[\ell]}\otimes f(v) = (-)^{\ell k } (f(v))_{[\ell]} ~,
	\end{displaymath}
	which contradicts what we stated in equation \eqref{eq:shiftHomoMaps}.
	\\
	Notice that different conventions, see for example remark \ref{rem:homomapsasmorphism}, would lead to a different interpretation of the symbol $\otimes$.
	\\
	We emphasize that in the body of the thesis the tensor product of homogeneous maps will always be interpreted in the sense of notation \ref{not:abuseotimes}.
\end{remark}
\begin{remark}[Suspension maps]\label{Rem:suspensionmaps}
	Observe that any homogeneous element $v$ of degree $|v|$ in the graded vector space $V$ is also an element in $V[i]$ with shifted homogeneous degree.
	Denoting by $v_{[i]}$ the corresponding element in the shifted vector space, one has $|v_{[i]}| = |v| -i$, since $v \in V^k$ implies $v_{[i]}\in (V[i])^{k-i}$.
	Accordingly, the identity morphism $\id_V=(k\mapsto \id_{V^k})$ can be regarded as an invertible degree $k$ homogeneous map between $V$ and $V[-k]$.
	\\
	Sometimes it can be useful to introduce the \emph{suspension map} $\uparrow \in \underline{\Hom}^1(V,V[-1])$ and the \emph{desuspension map} $\downarrow \in \underline{\Hom}^{-1}(V,V[1])$ (not to be confused with the suspension natural transformation defined in \ref{Rem:suspension}) defined as follows:
	\begin{displaymath}
		\uparrow(v) = 1_{[-1]}\otimes v = v_{[-1]}
		\qquad
		\downarrow(v)= 1_{[1]}\otimes v = v_{[1]} ~.
	\end{displaymath}
	\\
	The suspension convention implies that $(\downarrow\otimes \downarrow)\circ (\uparrow \otimes \uparrow)= -\mathbb{1}$ or, more generally,
	\begin{displaymath}
		\downarrow^{\otimes n}\circ \uparrow^{\otimes n} = \uparrow^{\otimes n} \circ \downarrow^{\otimes n} = (-)^{\frac{n(n-1)}{2}}\mathbb{1} ~.
	\end{displaymath}
	Finally, observe that if one chooses homogeneous maps as morphisms in the category of graded vector spaces, as hinted in remark \ref{rem:homomapsasmorphism}, a graded vector space $V$ will be isomorphic to all its shifts $V[n]$ through the suspension and desuspension maps.
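	For instance, and purely as an illustrative check, the first of the two sign identities above can be recovered from the composition rule \eqref{Eq:TensorHomogeneousMaps-appendix}: since $\downarrow\circ\uparrow=\mathbb{1}$, one computes $(\downarrow\otimes \downarrow)\circ (\uparrow \otimes \uparrow)=(-)^{|\downarrow||\uparrow|}\,(\downarrow\circ\uparrow)\otimes(\downarrow\circ\uparrow)=(-)^{(-1)(+1)}\,\mathbb{1}=-\mathbb{1}$.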
\end{remark} \begin{remark}[About the convention ${[k][\ell]=[\ell][k]}$]\label{rem:compsingShiftsvsSuspensions} Our convention about shift endofunctors to identify $[k][\ell]$, $[\ell][k]$ and $[\ell+k]$ for any $k,\ell \in \mathbb{Z}$ comes quite naturally when regarding a graded vector space as a functor $\mathbb{Z}\to \Vect$. In fact, since $\mathbb{Z}$ is in particular a group, the action $\mathbb{Z}\action\mathbb{Z}$ given by $+$ naturally translates to a family of endofunctor $\mathbb{Z}\to \mathbb{Z}$ where $\mathbb{Z}$ is interpreted as discrete category. The shift functors on $\Vect^{\mathbb{Z}}$ are therefore obtained by precomposition with the latter functors. \\ More precisely, since $[k][\ell],[\ell][k]$ and $[\ell+k]$ are different functors, it is implied the following natural transformation \begin{displaymath} \begin{tikzcd}[ column sep=2em,row sep=-1ex] V[k][\ell] \ar[r,"\sim"] & V[k+\ell]=V[\ell+k] \ar[r,"\sim"] & V[\ell][k] \\ (v_{[k]})_{[\ell]} \ar[r,mapsto] & v_{[k+\ell]} \ar[r,mapsto] & (v_{[\ell]})_{[k]} \end{tikzcd} \end{displaymath} for any homogeneous vector $v\in V$. \\ This choice is perfectly compatible with the "(suspension) shift on the left" convention: \begin{displaymath} \begin{tikzcd} (v_{[k]})_{[\ell]} \ar[ddd,mapsto]&[-7em] \\[-2em] & V[k][\ell] \ar[r,equal]\ar[d,"\text{susp}"] & V[k+\ell]=V[\ell+k] \ar[r,equal]\ar[d,"\text{susp}"] & V[\ell][k]\ar[d,"\text{susp}"] \\ & \mathbb{K}[\ell]\otimes\mathbb{K}[k]\otimes V \ar[r,"\text{susp}^{-1}"] & \mathbb{K}[\ell+k]\otimes V \ar[r,"\text{susp}"] & \mathbb{K}[k]\otimes\mathbb{K}[\ell]\otimes V \\[-2em] 1_{[\ell]}\otimes 1_{[k]}\otimes v \ar[rr,mapsto]&& 1_{[\ell+k]}\otimes v \ar[r,mapsto]& 1_{[k]}\otimes 1_{[\ell]} \otimes v \end{tikzcd} \end{displaymath} % We notice that is also common to make different choices. For instance in \cite[pag. 4]{Delgado2018} one can find the opposite \ie $V[k] \equiv \mathbb{R}[k]\otimes V$ and \begin{displaymath} \lambdaisomorphism{V{[k][\ell]}} {V{[\ell][k]}} {(v_{[k]})_{[\ell]}} {\mathit{\nrt{[(-)^{k \ell}]}}(v_{[\ell]})_{[k]}} ~. \end{displaymath} The latter is in particular compatible with a different convention regarding the action of "suspended maps" (see \eg \cite{Fiorenza2006}). Namely, they define the shift functor $[\ell]$ on an homogeneous map (graded morphism $f:V\to W[|f|]$) as given by \begin{displaymath} \morphism{f[\ell]} {V[\ell]} {W[|f|][\ell]} {v_{[\ell]}} {\mathit{\nrt{[(-)^{|f|k}]}} (f(v))_{[|f|][\ell]}} ~. \end{displaymath} In the previous two expressions we emphasized (with square brackets) the extra signs that are not present in our conventions. \end{remark} \begin{remark}[Koszul convention - Take away message]\label{Rem:KoszulTAM} Summing up, the so-called "Koszul convention" rigorously consists of the following three choices: \begin{enumerate} \item Endowing the category $\GVect$ with the Koszul braiding defined in definition \ref{Def:KoszulBraiding}, that is different from the canonical one induced from the grading in $\Vect$ according to \ref{Rem:CanonicalInducedBraiding}. \item Adopting the convention that degrees are "shifted on the left", that is imposing that the shift functor $[k]$ coincides (is naturally identified) with $\mathbb{R}[k]\otimes-$. (See remark \ref{Rem:suspension}). \item Understanding the tensor product of homogeneous maps as obtained by postcomposition of the monoidal structure $\otimes$ with the operator $\overline{\dec}$ given in equation \eqref{eq:decstrano}. 
	(See remark \ref{Rem:TensorHomogeneousMaps}.)
	\end{enumerate}
	Informally, these are subsumed by the following rule of thumb (already anticipated in \ref{rem:KoszulRuleofThumb}):
	\begin{quote}
		whenever two objects of degrees $p$ and $q$ are permuted, one also has to multiply by the sign prefactor $(-)^{p q}$.
	\end{quote}
\end{remark}
\begin{remark}[Permutations of tensor products]\label{Rem:abstractpermutationofGVS}
	Observe that, out of the symmetric braiding $B$, one can construct two natural transformations (an odd and an even one) pertaining to a permutation $\sigma\in S_n$, for any $n\geq 1$.
	Recall at first that any permutation can be expressed (not uniquely) as a composition of adjacent transpositions $s_i$, \ie permutations exchanging the $i$-th index with the $(i+1)$-th only.
	The sought transformations can be given on the generating set $\{s_i\}_{0< i < n}$ of $S_n$.
	The "even" transformation is given by
	\begin{displaymath}
		\mathclap{
		\morphism{B_{s_i}}
		{W_1\otimes\dots\otimes W_n}
		{W_1\otimes\dots\otimes W_{i-1}\otimes W_{i+1}\otimes W_i\otimes W_{i+2}\otimes\dots\otimes W_n}
		{w_1\otimes\dots\otimes w_n}
		{(-)^{|w_i||w_{i+1}|}w_1\otimes\dots\otimes w_{i-1}\otimes w_{i+1}\otimes w_i\otimes w_{i+2}\otimes\dots\otimes w_n}
		}
	\end{displaymath}
	and the "odd" transformation is given by:
	\begin{displaymath}
		\mathclap{
		\morphism{P_{s_i}}
		{W_1\otimes\dots\otimes W_n}
		{W_1\otimes\dots\otimes W_{i-1}\otimes W_{i+1}\otimes W_i\otimes W_{i+2}\otimes\dots\otimes W_n}
		{w_1\otimes\dots\otimes w_n}
		{-(-)^{|w_i||w_{i+1}|}w_1\otimes\dots\otimes w_{i-1}\otimes w_{i+1}\otimes w_i\otimes w_{i+2}\otimes\dots\otimes w_n}
		~.
		}
	\end{displaymath}
	More succinctly, this means that
	\begin{displaymath}
		\begin{aligned}
			B_{s_i} =& \mathbb{1}_{W_1}\otimes\dots\otimes\mathbb{1}_{W_{i-1}}\otimes B_{W_i,W_{i+1}}\otimes \mathbb{1}_{W_{i+2}}\otimes \dots \otimes \mathbb{1}_{W_{n}} \\
			P_{s_i} =& - B_{s_i}
		\end{aligned}
		~.
	\end{displaymath}
	%
	Accordingly, to any given permutation $\sigma\in S_n$, decomposed into transpositions as $\sigma=s_{i_k}\dots s_{i_1}$, one can associate
	\begin{displaymath}
		\morphism{B_\sigma = B_{s_{i_k}}\dots B_{s_{i_1}}}
		{W_1\otimes\dots\otimes W_n}
		{W_{\sigma_1}\otimes\dots\otimes W_{\sigma_n}}
		{w_1\otimes\dots\otimes w_n}
		{\epsilon(\sigma,w) w_{\sigma_1}\otimes\dots\otimes w_{\sigma_n}}
	\end{displaymath}
	and
	\begin{displaymath}
		P_\sigma= (-)^\sigma B_\sigma ~.
	\end{displaymath}
	%
	We will often call these last two operators "even (resp. odd) permutators". They will be more extensively studied in appendix \ref{App:UnshuffleAtors}.
	\\
	The coefficient $\epsilon(\sigma,w)$ arising from a generic permutation is called \emph{(symmetric) Koszul sign}. The corresponding sign $\chi(\sigma,w)=(-)^\sigma\epsilon(\sigma,w)$ pertaining to $P$ is called \emph{skew-symmetric Koszul sign}. See remark \ref{rem:aboutKoszulSign} for further details.
\end{remark}
\begin{remark}[On the Koszul sign]\label{rem:aboutKoszulSign}
	The Koszul sign $\epsilon(\sigma,w)$ is the sign one gets by transposing elements in $w=w_1\otimes\dots\otimes w_n$ complying with the Koszul convention.
	Namely, it is the sign of the subpermutation $\bar{\sigma}$ obtained by restricting $\sigma$ to the subset of odd-degree elements only (see also remark \ref{rem:practicalComputKoszulSign}).
	For instance, the Koszul sign pertaining to the transposition of two elements $\tau_{1,2}\in S_2$ is given by
	\begin{displaymath}
		\epsilon(\tau_{1,2};x_1,x_2)= (-)^{|x_1||x_2|}
	\end{displaymath}
	that is the sign given by the (Koszul) braiding.
	In other words $B_{\tau_{1,2}}\equiv B_{V,V}$. One can then extend this definition multiplicatively to an arbitrary permutation, using a decomposition into transpositions, to get the general sign.
\end{remark}
\begin{notation}[Shortening the notation of the Koszul sign]
	Although the odd and even Koszul signs depend on the specific sequence of homogeneous vectors that are permuted, we will often omit the dependence on the list of graded vectors to shorten the notation. Everything should be clear from the context.
	Namely, we will often abuse the notation $\epsilon(\sigma;x_1,\dots,x_n)$ by writing $\epsilon(\sigma)$, and writing $\chi(\sigma):= \epsilon(\sigma)(-)^{\sigma}$ for the sign involved in the definition of $P_\sigma$.
\end{notation}

\subsection{Tensor spaces}
Taking the tensor product of a graded vector space $V$ with itself $n$ times, one obtains the so-called \emph{$n$-th tensor space}:
%
\begin{displaymath}
	\otimes^n V = V^{\otimes n} = \underbrace{V\otimes\dots\otimes V}_{\text{n times}} ~.
\end{displaymath}

\subsubsection{Action of symmetric groups on Tensor spaces}\label{Section:ActionsonTensorSpaces}
When specialized to $n$ copies of the graded vector space $V$, the iso-natural transformations $B_\sigma$ and $P_\sigma$, defined in remark \ref{Rem:abstractpermutationofGVS} for a generic permutation $\sigma \in S_n$, can be regarded as canonical representations of the symmetric group $S_n$ on $V^{\otimes n}$ (for any $V\in \GVect$) induced by the Koszul braiding.
Namely, the graded vector space $\otimes^n V$, for any $n>1$, carries two natural actions of the group of permutations $S_n$: a canonical \emph{even representation}
\begin{displaymath}
	\morphism{B}
	{S_n \times V^{\otimes n}}
	{V^{\otimes n}}
	{(\sigma,v_1\otimes\dots\otimes v_n)}
	{\epsilon(\sigma,v) v_{\sigma_1}\otimes\dots \otimes v_{\sigma_n}}
	~,
\end{displaymath}
and an \emph{odd representation}
\begin{displaymath}
	\morphism{P}
	{S_n \times V^{\otimes n}}
	{V^{\otimes n}}
	{(\sigma,v_1\otimes\dots\otimes v_n)}
	{\chi(\sigma,v) v_{\sigma_1}\otimes\dots \otimes v_{\sigma_n}}
	~,
\end{displaymath}
where $\epsilon(\sigma,v)$ is the (even) Koszul sign and $\chi(\sigma,v)=(-)^\sigma\epsilon(\sigma,v)$ is the odd Koszul sign (see remark \ref{rem:aboutKoszulSign}). Conventionally, we consider these actions as left actions.
\\
Recall that these actions are natural in $V$; therefore, they can be equally encoded as group morphisms $ S_n \to \text{Iso.Nat.}(\otimes^n,\otimes^n)$ from the group of permutations to the group of natural isomorphisms from the functor $\otimes^n$ to itself.
%
\begin{remark}[Practical computation of the Koszul sign]\label{rem:practicalComputKoszulSign}
	Consider an element $x_1\otimes\dots\otimes x_n$ in $\otimes^n V$.
	Call $\zeta \in S_n$ the unique permutation that accumulates all odd degree elements on the left while preserving the relative order of the odd and of the even elements, that is
	$$ x_{\zeta_1}\otimes \dots \otimes x_{\zeta_n} = y_1\otimes\dots\otimes y_k\otimes z_1 \otimes \dots \otimes z_{n-k} $$
	with $|y_i|$ odd and $|z_i|$ even.
	\\
	Operationally, the permutation $\zeta$ can be constructed by scanning $x_1 \otimes \dots \otimes x_n$ element by element from the left and, whenever the considered $x_i$ is even, cycling it past all subsequent elements. Since all of these permutations involve only swapping with an element in even degree, there is no Koszul sign to take into account.
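	For instance, and purely as an illustration: if $n=3$ and $|x_1|$ is even while $|x_2|$ and $|x_3|$ are odd, the procedure above returns $x_{\zeta_1}\otimes x_{\zeta_2}\otimes x_{\zeta_3} = x_2\otimes x_3\otimes x_1$, with no sign produced, since only the even element $x_1$ has been moved past the other factors.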
	\\
	Given any permutation $\sigma \in S_n$, there are two unique permutations $\bar{\sigma}\in S_k$ and $\bar{\bar{\sigma}} \in S_{n-k}$ defined by the commutativity of the following diagram:
	\begin{displaymath}
		\begin{tikzcd}
			x_1\otimes \dots \otimes x_n \ar[r,"B_\zeta"] \ar[d,"B_\sigma"]
			& y_1\otimes\dots\otimes y_k\otimes z_1 \otimes \dots \otimes z_{n-k} \ar[d,"B_{\bar{\sigma}}\otimes B_{\bar{\bar{\sigma}}}"]
			\\
			\epsilon(\sigma)~x_{\sigma_1}\otimes \dots \otimes x_{\sigma_n} \ar[r,"B_\zeta"]
			& (-)^{\bar{\sigma}}~y_{\bar{\sigma}_1}\otimes\dots\otimes y_{\bar{\sigma}_k} \otimes z_{\bar{\bar{\sigma}}_1} \otimes \dots \otimes z_{\bar{\bar{\sigma}}_{n-k}}
		\end{tikzcd}
		~.
	\end{displaymath}
	These are respectively the subpermutation of $\sigma$ involving odd degree elements and the one involving even degree elements only.
	One can conclude that $\epsilon(\sigma;x_1\otimes\dots\otimes x_n) = (-)^{\bar{\sigma}}$~.
\end{remark}
\begin{proposition}\label{Proposition:PermutingHomogeneousMaps}
	The permutation action on $(\underline{\Hom}(V,W))^{\otimes n}$ is compatible with the conjugate of the twist action on $\underline{\Hom}(V^{\otimes n},W^{\otimes n})$, \ie
	\begin{displaymath}
		B_\sigma(f_1\otimes\dots\otimes f_n)
		=
		B_\sigma \circ (f_1\otimes\dots \otimes f_n) \circ B_\sigma^{-1}
	\end{displaymath}
	where $B$ on the left-hand side denotes the action of the permutation group on $(\underline{\Hom}(V,W))^{\otimes n}$ and on the right-hand side denotes the action on $V^{\otimes n}$ and $W^{\otimes n}$.
\end{proposition}
\begin{proof}
	Without loss of generality, consider two homogeneous maps $f,g$ and two arbitrary graded homogeneous vectors $x,y$; one has
	\begin{align*}
		B \circ (f\otimes g) \circ ( x \otimes y)
		=&~ (-)^{|x||g|} B(f(x)\otimes g(y)) = \\
		=&~ (-)^{|f||g|+|f||y| + |x||y|} g(y)\otimes f(x) = \\
		=&~ (-)^{|f||g|+ |x||y|} (g\otimes f) \circ (y\otimes x) = \\
		=&~ (-)^{|f||g|} (g\otimes f) \circ B \circ (x \otimes y) ~.
	\end{align*}
	Hence
	\begin{displaymath}
		B \circ (f \otimes g) = B(f\otimes g) \circ B ~,
	\end{displaymath}
	where $B(f\otimes g)$ has to be carefully interpreted as the braiding on $\underline{\Hom}(V,W)^{\otimes 2}$ and it is not a postcomposition.
\end{proof}
\begin{lemma}[Composition of the Koszul signs]
	Given $\sigma,\tau \in S_n$, we have:
	\begin{displaymath}
		B_{\tau \sigma} (v_1\otimes\dots\otimes v_n) = \epsilon(\sigma;v_1,\dots,v_n) B_{\tau}(v_{\sigma_1}\otimes\dots\otimes v_{\sigma_n})
	\end{displaymath}
	and
	\begin{displaymath}
		\epsilon(\tau \sigma;v_1,\dots,v_n)= \epsilon(\sigma;v_1,\dots,v_n)\epsilon(\tau;v_{\sigma_1},\dots,v_{\sigma_n}) ~.
	\end{displaymath}
\end{lemma}
Specializing the d\'ecalage isomorphism of definition \ref{Def:DecIso} to several tensor products of a graded vector space with itself, one obtains the following expression:
\begin{displaymath}
	\morphism{\dec}
	{(\otimes^n V)[n]}
	{\otimes^n (V[1])}
	{(v_1\otimes\dots\otimes v_n)_{[n]}}
	{(-)^{\sum_{k=1}^n |v_k|(n-k)}(v_{1~[1]}\otimes \dots \otimes v_{n~[1]})}
	~.
\end{displaymath}
\begin{proposition}[D\'ecalage intertwines even and odd permutation actions]\label{Prop:DecalageOfPermutation}
	\begin{displaymath}
		\begin{tikzcd}
			(V[1])^{\otimes n} \ar[r,"dec"] \ar[d,"B_{\sigma}"'] & (V^{\otimes n})[n] \ar[d,"(P_{\sigma}){[n]}"] \\
			(V[1])^{\otimes n} \ar[r,"dec"] & (V^{\otimes n})[n]
		\end{tikzcd}
	\end{displaymath}
\end{proposition}
\begin{proof}
	Specialize the equation given by diagram \eqref{Eq:DecalageInterwiningPermutations} to two copies of the same vector space and then iterate $n$ times.
\end{proof} % \begin{remark} Employing the language of the suspension maps rather then the suspension functors (see remark \ref{Rem:suspensionmaps}), the previous lemma can be subsumed by the following equation: \begin{displaymath} B_\sigma\circ \downarrow^{\otimes n} = \downarrow^{\otimes n} \circ P_\sigma~. \end{displaymath} \end{remark} \subsubsection{(Skew)-symmetric tensor spaces} Due to the presence of the aforementioned canonical actions of $S_n$ on $V^{\otimes n}$, it is natural to consider the corresponding subspaces of coinvariants elements: \\ \begin{paracol}{2} \begin{definition}[Symmetric tensor space] We call $\odot^n V$ the space of coinvariants with respect to the (even) canonical action of $S_n$ on $\otimes^n V$. In other words, it is the unique subspace $\odot^n V \subset \otimes^n V$ such that: \begin{displaymath} \begin{tikzcd}[ampersand replacement=\&] \otimes^n V \ar[r,"B_\sigma"] \& \otimes^n V \\ \odot^n V \ar[u,hook]\ar[ur,hook] \end{tikzcd} \end{displaymath} commutes for all $\sigma \in S_n$. \end{definition} \switchcolumn \begin{definition}[Skew-symmetric tensor space] We call $\Lambda^n V$ the space of coinvariants with respect to the the twisted canonical action of $S_n$ on $\otimes^n V$. In other words, it is the unique subspace $\Lambda^n V \subset \otimes^n V$ such that: \begin{displaymath} \begin{tikzcd}[ampersand replacement=\&] \otimes^n V \ar[r,"P_\sigma"] \& \otimes^n V \\ \Lambda^n V \ar[u,hook]\ar[ur,hook] \end{tikzcd} \end{displaymath} commutes for all $\sigma \in S_n$. \end{definition} \end{paracol} Exploiting the Abelian structure of $\GVect$ it is easy to introduce the projectors on the (skew)-symmetric subspaces: \\ \begin{paracol}{2} \begin{definition}[Symmetrizator]\label{Def:Symmetrizator} \begin{displaymath} \mathcal{S}= \left(\sum_{\sigma \in S_n} \frac{1}{n!} B_\sigma \right) : \otimes^n V \to \otimes^n V \end{displaymath} \end{definition} % \switchcolumn \begin{definition}[Skew-symmetrizator]\label{Def:SkewSymmetrizator} \begin{displaymath} \mathcal{A}= \left(\sum_{\sigma \in S_n} \frac{1}{n!} P_\sigma \right) : \otimes^n V \to \otimes^n V \end{displaymath} \end{definition} \end{paracol} % \begin{lemma}\label{lemma:splitsequencebrutta} There are several equivalent characterizations of $V^{\odot n}$ and $V^{\wedge n}$ \begin{enumerate} \item $\odot^n V =\im(\mathcal{S}) = \ker(\mathcal{A})$ $\qquad$ $\Lambda^n V = \im(\mathcal{A}) = \ker(\mathcal{S})$~. \item $\Lambda^n V = \frac{V^{\otimes n}}{\ker(\mathcal{A})} = \frac{V^{\otimes n}}{\im(\mathcal{S})}$ $\qquad$ $\odot^n V = \frac{V^{\otimes n}}{\ker(\mathcal{S})} = \frac{V^{\otimes n}}{\im(\mathcal{A})}$. \item $V^{\otimes n} = V^{\wedge n}\oplus V^{\odot n}$. 
	\end{enumerate}
\end{lemma}
\begin{proof}
	Consider the following sequence in the category of graded vector spaces
	\begin{equation}\label{eq:symskewsplitting-appendix}
		\begin{tikzcd}
			0 \ar[r, shift left =.5ex]
			&[-5ex] \Lambda^n V \ar[r, shift left =.75ex,hookrightarrow,"N_a"]
			& \bigotimes^n V \ar[r,two heads, shift left =.75ex,two heads,"\pi_s"] \ar[l,two heads, shift left =.75ex,two heads,"\pi_a"]
			& \bigodot^n V \ar[r] \ar[l, shift left =.75ex,hookrightarrow,"N_s"]
			&[-5ex] 0
		\end{tikzcd}~,
	\end{equation}
	where the mappings are explicitly defined on homogeneous elements as follows
	\begin{equation}\label{eq:SymSkewOperatorsDef-appendix}
		\mathclap{
		\begin{aligned}[c]
			\pi_s:&~ x_1\otimes\dots\otimes x_n \mapsto x_1\odot\dots\odot x_n ~, \\
			\pi_a:&~ x_1\otimes\dots\otimes x_n \mapsto x_1\wedge\dots\wedge x_n ~; \\
			{N_s}:&~ x_1\odot\dots\odot x_n \mapsto \left(\sum_{\sigma\in S_n}\epsilon(\sigma)\, x_{\sigma_1}\otimes\dots \otimes x_{\sigma_n}\right) ~, \\
			{N_a}:&~ x_1\wedge\dots\wedge x_n \mapsto \left(\sum_{\sigma\in S_n}\chi(\sigma)\, x_{\sigma_1}\otimes\dots \otimes x_{\sigma_n}\right) ~; \\
			\symAtor_{(n)}:&= \frac{1}{n!}N_s\circ \pi_s \equiv \left(\sum_{\sigma \in S_n} \frac{1}{n!} B_\sigma \right)~, \\
			\skewAtor_{(n)}:&= \frac{1}{n!}N_a\circ \pi_a \equiv \left(\sum_{\sigma \in S_n} \frac{1}{n!} P_\sigma \right) ~;
		\end{aligned}
		}
	\end{equation}
	and trivially extended by linearity.
	The diagram in equation \eqref{eq:symskewsplitting-appendix} is a short exact sequence (reading it both from left to right and from right to left).
	In fact, for any $x_1\wedge\dots\wedge x_n \in \Lambda^n V$, one gets that
	\begin{align*}
		\pi_s \circ N_a (x_1\wedge\dots\wedge x_n)
		=&~ \pi_s\left( \sum_{\sigma \in S_n}\chi(\sigma)\, x_{\sigma_1}\otimes\dots\otimes x_{\sigma_n}\right) = \\
		=&~ \left(\sum_{\sigma \in S_n}\chi(\sigma)\,\epsilon(\sigma)\right) x_1\odot\dots \odot x_n
		= \left(\sum_{\sigma \in S_n}(-)^\sigma\right) x_1\odot\dots \odot x_n = \\
		=&~ 0~.
	\end{align*}
	In particular one has $ \mathcal{S}\circ \mathcal{A} = \mathcal{A}\circ \mathcal{S} = 0$ and this proves the first claim.
	\\
	The sequence is both left and right split, \ie $\frac{1}{n!}\pi_a \circ N_a = \id_{\Lambda^n V}$ and $\frac{1}{n!}\pi_s \circ N_s = \id_{V^{\odot n}}$. The other two claims follow from the splitting lemma (see \cite{Weibel} or \cite{nlab:split_exact_sequence}).
\end{proof}
\begin{example}
	When $n=2$, one has
	\begin{align*}
		V\odot V &=
		\left\lbrace v \in \otimes^2 V \; \left| \; B_{\sigma} v = v \quad \forall \sigma \in S_2 \right.\right\rbrace = \\
		&= \frac{V\otimes V}{\langle v_1\otimes v_2 - (-)^{|v_1||v_2|}v_2\otimes v_1 \rangle_{v_i\in V}}~,
	\end{align*}
	in the symmetric case, and
	\begin{align*}
		V\wedge V &=
		\left\lbrace v \in \otimes^2 V \; \left| \; P_{\sigma} v = v \quad \forall \sigma \in S_2 \right.\right\rbrace = \\
		&= \frac{V\otimes V}{\langle v_1\otimes v_2 + (-)^{|v_1||v_2|}v_2\otimes v_1 \rangle_{v_i\in V}} ~,
	\end{align*}
	in the skew-symmetric case.
\end{example}
\begin{lemma}\label{Lemma:DecalageRestrictToSymTens}
	The d\'ecalage isomorphism restricts nicely to the symmetric and skew-symmetric subspaces
	\begin{displaymath}
		\begin{tikzcd}[column sep = large]
			(V^{\odot n})[n] \ar[r,"dec\big|_{V^{\odot n}}"] \ar[d,hook]& (V[1])^{\wedge n}\ar[d,hook] \\
			(V^{\otimes n}) [n] \ar[r,"dec"] & (V[1])^{\otimes n} \\
			(V^{\wedge n})[n] \ar[r,"dec\big|_{V^{\wedge n}}"'] \ar[u,hook]& (V[1])^{\odot n} \ar[u,hook]
		\end{tikzcd}
	\end{displaymath}
\end{lemma}
\begin{proof}
	Consider, without loss of generality, the case $n=2$.
	The commutation of the upper and lower squares follows from the following equation:
	\begin{displaymath}
		\begin{aligned}
			dec( v_1\otimes v_2 &\pm (-)^{|v_1||v_2|} v_2 \otimes v_1 ) ~= \\
			=& (-)^{|v_1|} v_{1~[1]}\otimes v_{2~[1]} \pm (-)^{|v_2||v_1|+|v_2|} v_{2~[1]}\otimes v_{1~[1]} = \\
			=& (-)^{|v_1|} (v_{1~[1]}\otimes v_{2~[1]} \pm (-)^{|v_2||v_1|+|v_2|+|v_1|} v_{2~[1]}\otimes v_{1~[1]}) = \\
			=& (-)^{|v_1|} (v_{1~[1]}\otimes v_{2~[1]} \pm (-)^{|v_{1~[1]}||v_{2~[1]}|+1} v_{2~[1]}\otimes v_{1~[1]}) = \\
			=& (-)^{|v_1|} (v_{1~[1]}\otimes v_{2~[1]} \mp (-)^{|v_{1~[1]}||v_{2~[1]}|} v_{2~[1]}\otimes v_{1~[1]}) ~.
		\end{aligned}
	\end{displaymath}
\end{proof}

\section{Graded algebras and coalgebras}\label{section:GradedAlgebras}
In the course of the thesis, several different algebraic structures over graded vector spaces are considered.
In this section, we extend the basic definitions of (co)associative (co)algebras to the context of graded vector spaces and recall the notion of tensor (co)algebra associated to any given graded vector space.

\subsection{(co)Associative (co)Algebras}\label{sec:abstractAlgebras}
In this subsection, we review the basic notions of algebras and coalgebras.
Since there is a notion of duality between these two concepts, we will present them in parallel columns.
What follows can be considered standard material; hence most of the proofs will be omitted. Further details can be found, for instance, in \cite{Loday} and \cite{Manetti-website-coalgebras}.
\\
Let $\mathbb{K}$ be a field of characteristic $0$. Consider the category of graded vector spaces over the field $\mathbb{K}$.
\globalcounter*
\sidebyside{
\begin{definition}[(Graded) Algebra]
	We call a \emph{Graded Algebra} a pair $(A,\mu)$ composed of a graded vector space $A$ and a graded morphism
	\begin{displaymath}
		\morphism{\mu}
		{A\otimes A}
		{A}
		{a\otimes b}
		{a\cdot b}
	\end{displaymath}
	(\ie an homogeneous bilinear map in degree $0$).
\end{definition}
}{
\begin{definition}[(Graded) coAlgebra]
	We call a \emph{Graded coAlgebra} a pair $(C,\Delta)$ composed of a graded vector space $C$ and a graded morphism (expressed in the Sweedler notation)
	\begin{displaymath}
		\morphism{\Delta}
		{C}
		{C\otimes C}
		{x}
		{\sum x_{(1)}\otimes x_{(2)}}
		~.
	\end{displaymath}
\end{definition}
}
\sidebyside{
\begin{definition}[Algebra morphism]
	Given two graded algebras $A=(A,\mu)$ and $A'=(A',\mu')$, a graded morphism $\phi:A\to A'$ is an \emph{algebra morphism} if the following diagram commutes:
	\begin{displaymath}
		\begin{tikzcd}[ampersand replacement=\&]
			A\otimes A \ar{r}{\mu} \ar{d}[swap]{\phi\otimes\phi} \& A \ar{d}{\phi} \\
			A'\otimes A' \ar{r}[swap]{\mu'} \& A'
		\end{tikzcd}
		~.
	\end{displaymath}
\end{definition}
}{
\begin{definition}[coAlgebra morphism]
	Given two graded coalgebras $C=(C,\Delta)$ and $C'=(C',\Delta')$, a graded morphism $\psi:C\to C'$ is a \emph{coalgebra morphism} if the following diagram commutes:
	\begin{displaymath}
		\begin{tikzcd}[ampersand replacement=\&]
			C \ar{r}{\Delta} \ar{d}[swap]{\psi} \& C\otimes C \ar{d}{\psi\otimes\psi} \\
			C' \ar{r}[swap]{\Delta'} \& C'\otimes C'
		\end{tikzcd}
		~.
	\end{displaymath}
\end{definition}
}
%
\begin{remark}\label{Rem:CoalgebrasNotLinear}
	Observe that the space of (co)algebra morphisms is not a linear subspace of the space of graded morphisms; \eg the sum of two coalgebra morphisms $f$ and $g$ is in general not a coalgebra morphism, since it only satisfies
	\begin{displaymath}
		\Delta (f + g ) = (f\otimes f + g \otimes g ) \Delta ~.
	\end{displaymath}
	However, one can define a deformation of a coalgebra morphism $f$ as a linear map $h$ such that $(f+h)$ is a coalgebra morphism.
	Accordingly, a deformation $h$ must satisfy the following equation:
	\begin{displaymath}
		\Delta h = \left( h\otimes h + h \otimes f + f \otimes h \right) \Delta ~.
	\end{displaymath}
\end{remark}
\sidebyside{
\begin{definition}[Associative algebra]
	A given algebra $(A,\mu)$ is said to be \emph{associative} if the following diagram commutes
	\begin{displaymath}
		\begin{tikzcd}[ampersand replacement=\&]
			A \otimes A \otimes A \ar{r}{\mu\otimes\id_A}\ar{d}[left]{\id_A\otimes\mu} \& A \otimes A \ar{d}{\mu} \\
			A \otimes A \ar{r}{\mu} \& A
		\end{tikzcd}
		~.
	\end{displaymath}
\end{definition}
%
}{
\begin{definition}[coAssociative coalgebra]
	A given coalgebra $(C,\Delta)$ is said to be \emph{coassociative} if the following diagram commutes
	\begin{displaymath}
		\begin{tikzcd}[ampersand replacement=\&]
			C \ar{r}{\Delta} \ar{d}[left]{\Delta} \& C \otimes C \ar{d}{\Delta\otimes\id_C} \\
			C \otimes C \ar{r}{\id_C \otimes \Delta} \& C\otimes C\otimes C
		\end{tikzcd}
		~.
	\end{displaymath}
\end{definition}
%
}
\sidebyside{
\begin{notation}[Iterated multiplication]
	On an associative algebra one will often consider the \emph{iterated product} $\mu^n: A^{\otimes(n+1)}\to A$ defined as follows:
	\begin{displaymath}
		\mu^n := \mu(\mu^{n-1}\otimes \id)
	\end{displaymath}
	with $\mu^0= \id$ and $\mu^1=\mu$.
\end{notation}
}{
\begin{notation}[Iterated comultiplication]
	On a coassociative coalgebra one will often consider the \emph{iterated coproduct} $\Delta^n: C \to C^{\otimes(n+1)}$ defined as follows:
	\begin{displaymath}
		\Delta^n := (\Delta\otimes\id\otimes\dots\otimes\id)\circ\Delta^{n-1}
	\end{displaymath}
	with $\Delta^0= \id$ and $\Delta^1=\Delta$.
\end{notation}
%
\begin{lemma}\label{Lemma:IteratedCoalgebra}
	Given a coalgebra $(C,\Delta)$, one has
	\begin{displaymath}
		(\Delta^p \otimes \Delta^q)\Delta = \Delta^{p+q+1} ~.
	\end{displaymath}
	Furthermore, for any given coalgebra morphism $f:(C,\Delta_C)\to (D,\Delta_D)$, one has
	\begin{displaymath}
		f^{\otimes(n+1)} \Delta^n_C = \Delta^n_D \circ f ~.
	\end{displaymath}
\end{lemma}
}
\sidebyside{
\begin{definition}[(Anti)commutative algebra]
	A given algebra $(A,\mu)$ is said to be \emph{(anti)commutative} if the following diagram commutes
	\begin{displaymath}
		\begin{tikzcd}[ampersand replacement=\&]
			A \otimes A \ar{rr}{B_{A,A}}\ar{rr}[swap]{(-B_{A,A})} \ar{dr}[swap]{\mu} \&\& A\otimes A \ar{dl}{\mu} \\
			\& A \&
		\end{tikzcd}
		~,
	\end{displaymath}
	where $B$ and $P=-B$ denote the even and odd braiding operators (see notation \ref{not:shortenBraiding}).
\end{definition}
}{
\begin{definition}[(Anti)cocommutative coalgebra]
	A given coalgebra $(C,\Delta)$ is said to be \emph{co(anti)commutative} if the following diagram commutes
	\begin{displaymath}
		\begin{tikzcd}[ampersand replacement=\&]
			\&C \ar{dl}[swap]{\Delta} \ar{dr}{\Delta} \& \\
			C \otimes C \ar{rr}{B_{C,C}}\ar{rr}[swap]{(-B_{C,C})} \& \& C\otimes C
		\end{tikzcd}
		~,
	\end{displaymath}
	where $B$ and $P=-B$ denote the even and odd braiding operators (see notation \ref{not:shortenBraiding}).
\end{definition}
%
}
\sidebyside{
\begin{definition}[Unital algebra]
	A given algebra $(A,\mu)$ is said to be \emph{unital} if there exists an element $u\in A$, called \emph{unit}, with $u:\mathbb{K}\to A$ denoting also the corresponding linear map $\lambda\mapsto\lambda u$, such that the following diagram commutes
	\begin{displaymath}
		\hspace{-.1\textwidth}
		\begin{tikzcd}[ampersand replacement=\&]
			\mathbb{K}\otimes A \ar{r}{u\otimes\id_A}\ar{dr}[sloped,swap]{\sim} \& A\otimes A \ar{d}{\mu} \& A \otimes \mathbb{K} \ar{l}[swap]{\id_A\otimes u} \ar{dl}[sloped,swap]{\sim} \\
			\& A \&
		\end{tikzcd}
		~.
	\end{displaymath}
\end{definition}
%
}{
\begin{definition}[coUnital coalgebra]
	A given coalgebra $(C,\Delta)$ is said to be \emph{counital} if there exists a graded morphism $\epsilon:C\to\mathbb{K}$, called \emph{counit} (or \emph{augmentation map}), such that the following diagram commutes
	\begin{displaymath}
		\begin{tikzcd}[ampersand replacement=\&]
			\& C \ar{d}{\Delta} \ar{dl}[sloped]{\sim} \ar{dr}[sloped]{\sim} \& \\
			\mathbb{K}\otimes C \& C \otimes C \ar{l}{\epsilon\otimes\id_C} \ar{r}[swap]{\id_C\otimes\epsilon} \& C \otimes \mathbb{K}
		\end{tikzcd}
		~.
	\end{displaymath}
\end{definition}
}
\sidebyside{
\begin{remark}
	Given two unital algebras $(A,\mu,u)$ and $(A',\mu',u')$, a \emph{unital morphism} is an algebra morphism $f:A\to A'$ such that $u' = f(u)$.
	Unital algebras form a non-full subcategory of the category of graded algebras with algebra morphisms.
\end{remark}
}{
\begin{remark}
	Given two counital coalgebras $(C,\Delta,\epsilon)$ and $(C',\Delta',\epsilon')$, a \emph{counital morphism} is a coalgebra morphism $f:C\to C'$ such that $\epsilon' f = \epsilon$.
	Counital coalgebras form a non-full subcategory of the category of graded coalgebras with coalgebra morphisms.
\end{remark}
}
\begin{remark}
	Observe that the ground field $\mathbb{K}$ itself, in addition to being a unital associative algebra with its product, is also a counital coassociative coalgebra with $\Delta(1_{\mathbb{K}})=1_{\mathbb{K}} \otimes 1_{\mathbb{K}}$.
\end{remark}
\begin{notation}
	From now on, we will often abbreviate associative unital graded algebras as "algebras" and coassociative counital graded coalgebras as "coalgebras".
\end{notation}
\sidebyside{
\begin{definition}[Augmented Algebra]
	A unital associative graded algebra is called \emph{augmented} when there is a given morphism of algebras $\epsilon:A\to \mathbb{K}$, called \emph{augmentation map}\footnote{In particular $\epsilon(1_A)=1_{\mathbb{K}}$.}.
\end{definition}
%
}{
\begin{definition}[coAugmented coAlgebra]
	A counital coassociative graded coalgebra is called \emph{coaugmented} when there is a given morphism of coalgebras $u:\mathbb{K}\to C$, called \emph{coaugmentation map}\footnote{The coproduct on $\mathbb{K}$ is given by $1_{\mathbb{K}}\mapsto 1_{\mathbb{K}}\otimes 1_{\mathbb{K}} $, in particular $\epsilon u = \id_{\mathbb{K}}$.}.
\end{definition}
}
\sidebyside{
\begin{theorem}
	If $A$ is augmented, then $A$ is canonically isomorphic to $\overline{A}\oplus\mathbb{K}$ where $\overline{A}=\ker(\epsilon)$ is called the \emph{augmentation ideal}.
	Furthermore, the category of nonunital associative algebras is equivalent to the category of unital augmented associative algebras.
\end{theorem}
}{
\begin{theorem}
	If $C$ is coaugmented, then $C$ is canonically isomorphic to $\overline{C}\oplus\mathbb{K}$ where $\overline{C}=\ker(\epsilon)$.
	The reduced coproduct $\overline{\Delta}:\overline{C}\to\overline{C}\otimes\overline{C}$ is given by
	$$ \overline{\Delta}(x):= \Delta(x) - x\otimes 1 - 1 \otimes x ~. $$
	Furthermore, the category of coaugmented coalgebras is equivalent to the category of non-counital coassociative coalgebras.
\end{theorem}
(For a proof, see \cite{Reinhold2019}.)
}
\sidebyside{
\begin{definition}[Nilpotent Algebra]
	An associative algebra $(A,\mu)$ is said to be \emph{nilpotent} if there exists some positive integer $n\in\mathbb{N}$ such that $\mu^n=0$.
\end{definition} }{ \begin{definition}[coNilpotent coalgebra] A coassociative coalgebra $(C,\Delta)$ is called \emph{conilpotent} if for any $x\in \overline{C}$ there exists some positive integer $n\in\mathbb{N}$ such that $\overline{\Delta}^n(x) =0 $. \end{definition} \note{Perhaps this definition is rather that of a \emph{locally} conilpotent coalgebra.} % } \vspace{.5em} % \sidebyside{ Consider a graded vector space $W$ and a subcategory $\cat[A]$ of the category of associative algebras. \begin{definition}[Free algebra generated by $W$ in the subcategory ${\cat[A]}$]\label{Def:FreeAlgebra} The \emph{Free associative algebra} generated by $W$ in the subcategory ${\cat[A]}$ is an associative algebra $\Free(W)\in\cat[A]$ equipped with a graded morphism $i:W\to \Free(W)$ which satisfies the following universal property: \\ for any graded vector space morphism $f:W\to A$ where $A$ is an associative algebra in $\cat[A]$, there exists a unique morphism $\tilde{f}:\Free(W)\to A$ of $\cat[A]$ such that the following diagram commutes: \begin{displaymath} \begin{tikzcd}[ampersand replacement=\&] W \ar{r}{i}\ar{dr}[swap]{\forall f} \& \Free(W)\ar{d}{\exists!\tilde{f}} \\ \& A \end{tikzcd} ~. \end{displaymath} \end{definition} }{ Consider a graded vector space $W$ and a subcategory $\cat[C]$ of the category of coassociative coalgebras. \begin{definition}[coFree coalgebra generated by $W$]\label{Def:FreeCoAlgebra} The \emph{coFree coassociative coalgebra} generated by $W$ in the subcategory ${\cat[C]}$ is a coalgebra $\Free^c(W)\in\cat[C]$ equipped with a graded morphism $p:\Free^c(W)\to W$ which satisfies the following universal property: \\ for any graded vector space morphism $\varphi:C\to W$ where $C$ is a coassociative coalgebra in $\cat[C]$, there exists a unique morphism $\tilde{\varphi}:C\to\Free^c(W)$ of $\cat[C]$ such that the following diagram commutes: \begin{displaymath} \begin{tikzcd}[ampersand replacement=\&] C \ar{dr}{\forall\varphi} \ar{d}[swap]{\exists! \tilde{\varphi}} \& \\ \Free^c(W) \ar{r}{p} \& W \end{tikzcd} ~. \end{displaymath} \end{definition} } \begin{proposition} The (co)free (co)associative (co)algebra on $W$ is uniquely determined up to isomorphism. \end{proposition} \begin{remark} Putting the previous definitions side by side emphasizes that the notion of coalgebra is obtained by formally dualizing, \ie reversing the arrows in the diagrams, the notion of algebra. However, these two notions are not equivalent. While a coalgebra always gives an algebra by linear dualization, the dual of an algebra is not in general a coalgebra; the latter is guaranteed only for finite dimensional algebras. \\ Heuristically, this asymmetry comes from the fact that, given any (possibly infinite-dimensional) graded vector space $V$, and considering the corresponding linear dual $V^\ast := \Hom(V,\mathbb{K})$, there exists a canonical map \begin{displaymath} \morphism{\omega} {V^\ast\otimes V^\ast} {(V\otimes V)^\ast} {(f\otimes g)} { \left( \Almorphism{\omega(f\otimes g)} {V\otimes V} {\mathbb{K}} {x\otimes y} {f(x)g(y)} \right) } \end{displaymath} while, in general, there exists no natural map $(V\otimes V)^\ast \to V^\ast \otimes V^\ast$. \\ When $V$ is finite dimensional, $\omega$ is an isomorphism. If $(C,\Delta)$ is a coalgebra, then $(C^\ast,\Delta^\ast \circ\omega)$ is an algebra (no finiteness hypothesis is needed). If $(A,\mu)$ is an algebra which is finite dimensional, then $(A^\ast,\omega^{-1}\circ \mu^\ast)$ is a coalgebra. See \cite{Loday,Manetti-website} for further details.
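As a concrete illustration of this asymmetry (a standard example, recalled here purely for orientation), consider the coalgebra $C$ with basis $\{t^n\}_{n\geq 0}$ and coproduct $\Delta(t^n)=\sum_{i+j=n} t^i\otimes t^j$: its full linear dual is the algebra of formal power series,
\begin{displaymath}
C^\ast \cong \mathbb{K}[[s]] ~, \qquad (f\cdot g)(t^n) := \sum_{i+j=n} f(t^i)\, g(t^j) ~,
\end{displaymath}
with no finiteness assumption involved. Conversely, dualizing the product $\mu$ of the (infinite-dimensional) polynomial algebra $A=\mathbb{K}[t]$, the element $\mu^\ast(f)\in (A\otimes A)^\ast$ need not lie in the image of $\omega$ for an arbitrary $f\in A^\ast$, so no coalgebra structure on all of $A^\ast$ is obtained in this way.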
\end{remark} \subsection{Tensor algebras} Consider a graded vector space $V$; by taking the direct sum of all the possible tensor powers one gets the so-called \emph{tensor algebra}. % \begin{definition}[Tensor algebra]\label{Def:TensorAlgebra} Given a graded vector space $V\in \GVect$, we call \emph{tensor algebra of $V$} the graded algebra $(T(V),\otimes)$ where \begin{displaymath} T(V) = \oplus_{k\geq 0} V^{\otimes k} ~, \end{displaymath} conventionally it is assumed that $V^{\otimes 0} = \mathbb{R}$, and $\otimes$ denotes, with a slight (yet standard) abuse of notation, the standard projection \begin{displaymath} \morphism{\otimes} {V\times V} {V\otimes V = \frac{\Free(V\oplus V)}{\sim}} {(v_1,v_2)} {v_1\otimes v_2} \end{displaymath} which extends naturally to $T(V)$ yielding a bilinear map (concatenation product) and therefore defining an algebra structure on $T(V)$. \\ We call \emph{reduced tensor algebra} the subalgebra \begin{displaymath} \overline{T(V)} = \oplus_{k > 0} V^{\otimes k} \hookrightarrow T(V) ~. \end{displaymath} \end{definition} % \begin{remark} Recall that we do not consider $V^{\otimes n}$ as the $\ZZ^n$-graded vector space \begin{displaymath} \left( (i_1,\dots,i_n) \mapsto V_{i_1}\otimes\dots\otimes V_{i_n} \right) \end{displaymath} but it is implicit in the definition of $\otimes$ to pass to the total space. Similarly, we do not regard $T(V)$ as the bigraded vector space \begin{displaymath} \left( (s,n)\mapsto (V^{\otimes n})_s = \bigoplus_{i_1+\dots+i_n=s} V_{i_1}\otimes\dots\otimes V_{i_n} \right) ~. \end{displaymath} In other terms, the $n$-th component of the tensor algebra reads as follows \begin{displaymath} T^n(V) = \bigoplus_{q=0}^n (V^{\otimes q})_n \neq V^{\otimes n} ~. \end{displaymath} \end{remark} % \begin{proposition} The tensor algebra of a graded vector space $V$ satisfies the following properties: \begin{enumerate} \item $T(V)$ is an associative unital graded algebra, $\overline{T}(V)$ is an associative non-unital subalgebra of $T(V)$; \item $T(V)$ is augmented by $\epsilon(v_1\otimes\dots\otimes v_n)=0$ for any $ n\geq 1$ and $\epsilon(1)=1$, therefore $T(V)=\mathbb{R}\oplus \overline{T}(V)$. \item $T(V)$ is free in the category of unital associative algebras and $\overline{T(V)}$ is free in the category of non-unital associative algebras. That means that the following universal property holds: \begin{itemize} \item for any given graded unital associative algebra $A$ and for each linear degree preserving map $f:V\to A$, there exists a unique homomorphism of unital graded algebras $\varphi: T(V)\to A$ that agrees with $f$ on $V$, \ie the following diagram commutes \begin{displaymath} \begin{tikzcd} T(V) \ar[dr,"\exists ! F"] & \\ V \ar[u] \ar[r,"\forall f"] & A \end{tikzcd} ~. \end{displaymath} \end{itemize} \end{enumerate} \end{proposition} % Similar constructions can be repeated on the subspaces of symmetric and skew-symmetric tensors. We present them side by side to emphasize the parallelism of the construction. However, unlike section \ref{sec:abstractAlgebras}, the two columns do not express a categorical duality.
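For concreteness, the following toy example (the specific choice of $V$ is ours, purely for illustration) may help to visualize the grading convention just discussed.
\begin{example}
Let $V$ be the graded vector space with basis $x$ in degree $1$ and $y$ in degree $2$. Then the degree $3$ component of the tensor algebra is
\begin{displaymath}
T^3(V) = (V^{\otimes 2})_3 \oplus (V^{\otimes 3})_3 = \langle x\otimes y,\, y\otimes x \rangle \oplus \langle x\otimes x\otimes x \rangle ~,
\end{displaymath}
which is $3$-dimensional, while the tensor power $V^{\otimes 3}$ is $8$-dimensional and spreads over the degrees $3,\dots,6$.
\end{example}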
% \sidebyside{ \begin{definition}[Symmetric tensor algebra] Given a graded vector space $V\in \GVect$, we call \emph{symmetric tensor algebra of $V$} the graded algebra $(S(V),\odot)$ where \begin{displaymath} S(V) = \oplus_{k\geq 0} V^{\odot k} ~, \end{displaymath} conventionally it is assumed that $V^{\odot 0} = \mathbb{R}$, and $\odot$ is the symmetrization of $\otimes$ \begin{displaymath} \hspace{-.1\textwidth} \begin{tikzcd}[row sep=-1ex, column sep= small, ampersand replacement=\&] V\times V \ar{r}{\otimes} \ar[bend left = 30]{rr}{\odot} \& V\otimes V \ar{r}{\mathcal{S}} \& V \odot V \\ (v,w) \ar[mapsto]{r} \& v\otimes w \ar[mapsto]{r} \& \displaystyle\sum_{\sigma \in S_2} B_\sigma(v\otimes w) \end{tikzcd} \end{displaymath} which extends naturally to $S(V)$ yielding a bilinear map (concatenation product) and therefore defines an algebra structure on $S(V)$. \end{definition} % % }{ \begin{definition}[Exterior algebra] Given a graded vector space $V\in \GVect$, we call \emph{exterior algebra of $V$} the graded algebra $(\Lambda(V),\wedge)$ where \begin{displaymath} \Lambda(V) = \oplus_{k\geq 0} V^{\wedge k} ~, \end{displaymath} conventionally it is assumed that $V^{\wedge 0} = \mathbb{R}$, and $\wedge$ is the skew-symmetrization of $\otimes$ \begin{displaymath} \begin{tikzcd}[row sep=-1ex, column sep= small,ampersand replacement=\&] V\times V \ar{r}{\otimes} \ar[bend left = 30]{rr}{\wedge} \& V\otimes V \ar{r}{\mathcal{A}} \& V \wedge V \\ (v,w) \ar[mapsto]{r} \& v\otimes w \ar[mapsto]{r} \& \displaystyle\sum_{\sigma \in S_2} P_\sigma(v\otimes w) \end{tikzcd} \end{displaymath} which extends naturally to $\Lambda(V)$ yielding a bilinear map (concatenation product) and therefore defines an algebra structure on $\Lambda(V)$. \end{definition} % } \sidebyside{ % \begin{definition} We call \emph{reduced symmetric tensor algebra} the subalgebra \begin{displaymath} \overline{S(V)} = \oplus_{k > 0} V^{\odot k} \hookrightarrow S(V) ~. \end{displaymath} \end{definition} % % }{ % \begin{definition} We call \emph{reduced exterior algebra} the subalgebra \begin{displaymath} \overline{\Lambda(V)} = \oplus_{k > 0} V^{\wedge k} \hookrightarrow \Lambda(V) ~. \end{displaymath} \end{definition} % } \sidebyside{ % \begin{proposition}[Symmetric tensor algebra] $S(V)= \frac{T(V)}{I_S}$ where $I_S$ is the two-sided homogeneous ideal generated by elements of the form $$v_1\otimes v_2 -(-)^{|v_1||v_2|}v_2\otimes v_1~.$$ \end{proposition} % }{ % \begin{proposition}[Skew-symmetric tensor algebra] $\Lambda(V)= \frac{T(V)}{I_A}$ where $I_A$ is the two-sided homogeneous ideal generated by elements of the form $$v_1\otimes v_2 +(-)^{|v_1||v_2|}v_2\otimes v_1~.$$ \end{proposition} % } \sidebyside{ \begin{proposition} $S(V)$ is the free graded commutative associative algebra over $V$, \ie it satisfies the analogue of the universal property of $T(V)$ but in the subcategory of graded unital associative commutative algebras. That means that the following universal property holds: \begin{itemize} \item for any given graded unital associative commutative algebra $A$ and for each linear degree preserving map $f:V\to A$, there exists a unique morphism of graded unital associative commutative algebras $\varphi: S(V)\to A$ that agrees with $f$ on $V$, \ie the following diagram commutes \begin{displaymath} \begin{tikzcd}[ampersand replacement=\&] S(V) \ar{dr}{\exists ! F} \& \\ V \ar{u} \ar{r}{\forall f} \& A \end{tikzcd} \end{displaymath} \end{itemize} \end{proposition} }{ \begin{proposition} $\Lambda(V)$ is the free graded anti-commutative associative algebra over $V$, \ie it satisfies the analogue of the universal property of $T(V)$ but in the subcategory of graded unital associative anti-commutative algebras. That means that the following universal property holds: \begin{itemize} \item for any given graded unital associative anti-commutative algebra $A$ and for each linear degree preserving map $f:V\to A$, there exists a unique homomorphism of graded unital associative anti-commutative algebras $\varphi: \Lambda(V)\to A$ that agrees with $f$ on $V$, \ie the following diagram commutes \begin{displaymath} \begin{tikzcd}[ampersand replacement=\&] \Lambda(V) \ar{dr}{\exists ! F} \& \\ V \ar{u} \ar{r}{\forall f} \& A \end{tikzcd} \end{displaymath} \end{itemize} \end{proposition} } % \begin{notation}[Inclusion $N$]\label{Remark:ManettiNotation} For later reference, let us note that, from the very definition of the symmetrizator operator (definition \ref{Def:Symmetrizator}), the direct sum of all symmetrizators \begin{displaymath} \mathcal{S}= \sum_{n\geq 0} \mathcal{S}_{(n)} ~ : T(V) \to T(V) \end{displaymath} admits the following trivial factorization \begin{displaymath} \begin{tikzcd}[column sep = huge] T(V) \ar[rr,"\mathcal{S}"] \ar[d,two heads,"\pi"] && T(V) \ar[dll,two heads,"\pi"] \\ S(V) \ar[rr,"\sum_{n\geq 0} \frac{1}{n!} \id_{V^{\odot n}}"'] && S(V) \ar[u,hook,"N"] \end{tikzcd} \end{displaymath} where $\pi$ and $N$ are given in equation \eqref{eq:SymSkewOperatorsDef-appendix}. We omit the similar construction that can be made in the skew-symmetric case. \end{notation} % \begin{remark}[Tensor functors]\label{Remark:TensorFunctors} One can easily assemble an endofunctor $T$ which maps a graded vector space $V$ to the corresponding tensor algebra, the action on a morphism $f \in \Hom(V,W)$ being \begin{displaymath} \morphism{ T(f) := \sum_{n\geq 0} f^{\otimes n} ~} {T(V)}{T(W)} {x_1\otimes\dots\otimes x_n} {f(x_1)\otimes\dots\otimes f(x_n)} . \end{displaymath} Similarly, it is possible to introduce a \emph{symmetric tensor endofunctor} $S$ given on morphisms by \begin{displaymath} \morphism{ S(f)} {S(V)}{S(W)} {x_1\odot\dots\odot x_n} {f(x_1)\odot\dots\odot f(x_n)} . \end{displaymath} Note that the following factorization holds \begin{displaymath} \begin{tikzcd} S(V) \ar[r,hook,"N"] \ar[d,"S(f)"]& T(V) \ar[d,"T(f)"] \\ S(W) \ar[r,hook,"N"] & T(W) \end{tikzcd} \end{displaymath} since $ (f\otimes f) \circ B = B \circ (f\otimes f)$ for any degree $0$ map $f$, hence $S(f) = \pi \circ T(f) \circ N$. \\ Furthermore, associativity of the tensor product of graded vector spaces ensures that $T$ and $S$ can be safely regarded as functors valued in the category of graded associative algebras. \end{remark} \subsection{Tensor coalgebras}\label{Section:TensorCoalgebras} In addition to the natural notion of associative algebra, the tensor space $T(V)$ (definition \ref{Def:TensorAlgebra}) can also be equipped with a canonical coalgebra structure.
% \begin{definition}[Deconcatenation coproduct]\label{def:Decon} Given a graded vector space $V\in \GVect$, we call \emph{deconcatenation coproduct} the graded morphism \begin{displaymath} \Delta = \sum_{n \geq 0} \sum_{a=0}^{n} \mathfrak{d}_{a,n-a} ~:~ T(V) \longrightarrow T(V)\otimes T(V) \end{displaymath} given by the following operators \begin{displaymath} \morphism{\mathfrak{d}_{a,n-a}} {V^{\otimes n}} {V^{\otimes a}\otimes V^{\otimes n-a}} {x_1\otimes \dots \otimes x_n} {(x_1\otimes\dots\otimes x_a)\otimes (x_{a+1}\otimes \dots \otimes x_n)} . \end{displaymath} Similarly, we call \emph{reduced deconcatenation coproduct}, the graded morphism \begin{displaymath} \overline{\Delta} = \sum_{n > 0} \sum_{a=1}^{n-1} \mathfrak{d}_{a,n-a} ~:~ T(V) \longrightarrow T(V)\otimes T(V) \end{displaymath} ~. \end{definition} \begin{notation}[Tensor coalgebra] We call \emph{tensor coalgebra of $V$} and \emph{reduced tensor coalgebra} the pairs $T^c(V)= (T(V),\Delta)$ and $\overline{T^c}(V)= (\overline{T(V)},\overline{\Delta})$ respectively. The latter acts on decomposable elements as follows \begin{displaymath} \morphism{\overline{\Delta}} {\overline{T(V)}}{\overline{T(V)}\otimes \overline{T(V)}} {x_1\otimes\dots \otimes x_n} {\displaystyle\sum_{i=1}^{n-1}(x_1\otimes\dots\otimes x_i)\otimes (x_{i+1}\otimes\dots\otimes x_n)}~. \end{displaymath} \end{notation} % \begin{lemma}\label{Lemma:TensorCoalgebraProperties} The tensor coalgebra of a graded vector space $V$ satisfies the following properties: \begin{enumerate} \item $T^c(V)$ is a coassociative counital graded coalgebra. $\overline{T^c(V)}$ is a non-counital coassociative sub-coalgebra of $T(V)$, \ie $$\overline{T^c(V)} = \oplus_{k > 0} V^{\otimes k} \hookrightarrow T^c(V) ~;$$ \item $T^c(V)$ is coaugmented by the standard inclusion $u: \mathbb{R} \hookrightarrow T(V)$, therefore $T^c(V)=\mathbb{R}\oplus \overline{T^c(V)}$. \item $T^c(V)$ and $\overline{T^C(V)}$ are conilpotent, the iterated coproducts reads as follows: \begin{displaymath} \mathclap{ \begin{aligned} \Delta^s(v_1\otimes\dots\otimes v_n) =& \sum_{0\leq i_1 < i_2 < \dots < i_{s} \leq n} (v_1\otimes\dots\otimes v_{i_1})\otimes\dots\otimes (v_{i_s+1}\otimes\dots\otimes v_n) ~; \\ \overline{\Delta}^s(v_1\otimes\dots\otimes v_n) =& \sum_{1\leq i_1 < i_2 < \dots < i_{s} \leq n-1} (v_1\otimes\dots\otimes v_{i_1})\otimes\dots\otimes (v_{i_s+1}\otimes\dots\otimes v_n) ~. \end{aligned} } \end{displaymath} \end{enumerate} \end{lemma} % Tensor coalgebras enjoy the property to be cofree in a suitable subcategory of graded coalgebras: \begin{proposition}[Universal property of graded coalgebras]\label{Prop:UniversalPropertyGradedCoalgebras} ${T^c(V)}$ is cofree in the category of counital coaugmented conilpotent coassociative coalgebras. \begin{enumerate} \item The universal property reads as follows: \\ given a $(C,\Gamma)$ conilpotent coaugmented coassociative coalgebra on a graded vector space $V$, for any $f: C\to V$ graded morphism, there exists a unique morphism of graded coaugmented coalgebras $F:C\to T(V)^c$ such that the following diagram commutes \begin{displaymath} \begin{tikzcd} C \ar[r,"\exists! F"] \ar[dr,"\forall f"'] & T(V) \ar[d,"p"] \\ & V \\ \end{tikzcd} ~. \end{displaymath} \\ Similarly, $\overline{T^c(V)}$ is cofree in the category of conilpotent coassociative coalgebras. 
\item In the reduced case, the unique map $F$ associated to $f$ is explicitly given by the following commutative diagram in the category of graded coalgebras: \begin{displaymath} \begin{tikzcd} \overline{T}^c(C) \ar[rrd,sloped,"\sum_{n>0} f^{\otimes n} := \overline{T(f)}"] \\ C \ar[u,"\sum_{n\geq 0}\Gamma^n"] \ar[rr,"F"] & & \overline{T^c(V)} \end{tikzcd} ~. \end{displaymath} \end{enumerate} Namely, $F$ is given as follows \begin{displaymath} F = \sum_{n=1}^\infty (f^{\otimes n})\circ \Gamma^{n-1} ~: C \to \overline{T}(C) \to \overline{T}(V) ~. \end{displaymath} \end{proposition} \begin{proof} The construction follows by noting that, thanks to lemma \ref{Lemma:IteratedCoalgebra}, $\sum_{n\geq 0} \Gamma^n~: C \to \overline{T(C)}$ is a canonical coalgebra morphism, and that composing it with $\overline{T(f)}=\sum_{n>0} f^{\otimes n}$ yields the claimed expression for $F$. \end{proof} Let us now focus on the case where the considered coalgebra is a tensor coalgebra itself, \ie consider $C=\overline{T}(U)$ for some graded vector space $U$. \begin{definition}[Corestriction] Given any homogeneous map $F\in \underline{\Hom}^k(\overline{T}(V),\overline{T}(W))$, we call \emph{$n$-th corestriction} the homogeneous map $f_n \in \underline{\Hom}^k(V^{\otimes n},W)$ given by \begin{displaymath} f_n := \pr \circ F \big\vert_{V^{\otimes n}} \end{displaymath} where $\pr: \overline{T}(W) \to W$ denotes the canonical projection. \\ We can encode the process of taking all possible corestrictions as the single graded morphism \begin{equation} \morphism{P} {\underline{\Hom}^k (\overline{T}(V),\overline{T}(W))} {\underline{\Hom}^k (\overline{T}(V),W)\cong \bigoplus_{n>0}\underline{\Hom}^k(V^{\otimes n},W)} {F} {(f_1,f_2,\dots)} ~. \end{equation} \end{definition} \begin{remark}[Unique lift to a coalgebra morphism]\label{Rem:LiftToMorphism} According to proposition \ref{Prop:UniversalPropertyGradedCoalgebras}, given a graded morphism $f:\overline{T}(U)\to V$ the corresponding unique coalgebra morphism $F: \overline{T}(U)\to\overline{T}(V)$ is given by \begin{displaymath} F(v_1\otimes\dots\otimes v_n) = \sum_{s=1}^n \mkern-30mu \sum_{\mkern45mu 1\leq i_1<\dots< i_s =n}\mkern-40mu f(v_1\otimes\dots\otimes v_{i_1})\otimes\dots\otimes f(v_{i_{s-1}+1}\otimes\dots\otimes v_{i_s}) \end{displaymath} and is called \emph{unique lift (to a graded morphism) of $f$}. The uniqueness of such a lift allows one to encode it as a ``lift'' mapping \begin{equation}\label{Eq:LiftMorphismMapping} \morphism{L} {{\Hom}(\overline{T}{(V)},W)} {\Hom_{\text{coAlg}}(\overline{T^c}{(V)},\overline{T^c}{(W)})} {f} {\displaystyle\sum_{n>0}\sum_{s=1}^n \left(\sum_{j_1+\dots+j_s = n} f_{j_1}\otimes \dots \otimes f_{j_s}\right)~.} \end{equation} Note that this mapping is manifestly non-linear, compatibly with remark \ref{Rem:CoalgebrasNotLinear}. \end{remark} % % It is worth noting that the universal property of tensor coalgebras can be viewed as an isomorphism of graded sets (they are not graded vector spaces): \begin{theorem}[Universal property as an isomorphism between hom-spaces]\label{Theorem:HomCoAlgISO} The following graded sets are in a one-to-one correspondence \begin{displaymath} {\Hom}_{\text{coAlg}}(\overline{T^c(V)},\overline{T^c(W)}) \cong {\Hom}(\overline{T(V)},W) \cong \bigoplus_{n>0} \Hom(V^{\otimes n},W) ~.
\end{displaymath} In particular, the isomorphism is induced from the corestriction $P$ according to the following diagram: \begin{displaymath} \begin{tikzcd} \Hom(\overline{T(V)},\overline{T(W)}) \ar[r,"P"] & \Hom(\overline{T(V)},W) \ar[d,equal] \\ \Hom_{\text{coAlg}}(\overline{T^c(V)},\overline{T^c(W)}) \ar[u,hook]& \Hom(\overline{T(V)},W) \ar[l,"L"] \end{tikzcd} ~. \end{displaymath} % \end{theorem} \begin{proof} The fact that $P \circ L = \id_{\Hom(\overline{T}(V),W)}$ follows by noting that the term between brackets in equation \eqref{Eq:LiftMorphismMapping} is an operator $V^{\otimes n}\to W^{\otimes s}$, therefore $P(L(f)) = \sum_{n>0} f_n = f$. On the other hand, lemma \ref{Lemma:IteratedCoalgebra} implies \begin{align*} L (P (F)) =&~ \sum_{n>0} f^{\otimes n}\circ \Delta^{n-1} = \\ =&~ \sum_{n>0} p^{\otimes n} \circ F^{\otimes n} \circ \Delta^{n-1} = \\ =&~ \sum_{n>0} p^{\otimes n} \circ \Delta^{n-1} \circ F = \\ =&~ F ~, \end{align*} since $p^{\otimes n} \circ \Delta^{n-1}$ coincides with the projector $T(V) \to V^{\otimes n}$. Hence $L\circ P = \id_{\Hom(\overline{T}(V),\overline{T}(W))}$. \end{proof} % The upshot is that any morphism of graded coalgebras is entirely determined by its corestriction. In particular, isomorphisms can be characterized as follows: \begin{lemma}\label{Lemma:CoalgebraIsosCondition} A coalgebra morphism $F: \overline{T}(V) \to \overline{T}(W)$ is invertible if and only if its first corestriction $f_1 = \pr\circ F \eval_V$ is an isomorphism of graded vector spaces. \end{lemma} \begin{proof} Note first that, by its very definition, the unique lift of the canonical projection $\pr: T(V) \to V$ is the identity $\id\vert_{T(V)}$. Hence, given a coalgebra isomorphism $F: \overline{T(V)} \to \overline{T(W)}$ with inverse denoted by $G$, one has \begin{displaymath} G\circ F = L (g \circ F) = L (\pr) = \id_{\overline{T(V)}} ~. \end{displaymath} According to theorem \ref{Theorem:HomCoAlgISO}, $L$ is injective. Therefore: \begin{displaymath} \id_V = \pr \eval_V = g\circ F \eval_V = g_1 \circ f_1 \end{displaymath} hence $f_1$ is an isomorphism with inverse $g_1$. \\ Conversely, one can check that, given a coalgebra morphism $F$ with invertible first corestriction $f_1$, an inverse of $F$ is obtained as the lift $L(g)$ of a map $g$ whose corestrictions are determined recursively out of $f_1^{-1}$ and the higher corestrictions of $F$ (for instance $g_1 = f_1^{-1}$ and $g_2 = - f_1^{-1}\circ f_2 \circ (f_1^{-1}\otimes f_1^{-1})$). \end{proof} % We will see in appendix \ref{App:RNAlgebras} how one can define a similar universal property also in the case of homogeneous maps of degree different from zero. \subsubsection{Symmetric tensor coalgebras} We have already seen how a given tensor space $T(V)$ contains the subspaces of graded symmetric and skew-symmetric tensors, \ie $S(V)$ and $\Lambda(V)$. Let us now focus on the graded symmetric tensor space (we omit the analogous constructions that can be straightforwardly carried out in the skew-symmetric case, taking care of the extra signs).
\\ $S(V)$ admits a canonical coassociative coalgebra structure: % \begin{definition}[Unshuffle coproduct]\label{def:unshuffleDecon} Given a graded vector space $V\in \GVect$, we call \emph{unshuffle coproduct} the graded morphism \begin{displaymath} \Xi = \sum_{n \geq 0} \sum_{a=0}^{n} \dfrac{1}{n!} (\pi\otimes \pi) \circ \mathfrak{k}_{a,n-a} \circ N ~:~ S(V) \longrightarrow S(V)\otimes S(V) \end{displaymath} given by the following operators \begin{displaymath} \mathfrak{k}_{a,n-a}:= \mathfrak{d}_{a,n-a} \circ B_{a,n-a}~:~ T(V) \longrightarrow T(V) \otimes T(V) \end{displaymath} where $\mathfrak{a}_{i,j}$ is the operator of definition \ref{def:Decon}, $\pi$ and $N$ have been introduced in remark \ref{Remark:ManettiNotation} and $B_{a,n-a}$ denotes the sum of all possible unshuffle permutation (unshuffleators, see appendix \ref{App:UnshuffleAtors}). Similarly, we call \emph{reduced unshuffle coproduct}, the graded morphism \begin{displaymath} \overline{\Xi} = \sum_{n > 0} \sum_{a=1}^{n-1} \dfrac{1}{n!} (\pi\otimes \pi) \circ \mathfrak{k}_{a,n-a} \circ N ~:~ \overline{S(V)} \longrightarrow \overline{S(V)} \otimes \overline{S(V)} ~. \end{displaymath} \end{definition} \begin{notation}[Symmetric tensor coalgebra] We call \emph{symmetric tensor coalgebra of $V$} and \emph{reduced symmetric tensor coalgebra} the pairs $S^c(V)= (S(V),\Xi)$ and $\overline{S^c}(V)= (\overline{S(V)},\overline{\Xi})$ respectively. \end{notation} % \begin{notation}[Explicit action of the unshuffle coproduct] The action of $\Xi$ on homogeneous elements is given explicitly by \begin{displaymath} \Xi(v_1\odot\dots\odot v_n) = \sum_{i=0}^{n} \sum_{\sigma \in S_{i,n-i}} \epsilon(\sigma) (v_{\sigma_1}\odot\dots\odot v_{\sigma_i}) \otimes (v_{\sigma_{(i+1)}}\odot\dots\odot v_{\sigma_n}) \end{displaymath} where $S_{i,n-i}\subset S_{n}$ denotes the set of $(i,n-i)$ unshuffles (see appendix \ref{App:UnshuffleAtors}). \end{notation} \begin{proposition} $S^c(V)=(S(V),\Xi)$ forms a cocommutative coalgebra, it forms a sub-coalgebra of the tensor coalgebra $T^c(V)$ with respect to the injection $N$ defined in remark \ref{Remark:ManettiNotation}, \ie \begin{displaymath} \begin{tikzcd} N:~S^c(V) \ar[r,hook] & T^c(V)~. \end{tikzcd} \end{displaymath} % A similar result holds in the reduced case for $(\overline{S(V)},\overline{\Xi})$. \end{proposition} \begin{proof} Cocommutativity follows from \begin{displaymath} \begin{aligned} B \circ \Xi =&~ \sum_{n \geq 0} \sum_{a=0}^{n} \dfrac{1}{n!} (\pi\otimes \pi) \circ \mathcal{C}_{(n)}^a \circ \mathfrak{d}_{a,n-a}\circ B_{a,n-a} \circ N = \\ =&~ \sum_{n \geq 0} \sum_{a=0}^{n} \dfrac{1}{n!} (\pi\otimes \pi) \circ \mathfrak{d}_{n-a,a}\circ B_{n-a,a} \circ \cancel{\mathcal{C}_{(n)}^a} \circ N = \\ =&~ \Xi ~, \end{aligned} \end{displaymath} where $\mathcal{C}_{(n)}^a$ denotes the cyclic permutation of $n$ elements $a$ times (see appendix \ref{App:UnshuffleAtors}). The latter can be easily ascertained by inspection on homogeneous elements. \\ The property of being coassociative follows by proving that $N$ is an injective map that preserve coproducts, i.e $\Delta\circ N = (N\otimes N) \circ \Xi$. 
This follows from the next equation \begin{displaymath} \mathfrak{d}_{a,n-a} \circ N = (N \otimes N) \circ \left[\dfrac{1}{n!} ~ (\pi\otimes \pi) \circ\mathfrak{k}_{a,n-a} \circ N \right] \end{displaymath} which can again be ascertained by inspection on homogeneous elements \begin{displaymath} \begin{aligned} \mathfrak{d}_{a,n-a} \circ N &(x_1\odot\dots\odot x_n) \equal{} \\ \equal{Rem \ref{Remark:ManettiNotation}}& n! ~\mathfrak{d}_{a,n-a} \circ \mathcal{S}_{(n)}\circ \mathcal{S}_{(n)} (x_1\otimes\dots \otimes x_n) = \\ \equal{Rem \ref{Rem:UnshufflesAsCoset}}& a!~(n-a)!~ ( \mathcal{S}_{(a)}\otimes\mathcal{S}_{(n-a)})\circ \mathfrak{d}_{a,n-a} \circ B_{a,n-a} \circ \mathcal{S}_{(n)} (x_1\otimes\dots \otimes x_n) = \\ \equal{Rem \ref{Remark:ManettiNotation}}& (N \otimes N) \circ \left[ \dfrac{1}{n!} ~ (\pi\otimes \pi) \circ \mathfrak{d}_{a,n-a} \circ B_{a,n-a} \circ N \right] (x_1\odot\dots\odot x_n) ~. \end{aligned} \end{displaymath} In other words, the restriction of $\Delta$ to $S(V)$ has image in $S(V)\otimes S(V)$ and it is invariant under the even action of the symmetric group. In particular one has $B_\sigma \circ \Delta = \Delta$ for all $\sigma \in S_2$. \end{proof} Therefore, the conilpotency and coaugmentation properties of $T^c(V)$ stated in lemma \ref{Lemma:TensorCoalgebraProperties} are automatically inherited by $S^c(V)$ and $\overline{S^c}(V)$. Similarly, they satisfy an analogous universal property: % \begin{proposition}[Universal property of cocommutative graded coalgebras]\label{Prop:UniversalPropertyCocommutativeGradedCoalgebras} ${S^c(V)}$ is cofree in the category of cocommutative counital coaugmented conilpotent coassociative coalgebras. \begin{enumerate} \item The universal property reads as follows: \\ given a cocommutative conilpotent coaugmented coassociative coalgebra $(C,\Gamma)$ and a graded vector space $V$, for each graded morphism $f: C\to V$ there exists a unique morphism of graded coaugmented coalgebras $F:C\to S^c(V)$ such that the following diagram commutes \begin{displaymath} \begin{tikzcd} C \ar[r,"\exists! F"] \ar[dr,"\forall f"] & S(V) \ar[d,"p"] \\ & V \\ \end{tikzcd} ~. \end{displaymath} Similarly, $\overline{S^c(V)}$ is cofree in the category of cocommutative conilpotent coassociative coalgebras. \item In the reduced case, the unique map $F$ associated to $f$ is explicitly given by the following commutative diagram in the category of graded coalgebras: \begin{displaymath} \begin{tikzcd}[column sep = huge] \overline{S(C)} \ar[hook,d,"N"] \ar[drr,"S(f)"]&& \\ \overline{T(C)}\ar[drr,"T(f)"] && \overline{S(V)}\ar[d,hook,"N"] \\ C \ar[rr,"L(f)"'] \ar[uu,bend left=60,"\sum_{n\geq 0}\frac{1}{n!}~\pi\circ\Gamma^n"] \ar[u,"\sum_{n\geq 0}\Gamma^n"']\ar[drr,"f"'] && \overline{T(V)} \ar[d] \\ && V \end{tikzcd} ~. \end{displaymath} Namely, $F$ is given by \begin{displaymath} F= \sum_{n=1}^\infty \frac{\pi}{n!}\circ(f^{\otimes n})\circ \Gamma^{n-1} : C \to \overline{S(C)} \to \overline{S(V)} ~. \end{displaymath} \end{enumerate} \end{proposition} \begin{proof} The leftmost factorization follows by noting that the canonical coalgebra morphism $\sum_{n\geq 0} \Gamma^n$ has image in permutation invariant elements when the coalgebra $C$ is cocommutative, hence \begin{displaymath} \mathcal{S} \circ \left(\sum_{n\geq 0} \Gamma^n \right) = N \circ \frac{\pi}{n!} \circ \left(\sum_{n\geq 0}\Gamma^n\right) .
\end{displaymath} The uppermost square has been explained in remark \ref{Remark:TensorFunctors} and the other two triangles are precisely given by proposition \ref{Prop:UniversalPropertyGradedCoalgebras}. \end{proof} Continuing the parallelism with what explained in the previous subsection, we now focus in the case in which the coalgebra considered is the reduced symmetric coalgebra itself, \ie $C=\overline{S(V)}$ \begin{remark}[Unique symmetric lift]\label{Rem:SymmetricLiftToMorphism} According to proposition \ref{Prop:UniversalPropertyCocommutativeGradedCoalgebras}, given a graded morphism $f:\overline{S}(V)\to W$ the corresponding unique coalgebra morphism $F: \overline{S^c(V)}\to\overline{S^c(W)}$ is given by \begin{displaymath} \mathclap{ \begin{aligned} F(x_1&\odot\dots\odot x_n) = \\ =&~ \sum_{s=1}^n \mkern-30mu \sum_{\qquad\substack{i_1+\dots+i_s = n \\ 0<i_1\leq \dots \leq i_s}}\mkern-60mu \sum_{\qquad\qquad\sigma \in \ush{i_1,\dots,i_s}^{<}}\mkern-60mu f_{i_1}(x_{\sigma_1}\odot\dots\odot x_{\sigma_i})\odot \dots \odot f_{i_s}(x_{n-i_s -1}\odot \dots \odot x_n) \end{aligned}} \end{displaymath} and is called \emph{unique lift (to a graded morphism) of $f$}. The uniqueness condition of such lift allows to encode it as a \emph{symmetric lift mapping}: \begin{equation}\label{Eq:LiftMorphismMapping-Sym} \hspace{-.1\textwidth} \morphism{L_{\sym}} {{\Hom}(\overline{S(V)},W)} {\Hom_{\text{coAlg}}(\overline{S^c(V)},\overline{S^c(W)})} {f} {\displaystyle \sum_{n>0}\sum_{s=1}^n \pi \circ \left[\mkern-30mu\sum_{\qquad\substack{i_1+\dots+i_s = n \\ 0<i_1\leq \dots \leq i_s}}\mkern-40mu (f_{i_1}\otimes \dots \otimes f_{i_s}) \circ B^{<}_{i_1,\dots,i_s} \right]\circ N~} \end{equation} where $B^<_{i_1,\dots,i_n}$ denotes the sum on all the ordered $(i_1,\dots,i_n)$-unshuffles (ordered unshuffleator, see appendix \ref{App:UnshuffleAtors}) and the term between square brackets is an operator $V^{\otimes n}\to V^{\otimes s}$. \end{remark} The universal property of symmetric tensor coalgebras can be regarded as an isomorphism of graded vector spaces. The following is the analogue of theorem \ref{Theorem:HomCoAlgISO} in the case of symmetric tensor coalgebras. \begin{theorem}[Universal property as an isomorphism between hom-spaces (symmetric case)]\label{Theorem:HomCoAlgISOSymmetric} The following graded sets are isomorphic: \begin{displaymath} {\Hom}_{\text{coAlg}}(\overline{S^c(V)},\overline{S^c(W)}) \cong {\Hom}(\overline{S(V)},W) \cong \bigoplus_{n>0} \Hom(V^{\odot n},W) ~. \end{displaymath} In particular, the isomorphism is induced from the corestriction according to the following diagram: \begin{displaymath} \begin{tikzcd}[column sep = huge, row sep = huge] \Hom(\overline{S(V)},\overline{S(W)}) \ar[r,"P"] & \Hom(\overline{S(V)},W) \ar[d,equal] \\ \Hom_{\text{coAlg}}(\overline{S^c(V)},\overline{S^c(W)}) \ar[u,hook]& \Hom(\overline{S(V)},W) \ar[l,"L_{\sym}"] \ar[dl,"L"] \\ \Hom_{\text{coAlg}}(\overline{S^c(V)},\overline{T^c(W)}) \ar[u,"\tiny\substack{\sum\\{n>0}} \frac{1}{n!} \pi \eval_{V^{\otimes n}}"] \end{tikzcd} ~. \end{displaymath} % \end{theorem} \begin{proof} The commutation of the lower triangle is precisely given by proposition \ref{Prop:UniversalPropertyCocommutativeGradedCoalgebras}. Commutation of the upper square follows from the condition that $\pr \circ \pi = \pr : T(V) \to V$, denoting with a slightly abuse of notation the canonical projection $\pr$ from both $T(V)$ and $S(V)$ to $V$. 
\end{proof} % \begin{remark} Note that functors $T$ and $S$ introduced in remark \ref{Remark:TensorFunctors} can be safely regarded as functors valued in the category of coalgebras since \begin{align*} f^{\otimes n} \circ \Delta =&~ \sum_{a=0}^n f^{\otimes n} \circ \mathfrak{d}_{a,n-a} = \\ =&~ \sum_{a=0}^n (f^{\otimes a} \otimes f^{\otimes n -a})\circ\mathfrak{d}_{a,n-a} = \\ =&~ \sum_{a=0}^n\mathfrak{d}_{a,n-a}\circ f^{\otimes n} = \\ =&~ \Delta\circ f^{\otimes n} ~. \end{align*} \end{remark} \begin{notation}\label{not:droppingthesuperc} From now on we will drop the superscript $c$ when dealing with tensors (co)algebras. It will be clear from the context whether we are focusing on the algebra structure or the coalgebra structure. \end{notation} \ifstandalone \bibliographystyle{../../hep} \chapter{Graded pre-Lie algebras}\label{App:PreLie} This appendix contains some definitions and basic properties of other algebraic structures mentioned in this work. In particular, we talk about \emph{differential graded Lie algebras} (DGLA) and \emph{pre-Lie algebras}. We do not aim to give a comprehensive account; the general idea is to lay the foundations for a later reference in the thesis's body. \section{Differential graded Lie algebras} $L_\infty$-algebras are a generalization of \emph{differential graded Lie algebra} (DGLA). Let us record some basic definitions: % \begin{definition}[Lie algebra] A \emph{Lie algebra} is a vector space $\mathfrak{g}\in \Vect$ equipped with a bilinear skew-symmetric map $[\cdot,\cdot]:\mathfrak{g}\wedge\mathfrak{g}\to \mathfrak{g}$ which satisfies the Jacobi identity: \begin{displaymath} [x,[y,z]]+[z,[x,y]]+[y,[z,x]]=0 \qquad \forall x,y,z\in \mathfrak{g}~. \end{displaymath} \end{definition} The whole definition could be subsumed by the following diagram\footnote{The notion of Lie algebra may be formulated internalLY to any symmetric monoidal linear category. We do not insist here in this direction, see \cite{nlab:lie_algebra} for the general idea.} in the category of vector spaces: \begin{equation}\label{eq:LieAlgebraDiagram} \begin{tikzcd}[column sep=huge] & \wedge^3 \mathfrak{g} \ar[ddl,"{[\cdot,\cdot]\ca [\cdot,\cdot]}",sloped] \ar[d,"{[\cdot,\cdot]\otimes \Unit \circ P_{2,1}}"] \\ & \wedge^2 \mathfrak{g} \ar[d,"{[\cdot,\cdot]}"] \\ 0 \ar[r,hook] & \mathfrak{g} \end{tikzcd} \end{equation} Reading the above diagram in the category $\GVect$ of graded vector spaces one gets the notion of \emph{graded Lie algebra}: \begin{definition}[Graded Lie algebra (GLA)] A \emph{graded Lie algebra} is a graded vector space $\mathfrak{L}\in \GVect$ equipped with a bilinear graded skew-symmetric degree $0$ homogeneous map $[\cdot,\cdot]:\mathfrak{L}\wedge\mathfrak{L}\to \mathfrak{L}$ which satisfies the (graded) Jacobi identity: \begin{equation}\label{eq:GradedJacobiDGLA} (-)^{|x||z|}[x,[y,z]]+(-)^{|z||y|}[z,[x,y]]+(-)^{|y||x|}[y,[z,x]]=0 \qquad \forall x,y,z\in \mathfrak{L}~. \end{equation} \end{definition} % \begin{remark}[Any associative algebra determines a Lie algebra]\label{rem:anyassociativegiveLie} Given a graded associative algebra $(A,\bullet)$. Consider the graded commutator $[\cdot,\cdot]_\bullet$ defined on homogeneous elements as \begin{displaymath} [x,y]_{\bullet}= x\bullet y - (-)^{|x||y|} y \bullet x \end{displaymath} (see definition \ref{def:gradedCommutator} below). The pair $(A,[\cdot,\cdot]_\bullet)$ is a GLA. \\ Observe furthermore that the commutator of an associative graded algebra acts as a graded derivation (see definition \ref{Def:Fderivation}). 
Namely, for any fixed homogeneous elements $x,y,z\in A$, the following equation holds \begin{displaymath} [x,y\bullet z]_\bullet = [x,y]_\bullet \bullet z + (-)^{|x||y|} y \bullet [x,z]_\bullet ~. \end{displaymath} In other terms, for any given $x\in A$, the unary operator $[x,\cdot]_\bullet$ is a degree $|x|$ derivation from the graded algebra $A$ into itself. \end{remark} If one reads diagram \eqref{eq:LieAlgebraDiagram} in the category of graded differential vector spaces, \ie cochain complexes (see section \ref{sec:HomologicalAlgebrasConventions}), one gets the notion of \emph{differential graded Lie algebra}: \begin{definition}[Differential Graded Lie Algebra (DGLA)]\label{def:DGLA} A \emph{differential graded Lie algebra} (DGLA) is a graded vector space $\mathfrak{L}\in\GVect$ equipped with a bilinear graded skew-symmetric degree $0$ homogeneous map $[\cdot,\cdot]:\mathfrak{L}\wedge\mathfrak{L}\to \mathfrak{L}$ and with a unary degree $1$ homogeneous map $d$ such that \begin{itemize} \item $d$ is a coboundary, \ie $d\circ d=0$; \item $[\cdot,\cdot]$ satisfies the graded Jacobi equation \eqref{eq:GradedJacobiDGLA}; \item $\d$ and $[\cdot,\cdot]$ are compatible in the sense of the Liebniz rule\footnote{$d$ is a degree $1$ derivation from the non-associative anticommutative graded algebra $(A,[\cdot,\cdot])$ into itself.} \begin{displaymath} d[x,y]= [d x,y] + (-1)^{|x|}[x, d y] ~. \end{displaymath} \end{itemize} \end{definition} The three conditions defining a DGLA are subsumed by the following commutative diagram in the category of graded vector spaces \begin{equation}\label{eq:DGLADiagram} \begin{tikzcd}[column sep=huge, row sep = large] & \wedge^3 \mathfrak{g} \ar[ddl,"{[\cdot,\cdot]\ca [\cdot,\cdot]}",sloped] \ar[d,"{[\cdot,\cdot]\otimes \Unit \circ P_{2,1}}"] & & \\ & \wedge^2 \mathfrak{g} \ar[d,"{[\cdot,\cdot]}"] \ar[r,"d\otimes\Unit + \Unit\otimes d"] & \wedge^2 \mathfrak{g} [1] \ar[d,"{[\cdot,\cdot]}_{[1]}"] \\ 0 \ar[r,hook] & \mathfrak{g} \ar[r,"d"] \ar[rr,bend right=20,"0"']& \mathfrak{g}[1] \ar[r,"{d[1]}"] & \mathfrak{g}[2] \end{tikzcd} \end{equation} \begin{remark}[Any GLA determines (several) DGLAs]\label{rem:DglaFromGla} Consider a GLA $(\mathfrak{L},[\cdot,\cdot])$. Let be $x\in \mathfrak{L}^1$ a degree $1$ homogeneous element that commutes with itself: \begin{displaymath} [x,x]=0 \end{displaymath} (notice that we are in the graded setting, hence the above equation is not automatic on elements in odd degree). Then, the graded Jacobi identity implies that the degree $1$ homogeneous map $[x,\cdot]$ is a derivation on $(\mathfrak{L},[\cdot,\cdot])$ and is $2$-nilpotent. In other terms $[x,\cdot]$ is a coboundary operator which satisfies the Liebniz equation, hence $(\mathfrak{L},[\cdot,\cdot],[x,\cdot])$ is a DGLA. \\ This idea is one of the fundamental concepts underlying the so-called \emph{deformation theory} (see \cite{Doubek2007} for some introductory lecture notes). \end{remark} The special elements introduced in remark \ref{rem:DglaFromGla} are subcases of a more general class: \begin{definition}[Maurer-Cartan elements]\label{def:MCelements} Let be $\mathfrak{L}=(\mathfrak{L},d,[\cdot,\cdot])$ a DGLA. We call \emph{Maurer-Cartan elements} of $\mathfrak{L}$ the set of degree $1$ elements satisfying the Maurer-Cartan equation: \begin{displaymath} MC(\mathfrak{L}):= \left\{ x \in \mathfrak{L}^1 ~\left\vert~ d x + \frac{1}{2}[x,x] =0 \right\}\right. 
\end{displaymath} \end{definition} \begin{remark}[Maurer-Cartan elements in a $L_\infty$-algebra] Notice that the Maurer-Cartan equation can be formally extended to any (curved) $L_\infty$-algebra $(L,\{\mu_k\}_{k\geq 0})$ as follows: \begin{displaymath} \sum_{k=0}^\infty \dfrac{1}{k!}\mu_k(x,\dots x) = 0 ~. \end{displaymath} In this case, however, it is essential to pay extra attention to the convergence conditions of the infinite series. \end{remark} \section{Graded pre-Lie algebras} Given any associative algebra, it is a standard result (see remark \ref{rem:anyassociativegiveLie}) that the corresponding commutator yields a Lie algebra structure. Roughly speaking, a pre-Lie algebra is a non-associative algebra determining a Lie structure in a similar fashion. One of the early adopters of this concept was Gerstenhaber (see \cite{Gerstenhaber1963a}) see also \cite{Manchon2011a} for a survey. Consider a non-associative (non-commutative) graded $\mathbb{R}$-algebra $(X,\MBComp)$, we introduce the following auxiliary operators: \begin{definition}[Associator]\label{def:gradedAssociator} Given a (not necessarily associative) graded algebra $(X,\ca)$ we call \emph{associator} the graded tri-linear graded morphism \begin{displaymath} \alpha(\ca\,;\cdot,\cdot,\cdot): X^{\otimes 3} \to X \end{displaymath} defined on arbitrary homogeneous elements $x,y,z\in X$ as \begin{displaymath} \alpha(\ca\,;x,y,z) := (x\ca y)\ca z - x \ca (y \ca z)~. \end{displaymath} \end{definition} One can construct a graded skew-symmetric binary bracket out of any graded algebra: \begin{definition}[(Graded) Commutator]\label{def:gradedCommutator} Given a (not necessarily associative) graded algebra $(X,\ca)$, we call \emph{commutator} the graded skew-symmetric bi-linear graded morphism \begin{displaymath} [\cdot,\cdot]_{\ca}: X^{\wedge 2} \to X \end{displaymath} defined on arbitrary homogeneous elements $x,y\in X$ as \begin{displaymath} [x,y]_{\ca} := (x\ca y)- (-)^{|x||y|} (y \ca x)~. \end{displaymath} \end{definition} The role of the associator is to measure the failing of the associativity of the product $\ca$. When $(X,\ca)$ is associative $\alpha$ is automatically zero. In the case that $\alpha$ satisfy certain symmetry properties one talks about \emph{pre-Lie algebras}: \sidebyside{ \begin{definition}[Left pre-Lie Algebra] A non-associative graded algebra $(X,\ca)$ is said to be a \emph{left pre-Lie algebra} if the corresponding associator is graded symmetric in the two leftmost entries; \ie, for any $x,y,z\in X$, one has: \begin{displaymath} \alpha(\ca;x,y,z) = (-)^{|x||y|}\alpha(\ca;y,x,z)~. \end{displaymath} \end{definition} }{ \begin{definition}[Right pre-Lie Algebra] A non-associative graded algebra $(X,\ca)$ is said to be a \emph{right pre-Lie algebra} if the corresponding associator is graded symmetric in the two rightmost entries; \ie, for any $x,y,z\in X$, one has: \begin{displaymath} \alpha(\ca;x,y,z) = (-)^{|y||z|}\alpha(\ca;x,z,y) ~. \end{displaymath} \end{definition} } \smallskip \begin{notation}[Short-hand notation] In the following we will focus on right pre-Lie algebras and we will omit the symbol $\ca$ when it is possible. The action of the product $\ca$ will be denoted by juxtaposition. 
\end{notation} The next proposition justifies why these algebras are called \emph{pre-}Lie: \begin{proposition} Given a right pre-Lie algebra $(X,\ca)$, the corresponding graded commutator satisfies the \emph{(right) graded Jacobi} equation \begin{displaymath} J(x,y,z) := (-)^ {|x||z|}[x,[y,z]] + (-)^ {|x||y|}[y,[z,x]] + (-)^ {|y||z|}[z,[x,y]] =0~. \end{displaymath} \end{proposition} \begin{proof} Expanding the definition of the commutator one gets: \begin{displaymath} \begin{aligned} [x,&[y,z]] =\\ =&~x(yz) - (-)^{|y||z|} x(zy) - (-)^{(|y|+|z|)|x|}(yz)x + (-)^{(|y|+|z|)|x|}(-)^{|y||z|}(zy)x ~; \end{aligned} \end{displaymath} % therefore: \begin{align*} &(-)^{|x||z|}[x,[y,z]] \\ &=~ (-)^{|x||z|}x(yz) - (-)^{(|x|+|y|)|z|} x(zy) - (-)^{|y||x|}(yz)x + (-)^{|y|(|z|+|x|)}(zy)x ~; \\[1em] &(-)^{|y||x|}[y,[z,x]] \\ &=~ (-)^{|y||x|}y(zx) - (-)^{(|y|+|z|)|x|} y(xz) - (-)^{|z||y|}(zx)y + (-)^{|z|(|x|+|y|)}(xz)y ~; \\[1em] &(-)^{|z||y|}[z,[x,y]] \\ &=~ (-)^{|z||y|}z(xy) - (-)^{(|z|+|x|)|y|} z(yx) - (-)^{|x||z|}(xy)z + (-)^{|x|(|y|+|z|)}(yx)z ~. \end{align*} Summing up all the previous terms, one gets \begin{align*} J(x,y,z) =&+ (-)^{|x||z|} ( -\alpha(x,y,z) + (-)^{|z||y|} \alpha(x,z,y))+ \\ &+ (-)^{|y||x|} ( -\alpha(y,z,x) + (-)^{|x||z|} \alpha(y,x,z))+ \\ &+ (-)^{|z||y|} ( -\alpha(z,x,y) + (-)^{|y||x|} \alpha(z,y,x))= \\ =& 0~, \end{align*} since each of the three parentheses vanishes by the right pre-Lie condition. % \end{proof} In a pre-Lie algebra the associator measures the failure of the commutator to act as a graded derivation (\cf remark \ref{rem:anyassociativegiveLie}): \begin{proposition} Let $(X,\ca)$ be a right pre-Lie algebra. Consider three homogeneous elements $x,y,z\in X$. Then \begin{displaymath} \begin{aligned} [x,yz] =& [x,y]z + (-)^{|x||y|}y[x,z] - \alpha(x,y,z) ~; \\ [yz,x] =& y[z,x] + (-)^{|x||z|}[y,x]z + (-)^{|x|(|y|+|z|)}\alpha(x,y,z) ~. \end{aligned} \end{displaymath} \end{proposition} \begin{proof} The first claim follows by simple expansion of the terms: \begin{displaymath} \begin{aligned} [x,yz] =& x(yz) -(-)^{|x|(|y|+|z|)}(yz)x ~; \\[1em] y[x,z] =& y(xz) -(-)^{|x||z|} y(zx) = \\ =&~ y(xz) -(-)^{|x||z|} (yz)x + (-)^{|x||z|} \alpha(y,z,x) ~; \\[1em] [x,y]z =& (xy)z -(-)^{|x||y|}(yx)z = \\ =&~ x(yz) + \alpha(x,y,z) -(-)^{|x||y|}y(xz) -(-)^{|x||y|}\alpha(y,x,z) ~; \end{aligned} \end{displaymath} combining these expressions and using the right pre-Lie condition $\alpha(y,x,z)=(-)^{|x||z|}\alpha(y,z,x)$, all associator terms cancel except for a single $-\alpha(x,y,z)$, which gives the first identity. The second claim follows from the first and from the graded commutativity of the commutator: \begin{displaymath} \begin{aligned} [yz,x] =& -(-)^{|x|(|y|+|z|)}[x,yz] = \\ =& -(-)^{|x|(|y|+|z|)}([x,y]z + (-)^{|x||y|}y[x,z] - \alpha(x,y,z)) = \\ =& (-)^{|x||z|}[y,x]z + y[z,x] + (-)^{|x|(|y|+|z|)}\alpha(x,y,z) ~. \end{aligned} \end{displaymath} \end{proof} \ifstandalone \bibliographystyle{../../hep} \chapter{Gauge transformations and \momaps}\label{Chap:MarcoPaper} A natural theme that arises when dealing with both symplectic and multisymplectic structures is to investigate what relationship exists between gauge-related multisymplectic manifolds, \ie manifolds endowed with multisymplectic forms lying in the same cohomology class (see section \ref{Section:GaugeTransformations} in chapter \ref{Chap:MultiSymplecticGeometry}). To date, no canonical correspondence between the $L_\infty$-algebras of observables of two gauge-related multisymplectic manifolds is known.\footnote{Clearly they are not isomorphic as graded vector spaces. In particular, they differ in their degree $0$ component $\Omega^{n-1}_{\ham}(M,\omega)$.} In this chapter, we will exhibit a compatibility relation between those observables that are momenta of corresponding homotopy moment maps (the higher analogues of a moment map in the multisymplectic setting).
Although this construction is essentially algebraic in nature, it admits also a geometric interpretation when declined to the particular case of pre-quantizable symplectic forms. For a symplectic manifold $(M,\omega)$, a choice of prequantization circle bundle with connection gives a Lie algebra embedding of the Poisson algebra of functions on $M$ into the invariant vector fields on the prequantization bundle. The latter form the sections of a Lie algebroid over $M$, called Atiyah algebroid, which is isomorphic to a central extension $(TM\oplus \RR)_{\omega}$ of the tangent bundle $TM$. Hence we obtain a Lie algebra embedding \begin{equation}\label{eq:intropreqmap} C^{\infty}(M)\to \Gamma((TM\oplus \RR)_{\omega}), \end{equation} which we call \emph{prequantization map}. Assume $(M,\omega)$ is endowed with a moment map for the action of some Lie group $G$, whose corresponding comoment map we denote $J^*\colon\g\to C^{\infty}(M)$. Any choice of $\alpha\in \Omega^1(M)^G$ provides another $G$-invariant symplectic form $\omega+d\alpha$ (assuming this is non-degenerate), and $\alpha$ can be used to twist some of the above data, obtaining in particular a moment map $J_{\alpha}$ for $\omega+d\alpha$. A geometric argument shows that the following diagram commutes: \begin{equation} \label{intro:diag:main} \begin{tikzcd}[column sep=huge] & C^{\infty}(M)_{\omega} \ar[r,""] & \Gamma(TM\oplus \RR)_{\omega} \ar[dd,"\tau_\alpha"] \\[-1em] \mathfrak{g}\ar[ru,"J^*"] \ar[dr,"J^*_{\alpha}"'] \\[-1em] & C^{\infty}(M)_{\omega+d\alpha} \ar[r,""] & \Gamma(TM\oplus \RR)_{\omega+d\alpha} \end{tikzcd} \end{equation} Here $\tau_{\alpha}$ is the gauge transformation of the Atiyah algebroids induced by $\alpha$, which, in particular, is a Lie algebroid isomorphism. We interpret this commutativity by saying that the twisting {of the moment map by $\alpha$} is compatible with the twisting of the {Atiyah algebroid}. \\ The prequantization bundle over $(M,\omega)$ exists only when $[\omega]$ satisfies an integrality condition. Notice that the above Lie algebra embedding \eqref{eq:intropreqmap} and the commutative diagram \eqref{intro:diag:main} make no reference to the prequantization bundle. Indeed, one can check that they make sense and hold for any arbitrary symplectic manifold $(M,\omega)$. \bigskip In this chapter, based on joint work with Marco Zambon \cite{Miti2020}, we show that the existence of the above Lie algebra embedding and commutative diagram extends to the setting of higher geometry, \ie replacing $\omega$ by a multisymplectic $(n+1)$-form (no integrality condition is required). In that case the Poisson algebra of functions $C^{\infty}(M)$ is replaced by a $L_{\infty}$-algebra \cite{LadaMarkl}\cite{LadaStasheff}, the Atiyah Lie algebroid by the higher Courant algebroid $TM \oplus \Lambda^{n-1} T^\ast M$, and $\alpha$ by an invariant $n$-form $B$. \\ Our previous discussion around diagram \eqref{intro:diag:main} provides some evidences that this construction may be related to the higher analogue of geometric quantization for integral multisymplectic forms. Our main results are the following. \begin{itemize} \item In theorem \ref{thm:iso} {and corollary \ref{cor:Psi}} we establish the existence of the embedding for $n\le 4$. The method is based on the description of $L_{\infty}$-algebras as suitable coderivations, and we expect the proof to extend to the case of arbitrary $n$. 
\item Building on this, in theorem \ref{thm:comm}, we establish that the higher version of the above diagram \eqref{intro:diag:main} commutes, for $n\le 4$. \begin{equation} \label{intro:eq:pentagonDiagram} \begin{tikzcd} & L_{\infty}(M,\omega) \ar[r] & L_{\infty}(TM \oplus \Lambda^{n-1} T^\ast M,\omega) \ar[dd,"\tau_B"] \\[-1em] \mathfrak{g}\ar[ru] \ar[dr] \\[-1em] & L_{\infty}(M,\widetilde{\omega}) \ar[r] & L_{\infty}(TM \oplus \Lambda^{n-1} T^\ast M,\widetilde{\omega}) \end{tikzcd} \end{equation} \end{itemize} We relate our first result above with the literature. In the special case $n=2$, the Atiyah algebroid is an instance of Courant algebroid, and the embedding was established by Rogers \cite[Theorem 7.1]{Rogers2013}. For arbitrary $n$ the existence of the embedding is stated by S\"amann-Ritter in their preprint \cite[Theorem 4.10.]{Ritter2015a}. They provide a proof in which the embedding is constructed recursively, but not all steps are worked out explicitly. They do not give a closed formula for the embedding. For a different approach in the case of integral multisymplectic forms, involving a choice of open cover on the manifold $M$, see Fiorenza-Rogers-Schreiber \cite[\S 5]{Fiorenza2014a}. \\ {Our second result above, to the best of our knowledge, has not been addressed in the literature yet}. \\ {We expect to be able to extend both results above to arbitrary values of $n$. The proof of theorem \ref{thm:comm} suggests that theorem \ref{thm:iso} can be extended by choosing the coefficients there to depend suitably on the Bernoulli numbers, and in that case theorem \ref{thm:comm} would hold for all $n$ as a consequence. } \medskip \noindent The outline of the chapter is as follows. \\ First, in section \ref{Sec:GeoMotiv}, we describe the possible motivation for studying the commutation of a diagram like \eqref{intro:eq:pentagonDiagram} coming from geometric quantization. \\ Then, we move to prepare the ground for delivering our explicit construction for the embedding of the $L_\infty$-algebra associated to a multisymplectic manifold into the $L_\infty$-algebra associated to the corresponding Vinogradov algebroid. Our construction is based on the observation that both structures - when restricted to a suitable subcomplex - can be seen as being generated by the same set of few multibrackets. Accordingly, in section \ref{Sec:RogersL1prop} we review again the construction of the Rogers $L_\infty$-algebra of observable, discussed in section \ref{Section:RogersObservables}, with the language of \RN products. \\ In section \ref{Sec:Vinoids} we present the notion of Vinogradov algebroid and of the corresponding $L_\infty$-algebra working out certain recurrence properties of multibrackets that are similar to what we found for the Rogers' $L_\infty$-algebra of observables. \\ In section \ref{Section:ExtendedRogersEmbedding} we reap the fruits of this preliminary work providing an explicit expression of the sought embedding of the Rogers' $L_\infty$-algebra into the Vinogradov's one. \\ Finally, in section \ref{Sec:DiagramGaugeTransf} we discuss the compatibility of the previous construction under gauge transformations in presence of symmetries admitting \momaps. \section{Geometric motivation: the symplectic case}\label{Sec:GeoMotiv} In this section, we explain in detail the considerations on symplectic geometry outlined in the first part of the introduction. 
\begin{remark} On the symplectic manifold $(M,\omega)$ we adopt the conventions that $\iota_{X_f}\omega=-df$ and $\{f,g\}=\omega(X_f,X_g)$ (hence $f\mapsto X_f$ is a Lie algebra morphism). To shorten the notation, we denote by $C^{\infty}(M)_{\omega}$ the Lie algebra $(C^{\infty}(M),\{\cdot,\cdot\})$. \end{remark} We first introduce in \S \ref{subsec:prequantizationReminder} the notion of "prequantization" needed to discuss the geometric interpretation of the embedding \eqref{eq:intropreqmap} and the diagram \eqref{intro:diag:main}. The latter will be explained in \S \ref{subsec:first} and \S \ref{subsec:second}. Finally, in \S \ref{subsec:higher} we provide some motivation for the higher case. \subsection{Reminders on Atiyah algebroids and Prequantization}\label{subsec:prequantizationReminder} In this subsection, we succinctly review the language required to deliver a geometric interpretation of the -purely algebraic- problem of ascertaining the commutativity of diagram \eqref{intro:diag:main}. We will need the notion of Atiyah algebroid and prequantization. The former is a certain Lie algebroid uniquely associated to any principal bundle and the latter is a construction involving $S^1$-principal bundles on certain symplectic manifolds. \subsubsection{Lie algebroids} Informally, Lie algebroids are infinite dimensional Lie algebras controlled by a geometric data, namely, elements are sections of given vector bundle. % \begin{definition}[Lie algebroid] We call \emph{Lie algebroid} a triple $(E,[\cdot,\cdot]_E,\rho)$ consisting of \begin{itemize} \item a vector bundle $\pi:E \twoheadrightarrow M$; \item a Lie algebra structure $[\cdot,\cdot]_E$ on the space of section $\Gamma(E)$; \item a vector bundle morphism $\rho:E\to TM$ (over the identity $\id_M$), called \emph{anchor}; \end{itemize} such that: \begin{enumerate} \item[(a)] $\rho$ induces a Lie algebra morphism at the level of sections\footnote{Note that this condition follows directly from condition (b).} $$\rho: (\Gamma(E),[\cdot,\cdot]_E)\to (\mathfrak{X}(M),[\cdot,\cdot])~;$$ \item[(b)] $[\cdot,\cdot]_E$ is compatible with the anchor in the sense of the \emph{Liebniz rule}\footnote{Algebraically, it tells that $[X,\cdot ]$ is a derivation on the $C^\infty(M)$-module of sections $\Gamma(E)$.}: \begin{displaymath} [X, f~Y ]_E = (\rho(X) f)~ Y + f~ [X,Y]_E \qquad \forall f \in C^\infty(M);~X,Y\in \Gamma(E)~. \end{displaymath} ($\rho(X)$ in the above equation has to be interpreted as the unique derivation on the associative commutative algebra $C^\infty(M)$, with respect to the point-wise product, associated to the smooth vector field $\rho(X)\in\mathfrak{X}(M)$.) \end{enumerate} \end{definition} \begin{notation} If the anchor is a surjective map $ \rho:E \twoheadrightarrow TM$, the Lie algebroid is said to be \emph{transitive}. \end{notation} \begin{example}[Lie Algebras] Any Lie algebra $\mathfrak{g}$ can be seen as a Lie algebroid over a $0$-dimensional manifold (\ie a point) $\{\ast\}$. In this sense, a Lie algebroid is a "many points version" (see "horizontal categorification" \cite{nlab:horizontal_categorification}) of a Lie algebra. \end{example} \begin{example}[Tangent bundle] Given a manifold $M$, the corresponding tangent bundle is Lie algebroid with anchor given by the identity bundle map. 
\end{example} \begin{example}[Standard Lie algebroid]\label{ex:StandardLieAlgbroid} Given any smooth manifold $M$, the vector bundle $E= \RR_M\oplus TM$ together with the standard projection $\rho: \RR_M\oplus TM \twoheadrightarrow TM$ and the binary bracket \begin{displaymath} \morphism{[\cdot,\cdot]} {\Gamma(TM \oplus\RR_M)\otimes \Gamma(TM \oplus\RR_M)} {\Gamma(TM \oplus\RR_M)} {\pair{X_1}{f_1}\otimes\pair{X_2}{f_2}} {\pair{[X_1,X_2]} {\mathcal{L}_{X_1} f_2 - \mathcal{L}_{X_2} f_1}} \end{displaymath} constitute a Lie algebroid called \emph{standard Lie algebroid}. \end{example} \begin{example}[$\omega$-Twisted (standard) Lie algebroid]\label{ex:TwistedLieAlgbroid} Consider a smooth manifold $M$. Let be $\omega\in\Omega^2(M)$ a closed $2$-form , \ie the manifold $(M,\omega)$ is a pre-$1$-plectic manifold. Then the vector bundle $E= \RR_M\oplus TM$ together with the standard projection $\rho: \RR_M\oplus TM \twoheadrightarrow TM$ and the binary bracket \begin{displaymath} \morphism{[\cdot,\cdot]_{\omega}} {\Gamma(TM \oplus\RR_M)\otimes \Gamma(TM \oplus\RR_M)} {\Gamma(TM \oplus\RR_M)} {\pair{X_1}{f_1}\otimes\pair{X_2}{f_2}} {\pair{[X_1,X_2]} {\mathcal{L}_{X_1} f_2 - \mathcal{L}_{X_2} f_1- \omega(X_1,X_2)} } \end{displaymath} constitute a Lie algebroid called \emph{$\omega$-twisted (standard) Lie algebroid}. \end{example} \subsubsection{Principal connections and Atiyah algebroids} Given a finite dimensional Lie group $G$, consider a principal bundle $G\hookrightarrow P \twoheadrightarrow M~.$ We denote by $\hat{\xi}\in \mathfrak{X}(P)$ the fundamental vector field of $\xi \in \mathfrak{g}$ with respect to the action $R:G\action P$ of the group on $P$ from the right, namely \begin{displaymath} \hat{\xi}\eval_p := \dfrac{\d}{\d t} R_{\exp{t \xi}}(p) \eval_{t=0} ~; \end{displaymath} (\cf remark \ref{rem:RightActionMess}). Le us recall the following two definitions: \begin{definition}[Connection of a Principal bundle]\label{def:connections} Given a $G$-principal bundle $G\hookrightarrow P \twoheadrightarrow M$ , we call a \emph{connection on $P$} any $\mathfrak{g}$-valued differential $1$-form $\theta\in \Omega^1(P,\mathfrak{g})$ satisfying the two following property \begin{itemize} \item $\theta$ reproduces the fundamental vector fields, \ie $\theta(\hat{\xi})= \xi$; \item $\theta$ is $G$-equivariant: $R_g^\ast \theta = \text{Ad}_{g^{-1}}\theta$. \end{itemize} \end{definition} \begin{definition}[Curvature $2$-form] Given a connection $\theta\in \Omega^1(P,\mathfrak{g})$, we call \emph{curvature of $\theta$} the $\mathfrak{g}$-valued differential 2-form $\d_{\cA}\theta \in \Omega^2(P,\mathfrak{g})$ such that \begin{displaymath} \d_{\cA}\theta (v_1,v_2) = \d\theta (v_1,v_2) + [\theta(v_1),\theta(v_2)] \qquad \forall v_i \in \mathfrak{X}(P)~. \end{displaymath} The connection $\theta$ is said \emph{flat} if $\d_{\cA}\theta = 0$. \end{definition} Principal connections are special cases of Ehresmann connections: \begin{reminder}[Ehresmann connections]\label{rem:ehresmann} Let $\pi:E\to M$ be a smooth fibre bundle. Consider the tangent bundle $\tau:TE\to E$. One can introduce the unique \emph{vertical bundle} $V:=\ker(\d\pi:TE\to TM)$ where $\d \pi$ is the differential of the smooth map $\pi$ (the tangent map $T\pi$). $V$ is a subbundle of $TE$ whose fibres $V_e=T_e (E_{\pi(e)})$ consist of vectors on $TE$ which are tangent to the fibres of $E$. 
\\ An \emph{Ehresmann connection} of $E$ is any smooth subbundle $H$ of $TE$ such that $$ TE = H \oplus V~.$$ \\ Consider now a principal bundle $\pi:P\to M$ with connection encoded by a $1$-form $\theta$ as in definition \ref{def:connections}. The subbundle $H_{\theta}:=\ker(\theta)$ defines an invariant Ehresmann connection on $P$ corresponding to $\theta$. Hence $TP= \ker(\theta)\oplus \ker(\d \pi)$. \end{reminder} \begin{reminder}[Horizontal lifts]\label{rem:horlifts} Let $\pi:P\to M$ be a fibre bundle and consider an Ehresmann connection encoded by a horizontal subbundle $H$ of $TP$. Vector fields on $P$ decompose uniquely into a vertical and a horizontal part. Given a vector field $X\in \mathfrak{X}(M)$, we call the \emph{horizontal lift} of $X$ the unique (horizontal) vector field $X^H\in \mathfrak{X}(P)$ such that the following diagram commutes in the category of smooth manifolds: \begin{displaymath} \begin{tikzcd}[column sep = huge] P\ar[r,bend left=30,"X^H"] \ar[d,two heads, "\pi"']& H \ar[l,two heads] \ar[d,dashed] \ar[r,hook] & TP \ar[dl,"\d\pi"] \\ M \ar[r,bend right=30,"X"'] & TM \ar[l,two heads] \end{tikzcd} ~. \end{displaymath} \end{reminder} \iffalse \subsubsection{Atiyah algebroids}\fi Consider a $G$-principal bundle $G\hookrightarrow P \twoheadrightarrow M$, denoted simply as $\pi:P\to M$. The action $R:G\action P$ is free and proper, therefore, there is a well-defined quotient. The same property holds for the lift of the action $R$ to $G\action TP$. In other words, we have the following commutative diagram in the category of smooth manifolds \begin{displaymath} \begin{tikzcd} & TP \ar[r,"\d\pi"]\ar[dd,"\tau_P"] & TM \ar[dd,"\tau_M"] \\[-1.5em] G \ar[ru,hook] \ar[rd,hook] & & \\[-1.5em] & P \ar[r,two heads,"\pi"] & M ~, \end{tikzcd} \end{displaymath} where the vertical arrows $\tau_P$ and $\tau_M$ denote the canonical fibrations of the tangent bundles over their corresponding base manifolds. Observe that, while the lower horizontal line is a $G$-principal bundle, the upper one never inherits the structure of a $G$-principal bundle since the Lie group does not act transitively on the fibers of $\d \pi$. \\ This justifies the following definition: \begin{definition}[Atiyah algebroid]\label{def:ati} Given a $G$-principal bundle $\pi:P\to M$, we call \emph{Atiyah algebroid} the Lie algebroid obtained by taking the quotient, with respect to $G$, of the tangent bundle $\tau_P: TP \to P$. Namely, it is given by the vector bundle \begin{displaymath} \begin{tikzcd} A_P \cong \dfrac{TP}{G} \ar[r,two heads,"\tau_P"] & \dfrac{P}{G}\cong M \end{tikzcd} ~, \end{displaymath} whose sections correspond to $G$-invariant vector fields over $P$, \ie $\Gamma(A_P)\cong \mathfrak{X}(P)^{G}$, together with the restriction of the standard Lie bracket on $\mathfrak{X}(P)$ to $G$-invariant vector fields and with anchor given by $\d\pi: TP \to TM$. \end{definition} The upshot is that Atiyah algebroids are certain Lie algebroids naturally associated to principal bundles. The following lemma gives another characterization of the Atiyah algebroid that will be used in the following.
\begin{lemma}[Atiyah exact sequence {(\cite[Thm.1]{Atiyah1957})}]\label{lem:atiexactseq} Given a principal bundle $\pi:P\to M$, its corresponding Atiyah algebroid $A_P$ fits in a short exact sequence in the category of Lie algebroids: \begin{displaymath} \begin{tikzcd} 0 \ar[r] & P \times_G \mathfrak{g} \ar[r] & A_P \ar[r,"\d \pi"] & TM \ar[r] & 0 \end{tikzcd} \end{displaymath} where $P \times_G \mathfrak{g}$ is the \emph{adjoint bundle of $P$}, \ie the vector bundle $(P\times \mathfrak{g})/\sim$ where $(p, \text{Ad}_{g}\, \xi)\sim (R_g(p),\xi)$ (see \cite[\S 17.6.]{Kolar1993}). \end{lemma} \subsubsection{Prequantization}\label{sec:Prequantum} In this subsection we concisely review some basic notions related to geometric quantization, tailored to our needs (see \cite{Kostant70,Souriau66} for the original articles or \cite{Woodhouse97,Bry,Carosso2018} for a more recent review), in a finite-dimensional environment. Let $(M,\omega)$ be a connected symplectic manifold. \begin{definition}[Prequantum bundle]\label{Def:PrequantumBundle} We call a \emph{prequantum bundle} of the symplectic manifold $(M,\omega)$ the pair $(P,\theta)$ consisting of an $S^1$-principal bundle \begin{displaymath} S^1 \hookrightarrow P \twoheadrightarrow M ~, \end{displaymath} together with a connection $\theta\in \Omega^1(P,\mathfrak{g})\cong\Omega^1(P)$ such that \begin{displaymath} \pi^\ast \omega = \d \theta \end{displaymath} where $\pi:P\to M$ denotes the bundle projection of $P$. \\ A prequantum bundle $(P,\theta)$ is also called a \emph{prequantization of $(M,\omega)$}. When a given symplectic manifold admits a prequantum bundle it is said to be \emph{prequantizable}. \end{definition} Not all symplectic manifolds admit a prequantum bundle. The following celebrated theorem provides a cohomological condition for the existence of a prequantization. \begin{theorem}[Weil-Kostant integrality condition \emph{(see \cite{Kostant70} or \cite[Thm. 8.3.1]{Woodhouse97})}]\label{thm:integralitycondition} Consider a symplectic structure $\omega$ on the connected manifold $M$. The symplectic manifold $(M,\omega)$ is "prequantizable" (in the sense of definition \ref{Def:PrequantumBundle}) if and only if ${\frac{1}{2\pi}}[\omega]$ is an integral class, \ie it lies in the image of the mapping \begin{displaymath} \begin{tikzcd} H^2_{\text{sing}}(M,\mathbb{Z}) \ar[r] & H^2_{\text{sing}}(M,\R) \ar[r,equal,"\sim"] & H^2_{dR}(M) ~. \end{tikzcd} \end{displaymath} \end{theorem} \begin{remark}[Connections on circle bundles] Specializing definition \ref{def:connections} to the case of circle bundles, like in definition \ref{Def:PrequantumBundle}, has the following implications: \begin{itemize} \item since $\mathfrak{g}\cong \RR$, the connection is an honest $1$-form $\theta \in \Omega^1(P)$; \item since $\mathfrak{g}$ is generated by $1\in \RR$, one has that $\theta(\vAct_1)=1\in \RR$ where $\vAct_1$ denotes the fundamental vector field of the generator $1$; \item since $S^1\cong U(1)$ is Abelian, the adjoint action is trivial, \ie $\text{Ad}_{g^{-1}} = \id$ for any $g\in S^1$. Therefore $R_g^\ast \theta = \theta$, \ie $\theta$ is $S^1$-invariant. \end{itemize} \end{remark} \begin{notation} We denote by $E\in \vX(P)$ the fundamental vector field, pertaining to the action of the $1$-dimensional group $S^1$ on $P$, corresponding to the generator $1$. \end{notation} Once one fixes a "preferred" differential form, it is natural to select the class of smooth maps that preserve such a structure.
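At the infinitesimal level, this amounts to selecting the vector fields on $P$ whose flows preserve the connection $\theta$, which motivates the following definition.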
\begin{definition}[Infinitesimal quantomorphisms] Consider a prequantum bundle $(P,\theta)$ of $(M,\omega)$. We call an \emph{infinitesimal quantomorphism} any vector field on $P$ preserving the connection $\theta$. We denote by $$Q(P,\theta):=\{Y\in \vX(P)~\vert~\cL_Y\theta=0\}$$ the Lie subalgebra of $\mathfrak{X}(P)$ consisting of infinitesimal quantomorphisms. \end{definition} \begin{lemma}[{\cite[\S 2.2]{VaughanJ}}]\label{lem:preqInclusionSinvariant} Consider a prequantum bundle $(P,\theta)$. The infinitesimal quantomorphisms are automatically $S^1$-invariant, \ie \begin{displaymath} Q(P,\theta) \subset \mathfrak{X}(P)^{S^1}~. \end{displaymath} \end{lemma} \begin{proof} Consider $E\in\mathfrak{X}(P)$, the generator of the action of $S^1$ on $P$. Notice that $E$ is determined by $\theta$, being the unique vector field such that $\iota_E\theta=1$ and $\iota_E \d\theta=0$. Hence if $X\in\mathfrak{X}(P)$ preserves $\theta$, then it will preserve $E$ too. More precisely, let $X\in\mathfrak{X}(P)$ be such that $\mathcal{L}_X \theta =0$. One has that ${[X,E]}$ is a horizontal vector field since \begin{displaymath} \iota_{[X,E]}\theta = \mathcal{L}_X \iota_E \theta - \iota_E \mathcal{L}_X \theta = 0 ~, \end{displaymath} hence $[X,E]$ is projectable and completely determined by its projection onto $M$. Similarly, one has that $\iota_{[X,E]} \d \theta =0$. Noticing that \begin{displaymath} \iota_{[X,E]} \d \theta = \iota_{[X,E]} \pi^\ast \omega = \iota_{\pi_\ast [X,E]} \omega ~, \end{displaymath} the non-degeneracy of $\omega$ implies $[X,E]=0$, hence $\mathcal{L}_E X =0$, for any $X\in Q(P,\theta)$. \end{proof} Assume that ${\frac{1}{2\pi}}[\omega]$ is an integral class and fix a prequantization circle bundle $P\to M$. As above, we denote by $E\in \vX(P)$ the unique infinitesimal generator pertaining to the action of the $1$-dimensional group $S^1$ and by $H_{\theta}:=\ker(\theta)$ the invariant Ehresmann connection on $P$ corresponding to $\theta$. It is known that a prequantization provides a Lie algebra isomorphism between the Poisson algebra of observables and the Lie algebra of infinitesimal quantomorphisms. \begin{lemma}[Kostant {\cite{Kostant70}} \emph{(see also \cite[Thm. 2.8]{VaughanJ})}]\label{lem:preqMap} Consider a prequantizable symplectic manifold $(M,\omega)$ and let $(P,\theta)$ be a prequantum bundle. One has a Lie algebra isomorphism \begin{equation}\label{eq:preq} \isomorphism{\Preq_{\theta}} {C^{\infty}(M)_{\omega}} {Q(P,\theta)} {f} {X_f^{H_\theta}+(\pi^*f)\cdot E}, \end{equation} where $X^{H_\theta}$ denotes the horizontal lift of a vector field $X$ on $M$ using the Ehresmann connection $H_\theta$ (see reminders \ref{rem:ehresmann} and \ref{rem:horlifts}) and $\pi^\ast f \in C^{\infty}(P)$ is the pullback of $f$ along $\pi:P\to M$. \end{lemma} The Lie algebra isomorphism $\Preq_\theta$ is the first ingredient to realize the map \eqref{eq:intropreqmap} anticipated in the introduction of this chapter. \begin{remark} Implicit in the definition of $\Preq_\theta$ is the commutativity of the following diagram in the category of Lie algebras: \begin{equation}\label{eq:preqiso} \begin{tikzcd}[column sep = huge] C^{\infty}(M)_{\omega} \ar[r,"\Preq_{\theta}"',"\sim"] \ar[d,"X_{\bullet}"'] & Q(P,\theta) \ar[dl,shift left=.5em,"\pi_*"] \\ \vX(M) & \end{tikzcd} ~. \end{equation} Notice that the vertical map has a one-dimensional kernel given by the constant functions on $M$, \ie $\ker(X_\bullet)=\RR$.
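(Indeed, since $\iota_{X_f}\omega=-\d f$ and $\omega$ is non-degenerate, $X_f=0$ if and only if $\d f=0$, \ie if and only if $f$ is constant, $M$ being connected.)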
Hence the same holds for the map $\pi_*$ in the diagram giving the pushforward of projectable vector fields via $\pi$, \ie \begin{displaymath} \Preq_\theta(\ker(X_\bullet)) = \ker(\pi_\ast\vert_{Q(P,\theta)})~, \end{displaymath} where the right-hand side consists of vertical vector fields preserving $\theta$, \ie elements of $\ker(\pi_\ast\vert_{Q(P,\theta)})$ are constant multiples of $E$. \end{remark} \begin{remark}[About the \emph{Quantum} name] We briefly explain why the term "quantum" appears in this context. \\ Associating a prequantum bundle to a prequantizable symplectic manifold is the first step of a 3-step procedure called \emph{"geometric quantization (scheme)"} essentially due to Kostant, Kirillov and Souriau (KKS). \\ Roughly, a "quantization scheme" is a procedure associating to any symplectic manifold (prequantizable in some sense), taken together with the corresponding Poisson algebra of observables $(C^{\infty}(M),\{\cdot,\cdot\})$, a Hilbert $\mathbb{C}$-vector space $\mathscr{H}$, taken together with a suitable algebra of self-adjoint operators on $\mathscr{H}$. \\ If one understands states of a classical mechanical system as points on a (finite-dimensional) manifold $M$, the corresponding "quantum" states will be vectors of an (infinite-dimensional) complex vector space $\mathscr{H}$ with unit norm. The key point is the linearity of $\mathscr{H}$. This property provides a framework making it possible to encompass the phenomenon of "superposition of states". In particular, given $\varphi\in \mathscr{H}$, the vectors $e^{i\lambda}\varphi$ represent the same "physical state" for any $\lambda\in \RR$. \\ Once this point is clear, the reason why we have called "prequantization" the act of associating an $S^1$-bundle $P$ over $M$ should begin to emerge. Intuitively, an $S^1$-bundle over $M$ is the attachment of a circle to any classical state $p\in M$. On the other hand, it is well-known that $S^1$ is diffeomorphic to the unit circle in the complex plane $\mathbb{C}$, \ie the Lie group $U(1)$. Hence points in $P$ can be seen as classical states taken together with a certain \emph{phase factor} $e^{i \lambda}\in\mathbb{C}$. \\ Nevertheless, $P$, being a generic smooth manifold, is not yet the sought-for linear space. According to the (KKS) procedure, the \emph{prequantum Hilbert space} is represented by a certain subclass of complex-valued smooth functions over $P$. Furthermore, since vector fields on $P$ can be regarded as derivations $C^\infty(P)\to C^\infty(P)$, the images of the morphism given in equation \eqref{eq:preq} can be seen as prequantum versions of the classical observables of $C^\infty(M)$, hence the name "prequantization map"\footnote{We mention that the prequantization procedure could be equivalently expressed in terms of $1$-dimensional Hermitian bundles and integrable sections, see for instance \cite[\S 1.1]{Weinstein2005a}.}. We do not insist here on further details and we refer the interested reader to the fundamental manuals of geometric quantization \cite{Bry} and \cite{Woodhouse97}.
We only stress that what we loosely described here are quantization schemes for ordinary, \ie point-like, mechanical systems. The mathematical foundation of quantization procedures for $\infty$-dimensional mechanical systems, \ie field theories, is still largely incomplete. \end{remark} \subsection{Embedding of the observables in the Atiyah algebroid}\label{subsec:first}\label{subsec:At} In this subsection we derive the map \eqref{eq:intropreqmap}. {The material reviewed here can be found also in \cite[\S 2]{Rogers2013}.} \iffalse\subsubsection{Atiyah algebroid of the prequantum bundle}\label{subsec:At}\fi Consider again a principal circle bundle $\pi\colon P\to M$. According to definition \ref{def:ati}, the corresponding Atiyah Lie algebroid $A_P$ is the transitive Lie algebroid over $M$ with space of sections given by $\vX(P)^{S^1}$, the invariant vector fields on $P$, and anchor given by $\pi_*$. Lemma \ref{lem:atiexactseq} says that $A_P$ fits in a short exact sequence of Lie algebroids \begin{displaymath} \begin{tikzcd} 0 \ar[r] & \RR \ar[r] & A_P \ar[r] & TM \ar[r]& 0 \end{tikzcd} \end{displaymath} where {$\RR$ denotes the trivial rank-1 bundle\footnote{This is usually denoted as $\RR_M$, we are employing a short-hand notation here.} over $M$} (a bundle of Abelian Lie algebras, {with the constant section $1$ mapping to $E\in \Gamma(A_P)$}) and {the second map is the anchor}. A principal connection $\theta$ on $P$ provides a linear splitting of the above short exact sequence. \begin{lemma}[$TM \oplus \RR$ is isomorphic to $A_P$]\label{lem:sigmatheta} Let $\pi: P \to M$ be an $S^1$-principal bundle and consider a connection $1$-form $\theta$. There is an isomorphism of vector bundles \begin{displaymath} \isomorphism{\sigma_\theta} {TM \oplus \RR} {A_P} {\pair{v}{c}} {v^{H_\theta}+ c\cdot E} ~, \end{displaymath} where the superscript $H_\theta$ denotes the horizontal lift on vectors and $E$ is the fundamental vector field of the generator $1$ in the Lie algebra of $S^1$. \end{lemma} \begin{remark}[Recovering the standard $\omega$-twisted Lie algebroid] Considering the inverse of the isomorphism $\sigma_\theta$ introduced in lemma \ref{lem:sigmatheta}, one can pull back the Lie algebroid structure from $A_P$ to $TM\oplus \RR$. As a result, we obtain the Lie algebroid $(TM\oplus \RR)_{{\omega}}$, with anchor map given by the first projection onto $TM$ and Lie bracket on sections given by: \begin{displaymath} \begin{aligned} \left[\pair{X}{f},\pair{Y}{g} \right]_{\omega} :=&~ \sigma_\theta^{-1} \left( \left[\sigma_\theta\pair{X}{f},\sigma_\theta\pair{Y}{g}\right] \right) = \\ =&~\pair{[X,Y]}{X(g)-Y(f) + {\iota_X\iota_Y\omega} } ~. \end{aligned} \end{displaymath} To prove the last equality, observe that \begin{displaymath} \mathclap{ \begin{aligned} &[X^{H_\theta} +(\pi^\ast f) E, Y^{H_\theta} + (\pi^\ast g) E ] = \\ &=~ [X^{H_\theta},Y^{H_\theta}] + [ (\pi^\ast f) E, Y^{H_\theta}] + [X^{H_\theta}, (\pi^\ast g) E] + \cancel{[(\pi^\ast f) E,(\pi^\ast g) E] } = \\ &=~ [X^{H_\theta},Y^{H_\theta}] + \cancel{ (\pi^\ast f) [E,Y^{H_\theta}]} + \cancel{(\pi^\ast g) [X^{H_\theta},E]} + \left( -\cL_{Y^{H_\theta}}(\pi^\ast f) +\cL_{X^{H_\theta}}(\pi^\ast g) \right) E = \\ &=~ [X^{H_\theta},Y^{H_\theta}] + \pi^\ast\left( X(g)-Y(f) \right) E \end{aligned} } \end{displaymath} where the first cancellation occurs because $E$ is vertical and $\pi^\ast f$ and $\pi^\ast g$ are constant along the fibres, and the second one follows from $\pi_\ast ([X^{H_\theta},E])=\theta([X^{H_\theta},E])=0$.
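Both identities can be checked directly: $X^{H_\theta}$ and $E$ are projectable (to $X$ and to the zero vector field respectively), so $[X^{H_\theta},E]$ projects to $[X,0]=0$; moreover $\iota_{[X^{H_\theta},E]}\theta = \mathcal{L}_{X^{H_\theta}}\iota_E\theta - \iota_E\mathcal{L}_{X^{H_\theta}}\theta = -\iota_E\big(\iota_{X^{H_\theta}}\pi^\ast\omega + \d\,\iota_{X^{H_\theta}}\theta\big)=0$, since $\iota_E\theta=1$ is constant, $\iota_{X^{H_\theta}}\theta=0$, and $E$ is vertical.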
The claim is proved by noticing that $$[X^{H_\theta},Y^{H_\theta}]= [X,Y]^{H_\theta} + \pi^\ast(\iota_X\iota_Y\omega) E$$ since \begin{displaymath} \mathclap{ \begin{aligned} \theta([X^{H_\theta},Y^{H_\theta}]) =&~ \cL_{X^{H_\theta}}\cancel{\iota_{Y^{H_\theta}}\theta} -\iota_{Y^{H_\theta}}\iota_{X^{H_\theta}} \d \theta - \iota_{Y^{H_\theta}}\d \cancel{\iota_{X^{H_\theta}}\theta} = \\ =&~ -\iota_{Y^{H_\theta}}\iota_{X^{H_\theta}} \pi^\ast \omega = \\ =&~ -\pi^\ast (\iota_{Y}\iota_{X} \omega) = \pi^\ast (\iota_{X}\iota_{Y} \omega) \end{aligned} } \end{displaymath} and \begin{displaymath} \pi_\ast [X^{H_\theta},Y^{H_\theta}] = [X,Y] ~. \end{displaymath} \end{remark} The upshot is that $\sigma_\theta$ is a Lie algebroid isomorphism $(TM\oplus \RR)_{{\omega}} \cong A_P$ between the standard $\omega$-twisted Lie algebroid (see example \ref{ex:TwistedLieAlgbroid}) and the Atiyah algebroid of the $S^1$-principal bundle. Finally, noticing that $Q(P,\theta)\subset \vX(P)^{S^1} =\Gamma(A_P)$, we conclude that the sought embedding (see equation \eqref{eq:intropreqmap}) can be obtained by composing\footnote{We will tend, with a slight abuse of notation, to regard the map $\Preq_{\theta}$ introduced in equation \eqref{eq:preq} as a morphism $C^{\infty}(M)_{\omega}\to \vX(P)^{S^1}$. } $\Preq_{\theta}$ and $\sigma_\theta^{-1}$. \begin{proposition}[The Poisson algebra embeds into the sections of $(TM\oplus\RR)_\omega$] Let $(M,\omega)$ be a prequantizable symplectic manifold and consider the $\omega$-twisted standard Lie algebroid $(TM\oplus\RR)_\omega$. The Lie algebra morphism $\Psi$ obtained by the composition of the maps introduced in lemmas \ref{lem:preqMap}, \ref{lem:preqInclusionSinvariant} and \ref{lem:sigmatheta}, \ie the map obtained from the following diagram \begin{displaymath} \begin{tikzcd}[row sep = small, column sep = small] C^\infty(M)_\omega \ar[dr,equal,sloped,"\sim","\Preq_{\theta}"'] \ar[rrr,dashed,"\Psi"] & &[2em] & \Gamma(TM\oplus \RR)_{{\omega}} \\ & Q(P,\theta) \ar[r,hook] & \Gamma(A_P) \ar[ur,equal,sloped,"\sim","\sigma_\theta^{-1}"'] \end{tikzcd} ~, \end{displaymath} is the Lie algebra embedding \eqref{eq:intropreqmap} appearing in the introduction. \\ Namely one has: \begin{equation}\label{eq:chris} \morphism{\Psi} {C^{\infty}(M)_{\omega}} {\Gamma(TM\oplus \RR)_{{\omega}}} {f} {\pair{X_f}{f}} ~, \end{equation} where $X_f$ denotes the Hamiltonian vector field pertaining to $f$. \end{proposition} For the rest of this section, we will simply denote $\Psi$ as $$\sigma_{\theta}^{-1}\circ \Preq_{\theta}\colon C^{\infty}(M)_{\omega} \to \Gamma(TM\oplus \RR)_{\omega} ~.$$ \begin{remark}[Independence from the choice of prequantization]\label{rem:IndiPreQuantum} We point out that the above expression \eqref{eq:chris} is a Lie algebra embedding even when $\omega$ does not satisfy the integrality condition. Indeed, with the conventions fixed at the beginning of this section one has $X_f(g)=\{f,g\}$ and $\iota_{X_f}\iota_{X_g}\omega=-\{f,g\}$, so that $[\Psi(f),\Psi(g)]_{\omega} = \pair{[X_f,X_g]}{X_f(g)-X_g(f)+\iota_{X_f}\iota_{X_g}\omega} = \pair{X_{\{f,g\}}}{\{f,g\}} = \Psi(\{f,g\})$. Furthermore, it does not depend on the connection $\theta$ chosen in the prequantization procedure. \end{remark} \subsection{Commutativity after twisting}\label{subsec:second} {In this subsection we show, by geometric arguments, the commutativity of the diagram \eqref{intro:diag:main} from the introduction.} \iffalse \subsubsection{Moment maps}\label{subsec:momaps}\fi Assume we have an action of a Lie group $G$ on $M$, and denote by $v:\g\to \vX(M)$ the corresponding infinitesimal action (a Lie algebra morphism).
Assume the existence of an equivariant moment map $$J\colon M\to \g^*.$$ This means that $J$ satisfies $\iota_{v_x}\omega=-d(J^*(x))$ (\ie $v_x=X_{J^*(x)}$) and that $J^*\colon \g\to C^{\infty}(M)_{\omega}$ is a Lie algebra morphism\footnote{This is equivalent to infinitesimal equivariance, \ie $\cL_{v_y}J^*(x)=J^*([y,x])$.} (see reminder \ref{Rem:SymplecticMomaps} in chapter \ref{Chap:MultiSymplecticGeometry}). Therefore, diagram \eqref{eq:preqiso} extends to \begin{equation}\label{diag:gQ} \begin{tikzcd}[column sep = huge, row sep = large] & C^{\infty}(M)_{\omega} \ar[r,"{\sim}","\Preq_\theta"'] \ar[d,"X_{\bullet}"] & Q(P,\theta) \ar[dl,"\pi_*"] \\ \g \ar[r,"{v_{\bullet}}"'] \ar[ru,"{J^*}"] & \vX(M) & \end{tikzcd} \end{equation} In particular, we obtain a Lie algebra morphism \begin{equation}\label{eq:L0map} \morphism{L_0:=\Preq_{\theta}\circ J^*} {\g} {Q(P,\theta)} {x} {v_x^{H_\theta}+\left((\pi^*J^*)(x)\right)\cdot E}~, \end{equation} lifting the infinitesimal action in the sense of the commutativity of the following diagram in the category of Lie algebras \begin{equation*} \begin{tikzcd}[column sep = large] & Q(P,\theta) \ar[d,"{\pi_*}"] \\ \g \ar[r,"v_{\bullet}"'] \ar[ru,"L_0"] & \vX(M) \end{tikzcd}~. \end{equation*} \subsubsection{Twisting by an invariant one-form} {Notice that the difference between any two connection 1-forms on the circle bundle $\pi\colon P\to M$ is basic, \ie it is the pullback of a 1-form on $M$.} \\ Now we take $\alpha\in \Omega^1(M)^G$ and use it to twist some of the above data, keeping the $G$-action fixed: $\omega+d\alpha$ is an invariant symplectic form on $M$ (assuming it is non-degenerate), with moment map $J_{\alpha}$ determined by\footnote{Indeed it can be checked easily that $\iota_{v_x}(\omega+d\alpha)=-d(J^*(x)+\iota_{v_x}\alpha)$ using the $G$-invariance of $\alpha$ (expressed as $\cL_{v_x}\alpha=0$ for all $x\in \g$).} \begin{equation}\label{eq:jalpha} \morphism{J_{\alpha}^*} {\g} {C^{\infty}(M)_{\omega+d\alpha}} {x} {J^*(x)+\iota_{v_x}\alpha} ~. \end{equation} Furthermore, a prequantization of the symplectic manifold $(M,\omega+d\alpha)$ is given by the same circle bundle $P$ but with connection $\theta+\pi^*\alpha$. We can repeat the procedure outlined above (see in particular equation \eqref{eq:L0map}), obtaining a Lie algebra morphism $L_\alpha\colon \g \to Q(P,\theta+\pi^*\alpha) $ lifting the infinitesimal action. Since $\alpha$ is $G$-invariant, any lift to $P$ of a generator $v_x$ preserves $\pi^*\alpha$, hence we can view both $L_0$ and $L_{\alpha}$ as maps $$ \g \to Q(P,\theta)\cap Q(P,\theta+\pi^*\alpha)$$ which are \emph{Lie algebra morphisms lifting the infinitesimal action}. There are ``few'' such Lie algebra morphisms. (They are in bijection with moment maps for $(M,\omega)$, by diagram \eqref{diag:gQ}; if $H^1(\g)=0$ then the moment map is unique \cite[Theorem 26.5]{CannasdaSilva2001}.) Hence the following is not a surprise. \begin{proposition}\label{prop:L} The Lie algebra morphisms $L_0$ and $L_\alpha$ coincide. \end{proposition} \begin{proof} Fix $x\in \g$ and write $f_x:=J^*(x)$. We have to show that $L_0(x)=L_{\alpha}(x)$, \ie $$v_x^{H_\theta}+\left(\pi^*f_x\right)\cdot E=v_x^{H_{\theta+\pi^*\alpha}}+\pi^*(f_x+\iota_{v_x}\alpha)\cdot E~.$$ We do so by decomposing $TP$ as $H_\theta\oplus \RR E$. Since both the left-hand side and the right-hand side $\pi$-project to the same vector field (namely $v_x$), we have to check that we obtain the same function when applying $\theta$ to both vector fields.
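Applying $\theta$ to the vector field on the left simply gives $\pi^*f_x$, since $v_x^{H_\theta}$ is horizontal for $\theta$ and $\theta(E)=1$.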
This is indeed the case, since applying $\theta$ to the vector field on the right we obtain $$\pi^*(f_x+\iota_{v_x}\alpha)-(\pi^*\alpha)(v_x^{H_{\theta+\pi^*\alpha}})=\pi^*f_x.$$ \end{proof} We can also repeat the construction of \S \ref{subsec:At} using the connection $\theta+\pi^*\alpha$, {yielding a Lie algebroid isomorphism $$\sigma_{\theta+\pi^*\alpha}\colon (TM\oplus \RR)_{\omega+d\alpha} \cong A_P.$$} The composition $(\sigma_{\theta+\pi^*\alpha})^{-1}\circ \sigma_{\theta}$ reads \begin{equation}\label{eq:tau} \isomorphism{\tau_{\alpha}} {(TM\oplus \RR)_{\omega}} {(TM\oplus \RR)_{\omega+d\alpha}} {\pair{v}{c}} {\pair{v}{c+ \iota_{v}\alpha}} \end{equation} and is often referred to as a \emph{gauge transformation}. \subsubsection{Commutativity} We end up with the following commutative diagram \begin{displaymath} \begin{tikzcd} & C^{\infty}(M)_{\omega} \ar[dr,"{\Preq_{\theta}}"] && \Gamma(TM\oplus \RR)_{\omega} \ar[dd,"{\tau_{\alpha}}"'] \\ \g \ar[rd,"{J_{\alpha}^*}"'] \ar[ru,"{J^*}"] && \vX(P)^{S^1} \ar[ur,"{\sigma_{\theta}^{-1}}"] \ar[dr,"{\sigma^{-1}_{\theta+\pi^*\alpha}}"'] & \\ & C^{\infty}(M)_{\omega+d\alpha} \ar[ur,"{\Preq_{\theta+\pi^*\alpha}}"'] && \Gamma(TM\oplus \RR)_{\omega+\d\alpha} \end{tikzcd} \end{displaymath} where the left square commutes by proposition \ref{prop:L} and the right one by the very definition of $\tau_\alpha$ (see equation \eqref{eq:tau}). As we emphasized in remark \ref{rem:IndiPreQuantum}, the composition $\sigma_{\theta}^{-1}\circ \Preq_{\theta}\colon C^{\infty}(M)_{\omega} \to \Gamma(TM\oplus \RR)_{\omega}$ does not depend on $\theta$. Hence, after removing $\vX(P)^{S^1}$ from the above diagram, we obtain a commutative diagram that makes no reference to the prequantization bundle $P$: \begin{equation}\label{diag:main} \begin{tikzcd}[column sep=huge] & C^{\infty}(M)_{\omega} \ar[r,""] & \Gamma(TM\oplus \RR)_{\omega} \ar[dd,"\tau_\alpha"] \\[-1em] \mathfrak{g}\ar[ru,"J^*"] \ar[dr,"J^*_{\alpha}"'] \\[-1em] & C^{\infty}(M)_{\omega+d\alpha} \ar[r,""] & \Gamma(TM\oplus \RR)_{\omega+d\alpha} \end{tikzcd} \end{equation} \begin{remark} For a given $\alpha$, in general, there is no linear map $C^{\infty}(M)_{\omega}\to C^{\infty}(M)_{\omega+d\alpha}$ making the left part of diagram \eqref{diag:main} commute. Indeed such a map exists if, and only if, for all $f\in C^{\infty}(M)$ we have $$X_f^{\omega}=X_{f+\iota_{X^{\omega}_f}\alpha}^{\omega+d\alpha}$$ where $X_g^{\nu}$ denotes the Hamiltonian vector field pertaining to the function $g$ with respect to the symplectic form $\nu$. The latter is equivalent to saying that $\cL_{X_f^{\omega}}\alpha=0$. This explains why it is necessary to consider moment maps for an $\alpha$-preserving action. \end{remark} \begin{remark}\label{rem:symcomm} Diagram \eqref{diag:main} commutes for any symplectic form $\omega$, even for those that do not satisfy the integrality condition and therefore do not admit a prequantization bundle. {This is immediate using the explicit expressions for the maps involved in eq. \eqref{eq:chris}, \eqref{eq:jalpha} and \eqref{eq:tau}: indeed, for any $x\in\g$, the gauge transformation $\tau_\alpha$ sends $\pair{v_x}{J^*(x)}$ to $\pair{v_x}{J^*(x)+\iota_{v_x}\alpha} = \pair{X^{\omega+d\alpha}_{J^*_\alpha(x)}}{J^*_\alpha(x)}$.} The discussion of this subsection -- in particular proposition \ref{prop:L} -- provides a geometric argument for the commutativity of diagram \eqref{diag:main} in the integral case. \end{remark} \subsection{Motivation for the higher case}\label{subsec:higher} In the rest of this chapter we will consider a multisymplectic form $\omega$, and show that the higher analogue of diagram \eqref{diag:main} commutes too.
This provides some evidence that, in the integral case, one can expect a global geometric picture (higher prequantization) analogous to the one we outlined for the symplectic case in the previous section. We first address the higher analogue of \S\ref{subsec:first}. In the integral case the analogue of the prequantization map $\Preq_{\theta}$ of eq. \eqref{eq:preq} was already established for all $n$ in Fiorenza-Rogers-Schreiber \cite[Thm. 4.6]{Fiorenza2014a}; there however the higher prequantization bundle is described by means of an open cover of the manifold $M$. For $n=2$, the analogue of the prequantization map was established on a higher prequantization bundle that admits a global description (without choosing a cover) in Krepski-Vaughan \cite[\S 5.1]{krepski2020multiplicative} -- but not as an $L_{\infty}$-algebra morphism -- and later in Sevestre-Wurzbacher \cite[Thm. 3.5]{SevestreWurzbacherPreq}. See \cite[Remark 5.2]{krepski2020multiplicative} and \cite[Rem. 3.6]{SevestreWurzbacherPreq} for a comparison. For the fact that ``higher Atiyah algebroids'' (also known as Vinogradov algebroids) can be obtained from $S^1$-gerbes and higher analogues, see \cite[\S 2.3]{Gualtieri2004}. \section{Algebraic Properties of Rogers' $L_\infty$-algebra}\label{Sec:RogersL1prop} In this section, we review again the construction of the $L_\infty$-algebra of observables introduced by Rogers, focusing on its presentation in terms of symmetric multibrackets ($L_\infty[1]$-algebra, \cf definition \ref{Def:LInfinityShifted}). Namely, we establish certain relations between the multibrackets of different degrees. We do so by means of concise computations using the \RN operation $\cs$ introduced in eq. \eqref{eq:compsymm}. In sections \ref{Sec:Vinoids} and \ref{Section:ExtendedRogersEmbedding}, these relations will be necessary to relate $L_\infty(M,\omega)$ with the $L_\infty$-algebra associated to the corresponding "higher Courant algebroid" (more precisely they will be used to make explicit the expression \eqref{eq:pi'} appearing in proposition \ref{prop:main}). \subsection{Rogers' $L_{\infty}[1]$-algebra}\label{sec:RogersL1} Let $(M,\omega)$ be an $n$-plectic manifold, denote by $L_{\infty}(M,\omega):=(L,\{\ell_{k} \})$ the associated $L_\infty$-algebra prescribed by Rogers' construction\footnote{Notice the small change of notation: in chapter \ref{Chap:Linfinity} we denoted the $k$-ary multibrackets as $[\cdots]_k$.}. In order to understand the relationship of $L_\infty(M,\omega)$ with the Vinogradov $L_\infty$-structure (see section \ref{subsec:Vinogradov}), it will be more convenient to consider the graded vector space $\cA$ given by the following components \begin{equation}\label{eq:Aspace} \cA^i:= \begin{cases} \left.\left\lbrace \pair{X}{\alpha}\in \mathfrak{X}(M)\oplus \Omega^{n-1}(M) ~ \right\vert ~ \iota_X \omega = -\d \alpha\right\rbrace & ~\text{if } i=0 \\ ~\Omega^{n+i-1}(M) & ~\text{if } 1-n \leq i\leq -1 \\ ~0 & ~\text{otherwise}~. \end{cases} \end{equation} \begin{remark} Observe that the graded vector space $\cA$ coincides with the graded vector space \begin{displaymath} Ham_\infty(M,\omega):= Ham^{n-1}(M,\omega)\oplus \trunc_{0}(\Omega(M)[n-1]) \end{displaymath} introduced in remark \ref{Rem:DegenerateCase}. Accordingly, $\cA^0 = Ham^{n-1}(M,\omega)$ consists of all the Hamiltonian pairs pertaining to $\omega$, \ie Hamiltonian forms together with their corresponding Hamiltonian vector fields. \end{remark} When $\omega$ is $n$-plectic, $\cA\cong L$ as graded vector spaces.
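For instance, in the $2$-plectic case ($n=2$) the only non-trivial components are $\cA^0=\left\lbrace \pair{X}{\alpha}\in \mathfrak{X}(M)\oplus \Omega^{1}(M) ~\vert~ \iota_X\omega=-\d\alpha \right\rbrace$ and $\cA^{-1}=C^{\infty}(M)$.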
Let us denote by $\pi_k$ the pullback of $\ell_k$ from $L$ to $\cA$ along the isomorphism acting as the projection $\cA^0\twoheadrightarrow \Omega^{n-1}_{\ham}(M,\omega)$ in degree $0$ and as the identity in other degrees. Since $\omega$ is non-degenerate, the latter projection is in fact bijective. We will denote its inverse by $\vartriangle$; in degree $0$ it maps Hamiltonian forms to the corresponding Hamiltonian pairs. Namely, the map acts as the identity in degrees less than $0$ and by \begin{equation}\label{eq:trianglemap} \morphism{(\blank)^\vartriangle} {L^0=\Omega^{n-1}_{\ham}(M,\omega)} {\cA^0 \subset \mathfrak{X}(M)\oplus\Omega^{n-1}(M)} {\alpha} {\pair{\vHam_\alpha}{\alpha}} ~, \end{equation} in degree $0$. Accordingly we will also denote $(\Omega^{n-1}_{\ham}(M,\omega))^{\vartriangle}=\cA^0$. For the sake of clarity, we reiterate the explicit expression for the higher observables multibrackets in this slightly different setting. Denoting by $\varv= f\oplus e$ a generic element in $\cA$, where $f\in \bigoplus_{k=0}^{n-2}\Omega^k(M)$ and $e=\pair{X}{\alpha}\in \cA^0$, the unary multibracket reads as the following linear map \footnote{To be completely consistent with the framework introduced in section \ref{Sec:ConventionsMultiLinearAlgebras}, we should replace $\cA$ with $\mathcal{A^\oplus}$. Here we are choosing to lighten that notation. Everything should be clear from the context. } \begin{displaymath} \morphism{\pi_1} {\cA} {\cA} {f\oplus\pair{X}{\alpha}} {\d f} ~; \end{displaymath} and all the other non-trivial $k$-ary multibrackets $(2\leq k \leq n+1)$ are given by: \begin{displaymath} \morphism{\pi_k} {\cA^{\otimes k}} {\cA} {e_1\otimes\dots \otimes e_k} {\varsigma(k) \iota(X_1\wedge\dots\wedge X_k) \omega} ~. \end{displaymath} Recall that, given a graded vector space $\cA$, an $L_{\infty}[1]$-algebra structure on $\cA[1]$ is equivalent to an $L_{\infty}$-algebra structure on $\cA$, via the d\'ecalage isomorphism (see equation \eqref{eq:deca}), which in the present case reads as follows \begin{equation}\label{deca} \isomorphism{\dec} {(\wedge^n \cA)[n]} {\odot^n(\cA[1])} {(u_1\wedge \cdots \wedge u_n)_{[n]}} {u_{1 [1]}\cdots u_{n [1]}\cdot (-1)^{(n-1)|u_{1}|+\dots+2|u_{n-2}|+|u_{n-1}|}} \end{equation} where $u_1,\dots,u_n\in \cA$ are homogeneous vectors and $|u_i|$ denotes the degree of $u_i\in \cA$. \\ We denote by $(\cA[1],\{\bpi_k\})$ the $L_{\infty}[1]$-algebra corresponding to Rogers' $L_\infty$-algebra on $\cA$. \subsection{{Properties} of Rogers' $L_{\infty}[1]$-algebra}\label{sec:L1RogersProp} We consider now the $L_\infty[1]$-algebra $(\cA[1],\{\bpi_k\})$. The crucial property is that, whatever the degree of the multisymplectic form $\omega$ considered on $M$, all Rogers' multibrackets can be expressed as combinations of the three lowest-arity multibrackets together with the following auxiliary operators: \begin{definition}[Symmetric and skew-symmetric Pairings $\pairing_\pm$]\label{def:pairing-} Let $M$ be a smooth manifold and $n\in \ZZ$ a fixed integer. Consider the graded vector space \begin{equation}\label{eq:VtildeSpace} \widetilde{\cV}:= \mathfrak{X}(M)\oplus\left( \Omega(M)[n]\right)~. \end{equation} We call \emph{symmetric (resp. skew-symmetric) pairing} the degree $-1$ binary brackets $\pairing_+\in M^{\sym}(\widetilde{\cV})$ (resp.
$\pairing_-\in M^{\skew}(\widetilde{\cV})$) defined as \begin{displaymath} \morphism{\pairing_\pm} {\widetilde{\cV}\otimes\widetilde{\cV}} {\widetilde{\cV}} {(x_1+f_1, x_2 +f_2)} {\dfrac{1}{2}(\iota_{x_1}f_2 \pm \iota_{x_2} f_1)} \end{displaymath} for any $x_i \in \mathfrak{X}(M)$ and $f_i\in \Omega(M)$. \end{definition} \begin{remark}\label{rem:pair-} In the following, we will be mostly interested in the restriction of $\pairing_-$ to the graded subspace $\cA$ (or to $\cV$, which is a truncation of $\widetilde{\cV}$, see \S \ref{subsec:Vinogradov} below). This determines a graded skew-symmetric bilinear map $\cA\otimes \cA\to \cA$ of degree $-1$. Namely, for any $f_i\in \bigoplus_{k=0}^{n-2}\Omega^k(M)$ and $e_i=\pair{X_i}{\alpha_i}\in \cA^0$, the pairing reads as follows \begin{equation}\label{Eq:PairingExtensions} \left\langle f_1 \oplus \pair{X_1}{\alpha_1}, f_2 \oplus \pair{X_2}{\alpha_2} \right\rangle_- = \frac{1}{2}{\left( \iota_{X_1}( \alpha_2 + f_2) -\iota_{X_2}( \alpha_1 + f_1)\right)}\oplus \pair{0}{0} ~. \end{equation} In turn, the latter defines by d\'ecalage a degree zero map $S^{ 2}(\cA[1])\to \cA[1]$ which we denote by {$\pairing$}. Observe that, for any $\pair{X_i}{\alpha_i}_{[1]}\in (\cA[1])^{-1} = \cA^0$, one has \begin{displaymath} \begin{aligned} \left\langle \pair{X_1}{\alpha_1}_{[1]}, \pair{X_2}{\alpha_2}_{[1]}\right\rangle &= \frac{1}{2}\left(\iota_{X_1}\alpha_2 + (-)^{|\pair{X_1}{\alpha_1}_{[1]}||\pair{X_2}{\alpha_2}_{[1]}|}\iota_{X_2}\alpha_1\right) = \\ &= \frac{1}{2}\left(\iota_{X_1}\alpha_2 - \iota_{X_2}\alpha_1\right) = \\ &= \left(\left\langle \pair{X_1}{\alpha_1}, \pair{X_2}{\alpha_2}\right\rangle_-\right)_{[1]} ~. \end{aligned} \end{displaymath} This map vanishes whenever both entries lie in degree $\le -2$ of $\cA[1]$. By extending trivially we obtain a map $\pairing \colon S^{\ge 1}(\cA[1])\to \cA[1]$. \end{remark} According to the next lemma, the whole of Rogers' $L_\infty[1]$-structure on $\cA[1]$ is completely generated by the following four multibrackets $\{\pairing, \bpi_1, \bpi_2, \bpi_3\}$: \begin{lemma}[Higher Rogers multibrackets recursive formula]\label{lem:rogersRecurFormula} \begin{displaymath} [\pairing,\bpi_{k-1}] _{\cs} = \frac{k}{2} \bpi_k \qquad \forall k \geq 4 \end{displaymath} where $\ca$ and $\cs$ denote respectively the \RN products of graded skew-symmetric and graded symmetric multibrackets introduced in section \ref{sec:RNProdMB} (see appendix \ref{App:RNAlgebras} for all the details). \end{lemma} {We observe that $\bpi_3$ can be expressed in terms of $\bpi_2$ too, see the proof of Lemma \ref{Lemma:BoringAssociator}.} \begin{proof} Evaluating on elements $e_i=f_i + \pair{X_i}{\alpha_i}\in \cA$ one gets \allowdisplaybreaks \begin{align*} \pi_k(e_1,\dots,e_k) &= \varsigma(k)\omega(X_1,\dots,X_k) = \\ &= \varsigma(k)\sum_{\sigma\in \ush{k-1,1}} \frac{1}{k}\iota_{X_{\sigma_k}}\omega(X_{\sigma_1},\dots,X_{\sigma_{k-1}}) = \\ &= \left(\frac{\varsigma(k)\varsigma(k-1)}{k}\right) \sum_{\sigma\in \ush{k-1,1}} \iota_{X_{\sigma_k}}~\pi_{k-1}(e_{\sigma_1},\dots,e_{\sigma_{k-1}}) = \\ &= -\frac{2}{k} (-)^k \sum_{\sigma\in \ush{k-1,1}} \Big\langle \pi_{k-1}(e_{\sigma_1},\dots,e_{\sigma_{k-1}}), e_{\sigma_k} \Big\rangle_- = \\ &= -\frac{2}{k} (-)^k (-)^{|\pi_{k-1}|} \pairing_- \ca \pi_{k-1} (e_1,\dots e_k) = \\ &= \frac{2}{k} \pairing_- \ca \pi_{k-1} (e_1,\dots e_k) \end{align*} \allowdisplaybreaks[0] The claim follows after d\'ecalage.
\end{proof} The upshot is that the graded subalgebra of $M^{sym}(\cA[1])$ generated by the Rogers multibrackets is isomorphic to the subalgebra generated by $\bpi_1,\bpi_2,\bpi_3$ together with the pairing $\pairing$ \footnote{The same property could also be expressed in terms of the $\ca$ product in the space $M(V)$ of multilinear maps without any symmetry. In those terms, one can see that the three generators $\pi_1,\pi_2$ and $(\pairing_+ + \pairing_-)$ suffice to generate the entire $L_\infty$-structure.}. \\ All possible multibrackets generated by $\{\bpi_1,\bpi_2,\bpi_3,\pairing\}$ can be reconstructed from the $L_\infty$-algebra axioms and by working out the iterated {commutators} of powers $\pairing^{\cs l}$ with $\bpi_k$. { We now proceed to compute some of these commutators in relevant cases. } \begin{remark}\label{rem:two} a) $\bpi_k$ with $k\geq 2$ is non-zero only when evaluated on degree $0$ elements, hence $$ [\pairing, \bpi_k ]_{\cs} = \pairing \cs \bpi_k.$$ b) We carry out many of the proofs in terms of the multilinear maps $\pi$ on $\cA$, rather than using the corresponding graded-symmetric maps $\bpi$ on $\cA[1]$. This is possible thanks to the graded algebra isomorphism given by the d\'ecalage \eqref{eq:Dec}, and it is convenient because it allows us to apply the identities of Cartan calculus easily. c) {In this section, we will sometimes make use of the symmetric pairing $\pairing_+ : \cV \otimes \cV \to \cV$, which is defined analogously to $\pairing_-$ in eq. \eqref{Eq:PairingExtensions} but replacing the minus sign there with a plus sign.} \end{remark} \begin{proposition}[Commutators of arity $3$]\label{Prop:TernaryCommutator} \begin{displaymath} [\pairing,\bpi_2]_{\cs} = [\pairing,[\pairing, \bpi_1]_{\cs}]_{\cs} \end{displaymath} \end{proposition} {We point out that the computations in the proof below are similar to -- but more concise than -- those found in \cite[Lemmas 7.2, 7.3, 7.4]{Rogers2013}.} \begin{proof} First note, by evaluating on elements $e_i=f_i + \pair{X_i}{\alpha_i}\in \cA$, that: \begin{displaymath} 2 \big(\pairing_- \ca \pi_2\big)~(e_1,e_2,e_3) = \iota_{[X_1,X_2]} e_3 - \omega(X_1,X_2,X_3) + \cyc \end{displaymath} where $\cyc$ denotes the sum over all cyclic permutations. Using Cartan's magic formula twice: \allowdisplaybreaks \begin{align} &\iota_{[X_1,X_2]} e_3 +\cyc = \notag \\ &= \mathcal{L}_{X_1} \iota_{X_2} e_3 - \iota_{X_2} \mathcal{L}_{X_1} e_3 +\cyc= \notag \\ &= (\iota_{X_1} \dd + \dd \iota_{X_1}) \iota_{X_2} e_3 - \iota_{X_2} (\dd \iota_{X_1} + \iota_{X_1} \dd ) e_3 +\cyc= \label{eq:bruttocoso} \\ &= \dd \iota_{X_1} \iota_{X_2} e_3 + \iota_{X_1} \dd \iota_{X_2} e_3 - \iota_{X_2} \dd \iota_{X_1} e_3 + \iota_{X_2}\iota_{X_1}\iota_{X_3} \omega - \iota_{X_2} \iota_{X_1} \mu_1 e_3 +\cyc = \notag \\ &= (\mu_1 \iota_{X_1} \iota_{X_2} e_3) + (\iota_{X_1} \mu_1 \iota_{X_2} e_3 - \iota_{X_2} \mu_1 \iota_{X_1} e_3) - (\iota_{X_2} \iota_{X_1} \mu_1 e_3) + \omega(X_1,X_2,X_3) +\cyc \notag \end{align} \allowdisplaybreaks[0] where, in the penultimate equality, we have used that: \begin{displaymath} \dd~ e_3 = \dd (\alpha_3 + f_3) = -\iota_{X_3} \omega + \mu_1 e_3 ~. \end{displaymath} {The first three terms on the r.h.s.
of} equation \eqref{eq:bruttocoso} can be recast as follows: % \allowdisplaybreaks \begin{align*} \mu_1 \iota_{X_1} \iota_{X_2} e_3 + \cyc &= \mu_1 \iota_{X_3} \iota_{X_1} e_2 + \cyc = \\ &= \mu_1 \iota_{X_3} (\langle e_1 , e_2\rangle_+ + \langle e_1 , e_2\rangle_- ) + \cyc = \\ &= - 2 \mu_1 \langle (\langle e_1 , e_2\rangle_+ + \langle e_1 , e_2\rangle_- ), e_3 \rangle_- + \cyc = \\ &= 2 \mu_1 \pairing_- \ca (\cancel{\pairing_+} + \pairing_- ) (e_1,e_2,e_3) = \\ &= 2 \Big(\mu_1 \pairing_- \ca \pairing_-\Big)~(e_1,e_2,e_3) = % % \\ \iota_{X_1} \mu_1 \iota_{X_2} e_3 - \iota_{X_2} \mu_1 \iota_{X_1} e_3 + \cyc &= \iota_{X_3} \mu_1 \iota_{X_1} e_2 - \iota_{X_3} \mu_1 \iota_{X_2} e_1 + \cyc = \\ &= 2 \iota_{X_3} \mu_1 \langle e_1,e_2 \rangle_- + \cyc = \\ &= -4 \langle \mu_1 \langle e_1,e_2 \rangle_-, e_3 \rangle_- + \cyc = \\ &= -4 \Big(\pairing_- \ca \mu_1 \ca \pairing_-\Big) (e_1,e_2,e_3) % % \\ -\iota_{X_2} \iota_{X_1} \mu_1 e_3 + \cyc &= -\iota_{X_3} \iota_{X_2} \mu_1 e_1 + \cyc = \\ &= - \dfrac{1}{2}\iota_{X_3} \big(\iota_{X_2} \mu_1 e_1 - \iota_{X_1} \mu_1 e_2) + \cyc =\\ &= + \iota_{X_3} \big(\langle \mu_1 e_1, e_2 \rangle_- - \langle \mu_1 e_2,e_1 \rangle_-\big) + \cyc =\\ &= - 2 \langle \big(\langle \mu_1 e_1, e_2 \rangle_- - \langle \mu_1 e_2,e_1 \rangle_-\big), e_3 \rangle_- + \cyc =\\ &= 2 \Big(\pairing_- \ca (\pairing_- \ca \mu_1)\Big)~(e_1,e_2,e_3) \end{align*} \allowdisplaybreaks[0] % Hence, after d\'ecalage, one gets: \begin{equation}\label{Eq:pairing-pi2} \begin{aligned} [\pairing,\bpi_2]_{\cs} =&~\pairing \cs \bpi_2 = \\ =&~ \bpi_1 \cs \pairing\cs \pairing -2 \pairing \cs \bpi_1 \cs \pairing + \pairing \cs \pairing \cs \bpi_1 =\\ =&~ [\pairing,[\pairing, {\bpi_1}]_{\cs}]_{\cs} \end{aligned} \end{equation} \end{proof} \begin{proposition}[Commutators of arity $4$]\label{Prop:QuaternaryCommutator} \begin{align} [\pairing,\bpi_3] =&~ 2 \bpi_4 \label{eq:commPairingMu3} \\[1em] [\pairing^{\cs 2}, \bpi_2 ] =&~ [\pairing^{\cs 2},[\pairing,\bpi_1]] = \\[-.5em] =&~ [\pairing,[\pairing^{\cs 2},\bpi_1]] \nonumber \\[1em] [\pairing,[\pairing,\bpi_2]] =&~ [\pairing,[\pairing,[\pairing,\bpi_1]]] = \\[-.5em] =&~[\pairing^{\cs 2}, \bpi_2 ] - 2 \alpha (\pairing,\pairing,\bpi_2) = \nonumber \\[-.5em] =&~ 3 \bpi_4 \nonumber \end{align} where $\alpha$ denotes the associator (see definition \ref{def:gradedAssociator}). \end{proposition} \begin{proof} Rather straightforward algebraic computations together with the following Lemma \ref{Lemma:BoringAssociator}. \end{proof} \begin{lemma}[{A} recurrent associator]\label{Lemma:BoringAssociator} \begin{displaymath} \alpha(\cs; \pairing,\pairing, \bpi_2) = \frac{1}{2}\Big( [\pairing^{\cs 2},\bpi_2] -3 \bpi_4\Big) \end{displaymath} \end{lemma} \begin{proof} Observe that \begin{equation}\label{eq:obscure} 2 \pairing_- \ca \pi_2 = K + 3 \pi_3 \end{equation} where the auxiliary operator $K$ is given by the following equation \begin{equation*} \begin{aligned} K(e_1,e_2,e_3)=&~ (\pairing_+ + \pairing_-)\ca \pi_2 (e_1,e_2,e_3) = \\ =&~ \iota_{[X_1,X_2]}e_3 + \iota_{[X_2,X_3]}e_1 + \iota_{[X_3,X_1]}e_2 ~. 
\end{aligned} \end{equation*} According to equation \eqref{Eq:explicitassociators}, the following holds: \begin{equation}\label{eq:Kalpha} \begin{aligned} \alpha(\ca;\pairing_-,&\pairing_-,\pi_2) (e_1,e_2,e_3,e_4) = \\ =& (-)^2\frac{1}{2}\iota_{[X_3,X_4]}\langle e_1,e_2 \rangle_- + \unsh{(2,2)} =\\ =& \frac{1}{2}\biggr( + \iota_{[X_3,X_4]}\langle e_1,e_2\rangle_- + \iota_{[X_1,X_2]}\langle e_3,e_4\rangle_- - \iota_{[X_2,X_4]}\langle e_1,e_3\rangle_- + \\ &\phantom{\frac{1}{2}\biggr(}- \iota_{[X_1,X_3]}\langle e_2,e_4\rangle_- + \iota_{[X_2,X_3]}\langle e_1,e_4\rangle_- + \iota_{[X_1,X_4]}\langle e_2,e_3\rangle_- \biggr)~, \end{aligned} \end{equation} % {where $\unsh{(2,2)}$ denotes sum over all $(2,2)$-unshuffles.} \\ {On the other hand, one finds that} % \begin{align*} \big(\pairing_-&\ca K\big) (e_1,e_2,e_3,e_4) = \\ =& (-)^{|K|} \langle K(e_1,e_2,e_3),e_4 \rangle_- + \unsh{(3,1)} = \\ =& -\langle K(e_1,e_2,e_3),e_4\rangle_- + \langle K(e_1,e_2,e_4),e_3\rangle_- + \\ & - \langle K(e_1,e_3,e_4),e_2\rangle_- + \langle K(e_2,e_3,e_4),e_1\rangle_- =\\ =&\frac{1}{2}\Big\{ +\iota_{X_4}(\iota_{[X_1,X_2]}e_3+\iota_{[X_2,X_3]}e_1+\iota_{[X_3,X_1]}e_2)+ \\ &\phantom{\frac{1}{2}\big\{} -\iota_{X_3}(\iota_{[X_1,X_2]}e_4+\iota_{[X_2,X_4]}e_1+\iota_{[X_4,X_1]}e_2)+ \\ &\phantom{\frac{1}{2}\big\{ } +\iota_{X_2}(\iota_{[X_1,X_3]}e_4+\iota_{[X_3,X_4]}e_1+\iota_{[X_4,X_1]}e_3)+ \\ &\phantom{\frac{1}{2}\big\{} -\iota_{X_1}(\iota_{[X_2,X_3]}e_4+\iota_{[X_3,X_4]}e_2+\iota_{[X_4,X_2]}e_3) \Big\} = \\ =&+ \iota_{[X_1,X_2]}\langle e_3, e_4\rangle_- - \iota_{[X_1,X_3]}\langle e_2, e_4\rangle_- + \iota_{[X_1,X_4]}\langle e_2, e_3\rangle_- +\\& + \iota_{[X_2,X_3]}\langle e_1, e_4\rangle_- - \iota_{[X_2,X_4]}\langle e_1, e_3\rangle_- + \iota_{[X_3,X_4]}\langle e_1, e_2\rangle_- =\\=& 2~ \alpha(\ca;\pairing_-,\pairing_-,\pi_2) (e_1,e_2,e_3,e_4) ~, \end{align*} {using eq. \eqref{eq:Kalpha} in the last equality. In other words,} \begin{displaymath} \pairing_- \ca K = 2~ \alpha(\ca;\pairing_-,\pairing_-,\pi_2) ~. \end{displaymath} % Plugging this last result into the equation obtained by {composing} equation \eqref{eq:obscure} with $\pairing_-$ from the left one gets: \begin{displaymath} 2~\pairing_- \ca (\pairing_-\ca \pi_2 ) = 2~\alpha(\ca;\pairing_-,\pairing_-,\pi_2) + 3~ \pairing_- \ca \pi_3 ~. \end{displaymath} % Applying on the l.h.s. the definition of the associator, using equation \eqref{eq:commPairingMu3}, and remark \ref{rem:two}, one gets \begin{displaymath} [ \pairing_-^{\ca 2}, \pi_2] = 2~\alpha(\ca; \pairing_-, \pairing_-, \pi_2 ) + 3\,\pi_{4} ~. \end{displaymath} The claim follows after d\'ecalage. \end{proof} The following technical lemma will be used in remark \ref{Rem:DiagramSituation} to express the gauge transformation of a \momap in terms of the pairing. The key idea is to take advantage of the operator $\pairing_-$ on the space $\cA$ by expressing the operation of inserting several vector fields in a given differential form as a ``power'' of the pairing: \begin{lemma}[Insertions as pairing]\label{lemma:InsertionsAsPairing} Consider the graded vector space $\tilde{\cV}:=\mathfrak{X}(M)\oplus\Omega(M)[k]$. Denote by $\rho$ the standard projection $\rho:\tilde{\cV} \twoheadrightarrow\mathfrak{X}(M)$. 
Given a $k$-form $B$, \ie an element in $\ker(\rho)\subset \tilde{\cV}$, and given vector fields $x_i$, the following equation holds {for all $m$}: \begin{displaymath} \left( \pairing_-^{\ca m} \right)(B, x_1,\dots, x_m) = \left(-\varsigma(m) \cdot \frac{m!}{2^m} \right)~ \iota_{x_m}\dots \iota_{x_1} B ~. \end{displaymath} Here the left-hand side denotes the evaluation of the operator $\pairing_-^{\ca m}$, see definition \ref{def:pairing-}, on the element $ B\otimes x_1\otimes \dots \otimes x_m \in \tilde{\cV}^{\otimes m+1}$. \end{lemma} \begin{proof} By induction. From equation \eqref{Eq:RNProducts-explicit}, given two vector fields $x_1,x_2$, which are degree $0$ elements in $\widetilde{\cV}$, and a differential form $B$, one has \allowdisplaybreaks \begin{align*} \pairing_-\ca \pairing_- ~& (B,x_1,x_2)= \\ &= (-)^{|\pairing_-|} \mkern-20mu \sum_{\sigma \in \ush{1,1}} \mkern-20mu \chi(\sigma) \langle\langle B,x_{\sigma_1}\rangle_-, x_{\sigma_2} \rangle_- = \\ &= \left(- \frac{2!}{(-2)^2}\right)~\iota_{x_2}\iota_{x_1} B = \\ &= \left(-\varsigma(2) \dfrac{2!}{2^2}\right)~\iota_{x_2}\iota_{x_1} B ~, \end{align*} \allowdisplaybreaks[0] where $\chi(\sigma)=(-)^\sigma \epsilon(\sigma)$ denotes the odd Koszul sign. Assuming now \begin{displaymath} \pairing_-^{\ca(m-1)}\ca \pairing_-~(B,x_1,\dots,x_{m}) = \left(-\varsigma(m) \frac{m!}{2^m}\right) \iota_{x_m}\dots \iota_{x_1} B ~, \end{displaymath} it follows that \begin{displaymath} \mathclap{ \begin{aligned} \pairing_-^{\ca(m)}&\ca \pairing_- ~(B,x_1,\dots,x_{m+1}) = \\ =& \pairing_- \ca \left( \pairing_-^{\ca (m-1)}\ca \pairing_- \right) ~ (B,x_1,\dots,x_{m+1}) = \\ =& (-)^{|\pairing_-^{\ca (m)}|} \mkern-20mu\sum_{\sigma \in \ush{m,1}} \mkern-20mu \chi(\sigma) \left\langle \left( \pairing_-^{\ca(m-1)}\ca \pairing_-~(B,x_1,\dots,x_{m}) \right) , x_{\sigma_{m+1}} \right \rangle_- = \\ =& (-)^m \mkern-20mu\sum_{\sigma \in \ush{m,1}} \mkern-20mu \chi(\sigma) \left(-\varsigma(m) \frac{m!}{2^m}\right)\left(-\frac{1}{2}\right) \iota_{x_{\sigma_{m+1}}}\iota_{x_{\sigma_m}}\dots \iota_{x_{\sigma_1}} B = \\ =& \left( (-)^m \varsigma(m) \frac{(m+1)!}{2^{m+1}} \right) \iota_{x_{m+1}}\dots \iota_{x_1} B ~, \end{aligned} } \end{displaymath} hence the claim. Formula \eqref{Eq:explicitassociators} ensures associativity in the first equality. \end{proof}
%
\begin{remark}\label{Rem:piccoloerroredisegnoneldraft} Notice that the above statement can be re-expressed by singling out the contraction with the differential form $B$. For any given differential form $B$, seen as a degree $|B|-k$ element in $\widetilde{\cV}$, one has: \begin{displaymath} \pairing_-^{\ca m} \ca \langle B, \cdot \rangle_- = (-)^{m(|B|-k)} \pairing_-^{\ca (m+1)} B \otimes \Unit_{m+1} ~. \end{displaymath} This follows by evaluating on vector fields $x_i$, seen as degree $0$ elements in $\widetilde{\cV}$: \begin{displaymath} \begin{aligned} \pairing_-^{\ca (m)} \ca \langle B, \cdot \rangle_- &(x_1,\dots,x_{m+1}) = \\ &= (-)^{m(|\langle B, \cdot \rangle_-|)} \sum_{\sigma \in S_{m+1}}\chi(\sigma) \langle\dots\langle B, x_{\sigma_1}\rangle_-,\dots,x_{\sigma_{m+1}}\rangle_- = \\ &= (-)^{m(|B|-k+1)}(-)^{m|\pairing_-|} \pairing_-^{\ca(m+1)} (B,x_1,\dots, x_{m+1}) ~.
\end{aligned} \end{displaymath} \end{remark} As a corollary\footnote{This corollary is not essential in this chapter, but we expect that it should be useful to extend theorem \ref{thm:iso} to arbitrary values of $n$, just as Lemma \ref{lem:mun} below.}, one can express all higher multibrackets $\bpi_k$ in terms of $\bpi_3$: \begin{corollary}\label{cor:higherpi} \begin{displaymath} \bpi_n = \left( \frac{2^{n-3}}{n!} 3!\right) \pairing^{\cs(n-3)} \cs \bpi_3 \end{displaymath} \end{corollary} \begin{proof} This can be proved by iterating lemma \ref{lem:rogersRecurFormula}, or by employing lemma \ref{lemma:InsertionsAsPairing} in the following way. { Considering Hamiltonian pairs $e_i= \pair{x_i}{\alpha_i}\in \cA$, one has:} \begin{displaymath} \begin{aligned} \pi_{n}& (e_1,\dots e_n ) = \varsigma(n) \iota_{x_n}\dots \iota_{x_1} \omega = \\ &= \varsigma(n)\varsigma(3) \frac{1}{\# \ush{3,n-3}} \mkern-50mu\sum_{\quad\qquad\sigma \in \ush{3,n-3}}\mkern-50mu \chi(\sigma)~ \iota_{x_{\sigma_n}}\dots \iota_{x_{\sigma_4}} ~ \pi_3(x_{\sigma_1},x_{\sigma_2},x_{\sigma_3}) = \\ &= - \varsigma(n)\frac{3!(n-3)!}{n!} \left(-\varsigma(n-3)\frac{2^{n-3}}{(n-3)!}\right)\cdot \\ &\qquad \cdot \mkern-50mu\sum_{\quad\qquad\sigma \in \ush{3,n-3}}\mkern-50mu \chi(\sigma)~ \pairing_-^{\ca (n-3)} \cdot \pi_3 \otimes \mathbb{1}_{n-3} ~(x_{\sigma_1}\dots x_{\sigma_n}) = \\ &= \left( \varsigma(n)\varsigma(n-3)(-)^{n-3} \frac{2^{n-3}}{n!} 3! \right) \left(\pairing_-^{\ca (n-3)} \ca \pi_3 \right){(e_1,\dots e_n)} = \\ &= \left( \frac{2^{n-3}}{n!} 3! \right) \left(\pairing_-^{\ca (n-3)} \ca \pi_3\right){(e_1,\dots e_n)} ~. \end{aligned} \end{displaymath} The statement follows after d\'ecalage. \end{proof} \section{Vinogradov algebroid and $L_\infty$-algebra}\label{Sec:Vinoids} To any smooth manifold $M$, and index $n\in \NN$, one can associate an auxiliary geometric structure called \emph{Vinogradov algebroid}. In this section, we will recall this definition and some basic properties. In particular we will be interested in describing how one can produce an $L_\infty$-algebra out of this geometric data, and we will discuss the algebraic properties enjoyed by the corresponding set of multibrackets using the language of the \RN product. A (standard) Vinogradov algebroid is a slightly generalized version of a Courant algebroid \cite{Courant1990}.
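Roughly speaking, the cotangent summand $T^\ast M$ appearing in the standard Courant algebroid is replaced by the bundle $\Lambda^{n-1}T^\ast M$ of forms of higher degree.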
\begin{definition}[(Standard) $n$-Vinogradov algebroid]\label{def:standardVinalgoid} Given a smooth manifold $M$ and a fixed natural number $n\geq 1$, the \emph{(standard) $n$-Vinogradov algebroid} consists of the data $(E^n , \rho, \pairing_{-}, [\cdot,\cdot]_C)$ where: \begin{itemize} \item $E^n$ denotes the vector bundle over $M$ given by \begin{displaymath} E^n := TM \oplus \Lambda^{n-1} T^\ast M ~, \end{displaymath} we will denote elements of $E^n$ as $e = \pair{X}{\alpha}$; \item $\rho \colon E^n \to TM$ is a vector bundle morphism given by the first projection, also called \emph{the anchor}; \item $\pairing_{\pm} \colon E^n \otimes E^n \to \Lambda^{n-2} T^\ast M $ are binary bundle maps given by \begin{displaymath} \left(\pair{X_1}{\alpha_1},\pair{X_2}{\alpha_2}\right) \mapsto \frac{1}{2}\left( \iota_{X_1}\alpha_2 \pm \iota_{X_2}\alpha_1 \right)~; \end{displaymath} \item $[\cdot,\cdot]_C$ is a skew-symmetric bracket on the vector space of sections $\Gamma(E^n)$, called \emph{higher Courant bracket}, given by \begin{displaymath} \left(\pair{X_1}{\alpha_1},\pair{X_2}{\alpha_2}\right) \mapsto \pair{[X_1,X_2]}{\mathcal{L}_{X_1}\alpha_2 - \mathcal{L}_{X_2}\alpha_1 - \dd \left\langle \pair{X_1}{\alpha_1},\pair{X_2}{\alpha_2}\right\rangle_-} ~. \end{displaymath} \end{itemize} \end{definition} \begin{notation} In the following, we will drop the $n$-prefix; everything should be clear from the context. We will also drop the "standard" term most of the time. This adjective is due to the existence of an "abstract" notion of Vinogradov algebroid, see remark \ref{rem:abstractVino} below, that will not be needed in the following. \end{notation} Consider now a pre-$n$-plectic form $\omega\in \Omega^{n+1}(M)$. The degree of the form selects a specific integer and thus a specific standard $n$-Vinogradov algebroid. Furthermore, the higher Courant bracket can be ``twisted'' by the closed form $\omega\in \Omega^{n+1}(M)$. \begin{definition}[(Standard) $\omega$-twisted Vinogradov algebroid]\label{def:Vinalgoid} Let $M$ be a smooth manifold and $\omega\in\Omega^{n+1}(M)$ a closed differential form. The \emph{Vinogradov algebroid twisted by $\omega$} consists of the data\\ $(E^n, \rho, \pairing_{-}, [\cdot,\cdot]_\omega)$ where $[\cdot,\cdot]_\omega: \Gamma(E^n)\otimes \Gamma(E^n) \to \Gamma(E^n)$ is defined by \begin{displaymath} [e_1,e_2]_\omega = [e_1,e_2]_C + \pair{0}{\iota_{X_1}\iota_{X_2} \omega} ~. \end{displaymath} \end{definition} \begin{example}[Lie and Courant algebroids] When $n=1$, the operators $\pairing_\pm$ are trivial. Hence one recovers, as a particular case, the definition of the standard and twisted Lie algebroids (\cf examples \ref{ex:StandardLieAlgbroid} and \ref{ex:TwistedLieAlgbroid}). \\ If $n=2$, definition \ref{def:standardVinalgoid} gives the definition of the standard Courant algebroid with Courant bracket. \end{example} \begin{remark}[About the naming] The choice to call this (somewhat natural) higher generalization of the Courant algebroid a "Vinogradov algebroid" is borrowed from Ritter and Saemann \cite{Ritter2015a}. They pointed out that the key structure, given in particular by the bracket on sections, was first studied by Vinogradov in \cite{zbMATH04172022}. In other sources, see for example \cite{Zambon2012} or \cite{Bi2011a}, the same object is simply called \emph{higher Courant algebroid}.
\end{remark} \begin{remark}["Abstract" Vinogradov algebroids]\label{rem:abstractVino} When introducing the Vinogradov algebroid we are employing a constructive, "hands-on", approach here. Our definition appears as a "standard" example, \cf the notion of "standard" vs. "abstract" Courant algebroid, in \cite[\S 5, Def. 5.6]{Grutzmann2015}. More conceptually, this notion can be framed in the language of graded geometry as a prototypical $NQ$-manifold, see \cite[\S 3.3.]{Deser2018b} and \cite{Ritter2015a}. In turn, the latter concept can be interpreted in terms of \emph{horizontal categorification}\footnote{This justifies the "-oids" suffix in the name.} since $NQ$-manifolds are related to \emph{symplectic Lie-$n$ algebroids}. \end{remark} \begin{remark}[Vinogradov algebroid morphisms]\label{rem:VinoidsMorphism} The precise notion of morphism between Vinogradov algebroids can be given by mimicking the corresponding definition for the Courant ($n=2$) algebroid case (see \cite{Courant1990} for the original definition or also \cite[\S 1.3]{Li-Bland2009} and \cite[\S 2.2]{Bursztyn2008}). \\ We will not need the full-fledged apparatus here. In this chapter, precisely in \S \ref{Sec:VinoGauge}, we will only deal with morphisms between Vinogradov algebroids on the same smooth manifold $M$, twisted by differential forms $\omega$ and $\widetilde{\omega}$ of the same degree. In the latter situation, a Vinogradov morphism \begin{displaymath} \Psi:~ (E^n, \rho, \pairing_\pm, [\cdot,\cdot]_\omega) \to (E^n, \rho, \pairing_\pm, [\cdot,\cdot]_{\widetilde{\omega}}) \end{displaymath} will be simply given by a vector bundle automorphism $\Psi:E^n\to E^n$ that is compatible with the anchor, \ie the following diagram commutes in the category of smooth manifolds \begin{displaymath} \begin{tikzcd}[column sep = small, row sep = small] E^n \ar[rr,"\Psi"] \ar[ddr,bend right = 45,"\rho"'] \ar[dr,two heads]& & E^n \ar[ddl,bend left = 45,"\rho"] \ar[dl,two heads] \\ & M \\ & TM \ar[u,two heads] \end{tikzcd} ~, \end{displaymath} and that preserves the symmetric pairing and the higher Courant bracket in the following sense: \begin{displaymath} \begin{aligned} \pairing_+ &= \pairing_+ \circ (\Psi\otimes \Psi) ~; \\ \Psi \circ [\cdot,\cdot]_\omega &= [\cdot,\cdot]_{\widetilde{\omega}} \circ (\Psi \otimes \Psi) ~. \end{aligned} \end{displaymath} \end{remark} For the sake of completeness, we mention here some basic properties enjoyed by the twisted higher Courant bracket. \begin{proposition}[\emph{\cite[Thm 2.2]{{Bi2011a}}}]\label{prop:VinoAlgoidsProperties} Let $(M,\omega)$ be a pre-$n$-plectic manifold. Consider the associated Vinogradov algebroid $$E^n=(TM\oplus\Lambda^{n-1}T^\ast M, \rho, \pairing_\pm,[\cdot,\cdot]_\omega) ~.$$ The higher Courant bracket $[\cdot,\cdot]_\omega$ satisfies the following properties \begin{enumerate} \item $[\cdot,\cdot]_\omega$ is skew-symmetric; \item $[\cdot,\cdot]_\omega$ satisfies the Jacobi equation up to an exact term. Namely \begin{equation}\label{eq:CourantFailsJacobi} [[e_1,e_2]_\omega,e_3]_\omega +\cyc = \d T_\omega(e_1,e_2,e_3) \qquad \forall e_i \in \Gamma(E^n) \end{equation} where \begin{equation}\label{eq:CourantTernaryOp} \morphism{T_\omega} {(\Gamma(E^n))^{\otimes 3}} {\Omega^{n-2}(M)} {(e_1,e_2,e_3)} {\frac{1}{3}\langle [e_1,e_2]_\omega, e_3 \rangle_+ + \cyc} ~. \end{equation} \item Regard $\Gamma(E^n)$ as a $C^\infty(M)$-module. $[\cdot,\cdot]_\omega$ is not $C^\infty(M)$-linear.
In the standard case, one has
\begin{displaymath}
[e_1,f e_2]_C = f [e_1,e_2]_C + (\mathcal{L}_{X_1}f) e_2 - \dd f \wedge \langle e_1,e_2\rangle_+ \qquad \forall e_i \in \Gamma(E^n),~ f\in C^\infty(M)~.
\end{displaymath}
\item $[\cdot,\cdot]_\omega$ is compatible with the anchor:
\begin{displaymath}
\rho\left([e_1,e_2]_\omega \right) = [\rho(e_1),\rho(e_2)] ~.
\end{displaymath}
\end{enumerate}
\end{proposition}
\begin{proof}
The proof relies on the same computations routinely performed for the standard Courant algebroid case. See for instance \cite[Prop. 4.7]{Ritter2015a}.
\end{proof}
\begin{remark}[Comparison with the literature]
We stress that there are several different conventions that one could adopt when defining Vinogradov algebroids and, in turn, Courant algebroids. We briefly compare our choices to the references in the bibliography.
\begin{itemize}
\item The pairing operators $\pairing_\pm$ are often defined without the $1/2$ prefactor (see for example \cite[eq. (1)]{Zambon2012} and \cite[ex. 1]{Rogers2013}). Here, we are adopting the same convention as \cite[eq. (2.2.1),(2.2.2)]{Courant1990} and \cite[eq. (1)]{Bi2011a}.
\item The choice of the sign used to ``twist'' the standard Courant bracket also differs across the literature. Here, like in \cite[eqn. 5.6]{Rogers2013} and \cite[eqn. 4.12]{Ritter2015a}, we are choosing to twist the Courant bracket $[e_1,e_2]$, for any given $e_i\in \Gamma(E^n)$ with $\rho(e_i)=x_i$, by adding the term $\iota_{x_1}\iota_{x_2}\omega$. With the notation\footnote{Recall that
\begin{displaymath}
\morphism{\iota_\rho^k\omega} {(\Gamma(E^n))^{\otimes k}} {\Omega^{n+1-k}(M)} {(e_1,\dots,e_k)} {\varsigma(k) \iota_{x_k}\dots \iota_{x_1}\omega = (-)^{k+1}\iota_{x_1}\dots\iota_{x_k}\omega ~,}
\end{displaymath}
hence, in particular, one has $\iota_{x_1}\iota_{x_2}\omega = - \iota^2_\rho \omega$.} introduced in remark \ref{Rem:SignedMultiContraction}, this convention can be compactly stated as:
\begin{equation}
[\cdot,\cdot]_\omega = [\cdot,\cdot]_C - \iota^2_\rho \omega ~.
\end{equation}
Notice that in \cite[\S 3.7]{Gualtieri2004}, \cite[eq (2.3)]{Gu2} and \cite[\S 2]{Zambon2012} the opposite sign is preferred.
\item The previous choices also affect the definition of the ternary operator $T_\omega$ introduced in proposition \ref{prop:VinoAlgoidsProperties}. In the above notation, this can be compactly written as
\begin{displaymath}
T_\omega = T_0 + \dfrac{1}{2}\iota^3_\rho \omega ~,
\end{displaymath}
where $T_0$ denotes the ternary operator pertaining to the standard higher Courant bracket. Our expression can be easily compared with \cite[Def 4.1]{Rogers2013}, \cite[eq (4.12)]{Ritter2015a} and \cite[Thm. 4.2]{Bi2011a}.
\end{itemize}
\end{remark}
The following remark supports the observation that multisymplectic manifolds are deeply related to Vinogradov algebroids.
\begin{remark}[Higher Dirac structures \cite{Zambon2012}]
Exactly as symplectic structures are special cases of Dirac structures, given by graph subbundles inside the standard Courant algebroid, multisymplectic structures can also be regarded as special cases of \emph{higher Dirac structures}. The latter have been identified in \cite[Def. 3.1, Prop. 3.2 and 3.7]{Zambon2012} as \emph{involutive, Lagrangian subbundles} of $E^n$. (See also \cite[\S 4]{Bi2011a}.)
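For instance, under the conventions above, any closed form $\omega\in\Omega^{n+1}(M)$ determines the graph subbundle
\begin{displaymath}
\mathrm{graph}(\omega) := \left\{ \pair{X}{\iota_X \omega} ~\big\vert~ X \in TM \right\} ~\subset~ E^n ~,
\end{displaymath}
which, as discussed in \cite{Zambon2012}, provides the prototypical example of such a subbundle and plays, in the higher setting, the same role as the graph subbundle mentioned above for the symplectic case.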
\end{remark}
Proposition \ref{prop:VinoAlgoidsProperties}, especially items $1.$ and $2.$, leads us naturally to replicate, in the case of Vinogradov algebroids, the reasoning described in section \ref{Section:RogersObservables}, \ie the construction of the multisymplectic observables $L_\infty$-algebra.
\subsection{Vinogradov's $L_{\infty}$-algebra}\label{subsec:Vinogradov}
Consider an $\omega$-twisted Vinogradov algebroid $E^n$. The upshot of proposition \ref{prop:VinoAlgoidsProperties} is that, similarly to what has been observed for the vector space $\Omega^{n-1}_{\ham}(M,\omega)$ in section \ref{Section:RogersObservables}, the higher Courant bracket on the space of sections $\Gamma(E^n)$ fails to define a Lie algebra structure. Namely, the higher Courant bracket $[\cdot,\cdot]_\omega$ is skew-symmetric but satisfies the Jacobi equation only modulo the exterior derivative of the ternary operator $T_\omega$ (see equation \eqref{eq:CourantFailsJacobi}). This naturally prompts us to look for a suitable completion of $\Gamma(E^n)$ to give a full-fledged $L_\infty$-algebra. Roytenberg and Weinstein showed how this $L_\infty$-algebra can be constructed in the case of ordinary Courant algebroids \cite[Thm 4.3]{Roytenberg1998}. This result has been extended by Zambon in \cite[Prop. 8.1 and 8.4]{Zambon2012} to Vinogradov algebroids. Summing up, for any $\omega$-twisted Vinogradov algebroid there is an associated $L_n$-algebra, given by the following definition:
\begin{definition}[$L_\infty$-algebra of a twisted Vinogradov algebroid]\label{def:vinolinfty}
Given a twisted Vinogradov algebroid
$$ (E^n,\omega)=(TM\oplus \Lambda^{n-1}T^\ast M , \rho, {\pairing_{-}}, [\cdot,\cdot]_\omega) ~,$$
we denote the associated $L_n$-algebra structure as
$$L_{\infty}(E^n,\omega) = ({\cV}, \{\mu_k\}) ~.$$
Its underlying graded vector space is given by
\begin{equation}\label{eq:VSpace}
{\cV^i}:=
\begin{cases}
\mathfrak{X}(M)\oplus \Omega^{n-1}(M) & \quad ~\text{if } i=0 \\
~\Omega^{n+i-1}(M) & \quad~\text{if } 1-n \leq i\leq -1 \\
~0 & \quad ~\text{otherwise}~.
\end{cases}
\end{equation}
The actions of the non-vanishing multi-brackets (up to permutations of the entries) on arbitrary vectors $\varv_i=f_i\oplus e_i \in {\cV}$, with $e_i = \pair{X_i}{\alpha_i} \in \mathfrak{X}(M)\oplus \Omega^{n-1}(M) = \Gamma(E^n)$ and $f_i \in \bigoplus_{k=0}^{n-2}\Omega^k(M)$, are given as follows:
\begin{itemize}
\item unary bracket:
$$\mu_1 \left(f\right) = \dd f ~;$$
\item binary bracket:
\begin{displaymath}
\begin{aligned}
\mu_2 \left(e_1,e_2\right) =& [e_1,e_2]_\omega = \pair{[X_1,X_2]}{\dd \langle e_1, e_2\rangle_- + (\iota_{X_1}\dd\alpha_2 - \iota_{X_2}\dd\alpha_1 + \iota_{X_1}\iota_{X_2}\omega)} ~; \\
\mu_2 \left(e_1,f_2\right) =& -\mu_2(f_2,e_1) = \frac{1}{2} \mathcal{L}_{X_1} f_2 = \langle e_1, \dd f_2 \rangle_- ~;
\end{aligned}
\end{displaymath}
\item ternary bracket:
\begin{displaymath}
\begin{aligned}
\mu_3 (e_1, e_2, e_3) =& -T_\omega(e_1,e_2,e_3) = -\frac{1}{3} \langle[e_1,e_2]_\omega,e_3 \rangle_+ + \cyc ~; \\
\mu_3 (f_1, e_2, e_3) =& -\frac{1}{6} \left( \frac{1}{2}(\iota_{X_2}\mathcal{L}_{X_3} - \iota_{X_3}\mathcal{L}_{X_2}) + \iota_{[X_2,X_3]} \right)f_1 ~;
\end{aligned}
\end{displaymath}
\item $k$-ary bracket for $k \ge 3$ an \emph{odd} integer:
\begin{equation}\label{eq:VinoMultibrakAllaZambon_1}
\begin{aligned}
\mu_k(\varv_0,\cdots,\varv_{k-1}) =& \left(\sum_{i=0}^{k-1} {(-)^{i-1}\mu_k(f_i+\alpha_i,X_0,\dots,\widehat{X_i},\dots,X_{k-1})}\right) +\\
&+(-)^{\frac{k+1}{2}} \cdot k \cdot B_{k-1} \cdot \iota_{X_{k-1}} \dots \iota_{X_{0}} \omega ~;
\end{aligned}
\end{equation}
where
\begin{equation}\label{eq:VinoMultibrakAllaZambon_2}
\begin{aligned}
\mu_k &(f_0+\alpha_0, X_1,\dots,X_{k-1}) = \\
&=~c_k \mkern-30mu\sum_{\quad1\le i<j\le k-1}\mkern-20mu (-1)^{i+j+1} \iota_{X_{k-1}}\dots \widehat{\iota_{X_{j}}}\dots \widehat{\iota_{X_{i}}}\dots \iota_{X_{1}} ~ [f_0+\alpha_0,X_i,X_j]_3~.
\end{aligned}
\end{equation}
In the above formula, $[\cdot,\cdot,\cdot]_3 = -T_0$ denotes the ternary bracket associated to the untwisted ($\omega=0$) Vinogradov algebroid, and $c_k$ is a numerical constant
\begin{equation}\label{eq:UglyCoefficient}
c_k= (-)^{\frac{k+1}{2}}\frac{12~B_{k-1}}{(k-1)(k-2)}~,
\end{equation}
containing the \emph{Bernoulli numbers}\footnote{{Recall that $B_0=1, B_1=-\frac{1}{2}, B_2=\frac{1}{6}$, and $B_k=0$ for odd $k\neq 1$.}} $B_{k-1}$.
\end{itemize}
\end{definition}
\begin{remark}\label{Remark:semplifica-conti}
Notice that Vinogradov's brackets $\mu_k$ with $k\geq 2$ vanish unless $k-1$ entries are elements of degree zero. For $k\ge 2$, Rogers' brackets $\pi_k$ vanish unless all entries are elements in degree zero (\cf remark \ref{rem:two}).
\end{remark}
\begin{remark}[On the origin of definition \ref{def:vinolinfty}]
The definition of $L_\infty(E^n,\omega)$ has been worked out in \cite[Prop. 8.1 and 8.4]{Zambon2012} relying on a result by Getzler \cite{Getzler1991} (see also theorem \ref{Thm:Getzler} in chapter \ref{Chap:Linfinity}) and on some observations in graded geometry.
%
Briefly, one notices that $\cV$ can be obtained as a certain truncation of the graded vector space of smooth functions on the graded manifold $T^\ast[n]T[1]M$. Namely, denoting by $\mathscr{C}= C^{\infty}(T^\ast[n]T[1]M)$ the space of smooth functions on the above-mentioned graded manifold, one can show that
\begin{displaymath}
\cV[1] = \trunc_{n} \mathscr{C}[n]~.
\end{displaymath}
As such, $\cV[1]$ inherits from $\mathscr{C}$ a binary bracket $\lbrace\cdot,\cdot\rbrace$, given by the canonical Poisson bracket on the cotangent bundle (\ie the graded analogue of example \ref{Ex:Multicotangent} in the $1$-plectic case, see \cite{CATTANEO2011}), and a unary bracket given by the de Rham differential. \\
Summing up, $(\mathscr{C},\d,\lbrace\cdot,\cdot\rbrace)$ constitutes a DGLA (see definition \ref{def:DGLA}). Hence, one can readily apply the machinery given by theorem \ref{Thm:Getzler}, endowing $\cV=\mathscr{C}[n][-1]$ with the above $L_\infty$-structure.
\end{remark}
\begin{remark}[Vinogradov algebroids as $L_\infty$-algebroids]
Recall that, in layman's terms, a Lie algebroid can be thought of as the geometric data encoding a certain ``well-behaved'' infinite-dimensional Lie algebra. In other words, it can be seen as an infinite-dimensional Lie algebra whose elements are sections of a vector bundle, taken together with a Lie algebra morphism $\rho$ into the Lie algebra $\mathfrak{X}(M)$ giving a sort of ``representation'' in terms of vector fields. \\
Definition \ref{def:vinolinfty} tells us that the same point of view can also be loosely applied to a Vinogradov algebroid. In this spirit, one could say that definition \ref{def:Vinalgoid} gives the geometric data of a certain infinite-dimensional $L_n$-algebra whose elements are differential forms on a smooth manifold $M$ of degree up to $n-1$. \\
This observation hints at the interpretation of Vinogradov algebroids as ``$L_n$-algebroids'' mentioned in remark \ref{rem:abstractVino}.
\end{remark}
\subsection{Hamiltonian subalgebroid}
Let $(M,\omega)$ be an $n$-plectic manifold and consider the corresponding $\omega$-twisted Vinogradov algebroid $E^n$. Consider also the two corresponding $L_\infty$-algebras $L_\infty(M,\omega)$ and $L_\infty(E^n,\omega)$. We denote by $L$ and $\cV$, respectively, the underlying graded vector spaces (see equations \eqref{eq:VSpace} above and \eqref{eq:Lspace} in chapter \ref{Chap:MultiSymplecticGeometry}). There is an obvious sequence of inclusions at the level of graded vector spaces
\begin{equation}\label{eq:GVSinclusions}
\begin{tikzcd} L \ar[r,equal,"\sim"] & \cA \ar[r,hook,"h"] & \cV \ar[r,hook] & \widetilde{\cV} \end{tikzcd}
\end{equation}
where $\cA$ is the graded vector space introduced in equation \eqref{eq:Aspace} and $\widetilde{\cV}$ has been introduced in equation \eqref{eq:VtildeSpace}. The first isomorphism is the map $\vartriangle$ defined in equation \eqref{eq:trianglemap}, the second one is the standard inclusion of $\cA$ as a subspace of $\cV$, and the last one is a truncation of the identity map. With the following lemma, we want to point out that there are two $L_n$-structures naturally induced by the $n$-plectic form $\omega$ on the cochain complex $(\cA,\pi_1)$: namely, the structure $\{\pi_k\}$ given by Rogers, and the multibrackets $\{\mu_k\}$ obtained by restricting Vinogradov's $L_\infty$-algebra.
\begin{lemma}
\noindent (i) The inclusions expressed by equation \eqref{eq:GVSinclusions} extend at the level of cochain complexes to
\begin{displaymath}
\begin{tikzcd} (L, [\cdot]_1) \ar[r,equal,"\sim"] & (\cA,\pi_1) \ar[r,hook] & (\cV,\mu_1) ~.
\end{tikzcd}
\end{displaymath}
\noindent (ii) The $L_\infty$-algebra structure $\{\mu_k\}$ on $\cV$ restricts to an $L_\infty$-subalgebra structure on $\cA$, namely
\begin{displaymath}
(h)=(h,0,\dots)~:~ (\cA,\{\mu_k \circ h^{\otimes k}\}) \to (\cV,\{\mu_k\})
\end{displaymath}
is a strict $L_\infty$-morphism.
\end{lemma}
\begin{proof}
\noindent (i) By their very definition (\cf equations \eqref{eq:Lspace}, \eqref{eq:Aspace}, \eqref{eq:VSpace}), the considered cochain complexes coincide in degrees less than $0$. Therefore, one has only to prove that the following diagram commutes in the category of ordinary vector spaces
%
\begin{displaymath}
\begin{tikzcd}[column sep= large] & &\quad \mathfrak{X}(M)\oplus \Omega^{n-1}(M) \\ \cdots \ar[r,"d"] & \Omega^{n-2}(M) \ar[ru,sloped,"\mu_1"]\ar[r,"\pi_1"] \ar[rd,sloped,"\dd"'] & \cA^{n-1}\ar[u,hook,"h "'] \\ & & \Omega^{n-1}_{\text{Ham}}(M,\omega)\ar[u,"\Delta "'] \end{tikzcd} ~.
\end{displaymath}
The commutativity of the two rightmost triangles follows immediately by remembering that exact $(n-1)$-forms are Hamiltonian with trivial Hamiltonian vector field (see remark \ref{Rem:ClosedformsTrivialHamiltonian}), hence
\begin{displaymath}
\pi_1(\alpha) = \mu_1 (\alpha) = h(\d\alpha) = \pair{0}{\d \alpha} \qquad \forall \alpha \in \Omega^{n-2}(M)~.
\end{displaymath}
\noindent (ii) One has to prove that
\begin{displaymath}
\text{Im} (\mu_k \vert_{\cA}) = \text{Im} (\mu_k \circ h^{\otimes k}) \subset \text{Im} ( h) ~.
\end{displaymath}
Since $\cA^i\equiv \cV^i$ for all $i\leq -1$, the previous condition holds automatically for any $\mu_k$ with $k\geq 3$ since $|\mu_k| = 2-k$. The argument of (i) implies that $\mu_1=\pi_1$. It remains only to check the case of $\mu_2$ restricted to the degree $0$ sector. Consider $e_i=\pair{x_i}{\alpha_i}\in \cA^0\subset \cV^0$, where $x_i$ is the Hamiltonian vector field pertaining to $\alpha_i$; one gets:
\begin{displaymath}
\begin{aligned}
[e_1,e_2]_\omega =&~ \pair{[x_1,x_2]}{\mathcal{L}_{x_1}\alpha_2 - \mathcal{L}_{x_2}\alpha_1 - \d \langle e_1,e_2\rangle_- + \iota_{x_1}\iota_{x_2}\omega} = \\
=&~ \pair{[x_1,x_2]}{\d (\iota_{x_1}\alpha_2 - \iota_{x_2}\alpha_1) - \d \langle e_1,e_2\rangle_- + \iota_{x_1}\d \alpha_2 - \iota_{x_2}\d\alpha_1 + \iota_{x_1}\iota_{x_2}\omega}= \\
=&~ \pair{[x_1,x_2]}{\d \langle e_1,e_2\rangle_- + \iota_{x_2}\iota_{x_1}\omega}
\end{aligned}
\end{displaymath}
employing the Cartan formula in the second equality and the Hamilton-DeDonder-Weyl equation (see the definition of the ``Hamiltonian condition'' in definition \ref{Def:Hamiltonianform}) in the last one. From the above equation it follows that Hamiltonian pairs in $(\Omega^{n-1}_{\text{Ham}}(M,\omega))^{\Delta}$ are closed under the bracket $[\cdot,\cdot]_\omega$ since, according to lemma \ref{Lem:BinBrackofHamFormsisHamiltonian}, $[x_1,x_2]$ is the Hamiltonian vector field pertaining to $\iota_{x_2}\iota_{x_1}\omega$.
\end{proof}
\begin{notation}
From now on, we will simply denote by $\mu_k$ the restriction to $\mathcal{A}$ of the $k$-ary brackets of definition \ref{def:vinolinfty}. Let us stress that, according to the previous definitions, the grading convention reads as follows:
\begin{align*}
|\mathfrak{X}(M)\oplus \Omega^{n-1}(M)| &= |\cA^{n-1}| = |\Omega^{n-1}_{\text{Ham}}| = 0 ; \\
|\Omega^k(M)| &= k - (n-1) ~.
\end{align*}
Summing up,
\begin{displaymath}
L_\infty(M,\omega) \cong (\mathcal{A},\pi) ~;\quad L_\infty(E^n,\omega) \supset (\mathcal{A},\mu)~.
\end{displaymath}
\end{notation}
\subsection{{Properties} of Vinogradov's $L_\infty[1]$-algebra}\label{sec:L1VinoProp}
It is a result stated by Ritter and Saemann \cite{Ritter2015a} that the two aforementioned $L_\infty$-algebras $(\cA,\pi)$ and $(\cA,\mu)$ are isomorphic. In section \ref{Section:ExtendedRogersEmbedding} we will manage to provide an explicit construction of such an isomorphism up to the $4$-plectic case. In order to do so, we need to understand the relations between the multibrackets $\{\pi_k\}$ and $\{\mu_k\}$, and we are going to express them using the \RN product introduced in eq. \eqref{eq:compsymm}. Denote by $(\cA[1],\{\bmu_k\})$ the $L_{\infty}[1]$-algebra corresponding to the $L_\infty$-algebra on $\cA$ obtained by restricting Vinogradov's Lie $n$-algebra $ L_{\infty}(E^n,\omega)$. We establish some relationships satisfied by the multibrackets $\bmu_k$.
\begin{proposition}[{Binary bracket}]\label{Prop:mu2}
\begin{displaymath}
{\bmu_2} = \bpi_2 - [\pairing,\bpi_1]_{\cs}
\end{displaymath}
\end{proposition}
\begin{proof}
First, we introduce the following {auxiliary} binary bracket on $\cV$
\begin{displaymath}
\begin{aligned}
\eta_2(e_1,e_2) =& \pair{[X_1,X_2]}{\iota_{X_1}\dd \alpha_2 - \iota_{X_2}\dd \alpha_1 + \iota_{X_1}\iota_{X_2}\omega} \\
\eta_2(e,f) =& \eta_2(f,e)= 0 ~.
\end{aligned}
\end{displaymath}
Recalling the natural extension of the pairing operator (see remark \ref{rem:pair-}), the binary bracket in definition \ref{def:vinolinfty} can then be written as
$$ \mu_2 = \eta_2 + \mu_1 \ca \pairing_- - \pairing_- \ca \mu_1 ~.$$
The latter equality can be checked by inspection on arbitrary elements of $\cV$
\begin{displaymath}
\mathclap{
\begin{aligned}
\mu_2&(f_1\oplus e_1 ,f_2\oplus e_2) = \mu_2(e_1,e_2) + \mu_2(f_1,e_2)+\mu_2(e_1,f_2) + \cancel{\mu_2(f_1,f_2)} =\\
&= d \langle e_1,e_2\rangle_- + \eta_2(e_1,e_2)-\frac{1}{2}\mathcal{L}_{X_2}f_1 + \frac{1}{2}\mathcal{L}_{X_1}f_2 = \\
&= \frac{1}{2}\big[ \dd (\iota_{X_1}\alpha_{2} - \iota_{X_2}\alpha_1 ) - (\dd \iota_{X_2}f_1 + \iota_{X_2} \dd f_1) + (\dd \iota_{X_1}f_2 + \iota_{X_1} \dd f_2) \big] + \eta_2(e_1,e_2) =\\
&= \frac{1}{2}\big[ \dd \iota_{X_1}(\alpha_{2} + f_2) -\dd \iota_{X_2}(\alpha_{1} + f_1) + \iota_{X_1} \dd f_2 -\iota_{X_2}\dd f_1 \big] + \eta_2(f_1\oplus e_1 , f_2\oplus e_2) =\\
&=\big[ \mu_1 \ca \pairing_- - \pairing_- \ca \mu_1 + \eta_2 \big](f_1\oplus e_1 , f_2\oplus e_2)
\end{aligned}
}
\end{displaymath}
Restricting to $\cA{\subset\cV}$, one finds that $\eta_2 = \pi_2$, since {$\eta_2(e_1,e_2) = \pair{[X_1,X_2]}{\iota_{X_2}\iota_{X_1}\omega} = \pi_2 (e_1,e_2)$.} Hence {on $\cA$ we have}
\begin{equation}\label{Eq:mu2}
\mu_2 = \pi_2 + \mu_1 \ca \pairing_- - \pairing_- \ca \mu_1
\end{equation}
and thus on $\cA[1]$:
\begin{equation}\label{Eq:mu2-shifted}
{\bmu_2 = \bpi_2 + \bmu_1 \cs\pairing - \pairing \cs \bmu_1 ~.}
\end{equation}
\end{proof}
\begin{corollary}\label{cor:pairmu2}
The Vinogradov binary bracket commutes with the pairing:
\begin{displaymath}
[\pairing,\bmu_2]_{\cs} = 0
\end{displaymath}
\end{corollary}
\begin{proof}
Computing the commutator of equation \eqref{Eq:mu2-shifted} with the pairing yields
\begin{displaymath}
\begin{aligned}
[\pairing,\bmu_2]_{\cs} =& [\pairing, \bpi_2 - [\pairing,\bpi_1]_{\cs}]_{\cs} = \\
=& [\pairing,\bpi_2]_{\cs} - [\pairing,[\pairing,\bpi_1]_{\cs}]_{\cs} = 0 ~,
\end{aligned}
\end{displaymath}
where the last equality is given by equation \eqref{eq:commPairingMu3}.
\end{proof} \begin{proposition}[{Ternary bracket}]\label{Prop:mu3} \begin{displaymath} {\bmu_3} = \bpi_3 -\frac{1}{2} [\pairing, \bpi_2]_{\cs} - \frac{1}{6} [\pairing^{\cs 2},\bpi_1]_{\cs} \end{displaymath} \end{proposition} \begin{proof} Employing the definition of the Richardson-Nijenhuis product, one can express the ternary bracket in definition \ref{def:vinolinfty} as \begin{equation}\label{Eq:mu3} \mu_3 = -\dfrac{1}{3} \pairing_+\ca \mu_2 ~. \end{equation} The explicit definition of $\ca$, see equation \eqref{Eq:RNProducts-explicit}, ensures that multiplying on the left by a binary bracket, not necessarily skew-symmetric, is a well-defined operation valued in graded skew-symmetric multilinear maps. More precisely, equation \eqref{Eq:mu3} can be deduced by inspection on homogeneous elements. When evaluated on degree $0$ elements, $\mu_3$ reads as: $$ \mu_3(e_1,e_2,e_3) = -T_\omega(e_1,e_2,e_3) = -\frac{1}{3} \langle[e_1,e_2]_\omega,e_3 \rangle_+ + \cyc ~, $$ {while in other degrees, for any $f$ such that $|f|\neq 0$, it reads} \begin{align*} \mu_3(f,e_1,e_2) &= -\frac{1}{6}\left[ \iota_{X_1}(\frac{\mathcal{L}_{X_2}}{2}f) - \iota_{X_2}(\frac{\mathcal{L}_{X_1}}{2}f) + \iota_{[X_1,X_2]}f \right] = \\ &= -\frac{1}{6}\left[ \iota_{X_1}\mu_2(e_2,f) - \iota_{X_2}\mu_2(e_1,f) + \iota_{[X_1,X_2]}f \right] = \\ &= -\frac{1}{3}\left[ \pairing_+ \circ ( \mu_2 \otimes \mathbb{1}) \right] \big( (f,e_1,e_2)-(f,e_2,e_1)+(e_1,e_2,f) \big) = \\ &= -\frac{1}{3}\left[ \pairing_+ \circ ( \mu_2 \otimes \mathbb{1}) \right] (f,e_1,e_2) + \cyc ~. \end{align*} Equation \eqref{Eq:mu3} can be further expressed as: \begin{align*} \mu_3 \equal{Eq: \eqref{Eq:mu3}}& -\dfrac{1}{3} \pairing_+ \ca \mu_2 = \\ \equal{Eq: \eqref{Eq:mu2}}& -\dfrac{1}{3} \pairing_+ \ca \Big( \pi_2 + \pi_1\ca \pairing_- - \pairing_- \ca \pi_1 \Big) = \\ \equal{\phantom{Eq: \eqref{Eq:mu2}}}& -\frac{1}{3} \pairing_+ \ca \pi_2 - \frac{1}{3}\Big( \pairing_+\ca\pi_1\ca \pairing_- - \pairing_+\ca\pairing_- \ca \pi_1 \Big) = \\ \equal{Eq: \eqref{Eq:pi3}}& \pi_3 -\frac{1}{3} \pairing_- \ca \pi_2 + \frac{1}{3} \Big( \pairing_-\ca\pi_1\ca \pairing_- - \pairing_-\ca\pairing_- \ca \pi_1 \Big) \end{align*} where in the last equation we used that $\pairing_-\ca \eta = -\pairing_+ \ca \eta$ for any multilinear map $\eta$ in degree non-zero, and that \begin{equation}\label{Eq:pi3} \pairing_+ \ca \pi_2 = \pairing_- \ca \pi_2 - 3\, \pi_3 ~. \end{equation} % Equation \eqref{Eq:pi3} {can be checked by probing with} elements $e_i=f_i + \pair{X_i}{\alpha_i}$, \ie \begin{displaymath} \begin{aligned} \Big[\big(\pairing_+ - \pairing_-\big) \ca \pi_2\Big] (e_1,e_2,e_3) =&~ \iota_{X_3} \pi_2(e_1,e_2) + \cyc ~= \\ =&~ \omega(X_1,X_2,X_3) + \cyc ~= \\ =&~ 3 \omega(X_1,X_2,X_3) ~= \\ =&~ - 3 \pi_3 (e_1,e_2,e_3) ~. 
\end{aligned}
\end{displaymath}
%
The claim {of the proposition} follows after applying the d\'ecalage:
\begin{displaymath}
\begin{aligned}
{\bmu_3} \equal{\phantom{Eq: \eqref{Eq:mu2}}}& \bpi_3 -\frac{1}{3} \pairing \cs \bpi_2 + \frac{1}{3} \pairing\cs\bpi_1\cs \pairing -\frac{1}{3} \pairing\cs\pairing \cs \bpi_1 = \\
\equal{Eq: \eqref{Eq:pairing-pi2}}& \bpi_3 -\frac{1}{3} [\pairing, \bpi_2] + \frac{1}{6}\Big( -[\pairing,\bpi_2] + \bpi_1\cs\pairing^{\cs 2} +\pairing^{\cs 2}\cs \bpi_1 \Big)~+ \\
& -\frac{1}{3} \pairing^{\cs 2} \cs \bpi_1 = \\
\equal{\phantom{Eq: \eqref{Eq:mu2}}}& \bpi_3 -\frac{1}{2} [\pairing, \bpi_2] + \frac{1}{6}\Big( \bpi_1\cs\pairing^{\cs 2} -\pairing^{\cs 2}\cs \bpi_1 \Big) = \\
\equal{\phantom{Eq: \eqref{Eq:mu2}}}& \bpi_3 -\frac{1}{2} [\pairing, \bpi_2] - \frac{1}{6} [\pairing^{\cs 2},\bpi_1] ~.
\end{aligned}
\end{displaymath}
\end{proof}
Observe that there is a common pattern for constructing $\pi_k$ and $\mu_k$ (when $k\geq 4$). Both of them are realized via multiple insertions $\iota_{x_k},\iota_{x_{k-1}},\dots$ inside a fixed differential form $B\in \Omega(M)$. In the first case, $B$ coincides with the $n$-plectic form $\omega$; in the second, $B=\mu_3(\xi_1,\xi_2,\xi_3)$ where $\xi_i$ are generic elements of $\mathcal{A}$. In lemma \ref{lemma:InsertionsAsPairing} we stated how similar constructions can be phrased in terms of the pairing operator $\pairing$. We finish this section by showing the analogue of corollary \ref{cor:higherpi} in the case of the Vinogradov $L_\infty$-algebra\footnote{This will not be used in this chapter but we believe it will be useful in extending theorem \ref{thm:iso} to arbitrary values of $n$.}.
\begin{lemma}\label{lem:mun}
\begin{displaymath}
\bmu_{{n}} = 3~ \left( \frac{2^{n-1}}{(n-1)!}B_{n-1} \right) ~\cdot~ \pairing^{\cs ({n}-3)} \cs \bmu_3
\end{displaymath}
\end{lemma}
\begin{proof}
According to equation \eqref{eq:VinoMultibrakAllaZambon_1}, the explicit value of $\mu_n(\varv_1,\dots,\varv_n)$ is a sum of two terms which can be rewritten employing the anchor operator $\rho$, such that $\rho(\varv_i) = X_i$ for any $\varv_i=f_i\oplus e_i \in \cA$. {We consider the two terms separately.}
i) The first summand reads as
\begin{displaymath}
\begin{aligned}
\sum_{i=1}^n (-)^{i-1}\mu_n & \left( \varv_i,\rho(\varv_1),\dots,\widehat{\rho(\varv_i)},\dots,\rho(\varv_n)\right) = \\
=&~ \left( \mu_n \circ \big(\mathbb{1}\otimes \rho^{\otimes(n-1)}\big) \circ P_{1,n-1} \right)~(\varv_1,\dots,\varv_n)
\end{aligned}
\end{displaymath}
noticing that $\sigma=(i,1,\dots,\hat{i},\dots,n)\in \ush{1,n-1}$ and $|\sigma|=(-)^{i+1}$. Explicitly, see equation \eqref{eq:VinoMultibrakAllaZambon_2}, the term {on the r.h.s.
between the large brackets} can be given by contraction with several vector fields:
\begin{displaymath}
\mathclap{
\begin{aligned}
\mu_n &\circ\big(\mathbb{1}\otimes \rho^{\otimes(n-1)}\big)~ (\varv_0,\varv_1,\dots,\varv_{n-1})= \\
=&~ \mu_n( \varv_0,X_1,\dots,X_{n-1} ) \quad= \\
=&~ c_n \mkern-20mu\sum_{1\leq i < j \leq n-1}\mkern-20mu (-)^{i+j+1} \iota_{\rho(\varv_{n-1})}\dots\widehat{\iota_{\rho(\varv_j)}}\dots \widehat{\iota_{\rho(\varv_i)}}\dots\iota_{\rho(\varv_1)} ~[f_0+\alpha_0,\rho(\varv_i),\rho(\varv_j)]_3 = \\
=&~ c_n \left( -\varsigma(n-3)\frac{2^{n-3}}{(n-3)!} \right) \cdot \mkern-50mu \sum_{\qquad~1\leq i < j \leq n-1} \mkern-50mu (-)^{i+j+1} \pairing_-^{\ca (n-3)} \circ \left([\cdot,\cdot,\cdot]_3\otimes\mathbb{1}_{n-3}\right) \circ \left(\mathbb{1}\otimes \rho^{\otimes n-1}\right)\\
&~~~ (\varv_0,\varv_i,\varv_j,\varv_1,\dots,\hat{\varv_i},\dots,\hat{\varv_j}\dots,\varv_{n-1}) = \\
=&~ 3~d_n ~ \pairing_-^{\ca (n-3)} \circ ([\cdot,\cdot,\cdot]_3\otimes\mathbb{1}_{n-3}) \circ (\mathbb{1}\otimes \rho^{\otimes n-1}) \circ (\mathbb{1}\otimes P_{2,n-3}) ~(\varv_0,\varv_1, \dots,\varv_{n-1}) ~.
\end{aligned}
}
\end{displaymath}
Here $c_n$ denotes the coefficient defined in equation \eqref{eq:UglyCoefficient} and, in the last equality, we noticed that $(-)^{i+j+1}=|\sigma|$ with $\sigma=(i,j,1,\dots,\hat{i},\dots,\hat{j},\dots,n-1)\in \ush{2,n-3}$. Further, $d_n$ is given by
\begin{displaymath}
\begin{aligned}
d_n =&~ \frac{c_n}{3} \left( -\varsigma(n-3)\frac{2^{n-3}}{(n-3)!} \right) \\
=&~ \left(-\varsigma(n-3)(-)^{\frac{n+1}{2}}\right) \left(\frac{4~B_{n-1}}{(n-1)(n-2)} \right) \left(\frac{2^{n-3}}{(n-3)!} \right) = \\
=&~ \frac{2^{n-1}}{(n-1)!}B_{n-1} ~.
\end{aligned}
\end{displaymath}
Therefore
\begin{align*}
[\cdot,\dots,\cdot]_n &\circ \big(\mathbb{1}\otimes \rho^{\otimes(n-1)}\big)\circ P_{1,n-1} = \\
=& 3~d_n~ \pairing_-^{\ca(n-3)} \circ \left( [\cdot,\cdot,\cdot]_3\otimes\mathbb{1}_{n-3} \right) \circ \left(\mathbb{1}\otimes \rho^{\otimes n-1} \right) \circ \left(\mathbb{1}\otimes P_{2,n-3}\right) \circ P_{1,n-1}= \\
=& 3~d_n~ \pairing_-^{\ca(n-3)} \circ \left( [\cdot,\cdot,\cdot]_3\otimes\mathbb{1}_{n-3} \right) \circ \left(\mathbb{1}\otimes \rho^{\otimes n-1} \right) \circ P_{1,2,n-3}= \\
=& 3~d_n~ \pairing_-^{\ca(n-3)} \circ \left( \left( [\cdot,\cdot,\cdot]_3 \circ \big(\mathbb{1}\otimes \rho^{\otimes 2}\big) \circ P_{1,2} \right)\otimes \rho^{\otimes (n-3)}\right) \circ P_{3,n-3}= \\
=& 3~d_n~ \pairing_-^{\ca(n-3)} \circ \left([\cdot,\cdot,\cdot]_3 \otimes \rho^{\otimes (n-3)}\right) \circ P_{3,n-3}= \\
=& 3~d_n~ \left( (-)^{n-3} \pairing_-^{\ca(n-3)} \circ \left( [\cdot,\cdot,\cdot]_3 \otimes \mathbb{1}_{n-3}\right) \circ P_{3,n-3} \right) = \\
=& 3~d_n~ \left( \pairing_-^{\ca(n-3)} \ca [\cdot,\cdot,\cdot]_3 \right) ~.
\end{align*}
The last three equalities follow respectively from the observations that $\mu_3\circ \big(\mathbb{1}\otimes \rho^{\otimes 2}\big) \circ P_{1,2} = \mu_3$ (see the definition of the ternary bracket), that $\pairing_-^{\ca(n-3)}\circ \big(\alpha \otimes \rho^{\otimes (n-3)}\big) = \pairing_-^{\ca(n-3)}\circ (\alpha \otimes \mathbb{1}_{n-3})$ for any element $\alpha\in \cV$ such that $\rho(\alpha) = 0$, and that $(-)^{n-3}=1$ for any $n\geq 3$ odd.
%
ii) The second summand, {see equation \eqref{eq:VinoMultibrakAllaZambon_1}}, {can be re-expressed via lemma \ref{lemma:InsertionsAsPairing} to give:}
\begin{displaymath}
\begin{aligned}
\Big( &(-)^{\frac{n+1}{2}}\cdot n \cdot B_{n-1} \Big) ~\iota_{\rho(\varv_n)}\dots\iota_{\rho(\varv_1)} \omega = \\
=& -\varsigma(n) \left( -\frac{2^{n-3}}{n!}3! \right) \left( (-)^{\frac{n+1}{2}}\cdot n \cdot B_{n-1} \right) \pairing_-^{\ca (n-3)} \ca \pi_3 ~ (\varv_1,\dots,\varv_n) = \\
=& \left(\varsigma(n) (-)^{\frac{n+1}{2}}\right) \cdot \frac{3}{2} \cdot \left( \frac{2^{n-1}}{(n-1)!} B_{n-1} \right) ~ \pairing_-^{\ca (n-3)} \ca \pi_3 ~ (\varv_1,\dots,\varv_n) = \\
=& -\frac{3}{2} d_n ~ \pairing_-^{\ca (n-3)} \ca \pi_3 ~ (\varv_1,\dots,\varv_n)
\end{aligned}
\end{displaymath}
%
Summing up, one can conclude that
\begin{displaymath}
\begin{aligned}
\mu_n =& 3 ~d_n ~ \pairing_-^{\ca (n-3)} \ca \left([\cdot,\cdot,\cdot]_3 -\frac{1}{2}\pi_3 \right) = \\
=& 3 ~d_n ~ \pairing_-^{\ca (n-3)} \ca \mu_3 ~
\end{aligned}
\end{displaymath}
{using proposition \ref{Prop:mu3}.} The claim follows after d\'ecalage.
%
\end{proof}
The upshot of the previous lemmas is that the two subalgebras of $M^{\sym}(\cA[1])$ respectively generated by $\{\bpi_k\}$ and $\{\bmu_k\}$ are actually generated by the same subset of only $4$ generators $\{\bpi_1,\bpi_2,\bpi_3,\pairing \}$. This will be crucial when trying to ascertain the existence of an $L_\infty$-morphism $(\cA,\{\bpi_k\})\to (\cA,\{\bmu_k\})$.
\section{Extending Rogers' embedding}\label{Section:ExtendedRogersEmbedding}
Let $(M,\omega)$ be an $n$-plectic manifold. In this section we provide an explicit $L_{\infty}$-embedding from the $L_{\infty}$-algebra of observables on $(M,\omega)$ into the $L_{\infty}$-algebra associated to the Vinogradov algebroid (higher Courant algebroid) $E^n=TM \oplus \Lambda^{n-1} T^\ast M$ with bracket twisted by $\omega$; see theorem \ref{thm:iso} and corollary \ref{cor:Psi}. \\
This construction can be seen as the higher analogue of the mapping \eqref{eq:chris}. The case $n=2$ was originally carried out by Rogers in \cite{Rogers2013}.
\subsection{An $L_{\infty}$-isomorphism}
Recall that we can transfer the multibrackets of Rogers' $L_{\infty}$-algebra $L_{\infty}(M,\omega)$ to $\cA$, by using the isomorphism $\Omega^{n-1}_{\text{Ham}}(M,\omega) \cong (\Omega^{n-1}_{\text{Ham}}(M,\omega))^{\Delta}\cong \cA^0$ in degree zero and the identity in negative degrees. This way we obtain an $L_{\infty}$-algebra structure on $\cA$, which we denote by $(\cA,\{\pi_k\})$. The latter differs from the $L_\infty$-algebra $(\cA,\{\mu_k\})$ we associated after definition \ref{def:vinolinfty} to the $\omega$-twisted Vinogradov algebroid, but the underlying cochain complexes (\ie the unary brackets) are the same. We show that these two $L_{\infty}$-algebra structures are $L_{\infty}$-isomorphic, by a morphism whose first component is $\Id_{\cA}$. We will prove the following theorem in \S \ref{subsec:isoproof}.
\begin{theorem}\label{thm:iso}
For $n\le 4$ there is an $L_{\infty}$-isomorphism ${\Phi}\colon (\cA,\{\pi_k\})\to(\cA,\{\mu_k\})$. Its non-vanishing components are given by
{
\begin{align*}
{\Phi}_1&=\Id_{\cA}\\
{\Phi}_2&=-\pairing_-\\
{\Phi}_3&=\frac{1}{3}\pairing_-\ca \pairing_- \\
{\Phi}_4&=0
\end{align*}
where $\ca$ is the skew-symmetric \RN product defined in equation \eqref{Eq:RNProducts-explicit}.
}
\end{theorem}
\begin{remark}\label{rem:Phientries}
By construction $\Phi$ enjoys the property that $ \Phi_k (\varv_1, \dots, \varv_k) =0$ unless $|\varv_i|=0$ for at least $k-1$ entries.
\end{remark}
Recall the sequence of graded vector space morphisms
$$L \cong \cA \hookrightarrow \cV~, $$
given by the identity on $\Omega^{\le n-2}(M)$ in negative degrees, and by $\Omega^{n-1}_{\text{Ham}}(M,\omega)\cong (\Omega^{n-1}_{\text{Ham}}(M,\omega))^{\Delta}\subset \Gamma(E^n)$ in degree zero. As an immediate corollary of theorem \ref{thm:iso} we obtain the following embedding, which for $n=2$ is due to Rogers \cite[Theorem 7.1]{Rogers2013}.
\begin{corollary}\label{cor:Psi}
For $n\le 4$ there is an $L_{\infty}$-embedding ${\Psi}: L_{\infty}(M,\omega)\hookrightarrow L_\infty(E^n,\omega)$ of Rogers' $L_{\infty}$-algebra into Vinogradov's. The embedding consists of only three non-trivial components given, for any $\varv_i=f_i\oplus \alpha_i \in L_{\infty}(M,\omega)$ with $f_i \in \bigoplus_{k=0}^{n-2}\Omega^k(M)$ and $\alpha_i \in \Omega^{n-1}_{\text{Ham}}(M,\omega)$, by the following equations:
\begin{displaymath}
\begin{aligned}
{\Psi}_1(\varv) =& f\oplus \pair{v_\alpha}{\alpha} \\
{\Psi}_2(\varv_1,\varv_2) =& -\dfrac{1}{2}\left( \iota_{v_{\alpha_1}}(f_2\oplus \alpha_2) - \iota_{v_{\alpha_2}}(f_1\oplus \alpha_1) \right) \\
{\Psi}_3(\varv_1,\varv_2,\varv_3) =& \dfrac{1}{6}\iota_{v_{\alpha_1}}\iota_{v_{\alpha_2}}(f_3\oplus \alpha_3) + \cyc
\end{aligned} ~.
\end{displaymath}
\end{corollary}
\begin{remark}[Comparisons with the Ritter-Saemann Theorem]
We emphasize that a result similar to corollary \ref{cor:Psi} was already stated, slightly less explicitly, by Ritter and Saemann in \cite[Thm. 4.10]{Ritter2015a}. Basically, their idea was to deform $\mu_k$ into $\pi_k$ via a sequence of approximating $L_\infty$-morphisms. \\
Let us briefly paraphrase their results in our notation. Given an $n$-plectic manifold $(M,\omega)$, consider the graded vector space $\cA$ as defined in equation \eqref{eq:Aspace} above. They claimed the existence of $(n-1)$ $L_\infty$-morphisms
\begin{displaymath}
\Psi^{(m)}= (id,0,\dots,\varphi^{(m)}_{m+1},0,\dots): (\mathcal{A},\mu_k^{(m)}) \to (\mathcal{A}, \mu_k^{(m+1)}) ~,
\end{displaymath}
where $\mu_k^{(\ell)}$ is an $L_\infty$-structure on $\mathcal{A}$ that agrees with Rogers' up to order $\ell$, \ie
$$\mu_k^{(\ell)} = \pi_k \qquad \forall k\leq \ell ~,$$
and in particular $\mu_k^{(1)}$ coincides with the restriction of Vinogradov's $L_\infty$-algebra to $\mathcal{A}$, so that $\Psi = \Psi^{(n-1)}\circ\dots\circ\Psi^{(1)}$ is an $L_\infty$-isomorphism between $(\mathcal{A},\mu_k^{(1)})$ and $(\mathcal{A},\pi_k)$. This construction can be alternatively subsumed by the following diagram:
\begin{displaymath}
\begin{tikzcd}[column sep = large] L_{\infty}(M,\omega) \ar[d,equal,sloped,"\sim "]& & L_{\infty}(E^n,\omega) \\ (\mathcal{A},\pi_k) & & (\mathcal{A},\mu_k^{(1)}) \ar[u,hook,"h"'] \ar[d,"{\Psi^{(1)}=(id,\varphi_2^{(1)},0,\dots)}"] \\ & & (\mathcal{A},\mu_k^{(2)}) \ar[d,"{\Psi^{(2)}=(id,0,\varphi_3^{(2)},0,\dots)}"] \\ & & \vdots \ar[d,"{\Psi^{(n-1)}=(id,0,\dots,\varphi_n^{(n-1)})}"] \\ & & (\mathcal{A},\mu_k^{(n)}) \ar[d,"\Psi^{(n)}=id"] \\ & & (\mathcal{A},\mu_{k}^{(n+1)}) \ar[lluuuu,equal] \end{tikzcd}
\end{displaymath}
Notice that in \cite{Ritter2015a} also the single components $\Psi^{(k)}$, for any $k$, are claimed to be obtained from successive approximations.
Namely $\mu_{k+1}^{(k)}$ is deformed into $\mu_{k+1}^{(k+1)}=\pi_{k+1}$ degree by degree, with respect to the grading of $\cA$. \\
The interest in finding an explicit expression for each of the above components $\varphi_n$ has been our original motivation for performing the computations involved in the results of sections \ref{sec:L1RogersProp} and \ref{sec:L1VinoProp}.
\end{remark}
\subsection{The proof of Theorem \ref{thm:iso}}\label{subsec:isoproof}
{For the proof, it is convenient to work with $L_{\infty}[1]$-algebras, by applying the d\'ecalage isomorphism \eqref{eq:deca}.} \\
In the following, for any given homogeneous linear map $m\in\underline{\Hom}(S^{\ge 1}(V),V)$, we denote by $C_m$ the corresponding coderivation of $S^{\ge 1}(V)$ given by the lift, \ie $C_m=\widetilde{L}_{\sym}(m)$. Denote by $(\cA[1],\{\bpi_k\})$ the $L_{\infty}[1]$-algebra corresponding to $(\cA,\{\pi_k\})$. Notice that applying the d\'ecalage isomorphism to obtain $\bpi$ does not introduce any extra signs, since the higher multibrackets in Rogers' $L_{\infty}$-algebra vanish unless all entries have degree $0$. \\
Similarly, denote by $(\cA[1],\{\bmu_k\})$ the $L_{\infty}[1]$-algebra corresponding to $(\cA,\{\mu_k\})$, and write $\bmu \colon S^{\ge 1}(\cA[1])\to \cA[1]$ for the map with components $\bmu_k$ ($k\ge 1)$. {We want to construct an $L_{\infty}[1]$-isomorphism from $(\cA[1],\{\bpi_k\})$ to $(\cA[1],\{\bmu_k\})$.} The idea is to apply the key remark \ref{rem:exp} in section \ref{SubSection:studycase}, \ie to construct the sought isomorphism as the exponential of a degree $0$ coderivation. Denote by $Q_{\bpi}$ the codifferential on $S^{\ge 1}(\cA[1])$ corresponding to $(\cA[1],\bpi)$. For any degree $0$ linear map $p\colon S^{\ge 1}(\cA[1])\to \cA[1]$ {such that $e^{C_p}$ converges,} we know that
\begin{itemize}
\item $ e^{C_p}\circ Q_{\bpi}\circ e^{-C_p}$ is a new codifferential, which corresponds to a new $L_{\infty}[1]$-algebra structure $\bpi'$ on $\cA[1]$;
\item $e^{C_p}$ corresponds to an $L_{\infty}[1]$-isomorphism $f$ from $(\cA[1],\bpi)$ to $(\cA[1],\bpi')$.
\end{itemize}
The explicit formulae for $\bpi'$ and $f$ were given in Remark \ref{rem:exp}. We will show that $p$ can be chosen in such a way that $\bpi'=\bmu$, at least when $n\le 4$. In the following we will employ the straightforward extension of the pairing operator $\pairing_-$ of \S \ref{subsec:Vinogradov} from $\mathfrak{X}(M)\oplus \Omega^{n-1}(M)$ to the whole {graded vector space $\cV$}. (In the terms of remark \ref{rem:pair-}, one can equivalently say that we are considering the restriction of $\pairing_-$ from $\widetilde{\cV}$ to $\cV$.) \\
The following proposition immediately implies the first part of theorem \ref{thm:iso}. The proof relies on our previous computations about $\bpi$ and $\bmu$ in terms of the \RN products (see sections \ref{sec:L1RogersProp} and \ref{sec:L1VinoProp}).
\begin{proposition}\label{prop:main}
Let $n\le 4$. Let $p\colon S^{\ge 1}(\cA[1])\to \cA[1]$ be the degree $0$ linear map given by\footnote{Thus $p=-\langle \;,\; \rangle-\frac{1}{6}\langle \;,\; \rangle\cs \langle \;,\; \rangle$.}
$$p=c_1\langle \;,\; \rangle+c_2\langle \;,\; \rangle\cs \langle \;,\; \rangle+c_3\langle \;,\; \rangle\cs \langle \;,\; \rangle\cs \langle \;,\; \rangle$$
for $c_1=-1, c_2=-\frac{1}{6}, c_3=0$, where $\cs$ denotes the operation introduced in eq. \eqref{eq:compsymm}. Then
\begin{equation}\label{eq:pi'}
\bpi':= \bpi+[p,\bpi]_{\cs}+\frac{1}{2!}[p, [p,\bpi]_{\cs}]_{\cs}+\dots
\end{equation}
agrees with $\bmu$.
\end{proposition}
\begin{remark}\label{rem:degrees}
{Although proposition \ref{prop:main} is phrased only for $n\le 4$, we make some remarks that hold for arbitrary $n$ and for any linear map $p=\sum_{i=1}^{n-1}c_i \langle \;,\; \rangle^{\cs i}$ with an arbitrary string of coefficients $c_1,\dots,c_{n-1}$.}
a) For every $k\ge 1$, the component $\bpi'_k=\bpi'|_{S^k(\cA[1])}$ is a finite sum: more precisely, only the first $k$ summands on the r.h.s. of eq. \eqref{eq:pi'} contribute to it. The reason is that $p_1=0$ by definition, and the only non-vanishing component of the maps $p_i\cs\bpi_j$ and $\bpi_j\cs p_i$ is the component defined on $S^{i+j-1}(\cA[1])$.
b) The elements of $\cA[1]$ of maximal degree are those of degree $-1$, and are those lying in $(\Omega^{n-1}_{\text{Ham}}(M,\omega))^{\Delta}[1]$. We now make some considerations about the vanishing of $\bpi'_k$ when applied to {homogeneous} elements of $\cA[1]$ of lower degree (by which we mean: of degree that is non-maximal, \ie $\le -2$). First notice the following, {where $p_k=p|_{S^k(\cA[1])}$:}
\begin{itemize}
\item $p_1=0~.$
\item for $k\ge 2$: $p_k$ applied to {homogeneous} elements of $\cA[1]$ might be non-vanishing only when \emph{all but possibly one} elements are in degree $-1$. The result is a lower degree element.
\item for $m\ge 2$: $\bpi_m$ applied to {homogeneous} elements of $\cA[1]$ might be non-vanishing only when \emph{all} elements are in degree $-1$. The result is a lower degree element, except in the case of $\bpi_2$.
\end{itemize}
As a consequence, for all $k\ge 2 $ we have:
\begin{itemize}
\item[i)] $[p_k,\bpi_2]_{\cs}$ applied to {homogeneous} elements of $\cA[1]$ might be non-vanishing only when \emph{all but possibly one} elements are in degree $-1$.\\ The same holds for iterated brackets $[p_{k_1},\dots,[p_{k_l},\bpi_2]_{\cs}]_{\cs}$.
\item[ii)] for $m\ge 3$, $[p_k,\bpi_m]_{\cs}$ applied to {homogeneous} elements of $\cA[1]$ might be non-vanishing only when \emph{all} elements are in degree $-1$.\\ The same holds for iterated brackets $[p_{k_1},\dots,[p_{k_l},\bpi_m]_{\cs}]_{\cs}$.
\end{itemize}
We consider separately the case of iterated brackets involving $\bpi_1$. Notice that $\bpi_1(\xi)$ is a degree $-1$ element only when $\xi\in \cA[1]$ has degree $-2$. In that case $\bpi_1(\xi)= \pair{0}{\dd \xi}\in(\Omega^{n-1}_{\text{Ham}}(M,\omega))^{\Delta}[1]$, \ie the vector field component vanishes. Further, for $k\ge 2$, refining a statement above, it follows that $p_k$ applied to elements of $\cA[1]$ may be non-vanishing only if \emph{all but possibly one} entries are in degree $-1$ with \emph{non-vanishing vector field component}. Consequently we have:
\begin{itemize}
\item[iii)] $[p_k,\bpi_1]_{\cs}$ applied to {homogeneous} elements of $\cA[1]$ might be non-vanishing only when \emph{all but possibly one} elements are in degree $-1$. \\ The same holds for iterated brackets $[p_{k_1},\dots,[p_{k_l},\bpi_1]_{\cs}]_{\cs}$.
\end{itemize}
{The conclusion we draw from i), ii), iii) is that the $L_{\infty}[1]$-algebra structure $\bpi'$ on $\cA[1]$, defined as in eq. \eqref{eq:pi'}, has the following property: the evaluation of multibrackets with arity $k\ge 2$ on homogeneous elements might be non-vanishing only when \emph{all but possibly one} elements are of top degree (\ie degree $-1$). Notice that the $L_{\infty}[1]$-algebra associated to the $\omega$-twisted Vinogradov algebroid has the same property, as recalled in Remark \ref{Remark:semplifica-conti}.
}
\end{remark}
\begin{proof}[Proof of proposition \ref{prop:main}]
We write out explicitly the first $\bpi'_k$'s:
\begin{align*}
\allowdisplaybreaks
\bpi_1'&=\bpi_1 \\
\bpi_2'&=\bpi_2+[p_2,\bpi_1]_{\cs} \\
\bpi_3'&=\bpi_3+[p_3,\bpi_1]_{\cs}+[p_2,\bpi_2]_{\cs}+\frac{1}{2} [p_2, [p_2,\bpi_1]_{\cs}]_{\cs} \allowdisplaybreaks[0] \\
\bpi_4'&=\bpi_4+[p_4,\bpi_1]_{\cs}+[p_3,\bpi_2]_{\cs}+[p_2,\bpi_3]_{\cs}\\
&+ \frac{1}{2} [p_3, [p_2,\bpi_1]_{\cs}]_{\cs}+\frac{1}{2} [p_2, [p_3,\bpi_1]_{\cs}]_{\cs}+ \frac{1}{2} [p_2, [p_2,\bpi_2]_{\cs}]_{\cs}+ \\
&+ \frac{1}{6} [p_2,[p_2,[p_2,\bpi_1]_{\cs}]_{\cs}]_{\cs}.
\end{align*}
We now compute explicitly the right-hand sides. According to proposition \ref{Prop:mu2}, one has that
\begin{displaymath}
\bpi'_2 = \bpi_2 + c_1 [\pairing,\bpi_1]_{\cs}
\end{displaymath}
{equals $\bmu_2$} if and only if $c_1 = -1$. Similarly, according to proposition \ref{Prop:TernaryCommutator}, one has
\begin{displaymath}
\begin{aligned}
\bpi'_3 =&~ \bpi_3 + c_2 [\pairing^{\cs 2}, \bpi_1] + c_1 [ \pairing, \bpi_2] + \frac{c_1^2}{2}[\pairing,[\pairing,\bpi_1]] \\
=&~\bpi_3 + c_2 [\pairing^{\cs 2}, \bpi_1] + c_1\left(\frac{2+c_1}{2}\right) [ \pairing, \bpi_2],
\end{aligned}
\end{displaymath}
{and this equals $\bmu_3$} if and only if $c_1=-1$ and $c_2 = -1/6$, {by proposition \ref{Prop:mu3}}.\\
Every even multibracket in Vinogradov's $L_\infty$-algebra with arity greater than 2 is trivial, in particular $\bmu_4 = 0$. On the other hand, proposition \ref{Prop:QuaternaryCommutator} implies that
\begin{align*}
\bpi'_4 =& \bpi_4 + \\
&+ c_1 \pairing \cs \bpi_3 + c_2 \pairing^{\cs 2} \cs \bpi_2 + c_3 [ \pairing^{\cs3}, \bpi_1] + \\
& + \frac{c_1^2}{2} [ \pairing, \pairing\cs \bpi_2] + \frac{c_1 c_2}{2} \left( [\pairing^{\cs 2},[\pairing,\bpi_1]] + [\pairing,[\pairing^{\cs 2},\bpi_1]] \right)+ \\
&+ \frac{c_1^3}{6} [\pairing,[ \pairing , [ \pairing , \bpi_1 ] ] ] = \\
=& \left(1+ 2c_1 + \frac{3}{2}c_1^2 + \frac{{c_1^3}}{2}\right)\bpi_4 + \\
&+ (c_2 + c_1 c_2)[\pairing^{\cs 2},\bpi_2] \\
&+ c_3 [\pairing^{\cs 3}, \bpi_1].
\end{align*}
Substituting the values of $c_1$ and $c_2$ just found implies that
\begin{displaymath}
\bpi_4' = c_3 [\pairing^{\cs 3}, \bpi_1]
\end{displaymath}
hence, $\bpi_4' = 0$ when $c_3=0$.
\end{proof}
\begin{proof}[Proof of theorem \ref{thm:iso}]
{We apply remark \ref{rem:exp} to the $L_{\infty}[1]$-algebra $(\cA[1],\bpi)$ as indicated at the beginning of this subsection, choosing $p\colon S^{\ge 1}(\cA[1])\to \cA[1]$ as in proposition \ref{prop:main}. Notice that $p$ satisfies the condition explained in Remark \ref{rem:codermor}, hence $e^{C_p}$ is convergent.} Thanks to proposition \ref{prop:main}, we just have to make explicit the $L_{\infty}$-isomorphism $f\colon (\cA[1],\{\bpi_k\})\to (\cA[1],\{\bmu_k\})$ provided by remark \ref{rem:exp}. \\
Due to the restriction $n\le 4$, we know that $\cA[1]$ is concentrated in degrees $-4,\dots,-1$. Since $\pairing \colon S^{ 2}(\cA[1])\to \cA[1]$ has degree zero, we have $\pairing^{\cs 4}=0$. Computing $p^{\cs 2}=\pairing^{\cs 2} +\frac{1}{3}\pairing^{\cs 3}$ and $p^{\cs 3}=-\pairing^{\cs 3}$ we obtain
\begin{align*}
f &=pr_{\cA[1]}+p+\frac{1}{2!}p^{\cs 2}+\frac{1}{3!}p^{\cs 3} ~=\\
&=pr_{\cA[1]}- \pairing +\frac{1}{3}\pairing^{\cs 2}.
\end{align*}
Applying the d\'ecalage isomorphism \eqref{eq:Dec}, together with remark \ref{rem:pair-}, we obtain an $L_{\infty}$-isomorphism ${\Phi}\colon (\cA,\{\pi_k\})\to(\cA,\{\mu_k\})$.
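For the reader's convenience, the coefficient arithmetic behind the expression of $f$ above is the following elementary expansion, obtained by substituting the values of $p$, $p^{\cs 2}$ and $p^{\cs 3}$ just computed:
\begin{displaymath}
f = pr_{\cA[1]} + \left(-\pairing - \tfrac{1}{6}\pairing^{\cs 2}\right) + \tfrac{1}{2}\left(\pairing^{\cs 2} + \tfrac{1}{3}\pairing^{\cs 3}\right) - \tfrac{1}{6}\pairing^{\cs 3} = pr_{\cA[1]} - \pairing + \left(\tfrac{1}{2}-\tfrac{1}{6}\right)\pairing^{\cs 2} = pr_{\cA[1]} - \pairing + \tfrac{1}{3}\pairing^{\cs 2} ~,
\end{displaymath}
in agreement with the components of ${\Phi}$ listed in theorem \ref{thm:iso} after d\'ecalage.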
\end{proof}
\begin{remark}\label{rem:arbitraryn}
We expect to be able to extend proposition \ref{prop:main} to arbitrary values of $n$. Thanks to Remark \ref{rem:degrees} b), to do so it suffices to apply $\bpi'$ on strings of elements of $\cA[1]$ which are of maximal degree except for possibly one. {For the resulting $L_{\infty}$-isomorphism ${\Phi}\colon (\cA,\{\pi_k\})\to(\cA,\{\mu_k\})$, extending theorem \ref{thm:iso}, we expect each component $\Phi_k$ to be given as in eq. \eqref{eq:Phik} in the next section. This expectation is suggested by the proof of theorem \ref{thm:comm}.}
\end{remark}
\section{Gauge transformations}\label{Sec:DiagramGaugeTransf}
Given two $n$-plectic forms $\omega$ and $\widetilde{\omega}$ on the same manifold $M$, we say that the two are \emph{gauge related} if there exists an $n$-form $B$ such that
$$ \widetilde{\omega} = \omega {+}\dd B ~, $$
\ie if they lie in the same de Rham cohomology class. Recall that in section \ref{Section:GaugeTransformations} we already described the relationship between the \momaps pertaining to a group action that is multisymplectic with respect to two gauge-related $n$-plectic forms. {The aim of this section is to show that the $L_{\infty}$-embedding constructed in corollary \ref{cor:Psi} is compatible with gauge transformations, see theorem \ref{thm:comm}.}
\subsection{Vinogradov algebroids and gauge transformations}\label{Sec:VinoGauge}
{Let $\omega$ and $\widetilde{\omega} = \omega + \dd B$ be gauge-related closed $(n+1)$-forms on $M$.} The two corresponding twisted Vinogradov algebroids $(E^n, \omega)$ and $(E^n,\widetilde{\omega})$ are isomorphic. Indeed, the vector bundle isomorphism
\begin{displaymath}
\begin{tikzcd}[column sep= small,row sep=0ex, /tikz/column 1/.append style={anchor=base east}] \tau_B \colon & E^n = TM \oplus \Lambda^{n-1} T^\ast M \ar[r]& E^n ~, \\ &\pair{X}{\alpha} \ar[r, mapsto] & \pair{X}{\alpha + \iota_X B} = \pair{X}{\alpha} + \pair{0}{\iota_X B} \end{tikzcd} ~,
\end{displaymath}
{preserves the anchor $\rho$, the pairing $\pairing_{-}$, and maps the bracket $[\cdot,\cdot]_{\omega}$ to $[\cdot,\cdot]_{\widetilde{\omega}}$. (Cf. remark \ref{rem:VinoidsMorphism}).} \\
Hence this bundle isomorphism induces a strict $L_\infty$-isomorphism at the level of the corresponding Vinogradov $L_\infty$-algebras (\cf \cite[Prop. 8.5]{Zambon2012}) given by
\begin{displaymath}
(\tau_B)_m =
\begin{cases}
\id_{\cV} + \iota_{\rho(\cdot)} B & m=1 \\
0 & m\geq 2
\end{cases} ~.
\end{displaymath}
Notice that there is a natural diagram in the category of $L_\infty$-algebras
\begin{equation}\label{diag:3sides}
\begin{tikzcd} L_{\infty}(M,\omega) \ar[r,"\Psi"] & L_{\infty}(E^n,\omega) \ar[d,"\tau_B"] \\ L_{\infty}(M,\widetilde{\omega}) \ar[r,"\Psi"] & L_{\infty}(E^n,\widetilde{\omega}) \end{tikzcd}
\end{equation}
where the horizontal arrows are the extensions of Rogers' embedding {constructed in corollary \ref{cor:Psi} (for $n\le 4$), which, in particular, are given by formulae that do not depend on the multisymplectic form $\omega$ or $\widetilde{\omega}$.} \\
When considering two gauge-related multisymplectic manifolds, it is not possible to define a canonical $L_\infty$-morphism between the two corresponding observables $L_\infty$-algebras\footnote{The two objects are already substantially different at the level of the underlying vector space. A Hamiltonian form with respect to $\omega$ is not in general Hamiltonian with respect to $\tilde{\omega}$.}.
In particular, there is no canonical way to close this diagram on the left to give a commutative square. In what follows, we will look for a suitable pullback $\mathfrak{g}$ in the category of $L_\infty$-algebras:
\begin{displaymath}
\begin{tikzcd} \mathfrak{g}\ar[r] \ar[d] \arrow[dr, phantom, "\scalebox{1.5}{$\lrcorner$}" , very near start, color=black] & L_\infty(M,\omega) \ar[d,"\Psi"] \\ L_\infty(M,\tilde{\omega}) \ar[r,"\Psi"] & L_\infty(E^n,\omega)\cong L_\infty(E^n,\tilde{\omega}) \end{tikzcd}
\end{displaymath}
%
\subsection{Commutativity}
{Consider an infinitesimal action $v:\mathfrak{g}\to \mathfrak{X}(M)$ preserving the $n$-plectic form $\omega$. Suppose that $B\in \Omega^n(M)$ is also preserved, and that $\widetilde{\omega}:=\omega + dB$ is non-degenerate.} Further, assume there exists a homotopy comoment map $(f):\mathfrak{g}\to L_\infty(M,\omega)$ for $\omega$. Then we obtain a homotopy comoment map $(\widetilde{f})$ for $\widetilde{\omega}$, by Lemma \ref{lem:momaps} of section \ref{Section:GaugeTransformations}. {We will show that the diagram \eqref{diag:3sides} can be completed} to a commutative pentagon, at least when $n\le 4$.
%
\begin{theorem}\label{thm:comm}
{Assume that $n\le 4$.} The following diagram of $L_{\infty}$-algebra morphisms commutes, where $\Psi$ is the morphism introduced in corollary \ref{cor:Psi}.
\begin{equation}\label{eq:pentagonDiagram}
\begin{tikzcd} & L_{\infty}(M,\omega) \ar[r,"\Psi"] & L_{\infty}(E^n,\omega) \ar[dd,"\tau_B"] \\[-1em] \mathfrak{g}\ar[ru,"f"] \ar[dr,"\widetilde{f}"'] \\[-1em] & L_{\infty}(M,\widetilde{\omega}) \ar[r,"\Psi"] & L_{\infty}(E^n,\widetilde{\omega}) \end{tikzcd}
\end{equation}
\end{theorem}
{We interpret this commutativity by saying that the twisting of the homotopy comoment map by $B$ is compatible with the twisting of the Vinogradov algebroid.} {For the proof of theorem \ref{thm:comm} we} {make use of the strict isomorphism $L_{\infty}(M,\omega)\cong (\cA,\{\pi_k\})$ (see equations \eqref{eq:Aspace} and \eqref{eq:trianglemap}), given by the identity in negative degrees, and $\alpha\mapsto \pair{X_{\alpha}}{\alpha}$ in degree zero. \\
The commutativity of diagram \eqref{eq:pentagonDiagram} is then equivalent to the commutativity of this diagram, where $\Phi$ is the $L_{\infty}$-morphism constructed in theorem \ref{thm:iso}. \\
\begin{equation}\label{diag:HamiltonianpentagonDiagram}
\begin{tikzcd}[column sep=huge] & (\cA,\{\pi_k\}) \ar[r,"\Phi"] & {L_{\infty}(E^n,\omega)} \ar[dd,"\tau_B"] \\[-1em] \mathfrak{g}\ar[ru,"f"] \ar[dr,"\widetilde{f}"'] \\[-1em] & (\widetilde{\cA},\{\widetilde{\pi_k}\}) \ar[r,"\Phi"] & {L_{\infty}(E^n,\widetilde{\omega})} \end{tikzcd}
\end{equation}
\\
\begin{remark}\label{Rem:DiagramSituation}
All the arrows involved in diagram \eqref{diag:HamiltonianpentagonDiagram} can be expressed in terms of the pairing $\pairing_-$ {and the operation $\ca$ defined in eq. \eqref{Eq:RNProducts-explicit}}:
%
\begin{align}\label{eq:Phim}
\Phi_m =& \begin{cases} \id_{\cA} & m=1 \\ \varphi_m ~ \pairing_-^{\ca (m-1)} & m\geq 2 \end{cases} \\[.5em]
(\tau_B)_m =& \begin{cases} \id_{\cA} - 2 ~\langle B, \cdot \rangle_- & m=1 \\ 0 & m\geq 2 \end{cases} \\[.5em]
(f)_m =& \begin{cases} f_1: \xi \mapsto \pair{v_\xi}{f_1(\xi)} \in \cA_0 & m=1 \\ f_m & m\geq 2 \end{cases} \\[.5em]
(\widetilde{f})_m =& \begin{cases} f_1 - 2~ \langle B, \cdot \rangle_- \circ f_1 & m=1 \\ f_m - d_m ~\left(\pairing_-^{\ca (m-1)} \ca \langle B,\cdot \rangle_-\right) \circ f_1^{\otimes m} & m\geq 2.
\end{cases}
\end{align}
{The coefficients $\varphi_m$ for $\Phi_m$ are provided by theorem \ref{thm:iso}, which holds for $n\le 4$: they are given by $\varphi_2=-1$, $\varphi_3=\frac{1}{3}$, and $\varphi_m=0$ for $m\ge 4$.} The coefficients $d_m$ are given by
$$d_m= \left( \frac{2^m}{m!}\right),$$
as follows {from lemma \ref{lem:momaps}}, lemma \ref{lemma:InsertionsAsPairing} and remark \ref{Rem:piccoloerroredisegnoneldraft} noting that
\begin{equation}\label{eq:bm}
\begin{aligned}
\mathsf{b}_m(x_1,\dots, x_m) &= \varsigma(m+1) \iota_{v_{x_m}}\dots \iota_{v_{x_1}} B = \\
&= - \left(\varsigma(m+1)\varsigma(m) \frac{2^m}{m!}\right) ~\left(\pairing_-^{\ca (m)} \right) \circ \left(B\otimes f_1^{\otimes m}\right) = \\
&= - \dfrac{2^m}{m!} \left(\pairing_-^{\ca(m-1)}\ca \langle B, \cdot \rangle_- \right) \circ f_1^{\otimes m} ~.
\end{aligned}
\end{equation}
\end{remark}
\subsection{Proof of Theorem \ref{thm:comm}}
In this subsection we provide the proof of theorem \ref{thm:comm}. The condition $n\le 4$ will be used only at the very end. To ascertain the strict commutativity of diagram \eqref{diag:HamiltonianpentagonDiagram} one has to make sure that
\begin{displaymath}
(\tau_B \circ \Phi \circ f)_m -(\Phi\circ \widetilde{f})_m = 0 \qquad \forall m\geq 1 ~.
\end{displaymath}
The case $m=1$ is straightforward:
\begin{displaymath}
\begin{aligned}
(\tau_B \circ \Phi \circ f)_1 -(\Phi\circ \widetilde{f})_1 =&~ (\tau_B)_1 \circ \Phi_1 \circ f_1 -\Phi_1 \circ \widetilde{f}_1 = \\
=&~ (\tau_B)_1 \circ f_1 - f_1 + 2 \langle B, \cdot\rangle_- \circ f_1 = \\
=&~ 0 ~.
\end{aligned}
\end{displaymath}
{Alternatively, one can adapt the argument given for the symplectic case $n=1$ in remark \ref{rem:symcomm}.}
%
{Now let $m\ge 2$.} Since $\tau_B$ is a strict morphism and $(\tau_B)_1$ acts as the identity on any element of $\cA$ in degree different from $0$, the higher cases require checking that
\begin{displaymath}
{(\Phi \circ f)_m- (\Phi \circ \widetilde{f})_m = 0}.
\end{displaymath}
Recall now the explicit expression for the composition of two $L_{\infty}$-morphisms in the skew-symmetric framework (obtained by working out definition \ref{Def:CompositionFormula} together with remark \ref{rem:operatorSln}):
\begin{reminder}
Given two $L_\infty$-morphisms $f\colon V \to V'$ and $g\colon V' \to V''$, the components of their {composition} $g\circ f$ are
\begin{equation}\label{eq:LinfinityMorphismsComposition}
(g \circ f)_m = \sum_{\ell=1}^m g_\ell \circ \mathcal{S}_{\ell,m} (f)
\end{equation}
where the operator $\mathcal{S}_{\ell,m} (f)$ is the component $\otimes^m V \to \otimes^\ell V'$ of the lift of $f$ to a coalgebra morphism $T(V)\to T(V')$. Explicitly,
\begin{displaymath}
\mathcal{S}_{\ell,m} (f) = \left( \sum_{\substack{k_{1}+\cdots+k_{\ell}=m\\1\leq k_{1}\leq\cdots\leq k_{\ell}}} (-)^{\sum_{i=1}^{\ell-1}(|f_{k_i}|)(\ell-i)} (f_{k_1}\otimes\cdots\otimes f_{k_\ell})\circ P_{k_1,\ldots,k_\ell}^< \right)~,
\end{displaymath}
with $P_{k_1,\ldots,k_\ell}^<$ denoting the sum over all the (odd) actions of permutations $\sigma$ in $(k_1,\cdots,k_\ell)$-unshuffles on $V^{\otimes (k_1+\dots+k_\ell)}$ satisfying the extra condition
\begin{displaymath}
\sigma(k_1+\dots+k_{j-1}+1)<\sigma(k_1+\dots+k_{j}+1) \quad \text{if}~k_{j-1}=k_j ~.
\end{displaymath}
\end{reminder}
{Thanks to remark \ref{rem:Phientries},} equation \eqref{eq:LinfinityMorphismsComposition}, defining the composition of $L_{\infty}$-morphisms, takes the simpler form
\begin{displaymath}
(\Phi \circ f )_m = \Phi_m \circ f_1^{\otimes m} + \left[ \sum_{\ell=1}^{m-1} \Phi_\ell \circ \left( f_1^{\otimes (\ell -1)} \otimes f_{m-\ell+1} \right) \circ P_{\ell -1, m-\ell +1} \right] ~,
\end{displaymath}
and similarly for $(\Phi \circ \widetilde{f} )_m $.
%
Observe now that one can compare the images of two gauge-related \momaps $(f)$ and $(\widetilde{f})$ by viewing them as elements in $\cV$ by virtue of the diagram (yet to be proven commutative):
\begin{displaymath}
\begin{tikzcd}[row sep=small] \mathfrak{g} \ar[r,"f"]\ar[rd,"\widetilde{f}"'] & L_{\infty}(M,\omega) \ar[r,"\cong"]& (\cA,\pi) \ar[r,hook] & L_\infty(E^n,\omega) \\ & L_{\infty}(M,\widetilde{\omega}) \ar[r,"\cong"]& (\widetilde{\cA},\widetilde{\pi}) \ar[ru,hook] & \end{tikzcd}
\end{displaymath}
For instance, fixing $\xi \in \mathfrak{g}$, one could conclude that
\begin{displaymath}
\widetilde{f}(\xi)-f(\xi) = \pair{0}{\iota_{v_\xi}B}=\mathsf{b}_1(\xi)
\end{displaymath}
and read it as an element lying in the kernel of the anchor $\rho$, \ie the map that projects onto the vector field component of any given Hamiltonian pair. In particular one could make sense of the following equation:
\begin{displaymath}
\Phi_m \circ \widetilde{f}_1^{\otimes m} - \Phi_m \circ f_1^{\otimes m} = \Phi_m \circ\left( \widetilde{f}_1^{\otimes m} - f_1^{\otimes m}\right) ~.
\end{displaymath}
According to remark \ref{rem:Phientries}, all terms in the r.h.s. of the above equation which involve more than one occurrence of $\mathsf{b}_1$ vanish for degree reasons\footnote{$\Phi_m$ is constructed out of the pairing. The cancellation of these terms happens since they involve $m-1$ contractions with fewer than $m-1$ non-trivial vector fields.}, hence
\begin{displaymath}
\begin{aligned}
\Phi_m \circ \widetilde{f}_1^{\otimes m} - \Phi_m \circ f_1^{\otimes m} =&~ \sum_{\ell=0}^{m-1}\Phi_m \circ \left(f_1^{\otimes(m-1-\ell)}\otimes \mathsf{b}_1 \otimes f_1^{\otimes \ell}\right) = \\
=&~ \Phi_m \circ \left( f_1^{\otimes (m-1)}\otimes \mathsf{b}_1 \right) \circ P_{m-1, 1} ~.
\end{aligned}
\end{displaymath}
Summing up, we conclude that
\begin{equation}\label{eq:square}
(\Phi \circ f)_m -(\Phi\circ \widetilde{f})_m = - \left[ \sum_{\ell=1}^{m} \Phi_\ell \circ \left( f_1^{\otimes(\ell -1)} \otimes \mathsf{b}_{m-\ell+1} \right) \circ P_{\ell -1, m-\ell +1} \right] ~.
\end{equation}
{The following lemma allows us to compute the summands on the right-hand side of equation \eqref{eq:square}, using equation \eqref{eq:Phim} to write $\Phi_{\ell}=\varphi_\ell \circ \pairing_-^{\ca (\ell-1)}.$ }
\begin{lemma}\label{lem:rhsbm}
{For all $\ell\ge 1$ we have}
\begin{displaymath}
\pairing_-^{\ca (\ell -1)} \circ \left( f_1^{\otimes(\ell -1)} \otimes \mathsf{b}_{m-\ell+1} \right) \circ P_{\ell -1, m-\ell +1} = \binom{m}{\ell-1} \left[\frac{(\ell-1)!}{2^{\ell-1}}\right] \cdot \mathsf{b}_m
\end{displaymath}
where $\binom{m}{\ell-1}$ denotes the binomial coefficient.
\end{lemma}
\begin{proof}
Recall that $\mathsf{b}_m = - d_m ~\left(\pairing_-^{\ca (m-1)} \ca \langle B,\cdot \rangle_-\right) \circ f_1^{\otimes m}$ {(see eq. \eqref{eq:bm})}.
{We use this in the first and last equalities below} to write the left-hand side above as
\begin{displaymath}
\mathclap{
\begin{aligned}
&(l.h.s.)~= \\
&= - d_{m-\ell+1} \cdot \pairing_-^{\ca (\ell -1)} \circ \left( \mathbb{1}_{\ell-1} \otimes \left( \pairing_-^{\ca (m -\ell)}\ca \langle B,\cdot \rangle_- \right) \right) \circ P_{\ell-1, m-\ell +1} \circ {f_1}^{\otimes m} = \\
&= - (-)^{(\ell-1)(m-\ell)} ~ d_{m-\ell+1} \cdot \\
&\phantom{=-}\cdot\left[ \pairing_-^{\ca (\ell -1)} \circ \left( \left( \pairing_-^{\ca (m -\ell)}\ca \langle B,\cdot \rangle_- \right) \otimes \mathbb{1}_{\ell-1} \right) \circ P_{m-\ell +1, \ell-1 }\right] \circ {f_1}^{\otimes m} = \\
&= - (-)^{(\ell-1)(m)} ~ (-)^{(\ell-1)(|B|-k-1-m+\ell)} ~ d_{m-\ell+1} \cdot \left( \pairing_-^{\ca (m -1)}\ca \langle B,\cdot \rangle_- \right) \circ {f_1}^{\otimes m} = \\
&= \frac{d_{m-\ell+1}}{d_m} \cdot \mathsf{b}_m ~.
\end{aligned}
}
\end{displaymath}
The sign term in the second equality comes from noting that, for any graded $b$-multilinear map $\nu_b$ on the graded vector space $V$, one has
\begin{displaymath}
\mathbb{1}_a \otimes \nu_b = (-)^{a(b+1)}~ \mathsf{C}_{(a+1)} \circ \left(\nu_b\otimes\mathbb{1}_a\right) \circ \mathsf{C}^{-1}_{(a+b)},
\end{displaymath}
where $\mathsf{C}_{(i)}^j$ denotes the odd action of the cyclic permutation on $V^{\otimes i}$ repeated $j$ times.
The sign term in the third equality comes from the sign convention in the definition of $\ca$.
\\
Finally, the claim now follows from an explicit computation of the coefficients
\begin{displaymath}
\dfrac{d_{m-\ell+1}}{d_m} = \dfrac{ 2^{m-\ell+1}}{(m-\ell+1)!}~ \dfrac{m!}{2^{m}} = \dfrac{1}{2^{\ell-1}} \dfrac{m!}{(m-\ell+1)!} = \binom{m}{\ell-1} \dfrac{(\ell-1)!}{2^{\ell-1}} ~.
\end{displaymath}
\end{proof}
{Thanks to Lemma \ref{lem:rhsbm}, we can write eq. \eqref{eq:square} as follows, for any $m\geq 2$:}
\begin{equation}\label{eq:nicesum}
(\Phi \circ f)_m -(\Phi\circ \widetilde{f})_m = - \left[ \sum_{\ell=1}^{m} \binom{m}{\ell-1} ~ \dfrac{(\ell-1)!}{2^{\ell-1}}\varphi_\ell \right] \circ \mathsf{b}_m
\end{equation}
{where $\varphi_1=1$.}
Observe now that the Bernoulli numbers $B_k$, given $B_0=1$, are completely determined by the following summation formula for all $m\ge 2$ (see for instance \cite{Agoh,Weisstein}):
\begin{equation}
\sum_{j=0}^{m-1} \binom{m}{j}B_{j}=0.
\end{equation}
Using this recurrence relation, it is clear that the r.h.s. of equation \eqref{eq:nicesum} vanishes {provided the coefficients $\varphi_{\ell}$ in eq. \eqref{eq:Phim} are such that} {$\dfrac{(\ell-1)!}{2^{\ell-1}}\varphi_{\ell}=B_{\ell-1}$ for $\ell=1,\dots,m$}.
We conclude that diagram \eqref{diag:HamiltonianpentagonDiagram} commutes {provided} the following equation holds true for any ${k}\geq 2$:
\begin{equation}\label{eq:Phik}
\Phi_k = \left(\dfrac{2^{k-1}}{(k-1)!}B_{k-1} \right) \circ \pairing_-^{\ca (k-1)} ~.
\end{equation}
{The following table displays the values of the coefficients $\varphi_k$ for low values of $k$:}
\begin{center}
\begin{tabular}{|l|llllllllll|}
\hline
$k$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
\hline
$\varphi_k = \frac{2^{k-1}}{(k-1)!}B_{k-1}$ & 1 & -1 & 1/3 & 0 & -1/45 & 0 & 2/945 & 0 & -1/4725 & 0\\
\hline
\end{tabular}
\end{center}
In {theorem \ref{thm:iso}} we proved that equation \eqref{eq:Phik} holds true {for $k=2,3,4$ when $n\le4$}.
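As an elementary cross-check of the recurrence (spelled out here for illustration only), for $m=3$ the coefficient appearing in equation \eqref{eq:nicesum} reads
\begin{displaymath}
\sum_{\ell=1}^{3} \binom{3}{\ell-1}\, \dfrac{(\ell-1)!}{2^{\ell-1}}\,\varphi_\ell
\;=\; \binom{3}{0}B_0 + \binom{3}{1}B_1 + \binom{3}{2}B_2
\;=\; 1 - \tfrac{3}{2} + \tfrac{1}{2} \;=\; 0 ~,
\end{displaymath}
consistently with the values $\varphi_1=1$, $\varphi_2=-1$ and $\varphi_3=\tfrac{1}{3}$ displayed in the table.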
\\
{This concludes the proof of theorem \ref{thm:comm}.}
\chapter{Hydrodynamical \Hcmm and linked vortices}\label{Chap:MauroPaper}
In this chapter, we discuss some applications of multisymplectic techniques in a hydrodynamical context.
The possibility of applying symplectic techniques therein ultimately comes from Arnol'd's pioneering work culminating in the geometrization of fluid mechanics (\cite{Arnold66, Abraham1978,Arn-Khe,MW83}). In particular, in this connection we may mention the paper \cite{Rasetti-Regge75}, with its symplectic reinterpretation \cite{Pe-Spe89,Pe-Spe92,Pe-Spe00}, and the general portrait depicted in \cite{Bry}.
Here we wish to apply some recently emerged concepts in multisymplectic geometry (mostly building on \cite{Callies2016,Ryvkin2016,Ryvkin2018}) and construct an explicit {\it \momap} (\cite{Callies2016}) in a hydrodynamical setting, leading to a multisymplectic interpretation of the so-called {\it higher order linking numbers}, viewed \`a la Massey (\cite{Pe-Spe02,Spe06,Hebda-Tsau12}).
The construction is generalized to cover connected compact oriented Riemannian manifolds (with a specified volume form) having vanishing intermediate de Rham groups. Moreover, a {\it covariant phase space} interpretation of the multisymplectic setting is outlined.
\par
We make clear from the outset that our constructions, together with the covariant phase space portrait, will not adhere to the standard multisymplectic approach to continuum mechanics set forth \eg in \cite{Gimmsy1, MPSW} but will instead be based on the peculiar structure of an ideal fluid, whose configuration space is the ``Lie group" of diffeomorphisms preserving a volume form, which will be directly taken as a multisymplectic form (\cite{CatIbort}).
\par
The content of this chapter is joint work with Mauro Spera which appeared in \cite{Miti2018} and \cite{Miti2019a}.
\medskip
The layout of the chapter is the following.
First, in Section \ref{Sec:MSHydroFluids}, we give an example of \momap in fluid mechanics - in the sense of Callies-Fr\'egier-Rogers-Zambon (\cite{Callies2016}) - transgressing to Brylinski's symplectic structure on loop spaces and descending, in turn, to the manifold of smooth oriented knots, see \cite{Bry,BeSpe06} and below for precise definitions.
We briefly discuss the (non-)equivariance of the above construction with respect to the group of volume-preserving diffeomorphisms of 3-space and we outline a generalization thereof in a Riemannian framework, signalling potential topological obstructions. Moreover, covariant phase space aspects will be analysed.
In Section \ref{Sec:Ham1FormLinks} we prepare the ground for the forthcoming applications by depicting a hydrodynamical multisymplectic portrait of basic knot theoretic objects, used, in Section \ref{Sec:MasseyMess}, to reinterpret the Massey higher order linking numbers in multisymplectic terms: the $1$-forms appearing in the hierarchical Massey construction (viewed, in turn, differential geometrically \`a la Chen) provide an example of {\it first integrals in involution} in a multisymplectic framework.
Appropriate background material is provided within the various sections in order to ease readability.
\begin{remark}[Hydrodynamical brackets]
In this chapter, we are slightly departing from the \emph{hydrodynamical bracket convention} employed in \cite{Miti2018}.
In that convention, the infinitesimal action pertaining to a left action is an anti-homomorphism, and the Lie bracket on the space of vector fields $\mathfrak{X}(M)=\Gamma(TM)$ is given by minus the standard one (see \cite[pag. 6]{Arn-Khe}).
In the convention employed throughout this text, the Lie bracket of vector fields is the standard one; in particular, $\mathcal{L}_X Y = [X, Y]$ for any $X,Y\in \mathfrak{X}(M)$.
\end{remark}
\section{Multisymplectic geometry and hydrodynamics of perfect fluids}\label{Sec:MSHydroFluids}
By \emph{hydrodynamics} we mean the discipline studying the motion of liquids; it can be seen as a branch of fluid mechanics and, in turn, of continuum mechanics.
In what pertains to us, we will only focus on the specific ideal model embodied by the perfect, inviscid, incompressible continuous medium (fluid) in space, \eg water in ideal conditions.
%
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.38\textwidth]{\datapath/Figure_continuum.png}
\end{center}
\caption{Spatial displacement of a continuous bulk in the Euclidean space.\\ \small(\href{https://commons.wikimedia.org/wiki/File:Displacement_of_a_continuum.svg}{Wikimedia Commons})}
\label{Fig:FluidDisplacement}
\end{figure}
%
In a rather general stance, fluid mechanics can be framed in the context of Riemannian geometry by regarding the displacement of a bulk of material at a given time as a submanifold $B$, possibly with a regular boundary $\partial B$, embedded into the Riemannian "physical" space $(M,g)$ (see Figure \ref{Fig:FluidDisplacement}).
%
Given a generic fluid system, its kinematics can be encoded by a "mass density" function $\rho$ and a "velocity field" $\mathbf{u}$, to be thought of as distributional sections in the sense of de Rham (see Remark \ref{Rem:ReminderDeRhamCurrents}).
In the case that the fluid fills the whole ambient space and turbulence phenomena are negligible, these quantities can be seen as honest smooth objects $\rho \in C^\infty(M)$ and $\mathbf{u}\in \mathfrak{X}(M) $.
\\
The dynamics can therefore be formulated through evolutionary equations involving this pair of quantities. As far as we are concerned, we will only be interested in the following example:
%
\begin{example}[Ideal fluids dynamics]\label{Ex:EulerEquations}
The dynamics of an ideal fluid occupying an open region $\Omega \subset M$, with a regular boundary $\partial \Omega$, is ruled by the following system of equations:
\begin{displaymath}\tag{EE} \label{Eq:EulerEquation}
\begin{cases}
\frac{\partial \mathbf{u}}{\partial t} + \nabla_\mathbf{u} \mathbf{u} & = -\nabla P \\[.5em]
\div (\mathbf{u}) &= 0 \qquad\qquad \textrm{in } \Omega \\[.5em]
\mathbf{u} \cdot \hat{n} &= 0 \qquad\qquad \textrm{on } \partial \Omega\\
\end{cases} ~,
\end{displaymath}
called the \emph{Euler equation}, where $\nabla$ denotes the Levi-Civita connection, $\mathbf{u}$ is the velocity field of the fluid and $P$ is a function encoding the force exerted on an infinitesimal element of the fluid by its surroundings (pressure).
\end{example}
An alternative description of continuous systems, more akin to the framework of \emph{geometric mechanics}, can be achieved as follows.
Note that, if one fixes a reference displacement $\kappa_0: B\hookrightarrow M$ of a given continuous system, any other spatial configuration can be encoded by a diffeomorphism $F= \kappa_t \circ \kappa_0^{-1} : M \to M$ (see again Figure \ref{Fig:FluidDisplacement}).
In other words, the configuration space of said continuous system, \ie the set of all its spatial displacements (not to be confused with the "physical states" of the system), is given by a suitable "Lie" subgroup $Q\subset \Diff(M)$.
As is often done, we shall gloss over analytic subtleties coming from the infinite dimensionality of the group $\Diff(M)$ (see \eg \cite{Arnold66,Arn-Khe,Eb-Mar,Kri-Mich} for more information). We just recall here that $\Diff(M)$ is a {\it regular Lie group} in the sense of Kriegl-Michor \cite[43.1]{Kri-Mich} and that its associated exponential map is not even locally surjective (a quite general phenomenon). Further details can be found in the survey \cite{Roger2005}.
\\
Two notable examples are given by the \emph{rigid body} and the \emph{incompressible fluid} filling the whole three-dimensional Euclidean space. In these cases, the configuration spaces are given respectively by the space of Euclidean isometries $\text{Iso}(\R^3)\cong \R^3 \rtimes SO(3)$ and the group of volume-preserving diffeomorphisms $\sDiff(\R^3)$.
\\
Formally speaking, physical states of a fluid mechanical system should be given by a point in the tangent bundle of $Q$ (or in the cotangent bundle when adopting the Hamiltonian framework). Namely, having fixed a spatial displacement $F\in Q$, a vector $v \in T_F Q$ can be thought of as a vector field over $M$ integrating to an infinitesimal flow sitting in $Q$.
\\
In this framework, the Euler equation given in Example \ref{Ex:EulerEquations} can be deduced as the extremality condition for a certain Lagrangian (see for instance \cite{Arn-Khe,Abraham1978,Kri-Mich}) and its solutions can be recast as geodesics pertaining to a right-invariant metric on the group $\Diff(M)$ (see \cite{Arn-Khe} or \cite[Appendix A]{Khesin2020a}).
\medskip
In a preprint which appeared in 1998 \cite{Gimmsy1}, Gotay, Isenberg, Marsden, Montgomery, Sniatycki and Yasskin introduced a recipe to associate to any classical (relativistic but not quantum) first order field theory a multisymplectic manifold called \emph{multiphase space}.
This construction mimics, in the context of continuum mechanics, the well-known prescription of an ordinary phase space starting from an associated configuration space, which is usually carried out in the case of point-like particles. (See section \ref{Sec:CPSMess} for further details.)
\\
In particular, this would also apply to any ideal fluid system. In the following, however, we will not adhere to this construction. Instead, we will focus on the other "multisymplectic" feature peculiar to ideal fluid systems; more specifically, our key point will be that their configuration space is a Lie group consisting of multisymplectic (\ie preserving a multisymplectic form) diffeomorphisms.
\subsection{The hydrodynamical Poisson bracket}\label{Sec:IdroPoisson}
In the present subsection we briefly review, for motivation and further applications, the symplectic geometrical portrait underlying the theory of hydrodynamics in its simplest instance. Namely we will focus on the free motion of an ideal fluid filling the standard "physical" space.
More concretely, let us assume the following conditions:
\begin{itemize}
\item The ambient space is given by $M=\R^3$ taken with the standard volume form $\nu = \d x\wedge \d y \wedge \d z$.
\item The configuration space is given by the "Lie" group $G = \sDiff({\mathbb R}^3)$ of volume-preserving diffeomorphisms of ${\mathbb R}^3$.
\item The dynamics is ruled by equation \eqref{Eq:EulerEquation} with $P=0$.
\end{itemize}
We denote by ${\mathfrak g}$ the ({\it infinite dimensional}) Lie subalgebra of ${\mathfrak X}({\mathbb R}^3)$ consisting of divergence-free vector fields on ${\mathbb R}^3$. These are the vector fields integrating to volume-preserving diffeomorphisms. In this sense $\mathfrak{g}$ is the ``Lie algebra" of the ``Lie group" $G$.
It is customary to tacitly assume that our fields {\it rapidly vanish at infinity}. This is justified by the physical assumption that the motion of the system is localized in a finite region of the ambient space. Such a condition ensures that convergence problems are avoided and boundary terms are absent. In brief, we assume the following:
%
\begin{equation}\label{Eq:IdrodynamicalLieAlgebra}
\begin{aligned}
\mathfrak{g} :=&~ \rm{sdiff}_0\,(\R^3)~= \\
=&~ \left\lbrace X \in \mathfrak{X}(\R^3) ~\left\vert~ \rm{div} X = 0, \textrm{\emph{ rapidly vanishing at }}\infty \right.\right\rbrace
\end{aligned}~.
\end{equation}
It is crucial to observe that equation \eqref{Eq:EulerEquation} is exclusively stated in terms of the velocity field of the system, hence the physical states of the system are completely encoded by $\mathfrak{g}$ (this can be expected a priori since the incompressibility condition implies that the density function must be constant).
%
\begin{remark}[Vorticity]\label{Rem:Vorticity}
{\it Euler evolution} can be read, among other ways, in the so-called {\it vorticity form}:
\begin{equation} \label{Eq:EulerVorticityForm}
\begin{cases}
\frac{\partial \mathbf{w}}{\partial t} &= [\mathbf{w}, \mathbf{u}] \\
\mathbf{w} &= \curl \left( \mathbf{u} \right) \\
\end{cases}
\end{equation}
We call $\mathbf{w}=\curl \mathbf{u}$ the \emph{vorticity field} pertaining to the velocity field $\mathbf{u}\in \mathfrak{g}$ (hence divergence-free).
\\
Recall also the standard result of vector analysis in $\R^3$ that the Lie bracket of two divergence-free vector fields can be expressed via the curl of their cross product
\begin{equation}\label{eq:BrackAsCurl}
[\mathbf{v},\mathbf{w}] = -\curl (\mathbf{v}\times \mathbf{w}) ~.
\end{equation}
\end{remark}
Following \eg \cite{Arn-Khe}, we shall consider the so-called {\it regular dual} ${\mathfrak g}^*$ of ${\mathfrak g}$ consisting of all $1$-forms modulo exact $1$-forms:
\begin{equation}\label{eq:gastPoisson}
{\mathfrak g}^* := \Omega^1({\mathbb R}^3)/ d \Omega^0({\mathbb R}^3)
\end{equation}
together with the standard pairing ($\omega \in {\mathfrak g}^*$, $\xi \in {\mathfrak g} $)
$$ (\omega, \xi) = \int \langle \omega (x), \xi(x) \rangle \, d^3x . $$
Observe that this is different from the full topological dual which, in principle, would also contain suitable genuine distributional elements (\ie {\it currents}, in the sense of de Rham, see \cite{dR} and remark \ref{Rem:ReminderDeRhamCurrents} below).
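Note also that the pairing is indeed well defined on the quotient \eqref{eq:gastPoisson}; spelling out the standard integration-by-parts argument, for an exact representative $\omega = \d f$ and $\xi \in \mathfrak{g}$ one has
\begin{displaymath}
(\d f, \xi) \;=\; \int \langle \d f (x), \xi(x) \rangle \, d^3x \;=\; \int \xi(f) \, d^3x \;=\; \int \div(f\,\xi) \, d^3x \;=\; 0 ~,
\end{displaymath}
where the third equality uses $\div(f\,\xi)=\xi(f)+f\,\div(\xi)$ together with $\div\,\xi=0$, and the last one follows from the divergence theorem and the rapid decay of $\xi$ at infinity.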
\begin{theorem}[\cite{Arn-Khe,Kuznetsov-Mikhailov80,MW83,Pe-Spe89,Pe-Spe92,Pe-Spe00,Spera16}]\label{Thm:HydroBracket}
The (regular) dual ${\mathfrak g}^*$ is naturally interpreted as a Poisson manifold with respect to the hydrodynamical Poisson bracket (Arnol'd--Marsden--Weinstein Lie-Poisson structure)
\begin{equation}\label{Eq:HydroBracket}
\{ F, G \} ([{\mathbf v}]) = \int_{ \R^3} \left\langle {\mathbf v}, \, \left[\frac{\delta F}{\delta {\mathbf v}}, \frac{\delta G}{\delta {\mathbf v}}\right] \right\rangle \,\d^3x
\end{equation}
with ${\mathbf v} \in {\mathfrak g}$ (velocity field), ${\mathbf w} := {\rm curl}\,{\mathbf v}$ its {\it vorticity}, and $[{\mathbf v}]$ denoting the ``gauge" class of ${\mathbf v}$: $[{\mathbf v}] = \{ {\mathbf v}+ \nabla f \}$.
\end{theorem}
The {\it Euler evolution}, given for instance by equation \eqref{Eq:EulerVorticityForm}, is naturally {\it volume-preserving} and it also preserves the {\it symplectic leaves} of ${\mathfrak g}^*$ given by the $G$-{\it coadjoint orbits} ${\mathcal O}_{[{\mathbf v}]} \equiv {\mathcal O}_{{\mathbf w}}$.
The symplectic structure on ${\mathcal O}_{\mathbf w}$, given by the celebrated Kirillov-Kostant-Souriau (KKS) construction (\cite{Kirillov01,Kostant70,Souriau70}), is explicitly given by
\begin{align*}
{\Omega_{KKS}}([{\mathbf v}]) \left(ad^*_{{\mathbf b}}([{\mathbf v}]),\, ad^*_{{\mathbf c}}([{\mathbf v}])\right) =&~ \int_{{\mathbb R}^3} \langle {\mathbf v}, [{\mathbf b},{\mathbf c}]\rangle \, d^3x ~= \\
=&~ \int_{{\mathbb R}^3} \langle {\mathbf w}, {\mathbf b} \times {\mathbf c}\rangle\, d^3x
\end{align*}
with the coadjoint action reading, explicitly, {\it up to a gradient} (not influencing calculations)
$$ ad^*_{{\mathbf b}} ({\mathbf v}) = - {\mathbf w} \times {\mathbf b} \quad(\equiv ad^*_{{\mathbf b}} ([{\mathbf v}])) ~. $$
The {\it Hamiltonian algebra} $\Lambda$ pertaining to ${\mathcal O}_{{\mathbf w}}$ consists of the so-called {\it Rasetti-Regge currents} originally introduced in \cite{Rasetti-Regge75} and further developed in \cite{Pe-Spe89,Pe-Spe92,Pe-Spe00,Spera16,Bry}:
%
\begin{definition}[Rasetti-Regge currents algebra]\label{def:RR-current}
We call \emph{Rasetti-Regge current} pertaining to a given $\mathbf{b} \in \mathfrak{g}$ the linear function $\lambda_{\mathbf{b}}$ in the topological dual of $\mathfrak{g}$ given by
\begin{displaymath}
\morphism{\lambda_{\mathbf{b}}} {\mathfrak{g}} {\R} {\mathbf{v}} {\displaystyle\int_{\R^3}\left\langle \mathbf{b} , \mathbf{v}\right\rangle d^3x \,= \int_{\R^3}\left\langle {\mathbf B}, \curl({\mathbf v}) \right\rangle d^3x}
\end{displaymath}
where $\mathbf{B}\in \mathfrak{X}(\R^3)$ is an arbitrary vector field such that $\curl({\mathbf B}) = {\mathbf b}$.
\\
We call \emph{Rasetti-Regge currents algebra} the vector space $\Lambda = \left\lbrace \lambda_{\mathbf b} \right\rbrace_{{\mathbf b}\in\mathfrak{g}}$ endowed with the skew-symmetric bilinear structure $\lbrace\cdot,\cdot\rbrace$ given by
\begin{displaymath}
\{\lambda_{\mathbf b}, \lambda_{\mathbf c} \} = \lambda_{[{\mathbf b},{\mathbf c}] } \qquad \forall ~{\mathbf b}, {\mathbf c} \in {\mathfrak g} ~.
\end{displaymath}
\end{definition}
%
\begin{theorem}[\cite{Pe-Spe92}]
\begin{enumerate}[label=(\roman*)]
\item $\Lambda$ is a Lie algebra and the map
\begin{displaymath}
\morphism{\lambda} {\mathfrak{g}} {\Lambda} {\mathbf{b}} {\lambda_{\mathbf{b}}}
\end{displaymath}
gives a {\it $G$-equivariant co-momentum map}. (Observe in particular that $\frac{\delta \lambda_{{\mathbf b}}}{\delta {\mathbf v}} = {\mathbf b}$).
\item The Euler equation \eqref{Eq:EulerEquation} can be recast in terms of the following Hamilton equation
\begin{displaymath}
\partial_t ~ \lambda_{\mathbf{b}} = - \lbrace H, \lambda_{\mathbf{b}}\rbrace
\end{displaymath}
with Hamiltonian given by $H(\mathbf{v})=\frac{1}{2}\langle \mathbf{v},\mathbf{v}\rangle$.
\end{enumerate}
\end{theorem}
%
\begin{remark}[Helicity]\label{Rem:Helicity}
We call \emph{helicity}, pertaining to an ideal fluid with velocity ${\mathbf v}$ and vorticity ${\mathbf w} = {\rm curl}\,{\mathbf v}$ in $\R^3$, the quantity
\begin{displaymath}
{\mathcal H} = \int_{\R^3} \langle {\mathbf v}, {\mathbf w}\rangle \d^3x = \int_{\R^3} v \wedge \d v
\end{displaymath}
where the last expression is the differential form counterpart (with $v = {\mathbf v}^\flat$).
\\
Helicity is preserved along the Euler evolution of the fluid \cite{Moffatt-Ricca92}:
\begin{align*}
\partial_t \mathcal{H} =&~ \partial_t \int \langle \mathbf{v},\mathbf{w}\rangle = \\
=&~ \int \langle \partial_t\mathbf{v},\mathbf{w}\rangle + \int \langle \mathbf{v},\partial_t\mathbf{w}\rangle = \\
=&~ - \lbrace H, \lambda_{\mathbf{w}}\rbrace + \int \langle\mathbf{v},[\mathbf{w},\mathbf{v}]\rangle = 0 ~.
\end{align*}
\end{remark}
\subsubsection*{Singular vortices}
The preceding portrait carries through to the {\it singular} vorticity case, in particular when the vorticity field is $\delta$-like and concentrated on a two-dimensional patch or a (possibly knotted) filament.
We would like to stress that the latter hydrodynamical configurations have been extensively studied and have recently been proved to exist both analytically \cite{Enciso2015} and experimentally \cite{Kleckner2013}.
It is crucial to observe that the support of the vorticity $\mathbf{w}$ of an ideal fluid is preserved along the motion in the sense that $\text{supp}(\mathbf{w}(0))$ and $\text{supp}(\mathbf{w}(t))$ are diffeomorphic for any time $t$.
\begin{figure}[h!]
\begin{center}
\href{https://www.nature.com/articles/nphys2560}{\includegraphics[width=0.32\textwidth]{\datapath/knottedwatervortex.jpg}}
\end{center}
\caption{Experimental realization of a knotted vortex in water. (Kleckner \& Irvine \cite{Kleckner2013})}
\label{Fig:KnottedVortexExp}
\end{figure}
When dealing with the embeddings of loops in the Euclidean space we will make use of the following nomenclature due to Brylinski:
%
\begin{notation}[Loop spaces and Brylinski manifolds]\label{Rem:BryLoopSpaces}
Let $M$ be a smooth, paracompact, oriented manifold of dimension $n$. (Throughout this chapter we will mostly assume $M=\R^3$.)
\begin{itemize}
\item We call \emph{(free) loop space} the set of smooth maps from the circle into the manifold; it will be denoted by $LM=C^{\infty}(S^1,M)$. This set can be made into an (infinite dimensional) smooth Fr\'echet manifold modelled on the topological vector space $C^\infty(S^1,\R^n)$ (which is Lindel\"of and paracompact)\cite[\S{3.1}]{Bry}. One can make sense of the tangent space to a $\gamma \in LM$ as
\begin{displaymath}
T_{\gamma}LM = C^\infty(S^{1},\gamma^{\ast}TM)= \Gamma^\infty(\gamma^\ast TM) ~.
\end{displaymath}
Observe that the Lie\footnote{ Given a manifold $M$ with a volume form $\Omega$, $\sDiff(M,\Omega)$ is a bona fide Lie group at least in the case that $M$ is compact \cite[prop. 2.1]{Molitor2007}.
In particular, $\sDiff(S^1)$ is a rather well-behaved Lie group, modelled on a Fréchet space. When $M$ is a non-compact manifold, things are more subtle. } group $\sDiff(S^1)$ of orientation-preserving diffeomorphisms of $S^1$ acts smoothly and on the right on $LM$ via precomposition. This action determines all possible re-parametrizations of a given loop.
\item We call \emph{space of mildly singular knots} the subset $\hat{X}\subset LM$ consisting of immersions $\gamma$ which induce an embedding of $S^1\setminus A$, for $A$ a finite subset of $S^1$, and such that the branches of $\gamma$ at any distinct points $x_1$ and $x_2$ of $A$ have at most finite-order tangency.
\item We call \emph{knot space} of $M$ the subset $X\subset \hat{X}$ consisting of bona fide embeddings. (When $M=\R^3$, the image of $\gamma \in X$ is called \emph{knot} and $\gamma:S^1\to \R^3$ is called a \emph{parametrization}.)
\item We call \emph{space of oriented singular knots} and \emph{space of oriented knots} in $M$ the quotients $\hat{Y} = \hat{X}/\sDiff(S^1)$ and $Y = X/\sDiff(S^1)$ respectively. Note that $\hat{Y}$ is taken with a Fr\'echet structure such that the projection $\hat{X}\to \hat{Y}$ is a $\sDiff(S^1)$-principal bundle. Thence, $Y$ is an open subset of $\hat{Y}$ such that the projection $X\to Y$ is a locally trivial $\sDiff(S^1)$-principal bundle.
\end{itemize}
\end{notation}
%
Considering the case of vorticity concentrated on a knot in $\R^3$ (a smooth embedded loop modulo orientation-preserving reparametrizations), we ultimately retrieve the symplectic structure ${\Omega_Y}$ on the {\it Brylinski manifold $Y$} (see \cite{Bry,BeSpe06} for more details):
\begin{equation}
\Omega_Y (\cdot , \cdot) ({\gamma}) := \int_{\gamma} \nu (\dot{\gamma}, \cdot, \cdot) = \int_{\gamma} \langle \dot{\gamma}, \cdot \times \cdot \rangle ~.
\end{equation}
Indeed, given a volume form $\nu$ on a 3-dimensional $M$, one gets, by {\it transgression} (see Example \ref{Ex:TransgressionOnLoops}), a $2$-form $\Omega$ on $LM$ via the formula
\begin{equation}
\Omega = \int_{S^1} ev^* (\nu ) ~,
\end{equation}
where $ev: LM \times S^1\rightarrow M$ given by $ev (\gamma , t ) := \gamma(t)$ is the evaluation map of a loop $\gamma \in LM$ at a point $t \in S^1 \equiv [0,1]/\!\sim$ (endpoint identification) and $\int_{S^1}$ denotes integration along the $S^1$-fibres (see \cite[\S 3.5]{Bry}).
More explicitly, given tangent vectors $u$ and $v$ at $\gamma$, the symplectic form reads (\cf \cite{Bry}, formulae 6--8, p. 238):
\begin{equation}
\Omega_{\gamma} (u , v) = \int_{0}^1 \nu ({\dot \gamma}(t), u(t), v(t))\, dt
\end{equation}
(where we set ${\dot \gamma} = {d\gamma \over d t}$).
The above construction carries through to $Y$. In this case, the coadjoint orbits are labelled by the equivalence types of knots (via ambient isotopies), by virtue of a result of Brylinski \cite{Bry}.
\begin{remark}
Rasetti-Regge currents were originally introduced in the context of the theory of singular vortices \cite{Rasetti-Regge75}.
Considering a velocity field configuration $\mathbf{v}\in \mathfrak{g}$ with vorticity $\mathbf{w}= \delta_\gamma$ concentrated on a closed loop $\gamma: S^1 \to \mathbb{R}^3$, one could denote the Rasetti-Regge current on the vortex filament as
\begin{displaymath}
\lambda_{\mathbf b} (\gamma) := \lambda_{\mathbf{b}} (\mathbf{v}) = \int_{\R^3} \mathbf{v} \cdot \mathbf{b} = \int_{\R^3} \mathbf{v} \cdot \curl(\mathbf{B}) = -\int_{\R^3} \mathbf{w} \cdot \mathbf{B} = - \oint_\gamma \mathbf{B}
\end{displaymath}
where $\mathbf{B}$ is a vector field such that $\mathbf{b}=\curl \mathbf{B}$ (the vector potential of $\mathbf{b}$, see remark \ref{Rem:ConcreteSolutionBiotSavart}).
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[use Hobby shortcut]
\begin{knot}[ consider self intersections=true, ignore endpoint intersections=false, flip crossing=5, ]
\strand[blue] ([closed]0,0) .. (0.5,1) .. (-0.5,2) .. (-0.65,2.5) .. (0,3) .. (0.5,2) .. (-0.5,1) .. (0,0);
\strand (-0.65,2.5) circle[x radius=0.3, y radius=0.15];
\end{knot}
\draw[blue,-stealth](-0.65,2.5)--(-0.65,3) node[anchor=north east]{$\omega$};
\draw[black,-stealth](-0.65,2.35)--(-0.3,2.35) node[anchor= west]{$v$};
\draw[black,-stealth](-0.95,2.5)--(-0.95,2.3) node[anchor= west]{};
\draw[black,-stealth](-0.35,2.5)--(-0.35,2.7) node[anchor= west]{};
\node[blue,anchor= north] at (0,0) {$\gamma$};
\end{tikzpicture}
\end{center}
\caption{Singular vorticity concentrated on a closed curve $\gamma$.}
\end{figure}
\end{remark}
\subsection{A hydrodynamical \momap}\label{Sec:IdroMoMap}
In this subsection we elaborate on the previous discussion by introducing an explicit {\it \momap} (\Hcmm) emerging in fluid dynamics.
We depart from the standard setting since we are going to consider actions of an infinite dimensional group $G = \sDiff(\R^3)$.
\\
More precisely, we consider an infinitesimal action pertaining to a subalgebra $\mathfrak{g}$ of the Lie algebra of $G$ given by equation \eqref{Eq:IdrodynamicalLieAlgebra}. We ought to notice that the study of the deformation quantization problem for this Lie algebra, carried out by Claude Roger in \cite{Roger2012}, led to the construction of an $L_\infty$-algebra anticipating the one proposed by Chris Rogers in the general multisymplectic case.
In example \ref{Ex:VolumesAreMultiSymp}, it has been noted how any oriented $n$-dimensional manifold $M$ can be regarded as multisymplectic by interpreting a volume form determining its orientation as an $(n$-$1)$-plectic form. In the case we are considering, the volume form takes the canonical expression $\nu := dx \wedge dy \wedge dz $.
Denoting by $\alpha$ the flat operator pertaining to the multisymplectic form $\nu$ (see equation \eqref{Eq:OmegaFlat}), one has, in coordinates for any $\xi = (\xi^i)$, that
$$ \alpha(\xi) = \iota_{\xi} \nu = \xi^1 dy \wedge dz + \xi^2 dz \wedge dx + \xi^3 dx \wedge dy $$
which is manifestly bijective.
It is also important to recall that the standard volume on $\R^3$ comes, in particular, from a (standard) Riemannian structure. Hence, we have at our disposal the Hodge $*$ operator which allows us to compare differential forms of different degrees.
\begin{reminder}[Hodge calculus]\label{Rem:HodgeCalculus}
For the sake of completeness, let us briefly recall the basic operators involved in the Hodge calculus (the reader can refer to \cite{Warner,Kri-Mich} for a complete account).
\\
Recall that, given an $n$-dimensional vector space $V$ endowed with an inner product $\pairing$, there exists an induced inner product on the skew-symmetric tensor space $\Lambda( V)$, denoted with the same symbol, given on homogeneous $k$-vectors by
\begin{displaymath}
\morphism{\pairing} {\Lambda^p V \otimes \Lambda^q V} {\R} {u_1\wedge\dots\wedge u_p,~ v_1\wedge\dots\wedge v_q} {\text{det}\left[ \langle u_i,v_j\rangle\right] \delta_{p, q}}
\end{displaymath}
and extended by linearity.
Considering on the same vector space the orientation specified by the volume form $\nu$ induced by the inner product, one can construct an invertible linear operator $\ast: \Lambda^k (V)\to \Lambda^{n-k}(V)$ defined completely, for any $0\leq k \leq n$, by the following property
\begin{displaymath}
\alpha \wedge (\ast \beta ) = \langle \alpha, \beta \rangle \nu \qquad \forall \alpha \in \Lambda^k (V) ~.
\end{displaymath}
In particular $**\eval_{\Lambda^k(V)} = (-)^{k(n-k)} \id$.
\\
Given a Riemannian manifold $(M,g)$, oriented with the corresponding Riemannian volume, such a construction can be replicated on the tangent and cotangent spaces at any point $p\in M$. This yields a well-defined isomorphism called \emph{Hodge dual}, or "star",
\begin{displaymath}
\morphism{\ast} {\Omega^k(M)} {\Omega^{n-k}(M)} {\sigma} {\ast \sigma}
\end{displaymath}
uniquely defined by the following property:
\begin{displaymath}
\omega \wedge \ast \sigma = \langle \omega, \sigma \rangle \nu \qquad \forall \omega \in \Omega^k(M) ~.
\end{displaymath}
Here $\langle \omega, \sigma \rangle$ denotes the smooth function on $M$ given point-wise at $p\in M$ by the inner product $\langle \omega_p, \sigma_p \rangle$. When $M$ is compact, the integration of the previous expression defines an inner product on $\Omega^k(M)$.
\\
Through the Hodge dual it is possible to introduce the following differential operators:
\begin{displaymath}
\begin{aligned}
\delta :=~ (-)^{n(k+1)+1}\ast\circ\d\circ\ast \quad & : \Omega^k(M) \to \Omega^{k-1}(M) \\
\Delta :=~ \delta\circ\d + \d\circ\delta \quad & : \Omega^k(M) \to \Omega^{k}(M) \\
\grad :=~ \sharp \circ \d \quad & : C^\infty(M) \to \mathfrak{X}(M) \\
\div :=~ -\delta \circ \flat = \ast\circ \d\circ \ast \circ \flat \quad & : \mathfrak{X}(M) \to C^\infty(M) \\
\curl :=~ \sharp \circ \ast \circ \d \circ \flat \quad & : \mathfrak{X}(M) \to \mathfrak{X}^{n-2}(M) ~,
\end{aligned}
\end{displaymath}
where $\flat$ and $\sharp$ denote the Riemannian "musical isomorphisms" associated to the metric. When $M=\R^3$ these operators assume the more familiar expressions involving the gradient operator $\nabla$.
%
\end{reminder}
Upon introducing the Hodge $*$ relative to $(\R^3,\nu)$, one can easily recast the operator $\alpha$ as the invertible linear map
\begin{equation}\label{eq:alphacontractionmap}
\morphism{\alpha = (\ast \circ \flat)} {\mathfrak{X}(\R^3)} {\Omega^1(\R^3)} {\xi} {\ast(\xi^{\flat})} ~.
\end{equation}
Then we have, for $\xi \in \mathfrak {g}$ (via Cartan's formula)
$$ 0 = {\mathcal L}_{\xi} \nu = d \iota_{\xi} \nu + \iota_{\xi} d \nu = d \iota_{\xi} \nu = \div(\xi)~\nu $$
and thus we have an isomorphism $\mathfrak {g} \cong Z^2(\R^3)$ (closed $2$-forms on ${\mathbb R}^3$). This will be important in the sequel.
The above also expresses the fact that $\nu$ is a {\it strictly conserved} $3$-form (see definition \ref{Def:conservedQuantities}).
\\
We are now ready to give the promised example of {\rm \momap} pertaining to the action of $\sDiff_0$ on $\R^3$.
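Before doing so, it may be worth spelling out the above identifications in the standard coordinates of $\R^3$ (an elementary verification): for $\xi = \xi^1\partial_x + \xi^2\partial_y + \xi^3\partial_z \in \mathfrak{X}(\R^3)$ one has
\begin{displaymath}
\begin{aligned}
\ast(\xi^{\flat}) &= \xi^1\, \d y \wedge \d z + \xi^2\, \d z \wedge \d x + \xi^3\, \d x \wedge \d y = \iota_{\xi}\nu~, \\
(\sharp\circ\ast\circ\d\circ\flat)(\xi) &= \left(\partial_y \xi^3 - \partial_z \xi^2\right)\partial_x + \left(\partial_z \xi^1 - \partial_x \xi^3\right)\partial_y + \left(\partial_x \xi^2 - \partial_y \xi^1\right)\partial_z~,
\end{aligned}
\end{displaymath}
so that $\alpha = \ast\circ\flat$ recovers the coordinate expression given above and $\curl = \sharp\circ\ast\circ\d\circ\flat$ reduces to the classical curl operator.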
\begin{theorem}[Explicit \Hcmm for $\sDiff_0 \circlearrowright (\R^3,\nu)$]\label{Thm:HydrodynamicalComoment}
Consider the infinitesimal action $v:\mathfrak{g}\to \mathfrak{X}(\mathbb{R}^3)$ concretely given by the inclusion of divergence-free fields in the set of all vector fields.
The previous action admits a \momap $(f)$ with components $f_j: \Lambda^j {\mathfrak g} \to \Omega^{2-j} (\R^3)$ given by
\begin{displaymath}
\begin{aligned}
f_1 =&~ \flat\circ {\rm curl}^{-1} \\
f_2 =&~ \Delta^{-1} \circ \delta~ \circ\mu_2
\end{aligned}
\end{displaymath}
where $\mu_2(p):= f_{1} (\partial p) + \iota(v_p) \omega$ is the term introduced in remark \ref{Rem:TermMuByMauro}.
The inverses of the vector calculus operators involved, $\curl$ and the Laplacian $\Delta$ (see reminder \ref{Rem:HodgeCalculus}), are to be interpreted as the corresponding Green operators; hence they are not unique.
In remark \ref{Rem:ConcreteSolutionBiotSavart} we provide a more explicit expression for the components $f_k$ in the standard coordinates of $\R^3$.
\end{theorem}
\begin{proof}
In this case, the observables are given by a graded vector space concentrated in degrees $-1$ and $0$,
$$L= \Omega^1_{\textrm{ham}}(\mathbb{R}^3)\oplus\left(\Omega^0(\mathbb{R}^3)[1]\right)~,$$
hence a \momap consists of a pair of functions:
\begin{align*}
f_1 &\colon \mathfrak{g} \rightarrow \Omega^1_{\textrm{ham}}(\mathbb{R}^3) \\
f_2 &\colon \mathfrak{g}\wedge\mathfrak{g} \rightarrow C^\infty(\mathbb{R}^3)
\end{align*}
sitting in the following (non-commutative) diagram:
\begin{displaymath}
\begin{tikzcd}[column sep=large, execute at end picture={ \node[label=below:{\tiny CE complex},dashsquare=(L1)(L2)]{}; \node[label=below:{\tiny Observables},dashsquare=(R1)(R2)]{}; }]
& & \Omega^3(M) \ar[ddd,leftrightarrow,green!60!black,bend left=60,"\ast"]\\
& \mathfrak{X}(M) \ar[r,red,"\alpha"] \ar[dr,blue,"\flat",shift left=1ex]\ar[dr,blue,leftarrow,"\sharp"',shift right=1ex]& \Omega^2(M) \ar[u,"d"'] \ar[d,leftrightarrow,green!60!black,bend left=60,"\ast"] \\
|[alias=L1]| \mathfrak{g} \ar[ru,hookrightarrow,"v"] \ar[rr,purple,"f_1"] & & |[alias=R1]| \Omega^1_{(ham)}(M) \ar[u,"d"'] \\
|[alias=L2]| \mathfrak{g} \wedge \mathfrak{g} \ar[u,"\partial"]\ar[rr,purple,"f_2"] & \color{purple}\underbrace{\qquad\qquad}_{\Hcmm} & |[alias=R2]| \Omega^0(M) \ar[u,"d"']
\end{tikzcd}
\end{displaymath}
and satisfying the following system of three equations:
\begin{subequations}
\begin{equation}\label{eq_1Idro}
\d f_1(\xi) = -\iota_\xi \nu = -\alpha(\xi)
\end{equation}
\begin{equation}\label{eq_2Idro}
\d f_2(\xi_1 \wedge \xi_2) = f_1\left([\xi_1,\xi_2]\right) - \iota_{\xi_2}\iota_{\xi_1} \nu := \mu_2(\xi_1,\xi_2)
\end{equation}
\begin{equation}\label{eq_3Idro}
f_2\left(\partial \xi_1 \wedge \xi_2 \wedge \xi_3 \right) = \iota_{\xi_3}\iota_{\xi_2}\iota_{\xi_1} \nu
\end{equation}
\end{subequations}
(Compare with Lemma \ref{Lem:ExplicitHCCM} fixing, in the case $k=2$, $\xi_i \in {\mathfrak g}$ ($i=1,2$), $p = \xi_1 \wedge \xi_2$, and $\partial p = -[\xi_1, \xi_2]$.)
\\
The first two equations imply that the components of the \momap are primitives of the closed forms on the right-hand side.
Closedness of the $1$-form $\mu_2(\xi_1,\xi_2) := f_1([\xi_1,\xi_2]) - \iota_{\xi_1 \wedge \xi_2} \nu$ can be checked using lemma \ref{lemma:multicartan}, hence
$$ df_1 ([\xi_1, \xi_2]) = d (\iota_{\xi_1 \wedge \xi_2} \nu) = -\iota_{[\xi_1,\xi_2]} \nu $$
where $\iota_{\xi_1 \wedge \xi_2} \nu = \nu(\xi_1,\xi_2, \cdot)$.
According to the Poincar\'e Lemma, such primitives can always be found but are in principle determined only up to a constant.
Fixing $\mathbf{b} \in \mathfrak{g}$, as noted after remark \ref{Rem:HodgeCalculus}, equation \eqref{eq_1Idro} can be easily recast as
\begin{displaymath}
\d f_1 (\mathbf{b}) = - (\ast \circ \flat) (\mathbf{b}) ~.
\end{displaymath}
Inverting the Hodge and flat operators one gets
\begin{displaymath}
- \left(~\sharp \circ \ast \circ \d \circ f_1\right)(\mathbf{b}) = \mathbf{b} ~.
\end{displaymath}
Hence, introducing the vector field $\mathbf{A}= -~\sharp \left(f_1(\mathbf{b})\right)$, the first component of the \momap can be given, modulo a constant $c_1(\mathbf{b})$, by solving the following equation on smooth vector fields
\begin{equation}\label{Eq:MagnetoEq}
\curl \mathbf{A} = \mathbf{b}
\end{equation}
which is the well-known equation of magnetostatics; see remark \ref{Rem:ConcreteSolutionBiotSavart} for the explicit solution.
Applying the $\delta$ operator, equation \eqref{eq_2Idro} yields that
\begin{equation}\label{Eq:PoissonEq}
\Delta (f_2(\xi_1\wedge\xi_2)) = \delta~ \mu_2(\xi_1,\xi_2)
\end{equation}
hence $f_2$ can be expressed, modulo a constant $c_2(\xi_1,\xi_2)$, as a solution of a Poisson equation in $\R^3$ with source $\delta~ \mu_2(\xi_1,\xi_2)$.
In order to prove that we have a bona fide \momap, we must have, in particular, for $q = \xi_1 \wedge \xi_2 \wedge \xi_3$, the explicit formula
\begin{equation}\label{eq:condition3_hccm}
f_2(\partial q) = \nu(\xi_1, \xi_2, \xi_3)
\end{equation}
which is a priori true up to a constant $c_3(\xi_1, \xi_2, \xi_3)$ by virtue of (\ref{eq_1Idro}) and \cite[Lemma 9.2]{Callies2016}.
However, the constant is in fact zero since $\nu(\xi_1, \xi_2, \xi_3)$ vanishes at infinity and the same is true for $f_2(\partial q)$ upon solving the related Poisson equation
$$ \Delta f_2(\partial q) = \Delta \nu(\xi_1, \xi_2, \xi_3) $$
(obtained via a straightforward computation; notice that we use the Riemannian Laplacian, which is {\it minus} the standard one).
\\
An alternative derivation uses $x$-independence of the class $[c_x]$ defined in remark \ref{rk:cp_obsruction}. Upon taking $S^3 = {\mathbb R}^3 \cup \{\infty \}$, we have $c_{\infty } = 0$, hence $c_x = \delta_{CE} (b)$, with
$$ b = -\int_{\gamma_{\infty}} \iota (v_1\wedge v_2) \nu $$
($\gamma_{\infty}$ being a path connecting $ x $ to ${\infty}$, compare with the proof of \cite[Prop. 9.1]{Callies2016}. In this case, the expression is meaningful in view of the assumed decay at infinity of our objects). This is equivalent to the previous equation (\ref{eq:condition3_hccm}).
\end{proof}
We may also naturally ask the question of whether the above \momap is equivariant.
\begin{proposition}
The map $(f)$ is {\rm not $G$-equivariant} in the sense of definition \ref{Def:EquivariantMomap}.
\end{proposition}
\begin{proof}
To ascertain the infinitesimal {\it G-equivariance} of the above map $(f)$, one should check the validity of the formula
$$ {\mathcal L}_{\xi} f_1({\mathbf b}) = f_1 ([\xi, {\mathbf b}]) \qquad \forall \xi, {\mathbf b} \in {\mathfrak g} ~. $$
Considering in particular $\xi = {\mathbf b}$, the above equation easily yields that
\begin{displaymath}
f_1\left( \cancel{[\xi, \mathbf{b}]} \right) = {\mathcal L}_{\xi} f_1({\mathbf{b}}) = -{\mathcal L}_{\xi} \mathbf{A}^\flat = - d \iota_\xi \mathbf{A}^\flat = - d \langle \mathbf{A}, \mathbf{b} \rangle_g \qquad \forall\, \mathbf{b} \in \mathfrak{g}
\end{displaymath}
where $\mathbf{A}$ is defined as in the previous proof.
Hence $\langle \mathbf{A}, \mathbf{b} \rangle_g$ ought to be a constant vanishing at infinity, \ie must be equal to zero.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.32\textwidth]{\datapath/VortexLink}
\end{center}
\caption{Linked flux tubes.}
\label{Fig:VortexLink}
\end{figure}
Consider now a solenoidal field $\mathbf{b} = \curl(\mathbf{A})$ supported on a domain consisting of two disjoint, unknotted but linked, closed \emph{flux tubes} ($\text{supp}(\mathbf{b}) = \Gamma_1 \cup \Gamma_2$, see figure \ref{Fig:VortexLink}). One gets
\begin{displaymath}
\int \langle \mathbf{A}, \mathbf{b} \rangle_g = 2 n ~\Phi_1 \Phi_2 \qquad \text{with} \quad \Phi_i = \int_{S_i} \mathbf{n} \cdot \mathbf{b}\, d\sigma
\end{displaymath}
where $n$ (the \emph{Gauss linking number}, see theorem \ref{Thm:Moffatt-Ricca} below) is an integer different from zero, hence we get a contradiction.
%
\end{proof}
\begin{remark}
Interpreting $\mathbf{A}$ in the preceding proof as the velocity field of the fluid and $\mathbf{b}$ as the associated vorticity, the configuration given in figure \ref{Fig:VortexLink} exemplifies a configuration with non-zero {\it helicity}. See \cite{Moffatt-Ricca92,BeSpe06,Spe06} and below for further elucidation of this train of concepts.
Notice that the argument does not depend on the choice of the vector field $\mathbf{A}$ pertaining to $\mathbf{b}$.
The lack of $G$-equivariance is not surprising, since our construction involves Riemannian geometric features.
\end{remark}
\begin{remark}
Observe that the condition for the fields to vanish at infinity plays a crucial role when proving that the solution fulfils equation \eqref{eq_3Idro}.
Relaxing this condition gives rise to an obstruction to the existence of a \momap; however, the components $f_1$ and $f_2$ constructed above constitute a \emph{weak \momap} (see \cite[Def. 3.11]{Herman2017}\cite[Def. 3.8]{Herman2018} or \cite[Def. 1.16]{Mammadova2020}).
\end{remark}
\begin{remark}\label{Rem:ConcreteSolutionBiotSavart}
To give a completely explicit expression of the components $f_1$ and $f_2$ in theorem \ref{Thm:HydrodynamicalComoment}, one is required to solve two differential problems. Notice that the solution would not be unique, thus one can say that theorem \ref{Thm:HydrodynamicalComoment} yields a family of \momaps pertaining to the same action of the Lie algebra $\mathfrak{g}_0$ on $\R^3$.
\\
Given a vector field $\mathbf{b} \in \mathfrak{g}$ and defining $\mathbf{A}^\flat := - f_1(\mathbf{b})$ for a certain vector field $\mathbf{A}\in \mathfrak{X}(\R^3)$, computing the first component requires solving the differential equation \eqref{Eq:MagnetoEq}.
Such an equation is the well-known equation of magnetostatics, which admits a solution given by the so-called Biot-Savart law
\begin{displaymath}
\tag{Biot-Savart law}
\mathbf{A}(r) = \int\frac{\mathbf{b}(r')\times(\vec{r}-\vec{r}')}{|\vec{r}-\vec{r}'|^3}\d r'
\end{displaymath}
defined up to a gradient \emph{(gauge freedom)}. It is customary to choose $\div(\mathbf{A})=0$ (Coulomb gauge).
When $\mathbf{b}$ denotes a magnetic induction field, $\mathbf{A}$ is interpreted as the magnetic vector potential.
In the context of hydrodynamics, $\mathbf{A}$ can be interpreted as a \emph{velocity field} and $\mathbf{b}$ as the corresponding vorticity.
\\
On the other hand, the computation of the second component requires solving the Poisson equation \eqref{Eq:PoissonEq} with source given by $\rho:=-\delta\, \mu_2(\xi_1,\xi_2)$. A general solution is given by
\begin{displaymath}
\varphi(r) = \int\frac{-\rho(r')}{|\vec{r}-\vec{r}'|}\d r' ~.
\end{displaymath}
Observe furthermore that $\delta\,\mu_2(\xi_1,\xi_2)=-\rho$ can be explicitly computed:
\begin{displaymath}
\begin{aligned}
-\rho ~=~ \delta~ \mu_2(\xi_1,\xi_2) \equal{}&~ \delta \left(f_1([\xi_1,\xi_2]) - \nu(\xi_1,\xi_2,\cdot)\right)= \\
\equal{}&~ \delta \circ \flat \left(\curl^{-1}([\xi_1,\xi_2]) - \xi_1 \times \xi_2\right)= \\
\equal{}&~ -\div( \curl^{-1}([\xi_1,\xi_2]) - \xi_1 \times \xi_2) = \\
\equal{\eqref{eq:BrackAsCurl}}&~ 2~ \div(\xi_1 \times \xi_2) = \\
\equal{}&~ 2\left(\langle \xi_2, \curl \xi_1\rangle_g - \langle \xi_1, \curl \xi_2 \rangle_g\right)
\end{aligned}
\end{displaymath}
where $\times$ denotes the vector (cross) product on $\R^3$ and the last equality comes from another well-known result in vector analysis.
\\
For further information see \cite[Appendix A1]{Miti2019a}.
\end{remark}
\subsubsection*{Hydrodynamical reinterpretation}
Up to now, the main connection between the construction given in theorem \ref{Thm:HydrodynamicalComoment} and hydrodynamics consists in the fact that the considered Lie algebra $\mathfrak{g}$ can be interpreted as the phase space of a freely moving ideal fluid (see Subsection \ref{Sec:IdroPoisson}).
Another important reason why the above construction is relevant in hydrodynamics is that the above \momap can be related, after transgression along the evaluation map ${\rm ev}: L{\mathbb R}^3 \times S^1 \ni (\gamma, t) \mapsto \gamma(t) \in {\mathbb R}^3$, to the hydrodynamical co-momentum map of Arnol'd and Marsden-Weinstein, defined on the Brylinski (infinite-dimensional) manifold $Y$ of oriented knots \cite{MW83}.
\begin{theorem}
The above \Hcmm for $G\circlearrowright(\mathbb{R}^3,\nu)$ induces an ordinary (\ie "symplectic") co-moment map for $G\circlearrowright (L{\mathbb R}^3,\nu^{\ell})$, where $\nu^\ell$ denotes the transgression of the volume form along loops (see example \ref{Ex:TransgressionOnLoops}).
Namely $(f) \colon \mathfrak{g} \to L_{\infty}(\mathbb{R}^3,\nu)$, given in theorem \ref{Thm:HydrodynamicalComoment}, transgresses to
\begin{displaymath}
\begin{tikzcd}[column sep= small,row sep=0ex]
\lambda \colon& \mathfrak{g} \arrow[r]& C^\infty(L{\mathbb R}^3) \\
& {\mathbf b} \arrow[r, mapsto] & \displaystyle \biggr( \gamma \mapsto \lambda_{\mathbf b}(\gamma) = - \oint_{\gamma} f_1({\mathbf b}) \biggr)
\end{tikzcd}
\end{displaymath}
that is a moment map for the induced action of $G$ on the pre-symplectic loop space $(L{\mathbb R}^3,\nu^{\ell})$ (a smooth space in the sense of Brylinski, see remark \ref{Rem:BryLoopSpaces}).
\end{theorem}
\begin{proof}
Observe that the relevant piece of the \momap is $f_1$ which, under transgression, becomes
$$ \mu_{\mathbf b} = - \int_{\gamma} B = -\lambda_{\mathbf b} $$
\ie, up to sign, the {\it Rasetti-Regge current} $\lambda_{\mathbf b}$ pertaining to ${\mathbf b} \in {\mathfrak g}$ (see definition \ref{def:RR-current}).
In particular, $\mu_{\mathbf b}$ is independent of the choice of $B$.
This is in accordance with the general result in \cite{Ryvkin2016} asserting that, roughly speaking, \momaps transgress to \momaps on loop (and even mapping) spaces.
{\it Actually, the ansatz for the $f_1$ term was precisely motivated by this phenomenon}.
\end{proof}
Note that it is natural to define a {\it "Poisson" bracket} on momenta, \ie on the image of $f_1$, via the expression:
\begin{equation}\label{Eq:MSPB}
\{ f_1({\mathbf b}), f_1({\mathbf c}) \} (\cdot):= \iota_{\mathbf c} \iota_{\mathbf b} \nu (\cdot)= \nu({\mathbf b}, {\mathbf c}, \cdot)
\end{equation}
satisfying the formula
\begin{equation}\label{eq:bracketsformula}
\{ f_1({\mathbf b}), f_1({\mathbf c}) \} - f_1([{\mathbf b},{\mathbf c}]) = -df_2 ({\mathbf b} \wedge {\mathbf c})~,
\end{equation}
which one gets after rewriting equation \eqref{eq_2Idro}.
This construction will be employed below and in section \ref{Sec:MasseyMess}.
\subsection{A generalization to Riemannian manifolds}
We ought to notice that a hydrodynamically flavoured \momap can be similarly constructed for an $(n+1)$-dimensional, connected, compact, orientable Riemannian manifold $(M,g)$, upon taking a Riemannian volume form $\nu$ as a multisymplectic form and again the group $G$ of volume-preserving diffeomorphisms as symmetry group.
\begin{theorem}\label{Thm:RiemannGeneralization}
Let $(M,g)$ be a connected compact oriented Riemannian manifold of dimension $n+1$, $n\geq1$, with multisymplectic form $\nu$ given by its Riemannian volume form, and such that the {\it de Rham} cohomology groups $H_{dR}^{k}(M)$ vanish for $k=1,2,\dots, n-1$ (one has necessarily $H_{dR}^{0} (M) = H_{dR}^{n+1} (M) = {\mathbb R}$).
Let ${\mathfrak g}_0$ be the Lie subalgebra of ${\mathfrak g}$ consisting of divergence-free vector fields vanishing at a point $x_0 \in M$.
\\
Then there exists an associated family\footnote{They are a family in the sense that the "inverse" of $\Delta$ in equation \eqref{eq:RiemannCase} is a Green operator, hence it does not yield a unique solution} of \momaps for the infinitesimal action of $\mathfrak{g}_0$ on $(M,\nu)$ given by the following compact formulae:
\begin{equation}\label{eq:RiemannCase}
f_1(\xi) := -\Delta^{-1} \delta (\iota_{\xi} \nu); \quad\quad f_k = \Delta^{-1} \delta \mu_k, \qquad k=2\dots n ~,
\end{equation}
where $\Delta$ denotes the Riemannian Laplace operator (see reminder \ref{Rem:HodgeCalculus}) and $\Delta^{-1}$ denotes the corresponding Green operator.
\end{theorem}
\begin{proof}
We already noticed, in the proof of theorem \ref{Thm:HydrodynamicalComoment}, how the defining equations of a \momap (see lemma \ref{Lem:ExplicitHCCM}) trigger a recursive construction starting from $f_1$, up to topological obstructions.
Namely, we have a sequence of closed forms, which must be actually exact, together with the constraint $ f_n(\partial q) = (-1)^{\frac{(n+1)(n+2)}{2}}\nu(\xi_1,\dots,\xi_{n+1})$, with $q= \xi_1 \wedge \dots \wedge \xi_{n+1}$.
%
As before, the Riemannian structure allows us to connect elements of the Rogers $L_\infty$-algebra in different degrees:
\begin{displaymath}
\begin{tikzcd}[column sep=huge]
& & \Omega^{n+1}(M) \ar[dddddd,leftrightarrow,green,bend left=60,"\ast"] \\
& \mathfrak{X}(M) \ar[r,red,"\alpha"] \ar[ddddr,blue,"\flat",shift left=1ex]\ar[ddddr,blue,leftarrow,"\sharp"',shift right=1ex] & \Omega^{n}(M) \ar[u,"d"'] \ar[dddd,leftrightarrow,green,bend left=60,"\ast"] \\
\mathfrak{g} \ar[ru,hookrightarrow,"v"] \ar[rr,purple,"f_1",crossing over] & & \Omega^{n-1}_{(ham)}(M) \ar[u,"d"'] \ar[dd,leftrightarrow,green,bend left=60,"\ast"] \\
\vdots \ar[u,"\partial"] & & \vdots \ar[u,"d"'] \\
\bigwedge^{n-3}\mathfrak{g} \ar[u,"\partial"]\ar[rr,purple,"f_{n-3}",crossing over] & & \Omega^2(M) \ar[u,"d"'] \\
\bigwedge^{n-2}\mathfrak{g} \ar[u,"\partial"]\ar[rr,purple,"f_{n-2}",crossing over] & & \Omega^1(M) \ar[u,"d"'] \\
\bigwedge^{n-1}\mathfrak{g} \ar[u,"\partial"]\ar[rr,purple,"f_{n-1}",crossing over] & & \Omega^0(M) \ar[u,"d"']
\end{tikzcd}
\end{displaymath}
In the present case, a natural candidate for the $(n$-$1)$-form $f_1$ can be readily manufactured via Hodge theory (see remark \ref{Rem:HodgeCalculus}):
\begin{equation}\label{eq:f1_hcmm_riemann}
f_1(\xi) := -\Delta^{-1} \delta (\iota_{\xi} \nu)
\end{equation}
(the direct generalization of the preceding case) after imposing $\delta f_1({\xi}) = 0$ (the analogue of the Coulomb gauge condition), provided one can safely invert the Hodge Laplacian $\Delta = d\delta + \delta d$.
According to the Hodge theorem (see \cite[\S 6]{Warner}) $H_{dR}^k(M) \cong \ker(\Delta\eval_{\Omega^k(M)})$, hence the latter holds true for any $1 \leq k \leq n-1$.
\\
The topological assumption of vanishing of all the middle cohomology groups ensures that the entire procedure goes through unimpeded due to the formula
$$ df_k (\xi_1 \wedge\dots \wedge \xi_k) = \mu_k (\xi_1 \wedge\dots \wedge \xi_k), \qquad \qquad k=2,3,\dots, n ~, $$
where $\mu_k$ is the auxiliary form defined in remark \ref{Rem:TermMuByMauro} (see equation \eqref{eq:MauroMukForm}).
Hence, one can compactly state that:
\begin{equation}\label{eq:fk_hcmm_riemann}
f_k = \Delta^{-1} \delta \mu_k, \qquad k=2\dots n,
\end{equation}
since
\begin{displaymath}
\d ( \Delta f_k - \delta \mu_k) = \Delta ( \d f_k - \mu_k) = 0 ~.
\end{displaymath}
One has finally to check that
$$ f_n (\partial (\xi_1 \wedge\dots \wedge \xi_{n+1})) = -\varsigma(n+1) \iota (\xi_1 \wedge\dots \wedge \xi_{n+1})\nu $$
but this is guaranteed by the vanishing of all fields at $x_0$, hence $c_{x_0} = 0$ and the obstruction class $[c_x]$ vanishes (\cf remark \ref{rk:cp_obsruction}).
%
Note that one can also alter the above solution, given by equations \eqref{eq:f1_hcmm_riemann} and \eqref{eq:fk_hcmm_riemann}, by addition of an exact form. This freedom explains why the claimed solution has to be thought of as a family of \momaps.
\end{proof}
\begin{remark}
We notice that the above result holds, in particular, for {\it homology spheres} such as, for instance, the celebrated Poincar\'e dodecahedral space \cite{Dror1973}.
We point out that the case in which the intermediate homology groups are at most torsion (hence not detectable by de Rham techniques) is also encompassed: this is \eg the case of {\it lens spaces}.
Notice that $G$-equivariance cannot be expected a priori. Also notice that one could restrict to the natural symmetry group provided by the isometries of $(M,g)$.
See \eg \cite{zbMATH06448534} for a general discussion of topological constraints on the existence and uniqueness of \momaps.
\end{remark}
\begin{remark}[Relation with weak homotopy moment maps]
Recall that any \momap induces a {\it weak homotopy moment map} (see \eg \cite{Herman2018,Mammadova2020}) by restriction to cycles in the Chevalley-Eilenberg chain complex (also known as \emph{Lie kernels}). Namely, if in equation \eqref{eq:fk_hcmm} we set $\partial p = 0$, we get
\begin{equation}
d f_k (p) + \varsigma(k) \iota(v_p) \omega = 0
\end{equation}
for $k=1,\dots,n+1$, which is the very property defining a weak homotopy moment map.
For the time being we just notice that, in theorems \ref{Thm:HydrodynamicalComoment} and \ref{Thm:RiemannGeneralization} above, the vanishing of all fields at $x_0$ (that is, at infinity in the first case) is crucial for the existence of a \momap.
Upon relaxing such condition {\it we get a weak homotopy moment map which, in general, is not a \momap} since, for $k=n+1$, the value of the ensuing constant is not specified.
\end{remark}
\subsection{Covariant phase space aspects}\label{Sec:CPSMess}
We are now going to propose a multisymplectic interpretation of the hydrodynamical bracket (see theorem \ref{Thm:HydroBracket}), which ties in neatly with the topics discussed in previous sections, via {\it covariant phase space} ideas but without literally following the standard recipe, as we shall see shortly.
Let us briefly recall first the standard recipe for associating a multisymplectic manifold to any classical field system (see \cite{KS,Gimmsy1, Forger2005,Zuckerman87,Crnkovic,Ryvkin2018}).
Assume that one can encode all the configurations of a given field-theoretic (continuous) mechanical system as sections of a \emph{"configuration"} bundle $E\to M$ on a \emph{"parameter"} space $M$ (usually a globally hyperbolic spacetime) and assume that the dynamics of the system is governed by a first-order differential operator on $\Gamma(E)$ coming from a certain Lagrangian density $\mathcal{L}$.
Then, there is an associated multisymplectic manifold, called \emph{(Lagrangian) multiphase space}, given by the first jet space $J^1 E$ of the configuration bundle together with an $m$-plectic form $\omega_{\mathcal{L}}$ depending on the Lagrangian density, where $m$ is given by the dimension of the parameter space $M$. (See \cite[Eq. (3B.2)]{Gimmsy1}.)
\\
For instance, in the case of a classical (\ie non-relativistic) continuous medium in Galilean spacetime, one would have a tower of trivial bundles
\begin{displaymath}
\begin{tikzcd}[column sep = small,row sep =small]
M \ar[r,phantom,"\cong"] & T \times \Sigma \ar[r,phantom,"\cong"] & \R^4 \ar[r,phantom,"\sim"] &(t,x^i) \\
E \ar[u] \ar[r,phantom,"\cong"] & M \times \Sigma \ar[r,phantom,"\cong"] & \R^{4+3} \ar[r,phantom,"\sim"] &(t,x^i;y^j) \\
J^1 E \ar[u] \ar[r,phantom,"\cong"] & E \times \Sigma' \ar[r,phantom,"\cong"] & \R^{7+4\cdot 3} \ar[r,phantom,"\sim"] & (t,x^i,y^j; v^j_t, v^j_{, i})
\end{tikzcd}
\end{displaymath}
where $T$ denotes the "absolute" time line, $\Sigma$ denotes the physical space and $\Sigma'$ is the space of generalized velocities.
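Counting the coordinates listed in the last row of the tower above (a simple bookkeeping step, recorded here for convenience), one finds
\begin{displaymath}
\dim J^1 E = \underbrace{1}_{t} + \underbrace{3}_{x^i} + \underbrace{3}_{y^j} + \underbrace{3}_{v^j_t} + \underbrace{3\cdot 3}_{v^j_{,\, i}} = 19~, \qquad \dim M = 4~.
\end{displaymath}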
Hence $(J^1 E, \omega_{\mathcal{L}})$ is a $19$-dimensional $4$-plectic manifold.\footnote{In the particular case of an ideal fluid, the Lagrangian density would be given, employing the Einstein summation convention, by $\mathcal{L}= \frac{1}{2} g_{ij} v^i_t ~ v^j_t$.}
Recall also the notion of \emph{covariant phase space} for a classical field theory, given as above, defined as the (usually $\infty$-dimensional) submanifold $\text{Sol}\subset\Gamma(E)$ of all configurations consisting only of the solutions of the equations of motion. It is well-known how to induce a presymplectic structure on $\text{Sol}$ from the multisymplectic structure $\omega_{\mathcal{L}}$ on $J^1 E$ (see for example \cite{Forger2005,Helein2011c}). Namely, one has that, for any $\sigma \in \text{Sol}$ and any (vertical) variations $\frac{\delta\phi}{\delta x^i}, \frac{\delta\phi'}{\delta x^i}\in T_{\sigma}\text{Sol}$ of the solution $\sigma$, the $2$-form given by
\begin{displaymath}
\Omega_\sigma \left(\frac{\delta\phi}{\delta x^i}, \frac{\delta\phi'}{\delta x^i}\right) = \int_{\Gamma} (j^1 \sigma)^\ast \left( \iota_{\frac{\delta\phi}{\delta x^i}}\iota_{\frac{\delta\phi'}{\delta x^i}} \omega_{\mathcal{L}}\right)
\end{displaymath}
is closed. Now we want to show how the Poisson structure given by equation \eqref{Eq:HydroBracket} can be obtained as a covariant phase space bracket obtained from a suitably reduced version of the multisymplectic manifold associated to the ideal fluid. Observe that any divergence-free vector field can be viewed as an initial condition ${\mathbf v} (x,0)$ for the (volume-preserving) Euler evolution (at least for small times, but as we previously said, we do not insist on refined analytical nuances) ${\mathbf v} (x,t)$, yielding a section of $E$. We set
\begin{displaymath}
{\mathcal J}^1 {\mathbf v} := {\mathbf w} \quad (:= {\rm curl}\, {\mathbf v} )
\end{displaymath}
which has to be understood as the natural "covariant" jetification of the section ${\mathbf v}$. This object is "covariant" in the sense that, in contrast with the standard jetification $j^1$ of a section (called \emph{first jet prolongation} in \cite[eq. (2A.4)]{Gimmsy1}), it does not yield an object depending on the choice of coordinates in $E$. Also, since sections of the first jet bundle $J^1 E\to E$ can be interpreted as Ehresmann connections \cite[Rem. 1]{Gimmsy1}, this allows one to look at ${\mathbf v} $ as the vector space counterpart of a connection $1$-form. \\
Using the 3-volume form $\nu$ orienting the fibres (notice that, when viewed on $E$, it is only pre-$2$-plectic, namely closed but degenerate), we can rewrite the hydrodynamical bracket, mimicking \cite{Forger2005}, as
\begin{equation}\label{Eq:FormerStar}
\begin{aligned}
\{ F, G \} ([{\mathbf v}]) =&~ \int_{\Sigma = {\mathbb R}^3} \left\langle {\mathbf w}, \, \frac{\delta F}{\delta {\mathbf v}} \times \frac{\delta G}{\delta {\mathbf v}}\right\rangle \,d^3x \\
=&~ \int_{\Sigma = {\mathbb R}^3} \nu ({\mathcal J}^1{\mathbf v}, \, \frac{\delta F}{\delta {\mathbf v}}, \frac{\delta G}{\delta {\mathbf v}}) \,d^3x
\end{aligned}
\end{equation}
since the variations $\frac{\delta F}{\delta {\mathbf v}}$ and $\frac{\delta G}{\delta {\mathbf v}}$ are {\it vertical} and divergence-free: $\delta F/ \delta {\mathbf v} = {\rm curl} \, (\delta F/ \delta {\mathbf w})$.
Taking again ${\mathbf b} = {\rm curl}\, {\mathbf B}$ et cetera, and setting finally $F = \lambda_{\bullet}$ (see \eg in particular \cite{Pe-Spe92,Spera16}), we see that the expression \eqref{Eq:FormerStar} can be manipulated to yield the expressive layout (with slight abuse of language)
\begin{displaymath}
\begin{aligned}
\{ F, G \} ([{\mathbf v}]) =&~ \int_{\Sigma} ({{\mathcal J}^1}^*\nu) ({\mathbf v},{\mathbf B},{\mathbf C}) \,d^3x \\
=&~ \int_{\Sigma}\nu ( {\mathcal J}^1{\mathbf v}, {\mathcal J}^1{\mathbf B}, {\mathcal J}^1{\mathbf C}) \,d^3x \\
=&~ \int_{\Sigma}\nu ( {\mathbf w}, {\mathbf b}, {\mathbf c}) \,d^3x
\end{aligned}
\end{displaymath}
(in full adherence with the discussion carried out in Section \ref{Sec:IdroPoisson}). The same portrait can be depicted, {\it mutatis mutandis}, for the singular case. Ultimately, we have reached the following conclusion:
\begin{theorem}
(i) The Poisson manifold ${\mathfrak g}^*$, introduced in equation \eqref{eq:gastPoisson} and in theorem \ref{Thm:HydroBracket}, can be naturally interpreted as a (generalized) covariant phase space pertaining to the volume-preserving Euler evolution. The latter indeed preserves the symplectic leaves of ${\mathfrak g}^*$ given by the $G$-coadjoint orbits ${\mathcal O}_{[{\mathbf v}]}$.\par
(ii) The above construction reproduces the symplectic structure of the Brylinski manifold ${Y}$, see remark \ref{Rem:BryLoopSpaces}, upon taking singular vorticities concentrated on a smooth oriented knot. The covariant phase space picture is fully retrieved upon passing to a 2-dimensional space-time $S^1 \times {\mathbb R}\rightsquigarrow (\lambda, t) $, with $\lambda \in S^1 \equiv \Sigma$ being a knot parameter (and staying of course with the same $\nu$). \par
\end{theorem}
\begin{remark}
\begin{enumerate}
\item We stress the fact that we did not literally follow the standard "multisymplectic to covariant" recipe developed in \cite{Forger2005}. In fact the multisymplectic manifold we consider is not the one prescribed by \cite{Gimmsy1} since we directly took the standard volume form $\nu$ on ${\mathbb R}^3$ as a $2$-plectic structure (or pre-$2$-plectic when pulled back to $E$), \cf \cite{CatIbort}. This neatly matches Brylinski's theory and fits with the stance long advocated, among others, by Rasetti and Regge and Goldin (see \eg \cite{Rasetti-Regge75,Goldin71,Goldin87,Goldin12}, and \cite{Spera16} as well) pinpointing the special and ubiquitous role played by the group of orientation-preserving diffeomorphisms $G = \sDiff({\mathbb R}^3)$. Another motivation for considering $\nu$ is its pivotal role in the formulation of conservation theorems (see \cite{Ryvkin2016}). We shall pursue this aspect in what follows.
\item In line with the preceding remark, notice that the above portrait can, in principle, be generalized to any volume form (on an orientable manifold), with its attached group $G$. The covariant phase space picture should basically persist in the sense that one might construct, in greater generality, an $n$-plectic structure out of an $(n+1)$-plectic one via an expression akin to equation \eqref{Eq:FormerStar}. The (non) $G$-equivariance issue should be relevant in this context.
\end{enumerate}
\end{remark}
\section{A Hamiltonian $1$-form for links}\label{Sec:Ham1FormLinks}
This section aims to show how it is possible to build a bridge between knot theory and multisymplectic geometry, exploiting the close connection between them and hydrodynamics.
Vortex theory, together with the ubiquitous role of the group of volume-preserving diffeomorphisms, will be the cornerstone of this relationship. The basic and quite natural idea is {\it to associate to a knot (or link) a perfect fluid whose vorticity is concentrated thereon} (\cf the preceding discussion on the Brylinski manifold).
\begin{displaymath}
\begin{tikzpicture}[thick,node distance = 8em, auto]
\node [->,rectangle, draw, fill=blue!20, text width=7em, text centered, rounded corners, minimum height=4em] (A) {Multisymplectic Geometry};
\node[inner sep=1em,minimum size=4em, text width=4em,right of=A] (k) {};
\node [rectangle,above of=k, draw, text width=7em,text centered, rounded corners, minimum height=4em] (B) {Hydrodynamics};
\node [rectangle,right of=k, draw, fill=blue!20, text width=7em, text centered, rounded corners, minimum height=4em] (C) {Knot theory};
\path[every node/.style={font=\sffamily\small}]
(A) edge[bend left=45,-Latex] node [left,text width=5em,align=center] {Geometric\\ Fluid\\ Mechanics} (B)
(B) edge[bend left=45,-Latex] node [right,text width=5em,align=center] {Vortex\\ dynamics} (C) ;
{\path[every node/.style={font=\sffamily\small}] (A) edge[bend right=45,dashed,-Latex,gray] node [below,text width=10em] {Geometric Mechanics \\ of classical strings} (C);}
\end{tikzpicture}
\end{displaymath}
Let us recall that knot theory is a rich and ramified theory, born as a pure mathematical curiosity and naturally framed in the field of low-dimensional topology, but yielding innumerable implications in the most disparate areas of both pure and applied mathematics. We do not venture to review all the multifaceted aspects of this broad topic, and, for what follows, it is enough to recall just a few basic ideas regarding knots and links. (As general references on knot theory we quote, among others, \cite{Rolfsen,Lickorish1997}, together with \cite{Bott-Tu82} for the algebraic-topological tools employed below.)
\begin{reminder}[Geometry of $n$-Links]\label{Rem:ReminderOnKnots}
As already anticipated in remark \ref{Rem:BryLoopSpaces}, a (smooth) \emph{knot} can be formalized as a compact connected submanifold of codimension $2$ embedded in $\R^3$. A slight generalization of this concept is given by the notion of \emph{links}:
\begin{itemize}
\item We call \emph{$n$-link} any smooth embedding $\gamma: {\coprod_{i=1}^n S^1 } \to \mathbb{R}^3 $ of $n$ disjoint copies of the circle into the Euclidean space $\R^3$.
\item Links are studied modulo \emph{ambient isotopies}. Specifically, given two embeddings $f,g: {\coprod_{i=1}^n S^1 }\to \R^3$, an ambient isotopy between $f$ and $g$ is a smooth isotopy $h : \R^3 \times [0,1] \to \mathbb{R}^3 $ of the "ambient" space, \ie $\hat{h}(t) : \R^3 \to \mathbb{R}^3$ is a diffeomorphism for any $t\in [0,1]$, with $\hat{h}(0) = \mathrm{id}_{\R^3}$, such that $\hat{h}(1)\circ f = g $. In layman's terms, an ambient isotopy describes a continuous deformation of the parametrized link $f$ into $g$ through the family $\hat{h}(t)\circ f$. Hence, to study $n$-links modulo ambient isotopies means to study the equivalence classes of those links that can be transformed continuously one into the other (in particular without cuts or other surgeries). \\
It is then implicit that the Brylinski manifold $Y$ (see remark \ref{Rem:BryLoopSpaces}), and its obvious generalization to $n$-links, must be composed of several connected components, one for each equivalence class of links. Clearly all these classes are invariant with respect to volume-preserving diffeomorphisms.
\item The ultimate goal of knot theory is to completely classify all non-equivalent (by ambient isotopies) classes of $n$-links. Note that it is not simply an instance of studying the topology of loops; the ambient space plays a significant role. \\
To date, such a classification is still missing. Nonetheless, numerous "partial" criteria to distinguish non-equivalent knots are known. These are mostly achieved by introducing the so-called \emph{knot polynomials} (see section \ref{sec:prelim} below).
\end{itemize}
\end{reminder}
Let us now specialize the considerations in Subsection \ref{Sec:IdroMoMap} to the case of links. Namely, we show how one can naturally associate to any link in $\R^3$ a \emph{Hamiltonian $1$-form} that in particular turns out to be conserved under the Euler evolution, \ie under the infinitesimal action of the Lie algebra $\mathfrak{g}=\sDiff(\R^3)$ on $\R^3$ (action defined in section \ref{Sec:IdroPoisson}).
%
Such a construction can be achieved by introducing the concept of \emph{Poincar\'e dual} of a given closed, oriented submanifold. To make sense of the latter, we briefly recall the notion of \emph{de Rham currents}:
%
\begin{reminder}[de Rham currents (\cite{dR})]\label{Rem:ReminderDeRhamCurrents}
It is possible to mimic the basic definition of "distribution" or "generalized function", originally introduced on $\R^n$, on any smooth manifold $M$ by means of differential forms:
\begin{itemize}
\item We call \emph{(de Rham) $k$-current} any (sequentially) continuous\footnote{Continuity is meant in the "sequential" sense. More explicitly, the functional $T$ is continuous if, for any given sequence $\omega_{k}$ of smooth forms, all supported in the same compact set, such that all derivatives of all their coefficients tend uniformly to $0$ when $k$ tends to infinity, $T(\omega_{k})$ tends to 0. \\
Recall the definition of \emph{support} of a differential form as the closure of the non-vanishing locus of $\omega$. Namely $ \text{supp}(\omega) := \overline{\lbrace p \in M \; \vert \: \omega_p \neq 0 \rbrace} $. } linear functional $\eta$ from the space $\Omega^k_c(M)$ of compactly supported $k$-forms to $\R$. We denote by $\mathcal{D}_k(M)$ the vector space of all $k$-currents on $M$. Clearly one has $\mathcal{D}_k(M)\cong\left(\Omega^k_c (M) \right)^\ast$, understanding the space on the right-hand side as the \emph{topological dual} of $\Omega^k_c(M)$.
%
\item We call \emph{annihilation set} of $\eta\in \mathcal{D}$ an open subset $A \subset M$ such that $\langle \eta, \varphi \rangle = 0$ for any $\varphi\in\Omega_c(M)$ compactly supported in $A$. (Here $\langle \eta, \varphi \rangle$ denotes the evaluation of the given current $\eta$ on the "test" form $\varphi\in \Omega_c^k(M)$.) \\
We call \emph{support} of a given current $\eta\in \mathcal{D}$ the complement of the union of all open annihilation sets of $\eta$.
%
\item There exists an analogue of \emph{regular distributions}, given by the following mapping:
\begin{displaymath}
\morphism{D} {\Omega^k(M)} {\mathcal{D}^{n-k}(M)} {\alpha} {\displaystyle\left( D_\alpha : \varphi \mapsto \int_M \alpha\wedge \varphi \right)}~.
\end{displaymath}
\item The de Rham differential can be extended to currents by duality
\begin{displaymath}
\morphism{\partial} {\mathcal{D}_k(M)} {\mathcal{D}_{k-1}(M)} {\eta} {\displaystyle\left( \partial \eta : \varphi \mapsto (-)^k\int_M \eta\wedge \d \varphi \right)}~.
\end{displaymath}
\ie $\langle \partial \eta, \blank \rangle = (-)^{k} \langle \eta, \text{d}\blank \rangle$.
Hence, de Rham currents build up a chain complex which is dual (modulo a sign) to the de Rham cochain complex. In a similar way, one can make sense of the wedge of de Rham currents.
\end{itemize}
\end{reminder}
To any given compact, oriented, embedded $k$-dimensional submanifold one can associate a unique de Rham current:
\begin{definition}[Poincar\'e dual]
Given a smooth embedding $\left( i : \Sigma \hookrightarrow M \right) \in \text{Emb}_c(k)$ of a compact, oriented, $k$-dimensional submanifold $\Sigma$ of the $n$-dimensional manifold $M$, we call \emph{Poincar\'e dual} of $\Sigma$ the de Rham current $D_\Sigma$ uniquely defined by the following equation
\begin{displaymath}
\langle D_\Sigma, \omega\rangle = \int_\Sigma i^\ast (\omega) \qquad \forall \omega \in \Omega^k(M) ~.
\end{displaymath}
\end{definition}
These can be understood as \emph{generalized} differential $(n-k)$-forms concentrated on $\Sigma$. As such, the Poincar\'e dual $D_\Sigma$ is the analogue of a Dirac delta function localized on the submanifold $\Sigma$; in this sense, Poincar\'e duals are "non-regular" or "singular".
%
One can then be interested in a \emph{regular approximation} of such singular behaviour:
%
\begin{definition}[Smooth (regular) Poincar\'e dual]\label{Def:RegularPoinDual}
We call \emph{smooth Poincar\'e dual} of an embedding $\left( i : \Sigma \hookrightarrow M \right) \in \text{Emb}_c(k)$ as above, any (smooth) differential form $\eta_\Sigma \in \Omega^{n-k}_c(M)$ supported on a tubular neighbourhood $T$ of $\Sigma$ s.t.
\begin{displaymath}
\langle D_{\eta_\Sigma},\omega\rangle \equiv \int_M \omega \wedge \eta_\Sigma = \int_\Sigma i^\ast \omega \qquad \forall \omega \in \Omega^k(M) ~.
\end{displaymath}
\end{definition}
%
Clearly, smooth Poincar\'e duals are not uniquely defined. The distribution associated to the differential form $\eta_\Sigma$ represents a possible regular approximation of $D_\Sigma$ in the sense that $\langle D_{\eta_\Sigma},\omega\rangle \sim \langle D_{\Sigma},\omega\rangle $. \\
Poincar\'e duals satisfy the following properties:
\begin{lemma}\label{Lem:PropertiesPoinDuals}
For any given compact, oriented submanifolds $\Sigma$, $\Sigma_1$, $\Sigma_2$ one has:
\begin{align}
& \partial D_\Sigma = (-)^{k-1} D_{\partial \Sigma} \label{Eq:PoinProp1}\\
& D_{\Sigma_1} \wedge D_{\Sigma_2} = D_{\Sigma_1 \cap \Sigma_2} \label{Eq:PoinProp2}
\end{align}
\end{lemma}
\begin{proof}
The proof of \eqref{Eq:PoinProp1} is straightforward:
\begin{displaymath}
\mathclap{ (-)^{k-1} \langle \partial D_\Sigma , \omega \rangle = \langle D_\Sigma, d\omega\rangle = \int_\Sigma i^\ast d \omega = \int_\Sigma d i^\ast \omega = \int_{\partial \Sigma} i^\ast \omega = \langle D_{\partial \Sigma}, \omega \rangle ~. }
\end{displaymath}
Claim \eqref{Eq:PoinProp2} is better understood by means of smooth Poincar\'e duals:
\begin{displaymath}
\text{supp}(\eta_1 \wedge \eta_2) \subset \text{supp}(\eta_1) \cap \text{supp}(\eta_2) \subset T_{\Sigma_1 \cap \Sigma_2} \quad\Rightarrow\quad \eta_1 \wedge \eta_2 = \eta_{\Sigma_1 \cap \Sigma_2} ~.
\end{displaymath}
For instance, take as $\Sigma_1$ the $z$-line in $\mathbb{R}^3$, $\eta_1 = \delta_{\{x=y=0\}} dx \wedge dy$, and as $\Sigma_2$ the $xy$-plane, $\eta_2= \delta_{\{z=0\}} dz$. One gets $\eta_1 \wedge \eta_2 = \delta_{\{x=y=z=0\}} dx \wedge dy \wedge dz = \eta_{\Sigma_1 \cap \Sigma_2}$.
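(A minimal numerical sanity check of this regularized picture is possible: the sketch below, written purely for illustration, approximates the two delta factors by narrow Gaussian bump forms and verifies that pairing a test $0$-form $\omega$ with $\eta_1 \wedge \eta_2$ recovers $\omega$ at the intersection point, here the origin.)
\begin{verbatim}
import numpy as np

# Illustrative sketch (assumed regularization): approximate
#   eta_1 = delta_{x=y=0} dx ^ dy   (smooth Poincare dual of the z-axis)
#   eta_2 = delta_{z=0}   dz        (smooth Poincare dual of the xy-plane)
# by narrow Gaussians and check that the pairing with a test 0-form omega
# recovers omega(0, 0, 0), i.e. eta_1 ^ eta_2 behaves as a dual of the origin.
eps = 0.06
pts = np.linspace(-1.0, 1.0, 121)
h = pts[1] - pts[0]
X, Y, Z = np.meshgrid(pts, pts, pts, indexing='ij')

bump_axis  = np.exp(-(X**2 + Y**2) / (2 * eps**2)) / (2 * np.pi * eps**2)   # ~ delta_{x=y=0}
bump_plane = np.exp(-Z**2 / (2 * eps**2)) / np.sqrt(2 * np.pi * eps**2)     # ~ delta_{z=0}

omega = np.cos(X) * np.exp(Y) * (1.0 + Z)     # test "0-form"; omega(0,0,0) = 1

pairing = np.sum(omega * bump_axis * bump_plane) * h**3
print(pairing)    # ~ 1.0, up to O(eps^2) smoothing error
\end{verbatim}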
\end{proof}
\begin{remark}[Poincar\'e duals as Thom classes]
Generally speaking, in a more algebraic-topological flavour, one can make sense of the Poincar\'e dual of a $k$-dimensional closed oriented submanifold $\Sigma$ of an $n$-dimensional manifold $M$ as a cohomology class $[\eta_\Sigma] \in H^{n-k}(M)$ characterized by the property:
$$ \int_M \omega \wedge \eta_\Sigma = \int_\Sigma i^* \omega $$
for any {\it closed, compactly supported $k$-form} $\omega$ on $M$ ($i: \Sigma \hookrightarrow M$ being the inclusion map). \\
One can therefore see a (regular) Poincar\'e dual as an $(n-k)$-form localized in a cross-section of a suitable tubular neighbourhood around $\Sigma$, \ie a \emph{Thom class} (see \cite{Bott-Tu82}). In \cite{Miti2019a} the authors view Poincar\'e duals interchangeably as genuine forms or as currents in the sense of de Rham.
\end{remark}
Building on \cite{Pe-Spe02,BeSpe06,Spe06}, let $ L = \coprod_{i=1}^n L_i$ be an oriented link in ${\mathbb R}^3$ with components $L_i$, $i=1,\dots,n$, required to be {\it trivial} knots. Let us choose a suitable tubular neighbourhood $T_i$ around any component $L_i$ (see Figure \ref{fig: tubes}).
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{\datapath/tubes}
\caption{Tubular neighbourhood for the $i$-th component of $L$.}
\label{fig: tubes}
\end{figure}
We introduce the following differential forms:
%
\begin{definition}[Vorticity $2$-form]\label{Def:Vorticity2Form}
We call \emph{vorticity $2$-form} pertaining to $L$ the smooth differential form
\begin{displaymath}
\omega_{L} := \sum_{i=1}^n \omega_{L_i} \in \Omega^2(\R^3)
\end{displaymath}
with $\omega_{ L_i}$ denoting a smooth Poincar\'e dual associated to $L_i$ with support given by $T_i$.
\end{definition}
\begin{definition}[Velocity $1$-form]\label{Def:Velocity1Form}
We call \emph{velocity $1$-form} pertaining to $L$ the smooth differential form
\begin{displaymath}
v_{ L} = \sum_{i=1}^n v_{L_i} \in \Omega^1(\R^3)
\end{displaymath}
with $v_{L_i}:= \omega_{{\mathfrak a}_i}$ denoting a smooth Poincar\'e dual associated to a disc ${\mathfrak a}_i$ bounded by $L_i$ (Seifert surface in the knot theory jargon, see Figure \ref{fig: thom}).
\end{definition}
%
\begin{remark}[Non-uniqueness of $v_L$ and $\omega_L$]\label{Rem:KnotFormsNotUnique}
Notice that $\omega_L$ and $v_L$ are not uniquely determined but rather heavily depend on the choice of the concrete differential form used to represent the regular Poincar\'e dual of each component, and corresponding Seifert surface, of the links. (Singular de Rham currents would be a correct language for restoring uniqueness.)
%
Although this feature may appear unpleasant at first glance, it actually fits neatly with the philosophy of studying knots and links regardless of their parametrization and modulo ambient isotopies. In fact, the property of a certain link configuration being "knotted" is detected equally well whether the link is considered as a singular object or as something "smeared" in a region close to a possible spatial displacement of the link. An exemplification of this phenomenon will be discussed in the next subsection.
\end{remark}
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{\datapath/thom}
\caption{Poincar\'e duals}
\label{fig: thom}
\end{figure}
%
\begin{theorem}\label{thm:VorticityFormExact}
The vorticity $2$-form $\omega_L$ associated to the link $L$ is an exact form.
The velocity $1$-form $v_L$ is a primitive of $\omega_L$ and is a Hamiltonian form for the $2$-plectic manifold $(\R^3,\nu)$ in the sense of definition \ref{Def:Hamiltonianform}.
\end{theorem}
\begin{proof}
According to equation \eqref{Eq:PoinProp1} in Lemma \ref{Lem:PropertiesPoinDuals}, the vorticity form $\omega_L$ is closed since the boundary of $L$ is clearly empty. In particular, $\omega_L$ is exact by virtue of the Poincar\'e lemma. \\
Again by Lemma \ref{Lem:PropertiesPoinDuals}, and recalling that $\partial\mathfrak{a}_i = L_i$, one gets, for each component of the link, that
\begin{displaymath}
\d v_{L_i} = \d \eta_{\mathfrak{a}_i} = \eta_{\partial \mathfrak{a}_i} = \eta_{L_i} = \omega_{L_i}
\end{displaymath}
where $\eta_{\Sigma}$ denotes a regular Poincar\'e dual of $\Sigma$. \\
Finally, notice that, for each component $L_i$, the Hamiltonian vector field of $v_{L_i}$ is given by $\xi_{L_i}=-\alpha^{-1}(\omega_{L_i})$ (via the map $\alpha$ of Section \ref{Sec:IdroMoMap}). Explicitly, one has (setting $\xi_L = \sum_{i=1}^n \xi_{L_i}$)
\begin{equation}\label{eq:poincaredual}
dv_L + \iota_{\xi_L} \nu = 0 ~.
\end{equation}
\end{proof}
\begin{remark}[Cohomological interpretation of $v_L$]\label{Rem:CohoIntLinkForm}
Recall that there is a canonical cohomology theory associated to an $n$-link $L$ given by the cohomology of $S^3 \setminus L$. (Note that there is no harm in considering the compactification $S^3$ instead of the standard Euclidean space since the image of any link configuration in any case sits in a compact subset of $\R^3$.) For the sake of clarity, the (de Rham) cohomology and relative homology groups of $S^3 \setminus L$ with real coefficients read as follows
\begin{displaymath}
\begin{aligned}
H^0 (S^3 \setminus L) &\cong H_3(S^3, L) \cong {\mathbb R} \\
H^1 (S^3 \setminus L) &\cong H_2(S^3, L) \cong {\mathbb R}^{n} \\
H^2 (S^3 \setminus L) &\cong H_1(S^3, L) \cong {\mathbb R}^{n-1} \\
H^3 (S^3 \setminus L) &\cong H_0(S^3, L) \cong 0
\end{aligned}
\end{displaymath}
which can be computed by iterating Mayer-Vietoris arguments, starting from $\R^3$ deprived of a line.\par
The de Rham classes of the forms $v_{L_i}$ generate the cohomology group $H^1(S^3 \setminus L, {\mathbb R})$ (or, better, that of $S^3 \setminus T$, with $ T = \cup_{i=1}^n T_i$). Their homological counterparts are given by the (classes of the) discs ${\mathfrak a}_i$. One can also interpret the other groups: in particular, elements in $H_1(S^3, L) $ can be represented by classes $[\gamma_{ij}]$ of (smooth) paths $\gamma_{ij}$ connecting two components $L_i$ and $L_j$, subject to the relation $[\gamma_{ij}] + [\gamma_{jk}] = [\gamma_{ik}]$. (See \cite{Spe06}.)
\end{remark}
\begin{remark}
Inspection of the very geometry of Poincar\'e duality shows that the velocity $1$-forms $v_i$ correspond (upon approximation of the associated Euler equation) to the so-called LIA (Localized Induction Approximation) or {\it binormal evolution} of the ``vortex ring" $L_i$ (``orthogonal" to the discs ${\mathfrak a}_i$ - an easy depiction, \cf figure \ref{fig: thom}), see \cite{Khe} for more information. Formula (\ref{eq:poincaredual}) will be the prototype for the calculations in section \ref{Sec:MasseyMess}.
\end{remark}
\subsection{Relation to Gauss linking number}\label{subsec:ReltoGauss}
We now want to show how the previous constructions can determine quantities relevant to the classification of a link despite their heavy reliance on arbitrary choices. Consider a link $L$ embedded in $(\R^3,\nu)$.
Let $v_L$ and $\omega_L$ be the velocity and vorticity forms defined in definitions \ref{Def:Velocity1Form} and \ref{Def:Vorticity2Form} respectively. One can therefore introduce the {\it Chern-Simons (helicity)} $3$-form:
\begin{equation}\label{Eq:CSHeli}
CS({L}) := v_{L} \wedge \omega_{ L} ~.
\end{equation}
Integration of $CS({L})$ over ${\mathbb R}^3$ (or $S^3$ in case of compactification of the ambient space) yields a number ${\mathcal H}(L) $ called the {\it helicity} of $L$. The naming is clearly not incidental. Interpreting $L$ as an ideal fluid configuration with singular vorticity concentrated in a filament given by $L$, one recovers the \emph{helicity} of the fluid as introduced in remark \ref{Rem:Helicity}. It is a celebrated result that the quantity $\mathcal{H}(L)$ is indeed an integer, depending on the knot and not on its possible parametrization, measuring the mutual knotting of two generic flow lines (we point to \cite{Moffatt-Ricca92, Arn-Khe,Pe-Spe89,Pe-Spe92,Spe06} for a more extensive discussion).
\begin{theorem}[Moffatt-Ricca \cite{Moffatt-Ricca92}]\label{Thm:Moffatt-Ricca}
Given a link $L$ in $\R^3$, one has
\begin{displaymath}
\int_{S^3} CS({L}) =: {\mathcal H}(L) = \sum_{i,j=1}^n \ell(i,j) ~\in \mathbb{Z},
\end{displaymath}
where $\ell(i,j)\in\mathbb{Z}$ are quantities that can be algorithmically computed on indented diagrams\footnote{ A useful way to visualise and manipulate knots is to project the knot onto a plane. In order to be able to recreate the original knot, one must distinguish between the over-strand and the under-strand at every crossing. This is usually done by creating a break in the strand going underneath, called "indentation". The resulting diagram is an immersed plane curve with the additional data of which strand is over and under each crossing. } (see \eg \cite{Rolfsen,Moffatt-Ricca92,Ricca2011,Spe06}). Namely:
\begin{itemize}
\item when $i\neq j$, $\ell(i,j) =\ell(j,i)= \ell(L_i,L_j)$ is the {\it Gauss linking number}\footnote{Informally, it represents the number of times that the curve $L_i$ winds around the curve $L_j$. Operatively, it can be computed by summing all the signed crossings of the indented diagram (see \cite{Rolfsen,Lickorish1997}). } pertaining to the components $L_i$ and $L_j$;
\item when $i=j$, $\ell(j,j)$ is equal to the Gauss linking number $\ell(L_j, L_j^{\prime})$ with $L_j^{\prime}$ being a section of the normal bundle of $L_j$ (think of $L_j^\prime$ as another loop winding around $L_j$, arbitrarily close to it but without intersecting it).
\end{itemize}
\end{theorem}
\begin{proof}
(\textit{Sketch})\\
Choosing a parametrization $\mathbf{r}_i$ (in standard coordinates) for each component $L_i$, one gets the Gauss linking integral formula \cite{Ricca2011}
\begin{displaymath}
{\mathcal H}(L) = \sum_{i,j=1}^n \,\frac{1}{4\pi} \oint_{\gamma_i}\oint_{\gamma_j} \frac{\mathbf{r}_i - \mathbf{r}_j}{|\mathbf{r}_i - \mathbf{r}_j|^3} \cdot (d\mathbf{r}_i \times d\mathbf{r}_j) ~.
\end{displaymath}
%
\begin{figure}[h!]
\vspace{-1em}
\begin{center}
\includegraphics[width=0.3\textwidth]{\datapath/GaussLink}
\end{center}
\caption{Hopf link, component $C'$ crossing the Seifert surface of component $C$. \cite{Ricca2011}.}
\label{Fig:HopfRicca}
\end{figure}
Solving this integral could seem a daunting task at first; however, one could convince oneself that it yields an integer by using the notion of Poincar\'e duals in a simple example.
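(As a purely numerical aside, for explicit parametrizations the double integral can also be evaluated directly; the following is a minimal sketch, assuming a standard Hopf link realized by two orthogonal unit circles, each passing through the centre of the other.)
\begin{verbatim}
import numpy as np

# Illustrative sketch: direct evaluation of the Gauss linking integral
#   lk = (1/4pi) * oint oint (r1 - r2) . (dr1 x dr2) / |r1 - r2|^3
# for two explicitly parametrized closed curves forming a Hopf link.
n = 400
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

# component C : (cos t, sin t, 0),      tangent (-sin t, cos t, 0)
r1  = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=-1)
dr1 = np.stack([-np.sin(t), np.cos(t), np.zeros_like(t)], axis=-1)
# component C': (1 + cos s, 0, sin s),  tangent (-sin s, 0, cos s)
r2  = np.stack([1.0 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=-1)
dr2 = np.stack([-np.sin(t), np.zeros_like(t), np.cos(t)], axis=-1)

diff  = r1[:, None, :] - r2[None, :, :]                    # r1(t) - r2(s)
cross = np.cross(dr1[:, None, :], dr2[None, :, :])         # dr1 x dr2
integrand = np.einsum('ijk,ijk->ij', diff, cross) / np.linalg.norm(diff, axis=-1) ** 3

lk = integrand.sum() * (2.0 * np.pi / n) ** 2 / (4.0 * np.pi)
print(lk)   # ~ +1 or -1, depending on the chosen orientations
\end{verbatim}
Up to discretization error, the printed value is $\pm 1$ (the sign depending on the chosen orientations), as expected for the Hopf link.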
\\ Let $L=C\coprod C'$ be a Hopf link as in figure \ref{Fig:HopfRicca}, and consider the de Rham current
\begin{displaymath}
\mathbf{H}(L) = (D_C\wedge D_\Sigma + D_{C'}\wedge D_{\Sigma'}) + (D_C \wedge D_{\Sigma'} + D_{C'}\wedge D_{\Sigma})
\end{displaymath}
where $D_C$ denotes the (singular) Poincar\'e dual of $C$ and $D_{\Sigma}$ denotes the (singular) Poincar\'e dual of a certain Seifert surface $\Sigma$ of $C$ (and similarly for $C'$ and $\Sigma'$). Notice that this can be seen as a singular version of the Chern-Simons $3$-form given in equation \eqref{Eq:CSHeli}. According to Lemma \ref{Lem:PropertiesPoinDuals}, and discarding the first bracket (the terms pairing each component with its own Seifert surface account for self-linking and give no contribution here), we have
\begin{displaymath}
\mathbf{H}(L) = D_{P'} + D_{P}
\end{displaymath}
where $D_P$ and $D_{P'}$ are singular currents localized at the intersection point of $C$ with $\Sigma'$ and vice versa. Hence, the integral $\int \mathbf{H}(L)$ simply counts the number of times one component crosses the Seifert surface of the other, \ie the Gauss linking number of the link.
\end{proof}
%
It is important to recall that ${\mathcal H}(L)$ is invariant under ambient isotopies but non-ambient isotopic links do not necessarily yield different linking numbers. Hence, it does not solve the ultimate goal sketched in remark \ref{Rem:ReminderOnKnots}.
\section{A multisymplectic interpretation of Massey products}\label{Sec:MasseyMess}
In this section we resort to the techniques developed in the sections above and propose a reformulation of the so-called higher order linking numbers in multisymplectic terms.
\begin{figure}[h!]
\vspace{-1em}
\begin{center}
\includegraphics[width=0.43\textwidth]{\datapath/BorromeanLink}
\end{center}
\vspace{-1em}
\caption{ The \emph{Borromean rings} are a prototypical example of a \emph{Brunnian link}: removing any component of the link yields a pair of unknots. \small(\href{https://commons.wikimedia.org/wiki/File:Borromean_Rings_Illusion.png}{Wikimedia Commons}) \smallskip }
\label{Fig:BorromeanRings}
\end{figure}
Ordinary and higher order linking numbers provide, among others, a quite useful tool for the investigation of Brunnian phenomena in knot theory: recall that a link is {\it almost trivial} or {\it Brunnian} if upon removing any component therefrom one gets a trivial link (see figure \ref{Fig:BorromeanRings}). They can be defined recursively in terms of Massey products, or, equivalently, Milnor invariants, by the celebrated Turaev-Porter theorem (see \cite{Fenn,Pe-Spe02,Spe06,Hebda-Tsau12}). We are going to review, briefly and quite concretely, the basic steps of the Massey procedure, read differential geometrically as in \cite{Pe-Spe02, Spe06,Hebda-Tsau12}, presenting at the same time our novel multisymplectic interpretation thereof. As a first thing, let us recall how one can associate to any pair of knots in a link a certain cohomology class:
\begin{remark}[Cohomological reinterpretation of the ordinary linking number]\label{Rem:CohoReintOrdinaryLinking}
Consider a pair of linked knots $L_1$ and $L_2$, possibly components of a bigger link. The cohomological reinterpretation of the ordinary linking number $\ell(1,2)$ of the components $L_1$ and $L_2$ starts from consideration of the $2$-form
\begin{equation}
\Omega_{1 2} := - v_{1}\wedge v_{2}
\end{equation}
with $v_i$ the velocity $1$-forms of the component $L_i$. Observe that $\d \Omega_{1 2} = -\omega_1\wedge v_2 + v_1 \wedge \omega_2$ is equal to the CS-form minus a "self-linking" term, \ie $\int \d \Omega_{1 2} = \ell(1,2)$ (\cf Theorem \ref{Thm:Moffatt-Ricca}).
Although this form is not uniquely defined, it determines a unique (integral) de Rham class in the cohomology of the link (see Remark \ref{Rem:CohoIntLinkForm}), completely independent of the various choices:
\begin{displaymath}
\langle L_1, L_2 \rangle := \left[\Omega_{1 2}\Big\rvert_{S^3\setminus L} \right] \in H^2(S^3\setminus L)
\end{displaymath}
The closedness of $\Omega_{1 2}\eval_{S^3\setminus L}$ follows from observing that $\text{supp}(\d \Omega_{1 2})$ is contained in a tubular neighbourhood centered along the link that can be taken arbitrarily small; hence one can assume that $\text{supp}(\d \Omega_{1 2})\subset L$, so that $\d \Omega_{1 2}$ vanishes outside of $L$. (See \cite[\S 2.3]{Pe-Spe02} or \cite[\S 6.2]{Spe06} for the complete argument.) The linking number $\ell(1,2)$ is non-zero precisely when $\langle L_1, L_2 \rangle $, which in $H_1(S^3,L)$ equals $\ell(1,2) [\gamma_{12}]$, is non-trivial. If the latter class vanishes (\ie $\Omega_{1 2}\eval_{S^3\setminus L}$ is exact), we have
\begin{equation}\label{eq:massey1}
dv_{12} + v_{1}\wedge v_{2} = dv_{12} + \Omega_{12} = 0
\end{equation}
for some $1$-form $v_{12}$.
\end{remark}
%
Let $L$ be an oriented link with three or more components $L_j$ such that all the ordinary mutual linking numbers of the components under consideration vanish. Out of the primitives obtained in equation \eqref{eq:massey1}, one can manufacture another closed form:
%
\begin{definition}[Third order linking number (class)]
Let $v_{i j}$ be the primitive obtained through equation \eqref{eq:massey1} from the vanishing of the cohomology class $\langle L_i, L_j \rangle$ defined in remark \ref{Rem:CohoReintOrdinaryLinking}. We call {\it third order linking number} (as a class) for the three components $L_1, L_2,L_3$ the cohomology class
$$ \langle L_1, L_2, L_3 \rangle := [\Omega_{123}\rvert_{S^3\setminus L} ] \in H^2(S^3 \setminus L) $$
of the (closed) \emph{Massey} $2$-form
\begin{displaymath}
\Omega_{123} = v_{1}\wedge v_{23} + v_{12}\wedge v_{3} ~.
\end{displaymath}
\end{definition}
%
It is then easy to devise a general pattern: if the latter class vanishes, one can find a $1$-form $v_{123}$ such that
\begin{equation}\label{eq:massey2}
dv_{123} + v_{1}\wedge v_{23} + v_{12}\wedge v_{3} = dv_{123} + \Omega_{123} = 0.
\end{equation}
The iteration of this procedure, eventually obstructed by the non-vanishing of certain higher linking number classes, yields a hierarchy of pairs
\begin{displaymath}
v_I \in \Omega^1(S^3\setminus L) \qquad \Omega_{I} \in Z^2(S^3\setminus L)
\end{displaymath}
with $I$ a general multi-index constructed out of the set $\{1,\dots,n\}$ of the $n$-link components.
%
\begin{remark}
The previous construction can be organised - via Chen's calculus of iterated path integrals \cite{Chen,Chen4} - in terms of sequences of {\it nilpotent connections} ${\mathbf v}^{(k)}$, $k=1,2,\dots$, on a trivial vector bundle over $S^3 \setminus L$ and their attached {\it curvature forms} ${\mathbf w}^{(k)}$ (ultimately, the $\Omega_I$, \cite{Pe-Spe02,Spe06,Tavares,Hain}), everything stemming from the {\it Cartan structure equation}
$$ d {\mathbf v}^{(k)} + {\mathbf v}^{(k)} \wedge {\mathbf v}^{(k)} = {\mathbf w}^{(k)} $$
together with the ensuing Bianchi identity
$$ d {\mathbf w}^{(k)} + {\mathbf v}^{(k)} \wedge {\mathbf w}^{(k)} - {\mathbf w}^{(k)} \wedge {\mathbf v}^{(k)} = 0 $$
(the latter implying closure of the forms $\Omega_I$).
In order to give a flavour of the general argument, start from the nilpotent connection ${\mathbf v}^{(1)}$ with its corresponding curvature ${\mathbf w}^{(1)}$:
%
\begin{equation*}
{\mathbf v}^{(1)} = \begin{pmatrix} 0 & v_1 & 0 & 0 \\ 0 & 0 & v_2 & 0 \\ 0 & 0 & 0 & v_3 \\ 0 & 0 & 0 & 0 \\ \end{pmatrix}, \quad {\mathbf w}^{(1)} = \begin{pmatrix} 0 & 0 & \Omega_{12} = v_1\wedge v_2 & 0 \\ 0 & 0 & 0 & \Omega_{23} = v_2\wedge v_3 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{pmatrix}
\end{equation*}
%
Then proceed similarly with
$$ {\mathbf v}^{(2)} = \begin{pmatrix} 0 & v_1 & v_{12} & 0 \\ 0 & 0 & v_2 & v_{23} \\ 0 & 0 & 0 & v_3 \\ 0 & 0 & 0 & 0 \\ \end{pmatrix},\quad {\mathbf w}^{(2)} = \begin{pmatrix} 0 & 0 & 0 & \Omega_{123} = v_{1} \wedge v_{23} + v_{12} \wedge v_{3} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{pmatrix} $$
(we made use of $dv_{12} + \Omega_{12} = dv_{23} + \Omega_{23} = 0$ to kill the $(1,3)$ and $(2,4)$ entries, while the $(1,4)$ entry of ${\mathbf v}^{(2)} \wedge {\mathbf v}^{(2)}$ is precisely $v_{1} \wedge v_{23} + v_{12} \wedge v_{3} = \Omega_{123}$), and so on.
\end{remark}
%
\begin{remark}
Recall that all forms $\Omega_{I}$ can be neatly interpreted, via Poincar\'e duality, as auxiliary (trivial) knots $L_I$, and $v_I$ as discs bounded by $L_I$, in adherence to the considerations in Section \ref{Sec:Ham1FormLinks}; see \cite{Pe-Spe02,Spe06} for more details and worked out examples, including the {\it Whitehead link} (involving fourth order linking numbers - with repeated indices) and the {\it Borromean rings} (exhibiting a third order linking number). Just notice here that, for instance, formula (\ref{eq:massey1}) becomes, intersection-theoretically,
$$ \partial {\mathfrak a}_{12} + {\mathfrak a}_1 \cap {\mathfrak a}_2 = 0, $$
see Figure \ref{fig: chen}.
%
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{\datapath/chen}
\caption{Starting the Chen procedure}
\label{fig: chen}
\end{figure}
\end{remark}
We are now ready to interpret these quantities in the framework of multisymplectic higher observables:
\begin{proposition}\label{Prop:MasseyMess}
\begin{enumerate}[label=(\roman*)]
\item All Massey $2$-forms are globally conserved in the sense of definition \ref{Def:conservedQuantities};
\item Given an exact Massey $2$-form $\Omega_I$, the corresponding primitive $v_I$ will be Hamiltonian with respect to the volume $\nu\in \Omega^3(S^3\setminus L)$. The latter is obtained by extending first the standard Euclidean volume $\nu$ from $\R^3$ to its compactification $\R^3\cup \{\infty\} \cong S^3$ and then restricting it again from $S^3$ to $S^3\setminus L$. The corresponding Hamiltonian vector field is given by
\begin{equation}
\xi_I = \alpha^{-1}(\Omega_I) ~,
\end{equation}
where $\alpha$ is the "flat" map introduced in equation \eqref{eq:alphacontractionmap}.
\end{enumerate}
\end{proposition}
\begin{proof}
Formula (\ref{eq:massey2}) can be rewritten as
$$ dv_{123} + \iota_{\xi_{123}} \nu = 0 $$
where ${\xi \equiv \xi_{123}} = \alpha^{-1}(\Omega_{123})$.
%
The above (``vorticity") vector field $\xi_{123}$ can be thought of as being concentrated on the auxiliary knot $L_{123}$ corresponding to $\Omega_{123}$, or, alternatively, in a thin tube around it, when considering a bona fide Poincar\'e dual, \cf (\ref{eq:poincaredual}). This tells us that $v_{123}$ is a {\it Hamiltonian $1$-form} with respect to the volume $\nu$ and the formula
$$ {\mathcal L}_{\xi} \Omega_{123} = \d \iota_{\xi} \Omega_{123} + \iota_{\xi} \d \Omega_{123} = \d \iota_{\xi} \Omega_{123} $$
expresses the fact that $\Omega_{123}$ is a globally conserved $2$-form and the same holds for $\Omega_{12}$.
This discussion can be carried out verbatim for a general multi-index $I$:
$$ d v_I + \iota_{\xi_I} \nu = 0 $$
(an extension of (\ref{eq:poincaredual})) and, in general, $\Omega_I$ is globally conserved.
\end{proof}
%
We stress that, by construction, the momenta associated to the divergence-free fields $\xi_I$ through the \momap constructed in section \ref{Sec:IdroMoMap} correspond to the $v_I$'s. The following is the main result of this section.
\begin{theorem}\label{Thm:MasseyMess}
The $1$-forms $v_I$ are {\rm first integrals in involution} with respect to the flow generated by the Hamiltonian vector field $\xi_{ L}$ pertaining to the velocity $1$-form introduced in definition \ref{Def:Velocity1Form}, namely
$$ {\mathcal L}_{\xi_{ L}} v_I = 0 $$
(\ie the $v_I$'s are {\rm strictly conserved}) and the Poisson brackets given in equation \eqref{Eq:MSPB} above yield
$$ \{v_I, v_J \} = 0 $$
(for multi-indices $I$ and $J$).
\end{theorem}
\begin{proof}
Using Cartan's formula, we get
$$ {\mathcal L}_{\xi_{ L}} v_I = d \iota_{\xi_{ L}} v_I + \iota_{\xi_{ L}} d v_I = d \iota_{\xi_{ L}} v_I - \iota_{\xi_{ L}} \iota_{\xi_{ I}} \nu, $$
but the second summand vanishes in view of the general expression
$$ \{v_\xi, v_\eta \}(\cdot) = \nu (\xi, \eta, \cdot) $$
and of the peculiar structure of the vector fields involved (they either partially coincide or have disjoint supports). By the same argument, one gets $\iota_{\xi_{ L}} v_I = 0$, in view of the Poincar\'e dual interpretation of $v_I$ (\cf Section \ref{Sec:Ham1FormLinks}), together with the second assertion; a crucial point to notice is that the auxiliary links obtained via Chen's procedure may be suitably split from their ascendants, thus leading to
$$ \iota_{\xi_{ L}} v_I = 0, $$
the consequent {\it strict} conservation of the $v_I$'s being then immediate.\par
\smallskip
Notice that, in particular, from
$$ \iota_{\xi_{ L}} v_L = 0 $$
(Poincar\'e dual interpretation again) we also get
$$ {\mathcal L}_{\xi_{ L}} v_L = 0 $$
(this is {\it not} to be expected a priori in multisymplectic geometry, \cf \cite{Ryvkin2016}).
\end{proof}
We ought to remark that, upon altering the $v_I$'s by an exact form, we may lose strict conservation, but in any case global conservation is assured (the Poisson bracket is an exact form, by equation \eqref{eq:bracketsformula} in Section \ref{Sec:MSHydroFluids} and in view of commutativity of the vector fields $\xi_I$ and $\xi_J$).\par
\smallskip
Ultimately, we can draw the conclusion that {\it the Massey invariant route to ascertain the Brunnian character of a link can be mechanically understood as a recursive test of a kind of {\rm knot theoretic integrability}: the Massey linking numbers provide obstructions to the latter}. \par
\smallskip
Thus, somewhat curiously, higher order linking phenomena receive an interpretation in terms of multisymplectic geometry, which is a sort of higher order symplectic geometry. Also, integrability comes in with a twofold meaning: first, higher order linking numbers emerge from the construction of a sequence of flat, \ie integrable, nilpotent connections; second, this very process yields first integrals in involution in a mechanical sense.
\par
\section{A symplectic approach to the HOMFLYPT polynomial}\par
In this section, extracted from \cite{Miti2019a}, we present a novel {\it interpretation} of the HOMFLYPT polynomial (\cite{Freyd-etal,PT}) as a WKB-wave function via geometric quantization of the Brylinski manifold of singular knots (and links), drawing inspiration from the {\it ad hoc} helicity-based hydrodynamical procedures devised in \cite{Liu-Ricca12,Liu-Ricca15}. This section is divided into two parts. Subsection \ref{sec:prelim} gathers together basic symplectic geometric and knot theoretic tools which are necessary for understanding the problem. Subsection \ref{Sec:HomWkb} contains the main theorem \ref{Thm:MainHomflyThm} in which the HOMFLYPT polynomial is read in the geometric quantization framework. Before proving theorem \ref{Thm:MainHomflyThm}, we recall, and extend to links, some results contained in \cite{BeSpe06}. These results are the foundation on which our (geometric quantization flavoured) approach is built.
\subsection{Preliminaries}\label{sec:prelim}
\subsubsection{The HOMFLYPT polynomial}\label{sub:HomflyPrelim}
In this subsection we recall a few basic knot theoretic notions, referring, among others, to \cite{Rolfsen, Kauffman,Moffatt-Ricca92,BeSpe06,Spe06} for further information. First recall that, given a {\it framed}\footnote{Roughly speaking, the framing is a tubular neighbourhood centered around the knot. It can be chosen as "small" as desired, so that it never intersects itself in the vicinity of adjacent strands. See \cite{nlab:framed_link} for a concise description of this concept. } oriented link, its {\it helicity} ${\mathcal H}(L)$ is given as
\begin{equation}\label{eq:helicity}
{\mathcal H}(L) = \sum_{i,j =1}^n \ell(i,j)
\end{equation}
with $\ell(i,j) = \ell(j,i)$ being the {\it Gauss linking number} of components $L_i$ and $L_j$ if $i\neq j$ and where $\ell(j,j)$ is the {\it framing} of $L_j$, equal to $\ell(L_j, L_j^{\prime})$ with $L_j^{\prime}$ being a section of the normal bundle of $L_j$, see subsection \ref{subsec:ReltoGauss} above. \\
A regular projection of a link $L$ onto a plane produces a natural framing called the {\it blackboard framing}\footnote{ Think of it as the framing induced by the indented projection. In other words, it is the extension of the knot diagram from a curve to an infinitesimally thin ribbon.}, and ${\mathcal H}(L) = w(L)$, the {\it writhe} of $L$, given by
\begin{equation}\label{eq:Writhe}
w(L) = \sum_{\rm all \,\,crossings} \pm 1
\end{equation}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.43\textwidth]{\datapath/crossings}
\end{center}
\vspace{-3mm}
\caption{Crossings}
\label{fig: crossings}
\end{figure}
Let $L$ be an arbitrary link and consider a certain crossing pertaining to its indented diagram (\ie regularly projected onto a plane, $z=0$, say). We denote, as usual, by $L_+$, $L_-$ and $L_0$ three oriented links differing at a single crossing ($(\pm 1)$-crossing and no crossing, respectively), see Figure \ref{fig: crossings}. Denoting by $m_\pm$ and $m_0$ their respective number of components, one has $m_+ = m_- = m_0 \pm 1$.
\begin{remark}\label{Rem:L0pm}
Let us stress the fact that the three links $L_{\pm}$ and $L_0$ may well be mutually inequivalent (thus they belong to different connected components of the manifold $Y$ defined below): for instance, if $L_+$ is a trefoil knot, $L_-$ is a trivial knot and $L_0$ is a Hopf link.
This situation is not general: considering $L_+ = E_+$ and $L_- = E_-$ (``figure of eight" knots, \ie trivial knots with $\pm 1$ writhe, see figure \ref{fig:surgery}), both equivalent to the unknot $\bigcirc$, the corresponding $L_0$ is a trivial 2-component link.
\end{remark}
We already mentioned in reminder \ref{Rem:ReminderOnKnots} the notion of \emph{knot polynomials} as a tool to provide knot invariants. With the notation introduced in remark \ref{Rem:BryLoopSpaces}, one can see a knot polynomial as a function
\begin{displaymath}
\hat{Y} \to R[X]
\end{displaymath}
from the space of (possibly) mildly singular knots to the $R$-algebra of polynomials over a certain ring $R$ and a set of indeterminates $X$. An example is given by the so-called \emph{HOMFLYPT polynomial} (see \cite{Freyd-etal, PT, Kauffman}).
\begin{definition}[HOMFLYPT polynomial]
For any given link $L$, the corresponding \emph{HOMFLYPT polynomial} is a two-variable (knot) polynomial $P = P(\alpha,z)$ defined by the {\it skein relation}
\begin{displaymath}
\alpha \,P ({L_+}) - \alpha^{-1}\, P({L_-}) = z \, P({L_0})~,
\end{displaymath}
for any crossing of a given diagram of $L$ (see above remark \ref{Rem:L0pm}), and by the normalization relation on the unknot $\bigcirc$
\begin{displaymath}
P(\bigcirc) = 1~.
\end{displaymath}
\end{definition}
\begin{remark}
The {\it HOMFLYPT polynomial} is an {\it ambient isotopy} invariant. Diagrammatically, this amounts to invariance under the {\it Reidemeister moves} $R_j$, $j=0,1,2,3$. The actual calculation may be performed, for instance, via the skein-template algorithm (see \eg \cite{Kauffman}, p.57). We recall also that $P$ is not a universal invariant, meaning that one can find inequivalent knots determining the same HOMFLYPT polynomial (see for example \cite{RAMADEVI1994}).
\end{remark}
The HOMFLYPT polynomial can be obtained from the so-called $H$-{\it polynomial}:
\begin{definition}[$H$-polynomial]\label{Def:Hpoly}
For any given link $L$, the corresponding \emph{$H$-polynomial} is a two-variable (knot) polynomial $H = H(\alpha,z)$ defined by the {\it skein relations}
\begin{align*}
H(L_+) - H(L_-) =&~ z \, H(L_0) ~, \\
H(L \sharp {\,8_\pm}) =&~ \alpha^{\pm 1} H(L),
\end{align*}
for any crossing of a given diagram of $L$. With $L \sharp {\,8_\pm}$ we mean the topological gluing of a "figure of eight"-shaped ``curl" to $L$ (see also section \ref{Sec:HomWkb} below for further interpretation of the notation).
\end{definition}
\begin{proposition}[``Kauffman principle" (see {\cite[p.55]{Kauffman}})]\label{prop:KaufPrin}
It is possible to recover the HOMFLYPT polynomial out of the $H$-polynomial via the standard writhe correction:
\begin{equation}
P(L) = \alpha^{-w(L)}\,H(L) = \alpha^{-{\mathcal H}(L)}\,H(L) ~.
\end{equation}
In particular, the $H$-polynomial is not an ambient isotopy invariant but only a regular isotopy invariant. Namely, one drops invariance under $R_1$, \ie addition of a ``curl".
\end{proposition}
\subsubsection{Lagrangian submanifolds revisited}
Recall that a submanifold $\Lambda$ of a symplectic manifold $(M, \omega)$ is {\it Lagrangian} when the symplectic form $\omega$ vanishes thereon and it is of maximal dimension with respect to this property. Consider now the cotangent space $T^*Q$ of a given smooth manifold $Q$ taken with its canonical symplectic form (see example \ref{Ex:Multicotangent}).
A Lagrangian submanifold $\Lambda \subset T^*Q$ in general position can be described in the following way:
\begin{theorem}[Maslov-H\"ormander Morse family theorem \emph{(see \eg \cite{Mas,Hor,Gui-Ste,McD-Sal})}]\label{thm:morseFam}
Consider a Lagrangian submanifold $\Lambda$ of the canonically symplectic manifold $T^\ast Q$. There exists (locally) a smooth function $\phi = \phi(q, a)$, $(q, a) \in Q \times {\mathbb R}^k$ (${\mathbb R}^k$ being a space of auxiliary parameters) and a submanifold
\begin{equation}
C_{\phi} = \{ (q, a) \in Q \times {\mathbb R}^k \, \mid \, d_{a} \phi = 0 \}
\end{equation}
with $d(d_{a}\phi)$ of maximal rank thereon (here $d = d_q + d_{a}$) such that the map
\begin{equation}
\lambdamorphism{C_{\phi}} {T^*Q } {(q, a)} {(q,d_q \phi )}
\end{equation}
is an immersion with image $\Lambda$.
\end{theorem}
If the Hessian $H_{a}$ (with respect to the auxiliary variables $a$) is non-degenerate, one can solve $a = a(q)$ and introduce the following objects:
\begin{itemize}
\item we call \emph{phase function} the smooth function $F\in C^{\infty}(Q)$ given by $F(q):= \phi(q, a(q))$, which satisfies $(q, dF(q) ) \in \Lambda$;
\item given a phase function, we call \emph{momentum} at $q\in Q$ the covector $dF(q) =: p(q)$.
\end{itemize}
The non-degeneracy of the Hessian fails at the singular points of the obvious projection $\Lambda \rightarrow Q$. However, the singular locus $Z$ (the {\it Maslov cycle}) turns out to be orientable and of codimension $1$ in $\Lambda$ with $\partial Z$ of codimension $\geq 3$ (see \eg \cite[\S II.7]{Gui-Ste}).
\subsubsection{WKB wave functions}
Recall that, given a {\it prequantizable} symplectic manifold $(M,\omega)$, the Weil-Kostant theorem (see theorem \ref{thm:integralitycondition} above) implies the existence of a complex line bundle ${\mathcal L} \to M$ (prequantum bundle), equipped with a Hermitian metric and compatible connection $\nabla$ with curvature $\Omega_{\nabla} = -2\pi i \omega$. Since the symplectic 2-form $\omega$ vanishes on any Lagrangian submanifold $\Lambda \subset M$, any (local) symplectic potential $\vartheta$ ($d\vartheta = \omega$) becomes a closed form thereon, giving a (local) connection form pertaining to the restriction of the prequantum connection $\nabla$. This restriction is a {\it flat} connection that will be denoted by the same symbol $\nabla$. When $\nabla$ has trivial holonomy, one can define the so-called \emph{WKB-wave functions}:
\begin{definition}[WKB-wave function]
We call a \emph{WKB-wave function} any global covariantly constant section of the restriction to $\Lambda$ of the prequantum line bundle ${\mathcal L} \to M$. In other terms, the latter is a section $s\in \Gamma(\mathcal{L},\Lambda)$ such that $\nabla s = 0$.
\end{definition}
A WKB-wave function $s$ takes the local form (neglecting the so-called half-form correction, see \eg \cite{Woodhouse97})
\begin{equation}
s(m) := {\rm hol}_\gamma(\nabla) \cdot s(m_0) = e^{i \int_{\gamma} \theta} \cdot s(m_0)
\end{equation}
with $\gamma$ denoting any path connecting a chosen point $m_0$ in a (connected) symplectic manifold $M$ with a generic point $m \in M$, ${\rm hol}_{\gamma}(\nabla)$ being the holonomy of the (restriction to $\Lambda$ of the) prequantum connection $\nabla$ along $\gamma$. The right-hand side tacitly assumes a trivialization of ${\mathcal L} \to M$ around $m_0$, with $m$ lying in a corresponding local chart.
\subsection{The HOMFLYPT polynomial as a WKB wave function}\label{Sec:HomWkb}
Let $(M,\nu)$ be a $3$-dimensional smooth manifold oriented by the volume form $\nu$.
Consider the free loop space $LM$ and the Brylinski manifold $\widehat{Y}_{M}$ of (mildly) singular {\it knots} embedded in $M$ introduced in reminder \ref{Rem:BryLoopSpaces}. Recall that, by transgression, one gets a 2-form $\Omega$ on $LM$ via the formula
\begin{equation}
\Omega = \int_{S^1} ev^* (\nu )
\end{equation}
where $ev: S^1 \times LM \rightarrow M$, given by $ev (\lambda , \gamma ) := \gamma(\lambda)$, is the evaluation map (of a loop $\gamma \in LM$ at a point $\lambda \in S^1$). More explicitly, given tangent vectors $u$ and $v$ at $\gamma$, it reads
\begin{equation}
\Omega_{\gamma} (u , v) = \int_{0}^1 \nu ({\dot \gamma}(\lambda), u(\lambda), v(\lambda))
\end{equation}
(where we set ${\dot \gamma} = {d\gamma \over d\lambda}$). The 2-form $\Omega$ is basic with respect to the ${\rm Diff}^+(S^1)$-principal bundle $\widehat{X}_{M} \rightarrow \widehat{Y}_{M}$, namely $i_{\xi} \Omega = i_{\xi} d\Omega = 0$, with $\xi$ any vertical vector field (\ie generating an orientation-preserving reparametrization of the loop). Hence, $\Omega$ descends to a closed, non-degenerate 2-form on $\widehat{Y}_{M}$, \ie a (weak) {\it symplectic form}. The key point is given by the following proposition:
\begin{proposition}[Pre-quantum bundle for the Brylinski manifold $\widehat{Y}_{M}$]
Let $(M,\nu)$ be a $3$-dimensional manifold with volume form $\nu$. If the class $[\nu]$ is integral, then $(\widehat{Y}_{M},\Omega)$ is prequantizable.
\end{proposition}
\begin{proof}
Recall that, in general, the transgression gives rise to a (degree shifting) morphism of complexes $\Omega^{\bullet} (M) \rightarrow \Omega^{\bullet - 1} (LM)$, mapping closed (resp. exact) forms to closed (resp. exact) ones in view of the general formula (direct calculation, or see \cite{Chen}):
\begin{equation}
d \int \omega = - \int d \omega
\end{equation}
where, of course, the l.h.s. differential pertains to $LM$ and the r.h.s. one pertains to $M$. \\
Consequently, integral cohomology classes on $M$ are mapped to integral cohomology classes on $LM$. Therefore, if $[\nu ]$ is integral, then $[\Omega]$ is integral as well, this ensuring, via the Weil-Kostant theorem, the existence of a prequantum bundle. \\
A subtle though explicit construction can be given via the integral class $[\nu ] \in H^3(M, {\mathbb Z})$, defining a {\it gerbe}, see \cite{Bry,Spe11}.
\end{proof}
We can naturally extend the above discussion to oriented links and accordingly define the symplectic structure on the {\it generalized Brylinski space of oriented mildly singular links} $\widehat{Y}$ (same notation) via the same formula above, by replacing a knot $K$ by a link $L$:
\begin{equation}\label{eq:LinkSymplecticForm}
\Omega_L (\cdot , \cdot )= (\int_L \nu) \, \,(\cdot,\cdot):= \sum_{i=1}^n \int_{L_i} \nu (\dot{\gamma}_i, \cdot,\cdot)
\end{equation}
\begin{remark}
We also point out, for completeness, that the weak symplectic manifold $(\widehat{Y}_{M}, \Omega)$ can be endowed with several other natural structures:
\begin{itemize}
\item $\widehat{Y}_{M}$ can be naturally equipped with a (formally) integrable compatible almost complex structure making it a K\"ahler manifold in an appropriate sense, see \eg \cite{Bry,Pe-Spe89,Arn-Khe}.
\item Each connected component of $\widehat{Y}_{M}$ is (up to technical subtleties, see \cite{Bry} and \cite{Pe-Spe89} as well) a {\it coadjoint orbit} of the group of unimodular diffeomorphisms of $M$, \ie those preserving a volume form.
\end{itemize}
\end{remark}
\medskip
We shall deal with the case $M = {\mathbb R}^3$; the ensuing manifold $\widehat{Y} := \widehat{Y}_{{\mathbb R}^3}$ is called the manifold of {\it oriented singular links} in ${\mathbb R}^3$, whereas $Y := {Y}_{{\mathbb R}^3}$ is called the manifold of {\it oriented links} in ${\mathbb R}^3$.
\begin{remark}[$\widehat{Y}$ is prequantizable]\label{rem:OmegaExact}
Observe that the standard volume form $\nu$ in ${\mathbb R}^3$ can be portrayed as
\begin{equation}
\nu = dx \wedge dy \wedge dz = d (z\, dx \wedge dy) \equiv d\hat{\theta}
\end{equation}
in terms of the (multisymplectic) potential $\hat{\theta}$; the latter transgresses to a (symplectic) potential $\theta$ for $\Omega$, which vanishes identically when restricted to the plane $z=0$. \\
In other terms, $\Omega$ is exact. Thence the assumptions of the Weil-Kostant theorem are fulfilled and we have a trivial prequantization bundle ${\mathcal L} \to \widehat{Y}$ (Brylinski's line bundle).
\end{remark}
\subsubsection{The Chern-Simons Lagrangian submanifold}
Consider the weak symplectic manifold $T^*{\widehat{Y}}$, that is, the cotangent space associated to $\widehat{Y}$ together with its canonical symplectic structure. One can naturally introduce an appropriate Morse family\footnote{To be formally interpreted in the sense of H\"ormander (see \eg \cite{Mas},\cite{Hor}).}, \cf theorem \ref{thm:morseFam}, treating the space of $U(1)$-connections ${\cal A}$ as a set of auxiliary parameters\footnote{ It may be identified with ${\cal D}_{{\mathbb R}} ({\mathbb R}^3)\otimes {\mathbb R}^3$, the space of compactly supported (real) vector fields on ${\mathbb R}^3$, and standardly topologized accordingly. We regard it as an infinite dimensional manifold modelled on itself (\cf \eg \cite{Kri-Mich}, p.439). }. This family is given by the link analogue of the {\it Abelian Chern-Simons action with source}, \ie the smooth function\footnote{Concretely, one first checks G\^ateaux differentiability of $\Phi =\Phi (K, \cdot )$, then observes that $\Phi$ is indeed Fr\'echet differentiable (for instance by checking continuity of the G\^ateaux derivative, see \eg \cite[p.128]{Kri-Mich}). } $\Phi \in C^{\infty}(Y\times \cA)$ given by
\begin{equation}
\Phi (L,A) := {k\over{8\pi}}\int_{{\mathbb R}^3}A \wedge dA + \int_L A
\end{equation}
According to theorem \ref{thm:morseFam}, $\Phi$ defines locally a Lagrangian submanifold $\widetilde{\Lambda}$ of the cotangent space $T^*{\widehat{Y}}$ via the condition
\begin{equation}
d_{\cal A}\Phi \mid_{(L,A)} = 0
\end{equation}
where $d_{\cal A}$ denotes the differential with respect to ${\cal A}$. Furthermore, the Lagrangian submanifold $\widetilde{\Lambda} \hookrightarrow T^*{\widehat{Y}}$ admits a local phase function $\phi\in C^\infty(\widehat{Y})$ given by
\begin{equation}\label{eq:phasephi}
\phi (L) = -\frac{2\pi}{k} {\mathcal H}(L) \equiv 2\pi \lambda \,{\mathcal H}(L)
\end{equation}
where ${\mathcal H}(L)$ is the helicity of the link (see equation \eqref{eq:helicity}) and $\lambda := -1/k$ is a non-zero number \footnote{The generic value taken by $\lambda$ (in particular, it can be taken equal to a root of unity) avoids trivialities.} (see \cite[\S 3]{BeSpe06} and references therein for details).
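Concretely, on an indented diagram endowed with the blackboard framing, the phase \eqref{eq:phasephi} is computed from the signed crossings via ${\mathcal H}(L) = w(L)$ (see equation \eqref{eq:Writhe}); the following minimal numerical sketch illustrates this (the encoding of a diagram as a plain list of crossing signs is a hypothetical convention adopted only for illustration).
\begin{verbatim}
import numpy as np

# Illustrative sketch: evaluating the phase phi(L) = 2*pi*lambda*H(L) on an
# indented diagram with blackboard framing, where H(L) = w(L) = sum of the
# crossing signs (a hypothetical, purely diagrammatic encoding of the link).

def helicity(crossing_signs):
    """crossing_signs: list of +1/-1, one entry per crossing of the diagram."""
    return sum(crossing_signs)

def phase_factor(crossing_signs, lam):
    """exp(i * phi(L)) with phi(L) = 2*pi*lam*H(L)."""
    return np.exp(2j * np.pi * lam * helicity(crossing_signs))

lam = -1.0 / 5.0          # lambda = -1/k, e.g. k = 5
hopf_plus = [+1, +1]      # positive Hopf link: two positive crossings, H = 2
curl_plus = [+1]          # unknot with one positive curl (E_+),        H = 1

print(helicity(hopf_plus), phase_factor(hopf_plus, lam))
print(helicity(curl_plus), phase_factor(curl_plus, lam))
# the last factor, exp(2*pi*i*lam), is the phase attached to a single positive curl
\end{verbatim}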
The previous discussion is subsumed by the following statement.
\begin{theorem}[Helicity as a phase function \emph{(\cite[Thm 3.1]{BeSpe06})}]
The writhe (helicity) of a link can be interpreted as a phase function pertaining to $\widetilde{\Lambda}$, looked upon as a Lagrangian submanifold of $T^*\widehat{Y}$.
\end{theorem}
\subsubsection{Link invariants via geometric quantization}
Consider now the submanifold $\Lambda \subset {\widehat Y}$ consisting of the links on a plane with transversal intersections. It has been proved in \cite{BeSpe06} that $\Lambda$ is a Lagrangian submanifold with respect to the symplectic form given in equation \eqref{eq:LinkSymplecticForm}. Henceforth we tacitly identify a link in $Y$ with its projection onto $\Lambda$ (whilst retaining crossing information via a $\pm$-marking). \\ Since the symplectic potential of Brylinski's form can be taken equal to zero (see remark \ref{rem:OmegaExact}), the phase is (locally) constant. This is in accordance with the fact that the phase is essentially given by the helicity and the latter is a topological invariant. The CS-Lagrangian can be used, as in \cite{BeSpe06}, as a Morse family also for $\Lambda$. The Lagrangian submanifold $\Lambda$ is then locally given by the graph
\begin{equation}
(L, d \, {\mathcal H}(L)) = (L, 0)
\end{equation}
($d {\mathcal H}(L) = 0$ is the so-called {\it eikonal equation}, see \cite{BeSpe06,Spe06}). In our context the assumptions of the Weil-Kostant theorem are fulfilled (see remark \ref{rem:OmegaExact}). Hence we have a trivial prequantization bundle ${\mathcal L} \to \widehat{Y}$, restricting to $\Lambda$ (and denoted by the same symbol). A covariantly constant section of ${\mathcal L} \to \Lambda$ is then just a {\it locally constant function} on $\Lambda \subset \widehat{Y}$ since, locally, $\theta = d{\mathcal H} = 0$ and we neglect the so-called ``half-form'' correction (see \eg \cite{BeSpe06} and \cite{Woodhouse97}). \\ Then, observe that the phase function in equation \eqref{eq:phasephi} gives rise precisely to the exponent of the (regular isotopy) \emph{Witten invariant}
\begin{equation}\label{eq:wittenInv}
\psi = \psi (L) := e^{2\pi i \lambda{\mathcal H}(L)}
\end{equation}
(obtained in \cite{Witten89} via a path integral computation, see also \cite{Gua,Kauffman,BeSpe06}). Furthermore, $\psi$ arises as a WKB wave function for the prequantum line bundle ${\mathcal L} \to \Lambda$ in a purely topological theory.
\smallskip
The discussion developed in the preceding subsections, extending to links the results obtained for knots in \cite{BeSpe06}, can be summarized via the following theorem (suitably merging and extending Theorems 3.1, 4.1, 5.1 of the above paper):\par
\begin{theorem}[Witten invariant as a WKB-phase function]
Consider the Brylinski manifold $\widehat{Y}$ of mildly singular links. The regular isotopy Witten invariant $\psi$ can be interpreted as a WKB wave function in Brylinski's framework. Namely, it corresponds to a covariantly constant section of the restriction of the prequantum line bundle to the submanifold $\Lambda$ of singular links on a plane, viewed as a Lagrangian submanifold of $\widehat{Y}$.
\end{theorem}
In this spirit, we propose a geometric quantization interpretation of the HOMFLYPT polynomial, building on the Besana-Spera symplectic approach to framing via Brylinski's manifold of mildly singular links.
\begin{theorem}[HOMFLYPT as a WKB-phase function]\label{Thm:MainHomflyThm}
The HOMFLYPT polynomial can be recovered from the geometric quantization procedure applied to the Brylinski manifold $\widehat{Y}$ and to its Lagrangian subspace $\Lambda$. \\ Namely, it coincides (after normalization) with a suitable covariantly constant section $\Psi = \Psi(\alpha,z)$. The coefficient $\alpha$ is a phase factor related to the helicity of a standard ``figure of eight'' and $z$ comes from accounting for the variation of the number of components of a link.
\end{theorem}
\begin{proof}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.43\textwidth]{\datapath/surgery}
\end{center}
\vspace{-3mm}
\caption{Surgery via $E_+$}
\label{fig:surgery}
\end{figure}
Let $L$ be a link. Consider three links $L_0$, $L_\pm$ pertaining to a given crossing in a given diagram of $L$ (see subsection \ref{sub:HomflyPrelim}). \\ Then, inspired by the Liu-Ricca approach (\cite{Liu-Ricca12,Liu-Ricca15}), let us consider the above ``figures of eight'' $E_{\pm}$. Since a figure of eight has a single crossing, one has ${\mathcal H}(E_{\pm}) = \pm 1$. Starting, for instance, from $L_0$, one can ``add'' $E_+$ to the two coherently oriented parallel strands of $L_0$ in such a way that $E_+$ comes with the opposite orientation: a partial cancellation occurs and the net result is $L_+$. Conversely, proceeding backwards we can, by appropriately adding an $E_-$, produce $L_0$ from $L_+$ and so on. Therefore, addition of $E_{\pm}$ allows one to pass from one local configuration to the other\footnote{ We explicitly notice that the two strands may indifferently belong to the same or to two different components of the link involved.}, see Figure \ref{fig:surgery}. \\ Now set:
\begin{equation}
\begin{aligned}
\alpha :=&~ e^{2\pi i \lambda {\mathcal H}(E_+)} = e^{2\pi i \lambda} ~, \\
\alpha^{-1} =&~ e^{-2\pi i \lambda} = e^{2\pi i \lambda {\mathcal H}(E_-)} ~,
\end{aligned}
\end{equation}
so that, trivially (compare with equation \eqref{eq:Writhe})
\begin{equation}\label{eq:vecchia4.2}
\begin{aligned}
\psi(L_{\pm}) =&~ \alpha^{\pm 1} \psi(L_{0}) ~, \\
\psi(L_{\pm}) =&~ \alpha^{\pm 2} \psi(L_{\mp}) ~, \\
\psi (L_+) - \psi (L_-) =&~ (\alpha - \alpha^{-1}) \psi (L_0) ~.
\end{aligned}
\end{equation}
Thus we see that $\alpha^{\pm 1}$ arises as the local contribution to the WKB wave function $\psi$ upon addition (surgery) of a figure of eight (or ``curl''), which can be applied to a single branch as well (first Reidemeister move: this explains our notation $ L \sharp {\,8_\pm}$ for the link obtained from $L$ by adding a $\pm$-curl on one of its strands), and $\alpha^{\pm 2}$ as the corresponding contribution upon crossing the Maslov cycle $Z$. We now wish to modify the above formula \eqref{eq:vecchia4.2} so as to produce a genuine {\it ambient isotopy} link invariant, while keeping the above interpretation. We proceed as follows. \\ First of all, the prequantization bundle ${\mathcal L} \to \Lambda$ can be trivialized via the trivial link invariant $\Psi_0 := {\mathbb 1}$. We wish to alter $\Psi_0$ beyond the connected component of the unknot $\bigcirc$ in order to get a non-trivial invariant. This will be accomplished via a minimal alteration of the r.h.s. of equation \eqref{eq:vecchia4.2}.
Given links $L_{\pm}$ and $L_0$, with respect to one of their crossings (we already observed that they may all be mutually inequivalent, \ie they may lie in different connected components of $Y$), we can nevertheless identify their respective fibres ${\mathcal L}_{\pm}$ and ${\mathcal L}_0 $ of the line bundle ${\mathcal L}$ via the given trivialization. Let us then slightly modify the above formula \eqref{eq:vecchia4.2} and first look for a regular isotopy invariant wave function $\widetilde{\Psi}$ such that
\begin{equation}
\begin{aligned}
\widetilde{\Psi}(L_+) - \widetilde{\Psi}(L_-) =&~ z \,\widetilde{\Psi}(L_0) ~, \\
\widetilde{\Psi}(L \sharp {\,8_\pm}) =&~ \alpha^{\pm 1} \widetilde{\Psi}(L)
\end{aligned}
\end{equation}
with $z$ now {\it independent} of $\alpha$. But this is precisely the skein relation for the $H$-polynomial (see definition \ref{Def:Hpoly}). Hence $\widetilde{\Psi}$ exists and can be promoted to the sought-after HOMFLYPT polynomial wave function $\Psi$ via Kauffman's principle (proposition \ref{prop:KaufPrin}). Namely
\begin{equation}
\Psi(L) := \alpha^{-w(L)}\,\widetilde{\Psi}(L) = \alpha^{-{\mathcal H}(L)}\,\widetilde{\Psi}(L),
\end{equation}
fulfilling
\begin{equation}
\begin{aligned}
\alpha \Psi ({L_+}) - \alpha^{-1} \Psi ({L_-}) =&~ z \,\Psi ({L_0}) ~, \\
\Psi(\bigcirc) =&~ 1 ~.
\end{aligned}
\end{equation}
The above procedure, applied to $\psi$, yields the trivial invariant $\Psi_0 = \mathbb{1}$.
\end{proof}
\begin{remark}
\begin{enumerate}
\item The skein relation for the $H$-polynomial, and hence for the HOMFLYPT polynomial, has been {\it used} in order to guarantee the representation of the latter as a covariantly constant section of the above line bundle. As such, our interpretation is consistent and {\it crossing independent}.
\item The choice $z= \alpha^{-\frac{1}{2}} - \alpha^{\frac{1}{2}}$ reproduces the {\it Jones} polynomial. For $\alpha = 1$ and $z \neq 0$ we recover the {\it Conway} polynomial. The case $z = 0$ yields the trivial invariant $\Psi = \Psi_0 = {\mathbb 1}$.
\item The skein relation (5.3) can be equivalently written in the form
\begin{equation}
\Psi(L_{-}) = \alpha^{2} \,\Psi(L_{+}) - z \alpha \,\Psi(L_{0})
\end{equation}
which tells us that $\Psi(L_{-})$ can be obtained by suitably adding $\Psi(L_{+})$, corrected by a {\it Maslov type} transition (switching term: local surgery via $\alpha^{2}$; the number of link components stays the same), and $\Psi(L_{0})$, corrected by a splicing term (``component transition'') $\alpha$ (and multiplied by an extra coefficient $-z$). The latter contribution was absent in \cite{BeSpe06} since that paper dealt with {\it knots} only. Notice that the apparent notational clash (one would naively expect a switch $\alpha \leftrightarrow\alpha^{-1}$ in (5.3) and (5.4)) is simply due to the Kauffman principle.
\item Passage from $L_{\pm}$ to $L_0$ (and conversely) in $Y$, resulting, as already remarked, in a change in the number of link components, involves coalescence of {\it two} opposite crossings into one and a corresponding tangent alignment (a sort of ``dipole'', related to the second Reidemeister move). This is a sort of ``higher order'' contribution beyond the Maslov one.
\item In this way we essentially recover the hydrodynamical portrait of Liu and Ricca \cite{Liu-Ricca12,Liu-Ricca15}, stating that ``$P= t^{\mathcal H}$'', via a different (and more conceptual) interpretation. In particular, the meanings of the two parameters used in HOMFLYPT are not quite the same.
The local surgery operation involves helicity, as in Liu-Ricca, but we portray the latter as yielding a local phase function, governing a component transition or, upon squaring it, a Maslov transition, as in \cite{BeSpe06}.
\item The Chern-Simons (CS) 3-form $A_L \wedge dA_L $ can be interpreted, in adherence to \cite{Spe11}, as a {\it connection 3-form} for a {\it 2-gerbe}, having {\it zero curvature} on ${\mathbb R}^3 \setminus L$. The provisional wave function $\psi = \exp\left( 2\pi i \lambda {\mathcal H}(L)\right)$ then essentially becomes the ``parallel transport'' of this connection ``along'' ${\mathbb R}^3$, and it is already an important topological invariant.
\item The considerations made in the previous item may provide the starting point of a {\it multisymplectic} reformulation, involving $({\mathbb R}^3, \nu)$ instead of its transgressed symplectic manifold $({L}{\mathbb R}^3, \Omega)$, possibly casting further light on the Jones-Witten theory.
\end{enumerate}
\end{remark}
\chapter*{Introduction and overview}\label{Chap:Introduction}
\chaptermark{Introduction and overview}
\addcontentsline{toc}{chapter}{Introduction}
In this doctoral thesis, we are interested in developing the theory of symmetries on multisymplectic manifolds, especially when they admit a so-called \emph{\momap}. \\ \emph{Multisymplectic structures} (also called \emph{``$n$-plectic''}) are the rather straightforward generalization of symplectic ones obtained when closed non-degenerate $(n+1)$-forms replace $2$-forms. Historically, the interest in multisymplectic manifolds, \ie smooth manifolds equipped with an $n$-plectic structure, has been motivated by the need to understand the geometrical foundations of first-order classical field theories. The key point is that, just as one can associate a symplectic manifold to an ordinary classical mechanical system, it is possible to associate a multisymplectic manifold to any classical field system. Hence, from the mechanical point of view, the passage from $2$-forms to higher forms amounts to the leap from a single point-like particle constrained to some manifold to a continuous medium, like a filament, a brane or a fluid. \\ The initial thrust of the theory\footnote{ Although the first instances (in local coordinates) of such structures can be traced back to the classical work of De Donder and Weyl in the 1930’s \cite{DeDonder35,Weyl35}, what we mean here is the modern (global) formulation. The latter was initiated by Kijowski and Szczyrba \cite{Kijowski1973,KS} in the 1970’s and definitively established in the 1990’s with the works of Cari\~nena-Crampin-Ibort \cite{Carinena1991b} and Gotay \cite{Gotay91}.}, essentially due to its close relationship with the finite-dimensional geometrical description of classical fields, lost its momentum when it was realized that an adequate notion of ``observables'' was not available. The centrality of this concept, due to its fundamental role in the construction of most quantization schemes, prompted researchers to focus on different generalized notions of symplectic manifolds, predominantly in the direction of infinite-dimensional smooth spaces. One of the inherent difficulties was that most of the candidates aimed at representing the appropriate ``algebra of observables'', understood not only as ``physically'' measurable quantities but also as generators of the time evolution, suffered from the lack of a suitable underlying algebraic structure.
Essentially, one lacked an appropriate Lie algebra structure. Early approaches tried to cure this defect by dividing out ``non-physical terms'', \eg by considering quantities ``modulo divergences''. \\ A breakthrough happened around the year 2010 when Chris Rogers and John Baez \cite{Rogers2010} realized that the more apt structure to look for was an $L_\infty$ (strongly homotopy) algebra rather than a bona fide Lie algebra. Namely, Rogers proved that the algebraic structure encoding the observables on a multisymplectic manifold is that of an $L_{\infty}$-algebra, that is, a graded vector space endowed with skew-symmetric multilinear brackets satisfying the Jacobi identity up to coherent homotopies. This idea has led to a new surge of interest in the multisymplectic framework and, roughly five years later, a suitable notion of ``moment map'' made its appearance. In \cite{Callies2016}, Callies, Fregier, Rogers, and Zambon gave the definition of \emph{\momap} as a natural generalization of the ordinary (symplectic) comomentum map, unifying several previous attempts to encode momenta in multisymplectic geometry. In a nutshell, a \momap is an $L_\infty$-morphism associated to certain infinitesimal actions which preserve the multisymplectic form of a target manifold. Therefore, the upshot is as follows: multisymplectic manifolds, $L_\infty$ observables, and \momaps are higher\footnote{ We will mostly understand the ``higher'' term appearing here in a naive way, that is as ``going higher'' in the degrees of the considered differential forms. A more cogent interpretation of the use of this term can be provided in the language of higher categories and homotopy theory. } generalizations of symplectic manifolds, Poisson algebras, and co-moment maps. Since the latter concept is particularly subtle and technical, not many meaningful examples have been worked out in full detail. In this thesis, we try to address this problem by giving new insights and delivering new concrete constructions related to \momaps, in an effort to further develop the understanding of symmetries in the context of multisymplectic geometry. For instance, in chapter \ref{Chap:MauroPaper}, we exhibit how one can explicitly construct a \momap pertaining to the infinitesimal action of the group of volume-preserving diffeomorphisms of Euclidean space, and of other Riemannian manifolds with similar cohomology, by resorting to Hodge theory. This construction allows for a physical interpretation in terms of ideal fluids and singular vortices. In addition, the latter can be put in relation with knot theory. Though mechanics provided the original motivation for the foundation of multisymplectic geometry, we stress that mathematical physics is not the only source of instances of this class of structures. For example, any orientable $n$-dimensional manifold can be considered $(n-1)$-plectic when equipped with a volume form. Following this purely mathematical premise, in chapter \ref{Chap:LeonidPaper} we focus our attention on multisymplectic actions of compact groups, deriving existence results and explicit constructions for \momaps related to actions on spheres. A natural issue that arises when dealing with both symplectic and multisymplectic structures is to investigate what relationship exists between gauge-related multisymplectic manifolds, \ie manifolds endowed with multisymplectic forms lying in the same de Rham cohomology class.
In chapter \ref{Chap:MarcoPaper}, we will focus on the two $L_\infty$-algebras of observables associated to a pair of gauge-related multisymplectic manifolds. To date, no canonical correspondence is known between two gauge-related observables algebras. However, we will be able to exhibit a compatibility relation between those observables that are momenta of corresponding \momaps. Although this construction is essentially algebraic in nature, it also admits a geometric interpretation when applied to the particular case of pre-quantizable symplectic forms. This provides some evidence that this construction may be related to the higher analogue of geometric quantization for integral multisymplectic forms.
\section*{Structure of the thesis}
\addcontentsline{toc}{section}{Structure of the thesis}
\sectionmark{Structure of the thesis}
This thesis consists of an introduction, five chapters and four appendices. The core of the thesis is split into two parts. The first part (chapters \ref{Chap:Linfinity} and \ref{Chap:MultiSymplecticGeometry}) consists of background material and the second part (chapters \ref{Chap:LeonidPaper}, \ref{Chap:MarcoPaper}, \ref{Chap:MauroPaper}) contains mostly original results.
\begin{figure}[h!]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tikzpicture}[>=stealth,every node/.style={shape=rectangle,draw,rounded corners},]
\node [dashed] (a1) {App. \ref{App:GradedMultilinearAlgebra}};
\node [dashed] (a2)[below right =of a1] {App. \ref{App:RNAlgebras}};
\node [dashed] (a3)[right =of a1] {App. \ref{App:UnshuffleAtors}};
\node [dashed] (a4)[right =of a3] {App. \ref{App:PreLie}};
\node (c1)[below left =of a1] {Chapter \ref{Chap:Linfinity}};
\node (c2)[below =of c1] {Chapter \ref{Chap:MultiSymplecticGeometry}};
\node (c3)[below left =of c2] {Chapter \ref{Chap:LeonidPaper}};
\node (c4)[below =of c2] {Chapter \ref{Chap:MarcoPaper}};
\node (c5)[below right =of c2] {Chapter \ref{Chap:MauroPaper}};
\draw[->,dashed] (a1) -- (a2);
\draw[->,dashed] (a3) -- (a2);
\draw[->,dashed] (a4) -- (a2);
\draw[->,dashed] (a2) -- (c1);
\draw[->] (c1) -- (c2);
\draw[->] (c2) -- (c3);
\draw[->] (c2) -- (c4);
\draw[->] (c2) -- (c5);
\end{tikzpicture}
}
\caption{Structure of the chapters.}
\end{figure}
\medskip
Chapter \ref{Chap:Linfinity} is devoted to recalling the conventions of multilinear algebra adopted in the thesis (which are more extensively explained in appendices \ref{App:GradedMultilinearAlgebra} and \ref{App:RNAlgebras}) and to introducing the reader to the language of $L_\infty$-algebras. Chapter \ref{Chap:MultiSymplecticGeometry} is a short survey on the three cornerstone concepts upon which this work is based. Namely, the notion of \emph{multisymplectic manifold}, the definition of the pertaining \emph{$L_\infty$-algebra of observables} and the characterization of symmetries admitting a \emph{\momap}.
\medskip
In chapter \ref{Chap:LeonidPaper} we focus on multisymplectic actions of compact groups. We observe how, in this case, the cohomological obstructions to the existence of a \momap, already discussed in the literature in terms of equivariant cohomology, can be expressed in terms of the de Rham cohomology on the product of the acting Lie group with the base manifold. We profit from this by giving a complete classification of proper actions of compact groups on spheres admitting \momaps, providing at the same time several explicit constructions.
In chapter \ref{Chap:MarcoPaper} we deal with the commutativity of a certain diagram involving multisymplectic structures in the same cohomology class and \momaps. First, we discuss a motivation for this purely algebraic problem, which comes from the context of geometric quantization of symplectic manifolds. Then, we show explicitly how the observables $L_\infty$-algebra can be embedded into the $L_\infty$-algebra corresponding to another geometric object called \emph{Vinogradov algebroid}. At last, we prove that the diagram given by the aforementioned embedding with respect to two gauge-related multisymplectic manifolds can be closed in the presence of a group action admitting a \momap. Chapter \ref{Chap:MauroPaper} revolves around the explicit construction of a \momap for the action of divergence-free vector fields on $\RR^3$ rapidly vanishing at infinity. This map will be called ``hydrodynamical'' for two main reasons: the acting group can be interpreted as the configuration (displacement) group of an incompressible fluid and the transgression of the corresponding \momap yields the standard hydrodynamical co-momentum map of Arnol'd, Marsden, Weinstein and others. As an application of the above \momap, a reinterpretation of the (Massey) higher order linking numbers in terms of conserved quantities within the multisymplectic framework is provided and knot theoretic analogues of first integrals in involution are determined.
\medskip
The appendices are intended as a survey preparing the algebraic framework used to present our results. In particular, the language of \RN algebras (see section \ref{Section:MultibracketsAlgebra}) will be extensively used in chapters \ref{Chap:Linfinity}, \ref{Chap:MultiSymplecticGeometry} and \ref{Chap:MarcoPaper}. Appendix \ref{App:GradedMultilinearAlgebra} describes our conventions involving graded vector spaces starting from the notion of \emph{graded objects}. We also recall the notions of tensor algebras and coalgebras and of lifts to coalgebra morphisms. In appendix \ref{App:RNAlgebras} we discuss the algebraic structure of the graded vector space of multilinear maps. In particular, we focus on the space of symmetric homogeneous multilinear maps, endow it with a non-associative algebra structure, called \emph{\RN product}, and check its pre-Lie property. The ensuing right pre-Lie algebra is then proved to be isomorphic to the algebra of graded skew-symmetric multilinear maps and to the algebra of coderivations of a corresponding symmetric tensor coalgebra. Appendix \ref{App:UnshuffleAtors} focuses on the combinatorial aspects involved in the definition of the \RN product and, consequently, in dealing with $L_\infty$-algebras in the ``multibrackets presentation''. Namely, it describes \emph{unshuffle permutations}, states some properties involving the operators giving the action of these permutations on a given graded vector space, and uses these ideas to yield an explicit expression for the associator pertaining to the non-associative algebra of multilinear maps. At last, appendix \ref{App:PreLie} is a short collection of some basic definitions and properties of graded pre-Lie algebras found scattered in the literature.
\section*{Results}
\addcontentsline{toc}{section}{Results}
\sectionmark{Results}
This thesis aims at investigating the notion of \momap and at working out in full detail concrete examples of interest in different areas of mathematics. Below we summarize the main results obtained.
\iffalse\subsection*{Appendices}\fi
The first thing we want to mention cannot be qualified as a new result but we believe it is noteworthy since it rests on an approach not conventionally used in the multisymplectic literature. \\ In the presentation of the background material (chapters \ref{Chap:Linfinity} and \ref{Chap:MultiSymplecticGeometry}) and in chapter \ref{Chap:MarcoPaper}, we will heavily rely on the notion of \RN product (which is in particular discussed in appendix \ref{App:RNAlgebras}). This choice proves to be convenient because it allows one to express explicit constructions concerning multilinear maps, specifically the multibrackets of certain $L_\infty$-algebras, without having to deal with the combinatorial details implied by the constructions themselves. In a certain sense, this places us at an intermediate level of abstraction between the original presentation of Lada and Stasheff \cite{LadaStasheff} and more abstract presentations, for example via operads \cite{Loday}, in which the central matter is no longer how certain multilinear maps act on vectors but rather the maps themselves, together with their composition rules. In other words, this promotes the philosophy of focusing on multibrackets, interpreted as morphisms in the category of graded vector spaces, without referring to their internal structure. The core of the appendices can be subsumed by the following theorem.
\begin{bigthm}[\emph{Thm \ref{Theorem:RecapGerstenhaber}}]
Let $V$ be a graded vector space and denote by $M^{\sym}(V)$ and $M^{\skew}(V)$ the graded vector spaces of symmetric and skew-symmetric multilinear maps from $V$ into itself. One has the following sequence of isomorphisms in the category of graded right pre-Lie algebras
\begin{displaymath}
\begin{tikzcd}
(M^{\skew}(V[-1]),\skewgerst) \ar[r,"\Dec"',"\cong"] & (M^{\sym}(V),\symgerst) \ar[r,"\widetilde{L}_{\sym}"',"\cong"] & (\coDer({\overline{S(V)}}),\symgerst)
\end{tikzcd}
\end{displaymath}
where:
\begin{itemize}
\item[-] $\cs$ and $\ca$ denote respectively the symmetric and skew-symmetric \RN product;
\item[-] $\coDer(\overline{S(V)})$ denotes the space of coderivations on the reduced symmetric tensor coalgebra $\overline{S(V)}$;
\item[-] $\Dec$ is the \emph{d\'ecalage} operator pertaining to multilinear maps;
\item[-] $\widetilde{L}_{\sym}$ denotes the lift operator to the symmetric tensor coalgebra.
\end{itemize}
\end{bigthm}
Since the above three algebras are isomorphic, it is legitimate to regard them as a single object, \ie as \emph{the} \RN algebra pertaining to the graded vector space $V$. Accordingly, one could talk of three different ``presentations'' of the \RN algebra, respectively in terms of skew-symmetric multibrackets, symmetric multibrackets and coderivations. In proposition \ref{Prop:SymmetricGerstenhaberAssociators} we also provide explicit formulas regarding the failure of associativity in both the symmetric and skew-symmetric ``\RN algebras''. \\ In this framework, it is immediate to view $L_\infty$-algebra structures on the graded vector space $V$ as $2$-nilpotent degree $1$ elements in the \RN algebra. Next, we proceed to state the main results of the present work, expounded from chapter \ref{Chap:LeonidPaper} onwards.
\subsection*{Chapter \ref{Chap:LeonidPaper}}
The contents of this chapter are based on a paper co-authored by the author of the present thesis with Leonid Ryvkin \cite{Miti2019}.
In this chapter, we expand the panorama of examples of \momaps by giving new insights on multisymplectic actions of compact groups, deriving existence results and explicit constructions for \momaps related to actions on spheres. \\ The first novel result presented is the solution of the existence problem for \momaps (\Hcmm) pertaining to compact effective group actions on high-dimensional spheres seen as multisymplectic manifolds (with respect to the standard volume form).
\begin{bigthm}[\emph{Prop. \ref{prop:intransitive} and Thm \ref{thm:surprise}}]\label{thm:mainresult}
Let $G$ be a compact Lie group acting multisymplectically and effectively on the $n$-dimensional sphere $S^n$ equipped with the standard volume form. \\ Then the action admits a \momap if and only if $n$ is even or the action is not transitive.
\end{bigthm}
Independently of the proof, which is mainly based on results in algebraic topology, we also exhibit non-trivial classes of examples.
\begin{itemize}
\item The action of $SO(n)$ on $S^n$ is not transitive, hence it admits a \momap for all $n$. We give an explicit construction for such a \momap in Subsection \ref{subsecson}, extending the construction given in \cite{Callies2016}, which was available only up to the $5$-dimensional sphere.
\item The action of $SO(n+1)$ on $S^n$ admits a \momap for even $n$ only. For the cases where such a \momap exists, giving explicit formulas seems to be a non-trivial task. We give explicit formulas for the first two components $f_1$ and $f_2$ in terms of the standard basis of $\mathfrak {so}(n)$ in Subsection \ref{subsectra}, leaving an explicit description of the higher components as an open question. The core idea will be to focus on the particular cohomology of the acting group rather than working on the analytical problem of finding the primitives required for the construction of the components of a \momap.
\item For $SO(4)$ acting on $S^3$ no \momap exists. However, this problem can be fixed by centrally extending the Lie algebra $\mathfrak {so}(n)$ to a suitable $L_\infty$-algebra (\cf \cite{Callies2016,Mammadova2019}). For instance, the Lie algebra $\mathfrak{so}(2n)$ (giving the action of $SO(2n)$ on $S^{2n-1}$ preserving the $(2n-1)$-plectic volume form) could be extended to an $L_\infty$-algebra concentrated in degrees from $0$ to $(2-2n)$.
\item Apart from the previous constructions, which all pertain to the multisymplectic structure given by the volume form, we also discuss the existence of another natural \momap that can be associated to the action of the exceptional group $G_2$ on $S^7$.
\end{itemize}
\subsection*{Chapter \ref{Chap:MarcoPaper}}
The contents of this chapter are based on a preprint co-authored by the author of the present thesis and Marco Zambon \cite{Miti2020}. In chapter \ref{Chap:MarcoPaper}, we study the higher (multisymplectic) analogue of the standard embedding of the observables Poisson algebra pertaining to a symplectic manifold into the space of sections of the corresponding Lie algebroid. Namely, we produce the following explicit construction for the embedding of the observables $L_\infty$-algebra of a given $n$-plectic manifold into the $L_\infty$-algebra pertaining to the corresponding \emph{Vinogradov} (or \emph{higher Courant}) algebroid:
\begin{bigthm}[\emph{Thm. \ref{thm:iso} and Cor. \ref{cor:Psi}}]\label{bigthm:rogerEmb}
Let $n \leq 4$.
Consider an $n$-plectic manifold $(M,\omega)$ and take the corresponding \emph{Vinogradov (higher Courant) algebroid} $E^n:=TM\oplus\Lambda^{n-1}T^\ast M$ twisted by $\omega$. Denote by $L_\infty(M,\omega)$ the observables $L_\infty$-algebra associated to the former and by $L_\infty(E^{n},\omega)$ the $L_\infty$-algebra associated to the latter. \\ There is an $L_\infty$-algebra embedding $\Psi$ defined by the following diagram
\begin{displaymath}
\begin{tikzcd}[column sep = huge, row sep = large]
L_\infty(M,\omega) \ar[r,dashed,"\Psi"] \ar[d,sloped,"\sim"] & L_\infty(E^n,\omega) \\
(\mathcal{A},\pi) \ar[r,"\sim","\Phi"'] & (\mathcal{A},\mu) \ar[u,hook]
\end{tikzcd}
\end{displaymath}
where:
\begin{itemize}
\item $\mathcal{A}$ is a graded vector subspace of $L_{\infty}(E^n,\omega)$ concentrated in degrees $1-n \leq k \leq 0$. It consists of Hamiltonian pairs (pairs composed of Hamiltonian forms together with the corresponding Hamiltonian field) in degree $0$ and of differential $(n-1+k)$-forms in degree $k<0$.
\item $\pi$ and $\mu$ denote respectively the restrictions of the $L_\infty$-algebra structures of $L_\infty(M,\omega)$ and $L_\infty(E^{n},\omega)$ to the graded vector space $\mathcal{A}$.
\item $\Phi$ is an $L_\infty$-isomorphism given in components by
\begin{displaymath}
\Phi_k := \left(\frac{2^{k-1}}{(k-1)!} B_{k-1}\right)~\pairing_-^{\ca (k-1)}~: \mathcal{A}^{\wedge k } \to \mathcal{A}
\end{displaymath}
where $B_k$ denotes the $k$-th \emph{Bernoulli number}, $\pairing_-$ is the skew-symmetric pairing operator on $E^n$, and the superscript $\ca(k)$ denotes the \RN product of the given operator with itself $k$ times.
\end{itemize}
\end{bigthm}
The proof relies on the observation that the two aforementioned $L_\infty$-structures $\pi$ and $\mu$ on $\mathcal{A}$ can be reconstructed, via the \RN product, out of the same set of just four graded skew-symmetric multilinear maps. We establish the existence of the embedding for $n\le 4$ but we expect the proof to extend to the case of arbitrary $n$. \\ This result generalizes a similar construction performed by Rogers \cite{Rogers2013} in the $2$-plectic case, involving the ordinary twisted Courant algebroid. Given the embedding $\Psi$, we investigate its compatibility with \Momaps and gauge transformations:
\begin{bigthm}[\emph{Thm. \ref{thm:comm}}]
Consider two gauge-related $n$-plectic forms $\omega$ and $\widetilde{\omega}:= \omega + \d B$ on the smooth manifold $M$. Consider a smooth action of the Lie group $G$ on $M$ admitting a \momap with respect to both $\omega$ and $\widetilde{\omega}$. Denote by $f:\mathfrak{g}\to L_\infty(M,\omega)$ and $\tilde{f}:\mathfrak{g}\to L_\infty(M,\widetilde{\omega})$ the two \momaps. Denote by $(E^n,\omega)$ the Vinogradov algebroid twisted by $\omega$. \\ The following diagram commutes in the category of $L_\infty$-algebras:
\begin{displaymath}
\begin{tikzcd}
& L_{\infty}(M,\omega) \ar[r,"\Psi"] & L_{\infty}(E^n,\omega) \ar[dd,"\tau_B"] \\[-1em]
\mathfrak{g}\ar[ru,"f"] \ar[dr,"\widetilde{f}"'] \\[-1em]
& L_{\infty}(M,\widetilde{\omega}) \ar[r,"\Psi"] & L_{\infty}(E^n ,\widetilde{\omega})
\end{tikzcd}
\end{displaymath}
where $\Psi$ is the embedding introduced in theorem \ref{bigthm:rogerEmb} and the rightmost vertical arrow is the gauge transformation isomorphism $L_{\infty}(E^n,\omega) \cong L_{\infty}(E^n,\widetilde{\omega}) $.
\end{bigthm}
Carrying out this construction in the $1$-plectic case, it is possible to interpret the morphism $\Psi$ in terms of prequantization.
We expect that our construction could play a similar role in the prequantization of higher-dimensional systems.
\subsection*{Chapter \ref{Chap:MauroPaper}}
The contents of this chapter are based on two papers co-authored by the author of this thesis and Mauro Spera \cite{Miti2018,Miti2019a}. In this chapter, we exhibit a \momap pertaining to the infinitesimal action of divergence-free vector fields, \ie of the infinite-dimensional Lie algebra $\sDiff_0$, on three-dimensional Euclidean space, which is a $2$-plectic manifold with respect to the standard volume form $\nu$:
\begin{bigthm}[\emph{Thm. \ref{Thm:HydrodynamicalComoment}}]
The infinitesimal action of $v:\mathfrak{g}\to \mathfrak{X}(\mathbb{R}^3)$, concretely given by the inclusion of divergence-free fields in the set of all vector fields, admits a \momap $(f)$ with components $f_j: \Lambda^j {\mathfrak g} \to \Omega^{2-j} (\R^3)$ given by
\begin{displaymath}
\begin{aligned}
f_1 =&~ \flat\circ {\rm curl}^{-1} \\
f_2 =&~ \Delta^{-1} \circ\delta\circ \mu_2
\end{aligned}
\end{displaymath}
where $\mu_2(p):= f_{1} (\partial p) + \iota(v_p) \omega$ (a term introduced in remark \ref{Rem:TermMuByMauro}), $\delta$ is the codifferential induced by the Hodge structure, and the inverses of the vector calculus operators involved, $\curl$ and $\Delta$, have to be thought of as the corresponding Green operators (hence they are not unique).
\end{bigthm}
This object has an interesting interpretation in the context of fluid dynamics since it transgresses to the standard hydrodynamical co-momentum map of Arnol'd, Marsden, Weinstein and others. In theorem \ref{Thm:RiemannGeneralization}, we also show how this construction can then be generalized to a suitable class of Riemannian manifolds. Furthermore, we discuss a covariant phase space interpretation of the coadjoint orbits associated to the Euler evolution for perfect fluids and, in particular, of Brylinski's manifold of smooth knots. The last observation prepares the ground for an application of the aforementioned \momap in the context of knot theory. Namely, we provide a reinterpretation of the (Massey) higher order linking numbers in terms of conserved quantities within the multisymplectic framework, thus determining knot-theoretic analogues of first integrals in involution.
\begin{bigthm} [\emph{Prop. \ref{Prop:MasseyMess}, Thm. \ref{thm:VorticityFormExact} and \ref{Thm:MasseyMess}}]
Consider an $n$-link $L=\coprod_{i=1}^n L_i $ with $L_i:S^1\to \R^3$ the parametrization of the $i$-th component.
\begin{itemize}
\item The velocity $1$-form $v_L$ (definition \ref{Def:Velocity1Form}) pertaining to $L$ is a Hamiltonian form of $L_\infty(\R^3,\nu)$ and it lies in the image of the \momap $f$ given in the previous theorem (in other terms, $v_L$ is a ``momentum'' with respect to the action of divergence-free fields).
\end{itemize}
\smallskip
Consider the case in which the $n$-link $L$ above satisfies the property that the cohomology classes of all Massey $2$-forms $\Omega_{i j}:= v_i \wedge v_j$, for any pair of components $i$ and $j$ of $L$, vanish in the cohomology $H(S^3\setminus L)$ of the link complement. In other words, all mutual first order linking numbers are zero.
\begin{itemize}
\item The primitives $v_{i j}$ corresponding to the Massey forms $\Omega_{i j}$ are Hamiltonian with Hamiltonian vector fields given by $\xi_{i j}= \flat \cdot \ast \cdot \Omega_{i j}$. Hence they are in particular momenta with respect to the \momap given above (\ie $f_1(\xi_{i j})= v_{i j}$).
\item $v_{i j}$ are strictly conserved quantities (see definition \ref{Def:conservedQuantities}) along the flow given by the velocity $1$-form $v_L$ associated to the link.
\item The commutation rule (with respect to the binary bracket $\lbrace\cdot,\cdot\rbrace$ of $L_\infty(\R^3,\nu)$)
$$\lbrace v_{i j}, v_{k \ell}\rbrace=0$$
holds for any given primitives of two arbitrary Massey $2$-forms.
\end{itemize}
\end{bigthm}
The second part of the previous statement can easily be extended to higher-order linking numbers of order greater than $2$. The derivation of the previous results essentially relies on the vortex-theoretic approach to $n$-links. Considering \emph{fluid configurations with linked singular vortices} is the cornerstone idea for building our bridge between multisymplectic geometry and knot theory. \\ By further pursuing this path, it was also possible to give a semiclassical interpretation of the HOMFLYPT polynomial, building on the Liu-Ricca hydrodynamical approach to the latter and on the Besana-Spera symplectic approach to framing.
\section*{Previous works}
\addcontentsline{toc}{section}{Previous works}
\sectionmark{Previous works}
As explained in the previous section, most of the original results presented here also appeared in the following four preprints \cite{Miti2018}, \cite{Miti2019}, \cite{Miti2019a} and \cite{Miti2020}. To date, three of them have been accepted for publication. This project owes a lot to the recent developments in multisymplectic geometry of the last ten years. \\ In large part, our research is built upon the works written by Chris Rogers \cite{Rogers2010,Rogers2011,Rogers2013}, Marco Zambon \cite{Zambon2012,Fregier2015,Callies2016} and Leonid Ryvkin \cite{zbMATH06448534,Ryvkin2016,Ryvkin2016a,Ryvkin2018}, together with their collaborators John Baez, Martin Callies, Yael Fregier, Camille Laurent-Gengoux, and Tilmann Wurzbacher. All our considerations regarding the possible application of multisymplectic geometry in knot theory are instead motivated by a series of articles authored by Alberto Besana, Vittorio Penna and Mauro Spera \cite{BeSpe06,Pe-Spe89,Pe-Spe92,Pe-Spe00,Pe-Spe02,Spe06}. The contents of the appendices draw inspiration from numerous lecture notes and sources available online. Above all, especially in what pertains to the identification of the algebraic structure of the graded space of skew-symmetric multibrackets, we should pinpoint the course on deformation theory by Marco Manetti \cite{Manetti-website}. We also want to mention that the present work was submitted shortly after the publication of the thesis by Leyli Mammadova \cite{Mammadova2020a}, which bears a similar title. Both theses share comparable background material but differ in the flavour of the presentation. Her novel results mainly concern two further generalizations of the notion of \momap, given by the so-called \emph{weak homotopy comoment map} and \emph{$L_2$ moment maps}.
\section*{Conventions}\label{Sec:conventions}
\addcontentsline{toc}{section}{Conventions}
\sectionmark{Conventions}
Throughout the thesis we will essentially work within two categories only. On the algebraic side, we will mostly work in the category of graded (not necessarily finite-dimensional) vector spaces and, on the geometric side, we will mainly deal with finite-dimensional smooth real manifolds.
\iffalse\subsection{QuestioneComposizioni}\fi
Accordingly, the composition of maps\footnote{In this text, function composition is always meant as precomposition $(f\circ g)(x) = f(g(x))$; in other terms, we apply transformations on the left. The same convention holds for the composition of morphisms in any category, as in \cite[\S 1.8]{MacLane1978}.}, mostly indicated with the symbol $\circ$, will often be denoted by simple juxtaposition in those situations where it cannot be confused with the point-wise product of functions. When dealing with multilinear maps, we will introduce the Gerstenhaber, symmetric \RN, and skew-symmetric \RN products, denoted respectively by $\gerst, \symgerst$ and $\skewgerst$. These can be thought of as suitable notions of composition between two multilinear maps.
\iffalse\subsection{graded multilinear algebra}\fi
Our conventions in graded multilinear algebra will be extensively recalled in section \ref{Sec:ConventionsMultiLinearAlgebras} and further justified in appendix \ref{App:GradedMultilinearAlgebra}. We only mention here that, for any given graded vector space $V$, we will seamlessly identify linear maps from $V^{\otimes n},V^{\odot n}$ or $V^{\wedge n}$ into $W$ with maps $\bigtimes^n V \to W$ of arity $n$ enjoying respectively the extra property of being multilinear, multilinear graded symmetric and multilinear graded skew-symmetric. \\
\iffalse\subsection{homological algebra}\fi
In what pertains to homological algebra, given any cochain complex $C=(C^\bullet,d)$ we denote by $Z^k(C)=\ker(d^{(k)})$ the subgroup of cocycles and by $B^k(C)=d~C^{k-1}$ the subgroup of coboundaries. In the case of chain complexes, we employ the same notation with lower indices.
\iffalse\subsection{Differential geometry}\fi
On the geometric side, all our objects will be smooth unless otherwise specified. We will take for granted the notions of smooth manifold, smooth map, diffeomorphism, smooth bundle, vector field, differential form, and de Rham calculus. In particular, all the manifolds considered will be smooth, finite-dimensional, Hausdorff, and second-countable. In section \ref{Sec:MultiCartan}, we will state our conventions regarding the Cartan calculus of multi-vector fields. We only stress here that the contraction operator $\iota$ with decomposable multi-vector fields $p=\xi_1\wedge\dots\wedge \xi_n$ will be given by $\iota_p = \iota_{\xi_n}\dots\iota_{\xi_1}$.
\iffalse\subsection{Group Actions}\fi
The vast majority of this work deals with the notion of symmetry in the sense of group actions on smooth manifolds. We take mostly for granted the notions of Lie group, Lie algebra, $G$-principal bundle, smooth Lie group action, (infinitesimal) Lie algebra action, and equivariant map. We will denote by $Ad$ the adjoint action of a Lie group on itself, by $ad$ the representation of the Lie group on its Lie algebra, and by $ad^\ast$ the coadjoint representation of the group on the dual of its Lie algebra. \emph{Left (smooth) actions} will be employed throughout, unless otherwise specified (see for instance remark \ref{rem:RightActionMess}). Hence, by $\vartheta:G\action M$ we mean a group homomorphism $\vartheta: G \to \Diff(M)$ into the diffeomorphism group, smooth in the appropriate sense, where the group structure on $\Diff(M)$ is again understood according to the composition convention above. (A right action is, on the other hand, given by an antihomomorphism.)
The corresponding Lie algebra action will be the morphism of Lie algebras $\mathfrak{g}\to \mathfrak{X}(M)$ given by fundamental vector fields according to the prescription of equation \eqref{eq:LeftFundVF}.
\iffalse\subsection{Combinatorics}\fi
In what concerns the combinatorics involved in this work, most of the constructions that we will encounter will be expressed in terms of unshuffles. We denote by $S_n$ the group of permutations of $n$ elements and by $\ush{i_1,\dots, i_\ell}$ the subset of $(i_1,\dots,i_\ell)$-\emph{unshuffle permutations}. Recall that $\sigma \in S_{n}$ is an $(i,n-i)$-unshuffle if $\sigma_{k}<\sigma_{k+1}$ for any $k\neq i$. Further details are given in appendix \ref{App:UnshuffleAtors}.
\iffalse\subsection{Cat theory}\fi
In the appendices, we will make elementary use of some basic concepts in category theory like functor, natural transformation, limits, colimits, and monoidal structure.
\chapter{$L_\infty$-algebras}\label{Chap:Linfinity}
$L_{\infty}$-algebras, also known as \emph{strongly homotopy Lie algebras} or SH Lie algebras, are a generalization of \emph{Lie algebras} where one requires that the Jacobi equation is satisfied only up to a controlling term. \\ According to Stasheff \cite{Stasheff2019}, the idea of considering ``Jacobi equations up to homotopy'' began to sprout in conjunction with the developments in homotopy theory occurring during the mid-20th century. However, the precise mathematical formalization of ``strongly homotopy Lie algebras'' only appeared in 1993, in a highly influential paper by Lada and Stasheff \cite{LadaStasheff}. The authors seemed prompted to crystallize this concept after its progressive and ubiquitous appearance, roughly sparked in the eighties, in many different branches of theoretical physics. In fact, some first examples had already been found in supergravity, string theory and quantization (see \cite{nlab:l-infinity-algebra} for an updated list of applications in physics). \\ Besides their physical applications, $L_{\infty}$-structures took on a crucial role in deformation theory, exemplified by Deligne's guiding principle that \emph{the deformation of any given algebraic structure or geometric structure is governed by a strong homotopy algebra} \cite{Deligne}. \\ In the last decade, they also began to assume a prominent role in the context of the geometric approach to \emph{classical} (in the sense of ``local'' and ``prequantum'') \emph{field theory}, \ie the classical mechanics of systems with a continuum of degrees of freedom. Namely, Baez and Rogers noticed the existence of an infinite-dimensional $L_\infty$-algebra behaving like the analogue for continuous media of the observables (Poisson) algebra of ordinary mechanical systems. This observation led to the introduction of the so-called \emph{(Rogers) $L_\infty$-algebra of observables} associated to any multisymplectic manifold \cite{Rogers2010}. \\ The latter concept will be of pivotal importance in the following chapters. In particular, most of the results involving the multisymplectic analogue of \emph{moment maps} can only be correctly phrased in the language of $L_\infty$-structures.
\medskip
In this chapter, we present some background material on $L_{\infty}$-algebras. The contents are not new in their substance (which can largely be found in the seminal articles \cite[\S 3]{LadaStasheff}, \cite{LadaMarkl} as well as in more recent surveys like \cite[Ch.
2, \S 1]{Schatz2009}, \cite[\S 6]{Doubek2007}, \cite{Ryvkin2016a} and \cite{Reinhold2019}) but are somehow ``new'' in the form of their presentation. Namely, our discussion relies heavily on the notion of \emph{\RN} products between graded multilinear maps. \\ The main motivation for this unusual choice is that it allows one to perform computations on ``multibrackets'' in a more agile way than by dealing with expressions evaluated on arbitrary lists of objects. Furthermore, this notation allows one to keep track more easily of the many prefactors involved in the combinatorics underlying the theory of $L_\infty$-algebras, without requiring one to pass to other layers of abstraction (like coderivations or operads) that are not perfectly suited for the kind of explicit constructions that we wish to deliver. Specifically, in section \ref{Sec:ConventionsMultiLinearAlgebras} we explicitly state the conventions in graded multi-linear algebra, including the definition of \RN algebras, that will be employed throughout this thesis. This section is meant as a summary of the contents of appendices \ref{App:GradedMultilinearAlgebra} and \ref{App:RNAlgebras}, which are self-contained surveys on the \RN products starting from the basics in graded linear algebra. \\ In section \ref{Sec:LinfinityAlgebras}, we discuss three possible equivalent perspectives on $L_\infty$-algebras. Namely, as a graded vector space with a family of skew-symmetric multibrackets satisfying higher Jacobi equations, as a graded vector space with a family of symmetric multibrackets satisfying (symmetric) higher Jacobi equations, and as a cofree graded cocommutative coalgebra endowed with a degree $1$ codifferential. We also give explicit expressions for morphisms and their composition. Finally, in subsection \ref{SubSection:studycase} we discuss some examples related to the specific $L_\infty$-structures studied in the rest of the thesis.
\section{Conventions in graded multi-linear algebra}\label{Sec:ConventionsMultiLinearAlgebras}
Most of the objects of this thesis sit in the category of \emph{$\ZZ$-graded vector spaces}. From now on, we will often drop the $\ZZ$ and simply talk about \emph{graded vector spaces}. In this section we summarize the conventions that will be adopted in the body of this text. The rationale of these notations, especially in what pertains to the so-called ``Koszul convention'', is more extensively explained in appendices \ref{App:GradedMultilinearAlgebra} and \ref{App:RNAlgebras}. \\ We think of a graded vector space as a functor from the set $\ZZ$, seen as a discrete category, to the category $\Vect$ of (not necessarily finite-dimensional) vector spaces over a field of characteristic $0$ (which will always be the field $\mathbb{R}$ of real numbers in this text). We write a graded vector space as
\begin{displaymath}
V= (k \mapsto V^k) ~.
\end{displaymath}
In practical terms, a graded vector space can be thought of as a family of vector spaces parametrized by $\mathbb{Z}$. We denote by $V^\oplus = \bigoplus_{n\in\ZZ} V^n$ the ordinary vector space obtained as the direct sum of all the components of the graded vector space $V$ (see lemma \ref{lemma:totalfunctor} and remark \ref{rem:totaloplus} in the appendix). \\ We call $V^k$ the \emph{$k$-th component of $V$} and an element $v\in V^k$ is called a \emph{homogeneous element in degree $k$}. We denote by $|\cdot|$ the \emph{grading map} assigning to any homogeneous element $v\in V^k$, for any $k\in \ZZ$, its degree $k$.
We say that the graded vector space is \emph{concentrated in degree $n$} if $V^k=0$ for $k\neq n$. Ordinary vector spaces will be regarded as graded vector spaces concentrated in degree $0$. \\ Morphisms between graded vector spaces $\varphi:V\to W$ are natural transformations between the corresponding functors, hence they are defined by a collection of linear maps
\begin{displaymath}
\varphi = \lbrace\varphi^n \in \Hom_{\Vect}(V^n,W^n) \rbrace_{n \in \ZZ} ~.
\end{displaymath}
We will refer to elements in $\Hom_{\Vect^\ZZ}(V,W)$ as \emph{graded morphisms} or \emph{degree preserving (linear) maps}. Epimorphisms, monomorphisms and isomorphisms in $\Vect^{\ZZ}$ are given by collections of epimorphisms, monomorphisms and isomorphisms in $\Vect$, hence by surjective, injective and invertible linear maps respectively. The category $\Vect^{\ZZ}$ is closed: the hom space between two given graded vector spaces is a graded vector space given by
\begin{equation}\label{eq:gvecthomspace}
\Hom_{\Vect^{\ZZ}}(V,W) = \left(k \mapsto \Hom_{\Vect}(V^k,W^k)\right) ~.
\end{equation}
We will often neglect this ``internal grading'', understanding it as the ordinary vector space obtained by direct sum over all the components; note that one has
\begin{displaymath}
\Hom_{\Vect^{\ZZ}}(V,W)^{\oplus} = \Hom_{\Vect}(V^{\oplus},W^{\oplus}) ~.
\end{displaymath}
The category $\Vect^{\ZZ}$ inherits from $\Vect$ the property of being Abelian and hence complete and cocomplete. In particular, limits in $\Vect^{\ZZ}$ are constructed out of collections of limits on the components. For instance, the \emph{direct sum} of graded vector spaces is defined as the graded vector space given as follows:
\begin{displaymath}
V\oplus W := (k \mapsto V^k\oplus W^k) ~.
\end{displaymath}
The action of the \emph{shift endofunctor} $[k]:\Vect^{\ZZ}\to\Vect^{\ZZ}$, for any $k\in\ZZ$, is given on components as
\begin{displaymath}
(V[k])^i = V^{k+i} ~.
\end{displaymath}
A homogeneous vector $v\in V$ in degree $|v|=n$ can be seen as a homogeneous vector of $V[k]$ in degree $(n-k)$; the latter will be denoted as
\begin{displaymath}
v_{[k]} \in (V[k])^{(n-k)} ~
\end{displaymath}
\ie $|v_{[k]}| = |v|-k$. By definition, one has the following identification of functors $[k][\ell]=[\ell][k]=[k+\ell]$ (see subsection \ref{sec:degreeshifts}). \\ Besides the shift functor, we will sometimes make use of the following two functors altering the grading of graded vector spaces. We call \emph{truncation up to degree $n$} the functor
\begin{equation}\label{eq:truncationFunctor}
\morphism{\trunc_{n}~} {\Vect^\ZZ} {\Vect^\ZZ} {\left(k\mapsto V^k\right)} {\left(k\mapsto \begin{cases} V^k & \text{ if } k<n\\ 0 & \text{ if } k\geq n \end{cases} \right)} ~,
\end{equation}
and call \emph{degree reversing} the functor
\begin{equation}\label{eq:degreeReversingFunctor}
\morphism{\setminus~} {\Vect^\ZZ} {\Vect^\ZZ} {\left(k\mapsto V^k\right)} {\left(k\mapsto (\setminus V)^k = V^{-k}\right)} ~.
\end{equation}
In both cases, the action on morphisms is the obvious one. We call \emph{homogeneous map in degree $n$} between the graded vector spaces $V$ and $W$ any graded morphism
\begin{displaymath}
\morphism{f} {V} {W[n]} {v} {(f(v))_{[|f|]}} ~,
\end{displaymath}
where $|f|=n$. Homogeneous maps in degree $0$ are the graded morphisms defined above.
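For concreteness, let us illustrate these conventions on a standard example (used here only as an illustration). Consider the graded vector space $\Omega = (k \mapsto \Omega^k(M))$ of differential forms on a smooth manifold $M$, graded by form degree. Since $(\Omega[1])^k = \Omega^{k+1}(M)$, the de Rham differential, acting as $d:\Omega^k(M)\to\Omega^{k+1}(M)$, can be regarded as a graded morphism $d:\Omega\to\Omega[1]$, \ie a homogeneous map in degree $1$ with $|d|=1$. Similarly, an ordinary vector space $U$, seen as a graded vector space concentrated in degree $0$, has shifts $U[k]$ concentrated in degree $-k$, consistently with $|u_{[k]}|=|u|-k$.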
The action of the shift functor $[\ell]$ on a homogeneous map $f:V\to W[|f|]$ (\cf remark \ref{rem:shiftHomogMapsTrickySign}) is given by
\begin{displaymath}
\morphism{f[\ell]} {V[\ell]} {W[|f|][\ell]} {v_{[\ell]}} {(f(v))_{[|f|][\ell]}} ~.
\end{displaymath}
\\ By the closedness property of the category $\Vect^{\ZZ}$, homogeneous maps in a fixed degree $n$ also form a graded vector space. Neglecting again this internal grading, we define the graded vector space of homogeneous maps in any degree as
\begin{equation}\label{eq:homogmapshomspace}
\underline{\Hom}_{\Vect^{\ZZ}}(V,W):= \left( k \mapsto \left(\Hom_{\Vect^{\ZZ}}(V,W[k])\right)^{\oplus} \right) ~.
\end{equation}
All the algebraic structures considered in this thesis will be graded linear, hence they can be thought of as graded vector spaces endowed with extra structure, \ie as objects of subcategories of $\Vect^{\ZZ}$. Therefore, from now on, we will omit the subscript $\Vect^{\ZZ}$ when denoting the hom-spaces defined in equations \eqref{eq:homogmapshomspace} and \eqref{eq:gvecthomspace}. Furthermore, we will often refer to elements in $\underline{\Hom}^k(V,W)$ simply as \emph{linear maps} (in degree $k$). We stress that in our convention only homogeneous maps of degree $0$ are ``morphisms''; hence, when we write a diagram in the category $\Vect^{\ZZ}$ of graded vector spaces, arrows always have to be interpreted as degree $0$ linear maps. We will also employ the following decorated notation for particular subclasses of morphisms:
\begin{center}
\begin{tabular}{ c l }
$\rightarrowtail$ & monomorphism, \ie degree-wise injective, \\
$\twoheadrightarrow$ & epimorphism, \ie degree-wise surjective, \\
$\hookrightarrow$ & inclusion monomorphism, \\
$\xrightarrow{\sim}$ & isomorphism, \ie degree-wise invertible.
\end{tabular}
\end{center}
The category of graded vector spaces inherits the symmetric monoidal structure from the category of ordinary vector spaces. Namely, we define the \emph{tensor product of two graded vector spaces} $V$ and $W$ as the graded vector space
\begin{displaymath}
V\otimes W := \left( k \mapsto \bigoplus_{i+j=k} V^i\otimes W^j \right) ~.
\end{displaymath}
The action of the tensor product functor on any given pair of graded morphisms $f_i:V_i\to W_i$, \ie degree $0$ homogeneous maps, is the graded morphism given by
\begin{displaymath}
\morphism{f_1\otimes f_2} {V_1\otimes V_2} {W_1 \otimes W_2} {u_1\otimes u_2} {f_1(u_1)\otimes f_2(u_2)}
\end{displaymath}
The respective \emph{symmetric braiding} is given, for any graded vector spaces $V$ and $W$, by the graded linear isomorphism $B_{V,W}$, defined on homogeneous elements by
\begin{displaymath}
\morphism{B_{V,W}} {V\otimes W} {W \otimes V} {x\otimes y} {(-)^{|x||y|} y \otimes x} ~.
\end{displaymath}
The choice of that sign prefactor in the definition of the braiding takes the name of \emph{Koszul convention}. \\ Out of this convention (see section \ref{sec:bloodyKoszulConvention}), one can introduce the \emph{d\'ecalage} isomorphism, defined on any $n$-tuple of graded vector spaces $(V_1,\dots, V_n)$ as:
\begin{equation}\label{eq:deca}
\isomorphism{\dec} {V_1[1]\otimes\dots\otimes V_n[1]} {(V_1\otimes\dots \otimes V_n)[n]} {v_{1[1]}\otimes\dots\otimes v_{n[1]}} {(-)^{\sum_{i=1}^{n}(n-i)|v_i|}(v_1\otimes\dots\otimes v_n)_{[n]}} ~.
\end{equation} For any homogeneous maps $f \in \underline{\Hom}^{|f|}(V,W), f' \in \underline{\Hom}^{|f'|}(V',W')$ we define, with a slight abuse of notation (see notation \ref{not:abuseotimes}), $f\otimes f' \in \underline{\Hom}^{|f|+|f'|}(V\otimes V', W \otimes W')$ as the homogeneous map acting on homogeneous elements as \begin{equation}\label{eq:KoszulConvTensorProducts} \morphism{f~\widetilde{\otimes}~ f'} {V\otimes V'} {(W\otimes W')[|f|+|f'|]} {v \otimes v'} {(-)^{|v||f'|}(f(v)\otimes f'(v'))_{[|f|+|f'|]}} ~. \end{equation} Accordingly, there is also a sign rule for the composition of tensor products of homogeneous maps: \begin{equation}\label{Eq:TensorHomogeneousMaps} (f'\otimes g') \circ (f \otimes g) = (-)^{|g'||f|}(f'\circ f)\otimes (g'\circ g) ~. \end{equation} Since $\Vect^{\ZZ}$ is a symmetric monoidal category, for any graded vector space $V$ and for any positive integer $n$ there are two canonical representations of the symmetric group $S_n$ over $V^{\otimes n}=\otimes^n V$. We denote by $B_\sigma$ and $P_\sigma$ the even, respectively odd, representation of the permutation $\sigma\in S_n$ on a given $V^{\otimes n}$, namely \begin{displaymath} \morphism{B_\sigma} {V^{\otimes n}} {V^{\otimes n}} {x_1\otimes \dots \otimes x_n} {\epsilon(\sigma;x_1,\dots,x_n) x_{\sigma_1}\otimes \dots \otimes x_{\sigma_n}} \end{displaymath} and $P_\sigma = (-)^\sigma B_{\sigma}$. The coefficient $\epsilon(\sigma;x_1,\dots,x_n)$ is the so-called \emph{Koszul sign} and it is defined as the sign of the sub-permutation of $\sigma$ involving only the elements in odd degree (see remarks \ref{rem:aboutKoszulSign} and \ref{rem:practicalComputKoszulSign} for further details). We will usually omit the dependence on the list of graded vectors since it should appear clear from the context. Namely, we will often abuse the notation $\epsilon(\sigma;x_1,\dots,x_n)$ by writing $\epsilon(\sigma)$, and writing $\chi(\sigma):= \epsilon(\sigma)(-)^{\sigma}$ for the sign involved in the definition of $P_\sigma$, also called the \emph{odd Koszul sign}. \\ Given any subset $I\subset S_n$ of the group of permutations of $n$ elements, we denote by $B_I$ and $P_I$ the operators given by the sum over all the elements of $I$: \begin{displaymath} B_I = \sum_{\sigma \in I} B_\sigma~,\qquad P_I=\sum_{\sigma \in I} P_\sigma~ ~, \end{displaymath} we will often refer to them as the even and odd \emph{permutators of $I$}. Observe that the d\'ecalage isomorphism determines a natural transformation between the even and the odd representation of $S_n$, namely the following diagram commutes in the category of graded vector spaces (see remark \ref{rem:decVsBraiding} and proposition \ref{Prop:DecalageOfPermutation}): \begin{displaymath} \begin{tikzcd} (V[1])^{\otimes n} \ar[r,"\dec"] \ar[d,"B_{\sigma}"'] & (V^{\otimes n})[n] \ar[d,"(P_{\sigma}){[n]}"] \\ (V[1])^{\otimes n} \ar[r,"\dec"] & (V^{\otimes n})[n] \end{tikzcd} ~. \end{displaymath} We denote by $\odot^n V = V^{\odot n}$ and $\Lambda^n V = V^{\wedge n}$ the spaces of coinvariant elements with respect to the even and odd representation of $S_n$. One has the following splitting sequence in the category of graded vector spaces \begin{equation}\label{eq:symskewsplitting} \begin{tikzcd} 0 \ar[r, shift left =.5ex] &[-5ex] \Lambda^n V \ar[r, shift left =.75ex,hookrightarrow,"N_a"] & \bigotimes^n V \ar[r, shift left =.75ex,two heads,"\pi_s"] \ar[l, shift left =.75ex,two heads,"\pi_a"] & \bigodot^n V \ar[r] \ar[l, shift left =.75ex,hookrightarrow,"N_s"] &[-5ex] 0 \end{tikzcd}~.
\end{equation} We refer to proposition \ref{lemma:splitsequencebrutta} in the appendix (specifically to equation \eqref{eq:SymSkewOperatorsDef-appendix}) for the explicit definition of all these mappings. We only mention here that, on the symmetric side, one has \begin{equation}\label{eq:SymSkewOperatorsDef} \begin{aligned}[c] \pi_s:&~ x_1\otimes\dots\otimes x_n \mapsto x_1\odot\dots\odot x_n ~, \\ {N_s}:&~ x_1\odot\dots\odot x_n \mapsto \left(\sum_{\sigma\in S_n}x_{\sigma_1}\otimes\dots \otimes x_{\sigma_n}\right) ~, \end{aligned} \end{equation} and their composition yields the \emph{(graded) symmetrizator operator} \begin{displaymath} \symAtor_{(n)}:= \frac{1}{n!}~N_s\circ \pi_s \equiv \left(\sum_{\sigma \in S_n} \frac{1}{n!} B_\sigma \right)~. \end{displaymath} The d\'ecalage isomorphism restricts compatibly with these spaces of coinvariants, namely the following diagram commutes (see lemma \ref{Lemma:DecalageRestrictToSymTens}): \begin{equation}\label{eq:decalageRestriction} \begin{tikzcd}[column sep = huge,row sep =small] (V^{\odot n})[n] \ar[r,"dec\big|_{V^{\odot n}}"] \ar[d,hook]& (V[1])^{\wedge n}\ar[d,hook] \\ (V^{\otimes n}) [n] \ar[r,"\dec"] & (V[1])^{\otimes n} \\ (V^{\wedge n})[n] \ar[r,"dec\big|_{V^{\wedge n}}"'] \ar[u,hook]& (V[1])^{\odot n} \ar[u,hook] \end{tikzcd} \end{equation} % \subsection{Tensor algebras and coalgebras} Given a graded vector space $V$ we denote the tensor, symmetric tensor, and skew-symmetric tensor spaces\footnote{In the literature these spaces are usually called \emph{reduced tensor spaces} (see \eg \cite{Manetti-website-coalgebras,Reinhold2019}) in order to distinguish them from the (augmented) tensor spaces. The latter are defined by a similar summation with index $n$ starting from $0$ and by implicitly assuming that ${T(V)}^0=S(V)^0=\Lambda(V)^0 = \RR$. We do not need this case in the body of the thesis; the (augmented) case is only mentioned in appendix \ref{App:GradedMultilinearAlgebra}.} respectively as the graded vector spaces: \begin{displaymath} \overline{T(V)} := \bigoplus_{n\geq 1} V^{\otimes n} ~,\qquad \overline{S(V)} := \bigoplus_{n\geq 1} V^{\odot n} ~,\qquad \overline{\Lambda(V)} := \bigoplus_{n\geq 1} V^{\wedge n} ~. \end{displaymath} At times, we will also adopt the notation $\overline{S(V)}=S^{\geq 1}(V)$ to stress the fact that the direct sum runs over arities $n\geq 1$. Observe that, according to this definition, $\overline{T(V)}^k = V^{\otimes k}$ if and only if $V$ is a graded vector space concentrated in degree $1$. These spaces carry a natural structure of (non-unital) graded associative algebra, given by the action of $\otimes$, $\odot$ and $\wedge$ on vectors, and a (non-counital) coassociative coalgebra structure, given by deconcatenation. In particular $\overline{S(V)}$ and $\overline{\Lambda(V)}$ are graded (co)commutative and graded (co)anticommutative respectively. We call \emph{decomposable} any element of the tensor algebra that can be expressed as a tensor product of given vectors. In what follows we will essentially only be concerned with the coalgebra structure on $\overline{S(V)}$; we refer to appendix \ref{App:GradedMultilinearAlgebra}, and references therein, for a more exhaustive discussion of the subject. For the rest of this subsection we will only fix some basic notation\footnote{We point out that the notation employed here slightly departs from the more decorated notation used in the appendix. See notation \ref{not:droppingthesuperc}.}.
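Before moving on, let us record what the graded (anti)commutativity of $\odot$ and $\wedge$, together with the operators of equation \eqref{eq:SymSkewOperatorsDef}, amounts to in arity two; this is a direct consequence of the definitions above, spelled out only as an illustration: for homogeneous $x,y\in V$, \begin{displaymath} x\odot y = (-)^{|x||y|}~ y\odot x ~,\qquad x\wedge y = -(-)^{|x||y|}~ y\wedge x ~,\qquad \symAtor_{(2)}(x\otimes y) = \tfrac{1}{2}\left(x\otimes y + (-)^{|x||y|}~ y\otimes x\right) ~. \end{displaymath}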
We will denote, with a slight abuse of language, the \emph{deconcatenation} and \emph{unshuffled deconcatenation} with the same symbol $\Delta$ (see definitions \ref{def:Decon} and \ref{def:unshuffleDecon} for the explicit expressions). We recall that a \emph{coalgebra morphism} from $\overline{T(V)}$ to $\overline{T(W)}$ is a graded vector space morphism $f:\overline{T(V)}\to \overline{T(W)}$, hence a degree $0$ homogeneous map, compatible with the coproduct. We denote by \begin{displaymath} \Hom_{coAlg}(\overline{T(V)},\overline{T(W)}):= \left\lbrace\left. F \in \Hom(\overline{T(V)},\overline{T(W)}) ~ \right\vert ~ \Delta\circ F = (F\otimes F) \circ \Delta \right \rbrace \end{displaymath} the graded set of all coalgebra morphisms. We stress that it is not compatible with the linear structure of $\Hom(\overline{T(V)},\overline{T(W)})$, thus it is only a graded subset. \\ Given a fixed coalgebra morphism $F\in\Hom_{coAlg}(\overline{T(V)},\overline{T(W)})$, we recall that a \emph{degree $k$ $F$-coderivation} (see section \ref{sec:abtractFcoderivations}) of the coalgebra $\overline{T(V)}$ is a degree $k$ homogeneous linear map $Q\colon \overline{T(V)} \to \overline{T(W)}$ such that $\Delta\circ Q=(Q \otimes F + F\otimes Q)\circ \Delta$. To explain the terminology, notice that this equation is what one obtains by dualizing the property of being a derivation of an algebra. We denote the graded vector space of coderivations along $F$ as \begin{displaymath} \coDer(\overline{T(V)},\overline{T(W)};F) ~. \end{displaymath} We denote by $\coDer(\overline{T(V)})$ the special case where $W=V$ and $F$ is given by the identity. Elements in $\coDer(\overline{T(V)})$ will be simply called \emph{coderivations}. The same definitions apply, \emph{mutatis mutandis}, to the symmetric coalgebra case. Tensor (co)algebras enjoy the special property of being (co)free objects with respect to certain subcategories of graded (co)algebras. This feature can be expressed by universal properties that can in turn be translated into the following lemmas (stated for the symmetric case, but analogous results hold for $\overline{T(V)}$ and $\overline{\Lambda(V)}$): \begin{lemma}[Lift to a coalgebra morphism (\emph{Prop. \ref{Prop:UniversalPropertyCocommutativeGradedCoalgebras} and Thm. \ref{Theorem:HomCoAlgISOSymmetric}})]\label{lem:liftToMorph} There is a bijection between morphisms of coalgebras $F:\overline{S(V)}\to \overline{S(W)}$ and morphisms of graded vector spaces $f\colon \overline{S(V)} \to W$.
Namely the following graded sets are isomorphic \begin{displaymath} \begin{tikzcd}[ column sep=1em,row sep=-1ex] \Hom_{coAlg}(\overline{S(V)},\overline{S(W)}) \ar[r,equal,"\sim"] & \Hom(\overline{S(V)},W) \ar[r,equal,"\sim"] & \bigoplus_{n\geq 1}\Hom(V^{\odot n},W) \\ F \ar[r,mapsto] & \pr_W \circ F \ar[r,mapsto] & (\pr_W \circ F\eval_{V},\dots,\pr_W \circ F \eval_{V^{\odot n}},\dots) \\ L_{\sym}(f) & f= \bigoplus_{i\geq 1} f_i \ar[l,mapsto] & (f_1,\dots,f_n,\dots) \ar[l,mapsto] \end{tikzcd} \end{displaymath} where the direct function is called \emph{corestriction} and is obtained by postcomposition with the standard projection $\pr_W: \overline{S(W)}\twoheadrightarrow W$, and the inverse is called \emph{(unique) lift to a coalgebra morphism} and is given explicitly by \begin{equation}\label{Eq:LiftMorphismMapping-noappendix} \hspace{-.025\textwidth} \morphism{L_{\sym}} {{\Hom}({\overline{S(V)}},W)} {\Hom_{\text{coAlg}}({\overline{S(V)}},{\overline{S(W)}})} {f} {\displaystyle \sum_{n>0}\sum_{s=1}^n \pi_s \circ \left[\mkern-60mu\sum_{\mkern75mu \substack{i_1+\dots+i_s = n \\ 0<i_1\leq i_2 \leq \dots \leq i_s}}\mkern-55mu (f_{i_1}\otimes \dots \otimes f_{i_s}) \circ B^{<}_{i_1,\dots,i_s} \right]\circ N_s~} \end{equation} where $N_s,\pi_s$ are the operators defined by equation \eqref{eq:SymSkewOperatorsDef} and $B^<_{i_1,\dots,i_s}\equiv B_{\ush{i_1,\dots,i_s}^<}$ denotes the sum over all the ordered $(i_1,\dots,i_s)$-unshuffles (ordered unshuffleator, see definition \ref{Def:OrderedUnshuffleator}), \ie the sum runs through all $(i_1,\dots,i_s)$-unshuffles $\sigma$ satisfying the extra condition \begin{displaymath} \sigma(i_1+\dots+i_{j-1}+1)<\sigma(i_1+\dots+i_{j}+1) \quad \text{if}~i_{j-1}=i_j ~. \end{displaymath} \end{lemma} Given a coalgebra morphism $F:\overline{S(V)}\to \overline{S(W)}$, we call its \emph{$k$-th corestriction} the symmetric multilinear operator \begin{displaymath} f_k= \pr_W \circ F \eval_{V^{\odot k}}~: V^{\odot k} \to W~. \end{displaymath} \begin{notation}\label{rem:operatorSln} The linear operator between square brackets in equation \eqref{Eq:LiftMorphismMapping-noappendix} will appear again in our constructions. We single out the following definition for future reference. \\ Given $f\in\Hom(\overline{T(V)},W)$, equivalently expressible as a collection $(f_1,\dots,f_k,\dots)$ with $f_k\in \Hom(V^{\otimes k},W)$, for any $1\leq\ell\leq m$ we denote by $\mathcal{S}_{\ell,m} (f)$ the graded multilinear map $V^{\otimes m} \to W^{\otimes \ell}$ defined as \begin{equation}\label{Eq:Soperator} \mathcal{S}_{\ell,m} (f) = \left( \sum_{\substack{k_{1}+\cdots+k_{\ell}=m\\1\leq k_{1}\leq\cdots\leq k_{\ell}}} (f_{k_1}\otimes\cdots\otimes f_{k_\ell})\circ B_{k_1,\ldots,k_\ell}^< \right) ~. \end{equation} With this notation one has: \begin{displaymath} L_{\sym}(f) = \sum_{m>0}\sum_{\ell=1}^m \pi_s \circ \Sop_{\ell,m}(f)\circ N_s ~. \end{displaymath} \note{Perhaps the choice of this symbol is not ideal, as it may create confusion with the symmetrizators $\symAtor$.} \end{notation} Equation \eqref{Eq:LiftMorphismMapping-noappendix} allows one to lift uniquely any graded morphism, \ie any degree $0$ homogeneous map, to a coalgebra morphism.
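To illustrate the role of the ordering condition in the ordered unshuffleator (a minimal example, added here only for clarity): for $(i_1,i_2)=(1,1)$ the extra condition selects the identity permutation only, so that on homogeneous elements \begin{displaymath} B^{<}_{1,1}(x_1\otimes x_2) = x_1\otimes x_2 ~,\qquad\text{whereas}\qquad B_{1,1}(x_1\otimes x_2) = x_1\otimes x_2 + (-)^{|x_1||x_2|}~ x_2\otimes x_1 ~, \end{displaymath} since every permutation in $S_2$ is a $(1,1)$-unshuffle.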
A similar construction applies also to homogeneous maps in arbitrary degree, yielding this time a unique lift to a coderivation: \begin{lemma}[Lift to a coderivation {\cite[Lemma 2.4]{LadaMarkl}}]\label{lem:liftToCoder} \note{From the appendix, this lemma could be stated for an arbitrary $f:\overline{S(V)}\to W$, but the resulting expressions are even more involved.} Given any graded morphism $f_1:V\to W$, denote by $f:\overline{S(V)}\to W$ its trivial extension to the whole symmetric tensor space. There exists a bijection between degree $k$ coderivations $Q:\overline{S(V)}\to \overline{S(W)}$ along $F=L_{\sym}(f)$ and degree $k$ linear maps $q\colon \overline{S(V)}\to W$. Namely the following graded vector spaces are isomorphic \begin{displaymath} \hspace{-.075\textwidth} \begin{tikzcd}[ column sep=2em,row sep=-1ex] \coDer(\overline{S(V)},\overline{S(W)};L_{\sym}(f)) \ar[r,equal,"\sim"] & \underline{\Hom}(\overline{S(V)},W) \ar[r,equal,"\sim"] & \bigoplus_{n\geq 1}\underline{\Hom}(V^{\odot n},W) \\ Q \ar[r,mapsto] & \pr_W \circ Q \ar[r,mapsto] & (\pr_W \circ Q\eval_{V},\dots,\pr_W \circ Q \eval_{V^{\odot n}},\dots) \\ \widetilde{L}_{\sym}(q) & q= \bigoplus_{i\geq 1} q_i \ar[l,mapsto] & (q_1,\dots,q_n,\dots) \ar[l,mapsto] \end{tikzcd} \end{displaymath} where the direct function is the \emph{corestriction} as in lemma \ref{lem:liftToMorph}, and the inverse is called \emph{(unique) lift to an $F$-coderivation}, with $F=L_{\sym}(f)$, and is given explicitly by \begin{equation}\label{Eq:LiftCoDerMapping-noappendix} \hspace{-.025\textwidth} \morphism{\widetilde{L}_{\sym}} {\underline{\Hom}^k({\overline{S(V)}},W)} {\coDer^k({\overline{S(V)}},{\overline{S(W)}};F)} {q} {\displaystyle \sum_{n>0}\sum_{s=1}^n \pi_s \circ \left[\left( q_{n-s+1}\otimes f^{\otimes(s-1)}\right) \circ B_{n-s+1,s-1} \right]\circ N_s~} \end{equation} where $N_s,\pi_s$ are the operators defined by equation \eqref{eq:SymSkewOperatorsDef} and $B_{n-s+1,s-1}\equiv B_{\ush{n-s+1,s-1}}$ denotes the sum over all $(n-s+1,s-1)$-unshuffles. \end{lemma} Note that the term between square brackets in equation \eqref{Eq:LiftCoDerMapping-noappendix} is a graded linear operator $V^{\otimes n}\to W^{\otimes s}$. In the case that $V=W$ and $f_1=\Unit$, one has that $F=\Unit$ and $f^{\otimes n} = \mathbb{1}_n$; the corresponding term plays a role in the definition of the \emph{\RN product}. \\ More explicitly, for any given degree $k$ homogeneous map $m\in \underline{\Hom}^k(\overline{S(V)},V)$, the action of its lift on homogeneous elements is given by: \begin{equation}\label{eq:coder} \widetilde{L}_{\sym}(m)~(x_1,\dots,x_n):= \sum_{i=1}^n \mkern-50mu \sum_{\mkern70mu\sigma \in \ush{i,n-i}}\mkern-50mu \epsilon(\sigma)~ m_{i}(x_{\sigma_1},\dots,x_{\sigma_{i}})\odot x_{\sigma_{i+1}}\odot\dots\odot x_{\sigma_n} ~, \end{equation} where $\ush{i,j}$ denotes the subgroup of $(i,j)$-unshuffles in the permutation group $S_{i+j}$. The use of the term ``lift'' follows from the commutativity of the following diagrams in the category of graded vector spaces: \begin{displaymath} \begin{tikzcd}[column sep =huge] \overline{S(V)} \ar[r,"F=L_{\sym}(f)"] \ar[dr,"f"']& \overline{S(W)} \ar[d,"\pr_W"]\\ &W \end{tikzcd} \quad,\qquad \begin{tikzcd}[column sep =huge] \overline{S(V)} \ar[r,"Q=\widetilde{L}_{\sym}(q)"] \ar[dr,"q"'] & \overline{S(W)}[|q|] \ar[d,"\pr_W{[|q|]}"]\\ &W[|q|] \end{tikzcd} \quad . \end{displaymath} Observe that for degree $0$ linear maps, \ie graded morphisms, both the lift to a coalgebra morphism and the lift to a coderivation are well-defined.
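For instance, writing out equation \eqref{eq:coder} in the lowest arities (a direct specialization of the formula above, for a map $m=m_1+m_2$ having only components of arity one and two), the lift acts on words of length one and two as \begin{displaymath} \widetilde{L}_{\sym}(m)(x_1) = m_1(x_1) ~,\qquad \widetilde{L}_{\sym}(m)(x_1\odot x_2) = m_2(x_1,x_2) + m_1(x_1)\odot x_2 + (-)^{|x_1||x_2|}~ m_1(x_2)\odot x_1 ~. \end{displaymath}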
In this degree $0$ case, the decorated notation $L$ and $\widetilde{L}$ is needed to distinguish between the two possible lifts. \subsection{Differential graded vector spaces}\label{sec:HomologicalAlgebrasConventions} We call a \emph{differential graded vector space} any pair $(V,\d)$ composed of a graded vector space $V\in \Vect^\ZZ$ together with a $2$-nilpotent homogeneous map in degree $1$ from $V$ into itself, \ie $\d \in \underline{\End}^1(V)$ and $\d\circ \d = 0$. Most of the time we will make use of the language of homological algebra. In those terms, a differential graded vector space can be simply seen as a \emph{cochain complex}, \ie as a sequence of vector spaces $V^k$ together with operators $d^{(k)}\equiv \d\eval_{V^k}: V^k\to V^{k+1}$ such that $d^{(k+1)}\circ d^{(k)}=0$. Diagrammatically, the differential graded vector space $(V,\d)$ will be depicted as \begin{displaymath} \begin{tikzcd} \dots \ar[r] & V^{k-1} \ar[r,"\d"] & V^{k} \ar[r,"\d"] & V^{k+1} \ar[r] & \dots \end{tikzcd} ~. \end{displaymath} Accordingly, the operator $\d$ will be called the \emph{coboundary operator} and homogeneous elements $v\in V^k$ will be called \emph{$k$-cochains}. We will refer to elements in $(\ker(\d))^n\subset V^n$ as \emph{$n$-cocycles} and to elements in $(\Im(\d))^n \subset V^n$ as \emph{$n$-coboundaries}. We will also employ the shorthand notation $B(V):=\Im(\d)$ and $Z(V):=\ker(\d)$ for the graded vector spaces of coboundaries and cocycles. Clearly $B(V)\subset Z(V)$. We will call \emph{the cohomology of $(V,\d)$} the graded vector space \begin{displaymath} H(V,\d)= \dfrac{\ker(\d)}{\Im(\d)} := \left( k \mapsto \dfrac{\ker(d^{(k)})}{\Im(d^{(k-1)})}\right) \end{displaymath} and its $k$-component $H^k(V,\d)$ will be called the \emph{$k$-th cohomology group of $(V,\d)$}. Recall that in homological algebra there is the dual notion of \emph{chain complex}, defined by a $2$-nilpotent boundary operator $\partial$ which decreases the grading. Therefore we are tacitly adopting the \emph{cohomological convention} when dealing with differential graded vector spaces. One can simply get a ``homological'' differential graded vector space by applying the degree-reversing functor $\setminus:\Vect^\ZZ\to \Vect^\ZZ$ to the (``cohomological'') differential graded vector space $(V,\d)$. We will call a \emph{chain map} between two cochain complexes $(V_1,\d_1)$ and $(V_2,\d_2)$ any graded morphism $f:V_1\to V_2$ which commutes with the coboundary operators, that is, the following diagram commutes in the category of (ordinary) vector spaces: \begin{displaymath} \begin{tikzcd} \dots \ar[r] & V^{k-1}_1 \ar[r,"\d_1"]\ar[d,"f^{k-1}"] & V^{k}_1 \ar[r,"\d_1"]\ar[d,"f^k"] & V^{k+1}_1 \ar[r]\ar[d,"f^{k+1}"] & \dots \\ \dots \ar[r] & V^{k-1}_2 \ar[r,"\d_2"] & V^{k}_2 \ar[r,"\d_2"] & V^{k+1}_2 \ar[r] & \dots \end{tikzcd} ~. \end{displaymath} More synthetically, the latter diagram can be seen as a commutative square in the category $\Vect^{\ZZ}$ of graded vector spaces: \begin{displaymath} \begin{tikzcd} V_1 \ar[r,"\d_1"] \ar[d,"f"'] & V_1 \ar[d,"f"] \\ V_2 \ar[r,"\d_2"] & V_2 \end{tikzcd} \end{displaymath} We will call \emph{chain homotopy} from a chain map $h:V_1 \to V_2$ to a chain map $f:V_1 \to V_2$ a degree $-1$ homogeneous map $\varphi\in \underline{\Hom}^{-1}(V_1,V_2)$ such that \begin{equation}\label{eq:chainhomotopy} \d_2 \circ \varphi + \varphi \circ \d_1 = f-h ~.
\end{equation} Diagrammatically, we will denote a chain homotopy as a $2$-morphism: \begin{displaymath} \begin{tikzcd}[column sep = huge] V_1 \ar[r,blue,"h", bend left=20, ""{name=U, below}] \ar[r,"f"', bend right=20, ""{name=D}] &[1em] V_2 \ar[Rightarrow, purple,"\varphi", from=U, to=D] \end{tikzcd} \end{displaymath} Consider now a differential graded vector space $C=(C,\d)$. If $C$ is a (counital, cocommutative) coalgebra and $\d\in \coDer^1(C)\subset \underline{\End}^1(C)$ is a degree $1$ coderivation, we will talk of a \emph{graded differential coalgebra} and call $\d$ a \emph{codifferential} rather than a coboundary operator. When the above coalgebra $C$ is moreover \emph{cofree}, \ie $C\cong \overline{S(V)}$ for a certain graded vector space $V$, one talks about a \emph{cofree graded differential coalgebra}. This last notion will play a central role in the coalgebraic presentation of an $L_\infty$-algebra. We will call a \emph{(cochain) bicomplex} any $(\ZZ\times\ZZ)$-graded (or \emph{bi-graded}) vector space \begin{displaymath} V=\left( (k,\ell)\mapsto V^{k,\ell}\right) \end{displaymath} together with a pair of $2$-nilpotent homogeneous maps $\d_v \in \underline{\End}^{(1,0)}(V)$ and $\d_h \in \underline{\End}^{(0,1)}(V)$ such that \begin{displaymath} \d_v\circ \d_h + \d_h \circ \d_v=0 ~. \end{displaymath} Namely, this is given by the following commuting diagram \begin{displaymath} \begin{tikzcd}[column sep = huge] & \vdots & \vdots & \\ \cdots \ar[r,"(-)^{i+1}\d_h"]& V^{i+1,j} \ar[u,"\d_v"]\ar[r,"(-)^{i+1}\d_h"]& V^{i+1,j+1} \ar[r,"(-)^{i+1}\d_h"]\ar[u,"\d_v"]& \cdots \\ \cdots \ar[r,"(-)^i\d_h"]& V^{i,j} \ar[u,"\d_v"]\ar[r,"(-)^i\d_h"]& V^{i,j+1}\ar[r,"(-)^i\d_h"] \ar[u,"\d_v"]& \cdots \\ & \vdots \ar[u,"\d_v"]& \vdots \ar[u,"\d_v"] \end{tikzcd} \end{displaymath} where each row and each column is separately a cochain complex. (In other words, we are adopting the ``anticommuting square'' convention, in the wording of \cite{nlab:double_complex}.) \\ Given a bicomplex $V^{\bullet,\bullet}=(V,\d_h,\d_v)$ we call \emph{total complex} of $V^{\bullet,\bullet}$ the $\ZZ$-graded vector space \begin{displaymath} \tot(V) = \left( k \mapsto \bigoplus_{i+j=k}V^{i,j}\right) \end{displaymath} together with the coboundary operator $\d_{tot} = \d_h + \d_v$. \\ Given two differential graded vector spaces $(V,\d_v)$ and $(W,\d_h)$, we call \emph{tensor product complex}, denoted as $(V,\d_v)\otimes (W,\d_h)$, the total complex corresponding to the bigraded vector space \begin{displaymath} (V\otimes W)^{\bullet,\bullet} := \left( (k,\ell) \mapsto V^k\otimes W^\ell \right) \end{displaymath} together with the coboundary operators $\d_v\otimes \Unit_W$ and $\Unit_V\otimes \d_h$. Observe that the \emph{Koszul convention} implies that the action of $\d_{tot}$ on a homogeneous element $x\otimes y\in V\otimes W$ reads as follows: \begin{displaymath} \d_{tot}(x\otimes y) = (\d_v(x))\otimes y + (-)^{|x|} x\otimes (\d_h(y)) ~. \end{displaymath} Iterating this procedure, we will call the \emph{$n$-iterated tensor product} of a cochain complex $(V,\d)$ the cochain complex $(V^{\otimes n},\d_{\otimes n})$ where \begin{displaymath} V^{\otimes n} = \left( k \mapsto \bigoplus_{i_1+\dots+i_n = k} V^{i_1}\otimes V^{i_2}\otimes \dots \otimes V^{i_n} \right) \end{displaymath} and \begin{equation}\label{eq:totalcoboundaryoperator} \d_{\otimes n} = \sum_{i=1}^n \Unit_{i-1}\otimes \d \otimes \Unit_{n-i} ~.
\end{equation} \subsection{Multibrackets and Coderivations}\label{sec:RNstuff} Consider two graded vector spaces $V$ and $W$; for any $n\geq 0$ and $k \in \ZZ$ we denote by \begin{displaymath} M_{n,k}(V,W):= \underline{\Hom}^k (V^{\otimes n},W) \end{displaymath} the graded vector space of degree $k$ (homogeneous) $n$-multilinear maps. We take for granted, and keep implicit, the universal property of multilinear maps; hence we will be free to understand elements of $M_{n,k}(V,W)$ as $n$-ary functions $V\times\dots \times V \to W[k]$ which are separately linear in each entry. Accordingly, homogeneous linear maps from $V^{\odot n}$ to $W$ will be said to be \emph{of arity $n$}, and we will often denote the image of a multilinear map $\mu_n$ on $x_1\otimes\dots\otimes x_n$ as $\mu_n(x_1,\dots,x_n)$, separating elements by commas and omitting the symbol $\otimes$. The same applies to graded symmetric and graded skew-symmetric multilinear maps: we denote by \begin{displaymath} M_{n,k}^{\sym}(V,W):= \underline{\Hom}^k (V^{\odot n},W) ~,\quad M^{\skew}_{n,k}(V,W):= \underline{\Hom}^k (V^{\wedge n},W) ~, \end{displaymath} respectively the spaces of degree $k$ symmetric and skew-symmetric $n$-multilinear homogeneous maps on $V$ with values in $W$. It follows from the splitting sequence \eqref{eq:symskewsplitting} that $M_{n,k}(V,W)=M_{n,k}^{\sym}(V,W)\oplus M_{n,k}^{\skew}(V,W)$. When $W=V$ we will lighten the notation by omitting the second entry. Considering all the possible arities and degrees collectively, and neglecting the ``internal'' grading of the graded vector space $\underline{\Hom}(V^{\otimes n},W[k])$, one obtains the $(\NN_0\times \ZZ)$-graded (bi-graded) vector space $M_{\bullet,\bullet}(V,W)$. The same reasoning applies also to the subspace of (skew-)symmetric multilinear maps. \\ The d\'ecalage isomorphism, introduced in eq. \eqref{eq:deca}, induces by precomposition an isomorphism of graded vector spaces \begin{displaymath} \isomorphism{\Dec} {\underline{\Hom}^k(V^{\otimes n}, W)} {\underline{\Hom}^{k+n-1}(V[1]^{\otimes n}, W[1])} {\mu} {\Dec(\mu):= \mu[n]\circ \dec} \end{displaymath} with \begin{equation}\label{eq:ExplicitDECA} \Dec(\mu)(x_{1\,[1]},\dots,x_{n\,[1]}) = (-)^{\sum_{i=1}^{n}(n-i)|x_i|} \left( \mu(x_1,\dots,x_n)\right)_{[n]} ~. \end{equation} Slightly more generally, we will also define the d\'ecalage of a map $f\in \underline{\Hom}(V^{\otimes n},W^{\otimes m})$ as $\Dec(f):= \dec^{-1}[|f|+n-m]\circ f[n] \circ \dec$. It is possible to read the operator $\Dec$ as a genuine graded isomorphism (not bi-graded) by appropriately contracting the indices $n$ and $k$ to give a certain $\ZZ$-grading. Conventionally, we introduce the \emph{graded vector spaces of graded symmetric and graded skew-symmetric multilinear maps} on $V$ as \begin{equation}\label{eq:RNspaces} \begin{aligned} M^{\sym}(V):=&~ \left( k \mapsto \bigoplus_{n} M^{\sym}_{n, k}(V)\right) ~, \\ M^{\skew}(V) :=&~ \left( k \mapsto \bigoplus_{n+i=k+1} M^{\skew}_{n, i}(V)\right) ~. \end{aligned} \end{equation} According to this choice, the d\'ecalage operator defined above restricts to a well-defined isomorphism of graded vector spaces (see remark \ref{rem:DecasGradedMorph}) \begin{displaymath} \Dec: M^{\skew}(V) \xrightarrow{\quad \sim \quad} M^{\sym}(V[1]) ~. \end{displaymath} We finally notice that equation \eqref{eq:RNspaces} implicitly involves the choice between two possible $\ZZ$-gradings on $M(V)$.
Namely, given a homogeneous multilinear map $\mu \in M_{n,k}(V,W)$, we introduce the two gradings \begin{displaymath} |\mu|=k ~,\qquad ||\mu||=k+n-1 ~, \end{displaymath} the first is the degree in the sense of homogeneous maps and the second is the grading as an element of \begin{displaymath} \left( k \mapsto \bigoplus_{n+i=k+1}M_{n,i}(V,W) \right) ~. \end{displaymath} \subsubsection{\RN product for multibrackets}\label{sec:RNProdMB} The graded vector spaces of graded symmetric and graded skew-symmetric multilinear maps constitute graded right pre-Lie algebras (see appendix \ref{App:PreLie}) $(M^{\sym}(V),\cs)$ and $(M^{\skew}(V),\ca)$ when endowed with the so-called \emph{\RN} product (see \cite{Nijenhuis1967} for the original definition on ordinary vector spaces and \cite[Thm. 3.3]{Lecomte1992} or \cite[\S 1.2]{Delgado2018b} for the graded case, noting that our definition differs by a sign; the formula for the graded symmetric case can also be found in \cite[\S 1.1]{Bandiera2016}). % Denoting by $B_{i_1,\dots, i_\ell}$ and $P_{i_1,\dots,i_\ell}$ the operators summing over all even and odd actions of permutations in $\ush{i_1,\dots,i_\ell}$ (see the ``conventions'' section on page \pageref{Sec:conventions} or appendix \ref{App:UnshuffleAtors}), \ie \begin{displaymath} \begin{aligned} B_{i_1,\dots,i_\ell} ~\left( x_1\otimes x_2 \otimes \dots \right) =& \mkern-30mu\sum_{\mkern50mu\sigma \in \ush{i_1,\dots,i_\ell}} \mkern-30mu \epsilon(\sigma) x_{\sigma_1}\otimes x_{\sigma_2} \otimes \dots \\ P_{i_1,\dots,i_\ell} ~\left( x_1\otimes x_2 \otimes \dots \right) =& \mkern-30mu \sum_{\mkern50mu\sigma \in \ush{i_1,\dots,i_\ell}} \mkern-30mu \epsilon(\sigma)(-)^\sigma x_{\sigma_1}\otimes x_{\sigma_2} \otimes \dots ~, \end{aligned} \end{displaymath} the \RN product can be succinctly written as \begin{equation}\label{Eq:RNProducts-myway} \begin{aligned} \mu_n \cs \mu_m =& \mu_n \circ (\mu_m \otimes \mathbb{1}_{n-1}) \circ B_{m,n-1} \\ \mu_n \ca \mu_m =& (-)^{|\mu_m|(n-1)} ~\mu_n \circ (\mu_m \otimes \mathbb{1}_{n-1}) \circ P_{m,n-1} ~. \end{aligned} \end{equation} More explicitly, evaluating on homogeneous elements $x_i\in V$, the products read as follows: \begin{equation}\label{Eq:RNProducts-explicit} \mathclap{ \begin{aligned} \mu_n \cs \mu_m &(x_1,\dots,x_{m+n-1}) = \\ =&~ \sum_{\sigma \in \ush{m,n-1}} \mkern-20mu \epsilon(\sigma) \mu_n\Big(\mu_m(x_{\sigma_1},\dots,x_{\sigma_m}),x_{\sigma_{m+1}}\dots,x_{\sigma_{m+n-1}} \Big) \\[2em] \mu_n \ca \mu_m &(x_1,\dots,x_{m+n-1}) = \\ =&~ (-)^{|\mu_m|(n-1)}\mkern-30mu \sum_{\sigma \in \ush{m,n-1}}\mkern-20mu (-)^\sigma \epsilon(\sigma) \mu_n\Big(\mu_m(x_{\sigma_1},\dots,x_{\sigma_m}),x_{\sigma_{m+1}}\dots,x_{\sigma_{m+n-1}}\Big) \end{aligned} } \end{equation} where $\epsilon(\sigma)$ is the Koszul sign. \\ These products are not associative, and their non-associativity is measured by the \emph{associator} multilinear operators\footnote{For the skew-symmetric case, replace $\cs$ with $\ca$.} \begin{displaymath} \alpha(\cs ;\mu_\ell,\mu_m,\mu_n) := (\mu_\ell \cs \mu_m) \cs \mu_n - \mu_\ell \cs (\mu_m \cs \mu_n) ~.
\end{displaymath} Explicitly, they are given by the following equation (see proposition \ref{Prop:SymmetricGerstenhaberAssociators}): \note{ \begin{displaymath} \begin{aligned} \alpha&({\cs};\mu_\ell,\mu_m,\mu_n) = \mu_\ell \circ (\mu_m \otimes \mu_n) \circ B_{\ell,m,n} \\ \alpha&({\ca};\mu_\ell,\mu_m,\mu_n) = \pm \mu_\ell \circ (\mu_m \otimes \mu_n) \circ P_{\ell,m,n} \end{aligned} \end{displaymath} } % \begin{align}\label{Eq:explicitassociators} \alpha&({\cs};\mu_\ell,\mu_m,\mu_n)(x_1,\dots,x_{m+n+\ell-2}) = \\ =& \mkern-35mu\sum_{\qquad\sigma \in \ush{m,n,\ell-2}}\mkern-40mu \mathscr{s}(\cs;\sigma)~ \mu_\ell\Big(\mu_m(x_{\sigma_1},\dots,x_{\sigma_m}),\mu_n(x_{\sigma_{m+1}},\dots,x_{\sigma_{m+n}}),x_{\sigma_{m+n+1}},\dots,x_{\sigma_{m+n+\ell-2}}\Big) \notag \end{align} where $\mathscr{s}(\cs;\sigma)$ and $\mathscr{s}(\ca;\sigma)$ are sign prefactors given explicitly by \begin{displaymath} \begin{aligned} \mathscr{s}(\cs;\sigma)&=(-)^{|\mu_n|(|x_1|+\dots+|x_m|)}\epsilon(\sigma) \\ \mathscr{s}(\ca;\sigma)&= (-)^{{|\mu_n|(m+\ell)} +{(|\mu_m|(\ell-1))} +m(n+1)} (-)^{\sigma}\mathscr{s}(\cs;\sigma) \end{aligned} \end{displaymath} % In our conventions, the operator $\Dec$, giving the d\'ecalage of multilinear maps, is compatible with the \RN products (see theorem \ref{Thm:ManettiFactorizationOnGerstenhaberAlgebras}). Namely it induces an isomorphism in the category of graded right pre-Lie algebras: \begin{equation}\label{eq:Dec} \Dec: (M^{\skew}(V), \ca) \xrightarrow{\quad \sim \quad} (M^{\sym}(V[1]),\cs) ~. \end{equation} \begin{remark}[Comparing our conventions with the literature]\label{rem:comparesignmesswithliterature} It has to be pointed out how our conventions differ from those found in the literature. \begin{itemize} \item Notice that our definition of the d\'ecalage of multilinear maps $\Dec$ does not acquire a sign coming from the degree of the homogeneous map given in input, as it appears in the foundational paper \cite[Eq. 3]{LadaStasheff} or in many other references in our bibliography (\eg \cite[\S 1]{Fiorenza2006}, \cite[Rem 1.7]{Bandiera2016}). This discrepancy can be traced back to a different convention in defining the action of the shift functor on homogeneous maps in odd degree, or to a different choice of isomorphism $W[|\mu_n|][n]\cong W[1][|\mu_n|+n-1]$ in the diagram defining $\Dec$: \begin{displaymath} \begin{tikzcd} V^{\otimes n}[n] \ar[r,"\mu_n{[n]}"] & W[|\mu_n|][n] \ar[r,equal] & W[1][|\mu_n|+n-1] \\ (V[1])^{\otimes n} \ar[u,"\dec"] \ar[urr,dashed,"\Dec(\mu_n)"'] \end{tikzcd} \end{displaymath} \item Our definition of the skew-symmetric \RN product, see equation \eqref{Eq:RNProducts-myway}, differs by a sign with respect to the corresponding definition given in \cite[Thm. 3.3]{Lecomte1992}. This is a byproduct of a different sign convention in the definition of the Gerstenhaber product (see remark \ref{Rem:signsProblemwithGerstenhaberProducts}), together with our different convention regarding the sign of the d\'ecalage of multilinear maps. In appendix \ref{App:RNAlgebras} we introduced this sign in order to guarantee that the \RN products are preserved under d\'ecalage, as expressed in equation \eqref{eq:Dec} (to be read as a diagram in the category of graded algebras). \end{itemize} \end{remark} \subsubsection{\RN product for coderivations}\label{sec:RNProdCoder} Let $V$ be a $\ZZ$-graded vector space and consider the graded vector space of homogeneous maps from $V$ into itself, $\underline{\End}(V):=\underline{\Hom}(V,V)$.
The latter forms an associative algebra with respect to the obvious composition of homogeneous maps \begin{displaymath} \begin{tikzcd} V \ar[rr,bend left = 30,"(g\cdot f)"]\ar[r,"f"] & V[|f|]\ar[r,"{g[|f|]}"] & V[|g|+|f|] \end{tikzcd} \end{displaymath} and therefore it forms a graded Lie algebra with respect to the (graded) commutator \begin{displaymath} [f,g]_\circ = f\circ g - (-)^{|f||g|} g \circ f ~. \end{displaymath} Consider then the graded vector subspace of coderivations $\coDer(\overline{S(V)}) \subset \underline{\End}(\overline{S(V)})$. The composition of two coderivations $Q_1$ and $Q_2$ is a linear map $Q_1\circ Q_2\colon \overline{S(V)} \to \overline{S(V)}$, which in general fails to be a coderivation. However the graded commutator \begin{displaymath} [Q_1,Q_2]:= Q_1\circ Q_2 - (-)^{|Q_1||Q_2|} Q_2 \circ Q_1 \end{displaymath} is a coderivation of degree $|Q_1|+|Q_2|$. In other words, the space of coderivations is a graded Lie subalgebra of $(\underline{\End}(\overline{S(V)}),[\cdot,\cdot])$, but it is not an associative subalgebra with respect to the composition of graded linear maps. It is then customary to introduce the \emph{\RN product} (or ``composition'') on $\coDer(\overline{S(V)})$ defined as \begin{displaymath} \morphism{\cs} {\coDer(\overline{S(V)})\otimes\coDer(\overline{S(V)})} {\coDer(\overline{S(V)})} {Q\otimes Q'} {\widetilde{L}_{\sym}\left(\pr_V \circ Q \circ Q' \right)} ~, \end{displaymath} where $\pr_V: \overline{S(V)}\twoheadrightarrow V$ denotes the standard projection on $V^{\odot 1}$ and $\widetilde{L}_{\sym}$ is the ``lift to a coderivation'' operator introduced in lemma \ref{lem:liftToCoder}. The composition $\cs$, which is not associative, makes the space of coderivations into a graded right pre-Lie algebra; in particular, it induces the same Lie bracket as the one inherited from $\underline{\End}(\overline{S(V)})$, since \begin{displaymath} [Q,Q']_{\cs}= \widetilde{L}_{\sym}\left(\pr_V \circ\left( Q \circ Q'-(-)^{|Q||Q'|}~Q' \circ Q \right)\right) = [Q,Q']_{\circ} ~, \end{displaymath} where the last equality follows from the fact that $[Q,Q']_{\circ}$ is itself a coderivation, so that lifting its corestriction returns it unchanged. The naming comes from the following sequence of isomorphisms in the category of graded right pre-Lie algebras: \begin{equation}\label{eq:NocciolodiAppendice} \begin{tikzcd} (M^{\skew}(V),\ca) \ar[r,"\Dec"]& (M^{\sym}(V[1]),\cs) \ar[r,"\widetilde{L}_{\sym}"]& (\coDer(\overline{S(V[1])}),\cs) \end{tikzcd} \end{equation} coming from the natural identification \begin{displaymath} M^{\sym}(V) = \bigoplus_{n\geq 1} \underline{\Hom}(V^{\odot n},V) = \underline{\Hom}(\overline{S(V)},V) \end{displaymath} given by the commutation rules of categorical (co)limits with the $\Hom$ functor. \begin{remark} Notice that here, and in appendix \ref{App:RNAlgebras}, we decided to present this topic by defining the products on $M^{\sym}(V)$ and $\coDer(\overline{S(V)})$ independently and proving that they are isomorphic as a second step. Often in the literature, see for example \cite{Bandiera2016,Manetti-website-coalgebras,Miti2020}, one chooses to go the opposite way. Namely, one starts from the definition of $\cs$ on $\coDer(\overline{S(V)})$ and then pulls back the product to $M^{\sym}(V)$, along the lift, and to $M^{\skew}(V[-1])$, along the d\'ecalage. For instance, the \emph{Nijenhuis-Richardson product} of two given maps $a,b\in \underline{\Hom}(\overline{S(V)},V)$ can be explicitly given by \begin{equation}\label{eq:compsymm} a\cs b:= \pr_V(C_a\circ C_b) ~, \end{equation} where $C_a$ denotes the lift of $a$ to a coderivation on $\overline{S(V)}$.
It is not too difficult to see that this is explicitly obtained by summing insertions of $b_j$ in $a_i$, where $a_i:=a \vert_{V^{\odot i}}$ (see equation \eqref{Eq:RNProducts-explicit}). \end{remark} Let us stress again that $\widetilde{L}_{\sym}(a\cs b) \neq \widetilde{L}_{\sym}(a)\circ \widetilde{L}_{\sym}(b)$ (the latter is not even a coderivation in general). However, as shown above, the lift to a coderivation preserves the commutator bracket, \ie $$[\widetilde{L}_{\sym}(a),\widetilde{L}_{\sym}(b)] = \widetilde{L}_{\sym}([a,b]_{\cs})~.$$ In chapter \ref{Chap:MarcoPaper}, we will make use of the following construction for producing a coalgebra isomorphism starting from a degree $0$ coderivation: \begin{lemma}[Exponential of a coderivation]\label{lem:coderExpo} Given an $m$-nilpotent coderivation $Q\in \coDer^0(\overline{S(V)})$, \ie such that $Q^n=0$ for any $n\geq m$, the corresponding exponential operator \begin{displaymath} e^Q := \sum_{k\geq 0} \dfrac{Q^k}{k!}~ \in \End(\overline{S(V)}) \end{displaymath} is an endomorphism of coalgebras. \end{lemma} \begin{remark}\label{rem:codermor} We notice that if $C$ is a degree $0$ coderivation of $\overline{S(V)}$, then $e^C$ is a morphism of coalgebras, provided it converges. The nilpotency guarantees that the summation involved is actually made up of a finite number of summands. A slightly more general statement can be provided as follows. Endow $\overline{S(V)}$ with the filtration $\mathcal{F}_0\subset\mathcal{F}_1\subset \mathcal{F}_2\subset\dots$ where $\mathcal{F}_0=\{0\}$ and $\mathcal{F}_k:=\oplus_{i=1}^k V^{\odot i}$ for any $k\geq 1$. \\ Whenever $C$ maps $\mathcal{F}_k$ to $\mathcal{F}_{k-1}$ for all $k\geq 1$, it follows that $(e^C-\Id)$ has the same property. Therefore $e^C\eval_{V^{\odot n}}$ is a finite sum for all $n$, and $e^C$ converges. \\ Let now $C$ be the coderivation obtained by lifting the graded morphism $p\in \underline{\Hom}^0(\overline{S(V)},V)$; then $C$ satisfies the above property whenever $p\eval_{V^{\odot 1}}=0$. This follows from eq. \eqref{eq:coder}. \end{remark} Observe that, given a multilinear map $f:\overline{S(V)}\to V$ of degree $0$, one can construct two corresponding coalgebra morphisms: $L_{\sym}(f)\in \Hom_{coAlg}(\overline{S(V)},\overline{S(V)})$, given by the lift to a coalgebra morphism (lemma \ref{lem:liftToMorph}), and $\exp(\widetilde{L}_{\sym}(f))\in \Iso_{coAlg}(\overline{S(V)},\overline{S(V)})$, given by lemma \ref{lem:coderExpo}. Clearly the two do not coincide; for instance, their first components are: \begin{displaymath} L_{\sym}(f)\eval_{V} = f_1 \quad,\qquad \exp(\widetilde{L}_{\sym}(f))\eval_{V}= \exp(f_1) ~. \end{displaymath} \section{Lie infinity structures}\label{Sec:LinfinityAlgebras} Lie infinity structures, from now on $L_\infty$, are generalizations of \emph{differential graded Lie algebras} (DGLA), hence of cochain complexes and (graded) Lie algebras at the same time. The key ideas are two (we refer to \cite[\S 2]{Ryvkin2016a} for a concise introductory exposition): \begin{enumerate} \item start from a DGLA $L$ and weaken the Jacobi identity, requiring it to be satisfied only up to a chain homotopy; \item require that the failure of the ordinary Jacobi identity is controlled by a skew-symmetric $3$-multilinear map from $L$ to itself, satisfying in turn a similar ``weaker'' higher Jacobi equation, thus allowing for a possibly infinite sequence of higher multibrackets of arity greater than $3$.
\end{enumerate} Given a graded vector space $V$, there are several alternative ways of presenting what it means to endow it with an $L_\infty$-structure. Historically, the first precise definition is due to Lada and Stasheff: \begin{definition}[$L_\infty$-algebra \emph{(Lada, Stasheff) \cite{LadaStasheff}}] \label{Def:LInfinityStasheff} We call \emph{$L_\infty$-algebra} the pair \begin{displaymath} \Big( L, \lbrace \mu_k \rbrace_{k\in \mathbb{N}} \Big) \end{displaymath} given by a $\mathbb{Z}$-graded vector space $L$ together with a family, parametrized by integers $k\geq 1$, of homogeneous graded skew-symmetric $k$-multilinear maps \begin{displaymath} \mu_k : \wedge^k L \rightarrow L[2-k] \end{displaymath} (usually called \emph{multi-brackets}) satisfying the \emph{``higher Jacobi''} relations \begin{equation}\label{Eq:HigherJacobiStasheff} 0 = \mkern-30mu\sum_{\substack{i+j=m+1\\ \sigma \in \ush{i,m-i}}}\mkern-30mu (-)^{i(j+1)} (-)^\sigma \epsilon (\sigma; x) ~\mu_j \Big( \mu_i(x_{\sigma_1},\dots, x_{\sigma_i}), x_{\sigma_{i+1}},\dots, x_{\sigma_m}\Big) \end{equation} $\forall m\geq 1$ and for all homogeneous elements $x_i$ in $L$. Recall that $\ush{i,m-i}$ denotes the subgroup of unshuffle permutations of $m$ elements, namely $\sigma \in \ush{i,m-i}$ if $\sigma(j)<\sigma(j+1)$ for every $j\neq i$ (see appendix \ref{App:UnshuffleAtors}). \end{definition} \begin{remark}[Homological and Cohomological convention] It has been noted already by Lada and Stasheff that there are two possible conventions in the previous definition, namely that the unary operator $\mu_1$ lowers degrees (homological convention) or raises degrees (cohomological convention). As explained in section \ref{sec:HomologicalAlgebrasConventions}, in this thesis we are adopting the cohomological convention. \\ Passing from one convention to the other can be easily achieved via the degree-reversing functor $\setminus$. Therefore, in the homological case, the degree of the $k$-multibracket $\mu_k$ changes from $(2-k)$ to $(k-2)$. \end{remark} \begin{example}[Cochain complexes]\label{ex:cochaincomplexLinfinity} An $L_\infty$-algebra with multibrackets $\mu_k=0$ for any $k\geq 2$ is a cochain complex. Indeed, it is given by a pair $(L,\mu_1)$ together with the only non-trivial higher Jacobi equation (\eqref{Eq:HigherJacobiStasheff} with $m=1$) reading as \begin{displaymath} \mu_1(\mu_1(x))=0 \qquad \forall x \in L ~. \end{displaymath} % Essentially, any $L_\infty$-algebra $(L,\{\mu_k\}_{k\geq 1})$ has an underlying cochain complex, or differential graded vector space, obtained by neglecting all multibrackets of arity greater than $1$. \end{example} \begin{example}[Differential graded Lie algebras]\label{Ex:dglaAsLinfinity} An $L_\infty$-algebra $(L,\{\mu_k\}_{k\geq 1})$ with multibrackets $\mu_k=0$ for any $k\geq 3$ is a differential graded Lie algebra (DGLA, see definition \ref{def:DGLA}). Namely, denoting by $\d$ and $[\cdot,\cdot]$ the unary and binary operators of $L$, equation \eqref{Eq:HigherJacobiStasheff} with $m=1$ yields the $2$-nilpotency condition on $\d$, and the other two non-trivial higher Jacobi equations, given by $m=2,3$, read as follows: \begin{displaymath} \begin{aligned} \d [x_1,x_2] &= [\d x_1, x_2] +(-)^{|x_1|} [x_1, \d x_2] ~~, \\ 0 &=[[x_1,x_2],x_3] - (-)^{|x_3||x_2|}[[x_1,x_3],x_2]+(-)^{|x_1|(|x_2|+|x_3|)}[[x_2,x_3],x_1] ~~. \end{aligned} \end{displaymath} The first one expresses that $\d$ is a graded derivation with respect to the bracket $[\cdot,\cdot]$ and the second one is the graded Jacobi identity for $[\cdot,\cdot]$.
When $L$ is concentrated in degree $0$, the only possibly non-trivial multibracket is $\mu_2=[\cdot,\cdot]$; hence any $L_\infty$-algebra concentrated in degree $0$ is simply a \emph{Lie algebra}. \end{example} We will make use of the following nomenclature: \begin{definition}\label{Def:groundedLinfinity} \begin{itemize} \item An $L_\infty$-algebra is called an \emph{Abelian $L_\infty$-algebra} if all $k$-ary brackets with $k\geq 2$ are trivial, \ie it is a plain cochain complex \cite[\S 1.0.3.]{Fiorenza2014a}. \item An $L_\infty$-algebra is called a \emph{grounded $L_\infty$-algebra} if the $k$-ary brackets, for any $k\geq 2$, are trivial when evaluated on elements of non-zero total degree, \ie $\mu_k(x_1,\dots,x_k)=0$ whenever $\sum_{i=1}^k|x_i| \neq 0$ \cite[Def 2.33]{Ryvkin2016a}. (It is called \emph{property (P)} in \cite{Callies2016}.) \item An $L_\infty$-algebra is called an \emph{$L_n$-algebra} (or \emph{Lie $n$-algebra}) if the underlying vector space is concentrated in degrees from $-n$ to $0$. (In the homological convention it would be concentrated in degrees $(0,\dots,n)$.) The corresponding $L_\infty$-structure consists of $n+1$ multibrackets $\{\mu_1,\dots,\mu_{n+1}\}$. \end{itemize} \end{definition} In this thesis, we will be mainly concerned with $L_\infty$-algebras of the last two kinds. \begin{remark}[Curved $L_\infty$-algebras] The notion of \emph{curved $L_\infty$-algebra} is often found in the literature. This is obtained from definition \ref{Def:LInfinityStasheff} by additionally allowing for an element $\mu_0$, a ``$0$-ary bracket'', in degree $2$ (or $-2$ in the homological convention) and allowing the indices $i,j$ and $m$ in equation \eqref{Eq:HigherJacobiStasheff} to be zero. Specifically, when $m=0$ this would mean that $\mu_1(\mu_0)=0$, hence $\mu_0\in Z^2(L,\mu_1)$ is a cocycle in the cochain complex $(L,\mu_1)$. \end{remark} Taking advantage of the \RN formalism introduced in section \ref{sec:RNstuff}, it is possible to encode $L_\infty$-structures in a particularly succinct way. \begin{remark}[Reading definition \ref{Def:LInfinityStasheff} in NR-algebraic terms] By its very definition, a $k$-multibracket $\mu_k$ of an $L_\infty$-algebra $(L,\{\mu_k\}_{k\geq 1})$ is an element of $M^{\skew}_{k,2-k}(L)$. According to the definition of the \RN product (see section \ref{Section:MultibracketsAlgebra}), the ``higher Jacobi equations'' \eqref{Eq:HigherJacobiStasheff} can be synthetically recast as the vanishing of the multilinear operators $J_m \in M^{\skew}_{m,3-m}(L)$ for any $m\geq 1$, explicitly given by \begin{equation}\label{Eq:JacobiatorVit} \begin{split} J_m :&= \sum_{k=1}^m (-)^{k(m-k)} \mu_{m-k+1} \circ (\mu_k\otimes \Unit_{m-k})\circ P_{k,m-k} = \\ &= \sum_{k=1}^m \mu_{m-k+1}\skewgerst\mu_k ~. \end{split} \end{equation} With the grading defined on the skew-symmetric \RN algebra, see equation \eqref{eq:RNspaces}, every multibracket $\mu_k$ of an $L_\infty$-structure is a degree $1$ element of $M^{\skew}(L)$ and, in particular, their sum is homogeneous. Denoting by $\mu = \sum_{k\geq 1} \mu_k \in (M^{\skew}(L))^1$ the direct sum of all multibrackets, which completely encodes the $L_\infty$-structure given by $\{\mu_k\}_{k\geq 1}$, it follows that \begin{displaymath} \sum_{m\geq 1} J_m = \mu \skewgerst \mu = \frac{1}{2} [ \mu,\mu]_{\skewgerst} ~, \end{displaymath} in other words, the $L_\infty$-structure $\mu$ is a \emph{Maurer-Cartan element} in the graded Lie algebra $(M^{\skew}(L),[\cdot,\cdot]_{\skewgerst})$ (see definition \ref{def:MCelements}).
\end{remark} Summing up, we get the following reformulation of definition \ref{Def:LInfinityStasheff}: \begin{definition}[${L_\infty}$-algebras]\label{Def:LInfinityTony} We call \emph{$L_\infty$-algebra} a pair $(V, \mu)$ where $V$ is a graded vector space and $\mu\in (M^{\skew}(V))^1=\bigoplus_{k\geq 1} M^{\skew}_{k,2-k}(V)$ is a sum of skew-symmetric multibrackets satisfying the \emph{higher Jacobi equations}, \ie \begin{equation}\label{Eq:HigherJacobiTony} J_n :=~ \mu \skewgerst \mu \eval_{V^{\otimes n}}\equiv \sum_{k=1}^{n} \mu_{n-k+1} \skewgerst\mu_k =~0 \qquad \forall n\geq 1 ~, \end{equation} where $\mu_k$ denotes the projection of $\mu$ into $M^{\skew}_{k,2-k}(V) \subset (M^{\skew}(V))^1$. \end{definition} \begin{remark}[Understanding higher Jacobi equations as homotopies] The higher Jacobi equations (equations \eqref{Eq:HigherJacobiStasheff} or \eqref{Eq:HigherJacobiTony}) can be made slightly more expressive by introducing the so-called \emph{Jacobiator}\footnote{Note that we are slightly departing from the more common notation, see for example \cite{Vitagliano2013}, which reserves the name ``Jacobiator'' for the operator $J_m$ (equation \eqref{Eq:JacobiatorVit}) instead of $j_m$ (equation \eqref{Eq:Jacobiator}).} multilinear operator $j_m\in M^{\skew}_{m,3-m}(L)$ defined as follows \begin{equation}\label{Eq:Jacobiator} j_m = \sum_{k=2}^{m-1} \mu_{m-k+1} \skewgerst \mu_k = J_m - \left( \mu_m \skewgerst \mu_1 +\mu_1 \skewgerst \mu_m \right) ~. \end{equation} % Observe that, when $m=3$, one gets \begin{align*} j_3 (x_1,x_2,x_3)=&~+ \mu_2(\mu_2(x_1,x_2),x_3) + \\ &~- (-)^{|x_3||x_2|}\,\mu_2(\mu_2(x_1,x_3),x_2)+ \\ &~+(-)^{|x_1|(|x_2|+|x_3|)}\,\mu_2(\mu_2(x_2,x_3),x_1) = \\ =& (-)^{|x_1||x_3|}\left( (-)^{|x_1||x_3|}\mu_2(\mu_2(x_1,x_2),x_3) + \cyc \right) ~~, \end{align*} where $\cyc$ denotes the sum over cyclic permutations. Hence equation \eqref{Eq:Jacobiator} recovers the usual definition of the Jacobiator for (graded) Lie algebras. In example \ref{ex:cochaincomplexLinfinity} it has been shown that the unary bracket of an $L_\infty$-algebra determines a coboundary operator $\mu_1$ on $L$; let us denote $\mu_1=\partial$. Employing the notations introduced in section \ref{sec:HomologicalAlgebrasConventions}, we introduce a coboundary operator on $L^{\otimes m}$ given by \begin{displaymath} \partial_{{\otimes m}} := (-)^{m-1} \d_{\otimes m} = (-)^{m-1}\sum_{k=1}^m \Unit_{k-1}\otimes ~\partial~ \otimes \Unit_{m-k} ~. \end{displaymath} (differing from the total coboundary operator given in equation \eqref{eq:totalcoboundaryoperator} by an overall sign). Observe then that, for any given graded skew-symmetric $n$-multilinear map $f$, the following equality holds \begin{displaymath} \begin{aligned} f \circ \partial_{{\otimes n}} &(x_1,\dots,x_n) = \\ =& (-)^{n-1}\left( f(\partial x_1,\dots,x_n) + \dots + (-)^{|x_1|+\dots+|x_{n-1}|} f(x_1,\dots,\partial x_n) \right)= \\ =& (-)^{n-1}\sum_{i=1}^{n}(-)^{|x_i|(|x_1|+\dots+|x_{i-1}|)+i-1} f(\partial x_i,x_1,\dots \widehat{x_i},\dots, x_n) = \\ =&(-)^{n-1}~ f \circ (\partial \otimes \Unit_{n-1})\circ P_{1,n-1} ~ (x_1,\dots, x_n) = \\ =& f \ca \mu_1 (x_1,\dots, x_n) ~.
\end{aligned} \end{displaymath} Therefore, one can conclude that the condition $J_m=0$, \ie the validity of the $m$-th higher Jacobi equation, is equivalent to saying that \begin{displaymath} \mu_m \circ \partial_{\otimes m} + \partial \circ \mu_m = -j_m ~, \end{displaymath} hence the $m$-ary multibracket $\mu_m$ is a chain-homotopy between the higher Jacobiator $j_m$ and $0$: \begin{displaymath} \begin{tikzcd}[column sep = huge] (L^{\otimes m}, \partial_{\otimes m}) \ar[r,blue,"j_m", bend left=20, ""{name=U, below}] \ar[r,"0"', bend right=20, ""{name=D}] &[1em] (L,\partial)[3-m] \ar[Rightarrow, purple,"\mu_m", from=U, to=D] \end{tikzcd} ~. \end{displaymath} This justifies the slogan: ``\emph{an $L_\infty$-algebra is the notion that one obtains from a Lie algebra when one requires the Jacobi identity to be satisfied only up to a higher coherent chain homotopy}''\footnote{ Notice that the term ``coherent homotopy'' has a precise meaning in the context of homotopy theory. What we are doing here is rather providing a basic justification of the reason why the term ``homotopy'' appears in conjunction with these structures. Notably, the first name attributed to this algebraic structure has been ``strongly homotopy Lie algebra'' \cite{LadaStasheff}. } \cite{nlab:l-infinity-algebra}\cite{Rogers2010}\cite{Shahbazi2016}. % Observe at last that $j_1$ and $j_2$ are automatically zero, being given by empty sums in equation \eqref{Eq:Jacobiator}. Summing up, the first higher Jacobi equations $J_m=0$ read as follows: \begin{itemize} \item when $m=1$, it means that $\partial \circ \partial = 0$, \ie $\partial=\mu_1$ is a coboundary operator and $(L,\partial)$ is a cochain complex; \item when $m=2$, it means that $\mu_2 \circ \partial_{\otimes 2} + \partial \circ \mu_2=0$, hence $\mu_2$ is a chain map from $(L^{\otimes 2},\d_{\otimes 2})$ to $(L,\partial)$; \item when $m=3$, it means that $\mu_3$ is a chain homotopy from the Jacobiator $j_3$ to the zero map, \begin{displaymath} \begin{tikzcd}[column sep = huge] (L^{\otimes 3}, \d_{\otimes 3}) \ar[r,blue,"j_3", bend left=20, ""{name=U, below}] \ar[r,"0"', bend right=20, ""{name=D}] &[1em] (L,\partial) \ar[Rightarrow, purple,"\mu_3", from=U, to=D] \end{tikzcd} ~. \end{displaymath} In particular $\mu_3$ is a degree $-1$ homogeneous map between the cochain complexes $(L^{\otimes 3},\d_{\otimes 3})$ and $(L,\partial)$. \end{itemize} \end{remark} \subsection{Coalgebraic approach to $L_\infty$-structures} In a nutshell, the gist of the previous discussion is subsumed by stating that the set of all possible $L_\infty$-structures on a given graded vector space $L$ is given by Maurer-Cartan elements: \begin{displaymath} \mathbb{L}_{\infty}(L) := \text{MC}(M^{\skew}(L),[\cdot,\cdot]_{\ca}) \equiv \lbrace \mu \in M^{\skew}(L) ~ \left\vert \quad ||\mu||=1 ~,\quad \mu\ca\mu = 0 \right.\rbrace ~. \end{displaymath} (Compare with definition \ref{def:MCelements}.) This claim, joined with diagram \eqref{eq:NocciolodiAppendice} (which basically subsumes the contents of appendix \ref{App:RNAlgebras}), implies that one gets two other completely equivalent presentations of an $L_\infty$-algebra structure over the graded vector space $L$ \begin{equation}\label{eq:VisioneGlobaleLinfinito} \begin{aligned} \mathbb{L}_{\infty}(L) \cong&~ \lbrace \nu \in M^{\sym}(L[1]) ~ \left\vert \quad |\nu|=1 ~,\quad \nu\cs\nu = 0 \right.\rbrace \cong \\ \cong&~ \lbrace Q\in \coDer(\overline{S(L[1])}) ~ \left\vert \quad |Q|=1 ~,\quad Q\circ Q = 0 \right. \rbrace ~.
\end{aligned} \end{equation} % \begin{remark} Observe that in the last term of equation \eqref{eq:VisioneGlobaleLinfinito} there appears the composition of two coderivations, which in principle is not a coderivation, instead of the \RN product introduced in section \ref{sec:RNProdCoder}. This is due to the commutativity of the following diagram in the category of graded vector spaces \begin{displaymath} \begin{tikzcd}[column sep = huge] \overline{S(V)} \ar[r,"\widetilde{L}_{\sym}(\mu)"] \ar[dr,"\mu"'] \ar[rr,bend left=20,"\widetilde{L}_{\sym}\big(\mu\cdot \widetilde{L}_{\sym}(\mu)\big) = \widetilde{L}_{\sym}(\mu\symgerst\mu)=0"] &[2em] \overline{S(V)}[1] \ar[r,"{\widetilde{L}_{\sym}(\mu)[1]}"]\ar[dr,"{\mu[1]}"']\ar[d] &[2em] \overline{S(V)}[2]\ar[d] \\ & V[1] & V[2] \end{tikzcd} \end{displaymath} noting that the commutativity of the uppermost triangle in the above diagram makes sense only if the curved arrow is the zero arrow. \end{remark} In this spirit, one could introduce the following two definitions: \begin{definition}[Shifted Lie infinity algebra (${L_\infty[1]}$-structures) (see \emph{\cite[Def. 5]{Kajiura2006b} or \cite[Def. 4]{Vitagliano2013}})]\label{Def:LInfinityShifted} We call \emph{shifted $L_\infty$-algebra} a pair $(W, \{\ell_k\}_{k\geq 1})$ where $W$ is a graded vector space, and $\ell_k\in M^{\sym}_{k,1}(W)$ are symmetric multibrackets satisfying the \emph{higher Jacobi equations}, \ie \begin{displaymath} J_n :=~ \sum_{k=1}^{n} \ell_{n-k+1} \symgerst\ell_k =~0 \qquad \forall n\geq 1 ~. \end{displaymath} \end{definition} % \begin{definition}[Chevalley-Eilenberg complex for an ${L_\infty[1]}$-structure]\label{def:CELinfty1struct} We call \emph{Chevalley-Eilenberg complex} pertaining to the ${L_\infty[1]}$-algebra $(W, \nu)$, where $\nu=\sum_{k\geq 1} \nu_k$, the cofree graded codifferential coalgebra \begin{displaymath} \CE(W, \nu) := ( \overline{S(W)}, \widetilde{L}_{\sym}(\nu) ) \end{displaymath} given by the (cofree cocommutative) symmetric tensor coalgebra $\overline{S(W)}$ together with the lift of $\nu$ to a coderivation. \end{definition} % \begin{remark} We point out that our naming here diverges from the literature. Often the name ``Chevalley-Eilenberg complex'' of an $L_\infty$-algebra $L$ is reserved for the dual of definition \ref{def:CELinfty1struct}, hence for the graded vector space $\overline{S(W)}^\ast \cong \Hom(\overline{S(W)},\RR)$ endowed with a certain differential. A more general notion can be given in terms of $L_\infty$-algebra representations, see \cite{Reinhold2019}. \end{remark} % The key point, as stated in \cite[Thm. 2.3]{LadaMarkl}, is the existence of a one-to-one correspondence between $L_\infty$-algebra structures on $V$ and degree $1$, $2$-nilpotent coderivations on $\overline{S(V[1])}$. % When $(W, \nu)$ is a differential graded Lie algebra, $\CE(W,\nu)$ is sometimes called the \emph{Quillen construction} (see \cite[\S 2]{Fiorenza2006}).
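As a simple check of definition \ref{def:CELinfty1struct} (spelled out here only for illustration), consider the Abelian case $\nu=\nu_1$, with no higher brackets. Equation \eqref{eq:coder} then shows that the Chevalley-Eilenberg codifferential is just the coderivation extending the unary bracket, \begin{displaymath} \widetilde{L}_{\sym}(\nu_1)~(x_1\odot\dots\odot x_n) = \sum_{i=1}^{n}(-)^{|x_i|(|x_1|+\dots+|x_{i-1}|)}~ \nu_1(x_i)\odot x_1\odot\dots\odot\widehat{x_i}\odot\dots\odot x_n ~, \end{displaymath} and its $2$-nilpotency reduces to $\nu_1\circ\nu_1=0$, \ie to $(W,\nu_1)$ being a cochain complex.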
% \begin{reminder}[(Ordinary) Chevalley-Eilenberg (co)-chain complex]\label{Rem:CEconventions} Given an (ungraded) Lie algebra $\mathfrak{g}=(\mathfrak{g},[\cdot,\cdot])$ it is customary to call its \emph{Chevalley-Eilenberg chain complex} the pair $(\wedge^\bullet \mathfrak{g},\partial_{\CE})$, depicted by \begin{displaymath} \begin{tikzcd} \cdots \ar[r] & \wedge^n\mathfrak{g} \ar[r, "\partial_{\CE}"] & \wedge^{n-1}\mathfrak{g} \ar[r] & \cdots \end{tikzcd} ~, \end{displaymath} where the first datum, in slight contradiction with our previous notation, is the graded vector space $\wedge^\bullet \mathfrak{g} = (k \mapsto \Lambda^k(\mathfrak{g}))$ and the \emph{CE boundary operator} $\partial_{\CE}: \wedge^\bullet \mathfrak{g} \to \wedge^{\bullet-1} \mathfrak{g}$ is given explicitly, for any elements $\xi_i \in \mathfrak{g}$, by the following equation: \begin{equation} \label{eq:CE_boun} \partial_{\CE} (\xi_1 \wedge \xi_2 \wedge \dots \wedge \xi_n) := \mkern-15mu\sum_{1\leq i< j \leq n}\mkern-15mu (-1)^{i+j}\, [\xi_i, \xi_j] \wedge \xi_1 \wedge \dots \wedge {\hat \xi}_i \wedge \dots \wedge {\hat \xi}_j \wedge \dots \wedge \xi_n, \end{equation} with $\partial_0 = 0$ and $\widehat{(\cdot)}$ denoting deletion. \\ Dually, one can define a cochain complex structure on $\wedge^{\bullet}\mathfrak{g}^*$ introducing the \emph{Chevalley-Eilenberg coboundary (differential)} $\delta_{\CE}: \wedge^k\mathfrak{g}^* \to \wedge^{k+1} \mathfrak{g}^*$, whose action on an element $\phi \in \wedge^\bullet \mathfrak g^*$ is given by $\delta_{\CE} \phi := \phi \circ \partial_{\CE}$. \\ Henceforth, by \emph{$k$-th CE homology group} and \emph{$k$-th CE cohomology group} we will mean the following two vector spaces: \begin{displaymath} H_k(\mathfrak{g}):= H_k(\wedge^\bullet \mathfrak{g},\partial_{\CE}) ~, \qquad H^k(\mathfrak{g}^*):= H^k(\wedge^\bullet \mathfrak{g}^\ast,\delta_{\CE}) ~. \end{displaymath} More generally, one can define the \emph{Chevalley-Eilenberg cohomology over a representation $\rho$}, \ie a Lie algebra morphism $\rho: \mathfrak{g}\to \End(V)$ for a certain vector space $V$, as the cochain complex \begin{displaymath} \begin{tikzcd} \cdots \ar[r] & \Hom_{\Vect}(\wedge^n\mathfrak{g},V) \ar[r, "\delta_{\CE}"] & \Hom_{\Vect}(\wedge^{n+1}\mathfrak{g},V) \ar[r] & \cdots \end{tikzcd} \end{displaymath} with coboundary operator defined, for any $\omega: \mathfrak{g}^{\wedge n}\to V$ and for any elements $x_i \in \mathfrak{g}$, by \begin{displaymath} \begin{aligned} (\delta_{\CE} \omega) (x_1,\dots,x_{n+1}) =& +\sum_{i=1}^{n+1}(-)^{i+1}\rho(x_i)\cdot \omega(x_1,\dots,\hat{x_i},\dots,x_{n+1})+ \\ & +\mkern-25mu\sum_{1\leq j < k \leq n+1}\mkern-25mu (-)^{j+k} \omega([x_j,x_k],x_1,\dots,\hat{x_j},\dots,\hat{x_k},\dots,x_{n+1}) ~. \end{aligned} \end{displaymath} \end{reminder} \begin{remark} Observe that the naming in definition \ref{def:CELinfty1struct} is completely compatible with the previous construction. The Lie algebra $\mathfrak{g}$ is in particular an $L_\infty$-algebra whose only non-trivial multibracket is given by $\mu_2=[\cdot,\cdot]$.
Applying the d\'ecalage and the lift one gets the following diagram in the category of graded vector spaces \begin{displaymath} \begin{tikzcd} & \overline{S(\mathfrak{g}[1])} \ar[r,"\widetilde{L}_{\sym}(\Dec(\mu_2))"] \ar[d,two heads] &[7em] (\overline{S(\mathfrak{g}[1])})[1] \ar[dd,two heads] \\ (\mathfrak{g}^{\wedge 2})[2] \ar[r,"\dec"] \ar[drr,bend right=10,"{\mu_2[2]}"'] & (\mathfrak{g}[1]^{\odot 2}) \ar[dr,"\Dec(\mu_2)"'] & \\ & & \mathfrak{g}[1][1] \end{tikzcd} \end{displaymath} where $\CE(\mathfrak{g})=(S(\mathfrak{g}[1]),\widetilde{L}_{\sym}(\Dec(\mu_2)))$ is precisely the Chevalley-Eilenberg complex as defined in \ref{def:CELinfty1struct}. Explicitly, one can see that $\CE(\mathfrak{g}) = \setminus (\Lambda^\bullet \mathfrak{g},-\partial)$, where $\setminus$ denotes the degree-reversing functor introduced in equation \eqref{eq:degreeReversingFunctor}. In fact, according to diagram \eqref{eq:decalageRestriction}, one has \begin{displaymath} \CE(\mathfrak{g})^{-k} = (\mathfrak{g}[1])^{\odot k} \cong \mathfrak{g}^{\wedge k}[k] ~, \end{displaymath} and, from lemma \ref{lem:liftToCoder}, one has that \begin{displaymath} \mathclap{ \morphism{\widetilde{L}_{\sym}(\mu_2)} {\CE(\mathfrak{g})^{-k}} {\CE(\mathfrak{g})^{-k+1}} {x_1\wedge\dots \wedge x_k} {\displaystyle \mkern-30mu\sum_{\sigma\in \ush{2,k-2}}\mkern-30mu(-)^\sigma [x_{\sigma_1}, x_{\sigma_2}] \wedge x_{\sigma_3} \wedge \dots \wedge x_{\sigma_k}} ~.} \end{displaymath} The latter can be read as the fact that the lift of $\mu_2$ is equivalent to $\partial_{\CE}$, defined in reminder \ref{Rem:CEconventions}, modulo a sign. Namely, $\widetilde{L}_{\sym}(\mu_2) \equiv -\partial_{\CE}$. \end{remark} Spelling out, an $L_\infty$-algebra structure on $L$ is equivalently given by \begin{itemize} \item a family $\lbrace \mu_k \rbrace_{k\geq 1}$ of graded skew-symmetric multibrackets $\mu_k \in M^{\skew}_{k,2-k}(L)$ on $L$ satisfying the higher Jacobi equations $J_m =\sum_{k=1}^m \mu_k\ca\mu_{m-k+1}=0$ (that is, a $2$-nilpotent degree $1$ element in the \RN algebra $(M^{\skew}(L),\ca)$); \item a family $\lbrace \nu_k \rbrace_{k\geq 1}$ of graded symmetric multibrackets $\nu_k \in M^{\sym}_{k,1}(L[1])$ satisfying the higher Jacobi equations $J_m =\sum_{k=1}^m \nu_k\cs\nu_{m-k+1}=0$ (that is, a $2$-nilpotent degree $1$ element in the \RN algebra $(M^{\sym}(L[1]),\cs)$); \item a $2$-nilpotent, degree one coderivation $Q \in \coDer(S(L[1]))$, \ie a codifferential on the (cofree) graded coalgebra $S(L[1])$. (In the terms of \cite{nlab:l-infinity-algebra}, an $L_{\infty}$-algebra is a \emph{cofree cocommutative differential coalgebra}, \ie a dg-coalgebra whose underlying coalgebra is isomorphic to the symmetric tensor coalgebra for a given graded vector space.) \end{itemize}
%
Although the characterization in terms of $2$-nilpotent coderivations may appear particularly convoluted, it has several advantages. Specifically, it transparently provides the notion of commutator and linear combination of $L_\infty$-structures (beware that such operations do not yield $L_\infty$-structures in general), the notion of (tangent) differential graded Lie algebra $(\overline{S(V)},[\nu,\cdot]_{\cs},[\cdot,\cdot]_{\cs})$ governing the deformations of a given $L_\infty[1]$-algebra $(V,\nu)$, and the notion of $L_\infty$-morphism, as we will see in the following subsection.
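Before moving to morphisms, it may help to spell out the identification $\widetilde{L}_{\sym}(\mu_2) \equiv -\partial_{\CE}$ of the remark above in the lowest arities (a quick check; since elements of $\mathfrak{g}[1]$ are odd, the Koszul signs reduce to signatures):
\begin{displaymath}
\begin{aligned}
\widetilde{L}_{\sym}(\mu_2)(x_1\wedge x_2) =&~ [x_1,x_2] = -\partial_{\CE}(x_1\wedge x_2) ~, \\
\widetilde{L}_{\sym}(\mu_2)(x_1\wedge x_2\wedge x_3) =&~ [x_1,x_2]\wedge x_3 - [x_1,x_3]\wedge x_2 + [x_2,x_3]\wedge x_1 = -\partial_{\CE}(x_1\wedge x_2 \wedge x_3) ~.
\end{aligned}
\end{displaymath}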
\subsection{$L_\infty$-morphisms} In the coalgebraic framework there is an obvious notion of morphism between $L_\infty$-algebras:
%
\begin{definition}[$L_\infty{[1]}$-morphisms (coalgebraic framework)]\label{Def:LinftyMorphism-coalg} Given two $L_\infty[1]$-algebras $(V,\mu)$ and $(W,\nu)$, an \emph{$L_\infty[1]$-morphism} between $(V,\mu)$ and $(W,\nu)$ is a morphism of graded differential cocommutative coalgebras $F: ({S(V[1])},\widetilde{L}_{\sym}(\mu))\to ({S(W[1])},\widetilde{L}_{\sym}(\nu))$, \ie a graded coalgebra morphism such that the following diagram commutes in the category of graded vector spaces \begin{displaymath} \begin{tikzcd} {S(V[1])} \ar[r,"F"] \ar[d,"\widetilde{L}_{\sym}(\mu)"']& {S(W[1])} \ar[d,"\widetilde{L}_{\sym}(\nu)"] \\ {S(V[1])[1]} \ar[r,"{F[1]}"'] & {S(W[1])[1]} \end{tikzcd} ~. \end{displaymath} \end{definition}
%
According to the universal property of cocommutative graded coalgebras given by lemma \ref{lem:liftToMorph}, coalgebra morphisms ${\overline{S(V)}}\to {\overline{S(W)}}$ are in one-to-one correspondence with graded morphisms ${\overline{S(V)}} \to W$, which are collections of symmetric multilinear maps from $V$ to $W$. Therefore, any $L_\infty$-morphism can be translated into a collection of (graded) symmetric maps that are compatible with the multibrackets:
%
\begin{definition}[${L_\infty[1]}$-morphisms (graded symmetric framework)]\label{Def:LinftyMorphism-sym} Given two $L_\infty[1]$-algebras $(V,\mu)$ and $(W,\mu')$, an \emph{$L_\infty[1]$-morphism} between $(V,\mu)$ and $(W,\mu')$ is given by a collection of degree $0$, graded symmetric, multilinear maps \begin{displaymath} (f)= \left\{ f_k: (V[1])^{\odot k} \to W[1]\right\}_{k\geq 1} \end{displaymath} such that the auxiliary graded symmetric, multilinear operator $K_m^f: (V[1])^{\odot m} \to W[1]$ defined as \begin{displaymath} K_m^f := \sum_{\ell=1}^m \left( f_{m-\ell+1} \symgerst \mu_\ell - \mu_\ell' \circ \Sop_{\ell,m}(f) \right) ~, \end{displaymath} where $\Sop_{\ell,m}(f)$ is the operator defined in Notation \ref{rem:operatorSln}, vanishes for all $m \geq 1$. Namely, we denote by $\Sop_{\ell,m} (f)$ the multilinear map
%
\begin{equation}\label{Eq:SoperatorSymCase} \Sop_{\ell,m} (f) = \left( \sum_{\substack{k_{1}+\cdots+k_{\ell}=m\\1\leq k_{1}\leq\cdots\leq k_{\ell}}} (f_{k_1}\otimes\cdots\otimes f_{k_\ell})\circ B_{k_1,\ldots,k_\ell}^< \right)~: V[1]^{\otimes m} \to W[1]^{\otimes \ell} ~, \end{equation} where the unshuffleator $B_{k_1,\ldots,k_\ell}^<$ runs through all $(k_1,\cdots,k_\ell)$-unshuffles $\sigma$ satisfying the extra condition \begin{displaymath} \sigma(k_1+\dots+k_{j-1}+1)<\sigma(k_1+\dots+k_{j}+1) \quad \text{if}~k_{j-1}=k_j ~. \end{displaymath} \end{definition} After d\'ecalage, one gets a similar expression in the formulation with skew-symmetric multibrackets:
%
\begin{definition}[$L_\infty$-morphisms (graded skew-symmetric framework)] \label{Def:LinftyMorphism-skew} An \emph{$L_\infty$-morphism} between $(V,\mu)$ and $(W,\mu')$ is given by a collection of homogeneous, graded skew-symmetric, multilinear maps \begin{displaymath} (f)= \left\{ f_k: V^{\wedge k} \to W[1-k]\right\}_{k\geq 1} \end{displaymath} such that the auxiliary graded skew-symmetric, multilinear operator $\bar{K}_m^f: V^{\wedge m} \to W[2-m]$ defined as \begin{displaymath} \bar{K}_m^f := \sum_{\ell=1}^m \left( f_{m-\ell+1} \skewgerst \mu_\ell - \mu_\ell' \circ \bar{\Sop}_{\ell,m}(f) \right) ~, \end{displaymath} vanishes for all $m \geq 1$.
We denote by $\bar{\Sop}_{\ell,m} (f)$ the multilinear map % \begin{equation}\label{Eq:SoperatorSkewCase} \bar{\Sop}_{\ell,m} (f) = \left( \sum_{\substack{k_{1}+\cdots+k_{\ell}=m\\1\leq k_{1}\leq\cdots\leq k_{\ell}}} (-)^{\sum_{i=1}^{\ell-1}(|f_{k_i}|)(\ell-i)} (f_{k_1}\otimes\cdots\otimes f_{k_\ell})\circ P_{k_1,\ldots,k_\ell}^< \right)~, \end{equation} where the odd unshuffleator $P_{k_1,\ldots,k_\ell}^<$ runs through all ordered $(k_1,\cdots,k_\ell)$-unshuffles $\sigma$. \end{definition} \begin{remark}[Understanding $K_m^f$] Although the term may appear, at first glance, difficult to read, it is actually constructed according to a simple combinatorial pattern. \\ The first summand is a combination of all the possible ways to compose a bracket $\mu$ with a single component of $(f)$ to give a map of arity $m$. \\ The second term is a combination of all the ways one can compose several components of the morphism $(f)$ with a primed bracket $\mu'$ to get a map with arity $m$. \\ In particular, the term $\Sop_{\ell,m}(f)$ is a combination of all the possible ways one can tensor multiply different components of $(f)$ to give a graded (skew-)symmetric map $$ \Sop_{\ell,m} : L^{\otimes (k_1 + \dots +k_\ell)} = L^{\otimes m} \to L'^{\otimes \ell} ~. $$ The same applies to $\bar{\Sop}$ and $\bar{K}$ with extra signs coming from the different degrees and from the sum on odd permutations. \end{remark} % \begin{remark}[$L_\infty$-morphisms in the literature]\label{rem:CorrectAttributions} The notion of $L_\infty$-morphism is basically old as the notion of $L_\infty$-algebra itself. Initially, see \cite{LadaStasheff}\cite{LadaMarkl}, only "strict" (see definition \ref{Def:strictLinfi}) $L_\infty$-morphisms and (weak) $L_\infty$-morphisms valued in a DGLA were considered. The explicit expression of definition \ref{Def:LinftyMorphism-sym} can be tracked in \cite[Def. 2.7]{Kajiura2006}\cite[Def. 6]{Kajiura2006b}\cite[Def. 5]{Vitagliano2013}. The explicit formulation in the skew-symmetric framework (\ref{Def:LinftyMorphism-skew}) has been already worked out in \cite[Definition 4.7.1]{Allocca2010}, see also \cite[ex. 2.20]{Ryvkin2016a}. \end{remark} A crucial point is that the previous three definitions are equivalent: % \begin{lemma} Given a $L_\infty$-morphism $(f):(V,\mu)\to(W,\nu)$ as defined in \ref{Def:LinftyMorphism-skew}, $\Dec(f)$ yields a $L_\infty[1]$-morphism from $(V[1],\Dec(\mu))$ to $(W[1],\Dec(\nu))$ in the sense of definition \ref{Def:LinftyMorphism-sym} and $L_{\sym}(Dec(f))$ yields a morphism between the corresponding Chevalley-Eilenberg complexes as given by definition \ref{Def:LinftyMorphism-coalg}. \end{lemma} \begin{proof} Take a $F$ as in definition \ref{Def:LinftyMorphism-coalg}, by the universal property of cocommutative coalgebras, one has that $F= L_{\sym}(f)$ for a certain collection of graded symmetric of multilinear map $(f)$. Being % \begin{displaymath} F\circ \widetilde{L}_{\sym}(\mu) - \widetilde{L}_{\sym}(\nu) \circ F \end{displaymath} % a coderivation, the vanishing condition can be read as the following equation of multilinear maps (obtained by postcomoposition with the standard projection $\pr_W: \overline{S(W)}\twoheadrightarrow W$) \begin{equation}\label{Eq:intermadiateLinfinityMorphCond} f\circ \widetilde{L}_{\sym}(\mu) - \nu \circ F = 0 ~. \end{equation} In other terms, expression \eqref{Eq:intermadiateLinfinityMorphCond} lift to the above coderivation. 
The first summand can be written as $f\symgerst \mu$ extending in the obvious way the operator $\symgerst$ to \begin{displaymath} \symgerst: M(V,W)\otimes M(V,V) \to M(V,W) \end{displaymath} as the concatenation of multilinear operators with matching domains and codomains. Restricting equation \eqref{Eq:intermadiateLinfinityMorphCond} to $V[1]^{\otimes m}$ one gets the equation $K^f_m = 0$, observing that the term \eqref{Eq:SoperatorSymCase} is exactly the one appearing in the construction of the symmetric lift given in remark \ref{rem:operatorSln}.
%
Pictorially, the whole situation is subsumed by the following commutative diagram in the category of graded vector spaces \begin{displaymath} \begin{tikzcd} & & W[1][1] \\ & \overline{S(V[1])} \ar[r,"F"] \ar[dl,"q"'] \ar[d,"Q"] \ar[ur,"f"] & \overline{S(W[1])} \ar[rd,"q'"] \ar[d,"Q'"] \ar[u,two heads] & \\ V[1][1] & \left(\overline{S(V[1])}\right)[1] \ar[l,two heads] \ar[r,"F"] & \left(\overline{S(W[1])}\right)[1] \ar[r,two heads] & W[1][1] \\ \end{tikzcd} \end{displaymath} Note that $Q,Q'$ are in particular coderivations and $F$ is a coalgebra morphism; they are, respectively, the unique lifts of $q,q'$ to coderivations and of $f$ to a coalgebra morphism. This proves the equivalence between definitions \ref{Def:LinftyMorphism-sym} and \ref{Def:LinftyMorphism-coalg}. To prove the last equivalence observe that \begin{displaymath} \Dec\left( \bar{K}^f_m \right) =\sum_{\ell=1}^m \left( \Dec(f_{m-\ell+1}) \symgerst \Dec(\mu_\ell) - \Dec(\nu_\ell) \circ \Dec(\bar{\Sop}_{\ell,m}(f)) \right) \end{displaymath} which equals ${K}^f_m$ upon noticing (see \ref{Cor:DecalageofTensorsProducts} in appendix) that \begin{displaymath} \Dec(f_{k_1}\otimes \dots \otimes f_{k_\ell}) = (-)^{\sum_{i=1}^\ell |f_{k_i}|(\ell-i)}\Dec(f_{{k_1}})\otimes \dots \otimes \Dec(f_{{k_\ell}}) ~. \end{displaymath} (The last step can also be found in \cite[Lemma 2.18]{Ryvkin2016a}). \end{proof} \begin{remark}[Compare our notation with the literature] Let us briefly discuss how our definitions of the operators $K_m^f$, given in \ref{Def:LinftyMorphism-sym} and \ref{Def:LinftyMorphism-skew}, compare with the equations present in the literature mentioned in remark \ref{rem:CorrectAttributions}. \\ Plugging in homogeneous elements, the equation $K_m^f(x_1,\dots,x_m)=0$ in definition \ref{Def:LinftyMorphism-sym} reads as \begin{displaymath} \mathclap{ \begin{aligned} &\sum_{\ell=1}^m \mkern-50mu \sum_{\mkern70mu \sigma \in \ush{\ell, m-\ell}}\mkern-40mu \mkern-18mu \epsilon(\sigma) f_{m-\ell+1 } (\mu_\ell(x_{\sigma_1},\dots,x_{\sigma_{\ell}}),~ x_{\sigma_{\ell+1}},\dots x_{\sigma_m}) = \\ &= \mkern-35mu \sum_{\substack{1\leq \ell \leq m\\k_{1}+\cdots+k_{\ell}=m\\1\leq k_{1}\leq\cdots\leq k_{\ell}}} \mkern-30mu (-)^{\sum_{i=1}^{\ell-1}(|f_{k_i}|)(\ell-i)} \mkern-70mu \sum_{\mkern70mu\sigma \in \ush{k_1,\dots,k_\ell}^<} \mkern-70mu \epsilon(\sigma)\, \mu'_\ell\Big( f_{k_1}(x_{\sigma_1},\dots, x_{\sigma_{k_1}}), \dots , f_{k_\ell}(x_{\sigma_{(m+1-k_{\ell})}},\dots, x_{\sigma_{m}})\Big) \end{aligned} } \end{displaymath} which coincides with \cite[Def. 1.4]{Vitagliano2013} by substituting $\ell$ with $i$ and $m-\ell+1$ with $j$.
\\ Performing the same computation for the operator $\bar{K}_m^f$ in definition \ref{Def:LinftyMorphism-skew} yields \begin{displaymath} \mathclap{ \begin{aligned} &\sum_{\ell=1}^m \mkern-50mu \sum_{\mkern70mu \sigma \in \ush{\ell, m-\ell}}\mkern-40mu \mkern-18mu (-)^{|\mu_\ell|(m-\ell)} \chi(\sigma) f_{m-\ell+1 } (\mu_\ell(x_{\sigma_1},\dots,x_{\sigma_{\ell}}),~ x_{\sigma_{\ell+1}},\dots x_{\sigma_m}) = \\ &= \mkern-30mu \sum_{\substack{1\leq\ell\leq m\\k_{1}+\cdots+k_{\ell}=m\\1\leq k_{1}\leq\cdots\leq k_{\ell}}} \mkern-32mu (-)^{\sum_{i=1}^{\ell-1}(|f_{k_i}|)(\ell-i)} \mkern-40mu \sum_{\sigma \in \ush{k_1,\dots,k_\ell}^<} \mkern-40mu \chi(\sigma)(-)^\beta~ \mu'_\ell\Big( f_{k_1}(x_{\sigma_1},\dots, x_{\sigma_{k_1}}), \dots , f_{k_\ell}(x_{\sigma_{(m+1-k_\ell)}},\dots, x_{\sigma_{m}})\Big) \end{aligned} } \end{displaymath} where $(-)^\beta$ is the sign coming from the Koszul convention when evaluating $f_{k_1}\otimes\dots \otimes f_{k_\ell}$ on homogeneous elements. Practically, the latter sign prefactor is computed as the Koszul sign of the following permutation of graded elements \begin{displaymath} \left( \substack{ {f_{k_1},\dots,f_{k_\ell},x_{\sigma_1},\dots x_{\sigma_{m}}} \\ {f_{k_1},x_{\sigma_1}\dots,x_{\sigma_{k_1}},f_{k_2},x_{\sigma_{k_1+1}}\dots,x_{\sigma_{k_1+k_2}},\dots} }\right) \end{displaymath} (a completely explicit expression can be found in \cite[Rem. 4.7.1]{Allocca2010}). Recalling that $|\mu_\ell|= 2-\ell$, $|f_{k_i}| = 1- k_i$ and noting that \begin{displaymath} \sum_{i=1}^\ell |f_{k_i}|(\ell-i) = \sum_{i=1}^\ell\left( \ell -i - k_i(\ell -i)\right) = \dfrac{\ell(\ell-1)}{2} - \sum_{i=1}^\ell k_i(\ell -i) \equiv \dfrac{\ell(\ell-1)}{2} + \sum_{i=1}^\ell k_i(\ell -i) \pmod 2 ~, \end{displaymath} one recovers the explicit definition of $L_\infty$-morphisms as stated, for example, in \cite[Def. 4.7.1]{Allocca2010} and \cite[Lem. 2.18]{Ryvkin2016a}. \end{remark} \begin{remark}[Unfolding the notion of $L_\infty$-morphisms]\label{rem:morphismsunfolded} Observe that in the following two "extreme" cases $\Sop$ and $\bar{\Sop}$ coincide, as multilinear operators, with \begin{displaymath} \begin{aligned} \Sop_{1,m}(f) &= \bar{\Sop}_{1,m}(f) = f_m \\ \Sop_{m,m}(f) &= \bar{\Sop}_{m,m}(f) =\underbrace{f_1\otimes \ldots \otimes f_1}_\text{$m$-times} ~. \end{aligned} \end{displaymath} (That is because when $\ell=m$ then necessarily $k_1=k_2=\dots=k_m=1$, and $|f_1|=0$ in the skew-symmetric setting.) \\ Therefore the operator $K_m^f$ can be rewritten as \begin{displaymath} K_m^f := f_1 \circ \mu_m - \mu'_m \circ f_1^{\otimes m} + \kappa_m^f \end{displaymath} where \begin{displaymath} \kappa_m^f := \sum_{\ell=1}^{m-1} \left( f_{m-\ell+1} \cs \mu_\ell - \mu_\ell' \circ \Sop_{\ell,m}(f) \right) ~. \end{displaymath} The $L_\infty$-morphism condition, \ie the vanishing of the operator $K_m^f$, can therefore be seen as the statement that the multibracket $\mu_m$, as a map $V^{\otimes m} \to V[1]$, is in general not preserved by $f_1$. In other words, the following diagram does not commute: \begin{displaymath} \begin{tikzcd} V^{\otimes m} \ar[d,"\mu"] \ar[r,"f_1^{\otimes m}"]\ar[dr,phantom,"\cancel{\circlearrowleft}"]& W^{\otimes m} \ar[d,"\mu'"] \\ V[1] \ar[r,"f_1{[1]}"'] & W[1] \end{tikzcd} ~. \end{displaymath} Non-commutativity, however, is "mild" in the sense that the failure is explicitly controlled by the term $\kappa_m^f$ introduced above. The same discussion applies, mutatis mutandis, to the skew-symmetric case.
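To make the pattern concrete, in the two lowest arities the condition $K_m^f=0$ of definition \ref{Def:LinftyMorphism-sym} unfolds (a sketch) as
\begin{displaymath}
\begin{aligned}
K_1^f =&~ f_1\circ\mu_1 - \mu_1'\circ f_1 = 0 && \text{(\ie $f_1$ is a chain map),} \\
K_2^f =&~ f_1\circ\mu_2 + f_2\symgerst\mu_1 - \mu_1'\circ f_2 - \mu_2'\circ (f_1\otimes f_1) = 0 ~, &&
\end{aligned}
\end{displaymath}
so that $f_1$ preserves the binary bracket only up to the correction terms involving $f_2$.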
\end{remark} The previous remark suggests the following definition: \begin{definition}[Strict $L_\infty$-morphisms]\label{Def:strictLinfi} An $L_\infty$-morphism $(f)$ is called "\emph{strict}" when $f_i = 0$ for all $i\geq 2$. Hence, in the terms of remark \ref{rem:morphismsunfolded}, each multibracket is preserved by $f_1$. \end{definition} \begin{notation} From now on, we will take for granted the equivalence between the three previous presentations of an $L_{\infty}$-algebra structure. We will denote by $(f)$ the $L_\infty$-morphism from $V$ to $W$ when seen as a collection of multilinear maps $f_k\in\underline{\Hom}(V^{\otimes k},W)$, with a slight abuse of notation regardless of whether the components are taken to be symmetric or skew-symmetric. By $F\in \Hom_{\text{coAlg}}(\overline{S(V)},\overline{S(W)})$ we denote the lift of the symmetric components $(f_1,f_2,\dots)$ to a coalgebra morphism, which is in particular also a cochain map. \end{notation} \subsection{Composition of $L_\infty$-morphisms} Once the concept of morphism between $L_\infty$-algebras has been clarified, it is crucial to specify how such morphisms behave under composition in order to completely understand $L_\infty$-algebras as a category. \\ In the coalgebraic formulation there is a natural definition of composition of $L_\infty$-morphisms simply given by the composition of the two corresponding differential coalgebra morphisms. This easily translates to the following definition in the multibrackets setting: \begin{definition}[$L_\infty$-morphisms composition (multibrackets framework)]\label{Def:CompositionFormula} If $(f):L\to L^{\prime}$ and $(g):L^{\prime}\to L^{\prime\prime}$ are morphisms of $L_{\infty}$-algebras, the \emph{composition $(g)\circ (f):L\to L^{\prime\prime}$} is the $L_\infty$-morphism with components $\{(g\MBComp f)_{k}\}_{k\in\mathbb{N}}$, where \begin{displaymath} (g \circ f )_m := \pr_{L^{\prime\prime}} \circ {L}_{\sym}(g) \circ {L}_{\sym}(f) \eval_{L^{\otimes m}} = \sum_{\ell=1}^m g_\ell \circ \Sop_{\ell,m}(f) \end{displaymath} and $\Sop_{\ell,m}(f)$ is defined in equations \eqref{Eq:SoperatorSymCase}, \eqref{Eq:SoperatorSkewCase} in the symmetric and skew-symmetric framework respectively. \end{definition}
%
(One can retrieve this formula by d\'ecalage from the analogous formula in \cite[\S 1]{Vitagliano2013}. See also \cite[Cor. 1.3.3 and Prop 1.5.3]{Manetti-website-coalgebras}.) \\
%
Pictorially, the composition of two $L_\infty$-morphisms is given by the commutativity of the following diagram in the category of graded vector spaces: \begin{displaymath} \begin{tikzcd}[column sep=huge] \overline{S(L[1])} \ar[r,"L_{\sym}(f)"] \ar[dr,"f"'] \ar[rr,bend left=30,"L_{\sym}\big(g\circ L_{\sym}(f)\big):=L_{\sym}(g\circ f)"] & \overline{S(L'[1])}\ar[r,"{L_{\sym}(g)}"]\ar[dr,"g"']\ar[d] & \overline{S(L''[1])}\ar[d] \\ & L'[1] & L''[1] \end{tikzcd} \end{displaymath}
%
Recall that the identity coalgebra morphism $\Unit: \overline{S(V)} \to \overline{S(V)}$ is precisely given by the lift of the projection $\pr_V:\overline{S(V)}\twoheadrightarrow V$ to a coalgebra morphism.
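As a quick illustration of definition \ref{Def:CompositionFormula}, in the lowest arities the composition formula unfolds (a sketch, using $\Sop_{1,2}(f)=f_2$ and $\Sop_{2,2}(f)=f_1\otimes f_1$) as
\begin{displaymath}
(g \circ f)_1 = g_1\circ f_1 ~, \qquad (g \circ f)_2 = g_1\circ f_2 + g_2\circ (f_1\otimes f_1) ~,
\end{displaymath}
\ie the first components compose as ordinary chain maps, while the higher components collect all the ways of feeding components of $(f)$ into components of $(g)$.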
In the multibrackets setting, the identity coalgebra morphism recalled above corresponds to the following definition: \begin{definition}[Identity $L_\infty$-morphism] Consider an $L_\infty$-algebra $L$, the \emph{identity morphism} pertaining to $L$ is the strict $L_\infty$-morphism $(\mathrm{id}):L\longrightarrow L$ given by:
%
\begin{displaymath} \begin{aligned} \mathrm{id}_{1}= \mathrm{id}_L & :L\longrightarrow L \\ \mathrm{id}_{k}=0 & :L^{\otimes k}\longrightarrow L \qquad \forall ~k \geq 2 \end{aligned} \end{displaymath}
%
\end{definition} In the same spirit one gets the following notion of $L_\infty$-isomorphism: \begin{definition}[$L_\infty$-isomorphisms] Consider two $L_\infty$-algebras $L$ and $L'$. An $L_\infty$-morphism $(f):L\to L^{\prime}$ is an \emph{$L_\infty$-isomorphism} if the corresponding lift $F:\overline{S(L[1])}\to \overline{S(L^{\prime}[1])}$ is an invertible coalgebra morphism, \ie if $f_1:L\to L^{\prime}$ is a chain complex isomorphism. \end{definition}
%
A characterization in the multibrackets framework is given by the following lemma
%
\begin{lemma}\label{Lem:InvertibleTensorColalgebraMorphisms} A coalgebra morphism $F: \overline{S(V)} \to \overline{S(W)}$ is invertible if and only if $f_1: V \to W $ is an invertible graded morphism. \end{lemma} \begin{proof} If $F$ is invertible, there exists $G: \overline{S(W)} \to \overline{S(V)}$ such that $F\circ G = \Unit$ and $G\circ F = \Unit$. These two compositions can therefore be characterized as lifts of the standard projection $p:\overline{S(X)} \to X$ with $X$ equal to $W$ and $V$ respectively, hence $f_1 \circ g_1 = \id$ and $g_1 = f_1^{-1}$. Conversely, given $F$ such that $f_1$ is invertible, one can construct an inverse $G$ of $F$ iteratively as the lift of the following components: \begin{displaymath} \begin{cases} g_1 = f_1^{-1} \\ g_2 = -f_1^{-1} \circ f_2 \circ \left(f_1^{-1}\otimes f_1^{-1}\right) \\ \vdots \\ g_m = -\left[ f_1^{-1} \circ f_m + \sum_{\ell=2}^{m-1} g_\ell \circ \Sop_{\ell, m}(f) \right] \circ \left(f_1^{-1}\right)^{\otimes m} \end{cases} \end{displaymath} \end{proof}
%
Recalling that every $L_\infty$-algebra has an underlying chain complex, it is natural to introduce the notion of \emph{quasi-isomorphism}
%
\begin{definition}[$L_\infty$-quasi-isomorphism]\label{Def:LinfintyQuasiIso} An $L_\infty$-morphism $(f):(L,\mu)\to (L',\mu')$ is called a \emph{quasi-isomorphism} if $f_1: (L,\mu_1)\to (L',\mu_1')$ is a quasi-isomorphism of chain complexes, \ie $f_1$ induces an isomorphism of the corresponding cohomology groups. \end{definition} \begin{remark}[$L_\infty$-algebras cohomology] Observe that, for any given $L_\infty$-algebra $(L,\mu)$, one has two naturally associated cohomologies: the cohomology of the underlying chain complex $(L,\mu_1)$, obtained by neglecting all multibrackets of arity greater than one (see example \ref{ex:cochaincomplexLinfinity}), and the cohomology of the Chevalley-Eilenberg complex $(S(L[1]),\widetilde{L}_{\sym}(\Dec(\mu)))$ (see definition \ref{def:CELinfty1struct}).
%
Although the former has been used in definition \ref{Def:LinfintyQuasiIso} to characterize quasi-isomorphisms, the latter is better suited to be regarded as the proper cohomology of $(L,\mu)$. More refined notions of \emph{$L_\infty$-algebra cohomology} are expressed in terms of representations; we point to \cite{Reinhold2019} for an extensive introduction to this topic.
\end{remark} \begin{remark}[$L_\infty$-homotopies] We have shown that understanding an $L_\infty$-algebra as a (commutative cofree) differential coalgebra readily yields a natural notion of $L_\infty$-morphism as a graded morphism respecting at the same time the codifferential and the coalgebra structure. \\ Similarly, it might seem just as natural to introduce the notion of \emph{$L_\infty$-homotopy} between two $L_\infty$-morphisms, seen as graded differential coalgebra morphisms $F,G: (C,Q)\to (C',Q')$, as a certain chain homotopy $H$, in the sense of equation \eqref{eq:chainhomotopy}, together with suitable coalgebraic conditions. For instance, it could be given by a degree $-1$ \emph{$(F,G)$-coderivation} $H$ from $C$ to $C'$, \ie $\Delta' \circ H = ( F \otimes H + H \otimes G)\circ \Delta$, together with the properties \begin{align*} H Q + Q' H =&~ F-G ~, \\ \big(H\otimes (G Q) + (F Q)\otimes H \big)\Delta =&~ 0 ~. \end{align*}
%
However, such a definition is unsatisfactory from the more conceptual point of view of \emph{homotopy theory} (see \cite[Remark 4]{Dotsenko2016}). In this language, the first step would be to identify a \emph{model structure} for the category of $L_\infty$-algebras. An account of many different models, and a discussion of their equivalence, can be found in \cite{Pridham2010}. \\ A useful and explicit model of \emph{weak equivalence} (homotopies or 2-morphisms) has been given by Dolgushev in \cite{Dolgushev2007}. Namely, given two $L_\infty$-algebras $L$ and $L'$, there is an auxiliary \emph{pro-nilpotent} $L_\infty$-algebra $\mathcal{U}(L,L')$, with $k$-multibrackets denoted by $\lbrace\dots\rbrace_k$, encoding $L_\infty$-morphisms as \emph{Maurer-Cartan} elements, \ie \begin{displaymath} \Hom_{Lie\infty}(L,L') = MC(\mathcal{U}(L,L')) := \left\lbrace f \in \mathcal{U} ~\left\vert\quad \sum_{n=1}^\infty \dfrac{1}{n!}\lbrace f,\dots, f \rbrace_n = 0 \right.\right\rbrace ~. \end{displaymath} Note that the pro-nilpotency condition guarantees convergence of the above sum. Hence, two $L_{\infty}$-morphisms are \emph{equivalent} if their corresponding Maurer-Cartan elements in $\mathcal{U}$ are equivalent. (See also \cite[Appendix A]{Fregier2015}). \end{remark} \begin{remark}[The category of $L_\infty$-algebras] The discussion up to this point can be summarized by saying that $L_{\infty}$-algebras build up a category equipped with a forgetful functor to differential graded vector spaces, given by "forgetting" all multibrackets of arity greater than one. More pedantically, we have constructed three categories of $L_\infty$-algebras: \begin{enumerate} \item[(1)] Objects are graded vector spaces endowed with graded skew-symmetric multibrackets (definition \ref{Def:LInfinityStasheff} or \ref{Def:LInfinityTony}), together with morphisms given by definition \ref{Def:LinftyMorphism-skew}; \item[(2)] Objects are graded vector spaces endowed with graded symmetric multibrackets (definition \ref{Def:LInfinityShifted}), together with morphisms given by definition \ref{Def:LinftyMorphism-sym}; \item[(3)] Objects are cofree cocommutative differential coalgebras, morphisms are differential coalgebra morphisms. \end{enumerate} The three categories are isomorphic \footnote{Isomorphism of categories is a much stronger notion than \emph{equivalence of categories}. That is why we talk about different presentations of the same concept rather than of three equivalent categories.}.
The invertible functor $(1)\to (2)$ acts as the shift endofunctor on graded vector spaces and as the d\'ecalage $\Dec$ on multibrackets and the components of any given $L_\infty$-morphism. The invertible functor $(2)\to (3)$ acts as the symmetric tensor coalgebra functor on graded vector spaces, as the lift to coderivations $\widetilde{L}_{\sym}$ on multibrackets, and as the lift to coalgebra morphism on the components of a $L_\infty[1]$-morphism. \end{remark} \begin{remark}[Other formalization of $L_\infty$-algebra structures] It is worth to mention, without claiming to be exhaustive, three other extremely fruitful frameworks to understand homotopy algebras structures. Both encode $L_\infty$-algebras as a special case, generalizing the concept in quite different directions. \begin{itemize} \item \emph{A $L_\infty$-algebra is a $L_\infty$-algebroids over a point (\ie $0$-dimensional)} \cite[Appendix A]{Sati2012a}. The latter can be encoded in terms of Graded manifolds. Namely a \emph{$L_\infty$}-algebroid is $Q$-manifold that is a $\mathbb{Z}$-graded manifold endowed with a homogeneous vector field $Q$ homogeneous in degree one and such that $Q^2=0$ (\emph{homological vector field}). See \cite[\S 1]{Jurco2018} for an introduction to this approach with application to prequantum field theories. \item \emph{The category of $L_\infty$-algebras is the vertical categorification of ordinary Lie algebras}. This categorical construction has been extensively discussed in \cite{Baez2003} for the case of $L_2$-algebras. The general idea can be found in \cite{nlab:l-infinity-algebra}. \item \emph{A $L_\infty$-algebra is an algebra over the homotopy Lie operad in the category of chain complexes} \cite{Markl1998}. The theory of operads is a vast topic; a full account is contained in \cite{Loday}, a motivational introduction can be found in \cite{Vallette2014}. For a contained exposition mainly geared toward the definition of $L_\infty$-algebras as as an algebra over the $S$ operad see \cite[\S 2]{Kimura1995}. \end{itemize} % While we will not use this machinery in what follows, it is interesting to note that working with the algebraic framework of multilinear maps with composition (\RN products or Gerstenhaber product in the non-symmetric case), as it will be done in chapter \ref{Chap:MarcoPaper}, can be seen as a little step toward the operadic approach. The point is recognizing the role of the \emph{endomorphism operad} \cite[\S 2.2.]{Vallette2014} as the appropriate setting for studying the composition of multi-linear maps. \end{remark} \subsection{Examples}\label{SubSection:studycase} In this subsection, we focus on some particular classes of $L_\infty$-structures that will appear recurrently in the next chapters. In section \ref{Sec:hcmm}, we will recall the notion of \emph{\momap}. This is a particular $L_\infty$-morphism from an ordinary Lie algebra: % \begin{example}[$L_\infty$-morphisms defined on Lie algebras]\label{Rem:LftyMorphasChainMap} Recall that in example \ref{Ex:dglaAsLinfinity} we have shown how a differential graded Lie algebra can be seen as a $L_\infty$-algebra where all multibrackets in arity greater than two are trivial. \\ Consider an ordinary Lie algebra $\mathfrak{g}$, \ie a differential graded Lie algebra concentrated in degree $0$, and an arbitrary $L_\infty$-algebra $(L,\mu)$. 
An $L_\infty$-morphism $(f):\mathfrak{g}\to (L,\mu)$ reads, in the skew-symmetric presentation, as a collection of skew-symmetric multilinear maps $f_k:\wedge^k \mathfrak{g}\to L^{1-k}$ such that \begin{displaymath} f_{m}\ca [\cdot,\cdot] = \sum_{\ell=1}^{m+1} \mu_\ell \circ \bar{\Sop}_{\ell,m+1}(f) \qquad \forall m\geq 1 ~. \end{displaymath} Recognizing that \begin{displaymath} \begin{aligned} (f_{m}\ca [\cdot,\cdot])(x_1,\dots,x_{m+1})=&~ \mkern-40mu\sum_{\mkern30mu\sigma \in \ush{2,m-1}} \mkern-40mu (-)^\sigma f_m([x_{\sigma_1},x_{\sigma_2}],x_{\sigma_3},\dots x_{\sigma_{m+1}} ) = \\ =&~ - (f_{m}\circ \partial_{\CE}) (x_1,\dots,x_{m+1}) \end{aligned} \end{displaymath} the previous condition can be read as the coboundary condition \begin{displaymath} \delta_{\CE} f_m = - \sum_{\ell=1}^{m+1} \mu_\ell \circ \bar{\Sop}_{\ell,m+1}(f) \qquad \forall m\geq 1 \end{displaymath} in the Chevalley-Eilenberg cochain complex over the trivial representation $\rho: \mathfrak{g}\to 0$ (see reminder \ref{Rem:CEconventions}). \\ When $L$ is in particular a cochain complex, \ie the only non-trivial multibracket is $\mu_1$, the previous situation is expressed by the commutativity of the following diagram (in the category of ordinary vector spaces): \begin{displaymath} \begin{tikzcd} \cdots \ar[r,"\partial"] & \Lambda^k\mathfrak{g} \ar[r,"\partial"] \ar[d,"f_k"]& \Lambda^{k-1}\mathfrak{g} \ar[r,"\partial"]\ar[d,"f_{k-1}"] & \cdots \ar[r,"\partial"] & \mathfrak{g} \ar[r,"\partial"]\ar[d,"f_1"] & 0 \\ \cdots \ar[r,"\mu_1"] & L^{-k} \ar[r,"\mu_1"] & L^{-k+1} \ar[r,"\mu_1"] & \cdots \ar[r,"\mu_1"] & L^0 \ar[r,"\mu_1"] & \cdots \end{tikzcd} \end{displaymath} hence it can be seen as a chain map between the Chevalley-Eilenberg complex, with a suitable degree re-parametrization, and $(L,\mu_1)$, namely \begin{displaymath} (f): \left(S( \mathfrak{g}[1])\right)[-1] \to (L,\mu_1) ~. \end{displaymath} \end{example} Most of the subsequent work revolves around the notion of \emph{(higher) observables $L_\infty$-algebras} (see section \ref{Section:RogersObservables}). These $L_\infty$-algebras enjoy the convenient properties of being \emph{grounded}, \ie all multibrackets of arity greater than one are non-trivial only in the "ground degree" zero (definition \ref{Def:groundedLinfinity}), and of being concentrated in finitely many degrees (negative degrees in the cohomological convention and positive degrees in the homological convention). \\ In the case of a grounded $L_\infty$-algebra the axioms defining multibrackets and morphisms are considerably simpler: \begin{example}[Axioms of grounded $L_\infty$-algebras {\cite[Rem. 6.2]{Ryvkin2018}}]\label{Rem:GroundedEasyAxioms} The higher Jacobi equations for a grounded $L_\infty$-algebra $(L,\{\mu_i\}_{i\geq 1})$ (in the skew-symmetric multibrackets presentation) read, by degree reasons, as \begin{displaymath} 0 =\mu_k \ca \mu_2 - \mu_1 \circ \mu_{k+1} \qquad (\forall k\geq 1) ~. \end{displaymath} In the spirit of example \ref{Rem:LftyMorphasChainMap}, the previous equation can be recast as \begin{displaymath} \delta (\mu_k ) = \d \circ \mu_{k+1} \qquad (\forall k\geq 1) ~, \end{displaymath} denoting $\mu_1 = \d$ and \begin{displaymath} \big(\delta (\mu_k)\big)(x_1,...,x_{k+1}) =\sum_{i<j}(-1)^{i+j}\mu_k(\mu_2(x_i,x_j), x_1 ,\dots,\widehat{x_i},\dots,\widehat{x_j},\dots,x_{k+1})~. \end{displaymath} (See \cite[Rem. 6.2]{Ryvkin2018} or \cite[\S 4]{Reinhold2019} for the definition of the Chevalley-Eilenberg coboundary $\delta$ in the context of $L_\infty$-algebras).
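For instance, for $k=2$ the relation $\delta(\mu_2) = \d \circ \mu_{3}$ reads, on elements $x_1,x_2,x_3$ (a direct unfolding of the formula above),
\begin{displaymath}
-\mu_2(\mu_2(x_1,x_2),x_3) + \mu_2(\mu_2(x_1,x_3),x_2) - \mu_2(\mu_2(x_2,x_3),x_1) = \d\, \mu_3(x_1,x_2,x_3) ~,
\end{displaymath}
\ie the binary bracket $\mu_2$ satisfies the Jacobi identity only up to the $\d$-exact correction $\d \circ \mu_3$.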
\end{example} \begin{example}[Morphisms into a grounded $L_\infty$-algebra {(\cite[Lem. 2.36]{Ryvkin2016a})}]\label{Rem:GroundedEasyMorphisms} Also morphisms into a grounded $L_\infty$-algebra (see definition \ref{Def:groundedLinfinity}) take a particular form. Consider $(f):(L,\mu)\to (L',\mu')$ with $L'$ grounded; according to definition \ref{Def:LinftyMorphism-skew}, the following equation must hold for every $m\geq 1$: \begin{displaymath} \begin{aligned} 0 =& \left(\sum_{\ell=1}^m f_{m-\ell+1}\ca \mu_\ell\right) - \mu_1'\circ \Sop_{1,m}(f) - \mu_m' \circ \Sop_{m, m}(f) = \\ =& \left(\sum_{\ell=1}^m f_{m-\ell+1}\ca \mu_\ell\right) - \mu_1'\circ f_m - \mu_m' \circ f_1^{\otimes m} \end{aligned} \end{displaymath} When studying infinitesimal Lie algebra actions, we will be particularly interested in $L_\infty$-morphisms from a Lie algebra into a grounded $L_\infty$-algebra. In this case the previous equation reads as follows: \begin{displaymath} \begin{aligned} 0 =& f_{m-1}\ca \mu_2 - \mu_1' \circ f_m - \mu_m' \circ f_1^{\otimes m} = \\ =& - \delta_{\CE} \left( f_{m-1} \right) - \mu_1' \circ f_m - \mu_m' \circ f_1^{\otimes m} ~. \end{aligned} \end{displaymath} \end{example} Note also that, from the $L_\infty$-perspective, $L_n$-algebras are considerably easier to handle. Being concentrated in degrees $\lbrace k \in \mathbb{Z}~\vert~ -(n-1) \leq k \leq 0\rbrace$ means that an $L_n$-algebra involves only finitely many non-trivial brackets $\mu_1,\dots,\mu_{n+1}$ satisfying only finitely many higher Jacobi equations $J_1,\dots, J_{n+1}$ (see for instance \cite[Lem. 2.35]{Ryvkin2016a}). In the following we will also make use of the following construction, which allows one to "push forward" $L_\infty$-structures on a given graded vector space $V$ along any collection of multibrackets $(p_i: V^{\odot i} \to V)_{i\geq 1}$. \begin{remark}\label{rem:exp} Let $(V,\mu)$ be an $L_{\infty}[1]$-algebra, and denote the corresponding codifferential by $Q:=\widetilde{L}_{\sym}(\mu)$. A degree $0$ linear map $p\colon \overline{S(V)}\to V$ gives rise to a degree $0$ coderivation $C_p:=\widetilde{L}_{\sym}(p)$ of $\overline{S(V)}$. In turn, by Remark \ref{rem:codermor}, $C_p$ determines an isomorphism of coalgebras $e^{C_p}$. \\ Consider now $$Q':=e^{C_p}\circ Q\circ e^{-C_p} ~.$$ One checks easily that \begin{itemize} \item $Q'$ is also a codifferential on $\overline{S(V)}$, and thus corresponds to a new $L_{\infty}[1]$-algebra structure $\mu'$ on $V$, \item $e^{C_p}$ intertwines $Q$ and $Q'$, hence it corresponds to an $L_{\infty}[1]$-isomorphism $f$ from $(V,\mu)$ to $(V,\mu')$. \end{itemize} Explicitly, one has \begin{align*} \mu'&=~\pr_V(Q') ~= \\ &=~ \pr_V\left(Q+[C_p,Q]+\frac{1}{2!}[C_p, [C_p,Q]]+\dots \right) ~= \\ &=~ \mu+[p,\mu]_{\cs}+\frac{1}{2!}[p, [p,\mu]_{\cs}]_{\cs}+\dots \end{align*} and $$f=\pr_V(e^{C_p})=\pr_V+p+\frac{1}{2!}(p\cs p)+\dots,$$ where $[\;,\;]_{\cs}$ denotes the graded commutator with respect to the symmetric \RN product $\cs$. Under d\'ecalage one gets a corresponding result in the skew-symmetric framework. \end{remark} In chapter \ref{Chap:MarcoPaper} we will discuss the so-called \emph{Vinogradov $L_\infty$-algebra}. That is a (non-grounded) $L_\infty$-algebra obtained from a differential graded Lie algebra by applying the following construction: \begin{theorem}[Getzler {\cite[Thm.
3]{Getzler1991}}]\label{Thm:Getzler} Given a differential graded Lie algebra $(L,d,[\cdot,\cdot])$ \begin{displaymath} \begin{tikzcd} \cdots \ar[r] & L_{-1}\ar[r,"\d"] & L_0 \ar[r,"\d"] & L_1 \ar[r]& \cdots \end{tikzcd} \end{displaymath} the truncation of its underlying chain complex in negative degrees \begin{displaymath} \mathbb{L} : = \trunc_{0} L = \begin{cases} L_i~, & i< 0 \\ 0~, & i \geq 0 \end{cases} \end{displaymath} constitutes a $L_\infty[1]$-algebra with the following symmetric multibrackets \begin{displaymath} \lbrace a \rbrace_1 = \begin{cases} \d a ~, & |a| < 0 \\ 0 ~, & |a| \geq 0 \end{cases} \end{displaymath} and \begin{displaymath} \begin{aligned} \lbrace a_0,\dots, a_n\rbrace_{n+1} =& b_n \left( \underbrace{[\cdot,\cdot]\symgerst \dots ([\cdot,\cdot]}_{n\text{ times}} \symgerst D ) \right) (a_0,\dots,a_{n}) = \\ =& b_n\left(\sum_{\sigma \in S_{n+1}} \epsilon(\sigma) [[\dots,[[D a_{\sigma_0},a_{\sigma_1}],a_{\sigma_2}]\dots],a_{\sigma_n}]\right) \end{aligned} \end{displaymath} where \begin{displaymath} D := \d - \{\cdot\}_1 = \d \circ \pi_g ~, \end{displaymath} $\pi_g: L \twoheadrightarrow L_{-1}$ is the projection on the "ground" degree, and \begin{displaymath} b_n = (-)^n \dfrac{B_n}{n!} \end{displaymath} is a numerical constant containing the $n$-th Bernuolli number $B_n$ (see \cite{Weisstein} for a survey on the definition and property of Bernoulli numbers). \end{theorem} \begin{remark}[Getzler theorem in the literature] Observe that the previous result has been originally stated in the \emph{homological convention} \cite{Getzler1991}. In the same draft, it is also explained how it can be obtained as a corollary of \cite[Theorem 3.1]{Fiorenza2006}. Namely, Fiorenza and Manetti has shown how to naturally endow the mapping cone of a given DGLA morphism $\chi:(L,\d,[\cdot,\cdot]) \to (M,\d',[\cdot,\cdot]')$, \ie the differential graded vector space $(C(\chi),\d_C)$ where \begin{displaymath} C(\chi) = L\oplus M[1] \quad,\qquad \left( \morphism{\d_c} {L^i\oplus M^{i-1}} {L^{i+1}\oplus M^{i}} {\pair{x}{y}} {\pair{\d x}{\chi(x)- \d' y}} \right) ~, \end{displaymath} with a $L_\infty$-structure via \emph{homotopy transfer} (see \cite[\S 4,5]{Fiorenza2006}). In \cite[Rem. 5.7]{Fiorenza2006}, they also discuss the, somewhat, "miraculous" appearance of the Bernoulli numbers. \end{remark} Although we will not make explicit use of it in the following, we want to mention a procedure for constructing algebras which, in fashion similar to theorem \ref{Thm:Getzler}, realizes higher multibrackets by iteration on multibrackets in lowest arity. \begin{example}[Derived brackets] A huge class of examples of $L_\infty$-algebras can be produced with the so-called \emph{derived bracket construction} due to Voronov \cite{Voronov2005}. Namely, it is possible to show how to construct explicitly a $L_{\infty}$-structure out of any quadruple $(L,\mathcal{a},P,\Delta)$, called \emph{V-data}, composed of \begin{itemize} \item a graded Lie algebra $L=(L,[\cdot,\cdot])$; \item an Abelian Lie subalgebra $\mathcal{a}\hookrightarrow L$; \item a projection $P:L\twoheadrightarrow \mathcal{a}$ whose kernel is a Lie subalgebra of $L$; \item an homogeneous, degree $1$, element $\Delta \in \ker(P)_1$ such that $[\Delta,\Delta]=0$. \end{itemize} See \cite[Thm 2]{Voronov2004} or \cite[\S 1]{Fregier2015b} for a condensed survey. 
\end{example} Observe that given any two $L_\infty$-algebras one can construct a third $L_\infty$-algebra given by their direct sum: \begin{definition}[Direct sum of $L_\infty$-algebras {\cite[ex. 6.5]{Doubek2007}}] Given any two $L_\infty$-algebras $V=(V,\mu)$ and $W=(W,\nu)$, we call \emph{direct sum of $V$ and $W$} the $L_\infty$-algebra $V\oplus W$ with underlying vector space given by the direct sum $V\oplus W$ of the underlying vector spaces of $V$ and $W$ and with multibrackets given, for any $k\geq 1$, by \begin{displaymath} \morphism{\mu_k^\oplus:=\mu_k\oplus \nu_k} {(V\oplus W)^{\wedge k}} {V\oplus W} {\pair{v_1}{w_1}\otimes \dots \otimes \pair{v_k}{w_k}} {\pair{\mu_k(v_1,\dots,v_k)}{\nu_k(w_1,\dots,w_k)}} ~. \end{displaymath} \end{definition} An important result shows that any $L_\infty$-algebra can be equivalently expressed as the direct sum of two $L_\infty$-algebras of a "simpler" type: \begin{theorem}[Decomposition theorem {\cite[Lemma 4.9]{Kontsevich2003}}] Any $L_\infty$-algebra is $L_\infty$-isomorphic to the direct sum of a \emph{minimal $L_\infty$-algebra} (the unary bracket is zero) with a \emph{linear contractible $L_\infty$-algebra} (all brackets are zero except the unary one and all cohomology groups vanish). \end{theorem} \ifstandalone \bibliographystyle{../../hep} \chapter{Algebraic structure of multibrackets and coderivations}\label{App:RNAlgebras} This appendix discusses some algebraic properties possessed by the space of homogeneous multilinear maps between graded vector spaces. More precisely, we will deal with defining and providing some basic properties of the Gerstenhaber \cite{Gerstenhaber1963a}\cite{Gerstenhaber1964} and \RN \cite{Nijenhuis1967} products. These algebraic operations enjoy the crucial property of determining a Lie algebra structure despite not being associative (they are pre-Lie, see appendix \ref{App:PreLie}). This framework will be useful in chapter \ref{Chap:Linfinity} for introducing the definition of \emph{$(L_\infty)$-structures}. The material presented here can be considered standard. It has been recorded to provide a compact reference in the body of the thesis. Several proofs are included for the sake of completeness. We point out that some statements specific to the context of graded multilinear algebra (see \eg propositions \ref{Prop:SymmetricGerstenhaberAssociators} and \ref{Prop:DecalageAsCoalgebrasISO}) are not easy to find in the literature. In section \ref{section:AppBConclusions} we synthetically recap all the results in a single commutative diagram. Our presentation is mostly tailored to our subsequent needs; its inspiration can be traced to several sources \cite{Lecomte1992,Doubek2007,Manetti-website,Delgado2015,Bandiera2016}. \section{Algebra of multilinear operators (Gerstenhaber algebra)} In this section, we deal with some algebraic structures canonically associated with the graded vector space of homogeneous multilinear maps; namely, the Gerstenhaber product \cite{Gerstenhaber1964} and the \RN product \cite{Nijenhuis1967}. \\ Let $V$ be a graded vector space. We start by introducing the following notation: \begin{notation}\label{Notation:MultilinearMapsSpaces} We denote the vector space of $a$-linear homogeneous maps in degree $k$ valued in $W$ as \begin{displaymath} M_{a,k}(V,W) := \left\lbrace m:\underbrace{V\times\dots\times V}_{a-\text{times}}\to W \;\left\vert\; \stackanchor[5pt]{\text{$m$ is linear in each argument}} {$|m(x_1,\dots,x_a)| = k + \sum_{i=1}^a |x_i|$} \right.\right\rbrace ~.
\end{displaymath} The linear structure of $M_{a,k}(V,W)$ is inherited "point-wise" from the vector space structure of $W$ (\cf with the internal hom-functor in remark \ref{rem:Vectpdvs}). When $W=V$ we will lighten the notation omitting the second entry. \end{notation} % \begin{remark}[Universal property of multilinear maps]\label{Rem:UnivPropMultilinMaps} Recall that the tensor product $\otimes$ of graded vector spaces is completely characterized by the following universal property: \begin{itemize} \item given three graded vector space $V,W,Z$, for any homogeneous bilinear map $\varphi : V \times W \to Z$ in degree $k$, there exists an unique homogeneous map $\hat{\varphi}: V \otimes W \to Z$, with same degree, such that the following diagram commutes in the category of graded vector spaces \begin{displaymath} \begin{tikzcd} V \times W \ar[r] \ar[dr,"\varphi"'] & V \otimes W \ar[d,"\hat{\varphi}"] \\ & Z \end{tikzcd} \end{displaymath} \end{itemize} Accordingly, for any graded vector spaces $V,W$, and for any $n\geq 0$ and $k \in \ZZ$, one has that \begin{displaymath} M_{n,k}(V,W)\cong \underline{\Hom}^k (V^{\otimes n},W) ~. \end{displaymath} We will usually treat this isomorphism as an identification. Hence we will be free to understand elements of $M_{n,k}(V,W)$, \ie $n$-ary functions $V\times\dots \times V \to W[k]$ with the extra property of being separately linear in each entry, as graded homogeneous map from $V^{\otimes n}$ to $W$. Homogeneous linear maps from $V^{\otimes n}$ to $W$ will be said \emph{of arity $n$}, and we will often denote the image of a multilinear map $\mu_n$ on $x_1\otimes\dots\otimes x_n$ as $\mu_n(x_1,\dots,x_n)$, separating the input elements by commas and omitting the symbol $\otimes$. In particular we have $M_{a,k}(V)\subseteq V[k]\otimes(V^{\otimes a})^\ast$, the equality holds when $dim(V)<\infty$. \end{remark} % \begin{remark}[The (bi-)graded vector space of multilinear maps]\label{rem:BigradedSpaceofMultilinearMaps} Considering all the possible arities and degrees collectively, and neglecting the "internal" grading of the graded vector space ${\Hom}(V^{\otimes n},W[k])$ (see remark \ref{rem:neglectinginternalgrading}), one obtains the $(\NN_0\times \ZZ)$-graded (bi-graded) vector space \begin{displaymath} M(V,W)_{\bullet\bullet} := \left( (a,d) \mapsto \underline{\Hom}^d(V^{\otimes a},W) \right) \end{displaymath} which is respectively graded by the arity and the degree of the homogeneous maps. \\ In what follows, it will be crucial to identify a correct, in the sense of most suitable to our needs, $\ZZ$-graded (single grading) vector space built out of the above bi-graded vector space. % The most common choice is to understand the degree of multilinear maps as their degree in the sense of homogeneous maps. Namely, we introduce the following graded vector space: \begin{displaymath} \begin{aligned} {M(V,W)} := M_{\oplus,\bullet} = \left( k \mapsto \bigoplus_{n\geq 1}M_{n,k}(V,W) \right) ~. \end{aligned} \end{displaymath} We will refer to it as "the" \emph{graded vector spaces of (homogeneous) multilinear maps}. According to remark \ref{Rem:UnivPropMultilinMaps} and definition \ref{Def:TensorAlgebra}, one has the following isomorphism of graded vector spaces \begin{displaymath} {M(V,W)} \cong \underline{\Hom}(\overline{T(V)},W) \end{displaymath} that we will interpret as an identification without further noticing. 
\\ In remark \ref{rem:DecasGradedMorph} we will introduce another graded vector space that one can construct out of $M_{\bullet \bullet}(V,W)$ by means of the $\tot$ functor defined in equation \eqref{eq:totalbigradedcomplex}. \end{remark} Without loss of generality, we now focus on the case where $V=W$. Everything can be easily extended to multilinear maps with any codomain\footnote{When defining the composition, one has to recall that it will be an operation only defined on pairs of multilinear maps with matching domains and codomains.}. The graded vector space of multilinear maps can be endowed with a non-associative algebra structure (see \eg \cite{Manetti-website-coalgebras}):
%
\begin{definition}[$i$-th Gerstenhaber product]\label{Def:ithGerstenhaberProduct} We call \emph{$i$-th Gerstenhaber product} the bilinear map
%
\begin{displaymath} \morphism{-\gerst_i-} {M_{a,k}(V)\times M_{a',k'}(V)} {M_{(a+a'-1),(k+k')}(V)} {(f,g)} {f\gerst_i g = f \circ (\mathbb{1}_{i-1}\otimes g \otimes \mathbb{1}_{a-i})} \end{displaymath}
%
where $\circ$ denotes the usual composition of graded linear maps and $1\leq i \leq a$. \\ The latter definition can be extended by linearity to the whole of $M(V)$, with the understanding that $f\gerst_i g = 0$ when $i$ is greater than the arity of $f$. \end{definition}
%
\begin{definition}[(Full) Gerstenhaber product]\label{Def:FullGerstenhaberProduct} We call \emph{(full) Gerstenhaber product} the bilinear map
%
\begin{displaymath} \morphism{-\gerst-} {M_{a,k}(V)\times M_{a',k'}(V)} {M_{(a+a'-1),(k+k')}(V)} {(f,g)} {\displaystyle f\gerst g = \sum_{i=1}^{a} f \gerst_i g} ~. \end{displaymath} As before, this can be extended to the space $M(V)$ of all multilinear maps by linearity. \end{definition}
%
\begin{notation}[Unfolding Koszul convention]\label{not:GerstProdUnfolded} Slightly more explicitly, given $f\in \Hom(V^{\otimes a},V[|f|])$ and $g\in \Hom(V^{\otimes b},V[|g|])$ one has \begin{displaymath} (f\gerst_i g)\in \Hom(V^{\otimes(a+b-1)},V[|f|+|g|]) ~. \end{displaymath} According to the Koszul convention (see remark \ref{Rem:KoszulTAM}), one gets that \begin{displaymath} \mathclap{ \begin{aligned} &(f \gerst_i g) (x_1\dots x_{a+b-1}) = \\ &=~ (-)^{|g|(|x_1|+\dots +|x_{i-1}|)} f(x_1,\dots,x_{i-1},g(x_i,\dots,x_{i+b-1}),x_{i+b},\dots,x_{a+b-1}) \end{aligned} } \end{displaymath}
%
and \begin{displaymath} \mathclap{ \begin{aligned} &(f\gerst g) (x_1 \otimes\dots\otimes x_{a+b-1})= \\ &~= \mkern-5mu \sum_{k=0}^{a-1}(-)^{|g|(|x_1|+\dots+|x_k|)} f(x_1,\dots, x_k , g(x_{k+1} , \dots , x_{k+b}), x_{k+b+1}, \dots , x_{a+b-1}) \end{aligned} } \end{displaymath} for any given $(a+b-1)$-tuple of homogeneous elements $(x_1\dots x_{a+b-1})$ of $V$. \end{notation}
%
The space $M(V)$, taken together with $\gerst$, forms a unital, non-associative algebra with unit given by $\Unit \in M_{1,0}(V)$. More precisely, the Gerstenhaber product yields a pre-Lie structure (see appendix \ref{App:PreLie} for the definition): \begin{lemma} The graded vector space $M(V)$ together with the Gerstenhaber product $\gerst$ forms a graded right pre-Lie algebra. \end{lemma} \begin{proof} Recall that, according to our definition, the grading of $M(V)$ is given by the degree of the homogeneous map. We thus have to show that the associator $\alpha(f,g,h)= f\gerst(g\gerst h) - (f\gerst g)\gerst h$ is graded symmetric in the two rightmost entries, \ie \begin{displaymath} \alpha(f,g,h) = (-)^{|h||g|} \alpha(f,h,g) ~.
\end{displaymath} We show how this computation works in a fairly simple case; a complete and conceptual proof follows from remark \ref{Remark:GerstPreLie}. Let us consider for simplicity $f\in \underline{\Hom}(V^{\otimes 2},V)$ and $g,h \in\underline{\Hom}(V,V)$; one has \begin{displaymath} f\gerst (g\gerst h) = \sum_{i=1}^2 f \gerst_i (g\circ h) = f \circ (gh\otimes \mathbb{1} + \mathbb{1}\otimes gh) \end{displaymath} and \begin{displaymath} \begin{aligned} (f\gerst g) \gerst h =& \sum_{j=1}^2 (\sum_{i=1}^2 f \gerst_i g ) \gerst_j h = \\ =& f \circ (g\otimes \mathbb{1}+ \mathbb{1}\otimes g) \circ (h\otimes \mathbb{1} + \mathbb{1}\otimes h) = \\ =& f \circ ( g\otimes h + (-)^{|g||h|} h\otimes g + gh \otimes \mathbb{1} + \mathbb{1}\otimes gh) \end{aligned} \end{displaymath} where the sign coefficient comes from the Koszul convention (see equation \eqref{Eq:TensorHomogeneousMaps}). The corresponding associator reads \begin{align*} \alpha(f,g,h) =&~ -f \circ (g\otimes h + (-)^{|h||g|} h \otimes g ) = \\ =&~ -f \circ (g\odot h) ~, \end{align*} where $\odot$ denotes the symmetric tensor product of linear maps. Hence, the latter expression is manifestly graded symmetric in the second and third entries. \end{proof}
%
Therefore, there is a well-defined Lie bracket: \begin{definition}[Gerstenhaber bracket]\label{def:GerstenhaberBracket} We call \emph{Gerstenhaber bracket} the Lie bracket $[\cdot,\cdot]$ on the graded vector space of multilinear maps $M(V)$ given by the graded pre-Lie product $\gerst$, \ie \begin{displaymath} [f,g] := f\gerst g - (-)^{|f||g|} g \gerst f \qquad \forall f,g \in M_{\bullet}(V) ~. \end{displaymath} \end{definition}
%
It goes without saying that the Gerstenhaber bracket satisfies the graded Jacobi equation.
%
\begin{remark}\label{Rem:signsProblemwithGerstenhaberProducts} Note that definition \ref{def:GerstenhaberBracket} differs from the original one given by Gerstenhaber \cite{Gerstenhaber1964} where the grading of $M(V)$ is given by the arity minus one: \begin{displaymath} [f,g]_G := f \gerst g - (-)^{(a-1)(b -1)} g \gerst f ~, \end{displaymath} for any $f$ and $g$ as in remark \ref{not:GerstProdUnfolded}. \end{remark}
%
\subsection{D\'ecalage of multilinear maps} It is possible to establish a relationship between the multilinear operators on the graded vector space $V$ and those on its shift by composing these operators with suitable d\'ecalage isomorphisms (see definition \ref{Def:DecIso}). Recall that we introduced the following d\'ecalage isomorphism: \begin{displaymath} \dec: (W^{\otimes n}) [n] \xrightarrow{~\sim~} (W[1])^{\otimes n} ~, \end{displaymath} given on arbitrary elements $\omega_1,\dots,\omega_n$ in $W$ by \begin{equation}\label{eq:decStandard-appendix} \begin{aligned} \dec \left( (\omega_1\otimes\dots\otimes \omega_n)_{[n]}\right) =&~ (-)^{\sum_{k=1}^n(n-k)|\omega_k|} \omega_{1\,[1]}\otimes\dots\otimes \omega_{n\,[1]} \\[1em] \dec^{-1} \left( \omega_{1\,[1]}\otimes\dots\otimes \omega_{n\,[1]} \right) =&~ (-)^{\sum_{k=1}^n(n-k)(|\omega_{k\,[1]}|+1)} (\omega_1\otimes\dots\otimes \omega_n)_{[n]} ~.
\end{aligned} \end{equation} The d\'ecalage isomorphism induces by precomposition an isomorphism of graded vector spaces: \begin{definition}[D\'ecalage of multilinear maps]\label{Def:MultiBrackDecalage} Given two graded vector spaces $V$ and $W$, we call \emph{d\'ecalage} of multilinear maps from $V$ to $W^{\otimes m}$ the invertible homogeneous map: \begin{displaymath} \Dec : \lineHom^{k}( V^{\otimes n}, W^{\otimes m}) \to \lineHom^{k +n-m}(V[1]^{\otimes n},W[1]^{\otimes m}) \end{displaymath} given by \begin{displaymath} \begin{aligned} \Dec(\mu) = \dec[|\mu|+n-m] \circ \mu[n] \circ \dec^{-1} & \qquad \forall \mu\in \lineHom^{|\mu|}( V^{\otimes n}, W^{\otimes m}) \\ \Dec^{-1}(\varphi) = \left(\dec^{-1}[|\varphi|] \circ \varphi \circ \dec \right)[-n] & \qquad \forall \varphi \in \lineHom^{|\varphi|}( V[1]^{\otimes n}, W[1]^{\otimes m}) ~. \end{aligned} \end{displaymath} \end{definition} \begin{remark}[D\'ecalage of multilinear maps as a diagram]\label{Rem:DecasDiagram} Observe that the d\'ecalage of a given multilinear map is given by the following commutative diagram in the category of graded vector spaces: \begin{center} \begin{tikzcd} V^{\otimes n}{[n]} \ar[d,"\dec"'] \ar[r,"\mu{[n]}"] & W^{\otimes m} [|\mu| + n] = W^{\otimes m} [m][|\mu| + n-m] \ar[d,"\dec{[|\mu|+n-m]}"] \\ (V[1])^{\otimes n} \ar[r,dashed,"\Dec(\mu)"'] & (W[1])^{\otimes m}[|\mu| + n -m ] \end{tikzcd} \end{center} which in particular makes manifest the invertibility of the map $\Dec$. Namely, the inverse can be read out of the following diagram: % \begin{center} \begin{tikzcd}[column sep = huge] V^{\otimes n}{[n]} \ar[d,"\dec"'] \ar[r,dashed,"\Dec^{-1}(\varphi){[-n]}"] &[2em] W^{\otimes m} [m][|\varphi|] \ar[d,"\dec{[|\varphi|]}"] \\ (V[1])^{\otimes n} \ar[r,"\varphi"'] & (W[1])^{\otimes m}[|\varphi| ] \end{tikzcd} ~. \end{center} \end{remark} % \begin{remark}[D\'ecalage of $M_{\bullet,\bullet}(V)$]\label{Rem:MultiBracketDecalage} In the following we will be mainly concerned with the space $M(V)$ given at the beginning of this section. In the case $m=1$, the d\'ecalage of multilinear maps $\Dec$ is given by graded morphisms \begin{equation}\label{eq:DecComp-appendix} \begin{aligned} \Dec &: M_{n,k}(V) \xrightarrow{\quad\sim\quad} M_{n,k+n-1}(V[1]) \\ \Dec^{-1} &: M_{n,k}(V[1]) \xrightarrow{\quad\sim\quad} M_{n, k +1 -n}(V) \end{aligned}~. \end{equation} The situation is encoded by the following commutative diagram for any $\mu_n\in M_{n,|\mu_n|}(V)$: \begin{equation}\label{eq:DecalageDiagram-appendix} \begin{tikzcd} V^{\otimes n}[n] \ar[r,"\mu_n{[n]}"] & W[|\mu_n|][n] \ar[r,equal] & W[1][|\mu_n|+n-1] \\ (V[1])^{\otimes n} \ar[u,"\dec^{-1}"] \ar[urr,dashed,"\Dec(\mu_n)"'] \end{tikzcd} \end{equation} meaning that $\Dec(\mu_n)= \mu_n[n]\circ \dec^{-1}$. The d\'ecalage operator of multilinear maps is explicitly given as follows: \begin{equation}\label{eq:ExplicitDECA-appendix} \morphism{\Dec(\mu_n)} {\mkern-50mu V[1]^{\otimes n}} {W[1]} {v_{1~[1]}\otimes\dots\otimes v_{n~[1]}} {(-)^{ \sum\limits_{j=1}^{n}(n-j)|v_j|}\mu_n(v_1\otimes\dots\otimes v_n)_{[n]}} ~. \end{equation} \end{remark} \begin{remark}[D\'ecalage $\Dec$ as a graded morphism]\label{rem:DecasGradedMorph} Observe that the operators given in equation \eqref{eq:DecComp-appendix}, due to their property of intertwining the grading and the arity of any given multilinear maps, cannot be read as the components of a bi-graded morphism on $M_{\bullet \bullet}(V)$. 
\\
However, it is possible to read the operator $\Dec$ as a genuine graded isomorphism (not bi-graded) by appropriately combining the indices $n$ and $k$ into a single $\ZZ$-grading. Namely, by its very definition, the linear operator $\Dec$ descends to a graded morphism
\begin{displaymath}
\Dec: \overline{\overline{M(V,W)}} \to {M(V[1],W[1])}
\end{displaymath}
between the $\ZZ$-graded vector spaces constructed out of $M_{\bullet\bullet}$ according to remark \ref{rem:BigradedSpaceofMultilinearMaps} and the graded vector space obtained from the total space of $M_{\bullet \bullet}$. More explicitly, one has:
\begin{align*}
{M(V,W)} &:= \underline{\Hom}(\overline{T(V)},W) = M_{\oplus,\bullet} = \left( k \mapsto \bigoplus_{n\geq 1}M_{n,k}(V,W) \right) ~, \\
\overline{\overline{M(V,W)}} &:= \left(\tot(M_{\bullet \bullet}(V,W))\right)[1] = \left( k \mapsto \bigoplus_{n+i=k+1}M_{n,i}(V,W) \right)~.
\end{align*}
In other words, ${M(V,W)}$ and $\overline{\overline{M(V,W)}}$ denote the $\ZZ$-graded vector space of $n$-multilinear maps, for any $n\geq 1$, from $V$ to $W$, taken with two different gradings. Namely, given a homogeneous multilinear map $\mu \in M_{n,k}(V,W)$, we denote by
\begin{displaymath}
|\mu|=k ~,\qquad ||\mu||=k+n-1 ~,
\end{displaymath}
the degree of $\mu$ when regarded respectively as an element of ${M(V,W)}$ and $\overline{\overline{M(V,W)}}$. Notice that $|\mu|$ coincides, as expected, with the degree of $\mu$ as a homogeneous map.
\end{remark}
\begin{remark}[$\Dec$ in the literature]\label{Rem:DecalageinLiterature}
We ought to notice that our definition of the d\'ecalage of multilinear maps differs by a sign prefactor from the definition often found in the literature (\eg \cite[Eq. 3]{LadaStasheff}, \cite[\S 1]{Fiorenza2006}, \cite[Rem 1.7]{Bandiera2016}, \cite[Prop. 1.5]{Delgado2018b}). The different sign comes from our convention on shift endofunctors, which identifies $[k][\ell]$, $[\ell][k]$ and $[\ell+k]$ for any $k,\ell \in \mathbb{Z}$ (see remark \ref{rem:compsingShiftsvsSuspensions}). Namely, in our convention, the isomorphism $V[k][\ell]\cong V[\ell][k]$ is treated as a natural identification and accordingly denoted with an $=$ in diagram \eqref{eq:DecalageDiagram-appendix}.
\\
However, different choices can be made. For instance, if one replaces that identification with the natural isomorphism given by the braiding
\begin{displaymath}
\begin{tikzcd}[ column sep=4em,row sep=-1ex]
\mathbb{K}[\ell]\otimes\mathbb{K}[k]\otimes V \ar[r,"B_{\mathbb{K}[\ell],\mathbb{K}[k]}"] & \mathbb{K}[k]\otimes\mathbb{K}[\ell]\otimes V \\
1_{[\ell]}\otimes 1_{[k]}\otimes v \ar[r,mapsto]& (-)^{\ell k}~1_{[k]}\otimes 1_{[\ell]} \otimes v
\end{tikzcd} ~,
\end{displaymath}
the resulting d\'ecalage would get an extra overall sign prefactor.
\end{remark}
A crucial property of the d\'ecalage operator $\Dec$ (see definition \ref{Def:DecIso}) is that it preserves the Gerstenhaber product modulo a sign. The proof is based on the following lemma:
\begin{lemma}\label{Lemma:CalcoloDecalageGerstenhaber}
Let $\mu_b\in \underline{\Hom}^{|\mu_b|}(V^{\otimes b}, V)$ be a multilinear map of arity $b$. Consider the graded morphism
\begin{displaymath}
(\Unit_a \otimes \mu_b \otimes \Unit_c) \,:~ V^{\otimes (a+b+c)}\to V^{\otimes (a+1+c)}[|\mu_b|]~.
\end{displaymath} Then the d\'ecalage of the latter, computed according to definition \ref{Def:MultiBrackDecalage}, satisfy the following equation $$ \Dec(\Unit_a\otimes \mu_b \otimes \Unit_c) = (-)^{|\mu_b|c}\Unit_a\otimes \Dec(\mu_b)\otimes \Unit_c$$ where $\Unit_i$ on the left-hand side denotes the identity on $V^{\otimes i}$ and on the right-hand denotes the identity on $(V[1])^{\otimes i}$. \end{lemma} \begin{proof} The appearance of the sign prefactor can be explained by carefully chasing arbitrary elements $x_i$, with $1\leq i \leq {a+b+c}$, in $V$. We denote the sign prefactors appearing in each step as $S_i=(-)^{s_i}$. The following $5$ mappings reproduce the d\'ecalage of the multilinear operator $\Unit_a\otimes \mu_b \otimes \Unit_c$: \begin{minipage}{\linewidth+2cm} \begin{shrinkmath} \hspace{-2cm} \begin{tikzcd}[ ampersand replacement = \&, /tikz/column 1/.append style={anchor=base east}, /tikz/column 2/.append style={anchor=base west}] \&[-3em] (V[1])^{\otimes a+b+c} \ar[r]\ar[r,"\dec"] \&[-3.5em] \phantom{.} \\[-2em] \& x_{1[1]}\otimes\dots\otimes x_{a+b+c[1]} \ar[r,mapsto] \& \phantom{.} \\ \cdots \ar[r] \& (V)^{\otimes a+b+c}[a+b+c] \ar[r,"{(\Unit_a\otimes \mu_b\otimes \Unit_c)[a+b+c]}"] \& \phantom{.} \\[-2em] \cdots \ar[r,mapsto] \& S_1(x_{1}\otimes\dots\otimes x_{a+b+c})_{[a+b+c]} \ar[r,mapsto] \& \phantom{.} \\ \cdots \ar[r] \& (V^{\otimes a}\otimes V[|\mu_b|]\otimes V^{\otimes c})[a+b+c] \ar[r,"\dec"] \& \phantom{.} \\[-2em] \cdots \ar[r,mapsto] \& S_1 S_2(x_{1}\otimes\dots\otimes x_a\otimes (\mu_b(x_{a+1},\dots, x_{a+b}))_{[|\mu_b|]}\otimes x_{a+b+1}\otimes\dots \otimes x_{a+b+c})_{[a+b+c]} \ar[r,mapsto] \& \phantom{.} \\ \cdots \ar[r] \& V^{\otimes a+1+c}[|\mu_b|+a+b+c]\equiv V^{\otimes a+1+c}[|a+1+c][|\mu_b|+b-1]\ar[r,"\dec"] \& \phantom{.} \\[-2em] \cdots \ar[r,mapsto] \& S_1 S_2 S_3(x_{1}\otimes\dots\otimes x_a\otimes \mu_b(x_{a+1},\dots, x_{a+b})\otimes x_{a+b+1}\otimes\dots \otimes x_{a+b+c})_{[|\mu_b|+a+b+c]} \ar[r,mapsto] \& \phantom{.} \\ \cdots \ar[r] \& (V[1])^{\otimes a+1+c}[|\mu_b|+b-1]\ar[r,"\dec"] \& \phantom{.} \\[-2em] \cdots \ar[r,mapsto] \& S_1 \cdots S_4(x_{1[1]}\otimes\dots\otimes x_{a[1]}\otimes (\mu_b(x_{a+1},\dots, x_{a+b}))_{[1]}\otimes x_{a+b+1[1]}\otimes\dots \otimes x_{a+b+c[1]})_{[|\mu_b|+b-1]} \ar[r,mapsto] \& \phantom{.} \\ \cdots \ar[r] \& (V[1])^{\otimes a}\otimes V[1][|\mu_b|]\otimes (V[1])^{\otimes c} \\[-2em] \cdots \ar[r,mapsto] \& S_1 \cdots S_5(x_{1[1]}\otimes\dots\otimes x_{a[1]}\otimes (\mu_b(x_{a+1},\dots, x_{a+b}))_{[|\mu_b|+b]}\otimes x_{a+b+1[1]}\otimes\dots \otimes x_{a+b+c[1]}) = \star \end{tikzcd} \end{shrinkmath} \end{minipage} % In the above diagram we sloppily denoted with the same symbol $\dec$ both the mapping given by equation \eqref{Def:DecIso} and the mappings given by equations \eqref{eq:decStandard-appendix}. \\ Furthermore, the last term can be also read as follows: \begin{displaymath} \begin{aligned} \star =&~ S_1 \cdots S_6(x_{1[1]}\otimes\dots\otimes x_{a[1]}\otimes (\Dec(\mu_b)(x_{a+1[1]},\dots, x_{a+b[1]})\otimes\dots \otimes x_{a+b+c[1]}) = \\ =&~ S_1 \cdots S_7 (\Unit_a \otimes \Dec(\mu_b) \otimes \Unit_c )(x_{1[1]},\dots, x_{a+b+c[1]}) ~. \end{aligned} \end{displaymath} The signs $S_1,S_3,S_4,S_5$ come from our definition of $\dec$. 
Namely: \begin{align*} s_1 =&~ \sum_{i=1}^{a+b+c} (a+b+c-i)|x_i| ~, \\ s_3 =&~ |\mu_b|(|x_1|+\dots+|x_a|) ~, \\ s_4 =&~ \sum_{i=1}^a|x_i|(a+1+c-i) + (|\mu_b|+\sum_{i={a+1}}^{a+b}|x_{i}|)c + \sum_{i=a+b+1}^{a+b+c} |x_i|(a+b+c-i) ~, \\ s_5 =&~ (|\mu_b|+b-1)(|x_1|+\dots+|x_a|+a) ~. \end{align*} % The coefficient $s_2$ is zero in accordance with our convention on the action of shifted maps (see remark \ref{rem:shiftHomogMapsTrickySign}). The sign $S_6$ comes from equation \eqref{eq:ExplicitDECA-appendix}, \ie \begin{displaymath} s_6= \sum_{i=1}^b (b-i)|x_{a+i}| ~. \end{displaymath} The last sign comes from the Koszul convention about the tensor product of homogeneous maps: \begin{displaymath} \begin{aligned} s_7 =&~ (|x_1[1]|+\dots+|x_a[1]|)(|\Dec(\mu_b)|)= \\ =&~(|x_1|+\dots +|x_a|+a)(|\mu_b|+b-1) ~. \end{aligned} \end{displaymath} In conclusion $S_5 \cdot S_7 = 1$ and $S_1\cdots S_7 =(-)^{|\mu_b|c}$. \end{proof} In particular, when $a=0$ one has \begin{displaymath} \Dec(\mu_m\otimes\Id_n) = (-)^{|\mu_m|\cdot n} \Dec(\mu_m)\otimes \Id_n~. \end{displaymath} \begin{corollary}\label{Cor:DecalageofTensorsProducts} Consider a $\ell$-tuple of multilinear maps $f_{k_i}$, with $1\leq i \leq \ell$, of arity $k_i$. The d\'ecalage of their tensor product is obtained as follows: \begin{displaymath} \Dec(f_{k_1}\otimes \dots \otimes f_{k_\ell}) = (-)^{\sum_{i=1}^\ell |f_{k_i}|(\ell-i)}\Dec(f_{{k_1}})\otimes \dots \otimes \Dec(f_{{k_\ell}}) ~. \end{displaymath} \end{corollary} \begin{proof} Without loss of generality consider $\ell=2$. One has \begin{displaymath} \begin{aligned} \Dec(f_{k_1}\otimes f_{k_2}) =&~ \Dec(f_{k_1}\otimes \Unit_1)\circ \Dec(\Unit_{k_1}\otimes f_{k_2})= \\ =&~(-)^{|f_{k_1}|} \Dec(f_{{k_1}})\otimes \Dec(f_{{k_2}})~, \end{aligned} \end{displaymath} where the last sign prefactor is due to the previous lemma and the Koszul convention on the tensor product of homogeneous maps do not yields any extra sign. The claim follows by iterating this construction. \end{proof} \begin{proposition}[D\'ecalage of Gerstenhaber product]\label{Prop:DecalageGerstenhaberProducts} Given any $\mu_n \in M_{n,|\mu_n|}(V)$ and $\ell_n \in M_{n, |\ell_n|}(V[1])$ the following equations holds: \begin{displaymath} \begin{aligned} \Dec(\mu_n \gerst_k \mu_m ) =& (-)^{|\mu_m|(n-k)} \Dec(\mu_n) \gerst_k \Dec(\mu_m) \\ \Dec^{-1}(\ell_n \gerst_k \ell_m) =& (-)^{|\Dec^{-1}(\ell_m)|(n-k)} \Dec^{-1}(\ell_n) \gerst_k \Dec^{-1}(\ell_m) ~. \end{aligned} \end{displaymath} \end{proposition} \begin{proof} In the first equation we are in the following situation \begin{displaymath} \begin{tikzcd}[row sep = large] (V^{\otimes(n+m-1)})[n+m-1] \ar[d,"(\Unit_{k-1}\otimes \mu_m \otimes \Unit_{n-k})_{[n+m-1]}"'] \ar[r,"\dec"] & (V[1])^{\otimes(n+m-1)} \ar[d,shift right = 1em,"\Dec(\Unit_{k-1}\otimes \mu_m \otimes \Unit_{n-k})"] \\ (V^{\otimes n})[n][|\mu_m|][m-1] \ar[d,"{\mu_n [n][|\mu_m|][m-1]}"'] \ar[r,"\dec{[|\mu_m|][m-1]}"] & (V[1])^{\otimes n}[|\mu_m|][m-1] \ar[ddl,"\Dec(\mu_n){[|\mu_m| + m -1]}",end anchor=east] \\ V[|\mu_n|][n][|\mu_m|][m-1] \ar[d,phantom,sloped,"\cong"] \\[-2ex] (V[1])[|\mu_n| + |\mu_m| + (n+m -1) -1] \end{tikzcd} \end{displaymath} The composition of the two leftmost vertical arrows gives $(\mu_n \gerst_k \mu_m){[n][m-1]}$ while the composition of the two rightmost gives its d\'ecalage. 
In particular, recalling definition \ref{Def:ithGerstenhaberProduct}, one gets that
\begin{displaymath}
\mathclap{
\begin{aligned}
\Dec( \mu_n \gerst_k& \mu_m) = \Dec(\mu_n)[|\mu_m| + m -1] \circ \Dec (\Unit_{k-1}\otimes\mu_m \otimes \Unit_{n-k}) = \\
=& (-)^{|\mu_m|(n-k)} \Dec(\mu_n)[|\mu_m| + m -1] \circ (\Unit_{k-1}\otimes \Dec (\mu_m) \otimes \Unit_{n -k}) \\
=& (-)^{|\mu_m|(n-k)} \Dec(\mu_n) \gerst_k \Dec (\mu_m)
\end{aligned}
}
\end{displaymath}
where in the last two lines we employed Lemma \ref{Lemma:CalcoloDecalageGerstenhaber} and definition \ref{Def:ithGerstenhaberProduct}.
\\
Regarding the second claim, one could draw a diagram similar to the previous one. Alternatively, one can notice that the equality can be derived directly from the invertibility of $\Dec$ together with the following equation:
\begin{displaymath}
\Dec \left( \Dec^{-1}(\ell_n) \gerst_k \Dec^{-1}(\ell_m) \right) = (-)^{|\Dec^{-1}(\ell_m)|(n-k)} (\ell_n \gerst_k \ell_m) ~.
\end{displaymath}
\end{proof}
\begin{remark}\label{rem:gerstStrano}
The previous proposition could be read more compactly as follows. Introduce the operator
\begin{displaymath}
\morphism{-\overline{\overline{\gerst_i}}-} {M_{a,k}(V)\times M_{a',k'}(V)} {M_{(a+a'-1),(k+k')}(V)} {(\mu_a,\mu_{a'})} {(-)^{k'(a-i)}\mu_a\gerst_i \mu_{a'}}
\end{displaymath}
that corresponds to $\gerst_i$ with an extra sign introduced in accordance with proposition \ref{Prop:DecalageGerstenhaberProducts}. Consider then the operator $\overline{\overline{\gerst}}=\sum_i \overline{\overline{\gerst_i}}$. Therefore, proposition \ref{Prop:DecalageGerstenhaberProducts} implies that
\begin{displaymath}
\begin{tikzcd}
\Dec:~ (\overline{\overline{M(V)}},\overline{\overline{\gerst}}) \ar[r,"\sim"]& (M(V),\gerst)
\end{tikzcd}~,
\end{displaymath}
where $\overline{\overline{M(V)}}$ is given by remark \ref{rem:DecasGradedMorph} and $M(V)$ is given by remark \ref{rem:BigradedSpaceofMultilinearMaps}, is an isomorphism in the category of pre-Lie algebras.
\end{remark}
\subsection{Algebra of multibrackets (Nijenhuis--Richardson algebra)}\label{Section:MultibracketsAlgebra}
We now focus on graded (skew)symmetric multilinear operators. To distinguish these particular multilinear maps from those without any particular symmetry, we will often refer to them as ``multibrackets''.
\begin{notation}
We denote the spaces of symmetric and skew-symmetric $n$-multilinear homogeneous maps on $V$ in degree $k$ as $M_{n,k}^{\sym}(V)$ and $M_{n,k}^{\skew}(V)$ respectively. Namely, the two spaces are defined as
\begin{align*}
M_{n,k}^{\sym}(V) =& \left\{ \ell_n \in M_{n,k} ~|~ \ell_n \circ B_{\sigma} = \ell_n \quad \forall \sigma \in S_n \right\} ~, \\
M_{n,k}^{\skew}(V) =& \left\{ \mu_n \in M_{n,k} ~|~ \mu_n \circ B_{\sigma} = (-)^{\sigma}\mu_n \quad \forall \sigma \in S_n \right\} ~.
\end{align*}
\end{notation}
\begin{remark}[Universal property of (skew)symmetric multilinear maps]\label{Rem:UnivPropSkewSymMaps}
In the case of (skew)symmetric maps a universal property, similar to the one of remark \ref{Rem:UnivPropMultilinMaps}, holds:
\begin{displaymath}
\begin{aligned}
M_{n,k}^{\sym}(V) \cong& \lineHom^k(V^{\odot n},V) \\
M_{n,k}^{\skew}(V) \cong& \lineHom^k(V^{\wedge n},V) ~.
\end{aligned} \end{displaymath} It also follows from the general properties of $\Hom$ functors, namely the property to map colimits in limits, that \begin{displaymath} \begin{aligned} M_{n,k} =&~ \lineHom^k(V^{\otimes n}, V) = \\ =&~ \lineHom^k(V^{\odot n}\oplus V^{\wedge n}) \cong \lineHom^k(V^{\odot n})\oplus \lineHom^k (V^{\wedge n}) = \\ =&~ M_{n,k}^{\sym}\oplus M_{n,k}^{\skew} ~. \end{aligned} \end{displaymath} \end{remark} % \begin{remark}\label{Remark:MSymEmbedding} Applying the controvariant endofunctor \begin{displaymath} \underline{Hom}(-,V) = \oplus_{k\in \mathbb{N}} (-,V[k]) ~:~ \GVect \to \GVect \end{displaymath} to the diagram introduced in remark \ref{Remark:ManettiNotation} we get the following commutative diagram (where $-$ denotes a blank entry, \eg $-\circ f$ means precomposition) \begin{displaymath} \begin{tikzcd}[column sep = huge] M(V) && M(V) \ar[ll,"\mathcal{-\circ S}"'] \ar[d,two heads,"-\circ N"] \\ M^{\sym}(V) \ar[u,hook,"-\circ\pi"] \ar[urr,hook,"-\circ\pi"] && M^{\sym}(V) \ar[ll,"\sum_{n\geq 0} \frac{1}{n!} \circ - \eval_{V^{\odot n}}"] \end{tikzcd} \end{displaymath} which formalizes how $M^{\sym(V)}$ can be embedded in $M(V)$. As before, we omit the analogous result in the skew-symmetric case. \end{remark} By its very definition, the subspace of (skew)symmetric multibrackets does not form a subalgebra of $M(V)$ (it does not close with respect to $\gerst$). This can be cured by considering a suitable (skew)symmetrization of $\gerst$. \subsubsection{The \RN product on $M(V)$}\label{subsec:RNMV} A preliminary step to understand how one can equip the spaces of symmetric and skew-symmetric multilinear maps with a meaningful algebra product (or "composition") is to investigate how the Gerstenhaber product $\gerst$ behaves under symmetrization. Observe first the following: \begin{lemma}\label{Lemma:GerstenhaberSymmetrization} Given any two $\ell_n,\ell_m \in M(V)$ one has \begin{displaymath} \left(\ell_n \gerst \ell_m\right) \circ \mathcal{S} = \left(\frac{n }{\#\ush{m,n-1}} \right) ~ (\ell_n \circ \mathcal{S}) \circ ((\ell_m \circ \mathcal{S}) \otimes \Unit_{n-1}) \circ B_{m,n-1} ~, \end{displaymath} where $\mathcal{S}$ denotes the symmetrizator (see definition \ref{Def:Symmetrizator}) and $B_{m,n-1}$ is the even permutator with respect to the subgroup of unshuffles (see definition \ref{Def:Unshuffleator}). Similarly, one has \begin{displaymath} \left(\mu_n \gerst \mu_m\right) \circ \mathcal{A} = \left(\frac{n}{\#\ush{m,n-1}} \right) ~ \mu_n \circ \mathcal{A} \circ ((\mu_m\circ\mathcal{A}) \otimes \Unit_{n-1}) \circ P_{m,n-1} ~, \end{displaymath} where $\mathcal{A}$ denotes the skew-symmetrizator (see definition \ref{Def:SkewSymmetrizator}) and $P_{m,n-1}$ is the odd permutator with respect to the subgroup of unshuffles (see definition \ref{Def:Unshuffles}). \end{lemma} \begin{proof} Consider any two multilinear maps $\ell_n,\ell_m \in M^{\sym}(V)$. 
Via a simple inspection, one can see that
\begin{displaymath}
\begin{aligned}
\ell_n \gerst_i \ell_m =& \ell_n \circ (\Unit_{i-1}\otimes \ell_m \otimes \Unit_{n-i}) = \\
=& \ell_n \circ (\cycPermutator_{(i)}\otimes \Unit_{n-i}) \circ (\ell_m\otimes \Unit_{n-1}) \circ (\cycPermutator_{(i+m)}^i\otimes \Unit_{n-1}) = \\
=& \left( (\ell_n \circ (\cycPermutator_{(i)}\otimes \Unit_{n-i})) \gerst_1 \ell_m \right) \circ (\cycPermutator_{(i+m)}^i\otimes \Unit_{n-1})
\end{aligned}
\end{displaymath}
where $\cycPermutator_{(k)}^i$ denotes the cyclic permutation of $k$ elements applied $i$ times (see remark \ref{Notation:Cyclic permutator} in the appendix). Therefore, one has
\begin{displaymath}
\mathclap{
\begin{aligned}
&\left(\ell_n \gerst \ell_m\right) \circ \mathcal{S} ~= \\
&= \sum_{i=1}^n \ell_n \gerst_i \ell_m \circ \mathcal{S} = \\
&= \sum_{i=1}^n \left( (\ell_n \circ (\cycPermutator_{(i)}\otimes \Unit_{n-i})) \gerst_1 \ell_m \right) \circ \mathcal{S} = \\
&= \left(\frac{m! (n-1)!}{(n+m-1)!}\right) \left( \ell_n \circ \left(\sum_{i=1}^n (\cycPermutator_{(i)}\otimes \Unit_{n-i})\circ ( \Unit \otimes \mathcal{S}_{(n-1)}) \right) \right) \gerst_1 (\ell_m \circ \mathcal{S}_{(m)}) \circ B_{m,n-1} = \\
&= \frac{1}{\#\ush{m,n-1}} \frac{n!}{(n-1)!} (\ell_n \circ \mathcal{S}_{(n)}) \gerst_1 (\ell_m \circ \mathcal{S}_{(m)}) \circ B_{m,n-1} = \\
&= \frac{n}{\#\ush{m,n-1}} (\ell_n \circ \mathcal{S}_{(n)}) \gerst_1 (\ell_m \circ \mathcal{S}_{(m)}) \circ B_{m,n-1}
\end{aligned}
}
\end{displaymath}
where we used the computation of the cardinality of the subgroup of unshuffles (see remark \ref{Rem:UnshufflesCardinality}) and the decomposition of the symmetrizator operator through unshuffles (see remark \ref{Rem:UnshufflesAsCoset}).
\\
The same computation holds in the case of skew-symmetric multibrackets by substituting $B_\sigma$ with $P_\sigma$ and $\mathcal{S}$ with $\mathcal{A}$.
\end{proof}
Lemma \ref{Lemma:GerstenhaberSymmetrization} suggests the following two definitions:
\begin{definition}[(Symmetric) \RN product]\label{Def:SymGerstProd}
We call \emph{(symmetric) \RN product} (or \emph{symmetric Gerstenhaber product}) the bilinear operator
\begin{displaymath}
- \symgerst - : M_{n, k}\otimes M_{m, d}(V) \to M_{n+m -1, k +d}(V)
\end{displaymath}
defined on any $\mu_n,\mu_m \in M(V)$ as
\begin{displaymath}
\mu_n \symgerst \mu_m = (\mu_n \gerst_1 \mu_m) \circ B_{m, n-1} ~,
\end{displaymath}
where $B_{m,n-1}$ denotes the operator that sums over all possible even actions of the $(m,n-1)$-unshuffles (see appendix \ref{App:UnshuffleAtors}) and $\gerst_1$ is the first Gerstenhaber product (see definition \ref{Def:ithGerstenhaberProduct}).
\end{definition}
\begin{definition}[(Skew-symmetric) \RN product]\label{Def:SkewGerstProd}
We call \emph{(skew-symmetric) \RN product} the bilinear operator
\begin{displaymath}
- \skewgerst - : M_{n, k}\otimes M_{m, d}(V) \to M_{n+m -1, k +d}(V)
\end{displaymath}
defined on any $\mu_n,\mu_m \in M(V)$ as
\begin{displaymath}
\mu_n \skewgerst \mu_m =(-)^{|\mu_m|(n-1)}(\mu_n \gerst_1 \mu_m) \circ P_{m, n-1} ~,
\end{displaymath}
where $P_{m,n-1}$ denotes the operator that sums over all possible odd actions of the $(m,n-1)$-unshuffles (see appendix \ref{App:UnshuffleAtors}) and $\gerst_1$ is the first Gerstenhaber product (see definition \ref{Def:ithGerstenhaberProduct}).
\end{definition}
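For concreteness, let us spell out the symmetric product in the lowest non-trivial arities; this is a minimal worked instance of definition \ref{Def:SymGerstProd}, where we assume that the unshuffle operators act as recalled in appendix \ref{App:UnshuffleAtors}. Given two bilinear maps $\mu_2,\nu_2 \in M_{2,\bullet}(V)$, the three $(2,1)$-unshuffles produce, on homogeneous elements $x_1,x_2,x_3\in V$,
\begin{displaymath}
(\mu_2 \symgerst \nu_2)(x_1,x_2,x_3) = \mu_2\big(\nu_2(x_1,x_2),x_3\big) + (-)^{|x_2||x_3|}\,\mu_2\big(\nu_2(x_1,x_3),x_2\big) + (-)^{|x_1|(|x_2|+|x_3|)}\,\mu_2\big(\nu_2(x_2,x_3),x_1\big) ~,
\end{displaymath}
the signs being the Koszul signs of the corresponding unshuffles. The expansion of $\mu_2\skewgerst\nu_2$ is analogous, with the signs of the permutations and the overall prefactor $(-)^{|\nu_2|}$ appearing in addition.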
The role of the extra sign in definition \ref{Def:SkewGerstProd} is justified a posteriori by the following proposition:
\begin{proposition}\label{Prop:DecalageAsCoalgebrasISO}
The graded map $\Dec$ (see notation \ref{Notation:FixingDec}) is a graded algebra isomorphism:
\begin{displaymath}
\Dec: (\overline{\overline{M(V)}}, \skewgerst) \xrightarrow{\quad \sim \quad} (M(V[1]),\symgerst) ~.
\end{displaymath}
\end{proposition}
\begin{proof}
Evaluating on any two given $\mu_n, \mu_m \in M(V)$, one gets
\begin{displaymath}
\begin{aligned}
\Dec( \mu_n \skewgerst \mu_m ) \equal{Def: \ref{Def:SkewGerstProd}}& (-)^{|\mu_m|(n-1)}\Dec\big((\mu_n \gerst_1 \mu_m) \circ P_{m,n-1}\big) = \\
\equal{Def: \ref{Def:MultiBrackDecalage}}& (-)^{|\mu_m|(n-1)}\Dec(\mu_n \gerst_1 \mu_m) \circ \Dec(P_{m,n-1}) = \\
\equal{Prop: \ref{Prop:DecalageGerstenhaberProducts}}& \Dec(\mu_n) \gerst_1 \Dec(\mu_m) \circ \Dec(P_{m,n-1}) = \\
\equal{Prop: \ref{Prop:DecalageOfPermutation}}& \Dec(\mu_n) \gerst_1 \Dec(\mu_m) \circ B_{m,n-1} = \\
\equal{Def: \ref{Def:SymGerstProd}}& \Dec(\mu_n) \symgerst \Dec(\mu_m) ~.
\end{aligned}
\end{displaymath}
\end{proof}
\begin{notation}
More explicitly, evaluating on homogeneous elements $x_i\in V$, the products read as follows:
\begin{equation}\label{Eq:RNProducts-explicit-appendix}
\mathclap{
\begin{aligned}
\mu_n \cs \mu_m &(x_1,\dots,x_{n+m-1}) = \\
=&~ \mkern-30mu \sum_{\sigma \in unsh(m,n-1)} \mkern-30mu \epsilon(\sigma) \mu_n\Big(\mu_m(x_{\sigma_1},\dots,x_{\sigma_m}),x_{\sigma_{m+1}},\dots,x_{\sigma_{n+m-1}} \Big) \\[2em]
\mu_n \ca \mu_m &(x_1,\dots,x_{n+m-1}) = \\
=&~ (-)^{|\mu_m|(n-1)}\mkern-30mu \sum_{\sigma \in unsh(m,n-1)}\mkern-30mu (-)^\sigma \epsilon(\sigma) \mu_n\Big(\mu_m(x_{\sigma_1},\dots,x_{\sigma_m}),x_{\sigma_{m+1}},\dots,x_{\sigma_{n+m-1}}\Big)
\end{aligned}
}
\end{equation}
where $\epsilon(\sigma)$ is the Koszul sign.
\end{notation}
The bigraded vector space $M(V)$, taken together with $\symgerst$ or $\skewgerst$, forms a non-associative graded algebra. The following proposition gives an explicit expression for the associator:
\begin{proposition}\label{Prop:SymmetricGerstenhaberAssociators}
Given any three multilinear operators $\mu_a,\mu_b,\mu_c \in M(V)$, the corresponding associators with respect to $\symgerst$ and $\skewgerst$ read, respectively,
\begin{align*}
\alpha(\symgerst;\mu_a,\mu_b,\mu_c) =&~ \mu_a \circ \left((\mu_b\otimes\mu_c \circ B_{b,c})\otimes \Unit_{a-2}\right) \circ B_{b+c,a-2} \\
\alpha(\skewgerst; \mu_a,\mu_b,\mu_c) =&~ (-)^{\mathcal{s}} \mu_a \circ \left((\mu_b\otimes\mu_c \circ P_{b,c})\otimes \Unit_{a-2}\right) \circ P_{b+c,a-2} ~,
\end{align*}
where the latter sign prefactor is given by:
\begin{displaymath}
(-)^{\mathcal{s}}=(-)^{|\mu_c|(b+a) + |\mu_b|(a-1) + b(c+1)}
\end{displaymath}
\end{proposition}
\begin{proof}
A complete proof of these statements can be obtained by employing the combinatorial properties of the unshuffles. We refer to appendix \ref{Section:AppendixProofPreLie} for a complete argument.
We only observe here that the second claim follows from the first since, for any $\mu_i= \Dec(\nu_i)$, one has
\begin{align*}
\alpha&(\ca;\mu_a,\mu_b,\mu_c) = \\
=&~ \Dec( \alpha(\cs;\nu_a,\nu_b,\nu_c) ) = \\
=&~ \Dec(\nu_a \circ \left((\nu_b\otimes\nu_c \circ B_{b,c})\otimes \Unit_{a-2}\right) \circ B_{b+c,a-2}) = \\
=&~ (-)^{(|\nu_b|+|\nu_c|)a +|\nu_b|}\mu_a \circ \left((\mu_b\otimes\mu_c \circ P_{b,c})\otimes \Unit_{a-2}\right) \circ P_{b+c,a-2} = \\
=&~ (-)^{(|\mu_b|+|\mu_c|+b+c)a}(-)^{|\mu_b|+b-1}\mu_a \circ \left((\mu_b\otimes\mu_c \circ P_{b,c})\otimes \Unit_{a-2}\right) \circ P_{b+c,a-2}
\end{align*}
\end{proof}
\subsubsection{The \RN product on $M^{\sym}(V)$ and $M^{\skew}(V)$}
Let us now focus again on the spaces of degree $k$ graded (skew)-symmetric $n$-multilinear homogeneous maps, previously denoted as
\begin{displaymath}
M_{n,k}^{\sym}(V,W):= \underline{\Hom}^k (V^{\odot n},W) ~,\quad M_{n,k}^{\skew}(V,W):= \underline{\Hom}^k (V^{\wedge n},W) ~.
\end{displaymath}
Recall that it follows from the splitting sequence \eqref{eq:symskewsplitting} that $M_{n,k}(V,W)=M_{n,k}^{\sym}(V,W)\oplus M_{n,k}^{\skew}(V,W)$. As before, when $W=V$ we will lighten the notation by omitting the second entry. Recall also that the operator $\dec$ restricts to isomorphisms $V^{\odot n}[n]\cong (V[1])^{\wedge n}$ and $V^{\wedge n}[n]\cong (V[1])^{\odot n}$. This property is suitably transferred to the multibrackets:
\begin{remark}[D\'ecalage of symmetric multibrackets]\label{Rem:DecalageofMultiBrackets}
Note that a relation similar to the one on the restriction of the d\'ecalage isomorphism to (skew)symmetric tensor spaces (see lemma \ref{Lemma:DecalageRestrictToSymTens}) also holds when dealing with multilinear maps. Namely, the following diagram commutes
\begin{displaymath}
\begin{tikzcd}[column sep = large]
M^{\sym}_{n, k}(V) \ar[r,"\Dec\big|_{M^{\sym}_{n, k}(V)}"] \ar[d,hook] &[3em] M^{\skew}_{n, k+n-1}(V[1])\ar[d,hook] \\
M_{n, k}(V) \ar[r,"\Dec"] & M_{n, k+n-1}(V[1]) \\
M^{\skew}_{n, k}(V) \ar[r,"\Dec\big|_{M^{\skew}_{n, k}(V)}"'] \ar[u,hook]& M^{\sym}_{n, k+n-1}(V[1]) \ar[u,hook]
\end{tikzcd}
\end{displaymath}
where the hooked arrows denote the standard inclusions of symmetric and skew-symmetric maps as subspaces of all multilinear maps.
\end{remark}
\begin{notation}\label{Notation:FixingDec}
To avoid confusion, from now on we will assume that
\begin{displaymath}
\Dec: M^{\skew}_{n, k}(V) \to M^{\sym}_{n, k+n-1}(V[1])
\end{displaymath}
and
\begin{displaymath}
\Dec^{-1}: M^{\sym}_{n, k}(V[1]) \to M^{\skew}_{n, k+1-n}(V) ~.
\end{displaymath}
\end{notation}
We are now interested in replicating the reasoning of remark \ref{rem:DecasGradedMorph} in the case of (skew)-symmetric multibrackets. Namely, we want to consider suitable combinations of arities and degrees in order to produce an honest $\ZZ$-graded vector space of (skew)-symmetric multibrackets.
\\
Conventionally, we introduce the \emph{graded vector spaces of graded symmetric and graded skew-symmetric multilinear maps} from $V$ to $V$ as vector subspaces of ${M(V,W)}$ and $\overline{\overline{M(V,W)}}$ respectively:
\begin{definition}[The $\ZZ$-graded vector space of (skew)-symmetric multilinear brackets]\label{Def:VSpacesRN}
In order to read $\Dec$ as a graded morphism between skew-symmetric and symmetric multilinear maps, it is necessary to equip their corresponding spaces with two different gradings. Let $V$ and $W$ be two graded vector spaces.
We call \emph{($\ZZ$)-graded vector space of graded symmetric multilinear maps} the graded vector subspace $M^{\sym}(V,W)\hookrightarrow M(V,W)$ given by: \begin{displaymath} M^{sym}(V,W):= \left( k \mapsto \bigoplus_{n} M^{\sym}_{n, k}(V,W)\right) ~. \end{displaymath} We call \emph{($\ZZ$)-graded vector space of graded skew-symmetric multilinear maps} the graded vector subspace $M^{\skew}(V,W)\hookrightarrow \overline{\overline{M(V,W)}}$ given by: \begin{displaymath} M^{skew}(V,W):= \left( k \mapsto \bigoplus_{n+i=k+1} M^{\skew}_{n, i}(V,W)\right) ~. \end{displaymath} Consider a multilinear maps $\mu\in M_{a,k}$ which is also graded skew-symmetric, we denote $|\mu|=k$ its degree as an homogeneous map and we denote by $\parallel\mu\parallel = a+k-1$ its degree as an element in $\overline{\overline{M(V,W)}}$. \end{definition} This unusual choice for the grading of $M^{\skew}(V,W)$ is due to enforce the commutativity of the following diagram in the category of graded vector spaces \begin{equation}\label{eq:diagramMultibracketsSpaces} \begin{tikzcd} M^{skew}(V) \ar[r,"\Dec"] \ar[d,hook]& M^{sym}(V[1])\ar[d,hook] \\ \overline{\overline{M(V)}} \ar[r,"\Dec"] & M(V[1]) \end{tikzcd} ~. \end{equation} In other words, this convention assures that the d\'ecalage operator of multilinear maps restricts to a well-defined isomorphism $M^{sym}(V[1])\cong M^{\skew}(V)$. The preliminary work of subsection \ref{subsec:RNMV} allows us to conclude that the above diagram can be replicated inside the category of graded algebras: % \begin{theorem}\label{Thm:ManettiFactorizationOnGerstenhaberAlgebras} Let be $V$ a graded vector spaces. Consider the graded algebras of multilinear maps (see definition \ref{Def:VSpacesRN} and remark \ref{rem:DecasGradedMorph}). The following diagram commutes in the category of graded vector spaces: \begin{displaymath} \begin{tikzcd} (\overline{\overline{M(V)}},\overline{\overline{\gerst}}) \ar[r,"\Dec"',"\sim"]\ar[d,two heads,"-\circ N_a"] & (M(V),\gerst)\ar[d,two heads,"-\circ N_s"] \\ (M^{\skew}(V),\ca)\ar[d,hook,"-\circ \pi_a"] \ar[r,"\Dec"',"\sim"] & (M^{\sym}(V),\cs)\ar[d,hook,"-\circ \pi_s"] \\ (\overline{\overline{M(V)}},\ca) \ar[r,"\Dec"',"\sim"] & (M(V),\cs) \end{tikzcd} \end{displaymath} where: \begin{itemize} \item $\gerst$ is the Gerstenhaber product (definition \ref{Def:FullGerstenhaberProduct}); \item $\overline{\overline{\gerst}}$ is its pullback along $\Dec$ (see remark \ref{rem:gerstStrano}); \item $\ca$ and $\cs$ are the \RN products (see definitions \ref{Def:SymGerstProd} and \ref{Def:SkewGerstProd}); \item $-\circ F$ denotes precomposition with $F$ and $N_a,N_s,\pi_s,\pi_a$ are the maps introduced in remark \ref{Remark:ManettiNotation} and equation \eqref{eq:symskewsplitting}. \end{itemize} \end{theorem} \begin{proof} Observe that, according to Lemma \ref{Lemma:GerstenhaberSymmetrization}, the monoid structures $\symgerst$ and $\skewgerst$ restrict correctly to a monoid structure on $M^{\sym}(V)$ and $M^{\skew}(V)$ respectively. Hence, $\symgerst$ (resp. $\skewgerst$) takes values in $M^{\sym}(V)$ (resp. $M^{\skew}(V)$) when computed on a pair of symmetric (resp. skew-symmetric) multilinear maps. \\ Moreover, the symmetric (resp. skew-symmetric) Gerstenhaber product between a generic multilinear map $\mu_n \in M_{n,k}(V)$ with $n\leq 2$ and a symmetric (resp. skew-symmetric) multibracket $\mu_m$ yields a symmetric multibracket $\mu_n \symgerst \mu_m \in M^{\sym}(V)$ (resp. $\mu_n \skewgerst \mu_m \in M^{\skew}(V)$). 
Resuming the notation given in remark \ref{Remark:ManettiNotation}, lemma \ref{Lemma:GerstenhaberSymmetrization} implies that
\begin{displaymath}
\left(\ell_n \gerst \ell_m\right) \circ \frac{N \circ \pi}{\cancel{(n+m-1)!}} = \cancel{\left(\frac{n! m!}{(n+m-1)!}\right)} \left(\ell_n \circ \frac{N \circ \pi}{\cancel{n!}}\right) \symgerst \left(\ell_m \circ \frac{N \circ \pi}{\cancel{m!}} \right) ~,
\end{displaymath}
hence the following diagram commutes
\begin{displaymath}
\begin{tikzcd}[column sep = huge]
(M(V),\symgerst) & (M(V),\gerst) \ar[l,"-\circ N \circ \pi"'] \ar[d,two heads,"-\circ N"] \\
& (M^{\sym}(V),\symgerst) \ar[ul,hook,"-\circ \pi"]
\end{tikzcd} ~.
\end{displaymath}
The same reasoning can be replicated in the skew-symmetric case by replacing $N$ with $N_a$ and $\pi$ with $\pi_a$. Summing up, the whole situation is subsumed by the claimed diagram.
\end{proof}
The pairs $(M^{\sym}(V),\cs)$ and $(M^{\skew}(V),\ca)$ are also known as the \RN algebras of symmetric and skew-symmetric multilinear maps. These algebras are right pre-Lie:
\begin{proposition}\label{prop:RNExplictPreLie}
Consider three graded symmetric multilinear operators $\nu_i$ and three graded skew-symmetric multilinear operators $\mu_i$. The corresponding associators satisfy the following symmetry properties:
\begin{align}
\alpha(\symgerst; \nu_\ell,\nu_m,\nu_n) =& (-)^{|\nu_m||\nu_n|} \alpha(\symgerst; \nu_\ell,\nu_n,\nu_m) ~, \label{Equation:SymGerstAssociatorSymmetry-ripetizione} \\
\alpha(\skewgerst; \mu_\ell,\mu_m,\mu_n) =& (-)^{(|\mu_n| + n - 1)(|\mu_m| + m - 1)} \alpha(\skewgerst; \mu_\ell,\mu_n,\mu_m) \label{Equation:SkewGerstAssociatorSymmetry-ripetizione} ~.
\end{align}
\end{proposition}
\begin{proof}
A complete proof of these statements can be obtained by employing the combinatorial properties of the unshuffles. We refer to proposition \ref{Prop:SymmetricGerstenhaberAssociators-AppendixC} in appendix \ref{Section:AppendixProofPreLie} for a complete argument. We only observe here that the second claim follows from the first since, for any $\mu_i= \Dec(\nu_i)$, one has
\begin{displaymath}
\begin{aligned}
\alpha(\ca;\mu_a,\mu_b,\mu_c) =&~ \Dec( \alpha(\cs;\nu_a,\nu_b,\nu_c) ) = \\
=&~ (-)^{|\nu_b||\nu_c|}\Dec( \alpha(\cs;\nu_a,\nu_c,\nu_b) ) = \\
=&~ (-)^{|\nu_b||\nu_c|}\alpha(\ca;\mu_a,\mu_c,\mu_b) = \\
=&~ (-)^{\parallel\mu_b\parallel\parallel\mu_c\parallel}\alpha(\ca;\mu_a,\mu_c,\mu_b)~.
\end{aligned}
\end{displaymath}
\end{proof}
\begin{remark}[Pre-Lie structure on $M^{\sym}(V)$ and $M^{\skew}(V)$]
The two symmetry properties of proposition \ref{prop:RNExplictPreLie} can be read as a pre-Lie property of $\symgerst$ and $\skewgerst$ with respect to a certain choice of (single) grading on the bigraded vector space $M(V)$. Let us denote this new single grading as $\lVert\cdot \rVert$ in order not to confuse it with $|\cdot|$, which gives the degree as a homogeneous map.
\\
Equation \eqref{Equation:SymGerstAssociatorSymmetry-ripetizione} implies that on the graded vector space
\begin{displaymath}
M(V) := \big( k \mapsto \oplus_{a \geq 1} M_{a,k}(V) \big) ~,
\end{displaymath}
\ie when considering the grading given purely by the degree as a homogeneous map, $\lVert\cdot \rVert = |\cdot|$, the \RN product gives a pre-Lie structure.
In particular, $(M^{\sym}(V),\symgerst)$, considered with the grading $\lVert\cdot\rVert = |\cdot|$, forms a pre-Lie algebra and
\begin{displaymath}
\morphism{{[\cdot,\cdot]}} {M^{\sym}(V)\otimes M^{\sym}(V)} {M^{\sym}(V)} {\mu_m\otimes \mu_n} {\mu_m\symgerst \mu_n - (-)^{|\mu_m||\mu_n|}\mu_n\symgerst\mu_m}
\end{displaymath}
satisfies the \emph{graded Jacobi equation}. Similarly, equation \eqref{Equation:SkewGerstAssociatorSymmetry-ripetizione} implies a pre-Lie property for $\skewgerst$ when considering a grading interweaving the arity of the multilinear map with the degree, namely
\begin{equation}\label{Skew-symmetric grading}
\lVert \mu_n \rVert = |\mu_n| + n -1 \qquad \forall \mu_n \in M_{n,|\mu_n|}(V) ~.
\end{equation}
In other words, $\skewgerst$ is pre-Lie on the graded vector space
\begin{displaymath}
\tot(M_{\bullet \bullet}(V))[1] := \left( k \mapsto \bigoplus_{a+d=k +1 } M_{a,d}(V) = \bigoplus_{a\geq 1 } M_{a,k+1-a}(V) \right)
\end{displaymath}
where $\tot$ denotes the total graded vector space of a bigraded vector space, as defined in section \ref{Section:CategoricalGradedMonoidalStructure}.
\\
In particular, $(M^{\skew}(V),\skewgerst)$, considered with the grading \eqref{Skew-symmetric grading}, forms a pre-Lie algebra and
\begin{displaymath}
\morphism{{[\cdot,\cdot]}} {M^{\skew}(V)\otimes M^{\skew}(V)} {M^{\skew}(V)} {\mu_m\otimes \mu_n} {\mu_m\skewgerst \mu_n - (-)^{\lVert\mu_m\rVert \lVert\mu_n\rVert}\mu_n\skewgerst\mu_m}
\end{displaymath}
satisfies the graded Jacobi equation.
\end{remark}
\section{Algebra of coderivations}
Let us now return to the notions of graded algebras and coalgebras introduced in section \ref{section:GradedAlgebras}. Another relevant class of graded linear maps between graded (co)algebras is given by (co)derivations. In this section, we recall the abstract framework of $F$-coderivations and in particular show how they give rise to a certain algebra isomorphic to the algebras of multilinear maps introduced in the previous section.
\subsection{$F$-(co)derivations on an arbitrary (co)algebra}\label{sec:abtractFcoderivations}
In this subsection, we recall the abstract framework of $F$-coderivations. The property that will be central to us is the existence of a lift procedure from homogeneous maps to coderivations (see subsection \ref{subsec:UniPropeLifts}). Given a graded algebra $(A,\mu)$ and a graded coalgebra $(C,\Gamma)$, one can define:
\begin{paracol}{2}
\begin{leftcolumn}
\begin{adjustwidth}{-2em}{0em}
\begin{definition}[$F$-Derivation]\label{Def:Fderivation}
Given two graded algebras $(A,\mu)$ and $(A',\mu')$ and a morphism of graded algebras $F\in\Hom_{\text{Alg}}(A,A')$, we call \emph{$F$-derivation from $A$ to $A'$} any homogeneous map $\theta \in \underline{\Hom}(A,A')$ such that the following diagram commutes
\begin{displaymath}
\begin{tikzcd}[ampersand replacement=\&]
A\otimes A \ar[rr,"(\theta\otimes F + F\otimes\theta)"] \ar[d,"\mu"'] \& \& A'\otimes A' \ar[d,"\mu'"] \\
A \ar[rr,"\theta"] \& \& A'
\end{tikzcd}
\end{displaymath}
\end{definition}
\begin{remark}
From the Koszul convention (remark \ref{Rem:TensorHomogeneousMaps}) one has, for any homogeneous vectors $a,b\in A$, that
\begin{displaymath}
\theta(a\cdot b) = \theta(a)\cdot F(b) + (-)^{|\theta||a|}F(a)\cdot \theta(b) ~,
\end{displaymath}
where we denoted by $\cdot$ the action of the multiplication $\mu$.
\end{remark}
\end{adjustwidth}
\end{leftcolumn}
\begin{rightcolumn}
\begin{adjustwidth}{0em}{-2em}
\begin{definition}[$F$-coDerivation]\label{Def:FcoDerivations}
Given two graded coalgebras $(C,\Gamma)$ and $(C',\Gamma')$ and a morphism of graded coalgebras $F\in\Hom_{\text{coAlg}}(C,C')$, we call \emph{$F$-coderivation from $C$ to $C'$} any homogeneous map $\theta \in \underline{\Hom}(C,C')$ such that the following diagram commutes
\begin{displaymath}
\begin{tikzcd}[ampersand replacement=\&]
C \ar[rr,"\theta"] \ar[d,"\Gamma"'] \& \& C' \ar[d,"\Gamma'"] \\
C\otimes C \ar[rr,"(\theta\otimes F + F\otimes\theta)"] \& \& C'\otimes C'
\end{tikzcd} ~.
\end{displaymath}
\end{definition}
\end{adjustwidth}
\end{rightcolumn}
\end{paracol}
\sidebyside{
\begin{notation}
Given a fixed graded algebra morphism $F$, the set of all $F$-derivations of degree $p$ forms a vector space denoted by $\Der^p(A,A';F)$. Accordingly, we denote by $\Der(A,A';F)$ the space of $F$-derivations in any degree.
\\
In the special case where $A=A'$ and $F= \id_A$ we simply talk about \emph{derivations over $A$} and we denote as
\begin{displaymath}
\Der^p(A):= \Der^p(A,A;\id_A)
\end{displaymath}
the corresponding graded vector space.
\end{notation}
}{
\begin{notation}
Given a fixed graded coalgebra morphism $F$, the set of all $F$-coderivations of degree $p$ forms a vector space denoted by $\coDer^p(C,C';F)$. Accordingly, we denote by $\coDer(C,C';F)$ the space of $F$-coderivations in any degree.
\\
In the special case where $C=C'$ and $F= \id_C$ we simply talk about \emph{coderivations over $C$} and we denote as
\begin{displaymath}
\coDer^p(C):= \coDer^p(C,C;\id_C)
\end{displaymath}
the corresponding graded vector space.
\end{notation}
\begin{lemma}
Given a coalgebra morphism $F:(C,\Gamma)\to(C',\Gamma')$ and coderivations $Q,Q'$ on $C$ and $C'$ respectively, then $F\circ Q$ and $Q'\circ F$ are $F$-coderivations from $C$ to $C'$.
\end{lemma}
}
\begin{remark}
When the (co)algebras under consideration are (co)unital, we additionally require (co)derivations to vanish on the (co)unit.
\end{remark}
The upshot is that a graded coderivation is the notion dual to a graded derivation. For our purposes, we will focus only on coderivations. Similar results can also be stated for derivations, and possibly further generalized from vector spaces to modules (see \cite{Manetti-website-coalgebras,Doubek2007}).
\begin{remark}[Composition of coderivations is not a coderivation]\label{Rem:CompositionofCoderivation}
Although $\coDer(C)$ constitutes a subspace of $\underline{\Hom}(C,C)$, it does not form a subalgebra with respect to the composition of linear operators.
Namely, if one considers two coderivations $M$ and $N$, the following diagram commutes
\begin{displaymath}
\begin{tikzcd}[column sep=huge]
C \ar[rr,"\Delta"]\ar[d,"M"'] && C\otimes C \ar[d,"M\otimes\mathbb{1} + \mathbb{1}\otimes M"] \\
C \ar[rr,"\Delta"]\ar[d,"N"'] && C\otimes C \ar[d,"N\otimes\mathbb{1} + \mathbb{1}\otimes N"] \\
C \ar[rr,"\Delta"] && C\otimes C
\end{tikzcd}
\end{displaymath}
but the square obtained by composing the vertical arrows fails to commute
\begin{displaymath}
\begin{tikzcd}
C \ar[d,"N\circ M"']\ar[r,"\Delta"] \ar[dr,phantom,"\cancel{\action}"]& C\otimes C \ar[d,"(N\circ M) \otimes \mathbb{1} + \mathbb{1}\otimes (N\circ M) + (-)^{|M||N|} M\otimes N + N\otimes M"] \\
C \ar[r,"\Delta"']& C\otimes C
\end{tikzcd}
\end{displaymath}
Nevertheless, the graded vector space of coderivations is a Lie subalgebra with respect to the Lie bracket given by the graded commutator:
\begin{displaymath}
\begin{aligned}
\Delta&\big(N\circ M - (-)^{|N||M|}M\circ N\big) = \\
=&~ \big( N\circ M \otimes \mathbb{1} + \mathbb{1}\otimes N\circ M + (-)^{|M||N|} M\otimes N + N\otimes M \, + \\
&\phantom{=~\big(} -(-)^{|M||N|}\big(M\circ N \otimes \mathbb{1} + \mathbb{1}\otimes M\circ N + M\otimes N + (-)^{|M||N|}N\otimes M\big)\big)\circ \Delta = \\
=&~ \big( N\circ M \otimes \mathbb{1} + \mathbb{1}\otimes N\circ M -(-)^{|M||N|}\big(M\circ N \otimes \mathbb{1} + \mathbb{1}\otimes M\circ N\big) \big)\circ\Delta ~.
\end{aligned}
\end{displaymath}
The latter property suggests looking for a pre-Lie structure on $\coDer(C)$. This will be achieved in the case of tensor coalgebras (see sections \ref{Subsection:CoderivationsAlgebra} and \ref{SubSection:CoderivationonSymmetricTensor}).
\end{remark}
An interesting feature of degree zero coderivations is that they can be used to build coalgebra morphisms:
\begin{proposition}
Given a nilpotent coderivation $\alpha\in \coDer^0(C)$ of degree $0$, \ie such that there exists $k \in \mathbb{N}$ with $\alpha^{k'} = 0$ for all $k' > k$, then
\begin{displaymath}
e^\alpha := \sum_{n\geq 0} \frac{\alpha^n}{n!}:C\to C
\end{displaymath}
is a coalgebra automorphism with inverse given by $e^{-\alpha}$.
\end{proposition}
\begin{proof}
Note first that the nilpotency condition guarantees that the series defining the exponential of $\alpha$ converges to a linear operator $C\to C$. Requiring $\alpha$ to be of degree $0$ also guarantees that the exponential is homogeneous of degree $0$. Compatibility with the comultiplication $\Gamma$ follows from the \emph{Baker--Campbell--Hausdorff formula} (BCH):
\begin{displaymath}
\begin{aligned}
\Gamma e^\alpha \equal{}& \sum_{n\geq 0 } \frac{1}{n!} \Gamma \alpha^n = \sum_{n\geq 0 }\frac{1}{n!}(\alpha\otimes \Unit + \Unit \otimes \alpha )^n \Gamma = (e^{\alpha\otimes \Unit + \Unit \otimes \alpha})~\Gamma = \\
\equal{BCH}& (e^{\alpha\otimes \Unit})~\circ (e^{\Unit \otimes \alpha})~\Gamma = (e^{\alpha}\otimes \Unit)~\circ (\Unit \otimes e^{\alpha})~ \Gamma = (e^{\alpha}\otimes e^{\alpha})~ \Gamma ~.
\end{aligned}
\end{displaymath}
Invertibility is again a consequence of the BCH formula.
\end{proof}
\begin{remark}
Taking $C= T(V)$ one sees that the first component of the exponential of a given coderivation $\alpha \in \coDer(T(V))$ is given by
\begin{displaymath}
\pr \circ e^\alpha \eval_V = \sum_{n\geq 0} \frac{1}{n!} \pr \circ \alpha^n \eval_V = \sum_{n\geq 0} \frac{1}{n!} \alpha_1^n = e^{\alpha_1} ~,
\end{displaymath}
where $\pr:T(V)\twoheadrightarrow V$ denotes the standard projection. Clearly, not every coalgebra automorphism can be built as an exponential of some coderivation.
\end{remark}
\subsubsection{Universal Properties and Lifts}\label{subsec:UniPropeLifts}
In section \ref{Section:TensorCoalgebras} we have seen how a graded morphism $f\in {\Hom}(C,V)$ can be uniquely lifted to a coalgebra morphism $F \in \Hom_{\text{coAlg}}(C,\overline{T(V)})$. In the case of a homogeneous map $f\in \underline{\Hom}(C,V)$, not necessarily in degree $0$, an analogous lift construction holds. In the latter case, one talks about a \emph{lift to a coderivation}.
\\
From now on, we will assume that all the coalgebras under consideration are coassociative and conilpotent.
\begin{proposition}[Universal property of homogeneous maps]\label{Prop:UniversalPropertyCoderivation}
Consider a coalgebra $(C,\Gamma)$ and a graded morphism $f:C \to V$, and denote by $F=L(f) \in \Hom_{\text{coAlg}}(C,\overline{T(V)})$ the unique lift of $f$ to a coalgebra morphism. The following properties are satisfied:
\begin{enumerate}
\item for any homogeneous map $q \in \underline{\Hom}^k(C,V)$ of degree $k$ there exists a unique $F$-coderivation $Q \in \coDer(C,\overline{T(V)};F)$ such that the following diagram commutes in the category of graded vector spaces:
\begin{displaymath}
\begin{tikzcd}
C \ar[r,"\exists!~Q"] \ar[dr,"\forall~q"'] & \overline{T(V)}[k] \ar[d,"{\pr[k]}"] \\
& V[k]
\end{tikzcd}
\end{displaymath}
\item Explicitly, $Q$, the unique lift of $q$ to a coderivation, is given by the following commutative diagram in $\GVect$:
\begin{displaymath}
\begin{tikzcd}
\overline{T(C)}\ar[rrd,"R(q)"] \\
C \ar[u,"\sum_{n>0}\Gamma^n"] \ar[rr,"Q"'] & & \overline{T(V)}[k]
\end{tikzcd}
\end{displaymath}
where
\begin{equation}\label{Eq:operatorR}
\morphism{R} {\underline{\Hom}(C,V)} {\coDer\big(\overline{T(C)},\overline{T(V)};T(f)\big)} {q} {\displaystyle R(q)=\sum_{n\geq 1} \left[\sum_{i=0}^{n-1} f^{\otimes i} \otimes q \otimes f^{\otimes(n-1-i)}\right]}
\end{equation}
and hence
\begin{equation}\label{Equation:explicitLiftToCoderivation}
Q = \sum_{n \geq 1} \left[ \sum_{i=0}^{n-1} (f^{\otimes i} \otimes q \otimes f^{\otimes (n-1-i)}) \circ \Gamma^{n-1} \right] ~.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
One only has to check that
\begin{displaymath}
R(q) = \sum_{n \geq 1} \left[ \sum_{i=0}^{n-1} f^{\otimes i} \otimes q \otimes f^{\otimes (n-1-i)} \right]
\end{displaymath}
is a $T(f)$-coderivation in the sense of definition \ref{Def:FcoDerivations} and remark \ref{Remark:TensorFunctors} (see \cite[Prop. 1.3.5]{Manetti-website-coalgebras} for the complete computation).
If the latter is true, the precomposition of $R(q)$ with the canonical coalgebra morphism $\sum_{n\geq 1}\Gamma^n \in \Hom_{\text{coAlg}}(C,\overline{T(C)})$ happens to be an $L(f)$-coderivation, where $L$ denotes the lift to a coalgebra morphism (see equation \eqref{Eq:LiftMorphismMapping}), because
\begin{displaymath}
(T(f))\circ \sum_{n\geq 1}\Gamma^n = L(f) ~.
\end{displaymath}
\end{proof}
\begin{notation}[Lift to a coderivation operator]\label{Notation:CoderLiftMapping}
We denote by
\begin{displaymath}
\morphism{\widetilde{L}} {\underline{\Hom}^k(C,V)} {\coDer^k(C,\overline{T(V)};F)} {q} {Q}
\end{displaymath}
the mapping giving the unique \emph{lift to a coderivation} prescribed by proposition \ref{Prop:UniversalPropertyCoderivation}. We decorated this operator with a tilde to avoid possible confusion when dealing with degree zero maps $C\to V$ since, according to remark \ref{Rem:LiftToMorphism}, they also admit a \emph{lift to a coalgebra morphism}.
\end{notation}
The similarity with the case of the lift to a coalgebra morphism (see section \ref{Section:TensorCoalgebras}) continues: this universal property can also be read as an isomorphism of graded vector spaces (\cf theorem \ref{Theorem:HomCoAlgISO}):
\begin{theorem}[Universal property as an isomorphism between hom-spaces]\label{Theorem:HomCoAlgISO-Coder}
The following graded vector spaces are isomorphic
\begin{displaymath}
\coDer^k(C,\overline{T(V)};F) \cong \underline{\Hom}^k(C,V)
\end{displaymath}
and the isomorphism is induced from the corestriction according to the following diagram:
\begin{displaymath}
\begin{tikzcd}
\underline{\Hom}^k(C,\overline{T(V)}) \ar[r,"P"] & \underline{\Hom}^k(C,V) \ar[d,equal] \\
\coDer^k(C,\overline{T(V)};F) \ar[u,hook]& \underline{\Hom}^k(C,V) \ar[l,"\widetilde{L}"]
\end{tikzcd}~,
\end{displaymath}
where $P$ denotes post-composition with the standard projection $\pr:T(V) \to V$. In particular, one also has that
\begin{displaymath}
\coDer^k(C,\overline{T(V)};F) \cong \coDer^k(C,\overline{T(V)};G)
\end{displaymath}
for any choice of $F,G\in \Hom_{\text{coAlg}}(C,\overline{T(V)})$.
\end{theorem}
\begin{proof}
For any $q \in \underline{\Hom}^k(C,V)$ one has that
\begin{displaymath}
P(\widetilde{L}(q)) = \pr \circ \sum_{n \geq 1} \left[ \sum_{i=0}^{n-1} (f^{\otimes i} \otimes q \otimes f^{\otimes (n-1-i)}) \circ \Gamma^{n-1} \right] = q
\end{displaymath}
since the only non-zero summand is the one with $n=1$ and $i=0$. Hence $P\circ \widetilde{L} = \id_{\underline{\Hom}^k(C,V)}$.
\\
For any $Q \in \coDer^k(C,\overline{T(V)};F)$ one has
\begin{align*}
\widetilde{L} (P (Q)) \equal{\quad}& \widetilde{L}(\pr\circ Q) = \\
\equal{}& \sum_{n\geq 1} \left( \sum_{i=0}^{n-1} f^{\otimes i}\otimes (\pr\circ Q) \otimes f^{\otimes(n-1-i)}\right) \circ \Gamma^{n-1} = \\
\equal{Rem. \ref{Rem:LiftToMorphism}} & \sum_{n\geq 1} \left( \sum_{i=0}^{n-1} (\pr\circ F)^{\otimes i}\otimes (\pr\circ Q) \otimes (\pr \circ F)^{\otimes(n-1-i)}\right) \circ \Gamma^{n-1} = \\
\equal{\quad}& \sum_{n\geq 1} \pr^{\otimes n}\circ\left( \sum_{i=0}^{n-1} F^{\otimes i}\otimes Q \otimes F^{\otimes(n-1-i)}\right) \circ \Gamma^{n-1} = \\
\equal{ $F$-coderivation}&\quad \sum_{n\geq 1} \pr^{\otimes n} \circ \Delta^{n-1} \circ Q= \\
\equal{}& \sum_{n\geq 1} \pr\eval_{V^{\otimes n}} \circ Q = \\
\equal{}& Q
\end{align*}
where $\pr\eval_{V^{\otimes n}}: T(V) \to V^{\otimes n}$ is the standard projection.
\end{proof} In the particular case where the coalgebra $C$ is cocommutative, \eg when $C=\overline{S(V)}$, a similar universal property holds. The difference is that now the lift maps into the symmetric tensor coalgebra. \begin{proposition}[Universal property of homogeneous maps (cocommutative case)]\label{Prop:UniversalPropertyCoderivationCocommutative} Consider a cocommutative coalgebra $(C,\Gamma)$ a graded morphism $f:C \to V$ and denote by $F=L_{\sym}(f) \in \Hom_{\text{coAlg}}(C,\overline{S(V)})$ the unique lift of $f$ to a coalgebra morphism. The following properties are satisfied: \begin{enumerate} \item for any homogeneous map $q \in \underline{\Hom}^k(C,V)$ in degree $k$ there exists a unique $F$-coderivation $ Q \in \coDer(C,\overline{S(V)};F)$ such that the following diagram commutes in the graded vector space category: \begin{displaymath} \begin{tikzcd} C \ar[r,"\exists!~Q"] \ar[dr,"\forall~q"'] & \overline{S(V)}[k] \ar[d,"{\pr[k]}"] \\ & V[k] \end{tikzcd} \end{displaymath} \item Explicitly, the unique lift is given by the following commutative diagram in $\GVect$: \begin{displaymath} \begin{tikzcd}[column sep = huge] \overline{T(C)}\ar[rr,"R(q)"] & & \overline{T(V)}[k]\ar[dd,"{\pi[k]}"] \\ \overline{S(C)} \ar[u,hook,"N"] \\ C \ar[u,"\sum\limits_{n>0}\frac{\pi}{n!}\Gamma^n"'] \ar[uu,bend left=60, "\small\sum\limits_{n>0}\Gamma^n"] \ar[rr,"Q"'] \ar[rruu,"\widetilde{L}(q)",sloped]& & \overline{S(V) }[k] \end{tikzcd} \end{displaymath} where $R$ is the operator defined in equation \eqref{Eq:operatorR} and $N$ is the injection of $S(V)\hookrightarrow T(V)$ introduced in remark \ref{Remark:ManettiNotation}. \end{enumerate} \end{proposition} \begin{proof} The proof runs in two steps. The first is to prove that when $C$ is cocommutative the image of $\sum_{n>0} \Gamma^n$ is $S_n$-invariant, \ie the operator factors trough $N$. The second is to prove that the image of $R(q)$ restricted to $\overline{S(V)}$, \ie $\pi \circ \widetilde{L}(q)$, is a coderivation with respect to $\pi \circ T(f) \circ N = S(F)$ (see for instance \cite[\S 1.5]{Manetti-website-coalgebras}). \end{proof} \begin{notation}\label{Notation:CoderLiftMappingCocommutative} We denote by \begin{displaymath} \morphism{\widetilde{L}_{\sym}= \pi \circ \widetilde{L}} {\underline{\Hom}^k(C,V)} {\coDer^k(C,\overline{S(V)};F)} {q} {Q} \end{displaymath} the mapping giving the unique lift to a coderivation prescribed by proposition \ref{Prop:UniversalPropertyCoderivationCocommutative}. \end{notation} In the cocommutative case, it is possible to give an a alternative explicit expression of the lift employing the sum on all unshuffles. 
\begin{proposition}[Unshuffles notation for the symmetric lift]\label{Prop:SymLiftUnshufflesNotation}
Given a cocommutative coalgebra $(C,\Gamma)$, for any $q \in \underline{\Hom}(C,V)$ the unique lift is given by
$$\widetilde{L}_{\sym}(q) = H(q) \circ \sum_{n>0}\Gamma^n$$
where
\begin{displaymath}
\morphism{H} {\underline{\Hom}(C,V)} {\coDer\big(\overline{S(C)},\overline{S(V)};S(f)\big)} {q} {\displaystyle H(q)=\sum_{n\geq 1} \pi \circ \left( (q \otimes f^{\otimes (n-1)}) \circ B_{1,n-1} \right) \circ N}
\end{displaymath}
\end{proposition}
\begin{proof}
The claim follows from the commutativity of the following diagram
\begin{displaymath}
\begin{tikzcd}[column sep = huge]
\overline{S(C)} \ar[hook,d,"N"] \ar[drr,"H(q)"]&& \\
\overline{T(C)}\ar[drr,"R(q)"] && \overline{S(V)}\ar[d,hook,"N"] \\
C \ar[rr,"\widetilde{L}(q)"] \ar[uu,bend left=60,"\sum_{n>0}\frac{\pi}{n!}\Gamma^n"] \ar[u,"\sum_{n>0}\Gamma^n"']\ar[drr,"q"] && \overline{T(V)} \ar[d] \\
&& V
\end{tikzcd}
\end{displaymath}
(compare with the diagram in proposition \ref{Prop:UniversalPropertyCocommutativeGradedCoalgebras}), where the leftmost triangle is the factorization of $\sum_{n>0}\Gamma^n$ when $C$ is cocommutative, and the uppermost square is a consequence of the following computation (involving the operator $\mathcal{C}_{(n)}$ of cyclic permutation, see appendix \ref{App:UnshuffleAtors}):
\begin{align*}
R_n(b) &\mathcal{S}_{(n+1)} = \\
\equal{}& \left(\sum_{i+j=n} f^{\otimes i}\otimes b \otimes f^{\otimes j} \right) \circ \mathcal{S}_{(n+1)} = \\
\equal{}& \frac{1}{(n+1)!} \sum_{i+j=n} \sum_{\sigma \in S_{(n+1)}}\left( f^{\otimes i}\otimes b \otimes f^{\otimes j} \right)\circ B_{\sigma} = \\
\equal{Lem: \ref{Lemma:CyclicCommutation}}& \frac{1}{(n+1)!} \sum_{i+j=n} \sum_{\sigma \in S_{(n+1)}} \left((\mathcal{C}_{i+1}\otimes \Unit_j) \circ(b \otimes f^{\otimes n})\circ \mathcal{C}^{-1}_{(i+1)} \right)\circ B_{\sigma} = \\
\equal{}& \frac{1}{(n+1)!} \sum_{i+j=n} \sum_{\sigma \in S_{(n+1)}} \left((\mathcal{C}_{i+1}\otimes \Unit_j) \circ(b \otimes f^{\otimes n}) \right)\circ B_{\sigma} = \\
\equal{Eq: \ref{Eq:DecompositionofSymmetrizator}}& \frac{n!}{(n+1)!} \sum_{i+j=n} \left((\mathcal{C}_{i+1}\otimes \Unit_j) \circ(b \otimes f^{\otimes n}\mathcal{S}_n) \right)\circ B_{1,n} = \\
\equal{Prop: \ref{Proposition:PermutingHomogeneousMaps}}& \left[ \frac{1}{n+1} \sum_{i=0}^n (\mathcal{C}_{i+1}\otimes \Unit_{n-i}) \circ (\Unit\otimes \mathcal{S}_n) \right] \circ(b \otimes f^{\otimes n}) \circ B_{1,n} = \\
\equal{}& \mathcal{S}_{n+1} \circ (b \otimes f^{\otimes n}) \circ B_{1,n} ~.
\end{align*}
The latter implies that
\begin{align*}
R(b) \circ N =&~ R(b) \circ \mathcal{S} \circ N = \\
=&~ N \circ \left[ \pi \circ \left(\sum_{n>0} (b \otimes f^{\otimes n}) \circ B_{1,n} \right) \circ N \right] = \\
=&~ N \circ H(b)~.
\end{align*}
\end{proof}
Similarly to theorem \ref{Theorem:HomCoAlgISO-Coder}, this universal property can also be read as an isomorphism of graded vector spaces:
\begin{theorem}[Universal property as an isomorphism between hom-spaces]\label{Theorem:HomCoAlgISOCoCommutative-Coder}
Given a cocommutative coalgebra $C$, the following graded vector spaces are isomorphic
\begin{displaymath}
\coDer^k(C,\overline{S(V)};F) \cong \underline{\Hom}^k(C,V)
\end{displaymath}
where $F= L_{\sym}(f)$ is the unique lift to a coalgebra morphism of an arbitrary $f\in \Hom(C,V)$, as defined in section \ref{Section:TensorCoalgebras}.
The isomorphism is induced from the corestriction according to the following diagram:
\begin{displaymath}
\begin{tikzcd}
\underline{\Hom}^k(C,\overline{S(V)}) \ar[r,"N\circ-"] &[-0.5em] \underline{\Hom}^k(C,\overline{T(V)}) \ar[r,"P"] &[-0.5em] \underline{\Hom}^k(C,V) \ar[d,equal] \\
\coDer^k(C,\overline{S(V)};F) \ar[u,hook] \ar[r,leftrightarrow,"\cong"]& \coDer^k(C,\overline{T(V)};N \circ F) \ar[u,hook]& \underline{\Hom}^k(C,V) \ar[l,"\widetilde{L}"]
\end{tikzcd}~,
\end{displaymath}
where $P$ denotes post-composition with the standard projection $T(V) \to V$ and $N\circ-$ denotes post-composition with the injection $N:S(V)\hookrightarrow T(V)$.
\end{theorem}
\begin{proof}
We already showed that $\widetilde{L}$ factorizes through $\underline{\Hom}^k(C,\overline{S(V)})$ since the image of $\widetilde{L}(q)$ is invariant under permutations for any $q \in \underline{\Hom}^k(C,V)$. It remains to exhibit the bottom left isomorphism. Note first that for any $Q\in \coDer^k(C,\overline{S(V)};F)$ one has
\begin{displaymath}
\begin{aligned}
\Delta \circ N \circ Q =&~ (N\otimes N) \circ \Xi \circ Q = \\
=&~ (N \otimes N) \circ ( Q \otimes F + F \otimes Q ) \circ \Gamma
\end{aligned}
\end{displaymath}
where $\Delta$, $\Xi$ and $\Gamma$ denote the coproducts in $T(V)$, $S(V)$ and $C$ respectively. Therefore, post-composition with $N$ yields a morphism
$$N\circ - : \coDer^k(C,\overline{S(V)};F) \to \coDer^k(C,\overline{T(V)};N \circ F) ~.$$
This morphism is invertible, with inverse given by post-composition with $\pi$, since any $B\in \coDer^k(C,\overline{T(V)};N \circ F)$ lies in the image of $\widetilde{L}$, which is contained in the space of permutation invariants.
\end{proof}
In other words, any coderivation $\theta\in\coDer^k(S(V))$ is fully determined by its corestriction, \ie by its projection onto $V$.
\subsection{Algebra of coderivations on tensor spaces}\label{Subsection:CoderivationsAlgebra}
We now focus our attention on the space of coderivations on a given (reduced) tensor coalgebra. Namely, we are going to specialize the construction of the previous subsection to the case of $F$-coderivations on $C=\overline{T(V)}$ with
\begin{displaymath}
F = L(\pr) = \id_{\overline{T(V)}} ~,
\end{displaymath}
where $\pr: \overline{T(V)}\to V$ denotes the standard projection. Recall that we denote the graded vector space of such coderivations as
\begin{displaymath}
\mathclap{
\coDer(\overline{T(V)}) := \left \lbrace Q \in \underline{\Hom}(\overline{T(V)},\overline{T(V)}) ~ \left\vert \quad \Gamma\circ Q = (Q\otimes \Unit + \Unit \otimes Q) \circ \Gamma \right. \right \rbrace ~,}
\end{displaymath}
where $\Gamma$ denotes the \emph{deconcatenation coproduct} acting on decomposable elements as follows:
\begin{displaymath}
\morphism{\Gamma} {\overline{T(V)}}{\overline{T(V)}\otimes \overline{T(V)}} {x_1\otimes\dots \otimes x_n} {\displaystyle\sum_{i=1}^{n-1}(x_1\otimes\dots\otimes x_i)\otimes (x_{i+1}\otimes\dots\otimes x_n)} ~.
\end{displaymath}
The interest in this case stems from the fact that $\coDer(\overline{T(V)})$ can be proved to be isomorphic to the space of multilinear operators; in particular, coderivations provide an alternative language for dealing with multibrackets:
\begin{corollary}[(of prop.
\ref{Prop:UniversalPropertyCoderivation})] Let $C = \overline{T(V)}$ be the reduced tensor coalgebra of a graded vector space $V$. Consider the graded morphism $f=\pr: \overline{T(V)}\to V$; the unique lift prescribed by proposition \ref{Prop:UniversalPropertyCoderivation} is given by: \begin{displaymath} \morphism{\widetilde{L}} {\underline{\Hom}^k(\overline{T(V)},V)} {\coDer^k(\overline{T(V)})} {q} {Q=\displaystyle \sum_{n\geq1} \sum_{s=1}^{n} \left( \sum_{i=0}^{s-1} \Unit_i\otimes q_{n-s+1}\otimes\Unit_{s-1-i} \right)} \end{displaymath} where $q_i = \pr \circ q \eval_{V^{\otimes i}}$ denotes, as usual, the $i$-th corestriction of $q$. On any given $n$-tuple of vectors $v_1,\dots,v_n$, one has explicitly that \begin{displaymath} Q(v_1\otimes \dots \otimes v_n)= \sum_{i,\ell} (-)^{k(|v_1|+\dots+|v_i|)} v_1\otimes \dots \otimes v_i \otimes q(v_{i+1}\otimes\dots\otimes v_{i+\ell})\otimes \dots \otimes v_n ~. \end{displaymath} \end{corollary} For any graded vector space the naturally associated graded space of coderivations $\coDer(\overline{T(V)})$ (recall that the grading is given by the degree as homogeneous map) can be equipped with a (non-associative) algebra structure: \begin{definition}[Gerstenhaber product of coderivations]\label{Def:GerstProdOfCoderivations} We call \emph{Gerstenhaber product} of coderivations the (non-associative) product given by \begin{displaymath} \morphism{-\gerst-} {\coDer(\overline{T(V)})\otimes \coDer(\overline{T(V)}) } {\coDer(\overline{T(V)})} {Q\otimes R} {\widetilde{L}(\pr \circ Q\circ R)} \end{displaymath} where $\circ$ denotes the usual composition of graded linear maps. \end{definition} In other words, $Q\gerst R$ is the unique coderivation with corestriction given by $\pr \circ Q\circ R$. Recall that $Q\circ R$ is not a coderivation in general (see remark \ref{Rem:CompositionofCoderivation}). The graded map $\widetilde{L}$ defined in remark \ref{Notation:CoderLiftMapping} yields not only an isomorphism at the level of graded vector spaces but also one at the level of algebras: \begin{lemma}[Gerstenhaber algebras isomorphism]\label{Lemma:GerstenAlgebrasIso} The graded vector space isomorphism \begin{displaymath} \widetilde{L}~:~ M(V) \equiv (\underline{\Hom}(\overline{T(V)},V), \gerst) ~\to~ (\coDer(\overline{T(V)}),\gerst) \end{displaymath} is an isomorphism in the category of graded algebras. \end{lemma} \begin{proof} Since $\widetilde{L}$ is invertible, it is sufficient to prove that the inverse $\pr$ preserves the product. For any fixed $n>0$ one has \begin{displaymath} \begin{aligned} \pr(Q\gerst R) \vert_{V^{\otimes n}} \equal{\quad}& (q \circ R ) \vert_{V^{\otimes n}} = \\ \equal{\quad}& \sum_{s=1}^n q_s \circ (\sum_{i=0}^{s-1} \Unit_i \otimes r_{n-s+1} \otimes \Unit_{s-1-i} )= \\ \equal{Def: \ref{Def:ithGerstenhaberProduct}}& \sum_{s=1}^n \sum_{i=0}^{s-1} q_s \gerst_i r_{n-s+1} = \\ \equal{\quad}& \sum_{s=1}^n q_s \gerst r_{n-s+1} = \\ \equal{\quad}& (q \gerst r)_n = \\ \equal{\quad}& (\pr\circ Q \gerst \pr\circ R)\vert_{V^{\otimes n}} ~. \end{aligned} \end{displaymath} \end{proof} \begin{remark} We ought to notice that, in the literature (see \eg \cite{Manetti-website-coalgebras}), the definition of the Gerstenhaber product of multibrackets (see definition \ref{Def:FullGerstenhaberProduct}) is often introduced via the previous lemma. In other words, the operator $\gerst$ on $M(V)$ is introduced by pulling back the product $\gerst$ on $\coDer(\overline{T(V)})$ along $\widetilde{L}$.
\end{remark} \begin{remark}[$\gerst$ is pre-Lie]\label{Remark:GerstPreLie} Note that for any given $Q,R \in \coDer(\overline{T(V)})$ one has \begin{displaymath} \begin{aligned} [Q,R]_{\gerst} =&~ Q \gerst R - (-)^{|Q||R|} R \gerst Q = \\ =&~ \widetilde{L} \circ \pr \circ \left( Q\circ R - (-)^{|Q||R|} R \circ Q \right) = \\ =&~\widetilde{L} \circ \pr \circ ([Q,R]_{\circ}) \end{aligned} \end{displaymath} where the subscripts $\gerst$ and $\circ$ denote the commutator brackets with respect to the Gerstenhaber product and the composition of graded linear maps respectively. In particular, one has \begin{displaymath} J_{\gerst}(Q,R,S) = \widetilde{L} \circ \pr (J_{\circ}(Q,R,S)) = \widetilde{L} \circ \pr (0) = 0 \end{displaymath} hence the Gerstenhaber product defined on the space of coderivations is pre-Lie. This observation, together with lemma \ref{Lemma:GerstenAlgebrasIso}, provides a more conceptual proof of the pre-Lie property of the Gerstenhaber product $\gerst$ on $M(V)$. \end{remark} \subsection{Algebra of coderivations on symmetric tensor spaces}\label{SubSection:CoderivationonSymmetricTensor} Consider now a graded vector space $V$ and let us focus on the associated reduced symmetric tensor coalgebra $\overline{S(V)}= \oplus_{k\geq 1} V^{\odot k}$. Recall that $\overline{S(V)}$ carries a natural coassociative coalgebra structure with coproduct given (see section \ref{Section:TensorCoalgebras}) by the unshuffled deconcatenation $\Xi$ \begin{displaymath} \morphism{\Xi} {\overline{S(V)}} {\overline{S(V)}\otimes \overline{S(V)}} {x_1\odot\dots \odot x_n} {\displaystyle\sum_{i=1}^{n-1} \mkern-60mu\sum_{\mkern80mu\sigma\in \ush{i,n-i}}\mkern-60mu \epsilon(\sigma)\, (x_{\sigma_1}\odot\dots\odot x_{\sigma_i})\otimes (x_{\sigma_{i+1}}\odot\dots\odot x_{\sigma_n})} \end{displaymath} where $\ush{i,j}\subset S_{i+j}$ denotes the set of $(i,j)$-unshuffle permutations (see appendix \ref{App:UnshuffleAtors}). Specializing theorem \ref{Theorem:HomCoAlgISOCoCommutative-Coder} to this specific cocommutative coalgebra yields a graded isomorphism between the space of coderivations and symmetric multilinear maps: \begin{corollary}[of theorem \ref{Theorem:HomCoAlgISOCoCommutative-Coder}] Considering $C = \overline{S(V)}$ and $f=\pr\circ N$, the unique lift prescribed by proposition \ref{Prop:UniversalPropertyCoderivation} is given by: \begin{displaymath} \mathclap{ \morphism{\widetilde{L}_{\sym}~} {\underline{\Hom}^k(\overline{S(V)},V)} {\coDer^k(\overline{S(V)})} {q} {Q=\displaystyle \sum_{n\geq1} \sum_{s=1}^n ~ \pi\circ \left( q_{n-s+1}\otimes \Unit_{s-1}\right) \circ B_{n-s+1,s-1} \circ N} } \end{displaymath} where $\pi$ and $N$ are the operators introduced in remark \ref{Remark:ManettiNotation}. On any given $n$-tuple of vectors $v_1,\dots,v_n$, one has explicitly that \begin{displaymath} Q(v_1\odot \dots \odot v_n)= \sum_{i=1}^n \mkern-60mu\sum_{\mkern80mu\sigma\in \ush{i,n-i}}\mkern-60mu \epsilon(\sigma) q_i ( v_{\sigma_1}\odot\dots\odot v_{\sigma_i}) \odot v_{\sigma_{i+1}}\odot \dots \odot v_{\sigma_n} ~, \end{displaymath} where $\ush{i,j}$ denotes as usual the $(i,j)$-unshuffles and $\epsilon(\sigma)$ is the Koszul sign.
\end{corollary} \begin{proof} Applying the result of proposition \ref{Prop:SymLiftUnshufflesNotation} to this special case we get \begin{displaymath} \begin{aligned} \widetilde{L}_{\sym}(q) =& \sum_{n> 0} H_n(q) \circ \Xi^n = \\ =& \sum_{n> 0} \pi \circ (q \otimes \Unit_{n-1}) \circ B_{1,n-1} \circ N \circ \Xi^n = \\ =& \sum_{n> 0} \pi \circ (\sum_{i>0}q_i\otimes \Unit_{n-1}) \circ B_{1,n-1} \circ N \circ \Xi^n = \\ =& \sum_{i,n> 0} \pi \circ (q_i\otimes \Unit_{n-1}) \circ B_{1,n-1} \circ N \circ \Xi^n = \\ =& \sum_{n> 0}\sum_{s=1}^n \pi \circ \left[\left(q_{n-s+1}\otimes \Unit_{s-1}\right)\circ B_{n-s+1,s-1} \right] \circ N \circ \Xi^n = \\ =& \sum_{n> 0}\sum_{s=1}^n \pi \circ \left[\left(q_{n-s+1}\otimes \Unit_{s-1}\right)\circ B_{n-s+1,s-1} \right] \circ \Delta^n \circ N = \\ =& \sum_{n> 0}\sum_{s=1}^n \pi \circ \left[\left(q_{n-s+1}\otimes \Unit_{s-1}\right)\circ B_{n-s+1,s-1} \right] \circ \Pi_{V^{\otimes n}} \circ N \end{aligned} \end{displaymath} where, in the last equality, the operator $\Delta^n$ can be replaced by the projector onto $V^{\otimes n}$ since the term between square brackets is an operator $V^{\otimes n} \to V^{\otimes s}[|q|]$. \end{proof} Similarly to what has been done in section \ref{Subsection:CoderivationsAlgebra}, we can equip the graded vector space $\coDer(\overline{S(V)})$ with an algebraic product: \begin{definition}[\RN product of coderivations]\label{Def:SymmetricGerstProdOfCoderivations} We call \emph{\RN product} of coderivations the (non-associative) product given by \begin{displaymath} \morphism{-\symgerst-} {\coDer(\overline{S(V)})\otimes \coDer(\overline{S(V)}) } {\coDer(\overline{S(V)})} {A\otimes B} {\widetilde{L}_{\sym}(\pr \circ A \circ B)} ~. \end{displaymath} \end{definition} Again as before, the graded map $\widetilde{L}_{\sym}$ defined in remark \ref{Notation:CoderLiftMappingCocommutative} yields not only an isomorphism at the level of graded vector spaces but also one at the level of non-associative algebras. \begin{lemma}[Gerstenhaber algebras isomorphism]\label{Lemma:GerstenAlgebrasIsoSymmetric} The graded vector space isomorphism \begin{displaymath} \widetilde{L}_{\sym}~:~ M^{\sym}(V) \equiv (\underline{\Hom}(\overline{S(V)},V), \symgerst) ~\to~ (\coDer(\overline{S(V)}),\symgerst) \end{displaymath} is an isomorphism in the category of graded algebras. \end{lemma} \begin{proof} Since $\widetilde{L}_{\sym}$ is invertible, it is sufficient to prove that the inverse $\pr$ preserves the product. Consider $R=\widetilde{L}_{\sym}(r)$ and $Q=\widetilde{L}_{\sym}(q)$ in $\coDer(\overline{S(V)})$. For any fixed $n>0$ one has \begin{displaymath} \begin{aligned} \pr (R\symgerst Q) \vert_{V^{\otimes n}} \equal{Def: \ref{Def:SymGerstProd}}& (r \circ Q ) \vert_{V^{\otimes n}} = \\ \equal{}& \sum_{s=1}^n r_s \circ \pi \left[(q_{n-s+1}\otimes \Unit_{s-1}) \circ B_{n-s+1,s-1}\right] \circ N = \\ \equal{Def: \ref{Def:ithGerstenhaberProduct}}& \sum_{s=1}^n r_s \symgerst q_{n-s+1} = \\ \equal{}& (r \symgerst q)_n = \\ \equal{}& \left((\pr\circ R) \symgerst (\pr\circ Q)\right)\vert_{V^{\otimes n}} ~. \end{aligned} \end{displaymath} \end{proof} Therefore, both $\coDer(\overline{S(V)})$ and $M^{\sym}(V)= \prod_{n\geq 0} \underline{\Hom}(V^{\odot n},V)$, when endowed with the \RN product $\cs$, are right pre-Lie algebras. In particular, one has that the graded Lie bracket associated to any two $R,Q \in (\coDer(\overline{S(V)}),\cs)$, \ie \begin{displaymath} [Q,R]:= Q\cs R - (-)^{|Q||R|} R \cs Q \in \coDer(\overline{S(V)}) ~, \end{displaymath} coincides with the standard commutator of linear operators.
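For concreteness, the following display is a direct specialization of the explicit formula in the corollary above, under the assumption that $q$ is concentrated in arity two, \ie $q=q_2$; it illustrates how the symmetric lift $Q=\widetilde{L}_{\sym}(q)$ acts on a word of length three: \begin{displaymath} Q(v_1\odot v_2\odot v_3) = q(v_1\odot v_2)\odot v_3 + (-)^{|v_2||v_3|}\, q(v_1\odot v_3)\odot v_2 + (-)^{|v_1|(|v_2|+|v_3|)}\, q(v_2\odot v_3)\odot v_1 ~, \end{displaymath} the three summands corresponding to the three $(2,1)$-unshuffles and the prefactors to the associated Koszul signs.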
\section{The threefold nature of the Nijenhuis$-$Richardson algebra}\label{section:AppBConclusions} In this last section, we want to draw the conclusions of this long algebraic excursus. The core of the previous sections was to identify an algebraic framework useful for formalizing expressions involving compositions of multilinear maps. In short, the content of this whole chapter can be subsumed in the following theorem: \begin{theorem}[Algebras of Multibrackets] \label{Theorem:RecapGerstenhaber} Let $V$ be a graded vector space and denote by $M^{\sym}(V)$ and $M^{\skew}(V)$ the graded vector spaces of symmetric and skew-symmetric multilinear maps from $V$ into itself. The following diagram commutes in the category of graded right pre-Lie algebras \begin{equation}\label{eq:diagrammone} \begin{tikzcd}[row sep =large] (\overline{\overline{M(V[-1])}},\overline{\overline{\gerst}}) \ar[r,"\Dec","\cong"'] \ar[d,two heads,"- \circ N_a"] & ({M(V)} ,\gerst) \ar[r,"\widetilde{L} ","\cong"'] \ar[d,two heads,"- \circ N_s"] & (\coDer({T(V)}),\gerst) \ar[d,two heads,"\pi_s\circ - \circ N_a"] \\ (M^{\skew}(V[-1]),\skewgerst) \ar[r,"\Dec","\cong"'] \ar[d,hook,"- \circ \pi_a"] & (M^{\sym}(V),\symgerst) \ar[r,"\widetilde{L}_{\sym}","\cong"'] \ar[d,hook,"-\circ \pi_s"] & (\coDer({S(V)}),\symgerst) \ar[d,hook,"N_s \circ - \circ \pi_s"] \\ (\overline{\overline{{M(V[-1])}}},\skewgerst)\ar[r,"\Dec","\cong"'] & (M(V) ,\symgerst) \ar[r,"\widetilde{L} ","\cong"'] & (\coDer(\overline{T(V)}),\symgerst) \\ \end{tikzcd} \end{equation} where: \begin{itemize} \item[-] $(\overline{M(V)} ,\gerst)$ denotes the \emph{Gerstenhaber algebra of multibrackets} (see remark \ref{rem:BigradedSpaceofMultilinearMaps} and definition \ref{Def:FullGerstenhaberProduct}); \item[-] $(\overline{\overline{M(V)}} ,\overline{\overline{\gerst}})$ denotes the algebra of graded multilinear maps taken with a different grading which intertwines arity and degree of the maps (see remark \ref{rem:DecasGradedMorph}); \item[-] $\cs$ and $\ca$ denote respectively the symmetric and skew-symmetric \RN product (see definitions \ref{Def:SymGerstProd} and \ref{Def:SkewGerstProd} for the multibrackets case and definition \ref{Def:SymmetricGerstProdOfCoderivations} for the coderivations case); \item[-] $\symgerst$ on $\coDer(\overline{T(V)})$ is defined by the following equation\footnote{We will not make any particular use of this algebra. Note that we are only saying that the algebra structure here is obtained by pulling back the product $\cs$ from $M(V)$.} \begin{displaymath} Q \symgerst R = \widetilde{L}\left((\pr \circ Q) \symgerst(\pr \circ R)\right) ~; \end{displaymath} \item[-] $\coDer(T(V))$ and $\coDer(S(V))$ denote the spaces of coderivations on the tensor coalgebra and the symmetric tensor coalgebra respectively; \item[-] $\pi_s: T(V)\to S(V)$ and $\pi_a:T(V)\to \Lambda(V)$ denote the canonical projections of $T(V)=S(V)\oplus\Lambda(V)$; \item[-] $N_s: S(V) \to T(V)$ and $N_a: \Lambda(V) \to T(V)$ denote the canonical injections; \item[-] $f\circ -$ denotes postcomposition with the function $f$ and $- \circ g$ denotes precomposition with the function $g$; \item[-] $\Dec$ is the \emph{d\'ecalage} operator pertaining to multilinear maps (see remark \ref{Notation:FixingDec}); \item[-] $\widetilde{L}$ and $\widetilde{L}_{\sym}$ are the lift operators to the tensor and symmetric tensor coalgebra respectively.
\end{itemize} \end{theorem} \begin{proof} The horizontal isomorphisms have been proved in proposition \ref{Prop:DecalageAsCoalgebrasISO} and lemmas \ref{Lemma:GerstenAlgebrasIso} and \ref{Lemma:GerstenAlgebrasIsoSymmetric}. The central vertical line has been justified in theorem \ref{Thm:ManettiFactorizationOnGerstenhaberAlgebras}. The two rightmost squares simply follow by observing that \begin{displaymath} \begin{aligned} \pr \circ (\pi\circ Q \circ N) =& \pr\circ Q \circ N \qquad \forall Q \in \coDer(\overline{T(V)}) \\ \pr \circ (N\circ R \circ \pi) =& \pr\circ R \circ \pi \qquad \forall R \in \coDer(\overline{S(V)}) \end{aligned} \end{displaymath} where we denote with the same letter $\pr$ the canonical projections from $S(V)$ and $T(V)$ to $V$. \end{proof} \begin{remark}[Coderivations on the anti-commutative tensor coalgebra] For the sake of completeness, we mention that one could add another column to the left of diagram \eqref{eq:diagrammone} introducing the algebra of coderivations on $\Lambda(V)$. That case can be recovered, mutatis mutandis, from the analogous symmetric construction given in section \ref{SubSection:CoderivationonSymmetricTensor}. \end{remark} \begin{remark}\label{Remark:HomSVSVEmbedding} Observe that the reasoning of remark \ref{Remark:MSymEmbedding} can be easily extended to more general homogeneous maps, namely one gets an embedding diagram for homogeneous maps between tensor spaces \begin{displaymath} \begin{tikzcd}[column sep = huge] \underline{\Hom}(\overline{T(V)},\overline{T(V)}) && \underline{\Hom}(\overline{T(V)},\overline{T(V)}) \ar[ll,"\mathcal{S}\circ-\circ \mathcal{S}"'] \ar[d,two heads,"\pi\circ-\circ N"] \\ \underline{\Hom}(\overline{S(V)},\overline{S(V)}) \ar[u,hook,"N\circ-\circ\pi"] \ar[urr,hook,"N\circ-\circ\pi"] && \underline{\Hom}(\overline{S(V)},\overline{S(V)}) \ar[ll,"\sum_{n,m\geq 0} \frac{1}{n!m!} \pr_{V^{\odot n}} \circ - \circ \pr_{V^{\odot m}}"] \end{tikzcd} \end{displaymath} where $\pr_{V^{\odot n}}$ denotes the canonical projector $S(V) \to V^{\odot n}$. It will be clear from theorem \ref{Theorem:RecapGerstenhaber} that these arrows also factor through the coderivations, seen as subspaces of the spaces of homogeneous maps. \end{remark} \begin{remark}[THE \RN algebra] Since the horizontal lines in diagram \eqref{eq:diagrammone} consist of isomorphisms, it is completely legitimate to regard each line as a single object in the category of right pre-Lie algebras. In particular, the three isomorphic algebras contained in the central line can be thought of as \emph{the} \RN algebra pertaining to the graded vector space $V[-1]$. \\ One should think of the different descriptions of the \RN algebra (in terms of symmetric multibrackets, skew-symmetric multibrackets or coderivations) as different presentations of the same algebraic object. \end{remark} \chapter{Multisymplectic manifolds and symmetries}\label{Chap:MultiSymplecticGeometry} \emph{Multisymplectic structures} (also called \emph{``$n$-plectic''}) are a rather straightforward generalization of symplectic ones where closed non-degenerate $(n+1)$-forms replace $2$-forms. \\ Historically, the interest in multisymplectic manifolds, \ie smooth manifolds equipped with an $n$-plectic structure, has been motivated by the need for understanding the geometrical foundations of first-order classical field theories.
The key point is that, just as one can associate a symplectic manifold to an ordinary classical mechanical system (\eg a single point-like particle constrained to some manifold), it is possible to associate a multisymplectic manifold to any classical field system (\eg a continuous medium like a filament or a fluid). It is important to stress that mechanical systems are not the only source of inspiration for instances of this class of structures. For example, any orientable $n$-dimensional manifold can be considered $(n-1)$-plectic when equipped with a volume form, and semisimple Lie groups have a natural interpretation as $2$-plectic manifolds. \\ As proposed by Rogers in \cite{Rogers2010} (see also \cite{Zambon2012}), this generalization can be expanded by introducing a higher analogue of the Poisson algebra of smooth functions (also known as ``observable algebra'') to the multisymplectic case. Namely, he proved that observables on a multisymplectic manifold can be algebraically encoded by an $L_{\infty}$-algebra; that is, a graded vector space endowed with skew-symmetric multilinear brackets satisfying the Jacobi identity up to coherent homotopies. \\ The latter concept allowed for a natural extension of the notion of moment map, called \emph{\momap}, originally defined in \cite{Callies2016}, associated to an infinitesimal action of a Lie group on a manifold preserving the multisymplectic form. This chapter delivers a self-contained survey of the theory of multisymplectic manifolds, as defined by Cantrijn, Ibort, and de Le\'on \cite{CatIbort}, adopting the algebraic framework proposed originally by Rogers in order to study symmetries admitting \momaps. Most results can be found in the literature \cite{Rogers2010,Callies2016,Fregier2015,zbMATH06448534} but we present some of the proofs for a clearer and self-contained exposition. For general background on symplectic geometry and (co)momentum maps we quote, among others, \cite{Abraham1978,GS84,Arn-Khe}. \section{Reminder on multi-Cartan calculus}\label{Sec:MultiCartan} From a purely algebraic point of view, one can define the \emph{Cartan calculus} on every pair $(\mathfrak{g}, A)$ composed of a Lie algebra $\mathfrak{g}$ and a graded associative algebra $A$ as a triple of operations \begin{displaymath} \d \in \Der^1(A) ~, \qquad \mathcal{L} : \mathfrak{g} \to \Der^{0}(A) ~, \qquad \iota: \mathfrak{g} \to \Der^{-1}(A) ~; \end{displaymath} satisfying the following commutation rules for any $x,y\in\mathfrak{g}$: \begin{eqnarray} [\d , \d] &=& 0 \\ {[\mathcal{L}_x , \d]} &=& 0 \\ {[\iota_x , \d]} &=& \mathcal{L}_x \label{Eq:CartanMagic} \\ {[\iota_x , \iota_y]} &=& 0 \\ {[\mathcal{L}_x , \mathcal{L}_y]} &=& \mathcal{L}_{[x,y]_\mathfrak{g}} \\ {[\mathcal{L}_x , \iota_y]} &=& \iota_{[x,y]_{\mathfrak{g}}} \end{eqnarray} where $[\cdot,\cdot]_{\mathfrak{g}}$ denotes the Lie bracket on $\mathfrak{g}$ and $[\cdot,\cdot]$ denotes the graded commutator between endomorphisms on $A$. (See \cite[\S 3]{Meinrenken2004}).
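For instance, since $\iota_x$ and $\d$ are both of odd degree, unravelling the graded commutator in equation \eqref{Eq:CartanMagic} yields the familiar \emph{Cartan magic formula} \begin{displaymath} \mathcal{L}_x = \iota_x \circ \d + \d \circ \iota_x ~. \end{displaymath}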
Specializing this to the geometric setting of smooth manifolds, \ie taking as the Lie algebra the $C^\infty(M)$-module of vector fields and considering the graded commutative $C^\infty(M)$-algebra of differential forms \begin{displaymath} \mathfrak{g} = \mathfrak{X}(M) = \Gamma(TM) ~, \qquad A^k = \Omega^k(M) = \Gamma(\Lambda^k T^\ast M) ~, \end{displaymath} we get the standard Cartan calculus on smooth manifolds, where $\d$ is the \emph{de Rham} differential, $\mathcal{L}_v$ is the \emph{Lie derivative} along the vector field $v$ and $\iota_v$ is the \emph{insertion (contraction or interior product)} of the vector field $v$ into a given form. When working on manifolds with a fixed (``preferred'') differential form in degree greater than two, computations involving several vector fields come immediately into play. Hence, we are led to generalize the previous apparatus replacing $\mathfrak{g}$ with its corresponding \emph{Gerstenhaber algebra} (see remark \ref{Rem:GerstehaberofLieAlg}) or, in the geometric setting, to consider the algebra of \emph{multi-vector fields}. \begin{reminder}[Gerstenhaber Algebra structure on $CE(\mathfrak{g})$]\label{Rem:GerstehaberofLieAlg} Given a Lie algebra $(\mathfrak{g},[\cdot,\cdot])$, consider the Chevalley-Eilenberg chain complex $CE(\mathfrak{g})$ given by $\Lambda^{\geq 1} (\mathfrak{g}[1])$ together with the boundary operator $\partial$ given in equation \eqref{eq:CE_boun} (see reminder \ref{Rem:CEconventions}). One can introduce on $CE(\mathfrak{g})$ a skew-symmetric bilinear operator of degree $-1$ called \emph{Schouten bracket} defined on decomposable elements by \begin{equation}\label{Eq:Schouten} \begin{aligned} [x_1 & \wedge\dots\wedge x_m , y_1 \wedge\dots \wedge y_n ] =\\ &= \mkern-10mu\sum_{\substack{1\leq i \leq m \\ 1 \leq j \leq n}}\mkern-10mu (-)^{i+j} [x_i,y_j] ~\wedge x_1\wedge\dots\wedge \widehat{x_i} \wedge \dots \wedge x_m \wedge y_1\wedge\dots\wedge \widehat{y_j} \wedge \dots \wedge y_n ~. \end{aligned} \end{equation} and then extended by linearity to the whole graded vector space. This operation makes $CE(\mathfrak{g})$ into a \emph{Gerstenhaber algebra}; in particular, the following properties are satisfied \begin{displaymath} [x,[y,z]] = [[x,y],z] + (-)^{(|x|-1)(|y|-1)}[y,[x,z]] \tag{Graded Jacobi} \end{displaymath} \begin{displaymath} [x,y \wedge z] = [x,y] \wedge z + (-)^{(|x|-1)|y|} y \wedge [x,z] \tag{Graded Poisson}~. \end{displaymath} Observe that the Schouten bracket and the Chevalley-Eilenberg operator (equation \eqref{eq:CE_boun}) satisfy the following compatibility relation (see for example \cite[Lemma 3.12]{Ryvkin2016}) for any $p,q \in CE(\mathfrak{g})$ \begin{displaymath} \partial (p \wedge q) = (\partial p) \wedge q + (-)^{|p|} p \wedge (\partial q) + (-)^{|p|} [ p,q] ~. \end{displaymath} \end{reminder} \begin{definition}[(Gerstenhaber) Algebra of multi-vector fields] We call \emph{algebra of multi-vector fields} the Gerstenhaber algebra associated to the Lie algebra of vector fields $\mathfrak{X}(M)$. Explicitly, one has \begin{displaymath} \mathfrak{X}^k(M) = \Gamma(\Lambda^k T M) \cong \Lambda^k_{C^{\infty}(M)} \mathfrak{X}(M) \end{displaymath} together with $\partial$ and $[\cdot,\cdot]$ as defined in equations \eqref{eq:CE_boun} and \eqref{Eq:Schouten} respectively. \end{definition} As before, we call \emph{decomposable multi-vector field} any element $p\in \mathfrak{X}^{\bullet}(M)$ which can be expressed as a wedge product of a fixed number of tangent vector fields.
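For instance, specializing equation \eqref{Eq:Schouten} to a decomposable bivector field and a single vector field, only the terms $(i,j)=(1,1)$ and $(2,1)$ survive and one gets \begin{displaymath} [x_1\wedge x_2 \,,\, y] = [x_1,y]\wedge x_2 - [x_2,y]\wedge x_1 ~. \end{displaymath}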
Hence, multi-vector calculus is determined by the specification of the following three operations on the algebra $\Omega(M)$ (\ie differential forms together with the wedge product): \begin{displaymath} \mathclap{ \d \in \Der^1(\Omega(M)) ~, \quad \mathcal{L} : \mathfrak{X}^k(M) \to \Der^{1-k}(\Omega(M)) ~, \quad \iota: \mathfrak{X}^k(M) \to \Der^{-k}(\Omega(M)) ~; } \end{displaymath} The operator $\d$ is not affected by this transition from vector fields to multi-vector fields and is still given by the de Rham differential. One has only to make clear how to construct $\iota_p$ and $\mathcal{L}_p$ along a given multi-vector field $p$. It is possible to define such operators by iterating the definition of the corresponding operators on ordinary vector fields. \begin{definition}[(Multi-)contraction (interior product)] Given a differential form $\alpha \in \Omega(M)$, for any given decomposable multi-vector field $p=v_1\wedge\dots\wedge v_n$ we define the \emph{multi-contraction} operation of $\alpha$ by $p$ as the consecutive contraction of $\alpha$ with each component of $p$, \ie \begin{displaymath} \iota_p\, \alpha = \iota(v_1\wedge\dots\wedge v_n) \alpha = \iota_{v_n}\dots \iota_{v_1} \alpha ~. \end{displaymath} The corresponding interior product $$\iota_\blank^k: \mathfrak{X}^k(M)\otimes \Omega^\ell(M) \to \Omega^{\ell-k}(M)$$ is extended to a well-defined derivation $\iota_{p}\in \Der^{-k}(\Omega(M))$ by $C^{\infty}(M)$-linearity. \end{definition} \begin{remark} There is an obvious matter of choice in the definition of $\iota$ given by the order in which the components $v_i$ of $p$ are inserted into the form. Clearly, all these conventions only differ by a sign. Here, we are following the convention used, for example, in \cite{Rogers2010,Callies2016,Ryvkin2018}. Another natural choice would be to use the opposite order (see \eg \cite{Delgado2018,Delgado2018b}). The two conventions differ by a sign given by the following equation: \begin{displaymath} \iota_{x_1}\dots\iota_{x_n} = (-)^{\bar{\sigma}_n} \iota_{x_n}\dots \iota_{x_1} \end{displaymath} where $\bar{\sigma}_n$ is the permutation reversing the order of the list of $n$ elements, therefore \begin{displaymath} (-)^{\bar{\sigma}_n} =(-)^{\sum_{i=1}^n (n-i)} = (-)^{\frac{n (n-1)}{2}} ~. \end{displaymath} \end{remark} There is a sign emerging from the previous convention that will appear recurrently when dealing with multisymplectic constructions. We single out a compact definition and a couple of simple properties for later reference: \begin{definition}[Total Koszul sign]\label{Def:SigmaSign} We call \emph{($k$-th) total Koszul sign} the coefficient \begin{equation}\label{Eq:SigmaSign} \varsigma(k) = -(-)^{\frac{k(k+1)}{2}} \end{equation} corresponding to minus the Koszul sign of the permutation reversing $k+1$ degree $1$ elements \footnote{The first total Koszul signs read $+,+,-,-,+,+,-,-,\dots$ for $k$ equal to $1,2,\dots$.}. \end{definition} \begin{lemma}\label{lem:varsigmasignprops} The total Koszul signs satisfy the following properties \begin{equation} \varsigma(k+1)=-\varsigma(k-1) ~,\qquad \varsigma(k-1) = (-)^k \varsigma(k) ~. \end{equation} \end{lemma} The notion of Lie derivative can be extended to the multi-vector fields context simply by enforcing the analogue of Cartan's ``magic'' rule (equation \eqref{Eq:CartanMagic} in the above list).
\begin{definition}[(Multi-)Lie Derivative] Given a differential form $\alpha \in \Omega(M)$, for any given multi-vector field $p\in\mathfrak{X}^{\bullet}(M)$ we define the \emph{multi-Lie derivative} of $\alpha$ along $p$ as the differential form given by the graded commutator \begin{displaymath} \mathcal{L}_p(\alpha) := \d \iota_p \alpha - (-)^{|p|} \iota_p \d \alpha ~. \end{displaymath} \end{definition} These operators satisfy a ``graded'' analogue of the six commutation rules defining the ordinary Cartan calculus: \begin{proposition}[Multi-Cartan commutation rules, {\cite[A.3]{Forger2003}}] For any multi-vector fields $x,y \in \mathfrak{X}^{\bullet}(M)$, the graded derivations $\mathcal{L}_x,\iota_x,\d$ in degrees $|\mathcal{L}_x| =1 -|x|,~ |\iota_x| = -|x|,~ |\d|=1$ satisfy the following commutation rules: \begin{eqnarray} [\d , \d] &=& 0 \\ {[\d, \mathcal{L}_x ]} &=& 0 \\ {[\d, \iota_x]} &=& \mathcal{L}_x \label{Eq:MultiCartanMagic} \\ {[\iota_x , \iota_y]} &=& 0 \\ {[\iota_y,\mathcal{L}_x]} &=& -\iota_{[x,y]_{sc}} \\ {[\mathcal{L}_x , \mathcal{L}_y]} &=& (-)^{(|x|-1)(|y|-1)}\mathcal{L}_{[x,y]_{sc}} \end{eqnarray} where $[\cdot,\cdot]$ is understood as the graded commutator pertaining to the (associative) composition of graded linear operators and $[\cdot,\cdot]_{sc}$ denotes the Schouten bracket of multi-vector fields. \end{proposition} \begin{remark}\label{rem:Lieder} From the previous commutation rules follows the following explicit expression of the Lie derivative along the wedge of two multi-vector fields: \begin{displaymath} \mathcal{L}_{x\wedge y} = [\d, \iota_{x\wedge y} ] = [\d, \iota_y \iota_x] = (-)^{|y|}\iota_y [\d, \iota_x] + [\d,\iota_y] \iota_x = \mathcal{L}_y \iota_x +(-)^{|y|}\iota_y \mathcal{L}_x ~. \end{displaymath} Hence, by iterating Cartan's formula \eqref{Eq:CartanMagic}, one gets the following explicit expression for the Lie derivative along a decomposable multi-vector field \begin{displaymath} \mathcal{L}_{x_1 \wedge\dots\wedge x_n} =(-)^n \biggl[ \iota(\partial(x_{1}\wedge\dots\wedge x_{n}))+\sum_{k=1}^{n} (-1)^{k} \iota( x_{1} \wedge \dots \wedge \widehat{x}_{k} \wedge \dots \wedge {x}_{n})\mathcal{L}_{x_k}\biggr] ~, \end{displaymath} where $\widehat{(\cdot)}$ denotes deletion and $\partial$ is the Chevalley-Eilenberg coboundary operator pertaining to the Lie algebra $\mathfrak{X}(M)$ (see equation \eqref{eq:CE_boun}). \end{remark} This leads to the following recurring formula: \begin{lemma}[Multi-Cartan magic formula {(see \cite[Lem 3.4]{Madsen2013} or \cite[Lem 2.18]{Ryvkin2016})}]\label{lemma:multicartan} Given any $m$ vector fields $x_i \in \mathfrak{X}(M)$, the following equation holds: \begin{equation} \begin{aligned} (-1)^m \d~ \iota(x_1\wedge\dots\wedge x_m) =&~\phantom{+} \iota(x_1\wedge\dots\wedge x_m) \d ~+\\ &+ \iota(\partial( x_1\wedge\dots\wedge x_m)) ~+\\ &+ \sum_{k=1}^{m} (-1)^k \iota( x_1\wedge\dots\wedge \widehat{x_k}\, \wedge\dots\wedge x_m) \mathcal{L}_{x_k} ~.
\end{aligned} \end{equation} \end{lemma} \section{Multisymplectic manifolds}\label{Sec:MultiSymGeo} In this work, we will adopt the notion of \emph{multisymplectic manifold} as introduced by Cantrijn, Ibort, and de Le\'on in \cite{CatIbort}: \begin{definition}[Multisymplectic manifold \cite{CatIbort} ($n$-plectic manifold)]\label{Def:MultisymplecticManifold} We call \emph{pre-multisymplectic manifold} of degree $(n+1)$ a pair $(M,\omega)$ composed of a smooth manifold $M$ together with a closed differential $(n+1)$-form $\omega\in\Omega^{n+1}(M)$, called \emph{pre-multisymplectic form}. \\ A pre-multisymplectic manifold is called \emph{multisymplectic} if $\omega$ is non-degenerate, \ie if the flat map of $\omega$, \begin{equation}\label{Eq:OmegaFlat} \morphism{\omega^\flat} {TM} {\Lambda^{n}T^*M} {(x,u)} {(x,\iota_u \omega_x)} ~, \end{equation} is an injective bundle morphism over $M$. \\ For a fixed degree $(n+1)$ of the form, such manifolds are also called ``$n$-plectic''. \end{definition} Multisymplectic manifolds form a category with the following notion of morphism:
%
\begin{definition}[Multisymplecto-morphism ($n$-plecto-morphism)] Given two $n$-plectic manifolds $(M,\omega)$ and $(M',\omega')$ we call multisymplecto-morphism ($n$-plecto-morphism) $\phi: (M,\omega) \to (M',\omega')$ any diffeomorphism $\phi:M\to M'$ such that \begin{displaymath} \phi^* \omega' = \omega ~. \end{displaymath} \end{definition} \begin{remark}[On other notions of higher symplectic structures] We stress that there are numerous candidates proposed as the ``higher analogue'' of symplectic structures scattered across the literature; most of them are not entirely equivalent to the notion of multisymplectic structure adopted in this text. \\ For instance, we mention the notion of \emph{poly-symplectic structures}~\cite{Gunther1987a} which are given by vector-valued symplectic forms in $\Omega^{2}(M,V)$. Other recurrent names are the so-called \emph{$k$-symplectic structures}~\cite{Awane1992} and the \emph{$k$-almost cotangent structures}~\cite{DeLeon1988} that are now fully understood to be equivalent to the polysymplectic ones (see \cite{Blacker2019} for a review). \\ There is also a notion of \emph{higher symplectic} geometry~\cite{nlab:higher_symplectic_geometry} in the sense of categorification~\cite{Baez2010} based on the idea of considering as underlying space $M$ some generalized notion of smooth space, for instance \emph{Lie $\infty$-algebroids}. \end{remark} There are two extreme cases of multisymplectic manifolds given by the following two examples: \begin{example}[Symplectic forms and volume forms]\label{Ex:VolumesAreMultiSymp} \quad \begin{itemize} \item Symplectic manifolds are $n$-plectic manifolds with $n=1$. \item Orientable manifolds of dimension $(n+1)$ are $n$-plectic with respect to a choice of volume form. \end{itemize} Observe that in both of these cases the flat bundle map $\omega^\flat$ is also bijective, for purely dimensional reasons, since \begin{displaymath} \text{dim}(\Lambda^{n}T^\ast M ) = \binom{n+1}{n} = \binom{n+1}{1} = \text{dim}(\Lambda^{1}T^\ast M ) ~. \end{displaymath} \end{example} In subsection \ref{Sec:MSExamples} we will recall some other examples thoroughly studied in the literature. \subsection{Linear case} When considering a ``flat'' smooth manifold $M$, \ie one globally diffeomorphic to the tangent space $T_p M$ at any point $p\in M$, giving a pre-$n$-plectic form on $M$ is tantamount to giving an $(n+1)$-form on its linear model $V\cong T_p M$.
This justifies the following definition: \begin{definition}[Multisymplectic vector space] We call \emph{$k$-plectic vector space} any $\R$-vector space $V$ endowed with a form $\omega\in \Lambda^{k+1} V^*$ that is non-degenerate, \ie $\omega(v,\cdot) = 0 \Leftrightarrow v=0$. \\ Two $k$-plectic vector spaces $(V,\omega), (V',\omega')$ are said to be \emph{isomorphic} if there exists a linear isomorphism $L: V \to V'$ pulling $\omega'$ back to $\omega$ (a \emph{linear multisymplectomorphism}), \ie such that $L^\ast \omega' = \omega$. \end{definition} \begin{remark} Clearly, the previous notion of multisymplectic manifold can be recovered from the notion of multisymplectic vector space. A \emph{multisymplectic bundle} is a vector bundle $E\to M$ equipped with a multisymplectic (linear) structure of fixed order (say $k$) on each fibre with smooth dependence on the point of the base manifold.
%
If $E$ is indeed the tangent bundle of $M$, the above structure is encoded by a differential form $\omega \in \Omega^k(M)$. If $\omega$ is non-degenerate in the above sense it will be called an \emph{almost multisymplectic structure}. The special subclass of almost multisymplectic structures that are also \emph{integrable}, in the sense that $\omega$ is closed, are the multisymplectic manifolds given in definition \ref{Def:MultisymplecticManifold}. \end{remark} \begin{example}[``Canonical'' linear multisymplectic form]\label{Ex:CanonicalLinearForm} Given any vector space $V$, for any $1\leq k \leq \text{dim}(V)$, the vector space $V\oplus \Lambda^k V^\ast$ is naturally $k$-plectic when equipped with the canonical multisymplectic form (see \cite[Prop. 2.2]{CatIbort}) \begin{displaymath} \morphism{\Omega} {(V\oplus \Lambda^k V^\ast)^{\otimes (k+1)}} {\R} {\pair{v_1}{\alpha_1}\otimes\dots\otimes\pair{v_{k+1}}{\alpha_{k+1}}} {\displaystyle\sum_{i=1}^{k+1}(-)^i \alpha_i(v_1,\dots,\widehat{v_i},\dots,v_{k+1})} ~. \end{displaymath} \end{example} In the $1$-plectic case, the remarkable phenomenon occurs that every symplectic vector space of the same dimension ``looks the same''. Namely, every $1$-plectic vector space is isomorphic to the space $L\oplus L^*$, for a given Lagrangian subspace $L$, together with the canonical symplectic form defined in example \ref{Ex:CanonicalLinearForm}. In other words, the multisymplectic vector space of example \ref{Ex:CanonicalLinearForm}, when $k=1$, gives a canonical model for any $1$-plectic vector space in a given dimension. \\ On the other hand, the situation is much wilder for arbitrary $k$-plectic vector spaces. On a given $n$-dimensional space, there may be several non-isomorphic families of $k$-plectic structures or even none. The following theorem elucidates the picture: \begin{theorem}[Martinet-Capdevielle-Westwick-Djocovi\'c-Ryvkin {\cite[Thm. 4.2]{Ryvkin2018}}]\label{Thm:LeoClassification} Let $\Sigma_n^k$ be the number of non-equivalent isomorphism classes of non-degenerate $k$-forms (\ie $(k-1)$-plectic forms) on $n$-dimensional vector spaces. The following properties hold: \begin{itemize} \item $\Sigma_n^n=1$ for all $n$ (volume forms). \item $\Sigma^1_n$ as well as $\Sigma^{n{-}1}_n$ are zero for $n>1$. \item $\Sigma^2_n$ is 0 for $n$ odd and one for $n$ even (symplectic forms). \item $\Sigma_{n}^{n-2}=\lfloor \frac{n}{2}\rfloor-1$, when $(n \mod 4)\neq 2$ (for $n\geq 4$) and $\Sigma_{n}^{n-2}= \frac{n}{2}$, when $(n \mod 4)=2$ (for $n\geq 4$). \item $\Sigma_6^3=3$, $\Sigma_7^3=8$, $\Sigma_8^3=21$, $\Sigma_7^4=15$ and $\Sigma_8^5=31$.
\item $\Sigma_n^k=\infty$ in all other cases. \end{itemize} \noindent For dimensions up to $n=9$ the numbers look as follows, where the rows range from 0-forms to $n$-forms: \begin{align*} \mathclap{ \begin{array}{c|ccccccccccccccccccccc} n=0&&&&&&&&&&&-\\[1.5em] n=1&&&&&&&&&&-&&1\\[1.5em] n=2&&&&&&&&&-&&0&&1\\[1.5em] n=3&&&&&&&&-&&0&&0&&1\\[1.5em] n=4&&&&&&&-&&0&&1&&0&&1\\[1.5em] n=5&&&&&&-&&0&&0&&1&&0&&1\\[1.5em] n=6&&&&&-&&0&&1&&3&& 3&&0&&1\\[1.5em] n=7&&&&-&&0&&0&& 8&& 15&&2&&0&&1\\[1.5em] n=8&&&-&&0&&1&& 21&&\infty&&31&&3&&0&&1\\[1.5em] n=9&&-&&0&&0&&\infty&&\infty&&\infty&&\infty&&3&&0&&1\\[1.5em] \end{array} } \end{align*} \end{theorem} \begin{remark}[Normal forms in multisymplectic geometry] The \emph{Darboux theorem} is a celebrated result in symplectic geometry stating that, for any given point $p$ of a symplectic manifold $(M,\omega)$ of dimension $2n$, there exists a local coordinate chart $(U_p,(x_1,\dots,x_{2n}))$, called \emph{Darboux coordinates}, such that $\omega\vert_{U_p}$ can be expressed in the normal form \begin{displaymath} \omega\eval_{U_p} = \sum_{i=1}^n \d x^i \wedge \d x^{n+i} ~. \end{displaymath} The existence of such coordinate charts is a consequence of two phenomena which fail in the general $k$-plectic case: \begin{enumerate} \item\label{Item:canonical} All $1$-plectic vector spaces with the same dimension are isomorphic to the canonical symplectic vector space given in example \ref{Ex:CanonicalLinearForm}. \item\label{Item:flatness} All symplectic manifolds are locally diffeomorphic to the linear symplectic manifold $(T_p M,\omega_p)$, \ie for any $p\in M$ there exists a suitable neighbourhood $U_p$ together with a diffeomorphism $\phi:U_p \to T_p M$ such that $\phi(p)=0$ and $\phi^\ast \omega_p = \omega$, hence, in such a coordinate chart, $\omega\vert_{U_p}$ is given by the constant-coefficient differential form $\omega_p$. \end{enumerate}
%
The general failure of \ref{Item:canonical} is completely understood by theorem \ref{Thm:LeoClassification}, \ie the existence of several non-equivalent isomorphism classes. The failure of \ref{Item:flatness} is slightly more subtle and requires defining a notion of flatness at any point $p$, \ie the existence of a local chart $\varphi:U_p \to T_p M$ such that $\varphi(p)=0$ and $\varphi^\ast \omega_p = \omega$. Note that, if the latter holds, it is not guaranteed that local models for every point $p\in M$ will lie in the same isomorphism class. Besides $1$-plectic structures, also $(\text{dim}(M)-1)$-plectic structures (\ie volume forms) and multicotangent bundle structures admit a normal form. Several other examples are studied in \cite[\S 4]{Ryvkin2018}. \end{remark} \subsection{Examples}\label{Sec:MSExamples} In addition to the extreme cases stated in example \ref{Ex:VolumesAreMultiSymp}, several other examples of multisymplectic structures can be found in the literature. Below we mention some examples mainly taken from \cite{CatIbort,Callies2016,Ryvkin2018}. \begin{example}[Multicotangent bundles {\cite[\S 6]{CatIbort}}]\label{Ex:Multicotangent} Consider a smooth manifold $Q$; the corresponding \emph{multicotangent bundle} $M = \Lambda^n T^\ast Q$ is naturally $n$-plectic. \\ We denote by $\pi_Q$ the fibration $\Lambda^n T^\ast Q \twoheadrightarrow Q$ and by $\pi_M$ the fibration $\Lambda^n T^\ast M \twoheadrightarrow M$.
On $M$ one can introduce a differential form $\theta\in\Omega^n(M)$, called \emph{tautological $n$-form}, defined as the unique section in $\Gamma(\Lambda^n T^\ast M, M)$ such that the diagram of smooth maps \begin{center} \begin{tikzcd} M \equiv \Lambda^n T^\ast Q \ar[r,"\theta"] & \Lambda^n T^\ast M \ar[d,two heads,"\pi_M"] \\ Q \ar[r,"\alpha"] \ar[u,"\alpha"] & \Lambda^n T^\ast Q \end{tikzcd} \end{center} commutes for any section $\alpha \in \Gamma(\Lambda^n T^\ast Q, Q)$. In terms of pull-backs of differential forms, it means that $\alpha^\ast \theta = \alpha$. Explicitly, commutativity means that for any $q\in Q$, $\eta\in \Lambda^n T^\ast_q Q$ and vectors $u_1,\dots, u_n \in T_{(q,\eta)}M$ the following equation holds \begin{displaymath} (q, \eta((\pi_Q)_\ast u_1, \dots, (\pi_Q)_\ast u_n )) = (q, \theta_{(q,\eta)}(u_1,\dots, u_n) ) ~, \end{displaymath} where $(\pi_Q)_\ast$ denotes the push-forward along the projection $\pi_Q$, \ie the tangent map $T_{(q,\eta)}\pi_Q$. This can be read as the condition that \begin{displaymath} \theta_{\eta} (u_1,\dots,u_n) = \eta((\pi_Q)_\ast u_1, \dots, (\pi_Q)_\ast u_n ) \end{displaymath} or \begin{displaymath} \theta\eval_{(q,\eta)} = (\pi_Q)^\ast_{(q,\eta)} \eta \end{displaymath} justifying the name of \emph{``tautological form''}. Choosing local coordinates $(q^1,\dots,q^k)$ on $Q$ and denoting by $p_I$, with $I=(i_1,\dots,i_n)$ a multi-index with $1\leq i_1<i_2<\dots<i_n\leq k$, the canonical conjugated coordinates on the fibres, one gets that the tautological form takes the following expression: \begin{displaymath} \mathclap{ \theta\eval_{(q^i,p_I)} = \sum_{1\leq i_1<i_2<\dots<i_n\leq k} p_{i_1,\dots,i_n} \d q^{i_1}\wedge\dots\wedge \d q^{i_n} = \sum_{I} p_I \d q^I ~. } \end{displaymath} \\ The canonical $n$-plectic form on $\Lambda^n T^\ast Q$, also known as \emph{multicanonical}~\cite{Forger2005}, is given by minus the exterior derivative of $\theta$: \begin{displaymath} \omega := -\d \theta ~. \end{displaymath} In the above coordinates it reads as \begin{displaymath} \omega\eval_{(q^i,p_I)} = \sum_{I} -\d p_I \wedge \d q^I ~. \end{displaymath} Non-degeneracy of $\omega$ can be read as the condition that, for any $v= \sum_i x^i \partial_{i} + \sum_I z_I \partial_{p_I}$, the form \begin{displaymath} \iota_v \,\omega = \mkern-15mu \sum_{\substack{I=(i_1,\dots,i_n)\\1\leq i_1<\dots<i_n\leq k}} \mkern-15mu \left( z_I \d q^I + \d p_I \wedge \left( \mkern-50mu\sum_{\mkern70mu j\in\lbrace i_1,\dots,i_n\rbrace} \mkern-50mu x^j\, \d q^{i_1}\wedge\dots \wedge \widehat{\d q^j} \wedge \dots \wedge \d q^{i_n} \right) \right) \end{displaymath} vanishes only if $v=0$. The latter condition is met since the right-hand side is a linear combination of linearly independent $n$-forms. This construction is the ``higher analogue'' of the canonical symplectic structure naturally defined on any cotangent bundle. Note, however, that this is not yet the ``higher analogue'' of a \emph{phase space}, as explained in the following example. \end{example} \begin{example}[Multiphase space {\cite{Carinena1991b,Gimmsy1,CatIbort}}] Consider now the case that $Q$ is fibred over another smooth manifold $\Sigma$, \ie it is the total space of a smooth bundle $\pi:Q\to \Sigma$. Denote by $V(\pi)$ the vertical sub-bundle $V(\pi)\hookrightarrow TQ$. Recall that the fibres of $V(\pi)$ are given by $V_y(\pi) = \ker(\pi_\ast)_y$ for any $y \in Q$.
Consider then the subbundle of $r$-semibasic $n$-forms of $Q$, concretely defined by the following fibres \begin{displaymath} \Lambda^n_r Q = \left\lbrace \alpha \in \Lambda^n T^\ast Q ~ \vert \quad \iota_{v_{1}}\dots \iota_{v_r} \alpha = 0,\quad \forall v_i \in V_{\pi(\alpha)}(\pi) \right\rbrace ~. \end{displaymath} For any $r\leq n$ one has the following diagram in the category of smooth manifolds \begin{displaymath} \begin{tikzcd}[column sep= small,row sep=small] \Lambda^n_r Q \ar[rr,hookrightarrow,"i"] \ar[dr]& & \Lambda^n T^\ast Q \ar[dl]\\ & Q \ar[d,"\; \pi"] & \\ & \Sigma & ~. \end{tikzcd} \end{displaymath} One can then pull back the canonical multisymplectic form $\omega$ given in example \ref{Ex:Multicotangent} from $\Lambda^n T^\ast Q$ to $\Lambda^n_r Q$ to obtain the pre-multisymplectic manifold \begin{displaymath} (\Lambda^n_{r} Q,~ i^\ast\omega) \end{displaymath} that can be proved to be $n$-plectic (see for example \cite[Prop./Def. 2.14]{Ryvkin2018}). A crucial result is that, in the particular case $r=2$, the smooth manifold $\Lambda^n_2 Q$ is diffeomorphic to the twisted affine dual of the first jet bundle \cite{Saunders1989} of $\pi:Q\to\Sigma$ (see \cite[Prop. 2.1]{Gimmsy1} or \cite[Example 2.4]{Baez2010}). The choice of such naming comes from the geometric mechanics of classical field theories with \emph{configuration bundle} given by $\pi$; for details see \cite{Carinena1991b,Gimmsy1,Ryvkin2018}. \end{example} \begin{example}[Semisimple Lie groups \cite{CatIbort}] Any Lie group $G$ with (finite-dimensional) Lie algebra $\mathfrak{g}$ is canonically pre-$2$-plectic; if $\mathfrak{g}$ is semisimple, the canonical pre-$2$-plectic form is non-degenerate. \\ The \emph{canonical $3$-form} for $G$ is the form $\omega \in \Omega^3(G)$ given, for any $g\in G$ and $u_i\in T_g G$, by \begin{displaymath} \omega\eval_g (u_1,u_2,u_3) = \left\langle \theta^L\eval_g(u_1),\left[\theta^L\eval_g(u_2),\theta^L\eval_g(u_3)\right]\right\rangle \end{displaymath} where: \begin{itemize} \item $\pairing$ denotes the \emph{Killing form} of $\mathfrak{g}$ defined as the bilinear operator \begin{displaymath} \morphism{\langle\cdot,\cdot\rangle} {\mathfrak{g}\otimes \mathfrak{g}} {\R} {(x,y)} {\text{trace}(\text{ad}_x \cdot \text{ad}_y )} ~. \end{displaymath} From the properties of the trace and from the $Ad$-equivariance of the Lie bracket one has that $\pairing$ is symmetric and $Ad$-invariant. \item $\text{ad}$ denotes the adjoint representation of $\mathfrak{g}$ explicitly given by the Lie algebra morphism \begin{displaymath} \morphism{\text{ad}} {\mathfrak{g}} {\End(\mathfrak{g})} {x} {(\text{ad}_x :~y\mapsto [x,y])} \end{displaymath} (where $[\cdot,\cdot]$ on $\End(\mathfrak{g})$ is given by the commutator of linear operators); \item $\theta^L$ denotes the \emph{Maurer-Cartan} form of $G$, that is the left invariant $\mathfrak{g}$-valued $1$-form $\theta^L\in \Omega^1(G,\mathfrak{g})$ defined through the tangent map of the left multiplication \begin{displaymath} \morphism{L_g} {G} {G} {h} {g\cdot h} ~. \end{displaymath} Namely: \begin{displaymath} \theta^L\eval_g = T_g(L_{g^{-1}}): T_g G \to T_e G \cong \mathfrak{g} ~. \end{displaymath} \end{itemize}
%
The closedness of $\omega$ follows from the fact that it is both left and right invariant on the Lie group $G$. If the group is semisimple then $\mathfrak{g}=[\mathfrak{g},\mathfrak{g}]$; therefore the pairing $\pairing$ and $\omega$ are non-degenerate (see \cite[Ex. 3.6]{Ryvkin2018} for the complete argument).
\end{example} \begin{example}[Cosymplectic manifolds \cite{CatIbort}] A \emph{cosymplectic} manifold is a triple $(M,\Phi,\eta)$ consisting of an orientable $(2n+1)$-dimensional manifold $M$ together with closed forms $\Phi\in\Omega^1(M)$ and $\eta\in \Omega^2(M)$ such that $\Phi\wedge\eta^{n}$ is a volume form. The form $\omega= \Phi\wedge\eta^n$ is a closed non-degenerate $(2n+1)$-form, hence $2n$-plectic over $M$. Non-degeneracy can be ascertained easily by writing $(M,\Phi,\eta)$ in Darboux-like coordinates (see \cite{Cappelletti2013} for a complete survey on the subject). \end{example} \begin{example}[(Subclasses) of K\"ahler manifolds] Any \emph{quaternionic almost K\"ahler manifold} is $3$-plectic with multisymplectic form given by the \emph{fundamental $4$-form} $\Omega$ \cite{Swann1991}. Recall that \emph{Hyper-K\"ahler} manifolds are a subcase of the latter. In \cite{Madsen2012} a construction is proposed that recovers all homogeneous strictly nearly K\"ahler $6$-manifolds as $2$-plectic manifolds. \end{example} \begin{example}[Sum and products of multisymplectic manifolds {\cite{Ryvkin2018}}]\label{Ex:SumProductMultiSymp} Given a finite number of multisymplectic manifolds $(M_i;\omega_i)$, one can canonically endow the product manifold $\times_i M_i$ with a multisymplectic structure given by the wedge product \begin{displaymath} \omega_{prod} = \bigwedge_{i} \pi_i^\ast \omega_i ~, \end{displaymath} where $\pi_j : \times_i M_i \to M_j$ is the standard projection.
%
If all the factor manifolds are $k$-plectic, one can also define a $k$-plectic ``sum'' structure given by \begin{displaymath} \omega_{sum}=\sum_{i} \pi_i^\ast \omega_i ~. \end{displaymath} \end{example} \begin{example}[$G_2$-structures {\cite[ex 3.7]{Ryvkin2018}}] Given a seven-dimensional manifold $M$, a closed $G_2$-structure on $M$ consists of a closed differential 3-form $\omega$ admitting, for all $p\in M$, a basis $(e^1,...,e^7)$ of $T^*_pM$ such that \begin{displaymath} \omega_p= e^{123}+e^{145}-e^{167}+e^{246}+e^{257}+e^{347}-e^{356} ~, \end{displaymath} where $e^{ijk}$ denotes $e^i\wedge e^j\wedge e^k$. Since the summands defining $\omega_p$ are linearly independent, $\omega^\flat$ has a $7$-dimensional range, hence $\omega$ is a $2$-plectic structure. \end{example} The following is an example involving an infinite-dimensional smooth manifold: \begin{example}[Affine connections on a Principal bundle {\cite[\S 10]{Callies2016}}] Let $G$ be a Lie group with a finite-dimensional Lie algebra $\mathfrak{g}$ and consider a smooth $G$-principal bundle $P=(G\hookrightarrow P \twoheadrightarrow M)$ over an $(n+1)$-dimensional compact smooth manifold $M$. The action $G\action P$ on the principal bundle is considered on the right (\cf subsection \ref{subsec:prequantizationReminder}). \\ Denote by $\mathcal{A}$ the space of all \emph{principal $G$-connections} on $P$. Recall that $\mathcal{A}$ is an infinite-dimensional affine space modelled on the vector space $\vec{\mathcal{A}}= (\Omega^1_{hor}(P)\otimes \mathfrak{g})^G$, where superscript $G$ denotes invariance with respect to the aforementioned right action. Therefore, $\mathcal{A}$ is smooth and ``flat'' in the sense that $T_a \mathcal{A}\cong \vec{\mathcal{A}}$ for any affine connection $a\in \mathcal{A}$ (see \cite{Kobayashi1996}). \\ Any $G$-invariant polynomial $q \in S^{n+1}(\mathfrak{g}^*)$ defines a constant $(n+1)$-differential form, hence a pre-$n$-plectic form, on the space of affine connections $\mathcal{A}$.
If the chosen polynomial is moreover non-degenerate, \ie the mapping \begin{displaymath} \lambdamorphism{\mathfrak{g}} {S^n(\mathfrak{g}^*)} {x} {\iota_x q} \end{displaymath} is injective, then the corresponding canonical pre-$n$-plectic structure on $\mathcal{A}$ is non-degenerate \cite[Prop. 10.3]{Callies2016}. \end{example} \subsection{Special classes of differential forms and vector fields} Exactly as it happens in symplectic geometry, fixing a (pre-)$n$-plectic structure $\omega$ on $M$ provides a criterion for identifying special classes of vector fields and differential forms. \begin{definition}[Multisymplectic and Hamiltonian vector fields]\label{def:Hamiltonianvfields} Given a pre-$n$-plectic manifold $(M,\omega)$, a vector field $v\in \mathfrak{X}(M)$ is said to be \emph{(multi)-symplectic} (resp. \emph{Hamiltonian}) if $\iota_v \omega$ is a closed (resp. exact) form. We denote the corresponding vector spaces as \begin{displaymath} \begin{aligned} \mathfrak{X}_{\msy}(M,\omega) =&~ \lbrace v \in \mathfrak{X}(M) ~|~ \iota_v(\omega) \in Z^n(M) \rbrace \\ \mathfrak{X}_{\ham}(M,\omega) =&~ \lbrace v \in \mathfrak{X}(M) ~|~ \iota_v(\omega) \in B^n(M) \rbrace ~. \end{aligned} \end{displaymath} \end{definition} It follows from Cartan's formula that the flow of a multisymplectic vector field preserves the pre-$n$-plectic form (strictly, in the sense of definition \ref{Def:conservedQuantities}). Hence the elements of $\mathfrak{X}_{\msy}(M,\omega)$ are the infinitesimal counterparts of diffeomorphisms preserving the multisymplectic structure. \begin{lemma} Let $(M,\omega)$ be a pre-$n$-plectic manifold and consider the vector spaces introduced in definition \ref{def:Hamiltonianvfields}. One has the following inclusions of Lie algebras \begin{displaymath} \begin{tikzcd} \mathfrak{X}_{\ham}(M,\omega) \ar[r,hook] & \mathfrak{X}_{\msy}(M,\omega) \ar[r,hook] & (\mathfrak{X}(M),[\cdot,\cdot]) \end{tikzcd} ~. \end{displaymath} In the case that $H^{n}_{dR}(M)=0$ the first inclusion is an isomorphism $\mathfrak{X}_{\msy}(M)\cong\mathfrak{X}_{\ham}(M)$. \\ Furthermore, $\mathfrak{X}_{\ham}(M)$ is a Lie algebra ideal of $\mathfrak{X}_{\msy}(M)$, \ie $[\mathfrak{X}_{\ham}(M),\mathfrak{X}_{\msy}(M)]\subset \mathfrak{X}_{\ham}(M)$. \end{lemma} Accordingly, we can select a special class of differential forms: \begin{definition}[Hamiltonian $(n-1)$-forms]\label{Def:Hamiltonianform} Given a pre-$n$-plectic manifold $(M,\omega)$, a differential form $\alpha \in \Omega^{n-1}(M)$ is called \emph{Hamiltonian} if it is a primitive of $-\iota_v \omega \in B^{n}(M)$ for a certain Hamiltonian vector field $v$, called the \emph{Hamiltonian vector field of $\alpha$}. \\ Hamiltonian $(n-1)$-forms constitute a subspace of $\Omega^{n-1}(M)$ denoted as \begin{displaymath} \Omega^{n{-}1}_{Ham}(M,\omega)=\left\{\alpha\in \Omega^{n-1}(M)~|~\exists\, \vHam_\alpha\in\mathfrak X(M) ~:~ \d\alpha=-\iota_{\vHam_\alpha}\omega \right\} ~. \end{displaymath} The equation $\d\alpha +\iota_{\vHam_\alpha}\omega=0$ is also known as the \emph{Hamilton-DeDonder-Weyl (HDDW) equation} (see \cite{Ryvkin2018}). \end{definition} \begin{remark}[Sign conventions] The appearance of the minus sign in the HDDW equation is purely conventional. Here we are adopting a convention mostly found in the multisymplectic geometry literature (\eg \cite{Callies2016,Ryvkin2018}). The opposite choice can be found for instance in \cite{CannasdaSilva2001}.
\end{remark} \begin{remark}\label{Remark:ClosedformsTrivialHamiltonian} Observe that, except for the case $n$ equal to $1$ or $\dim(M)-1$ where $\omega^\flat$ is bijective, not all the $(n-1)$-forms are Hamiltonian. \\ Note also that, if $\omega$ is non-degenerate, \ie $n$-plectic rather than pre-$n$-plectic, and there exists a Hamiltonian vector field $v_\alpha$ pertaining to $\alpha$, then it is unique. Hence there is a well-defined surjective map \begin{displaymath} \epimorphism{\pi_{\ham}} {\Omega^{n-1}_{\ham}(M,\omega)} {\mathfrak{X}_{\ham}(M,\omega)} {\alpha} {\vHam_{\alpha}} ~. \end{displaymath}
%
Finally, observe that closed $(n-1)$-forms are clearly Hamiltonian with trivial Hamiltonian vector field, hence there is a short exact sequence of vector spaces: \begin{displaymath} \begin{tikzcd}[column sep = small] 0 \ar[r]& \Omega^{n-1}_{cl}(M) \ar[r,hook] &[1em] \Omega_{\ham}^{n-1}(M,\omega) \ar[r,two heads,"\pi_{\ham}"] &[1.5em] \mathfrak{X}_{\ham}(M,\omega) \ar[r] & 0 \end{tikzcd} ~. \end{displaymath} \end{remark} \begin{remark}[Hamiltonian pairs and Hamilton-DeDonder-Weyl equation]\label{Rem:HamiltonianPairs} Slightly more generally (see for instance \cite{Delgado2018b,Ryvkin2018}), given a pre-$n$-plectic manifold $(M,\omega)$, one could define a \emph{Hamiltonian pair} as a pair of a multivector field and a differential form satisfying the so-called \emph{Hamilton-DeDonder-Weyl} equation pertaining to $\omega$. Namely, one can define \begin{displaymath} Ham^{n-k}(M) = \left\lbrace \left. \pair{\vHam_\alpha}{\alpha}\in \mathfrak{X}^k(M)\oplus\Omega^{n-k}(M) ~\right\vert~ \iota_{\vHam_\alpha}\omega + \d \alpha = 0 \right\rbrace ~, \end{displaymath} for any $ 0\leq k\leq n-1$. Note that one can make sense of the space of Hamiltonian $(n-k)$-forms as a pullback in the category of ordinary vector spaces \begin{displaymath} \begin{tikzcd} Ham^{n-k}(M) \arrow[dr, phantom, "\scalebox{1.5}{$\lrcorner$}" , very near start, color=black] \ar[r,dashed]\ar[d,dashed] & \Omega^{n-k}(M) \ar[d,"\d"]\\ \mathfrak{X}^k(M) \ar[r,"\iota(\blank)\omega"] & \Omega^{n-k+1}(M) \end{tikzcd} ~. \end{displaymath} In the non-degenerate case, one has that $Ham^{n-1}(M)\cong \Omega_{\ham}^{n-1}(M)$. \end{remark} \begin{notation}[On the ubiquitous role of the multicontraction operator]\label{Rem:SignedMultiContraction} Once one fixes a ``preferred'' differential form $\omega$ on a given smooth manifold $M$, and since the former is a $C^\infty(M)$-multilinear map on vector fields, it is natural to study the properties of the pair $(M,\omega)$ by probing the chosen form $\omega$ on arbitrary vector fields. Hence, the contraction (insertion) operation against the multisymplectic form will be a key ingredient in most constructions involved in multisymplectic geometry. \\ In this spirit, let us anticipate some notation involving the multicontraction that will play a recurring role in what follows, although it may appear trivial and redundant at this stage. \\ \smallskip Given a differential form $\omega \in \Omega^n(M)$ we define the \emph{signed $k$-multicontraction of $\omega$} as the following linear operator \begin{equation}\label{Eq:signedMulticontraction} \morphism{\iota_{\mathfrak{X}}^k \omega} {\mathfrak{X}^k(M)} {\Omega^{n-k}(M)} {x_1\wedge\dots\wedge x_k} {\varsigma(k)\iota(x_1\wedge\dots\wedge x_k) \omega} ~.
\end{equation} Observe that this maps can be arranged as a graded morphism between two chain complexes \begin{equation}\label{Eq:multicontractionaschainmap} \hspace{-.025\textwidth} \begin{tikzcd}[column sep = small] 0 & \mathfrak{X}(M) \ar[l,"\partial"']\ar[d,"\iota_{\mathfrak{X}}^1\omega"]& \mathfrak{X}^2(M) \ar[l,"\partial"']\ar[d,"\iota_{\mathfrak{X}}^2\omega"] & \cdots \ar[l,"\partial"']& \mathfrak{X}^k(M) \ar[l,"\partial"']\ar[d,"\iota_{\mathfrak{X}}^k\omega"]& \cdots \ar[l,"\partial"']& \mathfrak{X}^{n+1}(M) \ar[l,"\partial"']\ar[d,"\iota_{\mathfrak{X}}^{n+1}\omega"]& \cdots \ar[l,"\partial"'] \\ \cdots & \Omega^n(M) \ar[l,"\d"] & \Omega^{n-1}(M) \ar[l,"\d"] & \cdots \ar[l,"\d"]& \Omega^{n+1-k}(M) \ar[l,"\d"]& \cdots \ar[l,"\d"]& \Omega^{0}(M) \ar[l]& 0 \ar[l,"\d"] \end{tikzcd} \end{equation} % namely, following \cite[\S 3.2]{Delgado2018b}, one can observe that $\iota^k_{\mathfrak{X}}\omega$ can be read as the $k$-th component of a homogeneous map of degree $n$ given by the following graded vector space morphism \begin{displaymath} \iota_{\mathfrak{X}}\omega ~: \setminus \mathfrak{X}(M) \to \Omega(M)[n+1] ~, \end{displaymath} where $\setminus \mathfrak{X}(M)$ denotes the graded vector space $\mathfrak{X}(M)$ taken with reverse ordering(\ie $(\setminus \mathfrak{X}(M))^{-k}=\mathfrak{X}(M)^k$, see equation \eqref{eq:degreeReversingFunctor}). This graded map yields a chain-complexes morphism if certain other conditions are met (see lemma \ref{lem:signedmulticontractionLinfinitymorphism}). \\ \smallskip Collectively, this can be abstracted as the image of the following operator \begin{displaymath} \morphism{\iota_\mathfrak{X}} {\Omega(M)} {\underline{\Hom}(\setminus \mathfrak{X}(M),\Omega(M))} {\omega} {\left\lbrace \iota^{-k}_\mathfrak{X}\omega: \mathfrak{X}(M)^{-k} \to \Omega^{|\omega|+k}(M) \right\rbrace_{k\leq -1}} \end{displaymath} % Starting from section \ref{Sec:HCMM}, we will mainly focus on certain graded linear subspaces of the space of multivectors fields. More generally, for any given linear map $\vAct:\mathfrak{g}\to \mathfrak{X}^1(M)$ we will denote the signed multi-contraction along the image of $\vAct$ as the precomposition \begin{equation}\label{eq:restrictedmulticontraction} \iota_{\mathfrak{g}}^k \omega := \left(\iota_{\mathfrak{X}(M)}^k \omega \right) \circ \vAct^{\otimes k} ~. \end{equation} In chapter \ref{Chap:MarcoPaper} we will make frequent use of a (skew)-symmetrized version of the ordinary contraction operator defined, for any $n\in \mathbb{Z}$, as \begin{displaymath} \morphism{\pairing_\pm} {\left(\mathfrak{X}(M)\oplus \Omega(M){[n]}\right)^{\otimes 2}} {\Omega(M){[n-1]}} {\pair{x_1}{\alpha_1}\otimes \pair{x_2}{\alpha_2}} {\frac{1}{2}\left(\iota_{x_1}\alpha_2 \pm \iota_{x_2}\alpha_1\right)} ~. \end{displaymath} \end{notation} A first justification for introducing the notation given in remark \ref{Rem:SignedMultiContraction} comes from the next lemma: \begin{lemma}[{KKS $L_\infty$-cocycle \cite[Prop. 
3.8]{Fiorenza2014a}}]\label{lem:signedmulticontractionLinfinitymorphism} Call $\mathcal{B}_{(n)}$ the following shifted truncation of the de Rham complex of the pre-$n$-plectic manifold $(M,\omega)$: \begin{displaymath} \mathcal{B}_{(n)} := \trunc_n(\Omega(M))[n]\oplus \d \Omega^{n-1}(M) ~; \end{displaymath} the latter is given component-wise by the following diagram in the category of vector spaces: \begin{displaymath} \begin{tikzcd}[column sep = small, row sep = tiny] 0 \ar[r] & \mathcal{B}_{(n)}^{-n}\ar[d,equal] \ar[r,"\d"] &\dots \ar[r,"\d"] & \mathcal{B}_{(n)}^{-k} \ar[r,"\d"] \ar[d,equal]& \dots & \mathcal{B}_{(n)}^{-1}\ar[d,equal] \ar[r,"\d"] & \mathcal{B}_{(n)}^{0}\ar[d,equal] \ar[r] & 0 \\ & C^\infty(M) && \Omega^{n-k}(M) & & \Omega^{n-1}(M) & \d \Omega^{n-1}(M) \end{tikzcd} ~. \end{displaymath} The $k$-multilinear maps $\iota_{\mathfrak{X}_{\ham}}^k\omega$ defined in equation \eqref{Eq:signedMulticontraction}, with $1\leq k \leq n+1$, taken together with the restriction to Hamiltonian vector fields (see eq. \eqref{eq:restrictedmulticontraction}), define the components $(1\leq k \leq n+1)$ of a $L_\infty$-morphism \begin{displaymath} (\iota_{\mathfrak{X}_{\ham}}\omega): \mathfrak{X}^1_{\ham}(M) \to \mathcal{B}_{(n)} \end{displaymath} between the Lie algebra of Hamiltonian vector fields and the Abelian $L_\infty$-algebra $\mathcal{B}_{(n)}$. \end{lemma} \begin{proof} Observe first that the diagram \eqref{Eq:multicontractionaschainmap} can be read as a bona fide chain map when restricted to multisymplectic vector fields since lemma \ref{lemma:multicartan} guarantees, for any $p=v_1\wedge\dots \wedge v_k \in \Lambda^k \mathfrak{X}(M)$, that \begin{displaymath} \left( \d \circ (\iota^k_{\mathfrak{X}}\omega) - (\iota_{\mathfrak{X}}^{k-1}\omega ) \circ \partial \right)(p) = (-)^k\varsigma(k) \left(\iota_p \cancel{\d \omega} +\sum_{i=1}^k \iota_{x_k}\dots \widehat{\iota_{x_i}}\dots \iota_{x_1} \mathcal{L}_{x_i}\omega \right) \end{displaymath} and the right-hand side vanishes for $p\in\Lambda^k \mathfrak{X}_{\msy}(M,\omega)$. Hence $\iota_{\mathfrak{X}}\omega$ defines a chain map $\iota_{\mathfrak{X}}\omega ~: \setminus \mathfrak{X}(M) \to \Omega(M)[n+1]$ from the order reversed complex of multisymplectic multivector fields to the $[n+1]$ shift of the de Rham complex (which puts $(n+1)$-forms in degree zero). \\ Restricting furthermore to $\mathfrak{X}_{\ham}(M,\omega)\subset \mathfrak{X}_{\msy}(M,\omega)$ this can be refined to a chain map \begin{displaymath} \iota_{\mathfrak{X}_{\ham}} : \setminus \mathfrak{X}_{\ham}(M,\omega) \to \mathcal{B}_{(n)}[1] ~. \end{displaymath} An overall shift of the previous map yields the following chain complex morphism \begin{displaymath} \iota_{\mathfrak{X}_{\ham}}[1] : \left(S^{\geq 1}\left(\mathfrak{X}^1_{\ham}(M,\omega)[1]\right)\right)[-1] \to \mathcal{B}_{(n)} \end{displaymath} giving, according to remark \ref{Rem:LftyMorphasChainMap}, the sought $L_\infty$-morphism. \end{proof} \begin{remark} This construction has been proposed in \cite{Fiorenza2014a} as a $n$-plectic analogue of the Kirillov-Konstant-Souriau $2$-cocycle of symplectic mechanics. In \cite[Rem 6.2]{Callies2016} the chain complex $\mathcal{B}_{(n)}$ is denoted as $B^n A$ to remind of an analogue construction involved when studying classifying spaces on smooth bundles. \end{remark} \paragraph{Conserved quantities} We will often adopt the following nomenclature borrowed from \cite{Ryvkin2016} providing different nuances of conservation along a given vector field. 
\begin{definition}[Conserved quantities {\cite{Ryvkin2016}}]\label{Def:conservedQuantities}
Let $v\in \mathfrak{X}^1(M)$ be a vector field on the smooth manifold $M$. A differential form $\alpha\in \Omega^k(M)$ is called:
\begin{itemize}
\item \emph{locally conserved} along $v$ if $\mathcal{L}_v \alpha$ is a closed form;
\item \emph{globally conserved} along $v$ if $\mathcal{L}_v \alpha$ is an exact form;
\item \emph{strictly conserved} along $v$ if $\mathcal{L}_v \alpha=0$.
\end{itemize}
We denote by $C_{loc}(v),C(v),C_{str}(v)$ the graded vector spaces of locally, globally and strictly conserved forms respectively.
\end{definition}
This definition can be easily adapted to group actions by requiring conservation along all the fundamental vector fields. The following inclusion relations between conserved quantities follow from the Cartan formula:
\begin{lemma}[{\cite[\S 1.3]{Ryvkin2016}}]
Given a vector field $v\in \mathfrak{X}(M)$, the following diagram holds in the category of graded vector spaces
\begin{displaymath}
\begin{tikzcd}
\d C_{loc}(v) \ar[r,hook] & C_{str}(v) \ar[r,hook] & C(v) \ar[r,hook] & C_{loc}(v) \ar[r,hook] & \Omega(M) \\
& & \Omega_{cl}(M) \ar[u,hook]
\end{tikzcd}
~.
\end{displaymath}
Furthermore, one has that $C_{str}(v)$ is a graded subalgebra of $(\Omega(M),\wedge)$ and $C(v),C_{loc}(v)$ are graded modules over $(C_{str}(v) \cap \Omega_{cl}(M))$.
\end{lemma}
In particular, any Hamiltonian $(n-1)$-form is globally conserved along its corresponding Hamiltonian vector field:
\begin{equation}
\mathcal{L}_{v_H} H = \d \iota_{v_H} H + \cancel{\iota_{v_H}\d H} \qquad \forall H \in \Omega^{n-1}_{\ham}(M,\omega)
\end{equation}
(or strictly conserved in the $1$-plectic case).
\section{Higher observables}\label{Section:RogersObservables}
The geometric theory of smooth manifolds has an algebraic counterpart given by the study of the unital commutative algebra of smooth functions on a given manifold, also known as \emph{smooth observables} \cite{Nestruev2010}. One can informally think of this relationship as a suitable extension, from ordinary topology to differential geometry, of the Gelfand duality \cite{nlab:gelfand_duality} establishing an equivalence between the category of compact topological spaces and the category of commutative $C^\ast$-algebras.
When the manifold under consideration is in particular symplectic, its dual (algebraic) object is given by the corresponding \emph{Poisson algebra}. This is the unital commutative algebra of smooth functions on $M$ together with a Lie bracket $\{\cdot,\cdot\}$, called the Poisson bracket, that is compatible with the pointwise product in the sense of being a derivation in each entry, \ie
\begin{displaymath}
\lbrace f, g\cdot h \rbrace = g\cdot\lbrace f, h \rbrace + \lbrace f, g \rbrace\cdot h \qquad \forall f,g,h \in C^{\infty}(M) ~.
\end{displaymath}
Explicitly, the Poisson bracket corresponding to $(M,\omega)$ takes the following expression:
\begin{displaymath}
\morphism{\lbrace\cdot,\cdot\rbrace}
{C^{\infty}(M)\otimes C^{\infty}(M)}
{C^{\infty}(M)}
{(\alpha,\beta)}
{\iota_{\vHam_\beta}\iota_{\vHam_\alpha}\omega},
\end{displaymath}
where $\vHam_\alpha$ denotes the Hamiltonian vector field pertaining to $\alpha$.
Note that there is a neat interpretation of these concepts when one reads a symplectic manifold in the geometric mechanics setting, \ie as the space of states of a mechanical system with finitely many degrees of freedom.
Namely, one can see the Poisson algebra of $M$ as the set of physical observables of the system, also known as "classical observables". The latter are to be interpreted as measurable quantities yielding a single value (a real number) on every state.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.22\textwidth]{\datapath/regolo.jpg}
\end{center}
\caption{A (3-dimensional) rigid ruler.}
\label{Fig:Regolo}
\end{figure}
An example is given by reading off the coordinates of the position of a point-particle by counting the ticks on the axes of a rigid ruler (see figure \ref{Fig:Regolo}) placed in physical space.
If the system is in particular conservative, one can single out a particular observable $H$, called "the Hamiltonian (function)", to be interpreted as the "energy" of the system. The flow along $v_H$ yields the time evolution of the system, and taking the Poisson bracket of the Hamiltonian with a given observable encodes how the value of that measurable quantity changes along the motion of the system.
\medskip
Let us now discuss a possible candidate for the corresponding algebraic counterpart of a generic multisymplectic manifold.
When trying to generalize the construction of the Poisson bracket from the observables $C^{\infty}(M)$ of a symplectic manifold to an arbitrary $n$-plectic manifold, one could follow two paths:
\begin{itemize}
\item passing to multivector fields and thus considering Hamiltonian pairs composed of a $k$-vector field and an $(n-k)$-form satisfying the HDDW equation (\eg \cite[\S 4]{Herman2017});
\item focusing on Hamiltonian $(n-1)$-forms, as suggested by Baez and Rogers in \cite{Baez2010,Rogers2010}.
\end{itemize}
We will adhere to the second path. In this case the definition of the Poisson bracket can be translated verbatim to $\Omega_{\ham}^{n-1}(M,\omega)$ as follows:
\begin{definition}[Binary multi-bracket for Hamiltonian $(n-1)$-forms]\label{Def:BinaryBracketofHamiltonianForms}
Let $(M,\omega)$ be an $n$-plectic manifold. We call \emph{binary multibracket} of Hamiltonian $(n-1)$-forms the following bilinear operator
\begin{displaymath}
\morphism{\lbrace\cdot,\cdot\rbrace}
{\Omega^{n-1}_{\ham}(M,\omega)\otimes\Omega^{n-1}_{\ham}(M,\omega)}
{\Omega^{n-1}_{\ham}(M,\omega)}
{(\alpha,\beta)}
{\iota_{\vHam_\beta}\iota_{\vHam_\alpha}\omega}
\end{displaymath}
Note that $\lbrace\cdot,\cdot \rbrace = (\iota^2_{\mathfrak{X}}\omega) \circ \pi_{\ham}$, \ie it is given by precomposing the signed multicontraction map of equation \eqref{Eq:signedMulticontraction} with $\pi_{\ham}:\Omega^{n-1}_{\ham}(M,\omega) \twoheadrightarrow \mathfrak{X}_{\ham}(M,\omega)$, the standard projection mapping Hamiltonian forms to their corresponding Hamiltonian vector fields.
\end{definition}
%
When $n=1$, one clearly recovers the Lie algebra structure on the "classical observables", \ie the smooth functions on $M$. When $n\geq 2$, the binary bracket shares some properties with the Poisson bracket but, crucially, it fails to be a Lie bracket and a derivation\footnote{In particular, we do not have a canonical associative algebra structure on $\Omega_{\ham}^{n-1}(M,\omega)$. Clearly Hamiltonian forms are not closed under the wedge product.}.
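\begin{example}[A toy computation of the binary bracket]
As a concrete illustration (a toy example, using the sign conventions fixed above), consider the $2$-plectic manifold $(\R^3,\omega=\d x\wedge\d y\wedge\d z)$. The $1$-forms $\alpha=z\,\d x$ and $\beta=x\,\d y$ are Hamiltonian with $\vHam_\alpha=-\partial_y$ and $\vHam_\beta=-\partial_z$, since $\d\alpha=-\iota_{\vHam_\alpha}\omega$ and $\d\beta=-\iota_{\vHam_\beta}\omega$. Their binary bracket is
\begin{displaymath}
\lbrace\alpha,\beta\rbrace=\iota_{\vHam_\beta}\iota_{\vHam_\alpha}\omega=\iota_{-\partial_z}(\d x\wedge \d z)=\d x ~,
\end{displaymath}
an exact (hence Hamiltonian) $1$-form with trivial Hamiltonian vector field, consistently with $[\vHam_\alpha,\vHam_\beta]=0$.
\end{example}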
\begin{lemma}\label{Lem:BinBrackofHamFormsisHamiltonian}
Let $(M,\omega)$ be an $n$-plectic manifold. The binary bracket given in definition \ref{Def:BinaryBracketofHamiltonianForms} is a well-defined skew-symmetric bilinear map valued in $\Omega^{n-1}_{\ham}(M,\omega)$ satisfying the following properties:
\begin{itemize}
\item For any $\alpha,\beta \in \Omega^{n-1}_{\ham}(M,\omega)$, one has
$$\d ~\lbrace \alpha, \beta\rbrace = -\iota([\vHam_\alpha,\vHam_\beta])\omega~,$$
\ie the following diagram commutes
\begin{displaymath}
\begin{tikzcd}
\big(\Omega^{n-1}_{\ham}(M,\omega)\big)^{\otimes 2} \ar[r,"{\lbrace \cdot , \cdot \rbrace}"] \ar[d,"\pi_{\ham}^{\otimes 2}"'] & \Omega^{n-1}_{\ham}(M,\omega) \ar[d,"\pi_{\ham}"] \\
\big( \mathfrak{X}_{\ham}(M,\omega)\big)^{\otimes 2} \ar[r,"{[\cdot,\cdot]}"'] & \mathfrak{X}_{\ham}(M,\omega)
\end{tikzcd}
\end{displaymath}
\item It satisfies the Jacobi identity up to an exact term, \ie, for any $\alpha_1,\alpha_2,\alpha_3 \in \Omega^{n-1}_{\ham}(M,\omega)$, one has
\begin{displaymath}
\lbrace\lbrace \alpha_1,\alpha_2 \rbrace,\alpha_3 \rbrace + \cyc = \d~\iota(\vHam_{\alpha_1}\wedge \vHam_{\alpha_2} \wedge \vHam_{\alpha_3}) \omega~.
\end{displaymath}
\end{itemize}
\end{lemma}
\begin{proof}
Both claims follow by specializing corollary \ref{lemma:multicartan} to the cases $m=2,3$ and noting that
\begin{displaymath}
\begin{split}
\lbrace\lbrace \alpha_1,\alpha_2 \rbrace,\alpha_3 \rbrace + \cyc =&~ \left(\iota_{\vHam_3}\iota_{[\vHam_1,\vHam_2]} - \iota_{\vHam_2}\iota_{[\vHam_1,\vHam_3]} + \iota_{\vHam_1}\iota_{[\vHam_2,\vHam_3]}\right)\omega =\\
=&~ \iota(-\partial \vHam_1\wedge\vHam_2\wedge\vHam_3)\omega ~.
\end{split}
\end{displaymath}
\end{proof}
\begin{remark}[Lie algebra of Hamiltonian forms modulo boundaries]\label{Rem:LieAlgebraHamModulo}
Modding out exact terms, \ie considering the vector space
\begin{displaymath}
\mathfrak{g}= \dfrac{\Omega^{n-1}_{\ham}(M,\omega)}{\d \Omega^{n-2}(M)}~,
\end{displaymath}
which is isomorphic to $\mathfrak{X}_{\ham}(M,\omega)$ when $\omega$ is non-degenerate, the binary bracket $\lbrace\cdot,\cdot\rbrace$ induces on $\mathfrak{g}$ a well-defined Lie bracket, since closed forms are Hamiltonian with trivial Hamiltonian vector fields (compare with the standard Lie algebra structure considered in hydrodynamics, see section \ref{Thm:HydroBracket}).
\end{remark}
The upshot of lemma \ref{Lem:BinBrackofHamFormsisHamiltonian} is that $(\Omega_{\ham}^{n-1}(M,\omega),\lbrace\cdot,\cdot\rbrace)$ fails to be a Lie algebra (except for the case $n=1$) since $\lbrace\cdot,\cdot \rbrace$ does not satisfy the Jacobi identity in general. However, the failure is somewhat mild: it is controlled by the exterior derivative of the $(n-2)$-forms produced by the ternary operator $(\iota^3_{\mathfrak{X}}\omega) \circ \pi_{\ham}$.
In the earliest developments of multisymplectic geometry, which were basically motivated by the study of mechanical systems with a continuous set of degrees of freedom (classical field theories), it was suggested to cure this problem by dividing out the inconsistencies, \ie by considering everything up to exact terms ("total divergences" in the physics lingo). See for example \cite{Carinena1991b} or section \ref{Sec:IdroPoisson} for the hydrodynamics case.
\\
With the advent of homological methods in mathematical physics (Kontsevich, Stasheff, Vinogradov) this point of view has become obsolete.
Indeed, before modding out exact terms, one is naturally led to check whether the ternary bracket $(\iota^3_{\mathfrak{X}}\omega) \circ \pi_{\ham}$, which controls the failure of the Jacobi identity for $\lbrace\cdot,\cdot\rbrace$, satisfies a higher analogue of the Jacobi equation. More specifically, one should look for a suitable completion of $\lbrace\cdot,\cdot\rbrace$ and $(\iota^3_{\mathfrak{X}}\omega) \circ \pi_{\ham}$ to a family of multibrackets defining an $L_\infty$-algebra structure. Such a completion is indeed possible; the following explicit construction is due to Rogers:
\begin{definition}[$L_\infty$-algebra of observables \emph{(\cite[Thm. 5.2]{Rogers2010}, \cf also \cite{Barnich1998})}]\label{Def:RogersAlgebra}
Given an $n$-plectic manifold $(M,\omega)$, we call \emph{$L_{\infty}$-algebra of observables} (higher observables) associated to $(M,\omega)$ the $L_\infty$-algebra $L_{\infty}(M,\omega)=(L,\{[\cdot,\cdots,\cdot]_k \}_{k\geq 1})$.
\\
The underlying graded vector space is given by
\begin{equation}\label{eq:Lspace}
L^i=\begin{cases}
\Omega_{\ham}^{n-1}(M,\omega) & \quad~\text{if } i=0 \\
\Omega^{n-1+i}(M) & \quad~\text{if } 1-n \leq i\leq -1 \\
0 & \quad ~\text{otherwise} ~.
\end{cases}
\end{equation}
The $n+1$ non-trivial multibrackets $\lbrace [\cdot,\cdots,\cdot]_k : L^{\wedge k} \to L ~|~ 1\leq k \leq n+1\rbrace$ are defined, for any given $\alpha_i\in L$, as
\begin{displaymath}
[\alpha]_1 =
\begin{cases}
0 & \quad\text{if~} |\alpha| = 0 \\
\d \alpha & \quad \text{if~} |\alpha| \leq -1
\end{cases}
\end{displaymath}
and, for $ 2 \leq k \leq n+1$, as
\begin{displaymath}
[\alpha_1,\dots,\alpha_k]_k =
\begin{cases}
\varsigma(k) \iota(\vHam_{\alpha_1}\wedge\dots\wedge\vHam_{\alpha_k})~\omega & \quad\text{if~} |\alpha_i|=0 \text{ for } 1\leq i \leq k \\
0 & \quad\text{otherwise}
\end{cases} ~.
\end{displaymath}
In the above equation, $\vHam_{\alpha_i}=\pi_{\ham}(\alpha_i)$ denotes the Hamiltonian vector field associated to $\alpha_i\in \Omega^{n-1}_{\ham}(M,\omega)$ and $\varsigma(k) := - (-1)^{\frac{k(k+1)}{2}}$ is the total Koszul sign (see definition \ref{Def:SigmaSign}).
\end{definition}
\begin{remark}
We emphasize that here, as in \cite{Callies2016,Ryvkin2016}, we adopt an index convention different from the one originally employed in \cite{Rogers2010}. Namely, we adopt the "cohomological convention", with the non-trivial components concentrated in non-positive degrees.
\end{remark} \begin{remark}[The chain complex underlying $L_\infty(M,\omega)$]\label{Rem:RogersChainComplex} The $L_\infty$-algebra of observables consists of a cochain complex $(L,d)$ \begin{displaymath} \begin{tikzcd}[column sep= small,row sep=small] 0 \ar[r] & L^{1-n}\ar[symbol=\coloneqq,d] \ar[r] & \cdots\ar[r] & L^{-k}\ar[symbol=\coloneqq,d]\ar[r] & \cdots\ar[r] & L^{-1}\ar[symbol=\coloneqq,d]\ar[r] & L^0\ar[symbol=\coloneqq,d]\ar[r] &[-2em] 0\\ % & \Omega^0(M)\ar["d",r] & \cdots\ar["d",r] & {\Omega^{n-1-k}(M)} \ar["d",r]&\cdots\ar["d",r] & \Omega^{n-2}(M)\ar["d",r] & \Omega^{n-1}_{\textrm{Ham}}(M,\omega)& \end{tikzcd}, \end{displaymath} % which is a truncation of the de-Rham complex with a degree shift putting (Hamiltonian) $(n-1)$-form in degree zero, \ie \begin{displaymath} L= \trunc_{n-1}(\Omega(M))[n-1]\oplus \Omega^{n-1}_{\ham}(M,\omega) ~, \end{displaymath} endowed with $n$ (skew-symmetric) multibrackets $(2 \leq k \leq n+1)$ obtained by the signed multicontraction \begin{equation}\label{Eq:RogersKBrackets} \begin{tikzcd}[column sep= small,row sep=0ex] [\cdot,\dots,\cdot]_k:= (\iota^k_{\mathfrak{X}}\omega) \circ \pi_{\ham} \colon& \Lambda^k\left(\Omega^{n-1}_{\textrm{Ham}}\right) \arrow[r]& \Omega^{n+1-k} \\ & \alpha_1\wedge\dots\wedge\alpha_k \ar[r, mapsto]& \varsigma(k)\iota_{\vHam_{\alpha_k}}\dots\iota_{\vHam_{\alpha_1}}\omega \end{tikzcd} ~. \end{equation} \end{remark} \begin{remark}[Reinterpretation in terms of $\mathcal{B}_{(n)}$] Observe that extending the chain complex underlying $L_{\infty}(M,\omega)$ to the right with the space of Hamiltonian vector fields, one gets a cochain sub-complex of the complex $\mathcal{B}_{(n)}$ introduced in proposition \ref{lem:signedmulticontractionLinfinitymorphism}. Namely, one has the inclusion $L[1]\oplus \mathfrak{X}_{\ham}(M)\hookrightarrow \mathcal{B}_{(n)}$ with components given by the vertical arrows in the following commutative diagram: \begin{displaymath} \begin{tikzcd}[column sep = small] C^{\infty}(M) \ar[r,"\d"] \ar[d,equal] & \dots \ar[r,"\d"] & \Omega^{n-2}(M) \ar[r,"\d"] \ar[d,equal] &[1.5em] \Omega^{n-1}_{\ham}(M,\omega) \ar[r,"\pi_{\ham}"] \ar[d,hook] &[1.5em] \mathfrak{X}_{\ham}(M,\omega) \ar[d,"-\iota_{\mathfrak{X}_{\ham}}^1\omega"] \ar[r] & 0 \\ C^{\infty}(M) \ar[r,"\d"] & \dots \ar[r,"\d"] & \Omega^{n-2}(M) \ar[r,"\d"] & \Omega^{n-1}(M) \ar[r,"\d"] & \d \Omega^{n-1}(M) \ar[r] & 0 \end{tikzcd} ~. \end{displaymath} The two rightmost squares express respectively the fact that all closed $(n-2)$-forms are Hamiltonian with trivial Hamiltonian vector field, see remark \ref{Rem:ClosedformsTrivialHamiltonian}, and the very definition of Hamiltonian forms, see remark \ref{Rem:HamiltonianPairs}. \end{remark} \begin{remark}[Precursors] It should be noted that an early precursor of the (Chris) Rogers' construction can be found in the work of Claude Roger \cite[Thm. 6.1]{Roger2012}. Briefly, he showed that for any compact, connected, orientable $(p+1)$-dimensional manifold $(p\geq 1)$ there is an associated $L_p$-algebra (see also \cite[Cor. 6.11]{Zambon2012} for a dual version). Roger's work was in the context of \emph{unimodular vector fields} (also known as \emph{divergence-free} or \emph{solenoidal} fields), a framework that is akin to the construction that we are going to study in chapter \ref{Chap:MauroPaper}. \end{remark} \begin{lemma}[Theorem 5.2 in \cite{Rogers2010}] The $L_\infty$-algebra of observables $L_\infty(M,\omega)$ is a grounded $L_n$-algebra (see \ref{Def:groundedLinfinity}). 
\end{lemma}
\begin{proof}
By its very definition, $L$ is concentrated in degrees $(1-n),\dots,0$ and $[\cdots]_k = (\iota^k_{\mathfrak{X}(M)}\omega) \circ \pi_{\ham}$ is non-zero only when evaluated on degree $0$ elements, \ie on $k$ Hamiltonian $(n-1)$-forms.
%
According to remark \ref{Rem:GroundedEasyAxioms}, one has only to prove the following equality
\begin{displaymath}
[\cdot]_1 \ca\, [\cdots]_{k+1} = [\cdots]_k \ca\, [\cdot,\cdot]_2 ~,
\end{displaymath}
where $\ca$ denotes the skew-symmetric \RN product (see equation \eqref{Eq:RNProducts-explicit}), for any $k\geq 1$. Note in particular that on the left-hand side the \RN product reads simply as the composition $\circ$. On given Hamiltonian forms $\alpha_1,\dots, \alpha_{k+1}$ this can be read as
\begin{displaymath}
\mathclap{
\begin{aligned}
\varsigma(k&+1) \d \iota(\vHam_{\alpha_1}\wedge\dots\wedge\vHam_{\alpha_{k+1}} ) \omega = \\
=&~ \varsigma(k)\sum_{i<j}(-1)^{i+j} \iota( \vHam_{[\alpha_i,\alpha_j]_2} \wedge \vHam_{\alpha_1} \wedge \dots \wedge \widehat{\vHam_{\alpha_i}} \wedge\dots\wedge \widehat{\vHam_{\alpha_j}} \wedge\dots\wedge \vHam_{\alpha_{k+1}} )\omega = \\
=&~ \varsigma(k) \iota(\partial \vHam_{\alpha_1}\wedge\dots\wedge\vHam_{\alpha_{k+1}}) \omega ~,
\end{aligned}
}
\end{displaymath}
where, in the first equality, lemma \ref{Lem:BinBrackofHamFormsisHamiltonian} has been used. One can easily read the previous equation as the multi-Cartan magic formula of Lemma \ref{lemma:multicartan}, restricted to the case where $\omega$ is a closed form and the vector fields involved are Hamiltonian.
\end{proof}
\begin{remark}[Homological and cohomological convention for $L_{\infty}(M,\omega)$]
We emphasize that in definition \ref{Def:RogersAlgebra} we presented the $L_\infty$-algebra of observables with the so-called \emph{cohomological convention}; namely $[\cdot]_1$ acts by raising the degree and the underlying cochain complex $L$ is concentrated in degrees $(1-n),\dots,0$.
\\
One can give an equivalent definition, in the \emph{homological convention}, of $L_\infty(M,\omega)$ as the chain complex $L_\bullet$
\begin{center}
\begin{tikzcd}[column sep= small,row sep=small]
0 \ar[r]& L_{n-1}\ar[symbol=\coloneqq,d] \ar[r]& \cdots\ar[r]&L_{k-2}\ar[symbol=\coloneqq,d]\ar[r]&\cdots\ar[r]& L_1\ar[symbol=\coloneqq,d]\ar[r]&L_0\ar[symbol=\coloneqq,d]\ar[r]&0\\
%
& \Omega^0\ar["d",r]&\cdots\ar["d",r]&{\Omega^{n+1-k}} \ar["d",r]&\cdots\ar["d",r]&\Omega^{n-2}\ar["d",r] &\Omega^{n-1}_{\textrm{Ham}}&
\end{tikzcd},
\end{center}
(which is a truncation of the de Rham complex with inverted grading) endowed with $n$ (skew-symmetric) multibrackets $(2 \leq k \leq n+1)$
%
\begin{equation}
\begin{tikzcd}[column sep= small,row sep=0ex]
[\cdot,\dots,\cdot]_k \colon& \Lambda^k\left(\Omega^{n-1}_{\textrm{Ham}}\right) \arrow[r]& \Omega^{n+1-k} \\
& \sigma_1\wedge\dots\wedge\sigma_k \ar[r, mapsto]& \varsigma(k) ~\iota_{v_{\sigma_k}}\dots\iota_{v_{\sigma_1}}\omega
\end{tikzcd}
\end{equation}
%
Clearly the two definitions are equivalent: one can be retrieved from the other by applying the reverse-ordering functor $\setminus \blank$.
\end{remark}
\begin{remark}[Local existence of the $L_\infty$-algebra of observables]
The introduction of the algebra of observables, as given in definition \ref{Def:RogersAlgebra}, could appear rather artificial at first sight. However, there is a motivation, proposed in \cite[\S 5]{Rogers2010}, for the local existence of such an object.
%
Observe that if $(M,\omega)$ is contractible (\eg if $M$ is an open neighbourhood in a given bigger multisymplectic manifold) the cohomology groups of the chain complex $L$ (\cf remark \ref{Rem:RogersChainComplex}) read as follows
\begin{displaymath}
H^{i}(L) =
\begin{cases}
\dfrac{\Omega_{\ham}^{n-1}(M,\omega)}{\d \Omega^{n-2}(M)}:= \mathfrak{g} & \text{ if } i=0 \\[1em]
0 & \text{ if } 2-n \leq i \leq -1 \\[.5em]
\R & \text{ if } i = 1-n
\end{cases}
~.
\end{displaymath}
Hence, the complex (augmentation of $L$)
\begin{displaymath}
\begin{tikzcd}[column sep = small]
\widetilde{L}:\quad & 0 \ar[r] & \R \ar[r,hook] & C^{\infty}(M)\ar[r,"\d"] & \cdots \ar[r,"\d"] & \Omega^{n-1}_{\ham}(M,\omega)
\end{tikzcd}
\end{displaymath}
is a resolution (in the sense of \cite[\S 2.1]{Barnich1998}) of the Lie algebra $(\mathfrak{g},\lbrace \cdot,\cdot\rbrace)$ introduced in remark \ref{Rem:LieAlgebraHamModulo}. In particular all cohomology groups are zero except for $H^0(\widetilde{L})=\mathfrak{g}$.
\\
The key point is that, by a general homological algebra result \cite[Thm. 7]{Barnich1998}, one can naturally endow a resolution of a Lie algebra with an $L_\infty$-algebra structure, obtained by suitably iterating the complex coboundary operator and the Lie bracket. Hence, this proves the global existence of an $L_\infty$-algebra structure with the same cohomology as $L$ for any contractible multisymplectic manifold, or its local existence for a general multisymplectic manifold. Definition \ref{Def:RogersAlgebra} was originally proposed by Rogers as a suitable globalization of this local construction.
\\
Since in the non-degenerate case $\mathfrak{g}$ is isomorphic to the Lie algebra of Hamiltonian vector fields, one can think about $L_\infty(M,\omega)$ as an $L_\infty$-extension of $\mathfrak{X}_{\ham}(M,\omega)$.
\end{remark}
\begin{remark}[Standard projection to Hamiltonian vector fields]\label{Rem:ProjectiontoHamVfields}
By construction, the restriction of $[\cdot,\cdot]_2$ to Hamiltonian forms coincides with the bracket $\lbrace\cdot,\cdot\rbrace$ given in definition \ref{Def:BinaryBracketofHamiltonianForms}. Lemma \ref{Lem:BinBrackofHamFormsisHamiltonian} then implies that the surjection $\pi_{\ham}$, mapping Hamiltonian forms to their corresponding Hamiltonian vector fields, can be lifted to a strict $L_\infty$-morphism
\begin{displaymath}
\begin{tikzcd}
L_\infty(M,\omega) \ar[r,dashed] \ar[d]& \mathfrak{X}^1_{\ham}(M,\omega) \\
\Omega^{n-1}_{\ham}(M,\omega) \ar[ur,two heads,"\pi_{\ham}"',sloped]
\end{tikzcd}
~,
\end{displaymath}
defined by the projection
\begin{displaymath}
\morphism{\pi_{\ham}}
{L_\infty(M,\omega)}
{\mathfrak{X}^1_{\ham}(M,\omega)}
{\alpha}
{
\begin{cases}
\vHam_{\alpha} & \text{ if } |\alpha|=0 \\
0 & \text{ otherwise}
\end{cases}
}
~,
\end{displaymath}
denoted with the same symbol with a slight abuse of notation\footnote{Note that any Lie algebra can be seen as an $L_\infty$-algebra concentrated in degree $0$, therefore any $L_\infty$-morphism $L\to\mathfrak{g}$ is simply given by a linear map $L_0 \to \mathfrak{g}$ preserving the binary bracket.}.
\end{remark}
\begin{remark}[Degenerate case]\label{Rem:DegenerateCase}
Definition \ref{Def:RogersAlgebra} translates verbatim to the pre-$n$-plectic, \ie degenerate, case, replacing $\Omega^{n-1}_{\ham}(M,\omega)$ with the vector space of Hamiltonian pairs $Ham^{n-1}(M,\omega)$ as given in remark \ref{Rem:HamiltonianPairs} (\cf \cite[Thm 5.2]{Rogers2010}, \cite[\S 3]{Ryvkin2016a}, \cite[Thm 4.7]{Callies2016}, \cite[Prop. 3.2]{Fiorenza2014a}).
Namely, one can define a graded vector space
\begin{displaymath}
\mathclap{
Ham_{\infty}(M,\omega)^i =
\begin{cases}
\left\lbrace\left. \pair{\vHam}{\alpha}\in\mathfrak{X}^1(M)\oplus\Omega^{n-1}(M) ~\right|~ \d\alpha = -\iota_{\vHam}\omega \right\rbrace & \text{ if } i=0 \\[1em]
\Omega^{n-1+i}(M) & \text{ if } 1-n \leq i \leq -1
\end{cases}
}
\end{displaymath}
with multibrackets defined as in equation \eqref{Eq:RogersKBrackets}, interpreting the map $\pi_{\ham}$ as the projection on the first term of the direct sum:
\begin{displaymath}
\pi_{\ham}: Ham_{\infty}(M,\omega) \twoheadrightarrow \mathfrak{X}_{\ham}(M,\omega)
\end{displaymath}
(which can again be seen as a strict $L_\infty$-morphism).
\\
When $\omega$ is non-degenerate, there is an $L_\infty$-isomorphism $Ham_{\infty}(M,\omega)\cong L_\infty(M,\omega)$ (see \cite[Prop. 4.8]{Callies2016} and section \ref{Chap:MarcoPaper}).
\note{Add a precise reference to the corresponding lemma in chapter \ref{Chap:MarcoPaper}.}
\end{remark}
\begin{remark}[Dg Leibniz algebra of higher observables]
As pointed out by Rogers in \cite[\S 6]{Rogers2010}, there is another natural algebraic structure that one can construct out of $\Omega_{\ham}^{n-1}(M,\omega)$. Recall that, in the symplectic case, \ie the $n$-plectic case with $n=1$, the Poisson bracket between two given smooth functions $f,g\in C^{\infty}(M)$ can be seen as
\begin{displaymath}
\lbrace f,g \rbrace = \mathcal{L}_{\vHam_f} g
\end{displaymath}
hence $\lbrace f,\cdot \rbrace$ is a degree $0$ derivation making $\Omega^0_{\ham}(M)=C^{\infty}(M)$ into a Poisson algebra.
\\
When $n\geq 2$ one should not expect $L_\infty(M,\omega)$ to act like a Poisson algebra. In this case, the Cartan magic formula rather implies that
\begin{displaymath}
\mathcal{L}_{\vHam_\alpha} \beta = \lbrace \alpha, \beta \rbrace + \d \iota_{\vHam_\alpha}\beta~,
\end{displaymath}
thus suggesting to introduce the following non-symmetric binary bracket
\begin{displaymath}
\morphism{\llbracket \cdot,\cdot\rrbracket }
{\Omega^{n-1}_{\ham}(M,\omega)\otimes\Omega^{n-1}_{\ham}(M,\omega)}
{\Omega^{n-1}_{\ham}(M,\omega)}
{(\alpha,\beta)}
{\mathcal{L}_{\vHam_\alpha}\beta}
\end{displaymath}
Notice that $\llbracket\cdot,\cdot\rrbracket$ equals $\lbrace\cdot,\cdot\rbrace$ modulo boundary terms, but it has the different flavour of measuring the rate of change ("conservation", in the spirit of definition \ref{Def:conservedQuantities}) of the second Hamiltonian observable along the flow given by the first.
%
The bracket $\llbracket\cdot,\cdot \rrbracket$ can be easily extended to a binary bracket on $L$ as follows
\begin{displaymath}
\llbracket x, y \rrbracket =
\begin{cases}
\mathcal{L}_{\vHam_x} y & \text{ if } |x|=0 \\
0 & \text{ otherwise}
\end{cases}
\qquad \forall x,y \in L
\end{displaymath}
and it has been proved in \cite[Prop. 6.3]{Rogers2010} that the triple $(L,\delta,\llbracket\cdot,\cdot\rrbracket )$, with $\delta=[\cdot]_1$, forms a \emph{differential graded Leibniz algebra} (or \emph{dg Loday algebra}). Namely the following two compatibility relations hold for any $x,y,z\in L$:
\begin{displaymath}
\begin{aligned}
\delta \llbracket x,y\rrbracket =& \llbracket\delta x,y\rrbracket + (-)^{|x|}\llbracket x,\delta y\rrbracket \\
\llbracket x,\llbracket y, z\rrbracket\rrbracket =& \llbracket\llbracket x,y \rrbracket, z\rrbracket + (-)^{|x||y|}\llbracket y,\llbracket x, z\rrbracket\rrbracket ~.
\end{aligned}
\end{displaymath}
\end{remark}
\begin{remark}[Observables $L_\infty$-algebra as a homotopy pullback]\label{Rem:FiorenzaPullback}
It has been noticed in \cite[Thm. 3.12]{Fiorenza2014a} that the $L_\infty$-algebra of observables can be seen as a \emph{homotopy pullback} along the "signed multicontraction" $L_\infty$-morphism $(\iota_{\mathfrak{X}_{\ham}}\omega): \mathfrak{X}^1_{\ham}(M) \to \mathcal{B}_{(n)}$ introduced in lemma \ref{lem:signedmulticontractionLinfinitymorphism}. Namely it sits in the following pullback diagram
\begin{displaymath}
\begin{tikzcd}
L_{\infty}(M,\omega)\arrow[dr, phantom, "\scalebox{1.5}{$\lrcorner$}" , very near start, color=black] \ar[r]\ar[d,"\pi_{\ham}"'] & 0 \ar[d]\\
\mathfrak{X}^1(M) \ar[r,"(\iota_{\mathfrak{X}_{\ham}})"] & \mathcal{B}
\end{tikzcd},
\end{displaymath}
commuting modulo homotopies, \ie $2$-morphisms in the category of $L_\infty$-algebras.
\note{$Ham_{\infty}(M,\omega)$ is the \emph{homotopy fibre} of the \emph{structure map} $(\iota_{\mathfrak{X}}\circ \pi_{\ham}) = (\iota_{\mathfrak{g}})$ with $\mathfrak{g}=\mathfrak{X}_{\ham}$. The notation $\mathcal{B}^n$ is meant to recall an analogous construction for the classifying space of vector bundles.}
\note{It would be interesting to give this homotopy explicitly (it presumably involves the pairing); this would require an explicit definition of homotopy. In section 3.2 of Dolgushev--Hoffnung--Baez the mapping space for $L_\infty$-algebras is described, \ie the simplicial set $\text{Map}_{\bullet}(L,L')$ whose $0$-simplices are $L_\infty$-morphisms and whose $1$-simplices are homotopies between morphisms. In these terms the homotopy-pullback diagram means that $[\iota_{\mathfrak{X}}\circ \pi_{\ham}]=[0]\in \pi_0\,\text{Map}_{\bullet}(\mathfrak{g},\mathcal{B})$.}
Without going into the precise definition of homotopy \cite{Buijs2013}, or equivalence \cite{Dolgushev2007}, \cite[Appendix]{Fregier2015}, between two $L_\infty$-morphisms, one can notice that this weaker notion of commutativity is another reverberation of the multi-Cartan magic rule. For instance, in the $1$-plectic case the lower-left corner of the above diagram restricts to
\begin{displaymath}
\begin{tikzcd}
C^{\infty}(M) \ar[dr,dashed,"(f)"] \ar[d,"\pi_{\ham}"'] & \\
\mathfrak{X}_{\ham} \ar[r,"(\iota_{\mathfrak{X}_{\ham}})"'] & C^{\infty}(M)[1]\oplus \d C^{\infty}(M)
\end{tikzcd}
\end{displaymath}
where we denoted by $(f)$ the $L_2$-morphism resulting from the composition. This consists of two components. The first one, $f_1= (\iota^1_{\mathfrak{X}(M)}\omega) \circ \pi_{\ham}$, can be seen as a chain map homotopic to $0$. Namely, the map $-\id$ is a chain homotopy $f_1 \Rightarrow 0$ since, according to the very definition of Hamiltonian vector fields, the following diagram commutes:
\begin{displaymath}
\begin{tikzcd}
& C^{\infty}(M) \ar[dl,"-\id",sloped]\ar[d,"f_1"] \\
C^{\infty}(M) \ar[r,"\d"]& \d C^{\infty}(M)
\end{tikzcd}
~.
\end{displaymath} % On the other hand, the second component $f_2:C^{\infty}(M)\otimes C^{\infty}(M) \to C^{\infty}(M)$ can be seen itself as a chain homotopy from $0$ to the chain map $g$ defined by the commutation of the right square in the diagram below \begin{displaymath} \begin{tikzcd} & C^{\infty}(M)\otimes C^{\infty}(M) \ar[r,"\pi\otimes\pi"]\ar[d,dashed,"g"]\ar[dl,"f_2"',dashed]& \mathfrak{X}_{\ham}\otimes \mathfrak{X}_{\ham}\ar[d,"{-[\cdot,\cdot]}"] \\ C^{\infty}(M) \ar[r,"\d"]& \d C^{\infty}(M) & \mathfrak{X}_{\ham} \ar[l,"\iota_{\mathfrak{X}_{\ham}}^i\omega"] \end{tikzcd} \end{displaymath} since, as follows from the first point in lemma \ref{Lem:BinBrackofHamFormsisHamiltonian}, the whole diagram commutes. \end{remark} Many non-trivial explicit examples of the algebra of observables $L_\infty(M,\omega)$ can be found in \cite{Rogers2011}\cite{Callies2016}\cite{Ryvkin2018}. More will be studied in chapters \ref{Chap:LeonidPaper} and \ref{Chap:MauroPaper}. We only mention here an example directly following from example \ref{Ex:SumProductMultiSymp}: % \begin{example}[Higher observables for sums and products] Notice that the observables $L_\infty$-algebra construction behaves compatibly with the sum and product constructions of multisymplectic manifolds given in example \ref{Ex:SumProductMultiSymp}. Namely, when $(M,\omega)$ and $(\tilde{M},\tilde{\omega})$ are two multisymplectic structure of the same order, there exists a strict $L_\infty$-morphisms \cite[Ex. 6.5]{Ryvkin2018} \begin{displaymath} \lambdamorphism{L_\infty(M,\omega)\oplus L_\infty(\tilde{M},\tilde{\omega})} {L_\infty(M\times \tilde{M},\pi_M^\ast\omega+\pi_{\tilde{M}}^\ast\tilde{\omega})} {(\alpha,\beta)} {\pi_M^\ast \alpha + \pi_{\tilde{M}}^\ast \beta} ~, \end{displaymath} and, for any $\omega,\tilde{\omega}$, possibly in different order, there exists a non-strict $L_\infty$-morphism \cite[Thm. 4.2]{Shahbazi2016} \begin{displaymath} L_\infty(M,\omega)\oplus L_\infty(\tilde{M},\tilde{\omega}) \to L_\infty(M\times \tilde{M}, \pi_M^\ast \omega \wedge \pi_{\tilde{M}}^\ast \tilde{\omega}) ~. \end{displaymath} \end{example} \section{Symmetries and \Momaps}\label{Sec:HCMM} When one fixes a form $\omega$ on a manifold $M$ it is natural to highlight the group actions preserving this extra structure, also known as "symmetries". \\ Consider a multisymplectic manifold $(M,\omega)$, we introduce the following nomenclature pertaining to global and local symmetries: \begin{definition}[Multisymplectic actions] A smooth action $\vartheta:G\action M$ of a Lie group $G$ on $M$ is called \emph{multisymplectic} if it preserves the multisymplectic form, \ie \begin{displaymath} \hat{\vartheta}_g^* \omega = \omega \qquad \forall g\in G \end{displaymath} where $\hat{\vartheta}_g=\vartheta(\cdot,g)$ is the diffeomorphism giving the action of $G$ on $M$. \\ A Lie algebra morphism $\vAct: \mathfrak{g} \rightarrow \mathfrak{X} (M)$ is called an \emph{infinitesimal multisymplectic action} if it acts by multisymplectic vector fields, \ie \begin{displaymath} \mathcal{L}_{\vAct_\xi} \omega = 0 \quad \forall \xi \in \mathfrak{g} ~. \end{displaymath} \end{definition} Clearly, these two concepts are not unrelated: \begin{lemma} Consider a multisymplectic manifold $(M,\omega)$. 
If the action $\vartheta:G\action M$ is multisymplectic then the corresponding infinitesimal action $\vAct: \mathfrak{g}\to \mathfrak{X}(M)$, given by the fundamental vector fields \ie \begin{equation}\label{eq:LeftFundVF} v_\xi(m) = \left.\dfrac{d}{dt}\right\vert_0 \vartheta(m,\exp(- t\xi)) \qquad \forall m \in M , \xi \in \mathfrak{g} ~, \end{equation} is multisymplectic in the sense of definition \ref{def:Hamiltonianvfields}. \\ If the Lie group $G$ is in particular connected, also the converse is true. \end{lemma} A multisymplectic action acts infinitesimally by multisymplectic vector fields, we give a special name when the fundamental vector fields are Hamiltonian: \begin{definition}[Weakly Hamiltonian actions]\label{Def:WeakHamActions} The Lie algebra morphism $\vAct: \mathfrak{g} \rightarrow \mathfrak{X} (M)$ is an \emph{infinitesimal weakly Hamiltonian action} if $\mathfrak{g}$ acts by Hamiltonian vector fields, \ie $\text{Im}(v) \subseteq \mathfrak{X}_{\ham}(M,\omega)$. % A smooth action $\vartheta:G\action M$ of a Lie group $G$ on $M$ is called \emph{weakly Hamiltonian} if the corresponding infinitesimal action via fundamental vector fields (see equation \eqref{eq:LeftFundVF}) is Hamiltonian. \end{definition} \begin{remark}[Lift condition for weakly Hamiltonian actions {\cite[\S 4.1]{Ryvkin2016a}}]\label{Rem:weakHamasLift} The property for the action $\vartheta: G \action M$ to be weakly Hamiltonian can be easily read as the existence of a lift $f_1$ of the corresponding infinitesimal action to the vector space of Hamiltonian $(n-1)$-form, \ie the following diagram commutes in the category of vector spaces: \begin{displaymath} \begin{tikzcd}[column sep = huge] & \Omega^{n-1}_{\ham}(M,\omega) \ar[d,two heads,"\pi_{\ham}"] \\ & \mathfrak{X}_{\ham}(M,\omega)\ar[d,hook] \\ \mathfrak{g} \ar[r,"\vAct"] \ar[ruu,"f_1"] & \mathfrak{X}(M) \end{tikzcd} ~. \end{displaymath} \\ Such condition can be expressed in a cohomological flavour by considering the following exact sequence of vector spaces : \begin{displaymath} \begin{tikzcd}[row sep = small,column sep = small] 0 \ar[r] & \Omega^{n-1}_{cl}(M) \ar[r,hook] & \Omega^{n-1}_{\ham}(M,\omega) \ar[r,"\pi_{\ham}"] &[1.5em] \mathfrak{X}_{\ham}(M,\omega) \ar[r,"\gamma"] & H_{dR}^n(M) \ar[r] & 0 \\[-1em] & & & v \ar[r,mapsto] & {[\iota_{v}\omega]} \end{tikzcd} ~, \end{displaymath} meaning that $\vartheta$ acts by Hamiltonian vector fields whenever the class $[\gamma \circ \vAct_\xi]=0$ for any $\xi \in \mathfrak{g}$. \\ In particular, if $H^n_{dR}(M)=0$ or the acting Lie algebra satisfy the equation $[\mathfrak{g},\mathfrak{g}]= \mathfrak{g}$ (\eg $\mathfrak{g}$ is the Lie algebra of a semisimple Lie group) any multisymplectic action is also weakly Hamiltonian. \end{remark} \subsection{\Momaps}\label{Sec:hcmm} When studying weakly Hamiltonian actions on symplectic manifolds, the auxiliary concept of \emph{moment map} takes an exceptionally important role. The latter is in particular instrumental in many celebrated results in symplectic geometry like the Hamiltonian formulation of the Noether theorem, the classification of toric manifolds and the Kostant-Kirillov-Souriau coadjoint orbits method (see \cite{CannasdaSilva2001} for a complete review). Most of these results have also a non-trivial application in the geometrical approach to mechanics (see, for instance, \cite{Abraham1978}). \begin{reminder}[Moment maps in symplectic geometry]\label{Rem:SymplecticMomaps} Let be $(M,\omega)$ a symplectic (\ie $1$-plectic) manifold. 
Consider a smooth symplectic action $\vartheta: G \action M$, \ie preserving the $2$-form $\omega$. Denote by $\vAct: \mathfrak{g} \to \mathfrak{X}_{\msy}(M,\omega)$ the corresponding infinitesimal action by fundamental vector fields. We define the following:
\begin{itemize}
\item A \emph{weak moment map pertaining to $\vartheta$} is a smooth map $\hat{\mu}:M\to \mathfrak{g}^\ast$ such that
\begin{displaymath}
\d \langle \hat{\mu}(x), \xi \rangle = -\iota_{v_\xi} \omega_x \qquad \forall x\in M, \xi \in \mathfrak{g}~.
\end{displaymath}
\item A \emph{(strong) moment map pertaining to $\vartheta$} is an $Ad^\ast$-equivariant weak moment map, \ie the following diagram commutes in the category of smooth manifolds
\begin{displaymath}
\begin{tikzcd}
M \ar[r,"\hat{\mu}"]\ar[d,"\vartheta_g"] & \mathfrak{g}^\ast \ar[d,"Ad^\ast_g"] \\
M \ar[r,"\hat{\mu}"] & \mathfrak{g}^\ast
\end{tikzcd}
~.
\end{displaymath}
\end{itemize}
%
Dually, see for instance \cite[\S 22.1]{CannasdaSilva2001}, one can define the following:
\begin{itemize}
\item A \emph{weak comoment map pertaining to $\vartheta$} is a linear map $\check{\mu}:\mathfrak{g} \to C^{\infty}(M)$ such that
\begin{displaymath}
\d \check{\mu}(\xi) = -\iota_{v_{\xi}}\omega \qquad \forall \xi \in \mathfrak{g}
\end{displaymath}
\item A \emph{(strong) comoment map pertaining to $\vartheta$} is a weak comoment map that is also a Lie algebra morphism, \ie the following diagram commutes in the category of vector spaces
\begin{displaymath}
\begin{tikzcd}
\mathfrak{g}^{\otimes 2} \ar[r,"\check{\mu}^{\otimes 2}"]\ar[d,"{[\cdot,\cdot]}"'] & C^{\infty}(M)^{\otimes 2} \ar[d,"{[\cdot,\cdot]}"] \\
\mathfrak{g} \ar[r,"\check{\mu}"] & C^{\infty}(M)
\end{tikzcd}
~.
\end{displaymath}
\end{itemize}
%
The duality, hence the "co" in the naming of the previous list, comes from the fact that $\check{\mu}$ is the dual of $\hat{\mu}$ in the following sense
\begin{equation}\label{eq:dualitymomaps}
\check{\mu}(\xi)\eval_p = \langle \hat{\mu}(p), \xi \rangle \qquad \forall \xi \in \mathfrak{g}, p\in M ~.
\end{equation}
In other words
\begin{displaymath}
\hat{\mu} \in \Hom_{\text{smooth}}\big(M,~\Hom_{\text{vect}}(\mathfrak{g},\mathbb{R})\big) ~,\quad \check{\mu} \in \Hom_{\text{vect}}\big(\mathfrak{g},~\Hom_{\text{smooth}}(M,\mathbb{R})\big)
\end{displaymath}
come respectively from the \emph{currying} \cite{nlab:currying} of the same "evaluation" vector bundle map
\begin{displaymath}
\mu: M \times \mathfrak{g} \to \mathbb{R}_M
\end{displaymath}
with respect to the first or second entry.
\\
The upshot is that a (co)moment map can exist only if the action is \emph{weakly Hamiltonian} in the sense of definition \ref{Def:WeakHamActions}. In that case, a weak comoment map is precisely a choice of Hamiltonian form for every fundamental vector field, hence the following diagram commutes in the category of vector spaces
\begin{displaymath}
\begin{tikzcd}
& C^{\infty}(M)\ar[d,"\pi_{\ham}"]\\
\mathfrak{g} \ar[r,"\vAct"'] \ar[ur,dashed,"\check{\mu}"]& \mathfrak{X}(M)
\end{tikzcd}
~.
\end{displaymath}
If the comoment map is "strong", the action is said to be \emph{(strongly) Hamiltonian} and the previous diagram commutes in the category of Lie algebras.
\end{reminder}
The term "moment" comes from the following crucial example, which is prototypical in geometric mechanics:
\begin{example}[Linear and angular momenta]\label{Ex:SymplecticMechanicalMomenta}
Given an action $\vartheta:G\action Q$, its lift $\vartheta^L:G\action M=T^\ast Q$ acts via symplectic vector fields with respect to the \emph{canonical symplectic form}.
In particular this action preserves the tautological $1$-form $\theta$ and it can be shown to be Hamiltonian with comoment map given by
\begin{displaymath}
\morphism{\check{\mu}}
{\mathfrak{g}}
{C^{\infty}(M)}
{\xi}
{-\iota_{v_{\xi}}\theta} ~.
\end{displaymath}
%
If one takes $Q=\R^3$, then $Q$ and $T^* Q\cong \R^6$ can be interpreted as the \emph{configuration space} and the \emph{phase space} of a point-particle in physical space. The comoment map with respect to the action of the translation or the rotation group on $Q$ gives, respectively, the linear and angular momenta of the point-particle freely moving in space \cite[\S 22.4]{CannasdaSilva2001}.
\end{example}
\medskip
In the multisymplectic context, the generalization of the (co)momentum maps of the symplectic case leads to the more refined concept of "\momap".
\begin{definition}[\Momaps \cite{Callies2016}]\label{Def:HCMM}
Let $v:\mathfrak g\to \mathfrak X(M)$ be a multisymplectic Lie algebra action, \ie one preserving the multisymplectic form $\omega \in \Omega^{n+1}(M)$. We call a \emph{\momap} pertaining to $v$ any $L_\infty$-morphism
$$ (f)= \Big\{f_k:\Lambda^k\mathfrak{g} \to (L_{\infty}(M,\omega))^{1-k}\subseteq \Omega^{n-k} \Big\}_{k=1,...,n} $$
from $\mathfrak g$ to $L_\infty(M,\omega)$ satisfying
$$ \d f_1(\xi)=-\iota_{v_\xi}\omega \qquad \forall \xi\in\mathfrak{g} ~. $$
\end{definition}
\begin{notation}
A group action $G\action (M,\omega)$ is called \emph{Hamiltonian} if the corresponding infinitesimal Lie algebra action admits a \momap. We will often speak of the \momap of a certain Lie group action, understanding it as the \momap corresponding to the infinitesimal action of the Lie algebra of the given group.
\end{notation}
\begin{remark}
More conceptually, a \momap is an $L_\infty$-morphism $(f):\mathfrak{g}\to L_\infty(M,\omega)$ lifting the action $v:\mathfrak{g}\to \mathfrak{X}(M)$, \ie making the following diagram commutative in the category of $L_\infty$-algebras:
\begin{center}
\begin{tikzcd}[column sep = large]
& L_\infty (M,\omega)\ar[d,"\pi_{\ham}"]\\
\mathfrak{g} \ar[ur,"(f)",dashed]\ar[r,"v"'] & \mathfrak{X}(M)
\end{tikzcd}
\end{center}
where the vertical arrow $\pi_{\ham}$ is the trivial $L_\infty$-extension of the linear map sending any Hamiltonian form to its unique corresponding Hamiltonian vector field, given in remark \ref{Rem:ProjectiontoHamVfields}.
\\
Hence, comparing with remark \ref{Rem:weakHamasLift}, one can see that the existence of a \momap is a stronger condition, in the sense that it requires a lift of $\vAct$ not merely in the category of plain vector spaces but in the category of $L_\infty$-algebras
\begin{displaymath}
\begin{tikzcd}[ column sep = huge]
&[3em] L_{\infty}(M,\omega) \ar[d,two heads,"\pi_{\ham}"] \\
\mathfrak{g} \ar[rd,"\vAct"'] \ar[r,"f_1"]\ar[ru,"(f)"] & \Omega^{n-1}_{\ham}(M,\omega) \ar[d,two heads,"\pi_{\ham}"] \\
& \mathfrak{X}_{\ham}(M,\omega)
\end{tikzcd}
~.
\end{displaymath}
\end{remark}
In the following we will often make use of an explicit version of definition \ref{Def:HCMM}, subsumed by the following lemma:
\begin{lemma}[\cite{Callies2016}]\label{Lem:ExplicitHCCM}
A \momap $(f)$ for the infinitesimal multisymplectic action of ${\mathfrak g}$ on $M$ is given explicitly by a sequence of linear maps
\begin{displaymath}
(f) = \Big\{ f_i: \,\,\, \Lambda^i {\mathfrak g} \to \Omega^{n-i}(M) \quad \vert \quad 0\leq i \leq n+1 \Big\}
\end{displaymath}
%
fulfilling the set of equations
\begin{equation}\label{eq:fk_hcmm}
-f_{k-1} (\partial p) = \d f_k (p) + \varsigma(k) \iota(v_p) \omega
\end{equation}
together with the condition
\begin{displaymath}
f_0 = f_{n+1} = 0
\end{displaymath}
for all $p \in \Lambda^k\mathfrak{g}$ and $k=1,\dots, n+1$.
(Recall that $\partial$ denotes the Chevalley-Eilenberg boundary operator defined in equation \eqref{eq:CE_boun} and $\varsigma(k)$ is the sign coefficient given in equation \eqref{Eq:SigmaSign}.)
\end{lemma}
\begin{proof}
Equation \eqref{eq:fk_hcmm} is a simple application of remark \ref{Rem:GroundedEasyAxioms} to the grounded $L_\infty$-algebra of higher observables.
\end{proof}
\begin{remark}\label{Rem:TermMuByMauro}
\note{Using $\mu_k$ here might create confusion with the higher Rogers chapter, where the same symbol is often used for the multibrackets.}
Formula \eqref{eq:fk_hcmm} can be read as follows: when $k=1$ it tells us that $\vAct$ acts via Hamiltonian vector fields and $f_1$ is a linear map choosing a Hamiltonian form $f_1(x)$ pertaining to the Hamiltonian vector field $v_x$ for any $x\in \mathfrak{g}$.
\\
Hence, $f_1$ is the choice of a primitive for the contraction of $\omega$ with any fundamental vector field of the action.
\\
When $k\geq 2$, equation \eqref{eq:fk_hcmm} can be read as the condition that the auxiliary closed differential form
$$ \mu_k := f_{k-1} (\partial p) + \varsigma(k) \iota(v_p) \omega $$
must actually be {\it exact}, with potential $-f_k(p)$. Closure of $\mu_k$ is again a consequence of lemma \ref{lemma:multicartan} together with $\d \omega = 0$. Namely one has:
\begin{equation}\label{eq:MauroMukForm}
\begin{aligned}
\d \mu_k &= \d (f_{k-1} (\partial p) + \varsigma(k) \iota(v_p) \omega) = \\
&= \varsigma(k) (-1)^k \iota(v_{\partial p})\omega - \varsigma(k-1) \iota(v_{\partial p})\omega =\\
&= [-\varsigma(k+1) -\varsigma(k-1)] \iota(v_{\partial p})\omega = 0 ~.
\end{aligned}
\end{equation}
\end{remark}
The previous remark leads to the following sufficient condition for the existence of a \momap:
\begin{theorem}[{\cite[Thm. 9.6]{Callies2016}}]
Consider a Lie algebra $\mathfrak{g}$ acting on a multisymplectic manifold $(M,\omega)$. The action $\vAct:\mathfrak{g}\to \mathfrak{X}(M)$ admits a \momap if it acts by Hamiltonian vector fields, \ie if there exists a map $\phi: \mathfrak{g}\to \Omega^{n-1}_{\ham}(M,\omega)$ such that $\d \phi = -\iota_{v} \omega$, and $H_{dR}^k(M)=0$ for all $1 \leq k \leq n-1$.
\end{theorem}
\begin{remark}[{\cite[Rem. 9.7]{Callies2016}}]
The previous theorem applies also when the infinitesimal Lie algebra action $\vAct$ comes from an infinite-dimensional Lie group $G$ that is locally exponential.
\end{remark}
\begin{remark}
The reason why the term "homotopy" appears in the definition of the multisymplectic analogue of an ordinary comoment map is that $(f)$ can be interpreted as a homotopy between morphisms of cochain complexes.
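Recall (a standard notion, stated here only for convenience) that a homotopy between two cochain maps $F,G\colon (C,\d_C)\to (D,\d_D)$ is a degree $-1$ map $h\colon C\to D$ such that $F-G=\d_D\circ h + h\circ \d_C$.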
Observe that the components of $(f)$ can be arranged in the following (non-commutative) diagram inside the category of vector spaces: \begin{wideeq} \begin{tikzcd}[ampersand replacement=\&] \bigwedge^{n+2}\mathfrak{g} \ar[r,"\partial"] \ar[d,blue,"\iota_{\mathfrak{g}}^{n+2}\omega",pos=0.4] \& \bigwedge^{n+1}\mathfrak{g} \ar[r,"\partial"] \ar[dl,purple,"f_{n+1}",sloped,pos=0.4] \ar[d,blue,"\iota_{\mathfrak{g}}^{n+1}\omega",pos=0.4] \& \bigwedge^{n}\mathfrak{g} \ar[r] \ar[d,blue,"\iota_{\mathfrak{g}}^{n}\omega",pos=0.4] \ar[dl,purple,"f_{n}",sloped,pos=0.4] \&[-3em] \cdots \ar[r] \&[-3em] \bigwedge^{2}\mathfrak{g} \ar[r,"\partial"] \ar[d,blue,"\iota_{\mathfrak{g}}^{2}\omega",pos=0.4] \& \bigwedge^{1}\mathfrak{g} \ar[r,"\partial"] \ar[d,blue,"\iota_{\mathfrak{g}}^{1}\omega",pos=0.4] \ar[dl,purple,"f_{1}",sloped,pos=0.4] \&[-2em] 0 \ar[d,blue,hook] \\ 0 \ar[r,hook] \& C^{\infty}(M) \ar[r,"\d"] \& \Omega^{1}(M) \ar[r] \& \cdots \ar[r] \& \Omega^{n-1}(M) \ar[r,"\d"] \& \Omega^{n}(M) \ar[r,"\d"] \& \cdots \end{tikzcd} \end{wideeq} The top line can be interpreted as the reversed Chevalley-Eilenberg chain complex, i.e $ \setminus CE(\mathfrak{g}) = S^{\bullet}(\mathfrak{g}[1])$ and the line below as a suitable shifted truncation of the de Rham complex, $\trunc_{n}\Omega(M)[n+1] \supset L$, including the chain complex underlying $L_\infty(M,\omega)$. % Hence, equation \eqref{eq:fk_hcmm} express the condition that $(f)$ is a chain homotopy between the chain map $\iota_\mathfrak{g}\omega$ (see remark \ref{Rem:SignedMultiContraction}) and the zero map, \ie \begin{displaymath} \begin{tikzcd}[column sep = huge] \setminus CE(\mathfrak{g}) \arrow[r,blue,"\iota_{\mathfrak{g}}", bend left=20, ""{name=U, below}] \arrow[r,"0"', bend right=20, ""{name=D}] & \qquad\qquad\trunc_{n}\Omega(M)[n+1] \arrow[Rightarrow, purple,"(f)", from=U, to=D] \end{tikzcd} \end{displaymath} \end{remark} \begin{remark} The universal property of the homotopical pullback introduced in remark \ref{Rem:FiorenzaPullback} implies that if a weakly Hamiltonian action $\vAct:\mathfrak{g}\to \mathfrak{X}_{\ham}(M)$ satisfies the property that $(\iota_\mathfrak{g}):= ((\iota_{\mathfrak{X}})\circ \vAct ) $ is null-homotopic, then there exists a $(f)$ (unique modulo homotopy) such that the following diagram commutes up to homotopies \begin{displaymath} \begin{tikzcd} \mathfrak{g}\ar[ddr,bend right=30,"\vAct"']\ar[rrd,bend left=30]\ar[dr,"\exists ! (f)"] &&\\ &L_{\infty}(M,\omega)\arrow[dr, phantom, "\scalebox{1.5}{$\lrcorner$}" , very near start, color=black] \ar[r]\ar[d,"\pi_{\ham}"'] & 0 \ar[d]\\ &\mathfrak{X}^1(M) \ar[r,"(\iota_{\mathfrak{X}})"] & \mathcal{B} \end{tikzcd} ~. \end{displaymath} The latter means that the two outer triangles commute and there exists two homotopies $(\iota_{\mathfrak{X}_{\ham}})\Rightarrow 0$ and $(\iota_{\mathfrak{g}})\Rightarrow 0$. \end{remark} \begin{remark} As noted in \cite[Rem. 5.2]{Callies2016}, one can generalize the standard characterization of the image of a comoment map as a Poisson sub-algebra in symplectic geometry also in the multisymplectic case. \\ In the latter case, the image of $(f)$ has to be understood as the cochain complex $I \hookrightarrow L$ given by the following components \begin{displaymath} I^k = \begin{cases} \text{Im}(f_n)\subset C^{\infty}(M) & \text{ if } k=1-n \\ \text{Im}(f_{1-k})\oplus \d\text{Im}(f_{2-k}) \subset \Omega^{n+k-1}(M) & \text{ if } 2-n \leq k \leq 0 \end{cases} ~. 
\end{displaymath}
\end{remark}
\begin{remark}[Relation with other notions of multisymplectic moment maps]\label{Rem:OtherNotionsofComoments}
It is important to remark that definition \ref{Def:HCMM} is not the only notion of "multisymplectic" moment map proposed in the literature. We mention among others the so-called \emph{covariant multimoment map} \cite{Carinena1991b} (see also \cite{Gimmsy1}), the \emph{multimoment map} \cite{Madsen2012} and the \emph{weak moment map} \cite{Herman2017}. The relationship of these notions with the definition of \momap is discussed in \cite[\S 12]{Callies2016} and \cite{Mammadova2020}.
\end{remark}
A big part of chapters \ref{Chap:MauroPaper} and \ref{Chap:LeonidPaper} will be devoted to giving new explicit constructions of \momaps. Several other examples can be found in \cite{Callies2016,Ryvkin2018}.
\subsection{Equivariant \momaps}
In symplectic geometry, the $G$-equivariance condition plays a central role in distinguishing between the weaker and the stronger notion of moment map. When passing to the dual notion, this condition is equivalent to the Lie algebra morphism property of the comoment map. In the $n$-plectic case, the higher analogue of the latter condition translates into the requirement that the \momap be a morphism in the $L_\infty$-algebra category; this is no longer equivalent to the $G$-equivariance condition. It is therefore still meaningful to single out, among \momaps, those which happen to be equivariant in the sense that all components are equivariant with respect to the coadjoint action. Namely, we have the following definition:
\begin{definition}[Equivariant \Momaps]\label{Def:EquivariantMomap}
A \momap pertaining to the multisymplectic action $G\action M$ is called \emph{$G$-equivariant} if
\begin{displaymath}
{\mathcal L}_{v_\xi} f_k({p}) = f_k ([\xi, {p}]) \qquad \forall \xi \in \mathfrak{g} \; ,~ p \in \Lambda^k \mathfrak{g},
\end{displaymath}
where $[\xi, p]$ is the adjoint action of $\mathfrak{g}$ on $\Lambda^k \mathfrak{g}$. Explicitly, the adjoint action reads on decomposable elements as follows:
\begin{equation}\label{eq:adjointactionwedge}
[\xi, x_1\wedge\dots\wedge x_k] = \sum_{l=1}^k x_1 \wedge \dots \wedge [\xi,x_l] \wedge \dots \wedge x_k ~.
\end{equation}
\end{definition}
An equivariant \momap is provided by the following example, generalizing example \ref{Ex:SymplecticMechanicalMomenta}:
\begin{example}[Fields mechanical momentum]
Consider a smooth action $\vartheta^Q : G\action Q$ on a given smooth manifold $Q$. There is a natural lift of this action to $M=\Lambda^n T^\ast Q$ given by the bundle map
\begin{displaymath}
\morphism{\vartheta^M_g}
{\Lambda^n T^\ast Q}
{\Lambda^n T^\ast Q}
{(q,\alpha)}
{(\vartheta^Q_g(q),((T_q~\vartheta^Q_g)^{-1})^\ast \alpha )}
\end{displaymath}
over $\vartheta^Q_g$ for any $g\in G$.
\\
This action preserves the tautological $n$-form $\theta$. Since $\theta$ is then a $G$-invariant primitive of $\omega = \d \theta$, this action admits a \momap \cite[Lem. 8.1]{Callies2016} (see also lemma \ref{lem:extexact} in the following chapter).
\end{example}
\subsection{Induced \momaps and isotropy subgroups}
In this subsection we will discuss how \momaps behave under restriction to subgroups and invariant submanifolds. This will be useful for constructing an $SO(n)$-comoment for $S^n$ in chapter \ref{Chap:LeonidPaper}.
Let $G$ be a Lie group with Lie algebra $\mathfrak{g}$, acting on a pre-$n$-plectic manifold $(M,\omega)$ with \momap $(f)\colon \mathfrak{g}\to L_{\infty}(M,\omega)$. One can obtain new actions either by restricting to a Lie subgroup of $G$ or by restricting to an invariant submanifold of $(M,\omega)$. \begin{proposition}[Lemma 3.1 in \cite{Shahbazi2016}]\label{prop:restrict} Let $H\subset G$ be a Lie subgroup, and denote by $j\colon \mathfrak{h}\hookrightarrow \mathfrak{g}$ the corresponding Lie algebra inclusion. The restricted action of $H$ on $(M,\omega)$ has \momap $(f \circ j): \mathfrak{h}\to L_{\infty}(M,\omega)$, given in components by $f_i\circ j:\Lambda^i\mathfrak{h}\to \Omega^{n-i}(M)$ for $i=1,\dots,n$. \end{proposition} \begin{center} \begin{tikzcd} H \ar[symbol=\action]{r} \ar[hook]{d}& M \ar[equal]{d}& & \mathfrak{h} \ar[hook,"j"']{d} \ar[dashed]{dr} & \\ G \ar[symbol=\action]{r}& M & & \mathfrak{g}\ar["(f)"']{r} & L_{\infty}(M,\omega) \end{tikzcd} \end{center} \begin{proposition}[Lemma 3.2 in \cite{Shahbazi2016}] \label{prop:NenMGinvariant} Let $N\overset{i}{\hookrightarrow} M$ be a $G$-invariant submanifold of $M$. \\ Then the action $G\action \left(N,i^{\ast}\omega\right)$ has \momap $(i^{\ast} \circ f): \mathfrak{g}\to L_{\infty}\left(N,i^{\ast}\omega\right)$, given in components by $i^*\circ f_i:\Lambda^i\mathfrak{g}\to \Omega^{n-i}(N)$ for $i=1,\dots,n$. \end{proposition} \begin{center} \begin{tikzcd} G \ar[symbol=\action]{r} \ar[equal]{d}& N \ar[hook,"i"]{d}& & & L_{\infty}(N, i^\ast\omega) \\ G \ar[symbol=\action]{r}& M & & \mathfrak{g}\ar["(f)"']{r}\ar[dashed]{ur} & L_{\infty}(M, \omega) \ar["i^\ast"']{u} \end{tikzcd} \end{center} Moreover, we can produce a new \momap by considering a different multisymplectic form, obtained by contracting $\omega$ with cycles in the Lie algebra homology of $\mathfrak{g}$, \ie elements of the vector space \begin{equation} Z_k(\mathfrak{g}) = \{ p \in \Lambda^k \mathfrak{g} ~\vert~ \partial p = 0 \} \quad. \end{equation} \begin{proposition}[Proposition 3.8 in \cite{Ryvkin2016}]\label{prop:indmomap} Let $p\in Z_{k}(\mathfrak{g})$ for some $k \geq 1$, denote by $G_{p}$ the corresponding isotropy group for the adjoint action of $G$ on $\Lambda^{k}\mathfrak{g}$, and by $\mathfrak{g}_p=\{x\in \mathfrak{g}: [x,p]=0\}$ its Lie algebra. \\ If $G_p^0$ is the connected component of the identity in $G_p$, then the action $G_p^0\action \left(M,\iota(v_p)\omega\right)$ admits a comoment $(f^p) \colon \mathfrak{g}_p \to L_{\infty}(M,\iota(v_p)\omega)$ with components $(i=1,\dots,n-k)$: \begin{displaymath} \morphism{f^p_i} {\Lambda^i\mathfrak{g}_p} {\Omega^{n-k-i}(M)} {q} {\varsigma(k)~f_{i+k}(q\wedge p)} ~. \end{displaymath} \end{proposition} \begin{remark} In the context of multisymplectic geometry, of ``weak comoment maps'' (\cf \cite{Herman2018}) and of ``multimoments'' (\cf \cite{Madsen2013} and remark \ref{Rem:OtherNotionsofComoments}), the subspace $ Z_{k}(\mathfrak{g})$ is often referred to as ``the $k$-th Lie kernel''.
\end{remark} \begin{proposition}[Remark 3.9 in \cite{Ryvkin2016}]\label{prop:indmomap_equiv} If the \momap $(f) \colon \mathfrak{g} \to L_{\infty}(M,\omega)$ is also $G$-equivariant, then the map $(f^p)$ defined in proposition \ref{prop:indmomap} is $G_p^0$-equivariant.\\ Another equivariant \momap for the action of $G_p^0$ on $(M,\iota(v_p)\omega)$ is given in components $(i=1,\dots,n-k)$ by: \begin{displaymath} \morphism{f^p_i} {\Lambda^i\mathfrak{g}_p} {\Omega^{n-k-i}(M)} {q} {(-1)^{k}\iota(v_q)(f_{k}(p))} \end{displaymath} which, in general, may differ from the one given in Proposition \ref{prop:indmomap}. \end{proposition} \noindent \vspace{1em} Provided certain conditions are met, it is possible to induce a comoment on an invariant submanifold of $M$ even if the obvious pullback vanishes (\eg~when $\omega$ is a top dimensional form). In chapter \ref{Chap:LeonidPaper} we will make use of the following corollary subsuming the contents of the previous propositions: \begin{corollary}\label{cor:inducedmachinery} Let $G\action(M,\omega)$ be a multisymplectic group action. If there exist: \begin{itemize}[topsep=1pt,itemsep=0pt, partopsep=1pt] \item another multisymplectic manifold $(N,\eta)$ containing $M$ as a $G$-invariant submanifold, with embedding $j:M\hookrightarrow N$; \item a Lie group $H \supset G$ containing $G$ as a Lie subgroup; \item a multisymplectic action $H \action (N,\eta)$ with equivariant \momap $s:\mathfrak{h}\to L_\infty(N,\eta)$; \item an element $p\in Z_k(\mathfrak{h})$ in the Lie kernel of $\mathfrak{h}$ such that $G \subset H_p$ and $\omega = j^\ast \iota_{v_p} \eta$; \end{itemize} then the action $G\action(M,\omega)$ admits an equivariant \momap, given in components $(i=1,\dots,n-k)$ by: \begin{displaymath} \morphism{f_i} {\Lambda^i\mathfrak{g}} {\Omega^{n-k-i}(M)} {q} {\displaystyle(-1)^{k}j^\ast\left(\iota(v_q)(s_{k}(p))\right)} ~. \end{displaymath} \end{corollary} \begin{proof} Starting from the given comoment $(s)$ it is possible to construct another comoment $(s^p)$ resorting to proposition \ref{prop:indmomap_equiv}. The sought \momap descends from $(s^p)$ via the consecutive application of propositions \ref{prop:restrict} and \ref{prop:NenMGinvariant} \begin{center} \begin{tikzcd}[column sep=large] H \ar[symbol=\action]{r} & (N,\eta) & & \mathfrak{h} \ar["(s)"]{r} & L_{\infty}(N,\eta) \\ H_p \ar[symbol=\action]{r} \ar[hook]{u}& (N,\iota_{v_p}\eta) & & \mathfrak{h}_p\ar[hook]{u} \ar[dashed,"(s^p)"]{r}[swap]{Prop. \ref{prop:indmomap_equiv}} & L_{\infty}(N,\iota_{v_p}\eta) \ar[equal]{d} \\ G \ar[symbol=\action]{r} \ar[hook]{u}& (N,\iota_{v_p}\eta) \ar[equal]{u} & & \mathfrak{g}\ar[hook]{u} \ar[dashed,"Prop. \ref{prop:restrict}"']{r} & L_{\infty}(N,\iota_{v_p}\eta) \ar["j^\ast"]{d} \\ G \ar[symbol=\action]{r} \ar[equal]{u}& (M,\omega =j^\ast \iota_{v_p}\eta) \ar[hook]{u} & & \mathfrak{g}\ar[equal]{u} \ar[dashed,"Prop. \ref{prop:NenMGinvariant}"']{r}& L_{\infty}(M,\omega) \end{tikzcd} \end{center} together with the observation that if the starting \momap $(s)$ is equivariant then so are the induced maps. \end{proof} \subsection{Conserved quantities along \momaps} Recall that one of the primary motivations for introducing the notion of moment map in symplectic geometry was to characterize the generators of symmetries of Hamiltonian systems in terms of conservation laws.
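The conservation statement below involves the spaces $Z_k(\mathfrak{g})$ and $B_k(\mathfrak{g})$ of Chevalley--Eilenberg $k$-cycles and $k$-boundaries; once the structure constants are fixed, these are computed by elementary linear algebra. The following Python sketch does this for the three-dimensional Heisenberg algebra (the choice of algebra, of basis and of the overall sign convention for $\partial$ is ours and only serves as an illustration).

\begin{verbatim}
import numpy as np
from itertools import combinations

# Heisenberg algebra: basis X, Y, Z with [X, Y] = Z, all other brackets zero
# (our choice of example; any structure constants can be plugged in instead)
dim = 3
C = np.zeros((dim, dim, dim))                 # [e_i, e_j] = sum_k C[i, j, k] e_k
C[0, 1, 2], C[1, 0, 2] = 1.0, -1.0

def bracket(x, y):
    return np.einsum('i,j,ijk->k', x, y, C)

pairs = list(combinations(range(dim), 2))
def wedge(x, y):
    return np.array([x[i] * y[j] - x[j] * y[i] for (i, j) in pairs])

e = np.eye(dim)
# degree-2 boundary: del(x ^ y) = [x, y]   (overall signs differ in the
# literature, but kernels and images are unaffected)
d2 = np.array([bracket(e[i], e[j]) for (i, j) in pairs]).T      # Lambda^2 g -> g
# degree-3 boundary: del(x^y^z) = [x,y]^z - [x,z]^y + [y,z]^x
triples = list(combinations(range(dim), 3))
d3 = np.array([wedge(bracket(e[i], e[j]), e[k])
               - wedge(bracket(e[i], e[k]), e[j])
               + wedge(bracket(e[j], e[k]), e[i])
               for (i, j, k) in triples]).T                     # Lambda^3 g -> Lambda^2 g

rk2, rk3 = np.linalg.matrix_rank(d2), np.linalg.matrix_rank(d3)
print("dim Z_2 =", len(pairs) - rk2)   # 2 : X^Z and Y^Z are cycles
print("dim B_2 =", rk3)                # 0 : del(X^Y^Z) = 0
print("dim B_1 =", rk2)                # 1 : the centre direction Z is a boundary
\end{verbatim}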
As a generalization of a Hamiltonian system in multisymplectic geometry, let us take the triple composed of a smooth manifold $M$ together with an $n$-plectic form $\omega$ and a fixed Hamiltonian form $H\in \Omega^{n-1}_{\ham}(M,\omega)$. One could wonder whether the momenta, \ie the elements in the image of the \momap pertaining to a certain action, are preserved along the flow determined by $H$. The answer to this question is subsumed by the following theorem: \begin{theorem}[{\cite{Ryvkin2016}}] Let $(M,\omega)$ be a multisymplectic manifold and consider a multisymplectic action $\vAct:\mathfrak{g}\to \mathfrak{X}(M)$ on $(M,\omega)$ admitting a \momap $(f):\mathfrak{g}\to L_{\infty}(M,\omega)$. Let $H$ be a Hamiltonian $(n-1)$-form that is locally (resp. globally or strictly) preserved along all the fundamental vector fields of $\vAct$. The images of the components $f_k$ of the \momap are preserved in the following sense (compare with definition \ref{Def:conservedQuantities}): \smallskip \begin{adjustbox}{center} \begin{tabular}{| l | p{9em}| p{9em} | p{9em} | } \hline & $H$ locally preserved along $\vAct$& $H$ globally preserved along $\vAct$& $H$ strictly preserved along $\vAct$ \\ \hline $f_k(Z_k(\mathfrak{g}))$ & locally conserved & locally conserved & globally conserved \\ $f_k(B_k(\mathfrak{g}))$ & globally conserved & globally conserved & globally conserved \\ \hline \end{tabular} \end{adjustbox} \smallskip \\ where $Z_k(\mathfrak{g})$ and $B_k(\mathfrak{g})$ denote respectively $k$-cycles and $k$-boundaries in the Chevalley-Eilenberg homology of $\mathfrak{g}$. \end{theorem} Note that conservation, in the sense of definition \ref{Def:conservedQuantities}, of momenta (\ie elements in the image of $(f)$) is only assured for the images of cycles of the Chevalley-Eilenberg chain complex. This can be seen as the multisymplectic analogue of Noether's theorem, as explained in \cite{Herman2017} employing the language of \emph{weak homotopy moment maps}. \section{Gauge transformations}\label{Section:GaugeTransformations} In chapter \ref{Chap:MarcoPaper}, we will discuss the possible relationship existing between the $L_\infty$-algebras of observables pertaining to two multisymplectic forms differing by an exact form. Let us first fix the following nomenclature: \begin{definition}[$B$-related closed forms] Given two closed forms $\tilde{\omega},~\omega \in \Omega^{n+1}(M)$, we call them \emph{$B$-related} or \emph{gauge related} if there exists $B\in\Omega^n(M)$ such that \begin{displaymath} \tilde{\omega}= \omega + \d B \end{displaymath} \ie if the two closed forms sit in the same de Rham cohomology class with difference given by $\d B$. \end{definition} Consider two $B$-related $n$-plectic forms $\omega$ and $\tilde{\omega}$ on $M$ and a Lie algebra action $v:\mathfrak{g} \to \mathfrak{X}(M)$ which is multisymplectic with respect to $\omega$. One has the following: \begin{lemma} The infinitesimal action $v$ is multisymplectic with respect to both $\omega$ and $\tilde{\omega}$ if and only if $B$ is locally conserved along $v$. \end{lemma} \begin{proof} The action being multisymplectic with respect to both $\omega$ and $\tilde{\omega}$ means that \begin{displaymath} 0 = \mathcal{L}_{v_\xi} \tilde{\omega} = \mathcal{L}_{v_\xi} \omega + \mathcal{L}_{v_\xi} \dd B = \mathcal{L}_{v_\xi} \dd B \end{displaymath} therefore $\dd \mathcal{L}_{v_\xi}B = 0$ for all $\xi \in \mathfrak{g}$.
\end{proof} Consider now the case in which $v$ admits a \momap $(f):\mathfrak{g}\to L_\infty(M,\omega)$ with respect to $\omega$. \begin{lemma} A necessary condition for the existence of a \momap pertaining to $v$ with respect to both the multisymplectic forms $\omega$ and $\tilde{\omega}$ is that $B$ is globally conserved. \end{lemma} \begin{proof} Let $\tilde{f}:\mathfrak{g}\to L_\infty(M,\tilde{\omega})$ be an \Hcmm with respect to $\tilde{\omega}$; one has \begin{displaymath} \dd \tilde{f}_1(\xi) = -\iota_{v_\xi}\tilde{\omega}= -\iota_{v_\xi}\omega - \iota_{v_\xi}\dd B= \dd f_1(\xi) - \iota_{v_\xi}\dd B \qquad \forall \xi \in \mathfrak{g} ~, \end{displaymath} therefore $\iota_{v_\xi}\dd B = \dd\left(f_1(\xi)-\tilde{f}_1(\xi)\right)$ must be exact. A primitive $h$ of the latter must satisfy the following equation \begin{displaymath} \dd h = \iota_{v_\xi} \dd B = \mathcal{L}_{v_\xi} B - \dd \iota_{v_\xi}B ~, \end{displaymath} which means that $\mathcal{L}_{v_\xi} B $ has to be exact for all $\xi \in \mathfrak{g}$. \end{proof} \begin{remark} Observe that this is a simpler instance of the notion of equivalence between \momaps proposed in \cite{Fregier2015}. \end{remark} When considering gauge-related multisymplectic structures, the following lemma holds, see \cite[Beginning of \S7.2]{Fregier2015}. \begin{lemma}[Gauge transformation of \Hcmm]\label{lem:momaps} Let the infinitesimal action $v:\mathfrak{g}\to \mathfrak{X}(M)$ preserve the $n$-plectic form $\omega$ and admit a homotopy comoment map $(f):\mathfrak{g}\to L_\infty(M,\omega)$. Suppose that $B\in \Omega^n(M)$ is strictly conserved. Then $\widetilde{\omega}=\omega + dB$, which we assume to be $n$-plectic, is also preserved and admits a homotopy comoment map $(\widetilde{f}):\mathfrak{g}\to L_\infty(M,\widetilde{\omega})$, with components \begin{displaymath} \widetilde{f}_k = (f_k +\mathsf{b}_k) ~: {\wedge}^k \mathfrak{g}\to L^{1-k}\subseteq \Omega^{n-k}(M) ~, \end{displaymath} where \begin{displaymath} \morphism{\mathsf{b}_k:=(-)^k \iota^k_{\mathfrak{g}}B} {\Lambda^k\mathfrak{g}} {\Omega^{n-k}} {x_1\wedge\dots\wedge x_k} {- \varsigma(k+1) \iota(x_1\wedge\dots\wedge x_k) B} ~. \end{displaymath} \end{lemma} \begin{proof} Fix $p\in \Lambda^k\mathfrak g$, then: \begin{displaymath} \begin{aligned} \dd \tilde{f}_k(p) ~+&~ \tilde{f}_{k-1}(\partial p ) = \\ \equal{}& \dd f_k(p) + f_{k-1}(\partial p ) -\varsigma(k)\dd \iota_{v_p} B -\varsigma(k-1) \iota_{\partial v_p} B =\\ \equal{Lem. \ref{Lem:ExplicitHCCM}}& - \varsigma(k)\iota_{v_p}\omega -\varsigma(k)\dd \iota_{v_p} B -\varsigma(k-1) \iota_{\partial v_p} B \end{aligned} \end{displaymath} Plugging in the (multi)Cartan formula (Lemma \ref{lemma:multicartan}) and using the hypothesis that $\mathcal{L}_{v_\xi} B = 0$ we conclude that % \begin{displaymath} \begin{aligned} \dd \tilde{f}_k(p) +& \tilde{f}_{k-1}(\partial p ) = \\ =&~ - \varsigma(k)\iota_{v_p}\omega -[\varsigma(k+1)(-)^k + \varsigma(k)] \iota_{\partial p} B -\varsigma(k+1)(-)^k \iota_{v_p}\dd B =\\ =&~ - \varsigma(k)\iota_{v_p}\omega +\varsigma(k) \iota_{v_p}\dd B =\\ =&~ - \varsigma(k)\iota_{v_p}\tilde{\omega} \end{aligned} ~. \end{displaymath} The equality $\varsigma(k+1)(-)^k =- \varsigma(k)$ follows from the definition of $\varsigma(k)$ (see lemma \ref{lem:varsigmasignprops}). \end{proof} This last lemma will have a key role in chapter \ref{Chap:MarcoPaper} (see section \ref{Sec:DiagramGaugeTransf}) where the operator $\mathsf{b}_k$ will be recast in terms of the \emph{pairing operator} mentioned in remark \ref{Rem:SignedMultiContraction}.
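The sign identity $\varsigma(k+1)(-)^k=-\varsigma(k)$ invoked at the end of the proof can also be checked mechanically. The short Python sketch below assumes the convention $\varsigma(k)=-(-1)^{k(k+1)/2}$ common in the homotopy comoment map literature; since the thesis' own definition (lemma \ref{lem:varsigmasignprops}) is not reproduced in this section, this is merely a plausibility check under that assumption.

\begin{verbatim}
# Plausibility check of  varsigma(k+1) * (-1)^k == -varsigma(k)
# under the assumed convention  varsigma(k) = -(-1)^(k(k+1)/2).
def varsigma(k):
    return -(-1) ** (k * (k + 1) // 2)

assert all(varsigma(k + 1) * (-1) ** k == -varsigma(k) for k in range(1, 50))
print("identity verified for k = 1, ..., 49")
\end{verbatim}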
\section{Transgression} We already mentioned several times how multisymplectic geometry originated from the long-standing problem of identifying the apt geometric formulation to encode field-like mechanical systems. Other approaches to the geometric mechanics of infinite-dimensional systems (in sense of being parametrized by a continuous infinity of degrees of freedom) involve formal methods on infinite-dimensional manifolds. \footnote{We use the term "formal" because the problem of formalizing the concept of smoothness in infinite-dimensional spaces is particularly difficult. Above all, there is none completely general and uniquely recognized definition of smooth infinite-dimensional space (see \cite{Stacey2008} for a rundown on some of these possible notions).} Further inputs coming from physics allow a slightly finer characterization of the infinite-dimensional space to be considered. \\ The key idea is that the configuration space of such systems consists of smooth sections of a certain smooth fibre bundle called "configuration bundle" (the points of the base are the parameters and the fibres consist of all the values admissible by the field). Under certain condition, it is also possible to equip this infinite-dimensional configuration space with a canonical symplectic structure (see \cite{Miti2015} and references therein for a slightly more complete account). In other terms, configurations of a field theory are encoded by a (weakly) symplectic mapping space. \\ In many practical cases, the field configurations are mappings into a multisymplectic manifold (see e.g \cite{Forger2005}). The notion of \emph{transgression on mapping space} makes possible to relate the geometry of (finite-dimensional) multisymplectic manifold to the geometry of infinite-dimensional symplectic manifolds. Consider a smooth $m$-dimensional manifold $M$ together with a smooth, compact $s$-dimensional manifold $\Sigma$. Let us denote by $M^{\Sigma}$ the mapping space of smooth functions from $\Sigma$ to $M$: \begin{displaymath} M^\Sigma := \hom_{smooth}(\Sigma, M) ~. \end{displaymath} The space $M^\Sigma$ carries a Fr\'echet manifold structure (see \cite{Kri-Mich}) and the tangent space on a given smooth function $\gamma\in M^\Sigma$ is given by the space of sections of the pullback bundle of $TM$ along $\gamma$, \ie : \begin{displaymath} T_\gamma( M^\Sigma) = \Gamma(\gamma^\ast (TM)) ~. \end{displaymath} In particular, there is a well-defined notion of differential form over $M^\Sigma$ giving on any submanifold $\gamma$ a $C^{\infty}(\Sigma)$-multilinear $\R$-valued form on $T_\gamma (M^\Sigma)$. \note{check $C^{\infty}(\Sigma)$-multilinear} One can induce a differential form on $M^\Sigma$ from any differential form on $M$: \begin{definition}[Transgression] We call \emph{transgression} the graded map \begin{displaymath} (-)^\ell: \Omega(M) \to \Omega(M^\Sigma)[-s] \end{displaymath} defined on any $\alpha\in \Omega^n(M)$, $\gamma \in M^{\Sigma}$ and $v_1,\dots v_{n-s}\in T_{\gamma}(M^\Sigma)$, by \begin{displaymath} \alpha^\ell \eval_{\gamma}(v_1,\dots,v_{n-s})= \int_{\Sigma} \gamma^\ast \left( \iota_{v_1\wedge\dots\wedge v_{n-s}}\alpha\eval_\gamma \right) ~. \end{displaymath} Alternatively, one can see the transgression map as the composition of the pullback along the \emph{evaluation map} \begin{displaymath} \morphism{ev} {\Sigma\times M^\Sigma} {M} {(x,\gamma)} {\gamma(x)} ~. 
\end{displaymath} with the operation of integration along the fibres (of the trivial fibration $\Sigma\times M^\Sigma \to M^\Sigma$): \begin{displaymath} \int_\Sigma ~: \Omega({\Sigma\times M^\Sigma}) \to \Omega(M^\Sigma) ~. \end{displaymath} See, for instance, \cite[\S 3.7]{Bry}. \end{definition} % \begin{example}[Transgression on loop spaces]\label{Ex:TransgressionOnLoops} Let $M$ be a smooth oriented manifold and $LM = C^\infty(S^{1},M) $ be the \emph{free loop space} (see \cite{Bry} and Remark \ref{Rem:BryLoopSpaces}). The transgression on the loop space is given by a degree $-1$ chain map \[ \begin{tikzcd}[column sep= small,row sep=0ex] \ell \colon& \Omega^{\bullet}(M) \arrow[r]& \Omega^{\bullet -1}(LM) \\ & \alpha (\textvisiblespace)\arrow[r, mapsto]& \left.\alpha^{\ell} \right\vert_{\gamma} = \left.{\displaystyle\int^{2\pi}_{0} \iota_{\dot{\gamma}}\alpha(\textvisiblespace)} \right\vert_{\gamma(s)} ~ ds \end{tikzcd} \] \end{example} Since the transgression commutes with the de Rham differential, one has that $n$-plectic forms on $M$ transgress to pre-$(n-s)$-plectic forms on $M^\Sigma$. \note{this small result would have been worth including here, but it is skipped for reasons of time} A particularly interesting property of \momaps, already noticed in \cite[\S 6]{Fregier2015} and \cite[\S 11]{Callies2016}, is that $G$-actions on pre-$2$-plectic manifolds $(M,\omega)$ can be transgressed to ordinary moment maps on the associated pre-symplectic loop space $LM:=M^{S^1}$. \\ A similar result holds in the case of more general mapping spaces. Observe first that any smooth action $\vartheta:G\action M$ can be lifted to an action $\vartheta^\Sigma:G\action M^\Sigma$ given point-wise by \begin{displaymath} \morphism{\vartheta^{\Sigma}} {G\times M^\Sigma} {M^\Sigma} {(g,\gamma)} {\left(x\mapsto \vartheta(g,\gamma(x))\right)} ~. \end{displaymath} The key result is that, when $\vartheta:G\action (M,\omega)$ admits a \momap, the same holds for $\vartheta^\Sigma: G \action (M^\Sigma,\omega^\ell)$: \begin{theorem}[{\cite[Corollary 6.3]{Callies2016}}] Consider a pre-$n$-plectic manifold $(M,\omega)$ and let $ f:\mathfrak{g}\to L_\infty(M,\omega) $ be a \momap with respect to the action $\vartheta:G\action (M,\omega)$. \\ Then the $L_\infty$-morphism $f^\ell: \mathfrak{g}\to L_{\infty}(M^\Sigma,\omega^\ell)$ with components given by \begin{displaymath} (f^\ell)_k = (f_k)^\ell : \Lambda^k \mathfrak{g} \to \Omega^{n-s-k}(M^\Sigma) \end{displaymath} is a \momap with respect to the action $\vartheta^{\Sigma}:G\action (M^\Sigma,\omega^\ell)$. \end{theorem} This last result provides further hints on how the higher symplectic geometry on $M$ can interact with the ordinary geometry on $M^{\Sigma}$. We will return to the topic of transgression on loop spaces in chapter \ref{Chap:MauroPaper}. \ifstandalone \bibliographystyle{../../hep} \chapter{Graded permutators and Unshuffles}\label{App:UnshuffleAtors} In this appendix, we deal with the combinatorics involved when working with $L_\infty$-algebras. Specifically, we will talk about unshuffle permutations and how they act on the tensor algebra $T(V)$ of a given graded vector space $V$. The key results of this appendix are given in section \ref{Section:AppendixProofPreLie}, which contains direct computations for the associators of the \RN products. \section{Unshuffles}\label{subsection:UnshufflesAbstract} The term "shuffle" evokes the idea of shuffling a deck of cards.
Suppose we have a pack of $p$ cards and a pack of $q$ cards and we build a pack of $p+q$ cards, whilst retaining the order on the two "sub-packs". The result is a $(p,q)$-shuffle. A real-life shuffling consists of several repetitions of the previous operation. $(p,q)$-shuffles form a subgroup of the group $S_{p+q}$ of permutations of $p+q$ elements. The reverse of shuffling two decks of cards is what is called an "unshuffle". \begin{definition}[Unshuffles]\label{Def:Unshuffles} Let be $k_1,..k_\ell$ positive integers summing up to the integer $n$. Consider the group of permutation of $n$-elements $S_n$. The group of $(k_1,\dots,k_\ell)$-unshuffles is the subgroup of $S_n$ defined as follows: \begin{equation}\label{Eq:UnshufflesSet} \ush{k_1,k_2,\dots,k_\ell} : = \left\lbrace \sigma \in S_{k_1+\dots+k_\ell} \,\left|~ \begin{aligned} &\sigma_i < \sigma_{i+1}\\ &\forall i~\neq~ k_1,~ (k_1+k_2), \dots,~(k_1+\dots+k_{\ell}) \end{aligned} \right.\right\rbrace ~. \end{equation} \end{definition} \begin{reminder}[Cauchy representation and product of permutations]\label{rem:cauchyRep} Recall at first that any permutation $\sigma\in S_{k+\ell}$ can be denoted by its corresponding canonical Cauchy two-lines representation $$ \sigma= \begin{pmatrix} 1 & \dots & k & k+1 & \dots & k+\ell \\ \sigma_1 & \dots & \sigma_k & \sigma_{k+1} & \dots & \sigma_{k+\ell} \end{pmatrix}; $$ (permutations are in a one-to-one correspondence with bijections on a finite set). Recall also that the direct product of two permutations can be denoted by the cartesian product of their canonical representation. In other terms, given $$ \mu= \begin{pmatrix} 1 & \dots & k \\ \mu_1 & \dots & \mu_k \end{pmatrix} \qquad \nu= \begin{pmatrix} 1 & \dots & \ell \\ \nu_1 & \dots & \nu_\ell \end{pmatrix} $$ one has $$ \mu\times \nu= \begin{pmatrix} 1 & \dots & k & k+1 & \dots & k+\ell \\ \mu_1 & \dots & \mu_k & \nu_{1} & \dots & \nu_{\ell} \end{pmatrix}~. $$ \end{reminder} \begin{remark}\label{Rem:UnshufflesAsCoset} Observe that the group of unshuffles $\ush{p,q}$ is a set of representatives for the left cosets of the canonical embedding $S_p\times S_q \hookrightarrow S_{p+q}$. In other words, for any $\eta \in S_{p+q}$ exists an unique decomposition $\eta = \sigma \circ \tau$ with $\sigma \in \ush{p,q}$ and $\tau \in S_p\times S_q$. \\ In simpler terms, an unshuffle $\sigma\in\ush{p,q}$ rearrange the ordered list $(1,2,\dots,p+q)$ into a list that is separately ordered in the first $p$ and the second $q$ indexes. \end{remark} % \begin{remark}\label{Rem:UnshufflesCardinality} Recall that: $$ \# \ush{k_1,\dots,k_\ell} = \dfrac{(k_1+k_2+\dots+k_\ell)!}{k_1! k_2! \dots k_\ell!} $$ since $S_{(k_1+\dots+k_\ell)}= S_{k_1}\times S_{k_2}\times \dots S_{k_\ell} \circ \ush{k_1,\dots,k_\ell}$. In particular \begin{equation}\label{Eq:UnshufflesCardinality} \# \ush{k,\ell} = \frac{(k+\ell)!}{k!\ell!} = \binom{k+\ell}{k} \end{equation} where $()$ denotes the Newton binomial coefficient. \end{remark} \begin{remark}\label{Rem:UnshuffleSign} Observe that the general element $\sigma \in \ush{2,n-2}$ can be written as $$ \sigma=(i,j;1,\dots,\hat{i},\dots,\hat{j},\dots,n) \qquad \text{for}~ 1\leq i < j \leq n~, $$ therefore $|\sigma| = (-)^{i+j+1}$. 
Generalizing, any element $\sigma\in\ush{k,n-k}$ can be written as $$ \sigma=(i_1,\dots,i_k;1,\dots,\hat{i_1},\dots,\hat{i_k},\dots,n) \qquad \text{for}~ 1\leq i_1 < i_2 <\dots< i_k \leq n~, $$ with \begin{displaymath} \begin{aligned} |\sigma| =&~ (-)^{i_1-1}(-)^{i_2-2}\dots(-)^{i_k-k} \\ =&~ (-)^{\sum_{j=1}^k i_j}(-)^{-\sum_{j=1}^k 1}= \\ =&~ (-)^{\sum_{j=1}^k i_j}(-)^{\frac{k(k+1)}{2}} ~. \end{aligned} \end{displaymath} \end{remark} \begin{remark}[Decomposition of unshuffles]\label{rem:DecompositionOfUnshuffles} Recall that when a generic $(k,\ell)$-unshuffle acts on a list of elements $(1,2,\dots,k+\ell)$, in the resulting list the element $1$ can only appear in the first entry or at entry $(k+1)$. This is due to the fact that index $1$ is the minimum in the total order $(1,\dots, k+\ell)$ then, according to the definition of the group $\ush{k,\ell}$, it can only occur as the first element in the first batch of $k$ elements or the first in the second batch of $\ell$ elements. \\ According to that, one can see that the group of $(k,\ell)$-unshuffles is the union of two subsets $\ush{k,\ell} = X \cup Y$ where \begin{displaymath} X = \lbrace \mathbb{1}\times \tilde{\sigma} \quad\vert \quad \tilde{\sigma}\in \ush{k-1,\ell} \rbrace \end{displaymath} is the subgroup that keeps fixed the first index (\ie $\sigma_1 = 1$), and \begin{displaymath} Y = \left\lbrace (\varsigma_{(k+1)}\times \mathbb{1}_{\ell-1}) \circ (\mathbb{1}\times \tilde{\sigma}) \quad\left\vert \quad \tilde{\sigma}\in \ush{k,\ell-1} \right.\right\rbrace \end{displaymath} is the subset that keeps the first index in the $k+1$ entry (i.e $\sigma_{k+1}=1$). Note that $\varsigma_{(k+1)}$ is the cyclic permutation of the first $k+1$ elements that takes in account the final position of the first index. Alternatively, elements in $Y$ can be expressed as \begin{displaymath} (\varsigma_{(k+1)}\times \mathbb{1}_{\ell-1}) \circ (\mathbb{1}\times \tilde{\sigma}) = (\mathbb{1}_k\times \varsigma_{(\ell)}^{-1})\circ (\tilde{\sigma}\times \mathbb{1}) \circ (\varsigma_{(k+\ell)}) \end{displaymath} where $\tilde{\sigma}\in \ush{k,\ell-1}$. Here the first cyclic permutation is responsible for putting index $1$ in the last position and the second one is an inverse cyclic permutation of the last $\ell$ indices and is responsible of putting $1$ in position $k+1$. \end{remark} In defining the notion of $L_\infty$-morphismss in the multibrackets presentation one also has to deal with the following subgroup of the unshuffles: \begin{definition}[Ordered unshuffles]\label{Def:OrderedUnshufflesAbstract} Let be $k_1 \leq k_2 \dots \leq k_\ell$ non-zero integers summing up to the integer $n$. The group of $(k_1,\dots,k_\ell)$ \emph{ordered unshuffles} is the subgroup of $\ush{k_1,\dots,k_\ell}$ defined as follows: \begin{displaymath}\label{Eq:OrderedUnshufflesSet} \mathclap{ \ush{k_1,\dots,k_\ell}^< = \left\lbrace \sigma \in \ush{k_1,k_2,\dots,k_\ell} \,\left|~ \begin{aligned} &\sigma(k_1+\dots+k_{j-1}+1)<\sigma(k_1+\dots+k_{j}+1) \\[-1em] &\qquad \forall j \text{~s.t.~} k_{j-1}=k_j \end{aligned} \right.\right\rbrace ~. 
} \end{displaymath} \end{definition} When working with unshuffles, it is useful to also introduce the so-called \emph{cyclic permutations}: \begin{definition}[Cyclic permutation]\label{def:CyclycPermutation} We call \emph{cyclic permutation} (of $n$ elements) the permutation $\varsigma_{(n)}\in S_n$ given by the following Cauchy representation \begin{displaymath} \varsigma_{(n)}= \begin{pmatrix} 1 & 2 &\dots & n-1 & n \\ 2 & 3 &\dots & n & 1 \end{pmatrix}~. \end{displaymath} We omit the subscript $(n)$ when there is no ambiguity on the number of elements that are cyclically permuted. We denote by $\varsigma^k$ the consecutive application of the cyclic permutation $k$ times. \end{definition} \begin{remark}\label{ref:cyclingUnhsuffles} Observe that an unshuffle $\sigma\in \ush{k,\ell}$ is a permutation that preserves the order of the first $k$ and the last $\ell$ elements; hence $\varsigma^k \circ \sigma$ is a permutation which preserves the order of the first $\ell$ and the last $k$ elements. In other words: \begin{displaymath} \Big\lbrace\varsigma^k \circ \sigma \,\Big|~ \sigma \in \ush{k,\ell} \Big\rbrace = \ush{\ell,k} ~. \end{displaymath} \end{remark} \section{Permutators on the unshuffles} In section \ref{Section:ActionsonTensorSpaces}, the action of the permutation group on graded tensor spaces (tensor products of a given vector space with itself several times) was introduced. \\ The main subtlety with respect to ordinary vector spaces comes from the Koszul sign convention embedded in the graded braiding operator. More precisely, having fixed a graded vector space $L$ and given $\sigma \in S_n$, one can define two linear operators: the even representation of $\sigma$: \begin{displaymath} \morphism{B_\sigma} {L^{\otimes n}} {L^{\otimes n}} {v_1\otimes v_2\otimes \dots\otimes v_n} {\epsilon(\sigma,v_i) v_{\sigma_1}\otimes v_{\sigma_2}\otimes \dots\otimes v_{\sigma_n}} ~, \end{displaymath} where $ \epsilon(\sigma,v_i)$ is the Koszul sign (produced by the permutation of homogeneous graded elements), and the odd representation of $\sigma$ \begin{displaymath} \morphism{P_\sigma} {L^{\otimes n}} {L^{\otimes n}} {v_1\otimes v_2\otimes \dots\otimes v_n} {\chi(\sigma,v_i) v_{\sigma_1}\otimes v_{\sigma_2}\otimes \dots\otimes v_{\sigma_n}} ~, \end{displaymath} where $\chi(\sigma,v_i)=(-)^\sigma \epsilon(\sigma,v_i)$. Clearly one has that $P_\sigma = (-)^\sigma B_\sigma$. Since the previous mappings are two bona fide actions of the permutation group on $L^{\otimes n}$, they also satisfy the following property: \begin{displaymath} B_\sigma \circ B_{\sigma'} = B_{\sigma\circ \sigma'} ~. \end{displaymath} Considering a subset of the permutation group $\Omega \subset S_n$, one can define the permutator operators: \begin{definition}[(even,odd) Permutator of $\Omega\subset S_n$] We call even (odd) permutator of $\Omega\subset S_n$ the linear operator $$B_{\Omega} = \sum_{\sigma \in \Omega} B_\sigma \qquad (P_{\Omega} = \sum_{\sigma \in \Omega}(-)^\sigma B_\sigma)$$ \end{definition} The even (resp. odd) permutator corresponding to the full symmetric group $\Omega =S_n$ takes the special name of \emph{symmetrizator}, denoted as $\mathcal{S}_n$ in definition \ref{Def:Symmetrizator} (resp. \emph{skew-symmetrizator}, denoted by $\mathcal{A}_n$). The reason for such a naming comes from the fact that precomposing a multilinear operator with a (skew-)symmetrizator yields a graded (skew-)symmetric operator.
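Since all the bookkeeping below ultimately rests on the combinatorics of unshuffles, a mechanical check can be reassuring. The following Python sketch (ours, purely illustrative) enumerates the $(k,\ell)$-unshuffles of definition \ref{Def:Unshuffles}, and verifies the cardinality formula \eqref{Eq:UnshufflesCardinality} together with the dichotomy of remark \ref{rem:DecompositionOfUnshuffles}, namely that the value $1$ occurs either in the first or in the $(k+1)$-th entry.

\begin{verbatim}
from itertools import permutations
from math import comb

def unshuffles(k, l):
    """(k,l)-unshuffles in one-line notation: permutations of {1,...,k+l}
    that are increasing on the first k and on the last l entries."""
    n = k + l
    return [s for s in permutations(range(1, n + 1))
            if all(s[i] < s[i + 1] for i in range(k - 1))
            and all(s[i] < s[i + 1] for i in range(k, n - 1))]

for k, l in [(2, 2), (2, 3), (3, 4)]:
    U = unshuffles(k, l)
    assert len(U) == comb(k + l, k)              # cardinality formula
    assert all(s.index(1) in (0, k) for s in U)  # the value 1 sits at entry 1 or k+1
    X = [s for s in U if s[0] == 1]              # the family X ~ ush(k-1, l)
    Y = [s for s in U if s[k] == 1]              # the family Y ~ ush(k, l-1)
    assert len(X) == comb(k + l - 1, k - 1) and len(Y) == comb(k + l - 1, k)
    print("ush(%d,%d): %d elements" % (k, l, len(U)))
\end{verbatim}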
Note also that, for any given multilinear operator $H$ and any permutation $\sigma$, one has \begin{displaymath} H \circ B_{\sigma} = \begin{cases} H & \quad \text{if $H$ is graded symmetric}\\ (-)^{\sigma} H & \quad \text{if $H$ is graded skew-symmetric} \end{cases} \end{displaymath} and in particular, given any subset $\Omega < S_n$ and $H$ graded symmetric, one has $H \circ B_{\Omega} = (\#\Omega) H$. As far as $L_\infty$-algebras are concerned, one mainly has to deal with the following three permutators (corresponding to the permutation subgroups introduced in subsection \ref{subsection:UnshufflesAbstract}): \begin{definition}[Unshuffles permutator (Unshuffleator)]\label{Def:Unshuffleator} We call even (odd) \emph{unshuffleator} the permutator \begin{displaymath} B_{k_1,k_2,\dots,k_\ell} := B_{\ush{k_1,k_2,\dots,k_\ell}} \qquad (P_{k_1,k_2,\dots,k_\ell} = P_{\ush{k_1,k_2,\dots,k_\ell}}) \end{displaymath} pertaining to the subgroup of unshuffles (see definition \ref{Def:Unshuffles}). \end{definition} \begin{remark} Remark \ref{Rem:UnshufflesAsCoset} can be recast in terms of unshuffleators into the following equation \begin{equation}\label{Eq:DecompositionofSymmetrizator} \mathcal{S}_n = \frac{\ell!(n-\ell)!}{n!} \left(\mathcal{S}_\ell \otimes \mathcal{S}_{n-\ell}\right) \circ B_{\ell,n-\ell} ~. \end{equation} \end{remark} \note{ \Eg $B_{k,\ell}$ contains all the permutations that preserve the order of the first $k$ elements and of the last $\ell$ elements. \\ A similar definition can be stated for the odd representation. } \begin{definition}[Ordered unshuffleator]\label{Def:OrderedUnshuffleator} We call even (odd) ordered \emph{unshuffleator} the permutator \begin{displaymath} B_{k_1,k_2,\dots,k_\ell}^< := B_{\ush{k_1,k_2,\dots,k_\ell}^<} \qquad (P_{k_1,k_2,\dots,k_\ell}^< = P_{\ush{k_1,k_2,\dots,k_\ell}^<}) \end{displaymath} pertaining to the subgroup of ordered unshuffles (see definition \ref{Def:OrderedUnshufflesAbstract}). Recall in particular that $k_1 \leq k_2 \leq \dots \leq k_\ell$. \end{definition} \begin{remark} Observe that when $\ell=2$ and $k_1=k_2=k$ one has $$ B_{k,k}^< = \mathbb{1}\otimes B_{k-1,k}$$ and in particular $$ B_{\underbrace{1,\dots,1}_{j~\text{times}},k}^< = B_{j,k} ~. $$ \end{remark} \begin{definition}[Cyclic permutator]\label{Notation:Cyclic permutator} We call even \emph{cyclic permutator} the permutator \begin{displaymath} \cycPermutator_{(n)} = B_{\varsigma(n)} \end{displaymath} pertaining to the cyclic permutation of $n$ elements (see definition \ref{def:CyclycPermutation}). We denote as $\tilde{\cycPermutator}_{(n)} = P_{\varsigma(n)}$ the corresponding odd permutator. \end{definition} \begin{remark}[The Koszul sign of the cyclic permutator]\label{rem:KoszulSignCyclic} The cyclic permutation can be obtained by the consecutive application of $n-1$ contiguous swaps. Namely \begin{displaymath} \cycPermutator_{(n)} = B_{n\leftrightarrow n-1}\circ B_{n-1\leftrightarrow n-2} \dots B_{2\leftrightarrow 1} ~, \end{displaymath} where $B_{i\leftrightarrow i+1}$ denotes the permutation swapping the $i$-th element with its immediate successor. Therefore $(-)^{|\varsigma|} = (-)^{n-1}$. When evaluated, $\cycPermutator_{(n)}$ yields the following expressions: \begin{align*} \cycPermutator_{(n)} (v_1,\dots,v_n) &= (-)^{|v_1|(\sum_{i=2}^n|v_i|)} v_2\otimes\dots\otimes v_{n}\otimes v_1 ~, \\ \cycPermutator_{(n)}^{-1} (v_1,\dots,v_n) &= (-)^{|v_n|(\sum_{i=1}^{n-1}|v_i|)} v_n \otimes v_1 \otimes \dots \otimes v_{n-1} ~.
\end{align*} Hence \begin{align*} \epsilon(\varsigma,v_i)=&~(-)^{|v_1|(\sum_{i=2}^n|v_i|)} ~, \\ \chi(\varsigma,v_i) =&~ (-)^{n-1}(-)^{|v_1|(\sum_{i=2}^n|v_i|)} ~. \end{align*} Note also that from the very definition of cyclic permutation one has that $\cycPermutator^{-\ell}_{(k+\ell)} = \cycPermutator^k_{(k+\ell)} $. \end{remark} \begin{lemma}[Properties of unshuffleators] Consider the even representation of the group of permutations, the following equations holds \begin{align} B_{k,\ell,m} ~=&~ B_{k,\ell}\otimes \mathbb{1}_m \circ B_{k+\ell,m} \label{Eq:compUnsh}\\ B_{\ell,k} ~=&~ \cycPermutator^k_{(k+\ell)} \circ B_{k,\ell} \label{Eq:CyclicUnshuffle}\\ B_{k,\ell} ~=&~ \mathbb{1}\otimes B_{k-1,\ell} + (\mathbb{1}_k\otimes\cycPermutator_{(\ell)}^{-1}) \circ (B_{k,\ell-1}\otimes\mathbb{1}) \circ (\cycPermutator_{(k+\ell)}) \label{Eq:decompUnsh} \end{align} The same equations also hold for the odd representation replacing $B_\sigma $ with $P_\sigma $. \end{lemma} \begin{proof} The statement follows from the properties of the group of unshuffles. \begin{itemize} \item Equation \eqref{Eq:compUnsh} follows from the observation that a permutation $\sigma \in \ush{k,\ell,m}$, being a permutation which preserves the order of the first $k$, the second $\ell$ and the last $m$ elements, can be decomposed as $\sigma = (\alpha \otimes \mathbb{1}_m) \circ \beta$ with $\alpha \in \ush{k,\ell}$ and $\beta \in \ush{k+\ell,m}$. \item Equation \eqref{Eq:CyclicUnshuffle} follows from remark \ref{ref:cyclingUnhsuffles} \item Equation \eqref{Eq:decompUnsh} follows from remark \ref{rem:DecompositionOfUnshuffles}. \end{itemize} \end{proof} % \begin{example} Equation \eqref{Eq:decompUnsh} could be better understood through examples. Inspecting the action of the unshuffleator on vectors $v_1,v_2,\dots$, denoted simply as $1,2,\dots$, one gets \begin{align*} B_{2,2} (1,2,3,4) =& [ + (1,2;3,4) - (1,3;2,4) + (1,4;2,3) ] + \\ & [ + (3,4;1,2) - (2,4;1,3) + (2,3;1,4) ] = \\ =& (1)\otimes P_{1,2} (2,3,4) - (\mathbb{1}_2\otimes\cycPermutator_{2}) \circ P_{2,1}(2,3,4)\otimes (1) \\ =& \mathbb{1}\otimes P_{1,2} (1,2,3,4) - (\mathbb{1}_2\otimes\cycPermutator_{2}) \circ P_{2,1}\otimes \mathbb{1} (2,3,4,1) \\ =& \mathbb{1}\otimes P_{1,2} (1,2,3,4) + (\mathbb{1}_2\otimes\cycPermutator_{(2)})\circ P_{2,1}\otimes \mathbb{1} \circ\cycPermutator_{(4)} (1,2,3,4) ~; \end{align*} % \begin{align*} B_{2,3} (1,2,3,4,5) =& [ + (1,2;3,4,5) - (1,3;2,4,5) + (1,4;2,3,5) - (1,5;2,3,4) ] + \\ & [ + (2,3;1,4,5) - (2,4;1,3,5) + (2,5;1,3,4) + \\ & \phantom{[} + (3,4;1,2,5) - (3,5;1,2,4) + (4,5;1,2,3)] ~. \end{align*} \end{example} \section{Explicit expressions for the associators of the Nijenhuis$-$Richardson products}\label{Section:AppendixProofPreLie} We focus now on the actions of the previous permutations on the tensor products of homogeneous multilinear maps. In this section we deliver an explicit proof of the pre-Lie property of \RN products (see propositions \ref{Prop:SymmetricGerstenhaberAssociators} and \ref{prop:RNExplictPreLie}). 
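Before entering the sign-heavy computations of this section, it may be useful to have a mechanical way of producing the Koszul sign $\epsilon(\sigma,v_i)$ of a permutation acting on homogeneous elements. The Python sketch below (ours; degrees are drawn at random, purely for illustration) also confirms the expressions of remark \ref{rem:KoszulSignCyclic} for the cyclic permutator.

\begin{verbatim}
import random

def koszul_sign(sigma, deg):
    """Koszul sign eps(sigma, v) in B_sigma(v_1 x...x v_n) = eps * v_{s_1} x...x v_{s_n}.
    sigma: one-line notation with 1-based values; deg[a-1]: degree of v_a."""
    sign = 1
    for i in range(len(sigma)):
        for j in range(i + 1, len(sigma)):
            if sigma[i] > sigma[j]:               # v_{sigma[j]} jumped over v_{sigma[i]}
                sign *= (-1) ** (deg[sigma[i] - 1] * deg[sigma[j] - 1])
    return sign

def perm_sign(sigma):
    inv = sum(1 for i in range(len(sigma))
                for j in range(i + 1, len(sigma)) if sigma[i] > sigma[j])
    return (-1) ** inv

for _ in range(200):
    n = random.randint(2, 7)
    deg = [random.randint(0, 3) for _ in range(n)]
    cyc = tuple(range(2, n + 1)) + (1,)           # one-line notation of varsigma_(n)
    assert koszul_sign(cyc, deg) == (-1) ** (deg[0] * sum(deg[1:]))  # eps(varsigma, v_i)
    assert perm_sign(cyc) == (-1) ** (n - 1)                         # (-1)^{|varsigma|}
print("Koszul sign checks passed")
\end{verbatim}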
\begin{lemma}\label{Lemma:CyclicCommutation} Given any homogeneous multilinear map $\mu_k: L^{\otimes k}\to L$, one has \begin{displaymath} \cycPermutator_{(\ell+1)}^{-1} \circ \left(\mathbb{1}_\ell \otimes \mu_k \right) \circ \cycPermutator_{(\ell+k)}^k = \mu_k \otimes \mathbb{1}_\ell \end{displaymath} and \begin{displaymath} \tilde{\cycPermutator}_{(\ell+1)}^{-1} \circ \left(\mathbb{1}_\ell \otimes \mu_k \right)\circ \tilde{\cycPermutator}_{(\ell+k)}^k = (-)^{\ell(k+1)}\mu_k \otimes \mathbb{1}_\ell \end{displaymath} \end{lemma} \begin{proof} By inspection, let us denote as $1,\dots,i,\dots$ the graded vectors $v_1,\dots,v_i,\dots$. From the definitions of cyclic permutator, one has that \begin{displaymath} \begin{aligned} \cycPermutator_{(\ell+k)} &(1,\dots,k,k+1,\dots,k+\ell) = \\ =&~ (-)^{|1|\cdot(|2|+\dots+|k+\ell|)} (2,\dots,k,k+1,\dots,k+\ell,1) ~. \end{aligned} \end{displaymath} Iterating, one gets \begin{displaymath} \begin{aligned} \cycPermutator_{(\ell+k)}^2 &(1,\dots,k,k+1,\dots,k+\ell) = \\ =&~ (-)^{|1|\cdot(|2|+\dots+|k+\ell|)} \\ &~(-)^{|2|\cdot(|1|+|3|+\dots+|k+\ell|)} (3,\dots,k,k+1,\dots,k+\ell,1,2) \end{aligned} \end{displaymath} and \begin{align*} \cycPermutator_{(\ell+k)}^k &(1,\dots,k,k+1,\dots,k+\ell) = \\ =&~ (-)^{|1|\cdot(|2|+\dots+|k+\ell|)} \\ &~(-)^{|2|\cdot(|1|+|3|+\dots+|k+\ell|)} \\ &~\vdots \\ &~(-)^{|k|\cdot(|k+1|+\dots+|k+\ell| + |1| + \dots + |k-1|)} (k+1,\dots,k+\ell,1,\dots,k) = \\ =&~ (-)^{|1|\cdot(|k+1|+\dots+|k+\ell|)} (-)^{|1|\cdot(|2|+\dots+|k|)} \\ &~(-)^{|2|\cdot(|k+1|+\dots+|k+\ell|)} (-)^{|2|\cdot(|1|+|3|+\dots+|k|)} \\ &~\vdots \\ &~(-)^{|k|\cdot(|k+1|+\dots+|k+\ell|)} (-)^{|k|\cdot(|1|+\dots+|k-1|)} (k+1,\dots,k+\ell,1,\dots,k) = \\ =&~ (-)^{(|k+1|+\dots+|k+\ell|)(|1|+\dots+|k|)} (k+1,\dots,k+\ell,1,\dots,k) ~. \end{align*} Recall that the Koszul sign convention implies \begin{align*} \left(\mathbb{1}_\ell \otimes \mu_k \right)&(k+1,\dots,k+\ell,1,\dots,k) = \\ =&~ (-)^{|\mu_k|(|k+1|+\dots+|k+\ell|)} (k+1,\dots,k+\ell,\mu_k(1,\dots,k)) ~, \end{align*} therefore \begin{align*} \cycPermutator_{\ell+1}^{-1}& (k+1,\dots,k+\ell, \mu_k(1,\dots,k)) = \\ =&~ (-)^{|\mu_k(1,\dots,k)|\cdot(|k+1|+\dots+|k+\ell|)} (\mu_k(1,\dots,k),k+1,\dots,k+\ell) = \\ =&~ (-)^{(|\mu_k| +|1| +\dots+|k|)\cdot(|k+1|+\dots+|k+\ell|)} \cdot \left(\mu_k \otimes \mathbb{1}_\ell\right) (1,\dots,k,k+1,\dots,k+\ell) ~. \end{align*} % Composing all the previous steps we get \begin{displaymath} \begin{aligned} \left(\cycPermutator^{-1}_{\ell+1}\circ (\mathbb{1}_\ell \otimes \mu_k) \circ \cycPermutator^k_{\ell+k}\right) &(1,\dots,k+\ell) = \\ =&~ \cancel{(-)^{|\mu_k|(|k+1|+\dots+|k+\ell|)}} \\ &~ \cancel{(-)^{(|1|+\dots+|k|)(|k+1|+\dots+|k+\ell|)}} \\ &~ \cancel{ (-)^{|\mu_k|(|k+1|+\dots+|k+\ell|)}} \\ &~ \cancel{(-)^{(|1|+\dots+|k|)(|k+1|+\dots+|k+\ell|)}} \\ & \mu_k \otimes \mathbb{1}_\ell (1,\dots,k,k+1,\dots,k+\ell) \end{aligned} \end{displaymath} which yields the first equation. The extra sign in the second equation follows from noting that $\tilde{\cycPermutator}_n = (-)^{n-1} \cycPermutator_n$ (see remark \ref{rem:KoszulSignCyclic}). 
\end{proof} % Out of the cyclic permutator one can express a commutation rule involving the tensor product of multilinear maps: \begin{corollary}\label{Cor:CyclicCommutationMultiBracket} Given any two homogeneous multilinear maps $\mu_k: L^{\otimes k}\to L$ and $\eta_\ell:L^{\otimes\ell}\to L$, one has \begin{displaymath} \begin{aligned} \cycPermutator_{(2)} \circ \left(\mu_k\otimes\eta_\ell\right) \circ \cycPermutator^k_{(k+\ell)} &= (-)^{|\eta_\ell||\mu_k|} \left(\eta_\ell \otimes \mu_k\right) \\ \tilde{\cycPermutator}_{(2)} \circ \left(\mu_k\otimes\eta_\ell \right)\circ \tilde{\cycPermutator}^k_{(k+\ell)} &= (-)^{k\ell +1}(-)^{|\eta_\ell||\mu_k|} \left(\eta_\ell \otimes \mu_k\right) \end{aligned} ~. \end{displaymath} \end{corollary} \begin{proof} Employing the previous lemma twice, one gets the first equation easily \begin{displaymath} \begin{aligned} \cycPermutator_{(2)} \circ ( \mu_k\otimes \eta_\ell) \circ \cycPermutator^\ell_{(\ell+k)} =&~ \cycPermutator_{(2)} \circ (\mu_k\otimes\mathbb{1}) \circ (\mathbb{1}_k\otimes \eta_\ell) \circ \cycPermutator^\ell_{(\ell+k)}= \\ =&~ (\mathbb{1}\otimes \mu_k) \circ \cancel{ \cycPermutator^k_{(k+1)} \circ \cycPermutator_{(k+1)} \circ } (\eta_\ell \otimes \mathbb{1}_k) = \\ =&~ (-)^{|\mu_k||\eta_\ell|} \eta_\ell \otimes \mu_k ~, \end{aligned} \end{displaymath} where the last sign comes again from the Koszul convention on homogeneous maps, see equation \ref{Eq:TensorHomogeneousMaps}. \\ Regarding the second equation, one has only to note that \begin{displaymath} \tilde{\cycPermutator}_{(2)} \circ \left(\mu_k\otimes\eta_\ell\right) \circ \tilde{\cycPermutator}^k_{(k+\ell)} = -(-)^{(k+\ell+1)k} \cycPermutator_{(2)} \circ \left(\mu_k\otimes\eta_\ell \right)\circ \cycPermutator^k_{(k+\ell)} ~. \end{displaymath} \end{proof} \begin{lemma}\label{Lemma:UnshuffleSandwich} \begin{align*} (B_{m,k}) \circ & (\mu_n \otimes \mathbb{1}_{m+k-1}) \circ (B_{n,m+k-1}) = \\ =&~ (\mu_n \otimes \mathbb{1}_{m+k-1}) \circ (B_{n,m-1,k}) + \left(\mathbb{1}_m\otimes \mu_n\otimes \mathbb{1}_{k-1} \right) \circ (B_{m,n,k-1}) ~; \\[.2em] (P_{m,k}) \circ& (\mu_n \otimes \mathbb{1}_{m+k-1}) \circ (P_{n,m+k-1}) = \\ =&~ (\mu_n \otimes \mathbb{1}_{m+k-1}) \circ (P_{n,m-1,k}) + (-)^{m(n+1)}~ \left(\mathbb{1}_m\otimes \mu_n\otimes \mathbb{1}_{k-1} \right) \circ (P_{m,n,k-1}) ~. \end{align*} \end{lemma} \begin{proof} Applying equation \eqref{Eq:decompUnsh} to $B_{m,k}$, one can write \begin{align*} (B_{m,k}) \circ &(\mu_n \otimes \mathbb{1}_{m+k-1}) \circ (B_{n,m+k-1}) = \\ =&~ +(\mu_n\otimes B_{m-1,k})\circ (B_{n,m+k-1}) + \\ &~ +\left[ (\mathbb{1}_m\otimes\cycPermutator_{(k)}^{-1}) \circ (B_{m,k-1}\otimes \mathbb{1}) \circ \cycPermutator_{(m+k)} \right] \circ \left(\mu_n \otimes \mathbb{1}_{m+k-1}\right) \circ (B_{n,m+k-1}) = \\ =&~ X + Y ~. \end{align*} % From equation \eqref{Eq:compUnsh}, the first summand yields: \begin{displaymath} X = (\mu_n \otimes \mathbb{1}_{m+k-1}) \circ (B_{n,m-1,k}) ~; \end{displaymath} % the second summand results: \begin{displaymath} \mathclap{ \begin{aligned} Y \equal{asso.}& (\mathbb{1}_m\otimes\cycPermutator_{(k)}^{-1}) \circ (B_{m,k-1}\otimes \mathbb{1}) \circ \left[ \cycPermutator_{(m+k)} \circ \left(\mu_n \otimes \mathbb{1}_{m+k-1}\right) \right] \circ (B_{n,m+k-1}) = \\ \equal{Lem. 
\ref{Lemma:CyclicCommutation}}& (\mathbb{1}_m\otimes\cycPermutator_{(k)}^{-1}) \circ (B_{m,k-1}\otimes \mathbb{1}) \circ \left[ \left(\mathbb{1}_{m+k-1}\otimes\mu_n \right) \circ \cycPermutator_{(n+m+k-1)}^n \right] \circ (B_{n,m+k-1}) = \\ \equal{asso.}& (\mathbb{1}_m\otimes\cycPermutator_{(k)}^{-1}) \circ (\mathbb{1}_{m+k-1}\otimes\mu_n) \circ (B_{m,k-1}\otimes \mathbb{1}) \circ \left[ \cycPermutator_{(n+m+k-1)}^n \circ (B_{n,m+k-1}) \right] = \\ \equal{Eq. \eqref{Eq:CyclicUnshuffle}}& (\mathbb{1}_m\otimes\cycPermutator_{(k)}^{-1}) \circ (\mathbb{1}_{m+k-1}\otimes\mu_n) \circ (B_{m,k-1}\otimes \mathbb{1}) \circ (B_{m+k-1,n}) = \\ \equal{Eq. \eqref{Eq:compUnsh}}& (\mathbb{1}_m\otimes\cycPermutator_{(k)}^{-1}) \circ (\mathbb{1}_{m+k-1}\otimes\mu_n) \circ (B_{m,k-1,n}) = \\ \equal{asso.}& \left[\mathbb{1}_m\otimes \left(\cycPermutator_{(k)}^{-1} \circ \left(\mathbb{1}_{k-1}\otimes\mu_n\right)\right) \right] \circ (B_{m,k-1,n}) = \\ \equal{Lem. \ref{Lemma:CyclicCommutation}}& \left(\mathbb{1}_m\otimes \mu_n\otimes \mathbb{1}_{k-1} \right) \circ (\mathbb{1}_m\otimes \cycPermutator^{-n}_{(n+k-1)})\circ (B_{m,k-1,n}) = \\ \equal{Eq. \eqref{Eq:CyclicUnshuffle}}& \left(\mathbb{1}_m\otimes \mu_n\otimes \mathbb{1}_{k-1} \right) \circ (B_{m,n,k-1}) ~. \end{aligned} } \end{displaymath} Similarly, applying equation \eqref{Eq:decompUnsh} to $P_{m,k}$, one can write \begin{align*} (P_{m,k}) \circ& (\mu_n \otimes \mathbb{1}_{m+k-1}) \circ (P_{n,m+k-1}) = \\ =& +(\mu_n\otimes P_{m-1,k})\circ (P_{n,m+k-1}) + \\ & +\left[ (\mathbb{1}_m\otimes\tilde{\cycPermutator}_{(k)}^{-1}) \circ (P_{m,k-1}\otimes \mathbb{1}) \circ \tilde{\cycPermutator}_{(m+k)} \right] \circ \left(\mu_n \otimes \mathbb{1}_{m+k-1}\right) \circ (P_{n,m+k-1}) = \\ =&~ X + Y ~. \end{align*} % From equation \eqref{Eq:compUnsh}, the first summand yields: \begin{displaymath} X = (\mu_n \otimes \mathbb{1}_{m+k-1}) \circ (P_{n,m-1,k}) ~; \end{displaymath} % the second summand results: \begin{displaymath} \begin{aligned} Y \equal{asso.}& (\mathbb{1}_m\otimes\tilde{\cycPermutator}_{(k)}^{-1}) \circ (P_{m,k-1}\otimes \mathbb{1}) \circ \left[ \tilde{\cycPermutator}_{(m+k)} \circ \left(\mu_n \otimes \mathbb{1}_{m+k-1}\right) \right] \circ (P_{n,m+k-1}) = \\ \equal{Lem. \ref{Lemma:CyclicCommutation}}& (\mathbb{1}_m\otimes\tilde{\cycPermutator}_{(k)}^{-1}) \circ (P_{m,k-1}\otimes \mathbb{1}) \circ \\ \phantom{\equal{}}& \left[ (-)^{(m+k-1)(n+1)}\circ \left( \mathbb{1}_{m+k-1}\otimes\mu_n \right) \circ \tilde{\cycPermutator}_{(n+m+k-1)}^n \right]\circ (P_{n,m+k-1}) = \\ \equal{asso.}& (-)^{(m+k-1)(n+1)}~ (\mathbb{1}_m\otimes\tilde{\cycPermutator}_{(k)}^{-1}) \circ (\mathbb{1}_{m+k-1}\otimes\mu_n) \circ (P_{m,k-1}\otimes \mathbb{1}) \circ \\ \phantom{\equal{}}&\circ \left[ \tilde{\cycPermutator}_{(n+m+k-1)}^n \circ (P_{n,m+k-1}) \right] = \\ \equal{Eq. \eqref{Eq:CyclicUnshuffle}}& (-)^{(m+k-1)(n+1)}~ (\mathbb{1}_m\otimes\tilde{\cycPermutator}_{(k)}^{-1}) \circ (\mathbb{1}_{m+k-1}\otimes\mu_n) \circ (P_{m,k-1}\otimes \mathbb{1}) \circ \\ \phantom{\equal{}}&\circ (P_{m+k-1,n}) = \\ \equal{Eq. \eqref{Eq:compUnsh}}& (-)^{(m+k-1)(n+1)}~ (\mathbb{1}_m\otimes\tilde{\cycPermutator}_{(k)}^{-1}) \circ (\mathbb{1}_{m+k-1}\otimes\mu_n) \circ (P_{m,k-1,n}) = \\ \equal{asso.}& (-)^{(m+k-1)(n+1)}~ \left[\mathbb{1}_m\otimes \left(\tilde{\cycPermutator}_{(k)}^{-1} \circ \left(\mathbb{1}_{k-1}\otimes\mu_n\right)\right) \right] \circ (P_{m,k-1,n}) = \\ \equal{Lem. 
\ref{Lemma:CyclicCommutation}}& (-)^{(m+k-1)(n+1)+(k-1)(n+1)}~ \left(\mathbb{1}_m\otimes \mu_n\otimes \mathbb{1}_{k-1} \right) \circ \\ \phantom{\equal{}}&\circ (\mathbb{1}_m\otimes \tilde{\cycPermutator}^{-n}_{(n+k-1)})\circ (P_{m,k-1,n}) = \\ \equal{Eq. \eqref{Eq:CyclicUnshuffle}}& (-)^{m(n+1)}~ \left(\mathbb{1}_m\otimes \mu_n\otimes \mathbb{1}_{k-1} \right) \circ (P_{m,n,k-1}) ~. \end{aligned} \end{displaymath} \end{proof} % We are now ready to compute the associator of the products $\symgerst$ and $\skewgerst$ defined in section \ref{Section:MultibracketsAlgebra}. \begin{proposition}[Explicit associator of $\symgerst$ and $\skewgerst$]\label{Prop:ExplicitAssociators} Given any three multilinear operators $\mu_\ell,\mu_m,\mu_n \in M(V)$ the corresponding associators with respect to $\symgerst$ and $\skewgerst$ result: % \begin{equation} \label{Eq:ExplicitAssociatorSym} \alpha(\symgerst;\mu_\ell,\mu_m,\mu_n) = \mu_\ell \circ \left[\left(\left(\mu_m\otimes\mu_n \right)\circ B_{m,n}\right)\otimes \Unit_{\ell-2}\right] \circ B_{m+n,\ell-2} \end{equation} \begin{equation}\label{Eq:ExplicitAssociatorSkew} \alpha(\skewgerst; \mu_\ell,\mu_m,\mu_n) = (-)^{\mathcal{s}}\mu_\ell \circ \left[\left(\left(\mu_m\otimes\mu_n \right)\circ P_{m,n}\right)\otimes \Unit_{\ell-2}\right] \circ P_{m+n,\ell-2}~, \end{equation} where the sign prefactor is given by: \begin{displaymath} (-)^{\mathcal{s}}=(-)^{{|\mu_n|(m+\ell)} +{(|\mu_m|(\ell-1)} +m(n+1)} \end{displaymath} % \end{proposition} \begin{proof} By definition, the associator reads as follows: \begin{displaymath} \alpha(\symgerst;\mu_\ell,\mu_m,\mu_n) \colon= (\mu_\ell \symgerst \mu_m) \symgerst \mu_n - \mu_\ell \symgerst ( \mu_m \symgerst \mu_n) ~. \end{displaymath} % The second term on the right-hand side results: % \begin{align*} \mu_\ell \symgerst& ( \mu_m \symgerst \mu_n) =~ \mu_\ell \symgerst \left( \mu_m \circ \left(\mu_n \otimes \mathbb{1}_{m-1}\right) \circ B_{n,m-1}\right) =\\ =&~ \mu_\ell \circ \left[\left( \mu_m \circ \left( \mu_n \otimes \mathbb{1}_{m-1}\right) \circ B_{n,m-1}\right) \otimes \mathbb{1}_{\ell-1}\right] \circ B_{m+n-1,\ell-1} =\\ =&~ (\mu_\ell) \circ ( \mu_m\otimes \mathbb{1}_{\ell-1}) \circ (\mu_n \otimes \mathbb{1}_{m+\ell-2}) \circ (B_{n,m-1} \otimes \mathbb{1}_{\ell-1}) \circ B_{m+n-1,\ell-1} =\\ =&~ (\mu_\ell) \circ ( \mu_m\otimes \mathbb{1}_{\ell-1}) \circ (\mu_n \otimes \mathbb{1}_{m+\ell-2}) \circ B_{n,m-1,\ell-1} ~. \end{align*} The first term results \begin{align*} (\mu_\ell \symgerst \mu_m)& \symgerst \mu_n =~ \left(\mu_\ell \circ \left(\mu_m \otimes \mathbb{1}_{\ell-1}\right) \circ B_{m,\ell-1} \right) \symgerst \mu_n =\\ \equal{}&~ \left(\mu_\ell \circ \left(\mu_m \otimes \mathbb{1}_{\ell-1} \right)\circ B_{m,\ell-1} \right) \circ \left(\mu_n \otimes \mathbb{1}_{m+\ell-2} \right) \circ B_{n,m+\ell-2} =\\ \equal{}&~ (\mu_\ell) \circ (\mu_m \otimes \mathbb{1}_{\ell-1}) \circ \big[ (B_{m,\ell-1} ) \circ (\mu_n \otimes \mathbb{1}_{m+\ell-2}) \circ (B_{n,m+\ell-2}) \big] = \\ \equal{Lem. 
\ref{Lemma:UnshuffleSandwich}}& +(\mu_\ell) \circ (\mu_m \otimes \mathbb{1}_{\ell-1}) \circ (\mu_n\otimes\mathbb{1}_{m+\ell-2}) \circ B_{n,m-1,\ell-1} + \\ & s\cdot (\mu_m \otimes \mathbb{1}_{\ell-1}) \circ (\mathbb{1}_m\otimes \mu_n \otimes \mathbb{1}_{\ell-2}) \circ (B_{m,n,\ell-2}) = \\ \equal{}& + \mu_\ell \symgerst ( \mu_m \symgerst \mu_n) + \\ &+ s\cdot (\mu_m \otimes \mathbb{1}_{\ell-1}) \circ (\mathbb{1}_m\otimes \mu_n \otimes \mathbb{1}_{\ell-2}) \circ (B_{m,n,\ell-2}) ~, \end{align*} where $s$ is a numerical constant equal to one. % Therefore % \begin{align*} \alpha(\symgerst;\mu_\ell,&\mu_m,\mu_n) =~ s \cdot (\mu_m \otimes \mathbb{1}_{\ell-1}) \circ (\mathbb{1}_m\otimes \mu_n \otimes \mathbb{1}_{\ell-2}) \circ (B_{m,n,\ell-2}) = \\ \equal{Eq. \eqref{Eq:compUnsh}}& s \cdot(\mu_\ell) \circ (\mu_m \otimes \mu_n \otimes \mathbb{1}_{\ell-2}) \circ (B_{m,n} \otimes \mathbb{1}_{\ell-2}) \circ (B_{m+n,\ell-2}) = \\ \equal{}& s \cdot(\mu_\ell) \circ \left[\left(\left(\mu_m \otimes \mu_n\right) \circ B_{m,n}\right) \otimes \mathbb{1}_{\ell-2} \right] \circ (B_{m+n,\ell-2}) ~. \end{align*} The odd case is simply obtained substituting $B_{k,\ell}$ with $P_{k,\ell}$, $s$ with the sign $(-)^{m(n+1)}$ given by lemma \ref{Lemma:UnshuffleSandwich} and remembering that the definition of skewsymmetric Gerstenhaber takes an extra sign. Namely: \begin{displaymath} (\mu_\ell \skewgerst \mu_m) \skewgerst \mu_n = (-)^\mathcal{s}~ \mu_\ell \circ \left[ \mu_m \circ \left(\mu_n \otimes \mathbb{1}_{m-1}\right) \circ P_{n,m-1}\right] \otimes \mathbb{1}_{\ell-1}) \circ P_{m+n-1,\ell-1} ~, \end{displaymath} where \begin{displaymath} \mathcal{s}= {|\mu_n|(m+\ell-2)} +{(|\mu_m|(\ell-1)} ~. \end{displaymath} \end{proof} % \begin{proposition}[Pre-Lie property of $\symgerst$ and $\skewgerst$] \label{Prop:SymmetricGerstenhaberAssociators-AppendixC} Given any three multilinear operators $\mu_\ell,\mu_m,\mu_n \in M(V)$, the corresponding associators satisfy the following symmetry properties in the last two entries: \begin{equation}\label{Equation:SymGerstAssociatorSymmetry} \alpha(\symgerst; \mu_\ell,\mu_m,\mu_n) = (-)^{|\mu_m||\mu_n|} \alpha(\symgerst; \mu_\ell,\mu_n,\mu_m) ~; \end{equation} \begin{equation}\label{Equation:SkewGerstAssociatorSymmetry} \alpha(\skewgerst; \mu_\ell,\mu_m,\mu_n) = (-)^{(|\mu_n| + n - 1)(|\mu_m| + m - 1)} \alpha(\skewgerst; \mu_\ell,\mu_n,\mu_m) ~. \end{equation} \end{proposition} % \begin{proof} Plugging the result of corollary \ref{Cor:CyclicCommutationMultiBracket} inside equation \eqref{Eq:ExplicitAssociatorSym} yields \begin{align*} \alpha(\mu_\ell & ,\mu_m,\mu_n) = \\ \equal{}& \mu_\ell \circ \left[\left(\left( \mu_m \otimes \mu_n \right) \circ B_{m,n}\right) \otimes \mathbb{1}_{\ell-2}\right] \circ P_{m+n,\ell-2} = \\ \equal{Cor. \ref{Cor:CyclicCommutationMultiBracket}}& (-)^{|\mu_m||\mu_n|} \mu_\ell \circ \left[\left(\cycPermutator_{(2)} \circ \left(\mu_m \otimes \mu_n\right) \circ \cycPermutator_{(m+n)}^n B_{m,n}\right) \otimes \mathbb{1}_{\ell-2}\right] \circ P_{m+n,\ell-2} = \\ \equal{}& (-)^{|\mu_m||\mu_n|} \mu_\ell \circ \left[\left(\left( \mu_n \otimes \mu_m \right)\circ B_{n,m}\right) \otimes \mathbb{1}_{\ell-2}\right] \circ P_{m+n,\ell-2} = \\ \equal{}& (-)^{|\mu_m||\mu_n|} \alpha(\mu_\ell,\mu_n,\mu_m) ~. 
\end{align*} Plugging the same corollary in equation \eqref{Eq:ExplicitAssociatorSkew} yields extra signs: \begin{align*} \alpha(\mu_\ell & ,\mu_m,\mu_n) = \\ \equal{}& (-)^s ~ \mu_\ell \circ \left[\left(\left( \mu_m \otimes \mu_n \right)\circ P_{m,n}\right) \otimes \mathbb{1}_{\ell-2}\right] \circ P_{m+n,\ell-2} = \\ \equal{Cor. \ref{Cor:CyclicCommutationMultiBracket}}& (-)^{s + t} ~ \mu_\ell \circ \left[\left(\tilde\cycPermutator_{(2)} \circ( \mu_n \otimes \mu_m) \circ \tilde\cycPermutator_{(m+n)}^n\circ P_{m,n}\right) \otimes \mathbb{1}_{\ell-2}\right] \circ P_{m+n,\ell-2} = \\ \equal{}& (-)^{s + t} ~ \mu_\ell \circ \left[\left(\left( \mu_n \otimes \mu_m \right)\circ P_{n,m}\right) \otimes \mathbb{1}_{\ell-2}\right] \circ P_{m+n,\ell-2} = \\ \equal{}& (-)^{s + t + s'} ~ \alpha(\mu_\ell,\mu_n,\mu_m)~, \end{align*} where $s$ and $s'$ are the signs contained in the explicit expression of the associators (see proposition \ref{Prop:ExplicitAssociators}): \begin{align*} s=& {{|\mu_n|(m+\ell)} +{(|\mu_m|(\ell-1)} +m(n+1)} ~, \\ s'=& {{|\mu_m|(n+\ell)} +{(|\mu_n|(\ell-1)} +n(m+1)} ~, \end{align*} and $t$ is the sign coming from corollary \ref{Cor:CyclicCommutationMultiBracket}: \begin{displaymath} t = mn+1 +|\mu_m||\mu_n| ~. \end{displaymath} Computing $s+s'+t \mod 2 $ gives the exponent contained in equation \eqref{Equation:SkewGerstAssociatorSymmetry}. % \end{proof} \ifstandalone \bibliographystyle{../../hep} \part{Background} \input{chapters/Linfinity/Linfinity} \renewcommand*{\datapath}{chapters/multisymplectic/image} \input{chapters/multisymplectic/multisymplectic} \part{Foreground} \renewcommand*{\datapath}{chapters/compactactionspheres/image} \input{chapters/compactactionspheres/compactactionspheres} \renewcommand*{\datapath}{chapters/higherrogersembedding/image} \input{chapters/higherrogersembedding/higherrogersembedding} \renewcommand*{\datapath}{chapters/hydromomaps/image} \input{chapters/hydromomaps/hydromomaps} \part{Appendices}
{ "timestamp": "2021-05-13T02:18:07", "yymm": "2105", "arxiv_id": "2105.05645", "language": "en", "url": "https://arxiv.org/abs/2105.05645", "abstract": "Homotopy comomentum maps are a higher generalization of the notion of moment map introduced to extend the concept of Hamiltonian actions to the framework of multisymplectic geometry. Loosely speaking, higher means passing from considering symplectic 2-form to consider differential forms in higher degrees. The goal of this thesis is to provide new explicit constructions and concrete examples related to group actions on multisymplectic manifolds admitting homotopy comomentum maps.The first result is a complete classification of compact group actions on multisymplectic spheres. The existence of a homotopy comomentum map about the latter depends on the dimension of the sphere and the transitivity of the group action. Several concrete examples of such actions are also provided.The second novel result is the explicit construction of the higher analogue of the embedding of the Poisson algebra of a given symplectic manifold into the corresponding twisted Lie algebroid. It is also proved a compatibility condition for such embedding for gauge-related multisymplectic manifolds in presence of a compatible Hamiltonian group action. The latter construction could play a role in determining the multisymplectic analogue of the geometric quantization procedure.Finally, a concrete construction of a homotopy comomentum map for the action of the group of volume-preserving diffeomorphisms on the multisymplectic 3-dimensional Euclidean space is proposed. This map can be naturally related to hydrodynamics. For instance, it transgresses to the standard hydrodynamical co-momentum map of Arnold, Marsden and Weinstein and others. A slight generalization of this construction to a special class of Riemannian manifolds is also provided. The explicitly constructed homotopy comomentum map can be also related to knot theory by virtue of the aforementioned hydrodynamical interpretation.", "subjects": "Symplectic Geometry (math.SG); Differential Geometry (math.DG)", "title": "Homotopy Comomentum Maps in Multisymplectic Geometry", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.974042642048694, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.7090791295989622 }
https://arxiv.org/abs/1810.03494
k-price auctions and Combination auctions
We provide an exact analytical solution of the Nash equilibrium for $k$- price auctions. We also introduce a new type of auction and demonstrate that it has fair solutions other than the second price auctions, therefore paving the way for replacing second price auctions.
\section{Introduction} Second price auctions, also known as Vickrey auctions \cite{vickrey1961counterspeculation}, are well known and widely used, for instance in online or government auctions, because they give bidders an incentive to bid their true value. Nevertheless, second price auctions have their limits: among other issues, bidders may bid insincerely and the variance of the winning bid is often large. A natural generalization of the second price auction is the $k$-price auction, which has been studied by many researchers in recent years. The reader can refer to \cite{PK99, KRISHNA20101,WILSON92} and \cite{EM04} for related literature. In particular, Roger B. Myerson~\cite{Roger1981} established the Revenue Equivalence Theorem (known as the RET theorem) in 1981, which characterizes the equilibrium strategy. Later in 1998, Monderer and Tennenholtz \cite{monderer2000k} proved the uniqueness of the equilibrium strategies in $k$-price auctions for $k=3$. Under some regularity assumptions, they also provided sufficient conditions for the existence of the equilibrium. In 2000, Wolfstetter \cite{wolfstetter01third} solved the equilibrium of $k$-price auctions for a uniform distribution and in 2014, Nawar and Sen \cite{nawar2014k} generalized Wolfstetter's result and provided an explicit expression of the $k$-price auction bidding equilibrium for a quadratic distribution. In this paper, we prove two major results. First, in Theorem~\ref{solution}, we generalize the results of Wolfstetter and of Nawar and Sen by giving a closed form solution of the equilibrium of the $k$-price auction in a general setting. The solution takes a simple form which is easy to compute. As applications, this simplifies the discussion of existence and uniqueness, and we can calculate the equilibrium for classical distributions and recover certain known results. Secondly, we extend the notion of $k$-price auctions by introducing a new type of auction that we will call a "Combination-auction": the winner pays a convex combination of the bids. In fact, a natural question is: are there combinations other than the second price auction which lead to a truthful equilibrium? We demonstrate in Theorem~\ref{combination1} that there exist combinations other than the second price auction leading to a truthful equilibrium. These new strategies could replace second price auctions. Then we show in Theorem~\ref{combination2} that if there exist truthful strategies other than the second price auction, then the distribution of the valuations of the bidders is the uniform distribution. We also provide an alternative proof of the RET theorem with probability tools, which does not require advanced knowledge of auction theory or game theory. This paper is organized as follows: in Section 2 we state the assumptions and re-establish the RET theorem with a probability approach. Next, we solve the equilibrium and discuss uniqueness and existence, together with applications to classical distributions. Finally, in Section 4 we introduce the "Combination-auctions" and discuss strategies other than the second price auction which are truthful. \section{Problem formulation and assumptions}\label{S2} In this section we present our assumptions and re-establish the RET theorem. The RET theorem was established in \cite{Roger1981} by Roger B. Myerson. The original proof relies heavily on advanced auction theory and game theory insight. Here we provide an alternative approach, with probability tools.
With our approach, one can extend the RET theorem to "Combination-auctions", which we will introduce in Section 4. \vskip 5mm The $k$-price auction problem can be formulated as follows: consider a $k$-price auction with $n$ bidders, where the highest bidder wins, and pays only the $k$-th highest bid. We assume that $k \geq 2$ and $n \geq k$. Following \cite{RS1981}, we make the following assumptions: \begin{enumerate} \item The valuations $V_i, i = 1,\cdots, n$ of the bidders are independent and identically distributed with distribution function $F$. \item The valuations take values in $I$, where $I= [0,1]$ or $I=\mathbb{R}^+$. \end{enumerate} We also assume that: \begin{itemize} \item[(A)] $F$ is $k-2$ times continuously differentiable and $F'(x) >0$ for all $x \in I$. \end{itemize} We remark that in the literature on the 3-price auction, the existence and the continuity of the density function are often assumed. It is thus natural to assume that $(A)$ holds in the case of general $k$-price auctions. We also assume that each bidder bids $ X_i=g(V_i)$ where $g$ is a strictly increasing function. It follows that the bids $X_i$ are independent variables and we denote by $\hat{F}$ their distribution function and by $\hat{f}$ their density. As $g$ is strictly increasing, $\hat{F}$ is also strictly increasing on $g(I)$. For $n-1$ bidders who bid $X_1,...,X_{n-1}$, we denote $Y_{n-1}= \max(X_1,...,X_{n-1})$ and $Y_{n-2}$ the second maximum. In general, for all $p \leq n-1$, $Y_{n-p}$ is the $p$-th maximum. Now we recall the RET theorem for the $k$-price auction (see also \cite{monderer2000k}): \begin{thm}[RET theorem] Let $k\geq 3$. A risk-neutral strategy $g$ is an equilibrium strategy in the $k$-price auction if and only if the following two conditions hold: \begin{enumerate} \item $g$ is strictly increasing in the interval $[0,1]$. \item It holds for all $x\in [0,1]$: $$\int_{t = 0}^x (x-g(t))F(t)^{n-k}(F(x)-F(t))^{k-3}F'(t)dt = 0.$$ \end{enumerate} \end{thm} \subsection{An alternative approach to the RET theorem} This section provides a probabilistic approach to the RET theorem. Before proving the theorem, we first establish several lemmas by algebraic computation. \begin{lemma} For all $t\leq x$ with $(t,x)\in I^2$, denote $$H(t):= \mathbb{P}([Y_{n-k+1} \leq t] \cap [Y_{n-1} \leq x]).$$ Then it holds \begin{equation}\label{Ht} H(t) = \sum_{p=0}^{k-2} \binom{n-1}{p} \hat{F}(t)^{n-1-p} (\hat{F}(x)-\hat{F}(t))^p \end{equation} and \begin{equation}\label{H't} H'(t)=\frac{(n-1)!}{ (n-k)! (k-2)!} \hat{F}^{n-k}(t) (\hat{F}(x)-\hat{F}(t))^{k-2} \hat{f}(t). \end{equation} \end{lemma} \begin{proof} It holds for all $t \leq x$, $(t,x) \in I^2 $: \begin{align*} P(Y_{n-k+1} \leq t \cap Y_{n-1} \leq x)=& P(X_1,...,X_{n-1} \leq t)\\ &+ \binom{n-1}{1} P(X_1,...,X_{n-2} \leq t \cap X_{n-1} \in ]t,x]) \\ &+ ...+ \binom{n-1}{k-2} P(X_1,...,X_{n-k+1} \leq t \cap X_{n-k+2},...,X_{n-1} \in ]t,x]) \end{align*} and equation \eqref{Ht} follows from the fact that the $X_i$ are independent and identically distributed. Now we prove equation \eqref{H't}. Differentiating equation \eqref{Ht} with respect to $t$, it follows: \begin{equation} \begin{split} H'(t)= \hat{f}(t) \left( \sum_{p=0}^{k-2} \binom{n-1}{p} (n-1-p)\hat{F}(t)^{n-2-p}(\hat{F}(x)-\hat{F}(t))^p \right. \\- \sum_{p=1}^{k-2} \left. \binom{n-1}{p} p \hat{F}(t)^{n-1-p}(\hat{F}(x)-\hat{F}(t))^{p-1} \right) \end{split} \end{equation} Then \begin{equation} \begin{split} H'(t)= \hat{f}(t) \left( \sum_{p=0}^{k-2} \binom{n-1}{n-1-p} (n-1-p) \hat{F}(t)^{n-2-p}(\hat{F}(x)-\hat{F}(t))^p \right.
\\ - \sum_{p=0}^{k-3} \left. \binom{n-1}{p+1} (p+1) \hat{F}(t)^{n-2-p}(\hat{F}(x)-\hat{F}(t))^{p}\right) \end{split} \end{equation} A telescopic sum appears and after simplification we get: $$H'(t)=\frac{(n-1)!}{ (n-k)! (k-2)!} \hat{F}^{n-k}(t) (\hat{F}(x)-\hat{F}(t))^{k-2} \hat{f}(t),$$ which is exactly equation \eqref{H't}. \end{proof} \begin{lemma} \label{integration} For all $p,m \in \mathbb{N}$, it holds: \begin{equation}\label{IPP} \int_0^1 (1-u)^{p}u^{m} du = \frac{p!m!}{(p+m+1)!}. \end{equation} For $n>k\geq 2$, it holds: \begin{equation}\label{sum0} \sum_{p=0}^{k-2} \frac{(-1)^{k-2-p}}{n-1-p}\binom{k-2}{p} = \frac{(n-k)!(k-2)!}{(n-1)!}. \end{equation} \end{lemma} \begin{proof} Equation \eqref{IPP} follows directly from integration by parts. We now prove equation \eqref{sum0}. For $n>k\geq 2$, denote $$A = \sum_{p=0}^{k-2} \frac{(-1)^{k-2-p}}{n-1-p}\binom{k-2}{p}.$$ Consider $$r(x) = \sum_{p=0}^{k-2} (-x)^{n-2-p}\binom{k-2}{p} = (-1)^{n-k}x^{n-k}(1-x)^{k-2}. $$ Now let $$R(x) = \int_{0}^{x} r(u)du.$$ Observe that $$R(1) = (-1)^{n-k}A$$ and, according to equation \eqref{IPP}, it holds $$R(1) = (-1)^{n-k} \frac{(n-k)!(k-2)!}{(n-1)!},$$ hence $$A = \frac{(n-k)!(k-2)!}{(n-1)!} = \frac{1}{(n-1)\binom{n-2}{k-2}}.$$ \end{proof} Now we are ready to prove the RET Theorem. \begin{proof} We consider here $n-1$ bidders who play with the same rule $X_i=g(V_i)$. The payoff expression of the $n$-th bidder for a valuation $v$ and a bid $x$ is: \begin{equation} U(x,v)= \int_{0}^{x} (v-t) H'(t) dt= \int_{0}^{x} (v-t) \frac{(n-1)!}{ (n-k)! (k-2)!} \hat{F}^{n-k}(t) (\hat{F}(x)-\hat{F}(t))^{k-2} \hat{f}(t) dt \label{bid1} \end{equation} The goal is, for fixed $v$, to find $x=g(v)$ which maximizes the payoff. \vskip 5mm By expanding \eqref{bid1} we find: $$ U(x,v)= \frac{(n-1)!}{ (n-k)! (k-2)!} \sum_{p=0}^{k-2} (-1)^{k-2-p} \binom{k-2}{p} \hat{F}^p(x) \int_0^x (v-t) \hat{F}^{n-2-p}(t) \hat{f}(t) dt $$ and then, after an integration by parts in each summand, $$\frac{ \partial U}{\partial x}(x,v) =0$$ is equivalent to \begin{equation}\label{Upartialx} \sum_{p=0}^{k-2} \frac{(-1)^{k-2-p}}{n-1-p} \binom{k-2}{p} \left( (n-1)(v-x)\hat{F}^{n-2}(x)+p\hat{F}^{p-1}(x) \int_0^x \hat{F}^{n-1-p}(t) dt \right)=0. \end{equation} Now let $g_1$ be the quantile function of the bids, i.e. $g_1 = \hat{F}^{-1}$. As $\hat{F}$ is strictly increasing, $g_1$ exists and is differentiable almost everywhere. Denote $\hat F(x) = a$ and let $q$ be the quantile function of $F$. Thus, $X=g(V)$ implies that $a = \hat{F}(x)=F(v)$, therefore we have $v = q(a)$. Equality \eqref{Upartialx} becomes \begin{equation}\label{Upartialxq} \sum_{p=0}^{k-2} \frac{(-1)^{k-2-p}}{n-1-p} \binom{k-2}{p} \left( (n-1)(q(a)-g_1(a))a^{n-2}+pa^{p-1} \int_0^a u^{n-1-p} g_1'(u)du \right)=0. \end{equation} Observe that \begin{align*} &\sum_{p=0}^{k-2} \frac{(-1)^{k-2-p}}{n-1-p}\binom{k-2}{p} p a^{p-1}\int_0^a u^{n-1-p}g_1'(u)\ du \\ = &\int_0^a \left(\sum_{p=0}^{k-2} \frac{(-1)^{k-2-p}}{n-1-p}\binom{k-2}{p} p a^{p-1} u^{n-1-p}\right)g_1'(u)\ du \\ = & \int_0^a P(a,u)g_1'(u)\ du, \end{align*} where $$ P(a,u) = \sum_{p=0}^{k-2} \frac{(-1)^{k-2-p}}{n-1-p}\binom{k-2}{p} p a^{p-1} u^{n-1-p}. $$ Since $n\geq k$, we deduce that $n-1-p>0$ for $p\in \{0,\ldots,k-2\}$. Thus for all $a\in \mathbb{R}$, $P(a,0) = 0$. Denote by $\partial_a$ the derivative operator with respect to $a$ and by $\partial_u$ the derivative operator with respect to $u$.
It holds: \begin{align*} \partial_u P(a,u) &= \sum_{p=0}^{k-2} (-1)^{k-2-p}\binom{k-2}{p} p a^{p-1} u^{n-2-p}\\ & = \left(\sum_{p=0}^{k-2} (-1)^{k-2-p}\binom{k-2}{p} p a^{p-1} u^{k-2-p}\right) u^{n-k}\\ & = \partial_a\left(\sum_{p=0}^{k-2} (-1)^{k-2-p}\binom{k-2}{p} a^{p} u^{k-2-p}\right) u^{n-k}\\ & = \partial_a \left((a-u)^{k-2}\right) u^{n-k}\\ & = (k-2)(a-u)^{k-3}u^{n-k}. \end{align*} Therefore, applying integration by parts and observing that $P(a,0) = 0$, it follows that \begin{align*} \int_0^a P(a,u)g_1'(u)du &= \left[P(a,u) g_1(u)\right]_0^a - \int_0^a \partial_u P(a,u) g_1(u) du\\ & = P(a,a)g_1(a) -\int_0^a (k-2)(a-u)^{k-3}u^{n-k}g_1(u)du. \end{align*} According to lemma \ref{integration}, \begin{align*} P(a,a) &= \int_0^a (k-2)(a-u)^{k-3}u^{n-k}\ du\\ & = (k-2)a^{n-2}\int_0^1 (1-u)^{k-3}u^{n-k}du \\ & = a^{n-2} \frac{(n-k)!(k-2)!}{(n-2)!}. \end{align*} Substituting these computations into \eqref{Upartialxq} and using \eqref{sum0} leads to \begin{equation} (q(a)-g_1(a)) a^{n-2} \frac{(n-k)!(k-2)!}{(n-2)!} + a^{n-2} \frac{(n-k)!(k-2)!}{(n-2)!} g_1(a) = \int_0^a (k-2)(a-u)^{k-3}u^{n-k}g_1(u)du \end{equation} which is equivalent to \begin{equation} \label{eqint} q(a) a^{n-2} \frac{(n-k)!(k-2)!}{(n-2)!}= \int_0^a (k-2)(a-u)^{k-3}u^{n-k}g_1(u)du. \end{equation} Notice that the latter equation can also be written as $$q(a) P(a,a)= \int_0^a (k-2)(a-u)^{k-3}u^{n-k}g_1(u)du;$$ rearranging the terms, it holds: $$\int_0^a (k-2)(a-u)^{k-3}u^{n-k}(g_1(u)-q(a))du = 0.$$ Setting $x = q(a)$ and changing the variable $u = F(t)$ inside the integral, together with the fact that $g_1(u) = \hat{F}^{-1}\circ F(t) = g(t)$, we obtain: $$\int_0^x (k-2)(F(x)-F(t))^{k-3}F(t)^{n-k}(g(t)-x)F'(t)dt = 0.$$ This completes the proof. \end{proof} \section{Analysis of equilibrium} In this section, we first give a closed form solution of the equilibrium for $k$-price auctions, $k\geq 3$. Then with the solution, we analyze the existence and uniqueness of the equilibrium. At the end of this section we provide some examples. \subsection{Solution of the equilibrium} \begin{thm}\label{solution} Assume that $(A)$ holds. Then the equilibrium satisfies equation \eqref{eqint}. Moreover, equation \eqref{eqint} has a unique solution: \begin{equation} g_1(a) = \frac{a^{k-n}(n-k)!}{(n-2)!}\left(q(a) a^{n-2}\right)^{(k-2)} =\frac{\left(q(a) a^{n-2}\right)^{(k-2)}}{(a^{n-2})^{(k-2)}}. \end{equation} \end{thm} \begin{proof} We denote for $p\geq 0$ $$A_p(a,u) = (a-u)^p u^{n-k}g_1(u) $$ and $$G_p(a) = \int_0^a (a-u)^p u^{n-k}g_1(u)\, du. $$ It holds: $$A_0(a,a) = a^{n-k}g_1(a),$$ $$G_0(a) = \int_0^a u^{n-k}g_1(u)\, du = \int_0^a A_0(a,u)du, $$ and for $p\geq 1$: $$A_p(a,a) = 0.$$ For $p\geq 1$ and for all $a\in [0,1]$: \begin{align*} G_p'(a) &= \lim_{h\rightarrow 0} \frac{1}{h}\left(\int_0^{a+h} A_p(a+h,u)du - \int_0^a A_p(a,u) du\right)\\ &= \lim_{h\rightarrow 0} \frac{1}{h}\left(\int_0^{a+h} A_p(a+h,u) du - \int_0^a A_p(a+h,u)du + \int_0^{a} A_p(a+h,u) - A_p(a,u) du \right)\\ & = A_p(a,a)+ \lim_{h\rightarrow 0} \frac{1}{h} \left(\int_0^{a} A_p(a+h,u) - A_p(a,u) du\right)\\ & = A_p(a,a)+ \int_0^a \partial_a A_p(a,u) du\\ & = p\, G_{p-1}(a), \end{align*} since $\partial_a A_p(a,u) = p\,A_{p-1}(a,u)$ and $A_p(a,a)=0$. We remark that for all $b \in I$, $\int_{0}^{b} |g_1(u)| du \leq b g_1(b)$ holds, thus we may exchange the limit and the integral in the previous computation, so the calculation is valid. Therefore, the $(p+1)^{\text{th}}$ derivative of $G_p$ with respect to $a$ is \begin{equation} G_p^{(p+1)}(a) = p!\,G_0'(a) = p!\, A_0(a,a) = p!\,a^{n-k}g_1(a).
\end{equation} Hence, taking the $(k-2)^{\text{th}}$ derivative of equation \eqref{eqint} gives \begin{align*} \left(q(a) a^{n-2} \frac{(n-k)!(k-2)!}{(n-2)!}\right)^{(k-2)}& = (k-2)\left(\int_0^a (a-u)^{k-3}u^{n-k}g_1(u)du\right)^{(k-2)}\\ & = (k-2)G_{k-3}^{(k-2)}(a)\\ & = (k-2)!a^{n-k}g_1(a). \end{align*} Rearranging the terms, we have $$ g_1(a) = \frac{a^{k-n}(n-k)!}{(n-2)!}\left(q(a) a^{n-2}\right)^{(k-2)}=\frac{\left(q(a) a^{n-2}\right)^{(k-2)}}{(a^{n-2})^{(k-2)}}. $$ \end{proof} Since equation \eqref{eqint} has a unique solution, according to the RET theorem, the equilibrium exists if and only if $\left(q(a) a^{n-2}\right)^{(k-2)}/(a^{n-2})^{(k-2)}$ is strictly increasing. \subsection{Examples} As applications, we introduce several examples. With the formula from the previous section, one can easily recover some classical results. \begin{exple}[Uniform distribution] $q(a)= a, g(v) = g_1(a)$. \begin{align*} &g(v) = g_1(a) =\frac{a^{k-n}(n-k)!}{(n-2)!}\left(q(a) a^{n-2}\right)^{(k-2)} \\ = &\frac{a^{k-n}(n-k)!}{(n-2)!}\left( a^{n-1}\right)^{(k-2)}\\ = & \frac{n-1}{n-k+1} a= \frac{n-1}{n-k+1} v \end{align*} \end{exple} \begin{exple}[3-price auctions] For $k=3$, notice that $q'(a) = {F^{-1}}'(a) = 1/F'(v)$; it follows that: \begin{align*} g(v) &= g_1(a) =\frac{a^{3-n}}{(n-2)}\left(q(a) a^{n-2}\right)' \\ &= q(a)+\frac{1}{n-2}a q'(a)\\ &= v+\frac{1}{n-2}\frac{F(v)}{F'(v)} \end{align*} and we recover the well-known result. \end{exple} \begin{exple}[4-price auctions] For $k = 4$, since $$q'(a) = \frac{1}{F'(v)} = \frac{1}{F'(q(a))},$$ it follows that $$ q''(a) = -\frac{F''(q(a))}{F'^2(q(a))}q'(a) = -\frac{F''(v)}{F'^3(v)}. $$ Then it holds: \begin{align*} g_1(a) &=\frac{a^{4-n}}{(n-2)(n-3)}\left(q(a) a^{n-2}\right)''\\&=\frac{a^{4-n}}{(n-2)(n-3)}\left((n-2)(n-3)a^{n-4}q(a) + 2(n-2)a^{n-3}q'(a)+a^{n-2}q''(a)\right) \\ &= q(a) + \frac{2}{n-3}\frac{a}{F'(q(a))}-\frac{1}{(n-2)(n-3)}\frac{a^2F''(q(a))}{F'(q(a))^3} \end{align*} Finally, with $v=q(a)$ and $a=F(v)$: $$g(v)= v+\frac{2}{n-3}\frac{F(v)}{F'(v)}-\frac{1}{(n-2)(n-3)}\frac{F^2(v)F''(v)}{F'(v)^3}$$ We recover the result of \cite{nawar2014k}, Theorem 2. \end{exple} \begin{exple}[Polynomial distribution] For $F(x):= x^\alpha$ with $\alpha >0$, we have $q(a) = a^{1/\alpha}$ and \begin{align*} g(v) &= g_1(a) =\frac{a^{k-n}(n-k)!}{(n-2)!}\left(q(a) a^{n-2}\right)^{(k-2)} \\ &= \frac{a^{k-n}(n-k)!}{(n-2)!}\left( a^{n-2+1/\alpha}\right)^{(k-2)}\\ &= \frac{\Gamma(n-k+1) \Gamma(n-1+1/\alpha)}{\Gamma(n-k+1+1/\alpha)\Gamma(n-1)} a^{1/\alpha}\\ &= \frac{\Gamma(n-k+1) \Gamma(n-1+1/\alpha)}{\Gamma(n-k+1+1/\alpha)\Gamma(n-1)} v \end{align*} where $\Gamma$ is the Gamma function. In particular, if $\alpha = \frac{1}{m}$ where $m$ is a positive integer, $$g(v) = \frac{(n-2+m)...(n-k+m+1)}{(n-2)...(n-k+1)}v.$$ \end{exple} \section{Combination-price auctions} In this section we start by defining a new type of auction that we call a Combination-price auction. We demonstrate that a Combination-price auction can be truthful for linear combinations other than the second price auction ($\alpha_2=1$), giving some companies an incentive to use it instead of second price auctions. Moreover, we will characterize the distributions for which there exists a linear combination different from $\alpha_2=1$ which is truthful.
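To fix ideas, in the case $s=3$ of the payment rule defined in the next subsection, the winner pays $$\alpha_1 Y_{n} + \alpha_2 Y_{n-1} + \alpha_3 Y_{n-2},$$ where $Y_{n} \geq Y_{n-1} \geq Y_{n-2}$ denote the three highest bids; the weight vectors $(1,0,0)$, $(0,1,0)$ and $(0,0,1)$ recover the first, second and third price auctions, respectively.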
\subsection{Nash equilibrium of Combination-price auctions} We call a Combination-price auction an auction in which the winner pays a linear combination of the prices bid by the bidders: $\alpha_1 Y_{n}+...+\alpha_s Y_{n-s+1}$, where all the $\alpha_k$ are nonnegative and satisfy $\sum_{k=1}^s \alpha_k=1$. As in the first part, we consider here $n-1$ bidders who play with the same rule $X_i=g(V_i)$. Reasoning as in Section \ref{S2}, the payoff expression for the $n$-th bidder for a valuation $v$ and a bid $x$ can be expressed as a multiple integral: $$ U(x,v)=\iint_{0}^x (v-\sum_{k=1}^s \alpha_k t_k) P(Y_{n-1} \in [t_1,t_1+dt_1] \cap...\cap Y_{n-s+1} \in [t_s,t_s+dt_s] \cap Y_{n-1} \leq x)dt_1...dt_s.$$ Together with $\sum_{k=1}^s \alpha_k=1$ we can write: $$ U(x,v)=\sum_{k=1}^s \alpha_k \iint_{0}^x (v- t_k) P(Y_{n-1} \in [t_1,t_1+dt_1] \cap...\cap Y_{n-s+1} \in [t_s,t_s+dt_s] \cap Y_{n-1} \leq x)dt_1...dt_s$$ then: $$ U(x,v)= \sum_{k=1}^s \alpha_k U_k(x,v)$$ where $U_k$ is the payoff of the $k$-price auction. \vskip 3mm Now combining equation \eqref{eqint} with a simple calculation of $U_1$ and $U_2$, we deduce that $g_1$ is a Nash equilibrium for the combination-price auctions if and only if $g_1$ is increasing and the following equation holds: \begin{align}\label{combauct} &q(a) a^{n-2} \left( \sum_{k=2}^s \frac{(n-k)!(k-2)!}{(n-2)!} \alpha_k\right)-\alpha_1 a^{n-1} \nonumber \\ =& \alpha_1 (n-1) \frac{q(a)-g_1(a)}{g_1'(a)} a^{n-2}+\alpha_2 a^{n-2} g_1(a)+\int_0^a g_1(u) \left(\sum_{k=3}^s \alpha_k (k-2)(a-u)^{k-3}u^{n-k}\right)du. \end{align} \subsection{Study of truthful equilibria} The following theorem characterizes the truthful equilibrium for the uniform distribution. \begin{thm}\label{combination1} If the distribution is uniform, there are infinitely many linear combinations which lead to a truthful strategy. More precisely, with the notation above, a Combination-price auction is truthful if and only if the coefficients $(\alpha_i)$ satisfy the following conditions: \begin{enumerate} \item It holds $$ \alpha_1=\sum_{k=3}^s\frac{(k-3)!(n-k)!(2k-n-3)}{(n-2)!}\alpha_k,$$ \item for all $k$, $\alpha_k \geq 0$, and $$\sum_{k=1}^s \alpha_k=1.$$ \end{enumerate} \end{thm} \begin{proof} The equilibrium is truthful if and only if $g_1(a)=q(a)$ for all $a$, which, in view of equation \eqref{combauct}, holds if and only if: \begin{equation} \begin{split} q(a) a^{n-2} \left( \sum_{k=3}^s \frac{(n-k)!(k-2)!}{(n-2)!} \alpha_k\right)-\alpha_1 a^{n-1}=\\ \int_0^a q(u) \left(\sum_{k=3}^s \alpha_k (k-2)(a-u)^{k-3}u^{n-k}\right)d u. \end{split} \label{fair1} \end{equation} \vskip 5mm If the distribution is uniform on $[0,1]$, then $q(a)=a$ and by substitution the latter formula becomes $$ \alpha_1=\sum_{k=3}^s \frac{(k-3)!(n-k)!(2k-n-3)}{(n-2)!}\alpha_k.$$ Moreover we still have $\sum_{k=1}^s \alpha_k=1$ and $\alpha_k \geq 0$ for all $k$. \vskip 3mm One can easily show that the intersection of these two hyperplanes with the region where all $\alpha_k \geq 0$ contains infinitely many points. \end{proof} \vskip 5mm The next theorem characterizes the truthful equilibrium for a general distribution. \begin{thm}\label{combination2} If $F$ is a continuous and strictly increasing function on $I=[0,1]$ or $I=[0, +\infty)$ and if there exists a truthful equilibrium different from the second price auction, then $F$ corresponds to the uniform distribution on $[0,1]$, i.e.\ $F(x)=x$ for $x\in [0,1]$.
\end{thm} \begin{proof} As $F$ is continuous and strictly increasing on $I$, the quantile function $q$ exists and is continuous on $J=[0,1]$ or $J=[0,1)$. Moreover, we have $q(0)=0$. Then, with equation \eqref{fair1} we can easily deduce that $q$ is infinitely differentiable on $J \setminus \{0\}$. Moreover, by differentiating \eqref{fair1} we can show that $q'$ extends by continuity at $0$, and then, by differentiating a second time, that $q''$ also extends by continuity at $0$. \vskip 5mm With the change of variable $u=av$ and after dividing by $a^{n-2}$, equation \eqref{fair1} becomes: \begin{equation} \begin{split} q(a) \left( \sum_{p=3}^s \frac{(n-p)!(p-2)!}{(n-2)!} \alpha_p\right)-\alpha_1 a=\\ \int_0^1 q(av) \left(\sum_{p=3}^s \alpha_p (p-2)(1-v)^{p-3}v^{n-p}\right)dv \end{split} \label{fair2} \end{equation} Then, by differentiating equation \eqref{fair2} twice (differentiation under the integral sign is valid because $q$ is twice continuously differentiable) we find, for all $a \in J$: \begin{equation} q''(a) \left( \sum_{p=3}^s \frac{(n-p)!(p-2)!}{(n-2)!} \alpha_p\right)= \int_0^1 v^2 q''(av) \left(\sum_{p=3}^s \alpha_p (p-2)(1-v)^{p-3}v^{n-p}\right)dv. \label{fair3} \end{equation} Moreover, according to equation \eqref{IPP}, we have: $$\int_0^1 \sum_{p=3}^s \alpha_p (p-2)(1-v)^{p-3}v^{n-p}dv= \sum_{p=3}^s \frac{(n-p)!(p-2)!}{(n-2)!}\alpha_p.$$ Suppose for contradiction that there exists $ x \in J$ such that $q''(x) \neq 0$, and without loss of generality $q''(x) >0$. Choosing $a_0$ such that $q''(a_0)= \max_{J} q''$ (or $\max_{[0,b]} q''$ for $b \in (0,1)$ with $x\leq b$ if $J=[0,1)$), we have that the right hand side of \eqref{fair3} satisfies: $$ \int_0^1 v^2 q''(a_0v) \left(\sum_{p=3}^s \alpha_p (p-2)(1-v)^{p-3}v^{n-p}\right)dv < q''(a_0) \left( \sum_{p=3}^s \frac{(n-p)!(p-2)!}{(n-2)!} \alpha_p\right), $$ which is absurd because there is equality in \eqref{fair3}. \vskip 3mm Then $q''(a)=0$ for all $a \in J$, so $q$ is affine; we deduce that necessarily $I=[0,1]$, and since $q(0)=0$ and $q(1)=1$ we have $q(a)=a$, which corresponds to the uniform distribution of the valuations of the bidders. \end{proof} \section{Conclusion} In this paper we give an exact analytical solution of the equilibrium of $k$-price auctions. Moreover we introduce a new type of auction which could replace second price auctions. This article has led us to interesting open questions that we list here: \begin{enumerate} \item Is it possible to find the exact analytical solution of the combination-auctions, i.e.\ to solve equation \eqref{combauct}? \item Here we took constant coefficients $\alpha_i$. What happens if the $\alpha_i$ become random variables depending, for example, on $F$? \item For a given $F$, is it possible to find a combination such that $\alpha_2=0$ (or $\alpha_2\leq r, \, r \in [0,1)$) and such that the Nash equilibrium of these combination-auctions, denoted $g$, minimizes a standard norm between $g$ and the truthful strategy $x \mapsto x$? \end{enumerate}
{ "timestamp": "2019-03-06T02:19:28", "yymm": "1810", "arxiv_id": "1810.03494", "language": "en", "url": "https://arxiv.org/abs/1810.03494", "abstract": "We provide an exact analytical solution of the Nash equilibrium for $k$- price auctions. We also introduce a new type of auction and demonstrate that it has fair solutions other than the second price auctions, therefore paving the way for replacing second price auctions.", "subjects": "Mathematical Finance (q-fin.MF); Computer Science and Game Theory (cs.GT); General Economics (econ.GN)", "title": "k-price auctions and Combination auctions", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.974042642048694, "lm_q2_score": 0.7279754489059774, "lm_q1q2_score": 0.7090791295989622 }
https://arxiv.org/abs/1606.05459
Plant complexes and homological stability for Hurwitz spaces
We study Hurwitz spaces with regard to homological stabilization. By a Hurwitz space, we mean a moduli space of branched, not necessarily connected coverings of a disk with fixed structure group and number of branch points. We choose a sequence of subspaces of Hurwitz spaces which is suitable for our investigations. In the first part, we introduce and study plant complexes, a large new class of simplicial complexes, generalizing the arc complex on a surface with marked points. In the second part, we generalize a result by Ellenberg-Venkatesh-Westerland by showing that homological stabilization of our sequence of Hurwitz spaces depends only on properties of their zeroth homology groups.
\section{Introduction} Understanding the topology of moduli spaces is key to grasping the behavior of the parametrized objects in families. Now, moduli spaces often come in sequences, such as the moduli spaces $\mathcal{M}_{g}$ of Riemann surfaces of genus~$g$. In some cases, such sequences satisfy \emph{homological stability}. Indeed, by \cite{MR786348}, the homology groups $H_{*}(\mathcal{M}_{g}; \Q)$ are independent of $g$ in a range of dimensions growing with $g$. The stable rational cohomology of $\mathcal{M}_{g}$ is the subject of \emph{Mumford's conjecture}, proved in~\cite{MR2335797}. In recent years, the study of (co-)homological stability phenomena has been of great interest in algebraic and geometric topology as well as algebraic geometry. Classical results include stabilization for the group homology of the sequences of symmetric groups $\mathfrak{S}_{n}$ (\cite{MR0112134}), general linear groups $\GL_{n}$ (\cite{Maazen}, \cite{MR586429}), and Artin braid groups $\Br_{n}$ (\cite{MR0274462}). The theorem for braid groups builds a bridge to moduli spaces: $\Br_{n}$ is classified by the unordered configuration space $\Conf_{n}$, which parametrizes subsets of size $n$ of a disk $D$. Hence, the sequence $\{\Conf_{n}\}$ is homologically stable. In fact, several homological stability theorems are concerned with sequences of (classifying spaces of) groups. There is by now a standard approach (cf.~\cite{MR2736166}) to the proof of such results which requires a highly connected simplicial complex with a nicely behaved group action in order to study the associated spectral sequence. Hurwitz spaces as moduli spaces of branched covers of $\C$ appeared in the second half of the 19th century in the work of Hurwitz (\cite{MR1510692}). Their properties helped prove the connectivity of $\mathcal{M}_{g}$ in \cite{MR0245574}, and they play an important role in arithmetic applications such as the Regular Inverse Galois Problem (cf.~\cite{MR1119950}). In this paper, we study the topology of Hurwitz spaces with respect to homological stabilization. It is worth mentioning the proximity of these spaces to both moduli spaces of Riemann surfaces and configuration spaces: The total space of a branched covering of $\C$ is a Riemann surface, whereas the branch locus defines an element of a configuration space. Having the homological stability theorems for both $\mathcal{M}_{g}$ and $\Conf_{n}$ in mind, it seems worthwhile to study Hurwitz spaces in this direction. \subsection*{Braids and configurations} Let $n\in\N$. By $\Br_{n}$, we denote the classical \emph{(Artin) braid group}, generated by $\sigma_{1},\ldots, \sigma_{n-1}$, subject to the relations \begin{equation*}\label{braidrel} \begin{alignedat}{2} \sigma_i \sigma_{i+1} \sigma_i &= \sigma_{i+1}\sigma_i\sigma_{i+1},\:\:\:\: &&1\leq i \leq n-2, \\ \sigma_i\sigma_j &= \sigma_j\sigma_i, &&|i-j|\geq 2, \end{alignedat} \end{equation*} cf.~\cite{MR3069440}. The \emph{pure braid group} $\PBr_{n} \subset \Br_{n}$ is the kernel of the surjection $\Br_{n} \to \mathfrak{S}_{n}$ which maps $\sigma_{i}$ to the transposition $(i, i+1)$. If $\underline\zeta = (\zeta_{1}, \ldots, \zeta_{t})$ is a partition of~$n$, the \emph{colored braid group with coloring $\underline\zeta$} is defined as the kernel of the map $\Br_{n} \to \mathfrak{S}_{n}/ \mathfrak{S}_{\underline\zeta}$, with $\mathfrak{S}_{\underline\zeta} \cong \mathfrak{S}_{\zeta_{1}} \times \ldots \times \mathfrak{S}_{\zeta_{t}}$.
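For instance, for $n=3$ and $\underline\zeta = (2,1)$, the colored braid group is the preimage of $\mathfrak{S}_{2} \times \mathfrak{S}_{1}$ under the surjection $\Br_{3} \to \mathfrak{S}_{3}$: it contains $\sigma_{1}$ and $\sigma_{2}^{2}$, but not $\sigma_{2}$, since the underlying permutation of a colored braid has to preserve the partition $\{1,2\} \sqcup \{3\}$ of the strands.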
For a presentation of these groups, cf.~\cite{MR1465028} and \cite{MR2607077}. By \cite{MR0141126}, the \emph{(unordered) configuration space} $\Conf_{n}$ of $n$ points in (the interior of) a two-dimensional closed disk $D$ is of type $K(\Br_{n}, 1)$. Associated to the inclusion $\PBr_{n} \subset \Br_{\underline\zeta} \subset \Br_{n}$ of subgroups, there is a sequence of covering space maps $$ \PConf_{n} \to \Conf_{\underline\zeta} \to \Conf_{n}, $$ between aspherical spaces, where $\PConf_{n}$ is the \emph{ordered configuration space} of $n$ points in $D$. The space $\Conf_{\underline\zeta} = \PConf_{n}/\mathfrak{S}_{\underline\zeta}$ is called the \emph{colored configuration space} of $n$ points in $D$ with coloring $\underline\zeta$. By \cite{MR0274462}, for any $p\geq 0$, we have $$H_{p}(\Conf_{n}; \Z) \cong H_{p}(\Conf_{n+1};\Z)$$ for $n\geq 2p-2$. If $\XI \in\N^{t}$ and $n\cdot\XI = (n\xi_{1}, \ldots, n\xi_{t})$, it follows from \cite{1312.6327} that for any $p\geq 0$, \begin{equation}\label{tran} H_{p}(\Conf_{n\cdot\XI};\Z) \cong H_{p}(\Conf_{(n+1)\cdot\XI}; \Z) \end{equation} for $n\geq \frac{2p}{\min\XI}$, where $\min\XI$ denotes the smallest entry of $\XI$. Notable homological stability results for configuration spaces of surfaces include \cite{MR0358766}, \cite{MR533892}, \cite{MR2909770}, and \cite{MR3032101}, among others. \subsection*{Homological stability for Hurwitz spaces} Let $n\in\N$. Furthermore, let $G$ be a finite group, $c = (c_{1}, \ldots, c_{t})$ a tuple of $t$ distinct non-trivial conjugacy classes in $G$, and $\XI = (\xi_{1}, \ldots, \xi_{t})\in\N^{t}$ a partition of $\xi\in\N$. We replace $\C$ by a closed two-dimensional disk~$D$ and consider \emph{marked $n\cdot\XI$-branched $G$-covers of $D$}: We prescribe the covers' \emph{shape vectors} $n\cdot\XI$. With this, we mean that for $i=1,\ldots,t$, exactly $n\xi_{i}$ local monodromies around the branch points must lie in $c_{i}$. We refer to~Section~\ref{hurwitz-spaces} for a more thorough introduction to this kind of branched covers. We denote the space of such covers by $\Hur_{G,n\cdot\XI}^{c}$. This Hurwitz space must be a covering space of $\BBr_{n\cdot\XI} \cong \Conf_{n\cdot\XI}$ with fiber $\cc^{n} = (c_{1}^{\xi_{1}} \times \ldots \times c_{t}^{\xi_{t}})^{n}$, thus $$ \Hur_{G, n\cdot\XI}^{c} = \EBr_{n\cdot\XI} \times_{\Br_{n\cdot\XI}} \cc^{n}, $$ up to homotopy, where the $\Br_{n\cdot\XI}$-action on $\cc^{n}$ is given by the restriction of the full \emph{Hurwitz action} of $\Br_{n\xi}$ on $G^{n\xi}$ described in~(\ref{hurwitz-action}). The prior homological stability result for Hurwitz spaces deals with the case where $c \subset G$ is a single conjugacy class and $\XI = 1 \in\N$. A conjugacy class $c \subset G$ is called \emph{non-splitting} if $c$ generates $G$ and for all subgroups $H\subset G$, $c\cap H$ is either empty or a conjugacy class in $H$. \begin{theorem*}[\textsc{Ellenberg--Venkatesh--Westerland}, \cite{0912.0325}] Let $c \subset G$ be a non-splitting conjugacy class. Let $A$ be a field of characteristic zero or prime to the order of $G$. Then there are positive constants $a$, $b$, $d$ such that for all $p\geq 0$, $$ H_{p}(\Hur_{G,n}^{c}; A) \cong H_{p}(\Hur_{G,n+d}^{c};A) $$ for $n > ap+b$. \end{theorem*} In Section~\ref{homstabhurwitz}, we follow the ideas of Sections~4 through~6 of \cite{0912.0325}. 
The main technical complication in comparison to the prior result is the fact that the colored braid group action on the set of $q$-simplices of the \emph{colored plant complexes} we introduce in Section~\ref{plants} is in general not transitive. In Section~\ref{hurwitz-spaces}, we explain why the $A$-module $$ R = \bigoplus_{n\geq 0} H_{0}(\Hur_{G,n\cdot\XI}^{c};A) $$ has the structure of a graded ring, where the grading is in the $n$-variable. For a central homogeneous element $U \in R$, we define $D_{R}(U) = \max\{\deg R/UR, \deg R[U]\}$, where $R[U]$ is the $U$-torsion in $R$. Our main theorem is proved in Section~\ref{homstabhurwitz}: \begin{theorem-main}\label{the-theorem} Suppose there is a central homogeneous element $U\in R$ of positive degree such that $D_{R}(U)$ is positive and finite. Then, for any $p\geq 0$, multiplication by~$U$ induces an isomorphism $$ H_{p}(\Hur_{G,n\cdot \XI}^{c}; A) \overset{\sim}{\to} H_{p}(\Hur_{G,(n+\deg U)\cdot \XI}^{c}; A) $$ whenever $n > (8 D_{R}(U) + \deg U)p + 7 D_{R}(U) + \deg U$. \end{theorem-main} Our theorem generalizes the prior theorem to the case of multiple conjugacy classes. Indeed, for $c$ a single non-splitting conjugacy class, \cite[Lemma~3.5]{0912.0325} shows that the condition of Theorem~\ref{the-theorem} is satisfied. We say that $G$ is \emph{invariably generated} by $c$ if for all choices of elements $g_{i} \in c_{i}$, $i = 1, \ldots, t$, the group generated by $g_{1}, \ldots, g_{t}$ is equal to $G$. We denote by $\partial U$ the product of the entries of a vector $U \in \cc^{d}$. Note that such a vector may be identified with a homogeneous element of $R$, cf.~Remark~\ref{comb-descr}. Applying Theorem~\ref{the-theorem}, a result from \cite{1212.0923}, and (\ref{tran}), we are able to deduce a concrete homological stability statement in the case where $c$ invariably generates $G$. \begin{theorem-main}[Theorem~\ref{thm-connected}] Assume $c$ invariably generates $G$. Then, for any $U \in \cc^{d}$ with $\partial U = \id$ and any $p \geq 0$, there are isomorphisms \begin{align*} H_{p}(\Hur_{G, n\cdot\XI}^{c}; \Z ) &\cong H_{p}(\Hur_{G, (n+d)\cdot\XI}^{c}; \Z )\\ H_{p}(\Hur_{G, n\cdot\XI}^{c}; \Q ) &\cong H_{p}(\Hur_{G, (n+1)\cdot\XI}^{c}; \Q ) \end{align*} for $n> (8D_{R}(U) + d)p+7D_{R}(U) + d$, and a constant $b\in\N$ such that $$ H_{p}(\Hur_{n\cdot\XI}^{c};\Q) \cong H_{p}( \Conf_{n\cdot\XI}; \Q ) \otimes_{\Q} \Q^{b} $$ in the same range. \end{theorem-main} \subsection*{Simplicial complexes in homological stability proofs} In the study of homological stability, simplicial complexes are ubiquitous. Given a sequence of groups $\{G_{n}\}$ and highly connected simplicial complexes $\mathcal{O}^{n}$ for all $n$ such that $G_{n}$ acts transitively on the set of $q$-simplices of $\mathcal{O}^{n}$ for all $q$ and with stabilizers isomorphic to $G_{n-q-1}$, the spectral sequence associated to the semi-simplicial space $\EG_{n} \times_{G_{n}} \mathcal{O}^{n}$ yields a description of the homology of $\BG_{n}$ in terms of the homology of spaces $\BG_{m}$, for $m<n$. This makes inductive arguments possible. The \emph{ordered arc complex} (cf.~\cite{MR3135444}) turns out to have the right properties for mapping class groups of surfaces (leading to the homological stability theorem for $\mathcal{M}_{g}$). For the Artin braid group, the \emph{arc complex} (though not used in the original article \cite{MR0274462}) is a suitable choice. 
This complex (which is contractible by \cite{Damiolini}) has also been employed in the homological stability proof in \cite{0912.0325}. We run into a couple of problems when examining homological stability for Hurwitz spaces: First, the Hurwitz spaces we consider are usually disconnected. This can be fixed by using the fact that they are finite covers of $K(G,1)$ spaces, where $G$ is a {colored braid group}. Secondly, there is no highly connected simplicial complex at hand which admits a well-behaved colored braid group action. For this purpose, we define and investigate \emph{plant complexes} in Sections~\ref{plants} to~\ref{combinatorics}. Thirdly, the group action on these complexes is generally not transitive. This last point makes a more extensive homological analysis in Section~\ref{homstabhurwitz} necessary. The definition of plant complexes generalizes both the arc complex and the \emph{fern complex} from~\cite{1410.0923}, hence the name. In Section~\ref{combinatorics}, we focus on a specific class of \emph{colored} plant complexes in order to obtain the following result which is essential to our homological stability proof: \begin{theorem-main}[Theorem~\ref{delta-conn}, Lemma~\ref{orbits}, Lemma~\ref{stabilizers}]\label{thm-5} For $n\in\N$ and $\XI\in\N^{t}$, there exists an $(n-1)$-dimensional and at least $\left( \lfloor\frac{n}{2}\rfloor -2\right)$-connected simplicial complex which admits a generally non-transitive action by the colored braid group $\Br_{n\cdot\XI}$. The stabilizer of a $q$-simplex under this action is isomorphic to $\Br_{(n-q-1)\cdot\XI}$. \end{theorem-main} \subsection*{Acknowledgements} This paper contains the central result of my 2016 Ph.D.~thesis. I would particularly like to thank my advisor Michael L\"onne for all the inspiring discussions. Furthermore, I am thankful to Craig Westerland for his supportive and helpful answers to my questions, and to Matthias Zach for numerous mathematical dialogues. I am grateful for the excellent mathematical environment and the pleasant colleagues offered to me by the Institute of Algebraic Geometry at the Leibniz University of Hannover over the last three years. \section{Plants and plant complexes}\label{plants} Let $S$ be a connected surface with non-empty boundary and $\DEL = (\delta_1, \ldots, \delta_t)\in\N^{t}$ a partition of $\delta = \sum_{i=1}^t \delta_i$. Let furthermore $\Delta$ be a set of $\delta$ points in the interior of $S$, partitioned as $\Delta = \Delta_1 \sqcup \ldots \sqcup \Delta_t$, where $|\Delta_{i}| = \delta_{i}$ for all $i=1,\ldots, t$. Finally, let $*$ be a fixed point in $\partial S$. An \emph{arc} is a smooth embedding $\gamma\colon I \to S$ with $\gamma(0) = *$ and $\gamma(1) \in \Delta$, meeting the boundary transversally, and with interior entirely in $S\setminus(\partial S\cup \Delta)$. \begin{definition}\label{def-plant} Let ${\XI} = (\xi_1, \ldots, \xi_t)\in \N^{t}$ and $\xi = \sum_{i=1}^{t}\xi_i $. \begin{enumerate}[(i)] \item A $\XI$\emph{-plant} in $(S,\Delta)$ is an unordered $\xi$-tuple of arcs in $S$ which only meet at~$*$, where for some permutation $\sigma\in \mathfrak{S}_{t}$, exactly $\xi_i$ arcs end at points of $\Delta_{\sigma(i)}$, for $i=1,\ldots,t$. The tuple $\XI$ is called the \emph{pattern}. \item A \emph{colored $\XI$-plant} in $(S,\Delta)$ is a $\XI$-plant in $(S,\Delta)$ with the requirement that for $i=1, \ldots, t$, exactly $\xi_{i}$ arcs end at points of $\Delta_{i}$.
\item Two $\XI$-plants $v, w$ in $(S,\Delta)$ are called \emph{equivalent} if there is an isotopy of $S$ fixing $\partial S \cup\Delta$ pointwise that transforms one plant into the other. \item For any plant $u$, we write $u^\circ = u\setminus (\{*\} \cup \Delta)$ for its \emph{interior}. \item We say that two plants (or arcs) $v$ and $w$ (not necessarily of the same pattern) have $s$ \emph{points of intersection} if $s$ is the minimal number such that there are plants $v'$ and $w'$ equivalent to $v$ and $w$, respectively, such that $v'^{\circ}$ and $w'^{\circ}$ share $s$ points in $S\setminus(\Delta\cup\partial S)$. We write $v.w=s$. For $v.w=0$, we call $v$ and $w$ \emph{disjoint}. \end{enumerate} \end{definition} \begin{figure} \centering \captionsetup[subfigure]{labelformat=empty} \begin{subfigure}[b]{0.31\linewidth} \centering \includegraphics[scale=0.14]{plant1.pdf} \caption{$\DEL = \XI = (1,1,1)$} \end{subfigure} \begin{subfigure}[b]{0.31\linewidth} \centering \includegraphics[scale=0.17]{plant2.pdf} \caption{$\DEL = (3,2,1)$, $\XI = (0,2,1)$} \end{subfigure} \begin{subfigure}[b]{0.31\linewidth} \centering \includegraphics[scale=0.17]{plant3.pdf} \caption{$\DEL = (3,2,1)$, $\XI = (0,2,1)$} \end{subfigure} \caption{Examples of $\XI$-plants in a disk.} \label{first-plants} \end{figure} First examples of plants in a two-dimensional closed disk $D$ can be seen in Figure~\ref{first-plants}. Given $\DEL$ and $\XI$ as in the captions, the left and right plants are colored. Changing $\XI$ to $(2,0,1)$, the middle plant is colored as well. \begin{lemma}\label{intersection_lemma} Let $v = (a_1, \ldots, a_\zeta)$ and $w = (b_1, \ldots, b_\xi)$ be plants in $(S, \Delta)$ with arbitrary patterns. The product $v.w$ is finite and arcwise distributive, i.e., we have $v.w = \sum_{i=1}^\zeta\sum_{j=1}^\xi a_i.b_j.$ \end{lemma} \begin{proof} For the first part, it suffices to show that generically, two arcs $a_{1}$, $b_{1}$ meet in finitely many points. By the transversality theorem (cf.~\cite{MR0061823}), the space of smooth embeddings $b_{1}\colon I \to S$ which are transversal to $a_{1}$ is dense in the space of all smooth embeddings. Now, $a_{1}\colon I \to S$ and $b_{1}\colon I \to S$ being transversal implies that $b_{1}^{-1}(a_{1}(I)) \subset I$ is a $0$-dimensional submanifold. Such a submanifold necessarily consists of only finitely many points. The inequality '$\geq$' is clear by definition of the products $a_i.b_j$. For the other inequality, let $v$, $w$ be in minimal position, i.e., $v.w = |v^{\circ} \cap w^{\circ}|$ and all intersections are transversal. Assume $v.w > \sum_{i=1}^\zeta\sum_{j=1}^\xi a_i.b_j$. By assumption, there exist indices $p$, $q$ with \begin{equation}\label{gnull} | a^{\circ}_p \cap {b^{\circ}_q}| > a_p.b_q. \end{equation} Therefore, there must be segments of $a_{p}$ and $b_{q}$ whose union is a continuous loop. Among all such $p$, $q$, choose $k$ and $l$ such that one such loop has no intersection with further arcs of $v$ or $w$. Then there is a closed disk $D_0 \subset S$ bounded by segments of $a_k$ and $ b_l$, containing no other arc segments of $v$ or $w$.
\begin{figure} \centering \captionsetup[subfigure]{labelformat=empty} \begin{subfigure}[b]{0.28\linewidth} \centering \def\svgwidth{\columnwidth} \input{intersection1.pdf_tex} \end{subfigure} \begin{minipage}[b][3cm][c]{0.05\linewidth} $\rightsquigarrow$ \vspace{2cm} \end{minipage} \begin{subfigure}[b]{0.245\linewidth} \centering \def\svgwidth{\columnwidth} \input{intersection2.pdf_tex} \end{subfigure} \caption{The isotopy $H$ from the proof of Lemma~\ref{intersection_lemma}.} \end{figure} After slight smooth deformations of $a_{k}$ or $ b_l$, if necessary, we may assume that neither~$*$ nor~$\Delta$ share points with $D_0$. These deformed arcs $a'_{k}$, $b'_{l}$ can be chosen such that for the plants $v'$, $w'$ defined by replacing $a_{k}$ by $a'_{k}$ and $b_{l}$ by $b'_{l}$, respectively, we have \begin{equation}\label{cond3} |v'^{\circ} \cap w'^{\circ}| \leq |v^{\circ} \cap w^{\circ}| + 1. \end{equation} Indeed, if both $*$ and~$\Delta$ intersected $D_0$, there would be no interior point of intersection between $a_k$ and $ b_l$, contradicting~(\ref{gnull}). Denote by $D_{1}$ the resulting closed disk bounded by segments of $a'_{k}$ and $b'_{l}$. We now define an isotopy $H\colon I\times S \to S$ that satisfies \begin{align} \label{cond1} |H(\{1\} \times {a'_{k}}^{\circ}) \cap {{b_{l}'}^{\circ}}| &<| {a_{k}'}^\circ \cap {b_{l}'}^\circ| , \\ \label{cond2} |H(\{1\} \times {v'}^{\circ}) \cap {w'}^{\circ}| &\leq | {v'}^{\circ} \cap {w'}^{\circ}| - 2. \end{align} Let $\epsilon>0$ and let $D_1^\epsilon$ be an open $\epsilon$-neighborhood of $D_1$, where we choose $\epsilon$ such that there are no segments of arcs in $D_1^\epsilon$ other than $a'_{k}$ and $b'_{l}$, and such that $D_1^\epsilon$ lies entirely in the interior of $S$. Now, the arc segment $a'_{k} \cap \overline{D_1^\epsilon}$ is isotopic (fixing endpoints) to an arc segment in $\overline{D_{1}^{\epsilon}}$ which does not intersect $b'_{l} \cap \overline{D_1^\epsilon}$. By the isotopy extension theorem (cf.~\cite{MR0123338}), this isotopy may be extended to an ambient isotopy $h\colon I \times \overline{D_{1}^{\epsilon}} \to \overline{D_1^\epsilon}$ which fixes the boundary circle $\partial\overline{D_1^\epsilon}$ pointwise. We extend $h$ by the identity on $S \setminus D_1^\epsilon$ and denote the resulting isotopy of $S$ by $H$. Now, (\ref{cond1}) is satisfied since we push $D_1$ across $ b'_l$ and thus remove two intersections. As the potential slight deformation of $a'_k$ and $ b'_l$ creates at most one extra intersection, the application of $H$ removes at least one intersection point. Then, (\ref{cond2}) follows from the choice of $\epsilon$. From (\ref{cond3}) and (\ref{cond2}), we obtain $|H(\{1\} \times {v'}^{\circ}) \cap {w'}^{\circ}| < |v^{\circ} \cap w^{\circ}| = v.w$, which contradicts the definition of the intersection number. The assertion follows. \end{proof} \begin{definition} With notation as above, we define: \begin{enumerate}[(i)] \item The \emph{full $\XI$-plant complex} $\FP(S)$ is the simplicial complex with isotopy classes of $\XI$-plants in $(S,\Delta)$ as vertices. A $q$-simplex in $\FP(S)$ is a set of $q+1$ isotopy classes of $\XI$-plants on $(S,\Delta)$ which can be embedded with disjoint interiors. \item The \emph{$\XI$-plant complex} $\mathcal{P}_{\DEL}^{\XI}(S)$ is the subcomplex of $\FP(S)$ which contains the simplices $\alpha\in\FP(S)$ such that no two plants of $\alpha$ share a point in $\Delta$.
\item The \emph{full colored $\XI$-plant complex} $\FO(S) \subset \FP(S)$ and the \emph{colored $\XI$-plant complex} $\cO(S) \subset \cP(S)$ are the subcomplexes defined by the restriction that only colored plants are allowed as vertices. \end{enumerate} \end{definition} \begin{figure} \centering \captionsetup[subfigure]{labelformat=empty} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[scale=0.2]{plant4.pdf} \caption{$2$-simplex in $\mathrm{F}\mathcal{P}_{(3,3)}^{(1,1)} (D)$} \end{subfigure} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[scale=0.2]{plant5.pdf} \caption{$1$-simplex in $\mathcal{P}_{(2,2,2)}^{(1,1,0)} (D)$} \end{subfigure} \caption{Representatives of simplices in plant complexes (1) -- colors indicate the type of endpoints, line styles distinguish between different plants.} \end{figure} \begin{remark} By definition, we have the following diagram: $$ \xymatrix{ \cO(S) \ar@{}[d]|-*[@]{\subset} \ar@{}[r]|-*[@]{\subset} &\cP(S) \ar@{}[d]|-*[@]{\subset}\\ \FO(S) \ar@{}[r]|-*[@]{\subset} &\FP(S)} $$ As there are only finitely many isotopy classes of arcs in $(S,\Delta)$, all plant complexes have a finite number of simplices. \end{remark} We use two partial orderings of $\N_{0}^t$: \begin{itemize} \item $(x_1, \ldots, x_t) \preccurlyeq (y_1, \ldots, y_t)$ if there is a permutation $\sigma\in \mathfrak{S}_{t}$ such that for all $i=1,\ldots, t$, we have $x_i \leq y_{\sigma{(i)}}$, and \item $(x_1, \ldots, x_t) \leq (y_1, \ldots, y_t)$ if $x_i \leq y_i$ for all $i=1,\ldots, t$. \end{itemize} Immediately from the definitions, we obtain: \begin{lemma}\label{nonempty} Both $\FP(S)$ and $\cP(S)$ are non-empty if and only if $\XI\preccurlyeq\DEL$, and $\FO(S)$ and $\cO(S)$ are non-empty if and only if $\XI \leq \DEL$. \end{lemma} \begin{remark}\label{plant-order} There is a natural total order on the vertices of simplices in $\cP(S)$ ($\cO(S)$) which induces the structure of an ordered simplicial complex: By imposing a Riemannian structure on $S$, we may assume that all arcs are parametrized by arc length. Now for plants $v$, $w$ of the same pattern, we write $v<w$ if and only if in a non-intersecting realization of $v$ and $w$, there is an arc in $v$ whose inward pointing unit tangent vector at $*$ occurs in clockwise order before any inward pointing unit tangent vector of an arc in $w$. \end{remark} \begin{figure} \centering \captionsetup[subfigure]{labelformat=empty} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[scale=0.2]{plant6.pdf} \caption{$1$-simplex in $\mathcal{P}_{(3,3)}^{(2,1)} (D)$} \end{subfigure} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[scale=0.2]{plant7.pdf} \caption{$1$-simplex in $\mathcal{O}_{(4,4)}^{(2,1)} (D)$} \end{subfigure} \caption{Representatives of simplices in plant complexes (2).} \end{figure} \section{Connectivity analysis} In this section, we always assume that the complexes are non-empty, so we have $\XI\preccurlyeq\DEL$ for plant complexes, and $\XI\leq\DEL$ for colored plant complexes. Similar connectivity proofs can be found in~\cite[Sect.~4]{MR3135444} and~\cite[Sect.~2]{1410.0923}. We defined the simplicial complexes abstractly. In order to talk about the \emph{connectivity} of a complex $\mathcal{O}$, we need to consider its geometric realization, which we denote by $|\mathcal{O}|$. \begin{proposition}\label{contractible} Both $|\FP(S)|$ and $|\FO(S)|$ are contractible. 
\end{proposition} \begin{proof} In the proof, we construct a flow similar to the \emph{Hatcher flow} introduced in~\cite{MR1123262}. The proof is carried out for $|\FP(S)|$. It is fully analogous for $|\FO(S)|$. In the following, we switch freely between plants as subsets of $S$, plants as vertices of plant complexes, and their respective isotopy classes if no misunderstandings are possible. We fix a plant $v$ in $\FP(S)$ with arcs $a_1, \ldots, a_\xi$ in a fixed order. Our goal is to show that $|\FP(S)|$ deformation retracts onto $|\Star(v)|$, which is contractible. We order the interior points of $v$ in the following way: \begin{itemize} \item $x \prec y$ if $x\in a_i$, $y\in a_j$ for $i<j$ \item If $x,y \in a_i$, $x\prec y$ if $x$ is closer to $*$ along $a_i$ than $y$. \end{itemize} Let $\alpha = \langle w_0, \ldots, w_p \rangle$ be a $p$-simplex of $\FP(S)$ with representative plants~$w_{i}$ chosen such that the number of intersections with $v$ is minimal. The ordering of the interior points of $v$ induces an order on the set $(w_{0}\cup\ldots\cup w_{p})\cap v^\circ$ of intersection points, which we denote by $g_1, \ldots, g_k$. At $g_i$, the plant $w_{j_i}$ intersects the arc $a_{l_i}$. In an $\epsilon$-neighborhood of $g_{1}$, we erase the segments of the arc of $w_{j_{1}}$ which contains~$g_{1}$ and join the two loose ends to $*$ by straight lines. We denote by $C(\alpha)$ the plant that is obtained from $w_{j_1}$ by replacing the arc containing $g_1$ with a smooth approximation of the one of the two newly created paths that is an arc. Because of the order we put on the intersection points, $C(\alpha)$ is a plant which is disjoint from the plants in $\alpha = \alpha^{(1)}$. In the following, we misuse notation by allowing vertices to occur more than once in a simplex. In this sense, by $\langle c_{0}, \ldots, c_{p}\rangle$ we denote the simplex with vertices $\{c_{0}, \ldots, c_{p}\}$, which might be of dimension smaller than $p$. We now define a finite sequence of simplices inductively. We start with $i=1$, the first intersection point $g_{i} = g_{1}$, and the simplex $$r_1(\alpha) = \langle w_0, \ldots, w_p, C(\alpha)\rangle = \langle \alpha^{(1)}, C(\alpha^{(1)})\rangle,$$ and execute the procedure below. In every step, we choose representative plants for the vertices of the simplices such that the number of intersections with $v$ is minimal. \begin{enumerate}[(1)] \item Increase $i$ by one. Stop if $i=k+1$, otherwise go to the next step. \item If the intersection at $g_{i}$ is not yet resolved in $\alpha^{(i)}$, replace the plant of $\alpha^{(i)}$ that contains~$g_i$ with $C(\alpha^{(i)})$, denote the resulting $p$-simplex by $\alpha^{(i+1)}$, and set $r_{i}(\alpha) = \langle \alpha^{(i)} , C(\alpha^{(i)})\rangle.$ Else, set $\alpha^{(i+1)} = \alpha^{(i)}$ and $r_{i}(\alpha) = \langle \alpha^{(i)} , w_{j_{i}}'\rangle$, where $w_{j_{i}}'$ is the $j_{i}$-th plant of $\alpha^{(i)}$. \item Go to step 1. \end{enumerate} By the above remarks about disjointness, we produce simplices $r_{i}(\alpha)$ of dimension at most $p+1$ at each step. The $p$-simplex $\alpha^{(k+1)}$ is in the star of $v$, since all of its plants are disjoint from $v$. By construction, $\alpha^{(k+1)}$ is a face of $r_{k}(\alpha)$. Now, we may use the sequence $r_1(\alpha), \ldots, r_{k}(\alpha)$ to define a deformation retraction of $|\FP(S)|$ onto $|\Star(v)|$. Using barycentric coordinates\footnote{Here, we use the same order on the vertices of $\FP(S)$ as in the definition of the $r_{i}(\alpha)$. 
If a vertex~$c_{j}$ appears more than once in $r_{i}(\alpha)$, adding up the corresponding entries of a given tuple $T$ yields the barycentric coordinate of the point we refer to.}, any point on the realization of the $p$-simplex $\alpha = \langle w_{0}, \ldots, w_{p} \rangle$ can be identified with a tuple $T = (t_0, \ldots, t_p)$, where $t_i\geq 0$ and $\sum_{i=0}^{p} t_i = 1$. For $i=0,\ldots, p$, let $k_i = |w_i^\circ \cap v^\circ| = w_i.v$, where the second equality is due to the choice of the~$w_{i}$. Given a tuple $T$ and $i\in\{1, \ldots, k\}$, we assign to~$g_i$ the weight $\omega_i(T) = t_{j_i}/k_{j_i}$ if $k_{j_{i}}>0$, and $\omega_{i}(T) = 0$ else, such that $\sum_{j=1}^{k} \omega_j(T) = 1$. For fixed $\alpha$ and $T$, we define $f_{\alpha}^{T}\colon I \to |\FP(S)|$ by $$ f_{\alpha}^{T}(s) = [r_i(\alpha), (x_0, \ldots, x_{p+1})] $$ for $\sum_{j=1}^{i-1} \omega_j(T)\leq s \leq \sum_{j=1}^{i} \omega_j(T)$, where $i \in\{ 1, \ldots, k\}$. Here, we set $x_l = t_l$ for all $l$, except for the pair $$ (x_{j_i}, x_{p+1}) = (t_{j_i} - k_{j_i} (s-\sum_{j=1}^{i-1}\omega_j), k_{j_i}(s-\sum_{j=1}^{i-1}\omega_j )). $$ The map $f^{T}_{\alpha}$ is well-defined: \begin{align*} f^{T}_{\alpha}\left(\sum_{j=1}^{i} \omega_j\right) &= [r_{i+1}(\alpha), (t_0, \ldots,t_p, 0)] = [r_{i}(\alpha), (t_0, \ldots, t_{j_{i} -1}, 0, t_{j_{i} + 1}, \ldots, t_p, t_{j_{i}})]. \end{align*} By construction, $f_{\alpha}^{T}(1)$ lies in $\alpha^{(k+1)}\in \Star(v)$. We may now patch the maps $f^{T}_{\alpha}$ for all simplices $\alpha$ and coordinates $T$ with only non-zero entries in order to obtain a global homotopy $f\colon I \times |\FP(S)| \to |\FP(S)|$ with image in $\Star(v)$. We still need to prove that $f$ is continuous. By \cite[Thm.~3.1.15]{MR0210112}, we only have to show continuity for the restriction of $f$ to the geometric realization of any simplex. In the interior of the realization of $\alpha = \langle w_{0}, \ldots, w_{p}\rangle$, continuity follows from the definition of the $\omega_i(T)$. It remains to show that we may go to a subsimplex of $\alpha$ continuously. That is, for $\beta=\langle w_0, \ldots, w_{p-1}\rangle$, we must show that for all $s\in I$, $ f_{\alpha}^{(t_0, \ldots, t_{p-1},0)}(s) = f_{\beta}^{(t_0, \ldots, t_{p-1})}(s) $. This follows from Lemma~\ref{intersection_lemma}: The number of intersections of $w_p$ and~$v$ does not depend on the simplex $\alpha$; in other words, we have $v.\beta = v.\alpha - v.w_p$. Thus, going to $\beta$ corresponds exactly to $t_{p}$ and any corresponding weight going to zero. We can therefore pass from $\alpha$ to any facet $\beta$ of $\alpha$ continuously. For an arbitrary subsimplex, the claim follows inductively. \end{proof} \begin{theorem}\label{delta-conn} For the connectivity of plant complexes, we have: \begin{enumerate}[(i)] \item If $\min\XI>0$, $\conn |\cO(S)| \geq \min_{i=1, \ldots, t} \left\lfloor\frac{\delta_{i}}{2 \xi_{i}} \right\rfloor - 2$. \label{colored-conn} \item If $\min\XI>0$, $\conn |\cP(S)|\geq \left\lfloor \frac{\min\DEL}{2 \max \XI} \right\rfloor -2$. \label{plant-conn} \item Let $r>0$, $t\geq m> 0$. If $\DEL = (r, \ldots, r)\in\N^{t}$ and $\XI = (\overbrace{r, \ldots, r}^{m \text{ times}}, 0, \ldots, 0)\in\N_{0}^{t}$, $ \conn|\cP(S)| \geq \left\lfloor \frac{t}{2m-1}\right\rfloor -2$.\label{multi-conn} \end{enumerate} \end{theorem} \begin{remark} For $m=1$, the complex from Theorem~\ref{delta-conn}(\ref{multi-conn}) is the \emph{fern complex} from~\cite{1410.0923}. By the same article, the fern complex is at least $(t-2)$-connected.
Our connectivity result generalizes this bound to the case $m>1$ which we call the \emph{multifern} case. \end{remark} If $\gamma$ is a collection of arcs in $(S,\Delta)$ which only meet at $*$, we write $S_\gamma$ for the connected space $(S\setminus\gamma)\cup\{*\}$. We can define (colored) plant complexes on $S_{\gamma}$ accordingly. In particular, the arguments from Proposition~\ref{contractible} carry over to $S=S_\gamma$, so spaces of the form $|\FP(S_\gamma)|$ ($|\FO(S_{\gamma})|$) are contractible. \begin{proof}[Proof of Theorem~\ref{delta-conn}] We prove the proposition for a surface with boundary $S$ or a space of the form $S_\gamma$. The proof is performed in detail for part~(\ref{colored-conn}), which is the result needed in subsequent sections. Some remarks on the proofs on the other two parts are included below. \textbf{Claim (\ref{colored-conn})} is proved by induction on $\min\DEL$, with $\XI$ fixed. The claim is void if there is an $i\in\{1, \ldots, t\}$ such that $\delta_{i} < 2\xi_{i}$, and we assume for the proof that $\delta_{i} \geq \xi_{i}$ for all $i$. Let now $k\leq \min_{i=1, \ldots, t} \left\lfloor\frac{\delta_{i}}{2 \xi_{i}} \right\rfloor - 2$, and consider a map $ f\colon S^k \to |\cO(S)|. $ We have to show that $f$ factors through a $(k+1)$-disk. By the contractibility of $|\FO(S)|$, we have a commutative diagram: $$ \xymatrix{ S^k \ar@{^{(}->}[d] \ar[r]^f &|\cO(S)| \ar@{^{(}->}[d]\\ D^{k+1} \ar[r]^{\hat f} &|\FO(S)|} $$ By simplicial approximation, we may assume that all maps are simplicial. That is, they are the geometric realization of simplicial maps $\mathcal{F} \colon \mathcal{S}^{k}\to \cO(S)$ and $\hat{\mathcal{F}} \colon \mathcal{D}^{k+1}\to \FO(S)$ for some finite PL triangulations $\mathcal{S}^{k}$ and $\mathcal{D}^{k+1}$ of the $k$-sphere and the $(k+1)$-disk, respectively. It suffices to show that $\mathcal{F}$ factors through $\mathcal{D}^{k+1}$. Our goal is to deform $\hat{ \mathcal{F}}$ such that its image lies entirely in $\cP(S)$. We call a simplex~$\alpha$ of~$\mathcal{D}^{k+1}$ \emph{bad} if in each plant in $\hat{\mathcal{F}}(\alpha)$, there is at least one arc that shares an endpoint with an arc from another plant in $\hat{\mathcal{F}} (\alpha)$ (note that vertices are good). In particular, a simplex of $\mathcal{D}^{k+1}$ with image in $\cP(S)$ cannot contain any bad subsimplices. Let $\alpha$ be a bad simplex of $\mathcal{D}^{k+1}$ of maximal dimension $p \leq k+1$ among all bad simplices. Now, $\hat{\mathcal{F}}$ restricts to a map $$ \hat{\mathcal{F}}|_{\Link(\alpha)}\colon \Link(\alpha) \to J_\alpha \coloneqq \mathcal{O}_{\DEL'}^{\XI}(S_{\hat{\mathcal{F}}(\alpha)}), $$ where $\DEL'$ is obtained from $\DEL$ by removing the endpoints of the arcs in $\alpha$ from an instance of $\DEL$. We still need to argue why the image of $\Link(\alpha)$ lies in $\mathcal{O}_{\DEL'}^{\XI}(S_{\hat{\mathcal{F}}(\alpha)})$: If it did not, there would be a bad simplex $\beta\in\Link(\alpha)$, hence $\alpha*\beta$ would be bad, contradicting the maximality of the dimension of $\alpha$ (note that $\alpha$ and $\beta$ are joinable as $\beta$ is in $\Link(\alpha)$). For all $i = 1, \ldots, t$, any $p$-simplex uses at most $(p+1)\cdot \xi_{i}$ endpoints of $\Delta_i$, so \begin{equation}\label{ineq} \delta_{i}'\geq \delta_{i} - (p+1)\cdot\xi_{i}. 
\end{equation} Furthermore, we have $p\leq k+1 \leq \min_{j=1, \ldots, t} \lfloor\frac{\delta_{j}}{2 \xi_{j}} \rfloor - 1$, so we obtain from~(\ref{ineq}) and the assumption $\delta_{i} \geq \xi_{i}$: \begin{align*} \delta'_{i} &\geq \delta_{i} - (p+1)\cdot \xi_{i} \\ &\geq \delta_{i} - \xi_{i} \cdot \min_{j=1, \ldots, t} \left\lfloor\frac{\delta_{j}}{2 \xi_{j}} \right \rfloor\\ & \geq \delta_{i} - \xi_{i} \left\lfloor \frac{\delta_{i}}{2\xi_{i}} \right\rfloor \\ & \geq \xi_{i} \end{align*} Here, the last inequality follows from the fact that for $a\geq b > 0$, the inequality $ a - b\cdot \left\lfloor \frac{a}{2b} \right\rfloor \geq b $ holds. From the assumption $\min\XI>0$, we get $\min\DEL'<\min\DEL$. Therefore, the induction hypothesis is applicable to $J_\alpha = \mathcal{O}_{\DEL'}^{\XI}(S_{\hat{\mathcal{F}}(\alpha)})$: \begin{align*} \conn J_{\alpha} &\geq \min_{i=1, \ldots, t} \left\lfloor\frac{\delta'_{i}}{2\xi_{i}} \right\rfloor - 2 \\ &\geq \min_{i=1, \ldots, t} \left\lfloor\frac{\delta_{i} - (p+1)\cdot \xi_{i}}{2\xi_{i}}-2 \right\rfloor \\ &= \left\lfloor \min_{i=1, \ldots, t}\left( \frac{\delta_{i}}{2 \xi_{i}} \right) -2- \frac{p+1}{2}\right\rfloor\\ &\geq k-p,\end{align*} since $p\geq 1$. The rest is \emph{standard machinery}, cf. also the end of the proof of~\cite[Thm.~4.3]{MR3135444}: By the above connectivity bound for $J_\alpha$, as the link of $\alpha$ is a $(k+1)-p-1 = (k-p)$-sphere, there is a commutative diagram $$ \xymatrix{ \Link(\alpha) \ar[r]^{\hat {\mathcal{F}}|_{\Link(\alpha)}} \ar@{^{(}->}[d] &J_\alpha \ar[r] &\cO(S)\\ K \ar[ru]_{\hat {\mathcal{F}}'}&& } $$ with $K$ a $(k-p+1)$-disk with boundary $\partial K = \Link(\alpha)$. The right map identifies plants on $S_{\hat{\mathcal{F}}(\alpha)}$ with plants on $S$. Now, in the triangulation $\mathcal{D}^{k+1}$, replace the $(k+1)$-disk $\Star(\alpha) = \alpha*\Link(\alpha)$ with the $(k+1)$-disk $\partial\alpha*K$. This works because both $\Star(\alpha)$ and $\partial\alpha*K$ have the same boundary $\partial\alpha*\Link(\alpha)$. On $\partial\alpha*K$, modify $\hat{\mathcal{F}}$ by $$ \hat {\mathcal{F}} * \hat {\mathcal{F}}'\colon \partial\alpha*K \to \FO(S). $$ This is possible since $\hat {\mathcal{F}}'$ agrees with $\hat {\mathcal{F}}$ on $\Link(\alpha) = \partial K$. New simplices in $\partial \alpha*K$ are of the form $\tau = \beta_{1}*\beta_{2}$, where $\beta_{1}$ is a proper face of $\alpha$ and $\beta_{2}$ is mapped to $J_\alpha$. Therefore, if $\tau$ is a bad simplex in $\partial\alpha*K$, then $\tau=\beta_{1}$ since plants of $\hat {\mathcal{F}}'(\beta_{2})$ do not share any endpoints with other plants of $\hat {\mathcal{F}}'(\beta_{2})$ or $\hat {\mathcal{F}}(\beta_{1})$, so they cannot contribute to a bad simplex. But $\beta_{1}$ is a proper face of $\alpha$, so we have decreased the number of top-dimensional bad simplices. By induction on the number of top-dimensional bad simplices, the result follows. The proof of \textbf{claim~(\ref{plant-conn})} is largely analogous to the proof of claim~(\ref{colored-conn}), replacing $\delta_{i}$ by $\min\DEL$ and $\xi_{i}$ by $\max\XI$ where necessary. \textbf{Claim~(\ref{multi-conn})} is proved by induction on $t$, for fixed $m$ and $r$. The multifern complex $\cP(S)$ is always non-empty, so the base case $t=m$ is trivial, as are all cases with $t< 4m-2$. We assume for the induction that $t\geq 2m$ holds. We argue as in part~(\ref{colored-conn}): Let $\alpha$ be a bad simplex of $\mathcal{D}^{k+1}$ of maximal dimension $p$.
An arbitrary $p$-simplex in $\FP$ uses at most $(p+1) m$ different $\Delta_i$. Since $\alpha$ is bad, it uses at most $(p+1)(m-1) + \lfloor \frac{p+1}{2}\rfloor$ different $\Delta_i$. Let $\DEL'$ be the remainder of $\DEL$ after removing the arcs of $\alpha$, and $t'$ be the number of positive entries of~$\DEL'$. Recall that by the definition of multiferns, a multifern $\beta$ having endpoints at $\Delta_i$ necessarily implies that $\Delta_i$ disappears in $S_\beta$. Then, we have \begin{align} t' &\geq t - \left((p+1)(m-1) + \left\lfloor \frac{p+1}{2}\right\rfloor \right) \label{n-eqn}\\ &= t - \left\lfloor (p+1)\left(m-\frac{1}{2}\right) \right\rfloor\nonumber \\ &\geq t - \left\lfloor \left\lfloor\frac{t}{2m-1}\right\rfloor \left(m-\frac{1}{2}\right)\right\rfloor \label{eqn_x} \\ &\geq t - \left\lfloor\frac{t}{2}\right\rfloor\nonumber \\ &\geq m. \label{eqn_y} \end{align} Here, (\ref{eqn_x}) is due to the fact that $p+1 \leq k+2 \leq \left\lfloor \frac{t}{2m-1}\right\rfloor$, and~(\ref{eqn_y}) is true since we demanded that $t\geq 2m$. Consequently, we can apply the induction hypothesis to $J_\alpha$. Using~(\ref{n-eqn}), we obtain \begin{align*} \conn(J_\alpha) &\geq \left\lfloor \frac{t'}{2m-1}\right\rfloor -2 \\ &\geq \left\lfloor \frac{t-(p+1)(m-1) - \left\lfloor \frac{p+1}{2}\right\rfloor }{2m-1} - 2\right\rfloor \\ &\geq \left\lfloor k - \frac{\left\lfloor (p+1)\left(m-\frac{1}{2}\right) \right\rfloor}{2m-1}\right\rfloor \\ &\geq \left\lfloor k - \frac{p+1}{2} \right\rfloor \\ &\geq k-p, \end{align*} as $p\geq 1$ (there are no bad vertices). The rest of the proof can be copied from above (\emph{standard machinery}). \end{proof} \section{Combinatorics of colored plant complexes}\label{combinatorics} In this section, we focus on a specific class of colored plant complexes on a closed disk~$D$. Let $\XI$ be a $t$-tuple of positive integers and $n\in\N$. We consider colored plant complexes of the form $ \plant \coloneqq \mathcal{O}_{n\cdot\XI}^{\XI}(D) $ and write $\plantq$ for the set of $q$-simplices of $\plant$. Because of the specific constellation $\DEL = n\cdot\XI$, it is clear that the dimension of $\plant$ equals $n-1$. After applying a suitable homeomorphism, we may assume that $D$ lies in the complex plane as a disk of radius $1$ centered at $0\in\C$, that we have $* = -\mathrm{i}$, and that the $n\xi$ points of $\Delta$ are all real and arranged from left to right in $n$~\emph{clusters} of $\xi$ points each, where $\xi_{i}$ points in each cluster lie in $\Delta_{i}$, for $i=1, \ldots, t$. For each of these clusters, we suppose that the points in $\Delta_{i}$ are placed to the left of the points in $\Delta_{j}$, for $1\leq i<j\leq t$. \subsection*{The braid action} We can identify the full braid group $\Br_{n\xi}$ with the mapping class group $\Map(D\setminus \Delta)$, where the standard generator~$\sigma_{i}$ corresponds to a half twist in counterclockwise direction which interchanges the $i$-th and the $(i+1)$-th marked point. The colored braid group $\Br_{n\cdot\XI} \subset \Br_{n\xi}$ may then be identified with the set of mapping classes which leave the given partition of $\Delta$ invariant. This gives a well-defined left action of $\Br_{n\cdot\XI}$ on $\plantq$ by isotopy classes of orientation-preserving diffeomorphisms (or homeomorphisms, which is equivalent) for all $q = 0, \ldots, n-1$. We write $\alpha \mapsto \sigma\cdot\alpha$ for the action of $\sigma\in\Br_{n\cdot\XI}$ on a simplex $\alpha\in\plant$. Let $\alpha\in\plantq$ be a $q$-simplex. 
The sorting of the inward pointing unit tangent vectors of representative arcs of $\alpha$ at $*$ in clockwise order is well-defined, cf.~Remark~\ref{plant-order}. This ordering is invariant under the $\Br_{n\cdot\XI}$-action. We assign an \emph{index}~$i\in\{0, \ldots, q\}$ and a \emph{color}~$j\in\{1, \ldots, t\}$ to each arc in $\alpha$: \begin{itemize} \item An arc is labeled with the index $i$ if it belongs to the $(i+1)$-th plant in $\alpha$, where we use the order on the plants of $\alpha$ described in Remark~\ref{plant-order}. \item An arc is labeled with the color $j$ if its endpoint lies in $\Delta_{j}$. \end{itemize} Consequently, any $q$-simplex $\alpha$ defines a unique sequence \begin{equation}\label{ic-sequence} \omega_{\alpha} = (i_{1}, j_{1}), (i_{2}, j_{2}), \ldots, (i_{(q+1)\cdot\xi}, j_{(q+1)\cdot\xi}), \end{equation} where $i_{k}$ is the index and $j_{k}$ the color of the $k$-th arc of $\alpha$, in clockwise order at $*$. By definition of the mapping class group, this sequence is $\Br_{n\cdot\XI}$-invariant. \begin{figure} \centering \captionsetup[subfigure]{labelformat=empty} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[scale=0.2]{plant8.pdf} \caption{$\omega_{\alpha} = (0,1),(1,2),(0,2),(0,1),(1,1),(1,1) $} \end{subfigure} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[scale=0.2]{plant9.pdf} \caption{$\omega_{\beta} = (0,2),(1,2),(2,1),(1,1),(2,2),(0,1)$} \end{subfigure} \caption{IC-sequences of simplices $\alpha\in\mathcal{O}^{[2,(2,1)]}_{1}$ and $\beta\in\mathcal{O}^{[4,(1,1)]}_{2}$.} \label{fig-sequences} \end{figure} \begin{definition} We call~(\ref{ic-sequence}) the \emph{IC-sequence} (index-color-sequence) of $\alpha\in\plantq$. \end{definition} With the help of IC-sequences, we are able to count the orbits of the $\Br_{n\cdot\XI}$-action: \begin{lemma}\label{orbits} Let $q < n$. The set $\plantq$ decomposes into $$ l_{q}^{\XI} \coloneqq \#\left\{\plantq / \Br_{n\cdot \XI} \right\} = \frac{(\xi(q+1))!}{(q+1)!\cdot(\xi_{1}!\cdot\ldots\cdot\xi_{t}!)^{q+1}} $$ $\Br_{n\cdot\XI}$-orbits. The orbits are in bijective correspondence with the set of occurring IC-sequences for $q$-simplices. \end{lemma} \begin{remark} A priori, the number $l_{q}^{\XI}$ depends not only on $q$ and $\XI$ but also on $n$. As a consequence of the lemma, the actual quantity is independent of $n$ as long as $q<n$; therefore, it makes sense to omit $n$ from the notation. \end{remark} \begin{proof} We start by counting the number of possible IC-sequences. We say a sequence of type~(\ref{ic-sequence}) is \emph{$q$-feasible} if each index $i\in\{0, \ldots, q\}$ is assigned $\xi$ times; for each index, each color $j \in \{1, \ldots, t\}$ is assigned $\xi_{j}$ times; and the index $i+1$ does not appear before the index $i$, for all $i = 0, \ldots, q-1$. Since $D$ is path-connected, a $q$-feasible sequence appears as the IC-sequence of a $q$-simplex if and only if $q< n$, i.e., as long as there are enough endpoints for the arcs. We may now count the number of $q$-feasible sequences: There are $$ {\binom{\xi (q+1)}{\underbrace{\xi, \ldots, \xi}_{(q+1)\text{ times}}}}\frac{1}{(q+1)!} = \frac{(\xi(q+1))!}{(q+1)!\cdot(\xi!)^{q+1}} $$ different partitions of $\xi (q+1)$ arcs into subsets of size $\xi$. Any such partition gives a unique indexing (recall that the first arc with index $i$ appears before the first arc with index $i+1$).
Given an index $i$, the $\xi$ arcs labeled with it can be colored in $ \binom{\xi}{\xi_{1}, \ldots, \xi_{t}} $ different ways; since there are $q+1$ different indices, this yields $$ l_{q}^{\XI} = \frac{(\xi(q+1))!}{(q+1)!\cdot(\xi!)^{q+1}}\cdot \binom{\xi}{\xi_{1}, \ldots, \xi_{t}}^{q+1} = \frac{(\xi(q+1))!}{(q+1)!\cdot(\xi_{1}!\cdot\ldots\cdot\xi_{t}!)^{q+1}} $$ different $q$-feasible sequences in total. We have already seen above that the IC-sequence of a $q$-simplex is invariant under the $\Br_{n\cdot\XI}$-action. In the second part of the proof, we will now show that the group $\Br_{n\cdot\XI}$ acts transitively on simplices with the same IC-sequence, by an argument similar to \cite[Prop.~5.6]{0912.0325}. An alternative proof can be obtained by adapting the methods from the proof of \cite[Prop.~2.2(1)]{MR3135444}. Let $\alpha, \beta$ be two $q$-simplices with the same IC-sequence $\omega$. We choose representative non-intersecting collections of the plants of $\alpha$ and $\beta$ with arcs $a_{1}, \ldots, a_{(q+1)\xi}$ and $b_{1}, \ldots, b_{(q+1)\xi}$, respectively, subscripts chosen such that the arcs are arranged in clockwise order at $*$. Since $\Br_{n\cdot\XI}$ surjects onto $\mathfrak{S}_{n\cdot\XI}$, we may assume that $a_{i}$ and $b_{i}$ have the same endpoint $a_{i}(1) = b_{i}(1)$ for all $i = 1, \ldots, (q+1)\xi$. Furthermore, after a suitable isotopy, we may as well assume that for some $\epsilon>0$, we have $a_{i}(t) = b_{i}(t)$ for all $0 \leq t \leq \epsilon$. Hence, if we choose a continuous increasing function $h\colon I \to \R$ with $h(t) = t$ for $0 \leq t \leq \epsilon/2$ and $h(1) = \epsilon$, we obtain $a_{i} \circ h = b_{i} \circ h$ for all $i$. It remains to show that there is an orientation-preserving homeomorphism $G$ of~$D$ which, for all $i = 1, \ldots, (q+1)\xi$, retracts the arc $a_{i}$ to $a_{i} \circ h$, and fixes those marked points which are not endpoints of arcs in $\alpha$. In addition, we construct a similar map $H$ which carries $b_{i}$ to $b_{i} \circ h$ for all $i$. Then, the homeomorphism $H^{-1} \circ G$ defines a mapping class which carries $\alpha$ to $\beta$, and which corresponds to an element in $\Br_{n\cdot\XI}$ because the IC-sequences of $\alpha$ and $\beta$ coincide. To construct $G$, choose disjoint closed tubular neighborhoods $U_{i}$ of $a_{i}|_{[\epsilon/3, 1]}$ for all $i$. Such neighborhoods exist since the arcs are disjoint except at $*$. Now, $U_{i}$ is homeomorphic to a closed disk, and so there exists a homeomorphism which restricts to the identity on $\partial U_{i}$ and which carries the arc segment $U_{i} \cap a_{i}$ to its retraction $U_{i} \cap (a_{i} \circ h)$. Combining these homeomorphisms and extending them by the identity on $D \setminus \bigcup_{i} U_{i}$ yields the desired homeomorphism~$G$. The construction of $H$ is analogous. \end{proof} \begin{lemma}\label{stabilizers} Let $q< n$. For any simplex $\alpha\in\plantq$, the stabilizer of $\alpha$ under the $\Br_{n\cdot\XI}$-action is isomorphic to $\Br_{(n-q-1)\cdot\XI}$. In particular, there is a bijection between the elements of the orbit $\Br_{n\cdot\XI}\cdot\alpha$ and the cosets in $\Br_{n\cdot\XI} / \Br_{(n-q-1)\cdot\XI}$. \end{lemma} \begin{proof} We show that the stabilizer of any simplex $\alpha\in\plantq$ is isomorphic to $\Br_{(n-q-1)\cdot\XI}$. Then, the second assertion follows directly from the orbit-stabilizer theorem. Let $\Sigma\subset D$ be the union of a representative set of the arcs of $\alpha$, only intersecting at~$*$.
In clockwise order around $*$, denote the arcs in $\Sigma$ by $a_1, \ldots, a_{(q+1)\xi}$. By $\Map(D\setminus \Delta, \Sigma)$, we denote the group of isotopy classes of orientation-preserving diffeomorphisms of $D\setminus \Delta$ which fix $\Sigma$ pointwise. Since $\Sigma$ is contractible, the group $\Map(D\setminus\Delta, \Sigma)$ may be identified with $\Map(D\setminus (\Delta \setminus \Sigma)) \cong \Br_{(n-q-1)\cdot\XI}$, which itself may be identified with a subgroup of $\Br_{n\cdot\XI}$. We will show that the inclusion of subgroups of $\Map(D\setminus\Delta)$ \begin{equation*}\label{injection-mcg} \Map(D\setminus (\Delta \setminus \Sigma)) \hookrightarrow \left(\Map\left(D\setminus\Delta\right)\right)_{\alpha} \end{equation*} is surjective and hence an isomorphism. For this part, we follow the similar proof in \cite[Prop.~2.2(2)]{MR3135444}. Choose an element $\phi \in\Diff^+(D\setminus\Delta)$ which stabilizes the simplex $\alpha$. We have to show that $\phi$ is isotopic to a diffeomorphism that fixes $\Sigma$ pointwise. By definition, $\phi(a_1)$ is isotopic to $a_1$. The isotopy extension theorem \cite{MR0123338} implies that we can extend a corresponding isotopy to an ambient isotopy, so we may assume that $\phi$ fixes $a_1$ pointwise. We proceed by induction on the number of fixed arcs. Let $j>1$, and assume that $\phi$ fixes $\Sigma_{j} = a_1\cup \ldots\cup a_{j-1}$ pointwise. The arc $a_j$ is isotopic to $\phi(a_j)$, and we must show that the corresponding isotopy can be chosen to be disjoint from~$\Sigma_{j}$. If this holds, another application of the isotopy extension theorem implies the inductive step and thus the statement. Let $H\colon I \times I \to D$ be a smooth isotopy that carries $\phi (a_{j})$ to $a_{j}$, and assume that $H$ is transverse to $\Sigma_{j}$, using the transversality theorem \cite{MR0061823}. Here, $H(0,-)$ and $H(1,-)$ correspond to the arcs $\phi(a_{j})$ and $a_{j}$, respectively. Furthermore, we have $H(-,0) = *$, and $H(-,1) \in\Delta$ is the endpoint of $a_{j}$. Now, consider the preimage $H^{-1}(\Sigma_{j})$. The line $I \times \{0\}$ is the preimage of $*$, and by transversality, all other components must be circles in the interior of $I \times I$. Since there are only finitely many such circles, there is at least one which encloses no further circle in $H^{-1}(\Sigma_{j})$. Let $D_{0}$ be the closed disk it encloses. Let furthermore $\Sigma_{j}^{\delta}\subset D$ be a closed $\delta$-thickening of $\Sigma_{j}$ with $\delta>0$ chosen such that $\Sigma^{\delta}_{j}$ is still contractible. By continuity of $H$, we may now choose $\epsilon>0$ such that for a closed $\epsilon$-neighborhood $D_{0}^{\epsilon}$ of $D_{0}$, we have $H(\partial D_{0}^{\epsilon}) \subset \Sigma^{\delta}_{j}$. The restriction of $H$ to the closed disk $D_{0}^{\epsilon}$ defines an element of the relative homotopy group $\pi_{2}(D,\Sigma_{j}^{\delta} \setminus \Sigma_{j})$. This group is trivial. We may thus replace $H$ on $D_{0}^{\epsilon}$ by a homotopic map $H'$ with $H'|_{\partial D_{0}^{\epsilon}} = H|_{\partial D_{0}^{\epsilon}}$ and image in $\Sigma_{j}^{\delta} \setminus \Sigma_{j}$, which exists since $\Sigma_{j}^{\delta} \setminus \Sigma_{j}$ is simply connected. By extending $H'$ to $I \times I$ by $H'|_{(I \times I)\setminus D_{0}^{\epsilon}} = H|_{(I \times I)\setminus D_{0}^{\epsilon}}$, we obtain a homotopy $H'$ with $|\pi_{0} (H'^{-1}(\Sigma_{j}))| < |\pi_{0} (H^{-1}(\Sigma_{j}))|$. Inductively, we construct a homotopy $H''$ which is disjoint from $\Sigma_{j}$.
Finally, by \cite[Thm.~3.1]{MR0214087}, $H''$ can be replaced by an isotopy in $(D\setminus \Sigma_{j}) \cup \{*\}$. \end{proof} \subsection*{Standard simplices} As a consequence of Lemma~\ref{orbits} and Lemma~\ref{stabilizers}, there are bijections between the set $\plantq$ of $q$-simplices and the disjoint union of $l_{q}^{\XI}$ copies of $\Br_{n\cdot\XI} / \Br_{(n-q-1)\cdot\XI}$ for all $q= 0,\ldots, n-1$. Our next goal is to make these bijections compatible with the semi-simplicial structure on $\plant$: We want to fix bijections and describe the structure of a semi-simplicial set on $$ \cplant = \bigsqcup_{q=0}^{n-1} \cplantq = \bigsqcup_{q=0}^{n-1} \bigsqcup_{l_{q}^{\XI} \text{ copies}} \Br_{n\cdot\XI} / \Br_{(n-q-1)\cdot\XI} $$ which is compatible with the face maps, thus defines a semi-simplicial isomorphism between $\cplant$ and $\plant$. Let $\omega$ be a fixed IC-sequence. In what follows, we define a standard $q$-simplex in $\plantq$ for the IC-sequence $\omega$: We resort the terms of $\omega$ by the consecutive sorting criteria \emph{index}~(1st), \emph{color}~(2nd) and \emph{position in the IC-sequence}~(3rd), and draw the arcs of a $q$-simplex in this new order, respecting the order at~$*$ prescribed by the IC-sequence. We draw the arcs such that the endpoint of each arc is chosen as the leftmost free marked point, where we always \emph{undercross} marked points if possible. This is compatible with the coloring because of the arrangement of marked points. This process generates a set of $(q+1)\cdot\xi$ arcs which is unique up to isotopy since we work on a disk $D$. Distinguishing by the indices, these arcs can be divided into $q+1$ colored $\XI$-plants, which in turn define a $q$-simplex $\alpha_{\omega} \in\plantq$. Using these simplices, every $q$-simplex can be written as $\sigma\cdot\alpha_{\omega}$ for some IC-sequence $\omega$ and some $\sigma\in\Br_{n\cdot\XI}$ as consequence of the transitivity of the $\Br_{n\cdot\XI}$-action on simplices with the same IC-sequence. \begin{figure} \centering \captionsetup[subfigure]{labelformat=empty} \begin{subfigure}[b]{0.48\linewidth} \centering \includegraphics[scale=0.2]{plant10.pdf} \caption{$\alpha_{\omega_{\alpha}}\in\mathcal{O}^{[2,(2,1)]}_{1}$} \end{subfigure} \begin{subfigure}[b]{0.48\linewidth} \centering \includegraphics[scale=0.2]{plant11.pdf} \caption{$\alpha_{\omega_{\beta}}\in\mathcal{O}^{[4,(1,1)]}_{2}$} \end{subfigure} \caption{Standard simplices for the IC-sequences of $\alpha$ and $\beta$ from Figure~\ref{fig-sequences}.} \label{fig-standard} \end{figure} \begin{definition} The simplex $\alpha_{\omega} \in\plantq$ is called the \emph{standard simplex for the IC-sequence $\omega$}. \end{definition} \begin{definition} Let $\alpha\in\plantq$ be a $q$-simplex, and $\omega$ its IC-sequence. The sequence $ \widetilde\omega = (p_{1}, p_{2}, \ldots, p_{(q+1)\cdot\xi}) $ induced by the reordering of the IC-sequence of $\alpha$ described above, where $p_{i}$ is the position in the IC-sequence of the corresponding arc, is called the \emph{P-sequence} (position sequence) of $\alpha$. \end{definition} \begin{example} The P-sequences of the simplices in Figures~\ref{fig-sequences} and~\ref{fig-standard} are given by $ \tilde\omega_{\alpha} = (1,4,3,5,6,2) $ and $ \tilde\omega_{\beta} = (6,1,4,2,3,5). $ \end{example} We identify $\Br_{n\xi}$ with the mapping class group of the $n\xi$-punctured disk in such a way that $\Br_{n\cdot\XI}$ is the stabilizer of the colored configuration of $n\xi$ points in $D$. 
The element $\sigma_{i\xi + j}$, for $i=0, \ldots, q$ and $j=1, \ldots, \xi-1$, describes the isotopy class of a half twist that interchanges the $j$-th and the $(j+1)$-th point of the $(i+1)$-th cluster. On the other hand, the elements of the form $\sigma_{i\xi}$, $i=1, \ldots, q$, describe the half twists that interchange the $\xi$-th point of the $i$-th cluster with the first point of the $(i+1)$-th cluster. We know from Lemma~\ref{stabilizers} that the stabilizer of a $q$-simplex is isomorphic to the group $\Br_{(n-q-1)\cdot\XI}$. For any standard $q$-simplex $\alpha_{\omega}$, we may thus write \begin{align*} (\Br_{n\cdot\XI})_{\alpha_{\omega}}&= \left\langle \sigma_{k} \mid (q+1)\cdot\xi+1 \leq k \leq n \xi-1 \right\rangle \cap \Br_{n\cdot\XI}. \end{align*} As this expression is independent of $\omega$, we may denote the stabilizer of \emph{any} standard $q$-simplex by $$ L_{q} = (\Br_{n\cdot\XI})_{\alpha_{\omega}} \cong \Br_{(n-q-1)\cdot\XI}. $$ Now, once and for all, we fix the bijection \begin{align*} \Gamma_{\omega}\colon \Br_{n\cdot\XI}/L_{q} &\to \Br_{n\cdot\XI}\cdot\alpha_{\omega} \\ \sigma L_{q} &\mapsto \sigma\cdot\alpha_{\omega} \end{align*} for each $\Br_{n\cdot\XI}$-orbit in $\plant$. Collecting these maps for all IC-sequences, we obtain a global bijection \begin{align*} \Gamma\colon\cplant&\to\plant\\ (\omega_{p}, \sigma L_{p}) &\mapsto \sigma\cdot\alpha_{\omega_{p}} \:\:\:\text{for all }p\geq 0, \end{align*} where $\omega_{p}$ is an IC-sequence of a $p$-simplex. \subsection*{Face maps} Recall that the colored plants in a $q$-simplex $\alpha = \langle v_{0}, \ldots, v_{q} \rangle \in\plantq$ are ordered by the tangential direction at $*$ of their respective leftmost arcs. For $i=0,\ldots,q$, the $i$-th \emph{face map} is given by leaving out the vertex $v_{i}$: \begin{align*} \partial_{i}\colon \plantq &\to \plantqi \\ \langle v_{0}, \ldots, v_{q} \rangle &\mapsto \langle v_{0}, \ldots, \hat{v_{i}}, \ldots v_{q}\rangle. \end{align*} We now determine \emph{face maps} in $\cplantq$ for all $q\geq 0$ which are compatible with the face maps in $\plant$ insofar as they give $\cplant$ the structure of a semi-simplicial set isomorphic to $\plant$. These maps are evidently given by $\Gamma^{-1}\circ \partial_{i} \circ \Gamma$. For later use, we need to describe them explicitly. \begin{figure} \centering \captionsetup[subfigure]{labelformat=empty} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[scale=0.2]{proc1.pdf} \caption{$\alpha_{\omega}$} \end{subfigure} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[scale=0.2]{proc2.pdf} \caption{$\partial_{1}\alpha_{\omega}$} \end{subfigure} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[scale=0.2]{proc3.pdf} \caption{$\alpha_{d_{1}\omega}$} \end{subfigure} \caption{The simplices $\alpha_{\omega}$, $\partial_{1}\alpha_{\omega}$, $\alpha_{d_{1}\omega}\in\mathcal{O}^{[3,(1,1)]}$ for $\omega$ from Example~\ref{ex-faces}.} \label{fig-faces} \end{figure} Given a $q$-simplex with IC-sequence $\omega$, its $i$-th face is obtained by removing the arcs with index $i$. Hence, we may define the $i$-th \emph{face} of $\omega$ as the IC-sequence $d_{i}\omega$ obtained by first removing all the pairs with index $i$ from $\omega$, and then subtracting $1$ from the indices of the remaining elements with indices greater than~$i$. The IC-sequence $d_{i} \omega$ defines a P-sequence which we denote by $d_{i} \tilde\omega$.
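For concreteness, the combinatorics of P-sequences and of the faces $d_{i}$ is easy to make executable. The following short Python sketch is purely illustrative and is not used anywhere in the arguments below; IC-sequences are modelled as lists of (index, color) pairs, and the printed values reproduce the data of the example that follows.
\begin{verbatim}
# Illustration only: P-sequences and faces of IC-sequences.
# An IC-sequence is a list of (index, color) pairs, read off in
# clockwise order at the base point *.

def p_sequence(omega):
    # re-sort the arcs by index (1st), color (2nd), position (3rd)
    # and record their original (1-based) positions
    order = sorted(range(len(omega)),
                   key=lambda k: (omega[k][0], omega[k][1], k))
    return [k + 1 for k in order]

def face(omega, i):
    # d_i: delete all pairs with index i, then decrease larger indices by 1
    return [(idx - 1 if idx > i else idx, col)
            for (idx, col) in omega if idx != i]

omega = [(0, 2), (1, 1), (2, 1), (0, 1), (2, 2), (1, 2)]
print(p_sequence(omega))           # [4, 1, 2, 6, 3, 5]
print(face(omega, 1))              # [(0, 2), (1, 1), (0, 1), (1, 2)]
print(p_sequence(face(omega, 1)))  # [3, 1, 2, 4]
\end{verbatim}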
\begin{example}\label{ex-faces} Consider the IC-sequence $\omega = (0,2),(1,1),(2,1),(0,1),(2,2),(1,2)$ for a simplex in $\mathcal{O}^{[3,(1,1)]}$. The corresponding P-sequence is $\tilde\omega = (4,1,2,6,3,5)$, and the first faces of the sequences are given by $d_{1}\omega = (0,2),(1,1),(0,1),(1,2)$ and $d_{1}\tilde\omega = (3,1,2,4)$. The corresponding standard simplices are depicted in Figure~\ref{fig-faces}. \end{example} Our next goal is to find elements $\tau_{i,q}^{\omega}\in\Br_{n\cdot\XI}$ for all $i = 0, \ldots, q$, such that \begin{enumerate}[(i)] \item\label{cond11} $ \partial_{i}\alpha_{\omega} = \tau_{i,q}^{\omega}\cdot\alpha_{d_{i}\omega}, $ and \item\label{cond21} $\tau_{i,q}^{\omega}$ commutes with $L_{q}$. \end{enumerate} If we identify such elements, we are eventually able to define maps \begin{align*} \bd_{i}\colon \cplantq &\to \cplantqi \\ (\omega, \sigma L_{q}) &\mapsto (d_{i} \omega, \sigma\tau_{i,q}^{\omega}L_{q-1}), \end{align*} which are independent of the choice of a representative for the coset $\sigma L_{q}$ because of condition~(\ref{cond21}) and the fact that $L_{q} \subset L_{q-1}$. Furthermore, by condition~(\ref{cond11}), such elements satisfy \begin{align} \bd_{i}(\omega, \sigma L_{q}) &= (d_{i} \omega, \sigma\tau_{i,q}^{\omega}L_{q-1})\nonumber \\ &= \Gamma^{-1}(\sigma\tau_{i,q}^{\omega}\cdot\alpha_{d_{i} \omega})\nonumber \\ &= \Gamma^{-1}(\sigma\cdot\partial_{i}\alpha_{\omega})\nonumber \\ &= (\Gamma^{-1}\circ\partial_{i})(\sigma\cdot \alpha_{\omega}) \label{geomfact}\\ &= (\Gamma^{-1}\circ\partial_{i}\circ\Gamma)(\omega, \sigma L_{q}) \nonumber, \end{align} as desired. Here, in~(\ref{geomfact}), we used the geometric fact that for any $\sigma\in\Br_{n\cdot\XI}$, we have $\sigma\cdot\partial_{i}\alpha_{\omega} = \partial_{i}(\sigma\cdot\alpha_{\omega})$: The plant deletion operator $\partial_{i}$ commutes with the action of the mapping class defined by $\sigma$. By construction, the face $d_{i}\omega$ of the IC-sequence of a simplex is the IC-sequence of the corresponding face of that simplex. By Lemma~\ref{orbits}, the colored braid group $\Br_{n\cdot\XI}$ acts transitively on the set of simplices with the same IC-sequence. From these two facts, it is immediate that elements $\tau^{\omega}_{i,q}$ satisfying condition (\ref{cond11}) exist and that the coset $\tau^{\omega}_{i,q}L_{q-1}$ is unique. It remains to determine them explicitly and to check that they satisfy condition~(\ref{cond21}). By the construction of standard simplices, the first $i$ vertices of $\partial_{i}\alpha_{\omega}$ and $\alpha_{d_{i}\omega}$ are identical. Now, mapping $\alpha_{d_{i}\omega}$ to $\partial_{i}\alpha_{\omega}$ requires transferring the points of the $(q+1)$-th cluster to the $(i+1)$-th cluster in a suitable way: This transfer is performed one point at a time, starting with the leftmost point. Let $\widetilde\omega = (p_{1}, \ldots, p_{(q+1)\cdot\xi})$ be the P-sequence of $\alpha_\omega$. If $p_{i\xi + m} < p_{r \xi + s}$ for some $m, s\in \{1, \ldots, \xi\}$ and $q\geq r > i$, the $m$-th point of the $(q+1)$-th cluster has to be half-twisted around the endpoint of the arc which (in $\alpha_{d_{i}\omega}$) ends at the $s$-th point of the $r$-th cluster in a positive direction, and in a negative direction otherwise.
A careful analysis of this procedure (where we take into account that the braid group acts from the left) yields the following formula: \begin{equation}\label{tau} \tau_{i,q}^{\omega} = \prod_{j=1}^{\xi}\left( \prod_{k=(i+1)\cdot\xi}^{(q+1)\cdot \xi - 1} \left(\sigma_{k-j+1}\right)^{\sgn(p_{k+1} - p_{(i+1)\cdot\xi - j + 1})}\right) \end{equation} Here, $\sgn\colon \Z \to \{-1, 1\}$ is the signum function. Visibly, the largest index of a braid generator involved is $(q+1)\cdot \xi - 1$, so $\tau_{q,i}^{\omega}$ commutes with the elements of $L_{q}$, where the smallest index involved is $(q+1)\cdot\xi + 1$. Thus, condition~(\ref{cond21}) is satisfied. An example for the stepwise construction of $\tau_{i,1}^{\omega}$ can be found in Figure~\ref{fig-tau}. \begin{figure} \centering \captionsetup[subfigure]{labelformat=empty} \begin{subfigure}[b]{0.28\linewidth} \centering \includegraphics[scale=0.15]{proc3.pdf} \caption{$\alpha_{d_{1}\omega}$} \end{subfigure} \begin{minipage}[b][3cm][c]{0.04\linewidth} $\overset{\sigma_{4}}{\longmapsto}$ \vspace{0.5cm} \end{minipage} \begin{subfigure}[b]{0.28\linewidth} \centering \includegraphics[scale=0.15]{proc4.pdf} \caption{$\;$} \end{subfigure} \begin{minipage}[b][3cm][c]{0.04\linewidth} $\overset{\sigma_{3}}{\longmapsto}$ \vspace{0.5cm} \end{minipage} \begin{subfigure}[b]{0.28\linewidth} \centering \includegraphics[scale=0.15]{proc5.pdf} \caption{$\;$} \end{subfigure} \vspace{4ex} \begin{minipage}[b][3cm][c]{0.04\linewidth} $\overset{\sigma_{5}^{-1}}{\longmapsto}$ \vspace{0.5cm} \end{minipage} \begin{subfigure}[b]{0.28\linewidth} \centering \includegraphics[scale=0.15]{proc6.pdf} \caption{$\;$} \end{subfigure} \begin{minipage}[b][3cm][c]{0.04\linewidth} $\overset{\sigma_{4}^{-1}}{\longmapsto}$ \vspace{0.5cm} \end{minipage} \begin{subfigure}[b]{0.28\linewidth} \centering \includegraphics[scale=0.15]{proc2.pdf} \caption{$\tau_{1,2}^{\omega}\alpha_{d_{1}\omega} = \partial_{1}\alpha_{\omega}$} \end{subfigure} \caption{Passing from $\alpha_{d_{1} \omega}$ to $\partial_{1}\alpha_{\omega}$ ($\omega$ from Example~\ref{ex-faces}).} \label{fig-tau} \end{figure} \begin{remark}\label{pic-tau} \begin{enumerate}[(i)] \item A priori, $\tau^{\omega}_{i,q}\in\Br_{n\cdot\XI}$ for some fixed $n>q$. We note that the definition in~(\ref{tau}) does not depend on the particular choice of $n$, so we can regard $\tau^{\omega}_{i,q}$ as a common element of all $\Br_{n\cdot\XI}$ for $n>q$, using the inclusions $\Br_{n\cdot\XI} \hookrightarrow \Br_{(n+1)\cdot\XI}$ given by attaching $\xi$ trivial strands with coloring $\XI$ to the right of a braid in $\Br_{n\cdot\XI}$. \item If $\tilde\omega$ is the P-sequence corresponding to the IC-sequence $\omega$, we sometimes also write $\tau_{i,q}^{\tilde\omega}$ to denote the element $\tau_{i,q}^{\omega}$. \end{enumerate} \end{remark} We have just finished proving the following result: \begin{proposition}\label{semisimpl-iso} For $i=0,\ldots,q$, the maps \begin{align*} \bd_{i}\colon \cplantq &\to \cplantqi \\ (\omega, \beta L_{q}) &\mapsto (d_{i} \omega, \beta\tau_{i,q}^{\omega}L_{q-1}), \end{align*} give $\cplant$ the structure of a semi-simplicial set such that $ \Gamma\colon \cplant \to \plant $ is an isomorphism of semi-simplicial sets. \end{proposition} \begin{remark}\label{comb-action} As there is a $\Br_{n\cdot\XI}$-action on $\plant$, there is also a $\Br_{n\cdot\XI}$-action on $\cplant$: A braid $\tau\in\Br_{n\cdot\XI}$ acts via $ \tau\cdot(\omega_{p}, \sigma L_{p}) = (\omega_{p}, \tau\sigma L_{p}) $. 
\end{remark} \section{Hurwitz spaces}\label{hurwitz-spaces} Let $G$ be a finite group. Following the path pursued in \cite{0912.0325}, we consider Hurwitz spaces of branched $G$-covers of a closed disk $D$. These spaces are relevant to arithmetic applications, cf.~\cite{MR1119950} and \cite{MR2316356}. Let $n\in\N$, and $*$ a marked point in the boundary of $D$. We consider (not necessarily connected) branched covers of $D$ described by the following data: \begin{itemize} \item[--] a branch locus $B \in\Conf_{n}$, \item[--] an unbranched covering space map $p\colon Y \to D \setminus B$, \item[--] a marked point $\bullet$ in the fiber of $p$ above $*$, and \item[--] a group homomorphism $\alpha\colon G \to \Aut(p)$ which induces a free and transitive action of $G$ on any fiber of $p$. \end{itemize} An isomorphism of two such covers is a homeomorphism of the total spaces of the coverings which is compatible with the remaining data. By virtue of the Riemann existence theorem, the isomorphism classes can be parametrized by the following data: \begin{definition} A \emph{marked $n$-branched $G$-cover of $D$} is defined as a pair $(B, \mu)$, where $B\in\Conf_{n}$ is a configuration of $n$ branch points, and $\mu\colon\pi_{1}(D\setminus B, *) \to G$ is a homomorphism. \end{definition} \begin{remark} We call such covers \emph{marked} since we do not consider the monodromy homomorphisms $\mu\colon\pi_{1}(D\setminus B, *) \to G$ up to conjugacy in the target. This amounts to marking the point $\bullet$ in the fiber of a branched cover above $*$. \end{remark} The space of marked $n$-branched $G$-covers must be a covering space of $\Conf_{n}$ with fiber $\Hom(\pi_{1}(D\setminus B, *), G) \cong G^{n}$. The elements of $G^{n}$ are called \emph{Hurwitz vectors}. Such vectors are unique up to the choice of a basis for $\pi_{1}(D\setminus B, *) \cong F_{n}$ which consists of loops around the single points of $B$, i.e., up to the action of $\Map(D\setminus B) \cong \Br_{n}$ on $G^{n}$, given by \begin{equation}\label{hurwitz-action} \sigma_{i} \cdot \underline g = (g_{1}, \ldots, g_{i-1}, g_{i}g_{i+1}g_{i}^{-1}, g_{i}, g_{i+2}, \ldots, g_{n}) \end{equation} for $i = 1, \ldots, n-1$, cf.~\cite{MR1509816}. The left monodromy action of $\pi_{1}(\Conf_{n}) \cong \Br_{n}$ on $G^{n}$ can be identified with the \emph{Hurwitz action} (\ref{hurwitz-action}) as well. Replacing $\Conf_{n}$ with the classifying space $\BBr_{n}$ for convenience, we obtain: \begin{definition}\label{def-hurwitz} The \emph{Hurwitz space for marked $n$-branched $G$-covers of the disk} is defined as the Borel construction $$ \Hur_{G,n} = \EBr_n \times_{\Br_{n}} G^n, $$ where $\Br_n$ acts on $G^{n}$ via the Hurwitz action~(\ref{hurwitz-action}). \end{definition} \subsection*{Combinatorial invariants} The connected components of $\Hur_{G,n}$ are indexed by the set of $\Br_{n}$-orbits in $G^{n}$. Below, we list some $\Br_{n}$-invariant functions on Hurwitz vectors in $G^{n}$. Such invariants must be constant on connected components of $\Hur_{G,n}$. Let $(B, \mu)$ be a fixed marked $n$-branched $G$-cover of $D$ with Hurwitz vector $\underline g = (g_{1}, \ldots, g_{n})\in G^{n}$. \begin{itemize} \item[--] The \emph{global monodromy} of $(B, \mu)$ is the subgroup of $G$ generated by $g_{1}, \ldots, g_{n}$. It is equal to $G$ if and only if the corresponding branched cover is connected. In this case, we say that the branched cover has \emph{full monodromy}. 
\item[--] The \emph{boundary monodromy} of $(B, \mu)$ is defined as the product $\partial\underline g = \prod_{i=1}^{n} g_{i}$. Its inverse describes the branching behavior \emph{at infinity}. \item[--] Given distinct conjugacy classes $c_{1}, \ldots, c_{t}$ such that all entries of $\underline g$ lie in some $c_{i}$, the \emph{shape (vector)} of $(B, \mu)$ is the $t$-tuple $(n_{c_{1}}(B,\mu), \ldots, n_{c_{t}}(B, \mu))$, where $n_{c_{i}}(B, \mu)$ is the number of elements of $\underline g$ that lie in~$c_{i}$. \end{itemize} Considering possible shapes for a nontrivial group $G$, we obtain a lower bound $b_{0}(\Hur_{G,n}) \geq n+1$ for the zeroth Betti number. It thus makes sense to consider sequences of subspaces of Hurwitz spaces which do not \emph{a priori} exclude the possibility of the existence of a homological stability theorem. From now on, let $c = (c_{1}, \ldots, c_{t})$ be a tuple of $t$ distinct nontrivial conjugacy classes in~$G$. For a shape vector $\XI = (\xi_{1}, \ldots, \xi_{t})\in\N^{t}$ of length $\xi = \sum_{i=1}^{t} \xi_{i}$, we consider the subspace $\Hur^{c}_{G,\XI}$ of $\Hur_{G,\xi}$ which parametrizes covers with shape $\XI$; it is a union of connected components of $\Hur_{G, \xi}$. The space $\Hur^{c}_{G,\XI}$ is a cover of $\BBr_{\xi} \cong \Conf_{\xi}$ whose fiber consists of the tuples in $G^\xi$ for which $\xi_{i}$ entries lie in $c_{i}$. This cover factors through $\BBr_{\XI} \cong \Conf_{\XI}$, which is the classifying space of the colored braid group $\Br_{\XI}$. The fiber of the unbranched cover $\Hur^{c}_{G,\XI} \to \BBr_{\XI}$ can be identified with $$ \cc = c_{1}^{\xi_{1}} \times \ldots \times c_{t}^{\xi_{t}} , $$ on which $\Br_{\XI}$ acts via the Hurwitz action. Generalizing to covers with shape vector $n\cdot\XI= (n\xi_{1}, \ldots, n\xi_{t})$, we may write $$\Hur_{G,n\cdot\XI}^{c} = \EBr_{n\cdot\XI} \times_{\Br_{n\cdot\XI}} \cc^{n},$$ where we identify $\Br_{n\cdot\XI}$ with the setwise stabilizer of $\cc^{n}$ under the $\Br_{n\xi}$-action on $G^{n\xi}$. \subsection*{Structure on Hurwitz spaces} We will now identify more structure on Hurwitz spaces in order to support the homological investigations to follow. Before starting, we note that we may work in the category of CW complexes: Hurwitz spaces and thus all of their components are homotopy equivalent to finite CW complexes by~\cite[Prop.~2.5]{0912.0325}. For $m, n \in \N_{0}$, we obtain continuous maps $$ \Hur_{G,m\cdot\XI}^{c} \times \Hur_{G,n\cdot\XI}^{c} \to \Hur_{G, (m+n)\cdot\XI}^{c} $$ from the inclusions $\Br_{m\cdot\XI} \times \Br_{n\cdot\XI} \to \Br_{(m+n)\cdot\XI}$ and $\cc^{m} \times \cc^{n} \to \cc^{m+n}$ which are defined and associative up to homotopy. These maps give $\bigsqcup_{n\geq 0} \Hur_{G,n\cdot\XI}^{c}$ the structure of a disconnected $H$-space with homotopy identity $\Hur_{G,0\cdot\XI}^{c}$. Let $A$ be a commutative ring. The $H$-space structure on the union of Hurwitz spaces induces a graded (grading in the $n$-variable) ring structure on the direct sum of the zeroth homologies: \begin{definition}\label{def-ring} The $A$-module $$ R^{A,c}_{G,\XI} = \bigoplus_{n\geq 0} H_{0}(\Hur_{G,n\cdot \XI}^{c}; A) $$ is called the \emph{ring of connected components (with coefficient ring $A$)} for the sequence $\{\Hur_{G,n\cdot \XI}^{c} \mid n\geq 0\}$ of Hurwitz spaces. If $G$, $\XI$, $A$, and $c$ are clear from the context, we simply denote the ring by $R$.
\end{definition} \begin{remark}\label{comb-descr}\label{deg-one} There is a nice combinatorial description of the ring $R$, cf.~\cite{MR1119950}: Let $\mathfrak{s} = \bigsqcup_{n\geq 0} \cc^{n}/\Br_{n\cdot\XI}$. Concatenation of Hurwitz vectors gives $\mathfrak{s}$ the structure of a monoid with the empty tuple as the identity. Then, $R$ is the monoid algebra $A[\mathfrak{s}]$. In particular, $R$ is finitely generated as an $A$-algebra: Its degree one part is generated as an $A$-module by elements $r(g)$ with $g\in\cc/\Br_{\XI}$. Any element of $\cc^{n}$ is the concatenation of $n$ elements in $\cc$, and this concatenation descends to a map $(\cc / \Br_{\XI})^{n} \to \cc^{n} / \Br_{n\cdot\XI}$ by virtue of the natural inclusion $(\Br_{\XI})^{n} \to \Br_{n\cdot\XI}$. Therefore, the degree one elements $r(g)$ generate $R$ as an $A$-algebra. \end{remark} The direct sum of the $p$-th homology modules of Hurwitz spaces acquires the structure of a graded $R$-module from the $H$-space structure in combination with the K\"unneth formula: The graded $A$-module (grading in the $n$-variable) $$ M_{G, \XI,p}^{A,c} = \bigoplus_{n \geq 0} H_{p}(\Hur_{G,n\cdot\XI}^{c};A) $$ has the structure of a graded $R^{A,c}_{G,\XI}$-module. If no confusion can arise, we denote it by $M_{p}$. Clearly, we have $M_{0} = R$. \section{Homological stability for Hurwitz spaces}\label{homstabhurwitz} Let $G$ be a finite group, $c = (c_{1}, \ldots, c_{t})$ a tuple of distinct conjugacy classes in $G$, $\XI\in\N^{t}$, and $\cc = c_{1}^{\xi_{1}}\times\ldots\times c_{t}^{\xi_{t}}$. Let moreover $A$ be a commutative ring. In what follows, we use the notion of the \emph{ring of connected components} introduced in Definition~\ref{def-ring}. We usually write $R$ instead of $R^{A,c}_{G,\XI}$. For a central element $U\in R$ and $R[U]$ the $U$-torsion in $R$, we use the notation $D_{R}(U) = \max\{\deg (R/UR), \deg (R[U])\}$. We work with colored plant complexes of the form $ \plant = \mathcal{O}_{n\cdot\XI}^{\XI}(D) \cong \cplant. $ We study the homology of the spaces $ \Hur_{G, n\cdot \XI}^{c} = \EBr_{n\cdot{\XI}} \times_{\Br_{n\cdot\XI}} \cc ^{n} $, for $n\geq 0$. \subsection*{The purely abelian case}\label{abelian} First, we consider the case of a central homogeneous element $U\in R$ such that $D_{R}(U) = 0$. In this case, $U$ is necessarily of degree one and induces an isomorphism $R_{i} \cong R_{i+1}$ in any degree $i\geq 0$, so $R \cong A[x]$ must hold. Hence, there is only one $\Br_{\XI}$-orbit in $\cc$. This necessarily implies that any conjugacy class in $c$ is a singleton, since otherwise there would be multiple boundary monodromies and thus multiple $\Br_{\XI}$-orbits in $\cc$. In other words, the image of the monodromy homomorphism $\mu\colon \pi_{1}(D\setminus B) \to G$ of any cover in $\Hur_{G, n\cdot\XI}^{c}$ is contained in the center of $G$. We call such covers \emph{purely abelian}. Conversely, if all covers in $\Hur_{G, n\cdot\XI}^{c}$ are purely abelian, any conjugacy class in $c$ is a singleton and thus the single element of $\cc$ defines an element $U\in R$ such that $D_{R}(U) = 0$. In the purely abelian case, we thus have \begin{equation}\label{hurwitz-conf} \Hur_{G, n\cdot\XI}^{c} = \EBr_{n\cdot\XI} \times_{\Br_{n\cdot\XI}} \cc^{n} = \BBr_{n\cdot\XI} \cong \Conf_{n\cdot\XI}. \end{equation} By (\ref{tran}), for all $p\geq 0$, we have $ H_{p}(\Conf_{n\cdot\XI};\Z) \cong H_{p}(\Conf_{(n+1)\cdot\XI};\Z) $ with stable range $n\geq \frac{2p}{\min\XI}$.
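As an aside, the Hurwitz action~(\ref{hurwitz-action}) is easy to experiment with on a computer. The following Python sketch is purely illustrative (it is not used in any of the proofs, and it models group elements naively as permutation tuples); it makes the purely abelian case explicit: a central entry is merely moved past its neighbour, so the generators act by the underlying transposition of entries, and a tuple whose entries are determined by their color is fixed by the colored braid group.
\begin{verbatim}
# Illustration only: the Hurwitz action of the braid generators on tuples
# over a finite group.  Group elements are permutations written as tuples,
# with composition as multiplication.

def mul(a, b):                       # (a*b)(x) = a(b(x))
    return tuple(a[b[i]] for i in range(len(a)))

def inv(a):
    out = [0] * len(a)
    for i, j in enumerate(a):
        out[j] = i
    return tuple(out)

def sigma(i, g):
    # sigma_i . (..., g_i, g_{i+1}, ...) = (..., g_i g_{i+1} g_i^{-1}, g_i, ...)
    g = list(g)
    g[i - 1], g[i] = mul(mul(g[i - 1], g[i]), inv(g[i - 1])), g[i - 1]
    return tuple(g)

def orbit(g):
    # closure under all sigma_i; on a finite set this is the full braid
    # group orbit, since each sigma_i acts as a permutation of the tuples
    seen, todo = {g}, [g]
    while todo:
        h = todo.pop()
        for i in range(1, len(h)):
            k = sigma(i, h)
            if k not in seen:
                seen.add(k)
                todo.append(k)
    return seen

u = (1, 0)                 # the nontrivial element of the abelian group S_2
print(orbit((u, u, u)))    # a single tuple: the Hurwitz action is trivial
\end{verbatim}
In particular, for singleton (hence central) conjugacy classes the set $\cc^{n}$ consists of a single $\Br_{n\cdot\XI}$-fixed point, which is precisely what makes the identification~(\ref{hurwitz-conf}) possible.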
As a result, we obtain: \begin{corollary}\label{abelian-case} If there exists a central homogeneous element $U\in R_{>0}$ such that $D_{R}(U) = 0$ (equivalently, if $\Hur_{G,n\cdot\XI}^{c}$ parametrizes purely Abelian covers), there is an isomorphism $ H_{p}(\Hur^{c}_{G,n\cdot\XI}; \Z) \cong H_{p}(\Hur^{c}_{G,(n+1)\cdot\XI}; \Z) $ for $n\geq \frac{2p}{\min\XI}$. \end{corollary} \subsection*{The spectral sequence}\label{spectral-sequence} The space $\EBr_{n\cdot\XI} \times_{\Br_{n\cdot\XI}}(\plant \times \cc^{n})$ inherits a semi-simplicial structure from the face maps $\partial_{i}$ on $\plant$, where the left action of $\Br_{n\cdot\XI}$ on the product $\plant \times \cc^{n}$ is the diagonal action. The spectral sequence associated to the semi-simplicial space, \begin{align}\label{the-sequence} \begin{split} E^{1}_{qp} &= H_{p}(\EBr_{n\cdot\XI} \times_{\Br_{n\cdot\XI}}(\plantq \times \cc^{n});A) \\ &\Longrightarrow H_{p+q}(\EBr_{n\cdot\XI} \times_{\Br_{n\cdot\XI}}(\rplant \times \cc^{n});A), \end{split} \end{align} converges to the homology of the realization of the total complex. By Theorem~\ref{delta-conn}(\ref{colored-conn}), the space $\rplant$ is $( \left\lfloor\frac{n}{2} \right\rfloor - 2)$-connected. Thus, the target of the spectral sequence (\ref{the-sequence}) is isomorphic to $H_{p+q}(\EBr_{n\cdot\XI} \times_{\Br_{n\cdot\XI}} \cc^{n};A) \cong H_{p+q}(\Hur_{G,n\cdot\XI}^{c}; A)$ in degrees $p+q \leq \left\lfloor\frac{n}{2} \right\rfloor - 2$. Now, for $q < n$, we identify each of the $ l_{q}^{\XI} $ $\Br_{n\cdot\XI}$-orbits in $\plantq$ with a copy of the quotient $\Br_{n\cdot\XI}/L_{q}$, cf.~Lemmas~\ref{orbits} and~\ref{stabilizers}. The subgroup $L_{q}\cong \Br_{(n-q-1)\cdot\XI}$ acts on the last $(n-q-1)\cdot\xi$ entries of $\cc^{n}$. Consequently, we obtain \begin{align} \label{spseq-iso} \begin{split} E^{1}_{qp} &\cong H_{p}(\{1, \ldots, l_{q}^{\XI}\} \times \cc^{q+1} \times (\EBr_{n\cdot\XI} \times_{L_{q}} \cc^{n-q-1} );A ) \\ &\cong A^{l_{q}^{\XI}} \otimes_{A} A\langle \cc^{q+1}\rangle \otimes_{A} H_{p}(\Hur_{(n-q-1)\cdot\XI}^{c};A). \end{split} \end{align} The differentials $\dd$ on the $E^{1}$-page are induced by the alternating sum of the face maps on the semi-simplicial space. In the following, we aim to find an explicit identification of the differentials under the isomorphism (\ref{spseq-iso}). Let $\underline g \in \cc^{n}\subset G^{n\xi}$ for some $n\in\N$. We write $(\underline g)^{\leq j}\in\cc^{j}$ for the tuple consisting of the first $j\xi$ entries of $\underline g$, and $(\underline g)^{> j}\in\cc^{n-j}$ for the complementary $(n-j)\xi$-tuple. By $(\underline g)_{j}\in\cc$, we denote the $j$-th $\xi$-tuple in $\underline g$. \begin{lemma}\label{spseq-translation} Let $q<n$. 
Under the isomorphism~(\ref{spseq-iso}), $\dd \colon E^{1}_{qp} \to E^{1}_{q-1,p} $ is represented by the linear map \begin{align*} \dd = \sum_{i=0}^{q} (-1)^{i}(\partial_{i})_{*} \colon &A^{l_{q}^{\XI}} \otimes_{A} A\langle \cc^{q+1}\rangle \otimes_{A} H_{p}(\Hur_{G,(n-q-1)\cdot\XI}^{c};A) \\ &\to A^{l_{q-1}^{\XI}} \otimes_{A} A\langle \cc^{q}\rangle \otimes_{A} H_{p}(\Hur_{G,(n-q)\cdot\XI}^{c};A), \end{align*} the $(\partial_{i})_{*}$, for $i=0,\ldots, q$, being given by linear extension of $$ (\partial_{i})_{*}(\omega\otimes \underline h\otimes x) = d_{i}\omega \otimes ((\tau_{i,q}^{\omega})^{-1}\cdot\underline h)^{\leq q}\otimes r(((\tau_{i,q}^{\omega})^{-1}\cdot\underline h)_{q+1})\cdot x, $$ where $\omega$ is the IC-sequence of a $q$-simplex, $\underline h\in\cc^{q+1}$, and $x\in H_{p}(\Hur_{G,(n-q-1)\cdot\XI}^{c};A)$. \end{lemma} \begin{proof} In combinatorial terms, we may write the face maps $\partial_{i}$ on the semi-simplicial space $\EBr_{n\cdot\XI} \times_{\Br_{n\cdot\XI}}(\plant \times \cc^{n})$ as $$ [(e, (\omega, \sigma L_{q}), \underline g)]_{\Br_{n\cdot\XI}} \mapsto[(e, (d_{i}\omega, \sigma\tau^{\omega}_{i,q} L_{q-1}), \underline g)]_{\Br_{n\cdot\XI}}, $$ where the $\tau_{i,q}^{\omega}$ are defined as in~(\ref{tau}). This may be rewritten as \begin{equation}\label{claim-1} [(e, \omega, \underline g)]_{L_{q}} \mapsto [(e \cdot\tau_{i,q}^{\omega}, d_{i}\omega, (\tau_{i,q}^{\omega})^{-1}\cdot\underline g)]_{L_{q-1}}. \end{equation} \textbf{Claim:} The map (\ref{claim-1}) is $L_{q}$-equivariantly homotopic to \begin{equation}\label{claim-2} [(e, \omega, \underline g)]_{L_{q}} \mapsto [(e, d_{i}\omega, (\tau_{i,q}^{\omega})^{-1}\cdot\underline g)]_{L_{q-1}}. \end{equation} \textbf{Proof of the claim:} Let $\iota$ be the identity on $\EBr_{n\cdot\XI}$ and $\tau$ right multiplication by $\tau_{i,q}^{\omega}$ on $\EBr_{n\cdot\XI}$. Now, $\tau$ descends to a map $\BBr_{n\cdot\XI} \to \BBr_{n\cdot\XI}$ which is induced by conjugation with $\tau_{i,q}^{\omega}$ in $\Br_{n\cdot\XI}$. Since $\tau_{i,q}^{\omega}$ commutes with the elements of $L_{q}$, this conjugation restricts to the identity on $L_{q}$. Therefore, both $\iota$ and $\tau$ descend to self-maps of $\BL_{q}$ homotopic to the identity (note that we may use $\EBr_{n\cdot\XI} / L_{q}$ as a model for $\BL_{q}$). Hence, $\tau$ is $L_{q}$-equivariantly freely homotopic to $\iota$. From this fact, the claim follows directly. \qed Now, the map \begin{align*} \EBr_{n\cdot\XI} \times_{L_{q}} \cc^{n} &\to \EBr_{n\cdot\XI} \times_{L_{q-1}} \cc^{n} \\ [(e, \underline g )]_{L_{q}} &\mapsto[(e, \underline g )]_{L_{q-1}} \end{align*} is identified with $$ \cc^{q+1} \times \Hur^{c}_{G,(n-q-1)\cdot\XI} \to \cc^{q} \times \Hur^{c}_{G,(n-q)\cdot\XI}, $$ given by left concatenation of a Hurwitz vector with the $\xi$-tuple $(\underline g)_{q+1}$ of $\underline g \in \cc^{n}$, where we also identify $$\Hur^{c}_{G,(n-q-i)\cdot\XI} \cong\EBr_{n\cdot\XI} \times_{L_{q-i}} \cc^{n-q-i}$$ for $i = 0,1$. In homology, this corresponds to multiplication by $r((\underline g)_{q+1}) \in R$. Finally, note that $\tau_{i,q}^{\omega}$ only acts on $(\underline g)^{\leq q+1}$, while $L_{q}$ acts on the $\xi$-tuples $(\underline g)^{>q+1}$. Thus, the induced map in homology of (\ref{claim-2}) yields the desired form of $(\partial_{i})_{*}$. The lemma follows from the fact that the differential on the $E^{1}$-page of (\ref{the-sequence}) is given by the induced map in homology of the alternating sum of the face maps.
\end{proof} For a graded $R$-module $M$ and $q\geq 0$, we write $M(q)$ for its \emph{shift} by $q$. \begin{definition}\label{k-complexes} Let $M$ be a graded left $R$-module. The \emph{$\KK$-complex associated to $M$} is defined as the complex $\KK(M)$ with terms \begin{align*} \KK(M)_{0} &= M, \\ \KK(M)_{q+1} &= A^{l_{q}^{\XI}} \otimes_{A} A\langle\cc^{q+1} \rangle \otimes_{A} M(q+1) \end{align*} for $q \geq 0$, where $l_{q}^{\XI}$ is given as in Lemma~\ref{orbits}. The differentials on $\KK(M)$ are the linear maps defined by \begin{align*} \dd_{q+1}\colon \KK(M)_{q+1} &\to \KK(M)_{q} \\ \omega \otimes \underline g \otimes x &\mapsto \sum_{i=0}^{q} (-1)^{i} [d_{i}\omega \otimes ((\tau_{i,q}^{\omega})^{-1}\cdot\underline g)^{\leq q}\otimes r(((\tau_{i,q}^{\omega})^{-1}\cdot\underline g)_{q+1})\cdot x], \end{align*} where $\omega$ is the IC-sequence of a $q$-simplex, $\underline g \in \cc^{q+1}$, and $x \in M(q+1)$. \end{definition} In a less general form, $\KK$-complexes were introduced in~\cite[Sect.~4]{0912.0325}. $\KK(M)$ is in fact a complex of graded left $R$-modules, where the grading on $\KK(M)_{q}$ is induced by the grading on $M(q)$: For $M = M_p$, this is immediate, as the $n$-th graded part of the $\KK$-complex is equal to a row in the spectral sequence (\ref{the-sequence}) by construction. The complex property is only needed in this case. The more general case involves computations which utilize the semi-simplicial identities for the face maps of $\plant$. These are performed in the author's Ph.D.~thesis (currently under review). Note that the differential $\dd_{q}$ preserves the grading: The grading on $\KK(M)_{q}$ is induced by the grading on $M(q)$, on which $\dd_{q}$ acts by the alternating sum of multiplication with degree one elements. This is compensated by the shift in the grading on $\KK(M)_{q-1}$. We summarize: \begin{corollary}\label{specseq-corollary} There is a homological spectral sequence with $$ E^{1}_{qp} = n\text{-th graded part of } \KK(M_{p})_{q+1}, $$ with differentials on the $E^{1}$-page given by the differentials on $\KK(M_{p})$, which converges to $H_{p+q}(\Hur_{G,n\cdot\XI}^{c};A)$ for $p+q \leq \left\lfloor \frac{n}{2}\right\rfloor - 2$. \end{corollary} We now consider the homology of $\KK$-complexes for $M = M_{0} = R$. In this case, multiplication in $R$ gives $\KK(R)_{q}$ (and hence also the homology modules of $\KK(R)$) the structure of a two-sided graded $R$-module. A simpler version of the following lemma is proved in \cite[Lemma~4.11]{0912.0325}. \begin{lemma}\label{killing} For all $q\geq 0$, $H_{q}(\mathcal{K}(R))$ is killed by the right action of $R_{>0}$. \end{lemma} \begin{proof} For simplicity of notation, in this proof we work with P- instead of IC-sequences. Recall that for a Hurwitz vector $\underline g\in\cc^{q+1}$, we write $\partial\underline g$ for its boundary monodromy, which is invariant under the $\Br_{(q+1)\cdot\XI}$-action. $R$ is generated as an $A$-module by orbits $[\underline s] \in \cc^{n}/\Br_{n\cdot\XI}$, for $n\geq 0$. Let $h\in\cc$; recall that the elements of the form $r(h)$ generate $R_{>0}$ as an $A$-algebra, so it suffices to show that right multiplication by each $r(h)$ annihilates $H_{q}(\KK(R))$. We define a map $S_{h}\colon \KK(R)_{q+1} \to \KK(R)_{q+2}$ by linear extension of $$ S_{h}(\tilde\omega \otimes \underline g \otimes [\underline s]) = (\xi +\tilde\omega ) \otimes (h^{(\partial \underline g \partial [\underline s])^{-1}}, \underline g) \otimes [\underline s], $$ where $\tilde\omega$ is the P-sequence of a $q$-simplex, and $\underline g, h, [\underline s]$ as above.
Here, $(\xi +\tilde\omega)$ denotes the P-sequence of a $(q+1)$-simplex obtained by increasing every entry of $\tilde\omega$ by $\xi$ and then attaching $(1, \ldots, \xi)$ from the left. In particular, the equations \begin{align}\label{diff-new} \begin{split} d_{0}(\xi +\tilde\omega) &= \tilde\omega \\ d_{i}(\xi +\tilde\omega) &= (\xi + d_{i-1}\tilde\omega) \end{split} \end{align} hold for $i = 1, \ldots, q$. Furthermore, since the first $\xi$ positions of the P-sequence $\xi + \tilde\omega$ are given by $1, \ldots, \xi$, the equations \begin{align}\label{diff-new-2} \begin{split} (\tau_{0,q+1}^{(\xi +\tilde\omega)} )^{-1}\cdot (h, \underline g ) &= ( \underline g, h^{\partial\underline g}) \\ (\tau_{i+1,q+1}^{(\xi +\tilde\omega)} )^{-1}\cdot (h, \underline g ) &= (h, (\tau_{i,q}^{\tilde\omega})^{-1}\cdot \underline g ) \end{split} \end{align} hold for any $h\in\cc$ and $i=0, \ldots, q$, cf.~the definition of $\tau_{i,q}^{\tilde\omega}$ in~(\ref{tau}). We claim that $S_{h}$ is a chain homotopy from right multiplication with $r(h)$ to the zero map. Indeed, we have \begingroup \allowdisplaybreaks \begin{align*} &(\dd_{q+1} S_{h} + S_{h} \dd_{q})(\tilde\omega \otimes \underline g \otimes [\underline s]) \\ &\underset{\phantom{(\ref{diff-new-2})}}{\overset{\phantom{(\ref{diff-new})}}{=}} \dd_{q+1}((\xi +\tilde\omega) \otimes (h^{(\partial \underline g \partial \underline s)^{-1}}, \underline g) \otimes [\underline s] ) \\ &\phantom{\underset{(\ref{diff-new-2})}{\overset{(\ref{diff-new})}{=}}}+ \sum_{i=0}^{q} (-1)^{i} S_{h}( [d_{i}\tilde\omega \otimes ((\tau_{i,q}^{\tilde\omega})^{-1}\cdot\underline g)^{\leq q}\otimes [(((\tau_{i,q}^{\tilde\omega})^{-1}\cdot\underline g)_{q+1}, \underline s )]] ) \\ &\underset{(\ref{diff-new-2})}{\overset{(\ref{diff-new})}{=}} \tilde\omega \otimes \underline g \otimes (h^{(\partial \underline s)^{-1}}, [\underline s] ) \\ &+ \sum_{i=0}^{q} (-1)^{i+1}[(\xi + d_{i}\tilde\omega) \otimes (h^{(\partial \underline g\partial\underline s)^{-1}}, ((\tau_{i,q}^{\tilde\omega})^{-1}\cdot \underline g )^{\leq q} ) \otimes [(((\tau_{i,q}^{\tilde\omega})^{-1}\cdot \underline g )_{q+1}, \underline s)]] \\ &+ \sum_{i=0}^{q} (-1)^{i}[(\xi + d_{i}\tilde\omega) \otimes (h^{(\partial( (\tau_{i,q}^{\tilde\omega})^{-1}\cdot\underline g)\partial\underline s)^{-1}}, ((\tau_{i,q}^{\tilde\omega})^{-1}\cdot \underline g )^{\leq q} ) \otimes [(((\tau_{i,q}^{\tilde\omega})^{-1}\cdot \underline g )_{q+1}, \underline s)]] \\ &\underset{\phantom{(\ref{diff-new-2})}}{\overset{\phantom{(\ref{diff-new})}}{=}} \tilde\omega \otimes \underline g \otimes [\underline s \cdot r(h)], \end{align*} \endgroup as $(h^{(\partial \underline s)^{-1}}, \underline s )$ is equivalent to $(\underline s,h)$ under the $\Br_{(n+1)\cdot\XI}$-action. Hence, $S_{h}$ is the desired chain homotopy. \end{proof} \subsection*{Modules over stabilized rings}\label{stabilized} In the following, we study graded modules over graded rings satisfying a specific stability condition which will eventually form the essential criterion for homological stabilization of Hurwitz spaces. This generalizes the modules $M_{p}$ over the ring $R$. \begin{definition}\label{def_astable} Let $R = \bigoplus_{i\in\N_{0}} R_{i}$ be some graded ring, $A = R/R_{>0} \cong R_{0}$ the ring of degree zero elements, and $U\in R$ a central homogeneous element of positive degree. 
The ring $R$ is called \emph{$A$-stabilized by $U$} if the following three conditions are satisfied: \begin{enumerate}[(i)] \item Both kernel and cokernel of the multiplication $R \overset{U\cdot}{\to} R$ have finite degree as graded $R$-modules; in other words, $D_R(U) = \max\left\{\deg (R/UR), \deg (R[U])\right\}$ is finite, \label{isocoker} \item $A$ is commutative, and \label{comnoet} \item $R$ is generated in degree one (i.e., by $R_{1}$) as an algebra over $A$. \label{degone} \end{enumerate} We call $U$ the \emph{stabilizing element} for $R$. In what follows, let $M$ be a graded left $R$-module, where $R$ is $A$-stabilized by $U\in R$. We use the following notation: \begin{align*} D_{M}(U) &= \max\{\deg(M[U]), \deg(M/UM)\} \\ \delta_{M}(U) &= \max\{\deg (\Tor_{0}^{R}(R/UR, M)), \deg (\Tor_{1}^{R}(R/UR, M))\} \end{align*} Though both quantities depend heavily on the stabilizing element $U\in R$, we usually simply write $D_{M}$ and $\delta_{M}$. Furthermore, we write $H_{i}(M)$ for the graded left $R$-module $\Tor_{i}^R(A, M)$. We will from now on assume that $D_{R}(U)$ is positive. The case $D_{R}(U) = 0$ was fully treated earlier in this section in the discussion of purely abelian covers. Section 4 of \cite{0912.0325} deals with modules over graded rings $R$ which are $A$-stabilized by $U$. In that article's setting, it makes sense to focus on the case where $A$ is a field, though the proofs of Lemma~4.4 through Lemma~4.10 carry over directly to the case where $A$ is an arbitrary commutative ring. We proved an analogue of \cite[Lemma~4.11]{0912.0325} in Lemma~\ref{killing}. Therefore, the proofs and results of Proposition~4.12 and~4.13 of \cite{0912.0325} are also applicable to our situation. More specifically, we will need the following results: \begin{align} D_{M} &\leq \max\{\deg H_{0}(M), \deg H_{1}(M)\} + 5D_{R}, \label{homlem4.6} \\ \deg H_{q}(\KK(R)) &\leq D_{R} + \deg U + q, \label{4-12} \\ \deg H_{q}(\KK(M)) &\leq \max\{\deg H_{0}(M), \deg H_{1}(M)\} + (q+5)\cdot D_{R}+ \deg U \label{4-13}, \end{align} cf.~\cite[Lemma~4.6 and~4.9]{0912.0325}, \cite[Prop.~4.12]{0912.0325}, and \cite[Prop.~4.13]{0912.0325}, respectively. \begin{proposition}\label{homprop-4.2} Let $R$ be $A$-stabilized by $U$. Moreover, let $M$ be a graded left $R$-module and $h_{i} = \deg (H_{i}(\KK(M)))$. Then, we have $ h_{q} \leq \max\{h_{0}, h_{1}\} + D_{R}q + (5 D_{R} + \deg U), $ and multiplication by $U$, $M\overset{U\cdot}{\to}M$, is an isomorphism in source degree greater than or equal to $\max\{h_{0}, h_{1}\} + 5D_{R} + 1$. \end{proposition} \begin{proof} The present proof follows \cite[Thm.~4.2]{0912.0325}. We show that for $i = 0,1$, \begin{equation}\label{claim-prop} \deg (H_{i}(M)) \leq h_{i}. \end{equation} Using this result, we obtain \begin{align*} h_{q} &\overset{{(}\ref{4-13}{)}}{\leq} \max\{\deg (H_{0}(M)), \deg (H_{1}(M))\} + (q+5)\cdot D_{R} + \deg U \\ &\overset{(\ref{claim-prop})}{\leq} \max\{h_{0}, h_{1}\} + D_{R}q +(5 D_{R} + \deg U), \end{align*} which is the first part of the statement. Furthermore, by (\ref{homlem4.6}), multiplication by $U$ is an isomorphism in source degree greater than or equal to $\max\{\deg H_{0}(M), \deg H_{1}(M)\} + 5D_{R} + 1$. Together with~(\ref{claim-prop}), this gives the second claim of the proposition. It remains to show~(\ref{claim-prop}). For $i=0$, we have $H_{0}(M) = A \otimes_{R} M = M/R_{>0}M$ and $H_{0}(\KK(M)) = M/\im \dd_{1}$.
Now, $\dd_{1}(\omega, g, x) = r(g)\cdot x$ for all elementary tensors in $\KK(M)_{1}$, so $\im \dd_{1} = R_{>0}M$ and the claim is vacuously true. For $i=1$, we factor the map $\dd_{1}\colon \KK(M)_{1} \to M$ as $\dd_{1} = \beta \circ \alpha$, $$ \KK(M)_{1} = A^{l_{0}^{\XI}} \otimes_{A} A\langle\cc\rangle \otimes_{A} M(1) \overset{\alpha}{\to} R_{>0}\otimes_{R} M\overset{\beta}{\to} M, $$ with $\alpha(\omega \otimes g \otimes x) = r(g) \otimes x$ and $\beta (r \otimes x) = r\cdot x$. As $R_{>0}$ is generated by elements of the form $r(g)$, we can factor any $r \in R_{>0}$ as $r = r(g)\cdot r'$ for some $r' \in R$; therefore, $\alpha$ is surjective. It is also degree-preserving -- note that $R_{>0} \otimes_{R} M$ is graded via $\deg(r\otimes x) = \deg r + \deg x$. Now, we have a sequence $$ \KK(M)_{2} \overset{\dd_{2}}{\to} \ker\dd_{1} \to H_{1}(\KK(M))\to 0 $$ which is by definition exact in the middle and on the right, so $\ker\dd_{1}$ is generated as an $A$-module in degree at most $\max\{\deg(\im\dd_{2}), \deg (H_{1}(\KK(M)))\}$. Now, the composition $$ \KK(M)_{2} \overset{\dd_{2}}{\to} \ker\dd_{1} \overset{\alpha}{\to} R_{>0}\otimes_{R} M $$ is zero, since it maps $\omega \otimes \underline g \otimes x$ to the element \begin{align*} &r((( \tau^{\omega}_{0,1} )^{-1}\underline g)_{1} ) \cdot r((( \tau^{\omega}_{0,1} )^{-1}\underline g)_{2} ) \otimes x - r((( \tau^{\omega}_{1,1} )^{-1}\underline g)_{1} ) \cdot r((( \tau^{\omega}_{1,1} )^{-1}\underline g)_{2} ) \otimes x \end{align*} which equals zero because $( \tau^{\omega}_{0,1} )^{-1}\underline g$ and $( \tau^{\omega}_{1,1} )^{-1}\underline g$ are equivalent up to the Hurwitz action. In other words, the elements of $\im\dd_{2} \subset \ker\dd_{1} \subset \KK(M)_{1}$ are killed by $\alpha$. Therefore, $\alpha(\ker \dd_{1})$ is generated as an $A$-module in degree $\leq \deg (H_{1}(\KK(M)))$. Now, as $\alpha$ is surjective, this implies $\deg(\ker \beta) \leq \deg (H_{1}(\KK(M)))$ (recall that $A$ is graded trivially). But the exact sequence $ 0 \to R_{>0} \to R \to A \to 0 $ is a projective resolution of $A = R/R_{>0}$; tensoring with $M$, we get an identification $ H_{1}(M) = \Tor_{1}^{R}(A,M) = \ker\beta. $ Thus, we have proved (\ref{claim-prop}). \end{proof} \subsection*{The proof} \begin{proof}[Proof of Theorem~\ref{the-theorem}]\label{hurwitz-proof} We follow the proof strategy of~\cite[Thm.~6.1]{0912.0325}, with an extra focus on the determination of the explicit stable range. By assumption, $D_{R} = D_{R}(U)$ is finite and positive. We know from Remark~\ref{deg-one} that $R$ is generated in degree one as an algebra over the commutative ring $A$. From these facts, we conclude that $R$ is $A$-stabilized by $U$. As before, let $ M_{p} = \bigoplus_{n\geq 0} H_{p}(\Hur_{G,n\cdot\XI}^{c}; A). $ In order to prove the theorem, we need to show that multiplication by $U$, $M_{p} \overset{U\cdot}{\to} M_{p}$, is an isomorphism in source degree $n > (8 D_{R} + \deg U)p + 7D_{R} + \deg U$. To see this, we show that \begin{equation}\label{thm-statement} \deg (H_{q}(\KK(M_{p}))) \leq D_{R} + \deg U + (8 D_{R} + \deg U)p + D_{R}q \end{equation} holds for all $q\geq 0$. Then, the theorem follows from the second statement of Proposition~\ref{homprop-4.2}, considering the cases $q = 0,1$. We prove~(\ref{thm-statement}) by induction on~$p$. For $p = 0$, we have $M_{0} = R$. Now, by~(\ref{4-12}), $$ \deg (H_{q}(\KK(R))) \overset{(\ref{4-12})}{\leq} D_{R} + \deg U + q \leq D_{R} + \deg U + D_{R} q, $$ which implies the assertion. 
For the inductive step, suppose that~(\ref{thm-statement}) holds for $0\leq p<P$. The final terms of $\KK(M_{P})$ are given by $$ \KK(M_{P})_{2} \overset{\dd_{2}}{\to} \KK(M_{P})_{1} \overset{\dd_{1}}{\to} M_{P}. $$ The $n$-th graded part of $\dd_{2}$ is a differential $\dd\colon E^{1}_{1P} \to E^{1}_{0P}$ in the spectral sequence from Corollary~\ref{specseq-corollary}. In the range $p\leq\left\lfloor\frac{n}{2} \right\rfloor - 2$, we can identify $\dd_{1}$ with an edge map in the same spectral sequence. Now, $E^{2}_{qp}$ is given by the $n$-th graded part of $H_{q+1}(\KK(M_{p}))$. From the inductive hypothesis, for $j>1$, we obtain $E^{2}_{j, P+1-j} = 0$ for $$n > 10 D_{R} + 2\deg U - (7 D_{R}+\deg U)j + (8D_{R}+\deg U)P.$$ Hence, we have $E^{2}_{0P} = E^{\infty}_{0P}$ for $n>-4 D_{R} + (8D_{R}+\deg U)P$, as there are no nonzero differentials going into or out of $E^{2}_{0P}$. Similarly, for $j>0$, we have $E^{2}_{j,P-j} = 0$ for $$n > 2 D_{R} + \deg U - (7 D_{R}+\deg U)j + (8D_{R}+\deg U)P.$$ Thus, for $n > -5 D_{R } + (8D_{R}+\deg U)P$, the only graded piece of $H_{P}(\Hur_{G,n\cdot\XI}^{c};A)$ which does not vanish is $E^{\infty}_{0P}$. Combining these results, we see that $E^{2}_{0P} \cong E^{\infty}_{0P} \cong H_{P}(\Hur_{G,n\cdot\XI}^{c};A)$ as long as $n> -4 D_{R} + (8D_{R}+\deg U)P$. In particular, the edge map $\coker (d_{2}) \to M_{P}$ is an isomorphism in degrees above $-4 D_{R} + (8D_{R}+\deg U)P$, and so \begin{equation}\label{pf-bound} \max\{\deg (H_{0}(\KK(M_{P}))), \deg (H_{1}(\KK(M_{P})))\} \leq -4 D_{R} + (8D_{R}+\deg U)P. \end{equation} Now, we make use of the first statement of Proposition~\ref{homprop-4.2} and obtain \begin{align*} \deg (H_{q}(\KK(M_{P}))) &\overset{(\ref{pf-bound})}{\leq} -4 D_{R} + (8D_{R}+\deg U)P + D_{R}q + 5D_{R} + \deg U\\ &\overset{\phantom{(\ref{pf-bound})}}{=} D_{R} + \deg U + (8D_{R}+\deg U)P + D_{R}q, \end{align*} which is what we wanted to show. \end{proof} \subsection*{Unmarked covers} The $G$-action on Hurwitz vectors by simultaneous conjugation induces an action of $G$ on the Hurwitz spaces $\Hur_{G,n\cdot\XI}^{c}$. The corresponding \emph{Hurwitz space for unmarked covers} is defined as the quotient $\mathcal{H}_{G,n\cdot\XI}^{c} = \Hur_{G,n\cdot\XI}^{c}/G$. Now, in general, $G$ does not act freely on Hurwitz vectors. Anyway, for suitable stabilizing elements $U$, homological stability for Hurwitz spaces descends to spaces of unmarked covers, as we will see in this section's theorem. Note that since the Hurwitz action commutes with conjugation, the $G$-action on Hurwitz vectors gives the ring $R$ and the modules $M_{p}$ the structure of modules over the group ring $A[G]$. \begin{corollary}\label{thm-unmarked} Let $A$ be a field whose characteristic is either zero or prime to the order of $G$. Assume that there is a $G$-invariant element $U\in R$ which for any $p\geq 0$ induces isomorphisms $H_{p}(\Hur_{G, n\cdot\XI}^{c} ; A) \overset{\sim}{\to} H_{p}(\Hur_{G, (n+\deg U)\cdot\XI}^{c} ; A)$ for $n\geq r(p)$. Then, $H_{p}(\mathcal{H}_{G, n\cdot\XI}^{c} ; A) \cong H_{p}(\mathcal{H}_{G, (n+\deg U)\cdot\XI}^{c} ; A)$ holds in the same range. \end{corollary} \begin{proof} By a transfer argument, we have $H_{p}(\mathcal{H}^{c}_{G,n\cdot\XI};A) \cong H_{p}(\Hur_{G,n\cdot\XI}^{c};A)_{G}$ for all $n,p\geq 0$. 
The assumption that $U$ is fixed under the action of $G$, together with the $G$-e\-qui\-va\-ri\-ance of the maps $G^{n\xi}\times G^{\deg U\cdot\xi} \to G^{(n+\deg U)\xi}$, implies that in the stable range, $H_{p}(\Hur_{G, n\cdot\XI}^{c} ; A) \cong H_{p}(\Hur_{G, (n+\deg U)\cdot\XI}^{c} ; A)$ is an isomorphism of $A[G]$-modules. Taking $G$-coinvariants yields the result. \end{proof} \section{Application}\label{application} We work in the same setting as in the previous section. \subsection*{Connected covers and stable homology} In the purely abelian case, we have seen in Section~\ref{homstabhurwitz} that Hurwitz spaces \emph{are} homotopy equivalent to colored configuration spaces. In the preprint~\cite{1212.0923}, this is made more general by showing that under certain conditions, the stable homology of the components of Hurwitz spaces is isomorphic to the stable homology of the corresponding colored configuration space. Due to a false reasoning in the application of results from the earlier article~\cite{0912.0325}, the named preprint was withdrawn from the arXiv in~2013. Fortunately, the statement of Theorem~\ref{evwii} remains untouched. We refer to Ellenberg's blog post \cite{Quomod} for an explanation of the mistakes and a clarification which results are still correct. For $\underline a = (a_{1}, \ldots, a_{t})\in \N^{t}$, we define the Hurwitz vector \begin{equation}\label{v-vector} V = V(\underline a) = \prod_{i=1}^{t}\prod_{g \in c_{i}} \left( g^{(a_{i} \ord(g))} \right), \end{equation} where the product operation means concatenation of tuples. Now, if there is an $n \in \N$ such that we have $\sum_{g\in c_{i}} a_{i}\ord(g) = n\xi_{i}$ for all $i = 1, \ldots, t$, we have $V \in \cc^{n}$ up to the action of $\Br_{n\cdot\XI}$. It is not hard to see that for any $\XI\in\N^{t}$, such an $\underline a$ exists. Hence, in this case, $V$ may be interpreted as an element of $R = R^{A,c}_{G,\XI}$, and we write $V\in R$. By construction, we have $\partial V = 1$, so $V$ is central in $R$. By $\CHur_{G,n\cdot\underline\xi}^{c} \subset \Hur_{G,n\cdot\underline\xi}^{c}$, we denote the union of connected components of $\Hur_{G,n\cdot\underline\xi}^{c}$ parametrizing covers with full monodromy. \begin{theorem}[\textsc{Ellenberg--Venkatesh--Westerland}, {\cite[Cor.~5.8.2]{1212.0923}}]\label{evwii} Suppose that for any $p\geq 0$, the element $V$ from (\ref{v-vector}) induces an isomorphism $$ H_{p}(\CHur_{G,n\cdot\underline\xi}^{c};\Q)\overset{\sim}{\to} H_{p}(\CHur_{G,(n+\deg V)\cdot\underline\xi}^{c};\Q) $$ for $n\geq r(p)$. Then for any connected component $X$ of $\CHur_{G,n\cdot\underline\xi}^{c}$, the branch point map $X \to \Conf_{n\cdot \underline\xi}$ induces an isomorphism $$ H_{p}(X;\Q) \overset{\sim}{\to} H_{p}(\Conf_{n\cdot\underline\xi};\Q) $$ whenever $n\geq r(p)$. \end{theorem} In Theorem~\ref{the-theorem}, we give a condition for homological stabilization of the sequence $\{\Hur_{G,n\cdot\XI}^{c} \}$, while Theorem~\ref{evwii} is about the subspaces $\CHur_{G,n\cdot\XI}^{c}$ \emph{of connected} covers. In order to make the two theorems compatible, the condition \begin{equation}\label{all-connected} \Hur_{G, n\cdot\XI}^{c} = \CHur_{G, n\cdot\XI}^{c} \text{ for all } n\geq 1 \end{equation} must be satisfied. 
Now, (\ref{all-connected}) holds if and only if $G$ is \emph{invariably generated} by $c$: \begin{definition} We say that $G$ is \emph{invariably generated} by a tuple $c = (c_{1}, \ldots, c_{t})$ of distinct conjugacy classes in $G$ if for all choices of elements $g_{i} \in c_{i}$, $i = 1, \ldots, t$, $G$ is generated by $g_{1}, \ldots, g_{t}$. In this case, we call $c$ an \emph{invariable generation system} for $G$. \end{definition} By Jordan's theorem (\cite{Jordan}), a list of all of its nontrivial conjugacy classes invariably generates $G$. The following proposition is a slight adaptation of a standard result about Hurwitz action orbits. \begin{proposition}\label{conn-standard} If $c$ invariably generates $G$, there is an $N\in\N$ such that for all $n \geq N$, concatenation with any $g \in \cc$ yields a bijection $$ \cc^{n}/\Br_{n\cdot\XI} \overset{1:1}{\longleftrightarrow} \cc^{n+1}/\Br_{(n+1)\cdot\XI}. $$ Thus, any Hurwitz vector $U\in R = R^{\Z,c}_{G,\XI}$ induces an isomorphism $$ H_{0}(\Hur_{G, n\cdot\XI}^{c} ; \Z ) \cong H_{0}(\Hur_{G, (n+\deg U)\cdot\XI}^{c} ; \Z) $$ for all $n \geq N$. In particular, $D_{R}(U)$ is finite for any vector $U \in R$ with $\partial U = 1$. \end{proposition} \begin{proof} We follow the proof in \cite[Prop.~3.4]{0912.0325}, where the first statement is proved for $t = 1$. The last two statements are direct consequences of the first one. Let $\underline h \in \cc^{n+1}$. We need to show that for $n$ sufficiently large, there is a tuple $\underline h' \in \cc^{n}$ such that $\underline h$ is equivalent under the $\Br_{n\cdot\XI}$-action to $(g, \underline h')$. This shows that the maps $\cc^{n}/\Br_{n\cdot\XI} \to \cc^{n+1}/\Br_{(n+1)\cdot\XI}$ given by concatenation with $g = (g_{1}, \ldots, g_{\xi})$ are surjective for $n \gg 0$; since the involved sets are finite, it follows that these maps are eventually bijective. In the following, we work with the full $\Br_{n\xi}$-action. If we construct a tuple $(g, \underline h'')$ which is equivalent under the $\Br_{n\xi}$-action to $\underline h$, there is another braid which transforms~$\underline h''$ to an element of $\cc^{n}$, since the Hurwitz action permutes conjugacy types. Thus, it suffices to show that we can realize any $g_{0}\in G$ as the first entry of a tuple which is $\Br_{n\xi}$-equivalent to $\underline h$; the claim follows by successive application of this property. Assume $g_{0} \in c_{1}$. For $n\gg 0$, there exists an element $g'_{0} \in c_{1}$ that appears at least $d+1 = \ord(g'_{0}) + 1$ times in $\underline h$. We may use the Hurwitz action to pull $d$ of these elements to the front of $\underline h$, resulting in a new tuple $({g'_{0}}^{(d)}, \tilde h_{1}, \ldots, \tilde h_{n\xi - d})$. By the invariable generation property, the elements $\tilde h_{1}, \ldots, \tilde h_{n\xi - d}$ generate $G$ (note that $g'_{0}$ appears at least once in these last $n\xi - d$ entries). Now for all $i = 1, \ldots, n\xi -d$, there is a braid $\sigma_{i}\in\Br_{n\xi}$ which satisfies $$ \sigma_{i}\cdot ({g'_{0}}^{(d)}, \tilde h_{1}, \ldots, \tilde h_{n\xi - d}) = ((\tilde h_{i}g'_{0}\tilde h_{i}^{-1})^{(d)}, \tilde h_{1}, \ldots, \tilde h_{n\xi - d}). $$ It is given by $$ \sigma_{i} = \alpha_{i}^{-1} (\sigma_{d+i-1} \cdots \sigma_{i+1} \sigma_{i}^{2} \sigma_{i+1} \cdots \sigma_{d+i-1}) \alpha_{i}, $$ where $\alpha_{i}$ pulls the $d$-tuple $({g'_{0}}^{(d)})$ in front of $\tilde h_{i}$, which works since the boundary of $({g'_{0}}^{(d)})$ is trivial. 
Thus, the Hurwitz action may conjugate the elements $g'_{0}$ at the beginning of the tuple by any element in the group generated by $\tilde h_{1}, \ldots, \tilde h_{n\xi - d}$, which is equal to $G$. Hence, we may establish $g_{0}$ as the first entry. \end{proof} We are now ready to conclude: \begin{theorem}\label{thm-connected} Let $G$ be a finite group, $c = (c_{1}, \ldots, c_{t})$ an invariable generation system for $G$, and $\XI = (\xi_{1}, \ldots, \xi_{t})\in\N^{t}$. Let $U \in R = R^{\Z,c}_{G,\XI}$ be a Hurwitz vector with $\partial U = 1$. Then for any $p \geq 0$, there are isomorphisms \begin{align*} H_{p}(\Hur_{G, n\cdot\XI}^{c}; \Z ) &\cong H_{p}(\Hur_{G, (n+\deg U)\cdot\XI}^{c}; \Z )\\ H_{p}(\Hur_{G, n\cdot\XI}^{c}; \Q ) &\cong H_{p}(\Hur_{G, (n+1)\cdot\XI}^{c}; \Q ) \end{align*} for $n> (8D_{R}(U) + \deg U)p + 7D_{R}(U) + \deg U$. For $b = b_{0}(\Hur_{G, (D_{R}(U) + 1)\cdot\XI}^{c})$, $$ H_{p}(\Hur_{n\cdot\XI}^{c};\Q) \cong H_{p}( \Conf_{n\cdot\XI}; \Q ) \otimes_{\Q} \Q^{b} $$ in the same range. \end{theorem} \begin{proof} For $G$ abelian, an even stronger statement follows from Corollary~\ref{abelian-case}. We may thus assume that $D_{R}(U)>0$. The last statement of Proposition~\ref{conn-standard} tells us that the assumptions of Theorem~\ref{the-theorem} are satisfied for $U$. Therefore, for $p\geq 0$, \begin{equation*}\label{periodical-stab} H_{p}(\Hur_{G, n\cdot\XI}^{c}; \Z ) \cong H_{p}(\Hur_{G, (n+\deg U)\cdot\XI}^{c}; \Z ) \end{equation*} as long as $n> (8D_{R}(U) + \deg U)p + 7D_{R}(U) + \deg U$. By definition of $D_{R}(U)$, the number $b = b_{0}(\Hur_{G, n\cdot\XI}^{c})$ of connected components is stable for $n> D_{R}(U)$. We note that $V = V(\underline a) \in R$ is also a $\Z$-stabilizing element for $R$ by Proposition~\ref{conn-standard} and the fact that $\partial V = 1$. Fix $p\geq 0$ and $n> (8D_{R}(U) + \deg U)p + 7D_{R}(U) + \deg U$. Now, $n$ is \emph{always} in the stable range for $\{\Conf_{n\cdot\XI} \mid n\geq 0\}$, given by $n \geq \frac{2p}{\min\XI}$. Indeed, we have $D_{R}(U)>0$ because $G$ is non-abelian. We choose $k \geq 0$ such that $n+k \deg U$ is in the stable range for the stabilizing element~$V$. We obtain \begingroup \allowdisplaybreaks \begin{align*} H_{p}(\Hur_{n\cdot\XI}^{c};\Q) \overset{\phantom{((}\ref{the-theorem}\phantom{))}}{\cong} &H_{p}(\Hur_{(n+k\deg U)\cdot\XI}^{c};\Q) \\ \overset{\phantom{(}\ref{evwii}\phantom{)}}{\cong} &H_{p} (\Conf_{(n+k\deg U)\cdot \XI};\Q) \otimes_{\Q} \Q^{b} \\ \overset{\text{(\ref{tran})}}{\cong} &H_{p} (\Conf_{n\cdot \XI};\Q) \otimes_{\Q} \Q^{b} \\ \overset{\text{(\ref{tran})}}{\cong} &H_{p} (\Conf_{(n+k\deg U+1)\cdot \XI};\Q) \otimes_{\Q} \Q^{b} \\ \overset{\phantom{(}\ref{evwii}\phantom{)}}{\cong} &H_{p}(\Hur_{(n+k\deg U+1)\cdot\XI}^{c};\Q) \\ \overset{\phantom{((}\ref{the-theorem}\phantom{))}}{\cong} &H_{p}(\Hur_{(n+1)\cdot\XI}^{c};\Q), \end{align*} \endgroup which yields the remaining assertions. \end{proof} \subsection*{Outlook} The present paper may be understood as a sequel to the topological part of \cite{0912.0325}. There are still open questions regarding the homological stabilization for Hurwitz spaces: \begin{itemize} \item[-] For $t=1$, the condition from Theorem~\ref{the-theorem} is equivalent to the non-splitting property. Is there an analogous translation for the general case? \item[-] Until now, we only considered the \emph{diagonal} stabilization direction, i.e., we considered the sequence of shapes $\{n\cdot\XI\}$. 
Is there also a general theorem for the stabilization in the direction of a fixed conjugacy class, i.e., sequences $\{\XI + ne_{i}\}$, where $e_{i}$ is a unit vector? This is motivated by the corresponding result for colored configuration spaces in \cite{1312.6327}. \item[-] Does the homological stabilization carry over to base spaces of higher genus? \item[-] With Harer's theorem for $\mathcal{M}_{g}$ in mind, it is a natural question whether homological stabilization happens not only in the direction of the number of branch points, but also in the direction of the genus of the covered surface (\emph{genus stabilization}). For the zeroth Betti number, this has been tackled in the articles \cite{MR3428412} and \cite{1301.4409} in the slightly different setting of the substrata $\mathcal{M}_{g}(G)$ of $\mathcal{M}_{g}$ which contain the algebraic curves admitting a faithful action by a fixed finite group $G$. \end{itemize}
{ "timestamp": "2016-06-24T02:10:16", "yymm": "1606", "arxiv_id": "1606.05459", "language": "en", "url": "https://arxiv.org/abs/1606.05459", "abstract": "We study Hurwitz spaces with regard to homological stabilization. By a Hurwitz space, we mean a moduli space of branched, not necessarily connected coverings of a disk with fixed structure group and number of branch points. We choose a sequence of subspaces of Hurwitz spaces which is suitable for our investigations.In the first part, we introduce and study plant complexes, a large new class of simplicial complexes, generalizing the arc complex on a surface with marked points. In the second part, we generalize a result by Ellenberg-Venkatesh-Westerland by showing that homological stabilization of our sequence of Hurwitz spaces depends only on properties of their zeroth homology groups.", "subjects": "Algebraic Topology (math.AT); Geometric Topology (math.GT)", "title": "Plant complexes and homological stability for Hurwitz spaces", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9740426405416756, "lm_q2_score": 0.7279754489059775, "lm_q1q2_score": 0.70907912850189 }
https://arxiv.org/abs/1904.07587
Depth functions of powers of homogeneous ideals
We settle a conjecture of Herzog and Hibi, which states that the function depth $S/Q^n$, $n \ge 1$, where $Q$ is a homogeneous ideal in a polynomial ring $S$, can be any convergent numerical function. We also give a positive answer to a long-standing open question of Ratliff on the associated primes of powers of ideals.
\section{Introduction} Let $S$ be a standard graded algebra over a field $k$. For a homogeneous ideal $Q \subseteq S$, we call the function $\depth S/Q^n$, $n \ge 1$ the \emph{depth function} of $Q$. The goal of this paper is to prove the following conjecture of Herzog and Hibi in \cite{HH} (see also \cite[Problem 3.10]{He}). \medskip \begin{conjecture}[Herzog-Hibi] \label{conj.HH} Let $f: {\mathbb N} \rightarrow {\mathbb Z}_{\ge 0}$ be any function such that $f(n) = f(n+1)$ for all $n \gg 0$. Then there exists a homogeneous ideal $Q$ in a polynomial ring $S$ such that $f$ is the depth function of $Q$. \end{conjecture} For simplicity we call a function $f: {\mathbb N} \rightarrow {\mathbb Z}_{\ge 0}$ a \emph{numerical function} and say that $f$ is \emph{convergent} if $f(n) = f(n+1)$ for all $n \gg 0$. By a classical result of Brodmann \cite{Br}, the depth function of a homogeneous ideal is always convergent. Conjecture \ref{conj.HH} simply says that this is the only constraint for numerical functions to be depth functions of homogeneous ideals. This conjecture is remarkable since the depth function tends to be non-increasing in known examples. Before this work, Conjecture \ref{conj.HH} has been verified only for non-decreasing functions \cite{HH} and for some special classes of non-increasing functions \cite{HTT, HH, MST}. Note that the proof of Conjecture \ref{conj.HH} for non-increasing functions in \cite{HTT} has a gap. Examples of non-monotone depth functions were hard to find \cite{BHH, HS, HH, MV}. However, Bandari, Herzog and Hibi \cite{BHH} showed that the depth function can have any given number of local maxima. \par Our main result, Theorem \ref{Herzog-Hibi}, settles Conjecture \ref{conj.HH} in its full generality. Furthermore, we shall show that the ideal $Q$ can be chosen to be a monomial ideal. As a consequence, we give a positive answer to the following question of Ratliff, which has remained open since 1983 \cite[(8.9)]{Ra}. \medskip \begin{question}[Ratliff] \label{question.R} Given a finite set $\Gamma$ of positive integers, do there exist a Noetherian ring $S$, an ideal $Q$ and a prime ideal $P \supset Q$ in $S$ such that $P$ is an associated prime of $Q^n$ if and only if $n \in \Gamma$? \end{question} Inspired by Theorem \ref{Herzog-Hibi}, one may expect that for any convergent positive numerical function $f$, there exists a homogeneous ideal $Q$ such that $f$ is the depth function of symbolic powers of $Q$. This is verified recently by the second and the third authors of this paper \cite{NgT}. The proof of Conjecture \ref{conj.HH} is based on our recent works on sums of ideals \cite{HNTT, HTT}. The key observation is the additivity of depth functions; that is, the sum of two depth functions is again a depth function. It can also be seen that any convergent numerical function which is not the constant zero function can be written as the sum of a finite number of functions of the following two types: \begin{itemize} \item Type I: for some fixed $d \in {\mathbb N}$, $f(n) = \left\{\begin{array}{ll} 0 & \text{if } n < d \\ 1 & \text{if } n \ge d. \end{array}\right.$ \item Type II: for some fixed $d \in {\mathbb N}$, $f(n) = \left\{\begin{array}{ll} 0 & \text{if } n \not= d \\ 1 & \text{if } n = d. \end{array}\right.$ \end{itemize} Therefore, the proof is completed if we can construct ideals with depth functions of Types I and II. Our paper is structured as follows. In Section 2 we prove the additivity of depth functions. 
Ideals with depth functions of Types I and II are constructed in Section 3. Section 4 is devoted to consequences of our solution to Conjecture \ref{conj.HH}. We assume that the reader is familiar with basic properties of associated primes and depth, which we use without references. For unexplained notions and terminology, we refer the reader to \cite{BrH, E}. \section{Additivity of depth functions} \label{sect_prel} Throughout this section, let $A$ and $B$ be polynomial rings over a field $k$ with disjoint sets of variables, and let $R = A \otimes_k B$. Let $I \subseteq A$ and $J \subseteq B$ be nonzero proper homogeneous ideals. By abuse of notation, we shall also use $I$ and $J$ to denote their extensions in $R$. \begin{lemma}[\protect{\cite[Lemma 1.1]{HT}}] \label{HoaTam} $I \cap J = IJ$. \end{lemma} \begin{lemma}[\protect{\cite[Lemma 2.2]{HT}}] \label{HoaTam2} $\depth R/IJ = \depth A/I + \depth B/J + 1.$ \end{lemma} We shall use the above lemmas to prove the following result which yields the additivity of depth functions. \begin{proposition} \label{additivity} Let $I \subset A$ and $J \subset B$ be homogeneous ideals as above. There exists a homogeneous ideal $Q$ in a polynomial ring $S$ such that for all $n > 0$, $$\depth S/Q^n = \depth A/I^n + \depth B/J^n.$$ Moreover, if $I$ and $J$ are monomial ideals, then $Q$ can be chosen to be a monomial ideal. \end{proposition} \begin{proof} Let $x \in A$ and $y \in B$ be arbitrary variables. Let $R = A \otimes_k B$. Then $R$ is a polynomial ring in the variables of $A$ and $B$. By Lemma \ref{HoaTam} we have $IJ = I \cap J$. The associated primes of $I \cap J$ are extensions of ideals in one of the rings $A,B$. Therefore, $x-y$ does not belong to any associated prime of $IJ$. From this it follows that $$\depth R/(IJ,x-y) = \depth R/IJ -1.$$ By Lemma \ref{HoaTam2} we have $$\depth R/IJ = \depth A/I + \depth B/J +1.$$ Therefore, $$\depth R/(IJ,x-y) = \depth A/I + \depth B/J.$$ Obviously, we may replace $I,J$ by $I^n,J^n$ and obtain $$\depth R/(I^nJ^n,x-y) = \depth A/I^n + \depth B/J^n.$$ Set $S = R/(x-y)$ and $Q = (IJ,x-y)/(x-y)$. Then $S$ is isomorphic to a polynomial ring over $k$ and $$\depth S/Q^n = \depth R/((IJ)^n,x-y) = \depth A/I^n + \depth B/J^n$$ for all $n > 0$. Moreover, $Q$ is a monomial ideal if $I, J$ are monomial ideals. \end{proof} To ease notation, we shall identify a numerical function $f(n)$ with the sequence of its values $f(1),f(2),....$ If $f$ is the constant function 0,0,0,..., then $f$ is the depth function of the maximal homogeneous ideal of any polynomial ring over $k$. \begin{lemma} \label{Types} Let $f$ be a convergent numerical function which is not the constant function 0,0,0,.... Then $f$ can be written as a sum of numerical functions of the following two types:\par {\rm Type I:} $0,...,0,1,1,...$, \par {\rm Type II:} $0,...,0,1,0,0,...$. \par \end{lemma} \begin{proof} Let $f$ be a convergent numerical function of the form $c_1,...,c_n, c,c,...$. Then $f$ is the sum of the functions $0,...,0,c_i,0,0,...$, $i = 1,...,n$, and the function $0,...,0,c,c,...$. The function $0,...,0,c_i,0,0,...$ is $c_i$ times the function $0,...,0,1,0,0,...$, where $1$ stands only at the $i$-th place. The function $0,...,0,c,c,...$ is $c$ times the function $0,...,0,1,1,...$, where $1$ starts from the $(n+1)$-th place. \end{proof} By Proposition \ref{additivity} and Lemma \ref{Types}, to establish the validity of Conjecture \ref{conj.HH}, it suffices to construct depth functions of types I and II. 
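For illustration, consider the convergent function $3,1,2,2,2,...$, where the particular values are chosen purely as an example. Following the proof of Lemma \ref{Types}, it decomposes as $$3\cdot(1,0,0,0,...) + 1\cdot(0,1,0,0,...) + 2\cdot(0,0,1,1,...),$$ that is, as three copies of the function of Type II with the entry $1$ at the first place, one copy of the function of Type II with the entry $1$ at the second place, and two copies of the function of Type I starting at the third place.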
\section{Construction of ideals with depth functions of Types I and II} Herzog and Hibi \cite{HH} already constructed monomial ideals $I$ whose depth functions can be any non-decreasing convergent numerical function. Therefore, the existence of depth functions of Type I follows from their result. \begin{example}[\protect{\cite[Theorem 4.1]{HH}}] \label{Type I} Let $A = k[x,y,z]$. For any integer $d \ge 2$, let $I=(x^{d+2},x^{d+1}y,xy^{d+1},y^{d+2},x^dy^2z).$ Then $$\depth A/I^n = \begin{cases} 0 &\text{if $n \le d-1$},\\ 1 &\text{if $n \ge d$}. \end{cases}$$ \end{example} We also know that there are monomial ideals $J$ with the depth function $1,...,1,0,0,...$ \cite{HTT, MST}. The existence of such depth functions can be used to construct depth functions of Type II as follows. Let $I$ and $J$ be monomial ideals with the depth functions $0,...,0,1,1,...$ and $1,...,1,0,0,...$, where the first 1 of the first function and the last 1 of the second function are on the same place. By the proof of Proposition \ref{additivity}, the function $\depth R/((IJ)^n,x-y)$ is a function of the form $1,...,1,2,1,1,...$ for some variables $x,y$. If we can find variables $x',y'$ such that $x'-y'$ is a non-zerodivisor in $R/((IJ)^n,x-y)$ for all $n \ge1$, then $$\depth R/((IJ)^n,x-y,x'-y') = \depth R/((IJ)^n,x-y)-1$$ is a function of the form $0,...,0,1,0,0,....$ Clearly, we can identify $S = R/(x-y,x'-y')$ with a polynomial ring and $(IJ,x-y,x'-y')/(x-y,x'-y')$ with a monomial ideal in $S$. \par To find such variables $x',y'$ we need to know the associated primes of the ideal $((IJ)^n,x-y)$ for all $n \ge 1$. For convenience, we denote the set of the associated primes and the set of the minimal associated primes of an ideal $Q$ by $\Ass(Q)$ and $\Min(Q)$, respectively. \begin{proposition} \label{control} Let $A$ and $B$ be polynomial rings over a field $k$. Let $I \subset A$ and $J \subset B$ be nonzero proper homogeneous ideals. Let $x \in A$ and $y \in B$ be arbitrary variables. Let $R = A\otimes_kB$. Then \begin{align*} & \Ass (IJ,x-y) = \\ & \{({\mathfrak p},x-y)|\ {\mathfrak p} \in \Ass (I)\} \cup \{({\mathfrak q},x-y)|\ {\mathfrak q} \in \Ass(J)\} \cup \bigcup_{\begin{subarray}{l} {\mathfrak p} \in \Ass (I), x \in {\mathfrak p}\\ {\mathfrak q} \in \Ass (J), y \in {\mathfrak q} \end{subarray}} \Min({\mathfrak p}+{\mathfrak q}). \end{align*} \end{proposition} \begin{proof} Let $P$ be an arbitrary prime of $\Ass(I,x-y)$. Then $P = {\mathfrak p}+(x-y)$ for some ${\mathfrak p} \in \Ass(I)$. If $J \subseteq P$, we must have $J \subseteq (y) \subset P$. This implies $J = y^dJ'$ for some ideal $J' \subset B$, $J' \not\subseteq (y)$, $d \ge 1$. Let $f \in A$ be an element such that $P = (I,x-y):f$. It is easy to check that $P = (y^dI,x-y) : y^df.$ Hence, $P \in \Ass(y^dI,x-y)$. Since $(IJ,x-y)_P = (y^dI,x-y)_P$, this implies $P \in \Ass(IJ,x-y)$. If $J \not\subseteq P$, we have $(IJ,x-y)_P = (I,x-y)_P$. Hence, $P \in \Ass(IJ,x-y)$. So we can conclude that $$\Ass(I,x-y) = \{({\mathfrak p},x-y)|\ {\mathfrak p} \in \Ass (I)\} \subseteq \Ass(IJ,x-y).$$ \par Similarly, we also have $$\Ass(J,x-y) = \{({\mathfrak q},x-y)|\ {\mathfrak q} \in \Ass(J)\} \subseteq \Ass(IJ,x-y).$$ It remains to prove that if $P$ is a prime ideal of $R$, which does not belong to $\Ass(I,x-y)$ nor $\Ass(J,x-y)$, then $P \in \Ass(IJ,x-y)$ if and only if $P \in \Min({\mathfrak p}+{\mathfrak q})$ for some ${\mathfrak p} \in \Ass (I), x \in {\mathfrak p}$, and ${\mathfrak q} \in \Ass (J), y \in {\mathfrak q} $. 
\par Without restriction, we may assume that $(IJ,x-y) \subseteq P$. Since $P \not\in \Ass(I,x-y)$, we have $\depth(R/(I,x-y))_P \ge 1$. Since $x-y$ is a non-zerodivisor on $I$, this implies $\depth (R/I)_P \ge 2$. Similarly, we also have $\depth (R/J)_P \ge 2$. Note that $P \in \Ass(IJ,x-y)$ if and only if $\depth (R/(IJ,x-y))_P = 0$. By Lemma \ref{HoaTam} we have $IJ = I \cap J$. Hence, $x-y$ is a non-zerodivisor in $R/IJ$. From this it follows that $P \in \Ass(IJ,x-y)$ if and only if $\depth (R/IJ)_P = 1$. Using the exact sequence $$0 \to (R/IJ)_P \to (R/I)_P \oplus (R/J)_P \to (R/I+J)_P \to 0$$ we can deduce that $\depth (R/IJ)_P = 1$ if and only if $\depth (R/I+J)_P = 0$, which means $P \in \Ass(I+J)$. By \cite[Theorem 2.5]{HNTT}, we have $$\Ass(I+J) = \bigcup_{\begin{subarray}{l} {\mathfrak p} \in \Ass (I)\\ {\mathfrak q} \in \Ass (J)\end{subarray}} \Min({\mathfrak p}+{\mathfrak q}).$$ Notice that ${\mathfrak p} +{\mathfrak q}$ is not necessarily a prime ideal (see e.g. \cite[Example 2.3]{HNTT}). If $P \in \Min({\mathfrak p}+{\mathfrak q})$, then $P \cap A = {\mathfrak p}$ and $P \cap B = {\mathfrak q}$ by \cite[Lemma 2.4]{HNTT}. Moreover, $P$ is a bihomogeneous ideal with respect to the natural bigraded structure of $R = A \otimes_k B$. In this case, $x-y \in P$ implies $x \in P \cap A = {\mathfrak p}$ and $y \in P \cap B = {\mathfrak q}$. So we can conclude that $P \in \Ass(IJ,x-y)$ if and only if $P \in \Min({\mathfrak p}+{\mathfrak q})$ for some ${\mathfrak p} \in \Ass (I), x \in {\mathfrak p}$, and ${\mathfrak q} \in \Ass (J), y \in {\mathfrak q} $. \end{proof} \begin{remark} {\rm Since Proposition \ref{control} is of independent interest, one may ask whether it is true in a more general setting. If $I,J$ are not homogeneous, we can use the same arguments to prove the following general formula: \begin{align*} \Ass (IJ,x-y) = & \{({\mathfrak p},x-y)|\ {\mathfrak p} \in \Ass (I)\} \cup \{({\mathfrak q},x-y)|\ {\mathfrak q} \in \Ass(J)\} \cup\\ & \bigcup_{\begin{subarray}{l} {\mathfrak p} \in \Ass (I)\\ {\mathfrak q} \in \Ass (J)\end{subarray}} \Min({\mathfrak p}+{\mathfrak q}) \cap V(x-y), \end{align*} where $V(x-y)$ denotes the set of prime ideals containing $x-y$. In this case, $(IJ,x-y)$ may have an associated prime $P \in \Min({\mathfrak p}+{\mathfrak q})$ for some ${\mathfrak p} \in \Ass(I)$ and ${\mathfrak q} \in \Ass(J)$ which do not satisfy the conditions $x \in {\mathfrak p}$ and $y \in {\mathfrak q}$. } \end{remark} \begin{example} {\rm Let $A = {\mathbb Q}[x,z]$ and $I = (x^2+1,z)$. Let $B = {\mathbb Q}[y,t]$ and $J = (y^2+1,t)$. Then $I, J$ are prime ideals, $x \not\in I$ and $y \not\in J$. We have $$\Min_R(R/I+J) = \{(x^2+1,t,z,x-y),(x^2+1,t,z,x+y)\}.$$ Hence $$\Ass(IJ,x-y) = \{(x^2+1,z,x-y), (y^2+1,t,x-y), (x^2+1,z,t,x-y)\}.$$ None of these primes contains $x$ or $y$.} \end{example} Using Proposition \ref{control} we can give a sufficient condition for the existence of variables $x',y'$ such that $x'-y'$ is a non-zerodivisor in $R/((IJ)^n,x-y)$ for all $n \ge 1$. \begin{proposition} \label{reduction} Let $I$ be a proper monomial ideal in $A = k[x_1,...,x_r]$, $r \ge 3$, such that $x_3,...,x_r \in \sqrt{I}$. Let $J$ be a proper monomial ideal in $B=k[y_1,...,y_s]$, $s \ge 3$, such that $y_3,...,y_s \in \sqrt{J}$. Let $R = k[x_1,...,x_r,y_1,...,y_s]$. Assume that $\depth A/I^n > 0$ or $\depth B/J^n > 0$ for some $n > 0$. 
Then $$\depth R/((IJ)^n,x_1-y_1,x_2-y_2) = \depth A/I^n + \depth B/J^n - 1.$$ \end{proposition} \begin{proof} By the proof of Proposition \ref{additivity} we have $$\depth R/((IJ)^n,x_1-y_1) = \depth A/I^n + \depth B/J^n \ge 1.$$ It remains to show that $x_2-y_2$ is a non-zerodivisor in $R/((IJ)^n,x_1-y_1)$. Assume for the contrary that $x_2-y_2 \in P$ for some associated prime $P$ of $((IJ)^n,x_1-y_1)$. By Proposition \ref{control}, $P = {\mathfrak p}+{\mathfrak q}$ for some ${\mathfrak p} \in \Ass(I^n)$, $x_1 \in {\mathfrak p}$, and ${\mathfrak q} \in \Ass(J^n)$, $y_1 \in {\mathfrak q}$. Note that ${\mathfrak p}$ and ${\mathfrak q}$ are generated by variables in $A$ and $B$. Since $x_2-y_2 \in {\mathfrak p}+{\mathfrak q}$, we must have $x_2 \in {\mathfrak p}$ and $y_2 \in {\mathfrak q}$. The assumption $x_3,...,x_r \in \sqrt{I}$ and $y_3,...,y_s \in \sqrt{J}$ implies $x_3,...,x_r \in {\mathfrak p}$ and $y_3,...,y_s \in {\mathfrak q}$. Hence, $x_1,...,x_r,y_1,...,y_s \in P$. Therefore, $P = (x_1,...,x_r,y_1,...,y_s)$, which contradicts the fact that $\depth R/((IJ)^n,x_1-y_1) \ge 1$. \end{proof} Now we are going to construct monomial ideals having depth function of Type II. \begin{example} \label{Type II} {\rm Let $A = k[x,y,z]$ and $I=(x^{d+2},x^{d+1}y,xy^{d+1},y^{d+2},x^dy^2z)$, $d \ge 2$. By Example \ref{Type I} we have $$\depth A/I^n = \begin{cases} 0 &\text{if $n \le d-1$},\\ 1 &\text{if $n \ge d$}. \end{cases}$$ Let $B = k[t,u,v]$. Let $J$ be the integral closure of the ideal $(t^{3d+3},tu^{3d+1}v,u^{3d+2}v)^3$ or $J = (t^{d+1},tu^{d-1}v,u^dv)$. By \cite[Example 4.10]{HTT} and \cite[Proposition 1.5]{MST} we have $$\depth B/J^n=\begin{cases} 1 & \text{if $n\le d$},\\ 0 &\text{if $n\ge d+1$}. \end{cases}$$ Let $R = k[x,y,z,t,u,v]$. By Proposition \ref{reduction}, we have $$\depth R/((IJ)^n,y-u,z-v) = \begin{cases} 0 &\text{if $n\neq d$},\\ 1 &\text{if $n = d$}. \end{cases}$$ If we set $S = k[x,t,u,v]$ and $Q = (x^{d+2},x^{d+1}u,xu^{d+1},u^{d+2},x^du^2v)J$, which is obtained from $IJ$ by setting $y = u$ and $z = v$, then $$\depth S/Q^n = \depth R/((IJ)^n,y-u,z-v).$$ Hence, the depth function of $Q$ is of Type II.} \end{example} \section{Consequences} By Examples \ref{Type I} and \ref{Type II} we have monomial ideals with depth functions of Types I and II. Therefore, the solution to Conjecture \ref{conj.HH} immediately follows from Proposition \ref{additivity} and Lemma \ref{Types}. \begin{theorem} \label{Herzog-Hibi} Let $f$ be any convergent numerical function. There exists a monomial ideal $Q$ in a polynomial ring $S$ such that $\depth S/Q^n$ $= f(n)$ for all $n \ge 1$. \end{theorem} Theorem \ref{Herzog-Hibi} has the following interesting consequence on the associated primes of powers of ideals, which gives a positive answer to Question \ref{question.R} of Ratliff. \begin{corollary} \label{Ratliff} Let $\Gamma$ be a set of positive integers which is either finite or contains all sufficiently large integers. Then there exists a monomial ideal $Q$ in a polynomial ring $S$ such that ${\mathfrak m} \in \Ass(Q^n)$ if and only if $n \in \Gamma$, where ${\mathfrak m}$ is the maximal homogeneous ideal of $S$. \end{corollary} \begin{proof} Let $f$ be any convergent numerical function such that $f(n) = 0$ if and only if $n \in \Gamma$. Then there exists a monomial ideal $Q$ in a polynomial ring $S$ such that $\depth S/Q^n = f(n)$ for all $n \ge 1$. This is the desired ideal because ${\mathfrak m} \in \Ass(Q^n)$ if and only if $\depth S/Q^n = 0$. 
\end{proof} Corollary \ref{Ratliff} also gives a negative answer to the following question of Ratliff \cite[(8.4)]{Ra}. \begin{question}[Ratliff] \label{Ratliff 2} Let $Q$ be an arbitrary ideal in a Noetherian ring $S$. Let $P \supset Q$ be a prime ideal such that $P \in \Ass(Q^m)$ for some $m \ge 1$ and $P \in \Ass(Q^n)$ for all $n$ sufficiently large. Is $P \in \Ass(Q^n)$ for all $n \ge m$? \end{question} This question was already answered in the negative by Huckaba \cite[Example 1.1]{Hu}. However, unlike the ideal constructed in the proof of Corollary \ref{Ratliff}, the ideal $Q$ in his example is not a monomial ideal. One may also ask which functions can occur as the projective dimension of powers of a homogeneous ideal. Let $Q$ be an arbitrary homogeneous ideal in a polynomial ring $S$. By the Auslander-Buchsbaum formula we have $$\pd Q^n = \dim S - \depth S/Q^n - 1.$$ Since $\depth S/Q^n$ is a convergent numerical function \cite{Br}, $\pd Q^n$ is also a convergent numerical function. \begin{corollary} \label{pd} Let $g$ be an arbitrary convergent numerical function. There exist a homogeneous ideal $Q$ and a number $c$ such that $\pd Q^n = g(n) + c$ for all $n \ge 1$. \end{corollary} \begin{proof} Let $m = \max_{n \ge 1}g(n)$. Then $f(n) = m - g(n)$, $n \ge 1$, is a convergent numerical function. By Theorem \ref{Herzog-Hibi}, there exists a homogeneous ideal $Q$ in a polynomial ring $S$ such that $\depth S/Q^n = f(n)$ for all $n \ge 1$. Let $d$ be the number of variables of $S$. Set $c = d-m-1$. Then $$\pd Q^n = d - f(n) - 1= d - m + g(n) - 1 = g(n) + c$$ for all $n \ge 1$. \end{proof} It is of interest to know the smallest possible number $c$ for a given function $g$ in Corollary \ref{pd}. This number is determined by the smallest number of variables of a polynomial ring which contains a homogeneous ideal with a given depth function $f$. We are not able to compute this number. The proof of Theorem \ref{Herzog-Hibi} uses a large number of variables compared to the values of $f$. \medskip \begin{acknowledgement} H.T. H\`a is partially supported by the Simons Foundation (grant \#279786) and Louisiana Board of Regents (grant \#LEQSF(2017-19)-ENH-TR-25). Hop D. Nguyen and T.N. Trung are partially supported by Project ICRTM 01$\_$2019.01 of the International Centre for Research and Postgraduate Training in Mathematics. N.V. Trung is partially supported by Vietnam National Foundation for Science and Technology Development. The authors would like to thank Aldo Conca and J\"urgen Herzog for useful discussions, Takayuki Hibi for pointing out a gap in the proof of Conjecture \ref{conj.HH} for non-increasing functions in \cite{HTT}, and C\u{a}t\u{a}lin Ciuperc\u{a} for informing us that our negative answer to Question \ref{Ratliff 2} of Ratliff was already given by S. Huckaba in \cite{Hu}. This paper is split from the first version of \cite{HNTT} following a recommendation of its referee. \end{acknowledgement}
{ "timestamp": "2019-04-17T02:26:04", "yymm": "1904", "arxiv_id": "1904.07587", "language": "en", "url": "https://arxiv.org/abs/1904.07587", "abstract": "We settle a conjecture of Herzog and Hibi, which states that the function depth $S/Q^n$, $n \\ge 1$, where $Q$ is a homogeneous ideal in a polynomial ring $S$, can be any convergent numerical function. We also give a positive answer to a long-standing open question of Ratliff on the associated primes of powers of ideals.", "subjects": "Commutative Algebra (math.AC); Algebraic Geometry (math.AG)", "title": "Depth functions of powers of homogeneous ideals", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9740426435557124, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.7090791249475562 }
https://arxiv.org/abs/1601.02245
On parallel solution of ordinary differential equations
In this paper the performance of a parallel iterated Runge-Kutta method is compared with those of the serial fourth order Runge-Kutta and Dormand-Prince methods. It was found that, typically, the runtime for the parallel method is comparable to that of the serial versions, though it uses considerably more computational resources. A new algorithm is proposed where full parallelization is used to estimate the best stepsize for integration. It is shown that this new method outperforms the others, notably in the integration of very large systems.
\section{Introduction} \label{sec:intro} The numerical solution of an initial value problem given as a system of ordinary differential equations (ODEs) is often required in engineering and applied sciences, and is less common, but not unusual, in pure sciences. For precisely estimating asymptotic properties of the solutions, the global truncation errors must be kept lower than the desired tolerance during a very large number of iterations. This is usually achieved by using an adaptive algorithm for the estimation of the largest step for integration yielding a local truncation error below the tolerance. Nevertheless, such a correction usually leads to a drastic increase of the computational time. On the other hand, the use of spectral methods to solve parabolic and hyperbolic partial differential equations (PDEs) is becoming more and more popular. The spectral methods reduce these PDEs to a set of ODEs \cite{Boyd}. The higher the desired precision for the numerical solution, the larger the resulting system of ODEs. Very large systems arise also in simulations of multi--agent systems \cite{Wooldridge}. It can take hours to integrate this kind of system over a few steps. Taking all the above into account, it can be concluded that devising improved algorithms to compute the numerical solution of ODE systems is still a very important task. With the steady development of cheap multi-processor technology, it is reasonable to consider using parallel computing for speeding up real-time computations. Particularly interesting are the current options for small-scale parallelism with a dozen or so relatively powerful processors. Several methods have been devised with that aim (see for instance Refs. \cite{Burrage,Houwen,hairer}), many of which promise a substantial reduction of the runtime. For instance, the authors in Ref.\cite{Houwen} claim that the performance of their parallel method is comparable to that of the serial method developed by Dormand and Prince \cite{dop} and, in terms of the required number of evaluations of the right--hand side of the ODEs, demonstrates a superior behaviour. The aim of this work was twofold. Firstly, we wished to test these claims by solving ODE systems with different degrees of complexity over different ranges of time; secondly, we proposed and tested a new method which focuses on taking full advantage of parallel computing for estimating the optimal stepsize. All our codes were written in C and for parallel programming we used OpenMP. The programs were tested on a Supermicro A+ 1022GG-TF server with 24 CPUs and 32 gigabytes of RAM. In the next section, we describe the numerical methods we used for our tests, the standard fourth order Runge-Kutta, a version of the Dormand-Prince method and the parallel iterated Runge-Kutta method proposed in Ref.\cite{Houwen}. These last two methods are widely regarded to be amongst the best options for serial and parallel numerical solution of ODEs. It is also briefly described how the optimal stepsize is estimated in each case. In section \ref{sec:systems} the initial value problems used for testing the methods are described. Next, in section \ref{sec:tresults} we report the results of our comparison of the performance of these methods. In section \ref{sec:aspa} we introduce an adaptive stepsize parallel algorithm coupled to the Dormand-Prince integrator, and report the results of the corresponding tests. Finally, in section \ref{sec:concl} we present our conclusions. 
\section{Numerical integrators and local error control} \label{sec:methods} Let the initial value problem be specified as follows, \begin{equation} \dot{y} = f(t,y)\, , \quad y(t_0)=y_0\, . \end{equation} Here $y(t)$ is the vector solution at time $t$, dot stands for the derivative with respect to time and the right--hand side of the equation defines a vector field. Our aim is to compare the performance of several methods for approximating the solution of this problem. All of them are members of the family of explicit Runge--Kutta methods and approximate $y(t)$ at $t_{n+1}=t_{n}+h$ as \begin{equation} y_{n+1} = y_{n} + h\displaystyle\sum\limits_{i=1}^{s} b_{i}k_{i} \, , \end{equation} where \begin{eqnarray} k_1 &=& f(t_n, y_n)\, , \nonumber \\ k_2 &=& f(t_n + c_2 h, y_n + h a_{21} k_1)\, , \nonumber \\ k_3 &=& f(t_n + c_3 h, y_n + h( a_{31} k_1 + a_{32} k_2))\, , \nonumber \\ \vdots \nonumber \\ k_s &=& f(t_n + c_s h, y_n + h( a_{s1} k_1 + a_{s2} k_2 + \cdots + a_{s,s-1}k_{s-1}))\, , \end{eqnarray} and $s$ is known as the number of stages. Therefore, a method with $s$ stages usually requires, at least, $s$ evaluations of the right--hand side of the system at each iteration. A Runge--Kutta method can be specified by a Butcher tableau like in table \ref{table:butcher}. \begin{table}[h] \begin{center} $$\begin{array}{ c | c c c c c } 0 \\ c_{_2} & a_{_{21}} \\ c_{_3} & a_{_{31}} & a_{_{32}} \\ \vdots & \vdots & \vdots & \ddots\\ c_{_s} & a_{_{s1}} & a_{_{s2}} & \ldots & a_{_{s,s-1}}\\ \hline \T& b_{_1} & b_{_2} & \ldots & b_{_{s-1}} & b_{_s} \end{array}$$ \caption{Butcher tableau} \label{table:butcher} \end{center} \end{table} The order of a method is $p$ if the local truncation error is of order $\mathcal{O}(h^{p+1})$, while the total accumulated error is of order $\mathcal{O}(h^{p})$. The higher the order of the method, the lower the error of the approximation; nevertheless, constructing higher order Runge--Kutta formulas is not an easy task. To avoid increasing $s$ (and, therefore, the number of evaluations of $f$) a common alternative is to develop methods with adaptive stepsize. For any numerical method, an estimate for the local truncation error while integrating from $t_n$ to $t_{n+1}=t_n+h$ is given by \begin{equation} \epsilon:=\|y_{n+1}-\bar{y}_{n+1}\|\, , \label{eq:error} \end{equation} where $\|\cdot\|$ stands for a given norm, and $y_{n+1}$ and $\bar{y}_{n+1}$ are the results of different numerical approximations of $y(t_{n+1})$. The stepsize yielding a local error below the tolerance ($Tol$) is then given by \begin{equation} h_{opt}=h\displaystyle\left(\frac{Tol}{\epsilon}\right)^{\frac{1}{p}}\, . \end{equation} \subsection{Fourth order Runge--Kutta method (RK4)}\label{sec:rk4} The method given in table \ref{table:rk4} is the classical member of the family of Runge--Kutta methods. \begin{table}[ht] \begin{center} \begin{tabular}{c|c c c c} 0 & \\[.25cm] 1/2 & 1/2 \\[.25cm] 1/2 & 0 & 1/2 \\[.25cm] 1 & 0 & 0 & 1 \\ \hline \T& 1/6 & 2/6 & 2/6 & 1/6 \end{tabular} \caption{``The'' Runge--Kutta method} \label{table:rk4} \end{center} \end{table} We used RK4 without a stepsize control mechanism. Hence, in all our tests we chose the stepsize in such a way that the global error had the same order as that obtained by the methods with adaptive stepsize. 
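For concreteness, a single step of this scheme can be sketched in C as follows. The fragment below is only an illustration written for this presentation and is not the code used for the tests reported later; the routine \texttt{f} is assumed to write the right--hand side of the system into its third argument, and C99 variable-length arrays are used for brevity.
\begin{verbatim}
/* Illustrative sketch of one step of the classical fourth order
   Runge-Kutta method.  f(t, y, dydt, n) is assumed to evaluate the
   right-hand side of the system and store it in dydt.              */
void rk4_step(void (*f)(double, const double *, double *, int),
              double t, double h, double *y, int n)
{
    double k1[n], k2[n], k3[n], k4[n], ytmp[n];
    int i;

    f(t, y, k1, n);                                   /* first stage  */
    for (i = 0; i < n; i++) ytmp[i] = y[i] + 0.5 * h * k1[i];
    f(t + 0.5 * h, ytmp, k2, n);                      /* second stage */
    for (i = 0; i < n; i++) ytmp[i] = y[i] + 0.5 * h * k2[i];
    f(t + 0.5 * h, ytmp, k3, n);                      /* third stage  */
    for (i = 0; i < n; i++) ytmp[i] = y[i] + h * k3[i];
    f(t + h, ytmp, k4, n);                            /* fourth stage */
    for (i = 0; i < n; i++)                           /* combination  */
        y[i] += h * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]) / 6.0;
}
\end{verbatim}
The structure of the routine makes explicit that each step costs exactly four evaluations of the right--hand side, one per stage of the method.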
\subsection{Dormand-Prince method (DOP853)}\label{sec:dop} In this method \cite{dop}, the last stage is evaluated at the same point as the first stage of the next step (this is the so-called FSAL property), so that the number of evaluations of $f$ is one less than the number of stages. Here there is no easy way to present the Butcher coefficients in a tableau, because it involves dealing with irrational quantities \cite{hairer}. The coefficients we use can be found in the code by E. Hairer and G. Wanner available at the site \cite{dopButcher}. The approximations $y_{n+1}$ and $\bar{y}_{n+1}$ in equation (\ref{eq:error}) correspond here to the results obtained using different orders, and the $k_i$ are determined by minimizing the error of the higher order result. As a matter of fact, in the version we use, two comparisons are made, one between 8th and 5th orders, and the second one between 8th and 3rd orders. Then, the error is estimated using \cite{hairer}: \begin{equation} \epsilon=\epsilon_5 \frac{\epsilon_5}{\sqrt{\epsilon_5^2+0.01\epsilon_3^2}} \, . \end{equation} \subsection{Parallel iterated Runge--Kutta method (PIRK10)}\label{sec:pirk} Let us consider an $s$-stage Runge--Kutta method given by the coefficients $$A=(a_{ij})_{_{i,j=1}}^{^s}, \qquad B^{^T}=(b_{_1},\ldots,b_s), \qquad C=(c_{_1},\ldots,c_s)^{^T}$$ and let $y_1$ be defined as: \begin{flalign} k_{i}^{^{(0)}}&=f(x_{_0},y_{_0}) \nonumber\\ k_{i}^{^{(\ell)}}&=f(x_{_0}+c_{i}h,y_{_0}+ h\displaystyle\sum\limits_{j=1}^{s} a_{{ij}}k_{j}^{^{(\ell-1)}}) \ \ \ \ \ \ell=1,\ldots,m\label{PIRK}\\ y_1&=y_0+h\displaystyle\sum\limits_{i=1}^{s} b_{i}k_{i}^{^{(m)}} \nonumber \end{flalign} Here $m$ is the number of iterations used to estimate $k_{i}$. As it is shown in \cite{Houwen}, provided that $s$ processors are available, this scheme represents an explicit Runge--Kutta method. Furthermore, since each $k_{i}^{^{(\ell_{_0})}}$ can be computed in parallel, we have the following theorem. \begin{thm} Let $\{A,B^T,C\}$ define an s-stage Runge--Kutta method of order $p_0$. Then the method defined by \eqref{PIRK} represents an $(m+1)$-stage explicit Runge--Kutta method of order $p$, where $$p=\min\{p_0,m+1\}.$$ \label{thm:thm} \end{thm} One of the advantages of this method is that if we set $m=p_0-1$, then the order of the method is equal to the number of stages, which results in fewer right--hand side evaluations (sequentially). In general, the number of stages of explicit Runge--Kutta methods is greater than the order of the method; therefore, if an explicit method is used, the required number of processors is greater as well. Along the lines in Ref.\cite{Houwen} and with the Butcher coefficients in table \ref{table:RK10} in the Appendix, we implemented a parallel iterated Runge--Kutta method of order $10$. Here, $y_n$ and $\bar{y}_n$ in equation (\ref{eq:error}) correspond to the results obtained using different numbers of iterations for approximating $k_{i}$, \begin{equation} y_{_{n+1}}=y_{_n}+h\displaystyle\sum\limits_{i=1}^{s} b_{i}k_{i}^{^{(m)}} \end{equation} and \begin{equation} \bar{y}_{_{n+1}}=y_{_n}+h\displaystyle\sum\limits_{i=1}^{s} b_{i}k_{i}^{^{(m-1)}} \, . \end{equation} \section{Initial value problems} \label{sec:systems} Next, we list and briefly describe the systems of ODEs that we have used to test the above numerical methods. 
\subsection{Simple harmonic oscillator (HO)} \label{ssec:HO} As a first initial value problem we chose: \begin{equation} \begin{cases} \dot{y}_{1}=y_2 &\qquad y_{1}(0)=0,\nonumber\\ \dot{y}_{2}=-y_1 &\qquad y_2(0)=1\text{.} \label{eq:ho} \end{cases} \end{equation} Since this system is readily integrable, we used it to assess the quality of the numerical results by comparing with the analytical ones. \subsection{H\'enon-Heiles system (HH)} \label{ssec:HH} This is a Hamiltonian system which describes the nonlinear dynamics of a star around a galactic center when the motion is constrained to a plane \cite{hh}: \begin{equation} \begin{cases} \dot{y}_{_1}&=y_{_{2}}\nonumber\\ \dot{y}_{_2}&=-y_{_1}-2y_1y_3\\ \dot{y}_{_3}&=y_{_4}\nonumber\\ \dot{y}_{_4}&=-y_{_3}-y^2_{_1}+y^2_{_3} \label{eq:HH} \end{cases} \end{equation} Since the Hamiltonian $H$ is a constant of motion, it can be used to assess the precision of the numerical solution. We chose initial conditions such that $H=1/6$, yielding a chaotic solution. \subsection{Replicated H\'enon-Heiles system (HH100)} \label{ssec:HH100} To force the integrators to work a little bit harder we constructed a new system by replicating the H\'enon-Heiles system $100$ times, resulting in a nonlinear system with $400$ equations: \begin{equation} \begin{cases} \dot{y}_{_{4i+1}}&=y_{_{4i+2}}\nonumber\\ \dot{y}_{_{4i+2}}&=-y_{_{4i+1}}-2y_{_{4i+1}}y_{_{4i+3}}\\ \dot{y}_{_{4i+3}}&=y_{_{4i+4}}\nonumber\\ \dot{y}_{_{4i+4}}&=-y_{_{4i+3}}-y^2_{_{4i+1}}+y^2_{_{4i+3}} \, , \end{cases} \label{eq:HH100} \end{equation} with $i=0,1,\cdots,99$. \subsection{Gravitational collapse in AdS (GC40) and (GC10)} \label{ssec:ads} We also tested the methods by solving the system obtained from the Einstein field equations for the gravitational collapse of a scalar field in anti de Sitter spacetime \cite{deOliveira:2012ac}. Using the Galerkin method \cite{Boyd} the 10 coupled hyperbolic-elliptic nonlinear partial differential equations were converted to a set of $40$ nonlinear ordinary differential equations. The corresponding solutions were shown to be chaotic too \cite{deOliveira:2012ac}. Finally, the last system we used was obtained by reducing the previous one to ten equations \footnote{Any of the equations of these last two systems fills several pages. The systems in C code are available from the authors.}. \section{Test results} \label{sec:tresults} To test the methods we ask for the numerical solution of the corresponding problem starting from $t_0$ and up to a given $t_{end}$, such that the straightforward integration with step $h_0=t_{end}-t_0$ yields a result with an error above the desired tolerance. This implies that, typically, a number of intermediate integrations will be required. Table \ref{table:smalltime} shows the order of the runtime in seconds taken for solving the HO and HH problems in the time interval $0\leq t \leq 2000$ using RK4, DOP853 and PIRK10. In the methods with an adaptive stepsize algorithm we have used a tolerance of $10^{-15}$, which corresponded to using a step $h=0.01$ in the RK4. \begin{table}[ht] \begin{center} $$\begin{array}{c|c|c} \T & HO & HH\\[.2cm]\hline DOP853\T & 10^{-2} & 10^{-2}\\[.2cm] PIRK10 & 10^{-1} & 10^{-1}\\[.2cm] RK4 & 10^{-1} & 10^{-1}\\[.2cm] \hline \end{array}$$ \caption{Order of the runtime for the HO and HH problems.} \label{table:smalltime} \end{center} \end{table} In all the following tests the PIRK10 used its optimal number of 5 processors. 
As we can see, very similar results were obtained with the three methods, and even though DOP853 seems to be faster, the differences are very small. Nevertheless, the serial methods can be considered to be better than PIRK10 because they are easier to implement and require significantly fewer computational resources for execution. Searching for a bigger runtime difference, we tested the HH100 problem keeping the same tolerance for DOP853 and PIRK10, but now in the time interval $0\leq t \leq 5000$. This implied using $h=0.001$ in the RK4. In this case the RK4 and PIRK10 recorded a runtime of $\sim206$ seconds and $\sim75$ seconds respectively, both greater than the $\sim 11$ seconds obtained with DOP853. At this point we recall that, according to theorem \ref{thm:thm}, by using 5 processors the PIRK10 method at each timestep does 9 evaluations of the right--hand side of the corresponding problem. This is to be contrasted with the at least 11 evaluations done at each timestep by the DOP853. Therefore, since according to the above results the serial method outperforms the parallel one, we conjecture that this is due to a parallel overhead problem, i.e., the amount of time required to coordinate parallel tasks is larger than the time required for evaluating the system right--hand side. To verify this conjecture we tested the methods with the huge system of problem GC40. In this case we integrated the system over the small time interval $0\leq t \leq 0.1$, with a tolerance of $10^{-6}$, which corresponded to using a step $h=0.0001$ in the RK4. The results are presented in table \ref{table:colapso1}. \begin{table}[h!] \begin{center} \begin{tabular}{c c} \hline \T& $Time$ \\[.2cm] \hline DOP853 \T& $> 6$ days \\[.25cm] PIRK10 & $\approx 6$ hrs \\[.25cm] RK4& $\approx 2$ days \\[.25cm] \hline \end{tabular} \caption{Gravitational collapse runtime.} \label{table:colapso1} \end{center} \end{table} We can observe that the performance of PIRK10 was way better than DOP853 and RK4, with DOP853 unable to solve the system even after six days. \section{Adaptive stepsize parallel algorithm (ASPA)}\label{sec:aspa} Since parallelizing the integrator does not seem to be helpful, we opted for a different approach, that is, to parallelize the choice of an optimal integration step. Let us consider an embedded Runge--Kutta method, which allows us to estimate the local error $\epsilon$. Given an initial step $h_0$ and a tolerance $Tol$, for integrating from $t_0$ to $t_{end}$ with $N_{CPU}$ processors, the next step is determined as follows: \begin{enumerate} \item Each processor $P_i$, with $i=1,\ldots,N_{CPU}$, integrates the system from $t_n$ to $t_n+ih_n$ and estimates the local error $\epsilon_i$. \item $m=\max\left(\{i\ |\ \epsilon_i\leq Tol\}\cup\{0\}\right)$. \item $ h_{n+1}=\Big(\frac{2N_{CPU}-1}{N_{CPU}+1}m+\frac{N_{CPU}}{2N_{CPU}-1}\Big) \frac {h_n}{N_{CPU}+1}$. \item $t_{n+1}=t_n+mh_n$. \item All the above steps are repeated while $t_{n+1}<t_{end}$. \end{enumerate} A schematic C implementation of one iteration of this procedure is sketched at the end of this discussion. Figure \ref{fig:aspa} is an illustration of how the stepsize could change with respect to $h_0$, depending on the number $m$ of processors yielding an acceptable result. \begin{figure}[h] \centering \mbox{{\includegraphics[width=13cm, height=6cm]{DoPmodificado.eps}}} \caption{An illustration of how the adaptive stepsize parallel algorithm could work with $N_{CPU}=6$ processors.} \label{fig:aspa} \end{figure} The interval of $N_{CPU}$ black vertical bars is the amount of time probed by the integration using the initial step $h_0$. In our computations $h_0$ is taken to be the total length of the integration interval divided by the number of processors. A green horizontal line below a processor label indicates a successful integration; otherwise, a red line is used. 
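A minimal C/OpenMP sketch of one iteration of the above procedure is the following. It is meant only to illustrate the step-selection logic of steps 1--4; the routine \texttt{step\_with\_error} stands for any embedded integrator that advances the solution over a prescribed interval and returns a local error estimate, and all names and details below are illustrative rather than taken from the actual code used for the experiments.
\begin{verbatim}
/* Hypothetical embedded integrator: advances y (of size n) from t over an
   interval of length H, stores the result in yout and returns an estimate
   of the local error.                                                     */
double step_with_error(double t, double H, const double *y,
                       double *yout, int n);

/* One ASPA iteration: returns m and updates *t, *h and y.
   The scratch array work must hold ncpu*n doubles.          */
int aspa_iteration(double *t, double *h, double *y, int n,
                   double tol, int ncpu, double *work)
{
    int i, m = 0;

    /* step 1: processor i probes the interval of length i*h, i = 1..ncpu */
    #pragma omp parallel for num_threads(ncpu) reduction(max:m)
    for (i = 1; i <= ncpu; i++) {
        double err = step_with_error(*t, i * (*h), y, work + (i - 1) * n, n);
        if (err <= tol && i > m) m = i;       /* step 2: largest success  */
    }
    if (m > 0) {                              /* accept the largest probe */
        for (i = 0; i < n; i++) y[i] = work[(m - 1) * n + i];
        *t += m * (*h);                       /* step 4                   */
    }
    /* step 3: stepsize recurrence */
    *h = ((2.0 * ncpu - 1.0) / ((ncpu + 1.0) * (ncpu + 1.0)) * m
          + ncpu / ((2.0 * ncpu - 1.0) * (ncpu + 1.0))) * (*h);
    return m;
}
\end{verbatim}
Step 5 then simply amounts to calling this routine repeatedly until $t_{end}$ is reached.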
\subsection{Designing the stepsize recurrence}
In a given iteration we define success as obtaining an integration result with a local error below the user-defined tolerance. The aim is to maximize the probability of success in each iteration, i.e., to determine the step $h_n$ such that the largest possible number of successful processors $m$, out of the total number of available CPUs ($N_{CPU}$), is obtained. We define $h_n$ as a function of $m$, keeping $N_{CPU}$ constant. So, if with a given integration step less than half of the processors are successful, then the next integration step needs to be smaller ($h_{n+1} < h_{n}$); otherwise, we increase the integration step. This way, each integration becomes more efficient, both in runtime and in precision. This idea is summarized by the following expression:
\begin{align*} h_{n+1}&\approx \left(\frac{2}{N_{CPU}}m+\epsilon\right)h_n,\quad \text{for some } \epsilon>0,\\ &= \begin{cases} \left[(1+\frac{2}{N_{CPU}} k) + \epsilon\right]h_n & \text{ if } m = N_{CPU}/2 + k,\\ \left[(1-\frac{2}{N_{CPU}} k)+\epsilon\right]h_n & \text{ if } m = N_{CPU}/2 - k,\\ \end{cases}\\ \end{align*}
for some integer $k\in[0,N_{CPU}/2]$. We need $\epsilon$ to keep $h_{n+1}$ positive even when $m=0$, and it has to be less than one so that the step always decreases in this particular case. A reasonable proposal is then, when $m=0$, to let the processors jointly probe only half of the failed step $h_n$, i.e., to take $\epsilon=1/(2N_{CPU})$:
\begin{equation} h_{n+1}\approx \frac{2}{N_{CPU}}mh_n+\frac{1}{2N_{CPU}}h_n\, . \nonumber \end{equation}
On the other hand, if $m$ is large enough, the integration step nearly doubles. If this happens in several consecutive iterations, for typical initial value problems there is a high probability that the next $m$ will be very small, making poor use of the available CPUs. To avoid this, we finally propose the following recurrence:
\begin{equation} h_{n+1}=\Bigg[\frac{2N_{CPU}-1}{(N_{CPU}+1)^2}m + \frac{N_{CPU}}{(2N_{CPU}-1)(N_{CPU}+1)}\Bigg] h_n \, . \label{eq:hn} \end{equation}
Since $m$ is a function of $h_{n}$, this is a first-order nonlinear map. It yields $h_{n+1} > h_n$ if
\[ m>\dfrac{(N_{CPU}+1)(2N_{CPU}^2-1)}{(2N_{CPU}-1)^2} \, , \]
and $h_{n+1} < h_n$ otherwise; since this threshold lies just above $\frac{N_{CPU}}{2}+1$, the step grows only when more than about half of the processors succeed. Moreover, $m=0$ implies $1/2>h_{n+1}/h_{n}>0$, and when $m=N_{CPU}$, then $2>h_{n+1}/h_{n}>3/4$. In consequence, this expression has the desired properties; a large integration step will ultimately lead to a low $m$ that, in turn, will decrease the stepsize and, then, increase $m$. This way, we expect $m$ to converge to the optimal value $N_{CPU}/2$. Nevertheless, it is not desirable for the stepsize to lock onto a value that is insensitive to the given integration interval. Thus, the map $h_{n+1}(h_n)$ was also designed not to have fixed points. Note that, since $m\leq N_{CPU}$, a fixed point would require $N_{CPU}>2$; i.e., there are no fixed points when using fewer than 3 CPUs. For the remaining cases, $h_{n+1} = h_n$ gives us the condition for the map to have a fixed point:
\[ m=\dfrac{(N_{CPU}+1)(2N_{CPU}^2-1)}{(2N_{CPU}-1)^2}\, . \]
Let us prove that, although $m\in\mathbb{Z}$, the right-hand side of the above expression is never an integer.
Suppose there is a $d\in\mathbb{Z}$ such that $d|(2N_{CPU}^2-1)$ and $d|(2N_{CPU}-1)$ \footnote{Here $d|f$ stands for $d$ divides $f$.}. Since $2N_{CPU}^2-1 = (N_{CPU}+1)(2N_{CPU}-1)-N_{CPU}$, then $d|N_{CPU}$. So, since $d|(2N_{CPU}-1)$ and $d|N_{CPU}$, we get $d|(N_{CPU}-1)$. Therefore, considering that $d|N_{CPU}$, $d|(N_{CPU}-1)$ and $\gcd(N_{CPU},N_{CPU}-1)=1$, we conclude that $d=1$. In turn this implies $\gcd(2N_{CPU}^2-1,2N_{CPU}-1)=1$ and, since $2N_{CPU}-1>1$, $(2N_{CPU}^2-1)/(2N_{CPU}-1)^2$ is not an integer. Thus, $(N_{CPU}+1)(2N_{CPU}^2-1)/(2N_{CPU}-1)^2$ is an integer if and only if $(2N_{CPU}-1)^2|(N_{CPU}+1)$. But, recalling that $N_{CPU}>2$, we have $(N_{CPU}+1)/(2N_{CPU}-1)^2<1$, so this divisibility is impossible. Hence the right-hand side of the fixed-point condition is never an integer and, since by definition $m\in\mathbb Z$, the condition can never be met. Therefore $\{h_n\}$ has no fixed points.
Finally, note that as $N_{CPU}$ increases, $h_0$ becomes smaller, implying a large number of integration steps at the beginning of the process. However, after some time, since there are no fixed points, $h_n$ should show a bounded oscillatory behaviour around the optimal stepsize. This suggests that the proposed recurrence has an attractor, i.e., asymptotically, the integration process settles down around an optimal stepsize independently of its initial value. Indeed, this can be seen in figure \ref{fig:hnbeh}, where we show some numerical realizations of $h_n(n)$ for different initial value problems and numbers of CPUs.
\begin{figure} \centering \mbox{\subfigure{\includegraphics[width=3in, height=2in]{h48_10in.eps}}\quad \subfigure{\includegraphics[width=3in, height=2in]{h48_15in.eps}} } \mbox{\subfigure{\includegraphics[width=3in, height=2in]{t6_12in.eps}}\quad \subfigure{\includegraphics[width=3in, height=2in]{t6_14in.eps}} } \mbox{\subfigure{\includegraphics[width=3in, height=2in]{gc40_12in.eps}}\quad \subfigure{\includegraphics[width=3in, height=2in]{gc40_17in.eps}} } \caption{Oscillatory behaviour of $h_n$ vs $n$. Top: HH system for $N_{CPU}=10$ and $N_{CPU}=15$. Middle: HH100 system for $N_{CPU}=12$ and $N_{CPU}=14$. Bottom: Collapse (GC40) for 12 and 17 processors and $t=0.1$.} \label{fig:hnbeh} \end{figure}
\subsection{Testing ASPA}
We tested the algorithm described above by coupling it to a version of the serial DOP853. We then compared the performances of the serial DOP853 and the DOP853 with ASPA (DOP853-ASPA). With this aim we calculated the difference in the number of stepsize corrections and the difference in runtime required to reach $t=t_{end}$ as a function of the tolerance for a fixed number $N_{CPU}$ of processors. We also calculated the same differences as a function of the number of processors with the tolerance fixed to $10^{-15}$. The actual values of the runtime for each case are given in the corresponding tables in appendix \ref{sec:runtimes}. In figure \ref{fig:ASPAHH} the results for the HH problem are presented.
\begin{figure} \centering \mbox{\subfigure{\includegraphics[width=3in, height=3in]{hhTolSteps.eps}}\quad \subfigure{\includegraphics[width=3in, height=3in]{hhTolTime.eps} }} \mbox{\subfigure{\includegraphics[width=3in, height=3in]{hhCpuSteps.eps}}\quad \subfigure{\includegraphics[width=3in, height=3in]{hhCpuTime.eps} }} \caption{DOP853 minus DOP853-ASPA for HH. Left top: stepsize corrections vs tolerance. Left bottom: stepsize corrections vs number of processors. Right top: runtime vs tolerance. Right bottom: runtime vs number of processors.} \label{fig:ASPAHH} \end{figure}
Here $t_{end}=5000$ and $N_{CPU}=10$.
In the two top panels we can see that, even if the DOP853 requires more stepsize corrections to reach the required tolerance, it does so in relatively less runtime. From the two bottom panels we draw the unexpected conclusion that the runtimes are comparable only when the number of stepsize corrections required by DOP853-ASPA is significantly larger than that required by DOP853. This happens when using five or fewer processors. Moreover, notice that in the bottom panels the differences are all calculated with respect to the fixed number obtained with DOP853 (where $N_{CPU}=1$). This means that, as expected, the number of stepsize corrections in DOP853-ASPA decreases as $N_{CPU}$ increases; nevertheless, the corresponding runtime increases. All these observations hint that, when more processors are used, the parallel overhead at each iteration is more important than the time required for integration. To determine whether this is the case, we tested the HH100 problem. The results are presented in figure \ref{fig:ASPAHH100}.
\begin{figure} \centering \mbox{\subfigure{\includegraphics[width=3in, height=3in]{hh100TolSteps.eps}}\quad \subfigure{\includegraphics[width=3in, height=3in]{hh100TolTime.eps}} } \mbox{\subfigure{\includegraphics[width=3in, height=3in]{hh100CpuSteps.eps}}\quad \subfigure{\includegraphics[width=3in, height=3in]{hh100CpuTime.eps} } } \caption{DOP853 minus DOP853-ASPA for HH100. Left top: stepsize corrections vs tolerance. Left bottom: stepsize corrections vs number of processors. Right top: runtime vs tolerance. Right bottom: runtime vs number of processors.} \label{fig:ASPAHH100} \end{figure}
Here $t_{end}=5000$ and $N_{CPU}=10$. As in the case of HH, here DOP853 requires more stepsize corrections than DOP853-ASPA; nevertheless, for low tolerances the parallel algorithm performs slightly better than the serial one. This could be due to the fact that for the HH100 problem the amount of time used for the evaluation of the RHS is comparable to the parallel overhead and that, for tolerances greater than $10^{-7}$, $N_{CPU}=10$ processors are good enough to probe the whole time interval up to $t_{end}=5000$ in very few stages. To further increase the weight of the RHS evaluations in the runtime, we tested the gravitational collapse problem, this time in its reduced version GC10, since DOP853 was able to integrate it in a reasonable runtime. In figure \ref{fig:ASPAGC10} we present the results of the comparison.
\begin{figure} \centering \mbox{\subfigure{\includegraphics[width=3in, height=3in]{GC10TolSteps.eps}}\quad \subfigure{\includegraphics[width=3in, height=3in]{GC10TolTime.eps}} } \mbox{\subfigure{\includegraphics[width=3in, height=3in]{GC10CpuSteps.eps}}\quad \subfigure{\includegraphics[width=3in, height=3in]{GC10CpuTime.eps}} } \caption{DOP853 minus DOP853-ASPA for GC10. Left top: stepsize corrections vs tolerance. Left bottom: stepsize corrections vs number of processors. Right top: runtime vs tolerance. Right bottom: runtime vs number of processors.} \label{fig:ASPAGC10} \end{figure}
For these calculations we used $t_{end}=20$ and $N_{CPU}=20$. Now it is clear that the parallel algorithm typically performs better than the serial one. From the top two panels we observe that increasing the tolerance induces a steady increase of the difference in required stepsize corrections, which straightforwardly leads to a larger difference in runtime. The results in the bottom panels show that the DOP853-ASPA is efficient when $N_{CPU}>5$.
Recalling that in the bottom panels the differences are all calculated with respect to the fixed number obtained with DOP853, we see now that increasing $N_{CPU}$ leads to fewer stepsize corrections required by the DOP853-ASPA, but now this also corresponds to less runtime. All the above suggests that, indeed, for the GC10 system, the parallel overhead problem is solved. To end the comparisons, note in table \ref{table:cpuvstime} that for this last system, DOP853-ASPA with 5 processors took a little more than 7 minutes, while we verified that PIRK10 took about an hour to solve the same problem. On the other hand, we also checked that for GC40, DOP853-ASPA took about an hour to reach $t_{end}=0.1$ with a tolerance of $10^{-6}$, while, as mentioned in section \ref{sec:tresults}, PIRK10 needed about six times longer (see table \ref{table:colapso1}).
\section{Conclusions}\label{sec:concl}
We tested a parallel iterated Runge--Kutta method of order 10 (PIRK10) and an adaptive stepsize parallel algorithm (ASPA), introduced in this paper, which was coupled to a Dormand--Prince method of order 8 (DOP853-ASPA). The results presented in this paper show that when the initial value problem to solve has a simple-to-evaluate right--hand side (as is the case for most common dynamical systems), even in the best-case scenarios for the parallel methods, their performances were only comparable to the corresponding performance of a serial Dormand--Prince method of order 8 (DOP853). Therefore, taking into account code and algorithmic efficiencies, parallel integration does not seem to be good practice. This negative result seems to be due to a parallel overhead problem, i.e., the amount of time required to coordinate parallel tasks is larger than the time required for evaluating the system right--hand side. We verified that for very complex initial value problems or low tolerances the parallel methods can outperform DOP853. For instance, such systems arise when using Galerkin projection to solve systems of partial differential equations or when simulating multi--agent systems. In these cases, it seems to be more efficient to parallelize the search for an optimal integration stepsize than to parallelize the integration scheme. Indeed, our method, DOP853-ASPA, consistently outperformed PIRK10 by almost an order of magnitude in runtime. Moreover, even in some cases where DOP853 did a better job than PIRK10, our method was able to solve the corresponding initial value problem in less time than both of these methods. A nice feature of ASPA is that it does not rely on a given core integrator; it can be coupled to any method with a scheme to estimate the local integration error. It can even be coupled to another parallel method more efficient than the one tested here.
\section*{Acknowledgments}
This research was supported by the Sistema Nacional de Investigadores (M\'exico). The work of CAT-E was also partially funded by FRABA-UCOL-14-2013 (M\'exico).
{ "timestamp": "2016-01-12T02:08:26", "yymm": "1601", "arxiv_id": "1601.02245", "language": "en", "url": "https://arxiv.org/abs/1601.02245", "abstract": "In this paper the performance of a parallel iterated Runge-Kutta method is compared versus those of the serial fouth order Runge-Kutta and Dormand-Prince methods. It was found that, typically, the runtime for the parallel method is comparable to that of the serial versions, thought it uses considerably more computational resources. A new algorithm is proposed where full parallelization is used to estimate the best stepsize for integration. It is shown that this new method outperforms the others, notably, in the integration of very large systems.", "subjects": "Numerical Analysis (math.NA); Distributed, Parallel, and Cluster Computing (cs.DC)", "title": "On parallel solution of ordinary differential equations", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.974042642048694, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.7090791238504838 }
https://arxiv.org/abs/1808.10038
Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds
In recent years, unfolding iterative algorithms as neural networks has become an empirical success in solving sparse recovery problems. However, its theoretical understanding is still immature, which prevents us from fully utilizing the power of neural networks. In this work, we study unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery. We introduce a weight structure that is necessary for asymptotic convergence to the true sparse signal. With this structure, unfolded ISTA can attain a linear convergence, which is better than the sublinear convergence of ISTA/FISTA in general cases. Furthermore, we propose to incorporate thresholding in the network to perform support selection, which is easy to implement and able to boost the convergence rate both theoretically and empirically. Extensive simulations, including sparse vector recovery and a compressive sensing experiment on real image data, corroborate our theoretical results and demonstrate their practical usefulness. We have made our codes publicly available: this https URL.
\section{Introduction} \vspace{-0.5em} This paper aims to recover a sparse vector $x^\ast$ from its noisy linear measurements: \begin{equation} \label{eq:linear_model} b = A x^\ast + \varepsilon, \end{equation} where $ b\in\mathbb{R}^m $, $ x\in\mathbb{R}^n $, $ A\in\mathbb{R}^{m \times n} $, $ \varepsilon\in\mathbb{R}^m $ is additive Gaussian white noise, and we have $ m \ll n $. (\ref{eq:linear_model}) is an ill-posed, highly under-determined system. However, it becomes easier to solve if $x^\ast$ is assumed to be sparse, i.e. the cardinality of support of $x^\ast$, $S=\{i|x^\ast_i \neq 0\}$, is small compared to $n$. A popular approach is to model the problem as the LASSO formulation ($\lambda$ is a scalar): \begin{equation} \minimize_{x} \frac{1}{2}\|b-Ax\|_2^2 + \lambda\|x\|_1 \label{eq:lasso} \end{equation} and solve it using iterative algorithms such as the iterative shrinkage thresholding algorithm (ISTA) \cite{blumensath2008iterative}: \begin{equation} x^{k+1} = \eta_{\lambda/L}\Big(x^k + \frac{1}{L}A^T(b-Ax^k)\Big),\quad k = 0, 1, 2, \ldots \label{eq:ista} \end{equation} where $\eta_{\theta}$ is the soft-thresholding function\footnote{Soft-thresholding function is defined in a component-wise way: $\eta_{\theta}(x) = \text{sign}(x)\max(0,|x|-\theta)$} and $L$ is usually taken as the largest eigenvalue of $A^TA$. In general, ISTA converges sublinearly for any given and fixed dictionary $A$ and sparse code $x^\ast$~\cite{beck2009fast}. In \cite{gregor2010learning}, inspired by ISTA, the authors proposed a learning-based model named Learned ISTA (LISTA). They view ISTA as a recurrent neural network (RNN) that is illustrated in Figure~\ref{fig:rnn}, where $ W_1 = \frac{1}{L}A^T$, $W_2 = I - \frac{1}{L}A^T A$, $\theta =\frac{1}{L}\lambda $. LISTA, illustrated in Figure~\ref{fig:lista}, unrolls the RNN and truncates it into $K$ iterations: \begin{equation} x^{k+1} = \eta_{\theta^k}(W^k_1 b + W^k_2 x^k), \quad k = 0,1,\cdots,K-1, \label{eq:gen_ista} \end{equation} leading to a $K$-layer feed-forward neural network with side connections. Different from ISTA where no parameter is learnable (except the hyper parameter $\lambda$ to be tuned), LISTA is treated as a specially structured neural network and trained using stochastic gradient descent (SGD), over a given training dataset $\{(x^\ast_i,b_i)\}_{i=1}^N$ sampled from some distribution $\mathcal{P}(x,b)$. All the parameters $\Theta = \{(W^k_1,W^k_2,\theta^k)\}_{k=0}^{K-1}$ are subject to learning. The training is modeled as: \begin{equation} \label{eq:train} \minimize_{\Theta} \mathbb{E}_{x^\ast,b}\Big\|x^K\Big( \Theta, b, x^0 \Big) - x^\ast \Big\|_2^2. \end{equation} Many empirical results, e.g., \cite{gregor2010learning,wang2016learning,wang2016d3,wang2016learningb,wang2016learningc}, show that a trained $K$-layer LISTA (with $K$ usually set to $10 \sim 20$) or its variants can generalize more than well to unseen samples $(x',b')$ from the same $\mathcal{P}(x,b)$ and recover $x'$ from $b'$ to the same accuracy within one or two order-of-magnitude fewer iterations than the original ISTA. Moreover, the accuracies of the outputs $\{x^k\}$ of the layers $k=1,..,K$ gradually improve. \begin{figure}[ht] \centering \begin{tabular}{cc} \hspace{-2mm} \subfigure[RNN structure of ISTA.]{ \includegraphics[width=0.30\linewidth]{figs/rnn.PNG} \label{fig:rnn} } & \hspace{-4mm} \subfigure[Unfolded learned ISTA Network. 
]{ \includegraphics[width=0.68\linewidth]{figs/lista.png} \label{fig:lista} } \end{tabular} \caption{Diagrams of ISTA and LISTA.} \vspace{-1em} \label{fig:rnn_lista} \end{figure}
\subsection{Related Works} \vspace{-0.5em}
Many recent works \cite{sprechmann2015learning,wang2016sparse,wang2016learning,zhangista,zhou2018sc2net} followed the idea of \cite{gregor2010learning} to construct feed-forward networks by unfolding and truncating iterative algorithms, as fast trainable regressors to approximate the solutions of sparse coding models. On the other hand, progress has been slow towards understanding the efficient approximation from a theoretical perspective. The most relevant works are discussed below.
\cite{moreau2017understanding} attempted to explain the mechanism of LISTA by re-factorizing the Gram matrix of the dictionary, which tries to nearly diagonalize the Gram matrix with a basis that produces a small perturbation of the $\ell_1$ ball. They re-parameterized LISTA into a new factorized architecture that achieved similar acceleration gain to LISTA. Using an ``indirect'' proof, \cite{moreau2017understanding} was able to show that LISTA can converge faster than ISTA, but still sublinearly. Lately, \cite{giryes2018tradeoffs} tried to relate LISTA to projected gradient descent (PGD) relying on inaccurate projections, where a trade-off between approximation error and convergence speed was made possible.
\cite{xin2016maximal} investigated the convergence property of a sibling architecture to LISTA, proposed in \cite{wang2016learning}, which was obtained by unfolding/truncating the iterative hard thresholding (IHT) algorithm instead of ISTA. The authors argued that they can use data to train a transformation of the dictionary that can improve its restricted isometry property (RIP) constant, when the original dictionary is highly correlated, causing IHT to fail easily. They moreover showed it to be beneficial to allow the weights to decouple across layers. However, the analysis in \cite{xin2016maximal} cannot be straightforwardly extended to ISTA, although IHT is linearly convergent \cite{blumensath2009iterative} under rather strong assumptions.
In \cite{borgerding2017amp}, a similar learning-based model inspired by another iterative algorithm for solving LASSO, approximate message passing (AMP), was studied. The idea was advanced in \cite{metzler2017learned} by substituting the AMP proximal operator (soft-thresholding) with a learnable Gaussian denoiser. The resulting model, called Learned Denoising AMP (L-DAMP), has theoretical guarantees under the asymptotic assumption named ``state evolution.'' While the assumption is common in analyzing AMP algorithms, the tool is not directly applicable to ISTA. Moreover, \cite{borgerding2017amp} shows L-DAMP is MMSE optimal, but there is no result on its convergence rate. Besides, we also note the empirical effort in \cite{borgerding2016onsager} that introduces an Onsager correction to LISTA to make it resemble AMP.
\vspace{-0.5em} \subsection{Motivations and Contributions} \vspace{-0.5em}
We attempt to answer the following questions, which are not fully addressed in the literature yet:
\begin{itemize} \vspace{-0.5em} \itemsep -1pt \item Rather than training LISTA as a conventional ``black-box'' network, can we benefit from exploiting certain dependencies among its parameters $\{(W_1^k,W_2^k,\theta^k)\}_{k=0}^{K-1}$ to simplify the network and improve the recovery results?
\item Obtained with sufficiently many training samples from the target distribution $\mathcal{P}(x,b)$, LISTA works very well. So, we wonder whether there is a theoretical guarantee to ensure that LISTA (\ref{eq:gen_ista}) converges \footnote{The convergence of ISTA/FISTA measures how fast the $k$-th iterate proceeds; the convergence of LISTA measures how fast the output of the $k$-th layer proceeds as $k$ increases.} faster and/or produces a better solution than ISTA (\ref{eq:ista}) when its parameters are ideal. If the answer is affirmative, can we quantify the amount of acceleration? \item Can some of the acceleration techniques, such as support detection, that were developed for LASSO also be used to improve LISTA? \vspace{-0.2em} \end{itemize}
\textbf{Our Contributions:} this paper aims to introduce more theoretical insights for LISTA and to further unleash its power. To the best of our knowledge, this is the first attempt to establish a theoretical convergence rate (upper bound) of LISTA directly. We also observe that the \textit{weight structure} and the \textit{thresholds} can speed up the convergence of LISTA:
\begin{itemize} \vspace{-0.5em} \itemsep -1pt \item We give a result on asymptotic coupling between the weight matrices $W_1^k$ and $W_2^k$. This result leads us to eliminate one of them, thus reducing the number of trainable parameters. This elimination still retains the theoretical and experimental performance of LISTA. \item ISTA is generally sublinearly convergent before its iterates settle on a support. We prove, however, that there exists a sequence of parameters that makes LISTA converge linearly from its first iteration. Our numerical experiments support this theoretical result. \item Furthermore, we introduce a thresholding scheme for \textit{support selection}, which is extremely simple to implement and significantly boosts the practical convergence. The linear convergence results are extended to support detection with an improved rate. \vspace{-0.5em} \end{itemize}
Detailed discussions of the above three points will follow after Theorems \ref{prop:necessary}, \ref{prop:no_ss} and \ref{prop:ss}, respectively. Our proofs do not rely on any indirect resemblance, e.g., to AMP \cite{borgerding2016onsager} or PGD \cite{giryes2018tradeoffs}. The theories are supported by extensive simulation experiments, and substantial performance improvements are observed when applying the weight coupling and support selection schemes. We also evaluated LISTA equipped with the proposed techniques in an image compressive sensing task, obtaining superior performance over several of the state-of-the-art methods.
\vspace{-0.5em} \section{Algorithm Description} \label{sec:algo} \vspace{-0.5em}
We first establish the necessary condition for LISTA convergence, which implies a partial weight coupling structure for training LISTA. We then describe the support-selection technique.
\vspace{-0.5em} \subsection{Necessary Condition for LISTA Convergence and Partial Weight Coupling} \label{sec:cp} \vspace{-0.5em}
\begin{assume}[Basic assumptions] \label{assume:basic} The signal $x^\ast$ and the observation noise $\varepsilon$ are sampled from the following set: \begin{equation} \label{eq:x_assume} (x^*,\varepsilon) \in \X(B,s,\sigma) \triangleq \Big\{(x^*,\varepsilon) \Big| |x^\ast_i | \leq B, \forall i, ~\|x^\ast\|_0 \leq s, \|\varepsilon\|_1 \leq \sigma \Big\}.
\end{equation} In other words, $x^\ast$ is bounded and $s$-sparse\footnote{A signal is $s$-sparse if it has no more than $s$ non-zero entries.} ($s \geq 2$), and $\varepsilon$ is bounded. \end{assume}
\begin{theorem}[Necessary Condition] \label{prop:necessary} Given $\{W^k_1,W^k_2,\theta^k\}_{k=0}^{\infty}$ and $x^0=0$, let $b$ be observed by (\ref{eq:linear_model}) and $\{x^k\}_{k=1}^{\infty}$ be generated layer-wise by LISTA~(\ref{eq:gen_ista}). If the following holds uniformly for any $(x^*,\varepsilon) \in \X(B,s,0)$ (no observation noise): \[x^k\Big( \{W_1^\tau, W_2^\tau, \theta^\tau \}_{\tau=0}^{k-1},b,x^0 \Big) \to x^*,\quad \text{as }k \to \infty\] and $\{W^k_2\}_{k=1}^{\infty}$ are bounded \[\|W^k_2\|_2\leq B_W,\quad \forall k = 0,1,2,\cdots, \] then $\{W^k_1,W^k_2,\theta^k\}_{k=0}^{\infty}$ must satisfy \begin{align} &W^k_2 -( I - W^k_1A)\to 0, \quad\text{as }k\to\infty \label{eq:couple_way} \\ &\theta^k\to 0,\quad\text{as }k\to\infty\label{eq:theta_to_0}. \end{align} \end{theorem}
Proofs of the results throughout this paper can be found in the supplementary material. The conclusion (\ref{eq:couple_way}) demonstrates that the weights $\{W^k_1,W^k_2\}_{k=0}^{\infty}$ in LISTA asymptotically satisfy the following partial weight coupling structure: \begin{equation} \label{eq:wcp} W^k_2 = I - W^k_1A. \end{equation} We adopt the above partial weight coupling for all layers, letting $W^k = (W^k_1)^T \in \Re^{m\times n}$, thus simplifying LISTA (\ref{eq:gen_ista}) to: \begin{equation} \label{eq:lista_cp} x^{k+1}=\eta_{\theta^k}\Big(x^k + (W^k)^\top (b - Ax^k)\Big),\quad k = 0,1,\cdots,K-1, \end{equation} where $\{W^k,\theta^k\}_{k=0}^{K-1}$ remain as free parameters to train. Empirical results in Fig. \ref{fig:coupleway} illustrate that the structure (\ref{eq:wcp}), though having fewer parameters, improves the performance of LISTA. The coupled structure (\ref{eq:wcp}) for soft-thresholding based algorithms was empirically studied in \cite{borgerding2017amp}. A similar structure was also theoretically studied in Proposition 1 of \cite{xin2016maximal} for IHT algorithms using fixed-point theory, but they let all layers share the same weights, i.e. $W^k_2 = W_2, W^k_1 = W_1,\forall k$.
\vspace{-1em} \subsection{LISTA with Support Selection} \vspace{-0.5em} \label{sec:ss}
We introduce a special thresholding scheme to LISTA, called \textit{support selection}, which is inspired by ``kicking'' \cite{osher2011fast} in linearized Bregman iteration. This technique shows advantages in recoverability and convergence. Its impact on improving the LISTA convergence rate and reducing recovery errors will be analyzed in Section \ref{sec:convergence}. With support selection, at each LISTA layer, \textit{before} applying soft thresholding, we select a certain percentage of the entries with the largest magnitudes, trust them as the ``true support,'' and do not pass them through thresholding. Those entries that do not go through thresholding are directly fed into the next layer, together with the other, thresholded entries. Assume we select $p^k\%$ of entries as the trusted support at layer $k$.
LISTA with support selection can be generally formulated as \vspace{-0.6em} \begin{equation} \label{eq:lista_ss0} x^{k+1} = {\eta_\mathrm{ss}}_{\theta^k}^{p^k} \Big(W^k_1 b + W^k_2 x^k\Big), \quad k = 0,1,\cdots,K-1, \end{equation} where ${\eta_{ss}}$ is the thresholding operator with support selection, formally defined as: \[ ({\eta_\mathrm{ss}}_{\theta^k}^{p^k}(v))_i = \left\{ \begin{array}{lll} v_i & : v_i > \theta^k,& i\in S^{p^k}(v), \\ v_i - \theta^k & : v_i > \theta^k,& i\notin S^{p^k}(v), \\ 0 & : -\theta^k \leq v_i \leq \theta^k &\\ v_i + \theta^k & : v_i < -\theta^k,& i\notin S^{p^k}(v), \\ v_i & : v_i < -\theta^k, &i\in S^{p^k}(v), \end{array} \right. \] where $S^{p^k}(v)$ includes the elements with the largest $p^k\%$ magnitudes in vector $v$: \vspace{-0.6em} \begin{equation} \label{eq:spk} S^{p^k}(v) = \Big\{i_1,i_2,\cdots,i_{p^k}\Big||v_{i_1}| \geq |v_{i_2}| \geq \cdots |v_{i_{p^k}}|\cdots \geq |v_{i_n}|\Big\}. \end{equation} To clarify, in (\ref{eq:lista_ss0}), $p^k$ is a hyperparameter to be manually tuned, and $\theta^k$ is a parameter to train. We use an empirical formula to select $p^k$ for layer $k$: $p^k = \min(p\cdot k, p_\mathrm{max})$, where $p$ is a positive constant and $p_\mathrm{max}$ is an upper bound of the percentage of the support cardinality. Here $p$ and $p_\mathrm{max}$ are both hyperparameters to be manually tuned. If we adopt the partial weight coupling in (\ref{eq:wcp}), then (\ref{eq:lista_ss0}) is modified as \begin{equation} \label{eq:lista_ss} x^{k+1} = {\eta_\mathrm{ss}}_{\theta^k}^{p^k} \Big(x^k + (W^k)^T (b - Ax^k)\Big),\quad k = 0,1,\cdots,K-1. \end{equation} \paragraph{Algorithm abbreviations} For simplicity, hereinafter we will use the abbreviation ``CP'' for the partial weight coupling in ~(\ref{eq:wcp}), and ``SS'' for the support selection technique. \textit{LISTA-CP} denotes the LISTA model with weights coupling (\ref{eq:lista_cp}). \textit{LISTA-SS} denotes the LISTA model with support selection (\ref{eq:lista_ss0}). Similarly, \textit{LISTA-CPSS} stands for a model using both techniques (\ref{eq:lista_ss}), which has the best performance. Unless otherwise specified, \textit{LISTA} refers to the baseline LISTA (\ref{eq:gen_ista}). \vspace{-1em} \section{Convergence Analysis} \label{sec:convergence} \vspace{-0.5em} In this section, we formally establish the impacts of (\ref{eq:lista_cp}) and (\ref{eq:lista_ss}) on LISTA's convergence. The output of the $k^{\text{th}}$ layer $x^k$ depends on the parameters $\{W^\tau,\theta^\tau\}_{\tau=0}^{k-1}$, the observed measurement $b$ and the initial point $x^0$. Strictly speaking, $x^k$ should be written as $x^k\Big( \{W^\tau, \theta^\tau \}_{\tau=0}^{k-1},b, x^0 \Big)$. By the observation model $b=Ax^*+\varepsilon$, since $A$ is given and $x^0$ can be taken as $0$, $x^k$ therefore depends on $\{(W^\tau,\theta^\tau)\}_{\tau=0}^k$, $x^*$ and $\varepsilon$. So, we can write $x^k\Big( \{W^\tau, \theta^\tau \}_{\tau=0}^{k-1},x^*,\varepsilon \Big)$. For simplicity, we instead just write $x^k(x^*,\varepsilon)$. \begin{theorem}[Convergence of LISTA-CP] \label{prop:no_ss} Given $\{W^k,\theta^k\}_{k=0}^{\infty}$ and $x^0=0$, let $\{x^k\}_{k=1}^{\infty}$ be generated by (\ref{eq:lista_cp}). 
If Assumption \ref{assume:basic} holds and $s$ is sufficiently small, then there exists a sequence of parameters $\{W^k,\theta^k\}$ such that, for all $(x^*,\varepsilon)\in \X(B,s,\sigma)$, we have the error bound: \begin{equation} \label{eq:linear_conv} \|x^k(x^*,\varepsilon)-x^\ast\|_2 \leq s B \exp(-ck) + C\sigma,\quad \forall k = 1,2,\cdots, \end{equation} where $c>0,C>0$ are constants that depend only on $A$ and $s$. Recall $s$ (sparsity of the signals) and $\sigma$ (noise-level) are defined in (\ref{eq:x_assume}). \end{theorem} If $\sigma=0$ (noiseless case), (\ref{eq:linear_conv}) reduces to \vspace{-1mm} \begin{equation} \label{eq:linear_noiseless} \|x^k(x^*,0)-x^\ast\|_2 \leq s B \exp(-ck). \end{equation} The recovery error converges to $0$ at a linear rate as the number of layers goes to infinity. Combined with Theorem \ref{prop:necessary}, we see that the partial weight coupling structure (\ref{eq:lista_cp}) is both necessary and sufficient to guarantee convergence in the noiseless case. Fig. \ref{fig:coupleway} validates (\ref{eq:linear_conv}) and (\ref{eq:linear_noiseless}) directly. \textbf{Discussion:} The bound (\ref{eq:linear_noiseless}) also explains why LISTA (or its variants) can converge faster than ISTA and fast ISTA (FISTA) \cite{beck2009fast}. With a proper $\lambda$ (see (\ref{eq:lasso})), ISTA converges at an $O(1/k)$ rate and FISTA converges at an $O(1/k^2)$ rate~\cite{beck2009fast}. With a large enough $\lambda$, ISTA achieves a linear rate \cite{bredies2008linear,zhang2017new}. With $\bar{x}(\lambda)$ being the solution of LASSO (noiseless case), these results can be summarized as: before the iterates $x^k$ settle on a support\footnote{After $x^k$ settles on a support, i.e. as $k$ large enough such that $\mathrm{support}(x^{k})$ is fixed, even with small $\lambda$, ISTA reduces to a linear iteration, which has a linear convergence rate \cite{tao2016local}. }, \vspace{-1mm} \[\begin{aligned} x^k \to \bar{x}(\lambda) \text{ sublinearly},&\quad \|\bar{x}(\lambda) - x^*\| = O(\lambda),\quad \lambda > 0\\ x^k \to \bar{x}(\lambda) \text{ linearly},&\quad \|\bar{x}(\lambda) - x^*\| = O(\lambda),\quad \text{$\lambda$ large enough}. \end{aligned}\] Based on the choice of $\lambda$ in LASSO, the above observation reflects an inherent trade-off between convergence rate and approximation accuracy in solving the problem (\ref{eq:linear_model}), see a similar conclusion in \cite{giryes2018tradeoffs}: a larger $\lambda$ leads to faster convergence but a less accurate solution, and vice versa. However, if $\lambda$ is not constant throughout all iterations/layers, but instead chosen adaptively for each step, more promising trade-off can arise\footnote{This point was studied in \cite{HaleYinZhang2008_sparse,xiao2013proximal} with classical compressive sensing settings, while our learning settings can learn a good path of parameters without a complicated thresholding rule or any manual tuning.}. LISTA and LISTA-CP, with the thresholds $\{\theta^k\}_{k=0}^{K-1}$ free to train, actually adopt this idea because $\{\theta^k\}_{k=0}^{K-1}$ corresponds to a path of LASSO parameters $\{\lambda^k\}_{k=0}^{K-1}$. With extra free trainable parameters, $\{W^k\}_{k=0}^{K-1}$ (LISTA-CP) or $\{W^k_1, W^k_2\}_{k=0}^{K-1}$ (LISTA), learning based algorithms are able to converge to an accurate solution at a fast convergence rate. Theorem \ref{prop:no_ss} demonstrates the existence of such sequence $\{W^k,\theta^k\}_k$ in LISTA-CP (\ref{eq:lista_cp}). The experiment results in Fig. 
\ref{fig:nmse_ista_lista} show that such $\{W^k,\theta^k\}_k$ can be obtained by training.
\begin{assume} \label{assume:basic2} Signal $x^\ast$ and observation noise $\varepsilon$ are sampled from the following set: \begin{equation} \label{eq:x_assume2} (x^*,\varepsilon) \in \bar{\X}(B,\underline{B}, s,\sigma) \triangleq \Big\{(x^*,\varepsilon) \Big| |x^\ast_i | \leq B, \forall i, ~\|x^\ast \|_1 \geq \underline{B} , \|x^\ast\|_0 \leq s, \|\varepsilon\|_1 \leq \sigma \Big\}. \end{equation} \end{assume}
\begin{theorem}[Convergence of LISTA-CPSS] \label{prop:ss} Given $\{W^k,\theta^k\}_{k=0}^{\infty}$ and $x^0=0$, let $\{x^k\}_{k=1}^{\infty}$ be generated by ~(\ref{eq:lista_ss}). With the same assumption and parameters as in Theorem \ref{prop:no_ss}, the approximation error can be bounded for all $(x^*,\varepsilon)\in \X(B,s,\sigma)$: \vspace{-2mm} \begin{equation} \label{eq:linear_ss} \|x^k(x^*,\varepsilon) -x^\ast\|_2 \leq s B \exp\Big(-\sum_{t=0}^{k-1}c_{\mathrm{ss}}^t\Big) + C_{\mathrm{ss}}\sigma, \quad \forall k = 1,2,\cdots, \end{equation} where $c_{\mathrm{ss}}^k\geq c$ for all $k$ and $C_{\mathrm{ss}} \leq C$. If Assumption \ref{assume:basic2} holds, $s$ is small enough, and $\underline{B} \geq 2C\sigma$ (SNR is not too small), then there exists another sequence of parameters $\{\tilde{W}^k,\tilde{\theta}^k\}$ that yields the following improved error bound: for all $(x^*,\varepsilon) \in \bar{\X}(B,\underline{B}, s,\sigma)$, \vspace{-2mm} \begin{equation} \label{eq:linear_ss2} \|x^k(x^*,\varepsilon) -x^\ast\|_2 \leq s B \exp\Big(-\sum_{t=0}^{k-1} \tilde{c}_{\mathrm{ss}}^t\Big) + \tilde{C}_{\mathrm{ss}}\sigma, \quad \forall k = 1,2,\cdots, \end{equation} where $\tilde{c}_{\mathrm{ss}}^k\geq c$ for all $k$, $\tilde{c}_{\mathrm{ss}}^k > c$ for large enough $k$, and $\tilde{C}_{\mathrm{ss}}< C$. \end{theorem}
The bound in (\ref{eq:linear_ss}) ensures that, with the same assumptions and parameters, LISTA-CPSS is \textit{at least no worse} than LISTA-CP. The bound in (\ref{eq:linear_ss2}) shows that, under stronger assumptions, LISTA-CPSS can be \textit{strictly better} than LISTA-CP in two respects: $\tilde{c}_{\mathrm{ss}}^k > c$ is the better convergence rate of LISTA-CPSS; $\tilde{C}_{\mathrm{ss}}< C$ means that LISTA-CPSS can achieve a smaller approximation error than the minimum error that LISTA can achieve.
\vspace{-1em} \section{Numerical Results} \vspace{-0.5em}
For all the models reported in this section, including the baseline LISTA and LAMP models, we adopt a stage-wise training strategy with learning rate decay to stabilize the training and to get better performance, which is discussed in the supplementary material.
\subsection{Simulation Experiments} \label{sec:simulation} \vspace{-0.5em}
\textbf{Experiments Setting.} We choose $m=250, n=500$. We sample the entries of $A$ i.i.d. from a Gaussian distribution, $A_{ij} \sim N(0, 1/m)$, and then normalize its columns to have unit $\ell_2$ norm. We fix a matrix $A$ in each setting where different networks are compared. To generate a sparse vector $x^*$, we decide whether each of its entries is non-zero following a Bernoulli distribution with $p_b=0.1$. The values of the non-zero entries are sampled from the standard Gaussian distribution. A test set of 1000 samples generated in the above manner is fixed for all tests in our simulations. All the networks have $K=16$ layers. In LISTA models with support selection, we add $p\%$ of entries to the trusted support in each layer and select at most $p_\mathrm{max}\%$ of entries. We manually tune the values of $p$ and $p_\mathrm{max}$ for the best final performance. With $p_b = 0.1$ and $K=16$, we choose $p=1.2$ for all models in simulation experiments and $p_\mathrm{max}=12$ for LISTA-SS but $p_\mathrm{max}=13$ for LISTA-CPSS.
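To make the support-selection operator (\ref{eq:lista_ss0}) and the coupled update (\ref{eq:lista_ss}) concrete, a minimal NumPy sketch of one LISTA-CPSS layer follows. It is illustrative only: the array shapes follow the setting above, but the function names, the rounding of the selected count, and the untrained choices $W^k = A/L$ and a constant $\theta^k$ are our own; in the actual model $\{W^k,\theta^k\}$ are learned as in (\ref{eq:train}).
\begin{verbatim}
# Illustrative NumPy sketch of one LISTA-CPSS layer (not the authors' code).
import numpy as np

def soft_threshold(v, theta):
    # Elementwise soft-thresholding eta_theta(v).
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def eta_ss(v, theta, pk):
    # Soft-thresholding with support selection: the pk% largest-magnitude
    # entries bypass shrinkage (they are only zeroed when |v_i| <= theta).
    out = soft_threshold(v, theta)
    k = int(round(pk / 100.0 * v.size))       # rounding rule is our choice
    if k > 0:
        sel = np.argsort(-np.abs(v))[:k]      # trusted support S^{p^k}(v)
        keep = sel[np.abs(v[sel]) > theta]
        out[keep] = v[keep]
    return out

def lista_cpss_layer(x, b, A, W, theta, pk):
    # One step of the coupled update: x <- eta_ss(x + W^T (b - A x)).
    return eta_ss(x + W.T @ (b - A @ x), theta, pk)

# Toy run with the dimensions used above: m = 250, n = 500, K = 16 layers.
rng = np.random.default_rng(0)
m, n = 250, 500
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
A /= np.linalg.norm(A, axis=0)                # unit-norm columns
x_true = rng.normal(size=n) * (rng.random(n) < 0.1)
b = A @ x_true
L = np.linalg.norm(A, 2) ** 2                 # largest eigenvalue of A^T A
x = np.zeros(n)
for k in range(16):
    pk = min(1.2 * k, 13.0)                   # p = 1.2, p_max = 13 (as above)
    x = lista_cpss_layer(x, b, A, W=A / L, theta=0.02, pk=pk)  # untrained stand-in
\end{verbatim}
With learned $\{W^k,\theta^k\}$, this same loop is exactly the LISTA-CPSS forward pass evaluated in the experiments below.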
The recovery performance is evaluated by NMSE (in dB): \[ \mathrm{NMSE}(\hat{x},x^\ast)= 10 \log_{10} \left(\frac{\mathbb{E}\|\hat{x}-x^\ast\|^2}{\mathbb{E}\|x^\ast\|^2}\right), \] where $x^\ast$ is the ground truth and $\hat{x}$ is the estimate obtained by the recovery algorithms (ISTA, FISTA, LISTA, etc.).
\textbf{Validation of Theorem \ref{prop:necessary}.} In Fig. \ref{fig:all_free}, we report two values, $\|W^k_2 - (I - W^k_1A)\|_2$ and $\theta^k$, obtained by the baseline LISTA model (\ref{eq:gen_ista}) trained under the noiseless setting. The plot clearly demonstrates that $W^k_2 \to I - W^k_1A$ and $\theta^k\to 0$, as $k\to\infty.$ Theorem \ref{prop:necessary} is directly validated.
\begin{figure}[ht] \centering \begin{tabular}{cc} \hspace{-7mm} \subfigure[Weight $W^k_2 \to I-W^k_1A$ as $k\to \infty$.]{ \includegraphics[width=0.45\linewidth]{figs/weights_cp.eps} } & \subfigure[The threshold $\theta^k\to 0$.]{ \includegraphics[width=0.45\linewidth]{figs/theta.eps} } \end{tabular} \caption{Validation of Theorem \ref{prop:necessary}.} \vspace{-1em} \label{fig:all_free} \end{figure}
\textbf{Validation of Theorem \ref{prop:no_ss}.} We report the test-set NMSE of LISTA-CP (\ref{eq:lista_cp}) in Fig. \ref{fig:coupleway}. Although (\ref{eq:lista_cp}) fixes the structure between $W_1^k$ and $W_2^k$, the final performance remains the same as that of the baseline LISTA (\ref{eq:gen_ista}), and outperforms AMP, in both the noiseless and noisy cases. Moreover, the outputs of the interior layers of LISTA-CP are even better than those of the baseline LISTA. In the noiseless case, NMSE converges exponentially to $0$; in the noisy case, NMSE converges to a stationary level related to the noise level. This supports Theorem \ref{prop:no_ss}: there indeed exists a sequence of parameters $\{(W^k,\theta^k)\}_{k=0}^{K-1}$ leading to linear convergence for LISTA-CP, and they can be obtained by data-driven learning.
\begin{figure}[ht] \centering \begin{tabular}{cc} \hspace{-7mm} \subfigure[$\ \mathrm{SNR}=\infty$]{ \includegraphics[width=0.45\linewidth]{figs/nmse_thm2_sinf.eps} } & \subfigure[$\ \mathrm{SNR}=30$]{ \includegraphics[width=0.45\linewidth]{figs/nmse_thm2_s30.eps} } \end{tabular} \caption{Validation of Theorem \ref{prop:no_ss}.} \label{fig:coupleway} \vspace{-0.5em} \end{figure}
\begin{wrapfigure}{rt}{0.65\textwidth} \centering \includegraphics[width=\linewidth]{figs/ista_lista_new.eps} \vspace{-1em} \caption{Validating Discussion after Theorem \ref{prop:no_ss} (SNR = $\infty$).} \vspace{-0.5em} \label{fig:nmse_ista_lista} \end{wrapfigure}
\textbf{Validation of Discussion after Theorem \ref{prop:no_ss}.} In Fig. \ref{fig:nmse_ista_lista}, we compare LISTA-CP and ISTA with different $\lambda$s (see the LASSO problem (\ref{eq:lasso})) as well as an adaptive threshold rule similar to the one in \cite{HaleYinZhang2008_sparse}, which is described in the supplementary material. As we have discussed after Theorem \ref{prop:no_ss}, LASSO has an inherent tradeoff based on the choice of $\lambda$. A smaller $\lambda$ leads to a more accurate solution but slower convergence. The adaptive thresholding rule fixes this issue: it uses a large $\lambda^k$ for small $k$, and gradually reduces it as $k$ increases to improve the accuracy~\cite{HaleYinZhang2008_sparse}.
Except for adaptive thresholds $\{\theta^k\}_k$ ($\theta^k$ corresponds to $\lambda^k$ in LASSO), LISTA-CP has adaptive weights $\{W^k\}_k$, which further greatly accelerate the convergence. Note that we only ran ISTA and FISTA for 16 iterations, just enough and fair to compare them with the learned models. The number of iterations is so small that the difference between ISTA and FISTA is not quite observable. \textbf{Validation of Theorem \ref{prop:ss}.} We compare the recovery NMSEs of LISTA-CP (\ref{eq:lista_cp}) and LISTA-CPSS (\ref{eq:lista_ss}) in Fig. \ref{fig:ss}. The result of the noiseless case (Fig. \ref{fig:nmse_sinf}) shows that the recovery error of LISTA-SS converges to $0$ at a faster rate than that of LISTA-CP. The difference is significant with the number of layers $k \geq 10$, which supports our theoretical result: ``$\tilde{c}_{\text{ss}}^k > c$ as $k$ large enough'' in Theorem \ref{prop:ss}. The result of the noisy case (Fig. \ref{fig:nmse_s40}) shows that LISTA-CPSS has better recovery error than LISTA-CP. This point supports $\tilde{C}_{\text{ss}}<C$ in Theorem \ref{prop:ss}. Notably, LISTA-CPSS also outperforms LAMP \cite{borgerding2017amp}, when $k$ > 10 in the noiseless case, and even earlier as SNR becomes lower. \begin{figure}[ht] \centering \begin{tabular}{cc} \vspace{-1em} \subfigure[Noiseless Case]{ \includegraphics[width=0.45\linewidth]{figs/nmse_sinf.eps} \label{fig:nmse_sinf} } & \subfigure[Noisy Case: $\textrm{SNR}$=40dB]{ \includegraphics[width=0.45\linewidth]{figs/nmse_s40.eps} \label{fig:nmse_s40} } \\ \subfigure[Noisy Case: $\textrm{SNR}$=30dB]{ \includegraphics[width=0.45\linewidth]{figs/nmse_s30.eps} \label{fig:nmse_s30} } & \subfigure[Noisy Case: $\textrm{SNR}$=20dB]{ \includegraphics[width=0.45\linewidth]{figs/nmse_s20.eps} \label{fig:nmse_s20} } \end{tabular} \caption{Validation of Theorem \ref{prop:ss}.} \vspace{-1em} \label{fig:ss} \end{figure} \textbf{Performance with Ill-Conditioned Matrix.} We train LISTA, LAMP, LISTA-CPSS with ill-conditioned matrices $A$ of condition numbers $\kappa=5,30,50$. As is shown in Fig. \ref{fig:ill}, as $\kappa$ increases, the performance of LISTA remains stable while LAMP becomes worse, and eventually inferior to LISTA when $\kappa=50$. Although our LISTA-CPSS also suffers from ill-conditioning, its performance always stays much better than LISTA and LAMP. \begin{figure}[ht] \centering \begin{tabular}{ccc} \hspace{-10mm} \subfigure[$\ \kappa=5$]{ \includegraphics[width=0.34\linewidth]{figs/nmse_k05.eps} \label{fig:nmse_k05} } & \hspace{-4mm} \subfigure[$\ \kappa=30$]{ \includegraphics[width=0.34\linewidth]{figs/nmse_k30.eps} \label{fig:nmse_k30} } & \hspace{-4mm} \subfigure[$\ \kappa=50$]{ \includegraphics[width=0.34\linewidth]{figs/nmse_k50.eps} \label{fig:nmse_k50} } \end{tabular} \caption{Performance in ill-conditioned situations (SNR = $\infty$).} \label{fig:ill} \vspace{-1.5em} \end{figure} \vspace{-0.5em} \subsection{Natural Image Compressive Sensing} \vspace{-0.5em} \textbf{Experiments Setting.} We perform a compressive sensing (CS) experiment on natural images (patches). We divide the BSD500~\cite{MartinFTM01} set into a training set of 400 images, a validation set of 50 images, and a test set of 50 images. For training, we extract 10,000 patches $f \in \mathbb{R}^{16\times 16}$ at random positions of each image, with all means removed. We then learn a dictionary $D \in \mathbb{R}^{256\times 512}$ from them, using a block proximal gradient method \cite{xu2013block}. 
For each testing image, we divide it into non-overlapping $16\times 16$ patches. A Gaussian sensing matrix $\Phi \in \mathbb{R}^{m \times 256}$ is created in the same manner as in Sec. 4.1, where $\frac{m}{256}$ is the CS ratio. Since $f$ is typically not exactly sparse under the dictionary $D$, Assumptions \ref{assume:basic} and \ref{assume:basic2} no longer strictly hold. The primary goal of this experiment is thus to show that our proposed techniques remain robust and practically useful in non-ideal conditions, rather than beating all CS state-of-the-arts.
\textbf{Network Extension.} In the real-data case, we have no ground-truth sparse code available as the regression target for the loss function (\ref{eq:train}). In order to bypass pre-computing sparse codes of $f$ over $D$ on the training set, we are inspired by \cite{zhou2018sc2net}: we first use layer-wise pre-training with a reconstruction loss w.r.t. the dictionary $D$ plus an $\ell_1$ loss, shown in (\ref{eq:extended_train}), where $k$ is the layer index and $\Theta^k$ denotes all parameters in the $k$-th and previous layers; we then append another learnable fully-connected layer (initialized by $D$) to LISTA-CPSS and perform end-to-end training with the cost function (\ref{eq:joint_train}).
\begin{align} L^k(\Theta^k)=&\sum_{i=1}^N \|f_i - D\cdot x_i^k(\Theta^k)\|_2^2+\lambda\|x_i^k(\Theta^k)\|_1 \label{eq:extended_train}\\ L(\Theta, W_D)=&\sum_{i=1}^N \|f_i - W_D\cdot x_i^K(\Theta)\|_2^2+\lambda\|x_i^K(\Theta)\|_1 \label{eq:joint_train} \end{align}
\textbf{Results.} The results are reported in Table \ref{tab:cs}. We build CS models at the sample rates of $20\%,30\%,40\%,50\%,60\%$ and test on the standard Set 11 images as in \cite{kulkarni2016reconnet}. We compare our results with four baselines: the classical iterative CS solver, TVAL3 \cite{li2013efficient}; the ``black-box'' deep learning CS solver, Recon-Net \cite{kulkarni2016reconnet}; an $\ell_0$-based network unfolded from the IHT algorithm \cite{blumensath2009iterative}, denoted as LIHT; and the baseline LISTA network, in terms of PSNR (dB) \footnote{ We applied TVAL3, LISTA and LISTA-CPSS on $16 \times 16$ patches to be fair. For Recon-Net, we used their default setting working on $33 \times 33$ patches, which was verified to perform better than using smaller patches.}. We build 16-layer LIHT, LISTA and LISTA-CPSS networks and set $\lambda=0.2$. For LISTA-CPSS, we add $p\% = 0.4\%$ more entries to the support in each layer for support selection. We also select the support w.r.t. a percentage of the largest magnitudes within \textit{the whole batch}, rather than within a single sample as in the theorems and simulation experiments, which we empirically find is beneficial to the recovery performance. Table \ref{tab:cs} confirms LISTA-CPSS as the best performer among all. The advantage of LISTA-CPSS and LISTA over Recon-Net also endorses the incorporation of the unrolled sparse solver structure into deep networks.
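As a rough illustration of the two training objectives (\ref{eq:extended_train}) and (\ref{eq:joint_train}), the following sketch evaluates both losses for one mini-batch with NumPy. The dimensions match the setting above ($f_i\in\mathbb{R}^{256}$, $D\in\mathbb{R}^{256\times 512}$, $\lambda=0.2$), but the random stand-ins for the patches, the dictionary and the layer outputs, as well as the function name, are ours; in practice these losses are minimized with a deep-learning framework's automatic differentiation rather than merely evaluated.
\begin{verbatim}
# Illustrative computation of the pre-training loss (extended_train) and the
# joint loss (joint_train) for one mini-batch; placeholder inputs, NumPy only.
import numpy as np

def reconstruction_l1_loss(F, X, D, lam):
    # sum_i ||f_i - D x_i||_2^2 + lam * ||x_i||_1  (rows of F, X index samples)
    residual = F - X @ D.T
    return np.sum(residual**2) + lam * np.sum(np.abs(X))

rng = np.random.default_rng(0)
N, patch_dim, code_dim = 64, 256, 512
D = rng.normal(size=(patch_dim, code_dim)) / np.sqrt(patch_dim)   # dictionary stand-in
F = rng.normal(size=(N, patch_dim))         # mean-removed patches (stand-in)
X_k = rng.normal(size=(N, code_dim)) * 0.1  # stand-in for layer-k codes x_i^k(Theta^k)

lam = 0.2
L_k = reconstruction_l1_loss(F, X_k, D, lam)         # layer-wise pre-training loss
W_D = D.copy()                                       # decoder initialized by D
L_joint = reconstruction_l1_loss(F, X_k, W_D, lam)   # end-to-end loss with W_D
print(L_k, L_joint)
\end{verbatim}
The joint loss differs from the pre-training loss only in that the decoder $W_D$ (initialized by $D$) is itself trainable.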
\begin{table} \caption{The average PSNR (dB) for Set 11 test images with CS ratios ranging from 0.2 to 0.6} \label{tab:cs} \centering \begin{tabular}{cccccc} \toprule Algorithm & 20\% & 30\% & 40\% & 50\% & 60\% \\ \midrule TVAL3 & 25.37 & 28.39 & 29.76 & 31.51 & 33.16 \\ Recon-Net & 27.18 & 29.11 & 30.49 & 31.39 & 32.44 \\ LIHT & 25.83 & 27.83 & 29.93 & 31.73 & 34.00 \\ LISTA & 28.17 & 30.43 & 32.75 & 34.26 & 35.99 \\ LISTA-CPSS & \textbf{28.25} & \textbf{30.54} & \textbf{32.87} & \textbf{34.60} & \textbf{36.39} \\ \bottomrule \end{tabular} \vspace{-1em} \end{table}
\vspace{-0.5em} \section{Conclusions} \vspace{-0.5em}
In this paper, we have introduced a partial weight coupling structure to LISTA, which reduces the number of trainable parameters but does not hurt the performance. With this structure, unfolded ISTA can attain a linear convergence rate. We have further proposed support selection, which improves the convergence rate both theoretically and empirically. Our theories are endorsed by extensive simulations and a real-data experiment. We believe that the methodology in this paper can be extended to analyzing and enhancing other unfolded iterative algorithms.
\subsubsection*{Acknowledgments}
The work by X. Chen and Z. Wang is supported in part by NSF RI-1755701. The work by J. Liu and W. Yin is supported in part by NSF DMS-1720237 and ONR N0001417121. We would also like to thank all anonymous reviewers for their tremendously useful comments that helped improve our work.
\bibliographystyle{unsrt}
{ "timestamp": "2018-11-06T02:18:51", "yymm": "1808", "arxiv_id": "1808.10038", "language": "en", "url": "https://arxiv.org/abs/1808.10038", "abstract": "In recent years, unfolding iterative algorithms as neural networks has become an empirical success in solving sparse recovery problems. However, its theoretical understanding is still immature, which prevents us from fully utilizing the power of neural networks. In this work, we study unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery. We introduce a weight structure that is necessary for asymptotic convergence to the true sparse signal. With this structure, unfolded ISTA can attain a linear convergence, which is better than the sublinear convergence of ISTA/FISTA in general cases. Furthermore, we propose to incorporate thresholding in the network to perform support selection, which is easy to implement and able to boost the convergence rate both theoretically and empirically. Extensive simulations, including sparse vector recovery and a compressive sensing experiment on real image data, corroborate our theoretical results and demonstrate their practical usefulness. We have made our codes publicly available:this https URL.", "subjects": "Machine Learning (cs.LG); Machine Learning (stat.ML)", "title": "Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9740426390346569, "lm_q2_score": 0.727975443004307, "lm_q1q2_score": 0.7090791216563387 }
https://arxiv.org/abs/2105.00266
Data-driven discovery of Green's functions with human-understandable deep learning
There is an opportunity for deep learning to revolutionize science and technology by revealing its findings in a human interpretable manner. To do this, we develop a novel data-driven approach for creating a human-machine partnership to accelerate scientific discovery. By collecting physical system responses under excitations drawn from a Gaussian process, we train rational neural networks to learn Green's functions of hidden linear partial differential equations. These functions reveal human-understandable properties and features, such as linear conservation laws and symmetries, along with shock and singularity locations, boundary effects, and dominant modes. We illustrate the technique on several examples and capture a range of physics, including advection-diffusion, viscous shocks, and Stokes flow in a lid-driven cavity.
\section*{Results} \subsection*{Deep learning Green's functions.} \begin{figure*}[ht!] \centering \vspace{0.5cm} \begin{overpic}[width=\textwidth, trim=0 0 0 0,clip]{Figure/figure1-final.pdf} \put(15,50){\textbf{A}} \put(16,33){\textbf{B}} \put(16,17){\textbf{C}} \put(46,47){\textbf{D}} \put(51,38){\textbf{E}} \put(51,20){\textbf{F}} \end{overpic} \caption{Schematic of our DL method for learning Green's functions from input-output pairs. (A) The covariance kernel of the Gaussian process (GP), which is used to generate excitations. (B) The random excitations and the system's response are recorded (C). (D) A loss function is minimized to train rational NNs (E). (F) The learned Green's function and homogeneous solution are visualized by sampling the NNs.} \label{fig_idea} \end{figure*} Our DL approach (see~\cref{fig_idea}) begins with excitations (or forcing terms), $\{f_j\}_{j=1}^N$, sampled from a Gaussian process (GP) having a carefully designed covariance kernel~\cite{boulle2021learning}, and corresponding system responses, $\{u_j\}_{j=1}^N$. It is postulated that there is an unknown linearized governing PDE so that $\L u_j = f_j$. The selection of random forcing terms is theoretically justified~\cite{boulle2021learning} and enables us to learn the dominant eigenmodes of the solution operator, using only a small number, $N$, of training pairs. The Green's function, $G$, and homogeneous solution, $u_{\text{hom}}$, which encodes the boundary conditions associated with the PDE, satisfy \begin{equation} \label{eq_Green_integral} u_j(x) = \int_{\Omega}G(x,y)f_j(y)\d y + u_{\text{hom}}(x),\qquad x\in\Omega, \end{equation} and are approximated by two rational neural networks: $\mathcal{N}_G$ and $\mathcal{N}_{\text{hom}}$. A rational NN consists of a NN with trainable rational activation functions whose coefficients are learned simultaneously with the weights and biases of the network. Rational NNs have better approximation properties than standard NNs~\cite{boulle2020rational}, both in theory and in practice, which makes them ideal for the present application. The parameters of the NNs representing the Green's function and homogeneous solution are simultaneously learned through minimization of the loss function displayed in \cref{fig_idea}D (Supplementary Material, \cref{sec_loss}). We discretize the integrals in the loss function at the specified measurement locations $\{x_i\}_{i=1}^{N_u}$, within the domain, $\Omega$, and forcing term sample points, $\{y_i\}_{i=1}^{N_f}$, respectively, using a quadrature rule. In the Supplementary Material, \cref{fig_robustness}, we also present numerical results obtained from sparse training data, or noisy spatial measurements, which demonstrate the robustness of our method (Supplementary Material, \cref{sec_robustness}). Additionally, our DL technique is data-driven and requires minimal by-hand parameter tuning. In fact, all the numerical examples described here and in the Supplementary Material are performed using a unique rational NN architecture, initialization procedure, and optimization algorithm. \subsection*{Human-understandable features.} \begin{figure*}[t] \centering \vspace{0.5cm} \begin{overpic}[width=\textwidth, trim=0 0 0 0,clip]{Figure/figure2-final.pdf} \put(0.5,37){\textbf{A}} \put(0.5,18){\textbf{B}} \put(48,37){\textbf{C}} \put(48,18){\textbf{D}} \put(75,37){\textbf{E}} \put(75,18){\textbf{F}} \end{overpic} \caption{Feature extraction from learned Green's functions. 
The NNs for the learned Green's function (A) and homogeneous solution (B) enable the extraction of qualitative and quantitative features associated with the differential operator. For example, the symmetries in the Green's function reveal PDE invariances (C), poles of rational NNs identify singularity type and location (D), and the dominant eigenvalues (E) and eigenmodes (F) of the learned Green's function are related to the eigenvalues and eigenmodes of the differential operator.} \label{fig_schema} \end{figure*} The trained NNs contain both the desired Green's function and homogeneous solution, which we evaluate and visualize to glean novel insights concerning the underlying, governing PDE (\cref{fig_schema}). In this way, we achieve one part of our human interpretation goal: finding a link between the properties of the Green's function and those of the underlying differential operator and solution constraints. As an example, if the Green's function is symmetric, i.e., $G(x,y)=G(y,x)$ for all $x,y\in\Omega$, then the operator $\L$ is self-adjoint. Another aspect of human interpretability is that the poles of the trained rational NN tend to cluster in a way that reveals the location and type of singularities in the homogeneous solution. Finally, there is a direct correspondence between the dominant eigenmodes and eigenvalues (as well as the singular vectors and singular values) of the learned Green's function and those of the differential operator. This correspondence gives insight into the important eigenmodes that govern the system's behavior (Supplementary Material, \cref{sec_features}). \begin{figure*}[ht!] \centering \vspace{0.5cm} \begin{overpic}[width=\textwidth]{Figure/fig3_paper.pdf} \put(0,67){\textbf{A}} \put(34,67){\textbf{B}} \put(65,67){\textbf{C}} \put(0,44){\textbf{D}} \put(34,44){\textbf{E}} \put(65,44){\textbf{F}} \put(0,21){\textbf{G}} \put(34,21){\textbf{H}} \put(65,21){\textbf{I}} \end{overpic} \caption{Green's functions learned by rational neural networks. (A) Green's function of a differential operator with a viscous shock at $x=0$, learned by a rational NN. (B) Learned and exact (computed by a classical spectral method) homogeneous solution to the differential equation with zero forcing term. (C) Phase portrait of the homogeneous rational NN evaluated on the complex plane. (D to F) Similar to (A to C), but without any system's response measurements in $x\in[-0.2,0.2]$ (see vertical black lines) near the shock. (G) Learned Green's function and homogeneous solution (H) of an advection-diffusion operator with advection occurring for $x\geq 0$. (I) Phase portrait of the homogeneous NN on the complex plane.} \label{fig_experiments} \end{figure*} \subsection*{Numerical examples.} As a first example, we consider a second-order differential operator having suitable variable coefficients to model a viscous shock at $x=0$~\cite{lee1997fast}. The system's responses are obtained by solving the PDE, with Dirichlet boundary conditions, using a spectral numerical solver for each of the $N=100$ random forcing terms, sampled from a GP having a squared-exponential covariance kernel~\cite{boulle2021learning}. The learned Green's function is displayed in \cref{fig_experiments}A and satisfies the following symmetry relation: $G(x,y) = G(-x,-y)$, indicating the presence of a reflective symmetry group within the underlying PDE. Indeed, if $u$ is a solution to $\L u=f$ with homogeneous boundary conditions, then $u(-x)$ is a solution to $\L v=f(-x)$.
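For concreteness, the following minimal sketch (not the authors' code) shows how such features can be read off a trained Green's function network: the callable \texttt{G\_net} stands in for $\mathcal{N}_G$, and the grid, tolerance, and number of modes are illustrative choices.
\begin{verbatim}
# Illustrative sketch of feature extraction: sample a learned Green's function
# on a grid, check symmetry, and compute dominant modes via an SVD of the
# discretized kernel. "G_net" is a stand-in for the trained network N_G,
# assumed callable on flattened arrays of x and y coordinates.
import numpy as np

def extract_features(G_net, grid, n_modes=5):
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    G = G_net(X.ravel(), Y.ravel()).reshape(len(grid), len(grid))
    is_symmetric = np.allclose(G, G.T, atol=1e-2)   # G(x,y) ~ G(y,x) => self-adjoint
    U, s, Vt = np.linalg.svd(G)                     # dominant singular values/modes
    return is_symmetric, s[:n_modes], U[:, :n_modes]
\end{verbatim}
Reflective symmetries such as $G(x,y)=G(-x,-y)$ can be checked in the same way, by comparing the sampled kernel with its evaluation on the reflected grid.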
We also observe in \cref{fig_experiments}B and C that the homogeneous solution is accurately captured and that the poles of the homogeneous rational NN cluster near the real axis around $x=0$, which is the location of the singularity induced by the shock (Supplementary Material, \cref{sec_singularity}). Next, we reproduce the same viscous shock numerical experiment, except that this time we remove measurements of the system's response from the training dataset in the interval $[-0.2,0.2]$, adjacent to the shock front. By comparing \cref{fig_experiments}D to F and \cref{fig_experiments}A to C, we find that the Green's function and homogeneous solution, learned by the rational NNs, are essentially unaffected in the region outside of the interval with missing data. In some cases, the NNs can still accurately capture the main features of the Green's function and homogeneous solution in the region lacking measurements. The robustness of our method to noise perturbation and corrupted or missing data is of significant interest and is promising for real applications with experimental data. We next apply our DL method to discover the Green's function and homogeneous solution of an advection-diffusion operator, where the advection is dominant only within the right half of the domain. The output of the Green's function NN is plotted in \cref{fig_experiments}G, where we observe the disparate spatial behaviors of the dominant physical mechanisms. This can be seen in the restriction of the Green's function to the subdomain $[-1,0]\times[-1,0]$, where the observed solution is reminiscent of the Green's function for the Laplacian, thus indicating that the PDE is diffusive on the left half of the domain. Similarly, the restriction of the learned Green's function to $[0,1]\times[0,1]$ is characteristic of advection. In \cref{fig_experiments}H and I, we display the homogeneous solution NN, along with the phase of the rational NN, evaluated on the complex plane. The agreement between the exact and learned homogeneous solution illustrates the ability of the DL method to accurately capture the behavior of a system within ``multiphysics'' contexts. The choice of rational NNs is crucial here for deepening our understanding of the system, as the poles of the homogeneous rational NN characterize the location and type of singularities in the homogeneous solution. Here the change in behavior of the differential operator from diffusion to advection is delineated by the location of the poles of the rational NN. \subsection*{Nonlinear and vector-valued equations.} We can also discover Green's functions from forcing terms and concomitant solutions to nonlinear differential equations possessing semi-dominant linearity. In \cref{fig_stokes}A to C, we visualize the Green's function NNs of three operators with cubic nonlinearity considered in~\cite{gin2020deepgreen}. The nonlinearity does not prevent our method from discovering a Green's function of an approximate linear model, from which one can understand features such as symmetry and boundary conditions. This property is crucial for tackling time-dependent problems, where the present technique may be extended and applied to uncover linear propagators. \begin{figure*}[ht] \centering \vspace{0.5cm} \begin{overpic}[width=\textwidth]{Figure/fig4_paper.pdf} \put(0,67.5){\textbf{A}} \put(34,67.5){\textbf{B}} \put(68,67.5){\textbf{C}} \put(0,44){\textbf{D}} \put(68,44){\textbf{E}} \put(68,21){\textbf{F}} \end{overpic} \caption{Linearized models and Stokes flow.
(A to C) Green's functions of three differential operators: Helmholtz, Sturm--Liouville, and biharmonic, with cubic nonlinearity. (D) Matrix of Green's functions of a two-dimensional Stokes flow in a lid-driven cavity, evaluated at a two-dimensional slice. Velocity magnitude and streamlines of the exact (E) and learned (F) homogeneous solution to the Stokes equations with zero applied body force.} \label{fig_stokes} \end{figure*} Finally, we consider a Stokes flow in a two-dimensional lid-driven cavity to emphasize the ability of our method to handle systems of differential equations in two dimensions. In this context, the relation between the system's responses and the forcing terms can be expressed using a Green's matrix, which consists of a two-by-two matrix of Green's functions and whose components reveal features of the underlying system such as symmetry and coupling (\cref{fig_stokes}D and the Supplementary Material, \cref{sec_system,sec_lid_driven}). \cref{fig_stokes}E and F illustrate that the homogeneous solution to the Stokes equation is accurately captured by the homogeneous rational NN, despite the corner singularities and coarse measurement grid (Supplementary Material, \cref{sec_lid_driven}). \section*{Discussion} Contrary to existing works in the literature~\cite{gin2020deepgreen,lu2021learning,li2020neural,li2020fourier}, our primary aim is to uncover mechanistic understanding from input-output data using a human-understandable representation of the underlying, but hidden, differential operator. This representation takes the form of a rational NN~\cite{boulle2020rational} for the Green's function. We extensively described the physical features of the operator that can be extracted and discovered from the learned Green's function and homogeneous solutions, such as linear conservation laws, symmetries, shock front and singularity locations, boundary conditions, and dominant modes. The DL method for learning Green's functions of linear differential operators naturally extends to the case of three spatial dimensions, but these systems are more challenging due to the GPU memory demands required to represent the six-dimensional inputs used to train the NN representing the Green's function. However, optimization algorithms other than the one used in this paper and described in \emph{Methods}, such as mini-batch optimization~\cite{kingma2014adam,li2014efficient}, may be employed to alleviate the computational expense of the training procedure. While our method is demonstrated on linear differential operators, it can be extended to nonlinear, time-dependent problems that can be linearized using an implicit-explicit time-stepping scheme~\cite{ascher1997implicit,pareschi2005implicit} or an iterative method~\cite{kelley1995iterative}. This process allows us to learn the Green's functions of linear time propagators and understand physical behavior in time-dependent problems from input-output data, such as for the time-dependent Schr\"odinger equation (Supplementary Material, \cref{sec_time_dep}). The numerical experiments conducted in \cref{fig_stokes}A to C highlight that our approach can discover Green's functions of linearizations of nonlinear differential operators. Our deep learning method for learning Green's functions and extracting human-understandable properties of partial differential equations benefits from the adaptivity of rational neural networks and from their support for qualitative feature detection and interpretation.
We successfully tested our approach with noisy and sparse measurements as training data (Supplementary Material, \cref{sec_robustness}). The design of our network architecture and of the covariance kernel used to generate the system forcing is guided by rigorous theoretical statements~\cite{boulle2021learning,boulle2020rational} that offer performance guarantees. This shows that our proposed deep learning method may be used to discover new mechanistic understanding with machine learning. \section*{Methods} \subsection*{Green's functions.} We consider linear differential operators, $\L$, defined on a bounded domain $\Omega\subset\mathbb{R}^d$, where $d\in\{1,2,3\}$ denotes the spatial dimension. The aim of our method is to discover properties of the operator, $\L$, using $N$ input-output pairs $\{(f_j,u_j)\}_{j=1}^N$, consisting of forcing functions, $f_j:\Omega\to\mathbb{R}$, and system responses, $u_j:\Omega\to\mathbb{R}$, which are solutions to the following equation: \begin{equation} \label{eq_problem} \L u_j = f_j, \qquad \mathcal{D}(u_j,\Omega) = g, \end{equation} where $\mathcal{D}$ is a linear operator acting on the solutions, $u$, and the domain, $\Omega$, with $g$ denoting the constraint data. We assume that the forcing terms have sufficient regularity and that the operator, $\mathcal{D}$, imposes constraints such that \cref{eq_problem} has a unique solution~\cite{stakgold2011green}. An example of a constraint is the imposition of homogeneous Dirichlet boundary conditions on the solutions: $\mathcal{D}(u_j,\Omega) := u_{j}|_{\partial\Omega}=0$. Note that boundary conditions, integral conditions, jump conditions, or non-standard constraints are all possible (Supplementary Material, \cref{sec_linear_const}). A Green's function~\cite{evans10,arfken2011mathematical,myint2007linear,stakgold2011green} of the operator, $\L$, is defined as the solution to the following equation: \[\L G(x,y) = \delta(y-x),\qquad x,y\in\Omega,\] where $\L$ is acting on the function $x\mapsto G(x,y)$ for fixed $y\in\Omega$, and $\delta(\cdot)$ denotes the Dirac delta function. The Green's function is well-defined and unique under mild conditions on $\L$ and suitable solution constraints imposed via an operator, $\mathcal{D}$ (see~\cref{eq_problem})~\cite{stakgold2011green}. Moreover, if $(f,u)$ is an input-output pair satisfying \cref{eq_problem} with $g=0$, then \[u(x) = \int_{\Omega}G(x,y)f(y)\d y, \qquad x\in\Omega.\] Therefore, the Green's function associated with $\mathcal{L}$ can be thought of as the right inverse of $\mathcal{L}$. Let $u_{\text{hom}}$ be the homogeneous solution to~\eqref{eq_problem}, so that \[\L u_{\text{hom}} = 0, \qquad \mathcal{D}(u_{\text{hom}},\Omega) = g.\] Using superposition, we can construct solutions, $u_j$, to \cref{eq_problem} as $u_j = \tilde{u}_j+u_{\text{hom}}$, where $\tilde{u}_j$ satisfies \[\L \tilde{u}_j = f_j, \qquad \mathcal{D}(\tilde{u}_j,\Omega) = 0.\] Then, the relation between the system's response, $u_j$, and the forcing term, $f_j$, can be expressed via the Green's function as \[u_j(x) = \int_{\Omega}G(x,y)f_j(y)\d y + u_{\text{hom}}(x),\qquad x\in\Omega.\] Therefore, we train two NNs: $\mathcal{N}_G:\Omega\times\Omega\to\mathbb{R}\cup\{\pm\infty\}$ and $\mathcal{N}_{\text{hom}}:\Omega\to\mathbb{R}$, to learn the Green's function and the homogeneous solution associated with $\L$ and the constraint operator $\mathcal{D}$.
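As a concrete sanity check of this superposition formula (an illustration, not part of the actual method), the following short sketch draws a random forcing from a squared-exponential GP, applies the classical Green's function of $-\mathrm{d}^2/\mathrm{d}x^2$ on $[0,1]$ with homogeneous Dirichlet conditions by quadrature, and compares the result with a finite-difference solve; the grid sizes and length-scale are illustrative choices.
\begin{verbatim}
# Sanity check of u(x) = \int_0^1 G(x,s) f(s) ds for L = -d^2/dx^2 on [0,1]
# with homogeneous Dirichlet conditions, where G(x,s) = x(1-s) if x <= s and
# s(1-x) otherwise. The forcing is a sample from a squared-exponential GP.
import numpy as np

s = np.linspace(0, 1, 401); h = s[1] - s[0]
rng = np.random.default_rng(1)
K = np.exp(-(s[:, None] - s[None, :])**2 / (2 * 0.1**2))
f = np.linalg.cholesky(K + 1e-10 * np.eye(len(s))) @ rng.standard_normal(len(s))

G = np.where(s[:, None] <= s[None, :],
             s[:, None] * (1 - s[None, :]),
             s[None, :] * (1 - s[:, None]))
u_green = np.trapz(G * f[None, :], s, axis=1)          # quadrature of the integral

# reference solution by second-order finite differences on the interior nodes
A = (np.diag(2 * np.ones(len(s) - 2)) - np.diag(np.ones(len(s) - 3), 1)
     - np.diag(np.ones(len(s) - 3), -1)) / h**2
u_fd = np.zeros_like(s); u_fd[1:-1] = np.linalg.solve(A, f[1:-1])
print(np.max(np.abs(u_green - u_fd)))                   # small: both are O(h^2) accurate
\end{verbatim}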
Note that this procedure allows us to discover boundary conditions, or constraints, directly from the input-output data without imposing them in the loss function (which often results in training instabilities~\cite{wight2020solving}). \subsection*{Rational neural networks.} \label{sec_rational_net} Rational NNs~\cite{boulle2020rational} consist of NNs with adaptive rational activation functions $x\mapsto\sigma(x) = p(x)/q(x)$, where $p$ and $q$ are two polynomials, whose coefficients are trained at the same time as the other parameters of the networks, such as the weights and biases. These coefficients are shared between all the neurons in a given layer but generally differ between the network's layers. Networks of this type were proven to have better approximation power than standard Rectified Linear Unit (ReLU) networks~\cite{glorot2011deep,yarotsky2017error}, which means that they can approximate smooth functions more accurately with fewer layers and network parameters~\cite{boulle2020rational}. It is also observed in practice that rational NNs require fewer optimization steps and therefore can be more efficient to train than NNs with other activation functions~\cite{boulle2020rational}. The NNs, $\mathcal{N}_G$ and $\mathcal{N}_{\text{hom}}$, which approximate the Green's function and homogeneous solution associated with \cref{eq_problem}, respectively, are chosen to be rational NNs~\cite{boulle2020rational} with 4 hidden layers and 50 neurons in each layer. We choose the polynomials, $p$ and $q$, within the activation functions to be of degree 3 and 2, respectively, and initialize the coefficients of all the rational activation functions so that they are the best $(3,2)$ rational approximant to a ReLU (see the supplementary material of~\cite{boulle2020rational} for details). The motivation is that the flexibility of the rational functions improves training and accuracy compared with the ReLU activation function. We highlight that the increase in the number of trainable parameters, due to the adaptive rational activation functions, is only linear with respect to the number of layers and negligible compared to the total number of parameters in the network: \[\text{number of rational coefficients} = 7\times \text{number of hidden layers} = 28.\] The weight matrices of the NNs are initialized using the Glorot normal initializer~\cite{glorot2010understanding}, while the biases are initialized to zero. Another advantage of rational NNs is the potential presence of poles, \emph{i.e.}, zeros of the polynomial $q$. While the initialization of the activation functions avoids training issues due to potential spurious poles, the poles can be exploited to learn physical features of the differential operator (Supplementary Material, \cref{sec_singularity}). Therefore, the architecture of the NNs also supports the aim of a human-understandable approach for learning PDEs. In higher dimensions, such as $d = 2$ or $d =3$, the Green's function is not necessarily bounded along the diagonal, i.e., $\{(x,x),\, x\in\Omega\}$, thus making the poles of the rational NNs crucial. Finally, we emphasize that the enhanced approximation properties of rational NNs~\cite{boulle2020rational} make them ideal for learning Green's functions and, more generally, approximating functions within regression problems.
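The following minimal sketch (assuming TensorFlow 2.x) shows one way such a trainable $(3,2)$ rational activation can be implemented and how poles can be read off the trained denominator; the initial coefficients below are indicative values only, with the exact ReLU-approximating initialization given in the supplementary material of~\cite{boulle2020rational}.
\begin{verbatim}
# Minimal sketch of a trainable (3,2) rational activation sigma(x) = p(x)/q(x),
# with coefficients shared across a layer (assumes TensorFlow 2.x). The initial
# coefficients only roughly mimic ReLU; the exact best (3,2) initialization is
# given in the supplementary material of the rational NN paper.
import numpy as np
import tensorflow as tf

class RationalActivation(tf.keras.layers.Layer):
    def __init__(self, p_init=(1.19, 1.60, 0.50, 0.02), q_init=(2.38, 0.0, 1.0),
                 **kwargs):
        super().__init__(**kwargs)
        self.p_init, self.q_init = p_init, q_init

    def build(self, input_shape):
        self.p = self.add_weight(name="p", shape=(4,), trainable=True,
                                 initializer=tf.constant_initializer(self.p_init))
        self.q = self.add_weight(name="q", shape=(3,), trainable=True,
                                 initializer=tf.constant_initializer(self.q_init))

    def call(self, x):
        num = tf.math.polyval(tf.unstack(self.p), x)   # p(x), highest degree first
        den = tf.math.polyval(tf.unstack(self.q), x)   # q(x), highest degree first
        return num / den

# The poles of one activation are the complex roots of q; the poles of the whole
# network arise from compositions of these functions.
layer = RationalActivation()
layer.build(None)
print(np.roots(layer.q.numpy()))
\end{verbatim}
With these indicative initial coefficients the poles sit away from the real axis, consistent with the remark above that the initialization avoids spurious poles during training.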
These networks may also be of benefit to other approaches for solving and learning PDEs with DL techniques, such as PINNs~\cite{raissi2019physics}, DeepGreen~\cite{gin2020deepgreen}, DeepONet~\cite{lu2021learning}, Neural operator~\cite{li2020neural}, and Fourier neural operator~\cite{li2020fourier}. \subsection*{Data generation.} We create a training dataset, consisting of input-output functions, $\{(f_j,u_j)\}$ for $1\leq j \leq N$, in three steps: (1) Generating the forcing terms by sampling random functions from a Gaussian process (GP), (2) Solving \cref{eq_problem} for the generated forcing terms, and (3) Sampling the forcing terms, $f_j$, at the points $\{y_1,\ldots,y_{N_f}\}\subset\Omega$ and the system's responses, $u_j$, at $\{x_1,\ldots,x_{N_u}\}\subset\Omega$. Here, $N_f$ and $N_u$ are the forcing and solution discretization sizes, respectively. We recommend that all the forcing terms are sampled on the same grid, and similarly for the system's responses. This minimizes the number of evaluations of $\mathcal{N}_G$ during the training phase and reduces the computational and memory costs of training. The spatial locations of the points $\{y_i\}$ and the forcing discretization size, $N_f$, are chosen arbitrarily for training the NNs, as the forcing terms are assumed to be known over $\Omega$. In practice, the number, $N_u$, and location of the measurement points, $\{x_i\}$, are imposed by the nature of the experiment, or simulation, performed to measure the system's response. When $\Omega$ is an interval, we always select $N_f=200$, $N_u=100$, and equally spaced sample points for the forcing and response functions. Further details on the training data generation are available in the Supplementary Material, \cref{sec_generation_data}. We then analyze the robustness of our method for learning Green's functions with respect to the number and location of the measurement points in the Supplementary Material, \cref{sec_robustness}. \subsection*{Neural network training.} The NNs are implemented in single-precision floating-point format within the TensorFlow DL library~\cite{abadi2016tensorflow}, and are trained (the numerical experiments are performed on a desktop computer with an Intel\textsuperscript{\tiny\textregistered} Xeon\textsuperscript{\tiny\textregistered} CPU E5-2667 v2 @ 3.30GHz and an NVIDIA\textsuperscript{\tiny\textregistered} Tesla\textsuperscript{\tiny\textregistered} K40m GPU) using a two-step optimization procedure to minimize the loss function (Supplementary Material, \cref{sec_loss}). First, we use Adam's algorithm~\cite{kingma2014adam} for the first 1000 optimization steps (or epochs), with default learning rate $0.001$ and parameters $\beta_1 = 0.9$, $\beta_2 = 0.999$. Then, we employ the limited-memory BFGS with bound constraints (L-BFGS-B) optimization algorithm~\cite{liu1989limited,byrd1995limited}, implemented in the SciPy library~\cite{virtanen2020scipy}, with a maximum of $5\times 10^4$ iterations. This training procedure is used by Lu \emph{et al.} to train physics-informed NNs (PINNs) and mitigate the risk of the optimizer getting stuck at a poor local minimum~\cite{lu2021deepxde}. The L-BFGS-B algorithm is also successful for PDE learning~\cite{raissi2018deep} and PDE solvers using DL techniques~\cite{raissi2019physics,lu2021deepxde}.
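To make the two-step optimization procedure concrete, here is a minimal sketch on a toy least-squares objective (the actual loss is the one described in the Supplementary Material); the objective, its dimensions, and the random data are placeholders, while the step counts and Adam parameters follow the text.
\begin{verbatim}
# Sketch of the two-stage optimization: Adam-style updates for 1000 steps,
# then refinement with SciPy's L-BFGS-B. The quadratic objective below is a
# placeholder standing in for the actual training loss.
import numpy as np
from scipy.optimize import minimize

A = np.random.default_rng(0).standard_normal((50, 10))
b = np.ones(50)
loss = lambda theta: 0.5 * np.sum((A @ theta - b) ** 2)
grad = lambda theta: A.T @ (A @ theta - b)

theta = np.zeros(10)
m, v = np.zeros(10), np.zeros(10)
beta1, beta2, lr, eps = 0.9, 0.999, 0.001, 1e-8
for t in range(1, 1001):                     # stage 1: 1000 Adam steps
    g = grad(theta)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    m_hat, v_hat = m / (1 - beta1**t), v / (1 - beta2**t)
    theta -= lr * m_hat / (np.sqrt(v_hat) + eps)

# stage 2: L-BFGS-B refinement with a cap on the number of iterations
res = minimize(loss, theta, jac=grad, method="L-BFGS-B",
               options={"maxiter": 50000})
\end{verbatim}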
Moreover, this optimization algorithm takes advantage of the smoothness of the loss function by using second-order derivatives and often converges in fewer iterations than Adam's algorithm and other methods based on stochastic gradient descent~\cite{lu2021deepxde}. Within this setting, rational NNs are beneficial because the activation functions are smooth while maintaining an initialization close to ReLU (Supplementary Material, \cref{fig_loss}). \subsection*{Theoretical justification.} Our approach for learning Green's functions associated with linear differential operators has a theoretically rigorous underpinning. Indeed, it was shown in~\cite{boulle2021learning} that uniformly elliptic operators in three dimensions have an intrinsic \emph{learning rate}, which characterizes the number of training pairs needed to construct an $\epsilon$-approximation in the $L^2$-norm of the Green's function, $G$, with high probability, for $0<\epsilon<1$. The number of training pairs depends on the quality of the covariance kernel used to generate the random forcing terms, $\{f_j\}_{j=1}^N$. Our choice of covariance kernel (Supplementary Material, section~1) is motivated by the GP quality measure~\cite{boulle2021learning,boulle2021generalization}, to ensure that our set of training forcing terms is sufficiently diverse to capture the action of the solution operator, $f\mapsto u(x) = \int_{\Omega}G(x,y)f(y)\d y$, on a diverse set of functions. Similarly, the choice of rational NNs to approximate the Green's function, and the homogeneous solution, is justified by the higher approximation power of these networks over ReLU~\cite{boulle2020rational}. Other adaptive activation functions have been proposed for learning or solving PDEs with NNs~\cite{jagtap2020adaptive}, but they are only motivated by empirical observations. Both theory and experiments support rational NNs for regression problems. The number of trainable parameters, consisting of weight matrices, bias vectors, and rational coefficients, needed by a rational NN to approximate smooth functions within $0<\epsilon<1$, can be completely characterized~\cite{boulle2020rational}. This motivates our choice of NN architecture for learning the Green's functions. \section*{Data Availability} All data and codes used in this article and the Supplementary Material are publicly available on the GitHub and Zenodo repositories at \url{https://github.com/NBoulle/greenlearning/}~\cite{boulleZenodo} to reproduce the numerical experiments and figures. A software package, including additional examples and documentation, is also available at \url{https://greenlearning.readthedocs.io/}. \section*{Acknowledgments} We thank Gregory Bewley for suggestions on the manuscript. This work was supported by the EPSRC Centre for Doctoral Training in Industrially Focused Mathematical Modelling through grant EP/L015803/1 in collaboration with Simula Research Laboratory. C.J.E. was supported by the Army Research Office (ARO) Biomathematics Program grant W911NF-18-1-0351. A.T. was supported by the National Science Foundation grants DMS-1818757, DMS-1952757, and DMS-2045646. \bibliographystyle{Science}
{ "timestamp": "2022-03-14T01:16:04", "yymm": "2105", "arxiv_id": "2105.00266", "language": "en", "url": "https://arxiv.org/abs/2105.00266", "abstract": "There is an opportunity for deep learning to revolutionize science and technology by revealing its findings in a human interpretable manner. To do this, we develop a novel data-driven approach for creating a human-machine partnership to accelerate scientific discovery. By collecting physical system responses under excitations drawn from a Gaussian process, we train rational neural networks to learn Green's functions of hidden linear partial differential equations. These functions reveal human-understandable properties and features, such as linear conservation laws and symmetries, along with shock and singularity locations, boundary effects, and dominant modes. We illustrate the technique on several examples and capture a range of physics, including advection-diffusion, viscous shocks, and Stokes flow in a lid-driven cavity.", "subjects": "Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Numerical Analysis (math.NA)", "title": "Data-driven discovery of Green's functions with human-understandable deep learning", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9740426375276383, "lm_q2_score": 0.7279754430043072, "lm_q1q2_score": 0.7090791205592663 }
https://arxiv.org/abs/2006.07458
Projection Robust Wasserstein Distance and Riemannian Optimization
Projection robust Wasserstein (PRW) distance, or Wasserstein projection pursuit (WPP), is a robust variant of the Wasserstein distance. Recent work suggests that this quantity is more robust than the standard Wasserstein distance, in particular when comparing probability measures in high dimensions. However, it has been ruled out for practical applications because the optimization model is essentially non-convex and non-smooth, which makes the computation intractable. Our contribution in this paper is to revisit the original motivation behind WPP/PRW, but take the hard route of showing that, despite its non-convexity and non-smoothness, and even despite some hardness results proved by~\citet{Niles-2019-Estimation} in a minimax sense, the original formulation for PRW/WPP \textit{can} be efficiently computed in practice using Riemannian optimization, yielding in relevant cases better behavior than its convex relaxation. More specifically, we provide three simple algorithms with solid theoretical guarantees on their complexity bounds (one in the appendix), and demonstrate their effectiveness and efficiency by conducting extensive experiments on synthetic and real data. This paper provides a first step into a computational theory of the PRW distance and establishes links between optimal transport and Riemannian optimization.
\section{Acknowledgements} We would like to thank Minhui Huang for very helpful discussions regarding the proofs of Lemma 3.4 and Theorem 3.7. This work is supported in part by the Mathematical Data Science program of the Office of Naval Research under grant number N00014-18-1-2764. \bibliographystyle{plainnat} \section{Conclusion}\label{sec:conclusions} We study in this paper the computation of the projection robust Wasserstein (PRW) distance in the discrete setting. Two algorithms are developed for computing the entropic regularized PRW distance, both of which are guaranteed to converge to an approximate pair of optimal subspace projection and optimal transportation plan. Experiments on synthetic and real datasets demonstrate that our approach to computing the PRW distance is an improvement over existing approaches based on the convex relaxation of the PRW distance and the Frank-Wolfe algorithm. Future work includes the theory for continuous distributions and applications of the PRW distance to deep generative models. \section{Introduction} Optimal transport (OT) theory~\citep{Villani-2003-Topics, Villani-2008-Optimal} has become an important source of ideas and algorithmic tools in machine learning and related fields. Examples include contributions to generative modelling~\citep{Arjovsky-2017-Wasserstein, Salimans-2018-Improving, Genevay-2017-AutoDiff, Tolstikhin-2018-Wasserstein, Genevay-2018-Sinkhorn}, domain adaptation~\citep{Courty-2017-Optimal}, clustering~\citep{Srivastava-2015-WASP, Ho-2017-Multilevel}, dictionary learning~\citep{Rolet-2016-Fast, Schmitz-2018-Wasserstein}, text mining~\citep{Lin-2019-Sparsemax}, neuroimaging~\citep{Janati-2020-Multi} and single-cell genomics~\citep{Schiebinger-2019-Optimal, Yang-2020-Predicting}. The Wasserstein geometry has also provided a simple and useful analytical tool to study latent mixture models~\citep{Ho-2016-Convergence}, reinforcement learning~\citep{Bellemare-2017-Distributional}, sampling~\citep{Cheng-2018-Underdamped, Dalalyan-2019-User, Mou-2019-High, Bernton-2018-Langevin} and stochastic optimization~\citep{Nagaraj-2019-Sgd}. For an overview of OT theory and the relevant applications, we refer to the recent survey~\citep{Peyre-2019-Computational}. \paragraph{Curse of Dimensionality in OT.} A significant barrier to the direct application of OT in machine learning lies in some inherent statistical limitations. It is well known that the sample complexity of approximating Wasserstein distances between densities using only samples can grow exponentially in dimension~\citep{Dudley-1969-Speed, Fournier-2015-Rate, Weed-2019-Sharp, Lei-2020-Convergence}. Practitioners have long been aware of this issue of the curse of dimensionality in applications of OT, and it can be argued that most of the efficient computational schemes that are known to improve computational complexity also carry out, implicitly through their simplifications, some form of statistical regularization.
There have been many attempts to mitigate this curse when using OT, whether through entropic regularization~\citep{Cuturi-2013-Sinkhorn, Cuturi-2014-Barycenter, Genevay-2019-Sample, Mena-2019-Statistical}; other regularizations~\citep{Dessein-2018-Regularized, Blondel-2018-Smooth}; quantization~\citep{Canas-2012-Learning, Forrow-2019-Statistical}; simplification of the dual problem in the case of the 1-Wasserstein distance~\citep{Shirdhonkar-2008-Approximate, Arjovsky-2017-Wasserstein}; or by only using second-order moments of measures to fall back on the Bures-Wasserstein distance~\citep{Bhatia-2018-Bures, Muzellec-2018-Elliptical, Chen-2018-Optimal}. \paragraph{Subspace projections: PRW and WPP.} We focus in this paper on another important approach to regularize the Wasserstein distance: Project input measures onto lower-dimensional subspaces and compute the Wasserstein distance between these reductions, instead of the original measures. The simplest and most representative example of this approach is the sliced Wasserstein distance~\citep{Rabin-2011-Wasserstein, Bonneel-2015-Sliced, Kolouri-2019-Generalized, Nguyen-2020-Distributional}, which is defined as the average Wasserstein distance obtained between random 1D projections. In an important extension,~\citet{Paty-2019-Subspace} and~\citet{Niles-2019-Estimation} very recently proposed to look for the $k$-dimensional subspace ($k>1$) that would maximize the Wasserstein distance between two measures after projection.~\citet{Paty-2019-Subspace} called that quantity the \textit{projection robust Wasserstein} (PRW) distance, while~\citet{Niles-2019-Estimation} named it \textit{Wasserstein Projection Pursuit} (WPP). PRW/WPP are conceptually simple, easy to interpret, and do solve the curse of dimensionality in the so-called spiked model as proved in~\citet[Theorem 1]{Niles-2019-Estimation} by recovering an optimal $1/\sqrt{n}$ rate. Very recently,~\citet{Lin-2020-Projection} further provided several fundamental statistical bounds for PRW as well as asymptotic guarantees for learning generative models with PRW. Despite this appeal,~\citet{Paty-2019-Subspace} quickly rule out PRW for practical applications because it is non-convex, and fall back on a convex relaxation, called the \emph{subspace robust Wasserstein} (SRW) distance, which is shown to work better empirically than the usual Wasserstein distance. Similarly,~\citet{Niles-2019-Estimation} seem to lose hope that it can be computed, by stating \textit{``it is unclear how to implement WPP efficiently,''} and after having proved positive results on sample complexity, conclude their paper on a negative note, showing hardness results that apply to WPP when the ground cost is the Euclidean metric (the 1-Wasserstein case). Our contribution in this paper is to revisit the original motivation behind WPP/PRW, but take the hard route of showing that, despite its non-convexity and non-smoothness, and even despite some hardness results proved in~\cite{Niles-2019-Estimation} in a minimax sense, the original formulation for PRW/WPP \textit{can} be efficiently computed in practice using Riemannian optimization, yielding in relevant cases better behavior than SRW. For simplicity, we refer from now on to PRW/WPP as PRW. \paragraph{Contribution:} In this paper, we study the computation of the PRW distance between two discrete probability measures of size $n$.
We show that the resulting optimization problem has a special structure, allowing it to be solved in an efficient manner using Riemannian optimization~\citep{Absil-2009-Optimization, Boumal-2019-Global, Kasai-2019-Riemannian, Chen-2020-Proximal}. Our contributions can be summarized as follows. \begin{enumerate} \item We propose a max-min optimization model for computing the PRW distance. The maximization and minimization are performed over the Stiefel manifold and the transportation polytope, respectively. We prove the existence of the subdifferential (Lemma~\ref{lemma:bound-subdiff}), which allows us to properly define an \emph{$\epsilon$-approximate pair of optimal subspace projection and optimal transportation plan} (Definition~\ref{def:optimal-pair}) and carry out a finite-time analysis of the algorithm. \item We define an entropic regularized PRW distance between two finite discrete probability measures, and show that it is possible to efficiently optimize this distance over the transportation polytope using the Sinkhorn iteration. This poses the problem of performing the maximization over the Stiefel manifold, which is not solvable by existing optimal transport algorithms~\citep{Cuturi-2013-Sinkhorn, Altschuler-2017-Near, Dvurechensky-2018-Computational, Lin-2019-Efficient, Lin-2019-Acceleration, Guminov-2019-Accelerated}. To this end, we propose two new algorithms, which we refer to as \textit{Riemannian gradient ascent with Sinkhorn} (RGAS) and \textit{Riemannian adaptive gradient ascent with Sinkhorn} (RAGAS), for computing the entropic regularized PRW distance. These two algorithms are guaranteed to return an \emph{$\epsilon$-approximate pair of optimal subspace projection and optimal transportation plan} with a complexity bound of $\widetilde{O}(n^2d\|C\|_\infty^4\epsilon^{-4} + n^2\|C\|_\infty^8\epsilon^{-8} + n^2\|C\|_\infty^{12}\epsilon^{-12})$. To the best of our knowledge, our algorithms are the first provably efficient algorithms for the computation of the PRW distance. \item We provide comprehensive empirical studies to evaluate our algorithms on synthetic and real datasets. Experimental results confirm our conjecture that the PRW distance performs better than its convex relaxation counterpart, the SRW distance. Moreover, we show that the RGAS and RAGAS algorithms are faster than the Frank-Wolfe algorithm while the RAGAS algorithm is more robust than the RGAS algorithm. \end{enumerate} \paragraph{Organization.} The remainder of the paper is organized as follows. In Section~\ref{sec:background}, we present the nonconvex max-min optimization model for computing the PRW distance and its entropic regularized version. We also briefly summarize various concepts of geometry and optimization over the Stiefel manifold. In Section~\ref{sec:algorithm}, we propose and analyze the RGAS and RAGAS algorithms for computing the entropic regularized PRW distance and prove that both algorithms achieve the finite-time guarantee under stationarity measure. In Section~\ref{sec:experiment}, we conduct extensive experiments on both synthetic and real datasets, demonstrating that the PRW distance provides a computational advantage over the SRW distance in real application problems. In the supplementary material, we provide further background materials on Riemannian optimization, experiments with the algorithms, and proofs for key results. 
For the sake of completeness, we derive a near-optimality condition (Definition~\ref{def:near-stationarity} and~\ref{def:near-optimal-pair}) for the max-min optimization model and propose another \emph{Riemannian SuperGradient Ascent with Network simplex iteration} (RSGAN) algorithm for computing the PRW distance without regularization and prove the finite-time convergence under the near-optimality condition. \paragraph{Notation.} We let $[n]$ be the set $\{1, 2, \ldots, n\}$ and $\mathbb{R}^n_+$ be the set of all vectors in $\mathbb{R}^n$ with nonnegative components. $\textbf{1}_n$ and $\textbf{0}_n$ are the $n$-dimensional vectors of ones and zeros. $\Delta^n = \{u \in \mathbb{R}^n_+: \textbf{1}_n^\top u = 1\}$ is the probability simplex. For a vector $x \in \mathbb{R}^n$, the Euclidean norm stands for $\|x\|$ and the Dirac delta function at $x$ stands for $\delta_x(\cdot)$. The notation $\textnormal{Diag}\,(x)$ denotes an $n \times n$ diagonal matrix with $x$ as the diagonal elements. For a matrix $X \in \mathbb{R}^{n \times n}$, the right and left marginals are denoted $r(X) = X\textbf{1}_n$ and $c(X) = X^\top\textbf{1}_n$, and $\|X\|_\infty = \max_{1 \leq i, j \leq n} |X_{ij}|$ and $\|X\|_1 = \sum_{1 \leq i, j \leq n} |X_{ij}|$. The notation $\textnormal{diag}(X)$ stands for an $n$-dimensional vector which corresponds to the diagonal elements of $X$. If $X$ is symmetric, $\lambda_{\max}(X)$ stands for largest eigenvalue. The notation $\textnormal{St}(d, k) := \{X \in \mathbb{R}^{d \times k}: X^\top X = I_k\}$ denotes the Stiefel manifold. For $X, Y \in \mathbb{R}^{n \times n}$, we denote $\langle X, Y\rangle = \textnormal{Trace}(X^\top Y)$ as the Euclidean inner product and $\|X\|_F$ as the Frobenius norm of $X$. We let $P_\mathcal{S}$ be the orthogonal projection onto a closed set $\mathcal{S}$ and $\textnormal{dist}(X, \mathcal{S}) = \inf_{Y \in \mathcal{S}} \|X-Y\|_F$ denotes the distance between $X$ and $\mathcal{S}$. Lastly, $a = O(b(n, d, \epsilon))$ stands for the upper bound $a \leq C \cdot b(n, d, \epsilon)$ where $C>0$ is independent of $n$ and $1/\epsilon$ and $a = \widetilde{O}(b(n, d, \epsilon))$ indicates the same inequality where $C$ depends on the logarithmic factors of $n$, $d$ and $1/\epsilon$. \section{Experiments}\label{sec:experiment} We conduct numerical experiments to evaluate the computation of the PRW distance by the RGAS and RAGAS algorithms. The baseline approaches include the computation of SRW distance with the Frank-Wolfe algorithm\footnote{Available in https://github.com/francoispierrepaty/SubspaceRobustWasserstein.}~\citep{Paty-2019-Subspace} and the computation of Wasserstein distance with the POT software package\footnote{Available in https://github.com/PythonOT/POT}~\citep{Flamary-2017-Pot}. For the RGAS and RAGAS algorithms, we set $\gamma = 0.01$ unless stated otherwise, $\beta = 0.8$ and $\alpha=10^{-6}$. For the experiments on the MNIST digits, we run the feature extractor pretrained in PyTorch 1.5. All the experiments are implemented in Python 3.7 with Numpy 1.18 on a ThinkPad X1 with an Intel Core i7-10710U (6 cores and 12 threads) and 16GB memory, equipped with Ubuntu 20.04. \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{figs/exp1_hypercube_value_1.png} \includegraphics[width=0.45\textwidth]{figs/exp1_hypercube_value_2.png} \caption{Mean estimation error (left) and mean subspace estimation error (right), with varying number of points $n$. 
The shaded areas represent the 10\%-90\% and 25\%-75\% quantiles over 100 samples.}\label{fig:exp_cube_1} \end{figure} \begin{figure}[!t] \centering \includegraphics[clip, trim=10 10 10 10, width=0.32\textwidth]{figs/exp1_plan_W_100.png} \includegraphics[clip, trim=10 10 10 10, width=0.32\textwidth]{figs/exp1_plan_SRW_100.png} \includegraphics[clip, trim=10 10 10 10, width=0.32\textwidth]{figs/exp1_plan_PRW_100.png} \includegraphics[clip, trim=10 10 10 10, width=0.32\textwidth]{figs/exp1_plan_W_250.png} \includegraphics[clip, trim=10 10 10 10, width=0.32\textwidth]{figs/exp1_plan_SRW_250.png} \includegraphics[clip, trim=10 10 10 10, width=0.32\textwidth]{figs/exp1_plan_PRW_250.png} \caption{Fragmented hypercube with $(n, d) = (100, 30)$ (above) and $(n, d) = (250, 30)$ (bottom). Optimal mappings in the Wasserstein space (left), in the SRW space (middle) and the PRW space (right). Geodesics in the PRW space are robust to statistical noise.} \label{fig:exp_cube_2} \end{figure} \paragraph{Fragmented hypercube.} We conduct our first experiment on the fragmented hypercube, which is also used to evaluate the SRW distance~\citep{Paty-2019-Subspace} and FactoredOT~\citep{Forrow-2019-Statistical}. In particular, we consider $\mu = \mathcal{U}([-1,1]^d)$, which is a uniform distribution over a hypercube, and $\nu = T_{\#} \mu$, which is the push-forward of $\mu$ under the map $T(x) = x + 2\textnormal{sign}(x)\odot(\sum_{k=1}^{k^*} e_k)$. Note that $\textnormal{sign}(\cdot)$ is taken element-wise, $k^* \in [d]$ and $(e_1, \ldots, e_d)$ is the canonical basis of $\mathbb{R}^d$. By definition, $T$ divides $[-1,1]^d$ into four different hyper-rectangles and is the subgradient of a convex function. This together with Brenier's theorem (cf.\ \citet[Theorem~2.12]{Villani-2003-Topics}) implies that $T$ is an optimal transport map between $\mu$ and $\nu = T_{\#}\mu$ with $\mathcal{W}_2^2(\mu, \nu) = 4k^*$. Notice that the displacement vector $T(x) - x$ is optimal for any $x \in \mathbb{R}^d$ and always belongs to the $k^*$-dimensional subspace spanned by $\{e_j\}_{j \in [k^*]}$. Putting these pieces together yields that $\mathcal{P}_k^2(\mu, \nu) = 4k^*$ for any $k \geq k^*$. Figure~\ref{fig:exp1_dim_k} presents the behavior of $\mathcal{P}_k^2(\widehat{\mu}, \widehat{\nu})$ as a function of $k$ for $k^* \in \{2, 4, 7, 10\}$, where $\widehat{\mu}$ and $\widehat{\nu}$ are empirical distributions corresponding to $\mu$ and $\nu$, respectively. The sequence is concave and increases slowly after $k = k^*$, which makes sense since the last $d - k^*$ dimensions only represent noise. The rigorous argument for the SRW distance is presented in~\citet[Proposition~3]{Paty-2019-Subspace} but is hard to extend here since the PRW distance cannot be characterized as a sum of eigenvalues. Figure~\ref{fig:exp_cube_1} presents the mean estimation error and mean subspace estimation error with varying number of points $n \in \{25, 50, 100, 250, 500, 1000\}$. In particular, $\widehat{U}$ is an approximate optimal subspace projection achieved by computing $\mathcal{P}_k^2(\widehat{\mu}, \widehat{\nu})$ with our algorithms and $\Omega^*$ is the optimal projection matrix onto the $k^*$-dimensional subspace spanned by $\{e_j\}_{j \in [k^*]}$. We set $k^*=2$ here and $\widehat{\mu}$ and $\widehat{\nu}$ are constructed from $\mu$ and $\nu$ respectively with $n$ points each. The quality of solutions obtained by the RGAS and RAGAS algorithms is roughly the same.
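For reference, a minimal sketch of this construction (the sample sizes, dimensions, and random seed are illustrative choices):
\begin{verbatim}
# Illustrative construction of the fragmented hypercube: mu = U([-1,1]^d) and
# nu = T_# mu with T(x) = x + 2*sign(x) (elementwise) sum_{k <= k*} e_k.
import numpy as np

def fragmented_hypercube(n=100, d=30, k_star=2, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, size=(n, d))            # n samples from mu
    Z = rng.uniform(-1, 1, size=(n, d))            # independent samples from mu
    mask = np.zeros(d); mask[:k_star] = 1.0        # indicator of e_1, ..., e_{k*}
    Y = Z + 2 * np.sign(Z) * mask                  # push-forward by T
    r = c = np.full(n, 1.0 / n)                    # uniform weights
    return X, Y, r, c
\end{verbatim}
The empirical measures $\widehat{\mu}$ and $\widehat{\nu}$ are then the uniform measures supported on the rows of $X$ and $Y$, respectively.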
Figure~\ref{fig:exp_cube_2} presents the optimal transport plan in the Wasserstein space (left), the optimal transport plan in the SRW space (middle), and the optimal transport plan in the PRW space (right) between $\widehat{\mu}$ and $\widehat{\nu}$. We consider two cases, $n = 100$ and $n = 250$, in our experiment and observe that our results are consistent with~\citet[Figure~5]{Paty-2019-Subspace}, showing that both PRW and SRW distances share important properties with the Wasserstein distance. \paragraph{Robustness of $\mathcal{P}_k$ to noise.} We conduct our second experiment on Gaussian distributions\footnote{~\citet{Paty-2019-Subspace} conducted this experiment with their projected supergradient ascent algorithm (cf.\ ~\citet[Algorithm 1]{Paty-2019-Subspace}) with the \textsc{emd} solver from the POT software package. For a fair comparison, we use the Riemannian supergradient ascent algorithm (cf.\ Algorithm~\ref{alg:supergrad-simplex}) with the \textsc{emd} solver here; see the Appendix for details.}. In particular, we consider $\mu = \mathcal{N}(0, \Sigma_1)$ and $\nu = \mathcal{N}(0, \Sigma_2)$ where $\Sigma_1, \Sigma_2 \in \mathbb{R}^{d \times d}$ are positive semidefinite matrices of rank $k^*$. This implies that the supports of $\mu$ and $\nu$ are each contained in a $k^*$-dimensional subspace of $\mathbb{R}^d$. Even though the supports of $\mu$ and $\nu$ can be different, their union is included in a $2k^*$-dimensional subspace. Putting these pieces together yields that $\mathcal{P}_k^2(\mu, \nu) = \mathcal{W}_2^2(\mu, \nu)$ for any $k \geq 2k^*$. In our experiment, we set $d = 20$ and sample 100 independent couples of covariance matrices $(\Sigma_1, \Sigma_2)$, each of which independently follows a Wishart distribution with $k^* = 5$ degrees of freedom. Then we construct the empirical measures $\widehat{\mu}$ and $\widehat{\nu}$ by drawing $n = 100$ points from $\mathcal{N}(0, \Sigma_1)$ and $\mathcal{N}(0, \Sigma_2)$. Figure~\ref{fig:exp_noise_1} presents the mean value of $\mathcal{S}_k^2(\widehat{\mu},\widehat{\nu})/\mathcal{W}_2^2(\widehat{\mu}, \widehat{\nu})$ (left) and $\mathcal{P}_k^2(\widehat{\mu},\widehat{\nu})/\mathcal{W}_2^2(\widehat{\mu}, \widehat{\nu})$ (right) over 100 samples with varying $k$. We plot the curves for both noise-free and noisy data, where white noise ($\mathcal{N}(0, I_d)$) was added to each data point. With moderate noise, the data lies approximately on two $5$-dimensional subspaces, and neither the SRW nor the PRW distance varies much. Our results are consistent with the SRW distance presented in~\citet[Figure~6]{Paty-2019-Subspace}, showing that the PRW distance is also robust to random perturbation of the data. \begin{figure}[!t] \centering \includegraphics[clip, trim=0 0 20 0, width=0.45\textwidth]{figs/exp2_noise_srw.png} \includegraphics[clip, trim=20 0 0 0, width=0.45\textwidth]{figs/exp2_noise_0.png} \caption{Mean normalized SRW distance (left) and mean normalized PRW distance (right) as a function of dimension. The shaded area shows the 10\%-90\% and 25\%-75\% quantiles over the 100 samples.}\label{fig:exp_noise_1} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{figs/exp2_noise_level.png} \includegraphics[width=0.45\textwidth]{figs/exp4_computation_time.png} \caption{(Left) Comparison of mean relative errors over 100 samples, depending on the noise level. The shaded areas show the min-max values and the 10\%-90\% quantiles; (Right) Comparisons of mean computation times on CPU.
The shaded areas show the minimum and maximum values over 50 runs.}\label{fig:exp_noise_2} \end{figure} Figure~\ref{fig:exp_noise_2} (left) presents the comparison of mean relative errors over 100 samples as the noise level varies. In particular, we construct the empirical measures $\widehat{\mu}_\sigma$ and $\widehat{\nu}_\sigma$ by gradually adding Gaussian noise $\sigma\mathcal{N}(0, I_d)$ to the points. The relative errors of the Wasserstein, SRW and PRW distances are defined the same as in~\citet[Section~6.3]{Paty-2019-Subspace}. For small noise levels, the imprecision in the computation of the SRW distance adds to the error caused by the added noise, while the computation of the PRW distance with our algorithms is less sensitive to such noise. When the noise has moderate to high variance, the PRW distance is the most robust to noise, followed by the SRW distance, both of which outperform the Wasserstein distance. \begin{wrapfigure}{r}{0.5\textwidth} \vspace*{-.5em}\includegraphics[width=0.45\textwidth]{figs/exp4_computation_time_lr.png} \caption{Comparisons of mean computation time of the RGAS and RAGAS algorithms on CPU (log-log scale) for different learning rates. The shaded areas show the max-min values over 50 runs.}\label{fig:exp4_computation_time_lr_above} \end{wrapfigure} \paragraph{Computation time of algorithms.} We conduct our third experiment on the fragmented hypercube with dimension $d \in \{25, 50, 100, 250, 500\}$, subspace dimension $k=2$, number of points $n=100$ and threshold $\epsilon=0.001$. For the SRW and the PRW distances, the regularization parameter is set as $\eta = 0.2$ for $d < 250$ and $\eta = 0.5$ otherwise\footnote{Available in https://github.com/francoispierrepaty/SubspaceRobustWasserstein}, and the matrix $C$ (cf.\ Definition~\ref{def:C}) is rescaled to stabilize the algorithms. We stop the RGAS and RAGAS algorithms when $\|U_{t+1} - U_t\|_F/\|U_t\|_F \leq \epsilon$. Figure~\ref{fig:exp_noise_2} (right) presents the mean computation time of the SRW distance with the Frank-Wolfe algorithm~\citep{Paty-2019-Subspace} and the PRW distance with our RGAS and RAGAS algorithms. Our approach is significantly faster since the complexity bound of their approach is quadratic in dimension $d$ while the bounds for our methods are linear in dimension $d$. \begin{figure}[!t] \centering \includegraphics[clip, trim=10 20 10 20, width=0.45\textwidth]{figs/exp4_computation_time_lr_0.png} \includegraphics[clip, trim=20 10 10 20, width=0.45\textwidth]{figs/exp4_computation_time_lr_1.png} \caption{Comparisons of mean computation time of the RGAS and RAGAS algorithms on CPU (log-log scale) for different learning rates. The shaded areas show the max-min values over 50 runs.}\label{fig:exp4_computation_time_lr_bottom} \end{figure} \paragraph{Robustness of algorithms to learning rate.} We conduct our fourth experiment on the fragmented hypercube to evaluate the robustness of our RGAS and RAGAS algorithms by choosing the learning rate $\gamma \in \{0.01, 0.1\}$. The parameter setting is the same as that in the third experiment. Figure~\ref{fig:exp4_computation_time_lr_above} indicates that the RAGAS algorithm is more robust than the RGAS algorithm as the learning rate varies, with smaller variance in computation time (seconds). This is the case especially when the dimension is large, showing the advantage of the adaptive strategies in practice.
To demonstrate the advantage of the adaptive strategies in practice, we initialize the learning rate using four options $\gamma \in \{0.005, 0.01, 0.05, 0.1\}$ and present the results for the RGAS and RAGAS algorithms separately in Figure~\ref{fig:exp4_computation_time_lr_bottom}. This is consistent with the results in Figure~\ref{fig:exp4_computation_time_lr_above} and supports the conclusion that the RAGAS algorithm is more robust to the learning rate than the RGAS algorithm.
\begin{table}[!t]\small \centering\hspace*{-3.5em} \begin{tabular}{|c|ccccccc|} \hline
& D & G & I & KB1 & KB2 & TM & T \\ \hline
D & 0/0 & 0.184/0.126 & 0.185/0.135 & 0.195/0.153 & 0.202/0.162 & 0.186/0.134 & \textbf{0.170}/\textbf{0.105} \\ \hline
G & 0.184/0.126 & 0/0 & \textbf{0.172}/0.101 & 0.196/0.146 & 0.203/0.158 & 0.175/\textbf{0.095} & 0.184/0.128 \\ \hline
I & 0.185/0.135 & 0.172/0.101 & 0/0 & 0.195/0.155 & 0.203/0.166 & \textbf{0.169}/\textbf{0.099} & 0.180/0.134 \\ \hline
KB1 & 0.195/0.153 & 0.196/0.146 & 0.195/0.155 & 0/0 & \textbf{0.164}/\textbf{0.089} & 0.190/0.146 & 0.179/0.132 \\ \hline
KB2 & 0.202/0.162 & 0.203/0.158 & 0.203/0.166 & \textbf{0.164}/\textbf{0.089} & 0/0 & 0.193/0.155 & 0.180/0.138 \\ \hline
TM & 0.186/0.134 & 0.175/\textbf{0.095} & \textbf{0.169}/0.099 & 0.190/0.146 & 0.193/0.155 & 0/0 & 0.182/0.136 \\ \hline
T & \textbf{0.170}/\textbf{0.105} & 0.184/0.128 & 0.180/0.134 & 0.179/0.132 & 0.180/0.138 & 0.182/0.136 & 0/0 \\ \hline
\end{tabular} \caption{Each entry is the $\mathcal{S}_k^2/\mathcal{P}_k^2$ distance between different movie scripts. D = Dunkirk, G = Gravity, I = Interstellar, KB1 = Kill Bill Vol.1, KB2 = Kill Bill Vol.2, TM = The Martian, T = Titanic.}\label{tab:exp_cinema} \end{table}
\begin{table}[!t]\small \centering \begin{tabular}{|c|cccccc|}\hline
& H5 & H & JC & TMV & O & RJ \\ \hline
H5 & 0/0 & \textbf{0.222}/\textbf{0.155} & 0.230/0.163 & 0.228/0.166 & 0.227/0.170 & 0.311/0.272 \\ \hline
H & 0.222/0.155 & 0/0 & 0.224/0.163 & 0.221/0.159 & \textbf{0.220}/\textbf{0.153} & 0.323/0.264 \\ \hline
JC & 0.230/0.163 & 0.224/0.163 & 0/0 & 0.221/\textbf{0.156} & \textbf{0.219}/0.157 & 0.246/0.191 \\ \hline
TMV & 0.228/0.166 & 0.221/0.159 & \textbf{0.221}/0.156 & 0/0 & 0.222/\textbf{0.154} & 0.292/0.230 \\ \hline
O & 0.227/0.170 & 0.220/\textbf{0.153} & \textbf{0.219}/0.157 & 0.222/0.154 & 0/0 & 0.264/0.215 \\ \hline
RJ & 0.311/0.272 & 0.323/0.264 & \textbf{0.246}/\textbf{0.191} & 0.292/0.230 & 0.264/0.215 & 0/0 \\ \hline
\end{tabular} \caption{Each entry is the $\mathcal{S}_k^2/\mathcal{P}_k^2$ distance between different Shakespeare plays. H5 = Henry V, H = Hamlet, JC = Julius Caesar, TMV = The Merchant of Venice, O = Othello, RJ = Romeo and Juliet.}\label{tab:exp_shakespeare} \end{table}
\paragraph{Experiments on real data.} We compute the PRW and SRW distances between all pairs of items in a corpus of seven \textit{movie scripts}. Each script is tokenized into a list of words, which is transformed into a measure over $\mathbb{R}^{300}$ using \textsc{word2vec}~\citep{Mikolov-2018-Advances}, with each atom weighted by its word frequency. The SRW and PRW distances between all pairs of movies are in Table~\ref{tab:exp_cinema}, which is consistent with the SRW distance in~\citet[Figure~9]{Paty-2019-Subspace} and demonstrates that the PRW distance is smaller than the SRW distance. We also compute the SRW and PRW distances for a preprocessed corpus of Shakespeare plays. The PRW distance is consistently smaller than the corresponding SRW distance; see Table~\ref{tab:exp_shakespeare}.
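As an indication of how such measures can be built (the tokenized input and the embedding lookup \texttt{word\_vectors} are hypothetical placeholders; any pretrained \textsc{word2vec} model providing 300-dimensional vectors would do):
\begin{verbatim}
# Hedged sketch of turning a tokenized script into a discrete measure over R^300:
# atoms are word embeddings and weights are normalized word frequencies.
# "word_vectors" is a placeholder mapping word -> 300-dimensional numpy array.
from collections import Counter
import numpy as np

def script_to_measure(tokens, word_vectors):
    counts = Counter(w for w in tokens if w in word_vectors)
    words = list(counts)
    X = np.stack([word_vectors[w] for w in words])     # atoms in R^300
    r = np.array([counts[w] for w in words], dtype=float)
    return X, r / r.sum()                               # (support, weights)
\end{verbatim}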
Figure~\ref{fig:exp_word_cloud} displays the projections of the two measures associated with \textit{Dunkirk} versus \textit{Interstellar} (left) and \textit{Julius Caesar} versus \textit{The Merchant of Venice} (right) onto their optimal 2-dimensional subspace. To further show the versatility of SRW and PRW distances, we extract the features of different MNIST digits using a convolutional neural network (CNN) and compute the scaled SRW and PRW distances between all pairs of MNIST digits. In particular, we use an off-the-shelf PyTorch implementation\footnote{https://github.com/pytorch/examples/blob/master/mnist/main.py} and pretrain it on MNIST, achieving 98.6\% classification accuracy on the test set. We extract the 128-dimensional features of each digit from the penultimate layer of the CNN. Since the MNIST test set contains 1000 images per digit, each digit is associated with a discrete measure supported on 1000 points in $\mathbb{R}^{128}$. Then we compute the optimal 2-dimensional projection distance between the measures associated with each pair of digit classes and divide each distance by 1000; see Table~\ref{tab:MNIST} for the details. The minimum SRW and PRW distances in each row are highlighted to indicate the most similar digit class for that row, which coincides with our intuition. For example, D1 is sometimes confused with D7 (0.58/0.47), while D4 is often confused with D9 (0.49/0.38) in handwriting. \paragraph{Summary.} The PRW distance has less discriminative power than the SRW distance, which is equivalent to the Wasserstein distance~\citep[Proposition~2]{Paty-2019-Subspace}. Such equivalence implies that the SRW distance suffers from the curse of dimensionality in theory. In contrast, the PRW distance has much better sample complexity than the SRW distance if the distributions satisfy mild conditions~\citep{Niles-2019-Estimation, Lin-2020-Projection}. Our empirical evaluation shows that the PRW distance is computationally favorable and more robust than the SRW and Wasserstein distances when the noise has moderate to high variance.
\begin{table}[!t]\small \centering\hspace*{-6em} \begin{tabular}{|c|cccccccccc|} \hline & D0 & D1 & D2 & D3 & D4 & D5 & D6 & D7 & D8 & D9 \\ \hline D0 & 0/0 & 0.97/0.79 & \textbf{0.80}/\textbf{0.59} & 1.20/0.92 & 1.23/0.90 & 1.03/0.71 & 0.81/0.59 & 0.86/0.66 & 1.06/0.79 & 1.09/0.81 \\ \hline D1 & 0.97/0.79 & 0/0 & 0.66/0.51 & 0.86/0.72 & 0.68/0.54 & 0.84/0.70 & 0.80/0.66 & \textbf{0.58}/\textbf{0.47} & 0.88/0.71 & 0.85/0.72 \\ \hline D2 & 0.80/0.59 & \textbf{0.66}/\textbf{0.51} & 0/0 & 0.73/0.54 & 1.08/0.79 & 1.08/0.83 & 0.90/0.70 & 0.70/0.53 & 0.68/0.52 & 1.07/0.81 \\ \hline D3 & 1.20/0.92 & 0.86/0.72 & 0.73/0.54 & 0/0 & 1.20/0.87 & \textbf{0.58}/\textbf{0.43} & 1.23/0.91 & 0.72/0.55 & 0.88/0.64 & 0.83/0.65 \\ \hline D4 & 1.23/0.90 & 0.68/0.54 & 1.08/0.79 & 1.20/0.87 & 0/0 & 1.00/0.75 & 0.85/0.62 & 0.79/0.61 & 1.09/0.78 & \textbf{0.49}/\textbf{0.38} \\ \hline D5 & 1.03/0.71 & 0.84/0.70 & 1.08/0.83 & \textbf{0.58}/\textbf{0.43} & 1.00/0.75 & 0/0 & 0.72/0.51 & 0.91/0.68 & 0.72/0.53 & 0.78/0.59 \\ \hline D6 & 0.81/0.59 & 0.80/0.66 & 0.90/0.70 & 1.23/0.91 & 0.85/0.62 & \textbf{0.72}/\textbf{0.51} & 0/0 & 1.11/0.83 & 0.92/0.66 & 1.22/0.83 \\ \hline D7 & 0.86/0.66 & \textbf{0.58}/0.47 & 0.70/0.53 & 0.72/0.55 & 0.79/0.61 & 0.91/0.68 & 1.11/0.83 & 0/0 & 1.07/0.78 & 0.62/\textbf{0.46} \\ \hline D8 & 1.06/0.79 & 0.88/0.71 & \textbf{0.68}/\textbf{0.52} & 0.88/0.64 & 1.09/0.78 & 0.72/0.53 & 0.92/0.66 & 1.07/0.78 & 0/0 & 0.87/0.63 \\ \hline D9 & 1.09/0.81 & 0.85/0.72 & 1.07/0.81 & 0.83/0.65 & \textbf{0.49}/\textbf{0.38} & 0.78/0.59 & 1.22/0.83 & 0.62/0.46 & 0.87/0.63 & 0/0 \\ \hline \end{tabular} \caption{Each entry is scaled $\mathcal{S}_k^2/\mathcal{P}_k^2$ distance between different hand-written digits.}\label{tab:MNIST} \end{table} \begin{figure}[!t] \centering \includegraphics[clip, trim=0 0 0 0, width=1\textwidth]{figs/word_clouds.pdf} \caption{Optimal 2-dimensional projections between ``Dunkirk" and ``Interstellar" (left) and optimal 2-dimensional projections between ``Julius Caesar" and ``The Merchant of Venice" (right). Common words of two items are displayed in violet and the 30 most frequent words of each item are displayed.}\label{fig:exp_word_cloud}\vspace*{-.5em} \end{figure} \section{Projection Robust Wasserstein Distance}\label{sec:background} In this section, we present the basic setup and optimality conditions for the computation of the projection robust 2-Wasserstein (PRW) distance between two discrete probability measures with at most $n$ components. We also review basic ideas in Riemannian optimization. \subsection{Structured max-min optimization model} In this section we define the PRW distance~\citep{Paty-2019-Subspace} and show that computing the PRW distance between two discrete probability measures supported on at most $n$ points reduces to solving a structured max-min optimization model over the Stiefel manifold and the transportation polytope. Let $\mathscr{P}(\mathbb{R}^d)$ be the set of Borel probability measures in $\mathbb{R}^d$ and let $\mathscr{P}_2(\mathbb{R}^d)$ be the subset of $\mathscr{P}(\mathbb{R}^d)$ consisting of probability measures that have finite second moments. Let $\mu, \nu \in \mathscr{P}_2(\mathbb{R}^d)$ and $\Pi(\mu, \nu)$ be the set of couplings between $\mu$ and $\nu$. The 2-Wasserstein distance~\citep{Villani-2008-Optimal} is defined by \begin{equation*} \mathcal{W}_2(\mu, \nu) \ := \ \left(\inf_{\pi \in \Pi(\mu, \nu)} \int \|x-y\|^2 \ d\pi(x, y)\right)^{1/2}. 
\end{equation*} To define the PRW distance, we require the notion of the push-forward of a measure by an operator. Letting $\mathcal{X}, \mathcal{Y} \subseteq \mathbb{R}^d$ and $T: \mathcal{X} \rightarrow \mathcal{Y}$ be a Borel map, the push-forward of $\mu \in \mathscr{P}(\mathcal{X})$ by $T$, denoted $T_{\#} \mu \in \mathscr{P}(\mathcal{Y})$, is the measure satisfying $T_{\#} \mu(A) = \mu(T^{-1}(A))$ for any Borel set $A \subseteq \mathcal{Y}$. \begin{definition} For $\mu, \nu \in \mathscr{P}_2(\mathbb{R}^d)$, let $\mathcal{G}_k = \{E \subseteq \mathbb{R}^d \mid \dim(E) = k\}$ be the Grassmannian of $k$-dimensional subspaces of $\mathbb{R}^d$ and let $P_E$ be the orthogonal projector onto $E$ for all $E \in \mathcal{G}_k$. The $k$-dimensional PRW distance is defined as $\mathcal{P}_k(\mu, \nu) := \sup_{E \in \mathcal{G}_k} \mathcal{W}_2(P_{E\#}\mu, P_{E\#}\nu)$. \end{definition} \citet[Proposition~5]{Paty-2019-Subspace} have shown that there exists a subspace $E^* \in \mathcal{G}_k$ such that $\mathcal{P}_k(\mu, \nu) = \mathcal{W}_2(P_{E^*\#}\mu, P_{E^*\#}\nu)$ for any $k \in [d]$ and $\mu, \nu \in \mathscr{P}_2(\mathbb{R}^d)$. For any $E \in \mathcal{G}_k$, the mapping $\pi \mapsto \int \|P_E(x-y)\|^2 \ d\pi(x, y)$ is lower semi-continuous. This, together with the compactness of $\Pi(\mu, \nu)$, implies that the infimum is attained. Therefore, we obtain a structured max-min optimization problem: \begin{equation*} \mathcal{P}_k(\mu, \nu) = \max_{E \in \mathcal{G}_k} \min_{\pi \in \Pi(\mu, \nu)} \left(\int \|P_E(x-y)\|^2 \ d\pi(x, y)\right)^{1/2}. \end{equation*} Let us now consider this general problem in the case of discrete probability measures, which is the focus of the current paper. Let $\{x_1, x_2, \ldots, x_n\} \subseteq \mathbb{R}^d$ and $\{y_1, y_2, \ldots, y_n\} \subseteq \mathbb{R}^d$ denote sets of $n$ atoms, and let $(r_1, r_2, \ldots, r_n) \in \Delta^n$ and $(c_1, c_2, \ldots, c_n) \in \Delta^n$ denote weight vectors. We define discrete probability measures $\mu := \sum_{i=1}^n r_i\delta_{x_i}$ and $\nu := \sum_{j=1}^n c_j\delta_{y_j}$. In this setting, the computation of the $k$-dimensional PRW distance between $\mu$ and $\nu$ reduces to solving a structured max-min optimization model where the maximization and minimization are performed over the Stiefel manifold $\textnormal{St}(d, k) := \{U \in \mathbb{R}^{d \times k} \mid U^\top U = I_k\}$ and the transportation polytope $\Pi(\mu, \nu) := \{\pi \in \mathbb{R}_+^{n \times n} \mid r(\pi) = r, \ c(\pi) = c\}$, respectively, where $r(\pi) = \pi\textbf{1}_n$ and $c(\pi) = \pi^\top\textbf{1}_n$ denote the row and column marginals of $\pi$. Formally, we have \begin{equation}\label{prob:main} \max\limits_{U \in \mathbb{R}^{d \times k}} \min\limits_{\pi \in \mathbb{R}_+^{n \times n}} \sum_{i=1}^n \sum_{j=1}^n \pi_{i, j}\|U^\top x_i - U^\top y_j\|^2 \quad \textnormal{s.t.} \ U^\top U = I_k, \ r(\pi) = r, \ c(\pi) = c. \end{equation} The computation of this PRW distance raises numerous challenges. Indeed, there is no guarantee of finding a global Nash equilibrium, since the problem contains nonconvex optimization, which is NP-hard in general~\citep{Murty-1987-Some}, as a special case; moreover, Sion's minimax theorem~\citep{Sion-1958-General} is not applicable here due to the lack of quasi-convex-concave structure. More practically, solving Eq.~\eqref{prob:main} is expensive since (i) preserving the orthogonality constraint requires singular value decompositions (SVDs) of a $d \times d$ matrix, and (ii) projecting onto the transportation polytope results in a costly quadratic network flow problem.
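To make the structure of Eq.~\eqref{prob:main} concrete, note that for a fixed $U \in \textnormal{St}(d, k)$ the inner minimization is an optimal transport problem between the projected atoms. The following snippet is a minimal sketch (not part of the proposed algorithms) of how this inner value can be evaluated, assuming the \textsc{POT} package, with \texttt{X} and \texttt{Y} the $n \times d$ arrays of atoms and \texttt{r}, \texttt{c} the weight vectors:
\begin{verbatim}
import numpy as np
import ot   # POT: Python Optimal Transport

def inner_ot_value(U, X, Y, r, c):
    # Inner minimization of Eq. (prob:main) for a fixed U with orthonormal
    # columns: min_{pi in Pi(mu, nu)} sum_ij pi_ij ||U^T x_i - U^T y_j||^2.
    M = ot.dist(X @ U, Y @ U)    # squared Euclidean costs of projected atoms
    pi = ot.emd(r, c, M)         # exact OT plan via the network simplex method
    return float(np.sum(pi * M)), pi
\end{verbatim}
The difficult part is the outer maximization of this value over the Stiefel manifold, which is the subject of the algorithms developed in this paper.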
To sidestep these difficulties, \citet{Paty-2019-Subspace} proposed a convex surrogate for Eq.~\eqref{prob:main}, which underlies the subspace robust Wasserstein (SRW) distance: \begin{equation}\label{prob:main-CCP} \max\limits_{0 \preceq \Omega \preceq I_d} \min\limits_{\pi \in \mathbb{R}_+^{n \times n}} \sum_{i=1}^n \sum_{j=1}^n \pi_{i, j}(x_i - y_j)^\top\Omega(x_i - y_j), \quad \textnormal{s.t.} \ \textnormal{Trace}(\Omega)=k, \ r(\pi) = r, \ c(\pi) = c. \end{equation} Eq.~\eqref{prob:main-CCP} is intrinsically a bilinear minimax optimization model, which makes the computation tractable. Indeed, the constraint set $\mathcal{R} = \{\Omega \in \mathbb{R}^{d \times d} \mid 0 \preceq \Omega \preceq I_d, \textnormal{Trace}(\Omega)=k\}$ is convex and the objective function is bilinear since it can be rewritten as $\langle \Omega, \sum_{i=1}^n \sum_{j=1}^n \pi_{i, j} (x_i - y_j)(x_i - y_j)^\top\rangle$. Eq.~\eqref{prob:main-CCP} is, however, only a convex relaxation of Eq.~\eqref{prob:main} and its solutions are not necessarily good approximate solutions for the original problem. Moreover, the existing algorithms for solving Eq.~\eqref{prob:main-CCP} are also unsatisfactory---in each loop, we need to solve an OT or entropic regularized OT problem exactly and project a $d \times d$ matrix onto the set $\mathcal{R}$ using an SVD, both of which are computationally expensive as $d$ increases (see Algorithms~1 and~2 in \citet{Paty-2019-Subspace}). \begin{algorithm}[!t] \caption{Riemannian Gradient Ascent with Sinkhorn Iteration (RGAS)}\label{alg:grad-sinkhorn} \begin{algorithmic}[1] \STATE \textbf{Input:} $\{(x_i, r_i)\}_{i \in [n]}$ and $\{(y_j, c_j)\}_{j \in [n]}$, $k = \widetilde{O}(1)$, $U_0 \in \textnormal{St}(d, k)$ and $\epsilon$. \STATE \textbf{Initialize:} $\widehat{\epsilon} \leftarrow \frac{\epsilon}{10\|C\|_\infty}$, $\eta \leftarrow \frac{\epsilon\min\{1, 1/\bar{\theta}\}}{40\log(n)}$ and $\gamma \leftarrow \frac{1}{(8 L_1^2 + 16L_2)\|C\|_\infty + 16 \eta^{-1}L_1^2\|C\|_\infty^2}$. \FOR{$t = 0, 1, 2, \ldots$} \STATE Compute $\pi_{t+1} \leftarrow \textsc{regOT}(\{(x_i, r_i)\}_{i \in [n]}, \{(y_j, c_j)\}_{j \in [n]}, U_t, \eta, \widehat{\epsilon})$. \STATE Compute $\xi_{t+1} \leftarrow P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\pi_{t+1}}U_t)$. \STATE Compute $U_{t+1} \leftarrow \textnormal{Retr}_{U_t}(\gamma\xi_{t+1})$. \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Entropic regularized projection robust Wasserstein} Eq.~\eqref{prob:main} has structure that can be exploited. Indeed, fixing $U \in \textnormal{St}(d, k)$, the problem reduces to minimizing a linear function over the transportation polytope, i.e., the OT problem. Therefore, we can reformulate Eq.~\eqref{prob:main} as the maximization of the function $f(U) := \min_{\pi \in \Pi(\mu, \nu)} \sum_{i=1}^n \sum_{j=1}^n \pi_{i, j} \|U^\top x_i - U^\top y_j\|^2$ over the Stiefel manifold $\textnormal{St}(d, k)$. Since the OT problem may admit multiple optimal solutions, $f$ is in general not differentiable, which makes optimization over the Stiefel manifold hard~\citep{Absil-2019-Collection}. Computations are greatly facilitated by adding smoothness, which allows the use of gradient-type and adaptive gradient-type algorithms. This inspires us to consider an entropic regularized version of Eq.~\eqref{prob:main}, where an entropy penalty is added to the PRW distance.
The resulting optimization model is as follows: \begin{equation}\label{prob:main-regularized} \max\limits_{U \in \mathbb{R}^{d \times k}} \min\limits_{\pi \in \mathbb{R}_+^{n \times n}} \sum_{i=1}^n \sum_{j=1}^n \pi_{i, j}\|U^\top x_i - U^\top y_j\|^2 - \eta H(\pi) \quad \textnormal{s.t.} \ U^\top U = I_k, \ r(\pi) = r, \ c(\pi) = c, \end{equation} where $\eta > 0$ is the regularization parameter and $H(\pi) := - \langle \pi, \log(\pi) - \textbf{1}_n\textbf{1}_n^\top\rangle$ denotes the entropic regularization term. We refer to Eq.~\eqref{prob:main-regularized} as the computation of the \emph{entropic regularized PRW distance}. Accordingly, we define the function $f_\eta(U) := \min_{\pi \in \Pi(\mu, \nu)} \{\sum_{i=1}^n \sum_{j=1}^n \pi_{i, j} \|U^\top x_i - U^\top y_j\|^2 - \eta H(\pi)\}$ and reformulate Eq.~\eqref{prob:main-regularized} as the maximization of the differentiable function $f_\eta$ over the Stiefel manifold $\textnormal{St}(d, k)$. Indeed, for any $U \in \textnormal{St}(d, k)$ and fixed $\eta > 0$, the mapping $\pi \mapsto \sum_{i=1}^n \sum_{j=1}^n \pi_{i, j}\|U^\top x_i - U^\top y_j\|^2 - \eta H(\pi)$ has a unique minimizer $\pi^* \in \Pi(\mu, \nu)$. When $\eta$ is large, the optimal value of Eq.~\eqref{prob:main-regularized} may yield a poor approximation of Eq.~\eqref{prob:main}. To guarantee a good approximation, we scale the regularization parameter $\eta$ as a function of the desired accuracy of the approximation. Formally, we consider the following relaxed optimality condition for $\widehat{\pi} \in \Pi(\mu, \nu)$ given $U \in \textnormal{St}(d, k)$. \begin{definition}\label{def:approx_transportation_plan} The transportation plan $\widehat{\pi} \in \Pi(\mu, \nu)$ is called an \emph{$\epsilon$-approximate optimal transportation plan for a given $U \in \textnormal{St}(d, k)$} if the following inequality holds: \begin{equation*} \sum_{i=1}^n \sum_{j=1}^n \widehat{\pi}_{i, j} \|U^\top x_i - U^\top y_j\|^2 \ \leq \ \min\limits_{\pi \in \Pi(\mu, \nu)} \sum_{i=1}^n \sum_{j=1}^n \pi_{i, j} \|U^\top x_i - U^\top y_j\|^2 + \epsilon. \end{equation*} \end{definition} \subsection{Optimality condition} Recall that the computation of the PRW distance in Eq.~\eqref{prob:main} and the entropic regularized PRW distance in Eq.~\eqref{prob:main-regularized} are equivalent to \begin{equation}\label{prob:Stiefel-nonsmooth} \max\limits_{U \in \textnormal{St}(d, k)} \ \left\{f(U) := \min\limits_{\pi \in \Pi(\mu, \nu)} \sum_{i=1}^n \sum_{j=1}^n \pi_{i, j} \|U^\top x_i - U^\top y_j\|^2\right\}, \end{equation} and \begin{equation}\label{prob:Stiefel-smooth} \max\limits_{U \in \textnormal{St}(d, k)} \ \left\{f_\eta(U) := \min\limits_{\pi \in \Pi(\mu, \nu)} \sum_{i=1}^n \sum_{j=1}^n \pi_{i, j} \|U^\top x_i - U^\top y_j\|^2 - \eta H(\pi)\right\}. \end{equation} Since $\textnormal{St}(d, k)$ is a compact matrix submanifold of $\mathbb{R}^{d \times k}$~\citep{Boothby-1986-Introduction}, Eq.~\eqref{prob:Stiefel-nonsmooth} and Eq.~\eqref{prob:Stiefel-smooth} are both special instances of optimization over the Stiefel manifold. The dimension of $\textnormal{St}(d, k)$ is equal to $dk - k(k+1)/2$ and the tangent space at the point $Z \in \textnormal{St}(d, k)$ is defined by $\textnormal{T}_Z\textnormal{St} := \{\xi \in \mathbb{R}^{d \times k}: \xi^\top Z + Z^\top\xi = 0\}$.
We endow $\textnormal{St}(d, k)$ with the Riemannian metric inherited from the Euclidean inner product $\langle X, Y\rangle$ for any $X, Y \in \textnormal{T}_Z\textnormal{St}$ and $Z \in \textnormal{St}(d, k)$. Then the projection of $G \in \mathbb{R}^{d \times k}$ onto $\textnormal{T}_Z\textnormal{St}$ is given by~\citet[Example~3.6.2]{Absil-2009-Optimization}: $P_{\textnormal{T}_Z\textnormal{St}}(G) = G - Z(G^\top Z + Z^\top G)/2$. We make use of the notion of a \textit{retraction}, which is a first-order approximation of the exponential mapping on the manifold and which is amenable to computation~\citep[Definition~4.1.1]{Absil-2009-Optimization}. For the Stiefel manifold, we have the following definition: \begin{definition}\label{def:retraction} A retraction on $\textnormal{St} \equiv \textnormal{St}(d, k)$ is a smooth mapping $\textnormal{Retr}: \textnormal{T}\textnormal{St} \rightarrow \textnormal{St}$ from the tangent bundle $\textnormal{T}\textnormal{St}$ onto $\textnormal{St}$ such that the restriction of $\textnormal{Retr}$ onto $\textnormal{T}_Z\textnormal{St}$, denoted by $\textnormal{Retr}_Z$, satisfies that (i) $\textnormal{Retr}_Z(0) = Z$ for all $Z \in \textnormal{St}$, where $0$ denotes the zero element of $\textnormal{T}_Z\textnormal{St}$, and (ii) for any $Z \in \textnormal{St}$, it holds that $\lim_{\xi \in \textnormal{T}_Z\textnormal{St}, \xi \rightarrow 0} \|\textnormal{Retr}_Z(\xi) - (Z + \xi)\|_F/\|\xi\|_F = 0$. \end{definition} The retraction on the Stiefel manifold has the following well-known properties~\citep{Boumal-2019-Global, Liu-2019-Quadratic}, which are important to the subsequent analysis in this paper. \begin{proposition}\label{prop:retraction} There exist constants $L_1 > 0$ and $L_2 > 0$ such that, for all $Z \in \textnormal{St} \equiv \textnormal{St}(d, k)$ and $\xi \in \textnormal{T}_Z\textnormal{St}$, the following two inequalities hold: \begin{eqnarray*} \|\textnormal{Retr}_Z(\xi) - Z\|_F & \leq & L_1\|\xi\|_F, \\ \|\textnormal{Retr}_Z(\xi) - (Z + \xi)\|_F & \leq & L_2\|\xi\|_F^2. \end{eqnarray*} \end{proposition} For the sake of completeness, we list four retractions~\citep{Edelman-1998-Geometry, Wen-2013-Feasible, Liu-2019-Quadratic, Chen-2020-Proximal} on the Stiefel manifold that are popular in practice. Determining which of them is the most efficient within our algorithms is still an open question; see the discussion after~\citet[Theorem~3]{Liu-2019-Quadratic} or before~\citet[Fact~3.6]{Chen-2020-Proximal}. \begin{itemize} \item \textbf{Exponential mapping.} It takes $8dk^2 + O(k^3)$ flops and has the closed-form expression: \begin{equation*} \textnormal{Retr}_Z^{\exp}(\xi) \ = \ \begin{bmatrix} Z & Q\end{bmatrix}\exp\left(\begin{bmatrix} -Z^\top\xi & -R^\top \\ R & 0 \end{bmatrix}\right)\begin{bmatrix} I_k \\ 0 \end{bmatrix}, \end{equation*} where $QR = -(I_d - ZZ^\top)\xi$ is the unique QR factorization. \item \textbf{Polar decomposition.} It takes $3dk^2 + O(k^3)$ flops and has the closed-form expression: \begin{equation*} \textnormal{Retr}_Z^{\textnormal{polar}}(\xi) \ = \ (Z + \xi)(I_k + \xi^\top\xi)^{-1/2}. \end{equation*} \item \textbf{QR decomposition.} It takes $2dk^2 + O(k^3)$ flops and has the closed-form expression: \begin{equation*} \textnormal{Retr}_Z^{\textnormal{qr}}(\xi) \ = \ \textnormal{qr}(Z + \xi), \end{equation*} where $\textnormal{qr}(A)$ is the Q factor of the QR factorization of $A$.
\item \textbf{Cayley transformation.} It takes $7dk^2 + O(k^3)$ flops and has the closed-form expression: \begin{equation*} \textnormal{Retr}_Z^{\textnormal{cayley}}(\xi) \ = \ \left(I_d - \frac{1}{2}W(\xi) \right)^{-1}\left(I_d + \frac{1}{2}W(\xi) \right)Z, \end{equation*} where $W(\xi) = (I_d - ZZ^\top/2)\xi Z^\top - Z\xi^\top(I_d - ZZ^\top/2)$. \end{itemize} We now present a novel approach to exploiting the structure of $f$. We begin with several definitions. \begin{algorithm}[!t] \caption{Riemannian Adaptive Gradient Ascent with Sinkhorn Iteration (RAGAS)}\label{alg:adagrad-sinkhorn} \begin{algorithmic}[1] \STATE \textbf{Input:} $\{(x_i, r_i)\}_{i \in [n]}$ and $\{(y_j, c_j)\}_{j \in [n]}$, $k = \widetilde{O}(1)$, $U_0 \in \textnormal{St}(d, k)$, $\epsilon$ and $\alpha \in (0, 1)$. \STATE \textbf{Initialize:} $p_0 = \textbf{0}_d$, $q_0 = \textbf{0}_k$, $\widehat{p_0} = \alpha\|C\|_\infty^2\textbf{1}_d$, $\widehat{q_0} = \alpha\|C\|_\infty^2\textbf{1}_k$, $\widehat{\epsilon} \leftarrow \frac{\epsilon\sqrt{\alpha}}{20\|C\|_\infty}$, $\eta \leftarrow \frac{\epsilon\min\{1, 1/\bar{\theta}\}}{40\log(n)}$ and $\gamma \leftarrow \frac{\alpha}{16L_1^2 + 32L_2 + 32\eta^{-1}L_1^2\|C\|_\infty}$. \FOR{$t = 0, 1, 2, \ldots$} \STATE Compute $\pi_{t+1} \leftarrow \textsc{regOT}(\{(x_i, r_i)\}_{i \in [n]}, \{(y_j, c_j)\}_{j \in [n]}, U_t, \eta, \widehat{\epsilon})$. \STATE Compute $G_{t+1} \leftarrow P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\pi_{t+1}}U_t)$. \STATE Update $p_{t+1} \leftarrow \beta p_t + (1-\beta)\textnormal{diag}(G_{t+1}G_{t+1}^\top)/k$ and $\widehat{p}_{t+1} \leftarrow \max\{\widehat{p}_t, p_{t+1}\}$. \STATE Update $q_{t+1} \leftarrow \beta q_t + (1-\beta)\textnormal{diag}(G_{t+1}^\top G_{t+1})/d$ and $\widehat{q}_{t+1} \leftarrow \max\{\widehat{q}_t, q_{t+1}\}$. \STATE Compute $\xi_{t+1} \leftarrow P_{\textnormal{T}_{U_t}\textnormal{St}}(\textnormal{Diag}\,(\widehat{p}_{t+1})^{-1/4}G_{t+1}\textnormal{Diag}\,(\widehat{q}_{t+1})^{-1/4})$. \STATE Compute $U_{t+1} \leftarrow \textnormal{Retr}_{U_t}(\gamma\xi_{t+1})$. \ENDFOR \end{algorithmic} \end{algorithm} \begin{definition}\label{def:C} The \emph{coefficient matrix} between $\mu = \sum_{i=1}^n r_i\delta_{x_i}$ and $\nu = \sum_{j=1}^n c_j\delta_{y_j}$ is defined by $C = (C_{ij})_{1 \leq i, j \leq n} \in \mathbb{R}^{n \times n}$ with each entry $C_{ij} = \|x_i - y_j\|^2$. \end{definition} \begin{definition}\label{def:V} The \emph{correlation matrix} associated with a transportation plan $\pi \in \Pi(\mu, \nu)$ between $\mu = \sum_{i=1}^n r_i\delta_{x_i}$ and $\nu = \sum_{j=1}^n c_j\delta_{y_j}$ is defined by $V_\pi = \sum_{i=1}^n \sum_{j=1}^n \pi_{i, j} (x_i - y_j)(x_i - y_j)^\top \in \mathbb{R}^{d \times d}$. \end{definition} The first lemma shows that, despite being nonconvex and nonsmooth, the function $f$ retains a favorable structure: it is weakly concave. \begin{lemma}\label{lemma:obj-weak-concave} The function $f$ is $2\|C\|_\infty$-weakly concave. \end{lemma} \begin{proof} By~\citet[Proposition~4.3]{Vial-1983-Strong}, it suffices to show that the function $f(U) - \|C\|_\infty\|U\|_F^2$ is concave in $U \in \mathbb{R}^{d \times k}$. By the definition of $f$, we have \begin{equation*} f(U) \ = \ \min\limits_{\pi \in \Pi(\mu, \nu)} \textnormal{Trace}\left(U^\top V_\pi U\right). \end{equation*} Since $\{x_1, x_2, \ldots, x_n\} \subseteq \mathbb{R}^d$ and $\{y_1, y_2, \ldots, y_n\} \subseteq \mathbb{R}^d$ are two given sets of $n$ atoms in $\mathbb{R}^d$, the coefficient matrix $C$ is independent of $U$ and $\pi$.
Furthermore, $\sum_{i=1}^n \sum_{j=1}^n \pi_{i, j} = 1$ and $\pi_{i, j} \geq 0$ for all $i, j \in [n]$ since $\pi \in \Pi(\mu, \nu)$. Putting these pieces together with Jensen's inequality, we have \begin{equation*} \|V_\pi\|_F \ \leq \ \sum_{i=1}^n \sum_{j=1}^n \pi_{i, j} \|(x_i - y_j)(x_i - y_j)^\top\|_F \ \leq \ \max_{1 \leq i, j \leq n} \|x_i - y_j\|^2 \ = \ \|C\|_\infty. \end{equation*} Since $\lambda_{\max}(V_\pi) \leq \|V_\pi\|_F \leq \|C\|_\infty$, the matrix $V_\pi - \|C\|_\infty I_d$ is negative semidefinite, which implies that $U \mapsto \textnormal{Trace}(U^\top V_\pi U) - \|C\|_\infty\|U\|_F^2 = \textnormal{Trace}(U^\top (V_\pi - \|C\|_\infty I_d) U)$ is concave for any $\pi \in \Pi(\mu, \nu)$. Since $\Pi(\mu, \nu)$ is compact, Danskin's theorem~\citep{Rockafellar-2015-Convex} implies the desired result. \end{proof} The second lemma shows that every element of the subdifferential of $f$ is bounded by the constant $2\|C\|_\infty$, uniformly over $U \in \textnormal{St}(d, k)$. \begin{lemma}\label{lemma:bound-subdiff} Each element of the subdifferential $\partial f(U)$ is bounded by $2\|C\|_\infty$ for all $U \in \textnormal{St}(d, k)$. \end{lemma} \begin{proof} By the definition of the subdifferential $\partial f$, it suffices to show that $\|V_\pi U \|_F \leq \|C\|_\infty$ for all $\pi \in \Pi(\mu, \nu)$ and $U \in \textnormal{St}(d, k)$. Indeed, by the definition, $V_\pi$ is symmetric and positive semi-definite. Therefore, we have \begin{equation*} \max_{U \in \textnormal{St}(d, k)} \|V_\pi U \|_F \ \leq \ \|V_\pi\|_F \ \leq \ \|C\|_\infty. \end{equation*} Putting these pieces together yields the desired result. \end{proof} \begin{remark} Lemma~\ref{lemma:obj-weak-concave} implies that there exists a concave function $g: \mathbb{R}^{d \times k} \rightarrow \mathbb{R}$ such that $f(U) = g(U) + \|C\|_\infty\|U\|_F^2$ for any $U \in \mathbb{R}^{d \times k}$. Since $g$ is concave, $\partial g$ is well defined and~\citet[Proposition~4.6]{Vial-1983-Strong} implies that $\partial f(U) = \partial g(U) + 2\|C\|_\infty U$ for all $U \in \mathbb{R}^{d \times k}$. \end{remark} This result, together with~\citet[Proposition~4.5]{Vial-1983-Strong} and~\citet[Theorem~5.1]{Yang-2014-Optimality}, leads to the Riemannian subdifferential, defined by $\textnormal{subdiff}\, f(U) = P_{\textnormal{T}_U\textnormal{St}}(\partial f(U))$ for all $U \in \textnormal{St}(d, k)$. \begin{definition}\label{def:stationarity} The subspace projection $\widehat{U} \in \textnormal{St}(d, k)$ is called an \emph{$\epsilon$-approximate optimal subspace projection} of $f$ over $\textnormal{St}(d, k)$ in Eq.~\eqref{prob:Stiefel-nonsmooth} if it satisfies $\textnormal{dist}(0, \textnormal{subdiff}\, f(\widehat{U})) \leq \epsilon$. \end{definition} \begin{definition}\label{def:optimal-pair} The pair of subspace projection and transportation plan $(\widehat{U}, \widehat{\pi}) \in \textnormal{St}(d, k) \times \Pi(\mu, \nu)$ is an \emph{$\epsilon$-approximate pair of optimal subspace projection and optimal transportation plan} for the computation of the PRW distance in Eq.~\eqref{prob:main} if the following statements hold true: (i) $\widehat{U}$ is an $\epsilon$-approximate optimal subspace projection of $f$ over $\textnormal{St}(d, k)$ in Eq.~\eqref{prob:Stiefel-nonsmooth}. (ii) $\widehat{\pi}$ is an $\epsilon$-approximate optimal transportation plan for the subspace projection $\widehat{U}$. \end{definition} The goal of this paper is to develop a set of algorithms which are guaranteed to converge to a pair of approximate optimal subspace projection and optimal transportation plan, which together constitute a stationary point of the max-min optimization model in Eq.~\eqref{prob:main}.
In the following sections, we provide the detailed schemes of our algorithms as well as their finite-time theoretical guarantees. \section{Further Background Materials on Riemannian Optimization} The problem of optimizing a smooth function over a Riemannian manifold has been the subject of a large literature.~\citet{Absil-2009-Optimization} provide a comprehensive treatment, showing how first-order and second-order algorithms are extended to the Riemannian setting and proving asymptotic convergence to first-order stationary points.~\citet{Boumal-2019-Global} have established global sublinear convergence results for Riemannian gradient descent and Riemannian trust region algorithms, and further showed that the latter approach converges to a second-order stationary point in polynomial time; see also~\citet{Kasai-2018-Inexact, Hu-2018-Adaptive, Hu-2019-Structured}. In contradistinction to the Euclidean setting, the Riemannian trust region algorithm requires a Hessian oracle. There have also been several recent papers on problem-specific algorithms~\citep{Wen-2013-Feasible, Gao-2018-New, Liu-2019-Quadratic} and primal-dual algorithms~\citep{Zhang-2019-Primal} for Riemannian optimization. Compared to the smooth setting, Riemannian nonsmooth optimization is harder and relatively less explored~\citep{Absil-2019-Collection}. There are two main lines of work. In the first category, one considers optimizing a geodesically convex function over a Riemannian manifold with subgradient-type algorithms; see, e.g.,~\citet{Ferreira-1998-Subgradient, Zhang-2016-First, Bento-2017-Iteration}. In particular,~\citet{Ferreira-1998-Subgradient} first established an asymptotic convergence result while~\citet{Zhang-2016-First, Bento-2017-Iteration} derived a global convergence rate of $O(\epsilon^{-2})$ for the Riemannian subgradient algorithm. Unfortunately, these results are not useful for understanding the computation of the PRW distance in Eq.~\eqref{prob:main} since the Stiefel manifold is \textit{compact} and every continuous and geodesically convex function on a compact Riemannian manifold must be constant; see~\citet[Proposition~2.2]{Bishop-1969-Manifolds}. In the second category, one assumes the tractable computation of the proximal mapping of the objective function over the Riemannian manifold.~\citet{Ferreira-2002-Proximal} proved that the Riemannian proximal point algorithm converges globally at a sublinear rate. When specialized to the Stiefel manifold,~\citet{Chen-2020-Proximal} consider a composite objective and propose to compute the proximal mapping of the nonsmooth component function over the tangent space. The resulting Riemannian proximal gradient algorithm is practical in real applications while retaining theoretical guarantees.~\citet{Li-2019-Nonsmooth} extended the results in~\citet{Davis-2019-Stochastic} to the Riemannian setting and proposed a family of Riemannian subgradient-type methods for optimizing a weakly convex function over the Stiefel manifold. They also proved that their algorithms have an iteration complexity of $O(\epsilon^{-4})$ for driving a near-optimal stationarity measure below $\epsilon$. Following the direction proposed by~\citet{Li-2019-Nonsmooth}, we derive a near-optimality condition (Definitions~\ref{def:near-stationarity} and~\ref{def:near-optimal-pair}) for the max-min optimization model in Eq.~\eqref{prob:main} and propose an algorithm with finite-time convergence guarantees under this stationarity measure.
Finally, there are several results on stochastic optimization over Riemannian manifolds.~\citet{Bonnabel-2013-Stochastic} proved the first asymptotic convergence result for Riemannian stochastic gradient descent, which was further extended by~\citet{Zhang-2016-Riemannian, Tripuraneni-2018-Averaging, Becigneul-2019-Riemannian}. A few recent works have also developed frameworks for escaping saddle points when the Riemannian Hessian is not positive definite~\citep{Sun-2019-Escaping, Criscitiello-2019-Efficiently}. \section{Near-Optimality Condition} In this section, we derive a near-optimality condition (Definitions~\ref{def:near-stationarity} and~\ref{def:near-optimal-pair}) for the max-min optimization model in Eq.~\eqref{prob:main} and the maximization of $f$ over $\textnormal{St}(d, k)$ in Eq.~\eqref{prob:Stiefel-nonsmooth}. Following~\citet{Davis-2019-Stochastic, Li-2019-Nonsmooth}, we define the proximal mapping of $f$ over $\textnormal{St}(d, k)$ in Eq.~\eqref{prob:Stiefel-nonsmooth}, which takes into account both the Stiefel manifold constraint and the max-min structure\footnote{The proximal mapping $p(U)$ must exist since the Stiefel manifold is compact, yet may not be uniquely defined. However, this does not matter since $p(U)$ only appears in the analysis for the purpose of defining the surrogate stationarity measure; see~\citet{Li-2019-Nonsmooth}.}: \begin{equation*} p(U) \ \in \ \mathop{\rm argmax}\limits_{\bar{U} \in \textnormal{St}(d, k)} \ \left\{f(\bar{U}) - 6\|C\|_\infty\|\bar{U} - U\|_F^2\right\} \quad \text{for all } U \in \textnormal{St}(d, k). \end{equation*} After a simple calculation, we have \begin{equation*} \Theta(U) \ := \ 12\|C\|_\infty\|p(U) - U\|_F \ \geq \ \textnormal{dist}(0, \textnormal{subdiff}\, f(p(U))). \end{equation*} Therefore, we conclude from Definition~\ref{def:stationarity} that $p(U) \in \textnormal{St}(d, k)$ is an $\epsilon$-approximate optimal subspace projection of $f$ over $\textnormal{St}(d, k)$ in Eq.~\eqref{prob:Stiefel-nonsmooth} if $\Theta(U) \leq \epsilon$. We remark that $\Theta(\bullet)$ is a well-defined surrogate stationarity measure of $f$ over $\textnormal{St}(d, k)$ in Eq.~\eqref{prob:Stiefel-nonsmooth}. Indeed, if $\Theta(U) = 0$, then $U \in \textnormal{St}(d, k)$ is an optimal subspace projection. This inspires the following $\epsilon$-near-optimality condition for any $\widehat{U} \in \textnormal{St}(d, k)$. \begin{definition}\label{def:near-stationarity} A subspace projection $\widehat{U} \in \textnormal{St}(d, k)$ is called an \emph{$\epsilon$-approximate near-optimal subspace projection} of $f$ over $\textnormal{St}(d, k)$ in Eq.~\eqref{prob:Stiefel-nonsmooth} if it satisfies $\Theta(\widehat{U}) \leq \epsilon$. \end{definition} Equipped with Definitions~\ref{def:approx_transportation_plan} and~\ref{def:near-stationarity}, we define an $\epsilon$-approximate pair of near-optimal subspace projection and optimal transportation plan for the computation of the PRW distance in Eq.~\eqref{prob:main}.
\begin{definition}\label{def:near-optimal-pair} The pair of subspace projection and transportation plan $(\widehat{U}, \widehat{\pi}) \in \textnormal{St}(d, k) \times \Pi(\mu, \nu)$ is an \emph{$\epsilon$-approximate pair of near-optimal subspace projection and optimal transportation plan} for the computation of the PRW distance in Eq.~\eqref{prob:main} if the following statements hold true: \begin{itemize} \item $\widehat{U}$ is an $\epsilon$-approximate near-optimal subspace projection of $f$ over $\textnormal{St}(d, k)$ in Eq.~\eqref{prob:Stiefel-nonsmooth}. \item $\widehat{\pi}$ is an $\epsilon$-approximate optimal transportation plan for the subspace projection $\widehat{U}$. \end{itemize} \end{definition} Finally, we prove that the stationarity measure in Definition~\ref{def:near-optimal-pair} is a local surrogate for the stationarity measure in Definition~\ref{def:optimal-pair}, as shown in the following proposition. \begin{proposition} If $(U, \pi) \in \textnormal{St}(d, k) \times \Pi(\mu, \nu)$ is an $\epsilon$-approximate pair of optimal subspace projection and optimal transportation plan of problem~\eqref{prob:main}, then it is a $3\epsilon$-approximate pair of near-optimal subspace projection and optimal transportation plan. \end{proposition} \begin{proof} By the definition, $(U, \pi) \in \textnormal{St}(d, k) \times \Pi(\mu, \nu)$ satisfies that $\pi$ is an $\epsilon$-approximate optimal transportation plan for the subspace projection $U$. Thus, it suffices to show that $\Theta(U) \leq 3\epsilon$. By the definition of $p(U)$, we have \begin{equation*} f(p(U)) - 6\|C\|_\infty\|p(U) - U\|_F^2 \ \geq \ f(U). \end{equation*} Since $f$ is $2\|C\|_\infty$-weakly concave and each element of the subdifferential $\partial f(U)$ is bounded by $2\|C\|_\infty$ for all $U \in \textnormal{St}(d, k)$, the Riemannian subgradient inequality~\citep[Theorem~1]{Li-2019-Nonsmooth} implies that \begin{equation*} f(p(U)) - f(U) \ \leq \ \langle \xi, p(U) - U\rangle + 2\|C\|_\infty\|p(U) - U\|_F^2 \quad \text{for any } \xi \in \textnormal{subdiff}\, f(U). \end{equation*} Since $\textnormal{dist}(0, \textnormal{subdiff}\, f(U)) \leq \epsilon$, we have \begin{equation*} f(p(U)) - f(U) \ \leq \ \epsilon\|p(U) - U\|_F + 2\|C\|_\infty\|p(U) - U\|_F^2. \end{equation*} Combining the first and the last displayed inequalities yields $4\|C\|_\infty\|p(U) - U\|_F^2 \leq \epsilon\|p(U) - U\|_F$, so that $\|p(U) - U\|_F \leq \epsilon/(4\|C\|_\infty)$ and hence $\Theta(U) = 12\|C\|_\infty\|p(U) - U\|_F \leq 3\epsilon$, which is the desired result. \end{proof} \section{Riemannian Supergradient meets Network Simplex Iteration} In this section, we propose a new algorithm, named \emph{Riemannian SuperGradient Ascent with Network simplex iteration} (RSGAN), for computing the PRW distance in Eq.~\eqref{prob:main}. The iterates are guaranteed to converge to an $\epsilon$-approximate pair of \textit{near-optimal} subspace projection and optimal transportation plan (cf. Definition~\ref{def:near-optimal-pair}). The complexity bound is $\widetilde{O}(n^2(d + n)\epsilon^{-4})$ if $k = \widetilde{O}(1)$. \begin{algorithm}[!t] \caption{Riemannian SuperGradient Ascent with Network Simplex Iteration (RSGAN)}\label{alg:supergrad-simplex} \begin{algorithmic}[1] \STATE \textbf{Input:} measures $\{(x_i, r_i)\}_{i \in [n]}$ and $\{(y_j, c_j)\}_{j \in [n]}$, dimension $k = \widetilde{O}(1)$ and tolerance $\epsilon$. \STATE \textbf{Initialize:} $U_0 \in \textnormal{St}(d, k)$, $\widehat{\epsilon} \leftarrow \frac{\epsilon}{10\|C\|_\infty}$ and $\gamma_0 \leftarrow \frac{1}{k\|C\|_\infty}$.
\FOR{$t = 0, 1, 2, \ldots, T-1$} \STATE Compute $\pi_{t+1} \leftarrow \textsc{OT}(\{(x_i, r_i)\}_{i \in [n]}, \{(y_j, c_j)\}_{j \in [n]}, U_t, \widehat{\epsilon})$. \STATE Compute $\xi_{t+1} \leftarrow P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\pi_{t+1}}U_t)$. \STATE Compute $\gamma_{t+1} \leftarrow \gamma_0/\sqrt{t+1}$. \STATE Compute $U_{t+1} \leftarrow \textnormal{Retr}_{U_t}(\gamma_{t+1}\xi_{t+1})$. \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Algorithmic scheme} We start with a brief overview of the Riemannian supergradient ascent algorithm for nonsmooth Stiefel optimization. Letting $F: \mathbb{R}^{d \times k} \rightarrow \mathbb{R}$ be a nonsmooth but weakly concave function, we consider \begin{equation*} \max\limits_{U \in \textnormal{St}(d, k)} \ F(U). \end{equation*} A generic Riemannian supergradient ascent algorithm for solving this problem is given by \begin{equation*} U_{t+1} \ \leftarrow \ \textnormal{Retr}_{U_t}(\gamma_{t+1}\xi_{t+1}) \quad \textnormal{ for any } \xi_{t+1} \in \textnormal{subdiff}\, F(U_t), \end{equation*} where $\textnormal{subdiff}\, F(U_t)$ is the Riemannian subdifferential of $F$ at $U_t$ and $\textnormal{Retr}$ is any retraction on $\textnormal{St}(d, k)$. For nonconvex nonsmooth optimization, the stepsize setting $\gamma_{t+1} = \gamma_0/\sqrt{t+1}$ is widely accepted in both theory and practice~\citep{Davis-2019-Stochastic, Li-2019-Nonsmooth}. By the definition of the Riemannian subdifferential, $\xi_{t+1}$ can be obtained by taking $\xi \in \partial F(U_t)$ and setting $\xi_{t+1} = P_{\textnormal{T}_{U_t}\textnormal{St}}(\xi)$. Thus, it is necessary for us to specify the subdifferential of $f$ in Eq.~\eqref{prob:Stiefel-nonsmooth}. Using the symmetry of $V_\pi$, we have \begin{equation*} \partial f(U) \ = \ \textnormal{Conv}\left\{2V_{\pi^\star} U \mid \pi^\star \in \mathop{\rm argmin}\limits_{\pi \in \Pi(\mu, \nu)} \ \langle UU^\top, V_\pi\rangle\right\}, \quad \textnormal{ for any } U \in \mathbb{R}^{d \times k}. \end{equation*} The remaining step is to solve an OT problem with a given $U$ at each inner loop of the maximization and use the output $\pi(U)$ to obtain an inexact supergradient of $f$. Since the OT problem with a given $U$ is exactly an LP, this is possible and can be done by applying a variant of the network simplex method in the \textsc{POT} package~\citep{Flamary-2017-Pot}. While the simplex method can exactly solve this LP, we adopt an inexact solving rule as a practical matter. More specifically, the output $\pi_{t+1}$ satisfies that $\pi_{t+1} \in \Pi(\mu, \nu)$ and $\|\pi_{t+1} - \pi_t^\star\|_1 \leq \widehat{\epsilon}$ where $\pi_t^\star$ is an optimal solution of the unregularized OT problem with $U_t \in \textnormal{St}(d, k)$. With this inexact solving rule, interior-point methods and some first-order methods can also be adopted to solve the unregularized OT problem. We summarize the pseudocode of the RSGAN algorithm in Algorithm~\ref{alg:supergrad-simplex}. \subsection{Complexity analysis for Algorithm~\ref{alg:supergrad-simplex}} We define a function which is important to the subsequent analysis of Algorithm~\ref{alg:supergrad-simplex}: \begin{equation*} \Phi(U) \ := \ \max\limits_{U' \in \textnormal{St}(d, k)} \ \left\{f(U') - 6\|C\|_\infty\|U' - U\|_F^2\right\} \quad \text{for all } U \in \textnormal{St}(d, k).
\end{equation*} Our first lemma provides a key inequality for quantifying the progress of the iterates $\{(U_t, \pi_t)\}_{t \geq 1}$ generated by Algorithm~\ref{alg:supergrad-simplex} using $\Phi(\bullet)$ as the potential function. \begin{lemma}\label{lemma:key-descent-supergrad} Letting $\{(U_t, \pi_t)\}_{t \geq 1}$ be the iterates generated by Algorithm~\ref{alg:supergrad-simplex}, we have \begin{eqnarray*} \Phi(U_{t+1}) & \geq & \Phi(U_t) - 12\gamma_{t+1}\|C\|_\infty\left(f(U_t) - f(p(U_t)) + 4\|C\|_\infty\|p(U_t) - U_t\|_F^2 + \frac{\epsilon^2}{200\|C\|_\infty}\right) \\ & - & 200\gamma_{t+1}^2\|C\|_\infty^3(\gamma_{t+1}^2 L_2^2\|C\|_\infty^2 + \gamma_{t+1}\|C\|_\infty + \sqrt{k}). \nonumber \end{eqnarray*} \end{lemma} \begin{proof} Since $p(U_t) \in \textnormal{St}(d, k)$, we have \begin{equation}\label{inequality:obj-progress-first} \Phi(U_{t+1}) \ \geq \ f(p(U_t)) - 6\|C\|_\infty\|p(U_t) - U_{t+1}\|_F^2. \end{equation} Using the update formula of $U_{t+1}$, we have \begin{equation*} \|p(U_t) - U_{t+1}\|_F^2 \ = \ \|p(U_t) - \textnormal{Retr}_{U_t}(\gamma_{t+1}\xi_{t+1})\|_F^2. \end{equation*} Using the Cauchy-Schwarz inequality and Proposition~\ref{prop:retraction}, we have \begin{eqnarray*} \lefteqn{\|p(U_t) - \textnormal{Retr}_{U_t}(\gamma_{t+1}\xi_{t+1})\|_F^2} \\ & = & \|(U_t + \gamma_{t+1}\xi_{t+1} - p(U_t)) + (\textnormal{Retr}_{U_t}(\gamma_{t+1}\xi_{t+1}) - U_t - \gamma_{t+1}\xi_{t+1})\|_F^2 \\ & \leq & \|U_t + \gamma_{t+1}\xi_{t+1} - p(U_t)\|_F^2 + \|\textnormal{Retr}_{U_t}(\gamma_{t+1}\xi_{t+1}) - (U_t + \gamma_{t+1}\xi_{t+1})\|_F^2 \\ & & + 2\|U_t + \gamma_{t+1}\xi_{t+1} - p(U_t)\|_F\|\textnormal{Retr}_{U_t}(\gamma_{t+1}\xi_{t+1}) - (U_t + \gamma_{t+1}\xi_{t+1})\|_F \\ & \leq & \|U_t + \gamma_{t+1}\xi_{t+1} - p(U_t)\|_F^2 + \gamma_{t+1}^4 L_2^2\|\xi_{t+1}\|_F^4 + 2\gamma_{t+1}^2\|U_t + \gamma_{t+1}\xi_{t+1} - p(U_t)\|_F\|\xi_{t+1}\|_F^2 \\ & \leq & \|U_t - p(U_t)\|_F^2 + 2\gamma_{t+1}\langle\xi_{t+1}, U_t - p(U_t)\rangle + \gamma_{t+1}^2\|\xi_{t+1}\|_F^2 + \gamma_{t+1}^4 L_2^2\|\xi_{t+1}\|_F^4 \\ & & + 2\gamma_{t+1}^2\|U_t + \gamma_{t+1}\xi_{t+1} - p(U_t)\|_F\|\xi_{t+1}\|_F^2. \end{eqnarray*} Since $U_t \in \textnormal{St}(d, k)$ and $p(U_t) \in \textnormal{St}(d, k)$, we have $\|U_t\|_F \leq \sqrt{k}$ and $\|p(U_t)\|_F \leq \sqrt{k}$. By the update formula for $\xi_{t+1}$, we have \begin{equation*} \|\xi_{t+1}\|_F \ = \ \|P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\pi_{t+1}}U_t)\|_F \ \leq \ 2\|V_{\pi_{t+1}}U_t\|_F. \end{equation*} Since $U_t \in \textnormal{St}(d, k)$ and $\pi_{t+1} \in \Pi(\mu, \nu)$, we have $\|\xi_{t+1}\|_F \leq 2\|C\|_\infty$. Putting all these pieces together yields that \begin{eqnarray}\label{inequality:obj-progress-second} \|p(U_t) - U_{t+1}\|_F^2 & \leq & \|U_t - p(U_t)\|_F^2 + 2\gamma_{t+1}\langle\xi_{t+1}, U_t - p(U_t)\rangle + 4\gamma_{t+1}^2\|C\|_\infty^2 \\ & & \hspace*{-4em} + 16\gamma_{t+1}^4 L_2^2\|C\|_\infty^4 + 16\gamma_{t+1}^3\|C\|_\infty^3 + 16\gamma_{t+1}^2\sqrt{k}\|C\|_\infty^2. \nonumber \end{eqnarray} Plugging Eq.~\eqref{inequality:obj-progress-second} into Eq.~\eqref{inequality:obj-progress-first} and simplifying the inequality using $k \geq 1$, we have \begin{eqnarray*} \Phi(U_{t+1}) & \geq & f(p(U_t)) - 6\|C\|_\infty\|U_t - p(U_t)\|_F^2 - 12\gamma_{t+1}\|C\|_\infty\langle\xi_{t+1}, U_t - p(U_t)\rangle \\ & & - 200\gamma_{t+1}^2\|C\|_\infty^3\left(\gamma_{t+1}^2 L_2^2\|C\|_\infty^2 + \gamma_{t+1}\|C\|_\infty + \sqrt{k}\right).
\end{eqnarray*} By the definition of $\Phi(\bullet)$ and $p(\bullet)$, we have \begin{eqnarray}\label{inequality:obj-progress-third} \Phi(U_{t+1}) & \geq & \Phi(U_t) - 12\gamma_{t+1}\|C\|_\infty\langle\xi_{t+1}, U_t - p(U_t)\rangle \\ & & \hspace*{-4em} - 200\gamma_{t+1}^2\|C\|_\infty^3\left(\gamma_{t+1}^2 L_2^2\|C\|_\infty^2 + \gamma_{t+1}\|C\|_\infty + \sqrt{k}\right). \nonumber \end{eqnarray} Now we proceed to bound the term $\langle\xi_{t+1}, U_t - p(U_t)\rangle$. Letting $\xi_t^\star = P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\pi_t^\star}U_t)$ where $\pi_t^\star$ is a minimizer of the unregularized OT problem, i.e., $\pi_t^\star \in \mathop{\rm argmin}_{\pi \in \Pi(\mu, \nu)} \ \langle U_tU_t^\top, V_\pi\rangle$, we have \begin{equation}\label{inequality:obj-progress-fourth} \langle\xi_{t+1}, U_t - p(U_t)\rangle \ \leq \ \langle\xi_t^\star, U_t - p(U_t)\rangle + \|\xi_{t+1} - \xi_t^\star\|_F\|U_t - p(U_t)\|_F. \end{equation} Since $f(U) = \min_{\pi \in \Pi(\mu, \nu)} \ \langle UU^\top, V_\pi\rangle$ is $2\|C\|_\infty$-weakly concave over $\mathbb{R}^{d \times k}$ (cf. Lemma~\ref{lemma:obj-weak-concave}), $\xi_t^\star \in \textnormal{subdiff}\, f(U_t)$ and each element in the subdifferential $\partial f(U)$ is bounded by $2\|C\|_\infty$ for all $U \in \textnormal{St}(d, k)$ (cf. Lemma~\ref{lemma:bound-subdiff}), the Riemannian subgradient inequality~\citep[Theorem~1]{Li-2019-Nonsmooth} holds true and implies that \begin{equation*} f(p(U_t)) \ \leq \ f(U_t) + \langle\xi_t^\star, p(U_t) - U_t\rangle + 2\|C\|_\infty\|p(U_t) - U_t\|_F^2. \end{equation*} This implies that \begin{equation}\label{inequality:obj-progress-fifth} \langle\xi_t^\star, U_t - p(U_t)\rangle \ \leq \ f(U_t) - f(p(U_t)) + 2\|C\|_\infty\|p(U_t) - U_t\|_F^2. \end{equation} By the definition of $\xi_{t+1}$ and $\xi_t^\star$, we have \begin{equation*} \|\xi_{t+1} - \xi_t^\star\|_F \ = \ \|P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\pi_{t+1}}U_t) - P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\pi_t^\star}U_t)\|_F \ \leq \ 2\|(V_{\pi_{t+1}} - V_{\pi_t^\star})U_t\|_F. \end{equation*} By the definition of the subroutine $\textsc{OT}(\{(x_i, r_i)\}_{i \in [n]}, \{(y_j, c_j)\}_{j \in [n]}, U, \widehat{\epsilon})$ in Algorithm~\ref{alg:supergrad-simplex}, we have $\pi_{t+1} \in \Pi(\mu, \nu)$ and $\|\pi_{t+1} - \pi_t^\star\|_1 \leq \widehat{\epsilon}$. Thus, we have \begin{equation*} \|\xi_{t+1} - \xi_t^\star\|_F \ \leq \ 2\|C\|_\infty\widehat{\epsilon} \ \leq \ \frac{\epsilon}{5}. \end{equation*} Using Young's inequality, we have \begin{eqnarray}\label{inequality:obj-progress-sixth} \|\xi_{t+1} - \xi_t^\star\|_F\|U_t - p(U_t)\|_F & \leq & \frac{\|\xi_{t+1} - \xi_t^\star\|_F^2}{8\|C\|_\infty} + 2\|C\|_\infty\|U_t - p(U_t)\|_F^2 \\ & \leq & \frac{\epsilon^2}{200\|C\|_\infty} + 2\|C\|_\infty\|U_t - p(U_t)\|_F^2. \nonumber \end{eqnarray} Combining Eq.~\eqref{inequality:obj-progress-third}, Eq.~\eqref{inequality:obj-progress-fourth}, Eq.~\eqref{inequality:obj-progress-fifth} and Eq.~\eqref{inequality:obj-progress-sixth} yields the desired result.
\end{proof} Putting Lemma~\ref{lemma:key-descent-supergrad} together with the definition of $p(\bullet)$, we have the following consequence: \begin{proposition}\label{prop:obj-progress} Letting $\{(U_t, \pi_t)\}_{t \geq 1}$ be the iterates generated by Algorithm~\ref{alg:supergrad-simplex}, we have \begin{eqnarray*} & & \hspace{- 4 em} \frac{24\|C\|_\infty^2\sum_{t=0}^{T-1}\gamma_{t+1}\|p(U_t) - U_t\|_F^2}{\sum_{t=0}^{T-1} \gamma_{t+1}} \\ & \leq & \frac{\gamma_0^{-1}\Delta_\Phi + 200\gamma_0\|C\|_\infty^3(\gamma_0^2 L_2^2\|C\|_\infty^2 + \gamma_0\|C\|_\infty + \sqrt{k}(\log(T)+1))}{2\sqrt{T}} + \frac{\epsilon^2}{12}, \end{eqnarray*} where $\Delta_\Phi = \max_{U \in \textnormal{St}(d, k)} \Phi(U) - \Phi(U_0)$ is the initial objective gap. \end{proposition} \begin{proof} By the definition of $p(\bullet)$, we have \begin{eqnarray*} & & f(U_t) - f(p(U_t)) + 4\|C\|_\infty\|p(U_t) - U_t\|_F^2 \\ & = & f(U_t) - \left(f(p(U_t)) - 6\|C\|_\infty\|p(U_t) - U_t\|_F^2\right) - 2\|C\|_\infty\|p(U_t) - U_t\|_F^2 \\ & \leq & - 2\|C\|_\infty\|p(U_t) - U_t\|_F^2. \end{eqnarray*} Using Lemma~\ref{lemma:key-descent-supergrad}, we have \begin{eqnarray*} \Phi(U_{t+1}) & \geq & \Phi(U_t) + 24\gamma_{t+1}\|C\|_\infty^2\|p(U_t) - U_t\|_F^2 - \frac{\gamma_{t+1}\epsilon^2}{12} \\ & & \hspace*{-4em} - 200\gamma_{t+1}^2\|C\|_\infty^3(\gamma_{t+1}^2 L_2^2\|C\|_\infty^2 + \gamma_{t+1}\|C\|_\infty + \sqrt{k}). \end{eqnarray*} Rearranging this inequality, we have \begin{eqnarray*} 24\gamma_{t+1}\|C\|_\infty^2\|p(U_t) - U_t\|_F^2 & \leq & \Phi(U_{t+1}) - \Phi(U_t) + \frac{\gamma_{t+1}\epsilon^2}{12} \\ & & \hspace*{-10em} + 200\gamma_{t+1}^2\|C\|_\infty^3(\gamma_{t+1}^2 L_2^2\|C\|_\infty^2 + \gamma_{t+1}\|C\|_\infty + \sqrt{k}). \end{eqnarray*} Summing up over $t = 0, 1, 2, \ldots, T-1$ yields that \begin{equation*} \frac{24\|C\|_\infty^2\sum_{t=0}^{T-1}\gamma_{t+1}\|p(U_t) - U_t\|_F^2}{\sum_{t=0}^{T-1} \gamma_{t+1}} \ \leq \ \frac{\Delta\Phi + 200\|C\|_\infty^3(\sum_{t=1}^T \gamma_t^2(\gamma_t^2 L_2^2\|C\|_\infty^2 + \gamma_t\|C\|_\infty + \sqrt{k}))}{2\sum_{t=1}^T \gamma_t} + \frac{\epsilon^2}{12}. \end{equation*} By the definition of $\{\gamma_t\}_{t \geq 1}$, we have \begin{equation*} \sum_{t=1}^T \gamma_t \geq \gamma_0\sqrt{T}, \quad \sum_{t=1}^T \gamma_t^2 \leq \gamma_0^2(\log(T) + 1), \quad \sum_{t=1}^T \gamma_t^3 \leq 3\gamma_0^3, \quad \sum_{t=1}^T \gamma_t^4 \leq 2\gamma_0^4. \end{equation*} Putting these pieces together yields the desired result. \end{proof} We proceed to provide an upper bound for the number of iterations needed to return an $\epsilon$-approximate near-optimal subspace projection $U_t \in \textnormal{St}(d, k)$ satisfying $\Theta(U_t) \leq \epsilon$ in Algorithm~\ref{alg:supergrad-simplex}. \begin{theorem}\label{Theorem:RSGAN-Total-Iteration} Letting $\{(U_t, \pi_t)\}_{t \geq 1}$ be the iterates generated by Algorithm~\ref{alg:supergrad-simplex}, the number of iterations required to reach $\Theta(U_t) \leq \epsilon$ satisfies \begin{equation*} t \ = \ \widetilde{O}\left(\frac{k^2\|C\|_\infty^4}{\epsilon^4}\right). \end{equation*} \end{theorem} \begin{proof} By the definition of $\Theta(\bullet)$ and $p(\bullet)$, we have $\Theta(U_t) = 12\|C\|_\infty\|p(U_t) - U_t\|_F$. 
Using Proposition~\ref{prop:obj-progress}, we have \begin{equation*} \frac{\sum_{t=0}^{T-1} \gamma_{t+1}(\Theta(U_t))^2}{\sum_{t=0}^{T-1} \gamma_{t+1}} \ \leq \ \frac{3\gamma_0^{-1}\Delta_\Phi + 600\gamma_0\|C\|_\infty^3(\gamma_0^2L_2^2\|C\|_\infty^2 + \gamma_0\|C\|_\infty + \sqrt{k}(\log(T)+1))}{\sqrt{T}} + \frac{\epsilon^2}{2}. \end{equation*} Furthermore, by the definition of $\Phi(\bullet)$, we have \begin{eqnarray*} |\Phi(U)| & \leq & \max\limits_{U' \in \textnormal{St}(d, k)} \ |f(U') + 6\|C\|_\infty\|U' - U\|_F^2| \\ & \leq & \max\limits_{U \in \textnormal{St}(d, k)} \max\limits_{U' \in \textnormal{St}(d, k)} \ |f(U') + 6\|C\|_\infty\|U' - U\|_F^2| \\ & \leq & \max\limits_{U \in \textnormal{St}(d, k)} |f(U)| + 12k\|C\|_\infty. \end{eqnarray*} By the definition of $f(\bullet)$, we have $\max_{U \in \textnormal{St}(d, k)} |f(U)| \leq \|C\|_\infty$. Putting these pieces together with $k \geq 1$ implies that $|\Phi(U)| \leq 20k\|C\|_\infty$. By the definition of $\Delta_\Phi$, we conclude that $\Delta_\Phi \leq 40k\|C\|_\infty$. Given that $\gamma_0 = 1/\|C\|_\infty$ and $\Theta(U_t) > \epsilon$ for all $t = 0, 1, \ldots, T-1$, the number of iterations $T$ must satisfy \begin{equation*} \epsilon^2 \ \leq \ \frac{240k\|C\|_\infty^2 + 1200\|C\|_\infty^2(L_2^2 + \sqrt{k}\log(T) + \sqrt{k} + 1)}{\sqrt{T}}. \end{equation*} This implies the desired result. \end{proof} Equipped with Theorem~\ref{Theorem:RSGAN-Total-Iteration} and Algorithm~\ref{alg:supergrad-simplex}, we establish the complexity bound of Algorithm~\ref{alg:supergrad-simplex}. \begin{theorem}\label{Theorem:RSGAN-Total-Complexity} The RSGAN algorithm (cf. Algorithm~\ref{alg:supergrad-simplex}) returns an $\epsilon$-approximate pair of near-optimal subspace projection and optimal transportation plan for computing the PRW distance in Eq.~\eqref{prob:main} (cf. Definition~\ref{def:near-optimal-pair}) in \begin{equation*} \widetilde{O}\left(\frac{n^2(n + d)\|C\|_\infty^4}{\epsilon^4}\right) \end{equation*} arithmetic operations. \end{theorem} \begin{proof} First, Theorem~\ref{Theorem:RSGAN-Total-Iteration} implies that the iteration complexity of Algorithm~\ref{alg:supergrad-simplex} is \begin{equation}\label{RSGAN-iteration} \widetilde{O}\left(\frac{k^2\|C\|_\infty^4}{\epsilon^4}\right). \end{equation} This implies that $U_t$ is an $\epsilon$-approximate near-optimal subspace projection of problem~\eqref{prob:Stiefel-nonsmooth}. Furthermore, $\widehat{\epsilon} = \min\{\epsilon, \epsilon^2/144\|C\|_\infty\}$. Since $\pi_{t+1} \leftarrow \textsc{OT}(\{(x_i, r_i)\}_{i \in [n]}, \{(y_j, c_j)\}_{j \in [n]}, U_t, \widehat{\epsilon})$, we have $\pi_{t+1} \in \Pi(\mu, \nu)$ and $\langle U_tU_t^\top, V_{\pi_{t+1}} - V_{\pi_t^\star}\rangle \leq \widehat{\epsilon} \leq \epsilon$. This implies that $\pi_{t+1}$ is an $\epsilon$-approximate optimal transportation plan for the subspace projection $U_t$. Therefore, we conclude that $(U_t, \pi_{t+1}) \in \textnormal{St}(d, k) \times \Pi(\mu, \nu)$ is an \emph{$\epsilon$-approximate pair of near-optimal subspace projection and optimal transportation plan} of problem~\eqref{prob:main}. The remaining step is to analyze the complexity bound. Note that most software packages, e.g., \textsc{POT}~\citep{Flamary-2017-Pot}, implement the OT subroutine using a variant of the network simplex method with a block search pivoting strategy~\citep{Damian-1991-Minimum, Bonneel-2011-Displacement}. The best known complexity bound is provided in~\citet{Tarjan-1997-Dynamic} and is $\widetilde{O}(n^3)$.
Using the same argument as in Theorem~\ref{Theorem:RGAS-RAGAS-Total-Complexity}, the number of arithmetic operations at each loop is \begin{equation}\label{RSGAN-arithmetic-operation} \widetilde{O}\left(n^2 dk + dk^2 + k^3 + n^3\right). \end{equation} Putting Eq.~\eqref{RSGAN-iteration} and Eq.~\eqref{RSGAN-arithmetic-operation} together with $k = \widetilde{O}(1)$ yields the desired result. \end{proof} \begin{remark} The complexity bound of Algorithm~\ref{alg:supergrad-simplex} is better than those of Algorithms~\ref{alg:grad-sinkhorn} and~\ref{alg:adagrad-sinkhorn} in terms of $\epsilon$ and $\|C\|_\infty$. This makes sense since Algorithm~\ref{alg:supergrad-simplex} only returns an $\epsilon$-approximate pair of \textit{near-optimal} subspace projection and optimal transportation plan, which is weaker than an $\epsilon$-approximate pair of optimal subspace projection and optimal transportation plan. Furthermore, Algorithm~\ref{alg:supergrad-simplex} implements the network simplex method in its inner loop, which might suffer when $n$ is large and can yield unstable performance in practice. \end{remark} \section{Riemannian (Adaptive) Gradient meets Sinkhorn Iteration}\label{sec:algorithm} We present the \emph{Riemannian gradient ascent with Sinkhorn} (RGAS) algorithm for solving Eq.~\eqref{prob:Stiefel-smooth}. By the definition of $V_\pi$ (cf.\ Definition~\ref{def:V}), we can rewrite $f_\eta(U) = \min_{\pi \in \Pi(\mu, \nu)} \{\langle UU^\top, V_\pi\rangle - \eta H(\pi)\}$. For fixed $U \in \mathbb{R}^{d \times k}$, the mapping $\pi \mapsto \langle UU^\top, V_\pi\rangle - \eta H(\pi)$ is $\eta$-strongly convex with respect to the $\ell_1$-norm, so it has a unique minimizer over $\Pi(\mu, \nu)$. By the compactness of the transportation polytope $\Pi(\mu, \nu)$, Danskin's theorem~\citep{Rockafellar-2015-Convex} implies that $f_\eta$ is smooth. Moreover, by the symmetry of $V_\pi$, we have \begin{equation}\label{def:grad-entropy-regularization} \nabla f_\eta(U) \ = \ 2V_{\pi^\star(U)} U \quad \textnormal{for any } U \in \mathbb{R}^{d \times k}, \end{equation} where $\pi^\star(U) := \mathop{\rm argmin}_{\pi \in \Pi(\mu, \nu)} \ \{\langle UU^\top, V_\pi\rangle - \eta H(\pi)\}$. This entropic regularized OT is solved inexactly at each inner loop of the maximization and we use the output $\pi_{t+1} \approx \pi^\star(U_t)$ to obtain an inexact gradient of $f_\eta$, which permits the Riemannian gradient ascent update; see Algorithm~\ref{alg:grad-sinkhorn}. Note that the stopping criterion used here is set as $\|\pi_{t+1} - \pi^\star(U_t)\|_1 \leq \widehat{\epsilon}$, which implies that $\pi_{t+1}$ is an $\epsilon$-approximate optimal transportation plan for $U_t \in \textnormal{St}(d, k)$. The remaining issue is to approximately solve the entropic regularized OT problem efficiently. We leverage Cuturi's approach and obtain the desired output $\pi_{t+1}$ for $U_t \in \textnormal{St}(d, k)$ using the Sinkhorn iteration. By adapting the proof presented by~\citet[Theorem~1]{Dvurechensky-2018-Computational}, we show that the Sinkhorn iteration achieves a finite-time guarantee that is polynomial in $n$ and $1/\widehat{\epsilon}$. As a practical enhancement, we develop the \emph{Riemannian adaptive gradient ascent with Sinkhorn} (RAGAS) algorithm by exploiting the matrix structure of $\textnormal{grad}\, f_\eta(U_t)$ via the use of two different adaptive weight vectors, namely $\widehat{p}_t$ and $\widehat{q}_t$; see the adaptive algorithm in Algorithm~\ref{alg:adagrad-sinkhorn}.
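To make the preceding scheme concrete, the following snippet sketches a single RGAS iteration. It is an illustrative sketch rather than the reference implementation: it assumes the \textsc{POT} package for the entropic OT subroutine (with POT's default stopping rule instead of the $\widehat{\epsilon}$-criterion above) and uses the polar-decomposition retraction listed earlier; \texttt{X} and \texttt{Y} are the $n \times d$ arrays of atoms and \texttt{r}, \texttt{c} the weight vectors.
\begin{verbatim}
import numpy as np
import ot   # POT: Python Optimal Transport

def tangent_projection(Z, G):
    # P_{T_Z St}(G) = G - Z (G^T Z + Z^T G) / 2
    return G - Z @ (G.T @ Z + Z.T @ G) / 2

def polar_retraction(Z, xi):
    # Retr_Z(xi) = (Z + xi)(I_k + xi^T xi)^{-1/2}, i.e. the polar factor of Z + xi.
    Q, _, Vt = np.linalg.svd(Z + xi, full_matrices=False)
    return Q @ Vt

def rgas_step(U, X, Y, r, c, eta, gamma):
    # One iteration of the RGAS scheme: Sinkhorn inner loop + Riemannian ascent.
    M = ot.dist(X @ U, Y @ U)               # C_ij(U) = ||U^T x_i - U^T y_j||^2
    pi = ot.sinkhorn(r, c, M, reg=eta)      # approximate entropic OT plan
    D = X[:, None, :] - Y[None, :, :]       # D[i, j] = x_i - y_j, shape (n, n, d)
    V_pi = np.einsum('ij,ijk,ijl->kl', pi, D, D)   # correlation matrix V_pi
    xi = tangent_projection(U, 2 * V_pi @ U)       # inexact Riemannian gradient
    # RAGAS would additionally rescale this direction with adaptive weights.
    return polar_retraction(U, gamma * xi), pi
\end{verbatim}
The comment near the end of the snippet marks where the adaptive weights $\widehat{p}_t$ and $\widehat{q}_t$ of RAGAS would enter.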
It is worth mentioning that such an adaptive strategy is proposed by~\citet{Kasai-2019-Riemannian} and has been shown to generate a search direction which is better than the Riemannian gradient $\textnormal{grad}\, f_\eta(U_t)$ in terms of robustness to the stepsize. \subsection{Technical lemmas} We first show that $f_\eta$ is continuously differentiable over $\mathbb{R}^{d \times k}$ and the classical gradient inequality holds true over $\textnormal{St}(d, k)$. The derivation is novel and uncovers the structure of the computation of entropic regularized PRW in Eq.~\eqref{prob:main-regularized}. Let $g: \mathbb{R}^{d \times k} \times \Pi(\mu, \nu) \rightarrow \mathbb{R}$ be defined by \begin{equation*} g(U, \pi) \ : = \ \sum_{i=1}^n \sum_{j=1}^n \pi_{i, j} \|U^\top x_i - U^\top y_j\|^2 - \eta H(\pi). \end{equation*} \begin{lemma}\label{lemma:lip-grad} $f_\eta$ is differentiable over $\mathbb{R}^{d \times k}$ and $\|\nabla f_\eta(U)\|_F \leq 2\|C\|_\infty$ for all $U \in \textnormal{St}(d, k)$. \end{lemma} \begin{proof} It is clear that we have $f_\eta(\bullet) = \min_{\pi \in \Pi(\mu, \nu)} g(\bullet, \pi)$. Furthermore, $\pi^\star(\bullet) = \mathop{\rm argmin}_{\pi \in \Pi(\mu, \nu)} g(\bullet, \pi)$ is uniquely defined. Putting these pieces with the compactness of $\Pi(\mu, \nu)$ and the smoothness of $g(\bullet, \pi)$, Danskin's theorem~\citep{Rockafellar-2015-Convex} implies $f_\eta$ is continuously differentiable and the gradient is \begin{equation*} \nabla f_\eta(U) \ = \ 2V_{\pi^\star(U)}U \quad \text{for all } U \in \mathbb{R}^{d \times k}. \end{equation*} Since $U \in \textnormal{St}(d, k)$ and $\pi^\star(U) \in \Pi(\mu, \nu)$, we have \begin{equation*} \|\nabla f_\eta(U)\|_F \ = \ 2\|V_{\pi^\star(U)}U\|_F \ \leq \ 2\|V_{\pi^\star(U)}\|_F \ \leq \ 2\|C\|_\infty. \end{equation*} This completes the proof. \end{proof} \begin{lemma}\label{lemma:key-inequality} For all $U_1, U_2 \in \textnormal{St}(d, k)$, the following statement holds true, \begin{equation*} |f_\eta(U_1) - f_\eta(U_2) - \langle \nabla f_\eta(U_2), U_1 - U_2\rangle| \ \leq \ \left( \|C\|_\infty + \frac{2 \|C\|_\infty^2}{\eta}\right)\|U_1 - U_2\|_F^2. \end{equation*} \end{lemma} \begin{proof} It suffices to prove that \begin{equation*} \|\nabla f_\eta(\alpha U_1 + (1-\alpha)U_2) - \nabla f_\eta(U_2)\|_F \ \leq \ \left(2 \|C\|_\infty + \frac{4\|C\|_\infty^2}{\eta}\right) \alpha \|U_1 - U_2\|_F, \end{equation*} for any $U_1, U_2 \in \textnormal{St}(d, k)$ and any $\alpha \in [0, 1]$. Indeed, let $U_\alpha = \alpha U_1 + (1-\alpha)U_2$, we have \begin{equation*} \|\nabla f_\eta(U_\alpha) - \nabla f_\eta(U_2)\|_F \ \leq \ 2\|V_{\pi^\star(U_\alpha)}\|_F\|U_\alpha - U_2\|_F + 2\|V_{\pi^\star(U_\alpha)} - V_{\pi^\star(U_2)}\|_F. \end{equation*} Since $\pi^\star(U_\alpha) \in \Pi(\mu, \nu)$, we have $\|V_{\pi^\star(U_\alpha)}\|_F \leq \|C\|_\infty$. By the definition of $V_\pi$, we have \begin{equation*} \|V_{\pi^\star(U_\alpha)} - V_{\pi^\star(U_2)}\|_F \ \leq \ \sum_{i=1}^n \sum_{j=1}^n |\pi_{i, j}^\star(U_\alpha) - \pi_{i, j}^\star(U_2)|\|x_i - y_j\|^2 \ \leq \ \|C\|_\infty\|\pi^\star(U_\alpha) - \pi^\star(U_2)\|_1. \end{equation*} Putting these pieces together yields that \begin{equation}\label{inequality-lip-grad-first} \|\nabla f_\eta(U_\alpha) - \nabla f_\eta(U_2)\|_F \ \leq \ 2\|C\|_\infty\|U_\alpha - U_2\|_F + 2\|C\|_\infty\|\pi^\star(U_\alpha) - \pi^\star(U_2)\|_1. 
\end{equation} Using the properties of the entropic regularization $H(\bullet)$, the function $g(U, \bullet)$ is $\eta$-strongly convex with respect to the $\ell_1$-norm. This implies that \begin{eqnarray*} g(U_\alpha, \pi^\star(U_2)) - g(U_\alpha, \pi^\star(U_\alpha)) - \langle\nabla_\pi g(U_\alpha, \pi^\star(U_\alpha)), \pi^\star(U_2) - \pi^\star(U_\alpha)\rangle & \geq & (\eta/2)\|\pi^\star(U_\alpha) - \pi^\star(U_2)\|_1^2, \\ g(U_\alpha, \pi^\star(U_\alpha)) - g(U_\alpha, \pi^\star(U_2)) - \langle\nabla_\pi g(U_\alpha, \pi^\star(U_2)), \pi^\star(U_\alpha) - \pi^\star(U_2)\rangle & \geq & (\eta/2)\|\pi^\star(U_\alpha) - \pi^\star(U_2)\|_1^2. \end{eqnarray*} Summing up these inequalities yields \begin{equation}\label{inequality-lip-grad-second} \langle \nabla_\pi g(U_\alpha, \pi^\star(U_\alpha)) - \nabla_\pi g(U_\alpha, \pi^\star(U_2)), \pi^\star(U_\alpha) - \pi^\star(U_2)\rangle \ \geq \ \eta\|\pi^\star(U_\alpha) - \pi^\star(U_2)\|_1^2. \end{equation} Furthermore, by the first-order optimality condition of $\pi^\star(U_\alpha)$ and $\pi^\star(U_2)$, we have \begin{eqnarray*} \langle\nabla_\pi g(U_\alpha, \pi^\star(U_\alpha)), \pi^\star(U_2) - \pi^\star(U_\alpha)\rangle & \geq & 0, \\ \langle\nabla_\pi g(U_2, \pi^\star(U_2)), \pi^\star(U_\alpha) - \pi^\star(U_2)\rangle & \geq & 0. \end{eqnarray*} Summing up these inequalities yields \begin{equation}\label{inequality-lip-grad-third} \langle \nabla_\pi g(U_2, \pi^\star(U_2)) - \nabla_\pi g(U_\alpha, \pi^\star(U_\alpha)), \pi^\star(U_\alpha) - \pi^\star(U_2)\rangle \ \geq \ 0. \end{equation} Summing up Eq.~\eqref{inequality-lip-grad-second} and Eq.~\eqref{inequality-lip-grad-third} and further using H\"{o}lder's inequality, we have \begin{equation*} \|\pi^\star(U_\alpha) - \pi^\star(U_2)\|_1 \ \leq \ (1/\eta)\|\nabla_\pi g(U_2, \pi^\star(U_2)) - \nabla_\pi g(U_\alpha, \pi^\star(U_2))\|_\infty. \end{equation*} By the definition of the function $g$, we have \begin{eqnarray*} \|\nabla_\pi g(U_2, \pi^\star(U_2)) - \nabla_\pi g(U_\alpha, \pi^\star(U_2))\|_\infty & \leq & \max_{1 \leq i, j \leq n} |(x_i - y_j)^\top(U_2U_2^\top - U_\alpha U_\alpha^\top)(x_i - y_j)| \\ & & \hspace*{-8em} \leq \ \left(\max_{1 \leq i, j \leq n} \| x_i - y_j\|^2\right)\|U_2U_2^\top - U_\alpha U_\alpha^\top\|_F \\ & & \hspace*{-8em} = \ \|C\|_\infty\|U_2 U_2^\top - U_\alpha U_\alpha^\top\|_F. \end{eqnarray*} Since $U_1, U_2 \in \textnormal{St}(d, k)$, we have \begin{eqnarray*} \|U_2U_2^\top - U_\alpha U_\alpha^\top\|_F & \leq & \|U_2(U_2 - U_\alpha)^\top\|_F + \|(U_2 - U_\alpha)U_\alpha^\top\|_F \\ & \leq & \|U_2 - U_\alpha\|_F + \|(U_2 - U_\alpha)(\alpha U_1 + (1-\alpha)U_2)^\top\|_F \\ & \leq & \|U_2 - U_\alpha\|_F + \alpha\|(U_2 - U_\alpha)U_1^\top\|_F + (1-\alpha)\|(U_2 - U_\alpha)U_2^\top\|_F \\ & \leq & 2\|U_2 - U_\alpha\|_F. \end{eqnarray*} Putting these pieces together yields that \begin{equation}\label{inequality-lip-grad-fourth} \|\pi^\star(U_\alpha) - \pi^\star(U_2)\|_1 \ \leq \ \frac{2 \|C\|_\infty}{\eta} \|U_\alpha - U_2\|_F. \end{equation} Plugging Eq.~\eqref{inequality-lip-grad-fourth} into Eq.~\eqref{inequality-lip-grad-first} yields the desired result. \end{proof} \begin{remark} Lemma~\ref{lemma:key-inequality} shows that $f_\eta$ satisfies the classical gradient inequality over the Stiefel manifold.
This is indeed stronger than the following statement, \begin{equation*} \|\nabla f_\eta(U_1) - \nabla f_\eta(U_2)\|_F \ \leq \ \left(2 \|C\|_\infty + \frac{4 \|C\|_\infty^2}{\eta}\right)\|U_1 - U_2\|_F, \quad \textnormal{for all } U_1, U_2 \in \textnormal{St}(d, k), \end{equation*} and forms the basis for analyzing the complexity bound of Algorithm~\ref{alg:grad-sinkhorn} and~\ref{alg:adagrad-sinkhorn}. The techniques used in proving Lemma~\ref{lemma:key-inequality} are new and may be applicable to analyze the structure of the robust variant of the Wasserstein distance with other type of regularization~\citep{Dessein-2018-Regularized, Blondel-2018-Smooth}. \end{remark} Then we quantify the progress of RGAS algorithm (cf. Algorithm~\ref{alg:grad-sinkhorn}) using $f_\eta$ as a potential function and then provide an upper bound for the number of iterations to return an $\epsilon$-approximate optimal subspace projection $U_t \in \textnormal{St}(d, k)$ satisfying $\textnormal{dist}(0, \textnormal{subdiff}\, f(U_t)) \leq \epsilon$ in Algorithm~\ref{alg:grad-sinkhorn}. \begin{lemma}\label{lemma:obj-progress-grad} Let $\{(U_t, \pi_t)\}_{t \geq 1}$ be the iterates generated by Algorithm~\ref{alg:grad-sinkhorn}. We have \begin{equation*} \frac{1}{T}\left(\sum_{t=0}^{T-1} \|\textnormal{grad}\, f_\eta(U_t)\|_F^2\right) \ \leq \ \frac{4\Delta_f}{\gamma T} + \frac{\epsilon^2}{5}, \end{equation*} where $\Delta_f = \max_{U \in \textnormal{St}(d, k)} f_\eta(U) - f_\eta(U_0)$ is the initial objective gap. \end{lemma} \begin{proof} Using Lemma~\ref{lemma:key-inequality} with $U_1 = U_{t+1}$ and $U_2 = U_t$, we have \begin{equation}\label{inequality-descent-grad-first} f_\eta(U_{t+1}) - f_\eta(U_t) - \langle \nabla f_\eta(U_t), U_{t+1} - U_t\rangle \ \geq \ -\left(\|C\|_\infty + \frac{2 \|C\|_\infty^2}{\eta}\right)\|U_{t+1} - U_t\|_F^2. \end{equation} By the definition of $U_{t+1}$, we have \begin{eqnarray*} \langle \nabla f_\eta(U_t), U_{t+1} - U_t\rangle & = & \ \langle \nabla f_\eta(U_t), \textnormal{Retr}_{U_t}(\gamma\xi_{t+1}) - U_t\rangle \\ & & \hspace*{-6em} = \ \langle\nabla f_\eta(U_t), \gamma\xi_{t+1}\rangle + \langle\nabla f_\eta(U_t), \textnormal{Retr}_{U_t}(\gamma\xi_{t+1}) - (U_t + \gamma\xi_{t+1})\rangle \\ & & \hspace*{-6em} \geq \ \langle\nabla f_\eta(U_t), \gamma\xi_{t+1}\rangle - \|\nabla f_\eta(U_t)\|_F\|\textnormal{Retr}_{U_t}(\gamma\xi_{t+1}) - (U_t + \gamma\xi_{t+1})\|_F. \end{eqnarray*} By Lemma~\ref{lemma:lip-grad}, we have $\|\nabla f_\eta(U)\|_F \leq 2\|C\|_\infty$. Putting these pieces with Proposition~\ref{prop:retraction} yields that \begin{equation}\label{inequality-descent-grad-second} \langle \nabla f_\eta(U_t), U_{t+1} - U_t\rangle \ \geq \ \gamma\langle\nabla f_\eta(U_t), \xi_{t+1}\rangle - 2\gamma^2 L_2\|C\|_\infty\|\xi_{t+1}\|_F^2. \end{equation} Using Proposition~\ref{prop:retraction} again, we have \begin{equation}\label{inequality-descent-grad-third} \|U_{t+1} - U_t\|_F^2 \ = \ \|\textnormal{Retr}_{U_t}(\gamma\xi_{t+1}) - U_t\|_F^2 \ \leq \ \gamma^2 L_1^2\|\xi_{t+1}\|_F^2. \end{equation} Combining Eq.~\eqref{inequality-descent-grad-first}, Eq.~\eqref{inequality-descent-grad-second} and Eq.~\eqref{inequality-descent-grad-third} yields \begin{equation}\label{inequality-descent-grad-fourth} f_\eta(U_{t+1}) - f_\eta(U_t) \ \geq \ \gamma\langle\nabla f_\eta(U_t), \xi_{t+1}\rangle - \gamma^2((L_1^2 + 2 L_2)\|C\|_\infty + 2 \eta^{-1}L_1^2\|C\|_\infty^2)\|\xi_{t+1}\|_F^2. 
\end{equation} Recall that $\textnormal{grad}\, f_\eta(U_t) = P_{\textnormal{T}_{U_t}\textnormal{St}}(\nabla f_\eta(U_t))$ and $\xi_{t+1} = P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\pi_{t+1}}U_t)$, we have \begin{equation*} \langle\nabla f_\eta(U_t), \xi_{t+1}\rangle \ = \ \langle \textnormal{grad}\, f_\eta(U_t), \xi_{t+1}\rangle \ = \ \|\textnormal{grad}\, f_\eta(U_t)\|_F^2 + \langle\textnormal{grad}\, f_\eta(U_t), \xi_{t+1} - \textnormal{grad}\, f_\eta(U_t)\rangle \end{equation*} Using Young's inequality, we have \begin{equation*} \langle\nabla f_\eta(U_t), \xi_{t+1}\rangle \ \geq \ (1/2)\left(\|\textnormal{grad}\, f_\eta(U_t)\|_F^2 - \|\xi_{t+1} - \textnormal{grad}\, f_\eta(U_t)\|_F^2\right). \end{equation*} Furthermore, we have $\|\xi_{t+1}\|_F^2 \leq 2\|\textnormal{grad}\, f_\eta(U_t)\|_F^2 + 2\|\xi_{t+1} - \textnormal{grad}\, f_\eta(U_t)\|_F^2$. Putting these pieces together with Eq.~\eqref{inequality-descent-grad-fourth} yields that \begin{eqnarray}\label{inequality-descent-grad-fifth} f_\eta(U_{t+1}) - f_\eta(U_t) & \geq & \gamma\left(\frac{1}{2} - \gamma(2 L_1^2\|C\|_\infty + 4L_2\|C\|_\infty + 4\eta^{-1}L_1^2\|C\|_\infty^2)\right)\|\textnormal{grad}\, f_\eta(U_t)\|_F^2 \nonumber \\ & & \hspace*{-6em} - \gamma\left(\frac{1}{2} + \gamma(2 L_1^2\|C\|_\infty + 4L_2\|C\|_\infty + 4 \eta^{-1}L_1^2\|C\|_\infty^2)\right)\|\xi_{t+1} - \textnormal{grad}\, f_\eta(U_t)\|_F^2. \end{eqnarray} Since $\xi_{t+1} = P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\pi_{t+1}}U_t)$ and $\textnormal{grad}\, f_\eta(U_t) = P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\tilde{\pi}_t^\star}U_t)$ where $\tilde{\pi}_t^\star$ is a minimizer of the entropic regularized OT problem, i.e., $\tilde{\pi}_t^\star \in \mathop{\rm argmin}_{\pi \in \Pi(\mu, \nu)} \ \{\langle U_tU_t^\top, V_\pi\rangle - \eta H(\pi)\}$, we have \begin{equation*} \|\xi_{t+1} - \textnormal{grad}\, f_\eta(U_t)\|_F \ \leq \ 2\|(V_{\pi_{t+1}} - V_{\tilde{\pi}_t^\star})U_t\|_F \ = \ 2\|V_{\pi_{t+1}} - V_{\tilde{\pi}_t^\star}\|_F. \end{equation*} By the definition of $V_\pi$ and using the stopping criterion: $\|\pi_{t+1} - \tilde{\pi}_t^\star\|_1 \leq \widehat{\epsilon} = \frac{\epsilon}{10\|C\|_\infty}$, we have \begin{equation*} \|V_{\pi_{t+1}} - V_{\tilde{\pi}_t^\star}\|_F \ \leq \ \|C\|_\infty\|\pi_{t+1} - \tilde{\pi}_t^\star\|_1 \leq \frac{\epsilon}{10}. \end{equation*} Putting these pieces together yields that \begin{equation}\label{inequality-descent-grad-sixth} \|\xi_{t+1} - \textnormal{grad}\, f_\eta(U_t)\|_F \ \leq \ \frac{\epsilon}{5}. \end{equation} Plugging Eq.~\eqref{inequality-descent-grad-sixth} into Eq.~\eqref{inequality-descent-grad-fifth} with the definition of $\gamma$ yields that \begin{equation*} f_\eta(U_{t+1}) - f_\eta(U_t) \ \geq \ \frac{\gamma\|\textnormal{grad}\, f_\eta(U_t)\|_F^2}{4} - \frac{\gamma\epsilon^2}{20}. \end{equation*} Summing and rearranging the resulting inequality yields that \begin{equation*} \frac{1}{T}\left(\sum_{t=0}^{T-1} \|\textnormal{grad}\, f_\eta(U_t)\|_F^2\right) \ \leq \ \frac{4(f_\eta(U_T) - f_\eta(U_0))}{\gamma T} + \frac{\epsilon^2}{5}. \end{equation*} This together with the definition of $\Delta_f$ implies the desired result. \end{proof} We now provide analogous results for the RAGAS algorithm (cf.\ Algorithm~\ref{alg:adagrad-sinkhorn}). \begin{lemma}\label{lemma:obj-progress-adagrad} Let $\{(U_t, \pi_t)\}_{t \geq 1}$ be the iterates generated by Algorithm~\ref{alg:adagrad-sinkhorn}. 
Then, we have \begin{equation*} \frac{1}{T}\left(\sum_{t=0}^{T-1} \|\textnormal{grad}\, f_\eta(U_t)\|_F^2\right) \ \leq \ \frac{8\|C\|_\infty\Delta_f}{\gamma T} + \frac{\epsilon^2}{10}, \end{equation*} where $\Delta_f = \max_{U \in \textnormal{St}(d, k)} f_\eta(U) - f_\eta(U_0)$ is the initial objective gap. \end{lemma} \begin{proof} Using the same argument as in the proof of Lemma~\ref{lemma:obj-progress-grad}, we have \begin{equation}\label{inequality-descent-adagrad-first} f_\eta(U_{t+1}) - f_\eta(U_t) \ \geq \ \gamma\langle\nabla f_\eta(U_t), \xi_{t+1}\rangle - \gamma^2((L_1^2 + 2 L_2)\|C\|_\infty + 2 \eta^{-1}L_1^2\|C\|_\infty^2)\|\xi_{t+1}\|_F^2. \end{equation} Recall that $\textnormal{grad}\, f_\eta(U_t) = P_{\textnormal{T}_{U_t}\textnormal{St}}(\nabla f_\eta(U_t))$ and the definition of $\xi_{t+1}$, we have \begin{eqnarray*} \langle\nabla f_\eta(U_t), \xi_{t+1}\rangle & = & \langle \textnormal{grad}\, f_\eta(U_t), \xi_{t+1}\rangle \\ & = & \langle \textnormal{grad}\, f_\eta(U_t), \textnormal{Diag}\,(\widehat{p}_{t+1})^{-1/4}(\textnormal{grad}\, f_\eta(U_t))\textnormal{Diag}\,(\widehat{q}_{t+1})^{-1/4}\rangle \\ & & \hspace*{2 em} + \langle\textnormal{grad}\, f_\eta(U_t), \textnormal{Diag}\,(\widehat{p}_{t+1})^{-1/4}(G_{t+1} - \textnormal{grad}\, f_\eta(U_t))\textnormal{Diag}\,(\widehat{q}_{t+1})^{-1/4}\rangle. \end{eqnarray*} Using the Cauchy-Schwarz inequality and the nonexpansiveness of $P_{\textnormal{T}_{U_t}\textnormal{St}}$, we have \begin{eqnarray*} \|\xi_{t+1}\|_F^2 & \leq & 2\|P_{\textnormal{T}_{U_t}\textnormal{St}}(\textnormal{Diag}\,(\widehat{p}_{t+1})^{-1/4}(\textnormal{grad}\, f_\eta(U_t))\textnormal{Diag}\,(\widehat{q}_{t+1})^{-1/4})\|_F^2 \\ & & + 2\|\xi_{t+1} - P_{\textnormal{T}_{U_t}\textnormal{St}}(\textnormal{Diag}\,(\widehat{p}_{t+1})^{-1/4}(\textnormal{grad}\, f_\eta(U_t))\textnormal{Diag}\,(\widehat{q}_{t+1})^{-1/4})\|_F^2 \\ & \leq & 2\|\textnormal{Diag}\,(\widehat{p}_{t+1})^{-1/4}(\textnormal{grad}\, f_\eta(U_t))\textnormal{Diag}\,(\widehat{q}_{t+1})^{-1/4}\|_F^2 \\ & & + 2\|\textnormal{Diag}\,(\widehat{p}_{t+1})^{-1/4}(G_{t+1} - \textnormal{grad}\, f_\eta(U_t))\textnormal{Diag}\,(\widehat{q}_{t+1})^{-1/4}\|_F^2. \end{eqnarray*} Furthermore, by the definition of $G_{t+1}$, we have $\|G_{t+1}\|_F \leq 2\|C\|_\infty$ and hence \begin{equation*} \textbf{0}_d \leq \frac{\textnormal{diag}(G_{t+1}G_{t+1}^\top)}{k} \leq 4\|C\|_\infty^2\textbf{1}_d, \qquad \textbf{0}_k \leq \frac{\textnormal{diag}(G_{t+1}^\top G_{t+1})}{d} \preceq 4\|C\|_\infty^2\textbf{1}_k. \end{equation*} By the definition of $p_t$ and $q_t$, we have $\textbf{0}_d \preceq p_t \preceq 4\|C\|_\infty^2\textbf{1}_d$ and $\textbf{0}_k \preceq q_t \preceq 4\|C\|_\infty^2\textbf{1}_k$. This together with the definition of $\widehat{p}_t$ and $\widehat{q}_t$ implies that \begin{equation*} \alpha\|C\|_\infty^2 \textbf{1}_d \leq \widehat{p}_t \leq 4\|C\|_\infty^2\textbf{1}_d, \qquad \alpha\|C\|_\infty^2 \textbf{1}_k \leq \widehat{q}_t \leq 4\|C\|_\infty^2\textbf{1}_k. 
\end{equation*} This inequality together with Young's inequality implies that \begin{eqnarray*} \langle\nabla f_\eta(U_t), \xi_{t+1}\rangle & \geq & \frac{\|\textnormal{grad}\, f_\eta(U_t)\|_F^2}{2\|C\|_\infty} - \frac{1}{\sqrt{\alpha}\|C\|_\infty}\left(\frac{\sqrt{\alpha}\|\textnormal{grad}\, f_\eta(U_t)\|_F^2}{4} + \frac{\|G_{t+1} - \textnormal{grad}\, f_\eta(U_t)\|_F^2}{\sqrt{\alpha}} \right) \\ & = & \frac{\|\textnormal{grad}\, f_\eta(U_t)\|_F^2}{4\|C\|_\infty} - \frac{\|G_{t+1} - \textnormal{grad}\, f_\eta(U_t)\|_F^2}{\alpha\|C\|_\infty}, \end{eqnarray*} and \begin{equation*} \|\xi_{t+1}\|_F^2 \ \leq \ \frac{2\|\textnormal{grad}\, f_\eta(U_t)\|_F^2}{\alpha\|C\|_\infty^2} + \frac{2\|G_{t+1} - \textnormal{grad}\, f_\eta(U_t)\|_F^2}{\alpha\|C\|_\infty^2}. \end{equation*} Putting these pieces together with Eq.~\eqref{inequality-descent-adagrad-first} yields that \begin{eqnarray}\label{inequality-descent-adagrad-second} f_\eta(U_{t+1}) - f_\eta(U_t) & \geq & \frac{\gamma}{4\|C\|_\infty}\left(1 - \frac{8\gamma}{\alpha}\left(L_1^2 + 2L_2 + 2\eta^{-1}L_1^2\|C\|_\infty\right)\right)\|\textnormal{grad}\, f_\eta(U_t)\|_F^2 \nonumber \\ & & \hspace*{-6em} - \frac{\gamma}{\alpha\|C\|_\infty}\left(1 + \gamma(2 L_1^2 + 4L_2 + 4 \eta^{-1}L_1^2\|C\|_\infty)\right)\|G_{t+1} - \textnormal{grad}\, f_\eta(U_t)\|_F^2. \end{eqnarray} Recall that $G_{t+1} = P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\pi_{t+1}}U_t)$ and $\textnormal{grad}\, f_\eta(U_t) = P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\tilde{\pi}_t^\star}U_t)$. Then we can apply the same argument as in the proof of Lemma~\ref{lemma:obj-progress-grad} and obtain that \begin{equation}\label{inequality-descent-adagrad-third} \|G_{t+1} - \textnormal{grad}\, f_\eta(U_t)\|_F \ \leq \ \frac{\epsilon\sqrt{\alpha}}{10}. \end{equation} Plugging Eq.~\eqref{inequality-descent-adagrad-third} into Eq.~\eqref{inequality-descent-adagrad-second} with the definition of $\gamma$ yields that \begin{equation*} f_\eta(U_{t+1}) - f_\eta(U_t) \ \geq \ \frac{\gamma\|\textnormal{grad}\, f_\eta(U_t)\|_F^2}{8\|C\|_\infty} - \frac{\gamma\epsilon^2}{80\|C\|_\infty}. \end{equation*} Summing and rearranging the resulting inequality yields that \begin{equation*} \frac{1}{T}\left(\sum_{t=0}^{T-1} \|\textnormal{grad}\, f_\eta(U_t)\|_F^2\right) \ \leq \ \frac{8\|C\|_\infty(f_\eta(U_T) - f_\eta(U_0))}{\gamma T} + \frac{\epsilon^2}{10}. \end{equation*} This together with the definition of $\Delta_f$ implies the desired result. \end{proof} \subsection{Main results} Before proceeding to the main results, we present a technical lemma on Hoffman's bound~\citep{Hoffman-1952-Approximate, Li-1994-Sharp} and the characterization of the Hoffman constant~\citep{Guler-1995-Approximations, Klatte-1995-Error, Wang-2014-Iteration}, which will also be crucial to the subsequent analysis. \begin{lemma}\label{lemma:Hoffman} Consider the polyhedron $\mathcal{S} = \{x \in \mathbb{R}^d \mid Ex=t, x \geq 0\}$. For any point $x \in \mathbb{R}^d$, we have \begin{equation*} \|x - \textnormal{proj}_\mathcal{S}(x)\|_1 \leq \theta(E)\left\|\begin{bmatrix} \max\{0, -x\} \\ Ex-t \end{bmatrix}\right\|_1, \end{equation*} where $\theta(E)$ is the Hoffman constant and can be represented by \begin{equation*} \theta(E) = \sup_{u, v \in \mathbb{R}^d} \left\{\left\|\begin{bmatrix} u \\ v \end{bmatrix}\right\|_\infty \left| \begin{array}{l} \|E^\top v - u\|_\infty = 1, u \geq 0 \\ \textnormal{The rows of $E$ corresponding to the nonzero} \\ \textnormal{elements of $v$ are linearly independent.} \end{array}\right.
\right\} \end{equation*} \end{lemma} We then present the iteration complexity of the RGAS algorithm (Algorithm~\ref{alg:grad-sinkhorn}) and the RAGAS algorithm (Algorithm~\ref{alg:adagrad-sinkhorn}) in the following two theorems. \begin{theorem}\label{Theorem:RGAS-Total-Iteration} Letting $\{(U_t, \pi_t)\}_{t \geq 1}$ be the iterates generated by Algorithm~\ref{alg:grad-sinkhorn}, the number of iterations required to reach $\textnormal{dist}(0, \textnormal{subdiff}\, f(U_t)) \leq \epsilon$ satisfies \begin{equation*} t \ = \ \widetilde{O}\left(\frac{k\|C\|_\infty^2}{\epsilon^2}\left(1 + \frac{\|C\|_\infty}{\epsilon}\right)^2\right). \end{equation*} \end{theorem} \begin{proof} Let $\tilde{\pi}_t^\star$ be a minimizer of the entropic regularized OT problem and let $\pi_t^\star$ be the projection of $\tilde{\pi}_t^\star$ onto the optimal solution set of the unregularized OT problem. More specifically, the unregularized OT problem is an LP and its optimal solution set is a polyhedron (here $t^\star$ denotes the optimal objective value), \begin{equation*} \mathcal{S} = \{\pi \in \mathbb{R}^{n \times n} \mid \pi \in \Pi(\mu, \nu), \ \langle U_tU_t^\top, V_\pi\rangle = t^\star\}. \end{equation*} Then we have \begin{equation*} \tilde{\pi}_t^\star \in \mathop{\rm argmin}_{\pi \in \Pi(\mu, \nu)} \ \langle U_tU_t^\top, V_\pi\rangle - \eta H(\pi), \qquad \pi_t^\star = \textnormal{proj}(\tilde{\pi}_t^\star) \in \mathop{\rm argmin}_{\pi \in \Pi(\mu, \nu)} \ \langle U_tU_t^\top, V_\pi\rangle. \end{equation*} By definition, we have $\nabla f_\eta(U_t) = 2V_{\tilde{\pi}_t^\star}U_t$ and $2V_{\pi_t^\star}U_t \in \partial f(U_t)$. This together with the definition of the Riemannian gradient and the Riemannian subdifferential yields that \begin{eqnarray*} \textnormal{grad}\, f_\eta(U_t) & = & P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\tilde{\pi}_t^\star}U_t), \\ \textnormal{subdiff}\, f(U_t) & \ni & P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\pi_t^\star}U_t). \end{eqnarray*} Therefore, we conclude that \begin{eqnarray*} \textnormal{dist}(0, \textnormal{subdiff}\, f(U_t)) & \leq & \|P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\pi_t^\star}U_t)\|_F \\ & & \hspace*{-6em} \ \leq \ \|P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\tilde{\pi}_t^\star}U_t)\|_F + \|P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\pi_t^\star}U_t) - P_{\textnormal{T}_{U_t}\textnormal{St}}(2V_{\tilde{\pi}_t^\star}U_t)\|_F \\ & & \hspace*{-6em} \ \leq \ \|\textnormal{grad}\, f_\eta(U_t)\|_F + 2\|(V_{\pi_t^\star} - V_{\tilde{\pi}_t^\star})U_t\|_F. \end{eqnarray*} Note that scaling the objective function by $\|C\|_\infty$ will not change the optimal solution set. Since $U_t \in \textnormal{St}(d, k)$, each coefficient of the normalized objective function is bounded by $1$. By Lemma~\ref{lemma:Hoffman}, we obtain that there exists a constant $\bar{\theta}$ independent of $\|C\|_\infty$ such that \begin{equation*} \|\tilde{\pi}_t^\star - \pi_t^\star\|_1 \ \leq \ \bar{\theta}\left\|\left\langle U_tU_t^\top, \frac{V_{\tilde{\pi}_t^\star} - V_{\pi_t^\star}}{\|C\|_\infty}\right\rangle\right\|_1. \end{equation*} By the definition of $\tilde{\pi}_t^\star$, we have $\langle U_tU_t^\top, V_{\tilde{\pi}_t^\star}\rangle - \eta H(\tilde{\pi}_t^\star) \leq \langle U_tU_t^\top, V_{\pi_t^\star}\rangle - \eta H(\pi_t^\star)$.
Since $0 \leq H(\pi) \leq 2\log(n)$ and $\eta = \frac{\epsilon\min\{1, 1/\bar{\theta}\}}{40\log(n)}$, we have \begin{equation*} \tilde{\pi}_t^\star \ \in \ \Pi(\mu, \nu), \qquad 0 \ \leq \ \langle U_tU_t^\top, V_{\tilde{\pi}_t^\star} - V_{\pi_t^\star}\rangle \ \leq \ \epsilon/(20\bar{\theta}). \end{equation*} Putting these pieces together yields that \begin{equation*} \|\tilde{\pi}_t^\star - \pi_t^\star\|_1 \ \leq \ \frac{\epsilon}{20\|C\|_\infty\bar{\theta}}. \end{equation*} By the definition of $U_t$ and $V_\pi$, we have \begin{equation*} \|(V_{\pi_t^\star} - V_{\tilde{\pi}_t^\star})U_t\|_F \ = \ \|V_{\pi_t^\star} - V_{\tilde{\pi}_t^\star}\|_F \ \leq \ \bar{\theta}\|C\|_\infty\|\tilde{\pi}_t^\star - \pi_t^\star\|_1 \ \leq \ \frac{\epsilon}{20}. \end{equation*} Putting these pieces together yields \begin{equation*} \textnormal{dist}(0, \textnormal{subdiff}\, f(U_t)) \ \leq \ \|\textnormal{grad}\, f_\eta(U_t)\|_F + \frac{\epsilon}{10}. \end{equation*} Combining this inequality with Lemma~\ref{lemma:obj-progress-grad} and the Cauchy-Schwarz inequality, we have \begin{eqnarray*} \frac{1}{T}\left(\sum_{t=0}^{T-1} [\textnormal{dist}(0, \textnormal{subdiff}\, f(U_t))]^2\right) & \leq & \frac{2}{T}\left(\sum_{t=0}^{T-1} \|\textnormal{grad}\, f_\eta(U_t)\|_F^2\right) + \frac{\epsilon^2}{50} \ \leq \ \frac{8\Delta_f}{\gamma T} + \frac{2\epsilon^2}{5} + \frac{\epsilon^2}{50} \\ & \leq & \frac{8\Delta_f}{\gamma T} + \frac{\epsilon^2}{2}. \end{eqnarray*} Given that $\textnormal{dist}(0, \textnormal{subdiff}\, f(U_t)) > \epsilon$ for all $t = 0, 1, \ldots, T-1$ and \begin{equation*} \frac{1}{\gamma} \ = \ (8 L_1^2 + 16L_2)\|C\|_\infty + \frac{16 L_1^2\|C\|_\infty^2}{\eta} \ = \ (8 L_1^2 + 16L_2)\|C\|_\infty + \frac{640 L_1^2\max\{1, \bar{\theta}\}\|C\|_\infty^2\log(n)}{\epsilon}. \end{equation*} we conclude that the upper bound $T$ must satisfy \begin{equation*} \epsilon^2 \ \leq \ \frac{16\Delta_f}{T}\left((8 L_1^2 + 16L_2)\|C\|_\infty + \frac{640 L_1^2\max\{1, \bar{\theta}\}\|C\|_\infty^2\log(n)}{\epsilon}\right). \end{equation*} Using Lemma~\ref{lemma:key-inequality}, we have \begin{eqnarray*} \Delta_f & \leq & \left( \|C\|_\infty + \frac{2 \|C\|_\infty^2}{\eta}\right)\left(\max_{U \in \textnormal{St}(d, k)} \|U - U_0\|_F^2\right) + 2\|C\|_\infty\left(\max_{U \in \textnormal{St}(d, k)} \|U - U_0\|_F\right) \\ & = & k\left(6\|C\|_\infty + \frac{4 \|C\|_\infty^2}{\eta}\right) \ = \ k\left(6\|C\|_\infty + \frac{160\max\{1, \bar{\theta}\}\|C\|_\infty^2\log(n)}{\epsilon}\right). \end{eqnarray*} Putting these pieces together implies the desired result. \end{proof} \begin{theorem}\label{Theorem:RAGAS-Total-Iteration} Letting $\{(U_t, \pi_t)\}_{t \geq 1}$ be the iterates generated by Algorithm~\ref{alg:adagrad-sinkhorn}, the number of iterations required to reach $\textnormal{dist}(0, \textnormal{subdiff}\, f(U_t)) \leq \epsilon$ satisfies \begin{equation*} t \ = \ \widetilde{O}\left(\frac{k\|C\|_\infty^2}{\epsilon^2}\left(1 + \frac{\|C\|_\infty}{\epsilon}\right)^2\right). \end{equation*} \end{theorem} \begin{proof} Using the same argument as in the proof of Theorem~\ref{Theorem:RGAS-Total-Iteration}, we have \begin{equation*} \textnormal{dist}(0, \textnormal{subdiff}\, f(U_t)) \ \leq \ \|\textnormal{grad}\, f_\eta(U_t)\|_F + \frac{\epsilon}{10}. 
\end{equation*} Combining this inequality with Lemma~\ref{lemma:obj-progress-adagrad} and the Cauchy-Schwarz inequality, we have \begin{eqnarray*} \frac{1}{T}\left(\sum_{t=0}^{T-1} [\textnormal{dist}(0, \textnormal{subdiff}\, f(U_t))]^2\right) & \leq & \frac{2}{T}\left(\sum_{t=0}^{T-1} \|\textnormal{grad}\, f_\eta(U_t)\|_F^2\right) + \frac{\epsilon^2}{50} \\ & & \hspace*{-10em} \leq \ \frac{16\|C\|_\infty\Delta_f}{\gamma T} + \frac{\epsilon^2}{5} + \frac{\epsilon^2}{50} \ \leq \ \frac{16\|C\|_\infty\Delta_f}{\gamma T} + \frac{\epsilon^2}{2}. \end{eqnarray*} Given that $\textnormal{dist}(0, \textnormal{subdiff}\, f(U_t)) > \epsilon$ for all $t = 0, 1, \ldots, T-1$ and \begin{equation*} \frac{1}{\gamma} \ = \ 16L_1^2 + 32L_2 + \frac{1280L_1^2\max\{1, \bar{\theta}\}\|C\|_\infty\log(n)}{\epsilon}, \end{equation*} we conclude that the upper bound $T$ must satisfy \begin{equation*} \epsilon^2 \ \leq \ \frac{64\|C\|_\infty\Delta_f}{T}\left(16L_1^2 + 32L_2 + \frac{1280L_1^2\max\{1, \bar{\theta}\}\|C\|_\infty\log(n)}{\epsilon}\right). \end{equation*} Using Lemma~\ref{lemma:key-inequality}, we have \begin{eqnarray*} \Delta_f & \leq & \left( \|C\|_\infty + \frac{2 \|C\|_\infty^2}{\eta}\right)\left(\max_{U \in \textnormal{St}(d, k)} \|U - U_0\|_F^2\right) \ = \ k\left(2 \|C\|_\infty + \frac{4 \|C\|_\infty^2}{\eta}\right) \\ & = & k\left(2 \|C\|_\infty + \frac{160\max\{1, \bar{\theta}\}\|C\|_\infty^2\log(n)}{\epsilon}\right). \end{eqnarray*} Putting these pieces together implies the desired result. \end{proof} From Theorems~\ref{Theorem:RGAS-Total-Iteration} and~\ref{Theorem:RAGAS-Total-Iteration}, Algorithms~\ref{alg:grad-sinkhorn} and~\ref{alg:adagrad-sinkhorn} achieve the same iteration complexity. Furthermore, the number of arithmetic operations at each loop of Algorithms~\ref{alg:grad-sinkhorn} and~\ref{alg:adagrad-sinkhorn} is also the same. Thus, the complexity bound of Algorithm~\ref{alg:adagrad-sinkhorn} is the same as that of Algorithm~\ref{alg:grad-sinkhorn}. \begin{theorem}\label{Theorem:RGAS-RAGAS-Total-Complexity} Either the RGAS algorithm or the RAGAS algorithm returns an $\epsilon$-approximate pair of optimal subspace projection and optimal transportation plan for the computation of the PRW distance in Eq.~\eqref{prob:main} (cf. Definition~\ref{def:optimal-pair}) in \begin{equation*} \widetilde{O}\left(\left(\frac{n^2d\|C\|_\infty^2}{\epsilon^2} + \frac{n^2\|C\|_\infty^6}{\epsilon^6} + \frac{n^2\|C\|_\infty^{10}}{\epsilon^{10}}\right)\left(1 + \frac{\|C\|_\infty}{\epsilon}\right)^2\right) \end{equation*} arithmetic operations. \end{theorem} \begin{proof} First, Theorems~\ref{Theorem:RGAS-Total-Iteration} and~\ref{Theorem:RAGAS-Total-Iteration} imply that both algorithms achieve the same iteration complexity, namely \begin{equation}\label{RGAS-RAGAS-iteration} t \ = \ \widetilde{O}\left(\frac{k\|C\|_\infty^2}{\epsilon^2}\left(1 + \frac{\|C\|_\infty}{\epsilon}\right)^2\right). \end{equation} This implies that $U_t$ is an $\epsilon$-approximate optimal subspace projection of problem~\eqref{prob:Stiefel-nonsmooth}. By the definition of $\widehat{\epsilon}$ and using the stopping criterion of the subroutine $\textsc{regOT}(\{(x_i, r_i)\}_{i \in [n]}, \{(y_j, c_j)\}_{j \in [n]}, U_t, \eta, \widehat{\epsilon})$, we have $\pi_{t+1} \in \Pi(\mu, \nu)$ and \begin{equation*} 0 \ \leq \ \langle U_tU_t^\top, V_{\pi_{t+1}} - V_{\tilde{\pi}_t^\star}\rangle \ \leq \ \|C\|_\infty\|\pi_{t+1} - \tilde{\pi}_t^\star\|_1 \ \leq \ \|C\|_\infty\widehat{\epsilon} \ \leq \ \epsilon/2.
\end{equation*} where $\tilde{\pi}_t^\star$ is the unique minimizer of the entropic regularized OT problem. Furthermore, by the definition of $\tilde{\pi}_t^\star$, we have $\langle U_tU_t^\top, V_{\tilde{\pi}_t^\star}\rangle - \eta H(\tilde{\pi}_t^\star) \leq \langle U_tU_t^\top, V_{\pi_t^\star}\rangle - \eta H(\pi_t^\star)$. Since $0 \leq H(\pi) \leq 2\log(n)$ and $\eta = \frac{\epsilon\min\{1, 1/\bar{\theta}\}}{40\log(n)}$, we have \begin{equation*} \tilde{\pi}_t^\star \ \in \ \Pi(\mu, \nu), \qquad 0 \ \leq \ \langle U_tU_t^\top, V_{\tilde{\pi}_t^\star} - V_{\pi_t^\star}\rangle \ \leq \ \epsilon/2. \end{equation*} Putting these pieces together yields that $\pi_{t+1}$ is an $\epsilon$-approximate optimal transportation plan for the subspace projection $U_t$. Therefore, we conclude that $(U_t, \pi_{t+1}) \in \textnormal{St}(d, k) \times \Pi(\mu, \nu)$ is an \emph{$\epsilon$-approximate pair of optimal subspace projection and optimal transportation plan} of problem~\eqref{prob:main}. The remaining step is to analyze the complexity bound. Indeed, we first claim that the number of arithmetic operations required by the Sinkhorn iteration at each loop is upper bounded by \begin{equation}\label{RGAS-complexity-Sinkhorn} \widetilde{O}\left(\frac{n^2\|C\|_\infty^4}{\epsilon^4} + \frac{n^2\|C\|_\infty^8}{\epsilon^8}\right). \end{equation} Furthermore, while \textbf{Step 5} and \textbf{Step 6} in Algorithm~\ref{alg:grad-sinkhorn} can be implemented in $O(dk^2 + k^3)$ arithmetic operations, we still need to construct $V_{\pi_{t+1}} U_t$. A naive approach would first construct the $d \times d$ matrix $V_{\pi_{t+1}}$ using $O(n^2 d^2)$ arithmetic operations and then perform the matrix multiplication using $O(d^2k)$ arithmetic operations. This is computationally prohibitive since $d$ can be very large in practice. In contrast, we observe that \begin{equation*} V_{\pi_{t+1}} U_t \ = \ \sum_{i=1}^n \sum_{j=1}^n (\pi_{t+1})_{i, j} (x_i - y_j)(x_i - y_j)^\top U_t. \end{equation*} Since $x_i - y_j \in \mathbb{R}^d$, computing $(x_i - y_j)(x_i - y_j)^\top U_t$ takes $O(dk)$ arithmetic operations for each $(i, j) \in [n] \times [n]$. This implies that the total number of arithmetic operations is $O(n^2dk)$. Therefore, the number of arithmetic operations at each loop is \begin{equation}\label{RGAS-arithmetic-operation} \widetilde{O}\left(n^2 dk + dk^2 + k^3 + \frac{n^2\|C\|_\infty^4}{\epsilon^4} + \frac{n^2\|C\|_\infty^8}{\epsilon^8}\right). \end{equation} Putting Eq.~\eqref{RGAS-RAGAS-iteration} and Eq.~\eqref{RGAS-arithmetic-operation} together with $k = \widetilde{O}(1)$ yields the desired result. \paragraph{Proof of claim~\eqref{RGAS-complexity-Sinkhorn}.} The proof is based on the combination of several existing results proved by~\citet{Altschuler-2017-Near} and~\citet{Dvurechensky-2018-Computational}. For the sake of completeness, we provide the details. More specifically, we consider solving the entropic regularized OT problem \begin{equation*} \min_{\pi \in \mathbb{R}_+^{n \times n}} \ \langle C, \pi\rangle - \eta H(\pi), \quad \textnormal{s.t.} \ r(\pi) = r, \ c(\pi) = c. \end{equation*} We leverage the Sinkhorn iteration, which aims at minimizing the following function \begin{equation*} f(u, v) = \textbf{1}_n^\top B(u, v)\textbf{1}_n - \langle u, r\rangle - \langle v, c\rangle, \quad \textnormal{where } B(u, v) := \textnormal{diag}(u)e^{-\frac{C}{\eta}}\textnormal{diag}(v).
\end{equation*} From the update scheme of the Sinkhorn iteration, it is clear that $\textbf{1}_n^\top B(u_j, v_j)\textbf{1}_n = 1$ for each iteration $j$. By a straightforward calculation, we have \begin{eqnarray*} & & \langle C, B(u_j, v_j)\rangle - \eta H(B(u_j, v_j)) - \left(\langle C, B(u^\star, v^\star)\rangle - \eta H(B(u^\star, v^\star))\right) \\ & \leq & \eta(f(u_j, v_j) - f(u^\star, v^\star)) + \eta R(\|r(B(u_j, v_j))-r\|_1 + \|c(B(u_j, v_j))-c\|_1) \end{eqnarray*} where $(u^\star, v^\star)$ is a minimizer of $f(u, v)$ over $\mathbb{R}^n \times \mathbb{R}^n$ and $R>0$ is defined in~\citet[Lemma~1]{Dvurechensky-2018-Computational}. Since the entropic regularization function is strongly convex with respect to the $\ell_1$-norm over the probability simplex and $B(u_j, v_j)$ can be vectorized as a probability vector, we have \begin{equation*} \|B(u_j, v_j) - B(u^\star, v^\star)\|_1^2 \ \leq \ 2(f(u_j, v_j) - f(u^\star, v^\star)) + 2R(\|r(B(u_j, v_j))-r\|_1 + \|c(B(u_j, v_j))-c\|_1). \end{equation*} On the one hand, by the definition of $(u^\star, v^\star)$ and $B(\cdot, \cdot)$, it is clear that $B(u^\star, v^\star)$ is the unique optimal solution of the entropic regularized OT problem, and we further denote it as $\tilde{\pi}^\star$. On the other hand, the final output $\tilde{\pi} \in \Pi(\mu, \nu)$ is obtained by rounding $B(u_j, v_j)$ to $\Pi(\mu, \nu)$ for some $j$ using~\citet[Algorithm~2]{Altschuler-2017-Near}, and~\citet[Lemma~7]{Altschuler-2017-Near} guarantees that \begin{equation*} \|\tilde{\pi} - B(u_j, v_j)\|_1 \leq 2(\|r(B(u_j, v_j))-r\|_1 + \|c(B(u_j, v_j))-c\|_1). \end{equation*} Again, from the update scheme of the Sinkhorn iteration and by Pinsker's inequality, we have \begin{equation*} \sqrt{2\left(f(u_j, v_j) - f(u^\star, v^\star)\right)} \ \geq \ \|r(B(u_j, v_j))-r\|_1 + \|c(B(u_j, v_j))-c\|_1. \end{equation*} Putting these pieces together yields that \begin{equation*} \|\tilde{\pi} - \tilde{\pi}^\star\|_1 \ \leq \ c_1\left(f(u_j, v_j) - f(u^\star, v^\star)\right)^{1/2} +c_2\sqrt{R}\left(f(u_j, v_j) - f(u^\star, v^\star)\right)^{1/4}, \end{equation*} where $c_1, c_2 > 0$ are constants. Then, by using Eq.~(12) in~\citet[Theorem~1]{Dvurechensky-2018-Computational}, we have $f(u_j, v_j) - f(u^\star, v^\star) \leq \frac{2R^2}{j}$. This together with the definition of $R$ yields that the number of iterations required by the Sinkhorn iteration is \begin{equation*} \widetilde{O}\left(\frac{\|C\|_\infty^4}{\epsilon^4} + \frac{\|C\|_\infty^8}{\epsilon^8}\right). \end{equation*} This completes the proof. \end{proof} \begin{remark} Theorem~\ref{Theorem:RGAS-RAGAS-Total-Complexity} is surprising in that it provides a finite-time guarantee for finding an $\epsilon$-stationary point of a nonsmooth function $f$ over a nonconvex constraint set. This is impossible for general nonconvex nonsmooth optimization even in the Euclidean setting~\citep{Zhang-2020-Complexity, Shamir-2020-Can}. Our results demonstrate that the max-min optimization model in Eq.~\eqref{prob:main} has a special structure that makes fast computation possible. \end{remark} \begin{remark} Note that our algorithms only return an approximate stationary point for the nonconvex max-min optimization model in Eq.~\eqref{prob:main}, which needs to be evaluated in practice. It is also interesting to compare such a stationary point to the global optimal solution of computing the SRW distance.
This is very challenging in general, due to the multiple stationary points of the non-convex max-min optimization model in Eq.~\eqref{prob:main}, but possible if the data has a certain structure. We leave this for future work. \end{remark}
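As a final implementation note, the arithmetic-operation count used in the proof of Theorem~\ref{Theorem:RGAS-RAGAS-Total-Complexity} hinges on evaluating the product $V_\pi U$ without ever forming the $d \times d$ matrix $V_\pi$. The snippet below is only a minimal NumPy sketch of this matrix-free product (it is not the implementation used for the experiments reported in this paper, and all helper names are ours); the second routine is a standard rearrangement of the double sum into a few matrix products, with cost $O(ndk + n^2k)$.
\begin{verbatim}
import numpy as np

def vpi_times_u_loop(pi, X, Y, U):
    # V_pi @ U = sum_{i,j} pi_ij (x_i - y_j)(x_i - y_j)^T U, cost O(n^2 d k).
    # X, Y are (n, d) arrays with rows x_i, y_j; U is (d, k); pi is an (n, n) plan.
    out = np.zeros_like(U)
    for i in range(X.shape[0]):
        for j in range(Y.shape[0]):
            diff = X[i] - Y[j]
            out += pi[i, j] * np.outer(diff, diff @ U)
    return out

def vpi_times_u_fast(pi, X, Y, U):
    # Same quantity via V_pi = X^T diag(r) X + Y^T diag(c) Y - X^T pi Y - Y^T pi^T X,
    # where r, c are the row/column marginals of pi; cost O(n d k + n^2 k).
    r, c = pi.sum(axis=1), pi.sum(axis=0)
    XU, YU = X @ U, Y @ U
    return (X.T @ (r[:, None] * XU) + Y.T @ (c[:, None] * YU)
            - X.T @ (pi @ YU) - Y.T @ (pi.T @ XU))
\end{verbatim}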
{ "timestamp": "2021-02-09T02:08:15", "yymm": "2006", "arxiv_id": "2006.07458", "language": "en", "url": "https://arxiv.org/abs/2006.07458", "abstract": "Projection robust Wasserstein (PRW) distance, or Wasserstein projection pursuit (WPP), is a robust variant of the Wasserstein distance. Recent work suggests that this quantity is more robust than the standard Wasserstein distance, in particular when comparing probability measures in high-dimensions. However, it is ruled out for practical application because the optimization model is essentially non-convex and non-smooth which makes the computation intractable. Our contribution in this paper is to revisit the original motivation behind WPP/PRW, but take the hard route of showing that, despite its non-convexity and lack of nonsmoothness, and even despite some hardness results proved by~\\citet{Niles-2019-Estimation} in a minimax sense, the original formulation for PRW/WPP \\textit{can} be efficiently computed in practice using Riemannian optimization, yielding in relevant cases better behavior than its convex relaxation. More specifically, we provide three simple algorithms with solid theoretical guarantee on their complexity bound (one in the appendix), and demonstrate their effectiveness and efficiency by conducing extensive experiments on synthetic and real data. This paper provides a first step into a computational theory of the PRW distance and provides the links between optimal transport and Riemannian optimization.", "subjects": "Machine Learning (cs.LG); Optimization and Control (math.OC); Machine Learning (stat.ML)", "title": "Projection Robust Wasserstein Distance and Riemannian Optimization", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9740426397881663, "lm_q2_score": 0.7279754371026367, "lm_q1q2_score": 0.7090791164563964 }
https://arxiv.org/abs/2203.11338
Fast Toeplitz eigenvalue computations, joining interpolation-extrapolation matrix-less algorithms and simple-loop conjectures: the preconditioned setting
Under appropriate technical assumptions, the simple-loop theory allows one to deduce various types of asymptotic expansions for the eigenvalues of Toeplitz matrices $T_{n}(f)$ generated by a function $f$. Unfortunately, such a theory is not available in the preconditioning setting, that is, for matrices of the form $T_{n}^{-1}(g)T_{n}(l)$ with $l,g$ real-valued, $g$ nonnegative and not identically zero almost everywhere. Independently, and under the milder hypothesis that $f=\frac{l}{g}$ is even and monotonic over $[0,\pi]$, matrix-less algorithms have been developed for the fast eigenvalue computation of large preconditioned matrices of the type above, with a linear complexity in the matrix order: behind the high efficiency of such algorithms there are expansions analogous to those valid in the case $g\equiv 1$, combined with the extrapolation idea, and hence we conjecture that the simple-loop theory has to be extended to this new setting, as the numerics strongly suggest. Here we focus our attention on a change of variable, followed by the asymptotic expansion of the new variable, and we consider new matrix-less algorithms tailored to the current case. Numerical experiments show a much higher precision, up to machine precision, at the same linear computational cost, when compared with the matrix-less procedures already proposed in the literature.
\section{Introduction} In this work we design and test fast procedures for the computation of all the eigenvalues of large preconditioned Toeplitz matrices of the form $X_{n}\equiv T_{n}^{-1}(g)T_{n}(l)$ with $g,l$ even, real-valued on $Q\equiv(-\pi,\pi)$, $g>0$ on $(0,\pi)$ such that $f\equiv\frac{l}{g}$ is monotone in the interval $(0,\pi)$. For the formal definition of Toeplitz matrix generated by a Lebesgue integrable function over $Q$ see the first lines of Section~\ref{sc:prel}. Taking into consideration a clear numerical evidence developed in a systematic series of numerical tests, in \cite{EkGa18} the second author formulated the following conjecture. \begin{conjecture}\label{conj} Let $l,g$ be two even functions with $g>0$ on $(0,\pi)$, and suppose that $f\equiv \frac{l}{g}$ is monotone increasing over $(0,\pi)$. Set $X_{n}\equiv T_{n}^{-1}(g)T_{n}(l)$ for all $n$. Then, for every integer $K\geq 0$, every $n$ and every $j=1,\ldots,n$, the following asymptotic expansion holds: \begin{equation}\label{hoapp} \lambda_{j}(X_{n})=f(\theta_{j,n})+\sum_{k=1}^{K}c_{k}(\theta_{j,n})h^{k}+E_{j,n,K}, \end{equation} where \begin{itemize} \item the eigenvalues of $X_{n}$ are arranged in non-decreasing order, $\lambda_{1}(X_{n})\leq\cdots\leq\lambda_{n}(X_{n})$;\,\footnote{Note that the eigenvalues of $X_{n}$ are real, because $T_{n}(g)$ is symmetric positive definite and $X_{n}$ is similar to the symmetric matrix $T_{n}^{-\frac{1}{2}}(g)T_{n}(l)T_{n}^{-\frac{1}{2}}(g)$.} \item $\{c_{k}\}_{k=1}^{\infty}$ is a sequence of functions from $(0,\pi)$ to $\mathbb{R}$ which depends only on $l$ and $g$; \item $h\equiv\frac{1}{n+1}$ and $\theta_{j,n}\equiv\frac{j\pi}{n+1}=j\pi h$; \item $E_{j,n,K}=O(h^{K+1})$ is the remainder (the error), which satisfies the inequality $|E_{j,n,K}|\le c h^{K+1}$ for some constant $c$ depending only on $K,l,g$. \end{itemize} \end{conjecture} \noindent As already mentioned, Conjecture~\ref{conj} was originally formulated and supported through numerical experiments in \cite{EkGa18}. Then the algorithmic proposal was extended and refined in \cite{AhAl18,EkFu18a,EkFu18b,EkGa19}. When $g\equiv 1$ and $l$ satisfies further technical additional assumptions, those of the simple-loop method, Conjecture~\ref{conj} was formally proved by Bogoya, B\"ottcher, Grudsky, and Maximenko in a series of papers \cite{BoBo15a,BoBo16,BoGr17,BoBo17}. For a positive function $g$, relation (\ref{hoapp}) was proven, using only purely matrix-theoretic tools, and only for $K=1$ in \cite{AhAl18}. Here we formulate a second conjecture and we exploit it for designing an even more precise method, when compared with that proposed in \cite{AhAl18,EkGa19}, having the same linear complexity. In fact, the distribution results reported in Theorem~\ref{teo-Sergo} and the localization results in Theorem~\ref{teo-Sext} imply that $\lambda_{j}(X_{n})=f(s_{j,n})$, $X_{n}=T_{n}^{-1}(g)T_{n}(l)$, $f=\frac{l}{g}$ with $s_{j,n}$ belonging to $(0,\pi)$, and with the sequence \[ \Big\{\{s_{j,n}\}_{j=1}^{n}\Big\}_{n} \] distributed as the identity function. More precisely, from the combination of Theorem~\ref{teo-Sergo}, Theorem~\ref{teo-Sext}, and of the monotonicity of $f$, for every $j,n$ we easily deduce that \begin{equation}\label{eq:MainExp zero-level} \lambda_{j}(X_{n})\equiv f(s_{j,n}), \quad s_{j,n}=\theta_{j,n}+ o(1),\quad \theta_{j,n}\equiv\frac{j\pi}{n+1}. 
\end{equation} However, we think a richer result holds and more in detail we conjecture that, also in the independent variable $\theta$, for any $K>0$, there exists an asymptotic expansion, regarding exactly the points $s_{j,n}$, given explicitly by the following conjecture. \begin{conjecture}\label{cj:MainExp} Let $l,g$ be two even functions with $g>0$ on $(0,\pi)$, and suppose that $f\equiv \frac{l}{g}$ is monotone increasing over $(0,\pi)$. Set $X_{n}\equiv T_{n}^{-1}(g)T_{n}(l)$ for all $n$. Then, for some integer $K\ge0$, every $n$ and every $j=1,\ldots,n$, the following asymptotic expansion holds: \begin{equation*} \lambda_{j}(X_{n})\equiv f(s_{j,n}), \quad s_{j,n}=\theta_{j,n}+\sum_{k=1}^{K}\rho_{k}(\theta_{j,n})h^{k}+E_{j,n,K}, \end{equation*} where \begin{itemize} \item the numbers $s_{j,n}$ are arranged in nondecreasing order; \item $h\equiv\frac{1}{n+1}$ and $\theta_{j,n}\equiv \pi j h$; \item the coefficients $\rho_{k}$ are continuous functions from $(0,\pi)$ to $\mathbb{R}$ which depend only on $f$, \item $E_{j,n,K}=O(h^{K+1})$ is the remainder (the error), which satisfies the inequality $|E_{j,n,K}|\le c h^{K+1}$ for some constant $c$ depending only on $K,l,g$. \end{itemize} \end{conjecture} This article deals with the adaptation of the interpolation-extrapolation algorithms to the previous change of variable, joined with a trick at the end points introduced in \cite{BoSe21}. The numerical results are extremely precise, even compared with the already good performances described in \cite{AhAl18,EkFu18a,EkFu18b,EkGa19,EkGa18} and of the same order as (or even better than) in \cite{BoEk22} for the non preconditioned setting: in fact, it is not difficult to reach machine precision and the complexity for computing all the eigenvalues is still linear. The present work is organized as follows. Preliminary definitions, tools, and results are concisely reported in Section~\ref{sc:prel}. Section~\ref{sc:algo} presents the new adapted algorithm for computing the Toeplitz eigenvalues: as in \cite{EkGa19}, our technique combines the extrapolation procedure proposed in \cite{AhAl18,EkGa18} -- which allows the computation of {\em some} of the eigenvalues of $X_{n}$ -- with an appropriate interpolation process, designed for the simultaneous computation of {\em all} the eigenvalues of $X_{n}$, with the additional end point trick in \cite{BoSe21}. In Section~\ref{sc:num} we present the numerical experiments, while in Section~\ref{conclusions} we draw conclusions and we list few open problems for future research lines, to be investigated in the next future. \section{Preliminaries and Tools}\label{sc:prel} For a real or complex valued function $f$ in $L^{1}[-\pi,\pi]$, let $\mathfrak{a}_{j}(f)$ be its $j$th Fourier coefficient, i.e. \[ \mathfrak{a}_{j}(f)\equiv\frac{1}{2\pi}\int_{-\pi}^{\pi} f(\theta)\mathrm{e}^{-\i j\theta}\,\textrm{d}\theta,\quad j\in\mathbb Z, \] and consider the sequence $\{T_{n}(f)\}_{n=1}^{\infty}$ of the $n\times n$ Toeplitz matrices defined by $T_{n}(f)\equiv\big(\mathfrak{a}_{j-k}(f)\big)_{j,k=0}^{n-1}$. The function $f$ is customarily referred to as the generating function of this sequence. As a second step, we introduce some notations and definitions concerning general sequences of matrices. 
For any function $F$ defined on the complex field and for any matrix $A_{n}$ of size $d_{n}$, by the symbol $\Sigma_{\lambda}(F,A_{n})$, we denote the mean \begin{equation*} \Sigma_{\lambda}(F,A_{n})\equiv\frac{1}{d_{n}} \sum_{j=1}^{d_{n}} F(\lambda_{j}(A_{n})), \end{equation*} \begin{definition} Given a sequence $\{A_{n}\}$ of matrices of size $d_{n}$ with $d_{n}<d_{n+1}$, and given a Lebesgue-measurable function $\psi$ defined over a measurable set $X\subset {\mathbb{R}}^{\nu}$, $\nu \in \mathbb N^+$, of finite and positive Lebesgue measure $\mu(X)$, we say that $\{A_{n}\}$ is distributed as $(\psi,X)$ in the sense of the eigenvalues if, for any continuous $F$ with bounded support, the following limit relation holds \begin{equation*} \lim_{n\rightarrow \infty}\Sigma_{\lambda}(F,A_{n})= \frac{1}{\mu(X)}\int_{X} F(\psi)\,\mathrm{d}\mu. \end{equation*} In this case, we write in short $\{A_{n}\}\sim_{\lambda} (\psi,X)$. \end{definition} In Remark~\ref{rem:meaning-distribution} we provide an informal meaning of the notion of eigenvalue distribution. \begin{remark}\label{rem:meaning-distribution} The informal meaning behind the above definition is the following. If $\psi$ is continuous, $n$ is large enough, and \begin{equation*} \left\{{\bf x}_{j}^{(d_{n})},\ j=1,\ldots, d_{n}\right\} \end{equation*} is an equispaced grid on $X$, then a suitable ordering $\lambda_{j}(A_{n})$, $j=1,\ldots,d_{n}$, of the eigenvalues of $A_{n}$ is such that the pairs $\big\{\big({\bf x}_{j}^{(d_{n})},\lambda_{j}(A_{n})\big),\ j=1,\ldots,d_{n}\big\}$ reconstruct approximately the hypersurface \begin{equation*} \{({\bf x},\psi({\bf x})),\ {\bf x}\in X\}. \end{equation*} In other words, the spectrum of $A_{n}$ `behaves' like a uniform sampling of $\psi$ over $X$. For instance, if $\nu=1$, $d_{n}=n$, and $X=[a,b]$, then the eigenvalues of $A_{n}$ are approximately equal to $\psi\big(a+\frac{j}{n+1}(b-a)\big)$, $j=1,\ldots,n$, for $n$ large enough and up to at most $o(n)$ outliers. Analogously, if we have $\nu=2$, $d_{n}=n^{2}$, and $X=[a_{1},b_{1}]\times [a_{2},b_{2}]$, then the eigenvalues of the matrix $A_{n}$ are approximately equal to $\psi\big(a_{1}+\frac{j}{n+1}(b_{1}-a_{1}),a_{2}+\frac{k}{n+1}(b_{2}-a_{2})\big)$, $j,k=1,\ldots,n$, for $n$ large enough and up to at most $o(n^{2})$ outliers. \end{remark} The asymptotic distribution of eigenvalues and singular values of Toeplitz matrix sequences has been studied deeply and continuously in the last century (for example see \cite{BaGa20b,BaGa20a,BoSi99,GaSe17,GaSe18} and references therein). For the preconditioned matrix sequences as defined before, a formally similar theory holds, studied by the second author in a series of papers \cite{DiBFi93,Se94,Se97b,Se99e,Se99f,Se98a}. \begin{theorem}{\rm\cite{Se98a}} \label{teo-Sergo} If $g,l$ are integrable over $Q=(-\pi,\pi)$, $g$ is nonnegative almost everywhere (a.e) and not identically zero a.e., and $f=\frac{l}{g}$. Then, setting $\{X_{n}\}$ the sequence of preconditioned Toeplitz matrices with $X_{n}=T_{n}^{-1}(g)T_{n}(l)$, we deduce \begin{equation*} \{X_{n} \}\sim_{\lambda} (f,\tilde Q), \end{equation*} where $\tilde Q$ is defined as $Q$ minus the set where both $g$ and $l$ vanish simultaneously. \end{theorem} \begin{theorem}{\rm \cite{Se97b}} \label{teo-Sext} If $g,l$ are integrable over $Q=(-\pi,\pi)$, $g$ is nonnegative almost everywhere (a.e) and not identically zero a.e., and $f=\frac{l}{g}$. 
Then, setting $X_{n}=T_{n}^{-1}(g)T_{n}(l)$, $m$ as the essential infimum of $f$, and $M$ as the essential supremum of $f$, with $m<M$, we deduce \begin{equation*} \lambda_{j}(X_{n})\in (m,M), \end{equation*} for all $j=1,\ldots n$, and all $n\ge 1$. If $m=M$ then the result is trivial since $X_{n}=mI_{n}$ with $I_{n}$ being the identity matrix, so that $\lambda_{j}(X_{n})\equiv m$, for all $j=1,\ldots n$, and for all $n\ge 1$. \end{theorem} We notice that Theorem~\ref{teo-Sergo} reduces to the famous Szeg\H{o}~Theorem \cite{GrSz84} when $g\equiv 1$, in its most general version due to Tyrtyshnikov and Zamarashkin \cite{TyZa98} (see also the work by Tilli \cite{Ti98a} for the extension to the case of matrix-valued generating functions), while Theorem~\ref{teo-Sext} again for $g\equiv 1$ reduces to the standard localization results for Toeplitz matrices generated by a Lebesgue integrable function. Finally, it is worth stressing that Remark~\ref{rem:meaning-distribution} of course applies also in this context, but no outliers are present (see Theorem~\ref{teo-Sext} and relations (\ref{eq:MainExp zero-level}) for the precise statements). \section{Algorithmic proposals}\label{sc:algo} In the work \cite{BoEk22} we used an asymptotic eigenvalue expansion which was based on the simple-loop theory (see for example \cite{BoBo15a,BoBo16,BoGr17} or the nice review \cite{BoBo17}). However, the preconditioned setting has the additional complication of not having a formal supporting result. Thus, our algorithm has the Conjecture~\ref{cj:MainExp} as its theoretical background. We considered also the algorithms proposed in \cite{BoSe21,EkFu18b,EkGa19,EkGa18}, and produced again an algorithm suited for parallel implementation and that can be called matrix-less, since it does not require to calculate or even to store the objective matrix entries. For every $n\in\mathbb N$ let $h\equiv\frac{1}{n+1}$ and $\theta_{j,n}\equiv \pi jh$. One of the key details is that the term $\theta_{j,n}$, remains unchanged for different combinations of $j$ and $n$, for instance \begin{equation}\label{eq:crule} \theta_{j,n}=\theta_{cj,c(n+1)-1}, \end{equation} for any constant $c\in\mathbb N$. Thus, if we select $c=2^{k-1}$ $(k\in\mathbb N)$, we will obtain an increasing sequence of matrix sizes, whose eigenvalues will be used in the precomputing phase. The sequence $\{\theta_{j,n}\}_{j=0}^{n+1}$ is a regular partition of the interval $[0,\pi]$ with step size $\pi h$. We mirror notations for $n$ and $h$, for example, $h_{k}$ means $\frac{1}{n_{k}+1}$, and so on. We assume that \begin{itemize} \item the function $f=\frac{l}{g}$ is even and real-valued, strictly increasing in the interval $[0,\pi]$, and $f(0)=0$; \item $n_{1}$ and $K$ are fixed natural numbers and $n\gg n_{1}$; \item for $k=1,\ldots,K$ let $n_{k}\equiv 2^{k-1}(n_{1}+1)-1$; \item for $j_{1}=1,\ldots,n$ and $k=1,\ldots,K$, let $j_{k}\equiv 2^{k-1}j_{1}$. \end{itemize} The index $j_{k}$ depends on $j_{1}$, and similarly, the matrix sizes $n_{k}$ depend on $n_{1}$, but for notation simplicity, we suppressed those dependencies. Following the rule \eqref{eq:crule}, the numbers $j_{k}$ and the matrix sizes $n_{k}$ were calculated in such a way that \[\sigma_{j_{1}}\equiv\theta_{j_{1},n_{1}}=\theta_{j_{2},n_{2}}=\cdots=\theta_{j_{K},n_{K}},\] see Figure~\ref{fg:Grid}. \begin{figure}[ht] \centering \includegraphics[width=0.9\textwidth]{NiceGrid} \caption{The regular grids $\{\theta_{j,n_{k}}\}$ for $j=1,\ldots,n_{k}$ and $k=1,\ldots,K$. 
In the precomputing phase we will need to calculate the eigenvalues of $X_{n_{k}}$ for $k=1,\ldots,K$ corresponding to the blue and red dots combined, but in the interpolation phase, we will use only the eigenvalues corresponding to the red dots, that is $\lambda_{j_{k}}(X_{n_{k}})$ for $j_{1}=1,\ldots,n_{1}$ and $k=1,\ldots,K$.}\label{fg:Grid} \end{figure} For well-conditioned matrices of sizes of the order $10^{2}$--$10^{4}$, the eigenvalue computation can be done on any modern standard computer. However, in applications like statistical physics and other relevant applied settings we need to handle dimensions of order $10^{8}$--$10^{12}$, a task which is impossible even for modern supercomputers. Additionally, the numerical verification of our Conjecture~\ref{cj:MainExp} opens the door to a formal extension of the simple-loop theory to the preconditioned setting. Thus, our aim is to produce an algorithm capable of calculating the eigenvalues of `big' matrices, including extremely large ones. The results show that we can easily reach machine precision accuracy. Recall that $X_{n}\equiv T_{n}^{-1}(g)T_{n}(l)$. As a precomputing phase we need to calculate the eigenvalues of $X_{n}$ for $n=n_{1},\ldots,n_{K}$. This can be easily done with any standard eigensolver (e.g. \texttt{Eigenvalues} in \textsc{Mathematica} or \texttt{eig} in \textsc{Matlab}). The algorithm has two phases: the first one is an extrapolation procedure, and the second one is a local interpolation technique. \noindent\textbf{Extrapolation.} For each fixed $j_{1}=1,\ldots,n_{1}$ let $\sigma_{j_{1}}\equiv\theta_{j_{1},n_{1}}=\cdots=\theta_{j_{K},n_{K}}$ (see the red dots in Figure~\ref{fg:Grid}), and apply the expansion in Conjecture~\ref{cj:MainExp} $K$ times, obtaining \begin{eqnarray*} s_{j_{1},n_{1}}-\sigma_{j_{1}}&=&\rho_{1}(\sigma_{j_{1}})h_{1}+\rho_{2}(\sigma_{j_{1}})h_{1}^{2}+\cdots+\rho_{K}(\sigma_{j_{1}})h_{1}^{K}+E_{j_{1},n_{1},K},\\ s_{j_{2},n_{2}}-\sigma_{j_{1}}&=&\rho_{1}(\sigma_{j_{1}})h_{2}+\rho_{2}(\sigma_{j_{1}})h_{2}^{2}+\cdots+\rho_{K}(\sigma_{j_{1}})h_{2}^{K}+E_{j_{2},n_{2},K},\\ &\vdots&\\ s_{j_{K},n_{K}}-\sigma_{j_{1}}&=&\rho_{1}(\sigma_{j_{1}})h_{K}+\rho_{2}(\sigma_{j_{1}})h_{K}^{2}+\cdots+\rho_{K}(\sigma_{j_{1}})h_{K}^{K}+E_{j_{K},n_{K},K}. \end{eqnarray*} Let $\hat \rho_{k}(\sigma_{j_{1}})$ be the approximation of $\rho_{k}(\sigma_{j_{1}})$ obtained by removing all the error terms $E_{j_{k},n_{k},K}$ and solving the resulting linear system: \begin{equation}\label{eq:IntPhase} \begin{bmatrix} h_{1} & h_{1}^{2} & \cdots & h_{1}^{K}\\ h_{2} & h_{2}^{2} & \cdots & h_{2}^{K}\\ \vdots & \vdots & \ddots & \vdots\\ h_{K} & h_{K}^{2} & \cdots & h_{K}^{K} \end{bmatrix} \begin{bmatrix} \hat \rho_{1}(\sigma_{j_{1}}) \\ \hat \rho_{2}(\sigma_{j_{1}}) \\ \vdots\\ \hat \rho_{K}(\sigma_{j_{1}}) \end{bmatrix} = \begin{bmatrix} s_{j_{1},n_{1}} \\ s_{j_{2},n_{2}} \\ \vdots\\ s_{j_{K},n_{K}} \end{bmatrix} -\sigma_{j_{1}} \begin{bmatrix} 1\\ 1\\ \vdots\\ 1 \end{bmatrix}. \end{equation} Let $\phi$ be the inverse function of $f$ restricted to the interval $[0,\pi]$. The value of each $s_{j_{k},n_{k}}$ can be calculated as $\phi(\lambda_{j_{k}}(X_{n_{k}}))$. If $\phi$ is not available exactly, it can be found numerically with any standard root finder (e.g. \texttt{FindRoot} in \textsc{Mathematica} or \texttt{fzero} in \textsc{Matlab}).
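For illustration only, the whole extrapolation phase can be sketched in a few lines of Python/\textsc{SciPy} in double precision (in practice, as recommended in the remark below, the precomputations should rather be carried out with extended precision, e.g. in \textsc{Mathematica}). The callables \texttt{coeff\_l} and \texttt{coeff\_g}, returning the first column $(\mathfrak{a}_{0}(\cdot),\ldots,\mathfrak{a}_{m-1}(\cdot))$ of $T_{m}(l)$ and $T_{m}(g)$, and \texttt{phi}, implementing the inverse $\phi$, are assumed to be provided by the user; they are not part of the algorithm description above.
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz, eigh

def extrapolation_phase(coeff_l, coeff_g, phi, n1, K):
    # n_k = 2^{k-1}(n1+1)-1 and h_k = 1/(n_k+1), for k = 1,...,K
    ns = [2**k * (n1 + 1) - 1 for k in range(K)]
    hs = np.array([1.0 / (n + 1) for n in ns])
    # eigenvalues of X_{n_k} = T_{n_k}^{-1}(g) T_{n_k}(l) via the generalized
    # symmetric-definite eigenproblem T_{n_k}(l) v = lambda T_{n_k}(g) v
    eigs = [eigh(toeplitz(coeff_l(n)), toeplitz(coeff_g(n)),
                 eigvals_only=True) for n in ns]
    # matrix of the linear system: rows [h_k, h_k^2, ..., h_k^K]
    V = np.vander(hs, K + 1, increasing=True)[:, 1:]
    rho_hat = np.zeros((K, n1))     # rho_hat[k-1, j1-1] approximates rho_k(sigma_{j1})
    for j1 in range(1, n1 + 1):
        sigma = np.pi * j1 * hs[0]  # sigma_{j1} = theta_{j1,n1}
        s = np.array([phi(eigs[k][2**k * j1 - 1]) for k in range(K)])
        rho_hat[:, j1 - 1] = np.linalg.solve(V, s - sigma)
    return ns, rho_hat
\end{verbatim}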
As mentioned in previous works, a variant of this extrapolation strategy was first suggested by Albrecht Böttcher in \cite[\S7]{BoBo15a} and is analogous to the Richardson extrapolation employed in the context of Romberg integration \cite[\S3.4]{StBu10}. \bigskip\\ \noindent\textbf{Interpolation.} For any index $j\in\{1,\ldots,n\}$ and any level $k=1,\ldots,K$, we will estimate $\rho_{k}(\theta_{j,n})$. If $\theta_{j,n}$ coincides with one of the points in the grid $\{\theta_{1,n_{1}},\ldots,\theta_{n_{1},n_{1}}\}$, then we have the approximations $\hat\rho_{k}(\theta_{j,n})$ from the extrapolation phase for free. In any other case, we will do it by interpolating the data \begin{equation}\label{eq:ExtPhase} (\theta_{0,n_{1}},\hat \rho_{k}(\theta_{0,n_{1}})),(\theta_{1,n_{1}},\hat \rho_{k}(\theta_{1,n_{1}})),\ldots,(\theta_{n_{1}+1,n_{1}},\hat \rho_{k}(\theta_{n_{1}+1,n_{1}})), \end{equation} for $k=1,\ldots,K$, and then evaluating the resulting polynomial at $\theta_{j,n}$. This interpolation can be done in many ways, but to avoid spurious oscillations explained by the Runge phenomenon \cite[p.78]{Da75a}, and following the strategy of the previous works, we decided to do it considering only the $K-k+5$ points in the grid $\{\theta_{0,n_{1}},\ldots,\theta_{n_{1}+1,n_{1}}\}$ which are closest to $\theta_{j,n}$. Those points can be determined uniquely unless $\theta_{j,n}$ is the midpoint of two consecutive points in the grid, in which case we can take any of the two possible choices. Finally, our eigenvalue approximation with $k$ terms is given by \begin{equation}\label{eq:NAS} \lambda_{j,k}^{\nas}(X_{n})\equiv f\Big(\theta_{j,n}+\sum_{\ell=1}^{k} \hat \rho_{\ell}(\theta_{j,n})h^{\ell}\Big), \end{equation} where $k=1,\ldots,K$, and NAS stands for ``Numerical Algorithm in the variable $s_{j,n}$''. \begin{remark} Since our algorithm is able to reach machine precision accuracy, in the precomputing phase we advise calculating $s_{j_{k},n_{k}}$ with a large number of precision digits, say $60$. In all of our examples we used $K=5$ and $n_{1}=100$, and the precomputing phase was carried out on a standard computer in only a few minutes. \end{remark} \begin{remark} According to \cite[Th.3.2]{BoBo15b}, for any $u\in(0,1)$, we have \[\lim_{n\to\infty}\lambda_{\lceil un\rceil}(X_{n})=f(\pi u),\] thus, taking $u=\frac{j_{1}}{n_{1}+1}=j_{1}h_{1}$, we can see that for every $j_{1}=1,\ldots,n_{1}$, the sequences $\big(\lambda_{j_{k}}(X_{n_{k}})\big)_{k\ge1}$ converge to $f(\theta_{j_{1},n_{1}})$. Consequently, the sequences $\big(s_{j_{k},n_{k}}\big)_{k\ge1}$ converge to $\theta_{j_{1},n_{1}}$. The previous result is a direct consequence of the famous Avram--Parter theorem (see \cite{Av88,Pa86} or the beautiful paper \cite{Ty96}) and agrees with our Conjecture~\ref{cj:MainExp}. \end{remark} \section{Numerical evidences}\label{sc:num} In this section we want to compare our algorithm with the one introduced in \cite[\S4]{BoSe21}. To this aim, we start by analyzing three examples from \cite[\S3]{EkGa18}, involving Real Cosine Trigonometric Polynomials (RCTPs). Our numerical experiments were performed with \textsc{Mathematica v.12 (64~bit)} on a platform with 16GB RAM and a quad-core Intel Core i7 2.6 GHz processor. Recall that $X_{n}\equiv T_{n}^{-1}(g)T_{n}(l)$ and that $\lambda_{j,k}^{\nas}$ is given by \eqref{eq:NAS}, and let $\lambda_{j,k}^{\mna}(X_{n})$ be the $k$th term approximation of $\lambda_{j}(X_{n})$ given by the Modified Numerical Algorithm \cite[\S4]{BoSe21}.
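For the reader's convenience, the interpolation phase and the evaluation of \eqref{eq:NAS} can be sketched in the same minimal Python form (again, this is only an illustration with our own helper names: the end-point data appearing in \eqref{eq:ExtPhase} and the extended-precision arithmetic used in practice are omitted, and \texttt{rho\_hat} denotes the output of the extrapolation phase on the coarse grid of size $n_{1}$).
\begin{verbatim}
import numpy as np

def nas_eigenvalues(f, n1, rho_hat, n, k_terms):
    # Approximate lambda_j(X_n), j = 1,...,n, with k_terms expansion terms.
    # f must be a vectorized implementation of the quotient l/g on [0, pi].
    K = rho_hat.shape[0]
    h1, h = 1.0 / (n1 + 1), 1.0 / (n + 1)
    grid = np.pi * h1 * np.arange(1, n1 + 1)    # coarse grid theta_{j1,n1}
    theta = np.pi * h * np.arange(1, n + 1)     # target grid theta_{j,n}
    s = theta.copy()
    for k in range(1, k_terms + 1):
        npts = K - k + 5                        # local interpolation stencil
        rho_k = np.empty_like(theta)
        for idx, t in enumerate(theta):
            near = np.argsort(np.abs(grid - t))[:npts]
            coef = np.polyfit(grid[near], rho_hat[k - 1, near], npts - 1)
            rho_k[idx] = np.polyval(coef, t)
        s += rho_k * h**k
    return f(s)                                 # lambda_{j,k_terms}^{NAS}(X_n)
\end{verbatim}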
We use the following notation for the absolute individual errors \[\varepsilon^{\nas}_{j,n,k}\equiv|\lambda_{j}(X_{n})-\lambda_{j,k}^{\nas}(X_{n})|,\qquad \varepsilon^{\mna}_{j,n,k}\equiv|\lambda_{j}(X_{n})-\lambda_{j,k}^{\mna}(X_{n})|,\] and the respective maximum absolute errors \[\varepsilon^{\nas}_{n,k}\equiv\max\{\varepsilon^{\nas}_{j,n,k}\colon j=1,\ldots,n\},\qquad \varepsilon^{\mna}_{n,k}\equiv\max\{\varepsilon^{\mna}_{j,n,k}\colon j=1,\ldots,n\}.\] \begin{example}\label{ex:1} Consider the two RCTPs \begin{equation*} l(\theta)=2-\cos(\theta)-\cos(2\theta),\qquad g(\theta)=3+2\cos(\theta), \end{equation*} and let $f=\frac{l}{g}$. In this case the function $f$ simplifies to $f(\theta)=1-\cos(\theta)$, which satisfies our hypotheses and gives us an exact inverse function, i.e. $f^{-1}(\varphi)=\arccos(1-\varphi)$, $\varphi\in(0,2)$. The $j$th Fourier coefficient of $\cos(k\theta)$ is $\frac{1}{2}$ for $j=\pm k$, and $0$ in any other case; thus the Fourier coefficients of $l$ and $g$ can be obtained easily by linearity. For a matrix $X_{n}$ of size $n=4096$, Figure~\ref{fg:Erru} shows, in logarithmic scale, the individual errors $\varepsilon_{j,n,k}^{\nas}$ and $\varepsilon_{j,n,k}^{\mna}$ for different levels $k$, and Table~\ref{tb:Erru} shows the maximum absolute errors $\varepsilon_{n,k}^{\nas}$ and $\varepsilon_{n,k}^{\mna}$ for different matrix sizes $n$ and different levels $k$. In this case the matrices $T_{n}(l)$ and $T_{n}(g)$ are banded, while $X_{n}$ is not Toeplitz but dense, with exponentially decaying entries. Hence, if we work directly on $X_n$, we lose a lot of information and, as a consequence, the exact computation of its eigenvalues is quite hard. For example, when calculating the eigenvalues of $X_{512}$ with machine precision accuracy, we only get $2$ correct digits. Nevertheless, even for this small matrix, our algorithm is able to produce $15$ correct digits, and faster. {\renewcommand{\arraystretch}{1.2} \begin{table}[ht] \caption{Example~\ref{ex:1}: The maximum errors $\varepsilon_{n,k}^{\mna}$, $\varepsilon_{n,k}^{\nas}$, and maximum normalized errors $(n+1)^{k}\,\varepsilon_{n,k}^{\nas}$ for the levels $k=1,\ldots,5$, and different matrix sizes $n$, corresponding to the matrix $X_{n}=T_{n}^{-1}(g)T_{n}(l)$ where $l(\theta)=2-\cos(\theta)-\cos(2\theta)$ and $g(\theta)=3+2\cos(\theta)$.
We used a grid of size $n_{1}=100$.}\label{tb:Erru} \centering \begin{tabular}{rlllll} \toprule \multicolumn{1}{c}{$n$} & \multicolumn{1}{c}{$256$} & \multicolumn{1}{c}{$512$} & \multicolumn{1}{c}{$1024$} & \multicolumn{1}{c}{$2048$} & \multicolumn{1}{c}{$4096$} \\ \midrule $\varepsilon_{n,1}^{\mna}$ & $2.935\times10^{-3}$ & $1.4706\times10^{-3}$ & $7.3605\times10^{-4}$ & $3.6822\times10^{-4}$ & $1.8416\times10^{-4}$ \\ \midrule $\varepsilon_{n,1}^{\nas}$ & $2.935\times10^{-3}$ & $1.4706\times10^{-3}$ & $7.3605\times10^{-4}$ & $3.6822\times10^{-4}$ & $1.8416\times10^{-4}$ \\ \midrule $(n+1)\,\varepsilon_{n,1}^{\nas}$ & $7.5429\times10^{-1}$ & $7.5440\times10^{-1}$ & $7.5445\times10^{-1}$ & $7.5448\times10^{-1}$ & $7.5450\times10^{-1}$ \\ \midrule $\varepsilon_{n,2}^{\mna}$ & $6.0234\times10^{-6}$ & $1.5189\times10^{-6}$ & $3.8421\times10^{-7}$ & $9.8046\times10^{-8}$ & $2.5538\times10^{-8}$ \\ \midrule $\varepsilon_{n,2}^{\nas}$ & $3.4682\times10^{-6}$ & $8.6926\times10^{-7}$ & $2.1759\times10^{-7}$ & $5.4432\times10^{-8}$ & $1.3612\times10^{-8}$ \\ \midrule $(n+1)^{2}\,\varepsilon_{n,2}^{\nas}$ & $2.2907\times10^{-1}$ & $2.2876\times10^{-1}$ & $2.2861\times10^{-1}$ & $2.2853\times10^{-1}$ & $2.2849\times10^{-1}$ \\ \midrule $\varepsilon_{n,3}^{\mna}$ & $1.8060\times10^{-8}$ & $2.2864\times10^{-9}$ & $8.4778\times10^{-10}$ & $4.2540\times10^{-10}$ & $2.1313\times10^{-10}$ \\ \midrule $\varepsilon_{n,3}^{\nas}$ & $1.4429\times10^{-8}$ & $1.8129\times10^{-9}$ & $2.2720\times10^{-10}$ & $2.8437\times10^{-11}$ & $3.5569\times10^{-12}$ \\ \midrule $(n+1)^{3}\,\varepsilon_{n,3}^{\nas}$ & $2.4492\times10^{-1}$ & $2.4476\times10^{-1}$ & $2.4467\times10^{-1}$ & $2.4463\times10^{-1}$ & $2.4461\times10^{-1}$ \\ \midrule $\varepsilon_{n,4}^{\mna}$ & $1.5184\times10^{-10}$ & $5.7689\times10^{-11}$ & $2.4437\times10^{-11}$ & $1.2058\times10^{-11}$ & $6.0583\times10^{-12}$ \\ \midrule $\varepsilon_{n,4}^{\nas}$ & $4.9519\times10^{-11}$ & $3.1141\times10^{-12}$ & $1.9522\times10^{-13}$ & $1.2221\times10^{-14}$ & $7.6657\times10^{-16}$ \\ \midrule $(n+1)^{4}\,\varepsilon_{n,4}^{\nas}$ & $2.1603\times10^{-1}$ & $2.1568\times10^{-1}$ & $2.1548\times10^{-1}$ & $2.1541\times10^{-1}$ & $2.1598\times10^{-1}$ \\ \midrule $\varepsilon_{n,5}^{\mna}$ & $2.8044\times10^{-11}$ & $1.5290\times10^{-11}$ & $7.9990\times10^{-12}$ & $4.0993\times10^{-12}$ & $2.0649\times10^{-12}$ \\ \midrule $\varepsilon_{n,5}^{\nas}$ & $1.8256\times10^{-13}$ & $5.7554\times10^{-15}$ & $1.8077\times10^{-16}$ & $5.6588\times10^{-18}$ & $2.3660\times10^{-18}$ \\ \midrule $(n+1)^{5}\,\varepsilon_{n,5}^{\nas}$ & $2.0467\times10^{-1}$ & $2.0448\times10^{-1}$ & $2.0453\times10^{-1}$ & $2.0438\times10^{-1}$ & $2.7311\times10^{\,0}$ \\ \bottomrule \end{tabular} \end{table}} \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{ErruLv3} \put(-189,180){\footnotesize $k=3$}\medskip\\ \includegraphics[width=0.8\textwidth]{ErruLv4} \put(-189,180){\footnotesize $k=4$}\medskip\\ \includegraphics[width=0.8\textwidth]{ErruLv5} \put(-189,180){\footnotesize $k=5$} \caption{Example~\ref{ex:1}: The base-10 logarithm for the individual errors $\varepsilon_{j,n,k}^{\nas}$ (blue) and $\varepsilon_{j,n,k}^{\mna}$ (green) for the matrix $X_{n}=T_{n}^{-1}(g)T_{n}(l)$ with $l(\theta)=2-\cos(\theta)-\cos(2\theta)$ and $g(\theta)=3+2\cos(\theta)$, a matrix size $n=4096$, a grid size $n_{1}=100$, and different levels $k$.}\label{fg:Erru} \end{figure} \clearpage In \cite[Th.3]{EkGa18} the authors provide an error estimate of the kind 
\[\varepsilon_{j,n,K}^{\rm NA}\le c h_{1}^{K}h,\] where $c$ is a constant depending only on $K,l,g$. Table~\ref{tb:Erru} shows that the boundary modification introduced in \cite[\S4]{BoSe21}, which is reflected in the maximum errors $\varepsilon_{n,k}^{\mna}$, produces much better results: for a grid size of $n_{1}=100$ and similar matrix sizes, the authors of \cite{EkGa18} reported a maximum error of $\approx 10^{-10}$, while the $\mna$ algorithm produces $\approx10^{-12}$; this also explains why they observed the constant $c$ growing quickly with $K$. We can also see that our errors satisfy the bound $\varepsilon_{n,k}^{\nas}\le ch^{k}$, for some constant $c$ depending only on $K,l,g$, in perfect agreement with Conjecture~\ref{cj:MainExp}, showing that our algorithm matches the theoretical error and reaches its maximum possible accuracy. \end{example} \begin{example}\label{ex:2} Consider the two RCTPs \begin{eqnarray*} l(\theta)&=&40-15\cos(\theta)-24\cos(2\theta)-\cos(3\theta),\\ g(\theta)&=&1208+1191\cos(\theta)+120\cos(2\theta)+\cos(3\theta), \end{eqnarray*} and let $f=\frac{l}{g}$. In this case the function $f$ admits no significant simplification and we must use a numerical root-finder to evaluate its inverse function. As in Example~\ref{ex:1}, the Fourier coefficients of the symbols $l,g$ can be exactly calculated by linearity and the rule mentioned there. For a matrix $X_{n}$ of size $n=4096$, Figure~\ref{fg:Errv} shows, in log scale, the individual errors $\varepsilon_{j,n,k}^{\nas}$ and $\varepsilon_{j,n,k}^{\mna}$ for different levels $k$, while Table~\ref{tb:Errv} shows the maximum absolute errors $\varepsilon_{n,k}^{\nas}$ and $\varepsilon_{n,k}^{\mna}$ for different matrix sizes $n$ and different levels $k$. {\renewcommand{\arraystretch}{1.2} \begin{table} \centering \caption{Example~\ref{ex:2}: The maximum errors $\varepsilon_{n,k}^{\mna}$, $\varepsilon_{n,k}^{\nas}$, and maximum normalized errors $(n+1)^{k}\,\varepsilon_{n,k}^{\nas}$ for the levels $k=1,\ldots,5$ and different matrix sizes $n$, corresponding to the matrix $X_{n}=T_{n}^{-1}(g)T_{n}(l)$ where $l(\theta)=40-15\cos(\theta)-24\cos(2\theta)-\cos(3\theta)$ and $g(\theta)=1208+1191\cos(\theta)+120\cos(2\theta)+\cos(3\theta)$.
We used a grid of size $n_{1}=100$.}\label{tb:Errv} \begin{tabular}{rlllll} \toprule \multicolumn{1}{c}{$n$} & \multicolumn{1}{c}{$256$} & \multicolumn{1}{c}{$512$} & \multicolumn{1}{c}{$1024$} & \multicolumn{1}{c}{$2048$} & \multicolumn{1}{c}{$4096$} \\ \midrule $\varepsilon_{n,1}^{\mna}$ & $2.9350\times10^{-3}$ & $1.4706\times10^{-3}$ & $7.3605\times10^{-4}$ & $3.6822\times10^{-4}$ & $1.8416\times10^{-4}$ \\ \midrule $\varepsilon_{n,1}^{\nas}$ & $2.9350\times10^{-3}$ & $1.4706\times10^{-3}$ & $7.3605\times10^{-4}$ & $3.6822\times10^{-4}$ & $1.8416\times10^{-4}$ \\ \midrule $(n+1)\,\varepsilon_{n,1}^{\nas}$ & $7.5429\times10^{-1}$ & $7.5440\times10^{-1}$ & $7.5445\times10^{-1}$ & $7.5448\times10^{-1}$ & $7.5450\times10^{-1}$ \\ \midrule $\varepsilon_{n,2}^{\mna}$ & $6.0234\times10^{-6}$ & $1.5189\times10^{-6}$ & $3.8421\times10^{-7}$ & $9.8046\times10^{-8}$ & $2.5538\times10^{-8}$ \\ \midrule $\varepsilon_{n,2}^{\nas}$ & $3.4682\times10^{-6}$ & $8.6926\times10^{-7}$ & $2.1759\times10^{-7}$ & $5.4432\times10^{-8}$ & $1.3612\times10^{-8}$ \\ \midrule $(n+1)^{2}\,\varepsilon_{n,2}^{\nas}$ & $2.2907\times10^{-1}$ & $2.2876\times10^{-1}$ & $2.2861\times10^{-1}$ & $2.2853\times10^{-1}$ & $2.2849\times10^{-1}$ \\ \midrule $\varepsilon_{n,3}^{\mna}$ & $1.8060\times10^{-8}$ & $2.2864\times10^{-9}$ & $8.4778\times10^{-10}$ & $4.2540\times10^{-10}$ & $2.1313\times10^{-10}$ \\ \midrule $\varepsilon_{n,3}^{\nas}$ & $1.4429\times10^{-8}$ & $1.8129\times10^{-9}$ & $2.2720\times10^{-10}$ & $2.8437\times10^{-11}$ & $3.5569\times10^{-12}$ \\ \midrule $(n+1)^{3}\,\varepsilon_{n,3}^{\nas}$ & $2.4492\times10^{-1}$ & $2.4476\times10^{-1}$ & $2.4467\times10^{-1}$ & $2.4463\times10^{-1}$ & $2.4461\times10^{-1}$ \\ \midrule $\varepsilon_{n,4}^{\mna}$ & $1.5184\times10^{-10}$ & $5.7689\times10^{-11}$ & $2.4437\times10^{-11}$ & $1.2058\times10^{-11}$ & $6.0583\times10^{-12}$ \\ \midrule $\varepsilon_{n,4}^{\nas}$ & $4.9519\times10^{-11}$ & $3.1141\times10^{-12}$ & $1.9522\times10^{-13}$ & $1.2221\times10^{-14}$ & $7.6657\times10^{-16}$ \\ \midrule $(n+1)^{4}\,\varepsilon_{n,4}^{\nas}$ & $2.1603\times10^{-1}$ & $2.1568\times10^{-1}$ & $2.1548\times10^{-1}$ & $2.1541\times10^{-1}$ & $2.1598\times10^{-1}$ \\ \midrule $\varepsilon_{n,5}^{\mna}$ & $2.8044\times10^{-11}$ & $1.5290\times10^{-11}$ & $7.9990\times10^{-12}$ & $4.0983\times10^{-12}$ & $2.0649\times10^{-12}$ \\ \midrule $\varepsilon_{n,5}^{\nas}$ & $1.8256\times10^{-13}$ & $5.7554\times10^{-15}$ & $1.8077\times10^{-16}$ & $1.6588\times10^{-18}$ & $2.3660\times10^{-18}$ \\ \midrule $(n+1)^{5}\,\varepsilon_{n,5}^{\nas}$ & $2.0467\times10^{-1}$ & $2.0448\times10^{-1}$ & $2.0453\times10^{-1}$ & $2.0438\times10^{-1}$ & $2.7311\times10^{\,0}$ \\ \bottomrule \end{tabular} \end{table}} \clearpage \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{ErrvLv3} \put(-189,180){\footnotesize $k=3$}\medskip\\ \includegraphics[width=0.8\textwidth]{ErrvLv4} \put(-189,180){\footnotesize $k=4$}\medskip\\ \includegraphics[width=0.8\textwidth]{ErrvLv5} \put(-189,180){\footnotesize $k=5$} \caption{Example~\ref{ex:2}: The base-10 logarithm for the individual errors $\varepsilon_{j,n,k}^{\nas}$ (blue) and $\varepsilon_{j,n,k}^{\mna}$ (green) for the matrix $X_{n}=T_{n}^{-1}(g)T_{n}(l)$ with $l(\theta)=40-15\cos(\theta)-24\cos(2\theta)-\cos(3\theta)$ and $g(\theta)=1208+1191\cos(\theta)+120\cos(2\theta)+\cos(3\theta)$, a matrix size $n=4096$, a grid size $n_{1}=100$, and different levels $k$.}\label{fg:Errv} \end{figure} \clearpage There are different alternatives to 
improve the results of our model: for example, in \cite{EkGa18} the authors took a fixed level $k$, managed different grid sizes $n_{1}=25,50,100,200,400$, and presented the respective log-scaled error figures. Another alternative is to increase the number of levels $K$ in the extrapolation phase \eqref{eq:IntPhase}, or to increase the number of interpolated points in the interpolation phase \eqref{eq:ExtPhase}. In this article, we decided to use a fixed grid size $n_{1}=100$, $K=5$, and $K-k+5$ interpolated points at level $k$, and to show the error evolution for different matrix sizes $n$ and different levels $k$. In this way we can observe the accuracy of the algorithm. As in Example~\ref{ex:1}, Table~\ref{tb:Errv} shows that our model nicely follows the bound $\varepsilon_{n,k}^{\nas}=O(h^{k})$, which is the maximum possible accuracy. \end{example} \begin{example}\label{ex:3} Consider the two RCTPs \begin{eqnarray*} l(\theta)&=&\frac{35}{2}-12\cos(\theta)-6\cos(2\theta)+\frac{1}{2}\cos(4\theta),\\ g(\theta)&=&8-3\cos(\theta)-4\cos(2\theta)-\cos(3\theta), \end{eqnarray*} and let $f=\frac{l}{g}$. In this case the function $f$ can be simplified to $f(\theta)=2-\cos(\theta)$, which satisfies our hypothesis and gives us an exact inverse function, i.e. $f^{-1}(\varphi)=\arccos(2-\varphi)$, $\varphi\in(1,3)$. As in Example~\ref{ex:1}, the Fourier coefficients of the symbols $l,g$ can be exactly calculated by linearity and the rule mentioned there. Figure~\ref{fg:Errw} and Table~\ref{tb:Errw} show the data. \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{ErrwEven} \caption{Example~\ref{ex:3}: The individual eigenvalue errors $\varepsilon_{j,n,k}^{\nas}$ (blue) and $\varepsilon_{j,n,k}^{\mna}$ (green), for the level $k=2$ and a matrix $X_{n}=T_{n}^{-1}(g)T_{n}(l)$ of size $n=256$. The symbols $l,g$ are given by $l(\theta)=\frac{35}{2}-12\cos(\theta)-6\cos(2\theta)+\frac{1}{2}\cos(4\theta)$ and $g(\theta)=8-3\cos(\theta)-4\cos(2\theta)-\cos(3\theta)$.
We used a grid of size $n_{1}=100$.}\label{fg:Errw} \end{figure} {\renewcommand{\arraystretch}{1.2} \begin{table}[ht] \centering \begin{tabular}{rlllll} \toprule \multicolumn{1}{c}{$n$} & \multicolumn{1}{c}{$256$} & \multicolumn{1}{c}{$512$} & \multicolumn{1}{c}{$1024$} & \multicolumn{1}{c}{$2048$} & \multicolumn{1}{c}{$4096$} \\ \midrule $\varepsilon_{n,1}^{\mna}$ & $4.6910\times10^{-3}$ & $2.3666\times10^{-3}$ & $1.1887\times10^{-3}$ & $5.9570\times10^{-4}$ & $2.9819\times10^{-4}$ \\ \midrule $\varepsilon_{n,1}^{\nas}$ & $4.6910\times10^{-3}$ & $2.3666\times10^{-3}$ & $1.1887\times10^{-3}$ & $5.9570\times10^{-4}$ & $2.9819\times10^{-4}$ \\ \midrule $(n+1)\,\varepsilon_{n,1}^{\nas}$ & $1.2056\times10^{\,0}$ & $1.2141\times10^{\,0}$ & $1.2184\times10^{\,0}$ & $1.2206\times10^{\,0}$ & $1.2217\times10^{\,0}$ \\ \midrule $\varepsilon_{n,2}^{\mna}$ & $6.7245\times10^{-5}$ & $1.7140\times10^{-5}$ & $4.3543\times10^{-6}$ & $1.1152\times10^{-6}$ & $2.9156\times10^{-7}$ \\ \midrule $\varepsilon_{n,2}^{\nas}$ & $8.5834\times10^{-5}$ & $2.1860\times10^{-5}$ & $6.6691\times10^{-6}$ & $1.0156\times10^{-5}$ & $5.3725\times10^{-6}$ \\ \midrule $(n+1)^{2}\,\varepsilon_{n,2}^{\nas}$ & $5.6692\times10^{\,0}$ & $5.7530\times10^{\,0}$ & $7.0067\times10^{\,0}$ & $4.2640\times10^{\,1}$ & $9.0180\times10^{1}$ \\ \bottomrule \end{tabular} \vspace{2mm} \caption{Example~\ref{ex:3}: The maximum errors $\varepsilon_{n,k}^{\mna}$, $\varepsilon_{n,k}^{\nas}$, and maximum normalized errors $(n+1)^{k}\,\varepsilon_{n,k}^{\nas}$ for the levels $k=1,2$, and different matrix sizes $n$, corresponding to the matrix $X_{n}=T_{n}^{-1}(g)T_{n}(l)$ where the symbols $l,g$ are given by $l(\theta)=\frac{35}{2}-12\cos(\theta)-6\cos(2\theta)+\frac{1}{2}\cos(4\theta)$ and $g(\theta)=8-3\cos(\theta)-4\cos(2\theta)-\cos(3\theta)$. We used a grid of size $n_{1}=100$.}\label{tb:Errw} \end{table}} Figure~\ref{fg:Errw} shows that the eigenvalues of the matrix $X_{n}$ with even and odd indices have different behaviors. This phenomenon was studied in \cite{BaGr17}, where the authors formally deduced individual asymptotic expansions for the eigenvalues of a certain penta-diagonal Toeplitz matrix. They produced two different expansions, one for the eigenvalues with even indices, and another for the odd ones. As we can see in Table~\ref{tb:Errw}, our algorithm was able to produce acceptable results only up to level $k=2$. We think that it is possible to adapt our algorithm to this case, but this is the topic of a future investigation. \end{example} \section{Conclusions}\label{conclusions} Under appropriate technical assumptions, the simple-loop theory allows one to deduce various types of asymptotic expansions for the eigenvalues of Toeplitz matrices $T_{n}(f)$ generated by a function $f$. Independently, and under the milder hypothesis that $f$ is even and monotonic over $[0,\pi]$, matrix-less algorithms have been developed for the fast eigenvalue computation of large Toeplitz matrices. These procedures work with a linear complexity in the matrix order $n$; behind the high efficiency of such algorithms are the expansions predicted by the simple-loop theory, combined with the extrapolation idea.
Here we conjectured that the same type of expansions holds also for preconditioned matrix sequences, by focusing our attention on a change of variable \[ \lambda_{j}(X_{n})\equiv f(s_{j,n}), \qquad s_{j,n}=\theta_{j,n}+\sum_{k=1}^{K}\rho_{k}(\theta_{j,n})h^{k}+E_{j,n,K}, \] and then we adapted the matrix-less procedures to the new setting under consideration. Numerical experiments have shown, in a clear way, a much higher precision (up to machine precision) and the same linear computational cost, when compared with the matrix-less procedures already presented in the relevant literature. As next steps, the following questions remain to be investigated: \begin{itemize} \item Taking inspiration from the works on the simple-loop setting \cite{BoBo15a,BoBo16,BoGr17,BoBo17}, the extension of the proofs from the pure Toeplitz setting to the preconditioned Toeplitz setting, as described in this work and as strongly confirmed by the numerical experiments; \item Applications to the block case (see \cite{BaGa20b,BaGa20a} for the theory in the block case) and related applications \cite{EkFu18a,EkFu18b} to differential problems, especially systems of ordinary differential equations (ODEs) \cite{GaMa18}, and/or ODE approximation via Discontinuous Galerkin methods \cite{DuFa18}, Finite Element methods of high order \cite{GaSe15}, Isogeometric Analysis with intermediate smoothness \cite{GaSp19}, etc.; \item A fine error analysis giving a theoretical explanation of why the new expansion leads in practice to much smaller errors, when compared with the numerical results in \cite{AhAl18,EkGa19}. \end{itemize} \bibliographystyle{plain}
{ "timestamp": "2022-03-23T01:06:11", "yymm": "2203", "arxiv_id": "2203.11338", "language": "en", "url": "https://arxiv.org/abs/2203.11338", "abstract": "Under appropriate technical assumptions, the simple-loop theory allows to deduce various types of asymptotic expansions for the eigenvalues of Toeplitz matrices $T_{n}(f)$ generated by a function $f$, unfortunately, such a theory is not available in the preconditioning setting, that is for matrices of the form $T_{n}^{-1}(g)T_{n}(l)$ with $l,g$ real-valued, $g$ nonnnegative and not identically zero almost everywhere. Independently and under the milder hypothesis that $f=\\frac{l}{g}$ is even and monotonic over $[0,\\pi]$, matrix-less algorithms have been developed for the fast eigenvalue computation of large preconditioned matrices of the type above, within a linear complexity in the matrix order: behind the high efficiency of such algorithms there are the expansions as in the case $g\\equiv 1$, combined with the extrapolation idea, and hence we conjecture that the simple-loop theory has to be extended in such a new setting, as the numerics strongly suggest.Here we focus our attention on a change of variable, followed by the asymptotic expansion of the new variable, and we consider new matrix-less algorithms ad hoc for the current case. Numerical experiments show a much higher precision till machine precision and the same linear computation cost, when compared with the matrix-less procedures already proposed in the literature.", "subjects": "Numerical Analysis (math.NA)", "title": "Fast Toeplitz eigenvalue computations, joining interpolation-extrapolation matrix-less algorithms and simple-loop conjectures: the preconditioned setting", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.974042636774129, "lm_q2_score": 0.7279754371026368, "lm_q1q2_score": 0.7090791142622515 }
https://arxiv.org/abs/2006.14552
Practical Trade-Offs for the Prefix-Sum Problem
Given an integer array A, the prefix-sum problem is to answer sum(i) queries that return the sum of the elements in A[0..i], knowing that the integers in A can be changed. It is a classic problem in data structure design with a wide range of applications in computing, from coding to databases. In this work, we propose and compare several practical solutions to this problem, showing that new trade-offs between the performance of queries and updates can be achieved on modern hardware.
\section{The $b$-ary Fenwick-Tree}\label{sec:fentree_bary} The classic {\method{Fenwick-Tree}} we described in Section~\ref{sec:fentree} exploits the base-2 representation of a number in $[0,n)$ to support {\code{sum}} and {\code{update}} in time $O(\log n)$. If we change the base of the representation to a value $b>2$, the corresponding $b$-ary {\method{Fenwick-Tree}} can be defined~\cite{bille2017succinct} -- a data structure supporting {\code{sum}} in $O(\log_b n)$ and {\code{update}} in $O(b\log_b n)$. A pictorial representation of the data structure is given in Figure~\ref{fig:bary_fentree_shape} for $b=4$ and $n=64$. However, this data structure does not expose an improved trade-off compared to the solutions described in the previous sections, for the reasons we discuss in the following. Let us consider {\code{sum}}. Recall that the {\method{Fenwick-Tree}} ignores digits that are 0. We have already commented on this for the case $b=2$ in Section~\ref{sec:segment_tree_vs_fenwick_tree}: for random queries, this gives a consistent boost over the {\method{Segment-Tree}} with $b=2$ because roughly 50\% of the levels are skipped, \emph{as if} the height of the tree were actually $\frac{1}{2}\lceil\log_2(n+1)\rceil$. Unfortunately, this advantage does \emph{not} carry over for larger $b$ because the probability that a digit is 0 is $1/b$, which is very low for the values of $b$ we consider in our experimental analysis (64 and 256). In fact, the $b$-ary {\method{Fenwick-Tree}} is not faster than the $b$-ary {\method{Segment-Tree}} (although it is when compared to the classic {\method{Fenwick-Tree}} with $b=2$). Even more problematic is the case for {\code{update}}. Having to deal with more digits clearly slows down {\code{update}}, which needs to access $b-1$ nodes per level, for a complexity of $O(b\log_b n)$. Observe that $(b-1)\frac{\log_2 n}{\log_2 b}$ is more than $\log_2 n$ for every $b>2$. For example, for $b=64$ we can expect a slowdown of more than $10\times$ (and nearly $32\times$ for $b=256$). Experimental results confirmed this analysis. Note that the $b$-ary {\method{Segment-Tree}} does much better than this because: (1) it traverses $\lceil \log_b n \rceil$ nodes per operation, (2) the $b$ keys to update per node are contiguous in memory for a better cache usage and, hence, (3) are amenable to the SIMD optimizations we described in Section~\ref{sec:nodes}. Therefore, we experimented with two other ideas in order to improve the trade-off of the $b$-ary {\method{Fenwick-Tree}}. \begin{figure}[t] \centering \subfloat[{\code{sum}}]{ \includegraphics[scale=\myfigsize]{{results_and/sum_fenwick_tree_truncated_restricted_vs_unrestricted}} \label{fig:fenwick_tree_truncated_a} } \subfloat[{\code{update}}]{ \includegraphics[scale=\myfigsize]{{results_and/update_fenwick_tree_truncated_restricted_vs_unrestricted}} \label{fig:fenwick_tree_truncated_b} } \caption{The running times of {\code{sum}}/{\code{update}} for the truncated {\method{Fenwick-Tree}} (\textsf{TFT}) with leaves of 64 and 256 keys. \label{fig:fenwick_tree_truncated}} \end{figure} The first idea is to block $b$ keys together into the nodes of a classic {\method{Fenwick-Tree}}, as depicted in Figure~\ref{fig:blocked_fentree_shape} for $b=4$ and $n=64$. Compared to the $b$-ary {\method{Fenwick-Tree}}, this variant -- which we call the \emph{blocked} {\method{Fenwick-Tree}} -- slows down the queries but improves the updates, for a better trade-off. However, it does not improve over the classic {\method{Fenwick-Tree}} with $b=2$.
In fact, although the height of the tree is $\lceil \log_2((n+1)/b) \rceil + 1$, thus smaller than that of the classic {\method{Fenwick-Tree}} by $\log_2 b$, the number of traversed nodes is only reduced by $\frac{1}{2}\log_2 b$, which is quite small for the values of $b$ considered (e.g., just 3 for $b=64$). Therefore, this variant traverses slightly fewer nodes but the computation at each node is more expensive (more cache lines accessed due to two-level nodes; more cycles spent due to SIMD), resulting in a larger runtime compared to the classic {\method{Fenwick-Tree}}. This suggests the implementation of a strategy where one keeps running the simplest code for {\code{update}}, e.g., that of the classic {\method{Fenwick-Tree}}, until the number of leaves in the sub-tree is $b$ and, thus, these can be updated in parallel with SIMD. The shape of the resulting data structure -- the \emph{truncated} {\method{Fenwick-Tree}} (\textsf{TFT}) -- is illustrated in Figure~\ref{fig:truncated_fentree_shape} (for $b=4$ and $n=64$) and shows the clear division between the upper part represented by the {\method{Fenwick-Tree}} and the lower part that consists of an array of blocks. Compared to the classic {\method{Fenwick-Tree}}, it is now intuitive that this variant reduces the number of cache-misses because the upper part is likely to fit well in cache and $b$ keys inside a block are contiguous in memory. The experimental results shown in Figure~\ref{fig:fenwick_tree_truncated} meet our expectations, as the truncated variant actually improves over the classic {\method{Fenwick-Tree}}. In particular, it performs similarly to the {\method{Fenwick-Tree}} for small values of $n$ but gains a significant advantage for larger $n$ thanks to its better cache usage. Comparing Figure~\ref{fig:fenwick_tree_truncated_b} to Figure~\ref{fig:segment_tree_b} on page~\pageref{fig:segment_tree_b}, we can also observe that this variant improves over the $b$-ary {\method{Segment-Tree}} in the general case for {\code{update}} ($\delta=64$) and large $n$ because the use of SIMD is limited to one block of $b$ keys per operation. For {\code{sum}} queries instead, the data structure performs similarly to the $b$-ary {\method{Segment-Tree}}, with the latter being generally faster for all values of $n$. As a last note, we remark that all the variants of the {\method{Fenwick-Tree}} we sketched in this section are anyway part of the software library available at \url{https://github.com/jermp/psds}. \section{The $b$-ary Segment-Tree}\label{sec:segtree_bary} The solutions analyzed in the previous sections have two main limitations. (1) The height of the tree is $\lceil \log_2 n \rceil + 1$: for the {\method{Segment-Tree}}, because each internal node has 2 children; for the {\method{Fenwick-Tree}}, because an index in $[0,n)$ is decomposed as a sum of some powers of 2. Thus, the tree may become excessively tall for large values of $n$. (2) The running time of {\code{update}} does not depend on $\Delta$. In particular, it makes no difference whether $\Delta$ is ``small'', e.g., it fits in one byte, or arbitrarily big: possible assumptions on $\Delta$ are not currently exploited to achieve a better runtime. To address these limitations, we can let each internal node of the tree hold a block of $b>2$ keys. While this reduces the height of the tree for a better cache usage, it also enables the use of SIMD instructions for faster {\code{update}} operations because several keys can be updated in parallel.
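To give a concrete flavor of such parallel updates, the following sketch adds $\Delta$ to all the keys of a block from a given position onward using AVX2 intrinsics. It is only illustrative and assumes a node that keeps the running prefix sums of its $b$ 64-bit keys (one of the two ``small-array'' layouts recalled in Section~\ref{sec:introduction}); the node layout actually used in our experiments is the one described in Section~\ref{sec:nodes}.
\begin{lstlisting}[language=C++]
// Illustrative only: add delta to keys[j] for every j >= i in a block of b
// 64-bit keys (b a multiple of 4), using AVX2. If the block stores prefix
// sums, this is exactly the work required by update(i, delta) on the block.
#include <immintrin.h>
#include <cstdint>

void update_block(int64_t* keys, uint64_t b, uint64_t i, int64_t delta) {
    const __m256i vdelta = _mm256_set1_epi64x(delta);
    const __m256i vi = _mm256_set1_epi64x((int64_t)i);
    const __m256i lane = _mm256_setr_epi64x(0, 1, 2, 3);
    for (uint64_t j = 0; j != b; j += 4) {
        __m256i idx = _mm256_add_epi64(_mm256_set1_epi64x((int64_t)j), lane);
        __m256i lt = _mm256_cmpgt_epi64(vi, idx);      // all-ones where idx < i
        __m256i add = _mm256_andnot_si256(lt, vdelta); // delta where idx >= i, else 0
        __m256i cur = _mm256_loadu_si256((__m256i const*)(keys + j));
        _mm256_storeu_si256((__m256i*)(keys + j), _mm256_add_epi64(cur, add));
    }
}
\end{lstlisting}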
Therefore, in Section~\ref{sec:nodes} we introduce a solution that works for ``small'' arrays of $b>2$ keys, e.g., 64 or 256. Then, in Section~\ref{sec:large_arrays}, we show how to embed this small-array solution into the nodes of a {\method{Segment-Tree}} to obtain a solution for arbitrary values of $n$. \input{nodes} \input{large_arrays} \subsection{The Blocked-{\method{Fenwick-Tree}}}\label{sec:blockedfentree} A way of improving the trade-off exposed by the $b$-ary {\method{Fenwick-Tree}} is that of blocking $b$ keys together into the nodes of a classic {\method{Fenwick-Tree}}, as depicted in Figure~\ref{fig:blocked_fentree_shape} for $b=4$ and $n=64$. We call this data structure the \emph{Blocked}-{\method{Fenwick-Tree}} (\textsf{BFT}). The height of the tree is now $\lceil \log_2((n+1)/b) \rceil + 1$, for a consequent complexity of $O(\log(n/b))$ time for both {\code{sum}} and {\code{update}} if we represent each node with the two-level data structure introduced in Section~\ref{sec:nodes}. Compared to the $b$-ary {\method{Fenwick-Tree}}, this blocked variant slows down the queries but remarkably improves the updates, for a better trade-off. The code implementing this idea is shown in Figure~\ref{code:blocked_fentree}. To build the data structure, we proceed as follows. We materialize a temporary array, \code{tmp}, that holds the first key of each block and build a classic {\method{Fenwick-Tree}} from this array. Then all blocks are written into \code{tree} one after the other, just pretending that the first key of the $j$-th block is $\code{tmp}[j]$, for $j=0..\lceil n/b \rceil-1$. The total space is simply $\lceil n/b \rceil \times \Theta(b) = \Theta(n)$ memory words. This solution provides a mechanism to ``index'' the blocks as if they were the nodes of a {\method{Fenwick-Tree}}, so that an operation for index $i$ translates into the same operation but for index $j = \lfloor i / b \rfloor + 1$. Note that, for a random workload, this data structure is likely to perform $\frac{1}{2} \lceil \log_2((n+1)/b) \rceil + 1$ memory accesses, in analogy with the classic {\method{Fenwick-Tree}} (the case $b=1$). In Section~\ref{sec:segment_tree_vs_fenwick_tree} we saw that the {\method{Fenwick-Tree}} suffers from cache conflicts when $n$ is large. Not surprisingly, the blocked variant inherits this problem too. But also in this case we can overcome the limitation by offsetting the serialization of a node at byte location $i$ to location $i + \lfloor i/d \rfloor$, for some sufficiently large $d$ (we again used $d=2^{14}$). We do not show the comparison between the regular Blocked-{\method{Fenwick-Tree}} and the modified one with limited cache conflicts -- as we did in Figure~\ref{fig:fenwick_tree} for the classic {\method{Fenwick-Tree}} -- to avoid repeating ourselves with similar considerations. We again confirm that the modification significantly improves upon the regular tree layout also for this blocked variant, and that is what we consider in the following. Before commenting on the experimental results note that, as for the $b$-ary {\method{Fenwick-Tree}}, we cannot expect the blocked {\method{Fenwick-Tree}} to perform better than the $b$-ary {\method{Segment-Tree}} because of the larger height.
As a matter of fact, Figure~\ref{fig:fenwick_tree_blocked} shows that this blocked variant does not even improve over the classic {\method{Fenwick-Tree}}: while it performs similarly for small values of $n$, it induces significantly more cache-misses as soon as the $L_2$ cache boundary is crossed, which happens for $n=2^{17}$. The increased number of cache-misses is due to the two-level data structures, requiring access to more cache lines at each traversed node. For example, queries need access to 2 cache lines instead of one, or even 4 in the restricted case. Again recall that, although the height of the tree is smaller than that of the classic {\method{Fenwick-Tree}} by $\log_2 b$, the number of traversed nodes is only reduced by $\frac{1}{2}\log_2 b$, which is quite small for the values of $b$ considered (e.g., just 3 for $b=64$). Therefore, this variant traverses slightly fewer nodes but the computation at each node is much more expensive, resulting in a larger running time. For example, considering $n=2^{28}$, $b=64$, and {\code{sum}} queries, we expect the blocked {\method{Fenwick-Tree}} to touch $28/2-3+1=12$ nodes on average but to perform twice as many memory accesses, for a total of 24 accesses against 15 performed by the classic {\method{Fenwick-Tree}}. Low-level profiling indeed confirms that the blocked variant incurs 70\% more cache-misses in this scenario. Similar considerations hold true for updates as well. In particular, the runtime is higher not only because of the increased number of cache-misses but also because of the additional cycles due to SIMD instructions. \section{Conclusions and future work}\label{sec:conclusions} We described, implemented, and studied the practical performance of several tree-shaped data structures to solve the \emph{prefix-sum problem}. After a careful experimental analysis, the following take-away lessons are formulated. \begin{enumerate} \item (Section~\ref{sec:segtree}) A bottom-up traversal of the {\method{Segment-Tree}} has a much simpler implementation compared to top-down, resulting in a faster execution of both {\code{sum}} and {\code{update}}. \item (Section~\ref{sec:segtree}) A branch-free implementation of the {\method{Segment-Tree}} is on average $2\times$ faster than a branchy implementation for all array sizes, $n$, up to $2^{25}$. This is so because the processor's pipeline is not stalled due to branches, for a consequent increase in the instruction throughput. For $n > 2^{25}$, the branchy code is faster because it saves memory accesses -- the dominant cost in the runtime for large values of $n$. In particular, the \emph{combination} of branchy and branch-free code execution -- what we called the \emph{two-loop} optimization -- results in a better runtime. Taking this into account, we recommend a version of the bottom-up {\method{Segment-Tree}} where an internal node holds the sum of the leaves descending from its \emph{left} subtree, because it allows the use of the two-loop optimization for {\code{update}} as well (and not only for {\code{sum}}). \item (Section~\ref{sec:fentree}) The {\method{Fenwick-Tree}} suffers from cache conflicts for larger values of $n$, caused by accessing nodes whose memory address is (almost) a large power of 2. Solving this issue, by offsetting a node from an original position $i$ to a new position $i + \lfloor i/d \rfloor$ for some $d>0$, significantly improves the performance of the {\method{Fenwick-Tree}}.
\begin{table} \centering \caption{Average speedup factors achieved by the {\method{Fenwick-Tree}} over the {\method{Segment-Tree}}. \label{tab:speedups_ft}} \scalebox{1.0}{\input{tables/speedups_ft.tex}} \end{table} \item (Section~\ref{sec:segment_tree_vs_fenwick_tree}) The {\method{Fenwick-Tree}} is more efficient than (our optimized version of) the {\method{Segment-Tree}} for both queries and updates. The better efficiency is due to the simplicity of the code and the lower (by 50\%) average number of loop iterations. We summarize the speedups achieved by the {\method{Fenwick-Tree}} over the {\method{Segment-Tree}} in Table~\ref{tab:speedups_ft}, for different ranges of $n$. \item (Section~\ref{sec:segtree_bary}) Although the {\method{Segment-Tree}} is outperformed by the {\method{Fenwick-Tree}}, we can enlarge its branching factor to a generic quantity $b>2$: this reduces the height of the {\method{Segment-Tree}} for a better cache usage and enables the use of SIMD instructions. Such instructions are very effective in lowering the running time of {\code{update}}, because several values per node can be updated in parallel. Compared to the scalar {\code{update}} algorithm, SIMD can be on average $2-6\times$ faster depending on the value of $n$. \item (Section~\ref{sec:segtree_bary}) For best performance, we recommend modeling the height of the $b$-ary {\method{Segment-Tree}} with a constant known at compile-time, so that the compiler can generate a specialized code path to handle the specific value of the height. This completely avoids branches during the execution of {\code{sum}} and {\code{update}}. The vectorized {\code{update}} implementation together with this branch-avoiding optimization makes the $b$-ary {\method{Segment-Tree}} actually faster than the {\method{Fenwick-Tree}}. We report the speedups achieved by this data structure over the {\method{Fenwick-Tree}} in Table~\ref{tab:speedups_sst}, for the different tested combinations of $b$ and $\delta$. (Recall that $\delta$ represents the bit-width of $\Delta$, the update value.) \item (Section~\ref{sec:segtree_bary}) For the most general case where $\delta=64$ bits, we recommend the use of a $b$-ary {\method{Segment-Tree}} with $b=64$. From Table~\ref{tab:speedups_sst}, we see that this solution offers an improved trade-off between the running time of {\code{sum}} and {\code{update}}: on average $1.9-5\times$ faster for {\code{sum}} and up to $1.6\times$ faster for {\code{update}} than the {\method{Fenwick-Tree}}. \item (Section~\ref{sec:segtree_bary}) For the restricted case where $\delta=8$ bits, we can update even more values in parallel by \emph{buffering} the updates at each node of the tree. Considering again Table~\ref{tab:speedups_sst}, for such case we recommend the use of a $b$-ary {\method{Segment-Tree}} with $b=256$. This solution is faster than a {\method{Fenwick-Tree}} by $1.7-4.7\times$ for {\code{sum}} and by $1.4-2.5\times$ for {\code{update}}. \item (Section~\ref{sec:fentree_bary}) The $b$-ary {\method{Fenwick-Tree}} improves the runtime for queries at the price of a significant slowdown for updates, compared to the classic {\method{Fenwick-Tree}} with $b=2$. This makes the data structure impractical unless updates are extremely few compared to queries. \item (Section~\ref{sec:fentree_bary}) Despite the larger tree height, the \emph{blocked} {\method{Fenwick-Tree}} improves the trade-off of the $b$-ary {\method{Fenwick-Tree}} (in particular, it improves {\code{update}} but worsens {\code{sum}}).
However, it does not beat the classic {\method{Fenwick-Tree}} because the time spent at each traversed node is much higher (more cache-misses due to the two-level structure of a node; more cycles spent due to SIMD). \item (Section~\ref{sec:fentree_bary}) In order to combine the simplicity of the {\method{Fenwick-Tree}} with the advantages of blocking $b$ keys together (reduced cache-misses; SIMD exploitation), a \emph{truncated} {\method{Fenwick-Tree}} can be used. This data structure improves over the classic {\method{Fenwick-Tree}}, especially for large values of $n$, exposing a trade-off similar to that of the $b$-ary {\method{Segment-Tree}} (but with the latter being generally better). \end{enumerate} \begin{table} \centering \caption{Average speedup factors achieved by the $b$-ary {\method{Segment-Tree}} over the {\method{Fenwick-Tree}}. \label{tab:speedups_sst}} \subfloat[{\code{sum}}]{ \scalebox{1}{\input{tables/speedups_sum_bary_segtree.tex}} } \subfloat[{\code{update}}]{ \scalebox{1}{\input{tables/speedups_update_bary_segtree.tex}} \label{tab:speedups_sst_b} } \end{table} Some final remarks follow. The runtime of {\code{update}} will improve as SIMD instructions become more powerful in future years (e.g., with lower latency), thus SIMD is a very promising hardware feature that cannot be overlooked in the design of practical algorithms. So far we obtained the best results using SIMD registers of 128 and 256 bits (SSE and AVX instruction sets, respectively). We also made some experiments with the new AVX-512 instruction set which allows us to use massive load and store instructions comprising 512 bits of memory. In particular, doubling the size of the used SIMD registers can be exploited in two different ways: by either reducing the number of instructions, or enlarging the branching factor of a node. We tried both possibilities but did not observe a clear improvement. A promising avenue for future work would be to consider the \emph{searchable} version of the problem, i.e., to implement the {\code{search}} operation. Note that this operation is actually amenable to SIMD vectorization, and has parallels with search algorithms in inverted indexes. With this third operation, the objective would be to use (a specialization of) the best data structure from this article -- the $b$-ary {\method{Segment-Tree}} with SIMD on updates \emph{and searches} -- to support \emph{rank/select} queries over mutable bitmaps~\cite{vigna2019}, an important building block for dynamic succinct data structures. {The experiments presented in this work were conducted using a single system configuration, i.e., a specific processor (Intel i9-9940X), operating system (Linux), and compiler (\textsf{gcc}). We acknowledge the specificity of our analysis, although it involves a rather common setup, and we plan to extend our experimentation to other configurations as well in future work. } \section{Extra} This section collects additional material on the $b$-ary {\method{Fenwick-Tree}}, complementing Section~\ref{sec:fentree_bary}. \subsection{The $b$-ary {\method{Fenwick-Tree}}} The classic {\method{Fenwick-Tree}} we described in Section~\ref{sec:fentree} exploits the base-2 representation of a number in $[0,n)$ to support {\code{sum}} and {\code{update}} in time $O(\log n)$.
If we change the base of the representation to a value $b>2$, the corresponding $b$-ary {\method{Fenwick-Tree}} can be defined -- a data structure supporting {\code{sum}} in $O(\log_b n)$ and {\code{update}} in $O(b\log_b n)$. A pictorial representation of the data structure is given in Figure~\ref{fig:fentree_shape} for $b=4$ and $n=64$. \citet{bille2017succinct} quickly sketched this data structure from a theoretical point of view. What we show here is our own definition of it along with a practical implementation and comparison against different alternatives. \vspace{0.125cm} First, we illustrate how the data structure can be built from an input array $A[0..n)$. As for the classic {\method{Fenwick-Tree}} with $b=2$, also in this case the result of the procedure is an array, say $F$, of $n+1$ memory words (the first position is not used). Call a \emph{partition} of $A[\ell..r)$ according to the endpoints $\ell = r_0 < r_1 < r_2 < \ldots < r$ the set of sub-arrays $\{A[r_{i-1}..r_i)\}_i$. With this definition in mind, the data structure is obtained via the following two-step algorithm. Let $\ell=0$ and $r=n$ at the beginning. \begin{enumerate} \item Partition $A[\ell..r)$ according to the endpoints $r_i = b^j + k b^j$, for $j=0,1,2,\ldots$, $k=0..b-2$, and $i=b^j+k$. Set $r_i = n$ if $r_i > n$. Store in $F[\ell+r_i]$ the quantity $\sum_{k=0}^{r_i-1} A[\ell+k]$, for each $r_i$. \item Set $\ell=r_{i-1}+1$, $r=r_i$, and repeat step (1) for each $r_i$. \end{enumerate} Setting $r_i = n$ if $r_i > n$ in step (1) correctly prevents out-of-bound accesses to $A$ and is needed to handle a generic value of $n$ (not just the case where $n$ is a power of $b$). The recursive \code{build} method of the class shown in Figure~\ref{code:fentree_bary} implements this algorithm. The way we interleave the computation of a prefix sum and the recursive call to the method guarantees that at most $O(n)$ time is spent per level, for a total of $O(n \log_b n)$ time. In particular, we first use this procedure to build the $b$-ary structure into a temporary array, \code{tmp}; then, we block $b$ elements together and build a \code{Node} data structure out of these. It follows that the total space is $(\lceil n/b \rceil + 1) \times \Theta(b) = \Theta(n+b)$ which is always $\Theta(n)$ because $b=O(n)$. \vspace{0.125cm} Let us now consider how to answer a ${\code{sum}}(i)$ query, whose code is shown in Figure~\ref{code:fentree_bary_sum_update}. As for the regular {\method{Fenwick-Tree}}, we actually query the data structure for index $i+1$ but, instead of considering the binary representation of $i+1$, we consider its $b$-ary representation. Given an index $i$ in $[0,n)$, its representation in base $b$ is $\sum_{k=0}^{\lceil \log_b i \rceil} d_k b^k$, where $0 \leq d_k < b$ is the $k$-th digit of the representation. It follows that each digit indicates a node in the tree that we access to answer the query. For example, if $i=37$ in Figure~\ref{fig:fentree_shape} and $b=4$, the representation of $37+1$ in base 4 is \bit{212}. Therefore $\code{sum}(37)$ will be $\code{tree}[2 \times 4^2] + \code{tree}[2 \times 4^2 + 1 \times 4^1] + \code{tree}[2 \times 4^2 + 1 \times 4^1 + 2 \times 4^0]$.
In particular, note that the first digit $d_0$ always corresponds to an \emph{offset} into the $(i+1)$-th block; all other digits correspond to blocks from which we have to \emph{access} the last element (method \code{back} invoked on the object \code{Node} in Figure~\ref{code:fentree_bary_sum_update}) since it already holds the sum of the elements in the block. Lastly, we need an efficient way of obtaining the $b$-ary representation of a number in $[1,n]$. If we assume that $b$ is a power of 2, then the digit $d_k$ can be directly obtained from the \emph{binary} representation of $i$ by reading its $k$-th segment of $\log_2 b$ bits (starting from the right). In conclusion, since we have $\lceil \lceil\log_2(n+1)\rceil / \log_2 b\rceil$ digits to consider, this will also be the height of the tree. Each digit corresponds to a node in the tree and we spend $O(1)$ per node, hence {\code{sum}} is supported in $O(\log_b n)$ worst-case time as for the $b$-ary {\method{Segment-Tree}}. Furthermore, although Figure~\ref{code:fentree_bary_sum_update} shows a loop-based implementation of {\code{sum}}, the \code{if-constexpr} optimization we explained in Section~\ref{sec:segtree_bary} -- a specialized code path for each possible value of the tree \code{Height} -- also applies to this case and this optimized version is what we tested in the experiments. As a last note, recall that the {\method{Fenwick-Tree}} ignores digits that are 0. We already commented on this for the case $b=2$ in Section~\ref{sec:segment_tree_vs_fenwick_tree}: for random indexes, this gives a consistent boost over the {\method{Segment-Tree}} because roughly 50\% of the levels are skipped, \emph{as if} the height of the tree were actually $\frac{1}{2}\lceil\log_2(n+1)\rceil$. However, this advantage does not carry over for larger $b$ because the probability that a digit is 0 is $1/b$, which is very low for the values of $b$ we consider in our experimental analysis (64 and 256). Not surprisingly, the $b$-ary {\method{Fenwick-Tree}} is not faster than the $b$-ary {\method{Segment-Tree}}, although it gains a significant advantage compared to the classic {\method{Fenwick-Tree}} with $b=2$. \vspace{0.125cm} More problematic is the case for {\code{update}}, whose implementation is shown in Figure~\ref{code:fentree_bary_sum_update}. Observe that the endpoints defined during step (1) do not only comprise powers of $b$: for each interval $[b^j,b^{j+1}]$ we add $b-2$ more endpoints, so that the interval is partitioned into $b-1$ sub-intervals of size $b^j$: one for each digit from 1 to $b-1$. Having to deal with more digits clearly slows down {\code{update}}, which needs to access $b-1$ nodes per level, for a complexity of $O(b\log_b n)$. Observe that $(b-1)\frac{\log_2 n}{\log_2 b}$ is more than $\log_2 n$ for every $b>2$. For example, for $b=64$ we can expect a slowdown of more than $10\times$ (and nearly $32\times$ for $b=256$). Experimental numbers confirm this analysis. Note that the $b$-ary {\method{Segment-Tree}} does much better than this because: (1) it traverses $\lceil \log_b n \rceil$ nodes per operation, (2) the $b$ keys to update per node are contiguous in memory for a better cache usage and, hence, (3) are amenable to the SIMD optimizations we described in Section~\ref{sec:nodes}.
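For illustration, the digit extraction just described can be coded as in the following sketch (ours, not the library code), under the assumption that $b$ is a power of 2.
\begin{lstlisting}[language=C++]
// Sketch: base-b digits of p = i + 1 when b = 2^log2_b, least significant
// digit first. Each digit is a segment of log2_b bits of the binary
// representation of p; digits equal to 0 can be skipped by sum.
#include <cstdint>
#include <vector>

std::vector<uint64_t> base_b_digits(uint64_t p, uint64_t log2_b) {
    const uint64_t mask = (uint64_t(1) << log2_b) - 1; // = b - 1
    std::vector<uint64_t> digits;
    while (p != 0) {
        digits.push_back(p & mask); // next segment of log2_b bits
        p >>= log2_b;
    }
    return digits;
}

// Example: base_b_digits(38, 2) returns {2, 1, 2}, i.e., 37 + 1 = (212) in
// base 4, matching the worked example above.
\end{lstlisting}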
\vspace{0.125cm} In conclusion, we do not regard the $b$-ary {\method{Fenwick-Tree}}, albeit attractive from a theoretical point of view, as a promising solution to the Prefix-Sum Problem because the trade-off between the running time of {\code{sum}} and {\code{update}} is not even as good as that of the classic {\method{Fenwick-Tree}} with $b=2$. \section{The Fenwick-Tree}\label{sec:fentree} The solution we describe here is popularly known as the {\method{Fenwick-Tree}} (or \emph{Binary Indexed Tree}, BIT) after a paper by~\citet*{fenwick1994new}, although the underlying principle was introduced by~\citet*{ryabko1992fast}. The key idea is to exploit the fact that, as a natural number $i > 0$ can be expressed as a sum of distinct powers of 2, so {\code{sum}}/{\code{update}} operations can be implemented by considering array locations that are power-of-2 elements apart. For example, since $11 = 2^3+2^1+2^0=8+2+1$, then $\code{sum}(10)$ can be computed as $\var{tree}[2^3]+\var{tree}[2^3+2^1]+\var{tree}[2^3+2^1+2^0] = \var{tree}[8]+\var{tree}[10]+\var{tree}[11]$, where the array \var{tree} is computed from $A$ in some appropriate manner that we are going to illustrate soon. In this example, $\var{tree}[8]=\sum_{k=0}^{8-1} A[k]$, $\var{tree}[10]=A[8]+A[9]$, and $\var{tree}[11]=A[10]$. Now, let $\code{base}=0$. The array $\code{tree}[0..n+1)$ is obtained from $A$ with the following two-step algorithm. \begin{enumerate} \item Store in $\var{tree}[\code{base}+i]$ the quantity $\sum_{k=0}^{i-1} A[\code{base}+k]$ where $i = 2^j$, $j=0,1,2,\ldots$ \item For every sub-array $A[2^j..2^{j+1}-1)$, repeat step (1) with updated $\code{base}=2^j$. \end{enumerate} \begin{figure}[t] \centering \includegraphics[scale=1]{{images/interrogation}} \caption{An example of the array-like view \var{tree} of the {\method{Fenwick-Tree}}, along with its logical ``interrogation'' tree. The array \var{tree} is built from the input array $[$13, -1, 2, 23, -4, 231, 13, 5, 2, -88, -52, 0, 4, 90, 3, -12$]$. As in Figure~\ref{fig:segtree}, highlighted nodes belong to the root-to-leaf path that is traversed to answer {\code{sum}} for index 10. \label{fig:interrogation}} \end{figure} Figure~\ref{fig:interrogation} shows a tree-like view of one such array \var{tree}, built from the same array $A$ of size 16 shown in Figure~\ref{fig:segtree}. Note that position 0 is never used for ease of notation and $\var{tree}[0]=0$, thus \var{tree}'s actual size is $n+1$. Now, we said that during a {\code{sum}} query for index $i$ (or {\code{update}}) we only touch \var{tree}'s positions that are power-of-2 elements apart. It is not difficult to see that we have one such position for every bit set in the binary representation of $i+1$. Since $i+1$ goes from 1 to $n$, it needs $\lceil \log_2(n+1) \rceil$ bits to be represented, so this will also be the height of the {\method{Fenwick-Tree}}. (The depth of a node whose index is $i$ is the Hamming weight of $i$, i.e., the number of bits set in the binary representation of $i$.) Therefore, both queries and updates have a worst-case complexity of $O(\log n)$ and the array \var{tree} takes $n+1$ memory words. In the following we illustrate how to navigate the implicit tree structure in order to support {\code{sum}} and {\code{update}}. Let us consider the {\code{sum}} query. As intuitive from the given examples, when we have to answer $\code{sum}(i)$, we actually probe the array \var{tree} starting from index $\code{p}=i+1$.
This is because index 0 is not a power of 2, thus it can be considered as a dummy entry in the array \var{tree}. Now, let \code{p} be 11 as in the example of Figure~\ref{fig:interrogation}. The sequence of nodes' indexes touched during the traversal to compute the result is (bottom-up): $11 \rightarrow 10 \rightarrow 8 \rightarrow 0$. It is easy to see that we \emph{always} start from index $i+1$ and end with index 0 (the root of the tree). To navigate the tree bottom-up we need an efficient way of computing the index of the parent node from a given index \code{p}. This operation can be accomplished by \emph{clearing the least significant bit} (LSB) of the binary representation of \code{p}. For example, 11 is \bit{0101\underline{1}} in binary. We underline its LSB. If we clear the LSB, we get the bit configuration \bit{010\underline{1}0} which indeed corresponds to index 10. Clearing the LSB of 10 gives \bit{0\underline{1}000} which is 8. Finally, clearing the LSB of a power of 2 will always give 0, which is the index of the root. This operation, clearing the LSB of a given number \code{p}, can be implemented efficiently with \code{p \& (p - 1)}. Again, note that $\code{sum}(i)$ traverses a number of nodes equal to the number of bits set (plus 1) in $\code{p}=i+1$. The code given in Figure~\ref{code:fentree} illustrates the approach. \begin{figure}[t] \centering \includegraphics[scale=1]{{images/updating}} \caption{The logical ``updating'' tree for the same example shown in Figure~\ref{fig:interrogation}. The highlighted nodes belong to the root-to-leaf path that is traversed to perform {\code{update}} for index 10. \label{fig:updating}} \end{figure} The {\method{Fenwick-Tree}} can be actually viewed as the superimposition of two different trees. One tree is called the ``interrogation'' tree because it is used during {\code{sum}}, and it is shown in Figure~\ref{fig:interrogation}. The other tree is called the ``updating'' tree, and it is shown in Figure~\ref{fig:updating} instead. This tree consists of the very same nodes as the ``interrogation'' tree but with different child-to-parent relationships. In fact, starting from an index $\code{p} = i + 1$ and traversing the tree bottom-up we obtain the sequence of nodes that need to be updated when issuing $\code{update}(i,\Delta)$. Again for the example $i=10$, such sequence is $11 \rightarrow 12 \rightarrow 16$. Starting from a given index \code{p}, this sequence can be obtained by isolating the LSB of \code{p} and summing this to \code{p} itself. The LSB can be isolated with the operation \code{p \& -p}. For $\code{update}(i,\Delta)$, the number of traversed nodes is equal to the number of 0 bits that are more significant than the LSB in the $\lceil \log_2(n+1) \rceil$-bit binary representation of $\code{p}=i+1$ (plus 1). The actual code for {\code{update}} is given in Figure~\ref{code:fentree}. \begin{figure}[t] \lstinputlisting{fenwick_tree.hpp} \caption{The {\method{Fenwick-Tree}} code. \label{code:fentree}} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.8]{{cache_usage_histograms/ft}} \caption{Number of distinct cache lines stored in each set of an $L_1$ 8-way set-associative cache, after running $10^4$ random {\code{sum}} queries with a {\method{Fenwick-Tree}} of size $n=10^7$.
\label{fig:cache_usage_ft}} \end{figure} \paragraph*{Cache Conflicts} The indexing of the nodes in the {\method{Fenwick-Tree}} -- which, for every subtree, places the nodes on the same level at array locations that are power-of-2 elements apart -- induces a poor exploitation of the processor cache for large values of $n$. The problem comes from the fact that cache memories are typically $c$-way set-associative caches. The cache memory of the processor used for the experiments in this article is no exception. In such a cache architecture, a cache line must be stored \emph{in one} (and only one) set and, if many different cache lines must be stored in the same set, the set has only room for $c$ of these. In fact, when the set fills up, a cache line must be evicted from the set. If the evicted line is then accessed again during the computation, a cache miss will be generated because the line is not in the cache anymore. Therefore, accessing (more than $c$) different memory lines that must be stored in the same cache set will induce cache misses. To understand why this happens in the {\method{Fenwick-Tree}}, let us consider how the set number is determined from a memory address $a$. Let $C$ be the total cache size in bytes, where each line spans 64 bytes. The cache stores its lines as divided into $C/(c \times 64)$ sets, where each set can store a cache line in $c$ different possible ``ways''. For example, the $L_1$ cache of the processor used for the experiments in this article (see Table~\ref{tab:caches} on page~\pageref{tab:caches}) has 8 ways and a total of $C=\num{32768}$ bytes. Therefore there are $\num{32768} / (8 \times 64) = 64$ sets in the cache, for a total of 512 cache lines. The first 6 bits (0-5) of $a$ determine the offset into the cache line; the following 6 bits (6-11) specify the set number. Thus, the first line of a memory block is stored in set 0, the second line is stored in set 1, etc. The 65-th line will be stored again in set 0, the 66-th line in set 1, etc. It is now easy to see that accessing memory addresses that are a multiple of $64 \times 64 = \num{4096}$ bytes (a memory \emph{page}) apart is not cache efficient, since the corresponding lines will contend for the very same set. Therefore, accessing memory locations whose distance in memory is a large power of 2 (a multiple of $2^{12}=4096$) is not cache-friendly. Unfortunately, this is what happens in the {\method{Fenwick-Tree}} when $n$ is large. For example, all the nodes at the first level are stored at indexes that are powers of 2. Thus they will all map to set 0. In Figure~\ref{fig:cache_usage_ft} we show the number of distinct cache lines that must be stored in each set of the $L_1$ cache (8-way set-associative with 64 sets), for $10^4$ random {\code{sum}} queries and $n=10^7$. Out of a total of $\approx$4$\times 10^4$ accessed cache lines, 29\% are stored in set 0. This highly skewed distribution is the source of cache inefficiency. (Updates exhibit the same distribution.) Instead, if all accesses were \emph{evenly} distributed among all sets, we would expect each set to contain $\approx$625 lines (1.56\%). This problem can be solved by inserting some \emph{holes} in the \code{tree} array~\cite{vigna2019}, one every $d$ positions, to let a node whose position is $i$ in the original array be placed at position $i+\lfloor i/d \rfloor$. This only requires the \var{tree} array to be enlarged by $\lfloor n/d \rfloor$ words and the index of every node to be recalculated accordingly.
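The following sketch is ours and only illustrative (the code used in the experiments is the one of Figure~\ref{code:fentree} and its modified variant in the software library); it combines the remapping just described with the two traversals explained above.
\begin{lstlisting}[language=C++]
// Sketch: classic Fenwick-Tree traversals where every logical position p is
// remapped to p + p/d, so that nodes whose logical distance is a large power
// of 2 no longer collide in the same cache set. Here d = 2^14 as in the text.
#include <cstdint>
#include <vector>

struct fenwick_with_holes {
    std::vector<int64_t> tree; // size n + 1 + n/d, position 0 unused
    uint64_t n;
    static constexpr uint64_t d = uint64_t(1) << 14;

    static uint64_t remap(uint64_t p) { return p + p / d; }

    int64_t sum(uint64_t i) const {          // returns A[0] + ... + A[i]
        int64_t s = 0;
        for (uint64_t p = i + 1; p != 0; p &= p - 1) s += tree[remap(p)];
        return s;
    }
    void update(uint64_t i, int64_t delta) { // A[i] += delta
        for (uint64_t p = i + 1; p <= n; p += p & -p) tree[remap(p)] += delta;
    }
};
\end{lstlisting}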
If $d$ is chosen to be a sufficiently large constant, e.g., $2^{14}$, then the extra space is very small. In Figure~\ref{fig:cache_usage} we show the result of the same experiment as in Figure~\ref{fig:cache_usage_ft}, but after the modification to the \var{tree} array and the new indexing of the nodes. As is evident, every cache set is now equally used. (For comparison, we also report the behavior of the {\method{Segment-Tree}} to confirm that also in this case there is a good usage of the cache.) \begin{figure}[t] \centering \subfloat[modified {\method{Fenwick-Tree}}]{ \includegraphics[scale=0.8]{{cache_usage_histograms/ft_holes}} } \subfloat[{\method{Segment-Tree}}]{ \includegraphics[scale=0.8]{{cache_usage_histograms/st_bu}} } \caption{The same experiment as performed in Figure~\ref{fig:cache_usage_ft}, but after reducing the cache conflicts in the {\method{Fenwick-Tree}} (modified). For comparison, we also report the cache usage of the {\method{Segment-Tree}}. \label{fig:cache_usage}} \end{figure} \begin{figure}[t] \centering \subfloat[{\code{sum}}]{ \includegraphics[scale=\myfigsize]{{results_and/sum_fenwick_tree}} } \subfloat[{\code{update}}]{ \includegraphics[scale=\myfigsize]{{results_and/update_fenwick_tree}} } \caption{The running times of {\code{sum}}/{\code{update}} for the regular and modified {\method{Fenwick-Tree}} ({\method{FT}}). \label{fig:fenwick_tree}} \end{figure} Lastly, in Figure~\ref{fig:fenwick_tree} we show the comparison between the regular {\method{Fenwick-Tree}} and the modified version as described above. As expected, the two curves are very similar up to $n=2^{20}$; then the cache conflicts start to play a significant role in the behavior of the regular {\method{Fenwick-Tree}}, making the difference between the two curves widen progressively. As an example, collecting the number of cache misses using Linux \textsf{perf} for $n \approx 250 \times 10^6$ (excluding those spent during the construction of the data structures and generation of the queries) indeed reveals that the regular version incurs $2.5\times$ the cache-misses of the modified version, for both {\code{sum}} and {\code{update}}. From now on, we simply refer to the modified {\method{Fenwick-Tree}} \emph{without} the problem of cache conflicts as \textsf{FT} in the plots for further comparisons. \section{Introduction}\label{sec:introduction} The \emph{prefix-sum problem} is defined as follows. Given an array $A[0..n)$ of integers and an index $0 \leq i < n$, we are asked to support the following three operations as efficiently as possible. \begin{itemize} \item $\code{sum}(i)$ returns the quantity $\sum_{k=0}^i A[k]$. \item $\code{update}(i,\Delta)$ sets $A[i]$ to $A[i]+\Delta$, where $\Delta$ is a quantity that fits in $\delta$ bits. \item $\code{access}(i)$ returns $A[i]$. \end{itemize} (Note that $\code{access}(i)$ can be computed as $\code{sum}(i)-\code{sum}(i-1)$ for $i > 0$ and $\code{access}(0) = \code{sum}(0)$. Therefore, we do not consider this operation in the following. Also, a \emph{range-sum} query $\code{sum}(i,j)$, asking for the sum of the elements in $A[i..j]$, is computed as $\code{sum}(j) - \code{sum}(i-1)$.)
It is an iconic problem in data structure design and has been studied rather extensively from a theoretical point of view~\cite{FS89,yao1985complexity,hampapuram1998optimal,dietz1989optimal,raman2001succinct,hon2011succinct,patrascu2006logarithmic,chan2010counting,BCGSVV15,bille2017succinct} given its applicability to many areas of computing, such as coding, databases, parallel programming, dynamic data structures and others~\cite{blelloch1990pre}. For example, one of the most notable practical applications of this problem is for \emph{on-line analytical processing} (OLAP) in databases. An OLAP system relies on the popular \emph{data cube} model~\cite{gray1997data}, where the data is represented as a $d$-dimensional array. To answer an aggregate query on the data cube, a prefix-sum query is formulated (see the book edited by~\citet{toth2017handbook} -- Chapter 40, \emph{Range Searching} by P.K. Agarwal).

\paragraph*{Scope of this Work and Overview of Solutions} Despite the many theoretical results, which we review in Section~\ref{sec:related_work}, the literature about the prefix-sum problem lacks a thorough experimental investigation, which is the concern of this work. We aim at determining the fastest single-core solution using \emph{commodity hardware and software}, that is, a recently manufactured processor with commodity architectural features (such as pipelined instructions, branch prediction, cache hierarchy, and SIMD instructions~\cite{SIMDIntel}), executing C++ compiled with a recent optimizing compiler. (We do not take into account parallel algorithms here, nor solutions devised for specialized/dedicated hardware that would limit the usability of our software. We describe our experimental setup in more detail in Section~\ref{sec:setup}.)

As a warm-up, let us now consider two trivial solutions to the problem -- sketched in the listing below -- that will help reason about the different trade-offs. A first option is to leave $A$ as given. It follows that queries are supported in $O(n)$ time by scanning the array, while updates take $O(1)$ time. Otherwise, we can pre-compute the result of $\code{sum}(i)$ and save it in $A[i]$, for every $i$. Then, we have queries supported in $O(1)$, but updates in $O(n)$. These two solutions represent opposite extreme cases: the first achieves the fastest {\code{update}} but the slowest {\code{sum}}; the second achieves the fastest {\code{sum}} but the slowest {\code{update}}. (They coincide only when $n$ is bounded by a constant.) However, it is desirable to have a balance between the running times of {\code{sum}} and {\code{update}}. One such trade-off can be obtained by tree-shaped data structures, whose analysis is the scope of our paper.
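For concreteness, the following listing sketches the two baselines just described and shows how \code{access} and range-sum queries derive from \code{sum}; it is our own illustrative code (class and method names are ours), not part of the benchmarked library.
\begin{lstlisting}
#include <cstdint>
#include <vector>

// Baseline 1: keep A as given. sum is O(n), update is O(1).
struct plain_array {
    std::vector<int64_t> A;
    int64_t sum(uint64_t i) const {            // A[0] + ... + A[i]
        int64_t s = 0;
        for (uint64_t k = 0; k <= i; ++k) s += A[k];
        return s;
    }
    void update(uint64_t i, int64_t delta) { A[i] += delta; }
};

// Baseline 2: store the prefix sums of A. sum is O(1), update is O(n).
struct prefix_summed_array {
    std::vector<int64_t> S;                    // S[i] = A[0] + ... + A[i]
    int64_t sum(uint64_t i) const { return S[i]; }
    void update(uint64_t i, int64_t delta) {
        for (uint64_t k = i; k < S.size(); ++k) S[k] += delta;
    }
    // access and range-sum queries derived from sum, as noted above
    int64_t access(uint64_t i) const { return i ? sum(i) - sum(i - 1) : sum(0); }
    int64_t sum(uint64_t i, uint64_t j) const {
        return sum(j) - (i ? sum(i - 1) : 0);
    }
};
\end{lstlisting}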
An elegant solution is to superimpose on $A$ a balanced binary tree. The leaves of the tree store the elements of $A$, whereas the internal nodes store the sum of the elements of $A$ descending from the left and right sub-trees. As the tree is balanced and has height $\lceil \log_2 n \rceil + 1$, it follows that both {\code{sum}} and {\code{update}} translate into tree traversals with $O(1)$ time spent per level. This data structure -- called {\method{Segment-Tree}}~\cite{bentley1977solutions} -- guarantees $O(\log n)$ for both queries and updates. Another tree layout having $O(\log n)$-complexity is the {\method{Fenwick-Tree}}~\cite{fenwick1994new}. Differently from the {\method{Segment-Tree}}, the {\method{Fenwick-Tree}} is an \emph{implicit} data structure, i.e., it consumes exactly $n+1$ memory words for representing $A$ in some appropriate manner. It is not a binary tree and exploits bit-level programming tricks to traverse the implicit tree structure. Interestingly, it turns out that a logarithmic complexity is optimal for the problem when $\delta$ is as large as the machine word~\cite{patrascu2006logarithmic}. Thus, it is interesting to design efficient implementations of both {\method{Segment-Tree}} and {\method{Fenwick-Tree}} to understand what can be achieved in practice.

Furthermore, it is natural to generalize the {\method{Segment-Tree}} to become $b$-ary and have a height of $\lceil \log_b n \rceil$, with $b>2$: an internal node stores an array $B$ of $b-1$ values, where $B[i]$ is the sum of the elements of $A$ covered by the sub-tree rooted in its $i$-th child. Now, we are concerned with solving a smaller instance of the original problem, i.e., the one having $B$ as input array. According to the solution adopted for the ``small array'', different complexities can be achieved. To give an idea, one could just adopt one of the two trivial solutions discussed above. If we leave the elements of $B$ as they are, then we obtain updates in $O(\log_b n)$ and queries in $O(b \log_b n)$. Conversely, if we pre-compute the answers to {\code{sum}}, we have $O(b \log_b n)$ for updates and $O(\log_b n)$ for queries. As a matter of fact, essentially all theoretical constructions are variations of a $b$-ary {\method{Segment-Tree}}~\cite{dietz1989optimal,raman2001succinct,hon2011succinct,patrascu2006logarithmic}. An efficient implementation of such a data structure is, therefore, not only interesting for this reason but also particularly appealing in practice because it opens the possibility of using SIMD instructions to process in parallel the $b$ keys stored at each node of the tree. Lastly, the {\method{Fenwick-Tree}} also extends to branching factors larger than 2, but in a less obvious way that we discuss later in the paper.

\paragraph*{Contributions} For all the reasons discussed above, we describe efficient implementations of and compare the following solutions: the {\method{Segment-Tree}}, the {\method{Fenwick-Tree}}, the $b$-ary {\method{Segment-Tree}}, and the $b$-ary {\method{Fenwick-Tree}} -- plus other optimized variants that we will introduce in the rest of the paper. We show that, by taking into account (1) branch-free code execution, (2) cache-friendly node layouts, (3) SIMD instructions, and (4) compile-time optimizations, new interesting trade-offs can be established on modern hardware. After a careful experimental analysis, we arrive at the conclusion that an optimized $b$-ary {\method{Segment-Tree}} is the best data structure. Very importantly, we remark that optimizing the {\method{Segment-Tree}} is not only relevant for the prefix-sum problem, because this data structure can also be used to solve several other problems in computational geometry, such as \emph{range-min/max queries}, \emph{rectangle intersection}, \emph{point location}, and \emph{three-sided queries}. (See the book by~\citet{BergCKO08} and references therein for an introduction to such problems.) Thus, the contents of this paper can be adapted to solve these problems as well. To better support our exposition, we directly show (almost) full C++ implementations of the data structures, in order to guide the reader through a deep performance tuning of the software. We did our best to ensure that the presented code is compact and easy to understand without, for any reason, sacrificing its efficiency.
The whole code used in the article is freely available at \url{https://github.com/jermp/psds}, with detailed instructions on how to run the benchmarks and reproduce the results.
\subsection{Prefix Sums on Small Arrays: SIMD-Aware Node Layouts}\label{sec:nodes}

\begin{figure}[t] \lstinputlisting{update4.cpp} \caption{A SIMD-based implementation of {\code{update}} for an array \var{keys} of 4 $\times$ 64-bit integers. The table \var{T} must be aligned on a 32-byte boundary. \label{alg:update4}} \end{figure}

A reasonable starting point would be to consider $b = 4$ keys and compute their prefix sums in an array $\var{keys}[0..4)$. Queries are answered in the obvious way: $\code{sum}(i)=\var{keys}[i]$, and we do not explicitly mention it anymore in the following. Instead, an $\code{update}(i,\Delta)$ operation is solved by setting $\var{keys}[j] = \var{keys}[j] + \Delta$ for $j = i..3$. We can do this in parallel with SIMD as follows. Let \var{U} be a register of $4 \times 64$ bits that packs 4 integers, where the first $i$ integers are 0 and the remaining ones, for $j = i..3$, are equal to $\Delta$. Then $\code{update}(i,\Delta)$ is achieved by adding \var{U} and \var{keys} in parallel using the instruction \code{\_mm256\_add\_epi64}. To obtain the register \var{U} as desired, we first initialize \var{U} with 4 copies of $\Delta$ using the instruction \code{\_mm256\_set1\_epi64x}. Now, the copies of $\Delta$ before the $i$-th must be masked out. To do so, we use the index $i$ to perform a lookup into a small pre-computed table \var{T}, where $\var{T}[i]$ is the proper 256-bit mask. In this case, \var{T} is a $(4+1) \times 4$ table of unsigned 64-bit integers, where $\var{T}[i][0..3]$ contains $i$ zeros followed by $4-i$ copies of $2^{64}-1$, for $i=0..4$; for example, $\var{T}[1][0..3] = [0,2^{64}-1,2^{64}-1,2^{64}-1]$ and $\var{T}[4][0..3] = [0,0,0,0]$. Once we have loaded the proper mask, we obtain the desired register configuration with $\var{U} = \code{\_mm256\_and\_si256}(\var{U},\var{T}[i])$. The code\footnote{For all the code listings we show in this section, it is assumed that the size in bytes of (\code{u})\code{int64\_t}, (\code{u})\code{int32\_t}, and (\code{u})\code{int16\_t} is 8, 4, and 2, respectively.} for this algorithm is shown in Figure~\ref{alg:update4}.

An important observation is in order before proceeding. One could be tempted to leave the integers in $\var{keys}[0..4)$ as they are in order to obtain trivial updates and use SIMD instructions to answer {\code{sum}}. During our experiments we determined that this solution gives a \emph{worse} trade-off than the one described above: this was actually no surprise, considering that the algorithm for computing prefix sums with SIMD is complicated as it involves several shifts and additions (besides load and store). Therefore, SIMD is more effective on updates than on queries.

\paragraph*{Two-Level Data Structure} As already motivated, we would like to consider larger values of $b$, e.g., 64 or 256, in order to obtain even flatter trees. Working with such branching factors would mean applying the algorithm in Figure~\ref{alg:update4} $b/4$ times, which may be too slow. To alleviate this issue, we use a two-level data structure. We split the $b$ keys into $\sqrt{b}$ segments of $\sqrt{b}$ keys each and store each segment in prefix-sum. The sum of the integers in the $j$-th segment is stored in a \var{summary} array of $\sqrt{b}$ values in position $j + 1$, for $j = 0..\sqrt{b}-2$ ($\var{summary}[0] = 0$). The values of the \var{summary} are stored in prefix-sum as well. In Figure~\ref{fig:node16} we show a graphical example of this organization for $b=16$.
In the example, we apply the algorithm in Figure~\ref{alg:update4} only twice (first on the \code{summary}, then on a specific segment of \code{keys}): without the two-level organization, 4 executions of the algorithm would have been needed. Instead, queries are as easy as $\code{sum}(i) = \var{summary}[i / \sqrt{b}] + \var{keys}[i]$. The code corresponding to this approach for $b=64$ is shown in Figure~\ref{alg:update8}\footnote{The \code{build} method of the \code{node} class builds the two-level data structure as we explained above, and writes \code{node::size} bytes onto the output buffer, \code{out}. We do not report it for conciseness.}. In this case, note that the whole \var{summary} fits in one cache line, because its size is $8 \times 8 = 64$ bytes, and so does each segment of \var{keys}. The table \var{T} stores $(8 + 1) \times 8$ 64-bit unsigned values, where $\var{T}[i][0..7]$ is an array of 8 integers: the first $i$ are 0 and the other $8-i$ are $2^{64}-1$, for $i=0..8$. Lastly, the space overhead due to the \var{summary} array is always $1/\sqrt{b} \times 100\%$. For the example code in Figure~\ref{alg:update8}, the space consumed is 12.5\% more than that of the input array (576 bytes consumed instead of $64 \times 8 = 512$).

\begin{figure}[t] \centering \includegraphics[scale=1]{images/node16} \caption{A two-level node layout built from the array $[$13, -1, 2, 23, -4, 231, 13, 5, 2, -88, -52, 0, 4, 90, 3, -12$]$. The shaded arrays represent the content of the registers used when executing the operation $\code{update}(9,-37)$. \label{fig:node16}} \end{figure}

\begin{figure}[t] \lstinputlisting{node64.hpp} \caption{Code for a two-level node data structure with $b=64$. \label{alg:update8}} \end{figure}

\begin{figure}[t] \lstinputlisting{node64_restricted.hpp} \caption{Code for a two-level node data structure with $b=64$, for the ``restricted'' case where $\Delta$ is a signed 8-bit integer. \label{alg:update8_restricted}} \end{figure}

\paragraph*{Handling Small Updates} It is intuitive that one can obtain faster results for {\code{update}} if the bit-width of $\Delta$ is smaller than 64. In fact, a restriction on the possible values $\Delta$ can take makes it possible to ``pack'' more such values inside a SIMD register which, in turn, allows updating a larger number of keys in parallel. As we already observed, neither the binary {\method{Segment-Tree}} nor the {\method{Fenwick-Tree}} can possibly take advantage of this restriction. To exploit this possibility, we buffer the updates. We restrict the bit-width of $\Delta$ to 8, that is, $\Delta$ is now a value in the range $[-128, +127]$ (instead of the generic, un-restricted, case of $\Delta \in [-2^{63},+2^{63}-1]$). We enrich the two-level node layout introduced before with some additional arrays to buffer the updates. These arrays are made of 16-bit signed integers. We maintain one such array of size $\sqrt{b}$, say \var{summary\_buffer}, plus another of size $b$, \var{keys\_buffer}. In particular, the $i$-th value of \var{keys\_buffer} holds the $\Delta$ value for the $i$-th key; similarly, the $(j+1)$-th value of \var{summary\_buffer} holds the $\Delta$ value for the $j$-th segment. The buffers are kept in prefix-sum. Upon an $\code{update}(i,\Delta)$ operation, the buffer for the summary and that of the specific segment comprising $i$ are updated using SIMD.
The key difference now is that -- because we work with smaller integers -- 8, or even 16, integers are updated simultaneously, instead of only 4 as illustrated with the code in Figure~\ref{alg:update4}. For example, suppose $b=64$. The whole \var{summary\_buffer}, which consists of $16 \times 8 = 128$ bits, fits into one SSE SIMD register, thus 8 integers are updated simultaneously. For $b=256$, 16 integers are updated simultaneously because $16 \times 16 = 256$ bits again fit into one AVX SIMD register. This makes for a big improvement with respect to the un-restricted case because, instead of executing the algorithm in Figure~\ref{alg:update4} $\sqrt{b}/4$ times, \emph{only one} such update is sufficient. This potentially makes the restricted case $2\times$ and $4\times$ faster than the un-restricted case for $b=64$ and $b=256$ respectively. To avoid overflow issues, we periodically bring the \var{keys} (and the \var{summary}) up to date by applying to them the updates stored in the buffers. Since a 16-bit buffer entry can absorb at most $2^{15}/2^{7} = 256$ updates of maximum magnitude before overflowing, we clean the buffers every 256 {\code{update}} operations. The solution described here holds, therefore, in the amortized sense\footnote{Note that such a ``cleaning'' operation becomes less and less frequent the deeper a node is in the tree hierarchy. Thus, scalar code is sufficient to perform this operation, as vectorization would negligibly affect the running time.}. Figure~\ref{alg:update8_restricted} contains the relevant code illustrating this approach. The code should be read as the ``restricted'' variant of that shown in Figure~\ref{alg:update8}. We count the number of updates with a single 8-bit unsigned integer, \var{updates}, which is initialized to 255 in the \code{build} method. When this variable is equal to 255, it will overflow the next time it is incremented by 1, indeed making it equal to 0. Therefore we correctly clean the buffers every 256 updates. In this case, the table \var{T} stores $(16+1) \times 16$ 16-bit unsigned values, where each $\var{T}[i][0..15]$ contains a prefix of $i$ zeros, followed by $16-i$ copies of $2^{16}-1$, for $i=0..16$. Also, {\code{sum}} queries are now answered by computing the sum of four quantities, which is actually more expensive than the queries for the general case. The data structure consumes $(2b + 10\sqrt{b} + 1)/(8b) \times 100\%$ more bytes than the input. For $b=64$, as in the code given in Figure~\ref{alg:update8_restricted}, this extra space is 40.8\%; for $b=256$, it is 32.9\%. (A scalar sketch summarizing the two-level layout of this section is given below.)
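The following listing is our own scalar (non-SIMD) sketch of the two-level node for the general case with $b=64$; the struct and field names are illustrative, and it intentionally omits the vectorized updates of Figure~\ref{alg:update8} and the buffering machinery of the restricted variant.
\begin{lstlisting}
#include <cstdint>

// Scalar sketch of the two-level node layout: 8 segments of 8 keys each,
// every segment kept in prefix-sum, plus a summary array (also in prefix-sum)
// where summary[j] is the sum of the keys in segments 0..j-1 (summary[0] = 0).
struct node64_scalar_sketch {
    static constexpr uint64_t segments = 8, segment_size = 8;  // b = 64
    int64_t summary[segments];
    int64_t keys[segments * segment_size];

    int64_t sum(uint64_t i) const {           // prefix sum of the first i+1 keys
        return summary[i / segment_size] + keys[i];
    }
    void update(uint64_t i, int64_t delta) {  // add delta to the i-th key
        uint64_t j = i / segment_size;        // segment comprising i
        for (uint64_t k = j + 1; k < segments; ++k) summary[k] += delta;
        for (uint64_t k = i; k < (j + 1) * segment_size; ++k) keys[k] += delta;
        // the SIMD code replaces each of these two loops with a single masked
        // 256-bit addition, using the pre-computed mask table T
    }
};
\end{lstlisting}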
\subsection{Prefix Sums on Large Arrays}\label{sec:large_arrays}

In this section we use the two-level data structures introduced in Section~\ref{sec:nodes} in the nodes of a {\method{Segment-Tree}}, to establish new practical trade-offs. If $b$ is the node's fanout, this solution provides a tree of height $\lceil \log_b n \rceil$, whose shape is illustrated in Figure~\ref{fig:segtree_shape} for the case $b=4$ and $n=64$. The code in Figure~\ref{code:simdsegtree} takes a generic \code{Node} structure as a template parameter and builds the tree based on its fanout and size (in bytes). In what follows, we shall first discuss some optimizations and then illustrate the experimental results.

\paragraph*{Avoiding Loops and Branches} The tree data structure is serialized in an array of bytes, the \code{tree} array in the code. In a separate array, \code{nodes}, we instead keep the number of nodes at each level of the tree. This information is used to traverse the tree. In particular, when we have to resolve an operation at a given tree level, all we have to do is instantiate a \code{Node} object at the proper byte location in the array \code{tree} and call the desired {\code{sum}}/{\code{update}} method of the object. Such a byte location depends on both the number of nodes in the previous levels and the size (in bytes) of the used \code{Node} structure.

The deliberate choice of working with large node fanouts, such as 64 and 256, makes the tree very flat. For example, when $b=256$, the {\method{Segment-Tree}} has height at most 3 until $n$ reaches $2^{24}$, and actually only at most 4 for arrays of up to $2^{32}$ keys. This suggests writing a specialized code path that handles each possible value of the tree height. Doing so makes it possible to completely eliminate the loops in the body of both {\code{sum}} and {\code{update}} (unrolling) and reduce possible data dependencies, taking into account that the result of an operation at a given level of the tree does \emph{not} depend on the result of the operations at the other levels. Therefore, instead of looping through the levels of the tree, like
\begin{lstlisting}
int64_t sum(uint64_t i) const {
    int64_t sum = 0;
    for (uint32_t h = 1; h != Height; ++h) {
        Node node(...);
        sum += node.sum(...);
    }
    return sum;
}
\end{lstlisting}
which actually stalls the computation because the work at level \code{h+1} cannot begin before that at level \code{h} has completed, we instantiate a different \code{Node} object for each level without a loop. For example, if the height of the tree is 2, we do
\begin{lstlisting}
uint64_t child1 = i / b;
Node node1(ptr);
Node node2(ptr + (1 + child1) * Node::size);
/* do something with node1 and node2 */
\end{lstlisting}
and if it is 3, we do instead
\begin{lstlisting}
uint64_t child1 = i / (b * b);
uint64_t child2 = (i % (b * b)) / b;
Node node1(ptr);
Node node2(ptr + (1 + child1) * Node::size);
Node node3(ptr + (1 + nodes[1] + child2 + child1 * b) * Node::size);
/* do something with node1, node2, and node3 */
\end{lstlisting}
where \var{ptr} is the pointer to the memory holding the \var{tree} data.

Lastly, we discuss why the height of the tree, \var{Height} in the code, is also modeled as a template parameter. If the value of \var{Height} is known at compile-time, the compiler can produce a template specialization of the \var{segment\_tree} class that avoids the evaluation of an \code{if-else} cascade that would otherwise be necessary to select the proper code path handling that specific value of \code{Height}. This removes unnecessary branches in the code of the operations, and it is achieved with the \code{if-constexpr} idiom of C++17:
\begin{lstlisting}
int64_t sum(uint64_t i) const {
    if constexpr (Height == 1) { ... }
    if constexpr (Height == 2) { ... }
    ...
}
void update(uint64_t i, int64_t delta) {
    if constexpr (Height == 1) { ... }
    if constexpr (Height == 2) { ... }
    ...
}
\end{lstlisting}
where the code inside each \code{if} branch handles the corresponding value of \code{Height} as we explained above.

\begin{figure}[t] \lstinputlisting{segment_tree_bary.hpp} \caption{The $b$-ary {\method{Segment-Tree}} code handling a \code{Node} structure that is specified as a template parameter. \label{code:simdsegtree}} \end{figure}

\paragraph*{Experimental Results} We now comment on the experimental results in Figure~\ref{fig:segment_tree}. The two versions of the data structure that we tested, with $b=64$ and $b=256$, will be referred to as ${\method{ST}}_{64}$ and ${\method{ST}}_{256}$ respectively.
\begin{figure}[t] \centering \subfloat[{\code{sum}}]{ \includegraphics[scale=\myfigsize]{{results_and/sum_segment_tree_restricted_vs_unrestricted}} \label{fig:segment_tree_a} } \subfloat[{\code{update}}]{ \includegraphics[scale=\myfigsize]{{results_and/update_segment_tree_restricted_vs_unrestricted}} \label{fig:segment_tree_b} } \caption{The running times of {\code{sum}}/{\code{update}} for the $b$-ary {\method{Segment-Tree}} with branching factors 64 and 256. \label{fig:segment_tree}} \end{figure}

We first consider the most general case where $\delta=64$. Regarding {\code{sum}} queries, we see that the two-level node layout, which guarantees constant-time complexity, together with the very short tree height, makes both the tested {\method{Segment-Tree}} versions substantially improve over the {\method{Fenwick-Tree}}. The curves for the {\method{Segment-Tree}} are strictly sandwiched between 1 and 2 nanoseconds until the data structures can no longer be contained in $L_2$, which happens close to the $2^{17}$ boundary. In this region, the {\method{Segment-Tree}} is bound by CPU cycles, whereas the {\method{Fenwick-Tree}} is bound by branch mispredictions. For $n \approx 2^{16}$ and {\code{sum}} queries, ${\method{ST}}_{64}$ executes 23\% of the instructions and spends 22\% of the cycles of the {\method{Fenwick-Tree}}. As a result of the optimizations we discussed at the beginning of the section (loop-unrolling and reduced branches), it only performs 2.2\% of the branches of the {\method{Fenwick-Tree}} and misses 0.36\% of those. The larger tree height of the {\method{Fenwick-Tree}} also induces a poor cache utilization compared to the {\method{Segment-Tree}}. As already discussed, this becomes evident for large values of $n$. For example, for $n=250 \times 10^6$ and {\code{sum}} queries, ${\method{ST}}_{64}$ incurs 40\% fewer cache misses than the {\method{Fenwick-Tree}} (and ${\method{ST}}_{256}$ 80\% fewer). This cache-friendly behavior is a direct consequence of the very short height.

Perhaps more interesting is the case for updates, where SIMD instructions can be exploited to lower the running time. As a reference point, we show in Figure~\ref{fig:segment_tree_no_simd} the running time of {\code{update}} \emph{without} manual usage of SIMD (only the general case is shown for simplicity): compared to the plots in Figure~\ref{fig:segment_tree}, we see that SIMD offers a good reduction in the running time of {\code{update}}. If we were \emph{not} to use SIMD, we would obtain a $4\times$ larger time for {\code{update}}, thus SIMD is close to its ideal speed-up in our case (4 $\times$ 64-bit integers are packed in a register). This reduction is more evident for $b=64$ than for $b=256$ because, as we discussed in Section~\ref{sec:nodes}, updating 8 keys is faster than updating 16 keys: this lets $\method{ST}_{64}$ perform (roughly) half of the SIMD instructions of $\method{ST}_{256}$, although $\method{ST}_{256}$ is one level shorter. This translates into fewer cycles spent and less time. Also, ${\method{ST}}_{256}$ incurs slightly more cache misses than ${\method{ST}}_{64}$ (11\% more), because each {\code{update}} must access twice as many cache lines as in ${\method{ST}}_{64}$: from Section~\ref{sec:nodes}, recall that each segment consists of 16 keys that fit in two cache lines. The $\method{ST}_{64}$ solution, instead, is as fast as or faster than the {\method{Fenwick-Tree}} thanks to SIMD instructions.
It is important to stress that $\method{ST}_{64}$ actually executes more instructions than the {\method{Fenwick-Tree}}. However, it does so in fewer cycles, thus confirming the good impact of SIMD.

\begin{figure}[t] \centering \includegraphics[scale=\myfigsize]{{results_and/update_segment_tree_no_simd}} \caption{The running times of {\code{update}} for the $b$-ary {\method{Segment-Tree}} with branching factors 64 and 256, \emph{without} manual usage of SIMD instructions. \label{fig:segment_tree_no_simd}} \end{figure}

Considering the overall trade-off between {\code{sum}} and {\code{update}}, we conclude that -- for the general case where $\Delta$ is expressed in 8 bytes -- the ${\method{ST}}_{64}$ data structure offers the best running times.

We now discuss the case where $\delta = 8$. Queries are less efficient than those in the general case because, as explained in Section~\ref{sec:nodes}, they are computed by accessing four different array locations at each level of the tree. These additional accesses to memory induce more cache misses as soon as the data structure does not fit in cache: in fact, notice how the time rises around the $2^{12}$ and $2^{16}$ boundaries, also because the restricted versions consume more space. Nonetheless, the $\method{ST}_{256}$ variant maintains a good advantage over the {\method{Fenwick-Tree}}, especially for large values of $n$ (its curve is actually very similar to that of ${\method{ST}}_{64}$ for the general case). The restriction on $\Delta$ produces an improvement for {\code{update}} as expected. Note how the shape of $\method{ST}_{256}$ dramatically improves (by $1.5-2\times$) in this case because 16 keys are updated in parallel rather than just 4 as in the general case (now, 16 $\times$ 16-bit integers are packed in a register). Therefore, we conclude that -- for the restricted case where $\Delta$ is expressed in 1 byte -- the solution $\method{ST}_{256}$ offers the best trade-off between the running times of {\code{sum}} and {\code{update}}.

\subsection{The Truncated-{\method{Fenwick-Tree}}}\label{sec:truncatedfentree}

We have argued and shown that SIMD is highly effective in reducing the running time of {\code{update}} (consider Figure~\ref{fig:segment_tree} vs. Figure~\ref{fig:segment_tree_no_simd}). However, these special instructions must necessarily be combined with a (very) short tree height, given that the {\code{update}} algorithm becomes much more involved at the node-level granularity: while this is the case for the $b$-ary {\method{Segment-Tree}}, it is \emph{not} for the Blocked-{\method{Fenwick-Tree}} (introduced in Section~\ref{sec:fentree_bary}), hence its poor efficiency. This suggests a strategy where one keeps running the simplest code for {\code{update}}, e.g., that of the classic {\method{Fenwick-Tree}}, until the number of leaves in the sub-tree is $b$ and, thus, these can be updated in parallel with SIMD. The shape of the resulting data structure -- the \emph{Truncated}-{\method{Fenwick-Tree}} (\textsf{TFT}) -- is illustrated in Figure~\ref{fig:truncated_fentree_shape} (for $b=4$ and $n=64$) and shows the clear division between the upper part, represented by the {\method{Fenwick-Tree}}, and the lower part, which consists of an array of blocks. Compared to the classic {\method{Fenwick-Tree}}, it is now intuitive that this variant reduces the number of cache misses because the upper part is likely to fit well in cache and the $b$ keys inside a block are contiguous in memory.
To implement the Truncated-{\method{Fenwick-Tree}}, we proceed as follows. We form $\lceil n/b \rceil$ blocks and let these become the leaves of a classic {\method{Fenwick-Tree}}\footnote{Clearly, we could replace the upper part of the data structure with a 2-ary {\method{Segment-Tree}}, and define the corresponding \emph{Truncated}-{\method{Segment-Tree}}. However, we focus on the {\method{Fenwick-Tree}} only, given that it is always better than the {\method{Segment-Tree}}, as analyzed in Section~\ref{sec:segment_tree_vs_fenwick_tree}.}. In particular, this smaller {\method{Fenwick-Tree}} of height $\lceil \log_2((n+1)/b) \rceil$ is obtained by materializing a temporary array \code{tmp} where $\code{tmp}[j]$ is the sum of the elements in the $j$-th block, $j=0..\lceil n/b \rceil-1$, and building a {\method{Fenwick-Tree}} from this array. Therefore, the {\method{Fenwick-Tree}} provides an index into a specific block of $b$ leaves that are handled in parallel with SIMD. The upper part of the data structure takes $\lceil n/b \rceil+1$ words and the lower part $\lceil n/b \rceil \times \Theta(b)$ words, for a total of $\Theta(n)$ words. The code in Figure~\ref{code:truncated_fentree} implements this approach. The procedures of {\code{sum}} and {\code{update}} for an index $i$ become the combination of the respective procedures invoked on the {\method{Fenwick-Tree}} (upper part) for index $j = \lfloor i/b \rfloor$ and on the \code{Node} structure holding the $j$-th block of leaves (lower part); a simplified scalar sketch of this combination is given below.

Compared to the Blocked-{\method{Fenwick-Tree}}, this truncated variant has the same height but reduces the processing time at every traversed node by relying on the simplicity of a classic {\method{Fenwick-Tree}} and by limiting the usage of SIMD to one block per operation. This is achieved by using some extra space in the representation, i.e., the space for the {\method{Fenwick-Tree}}, that is, $\lceil n/b \rceil+1$ extra memory words. As already observed, since the {\method{Fenwick-Tree}} is small and traversed for every operation, we expect it to be cached even for large $n$. For example, for $n = 2^{28}$ and $b=256$, the {\method{Fenwick-Tree}} takes $2^{20}+1$ memory words that fit well in $L_3$. Furthermore, cache conflicts are limited to the upper part of the data structure, and we already discussed how to solve this issue in Section~\ref{sec:segment_tree_vs_fenwick_tree}.

\begin{figure}[t] \subfloat[{\code{sum}}]{ \includegraphics[scale=\myfigsize]{{results_and/sum_fenwick_tree_truncated_restricted_vs_unrestricted}} \label{fig:fenwick_tree_truncated_a} } \subfloat[{\code{update}}]{ \includegraphics[scale=\myfigsize]{{results_and/update_fenwick_tree_truncated_restricted_vs_unrestricted}} \label{fig:fenwick_tree_truncated_b} } \caption{The running times of {\code{sum}}/{\code{update}} for the Truncated-{\method{Fenwick-Tree}} (\textsf{TFT}) with leaves of 64 and 256 keys. \label{fig:fenwick_tree_truncated}} \end{figure}
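The listing below is a self-contained scalar sketch of this combination (our own illustrative code with made-up names, not the actual implementation of Figure~\ref{code:truncated_fentree}): a classic {\method{Fenwick-Tree}} is kept over the per-block sums, while each block of $b$ keys is stored in prefix-sum; the SIMD block update is replaced here by a plain loop.
\begin{lstlisting}
#include <cstdint>
#include <vector>

// Scalar sketch of the Truncated-Fenwick-Tree: a classic Fenwick-Tree over the
// per-block sums (upper part) plus blocks of b keys kept in prefix-sum (lower part).
struct truncated_fenwick_tree_sketch {
    static constexpr uint64_t b = 64;       // keys per block
    std::vector<int64_t> fen;               // 1-based Fenwick array over block sums
    std::vector<int64_t> blocks;            // per-block prefix sums, b keys per block

    explicit truncated_fenwick_tree_sketch(std::vector<int64_t> const& A)
        : fen((A.size() + b - 1) / b + 1, 0)
        , blocks(((A.size() + b - 1) / b) * b, 0) {
        for (uint64_t i = 0; i != A.size(); ++i) update(i, A[i]);  // simple O(n*b) build
    }

    int64_t sum(uint64_t i) const {         // A[0] + ... + A[i]
        uint64_t j = i / b;                 // block comprising i
        int64_t s = blocks[j * b + i % b];  // prefix sum inside block j (lower part)
        for (uint64_t k = j; k != 0; k -= k & -k) s += fen[k];  // sums of blocks 0..j-1
        return s;
    }

    void update(uint64_t i, int64_t delta) {  // A[i] += delta
        uint64_t j = i / b;
        for (uint64_t p = i % b; p != b; ++p) blocks[j * b + p] += delta;  // scalar stand-in for SIMD
        for (uint64_t k = j + 1; k < fen.size(); k += k & -k) fen[k] += delta;
    }
};
\end{lstlisting}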
The experimental results shown in Figure~\ref{fig:fenwick_tree_truncated} meet our expectations, as the truncated variant improves over both the blocked and the classic {\method{Fenwick-Tree}}. In particular, it performs similarly to the classic {\method{Fenwick-Tree}} for small values of $n$ but gains a significant advantage for larger $n$ thanks to its better cache usage. Comparing Figure~\ref{fig:fenwick_tree_truncated_b} to Figure~\ref{fig:segment_tree_b} at page~\pageref{fig:segment_tree_b}, we can also observe that this variant improves over the $b$-ary {\method{Segment-Tree}} in the general case for {\code{update}} ($\delta=64$) and large $n$ because the use of SIMD is limited to one block of $b$ keys per operation. For {\code{sum}} queries instead, the data structure performs similarly to the $b$-ary {\method{Segment-Tree}}, with the latter being generally faster for all values of $n$.

\section{The $b$-ary Fenwick-Tree}\label{sec:fentree_bary}

The classic {\method{Fenwick-Tree}} we described in Section~\ref{sec:fentree} exploits the base-2 representation of a number in $[0,n)$ to support {\code{sum}} and {\code{update}} in time $O(\log n)$. If we change the base of the representation to a value $b>2$, the corresponding $b$-ary {\method{Fenwick-Tree}} can be defined~\cite{bille2017succinct} -- a data structure supporting {\code{sum}} in $O(\log_b n)$ and {\code{update}} in $O(b\log_b n)$. A pictorial representation of the data structure is given in Figure~\ref{fig:bary_fentree_shape} for $b=4$ and $n=64$. However, this data structure does not offer an improved trade-off compared to the solutions described in the previous sections, for the reasons we discuss in the following.

Let us consider {\code{sum}}. Recall that the {\method{Fenwick-Tree}} ignores digits that are 0. We have already commented on this for the case $b=2$ in Section~\ref{sec:segment_tree_vs_fenwick_tree}: for random queries, this gives a substantial boost over the {\method{Segment-Tree}} with $b=2$ because roughly 50\% of the levels are skipped, \emph{as if} the height of the tree were actually $\frac{1}{2}\lceil\log_2(n+1)\rceil$. Unfortunately, this advantage does \emph{not} carry over for larger $b$ because the probability that a digit is 0 is $1/b$, which is very low for the values of $b$ we consider in our experimental analysis (64 and 256). In fact, the $b$-ary {\method{Fenwick-Tree}} is not faster than the $b$-ary {\method{Segment-Tree}} (although it is when compared to the classic {\method{Fenwick-Tree}} with $b=2$).

Even more problematic is the case for {\code{update}}. Having to deal with more digits clearly slows down {\code{update}}, which needs to access $b-1$ nodes per level, for a complexity of $O(b\log_b n)$. Observe that $(b-1)\frac{\log_2 n}{\log_2 b}$ is more than $\log_2 n$ for every $b>2$: the ratio between the two quantities is $\frac{b-1}{\log_2 b}$, which equals $63/6 = 10.5$ for $b=64$ and $255/8 \approx 31.9$ for $b=256$. Hence we can expect a slowdown of more than $10\times$ for $b=64$ (and of nearly $32\times$ for $b=256$). Experimental results confirmed this analysis. Note that the $b$-ary {\method{Segment-Tree}} does much better than this because: (1) it traverses $\lceil \log_b n \rceil$ nodes per operation, (2) the $b$ keys to update per node are contiguous in memory for a better cache usage and, hence, (3) are amenable to the SIMD optimizations we described in Section~\ref{sec:nodes}. Therefore, we experimented with two other ideas in order to improve the trade-off of the $b$-ary {\method{Fenwick-Tree}}.
The first idea is to block $b$ keys together into the nodes of a classic {\method{Fenwick-Tree}}, as depicted in Figure~\ref{fig:blocked_fentree_shape} for $b=4$ and $n=64$. Compared to the $b$-ary {\method{Fenwick-Tree}}, this variant -- which we call the \emph{blocked} {\method{Fenwick-Tree}} -- slows down the queries but improves the updates, for a better trade-off. However, it does not improve over the classic {\method{Fenwick-Tree}} with $b=2$. In fact, although the height of the tree is $\lceil \log_2((n+1)/b) \rceil + 1$, thus smaller than that of the classic {\method{Fenwick-Tree}} by $\log_2 b$, the number of traversed nodes is only reduced by $\frac{1}{2}\log_2 b$, which is quite small for the values of $b$ considered (e.g., just 3 for $b=64$). Therefore, this variant traverses slightly fewer nodes but the computation at each node is more expensive (more cache lines accessed due to two-level nodes; more cycles spent due to SIMD), resulting in a larger runtime compared to the classic {\method{Fenwick-Tree}}.

The second idea is the Truncated-{\method{Fenwick-Tree}} already described in Section~\ref{sec:truncatedfentree}, which keeps the simple code of the classic {\method{Fenwick-Tree}} in the upper part of the tree and limits the use of SIMD to one block of $b$ leaves per operation, with the experimental results reported in Figure~\ref{fig:fenwick_tree_truncated}.

As a last note, we remark that all the variants of the {\method{Fenwick-Tree}} we sketched in this section are part of the software library available at \url{https://github.com/jermp/psds}.
\section{The $b$-ary Segment-Tree}\label{sec:segtree_bary}

The solutions analyzed in the previous sections have two main limitations. (1) The height of the tree is $\lceil \log_2 n \rceil + 1$: for the {\method{Segment-Tree}}, because each internal node has 2 children; for the {\method{Fenwick-Tree}}, because an index in $[0,n)$ is decomposed as a sum of some powers of 2. Thus, the tree may become excessively tall for large values of $n$. (2) The running time of {\code{update}} does not depend on $\Delta$. In particular, it makes no difference whether $\Delta$ is ``small'', e.g., it fits in one byte, or arbitrarily big: possible assumptions on $\Delta$ are not currently exploited to achieve a better runtime. To address these limitations, we can let each internal node of the tree hold a block of $b>2$ keys. While this reduces the height of the tree for a better cache usage, it also enables the use of SIMD instructions for faster {\code{update}} operations because several keys can be updated in parallel. Therefore, in Section~\ref{sec:nodes} we introduce a solution that works for ``small'' arrays of $b>2$ keys, e.g., 64 or 256. Then, in Section~\ref{sec:large_arrays}, we show how to embed this small-array solution into the nodes of a {\method{Segment-Tree}} to obtain a solution for arbitrary values of $n$.

\input{nodes}
\input{large_arrays}

\section{Preliminary Exploration: Segment and Fenwick Trees}\label{sec:preliminaries}

\section{The Segment-Tree}\label{sec:segtree}

The {\method{Segment-Tree}} data structure was originally proposed by~\citet*{bentley1977solutions} in an unpublished manuscript, and later formalized by~\citet*{bentley1980optimal} to solve the so-called \emph{batched range problem} (given a set of rectangles in the plane and a query point, report all the rectangles containing the point). Given an array $A[0..n)$, the {\method{Segment-Tree}} is a complete balanced binary tree whose leaves correspond to the individual elements of $A$ and whose internal nodes hold the sum of the elements of $A$ descending from the left and right sub-trees. The ``segment'' terminology derives from the fact that each internal node logically covers a segment of the array and holds the sum of the elements in the segment. Therefore, in the simplest terms, the {\method{Segment-Tree}} is a hierarchy of segments covering the elements of $A$. In fact, the root of the tree covers the entire array, i.e., the segment $[0,n-1]$; its children each cover half of the array, that is, the segments $[0,\lfloor (n-1)/2 \rfloor]$ and $[\lfloor (n-1)/2 \rfloor + 1,n-1]$ respectively, and so on until the $i$-th leaf spans the segment $[i,i]$, which corresponds to $A[i]$. Figure~\ref{fig:segtree} shows a graphical example for an array of size 16.

\begin{figure}[t] \centering \subfloat[]{ \includegraphics[scale=1]{{images/segtree}} \label{fig:segtree} } \subfloat[]{ \includegraphics[scale=0.95]{{images/implicit_tree}} \label{fig:implicit_tree} } \caption{In (a), an example of {\method{Segment-Tree}} built from an input array $A$, with highlighted nodes belonging to the root-to-leaf path for answering $\code{sum}(10)$. The shaded nodes are the ones accessed to compute the result, thus $\code{sum}(10)=282-86-52=144$. In (b), the array-like representation of the same tree.
} \end{figure} \paragraph*{Top-Down Traversal} Both {\code{sum}} and {\code{update}} can be resolved by traversing the tree structure \emph{top-down} with a classic binary-search algorithm. For example, Figure~\ref{fig:segtree} highlights the nodes traversed during the computation of $\code{sum}(10)$. To answer a $\code{sum}(i)$ query, for every traversed node we determine the segment comprising $i$ by comparing $i$ with the index in the \emph{middle} of the segment spanned by the node and moving either to the left or right child. Every time we move to the right child, we can sum the value stored in the left child to our result. Updates are implemented in a similar way. Since each child of a node spans half of the segment of its parent, the tree is balanced and its height is $\lceil \log_2 n \rceil + 1$. It follows that {\code{sum}} and {\code{update}} are supported in $O(\log n)$ time. \begin{figure}[t] \lstinputlisting{segment_tree_topdown.hpp} \caption{A top-down implementation of the {\method{Segment-Tree}}. \label{code:segtree}} \end{figure} In Figure~\ref{code:segtree} we give the full code implementing this top-down approach. Some considerations are in order. First, observe that we build the tree by padding the array with zeros until we reach the first power of 2 that is larger-than or equal-to $n$. This substantially simplifies the code, and permits further optimizations that we will introduce later. Second, we store the entire tree in an array, named \code{tree} in the code, thus representing the tree topology \emph{implicitly} with parent-to-child relationships coded via integer arithmetic: if the parent node is stored in the array at position $i$, then its left and right children are placed at positions $2i + 1$ and $2i + 2$ respectively. Note that this indexing mechanism requires \emph{every} node of the tree to have two children, and this is (also) the reason why we pad the array with zeros to reach the closest power of two. Figure~\ref{fig:implicit_tree} shows the array-like view of the tree in Figure~\ref{fig:segtree}. The complete tree hierarchy, excluding the leaves which correspond to the elements of the original array, consists of $2^{\lceil \log_2 n \rceil}-1$ internal nodes. These internal nodes are stored in the first half of the array, i.e., in \code{tree[0..size-1)}; the leaves are stored in the second half, i.e., in \code{tree[size-1..2*size-1)}. Thus the overall {\method{Segment-Tree}} takes a total of $2^{\lceil \log_2 n \rceil+1}-1$ 64-bit words. Therefore, we have that the {\method{Segment-Tree}} takes $c \cdot n$ memory words, with $c \in [2,4)$. The constant $c$ is 2 when $n$ is a power of 2, but becomes close to 4 when $n$ is very distant from $2^{\lceil \log_2 n \rceil}$. \paragraph*{Bottom-Up Traversal} In Figure~\ref{code:segtree_bottomup} we show an alternative implementation of the {\method{Segment-Tree}}. Albeit logically equivalent to the code shown in Figure~\ref{code:segtree}, this implementation has advantages. First, it avoids extra space. In particular, it always consumes $2n-1$ memory words, \emph{while preserving implicit parent-to-child relationships} expressed via integer arithmetic. Achieving this when $n$ is a power of 2 is trivial because the tree will always be full, but is more difficult otherwise. In the \code{build} method of the class we show an algorithm that does so. The idea is to let the leaves of the tree, which correspond to the elements of the input, be stored in the array \code{tree} slightly out of order. 
The leaves are still stored in the second half of the array but instead of being laid out consecutively from position \code{n-1} (as it would be if \code{n} were a power of 2), they start from position \code{begin}, which can be larger than \code{n-1}. Let \var{m} be $2n-1$. The first \var{m - begin} leaves are stored in \code{tree[begin..m)}; all remaining leaves are stored in \code{tree[n-1,begin)}. This is achieved with the two \code{for} loops in the \code{build} method. (Note that \code{begin} is always $2^{\lceil \log_2 n \rceil}-1$ which is $n-1$ if $n$ is a power of 2.) Such \emph{circular} displacement of the leaves guarantees that parent-to-child relationships are preserved even when $n$ is \emph{not} a power of 2. Lastly, the \code{visit} method traverses the internal nodes recursively, writing in each node the proper sum value. \begin{figure}[t] \lstinputlisting{segment_tree_bottomup.hpp} \caption{A bottom-up implementation of the {\method{Segment-Tree}}. \label{code:segtree_bottomup}} \end{figure} Second, we now traverse the tree structure \emph{bottom-up}, instead of top-down. This direction allows a much simpler implementation of the {\code{sum}} and {\code{update}} procedures. The inner \code{while} loop is shorter and it is only governed by the index \var{p}, which is initialized to be the position of the \var{i}-th leaf using the function \code{leaf}, and updated to become the index of the parent node at each iteration until it becomes \var{0}, the index of the root node. Furthermore in the code for {\code{sum}}, every time \var{p} is the index of a \emph{right} child we sum the content of its left sibling, which is stored in \var{tree[p-1]}, to the result. To check whether \var{p} is the index of a right child, we exploit the property that \emph{left children are always stored at odd positions in the array; right children at even positions}. This can be proved by induction on \var{p}. If \var{p} is 0, i.e., is the index of the root, then its left child is in position 1 and its right child in position 2, hence the property is satisfied. Now, suppose it holds true when $\var{p} = k$, for some $k > 0$. To show that the property holds for the children of \var{p}, it is sufficient to recall that: the double of a number is always even; if we sum 1 to an even number, the result is an odd number (left child); if we sum 2 to an even number, the result is an even number (right child). In conclusion, if the parity of \var{p} is 0, it must be a right child. (The parity of \var{p} is indicated by its first bit from the right, that we isolate with \code{p \& 1}.) \paragraph*{Branch-Free Traversal} Whatever implementation of the {\method{Segment-Tree}} we use, either top-down or bottom-up, a branch (\code{if} statement) is always executed in the \code{while} loop of {\code{sum}} (and in that of {\code{update}} for the top-down variant). For randomly-distributed queries, we expect the branch to be hard to predict: it will be true for approximately half of the times, and false for the others. Using speculative execution as branch-handling policy, the penalty incurred whenever the processor mispredicts a branch is a \emph{pipeline flush}: all the instructions executed so far (speculatively) must be discarded from the processor pipeline, for a decreased instruction throughput. Therefore, we would like to avoid the branch inside the \code{while} loop. 
\begin{figure}[t] \subfloat[top-down]{ \begin{minipage}[t]{0.5\textwidth} \lstinputlisting{branch_free_sum_topdown.cpp} \label{code:branchfree-sum-a} \end{minipage} } \subfloat[bottom-up]{ \begin{minipage}[t]{0.5\textwidth} \lstinputlisting{branch_free_sum_bottomup.cpp} \end{minipage} } \caption{Branch-free {\code{sum}} implementations on the {\method{Segment-Tree}}. \label{code:branchfree-sum}} \end{figure} In Figure~\ref{code:branchfree-sum} we show a branch-free implementation of the {\code{sum}} algorithm\footnote{The {\code{update}} algorithm for top-down is completely symmetric to that of {\code{sum}}, and not shown here. Also, note that the {\code{update}} algorithm for the bottom-up variant is already branch-free.}. It basically uses the result of the comparison, \var{cmp}, which will be either 1 or 0, to appropriately mask\footnote{It is interesting to report that, although the code shown here uses multiplication, the assembly output by the compiler does not involve multiplication at all. The compiler implements the operation \code{cmp * tree[p]} with a \emph{conditional move} instruction (\code{cmov}), which loads into a register the content of \code{tree[p]} only if \code{cmp} is true.} the quantity we are adding to the result (and move to the proper child obliviously in the top-down traversal). The correctness is immediate and left to the reader. The result of the comparison between the two approaches, branchy vs. branch-free, is shown in Figure~\ref{fig:branchy_vs_branchfree}. \begin{figure}[t] \centering \subfloat[{\code{sum}}]{ \includegraphics[scale=\myfigsize]{{results_and/sum_segment_tree_branchy_vs_branchless}} } \subfloat[{\code{update}}]{ \includegraphics[scale=\myfigsize]{{results_and/update_segment_tree_branchy_vs_branchless}} \label{fig:branchy_vs_branchfree:update} } \caption{Running times for branchy and branch-free {\code{sum}}/{\code{update}} on the {\method{Segment-Tree}} (ST). \label{fig:branchy_vs_branchfree}} \end{figure} As we can see, the branch-free implementations of both {\code{sum}} and {\code{update}} are much faster than the branchy counterparts, on average by $2\times$ or more, for a wide range of practical values of $n$. We collected some performance counters using the Linux \texttt{perf} utility to further explain this evidence. Here we report an example for the bottom-up approach and {\code{sum}} queries. For $n \approx 2^{16}$, the branch-free code spends 66\% fewer cycles than the branchy code although it actually executes 38\% more instructions, thus raising the instruction throughput from 0.54 to 2.2. Furthermore, it executes 44\% fewer branches and misses only 0.2\% of these. Also, note that the bottom-up version is considerably faster than the top-down version (and sacrifices nothing in the branchy case either). Therefore, from now on we focus on the bottom-up version of the {\method{Segment-Tree}}. However, the branchy implementation of {\code{sum}} is faster than the branch-free one for larger values of $n$ (e.g., for $n>2^{25}$). This shift in the trend is due to the fact that the latency of a memory access dominates the cost of a pipeline flush for large $n$. In fact, when the data structures are sufficiently large, accesses to larger but \emph{slower} memory levels are involved: in this case, one can obtain faster running times by avoiding such accesses when unnecessary. The branchy implementation does so.
In particular, it performs a memory access \emph{only if} the condition of the branch is satisfied, thus saving roughly 50\% of the accesses for a random query workload compared to the branch-free implementation. This is also evident for {\code{update}}. All the different implementations of {\code{update}} perform an access at every iteration of their loop, and this is the reason why all the curves in Figure~\ref{fig:branchy_vs_branchfree:update} have a similar shape. \paragraph*{Two-Loop Traversal} In the light of the above considerations, we would actually like to combine the best of the ``two worlds'' by: executing branch-free code as long as the cost of a pipeline flush exceeds the cost of a memory access, which is the case when the accessed data resides in the smaller cache levels; executing branchy code otherwise, i.e., when the memory latency is the major cost in the running time that happens when the accesses are directed to slower memory levels. Now, the way the {\method{Segment-Tree}} is linearized in the \code{tree} array, i.e., the first $n-1$ positions of the array store the internal nodes in level order and the remaining $n$ store the leaves of the tree, is particularly suitable to achieve this. A node at depth $d$ has a $(1/2)^d$ probability of being accessed, for randomly distributed queries (root is at depth $d=0$). Therefore when we repeatedly traverse the tree, the first $L_1$ positions of the array that correspond to the top-most $\lceil \log_2 L_1 \rceil$ levels of the tree are kept in cache $L_1$; the following $L_2 - L_1$ positions are kept in $L_2$, and so on, where $L_k$ indicate the size of the $k$-th cache in data items (the ``64-bit Words'' column in Table~\ref{tab:caches} at page~\pageref{tab:caches}). If \code{T} is the size of the prefix of the array \code{tree} that fits in cache, it is intuitive that, as long as $\code{p} > \code{T}$, we should prefer the branchy code and save memory accesses; vice versa, when $\code{p} \leq \code{T}$, memory accesses are relatively cheap and we should opt for the branch-free code. The following code shows this approach. \lstinputlisting{segtree_bottomup_sum_two_loops.cpp} The value of the threshold \code{T} governs the number of loop iterations, out of $\lceil \log_2 n \rceil + 1$, that are executed in a branchy or branch-free manner. Its value intuitively depends on the size of the caches and the value of $n$. For smaller $n$ we would like to set \code{T} reasonably high to benefit from the branch-free code; vice versa for larger $n$, we would like to perform more branchy iterations. As a rule-of-thumb, we determined that a good choice of \code{T} is $L_2 - (n > L_3) \times (L_2 - L_1)$, which is equal to $L_1$ or $L_2$ if $n>L_3$ or not respectively. It remains to explain how we can apply the ``two-loop'' optimization we have just introduced to the {\code{update}} algorithm, since its current implementation \emph{always} executes an access per iteration. To reduce the number of accesses to memory, we modify the content of the internal nodes of the tree. If each node stores the sum of the leaves descending from its \emph{left} subtree only (rather than the sum of the elements covered by the left \emph{and} right segments), then a memory access is saved every time we do not have to update any node in the left subtree. We refer to this modification of the internal nodes as a \emph{left-sum} tree hereafter and show the corresponding implementation in Figure~\ref{code:segment_tree_bottomup_leftsum}. 
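In essence, with the left-sum layout, {\code{update}} needs to write an internal node only when it climbs up from a \emph{left} child. The following is a minimal branchy sketch of this idea, assuming that the leaves store the input values; the full implementation is the one given in Figure~\ref{code:segment_tree_bottomup_leftsum}.
\begin{lstlisting}
// Illustrative sketch of update(i, delta) on the left-sum tree.
uint64_t p = leaf(i);                // position of the i-th leaf
tree[p] += delta;                    // the leaf itself always changes
while (p != 0) {
    bool from_left = (p & 1) != 0;   // left children sit at odd positions
    p = (p - 1) / 2;                 // climb to the parent
    if (from_left) tree[p] += delta; // write only if leaf i descends from the left subtree
}
\end{lstlisting}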
Lastly, Figure~\ref{fig:leftsum_switch_on_level} illustrates the comparison between branchy/branch-free implementations of the regular and \emph{left-sum} {\method{Segment-Tree}}. As apparent from the plots, the left-sum variant with the two-loop optimization is the fastest {\method{Segment-Tree}} data structure because it combines the benefits of branch-free and branchy implementations. From now on, for further comparisons, we simply refer to this strategy as \textsf{ST} in the plots. \begin{figure}[t] \lstinputlisting{segment_tree_bottomup_leftsum.hpp} \caption{The bottom-up \emph{left-sum} {\method{Segment-Tree}} implementation that stores in each internal node the sum of the leaves descending from its left sub-tree. \label{code:segment_tree_bottomup_leftsum}} \end{figure} \begin{figure}[t] \centering \subfloat[{\code{sum}}]{ \includegraphics[scale=\myfigsize]{{results_and/sum_segment_tree_branchy_vs_branchless_leftsum}} } \subfloat[{\code{update}}]{ \includegraphics[scale=\myfigsize]{{results_and/update_segment_tree_branchy_vs_branchless_leftsum}} } \caption{Comparison between branchy/branch-free implementations of regular and \emph{left-sum} {\method{Segment-Tree}}. \label{fig:leftsum_switch_on_level}} \end{figure} \section{Segment-Tree vs. Fenwick-Tree}\label{sec:segment_tree_vs_fenwick_tree} In this section we compare the optimized versions of the {\method{Segment-Tree}} and the {\method{Fenwick-Tree}} -- respectively, the bottom-up left-sum {\method{Segment-Tree}} with two-loop traversal, and the modified {\method{Fenwick-Tree}} with reduced cache conflicts. A quick look at Figure~\ref{fig:segment_tree_vs_fenwick_tree} immediately reveals that the {\method{Fenwick-Tree}} outperforms the {\method{Segment-Tree}}. There are two different reasons that explain why the {\method{Fenwick-Tree}} is more efficient than the {\method{Segment-Tree}}. \begin{figure}[t] \centering \subfloat[{\code{sum}}]{ \includegraphics[scale=\myfigsize]{{results_and/sum_segment_tree_vs_fenwick_tree}} } \subfloat[{\code{update}}]{ \includegraphics[scale=\myfigsize]{{results_and/update_segment_tree_vs_fenwick_tree}} } \caption{The running times of {\code{sum}}/{\code{update}} for the {\method{Segment-Tree}} (ST) and the {\method{Fenwick-Tree}} ({\method{FT}}). \label{fig:segment_tree_vs_fenwick_tree}} \end{figure} \begin{enumerate} \item Although we have substantially simplified the code logic for traversing the {\method{Segment-Tree}} with the bottom-up navigation, the code for the {\method{Fenwick-Tree}} is even simpler and requires just a few arithmetic operations plus a memory access at each iteration of the loop. \item While both trees have height $\lceil \log_2 n \rceil + 1$, the number of traversed nodes per operation is different. The {\method{Segment-Tree}} \emph{always} traverses $\lceil \log_2 n \rceil + 1$ nodes, and that is also the number of iterations of the \texttt{while} loop. When the branch-free code is in play, $\lceil \log_2 n \rceil + 1$ memory accesses are performed as well. As we already explained, for the branchy implementation the number of memory accesses depends on the query: for randomly distributed queries we expect to perform one access every two iterations of the loop (i.e., half of the time we go left, half of the time we go right). Now, for a random integer $i$ between 0 and $n-1$, we expect approximately 50\% of the bits in its binary representation to be set.
Therefore, the number of loop iterations \emph{and} memory accesses the {\method{Fenwick-Tree}} is likely to perform is $\frac{1}{2} \lceil \log_2(n+1) \rceil + 1$, i.e., half of those performed by the {\method{Segment-Tree}}, regardless of the value of $n$. \end{enumerate} \emph{Both} factors contribute to the difference in efficiency between the two data structures. However, for small values of $n$, the number of performed instructions is the major contributor to the running time as both data structures fit in cache, thus memory accesses are relatively cheap (point 1). For example, when $n$ is $\approx 2^{16}$ and considering {\code{sum}} queries, the {\method{Fenwick-Tree}} code spends 58\% of the cycles spent by the {\method{Segment-Tree}} and executes nearly 1/3 of the instructions. It also executes half of the branches and misses 11\% of them. Moving towards larger $n$, the memory latency progressively becomes the dominant factor in the running time, thus favoring solutions that spare memory accesses (point 2). Considering again the number of cache misses for $n \approx 250 \times 10^6$ (excluding overheads during benchmarking), we determined that the {\method{Fenwick-Tree}} incurs 25\% fewer cache misses than the {\method{Segment-Tree}} for both {\code{sum}} and {\code{update}}. In conclusion, we will exclude the {\method{Segment-Tree}} from the experiments we are going to discuss in the rest of the paper and compare against the {\method{Fenwick-Tree}}. \section{The Fenwick-Tree}\label{sec:fentree} The solution we describe here is popularly known as the {\method{Fenwick-Tree}} (or \emph{Binary Indexed Tree}, BIT) after a paper by~\citet*{fenwick1994new}, although the underlying principle was introduced by~\citet*{ryabko1992fast}. The key idea is to exploit the fact that, as a natural number $i > 0$ can be expressed as a sum of distinct powers of 2, so {\code{sum}}/{\code{update}} operations can be implemented by considering array locations that are power-of-2 elements apart. For example, since $11 = 2^3+2^1+2^0=8+2+1$, then $\code{sum}(10)$ can be computed as $\var{tree}[2^3]+\var{tree}[2^3+2^1]+\var{tree}[2^3+2^1+2^0] = \var{tree}[8]+\var{tree}[10]+\var{tree}[11]$, where the array \var{tree} is computed from $A$ in some appropriate manner that we are going to illustrate soon. In this example, $\var{tree}[8]=\sum_{k=0}^{8-1} A[k]$, $\var{tree}[10]=A[8]+A[9]$, and $\var{tree}[11]=A[10]$. Now, let $\code{base}=0$. The array $\code{tree}[0..n+1)$ is obtained from $A$ with the following two-step algorithm. \begin{enumerate} \item Store in $\var{tree}[\code{base}+i]$ the quantity $\sum_{k=0}^{i-1} A[\code{base}+k]$ where $i = 2^j$, $j=0,1,2,\ldots$ \item For every sub-array $A[2^j..2^{j+1}-1)$, repeat step (1) with updated $\code{base}=2^j$. \end{enumerate} \begin{figure}[t] \centering \includegraphics[scale=1]{{images/interrogation}} \caption{An example of the array-like view \var{tree} of the {\method{Fenwick-Tree}}, along with its logical ``interrogation'' tree. The array \var{tree} is built from the input array $[$13, -1, 2, 23, -4, 231, 13, 5, 2, -88, -52, 0, 4, 90, 3, -12$]$. As in Figure~\ref{fig:segtree}, highlighted nodes belong to the root-to-leaf path that is traversed to answer {\code{sum}} for index 10. \label{fig:interrogation}} \end{figure} Figure~\ref{fig:interrogation} shows a tree-like view of one such array \var{tree}, built from the same array $A$ of size 16 shown in Figure~\ref{fig:segtree}.
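To fix ideas, the fragment below builds \var{tree} directly from this definition: each entry $\var{tree}[i]$, for $1 \leq i \leq n$, stores the sum of the $\textsf{lsb}(i)$ consecutive elements of $A$ ending at position $i-1$, where $\textsf{lsb}(i)$ denotes the lowest set bit of $i$ (compare with $\var{tree}[8]$, $\var{tree}[10]$, and $\var{tree}[11]$ in the example above). This $O(n \log n)$ sketch is for illustration only and is not necessarily the construction code used in our library.
\begin{lstlisting}
#include <cstdint>
#include <vector>

// Illustrative construction: tree[i] covers the lsb(i) elements of A
// ending at position i-1; tree[0] is a dummy entry.
std::vector<int64_t> build(std::vector<int64_t> const& A) {
    uint64_t n = A.size();
    std::vector<int64_t> tree(n + 1, 0);
    for (uint64_t i = 1; i <= n; ++i) {
        uint64_t lsb = i & (~i + 1);  // lowest set bit of i (same as i & -i)
        int64_t sum = 0;
        for (uint64_t k = i - lsb; k < i; ++k) sum += A[k];
        tree[i] = sum;
    }
    return tree;
}
\end{lstlisting}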
Note that position 0 is never used for ease of notation and $\var{tree}[0]=0$, thus \var{tree}'s actual size is $n+1$. Now, we said that during a {\code{sum}} query for index $i$ (or {\code{update}}) we only touch \var{tree}'s positions that are power-of-2 elements apart. It is not difficult to see that we have one such position for every bit set in the binary representation of $i+1$. Since $i+1$ goes from 1 to $n$, it needs $\lceil \log_2(n+1) \rceil$ bits to be represented, so this will also be the height of the {\method{Fenwick-Tree}}. (The depth of a node whose index is $i$ is the Hamming weight of $i$, i.e., the number of bits set in the binary representation of $i$.) Therefore, both queries and updates have a worst-case complexity of $O(\log n)$ and the array \var{tree} takes $n+1$ memory words. In the following we illustrate how to navigate the implicit tree structure in order to support {\code{sum}} and {\code{update}}. Let us consider the {\code{sum}} query. As intuitive from the given examples, when we have to answer $\code{sum}(i)$, we actually probe the array \var{tree} starting from index $\code{p}=i+1$. This is because index 0 is not a power of 2, thus it can be considered a dummy entry in the array \var{tree}. Now, let \code{p} be 11 as in the example of Figure~\ref{fig:interrogation}. The sequence of nodes' indexes touched during the traversal to compute the result is (bottom-up): $11 \rightarrow 10 \rightarrow 8 \rightarrow 0$. It is easy to see that we \emph{always} start from index $i+1$ and end with index 0 (the root of the tree). To navigate the tree bottom-up we need an efficient way of computing the index of the parent node from a given index \code{p}. This operation can be accomplished by \emph{clearing the least significant bit} (LSB) of the binary representation of \code{p}. For example, 11 is \bit{0101\underline{1}} in binary. We underline its LSB. If we clear the LSB, we get the bit configuration \bit{010\underline{1}0} which indeed corresponds to index 10. Clearing the LSB of 10 gives \bit{0\underline{1}000} which is 8. Finally, clearing the LSB of a power of 2 will always give 0, which is the index of the root. This operation, clearing the LSB of a given number \code{p}, can be implemented efficiently with \code{p \& (p - 1)}. Again, note that $\code{sum}(i)$ traverses a number of nodes equal to the number of bits set (plus 1) in $\code{p}=i+1$. The code given in Figure~\ref{code:fentree} illustrates the approach. \begin{figure}[t] \centering \includegraphics[scale=1]{{images/updating}} \caption{The logical ``updating'' tree for the same example shown in Figure~\ref{fig:interrogation}. The highlighted nodes belong to the root-to-leaf path that is traversed to perform {\code{update}} for index 10. \label{fig:updating}} \end{figure} The {\method{Fenwick-Tree}} can actually be viewed as the superimposition of two different trees. One tree is called the ``interrogation'' tree because it is used during {\code{sum}}, and it is shown in Figure~\ref{fig:interrogation}. The other tree is called the ``updating'' tree, and it is shown in Figure~\ref{fig:updating} instead. This tree consists of the very same nodes as the ``interrogation'' tree but with different child-to-parent relationships. In fact, starting from an index $\code{p} = i + 1$ and traversing the tree bottom-up we obtain the sequence of nodes that need to be updated when issuing $\code{update}(i,\Delta)$. Again for the example $i=10$, such sequence is $11 \rightarrow 12 \rightarrow 16$.
Starting from a given index \code{p}, this sequence can be obtained by isolating the LSB of \code{p} and adding it to \code{p} itself. The LSB can be isolated with the operation \code{p \& -p}. For $\code{update}(i,\Delta)$, the number of traversed nodes is at most the number of zeros above the LSB (plus 1) in the binary representation of $\code{p}=i+1$. The actual code for {\code{update}} is given in Figure~\ref{code:fentree}. \begin{figure}[t] \lstinputlisting{fenwick_tree.hpp} \caption{The {\method{Fenwick-Tree}} code. \label{code:fentree}} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.8]{{cache_usage_histograms/ft}} \caption{Number of distinct cache lines stored in each set of a $L_1$ 8-way set-associative cache, after running $10^4$ random {\code{sum}} queries with a {\method{Fenwick-Tree}} of size $n=10^7$. \label{fig:cache_usage_ft}} \end{figure} \paragraph*{Cache Conflicts} The indexing of the nodes in the {\method{Fenwick-Tree}} -- which, for every subtree, places the nodes on the same level at array locations that are power-of-2 elements apart -- induces a poor exploitation of the processor cache for large values of $n$. The problem comes from the fact that cache memories are typically $c$-way set-associative caches. The cache memory of the processor used for the experiments in this article is no exception. In such a cache architecture, a cache line must be stored \emph{in one} (and only one) set and, if many different cache lines must be stored in the same set, the set only has room for $c$ of these. In fact, when the set fills up, a cache line must be evicted from the set. If the evicted line is then accessed again during the computation, a cache miss is generated because the line is not in the cache anymore. Therefore, accessing (more than $c$) different memory lines that must be stored in the same cache set will induce cache misses. To understand why this happens in the {\method{Fenwick-Tree}}, let us consider how the set number is determined from a memory address $a$. Let $C$ be the total cache size in bytes, where each line spans 64 bytes. The cache lines are divided into $C/(c \times 64)$ sets, where each set can store a cache line in $c$ different possible ``ways''. For example, the $L_1$ cache of the processor used for the experiments in this article (see Table~\ref{tab:caches} at page~\pageref{tab:caches}) has 8 ways and a total of $C=\num{32768}$ bytes. Therefore there are $\num{32768} / (8 \times 64) = 64$ sets in the cache, for a total of 512 cache lines. The first 6 bits (0-5) of $a$ determine the offset into the cache line; the following 6 bits (6-11) specify the set number. Thus, the first line of a memory block is stored in set 0, the second line in set 1, etc. The 64-th line will be stored again in set 0, the 65-th line in set 1, etc. It is now easy to see that accessing memory addresses that are multiples of $64 \times 64 = \num{4096}$ bytes (a memory \emph{page}) apart is not cache efficient, since the lines will contend for the very same set. Therefore, accessing memory locations whose distance in memory is a large power of 2 (a multiple of $2^{12}=4096$) is not cache-friendly. Unfortunately, this is what happens in the {\method{Fenwick-Tree}} when $n$ is large. For example, all the nodes at the first level are stored at indexes that are powers of 2. Thus they will all map to set 0.
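The following small sketch summarizes this mapping for the $L_1$ cache just described (64-byte lines, 64 sets); it only serves to illustrate why array entries lying a multiple of 512 positions apart, such as the large powers of 2 that index the top levels of the tree, collide on the same set.
\begin{lstlisting}
#include <cstdint>

// Illustrative sketch: set index of a byte address, for a cache with
// 64-byte lines and 64 sets (the L1 parameters used in the text).
uint64_t cache_set(uint64_t address) {
    return (address >> 6) & 63;  // skip the 6 line-offset bits, take the next 6
}
// Two 64-bit entries tree[i] and tree[j] fall into the same set whenever
// their byte distance (i - j) * 8 is a multiple of 64 * 64 = 4096 bytes,
// i.e., whenever i - j is a multiple of 512.
\end{lstlisting}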
In Figure~\ref{fig:cache_usage_ft} we show the number of distinct cache lines that must be stored in each set of the $L_1$ cache (8-way set-associative with 64 sets), for $10^4$ random {\code{sum}} queries and $n=10^7$. For a total of $\approx$4$\times 10^4$ cache lines accessed, 29\% of these are stored in set 0. This highly skewed distribution is the source of cache inefficiency. (Updates exhibit the same distribution.) Instead, if all accesses were \emph{evenly} distributed among all sets, we would expect each set to contain $\approx$625 lines (1.56\%). This problem can be solved by inserting some \emph{holes} in the \code{tree} array~\cite{vigna2019}, one every $d$ positions, to let a node whose position is $i$ in the original array be placed at position $i+\lfloor i/d \rfloor$. This only requires the \var{tree} array to be enlarged by $\lfloor n/d \rfloor$ words and the index of every node to be recalculated accordingly. If $d$ is chosen to be a sufficiently large constant, e.g., $2^{14}$, then the extra space is very small. In Figure~\ref{fig:cache_usage} we show the result of the same experiment as in Figure~\ref{fig:cache_usage_ft}, but after the modification to the \var{tree} array and the new indexing of the nodes. As is evident, now every cache set is equally used. (For comparison, we also report the behavior of the {\method{Segment-Tree}} to confirm that it also makes good use of the cache.) \begin{figure}[t] \centering \subfloat[modified {\method{Fenwick-Tree}}]{ \includegraphics[scale=0.8]{{cache_usage_histograms/ft_holes}} } \subfloat[{\method{Segment-Tree}}]{ \includegraphics[scale=0.8]{{cache_usage_histograms/st_bu}} } \caption{The same experiment as performed in Figure~\ref{fig:cache_usage_ft}, but after reducing the cache conflicts in the {\method{Fenwick-Tree}} (modified). For comparison, we also report the cache usage of the {\method{Segment-Tree}}. \label{fig:cache_usage}} \end{figure} \begin{figure}[t] \centering \subfloat[{\code{sum}}]{ \includegraphics[scale=\myfigsize]{{results_and/sum_fenwick_tree}} } \subfloat[{\code{update}}]{ \includegraphics[scale=\myfigsize]{{results_and/update_fenwick_tree}} } \caption{The running times of {\code{sum}}/{\code{update}} for the regular and modified {\method{Fenwick-Tree}} ({\method{FT}}). \label{fig:fenwick_tree}} \end{figure} Lastly, in Figure~\ref{fig:fenwick_tree} we show the comparison between the regular {\method{Fenwick-Tree}} and the modified version as described above. As expected, the two curves are very similar up to $n=2^{20}$; then the cache conflicts start to play a significant role in the behavior of the regular {\method{Fenwick-Tree}}, making the difference between the two curves widen progressively. As an example, collecting the number of cache misses using Linux \textsf{perf} for $n \approx 250 \times 10^6$ (excluding those spent during the construction of the data structures and generation of the queries) indeed reveals that the regular version incurs $2.5\times$ the cache misses of the modified version, for both {\code{sum}} and {\code{update}}. From now on, we simply refer to the modified {\method{Fenwick-Tree}} \emph{without} the problem of cache conflicts as \textsf{FT} in the plots for further comparisons. \subsection{The $b$-ary {\method{Fenwick-Tree}}}\label{sec:fentree_bary} The classic {\method{Fenwick-Tree}} we described in Section~\ref{sec:fentree} exploits the base-2 representation of a number in $[0,n)$ to support {\code{sum}} and {\code{update}} in time $O(\log n)$.
If we change the base of the representation to a value $b>2$, the corresponding $b$-ary {\method{Fenwick-Tree}} can be defined -- a data structure supporting {\code{sum}} in $O(\log_b n)$ and {\code{update}} in $O(b\log_b n)$. A pictorial representation of the data structure is given in Figure~\ref{fig:fentree_shape} for $b=4$ and $n=64$. \citet{bille2017succinct} quickly sketched this data structure from a theoretical point of view. What we show here is our own definition of it along with a practical implementation and a comparison against different alternatives. \vspace{0.125cm} First, we illustrate how the data structure can be built from an input array $A[0..n)$. As for the classic {\method{Fenwick-Tree}} with $b=2$, also in this case the result of the procedure is an array, say $F$, of $n+1$ memory words (the first position is not used). We call a \emph{partition} of $A[\ell..r)$ according to the endpoints $\ell = r_0 < r_1 < r_2 < \ldots < r$ the set of sub-arrays $\{A[r_{i-1}..r_i)\}_i$. With this definition in mind, the data structure is obtained via the following two-step algorithm. Let $\ell=0$ and $r=n$ at the beginning. \begin{enumerate} \item Partition $A[\ell..r)$ according to the endpoints $r_i = b^j + k b^j$, for $j=0,1,2,\ldots$, $k=0..b-2$, and $i=b^j+k$. Set $r_i = n$ if $r_i > n$. Store in $F[\ell+r_i]$ the quantity $\sum_{t=0}^{r_i-1} A[\ell+t]$, for each $r_i$. \item Set $\ell=r_{i-1}+1$, $r=r_i$, and repeat step (1) for each $r_i$. \end{enumerate} Setting $r_i = n$ if $r_i > n$ in step (1) correctly prevents out-of-bound accesses to $A$ and is needed to handle a generic value of $n$ (not just the case where $n$ is a power of $b$). The recursive \code{build} method of the class shown in Figure~\ref{code:fentree_bary} implements this algorithm. The way we interleave the computation of a prefix sum and the recursive call to the method guarantees that at most $O(n)$ time is spent per level, for a total of $O(n \log_b n)$ time. In particular, we first use this procedure to build the $b$-ary structure into a temporary array, \code{tmp}; then, we block $b$ elements together and build a \code{Node} data structure out of these. It follows that the total space is $(\lceil n/b \rceil + 1) \times \Theta(b) = \Theta(n+b)$ which is always $\Theta(n)$ because $b=O(n)$. \vspace{0.125cm} Let us now consider how to answer a ${\code{sum}}(i)$ query, whose code is shown in Figure~\ref{code:fentree_bary_sum_update}. As for the regular {\method{Fenwick-Tree}}, we actually query the data structure for index $i+1$ but, instead of considering the binary representation of $i+1$, we consider its $b$-ary representation. Given an index $i$ in $[0,n)$, its representation in base $b$ is $\sum_{k=0}^{\lceil \log_b i \rceil} d_k b^k$, where $0 \leq d_k < b$ is the $k$-th digit of the representation. It follows that each digit indicates a node in the tree that we access to answer the query. For example, if $i=37$ in Figure~\ref{fig:fentree_shape} and $b=4$, the representation of $37+1$ in base 4 is \bit{212}. Therefore $\code{sum}(37)$ will be $\code{tree}[2 \times 4^2] + \code{tree}[2 \times 4^2 + 1 \times 4^1] + \code{tree}[2 \times 4^2 + 1 \times 4^1 + 2 \times 4^0]$.
In particular, note that the first digit $d_0$ always corresponds to an \emph{offset} into the $(i+1)$-th block; all other digits correspond to blocks from which we have to \emph{access} the last element (method \code{back} invoked on the object \code{Node} in Figure~\ref{code:fentree_bary_sum_update}) since it already holds the sum of the elements in the block. Lastly, we need an efficient way of obtaining the $b$-ary representation of a number in $[1,n]$. If we assume that $b$ is a power of 2, then the digit $d_k$ can be directly obtained from the \emph{binary} representation of $i$ by reading its $k$-th segment of $\log_2 b$ bits (starting from the right). In conclusion, since we have $\lceil \lceil\log_2(n+1)\rceil / \log_2 b\rceil$ digits to consider, this will also be the height of the tree. Each digit corresponds to a node in the tree and we spend $O(1)$ per node, hence {\code{sum}} is supported in $O(\log_b n)$ worst-case time as for the $b$-ary {\method{Segment-Tree}}. Furthermore, although Figure~\ref{code:fentree_bary_sum_update} shows a loop-based implementation of {\code{sum}}, the \code{if-constexpr} optimization we explained in Section~\ref{sec:segtree_bary} -- a specialized code path for each possible value of the tree \code{Height} -- also applies to this case and such optimized version is what we tested in the experiments. As a last note, recall that the {\method{Fenwick-Tree}} ignores digits that are 0. We already commented on this for the case $b=2$ in Section~\ref{sec:segment_tree_vs_fenwick_tree}: for random indexes, this gives a consistent boost over the {\method{Segment-Tree}} because roughly 50\% of the levels are skipped, \emph{as if} the height of the tree were actually $\frac{1}{2}\lceil\log_2(n+1)\rceil$. However, this advantage does not carry over for larger $b$ because the probability that a digit is 0 is $1/b$ which is very low for the values of $b$ we consider in our experimental analysis (64 and 256). Not surprisingly, the $b$-ary {\method{Fenwick-Tree}} is not faster than the $b$-ary {\method{Segment-Tree}}, although it gains a significant advantage compared to the classic {\method{Fenwick-Tree}} with $b=2$. \vspace{0.125cm} More problematic is the case for {\code{update}}, whose implementation is shown in Figure~\ref{code:fentree_bary_sum_update}. Observe that the endpoints defined during step (1) do not only comprise powers of $b$: for each interval $[b^j,b^{j+1}]$ we add $b-2$ more endpoints, so that the interval is partitioned into $b-1$ sub-intervals of size $b^j$: one for each digit from 1 to $b-1$. Having to deal with more digits clearly slows down {\code{update}} that needs to access $b-1$ nodes per level, for a complexity of $O(b\log_b n)$. Observe that $(b-1)\frac{\log_2 n}{\log_2 b}$ is more than $\log_2 n$ for every $b>2$. For example, for $b=64$ we can expect a slowdown of more than $10\times$ (and nearly $32\times$ for $b=256$). Experimental numbers confirm this analysis. Note that the $b$-ary {\method{Segment-Tree}} does much better than this because: (1) it traverses $\lceil \log_b n \rceil$ nodes for operation, (2) the $b$ keys to update per node are contiguous in memory for a better cache usage and, hence, (3) are amenable to the SIMD optimizations we described in Section~\ref{sec:nodes}. 
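For completeness, when $b$ is a power of 2 the digit extraction used by both operations can be sketched as follows (an illustrative fragment, not the code of Figure~\ref{code:fentree_bary_sum_update}).
\begin{lstlisting}
#include <cstdint>

// k-th digit (from the right) of i in base b = 2^log2_b. For i = 38 and
// b = 4: digit(38,0,2) = 2, digit(38,1,2) = 1, digit(38,2,2) = 2, i.e.,
// the representation 212 used in the sum(37) example above.
uint64_t digit(uint64_t i, uint64_t k, uint64_t log2_b) {
    return (i >> (k * log2_b)) & ((uint64_t(1) << log2_b) - 1);
}
\end{lstlisting}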
\vspace{0.125cm} In conclusion, albeit attractive from a theoretical point of view, we do not regard the $b$-ary {\method{Fenwick-Tree}} as a promising solution to the Prefix-Sum Problem because the trade-off between the running time of {\code{sum}} and {\code{update}} is not even as good as that of the classic {\method{Fenwick-Tree}} with $b=2$. \section{ACKNOWLEDGMENTS} This work was partially supported by the BigDataGrapes (EU H2020 RIA, grant agreement N\textsuperscript{\b{o}}780751), the ``Algorithms, Data Structures and Combinatorics for Machine Learning'' (MIUR-PRIN 2017), and the OK-INSAID (MIUR-PON 2018, grant agreement N\textsuperscript{\b{o}}ARS01\_00917) projects. \renewcommand{\bibsep}{3.0pt} \bibliographystyle{ACM-Reference-Format} \subsection{Prefix Sums on Small Arrays: SIMD-Aware Node Layouts}\label{sec:nodes} \begin{figure}[t] \lstinputlisting{update4.cpp} \caption{A SIMD-based implementation of {\code{update}} for an array \var{keys} of 4 $\times$ 64-bit integers. The table \var{T} must be aligned on a 32-byte boundary. \label{alg:update4}} \end{figure} A reasonable starting point would be to consider $b = 4$ keys and compute their prefix sums in an array $\var{keys}[0..4)$. Queries are answered in the obvious way: $\code{sum}(i)=\var{keys}[i]$, and we do not explicitly mention it anymore in the following. Instead, an $\code{update}(i,\Delta)$ operation is solved by setting $\var{keys}[j] = \var{keys}[j] + \Delta$ for $j = i..3$. We can do this in parallel with SIMD as follows. Let \var{U} be a register of $4 \times 64$ bits that packs 4 integers, where the first $i - 1$ integers are 0 and the remaining ones, from $j = i..3$, are equal to $\Delta$. Then $\code{update}(i,\Delta)$ is achieved by adding \var{U} and \var{keys} in parallel using the instruction \code{\_mm256\_add\_epi64}. To obtain the register \var{U} as desired, we first initialize \var{U} with 4 copies of $\Delta$ using the instruction \code{\_mm256\_set1\_epi64x}. Now, the copies of $\Delta$ before the $i$-th must be masked out. To do so, we use the index $i$ to perform a lookup into a small pre-computed table \var{T}, where $\var{T}[i]$ is the proper 256-bit mask. In this case, \var{T} is a $4 \times 4$ table of unsigned 64-bit integers, where $\var{T}[0][0..3] = [0,2^{64}-1,2^{64}-1,2^{64}-1]$, $\var{T}[1][0..3] = [0,0,2^{64}-1,2^{64}-1]$, $\var{T}[2][0..3] = [0,0,0,2^{64}-1]$, and $\var{T}[3][0..3] = [0,0,0,0]$. Once we loaded the proper mask, we obtain the wanted register configuration with $\var{U} = \code{\_mm256\_and\_si256}(\var{U},\var{T}[i])$. The code\footnote{For all the code listings we show in this section, it is assumed that the size in bytes of (\code{u})\code{int64\_t}, (\code{u})\code{int32\_t}, and (\code{u})\code{int16\_t} is 8, 4, and 2, respectively.} for this algorithm is shown in Figure~\ref{alg:update4}. An important observation is in order, before proceeding. One could be tempted to leave the integers in $\var{keys}[0..4)$ as they are in order to obtain trivial updates and use SIMD instructions to answer {\code{sum}}. During our experiments we determined that this solution gives a \emph{worse} trade-off than the one described above: this was actually no surprise, considering that the algorithm for computing prefix sums with SIMD is complicated as it involves several shifts and additions (besides load and store). Therefore, SIMD is more effective on updates rather than queries. 
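For reference, the scalar counterpart of this small-array layout, for a node of $b=4$ keys kept in prefix-sum, is simply the following sketch; the vectorized code in Figure~\ref{alg:update4} performs the same per-key additions of {\code{update}} in parallel, as described above.
\begin{lstlisting}
#include <cstdint>

// Illustrative scalar baseline for b = 4 keys kept in prefix-sum:
// sum is a single read; update touches the keys from position i onward.
struct node4_scalar {
    int64_t keys[4];  // keys[j] = A[0] + ... + A[j]
    int64_t sum(uint64_t i) const { return keys[i]; }
    void update(uint64_t i, int64_t delta) {
        for (uint64_t j = i; j < 4; ++j) keys[j] += delta;
    }
};
\end{lstlisting}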
\paragraph*{Two-Level Data Structure} As already motivated, we would like to consider larger values of $b$, e.g., 64 or 256, in order to obtain even flatter trees. Working with such branching factors, would mean to apply the algorithm in Figure~\ref{alg:update4} for $b/4$ times, which may be too slow. To alleviate this issue, we use a two-level data structure. We split $b$ into $\sqrt{b}$ segments and store each segment in prefix-sum. The sum of the integers in the $j$-th segment is stored in a \var{summary} array of $\sqrt{b}$ values in position $j + 1$, for $j = 0..\sqrt{b}-2$ ($\var{summary}[0] = 0$). The values of the \var{summary} are stored in prefix-sum as well. In Figure~\ref{fig:node16} we show a graphical example of this organization for $b=16$. In the example, we apply the algorithm in Figure~\ref{alg:update4} only twice (first on the \code{summary}, then on a specific segment of \code{keys}): without the two-level organization, 4 executions of the algorithm would have been needed. Instead, queries are as easy as $\code{sum}(i) = \var{summary}[i / \sqrt{b}] + \var{keys}[i]$. The code corresponding to this approach for $b=64$ is shown in Figure~\ref{alg:update8}\footnote{The \code{build} method of the \code{node} class builds the two-level data structure as we explained above, and writes \code{node::size} bytes onto the output buffer, \code{out}. We do not report it for conciseness.}. In this case, note that the whole \var{summary} fits in one cache line, because its size is $8 \times 8 = 64$ bytes, as well as each segment of \var{keys}. The table \var{T} stores $(8 + 1) \times 8$ 64-bit unsigned values, where $\var{T}[i][0..7]$ is a an array of 8 integers: the first $i$ are 0 and the other $8-i$ are $2^{64}-1$, for $i=0..8$. Lastly, the space overhead due to the \var{summary} array is always $1/\sqrt{b} \times 100\%$. For the example code in Figure~\ref{alg:update8}, the space consumed is 12.5\% more than that of the input array (576 bytes consumed instead of $64 \times 8 = 512$). \begin{figure}[t] \centering \includegraphics[scale=1 ]{images/node16} \caption{A two-level node layout built from the array $[$13, -1, 2, 23, -4, 231, 13, 5, 2, -88, -52, 0, 4, 90, 3, -12$]$. The shaded arrays represent the content of the registers used when executing the operation $\code{update}(9,-37)$. \label{fig:node16}} \end{figure} \begin{figure}[t] \lstinputlisting{node64.hpp} \caption{Code for a two-level node data structure with $b=64$. \label{alg:update8}} \end{figure} \begin{figure}[t] \lstinputlisting{node64_restricted.hpp} \caption{Code for a two-level node data structure with $b=64$, for the ``restricted'' case where $\Delta$ is a signed 8-bit integer. \label{alg:update8_restricted}} \end{figure} \paragraph*{Handling Small Updates} It is intuitive that one can obtain faster results for {\code{update}} if the bit-width of $\Delta$ is smaller than 64. In fact, a restriction on the possible values $\Delta$ can take permits to ``pack'' more such values inside a SIMD register which, in turn, allows to update a larger number of keys in parallel. As we already observed, neither the binary {\method{Segment-Tree}} nor the {\method{Fenwick-Tree}} can possibly take advantage of this restriction. To exploit this possibility, we buffer the updates. We restrict the bit-width of $\Delta$ to 8, that is, $\Delta$ is now a value in the range $[-128, +127]$ (instead of the generic, un-restricted, case of $\Delta \in [-2^{63},+2^{63}-1]$). 
We enrich the two-level node layout introduced before with some additional arrays to buffer the updates. These arrays are made of 16-bit signed integers. We maintain one such array of size $\sqrt{b}$, say \var{summary\_buffer}, plus another of size $b$, \var{keys\_buffer}. In particular, the $i$-th value of \var{keys\_buffer} holds the $\Delta$ value for the $i$-th key; similarly, the $(j+1)$-th value of \var{summary\_buffer} holds the $\Delta$ value for the $j$-th segment. The buffers are kept in prefix-sum. Upon an $\code{update}(i,\Delta)$ operation, the buffer for the summary and that of the specific segment comprising $i$ are updated using SIMD. The key difference now is that -- because we work with smaller integers -- 8, or even 16, integers are updated simultaneously, instead of only 4 as illustrated with the code in Figure~\ref{alg:update4}. For example, suppose $b=64$. The whole \var{summary\_buffer}, which consists of $16 \times 8 = 128$ bits, fits into one SSE SIMD register, thus 8 integers are updated simultaneously. For $b=256$, 16 integers are updated simultaneously because $16 \times 16 = 256$ bits again fit into one AVX SIMD register. This makes a big improvement with respect to the un-restricted case because instead of executing the algorithm in Figure~\ref{alg:update4} for $\sqrt{b}/4$ times, \emph{only one} such update is sufficient. This potentially makes the restricted case $2\times$ and $4\times$ faster than the un-restricted case for $b=64$ and $b=256$ respectively. To avoid overflow issues, we bring the \var{keys} (and the \var{summary}) up-to-date by reflecting on these the updates stored in the buffer. Since we can perform a maximum of $-2^{15}/-128 = 256$ updates before overflowing, we clean the buffers every 256 {\code{update}} operations. The solution described here holds, therefore, in the amortized sense\footnote{Note that such ``cleaning'' operation will become less and less frequent the deeper a node in the tree hierarchy. Thus, scalar code is efficient to perform this operation as vectorization would negligibly affect the running time. }. Figure~\ref{alg:update8_restricted} contains the relevant code illustrating this approach. The code should be read as the ``restricted'' variant of that shown in Figure~\ref{alg:update8}. We count the number of updates with a single 8-bit unsigned integer, \var{updates}, which is initialized to 255 in the \code{build} method. When this variable is equal to 255, it will overflow the next time it is incremented by 1, indeed making it equal to 0. Therefore we correctly clean the buffers every 256 updates. In this case, the table \var{T} stores $(16+1) \times 16$ 16-bit unsigned values, where each $\var{T}[i][0..15]$ contains a prefix of $i$ zeros, followed by $16-i$ copies of $2^{16}-1$, for $i=0..16$. Also, {\code{sum}} queries are now answered by computing the sum between four quantities, which is actually more expensive than the queries for the general case. The data structure consumes $(2b + 10\sqrt{b} + 1)/(8b) \times 100\%$ more bytes than the input. For $b=64$, as in the code given in Figure~\ref{alg:update8_restricted}, this extra space is 40.8\%; for $b=256$, it is 32.9\%. \section{Introduction}\label{sec:introduction} The \emph{prefix-sum problem} is defined as follows. Given an array $A[0..n)$ of integers and an index $0 \leq i < n$, we are asked to support the following three operations as efficiently as possible. \begin{itemize} \item $\code{sum}(i)$ returns the quantity $\sum_{k=0}^i A[k]$. 
\item $\code{update}(i,\Delta)$ sets $A[i]$ to $A[i]+\Delta$, where $\Delta$ is a quantity that fits in $\delta$ bits. \item $\code{access}(i)$ returns $A[i]$. \end{itemize} (Note that $\code{access}(i)$ can be computed as $\code{sum}(i)-\code{sum}(i-1)$ for $i > 0$ and $\code{access}(0) = \code{sum}(0)$. Therefore, we do not consider this operation in the following. Also, a \emph{range-sum} query $\code{sum}(i,j)$, asking for the sum of the elements in $A[i..j]$, is computed as $\code{sum}(j) - \code{sum}(i-1)$.) It is an icon problem in data structure design and has been studied rather extensively from a theoretical point of view~\cite{FS89,yao1985complexity,hampapuram1998optimal,dietz1989optimal,raman2001succinct,hon2011succinct,patrascu2006logarithmic,chan2010counting,BCGSVV15,bille2017succinct} given its applicability to many areas of computing, such as coding, databases, parallel programming, dynamic data structures and others~\cite{blelloch1990pre}. For example, one of the most notable practical applications of this problem is for \emph{on-line analytical processing} (OLAP) in databases. An OLAP system relies on the popular \emph{data cube} model~\cite{gray1997data}, where the data is represented as a $d$-dimensional array. To answer an aggregate query on the data cube, a prefix-sum query is formulated (see the book edited by~\citet{toth2017handbook} -- Chapter 40, \emph{Range Searching} by P.K. Agarwal). \paragraph*{Scope of this Work and Overview of Solutions} Despite the many theoretical results, that we review in Section~\ref{sec:related_work}, literature about the prefix-sum problem lacks a thorough experimental investigation which is our concern with this work. We aim at determining the fastest single-core solution using \emph{commodity hardware and software}, that is, a recent manufactured processor with commodity architectural features (such as pipelined instructions, branch prediction, cache hierarchy, and SIMD instructions~\cite{SIMDIntel}), executing C++ compiled with a recent optimizing compiler. (We do not take into account parallel algorithms here, nor solutions devised for specialized/dedicated hardware that would limit the usability of our software. We will better describe our experimental setup in Section~\ref{sec:setup}.) As a warm-up, let us now consider two trivial solutions to the problem that will help reasoning about the different trade-offs. A first option is to leave $A$ as given. It follows that queries are supported in $O(n)$ by scanning the array and updates take $O(1)$ time. Otherwise, we can pre-compute the result of $\code{sum}(i)$ and save it in $A[i]$, for every $i$. Then, we have queries supported in $O(1)$, but updates in $O(n)$. These two solutions represent opposite extreme cases: the first achieving fastest {\code{update}} but slowest {\code{sum}}, the second achieving fastest {\code{sum}} but slowest {\code{update}}. {(They coincide only when $n$ is bounded by a constant.)} However, it is desirable to have a balance between the running times of {\code{sum}} and {\code{update}}. One such trade-off can be obtained by tree-shaped data structures, whose analysis is the scope of our paper. An elegant solution is to superimpose on $A$ a balanced binary tree. The leaves of the tree store the elements of $A$, whereas the internal nodes store the sum of the elements of $A$ descending from the left and right sub-trees. 
As the tree is balanced and has height $\lceil \log_2 n \rceil + 1$, it follows that both {\code{sum}} and {\code{update}} translate into tree traversals with $O(1)$ time spent per level. This data structure -- called {\method{Segment-Tree}}~\cite{bentley1977solutions} -- guarantees $O(\log n)$ for both queries and updates. Another tree layout having $O(\log n)$ complexity is the {\method{Fenwick-Tree}}~\cite{fenwick1994new}. Unlike the {\method{Segment-Tree}}, the {\method{Fenwick-Tree}} is an \emph{implicit} data structure, i.e., it consumes exactly $n+1$ memory words for representing $A$ in some appropriate manner. It is not a binary tree and exploits bit-level programming tricks to traverse the implicit tree structure. Interestingly, it turns out that a logarithmic complexity is optimal for the problem {when $\delta$ is as large as the machine word~\cite{patrascu2006logarithmic}.} Thus, it is interesting to design efficient implementations of both the {\method{Segment-Tree}} and the {\method{Fenwick-Tree}} to understand what can be achieved in practice. Furthermore, it is natural to generalize the {\method{Segment-Tree}} to become $b$-ary and have a height of $\lceil \log_b n \rceil$, with $b>2$: an internal node stores an array $B$ of $b-1$ values, where $B[i]$ is the sum of the elements of $A$ covered by the sub-tree rooted in its $i$-th child. Now, we are concerned with solving a smaller instance of the original problem, i.e., the one having $B$ as input array. According to the solution adopted for the ``small array'', different complexities can be achieved. To give an idea, one could just adopt one of the two trivial solutions discussed above. If we leave the elements of $B$ as they are, then we obtain updates in $O(\log_b n)$ and queries in $O(b \log_b n)$. Conversely, if we pre-compute the answers to {\code{sum}}, we have $O(b \log_b n)$ for updates and $O(\log_b n)$ for queries. As a matter of fact, essentially all theoretical constructions are variations of a $b$-ary {\method{Segment-Tree}}~\cite{dietz1989optimal,raman2001succinct,hon2011succinct,patrascu2006logarithmic}. An efficient implementation of such a data structure is, therefore, not only interesting for this reason but also particularly appealing in practice because it opens the possibility of using SIMD instructions to process in parallel the $b$ keys stored at each node of the tree. Lastly, the {\method{Fenwick-Tree}} also extends to branching factors larger than 2, but in a less obvious way that we discuss later in the paper.
Very importantly, we remark that optimizing the {\method{Segment-Tree}} is not only relevant for the prefix-sum problem, because this data structure can also be used to solve several other problems in computational geometry, such as \emph{range-min/max queries}, \emph{rectangle intersection}, \emph{point location}, and \emph{three-sided queries}. (See the book by~\citet{BergCKO08} and references therein for an introduction to such problems.) Thus, the contents of this paper can be adapted to solve these problems as well. To better support our exposition, we directly show (almost) full C++ implementations of the data structures, in order to guide the reader into a deep performance tuning of the software. We made our best to guarantee that the presented code results compact and easy to understand but without, for any reason, sacrificing its efficiency. The whole code used in the article is freely available at \url{https://github.com/jermp/psds}, with detailed instructions on how to run the benchmarks and reproduce the results. \section{Conclusions and future work}\label{sec:conclusions} We described, implemented, and studied the practical performance of several tree-shaped data structures to solve the \emph{prefix-sum problem}. After a careful experimental analysis, the following take-away lessons are formulated. \begin{enumerate} \item (Section~\ref{sec:segtree}) A bottom-up traversal of the {\method{Segment-Tree}} has a much simpler implementation compared to top-down, resulting in a faster execution of both {\code{sum}} and {\code{update}}. \item (Section~\ref{sec:segtree}) A branch-free implementation of the {\method{Segment-Tree}} is on average $2\times$ faster than a branchy implementation for all array sizes, $n$, up to $2^{25}$. This is so because the processor's pipeline is not stalled due to branches, for a consequent increase in the instruction throughput. For $n > 2^{25}$, the branchy code is faster because it saves memory accesses -- the dominant cost in the runtime for large values of $n$. In particular, the \emph{combination} of branchy and branch-free code execution -- what we called the \emph{two-loop} optimization -- results in a better runtime. Taking this into account, we recommend a version of the bottom-up {\method{Segment-Tree}} where an internal node holds the sum of the leaves descending from its \emph{left} subtree, because it allows the use of the two-loop optimization for {\code{update}} as well (and not only for {\code{sum}}). \item (Section~\ref{sec:fentree}) The {\method{Fenwick-Tree}} suffers from cache conflicts for larger values of $n$, caused by accessing nodes whose memory address is (almost) a large power of 2. Solving this issue, by offsetting a node from an original position $i$ to a new position $i + \lfloor i/d \rfloor$ for some $d>0$, significantly improves the performance of the {\method{Fenwick-Tree}}. \begin{table} \centering \caption{Average speedup factors achieved by the {\method{Fenwick-Tree}} over the {\method{Segment-Tree}}. \label{tab:speedups_ft}} \scalebox{1.0}{\input{tables/speedups_ft.tex}} \end{table} \item (Section~\ref{sec:segment_tree_vs_fenwick_tree}) The {\method{Fenwick-Tree}} is more efficient than (our optimized version of) the {\method{Segment-Tree}} for both queries and updates. The better efficiency is due to the simplicity of the code and the less (50\%) average number of loop iterations. 
We summarize the speedups achieved by the {\method{Fenwick-Tree}} over the {\method{Segment-Tree}} in Table~\ref{tab:speedups_ft}, for different ranges of $n$. \item (Section~\ref{sec:segtree_bary}) Although the {\method{Segment-Tree}} is outperformed by the {\method{Fenwick-Tree}}, we can enlarge its branching factor to a generic quantity $b>2$: this reduces the height of the {\method{Segment-Tree}} for a better cache usage and enables the use of SIMD instructions. Such instructions are very effective to lower the running time of {\code{update}}, because several values per node can be updated in parallel. Compared to the scalar {\code{update}} algorithm, SIMD can be on average $2-6\times$ faster depending of the value of $n$. \item (Section~\ref{sec:segtree_bary}) For the best of performance, we recommend to model the height of the $b$-ary {\method{Segment-Tree}} with a constant known at compile-time, so that the compiler can execute a specialized code path to handle the specific value of the height. This completely avoids branches during the execution of {\code{sum}} and {\code{update}}. The vectorized {\code{update}} implementation together with this branch-avoiding optimization makes the $b$-ary {\method{Segment-Tree}} actually faster than the {\method{Fenwick-Tree}}. We report the speedups achieved by this data structure over the {\method{Fenwick-Tree}} in Table~\ref{tab:speedups_sst}, for the different tested combinations of $b$ and $\delta$. (Recall that $\delta$ represents the bit-width of $\Delta$, the update value.) \item (Section~\ref{sec:segtree_bary}) For the most general case where $\delta=64$ bits, we recommend the use of a $b$-ary {\method{Segment-Tree}} with $b=64$. From Table~\ref{tab:speedups_sst}, we see that this solution offers an improved trade-off between the running time of {\code{sum}} and {\code{update}}: on average $1.9-5\times$ faster for {\code{sum}} and up to $1.6\times$ faster for {\code{update}} than the {\method{Fenwick-Tree}}. \item (Section~\ref{sec:segtree_bary}) For the restricted case where $\delta=8$ bits, we can update even more values in parallel by \emph{buffering} the updates at each node of the tree. Considering again Table~\ref{tab:speedups_sst}, for such case we recommend the use of a $b$-ary {\method{Segment-Tree}} with $b=256$. This solution is faster than a {\method{Fenwick-Tree}} by $1.7-4.7\times$ for {\code{sum}} and by $1.4-2.5\times$ for {\code{update}}. \item (Section~\ref{sec:fentree_bary}) The $b$-ary {\method{Fenwick-Tree}} improves the runtime for queries at the price of a significant slowdown for updates, compared to the classic {\method{Fenwick-Tree}} with $b=2$. This makes the data structure unpractical unless updates are extremely few compared to queries. \item (Section~\ref{sec:fentree_bary}) Despite of the larger tree height, the \emph{blocked} {\method{Fenwick-Tree}} improves the trade-off of the $b$-ary {\method{Fenwick-Tree}} (in particular, it improves {\code{update}} but worsens {\code{sum}}). However, it does not beat the classic {\method{Fenwick-Tree}} because the time spent at each traversed node is much higher (more cache-misses due to the two-level structure of a node; more spent cycles due to SIMD). \item (Section~\ref{sec:fentree_bary}) In order to combine the simplicity of the {\method{Fenwick-Tree}} with the advantages of blocking $b$ keys together (reduced cache-misses; SIMD exploitation), a \emph{truncated} {\method{Fenwick-Tree}} can be used. 
This data structure improves over the classic {\method{Fenwick-Tree}}, especially for large values of $n$, exposing a trade-off similar to that of the $b$-ary {\method{Segment-Tree}} (but with the latter being generally better). \end{enumerate} \begin{table} \centering \caption{Average speedup factors achieved by the $b$-ary {\method{Segment-Tree}} over the {\method{Fenwick-Tree}}. \label{tab:speedups_sst}} \subfloat[{\code{sum}}]{ \scalebox{1}{\input{tables/speedups_sum_bary_segtree.tex}} } \subfloat[{\code{update}}]{ \scalebox{1}{\input{tables/speedups_update_bary_segtree.tex}} \label{tab:speedups_sst_b} } \end{table} Some final remarks follow. The runtime of {\code{update}} will improve as SIMD instructions become more powerful in future years (e.g., with lower latency), thus SIMD is a very promising hardware feature that cannot be overlooked in the design of practical algorithms. So far, we obtained the best results using SIMD registers of 128 and 256 bits (SSE and AVX instruction sets, respectively). We also experimented with the new AVX-512 instruction set which allows us to use massive load and store instructions comprising 512 bits of memory. In particular, doubling the size of the used SIMD registers can be exploited in two different ways: by either reducing the number of instructions, or enlarging the branching factor of a node. We tried both possibilities but did not observe a clear improvement. A promising avenue for future work would be to consider the \emph{searchable} version of the problem, i.e., to implement the {\code{search}} operation. Note that this operation is actually amenable to SIMD vectorization, and has parallels with search algorithms in inverted indexes. With this third operation, the objective would be to use (a specialization of) the best data structure from this article -- the $b$-ary {\method{Segment-Tree}} with SIMD on updates \emph{and searches} -- to support \emph{rank/select} queries over mutable bitmaps~\cite{vigna2019}, an important building block for dynamic succinct data structures. {The experiments presented in this work were conducted using a single system configuration, i.e., a specific processor (Intel i9-9940X), operating system (Linux), and compiler (\textsf{gcc}). We acknowledge the specificity of our analysis, although it involves a rather common setup, and we plan to extend our experimentation to other configurations as well in future work. } \subsection{The Blocked-{\method{Fenwick-Tree}}}\label{sec:blockedfentree} A way of improving the trade-off exposed by the $b$-ary {\method{Fenwick-Tree}} is to block $b$ keys together into the nodes of a classic {\method{Fenwick-Tree}}, as depicted in Figure~\ref{fig:blocked_fentree_shape} for $b=4$ and $n=64$. We call this data structure the \emph{Blocked}-{\method{Fenwick-Tree}} (\textsf{BFT}). The height of the tree is now $\lceil \log_2((n+1)/b) \rceil + 1$, for a consequent complexity of $O(\log(n/b))$ time for both {\code{sum}} and {\code{update}} if we represent each node with the two-level data structure introduced in Section~\ref{sec:nodes}. Compared to the $b$-ary {\method{Fenwick-Tree}}, this blocked variant slows down the queries but remarkably improves the updates, for a better trade-off. The code implementing this idea is shown in Figure~\ref{code:blocked_fentree}. To build the data structure, we proceed as follows.
We materialize a temporary array, \code{tmp}, that holds the first key of each block and build a classic {\method{Fenwick-Tree}} from this array. Then all blocks are written into \code{tree} one after the other, just pretending that the first key of the $j$-th block is $\code{tmp}[j]$, for $j=0..\lceil n/b \rceil-1$. The total space is simply $\lceil n/b \rceil \times \Theta(b) = \Theta(n)$ memory words. This solution provides a mechanism to ``index'' the blocks as if they were the nodes of a {\method{Fenwick-Tree}}, so that an operation for index $i$ translates into the same operation but for index $j = \lfloor i / b \rfloor + 1$. Note that for a random workload, this data structure is likely to perform $\frac{1}{2} \lceil \log_2((n+1)/b) \rceil + 1$ memory accesses, as it happens for a classic {\method{Fenwick-Tree}} with $b=1$. In Section~\ref{sec:segment_tree_vs_fenwick_tree} we saw that the {\method{Fenwick-Tree}} suffers from cache conflicts when $n$ is large. Not surprisingly, the blocked variant inherits this problem too. Also in this case, however, we can overcome the limitation by offsetting the serialization of a node at byte location $i$ to location $i + \lfloor i/d \rfloor$, for some sufficiently large $d$ (we again used $d=2^{14}$). We do not show the comparison between the regular Blocked-{\method{Fenwick-Tree}} and the modified one with limited cache conflicts -- as we did in Figure~\ref{fig:fenwick_tree} for the classic {\method{Fenwick-Tree}} -- to avoid repeating ourselves with similar considerations. We again confirm that the modification significantly improves upon the regular tree layout also for this blocked variant, and it is this variant that we consider in the following. Before commenting on the experimental results, note that, as for the $b$-ary {\method{Fenwick-Tree}}, we cannot expect the blocked {\method{Fenwick-Tree}} to perform better than the $b$-ary {\method{Segment-Tree}} because of the larger height. As a matter of fact, Figure~\ref{fig:fenwick_tree_blocked} shows that this blocked variant does not even improve over the classic {\method{Fenwick-Tree}}: while it performs similarly for small values of $n$, it induces significantly more cache-misses as soon as the $L_2$ cache boundary is crossed, which happens for $n=2^{17}$. The increased number of cache-misses is due to the two-level data structures, requiring access to more cache lines at each traversed node. For example, queries need to access 2 cache lines instead of one, or even 4 in the restricted case. Again recall that, although the height of the tree is smaller than that of the classic {\method{Fenwick-Tree}} by $\log_2 b$, the number of traversed nodes is only reduced by $\frac{1}{2}\log_2 b$, which is quite small for the values of $b$ considered (e.g., just 3 for $b=64$). Therefore, this variant traverses slightly fewer nodes but the computation at each node is much more expensive, resulting in a larger running time. For example, considering $n=2^{28}$, $b=64$, and {\code{sum}} queries, we expect the blocked {\method{Fenwick-Tree}} to touch $28/2-3+1=12$ nodes on average but perform twice as many memory accesses, for a total of 24 accesses against 15 performed by the classic {\method{Fenwick-Tree}}. Low-level profiling indeed confirms that the blocked variant incurs 70\% more cache misses in this scenario. Similar considerations hold true for updates as well. In particular, the runtime is higher not only because of the increased number of cache-misses but also because of the additional cycles due to SIMD instructions. 
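To make the indexing scheme just described concrete, the following is a minimal sketch of the index arithmetic only; it is \emph{not} the code of Figure~\ref{code:blocked_fentree} (which also implements the two-level node of Section~\ref{sec:nodes}), the names are ours, and $d=2^{14}$ is used as in the text.

\begin{lstlisting}
// Index arithmetic of the Blocked-Fenwick-Tree (illustrative sketch only).
#include <cstdint>

template <uint64_t b /* keys per block */, uint64_t d = uint64_t(1) << 14>
struct blocked_fenwick_index {
    // Operations on key i are redirected to the classic Fenwick-Tree
    // operation on the (1-based) block index j = floor(i / b) + 1.
    static uint64_t block(uint64_t i) { return i / b + 1; }

    // Offset of key i inside its block.
    static uint64_t offset(uint64_t i) { return i % b; }

    // Remapping that limits cache conflicts: the node serialized at byte
    // location x is written at location x + floor(x / d) instead.
    static uint64_t remap(uint64_t x) { return x + x / d; }
};
\end{lstlisting}

For example, with $b=64$, key $i=130$ belongs to block $j = \lfloor 130/64 \rfloor + 1 = 3$ at offset $130 \bmod 64 = 2$.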
\section{ACKNOWLEDGMENTS} This work was partially supported by the BigDataGrapes (EU H2020 RIA, grant agreement N\textsuperscript{\b{o}}780751), the ``Algorithms, Data Structures and Combinatorics for Machine Learning'' (MIUR-PRIN 2017), and the OK-INSAID (MIUR-PON 2018, grant agreement N\textsuperscript{\b{o}}ARS01\_00917) projects. \renewcommand{\bibsep}{3.0pt} \bibliographystyle{ACM-Reference-Format} \section{Related Work}\label{sec:related_work} The literature about the prefix-sum problem is rich in theoretical results, which we summarize here. 
These results are valid under a RAM model with word size $w$ bits. Let us denote with $t_q$ and $t_u$ the worst-case complexities for queries and updates respectively. The first important result was given by~\citet*{FS89}, who proved a lower bound of $\Omega(\log n/\log w)$, which is $\Omega(\log n/ \log\log n)$ for the typical case where an integer fits in one word, i.e., $w=\Theta(\log n)$. They also gave the trade-off $t_q \log_2(w t_u) = \Omega(\log n)$. The same lower bound was found by~\citet*{yao1985complexity} using a different method. \citet*{hampapuram1998optimal} gave a $\Omega(\log n)$ lower bound for the amortized complexity. \citet*{dietz1989optimal} proposed a data structure that achieves \citet*{FS89}'s lower bound for both operations. However, his solution requires $\delta$ to be $O(\log\log n)$ (we recall that $\delta$ is the number of bits needed to represent $\Delta$). He designed a way to handle $b = O(\log^{\varepsilon} n)$ elements in constant time, for some $\varepsilon > 0$, and then built a {\method{Segment-Tree}} with branching factor $b$. Thus, the height of the tree is $O(\log_b n) = O(\log n/\log\log n)$. To handle $b$ elements in constant time, the prefix sums are computed into an array $B$, whereas the most recent updates are stored into another array $C$. Such array $C$ is handled in constant time using a precomputed table. This solution has been improved by~\citet{raman2001succinct} and then, again, by~\citet{hon2011succinct}. The downside of these solutions is that they require large universal tables which do not scale well with $\delta$ and $b$. \citet*{patrascu2006logarithmic} considerably improved the previous lower bounds and trade-off lower bounds for the problem. In particular, they determined a parametric lower bound in $w$ and $\delta$. Therefore, they distinguished two cases for the problem: the case where $\delta = \Omega(w)$ bits and the case where $\delta = o(w)$ bits. In the former case they found the usual logarithmic bound $\max\{t_q,t_u\} = \Omega(\log n)$. They also found two symmetrical trade-off lower bounds: $t_q \log_2(t_u/t_q) = \Omega(\log n)$ and $t_u \log_2(t_q/t_u) = \Omega(\log n)$. This proves the optimality of the {\method{Segment-Tree}} and {\method{Fenwick-Tree}} data structures introduced in Section~\ref{sec:introduction}. In the latter case they found a trade-off lower bound of $t_q(\log_2(w/\delta)+\log_2(t_u/t_q)) = \Omega(\log n)$ which implies the lower bound $\max\{t_q,t_u\} = \Omega(\log n/\log(w/\delta))$. (Note that in the case where $\delta = \Theta(\log n)$ it matches the lower bounds for the first case.) They also proposed a data structure for this latter case that achieves $t_q = t_u = O(\log n/\log(w/\delta))$ which is optimal for the given lower bound. Their idea is to support both operations in constant time on $b$ elements and build a {\method{Segment-Tree}} with branching factor $b$, as already done by Dietz. Again, the prefix sums for all the elements are precomputed and stored in an array $B$. The recent updates are kept in another array $C$. In particular, and differently from Dietz's solution, the array $C$ is packed in $w$ bits in order to manipulate it in constant time, which is possible as long as $b = O(w/(\delta+\log w))$. In Section~\ref{sec:segtree_bary} we will design an efficient and practical implementation of a $b$-ary {\method{Segment-Tree}}, which also takes advantage of smaller bit-widths $\delta$ to enhance the runtime. 
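To give a concrete feel for the bounds of this second case, consider an illustrative instantiation of our own choosing, $w=64$ and $\delta=8$:
\[
\max\{t_q,t_u\} \;=\; \Omega\!\left(\frac{\log n}{\log_2(w/\delta)}\right) \;=\; \Omega\!\left(\frac{\log n}{\log_2 8}\right) \;=\; \Omega\!\left(\frac{\log n}{3}\right),
\]
so, for these word and update sizes, even an optimal data structure can be at most a small constant factor (here, 3) faster than the $\Omega(\log n)$ bound holding when $\delta = \Omega(w)$.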
The lower bounds found by~\citet*{patrascu2006logarithmic} do not prevent one of the two operations from being implemented in time $o(\log n)$. Indeed, \citet*{chan2010counting} proposed a data structure that achieves $t_q = O(\log n/\log\log n)$ but $t_u = O(\log^{0.5+\varepsilon} n)$, for some $\varepsilon > 0$. Their idea is (again) to build a {\method{Segment-Tree}} with branching factor $b = \sqrt{w}$. Each node of the tree is represented as another tree, but with branching factor $b^{\prime} = \varepsilon\log w$. This smaller tree supports {\code{sum}} in $O(b/b^{\prime})$ time and {\code{update}} in $O(1 + 2^{b^{\prime}} b^2/w)$ amortized time. To obtain this amortized bound, each smaller tree keeps an extra machine word that holds the $k = w/b$ most recent updates. Every $k$ updates, the values are propagated to the children of the node. \paragraph*{Some Remarks} First, it is important to keep in mind that a theoretical solution may be too complex and, hence, of little practical use, as the constants hidden in the asymptotic bounds are high. For example, we tried to implement the data structure proposed by~\citet*{chan2010counting}, but soon found out that the algorithm for {\code{update}} was too complicated and actually performed much worse than a simple $O(\log n)$-time solution. Second, some studies tackle the problem of reducing the space of the data structures~\cite{raman2001succinct,hon2011succinct,bille2017succinct,vigna2019}; in this article we do not take into account compressed representations\footnote{ As a reference point, we determined that the fastest compressed {\method{Fenwick-Tree}} layout proposed by~\citet*{vigna2019}, the so-called \textsf{byte[F]} data structure described in the paper, is not faster than the classic uncompressed {\method{Fenwick-Tree}} \emph{when} used to support {\code{sum}} and {\code{update}}.}. Third, a different line of research studied the problem in the parallel-computing setting where the solution is computed using $p>1$ processors~\cite{meijer1987optimal,goodrich1994optimal} or using specialized/dedicated hardware~\cite{harris2007parallel,brodnik20061}. As already mentioned in Section~\ref{sec:introduction}, we entirely focus on single-core solutions that should run on commodity hardware. Lastly, several extensions to the problem exist, such as the so-called \emph{searchable prefix-sum problem}~\cite{raman2001succinct,hon2011succinct}, where we are also asked to support the operation $\code{search}(x)$, which returns the smallest $i$ such that $\code{sum}(i) \geq x$; and the \emph{dynamic prefix-sum problem} with insertions/deletions of elements in/from $A$ allowed~\cite{BCGSVV15}. \section{Preliminary Exploration: Segment and Fenwick Trees}\label{sec:preliminaries} \section{The Segment-Tree}\label{sec:segtree} The {\method{Segment-Tree}} data structure was originally proposed by~\citet*{bentley1977solutions} in an unpublished manuscript, and later formalized by~\citet*{bentley1980optimal} to solve the so-called \emph{batched range problem} (given a set of rectangles in the plane and a query point, report all the rectangles in which the point lies). Given an array $A[0..n)$, the {\method{Segment-Tree}} is a complete balanced binary tree whose leaves correspond to the individual elements of $A$ and the internal nodes hold the sum of the elements of $A$ descending from the left and right sub-trees. 
The ``segment'' terminology derives from the fact that each internal node logically covers a segment of the array and holds the sum of the elements in the segment. Therefore, in the simplest terms, the {\method{Segment-Tree}} is a hierarchy of segments covering the elements of $A$. In fact, the root of the tree covers the entire array, i.e., the segment $[0,n-1]$; its children each cover half of the array, that is, the segments $[0,\lfloor (n-1)/2 \rfloor]$ and $[\lfloor (n-1)/2 \rfloor + 1,n-1]$ respectively, and so on until the $i$-th leaf spans the segment $[i,i]$, which corresponds to $A[i]$. Figure~\ref{fig:segtree} shows a graphical example for an array of size 16. \begin{figure}[t] \centering \subfloat[]{ \includegraphics[scale=1]{{images/segtree}} \label{fig:segtree} } \subfloat[]{ \includegraphics[scale=0.95]{{images/implicit_tree}} \label{fig:implicit_tree} } \caption{In (a), an example of {\method{Segment-Tree}} built from an input array $A$, with highlighted nodes belonging to the root-to-leaf path for answering $\code{sum}(10)$. The shaded nodes are the ones accessed to compute the result, thus $\code{sum}(10)=282-86-52=144$. In (b), the array-like representation of the same tree. } \end{figure} \paragraph*{Top-Down Traversal} Both {\code{sum}} and {\code{update}} can be resolved by traversing the tree structure \emph{top-down} with a classic binary-search algorithm. For example, Figure~\ref{fig:segtree} highlights the nodes traversed during the computation of $\code{sum}(10)$. To answer a $\code{sum}(i)$ query, for every traversed node we determine the segment containing $i$ by comparing $i$ with the index in the \emph{middle} of the segment spanned by the node and moving either to the left or right child. Every time we move to the right child, we add the value stored in the left child to our result. Updates are implemented in a similar way. Since each child of a node spans half of the segment of its parent, the tree is balanced and its height is $\lceil \log_2 n \rceil + 1$. It follows that {\code{sum}} and {\code{update}} are supported in $O(\log n)$ time. \begin{figure}[t] \lstinputlisting{segment_tree_topdown.hpp} \caption{A top-down implementation of the {\method{Segment-Tree}}. \label{code:segtree}} \end{figure} In Figure~\ref{code:segtree} we give the full code implementing this top-down approach. Some considerations are in order. First, observe that we build the tree by padding the array with zeros until we reach the first power of 2 that is larger than or equal to $n$. This substantially simplifies the code, and permits further optimizations that we will introduce later. Second, we store the entire tree in an array, named \code{tree} in the code, thus representing the tree topology \emph{implicitly} with parent-to-child relationships coded via integer arithmetic: if the parent node is stored in the array at position $i$, then its left and right children are placed at positions $2i + 1$ and $2i + 2$, respectively. Note that this indexing mechanism requires \emph{every} node of the tree to have two children, and this is (also) the reason why we pad the array with zeros to reach the next power of two. Figure~\ref{fig:implicit_tree} shows the array-like view of the tree in Figure~\ref{fig:segtree}. The complete tree hierarchy, excluding the leaves, which correspond to the elements of the original array, consists of $2^{\lceil \log_2 n \rceil}-1$ internal nodes. 
These internal nodes are stored in the first half of the array, i.e., in \code{tree[0..size-1)}; the leaves are stored in the second half, i.e., in \code{tree[size-1..2*size-1)}. Thus the overall {\method{Segment-Tree}} takes a total of $2^{\lceil \log_2 n \rceil+1}-1$ 64-bit words. Therefore, we have that the {\method{Segment-Tree}} takes $c \cdot n$ memory words, with $c \in [2,4)$. The constant $c$ is 2 when $n$ is a power of 2, but becomes close to 4 when $n$ is very distant from $2^{\lceil \log_2 n \rceil}$. \paragraph*{Bottom-Up Traversal} In Figure~\ref{code:segtree_bottomup} we show an alternative implementation of the {\method{Segment-Tree}}. Albeit logically equivalent to the code shown in Figure~\ref{code:segtree}, this implementation has advantages. First, it avoids extra space. In particular, it always consumes $2n-1$ memory words, \emph{while preserving implicit parent-to-child relationships} expressed via integer arithmetic. Achieving this when $n$ is a power of 2 is trivial because the tree will always be full, but is more difficult otherwise. In the \code{build} method of the class we show an algorithm that does so. The idea is to let the leaves of the tree, which correspond to the elements of the input, be stored in the array \code{tree} slightly out of order. The leaves are still stored in the second half of the array but instead of being laid out consecutively from position \code{n-1} (as it would be if \code{n} were a power of 2), they start from position \code{begin}, which can be larger than \code{n-1}. Let \var{m} be $2n-1$. The first \var{m - begin} leaves are stored in \code{tree[begin..m)}; all remaining leaves are stored in \code{tree[n-1,begin)}. This is achieved with the two \code{for} loops in the \code{build} method. (Note that \code{begin} is always $2^{\lceil \log_2 n \rceil}-1$ which is $n-1$ if $n$ is a power of 2.) Such \emph{circular} displacement of the leaves guarantees that parent-to-child relationships are preserved even when $n$ is \emph{not} a power of 2. Lastly, the \code{visit} method traverses the internal nodes recursively, writing in each node the proper sum value. \begin{figure}[t] \lstinputlisting{segment_tree_bottomup.hpp} \caption{A bottom-up implementation of the {\method{Segment-Tree}}. \label{code:segtree_bottomup}} \end{figure} Second, we now traverse the tree structure \emph{bottom-up}, instead of top-down. This direction allows a much simpler implementation of the {\code{sum}} and {\code{update}} procedures. The inner \code{while} loop is shorter and it is only governed by the index \var{p}, which is initialized to be the position of the \var{i}-th leaf using the function \code{leaf}, and updated to become the index of the parent node at each iteration until it becomes \var{0}, the index of the root node. Furthermore in the code for {\code{sum}}, every time \var{p} is the index of a \emph{right} child we sum the content of its left sibling, which is stored in \var{tree[p-1]}, to the result. To check whether \var{p} is the index of a right child, we exploit the property that \emph{left children are always stored at odd positions in the array; right children at even positions}. This can be proved by induction on \var{p}. If \var{p} is 0, i.e., is the index of the root, then its left child is in position 1 and its right child in position 2, hence the property is satisfied. Now, suppose it holds true when $\var{p} = k$, for some $k > 0$. 
To show that the property holds for the children of \var{p}, it is sufficient to recall that: the double of any number is even; if we add 1 to an even number, the result is an odd number (left child); if we add 2 to an even number, the result is an even number (right child). In conclusion, if the parity of \var{p} is 0, it must be a right child. (The parity of \var{p} is indicated by its first bit from the right, which we isolate with \code{p \& 1}.) \paragraph*{Branch-Free Traversal} Whatever implementation of the {\method{Segment-Tree}} we use, either top-down or bottom-up, a branch (\code{if} statement) is always executed in the \code{while} loop of {\code{sum}} (and in that of {\code{update}} for the top-down variant). For randomly distributed queries, we expect the branch to be hard to predict: it will be true for approximately half of the time, and false for the other half. With speculative execution as the branch-handling policy, the penalty incurred whenever the processor mispredicts a branch is a \emph{pipeline flush}: all the instructions executed so far (speculatively) must be discarded from the processor pipeline, which decreases the instruction throughput. Therefore, we would like to avoid the branch inside the \code{while} loop. \begin{figure}[t] \subfloat[top-down]{ \begin{minipage}[t]{0.5\textwidth} \lstinputlisting{branch_free_sum_topdown.cpp} \label{code:branchfree-sum-a} \end{minipage} } \subfloat[bottom-up]{ \begin{minipage}[t]{0.5\textwidth} \lstinputlisting{branch_free_sum_bottomup.cpp} \end{minipage} } \caption{Branch-free {\code{sum}} implementations on the {\method{Segment-Tree}}. \label{code:branchfree-sum}} \end{figure} In Figure~\ref{code:branchfree-sum} we show a branch-free implementation of the {\code{sum}} algorithm\footnote{The {\code{update}} algorithm for top-down is completely symmetric to that of {\code{sum}}, and not shown here. Also, note that the {\code{update}} algorithm for the bottom-up variant is already branch-free.}. It basically uses the result of the comparison, \var{cmp}, which will be either 1 or 0, to appropriately mask\footnote{It is interesting to report that, although the code shown here uses multiplication, the assembly output by the compiler does not involve multiplication at all. The compiler implements the operation \code{cmp * tree[p]} with a \emph{conditional move} instruction (\code{cmov}), which loads into a register the content of \code{tree[p]} only if \code{cmp} is true. } the quantity we are adding to the result (and move to the proper child obliviously in the top-down traversal). The correctness is immediate and left to the reader. The result of the comparison between the two approaches, branchy vs. branch-free, is shown in Figure~\ref{fig:branchy_vs_branchfree}. \begin{figure}[t] \centering \subfloat[{\code{sum}}]{ \includegraphics[scale=\myfigsize]{{results_and/sum_segment_tree_branchy_vs_branchless}} } \subfloat[{\code{update}}]{ \includegraphics[scale=\myfigsize]{{results_and/update_segment_tree_branchy_vs_branchless}} \label{fig:branchy_vs_branchfree:update} } \caption{Running times for branchy and branch-free {\code{sum}}/{\code{update}} on the {\method{Segment-Tree}} (ST). \label{fig:branchy_vs_branchfree}} \end{figure} As we can see, the branch-free implementations of both {\code{sum}} and {\code{update}} are much faster than the branchy counterparts, on average by $2\times$ or more, for a wide range of practical values of $n$. 
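To make the masking trick concrete, the following is a self-contained, simplified sketch of the branchy and branch-free bottom-up {\code{sum}} routines; it is our own illustration, not the code of Figures~\ref{code:segtree_bottomup} and~\ref{code:branchfree-sum}, and for brevity it assumes that $n$ is a nonzero power of 2, so that the $i$-th leaf is stored at position $n-1+i$.

\begin{lstlisting}
// Simplified bottom-up Segment-Tree: branchy vs. branch-free sum (sketch).
#include <cstdint>
#include <vector>

struct segment_tree_sketch {
    uint64_t n = 0;              // number of keys (a nonzero power of 2 here)
    std::vector<int64_t> tree;   // 2n-1 slots: internal nodes first, then leaves

    void build(std::vector<int64_t> const& A) {
        n = A.size();
        tree.assign(2 * n - 1, 0);
        for (uint64_t i = 0; i != n; ++i) tree[n - 1 + i] = A[i];
        for (uint64_t p = n - 2; p + 1 != 0; --p)       // fill internal nodes
            tree[p] = tree[2 * p + 1] + tree[2 * p + 2];
    }

    int64_t sum_branchy(uint64_t i) const {             // A[0] + ... + A[i]
        uint64_t p = n - 1 + i;                          // position of leaf i
        int64_t s = tree[p];
        while (p != 0) {
            if ((p & 1) == 0) s += tree[p - 1];          // right child: add its left sibling
            p = (p - 1) / 2;                             // move to the parent
        }
        return s;
    }

    int64_t sum_branchfree(uint64_t i) const {
        uint64_t p = n - 1 + i;
        int64_t s = tree[p];
        while (p != 0) {
            int64_t cmp = (p & 1) == 0;                  // 1 iff p is a right child
            s += cmp * tree[p - 1];                      // always load, mask with cmp
            p = (p - 1) / 2;
        }
        return s;
    }
};
\end{lstlisting}

The only difference between the two routines is that the branch on the parity of \code{p} is replaced by a 0/1 multiplier applied to the sibling value, so the sibling is accessed at every iteration of the loop.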
To further explain the evidence of Figure~\ref{fig:branchy_vs_branchfree}, we collected some performance counts using the Linux \texttt{perf} utility. Here we report an example for the bottom-up approach and {\code{sum}} queries. For $n \approx 2^{16}$, the branch-free code spends 66\% fewer cycles than the branchy code although it actually executes 38\% more instructions, thus raising the instruction throughput from 0.54 to 2.2. Furthermore, it executes 44\% fewer branches and misses only 0.2\% of these. Also, note that the bottom-up version is considerably faster than the top-down version (and does not sacrifice anything for the branchy implementation). Therefore, from now on we focus on the bottom-up version of the {\method{Segment-Tree}}. However, the branchy implementation of {\code{sum}} is faster than the branch-free one for larger values of $n$ (e.g., for $n>2^{25}$). This shift in the trend is due to the fact that the latency of a memory access dominates the cost of a pipeline flush for large $n$. In fact, when the data structures are sufficiently large, accesses to larger but \emph{slower} memory levels are involved: in this case, one can obtain faster running times by avoiding such accesses when unnecessary. The branchy implementation does so. In particular, it performs a memory access \emph{only if} the condition of the branch is satisfied, thus saving roughly 50\% of the accesses for a random query workload compared to the branch-free implementation. This is also evident for {\code{update}}. All the different implementations of {\code{update}} perform an access at every iteration of their loop, and this is the reason why all the curves in Figure~\ref{fig:branchy_vs_branchfree:update} have a similar shape. \paragraph*{Two-Loop Traversal} In light of the above considerations, we would actually like to combine the best of the ``two worlds'' by: executing branch-free code as long as the cost of a pipeline flush exceeds the cost of a memory access, which is the case when the accessed data resides in the smaller cache levels; executing branchy code otherwise, i.e., when the memory latency is the major cost in the running time, which happens when the accesses are directed to slower memory levels. Now, the way the {\method{Segment-Tree}} is linearized in the \code{tree} array, i.e., the first $n-1$ positions of the array store the internal nodes in level order and the remaining $n$ store the leaves of the tree, is particularly suitable for achieving this. A node at depth $d$ has a $(1/2)^d$ probability of being accessed, for randomly distributed queries (the root is at depth $d=0$). Therefore, when we repeatedly traverse the tree, the first $L_1$ positions of the array, which correspond to the top-most $\lceil \log_2 L_1 \rceil$ levels of the tree, are kept in cache $L_1$; the following $L_2 - L_1$ positions are kept in $L_2$, and so on, where $L_k$ indicates the size of the $k$-th cache in data items (the ``64-bit Words'' column in Table~\ref{tab:caches} at page~\pageref{tab:caches}). If \code{T} is the size of the prefix of the array \code{tree} that fits in cache, it is intuitive that, as long as $\code{p} > \code{T}$, we should prefer the branchy code and save memory accesses; vice versa, when $\code{p} \leq \code{T}$, memory accesses are relatively cheap and we should opt for the branch-free code. The following code shows this approach. 
\lstinputlisting{segtree_bottomup_sum_two_loops.cpp} The value of the threshold \code{T} governs the number of loop iterations, out of $\lceil \log_2 n \rceil + 1$, that are executed in a branchy or branch-free manner. Its value intuitively depends on the size of the caches and the value of $n$. For smaller $n$ we would like to set \code{T} reasonably high to benefit from the branch-free code; vice versa, for larger $n$ we would like to perform more branchy iterations. As a rule of thumb, we determined that a good choice of \code{T} is $L_2 - (n > L_3) \times (L_2 - L_1)$, which is equal to $L_1$ if $n>L_3$ and to $L_2$ otherwise. It remains to explain how we can apply the ``two-loop'' optimization we have just introduced to the {\code{update}} algorithm, since its current implementation \emph{always} executes an access per iteration. To reduce the number of accesses to memory, we modify the content of the internal nodes of the tree. If each node stores the sum of the leaves descending from its \emph{left} subtree only (rather than the sum of the elements covered by the left \emph{and} right segments), then a memory access is saved every time we do not have to update any node in the left subtree. We refer to this modification of the internal nodes as a \emph{left-sum} tree hereafter and show the corresponding implementation in Figure~\ref{code:segment_tree_bottomup_leftsum}. Lastly, Figure~\ref{fig:leftsum_switch_on_level} illustrates the comparison between branchy/branch-free implementations of the regular and \emph{left-sum} {\method{Segment-Tree}}. As apparent from the plots, the left-sum variant with the two-loop optimization is the fastest {\method{Segment-Tree}} data structure because it combines the benefits of branch-free and branchy implementations. From now on and in further comparisons, we simply refer to this strategy as \textsf{ST} in the plots. \begin{figure}[t] \lstinputlisting{segment_tree_bottomup_leftsum.hpp} \caption{The bottom-up \emph{left-sum} {\method{Segment-Tree}} implementation that stores in each internal node the sum of the leaves descending from its left sub-tree. \label{code:segment_tree_bottomup_leftsum}} \end{figure} \begin{figure}[t] \centering \subfloat[{\code{sum}}]{ \includegraphics[scale=\myfigsize]{{results_and/sum_segment_tree_branchy_vs_branchless_leftsum}} } \subfloat[{\code{update}}]{ \includegraphics[scale=\myfigsize]{{results_and/update_segment_tree_branchy_vs_branchless_leftsum}} } \caption{Comparison between branchy/branch-free implementations of regular and \emph{left-sum} {\method{Segment-Tree}}. \label{fig:leftsum_switch_on_level}} \end{figure} \section{Segment-Tree vs. Fenwick-Tree}\label{sec:segment_tree_vs_fenwick_tree} In this section we compare the optimized versions of the {\method{Segment-Tree}} and {\method{Fenwick-Tree}} -- respectively, the bottom-up left-sum {\method{Segment-Tree}} with two-loop traversal, and the modified {\method{Fenwick-Tree}} with reduced cache conflicts. A quick look at Figure~\ref{fig:segment_tree_vs_fenwick_tree} immediately reveals that the {\method{Fenwick-Tree}} outperforms the {\method{Segment-Tree}}. There are two different reasons that explain why the {\method{Fenwick-Tree}} is more efficient than the {\method{Segment-Tree}}. 
\begin{figure}[t] \centering \subfloat[{\code{sum}}]{ \includegraphics[scale=\myfigsize]{{results_and/sum_segment_tree_vs_fenwick_tree}} } \subfloat[{\code{update}}]{ \includegraphics[scale=\myfigsize]{{results_and/update_segment_tree_vs_fenwick_tree}} } \caption{The running times of {\code{sum}}/{\code{update}} for the {\method{Segment-Tree}} (ST) and the {\method{Fenwick-Tree}} ({\method{FT}}). \label{fig:segment_tree_vs_fenwick_tree}} \end{figure} \begin{enumerate} \item Although we have substantially simplified the code logic for traversing the {\method{Segment-Tree}} with the bottom-up navigation, the code for the {\method{Fenwick-Tree}} is even simpler and requires just a few arithmetic operations plus a memory access at each iteration of the loop. \item While both trees have height $\lceil \log_2 n \rceil + 1$, the number of traversed nodes per operation is different. The {\method{Segment-Tree}} \emph{always} traverses $\lceil \log_2 n \rceil + 1$ nodes and that is also the number of iterations of the \texttt{while} loop. When the branch-free code is in play, $\lceil \log_2 n \rceil + 1$ memory accesses are also performed. As we already explained, for the branchy implementation the number of memory accesses depends on the query: for randomly distributed queries we expect to perform one access every two iterations of the loop (i.e., half of the time we go left and half of the time we go right). Now, for a random integer $i$ between 0 and $n-1$, the number of bits set we expect to see in its binary representation is approximately 50\%. Therefore, the number of loop iterations \emph{and} memory accesses the {\method{Fenwick-Tree}} is likely to perform is $\frac{1}{2} \lceil \log_2(n+1) \rceil + 1$, i.e., half of those performed by the {\method{Segment-Tree}}, regardless of the value of $n$. \end{enumerate} \emph{Both} factors contribute to the difference in efficiency between the two data structures. However, for small values of $n$, the number of performed instructions is the major contributor to the running time as both data structures fit in cache, thus memory accesses are relatively cheap (point 1). For example, when $n$ is $\approx 2^{16}$ and considering {\code{sum}} queries, the {\method{Fenwick-Tree}} code spends 58\% of the cycles spent by the {\method{Segment-Tree}} and executes nearly 1/3 of the instructions. It also executes half of the branches and misses 11\% of them. Moving towards larger $n$, the memory latency progressively becomes the dominant factor in the running time, thus favoring solutions that spare memory accesses (point 2). Considering again the number of cache misses for $n \approx 250 \times 10^6$ (excluding overheads during benchmarking), we determined that the {\method{Fenwick-Tree}} incurs 25\% fewer cache misses than the {\method{Segment-Tree}} for both {\code{sum}} and {\code{update}}. In conclusion, we will exclude the {\method{Segment-Tree}} from the experiments we are going to discuss in the rest of the paper and compare against the {\method{Fenwick-Tree}}. \section{Experimental Setup}\label{sec:setup} Throughout the paper we show experimental results, so we describe here our experimental setup and methodology. \begin{table}[t] \centering \caption{Cache hierarchy on the Intel i9-9940X processor. All cache levels have a line size of 64 bytes.} \scalebox{1.0 }{\input{tables/caches.tex}} \label{tab:caches} \end{table} \paragraph*{Hardware} For the experiments reported in the article we used an Intel i9-9940X processor, clocked at 3.30 GHz. 
The processor has two private levels of cache memory per core: $2 \times 32$ KiB $L_1$ cache (32 KiB for instructions and 32 KiB for data); 1 MiB for $L_2$ cache. A $L_3$ cache level spans $\approx$19 MiB and is shared among all cores. Table~\ref{tab:caches} summarizes these specifications. The processor supports the following SIMD~\cite{SIMDIntel} instruction sets: MMX, SSE (including 2, 3, 4.1, 4.2, and SSSE3), AVX, AVX2, and AVX-512. {(We also confirmed our results using another Intel i9-9900X processor, clocked at 3.60 GHz with the same $L_1$ configuration, but smaller $L_2$ and $L_3$ caches. Although timings were slightly different, the same conclusions held.)} \paragraph*{Software} All solutions described in the paper were implemented in C++. The code is available at \url{https://github.com/jermp/psds} {and was tested on Linux with \texttt{gcc} 7.4 and 9.2, and Mac OS with \texttt{clang} 10 and 11.} For the experiments reported in the article, the code was compiled with \texttt{gcc} 9.2.1 under Ubuntu 19.10 (Linux kernel 5.3.0, 64 bits), using the flags: \texttt{-std=c++17 -O3 -march=native -Wall -Wextra}. \paragraph*{Methodology} All experiments run on a \emph{single} core of the processor, with the data residing entirely in memory so that disk operations are of no concern. (The host machine has 128 GiB of RAM.) {Performance counts, e.g., number of performed instructions, cycles spent, and cache misses, were collected for all algorithms, using the Linux \textsf{perf} utility. We will use such performance counts in our discussions to further explain the behavior of the algorithms.} We operate in the most general setting, i.e., with no specific assumptions on the data: arrays are made of 64-bit signed integers and initialized at random; the $\Delta$ value for an {\code{update}} operation is also a random signed number whose bit-width is 64 unless otherwise specified. To benchmark the speed of the operations for a given array size $n$, we generate $10^4$ (pseudo-) random natural numbers in the interval $[0,n)$ and use these as values of $i$ for both $\code{sum}(i)$ and $\code{update}(i,\Delta)$. Since the running time of {\code{update}} does not depend on the specific value of $\Delta$, we set $\Delta = i$ for the updates. We show the running times for {\code{sum}} and {\code{update}} using plots obtained by varying $n$ from a few hundreds up to one billion integers. In the plots the minimum and maximum timings obtained by varying $n$ draw a ``pipe'', with the marked line inside representing the average time. The values on $n$ used are rounded powers of $10^{1/10} \approx 1.25893$: this base was chosen so that data points are evenly spaced when plotted in a logarithmic scale~\cite{khuong2017array}. All running times are reported in nanoseconds spent per operation (either, nanosec/{\code{sum}} or nanosec/{\code{update}}). Prior to measurement, a warm-up run is executed to fetch the necessary data into the cache. \subsection{The Truncated-{\method{Fenwick-Tree}}}\label{sec:truncatedfentree} We have argued and shown that SIMD is highly effective to reduce the running time for {\code{update}} (consider Figure~\ref{fig:segment_tree} vs. Figure~\ref{fig:segment_tree_no_simd}). 
However, these special instructions must necessarily be combined with a (very) short tree height, given that the {\code{update}} algorithm becomes much more involved at the node-level granularity: while this is the case for the $b$-ary {\method{Segment-Tree}}, it is \emph{not} for the Blocked-{\method{Fenwick-Tree}}, hence its poor efficiency. This suggests the implementation of a strategy where one keeps running the simplest code for {\code{update}}, e.g., that of the classic {\method{Fenwick-Tree}}, until the number of leaves in the sub-tree is $b$ and, thus, these can be updated in parallel with SIMD. The shape of the resulting data structure -- the \emph{Truncated}-{\method{Fenwick-Tree}} (\textsf{TFT}) -- is illustrated in Figure~\ref{fig:truncated_fentree_shape} (for $b=4$ and $n=64$) and shows the clear division between the upper part, represented by the {\method{Fenwick-Tree}}, and the lower part, which consists of an array of blocks. Compared to the classic {\method{Fenwick-Tree}}, it is now intuitive that this variant reduces the number of cache-misses because the upper part is likely to fit well in cache and the $b$ keys inside a block are contiguous in memory. To implement the Truncated-{\method{Fenwick-Tree}}, we proceed as follows. We form $\lceil n/b \rceil$ blocks and let these become the leaves of a classic {\method{Fenwick-Tree}}\footnote{Clearly, we could replace the upper part of the data structure with a 2-ary {\method{Segment-Tree}}, and define the corresponding \emph{Truncated}-{\method{Segment-Tree}}. However, we focus on the {\method{Fenwick-Tree}} only given that it is always better than the {\method{Segment-Tree}}, as analyzed in Section~\ref{sec:segment_tree_vs_fenwick_tree}.}. In particular, this smaller {\method{Fenwick-Tree}} of height $\lceil \log_2((n+1)/b) \rceil$ is obtained by materializing a temporary array \code{tmp} where $\code{tmp}[j]$ is the sum of the elements in the $j$-th block, $j=0..\lceil n/b \rceil-1$, and building a {\method{Fenwick-Tree}} from this array. Therefore, the {\method{Fenwick-Tree}} provides an index into a specific block of $b$ leaves that are handled in parallel with SIMD. The upper part of the data structure takes $\lceil n/b \rceil+1$ words and the lower part $\lceil n/b \rceil \times \Theta(b)$ words, for a total of $\Theta(n)$ words. The code in Figure~\ref{code:truncated_fentree} implements this approach. The procedures of {\code{sum}} and {\code{update}} for an index $i$ become the combination of the respective procedures invoked on the {\method{Fenwick-Tree}} (upper part) for index $j = \lfloor i/b \rfloor$ and on the \code{Node} structure holding the $j$-th block of leaves (lower part). Compared to the Blocked-{\method{Fenwick-Tree}}, this truncated variant has the same height but reduces the processing time at every traversed node by using the simplicity of a classic {\method{Fenwick-Tree}} and by limiting the usage of SIMD to one block per operation. This is achieved by using some extra space in the representation, i.e., the space for the {\method{Fenwick-Tree}}, that is, $\lceil n/b \rceil+1$ extra memory words. As already observed, since the {\method{Fenwick-Tree}} is small and traversed for every operation, we expect it to be cached even for large $n$. For example, for $n = 2^{28}$ and $b=256$, the {\method{Fenwick-Tree}} takes $2^{20}+1$ memory words that fit well in $L_3$. 
Furthermore, cache-aliasing is only limited to the upper part of the data structure, and we already discussed how to solve this issue in Section~\ref{sec:segment_tree_vs_fenwick_tree}. \begin{figure}[t] \subfloat[{\code{sum}}]{ \includegraphics[scale=\myfigsize]{{results_and/sum_fenwick_tree_truncated_restricted_vs_unrestricted}} \label{fig:fenwick_tree_truncated_a} } \subfloat[{\code{update}}]{ \includegraphics[scale=\myfigsize]{{results_and/update_fenwick_tree_truncated_restricted_vs_unrestricted}} \label{fig:fenwick_tree_truncated_b} } \caption{The running times of {\code{sum}}/{\code{update}} for the Truncated-{\method{Fenwick-Tree}} (\textsf{TFT}) with leaves of 64 and 256 keys. \label{fig:fenwick_tree_truncated}} \end{figure} The experimental results shown in Figure~\ref{fig:fenwick_tree_truncated} meet our expectations, as the truncated variant improves over the blocked and the classic {\method{Fenwick-Tree}}. In particular, it performs similarly to the classic {\method{Fenwick-Tree}} for small values of $n$ but gains a significant advantage for larger $n$ thanks to its better cache usage. Comparing Figure~\ref{fig:fenwick_tree_truncated_b} to Figure~\ref{fig:segment_tree_b} at page~\pageref{fig:segment_tree_b}, we can also observe that this variant improves over the $b$-ary {\method{Segment-Tree}} in the general case for {\code{update}} ($\delta=64$) and large $n$ because the use of SIMD is limited to one block of $b$ keys per operation. For {\code{sum}} queries instead, the data structure performs similarly to the $b$-ary {\method{Segment-Tree}}, with the latter being generally faster for all values of $n$.
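To complement the description above, here is a scalar, simplified sketch of how {\code{sum}} and {\code{update}} decompose between the upper {\method{Fenwick-Tree}} and a block; it is \emph{not} the code of Figure~\ref{code:truncated_fentree}, since the linear scan over a block stands in for the SIMD \code{Node} of Section~\ref{sec:nodes}, and the names are ours.

\begin{lstlisting}
// Truncated-Fenwick-Tree, scalar sketch: a classic (1-based) Fenwick-Tree
// indexes the per-block totals; the keys of each block are scanned linearly.
#include <cstdint>
#include <vector>

template <uint64_t b = 256>
struct truncated_fenwick_sketch {
    std::vector<int64_t> keys;  // lower part: the raw keys, block after block
    std::vector<int64_t> fen;   // upper part: Fenwick-Tree over block totals

    void build(std::vector<int64_t> const& A) {
        keys = A;
        uint64_t blocks = (A.size() + b - 1) / b;
        fen.assign(blocks + 1, 0);                       // fen[0] is unused
        for (uint64_t i = 0; i != A.size(); ++i) fenwick_add(i / b, A[i]);
    }

    void fenwick_add(uint64_t j, int64_t delta) {        // block j changes by delta
        for (uint64_t p = j + 1; p < fen.size(); p += p & (~p + 1)) fen[p] += delta;
    }

    int64_t fenwick_sum(uint64_t k) const {              // total of blocks 0..k-1
        int64_t s = 0;
        for (uint64_t p = k; p != 0; p -= p & (~p + 1)) s += fen[p];
        return s;
    }

    int64_t sum(uint64_t i) const {                      // A[0] + ... + A[i]
        uint64_t j = i / b;                               // block holding key i
        int64_t s = fenwick_sum(j);                       // all blocks before j
        for (uint64_t k = j * b; k <= i; ++k) s += keys[k];  // partial block j
        return s;
    }

    void update(uint64_t i, int64_t delta) {
        keys[i] += delta;                                 // lower part
        fenwick_add(i / b, delta);                        // upper part
    }
};
\end{lstlisting}

In this sketch the in-block part of {\code{sum}} scans at most $b$ keys, whereas the actual data structure resolves it with the two-level SIMD node; the decomposition between the two parts, however, follows the description given above: a query touches a single block plus $O(\log(n/b))$ nodes of the upper tree, and an update touches one block plus the same number of upper-tree nodes.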
{ "timestamp": "2020-10-08T02:02:51", "yymm": "2006", "arxiv_id": "2006.14552", "language": "en", "url": "https://arxiv.org/abs/2006.14552", "abstract": "Given an integer array A, the prefix-sum problem is to answer sum(i) queries that return the sum of the elements in A[0..i], knowing that the integers in A can be changed. It is a classic problem in data structure design with a wide range of applications in computing from coding to databases. In this work, we propose and compare several and practical solutions to this problem, showing that new trade-offs between the performance of queries and updates can be achieved on modern hardware.", "subjects": "Data Structures and Algorithms (cs.DS)", "title": "Practical Trade-Offs for the Prefix-Sum Problem", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.957912274487423, "lm_q2_score": 0.7401743735019594, "lm_q1q2_score": 0.7090221176385653 }
https://arxiv.org/abs/1702.05659
On Loss Functions for Deep Neural Networks in Classification
Deep neural networks are currently among the most commonly used classifiers. Besides easily achieving very good performance, one of the best selling points of these models is their modular design - one can conveniently adapt their architecture to specific needs, change connectivity patterns, attach specialised layers, experiment with a large number of activation functions, normalisation schemes and many others. While one can find an impressively wide spread of configurations for almost every aspect of deep nets, one element is, in the authors' opinion, underrepresented: when solving classification problems, the vast majority of papers and applications simply use the log loss. In this paper we investigate how particular choices of loss functions affect deep models and their learning dynamics, as well as the resulting classifiers' robustness to various effects. We perform experiments on classical datasets, and also provide some additional, theoretical insights into the problem. In particular we show that L1 and L2 losses are, quite surprisingly, justified classification objectives for deep nets, by providing a probabilistic interpretation in terms of expected misclassification. We also introduce two losses which are not typically used as deep net objectives and show that they are viable alternatives to the existing ones.
\section{Introduction} For the last few years, Deep Learning (DL) research has been rapidly developing. It evolved from tricky pretraining routines~\cite{larochelle2009exploring} to a highly modular, customisable framework for building machine learning systems for various problems, spanning from image recognition~\cite{krizhevsky2012imagenet}, voice recognition and synthesis~\cite{oord2016wavenet} to complex AI systems~\cite{silver2016mastering}. One of the biggest advantages of DL is the enormous flexibility in designing each part of the architecture, resulting in numerous ways of putting priors over data inside the model itself~\cite{larochelle2009exploring}, finding the most efficient activation functions~\cite{clevert2015fast} or learning algorithms~\cite{kingma2014adam}. However, to the authors' best knowledge, most of the community still keeps one element nearly completely fixed -- when it comes to classification, we use log loss (applied to softmax activation of the output of the network). In this paper we try to address this issue by performing both a theoretical and an empirical analysis of the effects various loss functions have on the training of deep nets. It is worth noting that Tang et al.~\cite{tang2013deep} showed that a well-fitted hinge loss can outperform log-loss-based networks in typical classification tasks. Lee et al.~\cite{lee2015deeply} used squared hinge loss for classification tasks, achieving very good results. From a slightly more theoretical perspective, Choromanska et al.~\cite{choromanska2015loss} also considered $\mathcal{L}_1$ loss as a deep net objective. However, these works seem to be exceptions, appear in complete separation from one another, and usually do not focus on any effect of the loss function other than the final performance. Our goal is to show these losses in a wider context, comparing them with one another under various criteria, and to provide insights into when -- and why -- one should use them. \begin{table}[h] \centering \caption{List of losses analysed in this paper. 
$\mathbf{y}$ is true label as one-hot encoding, $\mathbf{\hat y}$ is true label as +1/-1 encoding, $\mathbf{o}$ is the output of the last layer of the network, $\cdot^{(j)}$ denotes $j$th dimension of a given vector, and $\sigma(\cdot)$ denotes probability estimate.} \begin{tabular}{lll} \toprule symbol & name & equation \\ \midrule $\mathcal{L}_1$ & \small L$_1$ loss & $\|\mathbf{y} - \mathbf{o}\|_1$\\ $\mathcal{L}_2$ & \small L$_2$ loss & $\|\mathbf{y} - \mathbf{o}\|_2^2$\\ $\mathcal{L}_1 \circ \sigma$ & \small expectation loss & $\|\mathbf{y} - \sigma(\textbf{o})\|_1$\\ $\mathcal{L}_2 \circ \sigma$ & \small regularised expectation loss\footnote{See Proposition~\ref{prop:l1exp}}& $\|\mathbf{y} - \sigma(\textbf{o})\|_2^2$\\ $\mathcal{L}_\infty \circ \sigma$ & \small Chebyshev loss & $\max_j |\sigma(\mathbf{o})^{(j)} - \mathbf{y}^{(j)}|$ \\ hinge & \small hinge~\cite{tang2013deep} (margin) loss& $\sum_{j} \max(0, \tfrac{1}{2} - \mathbf{\hat y}^{(j)}\mathbf{o}^{(j)}) $\\ hinge$^2$ & \small squared hinge (margin) loss& $\sum_{j} \max(0, \tfrac{1}{2} - \mathbf{\hat y}^{(j)}\mathbf{o}^{(j)})^2 $\\ hinge$^3$ & \small cubed hinge (margin) loss& $\sum_{j} \max(0,\tfrac{1}{2} - \mathbf{\hat y}^{(j)}\mathbf{o}^{(j)})^3 $\\ log & \small log (cross entropy) loss & $-\sum_{j} \mathbf{y}^{(j)}\log \sigma(\mathbf{o})^{(j)} $\\ log$^2$ & \small squared log loss & $- \sum_{j} [\mathbf{y}^{(j)}\log \sigma(\mathbf{o})^{(j)}]^2 $\\ tan & \small Tanimoto loss & $\tfrac{-\sum_j \sigma(\mathbf{o})^{(j)} \mathbf{y}^{(j)}}{\|\sigma(\mathbf{o})\|_2^2+\|\mathbf{y}\|_2^2-\sum_j\sigma(\mathbf{o})^{(j)} \mathbf{y}^{(j)}}$ \\ D$_\mathrm{CS}$ & \small Cauchy-Schwarz Divergence~\cite{czarnecki2015maximum} & $-\log \tfrac{\sum_j \sigma(\mathbf{o})^{(j)} \mathbf{y}^{(j)}}{\|\sigma(\mathbf{o})\|_2 \|\mathbf{y}\|_2}$ \\ \bottomrule \end{tabular} \label{tab:losses} \end{table} This work focuses on 12 loss functions, described in Table~\ref{tab:losses}. Most of them appear in deep learning (or more generally -- machine learning) literature, however some in slightly different context than a classification loss. In the following section we present new insights into theoretical properties of a couple of these losses and then provide experimental evaluation of resulting models' properties, including the effect on speed of learning, final performance, input data and label noise robustness as well as convergence for simple dataset under limited resources regime. \section{Theory} Let us begin with showing interesting properties of $\mathcal{L}_p$ functions, typically considered as purely regressive losses, which should not be used in classification. $\mathcal{L}_1$ is often used as an auxiliary loss in deep nets to ensure sparseness of representations. Similarly, $\mathcal{L}_2$ is sometimes (however nowadays quite rarely) applied to weights in order to prevent them from growing to infinity. In this section we show that -- despite their regression roots -- they still have reasonable probabilistic interpretation for classification and can be used as a main classification objective. We use the following notation: $\{(\mathbf{x}_i,\mathbf{y}_i)\}_{i=1}^N \subset \mathbb{R}^d \times \{0,1\}^K$ is a training set, an iid sample from unknown $P(\mathbf{x},\mathbf{y})$ and $\sigma$ denotes a function producing probability estimates (usually sigmoid or softmax). 
\begin{prop} \label{prop:l1exp} $\mathcal{L}_1$ loss applied to the probability estimates $\hat p(\vectorfmt{y}|\vectorfmt{x})$ leads to minimisation of expected misclassification probability (as opposed to maximisation of fully correct labelling given by the log loss). Similarly $\mathcal{L}_2$ minimises the same factor, but regularised with a half of expected squared L$_2$ norm of the predictions probability estimates. \end{prop} \begin{proof} In $K$-class classification dependent variables are vectors $\vectorfmt{y}_i \in \{0,1\}^K$ with $\mathrm{L}_1(\vectorfmt{y}_i)=1$, thus using notation $\vectorfmt{p}_i = \hat p(\vectorfmt{y}|\vectorfmt{x}_i)$ \begin{equation*} \begin{aligned} \mathcal{L}_1 &= \tfrac{1}{N}\sum\nolimits_i \sum\nolimits_j | {\vectorfmt{p}_i^{(j)}} - {\vectorfmt{y}_i^{(j)}} | = \tfrac{1}{N}\sum\nolimits_i \left [ \sum\nolimits_j {\vectorfmt{y}_i^{(j)}}(1-{\vectorfmt{p}_i^{(j)}}) + (1-{\vectorfmt{y}_i^{(j)}}){\vectorfmt{p}_i^{(j)}} \right ]\\ &= \tfrac{1}{N}\sum\nolimits_i \left [ \sum\nolimits_j {\vectorfmt{y}_i^{(j)}}-2\sum\nolimits_j{\vectorfmt{y}_i^{(j)}}{\vectorfmt{p}_i^{(j)}} + \sum\nolimits_j{\vectorfmt{p}_i^{(j)}} \right ] = 2 -2\tfrac{1}{N}\sum\nolimits_i \left [\sum\nolimits_j{\vectorfmt{y}_i^{(j)}}{\vectorfmt{p}_i^{(j)}} \right ]. \end{aligned} \end{equation*} Consequently if we sample label according to $\vectorfmt{p}_i$ then probability that it actually matches one hot encoded label in $\vectorfmt{y}_i$ equals $ P(\hat l = l | \hat l \sim \vectorfmt{p}_i, l \sim \vectorfmt{y}_i) = \sum\nolimits_j {\vectorfmt{y}_i^{(j)}} {\vectorfmt{p}_i^{(j)}}, $ and consequently \begin{equation*} \begin{aligned} \mathcal{L}_1 &= 2 -2\tfrac{1}{N}\sum\nolimits_i \left [\sum\nolimits_j{\vectorfmt{y}_i^{(j)}}{\vectorfmt{p}_i^{(j)}} \right ] \approx - 2\mathbb{E}_{P(\vectorfmt{x},\vectorfmt{y})}\left [ P(\hat l = l | \hat l \sim \vectorfmt{p}_i, l \sim \vectorfmt{y}_i) \right ] + \text{const.} \end{aligned} \end{equation*} Analogously for $\mathcal{L}_2,$ \begin{equation*} \begin{aligned} \mathcal{L}_2 &= - 2\tfrac{1}{N}\sum\nolimits_i \left [\sum\nolimits_j{\vectorfmt{y}_i^{(j)}}{\vectorfmt{p}_i^{(j)}} \right ] + \tfrac{1}{N}\sum\nolimits_i \mathrm{L}_2 ({\vectorfmt{y}_i} )^2 + \tfrac{1}{N}\sum\nolimits_i \mathrm{L}_2 ({\vectorfmt{p}_i} )^2\\ & \approx - 2\mathbb{E}_{P(\vectorfmt{x},\vectorfmt{y})}\left [ P(\hat l = l | \hat l \sim \vectorfmt{p}_i, l \sim \vectorfmt{y}_i) \right ] + \mathbb{E}_{P(\vectorfmt{x},\vectorfmt{y})}[\mathrm{L}_2 ({\vectorfmt{p}_i} )^2] + \text{const.} \end{aligned} \end{equation*} \end{proof} For this reason we refer to these losses as \emph{expectation loss} and \emph{regularised expectation loss} respectively. One could expect that this should lead to higher robustness to the outliers/noise, as we try to maximise the expected probability of good classification as opposed to the probability of completely correct labelling (which log loss does). Indeed, as we show in the experimental section -- this property is true for all losses sharing connection with \emph{expectation losses}. So why is using these two loss functions unpopular? Is there anything fundamentally wrong with this formulation from the mathematical perspective? While the following observation is not definitive, it shows an insight into what might be the issue causing slow convergence of such methods. \begin{prop} \label{prop:vanish} $\mathcal{L}_1$, $\mathcal{L}_2$ losses applied to probabilities estimates coming from sigmoid (or softmax) have non-monotonic partial derivatives wrt. 
to the output of the final layer (and the loss is not convex nor concave wrt. to last layer weights). Furthermore, they vanish in both infinities, which slows down learning of heavily misclassified examples. \end{prop} \begin{proof} Let us denote sigmoid activation as $\sigma(x) = (1+e^{-x})^{-1}$ and, without loss of generality, compute partial derivative of $\mathcal{L}_1$ when network is presented with $x_p$ with positive label. Let $o_p$ denote the output activation for this sample. \begin{equation*} \begin{aligned} \frac{\partial (\mathcal{L}_1 \circ \sigma)}{\partial o}(o_p) = \frac{\partial}{\partial o} \left ( | 1 - (1+e^{-o})^{-1} | \right )(o_p) = -\frac{e^{-o_p}}{(e^{-o_p}+1)^2}\\ \lim_{o \rightarrow -\infty} -\frac{e^{-o}}{(e^{-o}+1)^2} = 0 = \lim_{o \rightarrow \infty} -\frac{e^{-o}}{(e^{-o}+1)^2}, \end{aligned} \end{equation*} while at the same time $-\frac{e^{0}}{(e^{0}+1)^2} = -\tfrac{1}{4} < 0$, completing the proof of both non-monotonicity as well as the fact it vanishes when point is heavily misclassified. Lack of convexity comes from the same argument since second derivative wrt. to any weight in the final layer of the model changes sign (as it is equivalent to first derivative being non-monotonic). This comes directly from the above computations and the fact that $o_p = \langle \mathbf{w}, \mathbf{h}_p \rangle + b $ for some internal activation $\mathbf{h}_p$, layer weights $\mathbf{w}$ and layer bias $b$. In a natural way this is true even if we do not have any hidden layers (model is linear). Proofs for $\mathcal{L}_2$ and softmax are completely analogous. \end{proof} Given this negative result, it seems natural to ask whether a similar property can be proven to show which loss functions should lead to \emph{fast} convergence. It seems like the answer is again positive, however based on the well known deep learning hypothesis that deep models learn well when dealing with piece-wise linear functions. An interesting phenomenon in classification based on neural networks is that even in a deep linear model or rectifier network the top layer is often non-linear, as it uses softmax or sigmoid activation to produce probability estimates. Once this is introduced, also the partial derivatives stop being piece-wise linear. We believe that one can achieve faster, better convergence when we ensure that architecture together with loss function, produces a piecewise linear partial derivatives (but not constant) wrt. to final layer activations, especially while using first order optimisation methods. This property is true only for $\mathcal{L}_2$ loss and squared hinge loss (see Figure~\ref{fig:deriviatives}) among all considered ones in this paper. \begin{figure}[h] \includegraphics[width=0.375\textwidth]{losses.png} \includegraphics[width=0.3\textwidth]{dL.png} \includegraphics[width=0.3\textwidth]{dLs.png} \caption{Left: Visualisation of analysed losses as functions of activation on positive sample. Middle: Visualisation of partial derivatives wrt. to output neuron for losses based on linear output. Right: Visualisation of partial derivatives wrt. to output neuron for losses based on probability estimates.} \label{fig:deriviatives} \end{figure} Finally we show relation between Cauchy-Schwarz Divergence loss and the log loss, justifying its introduction as an objective for neural nets. \begin{prop} Cauchy-Schwarz Divergence loss is equivalent to cross entropy loss regularised with half of expected Renyi's quadratic entropy of the predictions. 
\end{prop} \begin{proof} Using the fact that $\forall_i\exists!_j : {\mathbf{y}_i^{(j)}} = 1$ we get that $ \log \sum\nolimits_j {\mathbf{p}_i^{(j)}} {\mathbf{y}_i^{(j)}} = \sum\nolimits_j {\mathbf{y}_i^{(j)}} \log {\mathbf{p}_i^{(j)}} $ as well as $\|\mathbf{y}_i\|_2 = 1$, consequently \begin{equation*} \begin{aligned} D_\mathrm{CS} & = - \tfrac{1}{N} \sum\nolimits_i \log \tfrac{\sum\nolimits_j {\mathbf{p}_i^{(j)}} {\mathbf{y}_i^{(j)}}}{\| {\mathbf{p}_i} \|_2 \| {\mathbf{y}_i} \|_2} = - \tfrac{1}{N} \sum\nolimits_i \log \sum\nolimits_j {\mathbf{p}_i^{(j)}} {\mathbf{y}_i^{(j)}} + \tfrac{1}{N} \sum\nolimits_i \log \| {\mathbf{p}_i} \|_2 \| {\mathbf{y}_i} \|_2 \\ &=- \tfrac{1}{N} \sum\nolimits_i \sum\nolimits_j {\mathbf{y}_i^{(j)}} \log {\mathbf{p}_i^{(j)}} + \tfrac{1}{2N} \sum\nolimits_i \log \| {\mathbf{p}_i} \|^2_2 \approx \mathcal{L}_\mathrm{log} + \tfrac{1}{2}\mathbb{E}_{P(\mathbf{x},\mathbf{y})}[H_2(\mathbf{p}_i)] \end{aligned} \end{equation*} \end{proof} \section{Experiments} We begin the experimental section with two simple 2D toy datasets. The first one is checkerboard -- 4 class classification problem where [-1,1] square is divided into 64 small squares with cyclic class assignment. The second one, spiral, is a 4 class generalisation of the well known 2 spirals dataset. Both datasets have 800 training and 800 testing samples. We train rectifier neural network having from 0 to 5 hidden layers with 200 units in each of them. Training is performed using Adam~\cite{kingma2014adam} with learning rate of $0.00003$ for 60,000 iterations with batch size of 50 samples. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{toy_training.png} \includegraphics[width=0.23\textwidth]{l1_5.png} \includegraphics[width=0.23\textwidth]{logcrossentropy_5.png} \includegraphics[width=0.23\textwidth]{exp_loss_correct_5.png} \includegraphics[width=0.23\textwidth]{margin2_5.png} \caption{Top row: Learning curves for toy datasets. Bottom row: examples of decision boundaries, from left: $\mathcal{L}_1$ loss, log loss, $\mathcal{L}_1 \circ \sigma$ loss, hinge$^2$ loss.} \label{fig:toy_train} \end{figure} In these simple problems one can distinguish (Figure~\ref{fig:toy_train}) two groups of losses -- one able to fit to our very dense, low-dimensional data and one struggling to reduce error to 0. The second group consists of $\mathcal{L}_1$, Chebyshev, Tanimoto and expectation loss. This division becomes clear once we build a relatively deep model (5 hidden layers), while for shallow ones this distinction is not very clear (3 hidden layers) or is even completely lost (1 hidden layer or linear model). To further confirm the lack of ability to easily overfit we also ran an experiment in which we tried to fit 800 samples from uniform distribution over $[-1,1]$ with randomly assigned 4 labels and achieved analogous partitioning. During following, real data-based experiments, we focus on further investigation of loss functions properties emerging after application to deep models, as well as characteristics of the created models. In particular, we show that lack of ability to reduce training error to 0 is often correlated with robustness to various types of noise (despite not underfitting the data). Let us now proceed with one of the most common datasets used in deep learning community -- MNIST~\cite{lecun1998mnist}. We train network consisting from 0 to 5 hidden layers, each followed by ReLU activation function and dropout~\cite{srivastava2014dropout} with 50\% probability. 
Each hidden layer consists of 512 neurons, and whole model is trained using Adam~\cite{kingma2014adam} with learning rate of $0.00003$ for 100,000 iterations using batch size of 100 samples. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{mnist_training.png}\vspace{0.25cm} \includegraphics[width=0.25\textwidth]{cifar10_exp2.png} \includegraphics[width=0.25\textwidth]{cifar10_exp.png} \includegraphics[width=0.47\textwidth]{cifar10_training.png} \caption{Top two rows: learning curves for MNIST dataset. Bottom row: (left) speed of learning expressed as expected training/testing accuracy when we sample iteration uniformly between 10k and 100k; (right) learning curves for CIFAR10 dataset.} \label{fig:mnist_train} \end{figure} There are few interesting findings, visible on Figure~\ref{fig:mnist_train}. First, results obtained for a linear model (lack of hidden layers) are qualitatively different from all the remaining ones. For example, using regularised expectation loss leads to the strongest model in terms of both training accuracy and generalisation capabilities, while the same loss function is far from being the best one once we introduce non-linearities. This shows two important things: first -- observations and conclusions drawn from linear models do not seem to transfer to deep nets, and second -- there seems to be an interesting co-dependence between learning dynamics coming from training rectifier nets and loss functions used. As a side note, 93\% testing accuracy, obtained by $\mathcal{L}_2 \circ \sigma$ and $D_\mathrm{CS}$, is a very strong result on MNIST using linear model without any data augmentation or model regularisation. Second interesting observation regards the speed of learning. It appears that (apart from linear models) hinge$^2$ and hinge$^3$ losses are consistently the fastest in training, and once we have enough hidden layers (basically more than 1) also $\mathcal{L}_2$. This matches our theoretical analysis of these losses in the previous section. At the same time both expectation losses are much slower to train, which we believe to be a result of their vanishing partial derivatives in heavily misclassified points (Proposition~\ref{prop:vanish}). It is important to notice that while higher order hinge losses (especially 2$^\mathrm{nd}$) actually help in terms of both speed and final performance, the same property does not hold for higher order log losses. One possible explanation is that taking a square of log loss only reduces model's certainty in classification (since any number between 0 and 1 taken to 2$^\mathrm{nd}$ power decreases), while for hinge losses the metric used for penalising margin-outliers is changed, and both L$_1$ metric (leading to hinge) as well as any other L$_p$ norm (leading to hinge$^p$) make perfect sense. Third remark is that pure $\mathcal{L}_1$ does not learn at all (ending up with ~20\% accuracy) due to causing serious ``jumps'' in the model because of its partial derivatives wrt. to net output always being either -1 or 1. Consequently, even after classifying a point correctly, we are still heavily penalised for it, while with losses like $\mathcal{L}_2$ the closer we are to the correct classification - the smaller the penalty is. Finally, in terms of generalisation capabilities margin-based losses seem to outperform the remaining families. 
One could argue that this is just a result of lack of regularisation in the rest of the losses, however we underline that all the analysed networks use strong dropout to counter the overfitting problem, and that typical $\mathrm{L}_1$ or $\mathrm{L}_2$ regularisation penalties do not work well in deep networks. For CIFAR10 dataset we used a simple convnet, consisting of 3 layers of convolutions, each of size 5x5 and 64 filters, with ReLU activation functions, batch-normalisation and pooling operations in between them (max pooling after first layer and then two average poolings, all 3x3 with stride 2), followed by a single fully connected hidden layer with 128 ReLU neurons, and final softmax layer with 10 neurons. As one can see in Figure~\ref{fig:mnist_train}, despite completely different architecture than before, we obtain very similar results -- higher order margin losses lead to faster training and significantly stronger models. Quite surprisingly -- $\mathcal{L}_2$ loss also exhibits similar property. Expectation losses again learn much slower (with the regularised one -- training at the level of log loss and unregularised -- significantly worse). We would like to underline that this is a very simple architecture, far from the state-of-the art models for CIFAR10, however we wish to avoid using architectures which are heavily overfitted to the log loss. Furthermore, the aim of this paper is not to provide any state-of-the-art models, but rather to characterise effects of loss functions on deep networks. As the final interesting result in these experiments, we notice that Cauchy-Schwarz Divergence as the optimisation criterion seems to be a consistently better choice than log loss. It performs equally well or better on both MNIST and CIFAR10 in terms of both learning speed and the final performance. At the same time this information theoretic measure is very rarely used in DL community, and rather exploited in shallow learning (for both classification~\cite{czarnecki2015maximum} and clustering~\cite{principe2000information}). Now we focus on the impact these losses have on noise robustness of the deep nets. We start by performing the following experiment on previously trained MNIST classifiers: we add noise sampled from $\mathcal{N}(0, \epsilon \mathbf{I})$ to each $\mathbf{x}_i$ and observe how quickly (in terms of growing $\epsilon$) network's training accuracy drops (Figure~\ref{fig:mnist_eps}). \begin{figure}[htb] \includegraphics[width=0.925\textwidth]{mnist_eps.png}\\ \includegraphics[width=\textwidth]{mnist_noise.png} \caption{Top row: Training accuracy curves for the MNIST trained models, when presented with training examples with added noise from $\mathcal{N}(0, \epsilon \mathbf{I})$, plotted as a function of $\epsilon$. Middle and bottom rows: Testing accuracy curves for the MNSIT experiment with $\epsilon$ of training labels changed, plotted as a function of training iteration. If $\mathcal{L}_1 \circ \sigma$ is not visible, it is almost perfectly overlapped by $\mathcal{L}_\infty \circ \sigma$.} \label{fig:mnist_eps} \end{figure} The first crucial observation is that both expectation losses perform very well in terms of input noise robustness. 
We believe that this is a consequence of what Proposition~\ref{prop:l1exp} showed about their probabilistic interpretation -- that they lead to minimisation of the expected misclassification, which is less biased towards outliers than log loss (or other losses that focus on maximisation of probability of correct labelling of all samples at the same time). For log loss a single heavily misclassified point has an enormous impact on the overall error surface, while for these two losses -- it is minor. Secondly, margin based losses also perform well on this test, usually slightly worse than the expectation losses, but still better than log loss. This shows that despite no longer maximising the misclassification margin while being used in deep nets -- they still share some characteristics with their linear origins (SVM). In another, similar experiment, we focus on the generalisation capabilities of the networks trained with increasing amount of label noise in the training set (Figure~\ref{fig:mnist_eps}) and obtain analogous results, showing that robustness to the noise of expectation and margin losses is high for both input and label noise for deep nets, while again -- slightly different results are obtained for linear models, where log loss is more robust than the margin-based ones. What is even more interesting, a completely non-standard loss function -- \emph{Tanimoto loss} -- performs extremely well on this task. We believe that its exact analysis is one of the important future research directions. \section{Conclusions} This paper provides basic analysis of effects the choice of the classification loss function has on deep neural networks training as well as their final characteristics. We believe the obtained results will lead to a wider adoption of various losses in DL work -- where up till now log loss is unquestionable favourite. In the theoretical section we show that, surprisingly, losses which are believed to be applicable mostly to regression, have a valid probabilistic interpretation when applied to deep network-based classifiers. We also provide theoretical arguments explaining why using them might lead to slower training, which might be one of the reasons DL practitioners have not yet exploited this path. Our experiments lead to two crucial conclusions. First, that intuitions drawn from linear models rarely transfer to highly-nonlinear deep networks. Second, that depending on the application of the deep model -- losses other than log loss are preferable. In particular, for purely accuracy focused research, squared hinge loss seems to be a better choice at it converges faster as well as provides better performance. It is also more robust to noise in the training set labelling and slightly more robust to noise in the input space. However, if one works with highly noised dataset (both input and output spaces) -- the expectation losses described in detail in this paper -- seem to be the best choice, both from theoretical and empirical perspective. At the same time this topic is far from being exhausted, with a large amount of possible paths to follow and questions to be answered. In particular, non-classical loss functions such as Tanimoto loss and Cauchy-Schwarz Divergence are worth further investigation. \bibliographystyle{plain}
{ "timestamp": "2017-02-21T02:05:11", "yymm": "1702", "arxiv_id": "1702.05659", "language": "en", "url": "https://arxiv.org/abs/1702.05659", "abstract": "Deep neural networks are currently among the most commonly used classifiers. Despite easily achieving very good performance, one of the best selling points of these models is their modular design - one can conveniently adapt their architecture to specific needs, change connectivity patterns, attach specialised layers, experiment with a large amount of activation functions, normalisation schemes and many others. While one can find impressively wide spread of various configurations of almost every aspect of the deep nets, one element is, in authors' opinion, underrepresented - while solving classification problems, vast majority of papers and applications simply use log loss. In this paper we try to investigate how particular choices of loss functions affect deep models and their learning dynamics, as well as resulting classifiers robustness to various effects. We perform experiments on classical datasets, as well as provide some additional, theoretical insights into the problem. In particular we show that L1 and L2 losses are, quite surprisingly, justified classification objectives for deep nets, by providing probabilistic interpretation in terms of expected misclassification. We also introduce two losses which are not typically used as deep nets objectives and show that they are viable alternatives to the existing ones.", "subjects": "Machine Learning (cs.LG)", "title": "On Loss Functions for Deep Neural Networks in Classification", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9579122768904645, "lm_q2_score": 0.7401743677704878, "lm_q1q2_score": 0.7090221139269881 }
https://arxiv.org/abs/2112.07184
Calibrated and Sharp Uncertainties in Deep Learning via Density Estimation
Accurate probabilistic predictions can be characterized by two properties -- calibration and sharpness. However, standard maximum likelihood training yields models that are poorly calibrated and thus inaccurate -- a 90% confidence interval typically does not contain the true outcome 90% of the time. This paper argues that calibration is important in practice and is easy to maintain by performing low-dimensional density estimation. We introduce a simple training procedure based on recalibration that yields calibrated models without sacrificing overall performance; unlike previous approaches, ours ensures the most general property of distribution calibration and applies to any model, including neural networks. We formally prove the correctness of our procedure assuming that we can estimate densities in low dimensions and we establish uniform convergence bounds. Our results yield empirical performance improvements on linear and deep Bayesian models and suggest that calibration should be increasingly leveraged across machine learning.
\section{Obtaining Calibrated And Sharp Uncertainties in Practice}\label{sec:algorithm} \section{Enforcing Distribution Calibration via Density Estimation}\label{sec:algorithm} This section introduces algorithms that ensure the distirbution calibration of any predictive machine learning model while maintaining sharpness. Unlike existing methods for distribution calibration, ours can be used with any model (not just ones that output Gaussians), are very simple to implement in differentiable programming frameworks, and have theoretical guarantees. We first assume there exists a parameterization $\Phi_1$ of the probabilities returned by forecaster $H$: for each $p \in \Delta(\mathcal{Y})$ returned by $H$, there exist parameters $\phi \in \Phi_1$ that describe $p$. The $\phi$ can be the natural parameters of an exponential family distribution, such as $(\mu, \sigma^2)$ describing a Gaussian. We consider a class of algorithms based on a classic approach called recalibration. First, we train a base forecaster $H$ to minimize a proper loss $L$. Then, we train an auxiliary model $R : \Phi_1 \to \Phi_2$ (called the recalibrator) over the outputs of $H$ that outputs the parameters $\phi_2 \in \Phi_2$ of another distribution such that $L$ is minimized. Here $\Phi_2$ is a second parameterization of $\Delta(\mathcal{Y})$ (possibly the same). As a result, the forecasts $(R \circ H)(X)$ will be calibrated. We provide details in Algorithm \ref{alg:recal}. \begin{algorithm} \caption{Calibrated Learning of Probabilistic Models.} \label{alg:recal} \textbf{Input:} Model $H : \mathcal{X} \to \Phi_1$, recalibrator $R : \Phi_1 \to \Phi_2$, training set $\mathcal{D}$, recalibration set $\mathcal{C}$\\ \textbf{Output:} Recalibrated model $R \circ H : \mathcal{X} \to \Phi_2$. \begin{enumerate} \item Fit the base model on $\mathcal{D}$: $\min_{H} \sum_{(x, y) \in \mathcal{D}} L(H(x), y)$ \item Fit the recalibration model $R$ on the output of $H$ on $\mathcal{C}$: $\min_{R} \sum_{(x,y) \in \mathcal{C}} L\left((R \circ H)(x), y\right)$ \end{enumerate} \end{algorithm} Implementing this approach requires choosing parameterizations of probabilities $\Phi_1, \Phi_2$, a recalibration model $R$, and an objective $L$. We discuss these choices below; then we clarify how \ref{alg:recal} differs from existing recalibration algorithms. \paragraph{Parameterizing Probability Distributions.} When fitting a classification model, each distribution $\Delta(\mathcal{Y})$ is a categorical and can be parameterized via its $K \geq 2$ class membership probabilities. In regression, most widely used models such as neural networks already output parameterized probabilities, in which the $\phi$ are usually the natural parameters of an exponential family model. In the most general case, if we only have black-box access to a density function or a cumulative distribution function, we may form a $d$-dimensional representation by evaluating the distribution at a grid of $d$ points. For example, if we have black-box access to a quantile function $F^{-1}$, we may featurize $F$ via its sequence of quantiles $\phi(F) = (F^{-1}(\alpha_i))_{i=1}^d$ for some sequence of $d$ levels $\alpha_i$, possibly chosen uniformly in $[0,1]$. 
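To make the procedure concrete, here is a minimal sketch of Algorithm~\ref{alg:recal} in the regression setting with quantile featurization; the interfaces \texttt{fit}, \texttt{predict\_dist}, and \texttt{icdf} are illustrative assumptions we introduce here, not part of any specific library or of the original implementation.

\begin{verbatim}
import numpy as np

ALPHAS = np.linspace(0.1, 0.9, 9)       # levels used to featurize Phi_1

def featurize(forecast):
    # represent a black-box forecast F by its quantiles F^{-1}(alpha_i)
    return np.array([forecast.icdf(a) for a in ALPHAS])

def recalibrate(base_model, recalibrator, train_set, recal_set):
    X_tr, y_tr = train_set
    X_cal, y_cal = recal_set
    # Step 1: fit the base forecaster H on D by minimizing the proper loss L
    base_model.fit(X_tr, y_tr)
    # Step 2: fit the recalibrator R on featurized outputs of H on C,
    # minimizing the same proper loss L
    phi = np.stack([featurize(base_model.predict_dist(x)) for x in X_cal])
    recalibrator.fit(phi, y_cal)
    # the recalibrated forecaster is the composition R o H
    def forecast(x):
        return recalibrator.predict_dist(featurize(base_model.predict_dist(x)))
    return forecast
\end{verbatim}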
In addition to the above techniques, the representation of output probabilities in $\Phi_2$ coming from $R$ can leverage flexible invertible models of the CDF, following methods developed in the normalizing flow literature, including monotonic neural networks, sum-of-squares polynomials (Wehenkel and Louppe, 2019; Jaini et al., 2019) spline functions (Muller et al., 2019; Durkan et al., 2019), piece-wise separable models (Wehenkel and Louppe, 2019), and others. \paragraph{Choosing a Recalibrator.} Ideal recalibrators are highly effective at optimizing the proper loss $L$ (see Section \ref{sec:recalibration}). In a simple setting like binary classification, our task reduces to one-dimensional density estimation; in such cases we can provably achieve calibration asymptotically by using kernel density estimation for the recalibrator $R$, while controlling the kernel width as a function of the dataset size to trade off overfitting and underfitting \citep{wasserman2006all}. In regression settings, we may rely on other non-parametric techniques such as Gaussian processes. An alternative approach is to rely on expressive neural networks; although their optimization is a non-convex, they are very effective at fitting proper losses $L$, feature mature regularization techniques, and can be implemented easily within deep learning frameworks, possibly within the same computation graph as a neural forecaster $H$, which can simplify deployment. In the classification setting, a natural architecture for $R$ is a sequence of dense layers mapping the simplex $\Delta_K$ into $\Delta_K$. In regression settings, $R$ needs to output a density function: a natural architecture for this is a mixture density network (MDN; \citet{bishop1994mixture}). \paragraph{Choosing a Proper Loss} A natural choice of proper loss is the log-loss. It applies in both calibration and regression; optimizing it is a standard supervised learning problem. In regression settings, we found that using the quantile loss $L={\mathbb{E}}_{\tau\in U[0,1]}{\mathbb{E}}_{y\sim G} \rho_\tau(y-F^{-1}(\tau))$ (see Table \ref{tbl:properlosses}) was numerically stable and produced the best performance. This objective fits a model $R_\theta(\tau; \phi)$ to estimate the $\tau$-th conditional quantile $F^{-1}(\tau)$ at $\phi$. When $R_\theta(\tau; \phi)$ is a neural network that takes in $\tau$ and $x$, we minimize the quantile loss ${\mathbb{E}}_{\tau\in U[0,1]}{\mathbb{E}}_{y\sim G} \rho_\tau(y-F^{-1}(\tau))$ using gradient descent, approximating both expectations using Monte Carlo (details are in the experiments section). \paragraph{Comparing Against Song et al.} Interestingly, the method of \citet{song2019distribution} is a special case of ours when $\Phi_1$ consists of Gaussian natural parameters, $\Phi_2$ consists of parameters for the Beta link function, $R$ is a Gaussian process, and $L$ is the log-likelihood. However, the resulting problem can only be solved using variational inference, which is slow and complex to implement. Our framework instead admits simple solutions based on gradient descent. \paragraph{Comparing Against Other Approaches.} Algorithm \ref{alg:recal} performs recalibration like many previous methods (e.g., Platt, Kuleshov et al., Song et al., etc.); it may thus appear to be the same as these methods. However, that is not the case. First, existing recalibration approaches operate over the space of probabilities (class probabilities or CDF values); ours operates over {\em functional parameters}. 
This is what enables it to achieve distribution rather than quantile calibration \citep{kuleshov2018accurate}. Our approach also involves novel recalibration objectives (e.g., the quantile divergence in regression) which differ from the calibration error of \citet{kuleshov2018accurate}. We also use different types of models (small neural networks instead of isotonic regression) and different optimization procedures (stochastic gradient descent instead of variational inference). Thus, our recalibration strategy is distinct from previous work.

\section{Recalibration with proper losses}

We now extend our proofs to arbitrary proper losses. In this section, we assume, again, for expository purposes, that calibration is measured using the $\ell_1$ loss. This loss is often considered in the literature; in particular, the current best recalibration algorithm by \citet{abernethy11blackwell} targets the $\ell_1$ loss directly. We will consider general losses in the next section.

\subsection{Notation}

\paragraph{Calibration.} We define the calibration error of a forecaster $F^\mathrm{cal}$ as
\begin{equation}
C_{T} = \sum_{i=0}^N \left| \rho_T(i/N) - \frac{i}{N} \right| \left( \frac{1}{T} \sum_{t=1}^T {\mathbb{I}}_{\{p_t = \frac{i}{N}\}} \right),
\end{equation}
where $\rho_T(p) = \frac{\sum_{t=1}^T y_t {\mathbb{I}}_{p_t = p}}{\sum_{t=1}^T {\mathbb{I}}_{p_t = p}}$ denotes the frequency at which the event $y = 1$ occurred over the times when we predicted $p$.
\begin{defn}
We say that a loss $\ell(y,p) : \{0,1\} \times [0,1] \to \mathbb{R}_+$ is proper if $p \in \arg\min_q {\mathbb{E}}_{y \sim \text{Ber}(p)} \ell(y, q). $
\end{defn}
Examples of proper losses include the L2 loss $\ell_2(y,p) = (y-p)^2$, the log-loss $\ell_\text{log}(y,p) = -y\log(p) - (1-y)\log(1-p)$, and the misclassification loss $\ell_\text{mc}(y,p) = (1-y) {\mathbb{I}}_{p < 0.5} + y {\mathbb{I}}_{p \geq 0.5}$ \footnote{There exists a vast literature on proper losses, particularly in connection with calibration. See the extensive survey by Buja et al. (http://www-stat.wharton.upenn.edu/~buja/PAPERS/paper-proper-scoring.pdf) for more details.}. Counter-examples include the L1 and the hinge losses.

\subsection{Calibration implies no internal regret}

Here, we show that a calibrated forecaster also has small internal regret relative to any bounded proper loss. This lemma is all we need to extend Lemma 2 to general proper losses, i.e. to show that recalibrated forecasts have low regret relative to the uncalibrated forecaster. Note that the proof below has the same structure as that of an earlier theorem by Foster and Vohra\footnote{Theorem 1 in Dean Foster and Rakesh Vohra, {\em Calibrated Learning and Correlated Equilibrium}, Games and Economic Behavior, 1997}.
\begin{lemma}
Let $\ell(y,p)$ be a bounded proper loss with $\ell(y,p) < B$ over the entire domain. Suppose that $F^\mathrm{cal}$ is $(\epsilon, \ell_1)$-calibrated with $C_{T} \leq R_T + \epsilon$ for all $T$, where $R_T = o(1)$ as $T \to \infty$. Then w.h.p. $F^\mathrm{cal}$ has small internal regret with respect to $\ell$:
$$ R^\mathrm{int}_{T} = \max_{ij} \sum_{t=1}^T {\mathbb{I}}_{ti} \left( \ell(i/N, y_t) - \ell(j/N, y_t) \right) \leq 2 B (R_T + \epsilon), $$
where ${\mathbb{I}}_{ti} = {\mathbb{I}}_{p_t = i/N}$ is the indicator of $F^\mathrm{cal}$ outputting prediction $i/N$ at time $t$.
\end{lemma}
\begin{proof}
Let $T$ be fixed for the rest of this proof.
Let ${\mathbb{I}}_{ti} = {\mathbb{I}}_{p_t = i/N}$ be the indicator of $F^\mathrm{cal}$ outputting prediction $i/N$ at time $t$, let $T_i = \sum_{t=1}^T {\mathbb{I}}_{ti}$ denote the number of time $i/N$ was predicted, and let $$ R^\mathrm{int}_{T, ij} = \sum_{t=1}^T {\mathbb{I}}_{ti} \left( \ell(i/N, y_t) - \ell(j/N, y_t) \right) $$ denote the gain (measured using the proper loss $\ell$) from retrospectively switching all the plays of action $i$ to $j$. This value forms the basis of the definition of internal regret (Section 2). Let $T(i,y) = \sum_{t=1}^T {\mathbb{I}}_{ti} {\mathbb{I}}\{y_t = y\}$ denote the total number of $i/N$ forecasts at times when $y_t = y \in \{0,1\}$. Observe that we have \begin{align*} T(i,y) & = \sum_{t=1}^T {\mathbb{I}}_{ti} {\mathbb{I}}\{y_t = y\} = \frac{\sum_{t=1}^T {\mathbb{I}}_{ti} {\mathbb{I}}\{y_t = y\} }{T_i} T_i = \frac{\sum_{t=1}^T {\mathbb{I}}_{ti} {\mathbb{I}}\{y_t = y\} }{\sum_{t=1}^T {\mathbb{I}}_{ti}} T_i \\ & = q(i,y) T_i + T_i \left( \frac{\sum_{t=1}^T {\mathbb{I}}_{ti} {\mathbb{I}}\{y_t = y\} }{\sum_{t=1}^T {\mathbb{I}}_{ti}} - q(i,y) \right) \\ & = q(i,y) T_i + T_i \left( \rho_T(i/N) - i/N \right), \end{align*} where $q(i,y) = i/N$ if $y=1$ and $1-i/N$ if $y=0$. The last equality follows using some simple algebra after adding and subtracting one inside the parentheses in the second term. We now use this expression to bound $R^\mathrm{int}_{T, ij}$: \begin{align*} R^\mathrm{int}_{T, ij} & = \sum_{t=1}^T {\mathbb{I}}_{ti} \left( \ell(i/N, y_t) - \ell(j/N, y_t) \right) \\ & = \sum_{y \in \{0,1\}} T(i,y) \left( \ell(i/N, y) - \ell(j/N, y) \right) \\ & \leq \sum_{y \in \{0,1\}} q(i,y) T_i \left( \ell(i/N, y) - \ell(j/N, y) \right) + \sum_{y \in \{0,1\}} B T_i \left| \rho_T(i/N) - i/N \right| \\ & \leq 2B T_i \left| \rho_T(i/N) - i/N \right|, \end{align*} where in the first inequality, we used $\ell(i/N, y) - \ell(j/N, y) \leq \ell(i/N, y) \leq B$, and in the second inequality we used the fact that $\ell$ is a proper loss. Since internal regret equals $R^\mathrm{int}_{T} = \max_{i,j} R^\mathrm{int}_{T, ij}$, we have \begin{align*} R^\mathrm{int}_{T} & \leq \sum_{i=1}^N \max_{j} R^\mathrm{int}_{T, ij} \leq 2B \sum_{i=0}^N T_i \left| \rho(i/N) - i/N \right| \leq 2 B ( R_T + \epsilon ). \end{align*} \end{proof} \subsection{Recalibrated forecasts have low regret relative to uncalibrated forecasts} We now use the above result to prove an extension of Lemma 2 to general proper losses, i.e. we show that the forecasts recalibrated using Algorithm 2 have low regret relative to the baseline uncalibrated forecasts. \begin{lemma}\label{lem:regret} Consider an instance of Algorithm 2 with parameters $M \geq N$, and $\ell$ be a proper loss that is \begin{enumerate} \item Bounded in absolute value by $B>0$ \item $\ell(y_t, p) \leq \ell(y_t, j/M) + B/M$ whenever $p \in [j/M, (j+1)/M)$. \item $\ell(y_t, p) \leq \ell(y_t, i/N) + B/N$ whenever $p \in [i/N, (i+1)/N)$. \end{enumerate} The recalibrated forecasts $p_t$ have vanishing $\ell$-loss regret relative to $p^F_t$: $$ \lim_{T\to\infty} \left( \frac{1}{T} \sum_{t=1}^T \ell (y_t , p_t) - \frac{1}{T} \sum_{t=1}^T \ell(y_t , p^F_t) \right) < 3B/N. $$ \end{lemma} \begin{proof} By the previous lemma, we know that an algorithm with resolution $\frac{1}{N}$ whose calibration error is bounded by $R_T = o(1)$ also minimizes internal regret at a rate of $2BR_T$, and thus external regret at a rate of $2NBR_T$. 
Next, let us use ${\mathbb{I}}_{j,t} = {\mathbb{I}} \{p^F_t \in [\frac{j-1}{M},\frac{j}{M})\}$ to indicate that $F^\mathrm{cal}_j$ was called at time $t$. Also, let $i_j$ denote the index $i \in [N]$ associated with the interval $[i/N, (i+1)/N)$ in which $j/M$ falls. We establish our main claim as follows: \begin{align*} & \frac{1}{T} \sum_{t=1}^T \ell (y_t , p_t) - \frac{1}{T} \sum_{t=1}^T \ell (y_t , p^F_t) \\ & \;\; = \frac{1}{T} \sum_{t=1}^T \left( \sum_{j=1}^M \left( \ell (y_t , p_t) - \ell (y_t , p^F_t) \right) {\mathbb{I}}_{j,t} \right) \\ & \;\; < \frac{1}{T} \sum_{t=1}^T \left( \sum_{j=1}^M \left( \ell (y_t , p_t) - \ell (y_t , \frac{j}{M}) \right) {\mathbb{I}}_{j,t} + \frac{B}{N}\right) \\ & \;\; < \frac{1}{T} \sum_{t=1}^T \left( \sum_{j=1}^M \left( \ell (y_t , p_t) - \ell (y_t , \frac{i_j}{N}) \right) {\mathbb{I}}_{j,t} + \frac{2B}{N}\right) \\ & \;\; \leq N B \sum_{t=1}^T \sum_{j=1}^M \frac{T_j}{T} R_{T_j} + \frac{3B}{N}, \end{align*} where $R_{T_j}$ is a bound on the calibration error of $F^\mathrm{cal}_j$ after $T_j$ plays. In the first two inequalities, we use our assumption on the loss $\ell$, and that $ \frac{1}{M} \leq \frac{1}{N}$. The last inequality follows because $F^\mathrm{cal}_j$ minimizes external regret w.r.t.~the constant action $i_j$ at a rate of $NBR_{T_j}$. \end{proof} \subsection{Correctness of Algorithm 2 using general proper losses} We now prove our main result about the correctness of Algorithm 2. \begin{lemma} Let $\ell$ be a proper loss that is \begin{enumerate} \item Bounded in absolute value by $B\geq 1$ \item $\ell(y_t, p) \leq \ell(y_t, j/M) + B/M$ whenever $p \in [j/M, (j+1)/M)$. \item $\ell(y_t, p) \leq \ell(y_t, i/N) + B/N$ whenever $p \in [i/N, (i+1)/N)$. \end{enumerate} Let $F^\mathrm{cal}$ be an $(\ell_1, \epsilon/3B)$-calibrated online algorithm with resolution $N \geq 3B/\epsilon$. Then Algorithm 2 is an $\epsilon$-accurate online recalibration algorithm for the loss $\ell$. \end{lemma} \begin{proof} It is easy to show that Algorithm 2 is $(\ell_1, \epsilon/3B)$-calibrated by the same argument as Lemma 1 (see the next section for a formal proof). By Lemma 4, its regret w.r.t. the raw $p^F_t$ tends to $< 3B/N < \epsilon$. Hence, the theorem follows. \end{proof} Finally, we would like to instantiate this lemma with the misclassification loss $\ell_\text{mc}(y,p) = (1-y) {\mathbb{I}}_{p < 0.5} + y {\mathbb{I}}_{p \geq 0.5}$, which is arguably the most interesting and general loss. \begin{theorem} Let $F^\mathrm{cal}$ be an $(\ell_1, \epsilon/3)$-calibrated online algorithm with resolution $N \geq 3/\epsilon$, where $N$ is a power of two. Then Algorithm 2 is an $\epsilon$-accurate online recalibration algorithm for the loss $\ell_\text{mc}$. \end{theorem} \begin{proof} We only need to show that $\ell_\text{mc}$ satisfies the requirements of the above lemma. Clearly, we have $B=1$. Note that if $N$ is a power of two, the point $0.5$ is contained in an interval $[0.5, 0.5+1/N)$; hence, by construction, $\ell_\text{mc}(y,p)$ satisfies the other two conditions, since changing $p$ doesn't affect the loss, as long as $p$ is in the same interval. \end{proof} \section{Calibration using general losses} In the previous section, we assumed that we defined calibration using the $\ell_1$ loss; here, we argue that any convex loss $\ell$ may be used, although this may affect the convergence rate of our method. We give a detailed analysis of convergence rates for the $\ell_2$ loss. 
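As a practical aside, the calibration errors analysed in this section are straightforward to estimate from data. The following sketch (our own illustration, not part of the paper's code) computes the discretised calibration error of a stream of binary forecasts $p_t$ on the grid $\{0, 1/N, \dots, 1\}$, for either an $\ell_1$ or an $\ell_2$ notion of miscalibration.

\begin{verbatim}
import numpy as np

def calibration_error(p, y, N, loss="l1"):
    # C_T on the grid {0, 1/N, ..., 1}: distance between the empirical
    # frequency rho_T(i/N) and the prediction i/N, weighted by how often
    # each value i/N was predicted
    p, y = np.asarray(p, float), np.asarray(y, float)
    T, err = len(p), 0.0
    for i in range(N + 1):
        mask = np.isclose(p, i / N)
        if not mask.any():
            continue
        rho = y[mask].mean()                       # rho_T(i/N)
        gap = abs(rho - i / N) if loss == "l1" else (rho - i / N) ** 2
        err += gap * mask.sum() / T
    return err
\end{verbatim}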
\subsection{Notation}

We define the calibration error relative to the loss $\ell$ as
\begin{equation}
C_{T, \ell} = \sum_{i=0}^N \ell \left( \rho_T(i/N), \frac{i}{N} \right) \left( \frac{1}{T} \sum_{t=1}^T {\mathbb{I}}_{\{p_t = \frac{i}{N}\}} \right),\label{eqn:cal_loss}
\end{equation}
which is the weighted distance between the $\rho_T(i/N)$ and the predicted probabilities $\frac{i}{N}$.
\begin{defn}[Abernethy et al. (2011)]
We say that $F^\mathrm{cal}$ is an $(\ell, \epsilon)$-calibrated algorithm with resolution $1/N$ if
$$ \lim \sup_{T \to \infty} C_{T, \ell} \leq \epsilon. $$
\end{defn}
Abernethy et al. (2011) gave examples of $(\ell_1, \epsilon)$-calibrated algorithms, where $\ell_1(x,y) = |x-y|$ denotes the L1 norm. Since L1 is the largest norm, their algorithm extends directly to other losses, e.g. $\ell_p(x,y) = \|x-y\|_p^p$.

\subsection{Analysis}

\paragraph{Arbitrary convex losses $\ell$.} First, it is not hard to see that if $C_{T,\ell} \to 0$ for some convex, continuous $\ell$, then it must be the case that $C_{T,\ell_1} \to 0$ as well. This can be established using a simple real-analysis-style argument based on continuity. More generally, since we can make $C_{T,\ell} < \epsilon$ for an arbitrarily small $\epsilon >0$ by increasing $N$ and $T$, this means that we can make $C_{T,\ell_1}$ arbitrarily small as well. The only question has to do with the rate of convergence, and this varies depending on the loss.

\paragraph{An analysis for the $\ell_2$ loss.} We can easily derive the correct convergence rate in simple cases, such as for the $\ell_2$ loss. It follows from the bound $\|x\|_1 \leq \sqrt{d}\, \|x\|_2$ that
$$\sum_{i=0}^N w_i |a_i| \leq \sum_{i=0}^N \sqrt{w_i} |a_i| \leq \sqrt{N+1} \sqrt{\sum_{i=0}^N w_i a_i^2} , $$
where we assume that $0 \leq w_i \leq 1$. Using this with our definition of calibration, we find that
$$ \sum_{i=0}^N \left| \rho_T(i/N) - \frac{i}{N} \right| \left( \frac{1}{T} \sum_{t=1}^T {\mathbb{I}}_{\{p_t = \frac{i}{N}\}} \right) \leq \sqrt{N+1}\left(\sqrt{R_T} + \sqrt{\epsilon}\right) $$
when $C_{T, \ell_2} \leq R_T + \epsilon$. This means that if an algorithm minimizes the L2 calibration error, the L1 calibration term will converge more slowly and to a larger value. To achieve a given level of accuracy we need to (among other things) square the resolution parameter $N$.

\subsection{Proving that calibration holds under any loss}

Finally, we give a simple extension of Lemma 1, which shows that Algorithm 2 preserves the $(\ell, \epsilon)$-calibration of its input forecaster. We use the same notation as in the main paper, except that we add an $\ell$ subscript to denote the loss.
\begin{lemma}\label{lem:calibration}
If each $F^\mathrm{cal}_j$ is $(\epsilon, \ell)$-calibrated with convex loss $\ell$ and with $ C^{(j)}_{T, \ell} \leq R_{T_j} + \epsilon $ for all $T$, where $R_{T_j} = o(1)$ as $T_j \to \infty$, then Algorithm 2 is also $(\epsilon, \ell)$-calibrated and
\begin{align}
C_{T, \ell} \leq \sum_{j=1}^M \frac{T_j}{T} R_{T_j} + \epsilon. \label{eqn:rate}
\end{align}
This bound holds uniformly over time $T$.
\end{lemma}
\begin{proof}
Let $\Ind^{(j)}_i = \sum_{t=1}^T \Ind^{(j)}_{t,i}$.
We may write \begin{align*} C_{T,i} & = \frac{\sum_{t=1}^T {\mathbb{I}}_{t,i}}{T} \ell \left( \rho_T(i/N) , \frac{i}{N} \right) \\ & = \frac{\sum_{j=1}^M \sum_{t=1}^T \Ind^{(j)}_{t,i}}{T} \ell \left( \frac{\sum_{j=1}^M \sum_{t=1}^T \Ind^{(j)}_{t,i} y_t}{\sum_{j=1}^M \sum_{t=1}^T \Ind^{(j)}_{t,i}} , \frac{i}{N} \right) \\ & = \frac{\sum_{j=1}^M \Ind^{(j)}_i}{T} \ell \left( \frac{\sum_{j=1}^M \Ind^{(j)}_i {\rho^{(j)}_T}(i/N) }{\sum_{j=1}^M \Ind^{(j)}_i } , \frac{i}{N} \right) \\ & \leq \sum_{j=1}^M \frac{\Ind^{(j)}_i}{T} \ell \left( {\rho^{(j)}_T}(i/N), \frac{i}{N} \right) = \sum_{j=1}^M \frac{T_j}{T} C^{(j)}_{T, i, \ell}, \end{align*} where in the last line we used Jensen's inequality. Plugging in this bound in the definition of $C_{T, \ell}$, we find that \begin{align} C_{T, \ell} & = \sum_{i=1}^N C_{T,i, \ell} \leq \sum_{j=1}^M \sum_{i=1}^N \frac{T_j}{T} C^{(j)}_{T,i, \ell} \nonumber\\ & \leq \sum_{j=1}^M \frac{T_j}{T} R_{T_j} + \epsilon, \nonumber \end{align} Since each $R_{T_j} \to 0$, Algorithm 2 will be $\epsilon$-calibrated. \end{proof} Note that this proof is essentially identical to that of the main paper. \section{Background}\label{sec:background} \subsection{Predictive Uncertainty in Machine Learning} Supervised machine learning models commonly predict a probability distribution over the output variables --- e.g. class membership probabilities or the parameters of an exponential family distribution. These predictive uncertainties are useful for interpretability, safety, and downstream decision-making. Aleatoric uncertainty captures the inherent noise in the data, while epistemic uncertainty arises from not having a large enough dataset to estimate model parameters \citep{kendall2017uncertainties}. \paragraph{Notation.} Formally, we say that a machine learning forecaster $H : \mathcal{X} \to \Delta(\mathcal{Y})$ outputs a probability distribution $F(y) : \mathcal{Y} \to [0,1]$ in the space $\Delta(\mathcal{Y})$ of distributions over $y$. We use $f$ to denote the probability density or probability mass function associated with $F$. The model $H$ is trained on a labeled dataset $x_t, y_t\in \mathcal{X} \times \mathcal{Y}$ for $t=1,2,...,T$ of i.i.d.~realizations of random variables $X, Y \sim \mathbb{P}$, where $\mathbb P$ is the data distribution. \subsection{What Defines Good Predictive Uncertainties?} The standard tool in statistics for evaluating the quality of predictive uncertainties is a proper scoring rule \citep{gneiting2007strictly}. Formally, a scoring rule $S : \Delta(\mathcal{Y}) \times \mathcal{Y} \to \mathbb{R}$ assigns a ``score" to a probabilistic forecast $F \in \Delta(\mathcal{Y})$ and a realized outcome $y \in \mathcal{Y}$. Given a true distribution $G \in \Delta(\mathcal{Y})$ for $y$, we use the notation $S(F,G)$ for the expected score $ S(F,G) = \mathbb{E}_{y \sim G} S(F, y). $ We say that a score $S$ is proper if it is minimized by $G$ when $G$ is the true distribution for $y$: $ S(F,G) \geq S(G,G) \text{ for all $F$}. $ When $S$ is proper, we also refer to it as a proper loss. An example of a proper loss is the log-likelihood $S(F,y) = \log f(y)$, where $f$ is the probability density or probability mass function of $F$. Another common loss is the check score $\rho_\tau(y, f) = \tau (y-f)$ if $y \geq f$ and $(1-\tau)(f-y)$ otherwise; it can be used to estimate the $\tau$-th quantile of a distribution. See Table \ref{tbl:properlosses} for additional examples. What are the qualities of a good probabilistic prediction, as measured by a proper scoring rule? 
It can be shown that every proper score is a sum of the following terms \citep{gneiting2007probabilistic}: $$\text{proper loss} = \text{calibration} \underbrace{- \text{sharpness} + \text{irreducible term}}_\text{refinement term}.$$ Thus, there are precisely two qualities that define an ideal forecast: calibration and sharpness. We examine each of them next. \begin{table} \begin{center} \begin{tabular}{l|c|c|c} \toprule {\bf Proper Score} & {\bf Loss} & {\bf Calibration} & {\bf Refinement} \\ & $L(F,G)$ & $L_c(F,Q)$ & $L_r(Q)$ \\ \midrule Logarithmic & ${\mathbb{E}}_{y\sim G}$ $\log f(y)$ & $\text{KL}(q||f)$ & $H(q)$ \\ CRPS & {\small ${\mathbb{E}}_{y\sim G}$ $(F(y) - G(y))^2$} & {\small $\int^{\infty}_{-\infty}(F(y) - Q(y))^2$dy} & {\small $\int^{\infty}_{-\infty} Q(y) (1 - Q(y))dy$} \\ Quantile & {\small ${\mathbb{E}}^{\tau\in U[0,1]}_{y\sim G} \rho_\tau(y-F^{-1}(\tau))$} & {\small $\int_0^1 \int^{F^{-1}(\tau)}_{Q^{-1}(\tau)}(Q(y) - \tau)dyd\tau$} & {\small ${\mathbb{E}}^{\tau\in U[0,1]}_{y\sim Q} \rho_\tau(y-Q^{-1}(\tau))$} \\ \bottomrule \end{tabular} \end{center} \caption{Proper loss functions. A proper loss is a function $L(F,G)$ over a forecast $F$ targeting a variable $y \in \mathcal{Y}$ whose true distribution is $G$ and for which $S(F,G) \geq S(G,G)$ for all $F$. Each $L(F,G)$ decomposes into the sum of a calibration loss term $L_c(F,Q)$ (also known as reliability) and a refinement loss term $L_r(Q)$ (which itself decomposes into a sharpness and an uncertainty term). Here, $Q(y)$ denotes the cumulative distribution function of the conditional distribution $\mathbb{P}(Y=y \mid F_X = F)$ of $Y$ given a forecast $F$, and $q(y), f(y)$ are the probability density functions of $Q$ and $F$, respectively. We give three examples of proper losses: the log-loss, the continuous ranked probability score (CRPS), and the quantile loss.}\label{tbl:properlosses} \end{table} \subsection{Calibration and Sharpness --- Two Qualities of an Ideal Prediction} Formally, calibration can be defined by the equation \begin{equation} \mathbb{P}(Y = y \mid F_X = F) = f(y) \textrm{ for all $y \in \mathcal{Y}$, $F \in \Delta(\mathcal{Y})$}, \label{eqn:calibration1} \end{equation} where $X, Y \sim \mathbb{P}$ are random variables corresponding to the input features and targets, and $F_X = H(X)$ is the forecast at $X$, itself a random variable that takes values in $\Delta(\mathcal{Y})$. We use $f$ to denote the probability density or probability mass function associated with $F$. When $\mathcal{Y}=\{0,1\}$ and $F_X$ is a Bernoulli distribution with parameter $p$, we can write \eqref{eqn:calibration1} as $\mathbb{P}(Y = 1 \mid F_X = p) = p$. This has a simple intuition: the true probability of $Y=1$ is $p$ conditioned on predicting it as $p$. Equation \ref{eqn:calibration1} extends beyond binary classification to arbitrary distributions. For example, if $F$ is a Gaussian with variance $\sigma^2$, this definition asks that the data distribution conditioned on predicting $F$ also has variance $\sigma^2$. This recently proposed definition is called {\em distribution} calibration \citep{song2019distribution}. A closely related, but weaker concept is quantile calibration \citep{kuleshov2018accurate}, which asks that a 90\% confidence interval contains the true value 90\% of the time. Formally, it can be written as: $ \mathbb{P}(Y \leq \text{CDF}_{F_X}^{-1}(p)) = p \; \textrm{for all $p \in [0,1]$}, $ Quantile calibration is implied by distributional calibration \citep{song2019distribution}. 
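As an illustration of this weaker notion, quantile calibration can be checked empirically by comparing each level $p$ with the fraction of outcomes that fall below the predicted $p$-th quantile; the sketch below is our own and assumes the forecasts expose a \texttt{cdf} method. A quantile-calibrated model traces the diagonal of this curve.

\begin{verbatim}
import numpy as np

def quantile_calibration_curve(forecasts, outcomes,
                               levels=np.linspace(0.1, 0.9, 9)):
    # F.cdf(y) is the probability integral transform of the outcome;
    # P(Y <= CDF_F^{-1}(p)) equals the fraction of PIT values at most p
    pit = np.array([F.cdf(y) for F, y in zip(forecasts, outcomes)])
    empirical = np.array([(pit <= p).mean() for p in levels])
    return levels, empirical   # quantile-calibrated iff empirical ~ levels
\end{verbatim}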
Calibration by itself is not sufficient to produce a useful forecast. For example, it is easy to see that a binary classifier that always outputs $\mathbb{P}(Y = 1)$ as the probability that $Y=1$ is calibrated; however, it does not even use the features $X$ and thus cannot be accurate. In order to be useful, forecasts must also be {\em sharp}. Intuitively, this means that predicted confidence intervals should be as tight as possible around a single value. This is captured by proper scoring rules as part of a refinement term (see Table \ref{tbl:properlosses}), which equals an irreducible term minus a sharpness term \citep{murphy1973vector,brocker2009decomposition}. The latter is maximized when we minimize the scoring rule.

\paragraph{Are Modern Machine Learning Models Calibrated And Sharp?} Most machine learning models are not calibrated out-of-the-box \citep{niculescu2005predicting,guo2017calibration}. Two reasons for this are the limited expressivity of the model $H$ (we cannot perfectly fit the entirety of the level curves of the data distribution) and computational approximations (computing exact predictive uncertainties may be intractable, and approximations are not entirely accurate). A final reason stems from how models are trained: since we cannot fit a perfect $H$, standard objective functions induce a tradeoff between sharp and calibrated forecasts. Next, we will show that by training models differently, we can achieve calibration without sacrificing performance.

\section{What Uncertainties Are Needed in Modern Deep Learning?}\label{sec:calibration}

Good predictive uncertainties are calibrated and sharp, and these two properties yield optimal values of the log-likelihood and other proper loss functions. Thus, they characterize an ideal forecast. In practice, however, modern machine learning models do not output such ideal predictions. What then is the ideal type of forecast that we should aim to obtain from our models?

\citet{gneiting2007probabilistic} argue that predictive uncertainties should be maximally sharp subject to being calibrated. They propose a diagnostic approach based on this principle; this approach is commonly used in statistics for {\em evaluating} the predictive performance of probabilistic models. In this paper, we also argue for this general principle, but approach it in a {\em prescriptive} way: we claim that this principle should be enforced in modern ML systems, and we show how to do so. Specifically, we show that any model can be modified to output calibrated uncertainties, and this property can be provably achieved without sacrificing performance.

\subsection{Calibrated Risk Minimization}

We formalize the intuition behind ``maximizing sharpness subject to being calibrated'' as follows. We argue that we should be training models to minimize expected risk (as measured by a proper loss) subject to being perfectly calibrated. We call this principle calibrated risk minimization.
\begin{definition}[Calibrated Risk Minimization]
We select a model $H$ that minimizes the constrained expected risk
$$\min_H \mathbb{E}[ L(H(X), Y) ] \text{ subject to } \mathbb{E}[ L_c(H(X), Q) ] = 0,$$
where $L$ is a proper scoring rule, $L_c$ is its associated calibration loss derived from the calibration-refinement decomposition of $L$, and $Q(y) := \mathbb{P}(Y=y \mid F_X = F)$.
\end{definition}
Recall that a proper loss $L(F,G)$ decomposes into the sum of a calibration term $L_c(H(X), Q)$ and a refinement term $L_r(Q)$; the latter equals an irreducible term minus sharpness.
Thus, by minimizing $L(F,G)$ subject to $L_c(H(X), Q) = 0$, we are maximizing sharpness subject to calibration \citep{gneiting2007probabilistic}. Note that a special case of the above principle is {\em calibrated maximum likelihood}, in which we seek a model that maximizes the expected log-likelihood $\mathbb{E}_{X,Y} \log F_X(Y)$ under the calibration constraint that $\mathbb{E}_{X,Y} \text{KL}(Q\mid\mid F) = 0$. Machine learning models are normally trained to minimize expected risk; our principle asks that in addition they should be calibrated. Our main result (Theorem \ref{thm:recal}) shows that this criterion is achievable.

\subsection{Why Do We Need Calibrated And Sharp Uncertainties?}\label{sec:discussion}

Probabilistic models are important building blocks of machine learning systems in many domains, including medicine, robotics, industrial automation, and others. Calibration is not difficult to achieve in many of these domains; hence, we argue that it should be enforced in predictive models, which will unlock the following set of benefits in downstream applications.

\paragraph{Safety and Interpretability.} Good predictive uncertainties are important for model interpretability: in user-facing applications, humans make decisions based on model outputs and need to assess the confidence of the model, for example when interpreting an automated medical diagnosis. Calibration is also important for model safety: in areas such as robotics, we want to minimize the probability of adverse outcomes (e.g., a crash), and estimating these outcomes' probabilities is an important step for that \citep{Berkenkamp2017}.

\paragraph{Model-Based Planning.} More generally, good predictive uncertainties also improve downstream decision-making applications such as model-based planning \citep{malik2019calibrated}, a setting in which agents learn a model of the world to plan future decisions \citep{deisenroth2011pilco}. Planning with a probabilistic model improves the cumulative reward and the sample complexity of model-based agents, especially when the model is represented using a deep neural network~\citep{rajeswaran2016epopt,chua2018}.

\paragraph{Efficient Exploration.} Balancing exploration and exploitation is a common challenge in many applications of machine learning, such as reinforcement learning, Bayesian optimization, and active learning. When probabilistic models are uncalibrated, inaccurate confidence intervals might incentivize the model to explore ineffective actions, degrading performance. Calibrated uncertainties have been shown to improve decision-making in bandits \citep{malik2019calibrated} and are likely to extend to Bayesian optimization and active learning as well.

\paragraph{Other Applications.} The importance of accurate confidence estimates has been highlighted by practitioners in many fields, including medicine \citep{saria_sepsis_2018}, meteorology \citep{raftery2005using}, and natural language processing \citep{nguyen2015posterior}. Accurate confidence estimates also play an important role in computer vision applications, such as depth estimation \citep{kendall2017uncertainties}.

\section{Previous Work}

\paragraph{Probabilistic Forecasting.} More modern discussions of probabilistic forecasting can be found in the meteorology literature \citep{gneiting2005weather}, where these methods have been applied in operational weather forecasting systems \citep{raftery2005using}.
Most previous work focuses on classification, but recent work \citep{gneiting2007probabilistic,kuleshov2018accurate} extends classical methods to regression. Probabilistic forecasting has been studied extensively in the statistics literature \citep{murphy1973vector,dawid1984prequential}, mainly in the context of evaluation using proper scoring rules \citep{gneiting2007strictly}. Proper scores measure calibration and sharpness in classification \citep{murphy1973vector} and regression \citep{hersbach2000decomposition}.

\paragraph{Calibration.} Recalibration is a widely used approach for improving probabilistic forecasts. It takes its roots in the classification setting, where Platt scaling \citep{platt1999probabilistic} and isotonic regression \citep{niculescu2005predicting} are two widely used algorithms. These have been extended to multi-class \citep{zadrozny2002transforming}, structured \citep{kuleshov2015calibrated}, and online prediction \citep{kuleshov2017estimating}. There is significant recent interest in calibration in deep learning \citep{guo2017calibration,lakshminarayanan2016simple,gal2017concrete,kuleshov2018accurate}. Beyond deep learning, accurate uncertainty estimation is important for the design of decision-making systems~\citep{malik2019calibrated}, crowdsourcing \citep{werling15adaptive}, data-efficient machine learning \citep{ratner2016data,kuleshov2017deep,ren2018learning}, machine translation \citep{kumar2019calibration}, as well as other problems in natural language processing and beyond \citep{nguyen2015posterior,kuleshov2019machine}.

\section{Conclusion}

We take inspiration from the statistics literature and argue that predictive uncertainties should be evaluated by proper scoring rules, which measure two specific qualities of probabilistic predictions: calibration and sharpness. \citet{gneiting2007probabilistic} argued that predictive uncertainties should maximize sharpness subject to calibration and used this paradigm to evaluate forecasts. We formalize the paradigm of \citet{gneiting2007probabilistic} into a novel learning principle called calibrated risk minimization and propose a general algorithm that meets the requirements of this paradigm. Overall, we show that calibration is a property that can be achieved in predictive models more easily than was previously thought.

\section{Experiments}

\subsection{Setup}

\paragraph{Datasets.} We use a number of UCI regression datasets varying in size from 194 to 8192 training instances; each training input has between 6 and 159 continuous features. We randomly use 25\% of each dataset for testing, and use the rest for training. We also perform image classification on the following standard datasets: MNIST, SVHN, CIFAR10.

\paragraph{Models.} Our first model is Bayesian Ridge Regression \citep{mackay1992bayesian}. It uses a spherical Gaussian prior over the weights and a Gamma prior over the precision parameter. Posterior inference is performed in closed form as the prior is conjugate. We also test a number of deep neural networks. We use variational dropout \citep{Gal2016Dropout} to produce probabilistic predictions. In our UCI experiments, we use fully-connected feedforward neural networks with two layers of 128 hidden units with a dropout rate of 0.5 and parametric ReLU non-linearities. We use convolutional neural networks (CNNs) on the image classification tasks. These are formed by fine-tuning a ResNet50 architecture on the training split for each dataset.
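As a concrete illustration of the Bayesian ridge baseline above, Gaussian predictive distributions can be obtained with off-the-shelf tools; the following minimal sketch (ours, not the code used in the experiments, with illustrative function names and scikit-learn assumed for convenience) extracts a predictive mean and standard deviation per test point.

\begin{verbatim}
# Illustrative sketch only (not the experimental code): Gaussian predictive
# distributions from a Bayesian ridge baseline via scikit-learn.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import BayesianRidge

def fit_bayesian_ridge(X_train, y_train):
    # Conjugate Bayesian linear model; posterior inference is closed form.
    model = BayesianRidge()
    model.fit(X_train, y_train)
    return model

def predictive_distributions(model, X):
    # Per-example Gaussian forecast F_x with CDF Phi((y - mu_x) / sigma_x).
    mu, sigma = model.predict(X, return_std=True)
    return [norm(loc=m, scale=s) for m, s in zip(mu, sigma)]
\end{verbatim}

Each returned object exposes a \texttt{cdf} method, i.e.\ the forecast $F_x$ that a recalibrator can subsequently act on.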
We also compare against a popular uncertainty estimation method recently developed specifically for deep learning models: deep ensembles \citep{lakshminarayanan2016simple}. Deep ensembles average the predictive distributions of multiple models; we ensembled 5 neural networks, each having the same architecture as our standard model.

Our recalibrator $R$ was also a densely connected neural network with two fully connected hidden layers of 20 units each and parametric ReLU non-linearities. We added dense skip connections between the layers. In regression experiments, we featurized input distributions $F$ using nine quantiles $\{0.1,\ldots,0.9\}$. We trained $R$ using the quantile regression version of Algorithm \ref{alg:recal}; we concatenated the quantile parameter $\tau \in [0,1]$ to the featurization of $F$. In classification experiments, the inputs and the outputs of $R$ are class probabilities, and $R$ is trained using the log-likelihood maximization version of Algorithm \ref{alg:recal}. All other architectural details are unchanged. We did not observe significant overfitting in our experiments. We believe overfitting is mitigated by the fact that we perform quantile regression and thus learn a complex distribution function that is not easy to overfit.

\subsection{Regression Experiments on UCI Data}

We report the results of our regression experiments on the UCI datasets in Table \ref{tbl:uci}. We evaluate the quality of forecasts using a check score $\rho_\tau(y, f)$, equal to $\tau (y-f)$ if $y \geq f$ and to $(1-\tau)(f-y)$ otherwise, as in \citet{song2019distribution}; we average it over the nine quantile levels $\tau \in \{0.1,\ldots,0.9\}$. We measure regression performance using the mean absolute percent error and the mean absolute error. Our method improves over the accuracies and uncertainties of \citet{kuleshov2018accurate}, and in many cases over those of \citet{song2019distribution}, on Bayesian linear regression, Bayesian neural networks, and deep ensembles, without ever being worse. Note also that our method is simpler and easier to implement than that of \citet{song2019distribution} (it does not require implementing variational inference), and applies to any input distribution, not just Gaussians.

\begin{table} \small \begin{center} \begin{tabular}{l|c|c|c} \toprule & {\bf MNIST} & {\bf SVHN} & {\bf CIFAR10} \\ \midrule \midrule {\bf Base Model} & & & \\ Accuracy & 0.9952 & 0.9508 & 0.9179 \\ Calibration & 0.3166 & 0.5975 & 0.5848 \\ \midrule {\footnotesize\bf Platt Scaling} & & & \\ Accuracy & 0.9952 & 0.9508 & 0.9181 \\ Calibration & 0.2212 & 0.3278 & 0.2233 \\ \midrule {\bf Ours} & & & \\ Accuracy & 0.9951 & 0.9509 & 0.9163 \\ Calibration & 0.1030 & 0.2674 & 0.1091 \\ \bottomrule \end{tabular} \end{center} \caption{Performance on Image Classification\label{classification}} \end{table}

\subsection{Classification Experiments on MNIST, SVHN, CIFAR10}

We report the results of the image classification experiments in Table \ref{classification}. We measure performance using accuracy and the calibration error of \citet{kuleshov2018accurate} on the test set. We report these metrics for baseline and calibrated versions of a convolutional neural network classifier. We perform recalibration with a simple softmax regression (a multi-class generalization of Platt scaling) and with the neural network recalibrator. The best uncertainties are produced by our method. Recalibrated and base models achieve similar levels of accuracy.
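For reference, the check score used in the regression evaluation above admits a direct implementation; the sketch below (ours, purely illustrative, with hypothetical function names) averages it over the nine quantile levels.

\begin{verbatim}
# Illustrative sketch of the check (pinball) score averaged over nine
# quantile levels; function and variable names are ours.
import numpy as np

def check_score(y, f, tau):
    # rho_tau(y, f) = tau*(y - f) if y >= f, and (1 - tau)*(f - y) otherwise.
    y, f = np.asarray(y, dtype=float), np.asarray(f, dtype=float)
    return np.where(y >= f, tau * (y - f), (1.0 - tau) * (f - y))

def mean_check_score(y, quantile_preds, taus=np.arange(0.1, 1.0, 0.1)):
    # quantile_preds[i] holds the predicted tau_i-quantiles for all test points.
    return float(np.mean([check_score(y, q, t).mean()
                          for q, t in zip(quantile_preds, taus)]))
\end{verbatim}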
\section{Introduction}\label{sec:introduction}

Probabilistic forecasts can be characterized by two properties---calibration and sharpness \citep{gneiting2007probabilistic}. Intuitively, calibration means that a 90\% confidence interval contains the true outcome 90\% of the time. Sharpness means that these confidence intervals are narrow. These properties are grounded in the statistics literature on proper scoring rules, and are widely used to evaluate forecasts in domains ranging from meteorology to medicine \citep{gneiting2005weather,gneiting2007strictly}.

This paper argues for reasoning about uncertainty in deep learning in terms of the concepts of calibration and sharpness, and proposes simple algorithms for enforcing these properties. We focus on the strongest, and most recently proposed, notion of calibration---distribution calibration. Previous methods for promoting distribution calibration were complex \citep{song2019distribution} or enforced only a weaker notion of quantile calibration \citep{kuleshov2018accurate}. We show that enforcing distribution calibration is as simple as performing low-dimensional density or quantile function estimation, and we propose to solve this task with a simple and numerically stable technique based on quantile function regression with a neural network estimator. Unlike existing methods for distribution calibration \citep{song2019distribution}, our methods can be used with any model (not just ones that output Gaussians), are very simple to implement in differentiable programming frameworks, and have theoretical guarantees.

As a result, our algorithms enable implementing the well-known principle that forecasts should ``maximize sharpness subject to being calibrated'' proposed by \citet{gneiting2007probabilistic}. While this principle was originally used for evaluation, we show that it is also enforceable across machine learning models. This lends strong support for using calibration and sharpness to reason about predictive uncertainty in machine learning. Empirically, we find that our method consistently outputs well-calibrated predictions across a wide range of experiments, while improving performance on downstream tasks with minimal implementation overhead. We note that distribution calibration may be simpler to obtain than previously thought, and we argue that it should be enforced in predictive models and taken advantage of in downstream applications.

\paragraph{Contributions.} In summary, we make three contributions. We propose a new recalibration technique that (a) is among the only methods, besides that of \citet{song2019distribution}, to guarantee distribution calibration. Unlike \citet{song2019distribution}, we can (b) recalibrate any parametric distribution (not just Gaussians) and (c) our method is simpler. While theirs is based on variational inference in Gaussian processes, ours uses a neural network that can be implemented in a few lines of code, which encourages adoption. Our method (d) applies to both classification and regression and (e) outperforms the methods of \citet{song2019distribution} and \citet{kuleshov2018accurate} as well as Platt and temperature scaling. We also formally prove that our technique produces asymptotically distributionally calibrated forecasts while minimizing regret. Most existing methods (e.g., Platt scaling and the methods of \citet{kuleshov2018accurate} and \citet{song2019distribution}) do not have a correctness proof, except for conformal prediction, which is significantly more complex. Finally, our analysis formalizes the well-known paradigm of Gneiting et al.
(2007) and provides a simple method that provably achieves it. This lends strong support for this principle and influences how one should reason about uncertainty in machine learning. We believe that an important takeaway of our work is that calibration should be leveraged more broadly throughout machine learning.

\title{Calibrated and Sharp Uncertainties in Deep Learning via Simple Density Estimation}
\author{Volodymyr Kuleshov, Shachi Deshpande \\ \texttt{kuleshov@cornell.edu}, \texttt{shachi@cs.cornell.edu} \\ Department of Computer Science, Cornell Tech \\ New York, NY, 10044}

\section{Theoretical Analysis}\label{sec:recalibration}

We now show that under some assumptions, calibrated and sharp uncertainties can be obtained in modern machine learning models via our simple procedure. In that sense, distribution calibration may be easier to obtain than was previously thought.

We start with some notation. We have a recalibration dataset of size $T$ sampled from $\mathbb{P}$ and train a recalibrator $R : \Delta(\mathcal{Y}) \to \Delta(\mathcal{Y})$ over the outputs of a base model $H$ to minimize a proper loss $L$. We denote the Bayes-optimal recalibrator by $B := \mathbb{P}(Y=y\mid H(X))$; the distribution of $Y$ conditioned on the forecast $(R \circ H)(X)$ is $Q := \mathbb{P}(Y=y \mid (R \circ H)(X))$. We are interested in expectations of various losses over $X,Y$; to simplify notation, we omit the variable $X$, e.g. $\mathbb{E}[L(R \circ H,Y)] = \mathbb{E}[L(R(H(X)),Y)]$. Next, we will assume the following condition under which Algorithm \ref{alg:recal} works.

\begin{assumption}[Density Estimation] \label{ass:density} The model $R$ can minimize expected risk such that w.h.p. we have \begin{align*} \mathbb{E}[L(B\circ H,Y)] \leq \mathbb{E}[L(R \circ H,Y)] < \mathbb{E}[L(B \circ H,Y)] + \delta \end{align*} where $\delta> 0$, $\delta = o(T)$ is a bound that decreases with $T$, and $\mathbb{E}[L(B\circ H,Y)]$ is the irreducible loss. \end{assumption}

This assumption implies that the recalibrator can perform density estimation in what is usually a small number of dimensions (one or two). For some recalibrators, e.g., neural nets, it may not provably hold (e.g., because of non-convexity). However, neural networks are effective density estimators in practice, and we can quantify whether they estimate density well on a hold-out set. This assumption provably holds for many non-parametric density estimation methods.

\begin{fact}[\cite{wasserman2006all}] When $R$ implements kernel density estimation and $L$ is the log-loss, Assumption \ref{ass:density} holds with $\delta=o(1/T^{2/3})$. \end{fact}

We now prove two key lemmas. We show that Algorithm \ref{alg:recal} outputs calibrated forecasts without reducing the performance of the base model, as measured by regret relative to loss $L$.
\begin{lemma}\label{lem:calibration} The model $R \circ H$ is asymptotically calibrated, in the sense that $\mathbb{E}[L_c(R \circ H,Q)] < \delta$ for $\delta = o(T)$ w.h.p. \end{lemma}

\begin{proof} Recall that the loss $\mathbb{E}[L(R \circ H,Y)]$ decomposes into a sum of calibration and refinement terms $\mathbb{E}[L_c(R \circ H,Q)] + \mathbb{E}[L_r(Q)]$ where $Q(y) := \mathbb{P}(Y=y \mid (R \circ H)(X))$. As shown by \citet{kull2015novel}, refinement further decomposes into a group loss and an irreducible term: $\mathbb{E}[L_r(Q)] = \mathbb{E}[L_g(Q,B\circ H)] + \mathbb{E}[L(B\circ H,Y)],$ where $B := \mathbb{P}(Y=y\mid H(X))$ is the Bayes-optimal recalibrator. The form of the group loss $L_g$ is the same as that of $L_c$. We may then write: \begin{align*} \underbracket{\mathbb{E}[L(B\circ H,Y)]}_\text{Bayes-Optimal Loss} & \leq \underbracket{\mathbb{E}[L_c(R \circ H,Q)]}_\text{Calibration Loss} + \underbracket{\mathbb{E}[L_g(Q,B\circ H)]}_\text{Group Loss} + \underbracket{\mathbb{E}[L(B\circ H,Y)]}_\text{Bayes-Opt Loss} \\ & = \underbracket{\mathbb{E}[L(R \circ H,Y)]}_\text{Proper Loss} < \underbracket{\mathbb{E}[L(B \circ H,Y)]}_\text{Bayes-Optimal Loss} + \delta \end{align*} where $\delta>0, \delta=o(T)$. In the first equality we used the decomposition of \citet{kull2015novel} and in the last inequality we used Assumption \ref{ass:density}. It follows that $\mathbb{E}[L_c(R \circ H,Q)] < \delta$, i.e. the calibration loss is small. \end{proof}

\begin{lemma} \label{lem:loss} The recalibrated model is asymptotically as good as the base model: $\mathbb{E}[L(R \circ H,Y)] \leq \mathbb{E}[L(H,Y)] + \gamma,$ where $\gamma >0, \gamma=o(T)$ is a bound that decreases with $T$. \end{lemma}

\begin{proof} The claim holds by empirical risk minimization: $R \circ H$ minimizes $L$ and is at least as expressive as $H$, since $R$ can represent the identity map (by Assumption \ref{ass:density}). \end{proof}

We now combine these two lemmas to show that Algorithm \ref{alg:recal} ensures calibration and low regret.

\begin{theorem} \label{thm:recal} Algorithm \ref{alg:recal} produces a model that minimizes expected risk, while w.h.p. achieving asymptotically optimal calibration. \end{theorem}

\begin{proof} The base model $H$ is trained using empirical risk minimization (ERM). The model $R \circ H$ minimizes the same objective $L$, hence minimizes the same expected risk by ERM theory. Moreover, by Lemma \ref{lem:loss}, the expected risk of $R \circ H$ asymptotically approaches a value no larger than that of $H$. By Lemma \ref{lem:calibration}, the model $R \circ H$ produces asymptotically calibrated forecasts w.h.p. \end{proof}

Thus, given enough data, we are guaranteed to produce calibrated forecasts and preserve base model performance (as measured by $L$).

\paragraph{Finite-Sample Bounds.} Note that our analysis provides {\em finite-sample} and not only asymptotic bounds on the regret and calibration error---the bounds are stated in terms of the variables $\delta$ and $\gamma$, which are each $o(T)$. The bound $\delta$ on the calibration error directly depends on the finite-sample bound on the generalization error of the algorithm used as the recalibrator.

\paragraph{Practical Considerations.} Assumption \ref{ass:density} suggests that we want to use a model family that can minimize the expected risk $\mathbb{E}[L(H(X),Y)]$ well. Thus, in practice we want to select highly flexible algorithms for which we can control overfitting and underfitting.
This motivates our earlier advice to use density estimation algorithms---which have provable guarantees---and neural networks---which are expressive and feature effective regularization techniques.
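To make the density-estimation route concrete, the following sketch (ours; it is a reading aid under stated assumptions, not Algorithm \ref{alg:recal} itself, and the helper names are hypothetical) fits a one-dimensional kernel density estimator to probability integral transform values on a held-out recalibration set and uses it to adjust a forecast CDF.

\begin{verbatim}
# Illustrative sketch only: one-dimensional kernel density estimation on a
# recalibration set, as one simple instance of low-dimensional density
# estimation (not the distribution-calibration recalibrator of the paper).
import numpy as np
from scipy.stats import gaussian_kde

def fit_pit_density(cdfs, y_recal):
    # cdfs[i] is the forecast distribution for example i (exposes .cdf);
    # u_i = F_i(y_i) are the probability integral transform (PIT) values.
    u = np.array([F.cdf(yi) for F, yi in zip(cdfs, y_recal)])
    return gaussian_kde(u)

def recalibrated_cdf_value(kde, F, y):
    # Map the raw CDF value F(y) through the estimated density of PIT values;
    # this is a rough one-dimensional adjustment, clipped to [0, 1].
    p = F.cdf(y)
    return float(np.clip(kde.integrate_box_1d(0.0, p), 0.0, 1.0))
\end{verbatim}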
{ "timestamp": "2021-12-15T02:11:11", "yymm": "2112", "arxiv_id": "2112.07184", "language": "en", "url": "https://arxiv.org/abs/2112.07184", "abstract": "Accurate probabilistic predictions can be characterized by two properties -- calibration and sharpness. However, standard maximum likelihood training yields models that are poorly calibrated and thus inaccurate -- a 90% confidence interval typically does not contain the true outcome 90% of the time. This paper argues that calibration is important in practice and is easy to maintain by performing low-dimensional density estimation. We introduce a simple training procedure based on recalibration that yields calibrated models without sacrificing overall performance; unlike previous approaches, ours ensures the most general property of distribution calibration and applies to any model, including neural networks. We formally prove the correctness of our procedure assuming that we can estimate densities in low dimensions and we establish uniform convergence bounds. Our results yield empirical performance improvements on linear and deep Bayesian models and suggest that calibration should be increasingly leveraged across machine learning.", "subjects": "Machine Learning (cs.LG)", "title": "Calibrated and Sharp Uncertainties in Deep Learning via Density Estimation", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.957912273285902, "lm_q2_score": 0.7401743620390163, "lm_q1q2_score": 0.7090221057687364 }
https://arxiv.org/abs/2302.09729
Embedding theorems for random graphs with specified degrees
Given an $n\times n$ symmetric matrix $W\in [0,1]^{[n]\times [n]}$, let $\mathcal{G}(n,W)$ be the random graph obtained by independently including each edge $jk$ with probability $W_{jk}$. Given a degree sequence ${\bf d}=(d_1,\ldots, d_n)$, let $\mathcal{G}(n,{\bf d})$ denote a uniformly random graph with degree sequence ${\bf d}$. We couple $\mathcal{G}(n,W)$ and $\mathcal{G}(n,{\bf d})$ together so that a.a.s. $\mathcal{G}(n,W)$ is a subgraph of $\mathcal{G}(n,{\bf d})$, where $W$ is some function of ${\bf d}$. Let $\Delta({\bf d})$ denote the maximum degree in ${\bf d}$. Our coupling result is optimal when $\Delta({\bf d})^2\ll \|{\bf d}\|_1$, i.e.\ $W_{ij}$ is asymptotic to $\mathbb{P}(ij\in \mathcal{G}(n,{\bf d}))$ for every $i,j\in [n]$. We also have coupling results for ${\bf d}$ that are not constrained by the condition $\Delta({\bf d})^2\ll \|{\bf d}\|_1$. For such ${\bf d}$ our coupling result is still close to optimal, in the sense that $W_{ij}$ is asymptotic to $\mathbb{P}(ij\in \mathcal{G}(n,{\bf d}))$ for most pairs $i,j\in [n]$.
\section{Introduction} Given a degree sequence ${\bf d} = (d_1, \ldots, d_n)$ where $\norm{{\bf d}}_1=\sum_{i=1}^n d_i$ is even, let ${\mathcal G}(n,{\bf d})$ denote a random graph chosen uniformly from the set of graphs on $[n]$ where vertex $i$ has degree $d_i$. Random graphs with a specified degree sequence are a popular class of random graphs used in many fields of research. While these random graphs have been extensively used to model and analyse real-world networks, such as social networks and the internet, they present several analytical challenges. The most prominent difficulties in analysing ${\mathcal G}(n,{\bf d})$ are evaluating the edge probabilities and dealing with edge dependencies. Compared with the classical Erd\H{o}s-R\'{e}nyi random graph ${\mathcal G}(n,p)$ where every edge in $\binom{[n]}{2}$ appears independently with probability $p$, there is no known closed formula for ${\mathbb P}(jk\in {\mathcal G}(n,{\bf d}))$ for general ${\bf d}$. Although asymptotic formulas exist for ${\mathbb P}(jk\in {\mathcal G}(n,{\bf d}))$ for some classes of ${\bf d}$, the correlation between the edges poses additional challenges when estimating the probabilities of events in ${\mathcal G}(n,{\bf d})$. We denote ${\mathcal G}(n,{\bf d})$ by ${\mathcal G}(n,d)$ when ${\bf d}=(d,d,\ldots,d)$, i.e.\ ${\bf d}$ is a $d$-regular degree sequence. The random regular graph ${\mathcal G}(n,d)$ is the most well-understood model among all ${\mathcal G}(n,{\bf d})$, where ${\bf d}$ can vary from near-regular sequences to heavy-tailed sequences such as power-law sequences. In 2004 Kim and Vu proposed the sandwich conjecture, which informally says that ${\mathcal G}(n,d)$ can be well approximated by ${\mathcal G}(n,p=d/n)$ through sandwiching ${\mathcal G}(n,d)$ between two correlated copies of ${\mathcal G}(n,p)$, one with $p$ slightly smaller than $d/n$ and the other with $p$ slightly greater than $d/n$. To formally state the conjecture, we define a coupling of a finite set of random variables (or graphs) $Z_1,\ldots, Z_k$ as a $k$-tuple of random variables $(\hat Z_1,\ldots,\hat Z_k)$ defined in the same probability space such that the marginal distribution of $\hat Z_i$ is the same as the distribution of $Z_i$ for every $1\le i\le k$. The sandwich conjecture by Kim and Vu is given as below. \begin{conjecture}[\cite{kim2004sandwiching}] For $d\gg \log n$, there exist probabilities $p_1,p_2=(1+o(1))d/n$ and a coupling $(G_L, G, G_U)$ such that marginally, $G_L\sim {\mathcal G}(n,p_1)$, $G\sim {\mathcal G}(n,d)$, $G_U\sim {\mathcal G}(n,p_2)$ and jointly, ${\mathbb P}(G_L\subseteq G\subseteq G_U)=1-o(1)$. \end{conjecture} The sandwich conjecture, if proved to be true, is a powerful tool for analysing ${\mathcal G}(n,d)$, and reveals beautiful distributional relations between the two different random graph models. The assumption $d\gg \log n$ is necessary, as for $p=O(\log n/n)$, the maximum and minimum degrees of ${\mathcal G}(n,p)$ differ by some constant factor away from 1, making it impossible for the sandwich conjecture to hold. For simplicity, we refer to a coupling $({\mathcal G}_1,{\mathcal G}_2)$ of two random graphs ${\mathcal G}_1$ and ${\mathcal G}_2$ as embedding ${\mathcal G}_1$ into ${\mathcal G}_2$, if in the coupling a.a.s. (asymptotically almost surely) ${\mathcal G}_1$ is a subgraph of ${\mathcal G}_2$. It turns out that embedding ${\mathcal G}(n,d)$ into ${\mathcal G}(n,p=(1+o(1))d/n)$ is much more difficult than embedding ${\mathcal G}(n,p=(1-o(1))d/n)$ into ${\mathcal G}(n,d)$. 
In their paper~\cite{kim2004sandwiching}, Kim and Vu established an a.a.s.\ embedding of ${\mathcal G}(n,p=(1-o(1))d/n)$ into ${\mathcal G}(n,d)$ for $d$ up to approximately $n^{1/3}$ (and $d\gg \log n$). Subsequent work by Dudek, Frieze, Ruci\'{n}ski and \v{S}ileikis~\cite{dudek2017embedding} improved this result to $d=o(n)$, and extended it to random uniform hypergraphs as well. The first two-sided sandwich theorem was proved by Isaev, McKay and the first author~\cite{gao2020sandwiching,gao2022sandwiching} in the case that $d$ is linear in $n$. Klimo\v{s}ov\'{a}, Reiher, Ruci\'{n}ski and \v{S}ileikis~\cite{klimovsova2023sandwiching} proved the sandwich conjecture for $d\gg (n\log n)^{3/4}$. Finally, the first author~\cite{gao2020kim} confirmed the sandwich conjecture for all $d\gg \log^7 n$. It is worth noting that a more general sandwich theorem was presented in~\cite{gao2020sandwiching,gao2022sandwiching} for random graphs ${\mathcal G}(n,{\bf d})$ where ${\bf d}$ is a near-regular degree sequence, i.e.\ where all degrees are asymptotically equal. While the proof of~\cite{gao2020kim} should work for such degree sequences as well, this remains an ongoing research direction.

While it is not possible to approximate ${\mathcal G}(n,{\bf d})$ by ${\mathcal G}(n,p)$ in a useful way for more general forms of ${\bf d}$, a natural alternative is to consider a generalised Erd\H{o}s-R\'{e}nyi graph ${\mathcal G}(n,W)$, where $W$ is a symmetric $n\times n$ matrix. In this model, each edge $jk\in\binom{[n]}{2}$ appears in ${\mathcal G}(n,W)$ independently with probability $W_{jk}$. By choosing different forms of $W$, we can recover well-studied models in the literature, such as ${\mathcal G}(n,p)$, the Chung-Lu model~\cite{chung2002connected,chung2002average,chung2004spectra,chung2006volume}, the $\boldsymbol{\beta}$-model~\cite{chatterjee2011random,isaev2018complex}, and the stochastic block model~\cite{holland1983stochastic}. The objective of this paper is to establish an embedding of ${\mathcal G}(n,W)$ into ${\mathcal G}(n,{\bf d})$, where ${\bf d}$ is a degree sequence that may deviate significantly from being regular, for a suitable choice of $W$.

What constitutes a good $W$? Intuitively one would like the entry $W_{jk}$ to be approximately ${\mathbb P}(jk\in {\mathcal G}(n,{\bf d}))$, the true probability that $jk$ is an edge in ${\mathcal G}(n,{\bf d})$. To formalise this notion, we define $W^*=W^*({\bf d})$ as the $n\times n$ matrix where $W^*_{jk}={\mathbb P}(jk\in {\mathcal G}(n,{\bf d}))$ for every $j,k\in [n]$.

\begin{question}\label{question} Given ${\bf d}$, does there exist $W=(1-o(1))W^*({\bf d})$ such that ${\mathcal G}(n,W)$ can be embedded into ${\mathcal G}(n,{\bf d})$? \end{question}

Note that the asymptotic value of ${\mathbb P}(jk\in {\mathcal G}(n,{\bf d}))$ is not known for all realisable degree sequences ${\bf d}$. The coupling scheme we use in this paper relies heavily on the asymptotic formula for ${\mathbb P}(jk\in {\mathcal G}(n,{\bf d}))$. Therefore, we are restricted to degree sequences for which an asymptotic evaluation of ${\mathbb P}(jk\in {\mathcal G}(n,{\bf d}))$ is available in the literature. Without loss of generality we may assume that $d_1 \ge \cdots \ge d_n \ge 0$. Let $\Delta({\bf d}) = d_1$ and $\delta({\bf d}) = d_n$. Another important parameter of ${\bf d}$ is $J({\bf d})=\sum_{i=1}^{d_1} d_i$.
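For readers who wish to experiment numerically, the parameters $\Delta({\bf d})$, $\delta({\bf d})$, $J({\bf d})$ and $\norm{{\bf d}}_1$ just defined can be computed directly from a degree sequence; the sketch below (ours, purely illustrative, with our own variable names) does so with NumPy.

\begin{verbatim}
# Illustrative sketch (not from the paper): compute Delta(d), delta(d),
# J(d) = sum of the d_1 largest degrees, and ||d||_1 for a degree sequence.
import numpy as np

def degree_parameters(d):
    d = np.sort(np.asarray(d, dtype=int))[::-1]   # d_1 >= ... >= d_n
    Delta, delta = int(d[0]), int(d[-1])
    J = int(d[:Delta].sum())                      # J(d) = sum_{i <= d_1} d_i
    return Delta, delta, J, int(d.sum())          # Delta, delta, J, ||d||_1
\end{verbatim}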
The following theorem, a direct corollary of~\cite[Theorem 1]{gao2020subgraph}, estimates edge probabilities in ${\mathcal G}(n,{\bf d})$ when $J({\bf d})=o(\norm{{\bf d}}_1)$. For two degree sequences ${{\bf d}}$ and ${\bf g}$, we say ${\bf d}\preceq {\bf g}$ if $d_i\le g_i$ for every $i\in [n]$. Given a graph $H$ on $[n]$, we let ${\bf d}^H$ denote the degree sequence of $H$. \begin{theorem}\label{lem:edgeprob} Suppose $H$ is a graph on $[n]$ with ${\bf d}^{H}\preceq {\bf d}$, and let ${\bf t} = {\bf d}-{\bf d}^H$. Suppose that $J({\bf d})=o(\norm{{\bf t}}_1)$ and suppose $jk \not\in H$. Then \[ {\mathbb P}(jk \in {\mathcal G}(n, {\bf d}) \mid H) =\left(1+O\left(\frac{J({\bf d})}{\norm{{\bf t}}_1}\right)\right) \frac{t_jt_k}{\norm{{\bf t}}_1+t_jt_k}, \] where ${\mathbb P}(jk \in {\mathcal G}(n, {\bf d}) \mid H)$ denotes the probability that $jk\in {\mathcal G}(n,{\bf d})$ conditional on the event that ${\mathcal G}(n,{\bf d})$ contains $H$ as a subgraph. \end{theorem} \begin{remark} There are many interesting families of degree sequences satisfying $J({\bf d})=o(\norm{{\bf d}}_1)$, including examples of \begin{itemize} \item all $d$-regular, and near-$d$-regular degree sequences where $d=o(n)$; \item all perturbed sequences from a near-$d$-regular degree sequence by arbitrarily decreasing at most $(1-c)n$ entries, where $c>0$ is fixed, and $d=o(n)$; \item heavy-tailed degree sequences such as power-law sequences and long-tailed power-law sequences. See~\cite[Section 2]{gao2016enumeration} for definitions and more examples. \end{itemize} According to Theorem~\ref{lem:edgeprob}, if there exist pairs $jk$ such that $d_jd_k=\Omega(\norm{{\bf d}}_1)$, then the edge probabilities in ${\mathcal G}(n,{\bf d})$ can take values ranging from $o(1)$ to constant $0<c<1$ and to $1-o(1)$. As we will see later, it is the presence of such diverse edge probabilities that poses a challenge to embedding ${\mathcal G}(n,W)$ into ${\mathcal G}(n,{\bf d})$. \end{remark} In light of Theorem~\ref{lem:edgeprob}, we define matrix $P({\bf d})$, which is asymptotic to $W^*({\bf d})$ under the condition $J({\bf d})=o(\norm{{\bf d}}_1)$. \begin{definition} Given ${\bf d}=(d_1,\ldots, d_n)$, let $P({{\bf d}})$ be the symmetric $n\times n$ matrix defined by $P_{ij}=P_{ji}=\frac{d_id_j}{\|{\bf d}\|_1+d_id_j}$, for every $1\le i<j\le n$, and $P_{ii}=0$ for every $1\le i\le n$. \end{definition} One of our main results is the following theorem, which gives a positive answer to Question~\ref{question} for degree sequences satisfying $\Delta({\bf d})^2 = o(\norm{{\bf d}}_1)$, a condition that is stronger than $J({\bf d})=o(\norm{{\bf d}}_1)$. \begin{theorem}\label{thm:main} Assume that ${\bf d} = {\bf d}(n)$ is a degree sequence satisfying $\Delta({\bf d})^2 = o(\norm{{\bf d}}_1)$ and $\delta({\bf d}) \gg\log{n}$. Then, there is a matrix $W$ with $W = (1-o(1))P({\bf d})$ and a coupling $(G_L, G)$, where $G_L \sim {\mathcal G}(n, W)$ and $G \sim {\mathcal G}(n, {\bf d})$, such that ${\mathbb P}(G_L \subseteq G) = 1-o(1)$. \end{theorem} Under the stronger condition that $\Delta({\bf d})^2=o(\norm{{\bf d}}_1)$, ${\mathbb P}(jk\in {\mathcal G}(n,{\bf d}))=o(1)$ uniformly for all $jk\in \binom{[n]}{2}$ by Theorem~\ref{lem:edgeprob}. This property is a crucial element in our coupling scheme which allows us to achieve an optimal embedding. Our next theorem embeds ${\mathcal G}(n,W)$ into ${\mathcal G}(n,{\bf d})$ for ${\bf d}$ satisfying $J({\bf d})=o(\norm{{\bf d}}_1)$. 
In this case, $W_{jk}=(1-o(1))P({\bf d})_{jk}$ for all $jk$ where ${\mathbb P}(jk\in {\mathcal G}(n,{\bf d}))=o(1)$. However, $W_{jk}$ is not asymptotic to ${\mathbb P}(jk\in {\mathcal G}(n,{\bf d}))$ for $jk$ where ${\mathbb P}(jk\in {\mathcal G}(n,{\bf d}))$ is bounded away from 0. Given a function $f: {\mathbb R}\to {\mathbb R}$, let $f_c(P)$ denote the matrix $(f(P_{i,j}))_{i,j\in [n]}$.

\begin{theorem}\label{thm:main2} Assume that ${\bf d}$ is a degree sequence satisfying $J({\bf d})=o(\norm{{\bf d}}_1)$ and $\delta({\bf d}) \gg\log{n}$. Then, there is a matrix $W$ with $W=(1+o(1))f_c(P({\bf d}))$ where $f(x)=1-e^{-x}$, and a coupling $(G_L, G)$, where $G_L \sim {\mathcal G}(n, W)$ and $G \sim {\mathcal G}(n, {\bf d})$, such that ${\mathbb P}(G_L \subseteq G) = 1-o(1)$. \end{theorem}

\begin{remark} Note that $P({\bf d})=(1+o(1))W^*({\bf d})$ is not true in general. For instance, consider the $d$-regular degree sequence where $d=\Theta(n)$. There are two open problems that could be very interesting for future research. \begin{enumerate} \item[(a)] Does Theorem~\ref{thm:main2} hold with $W=(1+o(1))P({\bf d})$? \item[(b)] Can Question~\ref{question} be answered for a more general class of ${\bf d}$? In particular, is there a way to embed ${\mathcal G}(n,W)$ into ${\mathcal G}(n,{\bf d})$ for some reasonable $W$ without knowing the asymptotic value of $W^*({\bf d})$ or the conditional edge probabilities as in Theorem~\ref{lem:edgeprob}? \end{enumerate} \end{remark}

\begin{remark} To obtain a sandwich-type result as for ${\mathcal G}(n,d)$, we would need to embed ${\mathcal G}(n,{\bf d})$ into ${\mathcal G}(n, W)$ for some $W=(1+o(1))W^*$. If we approach this in the same way as for ${\mathcal G}(n,d)$, we would embed ${\mathcal G}(n, (J-I)-(1+o(1))W^*)$ into ${\mathcal G}(n,(n-1){\bf 1}-{\bf d})$, where ${\bf 1}$ is the all-one vector, $J={\bf 1}{\bf 1}^T$ and $I$ is the identity matrix. Both of the proofs in~\cite{gao2020sandwiching,klimovsova2023sandwiching} for embedding ${\mathcal G}(n,1-(1+o(1))d/n)$ into ${\mathcal G}(n,n-1-d)$ use counting arguments that heavily rely on the fact that the underlying graphs (during the construction of the coupling) have almost equal degrees, and we do not think these arguments extend to graphs that are far away from being regular. In fact, we do not have enough intuition to support a sandwich conjecture for ${\mathcal G}(n,{\bf d})$ as for ${\mathcal G}(n,d)$, especially for degree sequences ${\bf d}$ where the values of edge probabilities range from $o(1)$ to $1-o(1)$. \end{remark}

{\em Proof of Theorem~\ref{thm:main}.} It follows as a straightforward corollary of Theorem~\ref{thm:main2} by noticing that $1-e^{-x}=(1+O(x))x$, and $P({\bf d})_{jk}=o(1)$ for all $jk$ under the assumptions of Theorem~\ref{thm:main}. ~~\vrule height8pt width4pt depth0pt

\medskip We use standard Landau notation. Throughout the paper we assume that $n$ is sufficiently large. For two sequences of real numbers $(a_n)_{n=0}^{\infty}$ and $(b_n)_{n=0}^{\infty}$, we write $a_n=O(b_n)$ if there is $C>0$ such that $|a_n|<C|b_n|$ for every $n$. We say $a_n=\Omega(b_n)$ if $a_n>0$ and $b_n=O(a_n)$. We say $b_n = o(a_n)$, or $b_n \ll a_n$, or $a_n \gg b_n$ if $a_n>0$ and $\lim_{n \rightarrow \infty} b_n/a_n = 0$. We say a sequence of events $A_n$ indexed by $n$ holds a.a.s.\ if ${\mathbb P}(A_n)=1-o(1)$. All asymptotics are with respect to $n \rightarrow \infty$.
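As a concrete illustration of the objects appearing in Theorems~\ref{thm:main} and~\ref{thm:main2}, the matrix $P({\bf d})$ and a sample of ${\mathcal G}(n,W)$ can be generated directly; the sketch below (ours, purely illustrative, not part of the paper's proofs) builds $P_{jk}=d_jd_k/(\norm{{\bf d}}_1+d_jd_k)$ and includes each edge independently.

\begin{verbatim}
# Illustrative sketch (not from the paper): the matrix P(d) and a sample of
# the generalised Erdos-Renyi graph G(n, W), where each edge jk is included
# independently with probability W_jk.
import numpy as np

def edge_probability_matrix(d):
    d = np.asarray(d, dtype=float)
    s = d.sum()                              # ||d||_1
    outer = np.outer(d, d)                   # d_j * d_k
    P = outer / (s + outer)                  # P_jk = d_j d_k / (||d||_1 + d_j d_k)
    np.fill_diagonal(P, 0.0)                 # P_jj = 0
    return P

def sample_gnW(W, seed=None):
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    upper = np.triu(rng.random((n, n)) < W, k=1)   # independent edges jk, j < k
    return upper | upper.T                          # symmetric adjacency matrix
\end{verbatim}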
\subsection{Relation between ${\mathcal G}(n,P({\bf d}))$ and the Chung-Lu model} In 2002, Chung and Lu introduced the random graph model with given expected degrees~\cite{chung2002connected,chung2002average,chung2004spectra,chung2006volume}. Given a sequence of nonnegative real numbers ${\bf w}=(w_1,\ldots, w_n)$ satisfying that \begin{equation} \max_i w_i^2 \le \norm{{\bf w}}_1, \label{assumption} \end{equation} the random graph $G({\bf w})$ is defined by ${\mathcal G}(n,\hat W({\bf w}))$ where $\hat W_{jk}({\bf w})=w_jw_k/\norm{{\bf w}}_1$. To avoid relying on the assumption~\eqn{assumption} one can define $\hat W_{jk}=\min\{w_jw_k/\norm{{\bf w}}_1,1\}$. However, most of the work about the Chung-Lu model assumed~\eqn{assumption}. Notice that the three matrices $P({\bf d})$, $W^*({\bf d})$ and $\hat W({\bf d})$ are all asymptotically equal if ${\bf d}$ satisfies $\Delta({\bf d})^2=o(\norm{{\bf d}}_1)$. However, if there exists $jk$ such that $d_jd_k=\Omega(\norm{{\bf d}}_1)$ then $P({\bf d})_{jk}$, $W^*({\bf d})_{jk}$ and $\hat W({\bf d})_{jk}$ may be all distinct asymptotically. \subsection{Applications} Given two $n\times n$ matrices $W_1$ and $W_2$, we say $W_1\le W_2$ if $(W_1)_{ij}\le (W_2)_{ij}$ for every $i,j\in[n]$. It is well known that ${\mathcal G}(n,W)$ has the nice ``nesting property'', meaning that ${\mathcal G}(n,W_1)$ can be embedded into ${\mathcal G}(n,W_2)$ provided that $W_1\le W_2$. However, ${\mathcal G}(n,{\bf d})$ does not have the nesting property. Given two degree sequences ${\bf d}\preceq {\bf g}$, it is in general not true that ${\mathcal G}(n,{\bf d})$ can be a.a.s.\ embedded into ${\mathcal G}(n,{\bf g})$. In fact, it is easy to construct degree sequences ${\bf d}\preceq {\bf g}$ for which there exists $jk$ such that ${\mathbb P}(jk\in {\mathcal G}(n,{\bf d}))= 1-o(1)$ and ${\mathbb P}(jk\in {\mathcal G}(n,{\bf g}))=o(1)$. Consequently and perhaps rather surprisingly, it is difficult to prove some rather ``intuitive'' results about ${\mathcal G}(n,{\bf d})$. For instance, although it is known that a.a.s.\ ${\mathcal G}(n,3)$ is connected, to our knowledge it is not known if ${\mathcal G}(n,{\bf d})$ is a.a.s.\ connected provided that $\delta({\bf d})\ge 3$. Most such results are restricted to certain families of degree sequences for which either some enumeration results are known, or some enumeration proof techniques can be applied. Theorem~\ref{thm:main2} gives a powerful tool to obtain such results by first embedding ${\mathcal G}(n,W)$ into ${\mathcal G}(n,{\bf d})$ and then applying the nesting property of ${\mathcal G}(n,W)$. We show a few examples below. \begin{theorem} Assume ${\bf d} $ is a degree sequence satisfying $J({\bf d})=o(\norm{{\bf d}}_1)$ and $\delta({\bf d}) \gg\log{n}$. Then, a.a.s.\ \begin{enumerate}[(a)] \item ${\mathcal G}(n,{\bf d})$ is Hamiltonian and $k$-connected for every fixed $k$, if $\delta({\bf d})^2/\norm{{\bf d}}_1\ge (1+c)\log n/n$ for some fixed $c>0$; \item ${\mathcal G}(n,{\bf d})$ contains $H$ as a minor for every fixed graph $H$, if $\delta({\bf d})^2/\norm{{\bf d}}_1\ge c/n$ for some fixed $c>1$; \item Simultaneously for all graphs $H$ on $[n]$ with maximum degree at most $k$, ${\mathcal G}(n,{\bf d})$ has a subgraph isomorphic to $H$, if $\delta({\bf d})^2/\norm{{\bf d}}_1\ge C n^{-1/k}\log^{1/k} n$ where $k>0$ is fixed and $C>0$ is a sufficiently large constant. \end{enumerate} \end{theorem} \noindent{\bf Proof.~}~ By definition, $P({\bf d})_{jk}\ge (1+o(1))\delta({\bf d})^2/\norm{{\bf d}}_1$ for every $jk$. 
Thus, by Theorem~\ref{thm:main2} and the nesting property of ${\mathcal G}(n,W)$, ${\mathcal G}(n,p)$ can be embedded into ${\mathcal G}(n,{\bf d})$ for some $p= (1-o(1))\delta({\bf d})^2/\norm{{\bf d}}_1$. The theorem now follows from the corresponding known results for ${\mathcal G}(n,p)$.~~\vrule height8pt width4pt depth0pt

\begin{remark} It should be a relatively easy task to improve or even remove some assumptions, such as $\delta({\bf d})^2/\norm{{\bf d}}_1\ge (1+c)\log n/n$ for some fixed $c>0$ in part (a), by directly analysing ${\mathcal G}(n, (1+o(1))f_c(P({\bf d})))$. We did not attempt it as the main objective of this paper is to prove the embedding theorems. \end{remark}

\section{The coupling procedure}

\subsection{The old and the new} \label{sec:new}

Assume that we aim to embed ${\mathcal G}(n,p)$ into ${\mathcal G}(n,d)$ where $p=(1-o(1))d/n$. Apart from a few minor differences, the coupling procedures used in~\cite{kim2004sandwiching,dudek2017embedding,gao2020sandwiching,gao2022sandwiching,klimovsova2023sandwiching,gao2020kim} are all essentially the same: let $x_1,x_2,\ldots, x_m$ be a sequence of random edges, each uniformly and independently chosen from $K_n$. We sequentially add the edges in this sequence to $G$ and to $G_L$ respectively. With a small probability ${\epsilon}_i=o(1)$, edge $x_i$ is rejected by $G$; and with a slightly larger but still rather small probability $\zeta$, $x_i$ is rejected by $G_L$. The parameter ${\epsilon}_i$ is chosen to be proportional to $p_i(x_i)$, the probability that $x_i$ is an edge of ${\mathcal G}(n,{\bf d})$ conditional on the event that all the edges that have been added to $G$ are edges of ${\mathcal G}(n,{\bf d})$. The key idea of the coupling procedure is that, until $m$ gets very close to $dn/2$, with high probability, $p_i(jk)$ is approximately the same for every edge $jk$ that has not been added to $G$ yet. Since $x_i$ is uniformly chosen, a small rejection probability ${\epsilon}_i$ suffices to ensure that $x_i$ is added to $G$ according to the correct conditional probability. We can prove that with high probability ${\epsilon}_i=o(1)$ for every $1\le i\le m$ where $m=(1-o(1))dn/2$. Hence we can choose some $\zeta=o(1)$ such that ${\epsilon}_i\le \zeta$ for every $1\le i\le m$. Moreover, (a) $G_L$ obtained after the $m$-th iteration is a uniformly random graph conditional on the number of edges it contains, as every edge in $\binom{[n]}{2}$ has an equal probability to be added to $G_L$; and (b) $G_L$ is a subgraph of $G$, as the rejection probability $\zeta$ for $G_L$ is slightly larger than that for $G$ in every step.

Now we consider ${\bf d}$ where the degrees are not all asymptotically the same. The most natural way to extend the previous coupling procedure is to generate the sequence of i.i.d.\ random edges $x_1,x_2,\ldots, x_m$ where each edge $jk$ is chosen with probability proportional to $W^*({\bf d})_{jk}$. This coupling procedure works well when $\Delta({\bf d})^2=o(\norm{{\bf d}}_1)$. This is because the conditional probability $p_i(jk)$ of $jk$ being an edge of ${\mathcal G}(n,{\bf d})$ in step $i$ of the procedure is proportional to $W^*({\bf d})_{jk}$ uniformly for all $jk$ during the whole coupling process, resulting in a small rejection probability ${\epsilon}_i$.
However, rather surprisingly, if there exists $jk$ such that $d_jd_k=\Omega(\norm{{\bf d}}_1)$ then the ratio $p_i(jk)/W^*_{jk}$ changes in a non-uniform way over $jk$ and over time $i$ of the coupling procedure, resulting in larger and larger rejection probabilities ${\epsilon}_i$. As $\zeta$ has to be chosen uniformly during the process, we can only choose $\zeta=1-o(1)$, meaning that almost all edges are rejected by $G_L$. The coupling procedure thus fails.

\smallskip To overcome these challenges, we develop two novel ideas. \begin{itemize} \item Instead of using the same rejection probability $\zeta$ for every edge to be added to $G_L$, we use different rejection probabilities for different edges. However, for a given edge $jk$, the rejection probability $\zeta_{jk}$ remains uniform throughout the procedure. This uniformity is needed for the output $G_L$ to have the correct distribution. \item During the coupling procedure, the value of $p_i(jk)/W^*_{jk}$ decreases significantly for edges $jk$ where $d_jd_k=O(\norm{{\bf d}}_1)$, but changes little for $jk$ where $d_jd_k\gg \norm{{\bf d}}_1$. Consequently, many rejections (particularly of edges $jk$ where $d_jd_k\ll \norm{{\bf d}}_1$) occur already in the construction of $G$, and thus the first idea above would not help, as even more rejections would occur for $G_L$ than for $G$. To reduce the rejection probability for the construction of $G$, we ``intentionally'' boost the probability (by an $o(1)^{-1}$ factor) of generating edges $jk$ where $d_jd_k\gg \norm{{\bf d}}_1$ in the sequence of random edges $x_1,x_2,\ldots$. This probability boosting strategy magically reduces the rejection probability ${\epsilon}_i(jk)$ for $jk$ such that $d_jd_k\ll \norm{{\bf d}}_1$ (note that most $jk\in\binom{[n]}{2}$ are of this type). However, the rejection probabilities ${\epsilon}_i(jk)$ will be high (close to 1) for $jk$ where $d_jd_k \gg\norm{{\bf d}}_1$. Nonetheless, the first idea mentioned earlier will be applicable in this case --- we reject these $jk$ more often than the others for the construction of $G_L$. Consequently, the edge probability for such $jk$ in the final construction of $G_L$ is close to $1-e^{-1}$ instead of $1$ (and note that only a small proportion of $jk\in\binom{[n]}{2}$ are of this type). \end{itemize}

We define the procedure for sequential generation of ${\mathcal G}(n,W)$ in Section~\ref{sec:W}, and the procedure for sequential generation of ${\mathcal G}(n,{\bf d})$ in Section~\ref{sec:d}. In Section~\ref{sec:coupling} we couple the two procedures together to sequentially generate the coupled pair $(G_L,G)$. The parameters used by the coupling procedure will be specified in Section~\ref{sec:parameters}. Finally, the proof of Theorem~\ref{thm:main2} is given in Section~\ref{sec:proof}.

\subsection{Sequential generation of ${\mathcal G}(n,W)$} \label{sec:W}

We define a sequential sampling procedure \textsc{SeqApprox-P}(${\bf d},\lambda,\Lambda$), which adds a sequence of edges one at a time, and outputs a random graph where every edge in $\binom{[n]}{2}$ appears independently. The input $\lambda$ is a positive real number, and $\Lambda\in [0,1]^{[n]\times [n]}$ is a symmetric $n\times n$ matrix. We may think of ${\bf d}$ as the target degree sequence. As mentioned before, instead of weighting edge probabilities according to their true probabilities in ${\mathcal G}(n,{\bf d})$, we boost the probability of edge $jk$ if $d_jd_k$ is large.
To formalise this notion, we define the matrix $Q$ below, which provides the probability distribution according to which edges are sequentially sampled in the procedure \textsc{SeqApprox-P}(${\bf d},\lambda,\Lambda$).

\begin{definition} Given ${\bf d}=(d_1,\ldots, d_n)$, let $Q({\bf d})$ be the symmetric $n\times n$ matrix defined by $Q_{ij}=Q_{ji}=(\sum_{1\le k<\ell\le n}d_kd_\ell)^{-1}d_id_j$, for every $1\le i<j\le n$, and $Q_{ii}=0$ for every $1\le i\le n$. \end{definition}

Procedure \textsc{SeqApprox-P}(${\bf d},\lambda,\Lambda$) is given below.

\begin{algorithm}[H] \label{alg:W} \begin{algorithmic}[1] \Procedure{SeqApprox-P}{${\bf d},\lambda,\Lambda$} \State Let ${\mathcal I} \sim {\textbf{Po}}(\lambda)$. \State Let $G_0$ be the empty graph on $[n]$. \For{$i$ in $1, \dots,{\mathcal I}$} \State Pick an edge $jk\in \binom{[n]}{2}$ with probability proportional to $d_jd_k$ (i.e.\ with probability $Q({\bf d})_{jk}$). \State $G_i=G_{i-1}\cup \{jk\}$ with probability $\Lambda_{jk}$, and $G_i=G_{i-1}$ with probability $1-\Lambda_{jk}$. \EndFor \State Return $G_{{\mathcal I}}$. \EndProcedure \end{algorithmic} \end{algorithm}

We prove that the output of \textsc{SeqApprox-P}(${\bf d},\lambda,\Lambda$) has distribution ${\mathcal G}(n,W)$ for some $W$ that is a function of ${\bf d},\lambda$ and $\Lambda$. For two matrices $A$ and $B$ of the same dimension, we denote by $A\odot B$ the Hadamard product of $A$ and $B$, defined by $(A\odot B)_{ij}= A_{ij} B_{ij}$ for every entry $ij$.

\begin{lemma}\label{lem:P1} \textsc{SeqApprox-P}(${\bf d},\lambda,\Lambda$) returns a random graph $G$ with distribution ${\mathcal G}(n,f_c(\Lambda\odot Q))$, where $Q=Q({\bf d})$ and $f(x)=1-\exp(-\lambda x)$. \end{lemma}

\noindent{\bf Proof.~}~ Let $e_1 = j_1k_1, \dots, e_N = j_Nk_N$ be an enumeration of the edges in $\binom{[n]}{2}$, where $N = \binom{n}{2}$. For $1 \le i \le N$, let $X_i$ denote the number of times that edge $e_i$ is sampled throughout \textsc{SeqApprox-P}(${\bf d},\lambda,\Lambda$). For each edge $e_i\in\binom{[n]}{2}$, the probability that $e_i$ is in $G$ is thus given by \begin{align*} {\mathbb P}(X_i \ge 1) &= 1-{\mathbb P}(X_i=0) =1-\sum_{m=0}^{\infty} e^{-\lambda} \frac{\lambda^m}{m!} \sum_{j=0}^{m} \binom{m}{j} Q_{e_i}^j (1-Q_{e_i})^{m-j} \left(1-\Lambda_{e_i}\right)^j \\ & =1- \sum_{m=0}^{\infty} e^{-\lambda} \frac{\lambda^m}{m!} (1- Q_{e_i}+Q_{e_i}(1-\Lambda_{e_i}))^m \\ &= 1- \exp\left(-\lambda+\lambda(1- Q_{e_i}+Q_{e_i}(1-\Lambda_{e_i}))\right)= 1-\exp(-\lambda Q_{e_i}\Lambda_{e_i}), \end{align*} and the probability generating function for $X_i$ is then given by \begin{align*} {\mathbb E} z^{X_i} &=\sum_{m=0}^{\infty} e^{-\lambda} \frac{\lambda^m}{m!}\sum_{j=0}^{m} \binom{m}{j} Q_{e_i}^j (1-Q_{e_i})^{m-j} \sum_{k=0}^j \binom{j}{k} \Lambda_{e_i}^k (1-\Lambda_{e_i})^{j-k} z^k \\ &=\sum_{m=0}^{\infty} e^{-\lambda} \frac{\lambda^m}{m!} \sum_{j=0}^{m} \binom{m}{j} Q_{e_i}^j (1-Q_{e_i})^{m-j} (1-\Lambda_{e_i}+\Lambda_{e_i}z)^j\\ & =\sum_{m=0}^{\infty} e^{-\lambda} \frac{\lambda^m}{m!} (1-Q_{e_i}+Q_{e_i}(1-\Lambda_{e_i}+\Lambda_{e_i}z))^m=\exp(-\lambda Q_{e_i}\Lambda_{e_i}(1-z)).
\end{align*} On the other hand, the probability generating function for the random vector $\bf{X}$ is given by \begin{align*} &\sum_{j_1, \dots, j_N} {\mathbb P}(X_1 = j_1, \dots, X_N = j_N) z_1^{j_1}\cdots z_N^{j_N} = \sum_{m = 0}^{\infty} e^{-\lambda} \frac{\lambda^m}{m!} \left( \sum\limits_{i=1}^{N} Q_{e_i} \Lambda_{e_i} z_i +Q_{e_i}(1-\Lambda_{e_i}) \right)^m\\ &\hspace{1cm}= e^{-\lambda} \exp\left(\sum_{i=1}^N \lambda Q_{e_i}\Lambda_{e_i}z_i +\lambda Q_{e_i}(1-\Lambda_{e_i}) \right) = \prod_{i=1}^N \exp\left(-\lambda \ Q_{e_i} \Lambda_{e_i}+\lambda Q_{e_i} \Lambda_{e_i} z_i\right)= \prod_{i=1}^N {\mathbb E} z^{X_i}, \end{align*} where the second last equation above holds because $\sum_{i=1}^N Q_{e_i}=1$ by definition of $Q$. Thus, the components of ${\bf X}$ are mutually independent, and consequently, every edge $e_i$ appears independently in $G$ with probability ${\mathbb P}(X_i\ge 1)$. Thus, $G\sim {\mathcal G}(n,W)$ where $W$ is the symmetric $n\times n$ matrix given by $W_{ij}=1-\exp(-\lambda Q_{ij}\Lambda_{ij})$ for every $i,j\in [n]$. ~~\vrule height8pt width4pt depth0pt \subsection{Sequential generation of ${\mathcal G}(n,{\bf d})$} \label{sec:d} We define procedure \textsc{SeqSample-D}(${\bf d}$) which sequentially generates a random graph with distribution ${\mathcal G}(n,{\bf d})$. This procedure is essentially the same as previously used in~\cite{gao2020sandwiching,gao2022sandwiching,klimovsova2023sandwiching,gao2020kim}. Recall from Theorem~\ref{lem:edgeprob} that ${\mathbb P}(jk\in {\mathcal G}(n,{\bf d})\mid H)$ denotes the probability that $jk$ is an edge of ${\mathcal G}(n,{\bf d})$ conditional on $H$ being a subgraph of ${\mathcal G}(n,{\bf d})$. \begin{algorithm}[H] \label{alg:d} \begin{algorithmic}[1] \Procedure{SeqSample-D}{${\bf d}$} \State Let $G_0$ be the empty graph on $[n]$. \For{$i$ in $1, \dots,\norm{{\bf d}}_1/2$} \State Pick an edge $jk\in \binom{[n]}{2}\setminus G_{i-1}$ with probability proportional to ${\mathbb P}(jk\in {\mathcal G}(n,{\bf d})\mid G_{i-1})$. \State $G_i=G_{i-1}\cup \{jk\}$. \EndFor \State Return $G_{\norm{{\bf d}}_1/2}$. \EndProcedure \end{algorithmic} \end{algorithm} Given ${\bf d}$, and an integer $0\le m\le \norm{{\bf d}}_1$, let ${\mathcal G}(n,{\bf d},m)$ denote a uniformly random subgraph of ${\mathcal G}(n,{\bf d})$ with exactly $m$ edges. The following lemma follows by a simple counting argument and was proved in~\cite[Lemma 3]{gao2022sandwiching}. \begin{lemma}\label{lem:G(n,d)} Let $0\le m \le \norm{{\bf d}}_1/2$ and $G_m$ be the graph obtained after $m$ iterations of \textsc{SeqSample-D}(${\bf d}$). Then $G_m\sim {\mathcal G}(n,{\bf d},m)$. \end{lemma} Lemma~\ref{lem:G(n,d)} (with $m=\norm{{\bf d}}_1/2$) immediately implies the following corollary. \begin{corollary}\label{cor:G(n,d)} Let $G$ be the output of $\textsc{SeqSample-D}({\bf d})$. Then $G \sim {\mathcal G}(n, {\bf d})$. \end{corollary} We need the following lemma, which follows by standard concentration results and can be found in~\cite[Lemma 4]{gao2022sandwiching}. (We correct two obvious typos that are present in~\cite[Lemma 4]{gao2022sandwiching}: the probability bounds in parts (a,c) were ``$\ge$'' instead of ``$\le$''.) \begin{lemma}\label{lem:Bin-bounds} Let $Y \sim {\textbf{Bin}}(K, p)$ for some positive integer $K$ and $p \in [0, 1]$. \begin{itemize} \item[(a)] For any ${\epsilon}\ge 0$, ${\mathbb P}(|Y- p K| > {\epsilon} p K) \le 2e^{-\frac{{\epsilon}^2}{2+{\epsilon}} pK}$. 
\item[(b)] If $p = m/K$ for some integer $m \in (0, K)$, then ${\mathbb P}(Y = m) \ge \frac{1}{3} (p(1-p) K)^{-1/2}$. \item[(c)] Let ${\mathcal I}\sim {\textbf{Po}}(\mu)$ for some $\mu>0$. Then, for any ${\epsilon}\ge 0$, ${\mathbb P}({\mathcal I} \ge \mu(1+{\epsilon}))\le e^{-\frac{{\epsilon}^2}{2+{\epsilon}}\mu}$. \end{itemize} \end{lemma} The following lemma was proved in~\cite{gao2022sandwiching}. We include a short proof here. We will use this lemma to gain information on the remaining degree sequence of ${\mathcal G}(n,{\bf d})$ when part of it has been constructed. \begin{lemma}\label{lem:bounds} Let $0\le m<\norm{{\bf d}}_1/2$ and $p_m = (\norm{{\bf d}}_1-2m)/\norm{{\bf d}}_1$. For any $\xi=\xi_n\in (0,1)$, $|d_j-d^{G_m}_j-p_m d_j| \le \xi p_m d_j$ for all $j \in [n]$ with probability $1-\exp\left(-\Omega(\xi^2 p_m d_j)+2\log n\right)$. \end{lemma} \noindent{\bf Proof.~}~ Take $G \sim {\mathcal G}(n, {\bf d})$ and let $\textbf{h} = (h_1, \dots, h_n)$ be the degree sequence of the graph $H$ obtained by independently keeping every edge of $G$ with probability $p_m$. Then, $h_j \sim {\textbf{Bin}}(d_j, p_m)$ for every $j\in [n]$. By Lemma~\ref{lem:G(n,d)}, conditioned on the event that $|E(H)| = \norm{{\bf d}}_1/2-m$, $\textbf{h}$ has the same distribution as ${\bf d}-{\bf d}^{G_m}$. Therefore, by Lemma~\ref{lem:Bin-bounds} (a,b), for every $j\in [n]$, \begin{eqnarray*} {\mathbb P}(|d_j-d_j^{G_m}-p_m d_j| \ge \xi p_m d_j) &\le& \frac{{\mathbb P}(|h_j-p_md_j| \ge \xi p_m d_j)}{{\mathbb P}(|E(H)| = \norm{{\bf d}}_1/2-m)} \le \frac{2e^{-\frac{\xi^2}{2+\xi}p_md_j}}{\frac{1}{3}(p_m(1-p_m)\norm{{\bf d}}_1/2)^{-1/2}}\\ & \le& \sqrt{\norm{{\bf d}}_1} \cdot e^{-\Omega(\xi^2 p_m d_j)} = \exp\left(-\Omega(\xi^2 p_m d_j)+\log n\right), \end{eqnarray*} as $\norm{{\bf d}}_1\le n^2$. The lemma follows by taking the union bound over $j\in [n]$. ~~\vrule height8pt width4pt depth0pt \subsection{Couple the two sequential sampling procedures} \label{sec:coupling} Finally we couple the two aforementioned procedures and define procedure \textsc{Coupling}(${\bf d}$, $\lambda$, $\Lambda$) which sequentially constructs ${\mathcal G}(n,W)$ and ${\mathcal G}(n,{\bf d})$ together. \begin{algorithm}[H] \label{alg:coupling} \begin{algorithmic}[1] \Procedure{Coupling}{${\bf d}$, $\lambda$, $\Lambda$} \State Let $L_0$ and $G_0$ be empty graphs on vertex set $[n]$. \State Let ${\mathcal I} \sim {\textbf{Po}}(\lambda)$. \For{$i$ in $1, \dots,{\mathcal I}$} \State Pick an edge $jk$ of $K_n$ with probability proportional to $d_jd_k$ (i.e.\ with probability $Q_{jk}$). \If{$jk \in G_{i-1}$} \State $G_i = G_{i-1}$; \State $L_i = L_{i-1} \cup \{jk\}$ with probability $\Lambda_{jk}$, \State $L_i = L_{i-1}$ with probability $1-\Lambda_{jk}$. \Else \State Let $\eta_{jk}^{(i)} = \frac{\rho_i(jk)}{\max_{h\ell\notin G_{i-1}}\rho_i(h\ell)}$, where $\rho_i(h\ell)=(d_hd_{\ell})^{-1}{\mathbb P}(h\ell \in {\mathcal G}(n, {\bf d}) \mid G_{i-1})$. \If{$\eta_{jk}^{(i)} <\Lambda_{jk}$} \State \textbf{Return} \textsc{IndSample}(${\bf d},\lambda,\Lambda$ \Else \State $G_i = G_{i-1} \cup \{jk\}$ and $L_i = L_{i-1} \cup \{jk\}$ with probability $\Lambda_{jk}$, \State $G_i = G_{i-1} \cup \{jk\}$ and $L_i = L_{i-1}$ with probability $\eta_{jk}^{(i)}-\Lambda_{jk}$, \State $G_i = G_{i-1}$ and $L_i = L_{i-1}$ with probability $1-\eta_{jk}^{(i)}$. 
\EndIf \EndIf \EndFor \For{$i \ge {\mathcal I}+1$, while $G_{i-1}$ has fewer edges than ${\mathcal G}(n, {\bf d})$} \State Pick an edge $jk \not\in G_{i-1}$ with probability proportional to ${\mathbb P}(jk \in {\mathcal G}(n, {\bf d}) \mid G_{i-1})$; \State $G_i = G_{i-1} \cup \{jk\}$. \EndFor \State \textbf{Return} $(G_L, G)$, where $G = G_i$ and $G_L= L_{{\mathcal I}}$. \EndProcedure \item[] \Procedure{IndSample}{${\bf d},\lambda,\Lambda$} \State Independently sample $G\sim{\mathcal G}(n, {\bf d})$, and $G_L\sim {\mathcal G}(n, f_c(\Lambda\odot Q))$ where $f(x)=1-\exp(-\lambda x)$. \State \textbf{Return} ($G_L$, $G$) \EndProcedure \end{algorithmic} \end{algorithm}

\begin{lemma}\label{lem:distribution} Let $(G_L,G)$ be the output of \textsc{Coupling}(${\bf d},\lambda,\Lambda$). \begin{enumerate}[(a)] \item $G_L\sim {\mathcal G}(n, f_c(\Lambda\odot Q))$ where $Q=Q({\bf d})$ and $f(x)=1-\exp(-\lambda x)$. \item $G\sim {\mathcal G}(n,{\bf d})$. \item If \textsc{IndSample}(${\bf d},\lambda,\Lambda$) is not called during the execution of \textsc{Coupling}(${\bf d},\lambda,\Lambda$), then $G_L\subseteq G$ in the output of \textsc{Coupling}(${\bf d},\lambda,\Lambda$). \end{enumerate} \end{lemma}

\noindent{\bf Proof.~}~ Parts (a,b) are trivially true if \textsc{IndSample}(${\bf d},\lambda,\Lambda$) is called. Now assume that \textsc{IndSample}(${\bf d},\lambda,\Lambda$) is not called. Part (c) follows directly by the coupling procedure, as every edge is added to $G_L$ only when it is, or has been, added to $G$. For (a), note that if \textsc{IndSample}(${\bf d},\lambda,\Lambda$) is not called then the edges are sequentially added to $G_L$ exactly as in \textsc{SeqApprox-P}(${\bf d}, \lambda, \Lambda$), and thus part (a) follows by Lemma~\ref{lem:P1}. For part (b), notice that in each step $i$, edge $jk$ is added to $G_{i-1}$ with probability $Q_{jk}\eta_{jk}^{(i)}$, which is proportional to ${\mathbb P}(jk\in {\mathcal G}(n,{\bf d}) \mid G_{i-1})$. Thus, the edges are added to $G$ exactly as in the execution of \textsc{SeqSample-D}(${\bf d}$), and part (b) follows by Corollary~\ref{cor:G(n,d)}. ~~\vrule height8pt width4pt depth0pt

\subsection{Specify $\lambda$ and $\Lambda$ for the coupling procedure} \label{sec:parameters}

By the assumptions $J({\bf d})=o(\norm{{\bf d}}_1)$ and $\delta({\bf d})\gg \log n$, there exist $\xi,\zeta,\zeta'=o(1)$ that go to zero sufficiently slowly so that \begin{align} \zeta'&\gg \norm{{\bf d}}_1^{-1/3} \label{z'}\\ \zeta'\xi^2 & \gg \frac{\log n}{\delta({\bf d})} \label{z'-xi}\\ \zeta' & \gg \frac{J({\bf d})}{\norm{{\bf d}}_1}\label{z'2}\\ \zeta & \gg \frac{J({\bf d})}{\zeta'\norm{{\bf d}}_1}+\xi \label{z}. \end{align} Choose $\xi,\zeta,\zeta'$ that satisfy all the conditions above. For the coupling procedure, we set \begin{align} \lambda &=(1-\zeta')\norm{{\bf d}}_1/2 \label{lambda}\\ \Lambda_{jk} &=(1-\zeta) \frac{\norm{{\bf d}}_1}{\norm{{\bf d}}_1+d_jd_k} \quad \mbox{for every $1\le j<k\le n$.} \label{Lambda} \end{align}

\section{Proof of Theorem~\ref{thm:main2}} \label{sec:proof}

Let $\Delta=\Delta({\bf d})$, $Q=Q({\bf d})$ and $P=P({\bf d})$. By Lemma~\ref{lem:distribution}, it suffices to show that \begin{align} {\mathbb P}(\textsc{IndSample}({\bf d},\lambda,\Lambda) \text{ is called}) &=o(1) \label{rejection} \\ \lambda \Lambda\odot Q &= (1+o(1))P, \label{Q} \end{align} as $1-\exp(-(1+o(1))P_{ij})=(1+o(1))(1-e^{-P_{ij}})$ for every $ij$.
\smallskip {\em Proof of~\eqn{Q}.} For every $1\le j<k\le n$, \[ Q_{jk}= \frac{d_jd_k}{\sum_{1\le h<\ell\le n} d_hd_{\ell}}= \frac{d_jd_k}{\frac{1}{2}(\norm{{\bf d}}_1^2-\sum_{i= 1}^n d_i^2)} = \frac{2d_jd_k}{\norm{{\bf d}}_1^2- O(\Delta) \norm{{\bf d}}_1}. \] Since $\lambda = (1-\zeta')\norm{{\bf d}}_1/2$ by~\eqn{lambda}, and by~\eqn{Lambda} \[ \lambda \Lambda_{jk} Q_{jk} = \frac{(1-\zeta') \norm{{\bf d}}_1}{2} \frac{(1-\zeta)\norm{{\bf d}}_1}{\norm{{\bf d}}_1+d_jd_k} \frac{2d_jd_k}{\norm{{\bf d}}_1^2(1+ O(\Delta/\norm{{\bf d}}_1) } =\left(1+O\left(\zeta+\zeta'+\frac{\Delta}{\norm{{\bf d}}_1}\right)\right) \frac{d_jd_k}{\norm{{\bf d}}_1+d_jd_k}, \] and~\eqn{Q} follows by noting that $\zeta,\zeta',\Delta/\norm{{\bf d}}_1=o(1)$. \medskip {\em Proof of~\eqn{rejection}.} By Lemma~\ref{lem:Bin-bounds}(c) and~\eqn{lambda}, a.a.s.\ ${\mathcal I}=(1+O(\lambda^{-1/3}))\lambda = (1-\zeta'+O(\norm{{\bf d}}_1^{-1/3}))\norm{{\bf d}}_1/2$. It suffices then to show that a.a.s.\ throughout the execution of \textsc{Coupling}(${\bf d},\lambda,\Lambda$), $\eta^{(i)}_{x_i}\ge \Lambda_{x_i}$ for every $1\le i\le {\mathcal I}$, where $x_i$ is the random edge sampled in the $i$th iteration. Let $m_i$ be the number of edges in $G_i$. Since a.a.s.\ ${\mathcal I}=(1-\zeta'+O(\norm{{\bf d}}_1^{-1/3}))\norm{{\bf d}}_1/2$ and $m_i\le {\mathcal I}$ for every $1\le i\le {\mathcal I}$, it follows then by~\eqn{z'} that a.a.s.\ \begin{equation} p_{m_i}\ge p_{m_{\mathcal I}}\ge \zeta'/2 \quad \mbox{for every $1\le i\le {\mathcal I}$}, \label{p_m} \end{equation} where $p_{m_i}$ is defined by $1-2m_i/\norm{{\bf d}}_1$ as in Lemma~\ref{lem:bounds}. By Lemma~\ref{lem:bounds} (with $\xi$ chosen in Section~\ref{sec:parameters}),~\eqn{z'-xi} and~\eqn{p_m}, and by taking the union bound over all the ${\mathcal I}\le \norm{{\bf d}}_1/2$ steps, $d_j - d_j^{G_i}=(1+O(\xi)) p_{m_i} d_j$ for every $j\in [n]$, and for every $i\le {\mathcal I}$. By Theorem~\ref{lem:edgeprob}, a.a.s.\ \[ {\mathbb P}(jk\in {\mathcal G}(n,{\bf d}) \mid G_i)=\left(1+O\left(\frac{J({\bf d})}{p_{m_i}\norm{{\bf d}}_1} + \xi\right)\right) \frac{ p_{m_i} d_j d_k }{ \norm{{\bf d}}_1+p_{m_i} d_jd_k } \] for every $1\le i\le {\mathcal I}$, where $J({\bf d})/p_{m_i}\norm{{\bf d}}_1 + \xi \le 2J({\bf d})/\zeta'\norm{{\bf d}}_1 + \xi=o(1)$ by~\eqn{z'2}. Recall that \[ \rho_i(jk)=\frac{{\mathbb P}(jk\in {\mathcal G}(n,{\bf d}) \mid G_i)}{d_jd_k}=\left(1+O\left(\frac{J({\bf d})}{\zeta'\norm{{\bf d}}_1} + \xi\right)\right) \frac{ p_{m_i}}{ \norm{{\bf d}}_1+p_{m_i} d_jd_k }. \] Notice that \[ \frac{ p_{m_i}}{ \norm{{\bf d}}_1+p_{m_i} d_jd_k } \le \frac{ p_{m_i}}{ \norm{{\bf d}}_1} \quad\mbox{for all $jk\notin G_{i-1}$}. \] Thus, $\max_{h\ell\notin G_{i-1}}\rho_i(h\ell) \le p_{m_i}/\norm{{\bf d}}_1$. Hence, \[ \eta^{(i)}_{jk}\ge \left(1+O\left(\frac{J({\bf d})}{\zeta'\norm{{\bf d}}_1} + \xi\right)\right) \frac{\norm{{\bf d}}_1}{\norm{{\bf d}}_1+p_{m_i}d_jd_k}\ge (1-\zeta)\frac{\norm{{\bf d}}_1}{\norm{{\bf d}}_1+d_jd_k}, \] by~\eqn{z}. Now~\eqn{rejection} follows. ~~\vrule height8pt width4pt depth0pt
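\medskip As a sanity check of \eqn{Q}, consider a $d$-regular sequence $d_1=\cdots=d_n=d$ with $\log n\ll d=o(n)$ and $J({\bf d})=o(\norm{{\bf d}}_1)$, as assumed throughout. Then $\norm{{\bf d}}_1=nd$ and, for every pair $jk$,
\[
\lambda\Lambda_{jk}Q_{jk}=(1+o(1))\,\frac{d^2}{nd+d^2}=(1+o(1))\,\frac{d}{n+d}=(1+o(1))\,\frac{d}{n-1},
\]
so $1-e^{-\lambda\Lambda_{jk}Q_{jk}}=(1+o(1))\,d/(n-1)$, which is asymptotically the edge probability $d/(n-1)$ of a uniformly random $d$-regular graph. This illustrative computation only checks the constants; the general case is covered by the argument above.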
https://arxiv.org/abs/1702.02918
The Geometry of Strong Koszul Algebras
Koszul algebras with quadratic Groebner bases, called strong Koszul algebras, are studied. We introduce affine algebraic varieties whose points are in one-to-one correspondence with certain strong Koszul algebras and we investigate the connection between the varieties and the algebras.
\section{Introduction}\label{sec-ntro}The connection between affine algebraic varieties and commutative rings, especially quotients of commutative polynomial rings over a field, is well established. In this paper, we introduce a new connection between affine algebraic varieties and a class of Koszul algebra which are not necessarily commutative. The varieties we consider have the property that the points are in one-to-one correspondence with certain Koszul algebras. Given one of these varieties, the Koszul algebras corresponding to the points are shown to have a number of features in common. To describe our results more precisely, let $K$ be a field, $\cQ$ a finite quiver, and let $K\cQ$ denote the path algebra. An element $x\in K\cQ$ is called \emph{quadratic} if $x$ is a $K$-linear combination of paths of length 2. We say a (two-sided) ideal $I$ in $K\cQ$ is a \emph{quadratic ideal} if $I$ can be generated by quadratic elements. One of many equivalent definitions of a \emph{Koszul algebra} is the following: If $J$ is the ideal in $K\cQ$ generated by the arrows of $\cQ$ and $I$ is a quadratic ideal in $K\cQ$ , then $K\cQ/I$ is a \emph{Koszul algebra} if the Ext-algebra, $\oplus_{n\ge 0}\Ext^n_{K\cQ/I}(K\cQ/J,K\cQ/J)$, can be generated in degrees 0 and 1. Although this a very special class of algebras, Koszul algebras occur in many different settings, for example, see \cite{C,F,Gr1} and their references. In this paper we study a class of Koszul algebras which we call \emph{strong Koszul algebras}. To define this class, fix an admissible order $\succ$ on the paths in $\cQ$. The formal definition of an admissible order can be found in the beginning of Section \ref{sec-strong}. Such an order is needed for $K\cQ$ to have a \grb basis theory. We provide a brief overview of the \grb basis theory needed for the paper in Section \ref{sec-strong}. An algebra $\Lambda=K\cQ/I$ is a \emph{strong Koszul algebra (with respect to $ I$ and $\succ$)}, if $I$ is a quadratic ideal in $K\cQ$ and $I$ has a \grb basis consisting of quadratic elements. That a strong Koszul algebra is in fact a Koszul algebra is proved in \cite{GH}. Note that not all Koszul algebras are strong; for example, Sklyanin algebras \cite{S} are Koszul algebras that are not strong. One important type of a strong Koszul algebra is of the form $K\cQ/I^*$, where $I^*$ is an ideal that can be generated by a set $\cT$ of paths of length 2 in $\cQ$. That $K\cQ/I^*$ is a strong Koszul algebra follows from the fact that $\cT$ is a \grb basis of $I^*$ with respect to any admissible order \cite{GH}. For each set $\cT$ of paths of length 2 in $K\cQ$, we define an affine algebraic variety $\GrAlg(\cT)$, such that the points of $\GrAlg(\cT)$ are in one-to-one correspondence with a particular set of strong Koszul algebras; see Theorem \ref{thm-one} and Theorem \ref{thm-var}. We view this correspondence as an identification and show that each strong Koszul algebra corresponds to a point in one of these varieties; see Corollary \ref{cor-corres}. In each variety $\GrAlg(\cT)$, there is a distinguished algebra, $K\cQ/I^*$, where $I^* $ is generated by $\cT$. All other algebras $K\cQ/I$ in $\GrAlg(\cT)$ have the property that $ I$ cannot be generated by paths. 
The class of strong Koszul algebras includes preprojective algebras whose underlying graph is connected and not a tree \cite{Gr1}, straightening closed algebras generated by minors \cite{GH}, and algebras of the form $K\cQ/\langle \cT\rangle$ where $\cT$ is a set of paths of length 2 in $K \cQ$ and $\langle \cT \rangle $ denotes the ideal generated by $\cT$. The algebras lying in one variety have a number of properties in common. Suppose that $\Lambda=K\cQ/I$ and $\Lambda'=K\cQ/I'$ are two strong Koszul algebras in $\GrAlg(\cT)$. Then we prove the following results. Let $\Lambda^*=K\cQ/\langle \cT\rangle$. \begin{enumerate} \item $\dim_K(\Lambda)= \dim_K(\Lambda')= \dim_K(\Lambda^*)$; see Theorem \ref{thm-cartan}. \item Assuming $\Lambda^*$ is finite dimensional, then $\gldim(\Lambda)=\gldim(\Lambda')=\gldim(\Lambda^*)$; see Corollary \ref{cor-gldim}. Note that \cite{GHZ} provides a fast algorithm for computing the global dimension of $\Lambda^*$. \item The Betti numbers in the minimal projective resolutions of one dimensional simple modules for all three algebras are the same; see Theorem \ref{thm-ext}. \item The Cartan matrices of $\Lambda$, $\Lambda'$, and $\Lambda^*$ are the same; see Corollary \ref{cor-cartan}. \item If $\Lambda^*$ is quasi-hereditary, then so are $\Lambda$ and $\Lambda'$ \cite{GHS}. Furthermore a method for determining if $\Lambda^*$ is quasi-hereditary is given in \cite{GHS}. \end{enumerate} Section \ref{sec-ex} is devoted to examples. In particular, the varieties that include the commutative polynomial rings of dimension 2 and 3 are investigated; see Examples \ref{ex-poly2} and \ref{ex-poly3}. These examples show that in certain cases, the algebras in the variety $\GrAlg(\cT)$ are Koszul Artin-Schelter regular algebras, which have played a fundamental role in noncommutative geometry; see, for example, \cite{HOZ,MS} and their references. Section \ref{sec-sub} shows how to restrict the varieties to subvarieties that can be more tractible than the full variety $\GrAlg(\cT)$. Section \ref{sec-results} shows that if $\Lambda$ is a strong Koszul algebra, then so are the opposite algebra $\Lambda^{op}$ and the enveloping algebra, $\Lambda\otimes_K\Lambda^{op}$. The paper ends with some remarks and questions. \section{Strong Koszul algebras}\label{sec-strong} To define a strong Koszul algebra we will need to briefly review (graded) \grb basis theory. For details we refer the reader to \cite{Gr2}. We fix a field $K$ and a finite quiver $\cQ$. The set $\cB$ of finite (directed) paths forms a $K$-basis of the path algebra $K\cQ$. We positively $\mathbb Z$-grade $K\cQ=K\cQ_0+K\cQ_1+\cdots$ by defining $K\cQ_n$ to be the $K$-span of paths in $ \cB$ of length $n$. This is called the \emph{length grading} on $K\cQ$. For a \grb basis theory we need a special type of order on $\cB$. We say a well-order $\succ$ on $\cB$ is \emph{admissible} if, for all $p,q,r,s,t$ in $\cB$, \begin{enumerate} \item if $p\succ q$, then $pr\succ qr$ if both $pr$ and $qr$ are nonzero, \item if $p\succ q$, then $sp\succ sq$ if both $sp$ and $sq$ are nonzero, and \item if $p=rqt$, then $p\succeq q$. \end{enumerate} We fix an admissible order $\succ$ on $\cB$. Since we are interested in graded Koszul algebras where the grading is induced from the length grading of $K\cQ$, we add the requirement that if $p,q\in\cB$ and $\ell( p) >\ell (q)$ then $p\succ q$, where $\ell(p)$ denotes the path length of $p$. We call such an admissible order a \emph{length admissible order}. Fix a length admissible order $\succ$. 
Note that we give an example of such an order in the beginning of Section \ref{sec-ex}. In general, $\cB$ will be an infinite set. We make the convention that if $x\in K\cQ$ and we write $x=\sum_{p\in\cB}\alpha_pp$ with $\alpha_p\in K$, then all but a finite number of $\alpha_p$ eqal 0. If $x=\sum_{p\in\cB}\alpha_pp$ is a nonzero element of $K\cQ$, then $\tip(x)=p$ if $\alpha_p\ne 0$ and $p\succeq q$, for all $q$ with $\alpha_q\ne 0$. If $X\subseteq K\cQ$, then \[\tip(X)=\{\tip(x)\mid x\in X \text{ and }x\ne 0\}.\] We say a nonzero element $x\in K\cQ$ is \emph{uniform} if there exist vertices $v$ and $w$ such that $x=vxw$. Paths are always uniform, and, if $\cQ$ has one vertex and $n$ loops, then $K\cQ$ is isomorphic to the free algebra on $n$ noncommuting variables and that every nonzero element of $KQ$ is uniform. If $I$ is an ideal in $K\cQ$, then we say that $I$ is a \emph{graded ideal} if $I =\sum_{n\ge 0}I\cap K\cQ_n$. Equivalently, $I$ can be generated by (length) homogeneous elements. If $I$ is a graded ideal in $K\cQ$ and $\Lambda=K\cQ/I$, then $\Lambda$ has positive $\mathbb Z$-grading induced from the length grading on $K\cQ$, which we call the \emph{induced length grading}. \begin{definition}\label{def-gb}{\rm Let $I$ be a graded ideal in $K\cQ$ and $\cG$ a set of length homogeneous uniform elements in $I$. Then $\cG$ is a \emph{(graded) \grb basis} of $I$ (with respect to $\succ$) if \[ \langle \tip(\cG)\rangle=\langle\tip(I)\rangle.\] } \end{definition} \begin{definition}\label{def-st-kosz}{\rm Let $\Lambda =K\cQ/I$. We say that $\Lambda$ is a \emph{strong Koszul algebra} (with respect to $I$ and $\succ$) if $I$ has a \grb basis with respect to $\succ$ consisting of quadratic elements. } \end{definition} \begin{theorem}\label{thm-is-kz}\cite{GH} A strong Koszul algebra is a Koszul algebra. \end{theorem} The converse is false in general, for example, the Sklyanin algebras \cite{S}. For the remainder of this section we look more closely at the strong Koszul algebras. If $X$ is a subset of $K\cQ$, then we define \[\nontip(X)=\cB\setminus \tip(X)\] We have the following result whose proof is left to the reader. \begin{proposition}\label{prop-tip-nontip}. Let $\cT$ be a set of paths in $K\cQ$ such that if $t,t'\in\cT$ and $t\ne t'$ then $t$ is not a subpath of $t'$. Let $I=\langle \cT\rangle$. \begin{enumerate} \item $n\in\nontip(I)$ if and only if no path in $\cT$ is a subpath of $n$. \item $t=a_1a_2\cdots a_n$ with $a_i\in\cQ_1$ is in $\cT$ if and only if $t\notin\nontip(I)$ but $a_2a_3\cdots a_n$ and $a_1a_2\cdots a_{n-1}$ are in $\nontip(I)$. \end{enumerate}\qed\end{proposition} Our next result is fundamental and is slightly more general than the result found in \cite{Gr2}. Let $\cS$ denote the subalgebra of $K\cQ$ generated by the vertices of $\cQ$. Note that $\cS$ is a semisimple $K$-algebra. If $\cX$ is a set of paths in $\cQ$, $\Span_K(\cX)$ is an $S$-bimodule as follows: if $\sum_{x\in\cX}\alpha_xx\in \Span_K(\cX)$ and $v,w\in\cQ_0$, then $v(\sum_{x\in\cX}\alpha_xx)w=\sum_{x\in\cX}\alpha_x(vxw)$. The proof found in \cite{Gr2} can easily be adjusted from $K$-vector spaces to $S$-bimodules. \begin{lemma}\label{lem-fund}{\rm\bf (Fundamental Lemma)} \ If $I$ is an ideal in $K\cQ$, then \[ K\cQ = I\oplus \Span_K(\nontip(I)),\] as $S$-bimodules. \end{lemma} We say an ideal $I$ in $K\cQ$ is a \emph{monomial} ideal if $I$ can be generated by paths. Note that a monomial ideal is a graded ideal. The proof of the following well-known result is left to the reader. 
\begin{proposition}\label{prop-mono}Let $L$ be a monomial ideal in $K\cQ$. Then \begin{enumerate} \item an element $x=\sum_{p\in\cB}\alpha_pp$ with $\alpha_p\in K$ is in $L$ if and only if, for each $\alpha_p\ne 0$, $p\in L$, and \item there is a unique minimal set of paths that generate $L$.\qed \end{enumerate} \end{proposition} We apply the second part of the above proposition and the Fundamental Lemma as follows. Let $I$ be a graded ideal in $K\cQ$. Then $\langle \tip(I)\rangle$, the two-sided ideal generated by $\tip(I)$, is a monomial ideal. Hence there is a unique minimal subset, $\cT$, of $\tip(I)$, that generates $\langle \tip(I)\rangle$. By the Fundamental Lemma, for each $t\in\cT$, there is a unique $g_t\in I$ and a unique $n(t)\in\Span_K(\nontip(I)$ such that $t=g_t+n(t)$. In particular, for each $t\in\cT$, $t-n_t\in I$. \begin{proposition}\label{prop-red} The set $\cG=\{g_t\mid t\in \cT\}$ is a graded \grb basis for $I$. \end{proposition} \begin{proof} Each $g_t\in I$ implies that $\tip(g_t)\in\tip(I)$. Since $n(t)\in \Span_K(\nontip(I))$, we conclude that, for each $t\in\cT$, $\tip(g_t)=t$. Next we show that $\cG$ consists of uniform length homogeneous elements. Letting $t\in\cT$ be a path of length $m$, writing $t=g_t+n(t)$ we see that, in degree $m$, $(g_t)_m\in I$ and $n(t)_m$ remains in $\Span_K(\nontip(I))$. We have that $t=(g_t)_m+n(t)_m$ and, by unicity, each $g_t$ is a length homogeneous element. The proof that each $g_t$ is uniform is similar. Since $\cT$ generates $\langle\tip(I)\rangle$, $\cT=\tip(\cG)$, the elements of $\cG$ are uniform, length homogenous, and hence we are done. \end{proof} \begin{definition}\label{def-red}{\rm Given a graded ideal $I$ in $K\cQ$ and $\cG$ as constructed above, we call $\cG$ the \emph{reduced} \grb basis for $I$ (with respect to $\succ$).} \end{definition} Returning to strong Koszul algebras, if an ideal has a \grb basis $\cH$ of uniform quadratic elements then $\tip(\cH)$ consists of paths of length 2, and hence the reduced \grb basis consists of quadratic uniform elements. Thus, \begin{proposition}\label{prop-ska}We have that $\Lambda=K\cQ/I$ is a strong Koszul algebra if and only the reduced \grb basis consists of quadratic uniform elements. \end{proposition} \section{The variety $\GrAlg(\cT)$}\label{sec-var} In this section, if $\cT$ is a set of paths of length 2, we define an affine variety whose points are in one-to-one correspondence to the strong Koszul algebras $\Lambda=K\cQ/I$ (with respect to $I$ and $\succ$) , having the property that $\langle\tip(I)\rangle$ is generated by $\cT$. Referring to Example \ref{ex-gd} while reading this section should be helpful. Fix $\cT$ to be a set of paths of length 2. Set $\cN=\cB\setminus \tip( \langle\cT\rangle)$. Recall that if $I $ is an ideal such that $\langle \tip(I)\rangle =\langle \cT\rangle$, then $\cN=\nontip(I)$. It is important to note that $\cN$ is only dependent on $\cT$ and not on $I$. We begin by defining the affine space in which our variety lives. For this we need the following definitions. We say two elements $x$ and $y$ of $K\cQ$ are \emph{parallel} if there are vertices $v$ and $w$ such that $vxw=x$ and $vyw=y$. In particular, if $x$ and $y$ are parallel then both $x$ and $y$ are uniform. If $x$ and $y$ are parallel, we write $x\| y$. Note that if $x=\sum_{p\in\cB}\alpha_pp\in K\cQ$, then $x$ is uniform if and only if $p\| q$, for all $p,q\in\cB$ with $\alpha_p$ and $\alpha_q$ nonzero. 
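For instance, in the quiver with vertices $1,2,3$ and arrows $a\colon 1\to 2$, $b\colon 2\to 3$, and $c\colon 1\to 3$, the paths $ab$ and $c$ are parallel, since both run from $1$ to $3$, and so $ab-c$ is uniform; on the other hand, $a$ and $c$ are not parallel, and $a-c$ is not uniform. (This small quiver is used only to illustrate the definitions; it plays no role in what follows.)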
Let $\cN_2$ be the set of paths of length 2 in $\cN=\cB\setminus \tip(\langle\cT\rangle)$ and, for $t\in\cT$, define \[ \cN_2(t)=\{n\in\cN_2\mid t\succ n\text{ and } n\| t\}.\] We now can define the affine space in which our variety lives. If $S$ is a set, then $|S|$ denotes the cardinality of $S$. Let $D=\sum_{t\in\cT}|\cN_2(t)|$. We let $\cA=K^D$, viewed as a $D$-dimensional affine space. If $X\in \cA$, then we write $X$ as tuple with indices in the disjoint union of the $\cN_2(t)$'s; that is, we write $X=(x_{t,n})$, where $t\in \cT$, $ n\in\cN_2(t)$, and $x_{t,n}\in K$. For each $X=(x_{t,n})\in\cA$, let \[\cG(X)= \{g_t\in K\cQ\mid g_t=t-\sum_{n\in\cN_2(t)}x_{t,n}n\}.\] We now define the subset of $\cA$ of interest. \begin{definition}{\rm Given a set $\cT$ of paths of length $2$, define \[ \GrAlg(\cT)=\{X\in\cA\mid K\cQ/\langle \cG(X)\rangle \text{ is a strong Koszul algebra (with}\]\[\text{ respect to }\langle \cG(X)\rangle \text{ and }\succ) \} \] } \end{definition} The remainder of this section is devoted to showing that $\GrAlg(\cT)$ is an affine variety in $\cA$ whose points are in one-to-one correspondence with the elements of \[\clU =\text{ the set of algebras }K\cQ/I\text{ that are strong Koszul algebras}\]\[\text{ (with respect to }I\text{ and }\succ)\text{ such that }\langle \cT\rangle=\langle \tip(I)\rangle.\]First we will show the one-to-one correspondence. We begin with a preparatory result. \begin{proposition}\label{prop-oneone} If $K\cQ/I\in\clU$ then there exists $X\in\cA$ such that the reduced \grb basis of $I$ is $\cG(X)$ for some $X$. Moreover, $X$ is unique. \end{proposition} \begin{proof} Suppose that $K\cQ/I\in\clU$. Since $\langle \cT\rangle=\langle \tip(I)\rangle$ and $\cT$ are paths of length 2, $\cT$ must be the unique minimal generating set of the monomial ideal $\langle \tip(I)\rangle$. It now follows from our discussion of the reduced \grb basis that there is some $X\in\cA$ such that $I=\langle \cG(X)\rangle$. Uniqueness follows from the uniqueness of the reduced \grb basis. \end{proof} \begin{corollary}\label{cor-corres} If $\Lambda=K\cQ/I$ is a strong Koszul algebra (with respect to $I$ and $\succ$), then $\Lambda\in\GrAlg(\cT)$ where $\cT$ is the minimal set of generators of $\langle \tip(I)\rangle$. \end{corollary} \begin{proof} The reduced \grb basis $\cG$ of $I$ with respect to $\succ$ is composed of uniform quadratic elements. Let $\cT=\tip(\cG)$. It is immediate that $\cT$ is the minimal set of paths that generate $\langle \tip(I)\rangle$. It is now clear that $\Lambda\in \GrAlg(\cT)$. \end{proof} We now state the correspondence theorem. \begin{theorem}\label{thm-one}Let $\cT$ be a set of paths of length 2. There is a one-to-one correspondence between the points of $\GrAlg(\cT)$ and the algebras $K\cQ/I$ that are strong Koszul algebras (with respect to $I$ and $\succ$) such that $\langle \cT\rangle=\langle \tip(I)\rangle$. \end{theorem} \begin{proof} Define $\varphi\colon \GrAlg(\cT)\to \clU$ by $\varphi(X)=KQ/\langle \cG(X)\rangle$. The map $\varphi$ is well-defined. We see that $\varphi$ is injective, since if $X=(x_{t,n}),X'=(x'_{t,n})\in\GrAlg(\cT)$ with $X\ne X'$, the reduced \grb bases of $\varphi(X)$ and $\varphi(X')$ differ. But the reduced \grb basis of an ideal is unique, and $\varphi$ being injective follows. To see that $\varphi$ is onto, let $K\cQ/I\in\clU$. We are assuming that $\langle \tip(I)\rangle=\langle \cT\rangle$. 
Since every path in $\cT$ is a path of length 2 and $\langle \tip(I)\rangle=\langle\cT\rangle$, we see that $\cT$ is the (unique) minimal set of paths that generate $\langle \tip(I)\rangle$. By the construction of the reduced \grb basis for $I$ with respect to $\succ$ found in Section \ref{sec-strong}, the reduced \grb basis for $I$ with respect to $\succ$ is $\{g_t\mid t\in\cT \text{ and }g_t =t-\sum_{n\in\cN_2(t)}x_{t,n}n\}$ for some $x_{t,n}\in K$. Thus, $K\cQ/I=\varphi((x_{t,n}))$, and we are done. \end{proof} We now show the somewhat surprising result that $\GrAlg(\cT)$ is an affine algebraic variety in $\cA$. \begin{theorem}\label{thm-var} Let $K$ be a field, $\cQ$ a finite quiver, and $\cT$ be a set of paths of length 2 in $\cQ$. Then $\GrAlg(\cT)$ is an affine algebraic variety. \end{theorem} Before proving this result, we will need some preliminary work. We introduce a ``polynomial ring over a path algebra''. Let $\mathbf y$ be a set of $D$ variables with $\{y_{t,n}\}=\mathbf y$, where $t\in \cT$ and $n\in\cN_2(t)$. Consider the ring $R=K\cQ[\mathbf y]$, consisting of finite sums of the form $\sum_{p\in\cB}f_p(\mathbf y)p$, where $f_p(\mathbf y)$ is a polynomial in the commutative polynomial ring $K[\mathbf y]$. The variables $y_{t,n}$ commute with elements of the path algebra $K\cQ$ (and with each other). Note that $K[\mathbf y]$ is the coordinate ring of the affine space $\cA$. Given an element $\sum_{p\in\cB}f_p(\mathbf y)p\in R$, we call the polynomial $f_p(\mathbf y)$ the `coefficient' of $p$. We are interested in a particular set of elements in $R$, namely \[\cH=\{h_t\in R\mid h_t=t-\sum_{n\in\cN_2(t)}y_{t,n}n\}.\] If $F=\sum_{p\in\cB}f_p(\mathbf y)p$ is an element of $R$, then we say $F'\in R$ is a \emph{simple reduction of $F$ by $\cH$}, written $F\to_{\cH}F'$, if there exist $p\in\cB$ and $t\in \cT$ such that \begin{enumerate} \item $p=qtr$ for some paths $q$ and $r$, \item $f_p(\mathbf y)\ne 0$, and \item $F'= F-f_p(\mathbf y)p + f_p(\mathbf y)\,q\Big(\sum_{n\in\cN_2(t)}y_{t,n}n\Big)r$. \end{enumerate} The effect of a simple reduction is the following. Suppose $f_p(\mathbf y)p$ occurs in $F$ with $p=qtr$ and it is the term we work with. Then the term $f_p(\mathbf y)p$ is replaced with the sum of terms $(f_p(\mathbf y)y_{t,n})qnr$, for $n\in\cN_2(t)$. Note that $p\succ qnr$ for each $n\in\cN_2(t)$. Thus, for each $n\in\cN_2(t)$, $f_p(\mathbf y)y_{t,n}$ is added to $f_{qnr}(\mathbf y)$ as the `coefficient' in front of $qnr$, and $f_p(\mathbf y)p$ is removed. All other terms in $F$ are unchanged. We say $F^*$ is a \emph{complete reduction of $F$ by $\cH$}, written $F\Rightarrow_{\cH}F^*$, if, for some $m$, there is a sequence $F_1=F,F_2,\dots, F_m=F^*$ such that for each $i=1,\dots, m-1$, $F_i\to_{\cH}F_{i+1}$ is a simple reduction, and $F^*$ has no simple reduction. Note that $F^*$ having no simple reduction is equivalent to saying that all the paths $p$ in $F^*$ having nonzero coefficient in $K[\mathbf y]$ are in $\cN$; that is, for all $t\in\cT$, $t$ is not a subpath of any $p$ occurring in $F^*$. Since $\succ$ is a well-order on $\cB$, every $F\in R$ has a complete reduction. {\bf We now prove Theorem \ref{thm-var}.} Recall that $\cH=\{h_t\in R\mid h_t=t-\sum_{n\in\cN_2(t)}y_{t,n}n\}$. For each pair $t$ and $t'$ of elements of $\cT$ such that $t=ab$ and $t'=bc$, where $a,b$, and $c$ are arrows in $\cQ$, form the \emph{overlap relation} \[ Ov(t,{t'})=h_t\cdot c-a\cdot h_{t'}.\] Note that $t=t'=a^2$ is allowed.
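The computation of overlap relations and complete reductions just described is entirely mechanical, and it may help to see it automated. The following sketch, written in Python using the \texttt{sympy} library, is an illustration only: it is restricted to quivers with a single vertex (so that paths are words in the arrows and every element is uniform), all identifiers (such as \texttt{gralg\_ideal}) are ours, and it is not part of the proof.
\begin{verbatim}
import itertools
import sympy as sp

def gralg_ideal(arrows, T):
    # Illustrative sketch for a one-vertex quiver: `arrows` lists the arrows in
    # decreasing order, `T` is a set of length-2 words (the tips).  Returns the
    # variables y_{t,n} and the polynomials obtained by completely reducing all
    # overlap relations by H = { h_t = t - sum_n y_{t,n} n }.
    order = {a: i for i, a in enumerate(arrows)}   # index 0 = largest arrow

    def key(w):                                    # length-left-lexicographic key
        return (len(w), tuple(-order[c] for c in w))

    length2 = [u + v for u in arrows for v in arrows]
    N2 = {t: [n for n in length2 if n not in T and key(n) < key(t)] for t in T}
    y = {(t, n): sp.Symbol('y_%s_%s' % (t, n)) for t in T for n in N2[t]}

    def reduce_completely(F):
        # Repeatedly rewrite a factor t in T via t -> sum_n y_{t,n} n.
        F = {w: c for w, c in F.items() if c != 0}
        while True:
            found = None
            for w in F:
                for i in range(len(w) - 1):
                    if w[i:i + 2] in T:
                        found = (w, w[:i], w[i:i + 2], w[i + 2:])
                        break
                if found:
                    break
            if found is None:
                return F
            w, q, t, r = found
            c = F.pop(w)
            for n in N2[t]:
                F[q + n + r] = sp.expand(F.get(q + n + r, 0) + c * y[(t, n)])
            F = {u: cu for u, cu in F.items() if cu != 0}

    polys = set()
    for t, t2 in itertools.product(T, repeat=2):
        if t[1] != t2[0]:
            continue
        # Ov(t,t') = h_t c - a h_{t'}; the leading terms t*c and a*t' cancel.
        Ov = {}
        for n in N2[t]:
            Ov[n + t2[1]] = Ov.get(n + t2[1], 0) - y[(t, n)]
        for n in N2[t2]:
            Ov[t[0] + n] = Ov.get(t[0] + n, 0) + y[(t2, n)]
        for cc in reduce_completely(Ov).values():
            if sp.expand(cc) != 0:
                polys.add(sp.expand(cc))
    return y, sorted(polys, key=str)

# With two loops ordered y > x and T = {'yx'} there are no overlap relations,
# so gralg_ideal(['y', 'x'], {'yx'}) returns an empty list of polynomials.
\end{verbatim}
We now return to the proof of Theorem \ref{thm-var}.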
Since each $h_t$ is a $K[\mathbf y]$-combination of paths of length 2, each overlap relation is a $K[\mathbf y]$-combination of paths of length 3. We note that if we have a $K[\mathbf y]$-combination of paths of length 3, then a simple reduction is again a $K[\mathbf y]$-combination of paths of length 3. It follows that a complete reduction of a $K[\mathbf y]$-combination of paths of length 3 is again a $K[\mathbf y]$-combination of paths of length 3. Let $\cN_3$ denote the set of paths in $\cN$ of length 3. For each $Ov(t,{t'})$, let \[ F^*_{t,t'}=\sum_{\hat n\in\cN_3}f^*_{t,t',\hat n}(\mathbf y)\hat n,\] with $f^*_{t,t',\hat n}(\mathbf y)\in K[\mathbf y]$, be a complete reduction of $Ov(t,{t'})$ by $\cH$. Thus, for each $t=ab$, $t'=bc$ and each $\hat n\in\cN_3$, we obtain polynomials in commutative polynomial ring $K[\mathbf y]$, namely, the coefficient $f^*_{t,t',\hat n}(\mathbf y)$ of $\hat n$ in $F^*_{t,t'}$. We claim that $\GrAlg(\cT)$ is the zero set of \[ \cI=\{f^*_{t,t',\hat n}(\mathbf y)\mid \hat n\in\cN_3, t,t'\in \cT\text{ with } t=ab\text{ and }t'=bc,\text{ for some arrows }a,b,c\}.\] If, in the definitions of overlap relation, simple reduction, and complete reduction, instead of variables $y_{t,n}$ we use elements $x_{t,n}$ in $K$, we would have the definitions of overlap relation, simple reduction, and complete reduction for elements of the path algebra $K\cQ$. The noncommutative version of Buchberger's Theorem \cite{Gr2,B}, applied to our setup, says that if $\cG=\{g_t\mid t\in\cT\}$ is a uniform set of quadratic elements in $K\cQ$ such that $\tip(g_t)=t$, then $\cG$ is a \grb basis for $\langle \cG\rangle$ if and only if all overlap relations completely reduce to $0$. Suppose that $X=(x_{t,n})\in\GrAlg(\cT)$. We show that $X$ is in the zero set of $\cI$. We note that $\cG(X)=\{g_t=t-\sum_{n\in\cN_2(t)}x_{t.n}n\mid t\in\cT \}$ is just $\cH$ evaluated at $X$. Since $X\in \GrAlg(\cT)$, $\cG(X)$ is the reduced \grb basis for $\langle\cG(X)\rangle$ and hence all overlap relations of $\cG(X)$ reduce to 0. Thus each $f^*_{t,t',\hat n}(X)=0$; in particular $X$ is in the zero set of $\cI$. Conversly, if $X$ is in the zero set of $\cI $, each $f^*_{t,t',\hat n}(X)=0$. Hence every overlap relation of $\cG(X)$ completely reduces to 0, and we conclude that $\cG(X)$ is a \grb basis of the ideal $\langle \cG(A)\rangle$. Since $\tip(\cG(X))=\cT$ we see that $X\in \GrAlg(\cT)$. This completes the proof. \qed \section{Properties of $\GrAlg(\cT)$}\label{sec-prop} We begin with a general definition. \begin{definition}\label{def-assoc}{\rm Given $\Lambda=K\cQ/I$, an arbitrary algebra , $K\cQ/\langle \tip(I)\rangle$ is called the \emph{associated monomial algebra} of $\Lambda$ and denoted $\Lambda_{Mon}$. We also define $I_{Mon}$ to be $\langle\tip(I)\rangle$.} \end{definition} Note that given an ideal $I$, $\tip(I)$ is dependent on the choice of the admissible order $\succ$ and that, in this paper, $\succ$ is fixed and has the property that, if $p,q\in\cB$ and the length of $p$ is greater than the length of $q$, then $p\succ q$. The next result provides an alternative definition of $\GrAlg(\cT)$. Recall that if $\mathbf x = (x_{t,n})\in \cA=K^D$, then $\cG(\mathbf x)=\{g_t=t-\sum_{n\in\cN_2(t)}x_{t,n}n\mid t\in\cT\}$. \begin{proposition}\label{prop-mono} Let $\cT$ be a set of paths of length 2 in a quiver $\cQ$. \sloppy Let $\mathbf 0=(0,0,\dots,0)\in \cA$. The following statements hold: \begin{enumerate} \item $\cG(\mathbf 0)=\cT$. 
\item The element $\mathbf 0\in\cA$ is in $\GrAlg(\cT)$ and corresponds to the strong Koszul algebra $K\cQ/\langle\cT\rangle$ (with respect to $\langle\cT\rangle$ and $\succ$). \item Let $\Lambda=K\cQ/I$ be a length graded algebra. Then $\Lambda$ is a strong Koszul algebra (with respect to $I$ and $\succ$) corresponding to a point in $\GrAlg(\cT)$ if and only if $I_{Mon}=\langle\cT\rangle$. \item If $I$ is an ideal in $K\cQ$ generated by length homogeneous elements, then $K\cQ/I$ corresponds to an element in the zero set of $\cI$ if and only if $(K\cQ/I)_{Mon} =K\cQ/(I_{Mon})=K\cQ/\langle\cT\rangle$. \item There is exactly one algebra with quadratic monomial \grb basis that corresponds to a point in $\GrAlg(\cT)$, namely, $K\cQ/\langle\cT\rangle$. \end{enumerate} \qed \end{proposition} The proof is straightforward and left to the reader. As a consequence, we have the following corollary. \begin{corollary}\label{cor-mono}Let $\Lambda=K\cQ/I$ be a $K$-algebra with length grading induced from the length grading of $K\cQ$ . The following statements are equivalent: \begin{enumerate} \item $\Lambda$ corresponds to an element of $\GrAlg(\cT)$. \item $\Lambda_{Mon}=K\cQ/\langle\cT\rangle$. \item $I_{Mon}=\langle\cT\rangle$. \end{enumerate} \end{corollary} The next result shows that two algebras in a variety have the same $K$-bases of paths under the spliting of the canonical surjection $\pi\colon K\cQ\to K\cQ/I$ given by the Fundamental Lemma. More precisely, let $\sigma\colon K\cQ/I\to K\cQ$ be defined by $\sigma(\pi(x))=n_x$ where $x=i_x+n_x$ with $i_x\in I$ and $n_x\in\Span_K(\nontip(I))$. The map $\sigma$ is well-defined by the Fundamental Lemma, and $\pi\sigma=1_{K\cQ/I}$. We identify $\Lambda = K\cQ/I$ with $\Span_K(\nontip(I))$. If $\Lambda\in\GrAlg(\cT)$, then $\nontip(I)=\cB\setminus \tip(I)$. Now $\langle\tip(I)\rangle=\langle \cT\rangle$ and hence $\nontip(I)=\{n\in\cB\mid n \text{ has no subpath in }\cT\}$ by Proposition \ref{prop-tip-nontip}. Thus, every algebra in $\GrAlg(\cT)$ has as $K$-basis $\{n\in\cB\mid n \text{ has no subpath in }\cT\}$. Of course, multiplication of elements of the basis differs for different algebras. The converse holds. More precisely, if $\Lambda=K\cQ/I$, where $I$ is generated by uniform quadratic elements and $\nontip(I)= \{n\in\cB\mid n \text{ has no subpath in }\cT\}$, then $\Lambda$ is in $\GrAlg(\cT)$. To see this, since $\cB=\tip(I)\oplus \nontip(I)$, it follows that $\tip(I)=\{p\in\cB\mid \text{ there is some }t\in\cT \text{ such that }t\text{ is a subpath of }p\}$. From this description it follows that $\langle \tip(I)\rangle=\langle \cT\rangle$, and hence $\Lambda\in\GrAlg(\cT)$. We summarize the above discussion in the next result, which provides another description of $\GrAlg(\cT)$. \begin{theorem}\label{thm-cartan} Let $\cT$ be a set of paths of length 2 in $\cQ$ and $\cN=\cB\setminus \tip(\langle\cT\rangle)$. We have that if $I$ is generated by uniform length homogeneous elements, then $\Lambda=K\cQ/I\in\GrAlg(\cT)$ if and only if $\cN=\nontip(I)$. Moreover, if $\Lambda=K\cQ/I\in\GrAlg(\cT)$ then, as a subspace of $K\cQ$, $\Lambda$ has $K$-basis $\cN$.\qed \end{theorem} The following consequence of the previous theorem describes the Cartan matrix of a strong Koszul algebra and shows that two algebras in the same variety have the same Cartan matrix. \begin{corollary}\label{cor-cartan} Let $\cT$ and $\cN$ be as in Theorem \ref{thm-cartan} and $\Lambda=K\cQ/I\in\GrAlg(\cT)$. Suppose that $|\cN|<\infty$ and $\{v_1,\dots, v_n\}=\cQ_0$. 
Then the Cartan matrix of $\Lambda$ is the $n\times n$ matrix $C$ where the $(i,j)$-th entry in $C$ is $|v_i\cN v_j|$, the number of paths from $i$ to $j$ in $\cN$. \end{corollary} \begin{proof} The Cartan matrix is the $n\times n$ matrix with $(i,j)$-th entries $\dim_K(v_i\Lambda v_j)$. But $\Span_K(\nontip(I))$ is isomorphic to $\Lambda$ and $\cN=\nontip(I)$ \end{proof} The next result shows that algebras in the same variety share some homological properties. Assume $I$ is an ideal in $K\cQ$ contained in $J^2$, where $J$ is the ideal in $K\cQ$ generated by the arrows of $\cQ$. If $v$ is a vertex in $\cQ$ and $\Lambda=K\cQ/I$, we let $S_v(\Lambda)$ be the one-dimensional simple $\Lambda$-module associated to the vertex $v$. Note that if $\Lambda=K\cQ/I\in\GrAlg(\cT)$, $I\subseteq J^2$ since $I$ has a \grb bases consisting of quadratic elements. If $\Lambda$ is a ring and $M$ is a $\Lambda$-module, we let $\pd_{\Lambda}(M)$ and $\id_{\Lambda}(M)$ denote the projective and injective dimensions of $M$ respectively. \begin{theorem}\label{thm-ext} Let $\cT$ be a set of paths of length 2 in $\cQ$ and $\Lambda\in\GrAlg(\cT)$. Suppose that $v$and $w$ are vertices in $\cQ$. Then for $n\ge 0$, \[ \dim_K(\Ext_{\Lambda}^n(S_v(\Lambda),S_w(\Lambda))= \dim_K(\Ext_{\Lambda^*}^n(S_v(\Lambda^*),S_w(\Lambda^*)), \] where $\Lambda^*=K\cQ/\langle\cT\rangle$. In particular, if $\cN=\cB\setminus \tip(\langle\cT\rangle)$, then \[\pd_{\Lambda}(S_v(\Lambda))=\pd_{\Lambda^*}(S_v(\Lambda^*)) \text{ and }\id_{\Lambda}(S_v(\Lambda))=\id_{\Lambda^*}(S_v(\Lambda^*)).\] . \end{theorem} \begin{proof} Although the proof of this result is implicit in \cite{AG}, we sketch a proof employing the ideas in \cite{GS}. In \cite{GS}, a projective $\Lambda$-resolution is constructed inductively from subsets $F^n=\{f^n_i\}_{i=1}^{k_i}$ for $n\ge 0$, where $f^n_i\in K\cQ$ and $k_i \text{ is finite}$ if the reduced \grb basis is finite, which it is in our case. For $S_v(\Lambda)$, we have $F^0=\{v\}$, $F^1$ is the set of arrows starting at $v$, and $F^2$ are the elements of a reduced \grb basis that start at $v$. The $F^n$'s are constructed using the $F^i$'s, $i<n$ and the reduced \grb basis. We describe $\tip(F^n)$, for $n\ge 2$ which can be deduced from the construction of $F^n$ from $F^{n-1}$. The construction shows that $\tip(F^n)=T^n$, where \[T^n=\{a_1a_2\cdots a_n\mid a_i\in\cQ_1, va_1=a_1, \text{ and }a_ia_{i+1}\in\cT,\text{ for }1\le i\le n-1\}.\] Note that $T^n$ depends only on $\cT$, and that $|T^n|=|F^n|$. Since the $f^n_i$ are length homogeneous, we see that if $f^n_i\in F^n$ and $\tip(f^n_i)=a_1\cdots a_n\in T^n$, then $f^n_i$ is length homogeneous of length $n$. Since each $f^n_i$ is uniform, if $w$ is the end vertex of $a_n$, then $vf^{n}_iw=f^{n}_i$. From the construction of the $\{f^n_i\}$, each $f^n_i$ is a sum of elements of the form $f^{n-1}_jr_{i,j}$ with $r_{i,j}\in K\cQ$. By length, the $r_{i,j}$ are linear combinations of arrows. Since the $r_{i,j}$ modulo $I$ are entries in the matrix mapping the $n^{th}$ projective to the $n-1^{st}$ in the constructed projective resolution of $S_v(\Lambda)$, we conclude that the resolution constructed in \cite{GS} is minimal in our case. Finally, we see that if $F^n$ is the set for resolving $S_v(\Lambda)$, and ${F^*}^n$ is the set for resolving $S_v(\Lambda^*)$, then they both have tip set $T^n$. This finishes the proof since the dimension of $\Ext_{\Lambda}^n(S_v(\Lambda),S_w(\Lambda))$ equals the number of $f^n_i$s such that $vf^n_iw=f^n_i$. 
\end{proof} We have the following consequence. \begin{corollary}\label{cor-gldim}If $\cN=\cB\setminus \tip(\langle \cT\rangle)$ is a finite set and $\Lambda,\Lambda'\in\GrAlg(\cT)$, then \[\gldim(\Lambda)=\gldim(\Lambda').\] \end{corollary} \begin{proof} Let $\Lambda,\Lambda'\in\GrAlg(\cT)$. Since $|\cN|=\dim_K(\Lambda)= \dim_K(\Lambda')$, $\gldim(\Lambda)=\max_{v\in\cQ_0}\{\pd_{\Lambda}(S_v(\Lambda))\}$ and $\gldim(\Lambda')=\max_{v\in\cQ_0}\{\pd_{\Lambda'}(S_v(\Lambda'))\}$. The result follows from Theorem \ref{thm-ext}. \end{proof} If $\Lambda^*=K\cQ/\langle \cT\rangle$ is a finite dimensional strong Koszul algebra (with respect to $\langle\cT\rangle$ and $\succ$) and of finite global dimension, then the determinant of the Cartan matrix for every algebra in $\GrAlg(\cT)$ is $1$, since every algebra in $\GrAlg(\cT)$ is length graded and of finite global dimension and, hence, we may apply \cite{W}. The final property is one that is proved in a more general setting in \cite{GHS}. \begin{theorem}\label{thm-qh}\cite{GHS} Let $\cT$ be a set of paths of length 2 in $\cQ$ and $\cN=\cB\setminus \tip(\langle \cT\rangle)$. Assume that $\cN$ is a finite set. If $K\cQ/\langle\cT\rangle$ is a quasi-hereditary algebra, then every algebra in $\GrAlg(\cT)$ is quasi-hereditary. \end{theorem} \section{Examples}\label{sec-ex} We begin by defining a particular length admissible order. Unlike the commutative case, the (left) lexicographic order is not a well-order on $\cB$; in general it admits infinite descending chains, so it cannot serve as an admissible order. We describe the \emph{length-left-lexicographic order}, which is length admissible. Arbitrarily linearly order the vertices and arrows of $\cQ$ and require that every vertex is less than every arrow. If $p=a_1a_2\cdots a_n$ and $q=b_1b_2\cdots b_m$ are paths with the $a_i$ and $b_j$ arrows, then $p\succ q$ if $n>m$, or $n=m$ and there is $k$, $1\le k\le n$, such that $a_i=b_i$ for $1\le i\le k-1$ and $a_k\succ b_k$. In all the examples in this section we will use the length-left-lexicographic order, and it will suffice, in our setting, to simply give the ordering of the arrows. Recall that if $\cT$ is a set of paths of length 2 in a quiver $\cQ$ and $\cG=\{g_t\mid t\in\cT\}$ where, for $t\in\cT$, $g_t=t-\sum_{n\in\cN_2(t)}x_{t,n}n$ with $x_{t,n}\in K$, then $\cG$ is the reduced \grb basis for the ideal $\langle\cG\rangle$ if and only if all overlap relations completely reduce to 0 by $\cG$. There are two extreme cases given $\cT$ and $\cG$ as above. The first occurs if there are no overlap relations. In this case, the ideal of the variety $\GrAlg(\cT)$ is $(0)$, hence $\GrAlg(\cT)$ is all of affine space and there are no restrictions on the choice of the $x_{t,n}$. The second extreme case occurs if $\cN_2(t)=\emptyset$ for all $t\in\cT$. For example, this occurs if, for each $t\in\cT$, every path $t'$ of length 2 with $t\succ t'$ lies in $\cT$. In this case, the affine space $\cA$ has dimension 0 and $\GrAlg(\cT)=\cA$ is a point, namely the monomial algebra $K\cQ/\langle\cT\rangle$. We now turn to specific examples. The next example is designed to help understand the proof of Theorem \ref{thm-var}.
\begin{Example}\label{ex-gd}{\rm Let $Q$ be the quiver \xymatrix{ &&\circ \ar[ld]_a\ar[d]^b\ar[rd]^c\\ &\circ\ar[ldd]_e \ar[ddrrr]_f &\circ \ar[ddll]_g\ar[drrd]^h &\circ\ar[ddlll]^i\ar[ddr]^j\\\\ \circ\ar[drr]^k&&&&\circ\ar[dll]_l\\ &&\circ } and let $\succ$ be\linebreak defined by $a\succ b\succ \cdots \succ l$. Consider $\cT=\{ af,ae,bg,bh, ek,gk,ik\}$. Then, it follows that $\cN= \cQ_0\cup \{a,b,c,\dots,k,l,ci, cj, fl,hl,jl,cjl\}$. Thus we have $\cN_2(af)=\{cj\}, \cN_2(ae)=\{ci\},\cN_2(bg)=\{ci\}, \cN_2(bh)=\{cj\},\cN_2(ek) =\{fl \}, \cN_2(gk) = \{hl\},$ $ \cN_2(ik) =\{jl\}$. Thus $\GrAlg(\cT)$ is a variety in $\cA=K^{7}$. We simplify notation by renaming the variables $y_{t,n}$, where $t\in\cT$ and $n\in \cN_2(t)$, as follows: $ y_{af,cj}=X_1,\ y_{ae,ci}=X_2,\ y_{bg,ci}=X_3,\ y_{bh,cj}=X_4,\ y_{ek,fl}=X_5,\ y_{gk,hl}=X_6,\ y_{ik,jl}=X_7$. We let \[\cG=\{af-X_1cj,\ ae-X_2ci,\ bg-X_3ci, \dots, ik-X_7jl\}.\] There are 2 overlap relations $Ov(ae,ek)= -X_2cik+X_5afl$ and $Ov(bg,gk)= -X_3cik +X_6bhl$. We completely reduce the first overlap relation and leave the computation of the second to the reader. We see that $-X_2cik$ simply reduces to $-X_7X_2cjl$ by $ik-X_7jl$. Since $cjl\in\cN$ it has no simple reductions. Next consider the second term $X_5afl$. Using $af-X_1cj$, $X_5afl$ simply reduces to $X_1X_5cjl$ and, as we noted, $cjl\in\cN$. Thus $Ov(ae,ek)$ completely reduces to $-X_2X_7cjl + X_1X_5cjl$. Similarly, $Ov(bg,gk)\Rightarrow_{\cG} -X_3X_7cjl + X_4X_6cjl$. Thus, the ideal of the variety is $\cI=\langle X_1X_5-X_2X_7,\ X_4X_6-X_3X_7\rangle$ in the commutative polynomial ring $K[X_1,X_2,\dots, X_7]$. Note that using \cite{GHZ} we see that $\gldim(K\cQ/\langle \cT\rangle) = 3$. Thus by Corollary \ref{cor-gldim} and Theorem \ref{thm-cartan}, every algebra in $\GrAlg(\cT)$ is a strong Koszul algebra of dimension 24 with $K$-basis $\cN$, and has global dimension 3. \qed }\end{Example} We denote the free associative algebra in $n$ variables by $K\{x_1,\dots,x_n\}$. Our next example is very small and simple. In this example, $\GrAlg(\cT)$ is affine 2-space and there is a punctured line in $\GrAlg(\cT)$ consisting of the quantum affine planes. Moreover, there is another line in $\GrAlg(\cT)$ on which all the algebras are isomorphic to the monomial algebra $K\{x, y\}/\langle \cT\rangle$. This provides an example of distinct points in $\GrAlg(\cT)$ corresponding to isomorphic algebras. \begin{Example}\label{ex-poly2}{\rm Let $R =K\{x,y\}/\langle xy-yx\rangle$ and $y\succ x$. In this example, $\cQ$ has one vertex and two loops. Then $\cG=\{yx-xy\}$ and $\cT=\{yx\}$. The paths of length 2 are ordered $y^2\succ yx\succ xy\succ x^2$. We see that $\cN=\{x^iy^j\mid i,j\ge 0\}$ and $\cN_2(yx)=\{xy,x^2\}$. Thus, $\GrAlg(\{yx\})$ lives in $\cA=K^2$. There are no overlap relations and hence $\GrAlg(\{yx\})=\cA$. Thus, every algebra of the form $\Lambda_{(\lambda,\gamma)}= K\{x,y\}/\langle yx-\lambda xy-\gamma x^2\rangle$ is in $\GrAlg(\{yx\})$, where $(\lambda,\gamma)\in K^2$. Note that in the notation of Section \ref{sec-var}, $\lambda=y_{yx,xy}$ and $\gamma =y_{yx,x^2}$. If $\gamma=0$ and $\lambda\ne 0,1$, then $\Lambda_{(\lambda,0)}$ is a quantum affine plane. They all lie on the punctured line $L=\{(\lambda,0)\mid \lambda\in K\setminus \{0,1\}\}$ in $\cA$. We also have $\Lambda_{(1,0)}=R$, the commutative polynomial ring in 2 variables. Of course, $\Lambda_{(0,0)}$ is the (noncommutative) monomial algebra $K\{x,y\}/\langle yx\rangle$.
On the other hand, the line determined by $\lambda=0$ consists of the algebras $K\{x,y\}/\langle yx-\gamma x^2$. It is not hard to show these algebras are all isomorphic to each other. In particular, they are all isomorphic to the monomial algebra $K\{x,y\}/\langle yx\rangle$. If both $\lambda$ and $\gamma$ are not 0, then other strong Koszul algebras occur. Finally, for every $\Lambda\in\GrAlg(\{yx\})$ the minimal projective $\Lambda$-resolution of $K$ looks like \[0\to \Lambda\to \Lambda^2\to \Lambda \to K\to 0.\] \qed }\end{Example} In the next small example, the algebras occuring are finite dimensional. \begin{Example}{\rm Again take $\cQ$ having one vertex and two loops, $x$ and $y$. Again we order $ y\succ x$. Let $\cT=\{x^2,y^2, yx\}$. Then $\cN=\{1,x,y, xy\}$ Hence $\cN_2(x^2)=\emptyset,\ \cN_2(y^2)=\{xy\}=\cN_2(yx)$. Thus $\cA$ is two space. Now let $U$ and $V$ be variables. We wish to find the ideal $\cI$ of the variety $\GrAlg(\cT)$. We have $\cG=\{x^2,y^2-Uxy,yx-Vxy\}$. Note that in the notation of Section \ref{sec-var}, $U=y_{y^2,xy}$ and $V= y_{yx,xy}$. To find the polynomials in $\cI$, we need to completely reduce all overlap relations. There are 3 overlap relations \[Ov(x^ 2,x^2)=0, Ov(y^2,y^2)=Vxy^2-Vyxy, \text{ and } Ov(y^2,yx)=-Vxyx+Uyxy.\] But there are no length 3 paths in $\cN$, hence the three overlap relations must completly reduce to 0 by $\cG$. It follows that $\cI=\{0\}$. Thus, $\cA=\GrAlg(\cT)$. Note that for $V=1$ we have commutative strong Koszul algebras, including $K\{x,y\}/\langle x^2,y^2, yx-xy\rangle$. \qed }\end{Example} These examples are deceptively easy. In general, $\GrAlg(\cT)$ is a proper nontrivial variety. The next example gives some indication of the complexity of the varieties. \begin{Example}\label{ex-poly3} {\rm Let $\cQ$ be the quiver with one vertex and 3 loops, $x,y$, and $z$. Order them by $z\succ y\succ x$. Let $\cT=\{ zy,zx,yx\}$. Then $\cN=\{x^iy^jz^k\mid i,j,k\ge 0\}$. We note that the commutative polynomial ring $K\{x,y,z\}/\langle zy-yz, zx-xz, yx-xy\rangle$ is a strong Koszul algebra since $Ov(zy,yx)=-yzx+zyx$ completely reduces to 0 by $\cG=\{ zy-yz, zx-xz, yx-xy\}$. By the results in Section \ref{sec-prop}, the strong Koszul algebras having basis $\cN$ (the same as the commutative polynomial ring in 3 variables ) are the points in $\GrAlg(\cT)$. The projective resolution of $K$ over these algebras all have the same shape as the resolution of $K$ over the commutative polynomial ring by Theorem \ref{thm-ext}. Thus $\GrAlg(\cT)$ has some connection with Artin-Schelter regular algebras of dimension 3. Now $\cN_2=\{z^2,yz,y^2,xz,xy, x^2\}$ and the length-lexicographic order yields \[z^2\succ zy\succ zx\succ yz\succ y^2 \succ yx\succ xz\succ xy\succ x^2\] Thus, $\cN_2(zy)=\{yz,y^2,xz,xy,x^2\}=\cN_2(zx)$ and $\cN_2(yx)=\{xz,xy,x^2\}$. Hence $\cA=K^{13}$ and $\GrAlg(\cT)$ is a variety in $\cA$. We let $A,B,\dots,H,L,M,N,P,Q$ denote 13 variables. We set \begin{enumerate} \item[] $g_{zy}=zy-Ayz-By^2-Cxz-Dxy-Ex^2$ \item[] $g_{zx}=zx-Fyz-Gy^2-Hxz-Kxy-Mx^2$ \item[] $g_{yx}=yx-Nxz-Pxy-Qx^2$. \end{enumerate} Let $\cG=\{g_{zy},g_{zx},g_{yx}\}$. There is only one overlap relation we need to study, namely, \[ Ov(zy,yx)=-Ayzx-By^2x-Cxzx-Dxyx-Ex^2+Nzxz+Pzxy+Qzx^2.\] We completely reduce this overlap relation and collect the polynomials that are the coefficients of elements of $\cN_3$. By brute force computation, the ideal of $\GrAlg(\cT)$ contains 8 polynomials, 2 of total degree 3 and 6 of total degree 4. 
Below is the polynomial that is the coefficient of $xyz$: \[\begin{array}{l} -AP-A^2LN -AFMN-BP-BPNA-BQNF-CF+NFP+N^2DA+N^EF\\ +NHA+P^2F+PDNA+PENF +PHA+QFP+BLNA+QMNF\\ +MGNP+MPNA+MQNF+MHF \end{array}\] The dimension of $\GrAlg(\cT)$ is unclear, as is its irreducibility. \qed } \end{Example} \begin{remark}{\rm The connection with Artin-Shelter regular algebras holds in all dimensions. More precisely, consider the commutative polynomail in $n$ variables $R=K\{x_1, x_2,\dots, x_n\}/ \langle x_jx_i-x_ix_j\mid 1\le i<j\le n\rangle $. Taking $x_n\succ x_{n-1}\succ\cdots\succ x_1$, it is easy to check that $R$ is a strong Koszul algebra. Thus, if $\cT=\{x_jx_i\mid 1\le i<j\le n\}$ then $R\in\GrAlg(\cT)$ and, in this case , $\cN=\{x_1^{i_1}x_2^{i_2}\cdots x_n^{i_n} \mid i_j\ge 0, !\le i\le n\}$. Thus $\GrAlg(\cT)$ consists of all the strong Koszul algebras with graded $K$-bases $\cN$. Again, for each algebra in $\GrAlg(\cT)$, the simple module $K$ has a projective resolution the same shape as the projective resolution of $K$ over $ R$ by Theorem \ref{thm-ext}. }\end{remark} \section{Subvarieties}\label{sec-sub} Fix a quiver $\cQ$, a length admissible order $\succ$ on the set of paths $\cB$, and a set $\cT$ of paths of length 2. As usual, let $\cN=\cB\setminus \tip(\langle\cT \rangle)$ and $D=\sum_{t\in \cT}\mid\cN_2(t)\mid$. As we have seen, $\GrAlg(\cT)$ is a variety in $\cA=K^D$ and the points of $\GrAlg(\cT)$ correspond to the strong Koszul algebras with a fixed associated monomial algebra $\K\cQ/\langle \cT\rangle$. In this section we introduce subvarieties of $\GrAlg(\cT)$ that have a distinguished subalgebra and which are intersections of $\GrAlg(\cT)$ with specified affine subspaces. Let $\mathfrak r$ be a subset of $\{(t,n)\mid t\in \cT \text{ and }n\in \cN_2(t)\}$ and $\psi\colon \mathfrak r\to K$. We define $\GrAlg_{\psi}(\cT)$ to be the set of the strong Koszul algebras $\Lambda=K\cQ/I$ (with respect to $I$ and $\succ$) such that the reduced \grb basis of $I$, $\cG=\{g_t\mid t\in\cT\}$, where $g_t =t-\sum_{n\in\cN_2(t)}c_{t,n} n$ and satisfies the restriction that, for each $(t,n)\in\mathfrak r, c_{t,n}=\psi((t,n))$. We break $\mathfrak r$ into two disjoint sets; namely, Let $\mathfrak r^0=\{(t,n)\in\mathfrak r\mid \psi((t,n))= 0\}$ and $\mathfrak r^+= \mathfrak r\setminus \mathfrak r^0$. The \emph{distinguished algebra} in $\GrAlg_{\psi}(\cT)$ is $\Lambda^*=K\cQ/I^*$ where $I^*$ is generated by $\{g^*_t\mid t\in\cT \}$ where $g^*_t =t-\sum_{(t,n)\in\mathfrak r^+} \psi((t,n))n$. Note that if $\mathfrak r^+=\emptyset$, then $g^*_t=t$ and $\Lambda^*=K\cQ/ \langle\cT\rangle$. In general, $\Lambda^*$ may or may not be in $\GrAlg_{\psi}(\cT)$, depending on whether or not $\Lambda^*$ is a strong Koszul algebra. Summarizing, $\GrAlg_{\psi}(\cT)$ are the strong Koszul algebras in $\GrAlg(\cT)$ having reduced \grb bases $\{t-\sum_{n\in\cN_2(\cT)}c_{t,n}n\}$ with $c_{t,n}=\psi((t,n))$ for all $(t,n)\in\mathfrak r\}$. Let $\mathfrak A_{\psi}$ be the affine subspace of $\cA$ defined by \[\mathfrak A_{\psi} =\{(x_{t,n})\in \cA\mid x_{t,n}=\psi((t,n))\text{ if } (t,n)\in\mathfrak r\}\]. We have the following result, whose proof is left to the reader.. \begin{proposition}\label{prop-inter}Let $\cT$ be a set of paths of length 2 in a quiver $\cQ$. If $\mathfrak r\subseteq \{(t,n)\mid t\in\cT, n\in\cN_2(t)\}$ and $\psi\colon \mathfrak r\to K$, then \[ GrAlg_{\psi}(\cT)=\GrAlg(\cT)\cap \mathfrak A_{\psi}.\] Moreover, $\dim_K(\mathfrak A_{\psi})=D-|\mathfrak r|$. 
\qed \end{proposition} By the above Proposition, $\GrAlg_{\psi}(\cT)$ could be viewed as living in $K^{D- |\mathfrak r|}$. In this setting the distinguished algebra is the algebra associated to the point $\mathbf 0\in K^{D- |\mathfrak r|}$. The next result provides another proof that $\GrAlg_{\psi}(\cT)$ is an affine variety in affine $(D-|\mathfrak r|)$-space by explicitly describing the ideal of the variety, viewed as a variety in $K^{|D|-|\mathfrak r|}$. \begin{theorem}\label{thm-sub} Keeping the notation above, $\GrAlg_{\psi}(\cT)$ is a subvariety of $\GrAlg(\cT)$. The subvariety $\GrAlg_{\psi}(\cT)$ lives in affine space $K^{D'}$, where $D'=(\sum_{t\in\cT} |\cN_2(t)|) -|\mathfrak r|$. \end{theorem} \begin{proof} The proof follows the proof of Theorem \ref{thm-var} after replacing the variables $y_{t,n}$ with the constants $\psi((t,n))$ for $(t,n)\in \mathfrak r$. Thus the polynomials in the ideal of the variety are produced by completely reducing the appropriate overlap relations. (See Example \ref{ex-poly3-1} below.) The dimension of the underlying affine space is clear. \end{proof} \begin{Example}\label{ex-poly3-1}{\rm Suppose we are interested in strong Koszul algebras in the same variety as the commutative polynomial ring $R=K\{x,y,z\}/\langle zy-yz,zx-xz,yx-xy\rangle$ with $z\succ y\succ x$. We saw in Example \ref{ex-poly3} that $\GrAlg(\cT)$ where $\cT=\{zy,zx,yx\}$ is extremely complicated. We consider the following subvariety. Let \sloppy $\mathfrak r=\{(zy,yz),(zy,y^2),(zy,xz),(zy,x^2),(zx,yz),(zx,y^2),(zx,xz),(zx,x^2),(yx,xz),$ \linebreak $(yx,xy)\}$ and set $\psi((zy,yz))=\psi((zx,xz))=\psi((yx,xy))=1$ with all other values of $\psi$ being 0. Note that the distinguished algebra in $\GrAlg_{\psi}(\cT)$ is $R$, the commutative polynomial ring in 3 variables.. There three $(t,n)$s not in $\mathfrak r$: $(zy,xy), (zx,xy), (yx,x^2)$. Let $A=c_{zy,xy}, B=c_{zx,xy}$ and $C=c_{yx,x^2}$. We have \[\cG=\{ g_{zy}=zy-yz -Axy, g_{zx}=zx-xz+Bxy, g_{yx}=yx-xy+Cx^2\}.\] It follows that $\GrAlg_{\psi}(\cT)$ is the variety whose points $(A,B,C)$ satisfy the property that $\cG$ is the reduced \grb for the ideal it generates in $K\{x,y,z\}$. To find ideal of the variety, we must completely reduce every overlap relation . As we noted in Example \ref{ex-poly3}, there is only one overlap relation, namely, \[ Ov(zy,yx)=-yzx-Axyx +zxy +Czx^2.\] After completely reducing the overlap relation, the polynomials that are coefficients of $\cN_3$ generate $\cI$. The reader can verify that we obtain two polynomials: $BC^2-AC$ and $BC$. Thus, $\cI=\langle BC^2-AC, BC\rangle =\langle AC, BC\rangle$. The variety $\GrAlg_{\mathfrak r}(\cT)$ in $K^3$ has two irreducible components the plane $\{(A,B,0)\}$ and the line $\{(0,0,C)\}$. the commutative polynomial ring lies in the plane $\{(A,B,0)\}$. }\end{Example} \section{Further results on strong Koszul algebras}\label{sec-results} We begin by looking at $\Lambda^{op}$, the opposite algebra of $\Lambda$. The opposite algebra of $\Lambda$ is $\{\lambda^{op}\mid \lambda\in \Lambda\}$ with addition and multiplication given by $\lambda^{op}+(\lambda')^{op}=(\lambda+\lambda')^{op}$ and $\lambda^{op}\cdot (\lambda')^{op}=(\lambda'\cdot\lambda)^{op}$. It is well-known that $\Lambda$ is a Koszul algebra if and only if $\Lambda^{op}$ is a Koszul algebra. If $\succ$ is a length admissible order, then let $\succ^{op}$ be the order $p^{op}\succ^{op}q^{op}$ if and only if $p\succ q$. 
In particular, if $\succ$ is the length-(left)-lexicographic order defined in Section \ref{sec-ex}, then $\succ^{op}$ is the length -(right)-lexicographic order. Given a quiver $\cQ$, define the opposite quiver, $\cQ^{op}$ in the obvious way. If $I$ is an ideal in $K\cQ$, then let $I^{op}=\{x^{op}\mid x\in I\}$. We have the following result, whose proof is left to the reader. \begin{proposition}\label{prop-op} The algebra $\Lambda=K\cQ/I$ is a strong Koszul algebra (with respect to $I$ and $\succ$) if and only if $\Lambda^{op}=K\cQ^{op}/I^{op}$ is a strong Koszul algebra (with respect to $I^{op}$ and $\succ^{op}$). \qed \end{proposition} The next result deals with tensoring two strong Koszul algebras. In fact, we prove a result about the reduced \grb basis of the tensor of two algebras in general; see Theorem \ref{thm-tensor}(1) below. Let $\Lambda=K\cQ/I$ and $\Lambda'=K\cQ'/I'$. Define $Q^*$ to be the quiver with vertex set $\cQ_0\times \cQ_0'$ and arrow set $ (\cQ_1\times\cQ_0')\cup (\cQ_0\times \cQ'_1) $, where $(a,w')\colon (u,w')\to (v,w')$ if $a\colon u\to v$ and $(v,b')\colon (v,w')\to (v,x')$ if $b'\colon w'\to x'$. If $p=a_1a_2\cdots a_r$ and $w'\in\cQ'_0$, then let $(p,w')$ denote the path $(a_1,w')(a_2,w')\cdots (a_r,w')$. If $q'$ is a path in $\cQ'$ and $v\in\cQ_0$, then $(v,q')$ has a similar meaning. If $r=\sum_{p\in\cB}\alpha_pp\in K\cQ$ and $w'\in\cQ'_0$, then let $(r,w)=\sum_{p\in\cB} \alpha_p(p,w')$. Similarly, $(v,\sum_{q'}\beta_{q'}q')= \sum_{q'}\beta_{q'}(v,q')$. Define $\varphi\colon K\cQ^*\to \Lambda\otimes_K\Lambda'$ as follows. If $(v,w')\in\cQ^*_0$, $\varphi(v,w')=v\otimes w'$, if $(a,w') \in\cQ_1\times \cQ'_0$, $\varphi(a,w')=a\otimes w'$, and if $(v,b') \in\cQ_0\times \cQ'_1$, $\varphi(v,b')=\otimes b'$. This ring homomorphism is clearly surjective. Let $I^*$ denote the kernel of this morphism. Finally, let $\succ$ and $\succ'$ be length admissible orders on $\cB$, the set of paths in $\cQ$, and on $\cB'$, the set of paths in $\cQ'$, respectively. Let $\succ^*$ be the length admissible order on $\cB^*$, the set of paths in $\cQ^*$, be defined as follows: On vertices $(u,w')\succ^* (v,x')$ if $w'\succ' x'$ or $w'=x'$ and $u\succ v$. Let $p^*$ be the path $(v_1,q'_1)(p_1, w_1')(v_2,q'_2)\cdots (p_n,w_n')$ and $\hat p^*$ be the path $(\hat v_1,\hat q'_1)(\hat p_1,\hat w_1')(\hat v_2,\hat q'_2)\cdots (\hat p_m,\hat w_m')$ with $p_i,\hat p_\in\cB, q'_j,\hat q'_j\in \cB'$. Remove all the $\cQ^*$ vertices from $p^*$ and $\hat p^*$. Then $p^*\succ^* \hat p^*$ if $\ell(p^*)>\ell(\hat p^*)$ or $\ell(p^*)=\ell(\hat p^*)$ and the first arrows from the left in $p^*$ and $\hat p^*$ where the arrows differ, we have one of the following 4 cases: \begin{enumerate} \item the arrow in $p$ is $(v,b')$ and the arrow in $\hat p$ is $(u,b'')$ with $b',b''\in\cQ_1'$ and $b'\succ' b''$. \item the arrow in $p$ is $(v,b')$ and the arrow in $\hat p$ is $(a,w')$ with $b'\in \cQ'_1$ and $a\in \cQ_1$. \item the arrow in $p$ is $(a,w')$ and the arrow in $\hat p$ is $(a',w'')$ with $a,a'\in\cQ_1$ and $a\succ a'$. \item the arrow in $p$ is $(a,w')$ and the arrow in $\hat p$ is $(a,w'')$ with $a\in\cQ_1$ and $w'\succ' w''$. \end{enumerate} The reader may check that $\succ^*$ is a length admissible order. Before getting to the result on tensors, we let \[C=\{(u,b')(a,x')-(a,w')(v,b')\mid a\colon u\to v\text{ is an arrow in }\cQ_1, \]\[ b'\colon w'\to x'\text{ is an arrow in }\cQ'_1\}.\] We see that $C\subseteq I^*$. Let $\cC$ be ideal in $K\cC$ generated by $C$. 
The elements of $C$ are called \emph{commutativity relations}. Note that they are quadratic elements in $K\cQ^*$. The proof of the following result is left to the reader. \begin{lemma}\label{lem-comm} Let $p^*$ be a path in $\cB^*$ of length at least 1. Then there exist paths $p\in \cB$, $q'\in\cB'$ such that either $\ell(p)\ge 1$ or $\ell(q')\ge 1$ and $p^*-(p,w')(v,q')\in \cC$, where $v\in\cQ_0$ is the end vertex of $p$ and $w'\in\cQ'_0$ is the start vertex of $q'$. \qed \end{lemma} We use the following convention: If $z$ is a uniform element in $K\cQ$ with $uzv=z$, $u,v\in\cQ_0$, and $y'$ is a uniform element in $K\cQ'$ with $w'y'x'=y'$, then we define $(z,y')\in K\cQ^*$ to be $(z,w')(v,y')$. Note that $(z,w')(v,y')-(u,y')(z,x')$ is an element of $\cC$. We also have the following lemma. \begin{lemma}\label{lem-basis} Suppose that $\Lambda=K\cQ/I$, $\Lambda'=K\cQ'/I'$ are $K$-algebras, and $K\cQ^*$ and $I^*$ are defined above. Let $\succ$ and $\succ'$ be length admissible orders for $\cB$ and $\cB'$, respectively. Set $\cN=\cB\setminus\tip(I)$ and $\cN'=\cB'\setminus\tip(I')$. Then the set $\{n\otimes_Kn'\mid n\in\cN, n'\in\cN'\}$ is a $K$-basis for $\Lambda\otimes_K\Lambda'$. \end{lemma} \begin{proof} By the Fundamental Lemma, we have that $\cN$ is a $K$-basis of $\Lambda$ and $\cN'$ is a $K$-basis of $\Lambda'$. Since we are tensoring over $K$, the result follows. \end{proof} The first part of the following result is quite general. The proof of the result is somewhat technical but straightforward, and we leave routine checking to the reader. \begin{theorem}\label{thm-tensor} Suppose that $\Lambda=K\cQ/I$, $\Lambda'=K\cQ'/I'$ are $K$-algebras, and $K\cQ^*$ and $I^*$ are defined above. Let $\succ$ and $\succ'$ be length admissible orders for $\cB$ and $\cB'$, respectively, and let $\succ^*$ be the length admissible order defined above. Let $\cG$ and $\cG'$ be the reduced \grb bases for $I$ and $I'$, respectively. Then \begin{enumerate} \item \[\cG^* =\{(g,w')\mid g\in\cG, w'\in\cQ'_0\}\cup \{(v,g')\mid g'\in\cG', v\in\cQ_0\}\cup C\] is the reduced \grb basis for $I^*$ with respect to $\succ^*$. If $\cG$ and $\cG'$ are composed of length homogeneous elements, then so is $\cG^*$, and in this case $\cG^*$ is a graded \grb basis for the induced length graded algebra $K\cQ^*/I^*=\Lambda\otimes_K\Lambda'$. \item If $\Lambda=K\cQ/I$ and $\Lambda'=K\cQ'/I'$ are strong Koszul algebras (with respect to $I$ and $\succ$) and (with respect to $I'$ and $\succ'$), respectively, then $\Lambda\otimes_K\Lambda'=K\cQ^*/I^*$ is a strong Koszul algebra (with respect to $I^*$ and $\succ^*$). \end{enumerate} \end{theorem} \begin{proof} First we show that $\cG^*$ generates $I^*$. Clearly $\cG^*\subseteq I^*$. Suppose $X=\sum_i\alpha_{p_i^*}p_i^*\in I^*$, where $\alpha_{p_i^*}\in K$ and $p_i^*\in\cB^*$. Then, by Lemma \ref{lem-comm} and since $\cC\subset I^*$, by repeatedly applying the commutativity relations we may assume that $X=\sum_i\alpha_i(p_i,w'_i)(v_i,q'_i)$ where $\alpha_i\in K$, $p_i\in\cB, v_i\in \cQ_0,q'_i\in\cB'$, and $ w'_i\in \cQ'_0$. Next we apply the Fundamental Lemma and write, for each $i$, $p_i= \iota_i+N_i$ and $q'_i=\iota'_i+N'_i$ where $\iota_i\in I$, $N_i\in \Span_K(\cN)$ and $\iota'_i\in I'$, $N'_i\in \Span_K(\cN')$. Thus, \[X=\sum_i\alpha_i(\iota_i+N_i,w'_i)(v_i,\iota'_i+N'_i)= \sum_i\alpha_i[(\iota_i,\iota'_i)+(\iota_i,N'_i)+(N_i,\iota'_i)+(N_i,N'_i)].\] We are assuming that $\varphi(X)=0$.
Since $\iota_i\in I$ and $\iota'_i\in I'$, we see that \[0=\varphi(X)=\varphi(\sum_i\alpha_i(N_i,N'_i))=\sum_i\alpha_iN_i\otimes N'_i.\] Hence $0=\sum_i\alpha_iN_i\otimes N'_i$. Let $N_i=\sum_j\beta_{i,j}n_j$ and $N'_i=\sum_k\beta'_{i,k}n'_k$ with $n_j\in\cN$ and $n'_{k}\in\cN'$. Then, by Lemma \ref{lem-basis}, $\sum_i \alpha_i\beta_{i,j}\beta'_{i,k}=0$ for all $j,k$. This implies $\sum_i\alpha_i(N_i,N'_i)=0$. Thus $X=\sum_i\alpha_i[(\iota_i,\iota'_i)+(\iota_i,N'_i)+(N_i,\iota'_i)]$ and $X$ lies in the ideal generated by $\cG^*$. The elements of $\cG^*$ are clearly uniform. Overlap relations involving two elements of the form $(g,w')$ with $g\in\cG$ completely reduce to 0 using the complete reduction to 0 in $K\cQ$ by $\cG$. Similarly, the overlap relation of two elements of the form $(v,g')$ with $g'\in\cG'$ completely reduces to 0 by $\cG'$. Now suppose $g=t - N\in\cG$ with $\tip(g)=t$. Consider $Ov((u,b')(a,x'),(t,x'))$ where $(u,b')(a,x')-(a,w')(v,b')\in C$ and $t=a\hat t$ (with $a\in\cQ_1$, $b'\in\cQ'_1$). Then $Ov((u,b')(a,x'),(t,x'))=-(a,w')(v,b')(\hat t,x') + (u,b')(N,x')$. Using simple reductions involving elements of $C$, we commute the arrows of the form $(\cdot,b')$ past the elements of $\hat t$ and $N$ (changing the vertices as needed) to obtain that $Ov((u,b')(a,x'),(t,x'))$ reduces to $-(a,w')(\hat t,w')(v,b') + (N,w')(v,b')$. But $-(a,w')(\hat t,w')(v,b') + (N,w')(v,b')=-(a\hat t-N,w') (v,b')=-(g,w')(v,b')$. Since $(g,w')\in\cG^*$, $Ov((u,b')(a,x'),(t,x'))$ completely reduces to 0. The case of an element of the form $(v,g')$ overlapping a commutativity relation, and completely reducing to 0, is similar. This completes the proof that $\cG^*$ is a \grb basis for $I^*$ with respect to $\succ^*$. We leave it to the reader to show that $\cG^*$ is the reduced \grb basis. This completes the proof of part 1. The proof of part 2 is straightforward and left to the reader. \end{proof} The next result follows from Proposition \ref{prop-op} and Theorem \ref{thm-tensor}. \begin{corollary}\label{cor-env} Let $\Lambda=K\cQ/I$ be a strong Koszul algebra (with respect to $I$ and $\succ$). Let $\succ^*$ be the length admissible order defined above where $\Lambda'=\Lambda^{op}$. Then the enveloping algebra $\Lambda\otimes_K\Lambda^{op}=K\cQ^*/I^*$ is a strong Koszul algebra (with respect to $I^*$ and $\succ^*$). \qed \end{corollary} \section{Remarks and questions}\label{sec-rem} The goal of this paper was to introduce the variety $\GrAlg(\cT)$ and its connection to Koszul algebras. We believe this connection will lead to further interesting results, and, to that end, we present a number of open questions. By dropping the restriction that $\cT$ is composed of paths of length 2, thus allowing $\cT$ to be an arbitrary finite set of paths, one still has a variety $\GrAlg(\cT)$ whose points correspond to certain graded algebras. In this more general setting, if one drops the length homogeneity restriction, one still obtains an algebraic variety whose points correspond to (not necessarily graded) algebras. Such connections are currently under investigation in \cite{GHS}. In the case of Koszul algebras that are not strong, one can still look at an appropriate variety with $\cT$ being the set $\tip(\cG)$ where $\cG$ is a reduced \grb basis with respect to some admissible order. In such a variety, $K\cQ/\langle \cT\rangle$ is not a Koszul algebra since it is a nonquadratic monomial algebra. This leads to the following question: \begin{Question}\label{qu-present}{\rm Suppose $\Lambda=K\cQ/I$ is a Koszul algebra such that the reduced \grb basis $\cG$ is not quadratic.
Then $\Lambda_{Mon}=K\cQ/\langle \tip(\cG)\rangle$ is not a Koszul algebra. Both $\Lambda$ and $\Lambda_{Mon}$ are in $\GrAlg(\tip(\cG))$. Are there necessarily other Koszul algebras in $\GrAlg(\tip(\cG))$ and, if so, does the set of Koszul algebras in $\GrAlg(\tip(\cG))$ have some geometrical interpretation? In particular, it would be interesting to study $\GrAlg(\tip(\cG))$ for the Sklyanin algebras. \qed } \end{Question} \begin{Question}\label{qu-dual}{\rm If $\Lambda=K\cQ/I$ is a strong Koszul algebra, is there a length admissible order $\succ^{\perp}$ on the paths in the quiver $\cQ^{op}$ so that the Koszul dual, $\oplus_{n\ge 0}\Ext_{\Lambda}^n(\overline{\Lambda}, \overline{\Lambda})$, is a strong Koszul algebra, where $\overline{\Lambda}$ is $K\cQ/J$ with $J=\langle \cQ_1\rangle$? We believe the answer is no. \qed}\end{Question} One can drop the graded restriction and allow arrows together with paths of length two in a \grb basis. The new variety contains $\GrAlg(\cT)$. It would be interesting to study how these two varieties relate to one another. Basic geometric questions need to be investigated, some of which are listed below. \begin{Question}\label{qu-geo}{\rm Is $\GrAlg(\cT)$ irreducible? Given Example \ref{ex-poly3-1}, one expects that the answer, in general, is no. Example \ref{ex-poly2} shows that sometimes $\GrAlg(\cT)$ is irreducible. What do the irreducible components look like, and is there an interpretation of irreducibility in terms of the algebras? \qed }\end{Question} \begin{Question} {\rm Does every affine algebraic variety occur as some $\GrAlg(\cT)$? As some $\GrAlg_{\psi}(\cT)$? }\end{Question} \begin{Question}{\rm What is the dimension of $\GrAlg(\cT)$ in terms of some invariant, like $\cT$? Is there an algorithm to compute it? \qed}\end{Question} \begin{Question}{\rm Characterize when $\GrAlg(\cT)$ is a point. \qed}\end{Question} \begin{Question}{\rm Characterize when $\GrAlg(\cT)$ is all of affine space. \qed}\end{Question} \begin{Question}\label{qu-self}{\rm Suppose $\Lambda=K\cQ/I$ is a finite dimensional, selfinjective, strong Koszul algebra (with respect to $I$ and $\succ$). Does $\GrAlg(\tip(I))$ necessarily contain other selfinjective, strong Koszul algebras? Does the set of selfinjective strong Koszul algebras have a geometrical interpretation in $\GrAlg(\tip(I))$? Is there a geometric condition on $\GrAlg(\cT)$ that assures the existence of a finite dimensional selfinjective algebra in $\GrAlg(\cT)$? \qed} \end{Question} \begin{Question}\label{qu-bound}{\rm Let $\cT$ be a set of paths of length 2 in a quiver $\cQ$ and let $\succ$ be a length admissible order on the set of paths in $\cQ$. Let $\cI$ be the ideal of the variety $\GrAlg(\cT)$. Is there a fast way to determine the degrees of a generating set of polynomials in $\cI$, or a bound on their degrees? More precisely, overlap relations result in paths of length 3. There is a bound on the number of simple reductions needed to completely reduce an overlap relation to a linear combination of words in $\cN_3$, i.e., nontips of length 3. This number (plus 1) should bound the degree of the polynomials in $\cI$. Is there a fast way to find this number given $\cT$? \qed }\end{Question}
{ "timestamp": "2017-02-10T02:06:56", "yymm": "1702", "arxiv_id": "1702.02918", "language": "en", "url": "https://arxiv.org/abs/1702.02918", "abstract": "Koszul algebras with quadratic Groebner bases, called strong Koszul algebras, are studied. We introduce affine algebraic varieties whose points are in one-to-one correspondence with certain strong Koszul algebras and we investigate the connection between the varieties and the algebras.", "subjects": "Rings and Algebras (math.RA)", "title": "The Geometry of Strong Koszul Algebras", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.969785409439575, "lm_q2_score": 0.7310585903489891, "lm_q1q2_score": 0.7089699543659129 }
https://arxiv.org/abs/2002.03899
K-bMOM: a robust Lloyd-type clustering algorithm based on bootstrap Median-of-Means
We propose a new clustering algorithm that is robust to the presence of outliers in the dataset. We perform Lloyd-type iterations with robust estimates of the centroids. More precisely, we build on the idea of median-of-means statistics to estimate the centroids, but allow for replacement while constructing the blocks. We call this methodology the bootstrap median-of-means (bMOM) and prove that if enough blocks are generated through the bootstrap sampling, then it has a better breakdown point for mean estimation than the classical median-of-means (MOM), where the blocks form a partition of the dataset. From a clustering perspective, bMOM makes it possible to take many blocks of a desired size, thus avoiding the possible disappearance of clusters in some blocks, a pitfall that can occur with the partition-based generation of blocks of the classical median-of-means. Experiments on simulated datasets show that the proposed approach, called K-bMOM, performs better than existing robust K-means based methods. Guidelines are provided for tuning the hyper-parameters of K-bMOM in practice. It is also recommended that practitioners use such a robust approach to initialize their clustering algorithm. Finally, considering a simplified and theoretical version of our estimator, we prove its robustness to adversarial contamination by deriving robust rates of convergence for the K-means distortion. To our knowledge, it is the first result of this kind for the K-means distortion.
\section{Introduction} \sloppypar Data scientists nowadays have to deal with massive and complex datasets that are often corrupted by outliers. Classical data mining procedures, such as K-means or more general EM algorithms, are however sensitive to the presence of outliers, which can induce a time-consuming pre-processing of the data. In this context, robust versions of data mining procedures are particularly relevant and we investigate a way to produce a Lloyd-type algorithm for hard clustering that is robust to the presence of outliers. To do this, we propose to use a variant of median-of-means (MOM) statistics, that we call bootstrap median-of-means (bMOM). The MOM principle has been the object of recent active research in mean estimation, regression, high-dimensional frameworks, as well as supervised classification and machine learning (\cite{lerasle2011robust,MR3576558,lecue2017learning,lecue2017robust,MR3909950,MR4017683,MR4055993,minsker2018uniform}). Note that other approaches to robustness for K-means exist in the literature, such as for instance K-median or trimmed K-means (see for instance the survey \cite{MR2720402} and references therein; see also \cite{brecheteau2018robust}). Given a dataset, the bootstrap median-of-means consists in first generating a (large) bootstrap sample and then performing a classical median-of-means on this bootstrap sample. We prove in Section \ref{sec:Robust-mean-estimation} that if enough blocks are generated from the bootstrap sampling, then for a fixed block size, bMOM has a higher breakdown point than MOM. We propose a robust-to-outliers version of K-means, that we call K-bMOM, which performs Lloyd-type iterations through the use of bMOM estimates of the K-means distortion, as further explained in Section \ref{sec:kB-MOM-algorithm}. In Section \ref{sec:Simulations-and-practical}, a bMOM based procedure is considered to initialize clustering algorithms and is compared to existing initialisations on simulated datasets. Practical considerations to choose the number and size of blocks are discussed and a heuristic is proposed. Finally, the K-bMOM algorithm is compared to existing robust k-means based clustering approaches on simulated datasets in the presence of outliers. While completing the editing of this work, we became aware of the recent preprint \cite{Klochkov2020robust} that investigates the use of median-of-means statistics to produce a robust K-means clustering. However, the latter work is theoretical and the authors study probabilistic performance bounds for the minimizer of the median-of-means of the K-means distortion loss. In particular, the authors do not discuss the use of median-of-means through Lloyd-type iterations nor a practical way to compute the estimator. Neither do they discuss the possibility of generating blocks with replacement in the dataset. \section{Robust mean estimation by the bootstrap median-of-means\label{sec:Robust-mean-estimation}} \subsection{Median-of-Means and bootstrap Median-of-Means} The median-of-means (MOM) estimator of the mean in dimension one consists in taking a median of some arithmetic means computed on a collection (say of size $B$) of disjoint blocks $\left(x_{i}\right)_{i\in b_{k}}$, where $\left\{ b_{k}:k\in\left\{ 1,...,B\right\} \right\} $ form a partition of the set of indices $\left\{ \text{1,...,n}\right\} $ of a real-valued sample $x_{1}^{n}=\left(x_{1},...,x_{n}\right)$. The lengths of the blocks are generally taken to be equal, up to at most one data point.
We thus have, by denoting $b_{1}^{B}$ the collection of blocks, \[ {\rm MOM}(x_{1}^{n},b_{1}^{B})={\rm med}\left\{ \frac{1}{\left|b_{k}\right|}\sum_{j\in b_{k}}x_{j}:k\in\left\{ 1,...,B\right\} \right\} , \] where ${\rm med}$ is a median, that is $\#\left\{ k\in\left\{ 1,...,B\right\} ;a_{k}\leq{\rm med}\left\{ a_{i}\right\} \right\} \geq B/2$ and $\#\left\{ k\in\left\{ 1,...,B\right\} ;a_{k}\geq{\rm med}\left\{ a_{i}\right\} \right\} \geq B/2$. We can consider that the blocks are generated according to a random drawing process that proceeds without replacement (disjoint blocks), drawing uniformly at each step from the remaining data. This formulation naturally leads to considering more general random block-generating processes. For any positive integers $n_{B}$ and $B$, denote $m=Bn_{B}$ and generate a bootstrap sample $y_{1}^{m}=(y_{1},...,y_{m})$ from the dataset $x_{1}^{n}$. More precisely, each $y_{i}$ is taken uniformly at random from the values $\left(x_{1},...,x_{n}\right)$ and independently from the $\left(y_{j}\right)_{j\neq i}$. Then the bootstrap median-of-means (bMOM) of the dataset $x_{1}^{n}$ with parameters $n_{B}$ and $B$ is the (classical) MOM estimator on the bootstrap sample $y_{1}^{m}$ with blocks $b_{j}=(n_{B}(j-1)+1,...,n_{B}j)$ for $j\in\{1,...,B\}$, \[ {\rm bMOM}(x_{1}^{n},n_{B},B)={\rm MOM}(y_{1}^{m},b_{1}^{B}). \] We can notice that ${\rm bMOM}$ is a randomized estimator. Also, for any fixed sample size $n$, we can choose any block size $n_{B}$ and number of blocks $B$ to define a ${\rm bMOM}$ estimator, in contrast to the classical MOM, where the product of the block size with the number of blocks should be equal to the sample size. This will turn out to be valuable in the clustering context, where we want to avoid block sizes that are too small, so that clusters do not disappear from the blocks. We prove below that taking enough blocks in the definition of bMOM enables a more robust estimation than MOM with the same block size, in the sense that the breakdown point of bMOM is higher. This also makes bMOM of interest compared to MOM for mean estimation in general. We leave the question of sub-Gaussian deviation bounds for mean estimation using bMOM as an interesting open problem. \subsection{Breakdown points\label{subsec:Breakdown-points}} The breakdown point is a classical concept in the robust statistics literature (\cite{MR2488795,MR3839299}), which gives the maximal proportion of outliers that is allowed so that the deviations of the estimator stay bounded compared to the no-corruption setting. Assume that we are given a sample $x_{1}^{n}=\left(x_{1},...,x_{n}\right)$ of real-valued random variables. \begin{defn}[Deterministic Breakdown point] \label{def:The-(deterministic)-breakdown}The (deterministic) breakdown point $\delta_{n}\left(T_{n},x_{1}^{n}\right)$ of an estimator $T_{n}$ given the sample $x_{1}^{n}$ is the maximal proportion of outliers that leave the value of the estimator bounded: \[ \delta_{n}\left(T_{n},x_{1}^{n}\right)=\frac{1}{n}\max\left\{ m;\max_{i_{1},...,i_{m}}\sup_{y_{1},...,y_{m}}\left|T_{n}\left(z_{1},...,z_{n}\right)\right|<+\infty\right\} \,, \] where the sample $\left(z_{1},...,z_{n}\right)$ is obtained by replacing the $m$ data points $x_{i_{1}},...,x_{i_{m}}$ of the sample $x_{1}^{n}$ by arbitrary values $y_{1},...,y_{m}$.
\end{defn} One can notice that Definition \ref{def:The-(deterministic)-breakdown} corresponds to a worst case analysis, the outliers potentially appearing at the worst places for the estimator $T_{n}$. If the estimator $T_{n}$ is randomized (we then rather denote it $T_{n}^{\omega}$), its breakdown point is a random variable. For a median ${\rm med}$, it holds that $\delta_{n}\left(T_{n},x_{1}^{n}\right)=\left\lfloor n/2\right\rfloor /n$ and, for the empirical mean $\bar{x}_{n}=1/n\sum_{i=1}^{n}x_{i}$, $\delta_{n}\left(T_{n},x_{1}^{n}\right)=1/n$. For the median-of-means estimator, \[ \delta_{n}\left({\rm MOM}(x_{1}^{n},b_{1}^{B}),x_{1}^{n}\right)=\left\lfloor B/2\right\rfloor /n\;a.s., \] since it suffices to have one outlier in a majority of blocks to make MOM diverge. Note that \cite[Section 4.2]{MR3576558} proposes to automatically select the number of blocks of the MOM estimator by a Lepskii-type procedure that consists in choosing the smallest number of blocks such that the intersection of some confidence intervals constructed for MOM with greater numbers of blocks is empty. Then, it is easily seen that the resulting estimator will inherit the value of the breakdown point corresponding to the highest number of blocks in the considered collection. If the highest number of blocks is $n$, the sample size, thus corresponding to a median, then the method of intersection of confidence intervals gives an optimal value of the breakdown point, corresponding to $\left\lfloor n/2\right\rfloor /n$. However, computing such a selection procedure is time consuming and, as we want to make an iterative use of (bootstrap) MOM estimates, this method seems out of scope for us. Instead, we show below that the use of replacement while constructing the blocks already improves the breakdown point, if enough blocks are considered, compared to the use of disjoint blocks in median-of-means statistics. \begin{prop} \label{prop:deterministic_bdp}We have \[ \delta_{n}\left({\rm bMOM}(x_{1}^{n},n_{B},B),x_{1}^{n}\right)\leq\delta_{n}\left({\rm MOM}(x_{1}^{n},b_{1}^{B}),x_{1}^{n}\right)\;a.s. \] and \[ \lim_{B\rightarrow+\infty}\delta_{n}\left({\rm bMOM}(x_{1}^{n},n_{B},B),x_{1}^{n}\right)=1-\frac{1}{2^{1/n_{B}}}>\frac{1}{2n_{B}}\,\;a.s. \] Note that $1-\frac{1}{2^{1/n_{B}}}\sim_{n_{B}\rightarrow+\infty}\frac{\log2}{n_{B}}\simeq\frac{0.69}{n_{B}}$. \end{prop} On the one hand, the first display in Proposition \ref{prop:deterministic_bdp} states that when the number of blocks in bMOM is equal to the number of blocks in MOM, bMOM has a breakdown point that is smaller than or equal to the breakdown point of MOM (this is due to the possible repetitions of outliers along the blocks for bMOM). On the other hand, the second display in Proposition \ref{prop:deterministic_bdp} states that for a fixed block size, when the number of blocks in bMOM tends to infinity, its breakdown point tends to a value that is strictly greater than the breakdown point of MOM with the same block size. \begin{proof} We prove the second display. Assume that the sample is corrupted by $m$ outliers. Denote by $S_{i}$ the indicator that the block $b_{i}$ is not corrupted. Then $S_{i}$ is a Bernoulli random variable of mean $\left(1-m/n\right)^{n_{B}}.$ Then $\sup_{y_{1},...,y_{m}}\left|{\rm bMOM}(x_{1}^{n},n_{B},B)\right|$ is finite if the proportion of corrupted blocks is smaller than $1/2$. This corresponds to the condition $\sum_{i=1}^{B}S_{i}/B>1/2$.
By the strong law of large numbers, the latter is almost surely realized asymptotically if $\left(1-m/n\right)^{n_{B}}>1/2$, hence the result. \end{proof} Considering that the contaminated sample is fixed, it is interesting to evaluate the probability that a randomized estimator does not diverge when the outliers diverge. It can indeed happen that the indices of the outliers are not the worst possible ones with respect to the block drawing process. This leads to the following definition. \begin{defn}[Probabilistic Breakdown point] The probabilistic breakdown point of a randomized estimator $T_{n}^{\omega}$ given the sample $x_{1}^{n}$ is \[ p_{n}\left(T_{n}^{\omega},x_{1}^{n},\left(i_{1},...,i_{m}\right)\right)=\mathbb{P}\left(\left\{ \omega:\sup_{y_{1},...,y_{m}}\left|T_{n}^{\omega}\left(z_{1},...,z_{n}\right)\right|<+\infty\right\} \right) \] where the sample $\left(z_{1},...,z_{n}\right)$ is obtained by replacing the $m$ data points $x_{i_{1}},...,x_{i_{m}}$, for some fixed indices $\left(i_{1},...,i_{m}\right)$, by the arbitrary values $y_{1},...,y_{m}$. \end{defn} As $p_{n}\left({\rm bMOM}(x_{1}^{n},n_{B},B),x_{1}^{n},\left(i_{1},...,i_{m}\right)\right)$ only depends on $n$ and $m$, but not on the values of $\left(i_{1},...,i_{m}\right)$ or $x_{1}^{n}$, we will rather denote it $p_{n}\left({\rm bMOM}(x_{1}^{n},n_{B},B),m\right)$. We have the following bound. \begin{prop} It holds \[ p_{n}\left({\rm bMOM}(x_{1}^{n},n_{B},B),m\right)\geq1-\exp\left(-2B\left(\left(1-m/n\right)^{n_{B}}-1/2\right)^{2}\right)\,. \] \end{prop} If $m$ and $n$ are fixed, then the block length $n_{B}$ should be such that $\left(1-m/n\right)^{n_{B}}>1/2$, that is $n_{B}<\log(2)/\log\left(\left(1-m/n\right)^{-1}\right)$. In this case, denoting $D=\left(1-m/n\right)^{n_{B}}-1/2$, the lower bound above guarantees $p_{n}\left({\rm bMOM}(x_{1}^{n},n_{B},B),m\right)\geq1-R$ as soon as $B\geq\log\left(1/R\right)/\left(2D^{2}\right)$. \begin{proof} As in the proof of Proposition \ref{prop:deterministic_bdp}, denote by $S_{i}$ the indicator that the block $b_{i}$ is not corrupted. We have, by Hoeffding's inequality (\cite[Theorem 2.27]{MR3363542}), \begin{align*} \mathbb{P}\left(\left\{ \omega:\sup_{y_{1},...,y_{m}}\left|{\rm bMOM}(x_{1}^{n},n_{B},B)\right|=+\infty\right\} \right) & =\mathbb{P}\left(\sum_{i=1}^{B}\left(1-S_{i}\right)>B/2\right)\\ & \leq\exp\left(-2B\left(\left(1-m/n\right)^{n_{B}}-1/2\right)^{2}\right). \end{align*} \end{proof} \section{$K$-bMOM algorithm\label{sec:kB-MOM-algorithm}} This Section proposes an estimation procedure based on the MOM principle for clustering unlabeled data. Moreover, since the resulting partition of most clustering approaches depends on the starting centers, we also propose a MOM-based initialization procedure. Let us introduce the following notations. Let $x_{1},\dots,x_{n}\in\mathbb{R}^{p}$ denote a dataset of $n$ observations that we want to cluster into $K$ homogeneous groups. $b\in\{1,\dots,B\}$ stands for the index of a block and $B\in\mathbb{N}^{*}$ for the number of blocks, each containing $n_{B}>K$ data points. We define the empirical risk of the block $b$ as: \[ R_{b}(\textbf{c})=\sum_{k=1}^{K}\sum_{i\in\mathcal{C}_{k}^{(b)}}\left\Vert x_{i}^{(b)}-c_{k}^{(b)}\right\Vert ^{2} \] where $x_{i}^{(b)}$ stands for the $i$th datapoint contained in the block $b$, $\mathcal{C}_{k}^{(b)}$ stands for the set of datapoints belonging to cluster $k$ in the block $b$ and $\left\Vert \cdot\right\Vert $ is the Euclidean norm.
$c_{k}^{(b)}$ stands for the mean vector of the cluster $k$ in the block $b$ and we denote by $v_{k}^{(b)}$ its within-cluster variance. Finally, we denote by $\mathcal{P}(\mathbf{c})$ the partition obtained from the matrix of centroids~$\mathbf{c}$. \subsubsection*{A robust initialisation} It is well known that, since the clustering problem is non-convex, the initialisation step is a key step for the resulting partition. We therefore propose a robust variant of traditional initialisation strategies by applying the MOM principle. To do so, the idea is to build, uniformly and with replacement, $B$ blocks of $n_{B}$ data points, where $n_{B}$ is strictly greater than the number of groups. In each block a traditional k-means++ initialisation~\cite{Vassilvitskii07} is performed. Such an approach proceeds iteratively: it starts with a centroid picked at random among the data points. Then, iteratively and until the number of groups $K$ is reached, a new centroid is chosen from the data points with a probability proportional to the squared distance $D^{2}(x,c)$ to the closest center already chosen. In each block, the empirical risk is then computed, and the centers of the block achieving the median empirical risk, called the median block, are selected as the initial centers. We define the following algorithm: \begin{algorithm}[H] \textbf{Input:} the dataset $\left\{ x_{1},\ldots,x_{n}\right\} $, the number of blocks $B$ and the block size $n_{B}>K$ \begin{enumerate} \item For $b=1$ to $B$: \begin{enumerate} \item Select at random, uniformly and with replacement, $n_{B}$ data points \item Perform a k-means++ initialisation \item Compute the empirical risk $\hat{R}_{b}\mathrm{(\textbf{c})}$ of the block $b$ \end{enumerate} \item Select the centers from the block having the median empirical risk and get: $\left(\widehat{c}_{bmed}^{(1)},\ldots,\widehat{c}_{bmed}^{(K)}\right)$, $\left(\widehat{v}_{bmed}^{(1)},\ldots,\widehat{v}_{bmed}^{(K)}\right)$. \end{enumerate} \textbf{Output:} $\left(\widehat{c}_{bmed}^{(1)},\ldots,\widehat{c}_{bmed}^{(K)}\right)$ \caption{Initialisation strategy\label{alg:Initialisation-strategy}} \end{algorithm} \subsubsection*{The $K$-bMOM algorithm} Due to the nature of the MOM principle and the clustering goal, the algorithm we propose iterates over three main steps. At iteration $t$, given the centers fitted on the median block of the previous iteration, $B$ blocks of $n_{B}$ data points are built by uniform sampling with replacement. Then, a partition of each block is computed by assigning each data point to its closest centroid fitted on the median block at iteration $t-1$. The centroids of each block are updated according to this block partition and the empirical risk $\hat{R}_{b}\mathrm{(\textbf{c})}$ is returned. The block with the median empirical risk is selected and the fitted centers of this median block become the current ones. This is repeated until the empirical risk of the median block $\hat{R}_{bmed}(\textbf{c})$ stabilizes. The final partition of the whole dataset is obtained by assigning each data point to its nearest centroid $\left(\widehat{c}_{1}^{(bmed)},\dots,\hat{c}_{K}^{(bmed)}\right)$ of the current median block. A pseudo-algorithm of this procedure is detailed in Algorithm~\ref{alg:Iteration-phase-structure-1}.
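For concreteness, the following is a minimal Python sketch of one such iteration, complementing the pseudo-code of Algorithm~\ref{alg:Iteration-phase-structure-1}. The function names are ours, NumPy is assumed, and this is only a sketch of the description above, not the exact implementation used in the experiments.
\begin{verbatim}
import numpy as np

def kbmom_step(X, centers, B, n_B, rng):
    """One K-bMOM iteration: draw B bootstrap blocks of size n_B, perform one
    Lloyd update in each admissible block, and return the (risk, centers) of
    the block achieving the median empirical risk."""
    n, K = X.shape[0], centers.shape[0]
    candidates = []                                # (risk, updated centers)
    for _ in range(B):
        Xb = X[rng.integers(0, n, size=n_B)]       # uniform, with replacement
        d2 = ((Xb[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)                 # assignment to current centers
        if len(np.unique(labels)) < K:             # skip blocks with an empty cluster
            continue
        new_c = np.vstack([Xb[labels == k].mean(axis=0) for k in range(K)])
        risk = ((Xb - new_c[labels]) ** 2).sum()   # empirical risk of the block
        candidates.append((risk, new_c))
    candidates.sort(key=lambda t: t[0])            # assumes at least one admissible block
    return candidates[len(candidates) // 2]        # median-risk block
\end{verbatim}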
\bigskip \begin{algorithm}[h] \caption{Iteration phase structure \label{alg:Iteration-phase-structure-1}} \textbf{Input:} $\left\{ x_{1},\ldots,x_{n}\right\} $, $B$ the number of blocks and $n_{B}$ the size of the blocks ($n_{B}>K$) \textbf{Initialisation step:} Algorithm~\ref{alg:Initialisation-strategy}. \textbf{Set:} $q=0$ and $crit>>\varepsilon$. \medskip \textbf{Main Loop:} while $crit>\varepsilon$ and $q<\bar{q}_{max}$: \begin{enumerate} \item Create $B$ blocks of the data of size $n_{B}$ randomly and uniformly with replacement \item In each block $b$: \begin{itemize} \item Assign each datapoint to its closest centroid. \item If $n_{k}^{(b)}\geq1,$ $\forall k\in\{1,\dots,K\}$: \begin{itemize} \item for $k\in\{1,\dots,K\}$: $c_{k}^{(b)}\leftarrow1/n_{k}^{(b)}\sum_{i\in\mathcal{C}_{k}^{(b)}}x_{i}^{(b)}$ \item $\hat{R}_{b}\mathrm{(\textbf{c})}\leftarrow\sum_{k=1}^{K}\sum_{i\in\mathcal{C}_{k}^{(b)}}\left\Vert x_{i}^{(b)}-\hat{c}_{k}^{(b)}\right\Vert ^{2}$ \end{itemize} \end{itemize} \item Get the median empirical risk $\hat{R}_{bmed}\mathrm{(\textbf{c})}$ and the associated quantities of the median block\,: $b_{med}$, $\left(\widehat{c}_{1}^{(bmed)},\ldots,\widehat{c}_{K}^{(bmed)}\right)$. \item if $q>2$: \begin{enumerate} \item $A^{(q)}\leftarrow(\hat{R}_{bmed}^{(q+1)}-\hat{R}_{bmed}^{(q)})/(\hat{R}_{bmed}^{(q)}-\hat{R}_{bmed}^{(q-1)})$ \item $crit\leftarrow1/(1-A^{(q)})(\hat{R}_{bmed}^{(q+1)}-\hat{R}_{bmed}^{(q)})$ \end{enumerate} \item $q\leftarrow q+1$ \end{enumerate} \medskip \begin{description} \item [{Output:}] $\left(\widehat{c}_{bmed}^{(1)},\ldots,\widehat{c}_{bmed}^{(K)}\right)$ and $\mathcal{P}(\hat{\mathbf{c}}_{med})$ \end{description} \end{algorithm} \subsubsection*{Stopping criterion and convergence monitoring} In order to decide whether the algorithm has converged or not, we propose to use a criterion based on Aitken's acceleration criterion, defined by $A^{(q)}=(\hat{R}_{b}^{(q+1)}-\hat{R}_{b}^{(q)})/(\hat{R}_{b}^{(q)}-\hat{R}_{b}^{(q-1)})$ where $\hat{R}_{b}^{(q)}$ is the empirical distortion computed on the median block at iteration $q$. Then an asymptotic estimate of the limiting empirical distortion is given by: \[ R_{\infty}^{(q+1)}=R_{\infty}^{(q)}+\frac{1}{1-A^{(q)}}\left(R^{(q+1)}-R^{(q)}\right) \] and the algorithm can be considered to have converged if $\left|R_{\infty}^{(q+1)}-R_{\infty}^{(q)}\right|<\varepsilon$ where $\varepsilon$ is a small positive number (set to $\varepsilon=0.001$). In practice, if the criterion is not satisfied after a given maximum number of iterations, the algorithm is stopped. \subsubsection*{Model selection} In model-based clustering, it is frequent to consider several models in order to find the most appropriate one for the considered data. In particular, for most clustering algorithms, the model is specified by its number of clusters $K$. There are many ad hoc approaches in the literature to select the number of components $K$, such as the Gap statistic of~\cite{tibshirani2001estimating}, the Silhouette criterion, and so on. However, since the k-means algorithm can be seen as a hard version of an EM-like algorithm which tries to estimate a mixture of $K$ Gaussians with isotropic covariance matrices, we can also apply classical model selection tools including the BIC and ICL criteria and the slope heuristic~\cite{BauMauMich:12}, for example.
We can therefore use such criteria on the proposed robust version of the $k$-means by running K-bMOM for several values of $K$, computing the chosen criterion for each model, and selecting the model, defined by its number of components, which either maximizes the BIC or ICL criterion or follows the principle of the slope heuristic. \section{Simulations and practical considerations\label{sec:Simulations-and-practical}} \subsection{Comparing initialisation strategies for the clustering task} It is well known that the resulting partition of most clustering approaches, such as for example the k-means or Gaussian mixture models, heavily depends on the starting centers. Therefore, a bad initialisation leads to a poor partitioning of the data. This is particularly true for data with outliers, where most traditional and state-of-the-art initialisation techniques behave poorly. We propose in this section to apply the MOM principle to the most widely used initialisation methods, among which kmeans++ and kmedians++, and we evaluate and compare them to their traditional use. These different strategies are compared on simulated data in two different contexts of outliers: punctual, spread-out outliers and a cluster of outliers. \paragraph{Simulation contexts:} The data are generated from $K=3$ multivariate Gaussian distributions of dimension $p=2$ with cluster sizes $n_{1}=n_{2}=n_{3}=300$, variance $\sigma^{2}=0.6$ and mean vectors $\mu_{1}=[1,4],\mu_{2}=[2,1]$ and $\mu_{3}=[-2,3]$. Figure \ref{fig:initialisation_datapoints}.a illustrates one realisation of the simulated context. \begin{itemize} \item \textbf{simulation 1}: \textit{punctual outliers}. From these $n=900$ datapoints, we randomly select $n_{outlier}$ as potential outliers and their coordinates are multiplied by a constant term $\beta$ which quantifies how far these outliers are from their own distribution. We consider different levels of pollution of the data, $n_{outlier}\in\{9,27\}$, and different degrees of outliers, $\beta\in\{5,20\}$. Figure \ref{fig:initialisation_datapoints}.b illustrates the data polluted by $n_{outlier}=9$ outliers with degree $\beta=20$. \item \textbf{simulation 2}: \textit{cluster of outliers.} A cluster of outliers of size $n_{outlier}$ is generated according to a $2$-dimensional Gaussian distribution with mean $\mu_{outlier}=\beta[1,1]$ and variance fixed to $\sigma^{2}=1$. Note that the size of the cluster of outliers varies among $n_{outlier}\in\{9,27\}$ and the distance level varies in $\beta\in\{5,20\}$. Figure \ref{fig:initialisation_datapoints}.c illustrates the cluster of outliers with $n_{outlier}=9$ and degree $\beta=5$. \end{itemize} For all the methods, the number of clusters is supposed to be known and fixed to $K=3$.
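For illustration, a short NumPy sketch of simulation 1 (punctual outliers) could be written as follows; the function name is ours and the parameter values simply follow the description above, so this is a sketch of the simulation protocol rather than the code used for the reported experiments.
\begin{verbatim}
import numpy as np

def simulate_punctual_outliers(n_outlier=9, beta=20, sigma2=0.6, seed=0):
    """Simulation 1: three spherical Gaussian clusters in R^2; n_outlier points
    drawn at random have their coordinates multiplied by beta."""
    rng = np.random.default_rng(seed)
    means = np.array([[1.0, 4.0], [2.0, 1.0], [-2.0, 3.0]])
    X = np.vstack([m + np.sqrt(sigma2) * rng.standard_normal((300, 2))
                   for m in means])
    labels = np.repeat(np.arange(3), 300)
    outliers = rng.choice(X.shape[0], size=n_outlier, replace=False)
    X[outliers] *= beta
    return X, labels, outliers
\end{verbatim}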
\begin{figure}[h] \begin{centering} \subfloat[Data generated without outliers]{\includegraphics[scale=0.29]{init_simul_1} }\subfloat[case 1: with punctual outliers]{\includegraphics[scale=0.29]{outliers_simul_1} }\subfloat[case 2: with a cluster of outliers]{\includegraphics[scale=0.29]{outliers_simul_2} } \par\end{centering} \caption{Illustrations of simulated data generated according to a Gaussian Mixture Model in order to compare initialisation methods in the context of outliers \label{fig:initialisation_datapoints}} \end{figure} \paragraph{Initialisation strategies: } We consider the following 3 traditional initialisation strategies: \begin{itemize} \item \textbf{Random initialisation}: we select $K$ datapoints randomly and without replacement as initial centers. \item \textbf{kmeans++}, proposed by \cite{Vassilvitskii07}, which is perhaps the most widely used technique to initialise clustering algorithms. The first center is taken from the data uniformly at random. Then, iteratively and until the number $K$ of clusters is reached, a new center is chosen from the datapoints with a probability proportional to the squared distance $D^{2}(x,c)$ to the closest center already chosen. \item \textbf{kmedians++} is a variant of kmeans++. The same process is iterated but the probability is computed with respect to $D(x,c)$ instead of $D^{2}(x,c)$. \end{itemize} and a robust initialisation strategy developed by Hasan et al. in 2009, named ROBIN~\cite{al2009robust}, which is a density-based approach: \begin{itemize} \item \textbf{ROBIN} (ROBust INitialisation) uses the Local Outlier Factor approach (LOF)~\cite{breunig2000lof} to select, as initial centroids, data points far away from each other and representative of dense regions in the dataset. This approach requires knowing the number of clusters $K$ and the number of neighboring data points used to compute the LOF of each data point. In the experiment, the number of neighboring datapoints has been fixed to 10. Depending on the chosen variant, the selected datapoints change drastically, and it has to be noted that the best results are obtained with the approximation method, where the algorithm looks for the first LOF value that falls in $]1-\varepsilon,1+\varepsilon[$. We chose this method and set $\varepsilon=0.2$. \end{itemize} We propose a robust variant of kmeans++ and kmedians++ by applying the MOM principle as described in Section~\ref{sec:kB-MOM-algorithm}. In particular, let $B$ be the number of blocks of data, $n_{B}$ the size of each block and $\mathcal{R}_{b}(c)$ the empirical risk of the $b$th block. Then, we define the following algorithm: \begin{enumerate} \item For $b=1$ to $B$: \begin{enumerate} \item Select at random, uniformly and with replacement, $n_{B}$ datapoints \item Perform a kmeans++ (or kmedians++) initialisation \item Compute the empirical risk of the block $b$ \end{enumerate} \item Select the centers from the block having the median empirical risk \item Assign the datapoints to their nearest centroid of the selected (median) block. \end{enumerate} Note that the size of each block is chosen equal to 18 and the number of blocks is fixed to 250. These parameters follow the breakdown point bounds presented in Section \ref{subsec:Breakdown-points}. For the rest of the paper, we will call K-bMOM-km++ (respectively K-bMOM-kmed++) the robust strategy based on k-means++ (respectively kmedians++).
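For concreteness, a minimal Python sketch of the K-bMOM-km++ initialisation could look as follows. The helper names are ours, NumPy is assumed, and the k-means++ seeding is delegated to the \texttt{kmeans\_plusplus} routine of a recent scikit-learn release; it is a sketch of the procedure described above, not the exact implementation used in the experiments.
\begin{verbatim}
import numpy as np
from sklearn.cluster import kmeans_plusplus   # available in recent scikit-learn

def block_risk(Xb, centers):
    """Empirical distortion of a block for the given centers."""
    d2 = ((Xb[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

def kbmom_kmpp_init(X, K, B=250, n_B=18, seed=0):
    """K-bMOM-km++ initialisation: k-means++ seeding in each bootstrap block,
    keep the centers of the block achieving the median empirical risk."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    risks, centers_list = [], []
    for _ in range(B):
        Xb = X[rng.integers(0, n, size=n_B)]          # uniform, with replacement
        centers, _ = kmeans_plusplus(Xb, n_clusters=K,
                                     random_state=int(rng.integers(2**31)))
        risks.append(block_risk(Xb, centers))
        centers_list.append(centers)
    med = int(np.argsort(risks)[len(risks) // 2])     # index of the median block
    return centers_list[med]
\end{verbatim}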
\paragraph{Performance criteria:} In order to compare the different starting strategies in terms of performance, we compute the following criteria: \begin{itemize} \item the Root Mean Square Error (RMSE), in order to evaluate the robustness of the fitted centers once the initialisation step is performed. This criterion is calculated between the centers proposed by the initialisation process and the ones used to simulate the data, and is given by: \[ \text{RMSE}=\sqrt{\frac{\sum_{k=1}^{K}\left\Vert \hat{c}_{k}-\mu_{k}\right\Vert ^{2}}{K}} \] where $\hat{c}_{k}$ stands for the initial center matched to class $k$ and $\mu_{k}$ for the mean parameter of the $k$th mixture component. \item the accuracy (acc) of the initial partition obtained from the nearest initial centers, computed on the non-polluted data. This is equivalent to a classification rate. \item the Adjusted Rand Index (ARI) between the partition obtained from the nearest initial centers and the true partition, computed on the non-polluted data. \item the empirical distortion obtained at the end of the initialisation step, computed on the non-polluted data: \[ \hat{R}(\hat{\mathbf{c}})=\sum_{k=1}^{K}\sum_{x_{i}\in\mathcal{C}_{k}}\left\Vert x_{i}-\hat{c}_{k}\right\Vert ^{2} \] \item the number of clusters obtained on the non-polluted data, named \textit{nb}. \end{itemize} The experiment has been repeated 300 times and, for all these criteria, averages and standard deviations have been computed for each initialisation method. \paragraph{Empirical Results for simulation 1: } The results of simulation 1 are summarized in Table~\ref{tab:Table simulation1 init}. As we can observe, except for the random approach, which behaves roughly in the same manner across the different contexts, all the starting approaches behave quite well when the number of outliers is small ($n_{outlier}\leq9$) and their distance level is low (cases $\beta=5$): accuracies vary between 0.92 and 0.98. However, ROBIN and the K-bMOM based initialisations are the most stable approaches, with a standard deviation around 2 to 5\%, whereas that of the 3 other methods goes up to 8.4\%. Besides, as soon as the context becomes harder (more and further outliers), only the K-bMOM approaches keep their accuracies and ARIs unchanged, whereas the performances of the 4 other methods decrease drastically. The level of the RMSE computed on the initial centers depends on the strategy used: in particular, it remains under 1 on average for the kmedians++, ROBIN, K-bMOM-km++ and K-bMOM-kmed++ strategies when the simulated context is simple ($n_{outlier}=9$, $\beta=5$). As the distance level and the number of outliers increase, the kmeans++ strategy proposes poor centers since at least one of them is stuck on an outlier. Indeed, its RMSE goes up to 50 and the number of clusters fitted on the non-polluted data falls below the true number of components. The kmedians++ is more robust to outliers, by construction, but its performance decreases drastically when both the number of outliers and the distance level become higher ($n_{outlier}=27,\beta=20$): the RMSE goes up to 30 and the accuracy is about 0.77. On the contrary, K-bMOM-km++ and K-bMOM-kmed++ perform well in all simulation contexts, even when the number of outliers reaches 27 and the distance level 20. On average, the initialisation by K-bMOM-km++ is 95\% accurate at the end of the initialisation step and the proposed centers remain very close to the theoretical ones (RMSE $<1$ on average).
Finally, Figures \ref{fig:Boxplot simulation 1 init } and \ref{fig:Boxplot simulation 1 init - distoritions} stand for violinplots of accuracies and distortions respectively for each initialisation method from the less noisy simulation context to the noisiest one. Several information are displayed in these violinplots: the interquartile range (black bold vertical line), the median (orange point), the percentile 95 (navy blue horizontal line) and the probability density of accuracies (resp. distortions) for each method. In the context $\left(n_{outlier},\beta\right)=\left(20,27\right)$, one can observe the erratic behavior of ROBIN represented by the bimodal distribution of its accuracy: it is true that in median this approach reaches 95\% of accuracy but 10\% of the time, the initialisation present poor results (under 60\% of accuracy) compared to K-bMOM which does not decrease below 65\%. The same kind of observations can be done on the distortions (see Figure \ref{fig:Boxplot simulation 1 init - distoritions}). Finally, by combining the results in distortions and accuracies K-bMOM-km++ and K-bMOM-kmed++ are the initialisation procedures which performs the best in terms of stability and the accuracy of initial centers. They are totally insensitive to the distance of outliers with the rest of data and remain quite effective even when the number of outliers increases (around 3\% of data in our context). \begin{table}[p] \begin{centering} \begin{tabular}{|l|l|l|rrrrr|} \hline {\footnotesize{}$n_{outlier}$} & {\footnotesize{}$\beta$} & \textbf{\footnotesize{}Initialisation} & \textbf{\footnotesize{}RMSE} & \textbf{\footnotesize{}accuracy} & \textbf{\footnotesize{}ari} & \textbf{\footnotesize{}distortion} & \textbf{\footnotesize{}nb}\tabularnewline \hline \hline {\footnotesize{}9} & {\footnotesize{}5} & {\footnotesize{}random} & {\footnotesize{}1.738 (1.697)} & {\footnotesize{}0.763 (0.133)} & {\footnotesize{}0.564 (0.212)} & {\footnotesize{}3399.1 (1785.4)} & {\footnotesize{}3.0 (0.2)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}5} & {\footnotesize{}kmeans++} & {\footnotesize{}2.538 (3.598)} & {\footnotesize{}0.91 (0.13)} & {\footnotesize{}0.84 (0.187)} & {\footnotesize{}1559.1 (897.9)} & {\footnotesize{}2.8 (0.4)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}5} & {\footnotesize{}kmedians++} & {\footnotesize{}1.009 (1.365)} & {\footnotesize{}0.95 (0.084)} & {\footnotesize{}0.891 (0.141)} & {\footnotesize{}1306.6 (619.8)} & {\footnotesize{}3.0 (0.2)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}5} & {\footnotesize{}ROBIN} & {\footnotesize{}0.951 (0.45)} & \textbf{\footnotesize{}0.973 (0.028)} & {\footnotesize{}0.925 (0.063)} & {\footnotesize{}1385.0 (326.3)} & {\footnotesize{}3.0 (0.1)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}5} & {\footnotesize{}K-bMOM-km++} & \textbf{\footnotesize{}0.457 (0.947)} & \textbf{\footnotesize{}0.988 (0.029)} & \textbf{\footnotesize{}0.968 (0.044)} & \textbf{\footnotesize{}790.0 (234.5)} & \textbf{\footnotesize{}3.0 (0.1)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}5} & {\footnotesize{}K-bMOM-kmed++} & \textbf{\footnotesize{}0.488 (0.815)} & \textbf{\footnotesize{}0.981 (0.053)} & \textbf{\footnotesize{}0.956 (0.088)} & \textbf{\footnotesize{}832.3 (342.3)} & \textbf{\footnotesize{}3.0 (0.1)}\tabularnewline \hline {\footnotesize{}9} & {\footnotesize{}20} & {\footnotesize{}random} & {\footnotesize{}2.432 (5.659)} & {\footnotesize{}0.771 (0.143)} & {\footnotesize{}0.58 (0.238)} & {\footnotesize{}3421.4 (2079.7)} & 
{\footnotesize{}3.0 (0.2)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}20} & {\footnotesize{}kmeans++} & {\footnotesize{}54.734 (10.795)} & {\footnotesize{}0.427 (0.147)} & {\footnotesize{}0.141 (0.226)} & {\footnotesize{}6807.2 (2869.2)} & {\footnotesize{}1.3 (0.5)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}20} & {\footnotesize{}kmedians++} & {\footnotesize{}7.884 (15.954)} & {\footnotesize{}0.907 (0.13)} & {\footnotesize{}0.835 (0.192)} & {\footnotesize{}1593.5 (952.0)} & {\footnotesize{}2.8 (0.4)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}20} & {\footnotesize{}ROBIN} & {\footnotesize{}1.317 (3.876)} & \textbf{\footnotesize{}0.972 (0.037)} & {\footnotesize{}0.924 (0.073)} & {\footnotesize{}1412.3 (376.2)} & {\footnotesize{}3.0 (0.1)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}20} & {\footnotesize{}K-bMOM-km++} & \textbf{\footnotesize{}0.402 (0.162)} & \textbf{\footnotesize{}0.989 (0.009)} & \textbf{\footnotesize{}0.969 (0.026)} & \textbf{\footnotesize{}789.2 (150.7)} & \textbf{\footnotesize{}3.0 (0.0)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}20} & {\footnotesize{}K-bMOM-kmed++} & \textbf{\footnotesize{}0.393 (0.171)} & \textbf{\footnotesize{}0.987 (0.031)} & \textbf{\footnotesize{}0.966 (0.052)} & \textbf{\footnotesize{}801.5 (287.5)} & \textbf{\footnotesize{}3.0 (0.0)}\tabularnewline \hline {\footnotesize{}27} & {\footnotesize{}20} & {\footnotesize{}random} & {\footnotesize{}4.175 (9.975)} & {\footnotesize{}0.752 (0.143)} & {\footnotesize{}0.549 (0.229)} & {\footnotesize{}3506.7 (1891.5)} & {\footnotesize{}2.9 (0.3)}\tabularnewline {\footnotesize{}27} & {\footnotesize{}20} & {\footnotesize{}kmeans++} & {\footnotesize{}57.84 (7.832)} & {\footnotesize{}0.343 (0.05)} & {\footnotesize{}0.012 (0.077)} & {\footnotesize{}8810.5 (2902.3)} & {\footnotesize{}1.0 (0.2)}\tabularnewline {\footnotesize{}27} & {\footnotesize{}20} & {\footnotesize{}kmedians++} & {\footnotesize{}31.532 (19.748)} & {\footnotesize{}0.734 (0.156)} & {\footnotesize{}0.604 (0.222)} & {\footnotesize{}2782.4 (1378.7)} & {\footnotesize{}2.2 (0.5)}\tabularnewline {\footnotesize{}27} & {\footnotesize{}20} & {\footnotesize{}ROBIN} & {\footnotesize{}25.71 (29.783)} & {\footnotesize{}0.738 (0.289)} & {\footnotesize{}0.585 (0.42)} & {\footnotesize{}4199.2 (3936.4)} & {\footnotesize{}2.3 (0.9)}\tabularnewline {\footnotesize{}27} & {\footnotesize{}20} & {\footnotesize{}K-bMOM-km++} & \textbf{\footnotesize{}3.361 (10.576)} & \textbf{\footnotesize{}0.951 (0.094)} & \textbf{\footnotesize{}0.903 (0.143)} & \textbf{\footnotesize{}1005.0 (507.3)} & \textbf{\footnotesize{}2.9 (0.3)}\tabularnewline {\footnotesize{}27} & {\footnotesize{}20} & {\footnotesize{}K-bMOM-kmed++} & \textbf{\footnotesize{}4.786 (12.513)} & \textbf{\footnotesize{}0.934 (0.115)} & \textbf{\footnotesize{}0.882 (0.172)} & \textbf{\footnotesize{}1117.7 (677.9)} & \textbf{\footnotesize{}2.9 (0.3)}\tabularnewline \hline \end{tabular} \par\end{centering} \caption{{\footnotesize{}Average (and standard deviation in parentheses) of accuracies and RMSE computed on 300 repetitions of the simulation 1 for the 6 proposed initialisation methods for different number of outliers and distance levels.\label{tab:Table simulation1 init}}} \end{table} \begin{center} \begin{figure}[p] \begin{centering} \includegraphics[scale=0.42]{\string"violin_accuracies_simulation1_2020-01-03_14_00_01\string".png} \par\end{centering} \begin{centering} \caption{{\footnotesize{}Violinplots of accuracies of 6 initialisation approaches according to the level of 
pollution of data in the context of punctual outliers. From the less noisy context (left) to the noisiest one (right).\label{fig:Boxplot simulation 1 init }}} \par\end{centering} \begin{centering} \includegraphics[scale=0.42]{\string"violin_distortions_simulation1_2020-01-03_14_00_01\string".png} \par\end{centering} \caption{{\footnotesize{}Violinplots of distortions of 6 initialisation approaches according to the level of pollution of data in the context of punctual outliers. From the less noisy context (left) to the noisiest one (right).\label{fig:Boxplot simulation 1 init - distoritions}}} \end{figure} \par\end{center} \paragraph{Empirical Results for simulation 2: } The results of simulation 2 are summarized in Table \ref{tab:Table simulation 2 init}. Again, in this situation the random initialization is not as bad as one could expect on average; however, such a starting approach is very unstable, as we can observe from its standard deviations. On the other hand, the standard initialization methods based on kmeans++ and kmedians++ present performances comparable (at least in accuracy) to their robust versions for a low number of outliers (see the case $n_{outlier}=9$ with $\beta=5$). This can be explained simply by the fact that the outliers are grouped together in the same area of the space, and therefore kmeans++ and kmedians++ will, by construction, choose starting centers well spread among the data. However, when the number of outliers increases and so does their distance to the grouped data, they are outperformed by their robust versions. Finally, Figure \ref{fig:Boxplot simulation 2 init} displays violinplots of the accuracies over the noisiest versions of the cluster-of-outliers simulation context, gathering $n_{outlier}=27$ and $\beta\in\{5,20\}$. Again, the K-bMOM-km++ initialisation presents better and more stable results in both accuracy and RMSE compared to the other methods.
\begin{table}[p] \begin{centering} \begin{tabular}{|lll|rrrrr|} \hline {\footnotesize{}$n_{outlier}$} & {\footnotesize{}$\beta$} & \multicolumn{1}{l}{\textbf{\footnotesize{}Initialisation}} & \textbf{\footnotesize{}RMSE} & \textbf{\footnotesize{}accuracy} & \textbf{\footnotesize{}ari} & \textbf{\footnotesize{}distortion} & \textbf{\footnotesize{}nb}\tabularnewline \hline \hline {\footnotesize{}9} & {\footnotesize{}5} & {\footnotesize{}random} & {\scriptsize{}1.429 (0.663)} & {\scriptsize{}0.791 (0.138)} & {\scriptsize{}0.609 (0.226)} & {\scriptsize{}3239.7 (1795.4)} & {\scriptsize{}3.0 (0.1)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}5} & {\footnotesize{}kmeans++} & {\scriptsize{}0.743 (0.307)} & {\scriptsize{}0.962 (0.07)} & {\scriptsize{}0.912 (0.122)} & {\scriptsize{}1193.9 (464.1)} & {\scriptsize{}3.0 (0.1)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}5} & {\footnotesize{}kmedians++} & {\scriptsize{}0.777 (0.347)} & {\scriptsize{}0.955 (0.077)} & {\scriptsize{}0.896 (0.137)} & {\scriptsize{}1239.6 (517.0)} & {\scriptsize{}3.0 (0.1)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}5} & {\footnotesize{}ROBIN} & {\scriptsize{}0.948 (0.161)} & {\scriptsize{}0.97 (0.032)} & {\scriptsize{}0.916 (0.074)} & {\scriptsize{}1437.3 (369.1)} & {\scriptsize{}3.0 (0.1)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}5} & {\footnotesize{}K-bMOM-km++} & \textbf{\scriptsize{}0.368 (0.141)} & \textbf{\scriptsize{}0.99 (0.008)} & \textbf{\scriptsize{}0.971 (0.023)} & \textbf{\scriptsize{}772.6 (125.5)} & \textbf{\scriptsize{}3.0 (0.0)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}5} & {\footnotesize{}K-bMOM-kmed++} & \textbf{\scriptsize{}0.376 (0.197)} & \textbf{\scriptsize{}0.987 (0.034)} & \textbf{\scriptsize{}0.965 (0.06)} & \textbf{\scriptsize{}790.8 (244.6)} & \textbf{\scriptsize{}3.0 (0.0)}\tabularnewline \hline {\footnotesize{}9} & {\footnotesize{}20} & {\footnotesize{}ranom} & {\scriptsize{}1.4 (0.608)} & {\scriptsize{}0.771 (0.131)} & {\scriptsize{}0.582 (0.211)} & {\scriptsize{}3220.1 (1577.0)} & {\scriptsize{}3.0 (0.2)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}20} & {\footnotesize{}kmeans++} & {\scriptsize{}1.058 (0.608)} & {\scriptsize{}0.666 (0.039)} & {\scriptsize{}0.513 (0.074)} & {\scriptsize{}3280.9 (779.8)} & {\scriptsize{}2.0 (0.1)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}20} & {\footnotesize{}kmedians++} & {\scriptsize{}0.795 (0.32)} & {\scriptsize{}0.94 (0.098)} & {\scriptsize{}0.877 (0.152)} & {\scriptsize{}1359.4 (648.8)} & {\scriptsize{}2.9 (0.3)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}20} & {\footnotesize{}ROBIN} & {\scriptsize{}0.921 (0.132)} & \textbf{\scriptsize{}0.974 (0.032)} & \textbf{\scriptsize{}0.928 (0.066)} & {\scriptsize{}1401.6 (348.9)} & \textbf{\scriptsize{}3.0 (0.1)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}20} & {\footnotesize{}K-bMOM-km++} & \textbf{\scriptsize{}0.37 (0.147)} & \textbf{\scriptsize{}0.989 (0.011)} & \textbf{\scriptsize{}0.969 (0.029)} & \textbf{\scriptsize{}772.3 (141.8)} & \textbf{\scriptsize{}3.0 (0.0)}\tabularnewline {\footnotesize{}9} & {\footnotesize{}20} & {\footnotesize{}K-bMOM-kmed++} & \textbf{\scriptsize{}0.359 (0.132)} & \textbf{\scriptsize{}0.99 (0.007)} & \textbf{\scriptsize{}0.971 (0.02)} & \textbf{\scriptsize{}763.5 (106.7)} & \textbf{\scriptsize{}3.0 (0.0)}\tabularnewline \hline {\footnotesize{}27} & {\footnotesize{}20} & {\footnotesize{}random} & {\scriptsize{}1.455 (0.705)} & {\scriptsize{}0.755 (0.137)} & {\scriptsize{}0.552 (0.22)} & 
{\scriptsize{}3656.2 (2096.8)} & {\scriptsize{}2.9 (0.3)}\tabularnewline {\footnotesize{}27} & {\footnotesize{}20} & {\footnotesize{}kmeans++} & {\scriptsize{}0.962 (0.552)} & {\scriptsize{}0.661 (0.019)} & {\scriptsize{}0.506 (0.059)} & {\scriptsize{}3264.6 (756.5)} & {\scriptsize{}2.0 (0.0)}\tabularnewline {\footnotesize{}27} & {\footnotesize{}20} & {\footnotesize{}kmedians++} & {\scriptsize{}0.925 (0.494)} & {\scriptsize{}0.807 (0.156)} & {\scriptsize{}0.707 (0.214)} & {\scriptsize{}2179.2 (1084.5)} & {\scriptsize{}2.5 (0.5)}\tabularnewline {\footnotesize{}27} & {\footnotesize{}20} & {\footnotesize{}ROBIN} & {\scriptsize{}2.036 (1.295)} & {\scriptsize{}0.38 (0.115)} & {\scriptsize{}0.068 (0.17)} & {\scriptsize{}8219.6 (2931.6)} & {\scriptsize{}1.1 (0.3)}\tabularnewline {\footnotesize{}27} & {\footnotesize{}20} & {\footnotesize{}K-bMOM-km++} & \textbf{\scriptsize{}0.548 (0.354)} & \textbf{\scriptsize{}0.94 (0.108)} & \textbf{\scriptsize{}0.89 (0.161)} & \textbf{\scriptsize{}1106.5 (652.8)} & \textbf{\scriptsize{}2.9 (0.3)}\tabularnewline {\footnotesize{}27} & {\footnotesize{}20} & {\footnotesize{}K-bMOM-kmed++} & {\scriptsize{}0.658 (0.429)} & {\scriptsize{}0.893 (0.141)} & {\scriptsize{}0.821 (0.207)} & \textbf{\scriptsize{}1329.6 (762.6)} & {\scriptsize{}2.8 (0.4)}\tabularnewline \hline \end{tabular} \par\end{centering} \caption{{\footnotesize{}Average (and standard deviation in parentheses) of RMSE, accuracies, distortions and number of clusters computed on 300 repetitions of the simulation 2 for the 6 proposed initialisation methods for different number of outliers in the cluster of outliers and different distance levels.\label{tab:Table simulation 2 init}}} \end{table} \begin{figure}[p] \begin{centering} \includegraphics[scale=0.41]{\string"violin_accuracies_simulation2_2020-01-03_14_00_01\string".png} \par\end{centering} \caption{{\footnotesize{}Violinplots of accuracies of 6 initialisation approaches according to the level of pollution of data in the context of cluster of outliers. From the less noisy context (left) to the noisiest one (right).\label{fig:Boxplot simulation 2 init}}} \begin{centering} \includegraphics[scale=0.41]{\string"violin_distortions_simulation2_2020-01-03_14_00_01\string".png} \par\end{centering} \caption{{\footnotesize{}Violinplots of distortions of 6 initialisation approaches according to the level of pollution of data in the context of cluster of outliers. From the less noisy context (left) to the noisiest one (right).\label{fig:Boxplot simulation 2 - distortions}}} \end{figure} \paragraph{Conclusion:} We showed in this Section that it seems therefore preferable to use the robust version of popular initialization methods whatever is the context (with or without outlier).\pagebreak{} \subsection{Discussion about the selection of hyperparameters linked to blocks} The good behavior of our algorithm with respect to outliers is linked to an appropriate choice of the size of blocks $n_{B}$ and the number of blocks $B$. For a known level of noise, we are able to compute lower and upper bounds respectively for the within-block size and the number of blocks as presented in Section \ref{subsec:Breakdown-points} enabling therefore to guide the practitioner. However, when the number of outlier is unknown, it is important to propose a heuristic which selects automatically the size of the blocks $n_{B}$. 
The proposed strategy is the following: the within-block size varies a priori from $K$ up to a maximum of $n/K$, and for each within-block size, the empirical risk of each block is computed and the median risk is kept and plotted. We choose $n_{B}^{*}$ as the block size corresponding to a cutting point (breakpoint) of this curve. Indeed, as the within-block size increases, the likelihood of picking an outlier in a block, and hence across the $B$ blocks, increases, which should drastically impact the empirical risk of the median block; hence the search for breakpoints in this empirical risk. \\ In order to illustrate this strategy, we consider a $2$-dimensional Gaussian mixture model with $K=3$ components of equal size $n_{1}=n_{2}=n_{3}=300$. The mean vectors are set to $\mu_{1}=[3,12],\mu_{2}=[6,3]$ and $\mu_{3}=[-6,9]$ and the variance parameter is set to $\sigma^{2}=0.6$. Twenty outliers are selected randomly from the data and their coordinates are multiplied by $50$. We consider 2 situations, fixing the number of blocks to $B=50$ and $B=100$. Figure~\ref{fig:Evolution_emprisk_hyperprms_calibration}, Figure~\ref{fig:Evolution_nb_outliers_hyperprms_calibration} and Figure~\ref{fig:Evolution_ari_hyperprms_calibration} depict, respectively, the evolution of the median empirical risk, the number of outliers present in the median block, and the Adjusted Rand Index (ARI) computed on the partitioning of the data obtained by assigning each point to the nearest centroid selected in the median block, as functions of the within-block size. We get $n_{B}^{*}\leq25$ in both cases, as can be observed from the evolution of the empirical risk of the median block in Figure \ref{fig:Evolution_emprisk_hyperprms_calibration}a for the case with $B=50$ blocks and in Figure \ref{fig:Evolution_emprisk_hyperprms_calibration}b for the case $B=100$. Note that the selection of $n_{B}^{*}$ works well in both examples and the associated clustering also seems good. Indeed, under the selected $n_{B=50}^{*}=20$ and $n_{B=100}^{*}=25$, there is no outlier present in the median block and the resulting partitioning of the data is perfect on the non-polluted data (ARI = 1). Above this cutting point, the number of outliers in the median block increases with the within-block size whereas the ARI decreases. These results show that, in practice, if one chooses a small block size and a large number of blocks, then the initialisation step is likely to be robust.
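For concreteness, the following is a minimal Python sketch of this block-size selection heuristic. It is an illustration only, not the implementation used in our experiments: the function names are ours, a plain $k$-means fit (\texttt{scikit-learn}) is used inside each block as a stand-in for the within-block fit, and the breakpoint of the resulting curve is left to visual inspection.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def block_risk(X_block, K, seed):
    """Empirical distortion of one bootstrap block after a k-means fit."""
    km = KMeans(n_clusters=K, n_init=1, random_state=seed).fit(X_block)
    return km.inertia_ / len(X_block)

def median_block_risk(X, K, n_B, B, rng):
    """Median, over B bootstrap blocks of size n_B, of the block risks."""
    risks = [block_risk(X[rng.integers(0, len(X), size=n_B)], K,
                        int(rng.integers(1 << 31))) for _ in range(B)]
    return np.median(risks)

def risk_curve(X, K, B, step=5, seed=0):
    """Median block risk as a function of the within-block size n_B."""
    rng = np.random.default_rng(seed)
    sizes = np.arange(K, len(X) // K + 1, step)
    curve = [median_block_risk(X, K, int(s), B, rng) for s in sizes]
    return sizes, np.array(curve)  # pick n_B* just before the breakpoint
\end{verbatim}
Plotting the returned curve against the block size and locating its breakpoint reproduces, under these assumptions, the selection procedure illustrated in the figures below.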
\begin{figure} \begin{centering} \subfloat[case $B=50$ blocks]{\begin{centering} \includegraphics[scale=0.2]{hyperprms_discussion_emprisk_B=50} \par\end{centering} }\subfloat[case $B=100$ blocks]{\begin{centering} \includegraphics[scale=0.2]{hyperprms_discussion_emprisk_B=100} \par\end{centering} } \par\end{centering} \caption{Evolution of the empirical risk of the median block for $B=50$ blocks (left) and $B=100$ blocks (right)\label{fig:Evolution_emprisk_hyperprms_calibration}} \begin{centering} \subfloat[case $B=50$ blocks]{\begin{centering} \includegraphics[scale=0.2]{hyperprms_discussion_nboutliers_B=50} \par\end{centering} }\subfloat[case $B=100$ blocks]{\begin{centering} \includegraphics[scale=0.2]{hyperprms_discussion_nboutliers_B=100} \par\end{centering} } \par\end{centering} \caption{Evolution of the number of outliers selected in the median block for $B=50$ blocks (left) and $B=100$ blocks (right)\label{fig:Evolution_nb_outliers_hyperprms_calibration}} \begin{centering} \subfloat[case $B=50$ blocks]{\begin{centering} \includegraphics[scale=0.2]{hyperprms_discussion_ari_B=50} \par\end{centering} }\subfloat[case $B=100$ blocks]{\begin{centering} \includegraphics[scale=0.2]{hyperprms_discussion_ari_B=100} \par\end{centering} } \par\end{centering} \caption{Evolution of the ARI obtained by the partitioning associated with the median block for $B=50$ blocks (left) and $B=100$ blocks (right)\label{fig:Evolution_ari_hyperprms_calibration}} \end{figure} \pagebreak{} \subsection{Benchmark among the robust kmeans-based algorithms} The objective of this section is to compare the performance of the K-bMOM strategy with robust k-means-based clustering algorithms in a framework with outliers. To do so, we generate $N=1500$ points of dimension $p=3$ according to a mixture of $K=5$ multivariate Gaussian densities with isotropic covariance matrices. The mean vectors of the 5 components are respectively $\mu_{1}=[0,1,4]$, $\mu_{2}=[2,1,0]$, $\mu_{3}=[0,-2,3]$, $\mu_{4}=[0,5,-5]$ and $\mu_{5}=[-1,-2,0]$. An example of data generated according to this framework is displayed in Figure~\ref{fig:Data-without-outlier}. Outliers have been generated by randomly selecting $30$ data points and multiplying their coordinates by a factor of $\pm10$. An example of the resulting polluted data is illustrated in Figure~\ref{fig:Data-with-outliers}. \begin{figure} \begin{centering} \subfloat[Data without outliers\label{fig:Data-without-outlier}]{\includegraphics[scale=0.35]{benchmark_variation_3_without_outliers} }\hspace{2ex}\subfloat[Data with outliers\label{fig:Data-with-outliers}]{\includegraphics[scale=0.35]{benchmark_variation_3_with_outliers} } \par\end{centering} \caption{Illustration of the generated data.} \end{figure} Given this context, three variations of this framework have been considered in this section: \begin{description} \item [{Variation~1}] The clusters have equal sizes and share the same spherical covariance matrix. These assumptions are well suited to the k-means procedure. \item [{Variation~2}] The clusters have unequal sizes but share the same spherical covariance matrix. \item [{Variation~3}] The clusters have unequal sizes and have different scaling parameters.
\end{description} Simulation parameters for each of these variations are detailed below: \begin{center} \begin{tabular}{|c|c|c|} \hline variation & size & scaling parameter $\sigma_{k}^{2}$\tabularnewline \hline \hline 1 & $\forall k\in\{1,\dots,5\}:n_{k}=n=300$ & $\forall k\in\{1,\dots,5\}:\sigma_{k}^{2}=\sigma^{2}=0.6$\tabularnewline \hline 2 & $n_{1}=300,n_{2}=n_{5}=100,n_{3}=400,n_{4}=600$ & $\forall k\in\{1,\dots,5\}:\sigma_{k}^{2}=\sigma^{2}=0.6$\tabularnewline \hline 3 & $n_{1}=300,n_{2}=n_{5}=100,n_{3}=400,n_{4}=600$ & $\sigma_{1}^{2}=\sigma_{4}^{2}=1,\sigma_{2}^{2}=0.4,\sigma_{3}^{2}=0.6,\sigma_{5}^{2}=0.5$\tabularnewline \hline \end{tabular} \par\end{center} \vspace{2ex} Each variation has been repeated 50 times and, each time, the kmeans-based algorithms have been initialized in the same manner with a kmeans++ procedure iterated 10 times. \\ We consider 6 different algorithms: our proposed robust clustering algorithm based on the MOM principle, a variant of it, and also 4 well-known robust versions of k-means. These methods are described below: \begin{lyxlist}{00.00.0000} \item [{\textbf{K-bmom}}] the K-bMOM algorithm introduced in Section~\ref{sec:kB-MOM-algorithm}. \item [{\textbf{Block-k-mom}}] $B$ blocks of $n_{B}$ data points are randomly selected with replacement. In each block, a k-means initialized with a kmeans++ strategy is run, and the block with the median empirical distortion is selected to define the centroids and therefore the final partition of the data. \end{lyxlist} These $k$-MOM type approaches are compared to the traditional k-means algorithm and 4 other robust k-means approaches well known in the literature: \begin{lyxlist}{00.00.0000} \item [{\textbf{k-medoids}}] aims at finding $k$ data points as centers such that the within-cluster inertia is minimized. The partition around medoids algorithm, named PAM~\cite{rdusseeun1987clustering}, aims to achieve this in two steps: an assignment step where each data point is assigned to its closest medoid, and a refinement step which looks for better medoids than the current ones. Since the search is each time exhaustive over the data, PAM has a complexity dominated by $\mathcal{O}\left(n^{2}kp\right)$. Faster versions have been proposed in~\cite{schubert2018faster}. The number of clusters $K$ needs to be set in the procedure. \item [{\textbf{k-medians}}] is a robust variant of the $k$-means algorithm~\cite{jain1988algorithms}: in the aggregation step, instead of computing the barycenter of each group as in the $k$-means procedure, $k$-medians computes the median in each single dimension, in line with its Manhattan-distance formulation. This makes the algorithm more robust to outliers and extreme values. The number of clusters $K$ needs to be specified by the practitioner. \item [{\textbf{trimmed-kmeans}}] (Trim-km) is an EM-like algorithm introduced by~\cite{cuesta1997trimmed} in the late 90s. It is derived from $k$-means and gains its robustness from the trimming performed during the maximisation step, where only the proportion $1-\alpha$ of data points closest to their assigned centroid is taken into account. Since the trimming needs to sort the data points according to their distance to their centroid, it leads to an overall complexity of $\mathcal{O}\left(nkp+n\log n\right)$ at each iteration. Besides, note that in practice the user needs to choose the trimming level $\alpha$, i.e., the proportion of data points to be discarded, and no practical guidance is given to calibrate this hyperparameter.
In the simulations, $\alpha$ is set according to the true number of outliers, i.e., $n_{outlier}$. A more general framework has been proposed by {[}Brecheteau, Fisher and Levrard{]}. They extend the previous trimming approaches to the general framework of clustering with Bregman divergences. They propose a Lloyd-type algorithm with a trimming parameter $\alpha$ which is automatically selected from the sample by a heuristic using the notion of breakdown point. \item [{\textbf{k-PDTM}}] is a robust quantization algorithm introduced by~\cite{brecheteau2018robust,brecheteau2019k} that aims to infer the manifold from which the data points are drawn. This inference is done by means of $K$ centroids that should lie on the manifold if the algorithm runs well. It is also based on a Lloyd-type algorithm where, in the updating step, the centroid is computed as the barycenter of the $q$ nearest neighbours of the barycenter of the cluster. In the assignment step, each data point is assigned according to a Bregman divergence. This algorithm has two hyperparameters: $q$, the number of neighbors used to compute the centroid, and the number of clusters $K$. \end{lyxlist} Finally, by default, for all the proposed methods having the number of clusters as a hyperparameter, we set it to its true value, i.e., $K=5$. \\ In order to compare the performances of these algorithms, the distortion and the Adjusted Rand Index (ARI) have been computed based on the true parameters of the data distribution and the label memberships. Moreover, the average number of clusters found among the non-polluted data is also displayed. \subsection*{Results and Analysis} The results of the 3 simulated contexts presented above are summarized in Tables~\ref{tab:Benchmark-Case-1}, \ref{tab:Benchmark_Case-2} and \ref{tab:Benchmark_Case-3}, where averages and standard deviations of the distortion, ARI and number of clusters describing the non-polluted data are displayed. Besides, the whole distribution over the 50 repetitions, for each metric and tested algorithm, is illustrated with violinplots in Figures~\ref{fig:Benchmark_Violinplots_ARI}, \ref{fig:Benchmark_Violinplots_distortions} and \ref{fig:Benchmark_Violinplots_nbK}, where the median of each distribution is depicted by an orange dot and the interquartile range by a thick black vertical line. First of all, one can observe that the k-means, k-median and k-medoids methods fail to discover the right number of clusters among the non-polluted data. Indeed, on average the outliers are grouped into 2 clusters and the rest of the data into 3 groups instead of 5 in the first case, as illustrated in Table~\ref{tab:Benchmark-Case-1}. On the violinplot on the left side of Figure~\ref{fig:Benchmark_Violinplots_nbK}, we can observe that none of these 3 procedures is able to find the real structure of the data: the maximum number of clusters found on the non-polluted data is about 4. This is mainly due to the initialisation process via the kmeans++ procedure, which instantiates the algorithm on one or two outliers. Thus, the Lloyd-type algorithm, whatever aggregation method is used, gets stuck in a local minimum. This situation gets worse in Cases 2 and 3, since 3 of the 5 centers are located towards outliers, as one can see in Table~\ref{tab:Benchmark_Case-2} and Table~\ref{tab:Benchmark_Case-3}, but also on the middle and right side of Figure~\ref{fig:Benchmark_Violinplots_nbK}, where the associated violinplots can be summarized by a point.
The cluster assignment in the last context for the kmedians procedure is depicted in Figure~\ref{fig:output_k-medians}. By looking at the number of clusters found among the non-polluted data, the trimmed k-means and k-pdtm algorithms seem to behave better. They tend to find the intrinsic structure (5 clusters) almost all the time, whatever the situation considered, since the average number of clusters found among the non-polluted data is around 4.9 with a very low standard deviation. However, the relevance of the data grouping decreases with the complexity of the simulated situation and strongly depends on the algorithm. Indeed, for the 3 simulated contexts, trimmed k-means has an average ARI of about 0.60 and a very large average distortion, reaching approximately 12000, i.e., 6 times the k-pdtm distortion and twice the k-means distortion, as we can observe in Tables~\ref{tab:Benchmark-Case-1}, \ref{tab:Benchmark_Case-2} and \ref{tab:Benchmark_Case-3}. The cluster assignment in Figure~\ref{fig:output_trimmed-k-means} illustrates the failure of the algorithm to discover the true partition of the data. On the other hand, the ARI for k-pdtm reaches 0.88 on average in Table~\ref{tab:Benchmark-Case-1}. Moreover, on the associated violinplot on the left side of Figure~\ref{fig:Benchmark_Violinplots_ARI}, we can see that this method performs very well: 50\% of the time (the median is represented by an orange dot), the ARI on the non-polluted data is perfect (equal to 1) and the empirical distortion is low. However, the performance of this method decreases and becomes more erratic as the complexity of the situation increases. As we can observe in the middle and right panels of Figure~\ref{fig:Benchmark_Violinplots_ARI}, the median ARI is at the same level as the average one, which is about 0.71, and the ARI values are spread almost uniformly between 0.2 and 1. An example of cluster assignment resulting from the k-pdtm procedure after 300 iterations is depicted in Figure~\ref{fig:output_k-pdtm}. Finally, even if the average performance tends to decrease slowly across the different situations, both robust versions based on the MOM principle perform well in the presence of outliers. Indeed, the intrinsic structure is almost always found in the easiest context (Case 1), as shown by the ARI, the distortions and the number of clusters. The average ARI is up to 0.98 for the K-bmom algorithm with a standard deviation around 0.03, whereas the ARI of the Block-k-mom version reaches almost 0.97. For both algorithms the median ARI reaches 1, as illustrated on the left-hand side of Figure~\ref{fig:Benchmark_Violinplots_ARI}. In the same manner, the distortion is slightly better and more stable for the K-bmom algorithm than for the Block-k-mom version, as can be observed in Table~\ref{tab:Benchmark-Case-1} and Figure~\ref{fig:Benchmark_Violinplots_distortions}. This remark remains true for the more complex contexts, where the distortion is more favorable for the K-bmom algorithm in terms of average, median and variability. In the least constrained context (Case 3), the K-bmom algorithm outperforms the other approaches, even if it remains less stable than in the easiest simulated context, as can be seen by comparing the right-hand and left-hand panels of Figure~\ref{fig:Benchmark_Violinplots_ARI}.\\ To conclude, this work provides a benchmark of robust k-means-based clustering algorithms.
Although it is still necessary to test their performance in other settings, our simulations give a preliminary overview of the performance of the MOM principle in a clustering context. Though the algorithmic principle of K-bMOM is the simplest one can think of when merging Lloyd's algorithm and the Median-of-Means design, it performs well compared to already known robust k-means-based algorithms in the presence of outliers. \begin{table}[p] \begin{centering} \subfloat[Case 1: equal cluster size and same covariance matrix ($\forall k,n_{k}=n$ and $\sigma_{k}=\sigma$)\label{tab:Benchmark-Case-1}]{\begin{centering} \begin{tabular}{|l|ll|ll|ll|} \hline methods & \multicolumn{2}{c|}{\textbf{ari (std)}} & \multicolumn{2}{c|}{\textbf{distortion (std)}} & \multicolumn{2}{c|}{\textbf{nb groups (std)}}\tabularnewline \hline \hline k-means & 0.427 & (0.185) & 7236.4 & (1677.8) & 2.56 & (0.49)\tabularnewline k-pdtm & 0.879 & (0.176) & 2168.5 & (1263.9) & 4.90 & (0.36)\tabularnewline trim-km & 0.598 & (0.110) & 14456.5 & (8554.6) & 4.84 & (0.36)\tabularnewline k-median & 0.351 & (0.151) & 12066.5 & (3859.1) & 2.52 & (0.50)\tabularnewline k-medoids & 0.418 & (0.178) & 7688.4 & (1889.1) & 2.56 & (0.49)\tabularnewline \hline K-bmom & \textbf{0.982} & \textbf{(0.034)} & \textbf{1990.9} & \textbf{(164.2)} & \textbf{4.98} & \textbf{(0.14)}\tabularnewline block-k-mom & 0.967 & (0.076) & 2189.2 & (828.6) & \textbf{4.96} & \textbf{(0.19)}\tabularnewline \hline \end{tabular} \par\end{centering} } \par\end{centering} \vspace{4ex} \begin{centering} \subfloat[Case 2: unequal cluster size but same covariance matrix ($\forall k$, $\sigma_{k}=\sigma$) \label{tab:Benchmark_Case-2}]{\begin{centering} \begin{tabular}{|l|ll|ll|ll|} \hline methods & \multicolumn{2}{c|}{\textbf{ari (std)}} & \multicolumn{2}{c|}{\textbf{distortion (std)}} & \multicolumn{2}{c|}{\textbf{nb groups (std)}}\tabularnewline \hline \hline k-means & 0.529 & (6.6e-16) & 5879.8 & (13.1) & 2.00 & (0.0)\tabularnewline k-pdtm & 0.704 & (0.246) & 2734.4 & (1265.4) & 4.88 & (0.32)\tabularnewline trim-km & 0.626 & (0.184) & 12352.3 & (12304.9) & \textbf{4.94} & \textbf{(0.24)}\tabularnewline k-median & 0.530 & (0.002) & 8309.3 & (971.0) & 2.00 & (0.0)\tabularnewline k-medoids & 0.529 & (6.6e-16) & 5980.7 & (24.9) & 2.00 & (0.0)\tabularnewline \hline K-bmom & 0.863 & (0.131) & \textbf{2263.7} & \textbf{(419.6)} & \textbf{4.94} & \textbf{(0.27)}\tabularnewline block-k-mom & \textbf{0.914} & \textbf{(0.102)} & 2544.0 & (789.3) & 4.78 & (0.41)\tabularnewline \hline \end{tabular} \par\end{centering} } \par\end{centering} \vspace{4ex} \begin{centering} \subfloat[Case 3: unequal cluster size and different spherical covariance matrix among clusters.\label{tab:Benchmark_Case-3}]{\begin{centering} \begin{tabular}{|l|ll|ll|ll|} \hline methods & \multicolumn{2}{c|}{\textbf{ari (std)}} & \multicolumn{2}{c|}{\textbf{distortion (std)}} & \multicolumn{2}{c|}{\textbf{nb groups (std)}}\tabularnewline \hline \hline k-means & 0.529 & (6.6e-16) & 5881.7 & (15.1) & 2.0 & (0.0)\tabularnewline k-pdtm & 0.655 & (0.257) & 2948.5 & (1369.4) & 4.92 & (0.27)\tabularnewline trim-km & 0.570 & (0.197) & 12336.3 & (11705.1) & 4.94 & (0.23)\tabularnewline k-median & 0.530 & (0.001) & 8561.9 & (1986.6) & 2.0 & (0.0)\tabularnewline k-medoids & 0.529 & (6.6e-16) & 5981.9 & (33.4) & 2.0 & (0.0)\tabularnewline \hline K-bmom & \textbf{0.922} & \textbf{(0.122)} & \textbf{2114.2} & \textbf{(454.4)} & \textbf{5.0} & \textbf{(0.0)}\tabularnewline block-k-mom & 0.879 & (0.111)
& 2776.6 & (841.4) & 4.8 & (0.44)\tabularnewline \hline \end{tabular} \par\end{centering} } \par\end{centering} \vspace{4ex} \caption{Distortions, ARI and number of clusters represented in the dataset without outliers, averaged over 50 repetitions of the k-means-based approaches, with their standard deviations, for the 3 frameworks.\label{tab:Benchmark_Distortions_ARI}} \end{table} \begin{figure}[p] \begin{centering} \subfloat[Violinplots of ARI\label{fig:Benchmark_Violinplots_ARI}]{\begin{centering} \includegraphics[scale=0.41]{violin_ari_benchmark} \par\end{centering} } \par\end{centering} \begin{centering} \subfloat[{Violinplots of distortions focused on the window {[}0,20000{]}\label{fig:Benchmark_Violinplots_distortions} }]{\begin{centering} \includegraphics[scale=0.41]{violin_distortion_zoom_benchmark} \par\end{centering} } \par\end{centering} \begin{centering} \subfloat[Violinplots of the number of clusters found on the non-polluted data at the end of each procedure\label{fig:Benchmark_Violinplots_nbK}]{\begin{centering} \includegraphics[scale=0.41]{violin_nbK_benchmark} \par\end{centering} } \par\end{centering} \vspace{4ex} \caption{Violinplots of the different metrics computed over 50 repetitions of the 6 kmeans-based algorithms for the 3 frameworks. The median of each distribution is depicted by an orange dot and the interquartile range by a thick black vertical line.} \end{figure} \begin{figure} \begin{centering} \subfloat[k-medians\label{fig:output_k-medians}]{\includegraphics[scale=0.4]{benchmark_variation_3_without_outliers_kmedians} }\hspace{3ex}\subfloat[trimmed k-means\label{fig:output_trimmed-k-means}]{\includegraphics[scale=0.4]{benchmark_variation_3_without_outliers_trimkmeans} } \par\end{centering} \begin{centering} \subfloat[k-pdtm\label{fig:output_k-pdtm}]{\includegraphics[scale=0.4]{benchmark_variation_3_without_outliers_pkdtm} }\hspace{3ex}\subfloat[k-Bmom\label{fig:output_k-mom}]{\includegraphics[scale=0.4]{benchmark_variation_3_without_outliers_kmom} } \par\end{centering} \caption{Examples of cluster assignment according to several procedures in the most complex simulation case (unequal cluster sizes and unequal scaling parameters in the covariance matrices). \textit{Note:} Outliers have been removed from the pictures to ease the interpretation.} \end{figure} \bibliographystyle{plain}
{ "timestamp": "2020-02-11T02:30:29", "yymm": "2002", "arxiv_id": "2002.03899", "language": "en", "url": "https://arxiv.org/abs/2002.03899", "abstract": "We propose a new clustering algorithm that is robust to the presence of outliers in the dataset. We perform Lloyd-type iterations with robust estimates of the centroids. More precisely, we build on the idea of median-of-means statistics to estimate the centroids, but allow for replacement while constructing the blocks. We call this methodology the bootstrap median-of-means (bMOM) and prove that if enough blocks are generated through the bootstrap sampling, then it has a better breakdown point for mean estimation than the classical median-of-means (MOM), where the blocks form a partition of the dataset. From a clustering perspective, bMOM enables to take many blocks of a desired size, thus avoiding possible disappearance of clusters in some blocks, a pitfall that can occur for the partition-based generation of blocks of the classical median-of-means. Experiments on simulated datasets show that the proposed approach, called K-bMOM, performs better than existing robust K-means based methods. Guidelines are provided for tuning the hyper-parameters K-bMOM in practice. It is also recommended to the practitionner to use such a robust approach to initialize their clustering algorithm. Finally, considering a simplified and theoretical version of our estimator, we prove its robustness to adversarial contamination by deriving robust rates of convergence for the K-means distorsion. To our knowledge, it is the first result of this kind for the K-means distorsion.", "subjects": "Methodology (stat.ME); Machine Learning (stat.ML)", "title": "K-bMOM: a robust Lloyd-type clustering algorithm based on bootstrap Median-of-Means", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9697854138058638, "lm_q2_score": 0.731058584489497, "lm_q1q2_score": 0.708969951875476 }
https://arxiv.org/abs/2012.13769
Population Quasi-Monte Carlo
Monte Carlo methods are widely used for approximating complicated, multidimensional integrals for Bayesian inference. Population Monte Carlo (PMC) is an important class of Monte Carlo methods, which utilizes a population of proposals to generate weighted samples that approximate the target distribution. The generic PMC framework iterates over three steps: samples are simulated from a set of proposals, weights are assigned to such samples to correct for mismatch between the proposal and target distributions, and the proposals are then adapted via resampling from the weighted samples. When the target distribution is expensive to evaluate, PMC becomes computationally limiting, since its convergence rate is $\mathcal{O}(N^{-1/2})$. To address this, we propose in this paper a new Population Quasi-Monte Carlo (PQMC) framework, which integrates Quasi-Monte Carlo ideas within the sampling and adaptation steps of PMC. A key novelty in PQMC is the idea of importance support points resampling, a deterministic method for finding an "optimal" subsample from the weighted proposal samples. Moreover, within the PQMC framework, we develop an efficient covariance adaptation strategy for multivariate normal proposals. Lastly, a new set of correction weights is introduced for the weighted PMC estimator to improve the efficiency over the standard PMC estimator. We demonstrate the improved empirical convergence of PQMC over PMC in extensive numerical simulations and a friction drilling application.
\section{Introduction} \label{sec:introduction} A fundamental challenge in Bayesian inference is the evaluation of integrals involving some multi-dimensional posterior distribution $\pi$. Generally, closed-form analytical solutions are not feasible, and Monte Carlo (MC) methods are often used for approximation. Of such methods, Markov Chain Monte Carlo (MCMC; \cite{robert2013mc}) is widely used due to ease of implementation. In recent decades, there has been renewed interest in exploring an alternative class of methods called iterated Importance Sampling (IS; \cite{robert2013mc}), which allows for parallel implementation, flexibility of adaptation, and easy assessment of approximation error over MCMC. However, the success of iterated IS depends on finding a good set of proposal distributions that mimics the target distribution $\pi$, which can be difficult when $\pi$ is high-dimensional and/or time-consuming if $\pi$ is computationally expensive to evaluate. To address this, we propose a novel Population Quasi-Monte Carlo (PQMC) framework, which integrates Quasi-Monte Carlo sampling within the Population Monte Carlo (PMC; \cite{cappe2004pmc}) -- a popular iterated IS method -- for improved sampling performance. \par The key idea in PMC is to adapt a population of proposals iteratively, to generate weighted samples which are approximately drawn from $\pi$. This adaptation idea can be traced back to \textcite{oh1993ais}, \textcite{west1993ais}, and \textcite{givens1996lais}. At each iteration, the PMC algorithm first simulates $J$ \textit{samples} from each of the $K$ proposal distributions $\{q_{k}\}_{k=1}^{K}$, i.e. $x_{k,j}\sim q_{k}$. Next, it \textit{weighs} the obtained $KJ$ samples $\{x_{k,j}\}_{k=1}^{K}{}_{j=1}^{J}$ to correct for mismatch between the proposal and target distributions. Last, it updates the $K$ proposals via \textit{resampling}, so that samples with larger weights are duplicated and samples with insignificant weights are eliminated, thereby allocating more resources for exploring higher probability regions. These \textit{sampling, weighting, adaptation} steps are then repeated for $T$ iterations, yielding a total of $N = TKJ$ weighted samples for approximating $\pi$. We note that there are other adaptation variants of PMC which do not rely on resampling, such as D-kernel PMC \parencite{douc2007dmis1,douc2007dmis2}, Mixture PMC \parencite{cappe2008mmis}, Adaptive Population Importance Sampler \parencite{martino2015apis}, and Random Walk Importance Sampler \parencite{martino2017lais}. In this paper, we only focus on the PMC with resampling adaptation scheme, which enjoys the same convergence rate of $\mathcal{O}(N^{-1/2})$ as IS. \par In the literature, there are two main \textit{weighting} strategies which both yield unbiased integral estimates: the standard importance weights $w(x_{k,j}) = \pi(x_{k,j})/q_{k}(x_{k,j})$, and the deterministic mixture weights $w(x_{k,j}) = \pi(x_{k,j})/[K^{-1}\sum_{i=1}^{K}q_{i}(x_{k,j})]$. \textcite{elvira2019mis} proved theoretically that the latter mixture weighting scheme has smaller variance for integral estimation. However, this mixture weighting strategy requires $\mathcal{O}(K^2J)$ evaluations of the proposal distributions. When $K$ is large, this is a major computational burden compared to the standard weighting strategy where only $\mathcal{O}(KJ)$ evaluations are required. One way to reduce $K$ while keeping the total number of samples $N$ fixed is to increase $J$. A large $J$, e.g. 
$J = 10$, was also proposed as a remedy to the sample impoverishment issue in resampling, i.e., it is possible for the samples to collapse to only a few particles with very large weights \parencite{carpenter1999sr}. \textcite{elvira2017dmmis} also shows empirical improvement for PMC when $J > 1$ under the deterministic mixture weighting scheme. However, with a smaller number of proposals $K$, we could lose too much information when down-sampling the $KJ$ simulated samples to $K$ particles via resampling. One solution is to incorporate Quasi-Monte Carlo (QMC; \cite{niederreiter1992random}) into the \textit{resampling} step. QMC uses a set of low discrepancy deterministic points which are well spread out over the sample space to achieve better convergence rate for integration. Thus, by obtaining $K$ ``space-filling'' (i.e., well spaced-out) points that retain the most information from the $KJ$ simulated samples via QMC resampling, we can reduce the additional Monte Carlo error introduced in the \textit{resampling} step. Moreover, the use of ``space-filling" resamples as the new proposals enables more efficient exploration of the parameter space. For this, we propose a new deterministic resampling method called importance support points (ISP) resampling, which makes use of the support points in \textcite{mak2018sp} to find the set of resamples which ``best'' represents the weighted proposal samples. \par Moreover, it is known that the QMC convergence rate can achieve $\mathcal{O}(N^{-1}(\log N)^{p-1})$ for integration in uniform hypercube, where $p$ is the dimension of the parameters \parencite{owen2013mc}. Hence, it is also beneficial to leverage QMC in the \textit{sampling} step by using low-discrepancy samples, leading to more representative points from each proposal distribution. With QMC sampling, we can also reduce $J$, the number of samples simulated from each proposal, and thus reducing $N$, the total number of posterior evaluations, while still achieving the desired precision. With the above two QMC modifications to the sampling and resampling steps of the PMC algorithm, we propose a novel Population Quasi-Monte Carlo (PQMC) framework that provides significant improvement over the PMC algorithm. The faster empirical convergence of PQMC makes it a useful tool for efficiently sampling from posterior distributions which are computationally expensive; such posteriors often arise in complex engineering applications \parencite{joseph2019mined}. In recent years, QMC has been adapted for speeding up a variety of statistical methods involving Monte Carlo, including MCMC \parencite{owen2005quasi}, density estimation \parencite{abdellah2018density}, and data reduction \parencite{mak2018sp}. However, to our knowledge, there has been little-to-no work on integrating QMC ideas to speed up PMC - this is the aim of the current paper. \par For the proposals, elliptical distributions (e.g., the multivariate normal distribution) are commonly used. Most of the PMC literature focuses on the adaptation of the location (center) parameter, and treats the covariance parameter as static throughout the algorithm. However, the covariance parameter plays a key role in determining the size of the proposal ellipsoid; a poorly chosen covariance could result in a significant mismatch to the target distribution, so the adaptation of this covariance is also essential for the success of PMC. 
By taking advantage of ISP resampling, we propose a computationally efficient adaptation scheme called \textit{lookback adaptation}. Moreover, since there is adaptation, the samples simulated in the first few iterations are not as good as the later samples. One way to address this is via the weighted PMC estimator \parencite{douc2007dmis2,portier2018ais}, which aims to ``forget'' samples from early stages. We further propose a new weighting scheme for the PQMC estimator, which is free of the integrand and the normalizing constant of the target distribution. \par The paper is organized as follows. Section~\ref{sec:importance_support_points} first reviews QMC and support points, and then introduces the proposed importance support points (ISP). Section~\ref{sec:population_quasi_monte_carlo} discusses the novel Population Quasi-Monte Carlo framework, which makes use of the proposed ISP for resampling and lookback adaptation. Section~\ref{sec:simulation} presents several simulation studies demonstrating the improvements of PQMC over the existing PMC methods. Section~\ref{sec:simulation_pmc_drilling} illustrates the usefulness of PQMC on a friction drilling model calibration application, where the posterior is computationally expensive. We conclude the article with some remarks in Section~\ref{sec:conclusion}. \section{Importance Support Points} \label{sec:importance_support_points} We first provide a brief overview of Quasi-Monte Carlo and then introduce importance support points, which are an integral part of the proposed PQMC framework. \subsection{Quasi-Monte Carlo} \label{subsec:quasi-monte_carlo} Quasi-Monte Carlo (QMC) is traditionally used for numerical integration of a function $h$ with respect to the $p$-dimensional unit hypercube $[0,1]^p$, that is \begin{equation} \label{eq:qmc1} \int_{[0,1]^{p}}h(x) dx \approx \frac{1}{N}\sum_{n=1}^{N}h(x_n)\; . \end{equation} In standard Monte Carlo, the $N$ evaluation points $\{x_n\}_{n=1}^{N}$ are sampled uniformly on $[0,1]^{p}$. It is well known that, by the Central Limit Theorem, the integration error converges at a rate of $\mathcal{O}(N^{-1/2})$. QMC aims to improve this rate by carefully choosing a set of well-spread-out points that fill the $p$-dimensional hypercube in an even and uniform way. Such sample uniformity is typically quantified via a \textit{discrepancy measure} in the QMC literature. One well-known discrepancy measure for a sample $\{x_n\}_{n=1}^N$ on $[0,1]^p$ is the star-discrepancy \parencite{niederreiter1992random}, \begin{equation} \label{eq:qmc2} D^{*}_N(\{x_n\}_{n=1}^{N}) = \sup_{a\in[0,1)^{p}}\bigg|\frac{1}{N}\sum_{n=1}^{N}\mathbbm{1}(x_n \in [0,a)) - \prod_{j=1}^{p}a_j\bigg|\;, \quad \mbox{vol}([0,a)) = \prod_{j=1}^{p}a_j \;. \end{equation} The star discrepancy measures the maximum difference between the empirical cumulative distribution of the sample $\{x_n\}_{n=1}^{N}$ and the desired uniform distribution on $[0,1]^p$. A small star discrepancy suggests a more uniform sample on $[0,1]^p$, and vice versa. When $p=1$, $D^{*}_N(\{x_n\}_{n=1}^{N})$ reduces to the well-known Kolmogorov-Smirnov statistic for testing the goodness-of-fit of a sample $\{x_n\}_{n=1}^{N}$ to $\mbox{Uniform}[0,1]$ \parencite{owen2013mc}.
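As a quick numerical illustration of \eqref{eq:qmc1} and of the role of low-discrepancy sampling, the short Python sketch below compares plain Monte Carlo with a scrambled Sobol' sequence on a smooth test integrand with a known integral. This is an illustration only; it assumes SciPy's \texttt{scipy.stats.qmc} module is available, and the test function and sample size are arbitrary choices.
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

def h(x):                        # integrand on [0,1]^2 with known integral
    return np.prod(np.exp(x) - 1.0, axis=1)   # integral equals (e - 2)^2

true_value = (np.e - 2.0) ** 2
N, p = 2 ** 12, 2

x_mc = np.random.default_rng(0).random((N, p))           # plain Monte Carlo
x_qmc = qmc.Sobol(d=p, scramble=True, seed=0).random(N)   # randomized QMC

print("MC  error:", abs(h(x_mc).mean() - true_value))
print("QMC error:", abs(h(x_qmc).mean() - true_value))
\end{verbatim}
On repeated runs one typically observes a substantially smaller error for the Sobol' points, consistent with the convergence rates discussed next.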
The Koksma-Hlawka inequality connects the integration error from \eqref{eq:qmc1} to the star-discrepancy, \begin{equation} \label{eq:qmc3} \bigg|\frac{1}{N}\sum_{n=1}^{N}h(x_n) - \int_{[0,1)^{p}}h(x)dx\bigg| \leq D^{*}_N(\{x_n\}_{n=1}^{N})V_{\text{HK}}(h)\; , \end{equation} where $V_{\text{HK}}(h)$ is the total variation of $h$ in the sense of Hardy and Krause for measuring the roughness of integrand $h$ \parencite{owen2013mc}. Equation \eqref{eq:qmc3} shows that samples which are more uniformly distributed over $[0,1]^p$ (i.e., have lower star-discrepancy) tend to result in smaller integration errors. QMC therefore studies sampling strategies which result in low star-discrepancies, as well as other discrepancy measures for which a similar Koksma-Hlawka-like bound holds. These methods achieve an integration rate of $\mathcal{O}(N^{-1}(\log N)^{p-1})$ \parencite{niederreiter1992random} under smoothness assumptions on $h$, which is faster than the MC rate. Recent developments have focused on \textit{randomized} QMC methods, which provide a randomized low-discrepancy sample, with each sample point marginally distributed as $\mbox{Uniform}[0,1]^p$. Randomized QMC allows for unbiased integral estimates, and provides relief from the curse-of-dimensionality for high-dimensional sampling \parencite{dick2013qmc}. We will later make use of randomized QMC for generating proposal samples within PQMC. One drawback of traditional QMC methods, however, is that they are mainly developed for sampling from the \textit{uniform} unit hypercube. For the resampling step in PQMC, we wish to generate a representative sample from the \textit{non-uniform} distribution for the weighted samples. We introduce next a method called \textit{importance support points}, which achieves this via an extension of a recent QMC method called support points \parencite{mak2018sp}. \subsection{Importance Support Points} \label{subsec:importance_support_points} Let us first review the support points proposed in \textcite{mak2018sp}, which generates representative samples from a target distribution $F$. \begin{definition}{\textbf{\parencite[Support Points;][]{mak2018sp}}} \label{df:support_points} Let $Y\sim F$ where $F$ is a target distribution function on $\emptyset\neq\mathcal{X}\subseteq\mathbb{R}^{p}$ with finite means. The support points $\{\xi_i\}_{i=1}^{n}$ of $F$ are \begin{equation} \label{eq:sp1} \{\xi_{i}\}_{i=1}^{n}\in \arg\min_{x_1,\ldots,x_n\in\mathcal{X}}\mathcal{E}(F,F_n) = \arg\min_{x_1,\ldots,x_n\in\mathcal{X}}\frac{2}{n}\sum_{i=1}^{n}\mathbb{E}\lVert x_i - Y\rVert_{2} - \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\lVert x_i - x_j\rVert_{2} \; . \end{equation} where $\mathcal{E}(F,F_n)$ is the energy distance \parencite{szekely2004gof,szekely2013es} between $F$ and $F_n$, and $F_n$ is the empirical distribution function for $\{x_i\}_{i=1}^{n}$. \end{definition} In the case where only samples $\{y_m\}_{m=1}^{M}$ are available on $F$ (where $M > n$), the Monte Carlo approximation of \eqref{eq:sp1} becomes: \begin{equation} \label{eq:sp2} \begin{aligned} \{\xi_i\}_{i=1}^{n}= \arg\min_{x_1,\ldots,x_n\in\mathcal{X}}\frac{2}{nM}\sum_{i=1}^{n}\sum_{m=1}^{M}\lVert x_i - y_m\rVert_{2} - \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\lVert x_i - x_j\rVert_{2} \; . \end{aligned} \end{equation} \noindent The first term in \eqref{eq:sp2} forces the support points $\{\xi_{i}\}_{i=1}^{n}$ to mimic the samples from $F$, while the second term forces these points to be as far apart from each other as possible. 
The latter is often referred to as the ``space-filling property'' in experimental design \parencite{johnson1990design}. The problem in \eqref{eq:sp2} can be efficiently solved via the convex-concave procedure \parencite{yuille2002ccp}, and is implemented in the R package \texttt{support} \parencite{mak2019spR}. We now present an extension of support points, called importance support points, which generates representative samples from a \textit{weighted} distribution for $F$. To foreshadow, these ISPs will be used for finding an ``optimal'' subsample from the weighted proposal samples. \begin{definition}{\textbf{(Importance Support Points)}} \label{df:importance_support_points} Let $\pi = \gamma/Z$ be the probability density function of the target distribution $F$ that we only know up to an unknown constant of proportionality. Let $Y\sim q$, the importance distribution, defined on the same support $\mathcal{X}$ as $F$. The importance support points $\{\xi_i\}_{i=1}^{n}$ of $F$ with respect to the importance distribution $q$ are \begin{equation} \label{eq:isp1} \{\xi_{i}\}_{i=1}^{n} \in \arg\min_{x_1,\ldots,x_n\in\mathcal{X}}\frac{2}{n}\sum_{i=1}^{n}\frac{\mathbb{E}_{q}[w(Y)\lVert x_i - Y\rVert_{2}]}{\mathbb{E}_{q}[w(Y)]} - \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\lVert x_i - x_j\rVert_{2}\; , \end{equation} where $w(\cdot) = \gamma(\cdot) / q(\cdot)$ is the unnormalized importance weight function. \end{definition} In the case where only samples $\{y_m\}_{m=1}^{M}$ are available from $q$ (where $M > n$), the self-normalized IS approximation of \eqref{eq:isp1} is \begin{equation} \label{eq:isp2} \{\xi_{i}\}_{i=1}^{n} \in \arg\min_{x_1,\ldots,x_n\in\mathcal{X}}\frac{2}{n}\sum_{i=1}^{n}\sum_{m=1}^{M}\bar{w}_m\lVert x_i - y_m\rVert_{2} - \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\lVert x_i - x_j\rVert_{2} \; , \end{equation} where $\bar{w}_m = [\gamma(y_m)/q(y_m)]/[\sum_{l=1}^{M}\gamma(y_l)/q(y_l)]$ is the normalized importance weight. This approach only requires samples $\{y_m\}_{m=1}^{M} \sim q$ where $q$ can be some simple distribution that we can generate QMC samples from. The ISP can be generalized to reduce any large set of weighted samples to a few unweighted representative points. The problem in \eqref{eq:isp2} can also be solved via the convex-concave procedure \parencite{yuille2002ccp}, and the details are presented in Appendix~\ref{appendix:importance_support_points}. Figure~\ref{fig:isp_2d} shows the $n=100$ support points for the two-dimensional axe-shaped, banana-shaped, and mixture of normals distributions. The top panels show the support points from 10{,}000 MCMC samples obtained by running the MCMC implemented in the R package \texttt{adaptMCMC} \parencite{scheidegger2018mcmcR} for 15{,}000 iterations and discarding the first 5{,}000 samples as burn-in. The bottom panels show the ISPs from 10{,}000 Sobol' points \parencite{joe2003sobol} generated by the R package \texttt{randtoolbox} \parencite{christophe2019randtoolbox} as the importance samples ($q = \mbox{Uniform}[0,1]^2$). When MCMC explores the distribution well, as for the axe-shaped distribution, the support points from MCMC samples are as good as the ISPs. However, for the banana-shaped distribution, the support points from MCMC samples cannot reflect its symmetry structure. The problem is more severe for the mixture of normals, as poor mixing on multimodal distributions is a known issue of standard MCMC.
ISPs show substantial improvement over support points generated from the MCMC samples, by making use of the density information in $F$.\par \begin{figure}[t!] \centering \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures/sp/axe_mcmc_sp.png} \caption{Axe; MCMC} \end{subfigure}% \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures/sp/banana_mcmc_sp.png} \caption{Banana; MCMC} \end{subfigure}% \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures/sp/mixture_mcmc_sp.png} \caption{Mixtures; MCMC} \end{subfigure}% \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures/sp/axe_is_sp.png} \caption{Axe; IS} \end{subfigure}% \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures/sp/banana_is_sp.png} \caption{Banana; IS} \end{subfigure}% \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures/sp/mixture_is_sp.png} \caption{Mixtures; IS} \end{subfigure}% \caption{$n = 100$ support points (red dots) generated from 10{,}000 MCMC samples (green diamonds) and 10{,}000 importance samples (green diamonds) for the two-dimensional axe-shaped, banana-shaped, and mixture of normals distributions. Lines represent the density contours.} \label{fig:isp_2d} \end{figure} On the other hand, the ISPs suffer from the same limitation as IS. The choice of the importance distribution $q$ is critical. As shown in Figure~\ref{fig:isp_2d}, a robust choice would be the uniform distribution over a region that covers the support of the target $\pi$, but we also need sufficient samples in the high-probability regions to yield good ISPs, for which the effective sample size ($N_e = [\sum_{m=1}^{N}\bar{w}_m^2]^{-1}$) is a good measure. Figure~\ref{fig:isp_2d_ess} shows the 100 ISPs for the two-dimensional standard normal obtained using 1{,}000 inverse Sobol' points of different importance distributions as the importance samples. The inverse Sobol' points are generated by first simulating the 1{,}000 Sobol' points on $[0,1]^2$ and then applying the inverse transform of the desired distribution to those points. As the variance of the proposal increases, fewer importance samples are in the key region, so the effective sample size drops and the quality of the ISPs gets worse. Thus, the quality of the ISPs depends on the effective sample size of the importance samples; having such a quantitative diagnostic can be seen as another advantage over the MCMC approach, for which there is no direct metric to evaluate the quality of the samples.\par Finally, we note that a similar form of the ``weighted'' energy criterion \eqref{eq:isp2} was recently used in \textcite{huling2020energy} to balance covariate distributions for causal inference. The key distinction is that the proposed ISPs optimize for the representative \textit{samples} given fixed weights, whereas the energy balancing weights in \textcite{huling2020energy} optimize for the \textit{weights} given fixed samples. The former problem can be challenging to solve and requires several approximations if we restrict the representative samples $\{\xi_i\}_{i=1}^{n}$ to be points from $\{y_m\}_{m=1}^{M}$ in resampling, which we discuss in the following section. \par \begin{figure}[t!]
\centering \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures/sp/normal_isp_sd1.png} \caption{$q = \mathcal{N}(0,I_2)$; ESS = 1{,}000} \end{subfigure}% \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures/sp/normal_isp_sd3.png} \caption{$q = \mathcal{N}(0,3I_2)$; ESS = 209} \end{subfigure}% \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures/sp/normal_isp_sd5.png} \caption{$q = \mathcal{N}(0,5I_2)$; ESS = 78} \end{subfigure}% \caption{$n = 100$ importance support points (red dots) for the two-dimensional standard normal, obtained from 1{,}000 inverse Sobol' points of $q$ as the importance samples (green diamonds). Lines represent the density contours. ESS stands for effective sample size.} \label{fig:isp_2d_ess} \end{figure} \section{Population Quasi-Monte Carlo} \label{sec:population_quasi_monte_carlo} \begin{algorithm}[t!] \SetAlgoLined \textbf{Target Distribution:} $\pi = \gamma / Z$ where $Z$ is the normalizing constant\; \textbf{Initialization:} set the parameters for the $K$ initial proposals $\{q_{k}^{(1)}=\mathcal{N}(\cdot|\mu_{k}^{(1)},\Sigma)\}_{k=1}^{K}$ \; \vspace{2mm} \For{$t = 1,\ldots,T$}{ $\bullet$ \textbf{Sampling:} draw $J$ samples from each proposal, \begin{equation} \label{eq:pmc1} x_{k,j}^{(t)} \sim q_{k}^{(t)}(x|\mu_{k}^{(t)}, \Sigma) \end{equation} for $k = 1,\ldots,K$ and $j = 1,\ldots,J$, so a total of $KJ$ samples are simulated. \\ $\bullet$ \textbf{Weighting:} compute the importance weights, \begin{equation} \label{eq:pmc2} w_{k,j}^{(t)} = \frac{\gamma(x_{k,j}^{(t)})}{K^{-1}\sum_{i=1}^{K}q_{i}^{(t)}(x_{k,j}^{(t)}|\mu_i^{(t)}, \Sigma)} \end{equation} for $k = 1,\ldots,K$ and $j = 1,\ldots,J$, and normalize them by \begin{equation} \label{eq:pmc3} \bar{w}_{k,j}^{(t)} = \frac{w_{k,j}^{(t)}}{\sum_{i=1}^{K}\sum_{l=1}^{J}w_{i,l}^{(t)}} \end{equation} $\bullet$ \textbf{Adaptation:} perform resampling by drawing $K$ independent samples from the discrete probability random measure \begin{equation} \label{eq:pmc4} \sum_{k=1}^{K}\sum_{j=1}^{J}\bar{w}_{k,j}^{(t)}\delta(x - x_{k,j}^{(t)}) \end{equation} to be the proposal centers $\{\mu_k^{(t+1)}\}_{k=1}^{K}$ for the next iteration. } \vspace{2mm} \textbf{Return:} $\{(x_{k,j}^{(t)}, w_{k,j}^{(t)})\}_{t=1}^{T}{}_{k=1}^{K}{}_{j=1}^{J}$ where $w_{k,j}^{(t)}$ is the unnormalized weight for sample $x_{k,j}^{(t)}$. \caption{PMC Algorithm with Normal Proposals and Static Global Covariance} \label{algo:pmc} \end{algorithm} We now integrate the aforementioned QMC ideas within the PMC framework. For reference, Algorithm~\ref{algo:pmc} outlines the generic PMC procedure, with normal proposal distributions, static global covariance (all proposals share the same covariance), and the deterministic mixture weighting strategy \eqref{eq:pmc2} from \textcite{elvira2017dmmis}. We introduce next novel modifications to incorporate QMC into the \textit{sampling} and \textit{adaptation} steps of PMC, yielding the proposed Population Quasi-Monte Carlo (PQMC) framework. We also consider the use of normal proposals in this paper for ease of illustration, but the results presented can be generalized to any elliptical distribution, such as the multivariate $t$-distribution. Algorithm~\ref{algo:pqmc} outlines the steps for PQMC. We discuss in detail below three novel developments of this PQMC framework: the Quasi-Monte Carlo proposals in the \textit{sampling} step, the importance support point resampling and the lookback adaptation in the \textit{adaptation} step.
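Before describing these developments, the short Python sketch below illustrates the sampling and deterministic-mixture weighting steps \eqref{eq:pmc1}--\eqref{eq:pmc3} of Algorithm~\ref{algo:pmc} for normal proposals with a shared covariance. It is an illustration only: \texttt{log\_gamma} (an assumed, vectorized handle to the unnormalized target $\gamma$) and the other names are ours, and the sampling shown here is plain Monte Carlo, which PQMC replaces with scrambled Sobol' points as described below.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal as mvn

def sample_and_weight(mu, Sigma, J, log_gamma, rng):
    """One sampling/weighting pass: mu is (K, p), Sigma is (p, p)."""
    K, p = mu.shape
    # Sampling step: J draws from each of the K normal proposals.
    x = np.vstack([rng.multivariate_normal(mu[k], Sigma, size=J)
                   for k in range(K)])
    # Weighting step: deterministic mixture denominator.
    mix = np.mean([mvn.pdf(x, mean=mu[k], cov=Sigma) for k in range(K)],
                  axis=0)
    log_w = log_gamma(x) - np.log(mix)
    w = np.exp(log_w - log_w.max())     # stabilized unnormalized weights
    return x, w / w.sum()               # normalized weights
\end{verbatim}
The adaptation step would then act on the returned weighted samples, either by multinomial resampling as in Algorithm~\ref{algo:pmc} or by the ISP resampling introduced below.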
\begin{algorithm}[t!] \SetAlgoLined \textbf{Target Distribution:} $\pi = \gamma / Z$ where $Z$ is the normalizing constant\; \textbf{Initialization:} set the parameters for the $K$ initial proposals $\{q_{k}^{(1)}=\mathcal{N}(\cdot|\mu_{k}^{(1)},\Sigma^{(1)})\}_{k=1}^{K}$ \; \vspace{2mm} \For{$t = 1,\ldots,T$}{ $\bullet$ \textbf{Sampling:} simulate $J$ scrambled Sobol' points $\{u_{k,j}^{(t)}\}_{j=1}^{J}$ and apply equation \eqref{eq:qmcs} to obtain $\{x_{k,j}^{(t)}\}_{j=1}^{J}$ for $k = 1,\ldots,K$, so a total of $KJ$ samples are simulated. \\ $\bullet$ \textbf{Weighting:} apply the deterministic mixture weighting strategy as in PMC \eqref{eq:pmc2} and normalize the weights by \eqref{eq:pmc3}. \\ $\bullet$ \textbf{Adaptation:} perform ISP resampling (Algorithm~\ref{algo:isprs}) with respect to the weighted samples $\{(x_{k,j}^{(t)},\bar{w}_{k,j}^{(t)})\}_{k=1}^{K}{}_{j=1}^{J}$ to obtain new proposal centers $\{\mu_{k}^{(t+1)}\}_{k=1}^{K}$, and apply lookback adaptation \eqref{eq:ca4} for updating the global covariance $\Sigma^{(t+1)}$. } \vspace{2mm} \textbf{Return:} $\{(x_{k,j}^{(t)}, w_{k,j}^{(t)})\}_{t=1}^{T}{}_{k=1}^{K}{}_{j=1}^{J}$ where $w_{k,j}^{(t)}$ is the unnormalized weight for sample $x_{k,j}^{(t)}$. \caption{PQMC with Normal Proposals and Adaptive Global Covariance} \label{algo:pqmc} \end{algorithm} \subsection{Quasi-Monte Carlo Proposals} \label{subsec:quasi_monte_calo_sampling} By applying the reparameterization trick, we can represent a $p$-dimensional random variable $x_n\sim\mathcal{N}(\mu,\Sigma)$ by a continuous function defined on a new variable $u_n \sim \mbox{Uniform}[0,1]^{p}$, \begin{equation} \label{eq:qmcs} x_n = g(u_n) = \mu + \Sigma^{1/2}[\Phi^{-1}(u_{n1}),\ldots,\Phi^{-1}(u_{np})]^{T}\; , \end{equation} where $g$ is the inverse transform of the multivariate normal distribution. \textcite{fang1994ntm} show that by applying $g$ to a set of low discrepancy points $\{u_n\}_{n=1}^{N}$, the resulting set $\{x_n = g(u_n)\}_{n=1}^{N}$ also has low F-discrepancy, defined as the largest discrepancy between the cumulative distribution function $F$ and the empirical distribution function $F_{N}$ constructed from $\{x_n\}_{n=1}^{N}$ over the support $\mathcal{X}$. Moreover, following the discussion of QMC in Subsection~\ref{subsec:quasi-monte_carlo}, randomized QMC is preferred over plain QMC. Thus, for the sampling step, we use Owen-style scrambled Sobol' points \parencite{owen1998sobol} for $\{u_n\}_{n=1}^{N}$, which are available in the R package \texttt{randtoolbox} \parencite{christophe2019randtoolbox}, and apply \eqref{eq:qmcs} to obtain the samples $\{x_n\}_{n=1}^{N}$. \subsection{Importance Support Points Resampling} \label{subsec:isp_resampling} Resampling is commonly used for adapting the location (center) parameters of the proposals in PMC. It was first introduced by \textcite{rubin1987sir} as Sampling-Importance Resampling, and plays a key role in Sequential Monte Carlo (SMC) to deal with the weight degeneracy problem \parencite{chen2003smc,del2006smc,cappe2007smc}. Let $\{\xi_i\}_{i=1}^{n}$ be the resamples for any normalized weighted samples $\{(y_m,\bar{w}_m)\}_{m=1}^{M}$ where $\xi_i \in \{y_m\}_{m=1}^{M}$. Assume that there are $n_m$ copies of $y_m$ in the resamples, i.e., $\sum_{i=1}^{n}\mathbbm{1}(\xi_i = y_m) = n_m$, so $n = \sum_{m=1}^{M} n_m$.
The goal is to have the resampled empirical distribution function $\tilde{F}_n(y) = n^{-1}\sum_{i=1}^{n}\mathbbm{1}(\xi_i\leq y) = n^{-1}\sum_{m=1}^{M}n_m\mathbbm{1}(y_m\leq y)$ be as close to the original empirical distribution function $\hat{F}_M(y) = \sum_{m=1}^{M}\bar{w}_m\mathbbm{1}(y_m\leq y)$ as possible. As mentioned in \textcite{hol2006rs}, when the resampled density and the original weighted density are close, we expect that for any integrand $h$, the squared integration error, \begin{equation} \label{eq:isprs1} \mathbb{E}\bigg[\bigg(\int h(y)d\tilde{F}_n(y) - \int h(y)d\hat{F}_M(y)\bigg)^2\bigg] = \mathbb{E}\bigg[\bigg(\sum_{m=1}^{M}\frac{n_m - n \bar{w}_m}{n}h(y_m)\bigg)^2\bigg] \end{equation} should also be small. In the case of normal proposals, assuming that all covariances are the same, i.e., $\Sigma_{k} = \Sigma$, and considering that $h(\cdot) = \mathcal{N}(x|\cdot,\Sigma)$ for any $x \in \mathcal{X}$, adapting the proposal centers by resampling amounts to finding, as the proposal for the next iteration, a mixture of $K$ equally weighted normals that best approximates the mixture of $KJ$ weighted normals whose centers are the $KJ$ simulated samples and whose weights are computed by \eqref{eq:pmc3}. \textcite{mak2018sp} present a Koksma-Hlawka-like bound that upper bounds the squared integration error \eqref{eq:isprs1} by a term proportional to the energy distance for a large class of integrands $h$. Thus, we propose a deterministic resampling method that finds the resampled point set $\{\xi_i\}_{i=1}^{n}$ by minimizing the energy distance to the weighted samples $\{(y_m,\bar{w}_m)\}_{m=1}^{M}$, leading to the optimization, \begin{equation} \label{eq:isprs3} \begin{aligned} \{\xi_{i}\}_{i=1}^{n} \in & \arg\min_{x_1,\ldots,x_n\in\{y_m\}_{m=1}^{M}}\hat{\mathcal{E}}\bigg(\{(y_m,\bar{w}_m)\}_{m=1}^{M}, \{x_i\}_{i=1}^{n}\bigg) \\ = & \arg\min_{x_1,\ldots,x_n\in\{y_m\}_{m=1}^{M}}\frac{2}{n}\sum_{i=1}^{n}\sum_{m=1}^{M}\bar{w}_m\lVert x_i - y_m\rVert_{2} - \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\lVert x_i - x_j\rVert_{2} \; . \end{aligned} \end{equation} Let us call this ISP resampling. Problem \eqref{eq:isprs3} is the same optimization problem as that of the ISPs \eqref{eq:isp2}, but under the constraint that $\xi_i \in \{y_m\}_{m=1}^{M}\; \forall i$, making it an integer programming problem that is much harder to solve. \par \begin{algorithm}[t!] \SetAlgoLined \textbf{Objective:} optimize the ISP problem \eqref{eq:isprs3}. \\ \vspace{2mm} \textbf{Distance Computing:} compute and store the pairwise distances of $\{y_m\}_{m=1}^{M}$. \\ \vspace{2mm} \textbf{Greedy Initialization:} conditional on finding $\{\xi_{j}\}_{j=1}^{i-1}$, $\xi_{i}$ is obtained by \begin{equation} \label{eq:isprs4} \xi_i = \arg\min_{x\in\{y_m\}_{m=1}^{M}}\frac{2}{i}\sum_{m=1}^{M}\bar{w}_m\lVert x - y_m\rVert_{2} - \frac{2}{i^2}\sum_{j=1}^{i-1}\lVert x - \xi_j\rVert_{2}\; . \end{equation} Solve \eqref{eq:isprs4} for $i=1,\ldots,n$ sequentially to obtain the initial resamples $\{\xi_i\}_{i=1}^{n}$. \\ \vspace{2mm} \textbf{Point Refinement:} for $i = 1,\ldots,n$, fixing $\{\xi_{j}\}_{j\neq i}$, refine $\xi_i$ by \begin{equation} \label{eq:isprs5} \xi_i = \arg\min_{x\in\{y_m\}_{m=1}^{M}}\frac{2}{n}\sum_{m=1}^{M}\bar{w}_m\lVert x - y_m\rVert_{2} - \frac{2}{n^2}\sum_{\substack{j=1\\j\neq i}}^{n}\lVert x - \xi_j\rVert_{2}\; . \end{equation} Repeat the above until the energy distance \eqref{eq:isprs3} converges. \\ \vspace{2mm} \textbf{Return:} $\{\xi_i\}_{i=1}^{n}$, the set of ISP resamples.
\caption{Importance Support Points Resampling} \label{algo:isprs} \end{algorithm} We propose a quadratic-runtime sequential optimization procedure, presented in Algorithm~\ref{algo:isprs}, to approximately solve \eqref{eq:isprs3}. The algorithm consists of three parts: \textit{distance computing}, \textit{greedy initialization}, and \textit{point refinement}. In \textit{distance computing}, we compute and store the pairwise distances of the $M$ samples, which are used extensively in the other two parts. We then obtain an initial set of resamples $\{\xi_i\}_{i=1}^{n}$ from \textit{greedy initialization} by sequentially solving \eqref{eq:isprs4} for $i=1,\ldots,n$. Problem \eqref{eq:isprs4} can be solved by first computing the objective value for each $x\in\{y_m\}_{m=1}^{M}$ using the pre-computed pairwise distances, and then locating the $y_m$ with the smallest objective value. The key idea of the \textit{greedy initialization} is that, conditional on having the best $(i-1)$-point resamples $\{\xi_{j}\}_{j=1}^{i-1}$, we find the best $i$-th resample $\xi_{i}$ from $\{y_m\}_{m=1}^{M}$ such that the energy distance between $\{\xi_j\}_{j=1}^{i-1}\cup\{\xi_{i}\}$ and $\{(y_m,\bar{w}_m)\}_{m=1}^{M}$ is minimized. However, the resamples from the \textit{greedy initialization} could be a local optimum. Thus, we propose the \textit{point refinement} to improve each $\xi_i$ by \eqref{eq:isprs5}, that is, fixing the other $(n-1)$ resamples $\{\xi_j\}_{j\neq i}$, we update $\xi_i$ to improve the energy distance \eqref{eq:isprs3}. Problem \eqref{eq:isprs5} is solved similarly to \eqref{eq:isprs4}. In practice, fewer than 10 repetitions of the \textit{point refinement} step are needed for convergence. The proposed algorithm splits the $n$-variable optimization problem into $n$ smaller optimization problems, each with only one variable, making it feasible to solve in polynomial time. The main computational complexity of the algorithm is $\mathcal{O}(M^2)$ from computing the pairwise distances of the $M$ weighted samples. The use of the energy distance as the resampling criterion also has a natural connection to the Cram{\'e}r-von Mises criterion \parencite{cramer1928l2d,anderson1962l2d}, a well-known goodness-of-fit measure. Indeed, one can show that the energy distance is a multivariate extension of the Cram{\'e}r-von Mises criterion which preserves rotation-invariance \parencite{szekely2013es}. This new resampling criterion is favored over the traditional approaches for its direct connection to the Cram{\'e}r-von Mises criterion and the minimization of the squared integration error \eqref{eq:isprs1} via a Koksma-Hlawka-like bound, whereas multinomial resampling \parencite{gordon1993mr}, stratified resampling \parencite{kitagawa1996sr}, residual resampling \parencite{liu1998rr}, and systematic resampling \parencite{carpenter1999sr} all aim to minimize \begin{equation} \label{eq:isprs2} \mathbb{E}[(n_m - n\bar{w}_m)^2] = (\mathbb{E}[n_m] - n\bar{w}_m)^2 + \mathbb{V}[n_m] = \mathbb{V}[n_m] \end{equation} where the last equality holds since $\mathbb{E}[n_m] = n\bar{w}_m$ for these unbiased schemes \parencite{douc2005rs}. Though ISP resampling requires quadratic runtime, in PMC the number of samples simulated at each iteration, $M = KJ$, is of moderate size, making the use of a quadratic-runtime algorithm acceptable.
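To make the procedure concrete, the following Python/NumPy sketch implements the greedy initialization \eqref{eq:isprs4} and point refinement \eqref{eq:isprs5} steps of Algorithm~\ref{algo:isprs}. It is an illustrative sketch rather than the reference implementation: the function and variable names are ours, and duplicated resample indices are allowed, as in \eqref{eq:isprs3}.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist

def isp_resample(y, w, n, n_refine=10):
    """Select n resamples from weighted samples (y, w), with y an (M, p)
    array and w the normalized weights, by minimizing the energy distance."""
    D = cdist(y, y)          # pairwise distances, O(M^2) storage
    c = D @ w                # c[m] = sum_l w[l] * ||y_m - y_l||

    # Greedy initialization (eq. isprs4): add one resample at a time.
    idx = []
    for i in range(1, n + 1):
        cross = D[:, idx].sum(axis=1) if idx else 0.0
        obj = (2.0 / i) * c - (2.0 / i ** 2) * cross
        idx.append(int(np.argmin(obj)))

    # Point refinement (eq. isprs5): update each resample, others fixed.
    for _ in range(n_refine):
        changed = False
        for i in range(n):
            others = idx[:i] + idx[i + 1:]
            obj = (2.0 / n) * c - (2.0 / n ** 2) * D[:, others].sum(axis=1)
            best = int(np.argmin(obj))
            if best != idx[i]:
                idx[i], changed = best, True
        if not changed:      # the energy distance can no longer improve
            break
    return y[idx]
\end{verbatim}
Both loops only look up rows of the pre-computed distance matrix, which is what keeps the overall cost dominated by the $\mathcal{O}(M^2)$ distance computation.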
Also, with the ISP resampling, we can allow $K$ to be small without losing too much information in the resampling step, so the additional computational burden can be offset by the reduction in computational cost from the $\mathcal{O}(K^2J)$ evaluations of the proposal distributions in the deterministic mixture weighting strategy. \par Figure~\ref{fig:resample_2d} shows the 100-point resampled point set for the mixture of normals from the importance samples of 10{,}000 Sobol' points over $[0,1]^2$ using multinomial, systematic, and ISP resampling. Visually, the 100 points from ISP resampling serve as a better set of proposal centers, since these points not only better capture the shape of the target distribution, but are also well spaced out from one another (``space-filling''), which allows for better exploration.\par \begin{figure}[t!] \centering \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures/resample/multinomial.png} \caption{Multinomial} \end{subfigure}% \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures/resample/systematic.png} \caption{Systematic} \end{subfigure}% \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures/resample/support_points.png} \caption{ISP} \end{subfigure}% \caption{$n = 100$ resamples from 10{,}000 Sobol' points as importance samples for the mixture of normals using multinomial, systematic, and ISP resampling. Lines represent the density contours.} \label{fig:resample_2d} \end{figure} \subsection{Covariance Adaptation} \label{subsec:covariance_adaptation} Finally, we present the adaptation procedures for updating the covariance matrices of the proposal distributions, which is critical for the success of PQMC. At each iteration, the set of equally weighted proposals can be seen as a kernel density approximation of the target distribution, where each proposal plays the role of a kernel \parencite{elvira2017dmmis}. However, finding the optimal kernel covariances often relies on cross-validation, which is computationally expensive. On the other hand, as proposed by \textcite{cappe2008mmis} for Mixture PMC and by \textcite{ji2013amcmc} for adaptive MCMC, an alternative solution is to find the set of covariances $\{\Sigma_{k}^{(t+1)}\}_{k=1}^{K}$ that minimizes the Kullback-Leibler (KL) divergence between the target density $\pi$ and the normal mixture proposal $K^{-1}\sum_{k=1}^{K}\mathcal{N}(x|\mu_{k}^{(t+1)},\Sigma_{k}^{(t+1)})$ for the next iteration when the proposals have different covariances, \begin{equation} \label{eq:ca1} \begin{aligned} \{\Sigma_k^{(t+1)}\}_{k=1}^{K} \in& \arg\min_{C_{1},\ldots,C_{K}\in S^{p}_{+}} KL\bigg(\pi(x) \bigg|\bigg| \frac{1}{K}\sum_{k=1}^{K}\mathcal{N}(x|\mu_{k}^{(t+1)},C_{k})\bigg) \\ =& \arg\min_{C_{1},\ldots,C_{K}\in S^{p}_{+}}\bigg(\int_{\mathcal{X}}\pi(x)\log\pi(x)dx - \int_{\mathcal{X}}\pi(x)\log\bigg[\frac{1}{K}\sum_{k=1}^{K}\mathcal{N}(x|\mu_{k}^{(t+1)},C_{k})\bigg] dx\bigg) \\ =& \arg\max_{C_{1},\ldots,C_{K}\in S^{p}_{+}} \int_{\mathcal{X}}\pi(x)\log\bigg[\frac{1}{K}\sum_{k=1}^{K}\mathcal{N}(x|\mu_{k}^{(t+1)},C_{k})\bigg] dx\; , \end{aligned} \end{equation} where $\{\mu_k^{(t+1)}\}_{k=1}^{K}$ are obtained from resampling.
Recall that at the $t$-th iteration, we have a set of weighted samples $\{(x_{k,j}^{(t)}, \bar{w}_{k,j}^{(t)})\}_{k=1}^{K}{}_{j=1}^{J}$ approximately simulated from $\pi$, leading to the importance sampling approximation of \eqref{eq:ca1}, \begin{equation} \label{eq:ca2} \{\Sigma_k^{(t+1)}\}_{k=1}^{K} \in \arg\max_{C_{1},\ldots,C_{K}\in S^{p}_{+}}\sum_{k=1}^{K}\sum_{j=1}^{J}\bar{w}_{k,j}^{(t)}\log\bigg[\frac{1}{K}\sum_{i=1}^{K}\mathcal{N}(x_{k,j}^{(t)}|\mu_{i}^{(t+1)},C_{i})\bigg]\; . \end{equation} $\{\Sigma_k^{(t+1)}\}_{k=1}^{K}$ can be estimated by applying Expectation-Maximization \parencite[EM;][]{dempster1977em,wu1983em} to the Gaussian mixture model with fixed weights $1/K$ and fixed centers $\{\mu_k^{(t+1)}\}_{k=1}^{K}$ on the $t$-th iteration's weighted samples $\{(x_{k,j}^{(t)}, \bar{w}_{k,j}^{(t)})\}_{k=1}^{K}{}_{j=1}^{J}$. The EM should converge in around 10 steps since only the covariances are estimated. Let us call this the exact covariance adaptation. However, it is computationally expensive, as each EM step requires $\mathcal{O}(K^2J)$ evaluations of the proposal distribution. \par Consider the special case where all the proposals in the same iteration share one global covariance matrix, i.e., $\Sigma_1^{(t+1)} = \cdots = \Sigma_K^{(t+1)} = \Sigma^{(t+1)}$. We propose the lookback covariance adaptation, which does not require additional evaluations of the proposal distribution. The idea is that after several iterations of PQMC, the samples should have converged to the desired regions, so the proposal centers will not vary much from iteration to iteration, apart from changes in orientation, when ISP resampling is used. Thus, the lookback covariance adaptation optimizes over the previous centers $\{\mu_k^{(t)}\}_{k=1}^{K}$, which gives \begin{equation} \label{eq:ca3} \Sigma^{(t+1)} = \arg\max_{C\in S^{p}_{+}}\sum_{k=1}^{K}\sum_{j=1}^{J}\bar{w}_{k,j}^{(t)}\log\bigg[\frac{1}{K}\sum_{i=1}^{K}\mathcal{N}(x_{k,j}^{(t)}|\mu_{i}^{(t)},C)\bigg]\; . \end{equation} We perform a one-step EM update using $\Sigma^{(t)}$ as the initial value, leading to the closed-form update \begin{equation} \label{eq:ca4} \Sigma^{(t+1)} = \sum_{k=1}^{K}\sum_{j=1}^{J}\bar{w}_{k,j}^{(t)}\frac{\mathcal{N}(x_{k,j}^{(t)}|\mu_{k}^{(t)},\Sigma_{k}^{(t)})}{\sum_{i=1}^{K}\mathcal{N}(x_{k,j}^{(t)}|\mu_{i}^{(t)},\Sigma_{i}^{(t)})}(x_{k,j}^{(t)} - \mu_{k}^{(t)})(x_{k,j}^{(t)} - \mu_{k}^{(t)})^{T} \; , \end{equation} where all the required evaluations of the proposal distributions have already been performed in the weighting step. Though the lookback covariance adaptation can also be used jointly with the traditional resampling methods, the performance would not be as good, since it relies on the assumption that the proposal centers vary little, apart from rotation, as the algorithm converges; the traditional resampling methods may fail to satisfy this assumption because they do not take the space-filling property of the resamples into account. \subsection{Weighted PMC Estimator} \label{subsec:weighted_pmc} With the above modifications, the PQMC method (Algorithm \ref{algo:pqmc}) returns a set of weighted samples $\{(x_{k,j}^{(t)}, w_{k,j}^{(t)})\}_{t=1}^{T}{}_{k=1}^{K}{}_{j=1}^{J}$. This can then be used to construct the following PQMC estimator of $\mathbb{E}_{\pi}[h(X)]$ for a desired integrand $h$.
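Before turning to the estimator, we give for concreteness a minimal NumPy sketch of the lookback update \eqref{eq:ca4} from the preceding subsection; the array layout (samples stored as a $K\times J\times p$ array, with \texttt{x[k, j]} drawn from proposal $k$) and the function name are our own choices.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def lookback_covariance(x, w, mu_prev, sigma_prev):
    """One-step lookback covariance update, cf. Eq. (ca4).
    x : (K, J, p) samples from iteration t, x[k, j] drawn from proposal k;
    w : (K, J) normalized importance weights; mu_prev : (K, p) centers;
    sigma_prev : (K, p, p) covariances, all from iteration t."""
    K, J, p = x.shape
    flat = x.reshape(K * J, p)
    # dens[k, j, i] = N(x[k, j] | mu_prev[i], sigma_prev[i])
    dens = np.column_stack(
        [multivariate_normal.pdf(flat, mean=mu_prev[i], cov=sigma_prev[i])
         for i in range(K)]).reshape(K, J, K)
    sigma_new = np.zeros((p, p))
    for k in range(K):
        resp = dens[k, :, k] / dens[k].sum(axis=1)   # responsibility of proposal k
        d = x[k] - mu_prev[k]                        # (J, p) deviations from mu_k
        sigma_new += np.einsum('j,ja,jb->ab', w[k] * resp, d, d)
    return sigma_new
\end{verbatim}
The exact covariance adaptation \eqref{eq:ca2} would instead iterate EM updates of all $K$ covariances, at the cost of re-evaluating the proposal densities at each step.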
When the normalizing constant $Z$ is known, the standard PMC estimator for $\mathbb{E}_{\pi}[h(X)]$ is \begin{equation} \label{eq:pmce1} \hat{I}^{\text{PMC}} = \frac{1}{Z}\bigg(\frac{1}{TKJ}\sum_{t=1}^{T}\sum_{k=1}^{K}\sum_{j=1}^{J}w_{k,j}^{(t)}h(x_{k,j}^{(t)})\bigg) = \frac{1}{T}\sum_{t=1}^{T}\hat{I}^{\text{PMC}}_{t}\; , \end{equation} where $\hat{I}^{\text{PMC}}_{t} = \frac{1}{Z}(\frac{1}{KJ}\sum_{k=1}^{K}\sum_{j=1}^{J}w_{k,j}^{(t)}h(x_{k,j}^{(t)}))$ is the estimator constructed using only the weighted samples from the $t$-th iteration. If $Z$ is unknown, we can replace it by the consistent estimator \begin{equation} \label{eq:pmce2} \hat{Z}^{\text{PMC}} = \frac{1}{TKJ}\sum_{t=1}^{T}\sum_{k=1}^{K}\sum_{j=1}^{J}w_{k,j}^{(t)}\; . \end{equation} We can see that the standard PMC estimator can be viewed as the simple average of $T$ different estimators, each constructed from the weighted samples simulated at the corresponding iteration. However, when there is adaptation, the standard PMC estimator is not efficient, since better samples are obtained as the algorithm proceeds. The weighted PMC (WPMC) estimator assigns a set of correction weights $\{\alpha^{(t)}\}_{t=1}^{T}$, with the constraint that $\sum_{t=1}^{T}\alpha^{(t)} = 1$, to the $T$ estimators, allowing the estimator to ``forget'' the poor samples simulated at the early stages. When the normalizing constant $Z$ is known, the WPMC estimator for $\mathbb{E}_{\pi}[h(X)]$ is \begin{equation} \label{eq:wpmce1} \hat{I}^{\text{WPMC}} = \sum_{t=1}^{T}\alpha^{(t)}\hat{I}^{\text{PMC}}_{t} = \frac{1}{Z}\bigg(\frac{1}{KJ}\sum_{t=1}^{T}\sum_{k=1}^{K}\sum_{j=1}^{J}\alpha^{(t)}w_{k,j}^{(t)}h(x_{k,j}^{(t)})\bigg)\; . \end{equation} If $Z$ is unknown, we replace it by the following consistent estimator \begin{equation} \label{eq:wpmce2} \hat{Z}^{\text{WPMC}} = \frac{1}{KJ}\sum_{t=1}^{T}\sum_{k=1}^{K}\sum_{j=1}^{J}\alpha^{(t)}w_{k,j}^{(t)}\; . \end{equation} Let $N_e^{(t)}$ denote the effective sample size of the weighted samples simulated at the $t$-th iteration. We propose the correction weights $\{\alpha^{(t)}\}_{t=1}^{T}$ that approximately minimize the variance of $\hat{I}^{\text{WPMC}}$, \begin{equation} \label{eq:wpmce3} \alpha^{(t)} = \frac{N_e^{(t)}}{\sum_{i=1}^{T} N_e^{(i)}}\; . \end{equation} The proposed weights are proportional to the effective sample size, assigning larger weights to the estimators that are more reliable. Appendix~\ref{appendix:wpmc} provides further justification of these weights. This choice of weights does not depend on the integrand $h$ and does not require knowing the normalizing constant. The idea of using the effective sample size to weight the estimators from different iterations is also mentioned in the Adaptive Population Importance Sampler \parencite{martino2015apis}. \section{Simulation Results} \label{sec:simulation} In this section, we report simulation results to demonstrate the improvement from our proposed importance support points resampling and Population Quasi-Monte Carlo algorithm. More simulation results can be found in Appendix~\ref{appendix:simulation}. Source code and tutorials can be found at \url{https://github.com/BillHuang01/PQMC}. \subsection{Importance Support Points Resampling} \label{subsec:simulation_resampling} \begin{figure}[t!]
\centering \includegraphics[width=0.9\textwidth]{figures/resample/ess.png} \caption{LogMSEs in the estimation of the IS estimator and $\mathbb{E}[X]$, where $X\sim\mathcal{N}(0,I_{p})$, for $p = 2,\ldots,20$ using 100 resampled points from the 1{,}000 inverse Sobol' points of $q = \mathcal{N}(0,\sqrt{2}I_{p})$ as the importance samples, by multinomial, systematic, and ISP resampling. MSEs for multinomial and systematic resampling are averaged over 100 independent runs. ESS stands for effective sample size. Lines denote the logMSEs, and shaded bands mark the 10th and 90th quantiles.} \label{fig:resample_ess} \end{figure} Let $X\sim\mathcal{N}(0,I_{p})$, the $p$-dimensional standard normal distribution, be the target distribution. Consider $M = 1{,}000$ inverse Sobol' points of the importance distribution $\mathcal{N}(0,\sqrt{2}I_{p})$ as the importance samples $\{(y_m,\bar{w}_m)\}_{m=1}^{M}$, where $\bar{w}_m$ is the normalized importance weight for $y_m$; then $\hat{I}_{M} = \sum_{m=1}^{M}\bar{w}_m y_m$ is the Importance Sampling (IS) estimator for $\mathbb{E}[X]$. Next, we simulate $n=100$ resamples $\{\xi_i\}_{i=1}^{n}$ using multinomial, systematic, and ISP resampling on the importance samples; then $\tilde{I}_n = n^{-1}\sum_{i=1}^{n}\xi_i$ is the Monte Carlo (MC) estimator for $\mathbb{E}[X]$. We repeat this 100 times to obtain 100 MC estimators $\{\tilde{I}_n^{(l)}\}_{l=1}^{100}$; then $\mbox{MSE}(\hat{I}_M) = 100^{-1}\sum_{l=1}^{100}p^{-1}(\tilde{I}_n^{(l)} - \hat{I}_M)^{T}(\tilde{I}_n^{(l)} - \hat{I}_M)$ is a good empirical approximation of the squared integration error in \eqref{eq:isprs1} with $h(y) = y$, where the error is averaged over the $p$ components. The top panel of Figure~\ref{fig:resample_ess} shows the $\mbox{MSE}(\hat{I}_M)$ in log for $p = 2,\ldots,20$. Empirically, we see that ISP resampling enjoys a squared integration error of $\mathcal{O}(n^{-3})$ in 2 dimensions and $\mathcal{O}(n^{-2})$ up to 20 dimensions, outperforming the other two resampling methods. However, the ISP resampling suffers from the small effective sample size of the importance samples as the dimension increases. One might also be interested in how well the MC estimator using $\{\xi_i\}_{i=1}^{n}$ approximates $\mathbb{E}[X]$. A good empirical measure is $\mbox{MSE}(\mathbb{E}[X]) = 100^{-1}\sum_{l=1}^{100}p^{-1}(\tilde{I}_n^{(l)} - \mathbb{E}[X])^{T}(\tilde{I}_n^{(l)} - \mathbb{E}[X])$. The middle panel of Figure~\ref{fig:resample_ess} shows the $\mbox{MSE}(\mathbb{E}[X])$ in log for $p = 2,\ldots,20$. The performance of the estimator using the 100 resamples from ISP resampling is almost as good as that of the IS estimator using the 1{,}000 importance samples, showing that ISP resampling can retain most of the information from the original importance samples $\{(y_m,\bar{w}_m)\}_{m=1}^{M}$. Consider a more carefully chosen importance distribution $q = \mathcal{N}(0,3^{(2/p^{0.8})}I_{p})$ such that the effective sample sizes are similar for $p = 2,\ldots,20$ (Figure~\ref{fig:resample_rhd} in Appendix~\ref{appendix:simulation}): the ISP resampling then only suffers slightly from the curse of dimensionality. Thus, with proper adaptation of the proposals in PQMC, the ISP resampling appears to be quite robust for this high dimensional problem. We only compare the ISP resampling to multinomial resampling, for its simplicity, and systematic resampling, for its good empirical performance mentioned in the literature \parencite[e.g.][]{douc2005rs}.
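A minimal sketch of this experiment with the multinomial baseline is given below; the value of \texttt{sd\_q} is our reading of $\mathcal{N}(0,\sqrt{2}I_{p})$ as having covariance $\sqrt{2}I_{p}$ (per-coordinate standard deviation $2^{1/4}$), the function name is illustrative, and the ISP resamples can be obtained by replacing the multinomial draw with the \texttt{isp\_resample} sketch given earlier.
\begin{verbatim}
import numpy as np
from scipy.stats import norm, qmc

def resampling_mse(p, M=1000, n=100, sd_q=2 ** 0.25, n_rep=100, seed=0):
    """Empirical squared integration error of the resampled mean estimate:
    target N(0, I_p), proposal N(0, sd_q^2 I_p), multinomial resampling."""
    rng = np.random.default_rng(seed)
    u = qmc.Sobol(d=p, scramble=True, seed=seed).random(M)   # randomized Sobol' points
    y = sd_q * norm.ppf(u)                                   # inverse-transformed proposal samples
    logw = norm.logpdf(y, 0, 1).sum(1) - norm.logpdf(y, 0, sd_q).sum(1)
    w = np.exp(logw - logw.max()); w /= w.sum()              # normalized importance weights
    I_hat = w @ y                                            # IS estimate of E[X]
    errs = []
    for _ in range(n_rep):
        xi = y[rng.choice(M, size=n, p=w)]                   # multinomial resamples
        errs.append(np.mean((xi.mean(axis=0) - I_hat) ** 2)) # error averaged over p coordinates
    return float(np.mean(errs))
\end{verbatim}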
\par \subsection{Two Dimensional PQMC Example} \label{subsec:simulation_pmc_2d} Consider a two-dimensional multimodal distribution that consists of a mixture of five normals, \begin{equation} \label{eq:pmc_2d} \pi(x) = \frac{1}{5}\sum_{i=1}^{5}\mathcal{N}(x|\mu_i,\Sigma_i)\; , \end{equation} where $\mu_1 = [0.250,0.250]^{T}$, $\mu_2 = [0.500,0.900]^{T}$, $\mu_3 = [0.825,0.700]^{T}$, $\mu_4 = [0.275, 0.675]^{T}$, $\mu_5 = [0.850, 0.150]^{T}$, $\Sigma_1 = 40^{-2}[2,0.6;0.6,1]$, $\Sigma_2 = 40^{-2}[2, -0.4; -0.4,2]$, $\Sigma_3 = 40^{-2}[2,0.8;0.8,2]$, $\Sigma_4 = 40^{-2}[3,0;0,0.5]$, and $\Sigma_5 = 40^{-2}[2,-0.1;-0.1,2]$. The example is from \textcite{elvira2017dmmis} but with proper scaling so that the main support of $\pi$ is inside $[0,1]^2$; the density contours are shown in Figure~\ref{fig:resample_2d}. The mean $\mathbb{E}_{\pi}[X] = [0.540,0.535]^{T}$ and the normalizing constant $Z = 1$ can both be computed analytically, so we can validate the performance of the PQMC and PMC. We use the Mean Squared Error (MSE) of the estimates as the evaluation metric. \par Let us compare the PQMC described in Algorithm~\ref{algo:pqmc} to the generic PMC outlined in Algorithm~\ref{algo:pmc}, both with normal proposals and a global covariance. For the PMC, we consider two resampling methods: multinomial and systematic. We also apply the lookback covariance adaptation to the PMC. We run both PMC and PQMC for $T = 10$ iterations but vary $K$ and $J$ while keeping $KJ = 1{,}000$, for a total of $TKJ = 10{,}000$ evaluations of the target distribution. The initial proposal centers are selected as the $K$ Sobol' points over $[0,1]^2$. We use the same isotropic covariance matrix $\sigma^2 I_2$ for all the initial proposals, i.e., $\Sigma_{k}^{(1)} = \sigma^2 I_2 \; \forall k$, with $\sigma = 0.1,0.2,0.5$. For schemes without covariance adaptation, we fix the covariances for all iterations as in \textcite{elvira2017dmmis}, i.e., $\Sigma_{k}^{(t)} = \sigma^2 I_2 \; \forall k,t$ with the specified $\sigma$. For those with lookback adaptation, we keep the adapted covariance isotropic for simplicity, i.e., $\Sigma_{k}^{(t)} = (\sigma^{(t)})^2 I_2 \; \forall k,t$, where the adaptation is performed on $\sigma^{(t)}$ only. We compute the MSEs for both the standard PMC estimator and the weighted PMC estimator, averaged over 100 independent runs. \par \begin{table}[t!]
\centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|cccc|ccc|} \hline Estimator & Algorithm & K & J & $\sigma = 0.1$ & $\sigma = 0.2$ & $\sigma = 0.5$ \\ \hline Standard & PMC (Multinomial) & 25 & 40 & -8.09 [-16.09,-5.07] & -8.56 [-12.98,-6.53] & -8.14 [-14.36,-6.58] \\ Standard & PMC (Systematic) & 25 & 40 & -8.78 [-14.32,-5.74] & -8.71 [-14.14,-7.22] & -8.17 [-13.48,-6.19] \\ Standard & PMC (Multinomial + Lookback) & 25 & 40 & -7.65 [-14.67,-5.38] & -7.78 [-13.79,-5.33] & -8.71 [-13.15,-5.70] \\ Standard & PMC (Systematic + Lookback) & 25 & 40 & -8.30 [-16.08,-5.02] & -8.35 [-13.94,-5.55] & -8.62 [-17.77,-5.80] \\ Standard & PQMC (ISP + Lookback) & 25 & 40 & -12.33 [-15.82,-10.03] & -11.18 [-16.12,-9.71] & -9.63 [-14.32,-7.58] \\ \hdashline Weighted & PMC (Multinomial) & 25 & 40 & -7.97 [-14.49,-5.02] & -8.51 [-12.58,-6.54] & -8.12 [-12.68,-6.50] \\ Weighted & PMC (Systematic) & 25 & 40 & -8.65 [-14.86,-5.60] & -8.65 [-13.08,-7.07] & -8.13 [-12.95,-6.31] \\ Weighted & PMC (Multinomial + Lookback) & 25 & 40 & -7.44 [-15.62,-5.23] & -7.45 [-14.44,-4.92] & -7.98 [-15.27,-4.98] \\ Weighted & PMC (Systematic + Lookback) & 25 & 40 & -8.05 [-18.16,-4.94] & -7.91 [-16.29,-5.01] & -8.14 [-15.74,-5.06] \\ Weighted & PQMC (ISP + Lookback) & 25 & 40 & \textcolor{red}{\textbf{-15.04}} [-20.62,-13.35] & \textcolor{red}{\textbf{-14.54}} [-18.66,-13.20] & \textcolor{red}{\textbf{-13.81}} [-18.44,-12.02] \\ \hline Standard & PMC (Multinomial) & 50 & 20 & -9.87 [-14.56,-8.41] & -8.79 [-13.66,-7.22] & -8.02 [-13.36,-6.12] \\ Standard & PMC (Systematic) & 50 & 20 & -10.13 [-13.36,-8.92] & -8.90 [-12.88,-7.27] & -7.95 [-12.39,-6.40] \\ Standard & PMC (Multinomial + Lookback) & 50 & 20 & -10.71 [-14.68,-9.02] & -10.31 [-13.66,-8.48] & -9.03 [-14.87,-5.70] \\ Standard & PMC (Systematic + Lookback) & 50 & 20 & -11.03 [-17.20,-9.43] & -10.14 [-13.85,-8.80] & -9.25 [-16.24,-7.67] \\ Standard & PQMC (ISP + Lookback) & 50 & 20 & -11.98 [-16.74,-10.40] & -11.01 [-15.59,-9.26] & -9.39 [-14.29,-7.50] \\ \hdashline Weighted & PMC (Multinomial) & 50 & 20 & -9.99 [-14.50,-8.61] & -8.67 [-13.80,-7.01] & -8.01 [-13.23,-6.17] \\ Weighted & PMC (Systematic) & 50 & 20 & -10.15 [-16.04,-8.83] & -8.82 [-12.89,-7.17] & -7.92 [-13.42,-6.50] \\ Weighted & PMC (Multinomial + Lookback) & 50 & 20 & -11.83 [-17.14,-9.66] & -11.93 [-16.13,-10.03] & -9.45 [-16.80,-4.99] \\ Weighted & PMC (Systematic + Lookback) & 50 & 20 & -12.25 [-16.88,-10.86] & -11.85 [-17.57,-10.19] & -11.57 [-15.62,-9.85] \\ Weighted & PQMC (ISP + Lookback) & 50 & 20 & \textbf{-14.89} [-19.18,-13.25] & \textbf{-14.35} [-18.07,-12.61] & \textbf{-13.11} [-17.29,-11.55] \\ \hline Standard & PMC (Multinomial) & 100 & 10 & -9.78 [-15.37,-7.89] & -9.01 [-13.16,-7.47] & -7.92 [-11.18,-6.49] \\ Standard & PMC (Systematic) & 100 & 10 & -10.12 [-15.29,-8.24] & -8.97 [-12.88,-7.71] & -8.05 [-11.68,-6.49] \\ Standard & PMC (Multinomial + Lookback) & 100 & 10 & -10.99 [-15.21,-8.59] & -10.29 [-15.70,-8.78] & -9.19 [-14.28,-7.23] \\ Standard & PMC (Systematic + Lookback) & 100 & 10 & -10.77 [-17.62,-9.13] & -10.18 [-15.17,-8.41] & -9.29 [-14.83,-7.80] \\ Standard & PQMC (ISP + Lookback) & 100 & 10 & -11.65 [-18.61,-9.59] & -10.84 [-14.62,-9.13] & -9.52 [-14.28,-7.74] \\ \hdashline Weighted & PMC (Multinomial) & 100 & 10 & -9.82 [-15.40,-7.54] & -8.99 [-13.54,-7.42] & -7.88 [-12.04,-6.50] \\ Weighted & PMC (Systematic) & 100 & 10 & -10.23 [-14.81,-8.35] & -8.92 [-12.98,-7.50] & -7.99 [-11.90,-6.45] \\ Weighted & PMC (Multinomial + Lookback) & 100 & 10 & -12.46 
[-15.29,-11.10] & -12.36 [-19.14,-10.70] & -11.80 [-18.31,-10.16] \\ Weighted & PMC (Systematic + Lookback) & 100 & 10 & -12.51 [-16.59,-11.24] & -12.20 [-16.80,-10.66] & -11.76 [-15.13,-9.88] \\ Weighted & PQMC (ISP + Lookback) & 100 & 10 & \textbf{-14.34} [-19.00,-13.08] & \textbf{-13.89} [-19.40,-12.41] & \textbf{-12.89} [-18.01,-11.05] \\ \hline \end{tabular} } \caption{LogMSEs in the estimation of $\mathbb{E}_{\pi}[X]$ for the two dimensional mixture of five normals using different values of $K$, $J$, and $\sigma$ with the initial proposal centers being the $K$ Sobol' points over $[0,1]^2$. The number of evaluations of the target distribution is fixed to $TKJ = 10{,}000$. The MSEs are averaged over 100 independent runs and shown in log in the format ``mean [min,max]''. The best results for each value of $\sigma$ are highlighted in red bold-face.} \label{tab:pmc_2d_full_m} \end{table} Table~\ref{tab:pmc_2d_full_m} shows the MSEs in log for the estimation of $\mathbb{E}_{\pi}[X]$. The weighted PMC estimator on PQMC samples outperforms all PMC settings for different values of $K$, $J$, and $\sigma$, demonstrating the significant improvement from PQMC. Moreover, PQMC is robust even for small $K$. Recall that the deterministic mixture weighting scheme requires $\mathcal{O}(K^2J)$ evaluations of the proposal distributions; thus, by being able to use a small $K$, PQMC could reduce the computational cost of proposal evaluations, somewhat offsetting the additional computational burden the ISP resampling brings over the traditional resampling methods. For the PMC algorithm, having the lookback covariance adaptation generally improves the performance, especially when the initial $\sigma$ is chosen poorly. Also, when $\sigma$ is adapted, the weighted PMC estimator is preferred. However, when the number of proposals $K$ is small, using lookback covariance adaptation in PMC can sometimes fail for multimodal distributions. The reason is that if many samples (many more than $K$) fall in the high-density regions, random resampling is likely to return $K$ particles that come from only a few of the modes rather than from all modes that contain samples. The same issue also explains the poorer weighted estimator for PMC samples when $K$ is small. Because of its space-filling property, ISP resampling in PQMC does not suffer from the aforementioned issue. Table~\ref{tab:pmc_2d_full_z} in the Appendix shows the MSEs in log for the estimation of the normalizing constant $Z$, and similar conclusions can be drawn. \par Now consider a ``bad'' initialization by using $K$ Sobol' points over $[0.4,0.6]^2$ for the initial proposal centers, as in \textcite{elvira2017dmmis}, to further test the robustness of PQMC. Table~\ref{tab:pmc_2d_sub_m} in the Appendix shows the MSEs in log for the estimation of $\mathbb{E}_{\pi}[X]$ using the ``bad'' initialization. When $\sigma = 0.2 \mbox{ or } 0.5$, the weighted PMC estimator on PQMC samples again outperforms PMC across the different settings of $K$ and $J$. When the initial $\sigma = 0.1$ is too small, the performance of both PMC and PQMC is poor, since they both fail to discover all the modes of the target distribution. A similar conclusion can be drawn from the MSEs in log for the estimation of the normalizing constant $Z$ presented in Table~\ref{tab:pmc_2d_sub_z} in the Appendix.
This shows that the proposed PQMC is robust against the ``bad'' initialization of the proposal centers as long as the initial proposal covariances are large enough that at least a few of the simulated samples at the initial iteration land in the key regions. \subsection{High Dimensional PQMC Example} \label{subsec:simulation_pmc_hd} Consider a ten-dimensional multimodal distribution that consists of a mixture of three normals, \begin{equation} \label{eq:mixture_10d} \pi(x) = \frac{1}{3}\sum_{i=1}^{3}\mathcal{N}(x|\mu_i,\Sigma_i)\; , \end{equation} where $\mu_{1,j} = 0.375$ for $j = 1,\ldots,10$, $\mu_{2,j} = 0.575$ for $j = 1,\ldots,10$, $\mu_{3,j} = 0.700$ for $j = 1,\ldots,10$, and $\Sigma_{1} = \Sigma_{2} = \Sigma_{3} = 0.2^2 I_{10}$. This example is also from \textcite{elvira2017dmmis} but with proper scaling so that the main support of $\pi$ is inside $[0,1]^{10}$. The mean $\mathbb{E}_{\pi}[X_j] = 0.550$ for $j = 1,\ldots,10$ and the normalizing constant $Z = 1$. We again use the Mean Squared Error (MSE) of the estimates for the performance evaluation. Similar to the experiment setup for the two dimensional problem in Subsection~\ref{subsec:simulation_pmc_2d}, we compare the PQMC to the PMC with and without the covariance adaptation. We run both PMC and PQMC for $T = 10$ iterations but vary $K$ and $J$ while keeping $KJ = 2{,}000$, for a total of $TKJ = 20{,}000$ evaluations of the target distribution. The initial proposal centers are the $K$ Sobol' points over $[0,1]^{10}$. We use the same isotropic covariance matrix $\sigma^2 I_{10}$ for all the initial proposals and keep the adapted covariance isotropic. We compute the MSEs for both the standard PMC estimator and the weighted PMC estimator, averaged over 100 independent runs. Table~\ref{tab:pmc_10d_full_m} shows the MSEs in log for the estimation of $\mathbb{E}_{\pi}[X]$. Similar to the conclusion drawn for the two-dimensional example, the weighted PMC estimator on PQMC samples outperforms all PMC settings for different values of $K$, $J$, and $\sigma$. Also, significant improvements are observed for the PMC algorithms that have covariance adaptation, especially under the weighted PMC estimator. Table~\ref{tab:pmc_10d_full_z} in the Appendix shows the MSEs in log for the estimation of the normalizing constant $Z$. \par \begin{table}[t!]
\centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|cccc|ccc|} \hline Estimator & Algorithm & K & J & $\sigma = 0.1$ & $\sigma = 0.2$ & $\sigma = 0.5$ \\ \hline Standard & PMC (Multinomial) & 50 & 40 & -4.95 [-8.68,-2.88] & -10.06 [-13.00,-7.35] & -6.66 [-8.34,-5.30] \\ Standard & PMC (Systematic) & 50 & 40 & -4.98 [-9.01,-2.74] & -10.04 [-11.98,-7.07] & -6.69 [-8.55,-5.48] \\ Standard & PMC (Multinomial + Lookback) & 50 & 40 & -8.66 [-11.93,-5.19] & -9.70 [-11.91,-6.08] & -8.71 [-11.12,-6.95] \\ Standard & PMC (Systematic + Lookback) & 50 & 40 & -7.86 [-11.94,-4.11] & -10.14 [-11.84,-8.31] & -8.66 [-11.38,-6.00] \\ Standard & PQMC (ISP + Lookback) & 50 & 40 & -8.90 [-12.31,-4.98] & -10.10 [-13.07,-7.40] & -8.85 [-11.26,-6.61] \\ \hdashline Weighted & PMC (Multinomial) & 50 & 40 & -5.75 [-8.51,-3.36] & -11.00 [-12.44,-9.32] & -7.03 [-8.62,-5.74] \\ Weighted & PMC (Systematic) & 50 & 40 & -5.94 [-8.93,-3.28] & -11.17 [-12.95,-9.69] & -7.02 [-8.57,-5.96] \\ Weighted & PMC (Multinomial + Lookback) & 50 & 40 & -10.84 [-12.81,-9.31] & -10.91 [-12.80,-9.25] & -10.82 [-12.09,-9.37] \\ Weighted & PMC (Systematic + Lookback) & 50 & 40 & -10.77 [-12.69,-9.51] & -11.10 [-12.64,-9.67] & -10.85 [-12.29,-9.69] \\ Weighted & PQMC (ISP + Lookback) & 50 & 40 & \textbf{-12.06} [-13.92,-10.90] & \textbf{-12.13} [-13.63,-10.87] & \textbf{-11.95} [-13.91,-10.87] \\ \hline Standard & PMC (Multinomial) & 100 & 20 & -5.32 [-8.05,-2.89] & -10.44 [-12.62,-8.19] & -6.67 [-8.01,-5.44] \\ Standard & PMC (Systematic) & 100 & 20 & -5.40 [-8.00,-2.73] & -10.74 [-12.84,-9.83] & -6.58 [-8.62,-4.73] \\ Standard & PMC (Multinomial + Lookback) & 100 & 20 & -6.96 [-12.08,-3.59] & -10.56 [-12.66,-8.82] & -8.43 [-10.61,-5.94] \\ Standard & PMC (Systematic + Lookback) & 100 & 20 & -7.69 [-12.36,-4.00] & -10.68 [-12.84,-8.50] & -8.26 [-10.87,-5.92] \\ Standard & PQMC (ISP + Lookback) & 100 & 20 & -8.97 [-12.16,-6.19] & -10.80 [-12.65,-7.81] & -8.62 [-11.76,-5.94] \\ \hdashline Weighted & PMC (Multinomial) & 100 & 20 & -6.47 [-8.52,-4.55] & -11.31 [-12.99,-10.13] & -7.02 [-8.65,-5.64] \\ Weighted & PMC (Systematic) & 100 & 20 & -6.69 [-9.23,-4.40] & -11.33 [-12.79,-9.77] & -6.90 [-8.85,-5.33] \\ Weighted & PMC (Multinomial + Lookback) & 100 & 20 & -11.35 [-13.52,-9.49] & -11.41 [-13.58,-10.22] & -11.10 [-13.00,-9.81] \\ Weighted & PMC (Systematic + Lookback) & 100 & 20 & -11.42 [-12.85,-10.44] & -11.33 [-12.93,-10.34] & -11.21 [-12.89,-9.45] \\ Weighted & PQMC (ISP + Lookback) & 100 & 20 & \textcolor{red}{\textbf{-12.11}} [-13.55,-10.89] & \textcolor{red}{\textbf{-12.25}} [-13.58,-11.22] & \textcolor{red}{\textbf{-11.98}} [-13.49,-10.80] \\ \hline \end{tabular} } \caption{LogMSEs in the estimation of $\mathbb{E}_{\pi}[X]$ for the ten dimensional mixture of three normals using different values of $K$, $J$, and $\sigma$ with the initial proposal centers being the $K$ Sobol' points over $[0,1]^{10}$. The number of evaluations of the target distribution is fixed to $TKJ = 20{,}000$. The MSEs are averaged over 100 independent runs and shown in log under format ``mean [min,max]". The best results for each value of $\sigma$ are highlighted in red bold-face.} \label{tab:pmc_10d_full_m} \end{table} \begin{figure}[t!] \centering \includegraphics[width=0.9\textwidth]{figures/pmc/pmc_hd_z_comp_mc.png} \caption{LogMSEs in the estimation of $Z$ for the mixture of three normals with $K = 50$, $J = 40$, and $T = 10$ for $p = 2,\ldots,20$. The initial proposal centers are the $K$ Sobol' points over $[0,1]^{p}$. 
The initial proposal covariances are $0.2^2 I_{p}$ and are updated by lookback adaptation. The estimation is by the weighted PMC estimator. The MSEs are averaged over 100 independent runs.} \label{fig:pmc_hd2_z_comp} \end{figure} To better study the performance of PQMC as the dimension $p$ increases, we change the dimension for the mixture of three normals in \eqref{eq:mixture_10d} while keeping the same structure for the means and covariances. We compare the performance of PQMC to PMC with covariance adaptation. We run the algorithms for $T = 10$ iterations with $K = 50$ proposals and $J = 40$ samples simulated from each proposal. We use the same isotropic covariance matrix $0.2^2 I_{p}$ for the initial proposal covariances and keep the covariances isotropic after lookback adaptation. We use the weighted PMC estimator because of its empirical improvement over the standard PMC estimator when the proposal covariances are adapted. Figure~\ref{fig:pmc_hd2_z_comp} shows the evolution of the MSEs in log for the estimation of the normalizing constant $Z = 1$ as the dimension increases. We can see that PQMC outperforms the PMC for all dimensions up to $p = 20$ in this example, but the improvement diminishes as the dimension goes up. The diminishing improvement is more obvious for the estimation of the mean $\mathbb{E}_{\pi}[X]$, presented in Figure~\ref{fig:pmc_hd2_m_comp} in the Appendix. \section{Expensive Posterior Example: Friction Drilling} \label{sec:simulation_pmc_drilling} \begin{figure}[t!] \centering \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures/drilling/fem.png} \caption{FEM} \end{subfigure}% \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures/drilling/calibrated.png} \caption{Calibrated} \end{subfigure}% \caption{Left Panel: FEM outputs at three values of the friction coefficient versus the physical experiment output. Right Panel: Calibrated model output at the posterior means, obtained by the weighted PMC estimators on PQMC samples, versus the physical experiment output.} \label{fig:drilling_output} \end{figure} \textcite{miller2007drilling} develop a thermomechanical finite element model (FEM) to simulate a friction drilling process for analyzing the relationship between the thrust force ($y$) and the tool travel distance ($x$). There is an unknown parameter, the friction coefficient ($\eta$), in the FEM that one has to specify to obtain the FEM output. The left panel of Figure~\ref{fig:drilling_output} shows the FEM outputs of thrust force over the tool travel distance for three different values of the friction coefficient. A physical experiment is also performed to validate the FEM, and the actual experiment output is also presented in the left panel of Figure~\ref{fig:drilling_output}. From the plot, \textcite{miller2007drilling} conclude that $\eta = 0.7$ is the best choice for the coefficient of friction, but there is still a large discrepancy in the FEM predictions of the thrust force. A further investigation shows that, due to the deflection of the sheet at the initial contact with the tool, the tool travel in the physical experiment is less than the tool travel input to the FEM, causing the discrepancy. However, fixing this in the FEM code is difficult and computationally expensive. \textcite{joseph2015calibration} propose an engineering-driven statistical adjustment that can reduce the discrepancy in a more efficient way.
\par Following the steps described in Section 5 of \textcite{joseph2019mined}, let $y = g(x;\eta)$ be the FEM, and introduce two adjustment parameters $\gamma_{1}$ and $\gamma_{2}$ such that $y = g(\gamma_{1}x^{\gamma_{2}};\eta)$, where $\gamma_{1}\in[0,1]$ accounts for the deflection at the initial contact and $\gamma_{2}$ reflects that the deflection could change during tool travel. $\gamma_{2} > 1$ indicates that a longer travel distance results in larger deflection. Thus, the calibration problem reduces to a nonlinear regression problem, \begin{equation} \label{eq:drilling1} y_i = g(\gamma_{1}x_{i}^{\gamma_{2}};\eta) + \epsilon_{i}\; , \end{equation} where $\epsilon_{i} \overset{\text{iid}}{\sim} \mathcal{N}(0,\sigma^2)$. However, the FEM $g(\cdot;\cdot)$ in the nonlinear regression is expensive to compute, so we approximate it using a Gaussian Process (GP), \begin{equation} \label{eq:drilling2} \hat{g}(x;\eta) = \exp\{\hat{\mu} + r(x;\eta)^{T}R^{-1}(\log y^{\text{FEM}} - \hat{\mu}1)\}\; , \end{equation} where $\hat{\mu} = 1^{T}R^{-1}\log y^{\text{FEM}}/ 1^{T}R^{-1}1$, $r(x;\eta)$ is the correlation vector, and $R$ is the correlation matrix, both using the Gaussian correlation function $R(h) = \exp\{-\sum_{i}\theta_i h_i^2\}$. We use the R package \texttt{GPfit} \parencite{macdonald2015gpfitR} to fit the model. We use Bayesian inference to estimate the friction coefficient ($\eta$) and the two adjustment parameters ($\gamma_1,\gamma_2$), where the model is \begin{equation} \label{eq:drilling3} \begin{aligned} y_i &\overset{\text{iid}}{\sim} \mathcal{N}(\hat{g}(\gamma_{1}x_{i}^{\gamma_{2}};\eta), \sigma^2) \; \forall i = 1,\ldots,N \\ \eta &\sim p(\eta; 0.5, 1, 10, 10) \\ \gamma_1 &\sim p(\gamma_1; 0.5, 1, 10, 100) \\ \gamma_2 &\sim p(\gamma_2; 0.75, 1.25, 10, 10) \\ p(\sigma^{2}) &\propto 1 / \sigma^2 \end{aligned} \end{equation} where $p(x;a,b,\lambda_{a},\lambda_{b}) = \exp\{\lambda_a(x-a)\}I(x<a) + I(a\leq x\leq b) + \exp\{-\lambda_b(x-b)\}I(x>b)$ is a prior that is uniform for $x\in[a,b]$ and has exponential tails for $x\notin[a,b]$. It follows that the posterior distribution is \begin{equation} \label{eq:drilling4} p(\eta,\gamma_{1},\gamma_{2},\sigma^2|y)\propto\frac{1}{\sigma^{N}}\exp\bigg\{-\frac{1}{2\sigma^2}\sum_{i=1}^{N}[y_i - \hat{g}(\gamma_{1}x_{i}^{\gamma_{2}};\eta)]^2\bigg\}\times p(\eta)p(\gamma_1)p(\gamma_2)p(\sigma^2) \; . \end{equation} We can integrate out $\sigma^2$, leading to the log posterior distribution, \begin{equation} \label{eq:drilling5} \log p(\eta,\gamma_{1},\gamma_{2}|y) = \mbox{const.} - \frac{N}{2}\log\bigg(\sum_{i=1}^{N}[y_i - \hat{g}(\gamma_{1}x_{i}^{\gamma_{2}};\eta)]^2\bigg) + \log p(\eta) + \log p (\gamma_1) + \log p (\gamma_2)\; . \end{equation} There are 332 observations for the FEM, so one evaluation of the GP approximation is expensive, and we have to compute it for each of the $N = 96$ data points in the physical experiment. One evaluation of the posterior distribution takes more than 10 seconds on an average laptop. \par We compare the performance of PQMC to PMC with covariance adaptation. We run both algorithms for $T = 7$ iterations with $K = 13$ proposals and $J = 7$ samples drawn from each proposal, for a total of $TKJ = 637$ evaluations of the posterior distribution.
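For reference, a minimal sketch of the log posterior \eqref{eq:drilling5} that each of these evaluations computes is given below; the emulator interface \texttt{g\_hat} is a hypothetical stand-in for the fitted GP approximation \eqref{eq:drilling2}, and only the residual term and the priors of \eqref{eq:drilling5} are implemented.
\begin{verbatim}
import numpy as np

def log_prior(x, a, b, lam_a, lam_b):
    """log p(x; a, b, lam_a, lam_b): uniform on [a, b], exponential tails outside."""
    return np.where(x < a, lam_a * (x - a),
                    np.where(x > b, -lam_b * (x - b), 0.0))

def log_posterior(eta, g1, g2, x_obs, y_obs, g_hat):
    """Log posterior of Eq. (drilling5), up to an additive constant.
    g_hat(x, eta) stands in for the GP emulator of the FEM (hypothetical interface)."""
    N = len(y_obs)
    resid = y_obs - g_hat(g1 * x_obs ** g2, eta)
    return (-0.5 * N * np.log(np.sum(resid ** 2))
            + log_prior(eta, 0.5, 1.0, 10, 10)
            + log_prior(g1, 0.5, 1.0, 10, 100)
            + log_prior(g2, 0.75, 1.25, 10, 10))
\end{verbatim}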
The initial centers are the 13 lattice points over $[0.5,1]\times[0.5,1]\times[0.75,1.25]$ that cover the key region of the prior, and the minimax measure of the 13 points is 0.3 in the region $[0.5,1]\times[0.5,1]\times[0.75,1.25]$, computed using the \texttt{minimaxdesign} package in R \parencite{mak2019minimaxR}. Thus, we use the isotropic covariance matrix $0.2^2 I_{3}$ for all the initial proposals, such that the proposals at the first iteration cover the main region of the prior, and we keep the adapted covariances isotropic. We compute the posterior means by the weighted PMC estimator. \par \begin{figure}[t!] \centering \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=\textwidth]{figures/drilling/eta_mc.png} \caption{$\eta$} \end{subfigure}% \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=\textwidth]{figures/drilling/gamma1_mc.png} \caption{$\gamma_{1}$} \end{subfigure}% \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=\textwidth]{figures/drilling/gamma2_mc.png} \caption{$\gamma_{2}$} \end{subfigure}% \caption{Histograms of the MCMC samples and the weighted marginal densities of the PQMC and PMC (multinomial and systematic) samples.} \label{fig:drilling_density} \end{figure} The right panel of Figure~\ref{fig:drilling_output} shows the predictions from the calibrated model using the posterior means ($\hat{\eta} = 0.756$, $\hat{\gamma}_{1} = 0.869$, $\hat{\gamma}_{2} = 1.170$) computed from the PQMC samples. We can see that the calibrated model brings the FEM output much closer to the actual physical experiment data. PMC with multinomial and with systematic resampling both yield similar posterior means, and thus calibrated models comparable to that from PQMC. Moreover, we also run MCMC for 5{,}000 iterations using a normal proposal with covariance $0.01^2 I_3$. The starting point of the Markov chain is $[0.75,0.88,1.18]^{T}$, the posterior means computed in \textcite{joseph2019mined}. Figure~\ref{fig:drilling_density} shows the histograms of the MCMC samples together with the weighted marginal densities of the PQMC and PMC samples. The marginal density of the PQMC samples shows slightly better agreement with the MCMC samples overall. On the other hand, since the true posterior means cannot be computed analytically and are also very expensive to approximate by numerical integration, we instead use the Mean Squared Error, $\mbox{MSE} = N^{-1}\sum_{i=1}^{N}(y_i - \hat{y}_i)^2$, where $\hat{y}_i = \hat{g}(\hat{\gamma}_{1}x_i^{\hat{\gamma}_2};\hat{\eta})$ is the prediction of the calibrated model at the posterior means, for evaluating the performance and convergence of the PMC and PQMC. Figure~\ref{fig:drilling_convergence} shows the MSEs of the calibrated model predictions at the posterior means constructed by the weighted PMC estimator using samples up to the $t$-th iteration. Using the proposed MSE criterion, the PQMC converges in only 4 iterations ($4KJ = 364$ samples), while PMC would require 5 iterations, demonstrating numerically that PQMC achieves faster convergence. \begin{figure}[t!]
\centering \includegraphics[width=0.75\textwidth]{figures/drilling/mse_mc.png} \caption{MSEs of the calibrated model predictions, at the posterior means estimated from PMC or PQMC samples up to the $t$-th iteration, with respect to the physical experiment output.} \label{fig:drilling_convergence} \end{figure} \section{Conclusion} \label{sec:conclusion} This paper proposes the Population Quasi-Monte Carlo (PQMC) method (Algorithm~\ref{algo:pqmc}), which incorporates Quasi-Monte Carlo ideas into the sampling and adaptation steps of the generic Population Monte Carlo (Algorithm~\ref{algo:pmc}). For the \textit{sampling} step, we propose to use a set of random but low-discrepancy points to replace the simple random samples from the proposal distributions. For the \textit{adaptation} step, we propose the importance support points (ISP) resampling, a deterministic resampling method that yields the set of resamples minimizing the energy distance to the original weighted samples, so that most of the information is retained. Numerical examples demonstrate the significant improvement of the ISP resampling over the traditional resampling methods for problems up to 20 dimensions. Given the Koksma-Hlawka-like bound presented by \textcite{mak2018sp} that connects the energy distance to the squared integration error from resampling, the energy distance is a better measure of the effectiveness of resampling methods than the conditional variance shown in \eqref{eq:isprs2}. Within the PQMC framework, we also propose the lookback adaptation for updating the global covariance, where all proposals share the same covariance parameters. This adaptation is computationally efficient, as it does not require additional evaluations of the proposal distribution. In numerical studies, this covariance adaptation also demonstrates significant improvement when it is used in generic PMC with random resampling. This is especially important when the initial proposal covariances are chosen poorly, an issue that has received scant attention in the literature. Last, since there is adaptation in PMC and PQMC, the standard PMC estimator is not efficient, and we propose the weighted PMC estimator with a set of correction weights proportional to the effective sample size of each iteration. Extensive numerical studies in various settings presented in Section~\ref{sec:simulation} show that PQMC yields a faster convergence rate than the generic PMC, though more theoretical study of PQMC is needed. \par On the other hand, the ISP resampling in PQMC incurs a greater computational burden than the traditional resampling methods, as it requires $\mathcal{O}(M^2)$ evaluations to compute the pairwise distances of the weighted samples $\{(y_m,\bar{w}_m)\}_{m=1}^{M}$. However, in many real world Bayesian problems, the dominant computational cost comes from the evaluation of the target distribution, as shown by the friction drilling calibration example in Section~\ref{sec:simulation_pmc_drilling}. Thus, it is justifiable to use a more computationally expensive resampling scheme if it results in faster convergence, hence reducing the number of evaluations of the target distribution while still achieving the desired performance. In addition, as mentioned by \textcite{cornuet2012amis}, the initialization has a major impact on the performance of this class of adaptive importance sampling algorithms, as the adaptation is based only on the samples simulated so far.
It is difficult to recover from a poor initialization, as shown by the two-dimensional mixture of five normals example with ``bad'' initial centers from $[0.4,0.6]^2$ and covariance $0.1^2 I_2$. One promising solution is to allocate more resources at the initial stage of the PQMC by starting with a large $K$ (number of proposals) and slowly decreasing $K$ as the algorithm converges. ISP resampling can retain most of the information from the original samples and shows robust empirical performance even when $K$ is small, making it a good fit for the idea of using a decreasing sequence of $K$. This is an interesting direction for future research. \bigskip \printbibliography \bigskip \section*{\LARGE{Appendices}}
{ "timestamp": "2020-12-29T02:13:31", "yymm": "2012", "arxiv_id": "2012.13769", "language": "en", "url": "https://arxiv.org/abs/2012.13769", "abstract": "Monte Carlo methods are widely used for approximating complicated, multidimensional integrals for Bayesian inference. Population Monte Carlo (PMC) is an important class of Monte Carlo methods, which utilizes a population of proposals to generate weighted samples that approximate the target distribution. The generic PMC framework iterates over three steps: samples are simulated from a set of proposals, weights are assigned to such samples to correct for mismatch between the proposal and target distributions, and the proposals are then adapted via resampling from the weighted samples. When the target distribution is expensive to evaluate, the PMC has its computational limitation since the convergence rate is $\\mathcal{O}(N^{-1/2})$. To address this, we propose in this paper a new Population Quasi-Monte Carlo (PQMC) framework, which integrates Quasi-Monte Carlo ideas within the sampling and adaptation steps of PMC. A key novelty in PQMC is the idea of importance support points resampling, a deterministic method for finding an \"optimal\" subsample from the weighted proposal samples. Moreover, within the PQMC framework, we develop an efficient covariance adaptation strategy for multivariate normal proposals. Lastly, a new set of correction weights is introduced for the weighted PMC estimator to improve the efficiency from the standard PMC estimator. We demonstrate the improved empirical convergence of PQMC over PMC in extensive numerical simulations and a friction drilling application.", "subjects": "Methodology (stat.ME); Computation (stat.CO)", "title": "Population Quasi-Monte Carlo", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9697854120593483, "lm_q2_score": 0.731058584489497, "lm_q1q2_score": 0.7089699505986707 }
https://arxiv.org/abs/2011.12245
Effect of barren plateaus on gradient-free optimization
Barren plateau landscapes correspond to gradients that vanish exponentially in the number of qubits. Such landscapes have been demonstrated for variational quantum algorithms and quantum neural networks with either deep circuits or global cost functions. For obvious reasons, it is expected that gradient-based optimizers will be significantly affected by barren plateaus. However, whether or not gradient-free optimizers are impacted is a topic of debate, with some arguing that gradient-free approaches are unaffected by barren plateaus. Here we show that, indeed, gradient-free optimizers do not solve the barren plateau problem. Our main result proves that cost function differences, which are the basis for making decisions in a gradient-free optimization, are exponentially suppressed in a barren plateau. Hence, without exponential precision, gradient-free optimizers will not make progress in the optimization. We numerically confirm this by training in a barren plateau with several gradient-free optimizers (Nelder-Mead, Powell, and COBYLA algorithms), and show that the numbers of shots required in the optimization grows exponentially with the number of qubits.
\section{Introduction} Parameterized quantum circuits offer a flexible paradigm for programming Noisy Intermediate Scale Quantum (NISQ) computers. These circuits are utilized in both Variational Quantum Algorithms (VQAs)~\cite{cerezo2020variationalreview,bharti2021noisy,peruzzo2014variational,mcclean2016theory,farhi2014quantum,romero2017quantum,khatri2019quantum,larose2019variational,arrasmith2019variational,cerezo2020variationalfidelity,cirstoiu2020variational,sharma2019noise,bravo2020variational,cerezo2020variational} and Quantum Neural Networks (QNNs)~\cite{schuld2014quest,cong2019quantum,beer2020training,verdon2018universal}. Both VQA and QNN approaches involve efficiently evaluating a cost function $C(\vec{\theta})$ or its gradient $\nabla C(\vec{\theta})$ on a quantum computer. A classical optimizer is then employed to train the parameters $\vec{\theta}$ of a parameterized quantum circuit $V(\vec{\theta})$ to minimize the cost. Rigorous scaling results are urgently needed for this exciting approach to near-term quantum computing. Gradient scaling is one of the few directions of significant progress. The most famous gradient scaling result is the barren plateau phenomenon~\cite{mcclean2018barren,cerezo2020cost,sharma2020trainability,wang2020noise,cerezo2020impact,holmes2020barren,pesah2020absence,zhang2020toward,abbas2020power,marrero2020entanglement,patti2020entanglement,uvarov2020barren,holmes2021connecting,du2020learnability,arrasmith2021equivalence}, whereby the gradient of the cost function shrinks exponentially with the number of qubits. Various issues lead to barren plateaus, such as deep ansatzes that lack structure~\cite{mcclean2018barren,sharma2020trainability,holmes2021connecting}, global cost functions~\cite{cerezo2020cost,sharma2020trainability}, high levels of noise~\cite{wang2020noise,du2020learnability}, scrambling target unitaries~\cite{holmes2020barren}, and large entanglement~\cite{marrero2020entanglement,patti2020entanglement}. Without effort to avoid barren plateaus, this phenomenon can have a major impact on the scaling of one's algorithm. Specifically, the exponential suppression of the gradient implies that one would need an exponential precision to make progress in the optimization, consequently, causing one's algorithm to scale exponentially in the number of qubits. The standard goal of quantum algorithms is polynomial scaling, unlike the exponential scaling of classical algorithms. Hence, the exponential scaling due to barren plateaus could erase the possibility of a quantum speedup with a parametrized quantum circuit. It is therefore crucial to study barren plateaus in VQAs and QNNs in order to understand when quantum speedup is possible. This has spawned an important research direction of finding strategies to avoid barren plateaus. Some examples include employing local cost functions~\cite{cerezo2020cost}, modifying the architecture~\cite{pesah2020absence,zhang2020toward}, pre-training~\cite{verdon2019learning}, parameter correlation~\cite{volkoff2021large}, layer-by-layer training~\cite{skolik2020layerwise}, and initializing layers to the identity~\cite{grant2019initialization}. These strategies are promising. However, more analytical and numerical studies are needed to understand how effective they are in general, for example, as in Ref.~\cite{campos2021abrupt}. One possible strategy to consider is the choice of optimizer. It is widely believed that gradient-based optimizers will be directly impacted by barren plateaus, for obvious reasons. 
Moreover, higher-order derivatives are also exponentially suppressed in a barren plateau~\cite{cerezo2020impact}, so optimizers based on such derivatives will also be impacted. Nevertheless, there still remains the question of whether gradient-free optimizers could somehow avoid the barren plateau problem. This is currently a topic of debate~\cite{cerezo2020cost,marrero2020entanglement}. The question is naturally made subtle by the fact that gradient-free optimizers can potentially use global information about the landscape, rather than being restricted to using local gradient information. In this work, we present an analytical argument suggesting that gradient-free approaches will, indeed, be impacted by barren plateaus. Specifically, we show that cost function differences, $C(\vec{\theta}_B)-C(\vec{\theta}_A)$, will be exponentially suppressed in a barren plateau. This holds even when the points $\vec{\theta}_A$ and $\vec{\theta}_B$ are not necessarily close in parameter space. Gradient-free optimizers use such cost function differences to make decisions during the optimization. Hence, our results imply that such optimizers will either need to spend exponentially large resources to characterize cost function differences, or else these optimizers will not make progress in the optimization. We confirm our analytical results with numerical simulations involving several gradient-free optimizers: Nelder-Mead, Powell, and COBYLA. For each of these optimizers, we attempt to train a deep parametrized quantum circuit, corresponding to the barren plateau scenario in Ref.~\cite{mcclean2018barren}. In all cases, we find that the number of shots (i.e., the amount of statistics) required to begin to train the cost function grows exponentially in the number of qubits. This is the same behavior that one sees for gradient-based methods, and is a hallmark of the barren plateau phenomenon. \section{Theoretical Background} Here we provide background needed to understand our results. We first consider the cost function used to train parameterized quantum circuits. Then we consider optimizers that can be used to optimize this cost function, with a specific focus on gradient-free optimizers. Finally, we give background on the barren plateau phenomenon. \subsection{Cost function} Consider a parameterized quantum circuit $V(\vec{\theta})$, whose parameters will be trained by minimizing a cost function $C(\vec{\theta})$. In this work, we consider a highly general cost function that can be expressed in the form \begin{equation}\label{eq:cost} C(\vec{\theta}) =\sum_{x=1}^{S} f_x(\vec{\theta},\rho_x)\,. \end{equation} Here, $\vec{\theta}$ is a vector of $m$ continuous parameters, $\{\rho_x\}_{x=1}^S$ are $n$-qubit input quantum states from a training set $\mathcal{S}$ of size $S$, and $f_x$ are functions that encode the problem and which can be different for each input state. To ensure algorithmic efficiency, we assume that the number $m$ of parameters in $\vec{\theta}$ is in $\mathcal{O}(\operatorname{poly}(n))$. In addition we consider that any $\theta_\mu\in\vec{\theta}$ parametrizes a unitary of the form $e^{-i\theta_\mu H_{\mu}}$. We assume $H_{\mu}$ is a Hermitian operator with two distinct non-zero eigenvalues (e.g., $H_{\mu}$ could be a Pauli operator). We remark that the cost function in~\eqref{eq:cost} contains as special cases many relevant applications. 
For instance, in a binary classification problem the cost function is given by the mean squared error $C(\vec{\theta})=\sum_{x}(y_x-\widetilde{y}(\vec{\theta},\rho_x))^2$~\cite{farhi2018classification,killoran2019continuous,beer2020training,cong2019quantum}. Here, the training set is given by $\mathcal{S}=\{\rho_x,y_x\}$ where $y_x$ are the true labels, and $\widetilde{y}(\vec{\theta},\rho_x)$ are the labels predicted by the Quantum Neural Network. In addition, the cost of several Variational Quantum Algorithms is covered by~\eqref{eq:cost}. In this case, the cost takes a simpler form, where the training set contains a single state ($S=1$) and the cost is $C(\vec{\theta})={\rm Tr}[OV(\vec{\theta})\rho V^\dagger(\vec{\theta})]$, with $O$ a Hermitian operator~\cite{peruzzo2014variational,mcclean2016theory,farhi2014quantum,romero2017quantum,khatri2019quantum,larose2019variational,arrasmith2019variational,cerezo2020variationalfidelity,cirstoiu2020variational,sharma2019noise,bravo2020variational,cerezo2020variational}. The goal is then to solve the optimization problem \begin{equation}\label{eq:optimization} \vec{\theta}_{\text{opt}}=\argmin_{\vec{\theta}} C(\vec{\theta})\,. \end{equation} This involves choosing an optimizer, which can either be a gradient-based or gradient-free optimizer. Various gradient-based approaches~\cite{kubler2020adaptive,sweke2020stochastic,arrasmith2020operator,stokes2020quantum} have been proposed for training parameterized quantum circuits, and these will be directly impacted by barren plateaus. Optimizers employing higher-order derivatives are also impacted by barren plateaus~\cite{cerezo2020impact}. In this work we consider the case when one employs a gradient-free optimization method. In the next section we review some widely-used gradient-free optimizers. \subsection{Gradient-Free Optimizers} We will refer to any optimization method that only accesses a zeroth-order oracle (i.e., does not directly access derivative information) as being gradient free. This is a very large class of methods, but they all depend on being able to distinguish cost function values at different points. Though our analytical results apply to any such optimizer, we now introduce three particular gradient-free optimizers that we will examine numerically: Nelder-Mead, Powell's Method, and COBYLA. \subsubsection{Nelder-Mead} One popular gradient-free optimization strategy is the Nelder-Mead algorithm~\cite{nelder1965simplex}. In this approach, one constructs a simplex in the space to be optimized over. Then one modifies it with a sequence of reflect, expand, contract, and shrink operations to move the simplex and then shrink it around the minimum. These operations are chosen based on conditional comparisons of the cost function values at each vertex as well as proposed new vertices. See Figure~\ref{fig:opts}\textbf{a} for an illustration of these operations. When used in an environment where the errors in those cost function values are large enough to cause mistakes in these comparisons, however, this algorithm is vulnerable to performing shrink operations prematurely, which slows the optimization down and may lead to a false appearance of convergence~\cite{barton1991modifications}.
Due to this difficulty, one would expect the number of iterations required to converge with Nelder-Mead to be especially bad in limited precision environments, though we note that there are a number of modifications that attempt to improve the method's robustness to noise~\cite{barton1991modifications,huang2018robust}. \begin{figure*}[ht] \includegraphics[width=1.9\columnwidth]{gf_optimizers.pdf} \caption{Graphical depiction of the gradient-free optimizers considered. Panel \textbf{a} shows the different operations that the Nelder-Mead algorithm performs on the initial (grey) simplex: reflection (red), reflection with expansion (blue), reflection with contraction (green), and shrinking (turquoise). Panel \textbf{b} shows two iterations of the Powell method (with black for the first iteration and red for the second). Note that the direction of the final step is changed, reflecting a modified search direction. Finally, panel \textbf{c} shows an illustration of a COBYLA step from an initial (grey) simplex. After fitting a plane to the initial simplex, the method steps along the fitted slope to form a new simplex (red). The trust region is shown as a solid blue circle. A smaller trust region which might be used later in the optimization is illustrated with the dashed blue circle (though for this particular step the trust region would likely not be contracted).} \label{fig:opts} \end{figure*} \subsubsection{Powell's Method} The Powell algorithm~\cite{powell1964efficient} is another popular gradient-free optimizer that performs sequential line searches. This method starts with some input set of search vectors $V=\{\vec{v}_i\}$, usually just the coordinate directions in the parameter space. Searching along each of these directions in sequence, this method looks for the displacement $\{a_i\}$ along each direction that would minimize the cost when only varying parameters along the current direction. Finding the displacements $\{a_i\}$ is typically done with Brent's parabolic interpolation method~\cite{brent2013algorithms}, though in principle one could use any univariate gradient-free optimizer. After sweeping through all of the search vectors, the iteration is completed by taking the search vector $\vec{v}_j$ that corresponds to the greatest displacement, $a_j=\max(\{a_i\})$, and replacing it with \begin{equation} \vec{v}_j\to \sum_i a_i \vec{v}_i. \end{equation} By making this replacement, convergence is accelerated and the method avoids getting stuck in a cyclic pattern of updates. See Figure~\ref{fig:opts}\textbf{b} for a sketch of two iterations of this method. \subsubsection{COBYLA} Constrained Optimization BY Linear Approximation (COBYLA) is another popular gradient-free optimizer by Powell~\cite{powell1994direct}. This algorithm constructs a simplex and uses the $m + 1$ points in the parameter space, with $m$ being the number of parameters, to define a hyperplane that captures the local slope of the cost function. The algorithm then replaces the highest cost function value point on the simplex by stepping from the lowest cost point along the direction of the slope. The method steps as far as possible along this estimated slope while staying within a lower bound on the radius of the trust region. The lower bound on the size of the trust region is decreased when the algorithm detects that it has stopped making progress, allowing the method to converge. Note, however, that the size of the trust region never increases in COBYLA.
An iteration of this method (showing a shrinking trust region) is sketched in Figure~\ref{fig:opts}\textbf{c}. \subsection{Barren Plateaus} When the cost function exhibits a barren plateau, the cost function gradient vanishes exponentially with the system size. Without loss of generality we consider here the following generic definition of a barren plateau. \begin{definition}[Barren Plateau]\label{def:BP} Consider the cost function defined in Eq.~\eqref{eq:cost}. This cost exhibits a barren plateau if, for all $\theta_\mu\in\vec{\theta}$, the expectation value of the cost function partial derivative $\partial C(\vec{\theta})/\partial\theta_\mu=\partial_\mu C(\vec{\theta})$ is $\text{E}_{\vec{\theta}}[\partial_\mu C(\vec{\theta})]=0$ and its variance vanishes exponentially with the number of qubits $n$ as \begin{equation}\label{eq:var} {\rm Var}_{\vec{\theta}}[\partial_\mu C(\vec{\theta})]\leq F(n)\,,\quad \text{with}\quad F(n)\in\mathcal{O}\left(\frac{1}{b^n}\right)\,. \end{equation} for some $b> 1$. As indicated, the expectation values are taken over the parameters $\vec{\theta}$. \end{definition} We remark here that, as shown in Definition~\ref{def:BP}, the barren plateau phenomenon is a probabilistic statement. In fact, from Chebyshev's inequality we know that ${\rm Var}_{\vec{\theta}}[\partial_\mu C(\vec{\theta})]$ bounds the probability that the cost function partial derivative deviates from its mean of zero as \begin{equation}\label{eq:cheb} P(|\partial_\mu C(\vec{\theta})|\geq c)\leq \frac{{\rm Var}_{\vec{\theta}}[\partial_\mu C(\vec{\theta})]}{c^2}\,, \end{equation} for any $c>0$. In practice this means that by randomly initializing the parameters $\vec{\theta}$, there is a high probability that one ends up in a flat region of the landscape where the gradients are exponentially suppressed. Let us now discuss different mechanisms that can lead to barren plateaus in the cost function landscape. As shown in the seminal work of Ref.~\cite{mcclean2018barren}, deep random unstructured circuits which form $2$-designs on $n$ qubits will exhibit barren plateaus. Here we use the term deep when the depth of the ansatz is in $\mathcal{O}(\operatorname{poly}(n))$. For instance, as shown in~\cite{harrow2009random,brandao2016local,harrow2018approximate} local circuits will form $2$-designs when their depth is in $\mathcal{O}(\operatorname{poly}(n))$. The barren plateau phenomenon was extended in~\cite{cerezo2020cost} to a type of shallow depth ansatz known as the layered hardware efficient ansatz, where random local gates act on alternating pairs of neighboring qubits in a brick-like structure. Here it was shown that the locality of the cost function can be linked to its trainability. Specifically, global cost functions (those where one compares operators living in exponentially large Hilbert spaces) exhibit barren plateaus for any circuit depth. On the other hand, it was shown that local cost functions (where one compares operators on an individual qubit level) are trainable when the ansatz depth is in $\mathcal{O}(\log(n))$, as here their gradients vanish at worst polynomially (rather than exponentially) with the system size. Barren plateaus have also been shown to arise in more general QNN architectures~\cite{sharma2020trainability,marrero2020entanglement}. In perceptron-based QNNs with hidden and visible layers, connecting a large number of qubits in different layers with random global perceptrons (and hence highly entangling them) can lead to exponentially vanishing gradients. 
These results have shown that the barren plateau phenomenon is a generic problem that can arise in multiple architectures for quantum machine learning. Finally, in~\cite{wang2020noise} a noise-induced barren plateau mechanism was found. Here it was proven that the presence of noise acting before and after each unitary layer in a parametrized quantum circuit leads to exponentially vanishing gradients for circuits with linear or super-linear depth. When the cost exhibits a noise-induced barren plateau we have $|\partial_\mu C(\vec{\theta})|\leq \widehat{F}(n)$ with $\widehat{F}(n)\in\mathcal{O}(1/\widehat{b}^n)$ for some $\widehat{b}>1$. The underlying mechanism here is that the state gets corrupted due to noise, leading to a flattening of the whole cost landscape. This phenomenon is conceptually different from the previous barren plateaus as here one does not average over the parameters $\vec{\theta}$. Nevertheless, the noise-induced barren plateau still satisfies Definition~\ref{def:BP}, which is a weaker condition. \section{Main Results} In this section we first present our main analytical results in the form of Proposition~\ref{prop:1} and Corollary~\ref{cor:1}. We then discuss the implications for employing gradient-free optimizers in a barren plateau. \subsection{Exponentially suppressed cost differences} Here we consider two relevant scenarios where we analyze, on average, how large the difference $\Delta C= C(\vec{\theta}_B)-C(\vec{\theta}_A)$ between two points in the landscape can be. First we consider the case when $\vec{\theta}_A$ and $\vec{\theta}_B$ are not independent, but rather $\vec{\theta}_B$ can be obtained from $\vec{\theta}_A$ through a given translation in parameter space. We then analyze the case when $\vec{\theta}_A$ and $\vec{\theta}_B$ are independent. The following proposition constitutes the main result of our work. The proof is presented in the Appendix. \begin{proposition}\label{prop:1} Consider the cost function of Eq.~\eqref{eq:cost}. Let $\vec{\theta}_A$ be a randomly chosen point in parameter space. Let $\vec{\theta}_B=\vec{\theta}_A+L\hat{\vec{\ell}}$ be a point at a distance $L=\|\vec{\theta}_B-\vec{\theta}_A\|$ from $\vec{\theta}_A$ in parameter space, for some unit vector $\hat{\vec{\ell}}$. If the cost exhibits a barren plateau according to Definition~\ref{def:BP}, then the expectation value of the difference $\Delta C=C(\vec{\theta}_B)-C(\vec{\theta}_A)$ is \begin{equation} \text{E}_{\vec{\theta}_A}[\Delta C]=0\,, \end{equation} and the variance is exponentially vanishing with $n$ as \begin{equation} {\rm Var}_{\vec{\theta}_A}[\Delta C]\leq G(n)\,, \end{equation} with \begin{equation}\label{eq:G-n} G(n)=m^2L^2 F(n)\,, \quad \text{and}\quad G(n)\in \widetilde{\mathcal{O}}\left(\frac{1}{b^n}\right)\,, \end{equation} for some $b> 1$. Here $m$ is the dimension of the parameter space, and $F(n)$ was defined in~\eqref{eq:var}. \end{proposition} Let us here recall that we have assumed that $m\in\mathcal{O}(\operatorname{poly}(n))$. Similarly, we have that $\theta_\mu$ parametrizes a unitary generated by a Hermitian operator $H_{\mu}$ with two distinct non-zero eigenvalues. From the latter it then follows that $L$ is always in $\mathcal{O}(\operatorname{poly}(n))$, and hence that $G(n)\in \widetilde{\mathcal{O}}(1/b^n)$. From the previous results one can readily evaluate the case when $\vec{\theta}_B$ and $\vec{\theta}_A$ are independent. 
This case is of relevance to global optimizers, such as Bayesian approaches, where initial points on the landscape are chosen independently. This scenario can be analyzed by computing the expectation value $\text{E}_{\vec{\theta}_A,\vec{\theta}_B}[\Delta C]=\text{E}_{\vec{\theta}_B}[\text{E}_{\vec{\theta}_A}[\Delta C]]$. From Proposition~\ref{prop:1}, we can derive the following corollary. \begin{corollary}\label{cor:1} Consider the cost function of Eq.~\eqref{eq:cost}. Let $\vec{\theta}_A$ and $\vec{\theta}_B$ be two randomly chosen points in parameter space. Without loss of generality we assume that $\vec{\theta}_B=\vec{\theta}_A+L\hat{\vec{\ell}}$ for random $L$ and $\hat{\vec{\ell}}$, so that $\text{E}_{\vec{\theta}_A,\vec{\theta}_B}[\cdots]=\text{E}_{\vec{\theta}_A,L,\hat{\vec{\ell}}}[\cdots]$. If the cost exhibits a barren plateau according to Definition~\ref{def:BP}, then the expectation value of the difference $\Delta C=C(\vec{\theta}_B)-C(\vec{\theta}_A)$ is $\text{E}_{\vec{\theta}_A,L,\hat{\vec{\ell}}}[\Delta C]=0$, and the variance is exponentially vanishing with $n$ as \begin{equation}\label{eq:boundprop} {\rm Var}_{\vec{\theta}_A,L,\hat{\vec{\ell}}}[\Delta C]\leq \widehat{G}(n)\,, \end{equation} with \begin{equation} \widehat{G}(n)=m^2\overline{L}^2 F(n)\,, \quad \text{and}\quad \widehat{G}(n)\in \widetilde{\mathcal{O}}\left(\frac{1}{b^n}\right)\,, \end{equation} for some $b> 1$. Here $m$ is the dimension of the parameter space, $F(n)$ was defined in~\eqref{eq:var}, and \begin{equation} \overline{L}=\text{E}_{L,\hat{\vec{\ell}}}\left[L\right] \end{equation} is the average distance between any two points in parameter space. \end{corollary} The proof of Corollary~\ref{cor:1} readily follows from Proposition~\ref{prop:1} by additionally computing the expectation value over $L$ and $\hat{\vec{\ell}}$. Moreover, here we can see that $\widehat{G}(n)$ is exponentially vanishing with the system size since $\overline{L}\in\mathcal{O}(\operatorname{poly}(n))$. From Proposition~\ref{prop:1} we have that, given two dependent sets of parameters, i.e., $\vec{\theta}_A$ and a set $\vec{\theta}_B$ related to it through a translation in parameter space, the probability that the difference $\Delta C=C(\vec{\theta}_B)-C(\vec{\theta}_A)$ is larger than a given $c>0$ can be bounded as \begin{equation}\label{eq:difference-bound} P(|\Delta C|\geq c)\leq \frac{G(n)}{c^2}\,, \end{equation} where we have combined Chebyshev's inequality, as in~\eqref{eq:cheb}, with the variance bound of Proposition~\ref{prop:1}. A similar result can be obtained for the case when $\vec{\theta}_A$ and $\vec{\theta}_B$ are independent, but here one replaces $G(n)$ by $\widehat{G}(n)$ in~\eqref{eq:difference-bound}. This implies that, with high probability, the difference $\Delta C$ will be exponentially vanishing with the system size, both when $\vec{\theta}_A$ and $\vec{\theta}_B$ are dependent and when they are independent. Moreover, we remark that this is a direct consequence of the fact that the cost exhibits a barren plateau. \subsection{Implications for gradient-free optimizers} Let us first recall that, as discussed in the previous section, the capability of distinguishing the cost function value at different sets of parameters is at the core of gradient-free optimization methods. Therefore, the precision required to differentiate choices fundamentally limits the scaling of these methods, with smaller differences requiring greater precision.
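To make this precision requirement concrete, note that if (as a rough back-of-the-envelope estimate) each cost evaluation is obtained by averaging $N$ measurement outcomes with single-shot variance $\sigma^2\in\mathcal{O}(1)$, then resolving a cost difference at the scale allowed by the bound above, $|\Delta C|\approx\sqrt{G(n)}$, requires \begin{equation*} N\gtrsim \frac{\sigma^2}{|\Delta C|^2}\approx\frac{\sigma^2}{G(n)}\in\Omega\!\left(\frac{b^n}{m^2L^2}\right), \end{equation*} i.e., a number of shots per comparison that grows exponentially with the number of qubits.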
If an optimizer's precision requirements are not met, then each decision the method makes becomes randomly chosen by shot noise, leading to many optimizers effectively becoming either random walks or random sampling. The results in Proposition~\ref{prop:1} pertain to gradient-free optimizers that compare points that are a given distance and direction apart. For example, simplex-based methods like Nelder-Mead fall under this category. As we show that cost differences are exponentially suppressed with the system size in a barren plateau, this leads to applications having sampling requirements that scale exponentially. Exponentially scaling sampling requirements, in turn, hinder the possibility of achieving quantum speedup with such an algorithm. Similarly, Corollary~\ref{cor:1} tells us that cost function differences between randomly chosen points are also exponentially suppressed. This means that either random search methods or methods that use random initialization, such as Bayesian optimization~\cite{movckus1975bayesian}, will also struggle with barren plateau landscapes. Therefore, using randomness in the selection of points cannot evade this exponential scaling result. Let us finally remark that Proposition~\ref{prop:1} and Corollary~\ref{cor:1} make no assumption about how close (or far) the parameters $\vec{\theta}_A$ and $\vec{\theta}_B$ are in parameter space other than that $L=\|\vec{\theta}_B-\vec{\theta}_A\|\in\mathcal{O}(\operatorname{poly}(n))$. Given that for any practical application the number of parameters $m$ should scale no faster than $m\in\mathcal{O}(\operatorname{poly}(n))$ (or the problem will become untrainable for reasons having nothing to do with barren plateaus), this seems very reasonable. For example, if all of the parameters are single qubit rotations, the parameter space is a $m$-dimensional torus with unit radius. On that torus, the greatest length of the shortest path between two points is: \begin{equation} \begin{aligned} L_{\mathrm{max}}=&\max_{\vec{\theta}_A,\vec{\theta}_B}\|\vec{\theta}_A-\vec{\theta}_B\|\\ =&\sqrt{m}\pi. \end{aligned} \end{equation} This means that our results are valid for both local and global optimizers as sampling points that are further apart cannot overcome the suppressed slope. \section{Numerical Implementation} \begin{figure}[t] \includegraphics[width=\columnwidth]{PBC.pdf} \caption{Single layer of the hardware efficient ansatz employed in our numerical implementations, shown here for $n=4$. The $U$ gates are general single qubit unitaries $U(\theta_1,\theta_2,\theta_3) = R_Z(\theta_2+\pi)R_X(\pi/2)R_Z(\theta_1+\pi)R_X(\pi/2)R_Z(\theta_3)$. Here $R_Z(\alpha) = e^{-i\alpha/2 \sigma_Z}$, $R_X(\alpha) = e^{-i\alpha/2 \sigma_X}$, and $\sigma_Z$, $\sigma_X$ are Pauli matrices. } \label{fig:hardeff} \end{figure} In this section we present numerical results obtained by simulating a variational quantum compiling algorithm~\cite{khatri2019quantum,sharma2019noise}. Here, one trains a parametrized quantum circuit $V(\vec{\theta})$ to approximate a target unitary $U$. Specifically, we consider the toy-model problem where $U=\openone$ and the goal is to train the parameters in $V(\vec{\theta})$ such that $V(\vec{\theta})\ket{\vec{0}}=\ket{\vec{0}}$, with $\ket{\vec{0}}=\ket{0}^{\otimes n}$ the all-zero state. 
As shown in~\cite{khatri2019quantum,sharma2019noise}, the following local cost function is faithful \begin{equation} C(\vec{\theta}) = {\rm Tr}[ O_L V(\vec{\theta})\dya{\vec{0}}V^\dagger(\vec{\theta})]\,, \label{cost_function} \end{equation} where \begin{equation} O_L = \openone - \frac{1}{n}\sum_{i=1}^{n}\ket{0_i}\bra{0_i}\,, \end{equation} in the sense that one can verify that $C(\vec{\theta})\in[0,1]$, with $C(\vec{\theta})=0$ if and only if $V(\vec{\theta})\ket{\vec{0}}=\ket{\vec{0}}$ (up to a global phase). We remark that there is an efficient quantum circuit to compute $C(\vec{\theta})$~\cite{sharma2019noise}. For $V(\vec{\theta})$ we employ a layered hardware efficient ansatz as shown in Fig.~\ref{fig:hardeff}. Moreover, we recall that Ref.~\cite{mcclean2018barren} showed that a cost function such as~\eqref{cost_function} with a randomly initialized layered hardware efficient ansatz will exhibit barren plateaus when the depth of $V(\vec{\theta})$ scales at least linearly in $n$. In our numerics we simulated the aforementioned quantum compilation task for different numbers of qubits $n=5,6,\ldots,11$. Letting $p$ be the number of layers in the ansatz in Fig.~\ref{fig:hardeff}, we choose $p=n$, so that the depth grows linearly in $n$. This corresponds to the barren plateau scenario of Ref.~\cite{mcclean2018barren}. For each value of $n$, we solved the optimization problem of Eq.~\eqref{eq:optimization} by employing the Nelder-Mead, Powell, and COBYLA methods. These simulations were performed using MATLAB (Nelder-Mead) and SciPy (Powell and COBYLA). In all cases we randomly initialized the parameters $\vec{\theta}$ in the ansatz and we ran the optimizer until a cost function value of $C=0.4$ was achieved or until a maximal total number of shots used throughout the optimization was surpassed. For simplicity we used the default values for hyper-parameters not related to optimization termination. We note that we chose a relatively large value ($C=0.4$) for the cost threshold because we are interested in the initial stages of the training process, i.e., in the question of whether one can get past the barren plateau. With this choice, the computational expense of reaching the threshold does not take into account the difficulty of finding a minimum; it only reflects the burden of the barren plateau. Since the goal is to heuristically determine the precision (i.e., the number of shots $N$) needed to minimize the cost, we first ran simulations with different values of $N$ allocated per cost-function evaluation. For each $N$ we simulated $20$ optimization instances (runs) with different initial points and we kept track of the total number of shots $N_{\text{total}}$ used throughout the optimization. The next step was to determine the value of $N$ for which a cost of $C=0.4$ could be reached and which minimizes the median total number of shots $N_{\text{total}}$ computed over the different runs. We analyze the scaling of this median value as a function of $n$ below. \begin{figure}[t] \includegraphics[width=\columnwidth]{train_scal2.pdf} \caption{Median value of the total shot number $N_{\textrm{total}}$ necessary to reach $C=0.4$ plotted versus the number of qubits $n$. We show results for Nelder-Mead (blue diamonds), Powell (red asterisks), and COBYLA (pink squares) implementations, as well as results for a gradient-descent implementation (black crosses) for reference. Each data point was obtained from a sample of $20$ runs initialized by random initial angles~$\vec{\theta}$.
} \label{fig:numerical} \end{figure} In Fig.~\ref{fig:numerical} we present our numerical results. Here we can see that the total number of shots scales exponentially with $n$ for the Powell method, and super-exponentially for the Nelder-Mead optimizer. For the COBYLA method, the behavior of $N_{\text{total}}$ as a function of $n$ is not very regular, but it is consistent with at least an exponential increase. As a reference point we also show in Fig.~\ref{fig:numerical} results obtained with a custom gradient-descent optimizer. As expected, the total number of shots also scales exponentially in this case. \section{Discussion} With a wide range of applications spanning chemistry, optimization, and big data analysis, training parameterized quantum circuits is arguably the leading paradigm for near-term quantum computing. Yet barren plateaus in the training landscape remain an obstacle to making this paradigm scalable. Hence, one of the most important lines of research in this field is developing methods to avoid barren plateaus. In this work, we consider the question of whether the choice of optimizer could be a potential strategy for avoiding barren plateaus. We focus on gradient-free optimizers, since there has been recent debate in the community about whether barren plateaus affect such optimizers. Our main result is an analytical argument suggesting that gradient-free optimizers will, indeed, be impacted by barren plateaus. Proposition~\ref{prop:1} is relevant to gradient-free optimizers that search through the landscape starting from a (random) initial point. For example, this includes simplex-based optimizers like Nelder-Mead. This proposition asserts that the variance of cost function differences is exponentially suppressed in a barren plateau. This implies that such optimizers will need to expend exponentially large resources in order to make decisions about where to move in the landscape. Corollary~\ref{cor:1} considers a slightly different scenario, where both points are randomly and independently chosen. This is relevant to global gradient-free optimizers, such as Bayesian methods, which initially choose multiple random points on the landscape and then proceed from this initial set of points. This corollary implies that these optimizers will also need to utilize exponentially large resources in order to make progress. We also numerically attempt to train in a barren plateau scenario using several gradient-free optimizers. We ask how many shots are required to begin to train the cost function. In all cases, we find that the required number of shots grows exponentially in the number of qubits. This is consistent with our main result, and demonstrates that barren plateaus can lead to exponential scaling even for gradient-free optimizers. We note that this exponential scaling is a lower bound on the asymptotics. For Nelder-Mead we find super-exponential scaling, likely due to the chance of prematurely shrinking the simplex when it is hard to order the cost values~\cite{barton1991modifications}. For the case of COBYLA, there may be a similar effect from prematurely shrinking the radius of the trust region, though this is not clearly demonstrated in our data. Finally, the Powell method appears to show exponential scaling. It is likely that the reason Powell shows better scaling is that, unlike in the other optimizers, statistical noise does not have a cumulative effect on the state of the optimizer.
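To illustrate the type of optimization loop underlying these observations, the following minimal sketch (with a hypothetical stand-in cost function, parameter count and shot budget, not the compilation cost or settings used in our simulations) feeds shot-noisy cost estimates to the three gradient-free optimizers through \texttt{scipy.optimize}:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
num_params = 12        # hypothetical number of variational parameters
shots = 1000           # hypothetical shot budget per cost evaluation

def noiseless_cost(theta):
    # Hypothetical stand-in for C(theta), bounded in [0, 1].
    return 0.5 * (1.0 - np.mean(np.cos(theta)))

def noisy_cost(theta):
    # Emulate finite-shot estimation: the standard deviation of the
    # estimate scales like 1/sqrt(shots).
    c = noiseless_cost(theta)
    return c + rng.normal(scale=np.sqrt(c * (1.0 - c) / shots))

theta0 = rng.uniform(0.0, 2.0 * np.pi, size=num_params)
for method in ["Nelder-Mead", "Powell", "COBYLA"]:
    result = minimize(noisy_cost, theta0, method=method,
                      options={"maxiter": 200})
    print(method, result.fun)
\end{verbatim}
Lowering the assumed shot budget in such a loop makes the comparisons that each optimizer relies on increasingly unreliable, which is the mechanism behind the scalings observed above.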
Our work casts doubt on the notion that the choice of optimizer could provide a strategy to avoid barren plateaus. While the asymptotically exponential scaling cannot be avoided, we note that the size limits of trainable problems may be extended by a careful choice of optimization strategy. For example, techniques using neural networks~\cite{verdon2019learning} or natural evolutionary strategies~\cite{anand2021natural} may improve the constants multiplying the exponential scaling. However, we emphasize that all such strategies at minimum require the comparison of cost function values at different points and thus are subject to our scaling analysis. This result highlights the difficult challenge posed by barren plateaus. Future work certainly should continue to develop strategies to avoid them. Additionally, in future work, we hope to develop a unified treatment that covers the impact of barren plateaus on various types of optimizers, gradient-based and gradient-free. \section*{Acknowledgements} AA and LC were initially supported by LDRD program of LANL under project number 20190065DR. MC acknowledges support from the Center for Nonlinear Studies at Los Alamos National Laboratory (LANL). Piotr C. was supported by the Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory (LANL) under project number 20190659PRD4. PJC acknowledges initial support from the LANL ASC Beyond Moore's Law project. This work was supported by the U.S. DOE, Office of Science, Office of Advanced Scientific Computing Research, under the Accelerated Research in Quantum Computing (ARQC) program. \bibliographystyle{apsrev4-1mod}
{ "timestamp": "2021-10-04T02:06:11", "yymm": "2011", "arxiv_id": "2011.12245", "language": "en", "url": "https://arxiv.org/abs/2011.12245", "abstract": "Barren plateau landscapes correspond to gradients that vanish exponentially in the number of qubits. Such landscapes have been demonstrated for variational quantum algorithms and quantum neural networks with either deep circuits or global cost functions. For obvious reasons, it is expected that gradient-based optimizers will be significantly affected by barren plateaus. However, whether or not gradient-free optimizers are impacted is a topic of debate, with some arguing that gradient-free approaches are unaffected by barren plateaus. Here we show that, indeed, gradient-free optimizers do not solve the barren plateau problem. Our main result proves that cost function differences, which are the basis for making decisions in a gradient-free optimization, are exponentially suppressed in a barren plateau. Hence, without exponential precision, gradient-free optimizers will not make progress in the optimization. We numerically confirm this by training in a barren plateau with several gradient-free optimizers (Nelder-Mead, Powell, and COBYLA algorithms), and show that the numbers of shots required in the optimization grows exponentially with the number of qubits.", "subjects": "Quantum Physics (quant-ph); Machine Learning (cs.LG); Machine Learning (stat.ML)", "title": "Effect of barren plateaus on gradient-free optimization", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.969785415552379, "lm_q2_score": 0.7310585786300049, "lm_q1q2_score": 0.7089699474698308 }
https://arxiv.org/abs/2208.06151
Unifying local and global model explanations by functional decomposition of low dimensional structures
We consider a global representation of a regression or classification function by decomposing it into the sum of main and interaction components of arbitrary order. We propose a new identification constraint that allows for the extraction of interventional SHAP values and partial dependence plots, thereby unifying local and global explanations. With our proposed identification, a feature's partial dependence plot corresponds to the main effect term plus the intercept. The interventional SHAP value of feature $k$ is a weighted sum of the main component and all interaction components that include $k$, with the weights given by the reciprocal of the component's dimension. This brings a new perspective to local explanations such as SHAP values which were previously motivated by game theory only. We show that the decomposition can be used to reduce direct and indirect bias by removing all components that include a protected feature. Lastly, we motivate a new measure of feature importance. In principle, our proposed functional decomposition can be applied to any machine learning model, but exact calculation is only feasible for low-dimensional structures or ensembles of those. We provide an algorithm and efficient implementation for gradient-boosted trees (xgboost) and random planted forest. Conducted experiments suggest that our method provides meaningful explanations and reveals interactions of higher orders. The proposed methods are implemented in an R package, available at \url{this https URL}.
\section{Introduction} In the early years of machine learning interpretability research, the focus was mostly on global feature importance methods that assign a single importance value to each feature. More recently, the attention has shifted towards local interpretability methods, which provide explanations for individual observations. Popular examples of the latter are LIME \citep{ribeiro2016should} and SHAP \citep{shapley1953value,lipovetsky2001analysis,lundberg2017unified}. The major reason for this shift is that local methods provide a more comprehensive picture of model explanations than single-value global methods, most importantly in presence of nonlinear effects and interactions. This, however, neglects the fact that global methods can be more than single-value methods: Ideally, a global method provides useful information about the entire regression or classification function by providing an explanation for each feature and each interaction effect of arbitrary order, relative to the values they take. As with local methods, this gives us an explanation for each observation. The crucial difference is that two observations which have a set of features in common receive the same explanation for main effects and interaction effects involving exclusively those features. The reason is that higher order effects are not hidden in the main effects. A global explanation is not specific to observations but only to feature values, and it does not only give a more comprehensive picture than local methods but the complete picture. In this paper, we introduce a global explanation procedure by identifying components in a functional decomposition. We show that the proposed global explanation is identical to $q$-interaction SHAP \citep{tsai2022faith}, where $q$ corresponds to the maximal order of interaction present in the model to be analyzed. Hence, we provide a new interpretation of SHAP values which is not game-theoretically motivated. We develop a fast implementation that exactly calculates $q$-interaction-SHAP for tree-based machine learning models. In principle, our results can be applied to any model and our algorithm can be applied to any tree-based model. However, since the number of components grows exponentially with increasing $q$, exact calculation is only feasible if $q$ is sufficiently small, as e.g. in gradient boosted trees. We provide an implementation for \textit{xgboost} \citep{chen2016xgboost} and \textit{random planted forest} \citep{hiabu2020random}. The $q$-interaction SHAP is a global explanation of a trained model. One and two-dimensional components can be plotted and together with higher order components they can be used to decompose local SHAP values into main effects and all involved interaction effects. Additionally, main and interaction components can be summarized into feature importance values. Beyond explaining feature effects, our proposed decomposition can be used to detect bias in models where LIME and SHAP fail \citep{slack2020fooling} and reduce such bias by removing individual components from the decomposition. \subsection{Motivating example}\label{sec:example} We will give a toy example of how the interplay of correlations and interactions can give rise to misleading SHAP values. Consider the function $m(x_1,x_2)=x_1+x_2 + 2x_1x_2 $. The SHAP value for the first variable is $ \phi_1(x_1,x_2)=x_1-E[X_1]+x_1x_2 -E[X_1X_2]+x_1E[X_2]-x_2E[X_1]. 
$ If the features are standardized, i.e., $X_1$ and $X_2$ have mean zero and variance one, the expression reduces to \[ \phi_1(x_1,x_2)=x_1+x_1x_2 -\textrm{corr}(X_1,X_2). \] Hence, e.g., if $\textrm{corr}(X_1,X_2)=0.3$, an individual with $x_1=1$ and $x_2=-0.7$ would see a SHAP value of 0 for the first variable: \[ \phi_1(1,-0.7) = 0. \] This is quite misleading, since clearly $x_1$ has an effect on the response $m$ that is irrespective of the particular value of $x_1$. The underlying problem is that locally at $(x_1,x_2)=(1,-0.7)$, the main effect contribution and interaction contribution cancel each other out. Indeed, we will see that the SHAP value $\phi_1$ can be decomposed into a main effect contribution of $\{x_1\}$ , which is $x_1-2\textrm{corr}(X_1,X_2)=0.4$ and an interaction contribution of $\{x_1,x_2\}$, which is $x_1x_2+\textrm{corr}(X_1,X_2)=-0.4$. Figure~\ref{fig:example} shows SHAP values and the functional decomposition of an \textit{xgboost} model of the function $m(x_1,x_2)$. The SHAP values $\phi_1$ and $\phi_2$ contain main effect contributions $m_1$ and $m_2$ as well as the interaction contribution $m_{12}$. The functional decomposition separates the contributions $m_1$, $m_2$ and $m_{12}$. Those familiar with SHAP values may argue that one can detect the non-zero impact of $x_1$ by plotting $\phi_1$ over all instances (see Figure~\ref{fig:example}). This argument has two problems. Firstly, this does not change the misleading local value. Secondly, SHAP values can be quite arbitrary: Two estimators that are equal on the support of the data can have very different SHAP values. To see this, define $\tilde m(x)= m(x) +\alpha(x)$, where $\alpha(x)=0$, for $x\in supp(X_1,X_2)$, and choose $\alpha$ outside the support of $(X_1,X_2)$ such that it approximates some desired SHAP values. This works because SHAP values are constructed by extrapolating outside the support of the data. \cite{slack2020fooling} has empirically demonstrated how this can be exploited to hide the importance of protected features. One could ask for local explanations that do not extrapolate, hoping that this solves the problem. Unfortunately, this is not the case: If explanations are deduced only from the region with data support, those explanations are based on the correlation structure of the features \citep{janzing2020feature}. In particular a variable that has zero effect on the model output can still be assigned a value stemming from a correlated variable \citep{janzing2020feature,sundararajan2020many}. We conclude: \begin{quote} \emph{Local explanations that do not explicitly specify all interactions cannot lead to meaningful causal interpretations in the presence of correlated features.} \end{quote} This is worrying noting that interpretation tools are usually used for black-box algorithms with the main purpose being to explain the model well in cases where interactions are present. A local interpretation that considers all interactions is actually a global interpretation. Hence the goal of this paper is to unify local and global explanations. \begin{figure} \centering \includegraphics[width=\linewidth]{simple_example.png} \caption{Simple example. 
SHAP values (top row) and functional decomposition (bottom row) of an \textit{xgboost} model of the function $m(x_1,x_2)=x_1+x_2 + 2x_1x_2$.} \label{fig:example} \end{figure} \subsection{Related Work} A functional decomposition for the global interpretation of regression functions was introduced in the statistical literature by \cite{stone1994use}, and has been further discussed in \cite{hooker2007generalized,chastaing2012generalized,lengerich2020purifying}. These authors considered a different constraint called the (generalized) functional ANOVA decomposition. In contrast, the constraint we introduce in this paper is linked to Shapley values. There is considerable literature on interactions and Shapley values. In cooperative game theory, pairwise player-player interactions were first considered by \cite{owen1972multilinear} and later generalized to higher-order interactions by \cite{grabisch1999axiomatic}. In the machine learning context, the arbitrariness of Shapley values due to interactions and correlations has been discussed in \cite{kumar2020problems}, \cite{slack2020fooling}, \cite{sundararajan2020many}, and possible solutions have been proposed in \cite{zhang2020interpreting}, \cite{kumar2021shapley}, \cite{harris2022joint}, \cite{ittner2021feature}, \cite{sundararajan2020shapley}. Recently, \cite{tsai2022faith} introduced interaction SHAP values of any given order and proposed an approximation to calculate them. In this paper, we introduce an identification constraint for a functional decomposition that is motivated by a causal interpretation, which connects to Shapley values with a value function that has recently been coined interventional SHAP \citep{chen2020true}. Alternative value functions have been discussed in \cite{frye2020asymmetric}, \cite{yeh2022threading}. There are a variety of methods to obtain global feature importance measures implied by SHAP, in the form of a single value. These include \cite{casalicchio2018visualizing}, \cite{frye2020asymmetric} and \cite{williamson2020efficient}, among others. Similar to our method suggested in Section \ref{sec:vim}, the measures are obtained as weighted averages of local importance values. However, in contrast to our suggestion, most are motivated by additive importance measures \citep{covert2020understanding}. \section{Main result} Let $(Y_i, X_{i,1},\dots,X_{i,d})$ be a data set of $n$ i.i.d. observations with $X_{i,k}\in\mathbb{R}$, $i=1,\dots,n$; $k=1,\dots,d$. We consider the supervised learning setting \[ E[Y_i|X_i=x] = m(x), \] where the function $m$ is of interest and $Y$ is a real-valued random variable.\footnote{We use $Y_i \in \mathbb{R}$ for notational convenience. It is straightforward to extend to binary classification, whereas multiclass classification would require a slightly different procedure.} We assume that a reasonable estimator $\hat m$ of $m$ has been provided. \subsection{Global Interpretation} With increasing dimension it can quickly get very hard, if not impossible, to visualize and thereby comprehend a multivariate function. Hence, a global interpretation of $\hat m$ is arguably only feasible if it is a composition of low-dimensional structures. Let us consider a specific decomposition of a multivariate function into a sum of main effects, bivariate interactions, etc., up to a $d$-variate interaction term. \begin{align}\label{anova} \hat m(x)&= \hat m_0+\sum_{k=1}^d \hat m_k(x_{k}) + \sum_{k<l} \hat m_{kl}(x_{k},x_{l}) + \cdots + \hat m_{1,\dots,d}(x)= \sum_{S\subseteq \{1,\dots,d\}} \hat m_S(x_S).
\end{align} The heuristic of the decomposition is that if the underlying function $m(x)$ only lives on low-dimensional structures, then $m_S$ should be zero for most feature subsets $S$ and the order of maximal interaction $q=\max\{ \abs{S}: m_S\neq 0\}$ should be much smaller than the number of features: $q << d$. This discussion, however, is not very meaningful before one has agreed on an identification; without suitable identification constraints, it is possible to change components on the right without altering the left hand side. We propose the following identification: \textbf{Marginal identification:} For every $S\subseteq \{1,\dots,d\}$, \begin{align}\label{constraint1} \sum_{T \cap S \neq \emptyset} \int \hat m_T(x_T) \hat p_S(x_S)\mathrm dx_S=0, \end{align} where $\hat p_S$ is some estimator of the density $p_S$ of $X_S$. The identification can be motivated by a causal interpretation. \begin{figure} \begin{center} \begin{minipage}[t]{0.3\linewidth} \begin{tikzpicture}[ squarednode/.style={rectangle, draw=black!60, fill=red!5, very thick, minimum size=5mm}, ] \node[squarednode] (U) {$X_U$}; \node[squarednode] (V) [below=of U] {$X_V$}; \node[squarednode] (m) [right=of V] {m(U,V)}; \draw[->] (U.south) -- (V.north); \draw[->] (V.east) -- (m.west); \draw[->] (U.east) -- (m.north); \end{tikzpicture} \end{minipage} \begin{minipage}[t]{0.3\linewidth} \begin{tikzpicture}[ squarednode/.style={rectangle, draw=black!60, fill=red!5, very thick, minimum size=5mm}, ] \node[squarednode] (U) {$X_U$}; \node[squarednode] (V) [below=of U] {$do(X_V=x_V)$}; \node[squarednode] (m) [right=of V] {m(U,V)}; \draw[->] (V.east) -- (m.west); \draw[->] (U.east) -- (m.north); \end{tikzpicture} \end{minipage} \caption{Left: Initial causal structure. Right: Causal Structure after removing effect of $X_U$ on $X_V$.} \label{fig:graph} \end{center} \end{figure} Assume $U$ is a set of features that should not have an effect on $\hat m$. For example $U=\{\text{gender, ethnicity}\}$ in the case of non-discriminatory regulation requirements. Assume $\{1,\dots, d\}$ is the disjoint union of $U$ and $V$ with a directed acyclic graph structure $X_U\rightarrow X_V \rightarrow m$, $X_U\rightarrow m$; as illustrated in Figure \ref{fig:graph}. Eliminating the causal relationship between $X_V$ and $X_U$ can be achieved via the do-operator, $do(X_V=x_V)$, that removes all edges going into $X_V$, see Figure \ref{fig:graph}. The function $E[m(X) |\ do(X_V=x_V))$ does not use information contained in $X_U$; neither directly nor indirectly. Under the assumed causal structure, standard calculations \citep{pearl2009causality} lead to \[ E[m(X) |\ do(X_V=x_V)]= \int m(x) p_U(x_U) dx_U. \] If $\hat m$ is identified via \eqref{constraint1}, then $\tilde m(x_{-U}):=\int \hat m(x) \hat p_U(x_U) dx_U$ can be extracted from $\hat m$ by dropping all components that include variables in $U$: \[ \tilde m(x_{-U})= \int \hat m(x) \hat p_U(x_U) dx_U= \sum_{S \subseteq V} \hat m_S(x_S). \] The next theorem states existence and uniqueness of a decomposition that satisfies the identification constraint \eqref{constraint1} and describes the solution explicitly. \begin{theo}\label{thm1} Given any initial estimator $\hat m^{(0)}=\{\hat m^{(0)}_S | S\subseteq \{1,\dots,d\}\}$, there exists exactly one set of functions $\hat m^\ast=\{\hat m^\ast_S | S\subseteq \{1,\dots,d\}\}$ satisfying constraint \eqref{constraint1} with $\sum_S \hat m^\ast_S = \sum_S \hat m^{(0)}_S$. 
The functions are given by \begin{align}\label{sol} \hat m^\ast_S (x_S)= \sum_{T \supseteq S} \sum_{T\setminus S \subseteq U\subseteq T}(-1)^{\abs{S}-\abs{T\setminus U}}\int \hat m^{(0)}_T(x_T) \hat p_U(x_U)\mathrm dx_U. \end{align} In particular, $\hat m^\ast$ does not depend on the particular identification of $\hat m^{(0)}$. \end{theo} \begin{example} Going back to the setting of our simple example (Section \ref{sec:example}), $m(x_1,x_2)=x_1+x_2 + 2x_1x_2 $, we have \begin{align*} m(x_1,x_2)=m^\ast_0 + m_1^\ast(x_1) + m_2^\ast(x_2) + m_{12}^\ast(x_1,x_2), \end{align*} where $m^\ast_0=c= 2\,\textrm{corr}(X_1,X_2)$, $m_1^\ast(x_1)= x_1 - c$, $m_2^\ast(x_2)=x_2 - c$, $m_{12}^\ast(x_1,x_2)= 2x_1x_2 + c.$ \end{example} In principle, in Theorem \ref{thm1} we could always consider $\hat{m}^{(0)}_{\{1,\dots,d\}}=\hat{m}^{(0)}$ and $\hat{m}^{(0)}_S=0$ otherwise as an initial estimator. This would simplify the notation. In Section \ref{sec:calculate}, we will discuss that if $\hat m^{(0)}$ is a composition of low-dimensional structures, then $\hat m^\ast$ can be calculated reasonably fast. In this case, we exploit the fact that $\hat{m}^{(0)}$ can be represented by functions $\hat{m}^{(0)}_S$ with $\hat{m}^{(0)}_S=0$ whenever $|S| > q$, where $q \ll d$, which is why the more complicated notation is used. We provide an implementation for \textit{xgboost} \citep{chen2016xgboost} and \textit{random planted forest} \citep{hiabu2020random}. \subsection{From global to local explanation} Fix a value $x_0 \in \mathbb R^d$. A local approximation of the function $\hat m$ is given by \begin{align}\label{eq:shap} \hat m\left(x_0\right)=\phi_{0}+\sum_{k=1}^{d} \phi_k(x_0), \end{align} for constants $\phi_0,\phi_1(x_0),\dots,\phi_d(x_0)$. Similar to the case of global explanations, the right-hand side is not identified. Local explanations add constraints to equation \eqref{eq:shap}, aiming for an identification such that $\phi_k(x_0)$ reflects the local contribution of the variable $X_k$ to $\hat m\left(x_0\right)$. Consider a value function $v_{x_0}$ that assigns a real value $v_{x_0}(S)$ to each subset $S \subseteq \{1,\dots d\}$. The four Shapley axioms of efficiency, symmetry, dummy and additivity provide a unique solution \citep{shapley1953value}; see Section \ref{sec:axioms} in the appendix. Defining $ \Delta_{v}(k, S)=v(S \cup k)-v(S)$, the Shapley values are \begin{align}\label{eq:shap2} \phi_{k}=\frac{1}{d !} \sum_{\pi \in \Pi_d} \Delta_{v}\left(k, P^{\pi}_k\right)=\frac 1 {d!}\sum_{S \subseteq \{1,\dots,d\} \setminus\{k\}} {|S| !(d -|S|-1) !}\,\Delta_{v}(k, S), \end{align} where $\Pi_d$ is the set of permutations of $\{1,\dots,d\}$ and $P^{\pi}_k=\{j: \pi^{-1}(j)<\pi^{-1}(k)\}$ denotes the set of features preceding $k$ in the order $\pi$. We follow \cite{janzing2020feature} and define SHAP values as Shapley values with the value function \begin{align}\label{value} v_{x_0}(S) = \int \hat m(x) \hat p_{-S}(x_{-S}) d x_{-S}\rvert_{x=x_0}, \end{align} which is also the version implemented in TreeSHAP \citep{lundberg2020local}. We show that there is a direct connection between the global explanation described in the previous section and SHAP values defined via \eqref{value}. In particular, this connection describes SHAP values uniquely without the use of the Shapley axioms or formula \eqref{eq:shap2}, which runs through permutations and whose number of summands grows exponentially with $d$.
The result is intriguing since usually the contribution or importance of a single variable in a general global representation as in \eqref{anova} is a complicated interplay between various interactions, see Section \ref{generalexpansion} in the appendix. \begin{corollar}\label{cor:mobius} If $\hat m$ is decomposed such that \eqref{constraint1} is fulfilled, then the SHAP values are weighted averages of the corresponding components, where an interaction component is split equally among all involved variables: \[ \phi_k(x)= \hat m^{\ast}_k(x_k)+ \frac 1 2 \sum_{j\neq k} \hat m^{\ast}_{kj}(x_{k},x_{j}) + \cdots + \frac 1 d \hat m^{\ast}_{1,\dots,d}(x_{1,\dots,d}). \] \end{corollar} \begin{remark} A crucial point of Corollary \ref{cor:mobius} is that the local SHAP values can be described by a composition of global explanations. By this we mean that while $\phi_k(x)$ depends on all components of $x$, the function $m^{\ast}_S$ does not depend on the values $x_{\{1,\dots,d\}\setminus S}$. \end{remark} \section{Feature importance}\label{sec:vim} The global interpretation also provides a new perspective on feature importance. SHAP value feature importance for feature $k$ is usually given by an empirical version of $E[\abs{\phi_k(x)}]$. By Corollary \ref{cor:mobius}, \begin{align*} E[\abs{\phi_k(x)}]=E\left[ \abs{\sum_{S: k \in S} \frac 1 {\abs S} \hat m^\ast_S(x_S)} \right]. \end{align*} In this definition, contributions from various interactions and main effects can cancel each other out, which may not be desirable. An alternative is to consider \begin{align*} E \left[ \sum_{S: k \in S} \frac 1 {\abs S} \abs{ \hat m^\ast_S(x_S)} \right], \end{align*} or to extend the definition of variable importance to interactions by defining variable importance as $E \left[\abs{ m^\ast_S(x_S)}\right]$, for a set $S \subseteq \{1,\dots, d\}$. \begin{example} Going back to our simple example (Section \ref{sec:example}), where $m(x_1,x_2)=x_1+x_2 + 2x_1x_2 $, SHAP variable importance for variable $x_1$ is an empirical version of \begin{align*} E\left[ \abs{x_1 - 2\textrm{corr}(X_1,X_2)+ \frac 1 2 \{2x_1x_2 + 2\textrm{corr}(X_1,X_2)\}}\right] =E\left[ \abs{x_1 + x_1x_2 - \textrm{corr}(X_1,X_2)}\right], \end{align*} which merges the main effect and the interaction effect. Alternatively, one may consider \begin{align*} E&\left[ \abs{x_1 - 2\textrm{corr}(X_1,X_2)}+ \abs{ x_1x_2 + \textrm{corr}(X_1,X_2)}\right]. \end{align*} \end{example} \section{Experiments}\label{Experiments} We apply our method to several real and simulated datasets to show that the functional decomposition provides additional insights compared to SHAP values and SHAP interaction values. First, we show on real data that a global explanation can provide a more comprehensive picture than a local explanation method. Second, we show on real and simulated data that the same holds for the feature importance measure proposed in Section~\ref{sec:vim}. Finally, we show that the functional decomposition allows post-hoc removal of features from a model, which can be used to reduce bias in prediction models. We performed all experiments with \textit{xgboost} and \textit{random planted forests}. The results with \textit{xgboost} are presented in Sections~\ref{sec:bike}-\ref{sec:debias} in the main paper, whereas the results with \textit{random planted forests} are in Sections~\ref{sec:bike_rpf}-\ref{sec:debias_rpf} in the appendix.
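Before turning to the data examples, the decomposition and Corollary~\ref{cor:mobius} can be checked numerically on the toy function from Section~\ref{sec:example}. The following minimal sketch (an illustration only; the bivariate normal design, the correlation of $0.3$, the evaluation point and the sample size are assumptions made purely for this illustration) estimates the marginal integrals of Theorem~\ref{thm1} by sample averages and verifies that $\hat m^\ast_1(x_1)+\tfrac{1}{2} \hat m^\ast_{12}(x_1,x_2)$ recovers the closed-form SHAP value $x_1+x_1x_2-\textrm{corr}(X_1,X_2)$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
rho, n = 0.3, 200_000
# Standardized, correlated features as in the motivating example.
X = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

def m(x1, x2):
    return x1 + x2 + 2.0 * x1 * x2

x1, x2 = 1.0, -0.7          # evaluation point from the motivating example

# Marginal integrals replaced by sample averages over the observed marginals.
m_all   = np.mean(m(X[:, 0], X[:, 1]))   # integrates out both features
m_keep1 = np.mean(m(x1, X[:, 1]))        # integrates out x2 only
m_keep2 = np.mean(m(X[:, 0], x2))        # integrates out x1 only

# Components under the marginal identification.
m0  = m_all
m1  = m_keep1 - m_all
m2  = m_keep2 - m_all
m12 = m(x1, x2) - m_keep1 - m_keep2 + m_all

phi1 = m1 + 0.5 * m12                    # SHAP value of x1 via the corollary
print(round(phi1, 2), round(x1 + x1 * x2 - rho, 2))   # both approximately 0
\end{verbatim}
In this sketch $\hat m^\ast_1(1)\approx 0.4$ and $\tfrac{1}{2}\hat m^\ast_{12}(1,-0.7)\approx -0.4$, so the main effect and the interaction contribution cancel, exactly as discussed in Section~\ref{sec:example}.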
\subsection{Global explanations}\label{sec:bike} As an example of a real data application, we apply our method to the \textit{bike sharing} data \citep{fanaee2014event}, predicting the number of rented bicycles per day, given seasonal and weather information. Figure~\ref{fig:bike} shows SHAP values, main effects, 2-way interactions and 3-way interactions of the features \textit{hour of the day} (hr, 0-24 full hours), \textit{Temperature} (temp, normalized to 0-1) and \textit{working day} (workingday, 0=no, 1=yes). In the top row, we see that different SHAP values are observed for the same values of the features and conclude that SHAP values are not sufficient to describe the features' effects on the outcome, due to interactions. In the second row, the main effects from the decomposition show a strong effect of the hour of the day: Many bikes are rented in the typical commute times in the morning and afternoon. We also see a positive effect of the temperature and no main effect of whether or not it is a working day. The 2-way interactions in the third row reveal strong interactions between the hour of the day and working day: On working days, more bikes are rented in the morning and less during the night and around noon. We also see that the temperature has a slightly higher effect on non working days and in the afternoon. In the bottom row, the 3-way interactions show that interactions between the hour of the day and the temperature are stronger on non working days than on working days. We conclude that the full functional decomposition provides a more comprehensive picture of the features' effects, compared to usual SHAP value interpretations and 2-way interaction SHAP, as e.g. proposed by \cite{lundberg2020local}. Note that, as described above, our methods do indeed provide the full picture, including all higher-order interactions, whereas Figure~\ref{fig:bike} only shows a subset of these interactions. \begin{figure}[htbp] \centering \includegraphics[width=1\linewidth]{bike_example.png} \caption{Bike sharing example (\textit{xgboost}). SHAP values (top row), main effects (second row), 2-way interactions (third row) and 3-way interactions (bottom row) of the features \textit{hour of the day} (hr, 0-24 full hours), \textit{Temperature} (temp, normalized to 0-1) and \textit{working day} (workingday, 0=no, 1=yes) of the bike sharing data.} \label{fig:bike} \end{figure} \subsection{Feature importance}\label{sec:expvim} As described in Section~\ref{sec:vim}, the functional decomposition can also be used to calculate feature importance. Figure~\ref{fig:vim} shows the feature importance for the function $m(x) = x_1 + x_3 + x_2 x_3 - 2 x_2 x_3 x_4$ and the bike sharing data from Section~\ref{sec:bike} based on SHAP values and our functional decomposition. For the simple function, the SHAP feature importance identifies $x_1$ and $x_3$ as equally important and $x_2$ and $x_4$ as less important but it gives no information about interactions. On the other hand, the feature importance based on the functional decomposition shows that $x_1$ has a strong main effect but no interactions, whereas $x_2$ and $x_4$ have only interaction effects but no main effects and $x_3$ both kinds of effect. Similarly, on the bike sharing data, the hour of the day (feature \textit{hr}) and the temperature (\textit{temp}) have both main and interaction effects, whereas the feature \textit{working day} has 2-way interaction effects but no main effects (compare Figure~\ref{fig:bike}). 
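Both importance measures can be computed directly from the estimated components. The following minimal sketch (our own illustration; it assumes the evaluated components $\hat m^\ast_S$ are available as a dictionary mapping feature subsets to vectors of per-observation values, and the numbers shown are the components of the toy function from Section~\ref{sec:example} at three example points with $\textrm{corr}(X_1,X_2)=0.3$) contrasts the two definitions for feature $1$:
\begin{verbatim}
import numpy as np

def shap_importance(components, k):
    # E[|phi_k|]: recombine the SHAP value first, then take absolute values.
    phi_k = sum(vals / len(S) for S, vals in components.items() if k in S)
    return np.mean(np.abs(phi_k))

def decomposition_importance(components, k):
    # E[sum_{S: k in S} |m_S|/|S|]: contributions cannot cancel across components.
    return np.mean(sum(np.abs(vals) / len(S)
                       for S, vals in components.items() if k in S))

components = {                              # m*_S evaluated at three points
    frozenset({1}):    np.array([ 0.4, -1.6, 1.4]),
    frozenset({2}):    np.array([-1.3,  0.4, -0.1]),
    frozenset({1, 2}): np.array([-0.8, -1.4, 2.6]),
}
print(shap_importance(components, 1))           # ~1.67
print(decomposition_importance(components, 1))  # ~1.93
\end{verbatim}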
Note that both definitions of feature importance are based on absolute values of SHAP values or components $m_S$ and are thus non-negative, in contrast to other methods of feature importance \citep{nembrini2018revival,casalicchio2018visualizing}. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{variable_importance.pdf} \caption{Feature importance (\textit{xgboost}) for the function $m(x) = x_1 + x_3 + x_2 x_3 - 2 x_2 x_3 x_4$ (top row) and the bike sharing data from Section~\ref{sec:bike} (bottom row) based on SHAP values (left column) and our functional decomposition separately for main effects and interactions of different orders (right column). } \label{fig:vim} \end{figure} \subsection{Post-hoc feature removal}\label{sec:debias} We show that our method can be used to remove features and all their effects, including interactions, from a model \textit{post-hoc}, i.e. after model fitting. We trained models on simulated data and on the \textit{adult} dataset \citep{Dua:2019}. Both models contained a feature \textit{sex} or \textit{gender}, which is a protected attribute and should not have an effect in fair prediction models \citep{barocas-hardt-narayanan}. In the simulation, we considered the simplified scenario where we predict a person's salary based on their sex and weekly working hours. We set the weekly working hours to an average of 40 for men and to 30 for women. Salary was simulated as 1 unit (e.g. thousand Euro per year) per weekly working hour and an additional 20 for males (see Figure~\ref{fig:graph}). Thus, men earn more for working longer hours (on average) and for being male per se. The first effect should be kept by a fair machine learning model, whereas the second effect discriminates against women. In the \textit{adult} data, we have the same features \textit{sex} and \textit{hours}, but we do not know the causal structure. Figure~\ref{fig:dediscr} shows the predictions for females and males from the full model, from a refitted model without the protected feature \textit{sex}, and from a decomposed model where the feature \textit{sex} was removed post-hoc. In the simulated data, we see that refitting the model does not change the predictions at all: because of the high correlation between \textit{sex} and \textit{hours}, the effects of \textit{sex} cannot be removed by not considering the feature in the model. Our decomposition, on the other hand, allows us to remove the (unwanted) direct effect of \textit{sex} while keeping the (wanted) indirect effect through \textit{hours}. On the \textit{adult} data, we see a similar difference, but less pronounced. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{feature_removal.pdf} \begin{tabular}{lrrr} \toprule Setting & \multicolumn{3}{c}{Median difference} \\ & Full & Refitted & Decomposed \\ \midrule Simulation & 29.79 & 29.79 & 10.57 \\ Adult & 0.13 & 0.07 & 0.05 \\ \bottomrule \end{tabular} \caption{Post-hoc feature removal (\textit{xgboost}). Predictions in a simulation (left) and the \textit{adult} dataset for males and females of the full model, a refitted model without the protected feature \textit{sex} and a decomposed model where the feature \textit{sex} was removed post-hoc. The table below shows the median differences between females and males for the three models.} \label{fig:dediscr} \end{figure} \section{Concluding remarks and limitations} In this paper, we have introduced a way to turn local Shapley values into a global explanation using a functional decomposition.
The global explanation has a causal interpretation under the DAG structure given in Figure \ref{fig:graph}. This causal structure might be quite realistic in many fairness considerations, but the true causal structure is generally unknown. In this respect, it would be interesting to look into other causal structures that could motivate identification constraints other than \eqref{constraint1}, which may connect to other local explanations than interventional SHAP. Also, while our suggestions for variable importance measures paint a more precise picture in many cases, they are not directly motivated by a theoretical constraint, unlike the usual additive importance measures. It will require more research to back these ideas up with theory. Another point not considered in this paper is the difference between the estimate $\hat m$ and a potential true function $m$. In particular, it is not clear if a method that estimates $m$ well is also a good estimator for a selection of components $\{m_S\}$. This discussion is related to work done in double/debiased machine learning \citep{chernozhukov2018double}. Moving forward, it could be interesting to modify out-of-the-box machine learning algorithms to specifically learn the low-dimensional structures well. \bibliographystyle{chicago} \section{Shapley axioms} \label{sec:axioms} Given a function $m$, a point $x_0$, and a value function $v$, the Shapley axioms \cite{shapley1953value} are \begin{itemize} \item \textbf{Efficiency}: $ m\left(x_0\right)=\phi_{0}+\sum_{k=1}^{d} \phi_k(x_0)$. \item \textbf{Symmetry}: Fix any $k,l \in \{1,\dots,d\}, k\neq l$. If $v_{x_0}(S\cup k)=v_{x_0}(S\cup l)$ for all $S \subseteq \{1,\dots d\}\setminus \{k,l\}$, then $\phi_k(x_0)=\phi_l(x_0)$. \item \textbf{Dummy}: If $v_{x_0}(S\cup k)=v_{x_0}(S)$ for all $S \subseteq \{1,\dots d\}\setminus \{k\}$, then $\phi_k=0$. \item \textbf{Linearity}: If $m(x_0)=m^1(x_0)+m^2(x_0)$, then $\phi_k(x_0)=\phi^1_k(x_0)+\phi^2_k(x_0)$, where $\phi^l$ is the explanation corresponding to the function $m^l$. \end{itemize} \section{Lemmata and proofs} \subsection{Lemmata} \begin{lemma} The solution $m^\ast_S$ described in Theorem \ref{thm1} can be re-written as \begin{align*} m^\ast_S(x_S) &= \sum_{V \subseteq S} (-1)^{\abs{S\setminus V}}\int m^{(0)}(x) p_{-V}(x_{-V})\mathrm dx_{-V}, \end{align*} where $m^{(0)}(x)=\sum_S m^{(0)}_S(x_S)$. In particular, $m_S^\ast$ does not depend on the particular identification of $m^{(0)}$. \end{lemma} \begin{proof} We consider a fixed $S\subseteq \{1,\dots, d\}$. We will make use of the fact that for a set $T\not\supseteq S$ \begin{align}\label{subsets} &\sum_{V \subseteq S} (-1)^{\abs{S\setminus V}}\int m^{(0)}_T(x_T) p_{-V}(x_{-V})\mathrm dx_{-V} =0, \end{align} and for a set $T\subseteq \{1,\dots, d\}$ with $T\supseteq S$ \begin{align}\label{subsets2} \{U: T\setminus S \subseteq U \subseteq T\}=\{ T \setminus V: V \subseteq S\}. \end{align} Combining \eqref{subsets}--\eqref{subsets2}, we get \begin{align*} &\sum_{V \subseteq S} (-1)^{\abs{S\setminus V}}\int m^{(0)}(x) p_{-V}(x_{-V})\mathrm dx_{-V}\\ &=\sum_{T\subseteq \{1,\dots,d\}}\sum_{V \subseteq S} (-1)^{\abs{S\setminus V}}\int m^{(0)}_T(x_T) p_{-V}(x_{-V})\mathrm dx_{-V}\\ &=\sum_{T\supseteq S}\sum_{V \subseteq S} (-1)^{\abs{S}-\abs{V}}\int m^{(0)}_T(x_T) p_{T\setminus V}(x_{T\setminus V})\mathrm dx_{T\setminus V}\\ &=\sum_{T \supseteq S} \sum_{T\setminus S \subseteq U\subseteq T}(-1)^{\abs{S}-\abs{T\setminus U}}\int m^{(0)}_T(x_T) p_U(x_U)\mathrm dx_U. \end{align*} It is left to show \eqref{subsets}--\eqref{subsets2}.
Equation \eqref{subsets2} follows from straightforward calculations. To see \eqref{subsets}, note \begin{align*} &\sum_{V \subseteq S} (-1)^{\abs{S\setminus V}}\int m^{(0)}_T(x_T) p_{-V}(x_{-V})\mathrm dx_{-V} \\ =&\sum_{U \subseteq S\cap T} \sum_{W \subseteq S\setminus T}(-1)^{\abs{S\setminus \{W\cup U\}}}\int m^{(0)}_T(x_T) p_{-\{U\cup W\}}(x_{-\{U\cup W\}})\mathrm dx_{-\{U\cup W\}} \\ =&\sum_{U \subseteq S\cap T} \int m^{(0)}_T(x_T) p_{-U}(x_{-U})\mathrm dx_{-U} \sum_{W \subseteq S\setminus T}(-1)^{\abs{S\setminus \{W\cup U\}}} \\ =&\sum_{U \subseteq S\cap T} \int m^{(0)}_T(x_T) p_{-U}(x_{-U})\mathrm dx_{-U} \left(\sum_{W \subseteq S\setminus T, \abs W=\text{odd}}(-1)^{\abs{S\setminus U} - 1}+\sum_{W \subseteq S\setminus T, \abs W=\text{even}}(-1)^{\abs{S\setminus U} }\right) \\ =&0, \end{align*} where the last equality follows from the fact that every non-empty set has an equal number of odd and even subsets. \end{proof} \begin{lemma}[\cite{shapley1953value}] \label{mobius} For every $U \subseteq\{1,\dots ,d\}$, \[ \int m(x) p_{-U}(x_{-U})\mathrm dx_{-U}= \sum_{T\subseteq U} m^{\ast}_T(x_T). \] \end{lemma} \begin{proof} \begin{align*} \sum_{T\subseteq U} m^{\ast}_T(x_T) &= \sum_{T \subseteq U} \sum_{V \subseteq T} (-1)^{\abs{T\setminus V}}\int m^{(0)}(x) p_{-V}(x_{-V})\mathrm dx_{-V}\\ &= \sum_{V \subset U} \int m^{(0)}(x) p_{-V}(x_{-V})\mathrm dx_{-V} \sum_{S \subseteq\{1,\dots, \abs{U\setminus V}\} } (-1)^{\abs S}\\ &\quad + \int m^{(0)}(x) p_{-U}(x_{-U})\mathrm dx_{-U}\\ &= \int m^{(0)}(x) p_{-U}(x_{-U})\mathrm dx_{-U}, \end{align*} where the last equation follows from $\sum_{S \subseteq\{1,\dots, \abs{U\setminus V}\} } (-1)^{\abs S}=0$, noting that a non-empty set has as many subsets with an odd number of elements as subsets with an even number of elements. \end{proof} \subsection{Proofs} \begin{proof}[Proof of Corollary \ref{cor:mobius}] This proof is analogous to the proof of Lemma 3.1 in \cite{shapley1953value}. From Lemma \ref{mobius}, we have $ \int m(x) p_{-U}(x_{-U})\mathrm dx_{-U}= \sum_{T\subseteq U} m^{\ast}_T(x_T). $ Hence the game $m$ with value function \[ v_m(U)=\int m(x) p_{-U}(x_{-U})\mathrm dx_{-U} \] equals the game $m^\ast$ with value function \[ v_{m^\ast}(U)=\sum_{S \subseteq \{1,\dots,d\}} m^{\ast}_S(x_S)\delta_S(U), \quad \delta_S(U)= 1( S\subseteq U). \] We now concentrate on the game with value function $m^{\ast}_S(x_S)\delta_S(U)$. We will show that for every non-empty $S \subseteq\{1,\dots, d\}$, \begin{align}\label{proof:shap} \phi_k(x,m^{\ast}_S(x_S)\delta_S(U)) = 1(k \in S) \abs S^{-1} m^{\ast}_S(x_S). \end{align} Here, $\phi_k(x,v)$ denotes the Shapley value for feature $k$ at point $x$ in a game with value function $v$. The proof is then completed by the additivity axiom, together with \[ \phi_k(x,m^{\ast}_\emptyset\delta_\emptyset(U) )= \begin{cases} m^{\ast}_\emptyset, &k=0, \\ 0 & \text{else}.\end{cases} \] The last statement follows from the dummy axiom. To show \eqref{proof:shap}, assume that $j,k \in S$. Then for every $U\subseteq\{1,\dots, d\}$, $m^{\ast}_S(x_S)\delta_S(U\cup j)=m^{\ast}_S(x_S)\delta_S(U\cup k)$, which, by the symmetry axiom, implies \[ \phi_j(x,m^{\ast}_S(x_S)\delta_S(U)) =\phi_k(x,m^{\ast}_S(x_S)\delta_S(U)). \] Additionally, for $k\not \in S$, we have $\phi_k(x,m^{\ast}_S(x_S)\delta_S(U)) =0$ by the dummy axiom. Hence we conclude \eqref{proof:shap} by applying \eqref{eq:shap}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm1}] Lemma \ref{mobius} implies that $m^\ast$ is a solution.
To see this, for $S\subseteq\{1,\dots ,d\}$, consider the following decomposition of $\int m^\ast(x) p_S(x_S)\mathrm dx_S$: \begin{align*} \int m^\ast(x) p_S(x_S)\mathrm dx_S&= \sum_{T \cap S \neq \emptyset} \int m^\ast_T(x_T) p_S(x_S)\mathrm dx_S + \sum_{T \cap S = \emptyset} m^\ast_T(x_T). \end{align*} Using Lemma \ref{mobius}, we have \[ \int m^\ast(x) p_S(x_S)\mathrm dx_S=\sum_{T\subseteq S^c} m^\ast_T(x_T)=\sum_{T \cap S = \emptyset} m^\ast_T(x_T), \] which together with the previous display implies that $m^\ast$ is a solution: \[ \sum_{T \cap S \neq \emptyset} \int m^\ast_T(x_T) p_S(x_S)\mathrm dx_S=0. \] It is left to show that the solution is unique. Note that for every $S\subseteq \{1,\dots,d\}$ \begin{align*} \sum_T \int m_T(x_T)p_S(x_S)\mathrm dx_S &=\sum_{T \cap S = \emptyset} m_T(x_T) + \sum_{T \cap S \neq \emptyset} \int m_T(x_T) p_S(x_S)\mathrm dx_S. \end{align*} Hence, condition \eqref{constraint1} is equivalent to \begin{align}\label{constraint2} \sum_T \int m_T(x_T)p_S(x_S)\mathrm dx_S &=\sum_{T \cap S = \emptyset} m_T(x_T). \end{align} Now assume that there are two sets of functions $m^{\circ}$ and $m^{\ast}$ that satisfy \eqref{constraint1} with $\sum_S m_S^{\circ}=\sum_S m_S^{\ast}$. From \eqref{constraint2}, it follows that for all $S\subseteq \{1,\dots,d\}$ \[ \sum_{T \cap S = \emptyset} m^{\circ}_T(x_T)=\sum_{T \cap S = \emptyset} m^{\ast}_T(x_T). \] Choosing $S=\{1,\dots,d\}\setminus T$ gives $\sum_{T'\subseteq T} m^{\circ}_{T'}(x_{T'})=\sum_{T'\subseteq T} m^{\ast}_{T'}(x_{T'})$ for every $T$, and induction on $\abs{T}$ yields $m^{\circ}_T(x_T)=m^{\ast}_T(x_T)$ for all $T\subseteq\{1,\dots,d\}.$ \end{proof} \section{Connecting a general global expansion to SHAP values} \label{generalexpansion} If a decomposition of a regression or classification function $m$ is not identified via \eqref{constraint1}, then calculating SHAP values from it leads to lengthy and non-trivial expressions. Here, we show how the terms up to dimension four in a general non-identified decomposition enter into a SHAP value. The following formula follows from straightforward calculations using \eqref{eq:shap2}. For $v_{x}(S)= \int m(x)\, p_{-S}(x_{-S})\, \mathrm d x_{-S}$, we get \begin{align*} \phi_1(x)&= m_1(x_1) - E[m_1(X_1)] \\ &+ \frac 1 2 \left \{\sum_{j\neq1} m_{1j}(x_1,x_j) - E[m_{1j}(X_1,X_j)] \right. \\ & \qquad \quad + \left.
\sum_{j\neq 1} E[m_{1j}(x_1,X_j)] - E[m_{1j}(X_1,x_j)] \right\}\\ &+ \frac 1 3 \left \{\sum_{j,k \neq 1, j < k} m_{1jk}(x_1,x_j,x_k) - E[m_{1jk}(X_1,X_j, X_k)]\right.\\ & \qquad \quad + \sum_{j,k \neq 1, j< k} E[m_{1jk}(x_1,X_j,X_k)]-E[m_{1jk}(X_1,x_j, x_k)] \\ & \qquad \quad + \frac 1 2\sum_{j,k \neq 1,j<k}E[m_{1jk}(x_1,X_j, x_k)] - E[m_{1jk}(X_1,x_j,X_k)] \\ & \qquad \quad + \frac 1 2 \left.\sum_{j,k \neq 1, j<k} E[m_{1jk}(x_1,x_j, X_k)] - E[m_{1jk}(X_1,X_j,x_k)] \right\}\\ &+ \frac 1 4 \left \{\sum_{j,k,l \neq 1, j< k< l} m_{1jkl}(x_1,x_j,x_k, x_l) - E[m_{1jkl}(X_1,X_j, X_k, X_l)]\right.\\ & \qquad \quad + \sum_{j,k,l \neq 1,j< k< l} E[m_{1jkl}(x_1,X_j,X_k, X_l)]-E[m_{1jkl}(X_1,x_j, x_k,x_l)] \\ & \qquad \quad + \frac 1 2\sum_{j,k,l \neq 1, j< k< l}E[m_{1jkl}(x_1,x_j, X_k,X_l)] - E[m_{1jkl}(X_1,X_j,x_k,x_l)] \\ & \qquad \quad + \frac 1 2 \sum_{j,k,l \neq 1, j< k< l} E[m_{1jkl}(x_1,X_j, x_k,X_l)] - E[m_{1jkl}(X_1,x_j,X_k,x_l)] \\ & \qquad \quad + \frac 1 2 \sum_{j,k,l \neq 1, j< k< l} E[m_{1jkl}(x_1,X_j, X_k,x_l)] - E[m_{1jkl}(X_1,x_j,x_k,X_l)] \\ & \qquad \quad + \frac 1 3\sum_{j,k,l \neq 1, j< k< l}E[m_{1jkl}(x_1,x_j, x_k,X_l)] - E[m_{1jkl}(X_1,X_j,X_k,x_l)] \\ & \qquad \quad + \frac 1 3 \sum_{j,k,l \neq 1, j< k< l} E[m_{1jkl}(x_1,x_j, X_k,x_l)] - E[m_{1jkl}(X_1,X_j,x_k,X_l)]\\ & \qquad \quad + \frac 1 3 \left.\sum_{j,k,l \neq 1, j< k< l} E[m_{1jkl}(x_1,X_j, x_k,x_l)] - E[m_{1jkl}(X_1,x_j,X_k,X_l)] \right\}\\ & \qquad \qquad \qquad \qquad \cdots \end{align*} \section{Calculating the functional decomposition of SHAP values from low-dimensional tree structures} \label{sec:calculate} Our proposed decomposition can be calculated from tree-based models by directly applying Theorem~\ref{thm1}. Inspired by \cite{lundberg2020local}, we first describe naïve algorithms for \textit{xgboost} and \textit{random planted forest} models and then describe an improved algorithm for \textit{xgboost} that only needs a single recursion through each tree. \subsection{Naïve xgboost algorithm}\label{sec:xgboost_naive} For all subsets of features $S \subseteq \{1,\dots, d\}$, we calculate the decomposition $\hat{m}_S(x_i)$ for all observations of interest $x_i \in \mathbf{X}$ recursively for each tree with feature set $T$ by considering all subsets $U$ with $T\setminus S \subseteq U\subseteq T$. In each node of a tree, if the node is a leaf node, we return its prediction (e.g. the mean in CART-like trees). For internal (non-leaf) nodes, the procedure depends on whether the feature used for splitting in the node is in the subset $U$ or not. If the feature is in the subset $U$, we continue in both the left and the right child node, each weighted by the coverage, i.e. the proportion of training observations going left and right, respectively. If the feature is not in the subset $U$, we apply the splitting criterion of the node and continue with the child node selected by the splitting procedure for observation $x_i$. See Algorithm~\ref{alg:naive} for the full algorithm in pseudocode.
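In effect, for a tree with feature set $T$, the call $\textsc{recurse}(\text{tree}, U, x_i, 0)$ plays the role of the term $\int \hat m^{(0)}_T(x_T)\, \hat p_U(x_U)\,\mathrm dx_U$ in Theorem~\ref{thm1}: the features in $U$ are averaged out, with the training coverage acting as the estimate of the distribution of $X_U$, while the remaining features are fixed at $x_i$. For concreteness, the recursion can be sketched as follows; this is a minimal illustration rather than the implementation accompanying the paper, and the field names (\texttt{children\_left}, \texttt{children\_right}, \texttt{feature}, \texttt{threshold}, \texttt{value}, \texttt{cover}) are hypothetical.
\begin{verbatim}
def recurse(tree, U, x, node=0):
    # Leaf node: return its prediction.
    if tree["children_left"][node] < 0:
        return tree["value"][node]
    j = tree["feature"][node]
    left = tree["children_left"][node]
    right = tree["children_right"][node]
    if j in U:
        # Split feature is marginalised: follow both branches,
        # weighted by their (normalised) training coverage.
        c_left = tree["cover"][left]
        c_right = tree["cover"][right]
        total = c_left + c_right
        return (c_left / total) * recurse(tree, U, x, left) \
             + (c_right / total) * recurse(tree, U, x, right)
    # Split feature is kept fixed: follow the branch that x takes.
    if x[j] <= tree["threshold"][node]:
        return recurse(tree, U, x, left)
    return recurse(tree, U, x, right)
\end{verbatim}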
\begin{algorithm}[H] \caption{\sc Naïve xgboost algorithm} \label{alg:naive} \begin{algorithmic} \STATE {\bfseries Procedure:} \textsc{decompose}($\mathbf{X}$, $\hat{m}(x)$) \STATE {\bfseries Input:} Dataset to be explained $\mathbf{X} \in \mathbb{R}^{n \times d}$, tree-based model $\hat{m}(x)$ with $B$ trees \STATE {\bfseries Output:} Components $\hat{m}_S(x_i)$ for all $S \subseteq \{1,\dots, d\}$ and $x_i \in \mathbf{X}$ \FOR{$i \in 1,\dots,n$} \FOR{$S \subseteq \{1,\dots, d\}$} \STATE $\hat{m}_S(x_i) \gets 0$ \FOR{$\text{tree} \in \{1,\dots, B\}$} \STATE $T \gets$ \textsc{features}(\text{tree}) \IF{$T \supseteq S$} \FOR{$U: T\setminus S \subseteq U\subseteq T$} \STATE $\hat{m}_S(x_i) \gets \hat{m}_S(x_i) + (-1)^{\abs{S}-\abs{T\setminus U}} \textsc{recurse}(\text{tree}, U, x_i, 0)$ \ENDFOR \ENDIF \ENDFOR \ENDFOR \ENDFOR \STATE {\bfseries Return:} $\hat{m}_S$ \STATE \STATE {\bfseries Procedure:} \textsc{recurse}($\text{tree}, U, x_i, \text{node}$) \STATE {\bfseries Input:} Tree ID (tree), subset $U$, data point $x_i$, node ID (node) \STATE {\bfseries Output:} Coverage-weighted prediction \IF{\textsc{isleaf}(node)} \STATE {\bfseries Return:} \textsc{prediction}(node) \ELSE \STATE $j \gets$ \textsc{split-feature}(node) \IF{$j \in U$} \STATE $C_{\text{left}} \gets$ \textsc{coverage}(left-node) \STATE $C_{\text{right}} \gets$ \textsc{coverage}(right-node) \STATE {\bfseries Return:} $C_{\text{left}} \textsc{recurse}(\text{tree}, U, x_i, \text{left-node}) + C_{\text{right}} \textsc{recurse}(\text{tree}, U, x_i, \text{right-node})$ \ELSE \IF{$x_i^j \leq$ \textsc{split-value}(node)} \STATE {\bfseries Return:} $\textsc{recurse}(\text{tree}, U, x_i, \text{left-node})$ \ELSE \STATE {\bfseries Return:} $\textsc{recurse}(\text{tree}, U, x_i, \text{right-node})$ \ENDIF \ENDIF \ENDIF \end{algorithmic} \end{algorithm} \subsection{Naïve random planted forest algorithm}\label{sec:rpf_naive} For the random planted forest algorithm (rpf), we use a different approach. By slightly altering the representation of an rpf in \cite{hiabu2020random}, the output of an rpf is given by a set $$\hat m^{(0)}=\{\hat m^{(0)}_{S,b} | S\subseteq \{1,\dots,d\},\ b\in\{1,\dots,B\}\},$$ where each estimator $\hat m^{(0)}_{S,b}$ can be represented by a finite partition defined by an $|S|$-dimensional grid (leaves) and corresponding values. Thus, we start with \begin{itemize} \item a grid $G_k=\{x_{k,1},\dots,x_{k,t_k}\}$ for each coordinate $k\in\{1,\dots,d\}$, \item for each $S\subseteq \{1,\dots,d\}, b\in\{1,\dots,B\}$, an array representing the value of $m^{(0)}_{S,b}(x)$ for each coordinate $x\in\times_{k\in S}G_k$. Here $x$ is considered to be the bottom-left corner of a hyperrectangle. \end{itemize} Note that every tree-based algorithm can be described in such a manner. Given an estimator $\hat{p}_S$, this representation makes a direct calculation of \eqref{sol} simple: for each combination of sets $U,T\subseteq \{1,\dots,d\}$ with $U\subseteq T$, we only need to calculate the term $\int \hat m^{(0)}_{T,b}(x_T) \hat p_U(x_U)\mathrm dx_U$ once and then add it to or subtract it from the appropriate estimators $\hat{m}^*_S$. See Algorithm~\ref{alg:naive_rpf} for the full algorithm in pseudocode.
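As a small worked instance of the sign bookkeeping in Algorithm~\ref{alg:naive_rpf} below, consider a single tree $b$ whose only non-zero component has $T=\{1,2\}$, and abbreviate $A=\hat m^{(0)}_{T,b}(x_{i,1},x_{i,2})$, $B_1=\int \hat m^{(0)}_{T,b}(x_1,x_{i,2})\,\hat p_1(x_1)\,\mathrm dx_1$, $B_2=\int \hat m^{(0)}_{T,b}(x_{i,1},x_2)\,\hat p_2(x_2)\,\mathrm dx_2$ and $C=\int \hat m^{(0)}_{T,b}(x_T)\,\hat p_{12}(x_T)\,\mathrm dx_T$. The inner loops then add $C$ to $\hat m_\emptyset$, $B_2-C$ to $\hat m_1(x_i)$, $B_1-C$ to $\hat m_2(x_i)$, and $A-B_1-B_2+C$ to $\hat m_{12}(x_i)$. These four contributions sum to $A$, so the tree's prediction at $x_i$ is preserved while being redistributed onto the components as prescribed by \eqref{sol}.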
\begin{algorithm}[H] \caption{\sc Naïve rpf algorithm} \label{alg:naive_rpf} \begin{algorithmic} \STATE {\bfseries Procedure:} \textsc{decompose}($\mathbf{X}$, $\hat{m}(x)$) \STATE {\bfseries Input:} Dataset to be explained $\mathbf{X} \in \mathbb{R}^{n \times d}$, tree-based model with initial decomposition $\hat{m}_{S,b}^{(0)}(x)$ for $S\subseteq\{1,\dots,d\}$ and trees $b\in\{1,\dots,B\}$, estimator $\hat{p}_S$ \STATE {\bfseries Output:} Components $\hat{m}_S(x_i)$ for all $S \subseteq \{1,\dots, d\}$ and $x_i \in \mathbf{X}$ \FOR{$i \in 1,\dots,n$} \FOR{$S \subseteq \{1,\dots, d\}$} \STATE $\hat{m}_S(x_i) \gets 0$ \ENDFOR \FOR{$b \in \{1,\dots, B\}$} \FOR{$T \subseteq \{1,\dots, d\}$} \FOR{$U \subseteq T$} \STATE $\mathrm{update}_{T,U} \gets \int \hat m^{(0)}_{T,b}(x_{i,T\backslash U},x_U) \hat p_U(x_U)\mathrm dx_U $ \FOR{$S: T\setminus S \subseteq U,\ S\subseteq T$} \STATE $\hat{m}_S(x_i) \gets \hat{m}_S(x_i) + (-1)^{\abs{S}-\abs{T\setminus U}} \mathrm{update}_{T,U}$ \ENDFOR \ENDFOR \ENDFOR \ENDFOR \FOR{$S \subseteq \{1,\dots, d\}$} \STATE $\hat{m}_S(x_i) \gets \hat{m}_S(x_i)/B$ \ENDFOR \ENDFOR \STATE {\bfseries Return:} $\hat{m}_S$ \end{algorithmic} \end{algorithm} For the estimator $\hat{p}_S$ we used the following construction in our simulations. For each $S\subseteq \{1,\dots,d\}$, let $a_S(x)$ be the number of data points residing in the hyperrectangle with bottom-left corner $x$, for each coordinate $x\in\times_{k\in S}G_k$. For $|S|$-dimensional $y$ we then set \[ \hat{p}_S(y)=\frac{a_S(x_y)}{\sum_{x\in \times_{k\in S}G_k} a_S(x)}\frac{1}{\mathrm{vol}(x_y)}, \] where $x_y$ is the bottom-left corner of the hyperrectangle that includes $y$ and $\mathrm{vol}(x)$ is the volume of the hyperrectangle corresponding to $x$. Using this estimator, the updating function in the algorithm simplifies to \[ \mathrm{update}_{T,U} = \sum_{x_U\in\times_{k\in U}G_k} \hat m^{(0)}_{T,b}(x_{i,T\backslash U},x_U)\, \hat p_U(x_U)\,\mathrm{vol}(x_U). \] \subsection{Improved xgboost algorithm} To improve the algorithm described in Section~\ref{sec:xgboost_naive} and Algorithm~\ref{alg:naive}, we pre-calculate the contribution of each tree for all $n$ observations and all subsets $U$ of each tree's feature set $T$ in a single recursive procedure by filling an $n \times 2^D$ matrix, where $D$ is the tree depth. In a second step, we only have to sum these contributions with the corresponding signs (see Theorem~\ref{thm1}). \section{Experiments with random planted forest} This section shows the simulation results when using the \textit{random planted forest algorithm} as an estimation procedure. First, the results for the motivating example from Section \ref{sec:example} are given in Figure \ref{fig:rpf_simple_example}. The following subsections show the results of the experiments that were discussed in Section \ref{Experiments}. \begin{figure} \centering \includegraphics[width=\linewidth]{simple_example_rpf.png} \caption{Simple example. SHAP values (top row) and functional decomposition (bottom row) of a \textit{random planted forest} model of the function $m(x_1,x_2)=x_1+x_2 + 2x_1x_2$. The red lines in the bottom row represent the SHAP values of the true function.} \label{fig:rpf_simple_example} \end{figure} \subsection{Global explanations}\label{sec:bike_rpf} Figure \ref{fig:bike_rpf} shows the results discussed in Section \ref{sec:bike} when considering the \textit{random planted forest algorithm} instead of \textit{xgboost}.
\begin{figure}[htbp] \centering \includegraphics[width=1\linewidth]{bike_example_rpf.png} \caption{Bike sharing example (\textit{random planted forest}). SHAP values (top row), main effects (second row), 2-way interactions (third row) and 3-way interactions (bottom row) of the features \textit{hour of the day} (hr, 0-24 full hours), \textit{Temperature} (temp, normalized to 0-1) and \textit{working day} (workingday, 0=no, 1=yes) of the bike sharing data.} \label{fig:bike_rpf} \end{figure} \subsection{Feature importance}\label{sec:expvim_rpf} Figure \ref{fig:vim_rpf} shows the results discussed in Section \ref{sec:expvim} when considering the \textit{random planted forest algorithm} instead of \textit{xgboost}. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{variable_importance_rpf.png} \caption{Feature importance (\textit{random planted forest}) for the function $m(x) = x_1 + x_3 + x_2 x_3 - 2 x_2 x_3 x_4$ (top row) and the bike sharing data from Section~\ref{sec:bike} (bottom row), based on SHAP values (left column) and on our functional decomposition, separately for main effects and interactions of different orders (right column).} \label{fig:vim_rpf} \end{figure} \subsection{Post-hoc feature removal}\label{sec:debias_rpf} Figure \ref{fig:dediscr_rpf} shows the results discussed in Section \ref{sec:debias} when considering the \textit{random planted forest algorithm} instead of \textit{xgboost}. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{feature_removal_rpf.pdf} \begin{tabular}{lrrr} \toprule Setting & \multicolumn{3}{c}{Median difference} \\ & Full & Refitted & Decomposed \\ \midrule Simulation & 29.90 & 29.91 & 9.84 \\ Adult & 0.18 & 0.052 & 0.047 \\ \bottomrule \end{tabular} \caption{Post-hoc feature removal (\textit{random planted forest}). Predictions for males and females in a simulation (left) and on the \textit{adult} dataset, for the full model, a refitted model without the protected feature \textit{sex}, and a decomposed model where the feature \textit{sex} was removed post-hoc. The table shows the median differences between females and males for the three models.} \label{fig:dediscr_rpf} \end{figure} \section{Relationship of global explanation to partial dependence plots} Given an estimator $\hat m$ and a target subset $S\subset \{1,\dots,d\}$, the partial dependence plot \cite{friedman2001greedy}, $\xi_S$, is defined as \[ \xi_S(x_S)= \int \hat m(x) p_{-S}(x_{-S})\mathrm dx_{-S}. \] It is straightforward to verify that partial dependence plots are linked to a functional decomposition $\{\hat m^\ast_S\}$ identified via \eqref{constraint1} through \[ \xi_S=\sum_{U \subseteq S} \hat m^{\ast}_U. \] \section{Computing environment} A 64-bit Linux platform running Ubuntu 20.04 with an AMD Ryzen Threadripper 3960X CPU (24 cores, 48 threads) and 256\,GB of RAM was used for all computations with R version 4.1.2.
\section{Introduction} In the early years of machine learning interpretability research, the focus was mostly on global feature importance methods that assign a single importance value to each feature. More recently, the attention has shifted towards local interpretability methods, which provide explanations for individual observations. Popular examples of the latter are LIME \citep{ribeiro2016should} and SHAP \citep{shapley1953value,lipovetsky2001analysis,lundberg2017unified}. The major reason for this shift is that local methods provide a more comprehensive picture of model explanations than single-value global methods, most importantly in the presence of nonlinear effects and interactions. This, however, neglects the fact that global methods can be more than single-value methods: Ideally, a global method provides useful information about the entire regression or classification function by providing an explanation for each feature and each interaction effect of arbitrary order, relative to the values they take. As with local methods, this gives us an explanation for each observation. The crucial difference is that two observations that have a set of features in common receive the same explanation for main effects and interaction effects involving exclusively those features. The reason is that higher-order effects are not hidden in the main effects. A global explanation is not specific to observations but only to feature values, and it gives not only a more comprehensive picture than local methods but the complete picture. In this paper, we introduce a global explanation procedure by identifying components in a functional decomposition. We show that the proposed global explanation is identical to $q$-interaction SHAP \citep{tsai2022faith}, where $q$ corresponds to the maximal order of interaction present in the model to be analyzed. Hence, we provide a new interpretation of SHAP values which is not game-theoretically motivated. We develop a fast implementation that exactly calculates $q$-interaction SHAP for tree-based machine learning models. In principle, our results can be applied to any model and our algorithm can be applied to any tree-based model. However, since the number of components grows exponentially with increasing $q$, exact calculation is only feasible if $q$ is sufficiently small, as is the case, e.g., in gradient boosted trees. We provide an implementation for \textit{xgboost} \citep{chen2016xgboost} and \textit{random planted forest} \citep{hiabu2020random}. The $q$-interaction SHAP is a global explanation of a trained model. One- and two-dimensional components can be plotted, and together with higher-order components they can be used to decompose local SHAP values into main effects and all involved interaction effects. Additionally, main and interaction components can be summarized into feature importance values. Beyond explaining feature effects, our proposed decomposition can be used to detect bias in models where LIME and SHAP fail \citep{slack2020fooling} and to reduce such bias by removing individual components from the decomposition.
\subsection{Motivating example}\label{sec:example} We will give a toy example of how the interplay of correlations and interactions can give rise to misleading SHAP values. Consider the function $m(x_1,x_2)=x_1+x_2 + 2x_1x_2 $. The SHAP value for the first variable is $ \phi_1(x_1,x_2)=x_1-E[X_1]+x_1x_2 -E[X_1X_2]+x_1E[X_2]-x_2E[X_1]. $ If the features are standardized, i.e., $X_1$ and $X_2$ have mean zero and variance one, the expression reduces to \[ \phi_1(x_1,x_2)=x_1+x_1x_2 -\textrm{corr}(X_1,X_2). \] Hence, e.g., if $\textrm{corr}(X_1,X_2)=0.3$, an individual with $x_1=1$ and $x_2=-0.7$ would see a SHAP value of 0 for the first variable: \[ \phi_1(1,-0.7) = 0. \] This is quite misleading: at $x_2=-0.7$ the function $m$ clearly depends on $x_1$, whatever value $x_1$ takes. The underlying problem is that locally at $(x_1,x_2)=(1,-0.7)$, the main effect contribution and the interaction contribution cancel each other out. Indeed, we will see that the SHAP value $\phi_1$ can be decomposed into a main effect contribution of $\{x_1\}$, which is $x_1-2\textrm{corr}(X_1,X_2)=0.4$, and an interaction contribution of $\{x_1,x_2\}$, which is $x_1x_2+\textrm{corr}(X_1,X_2)=-0.4$. Figure~\ref{fig:example} shows SHAP values and the functional decomposition of an \textit{xgboost} model of the function $m(x_1,x_2)$. The SHAP values $\phi_1$ and $\phi_2$ contain the main effect contributions $m_1$ and $m_2$ as well as the interaction contribution $m_{12}$. The functional decomposition separates the contributions $m_1$, $m_2$ and $m_{12}$. Those familiar with SHAP values may argue that one can detect the non-zero impact of $x_1$ by plotting $\phi_1$ over all instances (see Figure~\ref{fig:example}). This argument has two problems. Firstly, this does not change the misleading local value. Secondly, SHAP values can be quite arbitrary: Two estimators that are equal on the support of the data can have very different SHAP values. To see this, define $\tilde m(x)= m(x) +\alpha(x)$, where $\alpha(x)=0$ for $x\in \mathrm{supp}(X_1,X_2)$, and choose $\alpha$ outside the support of $(X_1,X_2)$ such that it approximates some desired SHAP values. This works because SHAP values are constructed by extrapolating outside the support of the data. \cite{slack2020fooling} demonstrated empirically how this can be exploited to hide the importance of protected features. One could ask for local explanations that do not extrapolate, hoping that this solves the problem. Unfortunately, this is not the case: If explanations are deduced only from the region with data support, those explanations are based on the correlation structure of the features \citep{janzing2020feature}. In particular, a variable that has zero effect on the model output can still be assigned a value stemming from a correlated variable \citep{janzing2020feature,sundararajan2020many}. We conclude: \begin{quote} \emph{Local explanations that do not explicitly specify all interactions cannot lead to meaningful causal interpretations in the presence of correlated features.} \end{quote} This is worrying, given that interpretation tools are typically applied to black-box models precisely in situations where interactions are present. A local interpretation that considers all interactions is actually a global interpretation. Hence, the goal of this paper is to unify local and global explanations. \begin{figure} \centering \includegraphics[width=\linewidth]{simple_example.png} \caption{Simple example.
SHAP values (top row) and functional decomposition (bottom row) of an \textit{xgboost} model of the function $m(x_1,x_2)=x_1+x_2 + 2x_1x_2$.} \label{fig:example} \end{figure} \subsection{Related Work} A functional decomposition for the global interpretation of regression functions was introduced in the statistical literature by \cite{stone1994use}, and has been further discussed in \cite{hooker2007generalized,chastaing2012generalized,lengerich2020purifying}. These authors considered a different constraint, called the (generalized) functional ANOVA decomposition. In contrast, the constraint we introduce in this paper is linked to Shapley values. There is considerable literature on interactions and Shapley values. In cooperative game theory, pairwise player-player interactions were first considered by \cite{owen1972multilinear} and later generalized to higher-order interactions by \cite{grabisch1999axiomatic}. In the machine learning context, the arbitrariness of Shapley values due to interactions and correlations has been discussed in \cite{kumar2020problems}, \cite{slack2020fooling}, \cite{sundararajan2020many}, and possible solutions have been proposed in \cite{zhang2020interpreting}, \cite{kumar2021shapley}, \cite{harris2022joint}, \cite{ittner2021feature}, \cite{sundararajan2020shapley}. Recently, \cite{tsai2022faith} introduced interaction SHAP values of any given order and proposed an approximation to calculate them. In this paper, we introduce an identification constraint for a functional decomposition that is motivated by a causal interpretation, which will connect to Shapley values with a value function that has recently been coined interventional SHAP \citep{chen2020true}. Alternative value functions have been discussed in \cite{frye2020asymmetric}, \cite{yeh2022threading}. There are a variety of methods to obtain global feature importance measures implied by SHAP, in the form of a single value. These include \cite{casalicchio2018visualizing}, \cite{frye2020asymmetric} and \cite{williamson2020efficient}, among others. Similar to the method we suggest in Section \ref{sec:vim}, these measures are obtained as weighted averages of local importance values. However, in contrast to our suggestion, most are motivated by additive importance measures \citep{covert2020understanding}. \section{Main result} Let $(Y_i, X_{i,1},\dots,X_{i,d})$ be a data set of $n$ i.i.d. observations with $X_{i,k}\in\mathbb{R}$, $i=1,\dots,n$; $k=1,\dots,d$. We consider the supervised learning setting \[ E[Y_i|X_i=x] = m(x), \] where the function $m$ is of interest and $Y_i$ is a real-valued random variable.\footnote{We use $Y_i \in \mathbb{R}$ for notational convenience. It is straightforward to extend to binary classification, whereas multiclass classification would require a slightly different procedure.} We assume that a reasonable estimator $\hat m$ of $m$ has been provided. \subsection{Global Interpretation} With increasing dimension it can quickly get very hard, if not impossible, to visualize and thereby comprehend a multivariate function. Hence, a global interpretation of $\hat m$ is arguably only feasible if it is a composition of low-dimensional structures. Let us consider a specific decomposition of a multivariate function into a sum of main effects, bivariate interactions, etc., up to a $d$-variate interaction term: \begin{align}\label{anova} \hat m(x)&= \hat m_0+\sum_{k=1}^d \hat m_k(x_{k}) + \sum_{k<l} \hat m_{kl}(x_{k},x_{l}) + \cdots + \hat m_{1,\dots,d}(x)= \sum_{S\subseteq \{1,\dots,d\}} \hat m_S(x_S).
\end{align} The heuristic of the decomposition is that if the underlying function $m(x)$ only lives on low-dimensional structures, then $m_S$ should be zero for most feature subsets $S$ and the order of maximal interaction $q=\max\{ \abs{S}: m_S\neq 0\}$ should be much smaller than the number of features: $q \ll d$. This discussion, however, is not very meaningful before one has agreed on an identification; without suitable identification constraints, it is possible to change components on the right without altering the left-hand side. We propose the following identification: \textbf{Marginal identification:} For every $S\subseteq \{1,\dots,d\}$, \begin{align}\label{constraint1} \sum_{T \cap S \neq \emptyset} \int \hat m_T(x_T) \hat p_S(x_S)\mathrm dx_S=0, \end{align} where $\hat p_S$ is some estimator of the density $p_S$ of $X_S$. The identification can be motivated by a causal interpretation. \begin{figure} \begin{center} \begin{minipage}[t]{0.3\linewidth} \begin{tikzpicture}[ squarednode/.style={rectangle, draw=black!60, fill=red!5, very thick, minimum size=5mm}, ] \node[squarednode] (U) {$X_U$}; \node[squarednode] (V) [below=of U] {$X_V$}; \node[squarednode] (m) [right=of V] {m(U,V)}; \draw[->] (U.south) -- (V.north); \draw[->] (V.east) -- (m.west); \draw[->] (U.east) -- (m.north); \end{tikzpicture} \end{minipage} \begin{minipage}[t]{0.3\linewidth} \begin{tikzpicture}[ squarednode/.style={rectangle, draw=black!60, fill=red!5, very thick, minimum size=5mm}, ] \node[squarednode] (U) {$X_U$}; \node[squarednode] (V) [below=of U] {$do(X_V=x_V)$}; \node[squarednode] (m) [right=of V] {m(U,V)}; \draw[->] (V.east) -- (m.west); \draw[->] (U.east) -- (m.north); \end{tikzpicture} \end{minipage} \caption{Left: Initial causal structure. Right: Causal structure after removing the effect of $X_U$ on $X_V$.} \label{fig:graph} \end{center} \end{figure} Assume $U$ is a set of features that should not have an effect on $\hat m$. For example, $U=\{\text{gender, ethnicity}\}$ in the case of non-discriminatory regulation requirements. Assume $\{1,\dots, d\}$ is the disjoint union of $U$ and $V$ with a directed acyclic graph structure $X_U\rightarrow X_V \rightarrow m$, $X_U\rightarrow m$, as illustrated in Figure \ref{fig:graph}. Eliminating the causal relationship between $X_U$ and $X_V$ can be achieved via the do-operator, $do(X_V=x_V)$, which removes all edges going into $X_V$; see Figure \ref{fig:graph}. The function $E[m(X) \mid do(X_V=x_V)]$ does not use information contained in $X_U$, neither directly nor indirectly. Under the assumed causal structure, standard calculations \citep{pearl2009causality} lead to \[ E[m(X) \mid do(X_V=x_V)]= \int m(x) p_U(x_U) \mathrm dx_U. \] If $\hat m$ is identified via \eqref{constraint1}, then $\tilde m(x_{-U}):=\int \hat m(x) \hat p_U(x_U) \mathrm dx_U$ can be extracted from $\hat m$ by dropping all components that include variables in $U$: \[ \tilde m(x_{-U})= \int \hat m(x) \hat p_U(x_U) \mathrm dx_U= \sum_{S \subseteq V} \hat m_S(x_S). \] The next theorem states existence and uniqueness of a decomposition that satisfies the identification constraint \eqref{constraint1} and describes the solution explicitly. \begin{theo}\label{thm1} Given any initial estimator $\hat m^{(0)}=\{\hat m^{(0)}_S | S\subseteq \{1,\dots,d\}\}$, there exists exactly one set of functions $\hat m^\ast=\{\hat m^\ast_S | S\subseteq \{1,\dots,d\}\}$ satisfying constraint \eqref{constraint1} with $\sum_S \hat m^\ast_S = \sum_S \hat m^{(0)}_S$.
The functions are given by \begin{align}\label{sol} \hat m^\ast_S (x_S)= \sum_{T \supseteq S} \sum_{T\setminus S \subseteq U\subseteq T}(-1)^{\abs{S}-\abs{T\setminus U}}\int \hat m^{(0)}_T(x_T) \hat p_U(x_U)\mathrm dx_U. \end{align} In particular, $\hat m^\ast$ does not depend on the particular identification of $\hat m^{(0)}$. \end{theo} \begin{example} Going back to the setting of our simple example (Section \ref{sec:example}), $m(x_1,x_2)=x_1+x_2 + 2x_1x_2 $, we have \begin{align*} m(x_1,x_2)=m^\ast_0 + m_1^\ast(x_1) + m_2^\ast(x_2) + m_{12}^\ast(x_1,x_2), \end{align*} where $m^\ast_0=c= 2\,\textrm{corr}(X_1,X_2)$, $m_1^\ast(x_1)= x_1 - c$, $m_2^\ast(x_2)=x_2 - c$, $m_{12}^\ast(x_1,x_2)= 2x_1x_2 + c.$ \end{example} In principle, in Theorem \ref{thm1} we could always consider $\hat{m}^{(0)}_{\{1,\dots,d\}}=\hat{m}^{(0)}$ and $\hat{m}^{(0)}_S=0$ otherwise as an initial estimator. This would simplify the notation. In Section \ref{sec:calculate}, we will discuss that if $\hat m^{(0)}$ is a composition of low-dimensional structures, then $\hat m^\ast$ can be calculated reasonably fast. In this case, we exploit the fact that $\hat{m}^{(0)}$ can be represented by functions $\hat{m}^{(0)}_S$, where $\hat{m}^{(0)}_S=0$ for $\abs{S}> q$ with $q \ll d$, which is why the more complicated notation is used. We provide an implementation for \textit{xgboost} \citep{chen2016xgboost} and \textit{random planted forest} \citep{hiabu2020random}. \subsection{From global to local explanation} Fix a value $x_0 \in \mathbb R^d$. A local approximation of the function $\hat m$ is given by \begin{align}\label{eq:shap} \hat m\left(x_0\right)=\phi_{0}+\sum_{k=1}^{d} \phi_k(x_0), \end{align} for constants $\phi_0,\phi_1(x_0),\dots,\phi_d(x_0)$. Similar to the case of global explanations, the right-hand side is not identified. Local explanations add constraints to equation \eqref{eq:shap} aiming for an identification such that $\phi_k(x_0)$ reflects the local contribution of the variable $X_k$ to $\hat m\left(x_0\right)$. Consider a value function $v_{x_0}$ that assigns a real value $v_{x_0}(S)$ to each subset $S \subseteq \{1,\dots, d\}$. The Shapley values are the unique solution satisfying the four axioms efficiency, symmetry, dummy and linearity \citep{shapley1953value}; see Section \ref{sec:axioms} in the appendix. Defining $ \Delta_{v}(k, S)=v(S \cup k)-v(S)$, the Shapley values are \begin{align}\label{eq:shap2} \phi_{k}=\frac{1}{d !} \sum_{\pi \in \Pi_d} \Delta_{v}\left(k, P^\pi_k\right)=\frac 1 {d!}\sum_{S \subseteq \{1,\dots,d\} \setminus\{k\}} {|S| !(d -|S|-1) !}\,\Delta_{v}(k, S), \end{align} where $\Pi_d$ is the set of permutations of $\{1,\dots,d\}$ and $P^\pi_k=\{j: \pi^{-1}(j)<\pi^{-1}(k)\}$ denotes the set of features preceding $k$ in the order $\pi$. We follow \cite{janzing2020feature} and define SHAP values as Shapley values with the value function \begin{align}\label{value} v_{x_0}(S) = \int \hat m(x) \hat p_{-S}(x_{-S})\, \mathrm d x_{-S}\Big\rvert_{x=x_0}, \end{align} which is also the version implemented in TreeSHAP \citep{lundberg2020local}. We show that there is a direct connection between the global explanation described in the previous section and SHAP values defined via \eqref{value}. In particular, this connection describes SHAP values uniquely without the use of the Shapley axioms or formula \eqref{eq:shap2}, which runs through permutations and whose number of summands grows exponentially with $d$.
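As a quick sanity check, consider $d=2$ and the running example of Section~\ref{sec:example} with standardized features. The value function \eqref{value} gives $v_{x}(\emptyset)=E[m(X)]=2\,\textrm{corr}(X_1,X_2)$, $v_{x}(\{1\})=x_1$, $v_{x}(\{2\})=x_2$ and $v_{x}(\{1,2\})=m(x_1,x_2)$, so that \eqref{eq:shap2} reduces to \[ \phi_1(x)=\tfrac12\big\{v_{x}(\{1\})-v_{x}(\emptyset)\big\}+\tfrac12\big\{v_{x}(\{1,2\})-v_{x}(\{2\})\big\} = x_1+x_1x_2-\textrm{corr}(X_1,X_2) = m^\ast_1(x_1)+\tfrac12 m^\ast_{12}(x_1,x_2), \] with $m^\ast_1$ and $m^\ast_{12}$ as in the example above; this is exactly the structure formalized in Corollary~\ref{cor:mobius} below.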
The result is intriguing since usually the contribution or importance of a single variable in a general global representation as in \eqref{anova} is a complicated interplay between various interactions; see Section \ref{generalexpansion} in the appendix. \begin{corollar}\label{cor:mobius} If $\hat m$ is decomposed such that \eqref{constraint1} is fulfilled, then the SHAP values are weighted sums of the corresponding components, where an interaction component is split equally among all involved variables: \[ \phi_k(x)= \hat m^{\ast}_k(x_k)+ \frac 1 2 \sum_{j\neq k} \hat m^{\ast}_{kj}(x_k,x_j) + \cdots + \frac 1 d \hat m^{\ast}_{1,\dots,d}(x_{1,\dots,d}). \] \end{corollar} \begin{remark} A crucial point of Corollary \ref{cor:mobius} is that the local SHAP values can be described by a composition of global explanations. By this we mean that while $\phi_k(x)$ depends on all components of $x$, the function $m^{\ast}_S$ does not depend on the values $x_{\{1,\dots,d\}\setminus S}$. \end{remark} \section{Feature importance}\label{sec:vim} The global interpretation also provides a new perspective on feature importance. SHAP value feature importance for feature $k$ is usually given by an empirical version of $E[\abs{\phi_k(x)}]$. By Corollary \ref{cor:mobius}, \begin{align*} E[\abs{\phi_k(x)}]=E\left[ \abs{\sum_{S: k \in S} \frac 1 {\abs S} \hat m^\ast_S(x_S)} \right]. \end{align*} In this definition, contributions from various interactions and main effects can cancel each other out, which may not be desirable. An alternative is to consider \begin{align*} E \left[ \sum_{S: k \in S} \frac 1 {\abs S} \abs{ \hat m^\ast_S(x_S)} \right], \end{align*} or to extend the definition of variable importance to interactions by defining the importance of a set $S \subseteq \{1,\dots, d\}$ as $E \left[\abs{ \hat m^\ast_S(x_S)}\right]$. \begin{example} Going back to our simple example (Section \ref{sec:example}), where $m(x_1,x_2)=x_1+x_2 + 2x_1x_2 $, SHAP variable importance for variable $x_1$ is an empirical version of \begin{align*} E\left[ \abs{X_1 - 2\textrm{corr}(X_1,X_2)+ \frac 1 2 \{2X_1X_2 + 2\textrm{corr}(X_1,X_2)\}}\right] =E\left[ \abs{X_1 + X_1X_2 - \textrm{corr}(X_1,X_2)}\right], \end{align*} which merges the main effect and the interaction effect. Alternatively, one may consider \begin{align*} E&\left[ \abs{X_1 - 2\textrm{corr}(X_1,X_2)}+ \abs{ X_1X_2 + \textrm{corr}(X_1,X_2)}\right]. \end{align*} \end{example} \section{Experiments}\label{Experiments} We apply our method to several real and simulated datasets to show that the functional decomposition provides additional insights compared to SHAP values and SHAP interaction values. First, we show on real data that a global explanation can provide a more comprehensive picture than a local explanation method. Second, we show on real and simulated data that the same holds for the feature importance measure proposed in Section~\ref{sec:vim}. Finally, we show that the functional decomposition allows post-hoc removal of features from a model, which can be used to reduce the bias of prediction models. We performed all experiments with \textit{xgboost} and \textit{random planted forests}. The results with \textit{xgboost} are presented in Sections~\ref{sec:bike}--\ref{sec:debias} in the main paper, whereas the results with \textit{random planted forests} are in Sections~\ref{sec:bike_rpf}--\ref{sec:debias_rpf} in the appendix.
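For concreteness, the importance values in the experiments below can be read as the natural plug-in versions of the measures from Section~\ref{sec:vim}, with expectations replaced by sample averages over the data, e.g. \[ E[\abs{\phi_k(x)}]\approx\frac 1 n\sum_{i=1}^n\abs{\phi_k(x_i)}, \qquad E \Big[ \sum_{S: k \in S} \tfrac 1 {\abs S} \abs{ \hat m^\ast_S(x_S)} \Big]\approx\frac 1 n\sum_{i=1}^n \sum_{S: k \in S} \tfrac 1 {\abs S} \abs{ \hat m^\ast_S(x_{i,S})}. \]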
\subsection{Global explanations}\label{sec:bike} As an example of a real data application, we apply our method to the \textit{bike sharing} data \citep{fanaee2014event}, predicting the number of rented bicycles per hour, given seasonal and weather information. Figure~\ref{fig:bike} shows SHAP values, main effects, 2-way interactions and 3-way interactions of the features \textit{hour of the day} (hr, 0-24 full hours), \textit{Temperature} (temp, normalized to 0-1) and \textit{working day} (workingday, 0=no, 1=yes). In the top row, we see that different SHAP values are observed for the same values of the features and conclude that SHAP values are not sufficient to describe the features' effects on the outcome, due to interactions. In the second row, the main effects from the decomposition show a strong effect of the hour of the day: Many bikes are rented in the typical commute times in the morning and afternoon. We also see a positive effect of the temperature and no main effect of whether or not it is a working day. The 2-way interactions in the third row reveal strong interactions between the hour of the day and working day: On working days, more bikes are rented in the morning and fewer during the night and around noon. We also see that the temperature has a slightly higher effect on non-working days and in the afternoon. In the bottom row, the 3-way interactions show that interactions between the hour of the day and the temperature are stronger on non-working days than on working days. We conclude that the full functional decomposition provides a more comprehensive picture of the features' effects, compared to the usual SHAP value interpretations and 2-way interaction SHAP, as e.g. proposed by \cite{lundberg2020local}. Note that, as described above, our methods do indeed provide the full picture, including all higher-order interactions, whereas Figure~\ref{fig:bike} only shows a subset of these interactions. \begin{figure}[htbp] \centering \includegraphics[width=1\linewidth]{bike_example.png} \caption{Bike sharing example (\textit{xgboost}). SHAP values (top row), main effects (second row), 2-way interactions (third row) and 3-way interactions (bottom row) of the features \textit{hour of the day} (hr, 0-24 full hours), \textit{Temperature} (temp, normalized to 0-1) and \textit{working day} (workingday, 0=no, 1=yes) of the bike sharing data.} \label{fig:bike} \end{figure} \subsection{Feature importance}\label{sec:expvim} As described in Section~\ref{sec:vim}, the functional decomposition can also be used to calculate feature importance. Figure~\ref{fig:vim} shows the feature importance for the function $m(x) = x_1 + x_3 + x_2 x_3 - 2 x_2 x_3 x_4$ and the bike sharing data from Section~\ref{sec:bike} based on SHAP values and our functional decomposition. For the simple function, the SHAP feature importance identifies $x_1$ and $x_3$ as equally important and $x_2$ and $x_4$ as less important, but it gives no information about interactions. On the other hand, the feature importance based on the functional decomposition shows that $x_1$ has a strong main effect but no interactions, whereas $x_2$ and $x_4$ have only interaction effects but no main effects, and $x_3$ has both kinds of effects. Similarly, on the bike sharing data, the hour of the day (feature \textit{hr}) and the temperature (\textit{temp}) have both main and interaction effects, whereas the feature \textit{working day} has 2-way interaction effects but no main effects (compare Figure~\ref{fig:bike}).
Note that both definitions of feature importance are based on absolute values of SHAP values or components $\hat m^\ast_S$ and thus are non-negative, in contrast to other methods of feature importance \citep{nembrini2018revival,casalicchio2018visualizing}. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{variable_importance.pdf} \caption{Feature importance (\textit{xgboost}) for the function $m(x) = x_1 + x_3 + x_2 x_3 - 2 x_2 x_3 x_4$ (top row) and the bike sharing data from Section~\ref{sec:bike} (bottom row), based on SHAP values (left column) and on our functional decomposition, separately for main effects and interactions of different orders (right column).} \label{fig:vim} \end{figure} \subsection{Post-hoc feature removal}\label{sec:debias} We show that our method can be used to remove features and all their effects, including interactions, from a model \textit{post-hoc}, i.e., after model fitting. We trained models on simulated data and the \textit{adult} dataset \citep{Dua:2019}. Both models contained a feature \textit{sex} or \textit{gender}, which is a protected attribute and should not have an effect in fair prediction models \citep{barocas-hardt-narayanan}. In the simulation, we considered the simplified scenario where we predict a person's salary based on their sex and weekly working hours. We set the weekly working hours to an average of 40 for men and of 30 for women. Salary was simulated as 1 unit (e.g., thousand Euro per year) per weekly working hour plus an additional 20 for males (see Figure~\ref{fig:graph}). Thus, men earn more for working longer hours (on average) and for being male per se. The first effect should be kept by a fair machine learning model, whereas the second effect discriminates against women. In the \textit{adult} data, we have the same features \textit{sex} and \textit{hours}, but we do not know the causal structure. Figure~\ref{fig:dediscr} shows the predictions for females and males of the full model, a refitted model without the protected feature \textit{sex}, and a decomposed model where the feature \textit{sex} was removed post-hoc. In the simulated data, we see that refitting the model does not change the predictions at all: Because of the high correlation between \textit{sex} and \textit{hours}, the effects of \textit{sex} cannot be removed by simply not considering the feature in the model. Our decomposition, on the other hand, allows us to remove the (unwanted) direct effect of \textit{sex} while keeping the (wanted) indirect effect through \textit{hours}. On the \textit{adult} data, we see a similar difference, but less pronounced. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{feature_removal.pdf} \begin{tabular}{lrrr} \toprule Setting & \multicolumn{3}{c}{Median difference} \\ & Full & Refitted & Decomposed \\ \midrule Simulation & 29.79 & 29.79 & 10.57 \\ Adult & 0.13 & 0.07 & 0.05 \\ \bottomrule \end{tabular} \caption{Post-hoc feature removal (\textit{xgboost}). Predictions in a simulation (left) and on the \textit{adult} dataset for males and females of the full model, a refitted model without the protected feature \textit{sex}, and a decomposed model where the feature \textit{sex} was removed post-hoc. The table shows the median differences between females and males for the three models.} \label{fig:dediscr} \end{figure} \section{Concluding remarks and limitations} In this paper, we have introduced a way to turn local Shapley values into a global explanation using a functional decomposition.
The global explanation has a causal interpretation under the DAG structure given in Figure \ref{fig:graph}. This causal structure might be quite realistic in many fairness considerations, but the true causal structure is generally unknown. In this respect, it would be interesting to look into other causal structures that could motivate identification constraints other than \eqref{constraint1}, which may connect to local explanations other than interventional SHAP. Also, while our suggested variable importance measures paint a more precise picture in many cases, they are not directly motivated by a theoretical constraint, in contrast to the usual additive importance measures. More research will be required to back these ideas with theory. Another point not considered in this paper is the difference between the estimate $\hat m$ and a potential true function $m$. In particular, it is not clear whether a method that estimates $m$ well is also a good estimator for a selection of components $m_S$. This discussion is related to work done in double/debiased machine learning \citep{chernozhukov2018double}. Moving forward, it could be interesting to modify out-of-the-box machine learning algorithms to specifically learn the low-dimensional structures well. \bibliographystyle{chicago}
{ "timestamp": "2022-08-15T02:07:03", "yymm": "2208", "arxiv_id": "2208.06151", "language": "en", "url": "https://arxiv.org/abs/2208.06151", "abstract": "We consider a global representation of a regression or classification function by decomposing it into the sum of main and interaction components of arbitrary order. We propose a new identification constraint that allows for the extraction of interventional SHAP values and partial dependence plots, thereby unifying local and global explanations. With our proposed identification, a feature's partial dependence plot corresponds to the main effect term plus the intercept. The interventional SHAP value of feature $k$ is a weighted sum of the main component and all interaction components that include $k$, with the weights given by the reciprocal of the component's dimension. This brings a new perspective to local explanations such as SHAP values which were previously motivated by game theory only. We show that the decomposition can be used to reduce direct and indirect bias by removing all components that include a protected feature. Lastly, we motivate a new measure of feature importance. In principle, our proposed functional decomposition can be applied to any machine learning model, but exact calculation is only feasible for low-dimensional structures or ensembles of those. We provide an algorithm and efficient implementation for gradient-boosted trees (xgboost) and random planted forest. Conducted experiments suggest that our method provides meaningful explanations and reveals interactions of higher orders. The proposed methods are implemented in an R package, available at \\url{this https URL}.", "subjects": "Machine Learning (cs.LG); Statistics Theory (math.ST); Machine Learning (stat.ML)", "title": "Unifying local and global model explanations by functional decomposition of low dimensional structures", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9697854146791213, "lm_q2_score": 0.7310585786300049, "lm_q1q2_score": 0.7089699468314283 }
https://arxiv.org/abs/2004.04728
Hyperbolic metrics on open subsets of Ptolemaic spaces with sharp parameter bounds
It is shown that a construction of Z. Zhang and Y. Xiao on open subsets of Ptolemaic spaces yields, when the subset has boundary containing at least two points, metrics that are Gromov hyperbolic with parameter $\log 2$ and strongly hyperbolic with parameter $1$, with no further conditions on the open set. A class of examples is constructed on Hadamard manifolds showing that these estimates of the parameters are sharp.
\section*{Introduction} \blfootnote{2010 Mathematics Subject Classification: 51M10, 53C23.} \blfootnote{keywords: Ptolemaic, Gromov hyperbolic, strongly hyperbolic, metric space.} In this paper a construction in \cite{ZX} is applied to produce a metric on an open subset of a Ptolemaic space. When the open set has boundary containing at least two points, the metric is strongly hyperbolic with parameter 1 and therefore (by Theorem 4.2 in \cite{NS}) it is Gromov hyperbolic with parameter $\log 2$. This is shown with no other conditions on the boundary (\thmref{maintheorem}). A class of examples is constructed for an open subset of a Hadamard manifold. (By Theorem 1.1 in \cite{BFW} a complete Riemannian manifold is Ptolemaic if and only if it is Hadamard.) In the examples the open set has a boundary with at least two isolated points, one of which is the nearest neighbor in the boundary to the other (\thmref{hadamard}). These examples are all Gromov hyperbolic with parameter $\log 2$ and no lower, so by again applying Theorem 4.2 in \cite{NS} together with \thmref{maintheorem}, they are strongly hyperbolic with parameter 1 and no higher. \begin{defn} Let $(X,d)$ be a metric space. It is Ptolemaic iff $$ d(x,y)\,d(z,t)\leq d(x,z)\,d(y,t)+d(x,t)\,d(y,z) $$ for any $x,y,z,t\in X$. It is hyperbolic in the sense of Gromov {\rm (\cite{G}; see also \cite{JV})} with parameter $\delta>0$ iff $$ d(x,y)+d(z,t)\leq \max\left\{d(x,z)+d(y,t),d(x,t)+d(y,z)\right\}+2\delta $$ for any $x,y,z,t\in X$. It is strongly hyperbolic {\rm (\cite{NS})} with parameter $\epsilon>0$ iff \begin{eqnarray*} & & \exp\left(\frac\epsilon 2\,d(x,y)+\frac\epsilon 2\,d(z,t)\right) \\ & & \qquad\leq\;\exp\left(\frac\epsilon 2\,d(x,z)+\frac\epsilon 2\,d(y,t)\right)\:+\: \exp\left(\frac\epsilon 2\,d(x,t)+\frac\epsilon 2\,d(y,z)\right) \end{eqnarray*} for any $x,y,z,t\in X$. \end{defn} Note that if a space is Gromov hyperbolic with parameter $\delta$ then the property also holds for any parameter $\wt\delta>\delta$. In \cite{ZX} the following construction was made. For a Ptolemaic space $(X,d)$ and $U\subset X$ open with non-empty boundary, let \begin{eqnarray} \rho_U(x,y)= \sup_{p\in\partial U}\log \left(1+\frac{d(x,y)}{d(p,x)\,d(p,y)}\right). \label{themetric} \end{eqnarray} This should be compared to \cite{AIW}, where metrics hyperbolic in the sense of Gromov were constructed on the complement of a finite subset of a general metric space with parameter of hyperbolicity independent of the size of the set. Related metrics are to be found in \cite{DHV}, \cite{PH}, and \cite{ZI} on domains in $\mathbb R^n$. The construction can also be seen as a variation of the inversion on metric spaces (and the metric defined using it) in \cite{BHX}. It was shown in \cite{ZX} that (\ref{themetric}) defines a metric on any such open $U\subset X$ for any Ptolemaic space $(X,d)$. Furthermore, the following theorem was proved. \begin{theorem}{\rm (\cite{ZX} Theorem 6)} Let $(X,d)$ be a Ptolemaic space and $U\subset X$ open with non-empty boundary. If the distance between any two distinct points in $\partial U$ is at least $R>0$, then $(U,\rho_U)$ is Gromov hyperbolic with parameter $\frac 12\log\max\left\{2+\frac{20}R,392\right\}$. \label{ZXtheorem6} \end{theorem} When $(X,d)$ is Ptolemaic and $U$ is the complement of a point, it was shown in \cite{ZX} (Theorem 5) that the resulting metric is strongly hyperbolic with parameter $2$ and Gromov hyperbolic with parameter $\frac 12\log 2$.
They also show (Theorem 4) that for $(X,d)$ Ptolemaic and $p\in X$ fixed, if $$ s_p(x,y)=\frac{d(x,y)}{(1+d(x,p))(1+d(y,p))}, $$ then $(X,\log(1+s_p))$ is strongly hyperbolic with parameter $2$ and Gromov hyperbolic with parameter $\frac 12\log 2$. In this paper \thmref{ZXtheorem6} is generalized and sharpened. It is shown that the metric (\ref{themetric}) is Gromov hyperbolic with parameter $\log 2$ for any open $U$ whose boundary contains at least two points. The possibility is raised in \cite{ZX} that the construction may yield a strongly hyperbolic metric. It is proved (\thmref{maintheorem}) that this is indeed the case with parameter $1$. Examples are given (\thmref{hadamard}) showing that the parameter bounds are sharp. The paper is organized as follows. In the first section it is proved that, for any open subset $U$ of a Ptolemaic space whose boundary is non-empty, the construction produces a metric that is strongly hyperbolic with parameter $1$ and Gromov hyperbolic with parameter $\log 2$. The second section is devoted to examples on Hadamard manifolds exhibiting the sharp lower bound for the parameter of Gromov hyperbolicity. \section{Hyperbolic metrics} In this section it is shown that the construction yields a strongly hyperbolic metric with parameter $1$. \begin{theorem} If $(X,d)$ is a Ptolemaic space and $U\subset X$ is open with non-empty boundary, then $(U,\rho_U)$ is Gromov hyperbolic with parameter $\log 2$, and strongly hyperbolic with parameter $1$. This result is sharp in the sense that, when $\partial U$ contains at least two distinct points, it does not hold in general with a smaller parameter of Gromov hyperbolicity or a larger parameter of strong hyperbolicity. \label{maintheorem} \end{theorem} \noindent Before proving this theorem, a rearrangement inequality is proved as a lemma. \begin{lemma} If $\alpha,\beta,\gamma,\delta\in[0,\infty)$, then \begin{eqnarray*} \min\{\alpha+\beta,\gamma+\delta\}\cdot\min\{\alpha+\gamma,\beta+\delta\} &\leq& \alpha\delta+\beta\gamma + 2\sqrt{\alpha\beta\gamma\delta}. \end{eqnarray*} Equality is realized iff one or more of the following non-exclusive conditions hold: $$ \begin{array}{cl} (\roman{rcc}\stepcounter{rcc}) & \alpha\delta=0\;\mbox{and}\;\max\{\alpha,\delta\}\geq|\beta-\gamma|, \\ (\roman{rcc}\stepcounter{rcc}) & \beta\gamma=0\;\mbox{and}\;\max\{\beta,\gamma\}\geq|\alpha-\delta|, \\ (\roman{rcc}\stepcounter{rcc}) & \alpha=\delta\;\mbox{and}\;\beta=\gamma. \end{array} $$ \label{thelemma} \end{lemma} \noindent {\em (Proof)} \bigskip\noindent Consider the case \begin{eqnarray} \alpha+\beta\leq\gamma+\delta\quad\mbox{and}\quad\alpha+\gamma\leq\beta+\delta. \label{case1} \end{eqnarray} Since the inequality as well as conditions (i), (ii) and (iii) are invariant under the permutations $(\alpha\:\delta)$, $(\alpha\:\beta)(\gamma\:\delta)$, and $(\alpha\:\gamma\:\delta\:\beta)$, the other cases follow from this one. From (\ref{case1}), \begin{eqnarray} \alpha &\leq& \delta-|\beta-\gamma| \label{case1a} \end{eqnarray} and the inequality becomes \begin{eqnarray} & & (\alpha+\beta)(\alpha+\gamma)\;\leq\;\alpha\delta+\beta\gamma +2\sqrt{\alpha\beta\gamma\delta} \label{case1b} \\ &\Leftrightarrow& \sqrt\alpha\left(\sqrt\alpha^{\,3} +(\beta+\gamma-\delta)\sqrt\alpha -2\sqrt{\beta\gamma\delta}\right) \;\leq\; 0. \label{case1c} \end{eqnarray} If $\delta=0$, then (\ref{case1a})$\;\Rightarrow 0\leq\alpha\leq-|\beta-\gamma|$, so $\beta=\gamma$ and both sides of the inequality are equal to $\beta^2$.
If $\beta=0$, then (\ref{case1b}) becomes $\alpha(\alpha+\gamma)\leq\alpha\delta$, which is true by (\ref{case1}). The inequality holds similarly if $\gamma=0$. Now assume $\beta\gamma\delta>0$. Multiplying or dividing each side of (\ref{case1b}) by $\delta^{\,2}$ does not change its validity, so without loss of generality replace $(\alpha,\beta,\gamma,\delta)$ with $(\alpha/\delta,\beta/\delta,\gamma/\delta,1)$, or simply assume $\delta\geq 1$. We claim that $$ \phi(x)=x^3+(\beta+\gamma-\delta)x-2\sqrt{\beta\gamma\delta} $$ is non-positive for $0\leq x\leq\sqrt{\delta-|\beta-\gamma|}$, from which (\ref{case1b}) follows. Since $\phi(0)<0$, and $\phi$ has at most one critical point in $[0,\infty)$, it is enough to show that $\phi(\sqrt{\delta-|\beta-\gamma|})\leq 0$, that is, $$ \sqrt{\delta-|\beta-\gamma|}\,(\delta-|\beta-\gamma|+\beta+\gamma-\delta) \;\leq\; \sqrt{\delta}\,2\min\{\beta,\gamma\} \;\leq\; 2\sqrt{\beta\gamma\delta} $$ which is true when $\delta\geq 1$. If equality holds in (\ref{case1b}), then \begin{eqnarray*} \alpha=0 &\mbox{or}& \sqrt{\alpha}^{\,3}+(\beta+\gamma-\delta)\sqrt{\alpha} -2\sqrt{\beta\gamma\delta}=0. \end{eqnarray*} If $\alpha=0$, then $\max\{\alpha,\delta\}=\delta\geq|\beta-\gamma|$, which is covered by (i). Otherwise, for reasons stated above, the only possibility for a root of the cubic polynomial is when $\alpha=\delta-|\beta-\gamma|$, in which case $$ \begin{array}{cccc} & \sqrt{\delta-|\beta-\gamma|}\,(\delta-|\beta-\gamma|+\beta+\gamma-\delta) &=& 2\sqrt{\beta\gamma\delta} \\ \Rightarrow& \sqrt{\delta-|\beta-\gamma|}\,\min\{\beta,\gamma\} &=& \sqrt{\beta\gamma\delta}. \end{array} $$ This equation is invariant under the permutation of $\beta$ with $\gamma$, so without loss of generality assume that $\beta\leq\gamma$. This gives $$ (\delta-\gamma+\beta)\beta^2 \;=\; \beta\gamma\delta, $$ so either $\beta=0$, in which case $\alpha=\delta-\gamma=\delta-\max\{\beta,\gamma\}$, which is covered by (ii), or $\beta>0$ and \begin{eqnarray*} (\delta-\gamma+\beta)\beta = \gamma\delta &\Rightarrow& \beta = \gamma\;\Rightarrow\; \alpha = \delta, \end{eqnarray*} which is (iii). It is simple to verify that each of the conditions (i), (ii), and (iii) yields equality. \hfill\hfill\fbox{\rule[-1mm]{0mm}{1mm}\,} \bigskip Now to prove \thmref{maintheorem}. Let $x,y,z,t\in U$ and $p,q\in\partial U$. Since $(X,d)$ is Ptolemaic, \begin{eqnarray*} d(x,y)\,d(z,p) &\leq& d(x,z)\,d(y,p)+d(x,p)\,d(y,z), \\ d(x,y)\,d(t,p) &\leq& d(x,t)\,d(y,p)+d(x,p)\,d(y,t), \\ d(z,t)\,d(x,q) &\leq& d(x,z)\,d(t,q)+d(z,q)\,d(x,t), \\ \mbox{and}\quad d(z,t)\,d(y,q) &\leq& d(y,z)\,d(t,q)+d(z,q)\,d(y,t). \end{eqnarray*} Therefore, \begin{eqnarray*} \frac{d(x,y)}{d(x,p)\,d(y,p)} &\leq & \frac{d(x,z)}{d(x,p)\,d(z,p)}+ \frac{d(y,z)}{d(y,p)\,d(z,p)}, \\ \frac{d(x,y)}{d(x,p)\,d(y,p)} &\leq & \frac{d(x,t)}{d(x,p)\,d(t,p)}+ \frac{d(y,t)}{d(y,p)\,d(t,p)}, \\ \frac{d(z,t)}{d(z,q)\,d(t,q)} &\leq & \frac{d(x,z)}{d(z,q)\,d(x,q)}+ \frac{d(x,t)}{d(t,q)\,d(x,q)}, \\ \mbox{and}\quad\frac{d(z,t)}{d(z,q)\,d(t,q)} &\leq & \frac{d(y,z)}{d(z,q)\,d(y,q)}+ \frac{d(y,t)}{d(t,q)\,d(y,q)}. \end{eqnarray*} For $a,b\in U$ denote the supremal metric space inversion over $\partial U$ by \begin{eqnarray} \lambda(a,b)=\sup_{u\in\partial U}\frac{d(a,b)}{d(a,u)\,d(b,u)}.
\label{ld} \end{eqnarray} All the terms are positive so \begin{eqnarray*} & & \mbox{$\left(1+\frac{d(x,y)}{d(x,p)\,d(y,p)}\right)\left(1+ \frac{d(z,t)}{d(z,q)\,d(t,q)}\right)$} \\ &\leq& \mbox{$\min\left\{1+\frac{d(x,z)}{d(x,p)\,d(z,p)} +\frac{d(y,z)}{d(y,p)\,d(z,p)}\, ,\: 1+\frac{d(x,t)}{d(x,p)\,d(t,p)}+\frac{d(y,t)}{d(y,p)\,d(t,p)} \right\}$} \\ & & \cdot\mbox{$\min\left\{1+\frac{d(x,z)}{d(z,q)\,d(x,q)} +\frac{d(x,t)}{d(t,q)\,d(x,q)}\, ,\: 1+\frac{d(y,z)}{d(z,q)\,d(y,q)}+\frac{d(y,t)}{d(t,q)\,d(y,q)} \right\}$} \\ &\leq& \min\left\{1+\lambda(x,z)+\lambda(y,z)\, ,\: 1+\lambda(x,t)+\lambda(y,t)\right\} \\ & & \cdot\min\left\{1+\lambda(x,z)+\lambda(x,t)\, ,\: 1+\lambda(y,z)+\lambda(y,t)\right\} \\ &<& \min\left\{2+\lambda(x,z)+\lambda(y,z)\, ,\: 2+\lambda(x,t)+\lambda(y,t)\right\} \\ & & \cdot\min\left\{2+\lambda(x,z)+\lambda(x,t)\, ,\: 2+\lambda(y,z)+\lambda(y,t)\right\} . \end{eqnarray*} Since $p,q\in\partial U$ were arbitrary, \begin{eqnarray} & & (1+\lambda(x,y))(1+\lambda(z,t)) \nonumber \\ &\leq& \min\left\{2+\lambda(x,z)+\lambda(y,z)\, ,\: 2+\lambda(x,t)+\lambda(y,t)\right\} \nonumber \\ & & \cdot\min\left\{2+\lambda(x,z)+\lambda(x,t)\, ,\: 2+\lambda(y,z)+\lambda(y,t)\right\} \nonumber \\ &\leq& (1+\lambda(x,z))\,(1+\lambda(y,t))+(1+\lambda(y,z))\,(1+\lambda(x,t)) \nonumber \\ & & +\; 2\sqrt{(1+\lambda(x,z))\,(1+\lambda(y,z))\,(1+\lambda(x,t)) \,(1+\lambda(y,t))} \label{shest} \end{eqnarray} by \lemref{thelemma}. From this it is easy to see that \begin{eqnarray*} & & (1+\lambda(x,y))(1+\lambda(z,t)) \;\leq \\ & & \qquad 4\max\left\{(1+\lambda(x,z))(1+\lambda(y,t)), \:(1+\lambda(y,z))(1+\lambda(x,t))\right\}. \end{eqnarray*} Since $\rho_U(a,b)=\log\left(1+\lambda(a,b)\right)$ for all $a,b\in U$, taking logarithms gives $\rho_U(x,y)+\rho_U(z,t)\leq\max\left\{\rho_U(x,z)+\rho_U(y,t),\,\rho_U(x,t)+\rho_U(y,z)\right\}+2\log 2$, so Gromov hyperbolicity of $(U,\rho_U)$ with parameter $\log 2$ follows directly. To show strong hyperbolicity, note that (\ref{shest}) gives \begin{eqnarray*} \sqrt{(1+\lambda(x,y))(1+\lambda(z,t))} &\leq& \sqrt{(1+\lambda(x,z))(1+\lambda(y,t))} \\ & & \quad +\;\sqrt{(1+\lambda(x,t))(1+\lambda(y,z))}. \end{eqnarray*} Therefore $\rho_U$ is strongly hyperbolic with parameter $1$. To see that the parameters are sharp, from \thmref{hadamard} it follows that in general the parameter of Gromov hyperbolicity cannot be smaller than $\log 2$. This and Theorem 4.2 in \cite{NS} give that the parameter of strong hyperbolicity cannot be larger than $1$. \hfill\hfill\fbox{\rule[-1mm]{0mm}{1mm}\,} \section{Examples on Hadamard manifolds} \label{examples} For a complete Riemannian manifold the Ptolemaic property is equivalent to the manifold being Hadamard (Theorem 1.1 in \cite{BFW}). This is the setting for a class of examples which are Gromov hyperbolic with parameter $\log 2$ and no lower. \begin{theorem} Let $(M,g)$ be a complete Riemannian manifold of dimension at least two whose distance is Ptolemaic. Take $U\subset M$ open with $p,q\in\partial U$ such that for some $R>0$ $$ d(p,q')\geq d(p,q)>0\:\mbox{ and }\: d(q,q')\geq R $$ for all $q'\in\partial U\setminus\{p,q\}$. Then $(U,\rho_U)$ is Gromov hyperbolic with parameter $\log 2$ and not lower. \label{hadamard} \end{theorem} \noindent {\em (Proof)} Applying \thmref{maintheorem}, $(U,\rho_U)$ is $\log 2$-Gromov hyperbolic. In the proof, four points $x_+,x_-,y_+,y_-$ will be constructed with the property that \begin{eqnarray*} & & \rho_U(x_+,x_-)+\rho_U(y_+,y_-) \\ & & \qquad\geq\max\left\{\rho_U(x_+,y_+)+\rho_U(x_-,y_-),\: \rho_U(x_+,y_-)+\rho_U(x_-,y_+)\right\}+2\delta \end{eqnarray*} with $\delta<\log 2$ arbitrarily close to $\log 2$.
If $d_g(p,q)=2r$, take $\gamma\maps{[-r,r]}{M}$ a unit speed, minimal geodesic with $\gamma(-r)=p$ and $\gamma(r)=q$. The dimension of $M$ is at least two, so we can construct a parallel unit vector field $F$ along $\gamma$ with $\g{F}{\gamma'}=0$. Fix $\theta>0$ small. Define the curves $$ \alpha_\theta(s)=\exp_{\gamma(-r\cos\theta)}(s F(-r\cos\theta)) \;\mbox{ and }\; \beta_\theta(s)=\exp_{\gamma(r\cos\theta)}(s F(r\cos\theta)) $$ on the interval $[-r\sin\theta,r\sin\theta]$. Let $$ x_\pm=\alpha_\theta(\pm r\sin\theta),\; x_0=\alpha_\theta(0), \; y_\pm=\beta_\theta(\pm r\sin\theta)\;\mbox{ and }\; y_0=\beta_\theta(0). $$ \begin{figure} \begin{center} \includegraphics[scale=.9]{ptolemaic_fig1.eps}\\ \caption{A construction in a Hadamard manifold.} \label{hfig} \end{center} \end{figure} By construction, $$ \g{\alpha_\theta'(0)}{\gamma'(-r\cos\theta)}=0 \qquad\mbox{and}\qquad \g{\beta_\theta'(0)}{\gamma'(r\cos\theta)}=0. $$ Since $(M,g)$ is complete, closed metric balls are compact, so there exists $\kappa>0$ such that the absolute value of the sectional curvature of $M$ is bounded above by $\kappa$ on a closed Riemannian metric ball of radius $2r+R$ about $\gamma(0)$. For all $\theta>0$ sufficiently small, $r\sin\theta$ is less than the injectivity radius at $\gamma(t)$ for all $-r\leq t\leq r$. Take such a $\theta$. Then there are unique minimal geodesics joining $x_0$, $p$, and $x_\pm$ and joining $y_0$, $q$, and $y_\pm$ to form four geodesic triangles. Applying the Toponogov Comparison Theorem (Theorem 2.2 in \cite{CE}), there are comparison triangles $\triangle(\widetilde x_0,\widetilde p,\widetilde x_\pm)$ and $\triangle(\widetilde y_0,\widetilde q,\widetilde y_\pm)$ in the hyperbolic plane with constant curvature $-\kappa$ whose edges have the same lengths as those of the corresponding triangles in $(M,g)$ and whose corresponding angles are no larger. Specifically, \begin{eqnarray*} \angle(\widetilde p,\widetilde x_0,\widetilde x_\pm) \leq\angle(p,x_0,x_\pm)=\pi/2, & & \angle(\widetilde x_0,\widetilde p,\widetilde x_\pm) \leq\angle(x_0,p,x_\pm), \\ \angle(\widetilde q,\widetilde y_0,\widetilde y_\pm) \leq\angle(q,y_0,y_\pm)=\pi/2,&\mbox{and}& \angle(\widetilde y_0,\widetilde q,\widetilde y_\pm) \leq\angle(y_0,q,y_\pm). \end{eqnarray*} This, the segment $\gamma|_{[-r\cos\theta,r]}$, and the law of cosines in simply connected space forms give (\ref{RII}). Applying the first and second variation formulas for length to variations by geodesics along $\gamma|_{[-r\cos\theta,r\cos\theta]}$ gives (\ref{sv1}); applying them along $\alpha_\theta|_{[-r\sin\theta,0]}$, $\alpha_\theta|_{[0,r\sin\theta]}$, $\beta_\theta|_{[-r\sin\theta,0]}$, and $\beta_\theta|_{[0,r\sin\theta]}$ gives (\ref{sv2}), where $\sigma, \tau>0$ are independent of $\theta\in(0,\theta_0)$ for some $\theta_0>0$. \begin{eqnarray} 2r-\sigma\theta \;\leq & d_g(x_\pm,q),\, d_g(y_\pm,p) & \leq\; 2r+\sigma\theta \label{RII} \\ 2r\cos\theta-\tau\theta^2 \;\leq & d_g(x_\pm,y_\pm), d_g(x_\pm,y_\mp) & \leq\; 2r\cos\theta+\tau\theta^2 \label{sv1} \\ r\sin\theta-\tau\theta^2 \;\leq & d_g(x_\pm,p),\, d_g(y_\pm,q) & \leq\; r\sin\theta+\tau\theta^2 \label{sv2} \end{eqnarray} Also, by construction, \begin{eqnarray} d_g(x_-,x_+)=d_g(y_-,y_+)=2r\sin\theta. \label{fc} \end{eqnarray} We claim that for $\theta>0$ sufficiently small, the suprema over $\partial U$ in the definition (\ref{ld}) of $\lambda(x_\pm,y_\pm)$, $\lambda(x_\pm,y_\mp)$, $\lambda(x_-,x_+)$, and $\lambda(y_-,y_+)$ are realized at $p$ or at $q$.
This together with (\ref{sv1}), (\ref{sv2}), (\ref{RII}), and (\ref{fc}) implies that $$ \lim_{\theta\rightarrow 0^+}\mbox{$\frac {4\max\{(1+\lambda(x_-,y_-))(1+\lambda(x_+,y_+)),\: (1+\lambda(x_-,y_+))(1+\lambda(x_+,y_-))\}} {(1+\lambda(x_-,x_+))(1+\lambda(y_-,y_+))} =1$} $$ from which it follows that $\rho_U$ is Gromov hyperbolic with parameter $\log 2$ and no smaller. Now to prove the claim. Let $$ \eta=\max\{d_g(x_\pm,p),d_g(y_\pm,q)\} $$ and note that, by (\ref{sv2}), $\eta<\min\{r/2,R/3\}$ for all $\theta>0$ sufficiently small. Then for $w\in\partial U\setminus\{p,q\}$, \begin{eqnarray*} d_g(y_-,w) &\geq& d_g(q,w)-d_g(y_-,q)\;\geq\; R-\eta \;>\;\frac 23 R \\ d_g(x_-,w) &\geq& d_g(p,w)-d_g(x_-,p)\;\geq\; d_g(q,p)-\eta = 2r-\eta > 3\eta. \end{eqnarray*} Therefore, \begin{eqnarray*} d_g(x_-,p)\,d_g(y_-,p) &\leq& \eta\left(d_g(y_-,q)+d_g(q,p)\right) \;\leq\; \eta\left(\eta+d_g(p,w)\right) \\ &\leq& \eta\left(\eta+d_g(p,x_-)+d_g(x_-,w)\right) \;\leq\; \eta\left(2\eta+d_g(x_-,w)\right) \\ &<& \frac{5R}{9}\,d_g(x_-,w) \;<\;\frac 56 \, d_g(y_-,w)\,d_g(x_-,w). \end{eqnarray*} The inequalities \begin{eqnarray*} d_g(x_-,p)\,d_g(y_+,p) &<& \frac 56\,d_g(x_-,w)\,d_g(y_+,w), \\ d_g(x_+,p)\,d_g(y_-,p) &<& \frac 56\,d_g(x_+,w)\,d_g(y_-,w), \\ \mbox{and}\quad d_g(x_+,p)\,d_g(y_+,p) &<& \frac 56\,d_g(x_+,w)\,d_g(y_+,w), \end{eqnarray*} are proved similarly. From this we obtain \begin{eqnarray*} d_g(x_-,q)\,d_g(y_-,q) &<& d_g(x_-,w)\,d_g(y_-,w), \\ d_g(x_-,q)\,d_g(y_+,q) &<& d_g(x_-,w)\,d_g(y_+,w), \\ d_g(x_+,q)\,d_g(y_-,q) &<& d_g(x_+,w)\,d_g(y_-,w), \\ \mbox{and}\quad d_g(x_+,q)\,d_g(y_+,q) &<& d_g(x_+,w)\,d_g(y_+,w), \end{eqnarray*} using (\ref{sv2}) and (\ref{RII}), for all $w\in\partial U\setminus\{p,q\}$ when $\theta>0$ is sufficiently small. For the other cases, \begin{eqnarray*} d_g(x_+,w) &\geq& d_g(p,w)-d_g(x_+,p)\;\geq\; 2r-\eta \;>\; 3\eta, \\ \mbox{and}\quad d_g(x_-,w) &\geq& d_g(p,w)-d_g(x_-,p)\;\geq\; 2r-\eta \;>\; 3\eta. \end{eqnarray*} Therefore, $$ d_g(x_-,p)\,d_g(x_+,p) \;\leq\; \eta^2 < \frac19\,d_g(x_-,w)\,d_g(x_+,w). $$ The inequality $$ d_g(y_-,q)\,d_g(y_+,q) \;\leq\; \eta^2 < d_g(y_-,w)\,d_g(y_+,w) $$ is obtained in a similar way, using $d_g(y_\pm,w)\geq R-\eta>2\eta$. Therefore, for $z_1,z_2\in\{x_\pm,y_\pm\}$ distinct, we have that $$ \max\left\{\frac{d_g(z_1,z_2)}{d_g(z_1,p)\,d_g(z_2,p)}, \frac{d_g(z_1,z_2)}{d_g(z_1,q)\,d_g(z_2,q)}\right\} \geq \frac{d_g(z_1,z_2)}{d_g(z_1,w)\,d_g(z_2,w)} $$ for all $w\in\partial U$, which was the claim. \hfill\hfill\fbox{\rule[-1mm]{0mm}{1mm}\,}
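\bigskip\noindent Perhaps the simplest illustration of \thmref{hadamard} is the following. Take $M=\mathbb R^n$ with the Euclidean metric for any $n\geq 2$; this is a Hadamard manifold and hence Ptolemaic. Let $U=\mathbb R^n\setminus\{p,q\}$ for distinct points $p$ and $q$, so that $\partial U=\{p,q\}$. The hypotheses of \thmref{hadamard} are then satisfied for any $R>0$, since $\partial U\setminus\{p,q\}$ is empty and $d(p,q)>0$, and so $(U,\rho_U)$ is Gromov hyperbolic with parameter $\log 2$ and no lower; by \thmref{maintheorem} and Theorem 4.2 in \cite{NS} it is also strongly hyperbolic with parameter $1$ and no higher.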
{ "timestamp": "2020-07-14T02:18:41", "yymm": "2004", "arxiv_id": "2004.04728", "language": "en", "url": "https://arxiv.org/abs/2004.04728", "abstract": "It is shown that a construction of Z. Zhang and Y. Xiao on open subsets of Ptolemaic spaces yields, when the subset has boundary containing at least two points, metrics that are Gromov hyperbolic with parameter $\\log 2$ and strongly hyperbolic with parameter $1$ with no further conditions on the open set. A class of examples is constructed on Hadamard manifolds showing these estimates of the parameters are sharp.", "subjects": "Metric Geometry (math.MG); Differential Geometry (math.DG)", "title": "Hyperbolic metrics on open subsests of Ptolemaic spaces with sharp parameter bounds", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9697854103128328, "lm_q2_score": 0.7310585786300049, "lm_q1q2_score": 0.7089699436394157 }